Laboratory experiments in innovation research: a methodological overview and a review of the current literature

julia.brueggemann@wiwi.uni-goettingen.de
Faculty of Economic Sciences, Chair of Economic Policy and SME Research, University of Göttingen, Platz der Göttinger Sieben 3, 37073 Göttingen, Germany

Abstract
Innovation research has developed a broad set of methodological approaches in recent decades. In this paper, we propose laboratory experiments as a fruitful methodological addition to the existing methods in innovation research. Therefore, we provide an overview of the existing methods, discuss the advantages and limitations of laboratory experiments, and review experimental studies dealing with different fields of innovation policy, namely intellectual property rights, financial instruments, payment schemes, and R&D competition. These studies show that laboratory experiments can fruitfully complement the established methods in innovation research and provide novel empirical evidence by creating and analyzing counterfactual situations.

Keywords: Innovation research, Laboratory experiments, Methodology
JEL-classification: C90, L50, O38

Introduction
Fostering research and innovativeness to support economic growth and increase competitiveness has become a central paradigm for policy makers worldwide in recent decades. The European Commission has recently reaffirmed this goal by committing to spend up to 3 % of the European Union's GDP to support private innovation activity until 2020. By means of this and other policy instruments, the EU thus aims to become an "innovation union" (COM(2014) 339). This paradigmatic focus has been adopted by the scientific community, which similarly discusses the topics of innovation and industrial policy broadly, trying to obtain insights and provide advice to policy makers concerning the design of policy instruments that optimally foster innovation activity (Mazzucato et al. 2015).
Economic innovation research traditionally argues for government intervention in the case of market failure, which is characterized by the imperfect allocation of resources, for example, due to imperfect competition, information failures, negative externalities, public goods, and coordination failures (Bator 1958). Given the political commitment to foster innovation activity, government interventions can provide remedies to market failures. For this purpose, several distinct methods of supporting private economic subjects in their innovation activities have been developed. Firstly, regulatory instruments such as rules, norms, and standards have been introduced, for example, patents and copyright law. These regulations are compulsory for all economic actors and thus shape the overall market conditions for innovative products and processes. Secondly, financial instruments have been introduced to promote innovative activity, with examples including subsidies, cash grants, and reduced-interest loans, as well as disincentives like tariffs, taxes, and charges. Thirdly, there are "soft" instruments that include normative incentives such as moral appeals to economic actors and voluntary commitments like technical standards or public-private partnerships (Borrás and Edquist 2013; Vedung 1998). To analyze and evaluate the effects and optimal design of these instruments, economic innovation research has established a large number of empirical research methods.

© 2016 The Author(s). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Brüggemann and Bizer, Journal of Innovation and Entrepreneurship (2016) 5:24
Along with the overall expansion and professionalization of experimental economics, behavioral evidence collected in laboratory experiments has become a vital complement to economic innovation research in recent years. Following Sørensen et al. (2010) and Chetty (2015), we suggest that lab experiments constitute a promising addition to the methodological toolkit in innovation research, thus advancing novel insights and providing predictions and policy implications by incorporating behavioral factors. We thus argue that laboratory experiments should be used if they yield additional evidence unattainable by other methods in a particular field of study. This resonates with the arguments by Falk and Heckman (2009), Chetty (2015), Madrian (2014), and Weimann (2015), who propose a pragmatic approach concerning the use of evidence derived from experimental methods, arguing that all empirical methods should be viewed as complementary (Falk and Heckman 2009). In this paper, we aim to contribute to the growing field of experimental innovation research, firstly by outlining the advantages and limitations of different methodological approaches in innovation research and, more specifically, laboratory experiments. Secondly, since former papers have not attempted to summarize and structure the existing experimental literature, we provide a literature review of the existing experimental approaches to the field of innovation policy, covering the most important studies from four sub-fields in which lab experiments have been conducted to date. We conclude by advocating the further use of laboratory experiments in innovation research. This paper is structured as follows: in chapter two, we outline the range of methods in economic innovation research, before discussing the scope of the experimental method in detail in chapter three.
Subsequently, we present a selection of laboratory experiments in the field of innovation policy, namely intellectual property rights, financial instruments, payment schemes, and R&D competition. A conclusion is finally provided in chapter four.

Methodological approaches in innovation research
A large number of research methods have been developed to analyze which policy instruments might best foster innovative activity. Weimann (2015, pp. 247–248) categorizes the different methods of generating insight by their features regarding their ability to identify causal relations, their generalizability to other contexts (external validity), as well as their broad applicability; particularly, the trade-off between causality and external validity is emphasized. Thus, Weimann distinguishes between (1) neoclassical models pointing out causal relationships, (2) "traditional" empirical research primarily showing correlations, (3) natural experiments attempting to substantiate causal relationships, (4) randomized field experiments that optimally offset the trade-off between causality and external validity, and (5) laboratory experiments providing strong causality, yet lacking external validity. Figure 1 provides an overview of these methodological approaches and their features in a Venn diagram. The figure shows that none of the existing methods is able to fulfill all three features identified by Weimann (2015) but can only meet one or two criteria.

(1) Neoclassical models such as game-theoretical or general equilibrium models have the advantages of enabling the derivation of causal relations and being easily applicable, yet they often lack external validity. Empirical investigations in innovation economics most commonly use the methods of (2) "traditional" empirical economic research, for instance, official patent statistics or micro firm-level data from surveys.
For this, OLS estimations are considered appropriate to analyze and quantify observable variables of innovation processes; however, for dynamic effects, these methods often lead to problems of causality, endogeneity, and selectivity. A further shortcoming of using this form of data is that innovation surveys necessarily rely on the entrepreneurs' willingness to voluntarily disclose information about their firm, which potentially biases the data. Furthermore, the extent to which government funding is actually used for research by the firms often remains unclear, and the public funding decisions often lead to a selectivity bias, thus making public funding an endogenous variable, which establishes further dependencies between the respective variables (Busom 2000). Moreover, patents and patent pools are often used as an approximation for innovation activity to estimate the firms' innovation output. This prompts a number of issues, for example, because small and medium enterprises use other forms of protecting their innovations and patent less than large firms, due to potentially expensive patent litigations and patent theft (Thomä and Bizer 2013). Nevertheless, this methodological approach to innovation research has strongly improved its data availability, methods, and research designs in the past 25 years, implementing methods such as difference-in-difference estimators, sample selection models, instrumental variables, and non-parametric matching methods (Angrist and Pischke 2010; Zúñiga-Vicente et al. 2014).

[Fig. 1 Methodological approaches and their features. Note: the figure is based on the classification by Weimann (2015)]

Overall, this approach entails a high level of external validity and applicability but often only a low level of causality.
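To make the identification logic of one of these techniques concrete, the following minimal difference-in-differences sketch compares hypothetical pre- and post-program R&D outcomes of subsidized and unsubsidized firms. All numbers and variable names are invented for illustration; an actual study would estimate this in a regression framework with controls and fixed effects.

```python
def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group's mean
    outcome minus the change in the control group's mean outcome, so that
    a time trend common to both groups is differenced out."""
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre))

# Hypothetical R&D spending (million EUR) before/after a subsidy program.
subsidized_before   = [2.0, 3.0, 2.5, 3.5]  # mean 2.75
subsidized_after    = [3.0, 4.0, 3.5, 4.5]  # mean 3.75 -> grew by 1.0
unsubsidized_before = [2.0, 2.5, 3.0, 3.5]  # mean 2.75
unsubsidized_after  = [2.4, 2.9, 3.4, 3.9]  # mean 3.15 -> grew by 0.4

effect = did_estimate(subsidized_before, subsidized_after,
                      unsubsidized_before, unsubsidized_after)
print(round(effect, 2))  # -> 0.6: the common growth of 0.4 is netted out
```

The estimate is only causal under the parallel-trends assumption, which is precisely the kind of untestable identifying assumption that the endogeneity and selectivity problems discussed above call into question.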
Another empirical means of evaluating policy instruments is (3) natural experiments, which feature a high level of external validity. Furthermore, due to improved methodological approaches, causal relations have been substantiated in recent years. However, the applicability is often low, since it is difficult to find appropriate control groups that could enable a clear comparison (Weimann 2015).

It has been argued that the issues involved with using the "traditional" methods of empirical economics can best be solved by conducting (4) randomized field experiments, in which real-life incidents are treated similarly to experiments. They are considered the "gold standard" for evaluating new policy instruments as they enable identifying causality rather than mere correlations (Boockmann et al. 2014; Falck et al. 2013). As an example, Chatterji et al. (2013) suggest that the distribution of building sites in new industrial areas could be randomized, which would lead to better results in subsequent impact analyses of cluster policies. While optimally combining external validity and causality, randomized field experiments suffer from a lack of applicability as their adequate design is time-consuming, expensive, and often highly impractical; consequently, other methods are regularly preferred (Angrist and Pischke 2010).

(5) Laboratory experiments can be considered an alternative to overly costly and impractical field experimentation, combining a high level of causality with a high level of applicability. Despite the lower level of external validity, laboratory studies can be a valuable substitute for randomized field experiments and provide insightful new angles on research topics inaccessible through "traditional" empirical methods.
Since each method has its own strengths and weaknesses, the method used for a particular research question should be chosen depending on the object of research, the availability of data, and the possibility of conducting field experimentation. Overall, a mix of complementary empirical methods might thus be the most promising approach (Weimann 2015). In the following, we focus on laboratory experiments, which are the most recent addition to the methodological toolbox of innovation research, including discussing their limitations and advantages.

Limitations and advantages of experimental methods
Although lab experiments can be transferred and used to derive relevant policy implications, there are systematic limitations to this approach. Critics of lab experiments such as Levitt and List (2007, 2008) emphasize the restrictions, while Falk and Heckman (2009) provide refutations.

Observation
Participants are observed and act in an artificial environment, which might influence their behavior due to expectancy effects and the experimenter demand bias. Barmettler et al. (2012) contradict this argument and show experimentally that complete anonymity between the experimenter and participants does not change the latter's behavior. Furthermore, it is argued that close social observation is not limited to the lab but rather is a feature common to all economic interactions.

Stakes
It can be argued that the stakes in experiments are too low to induce realistic behavior in participants. Experiments with varying stake sizes yield mixed results depending on the experimental situation (Camerer and Hogarth 1999). However, Falk and Heckman (2009) ask how often people take choices involving sums equal to their monthly incomes and how representative such high-stake experiments would actually be.
Consequently, they suggest that the average level of stakes in laboratory experiments corresponds to the most common choices that individuals take.

Sample size
The sample sizes of lab experiments are criticized as being too small, although proponents counter that typical sample sizes are adequate for this method and thus yield valid assertions.

Participants
Student participant pools are considered unrepresentative of the overall population. While this might not be a problem when testing theories, in the case of innovation experiments, other populations such as researchers or entrepreneurs might be more appropriate experimental participants, depending on the research question.

Self-selection
There is a self-selection bias since students with particular traits sign up for participant pools. Nevertheless, student pools ensure that the selection can be controlled and provide information on participants' demographics, personal backgrounds, and preferences. Thus, the disadvantages connected to selection biases, which are potentially prevalent in field experiments as well as other empirical research methods, can be somewhat controlled.

Learning
Participants often cannot learn in experiments and adjust their behavior accordingly, yet this is also a prevalent factor in many economic interactions outside of the lab, as real-world interactions can often be considered one-shot games with no chance of learning in repeated decisions. Furthermore, a large number of repeated games have been considered in experimental settings to determine learning effects, for example, Cooper et al. (1999) with regard to incentive systems.

External validity
Lab experiments are considered as lacking external validity, meaning that they produce unrealistic data without further relevance for understanding the "real world": a criticism that holds true both for lab experiments and theoretical models (Weimann 2015, pp. 240–241).
The challenge in designing experiments is to establish the best way of isolating the causal effect of interest and thus providing insights about universally prevalent effects that transfer to other economic situations outside of the lab. In a recent study, Herbst and Mas (2015) show how well-designed experiments can ensure that individual behavior outside the lab is captured adequately, thereby gaining a higher external validity than traditionally assumed for laboratory studies. Further studies comparing laboratory and field evidence will have to show whether this might change the general perception of the external validity of lab experiments (Charness and Fehr 2015). However, in some research contexts, it might not be possible to substantially increase the external validity. In such cases, lab experiments can serve as a starting point to isolate clear effects of specific innovation instruments. Subsequently, these effects have to be investigated with other methods involving a higher external validity, e.g., field experiments in a firm. These methods then have to show whether the initial results from the laboratory hold in contexts outside the lab.

Generalizability
The lack of generalizability of behavioral patterns resulting from lab experiments that refrain from testing a theoretical model is criticized. While the arguments mentioned above reduce this problem, it remains a considerable drawback to some experimental evidence. Nevertheless, every empirical method faces this issue due to the unavoidable dependency of data on a specific context.

Overall, lab experiments entail several distinct advantages as they provide researchers with the means of deriving causal relations from controlled manipulations of specific conditions, while controlling all surrounding factors.
This ensures precise measurements and makes it possible to preclude confounding effects such as multiple incentives or repeated interactions. The experimenter thus retains almost complete control of the decision environment, namely the material payoffs, the information given to participants, the order of decisions, and the duration and iterations of the experiment. Participants are assigned randomly, which reduces the selection bias. Moreover, they are incentivized monetarily for their decisions, whereby it can be assumed that decisions are taken seriously: "In this sense, behavior in the laboratory is reliable and real: Participants in the lab are human beings who perceive their behavior as relevant, experience real emotions, and take decisions with real economic consequences" (Falk and Heckman 2009, p. 536). The results are replicable, and they allow investigating specific institutions at a relatively low cost. This can be particularly useful when considering exogenous changes like policy interventions and new regulations, where counterfactual situations can be created and their effects tested far more easily in lab rather than field experiments. With the possibility of altering only one factor, e.g., the patent regime, lab experiments allow analyzing the relevance of a particular factor without other factors confounding the observed behavior. Furthermore, lab experiments enable the researcher to examine different innovation types and effects of incentives and to split up the innovation process to observe individual behavior at particular points of the process (Falk and Heckman 2009; Smith 1994, 2003). In the following, we review examples of different fields of innovation research where lab experiments have been put forth to provide novel insights.

Review
By analyzing the effects of specific policy instruments via economic experiments, several of the advantages of lab experiments described above can be used fruitfully.
In particular, it becomes possible to compare counterfactual data of decision situations with and without a particular instrument. Therefore, it is possible to analyze subjects' specific reactions to changes in the framework conditions, which is almost impossible when using "real-world" data. There are additional merits to the controlled lab environment, in which only one factor is changed; for instance, innovation behavior and its development can be observed and analyzed over several periods. Of course, the innovation process is necessarily stylized in lab experiments; nevertheless, a number of promising ideas concerning how to transfer the innovation process into the laboratory have been provided in recent years. Table 1 comprises the experiments reviewed in the following chapters and summarizes in brief the particular task subjects had to solve.

Intellectual property rights
For instance, there are several experiments implementing (real effort) search tasks to simulate the innovation process. Buchanan and Wilson (2014) design an experimental environment with subjects producing, trading, and consuming rivalrous and non-rivalrous goods. Rivalrous goods are produced out of two complements and can be sold. By contrast, producing non-rivalrous goods is possible by participating in a search task in order to find the "favorite good" of the specific period, which is more valuable than the rivalrous good and, in opposition to rivalrous goods, can be sold several times. The authors implement one treatment with intellectual property, in which selling and transferring the non-rivalrous good is restricted to the respective owner, as well as one treatment without intellectual property, where non-rivalrous goods can be created several times.
The authors find no differences in the value of produced non-rivalrous goods and the average money earned regardless of intellectual property protection.

Table 1 Overview of reviewed experiments (field of research, study, type of task, and subjects' task in the experiment)

Intellectual property rights
- Buchanan and Wilson 2014 (real effort search task): producing and trading rivalrous and non-rivalrous goods composed of colors
- Meloso et al. 2009 (real effort search task): solving the knapsack problem and trading the potential components
- Buccafusco and Sprigman 2010 (creative task): creating and trading poems
- Crosetto 2010 (creative task): creating and extending words and deciding whether to use IP protection
- Brüggemann et al. 2015 (creative task): creating and extending words, setting license fees

Financial instruments
- Brüggemann and Meub 2015 (creative task): creating and extending words, setting license fees
- Brüggemann 2015 (creative task): creating and extending words, setting license fees

Payment schemes
- Eckartz et al. 2012 (real effort search task): combining as many words as possible from 12 given letters
- Ederer and Manso 2012 (real effort search task): managing a virtual lemonade stand
- Erat and Gneezy 2015 (creative task): solving rebus puzzles
- Bradler 2015 (creative task): imagining unusual uses for items

R&D competition
- Isaac and Reynolds 1988 (investment task): taking investment choices under competition
- Isaac and Reynolds 1992 (investment task): taking investment choices including the game bingo
- Sbriglia and Hey 1994 (search task): finding a letter combination by buying different letter trails under competition
- Zizzo 2002 (investment task): competing for a prize over several periods
- Silipo 2005 (investment task): accumulating "knowledge units" under risk and competition
- Cantner et al. 2009 (search task): searching for product specifications of a car including investment and competition
- Aghion et al. 2014 (investment task): competing for finding an innovation including investment and risk

Overall, Buchanan and Wilson suggest that intellectual property protection does not spur innovativeness. However, the protection only serves as an additional incentive, whereas the existence of entrepreneurial individuals is more important. The respective entrepreneurs subsequently profit substantially from the protection, as well as generating wealth without intellectual property protection.

Meloso et al. (2009) use another kind of search task, namely the knapsack problem, to simulate intellectual discovery in a patent and a non-patent market system, in which components of potential discoveries are traded. The goal of the knapsack problem is to select components so as to maximize their combined value without exceeding a given weight limit. In sum, the number of subjects who were able to find the correct solution to the knapsack task was higher in the market system, which has the advantages that no scope of intellectual property rights has to be defined beforehand and that it entails no monopoly rights. Therefore, the authors state that markets do not necessarily fail, as theoretical contributions suggest, for non-excludable and non-rival goods.

Buccafusco and Sprigman (2010) let subjects write poems and implement a market for the poems. Depending on the initial distribution of intellectual property rights, they find different preferences of the innovators, owners, and buyers. There is a robust endowment effect that manifests itself in the high offers of innovators and a significantly lower willingness to pay among the buyers. This experiment has the advantage of simulating innovation activity most closely on an individual level, yet it is not possible to further evaluate the particular poems and determine a ranking for the quality of the innovations.
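The knapsack problem used by Meloso et al. (2009) can be made concrete with a short brute-force sketch. The (value, weight) pairs and the capacity below are invented for illustration and are far smaller than the instances subjects faced in the experiment.

```python
from itertools import combinations

def best_knapsack(items, capacity):
    """Brute-force 0/1 knapsack: pick the subset of (value, weight) items
    with maximal total value whose total weight stays within capacity.
    Exponential in len(items), so only suitable for small instances."""
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Hypothetical tradable components as (value, weight) pairs.
items = [(60, 10), (100, 20), (120, 30)]
print(best_knapsack(items, 50))  # -> (220, ((100, 20), (120, 30)))
```

The difficulty for subjects lies exactly here: the number of candidate subsets grows exponentially with the number of components, so finding the optimal combination is a genuinely hard search task rather than a routine calculation.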
Including further features of the innovation process, namely creativity, ownership, and investment choices, Crosetto (2010) developed a task to simulate innovative activity based upon the board game Scrabble. He uses his setting to analyze individual behavior when subjects have to create and extend words and are able to select between the intellectual property schemes of open source and fixed license fees. He finds that subjects are more likely to provide their innovations open source when the level of license fees is high.

Brüggemann et al. (2015) extend this experimental setting to test for the effect of different regulatory incentive schemes on individual innovativeness. They compare a treatment with the possibility to choose the amount of license fees to a system without license fees and further implement the ability to communicate. They find that communication does not change the innovative behavior and that welfare is higher in the no-license-fee system than in the license-fee system. However, when given the possibility to license innovations, subjects display a high demand for being rewarded monetarily rather than providing innovations to other participants free of charge.

Financial instruments
There is a broad literature about the difficulties in analyzing the effect of subsidies and other public programs to foster innovativeness due to endogeneity and selection bias problems. Although the methods used have advanced substantially in past years, lab experiments can contribute to this sub-field of innovation research (Blundell and Costa Dias 2009). In some cases, experiments might be the only way to provide insights about new, and potentially costly, policy instruments before they are implemented in the "real world." This approach might thus be a particularly promising methodological choice when new institutional framework conditions aimed at fostering innovative activity are tested.
Nevertheless, there is only a limited number of studies dealing with financial instruments to date.

Using the Scrabble-based word creation task introduced by Crosetto (2010), Brüggemann and Meub (2015) analyze individual behavior in two types of innovation contests by awarding subjects a bonus for the best innovation in one treatment and for the largest innovation effort in another, comparing individual performance to a benchmark treatment without a prize. They find that the willingness to cooperate decreases when innovation contests are introduced, while overall welfare remains constant across treatments. Furthermore, using the same word task, Brüggemann (2015) analyzes the effects of two distinct forms of subsidies on innovativeness: first, by supplying resources designated for innovative activities and, second, by providing additional financial resources not restricted to use in innovative activities. She finds that both forms of subsidy lead to a crowding-out of private investment and negative welfare effects when the costs of the subsidy are included. Furthermore, subsidies fail to induce a positive effect on individual innovation behavior.

Payment schemes
Another class of experiments focuses on the creative element of innovation and the effects of different payment schemes. Eckartz et al. (2012) test the effects of different payment schemes on creativity using a word-based real effort task, where subjects have to combine as many words as possible out of 12 prescribed letters within a certain time. They examine a flat fee, a linear payment, and a tournament and find no substantial differences between the three incentive schemes.
Similarly analyzing different payment schemes, Ederer and Manso (2012) compare the innovative activity when offering a fixed wage, a wage based upon pay-for-performance, and a split wage, which is fixed at the beginning and based upon performance later on. In a search task, subjects have to manage a lemonade stand, whereby they have to decide upon several variables such as the location, content, and price to find the most profitable solution. The authors find that the split wage with tolerance for early failure and compensation for long-term success leads to more innovative effort and higher overall welfare.

Erat and Gneezy (2015) compare three payment schemes, namely a pay-for-performance scheme, a competitive scheme, and a benchmark without incentives. Unlike Ederer and Manso (2012), they use rebus puzzles as a creative task and find that competition reduces creativity and a pay-for-performance scheme does not change creativity in comparison to a situation without incentives. Comparing the two financial incentives, creativity is higher in the pay-for-performance scheme.

Bradler (2015) uses the "unusual uses task", an established creativity test, to compare accomplishment, self-reporting, and risk behavior. In the task, subjects have to imagine as many uses for a particular object as possible in a certain time, choosing their preferred payment scheme prior to the task, i.e., a tournament or a fixed payment. She finds that the different payment schemes appeal to different types of subjects: risk-loving subjects with a high self-assessment tend to choose the tournament; however, in contrast to previous studies, creative subjects do not choose the tournament more often than the fixed payment.

R&D competition
Finally, in the experiments on R&D competition, the authors focus on different investment tasks to analyze individual behavior in competitive and innovative environments.
Experiments on patent races and R&D competition were first established by Isaac and Reynolds (1988) to simulate a one-stage stochastic invention model and subsequently a two-stage model (Isaac and Reynolds 1992). This class of experiments aims to test the findings of models with empirical evidence, whereby, in contrast to the experiments described before, they do not analyze specific policy instruments. Sbriglia and Hey (1994) develop a costly combinatorial task representing research competition for a patentable innovation to analyze three behavioral problems of patent races, namely how subjects select their search procedures, which investment strategies they use, and how information is processed. The authors identify different types of innovators: the "winners", who search successfully, do not act randomly, and invest more in comparison to the "losers", who are unable to establish a strategic search procedure. Furthermore, stronger competition accelerates the rate of investment, and with a higher number of periods, successful players more commonly adapt their searching behavior.

Zizzo (2002) tests the multi-stage patent race model by Harris and Vickers (1987) with an investment task where subjects compete for a monetary prize over several periods. The results disconfirm the theoretical assertions, as leaders of a patent race do not invest more than their followers. Furthermore, no virtual monopoly emerges, and investments do not change as predicted by the model. Silipo (2005) analyzes the cooperation and break-up behavior in joint ventures in a dynamic patent race model theoretically and experimentally.
In the model, he finds that the starting positions of the competitors are crucial for being cooperative or not: if the innovators start at different points of the research process, the probability of joint ventures decreases, while within joint ventures, the pace of the process slows down. The results of the experiment correspond to the model, aside from some races in which subjects perform worse than anticipated. Cantner et al. (2009) test a patent race model limited to a duopoly market without price competition by implementing a multi-dimensional search task with uncertainty. They find that different strategies solve the task, namely risky innovative investment and risk-free imitation. On average, subjects choose the risky innovative investment based upon the risk of an investment failure, their anticipated revenue, and their relative success in the experiment. Furthermore, the gap in subjects' earnings has a positive impact on their investment in subsequent periods. Finally, Aghion et al. (2014) analyze the effects of competition on step-by-step innovation by means of a risky investment task with different levels of competition and time horizons. The results show an increase in investment for neck-and-neck firms, yet a decrease in investment for firms lagging behind.

Conclusions
In this paper, we present the limitations and advantages of using laboratory experiments for innovation research and review 18 examples from four specific fields in which lab experiments have already been conducted. As the experimental method yields promising results in testing intellectual property rights, financial instruments, payment schemes, and R&D competition, we suggest that laboratory experiments can serve as a useful additional tool for innovation economists and represent a source of promising new insights for innovation research.
In particular, we argue that lab experiments should be used to target specific policy questions and thus provide measures of the effectiveness of specific instruments prior to their introduction. This approach has—in marked contrast to all other methods—the advantages of yielding evidence from counterfactual situations and of strong control over the setting, for example, when testing external incentives for innovative activity or changing parameters of the institutional framework. Therefore, we follow Chetty (2015) and Weimann (2015), who suggest a pragmatic perspective on behavioral economics, thus adding experimental evidence to the existing methods whenever its particular advantages outweigh its limitations. Within this pragmatic perspective on laboratory experiments, there are three ways in which this field of research can contribute to public policy: by presenting new policy instruments, developing better predictions regarding the effects of existing policies, and more accurately measuring welfare implications. Besides the policy implications, this strand of literature can be used to derive managerial implications. Particularly, studies on external incentives for fostering innovative activities are of relevance, since they give managers practical advice on how best to foster the innovative activities of their employees, e.g., through experiments analyzing optimal payment schemes for innovative activities. We hope that this overview encourages other researchers to use lab experiments in innovation research, which could be further developed in several domains: as the existing laboratory studies on financial instruments measure effectiveness, future studies might focus on measuring efficiency, which would represent promising progress in evaluating new means of public policy.
Furthermore, lab experiments might be helpful as a methodological starting point for developing new policy instruments. From a managerial perspective, future experimental innovation research might address a more comprehensive understanding of the innovation process itself. For example, experimental researchers might analyze innovative work in teams and thus decompose the innovation process into its components, which is effectively possible in a laboratory environment. Moreover, the role of external incentives in encouraging employees' innovativeness might be further emphasized.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
Both authors, JB and KB, developed and wrote the paper. Both authors read and approved the final manuscript.

Acknowledgements
Financial support from the German Federal Ministry of Education and Research via the Hans-Böckler-Stiftung is gratefully acknowledged. Further, we would like to thank Till Proeger for his very helpful comments.

Received: 28 January 2016 Accepted: 31 May 2016

References
Aghion, P., Bechtold, S., Cassar, L., & Herz, H. (2014). The causal effects of competition on innovation: experimental evidence (National Bureau of Economic Research Working Paper No. w19987).
Angrist, J. D., & Pischke, J.-S. (2010). The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3–30. doi:10.1257/jep.24.2.3.
Barmettler, F., Fehr, E., & Zehnder, C. (2012). Big experimenter is watching you!: anonymity and prosocial behavior in the laboratory. Games and Economic Behavior, 75(1), 17–34. doi:10.1016/j.geb.2011.09.003.
Bator, F. M. (1958). The anatomy of market failure. The Quarterly Journal of Economics, 72(3), 351–379. doi:10.2307/1882231.
Blundell, R., & Costa Dias, M. (2009). Alternative approaches to evaluation in empirical microeconomics. The Journal of Human Resources, 44(3), 565–640.
doi:10.3368/jhr.44.3.565.
Boockmann, B., Buch, C. M., & Schnitzer, M. (2014). Evidenzbasierte Wirtschaftspolitik in Deutschland: Defizite und Potentiale. Perspektiven der Wirtschaftspolitik, 15(4), 307–232. doi:10.1515/pwp-2014-0024.
Borrás, S., & Edquist, C. (2013). The choice of innovation policy instruments. Technological Forecasting and Social Change, 80(8), 1513–1522. doi:10.1016/j.techfore.2013.03.002.
Bradler, C. (2015). How creative are you?: an experimental study on self-selection in a competitive incentive scheme for creative performance (ZEW - Centre for European Economic Research Discussion Paper No. 15-021).
Brüggemann, J. (2015). The effectiveness of public subsidies for private innovations: an experimental approach (cege Discussion Paper No. 266).
Brüggemann, J., & Meub, L. (2015). Experimental evidence on the effects of innovation contests (cege Discussion Paper No. 251).
Brüggemann, J., Crosetto, P., Meub, L., & Bizer, K. (2015). Intellectual property rights hinder sequential innovation: experimental evidence (cege Discussion Paper No. 227).
Buccafusco, C., & Sprigman, C. (2010). Valuing intellectual property: an experiment. Cornell Law Review, 96(1), 1–46.
Buchanan, J. A., & Wilson, B. J. (2014). An experiment on protecting intellectual property. Experimental Economics, 17(4), 691–716. doi:10.1007/s10683-013-9390-8.
Busom, I. (2000). An empirical evaluation of the effects of R&D subsidies. Economics of Innovation and New Technology, 9(2), 111–148. doi:10.1080/10438590000000006.
Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1-3), 7–42. doi:10.1023/A:1007850605129.
Cantner, U., Güth, W., Nicklisch, A., & Weiland, T. (2009). Competition in product design: an experiment exploring innovation behavior.
Metroeconomica, 60(4), 724–752. doi:10.1111/j.1467-999X.2009.04057.x.
Charness, G., & Fehr, E. (2015). From the lab to the real world. Science, 350(6260), 512–513. doi:10.1126/science.aad4343.
Chatterji, A. K., Glaeser, E., & Kerr, W. (2013). Clusters of entrepreneurship and innovation (National Bureau of Economic Research Working Paper No. w19013).
Chetty, R. (2015). Behavioral economics and public policy: a pragmatic perspective. American Economic Review: Papers and Proceedings, 105(5), 1–33. doi:10.1257/aer.p20151108.
COM(2014) 339. Research and innovation as sources of renewed growth.
Cooper, D. J., Kagel, J. H., Lo, W., & Gu, Q. L. (1999). Gaming against managers in incentive systems: experimental results with Chinese students and Chinese managers. The American Economic Review, 89(4), 781–804. doi:10.1257/aer.89.4.781.
Crosetto, P. (2010). To patent or not to patent: A pilot experiment on incentives to copyright in a sequential innovation setting. In P. J. Ågerfalk, C. Boldyreff, J. González-Barahona, G. Madey, & J. Noll (Eds.), IFIP advances in information and communication technology: Vol. 319. Open source software. New horizons. 6th International IFIP WG 2.13 Conference on Open Source Systems (pp. 53–72). Berlin: Springer.
Eckartz, K., Kirchkamp, O., & Schunk, D. (2012). How do incentives affect creativity? (CESifo Working Paper No. 4049).
Ederer, F., & Manso, G. (2012). Is pay-for-performance detrimental to innovation? Management Science, 59(7), 1496–1513. doi:10.1287/mnsc.1120.1683.
Erat, S., & Gneezy, U. (2015). Incentives for creativity. Experimental Economics. doi:10.1007/s10683-015-9440-5 (first published online).
Falck, O., Wiederhold, S., & Wößmann, L. (2013). Innovationspolitik muss auf überzeugender Evidenz basieren. ifo Schnelldienst, 66(5), 14–19.
Falk, A., & Heckman, J. J. (2009). Lab experiments are a major source of knowledge in the social sciences. Science, 326(5952), 535–538. doi:10.1126/science.1168244.
Harris, C., & Vickers, J. (1987).
Racing with uncertainty. The Review of Economic Studies, 54(1), 1–21.
Herbst, D., & Mas, A. (2015). Peer effects on worker output in the laboratory generalize to the field. Science, 350(6260), 545–549. doi:10.1126/science.aaa7154.
Isaac, R. M., & Reynolds, S. S. (1988). Appropriability and market structure in a stochastic invention model. The Quarterly Journal of Economics, 103(4), 647–671. doi:10.2307/1886068.
Isaac, R. M., & Reynolds, S. S. (1992). Schumpeterian competition in experimental markets. Journal of Economic Behavior & Organization, 17(1), 59–100. doi:10.1016/0167-2681(92)90079-Q.
Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21(2), 153–174. doi:10.1257/jep.21.2.153.
Levitt, S. D., & List, J. A. (2008). Homo economicus evolves. Science, 319(5865), 909–910. doi:10.1126/science.1153911.
Madrian, B. C. (2014). Applying insights from behavioral economics to policy design. Annual Review of Economics, 6, 663–688. doi:10.1146/annurev-economics-080213-041033.
Mazzucato, M., Cimoli, M., Dosi, G., Stiglitz, J. E., Landesmann, M. A., Pianta, M., Walz, R., & Page, T. (2015). Which industrial policy does Europe need? Intereconomics, 50(3), 120–155. doi:10.1007/s10272-015-0535-1.
Meloso, D., Copic, J., & Bossaerts, P. (2009). Promoting intellectual discovery: patents versus markets. Science, 323(5919), 1335–1339. doi:10.1126/science.1158624.
Sbriglia, P., & Hey, J. D. (1994). Experiments in multi-stage R&D competition. Empirical Economics, 19(2), 291–316. doi:10.1007/BF01175876.
Silipo, D. B. (2005). The evolution of cooperation in patent races: theory and experimental evidence. Journal of Economics, 85(1), 1–38. doi:10.1007/s00712-005-0115-0.
Smith, V. L. (1994). Economics in the laboratory. Journal of Economic Perspectives, 8(1), 113–131. doi:10.1257/jep.8.1.113.
Smith, V. L. (2003). Constructivist and ecological rationality in economics.
The American Economic Review, 93(3), 465–508. doi:10.1257/000282803322156954.
Sørensen, F., Mattson, J., & Sundbo, J. (2010). Experimental methods in innovation research. Research Policy, 39(3), 313–323. doi:10.1016/j.respol.2010.01.006.
Thomä, J., & Bizer, K. (2013). To protect or not to protect?: modes of appropriability in the small enterprise sector. Research Policy, 42(1), 35–49. doi:10.1016/j.respol.2012.04.019.
Vedung, E. (1998). Policy instruments: Typologies and theories. In M.-L. Bemelmans-Videc, R. C. Rist, & E. Vedung (Eds.), Carrots, sticks and sermons. Policy instruments and their evaluation (pp. 21–58). New Brunswick: Transaction Publishers.
Weimann, J. (2015). Die Rolle von Verhaltensökonomik und experimenteller Forschung in Wirtschaftswissenschaft und Politikberatung. Perspektiven der Wirtschaftspolitik, 16(3), 231–252. doi:10.1515/pwp-2015-0017.
Zizzo, D. J. (2002). Racing with uncertainty: a patent race experiment. International Journal of Industrial Organization, 20(6), 877–902. doi:10.1016/S0167-7187(01)00087-X.
Zúñiga-Vicente, J. Á., Alonso-Borrego, C., Forcadell, F. J., & Galán, J. I. (2014). Assessing the effect of public subsidies on firm R&D investment: a survey. Journal of Economic Surveys, 28(1), 36–67. doi:10.1111/j.1467-6419.2012.00738.x.

Publisher: Springer Journals
Copyright © 2016 by The Author(s).
Subject: Business and Management; Entrepreneurship; Economic Geography; Economic Policy
eISSN: 2192-5372
DOI: 10.1186/s13731-016-0053-9

Economic innovation research traditionally argues for government intervention in the case of market failure, which is characterized by the imperfect allocation of resources, for example, due to imperfect competition, information failures, negative externalities, public goods, and coordination failures (Bator, 1958). Given the political commitment to foster innovation activity, government interventions can provide remedies to market failures. For this purpose, several distinct methods of supporting private economic subjects in their innovation activities have been developed. Firstly, regulatory instruments such as rules, norms, and standards have been introduced, for example, patents and copyright law. These regulations are compulsory for all economic actors and thus shape the overall market conditions for innovative products and processes. Secondly, financial instruments have been introduced to promote innovative activity, with examples including subsidies, cash grants, and reduced-interest loans, as well as disincentives like tariffs, taxes, and charges. Thirdly, there are "soft" instruments that include normative incentives such as moral appeals to economic actors and voluntary commitments like technical standards or public-private partnerships (Borrás and Edquist 2013; Vedung 1998). To analyze and evaluate the effects and optimal design of these instruments, economic innovation research has established a large number of empirical research methods.

© 2016 The Author(s). Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Along with the overall expansion and professionalization of experimental economics, behavioral evidence collected in laboratory experiments has become a vital complement to economic innovation research in recent years. Following Sørensen et al. (2010) and Chetty (2015), we suggest that lab experiments constitute a promising addition to the methodological toolkit of innovation research, thus advancing novel insights and providing predictions and policy implications by incorporating behavioral factors. We thus argue that laboratory experiments should be used if they yield additional evidence unattainable by other methods in a particular field of study. This resonates with the arguments of Falk and Heckman (2009), Chetty (2015), Madrian (2014), and Weimann (2015), who propose a pragmatic approach concerning the use of evidence derived from experimental methods, arguing that all empirical methods should be viewed as complementary (Falk and Heckman 2009). In this paper, we aim to contribute to the growing field of experimental innovation research, firstly by outlining the advantages and limitations of different methodological approaches in innovation research and, more specifically, laboratory experiments. Secondly, since former papers have not attempted to summarize and structure the existing experimental literature, we provide a literature review of existing experimental approaches to the field of innovation policy, covering the most important studies from four sub-fields in which lab experiments have been conducted to date. We conclude by emphasizing the further use of laboratory experiments in innovation research. This paper is structured as follows: in chapter two, we outline the range of methods in economic innovation research, before discussing the scope of the experimental method in detail in chapter three.
Subsequently, we present a selection of laboratory experiments in the field of innovation policy, namely intellectual property rights, financial instruments, payment schemes, and R&D competition. A conclusion is finally provided in chapter four.

Methodological approaches in innovation research
A large number of research methods have been developed to analyze which policy instruments might best foster innovative activity. Weimann (2015, pp. 247–248) categorizes the different methods of generating insight by their ability to identify causal relations, their generalizability to other contexts (external validity), and their broad applicability; in particular, the trade-off between causality and external validity is emphasized. Thus, Weimann distinguishes between (1) neoclassical models pointing out causal relationships, (2) "traditional" empirical research primarily showing correlations, (3) natural experiments attempting to substantiate causal relationships, (4) randomized field experiments that optimally offset the trade-off between causality and external validity, and (5) laboratory experiments providing strong causality, yet lacking external validity. Figure 1 provides an overview of these methodological approaches and their features in a Venn diagram. The figure shows that none of the existing methods is able to fulfill all three features identified by Weimann (2015); each can only meet one or two criteria. (1) Neoclassical models such as game-theoretical or general equilibrium models have the advantages of enabling the derivation of causal relations and being easily applicable, yet they often lack external validity. Empirical investigations in innovation economics most commonly use the methods of (2) "traditional" empirical economic research, for instance, official patent statistics or micro firm-level data from surveys.
For this, OLS estimations are considered appropriate to analyze and quantify observable variables of innovation processes; however, for dynamic effects, these methods often lead to problems of causality, endogeneity, and selectivity. A further shortcoming of using this form of data is that innovation surveys necessarily rely on the entrepreneurs' willingness to voluntarily disclose information about their firm, which potentially biases the data. Furthermore, the extent to which government funding is actually used for research by the firms often remains unclear, and public funding decisions often lead to a selectivity bias, thus making public funding an endogenous variable, which establishes further dependencies between the respective variables (Busom, 2000). Moreover, patents and patent pools are often used as an approximation of innovation activity to estimate firms' innovation output. This prompts a number of issues, for example, because small and medium enterprises use other forms of protecting their innovations and patent less than large firms, due to potentially expensive patent litigation and patent theft (Thomä and Bizer 2013). Nevertheless, this methodological approach to innovation research has strongly improved its data availability, methods, and research designs in the past 25 years, implementing methods such as difference-in-difference estimators, sample selection models, instrumental variables, and non-parametric matching methods (Angrist and Pischke 2010; Zúñiga-Vicente et al. 2014).

[Fig. 1 Methodological approaches and their features. Note: the figure is based on the classification by Weimann (2015)]

Overall, this approach entails a high level of external validity and applicability but often only a low level of causality.
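The endogeneity problem described above can be illustrated with a short simulation. The sketch below is purely illustrative and uses made-up numbers (firm counts, spending levels, and the subsidy effect are all assumptions, not taken from any cited study): subsidized firms differ in levels from unsubsidized ones, so a naive post-period comparison is biased, while a difference-in-differences estimator nets out both the selection gap and the common time trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm panel; all magnitudes are illustrative assumptions.
n = 500                           # firms per group
true_effect = 2.0                 # assumed causal effect of an R&D subsidy

pre_treated = rng.normal(10.0, 2.0, n)  # subsidized firms spend more even before funding
pre_control = rng.normal(8.0, 2.0, n)   # level gap mimics selection into funding
trend = 1.5                             # common shock affecting both groups over time

post_treated = pre_treated + trend + true_effect + rng.normal(0.0, 2.0, n)
post_control = pre_control + trend + rng.normal(0.0, 2.0, n)

# A naive post-period comparison absorbs the pre-existing level gap:
naive = post_treated.mean() - post_control.mean()

# Difference-in-differences removes the level gap and the common trend:
did = (post_treated.mean() - pre_treated.mean()) \
    - (post_control.mean() - pre_control.mean())

print(f"naive estimate: {naive:.2f}")  # inflated by selection into funding
print(f"DiD estimate:   {did:.2f}")    # close to the assumed effect
```

In applied work, the same estimator is usually obtained from a regression with group, period, and interaction terms, which additionally yields standard errors; the two-by-two difference of means above is the minimal version of the idea.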
Another empirical means of evaluating policy instruments is (3) natural experiments, which feature a high level of external validity. Furthermore, due to improved methodological approaches, causal relations have been substantiated in recent years. However, applicability is often low, since it is difficult to find appropriate control groups that would enable a clear comparison (Weimann 2015). It has been argued that the issues involved with using the "traditional" methods of empirical economics can best be solved by conducting (4) randomized field experiments, in which real-life incidents are treated similarly to experiments. They are considered the "gold standard" for evaluating new policy instruments as they enable identifying causality rather than mere correlations (Boockmann et al. 2014; Falck et al. 2013). As an example, Chatterji et al. (2013) suggest that the distribution of building sites in new industrial areas could be randomized, which would lead to better results in subsequent impact analyses of cluster policies. While optimally combining external validity and causality, randomized field experiments suffer from a lack of applicability, as their adequate design is time-consuming, expensive, and often highly impractical; consequently, other methods are regularly preferred (Angrist and Pischke 2010). (5) Laboratory experiments can be considered an alternative to overly costly and impractical field experimentation, combining a high level of causality with a high level of applicability. Despite the lower level of external validity, laboratory studies can be a valuable substitute for randomized field experiments and provide insightful new angles on research topics inaccessible through "traditional" empirical methods.
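Why random assignment identifies causality where self-selection does not can be sketched in a few lines of simulation. All quantities below are hypothetical assumptions for illustration, not estimates from the literature: randomization makes treatment independent of unobserved ability, so a simple difference in means recovers the assumed treatment effect, whereas selection on ability inflates it.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
ability = rng.normal(0.0, 1.0, n)   # unobserved heterogeneity across subjects
true_effect = 0.5                   # assumed effect of the policy instrument

# Random assignment breaks the link between treatment status and ability:
treated = rng.random(n) < 0.5
outcome = ability + true_effect * treated + rng.normal(0.0, 1.0, n)
ate = outcome[treated].mean() - outcome[~treated].mean()

# Self-selection, by contrast, confounds the comparison:
self_selected = ability > 0.0       # e.g., high-ability subjects opt in
outcome_s = ability + true_effect * self_selected + rng.normal(0.0, 1.0, n)
biased = outcome_s[self_selected].mean() - outcome_s[~self_selected].mean()

print(f"randomized estimate:    {ate:.2f}")     # close to the assumed effect
print(f"self-selected estimate: {biased:.2f}")  # inflated by the ability gap
```

The same logic applies whether the randomization happens in the field or in the laboratory; what differs between the two settings is the external validity of the environment, not the identification strategy.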
Since each method has its own strengths and weaknesses, the method used for a particular research question should be chosen depending on the object of research, the availability of data, and the possibility of conducting field experimentation. Overall, a mix of complementary empirical methods might thus be the most promising approach (Weimann, 2015). In the following, we focus on laboratory experiments, the most recent addition to the methodological toolbox of innovation research, and discuss their limitations and advantages.

Limitations and advantages of experimental methods
Although lab experiments can be transferred and used to derive relevant policy implications, there are systematic limitations to this approach. Critics of lab experiments such as Levitt and List (2007, 2008) emphasize the restrictions, while Falk and Heckman (2009) provide refutations.

Observation
Participants are observed and act in an artificial environment, which might influence their behavior due to expectancy effects and experimenter demand bias. Barmettler et al. (2012) contradict this argument and show experimentally that complete anonymity between the experimenter and participants does not change the latter's behavior. Furthermore, it is argued that close social observation is not limited to the lab but rather is a feature common to all economic interactions.

Stakes
It can be argued that the stakes in experiments are too low to induce realistic behavior in participants. Experiments with varying stake sizes yield mixed results depending on the experimental situation (Camerer and Hogarth 1999). However, Falk and Heckman (2009) ask how often people take choices involving sums equal to their monthly incomes and how representative such high-stake experiments would actually be.
Consequently, they suggest that the average level of stakes in laboratory experiments corresponds to the most common choices that individuals take.

Sample size
The sample sizes of lab experiments are criticized as being too small, although this is refuted on the grounds that sample sizes adequately correspond to this method and thus yield valid assertions.

Participants
Student participant pools are considered unrepresentative of the overall population. While this might not be a problem when testing theories, in the case of innovation experiments, other populations such as researchers or entrepreneurs might be more appropriate experimental participants, depending on the research question.

Self-selection
There is a self-selection bias, since students with particular traits sign up for participant pools. Nevertheless, student pools ensure that the selection can be controlled and provide information on participants' demographics, personal backgrounds, and preferences. Thus, the disadvantages connected to selection biases—which are potentially prevalent in field experiments as well as other empirical research methods—can be somewhat controlled.

Learning
Participants often cannot learn in experiments and adjust their behavior accordingly, yet this is also a prevalent factor in many economic interactions outside of the lab, as real-world interactions can often be considered one-shot games with no chance of learning in repeated decisions. Furthermore, a large number of repeated games have been considered in experimental settings to determine learning effects, for example, Cooper et al. (1999) with regard to incentive systems.

External validity
Lab experiments are considered as lacking external validity, meaning that they produce unrealistic data without further relevance for understanding the "real world": a criticism that holds true both for lab experiments and theoretical models (Weimann, 2015, pp. 240–241).
The challenge in designing experiments is to establish the best way of isolating the causal effect of interest and thus providing insights about universally prevalent effects that transfer to other economic situations outside of the lab. In a recent study, Herbst and Mas (2015) show how well-designed experiments can ensure that individual behavior outside the lab is captured adequately, thereby gaining a higher external validity than traditionally assumed for laboratory studies. Further studies comparing laboratory and field evidence will have to show whether this might change the general perception of the external validity of lab experiments (Charness and Fehr 2015). However, in some research contexts, it might not be possible to substantially increase the external validity. In such cases, lab experiments can serve as a starting point to isolate clear effects of specific innovation instruments. Subsequently, these effects have to be investigated with other methods involving a higher external validity, e.g., field experiments in a firm. These methods then have to show whether the initial results from the laboratory hold in contexts outside the lab.

Generalizability
The lack of generalizability of behavioral patterns resulting from lab experiments that refrain from testing a theoretical model is criticized. While the arguments mentioned above reduce this problem, it remains a considerable drawback of some experimental evidence. Nevertheless, every empirical method faces this issue due to the unavoidable dependency of data on a specific context. Overall, lab experiments entail several distinct advantages, as they provide researchers with the means of deriving causal relations from controlled manipulations of specific conditions, while controlling all surrounding factors.
This ensures precise measurements and makes it possible to preclude confounding effects such as multiple incentives or repeated interactions. The experimenter thus retains almost complete control of the decision environment, namely the material payoffs, the information given to participants, the order of decisions, and the duration and iterations of the experiment. Participants are assigned randomly, which reduces the selection bias. Moreover, they are incentivized monetarily for their decisions, whereby it can be assumed that decisions are taken seriously: "In this sense, behavior in the laboratory is reliable and real: Participants in the lab are human beings who perceive their behavior as relevant, experience real emotions, and take decisions with real economic consequences" (Falk and Heckman 2009, p. 536). The results are replicable, and they allow investigating specific institutions at a relatively low cost. This can be particularly useful when considering exogenous changes like policy interventions and new regulations, where counterfactual situations can be created and their effects tested far more easily in lab than in field experiments. With the possibility of altering only one factor—e.g., the patent regime—lab experiments allow analyzing the relevance of a particular factor without other factors confounding the observed behavior. Furthermore, lab experiments enable the researcher to examine different innovation types and effects of incentives, and to split up the innovation process to observe individual behavior at particular points of the process (Falk and Heckman 2009; Smith 1994, 2003). In the following, we review examples of different fields of innovation research where lab experiments have been put forth to provide novel insights.

Review
By analyzing the effects of specific policy instruments via economic experiments, several of the advantages of lab experiments described above can be used fruitfully.
In particular, it becomes possible to compare counterfactual data of decision situations with and without a particular instrument. It is thus possible to analyze subjects' specific reactions to changes in the framework conditions, which is almost impossible when using "real-world" data. There are additional merits to the controlled lab environment, in which only one factor is changed; for instance, innovation behavior and its development can be observed and analyzed over several periods. Of course, the innovation process is necessarily stylized in lab experiments; nevertheless, a number of promising ideas concerning how to transfer the innovation process into the laboratory have been provided in recent years. Table 1 lists the experiments reviewed in the following chapters and briefly summarizes the particular task subjects had to solve.

Intellectual property rights

There are several experiments implementing (real effort) search tasks to simulate the innovation process. Buchanan and Wilson (2014) design an experimental environment with subjects producing, trading, and consuming rivalrous and non-rivalrous goods. Rivalrous goods are produced out of two complements and can be sold. By contrast, producing non-rivalrous goods is possible by participating in a search task in order to find the "favorite good" of the specific period, which is more valuable than the rivalrous good and—in contrast to rivalrous goods—can be sold several times. The authors implement one treatment with intellectual property, in which selling and transferring the non-rivalrous good is restricted to the respective owner, as well as one treatment without intellectual property, where non-rivalrous goods can be created several times.
The authors find no differences in the value of produced non-rivalrous goods and the average money earned, regardless of intellectual property protection.

Table 1 Overview of reviewed experiments

Intellectual property rights:
- Buchanan and Wilson 2014 (real effort search task): producing and trading rivalrous and non-rivalrous goods composed of colors
- Meloso et al. 2009 (real effort search task): solving the knapsack problem and trading the potential components
- Buccafusco and Sprigman 2010 (creative task): creating and trading poems
- Crosetto 2010 (creative task): creating and extending words and deciding whether to use IP protection
- Brüggemann et al. 2015 (creative task): creating and extending words, setting license fees

Financial instruments:
- Brüggemann and Meub 2015 (creative task): creating and extending words, setting license fees
- Brüggemann 2015 (creative task): creating and extending words, setting license fees

Payment schemes:
- Eckartz et al. 2012 (real effort search task): combining as many words as possible from 12 given letters
- Ederer and Manso 2012 (real effort search task): managing a virtual lemonade stand
- Erat and Gneezy 2015 (creative task): solving rebus puzzles
- Bradler 2015 (creative task): imagining unusual uses for items

R&D competition:
- Isaac and Reynolds 1988 (investment task): taking investment choices under competition
- Isaac and Reynolds 1992 (investment task): taking investment choices including the game bingo
- Sbriglia and Hey 1994 (search task): finding a letter combination by buying different letter trails under competition
- Zizzo 2002 (investment task): competing for a prize over several periods
- Silipo 2005 (investment task): accumulating "knowledge units" under risk and competition
- Cantner et al. 2009 (search task): searching for product specifications of a car including investment and competition
- Aghion et al. 2014 (investment task): competing for finding an innovation including investment and risk

Overall, Buchanan and Wilson suggest that intellectual property protection does not spur innovativeness. The protection only serves as an additional incentive, whereas the existence of entrepreneurial individuals is more important. The respective entrepreneurs subsequently profit substantially from the protection, while also generating wealth without intellectual property protection.

Meloso et al. (2009) use another kind of search task—namely the knapsack problem—to simulate intellectual discovery in a patent and a non-patent market system, in which components of potential discoveries are traded. The goal of the knapsack problem is to combine inputs of a particular value and realize an optimal weighting of the components. In sum, the number of subjects who were able to find the correct solution to the knapsack task was higher in the market system, which has the advantages that no scope of intellectual property rights has to be defined beforehand and that it entails no monopoly rights. Therefore, the authors state that markets do not necessarily fail—as theoretical contributions suggest—for non-excludable and non-rival goods.

Buccafusco and Sprigman (2010) let subjects write poems and implement a market for the poems. Depending on the initial distribution of intellectual property rights, they find different preferences among the innovators, owners, and buyers. There is a robust endowment effect that manifests itself in the high offers of innovators and a significantly lower willingness to pay among the buyers. This experiment has the advantage of simulating innovation activity most closely on an individual level, yet it is not possible to further evaluate the particular poems and determine a ranking for the quality of the innovations.
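To make the knapsack task described above concrete, a minimal dynamic-programming sketch of the 0/1 knapsack problem is given below. The item values, weights, and capacity are invented for illustration and do not come from the Meloso et al. (2009) experiment:

```python
def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the weight capacity."""
    # best[w] = best value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Hypothetical items as (value, weight) pairs with capacity 10:
# the optimum combines the items worth 12 and 9 (weights 6 and 4).
print(knapsack([10, 7, 12, 9], [5, 3, 6, 4], 10))  # → 21
```

Subjects in the experiment faced exactly this kind of combinatorial trade-off: every added component consumes capacity, so the profitable selections are not obvious from the item values alone.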
Including further features of the innovation process—namely creativity, ownership, and investment choices—Crosetto (2010) developed a task to simulate innovative activity based upon the board game Scrabble. He uses his setting to analyze individual behavior when subjects have to create and extend words and are able to select between the intellectual property schemes of open source and fixed license fees. He finds that subjects are more likely to provide their innovations open source when the level of license fees is high.

Brüggemann et al. (2015) extend this experimental setting to test for the effect of different regulatory incentive schemes on individual innovativeness. They compare a treatment with the possibility to choose the amount of license fees to a system without license fees and further implement the ability to communicate. They find that communication does not change the innovative behavior and that welfare is higher in the no-license-fee system than in the license-fee system. However, when given the possibility to license innovations, subjects display a high demand for being rewarded monetarily rather than providing innovations to other participants free of charge.

Financial instruments

There is a broad literature about the difficulties in analyzing the effect of subsidies and other public programs to foster innovativeness due to endogeneity and selection bias problems. Although the methods used have advanced substantially in past years, lab experiments can contribute to this sub-field of innovation research (Blundell and Costa Dias 2009). In some cases, experiments might be the only way to provide insights about new—and potentially costly—policy instruments before they are implemented in the "real world." This approach might thus be a particularly promising methodological choice when new institutional framework conditions aimed at fostering innovative activity are tested.
Nevertheless, there is only a limited number of studies dealing with financial instruments to date. Using the Scrabble-based word creation task introduced by Crosetto (2010), Brüggemann and Meub (2015) analyze individual behavior in two types of innovation contests by awarding subjects a bonus for the best innovation in one treatment and for the largest innovation effort in another, comparing individual performance to a benchmark treatment without a prize. They find that the willingness to cooperate decreases when innovation contests are introduced, while overall welfare remains constant across treatments. Furthermore, using the same word task, Brüggemann (2015) analyzes the effects of two distinct forms of subsidies on innovativeness: first, supplying resources earmarked for innovative activities and, second, providing additional financial resources not restricted to use in innovative activities. She finds that both forms of subsidy lead to a crowding-out of private investment and to negative welfare effects when the costs of the subsidy are included. Furthermore, the subsidies fail to induce a positive effect on individual innovation behavior.

Payment schemes

Another class of experiments focuses on the creative element of innovation and the effects of different payment schemes. Eckartz et al. (2012) test the effects of different payment schemes on creativity using a word-based real effort task, in which subjects have to combine as many words as possible out of 12 prescribed letters within a certain time. They examine a flat fee, a linear payment, and a tournament and find no substantial differences between the three incentive schemes.
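The core check in such a word task—whether a candidate word can be spelled from the prescribed letters—can be sketched in a few lines. The 12-letter draw and the candidate words below are invented for illustration and are not taken from Eckartz et al. (2012):

```python
from collections import Counter

def can_form(word, letters):
    """True if `word` can be spelled using the multiset of available letters."""
    need, have = Counter(word), Counter(letters)
    # every required letter must be available at least as often as needed
    return all(have[c] >= n for c, n in need.items())

letters = "aeilnorsttxz"  # a hypothetical draw of 12 letters
print(can_form("ratio", letters))  # → True
print(can_form("zest", letters))   # → True
print(can_form("queen", letters))  # → False ('q' and 'u' are missing)
```

Counting letters as a multiset rather than a set is the essential detail: a word needing three t's cannot be formed from a draw containing only two.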
Similarly analyzing different payment schemes, Ederer and Manso (2012) compare innovative activity when offering a fixed wage, a wage based upon pay-for-performance, and a split wage, which is fixed at the beginning and based upon performance later on. In a search task, subjects have to manage a lemonade stand, whereby they have to decide upon several variables such as the location, content, and price to find the most profitable solution. The authors find that the split wage, with tolerance for early failure and compensation for long-term success, leads to more innovative effort and higher overall welfare.

Erat and Gneezy (2015) compare three payment schemes, namely a pay-for-performance scheme, a competitive scheme, and a benchmark without incentives. Unlike Ederer and Manso (2012), they use rebus puzzles as a creative task and find that competition reduces creativity, while a pay-for-performance scheme does not change creativity in comparison to a situation without incentives. Comparing the two financial incentives, creativity is higher under the pay-for-performance scheme.

Bradler (2015) uses the "unusual uses task"—an established creativity test—to compare accomplishment, self-reporting, and risk behavior. In the task, subjects have to imagine as many uses for a particular object as possible in a certain time, choosing their preferred payment scheme prior to the task, i.e., a tournament or a fixed payment. She finds that the different payment schemes appeal to different types of subjects: risk-loving subjects with a high self-assessment tend to choose the tournament; however, in contrast to previous studies, creative subjects do not choose the tournament more often than the fixed payment.

R&D competition

Finally, in the experiments on R&D competition, the authors focus on different investment tasks to analyze individual behavior in competitive and innovative environments.
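As a rough illustration of what such an investment task can look like, the sketch below simulates a stylized stochastic patent race in which each firm's per-period success probability rises with its investment. All parameters (prize, hazard rate, horizon) are invented for illustration and do not correspond to any of the experiments reviewed here:

```python
import random

def race(invest_a, invest_b, prize=100.0, hazard=0.01, max_periods=50, seed=1):
    """Simulate a stylized stochastic patent race: each period, firm i succeeds
    with probability hazard * invest_i; the first success wins the prize.
    Payoff = prize share (if winner) minus cumulative investment cost."""
    rng = random.Random(seed)
    cost_a = cost_b = 0.0
    for _ in range(max_periods):
        cost_a += invest_a
        cost_b += invest_b
        a_wins = rng.random() < hazard * invest_a
        b_wins = rng.random() < hazard * invest_b
        if a_wins or b_wins:
            # a simultaneous success splits the prize
            share = prize / 2 if (a_wins and b_wins) else prize
            return ((share if a_wins else 0) - cost_a,
                    (share if b_wins else 0) - cost_b)
    return (-cost_a, -cost_b)  # nobody innovated within the horizon

# A firm investing enough to succeed for sure in period 1 just breaks even here,
# capturing the trade-off between innovation speed and investment cost.
print(race(100, 0))  # → (0.0, 0.0)
```

Even this toy version exhibits the tension the experiments study: higher investment raises the chance of winning but erodes the net payoff, and under competition the optimal investment depends on what the rival does.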
Experiments on patent races and R&D competition were first established by Isaac and Reynolds (1988) to simulate a one-stage stochastic invention model and subsequently a two-stage model (Isaac and Reynolds 1992). This class of experiments aims to test the findings of theoretical models against empirical evidence; in contrast to the experiments described before, they do not analyze specific policy instruments.

Sbriglia and Hey (1994) develop a costly combinatorial task representing research competition for a patentable innovation to analyze three behavioral aspects of patent races, namely how subjects select their search procedures, which investment strategies they use, and how information is processed. The authors identify different types of innovators: the "winners", who search successfully, do not act randomly, and invest more in comparison to the "losers", who are unable to establish a strategic search procedure. Furthermore, stronger competition accelerates the rate of investment, and with a higher number of periods, successful players more commonly adapt their search behavior.

Zizzo (2002) tests the multi-stage patent race model by Harris and Vickers (1987) with an investment task in which subjects compete for a monetary prize over several periods. The results disconfirm the theoretical assertions, as leaders of a patent race do not invest more than their followers. Furthermore, he finds no virtual monopoly, and investments do not change as predicted by the model.

Silipo (2005) analyzes the cooperation and break-up behavior in joint ventures in a dynamic patent race model theoretically and experimentally.
In the model, he finds that the starting positions of the competitors are crucial for whether they cooperate: if the innovators start at different points of the research process, the probability of joint ventures decreases, while in joint ventures, the pace of the process slows down. The results of the experiment correspond to the model, aside from some races in which subjects perform worse than anticipated.

Cantner et al. (2009) test a patent race model limited to a duopoly market without price competition by implementing a multi-dimensional search task with uncertainty. They find that different strategies solve the task, namely risky innovative investment and risk-free imitation. On average, subjects choose the risky innovative investment based upon the risk of an investment failure, their anticipated revenue, and their relative success in the experiment. Furthermore, the gap in subjects' earnings has a positive impact on their investment in the subsequent periods.

Finally, Aghion et al. (2014) analyze the effects of competition on step-by-step innovation by means of a risky investment task with different levels of competition and time horizons. The results show an increase in investment for neck-and-neck firms, yet a decrease in investment for firms lagging behind.

Conclusions

In this paper, we present the limitations and advantages of using laboratory experiments for innovation research and review 18 examples from four specific fields in which lab experiments have already been conducted. As the experimental method yields promising results in testing intellectual property rights, financial instruments, payment schemes, and R&D competition, we suggest that laboratory experiments can serve as a useful additional tool for innovation economists and represent a source of promising new insights for innovation research.
In particular, we argue that lab experiments should be used to target specific policy questions and thus provide measures of the effectiveness of specific instruments prior to their introduction. In marked contrast to all other methods, this approach has the advantages of yielding evidence from counterfactual situations and of strong control over the setting, for example, when testing external incentives for innovative activity or changing parameters of the institutional framework. Therefore, we follow Chetty (2015) and Weimann (2015), who suggest a pragmatic perspective on behavioral economics, adding experimental evidence to the existing methods whenever its particular advantages outweigh its limitations. Within this pragmatic perspective on laboratory experiments, there are three ways in which this field of research can contribute to public policy: by presenting new policy instruments, developing better predictions regarding the effects of existing policies, and more accurately measuring welfare implications.

Besides the policy implications, this strand of literature can be used to derive managerial implications. In particular, studies on external incentives for fostering innovative activities are of relevance, since they give managers practical advice on how to best foster the innovative activities of their employees, e.g., via experiments analyzing the optimal payment schemes for innovative activities.

We hope that this overview encourages other researchers to use lab experiments in innovation research, which could be further developed in several domains: as the existing laboratory studies on financial instruments measure effectiveness, future studies might focus on measuring efficiency, which would reflect promising progress in evaluating new means of public policy.
Furthermore, lab experiments might be helpful as a methodological starting point for developing new policy instruments. From a managerial perspective, future experimental innovation research might address a more comprehensive understanding of the innovation process itself. For example, experimental researchers might analyze innovative work in teams and thus decompose the innovation process into its components, which is effectively possible in a laboratory environment. Moreover, the role of external incentives to encourage employees' innovativeness might be further emphasized.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
Both authors, JB and KB, developed and wrote the paper. Both authors read and approved the final manuscript.

Acknowledgements
Financial support from the German Federal Ministry of Education and Research via the Hans-Böckler-Stiftung is gratefully acknowledged. Further, we would like to thank Till Proeger for his very helpful comments.

Received: 28 January 2016 Accepted: 31 May 2016

References
Aghion, P., Bechtold, S., Cassar, L., & Herz, H. (2014). The causal effects of competition on innovation: experimental evidence (National Bureau of Economic Research Working Paper No. w19987).
Angrist, J. D., & Pischke, J.-S. (2010). The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3–30. doi:10.1257/jep.24.2.3.
Barmettler, F., Fehr, E., & Zehnder, C. (2012). Big experimenter is watching you!: anonymity and prosocial behavior in the laboratory. Games and Economic Behavior, 75(1), 17–34. doi:10.1016/j.geb.2011.09.003.
Bator, F. M. (1958). The anatomy of market failure. The Quarterly Journal of Economics, 72(3), 351–379. doi:10.2307/1882231.
Blundell, R., & Costa Dias, M. (2009). Alternative approaches to evaluation in empirical microeconomics. The Journal of Human Resources, 44(3), 565–640. doi:10.3368/jhr.44.3.565.
Boockmann, B., Buch, C. M., & Schnitzer, M. (2014). Evidenzbasierte Wirtschaftspolitik in Deutschland: Defizite und Potentiale. Perspektiven der Wirtschaftspolitik, 15(4), 307–232. doi:10.1515/pwp-2014-0024.
Borrás, S., & Edquist, C. (2013). The choice of innovation policy instruments. Technological Forecasting and Social Change, 80(8), 1513–1522. doi:10.1016/j.techfore.2013.03.002.
Bradler, C. (2015). How creative are you?: an experimental study on self-selection in a competitive incentive scheme for creative performance (ZEW - Centre for European Economic Research Discussion Paper No. 15-021).
Brüggemann, J. (2015). The effectiveness of public subsidies for private innovations: an experimental approach (cege Discussion Paper No. 266).
Brüggemann, J., & Meub, L. (2015). Experimental evidence on the effects of innovation contests (cege Discussion Paper No. 251).
Brüggemann, J., Crosetto, P., Meub, L., & Bizer, K. (2015). Intellectual property rights hinder sequential innovation: experimental evidence (cege Discussion Paper No. 227).
Buccafusco, C., & Sprigman, C. (2010). Valuing intellectual property: an experiment. Cornell Law Review, 96(1), 1–46.
Buchanan, J. A., & Wilson, B. J. (2014). An experiment on protecting intellectual property. Experimental Economics, 17(4), 691–716. doi:10.1007/s10683-013-9390-8.
Busom, I. (2000). An empirical evaluation of the effects of R&D subsidies. Economics of Innovation and New Technology, 9(2), 111–148. doi:10.1080/10438590000000006.
Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1-3), 7–42. doi:10.1023/A:1007850605129.
Cantner, U., Güth, W., Nicklisch, A., & Weiland, T. (2009). Competition in product design: an experiment exploring innovation behavior. Metroeconomica, 60(4), 724–752. doi:10.1111/j.1467-999X.2009.04057.x.
Charness, G., & Fehr, E. (2015). From the lab to the real world. Science, 350(6260), 512–513. doi:10.1126/science.aad4343.
Chatterji, A. K., Glaeser, E., & Kerr, W. (2013). Clusters of entrepreneurship and innovation (National Bureau of Economic Research Working Paper No. w19013).
Chetty, R. (2015). Behavioral economics and public policy: a pragmatic perspective. American Economic Review: Papers and Proceedings, 105(5), 1–33. doi:10.1257/aer.p20151108.
COM(2014) 339. Research and innovation as sources of renewed growth.
Cooper, D. J., Kagel, J. H., Lo, W., & Gu, Q. L. (1999). Gaming against managers in incentive systems: experimental results with Chinese students and Chinese managers. The American Economic Review, 89(4), 781–804. doi:10.1257/aer.89.4.781.
Crosetto, P. (2010). To patent or not to patent: a pilot experiment on incentives to copyright in a sequential innovation setting. In P. J. Ågerfalk, C. Boldyreff, J. González-Barahona, G. Madey, & J. Noll (Eds.), IFIP advances in information and communication technology: Vol. 319. Open source software. New horizons. 6th International IFIP WG 2.13 Conference on Open Source Systems (pp. 53–72). Berlin: Springer.
Eckartz, K., Kirchkamp, O., & Schunk, D. (2012). How do incentives affect creativity? (CESifo Working Paper No. 4049).
Ederer, F., & Manso, G. (2012). Is pay-for-performance detrimental to innovation? Management Science, 59(7), 1496–1513. doi:10.1287/mnsc.1120.1683.
Erat, S., & Gneezy, U. (2015). Incentives for creativity. Experimental Economics. doi:10.1007/s10683-015-9440-5 (first published online).
Falck, O., Wiederhold, S., & Wößmann, L. (2013). Innovationspolitik muss auf überzeugender Evidenz basieren. ifo Schnelldienst, 66(5), 14–19.
Falk, A., & Heckman, J. J. (2009). Lab experiments are a major source of knowledge in the social sciences. Science, 326(5952), 535–538. doi:10.1126/science.1168244.
Harris, C., & Vickers, J. (1987). Racing with uncertainty. The Review of Economic Studies, 54(1), 1–21.
Herbst, D., & Mas, A. (2015). Peer effects on worker output in the laboratory generalize to the field. Science, 350(6260), 545–549. doi:10.1126/science.aaa7154.
Isaac, R. M., & Reynolds, S. S. (1988). Appropriability and market structure in a stochastic invention model. The Quarterly Journal of Economics, 103(4), 647–671. doi:10.2307/1886068.
Isaac, R. M., & Reynolds, S. S. (1992). Schumpeterian competition in experimental markets. Journal of Economic Behavior & Organization, 17(1), 59–100. doi:10.1016/0167-2681(92)90079-Q.
Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21(2), 153–174. doi:10.1257/jep.21.2.153.
Levitt, S. D., & List, J. A. (2008). Homo economicus evolves. Science, 319(5865), 909–910. doi:10.1126/science.1153911.
Madrian, B. C. (2014). Applying insights from behavioral economics to policy design. Annual Review of Economics, 6, 663–688. doi:10.1146/annurev-economics-080213-041033.
Mazzucato, M., Cimoli, M., Dosi, G., Stiglitz, J. E., Landesmann, M. A., Pianta, M., Walz, R., & Page, T. (2015). Which industrial policy does Europe need? Intereconomics, 50(3), 120–155. doi:10.1007/s10272-015-0535-1.
Meloso, D., Copic, J., & Bossaerts, P. (2009). Promoting intellectual discovery: patents versus markets. Science, 323(5919), 1335–1339. doi:10.1126/science.1158624.
Sbriglia, P., & Hey, J. D. (1994). Experiments in multi-stage R&D competition. Empirical Economics, 19(2), 291–316. doi:10.1007/BF01175876.
Silipo, D. B. (2005). The evolution of cooperation in patent races: theory and experimental evidence. Journal of Economics, 85(1), 1–38. doi:10.1007/s00712-005-0115-0.
Smith, V. L. (1994). Economics in the laboratory. Journal of Economic Perspectives, 8(1), 113–131. doi:10.1257/jep.8.1.113.
Smith, V. L. (2003). Constructivist and ecological rationality in economics. The American Economic Review, 93(3), 465–508. doi:10.1257/000282803322156954.
Sørensen, F., Mattson, J., & Sundbo, J. (2010). Experimental methods in innovation research. Research Policy, 39(3), 313–323. doi:10.1016/j.respol.2010.01.006.
Thomä, J., & Bizer, K. (2013). To protect or not to protect?: modes of appropriability in the small enterprise sector. Research Policy, 42(1), 35–49. doi:10.1016/j.respol.2012.04.019.
Vedung, E. (1998). Policy instruments: typologies and theories. In M.-L. Bemelmans-Videc, R. C. Rist, & E. Vedung (Eds.), Carrots, sticks and sermons. Policy instruments and their evaluation (pp. 21–58). New Brunswick: Transaction Publishers.
Weimann, J. (2015). Die Rolle von Verhaltensökonomik und experimenteller Forschung in Wirtschaftswissenschaft und Politikberatung. Perspektiven der Wirtschaftspolitik, 16(3), 231–252. doi:10.1515/pwp-2015-0017.
Zizzo, D. J. (2002). Racing with uncertainty: a patent race experiment. International Journal of Industrial Organization, 20(6), 877–902. doi:10.1016/S0167-7187(01)00087-X.
Zúñiga-Vicente, J. Á., Alonso-Borrego, C., Forcadell, F. J., & Galán, J. I. (2014). Assessing the effect of public subsidies on firm R&D investment: a survey. Journal of Economic Surveys, 28(1), 36–67. doi:10.1111/j.1467-6419.2012.00738.x.


Journal of Innovation and Entrepreneurship, Springer Journals

Published: Jun 13, 2016
