Tradeoff Between Distributed Social Learning and Herding Effect in Online Rating Systems: Evidence From a Real-World Intervention

SAGE Open. DOI: 10.1177/2158244017691078

Affiliations: City College, The City University of New York, NY, USA; Hunter College, The City University of New York, NY, USA; Yale University, New Haven, CT, USA; Google Inc., New York, NY, USA; University of the South, Toulon-Var, La Garde, France; Princeton University, NJ, USA

Corresponding Author: Ofer Tchernichovski, Hunter College, The City University of New York, 695 Park Ave, New York, NY 10065, USA. Email: tchernichovski@gmail.com

Creative Commons CC-BY: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/), which permits any use, reproduction, and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Abstract

We investigated how social diffusion increased client participation in an online rating system and, in turn, how this herding effect may affect the metrics of client feedback over the course of years. In a field study, we set up a transparent feedback system for university services: During the process of making service requests, clients were presented with short-term trends of client satisfaction with relevant service outcomes. Deploying this feedback system initially increased satisfaction moderately. Thereafter, mean satisfaction levels remained stable between 50% and 60%. Interestingly, at the individual client level, satisfaction increased significantly with experience despite the lack of any global trend across all users. These conflicting results can be explained at the social network level: If satisfied clients attracted new clients with more negative attitudes (a herding effect), then the net increase in service clients may dampen global trends despite improvement at the individual level. Three observations support this hypothesis: First, the number of service clients providing feedback increased monotonically over time. Second, spatial analysis of service requests showed a pattern of expansion from floor to floor. Finally, satisfaction increased over iterations only in clients who scored below average.

Keywords: social learning, herding effect, rating systems, online reviews, social feedback

Introduction

Feedback has an important role in a wide range of social phenomena, from the evolution of cooperation (Nowak, 2006) to sustainable economic development (Platteau, 2000) and, conversely, poverty traps (Adato, Carter, & May, 2006). In social organizations, the issue of engendering a feedback loop is a central conundrum: Many institutions attempt to get the public more involved in improving their services by developing systems for eliciting feedback and evaluations from their clients (Miller, Resnick, & Zeckhauser, 2002; Pavett, 1983; Vigoda, 2000). However, creating a sustained feedback loop between users and public service providers is difficult (Jaeger & Thompson, 2003). The public sector adds an additional obstacle: the monopoly nature of service provision. On e-commerce sites, the provider has strong market incentives to address the concerns of online reviews lest business go elsewhere. In contrast, below we describe a system where both the provider and the consumer of the service are beholden to each other through monopolistic (and monopsonistic) dynamics (Holmes, Levine, & Schmitz, 2012). Little is known about how individual satisfaction changes over time and how social networks affect participation, and thus free ridership (Marwell & Ames, 1979), especially in monopolistic service provision settings such as those found in the public sector (Vedung, 1997).
Traditionally, feedback systems were deployed to facilitate learning at the institutional level, where the role of the clients is merely to provide information (Gerson, 1993). In recent years, however, several businesses (e.g., Google, Yelp, SeeClickFix) and nonprofits (e.g., FixMyStreet.org) have been experimenting with two-way communication systems (Lee, 2014), where citizens can follow up on their requests and post comments about service outcome. Consequently, public opinion about service quality is both mirrored and affected by highly distributed rating systems, whose statistical and dynamic features are poorly understood. For example, recent data suggest that the increased usage of government websites may negatively influence citizens' satisfaction with public service provision despite (or perhaps because of) the increase in accessibility and transparency (Porumbescu, 2015).

Indicators of service provision, such as mean satisfaction levels, might be affected by sampling biases at two levels (Kaufmann, Kraay, & Mastruzzi, 2004; Stipak, 1979): the client's decision to request a public service (participation level) and the client's decision to rate the service outcome (feedback level). Participation level may bias service provision indicators in a complex manner. For example, people who are less likely to participate might have a more negative attitude toward public services.
A major factor in participation level is the herding effect (Hardin, 1968; Helbing, Farkas, & Vicsek, 2000), where social diffusion may gradually change public opinion and either facilitate or suppress participation, potentially leading to sampling biases over time. That is, early on, people with more positive attitudes toward the service might be more likely to try the service and to send positive feedback, while the herding effect may attract more reluctant customers who might send more critical evaluations.

A second source of potential bias is feedback level, as client tendency to send feedback may change with experience. In the classic "tragedy of the commons" scenario (Hardin, 1968), a common grazing area is overused and eventually destroyed by farmers (free riders) who only care about their individual gains. Empirically, the principal factors that affect the level of free ridership are group size, the distribution of interest within the group, and the distribution of resources (Marwell & Ames, 1979). We do not know to what extent these factors might also apply in the case of monopolistic public service provision, potentially affecting client likelihood of submitting feedback (Hu, Zhang, & Pavlou, 2009; Moe & Trusov, 2011). Feedback level might also depend on the expectation of reward (the likelihood of a satisfactory service outcome), which may change with policies and/or with time.

Taken together, complexities in the temporal dynamics of client expectations versus experience, combined with the effect of social diffusion, may impose long-term biases on indicators of service provision. Slow accumulation of biases can make it difficult to detect the causes of observed changes, or alternatively, it might mask real changes in public service quality. This study is a preliminary attempt to disentangle some of those factors by analyzing changes in participation level, feedback level, and satisfaction level over years.

The data used for this study were obtained from an administrative initiative at The City College of New York (CCNY) Science Division, which aimed at resolving long-lasting problems with maintenance services provided to research labs. The Science Division deployed a web application, which was developed to facilitate service requests and to allow faculty and staff of research labs to request services directly. Current clients received feedback statistics about service outcome from recent users via a dashboard attached to service request forms (Figure 1A). In an attempt to reduce provider and client triage across service types (i.e., discouragement), the system presented client satisfaction as short-term trends rather than cumulative scores. This way, even a small change in mean client satisfaction could become immediately apparent in the dashboard display as a positive or a negative trend in client satisfaction (Figure 1, bar graphs). Our administration hoped that this would improve services without centralized sanctioning (Baldassarri & Grossman, 2011), by synchronizing social learning across workers and clients of monopolistic university services.

Figure 1. Transparent reporting system for service requests: (A) A dashboard displaying trends of mean monthly client satisfaction with service outcome over 6 months was attached to service request forms. The display is dynamically adjusted according to the service type selected by the user to facilitate comparisons. (B) Feedback requests to the client were triggered blindly after a fixed, 1-week delay (the delay is our upper-bound estimate of the service completion time). Client evaluation statistics fed back automatically to the same dashboard as moving averages of monthly trends over 6 months (data integration window).

We analyzed data collected via this system over 4 years to evaluate how slow changes in participation and feedback level may affect measures of client satisfaction over time. We then performed spatial analysis to examine whether changes in participation level display any spatial structure, which could indicate a herding effect (Benkler, 2006; Helbing et al., 2000). Finally, at the individual client level, we analyzed changes in satisfaction over iterations (experience) to assess whether social learning (across clients and providers of services through the feedback system) might have influenced satisfaction with service outcome. Our results suggest that a herding effect might have imposed sampling biases, resulting in opposing effects on individual- and population-level performance indicators of service provision. We propose simple measures for reducing such biases and potentially improving the reliability of similar information systems.
In sum, the problem that our field study attempted to address is how to improve public services by enabling social learning, online, across and among clients and providers of services. The approach was to set up a voluntary service rating system and present client feedback statistics as short-term trends (as opposed to cumulative star ratings). We tested for slow progressive changes in client satisfaction with service outcome, focusing on interactions between changes in participation levels (herding effect) and our estimates of changes in client satisfaction over years.

Materials and Methods

Analysis of data obtained by this administrative initiative was approved by the CCNY Institutional Review Board (IRB). All institutional data were deidentified and then analyzed according to the CCNY IRB regulations.

System Design

A transparent reporting system was developed in 2005-2006 by faculty volunteers to address persistent problems in resolving maintenance issues in about 100 research laboratories and several core facilities in the CCNY Science Division. The web application includes forms designed for requesting services for problems of various categories, including electrical, plumbing, HVAC (heating and cooling), carpentry, building integrity, pest control, custodial, restrooms, and so on. The system replaced a facilitator, whose job was to receive reports from building occupants and submit work orders when necessary. Instead, it enabled direct communication with service providers via online forms accessible via any web browser, without restrictions, to all the occupants of the division building. Each form submission triggered an automated email to the appropriate service provider, but we had no access to their internal operations; hence it is an "external," third-party system.

Client Access Control

The site was available online without restrictions (no login) from any device connected (wired or wirelessly) via the college division network, which was available via WiFi to all occupants of the Science building.

Requesting User Evaluations (Feedback)

The system relied entirely on requesting user evaluations and was implemented externally. Feedback was requested blindly from each user (Figure 1B) via email after a 1-week delay. This approach allowed us, at the cost of uncertainty, to arbitrarily attach our reporting system to any relevant university service. The feedback request email included a link to a simple form where the user scored the outcome as either satisfied, partially satisfied, or unsatisfied/nothing happened. The feedback form did not present clients with any information (no dashboard).

Dashboard Presentation of Satisfaction With Service Outcome

Trends in mean client satisfaction were presented in the dashboard for each service type in monthly bins (Figure 1A). %satisfaction was calculated as the proportion of reports that received a "satisfied" score, presented in six monthly bins, with the first bin representing accumulating data from the current month (a code sketch of this binning rule follows the Workflow section below).

Workflow

During the baseline period (the first 9 months of 2006), the web application for university services was deployed without presenting the dashboard. Service requests were automatically emailed to the appropriate service providers. Client feedback about service outcome was requested only sparsely (n = 33) to obtain baseline information about client satisfaction with service outcome. After 9 months, the feedback system was activated without otherwise changing the user interface. The system requested feedback from each client 1 week after each service request. A dashboard was attached to the service request forms with bar graphs presenting mean values and trends of monthly satisfaction with service outcome (Figure 1). Those graphs were visible to all users as well as to service management and workers. Service events that received an "unsatisfied" score were automatically emailed to the service providers without any further follow-up. This mechanism remained intact without significant modifications for 4 years.
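The monthly binning rule described under "Dashboard Presentation of Satisfaction With Service Outcome" is simple enough to restate in code. The sketch below is a minimal Python re-implementation of that computation; the deployed system used PHP/MySQL with a custom C++ feedback manager, so the function name and the (date, score) input format here are illustrative assumptions rather than the production code.

```python
from collections import defaultdict
from datetime import date

def monthly_satisfaction(reports, n_bins=6, today=None):
    """Percent 'satisfied' per month over the last n_bins months.

    reports: iterable of (report_date, score) pairs, where score is one of
    'satisfied', 'partially satisfied', 'unsatisfied'. Bin 0 accumulates
    data from the current, still-incomplete month, as in Figure 1A.
    """
    today = today or date.today()
    counts = defaultdict(lambda: [0, 0])  # (year, month) -> [n_satisfied, n_total]
    for d, score in reports:
        counts[(d.year, d.month)][0] += score == "satisfied"
        counts[(d.year, d.month)][1] += 1

    bins, (y, m) = [], (today.year, today.month)
    for _ in range(n_bins):
        n_sat, n_tot = counts[(y, m)]
        bins.append(100.0 * n_sat / n_tot if n_tot else None)  # None = no data
        y, m = (y, m - 1) if m > 1 else (y - 1, 12)  # step back one month
    return bins  # bins[0] = current month, bins[-1] = n_bins - 1 months back
```

Displaying a short window of recent months, rather than a lifetime average, is what lets even a single bad month surface immediately as a visible downward trend on the dashboard.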
Data Analysis and Statistics

Users scored the service outcome as either satisfied, partially satisfied, or unsatisfied. We pooled the "partially satisfied" and "unsatisfied" scores into a single category to obtain a binary measure for each period, namely, the percentage of fully satisfied clients. Similarly, in the figures we present %satisfaction as the proportion of reports (or clients) that scored the service outcome as fully satisfied. Data were analyzed using Matlab 8. The Matlab Statistics package was used to calculate Pearson correlation coefficients and p values. Spatial analysis was performed by computing a matrix with the number of unique users per quarter per floor. We then normalized each column to represent proportions and smoothed the matrix using a 2 × 2 Hann filter. For analysis of the effect of repeated requests at the individual level, we considered the first 14 reports (submitted per individual), for which we had a sufficient sample size of 53 users. We calculated Pearson correlation coefficients and p values over those first 14 requests per subject.

Programming

The web application was programmed using HTML, PHP, and MySQL. Feedback was managed via a custom C++ application using Embarcadero RAD Studio. This way, dashboard images of graphs were automatically updated whenever the web page was opened or refreshed.
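The analysis itself was done in Matlab 8; for readers without Matlab, the sketch below re-implements the three described steps in Python (binary pooling, Pearson trend tests, and the column-normalized, smoothed quarter-by-floor matrix). The function names are ours, and because the exact 2 × 2 smoothing kernel is ambiguous in the text, a uniform moving window stands in for it here.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.ndimage import uniform_filter

def to_binary(scores):
    """Pool 'partially satisfied' with 'unsatisfied': fully satisfied -> 1, else 0."""
    return np.array([s == "satisfied" for s in scores], dtype=float)

def satisfaction_trend(percent_satisfied):
    """Pearson correlation of %satisfaction against bin index
    (chronological request order, or quarter number)."""
    x = np.arange(1, len(percent_satisfied) + 1)
    return pearsonr(x, percent_satisfied)  # (R, p)

def smoothed_floor_map(users_per_floor_quarter):
    """Column-normalize a floors x quarters count matrix, then smooth it.

    Each column (quarter) is scaled to sum to 1, so the map shows the
    proportion of active users per floor; the 2 x 2 uniform window is an
    assumed stand-in for the paper's 2 x 2 smoothing filter.
    """
    m = np.asarray(users_per_floor_quarter, dtype=float)
    m = m / m.sum(axis=0, keepdims=True)  # each quarter column sums to 1
    return uniform_filter(m, size=2, mode="nearest")
```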
Results

Presenting a dashboard with short-term trends of satisfaction with service outcome (Figure 1A) resulted in a moderate increase in the quarterly satisfaction rate with service outcomes. The satisfaction rate increased from 33% during the 6-month baseline period (n = 33 subjects) to 56.5 ± 1.7% (Figure 2A; means and SEM across quarters hereafter; Pearson's χ² = 4.1, p = .04). After the initial increase, client satisfaction levels remained stable with no apparent trend over time (Figure 2A; R = .02, ns). However, analysis at the individual client level (pooled across individuals) showed that satisfaction increased linearly with the chronological order of service requests (Figure 2B; Pearson correlation: R = .63, p = .0007). Limiting the analysis to clients who submitted at least 14 reports (our upper bound for analysis, see Methods) to account for attrition (n = 53 subjects), we still identified a significant positive trend (Pearson correlation: R = .48, p = .006), indicating increased satisfaction over iterations. Client feedback levels remained stable at 43 ± 1.3% throughout the study, with no apparent trend (Figure 2C), excluding a scenario of changes in mean satisfaction through attrition.

Figure 2. Trends in client satisfaction with service outcome: (A) Time course of mean quarterly client satisfaction with service outcome. Each bar represents the proportion of fully satisfied clients per quarter. During the baseline period (red bar), feedback was requested sparsely, and our system operated without displaying trends of client satisfaction. (B) Per-client satisfaction levels as a function of the chronological order of service requests. Each marker represents the proportion of fully satisfied clients per category (first request, second request, etc.). (C) Quarterly rates of client feedback.

Although the number of Science Division members remained approximately the same during the experimental period, we observed a persistent increase in the number of unique service clients per quarter, at a rate of about 13% per year (Figure 3A; Pearson correlation: R = .43, p = .006). Examining the role of social diffusion in facilitating participation through a spatial analysis, we observed a slow spatial diffusion, from floor to floor of the Science building, over the course of the study (Figure 3B), suggesting that offline local communication might have played a role in the increased participation (Rogers, 1962).

Finally, we analyzed satisfaction over time at the individual client level to test for heterogeneity across client pools with different attitudes toward the university services. Dividing clients into those whose mean score was above the pooled average (positive attitude) versus those below the pooled average (negative attitude), we found a strong heterogeneity of responses: The positive trend was driven by clients who scored below average (Figure 3C), with a remarkable increase from about 20% to 60% satisfaction within that group. In contrast, no significant trends were observed in clients who scored above average (Figure 3D).

Figure 3. Analysis of participation and satisfaction levels: (A) Number of unique clients per quarter. (B) Smoothed map of the spatial distribution of reports across the floors of the CCNY Science Division building. The intensity map represents proportions of individuals submitting service requests across building space. Floor order is transformed to ensure that the spatial data remain unidentifiable. (C) Per-client satisfaction levels as a function of the chronological order of service requests for clients who scored below average, pooled across clients as in Figure 2B. (D) Same as (C) for clients who scored above average. Note. CCNY = The City College of New York.
Discussion

Our results suggest that the effects of launching transparent feedback systems on monopolistic service provision might be complex, and simple cumulative performance estimates have limited power to capture them. In our case, even though we did not observe any global trend in client satisfaction over 4 years (Figure 2A), a detailed analysis detected increased satisfaction with experience at the individual client level (Figure 2B). We suspect that this effect was masked by a secondary herding effect (Benkler, 2006; Helbing et al., 2000), where satisfied clients attracted more reluctant ones, who at least initially submitted lower satisfaction scores, dampening the global trend. Note that even if the expansion in the client pool were a random effect, the positive correlation between iterations (experience) and satisfaction alone would dampen the global trend. For example, as the service client pool increased at a rate of about 13% per year (Figure 3A), and as satisfaction rates were about 15% lower in new clients (Figure 2B), if participation was unbiased, then the flow of new customers should have reduced the mean annual satisfaction rate by about 2%. However, our data suggest that participation was biased: Spatial analysis of service requests showed a pattern of expansion from floor to floor (Figure 3B), suggesting that clients with a positive attitude herded clients with a more negative attitude toward university services. In clients with a negative attitude (Figure 3C-D), satisfaction increased much more strongly with experience, by about 30% to 40%. Therefore, the herding effect could have reduced satisfaction by 4% to 5% given the flow rate. Note that the overall increase in individual clients' satisfaction rate over 4 years was about 20% (about 5% per year), comparable to the putative dampening effect we propose. Overall, our results suggest that slow but persistent improvement in satisfaction for existing users, together with social contagion in utilization of the information system, might have brought clients with a less positive attitude toward the services into the information ecology, hence masking the increase in client satisfaction.

We found it interesting that client feedback levels remained high (between 40% and 50%) over the entire study (Figure 2C). Feedback was requested only once, a week after each report, and no additional measures were taken to encourage clients to send feedback. It is difficult to compare those rates to existing data because institutions rarely report the percentage of clients who responded to automated feedback requests. However, typically, feedback levels are lower than 20%, and nonresponse bias is significant (Groves, 2006; Lambert & Harrington, 1990). Optimization of survey methods can increase client response rates to some extent (Kaplowitz, Hadlock, & Levine, 2004). Beyond that, in a repeated game (Xiao, Zhang, Shi, & Gao, 2012), client motivation to share information may depend on their expectation of reward, for example, on their perception of feedback usefulness (Racherla & Friske, 2012), or on deviations from a prior expectation of reward (Moe & Schweidel, 2011). Perhaps presenting information about client satisfaction at the time of the service request affected client expectation of reward and contributed to the stability of the feedback rate. Testing this hypothesis would require randomized control studies, which were not feasible in this administrative study.

Our analysis has many limitations. The most serious one is that without randomized controls, we could not test whether the presentation of short-term trends on the dashboard was beneficial or not. However, we can certainly conclude that the trends presented to the clients in real time did not fully capture the long-term improvement in client satisfaction, which we observed only a posteriori at the individual client level. In retrospect, presenting trends of satisfaction as a function of client experience with the service (pooled across individuals as in Figure 2B), rather than monthly trends, could have reduced biases and could have provided a better chance to synchronize learning across clients and providers of services.
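The roughly 2% dilution estimate in the opening paragraph of this Discussion follows from simple mixing arithmetic. The lines below reproduce that back-of-envelope calculation from the paper's own numbers (13% annual client-pool growth, roughly 15% lower satisfaction among new clients); the variable names are ours.

```python
# Back-of-envelope check of the ~2% dilution estimate in the Discussion,
# using the paper's own figures.
growth_rate = 0.13        # yearly growth of the unique-client pool (Figure 3A)
satisfaction_gap = 0.15   # new clients score ~15% lower than veterans (Figure 2B)

# After a year of growth, new clients make up growth/(1 + growth) of the pool.
new_client_share = growth_rate / (1 + growth_rate)   # ~0.115
expected_drop = new_client_share * satisfaction_gap  # ~0.017
print(f"unbiased inflow would cut mean satisfaction by ~{expected_drop:.1%}/year")
# -> ~1.7% per year, i.e., the "about 2%" cited in the text
```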
This study was conducted at a particularly interesting time in the history of communication technologies, just when the usage of online communication and mobile devices became widespread. It is therefore possible that the findings we present here are historically contingent. That is, there could be a novelty effect, resulting from the transition from a phone/paper-based request system to an online interface, that would have faded had we continued data collection for another 5 or more years. However, as we did collect data for a 5-year time period, which far exceeds that of most experimental timelines, we do not believe that our observed results are due to novelty or the particulars of the online interface, but rather to an improved steady-state relationship between service quality and user information. It is also possible that if our approach were so widely adopted that users were saturated with such coupled, socially informed interfaces, our feedback dashboard would fail to elicit the level (and quality) of sustained user (or service provider) response. However, if presenting short-term trends in client satisfaction is indeed the cause of the improved validity of information for both users and service providers, then there is little reason to expect the surrounding ecology of information to strongly bias the underlying relationships. That is, better-quality (and more) information leading to better responsiveness in one service could be sustained without affecting usage or information feedback dynamics across services, especially in the case of monopolistic public services. This assumption rests on the notion that the ease of use and perceived usefulness of the interface remain constant even in a changing information technology landscape (Davis, Bagozzi, & Warshaw, 1989).

The ecological validity of this field study allows us to cautiously propose some general implications. Other than presenting trends, the manner in which we published voluntary client scores on service request forms is fairly similar to distributed platforms for client rating such as Yelp and Amazon, which impose external ratings on many arbitrary services, as we did.
It is well established that the dismal proportion of service clients who choose to post feedback on such platforms is unlikely to be representative (Gao, Greenwood, Agarwal, & Jeffrey, 2015; Zervas, Proserpio, & Byers, 2015). Our study adds to this by suggesting that the herding effect may impose dynamic biases on indicators of service provision. Such effects should be taken into account when attempting to increase the client base or feedback levels, which might slowly bias the sampling of clients with different attitudes toward the service. Long-term dynamical biases in sampling should particularly concern public services, where incremental improvements over years can be guided by client feedback.

Despite the lack of persistent positive trends, deploying a transparent feedback system for university services resulted in positive outcomes: an improvement in client satisfaction with experience, an increase in participation rate, and persistently high feedback levels. Therefore, under certain conditions, designing a communication system that presents trends of client satisfaction with service outcomes may result in improved client satisfaction with monopolistic public service provision through learning. Future work studying how the design of communication ecosystems can affect the evolution of client engagement and satisfaction could become useful if the two effects we describe here can be separated. Achieving high public participation in reviewing service outcomes can potentially allow distributed management of public services. Such systems can be tested on large scales. We hope that this preliminary study will facilitate efforts for embedding governance within Internet-based social dynamics. Such systems could make it possible to create communication ecologies that enable real-time (and bottom-up) social learning.

Authors' Note

The data reported in the paper are archived at the CCNY Science Division server.

Acknowledgments

We thank CCNY Science Division deans M. Gunner and R. Stark, CCNY Vice President R. Santos, CCNY Physical Plant Services managers R. Slowski and G. Miller, and K. Woods for their continuous support and cooperation. We thank L. Parra and D. Baldassarri for commenting on the manuscript.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research and/or authorship of this article: Financial support for this study was provided by the John D. and Catherine T. MacArthur Foundation as part of its support of the Research Network on Connected Learning to D. Conley.

References

Adato, M., Carter, M. R., & May, J. (2006). Exploring poverty traps and social exclusion in South Africa using qualitative and quantitative data. Journal of Development Studies, 42, 226-247.

Baldassarri, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority promote cooperation in humans. Proceedings of the National Academy of Sciences of the United States of America, 108, 11023-11027.

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven, CT: Yale University Press.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982-1003.

Gao, G., Greenwood, B. N., Agarwal, R., & Jeffrey, S. (2015). Vocal minority and silent majority: How do online ratings reflect population perceptions of quality? MIS Quarterly, 39, 565-589. doi:10.2139/ssrn.2629837

Gerson, R. (1993). Measuring customer satisfaction. Crisp Learning. Retrieved from http://dl.acm.org/citation.cfm?id=1408233

Groves, R. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.

Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243-1248. doi:10.1126/science.162.3859.1243

Helbing, D., Farkas, I., & Vicsek, T. (2000). Simulating dynamical features of escape panic. Nature, 407, 487-490.

Holmes, T. J., Levine, D. K., & Schmitz, J. A. (2012). Monopoly and the incentive to innovate when adoption involves switchover disruptions. American Economic Journal: Microeconomics, 4, 1-33.

Hu, N., Zhang, J., & Pavlou, P. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM. Retrieved from http://dl.acm.org/citation.cfm?id=1562800
Jaeger, P. T., & Thompson, K. M. (2003). E-government around the world: Lessons, challenges, and future directions. Government Information Quarterly, 20, 389-394.

Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68, 94-101.

Kaufmann, D., Kraay, A., & Mastruzzi, M. (2004). Governance matters III: Governance indicators for 1996, 1998, 2000, and 2002. The World Bank Economic Review, 18, 253-287.

Lambert, D., & Harrington, T. (1990). Measuring nonresponse bias in customer service mail surveys. Journal of Business Logistics, 11, 5-26.

Lee, N. (2014). Facebook nation. New York, NY: Springer.

Marwell, G., & Ames, R. (1979). Experiments on the provision of public goods. I. Resources, interest, group size, and the free-rider problem. American Journal of Sociology, 84, 1335-1360.

Miller, N., Resnick, P., & Zeckhauser, R. (2002). Eliciting honest feedback in electronic markets (Working paper). Cambridge, MA: Harvard Kennedy School.

Moe, W., & Schweidel, D. A. (2011). Online product opinions: Incidence, evaluation, and evolution. Marketing Science, 31, 372-386.

Moe, W., & Trusov, M. (2011). The value of social dynamics in online product ratings forums. Journal of Marketing Research, 48, 444-456.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560-1563.

Pavett, C. M. (1983). Evaluation of the impact of feedback on performance and motivation. Human Relations, 36, 641-654.

Platteau, J.-P. (2000). Institutions, social norms, and economic development. Amsterdam, The Netherlands: Harwood Academic.

Porumbescu, G. A. (2015). Does transparency improve citizens' perceptions of government performance? Evidence from Seoul, South Korea. Administration & Society. doi:10.1177/0095399715593314

Racherla, P., & Friske, W. (2012). Perceived "usefulness" of online consumer reviews: An exploratory investigation across three services categories. Electronic Commerce Research and Applications, 11, 548-559.

Rogers, E. (1962). Diffusion of innovations. New York, NY: Free Press of Glencoe.

Stipak, B. (1979). Are there sensible ways to analyze and use subjective indicators of urban service quality? Social Indicators Research, 6, 421-438.

Vedung, E. (1997). Public policy and program evaluation. New Brunswick, NJ: Transaction Publishers.

Vigoda, E. (2000). Are you being served? The responsiveness of public administration to citizens' demands: An empirical examination in Israel. Public Administration, 78, 165-191.

Xiao, X., Zhang, Q., Shi, Y., & Gao, Y. (2012). How much to share: A repeated game model for peer-to-peer streaming under service differentiation incentives. IEEE Transactions on Parallel and Distributed Systems, 23, 288-295.

Zervas, G., Proserpio, D., & Byers, J. (2015, January 28). A first look at online reputation on Airbnb, where every stay is above average. Retrieved from https://ssrn.com/abstract=2554500

Author Biographies

Ofer Tchernichovski, PhD, is a neuroscience professor at Hunter College and the CUNY Graduate Center. His research focuses on vocal learning and cultural evolution in songbirds and humans.

Marissa King, PhD, is a professor of organizational behavior at the Yale School of Management. Her research examines network-based learning processes.

Peter Brinkmann, PhD, was a college professor in mathematics and is now a software engineer at Google.

Xanadu Halkias, PhD, is a registered patent agent at Baker Botts, LLP. Previously, she was an adjunct professor and a researcher at the Université du Sud, Toulon, France. Her research focuses on advanced signal processing, deep artificial intelligence, and machine learning.

Daniel Fimiarz is employed at the City College of New York/CUNY in the capacity of a core facilities manager. He uses his computer science background to improve divisional IT infrastructure.

Laurent Mars, PhD, is an associate dean in the Science Division at the City College of the City University of New York. He teaches general chemistry courses.

Dalton Conley, PhD, is Henry Putnam University Professor of Sociology at Princeton University, a research associate at the National Bureau of Economic Research, and an adjunct professor of community medicine at Mount Sinai School of Medicine. His research focuses on genetic and social transmission across generations.

Tradeoff Between Distributed Social Learning and Herding Effect in Online Rating Systems: Evidence From a Real-World Intervention

Loading next page...
 
/lp/sage/tradeoff-between-distributed-social-learning-and-herding-effect-in-mCvQPztY19

References (40)

Publisher
SAGE
Copyright
Copyright © 2022 by SAGE Publications Inc, unless otherwise noted. Manuscript content on this site is licensed under Creative Commons Licenses.
ISSN
2158-2440
eISSN
2158-2440
DOI
10.1177/2158244017691078
Publisher site
See Article on Publisher Site

Abstract

We investigated how social diffusion increased client participation in an online rating system and, in turn, how this herding effect may affect the metrics of client feedback over the course of years. In a field study, we set up a transparent feedback system for university services: During the process of making service requests, clients were presented with short-term trends of client satisfaction with relevant service outcomes. Deploying this feedback system initially increased satisfaction moderately. Thereafter, mean satisfaction levels remained stable between 50% and 60%. Interestingly, at the individual client level, satisfaction increased significantly with experience despite the lack of any global trend across all users. These conflicting results can be explained at the social network level: If satisfied clients attracted new clients with more negative attitudes (a herding effect), then the net increase in service clients may dampen changes in global trends at the individual level. Three observations support this hypothesis: first, the number of service clients providing feedback increased monotonically over time. Second, spatial analysis of service requests showed a pattern of expansion from floor to floor. Finally, satisfaction increased over iterations only in clients who scored below average. Keywords social learning, herding effect, rating systems, online reviews, social feedback known about how individual satisfaction changes over time Introduction and how social networks affect participation, and thus free Feedback has an important role in a wide range of social phe- ridership (Marwell & Ames, 1979)—especially in monopo- nomena, from the evolution of cooperation (Nowak, 2006) to listic service provision settings such as found in the public sustainable economic development (Platteau, 2000) and, con- sector (Vedung, 1997). versely, poverty traps (Adato, Carter, & May, 2006). In social Traditionally, feedback systems were deployed to facili- organizations, the issue of engendering a feedback loop is a tate learning at the institutional level, where the role of the central conundrum: Many institutions attempt to get the pub- clients is merely to provide information (Gerson, 1993). In lic more involved in improving their services by developing recent years, however, several businesses (e.g., Google, systems for eliciting feedback and evaluations from their cli- Yelp, SeeClickFix) and nonprofits (e.g., FixMyStreet.org) ents (Miller, Resnick, & Zeckhauser, 2002; Pavett, 1983; Vigoda, 2000). However, creating a sustained feedback loop between users and public service providers is difficult (Jaeger City College, The City University of New York, NY, USA Hunter College, The City University of New York, NY, USA & Thompson, 2003). The public sector adds an additional Yale University, New Haven, CT, USA obstacle: The monopoly nature of service provision. In Google Inc., New York, NY, USA e-commerce sites, the provider has strong market incentives 5 University of the South, Toulon-Var, La Garde, France to address the concerns of online reviews lest business go Princeton University, NJ, USA elsewhere. In contrast, below we describe a system where Corresponding Author: both the provider and the consumer of the service are beholden Ofer Tchernichovski, Hunter College, The City University of New York, to each other through monopolistic (and monopsonstic) 695 Park Ave, New York, NY 10065, USA. dynamics (Holmes, Levine, & Schmitz, 2012). 
Little is Email: tchernichovski@gmail.com Creative Commons CC-BY: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). 2 SAGE Open Figure 1. Transparent reporting system for services requests: (A) A dashboard displaying trends of mean monthly client satisfaction with service outcome over 6 months was attached to service request forms. Display is dynamically adjusted according to the service type selection of the user to facilitate comparisons. (B) Feedback request to the client were triggered blindly after a fixed, 1-week delay (the delay is our upper bound estimate of the service completion time). Client evaluation statistics fed back automatically to the same dashboard with moving average of monthly trends over 6 months (data integration window). have been experimenting with two-way communications distribution of interest within the group, and the distribution systems (Lee, 2014), where citizens can follow up on their of resources (Marwell & Ames, 1979). We do not know to requests and post comments about service outcome. what extent these factors might also apply in the case of Consequently, public opinion about service quality is both monopolistic public service provision, potentially affecting mirrored and affected by highly distributed rating systems, client likelihood of submitting feedback (Hu, Zhang, & whose statistical and dynamic features are poorly under- Pavlou, 2009; Moe & Trusov, 2011). Feedback level might stood. For example, recent data suggest that the increased also depend on the expectation of reward (the likelihood of a usage of government websites may negatively influences satisfactory service outcome), which may change with poli- citizens’ satisfaction with public service provision despite cies and/or with time. (or perhaps because) the increase in accessibility and trans- Taken together, complexities in the temporal dynamics of parency (Porumbescu, 2015). client expectations versus experience, combined with the Indicators of service provision, such as mean satisfaction effect social diffusion, may impose long-term biases in indi- levels, might be affected by sampling biases at two levels cators of service provision. Slow accumulation of biases can (Kaufmann, Kraay, & Mastruzzi, 2004; Stipak, 1979): client make it difficult to detect causality of observed changes, or decision to request a public service (participation level) and alternatively, it might mask real changes in public service client decision to rate the service outcome (feedback level). quality. This study is a preliminary attempt to disentangle Participation level may bias service provision indicators in a some of those factors, by analyzing changes in participation complex manner. For example, people who are less likely to level, feedback level, and satisfaction level over years. participate might have a more negative attitude toward pub- The data used for this study were obtained from an admin- lic services. 
A major factor in participation level is the herd- istrative initiative at The City College of New York (CCNY) ing effect (Hardin, 1968; Helbing, Farkas, & Vicsek, 2000), Science Division, which aimed at resolving long-lasting where social diffusion may gradually change public opinion problems with maintenance services provided to research and either facilitate or decrease participation levels, poten- labs. The Science Division deployed a web application, tially leading to sampling biases over time. That is, early on, which was developed to facilitate service request and to people with more positive attitudes toward the service might allow faculty and staff of research labs to request services be more likely to try the service and to send positive feed- directly. Current clients received feedback statistics about back, while the herding effect may attract more reluctant cus- service outcome from recent users via a dashboard attached tomers who might send more critical evaluations. to service request forms (Figure 1A). In an attempt to reduce A second source of potential bias is feedback level, as cli- provider and client triage across service types (i.e., discour- ent tendency to send feedback may change with experience. agement), the system presented client satisfaction as short- In the classic “tragedy of the commons” scenario (Hardin, term trends rather than cumulative scores. This way, even a 1968), a common grazing area is overused and eventually small change in mean client satisfaction could become destroyed by the farmers (free riders) who only care about immediately apparent in the dashboard display as a positive their individual gains. Empirically, the principal factors that or a negative trend in client satisfaction (Figure 1, bar affect the level of free ridership are group size, the graphs). Our administration hoped that this would improve Tchernichovski et al. 3 services without centralized sanctioning (Baldassarri & provider, but we had no access to their internal operations, Grossman, 2011), by synchronizing social learning across hence it is an “external,” third-party system. workers and clients of monopolistic university services. We analyzed data collected via this system over 4 years to Client Access Control evaluate how slow changes in participation and feedback The site was available online without restrictions (no login) level may affect measures of client satisfaction level over from any device connected (wired or wirelessly) via the col- time. We then performed spatial analysis to examine if lege division network, which was available via WIFI to all changes in participation level may display any spatial struc- occupants of the Science building. ture, which could indicate a herding effect (Benkler, 2006; Helbing et al., 2000). Finally, at the individual client level, we analyzed changes with satisfaction level over iterations Requesting User Evaluations (Feedback) (experience), to assess if social learning (across clients and The system relied entirely on requesting user evaluations, and providers of services through the feedback system) might implemented externally. Feedback was requested blindly have influenced satisfaction with service outcome. Our from each user (Figure 1B) via email after a week delay. This results suggest that herding effect might have imposed sam- approach allowed, at the cost of uncertainty, to arbitrarily pling biases, resulting in opposing effects on individual and attach our reporting system to any relevant university service. 
population-level performance indicators of service provi- The feedback request email included a link to a simple form sion. We propose simple measures for reducing such biases, where user scored the outcome as either satisfied, partially and potentially improve the reliability of similar information satisfied, or unsatisfied/nothing happened. The feedback form systems. did not present clients with any information (no dashboard). In sum, the problem that our field study attempted to address is how to improve public services by enabling social learning, online, across and among clients and providers of Dashboard Presentation of Satisfaction With services. The approach was to set a voluntary service rating Service Outcome system and present client feedback statistics as short-term Trends in mean client satisfaction were presented in the trends (as opposed to cumulative star rating). We tested for dashboard for each service type in monthly bins (Figure 1A). slow progressive changes in client satisfaction with service %satisfaction was calculated as the proportion of reports that outcome, focusing on interactions between changes in par- received a “satisfied” score, presenting six monthly bins, ticipation levels (herding effect) and our estimates of changes with the first bin representing accumulating data from the in client satisfaction over years. current month. Materials and Methods Workflow Analysis of data obtained by this administrative initiative has During the baseline period (first 9 months of 2006), the Web been approved by the CCNY Institutional Review Board application for university services was deployed without pre- (IRB). All institutional data were deidentified and then ana- senting the dashboard. Service requests were automatically lyzed according to the CCNY IRB regulations. emailed to the appropriate service providers. Client feedback about service outcome was requested only sparsely (n = 33) to obtain baseline information about client satisfaction with System Design service outcome. After 9 months, the feedback system was A transparent reporting system was developed between activated without otherwise changing the user interface. The 2005-2006 by faculty volunteers to address persistent prob- system requested feedback from each client 1 week after each lems in resolving maintenance issues in about 100 research service request. A dashboard was attached to the service laboratories and several core facilities in the CCNY Science request forms with bar graphs presenting mean values and Division. The Web Application includes forms designed for trends of monthly satisfaction with service outcome (Figure requesting services for problems of various categories, 1). Those graphs were visible to all users as well as to service including electrical, plumbing, HVAC (heating and cooling), management and workers. Service events that received carpentry, building integrity, pest control, custodial, rest- “unsatisfied” score were automatically emailed to the service rooms, and so on. The system replaced a facilitator, whose providers without any further follow-up. This mechanism job was to receive reports from building occupants and sub- remained intact without significant modifications for 4 years. mit work orders when necessary. Instead, it enabled direct communication with service providers via online forms Data Analysis and Statistics accessible via any web browser without restrictions to all the occupants of the division building. 
Each form submission Users scored the service outcome as either satisfied, partially triggered an automated email to the appropriate service satisfied, or unsatisfied. We pooled the “partially satisfied” 4 SAGE Open and “unsatisfied” scores into a single category to obtain a binary measure for each period, namely, the percentage of fully satisfied clients. Similarly, in the figures we present %satisfaction as the proportion of reports (or clients) that scored service outcome as fully satisfied. Data were ana- lyzed using Matlab 8. The Matlab Statistics package was used to calculate Pearson correlation coefficients and p val- ues. Spatial analysis was performed by computing a matrix with the number of unique users per quarter per floor. We then normalized each column to represent proportions, and smoothed the matrix using a 2 × 2 Hun filter. For analysis of the effect of repeated requests at the individual level, we con- sidered the first 14 reports (submitted per individual), for which we had sufficient sample size of 53 users. We calcu- lated Pearson correlation coefficients and p values over those first 14 requests per subject. Programming The web application was programmed using HTML, PHP, and MySQL. Feedback was managed via custom C++ appli- cation using Embarcadero RAD Studio. This way, dashboard images of graphs were automatically updated when the web page was open or refreshed. Results Presenting a dashboard with short-term trends of satisfaction with service outcome (Figure 1A) resulted in a moderate increase in the quarterly satisfaction rate with service out- Figure 2. Trends in client satisfaction with service outcome: (A) comes. Satisfaction rate increased from 33% during the 6 Time course of mean quarterly client satisfaction with service months baseline period (n = 33 subjects) to 56.5 ± 1.7% outcome. Each bar represents the proportion of fully satisfied (Figure 2A, means and SEM across quarters hereafter, clients per quarter. During the baseline period (red bar), feedback Pearson’s χ = 4.1, p = .04). After the initial increase, client was requested sparsely, and our system operated without satisfaction levels remained stable with no apparent trend displaying trends of client satisfaction. (B) Per client satisfaction over time (Figure 2A, R = .02, ns). However, analysis at the levels as a function of the chronological order of service requests. Each marker represents the proportion of fully satisfied clients individual client level (pooled across individuals) showed per category (first request, second request, etc.). (C) Quarterly that satisfaction increased linearly with the chronological rates of client feedback. order of service requests (Figure 2B, Pearson correlation: R = .63, p = .0007). Limiting the analysis to clients who sub- mitted at least 14 reports (our upper bound for analysis, see diffusion, from floor to floor of the Science building over the methods), to account for attrition (n = 53 subjects) we still course of the study (Figure 3B), suggesting that offline local identify a significant positive trend (Pearson correlation: R communication might have played a role in the increased = .48, p = .006), indicating increased satisfaction over itera- participation (Rogers, 1962). tions. 
Client feedback levels remain stable at 43 ± 1.3% Finally, we analyzed satisfaction over time at the individ- throughout the study, with no apparent trend (Figure 2C), ual client level to test for heterogeneity across client pools excluding a scenario of changes in mean satisfaction through with different attitude toward the university services. attrition. Dividing clients into those whose mean score was above the Although the number of Science Division members was pooled average (positive attitude) versus those below the approximately the same during experiential period, we pooled average (negative attitude), we found a strong hetero- observed a persistent increase in the number of unique ser- geneity of responses: the positive trend was driven by clients vice clients per quarter, at a rate of about 13% per year who scored below average (Figure 3C) with a remarkable (Figure 3A, Pearson correlation: R = .43, p = .006). increase from about 20% to 60% satisfaction within that Examining the role of social diffusion in facilitating partici- group. In contrast, no significant trends were observed in cli- pation through a spatial analysis, we observed a slow spatial ents who scored above average (Figure 3D). Tchernichovski et al. 5 dampen the global trend. For example, as service client pool increased at a rate of about 13% per year (Figure 3A), and as satisfaction rate were about 15% lower in new clients (Figure 2B), if participation was unbiased, then the flow of new cus- tomers should have reduced the mean annual satisfaction rate by about 2%. However, our data suggest that participation was biased: Spatial analysis of service requests showed a pattern of expansion from floor to floor (Figure 3B), suggest- ing that clients with positive attitude herded clients with a more negative attitude toward university services. In clients with negative attitude (Figure 3C-D), satisfaction increased much more strongly with experience, at about 30% to 40%. Therefore, herding effect could have reduced satisfaction by 4% to 5% given the flow rate. Note that the overall increase in individual clients satisfaction rate over 4 years was about 20% (about 5% per year), comparable to the putative damp- ening effect we propose. Overall, our results suggest that slow but persistent improvement in satisfaction for existing users and social contagion in utilization of the information system might have brought clients with less positive attitude toward the services into the information ecology, hence masking the increase in client satisfaction. We found it interesting that client feedback levels remained high (between 40% and 50%) over the entire study (Figure 2C). Feedback was requested only once, a week after each report, and no additional measures were taken to encourage clients to send feedback. It is difficult to compare those rates to existing data because institutions rarely report the percentage of clients who responded to automated feed- Figure 3. Analysis of participation and satisfaction levels: (A) back requests. However, typically, feedback levels are lower Number of unique clients per quarter. (B) Smoothed map of the than 20% and nonresponse bias is significant (Groves, 2006; spatial distribution of reports across the floors of the CCNY Science Division building. Intensity map represents proportions Lambert & Harrington, 1990). Optimization of survey meth- of individuals submitting service requests across building space. 
ods can increase client response rate to some extent Floors order is transformed to ensure that the spatial data remain (Kaplowitz, Hadlock, & Levine, 2004). Beyond that, in a unidentifiable. (C) Per client satisfaction levels as a function of repeated game (Xiao, Zhang, Shi, & Gao, 2012) client moti- the chronological order of service requests for clients who score vation to share information may depend on their expectation below average, pooled across clients as in Figure 2B. (D) Same as of reward, for example, in their perception of feedback use- (C) for clients who score above average. Note. CCNY = The City College of New York. fulness (Racherla & Friske, 2012), or by deviations from prior expectation of reward (Moe & Schweidel, 2011). Perhaps presenting information about client satisfaction at Discussion the time of service request, had affected client expectation of Our results suggest that the effects of launching transparent reward and contributed to stability in feedback rate. Testing feedback systems on monopolistic service provisions might this hypothesis would require randomized control studies, be complex, and simple cumulative performance estimates which were not feasible in this administrative study. have limited bearing on capturing it. In our case, even though Our analysis has many limitations. The most serious one we did not observe any global trends in client satisfaction is that without randomized controls, we could not test if the over 4 years (Figure 2A), a detailed analysis detected an presentation of short-term trends on the dashboard was increased satisfaction with experience at the individual client beneficial or not. However, we can certainly conclude that level (Figure 2B). We suspect that this effect was masked by the trends presented to the clients in real time did not fully a secondary herding effect (Benkler, 2006; Helbing et al., capture the long-term improvement in client satisfaction, 2000), where satisfied clients attracted more reluctant ones, which we observed only a posteriori at the individual client who at least initially submitted lower satisfaction scores, level. In retrospect, presenting trends of satisfaction as a dampening the global trend. Note that even if the expansion function of client experience with the service (pooled in client pool were a random effect, the positive correlation across individuals as in Figure 2B) rather than monthly between iterations (experience) and satisfaction alone would trends could have reduce biases, and could have provided a 6 SAGE Open better chance to synchronized learning across clients and with experience, an increase in participation rate, and persis- providers of services. tently high feedback levels. Therefore, under certain condi- This study was conducted at a particularly interesting tions, designing a communication system that presents trends time in the history of communication technologies, just when of client satisfaction with service outcomes may result in the usage of online communication and mobile devices improved client satisfaction with monopolistic public service became widespread. It is therefore possible that the findings provision through learning. Future work studying how the we present here are historically contingent. 
This study was conducted at a particularly interesting time in the history of communication technologies, just when the usage of online communication and mobile devices became widespread. It is therefore possible that the findings we present here are historically contingent. That is, there could be a novelty effect resulting from the transition from a phone/paper-based request system to an online interface, an effect that would have faded had we continued data collection for another 5 or more years. However, as we did collect data over a 5-year period that far exceeds most experimental timelines, we do not believe that our observed results are due to novelty or to the particulars of the online interface, but rather to an improved steady-state relationship between service quality and user information. It is also possible that, if our approach were adopted so widely that users became saturated with such coupled, socially informed interfaces, our feedback dashboard would fail to elicit the same level (and quality) of sustained user (or service provider) response. However, if presenting short-term trends in client satisfaction is indeed the cause of the improved validity of information for both users and service providers, then there is little reason to expect the surrounding ecology of information to strongly bias the underlying relationships. That is, better quality (and more) information leading to better responsiveness in one service could be sustained without affecting usage or information feedback dynamics across services, especially in the case of monopolistic public services. This assumption rests on the notion that the ease of use and perceived usefulness of the interface remain constant even in a changing information technology landscape (Davis, Bagozzi, & Warshaw, 1989).

The ecological validity of this field study allows us to cautiously propose some general implications. Other than presenting trends, the manner in which we published voluntary client scores on service request forms is fairly similar to distributed platforms for client ratings, such as Yelp and Amazon, which impose external ratings on many arbitrary services, as we did.
It is well established that the dismal proportion of service clients who choose to post feedback on such platforms is unlikely to be representative (Gao, Greenwood, Agarwal, & Jeffrey, 2015; Zervas, Proserpio, & Byers, 2015). Our study adds to this literature by suggesting that herding effects may impose dynamic biases on indicators of service provision. Such effects should be taken into account when attempting to increase the client base or feedback levels, which might slowly bias the sampling of clients with different attitudes toward the service. Long-term dynamic biases in sampling should particularly concern public services, where incremental improvements over years can be guided by client feedback.

Despite the lack of persistent positive global trends, deploying a transparent feedback system for university services resulted in positive outcomes: an improvement in client satisfaction with experience, an increase in participation rate, and persistently high feedback levels. Therefore, under certain conditions, designing a communication system that presents trends of client satisfaction with service outcomes may improve client satisfaction with monopolistic public service provision through learning. Future work studying how the design of communication ecosystems can affect the evolution of client engagement and satisfaction could become useful if the two effects we describe here can be separated. Achieving high public participation in reviewing service outcomes can potentially allow distributed management of public services. Such systems can be tested on large scales. We hope that this preliminary study will facilitate efforts to embed governance within Internet-based social dynamics. Such systems could make it possible to create communication ecologies that enable real-time (and bottom-up) social learning.

Authors' Note
The data reported in the paper are archived at the CCNY Science Division server.

Acknowledgments
We thank CCNY Science Division deans M. Gunner and R. Stark, CCNY Vice President R. Santos, CCNY Physical Plant Services managers R. Slowski and G. Miller, and K. Woods for their continuous support and cooperation. We thank L. Parra and D. Baldassarri for commenting on the manuscript.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research and/or authorship of this article: Financial support for this study was provided by the John D. and Catherine T. MacArthur Foundation as part of its support of the Research Network on Connected Learning to D. Conley.

References
Adato, M., Carter, M. R., & May, J. (2006). Exploring poverty traps and social exclusion in South Africa using qualitative and quantitative data. Journal of Development Studies, 42, 226-247.
Baldassarri, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority promote cooperation in humans. Proceedings of the National Academy of Sciences of the United States of America, 108, 11023-11027.
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven, CT: Yale University Press.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982-1003.
Gao, G., Greenwood, B. N., Agarwal, R., & Jeffrey, S. (2015). Vocal minority and silent majority: How do online ratings reflect population perceptions of quality? MIS Quarterly, 39, 565-589. doi:10.2139/ssrn.2629837
Gerson, R. (1993). Measuring customer satisfaction. Crisp Learning. Retrieved from http://dl.acm.org/citation.cfm?id=1408233
Groves, R. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243-1248. doi:10.1126/science.162.3859.1243
Helbing, D., Farkas, I., & Vicsek, T. (2000). Simulating dynamical features of escape panic. Nature, 407, 487-490.
Holmes, T. J., Levine, D. K., & Schmitz, J. A. (2012). Monopoly and the incentive to innovate when adoption involves switchover disruptions. American Economic Journal: Microeconomics, 4, 1-33.
Hu, N., Zhang, J., & Pavlou, P. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM. Retrieved from http://dl.acm.org/citation.cfm?id=1562800
Jaeger, P. T., & Thompson, K. M. (2003). E-government around the world: Lessons, challenges, and future directions. Government Information Quarterly, 20, 389-394.
Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68, 94-101.
Kaufmann, D., Kraay, A., & Mastruzzi, M. (2004). Governance matters III: Governance indicators for 1996, 1998, 2000, and 2002. The World Bank Economic Review, 18, 253-287.
Lambert, D., & Harrington, T. (1990). Measuring nonresponse bias in customer service mail surveys. Journal of Business Logistics, 11, 5-26.
Lee, N. (2014). Facebook nation. New York, NY: Springer.
Marwell, G., & Ames, R. (1979). Experiments on the provision of public goods. I. Resources, interest, group size, and the free-rider problem. American Journal of Sociology, 84, 1335-1360.
Miller, N., Resnick, P., & Zeckhauser, R. (2002). Eliciting honest feedback in electronic markets (Working paper). Cambridge, MA: Harvard Kennedy School.
Moe, W., & Schweidel, D. A. (2011). Online product opinions: Incidence, evaluation, and evolution. Marketing Science, 31, 372-386.
Moe, W., & Trusov, M. (2011). The value of social dynamics in online product ratings forums. Journal of Marketing Research, 48, 444-456.
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560-1563.
Pavett, C. M. (1983). Evaluation of the impact of feedback on performance and motivation. Human Relations, 36, 641-654.
Platteau, J.-P. (2000). Institutions, social norms, and economic development. Amsterdam, The Netherlands: Harwood Academic.
Porumbescu, G. A. (2015). Does transparency improve citizens' perceptions of government performance? Evidence from Seoul, South Korea. Administration & Society. doi:10.1177/0095399715593314
Racherla, P., & Friske, W. (2012). Perceived "usefulness" of online consumer reviews: An exploratory investigation across three services categories. Electronic Commerce Research and Applications, 11, 548-559.
Rogers, E. (1962). Diffusion of innovations. New York, NY: Free Press of Glencoe.
Stipak, B. (1979). Are there sensible ways to analyze and use subjective indicators of urban service quality? Social Indicators Research, 6, 421-438.
Vedung, E. (1997). Public policy and program evaluation. New Brunswick, NJ: Transaction Publishers. doi:10.2307/2667008
Vigoda, E. (2000). Are you being served? The responsiveness of public administration to citizens' demands: An empirical examination in Israel. Public Administration, 78, 165-191.
Xiao, X., Zhang, Q., Shi, Y., & Gao, Y. (2012). How much to share: A repeated game model for peer-to-peer streaming under service differentiation incentives. IEEE Transactions on Parallel and Distributed Systems, 23, 288-295.
Zervas, G., Proserpio, D., & Byers, J. (2015, January 28). A first look at online reputation on Airbnb, where every stay is above average. Retrieved from https://ssrn.com/abstract=2554500

Author Biographies
Ofer Tchernichovski, PhD, is a professor of neuroscience at Hunter College and the CUNY Graduate Center. His research focuses on vocal learning and cultural evolution in songbirds and humans.

Marissa King, PhD, is a professor of Organizational Behavior at the Yale School of Management. Her research examines network-based learning processes.

Peter Brinkmann, PhD, was a college professor in mathematics and is now a software engineer at Google.

Xanadu Halkias, PhD, is a registered patent agent at Baker Botts, LLP. Previously, she was an adjunct professor and a researcher at the Université du Sud, Toulon, France. Her research focuses on advanced signal processing, deep artificial intelligence, and machine learning.

Daniel Fimiarz is a core facilities manager at the City College of New York/CUNY. He uses his computer science background to improve divisional IT infrastructure.

Laurent Mars, PhD, is an associate dean in the Science Division at the City College of the City University of New York. He teaches general chemistry courses.

Dalton Conley, PhD, is Henry Putnam University Professor of Sociology at Princeton University, a research associate at the National Bureau of Economic Research, and an adjunct professor of Community Medicine at Mount Sinai School of Medicine. His research focuses on genetic and social transmission across generations.
