Two Strongly Truthful Mechanisms for Three Heterogeneous Agents Answering One Question

GRANT SCHOENEBECK, University of Michigan
FANG-YI YU, George Mason University

Peer prediction mechanisms incentivize self-interested agents to truthfully report their signals even in the absence of verification, by comparing agents' reports with those of their peers. We propose two new mechanisms, Source and Target Differential Peer Prediction, and prove very strong guarantees for a very general setting.

Our Differential Peer Prediction mechanisms are strongly truthful: truth-telling is a strict Bayesian Nash equilibrium. Moreover, truth-telling pays strictly more than any other equilibrium, excluding permutation equilibria, which pay the same amount as truth-telling. The guarantees hold for asymmetric priors among agents, which the mechanisms need not know (detail-free), in the single-question setting. Moreover, they only require three agents, each of whom submits a single-item report: two report their signals (answers), and the other reports her forecast (a prediction of another agent's report). Our proof technique is straightforward, conceptually motivated, and turns on the logarithmic scoring rule's special properties.

Moreover, we can recast the Bayesian Truth Serum mechanism [20] into our framework. We can also extend our results to the setting of continuous signals, with a slightly weaker guarantee on the optimality of the truthful equilibrium.

CCS Concepts: • Information systems → Incentive schemes; • Theory of computation → Quality of equilibria; • Mathematics of computing → Information theory.

Additional Key Words and Phrases: Peer prediction, Log scoring rule, Prediction market

1 INTRODUCTION

Three friends, Alice, Bob, and Chloe, watch a political debate on television. We want to ask their opinions on who won the debate. We are afraid they may be less than truthful unless we can pay them for truthful answers.
Thus we seek to design mechanisms that reward the agents for truth-telling. Their opinions may systematically differ, but are nonetheless related. For example, it turns out Alice values description and argumentation, Bob values argumentation and presentation, and Chloe values description and presentation.

In this paper, we design two peer prediction mechanisms by asking Alice, Bob, and Chloe to play three characters: the expert who makes predictions, the target who is being predicted, and the source who helps predictions. The source and target are asked for their opinions. In the most straightforward setting, the expert makes two predictions (e.g., a 70% chance of "yes" and a 30% chance of "no") of the target's opinion: an initial prediction before the source's opinion is revealed and an improved prediction afterwards.

Our political debate motivation might be quixotic (at the very least we need to ensure that the friends do not communicate during the debate, so that we can elicit the prediction of the target's opinion both with and without the source's information). However, peer grading can easily fit into this paradigm: we might ask Bob and Carol to grade a paper, while Alice tries to predict Carol's mark for the paper before and after seeing Bob's mark. Similarly, Alice, Bob, and

Both authors contributed equally to this research.
Authors' addresses: Grant Schoenebeck, schoeneb@umich.edu, University of Michigan; Fang-Yi Yu, fyy2412@gmu.edu, George Mason University.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2022 Copyright held by the owner/author(s).
2167-8375/2022/11-ART
https://doi.org/10.1145/3565560
ACM Trans. Econ. Comput.
Carol might be peer-reviewing a paper, filling out a survey, or doing any crowdsourcing task (e.g., labeling data for machine learning applications). In such cases, it is natural to reward agents for doing a good job, and also to have them update a prediction with additional information.

For simplicity (and to collect a grade from all three agents), Alice, Bob, and Chloe might play all three characters: Alice could predict Chloe's signal before and after seeing Bob's; Bob could predict Alice's signal before and after seeing Chloe's; and Chloe could predict Bob's signal before and after seeing Alice's.

This problem is known in the literature as peer prediction or information elicitation without verification. In the single-question setting, agents are only asked one question. Incentivizing agents is important so that they not only participate, but provide thoughtful and accurate information. Our goal is to elicit truthful information from agents with minimal requirements.

Drawing from the previous peer prediction literature, we would like our mechanisms to have the following desirable properties:

Strongly Truthful [12] Providing truthful answers is a Bayesian Nash equilibrium (BNE) and also guarantees the maximum agents' welfare among all equilibria. This maximum is "strict" with the exception of a few unnatural permutation equilibria, where agents report according to a relabeling of the signals (defined more formally in Sect. 2). This will incentivize the agents to tell the truth, even if they believe the other agents will disagree with them. Moreover, they have no incentive to coordinate on an equilibrium where they do not report truthfully. In particular, note that playing a permutation equilibrium still requires as much effort from the agents as playing truth-telling.

General Signals The mechanism should work for heterogeneous agents who may even have continuous signals (with a weaker truthfulness guarantee). In our above example, the friends may not have the same political leanings, and the mechanism should be robust to that.
Furthermore, instead of a single winner, we may want to elicit the magnitude of their (perceived) victory.

Detail-Free The mechanism is not required to know the specifics about the different agents (e.g., the aforementioned joint prior). In the above example, the mechanism should not be required to know the a priori political leanings of the different agents.

Few Agents We would like our mechanisms to work using as few agents as possible; in our case, three.

Single-Item Reports We would like to make it easy for agents, so that they provide very little information: only one item, either their signal or a prediction. In our case, two agents will need to provide their signals (e.g., whom they believe won the debate). The remaining agent will need to provide a prediction on one outcome, a single real value (e.g., their forecast of how likely a particular other agent was to choose a particular candidate as the victor).

1.1 Our Contributions

• We define two Differential Peer Prediction mechanisms (Mechanisms 1 and 2) which are strongly truthful and detail-free for the single-question setting, and which only require a single-item report from three agents. Moreover, the agents need not be homogeneous, and their signals may be continuous.
• Mechanism 1 rewards the source for the improvement of the expert's prediction. We can use any strictly proper scoring rule (see Definition 2.2) to measure the improvement, and truth-telling is an equilibrium. Moreover, if we use the log scoring rule, truth-telling has the highest total payment among all equilibria.
• Mechanism 2, which rewards the target for the improvement of the expert's prediction, exploits special properties of the log scoring rule (see Techniques below for details), which may be of independent interest.

Here anonymity may be required to preserve privacy.
Kong and Schoenebeck [12] show that it is not possible for truth-telling to pay strictly more than permutation equilibria in detail-free mechanisms.
Here, the mechanism can be generalized by replacing the expert with a suitable predictor that predicts the target's report given information from a source (which could be the collection of many agents). We show how to recast the Bayesian Truth Serum mechanism into the framework of this mechanism (Sect. 4). This gives added intuition for its guarantees.
• We provide a simple, conceptually motivated proof for the guarantees of the Differential Peer Prediction mechanisms. Especially in contrast to the most closely related work ([10]), our proof is very simple.

1.2 Summary of Our Techniques

Target Incentive Mechanisms. Many of the mechanisms for the single question use what we call source incentives: they pay agents for reporting a signal that improves the prediction of another agent's signal. The original peer prediction mechanism [15] does exactly this. To apply this idea to the detail-free setting [31, 33], mechanisms take a two-step approach: they first elicit an agent's prediction of some target agent's report, and then measure how much that prediction improves given a report from a source agent.

In Section 3.2, we explicitly develop a technique, which we call target incentives, for rewarding certain agents for signal reports that agree with a prediction about them. We show that log scoring rules can elicit signals as well as forecasts, by paying the difference of the log scoring rule on the signal between an initial prediction and an improved prediction. This may be of independent interest, and is also the foundation for the results in Sections 3.2 and 4.

Information Monotonicity. We use
information monotonicity, a tool from information theory, to obtain strong truthfulness. Like the present paper, the core of the argument that the Disagreement Mechanism [10] is strongly truthful (for symmetric equilibria) is based on information monotonicity. However, because it is hard to characterize the equilibria in the Disagreement Mechanism, the analysis ends up being quite complex. A framework for deriving strongly truthful mechanisms from information monotonicity, which we implicitly employ, is distilled in Kong and Schoenebeck [12].

In Section 3, we use the above techniques to develop strongly truthful mechanisms, Source-Differential Peer Prediction and Target-Differential Peer Prediction, for the single-question setting. Source-Differential Peer Prediction is quite similar to the Knowledge-Free Peer Prediction Mechanism [33]; however, it is strongly truthful, which we show using the information monotonicity of the log scoring rule. Target-Differential Peer Prediction additionally uses the target incentive techniques above to show that it is strongly truthful.

1.3 Related Work

Single Task Setting. In this setting, each agent receives a single signal from a common prior. Miller et al. [15] introduce the first mechanism for single-task signal elicitation that has truth-telling as a strict Bayesian Nash equilibrium and does not need verification. However, their mechanism requires full knowledge of the common prior, and there exist some equilibria where agents get paid more than in truth-telling. At a high level, the agents can all simply submit the reports with the highest expected payment, and this will typically yield a payment much higher than that of truth-telling. Note that this is both natural to coordinate on (in fact, Gao et al. [6] found that in an online experiment, agents did exactly this) and does not require any effort toward the task from the agents. Kong et al. [9] modify the above mechanism such that truth-telling pays strictly better than any other equilibrium, but it still requires full knowledge of the common prior.
Prelec [20] designs the first detail-free peer prediction mechanism: Bayesian truth serum (BTS). Moreover, BTS is strongly truthful and can easily be made to have one-item reports. However, BTS requires an infinite number of participants, does not work for heterogeneous agents, and requires the signal space to be finite. The analysis, while rather short, is equally opaque. A key insight of this work is to ask agents not only about their own signals, but also for forecasts (predictions) of the other agents' reports.

A series of works [1, 22, 23, 31–33] relax the large population requirement of BTS but lose the strongly truthful property. Zhang and Chen [33] is unique among prior work in the single-question setting in that it works for heterogeneous agents, whereas other previous detail-free mechanisms require homogeneous agents with conditionally independent signals.

To obtain the strongly truthful property, Kong and Schoenebeck [10] introduce the Disagreement Mechanism, which is detail-free, strongly truthful (for symmetric equilibria), and works for six agents. Thus it generalizes BTS to the finite-agent setting while retaining strong truthfulness. However, it requires homogeneous agents, cannot handle continuous signals, and fundamentally requires that each agent report both a signal and a prediction. Moreover, its analysis is quite involved. However, it is within the BTS framework, in that it only asks for agents' signals and predictions, whereas our mechanism typically asks at least one agent for a prediction after seeing the signal of another agent.

Finally, most of these works either have multiple rounds [32, 33], or work only if the common prior is symmetric [1, 13, 20, 22, 31], though sometimes this can be relaxed to a restriction more like positive correlation [32]. Our mechanisms also have multiple rounds; however, we can simplify them to a single round, but this requires asking questions that may be slightly more complex than those in the BTS framework.
Prelec [21], posted subsequently to the conference publication of this work [25] but developed independently, uses very similar techniques to this work, combined with the setting explored in [32] where agents are asked questions before and after seeing their signals. Similar to our target DPP mechanism, the mechanisms in Prelec [21] are target incentive mechanisms and pay the target by the log scoring rule on different pairs of initial and improved predictions (e.g., one agent's predictions before and after getting her signal, which requires additional temporal coordination). On the other hand, with the above additional temporal coordination, those mechanisms can work with two agents, while our mechanism requires at least three agents for the setting we consider.

Surprisingly, and, with the exception of a footnote in Miller et al. [15], unmentioned by any of the above works, the idea of target incentive mechanisms with the log scoring rule can be dated back over 20 years to a (so far unpublished) working paper [19], which studies information pump games that also use the improvement of predictions on the log scoring rule to encourage truthful reports. In particular, that paper presents a special case of our main technical lemma (Lemma 3.4) that requires a slightly stronger assumption than our second order stochastic relevance (Definition 2.1). Besides the weaker assumption, our connection to information theory enables us to design strongly truthful mechanisms instead of merely truthful mechanisms.

Continuous Single Task Setting. Kong et al. [13] show how to generalize both BTS and the Disagreement Mechanism (with similar properties, including homogeneous agents) to a restricted continuous setting where signals are Gaussians related in a simple manner. The generalization of the Disagreement Mechanism requires the number of agents to increase with the dimension of the continuous space.

The aforementioned Radanovic and Faltings [23] considers continuous signals. However, it uses a discretization approach which yields exceedingly complex reports. Additionally, it requires homogeneous agents.
In a slightly different setting, Kong and Schoenebeck [11] study eliciting agents' forecasts for some (possibly unverifiable) event, which are continuous values between 0 and 1. However, here we are concerned with eliciting signals, which can be from a much richer space.

Multi-task Setting. In the multi-task setting, introduced in Dasgupta and Ghosh [5], agents are assigned a batch of a priori similar tasks, where each agent's private information is a binary signal. Several works extend this to multiple-choice questions [5, 8, 12, 24, 27]. Recently, a sequence of works has studied the robustness and limitations of the multi-task setting [3, 26, 34].

The multi-task mechanisms and our single-task mechanism each offer advantages. The key advantage of the multi-task mechanisms is that agents are only asked for their signal, and not a prediction. Multi-task mechanisms accomplish this by, implicitly or explicitly, learning some relation between the reports of different agents. However, because of this, multi-task mechanisms strongly depend on the assumption that both the joint distributions of signals on different questions are i.i.d. and that the agents apply the same (possibly random) strategy to each task in an i.i.d. manner. This assumption is not unreasonable in certain crowdsourcing, peer review, and peer grading settings, but is likely violated in a survey setting. In the setting of the present paper, no such assumption is needed, as the mechanism can be applied individually to each question or task.

Even in settings where the i.i.d. assumption holds, it may be the case that (in practice) agents receive information in addition to the elicited signal, so that the above learning approach fails. For example, an agent may like a paper, but believe it to be on a generally unpopular topic, and therefore conclude that the mechanism will incorrectly predict her rating.
This is because the relation between agents' reports is learned over all topics, and so may be incorrect when applied to the subset of papers on unpopular topics. In such a case, the strategic guarantees of the multi-task mechanisms may fail. Our mechanism mitigates this problem by having the agents themselves do the prediction; they also have access to the contextual information, which will naturally be incorporated into their predictions.

Another drawback of the multi-task setting, as its name suggests, is the number of questions required for each agent. Mechanisms tend to either make assumptions about the correlation between signals (e.g., [5]) or the structure must be learned (e.g., [24, 27]). In the latter case, the strategic guarantees are parameterized by an ε which only decreases asymptotically in the number of agents [24]. An exception to this is the DMI mechanism [8], but this still often requires a fairly large number of tasks to work at all and has additional restrictions. However, recent work [3] shows that the pairing mechanism [24], combined with proper machine learning, can work in settings with as few as four tasks per agent. In contrast, our mechanism only requires a single task.

2 PRELIMINARIES

2.1 Peer Prediction Mechanism

There are three characters, Alice, Bob, and Chloe, in our mechanisms. Alice (and respectively Bob, Chloe) has a privately observed signal a (respectively b, c) from a set A (respectively B, C). They all share a common belief that their signals (a, b, c) are generated from a random variable (A, B, C) which takes values from A × B × C with a probability measure P, called the common prior. P describes how agents' private signals relate to each other.

Agents are Bayesian. For instance, after Alice receives A = a, she updates her belief to the posterior P((B, C) = (·, ·) | A = a), which is a distribution over the remaining signals. We will instead write P_{B,C|A}(·, · | a) to simplify the notation. Similarly, Alice's posterior of Bob's signal is denoted by P_{B|A}(· | a), which is a distribution on B.
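As a concrete illustration of the Bayesian updating above, the posterior P_{B|A}(· | a) can be computed directly from a joint prior table. The prior below is a hypothetical example with binary signals, not one taken from the paper (a minimal Python sketch):

```python
# A hypothetical common prior over (a, b, c), the signals of Alice, Bob,
# and Chloe, given as a dict mapping (a, b, c) -> probability.
P = {
    (0, 0, 0): 0.25, (0, 0, 1): 0.05, (0, 1, 0): 0.05, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.05, (1, 1, 0): 0.05, (1, 1, 1): 0.35,
}

def posterior_on_bob(a):
    """Alice's posterior P_{B|A}(. | a) over Bob's signal after seeing A = a."""
    marginal_a = sum(p for (x, y, z), p in P.items() if x == a)
    return {b: sum(p for (x, y, z), p in P.items() if x == a and y == b)
                   / marginal_a
            for b in (0, 1)}

post = posterior_on_bob(0)  # a distribution on B, summing to 1
```

Here `posterior_on_bob(0)` marginalizes out Chloe's signal and conditions on Alice's, exactly the update described in the text.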
on � |� A peer prediction mechanism on Alice, Bob, and Chloe has three payment (� ,�functions ,� ). he mech- � � � anism irst collects reprorts := (� ,� ,� ) from agents. It pays Alice with � (r) (and Bob and Chloe anal- � � � � ogously). Alice’s strategy � is a (random) function from her signal to a report. All agents are rational and risk-neutral so they are only interested in maximizing their (expected) payment. hus, given a strategy proile � := (� ,� ,� ), Alice, for example, wants to maximize ex-ante herpaymentunder common prior� which is � � � � (� ;�) := E [� (r)].Letex-anteagents’welfaredenotethesumofex-antepaymenttoallagents, � (� ;�) + � �,� � � � (� ;�) + � (� ;�). A strategy proile � is aBayesian Nash equilibrium under common prior� if by changing � � the strategy unilaterally, an agent’s payment can only weakly decrease strict . It isBayaesian Nash equilibrium ifanagent’s paymentstrictly decreasesas her strategy changes. We want to design peer prediction mechanisms to “elicit” all agents to report their information truthfully without veriication. We say Alice�’s strategy istruthfulfor a mechanism M if Alice truthfully reports the ACMTrans. Econ. Comput. 6 • GrantSchoenebeck and Fang-YiYu informationrequestedbythemechanism. Wecallthestrategyproile � truth-tellingifeachagentreportstruth- fully. Moreover, we want to design detail-fremee chanisms which have no knowledge about the common prior � except agents’ (possible non-truthful) reports. However, agents can always relabel their signals and detail- freemechanismscannotdistinguishsuchastrategyproilefromthetruth-tellingstrategyproile.Wecallthese strategy proilespermutation strategy proiles . hey can be translated back to truth-telling reports by some per- mutations applied to each component A × Bof × C—that is, the agents report according to a relabeling of the signals. 
We now define some goals for our mechanisms that differ in how unique the high payoff of truth-telling is. We call a mechanism truthful if the truth-telling strategy profile τ is a strict Bayesian Nash equilibrium. However, in a truthful mechanism, non-truth-telling equilibria may yield a higher ex-ante payment for each agent. In this paper, we aim for strongly truthful mechanisms [12], which are not only truthful but also ensure that the ex-ante agents' welfare in the truth-telling strategy profile τ is strictly better than in all non-permutation equilibria. Note that in a symmetric game, this ensures that each agent's individual expected ex-ante payment is maximized by truth-telling, compared to any other symmetric equilibrium.

Now, we define the set of common priors that our detail-free mechanisms can work on. Note that peer reports are not useful when the agents' signals are independent of each other. Thus, a peer prediction mechanism needs to exploit some interdependence between agents' signals.

Definition 2.1 (Zhang and Chen [33]). A common prior P is ⟨A, B, C⟩-second order stochastic relevant if for any distinct signals b, b′ ∈ B, there is a ∈ A such that P_{C|A,B}(· | a, b) ≠ P_{C|A,B}(· | a, b′). Thus, when Alice with a is making a prediction of Chloe's signal, Bob's signal is relevant, in that his signal induces different predictions when B = b or B = b′. We call P second order stochastic relevant if the above statement holds for every permutation of {A, B, C}.

To avoid measure-theoretic concerns, we initially require that P has full support, and that the joint signal space A × B × C be finite. In Appendix G, we will show how to extend our results to general measurable spaces.

2.2 Proper Scoring Rules

Scoring rules are powerful tools for designing mechanisms that elicit predictions. Consider a finite set of possible outcomes Ω, e.g., Ω = {sunny, rainy}. An expert, Alice, first reports a distribution p ∈ P(Ω) as her prediction of the outcome, where P(Ω) denotes the set of all probability measures on Ω. Then, the mechanism and Alice observe the outcome ω. The mechanism gives Alice a score PS[ω, p].
Alice maximizes her expected score by reporting her true belief for the outcome (the probability of each possible outcome ω):

Definition 2.2. A scoring rule PS : Ω × P(Ω) → R is proper if for any distributions p, p̂ ∈ P(Ω) we have E_{ω∼p}[PS[ω, p]] ≥ E_{ω∼p}[PS[ω, p̂]]. A scoring rule PS is strictly proper when the equality holds only if p̂ = p.

Given any convex function f, one can define a proper scoring rule PS^f [12]. In this paper, we consider a special scoring rule called the logarithmic scoring rule [30], defined as

LSR[ω, p] := log(p(ω)),   (1)

where p : Ω → R is the probability density function of p. Another popular scoring rule is the Brier scoring rule (quadratic scoring rule) [2], defined as

QSR[ω, p] := 2 p(ω) − Σ_{ω′∈Ω} p(ω′)².   (2)

2.3 Information Theory

Peer prediction mechanisms and prediction markets incentivize agents to truthfully report their signals. One key idea these mechanisms use is that agents' signals are interdependent, and strategic manipulation can only dismantle this structure. Here we introduce several basic notions from information theory [4].

The KL divergence is a measure of the dissimilarity of two distributions: Let P and Q be probability measures on a finite set Ω with density functions p and q respectively. The KL divergence (also called relative entropy) from P to Q is D_KL(P ∥ Q) := Σ_{ω∈Ω} p(ω) log(p(ω)/q(ω)).

Here we do not define the notion of truthful reports formally, because it is intuitive in our mechanisms. For general settings, we can use query models to formalize it [29].
Our definition has some minor differences from Zhang and Chen [33]'s, for ease of exposition. For instance, they only require the statement to hold for one permutation of {A, B, C} instead of all permutations.
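The scoring rules and divergence above are short to implement. The sketch below (with an assumed two-outcome belief; all numbers are illustrative) also exhibits properness of the log score: the expected loss from misreporting is exactly a KL divergence.

```python
import math

def log_score(p, omega):
    """Logarithmic scoring rule, Eqn. (1): LSR[omega, p] = log p(omega)."""
    return math.log(p[omega])

def brier_score(p, omega):
    """Brier scoring rule, Eqn. (2): QSR[omega, p] = 2 p(omega) - sum p(w)^2."""
    return 2 * p[omega] - sum(q * q for q in p.values())

def kl(p, q):
    """KL divergence D(P || Q) = sum_w p(w) log(p(w)/q(w))."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

def expected_score(score, belief, report):
    """Expected score when the outcome is drawn from `belief`."""
    return sum(belief[w] * score(report, w) for w in belief)

belief = {"yes": 0.7, "no": 0.3}          # Alice's true belief (illustrative)
honest = expected_score(log_score, belief, belief)
shaded = expected_score(log_score, belief, {"yes": 0.5, "no": 0.5})
# Properness: honest >= shaded; for the log score the gap is KL(belief || report).
```

The same `expected_score` helper verifies properness of the Brier score as well; the KL-gap identity is what ties the log score to the information-theoretic tools of Section 2.3.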
We now introduce mutual information, which measures the amount of information between two random variables. Given a random variable (X, Y) on a finite set X × Y, let p_{X,Y}(x, y) be the probability density of the random variable (X, Y), and let p_X(x) and p_Y(y) be the marginal probability densities of X and Y respectively. The mutual information I(X; Y) is the KL divergence from the joint distribution to the product of the marginals:

I(X; Y) := Σ_{x∈X, y∈Y} p_{X,Y}(x, y) log( p_{X,Y}(x, y) / (p_X(x) p_Y(y)) ) = D_KL(P_{X,Y} ∥ P_X ⊗ P_Y),

where ⊗ denotes the tensor product between distributions. Moreover, if (X, Y, Z) is a random variable, the mutual information between X and Y conditional on Z is I(X; Y | Z) := E_Z[D_KL(P_{(X,Y)|Z} ∥ P_{X|Z} ⊗ P_{Y|Z})].

The data-processing inequality shows that no manipulation of the signals can improve the mutual information between two random variables; the inequality is of fundamental importance in information theory.

Theorem 2.3 (Data Processing Inequality). If X → Y → Z forms a Markov chain, then I(X; Y) ≥ I(X; Z).

Because mutual information is symmetric, neither can manipulating X increase the mutual information between X and Y. Thus, we say mutual information is information monotone in both coordinates.

By basic algebraic manipulations, Kong and Schoenebeck [12] relate proper scoring rules to mutual information. For two random variables X and Y,

E_{X,Y}[ LSR[y, P_Y(· | x)] − LSR[y, P_Y(·)] ] = I(X; Y).   (3)

We can generalize mutual information in two ways [12]. The first is to define the f-MI using the f-divergence, where f is a convex function, to measure the distance between the joint distribution and the product of the marginal distributions. The KL divergence is just a special case of the f-divergence. This retains the symmetry between the inputs.

The second way is to use a different proper scoring rule. As mentioned, any convex function f gives rise to a proper scoring rule PS^f. Then the Bregman mutual information can be defined as in Eqn. (3):
BMI^f(X; Y) := E_{X,Y}[ PS^f[y, P_{Y|X}(· | x)] − PS^f[y, P_Y(·)] ]. Note that, by the properties of proper scoring rules, BMI^f is information monotone in the first coordinate; however, in general it is not information monotone in the second.

Thus, by Eqn. (3), mutual information is the unique measure that is both a Bregman mutual information and an f-MI. This observation is one key to designing our strongly truthful mechanisms.

Random variables X, Y, and Z form a Markov chain if the conditional distribution of Z depends only on Y and is conditionally independent of X.

3 EXPERTS, TARGETS AND SOURCES: STRONGLY TRUTHFUL PEER PREDICTION MECHANISMS

In this section, we show how to design strongly truthful mechanisms to elicit agents' signals by implicitly running a prediction market.

Our mechanisms have three characters, Alice, Bob, and Chloe, and there are three roles: expert, target, and source:

• An expert makes predictions on a target's report,
• a target is asked to report his signal, and
• a source provides her information to an expert to improve the expert's prediction.

By asking agents to play these three roles, we design two strongly truthful mechanisms based on two different ideas. The first mechanism is source differential peer prediction (S-DPP). This mechanism is based on the knowledge-free peer prediction mechanism by Zhang and Chen [33], which rewards a source by how useful her signal is for an expert predicting a target's report. Their mechanism is only truthful, but not strongly truthful. We carefully shift the payment functions and employ Eqn. (3) and the data-processing inequality on the log scoring rule to achieve the strongly truthful guarantee.

We further propose a second mechanism, target differential peer prediction (T-DPP). Instead of rewarding a source, the T-DPP mechanism rewards a target by the difference of the logarithmic scoring rule on her signal between an initial prediction and an improved prediction. Later, in Sect. 4, we show that Bayesian truth serum can be seen as a special case of our T-DPP mechanism.
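Both DPP mechanisms build on the identity in Eqn. (3), which can be checked numerically on a toy joint distribution (the numbers below are illustrative, not from the paper):

```python
import math

# A hypothetical joint distribution over (x, y), as a dict (x, y) -> prob.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

px = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

def y_given_x(x):
    """Conditional distribution P_Y(. | x)."""
    return {y: joint[(x, y)] / px[x] for y in (0, 1)}

# Left side of Eqn. (3): expected log-score improvement from conditioning on X.
lhs = sum(p * (math.log(y_given_x(x)[y]) - math.log(py[y]))
          for (x, y), p in joint.items())

# Right side: mutual information I(X; Y) as a KL divergence.
mi = sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in joint.items())

assert abs(lhs - mi) < 1e-12  # the two sides agree term by term
```

The agreement is exact rather than approximate: each term of the left side, p(x, y) log(p(y | x)/p(y)), equals the corresponding term p(x, y) log(p(x, y)/(p(x) p(y))) of the mutual information.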
Then we discuss how to remove the temporal separation between agents making reports in Section 3.3, where agents only need to report once, and their reports do not depend on other agents' reports.

3.1 The Source Differential Peer Prediction Mechanism

The main idea of the S-DPP mechanism is that it rewards a source according to the usefulness of her signal for predictions. Specifically, suppose Alice acts as the expert, Bob as the target, and Chloe as the source. Our mechanism first asks Alice to make an initial prediction p̂ of Bob's report. Then, after Chloe reports her signal, we collect Alice's improved prediction p̂′ after seeing Chloe's additional information. In each case, Alice maximizes her utility by reporting her Bayesian posterior conditioned on her information.

The payments for Alice and Bob are simple. S-DPP pays Alice the sum of the logarithmic scoring rule on those two predictions. S-DPP pays Bob zero. Chloe's payment consists of two parts. First, we pay her the prediction score of the improved prediction p̂′. By the definition of a proper scoring rule (Definition 2.2), Chloe will report truthfully to maximize it. For the second part, we subtract from Chloe's payment three times the score of the initial prediction p̂. This ensures that the ex-ante agent welfare equals the mutual information, which is maximized at the truth-telling strategy profile. To ensure Bob also reports his signal truthfully, we permute Bob's and Chloe's roles in the mechanism uniformly at random.

Theorem 3.1. If the common prior P is second order stochastic relevant on a finite set with full support, Mechanism 1 is strongly truthful:
(1) The truth-telling strategy profile τ is a strict Bayesian Nash equilibrium.
(2) The ex-ante agents' welfare in the truth-telling strategy profile τ is strictly better than in all non-permutation strategy profiles.

We defer the proof to Appendix C. Intuitively, because the logarithmic scoring rule is proper, Alice (the expert) will make truthful predictions when Bob and Chloe report their signals truthfully. Similarly, the source is willing to report her signal truthfully to maximize the improved prediction score. This shows Mechanism 1 is truthful.
Mechanism 1 Two-round Source Differential Peer Prediction

Require: Alice, Bob, and Chloe have private signals a ∈ A, b ∈ B, and c ∈ C drawn from a second order stochastic relevant common prior P known to all three agents. LSR is the logarithmic scoring rule (1).
1: Bob and Chloe report their signals, b̂ and ĉ.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source, uniformly at random. We use t̂ to denote the target's report, and ŝ to denote the source's report.
3: Alice is informed who the target is and predicts the target's report t̂ with p̂.
4: Given the source's report ŝ, the expert makes another prediction p̂′.
5: The payment to the expert is LSR[t̂, p̂] + LSR[t̂, p̂′].
6: The payment to the target is 0.
7: The payment to the source is LSR[t̂, p̂′] − 3 LSR[t̂, p̂].

To show that the source is willing to report truthfully, we prove Lemma 3.2, a data-processing inequality for second order stochastic relevant distributions, and present the proof in Appendix C.

Lemma 3.2. Let the random variable (X, Y, Z) be ⟨X, Y, Z⟩-stochastic relevant on a finite space X × Y × Z with full support. Given a deterministic function θ : Y → Y,

E_{X,Y,Z}[ log( P_{Z|XY}(z | x, y) / P_{Z|X}(z | x) ) ] − E_{X,Y,Z}[ log( P_{Z|XY}(z | x, θ(y)) / P_{Z|X}(z | x) ) ] ≥ 0.

Moreover, equality occurs only if θ is an identity function, θ(y) = y.

Though Lemma 3.2 only considers the log scoring rule, it is straightforward to show the source is willing to report truthfully when we use any strictly proper scoring rule. Consequently, the S-DPP mechanism will still have truth-telling as an equilibrium. However, the total payment at the truth-telling strategy profile may not be maximal.

Note that we can ask Alice, Bob, and Chloe to play all three characters, and obtain a guarantee identical to Theorem 3.1. We illustrate this modification on n agents in Sect. 5.
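The payments in Mechanism 1 can be sketched in code. The toy simulation below fixes Bob as the target (the mechanism randomizes this), assumes all agents play truthfully, and uses a hypothetical prior (illustrative numbers only); the ex-ante welfare then telescopes to 2 E[LSR improvement] = 2 I(C; B | A), matching the remark that welfare equals a mutual information maximized at truth-telling.

```python
import math

# A hypothetical common prior over (a, b, c): Alice is the expert,
# Bob the target, Chloe the source in this illustration.
P = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05, (0, 1, 0): 0.05, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.05, (1, 1, 0): 0.05, (1, 1, 1): 0.40,
}
sig = (0, 1)

def b_given(a, c=None):
    """Alice's truthful predictions: P_{B|A}(. | a) or P_{B|AC}(. | a, c)."""
    if c is None:
        z = sum(P[(a, b, cc)] for b in sig for cc in sig)
        return {b: sum(P[(a, b, cc)] for cc in sig) / z for b in sig}
    z = sum(P[(a, b, c)] for b in sig)
    return {b: P[(a, b, c)] / z for b in sig}

# Expected S-DPP payments at truth-telling.
expert_pay = source_pay = 0.0
for (a, b, c), p in P.items():
    initial = math.log(b_given(a)[b])            # LSR[b, initial prediction]
    improved = math.log(b_given(a, c)[b])        # LSR[b, improved prediction]
    expert_pay += p * (initial + improved)       # line 5 of Mechanism 1
    source_pay += p * (improved - 3 * initial)   # line 7 of Mechanism 1

welfare = expert_pay + 0.0 + source_pay          # target is paid 0 (line 6)
# welfare = 2 * E[improved - initial] = 2 * I(C; B | A) > 0 here.
```

The telescoping is visible in the payment formulas: the `initial` terms contribute +1 from the expert and −3 from the source, while the `improved` terms contribute +1 each, leaving 2(improved − initial) in expectation.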
Furthermore, if the agents' common prior P is symmetric, the above modification creates a symmetric game where each agent's expected payment at the truth-telling strategy profile is both non-negative and maximized among all symmetric equilibria.

3.2 Target Differential Peer Prediction Mechanism

The target differential peer prediction mechanism (T-DPP) is identical to the S-DPP except for the payment functions. In contrast to the S-DPP mechanism, T-DPP rewards a target. We show that paying the difference between an initial prediction and an improved prediction of a target's signal can incentivize the target to report truthfully (Lemma 3.4).

Our mechanism pays Alice the sum of the log scoring rule on those two predictions. The mechanism pays Bob the improvement from the initial prediction p to the improved prediction p'. Finally, Chloe's payment depends on Alice's initial prediction p, which is independent of Chloe's action. To ensure Chloe also reports her signal truthfully, we permute the roles of Bob and Chloe uniformly at random in the mechanism as well.

Theorem 3.3. If the common prior P is second order stochastic relevant on a finite set with full support, Mechanism 2 is strongly truthful.

Although the theoretical guarantees in Theorems 3.1 and 3.3 are identical, in Sect. 5 we discuss that target DPP may be more robust if we want to replace the expert with a machine learning algorithm.

Mechanism 2: Two-round Target Differential Peer Prediction

Require: Alice, Bob, and Chloe have private signals a ∈ A, b ∈ B, and c ∈ C drawn from a second order stochastic relevant common prior P known to all three agents. LSR is the logarithmic scoring rule (1).
1: Bob and Chloe report their signals, b̂ and ĉ.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source uniformly at random. We use t̂ to denote the target's report, and ŝ to denote the source's report.
3: Alice is informed who is the target and predicts the target's report t̂ with p.
4: Given the source's report ŝ, the expert makes another prediction p'.
5: The payment to the expert is LSR[t̂, p] + LSR[t̂, p'].
6: The payment to the target is LSR[t̂, p'] − LSR[t̂, p].
7: The payment to the source is −2 LSR[t̂, p].

We defer the proof to Appendix D, and provide a sketch here. We first show Mechanism 2 is truthful. Because the log scoring rule is proper, Alice (the expert) will make the truthful predictions when Bob and Chloe report their signals truthfully. Thus, the difficult part is to show the target is willing to report his signal truthfully if the expert and the source are truthful. Because the roles of Bob and Chloe are symmetric in the mechanism, we can assume Bob is the target and Chloe is the source from now on.

Lemma 3.4. Suppose Alice and Chloe are truthful, and the common prior is ⟨A, B, C⟩-second order stochastic relevant. As the target, Bob's best response is to report his signal truthfully.

This is a generalization of a lemma in Prelec [20] and Kong and Schoenebeck [12], and extends to the non-symmetric prior and finite-agent setting. (Prelec [19] also shows a weaker version of the above lemma. However, his proof requires a stronger assumption than second order stochastic relevance: for any distinct signals b, b′ ∈ B and signals a ∈ A and c ∈ C, P_{C|A,B}(c | a, b) ≠ P_{C|A,B}(c | a, b′).) The main idea of the proof of Lemma 3.4 is to show that maximizing Bob's expected payment is equivalent to maximizing the reward of a proper scoring rule applied to predicting Chloe's report with prediction P_{C|A,B}(· | a, θ(b)). Therefore, by the property of proper scoring rules, Bob is incentivized to tell the truth. With Lemma 3.4, the rest of the proof of Theorem 3.3 is identical to the proof of Theorem 3.1, and is included in Appendix D.

Proof of Lemma 3.4. Given that Alice and Chloe are truthful, let θ : B → B be a (deterministic) best response for Bob. Let Alice's, Bob's, and Chloe's signals be a, b, and c respectively. When Alice and Chloe both report truthfully, Chloe's report is ĉ = c, Alice's initial prediction is p = P_{B|A}(· | a), and her improved prediction is p' = P_{B|A,C}(· | a, c). Hence, Bob with strategy θ gets payment

LSR[θ(b), P_{B|A,C}(· | a, c)] − LSR[θ(b), P_{B|A}(· | a)].

Because θ is a best response, for all b ∈ B, reporting θ(b) maximizes Bob's expected payment conditional on B = b,

E_{(a,c)∼P | B=b}[ LSR[θ(b), P_{B|A,C}(· | a, c)] − LSR[θ(b), P_{B|A}(· | a)] ].   (4)

The ex ante payment of Bob is computed by summing (4) over b, weighted by P(b):

U(θ) := E_{(a,b,c)∼P}[ LSR[θ(b), P_{B|A,C}(· | a, c)] − LSR[θ(b), P_{B|A}(· | a)] ],

which is maximized over θ. Now, we can swap the roles of b and c:

U(θ) = E_{(a,b,c)∼P}[ LSR[θ(b), P_{B|A,C}(· | a, c)] − LSR[θ(b), P_{B|A}(· | a)] ]
     = E_{a,b,c}[ log P_{B|A,C}(θ(b) | a, c) − log P_{B|A}(θ(b) | a) ]   (by the definition (1))
     = E_{a,b,c}[ log( P_{B|A,C}(θ(b) | a, c) / P_{B|A}(θ(b) | a) ) ]
     = E_{a,b,c}[ log( P_{B,C|A}(θ(b), c | a) / ( P_{B|A}(θ(b) | a) P_{C|A}(c | a) ) ) ]
     = E_{a,b,c}[ log( P_{C|A,B}(c | a, θ(b)) / P_{C|A}(c | a) ) ].

The above value can be seen as the ex ante prediction score of Bob who reports the prediction P_{C|A,B}(· | a, θ(b)) for Chloe's signal. Similarly, the ex ante payment of Bob when his strategy is the truth-telling strategy I is

U(I) = E_{a,b,c}[ log( P_{C|A,B}(c | a, b) / P_{C|A}(c | a) ) ].

The difference between U(I) and U(θ) is

U(I) − U(θ) = E_{a,b,c}[ log( P_{C|A,B}(c | a, b) / P_{C|A}(c | a) ) ] − E_{a,b,c}[ log( P_{C|A,B}(c | a, θ(b)) / P_{C|A}(c | a) ) ].

First, by Lemma 3.2, we know U(I) ≥ U(θ). However, because θ is a best response, the inequality is in fact an equality, U(I) = U(θ). By the second part of Lemma 3.2, this shows θ is the identity and U(θ) = U(I). □

Note that the proof uses A) the log scoring rule is a Bregman mutual information, which can be written as the difference between two proper scoring rules, and B) the log scoring rule is also an f-mutual information, which is symmetric between the inputs. Furthermore, though both mechanisms work with the log scoring rule, the S-DPP can work with a general proper scoring rule, but the T-DPP cannot.
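The counterexample in Proposition 3.5 below can be checked by brute force. The following sketch (ours, not from the paper) recomputes Bob's expected target payment, BS[report, p'] − BS[report, p], under the Brier score BS[ω, p] = 2p(ω) − Σ_{ω'} p(ω')², comparing truth-telling with misreporting signal 1 as 2:

```python
# Prior P(a=1, b, c) from Proposition 3.5; rows index Bob's signal b,
# columns index Chloe's signal c (the single signal a = 1 is omitted).
P = [[0.12, 0.11, 0.16],
     [0.04, 0.05, 0.18],
     [0.15, 0.18, 0.01]]

def brier(outcome, pred):
    """Brier score (2): BS[omega, p] = 2 p(omega) - sum_w p(w)^2."""
    return 2 * pred[outcome] - sum(q * q for q in pred)

p_b = [sum(row) for row in P]                               # marginal of b
p_c = [sum(P[b][c] for b in range(3)) for c in range(3)]    # marginal of c
# Improved prediction of b after seeing c (a is constant here).
p_b_given_c = [[P[b][c] / p_c[c] for b in range(3)] for c in range(3)]

def expected_payment(strategy):
    """Bob's expected payment BS[report, p'] - BS[report, p] when his
    report is strategy[b] for true signal b."""
    total = 0.0
    for b in range(3):
        for c in range(3):
            r = strategy[b]
            total += P[b][c] * (brier(r, p_b_given_c[c]) - brier(r, p_b))
    return total

truth = expected_payment([0, 1, 2])     # report b truthfully
deviate = expected_payment([1, 1, 2])   # misreport signal 1 as 2
assert deviate > truth                  # truth-telling is not a best response
```

Running this reproduces the numbers in the proof: about 0.0878 for truth-telling and about 0.0990 for the deviation.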
Proposition 3.5 provides a counterexample where the Brier scoring rule (2), applied in the same way, does not elicit truthful reporting from the target, which shows a distinction between the log scoring rule and other scoring rules.

Proposition 3.5. If we replace the log scoring rule with the Brier scoring rule (2), there exists an ⟨A, B, C⟩-second order stochastic relevant prior P such that reporting his signal truthfully is not a best response for Bob.

Proof. Let A = {1}, and B = C = {1, 2, 3}. We define an ⟨A, B, C⟩-second order stochastic relevant prior

(P(1, b, c))_{b,c} = [ 0.12  0.11  0.16 ;
                       0.04  0.05  0.18 ;
                       0.15  0.18  0.01 ].

By direct computation, Bob's payment is 0.0878 under the truth-telling strategy, but he can get 0.0990 if he misreports 1 as 2. □

3.3 Single-round DPP Mechanism for Finite Signal Spaces

When the signal spaces are finite, the above two-round mechanisms (Mechanisms 1 and 2) can be reduced to single-round mechanisms by using a virtual signal σ. That is, for Alice's improved prediction we provide Alice with a random virtual signal σ instead of the actual report from the source, and pay her the prediction score when the source's report equals the virtual signal, ŝ = σ. Here we state only the single-round target-DPP; the single-round source-DPP can be defined analogously.

Mechanism 3: Single-round T-DPP

Require: Alice, Bob, and Chloe have private signals a ∈ A, b ∈ B, and c ∈ C drawn from a second order stochastic relevant common prior P known to all three agents. The empty set ∅ is in neither B nor C.
1: Bob and Chloe report their signals, b̂ and ĉ.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source uniformly at random. We use t̂ to denote the target's report, and ŝ to denote the source's report.
3: Sample σ uniformly from X_s ∪ {∅}, where X_s is the signal space of the source, and tell the expert σ and who the target is.
4: if σ = ∅ then  ⊲ initial prediction
5:   The expert makes a prediction p of t̂.
6: else  ⊲ improved prediction
7:   The expert makes a prediction p' of t̂, pretending the source's report is ŝ = σ.
8: end if
9: The payment to the expert is 1[σ = ŝ] · LSR[t̂, p'] + 1[σ = ∅] · LSR[t̂, p].
10: The payment to the target is 1[σ = ŝ] · LSR[t̂, p'] − 1[σ = ∅] · LSR[t̂, p].
11: The payment to the source is −2 · 1[σ = ∅] LSR[t̂, p].

Mechanism 3 has the same truthfulness guarantees as Mechanism 2. The proof is the same and is presented in Appendix E.

Theorem 3.6. If the agents' common beliefs are stochastic relevant and the sets B and C are finite, Mechanism 3 is strongly truthful.

Remark 3.7. Mechanism 3 uses the virtual signal trick to decouple the dependency between the expert's (Alice's) prediction and the source's (Chloe's) signal ŝ ∈ X_s. Furthermore, the logarithmic scoring rule is a local proper scoring rule [17], in that the score LSR[σ, p] = log p(σ) only depends on the probability p_σ. Hence we can further simplify Alice's report by asking her to predict the probability density ∈ [0, 1] of a single virtual signal in the target's (e.g., Bob's) signal space.

This trick can be extended to settings with a countably infinite set of signals. For example, for signals in ℕ, we can generate the virtual signal from a Poisson distribution (which dominates the counting measure) and normalize the payments correspondingly. However, this trick does not work on general measurable spaces, e.g., the real numbers, because the probability of the virtual signal matching the source's report can be zero.

4 BAYESIAN TRUTH SERUM AS A PREDICTION MARKET

In this section, we revisit the original Bayesian Truth Serum (BTS) by Prelec [20] from the perspective of prediction markets. We first define the setting, which is a special case of ours (Mechanism 2), and use the idea of prediction markets (Appendix A) to understand BTS.

4.1 Setting of BTS

There are n agents. They all share a common prior P. We call P admissible if it consists of two main elements: states and signals. The state T is a random variable in {1, . . ., m}, m ≥ 2, which represents the true state of the world. Each agent i observes a signal X_i from a finite set Ω.
The agents have a common prior consisting of P(T) and P(· | T) such that the prior joint distribution of X_1, . . ., X_n is

Pr(X_1 = x_1, . . ., X_n = x_n) = Σ_{t∈[m]} P_T(t) Π_{i∈[n]} P_{X|T}(x_i | t).

Now we restate the main theorem concerning Bayesian Truth Serum:

Mechanism 4: The original BTS

Require: The common prior is admissible, and α > 1.
1: Agent i reports x̂_i ∈ Ω and p_i ∈ P(Ω).
2: For each agent i, choose a reference agent j ≠ i uniformly at random. Compute p^{(n)}_{−ij} ∈ P(Ω) such that for all ω ∈ Ω

p^{(n)}_{−ij}(ω) = (1/(n − 2)) Σ_{k≠i,j} 1[x̂_k = ω],   (5)

which is the empirical distribution of the other n − 2 agents' reports.
3: The prediction score and information score of i are

u_Pre = LSR[x̂_j, p_i] − LSR[x̂_j, p^{(n)}_{−ij}]  and  u_Im = LSR[x̂_i, p^{(n)}_{−ij}] − LSR[x̂_i, p_j].

And the payment to i is u_Im + α u_Pre.

Theorem 4.1 ([20]). For all α > 1, if the common prior P is admissible and n → ∞, Mechanism 4 is strongly truthful.

4.2 Information Score as Prediction Market

Prelec [20] uses a clever algebraic calculation to prove this main result. Kong and Schoenebeck [12] use information theory to show that for BTS the ex-ante agents' welfare for the truth-telling strategy profile is strictly better than for all other non-permutation equilibria. Here we use the idea of prediction markets to show BTS is a truthful mechanism, and use Mechanism 2 to reproduce BTS when the common prior is admissible and n → ∞.

The payment from BTS consists of two parts, the information score, u_Im, and the prediction score, u_Pre. The prediction score is exactly the log scoring rule, which is well studied in the previous literature. However, the role of the information score is more complicated. Here we provide an interpretation based on Mechanism 2.
Informally, the information score is the improvement from one agent's prediction to the aggregate prediction from all agents on one agent's signal, which is formalized in Proposition 4.2. Thus, by Lemma 3.4, reporting one's signal truthfully maximizes the agent's information score.

Now we formalize this idea. Consider i = 1 and j = 2 in BTS and call them Bob and Alice respectively. We let Chloe be the collection of the other agents {3, 4, . . ., n}. Let's run Mechanism 2 on this information structure. Bob is the target. Alice's initial prediction is p = P_{X_1|X_2}(· | x_2). When Chloe's signal is x_3, x_4, . . ., x_n, Alice's improved prediction is p' = P_{X_1|X_{−1}}(· | x_{−1}), where x_{−1} := (x_2, x_3, . . ., x_n) is the collection of all agents' reports except Bob's. By Lemma 3.4, Bob's payment, LSR[x̂_1, p'] − LSR[x̂_1, p], which equals

LSR[x̂_1, P_{X_1|X_{−1}}(· | x_{−1})] − LSR[x̂_1, P_{X_1|X_2}(· | x_2)],   (6)

is maximized in expectation when Bob reports his private signal x_1.

Note that Bob's payment here (eq. (6)) is nearly identical to Bob's information score in the BTS (Mechanism 4) at the truth-telling strategy profile, LSR[x̂_1, p^{(n)}_{−{1,2}}] − LSR[x̂_1, p_2], which equals

LSR[x̂_1, p^{(n)}_{−{1,2}}] − LSR[x̂_1, P_{X_1|X_2}(· | x_2)].   (7)

The only difference between (6) and (7) is that the former predicts x̂_1 using P_{X_1|X_{−1}}(· | x_{−1}) in the first term while the latter uses p^{(n)}_{−{1,2}}. Therefore, the original BTS reduces to a special case of Mechanism 2 as n → ∞, if we can show lim_{n→∞} P(x_1 | x_{−1}) = lim_{n→∞} p^{(n)}_{−{1,2}}. Formally,

Proposition 4.2. For all ω ∈ Ω,

p^{(n)}_{−{1,2}}(ω) − P_{X_1|X_{−1}}(ω | x_{−1}) → 0  as n → ∞.

That is, the difference between these estimators converges to zero in probability as n goes to infinity.

The proposition follows by seeing that, fixing the state of the world t, both p^{(n)}_{−{1,2}}(·) and P_{X_1|X_{−1}}(· | x_{−1}) converge to P_{X|T}(· | t), which is the posterior distribution of Bob's signal given the state of the world. However, Proposition 4.2 requires that agents' signals are symmetric and conditionally independent. In Sect. 5, we discuss that, in practice, we may replace the simple average p^{(n)}_{−{1,2}}(·) with another learning algorithm to relax these assumptions.

5 DISCUSSION AND APPLICATIONS

We define two Differential Peer Prediction mechanisms, S-DPP and T-DPP, which are strongly truthful, detail-free, and only require a single-item report from three agents. In addition to these nice theoretical guarantees, our core observation is that paying an agent the difference between an initial and an improved prediction of his signal is a powerful peer prediction tool.

We believe that our mechanisms can be applied in several domains including peer grading, peer review, surveys, and crowd-sourcing. In fact, Srinivasan and Morgenstern [28] use our DPP mechanisms in their proposed market for the peer-review process. Moreover, in peer grading, multi-round review processes are already used in practice [18].

As discussed in the related work, existing single-task mechanisms either make strong assumptions on the signal distribution, e.g., symmetry, or also use multiple rounds. We believe our multi-round mechanisms are more practical than mechanisms requiring strong assumptions on the signal distribution, which may not hold in these domains.

While multi-task peer-prediction mechanisms can also sometimes be deployed in this area, the present mechanism has three advantages: 1) It only requires one question or task, while the multi-task peer prediction mechanisms often require many. This is of great importance in peer grading and peer review, where each agent may only grade a small handful of items. 2) Other mechanisms require learning the relationship between signals; however, in the proposed applications (e.g., peer grading and peer review) the agents typically see much more information than the mere score, and the relationship of signals may depend on particular traits of different items.
Our mechanisms mitigate this problem because the expert can also see the item and use its traits to inform her prediction. 3) Unlike the multi-task setting, our mechanism does not require questions and responses to be i.i.d.

Our presentation involves each agent only giving a one-item response. As highlighted in the introduction, our mechanism can easily be adapted so that each agent plays all three roles, and thus provides both a signal and a prediction. Specifically, given n agents, we could assign each agent an index in [n]. In the first round, each agent i reports her signal and an initial prediction of agent i + 1's reported signal. In the second round, agent i receives agent i − 1's reported signal, and makes an improved prediction of agent i + 1's reported signal. (We use modular arithmetic here.) This variant mechanism treats agents symmetrically and can collect more signals, which is often the goal. Furthermore, this symmetric design may be more fair.

Our mechanism does require some coordination between agents, but in general it is quite minimal. First, we assume that the identities of agents are established. Because we allow heterogeneous agents, the expert must know who the target is in order to respond. However, in practice, this could be relaxed to knowing the "type" of each of the agents, as long as knowing the type is sufficient to specify the joint prior. Additionally, if agents are homogeneous, agents' identities are irrelevant. Second, agents cannot be paid until all the reports are in, because some payments rely on all reports. However, in the single-round mechanisms, no additional coordination is required: agents can interact with the mechanism in any order. Even in the two-round mechanisms, the only requirement is that the expert must participate after the source. In the case where roles can safely be correlated with arrival times, the first arrivals can be assigned to source/target and the final one to be the expert, and then no further coordination is required.
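The symmetric n-agent variant above only needs a cyclic assignment of roles. A minimal sketch (ours, with hypothetical field names) of the bookkeeping:

```python
def rotate_roles(n):
    """Cyclic role assignment for the symmetric DPP variant: agent i acts
    as expert for target i+1 and uses source i-1's report (indices mod n)."""
    return [{"expert": i, "target": (i + 1) % n, "source": (i - 1) % n}
            for i in range(n)]

roles = rotate_roles(4)
# Every agent serves exactly once in each role.
for part in ("expert", "target", "source"):
    assert sorted(r[part] for r in roles) == list(range(4))
```

Because the assignment is a single cycle, each agent's payment as expert, target, and source can be computed from three independent instances of the two-round mechanism.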
Machine-Learning Aided Peer Prediction. Given the ubiquity of learning algorithms, in our S-DPP and T-DPP we may use a learning algorithm to replace the character of the expert who makes predictions on the target's report. With this modification, agents only need to report their signals without making complicated predictions. Therefore, using learning algorithms as a surrogate may greatly simplify the complications of our mechanisms.

Now we discuss possible conditions for those learning algorithms to ensure truthfulness. S-DPP requires that the learning algorithm can improve its prediction based on one agent's signal, namely the source's. On the other hand, by Lemma 3.4, T-DPP really only requires that the learning algorithm can make two predictions on the target's report such that the improved prediction is better than the initial one. The condition in T-DPP is weaker than the condition in S-DPP, because a learning algorithm may not have discernible improvement based on one agent's (source's) signal, but can still make an improved prediction with enough information. For instance, the initial prediction in the BTS (Mechanism 4) is one agent's prediction, and the improved prediction is the empirical average p^{(n)}_{−ij}. We can replace the empirical average with any learning algorithm which uses all other agents' signals to make improved predictions.

As mentioned before, if the agents are privy to additional information which systematically changes the relationship between agents' signals, machine learning algorithms applied to the entire data, but not given access to the instances themselves, may not work. For example, two agents may agree in their assessments of dramatic movies but always disagree in their assessments of comedy movies. The issue is that the relationship cannot be properly learned without information about the movie itself. To combat this issue, the machine learner could take as input the instances themselves [14].

One future direction is to use this machinery to analyze when BTS retains its strongly truthful guarantee, e.g., for what parameters of finite and/or heterogeneous agents.
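In code, the machine-learning aided T-DPP payment for the target reduces to the improvement between two learner-supplied predictions. A hedged sketch (ours, not from the paper), where the two callables stand in for any learning algorithm:

```python
import math

def lsr(outcome, pred):
    """Logarithmic scoring rule (1)."""
    return math.log(pred[outcome])

def target_payment(target_report, predict_initial, predict_improved):
    """Pays the target the improvement between two predictions of his
    report, as in Mechanism 2; the predictors are arbitrary callables,
    e.g. trained models, instead of a human expert."""
    return (lsr(target_report, predict_improved())
            - lsr(target_report, predict_initial()))

# Hypothetical learner that sharpens its prediction given more reports.
pay = target_payment("high",
                     lambda: {"high": 0.5, "low": 0.5},
                     lambda: {"high": 0.9, "low": 0.1})
assert pay > 0  # the improved prediction raised the score of the true report
```

This illustrates the weaker T-DPP condition discussed above: only the gap between the two predictions matters, not where the improvement came from.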
ACKNOWLEDGMENTS

Grant Schoenebeck and Fang-Yi Yu are pleased to acknowledge the support of the National Science Foundation grants 1618187 and 2007256. Fang-Yi Yu is pleased to acknowledge the support of the National Science Foundation grant 2007887. We would like to thank Drazen Prelec for the reference to a related work [19].

REFERENCES

[1] Aurélien Baillon. 2017. Bayesian markets to elicit private information. Proceedings of the National Academy of Sciences 114, 30 (2017), 7958–7962.
[2] Glenn W. Brier. 1950. Verification of forecasts expressed in terms of probability. Monthly Weather Review 78, 1 (1950), 1–3.
[3] Noah Burrell and Grant Schoenebeck. 2021. Measurement Integrity in Peer Prediction: A Peer Assessment Case Study. arXiv preprint arXiv:2108.05521 (2021).
[4] Thomas M. Cover and Joy A. Thomas. 2001. Elements of Information Theory. Wiley, USA. https://doi.org/10.1002/0471200611
[5] Anirban Dasgupta and Arpita Ghosh. 2013. Crowdsourced judgement elicitation with endogenous proficiency. In 22nd International World Wide Web Conference, WWW '13, Rio de Janeiro, Brazil, May 13-17, 2013, Daniel Schwabe, Virgílio A. F. Almeida, Hartmut Glaser, Ricardo Baeza-Yates, and Sue B. Moon (Eds.). International World Wide Web Conferences Steering Committee / ACM, 319–330. https://doi.org/10.1145/2488388.2488417
[6] Xi Alice Gao, Andrew Mao, Yiling Chen, and Ryan Prescott Adams. 2014. Trick or treat: putting peer prediction to the test. In Proceedings of the fifteenth ACM conference on Economics and computation. ACM, 507–524.
[7] Robin Hanson. 2003. Combinatorial information market design. Information Systems Frontiers 5, 1 (2003), 107–119.
[8] Yuqing Kong. 2020. Dominantly truthful multi-task peer prediction with a constant number of tasks. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2398–2411.
[9] Yuqing Kong, Katrina Ligett, and Grant Schoenebeck. 2016. Putting peer prediction under the micro (economic) scope and making truth-telling focal.
In International Conference on Web and Internet Economics. Springer, 251–264.
[10] Yuqing Kong and Grant Schoenebeck. 2018. Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity. In 9th Innovations in Theoretical Computer Science Conference.
[11] Yuqing Kong and Grant Schoenebeck. 2018. Water from Two Rocks: Maximizing the Mutual Information. In Proceedings of the 2018 ACM Conference on Economics and Computation. ACM, 177–194.
[12] Yuqing Kong and Grant Schoenebeck. 2019. An information theoretic framework for designing information elicitation mechanisms that reward truth-telling. ACM Transactions on Economics and Computation (TEAC) 7, 1 (2019), 2.
[13] Yuqing Kong, Grant Schoenebeck, Fang-Yi Yu, and Biaoshuai Tao. 2020. Information Elicitation Mechanisms for Statistical Estimation. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020).
[14] Yang Liu and Yiling Chen. 2017. Machine-learning aided peer prediction. In Proceedings of the 2017 ACM Conference on Economics and Computation. ACM, 63–80.
[15] N. Miller, P. Resnick, and R. Zeckhauser. 2005. Eliciting informative feedback: The peer-prediction method. Management Science (2005), 1359–1373.
[16] XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. 2010. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory 56, 11 (2010), 5847–5861.
[17] Matthew Parry, A. Philip Dawid, Steffen Lauritzen, et al. 2012. Proper local scoring rules. The Annals of Statistics 40, 1 (2012), 561–592.
[18] Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng, and Daphne Koller. 2013. Tuned models of peer assessment in MOOCs. arXiv preprint arXiv:1307.2579 (2013).
[19] Drazen Prelec. 2001. A two-person scoring rule for subjective reports. Massachusetts Institute of Technology working paper.
[20] Drazen Prelec. 2004. A Bayesian Truth Serum for Subjective Data. Science 306, 5695 (2004), 462–466. https://doi.org/10.1126/science.1102081
[21] Drazen Prelec. 2021. Bilateral Bayesian truth serum: The n x m signals case. Available at SSRN 3908446.
[22] Goran Radanovic and Boi Faltings. 2013. A robust bayesian truth serum for non-binary signals. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI '13). 833–839.
[23] Goran Radanovic and Boi Faltings. 2014. Incentives for truthful information elicitation of continuous signals. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI '14). 770–776.
[24] Grant Schoenebeck and Fang-Yi Yu. 2020. Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach. arXiv preprint arXiv:2009.14730 (2020).
[25] Grant Schoenebeck and Fang-Yi Yu. 2020. Two strongly truthful mechanisms for three heterogeneous agents answering one question. In International Conference on Web and Internet Economics. Springer, 119–132.
[26] Grant Schoenebeck, Fang-Yi Yu, and Yichi Zhang. 2021. Information Elicitation from Rowdy Crowds. In Proceedings of the Web Conference 2021. 3974–3986.
[27] Victor Shnayder, Arpit Agarwal, Rafael Frongillo, and David C. Parkes. 2016. Informed Truthfulness in Multi-Task Peer Prediction. In Proceedings of the 2016 ACM Conference on Economics and Computation (Maastricht, The Netherlands) (EC '16). ACM, New York, NY, USA, 179–196.
[28] Siddarth Srinivasan and Jamie Morgenstern. 2021. Auctions and Prediction Markets for Scientific Peer Review. arXiv preprint arXiv:2109.00923 (2021).
[29] Bo Waggoner and Yiling Chen. 2013. Information elicitation sans verification. In Proceedings of the 3rd Workshop on Social Computing and User Generated Content (SC13).
[30] Robert L. Winkler. 1969. Scoring rules and the evaluation of probability assessors. J. Amer. Statist. Assoc. 64, 327 (1969), 1073–1078.
[31] Jens Witkowski and David C. Parkes. 2011. A Robust Bayesian Truth Serum for Small Populations. In Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI 2012).
[32] Jens Witkowski and David C. Parkes. 2012. Peer prediction without a common prior. In Proceedings of the 13th ACM Conference on Electronic Commerce, EC 2012, Valencia, Spain, June 4-8, 2012. ACM, 964–981.
[33] Peter Zhang and Yiling Chen. 2014. Elicitability and knowledge-free elicitation with peer prediction. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 245–252.
[34] Shuran Zheng, Fang-Yi Yu, and Yiling Chen. 2021. The Limits of Multi-task Peer Prediction. CoRR abs/2106.03176 (2021). https://arxiv.org/abs/2106.03176

A INTRODUCTION TO PREDICTION MARKETS

Now we want to get the collective prediction from a large group of experts. If we ask them all to report the prediction simultaneously and pay each of them the log scoring rule on their predictions, we only receive many different predictions, and it is not clear how to aggregate those predictions into a single prediction.

Hanson's [7] idea is to approach these experts sequentially. The mechanism asks experts to predict, given the predictions that previous experts have made, and pays each expert the difference between the score of their prediction and the score of the previous one. Formally,
(1) The designer chooses an initial prediction p̂_0, e.g., the uniform distribution on Ω.
(2) The experts i = 1, 2, . . ., k arrive in order. Each expert i changes the prediction from p̂_{i−1} to p̂_i.
(3) The market ends and the event's outcome ω ∈ Ω is observed.
(4) Expert i receives a payoff PS[ω, p̂_i] − PS[ω, p̂_{i−1}].
Therefore, each expert (strictly) maximizes his expected score by reporting his true belief given his own knowledge and the prediction of the previous experts.

Suppose instead of multiple experts arriving in order we have one expert (Alice) but multiple signals arriving in order. For example, Alice is asked to predict the champion of a tennis tournament, where Ω is the set of players.
As the tournament proceeds, Alice collects additional signals (s_i)_{i=1,...,k} which inform the outcome. Formally,
(1) The designer chooses an initial prediction p̂_0.
(2) In round i = 1, 2, . . ., k, a signal s_i arrives, and Alice changes the prediction from p̂_{i−1} to p̂_i.
(3) At the end, the outcome ω ∈ Ω is observed.
(4) Alice receives a payoff Σ_{i=1}^{k} ( PS[ω, p̂_i] − PS[ω, p̂_{i−1}] ).
With belief P, if Alice reports truthfully in each round, she will report P(ω | s_1, s_2, . . ., s_i) at round i. If we use the log scoring rule, her expected payment at round i will be I(W; S_i | S_1, . . ., S_{i−1}). Her overall payment will be I(W; S_1, . . ., S_k), which maximizes her payment. This is an illustration of the chain rule for mutual information: I(S_1, S_2; W) = I(S_2; W | S_1) + I(S_1; W).

B DATA PROCESSING INEQUALITY

There are several proofs of the data processing inequality (Theorem 2.3). However, for information elicitation, we often aim for a strict data processing inequality such that, given a pair of random variables (X, Y), if a random function θ : Y → Y is not an invertible function, I(X; Y) > I(X; θ(Y)). In this section, we show that such a guarantee holds if X and Y are stochastic relevant (defined below).

We say a pair of random variables X, Y on a finite space X × Y is stochastic relevant if for any distinct x and x′ in X, P_{Y|X}(· | x) ≠ P_{Y|X}(· | x′), and the same condition holds when we exchange X and Y.

Theorem B.1. Suppose (X, Y) on a finite space X × Y is stochastic relevant and has full support. For every random function θ from Y to Y, where the randomness of θ is independent of (X, Y), I(X; Y) = I(X; θ(Y)) if and only if θ is a deterministic invertible function. Otherwise, I(X; Y) > I(X; θ(Y)).

Moreover, we can extend this to conditional mutual information when the random variables are second order stochastic relevant (Definition 2.1).

Proposition B.2. Suppose (W, X, Y) on a finite space W × X × Y is second order stochastic relevant and has full support. For any random function θ from Y to Y, if the randomness of θ is independent of the random variables (W, X, Y),

I(X; Y | W) = I(X; θ(Y) | W)

if and only if θ is a one-to-one function. Otherwise, I(X; Y | W) > I(X; θ(Y) | W).

B.1 Proof of Theorem B.1

Theorem B.3 (Jensen's inequality). Let X be a random variable on a probability space (X, F, P) and let φ : R → R be a convex function. Then φ(E[X]) ≤ E[φ(X)]. The equality holds if and only if φ agrees almost everywhere on the range of X with a linear function.

Given a random function θ : Y → Y, we use M : Y × Y → R to denote its transition matrix, where M(y, ŷ) = Pr[θ(y) = ŷ] for all y, ŷ ∈ Y. Let Ŷ be the random variable θ(Y).

Variational representation. By the variational representation of mutual information [16, 24], let Φ(v) = v log v, Φ*(u) = exp(u − 1), and Φ′(v) = 1 + log v; the mutual information between X and Y is

I(X; Y) = sup_{f : X×Y→R} E_{P_{X,Y}}[f(x, y)] − E_{P_X ⊗ P_Y}[Φ*(f(x, y))],

and the maximum happens when

f*(x, y) := Φ′( P_{X,Y}(x, y) / ( P_X(x) P_Y(y) ) ).   (8)

We define f̂* for X and Ŷ similarly. With these notions, the mutual information between X and Ŷ is

I(X; Ŷ) = E_{P_{X,Ŷ}}[f̂*(x, ŷ)] − E_{P_X ⊗ P_Ŷ}[Φ*(f̂*(x, ŷ))]
        = E_{P_{X,Y}}[ ∫ f̂*(x, ŷ) M(y, ŷ) dŷ ] − E_{P_X ⊗ P_Y}[ ∫ Φ*(f̂*(x, ŷ)) M(y, ŷ) dŷ ]
        ≤ E_{P_{X,Y}}[ ∫ f̂*(x, ŷ) M(y, ŷ) dŷ ] − E_{P_X ⊗ P_Y}[ Φ*( ∫ f̂*(x, ŷ) M(y, ŷ) dŷ ) ].

The last inequality holds due to the convexity of Φ* and Jensen's inequality. Let g(x, y) := ∫ f̂*(x, ŷ) M(y, ŷ) dŷ for all x, y. We have

I(X; Ŷ) ≤ E_{P_{X,Y}}[g(x, y)] − E_{P_X ⊗ P_Y}[Φ*(g(x, y))]   (9)
        ≤ sup_{f : X×Y→R} E_{P_{X,Y}}[f(x, y)] − E_{P_X ⊗ P_Y}[Φ*(f(x, y))]   (10)
        = I(X; Y).

Sufficient condition. We first show the equality holds if θ is an invertible function. Hence, we need to show that (9) and (10) are equalities. Because θ is an invertible function, M is a permutation matrix. Thus, for all x, y, ∫ Φ*(f̂*(x, ŷ)) M(y, ŷ) dŷ = Φ*( ∫ f̂*(x, ŷ) M(y, ŷ) dŷ ), and (9) is an equality. For (10), for all x and y,

g(x, y) = ∫ f̂*(x, ŷ) M(y, ŷ) dŷ
        = f̂*(x, θ(y))   (deterministic function)
        = Φ′( P_{X,Ŷ}(x, θ(y)) / ( P_X(x) P_Ŷ(θ(y)) ) )   (by (8))
        = Φ′( P_{X,Y}(x, y) / ( P_X(x) P_Y(y) ) )   (invertible)
        = f*(x, y).
Therefore, (10) is an equality. This completes the proof.

Necessary condition. Now we show the equality holds only if θ is an invertible function, i.e., M is a permutation matrix. We first show a weaker statement, that M is injective. Formally, let S_M(y) := {ŷ : M(y, ŷ) > 0} be the support of M on y. We say M is injective if for all distinct y, y′ the supports of M(y, ·) and M(y′, ·) are disjoint, S_M(y) ∩ S_M(y′) = ∅. We prove this by contradiction: if M is not injective and I(X; Y) = I(X; Ŷ), then (X, Y) is not stochastic relevant.

I(X; Y) = I(X; Ŷ) implies (9) and (10) are equalities. Because (9) is an equality, given x and y, for all ŷ ∈ S_M(y),

g(x, y) = f̂*(x, ŷ).   (11)

Because (10) is an equality, for all x and y,

g(x, y) = f*(x, y).   (12)

Suppose M is not injective. There exist y_1, y_2, and ŷ* in Y such that y_1 ≠ y_2 and ŷ* ∈ S_M(y_1) ∩ S_M(y_2). For all x,

f*(x, y_1) = g(x, y_1)   (by (12))
           = f̂*(x, ŷ*)   (by (11) and ŷ* ∈ S_M(y_1))
           = g(x, y_2)   (by (11) and ŷ* ∈ S_M(y_2))
           = f*(x, y_2).   (by (12))

Since Φ′ is invertible, for all x,

P_{X,Y}(x, y_1) / ( P_X(x) P_Y(y_1) ) = P_{X,Y}(x, y_2) / ( P_X(x) P_Y(y_2) ).

Therefore, P_{X|Y}(· | y_1) = P_{X|Y}(· | y_2), and (X, Y) is not stochastic relevant. This shows the Markov kernel M is injective and has a deterministic inverse function.

Now we show that if M is injective, M is a permutation when Y is a finite space. Because M is a Markov kernel, |S_M(y)| ≥ 1 for all y. Moreover, because M is injective, |∪_y S_M(y)| = Σ_y |S_M(y)| ≥ |Y|. On the other hand, ∪_y S_M(y) = {ŷ : ∃y, M(y, ŷ) > 0} ⊆ Y, so |∪_y S_M(y)| ≤ |Y|. Therefore, by the pigeonhole principle, |S_M(y)| = 1 for all y, so θ is deterministic and one-to-one.

B.2 Proof of Proposition B.2

Proof of Proposition B.2. Given random variables (W, X, Y), define the pointwise conditional mutual information between X and Y given W = w as

I(X; Y | W = w) := D( P_{(X,Y)|W}(· | w) ∥ P_{X|W}(· | w) ⊗ P_{Y|W}(· | w) ),

which is the mutual information between X | W = w and Y | W = w.
First observe that the conditional mutual information $I(X;Y \mid W)$ is the average pointwise conditional mutual information between $X$ and $Y$ across different $w$:
$$I(X;Y \mid W) = \int I(X;Y \mid W = w)\, P_W(w)\, dw.$$
Thus, we can apply Theorem B.1 to each pointwise conditional mutual information.

The sufficient condition is straightforward. For the necessary condition we can reuse the argument in the proof of Theorem B.1. Let $\Phi(t) = t \log t$ and
$$f_{X,Y|W}(x,y \mid w) := \Phi'\left(\frac{P_{X,Y|W}(x,y \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y \mid w)}\right).$$
Note that the proof implicitly uses the property that the distribution of $(X, Y, W)$ has full support. In particular, (11) and (12) only hold on the support of the distribution.

We define $f_{X,\hat{Y}|W}(x,\hat{y} \mid w)$ for $X$, $\hat{Y}$, and $W$ similarly, and we let $g(x,y \mid w) := \int f_{X,\hat{Y}|W}(x,\hat{y} \mid w)\, M_\theta(y,\hat{y})\, d\hat{y}$. By a similar derivation, we have analogues of (11) and (12): for all $x, y, w$ and $\hat{y} \in S_\theta(y)$,
$$g(x,y \mid w) = f_{X,\hat{Y}|W}(x,\hat{y} \mid w) \quad (13)$$
and
$$g(x,y \mid w) = f_{X,Y|W}(x,y \mid w). \quad (14)$$
Suppose $\theta$ is not injective. There exist $y_1$, $y_2$ and $y^*$ such that $y_1 \ne y_2$ and $y^* \in S_\theta(y_1) \cap S_\theta(y_2)$. For all $x$ and $w$,
$$\begin{aligned} f_{X,Y|W}(x,y_1 \mid w) &= g(x,y_1 \mid w) && \text{(by (14))} \\ &= f_{X,\hat{Y}|W}(x,y^* \mid w) && \text{(by (13) and } y^* \in S_\theta(y_1)) \\ &= g(x,y_2 \mid w) && \text{(by (13) and } y^* \in S_\theta(y_2)) \\ &= f_{X,Y|W}(x,y_2 \mid w) && \text{(by (14)).} \end{aligned}$$
Since $\Phi'$ is injective, for all $x$ and $w$,
$$\frac{P_{X,Y|W}(x,y_1 \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y_1 \mid w)} = \frac{P_{X,Y|W}(x,y_2 \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y_2 \mid w)}.$$
Therefore, there exist distinct $y_1$ and $y_2$ such that for all $w$,
$$P_{X|Y,W}(\cdot \mid y_1, w) = P_{X|Y,W}(\cdot \mid y_2, w).$$
This contradicts the condition that $(X, Y, W)$ is second order stochastic relevant.

C PROOFS IN SECTION 3.1

Proof of Lemma 3.2.
$$\begin{aligned} & E_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a,b)}{P_{C|A}(c \mid a)}\right] - E_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a,\theta(b))}{P_{C|A}(c \mid a)}\right] \\ &= E_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a,b)}{P_{C|A,B}(c \mid a,\theta(b))}\right] \\ &= E_{a,b}\left[E_c\left[\log \frac{P_{C|A,B}(c \mid a,b)}{P_{C|A,B}(c \mid a,\theta(b))} \,\Big|\, A = a, B = b\right]\right] \\ &= E_{a,b}\left[D_{KL}\left(P_{C|A,B}(\cdot \mid a,b) \,\big\|\, P_{C|A,B}(\cdot \mid a,\theta(b))\right)\right]. \end{aligned}$$
Let
$$D(a,b,b') := D_{KL}\left(P_{C|A,B}(\cdot \mid a,b) \,\big\|\, P_{C|A,B}(\cdot \mid a,b')\right),$$
the KL divergence from the distribution of $C$ conditional on $A = a$ and $B = b'$ to the distribution of $C$ conditional on $A = a$ and $B = b$. Thus, we have
$$E_{a,b}\left[D_{KL}\left(P_{C|A,B}(\cdot \mid a,b) \,\big\|\, P_{C|A,B}(\cdot \mid a,\theta(b))\right)\right] = E_{a,b}\left[D(a,b,\theta(b))\right]. \quad (15)$$
First note that by Jensen's inequality (Theorem B.3), $D(a,b,\theta(b)) \ge 0$ for all $a$ and $b$, so (15) is non-negative. This shows the first part.

Let $E := \{b : \theta(b) \ne b\} \subseteq \mathcal{B}$, which is the event that $\theta$ disagrees with the identity mapping. Because $P$ is $\langle A, B, C\rangle$-second order stochastic relevant, for all $b \in E$ there is an $a$ such that $P_{C|A,B}(\cdot \mid a,b) \ne P_{C|A,B}(\cdot \mid a,\theta(b))$, so $D(a,b,\theta(b)) > 0$ by Jensen's inequality (Theorem B.3). Therefore, when equality holds, the probability of event $E$ is zero, and $\theta$ is an identity because the signal space is finite.

Proof of Theorem 3.1. The proof has two parts: Mechanism 1 is truthful, and the truth-telling strategy profile maximizes the ex ante agent welfare.

Truthfulness. We first show Mechanism 1 is truthful. For the expert Alice, suppose Bob and Chloe provide their signals truthfully. Her expected payment consists of two prediction scores, $LSR[b, \hat{p}]$ and $LSR[b, \hat{q}]$, where $\hat{p}$ is her first prediction and $\hat{q}$ is the second. The expected first prediction score (under the randomness of Bob's signal $b$ conditional on Alice's signal $a$) is
$$E_{b \sim P_{B|A}(\cdot \mid a)}\left[LSR[b, \hat{p}]\right] \le E_{b \sim P_{B|A}(\cdot \mid a)}\left[LSR[b, P_{B|A}(\cdot \mid a)]\right],$$
so it is maximized by reporting the truthful prediction $P_{B|A}(\cdot \mid a)$, since the log scoring rule is proper (Definition 2.2). Similarly, her expected payment is maximized when her improved prediction is $P_{B|A,C}(\cdot \mid a,c)$.

If Chloe is the source, she will tell the truth given that Alice and Bob report truthfully, by Lemma 3.2. Formally, let Alice's, Bob's, and Chloe's signals be $a$, $b$, and $c$ respectively.
Let $\tau: \mathcal{C} \to \mathcal{C}$ denote a (deterministic) best response for Chloe. Alice's initial prediction of Bob's signal is $P_{B|A}(\cdot \mid a)$. Because Chloe unilaterally deviates, Alice's improved prediction is $P_{B|A,C}(\cdot \mid a,\tau(c))$. Therefore, Chloe's payment is $LSR[b, P_{B|A,C}(\cdot \mid a,\tau(c))] - LSR[b, P_{B|A}(\cdot \mid a)]$. Note that regardless of Chloe's report, the initial prediction is $\hat{p} = P_{B|A}(\cdot \mid a)$. Hence, equivalently, Chloe's best response also maximizes $LSR[b, P_{B|A,C}(\cdot \mid a,\hat{c})] - LSR[b, P_{B|A}(\cdot \mid a)]$. Taking expectation over signals $a, b, c$ and strategy $\tau$, we have
$$\begin{aligned} U(\tau) &:= E_{a,b,c}\left[LSR[b, P_{B|A,C}(\cdot \mid a,\tau(c))] - LSR[b, P_{B|A}(\cdot \mid a)]\right] \\ &= E_{a,b,c}\left[\log\left(P_{B|A,C}(b \mid a,\tau(c))\right) - \log\left(P_{B|A}(b \mid a)\right)\right] && \text{(by (1))} \\ &= E_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a,\tau(c))}{P_{B|A}(b \mid a)}\right]. \end{aligned}$$
Similarly, the ex ante payment of Chloe when her strategy is truth-telling $\tau^*$ is
$$U(\tau^*) = E_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a,c)}{P_{B|A}(b \mid a)}\right].$$
The difference between $U(\tau^*)$ and $U(\tau)$ is
$$U(\tau^*) - U(\tau) = E_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a,c)}{P_{B|A}(b \mid a)}\right] - E_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a,\tau(c))}{P_{B|A}(b \mid a)}\right].$$
First, by Lemma 3.2, we know $U(\tau^*) \ge U(\tau)$. However, because $\tau$ is a best response, the inequality is in fact an equality, $U(\tau^*) = U(\tau)$. By the second part of Lemma 3.2, this shows $\tau$ is an identity and $\tau = \tau^*$.

If Chloe is the target, her action does not affect her expected payment, so reporting her signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

Strongly truthful. Now we show the truth-telling strategy profile $\tau^*$ maximizes the ex ante agent welfare under $P$. If Bob is the target, the ex ante agent welfare (before anyone receives signals) in the truth-telling strategy profile $\tau^*$ is
$$\begin{aligned} W(\tau^*; P) &= E_{(a,b,c) \sim P}\left[2\left(LSR[b, P_{B|A,C}(\cdot \mid a,c)] - LSR[b, P_{B|A}(\cdot \mid a)]\right)\right] \\ &= 2\, E_{(a,b,c) \sim P}\left[\log \frac{P_{B|A,C}(b \mid a,c)}{P_{B|A}(b \mid a)}\right] \\ &= 2\, I(B;C \mid A), \end{aligned}$$
which is (twice) the conditional mutual information between Bob's and Chloe's signals given Alice's signal.

On the other hand, let $\sigma = (\sigma_A, \sigma_B, \sigma_C)$ be an equilibrium strategy profile where Bob and Chloe report signals $\sigma_B(b)$ and $\sigma_C(c)$ respectively.
Since $\sigma$ is an equilibrium, if Bob is the target, Alice with signal $a$ will predict $\sigma_B(B)$ truthfully and report $\hat{p} = P_{\sigma_B(B)|A}(\cdot \mid a)$ and $\hat{q} = P_{\sigma_B(B)|A,\sigma_C(C)}(\cdot \mid a, \sigma_C(c))$. By a similar computation, the ex ante agent welfare is
$$W(\sigma; P) = 2\, I(\sigma_B(B); \sigma_C(C) \mid A) \le 2\, I(B; C \mid A) = W(\tau^*; P).$$
The inequality is based on the data processing inequality (Theorem 2.3). Moreover, by Proposition B.2, the equality holds only if $\sigma$ is a permutation strategy profile.

D PROOF IN SECTION 3.2
D.1 Proof of Theorem 3.3
The proof is mostly identical to that of Theorem 3.1 in Appendix C. We include it for completeness.

Proof of Theorem 3.3. The proof also has two parts: Mechanism 2 is truthful, and the truth-telling strategy profile maximizes the ex ante agent welfare.

We first show Mechanism 2 is truthful. For the expert Alice, the proof is identical to the proof of Theorem 3.1. By Lemma 3.4, if Bob is the target, he will tell the truth given that Alice and Chloe report truthfully. If Bob is the source, his action does not affect his expected payment, so reporting his signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

The proof of strong truthfulness is identical to the proof of Theorem 3.1.

Note that if we randomize the roles among Alice, Bob, and Chloe, each agent has a non-negative expected payment at the truth-telling equilibrium.

E PROOF OF THEOREM 3.6
For the expert Alice, suppose Bob and Chloe provide their signals truthfully. Her payment consists of two prediction scores. When the virtual signal $\sigma = \emptyset$, the prediction score (under the randomness of Bob's signal $b$ conditional on Alice's signal $a$) is
$$E_{b \sim P_{B|A}(\cdot \mid a)}\left[LSR[b, \hat{p}]\right] \le E_{b \sim P_{B|A}(\cdot \mid a)}\left[LSR[b, P_{B|A}(\cdot \mid a)]\right].$$
Since the log scoring rule is proper (Definition 2.2), reporting the truthful prediction $P_{B|A}(\cdot \mid a)$ maximizes it. Similarly, when $\sigma \ne \emptyset$, her (conditional) expected payment is maximized when her improved prediction is $P_{B|A,C}(\cdot \mid a,\sigma)$.

For the target Bob, suppose Alice and Chloe report truthfully.
We will follow the proof of Lemma 3.4 to show Bob's best response is truth-telling. Let $\theta: \mathcal{B} \to \mathcal{B}$ be a (deterministic) best response for Bob. Bob's payment depends on four values: signals $a$, $b$, $c$, and the virtual signal $\sigma$:
$$U_B = \mathbb{1}[\sigma = c]\, LSR[\theta(b), P_{B|A,C}(\cdot \mid a,\sigma)] - \mathbb{1}[\sigma = \emptyset]\, LSR[\theta(b), P_{B|A}(\cdot \mid a)].$$
And Bob's expected payment is
$$U_B(\theta) = \frac{1}{|\mathcal{C}| + 1}\, E_{a,b,c}\left[LSR[\theta(b), P_{B|A,C}(\cdot \mid a,c)] - LSR[\theta(b), P_{B|A}(\cdot \mid a)]\right].$$
Thus, by the same argument as in Lemma 3.4, Bob's best response is truth-telling. If Bob is the source, his action does not affect his expected payment, so reporting his signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

The proof of strong truthfulness is identical to the proof of Theorem 3.3.

F SKETCH PROOF FOR PROPOSITION 4.2
A consistent predictor $p$ of a value $v$ given evidence $X_1, X_2, \ldots$ is one where more information leads to a better prediction, such that
$$\lim_{n \to \infty} \Pr\left[|p(X_1, X_2, \ldots, X_n) - v| \ge \epsilon\right] \to 0.$$
The proposition follows by seeing that, fixing $\epsilon$ and $x_1$, both $Q^{(n)}_{-\{1,2\}}(x_1)$ and $P_{X_1|X_{-1}}(x_1 \mid x_2, x_3, \ldots, x_n)$ are consistent estimators for $P_{X|\Omega}(x_1 \mid \omega)$.

$Q^{(n)}_{-\{1,2\}}(x_1)$ is the empirical distribution of $n - 2$ independent samples from $P_{X|\Omega}(\cdot \mid \omega)$ used to estimate $P_{X|\Omega}(x_1 \mid \omega)$, and is therefore a consistent estimator.

On the other hand, because $X_1$ and $X_2, X_3, \ldots, X_n$ are independent conditional on $\Omega$, the posterior distribution $P_{\Omega|X_{-1}}(\omega \mid x_2, \ldots, x_n)$ is consistent. That is, for all $\omega \in [m]$,
$$\Pr\left[|P(\Omega = \omega \mid X_2, X_3, \ldots, X_n) - 1| \ge \epsilon \mid \Omega = \omega\right] \to 0.$$
Thus
$$P_{X_1|X_{-1}}(\cdot \mid x_2, x_3, \ldots, x_n) = \sum_\omega P_{X_1|\Omega}(\cdot \mid \omega)\, P_{\Omega|X_{-1}}(\omega \mid x_2, x_3, \ldots, x_n)$$
is also a consistent predictor of $P_{X_1|\Omega}(\cdot \mid \omega)$, which completes the proof.

G GENERAL MEASURE SPACES
G.1 Settings
There are three characters: Alice, Bob, and Chloe. Consider three measure spaces $(\mathcal{A}, \mathcal{S}_A, \mu_A)$, $(\mathcal{B}, \mathcal{S}_B, \mu_B)$, and $(\mathcal{C}, \mathcal{S}_C, \mu_C)$.
Let $\mathcal{X} := \mathcal{A} \times \mathcal{B} \times \mathcal{C}$, $\mathcal{S} := \mathcal{S}_A \times \mathcal{S}_B \times \mathcal{S}_C$, and $\mu = \mu_A \otimes \mu_B \otimes \mu_C$, where $\otimes$ denotes the product between measures. Let $\mathcal{P}(\mathcal{X})$ be the set of probability density functions on $\mathcal{X}$ with respect to $\mu$.

Alice (and respectively Bob, Chloe) has a privately observed signal $a$ (respectively $b$, $c$) from set $\mathcal{A}$ (respectively $\mathcal{B}$, $\mathcal{C}$). They all share a common prior belief that their signals $(a,b,c)$ are generated from a random variable $X := (A, B, C)$ on $(\mathcal{X}, \mathcal{S})$ with a probability measure $P \in \mathcal{P}(\mathcal{X})$ and a positive density function $p > 0$. We consider a uniform second order stochastic relevance for general measure spaces as follows:

Definition G.1. A random variable $(A, B, C)$ in $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$ with a probability measure $P$ is not $\langle A, B, C\rangle$-uniform stochastic relevant if there exist a signal $a \in \mathcal{A}$ and two distinct signals $b, b' \in \mathcal{B}$ such that the posterior on $C$ is identical whether $B = b$ with $A = a$ or $B = b'$ with $A = a$: $P_{C|A,B}(\cdot \mid a,b) = P_{C|A,B}(\cdot \mid a,b')$ almost surely on $\mu_C$. Otherwise we call $P$ $\langle A, B, C\rangle$-uniform stochastic relevant. Thus, when Alice is making a prediction of Chloe's signal, Bob's signal is always relevant and induces different predictions when $B = b$ or $B = b'$.

We call $P$ uniform second order stochastic relevant if it is $\langle X, Y, Z\rangle$-uniform stochastic relevant where $\langle X, Y, Z\rangle$ is any permutation of $\{A, B, C\}$.

Formally, $\mathcal{P}(\mathcal{X})$ is the set of all distributions on $\mathcal{X}$ that are absolutely continuous with respect to the measure $\mu$. For $P \in \mathcal{P}(\mathcal{X})$, we denote the density of $P$ with respect to $\mu$ by $p(\cdot)$. For example, if $\mathcal{X}$ is a discrete space, we can set $\mu$ as the counting measure on $\mathcal{X}$. If $\mathcal{X}$ is a Euclidean space $\mathbb{R}^d$, we can use the Lebesgue measure.

One major difference between $\langle A,B,C\rangle$-second order stochastic relevance (Definition 2.1) and $\langle A,B,C\rangle$-uniform second order stochastic relevance (Definition G.1) is the quantifier of $a$: given any distinct pair $b, b'$, it is sufficient to have one $a^*$ such that $P_{C|A,B}(\cdot \mid a^*, b) \ne P_{C|A,B}(\cdot \mid a^*, b')$.
However, uniform stochastic relevance requires, for all $a$, $P_{C|A,B}(\cdot \mid a,b) \ne P_{C|A,B}(\cdot \mid a,b')$. One issue for second order stochastic relevance in general measure spaces is that we can change a measure-zero set of points to make such a distribution stochastically irrelevant, and the probability of drawing an $a^*$ such that $P_{C|A,B}(\cdot \mid a^*,b) \ne P_{C|A,B}(\cdot \mid a^*,b')$ may be zero.

G.2 Theorems 3.1 and 3.3 on general measure spaces
Here, we state results analogous to Theorems 3.1 and 3.3. The proofs are mostly identical.

Theorem G.2. Given a measure space $(\mathcal{X}, \mathcal{S}, \mu)$, if the common prior $P$ is uniform second order stochastic relevant on the measurable space $(\mathcal{X}, \mathcal{S})$, and $P$ is absolutely continuous with respect to $\mu$, Mechanism 1 has the following properties:
(1) The truth-telling strategy profile $\tau^*$ is a strict Bayesian Nash equilibrium.
(2) The ex ante agent welfare in the truth-telling strategy profile $\tau^*$ is strictly better than in all non-invertible strategy profiles.

Here the maximum agent welfare happens not only at permutation strategy profiles, but also at invertible strategy profiles. This limitation is due to the strictness of the data processing inequality (Theorem B.1). For example, consider a pair of random variables $(X, Y)$ on $\mathbb{Z}_{>0} \times \mathbb{Z}_{>0}$. Let $\theta$ be a Markov operator such that for $y \in \mathbb{Z}_{>0}$, $\theta(y) = y$ with probability $1/2$ and $\theta(y) = -y$ otherwise. Although $\theta$ is not a one-to-one function, $I(X;Y) = I(X;\theta(Y))$. On the other hand, following the proof of Theorem B.1, we can say the equality holds when $\theta$ is injective.

The guarantee of Mechanism 2 is the same.

Theorem G.3. Given a measure space $(\mathcal{X}, \mathcal{S}, \mu)$, if the common prior $P$ is uniform second order stochastic relevant on the measurable space $(\mathcal{X}, \mathcal{S})$, and $P$ is absolutely continuous with respect to $\mu$, Mechanism 2 has the following properties:
(1) The truth-telling strategy profile $\tau^*$ is a strict Bayesian Nash equilibrium.
(2) The ex ante agent welfare in the truth-telling strategy profile $\tau^*$ is strictly better than in all non-invertible strategy profiles.
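The $\pm y$ example above can be checked numerically. The sketch below (the joint table is made up; the operator $\theta$ is the one from the example) splits each column's mass evenly between $+y$ and $-y$ and confirms that the mutual information is unchanged even though $\theta$ is not a one-to-one function.

```python
import numpy as np

def mi(P2):
    """Mutual information of a two-variable joint probability table."""
    Pa = P2.sum(axis=1, keepdims=True)
    Pb = P2.sum(axis=0, keepdims=True)
    mask = P2 > 0
    return np.sum(P2[mask] * np.log((P2 / (Pa * Pb))[mask]))

# Made-up joint distribution of (X, Y) with Y in {1, 2} (columns).
P = np.array([[0.1, 0.4],
              [0.3, 0.2]])

# theta(y) = y w.p. 1/2 and -y otherwise: each column's mass splits in
# half across the enlarged space; columns ordered (-2, -1, +1, +2).
P_theta = np.column_stack([P[:, 1] / 2, P[:, 0] / 2,
                           P[:, 0] / 2, P[:, 1] / 2])

print(mi(P), mi(P_theta))   # equal: theta loses no information about X
```

The equality holds because every realization of $\theta(Y)$ determines $Y = |\theta(Y)|$, so the kernel is injective in the sense used in the proof of Theorem B.1 even though the map is random.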
ACM Transactions on Economics and Computation

Publisher: Association for Computing Machinery
Copyright: © 2022, held by the owner/author(s)
ISSN: 2167-8375
eISSN: 2167-8383
DOI: 10.1145/3565560
Abstract

Two Strongly Truthful Mechanisms for Three Heterogeneous Agents Answering One Question

GRANT SCHOENEBECK, University of Michigan
FANG-YI YU, George Mason University

Peer prediction mechanisms incentivize self-interested agents to truthfully report their signals even in the absence of verification, by comparing agents' reports with their peers. We propose two new mechanisms, Source and Target Differential Peer Prediction, and prove very strong guarantees for a very general setting.

Our Differential Peer Prediction mechanisms are strongly truthful: truth-telling is a strict Bayesian Nash equilibrium. Also, truth-telling pays strictly higher than any other equilibria, excluding permutation equilibria, which pay the same amount as truth-telling. The guarantees hold for asymmetric priors among agents, which the mechanisms need not know (detail-free), in the single-question setting. Moreover, they only require three agents, each of whom submits a single-item report: two report their signals (answers), and the other reports her forecast (a prediction of one of the other agents' reports). Our proof technique is straightforward, conceptually motivated, and turns on the logarithmic scoring rule's special properties.

Moreover, we can recast the Bayesian Truth Serum mechanism [20] into our framework. We can also extend our results to the setting of continuous signals with a slightly weaker guarantee on the optimality of the truthful equilibrium.

CCS Concepts: • Information systems → Incentive schemes; • Theory of computation → Quality of equilibria; • Mathematics of computing → Information theory.

Additional Key Words and Phrases: Peer prediction, Log scoring rule, Prediction market

1 INTRODUCTION
Three friends, Alice, Bob, and Chloe, watch a political debate on television. We want to ask their opinions on who won the debate. We are afraid they may be less than truthful unless we can pay them for truthful answers. Thus we seek to design mechanisms that reward the agents for truth-telling. Their opinions may systematically differ, but are nonetheless related.
For example, it turns out Alice values description and argumentation, Bob values argumentation and presentation, and Chloe values description and presentation.

In this paper, we design two peer prediction mechanisms by asking Alice, Bob, and Chloe to play three characters: the expert who makes predictions, the target who is being predicted, and the source who helps predictions. The source and target are asked for their opinions. In the most straightforward setting, the expert makes two predictions (e.g., 70% chance of "yes" and 30% chance of "no") of the target's opinion: an initial prediction before the source's opinion is revealed and an improved prediction afterwards.

Our political debate motivation might be quixotic (at the very least we need to ensure that the friends do not communicate during the debate so that we can elicit the prediction of the target's report with and without the source's information). However, peer-grading can easily fit into this paradigm: we might ask Bob and Carol to grade a paper, while Alice tries to predict Carol's mark for the paper before and after seeing Bob's mark. Similarly, Alice, Bob, and Carol might be peer-reviewing a paper, filling out a survey, or doing any crowdsourcing task (e.g., labeling data for machine learning applications). In such cases, it is natural to reward agents for doing a good job, and also to have them update a prediction with additional information.

Both authors contributed equally to this research. Authors' addresses: Grant Schoenebeck, schoeneb@umich.edu, University of Michigan; Fang-Yi Yu, fyy2412@gmu.edu, George Mason University.
For simplicity (and to collect a grade from all three agents), Alice, Bob, and Chloe might play all three characters: Alice could predict Chloe's signal before and after seeing Bob's; Bob could predict Alice's signal before and after seeing Chloe's; and Chloe could predict Bob's signal before and after seeing Alice's.

This problem is known in the literature as peer prediction or information elicitation without verification. In the single-question setting, agents are only asked one question. Incentivizing agents is important so that they not only participate, but provide thoughtful and accurate information. Our goal is to elicit truthful information from agents with minimal requirements.

Drawing from previous peer prediction literature, we would like our mechanisms to have the following desirable properties:

Strongly Truthful [12]: Providing truthful answers is a Bayesian Nash equilibrium (BNE) and also guarantees the maximum agents' welfare among any equilibrium. This maximum is "strict" with the exception of a few unnatural permutation equilibria where agents report according to a relabeling of the signals (defined more formally in Sect. 2). This will incentivize the agents to tell the truth, even if they believe the other agents will disagree with them. Moreover, they have no incentive to coordinate on an equilibrium where they do not report truthfully. In particular, note that playing a permutation equilibrium still requires as much effort from the agents as playing truth-telling.

General Signals: The mechanism should work for heterogeneous agents who may even have continuous signals (with a weaker truthfulness guarantee). In our above example, the friends may not have the same political leanings, and the mechanism should be robust to that. Furthermore, instead of a single winner, we may want to elicit the magnitude of their (perceived) victory.

Detail-Free: The mechanism is not required to know the specifics about the different agents (e.g., the aforementioned joint prior). In the above example, the mechanism should not be required to know the a priori political leanings of the different agents.
Few Agents: We would like our mechanisms to work using as few agents as possible, in our case, three.

Single-Item Reports: We would like to make it easy for agents so that they provide very little information: only one item, either their signal or a prediction. In our case, two agents will need to provide their signals (e.g., whom they believe won the debate). The remaining agent will need to provide a prediction on one outcome, a single real value (e.g., her forecast for how likely a particular other agent was to choose a particular candidate as the victor).

1.1 Our Contributions
• We define two Differential Peer Prediction mechanisms (Mechanisms 1 and 2) which are strongly truthful and detail-free for the single-question setting and only require a single-item report from three agents. Moreover, the agents need not be homogeneous and their signals may be continuous.
• Mechanism 1 rewards the source for the improvement of the expert's prediction. We can use any strictly proper scoring rule (see Definition 2.2) to measure the improvement, and truth-telling is an equilibrium. Moreover, if we use the log scoring rule, truth-telling has the highest total payment among all equilibria.
• Mechanism 2, which rewards the target for the improvement of the expert's prediction, exploits special properties of the log scoring rule (see Techniques below for details), which may be of independent interest. Here, the mechanism can be generalized by replacing the expert with a suitable predictor that predicts the target's report given information from a source (which could be the collection of many agents). We show how to recast the Bayesian Truth Serum mechanism into the framework of this mechanism (Sect. 4). This gives added intuition for its guarantees.

(Here anonymity may be required to preserve privacy. Kong and Schoenebeck [12] show that it is not possible for truth-telling to pay strictly more than permutation equilibria in detail-free mechanisms.)
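The target-reward idea in Mechanism 2 can be illustrated with a minimal numeric sketch. Assuming, as in our analysis, that the target's report $r$ is paid the log-score improvement $LSR[r, P_{B|A,C}(\cdot \mid a,c)] - LSR[r, P_{B|A}(\cdot \mid a)]$ while the other two agents are truthful, the code below enumerates every deterministic misreporting map on a made-up full-support prior and confirms that truth-telling maximizes the target's expected payment. All numbers are invented for illustration.

```python
import itertools
import numpy as np

# Made-up full-support common prior P[a, b, c] over binary signals.
P = np.array([[[0.15, 0.05], [0.05, 0.20]],
              [[0.10, 0.05], [0.05, 0.35]]])
Pa = P.sum(axis=(1, 2))       # marginal of a
Pab = P.sum(axis=2)           # joint of (a, b)
Pac = P.sum(axis=1)           # joint of (a, c)

def expected_payment(theta):
    """Target's ex ante payment when reporting theta[b] instead of b."""
    total = 0.0
    for a, b, c in itertools.product(range(2), repeat=3):
        r = theta[b]
        improved = P[a, r, c] / Pac[a, c]   # P_{B|A,C}(r | a, c)
        initial = Pab[a, r] / Pa[a]         # P_{B|A}(r | a)
        total += P[a, b, c] * (np.log(improved) - np.log(initial))
    return total

# Enumerate all deterministic maps theta: {0,1} -> {0,1}.
best = max(itertools.product(range(2), repeat=2), key=expected_payment)
print(best)   # (0, 1): the identity map, i.e. truth-telling
```

The reason this works is a Bayes-rule identity: the expected improvement equals an expected KL divergence between posteriors, which is strictly positive whenever the report deviates from the signal and the prior is stochastically relevant.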
• We provide a simple, conceptually motivated proof for the guarantees of Differential Peer Prediction mechanisms. Especially in contrast to the most closely related work ([10]), our proof is very simple.

1.2 Summary of Our Techniques
Target Incentive Mechanisms. Many of the mechanisms for the single question use what we call source incentives: they pay agents for reporting a signal that improves the prediction of another agent's signal. The original peer prediction mechanism [15] does exactly this. To apply this idea to the detail-free setting [31, 33], mechanisms take a two-step approach: they first elicit an agent's prediction of some target agent's report, and then measure how much that prediction improves given a report from a source agent.

In Section 3.2, we explicitly develop a technique, which we call target incentives, for rewarding certain agents for signal reports that agree with a prediction about them. We show that log scoring rules can elicit signals as well as forecasts by paying the difference of the log scoring rule on the signal between an initial prediction and an improved prediction. This may be of independent interest, and is also the foundation for the results in Sections 3.2 and 4.

Information Monotonicity. We use information monotonicity, a tool from information theory, to obtain strong truthfulness. Like the present paper, the core of the argument that the Disagreement Mechanism [10] is strongly truthful (for symmetric equilibria) is based on information monotonicity. However, because it is hard to characterize the equilibria in the Disagreement Mechanism, the analysis ends up being quite complex. A framework for deriving strongly truthful mechanisms from information monotonicity, which we implicitly employ, is distilled in Kong and Schoenebeck [12].

In Section 3, we use the above techniques to develop strongly truthful mechanisms, source-Differential Peer Prediction and target-Differential Peer Prediction, for the single-question setting. Source-Differential Peer Prediction is quite similar to the Knowledge-Free Peer Prediction Mechanism [33]; however, it is strongly truthful, which we show using the information monotonicity of the log scoring rule. Target-Differential Peer Prediction additionally uses the target incentive techniques above to show it is strongly truthful.

1.3 Related Work
Single Task Setting. In this setting, each agent receives a single signal from a common prior. Miller et al. [15] introduce the first mechanism for single-task signal elicitation that has truth-telling as a strict Bayesian Nash equilibrium and does not need verification. However, their mechanism requires full knowledge of the common prior and there exist some equilibria where agents get paid more than truth-telling. At a high level, the agents can all simply submit the reports with the highest expected payment, and this will typically yield a payment much higher than that of truth-telling. Note that this is both natural to coordinate on (in fact, Gao et al. [6] found that in an online experiment, agents did exactly this) and does not require any effort toward the task from the agents. Kong et al. [9] modify the above mechanism such that truth-telling pays strictly better than any other equilibrium, but it still requires the full knowledge of the common prior.

Prelec [20] designs the first detail-free peer prediction mechanism: Bayesian Truth Serum (BTS). Moreover, BTS is strongly truthful and can easily be made to have one-item reports. However, BTS requires an infinite number of participants, does not work for heterogeneous agents, and requires the signal space to be finite. The analysis, while rather short, is equally opaque. A key insight of this work is to ask agents not only about their own signals, but for forecasts (predictions) of the other agents' reports.

A series of works [1, 22, 23, 31–33] relax the large population requirement of BTS but lose the strongly truthful property.
Zhang and Chen [33] is unique among prior work in the single-question setting in that it works for heterogeneous agents, whereas other previous detail-free mechanisms require homogeneous agents with conditionally independent signals.

To obtain the strongly truthful property, Kong and Schoenebeck [10] introduce the Disagreement Mechanism, which is detail-free, strongly truthful (for symmetric equilibria), and works for six agents. Thus it generalizes BTS to the finite-agent setting while retaining strong truthfulness. However, it requires homogeneous agents, cannot handle continuous signals, and fundamentally requires that each agent report both a signal and a prediction. Moreover, its analysis is quite involved. However, it is within the BTS framework, in that it only asks for agents' signals and predictions, whereas our mechanism typically asks at least one agent for a prediction after seeing the signal of another agent.

Finally, most of these works either have multiple rounds [32, 33], or work only if the common prior is symmetric [1, 13, 20, 22, 31], though sometimes this can be relaxed to a restriction more like positive correlation [32]. Our mechanisms also have multiple rounds; however, we can simplify them to a single round, but this requires asking questions that may be slightly more complex than in the BTS framework.

Prelec [21], posted subsequently to the conference publication of this work [25] but developed independently, uses very similar techniques to this work combined with the setting explored in [32], where agents are asked questions before and after seeing their signal. Similar to our target DPP mechanism, the mechanisms in Prelec [21] are target incentive mechanisms and pay the target by the log scoring rule on different pairs of initial and improved predictions (e.g., one agent's predictions before and after getting her signal, which requires additional temporal coordination). On the other hand, with the above additional temporal coordination, those mechanisms can work with two agents, while our mechanism requires at least three agents for the setting we consider.
Surprisingly, and, with the exception of a footnote in Miller et al. [15], unmentioned by any of the above works, the idea of target incentive mechanisms with the log scoring rule can be dated back over 20 years to a (so far unpublished) working paper [19], which studies information pump games that also use improvement of predictions on the log scoring rule to encourage truthful reports. In particular, the paper presents a special case of our main technical lemma (Lemma 3.4) that requires a slightly stronger assumption than our second order stochastic relevance (Definition 2.1). Besides a weaker assumption, our connection to information theory enables us to design strongly truthful mechanisms instead of merely truthful mechanisms.

Continuous Single Task Setting. Kong et al. [13] show how to generalize both BTS and the Disagreement Mechanism (with similar properties including homogeneous agents) into a restricted continuous setting where signals are Gaussians related in a simple manner. The generalization of the Disagreement Mechanism requires the number of agents to increase with the dimension of the continuous space.

The aforementioned Radanovic and Faltings [23] considers continuous signals. However, it uses a discretization approach which yields exceedingly complex reports. Additionally, it requires homogeneous agents.

In a slightly different setting, Kong and Schoenebeck [11] study eliciting agents' forecasts for some (possibly unverifiable) event, which are continuous values between 0 and 1. However, here we are concerned with eliciting signals which can be from a much richer space.

Multi-task Setting. In the multi-task setting, introduced in Dasgupta and Ghosh [5], agents are assigned a batch of a priori similar tasks which require each agent's private information to be a binary signal. Several works extend this to multiple-choice questions [5, 8, 12, 24, 27]. Recently, a sequence of works study the robustness and limitations of the multi-task setting [3, 26, 34].
The multi-task mechanisms and our single-task mechanism each offer advantages. The key advantage of the multi-task mechanisms is that agents are only asked for their signal, and not a prediction. Multi-task mechanisms accomplish this by, implicitly or explicitly, learning some relation between the reports of different agents. However, because of this, multi-task mechanisms strongly depend on an assumption that the joint distributions of signals on different questions are i.i.d. and that the agents apply the same (possibly random) strategy to each task in an i.i.d. manner. This assumption is not unreasonable in certain crowd-sourcing, peer review, and peer grading settings, but is likely violated in a survey setting. In the setting of the present paper, no such assumption is needed, as the mechanism can be applied individually to each question or task.

Even in settings where the i.i.d. assumption holds, it may be the case that (in practice) agents receive information in addition to the elicited signal, so that the above learning approach fails. For example, an agent may like a paper, but believe it to be on a generally unpopular topic, and therefore conclude that the mechanism will incorrectly predict her rating. This is because the relation between agents' reports is learned on all topics and so may be incorrect when applied to the subset of papers on unpopular topics. In such a case the strategic guarantees of the multi-task mechanisms may fail. Our mechanism mitigates this problem by having the agents themselves do the prediction, as they also have access to the contextual information, which will naturally be incorporated into their prediction.

Another drawback of the multi-task setting, as its name suggests, is the number of questions required for each agent. Mechanisms tend to either make assumptions about the correlation between signals (e.g., [5]) or the structure must be learned (e.g., [24, 27]).
In the latter case, the strategic guarantees are parameterized by an $\epsilon$ which only decreases asymptotically in the number of agents [24]. An exception to this is the DMI mechanism [8], but this still often requires a fairly large number of tasks to work at all and has additional restrictions. However, recent work [3] shows that the pairing mechanism [24] combined with proper machine learning can work in settings with as few as four tasks per agent. In contrast, our mechanism only requires a single task.

2 PRELIMINARIES
2.1 Peer Prediction Mechanism
There are three characters, Alice, Bob, and Chloe, in our mechanisms. Alice (and respectively Bob, Chloe) has a privately observed signal $a$ (respectively $b$, $c$) from a set $\mathcal{A}$ (respectively $\mathcal{B}$, $\mathcal{C}$). They all share a common belief that their signals $(a, b, c)$ are generated from a random variable $(A, B, C)$ which takes values from $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$ with a probability measure $P$ called the common prior. $P$ describes how agents' private signals relate to each other.

Agents are Bayesian. For instance, after Alice receives $A = a$, she updates her belief to the posterior $P((B,C) = (\cdot,\cdot) \mid A = a)$, which is a distribution over the remaining signals. We will instead use $P_{B,C|A}(\cdot \mid a)$ to simplify the notation. Similarly, Alice's posterior of Bob's signal is denoted by $P_{B|A}(\cdot \mid a)$, which is a distribution on $\mathcal{B}$.

A peer prediction mechanism on Alice, Bob, and Chloe has three payment functions $(U_A, U_B, U_C)$. The mechanism first collects reports $r := (r_A, r_B, r_C)$ from the agents. It pays Alice with $U_A(r)$ (and Bob and Chloe analogously). Alice's strategy $\theta_A$ is a (random) function from her signal to a report. All agents are rational and risk-neutral, so they are only interested in maximizing their (expected) payment. Thus, given a strategy profile $\theta := (\theta_A, \theta_B, \theta_C)$, Alice, for example, wants to maximize her ex ante payment under common prior $P$, which is $U_A(\theta; P) := E_{P,\theta}[U_A(r)]$. Let the ex ante agents' welfare denote the sum of the ex ante payments to all agents, $U_A(\theta; P) + U_B(\theta; P) + U_C(\theta; P)$.
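The bookkeeping above (a prior, a strategy profile mapping signals to reports, payment functions on the report vector, and ex ante payments as expectations under the prior) can be sketched as follows. The tiny prior and the constant payment rule are made up purely to show the structure, not any mechanism from this paper.

```python
import itertools

# Made-up common prior over (a, b, c) with binary signals.
prior = {s: p for s, p in zip(
    itertools.product(range(2), repeat=3),
    [0.15, 0.05, 0.05, 0.20, 0.10, 0.05, 0.05, 0.35])}

# A strategy profile maps each agent's signal to a report.
truth = {agent: (lambda s: s) for agent in "ABC"}

# A toy payment rule for Alice: 1 if her report matches Bob's, else 0.
def U_A(r):
    return 1.0 if r[0] == r[1] else 0.0

def ex_ante_payment(U, profile):
    """E_{P, theta}[U(r)]: average the payment over the common prior."""
    return sum(p * U((profile["A"](a), profile["B"](b), profile["C"](c)))
               for (a, b, c), p in prior.items())

print(ex_ante_payment(U_A, truth))
```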
A strategy profile $\boldsymbol{\theta}$ is a Bayesian Nash equilibrium under common prior $P$ if, by changing the strategy unilaterally, an agent's payment can only weakly decrease. It is a strict Bayesian Nash equilibrium if an agent's payment strictly decreases as her strategy changes.

We want to design peer prediction mechanisms to "elicit" all agents to report their information truthfully without verification. We say Alice's strategy is truthful for a mechanism M if Alice truthfully reports the information requested by the mechanism.

ACM Trans. Econ. Comput.

We call the strategy profile $\boldsymbol{\theta}$ truth-telling if each agent reports truthfully. Moreover, we want to design detail-free mechanisms which have no knowledge about the common prior $P$ except agents' (possibly non-truthful) reports. However, agents can always relabel their signals, and detail-free mechanisms cannot distinguish such a strategy profile from the truth-telling strategy profile. We call these strategy profiles permutation strategy profiles. They can be translated back to truth-telling reports by some permutations applied to each component of $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$; that is, the agents report according to a relabeling of the signals.

We now define some goals for our mechanism that differ in how unique the high payoff of truth-telling is. We call a mechanism truthful if the truth-telling strategy profile $\boldsymbol{\theta}^*$ is a strict Bayesian Nash equilibrium. However, in a truthful mechanism, non-truth-telling equilibria may yield a higher ex-ante payment for each agent. In this paper, we aim for strongly truthful mechanisms [12], which are not only truthful but also ensure the ex-ante agents' welfare in the truth-telling strategy profile $\boldsymbol{\theta}^*$ is strictly better than in all non-permutation equilibria. Note that in a symmetric game, this ensures that each agent's individual expected ex-ante payment is maximized by truth-telling compared to any other symmetric equilibrium.
Now, we define the set of common priors that our detail-free mechanisms can work on. Note peer reports are not useful when the agents' signals are independent of each other. Thus, a peer prediction mechanism needs to exploit some interdependence between agents' signals.

Definition 2.1 (Zhang and Chen [33]). A common prior $P$ is $\langle A, B, C \rangle$-second order stochastic relevant if for any distinct signals $b, b' \in \mathcal{B}$, there is $a \in \mathcal{A}$ such that $P_{C \mid A,B}(\cdot \mid a, b) \neq P_{C \mid A,B}(\cdot \mid a, b')$. Thus, when Alice with $a$ is making a prediction of Chloe's signal, Bob's signal is relevant, so that his signal induces different predictions when $B = b$ or $B = b'$. We call $P$ second order stochastic relevant if the above statement holds for any permutation of $\{A, B, C\}$.

To avoid measure-theoretic concerns, we initially require that $P$ has full support, and the joint signal space $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$ to be finite. In Appendix G, we will show how to extend our results to general measurable spaces.

2.2 Proper Scoring Rules

Scoring rules are powerful tools to design mechanisms for eliciting predictions. Consider a finite set of possible outcomes $\Omega$, e.g., $\Omega = \{\text{sunny}, \text{rainy}\}$. An expert, Alice, first reports a distribution $p \in \mathcal{P}(\Omega)$ as her prediction of the outcome, where $\mathcal{P}(\Omega)$ denotes the set of all probability measures on $\Omega$. Then, the mechanism and Alice observe the outcome $\omega$. The mechanism gives Alice a score $\mathrm{PS}[\omega, p]$. Alice maximizes her expected score by reporting her true belief for the outcome $\omega$ (the probability of each possible outcome of $\omega$):

Definition 2.2. A scoring rule $\mathrm{PS} : \Omega \times \mathcal{P}(\Omega) \to \mathbb{R}$ is proper if for any distributions $p, \hat{p} \in \mathcal{P}(\Omega)$ we have $\mathbb{E}_{\omega \sim p}[\mathrm{PS}[\omega, p]] \geq \mathbb{E}_{\omega \sim p}[\mathrm{PS}[\omega, \hat{p}]]$. A scoring rule $\mathrm{PS}$ is strictly proper when the equality holds only if $\hat{p} = p$.

Given any convex function $f$, one can define a new proper scoring rule $\mathrm{PS}^f$ [12]. In this paper, we consider a special scoring rule called the logarithmic scoring rule [30], defined as

$$\mathrm{LSR}[\omega, p] := \log(p(\omega)), \quad (1)$$

Footnote: Here we do not define the notion of truthful reports formally, because it is intuitive in our mechanisms. For general settings, we can use query models to formalize it [29].
Footnote: Our definition has some minor differences from Zhang and Chen [33]'s, for ease of exposition. For instance, they only require the statement to hold for one permutation of $\{A, B, C\}$ instead of all the permutations.

where $p : \Omega \to \mathbb{R}$ is the probability density function of $p$. Another popular scoring rule is the Brier scoring rule (quadratic scoring rule) [2], defined as

$$\mathrm{QSR}[\omega, p] := 2 p(\omega) - \sum_{\omega' \in \Omega} p(\omega')^2. \quad (2)$$

2.3 Information Theory

Peer prediction mechanisms and prediction markets incentivize agents to truthfully report their signals. One key idea these mechanisms use is that agents' signals are interdependent, and strategic manipulation can only dismantle this structure. Here we introduce several basic notions from information theory [4].

The KL-divergence is a measure of the dissimilarity of two distributions: Let $P$ and $Q$ be probability measures on a finite set $\Omega$ with density functions $p$ and $q$ respectively. The KL divergence (also called relative entropy) from $P$ to $Q$ is $D_{KL}(P \| Q) := \sum_{\omega \in \Omega} p(\omega) \log(p(\omega)/q(\omega))$.

We now introduce mutual information, which measures the amount of information between two random variables: Given a random variable $(X, Y)$ on a finite set $\mathcal{X} \times \mathcal{Y}$, let $p_{X,Y}(x, y)$ be the probability density of the random variable $(X, Y)$, and let $p_X(x)$ and $p_Y(y)$ be the marginal probability densities of $X$ and $Y$ respectively. The mutual information $I(X; Y)$ is the KL-divergence from the joint distribution to the product of the marginals:

$$I(X; Y) := \sum_{x \in \mathcal{X}, y \in \mathcal{Y}} p_{X,Y}(x, y) \log \frac{p_{X,Y}(x, y)}{p_X(x) p_Y(y)} = D_{KL}(P_{X,Y} \| P_X \otimes P_Y)$$

where $\otimes$ denotes the tensor product between distributions. Moreover, if $(X, Y, Z)$ is a random variable, the mutual information between $X$ and $Y$ conditional on $Z$ is $I(X; Y \mid Z) := \mathbb{E}_Z[D_{KL}(P_{(X,Y) \mid Z} \| P_{X \mid Z} \otimes P_{Y \mid Z})]$.

The data-processing inequality shows no manipulation of the signals can improve the mutual information between two random variables, and the inequality is of fundamental importance in information theory.

THEOREM 2.3 (Data Processing Inequality).
If $X \to Y \to Z$ forms a Markov chain, $I(X; Y) \geq I(X; Z)$.

Because the mutual information is symmetric, manipulating $X$ cannot increase the mutual information between $X$ and $Y$ either. Thus, we say mutual information is information monotone in both coordinates.

By basic algebraic manipulations, Kong and Schoenebeck [12] relate proper scoring rules to mutual information. For two random variables $X$ and $Y$,

$$\mathbb{E}_{X,Y}\left[\mathrm{LSR}[Y, P_{Y \mid X}(\cdot \mid X)] - \mathrm{LSR}[Y, P_Y(\cdot)]\right] = I(X; Y). \quad (3)$$

We can generalize the mutual information in two ways [12]. The first is to define the $f$-MI using the $f$-divergence, where $f$ is a convex function, to measure the distance between the joint distribution and the product of the marginal distributions. The KL-divergence is just a special case of the $f$-divergence. This retains the symmetry between the inputs.

The second way is to use a different proper scoring rule. As mentioned, any convex function $f$ gives rise to a proper scoring rule $\mathrm{PS}^f$. Then the Bregman mutual information can be defined as in Eqn. (3): $BMI^f(X; Y) := \mathbb{E}_{X,Y}\left[\mathrm{PS}^f[Y, P_{Y \mid X}(\cdot \mid X)] - \mathrm{PS}^f[Y, P_Y(\cdot)]\right]$. Note that by the properties of proper scoring rules, $BMI^f$ is information monotone in the first coordinate; however, in general it is not information monotone in the second.

Thus, by Eqn. (3), mutual information is the unique measure that is both a Bregman mutual information and an $f$-MI. This observation is one key for designing our strongly truthful mechanisms.

Footnote: Random variables $X$, $Y$ and $Z$ form a Markov chain if the conditional distribution of $Z$ depends only on $Y$ and is conditionally independent of $X$.

3 EXPERTS, TARGETS AND SOURCES: STRONGLY TRUTHFUL PEER PREDICTION MECHANISMS

In this section, we show how to design strongly truthful mechanisms to elicit agents' signals by implicitly running a prediction market.
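Before describing the mechanisms, the identity in Eqn. (3), which they turn on, can be checked numerically on a toy joint distribution (the table below is illustrative, not from the paper):

```python
import math

# Hypothetical joint distribution of (X, Y); any full-support table works.
p_xy = [[0.3, 0.1],
        [0.2, 0.4]]
p_x = [sum(row) for row in p_xy]
p_y = [sum(col) for col in zip(*p_xy)]

# Mutual information I(X;Y) = KL(P_{X,Y} || P_X (x) P_Y).
mi = sum(p_xy[x][y] * math.log(p_xy[x][y] / (p_x[x] * p_y[y]))
         for x in range(2) for y in range(2))

# Left-hand side of Eqn. (3): E[ LSR[Y, P(Y|X)] - LSR[Y, P(Y)] ],
# where P(Y = y | X = x) = p_xy[x][y] / p_x[x].
lhs = sum(p_xy[x][y] * (math.log(p_xy[x][y] / p_x[x]) - math.log(p_y[y]))
          for x in range(2) for y in range(2))

assert abs(lhs - mi) < 1e-12  # the identity in Eqn. (3)
```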
Our mechanisms have three characters, Alice, Bob, and Chloe, and there are three roles: expert, target, and source:
• An expert makes predictions on a target's report,
• a target is asked to report his signal, and
• a source provides her information to an expert to improve the expert's prediction.

By asking agents to play these three roles, we design two strongly truthful mechanisms based on two different ideas.

The first mechanism is source differential peer prediction (S-DPP). This mechanism is based on the knowledge-free peer prediction mechanism by Zhang and Chen [33], which rewards a source by how useful her signal is for an expert to predict a target's report. Their mechanism is only truthful but not strongly truthful. We carefully shift the payment functions and employ Eqn. (3) and the data-processing inequality on the log scoring rule to achieve the strongly truthful guarantee.

We further propose a second mechanism, target differential peer prediction (T-DPP). Instead of rewarding a source, the T-DPP mechanism rewards a target by the difference of the logarithmic scoring rule on her signal between an initial prediction and an improved prediction. Later in Sect. 4 we show Bayesian truth serum can be seen as a special case of our T-DPP mechanism.

Then we discuss how to remove the temporal separation between agents making reports in Section 3.3, where agents only need to report once, and their reports do not depend on other agents' reports.

3.1 The Source Differential Peer Prediction Mechanism

The main idea of the S-DPP mechanism is that it rewards a source by the usefulness of her signal for predictions. Specifically, suppose Alice acts as an expert, Bob as the target, and Chloe as the source. Our mechanism first asks Alice to make an initial prediction $\hat{p}$ on Bob's report. Then, after Chloe reports her signal, we collect Alice's improved prediction $\hat{p}'$ after seeing Chloe's additional information. In each case, Alice maximizes her utility by reporting her Bayesian posterior conditioned on her information.

The payments for Alice and Bob are simple. S-DPP pays Alice the sum of the logarithmic scoring rule on those two predictions. S-DPP pays Bob zero.
Chloe's payment consists of two parts. First, we pay her the prediction score of the improved prediction $\hat{p}'$. By the definition of a proper scoring rule (Definition 2.2), Chloe will report truthfully to maximize it. For the second part, we subtract from Chloe's payment three times the score of the initial prediction $\hat{p}$. This ensures the ex-ante agent welfare equals the mutual information, which is maximized at the truth-telling strategy profile. To ensure Bob also reports his signal truthfully, we permute Bob and Chloe's roles in the mechanism uniformly at random.

THEOREM 3.1. If the common prior $P$ is second order stochastic relevant on a finite set with full support, Mechanism 1 is strongly truthful:
(1) The truth-telling strategy profile $\boldsymbol{\theta}^*$ is a strict Bayesian Nash equilibrium.
(2) The ex-ante agents' welfare in the truth-telling strategy profile $\boldsymbol{\theta}^*$ is strictly better than in all non-permutation strategy profiles.

We defer the proof to Appendix C. Intuitively, because the logarithmic scoring rule is proper, Alice (the expert) will make truthful predictions when Bob and Chloe report their signals truthfully. Similarly, the source is willing to report her signal truthfully to maximize the improved prediction score. This shows Mechanism 1 is truthful.

Mechanism 1 Two-round Source Differential Peer Prediction
Require: Alice, Bob, and Chloe have private signals $a \in \mathcal{A}$, $b \in \mathcal{B}$, and $c \in \mathcal{C}$ drawn from a second order stochastic relevant common prior $P$ known to all three agents. LSR is the logarithmic scoring rule (1).
1: Bob and Chloe report their signals, $\hat{b}$ and $\hat{c}$.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source uniformly at random. We use $\hat{t}$ to denote the target's report, and use $\hat{s}$ to denote the source's report.
3: Alice is informed who the target is and predicts the target's report $\hat{t}$ with $\hat{p}$.
4: Given the source's report $\hat{s}$, the expert makes another prediction $\hat{p}'$.
5: The payment to the expert is $\mathrm{LSR}[\hat{t}, \hat{p}] + \mathrm{LSR}[\hat{t}, \hat{p}']$.
6: The payment to the target is 0.
7: The payment to the source is $\mathrm{LSR}[\hat{t}, \hat{p}'] - 3\,\mathrm{LSR}[\hat{t}, \hat{p}]$.

To show the source is willing to report truthfully, we show Lemma 3.2, a data processing inequality for second order stochastic relevant distributions, and present the proof in Appendix C.

LEMMA 3.2. Let random variable $(X, Y, Z)$ be $\langle X, Y, Z \rangle$-stochastic relevant on a finite space $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ with full support. Given a deterministic function $\theta : \mathcal{Y} \to \mathcal{Y}$,

$$\mathbb{E}_{X,Y,Z}\left[\log \frac{P_{Z \mid X,Y}(Z \mid X, Y)}{P_{Z \mid X}(Z \mid X)}\right] - \mathbb{E}_{X,Y,Z}\left[\log \frac{P_{Z \mid X,Y}(Z \mid X, \theta(Y))}{P_{Z \mid X}(Z \mid X)}\right] \geq 0.$$

Moreover, equality occurs only if $\theta$ is an identity function, $\theta(y) = y$.

Though Lemma 3.2 only considers the log scoring rule, it is straightforward to show the source is willing to report truthfully when we use any strictly proper scoring rule. Consequently, the S-DPP mechanism will still have truth-telling as an equilibrium. However, the total payment at the truth-telling strategy profile may not be maximum.

Note that we can ask Alice, Bob, and Chloe to play all three characters, and have the identical guarantee as Theorem 3.1. We illustrate this modification on $n$ agents in Sect. 5. Furthermore, if the agents' common prior $P$ is symmetric, the above modification creates a symmetric game where each agent's expected payment at the truth-telling strategy profile is both non-negative and maximized among all symmetric equilibria.

3.2 Target Differential Peer Prediction Mechanism

The target differential peer prediction mechanism (T-DPP) is identical to the S-DPP except for the payment functions. In contrast to the S-DPP mechanism, T-DPP rewards a target. We show that paying the difference between an initial prediction and an improved prediction on a target's signal can incentivize the target to report truthfully (Lemma 3.4).

Our mechanism pays Alice the sum of the log scoring rule on those two predictions. The mechanism pays Bob the improvement from the initial prediction $\hat{p}$ to the improved prediction $\hat{p}'$. Finally, Chloe's payment depends on Alice's initial prediction $\hat{p}$, which is independent of Chloe's action.
To ensure Chloe also reports her signal truthfully, we permute the roles of Bob and Chloe uniformly at random in the mechanism as well.

THEOREM 3.3. If the common prior $P$ is second order stochastic relevant on a finite set with full support, Mechanism 2 is strongly truthful.

Although the theoretical guarantees in Theorems 3.1 and 3.3 are identical, in Sect. 5 we discuss that target DPP may be more robust if we want to replace the expert with a machine learning algorithm.

Mechanism 2 Two-round Target Differential Peer Prediction
Require: Alice, Bob, and Chloe have private signals $a \in \mathcal{A}$, $b \in \mathcal{B}$, and $c \in \mathcal{C}$ drawn from a second order stochastic relevant common prior $P$ known to all three agents. LSR is the logarithmic scoring rule (1).
1: Bob and Chloe report their signals, $\hat{b}$ and $\hat{c}$.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source uniformly at random. We use $\hat{t}$ to denote the target's report, and use $\hat{s}$ to denote the source's report.
3: Alice is informed who the target is and predicts the target's report $\hat{t}$ with $\hat{p}$.
4: Given the source's report $\hat{s}$, the expert makes another prediction $\hat{p}'$.
5: The payment to the expert is $\mathrm{LSR}[\hat{t}, \hat{p}] + \mathrm{LSR}[\hat{t}, \hat{p}']$.
6: The payment to the target is $\mathrm{LSR}[\hat{t}, \hat{p}'] - \mathrm{LSR}[\hat{t}, \hat{p}]$.
7: The payment to the source is $-2\,\mathrm{LSR}[\hat{t}, \hat{p}]$.

We defer the proof to Appendix D, and provide a sketch here. We first show Mechanism 2 is truthful. Because the log scoring rule is proper, Alice (the expert) will make truthful predictions when Bob and Chloe report their signals truthfully. Thus, the difficult part is to show the target is willing to report his signal truthfully, if the expert and the source are truthful. Because the roles of Bob and Chloe are symmetric in the mechanism, we can assume Bob is the target and Chloe is the source from now on.

LEMMA 3.4. Suppose Alice and Chloe are truthful, and the common prior is $\langle A, B, C \rangle$-second order stochastic relevant. As the target, Bob's best response is to report his signal truthfully.
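Mechanism 2's payment rule is simple to implement. A minimal sketch, assuming finite signal spaces; the function names and the toy probabilities below are illustrative, not from the paper:

```python
import math

def lsr(outcome, p):
    """Logarithmic scoring rule LSR[omega, p] = log p(omega)."""
    return math.log(p[outcome])

def t_dpp_payments(t_hat, p_init, p_improved):
    """Mechanism 2's payments, given the target's report t_hat and the
    expert's initial and improved predictions (lists of probabilities)."""
    expert = lsr(t_hat, p_init) + lsr(t_hat, p_improved)      # step 5
    target = lsr(t_hat, p_improved) - lsr(t_hat, p_init)      # step 6
    source = -2 * lsr(t_hat, p_init)                          # step 7
    return expert, target, source

# Toy run: the expert moves probability mass toward the realized report
# after seeing the source's signal, so the target's payment is positive.
expert, target, source = t_dpp_payments(0, [0.5, 0.5], [0.8, 0.2])
assert target > 0
```

Note that the three payments sum to $2\,\mathrm{LSR}[\hat{t}, \hat{p}'] - 2\,\mathrm{LSR}[\hat{t}, \hat{p}]$, the improvement term that ties the ex-ante welfare to mutual information via Eqn. (3).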
This is a generalization of a lemma in Prelec [20] and Kong and Schoenebeck [12], and extends to the non-symmetric prior and finite agent setting. The main idea to prove Lemma 3.4 is to show that maximizing Bob's expected payment is equivalent to maximizing the reward of a proper scoring rule applied to predicting Chloe's report with prediction $P(c \mid a, \theta(b))$. Therefore, by the property of proper scoring rules, Bob is incentivized to tell the truth. With Lemma 3.4, the rest of the proof of Theorem 3.3 is identical to the proof of Theorem 3.1, which is included in Appendix D.

PROOF OF LEMMA 3.4. Given Alice and Chloe are truthful, let $\theta : \mathcal{B} \to \mathcal{B}$ be a Bob's (deterministic) best response. Let Alice, Bob and Chloe's signals be $a$, $b$ and $c$ respectively. When Alice and Chloe both report truthfully, Chloe's report is $\hat{c} = c$. Alice's initial prediction is $\hat{p} = P_{B \mid A}(\cdot \mid a)$, and her improved prediction is $\hat{p}' = P_{B \mid A,C}(\cdot \mid a, c)$. Hence, Bob with strategy $\theta$ gets payment

$$\mathrm{LSR}[\theta(b), P_{B \mid A,C}(\cdot \mid a, c)] - \mathrm{LSR}[\theta(b), P_{B \mid A}(\cdot \mid a)].$$

Because $\theta$ is a best response, for all $b \in \mathcal{B}$, reporting $\theta(b)$ maximizes Bob's expected payment conditional on $B = b$,

$$\mathbb{E}_{(a,c) \sim P_{A,C \mid B = b}}\left[\mathrm{LSR}[\theta(b), P_{B \mid A,C}(\cdot \mid a, c)] - \mathrm{LSR}[\theta(b), P_{B \mid A}(\cdot \mid a)]\right]. \quad (4)$$

The ex-ante payment of Bob is computed by summing over (4) with weight $P(b)$, as:

$$U(\theta) := \mathbb{E}_{(a,b,c) \sim P}\left[\mathrm{LSR}[\theta(b), P_{B \mid A,C}(\cdot \mid a, c)] - \mathrm{LSR}[\theta(b), P_{B \mid A}(\cdot \mid a)]\right],$$

which is maximized over $\theta$. Now, we can swap the roles of $b$ and $c$.

Footnote: Prelec [19] also shows a weaker version of the above lemma. However, his proof requires a stronger assumption than second order stochastic relevance: for any distinct signals $b, b' \in \mathcal{B}$ and signals $a \in \mathcal{A}$, $c \in \mathcal{C}$, $P_{C \mid A,B}(c \mid a, b) \neq P_{C \mid A,B}(c \mid a, b')$.
$$U(\theta) = \mathbb{E}_{(a,b,c) \sim P}\left[\mathrm{LSR}[\theta(b), P_{B \mid A,C}(\cdot \mid a, c)] - \mathrm{LSR}[\theta(b), P_{B \mid A}(\cdot \mid a)]\right]$$
$$= \mathbb{E}_{a,b,c}\left[\log(P_{B \mid A,C}(\theta(b) \mid a, c)) - \log(P_{B \mid A}(\theta(b) \mid a))\right] \quad \text{(by the definition (1))}$$
$$= \mathbb{E}_{a,b,c}\left[\log \frac{P_{B \mid A,C}(\theta(b) \mid a, c)}{P_{B \mid A}(\theta(b) \mid a)}\right]$$
$$= \mathbb{E}_{a,b,c}\left[\log \frac{P_{B,C \mid A}(\theta(b), c \mid a)}{P_{B \mid A}(\theta(b) \mid a)\, P_{C \mid A}(c \mid a)}\right]$$
$$= \mathbb{E}_{a,b,c}\left[\log \frac{P_{C \mid A,B}(c \mid a, \theta(b))}{P_{C \mid A}(c \mid a)}\right]$$

The above value can be seen as the ex-ante prediction score of Bob who reports prediction $P_{C \mid A,B}(\cdot \mid a, \theta(b))$ for Chloe's signal. Similarly, the ex-ante payment of Bob when his strategy $\tau$ is truth-telling is

$$U(\tau) = \mathbb{E}_{a,b,c}\left[\log \frac{P_{C \mid A,B}(c \mid a, b)}{P_{C \mid A}(c \mid a)}\right].$$

The difference between $U(\tau)$ and $U(\theta)$ is

$$U(\tau) - U(\theta) = \mathbb{E}_{a,b,c}\left[\log \frac{P_{C \mid A,B}(c \mid a, b)}{P_{C \mid A}(c \mid a)}\right] - \mathbb{E}_{a,b,c}\left[\log \frac{P_{C \mid A,B}(c \mid a, \theta(b))}{P_{C \mid A}(c \mid a)}\right].$$

First, by Lemma 3.2, we know $U(\tau) \geq U(\theta)$. However, because $\theta$ is a best response, the inequality is in fact an equality, $U(\tau) = U(\theta)$. By the second part of Lemma 3.2, this shows $\theta$ is an identity and $\theta(b) = b$.

Note that the proof uses A) the log scoring rule is a Bregman mutual information which can be written as the difference between two proper scoring rules, and B) the log scoring rule is also an $f$-mutual information which is symmetric between the inputs. Furthermore, though both mechanisms work with the log scoring rule, the S-DPP can work with a general proper scoring rule, but the T-DPP cannot. Proposition 3.5 provides a counterexample where the Brier scoring rule (2) applied in the reverse way does not elicit the target to report truthfully, which shows a distinction between the log scoring rule and other scoring rules.

PROPOSITION 3.5. If we replace the log scoring rule with the Brier scoring rule (2), there exists an $\langle A, B, C \rangle$-second order stochastic relevant prior $P$ such that reporting his signal truthfully is not a best response for Bob.

PROOF. Let $\mathcal{A} = \{1\}$, and $\mathcal{B} = \mathcal{C} = \{1, 2, 3\}$. We define an $\langle A, B, C \rangle$-second order stochastic relevant prior

$$(P(1, b, c))_{b,c} = \begin{pmatrix} 0.12 & 0.11 & 0.16 \\ 0.04 & 0.05 & 0.18 \\ 0.15 & 0.18 & 0.01 \end{pmatrix}.$$

By direct computation, Bob's payment is 0.0878 under the truth-telling strategy, but he can get 0.0990 if he misreports 1 as 2.
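The direct computation in this proof is easy to reproduce. A sketch (with 0-based signal indices), using the prior above and the Brier rule in place of LSR in the target's difference payment:

```python
# Joint prior P(b, c) from the proof; the state a is constant (A = {1}).
P = [[0.12, 0.11, 0.16],
     [0.04, 0.05, 0.18],
     [0.15, 0.18, 0.01]]
p_b = [sum(row) for row in P]                     # marginal of Bob's signal
p_c = [sum(col) for col in zip(*P)]               # marginal of Chloe's signal
p_b_given_c = [[P[b][c] / p_c[c] for c in range(3)] for b in range(3)]

def qsr(report, p):
    """Brier / quadratic scoring rule QSR[omega, p] = 2 p(omega) - sum_w p(w)^2."""
    return 2 * p[report] - sum(x * x for x in p)

def target_payment(strategy):
    """Bob's ex-ante difference payment with QSR in place of LSR:
    E[ QSR[strategy(b), P_{B|C}(. | c)] - QSR[strategy(b), P_B] ]."""
    return sum(P[b][c] * (qsr(strategy[b], [p_b_given_c[x][c] for x in range(3)])
                          - qsr(strategy[b], p_b))
               for b in range(3) for c in range(3))

truthful = target_payment([0, 1, 2])    # report each signal as-is
misreport = target_payment([1, 1, 2])   # misreport signal 1 as 2
assert misreport > truthful             # truth-telling is not a best response
```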
3.3 Single-round DPP Mechanism for Finite Signal Spaces

When the signal spaces are finite, the above two-round mechanisms (Mechanisms 1 and 2) can be reduced to single-round mechanisms by using a virtual signal $t$. That is, for Alice's improved prediction we provide Alice with a random virtual signal $t$ instead of the actual report from the source, and pay her the prediction score when the source's report is equal to the virtual signal, $\hat{s} = t$. Here we state only the single-round target-DPP; the single-round source-DPP can be defined analogously.

Mechanism 3 Single Round T-DPP
Require: Alice, Bob, and Chloe have private signals $a \in \mathcal{A}$, $b \in \mathcal{B}$, and $c \in \mathcal{C}$ drawn from a second order stochastic relevant common prior $P$ known to all three agents. The empty set $\emptyset$ is neither in $\mathcal{B}$ nor in $\mathcal{C}$.
1: Bob and Chloe report their signals, $\hat{b}$ and $\hat{c}$.
2: Set Alice as the expert. Set Bob or Chloe as the target and the other as the source uniformly at random. We use $\hat{t}$ to denote the target's report, and use $\hat{s}$ to denote the source's report.
3: Sample $t$ uniformly from $\mathcal{X}_s \cup \{\emptyset\}$ where $\mathcal{X}_s$ is the signal space of the source, and tell the expert $t$ and who the target is.
4: if $t = \emptyset$ then  ⊲ initial prediction
5:   The expert makes a prediction $\hat{p}$ of $\hat{t}$.
6: else  ⊲ improved prediction
7:   The expert makes a prediction $\hat{p}'$ of $\hat{t}$ pretending the source's report is $\hat{s} = t$.
8: end if
9: The payment to the expert is $\mathbb{1}[t = \hat{s}] \cdot \mathrm{LSR}[\hat{t}, \hat{p}'] + \mathbb{1}[t = \emptyset] \cdot \mathrm{LSR}[\hat{t}, \hat{p}]$.
10: The payment to the target is $\mathbb{1}[t = \hat{s}] \cdot \mathrm{LSR}[\hat{t}, \hat{p}'] - \mathbb{1}[t = \emptyset] \cdot \mathrm{LSR}[\hat{t}, \hat{p}]$.
11: The payment to the source is $-2 \cdot \mathbb{1}[t = \emptyset]\, \mathrm{LSR}[\hat{t}, \hat{p}]$.

Mechanism 3 has the same truthfulness guarantees as Mechanism 2. The proof is the same and is presented in Appendix E.

THEOREM 3.6. If agents' common beliefs are stochastic relevant and the sets $\mathcal{B}$ and $\mathcal{C}$ are finite, Mechanism 3 is strongly truthful.

Remark 3.7. Mechanism 3 uses the virtual signal trick to decouple the dependency between the expert's (Alice's) prediction and the source's (Chloe's) signal $c \in \mathcal{X}$.
Furthermore, the logarithmic scoring rule is a local proper scoring rule [17], such that the score $\mathrm{LSR}[\omega, p] = \log p(\omega)$ only depends on the probability at $\omega$. Hence we can further simplify Alice's report by asking her to predict the probability density $\in [0, 1]$ of a single virtual signal $t \in \mathcal{X}$ in the target's (e.g. Bob's) signal space.

This trick can be extended to settings with a countably infinite set of signals. For example, for signals in $\mathbb{N}$, we can generate the virtual signal from a Poisson distribution (which dominates the counting measure) and normalize the payments correspondingly. However, this trick does not work on general measurable spaces, e.g. real numbers, because the probability of the virtual signal matching the source's report can be zero.

4 BAYESIAN TRUTH SERUM AS A PREDICTION MARKET

In this section, we revisit the original Bayesian Truth Serum (BTS) by Prelec [20] from the perspective of prediction markets. We first define the setting, which is a special case of ours (Mechanism 2), and use the idea of prediction markets (Appendix A) to understand BTS.

4.1 Setting of BTS

There are $n$ agents. They all share a common prior $P$. We call $P$ admissible if it consists of two main elements: states and signals. The state $T$ is a random variable in $\{1, \ldots, m\}$, $m \geq 2$, which represents the true state of the world. Each agent $i$ observes a signal $X_i$ from a finite set $\Omega$. The agents have a common prior consisting of $P_T(t)$ and $P_{X \mid T}(\cdot \mid t)$ such that the prior joint distribution of $X_1, \ldots, X_n$ is $\Pr(X_1 = x_1, \ldots, X_n = x_n) = \sum_{t \in [m]} P_T(t) \prod_{i \in [n]} P_{X \mid T}(x_i \mid t)$.

Now we restate the main theorem concerning Bayesian Truth Serum:
Compute � ∈ P(Ω) such that for −�� all� ∈ Ω (�) � (�) = 1[�ˆ = �] (5) −�� � − 2 �≠�,� whichis theempiricaldistribution � − 2ofagents’ theother reports. 3: hepredictionscoreandinformation�scor are eof h i h i (�) (�) ˆ ˆ � = LSR �ˆ ,� − LSR �ˆ ,� and� = LSR �ˆ ,� − LSR �ˆ ,� . Pre � � � Im � � � −�� −�� Andthepayment�tois� + � � . Pre Im TR 4.1 ([20]). For all � > 1, if the common prior � is admissible and � → ∞, Mechanism 4 is strongly truthful. 4.2 Information ScoreasPrediction Market Prelec [20] uses a clever algebraic calculation to prove this main result. Kong and Schoenebeck [12] use infor- mation theory to show that for BTS the ex-ante agents’ welfare for the truth-telling strategy proile is strictly beterthanforallothernon-permutationequilibria.HereweusetheideaofpredictionmarketstoshowBTSisa truthfulmechanism,anduseMechanism2toreproduceBTSwhenthecommonprioris � →admissible ∞. and he payment from BTS consists of two parts,information the scor , �e, and theprediction scor, �e . he Im Pre prediction score is exactly the log scoring rule which is well-studied in the previous literature. However, the role of the information score is more complicated. Here we provide an interpretation based on Mechanism 2. Informally,theinformationscoreistheimprovementfromoneagent’spredictiontotheaggregatingprediction fromallagentsononeagent’ssignalwhichformalizedinProposition4.2.hus,byLemma3.4,reportingsignal truthfully maximizes theagent’s informationscore. Nowweformalizethisidea.Consider �= 1and �= 2inBTSandcallthemBobandAlicerespectively.Welet Chloebethecollectionofother {3,agents 4, . . .,�}.Let’srunMechanism2onthisinformationstructure.Bobis the target. Alice’s initial pre�diction = � (·is | � ). When Chloe’s signal� is ,� , . . .,� , Alice’s improved � |� 2 3 4 � 1 2 prediction�is= � (· | � ) where� := (� ,� , . . 
$x_{-1} := (x_2, x_3, \ldots, x_n)$ is the collection of all agents' reports except Bob's. By Lemma 3.4, Bob's payment, $\mathrm{LSR}[\hat{x}_1, \hat{p}'] - \mathrm{LSR}[\hat{x}_1, \hat{p}]$, which equals

$$\mathrm{LSR}[\hat{x}_1, P_{X_1 \mid X_{-1}}(\cdot \mid x_{-1})] - \mathrm{LSR}[\hat{x}_1, P_{X_1 \mid X_2}(\cdot \mid x_2)], \quad (6)$$

is maximized in expectation when Bob reports his private signal $x_1$.

Note that Bob's payment here (Eq. (6)) is nearly identical to Bob's information score in the BTS (Mechanism 4) at the truth-telling strategy profile: $\mathrm{LSR}[\hat{x}_1, \bar{p}^{(n)}_{-\{1,2\}}] - \mathrm{LSR}[\hat{x}_1, \hat{p}_2]$, which equals

$$\mathrm{LSR}\left[\hat{x}_1, \bar{p}^{(n)}_{-\{1,2\}}\right] - \mathrm{LSR}[\hat{x}_1, P_{X_1 \mid X_2}(\cdot \mid x_2)]. \quad (7)$$

The only difference between (6) and (7) is that the former predicts $\hat{x}_1$ using $P_{X_1 \mid X_{-1}}(\cdot \mid x_{-1})$ in the first term while the latter uses $\bar{p}^{(n)}_{-\{1,2\}}$. Therefore, the original BTS reduces to a special case of Mechanism 2 as $n \to \infty$, if we can show $\lim_{n \to \infty} P_{X_1 \mid X_{-1}}(x_1 \mid x_{-1}) = \lim_{n \to \infty} \bar{p}^{(n)}_{-\{1,2\}}$. Formally,

PROPOSITION 4.2. For all $t = 1, \ldots, m$ and $\omega \in \Omega$,

$$\bar{p}^{(n)}_{-\{1,2\}}(\omega) - P_{X_1 \mid X_{-1}}(\omega \mid x_{-1}) \xrightarrow{P_{X \mid T}(\cdot \mid t)} 0 \quad \text{as } n \to \infty.$$

That is, the difference between these estimators converges to zero in probability as $n$ goes to infinity.

The proposition follows by seeing that, fixing the state of the world $t$, both $\bar{p}^{(n)}_{-\{1,2\}}(\cdot)$ and $P_{X_1 \mid X_{-1}}(\cdot \mid x_{-1})$ converge to $P_{X \mid T}(\cdot \mid t)$, which is the posterior distribution of Bob's signal given the state of the world. However, Proposition 4.2 requires agents' signals to be symmetric and conditionally independent. In Sect. 5, we discuss that in practice, we may replace the simple average $\bar{p}^{(n)}_{-\{1,2\}}(\cdot)$ with another learning algorithm to relax these assumptions.

5 DISCUSSION AND APPLICATIONS

We define two Differential Peer Prediction mechanisms, S-DPP and T-DPP, which are strongly-truthful, detail-free, and only require a single item report from three agents. In addition to the nice theoretical guarantees, our core observation is that paying an agent the difference between an initial and improved prediction of his signal is a powerful peer prediction tool.
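This core observation can be checked numerically: under truth-telling, the target's ex-ante difference payment equals the conditional mutual information $I(B; C \mid A)$, as in Eqn. (3). A sketch on a toy prior (the numbers below are illustrative, not from the paper):

```python
import math

# A toy full-support common prior over (a, b, c); numbers are illustrative.
P = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.08, (0, 1, 0): 0.02, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20, (1, 1, 0): 0.25, (1, 1, 1): 0.15,
}
p_a = {a: sum(p for (x, _, _), p in P.items() if x == a) for a in (0, 1)}
p_ab = {(a, b): sum(P[(a, b, c)] for c in (0, 1)) for a in (0, 1) for b in (0, 1)}
p_ac = {(a, c): sum(P[(a, b, c)] for b in (0, 1)) for a in (0, 1) for c in (0, 1)}

# Target's ex-ante difference payment under truth-telling:
# E[ log P(b | a, c) - log P(b | a) ].
payment = sum(p * (math.log(p / p_ac[(a, c)]) - math.log(p_ab[(a, b)] / p_a[a]))
              for (a, b, c), p in P.items())

# Conditional mutual information I(B; C | A), computed directly.
cmi = sum(p * math.log(p * p_a[a] / (p_ab[(a, b)] * p_ac[(a, c)]))
          for (a, b, c), p in P.items())

assert abs(payment - cmi) < 1e-12
```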
We believe that our mechanism can be applied in several domains including peer grading, peer review, surveys, and crowd-sourcing. In fact, Srinivasan and Morgenstern [28] use our DPP mechanisms in their proposed market for the peer-review process. Moreover, in peer grading, multi-round review processes are already used in practice [18].

As discussed in the related work, existing single-task mechanisms either make strong assumptions on the signal distribution, e.g., symmetric, or also use multiple rounds. We believe our multi-round mechanisms are more practical than mechanisms requiring strong assumptions on the signal distribution, which may not hold in these domains.

While multi-task peer-prediction mechanisms can also sometimes be deployed in this area, the present mechanism has three advantages: 1) It only requires one question or task, while the multi-task peer prediction mechanisms often require many. This is of great importance in peer grading and peer review where each agent may only grade a small handful of items. 2) Other mechanisms require learning the relationship between signals; however, in the proposed applications (e.g. peer grading and peer review) the agents typically see much more information than the mere score, and the relationship of signals may depend on particular traits of different items. Our mechanisms mitigate this problem because the expert can also see the item and use its traits to inform her prediction. 3) Unlike the multi-task setting, our mechanism does not require questions and responses to be i.i.d.

Our presentation involves each agent only giving a one item response.
As highlighted in the introduction, our mechanism can easily be adapted so that each agent plays all three roles, and thus provides a signal and a prediction. Specifically, given $n$ agents, we could assign each agent an index in $[n]$. In the first round, each agent $i$ reports her signal and an initial prediction on agent $i + 1$'s reported signal. In the second round, agent $i$ receives agent $i - 1$'s reported signal, and makes an improved prediction on agent $i + 1$'s reported signal. This variant mechanism treats agents symmetrically and can collect more signals, which is often the goal. Furthermore, this symmetric design may be more fair.

Our mechanism does require some coordination between agents, but in general it is quite minimal. First, we assume that the identities of agents are established. Because we allow heterogeneous agents, the expert must know who the target is to respond. However, in practice, this could be relaxed to knowing the "type" of each of the agents, as long as knowing the type is sufficient to specify the joint prior. Additionally, if agents are homogeneous, agents' identities are irrelevant. Second, agents cannot be paid until all the reports are in, because some payments rely on all reports. However, in the single-round mechanisms, no additional coordination is required: agents can interact with the mechanism in any order. Even in the two-round mechanisms, the only requirement is that the expert must participate after the source. In the case where roles can safely be correlated with arrival times, the first arrivals can be assigned to source/target and the final to be the expert, and then no further coordination is required.

Footnote: We use modular arithmetic here.

Machine-Learning Aided Peer Prediction.
Given the ubiquity of learning algorithms, in our S-DPP and T-DPP, we may use a learning algorithm to replace the character of the expert that makes predictions on the target's report. With this modification, agents only need to report their signals without making complicated predictions. Therefore, using learning algorithms as a surrogate may greatly simplify the complications of our mechanisms.

Now we discuss possible conditions for those learning algorithms to ensure truthfulness. S-DPP requires that the learning algorithms can improve their prediction based on one agent's signal, namely the source's. On the other hand, by Lemma 3.4, T-DPP really only requires that the learning algorithms can make two predictions on the target's report such that the improved prediction is better than the initial one. The condition in T-DPP is weaker than the condition in S-DPP, because a learning algorithm may not have discernible improvement based on one agent's (source's) signal, but can still make an improved prediction with enough information. For instance, the initial prediction in the BTS (Mechanism 4) is one agent's prediction, and the improved prediction is the empirical average $\bar{p}^{(n)}_{-ij}$. We can replace the empirical average with any learning algorithm which uses all other agents' signals to make improved predictions.

As mentioned before, if the agents are privy to additional information which systematically changes the relationship between agents' signals, machine learning algorithms applied to the entire data, but not given access to the instances themselves, may not work. For example, two agents may agree in their assessments of dramatic movies but always disagree in their assessments of comedy movies. The issue is that the relationship cannot be properly learned without information about the movie itself. To combat this issue, the machine learner could take as input the instances themselves [14].

One future direction is to use this machinery to analyze when BTS retains its strongly truthful guarantee, e.g. for what parameters of finite and/or heterogeneous agents.
ACKNOWLEDGMENTS

Grant Schoenebeck and Fang-Yi Yu are pleased to acknowledge the support of National Science Foundation grants 1618187 and 2007256. Fang-Yi Yu is pleased to acknowledge the support of National Science Foundation grant 2007887. We would like to thank Drazen Prelec for the reference to a related work [19].

REFERENCES

[1] Aurélien Baillon. 2017. Bayesian markets to elicit private information. Proceedings of the National Academy of Sciences 114, 30 (2017), 7958–7962.
[2] Glenn W Brier. 1950. Verification of forecasts expressed in terms of probability. Monthly Weather Review 78, 1 (1950), 1–3.
[3] Noah Burrell and Grant Schoenebeck. 2021. Measurement Integrity in Peer Prediction: A Peer Assessment Case Study. arXiv preprint arXiv:2108.05521 (2021).
[4] Thomas M. Cover and Joy A. Thomas. 2001. Elements of Information Theory. Wiley, USA. https://doi.org/10.1002/0471200611
[5] Anirban Dasgupta and Arpita Ghosh. 2013. Crowdsourced judgement elicitation with endogenous proficiency. In 22nd International World Wide Web Conference, WWW '13, Rio de Janeiro, Brazil, May 13-17, 2013, Daniel Schwabe, Virgílio A. F. Almeida, Hartmut Glaser, Ricardo Baeza-Yates, and Sue B. Moon (Eds.). International World Wide Web Conferences Steering Committee / ACM, 319–330. https://doi.org/10.1145/2488388.2488417
[6] Xi Alice Gao, Andrew Mao, Yiling Chen, and Ryan Prescott Adams. 2014. Trick or treat: putting peer prediction to the test. In Proceedings of the fifteenth ACM conference on Economics and computation. ACM, 507–524.
[7] Robin Hanson. 2003. Combinatorial information market design. Information Systems Frontiers 5, 1 (2003), 107–119.
[8] Yuqing Kong. 2020. Dominantly truthful multi-task peer prediction with a constant number of tasks. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2398–2411.
[9] Yuqing Kong, Katrina Ligett, and Grant Schoenebeck. 2016. Putting peer prediction under the micro(economic)scope and making truth-telling focal.
International In Conferenceon WebandInternet Economics .Springer,251–264. [10] Yuqing Kong and Grant Schoenebeck. 2018. Equilibrium Selection in Information Elicitation without Veriication via Information Monotonicity9th .In Innovationsin heoreticalComputer Science Confer . ence [11] Yuqing Kong and Grant Schoenebeck. 2018. Water from Two Rocks: Maximizing the Mutual Information. Proceedings ofInthe 2018 ACMConferenceon EconomicsandComputation . ACM,177–194. [12] Yuqing Kong and Grant Schoenebeck. 2019. An information theoretic framework for designing information elicitation mechanisms thatrewardtruth-telling. ACMTransactionson EconomicsandComputation (TEA 7, 1C)(2019), 2. [13] YuqingKong,GrantSchoenebeck,Fang-YiYu,andBiaoshuaiTao.2020. InformationElicitationMechanismsforStatisticalEstimation. In hirty-FourthAAAI Conferenceon Ariicialintelligence (AAAI . 2020) [14] YangLiuandYilingChen.2017. Machine-learningaidedpeerprPrediction. oceedingsofInthe2017ACMConferenceonEconomicsand Computation . ACM,63–80. [15] N.Miller,P.Resnick,andR.Zeckhauser.2005. Elicitinginformativefeedback:hepeer-preManagement dictionmetho Scienced.(2005), 1359–1373. [16] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. 2010. Estimating divergence functionals and the likelihood ratio by convexriskminimization. IEEETransactionson Information he56,ory11(2010), 5847–5861. [17] MathewParry,APhilipDawid,StefenLauritzen,etal.2012. Properlocalscoring heAnnals rules.ofStatistics 40,1(2012),561–592. [18] Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng, and Daphne Koller. 2013. Tuned models of peer assessment in MOOCs.arXivpreprintarXiv:1307.2579 (2013). [19] Drazen Prelec. 2001. Atwo-person scoring rulefor subjectivereports. MassachusetsInstituteofTechnology workingpaper. [20] Drazen Prelec. 2004. A Bayesian Truth Serum for SubjectivScience e Data.306, 5695 (2004), 462–466. https://doi.org/10.1126/science. 
1102081arXiv:htps://www.science.org/doi/pdf/10.1126/science.1102081 [21] Drazen Prelec. 2021. BilateralBayesiantruth serum:henxm signalscase. AvailableatSSRN 3908446. [22] Goran Radanovic and Boi Faltings. 2013. A robust bayesian truth serum for non-binarPryosignals. ceedingsInof the 27th AAAI Conferenceon ArtiicialIntelligence (AAAI” . 833–839. 13) [23] GoranRadanovicandBoiFaltings.2014. IncentivesfortruthfulinformationelicitationofProcontinuous ceedingsofthe signals 28th.In AAAI Conferenceon ArtiicialIntelligence (AAAI” . 770–776. 14) [24] Grant Schoenebeck and Fang-Yi Yu. 2020. Learning and Strongly Truthful Multi-Task Peer Prediction: A Variation arXival Approach. preprintarXiv:2009.14730 (2020). [25] GrantSchoenebeckandFang-YiYu.2020. Twostronglytruthfulmechanismsforthreeheterogeneousagentsansweringonequestion. In InternationalConference on WebandInternet Economics .Springer,119–132. [26] GrantSchoenebeck,Fang-YiYu,andYichiZhang.2021. InformationElicitationfromProRocewedings dyCroofwds.theIn WebConfer- ence 2021.3974–3986. [27] Victor Shnayder, Arpit Agarwal, Rafael Frongillo, and David C. Parkes. 2016. Informed Truthfulness in Multi-Task Peer Prediction. In Proceedings of the 2016 ACM Conference on Economics and Computation (Maastricht, he Netherlands) (EC ’16). ACM, New York, NY, USA, 179–196. [28] Siddarth Srinivasan and Jamie Morgenstern. 2021. Auctions and Prediction Markets for Scientiic arXiv prPeeprint er Review. arXiv:2109.00923(2021). [29] Bo Waggoner and Yiling Chen. 2013. Information elicitation sans vPreriication. oceedings of the In 3rd Workshop on Social Computing andUser GeneratedContent (SC13). [30] RobertL Winkler.1969. Scoring rulesandtheevaluationof probabilityJ.Aassessors. mer.Statist.Asso64,c.327(1969), 1073–1078. [31] Jens Witkowski and David C. Parkes. 2011. A Robust Bayesian Truth Serum for Small Populations. ProceedingsIn of the 26th AAAI Conferenceon ArtiicialIntelligence (AAAI. 2012) [32] Jens Witkowski and David C. Parkes. 2012. 
Peer prediction without a common Proceeprior dings. Iofn the 13th ACM Conference on ElectronicCommerce,EC2012,Valencia,Spain,June 4-8,.2012 ACM,964–981. [33] Peter Zhang and Yiling Chen. 2014. Elicitability and knowledge-free elicitation with Proceepdings eer preofdiction. the 2014Ininter- national conference on Autonomous agents and multi-agent systems . International Foundation for Autonomous Agents and Multiagent Systems, 245–252. [34] Shuran Zheng, Fang-Yi Yu, and Yiling Chen. 2021. he Limits of Multi-task PeerCoRR Prediction. abs/2106.03176 (2021). arXiv:2106.03176 https://arxiv.org/abs/2106.03176 ACMTrans. Econ. Comput. TwoStronglyTruthful Mechanisms forThree Heterogeneous AgentsAnsweringOne uestion • 17 A INTRODUCTION TOPREDICTION MARKETS Now we want to get the collective prediction from a large group of experts. If we ask them all to report the predictionsimultaneouslyandpayeachofthemthelogscoringruleontheirpredictions,weonlyreceivemany diferentpredictions anditis notclear howtoaggregatethosepredictions intoa singleprediction. Hanson’s [7] idea is to approach theseexpquentially erts . he mechanism asks experts to pregiv dict, en pre- dictions that previous experts hav, and e made pays the experts the diference of score between their prediction minus thescoreofthepreviousone.Formally, (1) hedesigner chooses aninitialpr�eˆdiction ,e.g.,theuniformdistribution Ω. on (2) heexperts�= 1, 2, . . .,� arriveinorder.Eachexp �changes ert theprediction �ˆ frtoom�ˆ �−1 � (3) hemarketends andtheevent’soutcome � ∈ Ω isobserved. (4) Expert�receivesa payof PS[�, �ˆ ] − PS[�, �ˆ ]. � �−1 herefore,eachexpert(strictly)maximizeshisexpectedscorebyreportinghistruthbeliefgivenhisownknowl- edgeandthepredictionofthepreviousexperts. Suppose instead of multiple experts arriving in order we have one expert (Alice) but multiple signals arrive in order. For example, Alice is asked to predict the champion of a tennis tournament � ∈ Ω is the wherset e of players. 
As the tournament proceeds, Alice collects additional signals $(X_t)_{t = 1, \ldots, T}$ which inform the outcome. Formally,
(1) The designer chooses an initial prediction $\hat{p}_0$.
(2) In round $t = 1, 2, \ldots, T$, a signal $X_t$ arrives, and Alice changes the prediction from $\hat{p}_{t-1}$ to $\hat{p}_t$.
(3) At the end, the outcome $W \in \Omega$ is observed.
(4) Alice receives a payoff $\sum_{t=1}^{T} \left(\mathrm{PS}[W, \hat{p}_t] - \mathrm{PS}[W, \hat{p}_{t-1}]\right)$.
With belief $Q$, if Alice reports truthfully in each round, she will report $Q(W \mid x_1, x_2, \ldots, x_t)$ at round $t$. If we use the log scoring rule, her expected payment at round $t$ will be $I(W; X_t \mid X_1, \ldots, X_{t-1})$. Her overall expected payment will be $I(X_1, \ldots, X_T; W)$, which maximizes her payment. This is an illustration of the chain rule for mutual information: $I(X_1, X_2; W) = I(X_2; W \mid X_1) + I(X_1; W)$.

B DATA PROCESSING INEQUALITY
There are several proofs of the data processing inequality (Theorem 2.3). However, for information elicitation, we often aim for a strict data processing inequality such that, given a pair of random variables $(X, Y)$, if a random function $T : \mathcal{Y} \to \mathcal{Y}$ is not an invertible function, then $I(X; Y) > I(X; T(Y))$. In this section, we show that such a guarantee holds if $X$ and $Y$ are stochastically relevant (defined below).

We say a pair of random variables $X, Y$ on a finite space $\mathcal{X} \times \mathcal{Y}$ is stochastically relevant if for any distinct $x$ and $x'$ in $\mathcal{X}$, $P_{Y|X}(\cdot \mid x) \neq P_{Y|X}(\cdot \mid x')$, and the above condition also holds when we exchange $X$ and $Y$.

THEOREM B.1. If $(X, Y)$ on a finite space $\mathcal{X} \times \mathcal{Y}$ is stochastically relevant and has full support, then for all random functions $T$ from $\mathcal{Y}$ to $\mathcal{Y}$ where the randomness of $T$ is independent of $(X, Y)$, $I(X; Y) = I(X; T(Y))$ if and only if $T$ is a deterministic invertible function. Otherwise, $I(X; Y) > I(X; T(Y))$.

Moreover, we can extend this to conditional mutual information when the random variable is second order stochastically relevant (Definition 2.1).

PROPOSITION B.2. If $(W, X, Y)$ on a finite space $\mathcal{W} \times \mathcal{X} \times \mathcal{Y}$ is second order stochastically relevant and has full support, then for any random function $T$ from $\mathcal{Y}$ to $\mathcal{Y}$ whose randomness is independent of the random variables $(W, X, Y)$, $I(X; Y \mid W) = I(X; T(Y) \mid W)$ if and only if $T$ is a one-to-one function. Otherwise, $I(X; Y \mid W) > I(X; T(Y) \mid W)$.

B.1 Proof of Theorem B.1
THEOREM B.3 (Jensen's inequality). Let $Z$ be a random variable on a probability space $(\mathcal{X}, \mathcal{F}, \mu)$ and let $\varphi : \mathbb{R} \to \mathbb{R}$ be a convex function. Then $\varphi(\mathrm{E}[Z]) \le \mathrm{E}[\varphi(Z)]$. The equality holds if and only if $\varphi$ agrees almost everywhere on the range of $Z$ with a linear function.

Given a random function $T : \mathcal{Y} \to \mathcal{Y}$, we use $\tau : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ to denote its transition matrix, where $\tau(y, \hat{y}) = \Pr[T(y) = \hat{y}]$ for all $y, \hat{y} \in \mathcal{Y}$. Let $\hat{Y}$ be the random variable $T(Y)$.

Variational representation. By the variational representation of mutual information [16, 24], letting $\Phi(u) = u \log u$, $\Phi^*(v) = \exp(v - 1)$, and $\Phi'(u) = 1 + \log u$, the mutual information between $X$ and $Y$ is
$$I(X; Y) = \sup_{f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}} \mathrm{E}_{P_{X,Y}}[f(x, y)] - \mathrm{E}_{P_X \otimes P_Y}[\Phi^*(f(x, y))],$$
and the supremum is attained at
$$f_{X,Y}(x, y) := \Phi'\left(\frac{P_{X,Y}(x, y)}{P_X(x) P_Y(y)}\right). \tag{8}$$
We define $f_{X,\hat{Y}}$ for $X$ and $\hat{Y}$ similarly. With these notions, the mutual information between $X$ and $\hat{Y}$ is
$$I(X; \hat{Y}) = \mathrm{E}_{P_{X,\hat{Y}}}\left[f_{X,\hat{Y}}(x, \hat{y})\right] - \mathrm{E}_{P_X \otimes P_{\hat{Y}}}\left[\Phi^*\left(f_{X,\hat{Y}}(x, \hat{y})\right)\right]$$
$$= \mathrm{E}_{P_{X,Y}}\left[\int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y}\right] - \mathrm{E}_{P_X \otimes P_Y}\left[\int \Phi^*\left(f_{X,\hat{Y}}(x, \hat{y})\right) \tau(y, \hat{y}) \, d\hat{y}\right]$$
$$\le \mathrm{E}_{P_{X,Y}}\left[\int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y}\right] - \mathrm{E}_{P_X \otimes P_Y}\left[\Phi^*\left(\int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y}\right)\right].$$
The last inequality holds due to the convexity of $\Phi^*$ and Jensen's inequality. Let $g(x, y) := \int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y}$ for all $x, y$. We have
$$I(X; \hat{Y}) \le \mathrm{E}_{P_{X,Y}}[g(x, y)] - \mathrm{E}_{P_X \otimes P_Y}[\Phi^*(g(x, y))] \tag{9}$$
$$\le \sup_{f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}} \mathrm{E}_{P_{X,Y}}[f(x, y)] - \mathrm{E}_{P_X \otimes P_Y}[\Phi^*(f(x, y))] \tag{10}$$
$$= I(X; Y).$$

Sufficient condition. We first show the equality holds if $T$ is an invertible function. Hence, we need to show (9) and (10) are equalities. Because $T$ is an invertible function, $\tau$ is a permutation matrix. Thus, for all $x, y$, $\int \Phi^*(f_{X,\hat{Y}}(x, \hat{y}))\, \tau(y, \hat{y}) \, d\hat{y} = \Phi^*\left(\int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y}\right)$, and (9) is an equality. For (10), for all $x$ and $y$,
$$g(x, y) = \int f_{X,\hat{Y}}(x, \hat{y})\, \tau(y, \hat{y}) \, d\hat{y} = f_{X,\hat{Y}}(x, T(y)) \quad \text{(deterministic function)}$$
$$= \Phi'\left(\frac{P_{X,\hat{Y}}(x, T(y))}{P_X(x) P_{\hat{Y}}(T(y))}\right) \quad \text{(by (8))}$$
$$= \Phi'\left(\frac{P_{X,Y}(x, y)}{P_X(x) P_Y(y)}\right) \quad \text{(invertible)}$$
$$= f_{X,Y}(x, y).$$
Therefore, (10) is an equality. This completes the proof.

Necessary condition. Now we show the equality holds only if $T$ is an invertible function, i.e., $\tau$ is a permutation matrix. We first show a weaker statement: $\tau$ is injective. Formally, let $S_\tau(y) := \{\hat{y} : \tau(y, \hat{y}) > 0\}$ be the support of $\tau$ on $y$. We say $\tau$ is injective if for all distinct $y, y'$ the supports of $\tau(y, \cdot)$ and $\tau(y', \cdot)$ are disjoint, $S_\tau(y) \cap S_\tau(y') = \emptyset$.

We prove this by contradiction: if $\tau$ is not injective and $I(X; Y) = I(X; \hat{Y})$, then $(X, Y)$ is not stochastically relevant. $I(X; Y) = I(X; \hat{Y})$ implies (9) and (10) are equalities. Because (9) is an equality, given $x$ and $y$, for all $\hat{y} \in S_\tau(y)$,
$$g(x, y) = f_{X,\hat{Y}}(x, \hat{y}). \tag{11}$$
Because (10) is an equality, for all $x$ and $y$,
$$g(x, y) = f_{X,Y}(x, y). \tag{12}$$
Suppose $\tau$ is not injective. There exist $y^*$, $y_1$ and $y_2$ in $\mathcal{Y}$ such that $y_1 \neq y_2$ and $y^* \in S_\tau(y_1) \cap S_\tau(y_2)$. For all $x$,
$$f_{X,Y}(x, y_1) = g(x, y_1) \quad \text{(by (12))}$$
$$= f_{X,\hat{Y}}(x, y^*) \quad \text{(by (11) and } y^* \in S_\tau(y_1)\text{)}$$
$$= g(x, y_2) \quad \text{(by (11) and } y^* \in S_\tau(y_2)\text{)}$$
$$= f_{X,Y}(x, y_2) \quad \text{(by (12))}.$$
Since $\Phi'$ is invertible, for all $x$,
$$\frac{P_{X,Y}(x, y_1)}{P_X(x) P_Y(y_1)} = \frac{P_{X,Y}(x, y_2)}{P_X(x) P_Y(y_2)}.$$
Therefore, $P_{X|Y}(\cdot \mid y_1) = P_{X|Y}(\cdot \mid y_2)$, and $(X, Y)$ is not stochastically relevant. This shows the Markov kernel $\tau$ is injective and has a deterministic inverse function.

Now we show that if $\tau$ is injective, then $\tau$ is a permutation when $\mathcal{Y}$ is a finite space. Because $\tau$ is a Markov kernel, $|S_\tau(y)| \ge 1$ for all $y$. Moreover, because $\tau$ is injective, $|\cup_y S_\tau(y)| = \sum_y |S_\tau(y)| \ge |\mathcal{Y}|$. On the other hand, $\cup_y S_\tau(y) = \{\hat{y} : \exists y, \tau(y, \hat{y}) > 0\} \subseteq \mathcal{Y}$, so $|\cup_y S_\tau(y)| \le |\mathcal{Y}|$. Therefore, by the pigeonhole principle, $|S_\tau(y)| = 1$ for all $y$, so $\tau$ is one-to-one.

B.2 Proof of Proposition B.2
PROOF OF PROPOSITION B.2. Given random variables $(W, X, Y)$, define the pointwise conditional mutual information between $X$ and $Y$ given $W = w$ as
$$I(X; Y \mid W = w) := D_{KL}\left(P_{(X,Y)|W}(\cdot \mid w) \,\middle\|\, P_{X|W}(\cdot \mid w) \otimes P_{Y|W}(\cdot \mid w)\right),$$
which is the mutual information between $X \mid W = w$ and $Y \mid W = w$.
First observe that the conditional mutual information $I(X; Y \mid W)$ is the average pointwise conditional mutual information between $X$ and $Y$ across different $w$: $I(X; Y \mid W) = \int I(X; Y \mid W = w)\, P_W(w) \, dw$. Thus, we can apply Theorem B.1 to each pointwise conditional mutual information.

The sufficient condition is straightforward. For the necessary condition we can reuse the argument in the proof of Theorem B.1. Let $\Phi(u) = u \log u$ and
$$f_{X,Y|W}(x, y \mid w) := \Phi'\left(\frac{P_{X,Y|W}(x, y \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y \mid w)}\right).$$
Note that the proof implicitly uses the property that the distribution of $(W, X, Y)$ has full support. In particular, (11) and (12) only hold on the support of the distribution.

We define $f_{X,\hat{Y}|W}(x, \hat{y} \mid w)$ for $X$, $\hat{Y}$, and $W$ similarly, and we let $g(x, y \mid w) := \int f_{X,\hat{Y}|W}(x, \hat{y} \mid w)\, \tau(y, \hat{y}) \, d\hat{y}$. By a similar derivation, we have analogues of (11) and (12): for all $x, y, w$ and $\hat{y} \in S_\tau(y)$,
$$g(x, y \mid w) = f_{X,\hat{Y}|W}(x, \hat{y} \mid w) \tag{13}$$
and
$$g(x, y \mid w) = f_{X,Y|W}(x, y \mid w). \tag{14}$$
Suppose $\tau$ is not injective. There exist $y^*$, $y_1$ and $y_2$ such that $y_1 \neq y_2$ and $y^* \in S_\tau(y_1) \cap S_\tau(y_2)$. For all $x$ and $w$,
$$f_{X,Y|W}(x, y_1 \mid w) = g(x, y_1 \mid w) \quad \text{(by (14))}$$
$$= f_{X,\hat{Y}|W}(x, y^* \mid w) \quad \text{(by (13) and } y^* \in S_\tau(y_1)\text{)}$$
$$= g(x, y_2 \mid w) \quad \text{(by (13) and } y^* \in S_\tau(y_2)\text{)}$$
$$= f_{X,Y|W}(x, y_2 \mid w) \quad \text{(by (14))}.$$
Since $\Phi'$ is injective, for all $x$ and $w$,
$$\frac{P_{X,Y|W}(x, y_1 \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y_1 \mid w)} = \frac{P_{X,Y|W}(x, y_2 \mid w)}{P_{X|W}(x \mid w)\, P_{Y|W}(y_2 \mid w)}.$$
Therefore, there exist distinct $y_1$ and $y_2$ such that for all $w$,
$$P_{X|W,Y}(\cdot \mid w, y_1) = P_{X|W,Y}(\cdot \mid w, y_2).$$
This contradicts the condition that $(W, X, Y)$ is second order stochastically relevant.

C PROOFS IN SECTION 3.1
PROOF OF LEMMA 3.2.
$$\mathrm{E}_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a, b)}{P_{C|A}(c \mid a)}\right] - \mathrm{E}_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a, \theta(b))}{P_{C|A}(c \mid a)}\right]$$
$$= \mathrm{E}_{a,b,c}\left[\log \frac{P_{C|A,B}(c \mid a, b)}{P_{C|A,B}(c \mid a, \theta(b))}\right]$$
$$= \mathrm{E}_{a,b}\left[\mathrm{E}_c\left[\log \frac{P_{C|A,B}(c \mid a, b)}{P_{C|A,B}(c \mid a, \theta(b))} \,\middle|\, A = a, B = b\right]\right]$$
$$= \mathrm{E}_{a,b}\left[D_{KL}\left(P_{C|A,B}(\cdot \mid a, b) \,\middle\|\, P_{C|A,B}(\cdot \mid a, \theta(b))\right)\right].$$
Let $D(a, b, b') := D_{KL}\left(P_{C|A,B}(\cdot \mid a, b) \,\middle\|\, P_{C|A,B}(\cdot \mid a, b')\right)$, the KL divergence from the random variable $C$ conditioned on $A = a$ and $B = b$ to $C$ conditioned on $A = a$ and $B = b'$. Thus, we have
$$\mathrm{E}_{a,b}\left[D_{KL}\left(P_{C|A,B}(\cdot \mid a, b) \,\middle\|\, P_{C|A,B}(\cdot \mid a, \theta(b))\right)\right] = \mathrm{E}_{a,b}\left[D(a, b, \theta(b))\right]. \tag{15}$$
First note that by Jensen's inequality (Theorem B.3), $D(a, b, \theta(b)) \ge 0$ for all $a$ and $b$, so (15) is non-negative. This shows the first part.

Let $E = \{b : \theta(b) \neq b\} \subseteq \mathcal{B}$, the event that $\theta$ disagrees with the identity mapping. Because $P$ is $\langle A, B, C \rangle$-second order stochastically relevant, for all $b \in E$ there is an $a$ such that $P_{C|A,B}(\cdot \mid a, b) \neq P_{C|A,B}(\cdot \mid a, \theta(b))$, so $D(a, b, \theta(b)) > 0$ by Jensen's inequality (Theorem B.3). Therefore, when equality holds, the probability of the event $E$ is zero, and $\theta$ is an identity because $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$ is a finite space.

PROOF OF THEOREM 3.1. The proof has two parts: Mechanism 1 is truthful, and the truth-telling strategy profile maximizes the ex ante agent welfare.

Truthfulness. We first show Mechanism 1 is truthful. For the expert Alice, suppose Bob and Chloe provide their signals truthfully. Her expected payment consists of two prediction scores, $\mathrm{LSR}[b, \hat{p}]$ and $\mathrm{LSR}[b, \hat{q}]$, where $\hat{p}$ is her first prediction and $\hat{q}$ is the second. The expected first prediction score (under the randomness of Bob's signal $B$ conditional on Alice's signal $a$) is
$$\mathrm{E}_{b \sim P_{B|A}(\cdot \mid a)}\left[\mathrm{LSR}[b, \hat{p}]\right] \le \mathrm{E}_{b \sim P_{B|A}(\cdot \mid a)}\left[\mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]\right],$$
which is maximized by reporting the truthful prediction $P_{B|A}(\cdot \mid a)$, since the log scoring rule is proper (Definition 2.2). Similarly, her expected payment is maximized when her improved prediction is $P_{B|A,C}(\cdot \mid a, c)$.

If Chloe is the source, she will tell the truth given that Alice and Bob report truthfully, by Lemma 3.2. Formally, let Alice's, Bob's and Chloe's signals be $a$, $b$, and $c$ respectively. Let $\theta : \mathcal{C} \to \mathcal{C}$ denote a (deterministic) best response of Chloe. Alice's initial prediction is $P_{B|A}(\cdot \mid a)$. Because Chloe unilaterally deviates, Alice's improved prediction is $P_{B|A,C}(\cdot \mid a, \theta(c))$. Therefore, Chloe's payment is $\mathrm{LSR}[b, P_{B|A,C}(\cdot \mid a, \theta(c))] - \mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]$.

Note that regardless of Chloe's report, the initial prediction is $\hat{p} = P_{B|A}(\cdot \mid a)$. Hence, equivalently, Chloe's best response also maximizes $\mathrm{LSR}[b, P_{B|A,C}(\cdot \mid a, \hat{c})] - \mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]$. Taking the expectation over the signals $a, b, c$ and strategy $\theta$, we have
$$U(\theta) := \sum_{a,b,c} P(a, b, c) \left(\mathrm{LSR}[b, P_{B|A,C}(\cdot \mid a, \theta(c))] - \mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]\right)$$
$$= \mathrm{E}_{a,b,c}\left[\log\left(P_{B|A,C}(b \mid a, \theta(c))\right) - \log\left(P_{B|A}(b \mid a)\right)\right] \quad \text{(by (1))}$$
$$= \mathrm{E}_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a, \theta(c))}{P_{B|A}(b \mid a)}\right].$$
Similarly, the ex ante payment of Chloe when her strategy is truth-telling $\tau$ is
$$U(\tau) = \mathrm{E}_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a, c)}{P_{B|A}(b \mid a)}\right].$$
The difference between $U(\tau)$ and $U(\theta)$ is
$$U(\tau) - U(\theta) = \mathrm{E}_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a, c)}{P_{B|A}(b \mid a)}\right] - \mathrm{E}_{a,b,c}\left[\log \frac{P_{B|A,C}(b \mid a, \theta(c))}{P_{B|A}(b \mid a)}\right].$$
First, by Lemma 3.2, we know $U(\tau) \ge U(\theta)$. However, because $\theta$ is a best response, the inequality is in fact an equality, $U(\tau) = U(\theta)$. By the second part of Lemma 3.2, this shows $\theta$ is an identity and $\theta = \tau$.

If Chloe is the target, her action does not affect her expected payment, so reporting her signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

Strongly truthful. Now we show the truth-telling strategy profile $\tau$ maximizes the ex ante agent welfare under $P$. If Bob is the target, the ex ante agent welfare (before anyone receives signals) in the truth-telling strategy profile $\tau$ is
$$W(\tau; P) = \mathrm{E}_{(a,b,c) \sim P}\left[2\left(\mathrm{LSR}[b, P_{B|A,C}(\cdot \mid a, c)] - \mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]\right)\right]$$
$$= 2\, \mathrm{E}_{(a,b,c) \sim P}\left[\log \frac{P_{B|A,C}(b \mid a, c)}{P_{B|A}(b \mid a)}\right]$$
$$= 2\, I(B; C \mid A),$$
which is twice the conditional mutual information between Bob's and Chloe's signals given Alice's signal.

On the other hand, let $\sigma = (\sigma_A, \sigma_B, \sigma_C)$ be an equilibrium strategy profile where Bob and Chloe report signals $\sigma_B(b)$ and $\sigma_C(c)$ respectively.
Since $\sigma$ is an equilibrium, if Bob is the target, Alice will predict Bob's report $\sigma_B(B)$ truthfully given her signal $a$, and report $\hat{p} = P_{\sigma_B(B)|A}(\cdot \mid a)$ and $\hat{q} = P_{\sigma_B(B)|A,\sigma_C(C)}(\cdot \mid a, \sigma_C(c))$. By a similar computation, the ex ante agent welfare is
$$W(\sigma; P) = 2\, I(\sigma_B(B); \sigma_C(C) \mid A) \le 2\, I(B; C \mid A) = W(\tau; P).$$
The inequality is based on the data processing inequality (Theorem 2.3). Moreover, by Proposition B.2, the equality holds only if $\sigma$ is a permutation strategy profile.

D PROOFS IN SECTION 3.2
D.1 Proof of Theorem 3.3
The proof is mostly identical to that of Theorem 3.1 in Appendix C. We include it for completeness.

PROOF OF THEOREM 3.3. The proof also has two parts: Mechanism 2 is truthful, and the truth-telling strategy profile maximizes the ex ante agent welfare.

We first show Mechanism 2 is truthful. For the expert Alice, the proof is identical to the proof of Theorem 3.1. By Lemma 3.4, if Bob is the target, he will tell the truth given that Alice and Chloe report truthfully. If Bob is the source, his action does not affect his expected payment, so reporting his signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

The proof of strong truthfulness is identical to the proof of Theorem 3.1.

Note that if we randomize the roles among Alice, Bob, and Chloe, each agent has a non-negative expected payment at the truth-telling equilibrium.

E PROOF OF THEOREM 3.6
For the expert Alice, suppose Bob and Chloe provide their signals truthfully. Her payment consists of two prediction scores. When the random variable $Z = \emptyset$, the prediction score (under the randomness of Bob's signal $B$ conditional on Alice's signal $a$) is
$$\mathrm{E}_{b \sim P_{B|A}(\cdot \mid a)}\left[\mathrm{LSR}[b, \hat{p}]\right] \le \mathrm{E}_{b \sim P_{B|A}(\cdot \mid a)}\left[\mathrm{LSR}[b, P_{B|A}(\cdot \mid a)]\right].$$
Since the log scoring rule is proper (Definition 2.2), reporting the truthful prediction $P_{B|A}(\cdot \mid a)$ maximizes it. Similarly, when $Z \neq \emptyset$, her (conditional) expected payment is maximized when her improved prediction is $P_{B|A,Z}(\cdot \mid a, z)$.

For the target Bob, suppose Alice and Chloe report truthfully.
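The properness of the log scoring rule (Definition 2.2), invoked repeatedly in these proofs, can be checked numerically. The sketch below is not from the paper; the belief `p` and the deviating reports are arbitrary illustrative choices.

```python
# Numerical check: the log scoring rule is proper -- the expected score
# E_{w~p}[log q(w)] under the true distribution p is maximized at q = p.
from math import log

def expected_log_score(p, q):
    """E_{w~p}[LSR[w, q]] = sum_w p(w) * log q(w)."""
    return sum(pw * log(qw) for pw, qw in zip(p, q) if pw > 0)

p = [0.5, 0.3, 0.2]                      # true belief (illustrative)
truthful = expected_log_score(p, p)
for q in ([0.4, 0.4, 0.2], [0.6, 0.2, 0.2], [1/3, 1/3, 1/3]):
    assert expected_log_score(p, q) < truthful  # every deviation scores lower
```

Strict properness is exactly what makes truth-telling a strict (rather than weak) best response for the expert's predictions.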
We will follow the proof of Lemma 3.4 to show Bob's best response is truth-telling. Let $\theta : \mathcal{B} \to \mathcal{B}$ be a (deterministic) best response of Bob. Bob's payment depends on four values: the signals $a$, $b$, $c$, and the virtual signal $Z$:
$$\mathbb{1}[Z = c]\, \mathrm{LSR}[\theta(b), P_{B|A,C}(\cdot \mid a, c)] - \mathbb{1}[Z = \emptyset]\, \mathrm{LSR}[\theta(b), P_{B|A}(\cdot \mid a)].$$
And Bob's expected payment is
$$U(\theta) = \frac{1}{|\mathcal{C}| + 1}\, \mathrm{E}_{a,b,c}\left[\mathrm{LSR}[\theta(b), P_{B|A,C}(\cdot \mid a, c)] - \mathrm{LSR}[\theta(b), P_{B|A}(\cdot \mid a)]\right].$$
Thus, by the same argument as in Lemma 3.4, Bob's best response is truth-telling. If Bob is the source, his action does not affect his expected payment, so reporting his signal truthfully is a best response strategy. By randomizing the roles of source and target, both Bob and Chloe will report their signals truthfully.

The proof of strong truthfulness is identical to the proof of Theorem 3.3.

F SKETCH PROOF FOR PROPOSITION 4.2
A consistent predictor $f$ of a value $v$ given evidence $X_1, X_2, \ldots$ is one where more information leads to a better prediction such that
$$\lim_{n \to \infty} \Pr\left[|f(X_1, X_2, \ldots, X_n) - v| \ge \epsilon\right] = 0.$$
The proposition follows by seeing that, fixing $y$ and $x_1$, both $\bar{x}^{(y)}_{-\{1,2\}}$ and $P_{X_1|X_{-1}}(y \mid x_2, x_3, \ldots, x_n)$ are consistent estimators for $P_{X|W}(y \mid w)$.

$\bar{x}^{(y)}_{-\{1,2\}}$ is the empirical distribution of $n - 2$ independent samples from $P_{X|W}(\cdot \mid w)$ used to estimate $P_{X|W}(y \mid w)$, and is therefore a consistent estimator.

On the other hand, because $X_1, X_2, \ldots, X_n$ are independent conditional on $W$, the posterior distribution of $W$ given $x_2, x_3, \ldots, x_n$ is consistent. That is, for all $w$, $\Pr\left[|P(W = w \mid X_2, X_3, \ldots, X_n) - 1| \ge \epsilon \mid W = w\right] \to 0$. Thus
$$P_{X_1|X_{-1}}(\cdot \mid x_2, x_3, \ldots, x_n) = \sum_w P_{X_1|W}(\cdot \mid w)\, P_{W|X_{-1}}(w \mid x_2, x_3, \ldots, x_n)$$
is also a consistent predictor of $P_{X|W}(\cdot \mid w)$, which completes the proof.

G GENERAL MEASURE SPACES
G.1 Settings
There are three characters: Alice, Bob and Chloe. Consider three measure spaces $(\mathcal{A}, \mathcal{S}_A, \mu_A)$, $(\mathcal{B}, \mathcal{S}_B, \mu_B)$, and $(\mathcal{C}, \mathcal{S}_C, \mu_C)$. Let $\mathcal{X} := \mathcal{A} \times \mathcal{B} \times \mathcal{C}$, $\mathcal{S} := \mathcal{S}_A \times \mathcal{S}_B \times \mathcal{S}_C$, and $\mu = \mu_A \otimes \mu_B \otimes \mu_C$, where $\otimes$ denotes the product between distributions. Let $\mathcal{P}(\mathcal{X})$ be the set of probability density functions on $\mathcal{X}$ with respect to $\mu$. (Formally, $\mathcal{P}(\mathcal{X})$ is the set of all distributions on $\mathcal{X}$ that are absolutely continuous with respect to the measure $\mu$. For $P \in \mathcal{P}(\mathcal{X})$, we denote the density of $P$ with respect to $\mu$ by $p(\cdot)$. For example, if $\mathcal{X}$ is a discrete space, we can set $\mu$ as the counting measure; if $\mathcal{X}$ is a Euclidean space $\mathbb{R}^d$, we can use the Lebesgue measure.)

Alice (respectively Bob, Chloe) has a privately observed signal $a$ (respectively $b$, $c$) from the set $\mathcal{A}$ (respectively $\mathcal{B}$, $\mathcal{C}$). They all share a common prior belief that their signals $(a, b, c)$ are generated from a random variable $X := (A, B, C)$ on $(\mathcal{X}, \mathcal{S})$ with a probability measure $P \in \mathcal{P}(\mathcal{X})$ and a positive density function $p > 0$. We consider a uniform second order stochastic relevance for general measure spaces as follows:

DEFINITION G.1. A random variable $(A, B, C)$ in $\mathcal{A} \times \mathcal{B} \times \mathcal{C}$ with a probability measure $P$ is not $\langle A, B, C \rangle$-uniform stochastically relevant if there exist a signal $a \in \mathcal{A}$ and two distinct signals $b, b' \in \mathcal{B}$ such that the posterior on $C$ is identical whether $B = b$ with $A = a$ or $B = b'$ with $A = a$,
$$P_{C|A,B}(\cdot \mid a, b) = P_{C|A,B}(\cdot \mid a, b') \quad \mu_C\text{-almost surely.}$$
Otherwise we call $P$ $\langle A, B, C \rangle$-uniform stochastically relevant. Thus, when Alice is making a prediction of Chloe's signal, Bob's signal is always relevant and induces different predictions when $B = b$ or $B = b'$. We call $P$ uniform second order stochastically relevant if it is $\langle X, Y, Z \rangle$-uniform stochastically relevant where $\langle X, Y, Z \rangle$ is any permutation of $\{A, B, C\}$.

One major difference between $\langle A, B, C \rangle$-stochastic relevance (Definition 2.1) and $\langle A, B, C \rangle$-uniform second order stochastic relevance (Definition G.1) is the quantifier on $a$: in the former, given any distinct pair $b, b'$, it is sufficient to have one $a^*$ such that $P_{C|A,B}(\cdot \mid a^*, b) \neq P_{C|A,B}(\cdot \mid a^*, b')$. For uniform stochastic relevance, it is required that for all $a$, $P_{C|A,B}(\cdot \mid a, b) \neq P_{C|A,B}(\cdot \mid a, b')$. One issue for second order stochastic relevance in a general measure space is that changing the distribution on a measure-zero set can make it stochastically irrelevant, and the probability of drawing an $a^*$ such that $P_{C|A,B}(\cdot \mid a^*, b) \neq P_{C|A,B}(\cdot \mid a^*, b')$ may be zero.

G.2 Theorems 3.1 and 3.3 on general measure spaces
Here, we state results analogous to Theorems 3.1 and 3.3. The proofs are mostly identical.

THEOREM G.2. Given a measure space $(\mathcal{X}, \mathcal{S}, \mu)$, if the common prior $P$ is uniform second order stochastically relevant on the measurable space $(\mathcal{X}, \mathcal{S})$, and $P$ is absolutely continuous with respect to $\mu$, Mechanism 1 has the following properties:
(1) The truth-telling strategy profile $\tau$ is a strict Bayesian Nash equilibrium.
(2) The ex ante agent welfare in the truth-telling strategy profile $\tau$ is strictly better than that of all non-invertible strategy profiles.

Here the maximum agent welfare is attained not only at permutation strategy profiles, but also at invertible strategy profiles. This limitation is due to the strictness of the data processing inequality (Theorem B.1). For example, consider a pair of random variables $(X, Y)$ on $\mathbb{Z}_{>0} \times \mathbb{Z}_{>0}$. Let $T$ be a Markov operator such that for $y \in \mathbb{Z}_{>0}$, $T(y) = y$ with probability $1/2$ and $T(y) = -y$ otherwise. Although $T$ is not a one-to-one function, $I(X; Y) = I(X; T(Y))$. On the other hand, following the proof of Theorem B.1, we can say the equality holds when $T$ is injective.

The guarantee of Mechanism 2 is the same.

THEOREM G.3. Given a measure space $(\mathcal{X}, \mathcal{S}, \mu)$, if the common prior $P$ is uniform second order stochastically relevant on the measurable space $(\mathcal{X}, \mathcal{S})$, and $P$ is absolutely continuous with respect to $\mu$, Mechanism 2 has the following properties:
(1) The truth-telling strategy profile $\tau$ is a strict Bayesian Nash equilibrium.
(2) The ex ante agent welfare in the truth-telling strategy profile $\tau$ is strictly better than that of all non-invertible strategy profiles.
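The strictness of the data processing inequality discussed above (Theorem B.1) can be checked numerically on a small finite joint distribution. The joint distribution and the two maps below are illustrative assumptions, not taken from the paper: a permutation of the signal space preserves mutual information exactly, while a non-invertible map that merges two signals strictly decreases it when $(X, Y)$ is stochastically relevant.

```python
# Sketch: strict data processing inequality on a small finite joint.
from math import log

def mutual_information(joint):
    """I(X;Y) in nats for a dict {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def push_forward(joint, t):
    """Joint distribution of (X, T(Y)) for a deterministic map t."""
    out = {}
    for (x, y), p in joint.items():
        key = (x, t(y))
        out[key] = out.get(key, 0) + p
    return out

# Illustrative stochastically relevant joint: P(Y|X=0) != P(Y|X=1).
joint = {(0, 0): 0.3, (0, 1): 0.1, (0, 2): 0.1,
         (1, 0): 0.1, (1, 1): 0.1, (1, 2): 0.3}
merge = lambda y: min(y, 1)       # non-invertible: merges y=1 and y=2
perm = lambda y: (y + 1) % 3      # invertible: a permutation of {0,1,2}

i_xy = mutual_information(joint)
assert abs(mutual_information(push_forward(joint, perm)) - i_xy) < 1e-9
assert mutual_information(push_forward(joint, merge)) < i_xy
```

This mirrors why only permutation (invertible) strategy profiles can match the welfare of truth-telling: any signal-merging strategy destroys conditional mutual information.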

Journal: ACM Transactions on Economics and Computation (Association for Computing Machinery)

Published: Feb 21, 2023
