MODES OF CONVERGENCE TO THE TRUTH: STEPS TOWARD A BETTER EPISTEMOLOGY OF INDUCTION

Review of Symbolic Logic, Volume 15 (2): 34 – Jun 1, 2022

Abstract

Evaluative studies of inductive inferences have been pursued extensively with mathematical rigor in many disciplines, such as statistics, econometrics, computer science, and formal epistemology. Attempts have been made in those disciplines to justify many different kinds of inductive inferences, to varying extents. But somehow those disciplines have said almost nothing to justify a most familiar kind of induction, an example of which is this: “We’ve seen this many ravens and they all are black, so all ravens are black.” This is enumerative induction in its full strength. For it does not settle for a weaker conclusion (such as “the ravens observed in the future will all be black”); nor does it proceed with any additional premise (such as the statistical IID assumption). The goal of this paper is to take some initial steps toward a justification for the full version of enumerative induction, against counterinduction, and against the skeptical policy. The idea is to explore various epistemic ideals, mathematically defined as different modes of convergence to the truth, and look for one that is weak enough to be achievable and strong enough to justify a norm that governs both the long run and the short run. So the proposal is learning-theoretic in essence, but a Bayesian version is developed as well.
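The three rival policies the abstract names can be made concrete in the learning-theoretic style: a method maps each finite data sequence to a conjecture, and a method "converges to the truth" if its conjectures stabilize on the true hypothesis as data accumulate. The sketch below is not from the paper; it is a minimal illustration, assuming a toy world in which every observed raven is black (encoded as 1) and the universal hypothesis is true.

```python
def enumerative_induction(data):
    # Conjecture the universal hypothesis so long as every observed raven
    # is black; retract it the moment a counterexample appears.
    return "all black" if all(data) else "not all black"

def counterinduction(data):
    # Perversely conjecture the opposite of what the data suggest.
    return "not all black" if all(data) else "all black"

def skeptic(data):
    # The skeptical policy: never venture beyond the data.
    return "suspend judgment"

# A toy world in which the universal hypothesis is true: 100 black ravens.
stream = [1] * 100
truth = "all black"

for method in (enumerative_induction, counterinduction, skeptic):
    verdicts = [method(stream[:n]) for n in range(1, len(stream) + 1)]
    # Convergence to the truth: the verdict is the truth from some point on.
    converged = any(all(v == truth for v in verdicts[k:])
                    for k in range(len(verdicts)))
    print(f"{method.__name__}: converges to the truth = {converged}")
```

In this toy world, only enumerative induction stabilizes on the true hypothesis; counterinduction and the skeptic never do. The paper's contribution lies in distinguishing *modes* of such convergence and finding one strong enough to single out enumerative induction, which this sketch does not attempt to capture.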

Publisher: Cambridge University Press
Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of The Association for Symbolic Logic
ISSN: 1755-0211
eISSN: 1755-0203
DOI: 10.1017/S1755020321000605

Journal: Review of Symbolic Logic (Cambridge University Press)

Published: Jun 1, 2022

Keywords: 03B48; 62F15; learning theory; enumerative induction; probability and inductive logic; Bayesian inference
