Probabilistic logic under coherence: complexity and algorithms


Publisher
Springer Journals
Copyright
Copyright © 2006 by Springer Science+Business Media, Inc.
Subject
Computer Science; Complexity; Computer Science, general; Mathematics, general; Artificial Intelligence (incl. Robotics)
ISSN
1012-2443
eISSN
1573-7470
DOI
10.1007/s10472-005-9005-y

Abstract

In previous work [V. Biazzo, A. Gilio, T. Lukasiewicz and G. Sanfilippo, Probabilistic logic under coherence, model-theoretic probabilistic logic, and default reasoning in System P, Journal of Applied Non-Classical Logics 12(2) (2002) 189–213], we explored the relationship between probabilistic reasoning under coherence and model-theoretic probabilistic reasoning. In particular, we showed that the notions of g-coherence and g-coherent entailment in probabilistic reasoning under coherence can be expressed by combining notions from model-theoretic probabilistic reasoning with concepts from default reasoning. In this paper, we continue this line of research. Based on the above semantic results, we draw a precise picture of the computational complexity of probabilistic reasoning under coherence. Moreover, we introduce transformations for probabilistic reasoning under coherence that reduce an instance of deciding g-coherence, or of computing tight intervals under g-coherent entailment, to a smaller problem instance, and that can be computed very efficiently. Furthermore, we present new algorithms for deciding g-coherence and for computing tight intervals under g-coherent entailment, which reformulate previous algorithms using terminology from default reasoning. They are based on reductions to standard problems in model-theoretic probabilistic reasoning, which in turn can be reduced to linear optimization problems. Hence, efficient techniques for model-theoretic probabilistic reasoning, such as column generation, can immediately be applied to probabilistic reasoning under coherence. We describe several such techniques, which transform problem instances in model-theoretic probabilistic reasoning into smaller ones. We also describe a technique for obtaining a reduced set of variables for the associated linear optimization problems in the conjunctive case, and we give new characterizations of this reduced set, both as a set of non-decomposable variables and in terms of the concept of random gain.
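To make the reduction concrete, the following is a minimal, hypothetical sketch (Python with SciPy; not the paper's own algorithm, data, or code) of the standard way a model-theoretic probabilistic reasoning task is cast as a linear optimization problem: each possible world gets a probability variable, each interval assertion in the knowledge base becomes linear constraints over those variables, and the tight bounds for a query event are the minimum and maximum of its probability over the feasible region. The atoms, events, and intervals below are invented purely for illustration.

```python
# Hypothetical sketch: tight probability bounds via linear programming,
# assuming unconditional interval constraints over possible worlds.
from itertools import product
from scipy.optimize import linprog

atoms = ["bird", "flies"]                            # toy propositional atoms
worlds = list(product([0, 1], repeat=len(atoms)))    # all truth assignments

def holds(event, world):
    """Evaluate an event (a function of atom values) in a possible world."""
    return bool(event(dict(zip(atoms, world))))

# Interval constraints P(event) in [lo, hi] (invented knowledge base).
constraints = [
    (lambda v: v["bird"], 0.9, 1.0),                      # P(bird) >= 0.9
    (lambda v: (not v["bird"]) or v["flies"], 0.8, 1.0),  # P(bird -> flies) >= 0.8
]
query = lambda v: v["flies"]                              # bounds for P(flies)?

n = len(worlds)
A_ub, b_ub = [], []
for event, lo, hi in constraints:
    row = [1.0 if holds(event, w) else 0.0 for w in worlds]
    A_ub.append([-x for x in row]); b_ub.append(-lo)      # P(event) >= lo
    A_ub.append(row);               b_ub.append(hi)       # P(event) <= hi
A_eq, b_eq = [[1.0] * n], [1.0]                           # probabilities sum to 1
c = [1.0 if holds(query, w) else 0.0 for w in worlds]     # objective: P(query)

lower = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
upper = linprog([-x for x in c], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
assert lower.success and upper.success
print(f"tight interval for P(flies): [{lower.fun:.3f}, {-upper.fun:.3f}]")
```

In this unconditional toy example the objective is linear. For conditional events, as used in g-coherent entailment, the query objective becomes a ratio of linear forms and requires further transformation, and since the number of world variables grows exponentially with the number of atoms, large instances call for techniques such as the column generation mentioned in the abstract.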

Journal

Annals of Mathematics and Artificial Intelligence (Springer Journals)

Published: Jan 6, 2006
