Bayesian networks (BNs) encode conditional independence to avoid combinatorial explosion in the number of variables, but they remain subject to exponential growth of space and inference time in the number of causes per effect variable. Among space-efficient local models, we focus on the Non-Impeding Noisy-AND Tree (NIN-AND Tree, or NAT) models, due to their multiple merits, and on NAT-modeled BNs, where each multi-parent variable family may be encoded as a NAT model. Although BN inference is generally exponential in treewidth, inference is tractable with NAT-modeled BNs of high treewidth and low density. In this work, we present the first study on learning NAT-modeled BNs from data. We apply the MDL principle to learning NAT-modeled BNs by developing a corresponding scoring function, and we couple it with heuristic structure search. We show that when data satisfy NAT causal independence and exhibit high-treewidth, low-density structure, learning the underlying NAT-modeled BNs is feasible.
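To illustrate the score-and-search setting the abstract describes, the sketch below computes a standard MDL score for a discrete BN: the description length of the CPT parameters plus the negative log-likelihood of the data. This is the generic MDL score, not the NAT-specific scoring function developed in the paper; the function and variable names are illustrative assumptions.

```python
# Minimal sketch of an MDL score for discrete BN structure learning:
# score(G, D) = parameter description length + negative log-likelihood.
# Lower scores are better. This is the generic MDL score, not the
# NAT-model-specific score from the paper.
import math
from collections import Counter

def mdl_score(data, parents, arity):
    """data: list of tuples (one value per variable, indexed by position);
    parents: dict mapping each variable to a tuple of its parents;
    arity: dict mapping each variable to its number of states."""
    n = len(data)
    total = 0.0
    for v, pa in parents.items():
        # Free parameters in the CPT of v: one distribution per
        # parent configuration, each with arity[v] - 1 free entries.
        q = 1
        for p in pa:
            q *= arity[p]
        k = q * (arity[v] - 1)
        # Description length of the parameters: (k / 2) * log2(N).
        total += 0.5 * k * math.log2(n)
        # Negative log-likelihood from empirical counts.
        joint = Counter(tuple(r[p] for p in pa) + (r[v],) for r in data)
        cond = Counter(tuple(r[p] for p in pa) for r in data)
        for key, c in joint.items():
            total -= c * math.log2(c / cond[key[:-1]])
    return total

# Toy data over two binary variables; candidate structure 0 -> 1.
data = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0)]
score = mdl_score(data, {0: (), 1: (0,)}, {0: 2, 1: 2})
```

A heuristic structure search, as in the paper, would evaluate such a score over candidate parent sets and keep the structure with the lowest total; the NAT-specific score additionally charges for encoding each family's NAT local model instead of a full CPT.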
Annals of Mathematics and Artificial Intelligence – Springer Journals
Published: Nov 1, 2021
Keywords: Bayesian networks; Causal independence models; Probabilistic inference; Local structures; Machine Learning; 68T05; 68T37