Abstract (eng)
The main contribution of this thesis is the development and empirical evaluation of IBE*, a synthesis of Jeffrey Conditionalization and IBE (Inference to the Best Explanation) that generalizes explanationist updating to cases of uncertain evidence. It is argued that studying (probabilistic) alternatives to Bayesian inference promises substantial merits. The 'Alien Die' model and Brier scores are introduced. In simulations with full certainty of evidence, we replicate a recent key finding by Igor Douven: the explanationist is faster, but also incurs a slightly higher Brier score. In simulations with fixed uncertainty of evidence, the explanationist is again faster and also more accurate. In simulations with random uncertainty, too, the explanationist is the substantially faster and more accurate variant: we identify a decisive shortcoming of the Bayesian approach. IBE* appears to counteract the problem of uncertain evidence. We introduce networks and visualize them under different parametrisations. We then run collective belief updates on these networks. With full certainty, the explanationist, unlike the Bayesian, crosses the threshold for the true bias on all topologies. With uncertain evidence, this advantage is even more pronounced: we find a vast speed advantage for the explanationist. We then address controversies regarding the choice of specific parameters and concepts used in our simulations. We develop a definition of computer simulations and reflect on epistemological issues. We point out the potential of simulations for the social sciences. Finally, current limitations and further directions of our research are identified.