This is the empirical, decision-theoretic, Bayesian way to solve this problem. The only axiom we need is "the more evidence there is for a theory, the higher the probability of that theory being true". Start from scratch:
Consider a hypothetical space containing all of the (infinite) possible theories which explain why the world is the way that it is. Each of these theories has equal (and infinitesimally small) probability. Similar theories are closer together; dissimilar theories are farther apart.
If we have the theory, "The universe is the way that it is because a race of technologically superior beings are running an emulated universe with the variables for the laws of physics tuned to be ours", and then discovered that at the sub-quark level the wave-particles have little serial numbers which say "This discrete particle #(3^^^3)-27 is property of Emulation Softhardware, Inc, to be used in Universe Emulator #918298172, by the Bijnak race in their experiment on how sentient life evolves in a universe where the laws of physics are such: [very accurate description of how reality works]", then this evidence would increase the probability of our theory heavily, and decrease the probability of almost every other theory. This transfer of probability is lossless: the total probability of all theories will always be 1.
Before we had that evidence, though, could we have evaluated our theory? Technically, yes: we could have compared its slightly-above-zero probability with the slightly-above-zero probability of every other theory. But this math is beyond us from a computational standpoint. Not only that, but the human brain has several adaptations, which aided us strongly in reproducing in the ancestral environment, that make thinking about theories in this way very, very difficult, and any product of such thinking is highly dubious.
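The probability transfer described above can be sketched as a toy Bayesian update. The theory names and likelihood numbers below are invented for illustration; the point is only the mechanics, including the "lossless" renormalization:

```python
# Toy sketch: updating a finite hypothesis space on new evidence.
# Hypothesis names and likelihood values are illustrative, not from the post.

# Prior: three stand-in theories, equally probable.
priors = {"emulated-universe": 1/3, "theory-B": 1/3, "theory-C": 1/3}

# Likelihood of observing the serial-number evidence under each theory.
likelihoods = {"emulated-universe": 0.99, "theory-B": 1e-6, "theory-C": 1e-6}

# Bayes' rule: posterior is proportional to prior times likelihood,
# renormalized so the probabilities again sum to 1.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: p / total for h, p in unnorm.items()}

print(posteriors)
print(sum(posteriors.values()))  # always 1: probability is conserved
```

The renormalization step is the Conservation of Probability invoked later in the post: probability lost by the other theories is exactly what the favored theory gains.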
---
I have to deviate from the topic to explain, in a very technical manner, the phenomenon of positive reinforcement bias. The following experiment was conducted on a group of students. They were shown a string of three numbers, "2, 4, 6", and told that there was some hidden rule by which such strings of three numbers were generated. They could write down their own strings of three numbers and get feedback on whether or not each string conformed to the rule. They were then to guess the rule.
Here is how one student's thinking went:
1. 4, 6, 2 --- Testing 'string is digits 2, 4, 6, at random' --- NO
2. 4, 6, 8 --- Testing 'string is n, n+2, n+4' --- YES
3. 6, 8, 10 --- Testing 'string is n, n+2, n+4' --- YES
4. 21, 23, 25 --- Testing 'string is n, n+2, n+4' --- YES
At which point the student declared the rule to be 'string is n, n+2, n+4'. Sounds logical, right?
The actual rule was 'string is in ascending order'. The student only ever made tests which would come out positive if their theory was correct and negative if it was not. This is positive reinforcement bias.
The first test reduced the probability of 'string is random 2,4,6' (and of every theory which allowed 4,6,2 as a string) to 0, and split the probability those theories had between all theories which did not allow 4,6,2 as a string (including 'string is n,n+2,n+4', along with billions of other theories). Each of these theories gained a very small amount of probability. The next two tests probably ruled out *some* theories (for instance, 'strings can only be 2,4,6' or 'strings are 3 random even numbers'), and the selected theory 'n,n+2,n+4' did gain some probability, but not much. And every time 'n,n+2,n+4' gained probability, so did 'ascending order', such that the two always had equal probability and more testing was needed to distinguish between them. But because the student never even thought of 'ascending order', he didn't realize this. He had already...
privileged the hypothesis. A superhuman thinking machine would have been able to think of all of the possibilities, would have noticed that every test it ran promoted many, many hypotheses rather than just the selected one, and would have continued running experiments. But as humans, we are unable to do this, and we should not try.
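The student's predicament can be checked directly. Assuming the two rules as stated in the post, a small sketch shows why every positive test promoted both hypotheses at once:

```python
# Toy check of the 2-4-6 task: every string that fits 'n, n+2, n+4' also
# fits 'ascending order', so positive tests of the first rule can never
# separate the two hypotheses.

def n_plus_2(s):
    """Hypothesis: the string is n, n+2, n+4."""
    a, b, c = s
    return b == a + 2 and c == b + 2

def ascending(s):
    """The actual rule: the string is in ascending order."""
    a, b, c = s
    return a < b < c

positive_tests = [(4, 6, 8), (6, 8, 10), (21, 23, 25)]
for t in positive_tests:
    print(t, n_plus_2(t), ascending(t))  # both rules say True every time

# A test the student never tried: one the selected hypothesis predicts
# should fail. This is the kind of test that separates the two rules.
probe = (1, 2, 3)
print(probe, n_plus_2(probe), ascending(probe))  # False, True
```

Feeding '1, 2, 3' to the experimenter would have returned 'YES', falsifying 'n, n+2, n+4' outright while leaving 'ascending order' standing.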
---
So when we take into account our new evidence of the serial numbers on the sub-quark particles, probability moves from ALL hypotheses which can be tagged 'non-intelligent designer' to ALL hypotheses which can be tagged 'intelligent designer'. (Hypotheses containing statements such as 'evidence of intelligence is likely to be randomly, not intelligently, generated' remain unaffected either way.) A larger chunk of probability goes to theories which talk about an emulated universe. The theories which could have predicted in advance the exact wording of the serial number's disclaimer get the largest chunk of probability.
Now let's connect this to positive reinforcement bias. Confirmation bias is the phenomenon of humans only looking at evidence which supports the selected hypothesis; do not confuse it with positive reinforcement bias, which is that even a human who is completely impartial towards a selected hypothesis will still tend to perform experiments in which a positive outcome affirms the hypothesis and a negative outcome opposes it, and will not throw the hypothesis out until an experiment which should have come out 'yes' comes out 'no'. The problem is that this tends to narrow our field of vision over the space of possible hypotheses. When we find evidence which supports the hypothesis, we zoom in closer on that hypothesis without noticing whether the evidence also supports other hypotheses which we are panning our camera away from.
Experiments of the other variety (for instance, testing the hypothesis 'n,n+2,n+4' with '9,5,6' and getting the expected 'no') tend to broaden our field of vision. We are MUCH more likely to find evidence which, rather than affirming our current hypothesis, affirms other hypotheses without hurting our current one.
---
Now it's time to talk about Christianity.
- There isn't nearly enough evidence to locate Christianity in the space of all hypotheses. A single book riddled with errors, supported by a mountain of positive reinforcement bias, is not good enough.
- When Christians are presented with evidence which opposes their particular selected hypothesis in the Christianity neighborhood of hypotheses, they switch to a nearby hypothesis which was carefully constructed to avoid losing any probability to the new evidence. No additional probability is gained by this new hypothesis, because there is no opposing hypothesis for the probability to come from, and we have Conservation of Probability (the total always has to be 1; you can't make probability out of nothing).
- In order for Christians to actually get evidence for their beliefs which moves probability from other hypotheses into theirs, they must take a prediction about reality that their hypothesis makes (such as the Rapture) which other hypotheses do not make, and see if it comes true, and if it does then some probability gets moved out of the theories which did not predict the Rapture and into the theories that did. Retroactively fitting Christianity (shuffling around theory-space) to match the experimental outcome of previous tests does not count.
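The probability transfer from a risky advance prediction, as described in the bullets above, can be sketched with made-up numbers. The hypothesis labels and likelihoods here are invented for illustration:

```python
# Sketch, with illustrative numbers: how a risky advance prediction moves
# probability. A theory that predicted a specific event gains heavily when
# the event occurs; theories that assigned it almost no likelihood lose.
# A theory retrofitted after the fact made no advance prediction, so there
# is no such transfer.

prior = {"predicted-event": 0.01, "did-not-predict": 0.99}
likelihood = {"predicted-event": 0.9, "did-not-predict": 0.001}  # P(event | theory)

unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior["predicted-event"])  # jumps from 1% to roughly 90%
```

The jump comes entirely from the likelihood ratio between the two theories; shuffling a theory around in theory-space after the outcome is known changes its prior, not its advance likelihood, and so earns it nothing.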
- Atheism is similar. It is impossible to get hard evidence for the non-existence of something, that is, evidence which will move a significant amount of probability from the collection of theories 'x exists, and [excuse] for why [evidence] seems to say otherwise' to the theory 'x does not exist'.
None of this has anything to do with faith. This is how reality works. Period. Any discussion of the veracity of religion really ought to be conducted in this context.
I will write out generalized Bayes' law representations of all of the theoretical calculations in this post on request. Also, expect more clarification and more writing of a similar nature when I get my pink name.
Disclaimer: I made this post after seeing the phrase "insufficient evidence" used subjectively. Insufficient evidence is a very, very well-defined term. It doesn't mean "I'd like more evidence before I personally believe this"; it means "there is not enough evidence to believe this", and the amount of evidence required is empirical and well-defined.
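Under the simplifying assumption of N equally probable rival hypotheses (the numbers below are illustrative), "enough evidence" can even be put in rough quantitative terms:

```python
import math

# Sketch: singling out one hypothesis among N equally probable rivals
# requires about log2(N) bits of evidence, and each observation contributes
# log2 of its likelihood ratio. All numbers here are illustrative.

n_hypotheses = 1_000_000
bits_needed = math.log2(n_hypotheses)  # ~19.93 bits

# An observation 4x more likely if the hypothesis is true than if it is false.
likelihood_ratio = 4
bits_per_observation = math.log2(likelihood_ratio)  # 2 bits

print(bits_needed)
print(math.ceil(bits_needed / bits_per_observation))  # ~10 such observations
```

On this accounting, "insufficient evidence" means the accumulated bits fall short of what it takes to locate the hypothesis in the space of its rivals, regardless of anyone's personal inclination to believe.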