Theomatics is not a trivial subject to master, by any means. The presentations in this site, intended to reveal the technical details of inherent flaws in current Theomatic research, can be even more challenging to grasp than the subject itself.

To orient and equip the reader, as simply as possible, to discern the truth when comparing the content of this site with the publications of the author of Theomatics, Del Washburn, we offer the following insight.


Each letter of the alphabets of the ancient languages of the Bible also represents a number. Biblical languages did not have separate symbols for numbers; numbers were represented with letters according to standard numerical values assigned to them. Therefore any word or phrase in these ancient biblical languages also has a numerical value, which is the sum of the values of its letters.

When an unusually large number of phrases related to a given biblical topic are found to have numerical values that are multiples of a certain number F, we define F to be a Theomatic factor for this topic. Theomatics is the practice of observing and interpreting such numerical patterns in the Bible, on the claim that they are so unlikely they could never have occurred by chance. In order to define a Theomatic factor, then, one must have a basic understanding of the nature of randomness.
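As a concrete illustration, here is a minimal Python sketch of this counting scheme, using the standard historical Greek letter values. The best-known example is the name "Jesus" (Ιησους), whose letters sum to 888, a multiple of 111; this is exactly the kind of divisibility a Theomatic factor describes.

```python
import unicodedata

# Standard Greek isopsephy values (the classical numeral assignments).
GREEK_VALUES = {
    'α': 1, 'β': 2, 'γ': 3, 'δ': 4, 'ε': 5, 'ϛ': 6, 'ζ': 7, 'η': 8, 'θ': 9,
    'ι': 10, 'κ': 20, 'λ': 30, 'μ': 40, 'ν': 50, 'ξ': 60, 'ο': 70, 'π': 80,
    'ϟ': 90, 'ρ': 100, 'σ': 200, 'ς': 200, 'τ': 300, 'υ': 400, 'φ': 500,
    'χ': 600, 'ψ': 700, 'ω': 800,
}

def phrase_value(phrase: str) -> int:
    """Sum the numerical values of the Greek letters in a phrase.
    Accents, breathing marks, and spaces contribute nothing, since NFD
    decomposition splits them off and unknown characters count as 0."""
    return sum(GREEK_VALUES.get(ch, 0)
               for ch in unicodedata.normalize('NFD', phrase.lower()))

value = phrase_value('Ιησους')
print(value, value % 111 == 0)   # 888 True
```

A phrase is then a "success" for factor F whenever `phrase_value(phrase) % F == 0`.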


There are certain kinds of random events, such as the roll of a die or the toss of a coin, that are relatively predictable. Such events have expected outcomes that can be studied and compared to an actual outcome of the event.

For example, if we toss a quarter 10 times and get 10 heads, this is evidently unusual. This is not an outcome we would generally expect from flipping a fair coin 10 times, though it is certainly possible. We can perform this kind of event many, many times, look at all the outcomes and compare them. If we did, we'd find out that on average we get 10 heads in about 20 flips rather than in just 10 flips.

The difference then between a very unusual outcome and a quite normal one for such an event lies not only in the number of heads observed but also in the total number of flips. Statisticians call this value, the total number of flips, the sample size. We call each particular flip an element of the sample. A flip that is a head, since we are counting heads, is called a success, and the number of heads observed is the outcome. When the number of successes is more than expected in a particular outcome, excluding elements of the sample that are not successes makes the outcome seem even more unlikely.

For example, if in our coin tossing experiment, after getting 10 heads in 20 flips, we look back at all of the flips and decide that 5 of the tails are invalid because we didn't flip the coin properly, we have just made our result seem less likely than it did at first: getting at least 10 heads in 20 flips is quite normal (happens about half the time), but getting at least 10 heads in only 15 flips is not so normal (will happen about every 7th time we try it).
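The odds quoted above can be checked exactly with a binomial tail sum; a short Python sketch:

```python
from math import comb

def p_at_least(heads: int, flips: int) -> float:
    """Exact probability of getting at least `heads` heads in `flips`
    tosses of a fair coin (binomial tail sum)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

print(f"P(>=10 heads in 20 flips) = {p_at_least(10, 20):.3f}")  # ~0.588, about half the time
print(f"P(>=10 heads in 15 flips) = {p_at_least(10, 15):.3f}")  # ~0.151, about 1 in 7
```

Trimming the sample from 20 flips to 15 thus moves the very same 10 heads from an entirely ordinary outcome to one that occurs only about every 7th attempt.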


It is in such manipulation of the sample that Washburn generally makes his fatal mistake. Essentially, Washburn makes a fundamental error in most every Theomatic experiment by subjectively considering whether or not to include each element in his sample... after deciding what a success is.

In the context of Theomatics an element is a phrase from the original text of the Bible, and a success is a phrase whose value (the sum of the numerical values historically assigned to the Greek letters in the phrase) is a multiple of some special number he has in mind: the Theomatic factor.

When Washburn presents an instance of Theomatics he identifies a particular biblical subject or theme as a Theomatic topic by noticing that a couple of phrases containing this theme have values divisible by an unusual Theomatic factor. Once this is in place Washburn gathers all the appropriate phrases that relate to his topic. These phrases define his sample. His final step is to test each phrase in the sample to see if it is a success and then report his result.

As he forms a sample, Washburn tries to evaluate whether each phrase "makes sense" or not before including it in the sample. If he thinks a phrase doesn't make sense or is awkward for some reason, he does not include it in his sample and simply ignores it. In this process he chooses to subjectively define what makes a phrase acceptable, deciding the matter on a case-by-case basis... after choosing his Theomatic factor.

This would not actually be a problem if Washburn could do this filtering blindly and honestly, without considering whether a phrase is a success or not as he makes this decision. If he could make his determination completely independently of whether each phrase is a multiple of his factor, there would be no problem with his approach since he would tend to throw out successes and failures in the same way. In our illustration it would be like deciding whether a flip was valid before noticing whether it was heads or tails. If we were truly blind we would tend to throw out as many heads as tails and the average outcome would not be affected by our sample reduction process.
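A small simulation makes the distinction concrete (a sketch; the discard rule and trial counts are chosen purely for illustration): discarding flips blindly leaves the expected fraction of heads untouched, while discarding only known tails inflates it.

```python
import random

def head_fraction(discard_blindly: bool, flips: int = 20, drop: int = 5,
                  trials: int = 20_000) -> float:
    """Average fraction of heads remaining after discarding `drop` flips,
    either blindly (before looking) or by throwing out tails only."""
    total = 0.0
    for _ in range(trials):
        sample = [random.random() < 0.5 for _ in range(flips)]  # True = heads
        if discard_blindly:
            random.shuffle(sample)
            kept = sample[drop:]                         # discard without looking
        else:
            heads = [f for f in sample if f]
            tails = [f for f in sample if not f]
            kept = heads + tails[drop:]                  # discard tails first
        total += sum(kept) / len(kept)
    return total / trials

print(f"blind filtering:   {head_fraction(True):.2f}")   # stays near 0.50
print(f"sighted filtering: {head_fraction(False):.2f}")  # inflated well above 0.50
```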

However, if Washburn knows which phrases are successes as he decides whether to keep them or not, then we have a problem. This is like deciding whether a flip is valid after seeing whether it came up heads or tails. This is not a blind decision unless the criteria for a "bad flip" are very clear. In Washburn's case, his selection criteria are never explicitly stated... what makes for a "good phrase" is never made clear.

Thus, the whole question in any Theomatics experiment boils down to this one important question: "Does Del Washburn evaluate the acceptability of a phrase without regard to its divisibility?" This is, essentially, a matter of the heart, a matter of trust in his integrity, based on the way he chooses to conduct his research.

Although he claims to be honest, and may actually think he is being objective in this filtering, it is evident that Washburn cannot be blindly objective: he does admit that he is very, very good at seeing successful phrases by simply looking at them. He has been toying around with Theomatics for at least a quarter of a century now and has evidently done millions of these calculations. When he is making his decision about whether to include a phrase in his sample he says he can often discern whether it is a success. In effect, he decides whether he flipped the coin properly while staring at the outcome of the flip: he is not blind.

It is certainly not necessary for Washburn to conduct his experiments in this manner -- there is an easy way to avoid this dependence on his integrity: simply include in his sample every phrase that meets a definition of acceptability fixed in advance. This would be very simple for him to do, but he does not do so. He chooses to conduct his sample-reduction procedure in a subjective manner and thus to expose his results to personal bias: this subjectivity is completely unnecessary.

Upon reflection, this is indeed strange behavior for one who values objective scientific methodology so highly. Washburn does indeed value objective precision and scientific methodology: this fact is very clearly evident in his writings. Introducing subjectivity like this is clearly not desirable in Theomatics because it makes his results naturally questionable. In order to avoid this Washburn must either include in his sample every single phrase that relates to his subject or define exactly what constitutes a sensible phrase before choosing the Theomatic factor. He generally does neither.


Washburn does perceive the weakness inherent in his approach and makes a reasonable attempt on one occasion to construct his sample objectively. In a focused study of Theomatics in the story of the Prodigal Son, A Statistical and Probability Analysis: Luke 15:10-32, apparently his very best instance of Theomatic significance (as of late 2001), Washburn carefully defines an acceptable phrase exactly and counts all phrases in his sample that meet this definition. The result of his experiment, consequently, is clearly random... odds of 1 in 3, believe it or not. Even this is generous, since correcting the numerous errors in his study, in both observation and analysis, actually implies odds of 1 in 1... comparable in our example to getting at least 1 head in 20 flips of a coin!

Rather than accepting this result as the outcome of his experiment and ignoring the Theomatic topic, as we expect most any objective observer would, Washburn attempts to justify a reduction in the sample size theoretically rather than subjectively. He claims to know how many phrases could have produced the number of successful phrases he found, even though it actually took nearly double the number of phrases in this particular case. In spite of the evidence in front of him he reduces his sample size by about half... just throwing out failures... to finally arrive at odds he can live with: 1 in 38,318!
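The effect of such a reduction is easy to quantify. The numbers below are hypothetical, chosen only for illustration (they are not Washburn's actual counts): 12 "hits" among 60 phrases, where each phrase has a 1-in-5 chance of hitting by accident, is exactly what chance predicts; report the same 12 hits against only 30 phrases, and the result suddenly looks rare.

```python
from math import comb

def tail_prob(successes: int, n: int, p: float) -> float:
    """P(at least `successes` successes in `n` trials, each with probability `p`)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical counts for illustration only.
full_sample = tail_prob(12, 60, 0.2)   # 12 hits in all 60 phrases: unremarkable
halved      = tail_prob(12, 30, 0.2)   # same 12 hits, failures discarded: looks rare
print(f"honest sample of 60: P = {full_sample:.3f}")
print(f"halved sample of 30: P = {halved:.5f}")
```

The successes are untouched; only failures vanish, yet the apparent probability collapses by two orders of magnitude.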

It is only in this manner... by actually imagining an event that did not occur... that Washburn is able to maintain his sensationalism. This particular study in Luke 15 is the only instance we find in any of his publications where he removes all subjectivity in his experimental design. However, in this case, he simply can't bring himself to objectively evaluate the obviously bland result.

Why does Washburn resort to such subjectivity? After an extensive review of his work, we find that Washburn has never produced anything of statistical significance in an objective manner, without being arbitrarily subjective... ever. One may only conclude that he cannot produce significance in Theomatics apart from this arbitrary, subjective manipulation of the data. This is how Washburn works... evidently how he must work in order to get Theomatics to "work." If he could produce objectively verifiable results in Theomatics, we are sure that he would... and frequently.


Regardless of what Washburn claims about the nature of his work, the following fact remains indisputable. In every instance of Theomatics we have considered (and we have considered all that he has made public, examining his "showcase" examples with particular care), whenever subjectivity is avoided, by carefully defining both the elements of the sample and what constitutes a success and then following these definitions consistently, the outcome is quite random. We are unaware of a single Theomatic instance in which the outcome is not completely random.

What is evident from looking at the facts is this: in employing this subjective procedure, filtering out phrases he thinks should not belong in the sample for one reason or another, after deciding what constitutes "success," Washburn makes his results seem much more unlikely than they actually are by inappropriately reducing the sample size: he seems to prefer throwing out failures rather than successes. We demonstrate this fact repeatedly in our research.


This site contains a great deal of detail, including some very rigorous advanced mathematical statistics. The tables and figures and symbols can be quite intimidating to those unaccustomed to the rigor of technical analysis. However, it is our purpose to enable most anyone to gain a deep appreciation for the conclusions drawn from our research without having to understand all of the detail.

Once comfortable with the main terms and concepts presented in this Summary, we suggest becoming familiar with the material in the first part of the Methodology presentation, and then going through the specific Theomatic examples in order as they are presented on the main page.

If our research has not sufficiently convinced you that Theomatics has not yet been defined and presented in a scientific manner, and that it therefore should not be referenced as a resource in Christian Apologetics in its current state, we would very much appreciate knowing why, and how we might improve our presentation.