Paternity test statistics don't show the rate of paternity fraud

The AABB (American Association of Blood Banks [1]) annually publishes consolidated statistics from many accredited paternity testing services. These may reveal, for example, that 28% of paternity tests were "negative", meaning that the man tested is not the biological father of the child. (28% is a typical figure.)

This does not mean that 28% of men in the population are not the biological fathers of the children they believe are theirs. It does not even mean this in sub-populations "at risk", such as broken families. And it certainly does not mean that there is a 28% rate of paternity fraud [2] in any given population or sub-population! In fact, it means very little beyond what it says: these are simply the statistics for a given set of paternity tests, and they say little even about the rate of misattributed paternity [3] in the sub-population that has been tested.
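As a minimal illustration (a Python sketch with entirely invented figures, not real data), the calculation below shows why an exclusion rate among tested men says almost nothing about the wider population: the tested group is self-selected, so the two rates can differ hugely.

    # Invented figures, for illustration only: the exclusion rate among men
    # who actually take a test need not resemble the non-paternity rate in
    # the wider population, because the tested group is self-selected.

    def overall_rate(tested_fraction, rate_among_tested, rate_among_untested):
        """Weighted non-paternity rate across tested and untested groups."""
        return (tested_fraction * rate_among_tested
                + (1 - tested_fraction) * rate_among_untested)

    # Suppose (purely hypothetically) that only 2% of fathers are ever tested,
    # that 28% of those tests are negative, and that the untested 98% have a
    # 2% non-paternity rate:
    print(overall_rate(0.02, 0.28, 0.02))   # about 0.025, i.e. ~2.5%, not 28%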

Why don't they show this?

The short answer is: "because they were not designed for this purpose, and statistics typically have to be designed for a purpose in order to be reliable for that purpose".

A longer answer is given by the AABB themselves, in their 2003 annual report [4]:

"It is important to understand the significance of the exclusion rate, especially since the statistic has been misinterpreted in the past. For example, several organizations have used the exclusion rate to suggest improperly that 30% of men are misled into believing they are biological fathers of children. This suggestion is incorrect. The exclusion rate includes a number of factors...."

They then list various reasons, and these are expanded in "Examples where statistics overestimate the rate of non-paternity" below.

Examples where statistics overestimate the rate of non-paternity

AABB: "One is that the men are alleged to be fathers. This is important as a woman may allege several men as possible fathers were misled into believing they were fathers and then later discovered they are not. The testing merely sorts out which man is the biological father so presumably that man can assume his parental role".

In fact, this case applies even if the men were not misled. Take a simple example, where a woman honestly says to two men: "one of you is the biological father, but I don't know which". In this simple example, there is no attempt to mislead, and there is no fraud.

  • Suppose they decide to have a single paternity test involving one of the men, and if it is negative they will assume the other man is the biological father. This single paternity test has a 50% exclusion rate. (If there were a million cases like this, about half would be negative, as long as there were no clues about which man to test first.)
  • This same 50% rate applies if both men have paternity tests at the same time. One would be positive, the other negative.
  • Suppose, instead, they still want to ensure that they have a positive test, but they are more cost-conscious. One man has the test, on the logic that there is no need for two tests if the first is positive; the other man is tested only if the first test is negative. In this case, there is a 33% exclusion rate. (If there were a million cases like this, about half would give a positive first test, needing just one test, and about half would give a negative first test followed by a positive second test, needing two tests. That is roughly 1.5 million tests in total, of which roughly 0.5 million, one third, are negative, as long as there were no clues about which man to test first.) A small simulation sketch after this list illustrates both rates.
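The following is a minimal simulation sketch (Python, with a made-up number of cases) of the first and third strategies in the list above, confirming the 50% and 33% exclusion rates; it is purely illustrative and not part of any AABB methodology.

    import random

    def exclusion_rates(cases=1_000_000):
        """Two possible fathers, no clue which one to test first.
        Strategy A: test only one man (about 50% of tests negative).
        Strategy B: test one man, and test the other only if the first
        test is negative (about a third of all tests negative)."""
        neg_a = tests_a = 0
        neg_b = tests_b = 0
        for _ in range(cases):
            first_is_father = random.random() < 0.5
            # Strategy A: a single test.
            tests_a += 1
            if not first_is_father:
                neg_a += 1
            # Strategy B: sequential testing.
            tests_b += 1
            if not first_is_father:
                neg_b += 1      # first test negative...
                tests_b += 1    # ...so the second man is tested (positive).
        return neg_a / tests_a, neg_b / tests_b

    print(exclusion_rates())    # roughly (0.50, 0.33)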

AABB: "Another factor is that sometimes men are accused and tested because a man who is not excluded is alleging that the mother had multiple sexual partners as part of his defense".

This appears to be the somewhat unusual case where, because a paternity test can never say with 100% certainty "you are the biological father", a man continues to claim that someone else is the father in spite of considerable evidence that he himself is. The second man's test will almost certainly be negative, so there will be a 50% exclusion rate in this pair of tests, yet probably no one has been misled. (Except, perhaps, the court, by the first man!)

If, by a very small chance, the second test is also positive, then this is an example of two positive tests where in fact one of the men was not the biological father! The statistics would then underestimate the rate of non-paternity, and the case actually belongs in the next section. In fact, there are thousands of men in the UK's CSA (Child Support Agency) system who have identical twins, and so the man in the scheme and his twin would both test the same way, for example both positive.

AABB: "Sometimes a man is required to be tested because of a legal presumption, that is, when the mother properly names the correct father but because she is (was) married to someone else, there is a legal presumption that the husband is the father. The husband is then tested to rebut the legal presumption, not because he was misled into believing he is the biological father of the child".

This could apply to the CSA in the UK too. The paternity test is taken because the courts or the CSA have to be convinced of what everyone involved already knows.

Examples where statistics underestimate the rate of non-paternity

These are cases where a man doesn't have a paternity test, even though it would be negative if he did. So the number of negative tests is unduly low in these examples; a small worked sketch follows the list below.

  • A man erroneously thinks he is the biological father, perhaps because this is a genuine case of paternity fraud, and therefore chooses not to have a paternity test.
  • A man is erroneously claimed to be the biological father, perhaps because this is a genuine case of paternity fraud, but isn't given the chance of having a paternity test. Laws that restrict the use of paternity tests can cause paternity testing statistics to underestimate the rates of non-paternity, in some cases.
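Here is a minimal worked sketch (Python, with invented numbers) of the effect described in this list: non-fathers who never get tested produce no negative result, so the observed exclusion rate understates the true non-paternity rate for the group.

    # Invented numbers, for illustration only: deceived or legally blocked
    # non-fathers never generate a negative test result, so the observed
    # exclusion rate is lower than the true non-paternity rate.

    def observed_exclusion_rate(fathers_tested, non_fathers, non_fathers_untested):
        """Exclusion rate among the tests actually carried out."""
        negatives = non_fathers - non_fathers_untested
        tests_done = fathers_tested + negatives
        return negatives / tests_done

    # 900 biological fathers take tests; 100 men are not the biological
    # father, but 60 of those 100 never test (deceived, or not allowed to):
    print(observed_exclusion_rate(900, 100, 60))   # ~0.043, versus a true 0.10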

Conclusions and a better statistic

As stated earlier, these statistics were not designed for this purpose, and statistics typically have to be designed for a purpose in order to be reliable for that purpose. There are factors which might make numbers such as "28%" higher than the real non-paternity rate, and some which might make them lower. No one knows the real non-paternity rate even for this sub-population. And, as shown above, sometimes no one has been misled, and sometimes there is no fraud. These statistics simply provide little or no basis for predicting the rate of paternity fraud.

There is a statistic which doesn't appear separately in the AABB annual report, but may be better at predicting the rate of misattributed paternity for some sub-populations. This is the statistic for "motherless", also known as "peace of mind", paternity tests [5]. The reason it may be a better statistic to use is that, being unofficial and typically secret, it doesn't suffer from any of the above complications imposed by the legal system, such as "presumption of parentage", "allegations in defence", "not allowed to have a paternity test", and so on.

It is quite possible, and should certainly be investigated, that in intact families where the man is suspicious enough to have a motherless paternity test, the rate of misattributed paternity really is the 10% or so revealed by these results, at least in Australia. It would be useful for this statistic to be separated out and made visible in annual reports from paternity testing services in all countries where this is an issue.
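As a sketch of the kind of breakdown being suggested, the following (Python, with invented category names and counts) shows how exclusion rates could be reported per category of test rather than as one aggregate figure.

    from collections import defaultdict

    def exclusion_rates_by_category(results):
        """results: iterable of (category, was_negative) pairs.
        Returns the exclusion rate for each category separately."""
        totals = defaultdict(int)
        negatives = defaultdict(int)
        for category, was_negative in results:
            totals[category] += 1
            if was_negative:
                negatives[category] += 1
        return {cat: negatives[cat] / totals[cat] for cat in totals}

    # Invented sample: 1000 court-ordered tests (30% negative) and
    # 100 motherless "peace of mind" tests (10% negative).
    sample = ([("court-ordered", True)] * 300 + [("court-ordered", False)] * 700
              + [("motherless", True)] * 10 + [("motherless", False)] * 90)
    print(exclusion_rates_by_category(sample))
    # {'court-ordered': 0.3, 'motherless': 0.1}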

References

[1] American Association of Blood Banks. This also accredits some European, including UK, paternity testing services. It publishes reports annually about paternity testing statistics for accredited testing services, although some services don't respond.

[2] Here, "paternity fraud" refers to cases where the man is deceived in order to obtain money from him, for example via the child support system. Perhaps it could usefully include cases where the man suspects he is not the biological father, but is prevented from finding the truth.

[3] Here, "misattributed paternity" refers to the non-judgemental identification of children who have a biological father other than the man who thinks he is the biological father. There may be no attempt at deception. There may be no intent to obtain money. Or there may be both of these. So it is simply the inclusive term, used where those details are not known, or not relevant in the context.

[4] "ANNUAL REPORT SUMMARY FOR TESTING IN 2003"
Prepared by the Parentage Testing Standards Program Unit October 2004
One section is called: "Misconceptions in parentage testing".

[5] Dr Ainsley Newson, submission G283 to the Australian Law Reform Commission and Australian Health Ethics Committee (ALRC/AHEC) Joint Enquiry, 2002-12-23.
""... a large provider of parentage testing services reported that its rate of non-paternity in motherless tests was 10% ...; and a smaller accredited laboratory reported that its rate of non-paternity for motherless tests was 11% ....."

Page last updated: 16 March, 2006 © Copyright Barry Pearson 2005