Paternity test statistics don't show the rate of paternity fraud
The AABB (American Association of Blood Banks [1]) annually publishes consolidated statistics from many accredited paternity testing services. These may reveal, for example, that 28% of paternity tests were "negative", meaning that the man tested was not the biological father of the child. (28% is a typical figure.) This does not mean that 28% of men in the population are not the biological fathers of the children they believe are theirs. It does not even mean this in sub-populations "at risk", such as broken families. And it certainly does not mean that there is a 28% rate of paternity fraud [2] in any given population or sub-population! In fact, it means very little other than what it says: these are simply the statistics for a given set of paternity tests, and they say little even about the rate of misattributed paternity [3] for the sub-population who have been tested.
Why don't they show this?
The short answer is: "because they were not designed for this purpose, and statistics typically have to be designed for a purpose in order to be reliable for that purpose". A longer answer is given by the AABB themselves, in their 2003 annual report [4].
In that report, the AABB list various reasons, and these are expanded in "Examples where statistics overestimate the rate of non-paternity" below.
Examples where statistics overestimate the rate of non-paternity
One set of cases arises when more than one man is tested as a possible father of the same child: at most one of them can be the biological father, so each additional man tested adds a "negative" result. In fact, this applies even if the men were not misled. Take a simple example, where a woman honestly says to two men: "one of you is the biological father, but I don't know which". If both are tested, one of the two tests is bound to be negative, yet there is no attempt to mislead, and there is no fraud.
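A small worked calculation may make this concrete. The numbers below are purely hypothetical, chosen for illustration only (they are not AABB figures); they show how cases with two candidate men can produce an exclusion rate among tests close to the 28% mentioned above, even when no one is defrauded:

```python
# Illustrative only: hypothetical numbers, not AABB figures.
# Scenario: in every disputed case the mother honestly names the
# possible fathers, so there is no deception at all.

one_man_cases = 60   # a single man is named and tested, and he is the father
two_man_cases = 40   # two men are named and both are tested;
                     # exactly one is the father, so the other is excluded

tests = one_man_cases + 2 * two_man_cases   # 140 tests in total
exclusions = two_man_cases                  # 40 "negative" results

print(f"Negative tests as a share of all tests: {exclusions / tests:.1%}")  # about 28.6%
print("Rate of paternity fraud in this scenario: 0%")
```

The statistic counts tests, not fathers or children, so the share of negative tests can be high even when the rate of paternity fraud is zero.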
Another, somewhat unusual, case arises because a paternity test can never say with 100% certainty "you are the biological father": a man may continue to claim that someone else is the father in spite of considerable evidence that he himself is. The second man's test will almost certainly be negative, so there will be a 50% exclusion rate across this pair of tests, yet probably no one has been misled (except, perhaps, the court, by the first man!). If, by a very small chance, the second test is also positive, then here is an example of two positive tests where in fact one of the men was not the biological father; such results would underestimate the rate of non-paternity, and really belong in the next section. In fact, there are thousands of men in the UK's CSA system who have identical twins, and in those cases the man in the scheme and his twin would both test the same way, for example both positive.
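The reason a test can never report 100% is that a non-excluded man is given a probability rather than a certainty. The sketch below assumes the common convention of converting a combined paternity index (CPI) into a probability of paternity using a neutral prior of 0.5; individual laboratories may report results differently, so treat this as an illustration rather than a description of any particular service:

```python
def probability_of_paternity(cpi: float, prior: float = 0.5) -> float:
    """Posterior probability of paternity from a combined paternity index (CPI).

    With the conventional neutral prior of 0.5 this reduces to CPI / (CPI + 1),
    so the reported figure is always strictly less than 100%.
    """
    return (cpi * prior) / (cpi * prior + (1 - prior))

for cpi in (100, 10_000, 1_000_000):
    print(f"CPI = {cpi:>9,}: probability of paternity = {probability_of_paternity(cpi):.4%}")
```

However large the CPI, the figure always falls short of 100%, which is what allows a determined man to keep disputing the result.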
A further case arises when everyone involved already knows that the man is not the biological father, but a paternity test is still taken, for example because the courts or the CSA in the UK have to be convinced of what all the people involved already know. Such tests add exclusions to the statistics without anyone having been deceived.
Examples where statistics underestimate the rate of non-paternity
These are cases where a man doesn't have a paternity test, even though it would be negative if he did. So the number of negative tests recorded is unduly low in these examples.
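Again, a small worked calculation with purely hypothetical numbers (not AABB figures) shows the direction of this bias: if some men who would test negative never take a test, the exclusion rate among the tests actually performed understates the non-paternity rate in the group as a whole.

```python
# Illustrative only: hypothetical numbers, not AABB figures.
# A sub-population of 100 men who each have some reason to doubt paternity.

tested = 70             # men who actually take a paternity test
tested_negative = 20    # of those, the test excludes them

untested = 30           # men who never take a test (refuse, or are prevented)
would_be_negative = 15  # of those, a test would have excluded them

observed_rate = tested_negative / tested
true_rate = (tested_negative + would_be_negative) / (tested + untested)

print(f"Exclusion rate among tests actually taken: {observed_rate:.1%}")  # about 28.6%
print(f"Non-paternity rate in the whole group:     {true_rate:.1%}")      # 35.0%
```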
Conclusions and a better statistic
As stated earlier, these statistics were not designed for this purpose, and statistics typically have to be designed for a purpose in order to be reliable for that purpose. There are factors which might make numbers such as "28%" higher than the real non-paternity rate, and some which might make them too low. No one knows the real non-paternity rate even for this tested sub-population. And, as shown above, sometimes no one has been misled, and sometimes there is no fraud. These statistics simply provide little or no prediction of the rate of paternity fraud.
There is a statistic which doesn't appear separately in the AABB annual report, but which may be better at predicting the rate of misattributed paternity for some sub-populations. This is the statistic for "motherless", also known as "peace of mind", paternity tests [5]. The reason it may be a better statistic to use is that, being unofficial and typically secret, it doesn't suffer from the complications imposed by the legal system described above, such as "presumption of parentage", "allegations in defence", and "not allowed to have a paternity test". It is quite possible, and should certainly be investigated, that in intact families where the man is suspicious enough to have a motherless paternity test, the rate of misattributed paternity really is the 10% or so revealed by these results, at least in Australia. It would be useful for this statistic to be separated out and made visible in the annual reports from paternity testing services in all countries where this is an issue.
References
[1] American Association of Blood Banks. This also accredits some European, including UK, paternity testing services. It publishes annual reports of paternity testing statistics for its accredited testing services, although some services don't respond.
[2] Here, "paternity fraud" refers to cases where the man is deceived in order to obtain money from him, for example via the child support system. Perhaps it could usefully include cases where the man suspects he is not the biological father but is prevented from finding out the truth.
[3] Here, "misattributed paternity" refers to the non-judgemental identification of children who have a biological father other than the man who believes he is the biological father. There may be no attempt at deception and no intent to obtain money, or there may be both. It is simply the inclusive term, used where those details are not known, or not relevant in the context.
[4] AABB, 2003 annual report.
[5] Dr Ainsley Newson, submission G283 to the Australian Law Reform Commission and Australian Health Ethics Committee (ALRC/AHEC) Joint Enquiry, 2002-12-23.