Errors, an explanation

In middle school, we tend to ‘like’ maths because the answer is exact, because we can ‘get it right’. The steady trend to use calculators instead of doing mental arithmetic is balanced by a similar move towards estimation, so that you have, at least in theory, an idea what sort of answer you should be getting. That means you ‘get an answer’ and are expected to test it against your estimate, to see if your answer is reasonable. This seems to me a sensible approach to checking.

Real-life mathematics demands that there be a model to follow, no matter how simple. Building a model means making a set of assumptions; once we have had a go at running numbers through the model, we can compare our result with the real world in some way, and this tells us whether the model is good or not. This is merely a more formalised route to the checking described above.

Those assumptions frequently include some numerical values and we should do some thinking about how our answer varies with changes of input. This is sometimes called sensitivity analysis.
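
By way of illustration, here is a minimal sketch of one-at-a-time sensitivity analysis in Python; the model (cost of surfacing a rectangular area) and the 1% nudge are my own choices, not anything taken from the text above.

```python
# A minimal sketch of one-at-a-time sensitivity analysis (illustrative only).
# The model and the 1% perturbation are assumptions made for this example.

def model(length_m, width_m, cost_per_m2):
    """A deliberately simple model: cost of surfacing a rectangular area."""
    return length_m * width_m * cost_per_m2

baseline = {"length_m": 5.0, "width_m": 8.0, "cost_per_m2": 12.50}
base_out = model(**baseline)

for name, value in baseline.items():
    nudged = dict(baseline, **{name: value * 1.01})   # raise one input by 1%
    change = (model(**nudged) - base_out) / base_out
    print(f"{name:>12}: +1% in this input gives {change:+.2%} in the output")
```

Here every input feeds the output multiplicatively, so each 1% nudge moves the answer by about 1%; a less symmetrical model would show which inputs matter most.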

Similarly, when we’re doing experiments, notably in Physics, we are aware that the measurements we take can have error and we are taught to process our errors. This results in what many of my pupils called ‘grey maths’, because you no longer have an answer so much as a spread of values, a field of probability, in which the answer lies. For some students this is a step too far because it pulls away the foundation of certainty upon which all their belief in maths is based. No longer is there a ‘right’ answer. These same students hated it when I demanded that they take their exact answer and apply it back to the problem, so that the ‘answer’ was couched in the same terms as the question. My observation is that for some this was almost as hard as the problem of formulating the question into some approachable maths.


Theory:

Let ∂ be an error in x, so that a measurement is (x ± ∂). If that is at some point raised to the power n then the binomial expansion gives (x ± ∂)^n = x^n ± n∂x^(n−1) + [n(n−1)/2]∂²x^(n−2) ± ..., but if ∂ is small compared with x then ∂² is very small, so we can ignore all terms in higher powers of ∂. The relative error in this component ((close − right) / right) is then ((x ± ∂)^n − x^n) / x^n ≈ ±n∂/x, which is n times the relative error in x.
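
A quick numerical check of this (my own sketch, with an arbitrary x and n) shows the approximation improving as ∂ shrinks:

```python
# Numerical check that the relative error in x**n is about n*delta/x,
# i.e. n times the relative error in x, when delta is small. Values are arbitrary.

x, n = 2.0, 5
for delta in (0.01, 0.001):
    exact = ((x + delta) ** n - x ** n) / x ** n
    approx = n * delta / x
    print(f"delta={delta}: exact {exact:.6f}, n*delta/x = {approx:.6f}")
```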

Similarly, suppose two items are measured as a(1±α) and b(1±β), where a and b are the exact values and α and β are small relative errors. Then, ignoring any second-order terms such as αβ, the following relative errors arise when a and b are combined:

(you should check the algebra for yourself, of course)


( a(1±α) × b(1±β) − ab ) / ab ≈ ±(α + β)      i.e. simple addition of the errors results from multiplication

( a(1±α) + b(1±β) − (a + b) ) / (a + b) = ±(aα + bβ) / (a + b)      i.e. complex (weighted) addition of the errors results from addition

I say division follows from multiplication, giving ±(α − β), and you may wish to look at indices, (a(1±α))^(b(1±β)). A challenge.
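
The rules above are easy to check numerically. The sketch below is mine; the values of a, b, α and β are arbitrary, and both relative errors are applied in the same direction, as the ± convention here assumes.

```python
# Numerical check of the product, sum and quotient rules, treating alpha and beta
# as small relative errors applied in the same direction. Values are arbitrary.

a, b = 5.0, 8.0
alpha, beta = 0.004, 0.0025              # relative errors in a and b

A, B = a * (1 + alpha), b * (1 + beta)   # both measurements err high

def rel_err(measured, true):
    return (measured - true) / true

print("product :", rel_err(A * B, a * b), "vs alpha + beta =", alpha + beta)
print("sum     :", rel_err(A + B, a + b),
      "vs (a*alpha + b*beta)/(a+b) =", (a * alpha + b * beta) / (a + b))
print("quotient:", rel_err(A / B, a / b), "vs alpha - beta =", alpha - beta)
```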


Be aware that lower-school exercises are assumed well understood here. A measurement of, say, 1.5 is seen and understood to be given to 2 significant figures and therefore lies in the interval from 1.45 up to (but not including) 1.55. Physical measurements may well be taken with a recognised larger error, such as, say, 2150 mm being correct to within three centimetres, so in the interval (2120, 2180) mm. You might argue that this is a poor choice of units or representation; I do it to exemplify typical issues. There is a clear need for conventions when declaring error.
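
If it helps, here is a small Python illustration (mine; the helper name is invented) of the interval implied by a stated figure:

```python
# The interval implied by the last quoted digit of a measurement (illustrative).
from decimal import Decimal

def implied_interval(value_str):
    """Half a unit of the last quoted digit either side, e.g. '1.5' -> (1.45, 1.55)."""
    d = Decimal(value_str)
    half = Decimal(1).scaleb(d.as_tuple().exponent) / 2
    return float(d - half), float(d + half)

print(implied_interval("1.5"))        # (1.45, 1.55): given to 2 significant figures
print((2150 - 30, 2150 + 30))         # 2150 mm correct to within 3 cm: (2120, 2180)
```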


Examples:

1. Measure an area as 5 × 8 metres. Assume each length is correct to within 2 cm. Find the largest and smallest possible values of the area and calculate the percentage error.

5.02 × 8.02 = 40.2604; 4.98 × 7.98 = 39.7404, so the maximum % error is 0.2604/40 × 100% = 0.651%. Compare this with 0.4% in 5 and 0.25% in 8, adding to 0.65%.
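
Checked in Python (my own sketch of the arithmetic above):

```python
# Example 1: area of 5 m x 8 m, each length in error by up to 0.02 m.

length, width, err = 5.0, 8.0, 0.02

largest = (length + err) * (width + err)     # 5.02 * 8.02 = 40.2604
smallest = (length - err) * (width - err)    # 4.98 * 7.98 = 39.7404
nominal = length * width                     # 40

print(largest, smallest)
print(f"maximum % error: {(largest - nominal) / nominal:.3%}")        # about 0.651%
print(f"sum of relative errors: {err / length + err / width:.2%}")    # 0.40% + 0.25% = 0.65%
```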

2. A power rating is 8 W ± 2% and the accompanying current is 5 A ± 4%; what is the voltage? W = IV, so V = W/I. Doing this the crude way, suppose both readings are low by their stated errors: the calculated voltage is (8 × 0.98) / (5 × 0.96) = 1.6333, not the expected 1.6, so the percentage error is 2.08%. The approximation is to subtract the % errors, 4% − 2% = 2%. (Note that the genuine worst case, the biggest power divided by the smallest current, gives 8.16/4.8 = 1.7, a 6.25% error, roughly the sum of the two errors; the subtraction rule applies when both errors act in the same direction.)
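
Again, a quick numerical check (my own sketch) of both the same-direction case and the genuine worst case:

```python
# Example 2: V = W / I with W = 8 W (2% error) and I = 5 A (4% error).

W, I = 8.0, 5.0
w_err, i_err = 0.02, 0.04      # relative errors

nominal = W / I                                      # 1.6 V
both_low = (W * (1 - w_err)) / (I * (1 - i_err))     # 7.84 / 4.8
worst    = (W * (1 + w_err)) / (I * (1 - i_err))     # 8.16 / 4.8

print(f"both read low: {both_low:.4f} V, {(both_low - nominal) / nominal:+.2%}")  # about +2.08%
print(f"subtracting % errors: {i_err - w_err:+.2%}")                              # +2.00%
print(f"worst case   : {worst:.4f} V, {(worst - nominal) / nominal:+.2%}")        # +6.25%
```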


Exercises to follow.....



1.    You measure the sides of a triangle as 3, 4, 5 cm but you recognise an error of one millimetre in each; what then is the error in the angle assumed to be 90°?



Theoretical stuff:

1    Make up some theory for this problem: short sides a and b, with errors α and β; approximate the error in the assumed right angle between them.

2   Prove the declaration made in example 2 above. Call the variables W and A, with relative errors ω and α respectively. Recognise that (A(1+α))^(−1) = A^(−1)(1 − α + α² − ...) and thus show that the relative error in W/A is indeed (ω − α). This is quite hard A-level algebra.



Afterthought

All of the above is based on simple arithmetic with just a little A-level content for the expansion of series.

The statistical version of error recognises that there is a distribution of some description, from which a measurement is taken (a ‘sample’, in some sense). Such a sample is then assumed to give information about the larger, less well understood, population. This may include ‘knowing’ the mean, or the variance, on the basis of long experience. The statistical techniques for drawing conclusions about the larger population are covered elsewhere and not by me; some of that is inside the content of Further Maths A-levels, though it varies with the board chosen.

Therein lies a different problem for schools: do you pick the course most likely to result in a good grade, or the one which gives the greatest learning while the teaching support is available? Sadly, the economic push is for whatever is easiest: not best for the pupils’ education, but best for the school. Within that is an assumption that a significant fraction of the people making such choices go no further with the subject (or, if they do, that it doesn’t matter how little they know). That in turn assumes that the educational environment of the next institution is at least as good as the current one. Or that such thinking is irrelevant, as in ‘not our problem’.

I am concerned (both worried and interested) about statistical error when applied to social medicine: a recent example follows from the suggestion that there is a lot of undeclared physical abuse resulting in poor mental health. That presumes that the crime figures (for <topic>) are but a sample of a larger population. For example, there is a published claim from late 2015 that the recorded UK crime figures for some sort of sexual abuse are perhaps an eighth of the actual incidence of reportable crime (I’m giving my impression of what I thought was being said). I do not understand how this extension can be done, and I have directly asked those who I think may know, so that I may explain it to others. I am also interested in how one might test such claims, maybe by exhaustive research within a sub-population.


DJS 20151209

© David Scoins 2017