Numeric Decimal Precision from Whole Numbers

 
 
CDobyns
 
Reply Sun 10 Jan, 2010 09:16 pm
We've all heard the distinction between accuracy and precision: accuracy is how close a measured value is to the true value, and precision is how close a measured value is to other values obtained by the same methodology or technique.

I was wondering if anyone can offer some discussion points (in support or in rebuttal) of the idea that it's theoretically impossible to derive a greater level of precision from numeric values in the aggregate (i.e. measures of central tendency) than the decimal precision of the values used to obtain that summary calculation. So, can a series of whole-number values (1, 3, 5, 6) be summarized into a composite value (the average) with any greater level of decimal precision than the original values carry? Is that average 4, or is it 3.75?
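To make the arithmetic concrete, here's a minimal Python sketch of the example above (nothing more than the average calculation):

    # The average of whole numbers can carry more decimal places than the inputs.
    values = [1, 3, 5, 6]
    average = sum(values) / len(values)
    print(average)  # 3.75, even though every input is a whole number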

I've got an answer that I've provided in response to this question, but what's the feedback on this from others?


 
fresco
 
Reply Mon 11 Jan, 2010 01:47 am
@CDobyns,
It seems to me that "precision" and "accuracy" are psychological constructs. At the end of the day, they are about applied mathematics and "what works" in particular situations.
 
oolongteasup
 
Reply Mon 11 Jan, 2010 04:29 am
@CDobyns,
http://en.wikipedia.org/wiki/Accuracy

the link is pretty juicy

i think whole number values can be summarised

the average is the best estimate of an accurate value (the mean), whereas the precision shows the variability of the test results, enabling the probability generating function to be estimated (mean, variance, skewness, kurtosis, ...)

but if the answer is a digit, it's a digit

if i toss a coin twice and it comes down heads once and tails once, i ascribe a 0.5 probability to either outcome
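a rough python sketch of the moments idea (the sample values here are made up, purely hypothetical):

    import statistics

    # hypothetical integer-valued test results
    results = [3, 5, 4, 4, 6, 3, 5]
    print(statistics.mean(results))   # best estimate of the accurate value
    print(statistics.stdev(results))  # variability = the precision of the method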
CDobyns
 
Reply Mon 11 Jan, 2010 01:59 pm
@oolongteasup,
Okay, some good preliminary feedback on this. I'm not sure I'm seeing much yet in the way of either support or rebuttal on this issue, exactly. And I'm not sure I would characterize precision and accuracy as psychological constructs.

Accuracy and precision are two separate concepts. The classic illustration distinguishing the two is to consider a target or bullseye. Arrows surrounding a bullseye indicate a high degree of accuracy; arrows very near to each other (possibly nowhere near the bullseye) indicate a high degree of precision. To be accurate, an arrow must be near the bullseye; to be precise, successive arrows must be near each other. Consistently hitting the very center of the bullseye indicates both accuracy and precision.

I'm not going to reveal my answer that I've provided just yet. I'd like to see some more external input first. Any takers?
fresco
 
Reply Mon 11 Jan, 2010 05:09 pm
@CDobyns,
I understand the concepts of precision and accuracy, but the latter certainly raises the question of the meaning of the "true" value. In experimental terms (and that is the domain of application according to Wiki) this will always involve the question of functionality rather than objectivity. The "bullseye" is in essence the "psychological construct".
 
engineer
 
Reply Tue 12 Jan, 2010 09:00 am
@CDobyns,
Interesting question. Here's my take.

Assume you have a series of one-digit integers with a potential error of +/- 0.5 for each one. The standard deviation of a single such error is around 0.29. I computed that empirically using a spreadsheet; it matches the theoretical value for a uniform error on [-0.5, 0.5], 1/sqrt(12) ≈ 0.2887. That means that the error bar around the mean of the errors at 95% confidence is:

1.96*0.29/sqrt(N)

where N is the sample size and 1.96 is the Z score that corresponds to 95% confidence. If you say that an error bar of 0.05 represents a one-significant-digit improvement in the calculation of the mean over the precision of the original data (0.5), then you need 129 data points to confidently say that the mean computed from the average of the data points is accurate to the tenths place, even though the original data is only accurate to the ones place.
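As a quick cross-check on that 129 figure, here's a sketch in Python (I used a spreadsheet originally):

    import math

    sigma = 1 / math.sqrt(12)      # SD of a uniform rounding error on [-0.5, 0.5], ~0.2887
    z = 1.96                       # Z score for 95% confidence
    target = 0.05                  # error bar for one extra significant digit
    n = (z * sigma / target) ** 2  # solve z*sigma/sqrt(N) = target for N
    print(math.ceil(n))            # 129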

So depending on the size of the data set, yes, you could derive a greater level of precision from the values in aggregate than the level of precision of the values used in the calculation.

Edit: To further test this concept, I used a spreadsheet to generate 1000 means of integer errors at various sample sizes. At a sample size of 50, 111 of 1000 failed to meet the 0.05 accuracy. At a sample size of 100, that dropped to 45 samples. At 200 it was 4 out of 1000. At N values of 500 and 1000, no means outside the desired accuracy were found.
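Here's that experiment as a Python sketch rather than a spreadsheet (the seed and trial count are arbitrary choices, so exact failure counts will vary from run to run):

    import random

    random.seed(1)  # arbitrary, just for reproducibility
    trials = 1000

    for n in (50, 100, 200, 500, 1000):
        # Count how often the mean of n uniform rounding errors
        # misses the true value (zero) by more than 0.05.
        failures = sum(
            1
            for _ in range(trials)
            if abs(sum(random.uniform(-0.5, 0.5) for _ in range(n)) / n) > 0.05
        )
        print(n, failures)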
 
oolongteasup
 
Reply Wed 13 Jan, 2010 11:48 pm
@CDobyns,
it's a mathematical axiom that the number of significant figures in measurements is the maximum accuracy that can be achieved

statistical analysis of data reveals the likelihood of events
 
 
