@CDobyns,
Interesting question. Here's my take.
Assume you have a series of one-digit integers, each with a potential error of +/- 0.5. The standard deviation of that error is around 0.29; I computed it empirically using a spreadsheet, and it matches the theoretical value for a uniform error, 1/sqrt(12) ≈ 0.289. That means the 95% confidence error bar around the mean of the errors is:
1.96*0.29/sqrt(N)
where N is the sample size and 1.96 is the Z-score corresponding to 95% confidence. If you say that an error bar of 0.05 represents a one-significant-digit improvement in the mean over the precision of the original data (0.5), then you need 129 data points to confidently say that the mean computed from the data is accurate to the tenths place, even though the original data is only accurate to the ones place.
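As a rough sanity check (my own sketch, not part of the original spreadsheet work), you can solve 1.96*sigma/sqrt(N) <= 0.05 for N in a few lines of Python:

```python
import math

# Standard deviation of a rounding error uniformly distributed on [-0.5, 0.5]
sigma = 1 / math.sqrt(12)   # ~0.2887

z = 1.96        # Z-score for 95% confidence
target = 0.05   # desired half-width of the error bar (one extra digit)

# Smallest N with 1.96 * sigma / sqrt(N) <= target
n_required = math.ceil((z * sigma / target) ** 2)
print(n_required)   # 129
```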
So depending on the size of the data set, yes, the mean of the values can be more precise than the individual values used to compute it.
Edit: To further test this concept, I used a spreadsheet to generate 1000 sample means of rounding errors at various sample sizes. At a sample size of 50, 111 of the 1000 means fell outside the 0.05 tolerance. At a sample size of 100, that dropped to 45. At 200 it was 4 out of 1000. At N values of 500 and 1000, no means outside the desired accuracy were found.
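If anyone wants to reproduce that experiment without a spreadsheet, here's a rough Python equivalent (a minimal sketch assuming uniform rounding errors on [-0.5, 0.5]; the exact counts will vary from run to run):

```python
import random

def count_misses(sample_size, trials=1000, tolerance=0.05):
    """Count how many of `trials` sample means of uniform rounding
    errors on [-0.5, 0.5] fall outside +/- `tolerance`."""
    misses = 0
    for _ in range(trials):
        errors = [random.uniform(-0.5, 0.5) for _ in range(sample_size)]
        mean_error = sum(errors) / sample_size
        if abs(mean_error) > tolerance:
            misses += 1
    return misses

for n in (50, 100, 200, 500, 1000):
    print(n, count_misses(n))
```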