Thomas
I also used Bruce Ellis's (of Bell Labs) string-of-bits-from-noise function: it samples whether the sum of keystrokes occurring in a 10-second window across a university department is even or odd to decide if a bit is zero or one, and builds large numbers a bit at a time. I generated a table of big numbers that way and analysed it - the table concurs with this theorem. So there goes your POV.
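Bruce Ellis's actual program isn't reproduced here, but the bit-at-a-time idea can be sketched as below. `noisy_bit` is a hypothetical stand-in for the keystroke parity source (a simulated count replaces the real departmental keystrokes), so treat this as an illustration of the technique, not his implementation:

```python
import random
from collections import Counter

def noisy_bit():
    # Hypothetical stand-in for the real noise source: was the number of
    # keystrokes counted in a 10-second window even or odd?
    return random.randint(0, 100_000) % 2

def big_number(max_bits=64):
    # Build a number one noise bit at a time; a random bit length spreads
    # the results across many orders of magnitude.
    n = 0
    for _ in range(random.randint(8, max_bits)):
        n = (n << 1) | noisy_bit()
    return n

# Generate a table of big numbers and tally their leading decimal digits.
table = [big_number() for _ in range(20_000)]
leading = Counter(str(n)[0] for n in table if n > 0)
print(leading.most_common())
```

Because the bit lengths vary, the table spans many orders of magnitude, which is exactly the situation where leading digit 1 comes out ahead of leading digit 9.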
Further, holding the view that this LAW is unproven means you doubt what the University of Georgia publishes about its maths professors. You should read the answer at its source if you want to see a sterling proof. Do you contend that the proof Hill came up with is incomplete or inaccurate, or is there no rational reason for your view?
Also, your method is actually the first process that was attempted - and it was then shown to be false! Why, when it looks so good? Well, as you change the group size you get different answers for each size as you let N approach infinity. If your answer were uniquely true, then generalising your formula to run over 1 -> a^N(n + 1) for all values of a should still converge to that same answer.
But for different values of a these summations converge to different results, showing by contradiction that you can't test it this way.
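A quick sketch of why that test fails, using leading digit 1: the fraction of numbers in 1..N that start with a 1 depends on where N sits within a decade, so different families of cut-offs converge to different limits.

```python
def leading_one_fraction(n):
    # Fraction of the integers 1..n whose first decimal digit is 1.
    return sum(1 for k in range(1, n + 1) if str(k)[0] == "1") / n

# Cut-offs just below 2*10^k versus just below 10^(k+1) give different
# answers, and each family keeps giving that answer as it grows.
for n in (19_999, 99_999, 199_999, 999_999):
    print(n, round(leading_one_fraction(n), 4))
```

The values alternate near 0.556 and 0.111 as the two families of cut-offs grow, so no single limit exists for the naive counting method.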
Craven
It's only about sets in that it was originally posted as a number theory axiom describing the entire infinite set of positive integers. It was never, ever taught by pure mathematicians to apply to finite subsets, although that's where applied mathematicians quickly took it to make use of it.
When I spoke of generators I meant generators of a set - the way our four DNA components G, A, T and C generate our entire DNA strings. In number theory you can apply a theorem to the generators of a set to show how it affects the set as a whole.
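The generator idea can be made concrete with the DNA analogy itself - the four symbols generate every string over them, shown here for a couple of finite lengths:

```python
from itertools import product

# The four generators G, A, T, C generate the whole (infinite) set of DNA
# strings; here are the length-1 and length-2 strings as a finite slice.
GENERATORS = "GATC"
strings = ["".join(s) for n in (1, 2) for s in product(GENERATORS, repeat=n)]
print(len(strings))  # 4 + 16 = 20
```

Anything proved about how the four generators combine automatically constrains the whole generated set, which is the point being made about applying a theorem to generators.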
Counting photons received per period on a collector to set bits in a number is a pretty good random number generator. And I made no statement about my assumptions regarding the randomness of random number generators, so it's incorrect to assume I am unaware of their limitations or distributions.
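A minimal sketch of that kind of generator, assuming simulated Poisson photon arrivals (sampled with Knuth's method) in place of a real collector; the parity of each period's count becomes one bit. The mean arrival rate and bit count here are arbitrary assumptions, not anyone's hardware spec.

```python
import math
import random

def photon_count(mean=20.0):
    # Knuth's method for sampling a Poisson-distributed count -- a stand-in
    # for photons arriving at a collector during one period.
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

def photon_number(bits=32):
    # One bit per counting period: is the photon count even or odd?
    # For a large mean the even/odd split is essentially unbiased.
    n = 0
    for _ in range(bits):
        n = (n << 1) | (photon_count() % 2)
    return n

print(photon_number())
```

With a real detector the physics supplies the randomness; the simulation only stands in for the arrival process.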
* * *
You need to drop randomness from this discussion and consider the entire set of positive integers to understand the original theorem. My small programming example has had the unfortunate side effect of making folk think of statistical analysis rather than applying number theory correctly to an infinite set of numbers.
OCCOM BILL
You'd do better to argue that most random number generators without a real-world noise function take advantage of an ill-conditioned function to generate big numbers with a specified distribution, then mod them by primes to get remainders. It's the underlying distribution function and the effects of modding by primes you should examine to investigate bias.
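As an illustration of the "big numbers modded by a prime" pattern, here is the classic MINSTD linear congruential generator, which reduces a growing product modulo the prime 2^31 - 1. It's offered as a well-known textbook instance of the class of generator being described, not as anyone's specific code:

```python
MODULUS = 2_147_483_647   # the Mersenne prime 2^31 - 1
MULTIPLIER = 16_807       # 7^5, the original MINSTD multiplier

def minstd(seed):
    # Each step forms a big product and keeps only its remainder modulo a
    # prime; any bias in the stream traces back to this recurrence and the
    # choice of multiplier and modulus, not to the output numbers themselves.
    x = seed
    while True:
        x = (MULTIPLIER * x) % MODULUS
        yield x

gen = minstd(1)
print([next(gen) for _ in range(3)])
```

Examining the multiplier/modulus pair is exactly the "investigate the underlying function" step: poor choices produce visible lattice structure in the remainders.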
The original number theory analysis looked at all numbers in the group P1 = {x: 1 -> N} as N heads to infinity and counted the occurrence of each leading digit 1 -> 9. This was then checked against the group P2 = {x: 1 -> 2N} as N heads to infinity, then P3 = {x: 1 -> 9N} as N heads to infinity, and so on. You get convergence for digit(i) within each group - but the answers are different for each group!
Then you consider P = {x: 1 -> round(log(aN))} where a is any constant, as N heads to infinity. This converges to the same numbers for all values of a: digit(i) occurs with frequency log((i+1)/i).
No distribution functions there!
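One way to see that concretely without any distribution function: the leading digits of the deterministic sequence 2^n track the predicted frequencies log10((i+1)/i). A short check (the base 2 and the range 1..5000 are arbitrary choices for illustration):

```python
import math
from collections import Counter

# Predicted leading-digit frequencies: log10((d+1)/d) for d = 1..9.
predicted = {d: math.log10((d + 1) / d) for d in range(1, 10)}

# Leading decimal digits of 2^1 .. 2^5000 -- no randomness anywhere.
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 5001))
for d in range(1, 10):
    print(d, round(counts[d] / 5000, 3), round(predicted[d], 3))
```

The predicted frequencies sum exactly to 1, since the logs telescope to log10(10), and the observed fractions from the powers of 2 land right on top of them.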