@memester,
memester;108132 wrote:Are these reasonable assumptions to use in examination of mimicry? How about let's look at another scenario too. Let's say that 99% of birds which eat a Viceroy will not eat that "look" of butterfly again. Of Queens, 1% of birds eating one will never eat that "look" of butterfly again. And to the final assumption: does 50/50 fit with mimicry theory better or worse than other ratios of mimic/mimicked?
Av = the fraction of the population of predators that never attack a Viceroy in the first place due to co-mimicry
Dq = the fraction of predators that will never attack a butterfly that looks like a Queen once having eaten a Queen
Pq = the fraction of Queens in the combined population of Viceroys and Queens
In this simple model, Av = Pq * Dq: a predator whose first taste of that "look" happens to be a Queen (probability Pq) and who is put off by it (probability Dq) will never attack a Viceroy ... if the population of Queens is non-zero and Queens are in any way distasteful to the predators, then Av will always be positive.
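... a quick sketch of that arithmetic (Python; the variable names just follow the definitions above, and the numbers plugged in are memester's 1%-for-Queens figure with an assumed 50/50 mix):

```python
# Minimal sketch of the simple model: Av = Pq * Dq.
# Numbers below are memester's example (1% of predators deterred after eating
# a Queen) together with an assumed 50/50 Queen/Viceroy split.

def fraction_never_attacking_viceroys(Pq: float, Dq: float) -> float:
    """Av: predators whose first taste of that 'look' was a Queen (Pq)
    and who were put off by it (Dq)."""
    return Pq * Dq

Pq = 0.5    # Queens as a fraction of the combined Queen + Viceroy population
Dq = 0.01   # fraction of predators deterred after eating a Queen
print(fraction_never_attacking_viceroys(Pq, Dq))  # 0.005 -- positive, so Viceroys still benefit
```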
memester;108132 wrote:That is assuming that predators cannot learn from others' experience. Or from parental presentation of acceptable foods. That's a big assumption.
... actually, if predators learn from the experience of others, then it benefits both species (that is, less than 100% of the predator population will ever attack a Viceroy or a Queen) ...
Pl = the fraction of predators that learn from others not to eat Viceroys and Queens
In this case, Av = Pl + (1 - Pl) * Pq * Dq ... and for Pl > 0 (with Pq * Dq < 1), this number is always greater than in the previous model.
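... the same sketch extended with the social-learning term (again Python; the Pl = 0.2 value is purely illustrative):

```python
# Sketch of the extended model: Av = Pl + (1 - Pl) * Pq * Dq.
# Predators that learn avoidance from others (Pl), plus the remainder
# that have to learn it the hard way by eating a Queen themselves.

def fraction_never_attacking_with_learning(Pl: float, Pq: float, Dq: float) -> float:
    return Pl + (1.0 - Pl) * Pq * Dq

print(fraction_never_attacking_with_learning(Pl=0.2, Pq=0.5, Dq=0.01))  # 0.204, vs. 0.005 without learning
```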
memester;108132 wrote:We CAN assume that first try of food is from parent's beak. And it is GOOD. Of course !
... well, yes - if a model didn't make assumptions, it wouldn't be a model, would it?

...
memester;108132 wrote:No, not shown at all. Even assuming all those things which are not safe to assume.
... actually, I'm buying it so far ... and the fact that your first three objections missed the mark doesn't bode well for the rest ...
memester;108132 wrote:And no, definitely DOES NOT explain "why it is the way it is".
... sure it does - the "talent for mimicry" hypothesis explains the regional variation that is otherwise left unexplained

...
EDIT: perhaps the confusion here revolves around hypotheses in general and their "truth status" ... any number of hypotheses could explain a fact, but that doesn't mean they're all right
---------- Post added 12-04-2009 at 03:11 PM ----------
QuinticNon;108070 wrote:An interesting read on Shannon. He was not concerned with meaning whatsoever. His contribution was calculating maximum throughput, and in the process, developed a calculation for the measurement of entropy. Not convinced it was the "basis" of his work, rather than a notable agent to address in effective communication.
... it is the basic statistical metric for measuring information that he introduces in his seminal paper, though - the more uncertain a random variable, the greater its entropy (and thus the more information associated with it); the less compressible a bit string (i.e., the more uncertainty in the string), the greater its entropy (and thus the more information it contains) ...
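... a toy illustration of that metric (the standard H = -sum p*log2(p); the two example distributions are just for contrast):

```python
# Shannon entropy of a discrete distribution, in bits.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit     -- a fair coin, maximal uncertainty
print(shannon_entropy([0.99, 0.01]))  # ~0.081 bits -- a nearly certain outcome, little information
```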
QuinticNon;108070 wrote:We can't maximize information.
... you can if it is Shannon information - that's one of the reasons I think you should consider it inappropriate for your ontology ...
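... continuing the toy sketch above, the uniform distribution is the maximizer, so "maximum information" is perfectly well-defined for a Shannon source:

```python
# For a fixed alphabet, Shannon entropy peaks at the uniform distribution:
# a binary source carries at most 1 bit per symbol, a 4-symbol source at most 2.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

best = max((shannon_entropy([p, 1 - p]), p) for p in (i / 100 for i in range(1, 100)))
print(best)          # (1.0, 0.5) -- the fair coin maximizes a binary source's entropy
print(math.log2(4))  # 2.0 -- the ceiling for a 4-symbol alphabet (uniform over 4 outcomes)
```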
QuinticNon;108070 wrote:An interesting read on Wiener too.
... that could be due to the fact that my knowledge of Wiener's use of negentropy in his theory of information is all second-hand - typically, it's presented as a more intuitive alternative to Shannon entropy/information ...
QuinticNon;108070 wrote:Might you confuse a "truth" of nature with an "observance" of nature?
... would such a state of affairs require that there be no "truth" in nature (outside of human reflection)? ... or that the "truth" in nature is beyond human comprehension? ...