Science Forums

Undecidable ??



Let's suppose the following algorithm:

 

#include <cstdlib>
#include <ctime>

int main()
{
    srand(time(NULL));

    bool stop = false;
    int a = 0, b = 0;
    int avga = 0, avgb = 0;    // running sums of the +/-1 outcomes

    while (!stop)
    {
        double p = (double)rand() / (double)RAND_MAX;
        if (p > .5)
            a = 1;
        else
            a = -1;

        avga += a;

        p = (double)rand() / (double)RAND_MAX;    // reuse p rather than redeclaring it
        if (p > .5)
            b = 1;
        else
            b = -1;

        avgb += b;

        if (avga == avgb) stop = true;    // == (comparison), not = (assignment)
    }
}

 

 

Is there any possibility that it never stops?


Let's suppose the following algorithm:

Is there any possibility that it never stops?

Jpittelo appears to have written an implementation of a published, but fairly obscure, math question in the "coin flipping" family.

 

The question is usually phrased "if one repeatedly flips a fair coin, will the number of heads eventually exactly equal the number of tails?", or, put more formally, does the probability of this happening approach 1 as the number of flips approaches infinity?

 

The answer is yes. Eventually, the number of heads will exactly equal the number of tails. Eventually, jpittelo's program, although it poses a slightly different problem than the simplest "coin flipping" problem, will exit. What's more, it will still stop with the line

if(avga==avgb) stop=true;

changed to

if(avga+FIXEDDIF==avgb) stop=true;

for any even constant integer value FIXEDDIF (avga and avgb change together each pass, so their difference is always even). The answer to the question "if one repeatedly flips a fair coin, will the number of heads eventually exactly equal the number of tails plus one million?" is yes.

 

The proof/demonstration of this, while slightly detailed, is straightforward: the difference avga - avgb performs a symmetric random walk (moving by -2, 0 or +2 each pass through the loop), such a walk is recurrent, and so with probability 1 it returns to 0 and, in fact, eventually reaches any fixed even value.
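
As a quick sketch of the standard argument (not spelled out above): for the single-coin difference, # heads minus # tails, the expected number of returns to zero diverges, and for a random walk that is exactly what forces a return with probability 1:

[math]\sum_{n=1}^{\infty} P(S_{2n}=0)=\sum_{n=1}^{\infty}\binom{2n}{n}\,2^{-2n}=\infty[/math]

where S_n denotes # heads minus # tails after n flips.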

 

Approximating the expected value of the number of coin tosses at which # heads = # tails is computationally intense. In fact, this stopping time has no finite expected value (the symmetric walk is recurrent, but only null recurrent), which puts it in the same heavy-tailed family as the "St. Petersburg paradox".
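
To make that concrete, here is a small Monte Carlo sketch (it uses std::mt19937 in place of rand(), and the seed, trial count and iteration cap are made-up values, not anything from jpittelo's post). It reruns the stop-when-avga-equals-avgb loop many times with a cap on the iterations; raising the cap keeps raising the sample mean, which is the practical symptom of the infinite expected stopping time:

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen(12345);                  // fixed seed, purely illustrative
    std::bernoulli_distribution coin(0.5);    // a fair coin for each of a and b

    const int trials = 100000;                // independent restarts of the loop
    const long cap   = 1000000;               // give up on a run after this many iterations
    long capped = 0;
    double sumSteps = 0.0;

    for (int t = 0; t < trials; ++t)
    {
        long avga = 0, avgb = 0, steps = 0;
        bool stopped = false;
        while (steps < cap && !stopped)
        {
            avga += coin(gen) ? 1 : -1;       // the "a" flip
            avgb += coin(gen) ? 1 : -1;       // the "b" flip
            ++steps;
            if (avga == avgb) stopped = true; // same stop condition as the original loop
        }
        if (stopped) sumSteps += steps; else ++capped;
    }

    long finished = trials - capped;
    std::printf("finished runs: %ld of %d, mean iterations among them: %.1f\n",
                finished, trials, finished ? sumSteps / finished : 0.0);
    std::printf("runs still unfinished at the cap of %ld iterations: %ld\n", cap, capped);
}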

 

The problem becomes even more interesting when changed into something like "if one repeatedly flips a fair coin, will the number of heads eventually exactly equal the number of tails times two?" Perhaps surprisingly, the answer to this is no longer a guaranteed yes. The probability for this particular question can be calculated without much difficulty, and is about 0.5729. Although the probability varies, the answer to the question "if one repeatedly flips a fair coin, will the number of heads eventually exactly equal the number of tails times X?" remains less than certain for any rational value of X other than 1. That is, changing the line

if(avga==avgb) stop=true;

to

if(avga*INTA==avgb*INTB) stop=true;

for any integers INTA not equal to INTB leaves the program with no guarantee of ever halting.
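
For anyone who wants to check the 0.5729 figure empirically, here is a rough Monte Carlo sketch (again using std::mt19937 with a made-up seed, trial count and cap; runs are truncated, so it slightly underestimates the true hitting probability):

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen(2024);                   // illustrative fixed seed
    std::bernoulli_distribution coin(0.5);

    const int trials = 100000;                // independent coin-flipping sessions
    const long cap   = 10000;                 // flips per session before giving up
    long hits = 0;

    for (int t = 0; t < trials; ++t)
    {
        long heads = 0, tails = 0;
        for (long flip = 0; flip < cap; ++flip)
        {
            if (coin(gen)) ++heads; else ++tails;
            if (heads == 2 * tails) { ++hits; break; }   // heads exactly twice tails
        }
    }
    std::printf("estimated P(# heads ever = 2 * # tails): %.4f (truncated at %ld flips per run)\n",
                (double)hits / trials, cap);
}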


Right, in fact it was

 

if(avga==0 && avgb==0) stop=true;

 

BTW this is in the context of the simulation of the EPRB paradox, in which the hypothesis set involves <A>=<B>=0.

 

Because if a simulation is done, then one cannot "force" this hypothesis by simulating A, B in {-1,1}, for example, and then subtracting the mean values as in the correlation <(A-<A>)(B-<B>)>. Indeed, if one writes A'=A-<A>, then A' is NOT in {-1,1}, which skews the results by violating another hypothesis.
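
A tiny sketch of this point (the variable names, seed and sample size below are illustrative only): simulate A in {-1,1}, then recentre it by its empirical mean, and the recentred values are no longer +/-1.

#include <cstdio>
#include <random>
#include <vector>

int main()
{
    std::mt19937 gen(7);                      // illustrative seed
    std::bernoulli_distribution coin(0.5);

    const int n = 1000;
    std::vector<int> A(n);
    double mean = 0.0;
    for (int i = 0; i < n; ++i) { A[i] = coin(gen) ? 1 : -1; mean += A[i]; }
    mean /= n;                                // empirical <A>, close to but not exactly 0

    std::printf("empirical <A> = %+.4f\n", mean);
    std::printf("first recentred values A' = A - <A>: %+.4f %+.4f %+.4f (no longer +/-1)\n",
                A[0] - mean, A[1] - mean, A[2] - mean);
}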


While Craig has handled the *math* problem--and he knows more than a bit more math than I do--I'll answer the computer science problem:

 

Turing's Halting Problem rears its ugly head. In computer science, the answer is that you definitely cannot prove that it ever halts. The difference here is that the math problem is solved with a "fair coin" and a "sufficiently large sample" of flips. CS folk will tell you that those pseudo-random number generators are notoriously non-random, and you can easily get a lopsided result over an *infinite* dataset *because* the numbers are not equivalent to a "fair coin". In addition, while it's "likely" that it might halt no matter whether you used a pseudo-random algorithm or the output from a beta-decay device (guaranteed to be perfectly random over time), there's no way to *prove* that a particular run will ever get to the magic match that you're looking for: there's always the chance that the difference could forever end up being >1.

 

Same problem, very different results, depending on who you're talking to!

 

Uncountably,

Buffy


Thanks, Buffy, for a computer scientist’s perspective. Like many Math majors, I fancy myself isomorphic to a computer scientist, so here goes...

Turing's Halting Problem rears its ugly head. In computer science, the answer is that you definitely cannot prove that it [the program in post #1?] ever halts.
As Buffy notes, the Halting Problem has been proven undecidable. However, the Halting Problem is, briefly: “given any program and initial input, determine if the program ever halts.” Given a specific program and initial input, it may be possible to determine if the program ever halts. For example, the C program:

main(){}

can be trivially proven to halt, while this one:

main(){while(1){}}

can be trivially proven not to halt.

 

Assuming one had the full object code (including any late-bound library code) and the initial data fed to it, it might be possible to solve "the Special Halting Problem" associated with a particular program.

 

Solving the Special Halting Problem for a given "coin flipping" program, with a given pseudo-random number generator and seed value, is a potentially interesting, and deep, problem. For many PRNGs, it's possible to select a "bad" seed value to make the resulting "Very Special Halting Problem" easier.
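
To make the "given PRNG and seed" idea concrete, here is an illustrative sketch (the toy 16-bit LCG, its constants and the seed are all invented here; they are not rand()'s actual implementation). Because the generator's state is finite and everything is deterministic for a fixed seed, checking one full period and noting the net drift per period reduces this Very Special Halting Problem, for the single-coin "# heads = # tails" loop, to a finite check:

#include <cstdio>

// Toy full-period 16-bit LCG with its state fully visible.
static unsigned lcgState;
static unsigned lcgNext() { lcgState = (lcgState * 25173u + 13849u) & 0xFFFFu; return lcgState; }

int main()
{
    const unsigned seed = 12345u;             // the "given seed" of the Special Halting Problem
    lcgState = seed;

    long diff = 0;                            // # heads minus # tails
    const long period = 65536;                // the LCG revisits the seed after this many calls
    for (long flip = 1; flip <= period; ++flip)
    {
        diff += ((lcgNext() >> 8) & 1u) ? 1 : -1;   // use bit 8 of the state as the coin
        if (diff == 0)
        {
            std::printf("halts: # heads = # tails at flip %ld\n", flip);
            return 0;
        }
    }
    // If we get here, the walk never hit zero in a full period, and the flips now
    // repeat exactly, so each further period adds the same nonzero drift; only a
    // bounded number of additional periods could ever bring the walk back to zero.
    std::printf("no match in one full period; net drift per period = %ld\n", diff);
}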

CS folk will tell you that those pseudo-random number generators are notoriously non-random, and you can easily get a lopsided result over an *infinite* dataset *because* the numbers are not equivalent to a "fair coin".
True. I've used simple "coin flipping" programs to make a preliminary determination of a "code unknown" PRNG's or true RNG's suitability for statistical and cryptographic applications; many fail this simple test, "mean creeping" in such a way that the "halt if # heads = # tails" program may simply never halt.
In addition, while it's "likely" that it might halt no matter whether you used a pseudo-random algorithm or the output from a beta-decay device (guaranteed to be perfectly random over time), there's no way to *prove* that a particular run will ever get to the magic match that you're looking for: there's always the chance that the difference could forever end up being >1.
For the true RNG, I agree. For a particular PRNG, I suspect it is possible to prove whether the match will occur. For example, for the trivial case of a crappy PRNG that just alternates between 2 values, the "# heads = # tails" match always occurs after exactly 2 iterations.
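
A minimal sketch of that trivial case (the alternating "generator" below is invented for illustration): with a source that strictly alternates, the single-coin "# heads = # tails" loop always exits on the second flip.

#include <cstdio>

int main()
{
    bool nextIsHeads = true;                  // degenerate "PRNG": strictly alternates
    long heads = 0, tails = 0, flips = 0;

    while (true)
    {
        if (nextIsHeads) ++heads; else ++tails;
        nextIsHeads = !nextIsHeads;
        ++flips;
        if (heads == tails) break;            // always true at flips == 2, whichever value came first
    }
    std::printf("stopped after %ld flips (heads = %ld, tails = %ld)\n", flips, heads, tails);
}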

 

The question of whether a proof that a mathematical function approaches a particular value is equivalent to a solution to a Special Halting Problem is, I fear, over my head, though fun to ponder.


aagh, darn, if you want math to look pretty, please use the LaTeX tags, it works, and your post looks a bit cooler :)

[math]A=(-1)^n \text{ for } n=0,1,\dots \\ B=(-1)^{(n+1)} \text{ where } n\ge 1 \text{ and } B(0)=1 \\ \text{then } A(2n)=0 \text{ and } B(2n)=\frac{1}{2n+1}>0[/math]

never mind the whole programmatic approach and the cryptic LaTeX syntax, which btw looks like this:

A=(-1)^n \text{ for } n=0,1,\dots\\

B=(-1)^{(n+1)}\text{ where } n\ge 1 \text{ and } B(0)=1\\

\text{then }A(2n)=0\text{ and }B(2n)=\frac{1}{2n+1}>0

makes you almost feel like you are a hacker... :P

