• Technomage (unregistered)

The answer to the lemon drop problem is exactly 9/110, computed in 4 milliseconds using a recursive algorithm. :)

Using a language with easy memoization makes a recursive solution very straightforward. Built-in rational arithmetic is just the icing on the cake.

I don't know why the problem proposer has an issue with recursive solutions. Especially if tail call elimination is guaranteed, they can be the most elegant and obviously correct implementations.

• WTF?? (unregistered) in reply to Technomage

4 milliseconds?? Mine ran in 1.03 milliseconds... (same answer)

• (cs) in reply to ajw
SenTree:
I think he's effectively picking a random candy every day, so the odds remain what they were at the start, 9/10.
I expected my 'intuitive' guess to be wrong...
eb0ny:
I tried DP approach and got 0.0(81) or 8.18182%.
ajw:
Actual figure is 0.08181... as previously given.
...but it's surprising by how much. Just goes to show how easy it is to be fooled by probabilities.
ajw:
I'm not sure I can see any greatly simpler solution, so I'd be delighted if the OP could enlighten us on this one.
There should be an analytic solution, but I suspect its derivation may be a bit messy. I've already demonstrated it's outside my skill set!
• seriously? (unregistered) in reply to WTF??

you shouldn't criticize an obviously acceptable time...

when yours isn't the best...

0.8 milliseconds

• sota (unregistered) in reply to eric76
eric76:
What are the odds that when one piece of candy remains, it will be lemon flavored?

Since we know that George prefers lemon, the odds of that being the last one in the bag are zero. It's the same principle that guarantees candy corn will be at the bottom of your Halloween sack.

• mykelyk (unregistered)

Why exactly do you think recursion is bad style? The algorithm is inherently recursive, the recursion here could be a simple goto with parameters, so why not? Probably the compiler will tail-optimize it anyway (maybe not during debugging or such).

Please stop bashing recursive algorithms into iterative ones for the sake of your very own "good style".

PS. I'm from Italy, so excuse my bad English.

PPS. Eric's problem:

Let x = #l (the number of lemons), y = #c (the number of cherries), and p(x,y) = pl (the probability that the last candy is lemon).

Assume x != 0 or y != 0 (the game ends when only one candy remains).

Of course p(0,y) = 0 and p(x,0) = 1.

Now take p(x,y) where x != 0 and y != 0, and let px = x/(x+y) and py = y/(x+y) (the probabilities of drawing a lemon or a cherry this turn).

If a lemon is drawn, it is eaten, so the state changes to p(x-1,y).

Otherwise a cherry is drawn, put back, and a second candy is drawn with the same probabilities px and py.

So p(x,y) = px*p(x-1,y) + py*(px*p(x-1,y) + py*p(x,y-1)), or p(x,y) = (py + 1)*px*p(x-1,y) + py^2*p(x,y-1).

The Haskell code is produced with a find-replace

```
p 0 y = 0
p x 0 = 1
p x y = (py+1)*px*p(x-1,y) + py**2*p(x,y-1)
  where
    px = x/(x+y)
    py = y/(x+y)

-- solution
main = print (p 10 90)
```

Crystal clear, a bright example of the elegance of recursion, but actually it is VERY slow.

So recursion is slow, recursion sucks. You can easily see that this function is not tail-recursive, so it is slow; we need to uglify it into C, add accumulators, etc.

WRONG!

It's very slow because it will unnecessarily recompute p(x,y) for the same (x,y) over and over. This is the optimization we have to do, and it is totally orthogonal to the whole recursive-vs-iterative question.

This code (again Haskell) will do the trick:

```
p :: Int -> Int -> Double
p 0 y = 0
p x 0 = 1
p x y = (py+1)*px*(ps !! (x-1) !! y) + py**2*(ps !! x !! (y-1))
  where
    (dx, dy) = (fromIntegral x, fromIntegral y)
    px = dx / (dx+dy)
    py = dy / (dx+dy)

ps = do
  x <- [0..]
  return [p x y | y <- [0..]]

-- solution
main = print (p 10 90)
```
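The recurrence and its memoization can be sanity-checked outside Haskell with exact rational arithmetic. A minimal sketch in Python (not from the thread), using `fractions.Fraction` and `functools.lru_cache` in place of the lazy-list memo table:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p(x, y):
    """P(last candy is lemon) with x lemons and y cherries in the bag."""
    if x == 0:
        return Fraction(0)
    if y == 0:
        return Fraction(1)
    px = Fraction(x, x + y)  # first draw is a lemon: eat it
    py = Fraction(y, x + y)  # first draw is a cherry: put it back, draw again
    return px * p(x - 1, y) + py * (px * p(x - 1, y) + py * p(x, y - 1))

print(p(90, 10))  # Fraction(9, 110), matching the exact answers in the thread
```

Note the argument order: 90 lemons and 10 cherries, i.e. `p(90, 10)`.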

• Technomage (unregistered) in reply to seriously?

I was afraid I'd trigger a competition. :P

After switching to floating point and eliminating a common subexpression, I have a less correct and less understandable solution that now runs in 1-2 milliseconds. ;)

But yeah, times will vary by hardware, optimization, etc. The important part was the "obviously acceptable time."

• mykelyk (unregistered) in reply to mykelyk

One day I will learn to reread.. (inverted x and y and a comma typo)

mykelyk:
The Haskell code is produced with a find-replace
```
p 0 y = 0
p x 0 = 1 --      vvvvvvvvvvv         vvvvvvvvvvv
p x y = (py+1)*px*(p (x-1) y) + py**2*(p x (y-1))
  where
    px = x/(x+y)
    py = y/(x+y)
-- solution     vv vv
main = print (p 90 10)
```

SNIP

this code (again Haskell) will do the trick

```
p :: Int -> Int -> Double
p 0 y = 0
p x 0 = 1
p x y = (py+1)*px*(ps !! y !! (x-1)) + py**2*(ps !! (y-1) !! x)
  where
    (dx, dy) = (fromIntegral x, fromIntegral y)
    px = dx / (dx+dy)
    py = dy / (dx+dy)

ps = do
  y <- [0..]
  return [p x y | x <- [0..]]

-- solution     vv vv
main = print (p 90 10) -- 8.1818..e-2 (8.18%)
```
• Mike (unregistered) in reply to SenTree
SenTree:
Ian Gent:
I can't see the solution but I can guess the answer on the following logic.

With these kinds of puzzles, the answer is always 0, 1/2, or 1. It can't be 0 or 1, because there is a non-zero chance of picking out lemons until there are none left, or cherries until there are none left.

Therefore the answer is a half.

Actually there's a small chance that the answer is 1/100 or 99/100, because those numbers come up sometimes, but I think 1/2 is more likely.

I think he's effectively picking a random candy every day, so the odds remain what they were at the start, 9/10.
I wrote an algorithm that just runs the test (always take lemon, take next candy after a cherry) a bunch of times and it consistently says ~8.2% for lemon. If I were a better statistician I'd probably be able to solve it on paper, but brute force isn't bad for estimates.
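A brute-force run like the one described takes only a few lines; here is a Python sketch of the same test (hypothetical code, not Mike's):

```python
import random

def last_candy(lemons=90, cherries=10):
    """Simulate one bag: eat a lemon on sight; put a cherry back and eat the next draw."""
    bag = ["lemon"] * lemons + ["cherry"] * cherries
    while len(bag) > 1:
        i = random.randrange(len(bag))
        if bag[i] == "lemon":
            bag.pop(i)                           # lemon: eat it
        else:
            bag.pop(random.randrange(len(bag)))  # cherry went back; eat the next pick
    return bag[0]

random.seed(0)
trials = 20000
freq = sum(last_candy() == "lemon" for _ in range(trials)) / trials
print(freq)  # consistently around 0.082, in line with the exact 9/110
```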
• consequat (unregistered) in reply to seriously?
seriously?:
you shouldn't criticize an obviously acceptable time...

when yours isn't the best...

0.8 milliseconds

HA! I ran mine and it finished around January 2nd 10:42:34, 1941...

I should have known better than to replace my motherboard's cheap knock-off Chinese capacitors with state-of-the-art flux capacitors!!

• me (unregistered) in reply to ObstinateCoder
ObstinateCoder:
Because it takes so much time to change struct foo foo[4000]; to struct foo *foo = malloc(sizeof(foo) * count);
<nitpicking> Is this C or C++?

In C++, you would have to cast the return of the malloc():

struct foo *foo = (struct foo *) malloc(sizeof(foo) * count);

or, shorter:

foo *foo = (foo *) malloc(sizeof(foo) * count);

In C, you do not need that cast (and malloc() already indicates that it might be C); however, sizeof(foo) does not work there; you need to use sizeof(struct foo), unless there was a typedef. </nitpicking>

• NerfedCharPlayer (unregistered)

Thank God for my keyboard layout, where if I miss the shift I get "select + ", which does not compile! MY KEYBOARD LAYOUT IS DEV PROOF!

• (cs)

100%, because my George is smart enough to look in the flupping bag first, and take out the hated Cherry flavored candies.

• (cs) in reply to halcyon1234
halcyon1234:
100%, because my George is smart enough to look in the flupping bag first, and take out the hated Cherry flavored candies.

Or traded the cherries with Della who actually likes cherry and dislikes lemon.

• Ian Gent (unregistered) in reply to Technomage
Technomage:
The answer to the lemon drop problem is exactly 9/110, computed in 4 milliseconds using a recursive algorithm. :)

Using a language with easy memoization makes a recursive solution very straightforward. Built-in rational arithmetic is just the icing on the cake.

I don't know why the problem proposer has an issue with recursive solutions. Especially if tail call elimination is guaranteed, they can be the most elegant and obviously correct implementations.

I agree with everything. I used Common Lisp and did get 9/110 but in 28 ms.

Again, I agree about there being no problem with recursion. The issue is memoization. (Which wasn't free for me in Common Lisp and led to bugs.)

• Ian Gent (unregistered)

A bit of playing around leads to the empirical conjecture that under the rules of the game, if we have x lemons and y cherries,

P(last candy is lemon) = x / ((x+y)(y+1))

I have no idea why but every test case I've tried fits it.

Of course my last conjecture (that the answer was 0.5) was garbage.
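The conjecture checks out: evaluating the thread's recurrence exactly and comparing it to x/((x+y)(y+1)) agrees on every small case, and a short induction on the recurrence confirms it in general. A Python sketch (not from the thread):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p(x, y):
    # the recurrence derived earlier in the thread
    if x == 0:
        return Fraction(0)
    if y == 0:
        return Fraction(1)
    px, py = Fraction(x, x + y), Fraction(y, x + y)
    return (1 + py) * px * p(x - 1, y) + py**2 * p(x, y - 1)

def conjecture(x, y):
    # Ian Gent's guess: P(last candy is lemon) = x / ((x+y)(y+1))
    return Fraction(x, (x + y) * (y + 1))

assert all(p(x, y) == conjecture(x, y) for x in range(1, 13) for y in range(13))
print(conjecture(90, 10))  # Fraction(9, 110)
```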

• Technomage (unregistered) in reply to Ian Gent
Ian Gent:
I agree with everything. I used Common Lisp and did get 9/110 but in 28 ms.

Again, I agree about there being no problem with recursion. The issue is memoization. (Which wasn't free for me in Common Lisp and led to bugs.)

I must admit the memoization wasn't completely free for me either. I used PLT Scheme in conjunction with Dave Herman's memoize.plt library. After doing the memoization manually on too many Project Euler problems, I was very happy to run across these macros. A quick import, and the only change needed to make the recursive solution fully memoized was turning one "define" into "define/memo".

You might check to see if someone has provided similar macros for Common Lisp.
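The `define/memo` idea — wrap a definition so results are cached per argument tuple — is only a few lines in any language with first-class functions. A toy Python equivalent (illustrative names, not the memoize.plt API):

```python
from functools import wraps

def memo(fn):
    """Cache fn's results keyed by its positional arguments."""
    cache = {}
    @wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memo  # the only change vs. a plain definition, as with define -> define/memo
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # instant; naive recursion would take ages
```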

• Capt. Obvious (unregistered) in reply to drf

drf:
2. The potential security implications of a buffer overrun attack, depending on where the application is used.

Well, it should notice that it wants to write off the end of the array and fail with an error to the user, as opposed to failing dramatically.

drf:
3. The developer implemented a workaround "feature" to accommodate a buffer overrun that would still require testing.

Sure. But the likelihood of having introduced a bug is quite small. The likelihood of dynamic allocation producing a bug is larger (albeit a one-time cost).

drf:
4. Depending again on the specifics of the application, the conversion to dynamically-allocated memory would likely be trivial.

In that case, hard-coding the array would be a bad solution. But I'll give the coder the benefit of the doubt and say it's non-trivial. Depending on the specifics of the application, failure may be more desirable than dynamic reallocation. Okay, the instances of my extreme example are probably outweighed by yours 1,000,000 to 1.

drf:
5. Potential end-user support requests and user grievances should the application crash unexpectedly.

If an error message says "You must provide less than 40,000 records to this application.", non-technical users can figure it out.

drf:
6. Additional documentation necessary to properly document the circumstances and workaround in (3).

Yes, if someone did this in a production environment, they had better document why.

I can see doing this if

the number of records isn't known when the array is first allocated; it's only used internally; the number of records grows at such a slow rate that the new number will last at least a year or two; the failure is graceful and provides enough info in the "Critical Error" dialog to allow a developer to find the #define in less than 10 minutes (counting getting the app out of source control); and it's non-mission-critical.

Other than those limitations, I think you were overly harsh.

• (cs)

I thought the 8 Report was due to some dialects of SQL letting you refer to columns by number instead of name, but I looked it up and it only applies to the ORDER BY clause. (The real WTF there is that no one ever seems to reply to "how do I query columns by number?" with "whyTF do you want to?")

• IByte (unregistered)

Well, Mark, since you've decided to crowd-source spell checking: the word is "indelible".

Fixed! Thanks! - MB

• fgsfds (unregistered) in reply to I AM GENIUS
I AM GENIUS:
⑧ - second strongest in gensokyo
• (cs)

I'm The Octopus! I got 8 of everything!

• (cs)

WHO 8 THE GREAT REPORT?

• CoyneT (unregistered) in reply to Gunnar
Gunnar:
Hmm no, it's recursive. I don't know C(++?) but it actually seems to work.

Sure. But, then, so does this:

```
int SumOf(int a[2000], int i)
{
    if (i >= 2000) return 0;
    return SumOf(a, i + 1) + a[i];
}
```

... if you have enough stack.

I wonder if the original example was loop racism or recursion fanaticism?

• (cs) in reply to Julian Calaby
Julian Calaby:
Problems with the binary search function:
1. Creates three unnecessary variables and doesn't use one of them.
2. The type of sr.
3. No returns on the recursive calls
4. Recursive calls are utterly unnecessary as this can easily be performed iteratively.
5. Confusing comment.
6. imsi will be parsed as a float for every comparison.
7. Single space indents are bad style.
8. strcmp() is not the right way to compare two floats, unless you really want 0 to not equal 0.00.
9. Why the hell are we passing around a huge array of strings representing floating point numbers anyway?
10. I hope the data is sorted =)

1, 3, 4 and 5 indicate to me that this was originally implemented as an iterative function, then made recursive. Why, I can't fathom.

Well, 1 is not a WTF in itself; it will generate compiler warnings, but those variables won't affect the program in any way.

1. It's an array of strings; what is the problem?

2. Yes, that is quite wrong. The program will work on most computer architectures, but it is theoretically wrong. I don't know if it is valid C++, but that is not that important in real life. Anybody who tries to maintain the code will be confused by this problem, and it will generate a warning that is probably ignored together with the ones about the unused variables. That is a problem; I just don't think it is a WTF in itself.

3. It can easily be performed in a recursive way too. By the way, it was easily performed in a recursive way, which is a bit clearer than the iterative way (proof of that is that the developer probably couldn't make it work iteratively).

4. It was iterative; the comment was just left there.

5. Yeah, that is slow. Only the development team can know whether that is a problem or a feature.

6. Not a WTF at all.

7. Yes, that may be a problem. But comparing floating point numbers is guaranteed to fail, while comparing strings will only fail if those strings were created by the wrong algorithms.

8. OK, no idea if it should be that way, but I can think of reasons for it. One of them is to simplify the use of fixed point numbers (use them as floats, compare them as strings). Anyway, passing huge arrays around has no performance hit, so if the array was needed for some other reason, there is no problem reusing it here.

9. Well, the developer was smart enough to read the algorithm for a binary search, then, when he couldn't make it work, was smart enough to redo it recursively and get something that (badly) works. I guess he could have read the text that always comes with binary search explanations, which says it only works on ordered arrays.

• mudkipz (unregistered) in reply to eric76
eric76:
For some reason, people seem to prefer to solve things recursively even when that is a ridiculously bad approach.

Consider this problem:

George bought a sack of 100 pieces of candy at the store. 90 of the pieces are lemon flavored and ten are cherry flavored. Of the two, George prefers the lemon flavored candies.

Every day George randomly picks a piece of candy out of the bag. If it is lemon flavored, he eats it and puts the bag away for the next day.

But if the candy he chose is cherry flavored, he puts it back in the bag and then randomly picks a candy out of the bag and eats it regardless of the flavor. In other words, he'll only put a piece of candy back at most once per day.

What are the odds that when one piece of candy remains, it will be lemon flavored?

I posed the problem at a company where I used to work. All but one person tried to do it recursively. The remaining person tried to do it using an Excel spreadsheet!!!

Recursion isn't a bad approach to brute-force the computation. Any brute-force approach will end up with large state trees, and the recursive approach has the advantage of being clear, since it necessarily encodes conditional probabilities.

Doing it algebraically (on paper) and plugging in values is faster though.

• Gregg Lind (unregistered) in reply to ajw

That was a fun problem!

• Gregg Lind (unregistered) in reply to Ian Gent

Can we see the CL code for it? Might be instructive for those of us wishing to learn CL!

• Dr. Kenneth Noisewater (unregistered)
```
$ time perl candybag.pl
For [1000] bags, the last candy was:
Lemon:  81 times (8.1%)
Cherry: 919 times (91.9%)

real    0m0.508s
user    0m0.500s
sys     0m0.007s
```

and that's with just a loop, splice and push..

• Sean (unregistered) in reply to CoyneT
CoyneT:
I wonder if the original example was loop racism or recursion fanaticism?

Likely just an example of a beginner programmer who hasn't been exposed much to optimization concerns.

When generally stated or thought about, the problem lends itself very well to a recursive solution. The general idea of binary search is: start in the middle; if you found the element, return it; if not, and it's less than the middle element, search the bottom half; otherwise search the top half. If someone doesn't know enough to think about optimization, recursion is the most straightforward and simplest way to both program and view the solution. Granted, it's not a very well written or commented piece of code, but I hardly think it's worthy of that much of a WTF.

And to those of you bringing up pointless concerns like "the array better be sorted or you'll have issues": obviously it should be sorted before being sent to the function, and a binsearch function should not waste time checking whether it's sorted. If I'm stupid enough to call binsearch on an unsorted array, then I deserve to get my result only if I'm very lucky.

• (cs)

Someone email 8@. and tell him the binary search is crashing when given more than 4000 comments!

• Monday (unregistered) in reply to Tim

Guess you never heard of the Y2k bug?

• Monday (unregistered) in reply to Tim
Tim:
Seems to me the 4,000 Records solution is exactly the correct one in most circumstances - it minimises effort, retesting and the chance of any unintended consequences.

If this was in a production application, the only circumstances under which I would normally let one of my developers spend time rewriting this module to do dynamic allocation would be (a) if I thought the amount of data was likely to grow to 10 times its current value within the lifetime of the application (and even then I would be tempted to just increase the size of the array further), or (b) if the allocation of 40,000 records used an excessive amount of memory.

Guess you never heard of the Y2k bug?

//now with quotey goodness

• SpamBot (unregistered)

The person who said that the probability of the last candy is lemon is zero is absolutely right. I don't care what maths the rest of you did - you're wrong. It's IMPOSSIBLE if every time he gets a cherry one he puts it back that this will happen any other way.

How am I so certain? My daughter and I spent all night picking out all the red sweets and putting the lemon ones back - result, pot of 100% lemon sweets.

• PC Paul (unregistered) in reply to SpamBot
SpamBot:
The person who said that the probability of the last candy is lemon is zero is absolutely right. I don't care what maths the rest of you did - you're wrong. It's IMPOSSIBLE if every time he gets a cherry one he puts it back that this will happen any other way.

How am I so certain? My daughter and I spent all night picking out all the red sweets and putting the lemon ones back - result, pot of 100% lemon sweets.

Impressive. But next time read the question properly first...

"But if the candy he chose is cherry flavored, he puts it back in the bag and then randomly picks a candy out of the bag and eats it regardless of the flavor."

Cherry ones only go back if they are the first ones out. Second ones get eaten.

I only know because I thought the same thing at first...

• lima (unregistered)
```
bool binsearch(char* sr[20000], char* imsi, int lo, int hi)
{
    int low, high, test, found1;

    low = lo;
    high = hi;
    test = 0;

    found1 = -1;

    if (low > high)
        return false;
    else
    {
        test = ((low + high) / 2);

        if (strcmp(sr[test], imsi) == 0)
            return true;
        else if (atof(imsi) > atof(sr[test]))
            binsearch(sr, imsi, test + 1, high);
        else
            binsearch(sr, imsi, low, test - 1);
    } //end while loop
}
```

nicely documented... now I know where the end of the while loop is... now it's just a matter of finding the beginning...

• jmucchiello (unregistered) in reply to blah
blah:
TRWTF is
`test = ((low + high) / 2);`
is susceptible to *signed* integer overflow. Not pretty.
That's true. Except it's probably also wrong. A binary search would use
`low + ((high - low) / 2)`
to find the break point.
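The overflow point is easy to demonstrate by simulating 32-bit signed wraparound (a sketch; Python's own integers don't overflow):

```python
def to_i32(v):
    """Interpret v as a two's-complement 32-bit signed integer."""
    v &= 0xFFFFFFFF
    return v - 2**32 if v >= 2**31 else v

low, high = 2_000_000_000, 2_100_000_000   # both fit in a signed 32-bit int

print(to_i32(low + high))                  # negative: (low + high) wrapped around
print(to_i32(low + (high - low) // 2))     # 2050000000: the midpoint, no overflow
```

Mathematically the two midpoint formulas are identical; `low + (high - low) / 2` just keeps every intermediate value within range.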
• Darth Eru (unregistered)

A recursive solution for the cherries and lemons puzzle is perfectly acceptable, because it's just a puzzle. In order to find the exact probability, a simple way is to construct the decision/probability tree. This is easiest to code using recursion.

Nobody is going to use the most efficient algorithm possible to solve a trivial problem unless they're being graded. The problem comes when people start using programmer friendly algorithms in applications where efficiency actually matters.

• eric76 (unregistered) in reply to mykelyk
mykelyk:
Why exactly do you think recursion is bad style? The algorithm is inherently recursive, the recursion here could be a simple goto with parameters, so why not? Probably the compiler will tail-optimize it anyway (maybe not during debugging or such).

Please stop bashing recursive algorithms in iterative ones for the sake of your very own "good style".

I don't think that recursion is bad style.

I think that it is often slower than doing it iteratively, but for many problems, the elegance and simplicity of the algorithm compared to a very small performance hit still make it a very valid choice.

This example shows one in which the performance hit is very large. For example, you can code this problem as follows:

```
double p( int lemons, int cherries )
{
    if ( lemons == 0 ) return 0 ;
    else if ( cherries == 0 ) return 1 ;

    double denom = ( lemons + cherries ) * ( lemons + cherries ) ;
    double choose_lemon = lemons * ( lemons + 2 * cherries ) / denom ;
    double choose_cherry = cherries * cherries / denom ;

    return p( lemons - 1, cherries ) * choose_lemon +
           p( lemons, cherries - 1 ) * choose_cherry ;
}
```

The problem for the unwary is that the execution time increases dramatically as you increase the number of candies.

For example, if you insert a counter into the function to see how many times it is called, you find that with 10 lemons and 10 cherries, it is called 369,511 times; with 20 lemons and 10 cherries, it is called 60,090,029 times; and with 30 lemons and 10 cherries, it is called 1,695,321,055 times.

If you modify the algorithm so that it keeps a table of those values it has already calculated and uses those values when needed instead of recalculating them over and over and over again, then the algorithm is very well behaved.
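The blow-up and the fix are both easy to reproduce. Below is a Python transcription of the function above with a call counter, plus a memoized twin (a sketch, not eric76's actual test harness):

```python
from functools import lru_cache

calls = 0

def p(lemons, cherries):
    """Direct transcription of the C function above, with a call counter added."""
    global calls
    calls += 1
    if lemons == 0:
        return 0.0
    if cherries == 0:
        return 1.0
    denom = (lemons + cherries) ** 2
    choose_lemon = lemons * (lemons + 2 * cherries) / denom
    choose_cherry = cherries * cherries / denom
    return (p(lemons - 1, cherries) * choose_lemon
            + p(lemons, cherries - 1) * choose_cherry)

p(10, 10)
print(calls)  # 369511 calls, matching the count quoted above

@lru_cache(maxsize=None)  # the "table of values already calculated"
def p_memo(lemons, cherries):
    if lemons == 0:
        return 0.0
    if cherries == 0:
        return 1.0
    denom = (lemons + cherries) ** 2
    return (p_memo(lemons - 1, cherries) * lemons * (lemons + 2 * cherries) / denom
            + p_memo(lemons, cherries - 1) * cherries ** 2 / denom)

print(p_memo(90, 10))  # ~0.0818, i.e. 9/110, with at most 91*11 distinct states
```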

• mykelyk (unregistered) in reply to jmucchiello
jmucchiello:
blah:
TRWTF is
`test = ((low + high) / 2);`
is susceptible to *signed* integer overflow. Not pretty.
That's true. Except it's probably also wrong. A binary search would use
`low + ((high - low) / 2)`
to find the break point.

Learn math.

• Dr. Kenneth Noisewater (unregistered) in reply to Darth Eru
Darth Eru:
Nobody is going to use the most efficient algorithm possible to solve a trivial problem unless they're being graded. The problem comes when people start using programmer friendly algorithms in applications where efficiency actually matters.

Heh...

```
#!/usr/bin/perl -w
#

require 5;
use strict;

my ($MAX_BAGS, %bag_results);
##########################80 Columns..  W00t!!!#################################

$MAX_BAGS = 100000; # how many bags
$bag_results{'lemon'}  = 0;
$bag_results{'cherry'} = 0;

for (my $i = 0; $i < $MAX_BAGS; $i++) {
    $bag_results{&empty_bag}++;
}

## begin output
print STDOUT "For [$MAX_BAGS] bags, the last candy was:\n" .

    "\tLemon:  " .
    $bag_results{'lemon'} .
    " times (" .
    sprintf("%.4f",
        (($bag_results{'lemon'} / $MAX_BAGS) * 100)
    ) .
    "%)\n" .

    "\tCherry: " .
    $bag_results{'cherry'} .
    " times (" .
    sprintf("%.4f",
        (($bag_results{'cherry'} / $MAX_BAGS) * 100)
    ) .
    "%)\n";
## end output

#### SUBS ####
sub empty_bag { # empty is a verb in this context
    my (@bag, $pick);       # go to the candy store
    # fill the candy bag
    for (my $i = 0;  $i <= 89; $i++) { $bag[$i] = 'lemon';  }
    for (my $i = 90; $i <= 99; $i++) { $bag[$i] = 'cherry'; }
    # time to eat!
    while (@bag > 0) {
        $pick = splice(@bag, int(rand(@bag)), 1);     # pick a candy
        if ($pick eq 'cherry') {                      # if it's yucky
            push @bag, $pick;                         # put it back
            $pick = splice(@bag, int(rand(@bag)), 1); # and pick again
        } # otherwise just eat it
    } # that was tasty!
    return $pick;   # the type of the last candy, before it was et
}
```

• Dweeb (unregistered) in reply to Someone

Since binary search is O(log n), I wouldn't worry about a stack overflow. Example: If the data set was HUGE: 1 trillion records, it could recurse no more than 40 times.
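The depth bound is quick to confirm: each recursive call halves the remaining range, so the depth is at most ceil(log2 n). A one-liner sketch:

```python
import math

n = 10**12                      # "1 trillion records"
depth = math.ceil(math.log2(n)) # binary search halves the range each call
print(depth)  # 40 levels of recursion at most
```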

• (cs)

//beginning of while loop

The 8 report had me in hysterics!

• (cs) in reply to CoyneT
CoyneT:
I wonder if the original example was loop racism or recursion fanaticism?
What about your loop fanaticism? Recursion on a sub-linear algorithm (i.e. O(log n)) is *not* in any way a WTF if it expresses the problem correctly. The call depth is less than 30 for a list of a billion items.
• Odie (unregistered) in reply to drf
drf:
Tim:
Seems to me the 4,000 Records solution is exactly the correct one in most circumstances - it minimises effort, retesting and the chance of any unintended consequences.

If this was in a production application, the only circumstances under which I would normally let one of my developers spend time rewriting this module to do dynamic allocation would be (a) if I thought the amount of data was likely to grow to 10 times its current value within the lifetime of the application (and even then I would be tempted to just increase the size of the array further), or (b) if the allocation of 40,000 records used an excessive amount of memory.

Can't agree with you there.

1. Assuming the lifetime of the application isn't known, it becomes a liability. Its uses cannot be known a priori.
2. The potential security implications of a buffer overrun attack, depending on where the application is used.
3. The developer implemented a workaround "feature" to accommodate a buffer overrun that would still require testing.
4. Depending again on the specifics of the application, the conversion to dynamically-allocated memory would likely be trivial.
5. Potential end-user support requests and user grievances should the application crash unexpectedly.
6. Additional documentation necessary to properly document the circumstances and workaround in (3).

If the application were to be replaced, strictly non-critical, not security-sensitive, used by technical users, and poorly-coded, I might be tempted then to simply increase the size of the array. But only then. It is not the correct solution.

Not to mention that the story says a larger array was implemented - but it would only be used if the smaller array overran (I think)... Seems more than just a little strange - it would be simpler (assuming you really didn't want dynamic memory allocation) to ALWAYS use the larger array...

• Logician (unregistered) in reply to Julian Calaby
Julian Calaby:
Problems with the binary search function:
1. Creates three unnecessary variables and doesn't use one of them.
2. The type of sr.
3. No returns on the recursive calls
4. Recursive calls are utterly unnecessary as this can easily be performed iteratively.
5. Confusing comment.
6. imsi will be parsed as a float for every comparison.
7. Single space indents are bad style.
8. strcmp() is not the right way to compare two floats, unless you really want 0 to not equal 0.00.
9. Why the hell are we passing around a huge array of strings representing floating point numbers anyway?
10. I hope the data is sorted =)

1, 3, 4 and 5 indicate to me that this was originally implemented as an iterative function, then made recursive. Why, I can't fathom.

To make it suitable for WTF, of course.....

• Me (unregistered) in reply to greydude
greydude:
halcyon1234:
100%, because my George is smart enough to look in the flupping bag first, and take out the hated Cherry flavored candies.

Or traded the cherries with Della who actually likes cherry and dislikes lemon.

Or not buy them in the first place.

• AC (unregistered)
eric76:
What are the odds that when one piece of candy remains, it will be lemon flavored? I posed the problem at a company where I used to work. All but one person tried to do it recursively. The remaining person tried to do it using an Excel spreadsheet!!!
Using a Mathematica workbook probably requires the least amount of coding, and yes, it gives the exact result!
```
Clear[f];
For[i = 0, i <= 100, i++, f[100, i] = 0];
f[100, 10] = 1;
f[x_, y_] := f[x, y] =
  f[x + 1, y + 1]*Power[(y + 1)/(x + 1), 2] +
  f[x + 1, y]*(1 - Power[y/(x + 1), 2]);
f[1, 0]
```
The output is 9/110, as expected :)
• AC (unregistered) in reply to Ian Gent
A bit of playing around leads to the empirical conjecture that under the rules of the game, if we have x lemons and y cherries, P(last candy is lemon) = x / (x+y)(y+1)
Interesting. And if you retake the candy twice instead of once, it gets much much more complicated. The Mathematica solution above gives the answer
```       887141338076788244188142268723341
3834704073959105699592528245108996693847
1743678760963442153425719249461222434743
4207760772261225563002659685632820991948
1697166286990104462944110850140103070751 /
198635071695965793806900320300451686
0739472623877800789820713163853264546010
9613829230065156077505066735961430095223
7297313688808994679085530660365926400000
0000000000000000000000000000000000000000```
which is approximately 0.00446619.
• (cs) in reply to SpamBot
SpamBot:
The person who said that the probability of the last candy is lemon is zero is absolutely right. I don't care what maths the rest of you did - you're wrong. It's IMPOSSIBLE if every time he gets a cherry one he puts it back that this will happen any other way.

How am I so certain? My daughter and I spent all night picking out all the red sweets and putting the lemon ones back - result, pot of 100% lemon sweets.

No Good! Wide Right!