• bvs23bkv33 (unregistered)

    first comment posting timeout, comment overflow

  • 3stdev (unregistered)

    try { Frist(); } catch (Exception e) { Secnod(); }

  • LCrawford (unregistered)

    The synchronous call is not automatically bad - it depends on the program design. It could be a background thread, and there may not even be a user waiting on immediate results.

  • someone (unregistered)

    I'm guilty of the "if unsuccessful, recurse" methodology. I was going to justify why mine was not as bad as this, but no. I realise now it's bad no matter how you approach it.

  • (nodebb)

Using recursion for this is silly; however, sometimes you do need to try an operation a few times before it succeeds. Just yesterday I had to add retry logic to an Outlook integration program, because an operation sometimes fails for no particular reason, and the same operation succeeds if retried.
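A bounded retry loop is the usual shape for this. A minimal sketch in Python (the helper name, attempt count, and delay are illustrative assumptions, not from the comment):

```python
import time

def call_with_retry(operation, max_attempts=3, delay=0.5):
    """Run operation(), retrying up to max_attempts times on failure.

    Uses a plain loop, so the stack stays flat no matter how many
    retries happen; re-raises the last error once attempts run out.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as e:
            last_error = e
            if attempt < max_attempts:
                time.sleep(delay)
    raise last_error
```

Unlike the recursive version from the article, exhausting the retries surfaces the real exception instead of a StackOverflowException.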

  • Damien (unregistered)

You're failing to realise how clever this code is. It has an automatic limit on the number of retries it attempts. True, it throws a non-obvious StackOverflowException there instead of something more relevant, but no additional code was required. (And of course, the specific limit it applies isn't controllable)

  • King (unregistered)

    We need to see what is hidden in the removed lines where "//Some setup work" is placed. This is maybe only a Minor WTF.

  • Brian (unregistered) in reply to someone

    Hey, nothing wrong with recursion, provided there's some kind of reasonable and deterministic terminating condition (retry count would be a good option in this case). It's infinite recursion that's the WTF.

    Also, I found that error message particularly amusing, like it's the poor server's fault that the code fails. Bad server! No biscuit!

  • alexdelphi (unregistered) in reply to tahir_ahmadov

    Well, I can imagine a case of asynchronous recursion on top of a homemade message loop... when you can't use iteration but register a promise of sorts to execute the function the next time

  • (>'-')> (unregistered)

    Agree, this is a perfectly reasonable approach. All that is needed is a limit on the number of retries, which could be added by measuring the maximum stack depth on startup and only making the request every d/n calls - either by counting stack frames in an additional parameter or a global variable, or by using random numbers and setting the probability of the request being made according to the depth of the stack.

  • giammin (unregistered) in reply to someone

    "if unsuccessful, curse"

  • snoofle (unregistered) in reply to giammin

    "if unsuccessful, curse"..

    That's only on the FIRST failure. After that, you REcurse...

  • King (unregistered) in reply to snoofle

    Is moderation not working here anymore? The F word was used!

  • (nodebb)

    But... isn't this exactly how any call going through the Ethernet protocol works now? I was told that, when I tell my browser to go to foo.bar.com, it tries to send a request, and if something is busy, it waits a bit and tries again. If still busy, it waits longer and tries again... and from the Ethernet routing point of view, forever. It's only the code in my browser that decides it should warn me that things are taking way too long.
    Or is there some built-in Limit_Retries in the Ethernet part of the game?

  • Your Name. (unregistered)

    What does the runtime have to do with tail recursion? An optimizing compiler ought to be able to recognize it at compile time, at least in the current case, so it's the responsibility of the compiler to make it go away – i.e., to convert it into a loop – at compile time.
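A quick illustration of the point, in Python, whose runtime does *not* eliminate tail calls, so the recursive form really does consume one stack frame per iteration, which is exactly the failure mode of the article's code:

```python
def countdown_recursive(n):
    # Tail-recursive: the recursive call is the last thing the function
    # does, but CPython still pushes a new stack frame for every call.
    if n == 0:
        return 0
    return countdown_recursive(n - 1)

def countdown_loop(n):
    # The loop a tail-call-optimizing compiler would effectively produce.
    while n > 0:
        n -= 1
    return 0
```

`countdown_loop(10**6)` completes fine, while `countdown_recursive(10**6)` blows past CPython's default recursion limit (around 1000 frames) with a RecursionError, the Python analogue of the StackOverflowException discussed above.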

  • Brian Boorman (google) in reply to cellocgw

    That's not how Ethernet behaves at all. You are thinking about TCP/IP - you know, that protocol that is /sometimes/ transported on an Ethernet link.

  • sizer99 (google)

@cellocgw: Two issues here. First, if Ethernet detects a collision it will 'randomly' back off and retry, but only up to 16 times. However, at the level of accessing a website you're really talking about TCP/IP timeouts and retries - these also have a limit, though it's set by your TCP/IP stack, not by the standard.

    Anyhow, the bigger thing is that they don't retry /recursively/ like in this example. They just do a loop. The difference is between: 'bool xmit_with_retry( packet ) { if( !xmit( packet ) ) xmit_with_retry( packet ); }' and 'bool xmit_with_retry( packet ) { for( int i = 0; i < MAX_TRIES; ++i ) { if( xmit( packet ) ) break; } }' (forgive my horrible code, it's just a quick example). The problem with the first is that there's no limit to how many times it will retry, and every retry eats up more space on your program stack until eventually the program crashes.

    Addendum 2018-11-15 14:48: Edit: And whoops, I didn't return a bool, but hopefully you get the point.

  • someone (unregistered) in reply to Brian

Nope, it was infinite. I think it ran some pseudorandom process, and if that didn't generate good results (some value was duplicated, which was somewhat unlikely), it would try again by calling itself.

  • (nodebb)

    C# compiled explicitly for x64 will emit code that does tail recursion.

  • Kaewberg (unregistered)

I usually create an ExponentialStandoff class: new ExponentialStandoff(() -> webServiceCall()).perform()
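The class named above isn't shown, but a minimal Python sketch of what such a wrapper might look like (the class name comes from the comment; every parameter, default, and method detail here is an assumption):

```python
import time

class ExponentialStandoff:
    """Hypothetical sketch: wraps a callable and retries it with
    exponentially growing delays, up to a fixed number of attempts."""

    def __init__(self, action, max_attempts=5, base_delay=0.1):
        self.action = action
        self.max_attempts = max_attempts
        self.base_delay = base_delay

    def perform(self):
        for attempt in range(self.max_attempts):
            try:
                return self.action()
            except Exception:
                if attempt == self.max_attempts - 1:
                    raise  # out of attempts: let the last error propagate
                # double the wait each time: base, 2*base, 4*base, ...
                time.sleep(self.base_delay * (2 ** attempt))
```

Usage would then look like `ExponentialStandoff(web_service_call).perform()`, mirroring the one-liner in the comment.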

  • Ashley Sheridan (unregistered)

    Inception: that word does not mean what you think it means...

  • Little Bobby Tables (unregistered)

    What I particularly like about this is:

    "BAD SERVER! No biscuits for you!"


  • Debra (unregistered)

    "Try Try Try" was a Smashing Pumpkins song. I always think about it when doing exceptional stuff.

  • smf (unregistered)

    there are two types of people, those who don't like recursion and there are two types of people

I only use recursion when forced to, and that has served me well. Using recursion when you aren't forced to is always a mistake.

  • (nodebb)

    I need to remember to un-"fix" code I did recently that retried using an alternate IP address if the first didn't work out....

  • riking (unregistered)

    This is an excellent way to get "thundering herd" problems and make it impossible to recover from server downtime.

    If your server normally gets 1 query per second from this code, and all servers are down for 60 minutes (it was DNS), you will get 3,600 queries all at the same time when it starts to recover.
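One standard mitigation for the thundering herd is to add random jitter to the backoff delay, so recovering clients spread themselves out instead of retrying in lockstep. A minimal "full jitter" sketch (function name and defaults are illustrative):

```python
import random

def full_jitter_delay(attempt, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-based):
    a uniform random value up to an exponentially growing, capped
    maximum, so no two clients are likely to retry at the same moment."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

With a fixed schedule, 3,600 stalled clients all fire at once when the server comes back; with jitter, their retries smear out across the whole backoff window.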

Leave a comment on “Tryception”
