Admin
Assuming that the lack of comment was in the original, how much did it cost to work out WTF this code was intended to do?
Admin
This is a well known use of select(), e.g. https://stackoverflow.com/questions/3125645/why-use-select-instead-of-sleep https://howthisstuffworks.blogspot.com/2007/11/how-to-sleep-using-select-call.html etc. This is even in the select() documentation (man select).
whether it's WTF or not depends on the context. If one needs to wait for tens or hundreds of services sequentially, then 750ms on each will matter a lot.
Admin
Presumably the cost of spooling up the perl(1) process for each invocation is considered negligible? Depending on the system's workload, it could easily take more than 500ms just to load the interpreter, parse the select call and start executing the program. It smells, to me, like someone was trying to show off their l33t skillz.
Admin
I might buy that if this was perl code. But it's not. It's shell code that calls into Perl (and creates a dependency that the system have Perl installed).
Any excessive delay could be handled by shutting down all the services in parallel and then checking them sequentially. Actual delay should only then be longest-service-shutdown-time plus 1 second (plus negligible time to check each service's status). Hardly a time waster for the user.
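The parallel-then-check approach can be sketched in a few lines of shell. This is a hedged sketch: stop_service and service_is_running are hypothetical stand-ins for whatever the real script uses (e.g. stopsrc/lssrc on AIX), stubbed out here so the sketch runs on its own.

```shell
#!/bin/sh
# Placeholder implementations -- replace with the real stop/status commands.
stop_service()       { : "stopping $1"; }
service_is_running() { false; }   # stub: pretend everything stopped already

SERVICES="svc_a svc_b svc_c"

for s in $SERVICES; do
    stop_service "$s" &     # kick off every shutdown at once
done
wait                        # returns once all stop commands have exited

# Confirm each service is down; total wall time is roughly the slowest
# shutdown plus at most one whole-second polling interval.
for s in $SERVICES; do
    while service_is_running "$s"; do
        sleep 1
    done
done
echo "all services stopped"
```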
Admin
More likely it depends on whether or not the perl binary and all of its shared libs and other dependencies are in the disk cache. On my machine right now, I just ran
perl --version
from a cold cache, and it took 425 ms to run the first time. The second time, once everything was loaded into the disk cache, it took a much more reasonable 10 ms.
Admin
And since this will be potentially running every 750 ms, that will keep perl in the disk cache much of the time. So the cache misses should be rare.
It also takes time to load the sleep binary into memory.
Admin
If everything is using sleep like they should, sleep is probably already loaded into memory. Also, I expect that the sleep binary is much smaller and faster to load than perl is.
Admin
If you were using "ksh", then sleep is a shell built-in...so very efficient sleeping.
Admin
They are surely doing it wrong.
Everyone knows that in order to sleep with subsecond accuracy you should use the Time::HiRes module!
So of course the sleep should be:
perl -e 'use Time::HiRes qw(sleep); printf("Entering sleep\n");sleep(0.25);printf("Exiting sleep\n")'
(Prints added for proper enterprisiness :-) )
Yours Yazeran.
Admin
From the Winsock Lame List at https://tangentsoft.net/wskfaq/articles/lame-list.html
Admin
So this WTF contains three parts:
1. It might surprise the younglings, but back in the day everyone and everything used Perl, and if you use any Debian-based system you still do, too. AIX has always been rather old-school, so it would not surprise me if Perl was a hard system dependency. I'd frown more upon using Python than Perl, simply because of its availability.
2. That said, the OP assumes that the version using Perl came first and was some kind of premature optimization, but what if the developer actually tested it, and maybe even got user feedback, so that what we see is already the result of a proper development cycle? Then it would be a sign of being on the wrong side of the Dunning-Kruger curve to criticize this. Since we don't know, we should probably ignore this part.
3. If we accept that using Perl is fine and that a 0.25-second sleep is necessary, you'd have to provide a better solution than the given one to call the select-based one a WTF. If you can't, you might be calling the best possible solution stupid, which again puts you on the wrong side of the Dunning-Kruger curve.
Admin
That's only lame because Win32 select() is lame. Windows select() is only for sockets. You can't use it on say, a file handle (CreateFile).
On POSIX compatible systems, select() is a general system call - it handles any file descriptor, whether it was opened using open(), creat(), socket(), pipe() or other call that returns a file descriptor (int fd). (And yes, you can use it on files for non-blocking).
As such, it's generally understood that using select() for sub-second sleeps is a supported function of select(). BSD systems should use poll() instead for the same effect.
Admin
@Steve_The_Cynic: As you point out, this is for Winsock, meaning Windows, which has a millisecond-level sleep API. The original code was for Unix, which frequently doesn't, and specifically for AIX, a vaguely Unix-compatible OS from IBM, which definitely doesn't unless they've added it recently.
Admin
It's supported even on Windows, in the sense that it works and does what you expect it to do. On the other hand, if you can't sub-second sleep on a UNIX-type system without abusing I/O multiplexer calls, well, that, too, is inexcusably lame.
Admin
Of course. If you're willing to rewrite AIX to use systemd, perhaps. Practically, the OS shuts down services the way it does, you cannot change it.
Admin
Not even sure why you need to wait for the service to shut down. On Linux you can delete a running service with no adverse effects anyway. Files only get truly deleted once no process is using them.
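The "truly deleted" behaviour is easy to demonstrate from the shell: an unlinked file stays readable through any descriptor that was open before the rm.

```shell
#!/bin/sh
# Open a descriptor on a file, delete the file, then read through the
# still-open descriptor; the data is reclaimed only after the last close.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"    # fd 3 now holds the file open
rm "$tmp"         # removes the directory entry only
cat <&3           # prints: still here
exec 3<&-         # last reference gone; now the blocks are freed
```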
Admin
The OP here: thanks to all who responded in a spirit of devil's advocacy, but I can confirm that:
Admin
File locks, modified shared object libraries, sockets, sub processes, etc etc etc.
I love the way people like to say "It works on $DIFFERENT_OPERATING_SYSTEM" as though it's a valid point. This is AIX. It's a flavour of Unix. It probably works the same.
Admin
"If one needs to wait for tens or hundreds of services sequentially, then 750ms on each will matter a lot."
Only if you actually have to wait. If those services are independent then the chance that each of them will not be ready when you first check, but will be ready 250-999 milliseconds later, is extremely small (and smaller as the number of services, thus the significance of the wait, goes up). The scenario where waiting 250ms vs 1000 will actually matter is very much an edge case. Of course it's possible the one in question is such an edge case, but there's no reason to suppose so.
Admin
This way is explicitly recommended by perldoc -f sleep:
Time::HiRes was likely not always installed on something as arcane as AIX, which likely ran Perl 5.6 when this was written.
Using setitimer via the syscall function, with manually packed structs, and catching the signal doesn't make a nice one-liner at all and would be a much bigger WTF.