Admin
Frist
Admin
My eye twitched when the code sprintf()'ed an unsigned long with %d. Also, TimeToExecute = -TimeToExecute; doesn't make any sense, seeing as how TimeToExecute is unsigned long.
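To spell out the twitch (a quick sketch, assuming the 32-bit unsigned long that Windows gives you; values made up):

#include <stdio.h>

int main(void)
{
    unsigned long TimeToExecute = 25000UL;   /* 25 s, in milliseconds */

    TimeToExecute = -TimeToExecute;          /* wraps modulo 2^32: 4294942296 */

    printf("%lu\n", TimeToExecute);          /* matching conversion: 4294942296 */
    printf("%d\n", (int)TimeToExecute);      /* reinterpreted as signed: -25000 */

    /* Passing the unsigned long straight to %d, as the article's code does,
     * is undefined behavior, even though it usually "happens to work" there. */
    return 0;
}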
Admin
I love when the code takes a negative time to run; I can run programs before my PC is even started.
Admin
Sidebar: always, always, when you work with GetTickCount() (or millis() in other systems, etc.), get the absolute value of the time difference, since these counters tend to roll over when INT_MAX (or similar) is reached. Yes, non-Windows systems often stay up for longer than the lifetime of a GetTickCount cycle (around 50 days if Int32 is applicable). No developer or tester will ever test a scenario for more than 50 days, but an end-user will...
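To make the rollover concrete (made-up tick values; uint32_t standing in for a GetTickCount()/millis()-style counter):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t before = 0xFFFFFF00u;   /* sampled just before the ~49.7-day rollover */
    uint32_t after  = 0x00000200u;   /* sampled just after the counter wrapped to 0 */

    uint32_t elapsed = after - before;   /* modulo-2^32 arithmetic: 0x300 */
    printf("%u ms\n", elapsed);          /* prints 768, the true elapsed time */

    /* A signed subtraction would come out hugely negative here, which is what
     * tempts people toward abs() or other ad-hoc fixes. */
    return 0;
}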
Admin
Best guess on the TimeToExecute = -TimeToExecute is an ill-thought attempt to handle the clock wrapping during the measured time (which it does every 50 days of uptime, since GetTickCount() returns DWORD). Especially ill-thought as the correct solution is... do nothing, as unsigned arithmetic handles it correctly already.

Steve is right about the unsigned long vs. %d being twitch-inducing, although despite it being clearly Wrong, it won't cause a problem in practice. I'm inferring from GetTickCount() that this is running on Windows, which means long and int are both 32 bits. And the signedness isn't a problem until a function runs for 25 days... or until the negation issue occurs, of course. Which means that this almost cancels out the previous bug; you just have to read the output knowing "times greater than 20 seconds get highlighted with a leading dash."
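A rough reconstruction (not the actual code from the article; a cast is added to keep the printf call well-defined) of how that reads in practice:

#include <stdio.h>

typedef unsigned long DWORD;   /* 32 bits on Windows */

static void report(DWORD TimeToExecute)
{
    char buf[32];

    if (TimeToExecute > 20000)             /* "long-running": over 20 seconds */
        TimeToExecute = -TimeToExecute;    /* becomes 2^32 - TimeToExecute */

    sprintf(buf, "%d", (int)TimeToExecute);   /* %d reads the bits back as signed */
    puts(buf);
}

int main(void)
{
    report(1500);    /* prints 1500 */
    report(25000);   /* prints -25000: the "leading dash" for anything over 20 s */
    return 0;
}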
Admin
I am so confused. I imagine I'm not the only one.
Admin
Someone was trying to make entries stand out in the log so they could search for them and find things to optimize in execution time.
Admin
Jibe (to be in agreement or concord with) is the word you're thinking of, not jive.
Admin
I don't even know what GetTickCount() is - it ain't the high performance counter; that has to be handled differently.
Addendum 2023-09-05 12:03: https://learn.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter
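For reference, the usual pattern with the high-resolution counter is something like this (minimal sketch, standard Win32 calls):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;

    QueryPerformanceFrequency(&freq);   /* counts per second, fixed at boot */
    QueryPerformanceCounter(&start);

    Sleep(50);                          /* stand-in for the work being measured */

    QueryPerformanceCounter(&stop);

    double ms = (double)(stop.QuadPart - start.QuadPart) * 1000.0
              / (double)freq.QuadPart;
    printf("%.3f ms\n", ms);
    return 0;
}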
Admin
And to top it off, it contains one of my favorite bug-magnets, TWICE. Did we (as an industry) learn NOTHING about brace usage from Apple's "goto fail"?
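The pitfall, in a self-contained form (hypothetical checks, not Apple's actual code):

#include <stdio.h>

static int check_a(void) { return 0; }   /* 0 = success */
static int check_b(void) { return 1; }   /* non-zero = failure that should be caught */

int main(void)
{
    int err;

    if ((err = check_a()) != 0)
        goto fail;
        goto fail;               /* the indentation lies: this runs unconditionally */
    if ((err = check_b()) != 0)  /* never reached, so the failing check is skipped */
        goto fail;

    printf("all checks passed (err=%d)\n", err);
    return 0;

fail:
    printf("bailed out early (err=%d)\n", err);
    return 1;
}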
Admin
There must be something else going on as well.
As far as I can see, this code would report correctly if TimeToExecute is less than 20 seconds, so that doesn't really fit with "rarely ever any long-running methods" unless their idea of long-running is only over 20 seconds.
Admin
"Hi, this is your user. This function took 30 seconds to respond."
"That's weird. Our stats say none of our functions takes more than 20 seconds"....
Some time later
..."oh, there's one here that took -30 seconds"
Admin
It might not make any sense, but it's perfectly valid C. -20,000, as an unsigned long, has a value somewhat in excess of 4,000,000,000.
Addendum 2023-09-05 17:26: Also, I'd like to see how/where lasttickcount is set up.
Admin
Clearly someone was getting raked over the coals because the app wasn't performing well. They were told, in no uncertain terms, that if any performance measurements showed a time exceeding 20s, then their head would be on the chopping block.
Once again, we learn that any blindly-used metric can be gamed.
Admin
Ah, good one -- clearly, that was a feature, not a bug!
Admin
I only said that it doesn't make sense.
Admin
What was most telling about this is that it shows that Apple had a complete lack of a proper test suite or procedure for ensuring important security code actually worked correctly.
It shows just how little faith one can have in closed source software, even from a massive corporation, when that same sort of mistake would be very unlikely to make it into a stable release of any open/cooperative software like openssl and gnutls (that's not to say there are never bugs that come up, but not this sort of silliness).
Admin
That's not how management works. All they learned is that they need to do better damage control the next time it happens: they will spend millions more on PR and marketing and not a single cent on improving quality - maybe some harsh words to overworked, underpaid devs ;-)
Admin
What is remarkable to me is how much faith people put in open source software. Especially from a security standpoint. I mean, the entire argument is frankly ludicrous.
I mean, just think about it. If your code has a serious reason to demand security, who is more likely to be digging through that code? Is it going to be some hobbyist software dev using his free time to help the community like the good samaritan that he is, finding and fixing bugs and vulnerabilities, or is it going to be a bad actor whose paid job is to dig through code and find those same bugs and vulnerabilities to exploit?
I know where I'd bet my money. And it ain't on people being good.
Admin
I guess you've never looked at projects like openssl, gnutls, and so many others that make up the FOSS landscape. Or GCC, clang, and many other dev tools. Or the Linux kernel itself. Vulnerabilities and issues are routinely found, worked on, and resolved through the community efforts.
There can be bad actors, but they have a lot of hurdles to get over when there are extremely mature tools for testing and comparing, and given how easy it is to look at all of the changes in a pull request and verify it all well before a decision to commit it to the main code base is made.
I am not saying that all closed is necessarily bad, but it is clear that it has a lot more disadvantages compared to what well-vetted cooperative efforts have shown.
Admin
I am not talking about making malicious modifications. If you find a bug to exploit, why would you report it to the developers?
And while there are some open source projects that are large and full of contributors, like the ones listed, those are the exception and not the norm. The vast majority of open source software is some dev's private project sitting on GitHub with maybe one or a handful of people contributing. And in those conditions it's all too easy to just read the code, find a bug to exploit without ever reporting it to anyone, and have a lot of time to do so before anyone notices.
And yet because of the few ones you listed, people have an almost cult-like belief that all open source is automatically safe.
Admin
No software can be assumed to be safe, and I have never seen anyone say that OSS is automatically safe; most people in that scene understand that that takes work and doesn't just come free.
That said, small projects are less likely to be targeted by virtue of there being little to nothing to gain from exploiting something with a small install base, and large projects tend to have enough security minded people that exploits are often found and patched.
So while no software can ever be assumed to be safe, OSS does have a well-deserved reputation for being safer overall, through the efforts of cooperative communities that have often led to discoveries and resolutions of problems far more rapid and effective than those of a lot of commercial and closed source offerings. Is it always the case? No, but it is the prevailing reality in a great many cases.
Admin
Small projects don't necessarily have a small install base. Remember left-pad?
Addendum 2023-09-15 00:30: Bah, didn't separate my reply from the quote enough.