Admin
I was working on a DB for an online game, and I had a problem where the MySQL DB on the server I was working on would sometimes, completely at random, refuse a connection.
It's never done it 4999 times in a row though.
Admin
I have ASP pages that make several attempts to connect to a database, sleeping a second between each failed attempt, and finally connecting to a read-only backup of the database if it still fails. It's an Access/Jet database though. They connect with the adShareDenyNone option for shared access, but it doesn't always do the job.
I also built my own message queue, like MSMQ, with a retry queue and double-post checking for guaranteed single delivery, but written from scratch, in JavaScript. I called it the "Data Link". It's used by several of our colocated sites to synchronize (Access/Jet) all their database updates and alerts to each other, with extra care to avoid key collisions. I suppose it's a complicated WTF in retrospect, but it all works. I wrote it all a few years ago, while still in college.
Admin
Unless a deadlock or some other error prevented the delete from functioning.
Admin
This is a mighty fine WTF. I quit my first programming job because a 'Robert' of a boss kept blaming me for his bugs.
Admin
Well, for one thing, if you SELECT * you're returning all the columns when you really don't need them all.
Admin
Clearly the conditions have changed. If a process fails because the data is not ready and you retry when the data is ready, you are not doing the same thing. This is not to nitpick and say that the change in the time domain precludes any chance of repetition. If the process is done 1000 times before the data is ready and once after the data is ready, then you have 1 failed attempt, 999 insane attempts, and one successful attempt.
Admin
I would of course, always toss and turn at night wondering if MAYBE it woulda deleted on the 5001st try.
CAPTCHA: perfection
Admin
We use our own session handling in our PHP apps. For larger apps, it's often better to be able to explicitly start/end sessions in code, and not rely on PHP to save/load for you. As for storing it in a table - I've never seen anyone NOT do so. Why would you want to store the info in a gazillion little files on the web server? Storing session info in a DB table has a lot of benefits, not the least being that you can then generate statistics on the sessions.
captcha: captcha
That's awesome.
Admin
Reminds me quite a bit of those who post here at the DailyWTF with statements like "Of course it's a WTF, it was written in _____ ." i.e., people who make excuses and never take responsibility for their own crappy code. Unfortunately, that's way too common in this industry. It's always someone else's fault.
Admin
Actually, that's how I regularly do existence checks in PL/SQL. I know it's not optimal for performance, but it hardly matters. (In those cases where it could matter, I use something else.) Simple reason: the alternatives require more lines of code.
Admin
I can't imagine that you would ever have duplicate session IDs, because if you did there'd be no secure way to differentiate between two distinct computers with the same session.
It'd make sense that s_user_id is indexed as well, so MySQL wouldn't perform a full table scan. ;)
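The index point is easy to check. A minimal sketch in Python with sqlite3 (the table and column names are taken from the article's snippet; the index name is made up) showing that an indexed lookup avoids a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (s_session_id TEXT, s_user_id INTEGER)")
conn.execute("CREATE INDEX idx_user ON sessions (s_user_id)")

# EXPLAIN QUERY PLAN reports whether the lookup uses the index
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM sessions WHERE s_user_id = ?", (42,)
).fetchall()
# The detail column of the plan mentions the index when it is used
uses_index = any("idx_user" in row[-1] for row in plan)
print(uses_index)
```

The same kind of check works in MySQL with `EXPLAIN SELECT ...`, where the `key` column names the index chosen.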
Admin
Null or 1 or 0 or whatever can be used.
Select null from sessions where s_user_id = input_id;
You are just looking for a row returned, and do not care what is in it.
CAPTCHA Test: billgates
I passed!
Admin
One word - Brilliant!!
Admin
Luck.
Admin
Your statement is a non sequitur. MOST databases do NOT require a commit outside a transaction, and NONE of them show rows after a delete on the same connection!
Steve
Admin
That doesn't work on most databases though, and some have NO relative instruction. ALSO, both would generally take the same amount of time to run ANYWAY!
Steve
Admin
Ermmm.....what happens if I have 2 or more sessions open with this application at the same time, and close one?
Stupid web browser logging me out at random. Must be an IE bug.
Admin
We sometimes use DHT to store sessions across a server farm. But only in applications that need very high throughput. It's way faster than using a DB, assuming the number of nodes is relatively small (no more than 8 nodes) and the size of data stored in the sessions doesn't exceed 4k.
Anyway, storing sessions in files on a one-file-per-session basis is not a good idea if the number of concurrent sessions is above 10,000.
Admin
Enough said.
Admin
So, just running the DELETE (without checking the SELECT first) wasn't good enough for him?
Anyway, if you use this code to work around time-out or hard-down bugs from the database (which seems like what it's doing), then I would recommend a sleep() inside the loop. Else you'll be wasting a lot of CPU cycles, and still only leave a few seconds window for the DB to come back up.
captcha: quality!
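The retry-with-sleep suggestion above can be sketched like this (Python with sqlite3 for brevity; the helper name is hypothetical, and the table/column names come from the article's snippet):

```python
import sqlite3
import time

def delete_with_retry(conn, user_id, max_attempts=5, delay=1.0):
    """Retry a DELETE a few times, sleeping between failed attempts so the
    database has a realistic window to come back, instead of burning CPU
    on thousands of back-to-back retries."""
    for attempt in range(max_attempts):
        try:
            cur = conn.execute(
                "DELETE FROM sessions WHERE s_user_id = ?", (user_id,))
            conn.commit()
            return cur.rowcount  # rows actually deleted; 0 is not an error
        except sqlite3.OperationalError:
            time.sleep(delay)    # back off instead of hammering the server
    raise RuntimeError("delete failed after %d attempts" % max_attempts)
```

Five attempts a second apart give the database a five-second recovery window; 5000 attempts with no sleep give it almost none.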
Admin
Back when I was a consultant, one of my favorite non-development tasks was reviewing clients' existing projects and writing up why they were failing and what could be done to fix them.... I used to see guys like Robert all the time. Bitter neophytes who should never have been allowed near a keyboard. Few Roberts survived after my reports were presented to their supervisors. Either they quit, claiming the company was "out to get them", or they were shuffled off to "analysis" jobs where they could use their business knowledge without actually touching code again.... "Presentation Day" was always fun, especially when topics like "Professionalism in code comments" came up ;)
-Me
Admin
Well, to me it seems most likely that the table is locked, and reattempting the same query over and over might finally get what we want to happen. Of course, had Robert known what he was doing, he'd probably have obtained the lock first before trying anything else. But hey, you can't always do it the easy way :)
Admin
Unfortunately, that's way too common in ALL OF HUMANITY.
sincerely,
Richard Nixon
Admin
In SQL Server, if you're searching on a primary key it'll stop searching after it finds a match. A simple select should work just fine.
Admin
I don't see the problem. If the database design includes a suitable index, then " select count(*) from foo where pri_key = 'bar' " should execute efficiently. Especially when you're testing if something is in the index or not. I agree that it's pointless to bring back columns that you don't need.
Catchpa: "null". Should I have left the box blank? ;)
Admin
Personally, if I needed to do a series of database operations, then I'd put them into a stored procedure/function, then execute that from the PHP. That can deal with the issue of checking for locks, or sit and block while the table is unavailable for delete.
It seems far more efficient to make one call to the database server, which does the hard work, than making 5000 repeated calls.
Admin
But you'd still have to handle the case where your database is unreachable, and that would involve retrying of some kind. Of course not retrying 5000 times without sleep().
Admin
Maybe they're using a really weird database where queries fail if a record or table is locked... Only in that case do I see the point of retrying.
The WTF's that I've found:
• if $user_id is not set or not a number, all queries fail
• where is the initialization of $iTimes? One more loop like that in the code and $iTimes may already be 5000 when he gets to this one...
• why add . "" to the end of the query?
• why call the last if to continue at the end of a loop, doing nothing there will also continue the loop...
• why not set $bDeleted to true if the delete query executes fine?
• why check if the record exists at all? You can always try to delete a record - even if it doesn't exist. "DELETE FROM sessions WHERE s_user_id = 1 AND s_user_id = 2" will even execute.
Coditor
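The issues listed above (unescaped $user_id, uninitialized counters, the pointless existence check) all disappear in a straightforward rewrite. A minimal sketch, here in Python with sqlite3 rather than PHP/MySQL; the helper name is made up, the table and column names come from the snippet:

```python
import sqlite3

def delete_user_sessions(conn, user_id):
    """Validate the id, bind it as a parameter instead of concatenating it
    into the SQL string (no injection), and issue a single DELETE.
    Deleting zero rows is a perfectly normal outcome, not a failure."""
    if not isinstance(user_id, int):
        raise ValueError("user_id must be an integer")
    cur = conn.execute("DELETE FROM sessions WHERE s_user_id = ?", (user_id,))
    conn.commit()
    return cur.rowcount
```

In PHP the equivalent would be a prepared statement (mysqli or PDO) with a bound integer parameter.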
Admin
Dunno if others have said this or not, since I didn't read all of the comments yet...
Look, this guy is doing a two-step process when he only needs the second step. The key point is that there is _nothing_wrong_ with issuing a delete whose 'where' condition matches no rows. Just do the delete and be done with it. If the DB tells you that you affected 1+ rows, then hooray for you. If it tells you that you affected 0 rows, you're not going to magically delete more the second time or the 5,000th time.
It's the same as doing a "replace" sort of operation (update if exists, insert if not). Just do an update first. If you affect 0 rows, do an insert. If you affect 1 row, you're done. That saves you from doing a select count(*)-type query.
Admin
In general, it's not true. If the code is part of a longer transaction, it can get stuck in a deadlock. There is no way to handle a deadlock other than doing a ROLLBACK of one of the deadlocked transactions and rerunning that transaction. This is a normal process that most professional application servers can do automatically. The transaction may also fail due to the DBMS being restarted. This is why a few simple repetitions may not be sufficient if we run an HA system. It is reasonable to wait before each attempt. These parameters (repeat count and repeat delay) are usually configurable by the AS admin. In some cases it is not a completely insane idea to set the repetition count to more than 100.
However, you are right that Robert's code wouldn't help, because the DB engine would ignore all 4999 executions of the query until the end of the transaction. Such protection code should be placed on top of the transaction, not inside it.
Though this code is a real WTF (because of the SQL injection), sometimes Robert is right. Some third-party code is a piece of crap, and even its creators know it. We had an issue with the JBoss Application Server's JMS JDBC 3 provider that crashed randomly once or twice a week under heavy load. And we finally found in the FAQ (not in the official documentation) a sentence which meant: "Don't use this component. It's a piece of crap. It is known to break under heavy load. It is scheduled for removal in the next version. Use the JDBC 2 version instead." So it is reasonable to code the mission-critical parts of the system yourself and not rely on third-party components.
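The point about placing the retry on top of the transaction rather than inside it can be sketched like this (Python with sqlite3; the helper name and parameters are illustrative):

```python
import sqlite3
import time

def run_transaction(conn, work, attempts=3, delay=0.5):
    """Retry at the transaction level: on failure the whole transaction
    is rolled back and rerun as a unit, which is the only sound way to
    recover from a deadlock or a restarted DBMS."""
    for attempt in range(attempts):
        try:
            with conn:            # commits on success, rolls back on error
                return work(conn)
        except sqlite3.OperationalError:
            time.sleep(delay)     # wait before rerunning the whole unit
    raise RuntimeError("transaction failed after %d attempts" % attempts)
```

Retrying a single statement inside a doomed transaction, as Robert's loop does, accomplishes nothing: the engine keeps giving the same answer until the transaction ends.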
Admin
Anonymous wrote the following post at 09-21-2006 2:37 PM:
I would of course, always toss and turn at night wondering if MAYBE it woulda deleted on the 5001st try.
Good point. Maybe it would be a good idea if this loop didn't have an arbitrary limit of 5000 iterations and would just continue on indefinitely. Heck, we could even start a separate thread to continue trying.
Actually, why not start up a bunch of threads (off the top of my head, I'm thinking....5000?) that all repeatedly try to delete that record. I mean, one of them is bound to succeed, right?
Admin
By default, MySQL won't accept more than 100 simultaneous connections.
Admin
I just wonder if Hungarians actually code that way and, if so, how that habit started in the first place.
if (see(Ceausescu.class)) {
    Gulyásleves gGoulash = new Gulyásleves(extraMeet);
    throw gGoulash;
}
Admin
Personally, I love the comment at the top of the code block... it really does offer so much insight into the brain of this clown.
Admin
I think there is actually a third programmer who never wrote a bug: Paula!
Admin
It usually passes on the 4999th time.
Admin
An optimized WTF.
$iTimes = 0; // let's define this in case it's been defined elsewhere.
$bDeleted = false; // let's define this in case it's been defined elsewhere.
$imax = 5000; // let's define this too.
if(!$user_id)
{
trigger_error( "No User ID.", E_USER_ERROR );
die();
}
// no use re-defining this each loop;
$sSQL = "SELECT 1 FROM sessions " .
"WHERE s_user_id = " . $user_id . "";
// no use re-defining this each loop either;
$dSQL = "DELETE FROM sessions " .
"WHERE s_user_id = " . $user_id . "";
// now the loop
while ( $iTimes < $imax ) // no need for $bDeleted comparison here
{
$iTimes++;
$dRsrc = $qryObj->Exec( $dSQL ); // try the delete first
$sRsrc = $qryObj->Exec( $sSQL ); // then trust it
if ( !$dRsrc || !$sRsrc )
{ continue; //query failed, try again
}
// Check if deleted
if ( $sRsrc->NumRows() == 0 )
{
$iTimes = $imax; // Robert would be proud of this pointless safety catch
$bDeleted = true;
break;
}
}
// Now we bail if needed
if ( !$bDeleted )
{
trigger_error( "Delete failed.", E_USER_ERROR );
die();
}
Admin
(repeat 4,999 times)
which would return a 1 for every matching row...
I don't know what RDBMSes you are using, but most of those that I work with (e.g. PostgreSQL, Firebird SQL) don't even have commands "outside of a transaction".
Also, the SELECT may very well return rows after the DELETE was executed unless the transaction isolation level is set to serializable. All it takes is another concurrent committed or even uncommitted INSERT.
Admin
Serendipity.
Admin
drop the table (from orbit). it's the only way to be sure.
alternative: burn it with fire.
captcha: no thanks!
Admin
I worked on an application that did this. It was designed this way so that other applications could 'link into' your existing session by passing your credentials (user ID and a session password, plus other stuff to fill out the destination form, or whatever action was necessary). When the page loaded, a DB call was made to check the session password against the user ID. If the session password was an old one (they expired after 8 hours) or was incorrect, you would be forwarded to the login page, with a variable set to forward you to the original target page after you re-verified. This way, you could go to any computer (or Citrix terminal in our case) and get to a page from your session by opening a link.
Admin
Having worked with the abomination that is PHP, I can sympathise fully with the guy.
Sure the 5000 is just a 'work you bloody stupid thing' moment, but it obviously didn't work the first time, otherwise why would he have bothered?
Not that there's much fantastic about the code that is there, but there's plenty worse out there.
Admin
IF EXISTS (SELECT COUNT(s_user_id) from sessions WHERE s_user_id = input_id)
or
SELECT TOP 1 s_user_id FROM sessions WHERE s_user_id = input_id
would be much better. Otherwise you are counting all the rows in the table when you really only need to know if there is one row there.
Probably doesn't apply in this case, but some of the tables I deal with have over 100 million rows, so needlessly counting rows is just bad form.
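A stop-at-first-row existence check like the ones suggested above can be sketched as follows (Python with sqlite3, which uses `LIMIT 1` rather than SQL Server's `TOP 1`; the helper name is made up):

```python
import sqlite3

def session_exists(conn, user_id):
    """Existence test that stops at the first matching row instead of
    counting every row for the user. The selected value is irrelevant;
    only the presence or absence of a row matters."""
    row = conn.execute(
        "SELECT 1 FROM sessions WHERE s_user_id = ? LIMIT 1",
        (user_id,)).fetchone()
    return row is not None
```

On a table with millions of matching rows, `COUNT(*)` must visit all of them, while this form stops after the first.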
Admin
You misspelled "for any self-consistent recursive axiomatic system powerful enough to describe the arithmetic of the natural numbers (Peano arithmetic), there are true propositions about the naturals that cannot be proved from the axioms." - Kurt Gödel
It's an easy mistake to make.
Admin
It looks to me like it is just fine with commands outside transactions... Did you mean that protocols like JDBC don't allow commands outside transactions?
Admin
I might be wrong (not knowing which RDBMS you are using) but I'd guess that the first statement always evaluates to true.
Admin
Yup, definitely has to be a user error. Stupid users!
Admin
Enlightenment.