• foxyshadis (unregistered) in reply to csi235
    csi235:

    Alex Papadimoulis:
    A Netscape engineer who shan't be named once passed a pointer to JavaScript, stored it as a string and later passed it back to C, killing 30

     

    32. My brain reformatted itself upon seeing this. It just couldn't handle it.

    It's the newest excuse for interprocess pointer-passing. You're not passing a pointer! You're passing a string! That makes it all okay, and means none of the rules about pointers and processes apply. :D

    Really, the best thing about throwing away the entire clusterfuck-in-a-clusterfuck NS4 codebase was going on to repeat half the mistakes and create a whole new half to fill the gaps during Mozilla development. It's still a mess (anything incorporating a half-dozen new technologies invented solely for a single suite would be), but at least the mess has steadily diminished as over-engineering gets replaced by the newer volunteers' work.

  • (cs) in reply to Carnildo
    Carnildo:
    Nice theory.  I'm currently working on a code library that, sooner or later, is going to need to be re-written from the ground up.  It's a cross-platform compatibility library, designed around the limitations of Windows 3.1 and Macintosh System 6.  The memory allocation system is based around allocations of 64KB or less, the polling-based main event loop is incompatible with the MacOSX system of registering event callbacks, and the file-access system is based around the assumption of 8.3 filenames.  There's leftover code from the days when this library supported things like VAX VMS, or a windowed application in a character-based display, and code for handling both ASCII and EBCDIC.

    Further, the whole thing is written in object-oriented C, with all the fun that entails, and it's optimized for code size, not maintainability.
     
    Well, I just heard about the theory myself, and it doesn't seem unreasonable. If I had your codebase handed to me, I might want to rewrite it too, and I might actually do it if nobody stopped me.
    But maybe you CAN refactor and clean up the code instead of simply rewriting it.
    Does the library work NOW, on today's systems? Why break it? You can have a field day tearing out everything related to Windows 3.11, MacOS and VMS until only the bare essentials are left. Then increase the buffer sizes, adapt the model to more modern APIs, and add documentation as needed.
     
    You can probably delete faster than you can rewrite. I think it's worth a try at least. 
     
  • Olddog (unregistered)

    This is a slight stretch off topic, but I'll try to show the relevance.  According to an article today on MSNBC.com, a U.S. judge ordered a U.K.-based company, an international anti-spam organization that creates blacklists, to pay $11.7 million in damages to a U.S. direct e-mail marketing company, because it hurt the marketing company's ability to do business.

    What scares me is that web advertisers (popup ads) might use the same argument against popup blocking. Has anyone run across any articles concerning the legality of popup blocking?

  • brinke (unregistered) in reply to csi235

    Great article.  But what IS the ULTIMATE WTF?  Did I miss it?

  • jimmajamma (unregistered)

    I'll reduce the setup as I could go on for days...

    It's the early days of the browser wars and JS and Java, I think NS 4.x.

    I was tasked with doing a feasibility study for a Java applet used as a data connector back to the webserver that JavaScript could use (since JS couldn't get to the server without a page refresh) - funnily enough, the concept is just like the basis of our newest and greatest browser tech: Ajax.

    1. I got an applet running in a page, exposed a public method to JS, and added the appropriate attribute - I think RUNSCRIPT - on the applet tag.

    2. I created a button and some JS to call the function on the applet.

    3. I knew from prior apps that I could write a java applet that connected back to the server - no brainer.

    Now, since 1 and 2 worked and 3 worked independently - I "ASSUMED" (my mistake) that 1, 2 and 3 would work in conjunction - reasonable assumption I thought.

    Great - feasibility study done, I reported the affirmative results, it definitely seemed like it would work - I did everything short of actually writing the whole thing.  Come game day, I actually hook it all together, I press the HTML button, it fires the JS, it calls the applet - guess what?  Error box: something along the lines of "Java cannot initiate a network call from JavaScript - Yet."

    "YET", I love that one - "Yet".  That's really helpful.  And it screwed me and made me look incompetent.  BTW, at the time, IE performed the task perfectly - much has changed since then, of course...

    This is just the most prominent of a boat load of things like this that I ran into using Netscape back in the day.  That said, IMHO, Firefox kicks butt these days as the web development platform of choice. 

    Thanks for the memories.

  • me (unregistered) in reply to Olddog
    Anonymous:

    This is a slight stretch off topic but I'll try to show the relevance.  In an article today on MSNBC.com, A U.S. judge ordered a U.K. based company, an international anti-spam organization that creates blacklists, to pay $11.7 million in damages to a U.S. direct e-mail marketing company, because it hurt the marketing company's ability to do business.

    What scares me is that this might become an argument for web advertisers ( popup ads ) using the same argument against popup blocking. Has anyone run across any articles concerning popup blocking legality?



    If I'm running HTML code on my computer, I get to choose how to run it. A blacklist is different.
  • Garo (unregistered) in reply to me

    This is one of the best-written WTFs I have had a chance to read here!


    - Garo 

  • (cs) in reply to jimmajamma
    Anonymous:

    I'll reduce the setup as I could go on for days...

    It's the early days of the browser wars and JS and Java, I think NS 4.x.

    I was tasked with doing a feasibility study for a Java applet used as a data connector back to the webserver that JavaScript could use (since JS couldn't get to the server without a page refresh) - funnily enough, the concept is just like the basis of our newest and greatest browser tech: Ajax.

    1. I got an applet running in a page, exposed a public method to JS, and added the appropriate attribute - I think RUNSCRIPT - on the applet tag.

    2. I created a button and some JS to call the function on the applet.

    3. I knew from prior apps that I could write a java applet that connected back to the server - no brainer.

    Now, since 1 and 2 worked and 3 worked independently - I "ASSUMED" (my mistake) that 1, 2 and 3 would work in conjunction - reasonable assumption I thought.

    Great - feasibility study done, I reported the affirmative results, it definitely seemed like it would work - I did everything short of actually writing the whole thing.  Come game day, I actually hook it all together, I press the HTML button, it fires the JS, it calls the applet - guess what?  Error box: something along the lines of "Java cannot initiate a network call from JavaScript - Yet."

    "YET", I love that one - "Yet".  That's really helpful.  And it screwed me and made me look incompetent.  BTW, at the time, IE performed the task perfectly - much has changed since then, of course...

    This is just the most prominent of a boat load of things like this that I ran into using Netscape back in the day.  That said, IMHO, Firefox kicks butt these days as the web development platform of choice. 

    Thanks for the memories.


    Well, at least this worked under IE. :)

    (You might have been able to fool NS if you just copied the parameters into your applet and had another thread do the HTTP call.)

  • (cs) in reply to me

    Anonymous:

    If I'm running HTML code on my computer, I get to choose how to run it. A blacklist is different.

    Why? It's also my decision whether or not to use a blacklist for spam filtering. BTW, popup advertising at least has the positive effect of funding the website I'm using, while spam is only annoying.
     

  • jimmajamma (unregistered) in reply to VGR

    VGR, I back your sentiments 100%.  At a certain point the code becomes almost secondary to the evolved design, which is what contains the real value.  Once code gets to a certain point, in many cases, a project should just get rearchitected.  I'm a big proponent of starting with a fresh project and carefully moving existing code into it - more like a porting process (to a different language) than anything else - port to the new architecture.  I believe failing to do this is the single biggest reason that many projects fail (of all sorts - I've seen many consulting projects either die this way, or get saved from the brink by just the process I'm mentioning).  The old Netscape is a clear example of this, and your point about Mozilla is perfect - look how great a product we can get when the code is scrapped but the ideas move forward.  No offense to Netscape lovers, but it was a huge POS - the programmability back then was just atrocious.  I posted another response here with a story about one of my more painful Netscape experiences...

    One last thing, related to this idea, and I think this is a great example, and man, I wish the idiot architect who caused this problem would read this...  I worked at Lotus consulting for a few years.  They were working on a project to scan large volumes of content into Notes databases.  The original team consisted of an architect and 5 developers; they took between 6 months and a year to architect and write the code for a monstrous overthreaded design that could achieve at most 60 documents per minute.  The executable was gigantic, and maintenance was a pain in the balls.  A friend of mine, tasked with seeing if he could improve the project, rewrote the heart of the application in one weekend, mind you, and improved the performance close to tenfold.  He ultimately reduced the footprint by some huge factor - from something like 60 meg to 6 - and as you could imagine it reduced the code size as well, and therefore improved the understandability and maintainability of the code.

    He had the benefit of really understanding everything the code needed to do, so he was able to storm right at that target instead of over-architecting something that might have needed to be something else - that was his one true advantage.  It didn't hurt that the original architect had a huge superiority complex and was full of himself.  But just imagine that - from 6 months with a full team, to a weekend with one good developer, and the output was so much better...
     

  • jimmajamma (unregistered) in reply to biziclop

    I think I researched that, but the problem then was how to get the results back to JS.  My memory is foggy, but I don't think the DOM was developed enough back then to do the things we needed, and that we obviously can do today with Ajax technology.  Also, I think at the least performance was pretty important; we were counting in 10ths of seconds, so any extra delay was bad.  It's funny: back when the web was "new" and people were converting/comparing their client/server processes to the web, the delay was a huge deal - now it seems secondary - so even if the web version is slower, the other benefits clearly outweigh the disadvantages of slow page refreshes...

  • Mick Horan (unregistered)

    Hi Blake,

    WOW, cofounder of Firefox! I hope I can pick your brain with a Firefox question.

    I have a Dell 5100 system running Microsoft Windows XP Professional, Media Center.

    I've tried several times to Install and run Firefox (most recently yesterday) without success.

    It goes through the install and seems to be installing fine until it looks like it is doing some sort of search for files that never ends.

    I just now tried to load and run Firefox and it just says it's loading.

    Is there a compatibility problem between Firefox and Media Center?

    Thanks for the help.

    Mick

  • Anonymous (unregistered) in reply to Anonymous
    Anonymous:
    byte_lancer:
    We really want to know more about this 30 part.

    Did it kill 30 processes (WTF) ? or threads ? or connections ?

    Pleeeeease tell me that's sarcasm.  It's a news-story-like phrase, like "hurricane damages AOL headquarters, killing 30."

     

    Actually, this is what happened:

    Engineer 1: You bloody did what?

    Engineer 2: It's brilliant actually. I pass that pointer to JavaScript as string and...

    Engineer 1: You didn't... Tell me you didn't!

    Engineer 2: Yes I did, and it's spelled out in black and white in ISO 9899-1999:TC2.

    Engineer 1: What? (dumbfounded)

    Engineer 2: Some people still haven't memorized every single word of the C specification...

    Engineer 1: You're fired!

    ... Next day... Engineer 2: (Holding an Uzi in both hands, dressed like Rambo) tatatatatatatatat

    ...Evening News... "Today, 30 marketing employees at AOL were shot down by an engineer; witnesses claimed that the engineer was screaming "ISO 9899-1999:TC2!" the whole time he was mowing down the victims."

  • (cs) in reply to Belcat
    Belcat:

    You forgot the other big WTF: Netscape 6.  Releasing a browser that could barely work was a real WTF and probably cost them whatever was left of their Netscape fans...

     

    Couldn't agree more. I was a massive Netscape fan-boy. In the days of Netscape 3 and IE 3 I became hooked on Netscape and just couldn't imagine why any fool would run IE over Netscape. Then came NS 4.75 Communicator and my faith wavered, but I stuck with Netscape; then came the disaster that was NS 6. It was slow, it was buggy, and it messed up even simple sites (including the first site I ever got paid money to build!). That was it: I switched to IE and didn't look back till someone gave me a copy of Firebird 0.7. I haven't looked back since, and I'll be first in line for the new Firefox 2 when it comes out. It never ceases to amaze me that Netscrape still exists. Who still uses it?

    Bart. 

  • Chad Austin (unregistered)

    I was one of the last interns at Netscape, I think around the time that Blake started working there in person.  I had just finished my sophomore year in college and was like "Yeah!  Silicon Valley!  Netscape!  This is cool!"  But things didn't quite add up...  only one person on the rendering team was left, there were few or no automated tests (so just about every release regressed in some way), and the code review process was so slow and draconian it's amazing _anything_ happened.

    So, I was hired as a QA guy, but only because the original engineering spot I applied for had been filled.  Nonetheless, I was given time to do software development on the side.  So I decided I'd fix a relatively simple bug that had been around for quite a while.  It took _two months_ to get through the review and super reviewer process.  (Which was also the primary way knowledge transfer was accomplished, btw.)

    Now I work at a startup where if your change passes the tests, it's a go.  As long as you've added new tests for the missing functionality, anyway.  And knowledge transfer is accomplished by pair programming.  Now I can look back at my Netscape summer with a great big, educational WTF.

    p.s. SWIG (Simplified Wrapper and Interface Generator) stores C pointers in strings in some (all?) scripting language implementations.

     

  • Nick (unregistered) in reply to Jeff Olhoeft
    Anonymous:

    > a Netscape engineer who shan't be named once passed a pointer to JavaScript, stored it as a string

    > and later passed it back to C, killing 30

     31, my brain dropped core and I can't get it to reboot.

     

    How about a WIN32 client (which could be on a remote machine from the server)

    Taking a HWND, serializing it into a CArchive object. Converting that to a "CBlob"

    *sending it over the network to the server*

    Eventually the server sends it back (via the same mechanism).

     

    The client decodes it, and uses the handle to find the "correct" window to display the results in.

     

  • Nick (unregistered) in reply to Carnildo
    Carnildo:
    Anonymous:
    A strictly-conforming C program may convert an object pointer (though not a function pointer) to a string by casting it to void* and formatting it using sprintf with the %p format specifier (provided the receiving buffer is large enough for the output). Such a string may be stored in any suitable medium and later converted back to an object pointer of the original type (via sscanf) and used, provided it's used during the lifetime of the object that it points to, without invoking undefined behavior.

    It's spelled out in black and white in ISO 9899-1999:TC2.


    Yes, but does the spec say anything about what should happen if that string is passed through a Javascript applet?  I'm fairly sure that's undefined behavior.

     

    Spec also doesn't necessarily say "it's a good idea" -- how the heck are you supposed to check the pointer is good when you get it back, in any reasonable manner?
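    The round trip the quoted spec passage describes is easy to demonstrate. Here is a minimal C sketch (function and variable names are mine, not from any of the code discussed here) showing that formatting a pointer with %p and parsing it back with sscanf recovers a usable pointer, with the crucial caveat that this only holds within the same process and within the lifetime of the pointed-to object, which is exactly what the Netscape hack violated:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Round-trip an object pointer through a string: format with %p,
     * parse back with %p.  Per C99, the restored pointer compares equal
     * to the original -- but only in the same program execution, while
     * the pointed-to object is still alive. */
    int roundtrip_works(void)
    {
        int object = 42;
        char buf[64];

        sprintf(buf, "%p", (void *)&object);   /* pointer -> string */

        void *restored = NULL;
        sscanf(buf, "%p", &restored);          /* string -> pointer */

        return restored == (void *)&object && *(int *)restored == 42;
    }

    int main(void)
    {
        assert(roundtrip_works());
        puts("pointer/string round trip OK within this process");
        return 0;
    }
    ```

    Hand that string to another process (or another machine, as in the HWND scheme above) and nothing in the spec applies anymore: the address is meaningless outside the process that printed it.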

  • anonymous (unregistered) in reply to Nick
    Anonymous:

     How about a WIN32 client (which could be on a remote machine from the server)

    Taking a HWND, serializing it into a CArchive object. Converting that to a "CBlob"

    *sending it over the network to the server*

    Eventually the server sends it back (via the same mechanism).

     
    The client decodes it, and uses the handle to find the "correct" window to display the results in.

    The ultimate multiplayer internet game will be an FPS where the screen is rendered on the server and sent to the client through a VNC-like connection.  Nowadays there's not enough bandwidth even to implement it as a "let's check this bad idea" experiment, but in two or three years it will be doable.

     

  • (cs) in reply to stevekj

    stevekj:
    I don't think you quite got the point of Joel's article.  It wasn't that code never has to be rehashed, sometimes extensively.  As you and Joel both know, it does.  The point is that throwing everything out and starting from scratch is the wrong way to go about that, especially if you want to be commercially successful.  The correct and commercially successful way to rehash your code is to do it a bit at a time, moving this here, moving that there, untangling this knot, sweeping all the lint away from that other ball of mud over there, and at each step your code still does exactly what it did before, and you can still ship it if you have to.

    This might have the advantage that you always have a releasable product, but it can make matters much more complicated. For example, basic maintainability features such as component-based architecture, design by contract, separation of concerns etc. are not easily available as an afterthought, and implementing them in a tangled mess of spaghetti code will typically involve much more and much harder work than redoing the architecture from scratch. I learned this the hard way when I tried to make a Windows GUI port of a relatively small(!) DOS application I once wrote that used an ncurses-like library. The problem was that user interface, data processing and internal logic were horribly entangled. I managed through hard and painful work to make a GUI with basic features, but I gave up when I found that implementing the stuff I actually wanted the GUI to do was much harder still. And the code at that time was still a maintenance mess. I easily figured that rewriting the whole thing would be a lot easier, though I never got around to doing it. That way I learned the benefits of separation of concerns the easy way. Yes, the easy way, because I was damn lucky to learn this lesson through the failure of such a small one-man project.

    The best thing to do, of course, is not to write code like this in the first place. But people often don't know better the first time around, and then the second-best thing can very well be to rewrite the application. Period. That does not mean you cannot reuse. Even if you decide to rewrite (or let's say rearchitect), you can still salvage. You have code that's known to work and that's hard to rewrite? Fine - just port it to the new architecture. This approach has the huge advantage that salvaging code is a conscious decision. Every piece of legacy (resp. prototype) code thus has to pass a sanity filter:

    • Does this code work reliably? If not, how expensive is it to make it work?
    • Does this code do what we want it to do? If not, how expensive is it to make it do that?
    • And how expensive would it be to write a replacement that works and contains all required features?

    It might be argued that it isn't worth the effort to evaluate every piece of code that way. But code that's not worth the evaluation effort is easily rewritten anyway. So "Should I even bother to evaluate the costs of reusing this?" will just be the first criterion for reuse, which at least 80% of the code in a badly designed application will fail right away.

  • no name (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:

    stevekj:
    I don't think you quite got the point of Joel's article.  It wasn't that code never has to be rehashed, sometimes extensively.  As you and Joel both know, it does.  The point is that throwing everything out and starting from scratch is the wrong way to go about that, especially if you want to be commercially successful.  The correct and commercially successful way to rehash your code is to do it a bit at a time, moving this here, moving that there, untangling this knot, sweeping all the lint away from that other ball of mud over there, and at each step your code still does exactly what it did before, and you can still ship it if you have to.

    This might have the advantage that you always have a releasable product, but it can make matters much more complicated. For example, basic maintainability features such as component-based architecture, design by contract, separation of concerns etc. are not easily available as an afterthought, and implementing them in a tangled mess of spaghetti code will typically involve much more and much harder work than redoing the architecture from scratch. I learned this the hard way when I tried to make a Windows GUI port of a relatively small(!) DOS application I once wrote that used an ncurses-like library. The problem was that user interface, data processing and internal logic were horribly entangled. I managed through hard and painful work to make a GUI with basic features, but I gave up when I found that implementing the stuff I actually wanted the GUI to do was much harder still. And the code at that time was still a maintenance mess. I easily figured that rewriting the whole thing would be a lot easier, though I never got around to doing it. That way I learned the benefits of separation of concerns the easy way. Yes, the easy way, because I was damn lucky to learn this lesson through the failure of such a small one-man project.

    The best thing to do, of course, is not to write code like this in the first place. But people often don't know better the first time around, and then the second-best thing can very well be to rewrite the application. Period. That does not mean you cannot reuse. Even if you decide to rewrite (or let's say rearchitect), you can still salvage. You have code that's known to work and that's hard to rewrite? Fine - just port it to the new architecture. This approach has the huge advantage that salvaging code is a conscious decision. Every piece of legacy (resp. prototype) code thus has to pass a sanity filter:

    • Does this code work reliably? If not, how expensive is it to make it work?
    • Does this code do what we want it to do? If not, how expensive is it to make it do that?
    • And how expensive would it be to write a replacement that works and contains all required features?

    It might be argued that it isn't worth the effort to evaluate every piece of code that way. But code that's not worth the evaluation effort is easily rewritten anyway. So "Should I even bother to evaluate the costs of reusing this?" will just be the first criterion for reuse, which at least 80% of the code in a badly designed application will fail right away.

    Not that I believe the gradual rewrite approach is always the right one, but you missed the idea.

    In your case you wouldn't have started by writing a Win32 GUI. You would start with the original DOS app and work on separating the GUI (TUI?) from the backend. Once you got to the point where you had properly separated front and back ends - then you would work on replacing the DOS front end. That way, at any point you could still go back to the old front end and do a DOS release (even if the Win32 front end was half finished).

    Both sides have their pros and cons (doesn't everything?). If you go for the gradual rewrite, it will (usually) take longer. But at any point you can say "good enough for now" and make a release. For commercial software (or even internal company software) that can mean the difference between life and death for a project. It can even apply to open source. Would Mozilla have maintained critical mass (i.e. enough interested contributors) if AOL/Netscape hadn't been paying the salaries of some of the main developers?

    On the other hand, a gradual rewrite can mean rewriting and reworking code that you're just going to throw away at the next stage (e.g. rewriting your DOS GUI, knowing you're going to throw it away in the final stage when you actually switch to Win32). A lot of the time all that lost work is going to make things take longer overall. If the old code is really messed up, it can be difficult to find a gradual approach that doesn't just devolve into rewriting everything.

    Doing a full rewrite can let you take advantage of lessons learned, and perhaps come out with a cleaner design. Just watch out for second-system syndrome (we tend to see a lot of that here :-). You also need to be sure the project can survive the dead zone until the rewrite is good enough for release. I'd recommend taking your worst-case estimate and multiplying by 3.

  • Raafschild (unregistered) in reply to Nick
    Anonymous:
    Carnildo:
    Anonymous:
    A strictly-conforming C program may convert an object pointer (though not a function pointer) to a string by casting it to void* and formatting it using sprintf with the %p format specifier (provided the receiving buffer is large enough for the output). Such a string may be stored in any suitable medium and later converted back to an object pointer of the original type (via sscanf) and used, provided it's used during the lifetime of the object that it points to, without invoking undefined behavior.

    It's spelled out in black and white in ISO 9899-1999:TC2.


    Yes, but does the spec say anything about what should happen if that string is passed through a Javascript applet?  I'm fairly sure that's undefined behavior.

     

    Spec also doesn't necessarily say "it's a good idea" -- how the heck are you supposed to check the pointer is good when you get it back, in any reasonable manner?

     

    The important bit is 'provided it's used during the lifetime of the object that it points to'. When the javascript sends the pointer back to the server, it is not guaranteed that it talks to the same process, let alone that the object that the pointer points to is still available.


  • (cs)

    I remember this "feature" in Netscape, and distinctly remember thinking "hmmm, this feature doesn't work... oh well, back to IE".

  • Anonymous (unregistered) in reply to cconroy
    cconroy:
    Anonymous:
    Side WTF:  We once had a client that wanted their web site to work with Netscape 5.

    Let me guess: You were designing the official web site of Leisure Suit Larry 4? 

     

    I'm not sure if I'm the only one who picked up on this...  Absolutely brilliant, I love it.  Who knows, maybe Leisure Suit Larry 4 will turn up with Duke Nukem 4 in the distant future.

  • Micael Baerens (unregistered)

    "...passed a pointer to JavaScript, stored it as a string and later passed it back to C..."

    ...printed on paper, placed on a wooden table...

  • csrster (unregistered) in reply to Belcat
    Belcat:

    You forgot the other big WTF: Netscape 6.  Releasing a browser that could barely work was a real WTF and probably cost them whatever was left of their Netscape fans...

     

    I remember Netscape 6. There was a Netscape 7???

    --

    Colin 

     

  • Kiss me, I'm polish (unregistered) in reply to csrster

    Gotta love Netscape 8's huge number of international versions:

    1. Canadian English

    Yes, that's it. There's not even a UK build, chaps. And forget 8.1 in anything other than American English.

  • Charles Perreault (unregistered) in reply to dfssgsgsdf
    Anonymous:
    Side WTF:  We once had a client that wanted their web site to work with Netscape 5.

     

    At my previous job, I HAD to produce a web application in ASP.NET backward compatible with Netscape 4 Gold... because that's what ONE of the scientists was using on his SPARC workstation (on Solaris 5).  Just imagine the client-side validation code, the CSS and all the little non-working DOM JavaScript.  I truly hate legacy Netscapers.

     

    P.S. Yes Mozilla works on SPARC, but that scientist is still using Netscape 4.

  • (cs) in reply to Charles Perreault
    Anonymous:
    Anonymous:
    Side WTF:  We once had a client that wanted their web site to work with Netscape 5.

     

    At my previous job, I HAD to produce a web application in ASP.NET backward compatible with Netscape 4 Gold... because that's what ONE of the scientists was using on his SPARC workstation (on Solaris 5).  Just imagine the client-side validation code, the CSS and all the little non-working DOM JavaScript.  I truly hate legacy Netscapers.

     

    P.S. Yes Mozilla works on SPARC, but that scientist is still using Netscape 4.

    *lol*, that's when people like me have a chance to shine. I never learned anything beyond HTML 3.1, so I'm hardly able to make a web app that breaks on Netscape 4.

     

     

    ;-) 

     

  • Seriema (unregistered)

    Oh wow, that's... incredible! I WTF'd so much my food almost came out my nose! And you've got a wonderful writing style, just thought I'd let you know =)

  • CodeMechanic (unregistered) in reply to VGR

    You have completely missed Joel's point, which, in my words, is "A company trying to make a profit cannot afford to simply stop its efforts to enhance its products while it completely re-engineers the current version." If you are not trying to make a profit or if you have armies of _qualified_ programmers donating their time and skill, perhaps you can get out ahead of the rest of us.

     Of course it is possible to avoid having to throw away a codebase, but, as you point out, it requires diligence, skill and discipline, all of which seem to be in short supply.

  • Captcha: Zork. (unregistered) in reply to CodeMechanic

    What I fail to understand as I work at these businesses is.. How the hell does the Marketing department get away with all of this? 

     

    They have no accountability.  If sales are low, it's Sales' fault.  If the product doesn't do what they promise, it's the developers' or production's fault.  No one ever says 'Let's look at Marketing's effect on product sales.'

     

    And yet, in every company, marketing is king.  If marketing wants something, hop to it.  They greenlight projects and approve everything.

     

    I swear I'm in the wrong line of work. 

  • dasmb (unregistered)

    Anybody still want to claim that Netscape lost the browser wars because of Microsoft's monopolistic practices?

     

     

  • (cs) in reply to dasmb
    Anonymous:

    Anybody still want to claim that Netscape lost the browser wars because of Microsoft's monopolistic practices?

    Definitely. MS won the war when IE4 was not much better than NS4. The NS6 disaster happened when MS already had 80+ percent of the market.
     

  • It's a Feature (unregistered) in reply to Michael Wojcik

    Anonymous:
    A strictly-conforming C program may convert an object pointer (though not a function pointer) to a string by casting it to void* and formatting it using sprintf with the %p format specifier (provided the receiving buffer is large enough for the output). Such a string may be stored in any suitable medium and later converted back to an object pointer of the original type (via sscanf) and used, provided it's used during the lifetime of the object that it points to, without invoking undefined behavior. It's spelled out in black and white in ISO 9899-1999:TC2. The real WTF is that some people still haven't memorized every single word of the C specification. Geez. (Prolepsis: Of course I agree pointer->string->pointer is not a Good Idea.)

     

    WTF???

     

    A pointer from a server application, converted to a string and passed to JavaScript (which runs on a stateless client), then passed back to C (running on the server).  Assuming the insanity works, the real danger is that a hacker figures out how stupid you were and changes the pointer in the JavaScript to something else, crashing your server, or worse yet, taking control of it.

  • (cs) in reply to It's a Feature
    Anonymous:
     

    A pointer from a server application, converted to a string and passed to JavaScript (which runs on a stateless client), then passed back to C (running on the server).  Assuming the insanity works, the real danger is that a hacker figures out how stupid you were and changes the pointer in the JavaScript to something else, crashing your server, or worse yet, taking control of it.

    Today's WTF is about a browser, so it's all client-side. Granted, in most cases there is a server that delivers content... but the strange conversion is not done on the server. BTW, a browser is not stateless.

  • John Q. Public (unregistered) in reply to zip

    I know ex-AOL employees who have completely changed their tune. One of them, after having had an AOL account for 9 years and working for AOL himself for some years, is changing to Comcast, even though Comcast charges our area an outrageous sum for Internet access.

    The lesser of two evils...

  • (cs) in reply to Olddog
    Anonymous:

    What scares me is that web advertisers (popup ads) might use the same argument against popup blocking. Has anyone run across any articles concerning the legality of popup blocking?

     

    If I can't block your popups, I am not going to visit your site. The web is not the Internet, and quite frankly, I don't see the real value that JavaScript and AJAX are adding to the network. That is merely reinventing the wheel. 

  • (cs) in reply to anonymous

    The ultimate multiplayer Internet game will be an FPS where the screen is rendered on the server and sent to the client through a VNC-like connection.  Nowadays there isn't enough bandwidth even to implement it as a "let's check this bad idea" experiment, but in 2-3 years it will be doable.

    You might want to look into how X11 works, particularly when running on dumb X terminals.

  • (cs) in reply to neven
    neven:

    Awesome post - best writing I've seen on The Daily WTF. Kudos etc.

     

    Agreed. The WTF content is good, but the writing style is excellent. The dry humor kills me. Too bad I was out of town and didn't read this one until just now.
  • Michael Wojcik (unregistered) in reply to Carnildo
    Carnildo:
    Anonymous:
    A strictly-conforming C program may convert an object pointer (though not a function pointer) to a string by casting it to void* and formatting it using sprintf with the %p format specifier (provided the receiving buffer is large enough for the output). Such a string may be stored in any suitable medium and later converted back to an object pointer of the original type (via sscanf) and used, provided it's used during the lifetime of the object that it points to, without invoking undefined behavior.

    It's spelled out in black and white in ISO 9899-1999:TC2.


    Yes, but does the spec say anything about what should happen if that string is passed through a Javascript applet?  I'm fairly sure that's undefined behavior.


    You may be sure, but you're wrong.  As long as the string is recovered unchanged and converted back to a pointer during the lifetime of the object it points to, behavior is defined.  That seems pretty clear in my original post above, and it's quite clear in the standard.  There's no exception in the standard for being "passed through a Javascript applet".

    You can take the result of %p-formatting, reverse it, encrypt it with AES-128, carve it onto a stone tablet, and bury it at midnight in an undisclosed location; as long as you get the original string back into the program, you can convert it back to a pointer of the appropriate type and dereference it as long as the object is still around.

    And as I wrote the first time, of course it's a bad idea.  I pointed out it was defined behavior only because the additional irony makes it even funnier: anyone can produce undefined behavior in a C program (and most C programmers do), but it takes skill to write strictly-conforming code that's still WTF-stupid.  (Note that IOCCC tricks often are not strictly-conforming.)
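    As a minimal sketch of the round-trip described above (strictly conforming under C99's `%p` rules for `sprintf`/`sscanf`, assuming a hosted implementation; the variable names are mine):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        int object = 42;
        char buf[64];            /* plenty of room for any plausible %p output */
        void *recovered = NULL;

        /* Convert the object pointer to a string: cast to void* and format with %p. */
        sprintf(buf, "%p", (void *)&object);

        /* The string could now be carved onto a stone tablet, shipped through
           JavaScript, etc. As long as it comes back unchanged... */

        /* ...it can be converted back to a pointer with sscanf's %p... */
        sscanf(buf, "%p", &recovered);

        /* ...and dereferenced, provided the object is still alive. */
        assert(recovered == (void *)&object);
        printf("%d\n", *(int *)recovered);   /* prints 42 */
        return 0;
    }
    ```

    The irony Michael points out: every call here is defined behavior within one program execution, yet anything that can rewrite the string between the two conversions can redirect the recovered pointer anywhere.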

     

  • webdev101 (unregistered) in reply to byte_lancer
    byte_lancer:
    Alex Papadimoulis:

    a Netscape engineer who shan't be named once passed a pointer to JavaScript, stored it as a string and later passed it back to C, killing 30 

    We really want to know more about this 30 part.

    Did it kill 30 processes (WTF)? Or threads? Or connections?

    I have a hunch this has got something to do with the string processing functionality in spidermonkey.

    Alex Papadimoulis:

    To make things right, Netscape quickly released a version 7.01 that included it, but only after renaming it the "popup suppressor" ("popup decongestant" was trademarked).

    After reading this submission, I think we must trademark the term "WTF suppressor".

     

    And "WTFdecongestant"

  • webdev101 (unregistered) in reply to dfssgsgsdf
    Anonymous:
    Side WTF:  We once had a client that wanted their web site to work with Netscape 5.

     

    My boss at my last job wanted all the sites to work with every browser, including NS 5 and lower, and not only that, he wanted us to use JavaScript...

  • skztr (unregistered)

    The whole concept of "Popup Blocking" is a WTF. You don't need to "block" popups. They aren't somehow "coming at you" from malicious websites. They are on the website. The web browser just needs to not interpret them.

    The idea that popups somehow "get through" sometimes is further WTF.

    - I'm browsing a webpage
    - If I want another window, I will explicitly request that my web browser give me one 

    For _ALL_ web browsers, all they need to do to "block popups" is: NOTHING. Just don't support them.
    It's like having a "Virus Blocker" coded separately, with a bunch of logic aimed at circumventing the "takeOverComputer()" function. Wouldn't it be easier to just not have that function in the first place?

    There is no option in Firefox to DISABLE popups, just to "block" them (which of course doesn't work). 

  • eric bloedow (unregistered)

    That "it was marketing's fault" reminds me of two stories. A company was about to make several huge hardware sales, BUT then someone made the mistake of saying "our next system will be much better," so the buyers decided NOT to buy and waited for the next system. Then they made the SAME mistake AGAIN, and the company went out of business before the third system was ever finished. One version of the story, told to Scott Adams, said the Marketing people made the mistake both times, and that the second time the company decided to save money by firing everyone EXCEPT MARKETING. The other version said the CEO himself said "the next one will be better" twice. I can't remember the name of the second company, but that story was in a book, "dumbest business decisions".

  • A human in the... now I guess. (unregistered) in reply to anonymous

    Hello, Stadia!

Leave a comment on “Blake Ross on Popup Suppression”
