• Norman Diamond (unregistered) in reply to anonymous
    anonymous:
    Shaun:
    It still happens - in 2006 I came across a similar piece of code written in C# which used pixel-by-pixel rendering of an image (direct to the screen, no less) written by an "expert". A few hours' work to use arrays and off-screen rendering before transferring the final image to the screen, and I had one very happy customer who could now page through the images as fast as they liked rather than waiting several seconds per image - I didn't milk it as much as I should have done :)
    There's no reason why it should have taken longer to render to the screen than to render to an array. They're both just places in memory. I'm guessing that the "plot pixel" routine had some horrid overhead.
    Yes, there is. The pipeline from the CPU to the graphics memory (via the graphics chip[*]) is slower than the one to main memory. The pipeline from the graphics memory (via the graphics chip[*]) back to the CPU is even slower.

    [* In a PC where the graphics chip grabs some of the main memory, I'm not sure if the CPU can still access that memory directly, but whether or not it can, performance goes down even more: the graphics chip always has to go through that slow pipeline to get the data to display on the screen.]
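
    To make the off-screen approach above concrete, here is a minimal sketch in C against the Win32 GDI. The original code was C#, so this is only an illustration of the technique, not the poster's actual fix; the window handle, the pixel array, and the function names are assumptions for the example. The first routine pushes every pixel through the driver one call at a time; the second builds the image in an ordinary DIB section in main memory and crosses the CPU-to-graphics pipeline once with a single BitBlt.

        /* Sketch only: per-pixel drawing vs. off-screen rendering.
           Assumes a valid window handle and a w*h array of COLORREF pixels. */
        #include <windows.h>

        void draw_per_pixel(HWND hwnd, const COLORREF *pixels, int w, int h)
        {
            HDC screen = GetDC(hwnd);
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    /* One GDI call per pixel: each call goes through the
                       driver and touches graphics memory. */
                    SetPixel(screen, x, y, pixels[y * w + x]);
            ReleaseDC(hwnd, screen);
        }

        void draw_offscreen(HWND hwnd, const COLORREF *pixels, int w, int h)
        {
            HDC screen = GetDC(hwnd);
            HDC mem = CreateCompatibleDC(screen);

            BITMAPINFO bmi = {0};
            bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth = w;
            bmi.bmiHeader.biHeight = -h;          /* top-down rows */
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 32;
            bmi.bmiHeader.biCompression = BI_RGB;

            void *bits = NULL;
            HBITMAP dib = CreateDIBSection(mem, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
            HGDIOBJ old = SelectObject(mem, dib);

            /* Build the whole image in ordinary system memory... */
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    COLORREF c = pixels[y * w + x];
                    ((DWORD *)bits)[y * w + x] =
                        (GetRValue(c) << 16) | (GetGValue(c) << 8) | GetBValue(c);
                }

            /* ...then cross the slow CPU-to-graphics pipeline once. */
            BitBlt(screen, 0, 0, w, h, mem, 0, 0, SRCCOPY);

            SelectObject(mem, old);
            DeleteObject(dib);
            DeleteDC(mem);
            ReleaseDC(hwnd, screen);
        }

    The same idea applies whatever the API: compose the image in main memory, then transfer the finished result across the slow pipeline in one operation.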

  • v (unregistered) in reply to This isn't a WTF. This is just someone being nasty and hurtful.
    This isn't a WTF. This is just someone being nasty and hurtful.:
    I'm appalled.

    You'd write a message saying something like "Thanks Doc, really great stuff, but I noticed you could do it a better way, and here are the details...".

    Publicly humiliating someone who did 99% of the work for you, just because you had to correct the 1%, makes Mike a total fucking nobhead. Seriously.

    The real WTF is that you decided to publish a story where the only content is some asshole bragging about how he simultaneously ripped off AND trolled someone vastly better than he'd ever be.

    Shame on you.

    +1

  • Neil (unregistered) in reply to Val
    Val:
    balazs:
    The Daily Happy Ending, where is my WTF??? Every second programmer writes to a file byte by byte at least once in their career.
    In a decent language on a decent operating system you don't have to care. You should be able to write byte by byte and let the system decide how it will buffer it.
    Case in point:

    I wrote a utility to de-interlace 64 images into a single image using Quick C for Windows.

    But its standard library didn't allow more than 20 files to be open at once.

    So I used the low-level API instead.

    Which was great on the hard disk, because of 32-bit file access.

    But on network drives each read and write triggered a separate synchronous round-trip to the server...
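
    A user-level buffer layered over the low-level API is the usual way out of that. The sketch below is only illustrative: it uses POSIX-style open/read rather than Quick C's own low-level calls, and the names BufferedFile, buf_open, and buf_getc are invented for the example. Each refill costs one system call per 8 KB instead of one per byte - exactly the difference between a local disk hiding the cost and a network drive turning every call into a server round trip.

        /* Sketch only: a minimal read-side buffer over low-level file I/O,
           so that consuming one byte at a time does not become one OS call
           (and, on a network drive, one round trip) per byte. */
        #include <fcntl.h>
        #include <unistd.h>

        #define BUF_SIZE 8192

        typedef struct {
            int           fd;
            unsigned char data[BUF_SIZE];
            size_t        pos, len;
        } BufferedFile;

        int buf_open(BufferedFile *bf, const char *path)
        {
            bf->fd = open(path, O_RDONLY);
            bf->pos = bf->len = 0;
            return bf->fd < 0 ? -1 : 0;
        }

        /* Returns the next byte, or -1 at end of file / on error. */
        int buf_getc(BufferedFile *bf)
        {
            if (bf->pos == bf->len) {
                /* Refill: one read() per BUF_SIZE bytes instead of one per byte. */
                ssize_t n = read(bf->fd, bf->data, BUF_SIZE);
                if (n <= 0)
                    return -1;
                bf->len = (size_t)n;
                bf->pos = 0;
            }
            return bf->data[bf->pos++];
        }

        void buf_close(BufferedFile *bf)
        {
            close(bf->fd);
        }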

  • Video Toaster (unregistered) in reply to Anon
    Anon:
    TRWTF is the Commodore Amiga, right?

    Atari ST FTW!

    You can suck my Fat Agnus!

  • Barf 4Eva (unregistered) in reply to GWO
    GWO:
    And the WTF here is that Mike got something for free -- something he admitted he couldn't get round to himself -- then made some improvements, and then felt like that entitled him to act like a dick to the person who gave it to him in the first place.

    Mike is TRWTF.

    Couldn't agree more. Mike sounds like a total d*ck.

  • SomeName (unregistered) in reply to balazs

    I did and still do - I expect streams to be buffered by default. But on the other hand I never had any performance issues, so I didn't have to investigate.
