• (cs) in reply to nimis
    nimis:
    JimmyCrackedCorn:
    Of course the bigger WTF is having a large important application written in Perl. Really. Wow.
    There is absolutely nothing wrong with Perl, and you are a disgusting ignorant bigot.

    Perl has its uses, in its place. So does ant, so does selenium, and so does FORTRAN (although not so much any more). Some developers even believe that java can be an acceptable tool in some applications.

  • (cs) in reply to Grown up
    Grown up:
    Sorry, some of us graduated from kindergarten years ago.

    Seriously, the FRIST meme was hackneyed on day one.

    Captcha: wisi :adj: "Smartah"

    I'm wisi than you.

    Puns on the captcha are equally fucking stupid, of course.

  • Pita (unregistered)

    I wanted to add a pithy comment, but I kept drawing a blank.

  • Joe (unregistered) in reply to Lorne Kates
    Lorne Kates:
    <xml:BlankPlaceholderConcrete><blink> </blink></xml>
    FTFY.
  • Anon (unregistered) in reply to Grown up
    Grown up:
    Sorry, some of us graduated from kindergarten years ago.

    Seriously, the FRIST meme was hackneyed on day one.

    Captcha: wisi :adj: "Smartah"

    I'm wisi than you.

    I see what you did there ;)

    Criticizing one hackneyed meme and then including another. Bravo sir, bravo.

  • usitas (unregistered)

    Right or wrong, this is simply how checking for spaces has always been done.

  • Jeremy (unregistered)

    I always like when people do the stupid way the stupid way. Even if you thought you had to do something with predefined spaces: no loops? No $blank[10] instead of $blank10? Why not blank($any_number)?

    I mean, I get that stupid is as stupid does, but I enjoy the layers of stupid almost as much as the thing being pointed out as stupid.

  • Herr Otto Flick (unregistered)

    This is Perl right? Perl being the language where you can do $blanks10 = " " x 10 ?
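    He's right, and for anyone who hasn't met it: the repetition operator makes the whole table of variables unnecessary. A minimal sketch (variable names are illustrative):

```perl
use strict;
use warnings;

# The 'x' operator repeats its left operand, so no table of
# predefined variables is needed.
my $blanks10 = " " x 10;               # ten spaces
my $any      = " " x 17;               # any width on demand
my $padded   = sprintf "%-10s", "hi";  # or let sprintf do the padding
print length($blanks10), "\n";         # prints 10
```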

  • Herr Otto Flick (unregistered) in reply to nimis
    nimis:
    JimmyCrackedCorn:
    Of course the bigger WTF is having a large important application written in Perl. Really. Wow.
    There is absolutely nothing wrong with Perl, and you are a disgusting ignorant bigot.

    There is so much wrong with Perl that I keep a wiki page at $JOB with all the stupid shit that people get wrong when they come to play in Perl-land for a couple of hours.

  • (cs) in reply to JimmyCrackedCorn
    JimmyCrackedCorn:
    Of course the bigger WTF is having a large important application written in Perl. Really. Wow.

    +1

    Having just inherited a large Perl app that I'm maintaining for free, I fully support this comment.

  • (cs) in reply to Grown up
    Grown up:
    Sorry, some of us graduated from kindergarten years ago.

    new meme!

    "Sorry, some of us graduated from Perl years ago."

  • (cs) in reply to Bananafish
    Bananafish:
    So true, faoileag. A 25-line Perl script takes 38 hours to process 2,000,000 tiny little files. The same job can be done in 37 hours with a 69 line Java program, 28 hours with a 45 line Awk script, or 11 hours with a 700 line COBOL program.

    #Lines <> efficiency.

    Maybe that's why this statement would be a comment?

    So, did you verify these run times? Write the programs? Inquiring minds need to know!

    Then again, I'm in the process of writing a Perl script right now. Only a couple of hundred lines (many are comments!).

  • ¯\(°_o)/¯ I DUNNO LOL (unregistered) in reply to Grown up
    Grown up:
    Sorry, some of us graduated from kindergarten years ago.

    Seriously, the FRIST meme was hackneyed on day one.

    Captcha: wisi :adj: "Smartah"

    I'm wisi than you.

    Sorry, some of us have graduated from crawling on all-fours years ago.

    Seriously, the calling-out-the-captcha meme was hackneyed on day zero.

  • (cs) in reply to chubertdev
    chubertdev:
    new meme!

    "Sorry, some of us graduated from Perl years ago."

    Sorry, some of us graduated from new memes years ago.

  • real-modo (unregistered)
    $blank0 = "FILE_NOT_FOUND";

    captcha: ideo. 1. n. The idea that array indexes should start at 0. 2. adj. A state of idiocy induced by lack of tea.

  • (cs) in reply to golddog
    golddog:
    What faoileag said re: value of rewriting. One of our core applications is being rewritten. It really needed it; as I understand the story, we bought software from somebody many years ago. The organization from which we bought it was somebody who sort of hacked together a kind-of working application in classic ASP (pre-.NET).

    Anyway, it's being brought up to more modern technologies (MVC, ajax, etc). We've had a team of between four and seven developers going at the rewrite for nearly two years now. Easily spent > USD $1M. Closing in on wrapping up the rewrite currently.

    The new site is great--functional, responsive, easier to maintain, all the good things one hopes with a rewrite. I think it's going to make sales easier and draw business.

    Still, I'm not sure management realized the time and cost which would be undertaken when this project was set out upon.

    The point is, a for-profit organization cannot spend money on things which don't increase profitability, and we can't define the savings of a rewrite in terms of "fixes/changes will take x% less time afterwards."

    The Freakonomics podcast on "sunk cost" would be a good argument against that.

  • (cs) in reply to DCRoss
    DCRoss:
    chubertdev:
    new meme!

    "Sorry, some of us graduated from Perl years ago."

    Sorry, some of us graduated from new memes years ago.

    Well played.

  • Bananafish (unregistered) in reply to herby
    herby:
    Bananafish:
    So true, faoileag. A 25-line Perl script takes 38 hours to process 2,000,000 tiny little files. The same job can be done in 37 hours with a 69 line Java program, 28 hours with a 45 line Awk script, or 11 hours with a 700 line COBOL program.

    #Lines <> efficiency.

    Maybe that's why this statement would be a comment?

    So, did you verify these run times? Write the programs? Inquiring minds need to know!

    Then again, I'm in the process of writing a Perl script right now. Only a couple of hundred lines (many are comments!).

    Runtimes for Perl, Awk, and Java were calculated by determining the average per-file time and multiplying by the number of files left. After setting up the regular Perl script, I wrote Java and Awk equivalents. But after seeing how slow these were and how long it would take to process the lot, I went for the fastest thing I know. A well designed and structured COBOL program was lightning fast in this case. I've only run the COBOL version 2 or 3 times, when the number of files is large enough to require unreasonable processing time (super large data runs or backlog due to some type of problem). Normally, the Perl script runs as part of a data acquisition run and can keep up with the load.

    All comes down to the right tool for the job. When it's Perl, I use it, unless the right tool is something else. NEVER thought the right tool would be COBOL, hahahahaha. Lern't sumpin' that day ;)

  • Stu (unregistered) in reply to faoileag
    faoileag:
    Remy Porter:
    I maintain applications written in the late 90s.
    So do I right now, and they are written in perl to boot :-)
    I used to maintain VB applications written in the early 90s. VB3 ported to VB4 ported to VB5 ported to VB6, initially running against an Access DB then ported to MS-SQL 2000 then ported to MS-SQL 2005.

    I was sacked but I'm betting they're still using that payroll system.

  • n+1 (unregistered)

    I built the prototype for a large application using Perl in 3 weeks.

    The java version to replace the prototype was to be built by others. After more than 4 years, with less functionality than the prototype and an order of magnitude more lines of code, the replacement was abandoned.

  • Charles (unregistered) in reply to Mike

    LOL Mike, thanks for that. Reminds me of my very first IT position at a large telecom... the bash script I wrote to automate some of our end of night stuff was a vast improvement over the larry running it every night.

    Thanks to that job, 15 years later I still think of my function keys as PF? .

    Good times, good times.

  • Nour Yame (unregistered) in reply to vindico
    vindico:
    faoileag:
    JimmyCrackedCorn:
    I understand legacy applications, but I still wonder about any calculation that could determine when technical debt makes it worth it to do a re-engineering/refurbishing/rewriting project.
    As long as it is possible to fix even critical bugs (say, a memory leak in an embedded application written in c++) and new features are not only demanded by customers but can also be added, the pain of rewriting a large legacy app is usually far too great for most companies.

    The rewrite takes time and a lot of manpower. Think 1, 2 or more years until app 2.0 has the same functionality of app 1.0. From a business point of view that's a lot of money to invest, and you want to see some return on that investment.

    Wow, one or two years would be like a dream come true. At my job, the Big Rewrite is up to 5 years (and counting), but still has less than half the functionality of the original. The guy who was in charge of this massive brain-fart quit a few months ago, and now it looks like nobody is really interested in finishing it. ;-)

    Our application suite was going to be rewritten with the latest technologies, including out sourcing a lot of the boring work. There was a lot of competition amongst senior staff as to who would be involved in what was expected to be the flagship product, and people who were left behind on the existing product were pitied.

    But years passed and millions were spent without anything to show for it - and finally the word came that the whole project was being scrapped and along with it all the staff who were working on it, as it was obvious only the people on the existing product were needed.

  • anonymous (unregistered) in reply to Bananafish
    Bananafish:
    herby:
    Bananafish:
    So true, faoileag. A 25-line Perl script takes 38 hours to process 2,000,000 tiny little files. The same job can be done in 37 hours with a 69 line Java program, 28 hours with a 45 line Awk script, or 11 hours with a 700 line COBOL program.

    #Lines <> efficiency.

    Maybe that's why this statement would be a comment?

    So, did you verify these run times? Write the programs? Inquiring minds need to know!

    Then again, I'm in the process of writing a Perl script right now. Only a couple of hundred lines (many are comments!).

    Runtimes for Perl, Awk, and Java were calculated by determining the average per-file time and multiplying by the number of files left. After setting up the regular Perl script, I wrote Java and Awk equivalents. But after seeing how slow these were and how long it would take to process the lot, I went for the fastest thing I know. A well designed and structured COBOL program was lightning fast in this case. I've only run the COBOL version 2 or 3 times, when the number of files is large enough to require unreasonable processing time (super large data runs or backlog due to some type of problem). Normally, the Perl script runs as part of a data acquisition run and can keep up with the load.

    All comes down to the right tool for the job. When it's Perl, I use it, unless the right tool is something else. NEVER thought the right tool would be COBOL, hahahahaha. Lern't sumpin' that day ;)

    When you're dealing with lots of tiny files, the overhead required to open and close the file can be substantial. I'm curious how long each program would've taken if it did nothing but open each file, read its contents, and close it.
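    Measuring that pure open/read/close overhead is only a few lines of Perl; a sketch using a small stand-in set of files (the file count and contents here are made up for illustration):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use Time::HiRes qw(gettimeofday tv_interval);

# Create a handful of tiny files to stand in for the 2,000,000.
my $dir = tempdir(CLEANUP => 1);
for my $i (1 .. 100) {
    open my $fh, '>', "$dir/file$i" or die $!;
    print $fh "GATTACA\n";
    close $fh;
}

# Time nothing but open / slurp / close: the pure I/O overhead.
my $bytes = 0;
my $t0 = [gettimeofday];
for my $i (1 .. 100) {
    open my $fh, '<', "$dir/file$i" or die $!;
    my $contents = do { local $/; <$fh> };
    close $fh;
    $bytes += length $contents;
}
printf "open/read/close x100: %.4fs for %d bytes\n", tv_interval($t0), $bytes;
```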

  • (cs) in reply to n+1
    n+1:
    I built the prototype for a large application using Perl in 3 weeks.

    The java version to replace the prototype was to be built by others. After more than 4 years, with less functionality than the prototype and an order of magnitude more lines of code, the replacement was abandoned.

    You have a WTF. Then you use Java, and you have an exponential WTF.

  • Charles F. (unregistered) in reply to anonymous
    anonymous:
    When you're dealing with lots of tiny files, the overhead required to open and close the file can be substantial. I'm curious how long each program would've taken if it did nothing but open each file, read its contents, and close it.
    Me, too. These run times don't pass the smell test. The fact that the commenter is most comfortable with COBOL suggests that a lack of proficiency in Perl, Java and AWK may factor into the results.
  • Anonypony (unregistered)

    I have to admit, there's a certain amount of elegance to being able to do this:

    # insert 17 spaces:
    $space = $blank10 + $blank7
    
    # insert 42 spaces:
    $space = $blank10 * 4 + $blank2
    
    # insert 1 million spaces:
    $space = $blank10 ** 6
    

    ...Okay, so "elegant" isn't quite the right word.
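    For anyone tempted to try it: + and ** numify their operands in Perl, so a string of spaces silently becomes 0 and the "arithmetic" collapses. A quick demonstration (variable names borrowed from the joke above):

```perl
use strict;
use warnings;
no warnings 'numeric';    # silence the "isn't numeric" complaints

my $blank10 = " " x 10;
my $blank7  = " " x 7;

my $sum = $blank10 + $blank7;    # numeric addition: 0 + 0
print length($sum), "\n";        # prints 1 -- the string "0", not 17 spaces

# The working equivalents use concatenation and repetition:
my $seventeen = $blank10 . $blank7;   # 17 spaces
my $million   = " " x 1_000_000;      # a million spaces, instantly
```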

  • (cs) in reply to Abigo
    Abigo:
    I think $d is the saddest of all variables.
    +1 Finally something in the universe that can make me laugh ...
  • (cs) in reply to DCRoss
    DCRoss:
    chubertdev:
    new meme!

    "Sorry, some of us graduated from Perl years ago."

    Sorry, some of us graduated from new memes years ago.

    Sorry, some of us graduated from repeating the new meme with variations ad nauseam years ago.

  • Your Name (unregistered) in reply to Matt Westwood
    Matt Westwood:
    DCRoss:
    chubertdev:
    new meme!

    "Sorry, some of us graduated from Perl years ago."

    Sorry, some of us graduated from new memes years ago.

    Sorry, some of us graduated from repeating the new meme with variations ad nauseam years ago.

    Right or wrong, this is simply how new memes were created back then.

  • (cs) in reply to Matt Westwood
    Matt Westwood:
    DCRoss:
    chubertdev:
    new meme!

    "Sorry, some of us graduated from Perl years ago."

    Sorry, some of us graduated from new memes years ago.

    Sorry, some of us graduated from repeating the new meme with variations ad nauseam years ago.

    Sorry, some of us graduated from wanting to watch the world burn years ago.

  • _darkstar_ (unregistered) in reply to Herr Otto Flick
    Herr Otto Flick:
    This is Perl right? Perl being the language where you can do $blanks10 = " " x 10 ?

    Yes. Also the language that sports one of the largest collections of esoteric operators (most are artifacts of the very terse syntax and weren't intentionally included in the language):

    Operator     Nickname                Function
    ================================================
    0+           Venus                   numification
    @{[ ]}       Babycart                list interpolation
    !!           Bang bang               boolean conversion
    }{           Eskimo greeting         END block for one-liners
    ~~           Inchworm                scalar
    ~-           Inchworm on a stick     high-precedence decrement
    -~           Inchworm on a stick     high-precedence increment
    -+-          Space station           high-precedence numification
    =( )=        Goatse                  scalar / list context
    =<>=~        Flaming X-Wing          match input and assign captures
    

    There are many more. Try Googling "esoteric operators perl".

    By the way:

    I have 20 years of Perl programming experience.

    Yes, the syntax is terse and next to unreadable to outsiders.

    Yes, you can write horrible, horrible code in Perl.

    And no, that doesn't make it a bad language.

    But the widespread antipattern of "just write more code until it works" that I've seen way too many places will backfire in the worst conceivable way if you have chosen such a flexible language.
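    A couple of the operators in the table above can be demonstrated in a few lines; a sketch of "bang bang" and the "goatse" counting idiom:

```perl
use strict;
use warnings;

# "Bang bang": double negation collapses any value to Perl's
# canonical booleans, 1 or "" (the empty string).
my $truthy = !! "some string";           # 1
my $falsy  = !! 0;                       # ""

# "Goatse" =( )=: the empty list forces list context on the match,
# then the scalar assignment counts the results in one expression.
my $count =()= "hello world" =~ /o/g;    # 2
print "$truthy/$falsy/$count\n";
```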

  • TN (unregistered)

    That is way too many lines for perl. This is the same thing:

    ${'blank'.$_} = ' ' x $_ foreach 1..10;

    Now that is perl!
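    That one-liner relies on symbolic references, which use strict forbids; if anyone wants the strict-safe spelling, a hash does the same job (a sketch):

```perl
use strict;
use warnings;

# Symbolic references won't compile under strict; a hash gives the
# same lookup with one extra character at each use site.
my %blank = map { $_ => ' ' x $_ } 1 .. 10;
print "[$blank{3}]\n";    # prints [   ]
```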

  • Bananafish (unregistered) in reply to Charles F.
    Charles F.:
    anonymous:
    When you're dealing with lots of tiny files, the overhead required to open and close the file can be substantial. I'm curious how long each program would've taken if it did nothing but open each file, read its contents, and close it.
    Me, too. These run times don't pass the smell test. The fact that the commenter is most comfortable with COBOL suggests that a lack of proficiency in Perl, Java and AWK may factor into the results.

    The files contain text representations of genetic data. I would expect MUCH better stats if the program only had to open, read, and close. But the program has to open all the files in a particular location, read them all, and decide which of 4 characters determines a particular location value AFTER calculating where all the files intersect (byte 1 of file 3 may be byte 17 of file 72) times thousands per folder, times hundreds of folders. It also needs to handle 7 exception letters that are meaningful to humans but do not exist in any file. These are "invented" by several algorithms applied when greater than an arbitrary precision of consensus cannot be determined, and some must be recalculated after the exceptions are applied. Yep, it changes at any time, depending on what we find in the files. Stinks in any language, but when it cures cancer it will smell nicer.

    I am not "most comfortable" with COBOL. I chose to use it. The real overhead here was not reading the files but rather determining the intersection of thousands of strings of characters with variable lengths and "official" starting positions. I decided the data structure was good for minimizing reads and loops. If you read a record that is defined as a single string of characters, you can copy it to another record at the top level and assign all descendants in one shot rather than having to loop at least once and then do splits and/or joins to get what you want. COBOL does that for you if you define your records properly and can populate dozens of variables and arrays with only one read and a few assigns. Others require you to run loops of one kind or another, and that's where all your time goes.

    If you can do that in Perl (or anything else) I'm all ears.

  • bambam (unregistered) in reply to Bananafish
    Bananafish:
    Charles F.:
    anonymous:
    When you're dealing with lots of tiny files, the overhead required to open and close the file can be substantial. I'm curious how long each program would've taken if it did nothing but open each file, read its contents, and close it.
    Me, too. These run times don't pass the smell test. The fact that the commenter is most comfortable with COBOL suggests that a lack of proficiency in Perl, Java and AWK may factor into the results.

    The files contain text representations of genetic data. I would expect MUCH better stats if the program only had to open, read, and close. But the program has to open all the files in a particular location, read them all, and decide which of 4 characters determines a particular location value AFTER calculating where all the files intersect (byte 1 of file 3 may be byte 17 of file 72) times thousands per folder, times hundreds of folders. It also needs to handle 7 exception letters that are meaningful to humans but do not exist in any file. These are "invented" by several algorithms applied when greater than an arbitrary precision of consensus cannot be determined, and some must be recalculated after the exceptions are applied. Yep, it changes at any time, depending on what we find in the files. Stinks in any language, but when it cures cancer it will smell nicer.

    I am not "most comfortable" with COBOL. I chose to use it. The real overhead here was not reading the files but rather determining the intersection of thousands of strings of characters with variable lengths and "official" starting positions. I decided the data structure was good for minimizing reads and loops. If you read a record that is defined as a single string of characters, you can copy it to another record at the top level and assign all descendants in one shot rather than having to loop at least once and then do splits and/or joins to get what you want. COBOL does that for you if you define your records properly and can populate dozens of variables and arrays with only one read and a few assigns. Others require you to run loops of one kind or another, and that's where all your time goes.

    If you can do that in Perl (or anything else) I'm all ears.

    C should be required learning for all programmers. It is used to write most compilers/interpreters.

    union record
    {
    char wholerecord[100];
    struct recordtypea {
       char field1[10];
       char field2[90];
       } recorda;
    struct recordtypeb {
       char field1[20];
       char field2[80];
       } recordb;
    };
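    The Perl analogue of that union trick, for what it's worth, is unpack: one template splits a fixed-width record into all its fields in a single call, no per-field loop. A sketch with made-up field widths mirroring the C above:

```perl
use strict;
use warnings;

# Fixed-width record: a 10-char field followed by a 90-char field,
# mirroring the C union's recordtypea layout.
my $record = sprintf "%-10s%-90s", "GATTACA", "rest of record";

# One unpack call populates every field -- no loops, splits, or joins.
# The A template strips trailing whitespace from each field.
my ($field1, $field2) = unpack "A10 A90", $record;
print "$field1|$field2\n";    # prints GATTACA|rest of record
```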
    
  • Murray (unregistered) in reply to eViLegion

    In case the 'space' character changes in the future.

    $blank1 = " "; $blank2 = $blank1.$blank1; $blank4 = $blank2.$blank2; $blank8 = $blank4.$blank4; $blank16 = $blank8.$blank8; $blank32 = $blank16.$blank16;

    eViLegion:
    $blank1 = " "; $blank2 = "  "; $blank4 = "    "; $blank8 = "        "; $blank16 = "                "; $blank32 = "                                "; etc...
  • Julius Caesar (unregistered) in reply to Grown up
    Grown up:
    Sorry, some of us graduated from kindergarten years ago.

    Seriously, the FRIST meme was hackneyed on day one.

    Perhaps, but still far less hackneyed than taking the time out of your day to complain about it.

  • Norman Diamond (unregistered) in reply to Anonymous
    Anonymous:
    I don't like Perl. I lost a bet when I predicted that the first Perl 6 interpreter will be released before Duke Nukem Forever.
    The Perl 6 interpreter is coded in Perl 5, until it becomes stable enough to rewrite in Perl 6. Now remember that Perl is a write-only language, so every time the developers catch a bug, they can't fix the interpreter and they have to rewrite it from scratch.
  • Brad Gilbert (unregistered) in reply to Norman Diamond
    Norman Diamond:
    Anonymous:
    I don't like Perl. I lost a bet when I predicted that the first Perl 6 interpreter will be released before Duke Nukem Forever.
    The Perl 6 interpreter is coded in Perl 5, until it becomes stable enough to rewrite in Perl 6. Now remember that Perl is a write-only language, so every time the developers catch a bug, they can't fix the interpreter and they have to rewrite it from scratch.

    Wrong. The Rakudo Perl6 compiler is mostly written in Perl6, or a subset of it. In fact, the more low-level code gets rewritten in Perl6, the faster it tends to get.

    The Niecza Perl6 compiler is written in a mixture of C# and Perl6 (as far as I can tell from the github repository.)

    The Perlito Perl5/6 compiler/translator is (apparently) written in a variety of languages.

    There is even a Perl6 compiler written in Haskell named Pugs. It was the first real implementation of Perl6. It has fallen by the wayside though.

    Also I take issue with you calling Perl5 a write-only language. Just because you don't want to learn how to read, and write in it, doesn't mean it can't be done.

  • Cheong (unregistered)

    I wonder: if he had put the "blanks" inside a static array of increasing lengths, in an attempt to perform a table lookup for the "space conversion", would it still have ended up on TDWTF?

  • (cs) in reply to JimmyCrackedCorn
    JimmyCrackedCorn:
    faoileag:
    JimmyCrackedCorn:
    Of course the bigger WTF is having a large important application written in Perl. Really. Wow.
    Perl was the de facto standard for web programming at the turn of the century. You don't throw away thousands and thousands of lines of code just because in the meantime other scripting languages have gained more followers.
    You're right. My bad. Was thinking about anything written in the last 5 years or so.

    Still plenty of new code being written in Perl. More Perl coding than ever happening right now.

  • Norman Diamond (unregistered) in reply to Brad Gilbert
    Brad Gilbert:
    Norman Diamond:
    Anonymous:
    I don't like Perl. I lost a bet when I predicted that the first Perl 6 interpreter will be released before Duke Nukem Forever.
    The Perl 6 interpreter is coded in Perl 5, until it becomes stable enough to rewrite in Perl 6. Now remember that Perl is a write-only language, so every time the developers catch a bug, they can't fix the interpreter and they have to rewrite it from scratch.
    Wrong the Rakudo Perl6 compiler is mostly written in Perl6, or a subset of it. Actually the more lowlevel code that gets rewritten to Perl6, the faster it tends to get.

    The Niecza Perl6 compiler is written in a mixture of C# and Perl6 (as far as I can tell from the github repository.)

    The Perlito Perl5/6 compiler/translator is (apparently) written in a variety of languages.

    There is even a Perl6 compiler written in Haskell named Pugs. It was the first real implementation of Perl6. It has fallen by the wayside though.

    Also I take issue with you calling Perl5 a write-only language. Just because you don't want to learn how to read, and write in it, doesn't mean it can't be done.

    Oops. The next time I viciously attack myself for some or other WTF that I make myself, please come defend me from myself as vigorously as you defended Perl before swine.

    To be serious for a minute (a very short minute, it won't last long), I am aware that write-only code can be written in any language, even though Perl has pride of place next to APL, Intercal, and brainfuck.

  • (cs) in reply to Norman Diamond

    Whether Perl is write-only or not depends on the quality of the developer using it. A colleague sitting two cubicles away from me produces perfect Perl code. I myself try to. Our Chinese friends, however, are not so precise about that.

  • :-) (unregistered) in reply to JimmyCrackedCorn
    JimmyCrackedCorn:
    The developer was the bosses' 14 year old nephew?

    This story is about someone too amateurish to even comment upon.

    And yet you commented anyway.

  • Brad (unregistered) in reply to faoileag

    If it really is an old application, it might have been written in Perl 4, or by a Perl 4 programmer (my didn't appear until Perl 5), which could also explain the lack of use strict.
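    For readers who never saw Perl 4: the difference is easy to show in a few lines (a sketch; the variable names echo the article):

```perl
# Perl 4 style: every variable is an implicit package global,
# exactly the style seen in the article's $blank1 .. $blank10.
$blank4 = "    ";                 # legal only because strict is off here

# Perl 5 strictures make undeclared variables compile-time errors,
# so the typo'd globals this pattern invites can't slip through.
use strict;
use warnings;
my $blank8 = " " x 8;
print length($blank8), "\n";      # prints 8
```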

  • Charles F. (unregistered) in reply to Bananafish
    Bananafish:
    If you read a record that is defined as a single string of characters, you can copy it to another record at the top level and assign all descendants in one shot rather than having to loop at least once and then do splits and/or joins to get what you want.
    OK, I understand what a "record that is defined as a single string of characters" is, but how does such a record have "descendants" and what the hell is the "top level" of a record that is a string? I'd be happy to explain how to solve this problem in Java, if I understood it better. It sounds like the code you are looking for looks like this:
    Record record1 = Record.valueOf("GATACA");
    Record record2 = record1;
    Java is an object-oriented language. If you are thinking of data as strings, when the strings are really just encodings of the data, you'll probably write terrible, terrible Java code.

    Although admittedly I can't really follow the description quoted above, Perl is insanely great at handling data encoded as strings; in fact, that's its original area of focus. It seems like it would be at least as effective as the COBOL solution, if not better.

    Finally, this exact data (genome sequencing) is a common topic on "big data" discussions and I have never seen anyone post that any particular language is better at it than any other. What I can say is that for any particular approach, there is usually a language that expresses that approach most elegantly. "Direct translations" of things that are straightforward in COBOL tend to be disastrous in other languages.

  • someguy (unregistered) in reply to _darkstar_
    _darkstar_:
    !! Bang bang boolean conversion

    I'm not very familiar with many scripting languages besides bash, but even I know this from C.

  • Jay (unregistered) in reply to vindico
    vindico:
    faoileag:
    JimmyCrackedCorn:
    I understand legacy applications, but I still wonder about any calculation that could determine when technical debt makes it worth it to do a re-engineering/refurbishing/rewriting project.
    As long as it is possible to fix even critical bugs (say, a memory leak in an embedded application written in c++) and new features are not only demanded by customers but can also be added, the pain of rewriting a large legacy app is usually far too great for most companies.

    The rewrite takes time and a lot of manpower. Think 1, 2 or more years until app 2.0 has the same functionality of app 1.0. From a business point of view that's a lot of money to invest, and you want to see some return on that investment.

    Wow, one or two years would be like a dream come true. At my job, the Big Rewrite is up to 5 years (and counting), but still has less than half the functionality of the original. The guy who was in charge of this massive brain-fart quit a few months ago, and now it looks like nobody is really interested in finishing it. ;-)

    I don't buy a new car every year. Why not? Wouldn't a new car be better than my old car? Probably, but would it be ENOUGH better to justify the cost? At some point, of course, it becomes difficult to get spare parts for the old car, or it starts breaking down constantly, or for some other reason the maintenance cost becomes unacceptably high. Or a new car comes out with some feature that makes it worth junking the old car.

    There are lots of apps that I'd like to rewrite using a newer technology, or just to clean up ugly code. But if I did that, not only would I have to take time writing the code, but someone would have to test it, and there would still be the danger that I would be introducing bugs into a system that was working. For ... why? Just so I can say that now it's new, shiny, and pretty?

  • Jay (unregistered) in reply to faoileag
    faoileag:
    JimmyCrackedCorn:
    Of course the bigger WTF is having a large important application written in Perl. Really. Wow.
    Perl was the de facto standard for web programming at the turn of the century. You don't throw away thousands and thousands of lines of code just because in the meantime other scripting languages have gained more followers.

    Right or wrong, that's how web programming was ...

    No! Stop! Sorry, I lost my head there for a moment.

  • BillR (unregistered) in reply to Norman Diamond
    Norman Diamond:
    The Perl 6 interpreter is coded in Perl 5, until it becomes stable enough to rewrite in Perl 6. Now remember that Perl is a write-only language, so every time the developers catch a bug, they can't fix the interpreter and they have to rewrite it from scratch.

    That's the worst troll ever. Or you're incredibly ignorant. Not sure which. Both?

    Also: Perl's only a "write-only" language if you decide that you want it to be. That's what having a lot of flexibility gives you: a responsibility not to be lazy. Just last week I had to go back and update a script I wrote for a customer in 2008. It's only 450-ish lines long, but it was no problem adapting it for their new environment.

  • (cs) in reply to Jay
    Jay:
    I don't buy a new car every year. Why not? Wouldn't a new car be better than my old car? Probably, but would it be ENOUGH better to justify the cost? At some point, of course, it becomes difficult to get spare parts for the old car, or it starts breaking down constantly, or for some other reason the maintenance cost becomes unacceptably high. Or a new car comes out with some feature that makes it worth junking the old car.

    There are lots of apps that I'd like to rewrite using a newer technology, or just to clean up ugly code. But if I did that, not only would I have to take time writing the code, but someone would have to test it, and there would still be the danger that I would be introducing bugs into a system that was working for ... why? Just so I can say that now its new, shiny, and pretty?

    We need more car analogies

Leave a comment on “Drawing a Blank”
