• A cat that ain't laughing (unregistered) in reply to A Gould
    A Gould:
    Herbert:
    Great. So some people write ugly code in LabVIEW code. Can you find a language that a dedicated programmer *couldn't* write ugly code in with sufficient determination?

    Lolcode. Fairly sure anything written in that language is funny by definition.

    Fuck Off!

  • Johnny (unregistered) in reply to Meep
    Meep:
    Steve:
    Would some of the LabView evangelists please post screenshots of LabView programs that aren't incomprehensible? I've never seen one before. (That last one by Yair is about the best I've ever encountered).

    Not an evangelist, but I've used it and thought that, while it takes some work to understand, it's not bad.

    Any complex machine is going to take some work to comprehend. Ever tried reading a patent?

    But isn't that the point? Anything complex will need some work to understand. Where is the WTF? Are complex programs the WTF? I assume everyone in this industry is working on (or has at some stage worked on) complex programs. Sometimes it's just not possible to do things simply.

  • Alessandrs (unregistered) in reply to Boris Vladamir
    Boris Vladamir:
    Nagesh:
    This is naturel for most proceses. Once proces is imploding it will show less complex diagram. Once you explode proces, it will get huge, cumbersum and complex.
    Typical American stupidity. I heard story of United States Space Program's engineering of a pen to work in zero gravity whereas the Cosmonauts simply brought pencils!

    They say necessity is the mother of invention. The Reds needed to do this on the cheap, so they came up with a cheap solution. At the time, the yankees had loads of money funding space exploration, so simple solutions were invisible to them - they didn't have the need to save money.

    Project managers, take note: always claim to have about 25% of the budget you actually have - not only does that allow a massive buffer for overruns, it also helps your Techos to see the simpler, cheaper solutions, rather than reinventing the wheel (or, more commonly, a date/time class). The more money you have available, the more money a project costs. The more money a project costs, the more effort there probably is reinventing existing functionality simply because one of your monkeys "...never knew the standard libraries could do that...". A techo spending even half a day googling how to do it with standard libraries is far cheaper, in both the long and the short term, than having them recreate flawed equivalents.
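
    For instance, a throwaway C sketch using nothing but the standard library - the sort of thing half a day of googling turns up instead of yet another home-grown date/time class (the format string is only an example):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char buf[64];
        time_t now = time(NULL);   /* current time from the standard library */

        /* format it; no hand-rolled date/time class required */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&now));
        printf("%s\n", buf);
        return 0;
    }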

  • Alessandrs (unregistered) in reply to Harold III, Sr.
    Harold III:
    Boris Vladamir:
    Nagesh:
    This is naturel for most proceses. Once proces is imploding it will show less complex diagram. Once you explode proces, it will get huge, cumbersum and complex.
    Typical American stupidity. I heard story of United States Space Program's engineering of a pen to work in zero gravity whereas the Cosmonauts simply brought pencils!
    I've always found that story odd: people bragging about not being able to engineer something that works in zero gravity? What do you do with the pencil shavings? They didn't have digital pencils back in the '60s.

    Clearly, the Americans are the bright ones in this story, and the Russians are little brighter than the monkeys we sent up.

    Not sure the yankees ever actually got their pen working (despite a lot of effort). Getting rid of shavings is trivial, and the savings from not doing the research are a massive benefit.

    Do you know how much the research and design for the zero-gravity pen cost? Oh, wait on, yanks have never been responsible with money - that's why the GFC happened...

  • Alessandrs (unregistered) in reply to Meep
    Meep:
    Harold III:
    Boris Vladamir:
    Nagesh:
    This is naturel for most proceses. Once proces is imploding it will show less complex diagram. Once you explode proces, it will get huge, cumbersum and complex.
    Typical American stupidity. I heard story of United States Space Program's engineering of a pen to work in zero gravity whereas the Cosmonauts simply brought pencils!
    I've always found that story odd: people bragging about not being able to engineer something that works in zero gravity? What do you do with the pencil shavings? They didn't have digital pencils back in the '60s.

    Clearly, the Americans are the bright ones in this story, and the Russians are little brighter than the monkeys we sent up.

    Clearly the idiot is someone who thinks they're going to have graphite dust floating around sensitive electronics.

    <sarcasm> Oh yeah, we saw loads of Russian rockets falling out of the sky because they used pencils... </sarcasm>

    Hard to say the decision to use pencils is stupid (for whatever reason), if it actually worked... Yanks just didn't want to look stupid taking crayons up...

  • (cs) in reply to Mildred Bonk
    Mildred Bonk:
    I used to program LabVIEW for a living. While I was stuck doing that I made this demotivator: [image]

    One day, the pharaoh called in his best scribe and told him to take a letter: "To the king of Sumeria, our esteemed greetings! We are most gratified by your generous gift of one hundred oxen, ten thousand bushels of grain, and fifty virile young slaves."

    The scribe interrupted. "Excuse me, exalted one, but is 'virile' spelled with one testicle or two?"

  • Someone from here (unregistered) in reply to Yair
    Yair:
    Steve:
    Would some of the LabView evangelists please post screenshots of LabView programs that aren't incomprehensible? I've never seen one before. (That last one by Yair is about the best I've ever encountered).

    Actually, both of those are really bad examples, because they're very dense and messy.

    Here's an example of one of the functions in the project I currently have opened. It doesn't have any comments, but I can tell you that every time you call it like this it adds a line to a log with timing info and error details (if there was an error) and optionally saves the log to a pipe delimited text file.

    [image]

    Like other languages, LabVIEW has advantages and disadvantages. Some of the advantages include:

    1. Easy to get started with.
    2. Automatic memory management (and it had that for almost 25 years). No need to assign variables, etc.
    3. Writing parallel code is extremely easy and is almost unavoidable.
    4. The function shown above, for instance, has two return parameters and could have several more if I wanted it to.

    Some of the disadvantages:

    1. Easy to get started with, so you get untrained users.
    2. Automatic memory management, so your options of controlling memory allocations are limited. Might be an issue under some circumstances.
    3. Writing parallel code is extremely easy. If you don't know what you're doing, you're going to have lots and lots of race conditions.
    4. Working with SCC is a big issue, although recent versions of LabVIEW have improved this somewhat. Basic work (check in, check out, commit, update) is not much of an issue, but diffing and merging are. Again, this is not necessarily an issue, depending on your needs.

    See to me, that looks quite complex for logging debug and errors. This is probably because I'm more familiar with text-based programming, but the point is that graphical representation (and drag and drop programming) weren't really designed to be simpler than other programming, but were targeted at an audience that were more comfortable with diagrams than text (eg Electronic or Electrical Engineers).

    The merits (or lack thereof) of Labview are much the same as any other programming language or tool. You get people who know what they're doing who use it extremely well. You get (more than you want) people who don't know what they're doing and add too much complexity. You get (the majority, I suspect) who think they know what they're doing, and you get WTFs.

    Not knowing Labview, the example in the article to me isn't a WTF, because it appears to actually be working in a complex environment (although it may well fit into the 'think we know what we're doing' category). Despite what advocates of other languages might say, launching the Space Shuttle isn't actually as easy as:

    SpaceShuttle Endeavour = new SpaceShuttle();
    if (weather.isClear() && !weather.isWindy())
      Endeavour.Launch();
    
  • Jimmy (unregistered) in reply to tragomaskhalos
    tragomaskhalos:
    just me:
    tragomaskhalos:
    I am working on a project that uses a somewhat similar "visual programming" tool for a lot of its functionality. In my experience these things: a) look great in demos to non-technical types (hence sales); b) can be good for cobbling something together quickly; c) become completely and horribly unmanageable if you try to do anything even remotely complex. And since in any non-trivial system (c) will predominate ...
    c) only if you don't know what you're doing, or if the platform does not provide adequate tools for managing the complexity. LabVIEW does provide these tools (most importantly sub-VIs), so does MaxMSP (sub-patchers).

    The above applies just as well for any text-based languages, so what was your point again?

    In the case of the product we are using, it does not provide adequate tools, no - in LabVIEW, which I've not used, YMMD. For example on our project I have reimplemented a big wodge of it in Ruby - goodbye to unwieldy tangled and deeply nested diagrams, hello to succinct DSL-style notation with all the gubbins hidden away. This is the proper way to manage complexity, and is what a properly utilised text-based language will give you.

    Nor does point (a) apply to text-based languages - a big score for visual programming tools in the sales and marketing area is that they are pretty and colourful and allow for an easy sell of the seductive myth that non-programmers can use the thing, but what looks good in demos can quickly unravel in real use.

    Indeed!! Simplifying complex tasks is important, to keep code manageable. Simplifying programming is not important, as the people who program should be a (fairly) highly skilled subset of society. Making truck transmissions automatic is a convenience for a truck driver, but doesn't necessarily help Jo Schmo drive a vehicle of that size (although it might encourage him to try).

    We seem to have an obsession (we as people, not just in IT) with trying to simplify things to encourage 'anyone can do it', but the reality is we should leave most things to the people who have trained in those areas. Would you take your car to a mechanic who has never had formal training but has "...read lots of internet and knows simplest way to fix issue..." (perhaps Nagesh's neighbor)? Would you be happy for someone to fix your computer because "...I managed to get mine set up fine, and it's all been made easy in the last few years"?
    Most people (I suspect) will have a resounding "no" for both of these examples, so why do we insist that we have to simplify programming so that my cousin's third aunt's cat can program? Programming almost anything useful is no trivial task - even the best programmers will often have subtle bugs in even reasonably simple code. No matter how accessible we make the actual language, the logic required cannot trivially be taught...

    Sometimes, keeping things a little difficult (or at least having them appear difficult) is an advantage, because people don't fiddle. In my experience, one of the biggest failings of the Windows 95 (and 98) releases was oversimplification. Users who had little idea of what they were doing could 'explore' and find all sorts of (administration) settings to play with. A balance needs to be found to ensure that trivial tasks (for the technician) are not unnecessarily complicated, but that they are sufficiently obscure to keep people from playing "because we can".

  • Jonesy (unregistered) in reply to boog
    boog:
    Nagesh:
    boog:
    frits:
    At what point does this kind of stuff get offensive to actual Indians?
    Kind of a tree falling in the woods question, don't you think? If someone insults India and no actual Indians are around to hear it, are they offended?
    I am Indian only.
    That's highly unlikely.

    Leave him alone and he'll go away...He thrives on the attention.

    For the record, I reckon he is an Indian (though not necessarily in India) pretending that he is pretending to be an Indian.

    I haven't noticed the hours he's actually online, though, which could give some indication as to which part of Australia he's in

  • daqq (unregistered)

    Noooo! LabView... the horror! It's all coming back to me!!! Noooo....(goes to cry in the corner...)

  • Jonesy (unregistered) in reply to alegr
    alegr:
    Boris Vladamir:
    Дизайн то, что является надежной, и мир будет просто производить лучше дурак. [Roughly: "Design something that is reliable, and the world will simply produce a better fool."]
    Somebody has too much trust to Google translator...

    +1

    Notice the punctuation in both this and Nagesh's posts....?

  • heebd (unregistered) in reply to Yazeran
    Yazeran:
    iToad:
    Unfortunately, most LabVIEW code is written by end users, not programmers. I just inherited a pile of literal spahegetti code from the original author to see if I could clean it up, and eliminate a bunch of race conditions and a mysterious latch-up problem.

    One of the first things that I did was create a concurrent state machine description of the logic, and a text-based specification of what all of the sub VIs did. The original developer was astonished. He didn't realize that you could actually design the software before writing it. I also introduced him to comments.

    Moral of the story: Inheriting a large, badly written VBA application is bad. Inheriting a large, badly written LabVIEW application is very bad.

    Amen!

    I have also heard a lot of people saying LabView is soo good: 'you can just take your instrument and within a few minutes get a graph of what it is measuring'. All well and good, but add some 20 different devices (of some 5 to 10 different types) and all of a sudden you end up with some spahgetti like the one shown. My beef is that you still can't get a screen big enough that you can properly debug any LabView program doing anything remotely interesting. Add to that some interesting race conditions and you are set for disaster....

    I would prefer a program written in C anytime over Labview, as in C at least you know exactly in which order things get executed, and in my experience, in 99% of the time you never need to do things asynchronous anyways. If your application/problem requires that, then it gets more complicated, but then Labview may not be your choice regardless....

    Yours Yazeran

    Plan: To go to Mars one day with a hammer.

    And if you do need asynchronous stuff, you can either spawn child processes, multithread, or run multiple concurrent applications.
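
    A minimal sketch of the multithreading option, assuming POSIX threads are available (the worker function is just a placeholder):

    #include <pthread.h>
    #include <stdio.h>

    static void *slow_measurement(void *arg)
    {
        (void)arg;
        puts("acquiring data in the background...");
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;

        pthread_create(&worker, NULL, slow_measurement, NULL);
        puts("main thread keeps driving the test sequence");
        pthread_join(worker, NULL);   /* wait for the background work to finish */
        return 0;
    }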

    As you say, in a lab environment I would think you normally need to do things in a predetermined order...

  • Charles (unregistered) in reply to Yair

    It's not the use of sub-VIs that kills me. It's that they're unnecessarily nested within each other. Perhaps I wasn't clear...

    For example, using a VI as an equivalent of a function

    #1 VI <-> doThings(a,b,c) ... #400 VI <-> doDifferentThings(d,e)

    That's fine. I'm okay with this. This is reasonably close to good LabView code.

    The issue is that they managed to create inter-dependencies in each spiraling hole of sub-VIs that is 400 VIs deep and manages to require data that may or may not be available yet. So... (oh god)...

    Start -> Main VI -> Stop

    Where the Main VI contains some code and 1 sub VI (let's call it VI 1). VI 1 contains some different code and another sub VI (VI 2)...

    MainVI(VI1(VI2(VI3(VI4(VI5(VI6(VI7(VI8(VI9(VI10(...VI399(some code goes here)...))))))))))

    So rather than use sub-VIs for anything useful, the previous guy used them as buckets to throw his mess into. It's like a 400-deep nested if statement in C, if you weren't allowed to write any documentation and had to do it all in crayon.

    So I may have been unclear about that. It wasn't 400 functions, it was 35 unique, approximately 400-deep sub-VI clusters. Sadness.

  • Dirge (unregistered)

    Wow! It's Rocky's Boots for grown-ups! Is there a boot object in Labview?

    [image]
  • Not Nagesh or Vladamir (unregistered) in reply to Yair
    Yair:
    Anon:
    Of course it isn't likely, but why even take the risk?
    <cut this and that so that we lose all sight of context> .... but that applies to almost any language. Code rarely stays alive for a very long time. .... <snipped some things here too>

    WOW!! You must be fresh out of uni/college. Did you know, the world is not full of applications written in C# and Java? Nor Python, Ruby, Haskell, PHP, perl etc....

    COBOL is still rife (even without CoolGen and the like) and I'm guessing there's still development done in Pascal, Fortran, Ada, LISP (almost Certainly)...

    Yup. Code sure as hell don't stay alive for long....

  • Pervert (unregistered)

    Touched by his noodly appendage.

  • Davo (unregistered)

    The Labview code is very poorly written. Poor use of sub-VIs. Poor layout. Whoever wrote this code does not understand the importance of communicating the functionality to the poor bunny who has to inherit the code one day. I could write an essay on the problems in the Labview code example. It could be rewritten by someone competent so that it is easily understood and maintainable. As with C or assembly language, there is nothing more demoralizing than inheriting someone else's substandard code.

  • TeaDrinker (unregistered)

    I spent a few months doing work with LabView for an aircraft simulator (the user controls, and a hydraulic feedback system). If you have a large monitor and a fast CPU it's a lot of fun to work with.

    Debugging the program is fairly interesting too, as you get a visual indication of where the point of execution is as it goes.

  • alegr (unregistered) in reply to Nagesh
    Nagesh:
    VLADIMIR IS NOT SOUNDING LIKE REAL NAME.
    VLAD_I_MIR is a real russian name. VLAD_A_MIR is NOT.
  • Earp (unregistered) in reply to Harold III, Sr.

    They have had mechanical (extending lead) pencils since 1822. Your comment has more Russian than American in it.

  • reductio ad ridiculum (unregistered) in reply to JakeyC
    JakeyC:
    I'd like to think that after all that, the output is just "Hello World".
    +1
  • reductio ad ridiculum (unregistered)

    Damn, did school let out early for summer?

  • Horse (unregistered) in reply to Zylon

    And a -1 for you for pointing out flaws in the original, correct, grammar.

  • ref (unregistered)

    I don't really see what the WTF is here, it's a circuit diagram... so what. If it seems too complex for you to understand, it probably is, so go back to your OOP rubbish and stay away from electron beams and multipliers.

  • J.D. (unregistered) in reply to Anonymous
    Anonymous:
    I'm not really seeing the WTF here, any sufficiently complex LabView system ends up looking like that. Unless the WTF is LabView itself, in which case I wholeheartedly agree.
    Not necessarily. If designed with readability in mind, you can make the "code" with Labview look like something readable and useful. It does take some time, though.
  • Yair (unregistered) in reply to boog
    boog:
    Finally somebody is just admitting the real reason they choose a particular language/environment, and not rambling off a bunch of arbitrary, subjectively-chosen metrics in order to "prove" that their way of doing things is the best.

    Definitely. That's the main reason for me. I don't work in a lab. I don't usually build test equipment. I often write programs which are completely outside the realm of "natural" LabVIEW programs (which is meaningless, since it is a general purpose language, although I'll readily agree it's limited in many ways and completely unsuitable for some types of applications).

    I use it because I like it and it works for me (I'm not even talking about actual technical advantages it has), and I can perfectly understand someone who doesn't like it because they don't get it (it happens a lot, as it's a completely different programming paradigm and it probably uses different parts of your brain).

    I can also understand people who don't use it because of specific valid reasons (It costs too much, I'm building the next Call of Duty or Angry Birds and it won't work for that, I'm working with a large team and couldn't get it to work, the IDE crashes too much for my taste, I can't find good programmers, etc.). I work with it on a daily basis and am well aware of its shortcomings.

    But the majority of the people I see online who complain about it, like most of those who did here, seem to simply be people who used it for a very short time, didn't get it at all and found some excuse for why it's terrible.

    And for those who actually care, it also offers interesting insights into language design and a programming paradigm completely different from most text based languages. If you want, you can actually download a fully functional time-limited evaluation version from NI's site (at least for Windows), but if I were you I'd install it on a VM, because it also comes with a lot of extra services and stuff you don't want.

  • Antman (unregistered) in reply to Goosey Goo
    Goosey Goo:
    Zylon:
    Mike Caron:
    What does it do? Without context, it may be justifiable.

    (That said, +1 for literal spaghetti code)

    And a -1 to you for misusing the word "literally".

    I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap?

    dictionary.com:
    lit·er·al /ˈlɪtərəl/ [lit-er-uhl] –adjective 1. in accordance with, involving, or being the primary or strict meaning of the word or words; not figurative or metaphorical: the literal meaning of a word. 2. following the words of the original very closely and exactly: a literal translation of Goethe. 3. true to fact; not exaggerated; actual or factual: a literal description of conditions.

    From what I can tell, the code itself is -not- made of spaghetti. I may be wrong, though, and LabView may save its code using spaghetti noodles. The code may MAKE spaghetti, which would be awesome.

  • Yair (unregistered) in reply to Someone from here
    Someone from here:
    [image]

    See to me, that looks quite complex for logging debug and errors. This is probably because I'm more familiar with text-based programming...

    Probably, and be aware that this code also holds an internal buffer with the log data. I'm fairly sure you couldn't get equivalent code to be much simpler in most text-based languages in terms of the amount of code it takes.

    ...but the point is that graphical representation (and drag and drop programming) weren't really designed to be simpler than other programming, but were targeted at an audience that were more comfortable with diagrams than text (eg Electronic or Electrical Engineers).

    Actually, it was designed to be simpler in a number of very important ways (such as being more intuitive and dealing with all the icky stuff of programming such as memory allocations on its own), and it was targeted at scientists and lab users originally, but you are correct that it only works well for people who can read it more easily than text.

    The merits (or lack thereof) of Labview are much the same as any other programming language or tool. You get people who know what they're doing who use it extremely well. You get (more than you want) people who don't know what they're doing and add too much complexity. You get (the majority, I suspect) who think they know what they're doing, and you get WTFs.

    Not knowing Labview, the example in the article to me isn't a WTF, because it appears to actually be working in a complex environment (although it may well fit into the 'think we know what we're doing' category).

    All that you're saying is completely true, but the code in the original submission is still bad code. It probably works, which is an important point in its favor, but it puts everything in one big function instead of splitting it up into logical units and it is impossibly messy. It has no comments and things are overlapping, etc. It's bad.

    But, I can't say that this surprises me. A lot of people write bad code in LabVIEW. In a way, the IDE encourages it, because it allows you to more easily write code which actually works, thus not requiring training, etc.

  • Yair (unregistered) in reply to Not Nagesh or Vladamir
    Not Nagesh or Vladamir:
    WOW!! You must be fresh out of uni/college. Did you know, the world is not full of applications written in C# and Java? Nor Python, Ruby, Haskell, PHP, perl etc....

    COBOL is still rife (even without CoolGen and the like) and I'm guessing there's still development done in Pascal, Fortran, Ada, LISP (almost Certainly)...

    Yup. Code sure as hell don't stay alive for long....

    I didn't say code doesn't stay alive for long. I said RARELY. I'm fairly sure that the vast majority of Fortran and COBOL programs ever written are no longer in use.

    But you're basically making the same point I was - even if a language becomes extinct, you can still develop in it, for a while, anyway.

    And since you insist on irrelevant points, the last time I was in any kind of school was 12 years ago.

  • LISP Programmer (unregistered) in reply to Wonk
    Wonk:
    You are in a maze of twisty little macros, all alike

    Exits are North, South and Dennis.

  • Ian C. (unregistered)

    Hey, at least he took the screen shot during lunch time.

  • Don (unregistered) in reply to Meep
    Meep:
    Harold III:
    Boris Vladamir:
    Nagesh:
    This is naturel for most proceses. Once proces is imploding it will show less complex diagram. Once you explode proces, it will get huge, cumbersum and complex.
    Typical American stupidity. I heard story of United States Space Program's engineering of a pen to work in zero gravity whereas the Cosmonauts simply brought pencils!
    I've always found that story odd: people bragging about not being able to engineer something that works in zero gravity? What do you do with the pencil shavings? They didn't have digital pencils back in the '60s.

    Clearly, the Americans are the bright ones in this story, and the Russians are little brighter than the monkeys we sent up.

    Clearly the idiot is someone who thinks they're going to have graphite dust floating around sensitive electronics.

    Erm... the first clutch (refillable) pencil was designed in the 1800s. If they weren't so advanced as to use clutch/propelling pencils, they could have used normal pencils with an enclosed sharpener (also from the 1800s). Pencil shavings need not be an issue.

  • foo (unregistered) in reply to Hoffmann
    Hoffmann:
    Lollipops? I found walter (twice)!
    That's all you can do?

    I found Bin Laden!

  • Nagesh (unregistered) in reply to Don
    Don:
    Meep:
    Harold III:
    Boris Vladamir:
    Nagesh:
    This is naturel for most proceses. Once proces is imploding it will show less complex diagram. Once you explode proces, it will get huge, cumbersum and complex.
    Typical American stupidity. I heard story of United States Space Program's engineering of a pen to work in zero gravity whereas the Cosmonauts simply brought pencils!
    I've always found that story odd: people bragging about not being able to engineer something that works in zero gravity? What do you do with the pencil shavings? They didn't have digital pencils back in the '60s.

    Clearly, the Americans are the bright ones in this story, and the Russians are little brighter than the monkeys we sent up.

    Clearly the idiot is someone who thinks they're going to have graphite dust floating around sensitive electronics.

    Erm... the first clutch (refillable) pencil was designed in the 1800s. If they weren't so advanced as to use clutch/propelling pencils, they could have used normal pencils with an enclosed sharpener (also from the 1800s). Pencil shavings need not be an issue.
    Link, or not did hapen.

  • QJ (unregistered) in reply to Yair
    Yair:
    Not Nagesh or Vladamir:
    WOW!! You must be fresh out of uni/college. Did you know, the world is not full of applications written in C# and Java? Nor Python, Ruby, Haskell, PHP, perl etc....

    COBOL is still rife (even without CoolGen and the like) and I'm guessing there's still development done in Pascal, Fortran, Ada, LISP (almost Certainly)...

    Yup. Code sure as hell don't stay alive for long....

    I didn't say code doesn't stay alive for long. I said RARELY. I'm fairly sure that the vast majority of Fortran and COBOL programs ever written are no longer in use.

    But you're basically making the same point I was - even if a language becomes extinct, you can still develop in it, for a while, anyway.

    And since you insist on irrelevant points, the last time I was in any kind of school was 12 years ago.

    Possibly worth pointing out that if an application is of high quality (e.g. does what it's supposed to, designed so as to be versatile enough to handle a considerable range of changing requirements, runs quickly and efficiently) there will often be considerable resistance to it being replaced by something more modern. Frequently, the main reason for such a program being replaced is that the hardware it runs on is obsolete. If you encounter a program that's over a decade old, treat it with extreme respect because it probably (but not inevitably) means it's pretty damn good.

  • (cs) in reply to foo
    foo:
    Hoffmann:
    Lollipops? I found walter (twice)!
    That's all you can do?

    I found Bin Laden!

    I found the G-spot!

    (showing my maturity)

  • Current (unregistered)

    I develop Labview programs for test equipment setups.

    The commenters who are critical of Labview are mostly right.

    Labview certainly has its advantages; putting together simple GUIs in it is very quick, quicker than with the IDEs and widget layout editors for other languages.

    It has many disadvantages though, especially for large programs. To begin with, the sequence of execution isn't defined by default; it has to be defined using data flow or sequence structures. In most circumstances this isn't what you want, and it creates race conditions.

    Instead of subroutines there are "SubVIs", which are similar. A bunch of wires are wired into a SubVI, and when all the inputs are ready it runs. The problem with this is that it tends to highlight irrelevant details. Suppose you're writing a program that calls a function to command a piece of test equipment. You need to put in information to specify the bus the test box is on and the address on that bus. In a normal language that could just be a couple of variables that are used as arguments in a function call. But, in Labview parameters like this have to be wires and those wires clutter up the diagram. (The "bundles" feature can help with this a bit, but not much).
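
    For comparison, here's roughly what I mean by "a couple of variables used as arguments" - a toy C sketch, where command_instrument is just a stand-in and not any real driver API:

    #include <stdio.h>

    /* hypothetical helper standing in for a GPIB/VISA write */
    static void command_instrument(int bus, int address, const char *command)
    {
        printf("bus %d, addr %d -> %s\n", bus, address, command);
    }

    int main(void)
    {
        int bus = 0;        /* configuration set up once... */
        int address = 22;

        command_instrument(bus, address, "*RST");        /* ...and mentioned only at the call sites */
        command_instrument(bus, address, "MEAS:VOLT?");
        return 0;
    }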

    Using SubVIs is also difficult. By default each SubVI is a new file, and its name is global to a Labview session. That means that if I write a SubVI called "Foo", and there's another, different SubVI called "Foo" that another Labview program uses, and both programs are loaded into the same session, then one of them won't work or will have bugs. This means that the sequence in which programs are loaded into a Labview session can be significant. This can be avoided by using libraries, which have their own namespaces, but changing sets of SubVIs to libraries is time-consuming and inflexible. A better solution is to go through all the SubVIs in the entire tree of programs you use and make sure there are no name duplicates.

  • fru (unregistered)

    Just press CTRL+U. That should solve everything ... or crash Labview. So win/win.

  • Yair (unregistered) in reply to Current
    Current:
    I develop Labview programs for test equipment setups.

    Finally, someone who's critical of LabVIEW and provides actual arguments.

    The commenters who are critical of Labview are mostly right.

    Uh, no they're not. At least not the ones who participated here and gave BS criticisms.

    It has many disadvantages though, especially for large programs.

    Correct, but it's nothing that's not manageable if you know what you're doing. My programs are large (ish, because how do you define large? Let's just say that they're not just displaying data on a graph) and they turn out just fine.

    To begin with, the sequence of execution isn't defined by default; it has to be defined using data flow or sequence structures. In most circumstances this isn't what you want, and it creates race conditions.

    Again, you have to know what you're doing. Write it correctly and it will work correctly. Generally, in good code the elements of code which execute in parallel are either designed to run in parallel (such as completely separate processes) or are two unrelated pieces of code where the order of execution doesn't matter. If you have race conditions then YOU wrote buggy code. You can't blame the system because you don't understand its rules.

    But, in Labview parameters like this have to be wires and those wires clutter up the diagram. (The "bundles" feature can help with this a bit, but not much).

    That's like saying that the characters in your text editor are cluttering up the whitespace on your screen. The wires are an inherent part of the system. Write it in a clean fashion and your diagram will not be cluttered. That's not always easy, and it doesn't always work out, but it certainly is possible.

    By default each SubVI is a new file, and its name is global to a Labview session.

    No, it isn't. It's global to an "application instance". Use a project and you won't have these problems. Of course, there are also technical and historic reasons for this design, some of which offer advantages to this design. And the IDE warns you if you have conflicts. If you ignore those warnings, then you can't be surprised that you have problems.

    But, like I said, at least you actually used it and you know what you're talking about much more than the others who have posted here.

    Again, this is the same with every language and every IDE - it has advantages and disadvantages. It has things it's good at and things it's not good at. And it requires you to know what you're doing if you expect to produce good code.

  • Nagesh (unregistered) in reply to Yair

    tl;dr

  • Danny (unregistered)

    "I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap? "

    They believe 'spaghetti code' to be a figurative term as the code is not actually spaghetti. However the term has become a figure of speech in and of itself (not 'code that is spaghetti' but precisely 'spaghetti code') ergo blurring the line between a literal and figurative statement. It is literally the figure of speech but not literally code that is spaghetti (the meaning of the figure of speech). IMO the use is perfectly valid; however others might not agree.

  • anonymous_coder() (unregistered) in reply to Richard
    Richard:
    Thankfully you can interface to most of the NI devices from C, and you don't need labview at all.

    Incidentally, I have the interesting problem of an NI4462 card which was sold to me a few months ago as being "supported on Linux". This support uses a nasty binary blob kernel driver that is only available for distros about 4 years old (eg Mandrake 2008). Not impressed

    Ran into that with their USB drivers too - only 32-bit, no more than 4 GB of RAM, but at least we could use RHEL 5. But it was an utter pain to deal with. I was trying to write a shim between the NI data acquisition and some Matlab code. Let me tell you how much fun it is to try and run Labview and Matlab on the same machine with less than 4 GB of RAM...

  • Win Su (unregistered)

    To letting you guys know...before he leave Japan, Win Su and Alex go out for night of heavy drinking. Win Su have massive headache and didn't see Alex stopping from whiskey. Maybe tomorrow, you guys, ok, bye.

  • The Gang of Four (unregistered) in reply to Someone from here

    "...the pretty patterns that well-formatted code makes."

    yeah, cos that's what we were talking about

  • Current (unregistered) in reply to Yair
    Yair:
    Again, you have to know what you're doing. Write it correctly and it will work correctly. Generally, in good code the elements of code which execute in parallel are either designed to run in parallel (such as completely separate processes) or are two unrelated pieces of code where the order of execution doesn't matter. If you have race conditions then YOU wrote buggy code. You can't blame the system because you don't understand its rules.

    I agree. My point is though that in Labview the default is parallel execution based on dataflow. In my opinion for most problems that Labview is targeted at that's the wrong default. In languages like C and Java the default is sequential execution and the programmer must very explicitly request parallel behaviour. In Labview it's the other way around: sequential behaviour must be explicitly requested. But parallel execution is only really useful if there are slow blocks of code that can be executed in parallel. I write Labview programs that drive test equipment (which is the normal use of Labview), and I've never found an instance where this parallelism can save execution time. In all of my programs the test equipment or devices under test are what determines performance.

    That's like saying that the characters in your text editor are cluttering up the whitespace on your screen. The wires are an inherent part of the system. Write it in a clean fashion and your diagram will not be cluttered. That's not always easy, and it doesn't always work out, but it certainly is possible.

    The wires are an intrinsic part of the program, but in many cases they are superficial detail. Text-based languages have the same problem in some cases where Labview avoids it. For example, in C it's necessary to allocate memory and deallocate it, and the code for doing this must exist alongside the code for working on data structures. In Labview this isn't needed; the detail of memory allocation is dealt with by the runtime, which makes programming easier.

    But the issue of extraneous detail does occur in other circumstances. Suppose I have a C program that programs a test instrument with the variable Foo and reads the variable Bar from it in a loop. In that C program I can have a function to operate the test box, say "TestX(Addr, Param1, Param2, Foo, Bar)". In this case "Addr", "Param1" & "Param2" are parameters set up by the user. In the C program these parameters are kept in the background; they are detail. They occur once where they're set and again where they're read in the call to "TestX"; they don't affect the intervening program. In Labview though they do affect the rest of the program because the wires needed to carry the values from place to place must be laid out. Of course local variables and clusters/bundles can be used here, but I'd argue they don't really remove the problem.
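
    A rough C sketch of that loop (TestX and the parameter names are the made-up ones from above, not a real instrument driver):

    #include <stdio.h>

    /* stand-in: program the instrument with foo, read bar back */
    static void TestX(int addr, int param1, int param2, double foo, double *bar)
    {
        (void)addr; (void)param1; (void)param2;
        *bar = foo * 2.0;
    }

    int main(void)
    {
        int addr = 7, param1 = 1, param2 = 2;   /* user-supplied setup: appears here... */
        double bar = 0.0;

        for (double foo = 0.0; foo < 5.0; foo += 1.0) {
            TestX(addr, param1, param2, foo, &bar);   /* ...and again only here */
            printf("foo=%.1f -> bar=%.1f\n", foo, bar);
        }
        return 0;
    }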

    I agree that with care Labview programs can be laid out in a readable way. I've done that with ~70% of those VIs I've inherited from my predecessors and I have the remaining 30% left to do. But, this layout takes a lot of time (and the automated clean up doesn't really do the job).

    I didn't know about the "application instance" thing, thanks for telling me, I'll check it out.

  • (cs) in reply to Danny
    Danny:
    "I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap? "

    They believe 'spaghetti code' to be a figurative term as the code is not actually spaghetti. However the term has become a figure of speech in and of itself (not 'code that is spaghetti' but precisely 'spaghetti code') ergo blurring the line between a literal and figurative statement. It is literally the figure of speech but not literally code that is spaghetti (the meaning of the figure of speech). IMO the use is perfectly valid; however others might not agree.

    *raises hand* I disagree; if I had code made out of actual spaghetti, what word would I use to describe it? Aside from "delicious"?

    It's important that "figuratively" and "literally" retain separate (and opposite) meanings. "Literally" doesn't reinforce a figure of speech; if it did, figures of speech would take over the language, since we'd never be able to make a distinction.

    For example: If you've been walking all day and you say your feet are on fire, it is a figure of speech. But if you say your feet are literally on fire, it's no longer a figure of speech and you should seek a fire extinguisher and a doctor.

  • Yair (unregistered) in reply to Current
    Current:
    I agree. My point is though that in Labview the default is parallel execution based on dataflow. In my opinion for most problems that Labview is targeted at that's the wrong default.

    I wouldn't call it "default". It's an intrinsic part of the data flow paradigm, so it's not there just to increase performance (although there are cases where it can) and you can't actually disable it. Like you (and most other users, I suspect), I write mostly sequential code, and while writing long sequential code in LabVIEW isn't as easy or as elegant as it can be in text, I don't think it's difficult once you get the hang of it.

    In Labview though they do affect the rest of the program because the wires needed to carry the values from place to place must be laid out. Of course local variables and clusters/bundles can be used here, but I'd argue they don't really remove the problem.

    That's a perfectly valid point and you're right. There is no safe and easy way in LabVIEW, other than a wire, to carry many unrelated pieces of data from one point in the code to another. Local and global variables are easy but not safe. Other methods, such as queues, are safe but not easy.

    That said, it should be noted that this only affects certain kinds of diagrams and there are ways of working around it (for instance, if your code is a state machine made up of a loop and a case structure, you can add a shift register which will hold a cluster of state data for the state machine. Then, writing and reading data into that cluster is relatively clean and easy).
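
    If it helps, here's a rough text-language analogue of that pattern in C (the struct stands in for the state-data cluster on the shift register, the switch for the case structure, and the states themselves are made up):

    #include <stdbool.h>
    #include <stdio.h>

    enum state { STATE_INIT, STATE_MEASURE, STATE_DONE };

    struct state_data {            /* the "cluster" carried from iteration to iteration */
        enum state next;
        int sample_count;
    };

    int main(void)
    {
        struct state_data sd = { STATE_INIT, 0 };
        bool running = true;

        while (running) {                       /* the loop */
            switch (sd.next) {                  /* the case structure */
            case STATE_INIT:
                sd.sample_count = 0;
                sd.next = STATE_MEASURE;
                break;
            case STATE_MEASURE:
                sd.sample_count++;              /* pretend to take a reading */
                sd.next = (sd.sample_count < 3) ? STATE_MEASURE : STATE_DONE;
                break;
            case STATE_DONE:
                printf("took %d samples\n", sd.sample_count);
                running = false;
                break;
            }
        }
        return 0;
    }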

    Also, if you use classes, a lot of the code which has the problem you describe goes away, because the object keeps the value internally.

  • coz (unregistered)

    Now I know why Intel f*cked up Sandy Bridge... they tested it with that!!!

  • QJ (unregistered) in reply to boog
    boog:
    Danny:
    "I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap? "

    They believe 'spaghetti code' to be a figurative term as the code is not actually spaghetti. However the term has become a figure of speech in and of itself (not 'code that is spaghetti' but precisely 'spaghetti code') ergo blurring the line between a literal and figurative statement. It is literally the figure of speech but not literally code that is spaghetti (the meaning of the figure of speech). IMO the use is perfectly valid; however others might not agree.

    *raises hand* I disagree; if I had code made out of actual spaghetti, what word would I use to describe it? Aside from "delicious"?

    It's important that "figuratively" and "literally" retain separate (and opposite) meanings. "Literally" doesn't reinforce a figure of speech; if it did, figures of speech would take over the language, since we'd never be able to make a distinction.

    For example: If you've been walking all day and you say your feet are on fire, it is a figure of speech. But if you say your feet are literally on fire, it's no longer a figure of speech and you should seek a fire extinguisher and a doctor.

    LOL. In fact, I literally laughed my head off. Good job I can touch-type. I've set one of my staff the task of finding out which corner it must have rolled into (all I can see from the angle my eyes ended up are two walls and a bit of carpet).

  • Nagesh (unregistered) in reply to QJ
    QJ:
    boog:
    Danny:
    "I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap? "

    They believe 'spaghetti code' to be a figurative term as the code is not actually spaghetti. However the term has become a figure of speech in and of itself (not 'code that is spaghetti' but precisely 'spaghetti code') ergo blurring the line between a literal and figurative statement. It is literally the figure of speech but not literally code that is spaghetti (the meaning of the figure of speech). IMO the use is perfectly valid; however others might not agree.

    *raises hand* I disagree; if I had code made out of actual spaghetti, what word would I use to describe it? Aside from "delicious"?

    It's important that "figuratively" and "literally" retain separate (and opposite) meanings. "Literally" doesn't reinforce a figure of speech; if it did, figures of speech would take over the language, since we'd never be able to make a distinction.

    For example: If you've been walking all day and you say your feet are on fire, it is a figure of speech. But if you say your feet are literally on fire, it's no longer a figure of speech and you should seek a fire extinguisher and a doctor.

    LOL. In fact, I literally laughed my head off. Good job I can touch-type. I've set one of my staff the task of finding out which corner it must have rolled into (all I can see from the angle my eyes ended up are two walls and a bit of carpet).

    Before you atempt humour, it is helpful to be having some first.

Leave a comment on “Labview Spaghetti”
