Admin
Fuck Off!
Admin
But isn't that the point? Anything complex will need some work to understand. Where is the WTF? Are complex programs the WTF? I assumed everyone in this industry has (at some stage) worked on complex programs. Sometimes it's just not possible to do things simply.
Admin
They say necessity is the mother of invention. The Reds needed to do this on the cheap, so they came up with a cheap solution. At the time, the Yankees had loads of money funding space exploration, so simple solutions were invisible to them - they didn't have the need to save money.
Project managers, take note: always claim to have about 25% of the budget you actually have. Not only does that allow a massive buffer for overruns, it also helps your techos see the simpler, cheaper solutions rather than reinventing the wheel (or, more commonly, a date/time class). The more money you have available, the more a project costs. The more a project costs, the more effort has probably gone into reinventing existing functionality simply because one of your monkeys "...never knew the standard libraries could do that...". A techo spending even half a day googling how to do it with the standard libraries is far cheaper, in both the short and the long term, than having them recreate flawed equivalents.
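For what it's worth, here is a minimal C sketch of the "use the standard library instead of reinventing a date/time class" point; the format string and variable names are only illustrative, but the idea is that <time.h> already does the calendar math for you.

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Let the standard library do the calendar work -- no hand-rolled
       date/time class, no home-grown leap-year logic. */
    time_t now = time(NULL);
    struct tm *local = localtime(&now);
    if (local == NULL)
        return 1;

    char buf[64];
    /* strftime knows about month lengths, day names, locales, etc. */
    if (strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", local) == 0)
        return 1;

    printf("Current local time: %s\n", buf);
    return 0;
}

Half a day with the man pages really is cheaper than debugging the flawed equivalent later.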
Admin
Not sure the Yanks ever actually got their pen working (despite a lot of effort). Getting rid of shavings is trivial, and the savings from not doing the research are a massive benefit.
Do you know how much the research and design for the zero-gravity pen cost? Oh wait, the Yanks have never been responsible with money - that's why the GFC happened...
Admin
It's hard to call the decision to use pencils stupid (whatever the reason) if it actually worked... The Yanks just didn't want to look stupid taking crayons up...
Admin
One day, the pharaoh called in his best scribe and told him to take a letter: "To the king of Sumeria, our esteemed greetings! We are most gratified by your generous gift of one hundred oxen, ten thousand bushels of grain, and fifty virile young slaves."
The scribe interrupted. "Excuse me, exalted one, but is 'virile' spelled with one testicle or two?"
Admin
See, to me that looks quite complex for logging debug messages and errors. That's probably because I'm more familiar with text-based programming, but the point is that graphical representation (and drag-and-drop programming) wasn't really designed to be simpler than other programming; it was targeted at an audience more comfortable with diagrams than with text (e.g. electronic or electrical engineers).
The merits (or lack thereof) of LabView are much the same as for any other programming language or tool. You get people who know what they're doing and use it extremely well. You get (more than you want) people who don't know what they're doing and add too much complexity. And you get (the majority, I suspect) people who think they know what they're doing, and you get WTFs.
Not knowing LabView, the example in the article isn't a WTF to me, because it appears to actually be working in a complex environment (although it may well fit into the 'think we know what we're doing' category). Despite what advocates of other languages might say, launching the Space Shuttle isn't actually as easy as:
Admin
Indeed!! Simplifying complex tasks is important, to keep code manageable. Simplifying programming is not important, because the people who program should be a (fairly) highly skilled subset of society. Making truck transmissions automatic is a convenience for a truck driver, but it doesn't necessarily help Joe Schmo drive a vehicle of that size (although it might encourage him to try).
We seem to have an obsession (we as people, not just in IT) with simplifying things so that 'anyone can do it', but the reality is we should leave most things to the people who have trained in those areas. Would you take your car to a mechanic who has never had formal training but has "...read lots of internet and knows simplest way to fix issue..." (perhaps Nagesh's neighbor)? Would you be happy for someone to fix your computer because "...I managed to get mine set up fine, and it's all been made easy in the last few years"?
Most people (I suspect) will have a resounding "no" for both of these examples, so why do we insist on simplifying programming so that my cousin's third aunt's cat can program? Programming almost anything useful is no trivial task - even the best programmers will often have subtle bugs in even reasonably simple code. No matter how accessible we make the actual language, the logic required cannot be trivially taught...
Sometimes, keeping things a little difficult (or at least having them appear difficult) is an advantage, because people don't fiddle. In my experience, one of the biggest failings of the Windows 95 (and 98) releases was oversimplification. Users who had little idea of what they were doing could 'explore' and find all sorts of (administration) settings to play with. A balance needs to be found so that trivial tasks (for the technician) are not unnecessarily complicated, but are sufficiently obscure to keep people from playing "because we can".
Admin
Leave him alone and he'll go away... He thrives on the attention.
For the record, I reckon he is an Indian (though not necessarily in India) pretending that he is pretending to be an Indian.
I haven't noticed the hours he's actually online, though, which could give some indication as to which part of Australia he's in.
Admin
Noooo! LabView... the horror! It's all coming back to me!!! Noooo....(goes to cry in the corner...)
Admin
+1
Notice the punctuation in both this and Nagesh's posts....?
Admin
And if you do need asynchronous stuff, you can either spawn child processes, multithread, or run multiple concurrent applications.
As you say, in a lab environment I would think you normally need to do things in a predetermined order...
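(If anyone wants the flavour of the "multithread it yourself" option, here's a minimal pthreads sketch; the worker function and its message are invented purely for illustration, and it needs to be built with -lpthread.)

#include <pthread.h>
#include <stdio.h>

/* Hypothetical background job that runs concurrently with main(). */
static void *log_flusher(void *arg)
{
    const char *name = arg;
    printf("[%s] flushing buffered log entries...\n", name);
    return NULL;
}

int main(void)
{
    pthread_t worker;

    /* Kick off the asynchronous work... */
    if (pthread_create(&worker, NULL, log_flusher, "flusher") != 0) {
        perror("pthread_create");
        return 1;
    }

    /* ...carry on with the main, predetermined-order work here... */
    printf("[main] doing the sequential work\n");

    /* ...and join before exiting so the worker isn't cut short. */
    pthread_join(worker, NULL);
    return 0;
}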
Admin
It's not the use of sub-VIs that kills me. It's that they're unnecessarily nested within each other. Perhaps I wasn't clear...
For example, using a VI as the equivalent of a function:
VI #1 <-> doThings(a,b,c) ... VI #400 <-> doDifferentThings(d,e)
That's fine. I'm okay with this. This is reasonably close to good LabView code.
The issue is that they managed to create inter-dependencies in each spiraling hole of sub-VIs, 400 VIs deep, which manages to require data that may or may not be available yet. So... (oh god)...
Start -> Main VI -> Stop
Where the Main VI contains some code and one sub-VI (let's call it VI 1). VI 1 contains some different code and another sub-VI (VI 2)...
MainVI(VI1(VI2(VI3(VI4(VI5(VI6(VI7(VI8(VI9(VI10(...VI399(some code goes here)...)))))))))))
So rather than use sub-VIs for anything useful, the previous guy used them as buckets to throw his mess into. It's like a 400-deep nested if statement in C, if you weren't allowed to write any documentation and had to do it all in crayon.
So I may have been unclear about that. It wasn't 400 functions; it was 35 unique, approximately 400-deep sub-VI clusters. Sadness.
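(For anyone who doesn't read LabView, here is a crude C analogue of the difference - all function names are made up. The first style is the "buckets to throw the mess into" nesting described above; the second is the flat decomposition sub-VIs are actually meant for.)

/* The "bucket" style: each function exists only to wrap the next one,
   so understanding the top level means descending through hundreds of
   layers. (Names are hypothetical; imagine ~400 of these.) */
double step399(double x) { return x * 2.0; }     /* the actual work */
double step398(double x) { return step399(x); }  /* wrapper         */
double step397(double x) { return step398(x); }  /* another wrapper */
/* ...and so on, all the way back up to MainVI... */

/* The sane alternative: each function is a self-contained unit, and a
   caller composes them at a single level. */
double read_sensor(void)        { return 42.0; }      /* placeholder */
double calibrate(double raw)    { return raw - 0.5; }
double to_engineering(double v) { return v * 2.0; }

double acquire_reading(void)
{
    return to_engineering(calibrate(read_sensor()));
}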
Admin
Wow! It's Rocky's Boots for grown-ups! Is there a boot object in Labview?
Admin
WOW!! You must be fresh out of uni/college. Did you know the world is not full of applications written in C# and Java? Nor Python, Ruby, Haskell, PHP, Perl, etc....
COBOL is still rife (even without CoolGen and the like), and I'm guessing there's still development being done in Pascal, Fortran, Ada, and LISP (almost certainly)...
Yup. Code sure as hell don't stay alive for long....
Admin
Touched by his noodly appendage.
Admin
The Labview code is very poorly written. Poor use of sub-VIs. Poor layout. Whoever wrote this code does not understand the importance of communicating the functionality to the poor bunny who has to inherit the code one day. I could write an essay on the problems in this Labview example. It could be rewritten by someone competent so that it is easily understood and maintainable. As with C or assembly language, there is nothing more demoralizing than inheriting someone else's substandard code.
Admin
I spent a few months doing work with LabView for an aircraft simulator (the user controls, and a hydraulic feedback system). If you have a large monitor and a fast CPU it's a lot of fun to work with.
Debugging the program is fairly interesting too, as you get a visual indication of where the point of execution is as it goes.
Admin
They have had mechanical (extending lead) pencils since 1822. Your comment has more Russian than American in it.
Admin
Damn, did school let out early for summer?
Admin
And a -1 for you for pointing out flaws in the original, correct, grammar.
Admin
I don't really see what the WTF is here; it's a circuit diagram... so what? If it seems too complex for you to understand, it probably is, so go back to your OOP rubbish and stay away from electron beams and multipliers.
Admin
Definitely. That's the main reason for me. I don't work in a lab. I don't usually build test equipment. I often write programs which are completely outside the realm of "natural" LabVIEW programs (which is meaningless, since it is a general purpose language, although I'll readily agree it's limited in many ways and completely unsuitable for some types of applications).
I use it because I like it and it works for me (I'm not even talking about actual technical advantages it has), and I can perfectly understand someone who doesn't like it because they don't get it (it happens a lot, as it's a completely different programming paradigm and it probably uses different parts of your brain).
I can also understand people who don't use it because of specific valid reasons (It costs too much, I'm building the next Call of Duty or Angry Birds and it won't work for that, I'm working with a large team and couldn't get it to work, the IDE crashes too much for my taste, I can't find good programmers, etc.). I work with it on a daily basis and am well aware of its shortcomings.
But the majority of the people I see complaining about it online, like most of those who did here, seem to simply be people who used it for a very short time, didn't get it at all, and found some excuse for why it's terrible.
And for those who actually care, it also offers interesting insights into language design and a programming paradigm completely different from most text based languages. If you want, you can actually download a fully functional time-limited evaluation version from NI's site (at least for Windows), but if I were you I'd install it on a VM, because it also comes with a lot of extra services and stuff you don't want.
Admin
From what I can tell, the code itself is -not- made of spaghetti. I may be wrong, though, and LabView may save its code using spaghetti noodles. The code may MAKE spaghetti, which would be awesome.
Admin
Probably, and be aware that this code also holds an internal buffer with the log data (a rough C sketch of that sort of thing follows at the end of this comment). I'm fairly sure you couldn't get equivalent code to be much simpler in most text-based languages in terms of the amount of code it takes.
Actually, it was designed to be simpler in a number of very important ways (such as being more intuitive and dealing with all the icky stuff of programming, like memory allocation, on its own), and it was originally targeted at scientists and lab users, but you are correct that it only works well for people who can read it more easily than text.
All that you're saying is completely true, but the code in the original submission is still bad code. It probably works, which is an important point in its favor, but it puts everything in one big function instead of splitting it up into logical units, and it is impossibly messy. It has no comments, things are overlapping, etc. It's bad.
But I can't say that this surprises me. A lot of people write bad code in LabVIEW. In a way, the IDE encourages it, because it allows you to more easily write code which actually works, thus not requiring training, etc.
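(For comparison, here's a rough C sketch of the kind of buffered debug/error logger being discussed; the buffer size, function names and log levels are all invented. It isn't dramatically shorter than a graphical version once the buffer handling is included, which is roughly the point.)

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Invented example: a tiny logger that keeps an in-memory buffer of
   debug/error lines and flushes it to a file on demand. */
#define LOG_BUF_SIZE 4096

static char   log_buf[LOG_BUF_SIZE];
static size_t log_used;

static void log_msg(const char *level, const char *fmt, ...)
{
    char line[256];
    va_list ap;

    /* Format one "LEVEL: message" line into a temporary buffer. */
    int off = snprintf(line, sizeof line, "%s: ", level);
    if (off < 0 || (size_t)off >= sizeof line)
        return;
    va_start(ap, fmt);
    vsnprintf(line + off, sizeof line - (size_t)off, fmt, ap);
    va_end(ap);

    /* Append it to the internal buffer if there is room for it plus '\n'. */
    size_t len = strlen(line);
    if (log_used + len + 1 < LOG_BUF_SIZE) {
        memcpy(log_buf + log_used, line, len);
        log_used += len;
        log_buf[log_used++] = '\n';
        log_buf[log_used] = '\0';
    }
}

static void log_flush(const char *path)
{
    FILE *f = fopen(path, "a");
    if (f != NULL) {
        fputs(log_buf, f);
        fclose(f);
    }
    log_used = 0;
    log_buf[0] = '\0';
}

int main(void)
{
    log_msg("DEBUG", "starting acquisition on channel %d", 3);
    log_msg("ERROR", "device did not respond");
    log_flush("session.log");
    return 0;
}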
Admin
I didn't say code doesn't stay alive for long. I said RARELY. I'm fairly sure that the vast majority of Fortran and COBOL programs ever written are no longer in use.
But you're basically making the same point I was - even if a language becomes extinct, you can still develop in it, for a while, anyway.
And since you insist on irrelevant points, the last time I was in any kind of school was 12 years ago.
Admin
Exits are North, South and Dennis.
Admin
Hey, at least he took the screen shot during lunch time.
Admin
I found Bin Laden!
Admin
Possibly worth pointing out that if an application is of high quality (i.e. it does what it's supposed to, is designed to be versatile enough to handle a considerable range of changing requirements, and runs quickly and efficiently), there will often be considerable resistance to replacing it with something more modern. Frequently, the main reason such a program gets replaced is that the hardware it runs on is obsolete. If you encounter a program that's over a decade old, treat it with extreme respect, because that probably (but not inevitably) means it's pretty damn good.
Admin
(showing my maturity)
Admin
I develop Labview programs for test equipment setups.
The commenters who are critical of Labview are mostly right.
Labview certainly has its advantages; putting together simple GUIs in it is very quick, quicker than with the IDEs and widget layout editors for other languages.
It has many disadvantages though, especially for large programs. To begin with, the sequence of execution isn't defined by default; it has to be defined using data flow or sequence structures. In most circumstances this isn't what you want, and it creates race conditions.
Instead of subroutines there are "SubVIs", which are similar. A bunch of wires are wired into a SubVI, and when all the inputs are ready it runs. The problem with this is that it tends to highlight irrelevant details. Suppose you're writing a program that calls a function to command a piece of test equipment. You need to put in information to specify the bus the test box is on and the address on that bus. In a normal language those could just be a couple of variables used as arguments in a function call (see the C sketch after this comment). But in Labview parameters like this have to be wires, and those wires clutter up the diagram. (The "bundles" feature can help with this a bit, but not much.)
Using SubVIs is also difficult. By default each SubVI is a new file, and its name is global to a Labview session. That means that if I write a SubVI called "Foo" and another Labview program uses a different SubVI also called "Foo", then if both programs are loaded into the same session, one of them won't work or will have bugs. This means that the order in which programs are loaded into a Labview session can be significant. This can be avoided by using libraries, which have their own namespaces, but converting sets of SubVIs to libraries is time-consuming and inflexible. A better solution is to go through all the SubVIs in the entire tree of programs you use and make sure there are no name duplicates.
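(To make the wiring complaint concrete, here's a rough C sketch of the situation described above, with the bus address and setup parameters travelling as plain function arguments instead of wires. TestX and its parameters are the hypothetical example from this thread, not a real instrument API.)

#include <stdio.h>

/* Hypothetical instrument call: the bus address and setup parameters
   travel as ordinary arguments rather than wires on a diagram. */
static int TestX(int addr, double param1, double param2,
                 double foo, double *bar)
{
    /* Pretend to program the instrument with `foo` and read back `*bar`. */
    printf("bus addr %d: setup %.2f/%.2f, write %.2f\n",
           addr, param1, param2, foo);
    *bar = foo * 0.5;   /* placeholder measurement */
    return 0;
}

int main(void)
{
    /* The setup parameters appear once, here... */
    const int    addr   = 7;
    const double param1 = 1.0, param2 = 2.5;
    double       bar    = 0.0;

    /* ...and only resurface at the call site inside the loop; nothing in
       between has to route them from place to place. */
    for (double foo = 0.0; foo < 3.0; foo += 1.0)
        TestX(addr, param1, param2, foo, &bar);

    return 0;
}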
Admin
Just press CTRL+U. That should solve everything ... or crash Labview. So win/win.
Admin
Finally, someone who's critical of LabVIEW and provides actual arguments.
Uh, no they're not. At least not the ones who participated here and gave BS criticisms.
Correct, but it's nothing that's not manageable if you know what you're doing. My programs are large (ish, because how do you define large? Let's just say that they're not just displaying data on a graph) and they turn out just fine.
Again, you have to know what you're doing. Write it correctly and it will work correctly. Generally, in good code the elements of code which execute in parallel are either designed to run in parallel (such as completely separate processes) or are two unrelated pieces of code where the order of execution doesn't matter. If you have race conditions then YOU wrote buggy code. You can't blame the system because you don't understand its rules.
That's like saying that the characters in your text editor are cluttering up the whitespace on your screen. The wires are an inherent part of the system. Write it in a clean fashion and your diagram will not be cluttered. That's not always easy, and it doesn't always work out, but it certainly is possible.
No, it isn't. It's global to an "application instance". Use a project and you won't have these problems. Of course, there are also technical and historic reasons for this design, some of which offer advantages to this design. And the IDE warns you if you have conflicts. If you ignore those warnings, then you can't be surprised that you have problems.
But, like I said, at least you actually used it and you know what you're talking about much more than the others who have posted here.
Again, this is the same with every language and every IDE - it has advantages and disadvantages. It has things it's good at and things it's not good at. And it requires you to know what you're doing if you expect to produce good code.
Admin
tl;dr
Admin
"I say, English, I'm no great scholar of the language, but I really don't understand how he's misused the word literally. Perhaps you could explain, old chap? "
They believe 'spaghetti code' to be a figurative term, since the code is not actually spaghetti. However, the term has become a figure of speech in and of itself (not 'code that is spaghetti' but precisely 'spaghetti code'), thus blurring the line between a literal and a figurative statement. It is literally the figure of speech but not literally code that is spaghetti (the meaning of the figure of speech). IMO the use is perfectly valid; however, others might not agree.
Admin
Ran into that with their USB drivers too - only 32-bit, no more than 4 GB of RAM, but at least we could use RHEL 5. But it was an utter pain to deal with. I was trying to write a shim between the NI data acquisition and some Matlab code. Let me tell you how much fun it is to try and run Labview and Matlab on the same machine with less than 4 GB of RAM...
Admin
To letting you guys know...before he leave Japan, Win Su and Alex go out for night of heavy drinking. Win Su have massive headache and didn't see Alex stopping from whiskey. Maybe tomorrow, you guys, ok, bye.
Admin
"...the pretty patterns that well-formatted code makes."
yeah, cos that's what we were talking about
Admin
I agree. My point, though, is that in Labview the default is parallel execution based on dataflow. In my opinion, for most problems that Labview is targeted at, that's the wrong default. In languages like C and Java the default is sequential execution, and the programmer must very explicitly request parallel behaviour. In Labview it's the other way around: sequential behaviour must be explicitly requested. But parallel execution is only really useful if there are slow blocks of code that can be executed in parallel. I write Labview programs that drive test equipment (which is the normal use of Labview), and I've never found an instance where this parallelism can save execution time. In all of my programs the test equipment or devices under test are what determine performance.
The wires are an intrinsic part of the program, but in many cases they are superficial detail. In some cases the same problem occurs in text-based languages where Labview avoids it. For example, in C it's necessary to allocate memory and deallocate it, and the code for doing this must exist alongside the code that works on the data structures (see the sketch after this comment). In Labview this isn't needed; the details of memory allocation are dealt with by the runtime, which makes programming easier.
But the issue of extraneous detail does occur in other circumstances. Suppose I have a C program that programs a test instrument with the variable Foo and reads the variable Bar from it in a loop. In that C program I can have a function to operate the test box, say "TestX(Addr, Param1, Param2, Foo, Bar)". In this case "Addr", "Param1" and "Param2" are parameters set up by the user. In the C program these parameters are kept in the background; they are detail. They occur once where they're set and again where they're passed in the call to "TestX"; they don't affect the intervening program. In Labview, though, they do affect the rest of the program, because the wires needed to carry the values from place to place must be laid out. Of course, local variables and clusters/bundles can be used here, but I'd argue they don't really remove the problem.
I agree that with care Labview programs can be laid out in a readable way. I've done that with ~70% of the VIs I've inherited from my predecessors, and I have the remaining 30% left to do. But this layout takes a lot of time (and the automated clean-up doesn't really do the job).
I didn't know about the "application instance" thing, thanks for telling me, I'll check it out.
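(And on the memory-management point above, a small C sketch of the allocate/free bookkeeping that has to sit right next to the code doing the real work - exactly the detail the Labview runtime hides. The data structure is invented for illustration.)

#include <stdio.h>
#include <stdlib.h>

/* Illustrative data structure: a fixed-size buffer of samples. */
struct sample_buf {
    double *data;
    size_t  count;
};

static struct sample_buf *sample_buf_new(size_t count)
{
    /* The allocation bookkeeping lives right beside the "real" work. */
    struct sample_buf *buf = malloc(sizeof *buf);
    if (buf == NULL)
        return NULL;
    buf->data = calloc(count, sizeof *buf->data);
    if (buf->data == NULL) {
        free(buf);
        return NULL;
    }
    buf->count = count;
    return buf;
}

static void sample_buf_free(struct sample_buf *buf)
{
    if (buf != NULL) {
        free(buf->data);
        free(buf);
    }
}

int main(void)
{
    struct sample_buf *buf = sample_buf_new(8);
    if (buf == NULL)
        return 1;

    /* The actual data work... */
    for (size_t i = 0; i < buf->count; i++)
        buf->data[i] = (double)i * 0.1;
    printf("last sample: %.1f\n", buf->data[buf->count - 1]);

    /* ...and the matching deallocation we must not forget. */
    sample_buf_free(buf);
    return 0;
}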
Admin
It's important that "figuratively" and "literally" retain separate (and opposite) meanings. "Literally" doesn't reinforce a figure of speech; if it did, figures of speech would take over the language, since we'd never be able to make a distinction.
For example: If you've been walking all day and you say your feet are on fire, it is a figure of speech. But if you say your feet are literally on fire, it's no longer a figure of speech and you should seek a fire extinguisher and a doctor.
Admin
I wouldn't call it "default". It's an intrinsic part of the data flow paradigm, so it's not there just to increase performance (although there are cases where it can) and you can't actually disable it. Like you (and most other users, I suspect), the majority of my code is sequential, and while writing long sequential code in LabVIEW isn't as easy or as elegant as it can be in text, I don't think it's difficult once you get the hang of it.
That's a perfectly valid point and you're right. There is no safe and easy way in LabVIEW, other than a wire, to carry many unrelated pieces of data from one point in the code to another. Local and global variables are easy but not safe. Other methods, such as queues, are safe but not easy.
That said, it should be noted that this only affects certain kinds of diagrams, and there are ways of working around it (for instance, if your code is a state machine made up of a loop and a case structure, you can add a shift register which will hold a cluster of state data for the state machine; writing and reading data in that cluster is then relatively clean and easy - a rough text-language analogue follows at the end of this comment).
Also, if you use classes, a lot of the code which has the problem you describe goes away, because the object keeps the value internally.
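(Here's that analogue of the "loop + case structure + shift register holding a cluster" pattern, sketched in C with a struct standing in for the cluster; the states and fields are made up.)

#include <stdio.h>

/* The "cluster" of state data that the shift register would carry. */
enum state { ST_INIT, ST_ACQUIRE, ST_DONE };

struct machine {
    enum state state;
    int        samples_taken;
    double     last_value;
};

int main(void)
{
    /* One struct plays the role of the shift register: it is the only
       thing carried from iteration to iteration. */
    struct machine m = { ST_INIT, 0, 0.0 };

    while (m.state != ST_DONE) {
        switch (m.state) {                           /* the "case structure" */
        case ST_INIT:
            printf("initialising...\n");
            m.state = ST_ACQUIRE;
            break;
        case ST_ACQUIRE:
            m.last_value = m.samples_taken * 0.25;   /* pretend reading */
            m.samples_taken++;
            if (m.samples_taken >= 4)
                m.state = ST_DONE;
            break;
        case ST_DONE:
            break;
        }
    }

    printf("took %d samples, last = %.2f\n", m.samples_taken, m.last_value);
    return 0;
}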
Admin
Now I know why Intel f*cked up Sandy Bridge... they tested it with that!!!
Admin
LOL. In fact, I literally laughed my head off. Good job I can touch-type. I've set one of my staff the task of finding out which corner it must have rolled into (all I can see from the angle my eyes ended up are two walls and a bit of carpet).
Admin