• King (unregistered)

    Convert.ToEnglishFromDutch("http://vb.net/") actually works quite well

  • dpm (unregistered)

    large Visual Basic projects are a thing that sadly happened

    Excellent WTF, nicely summarized. I'm a little confused as to why there are other words in the article, though.

  • RLB (unregistered)

    All very well, but you can't really blame most of this on Microsoft. Except for the programmers trying to get around Strict and Explicit, this is all the fault of Kemeny & Kurtz.

  • (nodebb)

    Oh for the days of the BASIC I used. Three types: floats, 16-bit ints, and strings. Names A-Z and A0-Z9, with three namespaces: scalar, arrays and strings (names suffixed by $). Integers had the nice feature of -32768 meaning "undefined."

    Bit banging was so much fun with no bitwise operators.

    Days of glory...

  • hpoe (unregistered)

    As I understand it, VB was designed to be so easy a moron could write in it. The results speak for themselves.

  • Foo AKA Fooo (unregistered) in reply to CoyneTheDup

    Kids these days! 16-bit ints!? Lemme tell you, the first BASIC I used (Atari) had no ints at all, just floating point, string, and arrays of float (no arrays of string, of course; who'd need such decadence).

    All numbers were floating point. Which was a really great decision for a CPU that natively had integer arithmetic, while floating point had to be emulated in software. But how else would you put all those 1MHz cycles to use? An empty for loop from 1 to 100 was good for a small delay; these days, you'd have to add oh so many 0s to that number. (Of course, there were no built-in sleep or delay instructions.)

    No bitwise operators either, obviously (again, since the CPU supported them, that would have been too easy). E.g. if you wanted to extract one bit of the four directions of the joystick, it was easier to just list the 3 possible values that had this bit set than to roll your own "and" subroutine (not a named function, just GOSUB with line number, of course). So instead of one bitwise operator, just 3 floating point comparisons and two logical "AND"s (though I wonder why it had those, I mean you could achieve the same with a series of IF and GOTO ...).

    "Easy to use", indeed. At least, things got more comfortable when I started writing assembler code.

  • Foo AKA Fooo (unregistered) in reply to RLB

    Don't want to defend them or that "language", but how exactly did they force MS to use it?

  • WTFGuy (unregistered)

    As the name BASIC says, it's the "Beginner's All-purpose ...". And for that mission, getting from crawling to toddling as a programmer, it was well conceived by the standards of its time.

    But ...

    As any pro in any other field will tell you, using beginner's tools to try to produce a professional-quality result just doesn't pass muster.

    If you want to compete to win in bicycle races don't buy your bike at Toys R Us or Wal*Mart. If you want to build a shed, don't buy your tools from Fisher-Price https://www.amazon.com/dp/B006RQ8UNA. If you want to build more than one shed per set of tools, don't buy your tools at Harbor Freight. etc.

    Our industry will never get beyond the [amateurs playing at DIY homeowner workmanship for a living] until we jettison the beginner's tools, the self-taught-by-tinkering ethos, and the resigned acceptance of massive WTFery by all levels of the Dev, IT, and business hierarchy. [/RANT]

  • Greg L (unregistered)

    I actually think I see what they were trying to do, and it's terrifying.

    Let's say I'm requesting a number to be used in a calculation. Then let's say I need my number to be accurate to only 5 decimal places or my calculations (somehow) get messed up. How would I accomplish this? What if I take whatever input the user gives me, convert it to a number, then convert it to a string and truncate it? That way I KNOW the accuracy of my number during calculations! Surely there's no other way to truncate a number to a specific number of decimal places...no one else needs to do that, right?
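
    A rough VB.NET sketch of the round trip being described, next to doing it with plain arithmetic (the userInput value and all names are made up for illustration):

        ' A guess at the string-detour approach, compared with arithmetic
        ' truncation and ordinary rounding.
        Module TruncateSketch
            Sub Main()
                Dim userInput As String = "3.14159265"   ' hypothetical input

                Dim n As Double = Convert.ToDouble(userInput)

                ' Detour through a string and chop it at 5 decimal places.
                Dim s As String = n.ToString()
                Dim dot As Integer = s.IndexOf("."c)
                If dot >= 0 AndAlso s.Length > dot + 6 Then
                    s = s.Substring(0, dot + 6)
                End If
                Dim viaString As Double = Convert.ToDouble(s)

                ' The same idea with arithmetic alone.
                Dim truncated As Double = Math.Truncate(n * 100000.0) / 100000.0

                ' Or, if rounding is actually what is wanted.
                Dim rounded As Double = Math.Round(n, 5)

                Console.WriteLine("{0} {1} {2}", viaString, truncated, rounded)
            End Sub
        End Module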

  • muteKi (unregistered)

    A full business-standard BASIC app...just the idea is making me break out in hives. Even Python app dev makes me uncomfortable because of how the typing system there subverts the goals of things like integration tests.

  • Your Mammas name (unregistered)

    VB6 suffered, like so many languages do, by being used inappropriately. It was perfect for quickly putting together a demo/proof of concept/test version, but so often that was the version which went into production because it was good enough and rewriting it would have been a delay.

  • tbo (unregistered)

    Loosely-typed languages seem to proliferate on the web, where all data starts as text.

  • (nodebb)

    VB.Net is a very powerful language and can be just as strong as C#. It has some benefits over C# and some weaknesses, but it is essentially equivalent now.

    But the DIM command history is crazy. First, everything was a Variant type (VB6); then, in .Net, it was an Object type unless explicitly declared, which made it best practice to always declare the type. Then C# got the "var" keyword and auto-typing, and VB.Net just reused the DIM statement. So now best practice may not be to declare every type, because it gets strongly typed anyway from the type of the value you assign on your declaration line.
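
    A small illustration of that history in current VB.NET; Option Infer is what gives Dim its var-like behaviour (variable names here are made up):

        Option Strict On
        Option Infer On

        Module DimSketch
            Sub Main()
                ' The long-standing best practice: declare the type explicitly.
                Dim count As Integer = 5

                ' With Option Infer On, the type comes from the initializer,
                ' much like C#'s var; it is still strongly typed.
                Dim total = count * 2        ' inferred As Integer
                Dim name = "example"         ' inferred As String

                ' With Option Infer Off and Option Strict Off, a bare Dim
                ' falls back to Object, the situation described above.
                Console.WriteLine("{0} {1}", total, name)
            End Sub
        End Module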

  • (nodebb) in reply to Foo AKA Fooo

    They didn't really force them. Microsoft BASIC predates Microsoft DOS. I once saw a copy on punch-tape.

  • Hanneman (unregistered)

    Is it not in weakly typed languages that systems Hungarian notation comes in handy?

  • (nodebb)

    Could this perhaps be talking to something with old Cobol-style records? The field is 5 digits; going over stomps on something else.

  • surturz (google)

    But when VB features like loose typing or run-time duck-typing are put in JavaScript, it's professional industry standard XD

    VB6 did have strict typing; it just didn't require the programmer to use it.

    The real WTF in VB6 was "evil type coercion", but this article isn't an example of that. It's just a programmer not knowing his craft.
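
    For anyone who hasn't seen it, a small VB6 sketch of what "evil type coercion" looks like (results recalled from memory, so treat them as approximate):

        Private Sub CoercionDemo()
            Debug.Print "5" + 1     ' 6  ("+" treats the numeric string as a number)
            Debug.Print "5" & 1     ' 51 ("&" always concatenates)
            Debug.Print "5" + "1"   ' 51 (with two strings, "+" concatenates instead)
            'Debug.Print "5a" + 1   ' run-time error 13, "Type mismatch"
        End Sub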

  • (nodebb)

    The computer sees your program as a series of ones and zeros...

  • sizer99 (google) in reply to Hanneman

    Is it not in weakly typed languages that systems Hungarian notation comes in handy?

    You might think so, except... it will be a lie. intCount will contain a string. Or a tuple. Trust me on this. So you will read "intCount" and of course you'll think it contains an integer, so the code will look okay; how could it be failing? And now you're worse off than you would be with no Hungarian notation... which is the normal case.
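
    In VB6 terms the lie is trivially easy to tell; a contrived sketch with a made-up name:

        Private Sub HungarianLie()
            Dim intCount              ' no As clause, so this is really a Variant
            intCount = 42
            ' ...much later, in someone else's bug fix...
            intCount = "N/A"          ' compiles and runs; the prefix is now a lie
            Debug.Print intCount + 1  ' run-time error 13, "Type mismatch"
        End Sub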

  • Officer Johnny Holzkopf (unregistered) in reply to CoyneTheDup

    Originating in QBASIC (at least on IBM PC and successors), the $ for strings was mandatory, but % and & could be used to indicate integers and long integers. Variables could have names like AMOUNT% (being an integer) or DISTANCE& (being a long integer). Certain function names followed that concept: CHR$() returns a string, ASC() returns an integer. All string manipulation functions added the $, like FORMAT$, LEFT$, MID$, RIGHT$ and so on. That's why there are people who call $ the "string" symbol and say things like "mid string", "my string file" or "string term" or even "string string output".
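
    For reference, the full suffix set in QBASIC-style dialects, as best I recall it (% integer, & long integer, ! single, # double, $ string); the variable names are just examples:

        COUNT% = 32000                          ' 16-bit integer
        DISTANCE& = 100000                      ' long integer
        RATIO! = 1.5                            ' single-precision float
        PI# = 3.14159265358979#                 ' double-precision float
        TITLE$ = "QBASIC"                       ' string
        PRINT LEFT$(TITLE$, 2); " "; CHR$(65)   ' prints: QB A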

  • David Mårtensson (unregistered) in reply to sizer99

    The big problem is that what most people consider Hungarian notation has nothing to do with the original definition of it.

    In most examples it is a type prefix like i or int, sz or str, or similar, and the only one of those that really carries any relevant information is sz, which indicates a null-terminated string in C.

    The original Hungarian notation, created by the Word team at Microsoft, used prefixes to indicate context: ix indicated an index variable, us meant an unsafe string (one that has not been cleaned or verified), and in a name like rwPosition the rw indicated a row, while cl indicated a column.

    Due to a misunderstanding of what "type" meant, another dialect called Systems Hungarian was created that used prefixes like i or int, s or str, and l or lng.

    In a weakly typed language you might use Systems Hungarian to indicate type, but today it's better to be more verbose; you don't have to save on line length or memory when writing programs, and screens can easily handle more than 80 chars per line ;)
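
    A side-by-side naming sketch of the two dialects (hypothetical identifiers, VB.NET declarations used only for concreteness):

        ' Apps Hungarian (the original style): the prefix carries semantics
        ' the compiler cannot check for you.
        Dim ixSelected As Integer      ' ix = an index into a collection
        Dim usComment As String        ' us = an unsafe, not-yet-sanitized string
        Dim rwFirst As Integer         ' rw = a row number
        Dim clFirst As Integer         ' cl = a column number

        ' Systems Hungarian: the prefix merely restates the declared type,
        ' which the declaration (or any IDE tooltip) already tells you.
        Dim intSelected As Integer
        Dim strComment As String
        Dim lngFirst As Long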

  • RLB (unregistered) in reply to The_Bytemaster

    Nevertheless, K&K BASIC predates Microsoft Basic, by necessity. And they didn't "force" anyone, it's just that that's what BASIC was to begin with - and Basic still was when MS Basic and even VB were first written. To insist on strict typing, in your first edition of Basic, would have put contemporary Basic programmers off.

  • (nodebb)

    VB is one of the most ridiculed languages, but it darn sure fit well in its time. And it was possible to use it quite effectively. In its time. Also recall that Linus Torvalds allegedly said "I personally believe that Visual Basic did more for programming than Object-Oriented Languages did."

  • Old timer (unregistered) in reply to dpm

    Large C projects are sadly a thing that happened.

    Numeric calculations in C are error-prone, due partly to the non-obvious rules of overflow and implicit type conversion.

    Combine that with the well-known C.F. that is C string handling and string formatting....

    Many C programmers have never really had to do much math in C, or support programs where other programmers had tried to do arithmetic in C, and by golly it shows in their programs.

  • STrRedWolf (unregistered)

    I think we can all agree, TRWTF is VB.

  • Oldgit (unregistered) in reply to The_Bytemaster

    It's amazing how many people misunderstand the "var" keyword and its purpose...

  • Chris Gonnerman (google)

    "Maybe the "easy to use" languages are onto something. Types do seem hard."

    It's just proof that the tools can only help if you know what you're doing.
