Admin
I just gotta say this,
Standards in java ... huh huh ...huh huh ... nope... bwahahahahahahahahahahaha!
This list goes on and on ...
Admin
I was referring to this but I see someone beat me to it.
Admin
Admin
XOR is the best choice unless your CPU has one of those registers which is always zero, in which case mov r0, r1 would be a better choice :)
I haven't done embedded systems programming in ages, and I miss it. I used to have to do division without any floating point, so we'd do scaling with bitwise operations. E.g., if you want to divide by 3, you'd actually multiply by 11 and divide by 32 using a shift.
m*=11;
m>>=5;
You'd always need to be careful that m*=11 would never saturate your register(s) (e.g., for 32 bit registers doing unsigned math, m would always need to be less than (2^32)/11 or you'd be hosed). You could implement rounding by adding 2^4 to m before the shift. If you were really anal, you'd make sure your rounding was symmetric by adding one to the result if it was negative.
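A minimal sketch of the scaled-division trick described above, assuming 32-bit unsigned arithmetic (the function name is made up for illustration):
#include <stdint.h>
#include <stdio.h>
/* Approximate m / 3 as (m * 11) >> 5, since 11/32 = 0.34375 is close to 1/3.
   Adding 2^4 before the shift rounds to nearest instead of truncating.
   Only valid while m < (2^32)/11, as noted above. */
static uint32_t div3_approx(uint32_t m)
{
    m *= 11;
    m += 1u << 4;
    return m >> 5;
}
int main(void)
{
    printf("%u\n", (unsigned)div3_approx(30)); /* prints 10 (= 30/3); the 11/32 scaling drifts from exact 1/3 as m grows */
    return 0;
}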
*sigh* those were the days...
captcha: awesomeness
Admin
> This is what happens when you give assembly programmers a C compiler.
Well, perhaps. I've done embedded and non-embedded coding, and I've ended up writing roughshod code like this to get a result, sometimes just to shrink the code so that the application will fit on the target device. PIC microcontrollers are a favourite for this sort of coding: they don't have hardware multiply or divide instructions and, in many cases, they have less than 256 bytes of RAM.
But then, I wouldn't dare label the routines as "ANSI" or even "WORKING"; they did what they had to do for the application and no more.
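For anyone who hasn't had the pleasure: a minimal sketch, in C rather than PIC assembly, of the kind of shift-and-add multiply you end up writing on parts with no hardware multiply instruction (the name mul8 is illustrative only).
#include <stdint.h>
/* 8-bit x 8-bit -> 16-bit multiply using only shifts and adds. */
uint16_t mul8(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    uint16_t addend = a;
    while (b) {
        if (b & 1)
            result += addend;   /* add the shifted multiplicand for each set bit of b */
        addend <<= 1;
        b >>= 1;
    }
    return result;
}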
Admin
Anonymous wrote:
I just gotta say this,
Standards in java ... huh huh ...huh huh ... nope... bwahahahahahahahahahahaha!
Hmmm...
Not saying that java doesn't have its quirks, but it's not half as hard as it sounds...
Captcha: bedtime. Oooh, how I wish, on this Monday morning...
Admin
Neither String.split() nor the regex package is available on MIDP/CLDC devices.
No trig functions or floating point are available on early devices (CLDC 1.0), which sadly are still very much in use and need to be supported.
Admin
Now that is a WTF. Non-re-entrant library functions for long multiplication? I won't believe it until I see one. I can promise you that gcc's [u/s]mul[qi/si/di]3 are all perfectly re-entrant. I can't speak for other compilers but there's no good reason why they shouldn't run entirely in registers/stack local variables.
Admin
I don't normally rewrite standard library stuff, because it's almost always better written in the library.
EXCEPT:
If you want more defensive programming, you can make your own version, a slower but safer one.
Also, having your own malloc and free enables nice stuff like garbage collection and leak detection.
There's also the occasional wish for a shorter name, e.g. scmp as a short version of strncmp, but you can do that with a #define.
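A minimal sketch of the malloc/free idea above: wrapping the allocator to count outstanding allocations for crude leak detection. The names my_malloc/my_free/report_leaks are made up for illustration.
#include <stdio.h>
#include <stdlib.h>
static long live_allocations = 0;
void *my_malloc(size_t n)
{
    void *p = malloc(n);
    if (p)
        live_allocations++;       /* count only successful allocations */
    return p;
}
void my_free(void *p)
{
    if (p)
        live_allocations--;       /* free(NULL) is a no-op, so don't count it */
    free(p);
}
void report_leaks(void)
{
    fprintf(stderr, "outstanding allocations: %ld\n", live_allocations);
}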
Admin
If they are both constant, work it out yourself and put one constant in.
Admin
It is clearer to write, for instance, x * y * z (where x, y and z are constants) to convey that you have "x things * y stuff * z amount", rather than have a big magic number come out of nowhere (and of course, x, y and z should be #defines too).
Someone will probably come forward with an example of some shitty-ass compiler on some obscure microcontroller that can't do constant folding. But in that case I'm not sure it even deserves to be called a compiler anymore.
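A minimal sketch of the point about named constants (the names are illustrative): the compiler folds the product at compile time, so the readable form costs nothing at run time.
#define HOURS_PER_DAY     24
#define MINUTES_PER_HOUR  60
/* Folded to 1440 by any compiler worth the name; no runtime multiply. */
int minutes_per_day = HOURS_PER_DAY * MINUTES_PER_HOUR;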
speaking of magic numbers, captcha: 1337
Admin
The Real WTF(tm) here is that the whole function isn't coded in assembly language, of course.
Admin
1. atoi() should return an int, not a long.
2. s should be const
3. atoi() should not return -1 if input is NULL (it should invoke undefined behaviour - most implementations segfault)
4. as pointed out elsewhere, while(isspace(*p++)) will not produce the desired behaviour
5. "+" is correctly ignored, but "-" is also ignored if it is preceded by whitespace
6. unsigned char = *p - '0'; /* this does not compile... :) */
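For reference, a minimal sketch of an atoi() written along the lines of points 1-5 above (named my_atoi here to avoid clashing with the library; real code should still prefer strtol):
#include <ctype.h>
int my_atoi(const char *s)
{
    int sign = 1;
    int result = 0;
    while (isspace((unsigned char)*s))
        s++;                          /* skip leading whitespace without overshooting */
    if (*s == '+' || *s == '-') {     /* sign is honoured even after whitespace */
        if (*s == '-')
            sign = -1;
        s++;
    }
    while (*s >= '0' && *s <= '9')
        result = result * 10 + (*s++ - '0');
    return sign * result;
}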
Admin
AIEEEEE... still an error. You should use "while(isspace(*s++));", otherwise you won't get your "if (*s=='-')..." right.
Admin
No, you'll just have to write better versions of the existing Java libs. Case in point: java.util.Random.
Back in the dark days of Java (~1998) I wrote a simple little molecular dynamics simulation to show students the relationship between pressure, temperature and volume in a gas. Easy enough: I set the initial positions and velocities using the built-in random function and let the whole thing go. Despite knowing about the foibles of computer rand() functions, I assumed that Java, being a new language and all, would have architects who knew about the problems of most RNGs and who would have chosen one of the many far better rand() algorithms from Knuth, Numerical Recipes or any other decent applied comp sci book.
Imagine my surprise when, a few hundred time steps into the sim, every particle lined up neatly in a grid pattern. And did it again a few thousand steps after that. I thought I was hallucinating for a second. Awful performance even by the standards of the crummy system rand()s I'd dealt with in grad school. So I went back to my old copy of "Numerical Recipes in C" for Knuth's ran3() algorithm, which I'd had to use to replace the crappy DEC and IBM C lib functions in grad school, and translated it into Java.
I don't know if they've improved it since I last used it back in ~1998, but it shouldn't have needed improving anyway.
Admin
Uh, no. If both multiplicands are constant, then the result is constant, so just pull out a trusty calculator and type the result into your source directly :).
Admin
I've never seen "terrible idea" spelled "need" before. That's fascinating.
Admin
Or just let the fucking compiler do it. Unless you're so concerned about your compile time taking a few more milliseconds that you're willing to sacrifice readability for it.
Admin
Well, then you've never written software for embedded devices. Small binaries and/or lower memory use are quite important goals there. Really stupid ideas are quite common when you have just a handful of memory and a long list of feature requests. ;)
Admin
That's because he needed to make "terrible idea" shorter, so he's using "need" as a short version of it.
Admin
When both multiplicands are constants, it's better to take out the calculator, do the computation (i.e. 8x4=32) and code in that number instead, resulting in a simple "load" instruction rather than "load, load, multiply". (Of course, good modern compilers will do this for you.)
Admin
You were right the first time: 5 decimal is 101 binary, because it is (1 * 2^2) + (0 * 2^1) + (1 * 2^0).
Admin
Actually, it's 2^2 + 2^0 = 5, and becomes 2^3 + 2^1 = 8+2 = 10.
Admin
I hear you. I myself usually define all functions, operators and symbols as _0, _1 and so on, like:
Then remap one of the numeric keypad keys to _, and you can type the entire program with one hand. Sometimes there is a need to have your other hand free.
Admin
Well now you know.
AND KNOWING IS HALF THE BATTLE!
Admin
The
while(l)
line is also wonderful. It took me a minute to figure out why I couldn't find the break statement that would leave the infinite loop. :-)
Admin
Aye! Dead handy. Now you just need to port the entire platform when you want to run it on, err, a new platform.
It's considerably easier writing conforming implementations of the ANSI/ISO/IEC C standard library... but as the WTF shows it's not a task to give to idiots.
Admin
Plus some documentation claims that isspace & friends are faster than doing it that way anyway.
Rich
Admin
101 binary = 2 to the 2nd + 2 to the 0th in decimal. x to the 0th = 1. Your example would = 6 decimal, not 5. (Minor detail, but it does pertain to the matter at hand.)
Admin
m =          (set m to)
(m << 3)     (m shifted left 3 bits: shifting by 3 multiplies by 2^3, SO this returns m*8!)
+            (plus)
(m << 1)     (m shifted left 1 bit: shifting by 1 multiplies by 2^1, i.e. m*2)
SO, m = (m*8) + (m*2)
or
m = m*10
MANY processors, the 8086 (I know it IS limited, and may be here) and the 6502, for example, can NOT multiply a number by 10! EITHER ONE can do simple adds and shifts.
The four shifts and an add could take 5 FAST instructions! On the 6502 that would probably take around 10 cycles, where calling a subroutine that does NOTHING could easily take 9! So this COULD be faster. Do it a couple thousand times, and it could be NOTICEABLY faster!
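The same decomposition in C, as a minimal sketch (the 16-bit operand width is chosen arbitrarily):
#include <stdint.h>
uint16_t times10(uint16_t m)
{
    return (uint16_t)((m << 3) + (m << 1));   /* 8m + 2m = 10m */
}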
Steve
Admin
So what exactly is undefined behavior? I ask this in the context of writing your own function that complies with the standard. I always assumed it meant the standard doesn't specify (define) what the result is under that condition, so it will depend on the (your) implementation; but I've heard others argue that your output cannot be defined, i.e. it cannot be guaranteed even in your own implementation, that it must be somewhat random, that it cannot return a consistent, definable, expected result.
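As a point of reference for the question above: "undefined behaviour" means the standard imposes no requirements at all for that case, so a conforming implementation is free to do anything, including choosing a consistent, documented result of its own. A minimal sketch, with a hypothetical wrapper name:
#include <stdlib.h>
/* The standard leaves atoi(NULL) undefined, so defining a result here is a
   permitted implementation choice, not something the standard mandates
   (and not something callers of other implementations may rely on). */
int safe_atoi(const char *s)
{
    if (s == NULL)
        return 0;
    return atoi(s);
}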
Admin
On the ARM, you can just MOV r0, #0, since all instructions are one word long. The immediate value is stored with the instruction as something like n bits of data plus a shift, so you can only use certain values for your immediate data; if your data won't fit, you need to do a MOV and some ADDs to build up the value you require.
Admin
I used to program small PDP-11's without FP hardware - you're exactly right.
# Multiply R1 by 10
ASL R1          # R1 = 2m
MOV R1,-(SP)    # Save 2m to stack
ASL R1          # R1 = 4m
ASL R1          # R1 = 8m
ADD (SP)+,R1    # R1 = 8m + 2m = 10m
Admin
And on the other hand, my first real bit of assembler was to replace a piece of BASIC code which turned a 128x128x8 map into a bitfield for a game which used "multiply by two" on an 8-bit micro. What took nearly an hour to run in BASIC ran in under a second in machine code. First time it ran, I thought I must have made a mistake in the programming.
It just goes to show.
This is a fairly lame WTF anyway. There are often compromises in embedded programming. Usually the best assumption is to aim for minimum functionality because all-cases programming will just bite you later.
Rich
Admin
The *real* WTF is that he was porting atoi instead of strtol - because that means he was *using* atoi instead of strtol. In an embedded system, of all places.
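A minimal sketch of the strtol() usage being advocated here; unlike atoi(), it reports both "no digits" and out-of-range conversions.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    const char *s = "1234abc";
    char *end;
    long v;
    errno = 0;
    v = strtol(s, &end, 10);          /* base-10 conversion */
    if (end == s)
        puts("no digits found");
    else if (errno == ERANGE)
        puts("value out of range");
    else
        printf("parsed %ld, stopped at \"%s\"\n", v, end);
    return 0;
}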
-- tom
Admin
Yes, but *p=0 doesn't guarantee a crash unless you have some sort of memory protection hardware/software in place. Most embedded micros don't offer memory protection as an option. Take the PIC micro. Address 0 is the memory indirect register, which would likely be the register containing p at that particular instruction.
Admin
Yes, it's awful...
m=((m<<2+m)<<1)
Now, *that* is how it should be.
Rich
Admin
Actually, for Pentium Pro class processors (i.e. everything Intel have made since, except the P4), partial register stalls and dependency chains mean that the best bet is to do both when clearing a register; xor eax, eax marks the register as unused but sets up a dependency on eax, and mov eax, 0 clears the dependency but leaves the register marked used...
Some of us still learn assembler for fun: http://www.agner.org/optimize/
Admin
Um, so will pretty much every assembler ever written - nothing on earth is going to make an assembler render
ld hl, 12h*34h
as "load, load, multiply". (Which on a Z80 is probably a very good thing. ;-) )
Admin
Was your gas 3D or 2D? I recall in my numerical methods class that many random number generators are weak in three dimensions...
Rich
Admin
If the target platform has just a numeric input, a character other than a digit, '+' or '-' would never be input anyway.
Admin
Bah
m=(((m<<2)+m)<<1)
Rich
Admin
Huh...
So, on one hand you have a compiler that generates native code and can take its time to optimize.
On the other hand, you have a BASIC interpreter on a micro. I believe that most, if not all, of these interpreters directly interpreted a tokenized version of the source code and were not using a virtual machine.
That means they weren't doing any preprocessing of the code besides tokenization, so no optimization of any kind, not even trivial ones like constant folding or turning multiplications by constants into shift/add combinations.
So of course, in the BASIC case, rewriting stuff in assembly would be faster. Doh.
This is an apples versus oranges comparison.
Admin
Ha ha.
That's one of the reasons not to write such things right there :p
Now, question:
m=(m<<3)+(m<<1); versus m=(((m<<2)+m)<<1);, or even m=(m<<3)+m+m; or (why not) m=(m<<2) + m; m = m+m; which one is faster?
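A minimal sketch for checking that the variants agree (they all compute m*10); as for which is faster, a modern compiler will pick its own shift/add or LEA sequence for the plain m * 10 form anyway, so the honest answer is "measure it on your target".
#include <assert.h>
static unsigned times10_a(unsigned m) { return (m << 3) + (m << 1); }
static unsigned times10_b(unsigned m) { return ((m << 2) + m) << 1; }
static unsigned times10_c(unsigned m) { return (m << 3) + m + m; }
int main(void)
{
    unsigned m;
    for (m = 0; m < 10000; m++) {
        assert(times10_a(m) == m * 10);
        assert(times10_b(m) == m * 10);
        assert(times10_c(m) == m * 10);
    }
    return 0;
}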
Admin
Don't know but for some reason, I'm suddenly feeling hungry...
Rich
Admin
Sure. And I was talking in a generic fruit-type context rather than, for example, being citrus specific.
Your example was a specific case of one Pascal compiler not improving through hand optimization. My point was that mileage may vary.
Rich
Admin
Assembler makes multiplying by 10 easy:
lea eax,[eax*4+eax]   ; eax = 5*eax
add eax,eax           ; eax = 10*eax
Now the REAL wtf is that he used a multiplier, which involves storing an extra value AND two multiplies per loop, whereas he could've just read the string FORWARD and cumulatively added and multiplied by 10.
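A minimal sketch (in C rather than assembly) of the forward pass being described: accumulate result = result*10 + digit as each character is read, so no separate power-of-ten multiplier needs to be carried along.
#include <stdio.h>
int main(void)
{
    const char *s = "1234";
    const char *p;
    int value = 0;
    for (p = s; *p >= '0' && *p <= '9'; p++)
        value = value * 10 + (*p - '0');   /* shift the accumulated value up one decimal place, add the new digit */
    printf("%d\n", value);   /* prints 1234 */
    return 0;
}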
Admin
Yes, but we are almost in 2007. Your example involved a BASIC interpreter on an 8-bit computer, which is kind of prehistoric.
There is no excuse for using a crappy compiler that can't do all those optimizations (turning multiplications into shifts and other similarly trivial optimizations) properly (well, not counting Python, which didn't do constant folding until the latest release a couple of weeks ago).
Admin
I completely agree. I work in telecomms, so I've had to use bit-shifting on more occasions than I can count. There are uses for it (especially when extracting bits from and inserting bits back into a byte), but "this method of multiplication will run faster" is definitely not a valid reason. IMHO atoi is usually called by parsing code (reading from a text file or suchlike) and therefore will only be called a very few times, so it is not a performance-critical function.
At the last company I worked for, a number of "senior programmers" (and I use that term loosely) were used to working with this way of thinking. Some of the best WTF code examples I have seen are from programmers who tried to "optimise" their code, believing they could write more effective code than anything produced by a decent compiler such as GCC. Another senior programmer was against too many (well, actually any) NULL pointer checks, since he believed they would dramatically slow the program down -- yeah, like a core dump makes the program run faster ;-)
Also, let's pay attention to the 80-20 rule of programming (80% of execution time is spent in only 20% of the written code): unless the application making use of this function was doing heavy string parsing -- which for an embedded system is unlikely -- this was a bit of wasted effort.
IMO I would prefer the more readable m *= 10, which is immediately apparent to all programmers and instantly understandable; remember, simplicity aids maintainability. ;-)
I have also seen an "optimised" odd/even check implemented as:
if ((a & 1) == 0) // even
as a replacement for
if (!(a % 2)) // even
the reasoning being that the bitwise AND would be faster than the modulus operator. It may well be, but a decent compiler should be able to optimise the second version anyway. I personally prefer the second option, since it is more obvious -- and more in keeping with what we are trying to achieve. I wonder what other people out there prefer...?
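For what it's worth, a minimal sketch of the two spellings side by side; a decent compiler typically reduces both to the same test of the low bit.
#include <stdbool.h>
bool is_even_bitwise(int a) { return (a & 1) == 0; }
bool is_even_modulo(int a)  { return !(a % 2); }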
Admin
Amen brother!
“Trying to outsmart a compiler defeats much of the purpose of using one.” - Kernighan & Plauger