Admin
They shipped the product without frist having a customer-compatible procedure to flash an upgrade?
Admin
Wow. That is marketing. As an IT guy, I'd say: fix this ASAP. But they sell the fixed product for even more money. Now I know why I'm not rich.
Admin
I think the highest percentage of bad code I've seen has been MCU code. It's usually written by EEs who don't know or don't care about good software practices, or by SW engineers who don't know about embedded systems...
Admin
I can add though that some of the highest quality code I've seen has been from embedded programmers who know what they're doing.
Admin
Reminds me of one of my favorite embedded-development stories. It was an avionics system, so obviously safety was a major concern and was baked into the requirements. It was all done in C (not C++, because the requirements explicitly forbade OO-based design), but the more boneheaded move was the requirement that prohibited pointers. Period. Not even for, say, pass-by-reference semantics. When I joined the project, they were having some severe performance problems, and it didn't take long to see why...
Fortunately, I helped them see the light, and the no-pointers requirement was softened to "no dynamic memory allocation" so that we could still use one of the most fundamental elements of the language in places where it made sense.
Admin
"The code fix was simple- the STM32-series of processors had a hardware timer which could provide precise timing. "
These people need to be banned forever from embedded work. Every microcontroller (even the venerable 8051) has at least one timer intended to provide you with a means for timing. Using the µC's HW resources and correct interrupt handling is part of your daily work as an embedded SW engineer. If you don't know about that, you're not an embedded SW engineer.
Admin
So they did something stupid, then released rev 2 (with markup) when they fixed it? I'm not sure I see how that's a WTF, just common marketing attitudes (which generally get weeded out by other areas of the company).
Admin
Is this "temporal dithering" the same thing as PWM (pulse-width modulation)?
Remy says: Sort of! In this case, the underlying signal is PWM, and what I'm doing is toggling between two different duty cycles to essentially average out into a new wave. Temporal dithering is also used in LCD screens to increase the color gamut: swap between two different colors to create a third perceived color.
Admin
Embedded programmers are a massive source of WTFs.
Admin
Waiting in ISRs can be fine, e.g. if you're programming the good old C64 for some nice gfx effects in full sync with the VIC-II gfx chip ;) But yes, that's the exception; in general, waiting in an ISR is the second most stupid thing you can do... while the MOST stupid thing is not using a hardware timer for, well, timing. Is this story for real? I just can't imagine someone doing low-level coding for a commercial product would ever make such a rookie mistake :o
Admin
I'm trying to decide whether it's clever or boggling to use "us" as the "usable in programming" version of "μs".
Admin
Any reasonably modern microcontroller can do any kind of PWM you can think of in hardware, with very little software intervention required. But if they really had very high accuracy requirements, over a wide temperature range, that needs to be accounted for in the hardware design. Oscillators drift with temperature, which is why, e.g., high-precision instruments use oscillators with an integrated heating element to keep them at constant temperature.
There are also special cases outside demo effects where you want to wait inside ISRs. E.g., if there are two closely related events, the overhead of entering and exiting the handler can be longer than the delay. But those are special cases, and you will know when you need to use them.
Admin
I have to agree. Fixing problems and re-marketing is pretty standard practice in pretty much every industry that isn't software. When a car company puts out their next year's lineup with fixes that make the handling better, you don't walk in there with the previous year's car and demand an upgrade.
I agree with beef also: I've worked with a lot of EEs turned SWEs, and the results are never mid-range; they are always beautiful or horrific. I think the line is drawn by whether they realize that they are stepping way outside their own wheelhouse and bother to learn the new skills they need, or just dive right in and make mistakes in production that CS students make in Programming 101 (see: every anti-pattern that you learn in the first 2 years on the job).
Admin
To get smooth fading of a light source without visible steps, you typically need at least 12 bits of PWM resolution. Most MCUs have enough 16+ bit timers that you can simply use them directly, but in some cases this makes the PWM frequency too low or the minimum on-time too short, so that's where dithering comes in handy.
I'm not sure why Remy needed such accurate timing in his program loop, though, as typically you'd swap your timer compare values in the timer overflow ISR--or better yet, use one of the many MCUs that has hardware dithering built into the timer peripheral.
Admin
PC programmers are a massive source of WTFs. Imagine wasting memory on pointless shit like objects!
Admin
TRWTF are the customers who put up with this.
Admin
I'd guess it is. Trying to do timing by looping is a real thing and is sometimes necessary, depending on the granularity of timers available and whether you've got hierarchic interrupts (waiting in a high-prio interrupt is bad, but not in a low-prio one if the high-prio ones can push on past it).
Admin
Ah yes, customer = beta tester. Works a lot better for software than for hardware/firmware. Even then, it kinda sucks for the customer.
Admin
In German, the term for that is “banana software”, because it only ripens after arriving at the clients’.
Admin
It's very common, in documentation, to use "us" as opposed to μs. Just one of those things. You work with pedal-to-the-metal coding, you get used to it. Painful, but not as painful as some of the specifications (as here).
Admin
Well, yes, sort of. Except that, if you loop, you lose. Particularly with interrupts.
I have a hard time imagining what I'd do without a hardware clock. (I have probably programmed against a board without one. A PIC, at my best guess.)
You can't, realistically, provide a deterministic timer (even within some sort of tolerance) unless you write the whole damned thing in a loop that does nothing but count cycles. And at that point you would have to insist that every single routine updates the number of clock cycles used up. This would be hard and quite frankly pointless.
I do so love the idea of cargo-culting the completely idiotic "micro-optimised in assembler" thing across many different requirements, however. Just put IRQ in front of the label! You know it makes sense!
A big, happy, call-out for Jindra S here -- unlike a lot of contributors recently, he came, he saw, he winced visibly, and he darned well conquered.
Good job!
Admin
qwerqwerqwer: A temperature-controlled oscillator is overkill for most purposes. For many purposes, even the built-in RC is close enough, provided you preserve the factory calibration factor when you program the flash. (That is, the factory measures the oscillator frequency and writes a correction factor to a location in flash. Some programmers automatically save that and re-write it if the flash is erased; for others, you have to specifically enable this.)
That makes it quite accurate at room temperature; the frequency varies with temperature, but in a mostly predictable way, and not widely enough to noticeably change the time for the LED to dim or brighten, for instance.
If you need better accuracy in outdoors temperatures, just use an oscillator crystal instead of the built-in RC oscillator. (Most micros have circuitry to drive either one.) Those are accurate over a wide temperature range to within a few minutes a year, if you're willing to pay a few dollars for one that's been tested to higher precision.
A hardware timer counts the micro's clock cycles, so it isn't more accurate than the clock oscillator. But the hardware timer has three advantages: It's easier to get the # of cycles you are aiming for. It allows the micro to do other things while counting cycles. And the biggest advantage: it keeps counting cycles during an interrupt.
If they could fix the timing problems in software, their oscillator was good enough for the purpose - it must have been that the software counter was losing time unpredictably to interrupts. That's an environmental influence that you'll never fully explore when testing the device in the lab.
Admin
Sometimes field upgrades AREN'T an option - I wrote code for microcontrollers on aircraft. The FAA would have lost their minds if we told them the devices were field upgrade-able. Doing so would have meant anybody who could touch the units, from the folks who built the aircraft to the mechanics who repaired and maintained them would have to go through an extra certification process just for our gauges. And the paperwork to do a field upgrade in an aircraft......!
Admin
The use of inline assembly is justified, and it has nothing to do with performance.
If they just wrote the function in plain C, the compiler could throw the loop away since it doesn't change the state of the program in any way.
Try it: https://godbolt.org/g/KNao63
There are other ways to force the loop to be preserved, but inline assembly is also common for this task, especially in cryptographic libraries where it's important to avoid timing attacks.
Admin
If that's your only issue, you just make the counter global (not a bad thing if it's a timer) and pass it by reference.
There are good cases to be made for optimisation by inline assembler. This is not one of those good cases.
Admin
@LCrawford: It is worth noting that most STM series processors have a serial bootloader that just needs a couple of extra pins.
@Dmitry: Making the loop counter volatile should be enough to keep the code around, but it might also force the counter to be stored on the stack, changing the timing. In any case, counted busy loops are usually an embedded anti-pattern, because you won't be able to time anything else properly. And busy-waiting inside an interrupt is almost always bad. About the only exception is when you have really tight timing requirements and need to delay for just a few cycles. Even then it's bad, because modern embedded processors may not have guaranteed cycle counts.
Admin
Indeed, I fail to see what's WTF-worthy here. Sure, it's inelegant, and sure they had a performance issue they only picked up when the thing was shipped and got fixed in V2, which isn't exactly ideal.
But the WTF seems to be "zomg, I never worked on embedded systems before, LOLWUT". Which is probably the real WTF; all developers should be forced to spend a few years coding for an 8051 derivative with 256 bytes of RAM at the start of their career - that way I wouldn't now have to waste my time explaining to "Enterprise" developers simple concepts like "just because a GUID looks like a string doesn't mean you store it in a VARCHAR".
Now get off my lawn.
Admin
You don't need to imagine it. Observing will do. After all, as Zinovyev says, the skills you need to get a job are totally different from the skills you need to do that job.
Admin
It used a drifting oscillator for the CPU clock, but a stable one for the timer? Why not use the stable one for both?