Admin
Well, then, the solution is easy! Just bring in your pimp-daddy programmer to create a listener thread that spawns off another listener whenever it receives a new request.
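(For the record, the pattern being mocked here is just the classic thread-per-connection accept loop. A minimal sketch with POSIX sockets and pthreads — the port, buffer size, and echo behaviour are all invented for illustration:)

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

/* Handler run on its own thread for each accepted connection. */
static void *handle_request(void *arg) {
    int client = *(int *)arg;
    free(arg);
    char buf[512];
    ssize_t n = recv(client, buf, sizeof buf, 0);
    if (n > 0)
        send(client, buf, (size_t)n, 0); /* echo back, purely for illustration */
    close(client);
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);            /* hypothetical port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 16);

    for (;;) {
        int *client = malloc(sizeof *client);
        *client = accept(listener, NULL, NULL);
        if (*client < 0) { free(client); continue; }
        pthread_t tid;
        pthread_create(&tid, NULL, handle_request, client);
        pthread_detach(tid);  /* one thread per request, never joined */
    }
}
```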
Admin
Oh, thanks for your responses and clarifications
(Yay I learned something new today)
Mike Rod
Admin
Yes, though they will behave differently under high load. Non-switched ethernet degrades very badly once utilisation reaches (in my experience) 35%, while token-ring remains responsive for all stations right up to 100% util. Switches take care of that now though, and 100% of 16Mbps looks pretty small these days. Token ring behaved badly under other circumstances, like a station dying while holding the token, or certain cabling issues.
Maybe they did their own layer 3 networking and were dependent on certain features of layers 1 or 2?
Admin
Actually, I have to work with a 3rd party library (a .NET wrapper around a COM library by the same company) that requires the application to use Win32's SetProcessAffinityMask to fix the process to a single CPU. The reason being that otherwise .NET finalizers could run in parallel with other code on two different SMP or SMT execution units (if present), calling some sort of non-reentrant COM code simultaneously, which causes massive breakage.
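(For the curious, the pinning itself is a single Win32 call; a minimal sketch, assuming you want logical CPU 0 — the surrounding program is hypothetical:)

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Restrict the whole process to CPU 0, so .NET finalizer threads and
       worker threads can never run the non-reentrant COM code on two
       execution units at once. Mask bit n = logical processor n. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 1)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    /* ... load the COM wrapper and do the actual parsing here ... */
    return 0;
}
```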
Now, that said, it probably won't surprise you to hear that the code doesn't work reliably even if you use SetProcessAffinityMask as documented, or on single-CPU machines, anyway. You can run the component (well, it's a parser for a DSL) once in a given process with relative impunity, but call it more often and you will ultimately get funny blow-ups. Please don't even ask about the possibility of parsing two different files simultaneously. Static variables rule!
Now, to actually make this library usable in our multi-threaded GUI application, I had to write a wrapper executable: our application starts a local server process that does the actual parsing, then collects the results using remoting. Then the parsing process is shut down, because it cannot be safely used to parse more than one file. Now some people might think this is a huge WTF. And they would be dead right. But until the library gets fixed or we find another vendor (or the time to write a better parser ourselves), there is just no other way to make it work.
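(The process-per-parse trick itself is simple enough. Here's a minimal Win32 sketch — "parseworker.exe" is a hypothetical name for the wrapper executable, and the real thing collects results over remoting rather than an exit code:)

```c
#include <windows.h>
#include <stdio.h>

/* Launch one single-use worker process per input file, wait for it to
   finish, and report its exit code. All names here are invented. */
static int parse_in_child(const char *file) {
    char cmdline[MAX_PATH * 2];
    STARTUPINFOA si = { sizeof si };
    PROCESS_INFORMATION pi;
    DWORD exitcode = 1;

    snprintf(cmdline, sizeof cmdline, "parseworker.exe \"%s\"", file);
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0,
                        NULL, NULL, &si, &pi))
        return -1;
    WaitForSingleObject(pi.hProcess, INFINITE); /* one parse, then it dies */
    GetExitCodeProcess(pi.hProcess, &exitcode);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return (int)exitcode;
}
```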
Admin
That's a NEAT trick for a water company!
But I would NEVER have bought it! WINDOWS only? An ENTIRE computer for ONE sensor?
The idea of using ONLY dell, and even a special MODEL?!?!? WOW!
And the bit about TOKEN RINGS! WOW, I guess Microsoft, DELL, and IBM are ALL giving them KICKBACKS!
Steve
Admin
I dunno. It seems like there might be a difference if the software is highly sensitive to what order the messages come in.
Perhaps the issue is not Ethernet vs Token ring, but more of what protocol is used on the network, and it was a miscommunication.
Admin
Or maybe before anyone considered it.
CAPTCHA: Mustache....as in ride.
Admin
It's a water company...why would they have experienced systems development managers hanging around?
That's why they brought in consultants in the first place.
Admin
Ah, so did I. Mea culpa.
(Actually, I had learned this at some point and forgot it, and am now apparently re-learning it.)
Admin
Damn, $2.5M would surely have been enough to employ a whole team for what, 2, maybe 3 years, to develop their stuff and continually support it....
Admin
People here in the office now recognize when I am reading these things. It's when they see me cover my forehead with both hands and slink down to the desk with an unbelieving stare at the monitor while my mouth makes a big 'O'.
Forgive me, I have to go put my lower jaw back into place.
Admin
Well, I can't even describe my shock at all the issues with this post that other people have not covered yet.
For starters, by your own writing, Token Ring is more efficient than TCP/IP (ignoring the gross inability to compare the two) because it can't have collisions. If we're not at risk of resending packets (which isn't quite true with TR, but it's true enough), we're by definition more efficient.
That's ignoring the fact that collisions are a fairly small problem on switched-Ethernet networks anyway.
Your analysis of page faulting doesn't even make sense. When a CPU faults for a page, it doesn't store anything in memory registers. I don't know what a memory register is. If you're talking about a TLB, then they're unrelated to the page-faulting process.
If you're talking about registers like the register to the base pointer table, or segmentation/protection descriptor registers, then they are slightly related but generally don't change at all. If they change, it's due to a task switch and not a page fault.
If you're talking about the page tables and determining which virtual pages are actually in physical memory at any one time, then all processors that support paging support some sort of mechanism to tell you when a page was last accessed. Operating systems use this information to order the pages by activity and automatically page out the least-recently-used (LRU)[1] page.
LRU crushes random, so it's not even worth putting them in the same sentence. And what you said is wrong anyway: schemes based on how many references a page has will yield better results than random[2].
But page information related to faults sure as hell isn't stored in a register, or even a group of registers. One register couldn't store all that information without being totally massive.
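(To make the reference-bit point concrete, here's a toy sketch of the clock/second-chance approximation of LRU that kernels actually use — the software `referenced` flag stands in for the hardware accessed bit, and the frame count and reference string are invented:)

```c
#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 8

/* Toy clock (second-chance) page replacement. */
struct frame { int page; bool referenced; };
static struct frame frames[NFRAMES];
static int hand = 0;

/* Pick a victim: sweep the ring, clearing reference bits, until we find
   a frame not touched since the last sweep. */
static int evict(void) {
    for (;;) {
        if (!frames[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = false; /* give it a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

/* Simulate an access: mark the frame referenced on a hit, replace a
   victim frame on a fault. */
static void touch(int page) {
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i].page == page) { frames[i].referenced = true; return; }
    int v = evict();
    frames[v].page = page;
    frames[v].referenced = true;
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) frames[i].page = -1; /* empty frames */
    int refs[] = {1, 2, 3, 1, 4, 1, 5, 2};                 /* made-up trace */
    for (int i = 0; i < (int)(sizeof refs / sizeof *refs); i++)
        touch(refs[i]);
    return 0;
}
```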
Finally, token ring was designed by people who realized maximums are important in many situations. That's why things like LynxOS and VxWorks exist.
The fact that maximums are only important in a limited set of situations doesn't mean they're not important. In the situations where they're warranted, they're usually critical. They can be, in fact, the difference between life and death.
[1]Strict LRU is rarely used anymore, but the concept is the same. Linux and Windows NT don't attempt to keep the lists of free and inactive pages in a strict LRU order, as there's rarely a point. Older BSD does, however.
[2]In fact, this is one thing Linux 2.6 takes into consideration when aging pages. Pages that are mapped into multiple processes take more time to become inactive than pages that are private.
Admin
It's only marginally working due to solar flares. For another $2.5M, we'll build a solar flare deflector array.
Admin
the real WTF is that they used PCs instead of PLCs for automation
and that they didn't use realtime ethernet
Admin
*peers at the extra vowel* You're new here, aren't you?
Admin
Am I the only one who caught this little tidbit at the beginning:
The fact that they were a military contractor should have been the first warning sign.
Admin
Tolkien rings?
Admin
Try plugging a 4Mb station into a 16Mb token ring.
Admin
Except that collisions don't exist anymore. They went away when hubs died. Now it's switches everywhere, and Ethernet has become point-to-point. And I happily run my links at 100% (110-120Mbytes/s of data served over NFS on a gigabit link, thanks $DEITY for big disk caches) without anything crashing. Meanwhile, tokens get lost to a dying server in the loop, oh my. Token ring requires every device on the loop to be cooperative and alive. Ethernet hubs only want them to shut up; switches don't care either way.
OG.
Admin
If I remember right, my lecturer showed me how Token Ring was faster on a slower network.
As far as he was concerned, token ring was created by IBM purely for patent purposes, i.e., they wanted their own type of network. Ethernet was free for all, and as network speeds went up, token ring died away (I'm still on a 16Mb token ring network... but you really don't notice it).
IMHO, creating a real-time data collection system that relies on the general network (be it token ring or Tolkien's ring) is a big mistake to start with. There are reasons why you would put in several layers of servers to ensure data are being collected. A good read, by the way.
Admin
$5 says the original system was designed back in the days when Token Ring speeds were comparable to Ethernet speeds, and this company figured it's best not to redesign everything when the original works just fine (something the client in this case should have thought of).
I'm sorry, I don't see the WTF here. A company bought a system without reading the specs and tried putting it to use with hardware it simply was not designed to work with. Happens all the time. Yes, the requirements appear a bit strange to me, but then again, from what I know about real-time systems (which is very little), that's very common.
Admin
Err, no. Token rings do NOT require that every device on the loop be cooperative and alive. At least back when I was working with them, if a device failed or was powered off, it went into a mode where it just passed data through. Maybe they made token ring more fragile since then, but that's how it used to work.
Admin
TCP retries are a higher layer and totally independent of medium.
And collisions can exist on switched networks. It is possible to remove collision domains, but switches aren't a sufficient requirement to achieve that. They are necessary but insufficient.
Admin
Actually, the retransmission (in standard ethernet) is done at the MAC level when it is the result of a collision. TCP only retransmits when a packet is lost, damaged, or out of sequence, which is nigh-on impossible on a single-link transmission; it'll probably be handled by the MAC or link protocols in this case. The link layer (below the IP level) will retransmit broken and out-of-sequence packets, functionality that is repeated at the TCP layer because two sequential packets in a TCP stream may take different routes, thanks to the IP layer it's sitting on top of, so a packet sent later may arrive first if it takes a shorter path.
Admin
Didn't Ethernet simply become popular because it got really inexpensive really fast?
Admin
From this number I predict that you were using at least one cheap <$10 network card in that segment. Yes, it matters. One cheap card pulls the threshold down from about 70% to about 30%.
Of course, nowadays there is absolutely no excuse for not using a full-duplex switch, thereby eliminating the entire problem (because CSMA/CD is not used at all in that configuration, since no collisions are possible). And yes, it does have to be full-duplex: if something goes wrong with autonegotiation and you get stuck in half-duplex mode, you've got exactly the same problem still, even with a switch (because you can get the switch and the host both sending data at the same time).
Admin
From this comment I deduce that your experience with token ring has involved dumb MAUs - devices with little intelligence beyond that needed to insert hosts into the ring. These were common when token ring was popular, but they are to token ring what a hub is to ethernet. If you plug a crappy ethernet card into a busy hub, it wrecks the performance for everything on that hub. If you plug a 4Mb token ring card into a dumb MAU that's otherwise running at 16Mb, it behaves similarly.
There were also smart controllers, that were to token ring what switches are to ethernet. These did not have such issues. Of course, they were kinda expensive... token ring and ethernet aren't really all that different.
Admin
This is a popular myth with inexperienced network admins. Unfortunately, being a myth, it's not true. Collisions are still extremely common, because a lot of network equipment gets misconfigured. Switches do not eliminate collisions, they just make it possible to eliminate collisions. The network admin has to actually set them up right to accomplish this - and sadly, this doesn't happen very often, because most network admins are really windows admins who think they know something about networks. I have lost count of the number of times I've gone to investigate the need for a network 'upgrade' and found the only problem was bad configuration...
Admin
It's hard to pinpoint the exact reason; ethernet had many advantages at the time. Personally, I think it's because the common ethernet cables were willing to go around corners, while the IBM Type-1 cabling used for token ring was only interested in going in a straight line (it was shielded, fairly thick, and very very stiff - not as bad as the frozen yellow garden hose that was used for the original 10base5 ethernet, but not far off). Token ring uses cat5 nowadays, but as with so many other things, it was late to the party.
Admin
I am completely horrified. I'm sure there is money in your industry, but how did someone not get fired for this cluster f*&k?
Admin
Expensive compared to a lot of software, yes. Whereas the standard philosophy for many administrative systems, let alone PC software, seems to be to fix the worst bugs and let the user sort out the rest, that doesn't work when your "users" are valves and pumps and sensors. Add to that the fact that highly-specialized control software by definition has a small market, then yes, it is expensive.
"Marginally functioning"? Well doubtless there is plenty that falls in that category. I have indeed encountered a couple of such systems. OTOH I have worked for several years building both process-control and administrative systems; the process-control systems were designed, built and tested to standards which the administrative users could only dream of. After all, if a process-control system fails, it can kill people. We were used to availability requirements of 99.99%, 24752. Twice we installed a large system and after a couple of weeks had the assigned maintenance personnel clamouring for other work because the system was working flawlessly and there were no bugs to fix.
Admin
Oh, come on, that is the oldest trick in the industry. Just blame some system component for the deficiencies of your own software!
Very elegant: not only will you not be blamed, you even get time and money from the customer to fix what should have been working in the first place!
I have to say: whoever believes a claim like the one above deserves to pay.
Admin
Well, that won't work. TWO_MISSIPPI isn't defined...
Admin
They probably used token ring because it gives you a guarantee of what the maximum delay from each node would be. And then the software was probably coded to rely on that.
Admin
Wow, you have the techno babble almost pinned down. Now if only you stopped mixing network layers like that, people who actually know anything about networks would read more than one sentence of your text.
Admin
Absolutely. I used to work for a coal mining company and they used token ring at head office for performance, but ethernet at the coal mines because contractors working on site would keep digging through network cables and bringing the whole network down.
This was mid-late 1980s - maybe that's why they didn't use star topology?
Admin
He should probably look up what "page fault" means too.
Admin
I didn't read that far... :D
Admin
I suspect all this could have been avoided if 'The consultant' had written it all in Coldfusion...
Admin
wtf?!! In this day and age?
Admin
Oh, the statement is quite correct. Token Ring is logically a ring, but not necessarily physically. Typically it is more of a star, with a central hub used to connect to all the clients. If a client is unresponsive or is removed, the hub (called a MAU, if I remember right) does the right thing (ie. restores the 'logical' ring). So, in practice, you could remove and attach machines (and other networks) willy-nilly just like (most) ethernet setups.
Admin
At the time token ring, token bus and ethernet were competing, ethernet was still using the coax wire you had to bring from one computer to the next. Cut the wire, and the network was down (not even separated into two halves, because you needed a terminator at the extremities to avoid signal echoes). Plus, as everyone said, a very busy 10Mb/s ethernet was really running at 3Mb/s tops before the generalization of switches, and with unpredictable delays between frames. The 16Mb/s token ring was a lot faster than ethernet under heavy load. Ethernet also had problems with wire length; it was the network with the shortest distance requirements. In industrial facilities, that was often a serious drawback.
There was of course always the problem of the ring being cut (even with dual-ring token ring), so for industrial environments, token bus was popular. It was designed to be a real-time network (sending a frame to a node, or receiving a status about that node being unreachable, took a guaranteed maximum time) and the ring was virtualized by the network protocol over an ethernet-like physical medium.
When you are controlling a critical industrial process, knowing a sensor WILL send its measurement in the next 250ms (or it's broken) is very important. From the network to the operating system, the hardware and the programming language, every operation has a well-known guaranteed maximum delay. Ethernet could not guarantee that at the time (and I suspect it still can't, but I don't work in realtime anymore, so I'm not certain about that). I guess this is the reason the WTF software has to run on specific machines and a specific network.
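(To make that concrete: with a hard delivery bound, a simple read timeout becomes a reliable failure detector rather than a guess about congestion. A minimal sketch with POSIX select() — the file descriptor and framing are hypothetical:)

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait up to 250ms for the next sensor reading on fd. On a network with
   a guaranteed maximum delay, hitting this timeout *proves* the sensor
   or link is broken, instead of merely suggesting congestion. */
static int read_sensor(int fd, char *buf, size_t len) {
    fd_set rfds;
    struct timeval deadline = { .tv_sec = 0, .tv_usec = 250 * 1000 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    int ready = select(fd + 1, &rfds, NULL, NULL, &deadline);
    if (ready <= 0)
        return -1;  /* deadline missed: declare the sensor dead */
    return (int)read(fd, buf, len);
}
```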
Admin
It's amazing how that tired old IBM FUD still continues to circulate, considering that Van Jacobson posted measured 90+% throughput in 1988. Google "4BSD TCP Ethernet Throughput" or "Measured capacity of an Ethernet: myths and reality".
It's true that you might get unacceptable variance in delay, and maybe packet drops in the worst case, if you tried to run a control system on an Ethernet while pushing its baseline load way up. Running, say, BitTorrent and a hard real-time control system on the same network is a WTF configuration, though.
In any case, no network ever gives you a guaranteed maximum delay, unless you have a bit error rate of 1 in 10^infinity. An analysis that claims token ring is deterministic but Ethernet is not is just plain wrong.
Admin
It's perfectly possible to make a deterministic protocol. Token Ring is. Ethernet is not. Token Ring has guaranteed maximums on when I'll be able to talk again and how long data will take to arrive. If those are exceeded, I can safely assume there is a network error.
Ethernet does not provide this at all without specialized hardware and software.
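(The bound is just arithmetic: with N stations, each allowed to hold the token for a fixed time, the worst-case wait is roughly N times that plus ring latency. A toy calculation with invented numbers:)

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical ring: 40 stations, 10 ms max token-hold time each,
       plus ~5 ms of total ring propagation/repeater latency. */
    int stations = 40;
    double hold_ms = 10.0, ring_latency_ms = 5.0;

    /* Worst case: every other station uses its full slot before the
       token comes back around to us. */
    double worst_wait_ms = (stations - 1) * hold_ms + ring_latency_ms;
    printf("worst-case wait for the token: %.1f ms\n", worst_wait_ms);
    return 0;
}
```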
Admin
This has been a very interesting post. I was involved in token ring installation pretty much as soon as IBM introduced it (well, for PCs anyway, back in the eighties, I think) and I can confirm that it was definitely robust. The MAUs (multistation access units) had a relay for each connection, which required the network card to power it, so if you turned off the PC or cut the cable the relay would click over and that section would be cut out of the circuit. You built bigger networks by connecting multiple MAUs together. Chop the link between two MAUs and you end up with two independent networks, still functioning. I hated the cabling, but it was much more reliable than ethernet over coax.
Admin
Sheesh. You would never make it as a highly paid consultant. Where is the money in 'reconfiguration'?
Admin
You need to take some even more basic courses on reading comprehension. JRH was clearly talking about data transmission, not capability to transmit.
Mind you, most guarantees are couched in terms of probabilities less than 100%, so perhaps JRH is also guilty of nit-picking.
(And nit-picking is a crime against humanity.)
Admin
The WTF there is that they didn't put the cables overhead.
Admin