Admin
<font size="1">A $1000 24-port gigabit switch has 48 Gbps of switching bandwidth. In such a topology, there are no collisions, ever.
You could daisy-chain 10+ of those switches together, network them with plain spanning tree, and you'd still have a lower worst-case latency end to end than a token ring network one tenth the size.
Token ring is just plain asinine here, and really a non sequitur (either by the consultants, or by the OP trying to protect the identities of the guilty and doing a poor job of it)
Captcha: wtf
wtf indeed
</font>
Admin
Every switch that I've seen lately has two LEDs per port, one for the presence/negotiated speed and one for duplex.
If I see two rows of green lights with no light by itself in a column, then everything's good.
Oh, and don't make routing loops when you're playing with VLANs. How hard is this stuff?
Admin
Answer me this: what happens when I attach a plain stupid old hub to one of those ports?
What happens when one of those ports is in half-duplex mode?
If you can't correctly answer those questions, then you don't have business making claims about what Ethernet is or isn't capable of.
Nope, because Ethernet doesn't have a defined worst-case latency. You can have an average-case latency, but you cannot have a deterministic worst-case latency. It's neither. It's potentially perfectly appropriate in several situations, including the one described in this WTF.
Admin
Huh? Complete nonsense.
News flash: you can run TCP/IP over Token Ring. I worked at a place that did this. Token Ring and Ethernet are at the data link layer. TCP is at the transport layer and IP is at the network layer.
Do people have a random jargon machine they use for posts? An equally sensical translation of the above post:
"Bananas are more jello than tacos, even though it's completely umbrella. Bananas have zero portabellas, because only one motor boat can . . .. . . "
Admin
I'm sorry, but could you please be more informative? Since I seriously doubt you believe (or are attempting to claim) that the price of a NIC somehow impresses upon it that it should perform less well, I'm guessing that you're trying to say that certain low-cost Ethernet chips lack features that more expensive chips have, reducing maximum performance. So what, exactly, is that feature?
"It's only good if it costs a lot" sounds like something the consultants in the original WTF would have said.
(The OP noted "Non-switched", which initially suggests that the cheap card might have been only a 10MBit card, but then all stations would be forced down to 10MBit and you'd get less than 10% of a 100MBit segment's maximum throughput -- so that can't possibly be it.)
Admin
Actually, random replacement is rarely used in practice. What's used is the "clock" algorithm, which is similar to random (it can be thought of as randomly evicting an entry), but is extremely simple to implement in hardware. And the reason we use clock over LRU is that LRU requires far more hardware resources to manage than clock (you have to add a timestamp to each cache tag, comparators to find the smallest timestamp, worry about overflow, etc.).
So it's not very random, and it's deterministic (just complex to figure out, but it can be worked out which entry will be evicted given a starting state).
And compared to LRU, it actually works so well that no one really wants to bother with LRU. But it's not strictly random, and one can optimize for it if desired.
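For anyone who hasn't seen it, the clock (second-chance) algorithm described above can be sketched in a few lines. This is a minimal software model, assuming a simple frame array with one reference bit each; it's illustrative, not how the hardware actually wires it up:

```python
class ClockCache:
    """Second-chance ("clock") replacement over a fixed number of frames."""
    def __init__(self, nframes):
        self.frames = [None] * nframes   # stored keys
        self.refbit = [0] * nframes      # one reference bit per frame
        self.hand = 0                    # the clock hand

    def access(self, key):
        """Return True on a hit, False on a miss (possibly evicting)."""
        # Hit: set the reference bit and return.
        if key in self.frames:
            self.refbit[self.frames.index(key)] = 1
            return True
        # Miss: sweep the hand, clearing reference bits as we go,
        # until a frame with refbit == 0 is found; evict that frame.
        while self.refbit[self.hand]:
            self.refbit[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = key
        self.refbit[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return False
```

Note that, exactly as the post says, the outcome is fully deterministic: given the frames, reference bits, and hand position, you can work out which entry gets evicted next.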
Admin
Admin
To be clear, when I say LRU, I'm not talking about strict hardware-supported LRU, but sampled LRU schemes, where the OS periodically determines how active a page is.
Admin
Complete crap. On a typical token ring network, you send a packet and pass the token. And the computer doesn't pass the token - that's handled by the ring.
And nobody uses token ring anymore. And it's dead easy to disable multithreading - make your program run in one thread. The other CPUs won't affect your system at all. In fact, almost everything is single-threaded by default.
Admin
True.
Or unplugging the Token Ring cable from an OS/2 2.0 workstation. (That was my favorite trick whenever someone said that OS/2 was crashproof.)
OTOH, those enormous and overpriced "genderless" connectors were very impressive-looking.
Admin
Above message replies to this. Bah.
captcha: clueless. Indeed.
Admin
Really? What do you think happens in a DoS?
It may not be a physical collision, but packets are still lost.
Admin
People still use rings. Look up FDDI2.
Admin
I'm pretty sure the guy that first posted on this topic of LRU and Random replacement algorithms wasn't talking about paging but actually caches. In the event of a cache miss, the processor must attempt to find which cache block to replace with data from a higher-level cache or memory. In this case, it's all done in hardware, and LRU and Random are pretty close in performance as far as I know. LRU has several pitfalls that are really easy to fall into, for example, iterating over a list that is larger than the cache. As you get further in the list, using LRU, you will wipe out the earlier parts of the list, resulting in the next iteration being all misses. In random, at least you have a chance of cache hits.
I could be wrong, I don't work in the industry, and my prof that taught this topic wasn't the most competent, but it makes sense to me.
Captcha: 1337
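The scan pitfall described above is easy to simulate. Here's a rough sketch in Python (a toy cache model, not real hardware; `lru_hits` and `random_hits` are made-up names for illustration): with strict LRU, repeatedly scanning a list just one element larger than the cache misses on every single access, while random replacement still gets hits.

```python
from collections import OrderedDict
import random

def lru_hits(trace, capacity):
    """Count hits for a strict-LRU cache replaying an access trace."""
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # mark most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = True
    return hits

def random_hits(trace, capacity, seed=0):
    """Count hits for a random-replacement cache on the same trace."""
    rng, cache, hits = random.Random(seed), set(), 0
    for key in trace:
        if key in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(rng.choice(sorted(cache)))  # evict at random
            cache.add(key)
    return hits

# A list one element bigger than the cache, scanned repeatedly.
trace = list(range(9)) * 100
print(lru_hits(trace, capacity=8))     # 0 -- every access misses
print(random_hits(trace, capacity=8))  # random replacement keeps some hits
```

LRU scores exactly zero here because each miss evicts precisely the element the scan will want nine accesses later.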
Admin
By the time Ethernet was using CAT5, TR was using it too. I remember working with 100MB TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using Anynet cause subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)
Admin
No, it isn't, unless we're talking about CPU caches. And they don't work like that.
Nope. Not remotely. Especially not for page faulting. Random is as bad as you can get.
Only if you can't code. This is a solved problem.
Why would you be editing a list on a scan? Again, this is a solved problem; please go read some algorithm books on this.
Admin
Long live VG Cats!
Admin
Yeah, I was talking about CPU caches. They are done in hardware. There are situations where LRU doesn't work out. LRU is still better than random in most cases, but in some cases random is better. As for editing a list on a scan, the example the prof gave was normalizing a long array: sum up all the values, then divide each value in the list by the sum.
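That normalization example, sketched out (the function name is just for illustration): two full sequential passes over the same array, so if the array is larger than the cache, strict LRU has already evicted the front of the array by the time the second pass begins.

```python
def normalize(values):
    # Pass 1: read every element to compute the sum.
    total = sum(values)
    # Pass 2: read every element again. If len(values) exceeds the
    # cache, pass 1 has evicted the head of the array under LRU, so
    # this pass misses on everything; under random replacement some
    # of the array is likely still cached.
    return [v / total for v in values]

print(normalize([1.0, 2.0, 5.0]))  # [0.125, 0.25, 0.625]
```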
Admin
Well, certainly not frame collisions. Are you sure you know what a DoS is and what it does?
Admin
I also remember another weird IBM network called Canal, that used something like 16 or 32 ethernet wires (the coax ones ...) linked together in a very bulky cable and a HUGE connector. This thing was damn fast at the time. It was a kind of "parallel ethernet" and I stumbled numerous times on that f***ing cable the size of my arm.
Admin
Not above absolute zero, it isn't.
Admin
It concerns me that none of you are aware that Token Ring networks were already moving toward star topologies using Token Ring switches by the late 90s, and that before that, IBM 8228 MAUs and active bridges could be used to segment the physical rings and help deal with wacky workstations. 16Mb switched Token Rings looked a lot like switched Ethernets and performed quite a bit better. 100Mb and 1Gb Token Rings were already available by the turn of the century.
HOWEVER... They were also 2-4 times as expensive and not as simple to learn as Ethernet. But even in 1997, Ethernet switches and Token Ring switches were being used to segment networks and drastically improve performance, so the differences between them started to blur. Token Ring had the edge, though, with MTUs of over 4k, reducing header overhead and fragmentation.
Basically, the WTF is that the customer's network engineers didn't call 'BS' at this explanation. It's completely ridiculous. Regardless of how reliable or predictable Token Ring or Ethernet are under X load or Y conditions...
HOW MUCH BANDWIDTH COULD SOME FREAKIN SENSORS POSSIBLY NEED? More than my streaming video servers or an Active Directory Global Catalog server? I think not.
And kudos to those of you who pointed out the additional WTF that TCP/IP has nothing to do with anything. All of those protocols are at higher layers. If the argument were about TCP (reliable) vs. UDP (faster) or unicast (easy, but wasteful) vs. multicast (efficient, but dodgy), it might make sense, but none of that is the issue here, once someone starts pointing at Layer 2.
--J
Admin
Okay, I did. The servers that hold the second paper are down (or slow?), so I was unable to get it. A quote from the first is revealing, though:
The tests were done between 2am and 6am on a fairly quiet Ethernet (~100Kb/s average background traffic)
This situation isn't much different from a switched network today. It has nothing to do with the typical network as it existed back then.
When we were talking about 30% as typical for ethernet, we were not talking about unloaded ethernet. We were talking about more than 100 computers on a single segment, particularly at 5pm when everyone hit save at the same time. Ethernet slowed to a crawl, with total throughput for ALL stations at less than 3Mb/s. The rest of the claimed throughput was lost to collisions!
You will note too that we are talking about coax systems, with maybe a few 10BaseT nodes for the managers (managers always get the latest toys). Switches had not been invented yet, and a simple 8-port hub would set you back more than $100 - USED. (Many companies just put extra cards in their servers and had the server function as a router, which worked until the server ran out of steam (sometimes bandwidth, sometimes CPU); then they bought routers or bridges.)
Today it is nearly impossible to find a hub because switches are so cheap (ignoring eBay). Today idiots often will set up a network with minimal collisions by chance, and thus today networks normally can get close to their advertised speeds.
Admin
Admin
That being said, the whole not being up front about the hardware requirements thing implies to me that it probably wasn't really needed.
Admin
The consultant said that they could modify their software to work on Ethernet, but that it would cost a little more. The higher-ups agreed, and a few hundred thousand dollars later ($2.5M total, if you're keeping count), the plant finally had a marginally-working control system.
Obviously I'm in the wrong end of the business.
Admin
To get back to the original problem, I think I see what might be going on here:
I suspect the software can't cope with getting packets out of order, or something of the sort. It's relying on the determinism of token ring to take care of things it should be taking care of itself.
I don't exactly find myself surprised. My employer is going with some stupid software now and the more I see of it the more horrified I am. There's NO field-level security, it's entirely screen-level. You can't have confirmations on only the rare operations and not the common ones.
Those of us who saw it was a bad idea weren't asked our opinions.
Admin
The scary thing is that would include "sleazy" companies like SAP in the 1990s. Hewlett-Packard was looking to replace their 1960s home-brew ERP system called "HEART" with SAP R3. After spending $100M or so on analysis of the current system, when it came time to prepare for the logistics of the R3 installation, they asked for exactly such a parallel start-up and test. SAP said "It's impossible. It's all or nothing. But for a few $100M and 2 or 3 years we could make modifications to do that, but we can't guarantee the installed system will work even after that." HP told them to take a f*cking leap and never show their faces on the HP doorstep ever again. HP limped along through the HP-Agilent split with HEART and began to replace it with Oracle shortly after.
Admin
I used to do QA testing for a manufacturer of computer network equipment. Played with TR, FDDI, Ethernet, ATM...
It's been my experience that when Token Ring worked, it worked well. When something hosed, it brought the world to its knees. It was indeed temperamental, but it had times when it out-performed ethernet (at the time; 100Mb Ethernet was just coming into being).
Admin
Token ring on a PC works just fine. Completely superseded by bog-standard Ethernet, but for a couple of years it was a major player - Madge Networks, IIRC, were the #1 card and MAU (TR 'switch') provider.
Admin
By the time Ethernet was using CAT5, TR was using it too. I remember working with 100MB TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using Anynet cause subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)
I call BS. In 1996 I had just left working for "big publishing company" and my 16Mbit cutting edge Token Ring.
My new job had me working on one of the first 100Mbit Ethernet switches. SynOptics, I believe.
No way TR had made the jump to 100Mbit in that timeframe!
Admin
And yes, I just figured out how to quote.
Admin
OK - I got my timeframes a little skewed. The Bay Networks Centillion switches we used were 100 MBit and they were token ring only (didn't have an Ethernet version for another year) and it was late 95 or early 96 we installed them. I believe at the time they were the first installation in the UK.
Admin
This company has obviously never heard of the OSI model, or even the word "abstraction" at all...
If it's a Windows app, it should interface with Windows, not with some Dell-specific hardware driver!
It should also interface with TCP/IP, not with the network hardware itself.
Whether it's using Ethernet or Token Ring should be completely transparent to the application layer...
captcha = hotdog
Admin
<FONT face=Arial size=2>Sorry, you're confusing the data link layer with the transport/network layer.</FONT>
<FONT face=Arial size=2>Token Ring is at the data link layer, as is Ethernet.</FONT>
<FONT face=Arial size=2>TCP/IP and other protocols like IPX are at the network/transport layers. You can run IP or IPX traffic over Token Ring or Ethernet.</FONT>
<FONT face=Arial size=2>Token Ring works by passing a token around a ring of computers. Your computer can only talk on the network when it has the token.</FONT>
<FONT face=Arial size=2>Ethernet computers OTOH listen for the network to be quiet, and then start talking during the silence. If they hear another computer talking at the same time, they both stop, and wait for a random period of time before trying again.</FONT>
<FONT face=Arial size=2>One possible reason for wanting Token Ring for a sensor monitoring is that it is deterministic, whereas Ethernet is not deterministic.</FONT>
<FONT face=Arial size=2>The main disadvantage Token Ring has is that it was designed and licensed by IBM, and no one makes gear for it anymore, whereas lots of people make Ethernet hardware. Token Ring capped out at 16Mbps, whereas Ethernet is now in the neighbourhood of 10Gbps.</FONT>
<FONT face=Arial size=2>Token Ring is technically superior, but badly managed (like Beta vs VHS)</FONT>
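The "wait a random period of time before trying again" step described above is Ethernet's truncated binary exponential backoff. A rough sketch of the delay calculation, assuming the classic 51.2 µs slot time of 10Mbps Ethernet (the function name is just for illustration):

```python
import random

SLOT_TIME_US = 51.2  # one slot time on 10Mbps Ethernet, in microseconds

def backoff_delay(attempt, rng=random):
    """After the Nth consecutive collision, wait a random number of
    slot times drawn uniformly from [0, 2^min(N, 10) - 1]."""
    if attempt > 16:
        # After 16 attempts, the adapter gives up on the frame.
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)           # exponent is capped at 10
    slots = rng.randrange(2 ** k)  # 0 .. 2^k - 1 slots
    return slots * SLOT_TIME_US

# After the first collision the wait is 0 or 1 slot; the spread of
# possible delays doubles with each further collision. The random
# draw is why Ethernet has no deterministic worst-case latency.
```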
Admin
Does your system page individual registers out to disk??
LRU is not the best algorithm. See: http://surriel.com/research_wanted/
Search for "Page replacement". Here's an excerpt:
Admin
Some pedantry applies here: TCP/IP can run over Ethernet, or over token ring, or many other protocols. This is a clue that the original engineers in this story were talking bullshit, because if they're doing TCP then the underlying network shouldn't really matter.
Almost true, with one significant detail: while random is, on average, the most efficient option there, it doesn't offer any guarantees. It can still, occasionally, allow worst-case performance. Token-ring trades average efficiency for guaranteed performance. In some situations, this is worth the effort. (Redundant network hardware is probably applicable, and the "guarantee" is subject to the laws of physics, but it's superior to what you get from ethernet.)
The reason it works is that there are best options, worst options, and lots in the middle. Any deterministic algorithm will, inevitably, hit pathological cases that make it invalid - but randomising it means that you have a chance of hitting acceptable options no matter what the state of the system may be, and on average you don't hit the worst options often enough to really matter.
The WTF is still valid because the excuse is bullshit. There are situations in which token ring is a good thing, but that would be specified, documented, and verified during installation - and it's rare enough that unless you have really exotic requirements and are an expert in the particular field, you won't need to do that.
Admin
I was at HP back in the late 80s/early 90s when the decision to unbundle the C compiler from the OS was made, and I remember some of the discussion. The idea was (1) to make the compilers group into its own profit center, funded by the users that got value from their work, and (2) to allow users who just needed a binary-only system for running end-user applications to get a cheaper system and not have to pay for compiler development. My info is a few years old, but as recently as HP-UX 11.0, the basic OS shipped with cc, which was a PCC-based K&R C compiler, intended to allow you to compile small utilities as part of a patch script. For serious software development, you would pay extra for the C compiler package, which got you c89, an ANSI-compliant C compiler with all the latest and greatest optimization technology. Or you could get a third-party C compiler like gcc.
Admin
That last was in response to chad's post on the HP-UX C compiler.
Admin
Admin
Mine has 1 led per port and uses blinking patterns. Like this: ([..] => port, * => multi-color led)
[..] [..] [..] [..]
Ps. The thing is totally overkill and has its own WTF story*
Bragged enough for now... (it's all true though).
Captcha: ratis, exactly what won't get out the bike shed!
Admin
I saluto you! (my captcha)
Admin
While we're on the subject... I've still got an old LattisHub in the closet (not in use, in favor of my switch). The thing has 9*12 cat5 ports, in 3 segments, with 3 controllers (or whatever those cards are called...): 2 controllers with coax (yes, they work) and 1 with aix (or aux, not sure).
Sorry for the uncertainty... I haven't used that thing in years and don't completely remember the kinds of connections it supported. I do remember that it's only 10Base-T. It's useful as a backup in case my 24-port 3Com switch dies (it won't, but you can't ever be sure - though the switch is rock solid: 100% uptime unless I pull the plug or disconnect my PCs ;P). The thing once saved a crashing PC: low on memory, loading Google Images was too much for it, causing heavy swapping for hours, and still it wouldn't stop - I couldn't switch to a console or anything! I disabled the network port using the telnet interface (from a different PC), and after 5 minutes Firefox timed out and I regained control!