• hk0 (unregistered) in reply to jsmith

    A $1000 24-port gigabit switch has 48 Gbps of switching bandwidth. In such a topology, there are no collisions, ever.
    You could daisy-chain 10+ of those switches together and just network them with plain spanning tree and you'd still have a lower worst-case latency end to end than a token ring network one tenth the size.
    Token ring is just plain asinine and really a non sequitur (by the consultants, or by the OP trying to protect the identities of the guilty and doing a poor job).
    Captcha: wtf
    wtf indeed
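
    The arithmetic behind those numbers, as a rough Python sketch (the frame size, hop count, and unloaded-network assumption are illustrative, not measurements):

        # Back-of-envelope numbers for the claim above; a sketch, not a benchmark.
        # Assumes full-duplex gigabit ports and store-and-forward switching.
        ports = 24
        line_rate_bps = 1e9                       # 1 Gbps per port
        fabric_bps = ports * line_rate_bps * 2    # full duplex: 48 Gbps aggregate
        print(fabric_bps / 1e9, "Gbps switching capacity")

        # Worst-case added delay per store-and-forward hop is roughly the
        # serialization time of one maximum-size frame (queueing ignored).
        frame_bits = 1518 * 8                     # max standard Ethernet frame
        per_hop_s = frame_bits / line_rate_bps    # ~12 microseconds per hop
        print(per_hop_s * 10 * 1e6, "microseconds across a 10-switch daisy chain, unloaded")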

  • hk0 (unregistered) in reply to asuffield

    Every switch that I've seen lately has two LEDs per port: one for link presence/negotiated speed and one for duplex.
    If I see two rows of green lights with no lone light in a column, then everything's good.

    Oh, and don't make routing loops when you're playing with VLANs. How hard is this stuff?

  • LordHunter317 (unregistered) in reply to hk0
    hk0:
    A $1000 24-port gigabit switch has 48 Gbps of switching bandwidth. In such a topology, there are no collisions, ever.
    Not true. A switch is necessary but not sufficient.

    Answer me this: what happens when I attach a plain stupid old hub to one of those ports?

    What happens when one of those ports is in half-duplex mode?

    If you can't correctly answer those questions, then you have no business making claims about what Ethernet is or isn't capable of.

    and you'd still have a lower worst-case latency
    Nope, because Ethernet doesn't have a defined worst-case latency. You can have an average-case latency, but you cannot have a deterministic worst-case latency.
    Token ring is just plain asinine and really a non sequitur
    It's neither. It's potentially perfectly appropriate in several situations, including the one described in this WTF.
  • mjk (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    TCP/IP is more efficient than Token Ring, even though it's completely haphazard. Token ring systems have zero collisions, because only one system can talk at a time. Well, TCP/IP systems have decent numbers of collisions, which requires that the system resend the packets that collided... Well, it turns out that this is hugely more efficient than trying to actually traffic-control the packets.


    Huh? Complete nonsense.

    News flash: you can run TCP/IP over Token Ring. I worked at a place that did this. Token Ring and Ethernet are data link layers. TCP is the transport layer and IP is the network layer.

    Do people have a random jargon machine they use for posts? An equally sensical translation of the above post:

    "Bananas are more jello than tacos, even though it's completely umbrella.  Bananas have zero portabellas, because only one motor boat can . . .. . . "
  • (cs) in reply to asuffield
    asuffield:
    Anonymous:
    >>And I'm a little confused here--doesn't the networking API abstract out things like Ethernet vs. Token Ring?

    Yes, though they will behave differently under high load. Non-switched ethernet degrades very badly once utilisation reaches (in my experience) 35%



    From this number I predict that you were using at least one cheap <$10 network card in that segment. Yes, it matters. One cheap card pulls the threshold down from about 70% to about 30%.



    I'm sorry, but could you please be more informative? Since I seriously doubt you believe (or are attempting to claim) that the price of a NIC somehow impresses upon it that it should perform less well, I'm guessing that you're trying to say that certain low-cost ethernet chips lack features that more expensive chips have, reducing maximum performance. So what, exactly, is that feature?

    "It's only good if it costs a lot" sounds like something the consultants in the original WTF would have said.

    (The OP noted "Non-switched", which initially suggests that the cheap card might have been only a 10Mbit card, but then all stations would be forced down to 10Mbit and you'd get less than 10% of a 100Mbit segment's maximum throughput -- so that can't possibly be it.)

  • Worf (unregistered) in reply to Satanicpuppy
    Satanicpuppy:

    Well the next best ways are: "Least Recently Used" and "Random". That's right. Random. Throwing out a random register is more efficient than almost any other method.


    Actually, random replacement is rarely used in practice. What's used is the "clock" algorithm, which is similar to random (it can be thought of as randomly evicting an entry) but is extremely simple to implement in hardware. And the reason we use clock over LRU is that LRU requires far more hardware resources to manage than clock (you have to add a timestamp to each cache tag, comparators to find the smallest timestamp, worry about overflow, etc.).

    So it's not really random at all; it's deterministic (just complex to trace, but given a starting state you can work out which entry will be evicted).

    And compared to LRU, it actually works so well that no one really wants to bother with LRU. But it's not strictly random, and one can optimize for it if desired.
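
    A minimal model of the clock idea in Python (real hardware keeps the reference bit in cache tags or page tables; this only sketches the logic):

        # "Clock" (second-chance) eviction: a hand sweeps the slots, clearing
        # reference bits as it passes, and evicts the first slot whose bit is clear.
        class Clock:
            def __init__(self, nslots):
                self.slots = [None] * nslots       # cached keys
                self.refbit = [False] * nslots
                self.hand = 0

            def access(self, key):
                if key in self.slots:              # hit: set the reference bit
                    self.refbit[self.slots.index(key)] = True
                    return True
                while True:                        # miss: sweep for a victim
                    if self.refbit[self.hand]:
                        self.refbit[self.hand] = False     # second chance
                    else:
                        self.slots[self.hand] = key        # evict and refill
                        self.refbit[self.hand] = True
                        self.hand = (self.hand + 1) % len(self.slots)
                        return False
                    self.hand = (self.hand + 1) % len(self.slots)

    Note how the outcome is fully determined by the bits and the hand position, exactly as the comment says: random-ish in effect, deterministic in fact.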
  • LordHunter317 (unregistered) in reply to Worf
    Anonymous:
    What's used is the "clock" algorithm, which is similar to random (it can be thought of randomly evicting an entry), but is extremely simple to implement in hardware.
    On a general-purpose CPU, the process of evicting pages isn't the job of the CPU; it's the job of the OS. And all general-purpose OSes use some variant of LRU.
    And the reason we use clock over LRU is that LRU requires way more hardware resources to manage than clock
    Err, no.
    And compared to LRU, it actually works so well that no one really wants to bother with LRU.
    You should tell MS, LKML, freebsd.org, and Apple that.
  • LordHunter317 (unregistered) in reply to LordHunter317

    To be clear, when I say LRU, I'm not talking about strict hardware-supported LRU, but sampled LRU schemes, where the OS periodically determines how active a page is.

  • (cs) in reply to John
    Anonymous:

    How about this possibility: a network transmission from a sensor may not be interrupted because the server software could not handle multiple interleaved requests, such as 'A1', 'A2', 'A3', 'B1', 'A4', 'B2', 'B3', 'B4', because whenever it got a '*1' packet, it threw out whatever incomplete request it had before, and upon receipt of a '*4' packet it closed the current entry.

    With a token ring network, each sensor would be able to transmit its payload without having to add job/sequence information to each packet, and the server would receive those packets, and only those packets, in uninterrupted order.

    Perhaps the software required particular hardware (Dell) because it was programmed to manipulate the BIOS settings to disable HyperThreading and/or multi-core.



    Complete crap. On a typical token ring network, you send a packet and pass the token. And the computer doesn't pass the token - that's handled by the ring.

    And nobody uses token ring anymore. And it's dead easy to disable multithreading - make your program run in one thread. The other CPUs won't affect your system at all. In fact, almost everything is single-threaded by default.
  • THE Nonymouse (unregistered) in reply to Franz Kafka

    True.

    Or unplugging the Token Ring cable from an OS/2 2.0 workstation. (That was my favorite trick whenever someone said that OS/2 was crashproof.)

    OTOH, those enormous and overpriced "genderless" connectors were very impressive-looking.

  • THE Nonymouse (unregistered) in reply to Franz Kafka
    Anonymous:
    jsmith:

    I used to teach a network troubleshooting class and it was very difficult to make a Token Ring stop working at the hardware level.  If any station wasn't functioning properly, it was automatically removed from the network and all the other systems worked just fine.  Token Ring seems fragile at a glance but turns out to be very robust in practice. 



    Try plugging a 4Mb station into a 16Mb token ring.


    Above message replies to this. Bah.

    captcha: clueless. Indeed.
  • (cs) in reply to hk0
    Anonymous:
    A $1000 24-port gigabit switch has 48 Gbps of switching bandwidth. In such a topology, there are no collisions, ever.
    You could daisy-chain 10+ of those switches together and just network them with plain spanning tree and you'd still have a lower worst-case latency end to end than a token ring network one tenth the size.
    Token ring is just plain asinine and really a non sequitur (by the consultants, or by the OP trying to protect the identities of the guilty and doing a poor job).
    Captcha: wtf
    wtf indeed


    Really? What do you think happens in a DoS?

    It may not be a physical collision, but packets are still lost.
  • (cs) in reply to mbessey
    mbessey:
    On a network with real-time constraints on data transfer (like those sensor readings), it makes a lot of sense to use a deterministic network. Of course, now that you can make an Ethernet network that's at least 50 times faster than high-speed Token Ring ever was, the worst-case behavior on Ethernet is probably almost always better than the best case with Token Ring.


    People still use rings. Look up FDDI-II.
  • SpecialBrad (unregistered) in reply to LordHunter317

    I'm pretty sure the guy who first posted on this topic of LRU and random replacement algorithms wasn't talking about paging but actual CPU caches. On a cache miss, the processor must pick which cache block to replace with data from a higher-level cache or memory. In that case it's all done in hardware, and LRU and random are pretty close in performance as far as I know. LRU has several pitfalls that are really easy to fall into - for example, iterating over a list that is larger than the cache. As you get further into the list under LRU, you wipe out the earlier parts of the list, so the next iteration is all misses. With random, at least you have a chance of cache hits.

    I could be wrong, I don't work in the industry, and my prof that taught this topic wasn't the most competent, but it makes sense to me.

    Captcha: 1337
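
    The scan pitfall is easy to reproduce. A toy Python simulation (fully-associative model with made-up sizes; real caches are set-associative, so treat it as a sketch):

        # Repeatedly scan a working set one block larger than the cache:
        # LRU misses on every access after the first pass; random keeps some hits.
        import random
        from collections import OrderedDict

        CACHE, SCAN, PASSES = 64, 65, 100
        trace = list(range(SCAN)) * PASSES

        def misses(policy):
            cache, count = OrderedDict(), 0
            for block in trace:
                if block in cache:
                    cache.move_to_end(block)          # refresh recency order
                    continue
                count += 1
                if len(cache) == CACHE:
                    if policy == "lru":
                        cache.popitem(last=False)     # evict least recently used
                    else:
                        del cache[random.choice(list(cache))]
                cache[block] = True
            return count

        print("LRU misses:   ", misses("lru"))        # every access is a miss
        print("random misses:", misses("random"))     # noticeably fewer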

  • Anonymous (unregistered) in reply to asuffield
    asuffield:
    Anonymous:
    Didn't Ethernet simply become popular because it got really inexpensive really fast?


    It's hard to pinpoint the exact reason; ethernet had many advantages at the time. Personally, I think it's because the common ethernet cables were willing to go around corners, while the IBM Type-1 cabling used for token ring was only interested in going in a straight line (it was shielded, fairly thick, and very very stiff - not as bad as the frozen yellow garden hose that was used for the original 10base5 ethernet, but not far off). Token ring uses cat5 nowadays, but as with so many other things, it was late to the party.


    By the time Ethernet was using CAT5, TR was using it too. I remember working with 100Mb TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using AnyNet, because subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)
  • LordHunter317 (unregistered) in reply to SpecialBrad
    Anonymous:
    I'm pretty sure the guy who first posted on this topic of LRU and random replacement algorithms wasn't talking about paging but actual CPU caches.
    Wouldn't matter. Page replacement on faults is virtually identical to cache replacement. He'd still be totally wrong.

    In this case, it's all done in hardware,
    No, it isn't, unless we're talking about CPU caches.  And they don't work like that.

    and LRU and Random are pretty close in performance as far as I know.
    Nope. Not remotely. Especially not for page faulting. Random is about as bad as you can get.

    LRU has several pitfalls that are really easy to fall into, for example, iterating over a list that is larger than the cache.
    Only if you can't code.  This is a solved problem.

    As you get further in the list, using LRU, you will wipe out the earlier parts of the list, resulting in the next iteration being all misses.
    Why would you be editing the list on a scan? Again, this is a solved problem; please go read some algorithm books on this.
  • Tezz (unregistered)

    Long live VG Cats!

  • SpecialBrad (unregistered) in reply to LordHunter317
    Anonymous:
    Anonymous:
    I'm pretty sure the guy who first posted on this topic of LRU and random replacement algorithms wasn't talking about paging but actual CPU caches.
    Wouldn't matter. Page replacement on faults is virtually identical to cache replacement. He'd still be totally wrong.

    In this case, it's all done in hardware,
    No, it isn't, unless we're talking about CPU caches.  And they don't work like that.

    and LRU and Random are pretty close in performance as far as I know.
    Nope. Not remotely. Especially not for page faulting. Random is about as bad as you can get.

    LRU has several pitfalls that are really easy to fall into, for example, iterating over a list that is larger than the cache.
    Only if you can't code.  This is a solved problem.

    As you get further in the list, using LRU, you will wipe out the earlier parts of the list, resulting in the next iteration being all misses.
    Why would you be editing the list on a scan? Again, this is a solved problem; please go read some algorithm books on this.


    Yeah, I was talking about CPU caches. They are done in hardware. There are situations where LRU doesn't work out. LRU is still better than random in most cases, but in some cases random is better. As for editing a list on a scan, the example the prof gave was normalizing a long array: sum up all the values, then divide each value in the list by the sum.
  • Soulbender (unregistered) in reply to TeeSee
    TeeSee:
    Really? What do you think happens in a DoS?

    Well, certainly not frame collisions. Are you sure you know what a DoS is and what it does?

  • kirinyaga (unregistered) in reply to Anonymous
    Anonymous:
    By the time Ethernet was using CAT5, TR was using it too. I remember working with 100MB TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using Anynet cause subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)
    And I think there is actually a 1Gb/s Token Ring standard.

    I also remember another weird IBM network called Canal, which used something like 16 or 32 ethernet wires (the coax ones...) bundled into a very bulky cable with a HUGE connector. This thing was damn fast at the time. It was a kind of "parallel ethernet", and I stumbled numerous times over that f***ing cable the size of my arm.

  • javascript jan (unregistered) in reply to LordHunter317
    Anonymous:
    Anonymous:
    JRH was clearly talking about data transmission, not capability to transmit.
    And? He was still wrong. Even considering higher-level protocols, it's perfectly possible to be deterministic.

    Not above absolute zero, it isn't.
  • Justin (unregistered) in reply to Mike Rod

    It concerns me that none of you are aware that Token Ring networks were already moving toward star topologies using Token Ring switches by the late 90s, and that before that, IBM 8228 MAUs and active bridges could be used to segment the physical rings and help deal with wacky workstations.  16Mb switched Token Rings looked a lot like switched Ethernets and performed quite a bit better.  100Mb and 1Gb Token Rings were already available by the turn of the century.

    HOWEVER... They were also 2-4 times as expensive and not as simple to learn as Ethernet. But even in 1997, Ethernet switches and Token Ring switches were being used to segment networks and drastically improve performance, so the differences between them started to blur. Token Ring had the edge, though, with MTUs of over 4k, reducing header overhead and fragmentation (see the quick arithmetic after this comment).

    Basically, the WTF is that the customer's network engineers didn't call 'BS' at this explanation.  It's completely ridiculous.  Regardless of how reliable or predictable Token Ring or Ethernet are under X load or Y conditions...

    HOW MUCH BANDWIDTH COULD SOME FREAKIN SENSORS POSSIBLY NEED?  More than my streaming video servers or an Active Directory Global Catalog server?  I think not.


    And kudos to those of you who pointed out the additional WTF that TCP/IP has nothing to do with anything.  All of those protocols are at higher layers.  If the argument were about TCP (reliable) vs. UDP (faster) or unicast (easy, but wasteful) vs. multicast (efficient, but dodgy), it might make sense, but none of that is the issue here, once someone starts pointing at Layer 2.

    --J
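
    Justin's MTU point is simple arithmetic. A sketch (assuming 40 bytes of TCP+IP headers and ignoring link-level framing, which differs between the two):

        # Per-packet header overhead for a bulk TCP transfer at two MTUs.
        # 40 bytes = 20 TCP + 20 IP, no options; link framing ignored.
        for mtu in (1500, 4464):    # typical Ethernet vs. a commonly cited Token Ring MTU
            print(f"MTU {mtu}: {40 / mtu:.1%} header overhead, "
                  f"{mtu - 40} payload bytes per packet")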

  • (cs) in reply to J Random Hacker
    Anonymous:
    It's amazing how that tired old IBM FUD still continues to circulate, considering that Van Jacobson posted measured 90+% throughput in 1988. Google "4BSD TCP Ethernet Throughput"  or "Measured capacity of an Ethernet: myths and reality".

    Okay, I did. The servers that hold the second paper are down (or slow?), so I was unable to get it. A quote from the first is revealing, though:

    The tests were done between 2am and 6am on a fairly quiet Ethernet (~100Kb/s average background traffic)

    This situation isn't much different from a switched network today. It has nothing to do with the typical network as it existed back then.

    When we were talking about 30% as typical for ethernet, we were not talking about unloaded ethernet. We were talking about more than 100 computers on a single segment, particularly at 5pm when everyone hit save at the same time. Ethernet slowed to a crawl, with total throughput for ALL stations at less than 3Mb/s. The rest of the claimed throughput was lost to collisions!

    You will note too that we are talking about coax systems, with maybe a few 10BaseT nodes for the managers (managers always get the latest toys). Switches had not been invented yet, and a simple 8-port hub would set you back more than $100 - USED. Many companies just put extra cards in their servers and had the server function as a router, which worked until the server ran out of steam (sometimes bandwidth, sometimes CPU); then they bought routers or bridges.

    Today it is nearly impossible to find a hub because switches are so cheap (ignoring eBay). Today idiots will often set up a network with minimal collisions just by chance, and thus networks normally can get close to their advertised speeds.
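
    That ~30% figure even has a textbook shape to it. In an idealized slotted model where each of n saturated stations transmits in a slot with probability 1/n, the fraction of useful slots tends to 1/e, about 37% - a gross simplification of real CSMA/CD (which senses carrier and backs off), but suggestive:

        # Idealized slotted contention: a slot carries a good frame iff
        # exactly one of n stations transmits. With p = 1/n this tends to 1/e.
        # Real CSMA/CD does better than this; it's a back-of-envelope model only.
        for n in (10, 50, 100):
            p = 1 / n
            success = n * p * (1 - p) ** (n - 1)
            print(f"{n:3d} stations: {success:.1%} of slots carry a good frame")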

  • LordHunter317 (unregistered) in reply to javascript jan
    javascript jan:
    Not above absolute zero, it isn't.
    Yes, it is. You need to join JRH in the classes and look up what that word means, especially in relation to data transmission. You keep using that word, but I don't think it means what you think it means.
  • LordHunter317 (unregistered) in reply to Justin
    Justin:
    HOW MUCH BANDWIDTH COULD SOME FREAKIN SENSORS POSSIBLY NEED?
    If you'd paid attention to the whole thread, you'd note the most likely explanation isn't one of bandwidth, but one of reliability. Bog-standard Ethernet isn't suitable for hard real-time and some soft real-time applications.

    That being said, the whole business of not being up front about the hardware requirements implies to me that it probably wasn't really needed.

  • Jon Strayer (unregistered)

    The consultant said that they could modify their software to work on Ethernet, but that it would cost a little more. The higher-ups agreed, and a few hundred thousand dollars later ($2.5M total, if you're keeping count), the plant finally had a marginally-working control system.


    Obviously I'm in the wrong end of the business.

  • Loren Pechtel (unregistered) in reply to javascript jan

    To get back to the original problem, I think I see what might be going on here:

    I suspect the software can't cope with getting packets out of order or something of the sort. It's relying on the determinism of token ring to take care of things it should be taking care of itself.

    I don't exactly find myself surprised. My employer is going with some stupid software now, and the more I see of it the more horrified I am. There's NO field-level security; it's entirely screen-level. You can't have confirmations on only the rare operations and not the common ones.

    Those of us who saw it was a bad idea weren't asked our opinions.

  • JG (unregistered) in reply to snoofle

    The scary thing is that would include "sleazy" companies like SAP in the 1990s. Hewlett-Packard was looking to replace their 1960s home-brew ERP system called "HEART" with SAP R3. After spending $100M or so on analysis of the current system, when it came time to prepare for the logistics of the R3 installation they asked for exactly such a parallel start-up and test. SAP said, "It's impossible. It's all or nothing. But for a few $100M and 2 or 3 years we could make modifications to do that - though we can't guarantee the installed system will work even after that." HP told them to take a f*cking leap and never show their faces on the HP doorstep ever again. HP limped along through the HP-Agilent split with HEART and began to replace it with Oracle shortly after.

  • Bill (unregistered) in reply to anonymous
    Anonymous:

    Making the sensors rely on a token ring is goofy, because one of the issues I remember from my early 90's use of them is, if a node unexpectedly drops off, the whole ring will come down.  So, you never put a token ring anywhere near where actual work got done, for fear of jostling a cable or something.

    I used to do QA testing for a manufacturer of computer network equipment.  Played with TR, FDDI, Ethernet, ATM...

    It's been my experience that when Token Ring worked, it worked well. When something got hosed, it brought the world to its knees. It was indeed temperamental, but it had times when it out-performed Ethernet (at the time, 100Mb Ethernet was just coming into being).

  • (cs) in reply to Unklegwar

    Token ring on a PC works just fine. Completely superseded by bog-standard Ethernet, but for a couple of years it was a major player - Madge Networks, IIRC, were the #1 card and MAU (TR 'switch') provider.

  • Geezer (unregistered) in reply to Anonymous

    By the time Ethernet was using CAT5, TR was using it too. I remember working with 100MB TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using Anynet cause subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)

    I call BS. In 1996 I had just left working for "big publishing company" and my cutting-edge 16Mbit Token Ring.

    My new job had me working on one of the first 100Mbit Ethernet switches. SynOptics, I believe.

    No way TR had made the jump to 100Mbit in that timeframe!

  • Geezer (unregistered) in reply to Geezer
    Anonymous:

    No way TR had made the jump to 100Mbit in that timeframe!


    And yes, I just figured out how to quote.
  • Anonymous (unregistered) in reply to Geezer
    Anonymous:
    By the time Ethernet was using CAT5, TR was using it too. I remember working with 100MB TR switches in 1993, with fiber connections between risers. We even ran TCP/IP (using Anynet cause subnetting wasn't an option back then) over SNA over X.25. No wonder I drank a lot back then :-)

    I call BS. In 1996 I had just left working for "big publishing company" and my cutting-edge 16Mbit Token Ring.

    My new job had me working on one of the first 100Mbit Ethernet switches. SynOptics, I believe.

    No way TR had made the jump to 100Mbit in that timeframe!


    OK - I got my timeframes a little skewed. The Bay Networks Centillion switches we used were 100Mbit and token ring only (there wasn't an Ethernet version for another year), and it was late '95 or early '96 when we installed them. I believe at the time they were the first installation in the UK.
  • PHP coder (unregistered)

    This company has obviously never heard of the OSI model, or even the word "abstraction" at all...

    If it's a Windows app, it should interface with Windows, not with some Dell-specific hardware driver!

    It should also interface with TCP/IP, not with the network hardware itself.

    Whether it's using Ethernet or Token Ring should be completely transparent to the application layer...


    captcha = hotdog

  • Liam (unregistered) in reply to Satanicpuppy

    Sorry, you're confusing the data link layer with the transport/network layers.

    Token Ring is at the data link layer, as is Ethernet.

    TCP and other transport protocols sit above that, with IP and IPX at the network layer. You can run IP or IPX traffic over Token Ring or Ethernet.

    Token Ring works by passing a token around a ring of computers. Your computer can only talk on the network when it has the token.

    Ethernet computers, OTOH, listen for the network to be quiet and then start talking during the silence. If they hear another computer talking at the same time, they both stop and wait for a random period of time before trying again.

    One possible reason for wanting Token Ring for sensor monitoring is that it is deterministic, whereas Ethernet is not (a rough worst-case sketch follows this comment).

    The main disadvantage Token Ring has is that it was designed and licensed by IBM, and no one makes gear for it anymore, whereas lots of people make Ethernet hardware. Token Ring capped out at 16Mbps, whereas Ethernet is now in the neighbourhood of 10Gb.

    Token Ring is technically superior, but badly managed (like Beta vs. VHS).
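
    The worst-case calculation behind that determinism claim is straightforward: at most, a station waits for every other station to send one maximum-size frame and pass the token. A sketch with illustrative numbers (station count, frame size, and token overhead are assumptions):

        # Upper bound on the wait for the token: every other station sends one
        # maximum frame, then hands the token on. Numbers are illustrative only.
        stations = 50
        ring_bps = 16e6                      # 16 Mbps Token Ring
        max_frame_bits = 4464 * 8            # ~4.4 KB frame
        token_handling_s = 10e-6             # assumed per-station token overhead

        per_station_s = max_frame_bits / ring_bps + token_handling_s
        print(f"worst-case token wait: {(stations - 1) * per_station_s * 1000:.0f} ms")

    Bounded, always - which is the whole point; Ethernet's CSMA/CD gives no such bound.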

  • (cs) in reply to Satanicpuppy

    Satanicpuppy:
    There is a similar problem dealing with page faulting in memory registers...When you page fault, how do you pick the best register to clear....

    Well the next best ways are: "Least Recently Used" and "Random". That's right. Random. Throwing out a random register is more efficient than almost any other method.

    Does your system page individual registers out to disk??

    LRU is not the best algorithm. See: http://surriel.com/research_wanted/

    Search for "Page replacement". Here's an excerpt:

    Rik van Riel:
    VM researchers have known for over a decade that the assumption made by LRU (recently accessed pages will be accessed again soon) are not true for many workloads, including streaming media, databases (index vs data) and garbage collected applications. Many new replacement algorithms have come up over the last decades, for example 2Q, LRU-K, MQ, LIRS and ARC.

  • Anony Moose (unregistered) in reply to Satanicpuppy

    Satanicpuppy:
    TCP/IP is more effcient than Token Ring

    Some pedantry applies here: TCP/IP can run over ethernet, or over token ring, or many other protocols. This is a clue that the original engineers in this story were talking bullshit, because if they're doing TCP then the underlying network shouldn't really matter.


    Satanicpuppy:
    Well the next best ways are: "Least Recently Used" and "Random". That's right. Random. Throwing out a random register is more efficient than almost any other method.

    Token ring was designed by engineers who couldn't come to terms with that fact.

    Almost true, with one significant detail: while random is, on average, the most efficient option there, it doesn't offer any guarantees. It can still, occasionally, allow worst-case performance. Token-ring trades average efficiency for guaranteed performance. In some situations, this is worth the effort. (Redundant network hardware is probably applicable, and the "guarantee" is subject to the laws of physics, but it's superior to what you get from ethernet.)

    The reason it works is that there are best options, worst options, and lots in the middle. Any deterministic algorithm will inevitably hit pathological cases that make it invalid - but randomising it means you have a chance of hitting acceptable options no matter what the state of the system may be, and on average you don't hit the worst options often enough to really matter.

    The WTF is still valid because the excuse is bullshit. There are situations in which token ring is a good thing, but those would be specified, documented, and verified during installation - and they're rare enough that unless you have really exotic requirements and are an expert in the particular field, you won't need to do that.

  • DaveW (unregistered) in reply to ChadN

        I was at HP back in the late 80s/early 90s when the decision to unbundle the C compiler from the OS was made, and I remember some of the discussion.  The idea was (1) to make the compilers group into its own profit center, funded by the users that got value from their work, and (2) to allow users who just needed a binary-only system for running end-user applications to get a cheaper system and not have to pay for compiler development.  My info is a few years old, but as recently as HP-UX 11.0, the basic OS shipped with cc, which was a PCC-based K&R C compiler, intended to allow you to compile small utilities as part of a patch script.  For serious software development, you would pay extra for the C compiler package, which got you c89, an ANSI-compliant C compiler with all the latest and greatest optimization technology.  Or you could get a third-party C compiler like gcc. 

  • DaveW (unregistered) in reply to DaveW

    That last was in response to ChadN's post on the HP-UX C compiler.

  • aru (unregistered) in reply to Sgt. Zim

    Short Description: This should be a one sentence short description/eye catcher that people will see in search result

  • someone (unregistered) in reply to hk0

    Mine has 1 LED per port and uses blinking patterns. Like this: ([..] => port, * => multi-color LED)

    *[..]* *[..]* [..] [..]

    Ps. The thing is totally overkill and has its own WTF story*

    • Story: I bought the thing for €30 from an IT guy at a local company. They were renewing their infrastructure and he decided to sell their spare switch. I bought the thing; it was still completely wrapped in plastic, 3Com seal and all. FYI, it's a 3Com SuperStack III 4226T. My local (as in "from the computer right across the desk on the same switch") download speed is 30s for 20mb (+/- 3 mb/s). The HD speed is around 2.8 mb/s (go figure - the speed barrier is set by my HD; expected, but still cool). Also, I get the same speed when several BitTorrents and Linux disc downloads are in progress on different PCs (I've got 6 - another WTF - in the same room!).

    Bragged enough for now... (it's all true though).

    Captcha: ratis, exactly what won't get out the bike shed!

  • someone (unregistered) in reply to someone
    someone:
    Mine has 1 LED per port and uses blinking patterns. Like this: ([..] => port, * => multi-color LED) ------------- *[..]* *[..]* [..] [..] -------------
    Whoops... The spaces won't work, so here it is with _ instead of spaces:
    -------------
    *[..]*_*[..]*
    _[..]___[..]
    -------------

    I saluto you! (my captcha)

  • someone (unregistered)

    While we're on the subject... I've still got an old LattisHub in the closet (not in use, in favor of my switch). The thing has 9*12 cat5 ports, in 3 segments, with 3 controllers (or whatever those cards are called...): 2 controllers with coax (yes, they work) and 1 with AUI (I think).

    Sorry for the uncertainty... I haven't used that thing in years and don't completely remember the several kinds of connections it supported. I do remember that it's only 10Base-T. It's useful as a backup in case my 24-port 3Com switch dies (it won't, but you can never be sure, though the switch is rock solid: 100% uptime unless I pull the plug or disconnect my PCs ;P). The thing once saved a crashing PC: low on memory, loading Google Images was too much for the RAM, causing heavy swapping for hours, and still it wouldn't stop - I couldn't switch to a console or anything! I disabled the network port using the telnet interface (from a different PC), and after 5 minutes Firefox timed out and I regained control!

