• (cs) in reply to John
    Anonymous:

    How about this possibility: a network transmission from a sensor may not be interrupted because the server software could not handle multiple interleaved requests, such as 'A1', 'A2', 'A3', 'B1', 'A4', 'B2', 'B3', 'B4'; because whenever it got a '*1' packet, it threw out whatever incomplete request it had before, and upon receipt of a '*4' packet it closed the current entry.

    With a token ring network, each sensor would be able to transmit its payload without having to add job/sequence information to each packet, and the server would receive those packets, and only those packets, in uninterrupted order.
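    A guessed-at sketch of the single-buffer server behaviour hypothesized above (the struct, the helper names, and the two-character packet format are all invented here, inferred from the 'A1'…'B4' example):

```c
#include <assert.h>

/* Hypothetical single-buffer server: a packet is two chars, a sensor
 * id ('A', 'B', ...) and a sequence digit '1'-'4'. On any '*1' the
 * current half-built entry is thrown away; on any '*4' the entry is
 * closed. Interleaved senders therefore silently corrupt entries. */
typedef struct {
    char buf[16];
    int  len;
    int  completed;   /* how many entries were closed */
    int  discarded;   /* how many half-built entries were lost */
} server_t;

static void handle_packet(server_t *s, const char *pkt)
{
    if (pkt[1] == '1' && s->len > 0) {   /* new request: drop the old one */
        s->discarded++;
        s->len = 0;
    }
    if (s->len < (int)sizeof s->buf - 1) {
        s->buf[s->len++] = pkt[0];
        s->buf[s->len] = '\0';
    }
    if (pkt[1] == '4') {                 /* final packet: close the entry */
        s->completed++;
        s->len = 0;
    }
}
```

    Feeding the interleaved sequence from the comment through this loop closes two entries but silently discards the half-built 'A' request, which matches the corruption being speculated about.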



    Well, then, the solution is easy!  Just bring in your pimp-daddy programmer to create a listener thread that spawns off another listener whenever it receives a new request.
  • (cs) in reply to Sgt. Zim

    Oh, thanks for your responses and clarifications

    (Yay I learned something new today)


    Mike Rod

  • SwordfishBob (unregistered) in reply to mrprogguy

    And I'm a little confused here--doesn't the networking API abstract out things like Ethernet vs. Token Ring?

    Yes, though they will behave differently under high load. Non-switched ethernet degrades very badly once utilisation reaches (in my experience) 35%, while token-ring remains responsive for all stations right up to 100% utilisation. Switches take care of that now though, and 100% of 16Mbps looks pretty small these days. Token ring behaved badly under other circumstances, like a station dying while holding the token, or certain cabling issues.

    Maybe they did their own layer 3 networking and were dependent on certain features of layers 1 or 2?

  • (cs) in reply to John
    Anonymous:

    Perhaps the software required particular hardware (Dell) because it was programmed to manipulate the BIOS settings to disable HyperThreading, and/or multi-core.

    Actually, I have to work with a 3rd party library (a .NET wrapper around a COM library by the same company) that requires the application to use Win32's SetProcessAffinityMask to fix the process to a single CPU. The reason being that otherwise .NET finalizers could run in parallel with other code on two different SMP or SMT execution units (if present), calling some sort of non-reentrant COM code simultaneously which causes massive breakage.
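    A minimal sketch of that workaround, assuming the documented Win32 call (the helper names are made up, and the non-Windows branch exists only so the sketch compiles anywhere):

```c
#include <assert.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Pin the whole process to one logical CPU so finalizer threads can
 * never run truly in parallel with other threads calling into the
 * non-reentrant COM code. The affinity mask is a bit field: bit n
 * selects logical processor n. */
static unsigned long affinity_mask_for_cpu(int cpu)
{
    return 1UL << cpu;
}

static int pin_to_cpu(int cpu)
{
#ifdef _WIN32
    /* documented Win32 API; returns nonzero on success */
    return SetProcessAffinityMask(GetCurrentProcess(),
                                  affinity_mask_for_cpu(cpu)) != 0;
#else
    (void)cpu;   /* non-Windows build: nothing to do in this sketch */
    return 1;
#endif
}
```

    Note that pinning the process this way sacrifices all SMP/SMT parallelism just to serialize one badly behaved component, which is exactly the trade-off being complained about.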

    Now, that said, it probably won't surprise you to hear that the code doesn't even work reliably if you use SetProcessAffinityMask as documented, or on single CPU machines, anyway. You can run the component (well, it's a parser for a DSL) once in a given process with relative impunity, but call it more often and you will ultimately get funny blow-ups. Please don't even ask about the possibility to parse two different files simultaneously. Static variables rule!

    Now to actually make this library usable in our multi-threaded GUI application, I had to write a wrapper executable - our application starts a local server process that does the actual parsing, then collects the results using remoting. Then the parsing process is shut down, because it cannot be safely used to parse more than one file. Now some people might think this is a huge WTF. And they would be dead right. But until the library gets fixed or we find another vendor (or the time to write a better parser ourselves), there is just no other way to make it work.

  • WHO WANTS TO KNOW? (unregistered) in reply to Erick

    Erick:
    It looks like the highly-specialized control software is doing exactly as planned: bleeding the customer dry.

    That's a NEAT trick for a water company!

    But I would NEVER have bought it!  WINDOWS only?  An ENTIRE computer for ONE sensor? 

    The idea of using ONLY dell, and even a special MODEL?!?!?  WOW!

    And the bit about TOKEN RINGS!  WOW, I guess Microsoft, DELL, and IBM are ALL giving them KICKBACKS!

    Steve

  • Unklegwar (unregistered) in reply to mrprogguy
    mrprogguy:

    At first I thought they were going to say that the system couldn't run under HPUX--and believe me, very little does.  Our software, which compiles under Windows, Solaris, Unix, Linux, and AIX, still has to undergo special re-compilation at the customer site for HPUX. 

    And I'm a little confused here--doesn't the networking API abstract out things like Ethernet vs. Token Ring?



    I dunno. It seems like there might be a difference if the software is highly sensitive to what order the messages come in.

    Perhaps the issue is not Ethernet vs Token ring, but more of what protocol is used on the network, and it was a miscommunication.
  • Unklegwar (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    TCP/IP is more efficient than Token Ring, even though it's completely haphazard. Token ring systems have zero collisions, because only one system can talk at a time. Well, TCP/IP systems have decent numbers of collisions, which requires that the system resend the packets that collided...Well, it turns out that this is hugely more efficient than trying to actually traffic control the packets.

    There is a similar problem dealing with page faulting in memory registers...When you page fault, how do you pick the best register to clear, so that you won't have to page fault again, and bring the same data back in. The best way to do it is purely hypothetical...Clear the register whose data is going to be needed again after the data in all the other registers. Can't be done, due to the fact that software can't predict the future (without a hell of a lot of process overhead).

    Well the next best ways are: "Least Recently Used" and "Random". That's right. Random. Throwing out a random register is more efficient than almost any other method.

    Token ring was designed by engineers who couldn't come to terms with that fact.


    Or maybe before anyone considered it.

    CAPTCHA: Mustache....as in ride.
  • (cs) in reply to snoofle
    snoofle:

    Let me get this straight. They had something working, presumably relatively trouble free, for a quarter of a century, and did a complete system replacement without a (thorough) parallel testing phase?

    While the software company was sleazy at best (for not divulging/providing proper h/w, s/w specs up front), the water company managers have to take some of the blame for this fiasco.



    It's a water company...why would they have experienced systems development managers hanging around?

    That's why they brought in consultants in the first place.
  • (cs) in reply to Mike Rod
    Mike Rod:
    Oh, thanks for your responses and clarifications

    (Yay I learned something new today)

    Ah, so did I.  Mea culpa.

    (Actually, I had learned this at some point and forgot it, and am now apparently re-learning it.)
  • (cs)

    Damn, $2.5mil would have surely been enough to employ a whole team for, what, 2 maybe 3 years, to develop their stuff and continually support it....

  • Ishai (unregistered)

    People here in the office now recognize when I am reading these things. It's when they see me cover my forehead with both hands and slink down to the desk with an unbelieving stare at the monitor while my mouth is making a big 'O'.

    Forgive me, I have to go put my lower jaw back into place.

  • LordHunter317 (unregistered) in reply to Satanicpuppy

    Well, I can't even describe my shock at all the issues with this post that other people have not covered yet.

    For starters, by your own writing, Token Ring is more efficient than TCP/IP (ignoring the gross inability to compare) because it can't have collisions. If we're not at risk of resending packets (which isn't quite true with TR, but it's true enough), we're by definition more efficient.

    That's ignoring the fact collisions are a fairly small problem on switched-Ethernet networks anyway.

    Your analysis of page faulting doesn't even make sense. When a CPU faults for a page, it doesn't store anything in memory registers. I don't know what a memory register is. If you're talking about a TLB, then it's unrelated to the page faulting process.

    If you're talking about registers like the register to the base pointer table, or segmentation/protection descriptor registers, then they are slightly related but generally don't change at all. If they change, it's due to a task switch and not a page fault.

    If you're talking about the page tables and determining what virtual pages are actually in physical memory at any one time, then all processors that support paging support some sort of mechanism to tell you when the page was last accessed. Operating systems use this information to order the pages by activity and automatically page out the least recently used (LRU)[1] page.

    LRU crushes random, so it's not even worth putting them in the same sentence. And what you said is wrong anyway: schemes based on how many references a page has will yield better results than random[2].

    But page information related to faults sure as hell isn't stored in a register, or even a group of registers. One register couldn't store all that information without being totally massive.

    Finally, token ring was designed by people who realized maximums are important in many situations. That's why things like LynxOS and VxWorks exist.

    The fact that maximums are only important in a limited set of situations doesn't mean they're not important. In fact, in the situations where they're warranted, they're usually critical. They can be, in fact, the difference between life and death.

    [1]Strict LRU is rarely used anymore, but the concept is the same. Linux and Windows NT don't attempt to keep the lists of free and inactive pages in a strict LRU order, as there's rarely a point. Older BSD does, however.

    [2]In fact, this is one thing Linux 2.6 takes into consideration when aging pages. Pages that are mapped into multiple processes take more time to become inactive than pages that are private.
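    The LRU eviction choice described above can be sketched as follows (the frame count and timestamp representation are illustrative, not any real kernel's bookkeeping):

```c
#include <assert.h>

/* Minimal LRU victim selection: each resident frame carries a
 * last-access stamp; on a page fault with no free frame, evict the
 * frame whose stamp is smallest, i.e. the least recently used one. */
#define NFRAMES 4

static int pick_victim(const unsigned long last_access[NFRAMES])
{
    int victim = 0;
    for (int i = 1; i < NFRAMES; i++)
        if (last_access[i] < last_access[victim])
            victim = i;
    return victim;
}
```

    Real systems approximate this with reference bits and aging rather than exact stamps, since maintaining strict LRU order on every access would be too expensive, but the selection principle is the same.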

  • design (unregistered)
    Alex Papadimoulis:

    The higher-ups agreed, and a few hundred thousand dollars later ($2.5M total, if you're keeping count), the plant finally had a marginally-working control system.


    It's only marginally working due to solar flares.  For another $2.5M, we'll build a solar flare deflector array.
  • Rico Chet (unregistered)

    the real WTF is that they used PCs instead of PLCs for automation
    and that they didn't use realtime ethernet

  • (cs) in reply to mratzloff
    mratzloff:

    Brilliant!



    *peers at the extra vowel*  You're new here, aren't you?

  • Bob N Freely (unregistered)

    Am I the only one who caught this little tidbit at the beginning:


    "And who better to setup and install an entirely new control, database, and tracking system than a Fortune 500 company that prides itself on the highly-specialized systems it develops for the military?"

    The fact that they were a military contractor should have been the first warning sign.

  • qbolec (unregistered)

    Tolkien rings?

  • Franz Kafka (unregistered) in reply to jsmith
    jsmith:

    I used to teach a network troubleshooting class and it was very difficult to make a Token Ring stop working at the hardware level.  If any station wasn't functioning properly, it was automatically removed from the network and all the other systems worked just fine.  Token Ring seems fragile at a glance but turns out to be very robust in practice. 



    Try plugging a 4Mb station into a 16Mb token ring.
  • Olivier Galibert (unregistered) in reply to Darryl Smith, VK2TDS
    Anonymous:

    CSMA/CD will generally detect collisions and resend at that level, not requiring the TCP/IP timers to trigger for a retry.

    Errr no.  Ethernet cards never ever resend themselves, it's always TCP that takes care of it.

    Anonymous:

    Sometimes random is NOT better. In CSMA/CD Ethernet random means that it is theoretically possible that a packet will be delayed by a few seconds following multiple collisions. If you are trying to control equipment in a plant using a distributed control system that is a BAD thing!



    Except that collisions don't exist anymore.  They went away when hubs died.  Now it's switches everywhere, and Ethernet has become point-to-point.  And I happily run my links at 100% (110-120Mbytes/s of data served over NFS on a gigabit link, thanks $DEITY for big disk caches) without anything crashing.  Whereas tokens being lost to a dying server in a loop, oh my.  Token ring requires every device on the loop to be cooperative and alive.  Ethernet hubs only want them to shut up, switches don't care either way.

      OG.

  • Unpro (unregistered) in reply to Olivier Galibert

    If I remember correctly, my lecturer showed me how Token Ring was faster on a slower network.

    As far as he was concerned, token ring was created by IBM purely for patent purposes, i.e., they wanted their own type of network. Ethernet was free for all, and as network speeds went up, token ring died away (I'm still on a 16Mb token ring network... but you really don't notice it).

    IMHO, creating a real-time data collection system that relies on the general network (be it token ring or Tolkien's ring) is a big mistake to start with. There are reasons why you would put several layers of servers to ensure data are being collected. A good read-up, by the way.

  • Nick (unregistered) in reply to mbessey
    mbessey:

    Several folks have already touched on this, but the major difference between token-passing networks and CDMA networks (like Ethernet) is that the token-passing network has easily characterized worst-case behavior in terms of latency and throughput. Token Ring wasn't at all efficient in using the available bandwidth, but it was very predictable.

    On an Ethernet network, it's not possible to predict exactly how long it'll take for one node to send a message to another. On Token Ring, you just wait for your turn and send your packet, and it'll get to the destination before the next time you get the token back.

    On a network with real-time constraints on data transfer (like those sensor readings), it makes a lot of sense to use a deterministic network. Of course, now that you can make an Ethernet network that's at least 50 times faster than high-speed Token Ring ever was, the worst-case behavior on Ethernet is probably almost always better than the best case with Token Ring.


    $5 says the original system was designed back in the days when Token Ring speeds were comparable to Ethernet speeds, and this company figured it was best not to redesign everything when the original worked just fine (something the client in this case should have thought of).

    I'm sorry, I don't see the WTF here.  A company bought a system without reading the specs and tried putting it to use with hardware it simply was not designed to work with.  Happens all the time.  Yes, the requirements appear to be a bit strange to me, but then again from what I know about real time systems (which is very little), that's very common.
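    The predictability argument in the quoted comment can be put in rough numbers; a back-of-envelope sketch, with all figures invented for illustration:

```c
#include <assert.h>

/* In a token-passing network, a station that just released the token
 * waits, at worst, for every other station to hold the token for its
 * full allowance plus one trip around the ring. That sum is a hard
 * bound, which is what makes the latency deterministic. */
static double worst_case_wait_ms(int stations, double hold_ms,
                                 double ring_latency_ms)
{
    /* every one of the (stations - 1) other stations transmits once */
    return (stations - 1) * hold_ms + ring_latency_ms;
}
```

    Contrast this with CSMA/CD Ethernet, where the wait before a successful transmission is a random variable with no such closed-form maximum; that is the whole distinction the quoted comment is drawing.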

  • MurdocJ (unregistered) in reply to Olivier Galibert
    Anonymous:
    Anonymous:

    CSMA/CD will generally detect collisions and resend at that level, not requiring the TCP/IP timers to trigger for a retry.

    Errr no.  Ethernet cards never ever resend themselves, it's always TCP that takes care of it.

    Anonymous:

    Sometimes random is NOT better. In CSMA/CD Ethernet random means that it is theoretically possible that a packet will be delayed by a few seconds following multiple collisions. If you are trying to control equipment in a plant using a distributed control system that is a BAD thing!



    Except that collisions don't exist anymore.  They went away when hubs died.  Now it's switches everywhere, and Ethernet has become point-to-point.  And I happily run my links at 100% (110-120Mbytes/s of data served over NFS on a gigabit link, thanks $DEITY for big disk caches) without anything crashing.  Whereas tokens being lost to a dying server in a loop, oh my.  Token ring requires every device on the loop to be cooperative and alive.  Ethernet hubs only want them to shut up, switches don't care either way.

      OG.



    Err, no.   Token rings do NOT require that every device on the loop be cooperative and alive.  At least back when I was working with them, if a device failed or was powered off, it went into a mode where it just passed data thru.  Maybe they made token ring more fragile since then, but that's how it used to work.
  • LordHunter317 (unregistered) in reply to Olivier Galibert
    Errr no. Ethernet cards never ever resend themselves, it's always TCP that takes care of it.
    You're totally wrong on several points. The collision detection mechanism involves resends as part of the algorithm. Otherwise, what would be the point of detecting them in the first place? This is, in fact, a hardware function that the TCP/IP stack is generally totally unaware of. The operating system is generally totally unaware of it too, unless it totally fails.

    TCP retries are a higher layer and totally independent of medium.

    Except that collisions don't exist anymore. They went away when hubs died. Now it's switches everywhere, and Ethernet has became point-to-point.
    And collisions can exist on switched networks. It is possible to remove collision domains, but switches alone aren't sufficient to achieve that. They are necessary but insufficient.
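    The MAC-level retry being argued about here is classic truncated binary exponential backoff; a sketch of the bound on the random wait (slot counts as in IEEE 802.3, the helper name is made up):

```c
#include <assert.h>

/* After the n-th successive collision, classic Ethernet waits a
 * random number of slot times drawn from 0 .. 2^min(n,10) - 1, and
 * the adapter gives up entirely after 16 attempts. The delay is
 * therefore bounded only probabilistically, never deterministically. */
static unsigned max_backoff_slots(int collisions)
{
    int k = collisions < 10 ? collisions : 10;   /* truncation at 10 */
    return (1u << k) - 1;                        /* top of the range */
}
```

    This all happens in the adapter, below anything TCP sees, which is the point being made: TCP retransmission is a separate, higher-layer mechanism.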
  • Alex (unregistered) in reply to Olivier Galibert
    Anonymous:


    Errr no.  Ethernet cards never ever resend themselves, it's always TCP that takes care of it.



    Actually the retransmission (in standard Ethernet) is done at the MAC level when it is the result of a collision. TCP only retransmits when a packet is lost, damaged, or out of sequence, which is nigh-on impossible on a single-link transmission; it'll probably be handled by the MAC or link protocols in this case. The link layer (below the IP level) will retransmit broken and out-of-sequence packets, functionality that is repeated at the TCP layer because two sequential packets in a TCP stream may take different routes as a result of the IP layer it's sitting on top of, so a packet sent later may arrive first if it takes a shorter path.


  • LordHunter317 (unregistered) in reply to Alex
    Alex:
    which is nigh-on impossible on a single link transmission,
    Sure it is. You're assuming the physical link is mostly reliable. That's a poor assumption if the link is heavily congested or fundamentally unreliable, like wireless links.
  • rwegesgsetetsetsetstte4 (unregistered) in reply to LordHunter317

    Didn't Ethernet simply become popular because it got really inexpensive really fast?

  • (cs) in reply to SwordfishBob
    Anonymous:
    >>And I'm a little confused here--doesn't the networking API abstract out things like Ethernet vs. Token Ring?

    Yes, though they will behave differently under high load. Non-switched ethernet degrades very badly once utilisation reaches (in my experience) 35%



    From this number I predict that you were using at least one cheap <$10 network card in that segment. Yes, it matters. One cheap card pulls the threshold down from about 70% to about 30%.

    Of course, nowadays there is absolutely no excuse for not using a full-duplex switch, thereby eliminating the entire problem (because CSMA/CD is not used at all in that configuration, since no collisions are possible). And yes, it does have to be full-duplex - if something goes wrong with autoconfig and you get stuck in half-duplex mode, you've got exactly the same problem still, even with a switch (because you can get the switch and the host both sending data at the same time).

  • (cs) in reply to Franz Kafka
    Anonymous:
    jsmith:

    I used to teach a network troubleshooting class and it was very difficult to make a Token Ring stop working at the hardware level.  If any station wasn't functioning properly, it was automatically removed from the network and all the other systems worked just fine.  Token Ring seems fragile at a glance but turns out to be very robust in practice. 



    Try plugging a 4Mb station into a 16Mb token ring.


    From this comment I deduce that your experience with token ring has involved dumb MAUs - devices with little intelligence beyond that needed to insert hosts into the ring. These were common when token ring was popular, but they are to token ring what a hub is to ethernet. If you plug a crappy ethernet card into a busy hub, it wrecks the performance for everything on that hub. If you plug a 4Mb token ring card into a dumb MAU that's otherwise running at 16Mb, it behaves similarly.

    There were also smart controllers, that were to token ring what switches are to ethernet. These did not have such issues. Of course, they were kinda expensive... token ring and ethernet aren't really all that different.
  • (cs) in reply to Olivier Galibert
    Anonymous:

    Except that collisions don't exist anymore.  They went away when hubs died.


    This is a popular myth with inexperienced network admins. Unfortunately, being a myth, it's not true. Collisions are still extremely common, because a lot of network equipment gets misconfigured. Switches do not eliminate collisions, they just make it possible to eliminate collisions. The network admin has to actually set them up right to accomplish this - and sadly, this doesn't happen very often, because most network admins are really windows admins who think they know something about networks. I have lost count of the number of times I've gone to investigate the need for a network 'upgrade' and found the only problem was bad configuration...
  • (cs) in reply to rwegesgsetetsetsetstte4
    Anonymous:
    Didn't Ethernet simply become popular because it got really inexpensive really fast?


    It's hard to pinpoint the exact reason; ethernet had many advantages at the time. Personally, I think it's because the common ethernet cables were willing to go around corners, while the IBM Type-1 cabling used for token ring was only interested in going in a straight line (it was shielded, fairly thick, and very very stiff - not as bad as the frozen yellow garden hose that was used for the original 10base5 ethernet, but not far off). Token ring uses cat5 nowadays, but as with so many other things, it was late to the party.
  • Danny (unregistered)

    I am completely horrified... I'm sure there is money in your industry, but how did someone not get fired for this cluster f*&k?

  • Dazed (unregistered)
    Alex Papadimoulis:
    For the longest time now, I've held the belief that best words to describe highly-specialized control software are "ridiculously expensive" and "marginally functioning." Maybe I'm wrong, but this story from Brandon Jones certainly doesn't help change my mind.

    Expensive compared to a lot of software, yes. Whereas the standard philosophy for many administrative systems, let alone PC software, seems to be to fix the worst bugs and let the user sort out the rest, that doesn't work when your "users" are valves and pumps and sensors. Add to that the fact that highly-specialized control software by definition has a small market, then yes, it is expensive.

    "Marginally functioning"? Well doubtless there is plenty that falls in that category. I have indeed encountered a couple of such systems. OTOH I have worked for several years building both process-control and administrative systems; the process-control systems were designed, built and tested to standards which the administrative users could only dream of. After all, if a process-control system fails, it can kill people. We were used to availability requirements of 99.99%, 24x7x52. Twice we installed a large system and after a couple of weeks had the assigned maintenance personnel clamouring for other work because the system was working flawlessly and there were no bugs to fix.

  • Metaspace (unregistered) in reply to mrprogguy
    mrprogguy:

    And I'm a little confused here--doesn't the networking API abstract out things like Ethernet vs. Token Ring?

    Oh, come on, that is the oldest trick in the industry. Just blame some system component for the deficiencies of your own software!

    Very elegant: not only won't you be blamed, you even get time and money from the customer to fix what should have been working in the first place!

    I have to say: Whoever believes a claim like the one above deserves to pay.

  • (cs) in reply to design
    Anonymous:


    #define ONE_MISSISSIPI (ONE_SECOND)
    #define TWO_MISSISSIPI (ONE_MISSISSIPI + ONE_SECOND)
    #define THREE_MISSISSIPI (TWO_MISSIPPI + ONE_SECOND)
    .
    .
    .
    count(ONE_HUNDRED_MISSISSIPPI);
    const bool ReadyOrNot = true | false;
    hereIcome(ReadyOrNot);


    Well, that won't work. TWO_MISSIPPI isn't defined...
  • Anon (unregistered) in reply to Alex Papadimoulis

    They probably used token ring because it gives you a guarantee of what the maximum delay from each node would be. And then the software was probably coded to rely on that.

  • Mike5 (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    TCP/IP is more efficient than Token Ring, even though it's completely haphazard. Token ring systems have zero collisions, because only one system can talk at a time. Well, TCP/IP systems have decent numbers of collisions, which requires that the system resend the packets that collided...Well, it turns out that this is hugely more efficient than trying to actually traffic control the packets.


    Wow, you have the technobabble almost pinned down. Now if only you stopped mixing network layers like that, people who actually know anything about networks would read more than one sentence of your text.


  • (cs) in reply to anonymous
    Anonymous:

    Making the sensors rely on a token ring is goofy, because one of the issues I remember from my early-'90s use of them is that if a node unexpectedly drops off, the whole ring will come down.  So, you never put a token ring anywhere near where actual work got done, for fear of jostling a cable or something.



    Absolutely.  I used to work for a coal mining company and they used token ring at head office for performance, but ethernet at the coal mines because contractors working on site would keep digging through network cables and bringing the whole network down. 

    This was mid-late 1980s - maybe that's why they didn't use star topology?
  • javascript jan (unregistered) in reply to Mike5
    Anonymous:
    Satanicpuppy:
    twaddle snipped


    Wow, you have the technobabble almost pinned down. Now if only you stopped mixing network layers like that, people who actually know anything about networks would read more than one sentence of your text.


    He should probably look up what "page fault" means too.
  • Mike5 (unregistered) in reply to javascript jan
    Anonymous:
    Anonymous:
    Satanicpuppy:
    twaddle snipped


    Wow, you have the technobabble almost pinned down. Now if only you stopped mixing network layers like that, people who actually know anything about networks would read more than one sentence of your text.


    He should probably look up what "page fault" means too.


    I didn't read that far... :D
  • The Edge (unregistered)

    I suspect all this could have been avoided if 'The consultant' had written it all in Coldfusion...

  • (cs) in reply to trippyz
    trippyz:
    I think that part of the problem with hp-ux is that it doesn't ship with a full compiler; the one it ships with is only sufficient to build the kernel.


    wtf?!!  In this day and age?

  • (cs) in reply to Mike Rod
    Mike Rod:
    jsmith:


    If any station wasn't functioning properly, it was automatically removed from the network and all the other systems worked just fine.



    I agree with almost everything here but I didn't know about the part in bold. Because, you know, I picture a circle topology, so if a part of the circle is missing or broken...



    Oh, the statement is quite correct.  Token Ring is logically a ring, but not necessarily physically.  Typically it is more of a star, with a central hub used to connect to all the clients.  If a client is unresponsive or is removed, the hub (called a MAU, if I remember right) does the right thing (ie. restores the 'logical' ring).  So, in practice, you could remove and attach machines (and other networks) willy-nilly just like (most) ethernet setups.

  • kirinyaga (unregistered) in reply to The Edge

    At the time token ring, token bus and ethernet were competing, ethernet was still using the coax wire you had to run from one computer to the next. Cut the wire, and the network was down (not even separated into 2 halves, because you needed a terminator at the extremities to avoid signal echoes). Plus, as everyone said, a very busy 10Mb/s ethernet was really running at 3Mb/s tops before the generalization of switches, and with unpredictable delays between frames. The 16Mb/s token ring was a lot faster than ethernet under heavy load. Ethernet also had problems with wire length; it was the network with the shortest distance requirements. In industrial facilities, that was often a serious drawback.

    There was of course always the problem of the ring being cut (even with dual-ring token ring), so for industrial environments, token bus was popular. It was designed to be a real-time network (sending a frame to a node, or receiving a status about this node being unreachable, took a guaranteed maximum time) and the ring was virtualized by the network protocol over an ethernet-like physical medium.

    When you are controlling a critical industrial process, knowing a sensor WILL send its measurement in the next 250ms (or it's broken) is very important. From the network to the operating system, the hardware and the programming language, every operation has a well-known guaranteed maximum delay. Ethernet could not guarantee that at the time (and I suspect it still can't, but I don't work in realtime anymore, so I'm not certain about that). I guess this is the reason the WTF software has to run on specific machines and network.

  • J Random Hacker (unregistered) in reply to kirinyaga
    Anonymous:
    Plus, as everyone said, a very busy 10Mb/s ethernet was really running at 3Mb/s tops before the generalization of switches, and with unpredictable delays between frames.


    It's amazing how that tired old IBM FUD still continues to circulate, considering that Van Jacobson posted measured 90+% throughput in 1988. Google "4BSD TCP Ethernet Throughput"  or "Measured capacity of an Ethernet: myths and reality".

    It's true that you might get unacceptable variance in delay, and maybe packet drops in the worst case, if you tried to run a control system on an Ethernet when you push its baseline load way up. Running, say, bittorrent and a hard real-time control system on the same network is a WTF configuration though.

    In any case, no network ever gives you a guaranteed maximum delay, unless you have a bit error rate of 1 in 10^infinity. An analysis that claims that token ring is deterministic but Ethernet not is just plain wrong.
  • LordHunter317 (unregistered) in reply to J Random Hacker
    J Random Hacker:
    In any case, no network ever gives you a guaranteed maximum delay, unless you have a bit error rate of 1 in 10^infinity. An analysis that claims that token ring is deterministic but Ethernet not is just plain wrong.
    You need to take some rather basic courses on network theory and data transmission.

    It's perfectly possible to make a deterministic protocol. Token Ring is. Ethernet is not. Token Ring has a guaranteed maximum on when I'll be able to talk again and how long data will take to arrive. If those are exceeded, I can safely assume there is a network error.

    Ethernet does not provide this at all without specialized hardware and software.

  • John Williams (unregistered)

    This has been a very interesting post. I was involved in token ring installation pretty much as soon as IBM introduced it (well, for PCs anyway, back in the eighties, I think) and I can confirm that it was definitely robust. The MAUs (multiple access units) had a relay for each connection which required the network card to power it, so if you turned off the PC or cut the cable the relay would click over and that section would be cut out of the circuit. You built bigger networks by connecting multiple MAUs together. Chop the link between two MAUs and you end up with two independent networks still functioning. I hated the cabling, but it was much more reliable than ethernet over coax.

  • (cs) in reply to asuffield
    asuffield:
    Anonymous:

    Except that collisions don't exist anymore.  They went away when hubs died.


    This is a popular myth with inexperienced network admins. Unfortunately, being a myth, it's not true. Collisions are still extremely common, because a lot of network equipment gets misconfigured. Switches do not eliminate collisions, they just make it possible to eliminate collisions. The network admin has to actually set them up right to accomplish this - and sadly, this doesn't happen very often, because most network admins are really windows admins who think they know something about networks. I have lost count of the number of times I've gone to investigate the need for a network 'upgrade' and found the only problem was bad configuration...

    Sheesh.  You would never make it as a highly paid consultant.  Where is the money in 'reconfiguration'?
  • javascript jan (unregistered) in reply to LordHunter317
    Anonymous:
    J Random Hacker:
    In any case, no network ever gives you a guaranteed maximum delay, unless you have a bit error rate of 1 in 10^infinity. An analysis that claims that token ring is deterministic but Ethernet not is just plain wrong.
    You need to take some rather basic courses on network theory and data transmission.

    It's perfectly possible to make a deterministic protocol. Token Ring is. Ethernet is not. Token Ring has a guaranteed maximum on when I'll be able to talk again and how long data will take to arrive. If those are exceeded, I can safely assume there is a network error.

    Ethernet does not provide this at all without specialized hardware and software.



    You need to take some even more basic courses on reading comprehension. JRH was clearly talking about data transmission, not capability to transmit.

    Mind you, most guarantees are couched in terms of probabilities less than 100%, so perhaps JRH is also guilty of nit-picking.

    (And nit-picking is a crime against humanity.)

  • (cs) in reply to danio

    The WTF there is that they didn't put the cables overhead.

  • LordHunter317 (unregistered) in reply to javascript jan
    Anonymous:
    JRH was clearly talking about data transmission, not capability to transmit.
    And? He was still wrong. Even considering higher-level protocols, it's perfectly possible to be deterministic.

Leave a comment on “Lord of the Token Rings”
