• pastor (unregistered) in reply to hotzenplotz

    BTW: without the (very unrealistic) high lock contention (or just with a properly set spin-count - I just used 4000 which is the number the MSVC heap manager uses) the producer-consumer queue can do > 600 000 ops/sec. Now compare 7.7 to 600.

    Which also shows that a producer-consumer queue is slow. Your producer-consumer operation takes ~5000 cycles; in that time you can often fully service the request. My point is that the patterns people use are often not very efficient, but they are simpler to understand than the really fast, application-specific solutions.

    The producer-consumer pattern means context switching and locking for every message, which is not ideal. People don't realize how suboptimal this is because very few have ever benchmarked the overhead of locking and context switching. My experience is that in a top-notch software system, locking and context switching are the main bottlenecks, and application-specific solutions are necessary.
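
    For anyone who wants to check the numbers themselves, a minimal sketch of such a locking micro-benchmark might look like the following (standard C++, a single uncontended std::mutex, iteration count picked arbitrarily). Note that it only measures the cheap, uncontended case; contention and the context switches it causes are what the figures above are really about.

      #include <chrono>
      #include <cstdio>
      #include <mutex>

      int main()
      {
          const int kIterations = 1000000;   // arbitrary; raise it for steadier numbers
          std::mutex m;

          auto start = std::chrono::steady_clock::now();
          for (int i = 0; i < kIterations; ++i)
          {
              m.lock();      // uncontended: roughly an atomic RMW plus call overhead
              m.unlock();
          }
          auto stop = std::chrono::steady_clock::now();

          auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
          std::printf("%.1f ns per lock/unlock pair (uncontended)\n",
                      static_cast<double>(ns) / kIterations);
          return 0;
      }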

  • (cs) in reply to pastor
    Anonymous:

    BTW: without the (very unrealistic) high lock contention (or just with a properly set spin-count - I just used 4000 which is the number the MSVC heap manager uses) the producer-consumer queue can do > 600 000 ops/sec. Now compare 7.7 to 600.

    Which also shows that a producer-consumer queue is slow. Your producer-consumer operation takes ~5000 cycles; in that time you can often fully service the request. My point is that the patterns people use are often not very efficient, but they are simpler to understand than the really fast, application-specific solutions.

    The producer-consumer pattern means context switching and locking for every message, which is not ideal. People don't realize how suboptimal this is because very few have ever benchmarked the overhead of locking and context switching. My experience is that in a top-notch software system, locking and context switching are the main bottlenecks, and application-specific solutions are necessary.

    Yup, of course it is "slow" - although not as slow as creating and destroying threads, which was my point. And my implementation was far from optimal. You're making one mistake though: the producer-consumer queue does not necessarily incur a context switch for every item added to or taken from the queue - if there is a lot of work queued, multiple items can be added or removed by one thread without switching to another. And since performance only really counts when there is a lot to do...

    The locking overhead of course cannot really be helped, unless one implements a queue where locking isn't necessary at all, or at least not in the most common situations. MS' lock-free SList comes to mind, but unfortunately that is LIFO (a stack), which isn't suitable for queuing work items in most situations. And the more sophisticated lock-free algorithms tend to be even less efficient than locking versions that use a CRITICAL_SECTION (or a spin-lock on *nix).
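
    For reference, a stripped-down version of the kind of queue being discussed might look roughly like this (Win32, a CRITICAL_SECTION initialised with the 4000 spin count mentioned above, a semaphore to wake consumers; error handling omitted and the names are mine, so treat it as a sketch rather than the benchmarked implementation):

      #include <windows.h>
      #include <climits>
      #include <deque>

      // Minimal producer-consumer queue: a CRITICAL_SECTION (with a spin count)
      // protects the deque, a semaphore counts the items available to consumers.
      class WorkQueue
      {
      public:
          WorkQueue()
          {
              // 4000 is the spin count discussed above; tune it for your workload.
              InitializeCriticalSectionAndSpinCount(&lock_, 4000);
              items_ = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
          }
          ~WorkQueue()
          {
              CloseHandle(items_);
              DeleteCriticalSection(&lock_);
          }

          void Push(int work)
          {
              EnterCriticalSection(&lock_);
              queue_.push_back(work);
              LeaveCriticalSection(&lock_);
              ReleaseSemaphore(items_, 1, NULL);       // wake one waiting consumer
          }

          int Pop()
          {
              WaitForSingleObject(items_, INFINITE);   // blocks only while the queue is empty
              EnterCriticalSection(&lock_);
              int work = queue_.front();
              queue_.pop_front();
              LeaveCriticalSection(&lock_);
              return work;
          }

      private:
          CRITICAL_SECTION lock_;
          HANDLE           items_;
          std::deque<int>  queue_;
      };

    A consumer that drains several items per wake-up (as described above) amortises the semaphore wait and avoids paying for one switch per item.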

  • (cs) in reply to Anonymous
    Anonymous:
    You're kidding, right? Every call to the method spawns a new thread! It might eventually get destroyed (I insist on the *might*, seeing as this IS The Daily WTF), but still, creating threads is costly!

    Creating OS threads is costly (and that is indeed what's created here), but not all languages use native threads. Quite a few languages have lightweight threads (Java's so-called Green Threads), and some languages are even built upon the notion of lightweight threads (Erlang, for example: creating an Erlang thread takes about 1 µs up to a few thousand threads, at which point the thread-creation cost jumps to around... 5 µs. Erlang also has no notion of locks, semaphores or synchronization routines).

  • (cs) in reply to Raider
    Zygo:
    Threads are decades old, but decent, commercially practical *implementations* of threads (outside of the embedded market) have been available for only a few dozen months.

    Try 15 years, and 8 for an open-source free thread implementation.

    Not for C++ or Java though...

    Raider:

    And people wonder why daemons written in older Java were so slow and such resource hogs :) ... The only way I'd ever do one thread per connection is if there would be less than 10 connections on a single or dual core/cpu system and the connections have to be synchronous (Including ethernet, RS232, USB, etc, connections), otherwise I use a connection pool and if I feel it necessary, a thread pool to manage the connections.

    Here again this is only true for OS threads and Dijkstra-style multiprocess/multithread programming (via shared variables, mutexes, semaphores, ...). Yaws (an HTTP server written in Erlang), for example, does spawn a thread for each request and yet scales much better than Apache (as in "Apache dies at 4k connections after seeing its efficiency drop rapidly, while Yaws still hums along merrily at 80k connections").

    Yaws versus Apache

    In red Yaws on NFS, in blue Apache on NFS, in green Apache on local FS. Vertical axis is the data throughput (in KBytes/second), horizontal axis is the number of parallel sessions.

    Testing methodology:

    • Machine 1 has a server (Apache or Yaws).
    • Machine 2 requests 20 KByte pages from machine 1. It does this in a tight loop, requesting a new page as soon as it has received the previous one from the server.
    • Machines 3 to 16 generate load.

      Each machine starts a large number of parallel sessions.

      Each session makes a very slow request to fetch a one-byte file from machine 1. This is done by sending very slow HTTP GET requests (the GET requests are broken up and sent one character at a time, with about ten seconds between each character).

    Apache version is 2.0.39 with MPM Worker

  • qbolec (unregistered)

    Correct me if I'm wrong:

    When the ListenForMessage function returns, it finishes the thread, removing itself (stack and all) from memory. So the recursive nature of this design is not so bad: it can be viewed as a FIFO of threads that are spawned at one end and killed at the other. The number of threads is thus always adjusted to the number of requests at the moment (scaled down by the processing capabilities of the processor, which sometimes cannot keep up - but then again, if you have no processing power left, you cannot blame the program for not handling incoming data)...

    So the WTF is merely that the number of threads is not limited and that the threads are not reused (i.e. by pooling), thus generating overhead. But isn't the beauty of the FIFO paradigm and the automagic scaling worth those extra cycles?
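
    (To make the pooling alternative concrete, a rough sketch of reusing a fixed set of worker threads instead of spawning one per message could look like the following - standard C++; the class name, worker count and shutdown policy are all invented for illustration.)

      #include <condition_variable>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // Rough sketch of a pool: a fixed set of workers pulls jobs from one queue,
      // so threads are reused instead of being created and destroyed per message.
      class ThreadPool
      {
      public:
          explicit ThreadPool(unsigned workers = 4)          // worker count is arbitrary
          {
              for (unsigned i = 0; i < workers; ++i)
                  threads_.emplace_back([this] { Run(); });
          }

          ~ThreadPool()
          {
              {
                  std::lock_guard<std::mutex> g(m_);
                  stopping_ = true;
              }
              cv_.notify_all();
              for (auto& t : threads_) t.join();
          }

          void Submit(std::function<void()> job)             // called by the producer(s)
          {
              {
                  std::lock_guard<std::mutex> g(m_);
                  jobs_.push(std::move(job));
              }
              cv_.notify_one();
          }

      private:
          void Run()
          {
              for (;;)
              {
                  std::function<void()> job;
                  {
                      std::unique_lock<std::mutex> lk(m_);
                      cv_.wait(lk, [this] { return stopping_ || !jobs_.empty(); });
                      if (stopping_ && jobs_.empty()) return;
                      job = std::move(jobs_.front());
                      jobs_.pop();
                  }
                  job();                                     // process the message outside the lock
              }
          }

          std::mutex m_;
          std::condition_variable cv_;
          std::queue<std::function<void()>> jobs_;
          std::vector<std::thread> threads_;
          bool stopping_ = false;
      };

    The per-message cost then becomes one queue push and, at worst, one wake-up, rather than a full thread creation and destruction.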

    So, in the end, how is it spelled: pooling or polling?

    </sarcasm>

    I vote for select()

    And for built in support for thread pooling in .NET

    ....and also: DO NOT HIT ENTER AFTER TYPING THE CAPTCHA (in IE 6.0 SP1 it jumps to the search page, which is the default button for this forum) [thankfully, hitting 'back' and waiting 2 minutes restored the content of the message textarea]

  • (cs) in reply to masklinn
    masklinn:
    Zygo:
    Threads are decades old, but decent, commercially practical *implementations* of threads (outside of the embedded market) have been available for only a few dozen months.

    Try 15 years, and 8 for an open-source free thread implementation.

    Not for C++ or Java though...


    I don't think it's that the implementations have changed that much - Windows still uses, for the most part, the same threading code it used in Windows NT4. The difference is that the HARDWARE itself is being redesigned with multiple threads and concurrent code in mind. See Hyper-Threading technology, which in a nutshell reduces the number of cycles required to do a context switch (yes, there's more to it than that, but that about sums it up), and the widespread use of desktop SMP systems by home users (mostly the new dual-cores).

    I have a dual-CPU machine where both CPUs have Hyper-Threading (dual Xeon), and I tell you now, it certainly benchmarks very well with very, very large numbers of threads (I am using Linux with pthreads). I once wrote some code by mistake that functioned like this WTF, and I managed to get it up to several thousand threads before there was an issue -- and even then the issue wasn't the CPU time spent context switching, it was lack of memory from all of the thread states being stored in RAM ;)
  • p.diddy (unregistered)

    This song is dedicated to everyone
    Who's worked with someone
    they truly loath
    check it out.


    Seems like yesterday we had code to show
    i wrote the threads, you - semaphore
    so now we dont use punch cards no more
    notorious they got to know that
    i cant write C or VB
    words cant express how i hate to read
    Even tho im dumb, we still a team
    through your proxy i google things
    in the future cant wait to see
    if you open up my codes to read
    rewriting some lines
    at nite until i'm blank
    try to catch it first but error plays again
    all my bad codes is hard to conceal
    cant imagine all the pain you felt
    give anything to have half your brain
    but i dunno if u still be alive after that....


  • (cs) in reply to drdamour

    drdamour:
    ...and the evil switch statement...

    Huh, is that the evil switch of the west?

  • Martijn (unregistered)

    After only a few minutes of searching, Brian found code from nine different articles, all posted on a certain website known best for its presentation of astonishingly low-quality, unedited articles and code samples.

    He copied the code from "thedailywtf.com"?

  • (cs) in reply to Raider
    Raider:

    Yeah, I got a HUGE laugh when he said that to me ... I really thought he was kidding at first, but he wasn't, at all ... He was absolutely serious ... He had never heard of the C++ Standard Library, he had never heard of anything that wasn't MFC/VC++ related. 

    Charles Petzold: Does Visual Studio Rot the Mind? (Ruminations on the Psychology and Aesthetics of Coding)

  • (cs) in reply to Ben
    Anonymous:

    Your block of code above is still isolated, you'd need to account for things like moving your cursor around, opening and closing files, test builds, checkins / checkouts, finding definitions. Granted, in a shorter period of time, things will be more recallable and things go faster if there's no communication involved (only in one person's head) and if you're disciplined enough to not check the daily wtf, etc. A good IDE, intellisense, refactoring tools, background compile, etc would all help... and it depends on physical line count, coding standard, etc.



    Actually, I find that IntelliSense and VS's refactoring slow things down. IntelliSense gets in the way, and you spend a lot of time picking the right word from the list; worst of all, it encourages people to make really long names, because IntelliSense will find the right one anyway.

    And VS's refactoring insists on recompiling all the aspx files every single time, which means it takes about a minute to do anything. Sometimes it does it twice in a row.

    If you strip comment lines from decent production code and then compare blank lines to non-blank lines (the blank ones being those that mess up your merge), I think you'll get closer to 40% blank lines - especially if you put { on its own line.
  • (cs)

    A friend of mine started a position with a development group about a month-and-a-half ago.  After being there a couple of days, he found a poorly implemented C# list-like collection class.

    My friend, knowing that a poorly designed and/or implemented foundation almost always leads to a poor result, brought this to the attention of the lead developer. What he did not know was that this lead developer was the one who wrote it!

    Now, most people who have earned a lead position know to value the input of those they work with. Not this particular one, however. This set things off on the wrong foot and resulted in my friend being released three weeks later due to a "personality conflict". (Oh, and this same "personality conflict" had occurred with a couple of previous developers as well...)

    When someone comes to you and tells you that your code sucks and can point out the reasons why, and you have no way to defend what you have done, that does not mean that you have a personality conflict.  It means that you are incompetent.

    Just about every software group thinks it has a star. Sometimes they really do, but sometimes they just need someone better to come along, call a spade a spade and point out the facts. I have never been afraid to point out stupidity when I see it, even at an upper level. I am glad to see from the parent post that I am not the only one who takes a chance on issues like this in the hope of improving software in general, and happy as hell to see that it worked out correctly in this case.

  • (cs) in reply to WisenHeimer

    Anonymous:
    Sure, had many of those days, but not two straight months of it. Did you comment it? Did that include debugging? Anyway, I wouldn't take it too seriously. 150,000 lines of code in two months is not normal for anyone, and you're just a gangsta boastin' if you say it's normal for you. But I guess you're paying the price now from the continued expectation of that rate.

    Oh no, that was by far the most code I've ever written in that short a time span ... I spent a LOT of time at home working on it when not in the office, because it was more or less a case of having to put my money where my mouth was and show what I could do, now that I had brought out the faults of the other guy and so forth ... Not to mention there are always deadlines that have to be met.

  • (cs) in reply to Raider

    Apparently everyone has different ways of counting LOC ... Here where I work (and, believe it or not, the higher-ups count it this way because they use it as leverage when charging customers for customizations and whatnot), all LOC is counted in terms of compilable code: comments and whitespace lines don't count, but a line with just a } on it counts, and even putting each parameter of a call on its own line (where the function takes so many parameters that it would trail off the screen) counts. This is actually in the development handbook and has become the 'instilled' default way of counting LOC for me.
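
    (As a concrete illustration of that counting rule, a tiny line counter along those lines might look like this - standard C++, file name taken from the command line; block comments and strings containing "//" are ignored for simplicity, so it is only a sketch of the rule as described above, not the handbook's official tool.)

      #include <fstream>
      #include <iostream>
      #include <string>

      int main(int argc, char** argv)
      {
          if (argc < 2) { std::cerr << "usage: loc <file>\n"; return 1; }
          std::ifstream in(argv[1]);
          std::string line;
          long count = 0;
          while (std::getline(in, line))
          {
              std::string::size_type p = line.find_first_not_of(" \t\r");
              if (p == std::string::npos) continue;          // blank/whitespace-only: doesn't count
              if (line.compare(p, 2, "//") == 0) continue;   // comment-only line: doesn't count
              ++count;                                       // everything else counts, even a lone }
          }
          std::cout << count << "\n";
          return 0;
      }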

  • (cs) in reply to Unklegwar
    Anonymous:
    Alex Papadimoulis:
    Raider:

    Anonymous:
    Just so I know to avoid it, which site is "known best for its presentation of astonishingly low-quality, unedited articles and code samples"?

    I don't think I'm allowed to post on here what sites he found the code snippets on without getting Alex in trouble, but feel free to email the address attached to my profile here :)

    I don't like to pick on specific companies or sites. But I figure that this reply will be buried far enough in the comments that a clue should be OK. The website in question (and I should note that I get a LOT of submissions directly linking to this site) starts with the letter "c" and ends with "odeproject.com".



    HA! I knew it! That's the first site that came to mind.
    I'm glad I'm not the only one who thinks that site is full of more turds than the NYC sewer system.

    There are articles of varying quality on CP.  Some are good, some are very good, and some are really bad.  The important thing to remember about the "low-quality, unedited articles and code samples" on the site is that many of them are identified as such on the site itself.  Unedited contributions are clearly marked as such, and all articles have their own comment area and rating.  All articles (and even individual posts) can be rated/voted-on.  When it is used by the (good) people on the site, the system works pretty well.

    Do not get me wrong - some of the unedited contributions really suck and many of them are worth kicking the poster around a little bit.  There are even some people on the site that regularly contribute articles and/or posts that cannot understand the difference between "works" and "works well".  But it is a pretty decent place if you give it a chance.

    [Full Disclosure: I am CP member 2399, and a former CodeGuru contributor.]

  • 4chan (unregistered) in reply to HeroreV

    Thou shalt not use any library whose function call doth not include a reference to a per thread handle.

  • (cs) in reply to Grimoire
    Grimoire:

    Well d'uh.  It's obviously a crappy implementation of the std::queue.  It should either work or not...


    Well d'uh. I'm very much interested in your non-crappy implementation which either works or not. I'm guessing the latter.

    PS: You obviously don't know a fucking thing about the STL or synchronization granularity.

  • (cs) in reply to braindigitalis
    braindigitalis:
    masklinn:
    Zygo:
    Threads are decades old, but decent, commercially practical *implementations* of threads (outside of the embedded market) have been available for only a few dozen months.

    Try 15 years, and 8 for an open-source free thread implementation.

    Not for C++ or Java though...


    I don't think it's that the implementations have changed that much - Windows still uses, for the most part, the same threading code it used in Windows NT4.

    Duh. Most of my post's point was that if you're using OS threads you've lost already, because OS threads are based on Dijkstra's concurrency model, and that model both doesn't scale and makes it difficult for the programmer to write highly concurrent programs, since he has to worry about things that shouldn't even exist (shared state leading to race conditions and deadlocks).

  • (cs) in reply to masklinn
    masklinn:

    Duh. Most of my post's point was that if you're using OS threads you've lost already, because OS threads are based on Dijkstra's concurrency model, and that model both doesn't scale and makes it difficult for the programmer to write highly concurrent programs, since he has to worry about things that shouldn't even exist (shared state leading to race conditions and deadlocks).

    And you're wrong on numerous counts. First, you seem to be confusing the question of "OS vs. user code" with "kernel mode vs. user mode". For instance, Win32 fibers and GNU Pth offer user-mode threading and are part of Windows and of some Linux-based OSes, respectively.

    Second, and more importantly, you're wrong about the scalability issues. There are kernel thread implementations that perform and scale badly; I won't name names, but the CEO of the vendor of one of those systems is fond of throwing pieces of furniture. There are other implementations that scale much better, for instance those of Solaris and of recent Linux kernels. As far as the latter is concerned, one of the kernel developers benchmarked the new NPTL (Native POSIX Thread Library) by creating and destroying 100,000 threads on a single-processor 32-bit Intel box, and it took 2 seconds (the old threading implementation is said to have required about 15 minutes for the same task). At first, NPTL was intended to be an "m on n" threading library that piggy-backs a (potentially large) number of user-mode threads on a (probably much smaller, ideally no more than one per process and processor core) number of kernel threads. But Red Hat came up with a way to greatly accelerate thread context switching in the kernel, and an m-on-n implementation would have faced many problems, some hard to solve and some possibly unsolvable, so it was decided that NPTL would be "1 on 1". So yes, every one of the above-mentioned 100,000 threads was a kernel object.

    Kerneltrap discussion on NPTL
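
    (If you want to reproduce that kind of figure on your own machine, a back-of-the-envelope version of the measurement could look like the sketch below - standard C++ std::thread, which maps onto the same kernel threads being discussed, with the threads created and joined one after another rather than 100,000 at once. The numbers will vary wildly with kernel and C library.)

      #include <chrono>
      #include <cstdio>
      #include <thread>

      int main()
      {
          const int kThreads = 100000;            // same count as the NPTL test, but serial
          auto start = std::chrono::steady_clock::now();
          for (int i = 0; i < kThreads; ++i)
          {
              std::thread t([] {});               // start a thread that does nothing...
              t.join();                           // ...and immediately tear it down again
          }
          auto stop = std::chrono::steady_clock::now();
          double secs = std::chrono::duration<double>(stop - start).count();
          std::printf("%d create/join cycles in %.2f s (%.1f us each)\n",
                      kThreads, secs, secs * 1e6 / kThreads);
          return 0;
      }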

    My point is not that Erlang can't do better than that - I've heard claims that it can. Maybe Erlang would require only 1 second for the same task on the same machine, or even less. My point is rather that this sort of performance easily satisfies all realistic scalability demands. In effect, if you had 100,000 threads doing actual work, the 2 seconds of CPU time required to start and stop them would pale in comparison. As Ingo Molnar pointed out, in such a situation even the cost of kernel-level context switching is small compared to the total cost of context switching. That is because there are some costs of context switching that not even user-mode context switching - not even a clever VM - can prevent, like cache thrashing (if you are lucky and the combined working set of all threads actually fits in RAM) or outright swapping frenzies (if you are less lucky).

    Linux 2.6, by the way, includes an O(1) scheduler which can obviously handle a large number of threads far better than the old O(n) scheduler. This major improvement was made possible by redesigning the niceness priority algorithm in a backwards-compatible way: the old scheduler rewarded nice threads (i.e. threads that yield before their timeslice is up), but for fairness it also had to reward threads that didn't even get a timeslice, leading to O(n) complexity. The new scheduler instead punishes "un-nice" threads (threads that do use up their timeslice), and since only the currently running thread can be un-nice, the complexity is reduced to a constant overhead every time the scheduler runs.

    Third, you claim that Java Green Threads are preferable to OS threads (or kernel threads). Actually, on Linux at least, not even Sun sees it this way any more (in Sun's own words). According to Sun, NPTL suits highly threaded Java applications far better than any previous threading implementation. Though no implementation is explicitly named, that obviously includes Java Green Threads.

    In effect, Java Green Threads, being an "m on 1" implementation, have significant drawbacks that limit their utility at a time when almost every server has at least 2 to 4 processor cores and even desktop and notebook processors are going multi-core. A single process using Green Threads cannot utilise more than one processor core at a time; at least some amount of kernel-level threading is obviously necessary to allow a single process to make use of SMP. And guess what the new multi-core-enabled version of Erlang does?

    Fourth, you claim that sharing state between threads is inherently evil and causes races and deadlocks. That's a load of nonsense if I've ever heard one. Sharing state does none of this; it's developer ineptitude that causes race conditions and deadlocks. I've worked and am currently working on several multi-threaded applications in Java and .NET, and they run fine even on dual-core machines. Back in the days of procedural C and functions using statically allocated buffers, writing multi-threaded (or even just single-threaded, reentrant) applications was an art, but in today's object-oriented environments you don't have much to worry about as long as you ensure thread-safe access to all class-static data (or, even better, avoid such data) and keep track of the class instances shared between threads (usually, only a few are required). And although my team spends a lot of time on debugging, only a tiny proportion of that time is needed to hunt down the occasional threading-related problem.

    Fifth, show me any Turing-complete language with threading support, and I'll show you racy and/or deadlocking code. Arguably, it's harder to introduce these by accident in Erlang, because Erlang forces you to use message queues as the sole mechanism for synchronisation and communication. But then, message queues can be used in other languages as well - in fact, both Java and .NET have message-queue support built in, and as far as Java is concerned, I used them for easy synchronisation in the very first multi-threaded application I worked on. And even though Erlang may give fools less rope to hang themselves with, always remember that fools are ingenious!

    The choice of synchronisation primitives is even less of an argument for a particular environment if you consider that synchronisation is a bit like Boolean logic - all sufficiently flexible sets of primitives are equivalent. For example, semaphores can be built on top of mutexes and condition variables. Both mutexes and condition variables can be implemented using semaphores. Events (as in .NET threading) can be implemented using mutexes and condition variables. Message queues can be implemented using a (non-threaded) queue structure and the same primitives. And it even works the other way round: semaphores can be implemented using message queues! This can be useful even in Erlang, because even Erlang is forced to share some state, like computer hardware or system settings.
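
    (One concrete instance of that equivalence: a counting semaphore sketched on top of a mutex and a condition variable, in standard C++. The class and method names are mine; it is meant only to illustrate the point, not any particular library's implementation.)

      #include <condition_variable>
      #include <mutex>

      // A counting semaphore built from a mutex and a condition variable,
      // illustrating that the usual synchronisation primitives are interchangeable.
      class Semaphore
      {
      public:
          explicit Semaphore(int initial = 0) : count_(initial) {}

          void Release()                       // "V": increment the count, wake one waiter
          {
              {
                  std::lock_guard<std::mutex> g(m_);
                  ++count_;
              }
              cv_.notify_one();
          }

          void Acquire()                       // "P": wait until the count is positive, then take one
          {
              std::unique_lock<std::mutex> lk(m_);
              cv_.wait(lk, [this] { return count_ > 0; });
              --count_;
          }

      private:
          std::mutex m_;
          std::condition_variable cv_;
          int count_;
      };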

    Point six (point six already??): you posted a benchmark that shows Apache with the worker MPM scaling badly (for a certain definition of badly) to show that OS threading is slow. But you did not even mention which operating system was used in the benchmark! I mean, what the FUCK is that? It's certainly not an argument I can take seriously. So please enlighten us: is it Windows 9x? Windows NT? Solaris? xyzBSD? A Linux distribution, and if so, based on kernel 2.0, 2.2, 2.4 or 2.6? Even then, you would need to show that the bottleneck lies within the OS threading implementation, and not within Apache or the worker MPM. And how was the worker MPM configured anyway? It starts a new process every ThreadsPerChild threads, and the default for that setting is, if I'm not mistaken, 16. So the 80,000 threads would be spread across 5,000 different processes. Did your benchmark take this into account?

  • (cs) in reply to Alexis de Torquemada

    Hmmm...

    Threads with shared memory appeal to many programmers because data can be shared and interchanged so easily - at least apparently easily. On the other hand, keeping threads separated (with message queues/sockets/... as the only communication mechanism) complicates many things and leads to bad performance in certain situations.

    In my experience most programmers (especially those who don't understand threading well enough - and underestimate the load of problems this ignorance will cause) "think shared memory" and only use message queues when it can't be avoided.

    I guess "thinking message queue" until a situation comes up that genuinely requires the shared-memory approach will lead to more stable code and better performance. Of course read-only data can and should be shared between threads, but it should be kept separate from mutable data, otherwise it will hurt performance on today's systems (cache-coherency protocols like MESI don't like it).

    And of course good, stable and fast (message) queue implementations would help to see this happen more frequently.
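
    (One concrete way to keep the shared read-only data away from the per-thread mutable data, as suggested above, is simply to give the mutable parts their own cache lines. A small sketch in standard C++, assuming a 64-byte cache line; the structure names and the thread count are invented for illustration.)

      #include <cstddef>

      // Each thread's mutable counter gets its own 64-byte-aligned slot, so updates
      // from different threads never fight over the same cache line (the MESI
      // "ping-pong" mentioned above). 64 bytes is an assumption; the real line size
      // depends on the CPU.
      struct alignas(64) PaddedCounter
      {
          long value = 0;
      };

      struct SharedState
      {
          const int*    lookup_table;   // read-only data: safe to share freely
          std::size_t   table_size;
          PaddedCounter per_thread[8];  // mutable data: one cache line per thread (count is arbitrary)
      };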

  • clayne (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:

    And you're wrong on numerous counts.

    You beat me to it. I was about to write a long diatribe at this guy who kept spouting off about OS thread performance, modern libraries, the speed of thread creation on NT vs. Unix variants, etc. Thanks for doing it for me - it's obvious he has very little experience with NPTL, Solaris threads, pthreads in general, or any other reasonably common Unix-oriented implementation.

  • clayne (unregistered) in reply to hotzenplotz
    hotzenplotz:

    Hmmm...

    Threads with shared memory appeal to many programmers because data can be shared and interchanged so easily - at least apparently easily. On the other hand, keeping threads separated (with message queues/sockets/... as the only communication mechanism) complicates many things and leads to bad performance in certain situations.

    In my experience most programmers (especially those who don't understand threading well enough - and underestimate the load of problems this ignorance will cause) "think shared memory" and only use message queues when it can't be avoided.

    I guess "thinking message queue" until a situation comes up that genuinely requires the shared-memory approach will lead to more stable code and better performance. Of course read-only data can and should be shared between threads, but it should be kept separate from mutable data, otherwise it will hurt performance on today's systems (cache-coherency protocols like MESI don't like it).

    And of course good, stable and fast (message) queue implementations would help to see this happen more frequently.

    MQ or not, you're still doing mutex handling - whether on a plain shared global variable, an array, etc., or on an abstract MQ passed around as a pointer or likewise living in global space.

    There is nothing magic about a message queue. Share what you need to share, and localize what you do not. How you share what you need to share is another mechanism entirely - be it global variables, arrays of pointers, linked lists, stacks (at which point we're back in message-queue land anyway), yadda yadda - but if you want to share, you WILL be using mutexes or semaphores of some sort. Message queues do NOT change this.

  • (cs) in reply to clayne

    You're right. I shouldn't write when I'm tired. Still, I think queues are often not used when they could and should be. That was what I was trying to say with too many words.

  • a0a (unregistered) in reply to WHO WANTS TO KNOW?
    Anonymous:

    MY GOD!!!!!!  How do such STUPID people find work!

    Jerry seemed like a nice guy; he had "20 years of C++ experience," which, at the time, was older than C++ itself.

    That is a common piece of garbage!  I was once asked if I had 5 years or more of experience with VB5, and I told them what I had (LESS THAN 5 years), and said that if ***ANYONE*** claimed they had more, they were LYING!  Even the developer couldn't claim more, as the beta had changed a bit, etc...  Still, they seem to ALWAYS want 5+ years, even if the product came out yesterday!

    ...wasn't joking. Brian, still being a nice guy, explained that you can't use MFC if you ever want to compile something outside of Windows. "Huh," Jerry responded, "I’ve only used Windows and, besides, isn't UNIX based on Windows."

    Actually, Windows is a stupid imitation of X Windows (which ran over UNIX and came out a year BEFORE MS Windows)!  MS Windows ran over MS-DOS (which came out about 4 years BEFORE MS Windows), which was an imitation of CP/M (which came out over 5 years BEFORE MS-DOS and about 1 year BEFORE MICROSOFT!!!!!!).  And CP/M is obviously loosely based, to some degree, on UNIX, which came out about 5 years before THAT!

    MY GOD!  The I.E.E.E. standard predates Microsoft by about 4 years!!!!!   NOT WINDOWS, NOT DOS, but the ****ENTIRE COMPANY****

     



    *giggle*
  • Kuba (unregistered) in reply to mallard
    mallard:
    Yes, UNIX was a very powerful server/workstation OS from the late '70s onwards. However, it is only comparatively recently that ordinary PCs have been capable of running UNIX well. In the past, less powerful, simpler OSes were needed.

    Words fail me. You seem to forget that the IBM PC was essentially a minicomputer-class machine of a decade earlier. I can't see how an 8086-based IBM PC would be considered less capable than the PDP-11 systems that UNIX was built on. Performance-wise, the 5 MHz 8086 was in the same ballpark as the KA11 processor used in early PDP-11s (1969 vintage); the worst-case instruction timing (instruction plus operand overhead) for basic ALU operations and branches on the KA11 and the 8086 is, methinks, within 20% of each other (6-7 µs per instruction).

    It's true that the first IBM PC had 16 KB of memory, and that was a bit tiny to run UNIX. Methinks MS-DOS 1.0 didn't run in 16 KB either - maybe in 64 KB? Anyway, a PC with 64 KB of RAM could pretty comfortably run a reasonable approximation of a UNIX; PDP-11 UNIX would boot and run simple utilities in 12K words (24 KB) of memory.

    The lack of an MMU would prevent proper protection from taking place, but that doesn't stop folks from running uClinux either. I don't know whether the v5 or v6 Unices that ran on the PDP-11 used the MMU :(

    No quack.

  • Ph (unregistered) in reply to Unklegwar

    I'd be very interested to know which coding websites all of you frequent. There have been many negative remarks about one bad site, but sadly no good ones were proposed!

    -ph

  • steve (unregistered)

    Am I missing something here?

    I admittedly have never written multi-threaded code (well, other than in college - and my memory might be faulty), but I do at least recall that after creating a new thread, two identical threads (original thread and new thread) will be executing at the next line of code. He never checked the return value from CreateThread, so BOTH threads are going to execute the code under "now process the message".

    He was supposed to put something like:

      //spawn a new listener so we can process this
      int h = CreateThread(
        NULL, 0, ListenForMessage,
        (void *)this, 0, NULL);
      //if we're the original thread, exit
      if (h != 0) return;
    
      //child thread will process the message
      int result = -1;
    --- snip ---

    Spawning a new thread and having both threads proceed to do exactly the same work twice would be a pretty big WTF. In fact I think it might have been the original WTF in this submission, but so far all of the comments have been about how stupid it is to have multi-threading for no reason other than to have multi-threading.
