• AdT (unregistered) in reply to dkf
    dkf:
    Quicksort's got some problems though; for example, it's formally O(n^2)

    This is the worst case complexity. The average case complexity is O(n log n). All of worst case, average case and best case are formally defined, so I'm just pointing out that what you were writing is ambiguous.

  • T.T. (unregistered)

    Oh man, it's been a long time since I burst into laughter like this :). LOL!

    CAPTCHA: atari, yay, brings back memories (ST ... I loved you)

  • T.T. (unregistered) in reply to Jackal von ÖRF
    Jackal von ÖRF:
    raveman:
    he should use HashMap, Hashtable is slower because it's synchronized :P

    just messing with you

    At least in Java. The real WTF is that people still use and teach the use of java.util.Hashtable and java.util.Vector (or even worse, java.util.Stack) instead of the newer alternatives and interfaces. It would be better for Sun to deprecate those classes.

    Tell me, what's the replacement for java.util.Vector? Is it java.util.Vector<> (gosh I hope it is :))

    CAPTCHA: stinky, not just yet! PS: Actually, without thinking, I started using HashMap<> because it seems more similar to std::map<> (I prefer templates in C++, no doubts there)

  • T.T. (unregistered) in reply to SomeCoder
    SomeCoder:
    krupa:
    Devek:
    If you don't know how the classes involved are implemented, you won't know why the code went from running 1,000 scripts a second to 100,000 a second on my core 2 desktop. :P If all you knew was Java and knew nothing about how it and its classes worked, you would swear the previous implementation would have been faster.
    You're not convincing me. I would have implemented the same solution. Doing that may require reading source (I'm not familiar with Jython so I won't say definitively either way) but I don't need to read source code to know that repeated process creation is slower than running threads.

    Having said that, being familiar with the source code of ANY 3rd-party library for ANY language you're working with is definitely a plus. But I still disagree with the assertion that you can't write good Java unless you know the source.

    It's not that you have to know the exact source code of Java or should review it daily. But you should know how Java works in general. If you don't, you really have no business trying to write anything more complicated than Hello World.

    Having a rough knowledge of how a language works under the covers makes you a far better programmer in the end, especially when writing code that needs to be highly efficient.

    Yes, you have to know how to program in Java to, eeehh... program in Java. Duh! And no, you don't need to know the exact implementation of framework functions to be able to program in Java. Where did you get that impression???

    CAPTCHA: dreadlocks (how about deadlocks?)

  • T.T. (unregistered) in reply to Nonymous
    Nonymous:
    Devek:
    Ben Blok:
    I do not really agree on this mainly because C has the huuge overhead of learning memory management. C makes shooting yourself in the foot really easy.
    If you can't write C without shooting yourself in the foot over and over you will not be able to write good code in any language.

    There isn't a language that has built in stupidity protection.

    Then clearly every programmer out there must be incompetent.

    I love when people start making claims like this because they're hilarious. I mean... "its not that hard to manage memory or check array bounds," but clearly... it is.

    Way too many pieces of software have buffer overflows, format string issues, memory leaks, and all kinds of other problems for C/++ to be "that easy." Apache, ruby, php, Webkit, Firefox, Internet Explorer, and everyone else. Worse, when you start corrupting memory because you didn't check bounds right there's no recovery at all. In a high level language like Java you'll get exceptions when you do something like this, and the issue can be handled gracefully... (worst case the program can save the open files and quit), without the process ending abruptly, as would be the case if you had a segfault in C.

    Seriously folks. If programming in C/++ was so easy, manual memory management was so straight forward, and checking your array bounds so simple, why are there still so many horrible security issues in programs? Why do browsers and text editors still crash?

    </rant>

    Solution: use the template containers from the std namespace, and use iterators! An absurd amount of possible bugs due to bad memory management goes down the drain. At least, that is my experience. There are still a few gotchas, but far fewer than there used to be.

    CAPTCHA: craaazy (that's really crazy =))

  • Wdomburg (unregistered) in reply to Buster

    Since the language already has built-in stupidity, would built-in stupidity protection disable those portions of the language?

  • T.T. (unregistered) in reply to Anonymouse
    Anonymouse:
    About this idea of not hiring experienced coders because they tend to be arrogant. You are aware that there's a huge difference between actually being arrogant, and being perceived as such because of superior experience.

    An experienced programmer might say "anyone who would reimplement HashTable in such a way is completely incompetent and should not be working here". And that would make him seem arrogant to a clueless manager. But it's also, objectively, the truth.

    Of course you want the guy who doesn't reinvent perfectly good existing classes before you want the guy who does so in an extremely clever but too time-consuming way. But either is preferable to a man who can't code. The seemingly infinite amount of incredibly poor software that roams the world today is largely the result of incompetence, not surface arrogance.

    AMEN to that brother! My idea exactly!

    CAPTCHA: gygax (Garry, are you there?)

  • T.T. (unregistered) in reply to Joon
    Joon:
    The Barefoot Bum:
    I'm sad, though, that the designers sacrificed rigorous destructor semantics in C# to facilitate automatic memory management. There's a lot of other stuff, not just memory management, that I like to do in destructors.

    Ummmm.... I may be misreading your comment - are you saying that there is no destructor in C-Pound?

    There is:

      public class MyClass {
          public MyClass() {
              // do some init
          }

          ~MyClass() {
              // Do some finit
          }
      }

    You could also implement IDisposable and put your finalisation in the Dispose method

    Let me add to that:

    Don't use a C# destructor (I prefer the term finalizer) unless you have allocated resources that aren't under C# control. In that case, you should write a C# destructor (finalizer) that frees the allocated resources if they weren't freed in Dispose. An example of such a resource could be a memory buffer you got from VirtualAlloc (kernel32).

    All the other things that need to be freed, should be freed in Dispose. If you fail to free those resources in Dispose, then you better revisit your code.

    CAPTCHA: bling (server: blong?)

  • T.T. (unregistered) in reply to A. Non Mouse
    A. Non Mouse:
    Correct. In C#, even if you implement IDisposable and slap something in Dispose(), you're still at the mercy of the GC as to when things get taken care of. While I don't have any C/++ experience, I do know that difference exists. If you wanted performance that requires proper disposal of objects, perhaps something like C# isn't what you should be using anyway...

    Nope, Dispose is called when you call... Dispose! Or at the end of a using statement. However, you cannot know when the GC calls the finalizer! Big difference!

    CAPTCHA: cognac (no thank you)

  • T.T. (unregistered) in reply to kblees
    kblees:
    Nonymous:
    Devek:
    <snip/> If you can't write C without shooting yourself in the foot over and over you will not be able to write good code in any language.

    There isn't a language that has built in stupidity protection.

    Then clearly every programmer out there must be incompetent.

    I love when people start making claims like this because they're hilarious. I mean... "its not that hard to manage memory or check array bounds," but clearly... it is.

    Way too many pieces of software have buffer overflows, format string issues, memory leaks, and all kinds of other problems for C/++ to be "that easy." Apache, ruby, php, Webkit, Firefox, Internet Explorer, and everyone else. Worse, when you start corrupting memory because you didn't check bounds right there's no recovery at all. In a high level language like Java you'll get exceptions when you do something like this, and the issue can be handled gracefully... (worst case the program can save the open files and quit), without the process ending abruptly, as would be the case if you had a segfault in C.

    Seriously folks. If programming in C/++ was so easy, manual memory management was so straight forward, and checking your array bounds so simple, why are there still so many horrible security issues in programs? Why do browsers and text editors still crash?

    </rant>

    Wow! I think this comment is the real WTF of the day.

    As if Java or C# would protect you from serious bugs or security issues! This is just plain ignorance.

    I agree that array bounds checking is a Good Thing. However, when it comes to memory management, there is a simple rule that applies to all languages out there: if you allocate a resource, you have to release it some time later.

    In C++, this applies to all resources, including memory. In Java/C#, memory is handled by the garbage collector, and all other resources are simply a pain in the ass.

    Consider this C++ code:

      std::ifstream f("filename");
      // ...do something

    The SIMPLEST way to do this correctly in Java (without IoC tricks) is:

      InputStream s = null;
      try {
          s = new FileInputStream("filename");
          // ... do something
      } finally {
          if (s != null) s.close();
      }

    Of course, InputStream s = new FileInputStream("..."); will also just work - for a while... (seen it so many times... sigh). And as memory is used more often than other resources (files, sockets, db connections etc.), automatic memory management simply allows you to write much more of such broken code before you realize your stupidity.

    Seriously, if you really find the concept of releasing resources hard to grasp, you should probably look for a 'flipping burger' job, will save you (and your colleagues) some frustration...

    Tell me... What's wrong with:

      InputStream s = new FileInputStream("filename");
      try {
          /* do something */
      } finally {
          s.close();
      }
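    For what it's worth, newer JDKs (7+) later added try-with-resources, which generates the finally/close() plumbing for you. A minimal self-contained sketch (the temp file is only there so the example runs on its own):

```java
import java.io.*;

public class TryWithResources {
    public static void main(String[] args) throws IOException {
        // Create a scratch file so the example is self-contained.
        File tmp = File.createTempFile("demo", ".txt");
        try (Writer w = new FileWriter(tmp)) {
            w.write("hello");
        }

        // try-with-resources: close() is called automatically,
        // even if the body throws.
        try (InputStream s = new FileInputStream(tmp)) {
            System.out.println((char) s.read()); // prints 'h'
        }
        tmp.delete();
    }
}
```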

    CAPTCHA: tesla...(had this one too much I'm afraid)

  • T.T. (unregistered) in reply to Andrew
    Andrew:
    dkf:
    kblees:
    The SIMPLEST way to do this correctly in Java (without IoC tricks) is:

      InputStream s = null;
      try {
          s = new FileInputStream("filename");
          // ... do something
      } finally {
          if (s != null) s.close();
      }

    Actually, you get just as good results from this:

      InputStream s = new FileInputStream("filename");
      try {
         ... do something
      } finally {
         s.close();
      }

    If it's not obvious why this simplification works, think about conditions under which you'll be able to close the stream in the code. (Also note that you can't trivially combine this with catching a failure to open the stream; that requires another try round the outside...)

    So where do you catch the exceptions FileInputStream("filename") can throw? This may be simple but it doesn't compile.

    Que???

    You write a fragment of code without any method signature (which could include a throws clause), and claim it doesn't compile? The fragment you originally posted doesn't compile either; you didn't wrap it in a method body, etc...

    Where do you catch them? Whatever! Somewhere else obviously =)

    sigh

    CAPTCHA: quake (third time...bad omen)

  • T.T. (unregistered) in reply to AdT
    AdT:
    dkf:
    Quicksort's got some problems though; for example, it's formally O(n^2)

    This is the worst case complexity. The average case complexity is O(n log n). All of worst case, average case and best case are formally defined, so I'm just pointing out that what you were writing is ambiguous.

    O(F) is the order of an algorithm F (average efficiency). Order of quicksort is O(nlogn), no doubts there.

    Formally O(n^2), what a load of crap. Worst case, yes, but that has nothing to do with the order of an algorithm which deals with average efficiency.

    And to quote the wiki on Quicksort: Typically, quicksort is significantly faster in practice than other O(n log n) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the possibility of requiring quadratic time.

    For example... LMAO.

    CAPTCHA: gotcha (oh no, not again!)

  • T.T. (unregistered) in reply to T.T.
    T.T.:
    Jackal von ÖRF:
    raveman:
    he should use HashMap, Hashtable is slower because it's synchronized :P

    just messing with you

    At least in Java. The real WTF is that people still use and teach the use of java.util.Hashtable and java.util.Vector (or even worse, java.util.Stack) instead of the newer alternatives and interfaces. It would be better for Sun to deprecate those classes.

    Tell me, what's the replacement for java.util.Vector? Is it java.util.Vector<> (gosh I hope it is :))

    CAPTCHA: stinky, not just yet! PS: Actually, without thinking, I started using HashMap<> because it seems more similar to std::map<> (I prefer templates in C++, no doubts there)

    Nvm...found ArrayList, and sticking with it from now on. Thank you for this WTF :D!
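    For anyone else landing here, the replacements look roughly like this (a sketch; using ArrayDeque for Stack assumes you only need LIFO access):

```java
import java.util.*;

public class ModernCollections {
    public static void main(String[] args) {
        // Vector -> ArrayList, declared against the List interface
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");

        // Hashtable -> HashMap, declared against the Map interface
        Map<String, Integer> map = new HashMap<>();
        map.put("answer", 42);

        // Stack -> ArrayDeque, declared against the Deque interface
        Deque<String> stack = new ArrayDeque<>();
        stack.push("top");

        System.out.println(list + " " + map.get("answer") + " " + stack.peek());
    }
}
```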

    CAPTCHA: dreadlocks (patience...patience...)

  • T.T. (unregistered) in reply to T.T.
    T.T.:
    Joon:
    The Barefoot Bum:
    I'm sad, though, that the designers sacrificed rigorous destructor semantics in C# to facilitate automatic memory management. There's a lot of other stuff, not just memory management, that I like to do in destructors.

    Ummmm.... I may be misreading your comment - are you saying that there is no destructor in C-Pound?

    There is:

      public class MyClass {
          public MyClass() {
              // do some init
          }

          ~MyClass() {
              // Do some finit
          }
      }

    You could also implement IDisposable and put your finalisation in the Dispose method

    Let me add to that:

    Don't use a C# destructor (I prefer the term finalizer) unless you have allocated resources that aren't under C# control. In that case, you should write a C# destructor (finalizer) that frees the allocated resources if they weren't freed in Dispose. An example of such a resource could be a memory buffer you got from VirtualAlloc (kernel32).

    All the other things that need to be freed, should be freed in Dispose. If you fail to free those resources in Dispose, then you better revisit your code.

    CAPTCHA: bling (server: blong?)

    And I should add explicitly that if you require a finalizer (for the reasons I stated), you should also have a disposer (as I tend to call it) which disposes of the unmanaged resources. The finalizer is a safeguard for programmers who forget to dispose of your object (which they shouldn't).

    CAPTCHA: stinky (nope, still not stinking here)

  • AdT (unregistered)
    T.T.:
    O(F) is the order of an algorithm F (average efficiency). Order of quicksort is O(nlogn), no doubts there.

    Absolutely not. O(f(n)) is the class of functions that grows no quicker than f(n) asymptotically. In computer science, f(n) is usually assumed to be a function whose domain are the non-negative integers and whose value range is (a subset of) the non-negative real numbers. Two functions f(n) and g(n) are considered equal asymptotically if f(n) / g(n) converges to a positive number as n goes to infinity. For example, f(n) = 4n and g(n) = 105n + 1280 sqrt(n) + 10 are asymptotically equal as said fraction converges towards 4/105. In big O notation (sometimes, big Omicron notation is used, but the Omicron is virtually indistinguishable from O), that is: f(n) is in O(g(n)) and vice versa.

    You can think of f(n) in O(g(n)) as sort of "f(n) is less than or equal to g(n), asymptotically". Analogously, small o notation means "less than", i.e. n is in o(10n²), but n² is not, ω (small omega) is used for "greater than", Ω (big omega) for "greater than or equal" and Θ (big theta) is used to mean "equal".

    All this has nothing whatsoever to do with best case, average case or worst case time complexity of certain algorithms. The notation itself can be used to describe all of those and more (e.g. space complexity). The big-O-notation itself was introduced by the German mathematician Paul Bachmann in 1894. I don't know whether or not he meant O to mean Ordnung (the German word for order), but in any case his work has nothing to do with algorithmic time complexity.

    T.T.:
    Formally O(n^2), what a load of crap.

    Strictly speaking, all of the best case, average case and worst case time complexities of quicksort are in O(n²), as this basically means “quadratic time or faster”. The best and average case complexities are, however, also in o(n²), so they are not in Θ(n²). Only the worst case complexity is.

    The average case complexity is in Θ(n log n). The best case complexity often is, too, but in some implementations it could also be in Θ(n).

    Practically speaking, O is often used to mean the same thing as Θ, of course.
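    If you want to see the distinction empirically, here's a toy quicksort with a deliberately naive pivot choice (this harness and its numbers are my own sketch, not from any implementation discussed here):

```java
import java.util.Random;

public class QuicksortCost {
    static long comparisons = 0;

    // Lomuto partition with the last element as pivot: deliberately naive,
    // so an already-sorted input degrades to the O(n^2) worst case.
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int k = lo; k < hi; k++) {
            comparisons++;
            if (a[k] < pivot) {
                int t = a[i]; a[i] = a[k]; a[k] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        sort(a, lo, i - 1);
        sort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int n = 2000;
        int[] sorted = new int[n];
        for (int k = 0; k < n; k++) sorted[k] = k;

        // Fisher-Yates shuffle with a fixed seed for a "typical" input
        int[] shuffled = sorted.clone();
        Random r = new Random(42);
        for (int k = n - 1; k > 0; k--) {
            int m = r.nextInt(k + 1);
            int tmp = shuffled[k]; shuffled[k] = shuffled[m]; shuffled[m] = tmp;
        }

        comparisons = 0;
        sort(shuffled, 0, n - 1);
        System.out.println(comparisons + " comparisons on shuffled input (roughly n log n)");

        comparisons = 0;
        sort(sorted, 0, n - 1);
        System.out.println(comparisons + " comparisons on sorted input (exactly n*(n-1)/2)");
    }
}
```

    The same notation describes both runs; only the input distribution differs, which is exactly the average-vs-worst-case point.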

  • T.T. (unregistered) in reply to AdT
    AdT:
    T.T.:
    O(F) is the order of an algorithm F (average efficiency). Order of quicksort is O(nlogn), no doubts there.

    Absolutely not. O(f(n)) is the class of functions that grows no quicker than f(n) asymptotically. In computer science, f(n) is usually assumed to be a function whose domain are the non-negative integers and whose value range is (a subset of) the non-negative real numbers. Two functions f(n) and g(n) are considered equal asymptotically if f(n) / g(n) converges to a positive number as n goes to infinity. For example, f(n) = 4n and g(n) = 105n + 1280 sqrt(n) + 10 are asymptotically equal as said fraction converges towards 4/105. In big O notation (sometimes, big Omicron notation is used, but the Omicron is virtually indistinguishable from O), that is: f(n) is in O(g(n)) and vice versa.

    You can think of f(n) in O(g(n)) as sort of "f(n) is less than or equal to g(n), asymptotically". Analogously, small o notation means "less than", i.e. n is in o(10n²), but n² is not, ω (small omega) is used for "greater than", Ω (big omega) for "greater than or equal" and Θ (big theta) is used to mean "equal".

    All this has nothing whatsoever to do with best case, average case or worst case time complexity of certain algorithms. The notation itself can be used to describe all of those and more (e.g. space complexity). The big-O-notation itself was introduced by the German mathematician Paul Bachmann in 1894. I don't know whether or not he meant O to mean Ordnung (the German word for order), but in any case his work has nothing to do with algorithmic time complexity.

    T.T.:
    Formally O(n^2), what a load of crap.

    Strictly speaking, all of the best case, average case and worst case time complexities of quicksort are in O(n²), as this basically means “quadratic time or faster”. The best and average case complexities are, however, also in o(n²), so they are not in Θ(n²). Only the worst case complexity is.

    The average case complexity is in Θ(n log n). The best case complexity often is, too, but in some implementations it could also be in Θ(n).

    Practically speaking, O is often used to mean the same thing as Θ, of course.

    Lol, I was trying to put it simply.. Sure, I have a Master's in CS and could have made a lengthy explanation and a distinction between O and Θ (c/p'd Θ, wouldn't know how to get that character). But I don't think most can follow unless they have some form of mathematical education.

    Anyway, my point is that the order of quicksort is O(n log n), meaning that the average computation time is of the order n log n, where n is the number of objects the algorithm processes. And usually I don't care that the worst case is n^2 and the best case is n, at least not for quicksort.

    Anyway, thanks for the recap :)

    CAPTCHA: slashbot (a bot that produces slashes...yay a new invention)

  • Dominick (unregistered) in reply to Ben Blok
    Ben Blok:
    Devek:
    I don't know C#.. but one of the things that has troubled me in the past about "high" level languages(like Java) is that it requires an immense amount of knowledge to be able to do anything correctly while attracting novice programmers left and right.

    Just normal C is easier to program in because you know what your C code is doing for the most part. Novice programmers should start there.

    To write good Java code you have to not only know what your code is doing, but how everything your code is doing is implemented.

    Perfect example of novices using a "high" level language when they don't know enough about coding is Gentoo's portage system. Python is the worst offender when it comes to luring in novices to do something they have no business doing.

    I do not really agree on this mainly because C has the huuge overhead of learning memory management. C makes shooting yourself in the foot really easy.

    C makes shooting yourself in the foot easy because if you don't know what you are doing, your code won't work. High-level languages "baby" novice programmers by managing their memory for them, leading to classes like the one featured in this article. Yes, "it works" at some level, but it will do much greater damage in the future than a compiler error would.

  • Chris (unregistered)
    Mike:
    Anonymouse:
    Chris:
    Which standard sorting methods are just as easy to implement as bubble sort? (You used plural)

    Random sort and.. what's it called.. repeatedly searching for and extracting the element with the lowest key, and building a sorted array out of that, I think it has a name. Insertion sort is also fairly simple.

    Well, actually bubblesort is simpler still. But "fast enough"? No way. For example quicksort is really simple to implement, like a slightly warped, recursive bubblesort, and my experience is it starts to outperform bubblesort at 10 or so elements. The extra overhead means nothing next to the difference in time complexity.

    1000 elements is certainly well beyond the useful scope of an O(n^2) algorithm when quicksort is so simple. Even, I would say, for prototyping purposes.

    The real question should be why would anyone (well, anyone using Java or C# at least, apart from as College/University exercises) actually implement any sort algorithm? The sorting algorithm used by the Java Collections class is a modified mergesort with n log(n) performance, it's stable, it's damn fast and it's been extensively tested (both by Sun and indirectly by every developer that ever used it). I can think of no good reason to implement any other sort method in Java, and I would be amazed to learn that C# (and other modern languages) don't have similar standard methods for sorting. As for C and C++, well, most of these algorithms have been around for about 50 years; if you can't find a "standard" version, I'd recommend Robert Sedgewick's Algorithms in C / Algorithms in C++.

    I spend a lot of time bouncing between languages. If I'm thinking an idea through to get comfortable with the problem space, I don't want to get bogged down.

    When I pass in an object, am I triggering a copy constructor? What's it doing to refcounts? If I pass a reference, will it sort based on the address of the reference, or will it use the object's comparison operators? Am I going to trigger any problems with garbage collection? What else is that library loading? Will it conflict with anything I have in my quickly constructed prototype?

    Before you get too picky, remember: I didn't say which language I was using, I didn't say which libraries were available, I didn't say what my memory requirements were, I didn't say anything about what I was modelling. I didn't say how many things I was going to be sorting. (Maybe I know my tests are only going to have 6 objects.)

  • knock it off... (unregistered) in reply to Anon
    Anon:
    Jackal von ÖRF:
    raveman:
    he should use HashMap, Hashtable is slower because its synchronized :P

    just messing with you

    At least in Java. The real WTF is that people still use and teach the use of java.util.Hashtable and java.util.Vector (or even worse, java.util.Stack) instead of the newer alternatives and interfaces. It would be better for Sun to deprecate those classes.
    Why? Hashtable and Vector are the classes you want when you need synchronized access from multiple threads.

    Even better, in recent JVMs, the performance loss due to using synchronized methods is far below what it used to be. You no longer gain anything from using an ArrayList over a Vector. Try it.

    No, not really.

    Apart from the fact that method-level synchronization is mostly worthless for collections (in most cases you have to synchronize access to the whole collection, not only single operations, which calls for external synchronization anyway), there are helper methods to create synchronized wrappers around unsynchronized collections (see class java.util.Collections).

    The main problem with using the old implementations is the old APIs. Programmers that don't completely get OO and repeatedly read

    Vector v = new Vector();

    in other people's programs, books and examples, tend to absorb this bad habit. There's indeed nothing wrong with

    List list = new Vector();

    if you really need a synchronized list, though

    List list = Collections.synchronizedList(new ArrayList());

    is preferred. But

    ArrayList list = new ArrayList();

    is in fact equally dumb. I'd go so far as to say that using a concrete (class) type for a reference variable should be treated as an error. That's what interfaces were made for. There is nothing important that ArrayList can do which can't be done through the interface List. Vector even implements List, and Hashtable implements Map!

    So please all Java developers: do yourself and your fellow developers a favour and always (well, 99.9% of cases) use the collection API interfaces for all reference variables.
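    To make that concrete, a small sketch (nothing beyond java.util): individual calls on a synchronized wrapper are safe, but compound operations like iteration still need an external lock on the wrapper itself, which is exactly why method-level synchronization on Vector buys you so little.

```java
import java.util.*;

public class SyncListDemo {
    public static void main(String[] args) {
        // Program against the interface; add synchronization only if needed.
        List<String> list = Collections.synchronizedList(new ArrayList<String>());
        list.add("a");
        list.add("b");

        // Single calls like add() are synchronized for you, but a traversal
        // is several calls, so it must hold the wrapper's lock itself:
        StringBuilder sb = new StringBuilder();
        synchronized (list) {
            for (String s : list) {
                sb.append(s);
            }
        }
        System.out.println(sb);
    }
}
```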

  • knock it off... (unregistered) in reply to Mike
    Mike:
    They're not really legacy classes - although you can consider them to be legacy, there's nothing in the docs to indicate that they are. Using them is just a (slightly) simpler option than using the synchronised wrapper methods.

    In the version of Java I'm currently using (JDK1.5.0), the implementations of Vector and ArrayList are pretty much identical, apart from the fact that methods in Vector are synchronised, obviously (check the src.zip wherever your JDK is installed). The synchronised methods in Collections just wrap the list of your choice with a new, thin Collection that synchronises access to your original list; if anything, the extra method call that this introduces would cause it to be slightly less efficient (although that performance hit would be so incredibly insignificant as to be irrelevant).

    It's pretty much a matter of taste.

    The more you use the old collection classes, the more likely you are going to see dumb things like

    Vector v = new Vector();

    The problem is not the implementation, it's the API.

    List list = new Vector();

    is basically ok. I'd still prefer the more consistent

    List list = Collections.synchronizedList(new ArrayList());

    An extra method call is never an argument. In another post someone was arguing that in newer JDKs synchronized methods are nearly as efficient as unsynchronized ones.

    This holds even more true for wrappers (and therefore additional method calls): every decent JIT compiler is able to optimize that, and even if it is not, a single additional method call will never make your system slow.

    Most, if not all, performance problems stem from not understanding the language or the architecture, and not from one additional indirection when calling a method.

  • Lorcan (unregistered) in reply to Jezza
    Jezza:
    Air-con DOES slow cars down

    Only if you drive an underpowered POS - it does cost extra fuel to use, though.

  • recursion (unregistered) in reply to Eternal Density

    Phhhhhft one dimensional space....I don't even exist. Reality is really an illusion

  • Alicia C Simpson (unregistered)

    There are coders and programmers. Colleges and Universities turn out coders. Companies hire these coders and spend years while the coder learns how to program.

    There are a lot of highly qualified programmers who, if given the chance, could spend a couple of weeks learning how to code in some new language.

    This guy is a coder, someday he may actually learn how to program, although I wouldn't bet on it.

  • foo (unregistered) in reply to TheophileEscargot
    TheophileEscargot:
    I think this is a good counterexample to the people who say that CS degrees are pointless since most of the content is never used. Self-taught programmers can often have big gaps in their knowledge like this, since they've never had to learn boring stuff like data structures. And since it's an unknown unknown, they're not aware that they're missing anything.

    I am still saying CS degrees are pointless; most people with such degrees may not have an unknown unknown, but have instead learned to bullshit over their gaps. Self-taught programmers who have survived on their skills for some years at least get the job done, and most of them gladly accept (additional) knowledge if it comes their way.

  • foo (unregistered) in reply to Jackal von ÖRF
    Jackal von ÖRF:
    The real WTF is that people still use and teach the use of java.util.Hashtable and java.util.Vector (or even worse, java.util.Stack) instead of the newer alternatives and interfaces. It would be better for Sun to deprecate those classes.
    Not before they manage to get their own usages and references to those classes adapted to the new ones as well. As long as some standard Java interfaces require or generate the old ones, they will be around.

    ("pointer" - good cue for a rant against Java)

  • foo (unregistered) in reply to knock it off...
    knock it off...:
    An extra method call is *never* an argument.
    Never?
    Example: Recursion
    Example: Tight loops
    Example: Deadlocking and synchronisation issues
  • Some_Developer (unregistered)

    I think the point is to enlighten him without making him feel like an undergraduate who has to go back to school before working on his project. So first, one has to try to understand what his point is, and if you are willing to try, and not just cling to outsmarting him, he has a point. Base implementations provided by the standard libraries are not always efficient enough for the job you require. Maybe they work with a callback interface to make them type-independent, and other abstractions, when what you need is just a small, simple and very fast algorithm. In that case it can indeed be right to implement the data type from scratch.

    Taking his point of view into account, there are two possibilities. If he is right (which we can only say if we know a lot about the context of the problem), then we should help him make his idea functional. If he is wrong, we should try to explain to him why he should use the given standard hashtable instead of doing his own thing. We should never just dismiss his work and blame him for his point of view, because it is not wrong or right by default, and we should always treat our fellow developers with a minimal amount of respect.

  • Frank (unregistered) in reply to Definitely
    Definitely:
    Well, that may be the spirit of OOP, but when it comes down to it, you do care how Array.sort() is implemented. It could be a bubble sort, a quick sort, or whatever. Once you feed it a large amount of data, it matters. smile! (captcha)
    Well, what we can do is simply read the Javadoc, which says for Arrays.sort(byte[]): "Sorts the specified array of bytes into ascending numerical order. The sorting algorithm is a tuned quicksort, adapted from Jon L. Bentley and M. Douglas McIlroy's "Engineering a Sort Function", Software-Practice and Experience, Vol. 23(11) P. 1249-1265 (November 1993). This algorithm offers n*log(n) performance on many data sets that cause other quicksorts to degrade to quadratic performance." What you have to know is what a "quicksort" is and what "n*log(n) performance" means. ...but we programmers are as stupid as the end-users of an application: we don't read the user guide.
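    And the quoted contract really is all a caller needs; a trivial sketch, nothing beyond what the Javadoc above promises:

```java
import java.util.Arrays;

public class SortDemo {
    public static void main(String[] args) {
        byte[] data = { 42, -7, 0, 99, 13 };
        // The tuned quicksort described in the Javadoc quoted above;
        // for the caller, only the n*log(n) expected behaviour matters.
        Arrays.sort(data);
        System.out.println(Arrays.toString(data)); // [-7, 0, 13, 42, 99]
    }
}
```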
  • Talliesin (unregistered) in reply to Unomi
    Unomi:
    For those amount of values in an array, you can use Duff's device.
    No you can't: C# doesn't have fall-through in switch blocks. And even where it works, Duff's device only (at best) speeds up the loop, not the algorithm, which is still inherently much worse than a hashtable.
  • Jani (unregistered) in reply to snoofle

    Err..

    You are surprised to get even worse implementations than your students'? I'm glad I wasn't one of your students then...

  • smithy953 (unregistered)

    'Tis good... for a base, LOL. But too many functions don't make a slow program; all you'd need to avoid lots of function calls is a root routine that skips the unnecessary functions and goes straight to the required one.

    Let's take the D drive, the power button, the A drive, the speakers, the USB ports, and almost everything else out of the Genesis computer I'm building. Sure, it'll be faster (except it won't turn on), but where does the experience and quality go? Sometimes speed must be compromised for the sake of experience and quality. Or you can overclock; then you win on all fronts.

  • Numeromancer (unregistered) in reply to Anonymouse

    That is NOT the problem with that code.

    Memory management, like the Ptolemaic system, is one of those things that is good to know, but even better to be able to forget. It is an accidental, not an essential, part of computing.

  • tbrown (unregistered) in reply to TheophileEscargot
    TheophileEscargot:
    I've seen similar things before, like a programmer with years of experience not knowing that there's anything faster than Bubble Sort.

    I think this is a good counterexample to the people who say that CS degrees are pointless since most of the content is never used. Self-taught programmers can often have big gaps in their knowledge like this, since they've never had to learn boring stuff like data structures. And since it's an unknown unknown, they're not aware that they're missing anything.

    Yes and no, I'm not sure what they're teaching for an undergraduate CS degree nowadays, but a thorough grounding in Data Structures and Algorithms doesn't seem to be part of it!

    One of my standard sets of interview questions begins by asking the candidate to tell me what a Hashtable is and how to use it. Most can do this. Then I ask, how is it different from something like an ArrayList. Maybe half can dredge up something like it being much faster (some can even give the big-O relationships). Then I ask how you would implement a Hashtable if it weren't provided for you. Rarely does anyone come up with something different than the culprit in this WTF!
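    For reference, one acceptable shape of an answer to that last question is separate chaining over an array of buckets. This is only a toy sketch (fixed capacity, no resizing, no null keys, and the class name ToyHashtable is made up here), but it's the idea I'm fishing for:

```java
import java.util.LinkedList;

// Toy separate-chaining hash table: an array of buckets, each a
// linked list of key/value pairs. No resizing, no null keys.
public class ToyHashtable<K, V> {
    private static final int BUCKETS = 16;
    private final LinkedList<Entry<K, V>>[] table;

    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    public ToyHashtable() {
        table = (LinkedList<Entry<K, V>>[]) new LinkedList[BUCKETS];
        for (int i = 0; i < BUCKETS; i++) {
            table[i] = new LinkedList<Entry<K, V>>();
        }
    }

    // Map the key's hashCode onto a bucket index (mask off the sign bit).
    private int bucketOf(K key) {
        return (key.hashCode() & 0x7fffffff) % BUCKETS;
    }

    public void put(K key, V value) {
        LinkedList<Entry<K, V>> bucket = table[bucketOf(key)];
        for (Entry<K, V> e : bucket) {
            if (e.key.equals(key)) { e.value = value; return; } // overwrite
        }
        bucket.add(new Entry<K, V>(key, value));
    }

    public V get(K key) {
        for (Entry<K, V> e : table[bucketOf(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null; // absent
    }
}
```

    The real java.util.HashMap adds resizing, a load factor, and extra bit-mixing of hashCode(), but the bucket-and-chain core is the same.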

  • (cs)

    LOL what a noob. Obviously he should have wrapped it:

    class HashTable {
        HashMap map = new HashMap();
        public void add(Object key, Object value) { map.put(key, value); }
        public void remove(Object key) { map.remove(key); }
        public Object get(Object key) { return map.get(key); }
    }
    
    See? Much less code and still more efficient than just a HashMap because it only has three functions!!!
  • (cs)

    On a serious note, why are you splitting people into self-taught and educated? Personally I am self-taught AND educated; in fact, I don't get why you would want to study the theory behind programming if you don't even know how to program in the first place.

    At least this applies to Computer Science, because the only way to learn how to program there is on your own, since there are typically only one or two introductory courses in it. It is a major benefit to be able to program before you enroll, for many courses it is an implicit requirement.

  • REx (unregistered)

    I wish I knew enough about programming to understand these. I can't help but think of the Simpsons episode where Prof. Frink says "Pi is exactly three!" to get the attention of a crowd of brainiacs.

  • Jim Wilson (unregistered)

    No way dude, there's no such thing as too many functions!

    Jiff www.privacy-tools.at.tc

  • (cs) in reply to Unomi
    Unomi:
    For those amount of values in an array, you can use Duff's device.
    • Unomi -

    I don't think you can implement Duff's device in C# because you need fall-through in the switch statement. And it's only good for optimizing loops; it still won't be faster than a hash table.

  • CrashCodes (unregistered) in reply to QuinnFazigu

    Self-taught is such a misnomer. People say that about me because I didn't go to college, but I say that this unfairly discounts the role mentors, books, help files, newsgroups, etc. played in my education.

    One of the previous posters had a point: sometimes I'd find seemingly bizarre gaps in my knowledge (bizarre in hindsight, of course).

  • joe (unregistered)

    I would have snapped and strangled that guy. Seriously.

    And then it would be difficult to explain why I did that to my new jailmates... :)

  • funky (unregistered) in reply to jpaull
    jpaull:
    Ch0:
    Fiona:
    How... did... my god.. I don't understand what these people do to be landed in these jobs.

    That's the thing.

    How can an idiot get a job when I can't? Who do they apply to?

    But then, would I really want to work for the sort of person who'd happily employ an idiot alongside me?

    Wow! This comment struck a nerve with me. I apologize if I go into a lengthy speech about this.

    I have run into quite a few people that I graduated from College with who were MUCH better programmers than I was but none of them could find work, while I have had few problems finding development jobs from the day I got my diploma.

    {omitted text}

    The ones that turn into good developers are the ones that are open-minded and willing to learn.

    Agreed 100%; I had the same thought when I read the original comment.

    My academic background is physics/mathematics, and only a handful of coding courses were involved, but I've had an easier time getting programming & database admin jobs than several of my CS friends, for the exact reasons you specify: communication skills, ability to market oneself, and willingness to admit I don't know something (but would love to learn).

    Many moons after entering the workforce I assumed some hiring responsibilities, and it's now easy to see why things worked out as they did. I'd much rather hire a student with no official coding experience but with good communication skills, a positive personality, and a willingness to learn, than someone with a B.S. in CS on paper but less ability to succeed in a team environment. I'm not saying CS majors lack these skills (many are quite good), but I see more CS applicants destroy their chances in the interview than applicants from other sciences or even finance backgrounds.

  • funky (unregistered) in reply to Anonymouse
    Anonymouse:
    Wow. Even as just a makeshift quick-and-dirty indexed table class this is real bad. I find myself among the ranks of those who fail to fathom how people like this find jobs and keep them for more than a week or two.

    Easily: the same way the dude made millions off the pet rock. It's not what you're selling, it's how you sell it that matters most. Only in academia is the reverse true.

  • Cforever (unregistered)

    lol :D :D :D :D :D :D

  • Chandon (unregistered) in reply to dkf
    dkf:
    A really experienced programmer might well reimplement a hash table class to increase the set of Good Properties enjoyed by the implementation. Examples include defining a stable iteration order ... or guaranteeing that there are no memory reallocations in use...

    You mean sometimes people use the standard library's binary tree map instead of the hash table map?
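    Exactly, and in Java that's java.util.TreeMap (a red-black tree): O(log n) per operation instead of O(1) average, in exchange for a guaranteed sorted iteration order. A small sketch:

```java
import java.util.Map;
import java.util.TreeMap;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> tree = new TreeMap<String, Integer>();
        tree.put("banana", 2);
        tree.put("apple", 1);
        tree.put("cherry", 3);
        // Iteration follows the keys' natural (sorted) order,
        // regardless of insertion order.
        System.out.println(tree.keySet()); // [apple, banana, cherry]
    }
}
```

    (If you only need a stable order rather than a sorted one, java.util.LinkedHashMap gives insertion order at hash-table speed.)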

  • LD (unregistered)

    There really are some people that should just not be allowed near code. Seriously, how do these people get through comp-sci programs with ridiculous ideas like that? Did this one even go to a comp-sci program?

    On second thought, he must have gone through a comp-sci program. I've only ever known comp-sci students and PaulaBean wannabes who come up with statements like that.

Leave a comment on “It Had Too Many Functions”
