Admin
ffrriisstt!!
More seriously, isn't that O(n^2 log n) as he's also re-sorting the arrays on every iteration?
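A minimal Python sketch (not the original C#; names are made up) of the pattern being mocked - reallocate and re-sort on every append - next to the obvious append-then-sort-once version:

```python
import random

def nephew_build(items):
    """Grow the array by one and re-sort on every iteration:
    O(n^2) copying plus n full sorts -- O(n^2 log n) overall."""
    arr = []
    for x in items:
        arr = arr + [x]   # reallocate-and-copy, like Array.Resize
        arr.sort()        # full sort after every single append
    return arr

def sane_build(items):
    """Append everything, sort once: O(n log n) total."""
    arr = list(items)
    arr.sort()
    return arr

data = [random.randint(0, 1000) for _ in range(200)]
assert nephew_build(data) == sane_build(data) == sorted(data)
```

(Ironically, Python's Timsort is adaptive, so each re-sort of an almost-sorted list runs closer to O(n) - the asymptotic damage is worse in a language whose sort isn't.)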
Admin
I love the Ku Klux Klan variable name there. It makes me want to use SS, CCCP, and _666 as variable names too.
Admin
That code is really racist.
Admin
Hey, at least it uses the minimal amount of memory for the array :)
Admin
This has also been refactored to remove the inefficient jjj loop, so it could have been worse.
Admin
Looks like a piece of some Ku Klux Klan submarine firmware. You need to understand that the Klan doesn't have a concept of efficiency - if you have slaves to do your bidding there's no point in optimizing to do less work. On the contrary - you need to make sure your slaves are slaving enough - and I think they don't distinguish between people and computers.
I hope snoofle secretly switched their torpedo launch calls to autodestruct!
Btw, it seems that Akismet may be a klan member.. so be careful.
Admin
I didn't study it closely enough to figure out exactly what the code is doing, but I don't think I've ever needed to resize an array since I started programming C# 13 years ago, so I'd be surprised if this couldn't be done better with List<T> and/or IDictionary<TKey, TValue>.
Admin
Oh my, there is so much wrong with this gem, you could have filled a whole week with it.
It proves once again: real obfuscators need no stinking GOTO to make code harmful.
Admin
And no memory wasted on variable names either.
Admin
There are sorting implementations that run significantly faster than O(n log n) if the array is already sorted. For example, the Mac OS X implementation checks for a sequence of items in either ascending or descending order at both the beginning and the end of the array, and if there is not much else then the sort runs in O(n). So adding one item at the end and sorting, or changing a random item and sorting, is O(n).
Still horrible.
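The check described above can be sketched in a few lines of Python (a hypothetical stand-in, not Apple's actual code):

```python
def leading_run_length(a):
    """Length of the maximal sorted (ascending or descending)
    prefix -- a cheap O(k) check like the one described for the
    Mac OS X sort. It bails out as soon as the run breaks."""
    n = len(a)
    if n < 2:
        return n
    i = 1
    if a[1] >= a[0]:            # ascending run
        while i < n and a[i] >= a[i - 1]:
            i += 1
    else:                       # strictly descending run
        while i < n and a[i] < a[i - 1]:
            i += 1
    return i

assert leading_run_length([1, 2, 3, 2, 1]) == 3
assert leading_run_length([5, 4, 3, 10]) == 3
assert leading_run_length([1, 2, 3, 4]) == 4
```

On random data the loop almost always stops within a handful of elements, which is why the check is nearly free.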
Admin
On the plus side, at least the kid read the documentation and didn't reinvent the wheel with custom Sort, Resize, etc.
Admin
If there are variables ii, xx, yy, and kkk... does that mean there are jj through ww, zz, and aaa through jjj as well? Scary...
Oh wait, there is a dm variable as well - maybe there are ia through iz, etc. Somehow, though, I think that one is supposed to be more descriptive than the others, rather than an iteration counter.
Admin
I'm guessing that the author simply found it easier to read three identical letters in a row than a single character on its own.
So maybe it's some sort of coping mechanism for some kind of dyslexia.
That doesn't excuse the fact that the code is also terrible in every other way.
Admin
I'd say he read somewhere that single-letter variable names are Bad Practice.
Admin
So when indexing through a jagged array, you should use "column" and "row" as your variable names - unless you already used them for something else, in which case you use "column2"...
For indexing loops, use a single letter and stick to a convention, so the variable names are always available in that scope and you can tell how many levels deep you are.
And if the code inside the loop is too big to track how many levels deep you are, then it's doing too much.
Admin
The algorithm is seriously nasty. Since it goes around n times, sorting each time, the cost is as follows:
- order N^2/2 for the resizing;
- worst case order N(N log N) for the sorting, but assuming a non-clever sort algorithm order N^2 - or, with a clever sort, actually N log N for working out where to insert but N^2/4 for the insertion step (because we copy half the array again).
Even worse, the memory cost is order N^2/2, since you are unlikely to have had a garbage collection during the entire process. A very popular memory mistake.
Admin
Probably a similar situation to this:
http://thedailywtf.com/Articles/Coded-to-the-Letter.aspx
Admin
Judging from the code alone, there is no way to tell whether the if statement could indicate a rare event.
The damn code is unreadable, to say the least; such code is typically translated directly from a research paper, which could explain why.
Admin
Quick way to fix the code:
Admin
Clicked wrong preview, and Fuck You akismet!
Admin
Oh, you don't read any? Just some unfounded academia bashing? Thought so.
Admin
That's truly amazing. When people really grasp the beauty and conciseness of LINQ, nirvana results. Good trick with the 'let' statements. How does the LINQ perform compared to just writing it out in C#? And did you consider parallelizing it?
Admin
That means spending O(n) up front to determine whether the desired Θ(n) best-case complexity of insertion sort is viable. If it is, you apply it and have spent O(2n) in total. If it is not, you can apply a proven worst-case-optimal O(n log n) sorting algorithm and you'll have spent O(2n log n) in total.
It seems to me that just going for the simple O(n log n) approach directly will be faster so much more consistently that the additional overhead is simply not worth it for a general-purpose sort. Am I missing something here, or is this indeed an example of Apple producing code that is "still horrible"?
Admin
Sure you are missing something. What you are missing is that for random data, the check for a long sequence of ascending or descending elements finishes very quickly. The chance that even just five random numbers are in ascending or descending order is only one in 60. So for randomly arranged arrays, the cost is practically zero.
Now assume that the first k elements are in ascending or descending order. In that case you can sort the remaining n - k elements, then do a merge. The merge is n operations. Sorting the remaining elements takes (n - k) log (n - k) instead of n log n, so you save about k log n + n log (n / (n - k)) operations. If k is relatively small compared to n, that's about k log n + k operations. You win if n < k (log n + 1), that is, if k > n / (log n + 1). And in the cases where you don't win, you only checked fewer than n / (log n + 1) elements to see if they were sorted.
So worst case, you do about n / (log n + 1) unnecessary operations. That's less than the usual n log n by a factor of (log n)^2, so the worst case is only a tiny bit worse. The best case is vastly better. In practice, even sorting fully sorted arrays is not at all uncommon, while sorting arrays that were originally sorted except for a few changed or added elements is very common. And the worst case would be highly unusual - for example, an array of 100,000 elements where the first 9,000 elements are sorted, but not the first 10,000.
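The prefix-exploiting strategy described above can be sketched in Python (function name is made up): sort only the tail, then merge it with the already-sorted prefix.

```python
import heapq
import random

def sort_with_sorted_prefix(a, k):
    """Exploit a sorted prefix of length k: sort only the n - k
    tail elements, then merge the two sorted runs -- roughly
    (n - k) log (n - k) + n work instead of n log n."""
    tail = sorted(a[k:])
    # heapq.merge lazily merges already-sorted iterables in O(n)
    return list(heapq.merge(a[:k], tail))

prefix = sorted(random.sample(range(1000), 50))
rest = [random.randrange(1000) for _ in range(50)]
a = prefix + rest
assert sort_with_sorted_prefix(a, 50) == sorted(a)
```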
Admin
I once wrote an O(n**3) sorting subroutine. There is a reason why I used Fortran exponentiation syntax in this expression ^_^
It was not the first time I wrote a subroutine, but it was the first time I did sorting, and I had not heard of Knuth at the time. I was someone's nephew. I was 16 years old, in my first year of university.
THRWTF (the hypothetically real WTF) would have been if I had an aunt or uncle running a business, and if said relative had taken a copy of my code, and if said relative had used it unchanged. This did not happen.
By the way though, my code worked ^_^
Admin
There's always work to be done. Once your slaves insist that they've finished writing code, move them onto doing the dishes. One scrubs while the other dries, in the following order:
By thinking of two easy jobs and two hard jobs, and alternating the two slaves so that at all times they're either doing an easy job or a hard job in the most efficient way, you're teaching them about algorithms (and hopefully a bit of multithreading). While they're doing these tasks, you'll have time to identify areas where your slaves need to refactor code. Reward the slaves who improve over time by giving them less and less hard work (e.g. constantly writing/maintaining code because they're good at it, hanging clothes to dry, and planting/watering the veggie garden), and the slaves who don't improve over time by giving them more and more hard, physical work (e.g. washing and drying clothes with an old-fashioned hand-washing mechanism, mowing the lawn, digging holes for the veggie garden, and spreading large mounds of mulch or gravel).
Admin
OK, I'll explain. Comparison-based sorting is limited to O(n log n) at best; the logarithmic part actually comes from the ability to recursively decompose the problem into smaller, simpler pieces. (There are linear sorts, but they're very specialised.) The principal O(log n) algorithm is binary search on a sorted array.
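For illustration, here is that canonical O(log n) binary search, via Python's bisect module:

```python
from bisect import bisect_left

def binary_search(a, x):
    """O(log n) lookup in a sorted array: bisect_left halves the
    search interval each step, then we confirm the element."""
    i = bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

a = [2, 3, 5, 7, 11, 13]
assert binary_search(a, 7) == 3    # found at index 3
assert binary_search(a, 4) == -1   # absent
```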
Admin
a) So you pay the price of that test on every sort operation?
b) If you're building a sorted list by inserting/deleting items randomly, the cost per item should be O(log n) whether the list is already sorted or not (e.g. std::set).
Admin
The price is very small, and the possible benefits are huge. More importantly, the average benefit in real-life scenarios is huge. For example, in the "nephew" case it improves things from O(n^2 log n) to O(n^2). It simplifies code: instead of writing code that inserts a few items into a sorted array, just append the items and sort. It also helps with the notorious case of arrays that are already sorted, but in reverse order. It really accelerates the case of concatenating two sorted arrays and sorting the concatenation. And it really simplifies the case where some items may have been changed and you want to make sure the array is still sorted - calling the sort function is as fast as checking that it is sorted.
Inserting into an array has linear cost for moving items around; inserting into a sorted linked list has linear cost for finding where to insert.
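In Python terms, bisect.insort illustrates the array half of that trade-off: it finds the slot in O(log n), but still has to shift elements, so each insertion is O(n) overall.

```python
from bisect import insort

# insort: O(log n) to locate the slot, O(n) to shift the tail.
a = []
for x in [5, 1, 4, 2, 3]:
    insort(a, x)
assert a == [1, 2, 3, 4, 5]
```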
Admin
In the usual "stretchy array" container classes, growth is by a constant factor, maybe 1.5. I have seen a one-off implementation of a growing byte array with a constant increment of 100 bytes. Not very efficient when it has to grow from 100 B to 500 kB. This was in an application server sold by a very large corporation.
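A quick Python back-of-the-envelope (hypothetical helper, not the actual container code) comparing the total copy work of geometric growth versus a fixed +100 increment:

```python
def copies_with_growth(target, factor=None, increment=None):
    """Total elements copied while growing a backing array to
    `target` capacity: geometric (x factor) vs fixed (+increment)."""
    cap, copied = 1, 0
    while cap < target:
        copied += cap  # every grow step copies the old contents
        cap = int(cap * factor) + 1 if factor else cap + increment
    return copied

# Geometric growth copies O(n) elements in total; a fixed +100
# increment copies O(n^2 / 100) -- ruinous on the way to 500 kB.
assert copies_with_growth(500_000, factor=1.5) < 2_000_000
assert copies_with_growth(500_000, increment=100) > 1_000_000_000
```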
Admin
The "nephew" way goes all the way up to the government.
It seems the Obamacare website was built by a company whose executive was friends with Michelle Obama: http://dailycaller.com/2013/10/25/michelle-obamas-princeton-classmate-is-executive-at-company-that-built-obamacare-website/
Admin
I can actually think of a good reason to do that... try using your text editor's "Find" command to find all instances of a single-letter variable (e.g. "i"), and see how many false positives you end up wasting your time skipping over. With the variable named "iii" instead, you can scan much faster.
Admin
That's why I search for \bi\b or use the whole word option.
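For example, with Python's re module the \b word-boundary anchors make the same search precise, where a plain substring search drowns in false positives:

```python
import re

code = "for (int i = 0; i < limit; i++) init(bit, i);"
# A plain substring search for "i" also matches inside "int",
# "limit", "init", "bit"...; \b matches only the lone variable.
naive = code.count("i")
exact = re.findall(r"\bi\b", code)
assert naive == 10
assert len(exact) == 4
```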
Admin
Nah! This really should be a /* ... --- ... */
Admin
VB to the rescue!!!! Search for Dim space letter space.