What makes code bad? Is it an inherent property of the code itself? Or is it the context in which the code is placed?
For example, let's look at this function, found by Lars.
// naive bubble sort
function _sortList(list, column, direction)
{
    var mods = 0;
    var cmp = (direction == 'ASC' ? function(a,b){return a < b;} : function(a,b){return a > b;});
    for (var i = 1, len = list.length; i < len; i++)
    {
        if (cmp(list[i].column, list[i - 1].column))
        {
            var temp = list[i-1];
            list[i-1] = list[i];
            list[i] = temp;
            mods++;
        }
    }
    if (mods)
    {
        _sortList(list, column, direction);
    }
    return list;
}
Divorced from context, this is a pretty straightforward implementation of bubble sort. I don't love it, but if a CS101 student turned this in, it's a solid C- effort. I don't even hate the ternary in this one, though I'd probably format it differently for readability.
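Something along these lines, maybe, with the two branches split out so you can actually read them; purely a matter of taste:
var cmp = (direction == 'ASC'
    ? function (a, b) { return a < b; }
    : function (a, b) { return a > b; });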
The bigger issue is that they take column as a parameter, clearly intending to treat the objects like dictionaries and sort on list[i][column], but they forgot about that partway through and wrote list[i].column, which just looks up a literal property named "column". Using mods as a counter rather than a simple boolean isn't the clearest thing, and I'm not sure I love recursion for this, but it's not terrible code from a programmer learning their very first sorting algorithm.
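To be clear about that column bug: the comparison they presumably meant to write, sorting on the named column, would look something like this (my guess at the intent, not their code):
// assuming the intent was to compare the named column on each object
if (cmp(list[i][column], list[i - 1][column]))
{
    var temp = list[i - 1];
    list[i - 1] = list[i];
    list[i] = temp;
    mods++;
}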
Except, that's not the context of this code. This code comes from a senior web developer, who somehow never learned that JavaScript has a built-in sort function, that you can pass functions to it, that it's far more efficient than a bubble sort, and also why are you implementing a bubble sort.
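For the record, a minimal sketch of the built-in approach, assuming the same column and direction parameters, looks something like this:
// a rough sketch using Array.prototype.sort with a comparator
function sortList(list, column, direction)
{
    var factor = (direction == 'ASC' ? 1 : -1);
    return list.sort(function (a, b) {
        if (a[column] < b[column]) { return -1 * factor; }
        if (a[column] > b[column]) { return 1 * factor; }
        return 0;
    });
}
Like the original, this sorts the array in place and returns it, except it does so in one pass through the engine's sort instead of repeated recursive bubble passes.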
Bubble sorts are a great way to teach baby programmers about how sorting algorithms work. If you find yourself implementing a sorting algorithm from scratch, you've probably messed up somewhere. If the choice of algorithm you bring to bear is bubble sort, you've definitely messed up.