• (nodebb)

    "I recently was having a debate with a C-guru, who lamented all the abstractions and promised that well written C-code would forever be faster, more compact, and easier to read than the equivalent code in a higher level language.

    That may or may not be true (it's not), but I understand his point of view."

    Except that it is true - for the proper definition of "well written"..... For the past decade(s?), the number of humans who could truly achieve that has dwindled and most likely has even reached zero; but this does not change the validity of the statement.

  • Javascript Apologist (unregistered)

    In fact, the first search result for "MDN deep copy" recommends this very approach, among others. It's probably not the most efficient way to do this, but it's the most straightforward way without needing polyfills (for structuredClone) or third-party libraries.

    Of course, she probably just wanted a shallow clone, so ret.push([...cur]) would be an easy solution.

  • (nodebb)

    Lily comes from a background where she's used to writing lower level code, mostly C or C++ style languages. So when she tried to adapt to typescript, the idea that everything was a reference was inconvenient.

    It shouldn't have been a problem for her coming from a C background. C is awash with references. For example, a similar piece of code to the first example:

    char foo[] = "banana"; /* an array, not a pointer to a (read-only) string literal */
    char* bar[1];
    bar[0] = foo;
    foo[0] = '\0';
    // The first element of bar is now an empty string
    

    Anyway, is the real WTF Javascript?

  • Prime Mover (unregistered)

    The reason well-written C is hard to come by is because C is a ridiculously difficult language to write well-written code in. It's easy enough to bash something out that works after a fashion, but to make it bug-free, efficient and robust requires so much abstruse knowledge that the average coder nowadays, with so many other languages and techniques at his or her disposal, would be excused from wondering whether it was worth the bother. And, I believe, in a lot of the time, it's not.

  • (nodebb) in reply to Prime Mover

    Not so "ridiculously difficult" ... but first get down to the bare metal... https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-koppe.pdf

  • Sole Purpose Of Visit (unregistered) in reply to TheCPUWizard

    Speaking as a C programmer of 20+ years, my approach is to regard this as a non-question (outside niche areas like embedded, which obviously use C anyway). Nobody really cares about the slight degradation in speed these days. And if you really need an expert in a particular language (be it C or Z80) to achieve that speed, then so be it.

    It's also not quite true that C will always be faster "if written by somebody who knows what they are doing" (to paraphrase you, hopefully accurately). Scott Meyers, in one of his Effective C++ books, points out that the C++ version of qsort is actually faster than the C version -- because it provides what might be called metadata over the interface. I expect this applies to other library functions as well.

    Then there's garbage collection, which might be a non-issue in a malloc/free environment, or then again perhaps not. As far as I am aware, there's no generational GC approach that is applicable to C. Therefore I will assert that in certain circumstances (thrashing through a vast number of tiny memory allocations), a language using modern GC will actually be faster, almost by accident.

    I'd also imagine that a pure FP language might be faster than C in certain circumstances, because you're guaranteed immutability (the lack of which bites the OP here) and you can also expect a competent speed-optimised library and perhaps even a jitter for run-time. Also, because of immutability, the libraries can do multi-core parallelisation for free. Then again, I'm stretching things here.

    Bottom line: to get a largely unnecessary speed benefit out of C, the programmer has to have a deep understanding of the semantics. (In the OP, this would be "understanding pointers and references.") Using LINQ in C# or streams in Java, for example: no deep knowledge of semantics required. Which, apart from anything else, is less error-prone.

  • Robin (unregistered)

    I'm fairly puzzled here. Why not do exactly what is suggested in the commentary (make a new empty array that's not a reference to the old one)? Sure it doesn't work with the const, but not thinking to change that to a let is a WTF in itself no matter what languages one is most familiar with.

    Clearing an array variable by setting the length to 0 rather than just reassigning an empty array sets off my WTF alarms anyway. As for deep copies, the reason there's no built in way is that there are any number of edge cases where it's far from obvious what should happen in general (one example is, what if the same object or array is nested within itself), and it's so rarely needed anyway - usually a shallow copy is enough and the language provides any number of ways of doing that. I'd be questioning the whole approach if I ever found myself needing a "genuine" deep copy.

    As to the statement being disputed above - I have never been a C or C++ developer, so take this with due caution, but I don't think it's true. I can certainly believe equivalent C code is faster (although probably not at all, or only unnoticeably so, for really common operations), but I fail to see how it can be more readable if the higher level code isn't WTF-worthy. In any case, if I found myself needing "bare metal" speed I would reach for Rust rather than C - Rust provides great high level abstractions while still compiling to optimal machine code, while also being vastly safer.

  • (nodebb) in reply to Sole Purpose Of Visit

    @Sole Purpose Of Visit - Been doing "C" development for over 40 years, and I generally agree with the intent of what you wrote, but semantics and details are important. Consider any high level language (compiled or interpreted): at some point there is a sequence of machine level instructions that will be executed. If this same set of instructions can physically be generated by the compilation of "C" statements, then parity is achieved by definition. It is only if it is physically impossible to generate the same machine language that "C" may not be able to match the results....

    Now, if "matching" is possible, and the code base is of any non-trivial size, then there is a high probability of there being at least one location (alliteration intended) where memory or clock cycles can be improved (one tick or one byte). At this point "C" performance has exceeded that of the other language...

    Is it worth the potential investment of man-years to get a 0.0000001% improvement? Almost certainly not... but that is a different topic and set of calculations entirely.

  • Brian (unregistered) in reply to Prime Mover
    The reason well-written C is hard to come by is because C is a ridiculously difficult language to write well-written code in. It's easy enough to bash something out that works after a fashion, but to make it bug-free, efficient and robust requires so much abstruse knowledge that the average coder nowadays, with so many other languages and techniques at his or her disposal, would be excused from wondering whether it was worth the bother. And, I believe, in a lot of the time, it's not.

    Funny, one could say the same about JavaScript. Back when I had the misfortune of having to do frontend work, I found it would usually take me 2-3 times longer for the JS parts than the backend C# code simply because of the complexities and quirks of the language, compounded by the existing code from a lot of other people who weren't JS experts either. And it didn't help that our project used both Angular and React (and very old versions of both), plus a ton of poorly-documented packages. IMO, a language that has spawned so many frameworks and wrappers just to make it usable has something fundamentally wrong with it.

  • Daniel Orner (github)

    The lack of a strong standard library has definitely caused JS a lot of pain (which is why I'm so excited about Deno compared to all the other ways to use it).

    Having said that... changing const to let is by far the simplest way to handle this case. I've literally never seen any situation where setting the length attribute is used.

  • Splatmandeux (unregistered)

    For a large C++ program, I had a coworker who made a class and disabled the copy-ctor. When asked why, he said, "this may have to run on embedded hardware someday, and the copy-ctor would have to dynamically allocate memory" (this was a simulator running on high-end desktop/gaming-spec machines). Dude just had a bad case of OCD about some things, e.g. he'd spend days optimizing a csv parser that was used only at program startup. Now a copy of the object was badly needed in a certain context and anytime someone added a copy ctor to the class he reverted the code. However there was a serialization/deserialization implementation for the class. So... that's how I made my own copy routine for the class. Oddly enough, he was OK with this... <shrug>

  • (nodebb) in reply to Brian

    [quote]IMO, a language that has spawned so many frameworks and wrappers just to make it usable has something fundamentally wrong with it.[/quote]

    The sad part is that to the people who call themselves "front end developers" this is all completely normal. And... in my experience, none of them are experienced enough to even recognize that there is a problem with such things. They aren't software engineers and they tend to barely even be programmers (almost not at all, to be honest)... I'm genuinely baffled by the entire website front-end ecosystem... especially when I generally have to be the one who has to make everything "work" once the front end part is done.

  • Zygo (unregistered) in reply to Sole Purpose Of Visit

    the C++ version of qsort is actually faster than the C version

    Have you benchmarked that? The results may surprise you.

    Over the years I've found quite a few "lightweight" and "thin" C++ template classes that turned out to add a 5-digit-factor speed penalty compared to the familiar C equivalents. It's cool that the C++ optimizer can analyse inline functions and use the type model to elide dozens of levels of function call from the output code, but it doesn't always do that. When it goes bad, it goes very, very bad.

    About the only thing you can reliably say about C++ vs. C and performance is that developers can make bigger C++ executables faster.

  • (nodebb) in reply to Zygo

    @Zygo - while there are constructs legal in "C" that are not legal in C++, I have yet to find any real world examples of a valid "C" [ISO, not extensions] program that is faster than the best possible implementation in C++.... can you provide an example?

    Critical to remember that a vendor's implementation is not indicative of the language itself. Back in the days when there did not exist a single C++ compiler that was 100% compliant, I did testing across the implementations. Habits from those days die hard.....

  • Robin (unregistered) in reply to CodeJunkie

    Sorry but either you have had the misfortune of only ever working with clueless frontend "developers" (of course they exist, but clueless developers exist in all fields as this site proves to us daily), or you just like to dismiss things you don't understand fully.

    I would go into more detail but apparently there's a character limit here so I'm trying to be brief. JS has plenty of flaws (I'd rather use Purescript or Elm myself!) but like it or not it's the language web frontends use, and despite those flaws it's at heart a consistent and well-designed language which is worth learning properly. (A huge contrast with PHP, in which the surface WTFs come from a core that's chock full of WTF.) I count many very skilful developers among my colleagues and ex-colleagues who are all very definitely "front end devs". And I aspire to be considered that way myself, although of course that's up to others to judge. (I also don't consider myself solely a front end dev but it happens to be what I do at the moment - and I enjoy it.)

    Never mind the JS, there's a lot of skill in even using HTML and CSS properly. Which sadly a lot of devs simply don't have. Even just writing correct HTML to make a webpage accessible to all users is a lot harder than you'd think, and is a real skill to get right, particularly if you need to start adding fancy Javascript to it as many modern UIs do. Anyway, don't know frontend development as a field/skill just because you might have only come across poor examples of it. In fact the clear fact that it's very often done badly (as daily interactions on the web show!) proves how much skill is needed to do it right.

  • Robin (unregistered)

    That last "don't know" was supposed to be "don't knock"...

  • (nodebb) in reply to TheCPUWizard

    This is a very precise way of answering the question my clients often ask: "Can we do ... ?" I inevitably end up answering, "Well, we can ..."

  • (nodebb) in reply to konnichimade

    @konnichimade - Yup "Can" and "Should" are completely different questions.... Just had a recent case of:

    1. Contract says "do X"
    2. I don't know how to do X
    3. Reach out to (expert) associates - they don't know and don't recommend trying.
    4. Research independent 3rd parties - nothing provides a solution
    5. Build trials/experiments/etc. - Nothing works, though some things look promising if a significant investment is made.

    Inform client that X must be removed from the criteria... Client asks "Are you saying it can't be done?"..... My answer has to be "No, I am not saying that"

    [Honest appraisal is that 6-9 man-months could make it work, but only a week of work to make it unnecessary at all]


  • Wlao (unregistered)

    So... I'm not a developer, but isn't TRWTF this: inner.length = 0? I mean, the length of an array should be a read-only attribute. If you change the length of the array without adding or deleting items in it, wouldn't you expect strange things to happen?

  • Zygo (unregistered)

    I have yet to find any real world examples of a valid "C" program that is faster than the best possible implementation in C++.... can you provide an example?

    The effect is quite visible with std::vector and std::iostream (i.e. the STL templates used instead of calloc()/realloc() and the *printf family). e.g. on GCC:

    [&] { vector<uint8_t> v(1048576); }: 45676 loops in 1 seconds, 45675 loops/sec 21893 nsecs/loop

    [&] { char *x = static_cast<char *>(malloc(1048576)); work_proof ^= reinterpret_cast<size_t>(x); free(x); }: 9255767 loops in 1 seconds, 9255765 loops/sec 108 nsecs/loop

    operator new[1048576] is ~20% slower than malloc of the same size. calloc(size1, sizeof(T)) is ~30% faster than vector<T>(size1, 0).

    GCC is also really good at eliminating calls to C builtin functions, e.g. malloc(), entirely (hence work_proof above, without which the optimizer deletes the entire closure, no calls to malloc() and free() at all).

    We found that "best possible implementation in C++" for things like formatting string IDs was to call sprintf() and put the result in a std::string, or even call asprintf() and wrap it in a class object to call free() on it later. Or use a thread-local static buffer, and claim the savings from not doing dynamic memory too. ostream is slower than all of those, sometimes by a factor of 3.

    I haven't tested qsort this way, but I certainly would, rather than blindly assuming C or C++ had a better qsort function for a given application.

    a vendors implementation is not indicative of the language itself.

    True, but irrelevant. If there are only four usable vendor implementations, and they all suck in similar ways, then sucking in those specific ways will be an unavoidable fact of living with the language.

    Maybe in (yet another) decade or two, the compilers will catch up with their potential--but we'll worry about that after the profiler starts telling us C++ compilers are producing better code, not before.

  • Drak (unregistered) in reply to Brian

    @Brian: there is no need for frameworks when using Javascript. It's only the newbies who need them. I started out my HTML/JS adventure way back in the days of Netscape 2, and I can tell you that there is absolutely no need for all these frameworks to make JS usable. But it does take more experience to be able to quickly build something in vanilla as compared to using a framework. And for lots of general things, sure, use Vue, React or Angular. Until your pointy-haired boss wants something extremely specific that does not play nice with the given framework. Then the framework-users are at a loss how to get it made, while anyone with a bit of common old-school JS knowledge can reasonably easily produce the required results.

  • Drak (unregistered) in reply to CodeJunkie

    @CodeJunkie: coming from 20 years html/js experience, 5 years of C experience (starting a little before my js experience), and about 37 years of programming experience in total, I can tell you that front-end developers can certainly be developers. And they can certainly see the problem with frameworks, as I do. But if a company needs stuff made fast, and they cannot find the people with the experience to get it done the vanilla way, they have very little choice but to hire a Vueist, or a Reactor, or an Angularist or (god forbid) a jqueryist.

  • FTB (unregistered) in reply to TheCPUWizard

    Except that it is true - for the proper definition of "well written"

    No it isn't.

  • FTB (unregistered)

    About the only thing you can reliably say about C++ vs. C and performance is that developers can make bigger C++ executables faster.

    You mean by reducing the development time?

    Remember: All those 32kb FLASH / 2kb RAM Arduinos out there are being programmed in C++.

  • Robin (unregistered) in reply to Drak

    I mostly agree with you, but I don't buy the "frameworks are only crutches for beginners" argument. (Realise you didn't say that but it's the best paraphrase of something that comes out clearly to me from your post.) Sure, you can do anything and everything with vanilla JS - and if I'm making a simple webpage or very simple application, with just a few interactions, I'll go vanilla all the way. But I genuinely believe that, if you need anything complex enough that it would take more than a few hundred lines of JS, it's going to be a real maintenance headache. And no, jQuery (which is essentially dead for good reasons) doesn't improve this. The real problem with vanilla or jQuery is they're fundamentally imperative - your code is saying "do this when the user clicks this button", where "do this" is often complex code to remove some DOM elements and add some others. It doesn't take much complexity for that to get repetitive and brittle.

    The beauty of modern frameworks is that they're naturally component-based, and declarative - so while of course there is some imperative code, you don't have to handle the "plumbing" of adding and removing DOM elements, you just tell them which components to render, and how, in response to state changes. And your imperative code to a large extent just manages those state changes. And the component-based approach makes it very easy to reuse things in different places - in many ways it's more like writing HTML than JS.

    Sure you can do similar things with vanilla, but you'd basically end up recreating React or whatever so why not just use something that's known to work well? I don't buy the argument that "real devs can do it the hard way" - no doubt great devs can, but it's not smart to write vast amounts of code yourself that you can get for free.

    If you want to argue that we shouldn't be writing large enough JS applications to need this, that's something I have some sympathy with. But in the modern world of huge JS-powered apps, frameworks fill a real need.

  • David Mårtensson (unregistered) in reply to Robin

    I agree, having built a web application running into the 2-3 thousand lines of JS for one of the more complex pages, using vanilla and jquery and some jquery based components.

    Switching to React I can really appreciate the added help to keep everything in sync and how components just work better together in general than jquery ones due to the more strict design.

    Sure, there are limits, things that are harder to achieve within the framework, and knowing the basics can be very helpful to solve those things.

    But it also requires quite a good understanding of the framework to know when you can and when you should not circumvent it.

    As for C vs high level. Yes you should be able, with the right compiler, to create something that is faster than most higher level languages BUT it will require a thorough understanding of everything involved, like parallelism, cross thread locking, memory management and much more.

    The benefit with higher level languages is that the language and libraries take care of a lot of that, so you can focus on the actual program logic and make that the best and fastest.

  • Adam Ingerman (github) in reply to TheCPUWizard
    Comment held for moderation.
  • Adam Ingerman (github) in reply to Brian
    Comment held for moderation.
  • Conrad Buck (unregistered)

    For the haters:

    Web frontend is a house of cards. It's not hard to grab the cards and make a layer or two. It is quite challenging to build something that you can keep building on top of. And make no mistake, there is no end to the building. Frontends are for people, and people's wants and needs change constantly.

    On top of that, there are just so many ways to go wrong! Too much abstraction, too little abstraction, choosing the wrong tools for the job. There's just so, so much stuff out there. So many frameworks and libraries, so much documentation, so much on stackoverflow. The gut-punch is: so much of the information is outdated, and so much of the code is perpetually incomplete and/or broken. The real challenges of being a good frontend dev are being able to think for yourself and make informed technical decisions, and to be able to work through the inevitable stream of problems and not around them.

  • Zygo (unregistered) in reply to FTB

    You mean by reducing the development time?

    Exactly. If memory and CPU usage aren't constrained in the target environment, developers are free to burn a lot of both to deliver programs faster, and C++ helps with that by providing a lot of automation on top of a C-centered toolchain. C++ helps programmers write programs faster in the same way that internal combustion engines help drivers use highways to arrive at their destinations on time. A lot of brute force overcomes all the extra weight and the noise isn't so bad when you get used to it.

    With memory or CPU constraints in place, programmers have to deliver faster programs. C++ does not help programmers write fast programs in the same way that internal combustion engines do not help mountain climbers climb mountains. It's an irrelevant tool for that particular job.

    Not that you have to choose. You could drive to the base of the mountain (using C++) and then leave the engine behind to climb the mountain (using C) and switch between those very seamlessly.

    Occasionally someone compares an ultralight motorcycle to a bicycle made of lead, and writes an article about how C++ templates made one function in their project faster than its traditional C equivalent, but that's the exception, not the rule.

    Remember: All those 32kb FLASH / 2kb RAM Arduinos out there are being programmed in C++.

    More likely they're being programmed in Arduino's variant of Processing, but sure, a few people are going straight to avr-gcc. Then they try to use a 256-entry std::vector, and discover they were better off with malloc(), or rearchitecting their program around static allocation. Or they find out that 16-bit pointer arithmetic is deathly slow compared to 8-bit on that CPU, and template libraries that assume only one kind of pointer exists have unusable performance.

  • dereferenced null pointer (unregistered)
    Comment held for moderation.
  • (nodebb)

    We spent quite a lot of time writing code for embedded micros using C++, and there were a few rules to follow, and some compiler switches.

    Then we could use things like templated functions, and use classes properly, just like the Arduino guys, reducing the amount of non-algorithmic boilerplate code we had to write.

    When we started off, we thought we would need different libraries so somebody painstakingly wrote out code in crippled C++ for the embedded board and another library in C++ for the Linux machines that talked to it.

    After doing this for a while, I went back and recompiled some of our Linux libraries over onto the ARM with minimal code changes in some helper library functions (no try/catch, do not allocate memory in class constructors, permanently allocate memory separately immediately after RTOS startup (class constructors for statically declared objects were called before malloc() became functional in that RTOS), no RTTI stuff) and I was able to use the code on the embedded CPU.

    Turn on the GCC Link Time Optimiser, and a lot of the unused code and unnecessary function call hierarchy vanishes.

  • (nodebb)

    While there's the elephant in the room which is JavaScript, not a very good language, I have to admit that until I found the "rfdc" library — which was demonstrably faster in our use cases — I used to use the JSON.parse(JSON.stringify(x)) trick quite a lot.

    If what you have is a vanilla bucket of data with no custom prototypes etc., and you just need to make copies of it to pass around, it bloody works and requires no third-party code at all. But then again, if you need a lot of very custom objects, you'd probably also need to initialize your new copies properly, going through the constructor.

    And then you have proper languages like Python which have protocols for this stuff: implement the proper magic methods, and now your objects can clone themselves.

  • Lily (unregistered)
    Comment held for moderation.
  • (nodebb)

    Lily here.

    The thing is, I came from competitive programming. That's a place where global variables fly everywhere and identifier names hardly exceed 5 characters.

    So I'd write the following snippet without thinking twice

    vector<vector<int> > v;
    vector<int> cur;
    int ans = 0;
    for (int i = 0; i < n; i++) {
      cur = solve(i);
      v.push_back(cur);
      // do magic with cur, push_backing to it and removing from it as needed
      // for example
      sort(cur.begin(), cur.end());
      cur.erase(unique(cur.begin(), cur.end()), cur.end());
      // and cur is now pairwise distinct, let's count something with it
      ans = max(ans, calc_something(cur));
    }
    // The problem requires us to print cur for each 0 through n
    for (auto x: v) cout << x << ' '; cout << endl;
    // and then print the answer
    cout << ans << endl;
    

    Per cppreference (https://en.cppreference.com/w/cpp/container/vector/push_back):

    Appends the given element value to the end of the container.

    1. The new element is initialized as a copy of value.
    2. value is moved into the new element.

    Addendum 2022-04-30 09:48: Edit: cout-ing a vector directly is not actually impossible, many of us have an overloaded << operator in our template code to make coding quicker.

Leave a comment on “Confessions of a Deep Copy”
