• (disco)
    Steve_The_Cynic:
    Talk to me about delete this;
    class ReferenceCountedClassBase {
    private:
        int ReferenceCount;
        virtual ~ReferenceCountedClassBase();
    public:
        void PossiblyUnsafeReleaseForDemonstrationPurposes()
        {
            if (!--ReferenceCount)
                delete this;
        }
    // etc.
    }
    PWolff:
    Planar:
    Pedantic dickweedery aside, || has additive flavor and && multiplicative, and (in most programming languages) it's reflected in their precedences. If you don't see that, you don't know the first thing about boolean algebras.
    But I've learned that Boolean algebras are symmetric (self-dual) under simultaneously swapping truth and falsity (or whatever you call them) together with the operators that in programming are called && and ||.
    Indeed, || is true-preserving and && is false-preserving so they are duals of each other under negation. Similar reasoning applies to set union and intersection under complementation.
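
    That duality is just De Morgan's laws; here is a minimal C++ sketch (illustrative only, not from any of the quoted posts) that checks it exhaustively:

    #include <cassert>
    #include <initializer_list>

    int main()
    {
        // Negation swaps && and ||: check all four input combinations.
        for (bool a : {false, true})
            for (bool b : {false, true}) {
                assert(!(a && b) == (!a || !b));  // the dual of && is ||
                assert(!(a || b) == (!a && !b));  // the dual of || is &&
            }
        return 0;
    }
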
  • (disco) in reply to urkerab
    urkerab:
    Indeed, || is true-preserving and && is false-preserving so they are duals of each other under negation.

    They're just duals of each other. No need to say anything more than that.

    http://en.wikipedia.org/wiki/Duality_(mathematics)#Duality_in_logic_and_set_theory

  • (disco) in reply to urkerab
    urkerab:
    class ReferenceCountedClassBase {
    private:
        int ReferenceCount;
        virtual ~ReferenceCountedClassBase();
    public:
        void PossiblyUnsafeReleaseForDemonstrationPurposes()
        {
            if (!--ReferenceCount)
                delete this;
        }
    // etc.
    }
    

    Among the etc. that you need to include is some way to make derived classes compile. (A class with a private destructor cannot be a base class, because the derived class's destructor cannot call the base class's destructor.) Also, the mere fact that you can gabble this out doesn't tell me you know anything about the implications of delete this;, and it also doesn't do what I asked. I asked you to talk to me about the construction, not gabble out broken code. FAIL.

    Still, your answer told me what I needed to know. You showed yourself to be careless and incapable of following instructions. Thanks for playing. NO HIRE.
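
    For what it's worth, the standard way to supply that missing piece is to make the destructor protected rather than private: derived classes can then compile, while outside code still cannot delete directly. A minimal C++ sketch (names illustrative, not the quoted code), with the usual delete this caveat spelled out:

    class RefCountedBase {
    protected:
        virtual ~RefCountedBase() = default;  // protected: derived destructors
                                              // may call it; external code may not
    public:
        void AddRef() { ++ReferenceCount; }
        void Release()
        {
            if (!--ReferenceCount)
                delete this;  // only safe if every instance is heap-allocated
        }
    private:
        int ReferenceCount = 1;  // the creating reference
    };
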

  • (disco) in reply to PWolff

    Not exactly in shell. For example, a lot of scripts use the pattern test && success || fail, like this:

    true && echo y || echo n
    false && echo y || echo n
    

    These will output y for the first command and then n for the second, much as suggested. But if you try to do more complicated things than echoing text, you'll see that it works slightly differently. Suppose our success command fails, like this:

    true && false || echo meow
    

    This will print meow, perhaps slightly unexpectedly. Now look at some other possible combinations, and try and work out what the output will be:

    true || false && echo one
    true || true && echo two
    false || true && echo three
    false || false && echo four
    

    We get one, two and three output; only the last command fails to output anything. That's because the shell gives && and || equal precedence and evaluates them left to right, so false || false && echo four runs as (false || false) && echo four...

    I've been caught out by trying to be too clever with shell short-circuit operators too often, particularly when trying to programmatically generate scripts (TRWTF), so I usually end up rewriting the test && success || failure one-liners like the following, which works as expected:

    # if test ; then success ; else failure ; fi
    if true ; then false ; else echo meow ; fi
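
    For contrast, C and C++ give && higher precedence than ||, so the same four combinations behave differently; a minimal C++ sketch (the say helper is just an illustrative stand-in for echo), which prints only three:

    #include <cstdio>

    // Illustrative stand-in for the shell's echo.
    static bool say(const char *s) { std::puts(s); return true; }

    int main()
    {
        // Each line parses as a || (b && c), not (a || b) && c as in the shell.
        (void)(true  || false && say("one"));    // || short-circuits: no output
        (void)(true  || true  && say("two"));    // no output
        (void)(false || true  && say("three"));  // prints "three"
        (void)(false || false && say("four"));   // inner && short-circuits: no output
        return 0;
    }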
    

    Filed under: Precedence Exception
  • (disco) in reply to grkvlt
    grkvlt:
    I've been caught out by trying to be too clever with shell short-circuit operators too often

    It's smarter to not be a smartass. Sometimes I remember this when coding… ;)

  • (disco) in reply to Steve_The_Cynic

    Well, maybe 0 is the only integral constant that can be directly assigned, but I see no reason why such a field couldn't hold a value of ~0

  • (disco) in reply to Buddy
    Buddy:
    Well, maybe 0 is the only integral constant that can be directly assigned, but I see no reason why such a field couldn't hold a value of ~0
    Correct, it could, but if you don't specify the signedness of the field, you cannot tell (portably!) what value you will get if you do this:
      struct flaky
      {
        int x : 1;  /* plain int bit-field: signed or unsigned is the compiler's choice */
      } flaky;
      int signed_extracted_value;
      unsigned int unsigned_extracted_value;
    
      flaky.x = ~0;  /* set the field's single bit */
    
      signed_extracted_value = flaky.x;    /* -1? 0? something else? */
      unsigned_extracted_value = flaky.x;  /* 1? not portably knowable */
    

    You don't know (without looking in the compiler's manual) whether flaky.x is signed or unsigned, and you don't know (without looking in the machine's architecture manual AND the compiler's manual) what exactly is meant by a signed field consisting only of a sign bit, except that an all-bits-zero int-family variable is, by definition, zero.

    So the portable range of values is 0..0. Done.
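
    Conversely, spelling out the signedness pins the behaviour down. A minimal C++ sketch (names illustrative), assuming a two's complement machine:

    #include <cstdio>

    struct explicit_fields
    {
        signed int   s : 1;  // sign bit only: holds 0 and -1 on two's complement
        unsigned int u : 1;  // portably holds 0 and 1
    };

    int main()
    {
        explicit_fields f;
        f.s = ~0;  // all bits set
        f.u = ~0;
        std::printf("s = %d, u = %u\n", (int)f.s, (unsigned)f.u);  // s = -1, u = 1
        return 0;
    }
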

  • (disco) in reply to Steve_The_Cynic

    The portable range of values is 0..1. How you interpret the values in the portable range is up to your compiler.

  • (disco) in reply to ben_lubar
    ben_lubar:
    The portable range of values is 0..1. How you interpret the values in the portable range is up to your compiler.
    Don't be that particular kind of stupid. You know perfectly well that I don't mean ***before*** the compiler gets hold of it, but ***afterwards*** (which is the only place that matters). And in fact, the "is up to your compiler" part is *why* the portable range, ***in C*** (i.e. the range of values where C will give you back what you put in on all architectures and all compilers), is 0..0.
  • (disco) in reply to Steve_The_Cynic
    Steve_The_Cynic:
    No, actually. A one-bit two's complement field holds two different values: 0 and -. (No, there is nothing missing between the - and the . I really mean it, although you could make a case that the two values are + and -, because there is only a sign bit...)

    But that does yield the interesting possibility that:

          • - + - == +
          • + + + == +

    (In both cases, assuming a normal two's complement addition circuit (bit+bit+carry in => bit+carry out, not a hard circuit to build), we observe that 0+0=0 in the bits, and 1+1=0 in the bits, assuming carry in == 0 and carry out == discarded.)

    Hmm. A computer wouldn't perform any calculations for logical operations; it would just check for the presence of any set bits, or, if a branch is performed on a single bit (like the sign bit), check that bit.

    E.g.: DEC PDP-1, 1's complement, 18-bit words, -0 = 0777777:

    law 0777777  ;load number -0 into AC
    sza          ;skip next instruction on contents AC zero
    law 1        ;load number 1 into AC
    hlt          ;halt, inspect console lights
    

    So, what would the console lights for AC read? 0, 1, or -0 (0777777)? Answer: 0777777 has all bits set, so there are bits set high in AC and AC is not zero. Therefore we don't skip, law 1 is executed, and the contents of AC are thus 1.

    There is no such thing as Boolean arithmetic at the machine level; it all lives in the compilers of higher-level languages.

  • (disco) in reply to noland
    noland:
    There is no such thing as Boolean arithmetic at the machine level

    Or arguably that's virtually all that you've got. (Addition is just a wrapper round XOR and AND…)
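
    That wrapper is easy to spell out in software; a minimal C++ sketch of ripple-carry addition, where XOR gives the carry-less sum and AND gives the carries:

    #include <cstdio>

    unsigned add(unsigned a, unsigned b)
    {
        while (b != 0) {              // loop until no carries remain
            unsigned carry = a & b;   // AND: the positions that generate a carry
            a ^= b;                   // XOR: the sum, ignoring carries
            b = carry << 1;           // carries feed the next bit position
        }
        return a;
    }

    int main()
    {
        std::printf("%u\n", add(13, 29));  // prints 42
        return 0;
    }
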

  • (disco) in reply to dkf

    XOR and AND are just wrappers around NAND ...

  • (disco) in reply to aliceif
    aliceif:
    XOR and AND are just wrappers around NAND ...

    Not really. The gate libraries are a bit higher-level than that, as they can build things more efficiently if they don't force everything to be a single simple transistor.

  • (disco) in reply to dkf

    Yes, I expect a half-adder (and quite possibly a full-adder) is available there as well.

  • (disco) in reply to aliceif
    aliceif:
    XOR and AND are just wrappers around NAND ...

    It's possible to create every logic gate using only NAND gates.

    It's not done in practice, because it's very expensive in transistors. (Compared to just creating the gates the normal way.)
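
    The construction itself is easy to demonstrate in miniature, though; a minimal C++ sketch building the usual gates from a two-input NAND (on bools rather than transistors):

    #include <cassert>
    #include <initializer_list>

    bool nand_(bool a, bool b) { return !(a && b); }

    bool not_(bool a)          { return nand_(a, a); }              // 1 NAND
    bool and_(bool a, bool b)  { return not_(nand_(a, b)); }        // 2 NANDs
    bool or_ (bool a, bool b)  { return nand_(not_(a), not_(b)); }  // 3 NANDs
    bool xor_(bool a, bool b)                                       // 4 NANDs
    {
        bool n = nand_(a, b);
        return nand_(nand_(a, n), nand_(b, n));
    }

    int main()
    {
        // Exhaustively check the constructions against the built-in operators.
        for (bool a : {false, true})
            for (bool b : {false, true}) {
                assert(and_(a, b) == (a && b));
                assert(or_ (a, b) == (a || b));
                assert(xor_(a, b) == (a != b));
            }
        return 0;
    }
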

  • (disco) in reply to Steve_The_Cynic
    Steve_The_Cynic:
    doesn't do what I asked
    Sorry about that. (In which case it hardly matters that I accidentally conflated two different code patterns.)
  • (disco) in reply to dkf
    dkf:
    The gate libraries are a bit higher-level than that, as they can build things more efficiently if they don't force everything to be a single simple transistor

    Generally speaking, there are no "single simple transistors" in logic gates. Even an inverter is two transistors. A 2-input NAND (or NOR) is 4 transistors. A 3-input NAND is 6.

    There are a few places where you'll find single transistors. A DRAM cell is a single transistor (and a capacitor). Single transistors are also used where a simple make-or-break connection is needed. For example, FPGAs are chips that have a zillion logic gates that can be connected together in zillion² ways to do useful things; those connections are made by turning on single transistors (by writing data into an on-chip RAM).

    PleegWat:
    I expect a half-adder (and quite possibly a full-adder) is available there as well.
    Possibly. You'll definitely have latches and D flip-flops, as well as muxes and more complex gates like various AND-OR-invert (NAND and NOR are just special cases of AND-OR-invert).
    blakeyrat:
    because it's very expensive in transistors
    Somewhat, but it's even more expensive in interconnect between the gates.
  • (disco) in reply to HardwareGeek

    Yeah, people forget that transmission gates (aka analog switches or pass gates) are just as fundamental a CMOS element as NAND or NOR gates are...

  • (disco) in reply to dkf
    dkf:
    Or arguably that's virtually all that you've got. (Addition is just a wrapper round XOR and AND…)

    Computers aren't especially good at computing anyway. A computer is really about transferring and shuffling bits around (even a shift register is quite an elaborate mechanism). The adder used to be one of the most expensive pieces of hardware before there were microprocessors.

    (Interestingly, computing with discrete elements would be easier using ternary logic. The Russians did it with some success; see "Development of ternary computers at Moscow State University", http://www.computer-museum.ru/english/setun.htm. Using ternary logic, OR becomes a max-function and AND becomes a min-function on the trits involved. Using it for arithmetic, there are no rounding errors and it allows arbitrary word lengths for the arguments. Also, it's arguably faster and there are fewer (and cheaper!) elements involved. Some believed it to be the inevitable future of computing in the 1950s.)
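
    The min/max formulation is easy to sketch; a minimal C++ example where a trit takes the balanced-ternary values -1, 0, +1 (the type name and encoding are one illustrative convention):

    #include <algorithm>
    #include <cassert>

    using trit = int;  // by convention only -1 (false), 0 (unknown), +1 (true)

    trit tern_or (trit a, trit b) { return std::max(a, b); }  // OR  as max
    trit tern_and(trit a, trit b) { return std::min(a, b); }  // AND as min
    trit tern_not(trit a)         { return -a; }              // NOT as negation

    int main()
    {
        assert(tern_or (-1, 0) == 0);  // false OR unknown == unknown
        assert(tern_and( 1, 0) == 0);  // true AND unknown == unknown
        // De Morgan duality survives the generalization:
        assert(tern_not(tern_and(1, -1)) == tern_or(tern_not(1), tern_not(-1)));
        return 0;
    }
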
