Is Boundary Logic interesting?

Discussion in 'Physics & Math' started by arfa brane, Aug 21, 2018.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    This algebra of programming is not something I found in a textbook. Actually the textbook for the course used Hoare triples and proof rules (antecedents and consequents).

    The algebra was a lecture handout. What the lecturers didn't "need" to tell us about the algebra is that it defines a mathematical object called a semiring (roughly, a ring without additive inverses). But see: https://en.wikipedia.org/wiki/Semiring

    Program composition is not commutative, but program union is. The latter implies that a program can take two branches instead of one (i.e. parallel computation is defined, but isn't implemented). Thus my notation {P}Q ∪ P{Q} doesn't make that distinction; you need to include a predicate and its negation, i.e. {αP} ∪ {α'Q}, to "correct" this for a sequential computer.

    With iteration, the Kleene star operation means the algebra is a star semiring or Kleene algebra. "Addition" is idempotent: P ∪ P = P says (running) two copies of P is the same as one copy.
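    That semiring structure is easy to sketch. Below, programs are modeled as input/output relations on a small state space; the particular states and the programs P and Q are invented for illustration, not from the course handout:

    ```python
    # A program is modeled as a set of (input_state, output_state) pairs.
    # Sequential composition is relational composition; "union" is set union.

    def compose(P, Q):
        """Sequential composition P;Q: run P, then feed its output to Q."""
        return {(a, c) for (a, b) in P for (b2, c) in Q if b == b2}

    P = {(0, 1), (1, 2)}   # hypothetical program
    Q = {(1, 0), (2, 2)}   # another hypothetical program

    # "Addition" (union) is idempotent and commutative:
    assert P | P == P                      # two copies of P = one copy
    assert P | Q == Q | P

    # Composition ("multiplication") is generally not commutative:
    assert compose(P, Q) != compose(Q, P)
    ```

    Running P then Q gives {(0, 0), (1, 2)}, while Q then P gives {(1, 1)}, which makes the non-commutativity of composition concrete.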
     
  3. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    . . . so a naive look at boundary logic says distinguishing two copies of P is the same as distinguishing one: (P) (P) = (P)
     
  5. iceaura Valued Senior Member

    Messages:
    30,994
    That is literally axiom 1, page 1, of LoF - although perhaps not obviously. A more easily visible notation is seen introducing the " = " sign, a couple of pages later.
    If that is unpersuasive, a notationally nearly identical algebraic expression, a "consequence" - after the introduction of algebra, then - can be found in chapter 6.
     
  7. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    That it isn't obviously an axiom is possibly an understatement.
    It isn't obvious that P, in 'boundary form', is either ( ) or (( )). Boundary forms are fixed; parenthetical forms have equal numbers of ( and ) in them. You have to assert that a symbol like P, or A, is a fixed form with an unknown (unprocessed, say) equality to one of ( ) or (( )).

    Another point is that boundary logic doesn't seem to have made much impression on current research into correctness proofs. Bricken and Kaufmann are two authors who touch on the computational aspects of LoF.

    Bricken talks about the many-to-one relation between Boolean logic and LoF. In computation on real physical computers, there are many ways a program can crash. There is only one program that satisfies an identity in the above algebra.
    But I suppose I could assert something like \( P^0 = 1\), or running P zero times is the same as doing nothing. Note P is a recursive symbol, it could itself be a part of a larger program, or a sequential composition of program (sections).
     
  8. iceaura Valued Senior Member

    Messages:
    30,994
    You are still bouncing around in the first forty pages or so of an elementary introduction written more than fifty years ago.
    It was immediately obvious to me - and I am no great expert in the stuff.
    The equality - the "form of condensation" - holds regardless.
    It's basically a complicated way of writing Axiom 1 in LoF. But to ease your mind, it is proved/demonstrated at each successive level in the more complicated notations - axiom, arithmetic, algebra first order, and possibly algebra second order (not explicitly - there would be complexities).
    Or any other application in the commercial or industrial world - the people familiar with it inhabit esoteric and fringe pockets, the people in the mainstream of engineering and software and other practical applications are not familiar with it. Apparently.
    I don't know why. I suspect it's a qwerty phenomenon.
     
    Last edited: Oct 14, 2018
  9. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Sorry, we might be talking about different subjects. The axioms in LoF might apply to program logic; what I meant by not obviously an axiom is how to translate the LoF axiom in question to the same domain as a program that runs and halts, or crashes.

    Just by invoking P P = P, I need to remember that P might be void, but possibly is not. What kind of program looks like that? Really, all the use of symbols buys me is that I don't have to write two expressions, one of which has nothing on either side of "=".
    Except I also have to remember that nothing can be written like: (( )).

    So the two expressions are ( ) ( ) = ( ) and (( )) (( )) = (( )); the symbol P conveniently lets me write one expression.
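    Those two reductions follow from the two arithmetic rules of LoF, calling: ( )( ) = ( ), and crossing: (( )) = void. A toy string rewriter makes this checkable; it is a sketch that only handles simple variable-free forms like the ones above, not the full calculus:

    ```python
    def reduce_form(s):
        """Apply LoF's arithmetic rules until no rule fires:
        crossing: (()) = void, and calling: ()() = ()."""
        while True:
            t = s.replace("(())", "").replace("()()", "()")
            if t == s:
                return s
            s = t

    assert reduce_form("()()") == "()"       # ( ) ( ) = ( )
    assert reduce_form("(())") == ""         # (( )) reduces to void
    assert reduce_form("((()))") == "()"     # ((( ))) = ( )

    # Both sides of (( )) (( )) = (( )) reduce to the same canonical form:
    assert reduce_form("(())(())") == reduce_form("(())")
    ```

    Note that both sides of the second expression reduce all the way to void, consistent with (( )) being a way of writing nothing.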
     
  10. iceaura Valued Senior Member

    Messages:
    30,994
    There's nothing strange in the fact that people unfamiliar with Boundary Logic cannot employ it to any benefit in real world tasks.
    I took the question of the thread to be whether that common circumstance was an indication of some kind of general or abstract import - if BL were in fact and in theory and unavoidably as little useful as it is little used.
    For some reason.
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Proof of correctness is computationally complex; it would be a waste of effort to use a proof calculus for all software. Certainly parts of the code can be proved correct, but humans are still better at this than machines.

    If say, you're the Dept. of Defense, you can afford to hire an army of computer graduates who have studied the stuff. Otherwise the complexity, since it means there is a cost, keeps it on the fringe of mainstream software development.

    As I see it, there's nothing to stop anyone using boundary logic, or some other kind of representation of LoF, on the subspace of conditional tests or predicates, since that algebra is Boolean (and you also have a subsemiring which is a ring).

    LoF has been around long enough that you would think some clever trevor would be using it, if it simplifies the inductive proofs which humans (I guess necessarily) perform in steps.
     
    Last edited: Oct 15, 2018
  12. iceaura Valued Senior Member

    Messages:
    30,994
    There is unfamiliarity, and the conventions of the fields involved, and so forth. (Could they get published? Communicate their findings?)
    That is the situation, yes.

    It has a lot of the earmarks of a qwerty phenomenon.
     
  13. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I'm trying to bring some ideas together here and I'm not sure where I can put LoF.

    Maybe what it is is a way to notate the dynamic aspects of some model of computation, or maybe it isn't.

    One of the ideas is a connection between the structure of a Boolean lattice and a pair of random bit generators, like coins. The four possible outcomes of a coin-tossing program are the abstractions { (0,0), (1,0), (0,1), (1,1) }, a set of four ordered pairs.

    So suppose p and q are the random outputs; then, by the "law" of complements, you have the set { p'∧q', p∧q', p'∧q, p∧q }.
    This set contains all the observable outputs; you now want a way to combine these under meet and join so you have a lattice. So you want four orthogonal 'vectors', say p∧q = (1000), p∧q' = (0100), p'∧q = (0010), p'∧q' = (0001).

    Then (p∧q )∨(p∧q') = p = (1100), (p∧q)∨(p'∧q) = q = (1010).

    The rest of the lattice structure follows, "bottom" is (0000) = p'∧p = q∧q', "top" is (1111). The lattice is a fixed structure that exists logically, but the only thing really existing is one of four outcomes of a two-coin toss.

    Is there an invariant each time the coins are tossed or the random bit generators are run? Yes, but it's a probability.

    P.S. the abstraction can also be considered a way to map the four objects, { p'∧q', p∧q', p'∧q, p∧q }, to four distinct areas of say, a disk with four bounded quadrants. Then it follows that the subset relation over the powerset of { p'∧q', p∧q', p'∧q, p∧q } is the same lattice.
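    The construction above can be sketched with bitmasks, one bit per outcome, following the post's (1000)/(0100)/(0010)/(0001) encoding; join and meet then become bitwise OR and AND:

    ```python
    # Encode the four two-coin outcomes as single bits:
    # p∧q = (1000), p∧q' = (0100), p'∧q = (0010), p'∧q' = (0001).
    pq, pq_, p_q, p_q_ = 0b1000, 0b0100, 0b0010, 0b0001

    # Join is bitwise OR, meet is bitwise AND.
    p = pq | pq_            # (p∧q) ∨ (p∧q') = p = (1100)
    q = pq | p_q            # (p∧q) ∨ (p'∧q) = q = (1010)
    assert p == 0b1100 and q == 0b1010

    assert p & q == pq      # the meet of p and q recovers p∧q

    bottom, top = 0b0000, 0b1111
    assert p & (p ^ top) == bottom   # p ∧ p' = bottom = (0000)
    assert p | (p ^ top) == top      # p ∨ p' = top = (1111)
    ```

    The sixteen values 0 through 15 are exactly the subsets of the four outcomes, which is the powerset lattice mentioned in the postscript.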
     
    Last edited: Oct 17, 2018
  14. iceaura Valued Senior Member

    Messages:
    30,994
    Have you read it, and studied the approach it introduces?
     
  15. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Have I read Spencer Brown? Or have I read anything about Laws of Form?
    I've been reading Kauffman, Bricken, and anything else I can find. I suppose I can say I've skimmed Spencer Brown's opus, but I haven't found anything all that enlightening yet.


    Spencer Brown mentions that he developed his calculus while at British Rail. He says that it uses imaginary Boolean values, but doesn't elaborate.
    A shame, because a real-world example of the use of LoF to find solutions which are effective paths through a switching network would be of interest.

    If it worked for BR, why hasn't it been used elsewhere, given that a computer can reasonably be treated as a switching network?
     
  16. iceaura Valued Senior Member

    Messages:
    30,994
    All I'm saying is: You have several times now asked about stuff that LoF deals with, explicitly. I have responded by pointing to items in the first forty pages of a well known and elementary introductory text that is now fifty years old.
    He demonstrates - not "says" - that it can be extended to four valued logic, heuristically analogous to complex numbers, rigorously - thereby bringing self-referential statements into the realm of rigorous logical analysis via his convenient arithmetic.
    Interesting questions. Some guesses appear above, in my posts, but answers would require considerable research.
    As noted, it may be simply a high-level qwerty phenomenon.
     
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    He isn't the only one who has rigorously derived complex logical values, or functions (which return same).
    The square root of not is a reality in quantum mechanics; it's easily demonstrated with lasers, mirrors, and beamsplitters.

    Apart from that practical "computational" result, and Spencer Brown's rail-scheduling, not much else seems to be around in the literature. Complex logic remains obscure.

    But since it must be intrinsic to QM, quantum computers have to be able to "do it".
     
  18. iceaura Valued Senior Member

    Messages:
    30,994
    It is also a reality in self-referential statements, feedback loops involving cyclic phenomena, and the like. It has been a logic visible in the world for a long time.

    It is very difficult to handle in the standard notations of Boolean algebra. Whether a shift to BL, with its notational facilities, would provide tools easier to employ, is afaik unknown.
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    The idea with imaginary logical values is easy, you take the arithmetic concept of the imaginary number i, to the "arithmetic" of Boolean logic.

    It isn't hard to follow; since the additive inverse of i is the multiplicative inverse of i, we need a logical version of that.

    Suppose a j such that j∧¬j = 1 (or true), and j∨¬j = 0 (or false). This contradicts what holds for a standard Boolean a, namely a∨¬a = 1 and a∧¬a = 0.
    For i we have i(-i) = 1 and i + (-i) = 0.
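    The complex-number half of the analogy is easy to sanity-check in ordinary arithmetic (nothing here is LoF-specific; it just confirms that -i is both the additive and the multiplicative inverse of i):

    ```python
    # Check the identities the j-analogy is modeled on, using Python's
    # built-in complex numbers.
    i = 1j
    assert i * i == -1       # the defining property of i
    assert i + (-i) == 0     # -i is the additive inverse of i
    assert i * (-i) == 1     # -i is also the multiplicative inverse of i
    assert 1 / i == -i       # i.e. 1/i = -i
    ```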
     
  20. iceaura Valued Senior Member

    Messages:
    30,994
    The difficulties arise in employing Boolean notation when analyzing self-referential and feedback loops involving cyclic phenomena.
     
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I think there is also a problem because an imaginary-valued number, hence its arithmetic, is not something that translates directly to Boolean logic.
    With numbers, i + i (= 2i). With the "other logic", we have j ∨ j (= ?), since j ∧ j = ¬.

    ¬ what? j is a unary operation on a Boolean variable, and so is j*j = j∧j. We are obliged to mix purely imaginary Boolean logic with real Boolean logic.
    We have j∧j∧a = ¬a. In fact we have to build new truth tables for at least ¬ and ∧. Does DeMorgan's law hold in this extension of the logic, and can it be shown that the ∨ (join) relation follows given the new truth tables for ¬ and ∧?

    The way I've "mapped" imaginary numbers to logical values isn't the only way to do it (but does follow Bricken's method). This might have something to do with division rings, or something.
     
    Last edited: Oct 23, 2018
  22. iceaura Valued Senior Member

    Messages:
    30,994
    Why would you want to do that? That's a mess. The idea is to stay in BL notation etc - that's why it was invented.
    The advantages of BL - if any - would include the strategic abandonment of Boolean notation and all its restrictions, confusions, etc.
    They appear to be significant obstacles, in some cases - this being one. The analogy with Roman numerals was intentional.
     
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Nonetheless, that's what Kauffman and Bricken both do.

    Both point out that the equation x = -1/x has a logical counterpart. I'm pretty sure division isn't defined in a Boolean ring, which means that yes, it is messy.

    L Kauffman's Knots and Applications (online ebook) shows the connection between BL and something called Temperley-Lieb algebra. The latter is definitely mainstream.

    Kauffman says this interesting thing: "Quantum logic is the pregeometry of notation."
    And I may have mentioned that the complex logical operator √¬ ("the square root of not") has a notation. Whoah, we need ¬√¬ too, so that √¬∧√¬ = ¬√¬∧¬√¬ = ¬, as ii = (-i)(-i) = -1.
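    For the quantum-mechanical reading, there is a standard 2x2 complex matrix that squares to NOT (the Pauli-X gate); the check below uses plain Python complex arithmetic, no libraries, and the matrix is the usual √NOT from quantum computing rather than anything taken from LoF:

    ```python
    # Verify that the usual "square root of NOT" matrix M satisfies M·M = NOT,
    # and that NOT·M is a second square root, mirroring (-i)(-i) = -1.
    def matmul(A, B):
        return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
                for r in range(2)]

    def close(A, B):
        return all(abs(A[r][c] - B[r][c]) < 1e-12
                   for r in range(2) for c in range(2))

    h = (1 + 1j) / 2
    SQRT_NOT = [[h, h.conjugate()], [h.conjugate(), h]]
    NOT = [[0, 1], [1, 0]]

    assert close(matmul(SQRT_NOT, SQRT_NOT), NOT)        # √¬ ∧ √¬ = ¬

    # The "other" square root, ¬√¬, also squares to NOT:
    NEG_SQRT_NOT = matmul(NOT, SQRT_NOT)
    assert close(matmul(NEG_SQRT_NOT, NEG_SQRT_NOT), NOT)
    ```

    So the two square roots of ¬ behave like i and -i do as square roots of -1, which is the parallel the post draws.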
     
    Last edited: Oct 24, 2018