Symmetry of the cube

Discussion in 'Physics & Math' started by arfa brane, Jan 22, 2019.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    A comment on the complexity of walking through \( \mathfrak G \).

    For one walking graph (the sliced 3-ball, with a coloured graph on it), it's hard, because it can only store a certain limited amount of information, say.
    For two small graphs, walking is a bit easier, because they can compare their positions (we know a map exists between the pair, or any pair), and as the number of spheres moving through \( \mathfrak G \) increases, there is more information in it, since entropy increases overall (including in the physical model: more noise dissipating, more particles interacting under physical rotations, an increase in amplitudes).

    Complexity is then factored by the number of small graphs "in motion".

    We know that a walk through the projection must involve paths that cross over each other, for pairs of walks defined over the 42 red coloured points in \( \mathfrak G \). This is a consequence of the graph being a poset; to see the crossing paths, we need to rotate the graph and see a total space, in which \( \mathfrak G \) is a base space.
     
    Last edited: Feb 18, 2019
  3. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So if I again upgrade the notation, I say I have a graph in three dimensions projected to a shadow graph in two, and so I might want to define graphs in one dimension.

    The best I can do there, since I've represented g and the other two in a 2-plane, is factor out one or the other, or sum only the vertical sections parallel to one or the other Euclidean axis. That is, if I represent one dimension as timelike, so inside a cone, then what remains is a time axis and a space axis. This is written 1 + 1 dimensions because it indicates you have to sum over both spacelike and timelike components to define a path through the big graph.

    But I can say that if \( \mathfrak G^3 \) and \( \mathfrak G^2 \) are the three- and two-dimensional graphs, the latter a projection of the former, then I have only the sections in \( \mathfrak G^1\) left, and none of the vertices are horizontally connected.
     
    Last edited: Feb 18, 2019
  5. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    In my ongoing investigation of the provenance and meaning of the poset graph, I haven't been able to ignore any details, no matter how irrelevant they might seem.

    This is what I should dub the Feynman approach, along the lines of: if there's plenty of room to store information, then one should take note of all of it; no detail is small enough to be ignored. Hence the physical model: it makes some noise, there is physical interaction, and some electrons (very light particles) move around; friction charges objects made of plastic when we rub a pair of surfaces together.

    So returning to the graph-on-the-sphere, as an object moving through a larger 'abstract' graph (on a graph walk), I can take a pair of elements on the line x = 1 for the same walk; one of the pair moves forward pointing forwards, the other moves forward pointing backwards, because the pair are inverses of each other.

    That is, start with the pair \( g_x,\, g_x^{-1} \), and take both in parallel (timewise) to the line x = 2.

    Then to keep moving in the same direction (i.e. forwards but pointing forwards and backwards), form the pairs \( g_x g_y,\, g_x^{-1}g_y^{-1} \). Halt for a bit here.

    If you now iterate the pairs simultaneously: \( (g_xg_y)^n,\, (g_x^{-1}g_y^{-1})^n \), you find each is a cycle, in fact the same cycle as a pair, one the inverse of the other.
    The compositions fix two of eight vertices and act on six, in each moving graph. The positional identity on six of eight vertices occurs at \( (g_xg_y)^5,\, (g_x^{-1}g_y^{-1})^5 \), the same order as g when it "stays at home" or transforms itself into \( h^{-1}\) by mapping itself to a local system of coordinates.

    If you have a {3,4} Schläfli 'tiling' symbol and you also rotate each of the six, you have an action on their orientations which is a 3-cycle in the positive direction.

    So you have to iterate 3×5 = 15 times over the pair of walking graphs to get a path integral. There doesn't seem to be enough room along y = x, which is the geodesic line you stay on in this controlled descent; ergo both small graphs must bounce around somewhere along it. The problem resolves to fitting a 5-cycle on a 2-letter composition to a line with 11 points on it.
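    The cycle structure claimed above can be checked on a concrete model. A minimal sketch, assuming the two generators are the standard pocket-cube quarter turns U and R (my stand-ins for \( g_x, g_y \); the coordinate conventions are mine too): the position permutation of the composite turn is a 5-cycle, with three corners fixed in position, one of which sits in both turned layers (so it is twisted in place, which this sketch does not track); that leaves two vertices fully fixed and six acted on, and with the orientation 3-cycle the total order is 3×5 = 15.

    ```python
    # Corners of the cube are the points (±1, ±1, ±1); a quarter turn rotates
    # one layer of four corners. We compose U then R and read off the cycles.

    def u_turn(c):
        # quarter turn of the z = +1 layer, about the z axis
        x, y, z = c
        return (y, -x, z) if z == 1 else c

    def r_turn(c):
        # quarter turn of the x = +1 layer, about the x axis
        x, y, z = c
        return (x, z, -y) if x == 1 else c

    corners = [(x, y, z) for x in (1, -1) for y in (1, -1) for z in (1, -1)]
    word = {c: r_turn(u_turn(c)) for c in corners}   # apply U, then R

    moved = [c for c in corners if word[c] != c]
    fixed = [c for c in corners if word[c] == c]

    def order(perm):
        # order of a permutation = lcm of its cycle lengths
        from math import lcm
        n, seen = 1, set()
        for c in perm:
            if c in seen:
                continue
            length, cur = 0, c
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
                length += 1
            n = lcm(n, length)
        return n

    print(len(moved), len(fixed), order(word))  # -> 5 3 5
    ```

    Note the corner (1, 1, 1) lies in both the U and R layers and returns to its place after one turn of each: it is position-fixed but orientation-twisted, which is where the extra factor of 3 comes from.
    
    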

    And I can theorise there's a physical way to represent a graph with edges that disconnect and reconnect; an obvious physical analog is magnetic field lines. I can make a DIY "moving graph" with small blocks of wood and some disc-shaped magnets!
     
    Last edited: Feb 18, 2019
  7. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Topology and its ideas let me do abstract things, like consider the faces or vertices of a solid cube to have a topological map, which corresponds to a path along edges which halts: a complete path visits each edge once, or each vertex once. I extend each map to a cycle that visits each face once (a face is a vertex in one of the graphs), and a cycle that visits each vertex once, depending on what edges represent in each case.

    I'm free then to choose an identity permutation of a string of eight vertices, and of a string of six faces. The vertex graph is a spanning tree, a Hamiltonian path, which becomes a Hamiltonian graph if it is completed (i.e. an edge is added which joins the "free" ends of the path, the vertices of degree 1). There is an Euler circuit (resp. an Euler trail for the spanning tree) if the graph is a multigraph (has more than one edge between points, and might have loops on single points).
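    The Hamiltonian path on the eight vertices can be written down explicitly. A small sketch, using the standard trick (not mentioned above, but it fits): label the cube's vertices with 3-bit strings so that edges join strings differing in one bit; the reflected Gray code then visits every vertex exactly once, and its two degree-1 ends also differ in one bit, so adding that edge completes the path to a Hamiltonian cycle, exactly the "completion" move described.

    ```python
    # Reflected Gray code on n bits = Hamiltonian path on the n-cube graph.

    def gray(n):
        if n == 0:
            return ['']
        prev = gray(n - 1)
        return ['0' + s for s in prev] + ['1' + s for s in reversed(prev)]

    path = gray(3)
    print(path)  # ['000', '001', '011', '010', '110', '111', '101', '100']

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # every consecutive pair is an edge of the cube graph (Hamming distance 1)
    assert all(hamming(a, b) == 1 for a, b in zip(path, path[1:]))
    # the two "free" ends also differ in one bit, so the closing edge exists
    assert hamming(path[0], path[-1]) == 1
    ```
    
    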

    Then I add the following "graph move" or deformation: I take, say, the eight-letter graph (each of eight vertices has a different letter on it), then form a cycle on half, leaving four fixed (an open string). I see that I can also have a set of graphs in which each edge corresponds to either a vertex of the solid cube, or an edge. I have graphs which are related by the geometry of the cube; or "graph relativity" as a relation between pairs of graphs . . .
     
    Last edited: Feb 18, 2019
  8. Write4U Valued Senior Member

    Messages:
    20,069
  9. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    There's a more interesting relation between the octahedral (i.e. graphs with six vertices, above) and cubic (with eight vertices) graphs: an octahedral graph has a dual graph, in which faces of the first are vertices of the second; the edges exist because they do on the solid versions, if that's enough to explain where they come from.

    As to triangles, the smallest complete graph is \(K_3\) which has 3 vertices and 3 edges, and is a subgraph of the above graphs in multiple places. It remains connected if you remove one edge and don't glue vertices together.

    Moreover, any graph drawn in the plane with non-intersecting edges is a planar graph; in a sense it's a method of defining planarity as non-intersection. Another method is wrapping a graph around a cylinder or a torus so the edges have no intersections. But that might not be "the" plane (the only minimal surface with exactly two dimensions and zero curvature), if the cylinder is topological and might look like a truncated cone, for instance.
     
    Last edited: Feb 19, 2019
  10. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Dammit, there's a mistake; I should say an octahedral graph has eight faces.

    A planar graph has one face which is the rest of the plane.

    So the dual of the first graph, which has no intersecting edges, is a graph with vertices on the faces of the first; one vertex lies on the outer face, so it might as well be at infinity, except you can just put it anywhere outside the face set of the first graph.
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So far, the whole thing has been a few matrices, and a basic outline of what the thesis is here.

    It isn't an original thesis, but that said, I haven't come across any other work that looks at it the way I'm doing here.
    There is plenty of literature on Rubik's cube and its many modifications, and on puzzles that are homeomorphic to a sliced 3-ball.

    I've got another 4x4x4 Rubik's cube, which is the next nxnxn model with n even. So there's a relation between the n = 2, n = 3, . . . series which should fit in some category (be a tower of algebras, say). In fact, constructing DIY puzzles with blocks or other shapes, by physically gluing small magnets to surfaces so they all lie below a projected surface which you can colour or make a graph on, is a direct way to investigate the properties of permutation groups with their physical models. Some of these groups have very large order, like \( 10^{500} \) permutations, so my graph is in that context very small.

    Another puzzle I have, embedded in the 3-ball, is an icosahedral one. It has 19 spherical triangles on it, and a hole you can plug with a free triangle (so you remove it by projecting it to infinity and taking a quotient of the plane). So there are \( 19! \) possible positions for the remaining triangles 'stuck on' the surface. You rotate one of the three triangles around the hole into it, rotating the triangle as you do this. It's called a von Dyck move, and you can embed it in a reflection group--it's the subgroup (subset) of reflections which rotate triangles.

    However, with just n = 2 in the cube puzzle set, I know that there's an \( S_8 \) action on the vertex set, and an \( S_4 \) action on subsets of four other geometric 'features'--subsets of four parallel edges that "used to be" cube diagonals--a standard map from the four diagonals of a cube to the symmetric group on four letters is 'available'.

    I've shown to myself that both the above group actions (on G-sets of vertices and edges) are a cover for the permutation space of positions, and that the subgroup that cycles the orientations is isomorphic to \( Z_3 \) under addition. But thanks to complex roots of unity I can make this subgroup multiplicative--I can count the rotations in each element in the vertex G-set with complex numbers, specifically the third roots of unity.

    These are, abstractly, three complex unit vectors aligned symmetrically (with the same angle between them) around the unit circle in \( \mathbb C \). So I again upgrade the algebra (8x8 permutation matrices) by including scalar multiplication by a complex number \( w \),
    such that \( w^3 = 1 \). This is one way to solve the problem of counting the equivalence classes, but not the only way.
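    The additive-to-multiplicative switch can be sketched in a few lines. Here each corner's twist \( t \in \{0, 1, 2\} \) is written as the marker \( w^t \); addition of twists mod 3 becomes multiplication of markers, and the well-known cube invariant "total corner twist ≡ 0 (mod 3)" becomes "the product of all eight markers is 1". The sample twist vector is my own choice, purely for illustration.

    ```python
    # Third roots of unity as multiplicative orientation counters.
    import cmath

    w = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
    assert abs(w**3 - 1) < 1e-12

    twists = [0, 1, 2, 0, 1, 2, 0, 0]  # hypothetical twists of the 8 corners
    assert sum(twists) % 3 == 0        # a legal position (additive form)

    product = 1
    for t in twists:
        product *= w**t
    assert abs(product - 1) < 1e-9     # the same invariant, multiplicatively
    ```
    
    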
     
    Last edited: Feb 19, 2019
  12. Write4U Valued Senior Member

    Messages:
    20,069
    This is above my pay-grade, but your proposal sounds logical and demonstrable. Well done.
     
  13. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I want to do something off in a different direction, to do with complex number representations, initially numbers with a 'unit' complex dimension. So it can all be defined by a unit circle, or points lying on it or inside it (i.e. on the disc).

    So I got the basis vectors for a space which has four dimensions:
    \( e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \bar {e_1} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, {e_2} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \bar {e_2} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \).

    The multiplication table:

    \( \begin{array} {| c | c |} * & e_1 & \bar {e_1} & e_2 & \bar {e_2}\\ \hline e_1 & e_1 &0 & 0 & \bar {e_2}\\ \bar {e_1}& \bar {e_1 }& 0&0 & e_2\\ e_2 &0& \bar {e_1}& e_2&0\\ \bar{e_2}&0&e_1&\bar{e_2}&0\\ \hline \end{array} \)
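    The table above can be checked entry by entry: the four basis vectors are the standard 2x2 matrix units, and their products follow \( E_{ab}E_{cd} = \delta_{bc}E_{ad} \). A dependency-free sketch with plain tuples and a small matrix multiply:

    ```python
    # Spot-checking the multiplication table of e_1, \bar{e_1}, e_2, \bar{e_2}.

    def matmul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                           for j in range(2)) for i in range(2))

    e1   = ((1, 0), (0, 0))   # e_1
    e1b  = ((0, 0), (1, 0))   # \bar{e_1}
    e2   = ((0, 0), (0, 1))   # e_2
    e2b  = ((0, 1), (0, 0))   # \bar{e_2}
    zero = ((0, 0), (0, 0))

    # row element multiplies the column element from the left, as in the table
    assert matmul(e1, e1) == e1
    assert matmul(e1, e2b) == e2b
    assert matmul(e1b, e2b) == e2
    assert matmul(e2b, e1b) == e1
    assert matmul(e1, e1b) == zero
    assert matmul(e2, e1) == zero
    ```
    
    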

    And I know that complex numbers can be represented by 2 x 2 real matrices. Since I know that in that representation the unit is the 2 x 2 identity matrix, the representation of \( i \) as a fourth root of \( I \) is \( \begin{pmatrix} 0& 1\\ -1& 0 \end{pmatrix} \), since then \( i^4 = 1 \) and \( \begin{pmatrix} 0& 1\\ -1& 0 \end{pmatrix}^4 = I \).

    So both of these I know can be written component-wise as \( I_{ij} = \delta_{ij},\, \omega_{ij} = \epsilon_{ij} \) (the Kronecker delta and the Levi-Civita symbol). Well, great.
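    This representation can be verified directly: writing \( J \) for the matrix above, \( J^2 = -I \) and \( J^4 = I \), and more generally \( a + bi \mapsto aI + bJ \) multiplies exactly like complex numbers. A self-contained sketch (the test values z, u are arbitrary choices of mine):

    ```python
    # The 2x2 real-matrix representation of the complex numbers.

    def matmul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                           for j in range(2)) for i in range(2))

    I = ((1, 0), (0, 1))
    J = ((0, 1), (-1, 0))      # represents i

    assert matmul(J, J) == ((-1, 0), (0, -1))     # J^2 = -I
    J4 = matmul(matmul(J, J), matmul(J, J))
    assert J4 == I                                # a fourth root of I

    def rep(z):
        # a + bi -> aI + bJ
        a, b = z.real, z.imag
        return ((a, b), (-b, a))

    z, u = 2 + 3j, -1 + 1j
    left, right = matmul(rep(z), rep(u)), rep(z * u)
    assert all(abs(left[i][j] - right[i][j]) < 1e-12
               for i in range(2) for j in range(2))   # rep is multiplicative
    ```
    
    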

    But I want to now multiply basis vectors with scalars, the complex nth roots of unity.

    If I have \( \omega^3 = 1 \) then the table looks like:

    \( \begin{array} {| c | c |} * & \omega e_1 & \omega\bar {e_1} &\omega e_2 & \omega\bar {e_2}\\ \hline \omega e_1 & \omega^2e_1 &0 & 0 & \omega^2\bar {e_2}\\ \omega \bar {e_1}& \omega^2\bar {e_1 }& 0&0 & \omega^2e_2\\ \omega e_2 &0& \omega^2\bar {e_1}&\omega^2 e_2&0\\ \omega\bar{e_2}&0&\omega^2 e_1&\omega^2\bar{e_2}&0\\ \hline \end{array} \)

    Multiply the products in the table again:

    \( \begin{array} {| c | c |} * & \omega^2 e_1 & \omega^2\bar {e_1} &\omega^2 e_2 & \omega^2\bar {e_2}\\ \hline \omega e_1 & e_1 &0 & 0 & \bar {e_2}\\ \omega \bar {e_1}& \bar {e_1 }& 0&0 & e_2\\ \omega e_2 &0& \bar {e_1}& e_2&0\\ \omega\bar{e_2}&0& e_1&\bar{e_2}&0\\ \hline \end{array} \)

    So by induction, since I need one scalar multiplication then two more, I need one followed by n-1 for \( \omega^n \). Well, lookee thair, I can draw myself a graph for this.
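    The two scaled tables can be checked numerically for representative entries: scaling each factor by \( \omega \) multiplies each product by \( \omega^2 \) (first table), and multiplying those products by the \( \omega^2 \)-scaled generators restores the original, unscaled products, since \( \omega^3 = 1 \) (second table).

    ```python
    # Scalars pass through matrix products: (w A)(w B) = w^2 (A B), etc.
    import cmath

    def matmul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                           for j in range(2)) for i in range(2))

    def scale(s, M):
        return tuple(tuple(s * x for x in row) for row in M)

    def close_to(A, B, eps=1e-12):
        return all(abs(A[i][j] - B[i][j]) < eps
                   for i in range(2) for j in range(2))

    w = cmath.exp(2j * cmath.pi / 3)
    e1  = ((1, 0), (0, 0))
    e2b = ((0, 1), (0, 0))

    # first table: (w e1)(w e1) = w^2 e1 and (w e1)(w e2b) = w^2 e2b
    assert close_to(matmul(scale(w, e1), scale(w, e1)), scale(w**2, e1))
    assert close_to(matmul(scale(w, e1), scale(w, e2b)), scale(w**2, e2b))

    # second table: (w e1)(w^2 e1) = e1, back to the unscaled product
    assert close_to(matmul(scale(w, e1), scale(w**2, e1)), e1)
    ```
    
    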
     
    Last edited: Feb 20, 2019
  14. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Another little clue is that the 'canonical' basis for \( \mathbb R^2 \) consists of pairs of real numbers (x, y); the matrix rep. is:

    \( \Biggl\{\begin{pmatrix} 1\\0 \end{pmatrix},\, \begin{pmatrix} 0\\ 1 \end{pmatrix}\Biggr\} \)

    So the space has vectors like \( v = x\begin{pmatrix} 1\\0 \end{pmatrix}\,+\, y \begin{pmatrix} 0\\ 1 \end{pmatrix} \)

    Now I can take the outer product; usually this is written \( u \otimes v \), so I want \( u = \begin{pmatrix} 1\\0 \end{pmatrix},\, v = \begin{pmatrix} 0\\ 1 \end{pmatrix}\)

    Another multiplication table gives:


    \( \begin{array} {| c | c |}\otimes & u & v \\ \hline u &u\otimes u& u\otimes v \\ v & v \otimes u& v \otimes v\\ \hline \end{array} \)

    And each product is one of the \( e_j, \bar e_j \).
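    The last claim is easy to compute: the four outer products of u and v are exactly the matrix units from the earlier posts. A minimal sketch:

    ```python
    # Outer products of the canonical basis vectors of R^2.

    def outer(a, b):
        return tuple(tuple(x * y for y in b) for x in a)

    u = (1, 0)
    v = (0, 1)

    assert outer(u, u) == ((1, 0), (0, 0))   # e_1
    assert outer(u, v) == ((0, 1), (0, 0))   # \bar e_2
    assert outer(v, u) == ((0, 0), (1, 0))   # \bar e_1
    assert outer(v, v) == ((0, 0), (0, 1))   # e_2
    ```
    
    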
     
    Last edited: Feb 20, 2019
  15. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    ...

    Inspection of these tables gives me a way to write set products, such as \( \{\omega e_j,\, \omega \bar e_j \}^2 \), or \( \{\omega e_j,\, \omega \bar e_j \}^3 \), or say, \( \{\omega e_j,\, \omega \bar e_j \}^n \), with complex coefficients:

    \( \omega^n e_1 + \omega^n e_2 + \omega^{n-1}(\bar e_1 e_2)\, + . . . \)

    And I have this "*" operation thing, or closure of the set of all finite strings over the characters in the sets, such that:

    \( \{\omega e_j,\, \omega \bar e_j \}^* = \{\omega e_j,\, \omega \bar e_j \}^0\{\omega e_j,\, \omega \bar e_j \}^*\).

    The problem with this part, \( \{\omega e_j,\, \omega \bar e_j \}^0\), is that it isn't defined yet. What does it mean to take the set product 0 times? Of what set? Surely the set has to first be non-empty, and this is given by \( \{\omega e_j,\, \omega \bar e_j \}^1\)
     
    Last edited: Feb 20, 2019
  16. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So we fix that one by making the 0th set product the singleton \( \{\lambda\} \), where \( \lambda \) is the empty character or string of length zero: the identity of the monoid. (It has to be \( \{\lambda\} \) rather than the empty set ∅, since concatenating any set with ∅ gives ∅, not the set back.)
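    This is standard in language theory, and small enough to demonstrate. A sketch over a two-letter alphabet (the letters 'g' and 'h' are placeholders for the generators in the posts above): with \( S^0 = \{\lambda\} \), the set containing only the empty string, the empty string acts as the identity for the set product, which is what makes the closure identity work; the empty set ∅ would instead annihilate everything.

    ```python
    # Set product (concatenation of languages) and its 0th power.

    def product(A, B):
        """Concatenation of two languages (sets of strings)."""
        return {a + b for a in A for b in B}

    def power(S, n):
        out = {''}                # S^0 = {λ}: the empty string, NOT the empty set
        for _ in range(n):
            out = product(out, S)
        return out

    S = {'g', 'h'}
    assert power(S, 0) == {''}
    assert power(S, 1) == S
    assert power(S, 2) == {'gg', 'gh', 'hg', 'hh'}
    assert product(power(S, 0), power(S, 2)) == power(S, 2)  # λ is the identity
    assert product(set(), S) == set()                        # ∅ annihilates
    ```
    
    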

    Done.
     
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Ok, so now we're, as it were, in motion here. We've got matrices and a way to connect them back to ordinary old \( \mathbb R^2\).

    Or in another language, there is a map from \( \mathbb R^2 \circeq \mathbb E^2\) to the permutation group in question \( \mathfrak T \) such that \( \mathfrak G \) is its graph.

    I get there by taking the set of outer products in \( \mathbb R^2\) as a set product operation. I embed the operation in a table in which the left column of elements in the "alphabet" acts on the upper row of the same elements--the set "x" itself.

    So, of course, there is nothing to stop you using your set of partial (incomplete) 2 x 2 matrices to represent the table, or set product, as:

    \( e_1 \otimes u^{\otimes 2} + e_2 \otimes v^{\otimes 2} + \bar e_1 \otimes v \otimes u + \bar e_2 \otimes u \otimes v \). A simple change of notation.

    And it seems you have now a space with, well, \( 2 + \bar 2 \) dimensions, or degrees of freedom for abstract polynomials.

    But we aren't there yet. Since I can apply the operation of set product as a table-generating operation, a square matrix, I can generate the 1st, 2nd, etc. cosets of the set of elements in G, the generating set I'm using here.

    So say I define \( \{g_{\alpha}\} = \{g_1, g_2, g_3\} \), so I can write the generating set as one symbol with one index (covering all the values); then I can do this:

    \( g_{\alpha}^1 = g_{\alpha}^1, \)
    \( g_{\alpha}^2 = e_1 \otimes g_{\alpha}^2 + e_2 \otimes g_{\alpha}^{-2} + \bar e_1 \otimes g_{\alpha}^{-1} g_{\alpha} + \bar e_2 \otimes g_{\alpha} g_{\alpha}^{-1} \)
     
    Last edited: Feb 20, 2019
  18. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    One more go at that faces-of-a-cubic-graph thing, which is a consequence of having to draw it with non-intersecting edges. Then it represents a real cube with 8 vertices and, let's say, 1 + 5 faces; the planar version of any cubic graph has enclosed cycles, and the enclosed region in the plane inside a cyclic subgraph is called a face of the graph. It's a cubic graph because every vertex has degree 3, or \( \delta = 3 \).

    The dual, octahedral graphs are all \( \delta = 4 \), since each face of the cube borders four others.

    In each case there is exactly one external face, the unbounded region outside the convex hull of the drawing.
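    Euler's formula \( V - E + F = 2 \) ties the face counts of the last few posts together: the planar cube graph has 6 faces (5 bounded plus the external one), its octahedral dual has 8, and every vertex of the octahedral graph has degree 4. A small sketch (the octahedron is built here as \( K_{2,2,2} \): every vertex adjacent to all but its antipode, which is my own construction, not the author's):

    ```python
    # Face counts via Euler's formula, and the octahedron's regularity.

    def faces(V, E):
        # F = 2 - V + E for a connected planar graph
        return 2 - V + E

    assert faces(8, 12) == 6    # cube graph: 8 vertices, 12 edges, 6 faces
    assert faces(6, 12) == 8    # octahedral graph: 6 vertices, 12 edges, 8 faces

    # octahedron = K_{2,2,2}: each vertex is adjacent to all but its antipode
    verts = range(6)
    antipode = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
    degree = {v: sum(1 for u in verts if u != v and u != antipode[v])
              for v in verts}
    assert all(d == 4 for d in degree.values())
    ```
    
    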
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Ok, sorry about some blunders there with faces of a planar graph, but back to the algebra I've defined out of nothing too exciting: a set of partial 0-1 matrices; these are minimal in some numerical sense, and I've seen them used with a similar kind of notation in QIS papers.

    Now clearly if \( g_{\alpha}^k \) is the first coset of \( g_{\alpha}^{k-1} \), then \( g_{\alpha}^2 \) is the first coset of \( g_{\alpha}^1 \), and that's a halting point--we don't need to keep looking because we have a trivial solution in front of us. So we put all the moves of a single turn in the same set and assert that doing it twice means there's another move, h, which is smooth enough that it "skips" over a point in the graph, although technically it's a 2-point shift up the vertical. So we in some sense stretch this over a line with a gradient: dy/dx = 2

    This set (embedded in a vertex pair over (1,0)), and the other vertical sets, are incomparable sets along horizontal lines, because there are zero connections in that direction, even at the horizontal boundary along y = 14.

    So we forget about the origin; then along y = x there is a set of 11 points. Maybe I can use an 11th root of 1 there?
     
    Last edited: Feb 20, 2019
  20. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I need to correct again, that set product notation.
    So say I define the set \( \{g_{\alpha}\} = \{g_1,g_2,g_3,g_1^{-1},g_2^{-1},g_3^{-1}\} \),

    (. . . the products iterated from 1 to 2)
    \( \{g_{\alpha}\}^1 = \{g_{\alpha}\}^1, \)
    \( \{g_{\alpha}\}^2 = e_1 \otimes g_{\alpha}^2 + e_2 \otimes g_{\alpha}^{-2} + \bar e_1 \otimes g_{\alpha}^{-1} g_{\alpha} + \bar e_2 \otimes g_{\alpha} g_{\alpha}^{-1} \)

    Sorry, forgot to format the { } properly. That's better: the thing on the left of = is a set; on the right, the products. Whew

    But I can also do:
    \( \{g_{\alpha}\}^2 = e_1 \otimes g_{\alpha}^2 + e_2 \otimes g_{\beta}^2 + \bar e_1 \otimes g_{\beta} g_{\alpha} + \bar e_2 \otimes g_{\alpha} g_{\beta} \)
     
    Last edited: Feb 20, 2019
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So, ('meh')

    . . . if \( \{g_{\alpha},g_{\beta}\}^k \) is the first coset of \( \{g_{\alpha},g_{\beta}\}^{k-1} \), then \( \{g_{\alpha},g_{\beta}\}^2 \) is the first coset of \( \{g_{\alpha},g_{\beta}\}^1 \), ...

    This set (embedded in a vertex pair over (1,0)), and the other vertical sets, are incomparable sets along horizontal lines, because there are zero connections in that direction, even at the horizontal boundary along y = 14. These missing edges are the (horizontal) antichains in the poset graph; the 'rows' in the lattice. But I can also define another change of notation, \( \{g_{\beta}\} = \{g_1^{-1},g_2^{-1},g_3^{-1}\} \). Then I need to do \( \{g_{\alpha},g_{\beta}\}^n \) (so I've edited this post again to correct it)

    Now I can call the vertical columns (sets) a fiber over the antichains, an n-point fiber where n depends on, well, a certain geometry I can explore with complex roots of unity mapped to sets (and their cosets) of group elements -- the \( \{g_{\alpha},g_{\beta}\}\) under concatenation, which is then the default group multiplication, or a composition of matrices left to right (alternatively right to left, depending on which rule you choose).

    However it's done, because the \( 2 + \bar 2\) dimensions let you write down product tables in matrix form, you can write out the equivalence classes as sets (distinct in the horizontal direction!) of polynomials in the \( \{g_{\alpha},g_{\beta}\} \), with coefficients in \( \{\omega e_i, \omega\bar e_j \}\).

    Start with \( \omega = 1 \), say, the first root (not a primitive root) of unity.
     
    Last edited: Feb 20, 2019
  22. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So now I have a way to describe motion from fiber to fiber in a base space \( \mathfrak G^2 \), along some kind of geodesic through points each the lowest point in the fiber over y = 0.

    Motion in \( \mathfrak G^3 \), a total space, is along 2-point fibers, because to move forward you have to cross over your inverse (there is more than one way to find an inverse path, but only two which are pairs of individual Eulerian trails). Does a word like \( g_1g_2 \), or \( g_1g_2^{-1} \), reach the endpoint? If it takes 5 iterations to get to x = 10, how exactly does the motion map to all the points, so that \( (g_1g_2^{-1})^3 \) rotates a vertex cube once, and \([(g_1g_2^{-1})^3]^5\) does that 5 times, over the 11 points?

    Once I go to the complex roots of unity though, I seem to suddenly have simplified things, since now I'm dealing with one dimension instead of a pair of real ones; this must be connected to the way I take outer products of the basis set of 2 x 1 column vectors . . .
     
    Last edited: Feb 20, 2019
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Hah, notice what happens when I iterate multiplication but with the complex roots \( \omega^n \), n = 3, used once, then twice, etc.

    I.e. iterating thus

    \(\omega \{e_i, \bar e_j \}\{e_i, \bar e_j \} = \{e_i, \bar e_j \}\omega \{e_i, \bar e_j \}\)
    \( = \{e_i, \bar e_j \}\{e_i, \bar e_j \}\omega = \omega \{e_i, \bar e_j \}\)
    \(\omega \{e_i, \bar e_j \} \omega \{e_i, \bar e_j \} = \omega^2 \{e_i, \bar e_j \}\)
    \( \omega \{e_i, \bar e_j \}\omega^2 \{e_i, \bar e_j \} =\{e_i, \bar e_j \} \)

    \( \Rightarrow \{e_i, \bar e_j \}^* = \{e_i, \bar e_j \}, \omega^*\{e_i, \bar e_j \}^* = \{\{e_i, \bar e_j \}, \omega\{e_i, \bar e_j \}, \omega^2\{e_i, \bar e_j \}\} \)
    \(\Box\)​

    by induction.
    Then another notational problem resolves to:

    \( \{\omega e_i, \omega\bar e_j \}^0 = \omega^0\{e_i, \bar e_j \} = \{e_i, \bar e_j \} \), which is obtained by multiplying the set by the scalar.
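    The closure claim above can be confirmed by brute force: starting from \( \omega \) alone and saturating under multiplication yields exactly \( \{1, \omega, \omega^2\} \), a group of order 3, so \( \omega^* \) applied to the basis set generates three scaled copies and no more. A sketch (values are compared after rounding, to absorb floating-point noise):

    ```python
    # Saturate {w} under multiplication and count the resulting scalars.
    import cmath

    w = cmath.exp(2j * cmath.pi / 3)
    assert abs(w**3 - 1) < 1e-9

    def close(gens):
        key = lambda z: (round(z.real, 6), round(z.imag, 6))
        seen = {key(z) for z in gens}
        while True:
            new = {key(complex(*a) * complex(*b)) for a in seen for b in seen}
            if new <= seen:
                return seen
            seen |= new

    closure = close([w])
    print(len(closure))   # -> 3: the set {1, w, w^2} and nothing else
    ```
    
    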
     
    Last edited: Feb 20, 2019
