Unscrambling the cube


noodler

There are a "few" steps to it.

In my own case they went something like:
1. buy some permutation puzzles.
2. read about some ways to solve (hard) and scramble (easy) the Rubik's group, starting with the smaller puzzles that stay "closer to the identity".
3. also use computer algorithms to investigate how the numbers of symmetries behave (a small counting sketch follows this list).
4. read up some more on algebra and group theory, since the puzzles are all obviously groups of "elements in sets".
5. try out some optimization techniques to find the logarithmic orders and general groups in the structures.
6. develop a computational model that can map Galois theory to a triangulated 'signal space', i.e. a category-parser that sorts the symmetry groups.
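
As promised in step 3, here is a minimal counting sketch, assuming only the standard piece-structure formulas for the 2-cube and 3-cube (the same numbers crop up again further down the thread):

```python
# Minimal sketch of the "count the symmetries" step: the orders of the
# 2x2x2 and 3x3x3 cube groups from their piece structure. Standard formulas,
# not anything specific to the model sketched in this thread.
from math import factorial

# 2x2x2 (Pocket Cube): 8 corners, the last corner's twist is forced, and there
# are no fixed centres, so divide by the 24 rotations of the whole cube.
pocket = factorial(8) * 3**7 // 24          # == factorial(7) * 3**6 == 3,674,160

# 3x3x3 (Rubik's Cube): corner permutations * corner twists * edge permutations
# (tied to corner parity, hence the // 2) * edge flips (last flip forced).
rubik = factorial(8) * 3**7 * (factorial(12) // 2) * 2**11

print(f"|G_2| = {pocket:,}")                # 3,674,160
print(f"|G_3| = {rubik:,}")                 # 43,252,003,274,489,856,000
```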

Of course, I did all of that the first time I solved the cube - in principle - so it was only a matter of delving into "the math" by following a certain path; certain people's names were dropped in the article apart from Mr Hofstadter's. There are also possible uses for a model like the one in step 6, if I can design a real one; step 1 there is to design it. Basically the whole problem is about division algorithms. Here Bob Doran came in handy, because he knows a bit about parallel design and such.

Is anyone up on the design of a logarithmic-order category parsing machine?
 
Hmm. I may add that the science of puzzles and gaming is an actual mathematical topic.
Also, since most people who have solved puzzles like the Rubik ones haven't been studying category theory or abstract algebra, they are drastically oversimplifying their solutions.

But of course, simplifying is exactly what you need to be doing, to find a solution.
The triangulation of a signal might be an oversimplification of a circuit (I haven't decided), or a kind of information channel, with time-reversed encoding. You might think of the triangles more as a kind of tile that has to be made and added to a tiling (of a Euclidean plane).
 
Well, isn't this boring?

Trying to think about decidability and the halting problem - and I didn't read the recommended books!
One more attempt, which may be my last - I seem to be good at hitting the wall of contempt here, but upon reflection I don't appear to give a shit, so here goes, for the rabble.

Decidability, when it comes to numbers, is about deciding if a number exists in the first place, then deciding what class of number it is. One way to look at classes is to see how many ways a number can be written.
So writing a number is a decision problem. Turing saw that a number must have a finite representation to encode information (a message), and must also have an algorithm that "figures" the number in a finite set of steps. That is to say, a message can't be encoded in an alphabet of "infinitely many" characters - the set of tokens or characters in an alphabet must be limited, so a message will require the repetition of certain of these tokens.

The circularity of the writing process is what he refers to in his paper on the decision problem. When the paper was written, "the decision problem" had a specific meaning; although the phrase has a modern meaning now, the meaning of the individual words in it hasn't changed since 1936.

Circle-free means there is a finite representation, in n places, for the characters or digits in a written number, or that recursion is finite or "least depth".

In the cube puzzles there are numbers of permutations of faces, in a facelet basis (i.e., up to color equivalence/inequivalence). You are trying to write a certain number when "solving" a cube puzzle; this number is a combination of the edge and corner groups, which are separate for puzzles with N >= 3.

There is a finitary representation of parity in the faces, for the facelet groups, at each step. You are trying to find a solution that resets parity to zero. So solving or scrambling a cube preserves a parity function, divided between corner and edge parity "equations" that you solve in parallel, since the edges and corners move together.
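
To make the parity bookkeeping concrete, here is a small sketch (illustrative, not a cube simulator): the sign of a permutation is read off its cycle structure, and a quarter turn of a face is a 4-cycle of corners together with a 4-cycle of edges, so the two odd parities cancel.

```python
# Small sketch of the parity bookkeeping described above.
# The sign of a permutation is +1 (even) or -1 (odd), read off its cycles.

def sign(perm):
    """Sign of a permutation given as a dict {element: image}."""
    seen, sgn = set(), 1
    for start in perm:
        if start in seen:
            continue
        # walk one cycle; a cycle of length L contributes (-1)**(L - 1)
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        sgn *= (-1) ** (length - 1)
    return sgn

# A quarter turn of one face 4-cycles four corners and 4-cycles four edges.
corner_cycle = {"c1": "c2", "c2": "c3", "c3": "c4", "c4": "c1"}   # odd
edge_cycle   = {"e1": "e2", "e2": "e3", "e3": "e4", "e4": "e1"}   # odd

print(sign(corner_cycle), sign(edge_cycle))          # -1 -1
print(sign({**corner_cycle, **edge_cycle}))          # +1: the two parities cancel
```

Since every face turn is even on the combined set of pieces, any product of face turns is even too, which is the usual reason you can never swap just a single pair of edges on the 3-cube while leaving everything else fixed.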


So there, boys and girls...

Oh, and P.S. a logarithmic-order category parser sounds like a human brain, just a little.
 
You seem to be babbling about things you don't know too much about. And anyone who does know such topics sufficiently to be programming them into a computer isn't going to be coming onto a forum to ask "Is anyone up on the design of a logarithmic-order category parsing machine? ".

Writing a program to solve a Rubik's cube doesn't require you to know Galois theory or even much group theory. You just need to work out general methods to solve Rubik's cube, which can be found on any one of a great many websites, and then you program them in. The whole group and Galois theory side of things is for a more abstract examination of the structure of the problem.

You're trying to bite off a bit more than you can chew.
 
Note how the last contributor - the sole agent to date - appears to be attempting to place Galois theory on a pedestal which is proclaimed to be beyond the reach of lesser minds.

This is, of course, total rubbish - Galois theory is about codes and encoding, for one thing (among other things).
Note also that the theory doesn't start and end with a mathematical model that simulates the permutations on a computer; the poster above probably knows this, so the comments are entirely superfluous and at least as gratuitous as the ones I've been making.
 
Hah.
"solving a Rubik's cube doesn't require you to know", yada yade...

What solving any permutation puzzle absolutely requires is that the solver develop a good understanding of the moves and what they achieve; how to "flip" or exchange arbitrary pairs of pieces.

Although you may never formally name or write down the "formulas" you develop, they all correspond to group-theoretical operators which you conjugate together. Just because some dude called Galois got there first makes no difference to what you conceive you are doing.
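
To illustrate what "conjugating operators" buys you (a toy sketch with made-up permutations, not actual cube moves): conjugating a pattern B by a setup move A, i.e. forming A B A^-1, gives the same kind of pattern acting on relabelled pieces - the cuber's setup-move trick.

```python
# Illustrative sketch of conjugation, A * B * A^-1, with abstract permutations
# rather than real cube moves. Permutations are dicts {position: image}.

def compose(p, q):
    """The permutation 'q then p': x -> p(q(x))."""
    keys = set(p) | set(q)
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in keys}

def inverse(p):
    return {v: k for k, v in p.items()}

# B: a 3-cycle of three "pieces"; A: a "setup move" relabelling some pieces.
B = {1: 2, 2: 3, 3: 1}
A = {3: 7, 7: 3}                            # swap pieces 3 and 7

conj = compose(compose(A, B), inverse(A))   # A B A^-1
print(conj)                                 # a 3-cycle (1 2 7), with piece 3 now fixed
```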

Of course, you can argue that thoughts in a mind aren't mathematical - but then you need to provide evidence that neurons don't communicate mathematically, there aren't any "codes" in a brain, etc.
 
I can characterize the cube puzzles as stacks or groups of cubed elements.
There is a section number, N, which when cubed is the number of subcubes in the symmetrical stack (up to cube symmetry).

Nothing interesting happens in the total group until N = 2; then N - 1 is the slice number for an edge basis. Per face there are 2(N - 1) slices.
At N = 2 there is a stack of 8 elements and a total of 24 sections. The permutations of this subgroup are not affected by the number of edges in higher-N stacks. Let this subgroup be K; then $$ |K| = 7!\,3^6 $$. |K| is a divisor of |G| for N in {2,3,4,5,6,7}.

There is only a single completed coset table in the total group, for N = 2. It says there are an odd number (11) of square or double moves, or an even number of single (quarter) moves, from the extreme of K to the identity; at most 11 moves solve a scrambled 2-cube. That means parity for square moves is odd and even for quarter moves, since the coset table says you can reach a single quarter-move distance from the identity with square moves (from an arbitrary number of moves away).

The structure of the corners group divides the edges group, which extends to dividing the structure of permutation groups, when N > 2.

There is a natural way to analyse these things as computational models, depending on how you classify the data structures and what sort of language you use - an imperative or a declarative language, or some abstract functional language. And (yes folks) there is now a discipline called computational group theory. (Just as well I learned how to program in BASIC - no, I'm joking, 'ha ha'; actually I learned to program in assembler) ...:rolleyes:
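
For what it's worth, the most basic computational-group-theory operation is easy to sketch without any library: enumerate the closure of a set of generator permutations. The generators below are a textbook toy (a 4-cycle and a transposition, which generate S4 of order 24), not cube moves; systems like GAP do the same thing at cube scale.

```python
# Minimal sketch of computational group theory: enumerate the closure of a set
# of generators under composition. Toy generators (not cube moves): a 4-cycle
# and a transposition on 4 points, which generate S_4, so the order is 24.

def compose(p, q):
    """The permutation 'q then p', in one-line (tuple) form."""
    return tuple(p[q[i]] for i in range(len(p)))

def group_order(generators):
    identity = tuple(range(len(generators[0])))
    elements = {identity}
    frontier = [identity]
    while frontier:                      # breadth-first closure under the generators
        nxt = []
        for g in frontier:
            for gen in generators:
                h = compose(gen, g)
                if h not in elements:
                    elements.add(h)
                    nxt.append(h)
        frontier = nxt
    return len(elements)

four_cycle    = (1, 2, 3, 0)   # 0 -> 1 -> 2 -> 3 -> 0
transposition = (1, 0, 2, 3)   # swap 0 and 1
print(group_order([four_cycle, transposition]))   # 24
```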
 
We must find God's algorithm (optimal solving)... Well... Yeah :)
 
What solving any permutation puzzle absolutely requires is that the solver develop a good understanding of the moves and what they achieve; how to "flip" or exchange arbitrary pairs of pieces.


The way I do the Rubik's cube (in particular --- you seem to be talking about puzzles in general) doesn't require much thought at all. I simply learned X number of algorithms, and pay attention to the situation at the end of each algorithm.......

- basically glance at the cube, do the pattern, glance at the cube again, do the pattern. You really don't even have to look at the cube while you're doing the patterns; it's pretty much just muscle memory. You can, however, shave off a lot of time if you are familiar with where certain cubelets will be at the end of a pattern - in which case you just chain the algorithms together until you get to a part where you can't predict it (which means you have to glance at it again whenever you get there).

.....of course, I'm still layer solving, so I'm not super fast or anything. My average time for the 3x3 right now is ~1 minute 20 seconds, the Revenge cube is ~5 minutes, and the Professor is ~10 minutes. These times really suck.

Developing mathematical concepts and computer programs to solve the cubes is a great learning exercise however, so I can't knock it.
 
I did the original Rubik's Cube once and only once. Working on and off, probably not quite once a day, it took a couple of months to figure out two algorithms (series of several rotations) that made sense out of it, and to develop a notation system that allowed me to keep track of where I was and what I was trying to do. After that it took a few days to reach the solution.
 
Speed-solving is only one aspect; I suppose this could be characterized as a test of the brain's ability to find log-order solutions in real time.

But there's much more to it - for instance, writing a "construction" algorithm that, given N, will specify the physical parameters, and relating this to |G|, the number of permutations for the N-cube; an easy problem.
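
The easy half, as a sketch (standard piece counts for an NxNxN cube; relating them to |G| in general is the harder half):

```python
# Sketch of a "construction" function: given the section number N, return the
# physical parameters of the NxNxN cube. Standard piece counts, nothing exotic.

def construct(N):
    if N < 2:
        raise ValueError("N must be at least 2")
    return {
        "corner pieces":  8,
        "edge pieces":    12 * (N - 2),        # 0 for the 2-cube
        "centre pieces":  6 * (N - 2) ** 2,    # 0 for N = 2; only odd N keep a fixed centre per face
        "surface cubies": N**3 - (N - 2) ** 3, # visible sub-cubes in the "stack"
        "facelets":       6 * N**2,            # 24 "sections" at N = 2, 54 at N = 3
    }

for N in (2, 3, 4, 5):
    print(N, construct(N))
```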

A harder problem is finding a correspondence between the sliced-up cubes and other Platonic solid puzzles which are deep or shallow sliced. Another is mapping the full set of Platonic forms to the N-cubes - finding a tetrahedron, octahedron etc, in the "algebra", or characterizing the full group which the N-cubes are a subset of.
 
Orientation:

I have a question about this. The first two examples produced in the group of N-cubes are for N = {2,3}. The Pocket Cube has only 8 corner pieces; since the whole cube can be held in any of 24 orientations (a chosen corner can be in 1 of 8 positions and 1 of 3 orientations), every arrangement is counted 24 times over, meaning the number of possible permutations is reduced by a factor of 24.

[Image: solved Pocket Cube]

For N = 3 there are fixed centers which don't permute (and a fixed set of 6 colors is used, with the same permutation order); since the 3-cube is oriented by its centers, the reduction by a factor of 24 doesn't apply to it.

As N gets larger, does this split continue - are the even-N examples still subject to the same factor of 24, and the odd-N examples free of this restriction?

What happens to the permutation total if the center colors are removed from the 3-cube - does this reduce the total?

[Image: Void Cube]
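
A quick numerical check of the factor-of-24 claim for the Pocket Cube (just arithmetic; the void-cube question is left open):

```python
# Check the factor-of-24 reduction described above for the 2x2x2 (Pocket Cube).
from math import factorial

raw     = factorial(8) * 3**7    # corner positions * twists (last twist forced)
reduced = raw // 24              # no fixed centres: quotient out whole-cube rotations

print(raw, reduced)                          # 88179840 3674160
assert reduced == factorial(7) * 3**6        # the |K| = 7! * 3^6 quoted earlier
```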
 
To stoke things a little here, I've got another question.

Considering the physical parameters in the above, which would correspond to a "temperature", or a "volume"? In what sense is there any "pressure"?

What is temperature "physically", and what does a degree of temperature represent? Is temperature a serial, or a parallel measurement - a derivative of some "work" function?
The Boltzmann temperature is just a partition assumed to be at equilibrium.
 
I found that a quarter turn of the top section and a thumb pushed upwards dislodge the first section of the cube; after that it was easy to disassemble the cube and rebuild it with all the pieces in their proper places again. I found that far better than removing the colours and rearranging them to suit the need of the other colours.
 
Ok...

this disassembling and reassembling of the pieces, would you say it involved "work"?
How would you describe the "function" you applied to the process? Something like "n pieces", and "m moves", over "the cube"?
And you used a discriminant, right? Something we call "colour", below the equator ('smirk').
 
The slice number, N, when N = 1, constructs a 1-cube, which is the "atom" in a lambda calculus, since there are no "levels"; or rather, all positions in K and E are fixed at N = 1.

When N = 2, things are more "interesting": there is a number of particles (8) to move around.

"...$$n$$ is the total number of particles (8), $$ n_i $$ is the number of particles in the $$i$$th "energy level" (after m moves), M is the number of levels, and

$$ \frac{dS}{dn_i} = k \sum_{i=1}^{n} \frac{\partial \ln W}{\partial n_i} $$

$$ n = \sum_{i=1}^M n_i $$
"

Ed: mistake corrected, i needs to start at 1 (the atom). Also, n is the number of permutations, [M] is the limit for i, which is the index of the corners group K.
I might have written (8) to mean something other than the integer 8 inside parentheses...
So M is a total number of moves, for which all 8 corners, in the 2-cube, are in n permutations.
There are 3 permutations per face when M = 1, since a move m can be clockwise, anticlockwise, or a half turn, and it is only restricted to "corners meeting corners"; only 3 of the 6 faces are independent on the 2-cube, since turning the opposite face gives the same position up to a rotation of the whole cube. Then n = 9 when M = 1.
But this doesn't seem to stay a simple relation - at M = 2, n = 54, a factor of 6 larger; the next value for n can't be explained by a simple 'naive' formula like the above - it needs some tweaking. Perhaps M is a more complicated number than just a "moves" counter.

Ed2: some may be thinking: "the permutations are discrete, and that's a continuous differential equation". You can 'move' a face continuously and it will still take corners and edges to corners and edges, respectively and in parallel.
So continuous rotation of a group (4 corners on the same 'level') will take corners to corners 4 times per cycle. There is still a discrete solution.
Some may also recognize the formula for entropy changes by a work function, in some space or other.
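
The counts n = 9 at M = 1 and n = 54 at M = 2 can be checked by brute force. Here is a sketch (plain bookkeeping, nothing to do with the entropy formula) that represents each sticker of the 2-cube by its position and outward normal, and turns only the three faces meeting one fixed corner so that whole-cube rotations aren't double-counted:

```python
# Brute-force check of the move counts quoted above for the 2x2x2: how many
# distinct positions are exactly M face turns from the identity. Only the three
# faces meeting one corner are turned (turning the opposite face gives the same
# position up to a whole-cube rotation), so there are 9 moves per step.
from itertools import product

def rot90(v, axis):
    """Rotate a 3-vector by 90 degrees about a coordinate axis (the sense is
    irrelevant here, because every move below comes with its inverse)."""
    v = list(v)
    i, j = [a for a in range(3) if a != axis]
    v[i], v[j] = -v[j], v[i]
    return tuple(v)

def solved():
    state = {}
    for pos in product((-1, 1), repeat=3):      # the 8 corner cubies
        for axis in range(3):                   # 3 stickers per cubie
            normal = [0, 0, 0]
            normal[axis] = pos[axis]
            state[(pos, tuple(normal))] = tuple(normal)   # colour := solved normal
    return state

def turn(state, axis, quarters):
    """Turn the face pos[axis] == +1 by the given number of quarter turns."""
    new = {}
    for (pos, normal), colour in state.items():
        if pos[axis] == 1:
            for _ in range(quarters):
                pos, normal = rot90(pos, axis), rot90(normal, axis)
        new[(pos, normal)] = colour
    return new

def counts(max_depth):
    start = solved()
    seen = {frozenset(start.items())}
    frontier, result = [start], [1]
    for _ in range(max_depth):
        nxt = []
        for s in frontier:
            for axis, q in product(range(3), (1, 2, 3)):   # the 9 moves
                t = turn(s, axis, q)
                key = frozenset(t.items())
                if key not in seen:
                    seen.add(key)
                    nxt.append(t)
        result.append(len(nxt))
        frontier = nxt
    return result

print(counts(2))   # expected [1, 9, 54]
```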
 
I've made a clanger here with this formula:
$$ \frac{dS}{dn_i} = k \sum_{i=1}^{n} \frac{\partial \ln W}{\partial n_i} $$

The $$ dn_i $$ should properly be on the RHS as a multiplicand in the sum; I was thinking too discretely. With a discrete 'solution', though, having a sum over i is OK for the $$ dn_i $$, and then it partitions dS, you see.

You can see that there's an analog of a Carnot cycle, in a single-face group of "positions and momentums" for 1/2 of the atomic cube, when (the slice number) N > 1. The number N acts on edges, then faces, then cubes when it's $$ N^3 $$. Rotation of a single face for any N is a strict 4-cycle of edges and corners, which is 'trivially' reversible.

The usual way to discretize a continuous formula, a work function of some kind, is to introduce discrete partitions and assume equilibria exist (in a general Carnot-cycle); this is the notion of temperature T. A degree of temperature is a word w, with a certain bit length; to represent continuity the words are either large depth (bit length) or large 'width' in terms of the number of samples per unit time - again a discrete "time or temperature" value is assumed to exist, in order to parameterize heat and the flow of heat.
 
You can look at the work function in a discontinuous way, by cycling small amounts of work. If you alternately "do work" in the same direction - clockwise or anticlockwise - on two adjacent faces, by making small displacements, eventually the faces and the rest of the cube are locked up.

$$ \frac{\partial \ln W}{\partial n_i} $$ where i is an ith position.

You expand something - a gap appears between a corner section and three others - and a stack of pieces gets twisted or skewed along one edge. In the N = 3 cube you get a row or column of 2 corners joined by an edge piece. You can do this manoeuvre because of the slight tolerances in the spaces between individual pieces, the 'inner' faces.

This is like coupling two Carnot cycles (or combustion engines) improperly; the alternate cycles need to complete individually (i.e. find a corner or edge, where 'real' displacements vanish) before alternating. This is the Singmaster word XY, where X and Y are any two adjacent 'letters' in the heuristic alphabet (BFURDL).

Edit: to be precise, the alphabet should be written as a list of adjacent faces, so the above is true if any two letters in (URFDLB) are chosen, including cyclic permutations. This is since any pair of opposite-face 'letters' is a distance of 2 apart in the cycle.

The face 'metric' is four edges joining four adjacent faces (which vanish when you look at or project only one face). Each edge has an evenly-sliced 'metric' for any inner slices, in the Rubik's cube group. The edges in the original are a fractional metric of edge-distance. There are no edge pieces in the 2-cube, but the cube still has edges, in the set E; these are the $$ e_0 $$ in the dynamics. Obviously, an edge is a 'root' of a face; the sliced edges are also roots of a nullspace. You get to see this by doing the "wiggle" move above, slightly skewing two alternate faces repeatedly until the mechanical limit is reached. This is the shear-stress component you try hard to avoid when speed-solving (running at maximum efficiency); you don't want any lockups or any of the inner sections 'catching' each other.
 
Why does the original cube resemble so many physical "systems"? Because, for some reason, such devices are multi-mnemonic.

They are physical systems that encapsulate a lot of 'simple' ideas, and how a simple arrangement and rearrangement of a stack of elements (rules and moves that follow them) can lead to complex behavior.

A "game" being played at the moment is, estimating how long will it take to enumerate all possible permutations of the 3x3x3 cube, how much memory space and processor time is needed? And is there a word that visits every node in the graph and how many letters is it?
 
And now, this:

The components in the group are designed to show 'compression', viz. the two-face differential functional that results in lockup. Here, because the faces are free to move in either of two directions, the lockup manoeuvre has a time-reversible unlocking functional. There is a real representation of pressure, with the skew 'derivative' and an application of stress and shear to the structure as a whole - the pressure is distributed.

So there is a compressive action - like gravity - that means none of the operators can continue to act in the same direction because of a mechanical stress limit. Continuing to apply pressure will result in shearing of the structure (a "supernova"). Anyways, if anyone has any objections to the following, please make them.

The Toccata:

Assume two generators, g1 and g2, that act in the same direction over G (g implies a geodesy), with small displacements dE > e0, s.t. dE/dt = dW, a work function, and:

dW/dt = dP where P is an intrinsic pressure variable, dW is extrinsic and t a nullcurve in (the geodesy of) G. Any nullcurve has 4 possible spacelike or timelike classes. That is, $$ t \in T $$, is a composite variable of time and space, which are special "inertialess" extrinsic components of G (see A. Einstein "On the Electrodynamics of Moving Bodies", Ann. der Phys. 1905).

A spacelike curve t is "in E" if it is also timelike. A timelike curve "takes t1 to t2" in a generalized Carnot sense of Th -> Tc. That is, there is a connection between the 'temperature' of a nullcurve and the 'time' measured to transfer or transport an energy E from t to t'.
So there is a direct connection between adiabatic (Carnot) expansion at "fixed entropy", i.e. $$ \hbar \nu $$, and adiabatic compression at "fixed temperature" i.e. ct = ct'.

The fugue:

g1 and g2 "work together" as a pair, to compress the components of G. Each 'step' = a partial differential (a sum over terms in g1g2) so that pressure is an integral over W - a circulation integral as in Boyle's ideal gas equations.
 