# (SR) Manifolds

So, in spite of an obvious lack of enthusiasm for this thread, I shall continue to the bitter end (already in sight!)

We have defined the General Linear Group $$GL(n, \mathbb{F})$$ of all invertible $$n \times n$$ matrices. We now want to show that this group is also a manifold. I proceed as follows.

Suppose we have a sequence of matrices in $$GL(n,\mathbb{F})$$, that is, a map from the positive integers into $$GL(n,\mathbb{F})$$ giving $$\mathbf{A}_1,\mathbf{A}_2,\ldots$$ (I will use the index $$k$$ below, to keep it distinct from the matrix size $$n$$).

I will say that the sequence $$(\mathbf{A}_k)$$ converges entrywise to the matrix $$\mathbf{A}$$ iff, as $$k \to \infty$$, each entry $$(\mathcal{A}_k)^i_j$$ of the $$k$$-th matrix $$\mathbf{A}_k$$ converges to the corresponding entry of $$\mathbf{A}$$. That is, $$(\mathcal{A}_k)^i_j \to \mathcal{A}^i_j$$.

Writing this in the usual way: for any $$\epsilon \gt 0$$ there is some $$N$$ such that, for all $$k \ge N$$ and all $$i,j$$, the criterion for convergence is

$$|(\mathcal{A}_k)^i_j - \mathcal{A}^i_j| \lt \epsilon$$. As I may take $$\epsilon$$ as small as I like, I can ensure that, provided $$\bf{A}$$ has non-vanishing determinant, then so does every matrix whose entries lie within $$\epsilon$$ of those of $$\bf{A}$$ (the determinant is a continuous function of the entries).

Under this circumstance, I will say that this $$\epsilon$$-ball is a neighbourhood of the matrix $$\bf{A}$$, whose coordinates are given by the numbers $$x^i_j = (\mathcal{A}_k)^i_j - \mathcal{A}^i_j$$, that is, $$GL(n,\mathbb{F})$$ has the structure of a manifold.

Quite obviously there are $$n \times n = n^2$$ of the coordinates $$x^i_j$$, so our manifold is of dimension $$n^2$$. It is therefore a submanifold of $$\mathbb{F}^{n^2}$$, and therefore inherits the subspace topology.
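Since we are now working inside $$\mathbb{F}^{n^2}$$, the convergence argument above is easy to check numerically. Here is a quick sketch in numpy (the matrix and the perturbation are my own choices, purely for illustration): perturbing an invertible matrix entrywise leaves the determinant non-zero, because the determinant is a continuous (polynomial) function of the entries.

```python
import numpy as np

# A concrete invertible matrix A (my own choice) and a sequence
# A_k -> A obtained by perturbing every entry by 1/k.  The entries
# converge, and since det is a polynomial (hence continuous) function
# of the entries, det(A_k) stays away from zero, so the whole tail of
# the sequence remains inside GL(2, R).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # det A = 1

dets = []
for k in range(1, 101):
    A_k = A + (1.0 / k) * np.ones((2, 2))
    dets.append(np.linalg.det(A_k))

# For this particular A, det(A_k) = 1 + 1/k, which tends to det A = 1.
```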

(Hmm, did I say what this is? Do I need to?)

*Sigh*

Let me see if I can grab the attention of the physics jocks here by talking about a couple of important Lie subgroups of the General Linear group $$GL(n,\mathbb{F})$$, the unitary groups and the orthogonal groups.

First we need to do a bit of operator theory. No - even firster, we need to stop babbling about the abstract generic field $$\mathbb{F}$$ and admit that we are only interested in the Real field $$\mathbb{R}$$ and the Complex field $$\mathbb{C}$$. So we distinguish the general linear Lie groups $$GL(n,\mathbb{R})$$ and $$GL(n,\mathbb{C})$$.

Note that in what follows, I will use the terms "operator", "transformation" and "matrix" interchangeably; accordingly I shall use the same notation for each.

Right. Let $$V$$ be a vector space over $$\mathbb{C}$$, and let $$A: V \to V$$ be a linear transformation, represented as before by a matrix with complex entries $$\mathcal{A}^i_j$$.

Further suppose the inner product $$(v,w)$$ is defined on $$V$$ - I will accordingly call $$V$$ an inner product space, IPS for short.

The operator $$B: V \to V$$ satisfying $$(Av,w) = (v,Bw)$$ is called the adjoint of the operator $$A$$, and can be shown to have matrix entries $$\mathcal{B}^i_j =\overline{\mathcal{A}}^j_i$$, where the bar denotes complex conjugation. Note that, by this construction, $$B$$ is the transpose of the complex conjugate of $$A$$. That is $$B = (\overline{A})^T$$.

So, I will write $$A^{\dagger} = (\overline{A})^T$$ for the adjoint operator to $$A$$.
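This adjoint identity is easy to verify numerically. A sketch in numpy (the random matrix and vectors are my own choices), using the Hermitian inner product $$(u,w) = \sum_i \bar{u}_i w_i$$, which is what `np.vdot` computes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix A and its adjoint B = conjugate-transpose.
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = A.conj().T          # A-dagger

# Check the defining property (Av, w) = (v, Bw) for random vectors,
# with the inner product conjugate-linear in its first argument.
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = np.vdot(A @ v, w)   # (Av, w)
rhs = np.vdot(v, B @ w)   # (v, A-dagger w)
```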

(as an aside, though I don't think I shall need it, the operator on a complex IPS satisfying $$H=H^{\dagger}$$ is called an Hermitian operator)

So, an operator $$U:V \to V$$ over $$\mathbb{C}$$ satisfying $$UU^{\dagger}= U^{\dagger}U=I$$ is called a unitary operator. Clearly this implies that $$U^{\dagger} = U^{-1}$$.

Unitary transformations have the nice property that they preserve the metric properties ("length", angle, say) of the IPS they act upon, that is $$(Uv,Uw)=(v,w)$$. They are isometries.

The set of $$n\times n$$ unitary matrices forms a Lie subgroup of $$GL(n,\mathbb{C})$$, and I will call this group $$U(n)$$, the unitary group. (In case it's not blindingly obvious why I make no reference to the field, it will be soon!)
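As a numerical illustration (my own construction, not part of the argument): the Q factor of a QR decomposition of a random complex matrix is unitary, and we can check the isometry property $$(Uv,Uw)=(v,w)$$ directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Manufacture a unitary matrix via QR decomposition of a random
# complex matrix: the Q factor is unitary.
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)

# U U-dagger = I, and U preserves the inner product: (Uv, Uw) = (v, w).
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

unitarity_err = np.max(np.abs(U @ U.conj().T - np.eye(n)))
isometry_err = abs(np.vdot(U @ v, U @ w) - np.vdot(v, w))
```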

An isometry satisfying the above, but which is simply a transposed matrix (no conjugation), is called an orthogonal transformation. I will use the notation $$OO^T= O^TO = I$$ to signify this fact. In like fashion, the set of all real orthogonal matrices forms a Lie subgroup of $$GL(n,\mathbb{R})$$, the orthogonal group $$O(n, \mathbb{R})$$.

In fact, although the Lie group $$O(n,\mathbb{C})$$ does exist, I believe it is somewhat pathological, so I will arbitrarily assert that $$O(n, \mathbb{R}) \equiv O(n)$$.

$$O(n)$$ and $$U(n)$$ have some familiar subgroups, with some nice properties.

Later for that.

OK, I had a bit of a pout late last night (which I have now deleted). Sorry about that.

So I said that the Lie groups $$U(n)$$ and $$O(n)$$ had some familiar subgroups, which is true. Let's start with the real group $$O(n)$$, as it is easier (in some ways) to get a handle on.

Recall that an orthogonal transformation is such that $$O^TO = OO^T =I$$. Note now the elementary fact that the determinant of the matrix $$I$$ is $$1;\;\det I=1$$.

Note further that, by an elementary property of determinants, $$\det O = \det O^T$$. Then I will find the following unremarkable fact:

$$\det OO^T = \det O \det O^T = (\det O)^2 = \det I = 1$$

hence $$\det O = \pm1$$. So $$O(n)$$ is the group of orthogonal matrices with determinant $$=\pm1$$. Now note that, viewed as a manifold, this group is not connected; the only path from matrices with positive unit determinant to those with negative unit determinant passes through those with zero determinant, and these are excluded from $$O(n) \subset GL(n,\mathbb{R})$$ by definition.

So I define the special orthogonal group $$SO(n)$$ as the group of real orthogonal matrices with determinant = 1. Evidently the identity is included here, so it is a group. As transformations, these are realized as rotations. Here is one on a 3-space;

$$\begin{pmatrix}\cos \theta&\sin \theta&0\\ -\sin \theta & \cos \theta&0\\ 0&0&1 \end{pmatrix}$$

a rotation through the angle $$\theta$$ around the $$z$$ axis in the $$x,y$$ plane. The other two rotations are found equivalently. We easily see that these matrices have positive unit determinant. I will, like all others, call this Lie group $$SO(3)$$.
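A quick numerical check of these claims (the angle is an arbitrary choice of mine):

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta about the z axis: an element of SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rot_z(0.7)
orth_err = np.max(np.abs(R.T @ R - np.eye(3)))   # orthogonality: R^T R = I
detR = np.linalg.det(R)                          # should be +1
```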


That is so cool! I had no idea. The lightbulb has finally turned on. Thank you!

You're welcome; if you thought that was cool, wait until you see how we recover the group from its algebra (if I live that long!). That's down the road a bit, though.

Anyway, I had planned to talk about the unitary groups, but I think I should really say a few words about the Lorentz group. This will be of interest to the physics guys, no doubt, who could probably do a better job of explaining it than me.

So, recall that for any pair of vectors $$v,w,\; v \ne w$$, the condition that they are orthogonal is given by the requirement that the inner product vanishes; $$(v,w) = 0$$. Recall also that in general, for some space to qualify as an inner product space, we require the inner product to be positive-definite: $$(v,v) \ge 0$$, with equality only when $$v = 0$$.

Suppose we think of Euclidean 3-space $$\mathbb{E}^3$$ as an inner product space in the above sense, with the set $$B=\{e_1, e_2, e_3\}$$ as orthogonal basis vectors. Assume these are normalized, i.e. unit vectors. Then obviously $$(e_i,e_j)= 0$$ when $$i \ne j$$, and 1 when $$i = j$$. Notice that, since $$O$$ is an orthogonal transformation, $$(Oe_i,Oe_j)=(e_i,O^TOe_j)=(e_i,e_j)$$, so these inner products are preserved.

I will define the matrix (not really a transformation in this case) $$g_{ij}$$ of all pairwise inner products of $$B$$. Then the property above can be encoded in the Kronecker delta as $$g_{ij} = \delta_{ij}$$ where
$$\delta _{ij} = \begin{cases}1\; \text{if}\; i=j\\0\;\text{if}\; i \ne j \end{cases}$$.
The matrix $$g_{ij}$$ is obviously
$$\begin{pmatrix}1&0&0\\0&1&0\\0&0&1 \end{pmatrix}$$

I will call $$g_{ij}=\delta_{ij}$$ the Euclidean metric on this space (ah, yes, any inner product space is a metric space - not v.v).

However, it seems there are spaces where a pseudometric is desirable, where the inner product is not positive-definite; that is, for any set of orthogonal and normalized basis vectors, the diagonal entries in the matrix $$g_{ij}$$ may be positive or negative (but never zero).
Then obviously $$g_{ij} = \delta_{ij}$$ no longer holds, rather I need to define a new metric.

From this it follows that, as the inner product of any two vectors expands, via their components, in terms of the inner products $$g_{ij}$$ of the basis, no inner product need be positive-definite (this is a fudge, I know - I can explain if anyone really wants)

Let $$p$$ count the positive diagonal entries in this matrix, $$q$$ the negative. Then the orthogonal transformation on a pseudo-inner product space $$O: M^n \to M^n$$ is represented by the real Lie group $$O(p,q)$$. A vector 4-space where $$p = 3,\; q = 1$$ is called a Minkowski space, whose metric $$g_{ij} =\eta_{ij}$$ is, astonishingly, called the Minkowski metric ; you may think of this vector space as spacetime.
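To make this concrete, here is a hedged numpy sketch (the coordinate ordering $$(x,y,z,t)$$ and the rapidity value are my own choices): the Minkowski metric as a diagonal matrix with $$p = 3$$ positive and $$q = 1$$ negative entries, and a boost $$L$$ along $$x$$ satisfying the defining condition $$L^T \eta L = \eta$$ of $$O(3,1)$$.

```python
import numpy as np

# Minkowski metric with signature (+, +, +, -), i.e. p = 3, q = 1,
# ordering the coordinates as (x, y, z, t).
eta = np.diag([1.0, 1.0, 1.0, -1.0])

def boost_x(phi):
    """Lorentz boost along x with rapidity phi (a hyperbolic rotation
    in the x-t plane)."""
    ch, sh = np.cosh(phi), np.sinh(phi)
    L = np.eye(4)
    L[0, 0] = ch; L[0, 3] = -sh
    L[3, 0] = -sh; L[3, 3] = ch
    return L

L = boost_x(0.5)
# The defining property of O(3,1): L^T eta L = eta.
metric_err = np.max(np.abs(L.T @ eta @ L - eta))
detL = np.linalg.det(L)   # +1: this boost is in the restricted group
```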

Then the Lie group $$O(p,q) =O(3,1)$$ is called the unrestricted Lorentz group. To see what this means, I have to go back a bit and plug a hole in my last post.

Your thread has given me inspiration to look into SU(n) a bit, and I found that SU(2) is related to the unit quaternions. That's even cooler because I have experience with quaternion math, and I actually understand them half decently!

I've also read that the group of unit octonions is related to SU(3).

Anyway, I don't know much about this kind of thing, but it's very interesting, and I owe you for introducing me to them. Thank you.

shalayka said:
Your thread has given me inspiration to look into SU(n) a bit, and I found that SU(2) is related to the unit quaternions. That's even cooler because I have experience with quaternion math, and I actually understand them half decently!
Well good. There is a slight wrinkle to the group $$U(n)$$; it seems that, although the entries in any matrix in $$U(n)$$ are complex numbers of the form $$z = x + iy$$, nevertheless this is a real Lie group. I don't fully understand why. See if your studies throw any light on this, and report back!
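One small observation that may help (a toy case, not a full answer): already for $$n = 1$$ the group $$U(1) = \{e^{i\theta}\}$$ has complex entries, but it is swept out by a single real parameter $$\theta$$, so it is a real 1-dimensional manifold (the circle). In general $$U(n)$$ has real dimension $$n^2$$. A trivial check:

```python
import numpy as np

# U(1) is the set of 1x1 unitary matrices, i.e. complex numbers z with
# z * conj(z) = 1.  Every such z is e^{i*theta} for one REAL parameter
# theta, so the group is a real 1-dimensional manifold (the circle),
# despite its complex entries.
thetas = np.linspace(0.0, 2 * np.pi, 100)
zs = np.exp(1j * thetas)

unit_err = np.max(np.abs(zs * np.conj(zs) - 1.0))
```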

Anyway, just in case anyone is taking this self-promotional stuff seriously, I should point out a potentially misleading omission.

I said
Quite obviously there are $$n \times n = n^2$$ of the coordinates $$x^i_j$$, so our manifold is of dimension $$n^2$$. It is therefore a submanifold of $$\mathbb{F}^{n^2}$$, and therefore inherits the subspace topology.

I should have gone on to say that every point of $$GL(n,\mathbb{F})$$ has a neighbourhood homeomorphic to an open subset of $$\mathbb{F}^{n^2}$$, when the latter is viewed as a topological space. This is the definition of a manifold, right enough. It also allows me to make the following assertion.

I will set $$\mathbb{F}^{n^2} = \mathbb{F}^m$$ to simplify notation. Then, fairly obviously, any tangent (vector) space at any point in $$\mathbb{F}^m$$ is all of $$\mathbb{F}^m$$. So we might just as well choose the tangent space at the origin, in full knowledge that this will be equivalent to any other tangent space.

From this I infer that, since the group $$GL(n,\mathbb{F})$$ is the group of matrices with the single property of invertibilty, the tangent space at the identity $$e$$ of $$GL(n, \mathbb{F})$$, and at all other points, are matrices with this property.

That they are not elements in $$GL(n,\mathbb{F})$$ is less easy to see, but I have to run. Maybe later.

OK, I was just cranking up to the Lie algebra, but I see I left this promise hanging.....
Then the Lie group $$O(p,q) =O(3,1)$$ is called the unrestricted Lorentz group. To see what this means, I have to go back a bit and plug a hole in my last post.
Right. Recall I said that the Lie group $$O(n)$$ is the group of all orthogonal matrices with positive or negative unit determinant. Recall also I defined the subgroup $$SO(n) \subset O(n)$$ as those matrices with positive unit determinant, and claimed that this group is realized as the rotation group. Then a natural enough question is: what is the residuum of $$O(n)$$ after $$SO(n)$$ has been "removed". Or, to put it more sensibly, what is $$O(n) \setminus SO(n)$$, i.e. those matrices with negative unit determinant?

First thing to notice is that, since $$SO(n)$$ ran off with the identity of $$O(n)$$, then $$O(n)\setminus SO(n)$$ cannot be a group. I will state, without argument, that, whereas $$SO(n)$$ is the group of rotations on some n-space, the elements of $$O(n)\setminus SO(n)$$ represent rotations composed with reflections (inversions).
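A tiny numerical illustration of why the complement is not closed under multiplication (the matrices are my own choices): the product of two determinant $$-1$$ matrices has determinant $$+1$$, i.e. it lands back in $$SO(n)$$.

```python
import numpy as np

# Two reflections in O(3) \ SO(3): each has determinant -1.
P = np.diag([-1.0, 1.0, 1.0])   # reflect x
Q = np.diag([1.0, -1.0, 1.0])   # reflect y

det_P = np.linalg.det(P)        # -1
det_Q = np.linalg.det(Q)        # -1
det_PQ = np.linalg.det(P @ Q)   # +1: the product falls back into SO(3)
```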

Returning now to the unrestricted Lorentz group $$O(3,1)$$, and applying the same argument (as I must), I will say that the restricted Lorentz group $$SO(3,1)$$ is the group of Lorentz transformations which do not allow inversions (oh, it's a simple exercise with pencil and paper to show that Lorentz boosts are hyperbolic rotations of the coordinates).

I assume this means, and I am willing to be corrected by a physicist, that, where $$O(3,1)$$ acts on a Minkowski 4-space in which it is possible to move freely in that space, the restricted Lorentz group assumes that it is only possible to move in one direction in spacetime. Hm. I wonder if this is at all controversial - it's not meant to be!

(in fact, since it was Ben first introduced these groups, I insist that he comment!)

First, an apology.
From this I infer that, since the group $$GL(n,\mathbb{F})$$ is the group of matrices with the single property of invertibilty, the tangent space at the identity $$e$$ of $$GL(n, \mathbb{F})$$, and at all other points, are matrices with this property.
This is false; or rather, the implied assertion that elements of the tangent space at the identity have non-zero determinant is false. We need to look at the Lie algebras to see why.
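Jumping slightly ahead, here is a numpy sketch of the point (the series-based matrix exponential and the example matrix are my own scaffolding): the exponential of a matrix $$X$$ is always invertible, since $$\det(\exp X) = e^{\operatorname{tr} X} \ne 0$$, even when $$X$$ itself is singular. So the tangent vectors that exponentiate into the group need not themselves be invertible.

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential via the truncated power series - adequate
    for the small, modest-norm matrices used here."""
    S = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        S = S + term
    return S

# A singular matrix X: det X = 0, so X is NOT in GL(2, R)...
X = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# ...yet exp(X) is invertible, since det(exp X) = e^{tr X} != 0.
det_X = np.linalg.det(X)
det_expX = np.linalg.det(expm(X))
predicted = np.exp(np.trace(X))      # e^2
```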

So. I can do this by fiat or thoroughly..... let's try the fiat approach first, then maybe get to the proper stuff. Maybe.

So, let $$G$$ be a generic Lie matrix group, with $$g \in G$$. By our definitions, the group $$G$$ is also a differentiable manifold, to which I can associate a space of directional derivatives, vectors, at any point $$g$$ which I called the tangent (vector) space $$T_gG$$.

Now, $$G$$ is also a group, which means it must have an identity - let's call this $$e$$ - and a vector space $$T_eG$$ there. Then I will say that $$T_eG$$ is a Lie algebra iff, for any vectors $$X,\;Y \in T_eG$$, the commutator $$[X,Y] = XY - YX$$ is also a vector in $$T_eG$$. Skew-symmetry $$[X,Y] = - [Y,X]$$ follows immediately from this, as does the Jacobi identity for any $$X,\;Y,\;Z \in T_eG$$:

$$[X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0$$, as we saw in an earlier post.

Since $$X,\;Y,\;[X,Y]$$ are all elements in $$T_eG$$, the map $$[\quad,\quad]: T_eG \times T_eG \to T_eG,\;\;[\quad,\quad](X,Y) = [X,Y]$$ is clearly bi-linear. Under these circumstances, I will therefore offer the definition:

A Lie algebra $$\mathfrak{G}$$ is a vector space, together with a skew-symmetric bi-linear map $$[\quad,\quad]:\mathfrak{G} \times \mathfrak{G} \to \mathfrak{G}$$ that satisfies the Jacobi identity.
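The two conditions in this definition can be checked numerically for the matrix commutator (the random matrices are my own choice):

```python
import numpy as np

rng = np.random.default_rng(2)

def bracket(X, Y):
    """The matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# Skew-symmetry: [X, Y] = -[Y, X].
skew_err = np.max(np.abs(bracket(X, Y) + bracket(Y, X)))

# Jacobi identity: [X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0.
jacobi = (bracket(X, bracket(Y, Z))
          + bracket(Z, bracket(X, Y))
          + bracket(Y, bracket(Z, X)))
jacobi_err = np.max(np.abs(jacobi))
```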

I should warn of two (at least) serious defects in this fiat approach: most importantly, in $$[X,Y] = XY - YX \in \mathfrak{G}$$, the operation $$XY$$ is not defined. Second, it will not be obvious to you why the vectors $$X,\;Y,\;[X,Y]$$ are matrices.

This is the price one pays for simplifying, I guess.

Actually, on reflection, I think I have gone as far as I want with this thread. So, unless anyone has any comments or queries, consider it a done deal.

So, fukkit, I am bored, and anyway, I never keep my promises. Plus, there seems to be quite a lot of nonsense in this sub-forum at present . So read this or not, I don't really care.

OK, I gave a fiat definition of a Lie algebra, and also hinted why it was slightly unsatisfactory. I now propose a rather more rigorous definition. But first we need to agree on a few definitions.

In fact, no - first we need this. Recall I said the Lie group $$SO(n)$$ was realized as a rotation group. In fact, realizations of Lie groups are not unique; every Lie group has at least two realizations.

Consider the Lie group $$SO(3)$$. I can think of this as a set of maps on the 2-sphere as follows: Let $$S^2 = \{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\}$$ be the unit sphere embedded in $$\mathbb{R}^3$$. Then the realization of $$SO(3)$$ as a rotation group means that, for a rotation $$\theta$$ around the $$x$$-axis, for example,

$$x' = x$$
$$y' = y \cos \theta - z \sin \theta$$
$$z' = y \sin \theta + z \cos \theta$$

(This is just a funny way of writing the familiar rotation matrix). But notice that a simple calculation shows that $$x'^2 + y'^2 + z'^2 = 1$$, so the image point is still "on" the sphere $$S^2$$; that is, to any element in $$SO(3)$$ there corresponds a map $$S^2 \to S^2$$.

But, on the other hand, since the coordinate set $$\{x, y, z\}$$ can be regarded as an element of $$\mathbb{R}^3$$, then I can just as easily think of $$SO(3)$$ as a map $$\mathbb{R}^3 \to \mathbb{R}^3$$. This then is another realization of this group. But since, unlike $$S^2$$, we know that $$\mathbb{R}^3$$ is a vector space, the map $$\mathbb{R}^3 \to \mathbb{R}^3$$ is a linear transformation, which we saw can be written as a matrix.

I make the definition: The realization of a Lie group as the set of transformations on a vector space is referred to as a matrix representation of the group.

In fact, every Lie group has at least one such matrix representation, as defined above - that is, the realization of the group as transformations on its own tangent (vector) space. This is called the adjoint representation of the group.

Anyway, to continue. It is customary, with certain provisos, to identify the transformations on the tangent space of the Lie group $$G$$ with its algebra $$\mathfrak{G}$$

So, let's do some arithmetic!

Maps between arbitrary groups of the form $$f: G \to H$$ with the property that, for all $$g,k \in G$$, that $$f(g \cdot k) = f(g) \cdot f(k) \in H$$, and where the centre dot denotes a generic group operation, are referred to as group homomorphisms.

I will call the homomorphism $$\varphi: G \to G$$ an endomorphism. The set of all endomorphisms of this form will be denoted as $$\text{End}(G)$$

I will call the endomorphism $$\psi: G \to G$$ an automorphism if it is a bijective homomorphism whose inverse is also a homomorphism, i.e. an isomorphism $$G \to G$$. The set of all automorphisms of this form will be denoted as $$\text{Aut}(G)$$ (this set is a group)

It starts to get a bit hairy now, so fasten seat belts.

Let $$G$$ be a Lie matrix group, with $$g \in G$$ a matrix, obviously. I define the map $$\Psi: G \to \text{Aut}(G)$$, writing $$\Psi(g) \equiv \Psi_g \in \text{Aut}(G)$$, in such a way that $$\forall g,h \in G,\; \Psi_g(h) = g\cdot h \cdot g^{-1}$$. It is easy to see each $$\Psi_g$$ is an isomorphism (isn't it?). Here the centre dot is, of course, matrix multiplication. (I write the image of $$g$$ under $$\Psi$$ as $$\Psi_g$$ for aesthetic reasons only: it is a map, and this will simplify notation. But note well: this map depends absolutely on my choice of $$g$$.)

This is weird, right? Clearly, as both $$G$$ and $$\text{Aut}(G)$$ are groups, $$\Psi$$ is a group homomorphism. So I have a map that takes an object to a set of isomorphisms from that object to itself! Don't worry - it gets worse (or better, depending on your point of view)

Anyhow. This is a nice map; notice that, by the elementary properties of identity and inverse, the identity of $$G$$, let's call it $$e$$, is fixed under this map, i.e. $$\Psi_g(e) = g \cdot e \cdot g^{-1} = e$$. This need not be true for other elements of $$G$$, that is, $$g \cdot h \cdot g^{-1}$$ need not equal $$h$$, and in general it won't.

Now recall that I found my tangent vectors to the Lie group/manifold as (directional) derivatives at the "point" $$g$$, and asserted that these made up the tangent space $$T_gG$$ at $$g$$. I now claim that, by the same token, the differential of $$\Psi_g$$ at the point $$e$$ will "lift" the map $$\Psi_g: \;G \to G$$ to a map $$T_eG \to T_eG$$ (this works at $$e$$ precisely because $$\Psi_g$$ fixes $$e$$). Let's see that. By the above suppose $$\Psi_g(h) = g \cdot h \cdot g^{-1} = k$$. Let's write $$dk$$ to denote a tangent vector at $$k$$. Then $$d(\Psi_g(h)) = dk$$, by which I justify my claim.

And now define the differential of this map at $$e$$ as $$(d\Psi_g)_e = Ad_g:\;T_eG \to T_eG$$. This is an automorphism, i.e. $$Ad_g \in \text{Aut}(T_eG)$$, and the assignment is $$Ad: G \to \text{Aut}(T_eG)$$, where $$Ad(g) \equiv Ad_g$$ as before. ($$Ad$$, of course, simply means the adjoint map.)

This is the realization of the Lie group $$G$$ as a transformation on its own tangent space, i.e. the adjoint representation of $$G$$. This is also a nice map, but it still depends on my choice of $$g$$.

So I go one step further, and take the differential of the adjoint map $$Ad$$. Now the group $$\text{Aut}(T_eG)$$ is of course a subset of the vector space $$\text{End}(T_eG)$$ of all endomorphisms $$T_eG \to T_eG$$. Linear transformations/matrices, in the present context, are going to be elements in a Lie group, so $$\text{Aut}(T_eG)$$ is also a Lie group, whose tangent space at the identity can be identified with $$\text{End}(T_eG)$$. Then the differential $$d(Ad) = ad:\;T_eG \to \text{End}(T_eG)$$ again "lifts" the isomorphisms in $$\text{Aut}(T_eG)$$ to the endomorphisms in $$\text{End}(T_eG)$$.

This defines the algebra we seek. Hmm, I'm not sure I explained that last bit very well

Let the vectors $$X,\;Y \in T_eG$$. Then the image of $$X$$ under $$ad$$ is $$ad(X) \equiv ad_X \in \text{End}(T_eG)$$, as before. But now, $$ad_X$$ is a transformation $$T_eG \to T_eG$$, and so has the right to act on $$T_eG$$; i.e. we get a function of two variables, say $$X$$ and $$Y$$.

One can then define $$ad_X(Y) =[X,Y]$$, that is the bilinear map defined as $$[\;,\;]:\;T_eG \times T_eG \to T_eG\; \forall X,\;Y$$. Skew symmetry and Jacobi identity don't exactly follow instantly, but they do follow.
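This last identity can be tested numerically (the series exponential, the finite difference, and the random matrices are my own scaffolding, not part of the derivation): differentiating $$Ad_{\exp(tX)}(Y)$$ at $$t=0$$ does indeed give the commutator $$[X,Y]$$.

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential via truncated power series (adequate here)."""
    S = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        S = S + term
    return S

def Ad(g, Y):
    """Adjoint action of a group element g on a tangent vector Y."""
    return g @ Y @ np.linalg.inv(g)

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

# Differentiate Ad along the curve g(t) = exp(tX) at t = 0 by a
# central difference; the result should be ad_X(Y) = [X, Y].
t = 1e-6
numeric = (Ad(expm(t * X), Y) - Ad(expm(-t * X), Y)) / (2 * t)
exact = X @ Y - Y @ X

err = np.max(np.abs(numeric - exact))
```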

And that, dear friends, is how you derive the Lie algebra $$\mathfrak{G}$$ from the Lie group $$G$$ (should you ever want to!)

Ya, well, I didn't explain that very well, I guess, though I think what I said is correct (someone with another view is welcome to say!)

Let me try with a simple analogy, which will be, of necessity, rather weak.

Start with a vanilla abelian group $$G$$, written additively: closed under addition, with additive inverses and an additive identity.

Associate to $$G$$ a scalar field, with the property of scalar multiplication, and any other axioms we might require to make $$G$$ into a vector space; $$G \to V$$.

The set of all linear transformations $$V \to V$$ is a vector space (I called it $$\text{End}(V)$$), which is an elementary fact in the theory of linear vector spaces (in some of my books "left as an exercise for the reader"!). In such books, one often finds sections entitled "The algebra of Linear Transformations", where we learn that the vector space of endomorphisms $$V \to V$$ is, indeed, an algebra.

So, here the analogy starts to break down. Any element in $$\text{End}(V)$$ is generally given a label, like $$A$$ (arbitrary), $$O$$ (null), $$I$$ (identity) etc. But this is merely a convenience.

We can describe the action of an endomorphism by writing down, in matrix form, what it "does" to an element in $$V$$. And in order to start doing that, I will need to "inject" a vector from $$V$$ into the space $$\text{End}(V)$$; this was my map $$V \to \text{End}(V)$$.

And now the analogy fails totally; there is no way on earth that I can think of this space, and its associated algebra, as the algebra of my starting vanilla group $$G$$

In the case that $$G$$ is a Lie group, I can, and that's why I took the route that I did.

Anyway, now I have to go and try and divide zero by zero, wake up a few conscious photons and ponder on the nature of light; it's all go-go-go round here!