Tensors and moment of inertia

Discussion in 'Physics & Math' started by arfa brane, Dec 16, 2017.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Given a rigid body with a homogeneous mass density, rotation about its center of mass 'produces' a moment of inertia.

    This can be calculated if the radius of gyration, R, is also known, using Steiner's formula (if you know what it is, which you should if you studied 1st year physics).
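    As a numerical sketch of the relation between the radius of gyration and the moment of inertia (the values and function names here are mine, purely for illustration), together with Steiner's parallel-axis theorem \( I = I_{cm} + Md^2 \):

    ```python
    import numpy as np

    # Radius of gyration K relates mass and moment of inertia: I = M * K**2.
    # Steiner's (parallel-axis) theorem: I_axis = I_cm + M * d**2 for an axis
    # a distance d from the one through the centre of mass.

    def moment_from_gyration(M, K):
        """Moment of inertia from total mass M and radius of gyration K."""
        return M * K**2

    def parallel_axis(I_cm, M, d):
        """Steiner's theorem: shift the rotation axis by a distance d."""
        return I_cm + M * d**2

    # A uniform disk of mass 2 kg, radius 0.5 m: I_cm = (1/2) M R^2,
    # so its radius of gyration is R / sqrt(2).
    M, R = 2.0, 0.5
    I_cm = 0.5 * M * R**2
    K = R / np.sqrt(2)
    assert np.isclose(moment_from_gyration(M, K), I_cm)

    # Moment about a parallel axis through the rim (d = R): (3/2) M R^2.
    I_rim = parallel_axis(I_cm, M, R)
    ```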

    My question is: is the moment of inertia a (1,1) tensor, a (2,0) tensor, or a (0,2) tensor?
    Explain your answer.

    Also explain how the tensor relates angular velocity to centripetal acceleration. (heh heh)
     
  3. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    Before anyone attempts to answer your question, it might be helpful if you explained exactly what a tensor is, and in particular the difference between the various tensors you mention.

    And while you're at it, you might like to explain what is meant by the assertion that a physical property (or phenomenon) literally IS a mathematical object (if indeed that's what a tensor is).
     
  5. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    About "my" question. Actually it's a question posed by Frederic P Schuller in lecture 3 of the Winter School on Gravity and Light.

    I suppose one could start at about 54 minutes into that lecture where he begins to define tensors and multilinear maps. He asks the question I've asked here also, somewhere after that. So far I've looked at my old physics textbook's definition and derivation of moments of inertia. In that text it seems the mathematics indeed describes physical properties like angular velocity, angular momentum etc.

    The former two objects are indeed rank 1 tensors, as you no doubt know.
     
  7. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    According to Wikipedia:
    The dot product of two vectors is a rank 2 tensor because it 'acts' on two rank 1 tensors, both contravariant (?). It maps the two vectors to a scalar. Likewise with the cross product, except that maps two vectors to a third vector. Dr Schuller says that in undergraduate physics you usually aren't told about the difference between vectors and covectors, treating both as equivalent objects.

    Returning therefore to my undergrad text, I see that:

    A rigid body rotating about an axis Z with angular velocity \( \vec \omega \) has each of its parts orbiting about Z in a circular orbit. Choose some part (or particle) \( A_i \) describing a circle with radius \( R_i \) relative to Z, such that \( \vec r_i \) is the position vector of \( A_i \) relative to an origin (the centre of mass) O.

    (For the sake of simplicity the rigid body could be a hollow sphere and \( A_i \) a point chosen somewhere on its surface)

    Then \( A_i \) has a (linear) velocity \( \vec v_i = \vec \omega \times \vec r_i \). Are the terms on the right vectors or covectors, and how do you tell the difference?
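    A quick numerical check of that relation (my own sketch with made-up values, using numpy): the speed of the particle should come out as the angular speed times its distance \( R_i \) from the axis.

    ```python
    import numpy as np

    # A point on a rigid body rotating about the z-axis:
    # omega is the angular velocity vector, r the position of A_i relative to O.
    omega = np.array([0.0, 0.0, 2.0])   # rad/s, about z
    r = np.array([0.5, 0.0, 0.3])       # m

    v = np.cross(omega, r)              # linear velocity v = omega x r

    # The speed equals |omega| * R_i, where R_i is the distance from the z-axis.
    R_i = np.hypot(r[0], r[1])
    assert np.isclose(np.linalg.norm(v), np.linalg.norm(omega) * R_i)
    ```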
     
  8. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    This could hardly be more wrong. I don't know what the Wiki says, but the dot-product of 2 vectors is a scalar, by definition. It is often called the "inner product".

    A rank n tensor is the tensor product of n vectors. The tensor product is also known as the "outer product".

    Inner-outer. See the difference? Probably not, but I am not confident you would understand me if I tried to explain.

    Incidentally, there is a stupid (to me) convention that calls a "lone" vector as a rank 1 tensor and an even stupider (to me) one that calls a scalar a rank 0 tensor
     
  9. Dinosaur Rational Skeptic Valued Senior Member

    Messages:
    4,885
    Moment of inertia is a scalar for a rigid object. It is a measure of resistance to a change in angular velocity.

    It is analogous to ordinary inertia for an object, which is a measure of resistance to a change in linear speed.

    For example, a tight rope walker often carries a long pole, which increases his moment of inertia. Note that in order to fall off a tight rope, the walker must rotate in a plane approximately perpendicular to the tight rope.

    Mass further from the center of gravity has a higher moment of inertia than the same mass closer to the center of gravity.
     
  10. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Then I've misunderstood what the dot product represents, apparently. However, "what the Wiki says" is that the dot product is a relation between two vectors. The tensor is the relation, not just the RHS of an equation.

    Abstractly it's an object that 'inputs' two vectors and 'outputs' a scalar. I could represent this object as a black box with two inputs and one output, something like a Penrose tensor diagram--or could I? If not, why not? It seems like a completely reasonable thing to do.
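    The 'black box' picture can be written down directly (a minimal sketch, assuming the Euclidean metric; the function name is mine): a (0,2) tensor is a bilinear machine with two vector inputs and one scalar output.

    ```python
    import numpy as np

    # With the Euclidean metric the machine's matrix is the identity
    # (the Kronecker delta), and feeding in two vectors returns their
    # ordinary dot product.
    g = np.eye(3)

    def black_box(u, v, metric=g):
        """Bilinear map: two vectors in, one scalar out."""
        return u @ metric @ v

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    assert np.isclose(black_box(u, v), np.dot(u, v))
    ```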


    If the moment of inertia is a rank 2 tensor, it also outputs a scalar 'product', or have I got that all wrong too? Generally tensors are of type (p,q) right? So what exactly is the problem (or the stupidity) of p or q being 0?

    And, if I write down a real number, can I say it's the dot product of some pair of vectors? Is that what you mean? Does it all depend on which side of the equals sign you're talking about?
    Sorry if I'm being a bit condescending, possibly patronising, but:
     
    Last edited: Dec 17, 2017
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    How does Penrose (surely not a stupid person) represent the dot product?

    Like this:

    [Diagram: Penrose notation for the dot product -- a triangle (\( \beta \)) joined by a single vertical line (the index \( a \)) to a circle (\( \xi \))]



    \( \beta, \xi \) are obviously the vectors, but the dot product is matrix multiplication of a column vector by a row vector. The triangle has a line descending down to the circle, which I interpret as the triangle being the row vector and the circle as the column vector. The vertical line represents \( a \), the single index.

    Now vectors, in linear algebra, are usually column vectors, but the dot product transposes one of a pair of vectors to a row vector (nominally a covector), in this case \( \beta \). Since an inner product is a commutative relation, the vectors could be switched around, but not the shapes in the diagram, presumably, since these specify which is the row vector (transposed column vector) and which is the column vector. The dot product is a relation, not a number.

    In fact it represents the contraction of a pair of rank 1 tensors to a rank 0 tensor . . .
     
    Last edited: Dec 18, 2017
  12. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Ok, so I'm explaining what a tensor is, maybe. If Wikipedia is any kind of authority (I have no reason to question this, all I can do there is consult other authors and I do have some textbooks to hand, one is my linear algebra course notes from Waikato Uni, another is Tensor Analysis on Manifolds by Bishop & Goldberg).

    Vectors have an algebraic and a geometric meaning (or structure). In physics vectors always have a magnitude and a direction. The definition of a vector space, however, doesn't define direction as such, and a space of polynomials is a vector space (lots of mathematical objects can be elements of a vector space); what's the direction of a polynomial, then? And I've encountered stuff on the net that says vectors aren't necessarily objects with magnitude and direction, or indeed either of those things.
    An example might be what computer scientists (I guess I'm one of those) call a program vector which describes the state of a program in terms of variables. But then, programs generally are a one-way process, a directed path in an abstract graph or switching network (the latter could be defined as a directed graph, say).

    Anyways, the rubric is that vectors are just one type of tensor, namely (1,0) tensors. A (p,q) tensor has p contravariant and q covariant components, which are vectors--tensors have a kind of recursive definition. Both p and q are non-negative integers. Covectors are (0,1) tensors.

    As Wikipedia states, elementary examples of tensors are the vector dot and cross product. The dot (or inner) product applies to two vectors in the same Euclidean domain, which is to say, lying in the same 2-plane. The cross product applies to vectors in a 2-plane but is only defined in Euclidean 3-space because the product is perpendicular to that plane.

    Back to the dot product (or inner products in general). The two vectors are represented geometrically as having a common origin with an angle between:

    [Diagram: vectors A and B drawn from a common origin with angle \( \theta \) between them]


    In this diagram there is a projection of vector A onto B with (scalar) magnitude \( |A| cos \theta \). I'll claim this projection defines a relation between the vectors A and B, and that I can represent this relation with matrix algebra. That algebra is the tensor algebra.
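    That claim can be checked numerically (a sketch with made-up vectors, not from the text): the projection computed via the angle equals the one computed purely with matrix algebra.

    ```python
    import numpy as np

    # Projection of A onto B, |A| cos(theta), written two ways.
    A = np.array([3.0, 4.0])
    B = np.array([1.0, 0.0])

    cos_theta = A @ B / (np.linalg.norm(A) * np.linalg.norm(B))
    proj_with_angle = np.linalg.norm(A) * cos_theta

    # The same number from the algebra alone: (A . B) / |B|
    proj_with_algebra = (A @ B) / np.linalg.norm(B)
    assert np.isclose(proj_with_angle, proj_with_algebra)
    ```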

    Alrighty?
     
  13. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    No you are not, not even close. You are ignorant of the subject you are claiming to "teach" and are not only confused yourself, you are likely to confuse others.

    Please stop, as a service to the forum. Just remember there are others here who have studied this stuff for half a lifetime, and would be happy to share their knowledge if asked

    No
     
  14. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Please shut up.

    Unless you want to also point out why the Wikipedia page on tensors, here, is "not even close" when it states quite clearly that a vector is a (1,0) tensor, and that the inner product (dot product of two vectors in the same plane) is a (0,2) tensor. The cross product is a (1,2) tensor. Please share your knowledge and explain how wrong and misleading this is, noting carefully it's not me saying it.

    Clearly the dot product is not just a number, it's a (0,0) tensor after contraction.

    As in: \( \vec u\cdot \vec v = u^i v^j \delta_{ij} = Tr ( \vec u \otimes \vec v) \)
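    That identity is easy to check numerically (a sketch with arbitrary vectors): the contraction of the rank-2 outer product with the Kronecker delta, i.e. its trace, gives back the scalar dot product.

    ```python
    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])

    outer = np.outer(u, v)           # the (u ⊗ v) tensor, a 3x3 matrix
    dot_via_trace = np.trace(outer)  # contraction with delta_ij
    assert np.isclose(dot_via_trace, np.dot(u, v))
    ```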

    Why isn't this a tensor, is it algebraic or what? Enlighten me, and all those others here.
    Or just shut up.
     
  15. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Response v2: A (0,0) tensor can be considered a composition of two mathematical objects. They have no indices so there is nothing to raise or lower (make contravariant or covariant resp.), nothing to contract (or "expand"), you will need to introduce "new" objects to take this (0,0) object to something that can be described as an object with (a nonzero number, not negative, duh!, of) contravariant and covariant components.

    Physically, scalar components have a physical basis, a linear map from/to \( \mathbb {R} \) with kilograms, metres and seconds in it (ignoring electric charge for now). These are conveniently expressed in the mathematical sense, as units, none of which has the concept of a direction attached (extra structure you have to impose on the set of units). Units are good because we can tally them.

    Hence, a kilogram is a unit (0,0) "mass tensor", etc.

    Now introduce a contravariant vector: (1,0) and do something with the unit mass and this vector. If we say the vector is a velocity then all we need is scalar multiplication and we have a momentum, also a (1,0) tensor.

    The word "vector" is another name for a special case of a tensor with only contravariant components. "Covector" means an object which has only covariant components (or as you sort of pointed out, in the dual space). I suppose it's easier than saying (1,0) tensor, or (0,1) tensor; physics teachers, like Schuller does, could point this out early on, erm, maybe.
     
    Last edited: Dec 19, 2017
  16. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    What does contravariant mean, and why are vectors contravariant?

    I explain this to myself this way: if you draw an arrow on the ground and walk around it, the direction it points transforms continuously in the opposite direction to the one you walk in; this is contravariance.

    Covariance is actually a bit harder to explain because the arrow can also be assigned covariant components by choosing a reference frame which is not rectangular. But if you walk around the arrow in a rough circle you are making that (approximated) choice.
     
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    There's a question about tensor type in Schuller's 3rd lecture, mentioned above in post 3: he says an (r,s) tensor has r covectors and s vectors in it. This seems back to front, but it isn't when you consider that a tensor is also a multilinear map.

    A vector is a (1,0) tensor because that takes 1 covector and raises its index with the contravariant metric tensor. In notation, if \( A_i \) is a covector "in the first slot" of the tensor, then its dual is \( A^*_i = A_i g^{ij} = A^j \). I.e (V*)* = V.

    Although different indices are used, the dual of a covector has the same number of components, all with the same scalar values as the covector. And Schuller writes out explicitly a Cartesian product of r covectors as duals ("*"s) and s vectors as a tensor.
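    A numerical sketch of raising and lowering (with a made-up, non-Euclidean but constant metric, so the effect is visible): lowering an index with \( g_{ij} \) and raising it again with the inverse \( g^{ij} \) returns the original vector, illustrating (V*)* = V.

    ```python
    import numpy as np

    g = np.array([[2.0, 1.0],
                  [1.0, 3.0]])        # a symmetric, invertible metric g_ij
    g_inv = np.linalg.inv(g)          # the contravariant metric g^ij

    V = np.array([1.0, -2.0])
    V_lower = g @ V                   # A_i = g_ij V^j  (the covector)
    V_back = g_inv @ V_lower          # A^j = g^ij A_i  (raise the index again)
    assert np.allclose(V_back, V)
    ```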
     
    Last edited: Dec 20, 2017
  18. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    A comment about moments of inertia and the radius of gyration.

    The radius of gyration is the distance from the axis of rotation where all the mass could be, so the moment of inertia is the same.

    This radius can be easily calculated for a solid body such as a rigid disk, without any motion needed, so it's a time-independent quantity: the radius at which all the mass could sit, even if it's actually elsewhere, while keeping the same moment of inertia. But in physics you have to allow for an object like a disk or a ring to have more than one axis of rotation, and so an equation of motion would reflect that there are at least two such axes in three dimensions.

    For a ring rotating only such that all the mass is at a constant (or nearly constant) radius R, the radius of gyration K = R.
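    A quick numerical sketch of that last statement (values are mine): approximate a thin ring of total mass M and radius R by N equal point masses, compute \( I = \sum_i m_i r_i^2 \), and check that the radius of gyration \( K = \sqrt{I/M} \) equals R.

    ```python
    import numpy as np

    M, R, N = 1.0, 2.0, 1000
    angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    x, y = R * np.cos(angles), R * np.sin(angles)

    # Each point mass sits at distance R from the central (z) axis.
    I = np.sum((M / N) * (x**2 + y**2))
    K = np.sqrt(I / M)
    assert np.isclose(K, R)
    ```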
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    The Kronecker delta is a function, but the two numbers in its domain can be indices from an index set.
    An index set is generally a finite set of integers (GR's field tensors have indices that run from 0 to 3).

    So you have the delta as a linear function acting on indices. These indices, in tensor algebra, have a different meaning if they are upper (column) indices, than if they are lower (row) indices. However, in a Euclidean space, where for instance the dot product 'defines' a flat plane, the Kronecker delta can be a metric tensor with either two upper or two lower indices, and it can be a mixed tensor with one up, and one down.

    But when you multiply the components of a pair of vectors and sum, i.e. \( (u_1, u_2, . . ., u_n) \cdot (v_1, v_2, . . ., v_n) = u_1v_1 + u_2v_2 + . . . + u_nv_n \) it doesn't matter where the indices are. Why not? Multiplication and addition are commutative; all the above equation says is "this is how you define multiplication" for two n-tuples (of scalar elements).

    It can be seen then, that upper or lower indices are a convenience in a flat Euclidean space, but since Einstein realised space is not necessarily flat, you need to be able to distinguish between vectors which transform (from one coordinate system to another) contravariantly and covariantly. In the sum over terms above, raising or lowering indices with the Kronecker delta makes no difference; in order to locate the delta in a tensor space, you have to distinguish upper and lower indices. But this amounts to distinguishing a column vector from a row vector, or generally, a columnspace from a rowspace.

    Ed: note that since the Kronecker delta 'acts' as a function of two variables (the indices in expressions that have them) it's multilinear, specifically it maps two integers to the set {0,1}.
    When it has two upper indices it's a (2,0) tensor, when it has two lower indices it's a (0,2) tensor, and when it has one upper and one lower index it's a (1,1) tensor. The choice is yours.
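    As a small sketch of the delta as a function of two indices (my own code): it maps index pairs into {0, 1}, and over a finite index set it is just the identity matrix.

    ```python
    import numpy as np

    def kronecker(i, j):
        """Kronecker delta: 1 when the indices agree, 0 otherwise."""
        return 1 if i == j else 0

    # Tabulated over the index set {0, 1, 2} it is the identity matrix.
    delta = np.array([[kronecker(i, j) for j in range(3)] for i in range(3)])
    assert (delta == np.eye(3)).all()
    ```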
     
    Last edited: Dec 20, 2017
  20. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    This is totally garbled (again) and mostly wrong.

    If anyone is interested (which I doubt): just last year I made this thread, which seemed to interest arfa brane at the time; now he thinks he knows better.
     
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I'll just keep talking past the local skeptic.

    Back on to the dot product. This gives a way to define distance in Euclidean space. Revisiting the diagram representing a pair of geometric vectors:

    [Diagram: vectors A and B from a common origin with angle \( \theta \) between them, and a dashed line showing the projection]


    Although there's an apparent abuse of notation: |A| mathematically means the absolute value of A; in linear algebra the norm of a vector is written ||A||.
    The dashed line can be considered as one side of a right triangle, so it has a value given by |A|sinθ (recall that sine and cosine are orthogonal functions).

    It seems the dot product yields a pair of completely abstract orthogonal "coordinates" for one of the vectors (in the diagram this is \( \vec A \)). These can be said to be free of a choice (something Schuller is quite explicit about in vector fields) of external coordinates located anywhere else in the frame, so because the angle θ is an invariant, we might consider it a kind of choice of gauge (or maybe that's pushing the idea of a gauge too far).

    What happens when θ is zero? Then A rotates down onto B and is 'absorbed'; there is only B left in the frame. We can find the magnitude of B by taking the dot product: \( B \cdot B = B^i B^j \delta_{ij} \). But this is also \( \delta_{ij} B^i B^j \), and also \( B^i \delta_{ij} B^j \).
    And \( (B^i \delta_{ij}) B^j = B^i (\delta_{ij} B^j)\).

    But \( (B^i \delta_{ij}) B^j = B_j B^j \), because the 'metric tensor' lowers an index. Equivalently \( B^i (\delta_{ij} B^j) = B^i B_i\). Since this operation--the dot product of a vector with itself--gives a distance, that's a metric. Since you can have the metric tensor anywhere in the expression, it's completely abstract, you can determine that the magnitude, or length, |A|, is given by Pythagoras too, using the projection(s) as the orthogonal basis.
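    These manipulations can be verified numerically (a minimal sketch with a made-up vector): the delta can sit anywhere in the expression, and lowering an index with it leaves the components unchanged in Euclidean space.

    ```python
    import numpy as np

    delta = np.eye(2)                  # Euclidean metric / Kronecker delta
    B = np.array([3.0, 4.0])

    B_lower = delta @ B                # B_j = delta_ij B^i  (lower the index)
    norm_sq_1 = B @ delta @ B          # B^i delta_ij B^j
    norm_sq_2 = np.dot(B_lower, B)     # B_j B^j
    assert np.isclose(norm_sq_1, norm_sq_2)   # |B|^2 either way
    ```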

    But these too can be given a vector structure, just abstractly include unit vectors: \( \vec e_1|A| cos \theta, \vec e_2 |A|sin \theta \) and then use vector addition in the case θ is not zero.
     
    Last edited: Dec 21, 2017
    Thales likes this.
  22. QuarkHead Remedial Math Student Valued Senior Member

    Messages:
    1,740
    The only thing I am sceptical about is your qualification for running this thread.

    I linked to my thread of last year in the hope that you would see that the metric tensor that induces the inner product is indeed a type (0,2) tensor, but that it is NOT the inner product itself, which is a number. To confuse the two is like saying if \(f(x) = x^2\) then \(f = x^2\), which is clearly nonsense.
     
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    At that link you said nothing about this from me:

    They are, but you also need a thing that raises and lowers indices. What's it called again?

    You present the metric tensor as an object that "induces" an inner product. A type (p,q) tensor has p contravariant components and q covariant components (vectors have components which are projections onto a coordinate system), or it has p covectors and q vectors as components depending on which end of the map you are.

    I say the inner product (which as you put, is induced by the presence of the metric) isn't just a number too, but I mean the other end than you do I think.
     
    Last edited: Dec 21, 2017
    QuarkHead likes this.
