On Vector Subspaces

Discussion in 'Physics & Math' started by Anamitra Palit, Mar 13, 2021.

  1. Anamitra Palit Registered Senior Member

    We consider a linear vector space V of dimension n. W is a proper subspace of V. We take a vector 'e' belonging to V-W and N vectors y_i belonging to W, i=1,2,3,…,N, with N>>n, the dimension of V. The y_i obviously cannot all be independent, the number N being greater than the dimension of V, the parent vector space; k of the y_i vectors are taken to be linearly independent, where k is the dimension of W. The rest of the y_i are linear combinations of these k basis vectors of W.
    We consider sums
    \[\alpha_i=e+y_i;i=1,2…N\] (1)
    Now each alpha_i = e + y_i belongs to V-W. We prove it as follows.
    If possible, let alpha_i belong to W. We have
    \[e=\alpha_i-y_i=\alpha_i+(-y_i)\] (2)
    Both alpha_i and -y_i belong to W; therefore their sum e would belong to W. This contradicts our postulate that e belongs to V-W. Therefore each alpha_i belongs to V-W.
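    As a quick numerical check of this membership argument, here is a minimal sketch; the concrete model (V = R^4, with W taken as the vectors whose last two coordinates vanish) is an illustrative assumption, not part of the original argument:

    ```python
    import numpy as np

    def in_W(v, k=2):
        # Model W as the subspace of R^4 whose last (4 - k) coordinates vanish.
        return np.allclose(v[k:], 0.0)

    e = np.array([1.0, 2.0, 0.0, 3.0])   # nonzero last coordinate, so e is in V - W
    y = np.array([5.0, -1.0, 0.0, 0.0])  # y lies in W

    alpha = e + y
    # If alpha were in W, then e = alpha + (-y) would also be in W, a contradiction.
    assert in_W(y) and not in_W(e)
    assert not in_W(alpha)
    print("alpha =", alpha, "lies in V - W:", not in_W(alpha))
    ```

    Any e with a nonzero component outside W behaves the same way, since alpha - y recovers e.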
    Next we consider the equation
    \[\Sigma_{i=1}^{i=N} \left(c_i \alpha_i\right)=0\] (3)
    \[\Rightarrow \Sigma_{i=1}^{i=N} \left(c_i\left(e+y_i\right)\right)=0\]
    \[\Rightarrow e\Sigma_{i=1}^{i=N}c_i=-\Sigma_{i=1}^{i=N}c_i y_i\] (4)
    The right side of (4), being a linear combination of vectors from W, belongs to W, while the left side belongs to V-W: if the left side belonged to W, then (1/Sigma c_i)(Sigma c_i)e = e would belong to W, which is not the case. The only way to avoid this predicament is for Sigma c_i on the left side of (4) to be zero, so that each side of (4) is the null vector. We cannot insist that all c_i = 0 individually, since in view of (3) that would make the space N-dimensional [N is much greater than n, the dimension of the parent vector space V].
    This gives the equations
    \[\Sigma_{i=1}^{i=N}c_i=0\] (3.1)
    \[\Sigma_{i=1}^{i=N}c_iy_i=0\] (3.2)
    From (3.1)
    \[c_N=-c_1-c_2-c_3-\cdots-c_{N-1}\] (4.1)
    Considering (3.2) with (4.1) we have,
    \[y_N=\frac{c_1}{c_1+c_2+\cdots+c_{N-1}}y_1+\frac{c_2}{c_1+c_2+\cdots+c_{N-1}}y_2\\+\cdots+\frac{c_{N-1}}{c_1+c_2+\cdots+c_{N-1}}y_{N-1}\] (5.1)
    \[y_N=a_1y_1+a_2y_2+\cdots+a_{N-1}y_{N-1}\] (5.2)
    Where,
    \[a_i=\frac{c_i}{c_1+c_2+\cdots+c_{N-1}}\] (5.3)
    From (5.3) we have the identity
    \[a_1+a_2+\cdots+a_{N-1}=1\] (6)
    But the N (>>n) vectors were chosen arbitrarily. Equation (5.2) should not come under the constraint of equation (6): we could have chosen y_N in the form of (5.2) in a manner that violates (6).
     
  3. mathman Valued Senior Member

    You need an introduction to tell us what you are trying to show.
     
  5. Anamitra Palit Registered Senior Member

    The topic has been reposted with a brief introduction, with typo corrections to the last post, and with additional material, in order to give the reader greater clarity.

    The aim of this writing is to point out a logical difficulty with linear vector spaces. We apply the concept of subspaces to make our point. We have considered a vector space V and a proper subspace W of V, followed by some simple mathematics to illustrate the intended fact.

    We consider a linear vector space V of dimension n. W is a proper subspace of V. We take a vector 'e' belonging to V-W and N vectors y_i belonging to W, i=1,2,3,…,N, with N>>n, the dimension of V. The y_i obviously cannot all be independent, the number N being greater than n, the dimension of the parent vector space; k of the y_i vectors are taken to be linearly independent, where k is the dimension of W. They comprise a basis of W. The rest of the y_i are linear combinations of these k basis vectors of W.
    We consider sums
    \[\alpha_i=e+y_i;i=1,2…N\] (1)
    Now each alpha_i = e + y_i belongs to V-W. We prove it as follows.
    If possible, let alpha_i belong to W. We have
    \[e=\alpha_i-y_i=\alpha_i+(-y_i)\] (2)
    Both alpha_i and -y_i belong to W; therefore their sum e would belong to W. This contradicts our postulate that e belongs to V-W. Therefore each alpha_i belongs to V-W.
    Next we consider the equation
    \[\Sigma_{i=1}^{i=N} \left(c_i \alpha_i\right)=0\] (3)
    \[\Rightarrow \Sigma_{i=1}^{i=N} \left(c_i\left(e+y_i\right)\right)=0\]
    \[\Rightarrow e\Sigma_{i=1}^{i=N}c_i=-\Sigma_{i=1}^{i=N}c_i y_i\] (4)
    The right side of (4), being a linear combination of vectors from W, belongs to W, while the left side belongs to V-W: if the left side belonged to W, then (1/Sigma c_i)(Sigma c_i)e = e would belong to W, which is not the case. The only way to avoid this predicament is for Sigma c_i on the left side of (4) to be zero, so that each side of (4) is the null vector. We cannot insist that all c_i = 0 individually, since in view of (3) that would make the space N-dimensional; but N is much greater than n, the dimension of the parent vector space V.
    This gives the equations
    \[\Sigma_{i=1}^{i=N}c_i=0\] (5.1)
    \[\Sigma_{i=1}^{i=N}c_iy_i=0\] (5.2)
    From (5.1)
    \[c_N=-c_1-c_2-c_3-\cdots-c_{N-1}\] (6)
    Considering (5.2) with (6) we have,
    \[y_N=\frac{c_1}{c_1+c_2+\cdots+c_{N-1}}y_1+\frac{c_2}{c_1+c_2+\cdots+c_{N-1}}y_2\\+\cdots+\frac{c_{N-1}}{c_1+c_2+\cdots+c_{N-1}}y_{N-1}\] (7.1)
    \[y_N=a_1y_1+a_2y_2+\cdots+a_{N-1}y_{N-1}\] (7.2)
    Where,
    \[a_i=\frac{c_i}{c_1+c_2+\cdots+c_{N-1}}\] (7.3)
    From (7.3) we have the identity
    \[a_1+a_2+\cdots+a_{N-1}=1\] (8)
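    The chain from (5.1) and (5.2) to identity (8) can be traced numerically. In this sketch the coordinates of the y_i (with respect to a basis of W) are random, and the coefficients c_i come from an SVD null space; both choices are illustrative assumptions, not part of the original post:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k, N = 3, 8                      # dim W = 3, with N >> k vectors in W
    Y = rng.standard_normal((k, N))  # columns are the y_i, in coordinates w.r.t. a basis of W

    # Solve (5.1) and (5.2) together: stack the k rows of Y with a row of ones
    # and take a null-space vector c, so that Y @ c = 0 and sum(c) = 0.
    M = np.vstack([Y, np.ones(N)])
    _, _, Vt = np.linalg.svd(M)
    c = Vt[-1]                       # null-space direction; N > k + 1 guarantees one exists

    assert np.allclose(M @ c, 0.0)
    # With c_N as in (6), the weights a_i of (7.3) sum to 1, which is identity (8):
    denom = c[:-1].sum()             # equals -c_N by (5.1)
    a = c[:-1] / denom
    assert np.isclose(a.sum(), 1.0)
    print("sum of a_i =", a.sum())
    ```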
    If the c_i are not all nonzero, then keeping only the terms with nonzero coefficients (relabeled 1,2,…,N', with N'<N) we have
    \[y_{N'}=A_1y_1+A_2y_2+\cdots+A_{N'-1}y_{N'-1}\] (9.1)
    \[A_1+A_2+\cdots+A_{N'-1}=1\] (9.2)
    Both (8) and (9.2) require
    \[\Sigma_{i=1}^{i=N}c_i=0\]
    that is, they require (5.1) for their derivation from (5.2), recalled below:
    \[\Sigma_{i=1}^{i=N}c_iy_i=0\]
    Failure of (5.1) would lead to the failure of (8) or of (9.2), according as all c_i are nonzero or not. We recall (4):
    \[e\Sigma_{i=1}^{i=N}c_i=-\Sigma_{i=1}^{i=N}c_iy_i\]
    It is an equation and not an identity: it is possible to have (5.2) while (5.1) fails. As mentioned earlier, failure of (5.1) would lead to the failure of (8) or of (9.2), according as all c_i are nonzero or not.
    We may choose {y_i; i=1,2,…,N} so that (5.2) holds while (5.1) fails. Neither (8) nor (9.2) would then materialize, owing to the failure of (5.1). Now with the same {y_i; i=1,2,…,N} we formulate (1), (2), …, (6), …, (8), (9.2). Considering the validity of both (5.1) and (5.2) in order to account for (4), we finally arrive at (8) or at (9.2), according as all c_i are nonzero or not.
    Thus (8) or (9.2) will materialize, projecting a contradiction: we expected them not to materialize, owing to the failure of (5.1) considered at the outset.
    We take a concrete example:
    V is a six-dimensional vector space, while W is a three-dimensional subspace of V. The last three coordinates of a vector in W are zero: for a vector w belonging to W we have w = column(a b c 0 0 0); for e belonging to V-W, e = column(p q r s t u), where s, t and u are not simultaneously zero. We consider N vectors {y_i; i=1,2,…,N} in W that form a closed N-sided polygon [the vectors y_i represent the sides of the N-sided polygon, which need not be regular, taken in order].
    We have for the closed polygon
    \[\Sigma_{i=1}^{i=N}y_i=0\](10.1)
    \[\Rightarrow \Sigma_{i=1}^{i=N}c_iy_i=0;c_i=1;i=1,2..N\](10.2)
    We have
    \[\Sigma_{i=1}^{i=N}c_i=N\ne 0\](10.3)
    Equation (10.3) leads to the failure of (5.1) and hence to the failure of (8) or of (9.2).
    With the y_i, i=1,2,…,N, we formulate (1), (2), (3) and the other equations, arriving at (4). To justify (4) we require the validity of both (5.1) and (5.2): equations (8) and (9.2) are then successful, raising a contradiction.
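    The closed-polygon example can be sketched numerically as follows; the particular random sides are an illustrative assumption, with the last side chosen so as to close the polygon:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 10

    # W: vectors of R^6 whose last three coordinates vanish.
    Y = np.zeros((6, N))
    Y[:3, :N-1] = rng.standard_normal((3, N - 1))
    Y[:, N-1] = -Y[:, :N-1].sum(axis=1)   # closing side of the polygon

    # (10.1): the sides of a closed polygon, taken in order, sum to zero.
    assert np.allclose(Y.sum(axis=1), 0.0)

    # (10.2)-(10.3): with c_i = 1 this is a dependence relation, yet sum(c_i) = N != 0.
    c = np.ones(N)
    assert np.allclose(Y @ c, 0.0)
    print("sum of c_i =", c.sum())   # N, not 0
    ```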
     
  7. QuarkHead Remedial Math Student Valued Senior Member

    You know, Anamitra, that if you start from a wrong premise, then it is no surprise that your conclusion is contradictory.

    Look - you tell us that \(W \subsetneq V\). You claim that \(e \in V-W\) and \(y \in W\).

    But since \(W\cap (V-W)= \emptyset\), the sum \(e+y\) makes no sense.

    I couldn't be bothered to read further.
     
  8. Anamitra Palit Registered Senior Member

    \[e\in V-W \Rightarrow e\in V\]
    Since \[W\subset V\]
    we have
    \[y_i\in W \Rightarrow y_i \in V\]

    Both e and y_i are elements of V
    \[\Rightarrow e+y_i \in V\]
    In fact it has been clearly proved [as an extra step] in my last post that
    \[\alpha_i=e+y_i \in V-W\]
    That
    \[W\cap \left(V-W\right)=\phi\] should not obscure the fact that both W and V-W are proper subsets of V, and hence elements from these subsets can undergo the operation of vector addition [since they are elements of V itself]. The subsets W and V-W were considered only in deciding on the choice of y_i and of e: y_i belonging to W and e belonging to V-W.

    QuarkHead is expected to review and revise his thoughts in view of the current post.
     
    Last edited: Mar 15, 2021
  9. Anamitra Palit Registered Senior Member

    (in continuation)
    The subspace W and the subset V-W were considered in deciding on the choice of y_i and of 'e' respectively [as stated in the last post].
    They were also considered in analyzing equation (4) of my second-last post:

    \[ e\Sigma_{i=1}^{i=N}c_i=-\Sigma_{i=1}^{i=N}c_i y_i\]
     
    Last edited: Mar 15, 2021
  10. Anamitra Palit Registered Senior Member

    Clarification
    For the fixed set of vectors
    \[y_i;i=1,2,3,\ldots,N;N>>n\]
    we test the simultaneous validity of
    \[\Sigma_{i=1}^{i=N}c_iy_i=0\](1)
    and of
    \[\Sigma_{i=1}^{i=N}d_iy_i=0\] (2)
    [Some of the c_i and/or d_i may be zero; not all of them are zero]
    We have,
    \[m\Sigma_{i=1}^{i=N}c_iy_i=0 \Rightarrow \Sigma_{i=1}^{i=N}m c_iy_i=0\](3)
    \[n\Sigma_{i=1}^{i=N}d_iy_i=0 \Rightarrow \Sigma_{i=1}^{i=N}n d_iy_i=0\](4)
    [m and n are arbitrary constants]
    Adding (3) and (4) we have,
    \[\Sigma_{i=1}^{i=N}\left(mc_i+nd_i\right)y_i=0\](5)
    \[\Rightarrow\Sigma_{i=1}^{N}A_iy_i=0;A_i=mc_i+nd_i\](6)
    Since m and n are arbitrary constants, A_i is arbitrary, and equation (6) [or equation (5), for that matter] is an invalid equation. Therefore (1) and (2) cannot hold simultaneously unless, for nonzero constants c_i and d_i, we have
    \[c_i=k d_i\]
    [k: a constant], that is, unless we consider two closed polygons with their corresponding sides parallel.
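    Equation (5) itself is easy to check numerically: any combination mc + nd of two dependence relations again annihilates the y_i. In this sketch the random coordinates and the SVD-based null-space vectors are illustrative assumptions, not part of the original post:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    k, N = 3, 8
    Y = rng.standard_normal((k, N))     # columns y_i in a k-dimensional W, with N >> k

    # Two null-space vectors of Y give two dependence relations of the form (1) and (2).
    _, _, Vt = np.linalg.svd(Y)
    c, d = Vt[-1], Vt[-2]               # Y @ c = 0 and Y @ d = 0
    assert np.allclose(Y @ c, 0.0) and np.allclose(Y @ d, 0.0)

    # Equation (5): every combination A = m*c + n*d is again a dependence relation.
    for m, n in [(1.0, 1.0), (2.5, -0.7), (0.0, 3.0)]:
        A = m * c + n * d
        assert np.allclose(Y @ A, 0.0)
    print("every combination m*c + n*d annihilates the y_i")
    ```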
     
    Last edited: Mar 16, 2021
  11. Anamitra Palit Registered Senior Member

    (in continuation) When we say that
    \[A_i=mc_i+nd_i\]
    are arbitrary, it is meant that by applying several values of m and n, more than N [in fact far more than N] independent homogeneous vector equations of the type (5) or (6) of the last post [rewritten below] may be obtained:
    \[\Sigma_{i=1}^{N}\left(mc_i+nd_i\right)y_i=0\]
    \[\Sigma_{i=1}^{N}A_iy_i=0\]
    For more than N independent equations of the above type we would need y_i=0, i=1,2,…,N, which is not true. Therefore equations (1) and (2) of the last post cannot be simultaneously valid.
     
    Last edited: Mar 16, 2021
  12. someguy1 Registered Senior Member

    Never mind, posted in error.
     
    Last edited: Mar 17, 2021
