How to prove it? (apostol)

Discussion in 'Physics & Math' started by alyosha, Jun 14, 2006.

  1. alyosha Registered Senior Member

    Last edited: Aug 28, 2006
  3. alyosha Registered Senior Member

    For 4a on page 60, something seems a little odd. To me it intuitively makes sense (for rectangles at least) that the area is going to be A = B + I. This makes sense as it is just counting up all the points that the rectangle consists of (and thus the area). It also makes sense when you consider the equation for the points on the boundary, B = 2w + 2l - 4, and the equation for the points inside, I = (l-2)(w-2). I don't see how I + (1/2)B - 1 is going to work.


    Edit: Nevermind, I was counting each point itself as a unit of area. Ugh.
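
    In case it helps anyone following along, here is a quick numeric check of the formulas above (a Python sketch of mine, not from Apostol; the helper name is made up), using the convention in this post that l and w count the lattice points along each side of the rectangle:

    # Check A = I + (1/2)B - 1 for lattice rectangles, where l and w are the
    # numbers of lattice points along the two sides, so the side lengths are
    # l-1 and w-1 and the true area is (l-1)(w-1).
    def pick_rectangle_ok(l, w):
        B = 2 * l + 2 * w - 4            # lattice points on the boundary
        I = (l - 2) * (w - 2)            # lattice points strictly inside
        A = (l - 1) * (w - 1)            # true area
        return A == I + B / 2 - 1

    assert all(pick_rectangle_ok(l, w) for l in range(2, 30) for w in range(2, 30))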
     
  5. alyosha Registered Senior Member

    At this point I'm just going to elaborate on my number 9 to make sure I haven't goofed in reasoning.

    9(a) has already been discussed extensively.

    9(b) We assume as the induction hypothesis that for some degree-n polynomial f(x), the new polynomial p(x) = f(x+a) is also a polynomial of degree n. This means, I suppose, that we can write the sum representing f(x+a) in the same form as f(x) if we wish. Now let us consider degree n+1. We have our original sum plus a term involving (x+a)^(n+1). Write this term in binomial form and extract the very last x^(n+1) term. Now, because we can write our original f(x+a) sum in f(x) form, we may do that and factor an x^k out of both this sum and the binomial sum. What we are left with is one sum from k=0...n with a huge constant multiplying x^k, and then in addition to this sum we have a term with x^(n+1). This, I believe, is the inductive definition of a degree-(n+1) polynomial, and we have proven our point. (Sorry for not showing the steps exactly.)
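
    A quick symbolic spot-check of this claim (a Python/sympy sketch of mine, not the inductive argument itself): expanding f(x+a) and collecting powers of x gives another polynomial of the same degree n.

    import sympy as sp

    x, a = sp.symbols('x a')
    n = 5
    c = sp.symbols('c0:6')                        # arbitrary coefficients c0..c5
    f = sum(c[k] * x**k for k in range(n + 1))    # f(x) of degree n
    p = sp.expand(f.subs(x, x + a))               # p(x) = f(x+a), expanded in powers of x
    print(sp.Poly(p, x).degree())                 # prints 5, the same degree as f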


    9(c) Consider a polynomial p(x) such that p(a) = 0. Write p(x) = f(x-a), where f(x) = p(x+a), so that f(0) = p(a) = 0. From part a, we know that there is a degree-(n-1) polynomial g such that f(x) = x g(x), and therefore p(x) = f(x-a) = (x-a) g(x-a).

    9(d) First I will assume as an induction hypothesis that for some degree-n polynomial f(x), if f(x) = 0 for p distinct real numbers a1...ap, then we may write

    f(x)= [product: r=1...p : (x-a(r)) ] [ sum: k=0...n-p : c(k)x^k ]

    Now when we consider p=1 it is trivial because this is what we showed in part c.

    Now consider some real number a(p+1) such that f ( a(p+1)) = 0.

    Now when we substitute a(p+1) into our hypothesis and set it equal to zero, we see that the product part can't be zero because a(p+1) is distinct from all the a(r). Then the degree n-p polynomial must be equal to zero, and we know from part c that we can rewrite our polynomial as (x-a(p+1)) h(x), where h(x) is a polynomial of degree n-(p+1). Substituting this back into our hypothesis confirms the induction.

    So let us consider a degree-n polynomial f(x) with n distinct real numbers a1...an such that f(a(r)) = 0 for each r. Then we may rewrite our polynomial and we see that we have a product multiplied by a constant. If we consider an (n+1)th zero of f(x), by the earlier argument we see that the constant is what must be zero, so that f(x) = 0 for all x.
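
    As a sanity check of the factorization in (d), here is a Python/sympy sketch of mine (the degree-5 polynomial is one I built to have the zeros 1, 2, 3): dividing out the product of the (x - a(r)) leaves remainder 0 and a quotient of degree n - p.

    import sympy as sp

    x = sp.symbols('x')
    roots = [1, 2, 3]                                            # p = 3 distinct zeros
    f = sp.expand((x - 1)*(x - 2)*(x - 3)*(4*x**2 + 5*x + 6))    # degree n = 5, vanishes at 1, 2, 3
    q, r = sp.div(f, sp.Mul(*[x - a for a in roots]), x)
    print(r, sp.Poly(q, x).degree())                             # 0 and n - p = 2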


    9(e)

    So far I have only considered forming the new m degree function h(x) = g(x) - f(x). By part d, if h(x)=0 for m+1 distinct real numbers, then h(x)=0 for all x and we may write that f(x) = g(x) for all x. I haven't yet shown this definitely implies that m=n and that all the coefficients are equal, though.
     
    Last edited: Aug 31, 2006
  7. §outh§tar is feeling caustic Registered Senior Member

  8. alyosha Registered Senior Member

    Just a quick idea about how a polynomial f(x)=0 for all x implies that every coefficient must be 0. (This sounds like an obvious thing but I'm interested in the specifics.) As I showed above, we can write a polynomial with zeros a1...ap as

    f(x)= [product: r=1...p : (x-a(r)) ] [ sum: k=0...n-p : c(k)x^k ]

    Now we already showed that the constant c(0) must be zero. So we can then reason backwards and rewrite the polynomial as

    f(x)= [product: r=1...n-1 : (x-a(r)) ] [c(1)x + c(0)] = 0

    We can then choose any x=a(n) (basically any x distinct from all of the a(r)) and conclude that

    c(1)a(n) + c(0) = 0

    We've already shown c(0)=0, and we can choose some a(n) that is not zero, so that

    c(1)a(n) = 0

    c(1) = 0.


    This process can be continued inductively, I believe, but I imagine there is a much simpler way of doing this.
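
    One arguably simpler route, sketched in Python/numpy (my own illustration, not from the thread or the book): the values of a degree-n polynomial at n+1 distinct points determine its coefficients through a Vandermonde system, so if all of those values are 0 the coefficients must all be 0.

    import numpy as np

    n = 4
    pts = np.arange(n + 1, dtype=float)       # n+1 distinct points 0, 1, ..., n
    V = np.vander(pts, n + 1)                 # Vandermonde matrix, invertible for distinct points
    coeffs = np.linalg.solve(V, np.zeros(n + 1))
    print(coeffs)                             # the unique solution: all coefficients are 0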


    Now suppose we have two polynomial functions with g(x) = f(x) for all x. It seems that we are saying that g is f independent of x, which seems to be another way of saying simply that g is f, which would of course mean (I hope) that they have the same degree. I am having a hard time in 9e demonstrating in any other way that m = n.
     
    Last edited: Sep 5, 2006
  9. §outh§tar is feeling caustic Registered Senior Member

    I don't know if your 9c is different from mine but I tried 9 again today just for fun and here is what i did.

    Say p(x) = f(x + a)

    Since f(a) = 0, p(0) = 0

    From a, p(x) = f(x+a) = xh(x), for an (n-1)-degree polynomial h

    The step I'm iffy on is substituting x with x-a to get

    f(x) = (x-a)h(x-a), from which it is easy to verify that h(x-a) is a polynomial of degree n-1

    do you think that substitution is an acceptable method of proof?

    just to clarify, you mean the 'sigma polynomial' when you say 'our polynomial'. right?

    for d i don't know if you saw some of the notes i made here but my method is similar to yours
    for e, if m!=n then m>n. it is easy to verify that any coefficients of g, say c(k), for k>=1 must be 0. and what do you remember about a polynomial of degree n which has its c(n) coefficient equal to 0. is it really of degree n?
     
  10. §outh§tar is feeling caustic Registered Senior Member

    ok for #2 on page 60...

    shmoe said i can turn a right triangle into a rectangle.. whatever that means.




    do you mean the proof that a right triangle's area is 1/2bh is just as simple as looking at a rectangle and seeing that it is composed of two right triangles? i was thinking a proof might be a little more.. analytic.

    if this is true then it is simple to generalize to all triangular regions.

    also for 4a on 64, i am not sure what analytic method is available to use in proving that. i mean, i can see what's happening geometrically but how i'm supposed to write it..??
     
    Last edited: Sep 8, 2006
  11. alyosha Registered Senior Member

    For 4a on 64, by [x+n] we mean of course a particular number, that being the greatest integer <= x+n, which we can denote by p and say that

    x+n-1 < p <= x+n

    where p is the only integer satisfying this inequality.



    On the right side of the equals sign we have an expression for another particular number, say, z, such that z= [x] + n. We just show that z satisfies the same inequality that p does.

    for [x] we have [x]= t such that

    x-1< t <= x

    adding n to the inequalities gives

    x+n-1 < t+n <= x+n

    and because t+n is what we meant by z,

    x+n-1< z <= x+n

    and we see that z is the same integer as p.
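
    A quick numeric check of the identity (my own sketch, using Python's math.floor for the greatest-integer function):

    import math, random

    for _ in range(100000):
        x = random.uniform(-100, 100)
        n = random.randint(-50, 50)
        assert math.floor(x + n) == math.floor(x) + n   # [x + n] = [x] + n for integer n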

    The other problems can also be demonstrated in this manner of setting up inequalities, but I have a difficult time justifying that the last two were really always equal. I was able to show that both sides of the equation refer to the same possible inequalities, but I haven't shown that if one side is a certain inequality, it implies that the other is.


    As for 9c, if you look at the way I did it in the last post, I think using x-a in place of x when applying 9a is totally valid, because again, "x" really could be anything. We could have used the symbol z instead of x and let z = x-a, I think.






    As for your question about my polynomials, yes, I was referring to the degree n-p polynomial I wrote in sigma notation. The product part can't be zero, so the degree n-p polynomial must be.
     
  12. alyosha Registered Senior Member

    To be honest, the whole theory with step functions and lattice points kind of disgusts me; it isn't very pretty. I don't know of any analytical way of proving the ideas about triangles and rectangles and whatnot; that too is not very pretty. I learned enough about the step functions to understand the existence and calculation of integrals for bounded monotonic functions. Still, I must hand it to Apostol for managing this without mentioning things like antiderivatives, continuity, or using the word "infinity" once.
     
  13. §outh§tar is feeling caustic Registered Senior Member

    wow alyosha, it seems as if you have gotten ahead of me. I have been slacking, I admit. I'll try to catch up tonight and tomorrow. AIM and facebook are killers..
     
  14. alyosha Registered Senior Member

    I didn't do any of the integral exercises yet; the last thing I did was attempt to generalize [nx] and also the thing about characteristic functions on the same page. I'm really not at all concerned with "solving problems" in the usual sense. I'll try to prove all the interesting theorems though. In retrospect I can only say that that was one hell of an introduction.
     
  15. §outh§tar is feeling caustic Registered Senior Member

    I was very impressed with some of the fancy IMO type problems we were able to solve at the end of the introduction using properties of the geometric and arithmetic mean as well as that fancy if the product is 1, the sum is greater than n thing. all very non-intuitive. i don't know if i could come up with that or even .. know to use that kind of stuff. makes me wonder if i'm cut out for mathematics.

    i'm not a fan of some of the problems either but i suppose practice is practice. i hate drawing graphs and counting the area of step functions when i can be proving interesting properties, especially because i KNOW the fun stuff is far off and I can't wait to get to it.

    by the way have you seen Spivak's calculus book? I have both Spivak and Apostol right now and Spivak approaches the fundamentals differently from Apostol. Spivak's problems are also a lot different and often more interesting. I noticed that Spivak's first chapter for functions covers properties like multiplicity, compositions and so on. unfortunately i do not have time to go through both books at the same time since i really have to get going on linear algebra and multivariable calculus. a shame really, that my school doesn't bother making sure students have a solid analytic foundation in single variable calculus.
     
    Last edited: Sep 9, 2006
  16. alyosha Registered Senior Member

    I agree that some of the theorems in the intro to Apostol are golden; I wonder if there exists somewhere a collection of little theorems like that. I got Spivak's book not too long ago because I figured it's a classic that every mathematician should know. I've been through the first part and skimmed through many of the problem sections and have noted that many of the theorems you prove are either very similar or the same as those in Apostol, yet they are presented differently. The material itself is essentially the same but definitely presented in a different way (he covers continuity and derivatives before integrals), much more like a narrative in the sense that you feel an author is really speaking to you ABOUT the material, rather than the material itself somehow telling you about.... itself (like in Apostol). Spivak just adds a different dimension in that he guides the problems and material in an exploratory manner that usually builds up to someplace important. In the first problem section you prove the special case of n=2 for the nth power mean being greater than the geometric mean, whereas in Apostol you can get a little lost with the exact meaning of what you've shown because you are immediately presented with the general case.
     
  17. alyosha Registered Senior Member

    I actually have a little question about a problem in spivak. In the chapter 2 problem section we prove that if a^(1/n) is rational then a=m^n for some natural number m.


    The next problem asks you to prove that for an n-degree polynomial with the following properties:

    1. Every coefficient is an integer.
    2. The leading coefficient=1.

    if this polynomial is zero for some x, then x is irrational unless it is an integer.

    I simply extracted the x^n term, assumed that x was rational, and then subtracted the rest of the polynomial to the other side of the equation. From the last theorem we proved, if we can take the nth root of a number and get a rational number, this rational number is going to be a natural number, i.e., an integer. He, however, used what seemed to be a very different argument... what's wrong with mine?
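
    In any case, here is a small brute-force check of the statement itself (a Python sketch of mine, not Spivak's argument; the helper is made up): every rational root that the search finds for a monic polynomial with integer coefficients is an integer.

    from fractions import Fraction
    from itertools import product

    def rational_roots(coeffs):
        # coeffs = [c0, c1, ..., c_{n-1}] for the monic polynomial
        # x^n + c_{n-1} x^{n-1} + ... + c1 x + c0; scan a small grid of fractions p/q.
        n = len(coeffs)
        found = set()
        for p, q in product(range(-20, 21), range(1, 21)):
            r = Fraction(p, q)
            if r**n + sum(c * r**k for k, c in enumerate(coeffs)) == 0:
                found.add(r)
        return found

    for r in rational_roots([-2, 0, 1]):       # x^3 + x^2 - 2, which has x = 1 as a root
        assert r.denominator == 1              # every rational root found is an integer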
     
  18. alyosha Registered Senior Member

    Another problem asks to prove that if x=p + sqrt(q) where p and q are rational, then

    x^m= a+b sqrt(q) for some rational a and b if m is a natural number.

    Spivak gives a very short inductive argument, but I came up with something far more unnecessarily complicated, yet sufficiently different to warrant attention.


    Starting with the binomial theorem, we can write

    (p+sqrt(q))^m

    in binomial form. We can rewrite the sum as two sums, one that sums over the even-index terms and another that sums over the odd-index terms.

    Now the sum of the even-index terms will be a rational number, because the even exponents cancel out the (1/2) power on q.

    Now the sum of the odd-index terms can be multiplied carefully by q/q, with q^(1/2) going into the sum and leaving q^(1/2) multiplying the sum from the outside. We see that the powers in the sum have now been shifted to be even, so it represents a rational number being multiplied by q^(1/2), and I believe this demonstrates the idea.
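
    Here is the same idea checked symbolically (a Python/sympy sketch of mine): splitting the binomial expansion of (p + sqrt(q))^m into even-index and odd-index terms gives rational a and b with (p + sqrt(q))^m = a + b*sqrt(q).

    import sympy as sp
    from sympy import binomial, sqrt, Rational

    p, q, m = Rational(3, 4), Rational(2, 5), 7          # any rationals, q not a perfect square
    a = sum(binomial(m, k) * p**(m - k) * q**(k // 2) for k in range(0, m + 1, 2))
    b = sum(binomial(m, k) * p**(m - k) * q**((k - 1) // 2) for k in range(1, m + 1, 2))
    print(a.is_rational, b.is_rational)                  # True True
    print(sp.expand((p + sqrt(q))**m - (a + b * sqrt(q))))   # 0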
     
  19. shmoe Registred User Registered Senior Member

    This isn't true. The cube root of 27/8 is rational but not an integer. There had to be some conditions on the "a" in the theorem, probably it was an integer to begin with.

    I'm not sure why you are multiplying by q/q, you just factor out a q^1/2 from the odd terms, and that's it right?
     
  20. alyosha Registered Senior Member

    Yes, you're right, it's if we take the nth root of a natural number, not just any number. And I used q/q because by that time I had already given up on all attempts at practicality.
     
  21. alyosha Registered Senior Member

    If shmoe is still out there, maybe he can help me with the last of the chapter.

    On page 84, I must be applying the properties wrongly, because I can't get 26, 27, or 28 to come out right. When they say f(kx), does this mean to multiply everything inside the f( ) parentheses by k, or JUST the x inside the parentheses? Looking at the problems, it seems like they mean just the x...

    Also, I was never able to solve (or have the patience for, I suppose) the straightedge and compass problem on page 36.

    My last question is about number 13 on page 41. I was only ever able to show that one side of the inequalities (with the sigma notation) must hold. The other side seemed indeterminate to me...


    Are there theories of integration that don't involve sets of step functions? What do they involve?

    When should a subject like modern algebra be looked into; how much does it rely on analysis?
     
  22. shmoe Registred User Registered Senior Member

    I'll work out 26 for you as an example. We are trying to change the interval of integration from [a,b] to [0,1]. Theorem 1.18 allows us to translate the interval by c, theorem 1.19 allows us to scale it by a factor of k. We can apply them in either order, scale to get an interval of length 1 then translate to the origin, or translate to the origin and then scale. Your choice of c will depend on the order you chose.

    let's translate first, taking c=-a in theorem 1.18:

    int(a,b,f(x),dx)=int(0,b-a,f(x+a),dx)

    We are integrating over (0,b-a) and want to be integrating over (0,1), so we are scaling by a factor of k=1/(b-a) in theorem 1.19:

    int(0,b-a,f(x+a),dx) = (b-a)*int(0/(b-a),(b-a)/(b-a),f(x*(b-a)+a),dx) = (b-a)*int(0,1,f(x*(b-a)+a),dx)

    like we were after (note the factor of 1/k = b-a that theorem 1.19 introduces). Here, you can think of us as applying theorem 1.19 to an integral of the function g(x)=f(x+a). Then the theorem gives a factor of 1/k together with a g(x/k), and you know g(x/k)=f(x/k+a). Does that answer your question? Try applying 1.19 then 1.18; the c will be different, a little ickier.
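
    And a quick numeric sanity check of that final identity (my own sketch, using scipy's quad and an arbitrary test function):

    from scipy.integrate import quad

    f = lambda x: x**3 - 2*x + 1                # any integrable test function
    a, b = 1.5, 4.0
    lhs, _ = quad(f, a, b)
    rhs, _ = quad(lambda x: f((b - a)*x + a), 0, 1)
    print(abs(lhs - (b - a)*rhs) < 1e-9)        # True: int(a,b,f) = (b-a)*int(0,1,f((b-a)x+a))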

    I'll try to describe the steps. Given a line and a point on this line, you need to be able to draw a perpendicular line through it. Call our point A. Draw two circles of radii R and r centered at A; the actual radii don't matter, just make R>r and make sure these circles each intersect our line twice (extend the line with the straightedge if necessary). Call these points of intersection to one side of A by r1 and R1, and on the other side call them r2 and R2 (little r for the small circle, R for the big one). Now put the pointy part of the compass on R1 and the drawing part on r2 and make a circle. Repeat with R2 and r1. These circles you've just drawn intersect at two points; draw a line between them. This line is perpendicular to our original line and passes through A. This will make more sense if you physically do it.

    Armed with this, we can tackle the problem using induction. For n=1, we already have a line of length sqrt(1)=1; it's what we started with. Assume we have a line of length sqrt(n). Draw a line perpendicular to one of its endpoints. Bollocks- take my word for it for now that given a line of length 1, we can translate this length over to our perpendicular line. We now have a line of length sqrt(n) with a perpendicular line of length 1 shooting off one endpoint. Join the endpoints to make a triangle with sides of length 1, sqrt(n), and sqrt(1^2+sqrt(n)^2)=sqrt(n+1) by the Pythagorean theorem.

    I've cheated and translated distances with our straightedge and compass without proving we can do so. The basic tools are an unmarked straightedge and a compass that collapses when you lift it from the paper. I can describe how to transfer distances with these tools if you like.
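
    The induction can also be mirrored numerically (a Python sketch of mine, separate from the compass construction itself): start from a segment of length sqrt(1) = 1, attach a perpendicular segment of length 1, and take the hypotenuse; each step produces sqrt(n+1).

    import math

    length = 1.0                               # sqrt(1)
    for n in range(1, 50):
        length = math.hypot(length, 1.0)       # hypotenuse of legs sqrt(n) and 1
        assert abs(length - math.sqrt(n + 1)) < 1e-9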


    I'm guessing it's the right inequality that's the problem? They use the bounds you had proven only for n>1. When n=1 the upper bound is crappy; it's saying 1/sqrt(1) < 2*(sqrt(1)-sqrt(0)) = 2. That's not too good, so they just use 1/sqrt(1) <= 1. Note they had the restriction that m>=2, so you do still get a strict inequality for the sum.
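
    If the exercise is the usual one bounding the partial sums (I'm assuming the bounds 2*sqrt(m+1) - 2 < sum_{n=1..m} 1/sqrt(n) < 2*sqrt(m) - 1 for m >= 2; check that against your copy), a quick numeric check in Python shows that handling n = 1 separately still leaves strict inequalities:

    import math

    for m in range(2, 2000):
        s = sum(1 / math.sqrt(n) for n in range(1, m + 1))
        assert 2*math.sqrt(m + 1) - 2 < s < 2*math.sqrt(m) - 1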


    Not that I've ever seen, no. At least nothing that's fundamentally different. It's not really avoidable: you only really have an area defined for rectangles before you get into integration, and you define 'area' for curvy things to essentially be limits of areas of finite numbers of rectangles that somehow approximate your curvy thing. These are really going to be step functions or something similar.

    It can be looked into today. Intro algebra books won't have any analysis except possibly in the odd example. They'll sometimes cover compass and straightedge constructions a little bit with the goal of applying abstract algebra to classic problems like "squaring the circle".
     
  23. alyosha Registered Senior Member

    The algebra books I've looked into (fields and Galois theory) and also Lie algebras all seem to assume I know linear algebra. I do wonder if the linear algebra covered in Apostol is enough. I'm also interested in number theory. Both algebra and number theory have a certain purity to me because they don't revolve around... well, pictures, as much as calculus does. I'm always interested in how an idea must very necessarily follow from another one; sometimes it's hard to tell how much I am taking for granted in the calculus (because it's easy to simply agree with pictures and intuition). I've stared at the 15 or so pages of integration theory in Apostol for a very extended period of time, trying to make absolutely sure I'm not just blindly agreeing with each line. It threw me a little that he built the theory on monotonically increasing/decreasing functions but then quickly extended it to arbitrary polynomials for which this monotonic idea doesn't hold. He demonstrates why he can do this in his proofs of the properties of integrals, but it didn't stop me from feeling like he was pulling a fast one.
     
    Last edited: Sep 22, 2006
