You can always manipulate sum limits: \( \sum_{j=0}^{2n-2}\sum_{k=0}^{2n-2} x^{(k-j)}\big|_{j,k=0,2,4,\ldots} = \sum_{j=0}^{n-1}\sum_{k=0}^{n-1} x^{2(k-j)}\big|_{j,k=0,1,2,\ldots} = \sum_{j=0}^{n-1}\sum_{k=0}^{n-1} e^{i(k-j)t} \). Now, to sum the \( n \times n \) terms, rpenner appears to do this diagonal-wise: \( \sum_{\ell = 1-n}^{n-1}\sum_{m=1}^{n-|\ell|} e^{i\ell t} \), where each diagonal is indexed by the outer sum. Suppose again n = 3; then there are nine terms to sum, and \( \ell \) in the outer sum goes from 1 - n = -2 to n - 1 = 2. The inner sum goes from m = 1 to m = n - |\( \ell \)|, i.e. 1, 2, 3, 2, 1 terms for \( \ell = -2, -1, 0, 1, 2 \). Although it's clunkier, the same nine terms are summed by: \( \sum_{\ell = 1}^{n-1}\sum_{m=1}^{n-|\ell|} \left[ e^{i\ell t} + e^{-i\ell t} \right ] + \sum_{j=1}^n e^0 \)
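As a quick sanity check, here is a sketch (n = 3 and an arbitrary t are my choices) verifying that the direct double sum, the diagonal-wise regrouping, and the "clunkier" paired form all add up to the same nine terms:

```python
import cmath

n, t = 3, 0.7  # illustrative values, not from the original post

# Direct double sum over j, k = 0 .. n-1
direct = sum(cmath.exp(1j * (k - j) * t) for j in range(n) for k in range(n))

# Diagonal-wise sum: each diagonal l = 1-n .. n-1 holds n - |l| equal terms
diagonal = sum((n - abs(l)) * cmath.exp(1j * l * t) for l in range(1 - n, n))

# Paired form: conjugate diagonals combined, plus the n main-diagonal terms e^0
paired = sum((n - l) * (cmath.exp(1j * l * t) + cmath.exp(-1j * l * t))
             for l in range(1, n)) + n

assert abs(direct - diagonal) < 1e-12
assert abs(direct - paired) < 1e-12
```

Note that pairing \( e^{i\ell t} + e^{-i\ell t} = 2\cos(\ell t) \) also makes it obvious the total is real.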
That second term sticks out because, if you integrate a complex exponential like \( e^{i\ell t} \) over an interval of \( 2\pi \) and \( \ell \) is a nonzero integer, the integral will be zero. This is because both the sine and cosine have equal area above and below the x-axis over an interval of \( 2\pi \), and \( \ell \) is the number of periods. The only nonzero integrals over \( 2\pi \) occur when \( \ell = 0 \), so you are left with the \( n \) diagonal terms: \( n\,e^{i0 t} = n \). The given integral then equals \( 2\pi\, n \), which explains the constant \( \frac{1}{2\pi\, n} \). I think that's enough to show \( \int_{-\infty}^{\infty} \delta_n(t)\, dt = 1 \). Given the definition of the delta function, it should also show that \( \int_{-\infty}^{\infty} f(t)\, \delta(t)\, dt = f(0) \). Nothing wrong with a bit of rigour, though.
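This orthogonality argument can be checked numerically with a midpoint rule over one period \( [-\pi, \pi] \) (my choice of quadrature; the value of n is illustrative): only \( \ell = 0 \) survives, and the diagonal-wise sum with the \( \frac{1}{2\pi n} \) prefactor integrates to 1.

```python
import cmath
import math

N = 20000                      # midpoint-rule resolution (avoids t = 0 exactly)
h = 2 * math.pi / N
ts = [-math.pi + (k + 0.5) * h for k in range(N)]

def integral(l):
    """Midpoint-rule approximation of the integral of e^{i*l*t} over [-pi, pi]."""
    return sum(cmath.exp(1j * l * t) for t in ts) * h

# Nonzero integer l integrates to zero; l = 0 integrates to 2*pi
assert abs(integral(0) - 2 * math.pi) < 1e-9
assert all(abs(integral(l)) < 1e-9 for l in (-2, -1, 1, 2))

# Summing the diagonals with weight (n - |l|) and dividing by 2*pi*n gives 1
n = 3
total = sum((n - abs(l)) * integral(l) for l in range(1 - n, n)) / (2 * math.pi * n)
assert abs(total - 1) < 1e-9
```

Note the check integrates over a single period \( [-\pi, \pi] \), which is the interval the orthogonality argument actually uses.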
Wrong limits of integration, just like your source material had the wrong limits. You also need to show that, for an arbitrary choice of small number \(\epsilon\), the function goes to zero over \([-\pi, - \epsilon] \cup [ \epsilon, \pi]\) in the limit of large n. That shows that all contributions to the integral come, in the limit of large n, from an arbitrarily small neighborhood of zero, and would probably be enough. It's also nice to show that the intermediate functions are even, so that there is no possible residual component from the first derivative.
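The tail-vanishing step can be illustrated numerically. Here I assume the closed form \( \delta_n(t) = \frac{1}{2\pi n} \left( \frac{\sin(nt/2)}{\sin(t/2)} \right)^2 \) for the intermediate functions (my reading of the exercise, not quoted from it); on \( [\epsilon, \pi] \) the denominator is bounded below by \( \sin^2(\epsilon/2) \), so the whole tail is bounded by \( \frac{1}{2\pi n \sin^2(\epsilon/2)} \to 0 \):

```python
import math

def delta_n(n, t):
    # Assumed closed form of the delta sequence (Fejer-kernel type)
    return (math.sin(n * t / 2) / math.sin(t / 2)) ** 2 / (2 * math.pi * n)

eps = 0.1  # arbitrary small epsilon

def tail_max(n):
    """Sampled maximum of delta_n over [eps, pi]."""
    ts = [eps + k * (math.pi - eps) / 2000 for k in range(2001)]
    return max(delta_n(n, t) for t in ts)

# The uniform bound 1/(2*pi*n*sin(eps/2)^2) forces the tail to zero as n grows
for n in (10, 100, 1000):
    assert tail_max(n) <= 1 / (2 * math.pi * n * math.sin(eps / 2) ** 2)
assert tail_max(1000) < tail_max(10) / 10
```

By evenness of \( \delta_n \), the same bound covers \( [-\pi, -\epsilon] \).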
But haven't you shown, firstly, that the integral equals 1 over a finite interval, namely [-π, π], rather than an infinite one, and then that the integral converges to this value around t = 0 (which I agree should also be in the proof)? So doesn't that mean it must equal 1 over any interval? The author has this at the start of the online chapter (image not reproduced here); the delta sequence functions (\( \delta_n(x) \)) are also integrated over [-∞, ∞]. I can recall seeing somewhere this method for infinite integrals: first integrate over a finite interval, then show the integral converges, so it must also converge over an infinite interval.
After having a look at Fejér's method, I concur that the limits of integration as given in the problem are wrong.
Yes, the problem is that \(\forall n \in \mathbb{N}\ \forall m \in \mathbb{N} \quad \delta_n(2 m \pi) = \delta_n(0)\), so the limits of integration can't be \(\pm\infty\) unless the class of functions being supported is very special. But with limits \(\pm \pi\) we have a situation that makes sense for smooth functions.
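The periodicity obstruction is easy to see from the diagonal-wise form \( \delta_n(t) = \frac{1}{2\pi n} \sum_{\ell=1-n}^{n-1} (n-|\ell|) \cos(\ell t) \): every term has period \( 2\pi \), so \( \delta_n \) repeats its full peak \( \delta_n(0) = \frac{n}{2\pi} \) at every multiple of \( 2\pi \), and an integral over \( (-\infty, \infty) \) cannot converge. A sketch (n = 5 is illustrative):

```python
import math

def delta_n(n, t):
    # Diagonal-wise form of the delta sequence; each cosine has period 2*pi
    return sum((n - abs(l)) * math.cos(l * t)
               for l in range(1 - n, n)) / (2 * math.pi * n)

n = 5
for m in (1, 2, 3):
    assert abs(delta_n(n, 2 * m * math.pi) - delta_n(n, 0)) < 1e-9
# The peak value is n/(2*pi), recurring at every multiple of 2*pi
assert abs(delta_n(n, 0) - n / (2 * math.pi)) < 1e-12
```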
From Wikipedia: "The Fejér kernel is defined as \( F_n(x) = \frac{1}{n} \sum_{k=0}^{n-1} D_k(x) \), where \( D_k(x) = \sum_{j=-k}^{k} e^{ijx} \) is the kth order Dirichlet kernel. It can also be written in a closed form as \( F_n(x) = \frac{1}{n} \left( \frac{\sin(n x / 2)}{\sin(x/2)} \right)^2 \), where this expression is defined.[1] The Fejér kernel can also be expressed as \( F_n(x) = \sum_{|j| \le n-1} \left( 1 - \frac{|j|}{n} \right) e^{ijx} \)." About closed forms: I was under the impression that term applied to sums over polynomial expressions. I note the trigonometric 'closed form' in the third equation is substantially the same as the given exercise.
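A quick numerical check that the three standard expressions for the Fejér kernel (average of Dirichlet kernels, the trigonometric closed form, and the triangular-weight sum) agree pointwise; the values of n and x are arbitrary:

```python
import cmath
import math

def dirichlet(k, x):
    """k-th order Dirichlet kernel: sum of e^{ijx} for j = -k .. k."""
    return sum(cmath.exp(1j * j * x) for j in range(-k, k + 1)).real

def fejer_avg(n, x):
    """Average of the first n Dirichlet kernels."""
    return sum(dirichlet(k, x) for k in range(n)) / n

def fejer_closed(n, x):
    """Trigonometric closed form (defined where sin(x/2) != 0)."""
    return (math.sin(n * x / 2) / math.sin(x / 2)) ** 2 / n

def fejer_weights(n, x):
    """Triangular-weight sum over |j| <= n-1."""
    return sum((1 - abs(j) / n) * cmath.exp(1j * j * x)
               for j in range(1 - n, n)).real

n, x = 4, 0.9
assert abs(fejer_avg(n, x) - fejer_closed(n, x)) < 1e-12
assert abs(fejer_avg(n, x) - fejer_weights(n, x)) < 1e-12
```

The triangular weights \( 1 - |j|/n = (n - |j|)/n \) are exactly the diagonal multiplicities from the earlier posts, which is why the closed form matches the given exercise up to the \( \frac{1}{2\pi} \) normalization.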