Aufgaben:Problem 9

From Ferienserie MMP2


==Task==

Let \(U \subset \mathbb{R}^n\) be a domain. Assume that \(g\) is a Riemannian metric field on \(U\).

a) The covariant derivative \(\nabla_k\) acts on a \((0, 2)\)-tensor field \(T_{lm}\) by $$\nabla_k T_{lm} = \partial_k T_{lm} − T_{im}\Gamma^i_{kl} − T_{li}\Gamma^i_{km},$$ where \(\Gamma^l_{ij}\) are the Christoffel symbols defined as $$\Gamma^l_{ij}=\frac{1}{2}g^{lk}(\partial_i g_{jk} + \partial_j g_{ik} − \partial_k g_{ij} ).$$ Show that $$\nabla_k g_{ij} = 0, \forall i, j, k \in \{1, ..., n\}.$$
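As a concrete illustration (my own addition, not part of the exercise), the Christoffel formula above can be evaluated with sympy; the polar-coordinate metric \(g = \mathrm{diag}(1, r^2)\) and the helper <code>Gamma</code> are just an example sketch:

<syntaxhighlight lang="python">
# Sketch: evaluate the Christoffel-symbol formula for an example metric.
# The polar metric g = diag(1, r^2) is an illustrative choice only.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric g_{ij}
ginv = g.inv()                       # inverse metric g^{ij}
n = len(coords)

def Gamma(l, i, j):
    """Gamma^l_{ij} = 1/2 g^{lk} (d_i g_{jk} + d_j g_{ik} - d_k g_{ij})."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[j, k], coords[i])
                      + sp.diff(g[i, k], coords[j])
                      - sp.diff(g[i, j], coords[k]))
        for k in range(n)))

print(Gamma(0, 1, 1))  # Gamma^r_{theta theta} -> -r
print(Gamma(1, 0, 1))  # Gamma^theta_{r theta} -> 1/r
</syntaxhighlight>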


b) Recall the Laplace-Beltrami operator \(L_g =\frac{1}{\sqrt{\det(g)}}\sum_{i,j}{\partial_j (\sqrt{\det(g)} g^{ij}\partial_i)}\). Show that $$L_g(f) = g^{ij}\nabla_i(\partial_j f)$$ where \(f\) is a smooth function on \(U\).

Note: the covariant derivative \(\nabla_k\) acts on a covector field \(v_l(x)\) by \(\nabla_k v_l(x):=\partial_k v_l − v_i\Gamma^i_{kl}\).


c) Let \(\chi: U \to \mathbb{R}_{>0}\) be a strictly positive smooth function on \(U\), and let \(\tilde{g} = \chi^2 g\).

Show that the Christoffel symbols \(\tilde{\Gamma}^l_{ij}\) (tilde refers to metric \(\tilde{g}\)) are given by $$\tilde{\Gamma}^l_{ij}= \Gamma^l_{ij} + (\partial_i \log{ \chi})\delta^l_j + (\partial_j \log{\chi})\delta^l_i − g^{lk}(\partial_k \log{ \chi})g_{ij} .$$ Conclude that, if \(\bar{g} := \frac{4}{(1−|x|^2)^2}\mathbb{I}\), then $$\bar{\Gamma}^l_{ij}=\frac{2}{1 − |x|^2}(x_i\delta_{jl} + x_j\delta_{il} − x_l\delta_{ij} )$$


==Solution==

=== a) ===

$$\nabla_k g_{ij} = \partial_k g_{ij} - g_{nj} \Gamma^n_{ki} - g_{in}\Gamma^n_{kj} =$$ $$ = \partial_k g_{ij} - g_{nj}(\frac{1}{2} g^{np}(\partial_k g_{ip} + \partial_i g_{kp} - \partial_p g_{ki})) - g_{in}(\frac{1}{2} g^{nq}(\partial_k g_{jq} + \partial_j g_{kq} - \partial_q g_{kj})) =$$

From the symmetry of \( g_{ij} \) and the identity \( g_{ki}g^{ij} = g^{ji}g_{ik} = \delta^j_k \) we obtain:

$$ = \partial_k g_{ij} - \frac{1}{2} \delta^p_j(\partial_k g_{ip} + \partial_i g_{kp} - \partial_p g_{ki}) - \frac{1}{2} \delta^q_i(\partial_k g_{jq} + \partial_j g_{kq} - \partial_q g_{kj}) = $$

$$ = \partial_k g_{ij} - \frac{1}{2} (\partial_k g_{ij} + \partial_i g_{kj} - \partial_j g_{ki}) - \frac{1}{2} (\partial_k g_{ji} + \partial_j g_{ki} - \partial_i g_{kj}) = $$

$$ = \partial_k g_{ij} - \frac{1}{2} (\partial_k g_{ij} + \partial_k g_{ji}) - \frac{1}{2} (\partial_j g_{ki} - \partial_i g_{kj} + \partial_i g_{kj} - \partial_j g_{ki} ) = 0 $$

Again we used the symmetry of \( g_{ij} \) in the first bracket.
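As an independent sanity check (my own addition, not part of the proof), the statement can be verified symbolically for a generic \(2\times 2\) metric with arbitrary smooth entries:

<syntaxhighlight lang="python">
# Sketch: symbolic check that nabla_k g_{ij} = 0 for a generic 2x2 metric.
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
a, b, c = (sp.Function(name)(x, y) for name in ('a', 'b', 'c'))
g = sp.Matrix([[a, b], [b, c]])   # generic symmetric metric g_{ij}
ginv = g.inv()
n = 2

def Gamma(l, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[j, k], coords[i])
                      + sp.diff(g[i, k], coords[j])
                      - sp.diff(g[i, j], coords[k]))
        for k in range(n))

def nabla_g(k, i, j):
    """nabla_k g_{ij} = d_k g_{ij} - g_{mj} Gamma^m_{ki} - g_{im} Gamma^m_{kj}."""
    return sp.diff(g[i, j], coords[k]) - sum(
        g[m, j] * Gamma(m, k, i) + g[i, m] * Gamma(m, k, j) for m in range(n))

assert all(sp.simplify(nabla_g(k, i, j)) == 0
           for k in range(n) for i in range(n) for j in range(n))
print("nabla g vanishes identically")
</syntaxhighlight>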

=== b) ===

Claim: \(\partial_j \det g = (\det g) \operatorname{tr} \Big(g^{-1}\partial_j g \Big)\)

Proof: We know that \(g\) is symmetric and therefore diagonalisable: write \(g = T^{-1}AT\) with \(A\) diagonal, so that \(\det g = \det (T^{-1}AT) = \det A\).

(Note that \(A_{ii} > 0\), since \(g\) is also positive definite \(\Leftrightarrow\) the eigenvalues of \(g\) are strictly positive.)

Writing \(\dot{A} := \partial_j A\) (and similarly for \(T\)):

\begin{align}
\partial_j \det g &= \partial_j \det A = \partial_j \prod_{i=1}^n A_{ii} = \sum_{k = 1}^n \prod_{i=1}^n \frac{A_{ii}}{A_{kk}} \partial_j A_{kk} = \prod_{i=1}^n A_{ii} \sum_{k = 1}^n \frac{\dot{A}_{kk}}{A_{kk}}\\
&= \det A \sum_{k = 1}^n \frac{\dot{A}_{kk}}{A_{kk}} = \det A \sum_{k = 1}^n (A^{-1}\dot A)_{kk} = \det A\ \operatorname{tr}( A^{-1} \partial_j A)\\
&= \det g\ \operatorname{tr}(A^{-1} \partial_j A) \overset{!}{=} \det g\ \operatorname{tr}(g^{-1} \partial_j g)
\end{align}

We prove the last equality separately (one can easily verify that the product rule holds for matrix-valued functions):

\begin{align}
\operatorname{tr}(g^{-1} \partial_j g) &= \operatorname{tr}\big(g^{-1} \partial_j(T^{-1}AT)\big)\\
&= \operatorname{tr}\big( T^{-1} A^{-1}T (\dot{T^{-1}}AT + T^{-1}\dot{A}T + T^{-1}A\dot{T})\big)\\
&= \operatorname{tr}\big((AT)^{-1}T \dot{T^{-1}}AT\big) + \operatorname{tr}\big(T^{-1} A^{-1}\dot{A}T\big) + \operatorname{tr}\big(T^{-1} A^{-1}A\dot{T}\big)\\
&= \operatorname{tr}(T \dot{T^{-1}}) + \operatorname{tr}(A^{-1}\dot{A}) + \operatorname{tr}(T^{-1} \dot{T})\\
&= \operatorname{tr}(-\dot{T} T^{-1}) + \operatorname{tr}(\dot{A} A^{-1} ) + \operatorname{tr}(T^{-1}\dot{T})\\
&= -\operatorname{tr}( T^{-1} \dot{T}) + \operatorname{tr}(\dot{A} A^{-1} ) + \operatorname{tr}(T^{-1}\dot{T})\\
&= \operatorname{tr}( A^{-1}\dot{A})
\end{align}

where I used that \(T\dot{T^{-1}} = -\dot{T} T^{-1}\) (differentiate \(TT^{-1} = \mathbb{I}\)) and the fact that the trace is invariant under conjugation.

\(\square\)
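As a quick sanity check (my own, not part of the original proof), the claim can be verified symbolically for a generic symmetric \(2\times 2\) family \(g(t)\):

<syntaxhighlight lang="python">
# Sketch: check d/dt det g = det(g) tr(g^{-1} dg/dt) for a symmetric 2x2 family.
import sympy as sp

t = sp.symbols('t')
a, b, c = (sp.Function(name)(t) for name in ('a', 'b', 'c'))
g = sp.Matrix([[a, b], [b, c]])

lhs = sp.diff(g.det(), t)
rhs = g.det() * (g.inv() * g.diff(t)).trace()
assert sp.simplify(lhs - rhs) == 0
print("claim holds for this family")
</syntaxhighlight>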

Now we can prove (b):

$$L_g(f) = \frac{1}{\sqrt{\det g}} \partial_j(\sqrt{\det g} g^{ij} \partial_i) f$$

Using the product rule (and swapping the terms around):

$$= \frac{1}{\sqrt{\det g}} \Big( \sqrt{\det g}\, g^{ij} \partial_j \partial_i +\sqrt{\det g}\, \partial_j (g^{ij}) \partial_i +\partial_j\Big(\sqrt{\det g}\Big) g^{ij} \partial_i\Big) f$$

Using the Claim, \(\partial_j \det g = (\det g) \operatorname{tr}\big(g^{-1}\partial_j g \big) = (\det g)\, g^{kl}\partial_j g_{lk}\), and hence \(\partial_j \sqrt{\det g} = \frac{\partial_j \det g}{2\sqrt{\det g}} = \frac{1}{2}\sqrt{\det g}\; g^{kl}\partial_j g_{lk}\), we get

$$= g^{ij} \partial_j \partial_i f + (\partial_j g^{ij})\partial_i f + \frac{1}{2} (g^{kl}\partial_j g_{lk}) g^{ij} \partial_i f $$

With the product rule, \((\partial_j g^{-1}) = - g^{-1}(\partial_j g) g^{-1}\); in particular \(\partial_j g^{ij} = -g^{ik} (\partial_j g_{kl}) g^{lj}\). (Notice that calling the summation indices \(k\) and \(l\) is allowed, as it just combines the sums, which is possible because all indices run from \(1\) to \(n\).)

$$= g^{ij} \partial_i \partial_j f -g^{ik} (\partial_j g_{kl}) g^{lj}\partial_i f + \frac{1}{2} g^{kl}(\partial_j g_{lk}) g^{ij} \partial_i f $$

Now some index swapping: in the second term relabel \( l \leftrightarrow i\), and in the third term relabel \( l \leftrightarrow i\) and \( k \leftrightarrow j\). (We can do this by separating the sums, relabelling the dummy indices and then putting them back together.)

$$= g^{ij} \partial_i \partial_j f -g^{lk} (\partial_j g_{ki}) g^{ij}\partial_l f + \frac{1}{2} g^{ji}(\partial_k g_{ij}) g^{lk} \partial_l f $$

$$= g^{ij} \Big(\partial_i \partial_j f -g^{lk} (\partial_j g_{ki}) \partial_l f + \frac{1}{2} (\partial_k g_{ij}) g^{lk} \partial_l f \Big) $$

$$= g^{ij} \Big(\partial_i \partial_j f - \partial_l f \frac{1}{2} g^{lk} \big( (\partial_j g_{ki}) + (\partial_j g_{ki}) -(\partial_k g_{ij}) \big) \Big) $$

Notice that if we pull all the sums apart again, we can switch \((\partial_j g_{ki})\) to \((\partial_i g_{kj})\), as the only other \(i,j\)-dependent term in that sum is \(g^{ij}\), which is symmetric. Then the second term is exactly the Christoffel symbol:

$$= g^{ij} \Big(\partial_i \partial_j f - \partial_l f \Gamma^l{}_{ij}\Big) = g^{ij} \nabla_i (\partial_j f)$$
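A symbolic spot check of (b) for one concrete case (my own addition; the polar metric \(\mathrm{diag}(1, r^2)\) and the generic function \(f(r,\theta)\) are arbitrary illustrative choices):

<syntaxhighlight lang="python">
# Sketch: compare the determinant form of L_g(f) with g^{ij} nabla_i(d_j f)
# for the polar metric diag(1, r^2) and a generic smooth f(r, theta).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
f = sp.Function('f')(r, theta)
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()
n = 2
sqrtdet = sp.sqrt(g.det())

def Gamma(l, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[j, k], coords[i])
                      + sp.diff(g[i, k], coords[j])
                      - sp.diff(g[i, j], coords[k]))
        for k in range(n))

# L_g(f) = 1/sqrt(det g) * sum_{i,j} d_j( sqrt(det g) g^{ij} d_i f )
L_f = sum(sp.diff(sqrtdet * ginv[i, j] * sp.diff(f, coords[i]), coords[j])
          for i in range(n) for j in range(n)) / sqrtdet

# g^{ij} nabla_i (d_j f) = g^{ij} ( d_i d_j f - Gamma^k_{ij} d_k f )
cov = sum(ginv[i, j] * (sp.diff(f, coords[i], coords[j])
          - sum(Gamma(k, i, j) * sp.diff(f, coords[k]) for k in range(n)))
          for i in range(n) for j in range(n))

assert sp.simplify(L_f - cov) == 0
print("both expressions agree for this example")
</syntaxhighlight>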

==Problem 9 (Craven)==

Throughout, abbreviate \(\partial_k g_{lm} := g_{klm}\).

===a===

$$
\begin{align}
\nabla_k g_{lm} &= g_{klm} - g_{im}\Gamma^{i}_{kl} - g_{li}\Gamma^{i}_{km} \\
&= g_{klm} - \frac{1}{2} \overbrace{g_{mi}g^{ip}}^{\delta^{p}_{m}}\left(g_{klp}+g_{lpk}-g_{pkl}\right) - \frac{1}{2} \overbrace{g_{li}g^{ip}}^{\delta^{p}_{l}}\left(g_{kmp} + g_{mpk} - g_{pkm}\right) \\
&= g_{klm} - \frac{1}{2}\left(g_{klm} + g_{lmk} - g_{mkl}\right)-\frac{1}{2}\left(g_{klm}+g_{mlk}-g_{lkm}\right) = 0
\end{align}
$$

===b===

Insert the definition of the covariant derivative, define \(\partial_j f := f_j\):

$$g^{ij}\nabla_i f_j = g^{ij}\partial_i f_j - \frac{1}{2}g^{lk}g^{ij}\left(g_{ijl}+g_{jli} - g_{lij}\right)f_k$$

Since we sum over \(i, j\) and \(g^{ij}\) is symmetric, we have \(g^{ij}g_{ijl} = g^{ij}g_{jli}\); thus this is equal to the expression

$$g^{ij}\partial_i f_j - g^{ij}g_{ijl}g^{lk}f_k + \frac{1}{2}g^{ij}g_{lij}g^{lk}f_k$$

We now compute the other expression given (the determinant form of \(L_g\)):

$$\sqrt{\det{g}}^{-1}\partial_l\left(\sqrt{\det{g}}g^{lk}f_k\right)=\frac{1}{2\det{g}}\left(\partial_l \det{g}\right)g^{lk}f_k + \partial_l g^{lk}f_k + g^{lk}\partial_l f_k$$

Now what is left is to calculate the partial derivative of the determinant. By the chain rule we get:
$$
\partial_l \det{g} = d\det_{g}(\partial_l g)
$$
We have to determine the linear map \(X \mapsto d\det_g(X)\), which takes a matrix \(X\) (an element of the tangent space at \(g\)) and maps it to a scalar. For this we use a suitable curve: by the chain rule, if \(\phi(0) = \psi(0)\) and \(\phi'(0) = \psi'(0)\), then \(\frac{d}{dt}\big|_{t=0}\det{\phi(t)} = \frac{d}{dt}\big|_{t=0}\det{\psi(t)}\). Thus we may pick the curve \(\phi(t) = g\,e^{g^{-1}Xt}\), which satisfies \(\phi(0) = g\) and \(\phi'(0) = X\). Since \(\det\phi(t) = \det g\,\det\big(e^{g^{-1}Xt}\big) = \det g\; e^{t\operatorname{tr}(g^{-1}X)}\), taking the derivative at \(t = 0\) gives us \(d\det_g(\partial_l g) = \det{g}~\operatorname{tr}(g^{-1}\partial_lg) = \det{g}\,g^{ij}g_{lij}\). Thus we get the expression:
$$
\frac{1}{2} g^{ij}g_{lij}g^{lk}f_k+\partial_lg^{lk}f_k + g^{lk}\partial_lf_k
$$
Thus all that is left to show is that \(\partial_lg^{lk}f_k = -g^{ij}g_{ijl}g^{lk}f_k\).

Differentiating \(gg^{-1} = \mathbb{I}\) with the product rule we get \( (\partial_mg)g^{-1} + g(\partial_mg^{-1}) = 0 \Rightarrow \partial_mg^{-1} = -g^{-1}(\partial_mg)g^{-1}\). Inserting indices gives us \( \partial_mg^{il} = -g^{ij}g_{mjk}g^{kl}\). Setting \(m=i\) and summing over the index \(i\) gives us \(\partial_ig^{il} = -g^{ij}g_{ijk}g^{kl}\), which, after relabelling the dummy indices, is exactly the identity we had to show.
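The matrix identity \(\partial_m g^{-1} = -g^{-1}(\partial_m g)g^{-1}\) used here (and in the first solution of (b)) can be spot-checked with sympy; the generic symmetric \(2\times 2\) metric below is my own choice of example:

<syntaxhighlight lang="python">
# Sketch: check d(g^{-1}) = -g^{-1} (dg) g^{-1} for a symmetric 2x2 metric.
import sympy as sp

x = sp.symbols('x')
a, b, c = (sp.Function(name)(x) for name in ('a', 'b', 'c'))
g = sp.Matrix([[a, b], [b, c]])

lhs = g.inv().diff(x)
rhs = -g.inv() * g.diff(x) * g.inv()
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
print("identity holds for this family")
</syntaxhighlight>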

===c===

$$
\begin{align}
\tilde \Gamma_{ij}^{l} &= \frac{1}{2}\chi^{-2}g^{lk}\left((\partial_i\chi^2)g_{jk} + \chi^{2}g_{ijk} + (\partial_j\chi^{2})g_{ki} + \chi^2g_{jki}-(\partial_k\chi^2)g_{ij}- \chi^2g_{kij}\right) \\
&=\overbrace{\frac{1}{2}g^{lk}\left(g_{ijk} + g_{jki}-g_{kij}\right)}^{\Gamma^{l}_{ij}} + \overbrace{\frac{1}{2}\chi^{-2}\partial_i\chi^2}^{\partial_i\ln{\chi}}\overbrace{g^{lk}g_{jk}}^{\delta^l_j}+\overbrace{\frac{1}{2}\chi^{-2}\partial_j\chi^2}^{\partial_j \ln{\chi}}\overbrace{g^{lk}g_{ki}}^{\delta^{l}_{i}} - \overbrace{\frac{1}{2}\chi^{-2}\partial_k\chi^2}^{\partial_k \ln{\chi}}g^{lk}g_{ij} \\
&= \Gamma^{l}_{ij} + \partial_i\ln{\chi}\,\delta^l_j + \partial_j\ln{\chi}\,\delta^l_i - \partial_k \ln{\chi}\,g^{lk}g_{ij}
\end{align}
$$

Now take \(g_{ij} = \delta_{ij}\), \(g^{ij}=\delta^{ij}\) and \(\chi = \frac{2}{1-|x|^2}\), just as in the example given. Then \(\Gamma^l_{ij} = 0 \), since the metric \(\mathbb{I}\) is constant, and \(\partial_i \ln{\chi}=\partial_i(\ln{2}-\ln{(1-|x|^2)}) = \frac{2x_i}{1-|x|^2} \). We conclude:

$$
\bar \Gamma^l_{ij} = \frac{2}{1-|x|^2}\left(x_i\delta^l_j + x_j\delta^l_i - x_k\delta^{lk}\delta_{ij}\right) = \frac{2}{1-|x|^2}\left(x_i\delta_{jl} + x_j\delta_{li} - x_l\delta_{ij}\right)
$$
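Finally, a sympy spot check (my own, in dimension \(n = 2\) only) that the Christoffel symbols of \(\bar{g} = \frac{4}{(1-|x|^2)^2}\mathbb{I}\) match the closed form above:

<syntaxhighlight lang="python">
# Sketch: verify the closed form of the Christoffel symbols of
# gbar = 4/(1-|x|^2)^2 * I for n = 2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
coords = [x1, x2]
chi = 2 / (1 - x1**2 - x2**2)
g = chi**2 * sp.eye(2)   # gbar_{ij}
ginv = g.inv()
n = 2
delta = sp.eye(n)

def Gamma(l, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[j, k], coords[i])
                      + sp.diff(g[i, k], coords[j])
                      - sp.diff(g[i, j], coords[k]))
        for k in range(n))

def closed_form(l, i, j):
    return 2 / (1 - x1**2 - x2**2) * (
        coords[i] * delta[j, l] + coords[j] * delta[i, l] - coords[l] * delta[i, j])

assert all(sp.simplify(Gamma(l, i, j) - closed_form(l, i, j)) == 0
           for l in range(n) for i in range(n) for j in range(n))
print("closed form confirmed for n = 2")
</syntaxhighlight>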