Aufgaben:Problem 9


Link to pdf

Media:9+10.pdf

Solution

a)

$$\nabla_k g_{ij} = \partial_k g_{ij} - g_{nj} \Gamma^n_{ki} - g_{in}\Gamma^n_{kj} =$$ $$ = \partial_k g_{ij} - g_{nj}\Big(\frac{1}{2} g^{np}(\partial_k g_{ip} + \partial_i g_{kp} - \partial_p g_{ki})\Big) - g_{in}\Big(\frac{1}{2} g^{nq}(\partial_k g_{jq} + \partial_j g_{kq} - \partial_q g_{kj})\Big) =$$

From the symmetry of \( g_{ij} \) and from \( g_{nj}g^{np} = g^{pn}g_{jn} = \delta^p_j \) we obtain:

$$ = \partial_k g_{ij} - \frac{1}{2} \delta^p_j(\partial_k g_{ip} + \partial_i g_{kp} - \partial_p g_{ki}) - \frac{1}{2} \delta^q_i(\partial_k g_{jq} + \partial_j g_{kq} - \partial_q g_{kj}) = $$

$$ = \partial_k g_{ij} - \frac{1}{2} (\partial_k g_{ij} + \partial_i g_{kj} - \partial_j g_{ki}) - \frac{1}{2} (\partial_k g_{ji} + \partial_j g_{ki} - \partial_i g_{kj}) = $$

$$ = \partial_k g_{ij} - \frac{1}{2} (\partial_k g_{ij} + \partial_k g_{ji}) - \frac{1}{2} (\partial_j g_{ki} - \partial_i g_{kj} + \partial_i g_{kj} - \partial_j g_{ki} ) = 0 $$

Again we used the symmetry of \( g_{ij} \): the first bracket cancels against \( \partial_k g_{ij} \), and the terms in the second bracket cancel pairwise.
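As a quick sanity check (not part of the solution), the identity \( \nabla_k g_{ij} = 0 \) can also be verified symbolically for a concrete metric, e.g. polar coordinates \( g = \mathrm{diag}(1, r^2) \); the sketch below uses sympy, and all helper names are ad-hoc.

```python
# Sanity check (not from the solution): verify nabla_k g_ij = 0 symbolically
# for a concrete metric, here polar coordinates g = diag(1, r^2).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric g_{ij}
ginv = g.inv()                       # inverse metric g^{ij}
n = 2

# Christoffel symbols Gamma^m_{ki} = 1/2 g^{mp}(d_k g_{ip} + d_i g_{kp} - d_p g_{ki})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[m, p]
               * (sp.diff(g[i, p], coords[k]) + sp.diff(g[k, p], coords[i])
                  - sp.diff(g[k, i], coords[p])) for p in range(n))
           for i in range(n)] for k in range(n)] for m in range(n)]

# nabla_k g_{ij} = d_k g_{ij} - g_{mj} Gamma^m_{ki} - g_{im} Gamma^m_{kj}
for k in range(n):
    for i in range(n):
        for j in range(n):
            cov = (sp.diff(g[i, j], coords[k])
                   - sum(g[m, j] * Gamma[m][k][i] for m in range(n))
                   - sum(g[i, m] * Gamma[m][k][j] for m in range(n)))
            assert sp.simplify(cov) == 0
```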

b)

Claim: \(\partial_j \det g = (\det g) tr \Big(g^{-1}\partial_j g \Big)\)

Proof: Since \(g\) is symmetric, it is diagonalisable: \(g = T^{-1}AT\) with \(A\) diagonal, so \(\det g = \det (T^{-1}AT) = \det A\).

(Note that the diagonal entries of \(A\) are strictly positive, since \(g\) is also positive definite \(\Leftrightarrow\) the eigenvalues of \(g\) are strictly positive.)

\begin{align} \partial_j \det g &= \partial_j \det A = \partial_j \prod_{i=1}^n A_{ii} = \sum_{m = 1}^n \Big(\prod_{i \neq m} A_{ii}\Big) \partial_j A_{mm} = \prod_{i=1}^n A_{ii} \sum_{m = 1}^n \frac{\partial_j A_{mm}}{A_{mm}}\\ &= \det A \sum_{m = 1}^n \frac{\partial_j A_{mm}}{A_{mm}} = \det A \sum_{m = 1}^n (A^{-1}\partial_j A)_{mm} = \det A\ tr( A^{-1} \partial_j{A})\\ &= \det g\ tr(A^{-1} \partial_j{A}) \overset{!}{=} \det g\ tr(g^{-1} \partial_j{g}) \end{align}

We prove the last equality separately; here \(\dot{X}\) abbreviates \(\partial_j X\), and one easily verifies that the product rule holds for matrix products:

\begin{align} tr(g^{-1} \partial_j g) &= tr\big(g^{-1} \partial_j(T^{-1}AT)\big) = tr\big(g^{-1} (\dot{T^{-1}}AT + T^{-1}\dot{A}T + T^{-1}A\dot{T})\big)\\ &= tr(g^{-1} \dot{T^{-1}}AT) + tr(g^{-1}T^{-1}\dot{A}T) + tr(g^{-1}T^{-1}A\dot{T})\\ &= tr(\dot{T^{-1}}AT g^{-1}) + tr(T^{-1}A^{-1}T T^{-1}\dot{A}T) + tr(T^{-1}A^{-1}T T^{-1}A\dot{T})\\ &= tr(\dot{T^{-1}}AT T^{-1}A^{-1}T) + tr(T^{-1}A^{-1} \dot{A}T) + tr(T^{-1}A^{-1}A\dot{T})\\ &= tr(\dot{T^{-1}}A A^{-1}T) + tr(\dot{A}T T^{-1}A^{-1} ) + tr(T^{-1}\dot{T})\\ &= tr(\dot{T^{-1}}T) + tr(\dot{A} A^{-1} ) + tr(T^{-1}\dot{T}) = - tr(T^{-1}\dot{T}) + tr(\dot{A} A^{-1} ) + tr(T^{-1}\dot{T})\\ &= tr( A^{-1}\dot{A}) \end{align}

where we used that \(\dot{T^{-1}}T = - T^{-1}\dot{T}\), which follows from differentiating \(T^{-1}T = \mathbb{1}\).

\(\square\)
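A small numerical illustration of the Claim (again not part of the solution, with ad-hoc names): compare a central finite difference of \(\det g(t)\) along an arbitrary smooth curve of symmetric positive definite matrices with \(\det g\ tr(g^{-1}\dot g)\).

```python
# Numerical check of d/dt det g = det(g) tr(g^{-1} dg/dt) via central differences.
# The family g(t) below is an arbitrary smooth curve of SPD matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def g(t):
    M = A + t * B
    return M @ M.T + np.eye(3)   # symmetric positive definite for every t

t0, h = 0.7, 1e-6
lhs = (np.linalg.det(g(t0 + h)) - np.linalg.det(g(t0 - h))) / (2 * h)
dg = (g(t0 + h) - g(t0 - h)) / (2 * h)
rhs = np.linalg.det(g(t0)) * np.trace(np.linalg.inv(g(t0)) @ dg)
assert np.isclose(lhs, rhs, rtol=1e-4)
```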

$$L_g(f) = \frac{1}{\sqrt{\det g}} \partial_j(\sqrt{\det g} g^{ij} \partial_i) f$$

using the product rule (and swapping the terms around):

$$= \frac{1}{\sqrt{\det g}} \Big( \sqrt{\det g}\, g^{ij} \partial_j \partial_i + \sqrt{\det g}\, \partial_j (g^{ij}) \partial_i + \partial_j\big(\sqrt{\det g}\big)\, g^{ij} \partial_i \Big) f$$

using the Claim \(\partial_j \det g = (\det g) tr \Big(g^{-1}\partial_j g \Big) = (\det g) (g^{kl}\partial_j g_{lk})\) together with the chain rule, \(\partial_j \sqrt{\det g} = \frac{\partial_j \det g}{2\sqrt{\det g}} = \frac{1}{2}\sqrt{\det g}\, g^{kl}\partial_j g_{lk}\):

$$= g^{ij} \partial_j \partial_i f + (\partial_j g^{ij})\partial_i f + \frac{1}{2} (g^{kl}\partial_j g_{lk}) g^{ij} \partial_i f $$

Differentiating \(g g^{-1} = \mathbb{1}\) with the product rule gives \((\partial_j g^{-1}) = - g^{-1}(\partial_j g) g^{-1}\), in particular \(\partial_j g^{ij} = -g^{ik} (\partial_j g_{kl}) g^{lj}\). (Note that calling the summation indices \(k\) and \(l\) is allowed, as it just combines the sums, which is possible because all indices run from \(1\) to \(n\).)

$$= g^{ij} \partial_i \partial_j f -g^{ik} (\partial_j g_{kl}) g^{lj}\partial_i f + \frac{1}{2} g^{kl}(\partial_j g_{lk}) g^{ij} \partial_i f $$

Now some index swapping: in the second term \( l \leftrightarrow i\), and in the third term \( l \leftrightarrow i\) and \( k \leftrightarrow j\). (We can do this by separating the sums, relabelling the dummy indices and putting the sums back together.)

$$= g^{ij} \partial_i \partial_j f -g^{lk} (\partial_j g_{ki}) g^{ij}\partial_l f + \frac{1}{2} g^{ji}(\partial_k g_{ij}) g^{lk} \partial_l f $$

$$= g^{ij} \Big(\partial_i \partial_j f -g^{lk} (\partial_j g_{ki}) \partial_l f + \frac{1}{2} (\partial_k g_{ij}) g^{lk} \partial_l f \Big) $$

$$= g^{ij} \Big(\partial_i \partial_j f - \partial_l f \frac{1}{2} g^{lk} \big( (\partial_j g_{ki}) + (\partial_j g_{ki}) -(\partial_k g_{ij}) \big) \Big) $$

Notice that if we pull all the sums apart again we can switch one of the two \((\partial_j g_{ki})\) terms to \((\partial_i g_{kj})\), as the only other \(i,j\)-dependent factor in that sum is \(g^{ij}\), which is symmetric. The second term is then exactly the Christoffel symbol:

$$= g^{ij} \Big(\partial_i \partial_j f - \Gamma^l{}_{ij}\, \partial_l f\Big) = g^{ij} \nabla_i (\partial_j f)$$
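As a cross-check of the whole computation (a sketch with ad-hoc names, not part of the solution), the divergence form of \(L_g\) and \(g^{ij}\nabla_i(\partial_j f)\) can be compared symbolically for the polar metric and a generic \(f\):

```python
# Compare (1/sqrt(det g)) d_j( sqrt(det g) g^{ij} d_i f ) with
# g^{ij} ( d_i d_j f - Gamma^l_{ij} d_l f ) for g = diag(1, r^2).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
f = sp.Function('f')(r, th)
g = sp.Matrix([[1, 0], [0, r**2]])
ginv, detg, n = g.inv(), g.det(), 2

# divergence form of L_g f
div_form = sum(sp.diff(sp.sqrt(detg) * ginv[i, j] * sp.diff(f, coords[i]),
                       coords[j])
               for i in range(n) for j in range(n)) / sp.sqrt(detg)

# Christoffel symbols Gamma^l_{ij}
Gamma = [[[sum(sp.Rational(1, 2) * ginv[l, k]
               * (sp.diff(g[j, k], coords[i]) + sp.diff(g[k, i], coords[j])
                  - sp.diff(g[i, j], coords[k])) for k in range(n))
           for j in range(n)] for i in range(n)] for l in range(n)]

# g^{ij} nabla_i (d_j f)
cov_form = sum(ginv[i, j] * (sp.diff(f, coords[i], coords[j])
                             - sum(Gamma[l][i][j] * sp.diff(f, coords[l])
                                   for l in range(n)))
               for i in range(n) for j in range(n))

assert sp.simplify(div_form - cov_form) == 0
```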

Problem 9 (Craven)

We abbreviate \(g_{klm} := \partial_k g_{lm}\).

a)

$$ \begin{align} \nabla_k g_{lm} &= g_{klm} - g_{mi}\Gamma^{i}_{kl} - g_{li}\Gamma^{i}_{km} \\ &= g_{klm} - \frac{1}{2} \overbrace{g_{mi}g^{ip}}^{\delta^{p}_{m}}\left(g_{klp}+g_{lpk}-g_{pkl}\right) - \frac{1}{2} \overbrace{g_{li}g^{ip}}^{\delta^{p}_{l}}\left(g_{kmp} + g_{mpk} - g_{pkm}\right) \\ &= g_{klm} - \frac{1}{2}\left(g_{klm} + g_{lmk} - g_{mkl}\right)-\frac{1}{2}\left(g_{klm}+g_{mlk}-g_{lkm}\right) = 0 \end{align} $$ where the remaining terms cancel pairwise by the symmetry of \(g\).

b)

Insert the definition of the covariant derivative and define \(\partial_j f := f_j\): $$ g^{ij}\nabla_i f_j = g^{ij}\partial_i f_j - \frac{1}{2}g^{lk}g^{ij}\left(g_{ijl}+g_{jli} - g_{lij}\right)f_k $$ Since we sum over \(i, j\), the symmetry of the indices gives \(g^{ij}g_{ijl} = g^{ij}g_{jli}\), so this is equal to the expression $$ g^{ij}\partial_i f_j - g^{ij}g_{ijl}g^{lk}f_k + \frac{1}{2}g^{ij}g_{lij}g^{lk}f_k $$

We now expand the other expression given: $$ \sqrt{\det{g}}^{-1}\partial_l\left(\sqrt{\det{g}}\,g^{lk}f_k\right)=\frac{1}{2\det{g}}\left(\partial_l \det{g}\right)g^{lk}f_k + \partial_l g^{lk}f_k + g^{lk}\partial_l f_k $$ What is left is to calculate the partial derivative of the determinant. By the chain rule we get: $$ \partial_l \det{g} = d\det_{g}(\partial_lg) $$ Now we have to determine the linear map \(d\det_g\), which takes an element \(X\) of the tangent space (here a symmetric matrix) and maps it to a scalar. For this we use a suitable curve: if \(\phi(0) = \psi(0)\) and \(\phi'(0) = \psi'(0)\), then \(\frac{d}{dt}\big|_{t=0}\det{\phi(t)} = \frac{d}{dt}\big|_{t=0}\det{\psi(t)}\), so we may replace the straight line \(g + tX\) by the more convenient curve \(\phi(t) = g\,e^{g^{-1}Xt}\). Inserting, using \(\det(AB) = \det A \det B\) and \(\det e^{M} = e^{tr M}\), and taking the derivative at \(t=0\) gives \(d\det_g(\partial_l g) = \det{g}~\text{tr}(g^{-1}\partial_lg) = \det{g}\,g^{ij}g_{lij}\). Thus we get the expression: $$ \frac{1}{2} g^{ij}g_{lij}g^{lk}f_k+\partial_lg^{lk}f_k + g^{lk}\partial_lf_k $$ Since \(g^{ij}\partial_i f_j = g^{lk}\partial_l f_k\) (relabel the dummy indices), all that is left to show is that \(\partial_lg^{lk}f_k = -g^{ij}g_{ijl}g^{lk}f_k\).

From \(gg^{-1} = \mathbb{1}\) and the product rule we get \( (\partial_mg)g^{-1} + g(\partial_mg^{-1}) = 0 \Rightarrow -g^{-1}(\partial_mg)g^{-1}=\partial_mg^{-1}\). In indices this reads \( \partial_mg^{il} = -g^{ij}g_{mjk}g^{kl}\). Setting \(m=i\) and summing over the index \(i\) gives \(\partial_ig^{il} = -g^{ij}g_{ijk}g^{kl}\); contracting with \(f_l\) and renaming the dummy indices turns this into \(\partial_lg^{lk}f_k = -g^{ij}g_{ijl}g^{lk}f_k\), thus the identity is proven.
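A short numerical sketch (ad-hoc names, not part of the solution) of the identity \(\partial_m g^{-1} = -g^{-1}(\partial_m g)g^{-1}\) used above, via a central finite difference along one parameter:

```python
# Check d/dt g^{-1} = -g^{-1} (dg/dt) g^{-1} for a smooth curve of SPD matrices.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def g(t):
    M = A + t * B
    return M @ M.T + np.eye(3)   # symmetric positive definite

t0, h = 0.3, 1e-6
d_ginv = (np.linalg.inv(g(t0 + h)) - np.linalg.inv(g(t0 - h))) / (2 * h)
dg = (g(t0 + h) - g(t0 - h)) / (2 * h)
assert np.allclose(d_ginv,
                   -np.linalg.inv(g(t0)) @ dg @ np.linalg.inv(g(t0)),
                   atol=1e-5)
```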

c)

$$ \begin{align} \begin{split} \tilde \Gamma_{ij}^{l} &= \frac{1}{2}\chi^{-2}g^{lk}\left((\partial_i\chi^2)g_{jk} + \chi^{2}g_{ijk} + (\partial_j\chi^{2})g_{ki} + \chi^2g_{jki}-(\partial_k\chi^2)g_{ij}- \chi^2g_{kij}\right) \\ &=\overbrace{\frac{1}{2}g^{lk}\left(g_{ijk} + g_{jki}-g_{kij}\right)}^{\Gamma^{l}_{ij}} + \overbrace{\frac{1}{2}\chi^{-2}\partial_i\chi^2}^{\partial_i\ln{\chi}}\overbrace{g^{lk}g_{jk}}^{\delta^l_j}+\overbrace{\frac{1}{2}\chi^{-2}\partial_j\chi^2}^{\partial_j \ln{\chi}}\overbrace{g^{lk}g_{ki}}^{\delta^{l}_{i}} - \overbrace{\frac{1}{2}\chi^{-2}\partial_k\chi^2}^{\partial_k \ln{\chi}}g^{lk}g_{ij} \\ &= \Gamma^{l}_{ij} + \partial_i\ln{\chi}\,\delta^l_j + \partial_j\ln{\chi}\,\delta^l_i - \partial_k \ln{\chi}\,g^{lk}g_{ij} \end{split} \end{align} $$

Now insert \(g_{ij} = \delta_{ij}\), \(\chi = \frac{2}{1-|x|^2}\) and \(g^{ij}=\delta^{ij}\), just as in the example given. Then \(\Gamma^l_{ij} = 0 \), since the Euclidean metric \(\delta_{ij}\) is constant, and \(\partial_i \ln{\chi}=\partial_i\big(\ln{2}-\ln(1-|x|^2)\big) = \frac{2x_i}{1-|x|^2} \). We conclude: $$ \tilde \Gamma^l_{ij} = \frac{2}{1-|x|^2}\left(x_i\delta^l_j + x_j\delta^l_i - x_k\delta^{lk}\delta_{ij}\right) = \frac{2}{1-|x|^2}\left(x_i\delta_{jl} + x_j\delta_{li} - x_l\delta_{ij}\right) $$
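Finally, a symbolic sketch (ad-hoc names, in dimension two, not part of the solution) comparing \(\tilde\Gamma^l_{ij}\) computed directly from the conformal metric \(\tilde g_{ij} = \chi^2 \delta_{ij}\), \(\chi = \frac{2}{1-|x|^2}\), with the closed form just derived:

```python
# Compare tilde Gamma^l_{ij} computed from its definition with the closed form
# 2/(1-|x|^2) (x_i delta_{jl} + x_j delta_{li} - x_l delta_{ij}), in dimension 2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x, n = [x1, x2], 2
chi = 2 / (1 - x1**2 - x2**2)
gt = chi**2 * sp.eye(2)          # tilde g_{ij} = chi^2 delta_{ij}
gtinv = gt.inv()

def gamma(l, i, j):
    # tilde Gamma^l_{ij} = 1/2 gt^{lk} (d_i gt_{jk} + d_j gt_{ki} - d_k gt_{ij})
    return sum(sp.Rational(1, 2) * gtinv[l, k]
               * (sp.diff(gt[j, k], x[i]) + sp.diff(gt[k, i], x[j])
                  - sp.diff(gt[i, j], x[k])) for k in range(n))

def closed_form(l, i, j):
    d = lambda a, b: 1 if a == b else 0
    return 2 / (1 - x1**2 - x2**2) * (x[i]*d(j, l) + x[j]*d(l, i) - x[l]*d(i, j))

assert all(sp.simplify(gamma(l, i, j) - closed_form(l, i, j)) == 0
           for l in range(n) for i in range(n) for j in range(n))
```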