
Ricci Tensor and Ricci Scalar

The Ricci tensor tracks how volumes change along geodesics; it "summarizes" the volume-related part of the Riemann tensor. The Ricci scalar tells us how the size of a small ball or disk differs from its size in flat space.

Recall the geodesic deviation equation from the previous chapter:

\nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} \boldsymbol{s} = -R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}.
Figure: geodesic deviation in flat space ([R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s} = 0), in positively curved space ([R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s} > 0), and in negatively curved space ([R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s} < 0).

The dot product of the Riemann tensor term with the separation vector depends on the magnitudes of \boldsymbol{v} and \boldsymbol{s}. We can normalize it by dividing by the squared area of the parallelogram formed by the two vectors:

\begin{align*} \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s} \times \boldsymbol{v}|^2} &= \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s}|^2 |\boldsymbol{v}|^2 \sin^2 \theta} \\ &= \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s}|^2 |\boldsymbol{v}|^2 (1 - \cos^2 \theta)} \\ &= \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s}|^2 |\boldsymbol{v}|^2 - (|\boldsymbol{s}| |\boldsymbol{v}| \cos \theta)^2} \\ &= \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{(\boldsymbol{s} \cdot \boldsymbol{s}) (\boldsymbol{v} \cdot \boldsymbol{v}) - (\boldsymbol{s} \cdot \boldsymbol{v})^2}. \end{align*}

We can prove this ratio is invariant under the transformation \boldsymbol{s} \to a\boldsymbol{s} + b\boldsymbol{v} and \boldsymbol{v} \to c\boldsymbol{s} + d\boldsymbol{v} (which spans the same plane), first considering the Riemann tensor:

\begin{align*} R(a\boldsymbol{s} + b\boldsymbol{v}, c\boldsymbol{s} + d\boldsymbol{v}) &= R(a\boldsymbol{s}, c\boldsymbol{s} + d\boldsymbol{v}) + R(b\boldsymbol{v}, c\boldsymbol{s} + d\boldsymbol{v}) \\ &= ac R(\boldsymbol{s}, \boldsymbol{s}) + ad R(\boldsymbol{s}, \boldsymbol{v}) + bc R(\boldsymbol{v}, \boldsymbol{s}) + bd R(\boldsymbol{v}, \boldsymbol{v}) \\ &= ad R(\boldsymbol{s}, \boldsymbol{v}) - bc R(\boldsymbol{s}, \boldsymbol{v}) \\ &= (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}). \end{align*}

Recall that the Riemann tensor acting on a dot product (a scalar) gives zero:

\begin{align*} R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) (\boldsymbol{a} \cdot \boldsymbol{b}) &= R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{a} \cdot \boldsymbol{b} + \boldsymbol{a} \cdot R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{b} = 0, \\ R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{a} \cdot \boldsymbol{b} &= -\boldsymbol{a} \cdot R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{b}, \end{align*}

and if both vectors in the dot product are the same, we get:

R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{a} \cdot \boldsymbol{a} = -\boldsymbol{a} \cdot R(\boldsymbol{e_{\alpha}}, \boldsymbol{e_{\beta}}) \boldsymbol{a} = 0.

Continuing with the numerator:

\begin{align*} R(a\boldsymbol{s} + b\boldsymbol{v}, c\boldsymbol{s} + d\boldsymbol{v}) (c\boldsymbol{s} + d\boldsymbol{v}) \cdot (a\boldsymbol{s} + b\boldsymbol{v}) &= (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) (c\boldsymbol{s} + d\boldsymbol{v}) \cdot (a\boldsymbol{s} + b\boldsymbol{v}) \\ &= (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) c\boldsymbol{s} \cdot (a\boldsymbol{s} + b\boldsymbol{v}) + (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) d\boldsymbol{v} \cdot (a\boldsymbol{s} + b\boldsymbol{v}) \\ &= (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) c\boldsymbol{s} \cdot a\boldsymbol{s} + (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) c\boldsymbol{s} \cdot b\boldsymbol{v} \\ &+ (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) d\boldsymbol{v} \cdot a\boldsymbol{s} + (ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) d\boldsymbol{v} \cdot b\boldsymbol{v} \\ &= bc(ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{s} \cdot \boldsymbol{v} + ad(ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v} \cdot \boldsymbol{s} \\ &= -bc(ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v} \cdot \boldsymbol{s} + ad(ad - bc) R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v} \cdot \boldsymbol{s} \\ &= (ad - bc)^2 R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v} \cdot \boldsymbol{s}, \end{align*}

giving us the original numerator multiplied by (ad - bc)^2.

For the denominator, I will use the cross-product form of the squared area:

\begin{align*} |(a\boldsymbol{s} + b\boldsymbol{v}) \times (c\boldsymbol{s} + d\boldsymbol{v})|^2 &= |a\boldsymbol{s} \times (c\boldsymbol{s} + d\boldsymbol{v}) + b\boldsymbol{v} \times (c\boldsymbol{s} + d\boldsymbol{v})|^2 \\ &= |a\boldsymbol{s} \times c\boldsymbol{s} + a\boldsymbol{s} \times d\boldsymbol{v} + b\boldsymbol{v} \times c\boldsymbol{s} + b\boldsymbol{v} \times d\boldsymbol{v}|^2 \\ &= |a\boldsymbol{s} \times d\boldsymbol{v} + b\boldsymbol{v} \times c\boldsymbol{s}|^2 \\ &= |(ad - bc)\, \boldsymbol{s} \times \boldsymbol{v}|^2 \\ &= (ad - bc)^2 |\boldsymbol{s} \times \boldsymbol{v}|^2, \end{align*}

giving us the original denominator multiplied by (ad - bc)^2:

\frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s} \times \boldsymbol{v}|^2} \to \frac{(ad - bc)^2}{(ad - bc)^2} \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{|\boldsymbol{s} \times \boldsymbol{v}|^2},

implying the ratio is invariant: it depends only on the plane spanned by the two vectors, not on their magnitudes.
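This invariance is easy to check numerically. The sketch below is my own illustration, not part of the derivation: it builds a random tensor with only the two antisymmetries the proof used (antisymmetry in the two arguments of R, and antisymmetry of the resulting matrix from metric compatibility), and confirms both the (ad - bc)^2 scaling and the invariance of the normalized ratio. The index layout R[d, c, a, b] and the Euclidean metric are my choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n, n, n))
# Antisymmetrize the first pair (the matrix R(s, v) is antisymmetric)
# and the last pair (R is antisymmetric in its two vector arguments).
R = T - T.transpose(1, 0, 2, 3)
R = R - R.transpose(0, 1, 3, 2)

def numerator(s, v):
    # [R(s, v) v] . s with layout R[d, c, a, b] for R(e_a, e_b) e_c = R^d_{cab} e_d,
    # lowering the index d with the Euclidean metric.
    return np.einsum('dcab,c,a,b,d->', R, v, s, v, s)

def sectional(s, v):
    # Normalized by the squared parallelogram area (Gram determinant).
    return numerator(s, v) / ((s @ s) * (v @ v) - (s @ v) ** 2)

s, v = rng.standard_normal(n), rng.standard_normal(n)
a, b, c, d = 1.3, -0.7, 0.4, 2.1
s2, v2 = a * s + b * v, c * s + d * v
scale = (a * d - b * c) ** 2

assert np.isclose(numerator(s2, v2), scale * numerator(s, v))
assert np.isclose(sectional(s2, v2), sectional(s, v))
```

Only bilinearity and the two antisymmetries were needed, matching the steps of the proof above.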

This is called the sectional curvature:

K(\boldsymbol{s}, \boldsymbol{v}) = \frac{[R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{s}}{(\boldsymbol{s} \cdot \boldsymbol{s}) (\boldsymbol{v} \cdot \boldsymbol{v}) - (\boldsymbol{s} \cdot \boldsymbol{v})^2}.
Figure: geodesic deviation in flat space (K(\boldsymbol{s}, \boldsymbol{v}) = 0), in positively curved space (K(\boldsymbol{s}, \boldsymbol{v}) > 0), and in negatively curved space (K(\boldsymbol{s}, \boldsymbol{v}) < 0).

For the Ricci curvature, we take a set of orthonormal basis vectors \{\boldsymbol{e_1}, \boldsymbol{e_2}, \dots, \boldsymbol{e_n}\} and a direction vector \boldsymbol{v} = \boldsymbol{e_n}. The Ricci curvature is the sum of the sectional curvatures of the planes containing this vector (the term with \boldsymbol{e_{\mu}} parallel to \boldsymbol{v} contributes nothing, since R(\boldsymbol{v}, \boldsymbol{v}) = 0):

Ric(\boldsymbol{v}, \boldsymbol{v}) = \sum_{\mu} K(\boldsymbol{e_{\mu}}, \boldsymbol{v}).
Figure: sectional curvatures along \boldsymbol{v}.

In the pictured scenario:

\begin{align*} K(\boldsymbol{e_1}, \boldsymbol{v}) &> 0, \\ K(\boldsymbol{e_2}, \boldsymbol{v}) &< 0. \end{align*}

We don't have enough information to determine the Ricci curvature: it could be positive, negative, or even zero. If the Ricci curvature is zero, the volume doesn't change; however, the Riemann tensor may still tell us that there is curvature and a change in shape.

We can use the above formula to calculate the Ricci tensor components in an orthonormal basis (the dot product of a basis vector with itself is one, and the dot product of two different basis vectors is zero):

\begin{align*} Ric(\boldsymbol{v}, \boldsymbol{v}) &= \sum_{\mu} K(\boldsymbol{e_{\mu}}, \boldsymbol{v}) \\ &= \sum_{\mu} \frac{[R(\boldsymbol{e_{\mu}}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{e_{\mu}}}{(\boldsymbol{e_{\mu}} \cdot \boldsymbol{e_{\mu}}) (\boldsymbol{v} \cdot \boldsymbol{v}) - (\boldsymbol{e_{\mu}} \cdot \boldsymbol{v})^2} \\ &= \sum_{\mu} \frac{[R(\boldsymbol{e_{\mu}}, \boldsymbol{v}) \boldsymbol{v}] \cdot \boldsymbol{e_{\mu}}}{(1) (1) - (0)} \\ &= \sum_{\mu} v^{\nu} v^{\sigma} [R(\boldsymbol{e_{\mu}}, \boldsymbol{e_{\nu}}) \boldsymbol{e_{\sigma}}] \cdot \boldsymbol{e_{\mu}} \\ &= v^{\nu} v^{\sigma} \sum_{\mu} R^{\lambda}{}_{\sigma \mu \nu} \boldsymbol{e_{\lambda}} \cdot \boldsymbol{e_{\mu}} \\ &= v^{\nu} v^{\sigma} \sum_{\mu} R^{\lambda}{}_{\sigma \mu \nu} g_{\lambda \mu} \\ &= v^{\nu} v^{\sigma} R^{\mu}{}_{\sigma \mu \nu} \\ &= v^{\nu} v^{\sigma} R_{\sigma \nu}, \end{align*}

where

R_{\sigma \nu} = R^{\mu}{}_{\sigma \mu \nu}

are the components of the Ricci tensor.

The step

\sum_{\mu} R^{\lambda}{}_{\sigma \mu \nu} g_{\lambda \mu} \to R^{\mu}{}_{\sigma \mu \nu}

might seem a bit weird at first, but remember that in an orthonormal basis the metric tensor is the Kronecker delta. If we expand the sum, we get:

\sum_{\mu} R^{\lambda}{}_{\sigma \mu \nu} g_{\lambda \mu} = R^{\lambda}{}_{\sigma 1 \nu} \delta_{\lambda 1} + R^{\lambda}{}_{\sigma 2 \nu} \delta_{\lambda 2} + \dots = R^1{}_{\sigma 1 \nu} + R^2{}_{\sigma 2 \nu} + \dots = R_{\sigma \nu}.

Recall the components of the Riemann tensor on the surface of a sphere:

\begin{align*} R^{\theta}{}_{\theta \theta \theta} &= 0, & R^{\theta}{}_{\theta \theta \phi} &= 0, & R^{\theta}{}_{\theta \phi \theta} &= 0, & R^{\theta}{}_{\theta \phi \phi} &= 0, \\ R^{\theta}{}_{\phi \theta \theta} &= 0, & R^{\theta}{}_{\phi \theta \phi} &= \sin^2 \theta, & R^{\theta}{}_{\phi \phi \theta} &= - \sin^2 \theta, & R^{\theta}{}_{\phi \phi \phi} &= 0,\\ R^{\phi}{}_{\theta \theta \theta} &= 0, & R^{\phi}{}_{\theta \theta \phi} &= -1, & R^{\phi}{}_{\theta \phi \theta} &= 1, & R^{\phi}{}_{\theta \phi \phi} &= 0, \\ R^{\phi}{}_{\phi \theta \theta} &= 0, & R^{\phi}{}_{\phi \theta \phi} &= 0, & R^{\phi}{}_{\phi \phi \theta} &= 0, & R^{\phi}{}_{\phi \phi \phi} &= 0. \end{align*}

From these, we can obtain the nonzero components of the Ricci tensor:

\begin{align*} R_{\theta \theta} &= 1, \\ R_{\phi \phi} &= \sin^2 \theta. \end{align*}
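The contraction R_{\sigma \nu} = R^{\mu}{}_{\sigma \mu \nu} can be checked mechanically. The sketch below is my own: it stores the sphere's Riemann components listed above in a nested array (index order: upper index first, then the three lower indices) and performs the contraction with sympy.

```python
import sympy as sp

theta = sp.symbols('theta')

# Riem[l][s][m][n] = R^l_{s m n}; indices 0 = theta, 1 = phi.
Riem = [[[[0 for _ in range(2)] for _ in range(2)] for _ in range(2)] for _ in range(2)]
Riem[0][1][0][1] = sp.sin(theta) ** 2    # R^theta_{phi theta phi}
Riem[0][1][1][0] = -sp.sin(theta) ** 2   # R^theta_{phi phi theta}
Riem[1][0][0][1] = -1                    # R^phi_{theta theta phi}
Riem[1][0][1][0] = 1                     # R^phi_{theta phi theta}

# Ricci: contract the upper index with the second lower index.
Ricci = sp.Matrix(2, 2, lambda s, n: sum(Riem[m][s][m][n] for m in range(2)))
print(Ricci)  # diag(1, sin(theta)**2)
```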

Sectional curvature works only in an orthonormal basis. For a general basis, we need a different approach based on the volume form, which measures volumes. For the parallelogram created by two vectors, the volume form is:

\omega(\boldsymbol{u}, \boldsymbol{v}) = \begin{vmatrix} u^1 & v^1 \\ u^2 & v^2 \end{vmatrix} = \epsilon_{\mu \nu} u^{\mu} v^{\nu},

where

\epsilon_{\mu \nu} = \begin{cases} +1 & \textrm{indices in increasing order (\(\epsilon_{12}\))} \\ -1 & \textrm{indices swapped (\(\epsilon_{21}\))} \\ \ \ \ 0 & \textrm{any index repeated} \end{cases}

is the Levi-Civita symbol.

A volume created by three vectors is equal to:

\omega(\boldsymbol{u}, \boldsymbol{v}, \boldsymbol{w}) = \begin{vmatrix} u^1 & v^1 & w^1 \\ u^2 & v^2 & w^2 \\ u^3 & v^3 & w^3 \end{vmatrix} = \epsilon_{\mu \nu \sigma} u^{\mu} v^{\nu} w^{\sigma},

where:

\epsilon_{\mu \nu \sigma} = \begin{cases} +1 & \textrm{even permutation of \(\mu \nu \sigma\)} \\ -1 & \textrm{odd permutation of \(\mu \nu \sigma\)} \\ \ \ \ 0 & \textrm{any index repeated} \end{cases}
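The determinant identities above can be verified numerically. This is my own sketch (the sign-via-inversion-counting implementation is one of several equivalent choices): build the Levi-Civita symbol as a dense array and compare the contraction with the determinant of the matrix whose columns are the vectors.

```python
import itertools
import numpy as np

def levi_civita(n):
    """Dense array of the rank-n Levi-Civita symbol."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # Parity by counting inversions: pairs that appear out of order.
        inversions = sum(p > q for i, p in enumerate(perm) for q in perm[i + 1:])
        eps[perm] = (-1) ** inversions
    return eps

rng = np.random.default_rng(1)

# 2D: eps_{mu nu} u^mu v^nu equals the 2x2 determinant with u, v as columns.
u, v = rng.standard_normal(2), rng.standard_normal(2)
area = np.einsum('ij,i,j->', levi_civita(2), u, v)
assert np.isclose(area, np.linalg.det(np.column_stack([u, v])))

# 3D: eps_{mu nu sigma} u^mu v^nu w^sigma equals the 3x3 determinant.
a, b, c = rng.standard_normal((3, 3))
volume = np.einsum('ijk,i,j,k->', levi_civita(3), a, b, c)
assert np.isclose(volume, np.linalg.det(np.column_stack([a, b, c])))
```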

Now, this only works in an orthonormal basis.

To handle a general basis, consider basis vectors obtained by a coordinate transformation from the orthonormal basis, and evaluate the volume form on them:

\begin{align*} \boldsymbol{\tilde{e}_1} &= \frac{\partial x^1}{\partial \tilde{x}^1} \boldsymbol{e_1} + \frac{\partial x^2}{\partial \tilde{x}^1} \boldsymbol{e_2}, \\ \boldsymbol{\tilde{e}_2} &= \frac{\partial x^1}{\partial \tilde{x}^2} \boldsymbol{e_1} + \frac{\partial x^2}{\partial \tilde{x}^2} \boldsymbol{e_2}, \\ \omega(\boldsymbol{\tilde{e}_1}, \boldsymbol{\tilde{e}_2}) &= \begin{vmatrix} \frac{\partial x^1}{\partial \tilde{x}^1} & \frac{\partial x^1}{\partial \tilde{x}^2} \\ \frac{\partial x^2}{\partial \tilde{x}^1} & \frac{\partial x^2}{\partial \tilde{x}^2} \end{vmatrix} = \det J, \end{align*}

while the metric tensor transforms as:

\begin{align*} \tilde{g}_{\mu \nu} &= \frac{\partial x^{\alpha}}{\partial \tilde{x}^{\mu}} \frac{\partial x^{\beta}}{\partial \tilde{x}^{\nu}} g_{\alpha \beta} \\ &= J^{\alpha}_{\mu} J^{\beta}_{\nu} g_{\alpha \beta}, \end{align*}

and if we take the determinant of both sides, we obtain:

\det \tilde{g} = (\det J)^2 (\det g) = (\det J)^2,

where the determinant of the old metric tensor is 1 since it's the Kronecker delta (orthonormal basis). This implies:

\omega(\boldsymbol{\tilde{e}_1}, \boldsymbol{\tilde{e}_2}) = \sqrt{\det \tilde{g}}.
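A quick numerical check of the determinant relation (my own sketch; the random Jacobian stands in for an arbitrary coordinate transformation): transform the orthonormal metric and compare determinants.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
J = rng.standard_normal((n, n))   # J[alpha, mu] = dx^alpha / dx~^mu
g = np.eye(n)                     # orthonormal basis: metric is the identity
g_tilde = J.T @ g @ J             # g~_{mu nu} = J^alpha_mu J^beta_nu g_{alpha beta}

assert np.isclose(np.linalg.det(g_tilde), np.linalg.det(J) ** 2)
# The volume of the transformed basis cell is sqrt(det g~) = |det J|.
assert np.isclose(np.sqrt(np.linalg.det(g_tilde)), abs(np.linalg.det(J)))
```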

The volume form of the vectors in general basis is:

\omega(\boldsymbol{u}, \boldsymbol{v}) = \sqrt{\det g} \begin{vmatrix} u^1 & v^1 \\ u^2 & v^2 \end{vmatrix} = \sqrt{\det g}\ \epsilon_{\mu \nu} u^{\mu} v^{\nu},

where I have relabeled \tilde{g} to g. For three vectors, this is equal to:

\omega(\boldsymbol{u}, \boldsymbol{v}, \boldsymbol{w}) = \sqrt{\det g} \begin{vmatrix} u^1 & v^1 & w^1 \\ u^2 & v^2 & w^2 \\ u^3 & v^3 & w^3 \end{vmatrix} = \sqrt{\det g}\ \epsilon_{\mu \nu \sigma} u^{\mu} v^{\nu} w^{\sigma},

or in general:

\omega(\boldsymbol{v_1}, \boldsymbol{v_2}, \dots) = \sqrt{\det g}\ \epsilon_{\alpha \beta \dots} v_1^{\alpha} v_2^{\beta} \dots = \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(\prod_{i} v_i^{\mu_i}\right),

where there is a sum over each \mu_i index due to the Einstein summation convention.
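As a concrete illustration (my own example, using polar coordinates rather than anything from the text): with the metric g = diag(1, r^2), we get \sqrt{\det g} = r, so the cell spanned by the coordinate basis vectors has the familiar polar area element r.

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
g = sp.diag(1, r ** 2)            # polar-coordinate metric
sqrt_det_g = sp.sqrt(g.det())     # = r, since r > 0

def omega(u, v):
    # omega(u, v) = sqrt(det g) * eps_{mu nu} u^mu v^nu
    return sp.simplify(sqrt_det_g * (u[0] * v[1] - u[1] * v[0]))

e_r, e_phi = (1, 0), (0, 1)       # coordinate basis vectors
print(omega(e_r, e_phi))  # r
```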

I will now show that the first covariant derivative of the volume form is zero. Take a geodesic path and parallel-transport the vectors along it. Since the Levi-Civita connection preserves volume, the covariant derivative of the volume form must vanish:

0 = \nabla_{\boldsymbol{u}}\, \omega(\boldsymbol{v_1}, \boldsymbol{v_2}, \dots) = \left(\prod_{i} v_i^{\mu_i}\right) \nabla_{\boldsymbol{u}} \left(\sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots}\right),

where the product can be pulled outside the covariant derivative since, by the definition of parallel transport, the covariant derivative of each vector is zero. Note that this does not mean that volumes do not change, only that the way we measure volume does not change. Now let's take the second covariant derivative of a volume spanned by separation vectors between geodesics:

\begin{align*} V &= \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(\prod_{i} s_i^{\mu_i}\right), \\ \nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} V &= \nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} \left[\sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(\prod_{i} s_i^{\mu_i}\right)\right] \\ &= \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} \left(\prod_{i} s_i^{\mu_i}\right) \\ &= \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \nabla_{\boldsymbol{v}} \left(\sum_j \dot{s}_j^{\mu_j} \prod_{i \neq j} s_i^{\mu_i}\right) \\ &= \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(\sum_j \ddot{s}_j^{\mu_j} \prod_{i \neq j} s_i^{\mu_i} + \sum_{j \neq k} \dot{s}_j^{\mu_j} \dot{s}_k^{\mu_k} \prod_{i \neq j, k} s_i^{\mu_i}\right), \end{align*}

Since the \boldsymbol{s_i} are separation vectors between geodesics, their second derivatives are given by the geodesic deviation equation:

\begin{align*} \nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} \boldsymbol{s} &= -R(\boldsymbol{s}, \boldsymbol{v}) \boldsymbol{v} = - R^{\sigma}{}_{\lambda \alpha \beta} s^{\alpha} v^{\beta} v^{\lambda} \boldsymbol{e_{\sigma}}, \\ (\nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} \boldsymbol{s})^{\sigma} &= - R^{\sigma}{}_{\lambda \alpha \beta} s^{\alpha} v^{\beta} v^{\lambda}, \\ \ddot{s}_j^{\mu_j} &= - R^{\mu_j}{}_{\lambda \alpha \beta} s_j^{\alpha} v^{\beta} v^{\lambda}, \end{align*}

continuing:

\nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} V = \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(- \sum_j R^{\mu_j}{}_{\lambda \alpha \beta} s_j^{\alpha} v^{\beta} v^{\lambda} \prod_{i \neq j} s_i^{\mu_i} + \sum_{j \neq k} \dot{s}_j^{\mu_j} \dot{s}_k^{\mu_k} \prod_{i \neq j, k} s_i^{\mu_i}\right).

Remember, there are summations over the \mu_i indices, and because of the Levi-Civita symbol, any term with a repeated index vanishes. Since every index except \mu_j is already taken by the remaining separation vectors, only the choice \alpha = \mu_j survives:

\begin{align*} \nabla_{\boldsymbol{v}} \nabla_{\boldsymbol{v}} V &= \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \left(- \sum_j R^{\mu_j}{}_{\lambda \mu_j \beta} s_j^{\mu_j} v^{\beta} v^{\lambda} \prod_{i \neq j} s_i^{\mu_i} + \sum_{j \neq k} \dot{s}_j^{\mu_j} \dot{s}_k^{\mu_k} \prod_{i \neq j, k} s_i^{\mu_i}\right) \\ &= -R^{\mu}{}_{\lambda \mu \beta} v^{\beta} v^{\lambda} \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \prod_i s_i^{\mu_i} + \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \sum_{j \neq k} \dot{s}_j^{\mu_j} \dot{s}_k^{\mu_k} \prod_{i \neq j, k} s_i^{\mu_i} \\ &= -R_{\lambda \beta} v^{\beta} v^{\lambda} V + \sqrt{\det g}\ \epsilon_{\mu_1 \mu_2 \dots} \sum_{j \neq k} \dot{s}_j^{\mu_j} \dot{s}_k^{\mu_k} \prod_{i \neq j, k} s_i^{\mu_i}, \end{align*}

where R_{\lambda \beta} are the Ricci tensor components. The relative volume acceleration is proportional to the Ricci tensor: the first term tells us how the volume changes due to the curvature of the space, while the second term captures the change in volume that can occur even in flat space (remember, geodesics may separate at a constant rate in flat space).

Since the Ricci tensor components on the surface of a sphere are all positive, the first term is negative, meaning that volumes shrink.

The Ricci scalar keeps track of how the size of a ball deviates from the size in flat space:

R = R^{\mu}{}_{\mu} = g^{\mu \nu} R_{\mu \nu}.

On the surface of a sphere, this equals:

R = g^{\theta \theta} R_{\theta \theta} + g^{\phi \phi} R_{\phi \phi} = \frac{1}{r^2} + \frac{1}{r^2 \sin^2 \theta} \sin^2 \theta = \frac{2}{r^2}.

I will replace the radius of the sphere r with \mathcal{R}:

R = \frac{2}{\mathcal{R}^2}.
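The whole chain — metric, Christoffel symbols, Riemann tensor, Ricci tensor, Ricci scalar — can be run end to end with sympy. This is my own sketch (the symbol names and the standard curvature conventions are my choices), confirming the sphere values derived above.

```python
import sympy as sp

theta, phi, Rad = sp.symbols('theta phi R', positive=True)
x = [theta, phi]
g = sp.diag(Rad ** 2, Rad ** 2 * sp.sin(theta) ** 2)  # sphere of radius R
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^l_{ij} = (1/2) g^{lm} (d_i g_{mj} + d_j g_{mi} - d_m g_{ij}).
Gamma = [[[sum(ginv[l, m] * (sp.diff(g[m, i], x[j]) + sp.diff(g[m, j], x[i])
                             - sp.diff(g[i, j], x[m])) for m in range(n)) / 2
           for j in range(n)] for i in range(n)] for l in range(n)]

def riemann(l, s, m, nu):
    # R^l_{s m nu} = d_m Gamma^l_{nu s} - d_nu Gamma^l_{m s}
    #              + Gamma^l_{m a} Gamma^a_{nu s} - Gamma^l_{nu a} Gamma^a_{m s}
    expr = sp.diff(Gamma[l][nu][s], x[m]) - sp.diff(Gamma[l][m][s], x[nu])
    expr += sum(Gamma[l][m][a] * Gamma[a][nu][s] - Gamma[l][nu][a] * Gamma[a][m][s]
                for a in range(n))
    return sp.simplify(expr)

# Ricci tensor and scalar by contraction.
Ricci = sp.Matrix(n, n, lambda s, nu: sum(riemann(m, s, m, nu) for m in range(n)))
R_scalar = sp.simplify(sum(ginv[m, nu] * Ricci[m, nu]
                           for m in range(n) for nu in range(n)))
print(R_scalar)  # 2/R**2
```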

Observe the difference between a circle on the surface of a sphere and a circle in flat space:

Circle on a sphere vs flat circle

We can see that, for the same boundary circle (circumference 2\pi\rho), the cap on the sphere has a larger area than the flat disk. Also:

\begin{align*} r &= \mathcal{R} \theta, \\ \rho &= \mathcal{R} \sin \theta. \end{align*}

The area of the flat circle is:

A_f = \pi r^2,

while the area of the circle on the surface of the sphere is:

\begin{align*} A_s &= \int dA \\ &= \int 2 \pi \rho\ dr \\ &= \int 2 \pi \mathcal{R} \sin \theta\, \mathcal{R}\, d\theta \\ &= 2 \pi \mathcal{R}^2 \int_0^{\theta} \sin \theta\, d\theta \\ &= -2 \pi \mathcal{R}^2 \left[\cos \theta\right]_0^{\theta} \\ &= 2 \pi \mathcal{R}^2 (1 - \cos \theta) \\ &= 2 \pi \mathcal{R}^2 \left(1 - \cos \frac{r}{\mathcal{R}}\right). \end{align*}
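The integral is easy to confirm symbolically. A small sympy check of my own (the symbols Theta for the opening angle and R for the radius are mine):

```python
import sympy as sp

theta, Theta, Rad = sp.symbols('theta Theta R', positive=True)

rho = Rad * sp.sin(theta)   # radius of the circle in the embedding plane
dr = Rad                    # dr = R dtheta along the surface

# A_s = integral of 2*pi*rho dr from 0 to the opening angle Theta.
A_s = sp.integrate(2 * sp.pi * rho * dr, (theta, 0, Theta))
# Mathematically, A_s = 2*pi*R**2*(1 - cos(Theta)).
```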

The ratio of the curved and flat area is:

\begin{align*} \frac{A_s}{A_f} &= \frac{2 \pi \mathcal{R}^2 \left(1 - \cos \frac{r}{\mathcal{R}}\right)}{\pi r^2} \\ &= \frac{2 \mathcal{R}^2 \left(1 - \cos \frac{r}{\mathcal{R}}\right)}{r^2}. \end{align*}

I will take the Taylor series of the cosine:

\begin{align*} \frac{A_s}{A_f} &= \frac{2 \mathcal{R}^2}{r^2} \left(1 - \left(1 - \frac{1}{2!} \left(\frac{r}{\mathcal{R}}\right)^2 + \frac{1}{4!} \left(\frac{r}{\mathcal{R}}\right)^4 - \frac{1}{6!} \left(\frac{r}{\mathcal{R}}\right)^6 + \cdots \right)\right) \\ &= \frac{2 \mathcal{R}^2}{r^2} \left(\frac{1}{2!} \left(\frac{r}{\mathcal{R}}\right)^2 - \frac{1}{4!} \left(\frac{r}{\mathcal{R}}\right)^4 + \frac{1}{6!} \left(\frac{r}{\mathcal{R}}\right)^6 - \cdots\right) \\ &= 1 - \frac{r^2}{24} \frac{2}{\mathcal{R}^2} + 2 \left[\frac{1}{6!} \left(\frac{r}{\mathcal{R}}\right)^4 - \cdots\right] \\ &= 1 - \frac{r^2}{24} R + O(r^4), \end{align*}

This ratio is a little less than one, meaning that for the same radius (r, not \rho), the circle on the sphere has a smaller area than a circle in flat space.
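The expansion can be checked with sympy's series machinery (my own sketch; R is the sphere radius \mathcal{R}): the coefficient of r^2 should be -1/(12 \mathcal{R}^2), i.e. minus the Ricci scalar 2/\mathcal{R}^2 divided by 24.

```python
import sympy as sp

r, Rad = sp.symbols('r R', positive=True)

# Ratio of the curved to flat area, expanded around r = 0.
ratio = 2 * Rad ** 2 * (1 - sp.cos(r / Rad)) / r ** 2
expansion = sp.series(ratio, r, 0, 4)
print(expansion)  # 1 - r**2/(12 R**2) + O(r**4), i.e. 1 - (r**2/24)*(2/R**2)
```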

Generally, if the Ricci scalar is positive, then for the same radius, curved space has less area, and for the same circumference, curved space has more area.

If the Ricci scalar is negative, then for the same radius, curved space has more area, and for the same circumference, curved space has less area.

Recall the symmetries and identities of the Riemann tensor:

\begin{align*} R_{\sigma \rho \mu \lambda} &= -R_{\sigma \rho \lambda \mu}, \\ R_{\sigma \gamma \alpha \beta} + R_{\sigma \beta \gamma \alpha} + R_{\sigma \alpha \beta \gamma} &= 0, \tag{Torsion-free} \\ R_{\beta \alpha \mu \lambda} &= -R_{\alpha \beta \mu \lambda}, \tag{Metric compatibility} \\ R_{\sigma \gamma \alpha \beta} &= R_{\alpha \beta \sigma \gamma}. \tag{Torsion-free \& metric compatibility} \end{align*}

The Ricci tensor is a contraction of the Riemann tensor:

R_{\mu \nu} = R^{\sigma}{}_{\mu \sigma \nu} = g^{\sigma \lambda} R_{\lambda \mu \sigma \nu}.

Consider the contraction of the contravariant index with the first covariant index (it vanishes because g^{\sigma \lambda} is symmetric while the Riemann tensor is antisymmetric in its first two indices):

R^{\sigma}{}_{\sigma \mu \nu} = g^{\sigma \lambda} R_{\lambda \sigma \mu \nu} = -g^{\sigma \lambda} R_{\sigma \lambda \mu \nu} = 0.

We already know that the contraction in the contravariant index and the second covariant index is the Ricci tensor components:

R^{\sigma}{}_{\mu \sigma \nu} = R_{\mu \nu}.

Considering the last possibility - the contraction in the contravariant index and the third covariant index:

R^{\sigma}{}_{\mu \nu \sigma} = g^{\sigma \lambda} R_{\lambda \mu \nu \sigma} = -g^{\sigma \lambda} R_{\lambda \mu \sigma \nu} = -R_{\mu \nu},

so the Ricci tensor is the only meaningful contraction.

The Ricci tensor is symmetric:

\begin{align*} R_{\mu \nu} &= g^{\sigma \lambda} R_{\lambda \mu \sigma \nu} \\ &= g^{\sigma \lambda} R_{\sigma \nu \lambda \mu} \\ &= R_{\nu \mu}. \end{align*}
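These contraction facts follow from the symmetries alone, which makes them easy to check numerically. My own sketch: impose the antisymmetries and pair-exchange symmetry on a random tensor, then contract with a Euclidean metric.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
T = rng.standard_normal((n, n, n, n))
R = T - T.transpose(1, 0, 2, 3)   # antisymmetry in the first pair
R = R - R.transpose(0, 1, 3, 2)   # antisymmetry in the last pair
R = R + R.transpose(2, 3, 0, 1)   # pair-exchange symmetry

first = np.einsum('ssmn->mn', R)  # R^sigma_{sigma mu nu}: should vanish
ricci = np.einsum('smsn->mn', R)  # R^sigma_{mu sigma nu}: the Ricci tensor
third = np.einsum('smns->mn', R)  # R^sigma_{mu nu sigma}: minus the Ricci tensor

assert np.allclose(first, 0)
assert np.allclose(third, -ricci)
assert np.allclose(ricci, ricci.T)  # Ricci is symmetric
```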

Recall the second Bianchi identity:

R^{\sigma}{}_{\lambda \alpha \beta; \gamma} + R^{\sigma}{}_{\lambda \gamma \alpha; \beta} + R^{\sigma}{}_{\lambda \beta \gamma; \alpha} = 0,

I will lower the index and do the following contraction:

\begin{align*} R_{\sigma \lambda \alpha \beta; \gamma} + R_{\sigma \lambda \gamma \alpha; \beta} + R_{\sigma \lambda \beta \gamma; \alpha} &= 0, \\ g^{\lambda \beta} g^{\sigma \alpha} (R_{\sigma \lambda \alpha \beta; \gamma} + R_{\sigma \lambda \gamma \alpha; \beta} + R_{\sigma \lambda \beta \gamma; \alpha}) &= 0, \\ g^{\lambda \beta} (R^{\alpha}{}_{\lambda \alpha \beta; \gamma} + R^{\alpha}{}_{\lambda \gamma \alpha; \beta} + g^{\sigma \alpha} R_{\sigma \lambda \beta \gamma; \alpha}) &= 0, \\ g^{\lambda \beta} (R_{\lambda \beta; \gamma} - R^{\alpha}{}_{\lambda \alpha \gamma; \beta} - g^{\sigma \alpha} R_{\lambda \sigma \beta \gamma; \alpha}) &= 0, \\ R^{\beta}{}_{\beta; \gamma} - g^{\lambda \beta} R_{\lambda \gamma; \beta} - g^{\sigma \alpha} R^{\beta}{}_{\sigma \beta \gamma; \alpha} &= 0, \\ R_{; \gamma} - R^{\beta}{}_{\gamma; \beta} - g^{\sigma \alpha} R_{\sigma \gamma; \alpha} &= 0, \\ R_{; \gamma} - R^{\beta}{}_{\gamma; \beta} - R^{\alpha}{}_{\gamma; \alpha} &= 0, \\ R_{; \gamma} - R^{\beta}{}_{\gamma; \beta} - R^{\beta}{}_{\gamma; \beta} &= 0, \\ R_{; \gamma} - 2 R^{\beta}{}_{\gamma; \beta} &= 0, \\ \frac{1}{2} \delta^{\beta}_{\gamma} R_{; \beta} - R^{\beta}{}_{\gamma; \beta} &= 0, \\ \frac{1}{2} g^{\gamma \rho} \delta^{\beta}_{\gamma} R_{; \beta} - g^{\gamma \rho} R^{\beta}{}_{\gamma; \beta} &= 0, \\ \frac{1}{2} g^{\beta \rho} R_{; \beta} - R^{\beta \rho}{}_{; \beta} &= 0, \end{align*}

and this is called the contracted Bianchi identity:

R^{\alpha \beta}{}_{; \beta} - \frac{1}{2} g^{\alpha \beta} R_{; \beta} = 0.

This can be rewritten:

\begin{align*} R^{\alpha \beta}{}_{; \beta} - \frac{1}{2} g^{\alpha \beta} R_{; \beta} &= 0, \\ \left(R^{\alpha \beta} - \frac{1}{2} g^{\alpha \beta} R\right)_{; \beta} &= 0, \\ G^{\alpha \beta}{}_{; \beta} &= 0, \end{align*}

where

G^{\alpha \beta} = R^{\alpha \beta} - \frac{1}{2} g^{\alpha \beta} R

is the Einstein tensor.
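As a quick consistency check of the definition (my own sketch, plugging in the sphere values derived earlier): in two dimensions the Einstein tensor vanishes identically, and the sphere illustrates this.

```python
import sympy as sp

theta, Rad = sp.symbols('theta R', positive=True)

g = sp.diag(Rad ** 2, Rad ** 2 * sp.sin(theta) ** 2)  # sphere metric
Ricci = sp.diag(1, sp.sin(theta) ** 2)                # Ricci components from earlier
R_scalar = 2 / Rad ** 2                               # Ricci scalar from earlier

# G_{ab} = R_{ab} - (1/2) g_{ab} R
G = (Ricci - sp.Rational(1, 2) * g * R_scalar).applyfunc(sp.simplify)
print(G)  # zero matrix: G vanishes identically in two dimensions
```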

The conservation of energy-momentum is as follows:

T^{\alpha \beta}{}_{; \beta} = 0,

where T^{\alpha \beta} is the energy-momentum tensor.

This implies:

G^{\alpha \beta}{}_{; \beta} = T^{\alpha \beta}{}_{; \beta} = 0.

We can then postulate that curvature is proportional to energy and momentum:

\begin{align*} G^{\alpha \beta} &= \frac{8 \pi G}{c^4} T^{\alpha \beta}, \\ R^{\alpha \beta} - \frac{1}{2} g^{\alpha \beta} R &= \frac{8 \pi G}{c^4} T^{\alpha \beta}, \end{align*}

where all three tensors are symmetric. In four-dimensional spacetime, this results in ten independent equations.

Since the covariant derivative of the metric tensor is zero, we can add the metric multiplied by a constant without spoiling the conservation law:

R^{\alpha \beta} - \frac{1}{2} g^{\alpha \beta} R + \Lambda g^{\alpha \beta} = \frac{8 \pi G}{c^4} T^{\alpha \beta},

or in the covariant form:

R_{\alpha \beta} - \frac{1}{2} g_{\alpha \beta} R + \Lambda g_{\alpha \beta} = \frac{8 \pi G}{c^4} T_{\alpha \beta},

where \Lambda is the cosmological constant. It is related to the expansion of the universe and dark energy.