Chapter 3
Laplace Equation

One of the most important PDEs is the Laplace equation

\[ \Delta u = \frac{\partial^2 u}{\partial x_1^2} + \dots + \frac{\partial^2 u}{\partial x_n^2} = 0. \]

The corresponding inhomogeneous PDE is Poisson’s equation

\[ -\Delta u = f. \]

Both equations are linear PDEs of second order with the unknown function $u : \mathbb{R}^n \to \mathbb{R}$. A function that solves Laplace's equation is called harmonic. As is typical with linear inhomogeneous equations, the sum of a solution of Poisson's equation and a harmonic function is again a solution to Poisson's equation.

These equations show up in many situations. In physics they describe for example the potential u (also called the voltage) of an electric field in the vacuum with some distribution of charges f. To give some more detail, perhaps you are familiar with Coulomb’s law: if we have a particle with charge Q at the origin and another particle with charge q at x, then the force on the second particle is

\[ F = \frac{k_e q Q}{|x|^2}\,\hat{x} = q\left(\frac{k_e Q}{|x|^2}\,\hat{x}\right), \]

where $k_e$ is an empirical constant. (If you haven't seen this before, it is very much like Newton's law of gravitation.) If the charges have the same sign the force pushes the second particle in the $\hat{x}$ direction (repulsion); if they have opposite signs the force is in the $-\hat{x}$ direction (attraction). We interpret the bracketed expression as the electric field of the first particle. Then the same rule can be stated as: a positively-charged particle moves in the direction of the electric field and a negatively-charged particle in the opposite direction. And in fact this vector field is a gradient

\[ \frac{k_e Q}{|x|^2}\,\hat{x} = -\nabla\left(\frac{k_e Q}{|x|}\right). \]

For historical reasons, the potential is defined by $E = -\nabla u$. So we could say that a positively-charged particle tries to decrease the electric potential, like a ball rolling down a hill. The steeper the change in potential, the stronger the force. We will use this example of electric potential to give an interpretation of some of our results. Indeed, much of this theory was first developed by physicists, and some techniques seem strange if one does not know the physical motivation!

3.1 Fundamental Solution

The Laplace equation is invariant with respect to all rotations and translations of the Euclidean space $\mathbb{R}^n$. Therefore we first look for solutions which are invariant with respect to all rotations. These solutions depend only on the length $r = |x| = \sqrt{x\cdot x}$ of the position vector $x$. For such functions $u(x) = v(r) = v(\sqrt{x\cdot x})$ we calculate:

\[ \nabla_x u(x) = v'(\sqrt{x\cdot x})\,\nabla_x r = v'(\sqrt{x\cdot x})\,\frac{2x}{2r}. \]

Hence the Laplace equation simplifies to an ODE

\[ \Delta_x u(x) = \nabla_x\cdot\nabla_x u = v''(r)\frac{x\cdot x}{r^2} + v'(r)\frac{n}{r} - v'(r)\frac{x\cdot x}{r^2\,r} = v''(r) + \frac{n-1}{r}\,v'(r) = 0. \]

Let us solve this ODE:

\[ \frac{v''(r)}{v'(r)} = \frac{1-n}{r} \;\Rightarrow\; \ln(v'(r)) = (1-n)\ln(r) + C \;\Rightarrow\; v(r) = \begin{cases} C'\ln(r) + C'' & \text{for } n = 2,\\[1ex] \dfrac{C'}{r^{n-2}} + C'' & \text{for } n \geq 3. \end{cases} \]

We see two things here. The space of solutions is two dimensional, with one solution being just the constant solution $u = C''$. The other solution is not a solution on all of $\mathbb{R}^n$ because it has a singularity at the origin. Nevertheless these are important 'solutions' to consider!

Definition 3.1. Let Φ(x) be the following solutions of the Laplace equation:

\[ \Phi(x) = \begin{cases} -\dfrac{1}{2\pi}\ln|x| & \text{for } n = 2,\\[1ex] \dfrac{1}{n(n-2)\,\omega_n\,|x|^{n-2}} & \text{for } n \geq 3. \end{cases} \]

Here $\omega_n$ denotes the volume of the unit ball $B(0,1)$ in Euclidean space $\mathbb{R}^n$. We call these fundamental solutions of the Laplace equation.
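As a quick computational sanity check (a sketch of our own, not part of the text; all function names are ours), the following Python snippet evaluates $\Phi$ in any dimension, computing $\omega_n = \pi^{n/2}/\Gamma(\tfrac{n}{2}+1)$, and verifies by finite differences that $\Phi$ is harmonic away from the origin.

```python
import numpy as np
from math import pi, gamma, log

def unit_ball_volume(n):
    # omega_n = pi^(n/2) / Gamma(n/2 + 1)
    return pi**(n/2) / gamma(n/2 + 1)

def Phi(x):
    # fundamental solution of the Laplace equation in R^n
    x = np.asarray(x, dtype=float)
    n, r = x.size, np.linalg.norm(x)
    if n == 2:
        return -log(r) / (2*pi)
    return 1.0 / (n*(n - 2)*unit_ball_volume(n)*r**(n - 2))

def discrete_laplacian(f, x, h=1e-3):
    # central second differences in each coordinate direction
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        total += (f(x + e) - 2*f(x) + f(x - e)) / h**2
    return total

# away from its singularity at the origin, Phi is (numerically) harmonic
for point in ([0.7, -0.4], [0.5, 0.3, -1.0]):
    print(len(point), discrete_laplacian(Phi, point))   # both values are close to 0
```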

This solution lies in the space of radially symmetric solutions. Notice that for $n = 3$ it is the electric potential of a single particle. We have chosen $C'' = 0$, which makes the solution tend to zero for large $x$. The constant $C'$ is chosen in such a way that the following theorem holds:

Theorem 3.2. For $f \in C^2_0(\mathbb{R}^n)$ a solution of Poisson's equation $-\Delta u = f$ is given by

\[ u(x) = \Phi * f = \int_{\mathbb{R}^n} \Phi(y)\, f(x-y)\, d^n y. \]

Moreover, the distribution corresponding to the fundamental solution obeys $-\Delta F_\Phi = \delta$.

Proof. We see that the function $u$ is twice continuously differentiable: since $f$ is twice continuously differentiable and has compact support, we can differentiate under the integral sign. We calculate

\[ \frac{\partial^2 u}{\partial x_i \partial x_j}(x) = \int_{\mathbb{R}^n} \Phi(y)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x-y)\, d^n y. \]

In particular, $\Delta u(x) = \int_{\mathbb{R}^n} \Phi(y)\,\Delta_x f(x-y)\, dy$. We decompose this integral into the sum of an integral near the singularity of $\Phi$ and an integral away from this singularity:

\[ \Delta u(x) = \int_{B(0,\epsilon)} \Phi(y)\,\Delta_x f(x-y)\, dy + \int_{\mathbb{R}^n \setminus B(0,\epsilon)} \Phi(y)\,\Delta_x f(x-y)\, dy = I_\epsilon + J_\epsilon. \]

We use $\int r\ln r\, dr = \frac{r^2}{2}\left(\ln r - \frac{1}{2}\right)$ and $\int r\, dr = \frac{r^2}{2}$ and estimate the first integral for $\epsilon \to 0$:

\[ |I_\epsilon| \leq \|\Delta_x f\|_{L^\infty(\mathbb{R}^n)} \int_{B(0,\epsilon)} |\Phi(y)|\, dy \leq \begin{cases} C\epsilon^2\bigl(|\ln\epsilon| + 1\bigr) & (n = 2)\\ C\epsilon^2 & (n \geq 3). \end{cases} \]

In the $J_\epsilon$ integral, because $\Delta$ is of second order, we can change $\Delta_x f(x-y)$ to $\Delta_y[f(x-y)]$ without changing signs. Then integration by parts yields

\[ J_\epsilon = \int_{\mathbb{R}^n \setminus B(0,\epsilon)} \Phi(y)\,\nabla_y\cdot\nabla_y[f(x-y)]\, dy = -\int_{\mathbb{R}^n \setminus B(0,\epsilon)} \nabla_y\Phi(y)\cdot\nabla_y[f(x-y)]\, dy + \int_{\partial B(0,\epsilon)} \Phi(y)\,\nabla_y[f(x-y)]\cdot N\, d\sigma(y) = K_\epsilon + L_\epsilon. \]

We are able to apply integration by parts because $f$ has compact support; we can restrict $\mathbb{R}^n$ to some large ball without changing the integral. The second term converges to zero in the limit $\epsilon \to 0$:

\[ |L_\epsilon| \leq \|\nabla f\|_{L^\infty(\mathbb{R}^n)} \int_{\partial B(0,\epsilon)} |\Phi(y)|\, d\sigma(y) \leq \begin{cases} C\epsilon|\ln\epsilon| & (n = 2)\\ C\epsilon & (n \geq 3). \end{cases} \]

Another integration by parts of the first term yields

\[ K_\epsilon = \int_{\mathbb{R}^n \setminus B(0,\epsilon)} \Delta_y\Phi(y)\, f(x-y)\, dy - \int_{\partial B(0,\epsilon)} \nabla_y\Phi(y)\, f(x-y)\cdot N\, d\sigma(y) = -\int_{\partial B(0,\epsilon)} \nabla_y\Phi(y)\cdot N\, f(x-y)\, d\sigma(y). \]

Here we used that $\Phi$ is harmonic for $y \neq 0$. The gradient of $\Phi$ is equal to $\nabla\Phi(y) = -\frac{1}{n\omega_n}\frac{y}{|y|^n}$. The outer normal $N$ of $\mathbb{R}^n \setminus B(0,\epsilon)$ on $\partial B(0,\epsilon)$ points towards the origin and is given by the expression $-\frac{y}{|y|}$. Together, $\nabla_y\Phi(y)\cdot N = \frac{1}{n\omega_n}\frac{1}{|y|^{n-1}}$. As we will prove rigorously in Lemma 3.3, the limit of $K_\epsilon$ as $\epsilon \to 0$ is $-f(x)$. We can understand this intuitively by observing that for $\epsilon$ small and $y \in \partial B(0,\epsilon)$ we have $f(x-y) \approx f(x)$ by continuity. Therefore

\[ K_\epsilon \approx -\int_{\partial B(0,\epsilon)} f(x)\, \frac{1}{n\omega_n}\frac{1}{\epsilon^{n-1}}\, d\sigma(y) = -f(x)\, \frac{1}{n\omega_n \epsilon^{n-1}} \int_{\partial B(0,\epsilon)} 1\, d\sigma(y) = -f(x). \]

Putting these three limits together

\[ \Delta u(x) = I_\epsilon + K_\epsilon + L_\epsilon \to 0 - f(x) + 0. \]

Because the left hand side is independent of $\epsilon$, we conclude that it must have been equal to $-f(x)$ all along.

It remains to prove the claim about distributions. For any test function $\varphi$ we have, by the definition of the distributional derivative,

\[ (\Delta F_\Phi)(\varphi) = F_\Phi(\Delta\varphi) = \int_{\mathbb{R}^n} \Phi(y)\,\Delta\varphi(y)\, d^n y. \]

But we can see this as the calculation above with $\varphi(y) = f(0-y)$. The conclusion is that the value of the integral is $-\varphi(0)$. Moving the minus sign around we arrive at $(-\Delta F_\Phi)(\varphi) = \varphi(0)$. But this is the definition of the delta distribution. □

In general, a fundamental solution of a constant coefficient linear PDE $Lu = f$ has the property that $L\Phi = \delta$ in the sense of distributions. We make these assumptions on $L$ so that it is just a real-linear combination of partial derivatives and so interacts well with convolution. In particular, if we apply $L$ to the convolution of $f$ and the fundamental solution:

\[ L(\Phi * f) = (L\Phi) * f = \delta * f = f. \]

This shows that the convolution $\Phi * f$ solves the inhomogeneous PDE as long as it is well defined and the derivative rule for convolutions holds.
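As an illustrative numerical sketch of this (our own; the Gaussian $f$, the grid, and the test point are arbitrary choices), one can check $\Delta(\Phi * f) = -f$ in $\mathbb{R}^2$ by using the identity $\Delta(\Phi * f) = \Phi * (\Delta f)$ and a midpoint-rule quadrature, which keeps the logarithmic singularity of $\Phi$ harmless.

```python
import numpy as np

# sketch: check (Phi * Delta f)(x) = -f(x) in R^2 for a rapidly decaying f
def Phi2(r):
    return -np.log(r) / (2*np.pi)              # fundamental solution for n = 2

def f(x1, x2):
    return np.exp(-(x1**2 + x2**2))            # smooth, rapidly decaying "charge"

def lap_f(x1, x2):
    r2 = x1**2 + x2**2
    return (4*r2 - 4)*np.exp(-r2)              # exact Laplacian of f

N, L = 600, 8.0                                # cell-centred grid on [-L, L]^2,
h = 2*L / N                                    # avoids hitting the singularity of Phi
s = -L + (np.arange(N) + 0.5)*h
Y1, Y2 = np.meshgrid(s, s, indexing="ij")

x = (0.3, -0.2)                                # arbitrary test point
integrand = Phi2(np.hypot(Y1, Y2)) * lap_f(x[0] - Y1, x[1] - Y2)
print(integrand.sum()*h**2, -f(*x))            # the two numbers nearly agree
```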

To give the physics explanation, the fundamental solution is the potential of a single particle with unit charge. The charge of a particle is described by the delta distribution because it is concentrated at a point but the total amount is finite. Consider the situation with two particles $f = Q_1\delta_p + Q_2\delta_q$. This formula (pretending that $\delta$ is a function) says that their potential is

\[ u(x) = \int_{\mathbb{R}^n} \Phi(x-y)\,f(y)\, d^n y = \int_{\mathbb{R}^n} \Phi(x-y)\,Q_1\delta_p(y)\, d^n y + \int_{\mathbb{R}^n} \Phi(x-y)\,Q_2\delta_q(y)\, d^n y = Q_1\Phi(x-p) + Q_2\Phi(x-q). \]

The interpretation is that if you have charges described by $f$, then treat them as a sum (or integral) of particles. Each particle produces an electric potential like $Q_1\Phi(x-p)$, and the total potential $u$ is the sum (or integral).
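A small sketch of this superposition (ours; it assumes the $n = 3$ fundamental solution $\Phi(z) = \frac{1}{4\pi|z|}$ and arbitrary charges and positions): the potential of two point charges is harmonic away from $p$ and $q$.

```python
import numpy as np

def Phi3(z):
    return 1.0/(4*np.pi*np.linalg.norm(z))     # fundamental solution for n = 3

p, q, Q1, Q2 = np.array([1.0, 0, 0]), np.array([-1.0, 0, 0]), 2.0, -1.0
u = lambda x: Q1*Phi3(x - p) + Q2*Phi3(x - q)  # superposition of two point charges

def lap(f, x, h=1e-3):
    # central second differences in the three coordinate directions
    return sum((f(x + h*e) - 2*f(x) + f(x - h*e))/h**2 for e in np.eye(3))

x = np.array([0.3, 0.5, -0.2])
print(u(x), lap(u, x))                         # the Laplacian is approximately 0
```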

Fundamental solutions are not usually unique, however. Consider the present case of the Laplace equation. If we have any harmonic function $v$ then $-\Delta(\Phi + v) = -\Delta\Phi - \Delta v = \delta + 0$ shows that $\Phi + v$ is also a fundamental solution. The difference between two fundamental solutions solves the Laplace equation, so this is the only possibility for other fundamental solutions. Different fundamental solutions can produce different solutions to the PDE. We shall see that the fundamental solution we have chosen is the only one that vanishes at infinity, which makes it in some sense the best one.

The difference between the first and second claim of the theorem is the assumption of regularity of $f$: twice continuously differentiable or smooth respectively. In fact it is possible to generalise this theorem further: the convolution of $f$ with $\Phi$ is defined for continuous functions $f \in L^1(\mathbb{R}^n)$ and belongs to $L^1(\mathbb{R}^n)$. In this case the result of the convolution may not be differentiable, but it is a solution of Poisson's equation in the sense of distributions. However, if one assumes that $f$ is Lipschitz continuous and belongs to $L^1(\mathbb{R}^n)$ then $u$ is twice differentiable (in the usual sense) and solves the PDE. This situation is typical of the delicate questions of regularity of the solution.

3.2 Mean Value Property

In the previous section we constructed a solution to the inhomogeneous equation. Any other solution must differ from the constructed one by a harmonic function. We should therefore understand harmonic functions in order to understand the space of solutions. In this section we shall prove the following property of a harmonic function $u$ on an open domain $\Omega \subseteq \mathbb{R}^n$: the value $u(x)$ of $u$ at the center of any ball $B(x,r)$ with compact closure in $\Omega$ is equal to the mean of $u$ on the boundary of the ball. Conversely, if this holds for all balls with compact closure in $\Omega$, then $u$ is harmonic. This relation is called the mean value property and has many important consequences.

Let us introduce some notation. Given a function u let

\[ \mathcal{S}[u](x,r) := \frac{1}{n\omega_n r^{n-1}} \int_{\partial B(x,r)} u(y)\, d\sigma(y) = \frac{1}{n\omega_n} \int_{\partial B(0,1)} u(x+rz)\, d\sigma(z) \]

be its spherical mean. Here $\omega_n$ denotes the volume of the unit ball in Euclidean space $\mathbb{R}^n$, and the equality follows from Lemma 2.9(iv) applied to the parametrisation $z \mapsto x + rz$. We write $\mathcal{S}(r)$ when the function and center point are clear.

The ball mean of $u$ on the ball $B(x,r)$ is

\[ \mathcal{M}[u](x,r) := \frac{1}{\omega_n r^n} \int_{B(x,r)} u\, d\mu = \frac{1}{\omega_n r^n} \int_0^r \int_{\partial B(x,s)} u\, d\sigma\, ds, \]

using the co-area formula, Lemma 2.11. Many statements can therefore be made either in terms of ball means or spherical means.

The spherical mean, and means generally, have several nice properties. First note that the normalisation constant in the definition ensures that $\mathcal{S}[1] = 1$ and likewise for any other constant. The mean is real-linear in the function: $\mathcal{S}[au + bv] = a\,\mathcal{S}[u] + b\,\mathcal{S}[v]$, which just follows from linearity of the integral. Likewise it follows from the monotonicity of the integral that if $u \leq v$ then $\mathcal{S}[u] \leq \mathcal{S}[v]$. From these basic properties follows continuity at the center:

Lemma 3.3. If $u$ is a continuous function then $\lim_{r\downarrow 0} \mathcal{S}[u](x,r) = u(x)$.

Proof. By the definition of continuity, for all $\varepsilon > 0$ there is a radius $\delta$ such that for all points $y \in B(x,\delta)$ we know $|u(y) - u(x)| < \varepsilon$. For any $r < \delta$ it follows that

\[ |\mathcal{S}[u] - u(x)| = |\mathcal{S}[u] - \mathcal{S}[u(x)]| = |\mathcal{S}[u - u(x)]| \leq \mathcal{S}\bigl[|u - u(x)|\bigr] < \mathcal{S}[\varepsilon] = \varepsilon. \]

But this is the definition of $\lim_{r\downarrow 0} \mathcal{S}[u](x,r) = u(x)$. □

Particularly important is the relationship between the spherical mean and the Laplacian of u. Differentiating the spherical mean with respect to the radius and using the divergence theorem gives

\[ \frac{\partial}{\partial r}\mathcal{S}(r) = \frac{1}{n\omega_n} \int_{\partial B(0,1)} \frac{d}{dr}\bigl(u(x+rz)\bigr)\, d\sigma(z) = \frac{1}{n\omega_n} \int_{\partial B(0,1)} \nabla u(x+rz)\cdot z\, d\sigma(z) = \frac{1}{n\omega_n r^{n-1}} \int_{\partial B(x,r)} \nabla u(y)\cdot N\, d\sigma(y) = \frac{1}{n\omega_n r^{n-1}} \int_{B(x,r)} \Delta u\, d\mu. \tag{3.4} \]

Therefore if u is harmonic then 𝒮(r) is constant. With these important properties of means prepared, we are ready to fully prove our claim.

Theorem 3.5 (Mean Value Property). Let $u \in C(\Omega)$ on an open domain $\Omega \subseteq \mathbb{R}^n$. We say that $u$ has the mean value property if

\[ u(x) = \mathcal{S}[u](x,r) = \frac{1}{n\omega_n r^{n-1}} \int_{\partial B(x,r)} u(y)\, d\sigma(y) \]

for all balls with $\overline{B(x,r)} \subset \Omega$. A twice continuously differentiable function $u \in C^2(\Omega)$ has the mean value property if and only if it is harmonic. Additionally, the same result holds if ball means are used in place of spherical means.

Proof. We have just calculated that if $u$ is harmonic then $\mathcal{S}(r)$ is constant. From the previous lemma we then conclude that $\mathcal{S}(r) = u(x)$ for all applicable $r$. Conversely, if $\Delta u(x) \neq 0$, then by the continuity of $\Delta u$ there is a ball $B(x,r)$ where $\Delta u$ is strictly positive (or negative). For this ball and any ball contained in it the right hand side of equation (3.4) is strictly positive (or negative) and the spherical mean is strictly monotonic. Therefore it is not constant.

To show the statement about ball means relate it to the spherical means:

\[ \mathcal{M}[u](x,r) = \frac{1}{\omega_n r^n} \int_{B(x,r)} u\, d\mu = \frac{n}{r^n} \int_0^r \frac{s^{n-1}}{n\omega_n s^{n-1}} \int_{\partial B(x,s)} u\, d\sigma\, ds = \frac{n}{r^n} \int_0^r s^{n-1}\,\mathcal{S}(s)\, ds. \]

Thus if 𝒮 is constant and equal to u(x), so is the ball mean. If the ball mean is constant and equal to u(x) then we differentiate both sides with respect to r

\[ 0 = \frac{\partial}{\partial r}\mathcal{M}[u](x,r) = -\frac{n^2}{r^{n+1}} \int_0^r s^{n-1}\,\mathcal{S}(s)\, ds + \frac{n}{r^n}\, r^{n-1}\,\mathcal{S}(r) = -\frac{n}{r}\,u(x) + \frac{n}{r}\,\mathcal{S}(r). \]

Therefore 𝒮(r) = u(x) too. □
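As a numerical sketch (ours; the harmonic polynomial $u(x_1,x_2) = x_1^2 - x_2^2$ and the sample radii are arbitrary choices), the spherical means over circles of several radii all reproduce the value at the center:

```python
import numpy as np

def u(p):
    return p[0]**2 - p[1]**2                   # a harmonic polynomial in R^2

def spherical_mean(f, center, r, m=10_000):
    theta = 2*np.pi*(np.arange(m) + 0.5)/m     # midpoint rule on the circle
    pts = center[:, None] + r*np.array([np.cos(theta), np.sin(theta)])
    return f(pts).mean()

c = np.array([1.3, -0.7])
for r in (0.1, 0.5, 2.0):
    print(r, spherical_mean(u, c, r), u(c))    # every mean equals u(c)
```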

Keeping with our theme of distributions, we might wonder how we can reinterpret the mean value property for distributions. As is typical when extending definitions to distributions, we first develop a formula for regular distributions. Suppose that $u : \Omega \to \mathbb{R}$ is continuous and $\overline{B(a,R)} \subset \Omega$. For each point $a \in \Omega$, we view the spherical mean as a function $r \mapsto \mathcal{S}[u](a,r)$ on $(0,R)$. Therefore $F_{\mathcal{S}[u](a,\cdot)} \in \mathcal{D}'((0,R))$. For any test function $\psi \in \mathcal{D}((0,R))$ we compute

\[ F_{\mathcal{S}[u](a,\cdot)}(\psi) = \int_0^R \mathcal{S}[u](a,r)\,\psi(r)\, dr = \int_0^R \int_{\partial B(a,r)} \frac{1}{n\omega_n r^{n-1}}\, u(z)\,\psi(r)\, d\sigma(z)\, dr = \int_{B(a,R)} u(x)\, \frac{\psi(|x-a|)}{n\omega_n |x-a|^{n-1}}\, dx = \int_\Omega u(x)\, \frac{\psi(|x-a|)}{n\omega_n |x-a|^{n-1}}\, dx = F_u\!\left( \frac{\psi(|x-a|)}{n\omega_n |x-a|^{n-1}} \right). \]

Therefore we make the following definition for any distribution $F \in \mathcal{D}'(\Omega)$. For any $a \in \Omega$ there is a ball $\overline{B(a,R)} \subset \Omega$. The spherical mean $\mathcal{S}_a[F]$ of $F$ around $a$ is the distribution on $(0,R)$ with the formula

\[ \mathcal{S}_a[F](\psi) = F(\tilde\psi_a), \qquad \text{for } \tilde\psi_a(x) = \frac{\psi(|x-a|)}{n\omega_n |x-a|^{n-1}}. \]

This is well-defined for two reasons. First, the support of $\psi$ excludes $0$, so $\tilde\psi_a$ is identically zero on a neighborhood of $a$. In particular, dividing by $|x-a|^{n-1}$ does not produce a singularity. And second, the support of $\tilde\psi_a$ is contained in $B(a,R)$. This shows that it is a test function on $\Omega$.

The mean value property is that the spherical means of the function are constant in the radius. Hence the corresponding property of distributions should require Sa[F] to be ‘constant’ in a suitable sense. We will prove in an exercise that a distribution G corresponds to a constant function if and only if

\[ \forall \varphi \in \mathcal{D}(\Omega) : \quad \int_\Omega \varphi\, dx = 0 \;\Rightarrow\; G(\varphi) = 0. \]

Together we have

Definition 3.6 (Weak Mean Value Property). Let $U \in \mathcal{D}'(\Omega)$ be a distribution on an open domain $\Omega \subseteq \mathbb{R}^n$. It is called harmonic if $\Delta U = 0$ in the sense of distributions. We say that $U$ has the weak mean value property if for each $a \in \Omega$ the respective spherical mean $\mathcal{S}_a[U]$ is a constant distribution. More explicitly, this means that for each ball $B(a,R)$ with $\overline{B(a,R)} \subset \Omega$ and each $\psi \in C_0^\infty((0,R))$ with $\int \psi\, d\mu = 0$ the distribution $U$ vanishes on the test function $\tilde\psi_a$.

What is the relationship of the weak mean value property to the (strong) mean value property? Suppose $U = F_u$ for a continuous function $u \in C(\Omega)$. If $u$ has the mean value property, then we observe that

\[ \mathcal{S}_a[F_u](\psi) = F_u(\tilde\psi_a) = F_{\mathcal{S}[u](a,\cdot)}(\psi) = F_{u(a)}(\psi). \]

In other words, for each $a \in \Omega$ the distribution $\mathcal{S}_a[F_u]$ corresponds to the constant function $u(a)$. Thus $F_u$ has the weak mean value property. Conversely, suppose that $F_u$ has the weak mean value property: for each point $a \in \Omega$ there is a constant $c$ such that $F_c = \mathcal{S}_a[F_u] = F_{\mathcal{S}[u](a,\cdot)}$. But we may use the fundamental lemma of the calculus of variations, Lemma 2.15, to conclude that $c = \mathcal{S}[u](a,r)$. Hence the spherical mean of $u$ is constant in the radius. In summary:

Lemma 3.7. For $u \in C(\Omega)$, $u$ has the mean value property if and only if $F_u$ has the weak mean value property.

The functions $\tilde\psi_a$ may look a little scary, but in fact they are actually friendly once you get to know them. They are smooth functions characterised by two properties:

1.

they are radially symmetric around a, and

2.

they have compact support in $\mathbb{R}^n \setminus \{a\}$.

It is clear that any $\tilde\psi_a$ has these two properties. If a smooth function $\varphi$ has Property 1, then it is a function of the distance $|x-a|$. Another way to state Property 2 is to say that the support is contained in an annulus centered at $a$. Because it vanishes in a neighborhood of $a$, there are no issues with the non-smoothness of $|x-a|$ at $x = a$. So define $\psi(|x-a|) = n\omega_n |x-a|^{n-1}\varphi(x)$ to get a function $\psi \in C_0^\infty((0,R))$ with the relation $\varphi = \tilde\psi_a$.

These functions also behave well under convolution, so long as it is the convolution of a 'big annulus' with a 'little annulus'. By this we mean the following. Consider $\tilde\chi_b * \tilde\psi_a$. Further suppose that $\tilde\chi_b$ is identically zero on $B(b,R)$ and the support of $\tilde\psi_a$ lies in $B(a,r)$ for $r < R$. Then $\tilde\chi_b * \tilde\psi_a$ also obeys Properties 1 and 2. Let us demonstrate this now. First, due to Lemma 2.13 we know that $\tilde\chi_b * \tilde\psi_a$ is rotationally symmetric around $b + a$. Second, the convolution has compact support in $\mathbb{R}^n$ by the addition formula for supports. It remains to show that it vanishes in a neighbourhood of $b + a$. But this too follows from the addition formula for the support of a convolution, since $a + b \notin (\mathbb{R}^n \setminus B(b,R)) + B(a,r)$.

There is a final point to be made about the total integral of these functions. Recall the formula $F_u(\tilde\psi_a) = F_{\mathcal{S}[u](a,\cdot)}(\psi)$. We apply this to the function $u \equiv 1$, which has the mean value property, to get $F_1(\tilde\psi_a) = F_1(\psi)$. Writing this out as integrals shows

\[ \int_\Omega \tilde\psi_a\, dx = \int_{(0,R)} \psi\, dr. \]

In particular, the integral of $\tilde\psi_a$ is zero if and only if the integral of $\psi$ is zero. And as a reminder, when we introduced convolutions we noted that the integral of $\tilde\chi_b * \tilde\psi_a$ is the product of the integrals of the two functions. Important to the proof below is that if $\tilde\chi_b$ has total integral zero, so too does $\tilde\chi_b * \tilde\psi_a$. In particular, the weak mean value property applies to it.
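The identity $\int_\Omega \tilde\psi_a\, dx = \int_{(0,R)} \psi\, dr$ can also be checked numerically. The following sketch (ours; the particular bump $\psi$ is an arbitrary choice) does this for $n = 2$, $a = 0$, $R = 1$.

```python
import numpy as np

def psi(r):
    # a smooth bump supported in the interval (0.3, 0.7)
    out = np.zeros_like(r)
    m = (r > 0.3) & (r < 0.7)
    out[m] = np.exp(-0.04/((r[m] - 0.3)*(0.7 - r[m])))
    return out

def psi_tilde(x1, x2):
    # psi_tilde_0 for n = 2: psi(|x|) / (2 pi |x|)
    r = np.hypot(x1, x2)
    return np.where(r > 0, psi(r)/(2*np.pi*np.maximum(r, 1e-300)), 0.0)

h = 2.0/1000                                       # midpoint grid on [-1, 1]^2
s = -1.0 + (np.arange(1000) + 0.5)*h
X1, X2 = np.meshgrid(s, s, indexing="ij")
lhs = psi_tilde(X1, X2).sum()*h**2                 # integral over the plane

dt = 1.0/20_000                                    # midpoint rule on (0, 1)
t = (np.arange(20_000) + 0.5)*dt
print(lhs, psi(t).sum()*dt)                        # the two values nearly agree
```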

Now we are ready to prove that a distribution has the weak mean value property if and only if it is a harmonic distribution. This should be seen as a generalisation of Theorem 3.5. Something stronger comes out of this proof, a famous result known as Weyl's lemma. It tells us that weak solutions of the Laplace equation coincide with the strong solutions, and all solutions are smooth.

Weyl’s Lemma 3.8. On an open domain Ω n, a distribution U 𝒟(Ω) is harmonic if and only if it has the weak mean value property. For each harmonic distribution U 𝒟(Ω) there exists a harmonic function u C(Ω) with U = Fu.

Proof. The steps of the proof are as follows:

1.

We show that harmonic distributions have the weak mean value property.

2.

For any distribution U with the weak mean value property, we can define a function u through spherical means. This function is smooth and harmonic.

3.

We show that u corresponds to the original distribution U. So every distribution with the weak mean value property is a harmonic distribution.

Step 1. Suppose that $U$ is a harmonic distribution. Choose any point $a \in \Omega$ and suppose $\overline{B(a,R)} \subset \Omega$. For every $\psi \in \mathcal{D}((0,R))$ with integral $0$, we will show that there exists a test function $g \in \mathcal{D}(\Omega)$ with $\Delta g = \tilde\psi_a$. This is sufficient to prove that $U$ has the weak mean value property because then $U(\tilde\psi_a) = U(\Delta g) = (\Delta U)(g) = 0$.

By the assumption that the total integral of $\psi$ is zero we can define a test function $\Psi \in \mathcal{D}((0,R))$ through $\Psi(r) = \int_0^r \psi$ with $\Psi' = \psi$. Then we define

\[ g(x) = v(|x-a|) \qquad\text{with}\qquad v(t) = \int_R^t \frac{\Psi(r)}{n\omega_n r^{n-1}}\, dr. \]

This function $g$ depends only on $|x-a|$. Because one end of the integral is set at $R$ and $\Psi$ has compact support, $g$ has compact support in $B(a,R) \subset \Omega$. Similarly it is constant on $B(a,\epsilon)$ for some $\epsilon > 0$. For $x$ near $a$ therefore, $\Delta g = 0 = \tilde\psi_a(x)$. And for $x \neq a$ we can reuse the calculation of the Laplacian for radial functions from the search for the fundamental solution:

\[ \Delta g(x) = v''(|x-a|) + \frac{n-1}{|x-a|}\, v'(|x-a|). \]

Note

\[ v'(t) = \frac{\Psi(t)}{n\omega_n t^{n-1}}, \qquad v''(t) = \frac{\psi(t)}{n\omega_n t^{n-1}} - \frac{(n-1)\,\Psi(t)}{n\omega_n t^n} = \frac{\psi(t)}{n\omega_n t^{n-1}} - \frac{n-1}{t}\,\frac{\Psi(t)}{n\omega_n t^{n-1}}, \]

which implies

\[ \Delta g(x) = \frac{\psi(|x-a|)}{n\omega_n |x-a|^{n-1}} = \tilde\psi_a(x). \]

This concludes Step 1.

In Step 2, we assume that $U$ has the weak mean value property and construct a smooth harmonic function $u$. For any open subset $\Omega'$ compactly contained in $\Omega$ there is a radius $R > 0$ such that $\Omega' + B(0,R) \subseteq \Omega$. For all $x \in \Omega'$ choose any $\psi \in \mathcal{D}((0,R))$ with $\int_0^R \psi(r)\, dr = 1$ and define

\[ u(x) := (\tilde\psi_0 * U)(x). \]

Due to Lemma 2.16, u is smooth. But we need to check that this definition is independent of the choice of ψ. We can unwind the definitions of the convolution

\[ u(x) = (\tilde\psi_0 * U)(x) = U(\mathsf{T}_x \mathsf{P}\tilde\psi_0) = U\bigl(y \mapsto \mathsf{P}\tilde\psi_0(y-x)\bigr) = U\bigl(y \mapsto \tilde\psi_0(x-y)\bigr) = U(\tilde\psi_x). \]

Now suppose that $\chi$ is another choice. Then $\tilde\psi_x - \tilde\chi_x$ is a test function on $\Omega$ with total integral zero (its integral equals the integral of $\psi$ minus the integral of $\chi$, both of which are $1$). The weak mean value property now implies

\[ U(\tilde\psi_x) - U(\tilde\chi_x) = U(\tilde\psi_x - \tilde\chi_x) = 0. \]

Next we prove that the distribution $F_u$ has the weak mean value property. How does $F_u$ act on a test function $\varphi$? Again this is answered by Lemma 2.16: $F_u(\varphi) = U(\varphi * \mathsf{P}\tilde\psi_0)$. This formula simplifies a little because $\tilde\psi_0 = \mathsf{P}\tilde\psi_0$ is a radial function. Let $\tilde\chi_b$ be any function from the definition of the weak mean value property. Then we must show that $U(\tilde\chi_b * \tilde\psi_0) = 0$. The trick is to use the freedom in the definition of $u$ to choose a suitable $\tilde\psi_0$. We know that there is an $\epsilon > 0$ such that $\tilde\chi_b$ vanishes on $B(b,\epsilon)$. We can choose $\tilde\psi_0$ such that its support lies inside the ball $B(0,\epsilon/2)$. Then by the discussion above we know that $\tilde\chi_b * \tilde\psi_0$ is again a function of the form considered in the weak mean value property. Therefore $F_u(\tilde\chi_b) = U(\tilde\chi_b * \tilde\psi_0) = 0$. In other words $F_u$ has the weak mean value property. By Lemma 3.7, $u$ has the mean value property and further by Theorem 3.5, $u$ is harmonic.

Lastly, we have Step 3, where we prove $F_u = U$. The functions $\kappa_\epsilon(t) = \lambda_{\epsilon/3}\bigl(t - \tfrac{2}{3}\epsilon\bigr)$ have support $[\tfrac{\epsilon}{3}, \epsilon]$ and total integral $1$. Thus the corresponding functions $\tilde\kappa_\epsilon$ are a smooth mollifier on $\mathbb{R}^n$. We again use the freedom in the choice of $\psi$ to see that $F_u = \tilde\kappa_\epsilon * U$ for every $\epsilon$. Now Lemma 2.12 implies $F_u = U$. □

To conclude this section we show that the mean value property leads to a growth estimate.

Corollary 3.9. Let $u$ be a harmonic function on an open domain $\Omega \subseteq \mathbb{R}^n$ and $B(x,r)$ a ball with compact closure in $\Omega$. For all multi-indices $\alpha$ we have the estimate

\[ |\partial^\alpha u(x)| \leq \frac{C(n,|\alpha|)}{r^{|\alpha|}}\, \|u\|_{L^\infty(\overline{B(x,r)})} \qquad\text{with}\qquad C(n,|\alpha|) = 2^{\frac{|\alpha|(1+|\alpha|)}{2}}\, n^{|\alpha|}. \]

Proof. We have just seen in Weyl's lemma that all harmonic functions are smooth and thus all partial derivatives of a harmonic function are harmonic. The mean value property and integration by parts (the divergence theorem version) yield for $i = 1,\dots,n$

\[ |\partial_i \partial^\alpha u(x)| = \left| \frac{2^n}{\omega_n r^n} \int_{B(x,r/2)} \partial_i \partial^\alpha u\, d\mu \right| = \left| \frac{2^n}{\omega_n r^n} \int_{\partial B(x,r/2)} \partial^\alpha u\, N_i\, d\sigma \right| \leq \frac{2n}{r}\, \|\partial^\alpha u\|_{L^\infty(\partial B(x,r/2))}. \]

Applying this inductively gives first $C(n,1) = 2n$, and using the induction hypothesis

\[ |\partial^\alpha u(y)| \leq 2^{|\alpha|}\, C(n,|\alpha|)\, r^{-|\alpha|}\, \|u\|_{L^\infty(B(x,r))} \qquad\text{for all } y \in \partial B(x,r/2) \]

the relation $C(n, 1+|\alpha|) = 2^{1+|\alpha|}\, n\, C(n,|\alpha|)$. The given $C(n,|\alpha|)$ is the solution. □
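The first-order case of this estimate, $|\partial_i u(x)| \leq \frac{2n}{r}\,\|u\|_{L^\infty(\overline{B(x,r)})}$, is easy to probe numerically. A sketch (ours; the harmonic polynomial and the Monte Carlo sampling of the ball are arbitrary choices):

```python
import numpy as np

def u(p):
    z = p[0] + 1j*p[1]
    return (z**3).real                          # harmonic: the real part of z^3

def grad_u(p):
    d = 3*(p[0] + 1j*p[1])**2                   # complex derivative of z^3
    return np.array([d.real, -d.imag])          # Cauchy-Riemann: u_x = Re d, u_y = -Im d

rng = np.random.default_rng(0)
x0, n = np.array([0.7, -0.3]), 2
for r in (0.5, 1.0, 2.0):
    theta = 2*np.pi*rng.random(50_000)
    rho = r*np.sqrt(rng.random(50_000))         # points sampled uniformly in B(x0, r)
    pts = x0[:, None] + rho*np.array([np.cos(theta), np.sin(theta)])
    bound = (2*n/r)*np.abs(u(pts)).max()
    print(np.abs(grad_u(x0)).max(), "<=", bound)
```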

Liouville’s Theorem 3.10. On n a bounded harmonic function is constant.

Proof. The foregoing corollary shows that $|\partial_i u(x)|$ is bounded by $2n\,\|u\|_{L^\infty(\mathbb{R}^n)}\, r^{-1}$ for each $i = 1,\dots,n$ and $x \in \mathbb{R}^n$. In the limit $r \to \infty$ the first partial derivatives vanish identically. Therefore $u$ is constant. □

3.3 Maximum Principle

We have already mentioned the intuition that if a harmonic function is increasing in some direction then it must be decreasing in another. This would imply that a harmonic function cannot have a local extremum, and this is indeed the case. Suppose a harmonic function $u$ has a maximum at a point $x$ of an open connected domain $\Omega \subseteq \mathbb{R}^n$. The mean value property implies on all balls $B(x,r) \subseteq \Omega$

\[ \frac{1}{r^n \omega_n} \int_{B(x,r)} |u(y) - u(x)|\, dy = \frac{1}{r^n \omega_n} \int_{B(x,r)} u(x) - u(y)\, dy = 0. \]

By the fundamental lemma of the calculus of variations (or a standard argument from continuity), we conclude that $u(y) = u(x)$ for all $y \in B(x,r)$. Hence $u$ takes the maximum on all these balls $B(x,r) \subseteq \Omega$. This shows that the set $\{y \in \Omega : u(y) = u(x)\}$ is open. But it is also the preimage of a single value, and therefore closed. It is non-empty since by assumption $u$ does have a maximum. By the definition of connected, this set must be all of $\Omega$.

Strong Maximum Principle 3.11. If a harmonic function $u$ attains a maximum on a connected open domain $\Omega \subseteq \mathbb{R}^n$, then $u$ is constant. □

There is a more geometric proof in the case that $\Omega$ is path connected. We again begin by showing that $u$ takes its maximum on every ball centered at $x$ in the domain. Since $\Omega$ is path-connected every other point $y \in \Omega$ is connected with $x$ by a continuous path $\gamma : [0,1] \to \Omega$ with $\gamma(0) = x$ and $\gamma(1) = y$. The compact image $\gamma[0,1]$ is covered by finitely many balls $B(\gamma(t_1),r_1),\dots,B(\gamma(t_N),r_N) \subseteq \Omega$ with $0 \leq t_1 < \dots < t_N \leq 1$ and $r_1,\dots,r_N > 0$. Supplementing the balls if necessary, we can assume that the center of each ball belongs to the previous ball. Repeating the argument inductively, $u$ is constantly $u(x)$ on all these balls too, hence $u \equiv u(x)$ along $\gamma$, and on $\Omega$ since this is true for all $y \in \Omega$.

A practical consequence is the following

Weak Maximum Principle 3.12. Let the harmonic function $u$ on a bounded open domain $\Omega \subseteq \mathbb{R}^n$ extend continuously to the boundary $\partial\Omega$. Then the maximum of $u$ is attained on the boundary $\partial\Omega$.

Proof. By Heine–Borel the closure $\overline\Omega$ is compact and the continuous function $u$ attains a maximum on $\overline\Omega$. If the maximum point does not belong to $\partial\Omega$, then $u$ is constant on the corresponding connected component and the maximum is also attained on $\partial\Omega$. □

Since the negative of a harmonic function is harmonic the same conclusion holds for minima.
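A discrete illustration (a sketch of ours, not from the text): solve the five-point discrete Laplace equation on the unit square by Jacobi iteration for some prescribed boundary values, and observe both that the maximum sits on the boundary and that the boundary data of a known harmonic function reproduces that function in the interior.

```python
import numpy as np

N = 50
x = np.linspace(0.0, 1.0, N + 1)
u = np.zeros((N + 1, N + 1))                      # u[i, j] ~ u(x[i], x[j])
u[0, :], u[-1, :] = 0.0 - x**2, 1.0 - x**2        # boundary data on the edges x = 0, 1
u[:, 0], u[:, -1] = x**2 - 0.0, x**2 - 1.0        # boundary data on the edges y = 0, 1

for _ in range(20_000):                           # Jacobi sweeps for the 5-point stencil
    u[1:-1, 1:-1] = 0.25*(u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

boundary_max = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
print(u.max(), boundary_max)                      # the interior never exceeds the boundary
X, Y = np.meshgrid(x, x, indexing="ij")
print(np.abs(u - (X**2 - Y**2)).max())            # tiny: the data came from x^2 - y^2
```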

The triumph of the Maximum Principle is that it generalises to many elliptic operators (Definition 2.1), unlike the mean value property. It really goes to the heart of ellipticity.

Theorem 3.13. Let $L$ be an elliptic operator on a bounded open domain $\Omega \subseteq \mathbb{R}^n$ whose coefficients $a_{ij}$ and $b_i$ extend continuously and elliptically to $\overline\Omega$, and $c \equiv 0$. Every twice differentiable solution $u$ of $Lu \geq 0$ which extends continuously to $\overline\Omega$ takes its maximum on $\partial\Omega$.

Proof. Let us first show that $L$ is uniformly elliptic, i.e. there exists $\lambda > 0$ with

\[ \sum_{i,j=1}^n a_{ij}(x)\, k_i k_j \geq \lambda \sum_{i=1}^n k_i^2 \qquad \text{for all } x \in \Omega \text{ and all } k \in \mathbb{R}^n. \]

The continuous function $(x,k) \mapsto \sum_{i,j=1}^n a_{ij}(x)\, k_i k_j$ attains on the compact set $\overline\Omega \times S^{n-1} \subset \overline\Omega \times \mathbb{R}^n$ a minimum $\lambda > 0$. Hence $L$ is uniformly elliptic.

Next we use a trick to move to the case where $L$ applied to the function is strictly positive. For $v(x) = \exp(\alpha x_1)$ with $\alpha > 0$ we conclude

\[ Lv = \alpha\bigl(\alpha\, a_{11}(x) + b_1(x)\bigr)v \geq \alpha\bigl(\alpha\lambda + b_1(x)\bigr)v. \]

The continuous coefficients $b_i$ are bounded on the compact set $\overline\Omega$. Therefore there exists $\alpha > 0$ with $Lv > 0$. By linearity of $L$ we obtain $L(u + \epsilon v) > 0$ on $\Omega$ for all $\epsilon > 0$.

Now we show that the continuous functions $u + \epsilon v$ cannot attain a maximum on $\Omega$ even though they must attain a maximum on $\overline\Omega$. At any interior maximum $x_0 \in \Omega$ the first derivative of the function $u + \epsilon v$, which is twice differentiable on $\Omega$, vanishes and the Hessian is negative semi-definite. At this point we need a little bit of linear algebra to explain the connection between the Hessian and the Laplacian. The Hessian is a real symmetric matrix, so it is diagonalizable by an orthogonal matrix $O$, that is $H = O^T D O$. Here $D$ is a diagonal matrix whose entries are the eigenvalues of $H$. Because $H$ is negative semi-definite, all the eigenvalues are negative or zero. In symbols, $\frac{\partial^2(u+\epsilon v)(x_0)}{\partial x_i \partial x_j} = \sum_k O_{ki}\, \lambda_k\, O_{kj}$. The Laplacian is the trace of the Hessian. Therefore

\[ \Delta(u + \epsilon v)(x_0) = \mathsf{tr}\, H = \mathsf{tr}(O^T D O) = \mathsf{tr}(D\, O O^T) = \mathsf{tr}(D\, I) = \mathsf{tr}(D) = \sum \lambda_i \leq 0. \]

Similarly, for any elliptic operator

\[ L(u + \epsilon v)(x_0) = \sum_{i,j=1}^n a_{ij}(x_0)\, \frac{\partial^2(u + \epsilon v)(x_0)}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x_0)\cdot 0 = \sum_{i,j,k=1}^n a_{ij}(x_0)\, O_{ki}\, \lambda_k\, O_{kj}. \]

Because the eigenvalues are non-positive, we may define $B_{ki} = \sqrt{-\lambda_k}\, O_{ki}$. Continuing with the calculation,

\[ L(u + \epsilon v)(x_0) = -\sum_{k=1}^n \sum_{i,j=1}^n a_{ij}(x_0)\, B_{ki} B_{kj} \leq -\sum_{k=1}^n \lambda \sum_{i=1}^n B_{ki}^2 \leq 0, \]

and this contradicts L(u + 𝜖𝑣) > 0. Therefore for all 𝜖 > 0 the maximum of u + 𝜖𝑣 belongs to the boundary. Finally, we use the following comparison between u and u + 𝜖𝑣 to reach the conclusion.

\[ \sup_{x\in\Omega} u(x) + \epsilon \inf_{x\in\Omega} v(x) \leq \sup_{x\in\Omega}\bigl(u(x) + \epsilon v(x)\bigr) = \max_{x\in\partial\Omega}\bigl(u(x) + \epsilon v(x)\bigr) \leq \max_{x\in\partial\Omega} u(x) + \epsilon \max_{x\in\partial\Omega} v(x). \]

Because this holds for all 𝜖 > 0 the boundedness of v on Ω¯ implies the theorem. □

The negatives of the functions $u$ in the theorem obey $L(-u) \leq 0$ and take their minimum on the boundary. In particular, the solutions $u$ of $Lu = 0$ take both the maximum and the minimum on the boundary.

Now let us see why maximum principles are so important. We consider the following very natural boundary value problem:

Dirichlet Problem 3.14. For a given function $f$ on a bounded open domain $\Omega \subseteq \mathbb{R}^n$ and $g$ on $\partial\Omega$ we look for a solution $u$ of $-\Delta u = f$ on $\Omega$ which extends continuously to $\partial\Omega$ and coincides there with $g$.

The condition that $u$ extends continuously to the boundary is necessary for the boundary value problem to be meaningful. Otherwise the values on the boundary could be completely unrelated to the rest of the function. We say that a function $u$ is $m$ times continuously differentiable on the closure $\overline\Omega$ of a domain if it is $m$ times continuously differentiable on $\Omega$ and all partial derivatives of order at most $m$ extend continuously to $\partial\Omega$.

Let $\Omega \subseteq \mathbb{R}^n$ be an open and bounded domain and suppose that there are two solutions $u_1$ and $u_2$ to the Dirichlet problem for the Poisson equation with inhomogeneous term $f$ and boundary value $g$. Then the difference $v := u_2 - u_1$ solves the homogeneous problem, i.e. it is harmonic, and $v \equiv 0$ on $\partial\Omega$. Therefore by the weak maximum principle we know that both the maximum and the minimum of $v$ on every connected component of $\Omega$ are $0$. The only possibility is that $v \equiv 0$ on all of $\Omega$. This shows that solutions to the Dirichlet problem are unique.

Putting this another way, we can uniquely determine a harmonic function if we know its values on the boundary of its domain. This gives us a way to understand the space of harmonic functions.

3.4 Green’s Function

We just saw that the solution to the Dirichlet problem is unique, if a solution exists. In this section we try to find some conditions which ensure the existence.

First we prepare some well known formulas, which hopefully you have already proved as an exercise. In the first formula we apply the Divergence Theorem to $x \mapsto v(x)\nabla u(x)$:

Green’s First Formula 3.15. Let the Divergence Theorem hold on the open and bounded domain Ω n. Then for two functions u,v C2(Ω¯) we have

\[ \int_\Omega v\, \Delta u\, dy + \int_\Omega \nabla v \cdot \nabla u\, dy = \int_{\partial\Omega} v\, \nabla u \cdot N\, d\sigma. \]

If we subtract the formula for interchanged u and v, then we obtain:

Green’s Second Formula 3.16. Let the Divergence Theorem hold on the open and bounded domain Ω n. Then for two functions u,v C2(Ω¯) we have

\[ \int_\Omega v\, \Delta u - u\, \Delta v\, dy = \int_{\partial\Omega} \bigl[v\, \nabla u - u\, \nabla v\bigr] \cdot N\, d\sigma. \]

The significance of these formulas becomes apparent when we apply them to the fundamental solution $v(y) = \Phi(x - y)$. This function is harmonic for $y \neq x$, so we need to exclude a small ball $B(x,\epsilon)$. We apply Green's second formula on the domain $\Omega \setminus B(x,\epsilon)$. The left hand side becomes

\[ \int_{\Omega \setminus B(x,\epsilon)} \Phi(x - y)\, \Delta u(y)\, dy. \]

As argued in Theorem 3.2 (the part with $I_\epsilon$) this integral is well defined in the limit $\epsilon \to 0$. For the right hand side of Green's second formula, there are two boundary components to consider, namely $\partial\Omega$ and $\partial B(x,\epsilon)$. The integrals over $\partial B(x,\epsilon)$ are of the type $L_\epsilon$ and $K_\epsilon$ respectively. We have in the limit $\epsilon \to 0$

\[ \int_{\partial B(x,\epsilon)} \Phi(x - z)\, \nabla u(z) \cdot N(z)\, d\sigma(z) \to 0. \]

For the other integral, we must be very careful with signs. As required by the divergence theorem, let $N$ be the unit normal vector to $\partial B(x,\epsilon)$ that points towards $x$. It can be expressed as $N(z) = \frac{x-z}{|x-z|}$. Therefore $N(x - z) = \frac{z}{|z|}$ is the unit normal vector to $\partial B(0,\epsilon)$ pointing away from the origin. This is the opposite sign to the $N$ in Theorem 3.2. We have

\[ -\int_{\partial B(x,\epsilon)} u(z)\, \nabla_z\bigl(\Phi(x - z)\bigr) \cdot N(z)\, d\sigma(z) = \int_{\partial B(x,\epsilon)} u(z)\, \nabla\Phi(x - z) \cdot N(z)\, d\sigma(z) = \int_{\partial B(0,\epsilon)} u(x - z)\, \nabla\Phi(z) \cdot N(x - z)\, d\sigma(z) \to -u(x). \]

Rearranging the terms gives

Green’s Representation Theorem 3.17. Let the Divergence Theorem hold on the open and bounded domain Ω n. Then for x Ω and a function u C2(Ω¯) we have

\[ u(x) = -\int_\Omega \Phi(x - y)\, \Delta u(y)\, dy + \int_{\partial\Omega} \bigl[\Phi(x - z)\, \nabla u(z) - u(z)\, \nabla_z\bigl(\Phi(x - z)\bigr)\bigr] \cdot N\, d\sigma(z). \]

This representation formula allows us to reconstruct a function $u$ from its Laplacian and the values of $u$ and the normal derivative $\nabla u \cdot N$ on $\partial\Omega$. But the Weak Maximum Principle implies the function is already uniquely determined by its Laplacian and boundary values; the normal derivatives on the boundary are redundant information. The question is, how can we calculate the normal derivatives from the other two pieces of information? If the domain $\Omega$ admits a function of the following type, then there is a clean formula.

Green’s Function 3.18. A function GΩ : {(x,y) Ω ×Ωxy} is called Green’s function for the bounded open domain Ω n, if it has the following two properties:

(i)

For $x \in \Omega$ the function $y \mapsto G_\Omega(x,y) - \Phi(x-y)$ extends to a harmonic function on $y \in \Omega$.

(ii)

For $x \in \Omega$ the function $y \mapsto G_\Omega(x,y)$ extends continuously to $\partial\Omega$ and vanishes for $y \in \partial\Omega$.

From the physics perspective, a Green's function tells us the potential at $x$ of a single particle at $y$ if the potential is forced to be zero on the boundary. This is the case if the boundary is a metal cage (a Faraday cage). The first condition can also be expressed as $-\Delta_y G_\Omega(x,y) = \delta_x$ in the sense of distributions, where $\delta_x$ is the delta distribution centered at $x \in \Omega$. We could imagine expanding the definition of a Green's function so that unbounded domains $\Omega$ were allowed, except that the potential would have to go to zero 'at infinity' in the second condition. The shifted fundamental solution $\Phi(x - y)$ would then be a Green's function of $\Omega = \mathbb{R}^n$.

Let’s put them to use. We apply Green’s Second Formula to the function v(y) = GΩ(x,y) Φ(x y). It is a harmonic function on all of Ω so there is no need to exclude a ball this time. Further, because we know the integrals with Φ are well defined, so therefore are the ones with GΩ. We have

\[ \int_\Omega G_\Omega(x,y)\, \Delta u(y)\, dy - \int_\Omega \Phi(x - y)\, \Delta u(y)\, dy = -\int_{\partial\Omega} u(z)\, \nabla_z G_\Omega(x,z) \cdot N\, d\sigma(z) - \int_{\partial\Omega} \bigl[\Phi(x - z)\, \nabla u(z) - u(z)\, \nabla_z\bigl(\Phi(x - z)\bigr)\bigr] \cdot N\, d\sigma(z). \]

Now Green’s Representation Theorem implies

\[ u(x) = -\int_\Omega G_\Omega(x,y)\, \Delta_y u(y)\, dy - \int_{\partial\Omega} u(z)\, \nabla_z G_\Omega(x,z) \cdot N\, d\sigma(z). \]

We should think of this as an improved version of Green's representation formula, enabled by the existence of a Green's function. We will shortly prove, conversely, that if functions $f : \overline\Omega \to \mathbb{R}$ and $g : \partial\Omega \to \mathbb{R}$ have sufficient regularity, then

\[ u(x) := \int_\Omega G_\Omega(x,y)\, f(y)\, d^n y - \int_{\partial\Omega} g(z)\, \nabla_z G_\Omega(x,z) \cdot N\, d\sigma(z) \]

defines a function that solves the Dirichlet Problem. Therefore the Dirichlet Problem reduces to the search for the Green's Function.

A Green’s function is unique. If there are two Green’s functions on Ω, then their difference is harmonic for all y Ω:

\[ G(x,y) - \tilde G(x,y) = G(x,y) - \Phi(x-y) - \bigl[\tilde G(x,y) - \Phi(x-y)\bigr] \]

and vanishes for $y \in \partial\Omega$. By the weak maximum principle, this difference must be zero. As an aside, if we return to the generalised case where $\Omega = \mathbb{R}^n$, then the difference between two Green's functions is a harmonic function that goes to zero at infinity. Therefore it is bounded and Liouville's theorem tells us it is constant (and thus constantly zero). Therefore the shifted fundamental solution $\Phi(x-y)$ is the unique Green's function for $\mathbb{R}^n$.

Further

Theorem 3.19 (Symmetry of the Green's Function). If there is a Green's Function $G_\Omega$ for the bounded domain $\Omega$, then $G_\Omega(x,y) = G_\Omega(y,x)$ holds for all $x \neq y \in \Omega$.

Proof. For $x \neq y \in \Omega$ let $\epsilon > 0$ be sufficiently small that the balls $B(x,\epsilon)$ and $B(y,\epsilon)$ are disjoint subsets of $\Omega$. Green's Second Formula implies for the domain $\Omega \setminus (B(x,\epsilon) \cup B(y,\epsilon))$ and the functions $u(z) = G_\Omega(x,z)$ and $v(z) = G_\Omega(y,z)$

\[ \int_{\partial B(x,\epsilon)} \bigl[G_\Omega(y,z)\, \nabla_z G_\Omega(x,z) - G_\Omega(x,z)\, \nabla_z G_\Omega(y,z)\bigr] \cdot N\, d\sigma(z) = \int_{\partial B(y,\epsilon)} \bigl[G_\Omega(x,z)\, \nabla_z G_\Omega(y,z) - G_\Omega(y,z)\, \nabla_z G_\Omega(x,z)\bigr] \cdot N\, d\sigma(z). \]

For $\epsilon \to 0$ the estimate for $L_\epsilon$ in the proof of Theorem 3.2 shows that both second terms converge to zero. The calculation of $K_\epsilon$ in the proof of Theorem 3.2 carries over and shows that the first terms converge to $G_\Omega(y,x)$ and $G_\Omega(x,y)$, respectively. □

Finding a Green’s function for an arbitrary domain can be difficult, and they do not even exist for all domains. However it is feasible for highly symmetric domains, and the advantage is that then the solution has a concrete formula. We shall calculate Green’s function for all balls in n. Let us first restrict to the unit ball Ω = B(0, 1). The key is to try and add a harmonic function to Φ(x y) that equals it on the boundary. We may use the inversion xι(x) = x |x|2 in the unit sphere ∂𝐵(0, 1). It maps the inside of the unit ball to the outside and vice versa, fixing the boundary.

Green’s Function of the unit ball 3.20. The Green’s Function of B(0, 1) is

\[ G_{B(0,1)}(x,y) = \Phi(x-y) - \Phi\bigl(|x|(\iota(x)-y)\bigr) = \begin{cases} \Phi(x-y) - \Phi(\iota(x)-y) - \Phi(x) & \text{for } n = 2,\\ \Phi(x-y) - |x|^{2-n}\,\Phi(\iota(x)-y) & \text{for } n > 2. \end{cases} \]

Proof. Fix $x \in B(0,1)$. There are two properties that we must verify. First, the function $y \mapsto G_{B(0,1)}(x,y) - \Phi(x-y) = -\Phi\bigl(|x|(\iota(x)-y)\bigr)$ should extend to a harmonic function on all of $y \in B(0,1)$. Observe that $\iota(x)$ is a point outside the unit ball, so $\iota(x) - y$ is never zero and thus this function is well-defined for all $y \in B(0,1)$. Moreover, we have proved in an exercise that composing a harmonic function with a rescaling, reflection or translation of its domain creates another harmonic function.

For the vanishing on the boundary, note that there is no problem extending $G_{B(0,1)}(x,y)$ to $y \in \partial B(0,1)$, because $x$ and $\iota(x)$ are not in $\partial B(0,1)$. To show that it is zero, we need some geometry. For $|y| = 1$ we have

\[ \bigl||x|(\iota(x)-y)\bigr|^2 = \bigl(|x|^{-1}x - |x|y\bigr)\cdot\bigl(|x|^{-1}x - |x|y\bigr) = 1 - 2x\cdot y + |x|^2|y|^2 = |y|^2 - 2x\cdot y + |x|^2 = |x-y|^2. \]

Because $\Phi$ is a function that only depends on the length of its argument, $\Phi\bigl(|x|(\iota(x)-y)\bigr)$ and $\Phi(x-y)$ are equal on the boundary $y \in \partial B(0,1)$. □

Although the definition of $G_{B(0,1)}$ appears to treat $x$ and $y$ differently, the identity $\bigl||x|(\iota(x)-y)\bigr|^2 = 1 - 2x\cdot y + |x|^2|y|^2$ from the above proof, which does not rely on $|y| = 1$, shows that it is symmetric in $x$ and $y$ as expected.
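A short numerical sketch of these properties for $n = 3$ (ours; the sample points are arbitrary): the formula is symmetric in $x$ and $y$ and vanishes when $y$ lies on the unit sphere.

```python
import numpy as np

def Phi3(z):
    return 1.0/(4*np.pi*np.linalg.norm(z))        # fundamental solution for n = 3

def G_ball(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    iota = x/np.dot(x, x)                         # inversion in the unit sphere
    return Phi3(x - y) - Phi3(np.linalg.norm(x)*(iota - y))

x = np.array([0.2, -0.4, 0.1])
y = np.array([-0.5, 0.3, 0.6])
print(G_ball(x, y), G_ball(y, x))                 # symmetry: the two values agree
print(G_ball(x, y/np.linalg.norm(y)))             # y on the boundary sphere: about 0
```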

The affine map $x \mapsto a + rx$ is a diffeomorphism from $B(0,1)$ onto $B(a,r)$ and a homeomorphism from $\partial B(0,1)$ onto $\partial B(a,r)$. We can use this coordinate change to transform a Dirichlet problem on the ball $B(a,r)$ to one on $B(0,1)$. If $u$ solves $-\Delta u = f$ on $B(a,r)$ and $u|_{\partial B(a,r)} = g$ then $v(x) = u(a + rx)$ solves $-\Delta v = r^2 f(a + rx)$ on $B(0,1)$ and $v(x) = g(a + rx)$ for $x \in \partial B(0,1)$. The same is true in reverse. Thus the ability to solve the Dirichlet problem on one ball confers the ability to solve it on every ball (and the same for other domains related by similarity).

We can use this insight to give the Green's function for a general ball. We use an equivalent characterisation of the Green's function: for every $x \in \Omega$ the harmonic difference $u(y) := G_\Omega(x,y) - \Phi(x-y)$ is a solution to the Dirichlet problem

\[ \Delta u = 0 \text{ on } \Omega, \qquad u(y) = -\Phi(x-y) \text{ for } y \in \partial\Omega. \]

(This gives an alternative proof of uniqueness.) For $\Omega = B(a,r)$ and a point $x = a + rx' \in B(a,r)$ the related Dirichlet problem on the unit ball is for $v(y) = u(a + ry)$ with $\Delta v = 0$ on $B(0,1)$ and

\[ v(y) = -\Phi\bigl(x - (a + ry)\bigr) = -\Phi\bigl(r(x' - y)\bigr) = \begin{cases} -\Phi(x' - y) + \dfrac{\ln r}{2\pi} & \text{for } n = 2\\[1ex] -r^{2-n}\,\Phi(x' - y) & \text{for } n \geq 3 \end{cases} \]

for $y \in \partial B(0,1)$. By linearity and since constant functions are harmonic, we can write down the unique solution on $B(0,1)$ by inspection:

\[ v(y) = \begin{cases} -\Phi\bigl(|x'|(\iota(x') - y)\bigr) + \dfrac{\ln r}{2\pi} & \text{for } n = 2\\[1ex] -r^{2-n}\,\Phi\bigl(|x'|(\iota(x') - y)\bigr) & \text{for } n \geq 3. \end{cases} \]

Putting this all together, and writing $y = a + ry'$, gives

\[ G_{B(a,r)}(x,y) = \Phi(x - y) + u(y) = \Phi\bigl(r(x' - y')\bigr) + v(y') = \begin{cases} \Phi(x' - y') - \dfrac{\ln r}{2\pi} - \Phi\bigl(|x'|(\iota(x') - y')\bigr) + \dfrac{\ln r}{2\pi} & \text{for } n = 2\\[1ex] r^{2-n}\,\Phi(x' - y') - r^{2-n}\,\Phi\bigl(|x'|(\iota(x') - y')\bigr) & \text{for } n \geq 3 \end{cases} = r^{2-n}\bigl[\Phi(x' - y') - \Phi\bigl(|x'|(\iota(x') - y')\bigr)\bigr] = r^{2-n}\, G_{B(0,1)}\!\Bigl(\frac{x - a}{r}, \frac{y - a}{r}\Bigr). \]

It remains to prove therefore that taking the Green’s representation formula and inserting f and g with sufficient regularity does indeed define a solution to the Dirichlet problem. We do this only for the specific example of the unit ball, but by the above discussion an analogous result will hold for any ball.

Poisson’s Representation Formula 3.21. For Ω = B(0, 1), f C2(Ω¯) and g C(Ω) the unique solution of the Dirichlet Problem on Ω is given by

\[ u(x) = \int_{B(0,1)} G_{B(0,1)}(x,y)\, f(y)\, d^n y - \int_{\partial B(0,1)} g(y)\, \nabla_y G_{B(0,1)}(x,y) \cdot y\, d\sigma(y). \]

Proof. It suffices to consider the two cases g = 0 and f = 0 separately.

Consider $g = 0$ first. The essential point is the symmetry of the Green's function, so whatever properties hold in the second variable also hold in the first. From Theorem 3.2 we have a function $v(x)$ that satisfies $-\Delta v = f$. The difference of $u$ and $v$ has the formula

\[ u(x) - v(x) = \int_{B(0,1)} \bigl[G_{B(0,1)}(x,y) - \Phi(x-y)\bigr]\, f(y)\, d^n y. \]

But the bracketed expression is harmonic in $x$ and therefore $u - v$ is harmonic. This shows that $-\Delta u = -\Delta v = f$. Moreover, we know that $G_{B(0,1)}(x,y)$ is zero for $x \in \partial B(0,1)$ and hence so too is $u(x)$.

The $f = 0$ case is the new part. We define the Poisson kernel $K(x,y) := -\nabla_y G_{B(0,1)}(x,y)\cdot y$. By the Symmetry of the Green's Function the function $x \mapsto K(x,y)$ is harmonic. Hence for $f = 0$ the given function $u$ is harmonic. It remains to show that

\[ u(x) = \int_{\partial B(0,1)} g(y)\, K(x,y)\, d\sigma(y) \]

extends continuously to $x \in \partial B(0,1)$ and coincides there with $g(x)$. The issue is that the integral is over $y \in \partial B(0,1)$, so there is a singularity in the integrand in this limit. We compute for $|y| = 1$ and $n > 2$ (the reader should check that the same formula holds for $n = 2$ too):

\[ K(x,y) = -\frac{1}{n(n-2)\omega_n}\, y\cdot\nabla_y\!\left( \frac{1}{|x-y|^{n-2}} - \frac{1}{|x|^{n-2}\,|\iota(x)-y|^{n-2}} \right) = \frac{1}{n\omega_n}\, y\cdot\!\left( \frac{y-x}{|x-y|^n} - \frac{|x|^2\,(y-\iota(x))}{|x|^n\,|\iota(x)-y|^n} \right) = \frac{1 - x\cdot y - |x|^2 + x\cdot y}{n\omega_n\,|x-y|^n} = \frac{1-|x|^2}{n\omega_n\,|x-y|^n}. \]

This clearly shows the singularity at $y = x$, but also that for $x \in \partial B(0,1)$ with $x \neq y$ the kernel is zero. We observe

(i)

the integral kernel $K(x,y)$ is positive for $(x,y) \in B(0,1) \times \partial B(0,1)$.

(ii)

The following formula, which follows from Green’s Representation Formula for the function u = 1 on the domain Ω = B(0, 1):

\[ \int_{\partial B(0,1)} K(x,y)\, d\sigma(y) = 1 \qquad\text{for } x \in B(0,1). \]

(iii)

For all $x \in \partial B(0,1)$, $\delta > 0$, and $y \in \partial B(0,1) \setminus B(x,\delta)$ there is the bound $K(\lambda x, y) \leq \frac{1}{n\omega_n \delta^n}(1 - \lambda^2)$. Therefore the family of functions $y \mapsto K(\lambda x, y)$ converges uniformly to zero for $\lambda \to 1$ on $y \in \partial B(0,1) \setminus B(x,\delta)$.

We will now prove that for continuous $g$ the properties (i)–(iii) ensure that in the limit $\lambda \to 1$ the family of functions $x \mapsto \int_{\partial B(0,1)} g(y)\, K(\lambda x, y)\, d\sigma(y)$ converges uniformly on $\partial B(0,1)$ to $g$. For any $x \in \partial B(0,1)$, $0 < \lambda < 1$, and $\delta > 0$ we have the estimate

\begin{align*} |u(\lambda x) - g(x)| &= \left| \int_{\partial B(0,1)} g(y)\, K(\lambda x, y) - g(x)\, K(\lambda x, y)\, d\sigma(y) \right| && \text{using (ii)}\\ &\leq \int_{\partial B(0,1)} |g(y) - g(x)|\, K(\lambda x, y)\, d\sigma(y) && \text{using (i)}\\ &= \left( \int_{\partial B(0,1) \setminus B(x,\delta)} + \int_{\partial B(0,1) \cap B(x,\delta)} \right) |g(y) - g(x)|\, K(\lambda x, y)\, d\sigma(y)\\ &\leq \sup_{y \in \partial B(0,1)} |g(y) - g(x)| \times (1-\lambda^2)\,\delta^{-n} && \text{using (iii)}\\ &\quad + \sup_{y \in \partial B(0,1) \cap B(x,\delta)} |g(y) - g(x)| \times 1 && \text{using (ii)}. \end{align*}

Therefore for any δ > 0 and 0 < λ < 1 we have the uniform estimate

\[ |u(\lambda x) - g(x)| \leq (1-\lambda^2)\,\delta^{-n} \sup_{x,y \in \partial B(0,1)} |g(y) - g(x)| + \sup_{\substack{x \in \partial B(0,1)\\ y \in \partial B(0,1) \cap B(x,\delta)}} |g(y) - g(x)|. \]

Taking the limit $\lambda \to 1$ we see that the limit is bounded by the second term for any $\delta > 0$, since the first term tends to zero. But the second term can be made arbitrarily small by the uniform continuity of $g$ on the compact sphere, and therefore the uniform limit must be zero. This proves the claim. □

A harmonic function u on B(a,r) which extends continuously to ∂𝐵(a,r) obeys

\[ u(x) = \frac{r^2 - |x-a|^2}{n r \omega_n} \int_{\partial B(a,r)} \frac{u(y)}{|x-y|^n}\, d\sigma(y). \]

Like the Weak Maximum Principle, this shows that u is completely determined by the values on ∂𝐵(a,r), except here the result is constructive. One can also integrate this formula in x over a ball, and after interchanging the integral and using some geometry, arrive at the Mean Value property.
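As a numerical sketch of this representation on the unit disk (ours; the boundary data $g(y) = y_1^2 - y_2^2$ is an arbitrary choice whose harmonic extension we know), the Poisson integral reproduces the harmonic polynomial $x_1^2 - x_2^2$ at interior points:

```python
import numpy as np

def poisson_disk(g, x, m=20_000):
    # u(x) = int_{|y|=1} g(y) (1 - |x|^2) / (2 pi |x - y|^2) dsigma(y), for n = 2
    theta = 2*np.pi*(np.arange(m) + 0.5)/m
    y = np.array([np.cos(theta), np.sin(theta)])
    K = (1 - np.dot(x, x)) / (2*np.pi*np.sum((x[:, None] - y)**2, axis=0))
    return np.sum(g(y)*K)*(2*np.pi/m)             # midpoint rule in arc length

g = lambda y: y[0]**2 - y[1]**2
for x in (np.array([0.0, 0.0]), np.array([0.3, 0.5]), np.array([-0.8, 0.1])):
    print(poisson_disk(g, x), x[0]**2 - x[1]**2)  # the values agree
```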

One new consequence of this formula is an additional regularity result for harmonic functions. The dependence on $x$ in the formula is well-behaved for $x \in B(a,r')$ with $r' < r$, because $|x-y|^{-n}$ is bounded away from its singularity. Therefore partial derivatives of $u$ with respect to $x$ can be expressed with similar formulas depending only on the values of $u$ on a fixed ball $B(a,r)$. For all $y \in \partial B(a,r)$ the Taylor series of $x \mapsto |x-y|^{-n} = (y\cdot y - 2x\cdot y + x\cdot x)^{-n/2}$ around a point $x = z$ converges uniformly to $|x-y|^{-n}$. This implies:

Corollary 3.22. Harmonic functions on an open domain $\Omega \subseteq \mathbb{R}^n$ are analytic. □

Another regularity result, which speaks to the connection between harmonic functions and holomorphic functions (if you know some complex analysis), is the so called ‘removable singularities’ theorem:

Lemma 3.23. Let $\Omega \subseteq \mathbb{R}^n$ be an open neighbourhood of $0$ and $u$ a bounded harmonic function on $\Omega \setminus \{0\}$. Then $u$ extends as a harmonic function to $\Omega$.

Proof. On a ball $B(0,r)$ with compact closure in $\Omega$, Theorem 3.21 gives a harmonic function $\tilde u$ which coincides on $\partial B(0,r)$ with $u$. The family of harmonic functions $u_\epsilon(x) = \tilde u(x) - u(x) + \epsilon\, G_{B(0,r)}(x,0)$ on $B(0,r) \setminus \{0\}$ vanishes on $\partial B(0,r)$. If for some $\epsilon > 0$ the function $u_\epsilon$ took a negative value on $B(0,r) \setminus \{0\}$, then due to the boundedness of $u$ and $\tilde u$ and the unboundedness of $G_{B(0,r)}(\cdot,0)$ the harmonic function $u_\epsilon$ would have a negative minimum in $B(0,r) \setminus \{0\}$. This contradicts the Strong Maximum Principle. Hence $u_\epsilon$ is non-negative. Analogously $u_\epsilon$ is non-positive for negative $\epsilon$; otherwise $u_\epsilon$ would have a positive maximum in $B(0,r) \setminus \{0\}$. In both limits $\epsilon \downarrow 0$ and $\epsilon \uparrow 0$, $u_0 = \tilde u - u$ vanishes identically on $B(0,r) \setminus \{0\}$ and $\tilde u$ is a harmonic extension of $u$ to $\Omega$. □

The proof shows a slightly stronger statement. Each harmonic function on $\Omega \setminus \{0\}$ whose absolute value $|u(x)|$ is, for every $\epsilon > 0$, bounded by $\epsilon\, G_{B(0,r)}(x,0)$ on $B(0,\delta) \setminus \{0\}$ for sufficiently small $\delta > 0$ depending on $\epsilon$, has a harmonic extension to $\Omega$.

3.5 A PDE with no solutions

In this optional section we deliver on the promise in Section 2.2 to give a PDE without any solutions. The key is the following lemma, which shows that no nontrivial function can have a Laplacian that dominates the square of the function (after weighting by $|x|^2$). This should be compared to Liouville's Theorem 3.10, in which a growth bound is used to show that a harmonic function (a solution of $\Delta u = 0$) is constant. Then we only need to construct a PDE which implies this property but which $u \equiv 0$ does not solve. The idea and lemma come from the paper "Nonexistence of weak solutions for some degenerate elliptic and parabolic problems on Rn" (Mitidieri and Pohozaev, 2001).

Lemma 3.24. Let $\Omega = \mathbb{R}^2 \setminus \{0\}$. The only twice-differentiable function $u : \Omega \to \mathbb{R}$ that satisfies

\[ |x|^2\, \Delta u \geq u^2 \]

is $u \equiv 0$.

Proof. The trick is to choose a particular family of test functions $\varphi_R \in C_0^\infty(\Omega)$ and use them to derive decreasing bounds on the integral of $u$ that can only be satisfied by $u \equiv 0$. Choose a smooth bump function $\psi_0$ on $\mathbb{R}$ that has the value $0$ for $|t| \geq 2$, the value $1$ for $|t| \leq 1$, and is monotonic increasing/decreasing for $1 < |t| < 2$. We define

\[ \varphi_R(x) = \psi_R(|x|) \qquad\text{and}\qquad \psi_R(r) = \psi_0\bigl(R^{-1}\ln r\bigr). \]

Because they are radially symmetric, it is easy to describe their supports:

\[ x \in \operatorname{supp}\varphi_R \;\Leftrightarrow\; \bigl|R^{-1}\ln|x|\bigr| \leq 2 \;\Leftrightarrow\; -2R \leq \ln|x| \leq 2R \;\Leftrightarrow\; e^{-2R} \leq |x| \leq e^{2R}. \]

So $\varphi_R$ is positive on the open annulus $A_R = B(0,e^{2R}) \setminus \overline{B(0,e^{-2R})}$. Likewise $\varphi_R \equiv 1$ on the closed annulus $A_R' = \overline{B(0,e^{R})} \setminus B(0,e^{-R})$.

We will bound the integral of $u^2|x|^{-2}$ on $A_R'$. Because we are working with non-negative functions we can increase the domain of the integral:

\[ I_R := \int_{A_R'} \frac{u^2}{|x|^2}\, dx = \int_{A_R'} \frac{u^2}{|x|^2}\,\varphi_R\, dx \leq \int_{A_R} \frac{u^2}{|x|^2}\,\varphi_R\, dx =: J_R \leq \int_{A_R} (\Delta u)\,\varphi_R\, dx. \]

Now we apply Green's second formula on $A_R$. The test function and all its derivatives vanish on the boundary $\partial A_R$; the result is to transfer the Laplacian to $\varphi_R$.

\[ J_R \leq \int_{A_R} (\Delta u)\,\varphi_R\, dx = \int_{A_R} u\,\Delta\varphi_R\, dx = \int_{A_R} \frac{u\sqrt{\varphi_R}}{|x|}\cdot\frac{|x|\,\Delta\varphi_R}{\sqrt{\varphi_R}}\, dx. \]

The introduction of these strange factors will become clear in a moment. In the next step we use a result you might not know. You should be familiar with the Cauchy–Schwarz inequality for vectors, which says $a\cdot b \leq |a|\,|b|$ for $a,b \in \mathbb{R}^n$. But it holds for all inner products, including the $L^2$ inner product on functions.

\[ \int_{A_R} \frac{u\sqrt{\varphi_R}}{|x|}\cdot\frac{|x|\,\Delta\varphi_R}{\sqrt{\varphi_R}}\, dx = \left\langle \frac{u\sqrt{\varphi_R}}{|x|}, \frac{|x|\,\Delta\varphi_R}{\sqrt{\varphi_R}} \right\rangle_{L^2} \leq \left\| \frac{u\sqrt{\varphi_R}}{|x|} \right\|_{L^2} \left\| \frac{|x|\,\Delta\varphi_R}{\sqrt{\varphi_R}} \right\|_{L^2} = \left( \int_{A_R} \left| \frac{u\sqrt{\varphi_R}}{|x|} \right|^2 dx \right)^{\!1/2} \left( \int_{A_R} \left| \frac{|x|\,\Delta\varphi_R}{\sqrt{\varphi_R}} \right|^2 dx \right)^{\!1/2} = J_R^{1/2} \left( \int_{A_R} \frac{|x|^2 (\Delta\varphi_R)^2}{\varphi_R}\, dx \right)^{\!1/2}. \]

Now we see that the choice of factors has created another $J_R$ on the right hand side. We can manipulate the inequality by dividing by $J_R^{1/2}$ and squaring:

\[ J_R \leq J_R^{1/2} \left( \int_{A_R} \frac{|x|^2(\Delta\varphi_R)^2}{\varphi_R}\, dx \right)^{\!1/2} \;\Rightarrow\; J_R \leq \int_{A_R} \frac{|x|^2(\Delta\varphi_R)^2}{\varphi_R}\, dx. \]

This bound is useful because $u$ does not appear on the right hand side; it is solely in terms of $\varphi_R$.

In the next phase of the proof we use the specific form of φR (until now, we have only used that Green’s formula applies to AR). Recall that the Laplacian in polar coordinates is

\[ \Delta v = \frac{\partial^2 v}{\partial r^2} + \frac{1}{r}\frac{\partial v}{\partial r} + \frac{1}{r^2}\frac{\partial^2 v}{\partial\theta^2}. \]

We use the chain rule:

\[ \frac{\partial\varphi_R}{\partial r} = \psi_0'\!\left(\frac{\ln r}{R}\right)\frac{1}{rR}, \qquad \frac{\partial^2\varphi_R}{\partial r^2} = \psi_0''\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R^2} - \psi_0'\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R}, \]
\[ \Delta\varphi_R = \psi_0''\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R^2} - \psi_0'\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R} + \frac{1}{r}\,\psi_0'\!\left(\frac{\ln r}{R}\right)\frac{1}{rR} + 0 = \psi_0''\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R^2}. \]

We substitute this into the integral and then make the change of variable $t = R^{-1}\ln r$ (which implies $dt = R^{-1}r^{-1}\,dr$ and $\psi_R(r) = \psi_0(t)$):

\[ J_R \leq \int_{A_R} \frac{|x|^2(\Delta\varphi_R)^2}{\varphi_R}\, dx = \int_0^{2\pi}\!\!\int_{e^{-2R}}^{e^{2R}} \frac{r^2}{\psi_R(r)} \left( \psi_0''\!\left(\frac{\ln r}{R}\right)\frac{1}{r^2R^2} \right)^{\!2} r\, dr\, d\theta = 2\pi \int_{e^{-2R}}^{e^{2R}} \frac{1}{\psi_R(r)}\, \psi_0''\!\left(\frac{\ln r}{R}\right)^{\!2} \frac{1}{rR^4}\, dr = \frac{2\pi}{R^3} \int_{-2}^{2} \frac{\psi_0''(t)^2}{\psi_0(t)}\, dt. \]

Now we have an integral that doesn’t even depend on R. Of course the precise value of the integral depends on the choice of ψ0, but it is possible to choose one such that the integral is finite. Therefore we have a bound

\[ I_R = \int_{A_R'} \frac{u^2}{|x|^2}\, dx \leq J_R \leq \frac{2\pi C}{R^3}. \]
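Returning to the claim that $\psi_0$ can be chosen so that $\int \psi_0''(t)^2/\psi_0(t)\, dt$ is finite: one concrete choice (an assumption of ours, not specified in the text) is $\psi_0 = \eta^4$ for a standard smooth cutoff $\eta$, because then $\psi_0''^2/\psi_0 = \bigl(12(\eta')^2 + 4\eta\eta''\bigr)^2$ is bounded. A numerical sketch:

```python
import numpy as np

def S(s):
    # the standard building block exp(-1/s) for s > 0, extended by 0
    return np.where(s > 0, np.exp(-1.0/np.maximum(s, 1e-12)), 0.0)

def eta(t):
    # smooth cutoff: 1 on [-1, 1], 0 outside [-2, 2], monotone in between
    a = np.abs(t)
    return S(2 - a)/(S(2 - a) + S(a - 1))

h = 1e-4
t = np.linspace(-2 + 2*h, 2 - 2*h, 40_001)
d1 = (eta(t + h) - eta(t - h))/(2*h)               # eta'
d2 = (eta(t + h) - 2*eta(t) + eta(t - h))/h**2     # eta''
integrand = (12*d1**2 + 4*eta(t)*d2)**2            # equals psi_0''^2 / psi_0 for psi_0 = eta^4
print(integrand.sum()*(t[1] - t[0]))               # finite (the integrand is bounded)
```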

Finally we can prove the statement of the lemma. Choose any point $x \in \Omega = \mathbb{R}^2 \setminus \{0\}$ and let $S$ be such that $x \in A_S'$. Consider $I_S$. For all $R > S$ we have $I_R \geq I_S$, since the integrand is non-negative and the domain only grows. But then $0 \leq I_S \leq I_R \leq \frac{2\pi C}{R^3}$ for all $R > S$. The only possibility is $I_S = 0$. But this implies $u \equiv 0$ on $A_S'$. Therefore $u(x) = 0$. □

With this lemma it is easy to construct a PDE with no solutions, even before we impose any boundary conditions, namely $|x|^2\,\Delta u = u^2 + 1$. Any solution has the property

\[ |x|^2\,\Delta u = u^2 + 1 \geq u^2, \]

and therefore $u \equiv 0$. But $u \equiv 0$ doesn't solve the PDE.