The Earth started its existence as a red-hot rock, and has been cooling ever since. It’s still quite toasty in the core, and will remain so for billions of years, yet. Cooling implies a flow of heat, and where heat flows, the possibility exists of capturing useful energy. Geysers and volcanoes are obvious manifestations of geothermal energy, but what role can it play toward satisfying our current global demand? Following the recent theme of Do the Math, we will put geothermal in one of three boxes labeled abundant, potent, or niche (puny). Have any guesses?
Thermal energy is surprisingly hefty. Consider that putting a room-temperature rock into boiling water transfers as much energy to it as hurling it to supersonic speed would! We characterize the amount of heat an object can hold by its specific heat capacity, in Joules per kilogram per degree Celsius (or Kelvin, since one degree of change is the same in either system). Tying some energy concepts together, the definition of a kilocalorie (4184 J) is the amount of energy it takes to raise 1 kg (1 liter) of water 1°C. So we can read the specific heat capacity straight away as 4184 J/kg/K. This is a rather large heat capacity, on the scale of things. As a rule of thumb, 1000 J/kg/K is a marvelously convenient universal number for most substances: it works for wood, air, rock, etc. Liquids tend to be higher (typically 2000 J/kg/K), and metals tend to be lower (150–500 J/kg/K). Rocks—relevant for geothermal energy—range from about 700–1100 J/kg/K, and although I would be happy enough to use the convenient 1000 J/kg/K for crude analysis, I will be somewhat more refined and use 900 J/kg/K for rock in this post—although I feel silly for it.
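For those who want to check the rock-in-boiling-water claim, here is a quick sketch using the post's 900 J/kg/K figure; the 20°C starting temperature and the 343 m/s speed of sound are my assumptions:

```python
# Energy to heat 1 kg of rock from room temperature to boiling,
# compared to the kinetic energy of the same rock at supersonic speed.
c_rock = 900.0            # J/kg/K, specific heat capacity of rock
dT = 100.0 - 20.0         # K: room temperature (assumed 20 C) to boiling
E_per_kg = c_rock * dT    # J per kg of rock

# Speed carrying the same kinetic energy per kg: (1/2) v^2 = E_per_kg
v = (2.0 * E_per_kg) ** 0.5   # m/s
mach = v / 343.0              # assumed sea-level speed of sound
print(f"{E_per_kg:.0f} J/kg, {v:.0f} m/s, Mach {mach:.1f}")
```

The result comes out just above Mach 1, so the "supersonic" comparison holds, if only barely.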
As an example, to heat a 30 kg dining room table by 20°C, we need to supply 600,000 J. Just multiply specific heat capacity by the mass and by the temperature change. A 1000 W space heater could do it in ten minutes (600 seconds), if all of its energy could be channeled directly into the table.
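The same arithmetic in code form, as a trivial sketch:

```python
# Dining-table example: heat = specific heat capacity * mass * dT,
# then the time for an ideal 1000 W heater to deliver that energy.
c_wood = 1000.0   # J/kg/K, rule-of-thumb value for wood
mass = 30.0       # kg
dT = 20.0         # K of temperature rise
E = c_wood * mass * dT        # Joules required
minutes = E / 1000.0 / 60.0   # at 1000 W delivered perfectly
print(E, minutes)             # 600000.0 J in 10.0 minutes
```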
The next property to understand is thermal conductivity: how readily heat is transported by a substance. Differing thermal conductivity is why different materials at the same temperature feel like different temperatures to our touch. It’s because high thermal conductivity materials (metals) slurp heat out of our hands much faster than plastic or wooden objects would. Copper has a thermal conductivity of 400 W/m/K, while stainless steel has an abysmally low value (for a metal) around 15 W/m/K—which is one reason why stainless steel is the preferred metal in kitchens: we can tolerate holding the spoon or pot handle even when another part of the item is quite hot. Plastics are around 0.2 W/m/K, and foam insulation tends to be around 0.02 W/m/K. Rock falls between 1.5–7 W/m/K, with 2.5 W/m/K being typical.
How do we apply thermal conductivity? Imagine a flat panel of stuff with area, A, and thickness, t. Using the Greek letter kappa (κ) to represent thermal conductivity, the rate at which thermal energy flows across the panel given a temperature difference ΔT across it is κAΔT/t, which comes out in Watts.
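A sketch of the formula in action, tying back to the copper-versus-stainless point; the 1 cm² cross-section, 1 cm length, and 50°C difference are made-up numbers for illustration:

```python
def heat_flow(kappa, area, dT, thickness):
    """Conductive power in Watts through a slab: kappa * A * dT / t."""
    return kappa * area * dT / thickness

# Hypothetical utensil handle: 1 cm^2 cross-section, 1 cm long, 50 C across it
A, t, dT = 1e-4, 0.01, 50.0
P_copper = heat_flow(400.0, A, dT, t)   # copper, 400 W/m/K
P_steel = heat_flow(15.0, A, dT, t)     # stainless steel, 15 W/m/K
print(P_copper, P_steel)                # ~200 W vs ~7.5 W
```

The factor-of-27 difference in conducted power is exactly why the stainless handle stays tolerable while the copper one burns you.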
Two sources contribute to the Earth’s heat. The first, contributing 20% of the total, derives from gravity. As proto-planetary chunks fell together under the influence of gravity, the kinetic energy they carried (converted gravitational potential energy) ended up heating the clumps that stuck together. If this were the only contributor, Earth’s center would have cooled significantly below its present levels by today. The other 80% of heating is the gift that keeps on giving: long-lived radioactive nuclei given to us by ancient supernovae (as with most of the other elements comprising Earth and ourselves). Specifically—in order of significance to heating—we have ²³²Th, ²³⁸U, ⁴⁰K, and ²³⁵U, with half-lives of 14, 4.5, 1.25, and 0.7 billion years, respectively. Ironically, one can view the radioactive contribution as gravitational in origin also! This is because supernovae result from fusion losing the fight to gravity, and the heavy elements are created in the resulting gravitational collapse.
In total, the radioactive decay produces about 7×10⁻¹² W/kg in the mantle. The mantle occupies 85% of the volume of the Earth at an average density about 5 times that of water, having a mass of about 4.5×10²⁴ kg. Multiply these together to get 32 TW of heat flow in steady state. If radioactivity is 80% of the story, this implies 40 TW total. Averaging over the area of the Earth, we get 0.08 W/m². Because of the decaying nature of radioactive materials, the heat generation was much higher a few billion years back, making Earth a more geologically active place (e.g., more volcanoes).
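Re-tracing the heat budget numerically (a sketch; the 6371 km Earth radius is my assumption):

```python
import math

# Radiogenic power in the mantle, implied total geothermal output,
# and the average flux through the Earth's surface.
P_per_kg = 7e-12                   # W/kg from radioactive decay
M_mantle = 4.5e24                  # kg, mass of the mantle
P_radio = P_per_kg * M_mantle      # ~3.2e13 W, i.e., ~32 TW
P_total = P_radio / 0.80           # radioactivity is 80% of the story
area = 4 * math.pi * 6.371e6**2    # Earth's surface, ~5.1e14 m^2
flux = P_total / area              # W per square meter
print(round(P_radio / 1e12), round(P_total / 1e12), round(flux, 3))
```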
We can work up another estimate of the total geothermal heat flow by observing that the temperature gradient in the crust is 22°C/km. This gradient can be used as the ΔT/t part of the thermal conduction heat flow rate, κAΔT/t. Taking a square meter for A and 2.5 W/m/K for κ, we calculate a geothermal “loading” of 0.055 W/m². Indeed, Wikipedia reports a land-based heat flow of 0.065 W/m² while the ocean (due to thinner crust and thermally greedy water) averages 0.1 W/m².
Using the Wikipedia value of 0.065 W/m² over land, multiplying by land area yields 9 TW. Humans use 13 TW currently. So if we managed to catch every scrap of land-based geothermal flow (and could use it efficiently), we would not fully cover our present demand. Needless to say, we’re not remotely capable of doing this.
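Both routes to the land-based flux, and the land total, can be sketched in a few lines (the 1.4×10¹⁴ m² land area is the value used later in the post):

```python
# Route 1: infer the flux from the crustal temperature gradient.
kappa = 2.5                          # W/m/K, typical rock conductivity
gradient = 22.0 / 1000.0             # K/m (22 C per km)
flux_gradient = kappa * gradient     # kappa * dT / t, per square meter

# Route 2: take the reported land flux and sum over all land.
flux_land = 0.065                    # W/m^2, Wikipedia's land value
land_area = 1.4e14                   # m^2 of land on Earth
P_land = flux_land * land_area       # total land-based heat flow, W
print(flux_gradient, P_land / 1e12)  # ~0.055 W/m^2, ~9 TW
```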
Naturally, some places are better than others for tapping into geothermal energy. A map of the continental U.S. in heat flow (below) reveals that the west has more flow than the east. A similar map for North America (including oceans) can be found on the SMU website. On a large regional basis, some spots in the U.S. dip down to 0.03 W/m², while some of the better regions reach up beyond 0.1 W/m².
Even so, we’re talking thermal gradients that are at most in the neighborhood of 35°C/km. In order to produce electricity in a heat engine, we are stuck with a maximum thermodynamic efficiency of (Th − Tc)/Th, where “h” and “c” subscripts refer to absolute temperatures of the hot and cold reservoirs, respectively. At 1 km depth, this amounts to only 10% (and in practice we tend to only get about half of the theoretical maximum efficiency). One needs to drill at least 3 km down before being able to take advantage of steam (at 27% max efficiency). A depth of 5 km reaches 38% maximum theoretical efficiency—so perhaps 20% practical efficiency. Making a 1 GW electricity plant operating on the steady-state geothermal flow would require canvassing an area 200 km on a side buried 5 km deep even in the better regions having 0.1 W/m². Realizing that we’re stuck with thermodynamic inefficiency, a geothermal network covering every scrap of land area on the globe would get less than 2 TW of power at 20% end-to-end efficiency.
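The efficiency-versus-depth numbers and the plant footprint can be reproduced in a short sketch; the 288 K (15°C) surface temperature is my assumption:

```python
import math

# Carnot limit vs. drilling depth for a favorable 35 C/km gradient.
T_c = 288.0      # K, cold reservoir (surface, assumed 15 C)
grad = 35.0      # K per km of depth

def carnot(depth_km):
    T_h = T_c + grad * depth_km
    return (T_h - T_c) / T_h    # maximum efficiency: (Th - Tc) / Th

effs = {d: carnot(d) for d in (1, 3, 5)}   # roughly 10%, 27%, 38%

# Collection area for a 1 GW(e) plant at 20% practical efficiency
# on the better regions' 0.1 W/m^2 of steady flow.
area = 1e9 / (0.20 * 0.1)              # m^2 of collection needed
side_km = math.sqrt(area) / 1000.0     # ~220 km on a side
print(effs, round(side_km))
```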
So rather than mess with the pathetically impractical commonplace thermal gradients for the purpose of electricity production, we look to hotspots like the Yellowstone region, or places where hot springs and geysers can be found at the surface. Indeed, The Geysers in California hosts 1.5 GW of installed geothermal electricity, but the power output has declined by almost a factor of two in recent decades (it is possible to draw out heat faster than it is replaced by conduction).
The U.S. has about 3 GW of geothermal electricity installed, out of the worldwide total of 10 GW. Surprising to me, Iceland has just 0.6 GW installed, but this is 30% of their national electricity production. Another surprise to me was that the Philippines also derive about 30% of their electricity from geothermal sources, amounting to 1.9 GW.
I don’t have any handy back-of-the-envelope way to estimate the abundance of hotspots. Out of the 9 TW of diffuse land-based heat flow, I might guess that something like 1% (90 GW) may be available in the form of geyser-like surface steam. In short, these are rare beasts.
Rather than try to generate electricity, we could use direct heat from geothermal, or use the thermal mass of the ground as a push-point for heat pumps. The latter should not be called geothermal, since it is not tapping into the geothermal heat flow. As such, I will ignore it here and return to it at some later time together with a discussion of heat pumps for controlled climate applications.
The difficulty with extracting heat from the ground is that the gradient is rather small. For instance, hot water in the home generally wants to be about 45°C. At the nominal gradient, one must drill about 1.5 km (about a mile) down to find rock this warm—certainly impractical for individual homes. It could possibly be effective for communities or cooperatives that distribute hot water to a number of houses/businesses. Using geothermal energy for home heating faces similar distribution challenges.
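The depth figure follows directly from the gradient; here is the one-liner, assuming a ~13°C average ground temperature at the surface (my assumption):

```python
# Depth at which rock reaches domestic hot-water temperature (45 C),
# given the nominal 22 C/km crustal gradient.
T_target, T_surface = 45.0, 13.0   # C; surface value is an assumption
gradient = 22.0                    # C per km
depth_km = (T_target - T_surface) / gradient
print(round(depth_km, 1))          # ~1.5 km
```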
When drawing heat out of a region in the ground, that region will cool relative to its surroundings if heat is extracted at a rate faster than the nominal flow—leading to a depletion of thermal capacity. The replacement heat must ultimately come primarily from radioactive decay. Let’s ask how much rock volume needs to supply thermal energy for one house on a sustainable basis.
The average American household used 80 thousand cubic feet of natural gas in 2001 (apologies for old data and Imperial units). The gas is predominantly used for heating of one form or another: house, water, and food. 80,000 cf translates to about 800 Therms of energy per year, or 2700 W of continuous thermal power. Using our number from before that the mantle generates 7×10⁻¹² W/kg, the average American home would need a rock mass of 4×10¹⁴ kg, or a cubic volume 5 km on a side at a crustal density of 3.3 times that of water.
Can you believe this? All that volume for one house! This does not mean that the collection network needs to be this large. After all, heat is flowing from deeper down all the time. In this context, the average house needs to intercept an area 200 meters on a side at 0.065 W/m². Still quite a large outlay of piping 1.5 kilometers deep.
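Both household-scale estimates can be sketched together:

```python
# One household's 2700 W thermal demand, two ways: the rock mass needed
# if supplied purely by radiogenic heating, and the surface area needed
# to intercept the steady geothermal flow instead.
P_house = 2700.0       # W, continuous thermal demand
P_per_kg = 7e-12       # W/kg, radiogenic heating rate
rho = 3300.0           # kg/m^3, crustal rock density

mass = P_house / P_per_kg                        # kg of rock needed
cube_km = (mass / rho) ** (1.0 / 3.0) / 1000.0   # cube side, ~5 km

flux = 0.065                 # W/m^2 steady flow over land
area = P_house / flux        # m^2 of interception needed
side_m = area ** 0.5         # ~200 m on a side
print(f"{mass:.1e} kg, {cube_km:.1f} km cube, {side_m:.0f} m square")
```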
What if we cheat and use a smaller collection network, relying on conduction to fill in with surrounding heat? How long will our resource be useful? I’ll spare you the derivation, but the recharge time via thermal conduction is proportional to density times specific heat capacity divided by thermal conductivity. Most importantly, it scales as the square of the dimension (think radius of the depleted zone). Using numbers for an egg (typical food will have values like: ρ ≈ 1000 kg/m³; κ ≈ 1 W/m/K; cₚ ≈ 2000 J/kg/K; R ≈ 0.02 m), I get a timescale of 800 seconds, or about 13 minutes. This is how long it would take an egg to cool down (or heat up in boiling water). Not too bad, as estimates go. Using numbers for rock, I get a one-year time constant when R ≈ 5 m. Crudely speaking, this means we’d have access to a yearly “sustainable” volume—recharging in summer, for instance—around 500 cubic meters, holding 45 GJ (cₚρVΔT) of thermal energy at a ΔT of 30°C. Used over a year, this provides something like 1400 W of average power—about half of the typically desired amount.
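The whole chain of estimates fits in one sketch; note I round the rock radius down to 5 m for the volume step, matching the round numbers above:

```python
import math

# Recharge timescale: tau ~ rho * c_p * R^2 / kappa, applied to an egg
# and to rock, then the "yearly sustainable" sphere for one house.
def tau(rho, c_p, kappa, R):
    return rho * c_p * R**2 / kappa      # seconds

tau_egg = tau(1000.0, 2000.0, 1.0, 0.02)   # ~800 s, about 13 minutes

year = 3.156e7                             # seconds in a year
rho, c_p, kappa = 3300.0, 900.0, 2.5       # rock properties
R = math.sqrt(year * kappa / (rho * c_p))  # radius for a one-year tau, ~5 m

V = 4.0 / 3.0 * math.pi * 5.0**3           # sphere volume at the rounded R = 5 m
E = c_p * rho * V * 30.0                   # thermal energy at dT = 30 C
P_avg = E / year                           # average power if spent over a year
print(round(tau_egg), round(R, 1), round(E / 1e9), round(P_avg))
```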
The danger is that once you try to go larger scale than this, the depletion volume gets larger, and the time to recharge scales up accordingly. Fundamentally, thermal depletion is a dimensional problem. You can draw out energy according to volume, but it is recharged according to area. So the problem is dimensionally stacked to come up short, leading to thermal depletion. This analysis deals with straight conduction. An underground fluid flow would change the story, and developed geothermal sites usually have this feature.
Still, if we don’t care about sustainable use of geothermal, we can just keep drilling new holes to deplete one region after the other. In this sense, we could evaluate the thermal endowment in the upper 5 km of crust under land. The average temperature in this layer is about 60°C above the surface value, so that each cubic meter (3300 kg) contains 180 MJ of thermal energy. Summed over 1.4×10¹⁴ m² of land area and 5 km of depth, we get about 10²⁶ J. This is 250,000 years of our global appetite. A quarter-million years might seem close enough to indefinite that we’re willing to call it sustainable. Truly it is a substantial endowment. It’s the practical considerations that hold us off from rushing into this resource.
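As a check, the endowment arithmetic in sketch form; carrying one more digit than the round 10²⁶ J above lands near 300,000 years rather than 250,000, which is the same ballpark:

```python
# Thermal endowment of the top 5 km of crust under land, and how long
# it would cover a 13 TW global appetite.
rho, c_p = 3300.0, 900.0     # kg/m^3 and J/kg/K for crustal rock
dT_avg = 60.0                # C above the surface value, layer average
E_per_m3 = rho * c_p * dT_avg            # ~1.8e8 J (180 MJ) per cubic meter

land_area = 1.4e14           # m^2 of land
depth = 5000.0               # m of crust considered
E_total = E_per_m3 * land_area * depth   # ~1e26 J total

years = E_total / (13e12 * 3.156e7)      # at 13 TW of global demand
print(f"{E_total:.1e} J, {years:.0f} years")
```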
The energy derived is mostly useful for heat, being inefficient at producing electricity. It won’t fly our planes or drive our cars. And it’s buried under kilometers of solid rock, making it very difficult to access. Each borehole only makes available the heat in its immediate surroundings—unlike drilling for oil or natural gas, where a single hole may access a large underground deposit. So my guess is that we’ll burn every tree and fossil fuel on the planet before we start drilling through ordinary rock to stay warm. In other words, there is little incentive to dig deep for heat. By the time we run out of the easier resources—having burned every scrap of wood not bolted down—are we going to be left in a state to drill through rock at a massive scale?
In short, even though the thermal energy sitting under our feet is enormous in magnitude, it does not strike me as a lucky find. No one is racing to dig in. Perhaps it is simpler to say that it’s economically excluded, at present. And will it ever be cheaper to drill? For me, this falls into a category similar to space resources. Sure, they exist, but getting to them means that they might as well not be there, for practical purposes.
Abundant, potent, or niche? Hmmm. It’s complex. On paper, we have just seen that the Earth’s crust contains abundant thermal energy, with a very long depletion time. But extraction requires a constant effort to drill new holes and share the derived heat among whole communities. Consider that two-thirds of our fossil energy goes up as waste heat, and often in cold environments. Waste is an appropriate word, in this context. But distributing the heat into useful places is a practical challenge to which we seldom rise.
Once we move to the steady flow regime, we get 9 TW across all land. This might qualify it as potent, except that practical utilization of the resource fails to deliver. For one thing, the efficiency with which we can produce electricity dramatically reduces the cap to the 2 TW scale. And for heating a home, we saw that we would need to capture zones well over 100 meters on a side. Recall that in similar fashion, the 1200 TW scale for wind dissipation was knocked way down to a handful of terawatts to account for the practically extractable portion—but still leaving it in the potent category. So realistically, steady-state geothermal fails to deliver, and lands in the “niche” box.
Clearly, geothermal energy works well in select locations (geological hotspots). But it’s too puny to provide a significant share of our electricity, and direct thermal use requires substantial underground volumes/areas to mitigate depletion. All this on top of requirements to place lots of tubing infrastructure kilometers deep in the rock (do I hear EROEI whimpering?). Even dropping concerns about depletion, the practical/economic challenges do not favor extraction of geothermal heat on a large scale. So geothermal is not giving me that warm, fuzzy feeling I seek. It’s certainly not riding to the rescue of the imminent liquid fuels crunch.
We’ll see nuclear fusion next week.