Making AI Environmentally Friendly
Skeptoid Podcast #994
by Brian Dunning

In 2017, there were various financial upheavals all around the world. Japan became the first country to recognize Bitcoin as a legal method of payment. Australia followed soon after. The natural result was that Bitcoin's value jumped by 20× that year, and more entrepreneurs than ever began Bitcoin mining operations in earnest — often with millions of dollars of investor backing.

Why did they need it? Two reasons: First, you have to buy a whole bunch of tiny computers that are optimized for these particular types of calculations; and second, you'll quickly learn that your electric bills are going through the roof. These tiny computers — often referred to as mining rigs — suck up an incredible amount of electricity to power their processors. This, in turn, causes those processors to emit a surprising amount of heat, which sends your air conditioning bill into the stratosphere to keep the computers from melting down.

And while that was the problem a decade or so ago, its analog has reared the same ugly head again — not in cryptocurrency, but this time in AI. Just listen to these recent nightmarish reports:
All of this negativity surrounding AI's environmental impact is — to put it bluntly — basically true. But as with all burgeoning technologies, it's necessary to go through the steps of doing things badly on the way to learning to do them well. The first airplanes were terrible, but we pushed through in order to learn to make better ones. 150 years ago, medical care was likely to do more harm than good, but that's how we learned to do it so well today.

In 2022, Ethereum — the second largest cryptocurrency after Bitcoin — made a fundamental change to the way it functions in order to address these environmental concerns and massive costs: it switched from Proof of Work validation to Proof of Stake validation. You don't need to understand what that means, other than that it cut Ethereum's power consumption by over 99%.

Might there be a fundamental shift like this in the future of AI? In January 2025, we thought there might be. You may have heard about DeepSeek, a Chinese AI that claimed to be 90% more efficient than other models. A 90% reduction in power consumption would indeed be a game changer, but it turned out this claim was not what everyone hoped. The efficiency gains were only realized during DeepSeek's initial training period. The hardware used for this was indeed highly optimized, but the model was also trained for only 10% as many GPU hours as the models it was being compared against, including Meta's Llama. So, obviously, it consumed only about 10% as much power during training. But now that it's trained and up and running, analysts have found that DeepSeek is actually less efficient at handling each query given to it, consuming up to twice as much power as the same query on Llama. So, once again, be skeptical of amazing science news coming from China.

Let's take a quick look at how AI is sucking up all this power. There are various kinds of AIs for different applications — neural networks, natural language processing, generative AI, machine learning — but they all run on basically the same hardware. A conventional data center that runs websites and cloud computing consists of thousands of computers, each with a CPU, memory, and storage, but little else. But like cryptocurrency mining, AI computers are constantly doing complex calculations, and so they are much more reliant on GPUs.

Most of us know GPUs (graphics processing units) as the thing our computer needs to drive an extra monitor, or to play video games with amazing real-time graphics. While a CPU may have a few powerful cores, a GPU has thousands of tiny cores, allowing it to perform thousands of simpler, repetitive tasks simultaneously, in parallel. This is ideal for the matrix operations that are central to AI processing, so today, these simple yet highly specialized machines loaded up with powerful GPUs are what carry the lion's share of AI, and what eat up all that power.

However, growing needs drive innovation, and innovation in the land of AI hardware means ever-greater efficiency. Higher efficiency serves two needs at the same time: it improves the speed and power of the AIs, and doing more with less also means reduced power consumption. In 2015, Google developed a new kind of chip called a TPU (tensor processing unit). TPUs are optimized for processing multidimensional arrays, a central computing function of AI. They're designed explicitly for this and can't really do anything else, and thus they work faster and consume less energy than GPUs.
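To make those matrix operations a little more concrete, here is a minimal sketch (in Python with NumPy; purely an illustration, not something from the episode) of the arithmetic a single neural-network layer performs on a batch of inputs. It's exactly this kind of multiply-and-accumulate work, repeated across thousands of layers and billions of parameters for every query, that GPUs spread across their thousands of cores and that TPUs are purpose-built to chew through.

```python
import numpy as np

# A toy "layer": 1,000 inputs mapped to 1,000 outputs, applied to a batch of 64 queries.
# Real models chain thousands of much larger layers like this for every single answer.
rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 1000))       # 64 examples, 1,000 features each
weights = rng.standard_normal((1000, 1000))   # one layer's weight matrix
bias = rng.standard_normal(1000)

# The core operation is a matrix multiply plus a cheap nonlinearity. Every output
# element is an independent multiply-and-accumulate, which is why hardware with
# thousands of small cores (GPUs) or dedicated matrix units (TPUs) can compute
# them all at once instead of one at a time on a CPU core.
outputs = np.maximum(batch @ weights + bias, 0.0)   # ReLU(batch · W + b)

print(outputs.shape)        # (64, 1000)
print(64 * 1000 * 1000)     # roughly 64 million multiply-adds for this one small layer
```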
A TPU is one kind of ASIC (application-specific integrated circuit): a chip that is hardwired to perform a single specific function or algorithm. Compared to running the same task in software on general-purpose hardware, an ASIC does it far faster and requires far fewer computing and power resources to do so. TPUs are not the only ASICs that have been developed for AI, but you get the idea: the more the AI algorithms mature, the more the most resource-intensive parts of the AI infrastructure can be made vastly more efficient.

Another interesting way that AI can be made more resource efficient is through methodological improvements like model pruning and quantization. This is something like using heuristics, or shortcuts, in the way we think about things. If you ask an AI whether you should bring an umbrella this afternoon, there are a million answers it could give you that you don't care about, all of which would require computing time. How many raindrops per square meter per second are falling? What's the barometric pressure likely to do in the next three hours? No. You only care whether it's raining a lot or hardly at all. Simplifying the math where it makes sense (fewer significant digits, ignoring all but the most important inputs) can cut the size of the job tremendously. Faster results, less power consumed. More useful in every way.
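To see what pruning and quantization actually change, here is another minimal, hypothetical sketch (again in Python with NumPy, not anything from the episode): it throws away the smallest weights of a toy layer and stores the survivors as 8-bit integers instead of 32-bit floats, trading a little accuracy for a much smaller and cheaper computation.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((1000, 1000)).astype(np.float32)   # a toy layer in 32-bit floats

# Pruning: zero out the smallest 80% of the weights. The zeros don't need to be
# stored or multiplied at all, so both memory and arithmetic shrink.
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Quantization: keep fewer "significant digits" by mapping each surviving weight
# to an 8-bit integer plus a single shared scale factor for the layer.
scale = float(np.abs(pruned).max()) / 127.0
quantized = np.round(pruned / scale).astype(np.int8)    # one quarter the bytes of float32

# At run time the approximate weights are reconstructed (or used directly by
# integer hardware). The answer is coarser, but for "should I bring an umbrella?"
# coarse is all you need.
approx = quantized.astype(np.float32) * scale
x = rng.standard_normal(1000).astype(np.float32)
exact, cheap = weights @ x, approx @ x
print(np.linalg.norm(exact - cheap) / np.linalg.norm(exact))   # the accuracy given up
print(weights.nbytes, quantized.nbytes)                        # 4,000,000 vs 1,000,000 bytes
```

In a real trained model, most weights are already close to zero, so aggressive pruning costs far less accuracy than it does with the random weights in this toy example.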
A question we might ask at this point is which is likely to accelerate faster: improvements in efficiency, or demand for more AI? We do have a solid answer for this, and it's not the one we'd like to hear. Remember, those numbers at the top of the show were basically correct. As of now, 2025, the best projections show that global data center power consumption will probably double by 2030; and that's in spite of all the gains in efficiency. But it's also due, in part, to those same gains in efficiency. As we reduce these systems' energy consumption, we're able to do more with them. They become even more useful, and that drives their demand even faster. It's the kind of feedback loop we call the snowball effect: the faster it rolls, the more snow it picks up and the heavier it gets, making it roll even faster, and so on. This is called the Jevons paradox, after the 19th-century British economist William Stanley Jevons: increased efficiency leads to increased consumption.

But we live in a capitalistic world. Supply and demand are forever intertwined. When demand becomes too great, we have to do one of two things: increase the supply, or reduce the demand. In this case we're not physically able to increase the supply, so we do the other thing: reduce the demand, and we do that by raising the price. Expect AI to get more expensive. Potentially a lot more; as much as it takes to avoid melting the grid.

But wait, you say, a lot of AI is open source, meaning the algorithms and software (or at least analogs comparable to the commercial versions) are freely available to all. That's nice. The hardware and the electricity are not. This is a system where water is going to find its own level.

One thing nearly all the industry experts are projecting is that data centers are going to turn increasingly to renewable energy. It's the one variable in this equation that's a one-time expenditure: invest once now, and the energy needs will be covered for the industry's next phase of growth.

And that brings us to a whole other side of this issue that many people don't take into consideration. Powering the AI engines may indeed have a high environmental impact, but some of what the AIs are doing is protecting the environment. An AI can be trained to do just about anything, and whether your application is reducing carbon emissions or protecting old-growth forests, somebody probably already has an AI at work on that problem. Is the benefit each program produces worth the cost of generating the power to run it? Well, maybe in some cases it is, and in some it isn't. Let's take a look at a few examples:
Of course, these are only three of many, many such initiatives that turn to AI to increase the efficiency and cleanliness of processes throughout the world. But so far, they are not nearly enough to offset the costs of running the AIs. Renewable energy helps, but it, too, is fighting a losing battle. And so far, nobody sees anything on the horizon comparable to what Ethereum did in 2022 to cut its consumption by 99%. The bottom line is that gains against the environmental impact of AI are likely to be evolutionary, not revolutionary. In the meantime, we can probably expect economic levers to be about the only effective tool we have, and that means jacking up the cost more and more to reduce the demand. Hopefully, in a few years, I'll have the pleasure of updating this episode with a major new development.