This is an extract from a recent report, “Dodging The Firm Fixation For Data Centers And The Grid,” published by Energy Innovation. The extract presents six demand features of U.S. data centers: three that explain the diversity of data center types (agency, clustering, and consumption profile) and three that explain their internal workings (flexibility, backup, and modularity).

Data Center Feature 1: Agency and the split-incentive problem
Planning and operating a data center involves many decision-makers. Some data centers are facilities where customers rent space to house their servers and equipment, or simply run their software on provided equipment. This means the facility is developed and owned by a different company from those that rent rack space, buy computing capacity, and ultimately consume electricity. The presence of multiple actors complicates decisions around electricity supply. If new data centers are to adapt their development approach to better integrate with the grid and increase their “speed-to-power,” policymakers must understand how modern data centers are planned, built, and operated. Many different actors are involved, creating a classic split-incentive problem. Loosely speaking, apart from the users or clients, three groups of actors dictate the energy and resource impacts of data centers: developers, facility operators, and service providers. These tend to be separate entities. Overlap sometimes occurs, but usually not enough to prevent split-incentive issues. More than half of data centers are categorized as co-location facilities: large facilities that rent out space to multiple separate entities.
In the early stages, data center development is mostly a real estate bet: developers acquire land, water, and electric connection rights, and these rights then pass on to the projects they sell. The natural incentive for developers is to keep the range of future owners they could sell to as wide as possible. Hence, they are unlikely to want to enter contracts or agreements (or support legislation) that might prematurely impair any of the land, water, and power consumption rights attached to their projects. For example, they may not want to agree to be a flexible consumer in return for faster interconnection because that might scare off some prospective buyers. Similarly, owners/operators that lease capacity to data center customers do not necessarily have much insight into how flexible these customers are or how their usage patterns might change over time. They therefore make conservative assumptions about whether a tenant-user would be interested in avoiding on-peak usage, participating in time-varying rates, accessing clean energy tariffs, or enrolling in a demand-response program.
Obviously, renters must abide by some rules (via master service agreements or service-level agreements) about behavior that impacts power quality (voltage, frequency, harmonics, transients, etc.) or broader electrical concerns (like grounding, interference, and surge protection), but that still leaves a lot of uncertainty for the data center owner/operator. Violations may also pass undetected until a severe problem occurs. Because data centers are also large electricity consumers, utilities will want to know if contracts are backed by the ultimate users (e.g., hyperscalers) or an intermediate company that could go bankrupt or disappear. Grid investments involve assets with multi-decadal lifetimes, while the service life of cutting-edge chips can be two to three years. Utilities and their regulators have a strong interest in recovering any incremental costs of investments needed to serve data centers and will look for contractual arrangements to make this happen.
Data Center Feature 2: Clustering, or how data centers are attracted to similar conditions and to each other
Data center locations tend to be concentrated in a few regions rather than evenly distributed, and this clustering amplifies stress on the bulk power system by adding even more load to already energy-dense grids. The easiest explanation for clustering is that it derives from favorable existing conditions: reliable power, dense fiber connectivity, a nearby trained workforce, supportive tax regimes, and land availability. Large anchor projects by hyperscalers or AI campuses can also accelerate the process: Once a hyperscaler or AI training facility establishes itself, it signals viability, brings new infrastructure, and lowers costs for additional entrants. Policymakers should avoid treating projects as one-offs and consider the likelihood of a single facility snowballing into a larger cluster.
Policymakers eager to support a big project on the promise of jobs and tax revenue risk underestimating the strength of this attractive force: welcoming one project may quickly lead to a cascade of follow-on facilities, with both outsized benefits and mounting strains. Recent history reveals a pattern whereby anchor investments amplify favorable local conditions into enduring centers of digital infrastructure. Northern Virginia’s “Data Center Alley” grew from early fiber and internet exchange points into the world’s largest concentration of data centers. Amazon Web Services (AWS) was an early and steady investor in this cluster. Today, Data Center Alley reportedly handles roughly 70 percent of the world’s internet traffic, contains over 12 million square feet of commissioned data center space, and sustains hundreds of megawatts of power load. Reno’s Tahoe-Reno Industrial Center became a global hub after Switch and Apple established major campuses, followed by Google and others. Central Ohio offers a newer case: Google and AWS each invested in major builds, quickly attracting colocation providers. Atlanta and Phoenix look to be on similar paths.
In theory, diverse types of data centers should reinforce these patterns. Colocation facilities are drawn to network-dense hubs where they can maximize interconnection to other facilities. For example, enterprise servers might want to connect easily to multiple cloud providers, and providers of cornerstone internet services stand to benefit from the reduced latency that proximity affords, especially for content delivery like streaming video and games. Hyperscalers could function as anchors, just like a department store in a shopping mall, investing billions into single campuses that create the vendor ecosystems others rely on. However, AI-focused facilities, with their unprecedented power needs, can also reshape the landscape by displacing other data centers competing for the same power network and generation resources.
Electric power infrastructure both attracts and is stressed by clustering. Access to transmission lines and substations is a prerequisite, but as clusters grow, demand can overwhelm grids. Northern Virginia now faces multi-year waits for new hookups. Reno’s growth has raised water concerns and left Nevada utilities facing a potential doubling of necessary electrical infrastructure (also spurring them toward large renewable additions). Ohio illustrates the stakes most vividly: As of March 2023, the utility AEP Ohio had imposed a moratorium on new data center service agreements in Central Ohio pending further study, citing grid strain. Eventually, regulators approved a new tariff requiring data centers to pay for 85 percent of subscribed capacity whether it is used or not, with penalties for cancellation or under-performance and a four-year on-ramp.
Clustering behavior can easily outrun planning and force regulators into reactive steps, introducing delays before more proactive policies and tariffs can be put in place. The policy lesson is not to avoid clusters (after all, they bring new jobs, tax revenue, and digital infrastructure) but to keep a skeptical eye on benefits claimed by developers and to focus on smart planning. Such planning should weigh the interests of the many stakeholders affected by a data center cluster and work in advance to align land use, grid upgrades, generation, flexible loads, and permitting frameworks, ensuring that benefits can be captured without bottlenecks or backlash once clusters grow.
Data Center Feature 3: Consumption profile
Data center electricity usage is not steady or 24/7. Up close, it can be quite choppy and challenging. Batteries could act as a buffer, a keystone solution to managing power quality. Data centers exhibit considerable variability, especially when moving between operational and idle states. In Lawrence Berkeley National Laboratory’s 2024 United States Data Center Energy Usage Report, the authors explain that in 2014, co-location facilities reported 21 percent utilization rates and hyperscalers 45 percent, rising to an estimated 35 percent and 50 percent, respectively, in 2027. The same report models AI learning centers and AI inference at 80 percent and 40 percent utilization rates, respectively. These figures don’t translate directly into electricity consumption load factors because some electricity is used for other purposes, like cooling, that don’t follow a 1:1 relationship with computing load. Even looking at the whole power consumption profile of a data center, it’s important to differentiate between actual load factor and availability: Whether power comes from on-site generation or from the grid, the supply must be prepared to deliver power when the data center wants it and to back off when it doesn’t.
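As a rough illustration of why a utilization rate does not equal a load factor, here is a minimal sketch assuming a fixed server idle-power floor and a constant power usage effectiveness (PUE); all numbers are hypothetical, not from the LBNL report:

```python
# Hypothetical illustration of why utilization rates and load factors
# diverge. Assumed (not from the report): servers draw a fixed idle
# floor, and cooling/overhead scales with IT power via a constant PUE.

RATED_IT_MW = 100.0   # rated IT capacity of the facility (assumed)
IDLE_FRACTION = 0.4   # share of rated power drawn at zero utilization (assumed)
PUE = 1.3             # power usage effectiveness multiplier (assumed)

def facility_power_mw(utilization: float) -> float:
    """Facility draw: idle floor plus utilization-driven IT power, times PUE."""
    it_power = RATED_IT_MW * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)
    return it_power * PUE

peak = facility_power_mw(1.0)
for utilization in (0.21, 0.45, 0.80):  # colo 2014, hyperscale 2014, AI learning
    load_factor = facility_power_mw(utilization) / peak
    # With a constant PUE the multiplier cancels in this ratio; a
    # load-dependent PUE (more realistic for cooling) would not.
    print(f"utilization {utilization:.0%} -> facility load factor {load_factor:.0%}")
```

Under these assumptions the load factor sits well above the utilization rate because idle hardware still draws power, which is one reason the two metrics should not be conflated.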
Big swings in data center demand will clearly be a challenge, even for the most flexible on-site generation. Given the scale at which many data centers operate, these swings can also create problems for large regional grids. The CEO of Hitachi Energy reportedly commented that “there can be swings of 200, 300 MW within a ten-minute period as data centers move from learn vs stop learn mode,” and that these types of swings would not be acceptable from other grid customers. At smaller time scales, large numbers of similar chips in one place switching on and off can create an aggregate resonance effect. Existing electrical standards are inadequate for screening out these behaviors, and utilities may not have sufficient sensors to properly trace issues back to a particular data center. In aggregate, the evidence points to data centers deteriorating power quality metrics in their environs.
More research is needed on large new digital loads and their interactions with inverter-based variable generation resources, centered on issues like low-voltage ride-through and fault clearing. For more information, context, and solutions on some of the challenges with interconnecting these large loads, see GridLab’s recent Practical Guidance and Considerations for Large Load Interconnections. Data centers are not a “perfect baseload” fit to directly couple with large mechanical generators or even the grid, and they will need significant electrical equipment to buffer this connection and prevent extra wear and tear on co-located generation or nearby grid users. Even if some data centers can learn to be flexible, incorporating battery energy storage, especially as hardware costs decrease, will likely become a key element in managing data center impacts on the grid. When good wind and solar resources are available nearby, batteries can play a dual role in managing both load and generation variability at multiple time scales.
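To make the buffering idea concrete, here is a minimal sketch in which a battery absorbs the difference between a choppy learn/stop-learn load and a smoothed grid draw; the load profile, smoothing window, and battery ratings are illustrative assumptions, not figures from the report:

```python
# Minimal sketch of a battery buffering a choppy AI-training load so the
# grid sees a smoothed draw. The load profile, smoothing window, and
# battery ratings are illustrative assumptions, not figures from the report.

WINDOW = 6          # trailing smoothing window in 10-minute steps (one hour)
BATT_MW = 150.0     # battery power rating (assumed)
BATT_MWH = 200.0    # battery energy rating (assumed)

# Synthetic "learn / stop-learn" profile: 10-minute steps over 8 hours,
# swinging between 300 MW and 50 MW every 30 minutes.
load_mw = [300.0 if (step // 3) % 2 == 0 else 50.0 for step in range(48)]

soc_mwh = BATT_MWH / 2  # start half charged
for step, load in enumerate(load_mw):
    # Grid draw targets a trailing one-hour average of the load.
    recent = load_mw[max(0, step - WINDOW + 1):step + 1]
    grid_mw = sum(recent) / len(recent)
    # Battery covers the mismatch, clipped to its power and energy limits;
    # in a real controller any remainder would spill through to the grid.
    batt_mw = max(-BATT_MW, min(BATT_MW, load - grid_mw))
    soc_mwh = max(0.0, min(BATT_MWH, soc_mwh - batt_mw * (10 / 60)))
    if step % 6 == 0:
        print(f"t={step*10:3d} min  load={load:5.0f} MW  "
              f"grid={grid_mw:5.0f} MW  battery soc={soc_mwh:5.0f} MWh")
```

Even this crude controller halves the swing the grid sees; sizing the battery's power and energy ratings against the worst-case swing and its duration is the real engineering task.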
Data Center Feature 4: Flexibility, or the lack thereof
Flexibility could be key to quickly connecting new data centers, especially those involved with AI learning. Managed demand is possible, but on-site batteries may be a better solution where split incentives or on-site needs make demand control too rigid or complex. Data centers can be flexible, but different functions involve different levels of flexibility. Flexibility is probably hardest to achieve for co-location data centers because the third-party owner that interfaces with the grid and with utilities is not the one deciding what the servers inside its facility are doing. Additionally, data centers are tasked with fluctuating sets of applications, creating uncertainty about how reliable or persistent demand management can be as a means of providing flexibility. Data centers fully owned by large hyperscalers provide a higher degree of control over the whole facility, but the diversity of services being provided, often with low latency (response time) needs, may constrain what the hyperscaler can do. Hyperscale data centers provide both regular services, like AWS’s cloud computing, and AI workloads such as inference, which involves answering client queries using pretrained AI models.
For AI learning data centers, which create these large learning models, the goal is to cram as many chips as possible into the same square mile with the fastest internal connectivity so that the collection can operate as one big parallel machine. Much of the possible flexibility here comes from adjusting the timing of computing batches, yet matching these adjustments to power supply flexibility needs is not a given, especially since data center operators will want to prioritize computation over flexibility. Flexibility is a particularly important quality for data centers because they are such a large component of load growth, and just a little flexibility would reduce the need for new peaking resources and speed up interconnection. A 2025 analysis by the Nicholas Institute for Energy, Environment & Sustainability at Duke University finds that just 0.5 percent to 1 percent flexibility opens significant space on the grid: 98 GW of new load could be integrated at an average annual load curtailment rate of 0.5 percent, and 126 GW at a rate of 1 percent. This level of flexibility is similar to what existing demand-response programs provide for other loads, but as far as speeding up interconnection, it may be the AI-driven hyperscalers and learning centers, acting more directly under their owners’ control and schedules, that can achieve more.
AI loads are fundamentally more flexible than generic data center loads because they can be processed in batches, easily scheduled, and often internally orchestrated. For example, in a presentation to the Texas grid operator Electric Reliability Council of Texas (ERCOT), the company Emerald AI demonstrated how it could implement flexibility at a data center. The company argued there is enormous potential to control AI data center load, and that “major hyperscalers are amenable to curtailing up to 25 percent for up to 200 hours in return for priority interconnection of 1 GW.” No one knows if any particular data center’s operations will remain stable enough to guarantee a given level of flexibility or willingness to curtail over the lifetime of the matching local grid upgrades. In some cases, the data center load can be flexible (willing to forgo some batches of work) but not exactly in the way that best serves the local grid. Some amount of local battery energy storage (providing multiple value streams like integrating local on-site variable energy, backup, and power quality services) could also help data centers be more flexible at their grid interface, especially those with less direct control over internal processes.
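A quick back-of-envelope check, assuming a flat load for simplicity, shows how the Duke curtailment rates and the Emerald AI offer line up on the same scale:

```python
# Back-of-envelope check on the flexibility figures above (assumes a
# flat load; illustrative only, not from the cited analyses).

HOURS_PER_YEAR = 8760

for rate in (0.005, 0.01):  # Duke Nicholas Institute curtailment rates
    print(f"{rate:.1%} annual curtailment ~= {rate * HOURS_PER_YEAR:.0f} "
          f"full-load hours per year")

# Emerald AI figures from the text: curtail up to 25% of load for up to
# 200 hours in return for priority interconnection of 1 GW.
load_gw, depth, hours = 1.0, 0.25, 200
curtailed_gwh = load_gw * depth * hours
share = curtailed_gwh / (load_gw * HOURS_PER_YEAR)
print(f"curtailed energy: {curtailed_gwh:.0f} GWh, "
      f"about {share:.2%} of annual consumption")
```

The 0.5 percent and 1 percent rates correspond to roughly 44 and 88 full-load hours per year, and the Emerald AI offer works out to about 0.57 percent of annual consumption, squarely within the range the Duke analysis identifies.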
Data Center Feature 5: Backup needed for disturbances and outages
Most data centers require backup. Demand flexibility and short-duration batteries can either eliminate or lighten the load for traditional backup solutions. Many data center customers aspire to high availability, as much as 99.999 percent uptime, hence the need for backup power to take over in case of any grid failure. The Uptime Institute, a widely followed source for industry tier certification in data center design, build, and operations, defines four reliability tiers (I through IV) with increasing expectations for performance under challenging conditions, with an eye toward worst-case scenario planning. Many data centers serving enterprise needs require at least a Tier III level of reliability, either because of a direct need, like maintaining access to data under adverse conditions, or as a proxy for operational trustworthiness. For mission-critical operations, such as major banks, stock exchanges, the military, or hyperscalers serving global customers, a Tier IV level of availability may be required.
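To put these availability targets in perspective, the short sketch below converts them into allowed downtime per year; the tier percentages used are the figures commonly quoted alongside the Uptime Institute tiers and should be treated as approximate rather than certified values:

```python
# Allowed downtime per year implied by common availability targets.
# The tier percentages are the figures commonly quoted for Uptime
# Institute tiers; treat them as approximate, not certified values.

MINUTES_PER_YEAR = 365.25 * 24 * 60

targets = [
    ("Tier I,  99.671%", 0.99671),
    ("Tier II, 99.741%", 0.99741),
    ("Tier III, 99.982%", 0.99982),
    ("Tier IV, 99.995%", 0.99995),
    ("five nines, 99.999%", 0.99999),
]
for label, availability in targets:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label}: ~{downtime:,.0f} minutes of downtime per year")
```

Five nines of uptime leaves barely five minutes of downtime per year, which is why facilities chasing that target cannot rely on grid reliability alone.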
Because Tier III and Tier IV facilities require 72 and 96 hours of on-site power capacity, respectively, simple economics dictate that backup usually takes the form of diesel generators with fuel storage on-site. Batteries can also be used to help ride through disturbances in power supply, providing faster response times and reducing diesel fuel and maintenance expenses. With today’s technology, however, battery energy storage systems (BESS) that can cover critical needs for three to four days are not economically feasible, especially without some form of on-site generation to sustain their state of charge. Yet diesel does not scale well: As data centers get much larger, massive tank farms for the generators’ on-site fuel require complex fire protection, spill containment, and environmental risk mitigation. Furthermore, many air quality regulators (e.g., in Virginia, California, or Oregon) place strict caps on generator run time and cumulative emissions in a site or region. Placing more than a hundred diesel generators on one site creates a cumulative permitting challenge and may well face serious local resistance, along with the prospect of delays or outright rejection from regulators. Somewhat cleaner gas generators (turbines or reciprocating engines) are usually connected to a pipeline and require large propane or liquefied natural gas storage facilities to satisfy on-site capacity requirements.
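A rough sizing comparison makes these economics concrete; the facility load, generator fuel burn rate, and installed battery cost below are assumptions for illustration, not figures from the report:

```python
# Rough backup sizing for a hypothetical Tier IV facility that must keep
# 96 hours of on-site power. Load, fuel burn, and battery cost are assumed.

CRITICAL_LOAD_MW = 100.0    # assumed critical load
BACKUP_HOURS = 96           # Tier IV on-site capacity requirement
DIESEL_GAL_PER_MWH = 70.0   # rough generator fuel burn (assumed)
BESS_USD_PER_KWH = 250.0    # installed battery cost (assumed)

energy_mwh = CRITICAL_LOAD_MW * BACKUP_HOURS
print(f"energy to cover: {energy_mwh:,.0f} MWh")
print(f"diesel on site: ~{energy_mwh * DIESEL_GAL_PER_MWH:,.0f} gallons")
print(f"battery alternative: ~${energy_mwh * 1000 * BESS_USD_PER_KWH / 1e9:.1f} "
      f"billion of storage, before any recharging source")
```

Under these assumptions, the 96-hour requirement means roughly 9,600 MWh of energy: on the order of 700,000 gallons of diesel on-site, or billions of dollars of batteries, which is why neither pure option scales gracefully.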
Some large hyperscalers are opting to target better uptime based on statistical estimates rather than explicit proxies for reliability. For example, Microsoft has publicly committed to reducing the use of diesel generators by 2030. To that end, it contracted with Saft, a subsidiary of TotalEnergies, to install four battery energy storage systems of four megawatt-hours (MWh) each, capable of 80 minutes of on-site power, to replace diesel backup. In the U.S., Microsoft’s newest Azure region in San Jose, California, is also being built diesel-free, using natural gas turbines for backup (plus batteries for ride-through). In general, the U.S. grid is quite reliable, with the one-day-in-ten-years reliability standard mostly achieved at the transmission service level. Most outages that do occur last less than one or two hours, so a battery can carry enough of the backup burden to get the facility to a high level of reliability while hardly, if ever, using on-site generation. As longer-duration storage solutions emerge, like Form Energy’s 100-hour battery or thermal batteries connected to local renewables and steam turbines in local energy parks, data centers will be able to free themselves from fossil fuel backup while taking advantage of integrated design to combine multiple uses of batteries for flexibility, power quality, and backup.
Data Center Feature 6: Modularity—data centers are built in phases
Data centers expand in discrete phases, from racks to halls to entire campuses, with uncertain demand and rapidly rising power density. This modular growth pattern matches well with the modularity of renewables-plus-batteries deployment, which can be built in parallel to meet incremental load without the risks of lumpy firm power investments. For utilities and data center developers, timing capital investments is challenging, and matching those investments with energy supply for increasingly power-hungry sub-components compounds the challenge. Building a new data center means committing to constructing a large building and grid capacity without knowing whether customers will come, how quickly they will deploy, or how their consumption will evolve over time. Tenants in a co-location situation, hyperscalers, and AI data centers may not immediately have all their chips available and so may want to deploy in phases: slowly building up electrical demand over time until reaching full capacity, if all the anticipated demand materializes.
The digital world’s infrastructure is itself modular, built from discrete, substitutable units. Data centers are not just abstract systems of bytes and tokens; they are also collections of tangible components: chips, servers, and, above all, racks. The rack is the main unit of reference: a cabinet holding multiple slender servers or “rack units.” Racks are grouped into “pods” of 20–30, and an enterprise client might deploy a couple of pods at a time in either a dedicated or co-location facility. Some tenants lease only a handful of racks in a shared space, while hyperscalers may build entire halls of 200–400 racks, with multiple halls forming a single phase of expansion on a large campus. The modular nature of data centers lets developers manage financial risk by building in phases, with the option to add new capacity quickly but in a planned way. Each phase, however, carries high stakes, not only in capital cost but also in power demand. A 2024 Uptime Institute report states that four- to six-kilowatt (kW) racks remain common today, with a clear trend toward higher consumption.
Meanwhile, AI applications and high-performance computing are pushing the development of liquid-cooled racks with incredible increases in power density. Vertiv, an Ohio-based company that designs, manufactures, and services critical infrastructure for data centers, reported in its 2024 Investor Event Presentation that extreme rack densities already reach 250 kW per rack today and could exceed one MW within five years. That means a space the size of a bedroom closet could consume more power than a thousand average homes. As a result, a single phase of development for a data center might range from 250 kW on the low end to 250 MW at the high end, with extra overhead for cooling.
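That wide range follows from simple multiplication of the building blocks described above; in the sketch below, rack counts and densities echo the figures in the text, while the hyperscale hall density and the cooling overhead multiplier are assumptions:

```python
# Phase power from the modular building blocks described above. Rack
# counts and densities echo the text; the hyperscale hall density and
# the cooling/overhead multiplier (PUE) are assumptions for illustration.

PUE = 1.3  # assumed cooling and overhead multiplier

def phase_mw(racks: int, kw_per_rack: float) -> float:
    """Total phase power in MW, including assumed cooling overhead."""
    return racks * kw_per_rack * PUE / 1000

scenarios = [
    ("enterprise, 2 pods of 25 racks at 5 kW", 50, 5),
    ("hyperscale hall, 400 racks at 15 kW (assumed density)", 400, 15),
    ("AI hall, 400 racks at 250 kW", 400, 250),
]
for label, racks, kw in scenarios:
    print(f"{label}: ~{phase_mw(racks, kw):.1f} MW")
```

Two enterprise pods land in the hundreds of kilowatts, while a single AI hall at today's extreme densities crosses 100 MW, spanning the 250 kW to 250 MW range quoted above.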
The extreme end of data center development is exemplified by data center developer Vantage’s recently announced plans to build its $25 billion Frontier campus on 1,200 acres in Shackelford County, Texas, with an eventual total consumption of 1.4 GW, close to the average total consumption of Rhode Island or Delaware. And this project is not alone: A September 2025 staff report to ERCOT’s board details 130 GW of non-crypto data center load in the interconnection queue through 2030. In the last few years, Texas has met new additional load with new, mostly clean generation. Of the 428 GW of generation interconnection requests as of August 31, 2025, 204 GW are for wind and solar and 180 GW are for energy storage.
Data center development comes at all levels of power consumption. However, because developers rarely build, install, and commission data centers in a single phase, projects of all sizes need a power supply that can grow and expand with them. For covering the incremental energy demand from a new data center, a large single firm resource is an unwieldy, indivisible capital investment. A modular approach with renewables plus batteries reduces risk and provides better economics. With computing loads that grow unevenly, modular investments let operators respond dynamically, deploying more solar, wind, or storage as AI racks come online. As a bonus, incremental installation avoids supply chain bottlenecks, bypassing the long lead times and equipment backlogs associated with large generator orders and enabling continuous expansion without project delays. Just as data centers grow in discrete steps, modular renewables and batteries let the grid grow in parallel.
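A stylized comparison, with all capacities and phase schedules assumed for illustration, shows how modular tranches track phased load growth while a single lumpy firm resource sized for the final build-out sits largely idle in early years:

```python
# Stylized comparison: modular renewables-plus-storage tranches versus one
# lumpy firm plant, both serving a data center that grows in phases.
# All capacities and schedules are assumptions for illustration.

load_by_year = [50, 100, 200, 350, 500]  # MW of data center load, years 1-5
TRANCHE_MW = 100                         # modular increment (assumed)
FIRM_PLANT_MW = 500                      # single plant sized for year 5

built = 0
for year, load in enumerate(load_by_year, start=1):
    while built < load:                  # add tranches only as load arrives
        built += TRANCHE_MW
    print(f"year {year}: load {load:3d} MW | modular built {built:3d} MW "
          f"(idle {built - load:3d}) | firm {FIRM_PLANT_MW} MW "
          f"(idle {FIRM_PLANT_MW - load:3d})")
```

In this toy schedule, the modular path never carries more than one spare tranche, while the lumpy plant carries hundreds of megawatts of idle capacity (and the capital behind it) for years, and would be stranded outright if the later phases never materialized.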