Analysis Offers Blueprint for Faster Data Center Interconnection
Benefits of Load Flexibility, Bring-your-own Capacity Modeled at Six PJM Sites


Construction is ready to begin on a third data center at the Google complex in Storey County, Nevada. | Google
A new analysis models a markedly faster interconnection process for large data centers where the developer and utility can agree on flexible interconnection and the developer can secure some of its own generation capacity.

Camus Energy, encoord and Princeton University’s Zero Lab analyzed six hypothetical data centers’ large load requests at locations within PJM that have been the scene of actual requests.

The analysis concluded that by agreeing to partial curtailment during limited periods of system stress, and by directly procuring accredited generation capacity, data center developers could reach operational status in roughly two years instead of five to seven years. It also found the approach would shield other grid customers from most of the costs.

It is, Camus said, the first publicly available study to combine real utility transmission system data, system-level capacity expansion modeling and site-level capacity optimization to evaluate how flexibility can accelerate data center interconnections.

And it provides a repeatable blueprint other utilities can follow, Camus said.

Different Approach

Load flexibility is a concept that is drawing attention as the rate of large load requests exceeds the pace at which the grid can be expanded to serve them.

A Duke University study in early 2025 concluded the existing U.S. power network could handle 126 GW of new demand with no new generation if data centers cut their energy use by as little as 1% in times of peak demand. (See US Grid Has Flexible ‘Headroom’ for Data Center Demand Growth.)
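
The idea behind that kind of headroom estimate can be illustrated with a simple hourly calculation. The sketch below uses entirely synthetic load and capacity numbers, not the Duke study's data or method, to show how few hours of curtailment a new load might need if it backs off only when the existing system is near its limit.

    # Illustrative sketch only: synthetic numbers, not the Duke study's data or method.
    # Counts how many hours per year a new flexible load would have to curtail,
    # given an hourly system load series and a fixed usable capacity.
    import numpy as np

    rng = np.random.default_rng(0)
    hours = 8760
    # Synthetic existing system load (GW): seasonal swing plus noise.
    system_load_gw = 620 + 80 * np.sin(np.linspace(0, 2 * np.pi, hours)) + rng.normal(0, 25, hours)
    capacity_gw = 760          # assumed usable system capacity
    new_load_gw = 10           # hypothetical new flexible data center load

    headroom_gw = capacity_gw - system_load_gw
    curtail_hours = int((headroom_gw < new_load_gw).sum())
    print(f"Hours requiring curtailment: {curtail_hours} of {hours} "
          f"({100 * curtail_hours / hours:.2f}% of the year)")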

Some of the biggest names in the tech sector have begun exploring demand response as a way to limit exposure to the high cost and slow pace of building new infrastructure to serve these new large loads. (See Google Strikes Demand Response Deals with I&M, TVA.)

The new study — “Flexible Data Centers: A Faster, More Affordable Path to Power” — was funded by Google, which reviewed it prior to publication.

It advocates for a mixed, flexible approach.

The sticking point, Camus CEO Astrid Atkinson told RTO Insider, is that most tariffs have no middle ground — large-load customers can build their own generation behind the meter or they can get firm service from a utility, but not some mixture of both.

To change this, utilities need to have not only the willingness but the skills and technology to consider alternatives, she said.

Data center operators, too, need to open up to the idea.

“They’ve also been very reluctant to consider curtailment,” Atkinson said. “Historically, they want to make sure that if they’re building a data center facility, that they can use 100% of the power footprint that the facility is designed for. … Being paid to curtail is absolutely dwarfed by the opportunity cost of not using the resource that they’ve invested in.”

The “huge disconnect” between the time frames on which Big Tech and the U.S. power industry operate is leading to changes, she said, because there is plenty of room on the grid to accommodate loads under what is described variously as conditional firm service, non-firm service or flexible connections.

The obligation-to-serve model “naturally means that the system, for the most part, does have a decent amount of slack capacity in many places, most of the time,” Atkinson said.

Some utilities are receptive to the idea, she added.

“There’s obviously a lot of complexity in how that plays out, but we have definitely seen utilities be actively curious and willing to explore flexible interconnection models for data centers and other large load assets.

“There’s also challenges in terms of, we need to adapt the existing market participation rules and the regulatory models that support connecting stuff to the grid.”

Updated interconnection methodologies and potentially new market mechanisms are among the potential changes, Atkinson said. But these are relatively new concepts for an industry that typically makes changes at a deliberate pace.

“This whole conversation, I think in some ways, was kicked off by the Duke University report at the beginning of the year. And it’s really just this year that data centers have been interested in and willing to explore this sort of model. So the conversation is relatively young.”

The Analysis

The analysis applied system-, utility- and site-level modeling to the six scenarios it created.

Importantly, the study looked at all 8,760 hours of the year, not just the worst moments.

It found that a 500-MW data center using flexible grid connection and bringing its own capacity to the table could lop three to five years off its grid connection process.

It found grid power was available for more than 99% of all hours in a year; on-site resources such as batteries, generators and load flexibility were dispatched 40 to 70 hours a year; transmission curtailment events lasting four to 16 hours each totaled seven to 35 hours a year; and generation shortfalls totaled 32 hours a year, mostly due to extreme weather.
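
That kind of tally falls out of a straightforward hourly accounting. The sketch below uses invented hourly limits for a hypothetical 500-MW site, not the study's actual inputs or model, to show how grid-served hours, on-site dispatch hours and unserved hours might be counted across all 8,760 hours.

    # Illustrative hourly accounting for a flexibly interconnected 500-MW site.
    # All inputs are synthetic; this is not the study's model or data.
    import numpy as np

    rng = np.random.default_rng(1)
    hours = 8760
    demand_mw = np.full(hours, 500.0)                # hypothetical flat 500-MW load
    grid_limit_mw = np.full(hours, 500.0)            # deliverable grid power each hour
    constrained = rng.random(hours) < 0.01           # assume ~1% of hours are constrained
    grid_limit_mw[constrained] = rng.uniform(200, 450, constrained.sum())

    served_by_grid = np.minimum(demand_mw, grid_limit_mw)
    shortfall_mw = demand_mw - served_by_grid
    onsite_capacity_mw = 250                         # assumed batteries, generators, load flex
    onsite_dispatch_mw = np.minimum(shortfall_mw, onsite_capacity_mw)
    unserved_mw = shortfall_mw - onsite_dispatch_mw

    print("Hours fully served from the grid:", int((shortfall_mw == 0).sum()))
    print("Hours with on-site dispatch or load flex:", int((onsite_dispatch_mw > 0).sum()))
    print("Hours with unserved load:", int((unserved_mw > 0).sum()))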

And it found that while each gigawatt of new data center demand creates $764 million in supply system costs under a traditional firm-only interconnection, a non-firm interconnection could insulate other grid customers from almost all of that cost: A flexible interconnection with 20% conditional firm service would avoid 273 MW of new build, worth $78 million per gigawatt; bringing its own accredited capacity would internalize $326 million in capacity costs per gigawatt; and the data center's own bill payments would cover $329 million per gigawatt.
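
Read as simple arithmetic, those per-gigawatt figures nearly sum to the full firm-only cost, which is where the "almost all" comes from. The sketch below merely restates the report's headline numbers to show the share kept off other customers' bills; treating the leftover amount as a residual is an interpretation, not a figure from the report.

    # Back-of-envelope restatement of the report's per-gigawatt cost figures ($M/GW).
    # Dollar values come from the summary above; summing them into a single
    # "shielded" share is an illustrative interpretation.
    total_firm_cost = 764        # supply system cost per GW under firm-only interconnection
    avoided_by_flex = 78         # new build avoided via 20% conditional firm service
    internalized_capacity = 326  # capacity cost the data center takes on directly
    covered_by_bills = 329       # recovered through the data center's own bill payments

    shielded = avoided_by_flex + internalized_capacity + covered_by_bills
    residual = total_firm_cost - shielded
    print(f"Shielded from other customers: ${shielded}M of ${total_firm_cost}M "
          f"({100 * shielded / total_firm_cost:.0f}%)")
    print(f"Residual per GW: ${residual}M")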

The research evaluated dynamic line rating (DLR) as a complementary option and found it boosted transmission capacity during most hours and significantly reduced the need for curtailment at the modeled data centers. While DLR is not something data center developers can deploy themselves, they could partner with utilities to expand its use, the authors write.

The Conclusion

The report identifies four key barriers to implementing the flexible connection model it explores:

    • Planning frameworks assume every load always is at its maximum; regulators instead would need to incorporate limited large-load flexibility where voluntarily offered as an explicit input in integrated resource planning and resource adequacy processes.
    • Accreditation methods do not consistently define and value load-modifying resources; regulators would need to extend accreditation to recognize the reliability contribution of emergency load modifying resources in resource adequacy planning under predetermined bounds of duration and annual availability.
    • Tariffs offer only firm or non-firm service, and often not even a non-firm option; FERC and state regulators should encourage transmission providers to change their processes to make better use of voluntary flexible loads.
    • Transmission and resource adequacy commitments would need to be recognized as independent of each other; FERC or other regulators could clarify this through rule making or guidance.

The report follows the list with an optimistic note: “Although regulatory frameworks are still evolving, momentum is building across federal, regional, and state levels.”

The authors add the caveat that the analysis is a demonstration of the methodology on certain sites and system configurations, not a comprehensive national assessment.
