NERC ‘Leaning into AI’ for Online Assistance
E-ISAC Briefs Trustees on More Sophisticated Cyberattacks


Matt Duncan of the E-ISAC addresses NERC's Technology and Security Committee in Calgary, Canada. | © RTO Insider
The NERC board's Technology and Security Committee heard updates on the ERO's plans for artificial intelligence, along with threat information from the E-ISAC.

CALGARY, Alberta — NERC staff told a Board of Trustees committee that the ERO’s work on integrating artificial intelligence technology into its operations is “on track” and has produced promising developments.

Speaking to the board’s Technology and Security Committee on Aug. 13, Howard Gugel, NERC’s senior vice president for regulatory oversight, said the ERO Enterprise is “leaning into AI [by] learning, listening and supporting the industry” while engaging with developers on possible uses for the technology in the organization’s business.

Gugel and other speakers characterized the ERO’s approach to AI as “conservative,” acknowledging the need to keep industry data secure and deploy the technology responsibly. NERC and the regional entities have adopted the National Institute of Standards and Technology’s AI Risk Management Framework as a model. NIST’s framework is structured around four core functions:

    • Govern — implement a risk-management culture through policies, processes and accountability mechanisms;
    • Map — identify and document the context, intended uses and potential impacts of AI;
    • Measure — develop metrics for evaluating AI risks, and test and monitor performance regularly; and
    • Manage — prioritize and address identified risks through mitigation strategies, monitoring and improvement.
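
For illustration only, here is a minimal Python sketch of how an organization might track AI risk items against the four NIST functions. This is a hypothetical internal tracker, not anything NERC or the REs have published; the class, field names and example data are assumptions.

    # Hypothetical illustration: tracking AI risk items against the four
    # NIST AI RMF functions. Names and fields are assumptions, not NERC's.
    from dataclasses import dataclass, field

    NIST_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

    @dataclass
    class RiskItem:
        description: str
        function: str          # one of NIST_FUNCTIONS
        mitigations: list = field(default_factory=list)

        def __post_init__(self):
            if self.function not in NIST_FUNCTIONS:
                raise ValueError(f"unknown NIST function: {self.function}")

    # Example: logging a risk identified while mapping a chatbot use case.
    item = RiskItem(
        description="Chatbot may surface outdated standards language",
        function="Map",
        mitigations=["Human review of responses", "Refresh source index"],
    )
    print(item)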

NIST provided a set of attributes that AI systems should exhibit to demonstrate trustworthiness. These include accuracy and robustness across diverse conditions, protection against failures and outside attacks, accountability and transparency, processes for safeguarding user data and privacy, and fairness.

“There [were] a number of reasons that the NIST AI risk framework was attractive,” said Joseph Younger, chief operating officer at the Texas Reliability Entity. “It’s non-industry-specific; it’s flexible; and it can be tailored to different-size organizations as well as organizations that are at different maturity levels in terms of how they’re implementing AI. … [It] also provides a range of supporting materials, including playbooks, models [and] templates that NERC and the regions could leverage as needed as we start out on these journeys.”

One of the first projects under the ERO’s AI initiative is a chatbot, developed with an outside partner, intended to help users quickly find information from NERC’s website. In the meeting agenda, NERC said such an application “could significantly reduce the time required to … find and apply the knowledge required to perform CMEP [compliance monitoring and enforcement program] tasks.”

The chatbot “can be used as a tool for either somebody that’s a new hire to NERC, or somebody that’s wanting to know more about standards, just to quickly ask a question and get it back,” Gugel said. NERC is developing the chatbot with AI Factory, a product of Microsoft partner company UnifyCloud. An internal pilot is expected to begin in the third quarter.
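
NERC has not published the chatbot's design, but the general pattern it describes — ask a question, get back the most relevant material from existing documents — can be illustrated with a minimal retrieval sketch in Python. The passages and keyword-overlap scoring below are placeholders; a production system like AI Factory would use embeddings and a language model rather than this toy ranking.

    # Minimal retrieval sketch: score indexed passages by keyword overlap
    # with the question and return the best match. Placeholder data only;
    # not a representation of NERC's or UnifyCloud's implementation.
    def tokenize(text):
        return set(text.lower().split())

    passages = {
        "CIP-003": "Security management controls for BES cyber systems ...",
        "CMEP": "The compliance monitoring and enforcement program ...",
    }

    def answer(question):
        q = tokenize(question)
        best = max(passages, key=lambda k: len(q & tokenize(passages[k])))
        return best, passages[best]

    print(answer("What does the compliance monitoring program cover?"))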

NERC also is exploring the use of OpenAI’s ChatGPT to help summarize comments submitted for draft reliability standards, which are in the public record and therefore considered a low security risk. In addition, ReliabilityFirst began a test of Microsoft Copilot in January to determine its suitability for the RE’s business. RF has “enabled Copilot for about 35 users, with plans to reach 50 by August 2025,” NERC said. RF has limited the use of Copilot to work teams that are not “primarily focused on core CMEP functions.”
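
The article does not describe how such a summarization step would be wired up. As one hypothetical sketch using the OpenAI Python SDK — the model choice, prompt and sample comments are all assumptions — the workflow could look like this:

    # Hypothetical sketch of summarizing public standards comments with the
    # OpenAI Python SDK. Model and prompt are assumptions; the input is
    # public-record text, matching the low-security-risk use case described.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    comments = [
        "The proposed 24-hour reporting window is too short for small entities.",
        "Clarify whether the requirement applies to low-impact BES cyber systems.",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not NERC's stated choice
        messages=[
            {"role": "system",
             "content": "Summarize stakeholder comments on a draft reliability standard."},
            {"role": "user", "content": "\n".join(comments)},
        ],
    )
    print(response.choices[0].message.content)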

Trustee Sue Kelly noted that in addition to these efforts, NERC’s Modernization of Standards Processes and Procedures Task Force — of which she is a member — is exploring the use of AI in the standards development process. She asked Gugel if such a use case would fall under the governance model.

Gugel assured Kelly that if such an application were created, the personnel involved would be appropriately trained and that the program would “have good guardrails in place [about] what … files can be accessed on the internet, and [which] ones can’t.”

“At this point, my vision would be [that] there’ll always be somebody reviewing that output for a sanity check before it ever goes out for either a public comment or be a document that’s actually used somewhere,” Gugel said.

E-ISAC Notes Growing Threat Sophistication

Matthew Duncan, vice president for security operations and intelligence at the Electricity Information Sharing and Analysis Center, delivered a presentation on the state of the security landscape at the TSC meeting.

Duncan said the environment remained largely “unchanged,” but the E-ISAC has seen “subtle shifts in the techniques and tradecraft being used by all manner of adversaries.” China-linked actors remain a persistent threat, and the E-ISAC has also seen a rise in malicious activity, including distributed denial-of-service attacks, from pro-Iran groups following the U.S. and Israeli airstrikes on that country’s nuclear program earlier this year. (See Iran Strikes Likely to Raise Cyber Risk, CISA Warns.)

Between January and June, the E-ISAC made 1,982 direct shares to member and partner organizations, a 79% increase over the same period in 2024. Shares to utility members overall were up 43% year over year, and shares to Canadian members and partners were up 13%. Duncan credited the increase to “the efficiency and the automation gains we have made at the E-ISAC.”

Asked by Trustee Jane Allen about reports that AI has fueled an increase in cyberattacks, Duncan acknowledged that “you don’t always know it is AI, or [generative] AI, that’s attacking you.” He said that one possible sign of AI assistance is that “the phishing emails … have all gotten better grammar and better spelling” and seem to be better tailored to their targets.

“The unfortunate truth is [generative] AI makes hacking easier, so even non-sophisticated folks can use these tools to do more effective phishing,” Duncan said. “So I think it behooves us to get ahead of this, to train our people to think about how to respond. If you have a question about whether an email or a text is authentic, find an alternative way to confirm that it is real.”
