The MEARIE Blog


The electrical distribution industry continues to adopt more advanced artificial intelligence (AI) technologies. Several examples already exist in the sector:

Hydro Ottawa has delivered operational enhancements through the implementation of artificial intelligence. According to a recent Electrical Business Magazine article, Hydro Ottawa employees have reported a reduction of up to three hours in customer service workload, along with a noticeable decrease in equipment failures and service outages.[i]

Another great example of AI adoption among Ontario’s utilities is InnPower’s use of the Senpilot AI platform. Not only did InnPower present on this topic at the most recent MEARIE Conference in June, but their project was also discussed in a recent EDA Distributor article. Faced with increasing demands from extreme weather and the province’s emphasis on energy reliability, the utility has been leveraging Senpilot to streamline workflows and reduce repetitive tasks.

The platform analyzes extensive operational data, consolidates critical information across departments into a single interface, and empowers staff to prioritize high-value initiatives. Since implementation, InnPower has achieved an estimated 18% increase in efficiency, with engineering teams spending up to 85% less time on data collection.[ii]

InnPower highlighted the rising threat of data poisoning, a critical risk in artificial intelligence where faulty or deliberately manipulated data corrupts the learning process of algorithms. Such contamination can lead to unreliable, biased, or incorrect outputs—potentially undermining operational effectiveness and stakeholder trust. This threat makes clear that the integrity of data is not just a technical concern, but a cornerstone of ethical and effective AI implementation.
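To make the mechanics of data poisoning concrete, the toy Python sketch below (our illustration, not drawn from InnPower’s systems or any real utility data) shows how a handful of fabricated records can drag a simple trend model off course:

```python
# Illustrative only: a pure-Python least-squares fit of load vs. temperature,
# first on clean data, then after an attacker injects a few fake records.

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Clean historical data: load (MW) grows ~2.0 MW per degree.
temps = [10, 12, 14, 16, 18, 20]
loads = [20, 24, 28, 32, 36, 40]
clean_slope = fit_slope(temps, loads)

# Poisoned data: three fabricated low-load records at high temperatures.
poisoned_temps = temps + [22, 24, 26]
poisoned_loads = loads + [5, 4, 3]
poisoned_slope = fit_slope(poisoned_temps, poisoned_loads)

print(f"clean slope:    {clean_slope:.2f}")   # trend learned from good data
print(f"poisoned slope: {poisoned_slope:.2f}")  # trend inverted by 3 bad rows
```

Here just three poisoned rows out of nine flip the learned trend from positive to negative, which is why input validation and provenance checks matter before any record reaches a model.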

Building on InnPower’s deployment, the presentation emphasized the broader principles utilities should follow to ensure ethical and secure AI integration. Members implementing AI responsibly and securely should start slow and scale smart, adopting a phased approach with controlled pilot programs and clear success metrics. Organizations should avoid ingesting customer or private information by excluding personal identifiers, such as names, account numbers, and addresses, unless absolutely necessary, and by anonymizing data wherever possible.
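As a purely illustrative sketch, the snippet below shows one way identifiers might be stripped or tokenized before a record leaves an internal system; the field names, salt, and hashing scheme are our assumptions, not a description of any Member’s actual pipeline:

```python
import hashlib

# Hypothetical schema: direct identifiers to remove before external AI use.
PII_FIELDS = {"name", "account_number", "address", "phone"}

def anonymize(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Drop direct identifiers; replace the account number with a salted
    one-way hash so records stay linkable without being re-identifiable."""
    out = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "account_number" in record:
        digest = hashlib.sha256((salt + str(record["account_number"])).encode())
        out["customer_token"] = digest.hexdigest()[:16]
    return out

record = {"name": "J. Smith", "account_number": "A-1042",
          "address": "1 Main St", "usage_kwh": 412.7, "rate_class": "R1"}
safe = anonymize(record)
print(safe)  # only usage_kwh, rate_class, and customer_token remain
```

In practice the salt would be stored securely and rotated, and a privacy review would decide whether even tokenized linkage is necessary for the use case.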

Minimizing direct platform integration is also crucial; limiting real-time API connections between internal systems and external AI platforms helps reduce exposure. Secure, segregated datasets should be used to train and evaluate AI models, especially when dealing with sensitive utility operations, to ensure data isolation.

Additionally, conducting privacy impact assessments for all AI use cases will help identify potential privacy risks and ensure regulatory compliance. Access to AI outputs must be restricted to authorized personnel only, and comprehensive audit trails should be maintained to log data usage and document decision support history.[iii]
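A minimal sketch of restricted access paired with an append-only audit trail might look like the following; the roles, record fields, and queries are hypothetical examples, not a prescribed design:

```python
import datetime
import json

# Hypothetical set of roles cleared to view AI outputs.
AUTHORIZED = {"ops_lead", "grid_engineer"}

def log_ai_access(user: str, query: str, output_id: str, trail: list) -> bool:
    """Record every AI interaction, allowed or not, with a UTC timestamp.
    Returns whether the user is authorized to see the output."""
    allowed = user in AUTHORIZED
    trail.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "output_id": output_id,
        "allowed": allowed,
    })
    return allowed

trail: list = []
log_ai_access("grid_engineer", "feeder F12 outage risk", "out-001", trail)
log_ai_access("intern", "customer billing history", "out-002", trail)
print(json.dumps(trail, indent=2))  # full decision-support history, denials included
```

Logging the denied attempt alongside the approved one is the point: the audit trail documents both data usage and the decision-support history the presentation calls for.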

As AI systems continue to shape industry decisions and workflows, ensuring the accuracy, provenance, and security of data inputs is essential. Proactive monitoring, validation protocols, and robust data governance practices are vital to detect and mitigate the risk of data poisoning. Upholding these standards reinforces the reliability of AI tools and supports their safe, accountable integration into our operations.

The adoption of AI systems introduces an expanding range of risks that go beyond traditional concerns.

 

Key AI-Related Risks in Electrical Distribution

  1. Operational Disruption Due to AI Malfunctions
    AI applications in grid optimization, predictive maintenance, and load forecasting are vulnerable to algorithmic errors and data corruption. A single miscalculation may lead to blackouts or equipment overloading—disruptions that are often not adequately addressed by standard property or liability insurance policies.[iv]
  2. Increased Cybersecurity Threats
    The expansion of AI-enabled smart grids and IoT-connected substations significantly broadens the attack surface. A cyber incident leveraging these AI systems could result in cascading operational failures, raising complex questions about coverage under cyber versus property insurance provisions.
  3. Complex Liability in Autonomous Operations
    When an AI system independently reroutes power or deactivates grid sections, the consequences can be severe. Determining liability—whether it rests with the utility provider, AI developer, or systems integrator—becomes increasingly ambiguous, straining current legal and insurance frameworks.[v]
  4. Bias and Discriminatory Outcomes
    AI tools used for customer segmentation or adaptive pricing may unintentionally produce biased results, particularly in regions with limited data infrastructure. This raises the risk of regulatory action or legal challenges involving alleged discrimination.[vi]
  5. Regulatory Compliance and Lack of Transparency
    The highly regulated nature of the utility sector demands that AI models used in safety-critical contexts are explainable and subject to audit. Without explainability, utilities risk both compliance violations and denied claims for AI-related failures.
  6. De-skilling and Over-Reliance on Automation
    As operational staff increasingly depend on AI for diagnostics and decision-making, institutional knowledge and manual expertise may erode. In the event of AI failure, this could hinder timely human intervention and exacerbate the impact of the incident. Training programs that support core competencies alongside AI adoption will be critical.

 

 

As this technology rapidly advances, the ability to evaluate AI tools and agents for acceptability and adherence to corporate policies becomes increasingly important. The NIST AI Risk Management Framework (RMF) and Playbook serve as dynamic resources for organizations seeking to ensure the trustworthiness and responsible oversight of AI systems. Rather than a prescriptive checklist, the Playbook offers adaptable guidance centered around four key functions—Govern, Map, Measure, and Manage—that support continuous evaluation across the AI lifecycle. These functions help teams establish clear oversight, assess system interactions, monitor trust-related metrics, and proactively mitigate risks. As AI tools evolve, this framework enables organizations to maintain transparency, accountability, and ethical alignment in real time.[vii]
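As a rough illustration of how a utility might operationalize the four functions, an internal risk register could be organized as sketched below; the entries, owners, and statuses are invented examples, not NIST guidance:

```python
from dataclasses import dataclass

# The four NIST AI RMF functions used as register categories.
FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    function: str          # one of FUNCTIONS
    description: str
    owner: str
    status: str = "open"   # open / mitigated / accepted

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.function}")

# Hypothetical register entries for a small utility.
register = [
    RiskEntry("Govern", "No approved AI-use policy for field staff", "CIO"),
    RiskEntry("Map", "Load-forecast model consumes SCADA feeds", "OT lead"),
    RiskEntry("Measure", "No drift metric on outage-prediction model", "Data team"),
    RiskEntry("Manage", "No rollback plan if AI routing misfires", "Ops"),
]

open_by_function = {
    f: sum(1 for r in register if r.function == f and r.status == "open")
    for f in FUNCTIONS
}
print(open_by_function)
```

Even a lightweight structure like this forces each risk to have a named owner and a lifecycle status, which is the continuous-evaluation habit the Playbook encourages.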

In addition to tool evaluation, the RMF provides a foundational structure for developing internal AI-use corporate policies. By leveraging the Playbook, organizations can tailor their governance protocols to industry-specific needs and regulatory environments while fostering multidisciplinary collaboration. This ongoing policy refinement ensures that roles, responsibilities, and operational safeguards remain aligned with emerging challenges and standards. As a living document, the NIST Playbook empowers organizations to create resilient, agile policies that promote trustworthy and safe AI practices.

As AI technologies continue to evolve rapidly, The MEARIE Group remains committed to equipping stakeholders with prompt, relevant, and actionable insights. Through ongoing research, strategic partnerships, and industry engagement, we strive to ensure that our members are not only informed but also empowered to navigate emerging challenges and opportunities with confidence.


[i] AI in action. (2025, July/August). Electrical Business Magazine, p. 18. http://magazine.annexbusinessmedia.com/publication/?i=847097&p=18&view=issueViewer

[ii] Electricity Distributors Association. (2024, June 25). InnPower enhances operations, efficiency, and cost management with AI. https://www.eda-on.ca/Blog/ArtMID/20114/ArticleID/3684/InnPower-Enhances-Operations-Efficiency-and-Cost-Management-with-AI

[iii] Persaud, D., & Casciato, A. (2025, June 26). AI for LDC operations: InnPower’s experience, lessons, and risk mitigation [MEARIE Conference, June 2025 presentation]. InnPower Corporation.

[iv] Swiss Re Institute. (2024, May 23). AI and the industry risk landscape. https://www.swissre.com/institute/research/topics-and-risk-dialogues/digital-business-model-and-cyber-risk/ai-and-the-industry-risk-landscape.html

[v] Tayal, K., Lusardi, G., & Olivieri, A. (2024, May 6). Insuring the unpredictable: The challenges of AI risk insurability. DLA Piper. https://www.dlapiper.com/en/insights/publications/derisk-newsletter/2024/insuring-the-unpredictable-the-challenges-of-ai-risk-insurability

[vi] Yong, W. (2023, May 8). The risks and impact of AI in insurance. IBM. https://www.ibm.com/think/insights/ai-insurance-risks

[vii] National Institute of Standards and Technology. (2025, February 6). NIST AI RMF Playbook. https://airc.nist.gov/airmf-resources/playbook/


At The MEARIE Group, we remain committed to providing the most up-to-date insights on risk management and industry best practices. Should you have any questions or require further information, please do not hesitate to reach out.

 

 

For more information on this topic, contact us to learn more.

Article by:
MEARIE Insurance Team