The electrical distribution industry continues to adopt increasingly advanced artificial intelligence (AI) technologies. Several examples are already evident across the sector:
Hydro Ottawa has delivered operational enhancements through its implementation of artificial intelligence. According to a recent Electrical Business Magazine article, Hydro Ottawa employees have reported a reduction of up to three hours in customer service workload, along with a noticeable decrease in equipment failures and service outages.[i]
Another strong example of AI adoption among Ontario’s utilities is InnPower’s use of the Senpilot AI platform. Not only did InnPower present on this topic at the most recent MEARIE Conference in June, but their project was also discussed in a recent EDA Distributor article. Faced with increasing demands due to extreme weather and the province’s emphasis on energy reliability, the utility has been leveraging Senpilot to streamline workflows and reduce repetitive tasks.
The platform analyzes extensive operational data, consolidates critical information across departments into a single interface, and empowers staff to prioritize high-value initiatives. Since implementation, InnPower has achieved an estimated 18% increase in efficiency, with engineering teams spending up to 85% less time on data collection.[ii]
InnPower highlighted the rising threat of data poisoning, a critical risk in artificial intelligence where faulty or deliberately manipulated data corrupts the learning process of algorithms. Such contamination can lead to unreliable, biased, or incorrect outputs—potentially undermining operational effectiveness and stakeholder trust. This threat makes clear that the integrity of data is not just a technical concern, but a cornerstone of ethical and effective AI implementation.
Building on InnPower’s deployment, the presentation emphasized the broader principles utilities should follow to ensure ethical and secure AI integration. To implement AI responsibly and securely, Members should start slow and scale smart, adopting a phased approach that includes controlled pilot programs and clear success metrics. Organizations should avoid ingesting customer or private information by excluding personal identifiers (such as names, account numbers, and addresses) unless absolutely necessary, and by anonymizing data whenever possible.
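As an illustration of the anonymization step above, the following minimal Python sketch drops direct identifiers from a record and replaces the account number with a salted hash before any data reaches an external AI platform. The field names and salt handling here are hypothetical examples, not InnPower's actual schema or process:

```python
import hashlib

# Hypothetical identifier fields; actual schemas vary by utility.
PERSONAL_FIELDS = {"name", "account_number", "address", "phone", "email"}

def anonymize_record(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Drop direct identifiers; replace the account number with a salted
    hash so records can still be linked without exposing the number."""
    cleaned = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    if "account_number" in record:
        digest = hashlib.sha256(
            (salt + str(record["account_number"])).encode()
        ).hexdigest()
        cleaned["account_token"] = digest[:16]  # pseudonymous join key
    return cleaned

sample = {"name": "Jane Doe", "account_number": "A-1042",
          "address": "1 Main St", "feeder_id": "F7", "outage_minutes": 42}
print(anonymize_record(sample))
```

In practice the salt would be stored securely and rotated under policy; a fixed salt is used here only to keep the example self-contained.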
Minimizing direct platform integration is also crucial; limiting real-time API connections between internal systems and external AI platforms helps reduce exposure. Secure, segregated datasets should be used to train and evaluate AI models, especially when dealing with sensitive utility operations, to ensure data isolation.
Additionally, conducting privacy impact assessments for all AI use cases will help identify potential privacy risks and ensure regulatory compliance. Access to AI outputs must be restricted to authorized personnel only, and comprehensive audit trails should be maintained to log data usage and document decision support history.[iii]
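To make the access-restriction and audit-trail guidance concrete, here is a minimal sketch of logging each AI interaction to an append-only file and denying unauthorized roles. The roles, file location, and fields are illustrative assumptions, not a prescribed design:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")       # illustrative location
AUTHORIZED = {"ops_engineer", "grid_planner"}  # hypothetical roles

def record_ai_access(user: str, role: str, query: str,
                     output_summary: str) -> bool:
    """Append one audit entry per AI interaction; unauthorized roles are
    denied, but the attempt itself is still logged for review."""
    allowed = role in AUTHORIZED
    entry = {
        "ts": time.time(),
        "user": user,
        "role": role,
        "query": query,
        "output_summary": output_summary if allowed else None,
        "allowed": allowed,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return allowed
```

A real deployment would route these entries to tamper-evident, centrally managed logging rather than a local file, but the principle (every access attempt leaves a record) is the same.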
As AI systems continue to shape industry decisions and workflows, ensuring the accuracy, provenance, and security of data inputs is essential. Proactive monitoring, validation protocols, and robust data governance practices are vital to detect and mitigate the risk of data poisoning. Upholding these standards reinforces the reliability of AI tools and supports their safe, accountable integration into our operations.
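One simple form the validation protocols mentioned above can take is a statistical screen on incoming training data: values far outside a trusted baseline are flagged for human review before they ever reach a model. This is a coarse first line of defence against data poisoning, sketched here with an assumed numeric sensor feed:

```python
import statistics

def screen_batch(baseline: list[float], batch: list[float],
                 z_max: float = 4.0) -> list[float]:
    """Flag values whose z-score against a trusted baseline exceeds z_max,
    a coarse screen for poisoned or corrupted training inputs."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [x for x in batch if abs(x - mu) / sigma > z_max]

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
incoming = [10.0, 9.9, 55.0, 10.2]  # 55.0 is an injected anomaly
print(screen_batch(baseline, incoming))  # → [55.0]
```

Outlier screening alone will not catch subtle, targeted poisoning; it complements, rather than replaces, provenance tracking and governance controls.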
The adoption of AI systems introduces an expanding range of risks that go beyond traditional concerns.
As this technology rapidly advances, the ability to evaluate AI tools and agents, particularly for acceptability and adherence to corporate policies, becomes increasingly important. The NIST AI Risk Management Framework (RMF) and Playbook serve as dynamic resources for organizations seeking to ensure the trustworthiness and responsible oversight of AI systems. Rather than a prescriptive checklist, the Playbook offers adaptable guidance centered on four key functions (Govern, Map, Measure, and Manage) that support continuous evaluation across the AI lifecycle. These functions help teams establish clear oversight, assess system interactions, monitor trust-related metrics, and proactively mitigate risks. As AI tools evolve, this framework enables organizations to maintain transparency, accountability, and ethical alignment in real time.[vii]
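One lightweight way to operationalize the four RMF functions when screening a new AI tool is a structured checklist that tracks which questions remain open. The questions below are paraphrased illustrations, not the Playbook's official items:

```python
# Illustrative checklist keyed to the four NIST AI RMF functions.
# Questions are example paraphrases, not official Playbook content.
RMF_CHECKLIST = {
    "Govern": ["Is accountability for this tool assigned to a named role?",
               "Does its use comply with corporate AI policy?"],
    "Map": ["Are the tool's intended context and users documented?",
            "Which internal systems and data does it touch?"],
    "Measure": ["Are trust metrics (accuracy, bias, drift) tracked over time?",
                "Are outputs validated against ground truth?"],
    "Manage": ["Is there a rollback or kill-switch plan for failures?",
               "Are identified risks prioritized and revisited regularly?"],
}

def outstanding_items(answers: dict[str, dict[str, bool]]) -> dict[str, list[str]]:
    """Return checklist questions answered 'no' or not yet answered,
    grouped by RMF function, so gaps are visible before approval."""
    gaps = {}
    for fn, questions in RMF_CHECKLIST.items():
        missing = [q for q in questions
                   if not answers.get(fn, {}).get(q, False)]
        if missing:
            gaps[fn] = missing
    return gaps
```

A tool would only clear evaluation once `outstanding_items` returns an empty result; until then, the remaining questions identify exactly where governance work is needed.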
In addition to tool evaluation, the RMF provides a foundational structure for developing internal AI-use corporate policies. By leveraging the Playbook, organizations can tailor their governance protocols to industry-specific needs and regulatory environments while fostering multidisciplinary collaboration. This ongoing policy refinement ensures that roles, responsibilities, and operational safeguards remain aligned with emerging challenges and standards. As a living document, the NIST Playbook empowers organizations to create resilient, agile policies that promote trustworthy and safe AI practices.
As AI technologies continue to evolve rapidly, The MEARIE Group remains committed to equipping stakeholders with prompt, relevant, and actionable insights. Through ongoing research, strategic partnerships, and industry engagement, we strive to ensure that our members are not only informed but also empowered to navigate emerging challenges and opportunities with confidence.
[i] AI in action. (2025, July/August). Electrical Business Magazine, p. 18. http://magazine.annexbusinessmedia.com/publication/?i=847097&p=18&view=issueViewer
[ii] Electricity Distributors Association. (2024, June 25). InnPower enhances operations, efficiency, and cost management with AI. https://www.eda-on.ca/Blog/ArtMID/20114/ArticleID/3684/InnPower-Enhances-Operations-Efficiency-and-Cost-Management-with-AI
[iii] Persaud, D., & Casciato, A. (2025, June 26). AI for LDC operations: InnPower’s experience, lessons, and risk mitigation [MEARIE Conference, June 2025 presentation]. InnPower Corporation.
[iv] Swiss Re Institute. (2024, May 23). AI and the industry risk landscape. https://www.swissre.com/institute/research/topics-and-risk-dialogues/digital-business-model-and-cyber-risk/ai-and-the-industry-risk-landscape.html
[v] Tayal, K., Lusardi, G., & Olivieri, A. (2024, May 6). Insuring the unpredictable: The challenges of AI risk insurability. DLA Piper. https://www.dlapiper.com/en/insights/publications/derisk-newsletter/2024/insuring-the-unpredictable-the-challenges-of-ai-risk-insurability
[vi] Yong, W. (2023, May 8). The risks and impact of AI in insurance. IBM. https://www.ibm.com/think/insights/ai-insurance-risks
[vii] National Institute of Standards and Technology. (2025, February 6). NIST AI RMF Playbook. https://airc.nist.gov/airmf-resources/playbook/
At The MEARIE Group, we remain committed to providing the most up-to-date insights on risk management and industry best practices. Should you have any questions or require further information, please do not hesitate to reach out.