Earlier this week, I was in San Diego as a speaker and guest at the National Association of Insurance Commissioners (NAIC) National Meeting. I had the opportunity to share my perspectives with the Big Data and Artificial Intelligence working group, and to participate in meetings with key stakeholders reviewing the next steps toward regulatory oversight of AI.
2021 has seen a significant acceleration in regulatory interest and posture regarding the use of AI, both in insurance and more broadly. From the New York City Council passing legislation to limit AI bias in hiring, to guidance from the Federal Trade Commission on building and deploying responsible AI and machine learning models, governing bodies in the United States have shown a direct interest in regulating AI. For insurance companies with European exposure, a recently released update to the proposed European AI law now specifically places the use of AI in insurance in the “high risk” category.
In August 2020, the NAIC brought its principles on AI to the fore. Over the past year, its goal has been to gather more data on the actual state of the insurance industry’s use of AI, with a priority on understanding how regulation would affect the industry’s adoption of AI technologies. During the Big Data and Artificial Intelligence working group session, a first public look was offered at the results of a survey of property and casualty carriers and their use of AI. The results show wide application of AI across this group of insurers’ core functions. The working group appears likely to expand the survey to homeowners and life insurance lines of business in the coming months.
The challenge of regulating AI is not trivial. Regulators need to strike a balance between protecting consumers and supporting innovation. Several themes are evident regarding regulatory perspectives on the use of AI in insurance:
- An appreciation that AI is a complex system resulting from actions, decisions, and data driven by a team of stakeholders throughout a system’s lifecycle.
- Understanding that regulations will need to include evidence of comprehensive lifecycle governance and objective reviews of key risk management practices.
- Agreement among regulators that, with state regulatory staff, they are largely not equipped to perform in-depth technical reviews or forensic analyses of AI systems. To succeed in regulatory oversight, they will need more training, partnerships with more specialized organizations, and a degree of demonstrated accountability from carriers going forward.
- A possibility that the substantive rules that shape and define regulation will have to be forged at the federal level, not just by state-level insurance departments.
Looking back on my conversations in San Diego, and throughout the year, I have one more point to consider: we could all benefit from being more direct. Where does AI-specific regulation start and end? How should insurance fundamentally change to better serve often underserved segments of our population?
My career has not been in insurance, but I quickly realized that many of the conversations about fairness and bias in AI governance venues are by no means exclusive to AI. They are larger questions about balancing appropriate risk assessment factors against the correlation those factors may have with the fair treatment of certain classes of our population. I agree 100% that we have economic disparities and inequities, and I want to see more inclusive markets; however, I would hate to see important and much-needed governance practices that advance key principles like transparency, safety, and accountability wait on agreement about what are, in my opinion, the much larger and more difficult questions of fairness.
In San Diego, I consistently heard from regulators and industry stakeholders that insurance is experiencing a technological renaissance. There seems to be consensus that the way regulation works today is not how it will need to work in the future. In some ways, the NAIC’s elevation of its focus on AI by creating a new, higher-level letter committee (H), only the eighth such committee in the NAIC’s 150-year history, is a tremendous acknowledgment of this reality.
The coming year will provide a deeper view of insurance regulators’ approach to the use of AI. We’ll see Colorado further define practices and plans for SB21-169: Restrict Insurers’ Use of External Consumer Data. We’ll likely see some federal policy or legislative development, perhaps something like HR 5596: the Justice Against Malicious Algorithms Act of 2021.
What should carriers be doing right now with all these moving parts? At a bare minimum, insurance companies should internally organize the key stakeholders in AI strategy and development to collaboratively assess how they define and build AI projects and models. If carriers haven’t yet put in place comprehensive lifecycle governance or risk management practices specific to their AI/machine learning systems, they should start that journey with urgency.
Anthony Habayeb is the founding CEO of Monitaur, an AI governance and ML assurance company.
This article was published on VentureBeat.