
Regulators Seek Balance Amid Business Adoption of GenAI

The Rise of Generative AI: A Call for Balanced Oversight

As the adoption of generative artificial intelligence (AI) accelerates across various sectors, the urgency for regulatory frameworks has intensified. Companies are looking to integrate AI technology while navigating a shifting landscape rife with potential risks and regulatory scrutiny. A recent KPMG survey of 225 senior business leaders reveals that 83% aim to boost their investments in generative AI over the next three years. However, this optimism comes with an awareness of evolving regulations, as 63% of executives foresee stricter data privacy laws on the horizon.

Many companies are not just passive observers of regulatory changes; they are actively assessing and updating their data-handling practices, with 60% conducting reviews in light of anticipated regulations. Emily Frolick, a leader at KPMG, emphasizes the importance of risk management and governance, stating, “With the growing adoption of GenAI, prioritizing risk management and governance, with a focus on cybersecurity and data privacy, is crucial for innovation and retaining stakeholder trust.”

Cost Implications of AI Regulation

While the benefits of AI are widely acknowledged, over half (54%) of surveyed executives express concern that increased regulation could lead to higher operational costs. Implementing generative AI is already a complex challenge, with only 16% of businesses feeling well-equipped to deploy it. Despite these obstacles, many companies are pressing ahead: 52% of respondents report that generative AI is a key factor in strengthening their competitive positioning, while 47% see it as a catalyst for new revenue opportunities.

Nonetheless, the push for innovation is being carefully balanced against risk mitigation: 79% of companies are focusing on cybersecurity initiatives and 66% are addressing data quality concerns. In anticipation of forthcoming regulations, 60% of organizations have introduced rigorous data privacy measures, and many are adopting ethical AI frameworks to navigate the regulatory landscape.

Financial Sector’s Advocacy for Thoughtful AI Regulation

In the financial sector, organizations such as the American Bankers Association (ABA) have joined forces with 21 state banking associations to advocate for a balanced regulatory approach. In a letter to the U.S. Treasury Department, they call for federal preemption of state AI regulations and for updated guidance to govern these technologies.

Ryan T. Miller, ABA Vice President, argues that while AI presents substantial opportunities for the banking industry, it must be deployed within a framework that accounts for its inherent risks. The associations' recommendations include a new federal law that would preempt state requirements and updated model risk management guidance issued after a public comment period.

Leveraging Existing Regulatory Powers for AI Governance

As discussions surrounding AI regulation gain momentum, experts are advocating for the use of existing regulatory authorities rather than waiting for new legislation. In a recent blog post, researchers at Georgetown University's Center for Security and Emerging Technology (CSET) highlight that many federal agencies already possess statutory powers to oversee AI systems.

Jack Corrigan and Owen J. Daniels note that leveraging pre-existing frameworks can be a more expedient way to address the distinct needs of different sectors. They point to the Federal Aviation Administration as an agency that has used its existing authority to set safety standards for AI-enhanced aircraft systems.

However, adapting to the unique challenges posed by AI will require significant adjustments to regulatory processes. Updated software assurance procedures and comprehensive testing frameworks will be needed to keep pace with AI's rapid advancement. Some agencies, including the Department of Health and Human Services, have already begun evaluating their AI governance capabilities.
