The UK government’s global AI Safety Summit, which started today (1 November), will gather a group of the world’s most influential political and technology leaders to discuss the risks of artificial intelligence (AI). In particular, the closed-door summit will explore ways to mitigate and pre-empt both the perceived and real dangers of AI through internationally coordinated action. Beyond the wider national and international security risks, businesses will be looking for clarity on how to proceed safely with developing enterprise AI solutions.
Research firm GlobalData predicts that the AI market will grow from $81bn in 2022 to $909bn by 2030, a compound annual growth rate (CAGR) of 35%. However, the financial potential of AI comes with risks that need to be addressed, and businesses keenly await regulatory clarity around the rapidly developing technology.
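That headline growth rate can be sanity-checked with simple arithmetic: the CAGR follows directly from the start and end figures. A minimal sketch in Python, using only the numbers GlobalData has published:

```python
# Sanity check: derive the compound annual growth rate (CAGR)
# from GlobalData's published start and end figures.
start_value = 81e9   # AI market size in 2022 ($81bn)
end_value = 909e9    # forecast market size in 2030 ($909bn)
years = 2030 - 2022  # eight-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~35.3%, consistent with the reported 35%
```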
1. Litigation risk
Enterprises are concerned that their proprietary data will unintentionally end up in the public realm or be inadvertently used to train large language models (LLMs) and therefore become accessible to others, according to GlobalData’s Artificial Intelligence executive briefing from June 2023. While the security of enterprise data and customer information is an issue, companies building their own LLMs may also unwittingly open themselves up to litigation risk, as regulation around the use of training data has yet to be established in most jurisdictions.
However, the wholly unclear regulatory framework for civil, and even criminal, liability goes beyond litigation over training data, according to GlobalData thematic research director Josep Bori. “If an autonomous vehicle runs over a pedestrian, a military robot kills someone by accident, or a police drone crashes in someone’s garden, who is liable? The AI company developing the AV, the drone, the robot? Or the company operating the drone, or the passenger sitting inside the AV, etc?” asks Bori.
Indeed, data ownership is a key area in which regulatory clarity is needed, agrees Dave Colwell, VP of AI and machine learning at software company Tricentis. “Companies need clarity around acceptable use, data ownership and accountability as a priority. Since it’s not possible to interrogate AI or blame it for any of its decisions, accountability laws will be a big part of future governance,” adds Colwell.
Acceptable AI use policies will need to become commonplace, and AI will need to be centrally managed to ensure compliance, specifically with data privacy and regulatory guidelines, in Colwell’s view.
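What centrally managed acceptable use might look like in practice will differ by organisation, but one minimal sketch is a single policy gate that every AI request must pass through. The model names and data classifications below are purely illustrative assumptions, not a reference implementation:

```python
# Minimal sketch of a central acceptable-use gate for AI requests.
# Model names and data classifications are illustrative assumptions.
APPROVED_MODELS = {"internal-llm-v1"}  # models cleared by governance review
BLOCKED_CLASSIFICATIONS = {"customer-pii", "trade-secret"}  # never sent to an LLM

def is_request_allowed(model: str, data_classification: str) -> bool:
    """Allow only approved models handling permitted data classes."""
    if model not in APPROVED_MODELS:
        return False
    return data_classification not in BLOCKED_CLASSIFICATIONS

print(is_request_allowed("internal-llm-v1", "public"))        # True
print(is_request_allowed("internal-llm-v1", "customer-pii"))  # False
```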
“With a high likelihood of ‘right to be forgotten’ and copyright laws being extended into the realm of AI, it will be vital for companies to receive guidance on data hygiene and segregation,” says Colwell, adding: “Considering where data used to train an AI has come from, and implementing thorough data tracking and sourcing policies, will be a key element of successful AI usage.”
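Colwell’s point about data tracking and sourcing can be made concrete with a provenance record kept alongside every training dataset, so that data tied to a deletion or copyright request can be located later. The fields below are assumptions about what a minimal record might capture; real schemes would be far richer:

```python
# Minimal sketch of a training-data provenance record (fields are illustrative).
from dataclasses import dataclass

@dataclass
class DatasetProvenance:
    source: str         # where the data came from
    licence: str        # licence or terms under which it was obtained
    consent_basis: str  # legal basis for use (e.g. "contract", "public-domain")
    collected_on: str   # ISO date of collection, to support deletion requests

record = DatasetProvenance(
    source="support-ticket archive",
    licence="internal",
    consent_basis="contract",
    collected_on="2023-06-01",
)
print(record)
```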
2. Data privacy risk
Businesses need to be mindful of the risks of AI violating data privacy, warns GlobalData research analyst Laura Petrone. In the case of generative AI, LLMs are trained on large swathes of internet data that may include personal data collected without the user’s consent. “Without clear AI regulation and enforcement, data privacy laws might apply and are being applied in some jurisdictions,” says Petrone, citing California, where stringent data protection laws apply and where ChatGPT maker OpenAI is currently facing a lawsuit over training data scraped from the public internet without permission.
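One common mitigation, assuming a business controls the text it sends to a model, is to redact obvious personal identifiers before a prompt ever leaves the organisation. The regex pass below is a deliberately simple sketch; it catches only the most obvious patterns and is no substitute for a proper data protection review:

```python
# Minimal sketch: redact obvious personal identifiers before text reaches an LLM.
# Regex-based redaction is illustrative only; it misses many kinds of personal data.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# Contact Jane at [EMAIL] or [PHONE].
```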
Companies should also be aware that facial recognition is highly controversial and that its use must comply with relevant data protection laws such as the General Data Protection Regulation (GDPR). The risks of such systems go beyond potential data privacy violations: they can also produce biases and inaccuracies that could profoundly damage a company’s reputation, according to Petrone.
In fact, the EU’s proposed AI Act considers these systems high-risk and, therefore, subject to greater restrictions than low-risk AI systems. For example, it prohibits the use of real-time facial recognition systems by law enforcement except in a limited number of cases.
3. Cross-border regulation
Regulation is an overarching theme at the summit, particularly for international businesses awaiting direction on the topic. Businesses exploring generative AI face an uncertain and fragmented global regulatory landscape, with governments scrambling to keep pace with AI development. The European Parliament adopted its negotiating position on the EU’s AI Act in June, and the text is now being negotiated with member states, with the aim of reaching agreement by the end of 2023.
Similarly, the UN Secretary-General has launched an AI Advisory Body on the risks, opportunities, and international governance of artificial intelligence, which aims to deliver its recommendations by the summer of 2024.
US President Joe Biden signed an executive order on the safe, secure, and trustworthy development and use of AI on 30 October, but both the US and China, which indisputably lead the deployment of AI in business, have yet to establish a clear legislative framework for the technology.
For now, international businesses should remain very agile in how they structure their offerings, possibly differentiating them by jurisdiction, advises Bori.
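Bori’s advice to differentiate offerings by jurisdiction can be expressed, in its simplest form, as a configuration layer that switches AI features on or off per market, with a safe default for markets where no decision has been taken. The jurisdictions and flags below are illustrative assumptions only:

```python
# Minimal sketch: gate AI features per jurisdiction (all flags are illustrative).
FEATURE_FLAGS = {
    "EU": {"generative_ai": True, "facial_recognition": False},  # AI Act pressure
    "UK": {"generative_ai": True, "facial_recognition": True},
    "US": {"generative_ai": True, "facial_recognition": True},
}

def feature_enabled(jurisdiction: str, feature: str) -> bool:
    """Default to disabled in any market without an explicit decision."""
    return FEATURE_FLAGS.get(jurisdiction, {}).get(feature, False)

print(feature_enabled("EU", "facial_recognition"))  # False
print(feature_enabled("BR", "generative_ai"))       # False: no decision recorded
```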
4. AI cyber risk
Ambroży Rybicki, co-founder and CEO of IT consultancy ARP Ideas, considers AI-enabled cyberattacks to be among the primary AI safety issues for businesses. Rybicki points out that AI can automate and scale cyberattacks, making it possible for cybercriminals to carry out attacks on an unprecedented scale. “AI can learn and adapt over time, making it capable of bypassing traditional security measures that rely on known patterns of malicious activity,” he adds.
But AI can be part of the cybersecurity solution as well as a source of risk. Matt Wiseman, senior product manager at critical IT infrastructure provider Opswat, says that AI integration can enhance vulnerability and threat detection, strengthening cybersecurity defences and complementing the widespread use of zero-trust technology. “Leveraging AI in cybersecurity programs can also address the skills gap, allowing teams to focus on higher-priority tasks,” adds Wiseman.
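As an illustration of the kind of AI-assisted detection Wiseman describes, the toy sketch below flags unusual login events with an unsupervised anomaly detector (scikit-learn’s IsolationForest). The features and data are invented for illustration; this is not a production control:

```python
# Toy sketch of AI-assisted threat detection: flag anomalous login events.
# Features and data are invented; IsolationForest is one of many possible models.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), failed attempts before success]
normal_logins = np.array([[9, 0], [10, 1], [11, 0], [14, 0], [16, 1], [9, 0]])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

suspicious = np.array([[3, 12]])  # 3am login after 12 failed attempts
print(model.predict(suspicious))  # -1 marks an event as anomalous
```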
5. ESG risk
The environmental cost of training AI models has yet to be addressed. Bori describes the energy consumed to develop, manufacture and run new technologies such as AI chips and LLMs as being at loggerheads with climate change policies.
“The environmental costs of generative AI are only just beginning to capture more attention, something that wasn’t the case maybe a year ago. Clearly, the energy needed to run the latest processing hardware (GPUs and accelerators), servers, data centres, and cloud services necessary for AI programs, can lead to a substantial carbon footprint,” says Bori.
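The scale Bori describes can be bounded with back-of-the-envelope arithmetic: energy is roughly GPU count × power draw × hours × data-centre overhead (PUE), and emissions are energy × grid carbon intensity. Every figure in the sketch below is an illustrative assumption, not a measurement of any real training run:

```python
# Back-of-the-envelope carbon estimate for a GPU training run.
# Every figure here is an illustrative assumption, not a measured value.
gpus = 512             # accelerators in the cluster
power_kw = 0.7         # average draw per GPU, in kW
hours = 30 * 24        # a 30-day training run
pue = 1.2              # data-centre overhead (power usage effectiveness)
grid_kg_per_kwh = 0.4  # grid carbon intensity, kg CO2 per kWh

energy_kwh = gpus * power_kw * hours * pue
emissions_t = energy_kwh * grid_kg_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh, ~{emissions_t:,.0f} t CO2")  # ~310,000 kWh, ~124 t
```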
The potential for algorithmic bias is also an ESG issue for businesses to consider. GlobalData recommends that companies looking to scale their use of AI to include generative AI should implement multi-disciplinary AI and Ethics teams that evaluate new AI use cases and ensure they adhere to corporate ethical standards.
“Organisations evaluating generative AI should look for solutions that include model explainability so that users understand the sources of information used to create output and can ascertain their credibility,” according to GlobalData’s Artificial Intelligence executive briefing of June 2023.
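A lightweight way to give users that visibility, assuming a retrieval-based system where the source passages are known, is to return the sources alongside each generated answer so their credibility can be checked. The structure below is an illustrative sketch, not any specific vendor’s API:

```python
# Minimal sketch: return an answer together with the sources it drew on,
# so users can judge credibility. Structure is illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)  # passages behind the answer

answer = ExplainedAnswer(
    text="Policy X applies to all EU customers.",
    sources=["compliance-handbook.pdf, p. 12", "legal-memo-2023-04.docx"],
)
print(answer.text)
for src in answer.sources:
    print("  source:", src)
```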
Businesses will need to ensure that they adhere to ethical and legal requirements around AI at a local, national, and international level, which is why the UK AI Safety Summit needs to bring greater clarity to the regulatory commitments expected of businesses.