Failure to introduce new legislation in the King’s Speech on November 7th would delay regulations becoming law until 2025, which would see the UK “being left behind” the EU and other counterparts, according to the Commons Technology Committee.
The government has previously introduced several measures to cement the UK’s position as an AI leader, including a £54 million investment in the secure and trustworthy development of AI and an initial £100 million investment in a taskforce for the safe development of AI.
This comes ahead of the international AI summit due to be held at Bletchley Park in early November.
Sridhar Iyengar, Managing Director for Zoho Europe, commented: “Taking a global lead in the AI race is a vital part of the UK’s aim to become a Tech Superpower and November’s AI summit will play a key part in this.”
“There has been some concern expressed by consumers and businesses around trust and safety when it comes to AI. Regulation could be important to develop widespread trust and promote further adoption of AI tools to drive business success. However, this cannot come at the expense of innovation. Taking the lead on R&D in the AI space can help to cement the UK as a global tech hub.”
“AI can add significant value for businesses. For example, it can help increase efficiency and accuracy in forecasting projections, fraud detection and sentiment analysis. However, collaboration between business, government and industry experts is necessary to ensure its success. This can help to strike the right balance when introducing safe regulations and guidance for the development of truly innovative AI solutions that can play a central role in business growth.”
Sheila Flavell CBE, COO of FDM Group, commented: “The UK’s approach to the Global AI Safety Summit and legislation will not only strengthen the UK’s position as a Tech Superpower, but also has the potential to affect the day-to-day lives of working people.”
“The tech has the capability to improve employee and customer experiences across all levels of business, but it needs regulating. It is key that we work on harnessing AI for good and recognise its potential to mitigate issues such as bias in hiring, for example. While we await the fate of possible nationwide regulation, companies should work on creating internal policies and offer opportunities to educate their staff on internal and external AI usage. A one-size-fits-all approach may not suit every business need, so adapting where possible to enhance operations will set businesses up for success in the long run.”
The news follows a warning from the National Cyber Security Centre (NCSC) that chatbots powered by large language models could pose a cybersecurity risk due to a lack of “failsafe measures.”