UK ‘pro-innovation’ draft AI guidelines suggest no real pressure on big tech, says GlobalData

The UK government’s latest draft guidelines for regulating artificial intelligence (AI) are too vague and simply seek to appease big tech, according to GlobalData. The leading data and analytics company notes that the proposal for regulation to be “proportionate and adaptable”, and for regulators to “consider lighter touch options”, suggests there will be no real pressure on big tech to change its AI systems in the short term. Light touch regulation gives tech companies unprecedented power and could lead to unfair outcomes for recipients of AI systems.

Sarah Coop, Thematic Analyst at GlobalData, comments: “The new UK AI guidelines are clearly aiming to encourage AI investment in the UK by simply appeasing big tech. While it is commendable that we want to welcome AI innovations, this approach is inappropriate when it comes to regulating a technology that is evolving so rapidly. As AI algorithms replace human decision-making, unregulated algorithms risk unfairly disadvantaging groups of people, or even causing safety risks. Light touch regulation will allow short-term innovation but cause long-term problems.”

GlobalData notes that the innovation-focused tech policy and digital strategy, the draft of which was presented to parliament on July 18, is likely aimed at boosting the digital economy following the COVID-19 pandemic and Brexit, as well as attracting talent to the UK.

Emma Taylor, Thematic Analyst at GlobalData, comments: “Much of this draft policy feels like a box-ticking exercise, rather than something that will adequately regulate big tech. The government has been too wary of stifling innovation in its aim to become a new world AI superpower.

“The government is also undoubtedly hoping that this digital strategy will counteract its lack of skilled workers by encouraging innovation and attracting talent. The new AI policy is part of the UK’s wider efforts to enhance its technological prowess in areas like quantum computing and semiconductors.

“Further, the regulations are purposely vague to ensure they have a long shelf life. This is indicated by the refusal to set out a universally applicable definition of AI, for fear of it not encompassing future technology. In practice, this type of policy and regulation should be reviewed continuously in conjunction with established AI organizations, and be precise and actionable, instead of ambiguous and performative.”

In contrast, the EU’s latest AI regulation draft took a privacy-centric approach to regulation, prohibiting AI systems where the risk to safety or risk of bias is too high—for example, banning black box algorithms, which can include so many algorithmic layers that humans are not able to interpret their decisions or predictions. Big tech has lobbied against the EU’s privacy-centric approach in the past, arguing that the proposed Digital Markets Act (DMA), which aims to regulate internet competition, would reduce innovation* and therefore consumer choice.

Coop continues: “While the UK may want to impose an investment-focused strategy that contrasts with the EU’s approach, this risks developing unreliable or unsafe AI systems, affecting individuals and causing problems with future global AI regulation compliance.”

* https://www.wired.com/story/digital-markets-act-messaging/

Media Enquiries

If you are a member of the press or media and require any further information, please get in touch, as we're very happy to help.


