People should not assume a positive outcome from the artificial intelligence boom, the UK’s competition watchdog has warned, citing risks including a proliferation of false information, fraud and fake reviews as well as high prices for using the technology.
The Competition and Markets Authority said people and businesses could benefit from a new generation of AI systems but dominance by entrenched players and flouting of consumer protection law posed a number of potential threats.
The CMA made the warning in an initial review of foundation models, the technology that underpins AI tools such as the ChatGPT chatbot and image generators such as Stable Diffusion.
The emergence of ChatGPT in particular has triggered a debate over the impact of generative AI – a catch-all term for tools that produce convincing text, image and voice outputs from typed human prompts – on the economy by eliminating white-collar jobs in areas such as law, IT and the media, as well as the potential for mass-producing disinformation targeting voters and consumers.
The CMA chief executive, Sarah Cardell, said the speed at which AI was becoming a part of everyday life for people and businesses was “dramatic”, with the potential for making millions of everyday tasks easier as well as boosting productivity – a measure of economic efficiency, or the amount of output generated by a worker for each hour worked.
However, Cardell warned that people should not assume a beneficial outcome. “We can’t take a positive future for granted,” she said in a statement. “There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”
The CMA defines foundation models as “large, general machine-learning models that are trained on vast amounts of data and can be adapted to a wide range of tasks and operations” including powering chatbots, image generators and Microsoft’s 365 office software products.
The watchdog estimates about 160 foundation models have been released by a range of firms including Google, the Facebook owner Meta, and Microsoft, as well as new AI firms such as the ChatGPT developer OpenAI and the UK-based Stability AI, which funded the Stable Diffusion image generator.
Sridhar Iyengar, managing director of Zoho Europe, commented: “The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.”
“Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools. AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”
The CMA added that many firms already had a presence in two or more key aspects of the AI model ecosystem, with big AI developers such as Google, Microsoft and Amazon owning vital infrastructure for producing and distributing foundation models such as datacentres, servers and data repositories, as well as a presence in markets such as online shopping, search and software.
The regulator also said it would monitor closely the impact of investments by big tech firms in AI developers, such as Microsoft in OpenAI and the Google parent Alphabet in Anthropic, with both deals including the provision of cloud computing services – an important resource for the sector.
It is “essential” that the AI market does not fall into the hands of a small number of companies, the CMA said, warning that a potential short-term consequence is consumers being exposed to significant levels of false information, AI-enabled fraud and fake reviews.
In the longer term, such concentration could enable firms that develop foundation models to gain or entrench positions of market power, and could also result in companies charging high prices to use the technology.