Can AI be safely developed without stifling innovation?
In a briefing to journalists last week, a representative of the UK's technology department repeatedly observed that "passions run high in AI". All too accurate, especially when it comes to the contentious issue of safety.
At the AI Safety Summit in Seoul this week some of the world’s leading artificial intelligence businesses put their signatures to an agreement on developing the technology safely. OpenAI, Anthropic, Google, Meta, one Chinese and one UAE organisation were among the 16 that committed themselves to a series of promises, including not to make or use AI systems that cross a certain risk threshold.
The pact came out of the second gathering to discuss AI safety, following the Bletchley Park summit last November, which was led by the UK government with Elon Musk providing the stardust.
Not everyone believed the warm and reassuring noises made by the AI businesses in South Korea. “AI Safety Summit full of world’s leading hypocrites,” read one email circulated by a cybersecurity firm.
The event certainly raised questions. How do you go about policing these voluntary commitments? How do you get more companies from, say, China — which is rapidly developing the technology — to sign up? How effective can these agreements really be, unless the entire world subscribes to them?
The work being done by the UK's public sector to keep on top of the rapidly developing tech is impressive. Along with the summit, Britain has established an AI Safety Institute (AISI, pronounced in Friday's briefing as "ay see"), headquartered in London with more than 30 staff. Other countries have since set up their own institutes and pledged to work together.
AISI is recruiting steadily and, in its six-month life to date, has enlisted a stellar cast of AI experts, who have analysed models and released an open-source testing platform for others to use.
While its first report reassuringly does not suggest that robots will take over the world any time soon, there were some troubling findings. Every model it has examined could be manipulated into producing toxic content with reasonably simple prompts, a technique known as "jailbreaking".
Doing this kind of important study on commercially sensitive data from private businesses inevitably creates friction. It relies on transparency and access — not easy when, for some in the tech sector, safety is a dirty word and the institute will always be seen as a thorn in the side of innovation. When AISI announced this week it was expanding to San Francisco, someone messaged me: “Oh great, doom and gloom all over the world.”
Tensions are bubbling beneath the surface. The frontier labs privately grumble about the way they are being approached by AISI and report there is “miscommunication” about what they are expected to do. AISI officials, meanwhile, claim they are being given the information they need by the AI labs, but privately acknowledge that building relationships and trust with the companies takes time.
Trust is even more difficult to build when the fiery and highly nuanced debate over AI controls is erupting within the labs themselves. The OpenAI safety executive Jan Leike quit the ChatGPT maker this week because he said that “safety culture and processes have taken a backseat to shiny products”, something OpenAI denies.
The EU and the UK are hurling regulations at American technology companies to control online content, competition and, in the EU's case, AI. The bloc's latest rules on AI were rubber-stamped just this week.
At the moment the US and the UK are shying away from the r-word over AI. British mandarins insist that the kind of voluntary commitments made in Seoul are proving effective and that companies have so far published their safety commitments as promised. But for how long?
There are rumblings. Yoshua Bengio, one of the so-called godfathers of AI who chaired the first International Scientific Report on the Safety of Advanced AI, has suggested that pledges will have to be backed up by regulation. Christina Montgomery, chief privacy and trust officer at IBM, made the same point: "IBM believes that effective regulation coupled with corporate accountability will allow businesses and society at large to reap the benefits of AI."
AISI is not an official watchdog, but it feels as though a point will come when it is given teeth and becomes one.
Katie Prescott is Technology Business Editor of The Times