“Like Photoshop on steroids”: OpenAI boss admits risks of AI revolution

At a hearing in the US Senate, the OpenAI boss admitted that AI development entails risks. Sam Altman therefore called for more oversight.

Sam Altman in the US Senate

Sam Altman, head of ChatGPT maker OpenAI, has acknowledged that AI generators pose serious risks and has called for strict regulation. At a hearing in the US Senate, he agreed with the other witnesses that language models such as the company’s own GPT-4, and tools built on them, will fuel even more disinformation, aid cybercriminals, and could even undermine trust in democratic elections. At the same time, he said he was convinced that the opportunities of the new technology far outweigh the risks. Language models like GPT-4 would “automate away” some professions, but would also create many new ones.

Secure US leadership

Altman testified before the Senate Judiciary Committee on Tuesday and, in his opening statement, spoke out in favor of regulating the new technology. Legal requirements, he said, are a fundamental prerequisite for ensuring safety while people enjoy the many benefits of AI. OpenAI wants to help policymakers with this and, he added, is committed to working with US lawmakers to maintain US leadership in key areas of AI development. The company also wants to ensure that as many Americans as possible benefit from artificial intelligence.

Faced with the fears about possible dangers of AI that senators voiced repeatedly, Altman was unwilling to let at least the bleakest scenarios stand. It was suggested several times that the current AI revolution could be comparable to the development of nuclear weapons. That went too far for Altman, who preferred to draw a parallel with Photoshop. When the powerful image-editing program first appeared, there were likewise concerns that people would fall for manipulated images in droves. Most, however, quickly learned to question images as a matter of course and to treat manipulation as a possibility. The current development is similar, he said, only “like on steroids”.

Licenses for the operation of AI

To keep technical development in check, Altman proposed the creation of a US government agency to test AI models such as GPT-4. The technology should have to pass a series of safety tests – for example, whether it can spread on its own. In addition, US policymakers should consider licensing providers of AI technology, so that they can control who may develop and distribute it in the first place. During the hearing it became clear that support for establishing such an authority exists on both sides of the aisle. Senators repeatedly recalled the failures in regulating social networks.

Altman is a co-founder and the CEO of OpenAI, the company behind the AI text generator ChatGPT and the image generator DALL-E. ChatGPT and its underlying language model GPT-4 in particular have been causing a stir far beyond the technology industry for weeks. On request, the technology generates text that sounds so coherent it appears to come from a human. At the same time, however, the technology cannot distinguish truth from falsehood, which is why its answers sometimes contain completely invented sources. Because such fabrications are hard to spot, fears are growing that the technology can be used to spread disinformation – unintentionally, but also deliberately.

