OpenAI cuts AI safety-testing time from months to days, raising red flags

OpenAI has reportedly slashed the time it spends safety-testing its artificial intelligence models from months to mere days.

The Financial Times reports that the company behind ChatGPT is now racing to get its models out into the wild, amid fears that the technology could be weaponised.

AI safety advocates are particularly concerned about catastrophic outcomes, such as the development of bioweapons. One former OpenAI researcher warned of an arms-race mentality that could lead to a perilous rush towards powerful AI.

Most researchers believe that AI will ultimately benefit humanity, but they also acknowledge the possibility of disastrous consequences.

As the debate over how to regulate AI intensifies, there are concerns that governments do not fully understand the technology; a US cabinet member, for example, recently referred to it as “A1” in a public address.