OpenAI's new GPT-4o can create realistic images, but is it a deepfake disaster waiting to happen?

OpenAI recently launched its new GPT-4o image generation capabilities inside ChatGPT, so it's no surprise that the company is once again the talk of the town among AI enthusiasts.
However, while many were expecting OpenAI to announce release dates for GPT-4.5 and GPT-5 during the livestream, that wasn't the case. Instead, the event centered on ChatGPT's impressive new image features, which have since gone viral on social media.
They have gone viral for good reason: the new image generator produces highly realistic images from text prompts, and users report that it can render legible text within images and even transform existing photos.
While this makes for an impressive party trick, it has also raised concerns about misuse. In particular, the images carry no visible watermark, which makes convincing deepfakes easy to create and hard to spot.
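OpenAI has said its generated images embed C2PA provenance metadata even where no visible watermark appears, so one practical (if imperfect) check is to inspect a suspect image's embedded tags. Below is a minimal sketch, assuming the exiftool command-line utility is installed; the file name suspect.png and the inspect_provenance helper are hypothetical, used purely for illustration:

```python
import subprocess
import sys

def inspect_provenance(path: str) -> None:
    """Dump an image's embedded metadata via exiftool and flag
    provenance-related entries (C2PA manifests travel in JUMBF boxes)."""
    # exiftool with a bare file argument prints one "Tag Name : value"
    # line per embedded tag.
    result = subprocess.run(
        ["exiftool", path], capture_output=True, text=True, check=True
    )
    hits = [
        line
        for line in result.stdout.splitlines()
        if any(key in line.lower() for key in ("c2pa", "jumbf", "provenance"))
    ]
    if hits:
        print("Possible provenance metadata found:")
        print("\n".join(hits))
    else:
        print("No C2PA/JUMBF tags found. Metadata is easily stripped "
              "(e.g. by screenshotting), so absence proves nothing.")

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    inspect_provenance(sys.argv[1] if len(sys.argv) > 1 else "suspect.png")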
Reddit is already full of AI-generated images of celebrities and entirely fictional scenarios that could easily fool anyone unfamiliar with what the technology can do.
OpenAI CEO Sam Altman has acknowledged the excitement surrounding the new technology, but he has also faced backlash for not putting stronger safeguards in place to prevent misuse.
The model does refuse to generate certain types of content, but there are no robust measures in place to prevent copyright infringement, and critics worry about the threat the technology poses to jobs in creative fields.
Altman's nonchalant attitude, combined with the ease with which lifelike images can now be created, has sparked a debate about whether AI developers have a moral obligation to ensure their technologies are not misused.