Scarlett Johansson worried about AI stealing her voice

Scarlett Johansson has engaged lawyers after discovering that a voice strikingly similar to hers was used in the latest update of ChatGPT. The incident highlights how easily anyone can lose control over their own voice, and it's not just Hollywood actors who need to be concerned.

In 2013, the film "Her" portrayed a future in which people could have romantic relationships with artificial intelligence. More than a decade on, its storyline feels surprisingly relevant.

OpenAI recently introduced a new version of ChatGPT that lets users interact with it through speech. A video demonstration of the technology led many to mistake one of its voices for Scarlett Johansson's; she famously voiced the AI assistant in "Her."

However, Johansson expressed her dissatisfaction, stating that she had declined OpenAI's offer to voice the system for personal reasons. She found the similarity between the Sky voice and her own so striking that even her family members were confused, and she contacted OpenAI through her lawyers to ask how the voice had been made.

The episode is a reminder that anyone, not just celebrities, can lose control over their voice, and it raises pressing questions about the ethics and legality of using someone's voice without consent.

OpenAI Denies Imitating Johansson's Voice

OpenAI has denied imitating Johansson, saying the Sky voice was trained on recordings of a different professional actor. Amid the backlash, however, the company has withdrawn Sky from the app's selection of voices.

The incident has drawn attention to the dangers of AI that can convincingly replicate a person's voice. AI ethics expert Nell Watson warns that the problem will only become more prevalent and demands attention now.

According to Watson, an individual's voice can now be fabricated from just a few seconds of audio, such as a voicemail message. She stresses that legislation has not kept pace with the technology, particularly in the UK, which trails countries such as France and Canada.

There is currently no specific law in the UK granting individuals rights over their own voice. A quick holiday selfie enjoys copyright protection that can stop others from using it; the same cannot be said for the sound of your voice. To prevent someone from using your voice without consent, you must fall back on secondary laws such as harassment or GDPR.

A bill currently before Parliament does address deepfakes, but it covers only pornographic fakes of real people, not the creation of synthetic media in general.

In the past, there was little need to protect personal characteristics because there were few ways to exploit them: manipulating recordings to make it seem that someone had said something they never did was simply not feasible. Advances in technology have changed that. Actors, in particular, fear losing work if companies can pay them once and then use those initial recordings to make them say anything, at no further cost.

OpenAI Responds to Concerns Over Voice Selection in ChatGPT

OpenAI, the company behind ChatGPT, has responded to questions about how voices were chosen for its AI system. In light of the concerns raised, it has paused the use of the Sky voice while it addresses the issue, and has published further details of its voice selection process.

Scammers are already using "voice phishing" to deceive unsuspecting victims. In one recent incident in Hong Kong, a finance worker was tricked into transferring £20 million of his company's funds to fraudsters, who used deepfake technology to stage a video call in which his boss and colleagues appeared to be present. Manipulated audio has alarming potential: scammers can make loved ones sound distressed to pressure their targets, and in one case used faked audio to convince a mother that her daughter had been kidnapped.

Watson, author of the book "Taming the Machine," argues that the UK has an opportunity to learn from other countries in how it handles publicity rights. Given how easy deepfakes are to create, she says, regulators need investigatory powers that civil law typically does not provide, because tracking down the perpetrators of such crimes is difficult.

There are some legitimate uses for realistic deepfakes, such as making games more lifelike, but the lack of regulation poses a serious risk: people could lose control over their own identities. Watson points out that the technology needed to create convincing deepfakes can now be bought or rented for as little as £20.

Dominic Lees, a deepfake expert at the University of Reading, commented on the high-profile dispute between Scarlett Johansson and OpenAI, urging caution among AI developers. Misuse of deepfake technology causes real harm, he said, and underlines the urgent need for new regulations to protect people from unauthorized digital replication. Ethical AI development should prioritize consent, transparency, and respect for personal rights to prevent exploitation and maintain public trust.