This past February, deepfake ads ran on Facebook, Messenger, and Instagram depicting inappropriately edited photos of celebrities such as Jenna Ortega.
This is not the first time this has happened: explicit deepfake images of Taylor Swift spread rapidly across the platform X before the issue was addressed. One image racked up 47 million views before it was taken down. The White House even responded, telling ABC News, "We are alarmed by reports of the circulation of images…of false images, to be more exact," and urged Congress to take legislative action.
But why was this kind of app even available to download and advertise in the first place? Mrs. Stevens, a teacher, thinks it should not be, saying that "[T]here is truly no practical or useful reason to have an app like this in existence aside to humiliate another person." An app like this is also detrimental to Meta itself. As Mrs. Stevens put it, "This app does not align with the corporate image for META. They should be actively attempting to locate and delete any use of this app."
Does this give reason to question safety when using social media? Mr. Luckett responded, "It would fully depend on the apps' intended use. If there is ever a time where I see content that is not relevant or appropriate, then I block that content in order to limit my viewing of… (possibly inappropriate content). Preferably, I would like to not see it at all (on these platforms)."
Stefan Turkheimer, vice president of RAINN, a nonprofit anti-sexual assault organization, said, "More than 100,000 images and videos like this are spread across the web…a virus in their own right. We are angry on behalf of Taylor Swift (and other celebrities such as Jenna Ortega), and angrier still for the millions of people who do not have the resources to reclaim autonomy over their images."