Satya Nadella says the explicit Taylor Swift AI fakes are ‘alarming and terrible’

Microsoft CEO Satya Nadella has responded to a controversy over sexually explicit AI-made fake images of Taylor Swift. In an interview with NBC Nightly News that will air next Tuesday, Nadella calls the proliferation of nonconsensual simulated nudes “alarming and terrible,” telling interviewer Lester Holt that “I think it behooves us to move fast on this.”

In a transcript distributed by NBC ahead of the January 30th show, Holt asks Nadella to react to the internet “exploding with fake, and I emphasize fake, sexually explicit images of Taylor Swift.” Nadella’s response manages to crack open several cans of tech policy worms while saying remarkably little about them — which isn’t surprising when there’s no surefire fix in sight.

I would say two things: One, is again I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced. And there’s a lot to be done and a lot being done there. But it is about global, societal — you know, I’ll say, convergence on certain norms. And we can do — especially when you have law and law enforcement and tech platforms that can come together — I think we can govern a lot more than we think — we give ourselves credit for.

Microsoft might have a connection to the faked Swift pictures. A 404 Media report indicates they came from a Telegram-based nonconsensual porn-making community that recommends using the Microsoft Designer image generator. Designer theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts. While that doesn’t prove Designer was used for the Swift pictures, it’s the kind of technical shortcoming Microsoft can tackle.

But AI tools have massively simplified the process of creating fake nudes of real people, causing turmoil for women who have far less power and celebrity than Swift. And controlling their production isn’t as simple as making huge companies bolster their guardrails. Even if major “Big Tech” platforms like Microsoft’s are locked down, people can retrain open tools like Stable Diffusion to produce NSFW pictures despite attempts to make that harder. Far fewer users might access these generators, but the Swift incident demonstrates how widely a small community’s work can spread.

Nadella vaguely suggests larger social and political changes, yet despite some early moves on regulating AI, there’s no clear range of solutions for Microsoft to work with. Lawmakers and law enforcement struggle with how to handle nonconsensual sexual imagery in general, and AI fakery adds extra complications. Some lawmakers are trying to retool right-to-publicity laws to address the issue, but the proposed solutions often pose serious risks to speech. The White House has called for “legislative action” on the issue, but even it offered precious little detail on what that means.

There are other stopgap options — like social networks limiting the reach of nonconsensual imagery or, apparently, Swiftie-imposed vigilante justice against people who spread them. (Does that count as “convergence on certain norms”?) For now, though, Nadella’s only clear plan is putting Microsoft’s own AI house in order.
