This is quite the controversy in my world. There were some very persuasive people on Reddit recently who were not actually people. They were part of an experiment. Researchers at the University of Zurich wanted to see how persuasive AI could be. Problem is, the scientists did not get permission to do research on the actual humans on Reddit that the AI was interacting with. That's created a huge controversy about the ethics of the research. Reporter Tom Bartlett recently wrote about the experiment in The Atlantic. Welcome to ALL THINGS CONSIDERED.

https://www.npr.org/2025/05/07/nx-s...on-reddit-reveals-the-persuasive-powers-of-ai
This article was pretty lame. My colleague explained that they had one treatment where the AI argued to change people's minds & a second treatment where the AI scraped the web for whatever it could find about the Reddit posters & customized the argument accordingly. Both AI treatments beat the humans, with the customized one really crushing it.
Yeah. My threads rarely take off. My other idea was a thread on the fascism of the Riot Grrrl movement… I'm currently reading Rebel Girl.
Speaking of, did you see the AI robot in China that, once plugged in, began trying to attack the humans next to it? The future looks grim. https://e.vnexpress.net/news/tech/t...er-sparks-uproar-on-social-media-4882206.html
I saw this earlier. It seems like the big stink is whether this was ethical. But the buried lede is that AI is more effective at persuading people on issues, which could be both good and bad; in our age of disinformation, gullibility is extremely problematic.
Yeah, the ethical angle is what got it on my radar. I've published several papers about honesty/lying. There is a published paper (in a super shitty journal) attacking my work as unethical b/c subjects could make money by lying - even comparing it to the Tuskegee syphilis experiments.* The same work won a research-in-ethics award (plus $10k) from the U. of Oklahoma.

* I was at a conference in Honolulu & the author of that paper's hotel room was across the hall from mine... awkward.
Well, def not ethical from a scientist's view. However, this is indeed the best way to get at the actual answer to the question. Agreed that these are two different debates, and that the results are being overshadowed by the methods.

The ethical quandary would be if the AI persuaded the humans into something harmful. I see the topics ranged widely, and most appear pretty benign. But I suppose that if AI changed my mind about pit bulls being aggressive, that could possibly result in some sort of real-life action that harmed a dog, or me, or someone else. Even changing a mind about living with parents might cause a person to change their politics or something, I suppose, and result in tangible actions.

Having said all that, the part about the results is indeed the more important news, IMO. If this was unethical, that pretty much makes all of SM unethical on its face. X is likely more AI than human these days. Elon made this argument when he made the purchase, and since then it has clearly gotten worse. I know we joke about bots on this very site, and my own jokes about the ever-growing-in-infamy TH bot are mostly jokes, but I side-eye a little every time because I am not so sure it isn't true. I use FB for a single, narrow reason, meaning I rarely check in, but each time I am blasted by AI nonsense from beginning to end.

If AI is mind-bending Reddit with practically no effort and with no real agenda to push, as this appears to be, then what is happening in real life every second of every day on X and the rest? And that's real life, not a bunch of nerds bypassing protocols for an academic exercise.
I mean, the ends justify the means IMO, and I almost never take that stance. I guess my view is that since this genie is already out of the bottle, I don't see how these researchers harmed the world. There is already some bot out there trying to get you to mouth-kiss pit bulls, and one for every other topic I saw mentioned. The researchers didn't invent any of those ideas and aren't lobbyists for one side or the other. They are essentially just bringing more exposure to what is already happening. I imagine the gullible Reddit users are more embarrassed about being outed as swayed by off-the-shelf AI. But that is happening to them constantly. As long as they aren't forced to face it, no outrage.
Same thing I said whenever I got arrested for bar fights. "Just routine flailing, Mr. Occifer!!" But seriously, coincidental glitch or whatever, it certainly appeared to be trying to escape its tethers.
For election interference, this was always the huckleberry. I wonder what the "own the libs" programming is like.
And while that one is also textbook unethical, the only potential harm would be the "torturer's" mental state after the fact. It is essentially an elaborate practical joke.

Again, on SM there is a massive industry in making people believe they have done something terrible. We laugh at those as we watch the person recoil in horror. Or take one of my favorite TV shows, Scare Tactics. They even had the show-within-the-show called "Fear Antics," where they doubled down on making a person think they had possibly killed someone. That show was discontinued largely because of this very concept.

Is it all in good fun? If it is for research, is it better or worse? I am aware that I am getting a kick out of watching these people be traumatized, but I am let off the hook at the "reveal" and the victim laughing along with his tormentors. Perhaps if the experimenters filmed the "reveal" and had a charismatic host lead the victim into laughing it all off on camera, no one would mind.
This is notable, as my thought was that the researchers' first ethical obligation is to their institution/IRB. Sure, it brings up external ethical concerns, but as @Emmitto suggests, optimal research outcomes sometimes touch the gray spaces. I personally wouldn't want to be the researcher skirting those lines, but I can imagine a space where scholars do. The IRBs at two major institutions I've worked at proved to be inconsistent at best and oppressive at worst.