
New York Times reporter Sheera Frenkel joins Niki remotely to talk about the AI 2024 campaign season that wasn’t. Drawing on her tech reporting, she sheds light on the reluctance of US campaigns to use AI compared to their counterparts abroad, the rush by foreign adversaries to capitalize on AI’s capacity to enhance disinformation campaigns, and what has led Facebook to step back from political coverage.

“While our candidates are not excited about using AI… there are nation states all over the world who are super excited about the possibilities of AI and how it lets them reach Americans in unprecedented ways.” -Sheera Frenkel

guest

Sheera Frenkel (NYT)

date

10.03.24

episode title

AI’s Campaign Influence

transcript

Niki: I'm Niki Christoff and welcome to Tech'ed Up. I'm joined today by Sheera Frenkel, an award-winning reporter who covers technology and cybersecurity for the New York Times. Based in the Bay Area, she's calling in to chat about how technology, including AI and social media, is being used, and not used, during this barn-burner election season. Sheera, thank you so much for coming on the podcast.


Sheera: Thank you for having me.


Niki: I actually attended a book party for “An Ugly Truth,” your international bestseller. You were feted all over Washington for thrashing Facebook in a warranted way [Sheera: laughs], I think, with Cecilia Kang, a friend of the pod.


Sheera: Yeah, that was a fun party.


Niki: It was at Comet Pizza, for people who don't know. So it was a little bit tongue-in-cheek. I don't know, was that tongue-in-cheek? Maybe it was, like, pointed. I'm not sure, I’m using…


Sheera: The location was very intentional!


Niki: Yes, very intentional because the book focused so much on misinformation and the domination of Facebook. Since then, you've been doing a lot of reporting on misinformation, disinformation, and just all the ways that technology is being used in viral trends. Specifically in ways that influence people.


Sheera: Yeah, I would say that my focus is just on the way that technologies shape society for better or worse.

And certainly in an election, that seems to be a time where we really can kind of focus in and figure out what role technology is playing in our world.


Niki: Okay. So you wrote an article that I loved called “The AI election that wasn't.” [Sheera: laughs] And I thought it would be interesting to talk about that because it's true. We were all sort of hyped for campaigns using artificial intelligence.

It's 2024. It's all we talk about. What were your findings?


Sheera: Well, y'know, as a reporter, you always kind of just start off with an idea. And in this case, the idea was: all these AI companies have formed. They say they're going to be pitching campaigns. They've had months now to talk to campaigns. Let's just call and check in and see: “How many contracts have they signed? Are they making billions of dollars in this campaign season, selling their services to small and big offices all over the country?”

And as I started calling them, I just found that one after another was really disappointed that they had not managed to gain traction in the campaigns, that their experiments had not necessarily gone very well. They were just finding that despite the hype of this being the first AI election, in fact, it wasn't! [chuckles] Y’know, AI was really playing at most kind of an administrative role, but not doing the kind of exciting things that AI originally promised it could do in the campaigns.


Niki: An example of “exciting,” and I'm going to put that in scare quotes [chuckles], is robocalls.


Sheera: Yeah. I mean, that was a great idea on paper, right? You could choose any voice you wanted and you could train it on your sense of humor. You could train it on your personality. You could train it to give every single policy position.

Imagine the time you could save if you had this thing that knew your policies inside and out, and you could trust it, right? Because, unlike a human being who goes off script, this will not go off script. This will say exactly what you want it to say, and you could reach tens of thousands of people, at a fraction of the cost of what human beings would cost you.

This one AI company I talked to was like, “Look, the problem wasn't the execution, the execution was good, the technology was good. People just don't like robocalls.” [Niki: mmh] And the minute you start a robocall with, “This is an AI-powered…”, y'know, the minute that someone heard the combination of AI and robocall, they hung up.

So, the technology didn't even get a chance to practice. It didn't even get a chance to prove itself before people were like, “Yeah, no, absolutely not! The only thing I want less than a human being calling me right now is a robot calling me.”

[both laugh]


Niki: [interrupting excitedly] About politics, too!  So the word cloud [Sheera: laughs] of robocall plus AI plus politics, that's, it's a hard pass for me too, but I'm really glad you talked about the point of it, which is very positive because I can be quite cynical about some of these things.

And I think you're right. If you are an underfunded candidate or running for the first time, it can save you a lot of money. It could use different languages to talk to constituents who need that translation, as a way to reach voters. It's infinitely cheaper than either hiring people or just even having volunteers, to your point. Volunteers say all kinds of stuff when they call. [Sheera: Yeah]

And yet, I, I agree. I think if somebody called me and identified itself as an AI, I'd be like out of there. I'd hang up.


Sheera: Right! And it's so interesting because on paper, everything about it makes sense.


Niki: Yeah, why do you think they said that they needed to identify it as an AI? Because people think that it would be shady not to?


Sheera: Well, I'll give credit to the company that ran this specific campaign, which is that they wanted to be above board. Because there's so much fear of AI in the United States, because it's still largely unregulated, and there's a lot of nervousness around it, they were like, “We didn't want to fool anybody. We didn't want anyone thinking we were pulling the wool over their eyes. We wanted to be super upfront.”

And actually, y'know, they said this to me quite a few times in our conversation, they were like, “We wanted to set a standard where if you're using AI, you should disclose that you're using AI.”

I actually spoke to AI companies that have worked in elections in Europe and in India. And it was really fascinating what a culturally specific thing this is because in India, when they did exactly the same thing, it went very, very well.

They did these AI-backed text messages, because people, y’know, in India, WhatsApp is widely used, and it was the same kind of idea where you could WhatsApp with a candidate, and it was 100 percent AI and people were enthused. People thought this was great that their politicians were adopting AI and were really eager to talk to the AI version of a candidate.

They even did these AI videos, which showed candidates, like, appearing with Gandhi and getting endorsed by Gandhi, right? [Niki: laughing] Like, this is very clear use of AI, which I think in the United States, if a candidate appeared with Abraham Lincoln, people would be [Niki: totally creeped out!] Totally creeped out!

[both laugh]

And in India people were like, “How amazing and progressive and savvy of our politicians to be embracing new technology.”

So, I really think that this was, like, culturally specific to America and maybe speaks to how much Americans don't trust technology.


Niki: Which is crazy because we invent all this stuff!

I think one of the issues with AI is it has a PR problem in the United States and partly because we've lost control of the narrative. And by we, I mean people like me who do strategic communications for a living.

If you're looking at these abstract fears - we're told constantly that you're going to have voice clones and you can't trust what you see. I think we've become so deeply cynical, both about our politics and about tech. A lot of the promise of AI is just efficiency, which is hugely helpful to campaigns, which are by definition chaotic enterprises.

If you can create efficiency using those tools, I actually think that would be a great use of AI that might not trip into voter consciousness, but I don't know, do people adopt those or do we know?


Sheera: Yeah, absolutely. The administrative software, so it's the really boring stuff that, like, organizes your campaign call sheets for you, organizes your email box and labels your emails, sorts through immense files of voter data. That was the technology that campaigns really were excited to adopt.

The one caveat there is actually that they were buying this technology, but they had the company sign NDAs. [Niki: mmh] So that it wouldn't become public that they were. And even though this is, like, in my mind, I can't imagine a person being, like, “No, how dare you use AI to make your Gmail box more manageable!” [both laugh]

These campaigns were so terrified of being associated with AI. Anything AI, even in this quite innocuous use case where you're using it in a purely administrative capacity, they wanted NDAs signed. They did not want that getting out.


Niki: And that does sort of concern me because it's usually the better funded candidates who are gonna be able to build out big IT infrastructure, and have a zillion interns. By the way, I used to work on a campaign doing exactly that.  I'm like, [chuckling] putting together spreadsheets of, y'know, phone numbers of policy surrogates and all sorts of glamorous things. The upstart campaigns can be much less well funded.


Sheera: I think there's just a lack of nuance happening. Because I, as a reporter, have written extensively about the dangers of AI and the ways in which it can be used to manipulate people, and the concerns around AI, around its safety, even around the way it gathers data, are something I've written quite a bit about.

That being said, that doesn't mean AI is bad, [Niki: Right] and I think as, as a reporter, it's something I think about a lot because, y'know, part of our job as reporters is to raise the alarm about the potential harms of new technology, but that doesn't mean that we're saying this new technology should not exist and doesn't have a place in the world.


We're saying, “Here are the concerns, like let's walk forward with these concerns in mind.” That sort of nuance is really lost, especially on a campaign where, you and I were chatting a little bit before we started recording, campaigns are so risk averse. They don't want to turn off a single voter.

So when they're shown polling, that even a tiny percentage of their potential voter pool is not going to be enthused about AI, they're not going to go in that direction, right?

They're going to say, “Absolutely not. Let's not take this risk.”


Niki: Right. And everything leaks from campaigns. I mean, it ultimately all comes out.

And so, if you think you're going to get, “Oh, the speech writers are using AI to write this speech,” then you lose authenticity or integrity.  It is interesting. The idea that you could just turn off a small number of voters in a way that could, well, I guess we are in a razor-thin election, so maybe you're afraid of that. But if you could reach more people, I'm not sure that it comes out that the math maths, as the kids say.


Sheera: Right. And I think, again, part of a polling problem is happening here, where the people they're trying to reach are people who, especially, I think you mentioned this, y’know, might not have English as a first language.


So, if you have an AI tool that can translate your messaging into dozens of languages very, very quickly and get it out, like, that is often a group of people that isn't polled very well. [Niki: Right]

The math may not be mathing here for a lot of candidates.


Niki: Right. Yeah. And I used to be a political pollster too. What is it, “liars, damn liars and statisticians?” Like, nobody knows.


I would suspect that by the midterms in two years, people are using this for efficiency, for translation, for speech prompts, spreadsheet management. It will be cheaper and more efficient and that will lower the cost of campaigns, which frankly I think is a good thing.


Okay, so that covers the campaigns running the elections in the U.S. and how the campaigns are running things, but let's talk about how we're potentially being impacted by overseas folks using social media, AI, et cetera.


Sheera: Right. I mean, ‘cause while our candidates are not, y’know, not excited about using AI, or at least not excited publicly about using AI.


There are nation states all over the world who are super excited about the possibilities of AI and how it lets them reach Americans in unprecedented ways. We have seen countries from Russia to China to Iran adopt AI. And specifically, programs like ChatGPT to spread disinformation because if you are, y'know, trying to, to quickly generate messaging to Americans, let's just think of a scenario here where you want to convince Americans that there's no point in voting, [Niki: mmh] that the election is rigged and it's awful and y’know, democracy is broken and everyone should just stay home and not vote.


That's actually specifically messaging we have seen around this election cycle from Russia. [Niki: Ok!] And you want a thousand variations of the message, “Stay home, don't vote. There's no point. Democracy is broken.” You can put that as a prompt into a language model and very quickly come up with language that sounds American. It sounds like the way an American would speak, which is something that actually quite a few countries have struggled with when they're spreading disinformation.


When you read the really early Russian disinformation attempts that happened around the 2016 election, they had a big problem sounding normal. The language is stilted. A lot of it is, like, jargon, or, like, words that no one has used in 30 years.


Niki: Right. This is like the [chuckling] North Korean comms department. Which has those amazing press releases they put out, but yes, so it doesn't sound American.

People can sort of, it's like the uncanny valley of “This probably isn't some TikToker that's from the United States.”


Sheera: I have this memory of, um, chatting with a Russian hacker in 2016, one of the people behind the Guccifer account that was spreading documents that had been hacked from the Clinton campaign. And as a reporter, you could reach out to them and chat with them on Twitter because they were trying to really spread these documents.


They kept typing things, they kept writing me things like, “That is seriously cool pie, man.”


[both laugh]


And I'm thinking like, “No, no American speaks like this!” [laughing] It's great-


Niki:  It is interesting though, because I feel like there was a moment when the, the memes, which I feel like is one of our greatest, greatest national exports. I think we're great at memes, [Sheera: chuckles] especially as like gallows humor. We're so good!


I do feel like the Russians were getting pretty good, and also the CCP pretty good, at our memes and at understanding certain things, like the tension points between us.


But to your point, if you can use ChatGPT and just scrape our large language models, which are scraped from the internet, which is a bunch of native English speakers, you are going to end up with a more natural-sounding disinformation campaign.


Sheera: Right.

So, for instance, on, on Twitter, you'll see these examples where the same message will be typed out by 3000 accounts at the same time. And somebody notices that and they go, “Oh, you guys are all bots. You all wrote exactly the same sentence.”


If you have an army of 3000 bots, you can program them to tweet out 3000 different things on the same message using AI. It's one more tool that they now have to try to make themselves appear more realistic. Cause that's really all they're after, right? All they're trying to do is to pretend to be someone they're not to convince you of a point of view that supports their geopolitical stance.


That's what's happening here, whether they're Russia, Iran, North Korea, and they're getting smarter and smarter about this. While we as Americans are now, I think, by and large very aware of misinformation and disinformation, very aware that bots exist, these countries are adapting.

So, instead of going on Twitter, instead of going on Facebook, and being like, “Right, American democracy is broke,” they go into small little group chats. [Niki: mmh] They go into the really, like, y'know, nuanced conversation about houseplants in a specific group. And they might spread messaging that they think an environmental activist would be interested in, or they will go to a group of animal lovers and they will spread messaging that is specific to, y’know, someone who has a, a love of animals.


Y'know, they're, they're getting-


Niki: The childless cat ladies? They don't have to do much for us. We're a, [chuckling] we're a, very secure voting block.


Sheera: I am a fellow cat lady and I am very vulnerable to cat lady chat. [Niki: still laughing]  I just think it's really interesting that people assume these tactics just aren't working because the really big campaigns are being taken down by Facebook and by some of the other tech companies.


And what we keep hearing from these big tech companies is like, “Yeah. No, we're, we're taking them down, but we have no idea what they're doing on Gab, or Truth Social, or Telegram, or some of these like smaller channels where they've really spent time embedding themselves.”


Niki: Right. And it's interesting you mentioned Twitter because I feel like Twitter has gotten so vile and yet I'm still sort of on it occasionally because it is a good place for me to find breaking news.


I have mostly tuned out of it. But if you are in a small group, I hadn't really thought of this, but if you're in a more intimate group online, and there is scaling potential by using AI by foreign disinformation campaigns, that is really effective at sort of incepting an idea into a place that doesn't feel as obviously problematic as I think Twitter does.


To me, it feels so obviously, ugly. People might say things to me or send scary clown emojis and I'm just like, “I don't even know if that's a human. Who cares?” But I do think if it was someone in a smaller group, that would be much more effective.


Sheera: For sure. And I will just put in a plug for a book that my colleagues published just this week.

Ryan Mac and Kate Conger, New York Times reporters, published a book called “Character Limit,” which is actually such a good and interesting look at Twitter, specifically at what changes Musk brought to the company, how Twitter as a company has really shifted, and why you are seeing more misinformation, why you are seeing more conspiracies, why you are seeing more bots.

So, I will just say I finished their book earlier this week and I absolutely loved it.


Niki: I'm so glad you mentioned it. Kate actually came on the podcast. She was doing like a sabbatical in D.C. and she came on and it was right when Elon was bidding and then trying to get out of his bid for Twitter [chuckling] and we couldn't future proof the podcast by even 48 hours because so many things were happening.


So, we will put a link to that book!


It is sort of fascinating what's happened with that app over time. And maybe this is something we can talk about, too. You just recently wrote about politics as a business sector within big tech, specifically Facebook. You recently wrote about them bowing out of politics, or backing away from politics. [Sheera: Yeah]

What do you make of that?


Sheera: Yeah. I mean, it's been such an interesting character arc for Mark Zuckerberg specifically. Pre-2016, Facebook was courting politicians to come onto the platform. They were approaching the Clinton campaign. They were approaching the Trump campaign to say, “Hey, we, we have a VIP office for you.” Like, “Please, please, please come to the platform. We really, really, really want you guys here,” because Mark, y'know, just in his, like, in his soul, believed that Facebook could play this really amazing, integral role-


Niki: And his soul, which also, I'll say this, you don't have to, has a lot of dollar signs, uh, in it. [Sheera: chuckling]

Facebook also was one of the first tech companies to get into political ads. So, I was at Google so long ago that we didn't even have political demographics as an advertising category, but then Facebook did it. So we swiftly followed suit, which they've now backed away from.


Sheera: He's so fond of the Roman empire and this idea of, like, “all speech is good and the best speech will rise to the top.” And, like, these are just really, like, integral to the way he, he thinks about the world and the way he thinks the world operates. So, it makes a lot of sense, if you can kind of understand the progression of his worldview, that he would want politicians on the platform.


And it just kind of goes from bad to worse for him. The 2016 elections, we all know what happens there. Then you have 2020, where he's, like, spent all this money, hundreds of millions of, some would argue billions of, dollars on security and bringing on all these experts, and making sure he's, like, y'know, going to Congress and talking to them, and investing so much time and money to make Facebook better for elections, and then “stop the steal” happens.


And Facebook is blamed, partially at least, for what happens on January 6th, for the ability of people to organize and spread misinformation about the vote on the platform, and then get to the point where the fervor is so high that they're gathered in Washington trying to storm the Capitol building.

Right in the wake of that, I mean, quite literally weeks after that, he kind of goes, “No more!” [Niki: mm-hmm] He's on an investor call and he's like, “I want out. I just don't think Facebook needs to be in the middle of political discussions. I don't think Facebook needs to play as large a role.”


What our article this week kind of looked at is how, over the last four years, he has taken really concrete steps to remove his company from the political conversation. So, whether that's launching new apps like Threads, where they're very, very clear that they don't want political discourse to be at the heart of what's happening on Threads.


That's probably why you still have to go- [interrupts self] I, I find the same thing, which is that Twitter is better for breaking news than Threads is. [Niki: mm-hmm]


Something massive will be happening in the world and it's nowhere to be found on Threads.


Niki: Right. [chuckling] I mostly get Threads content about, like, note this is TMI, but, like, perimenopause and washable cashmere. I'm like, “Okay?!”


Sheera: I use Threads to post about my pottery habit and I seem to get a lot of pottery stuff on Threads. It's quite intentional what they're focusing on.


Niki: And people want that too. We want a break from politics. So, that's also a consumer demand, I would think.


Sheera: Right. And, and Instagram the same. They have made a number of decisions on Instagram that unless you are going out of your way to follow political accounts on Instagram, you're not going to see political content.


So, you're seeing that shift happen. I think it's happening a little bit less so on Facebook, just because people's Facebook accounts are older. The average person has followed political groups over the course of their lifetime that they're still gonna see content from.


But again, unless you seek it out, they're not pushing it to you. That is all Mark kind of saying, “I can't win with this. I don't want to be the first call a reporter makes when something spreads online that is misinformation or disinformation.”


Niki: And you also think about the tax, right? The Washington tax.

How many times were those executives dragged down here in front of Congress to answer for Russian misinformation, election disinformation? I remember, [chuckles] I remember Sheryl Sandberg testifying, and somebody asked her about Russians buying ads.


And she said, “Well, a lot of people pay, y'know, for Facebook ads in rubles.” And I remember thinking, “Oh man, this is a headache for them!” This is, this is such a headache and it's a, it's a big tax on executive time and resources and big teams. So, I can understand why they would do that.



Sheera: And I will say, I do understand the point of view of executives where they say, like, “Why isn't the U.S. government defending against disinformation?”


They don't have strong allies within the U.S. government that are working with them on this. They are left very much to fend for themselves and come up with their own strategy and their own rules.

Anyone who's followed this closely has seen Mark Zuckerberg go to Congress and beg them to put laws into place, to define what misinformation and disinformation is, to set hard lines, because then, as a company, they get to say, “Hey, we did what the government asked of us. You guys told us what was misinformation. You guys defined it. And we followed suit.”


And the government's failed to do that year after year. And so, these companies have kind of been left flailing and getting blamed for what is on their platforms, which, as a public, we understand why that's happening. But the other piece of it, of, like, where's the role of government here? Where's the role of the national security establishment, frankly? That's kind of unanswered.


Niki: This is so funny to me that a New York Times reporter is giving me the Facebook argument for why it's hard for them. [laughing]


Sheera: I mean, I give every argument, [Niki: Nooo!] I think, for that one. Y'know, my job is to kind of, like, I, I, I, y'know, I think that's always funny, because Facebook always thinks I'm very, very, I am critical. I'm critical of everyone. Right? Because I think that this is a failure on the part of so many different parties that has left us in this situation.


Niki: Yes! But it is funny to me that a New York Times reporter is kind of giving their argument, which you should. You've talked to a lot of people over there and understand their point of view. And I'm going to sit here and say they have somewhat bitten the hand that feeds them because they're always going around suing over free speech rights.


It is this sort of impossible, unsolvable riddle where you would like to see cooperation with the federal government saying these are the standards and then you're fighting, sort of, free speech absolutes. Although, I feel like people really mischaracterize the First Amendment.


We could set certain standards, as we do for other types of speech, but it's tricky for the government to get involved. So, I sort of agree with him on bowing out. We're seeing, y'know, TikTok, which has very tight Chinese Communist Party connections. We're still seeing Russian misinformation on other apps that people are using. They're still engaged. I could see why Facebook wants to step back and intentionally crank down the dials on this.


Sheera: Yeah. I mean, I, [sighs]  I look, I always say to people the same thing. The bottom line here is they're a company. They're beholden to their bottom line. They want to make money. They want to continue to grow. That is their North Star. That's always going to be what guides them. Let's not pretend that they're going to make decisions for the good of democracy. They're going to make decisions for the good [chuckles] of their bottom line and their stock price.


And so, as the US government, if you want to incentivize them, there's very specific ways you can incentivize a company. And it's largely through [chuckling] penalties, largely through, y'know, damaging their bottom line if they can't comply with certain things.


And so, look, I, [sighs] the First Amendment is, it's incredibly nuanced and complicated, and I'm by no means a First Amendment scholar. I would never, I've never pretended to be, but I think-


Niki: I pretend to be! I don't know why you wouldn't! I'm just kidding. [both laugh]  I actually kind of do. I get on here and mansplain the First Amendment all the time.

I am also not a First Amendment scholar. Yes. Sorry. I interrupted you.


Sheera: No, I was going to say that I think that for too long there's been this idea that it's so complicated that we can't solve it.


As a reporter who's constantly being pelted with, like, new technology, and I'm in my forties, and I'm like, “What is this gosh-darn new technology thing they're sending me now?” Right? Like, but I'm on it because I have to try it and I have to learn about it.


All I keep thinking about is it's only going to get more complicated, it's only going to get harder to rein in, it's only going to get trickier to kind of figure out where our lines in the sand are about what constitutes free speech and what doesn't. And the longer we kick this can down the road, the harder it's going to get.


Niki: I think that's right. And I also am in my 40s and I too find myself - it's not intuitive to me the way that technology did feel intuitive before. And I don't know how much of that is generational and how much of it is that some of the applications just don't apply to me, right?


I don't have hours and hours to scroll on TikTok, looking at whatever. I'm not on TikTok and I'm not on Facebook. So, I'm not sure that I'm even seeing all the ways that things are rolling out, but you're right. It's just going to get more complicated.


Sheera: As someone who's on it, I'm always blown away by how well it understands human psychology and how well it plays into human psychology.


And that's part of what I'm thinking about when I think about this idea of free speech and what constitutes free speech. I think a lot of the people talking about the laws here are people who are thinking about gen one of this technology of like early Facebook. And they aren't thinking about the ways in which, technology now works, which is that it gets to know you as a human being, and it's catering to you. And it's catering messaging to you as a, as a human being that is meant to manipulate you. It's meant to play into your deepest emotions.

And so, I think there's just a lot more here that's complicated and isn't as simple as like, “Well, free speech should be allowed.”


Niki: Right. And there is also “give the people what they want,” which I mean, again, we're still, for better or worse, we are a capitalistic society.


Maybe that's the takeaway. The people don't want robocalls [both laugh] from AI campaign assistants. They don't want manipulated images. They do want their social media, but they want it to be politically light. Many people do. They want to have a little less of that.


And yet, we're still being incepted with divisive messaging. We do a decent amount of that to ourselves, but how much of that is made worse by some of these campaigns that are very intentionally trying to make us feel dejected, and hopeless, and just scroll on other things rather than the election, which we are a hop, skip, and, like, a hanging chad away from- [chuckling]

[both laugh]


Sheera: I think maybe this is just me. I'm a reporter. Maybe this is just what I want. I want transparency.

It's not that I don't want AI-generated images. I just want to know, if that kitchen counter is AI-generated and I can't find it in the real world, that you're going to tell me, “This is an AI-generated kitchen. Your kitchen is never going to look like this no matter how many buckets of green paint you buy.” Right?


Niki: 100 percent! Is it Goop or is it AI? Tell me! They're both too aspirational, but I agree!

That actually drives me nuts when you see that, this disturbs me a lot, [chuckling] maybe more than anything else. Those gorgeous AI photos. I think the Washington Post ran this amazing series of AI photos where you thought, “Oh my God, what a beautiful island that does not exist.” [Sheera: Right]  [cross talk] It's troubling.


Sheera: It's, it's, but y'know, so I keep thinking like, is it troubling or do I just want to be told like, this is AI, we have imagined this, a computer has imagined this. [Niki: Right]

I think the same goes with social media. Like, is it a bot? Is it a real person? I crave that right now when I'm online. I want to know that the person I'm talking to has a real name, is who they say they are.

If it's an AI-generated photo or, or a language model that came up with a short story, that's okay. Like, just tell me that. I want the disclosure of knowing where something has been sourced from. And I think that's something that we're really lacking in social media across every single platform at this moment.


Niki: Right. And that gives you at least as a viewer, as a user, as a consumer, knowledge, so you can decide what you're going to do with it.


And actually, I will say there is a lot of bipartisan support in the Senate to label election ads that use artificial intelligence, whether they've been manipulated by it or informed by it. And that went absolutely nowhere because we can't get anything done in an election year; we've barely funded the government. But there is support for it.


So, I do feel optimistic that that is something that could come out of Washington, which is this transparency, because you're not really tying anyone's hands behind their back. You're just letting people know what they're looking at. And then we can use it for escapism. We can decide we want to hang up on the AI robocaller, or we can use it for really valid and important purposes to make us better educated and crunch big data better.


Sheera: Exactly. I spend a lot of time thinking about this because I have, I have two daughters, they're very young, but y'know, eventually I will not be able to stop them and they will get on social media. And I think a lot about the teenagers I speak to who talk about the danger of filters.


A lot of those AI filters look so, so real. It does all sorts of things to teens that they're already talking about. They're already saying “We as teens want this stuff labeled. We think it's important for our experience online to be, to be different than [chuckling] it is right now.”


Niki: It's interesting you say that. These images do create warped ideas of ideal beauty. They're unrealistic. They're literally unattainable.


I live with four teenage boys. They also get manipulated imagery, and it's interesting because they get manipulated music, but they seem to know that it's AI-generated. So, one of them was playing a song that was AI-generated that he got on TikTok the other day. It was sort of a, it was honestly kind of a bop. I was like-

[both laugh]

[cross talk]


Sheera: The kids know! [Niki: They just know!]  I mean, I think the unrealistic beauty standards affect teenage boys and girls alike. I think the kids know that the music is manipulated.


I think they know all of it, but they want it out there. They want it labeled because they're like, even if we know like technically that this isn't something that a human mind achieved or a human body looks like when it's natural and rolls out of bed, like they still want it labeled. They still want it clearly stated because it can still create a warped idea of what's acceptable and what's not.


Niki: Well, I'm very for it. And I do love that the business you interviewed focused on transparency in the tools it was giving campaigns. I do think we all deserve that.


It's certainly going to help us build trust in this technology, but also just trust among ourselves, which is sort of in short supply at the moment and would be really good for us to be able to look at things and not just have a completely cynical eye.


So, on that optimistic note, thank you so much for coming on and talking about this.

Just so people know: you were on a team that was a Pulitzer finalist. You've written this international bestseller, you speak Arabic and Hebrew, and have lived in Gaza. And like, actually, I heard you say something about how you’ve talked [chuckling] to Hamas's PR team. I'm like, “What?!” You just said you talked to a Russian hacker. You have an amazing career.


I really appreciate you taking time out of your reporting to come on and talk about the trends you're seeing.


Sheera: Thank you so much for having me!
