

[music plays]

Niki: I’m Niki Christoff and welcome to Tech’ed Up. Today we’re learning about artificial intelligence with Congressman Will Hurd, a former undercover CIA agent and author of the forthcoming book American Reboot arriving in February 2022. 

In our conversation, Will breaks down the difference between AI and machine learning, teaches us about the concept of AGI, and we both agree that middle schoolers should all be learning how to code. 

[music plays]


Niki: Welcome, Will Hurd. Thank you for coming on the show today. You served three terms in Congress representing your hometown of San Antonio, Texas. You are unique in the sense that you were a computer scientist serving in Congress, and you worked on a national AI strategy, a bipartisan program. So today, that’s what we’re talking about. Let’s start with the basics: what is artificial intelligence?

Will: So, artificial intelligence is a tool that reacts in human-like ways. Okay? It's basic. You put inputs in and outputs come out. What's fascinating is that if you say AI to someone who's older than me, they're going to say "HAL 9000." That was that creepy computer, y’know, on the spaceship in 2001: A Space Odyssey. If they're younger than me, they'll say the killer humanoid, right? Or they'll say Roomba. [Niki laughs] Which is the little machine that goes around and sweeps your floors. But ultimately, artificial intelligence is a tool, and I say tool specifically, because humans still gotta be able to use it. 

But it is such a powerful tool that I equate it to nuclear fission. Nuclear fission, when it's controlled gets us nuclear power, clean energy that can power the world, right? Nuclear fission uncontrolled, is nuclear weapons and could destroy the world. And that is where we're at or where we can be when it comes to artificial intelligence.

Niki: So, that’s helpful. I think you're right. There's sort of this idea: are the robots going to become sentient? Is it like Skynet? Is it evil? And your point is basically that it's a technology, a tool, that can be used either for good or for evil. And I want to talk about those different use cases [Will: Sure] and what you think is important. But can you also just answer a question I do not know the answer to, which is: what's the difference between AI and machine learning?

Will: Okay, look, great. Great question. So, machine learning is a part of AI, and it's the process by which you teach the algorithm to learn. Right? And so, there are different techniques you can use, coding techniques, in order to reinforce the learning within the algorithm. And for AI to work, you need data.

So, most people say there are three components of artificial intelligence; I would say there's a fourth. The first component is data. You have to train the algorithms, right, with something. You train them with data.

Let's take something as simple as autonomous vehicles. If the camera on the car is looking at a stop sign, it has to know that's a stop sign regardless of how that sign is shaded. So, that's why you need, like, a million images of stop signs in order for it to recognize a stop sign. [Niki: Right] So, that's why the data goes in. Right? And then you have the algorithms: the programs that use machine learning to take that data and make decisions from it. So, you have to have data, you have to have algorithms, you have to have computing power. [Niki: Mm-hmm.] And there are two sides to how we use AI now. If you're listening to this show on Spotify and it says you would like this other thing, guess what, that's AI being used. If you're using mapping software that gets you from point A to point B the quickest way, there's some AI behind that. 
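Will's stop-sign example (lots of labeled images under varied lighting) can be sketched in a few lines of Python. This is a toy nearest-neighbor classifier over one made-up feature, the fraction of red pixels in an image; every number and label here is invented for illustration, and real systems train deep networks on millions of images.

```python
# Toy sketch of why training data needs variety: a 1-nearest-neighbor
# classifier over a single invented feature, the fraction of red pixels.
# All numbers are made up; real perception systems are far more complex.

def classify(train, red_fraction):
    """Label an image by the training example whose feature is closest."""
    return min(train, key=lambda ex: abs(ex[0] - red_fraction))[1]

# Trained only on bright, sunny photos:
sunny_only = [(0.90, "stop sign"), (0.10, "billboard")]

# Same model, but with a shaded stop-sign example added to the data:
varied = sunny_only + [(0.40, "stop sign")]

shaded_sign = 0.42  # a stop sign in deep shade looks much less red

print(classify(sunny_only, shaded_sign))  # billboard  (wrong!)
print(classify(varied, shaded_sign))      # stop sign
```

The model trained only on sunny photos mislabels the shaded sign; adding shaded examples to the training set fixes it, which is exactly why you need that million images.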

There’s another thing called artificial general intelligence, AGI. And this is a state where the algorithm is going to be smarter than most humans at most things. And this is a reality; we're going to get there. So, AGI is really the thing that is going to be super, super powerful, but to get there, you have to have computing power. You need some fast computers that can tabulate all the data you're putting into those algorithms, which have millions of lines of code. Right? 

And so, you need that compute power; compute power requires energy. I also think another element that we should be thinking about is the policy around how artificial intelligence can be used. And so, those are the elements you have when you think about AI and AGI. A couple of years ago, Vladimir Putin said, and I'm paraphrasing, “Whoever masters AI is going to master the world.” [Niki: Yes] That’s probably one of the few things- that's probably the only thing I agree with ol’ Vladimir Putin on. 

Niki: There is a lot to unpack in what you just said. [Will:  Mm-hmm] So first of all, I don't even like it that Mark Zuckerberg knows that I want to see photos of cats stuffed into pitchers and refrigerators. Literally, they show me these photos on Instagram and I don't like it. It makes me feel- it feels invasive, even though I do enjoy a good cat photo.

Will: Who doesn’t, who doesn’t. Yeah. 

[Both Laugh]

Niki: So, I want to talk a little bit about the average person and how they're interacting with this, because I do think there's potentially a generational divide. There are these positive use cases of AI, but the private sector has a lot of that compute power. These are the people with the data centers that can run these algorithms, which can then serve us things we want to click on and capture our attention. The private sector right now is dominating this. And I'm curious what you think. Part of me thinks this is why people are uncomfortable with it; it feels like a privacy invasion. 

Will: Sure. And, and, these are all valid questions, right? So, we can't tackle this question about privacy and data privacy without talking about- that we're in a race. And I do believe that we're in a new cold war with the Chinese government, and I say Chinese government specifically.

I don't have a problem with the Chinese people. I obviously don't have a problem with Chinese Americans. Some of the hate that has been directed to Asian-Americans in the United States is just outrageous. I'm very precise; it’s the Chinese government. And the Chinese government has made it very clear.

This is not my opinion; this is not me lying in bed at night at a Holiday Inn Express, musing about what the Chinese are gonna do. This is what the Chinese government has said its goal is. Their goal is to surpass the United States of America as the global hegemon, the only superpower in the world, and they're going to do that by being the global leader in advanced technology. 

And they've outlined 12 to 15 different types of technology: quantum computing is one of them, AI is one of them, 5G is one of them. And so, there is this race. Now, there are some people involved in developing AI who think these race conditions are bad, because in a race to get to artificial general intelligence, somebody is going to cut corners. 

And when you cut corners, then you're going to create that Skynet, or that thing that gets out of control because this tool is going to be so powerful. Right? But you know who doesn't care about privacy? 

Niki: Yeah.

Will: The Chinese government, right?

Niki: Right.

Will: You know who doesn't care about civil liberties? The Chinese government. 

And it's hard to talk about artificial intelligence without talking about 5G. So, look, it's going to be awesome to download season three of Ted Lasso on my phone in 2.5 seconds. 5G is going to give us upload and download speeds that are just outrageous, in a good way. 

But there's also a thing called latency. I do something on my phone- I type in a command, I take some action. It goes up into the cloud, then it comes back down. The time that round trip takes is called latency. With 5G, that's going to happen in about a millisecond, quicker than we can perceive. So now we're going to have the entire power of the internet in real time at our fingertips. Whoa! Like, what is that gonna allow us to do? It's going to allow us to have true driverless vehicles, things like that.
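Will's latency point can be made concrete with some back-of-the-envelope arithmetic: how far does a car travel while a command makes that round trip to the cloud? The figures below (roughly 50 milliseconds on 4G, roughly 1 millisecond on 5G) are commonly cited ballparks, not measurements.

```python
# Rough math: feet a car covers while waiting one network round trip.

def distance_traveled(speed_mph, latency_s):
    """Distance in feet covered during `latency_s` seconds at `speed_mph`."""
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * latency_s

highway_speed = 70  # mph

print(round(distance_traveled(highway_speed, 0.050), 1))  # ~50 ms round trip -> ~5.1 ft
print(round(distance_traveled(highway_speed, 0.001), 2))  # ~1 ms round trip  -> ~0.1 ft
```

Five feet is most of a lane width; a tenth of a foot is negligible, which is why low latency matters so much for anything safety-critical.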

So, 5G is part of the infrastructure that you need in order to truly have artificial intelligence. So, why do you think the Chinese government spent all this time developing Huawei? Because they're owning this 5G infrastructure in a lot of parts of the world, right?

Think of it this way: I'm driving down the highway, and somebody else owns the highway. Well, you think, "Okay, they can't get into my car." But what if I put a stoplight on that highway and force you to stop? [Niki: Right] Or what if I put a trapdoor in that highway and the car drops through? Right? I can do that, because I own the infrastructure.

So, so, this is one issue. Now, this all relates to privacy because us and Europe, we got to get beyond this transatlantic beef on privacy. We have to be able to work together against what the Chinese government is trying to implement. And that's the real threat.

A couple of weeks ago, the Chinese government took every reference on the Chinese internet to this one actress- she was a well-known actress who became a billionaire, and she got cross with the Chinese government. Guess what happened? They literally used AI to trawl the Chinese internet and took every reference to her off of it. [Chuckle]

Holy smokes. Right? Like, that’s why you don't want the Chinese government to win this. And they're exporting this technology and these tools all around the world.

Niki: So, if I- just to sum up what you said, which I think is really smart. Which is: the Chinese government, although they do a pretty good job on the world stage, pretending that they are a democracy, they don't have elections, they don't have to bother with that. They don't have civil liberties. They're a total surveillance state for the people who live there. They're building, with their government resources, the infrastructure that's going to create 5G. That's going to create this extremely powerful artificial intelligence. 

They already have some of that. And while they're focused on that, we're, one, mad about our Instagram feeds and, two, talking to Europe about privacy legislation and regulations. Which, actually, having worked at a tech company, I get it. I absolutely get it. I do think we need to think carefully about people's rights, and the rights to their data, and who owns their data.

But your point, if I'm hearing you, is: if we lose this cold war race with the Chinese government, suddenly the East has way more power than these Western democracies, which are subject to elections and civil liberties and rights. And in fact, has a private sector that's taking a lot of this on. Is that what I'm hearing you say?

Will: No, you're absolutely right, but it still matters. Right? Getting this right matters. And I think there's a couple of things- look, the only way the US is going to win this, and I would say the best-case scenario is that we're tied right now, [Niki: Ok] the only way we're going to win is if the public and the private sectors actually start working together in a more efficient way. An authoritarian government like the Chinese government can get somewhere first. Why did the Russians get into space first? There were so many firsts in the Russian space program, but they were unable to evolve. They weren't able to turn on a dime and change. Right? An authoritarian government can get someplace first because it can marshal all its factors of production in one direction. But I will always bet on American creativity and entrepreneurship, right, to stay in the race and continue to evolve. 

And so, we have to accept that we need a privacy standard. And this is where we should be working with the Europeans. The Europeans are probably, like, 18 months ahead when it comes to setting policy around these issues, partly because they don't have big European companies driving this conversation. And a lot of their early steps were what I thought were anti-competitive attempts aimed at great American tech companies. The tech companies have to recognize that they have a public policy role, because their tools are now being used to advance public policy, whether that was the original intent or not.

And then, when it comes to something like artificial intelligence, I think it starts with those algorithms. Let's just start with: follow the law. [Niki: Hmm] We already have rules protecting civil rights and civil liberties. Just enforce those, right? Have the algorithms learn those constraints so we aren't going to violate them. 

And so, this is a conversation we can move to, but we have to move a little bit faster, because the debate is slowing us down from getting to the point where the federal government makes the data sets it holds available to more people in order to train better algorithms. The Chinese government's always going to have more data because of what we talked about: they don't care. So this is where, with some of the debates in Washington, D.C., we need to get beyond a few of them so we can start talking about driving forward and winning this race.

Niki: So, I think you make a good point. And just for people who aren't really close to the law, what you're basically saying is that if a tech company is using AI, or its algorithm is making a determination on credit, on mortgage worthiness, on housing rights, it cannot discriminate. It's legally not allowed to discriminate. Now, that doesn't mean- we know, and we'll have an episode on bias in AI and how the people building the machines bring their own biases to the technology. [Will: Mm-hmm] And that's something we need to look at. But, in fact, you could in theory use AI to be more compliant with the law.

Will: For sure, right? Because you're going to be able to take every element and every example of how the law was implemented, every court case, and ultimately train the algorithm: this is what your left and right bounds, y'know, should be. But in order to make sure the algorithms are doing the right thing, you need more people involved in the industry that's designing them. Part of that bias comes from not having a diverse workforce developing this. And so, that tech talent divide is serious.

And so, I think one of the ways we improve that is: we've got to start teaching coding in middle school, at a minimum, in my opinion. For my generation, if you didn't know how to type, you weren't getting a job, no matter what the job was. Every job required you to know how to type, right? And every job is going to have something to do with data analytics, with understanding coding in some form or fashion.

And so, we need to start- we’re having all these debates:  What should we be teaching in college? How do we improve the classes in college? How do we get more cyberwarriors out of college? Well, all the great hackers I know, none of them went to college for cybersecurity, right? But they had these skills that they developed really at a young age.

So, that's another area we need to be stressing. And then, the problem is, it's not like there's some computer science teacher chilling at a coffee shop somewhere, waiting for a tap on the shoulder to be like, "Hey, we need you," right? We don't have enough of them.

And so, those who have some of these skill sets need to be working in their communities, being a resource to those teachers, so the teachers teaching this have someone they can go to when they have questions. And that's something you're starting to see people try to participate in.

Niki: I think that's absolutely right. I, in middle school, was learning to use a bandsaw. I don't even know if they still have industrial arts.

[Both laugh]

I broke three bandsaws. Absolutely, kids should be learning how to code. These other countries are making sure all their kids know how to code. What are your recommendations for- so, we've talked a little bit about the private sector. These big tech companies absolutely have a role in making sure that they're following the law, that they're cooperating with the government on providing data sets potentially, or at least thinking through that. What does a public-private AI strategy look like for the United States?

Will: Well, it starts with making sure there's cooperation between the entities, a lot of them within the DoD and the intelligence community, on developing some of the algorithms and on using some of the data. How are some of these tools going to be used? That conversation needs to improve, right? And it needs to move at a faster pace. You also need to make sure that the federal government is introducing some of these tools into the government and using them. 

Why does it take six weeks, even in good times, to renew your passport? Right? This should be pretty easy. I should be able to do it on my computer; I shouldn't have to go to the post office to pull it off. Right? And so, how are we using that? The future of cybersecurity is going to be good AI versus bad AI. How are you using AI tools to defend digital infrastructure? To protect citizens' information? 

Niki: We absolutely want them running on state-of-the-art technology that the private sector uses. But how do we solve for this? How do you get people [trails off] In my opinion, a lot of the issue is: why would someone go work for the federal government when they could go work at a big tech company, making a ton of money, vesting in some of the most valuable companies on the planet, in the history of the universe. How do we get them to go into government? What’s the solution? Do you have ideas for that? 

Will: Sure. Look, two things. One thing I tried when I was in Congress that was unsuccessful: I called it the Cyber National Guard.

It was very specific to people involved in cybersecurity. If you're going to go study cybersecurity, Uncle Sam's going to pay for that, but you've got to come back. If you get a three-year scholarship, you're going to come work in the federal government for six years in cybersecurity, and you're not necessarily going to the DoD or the NSA.

You're going to the Department of Commerce. You're going to the Census Bureau, because you need that workforce all over the government. Then, once you finish that requirement and go work in the private sector, the private sector is going to loan you back for 45 man- or woman-days a year. Right? 

And so, you have someone who has experience in the government, understands what's happening in the private sector, and there's that cross-pollination of ideas and skill sets. The problem we found was getting and holding clearances. It's insane. We should be able to use AI for that. It shouldn't take six months to do a background check; it should take six days.

So, that's one way. Another way is to create a category of person in the government that doesn't have to get rid of all their financial investments. Y’know, if I'm holding stock in a company that I was working at for a long time, I shouldn't have to get rid of that in order to go work for the federal government.

Oftentimes, a person like that may come in and work in the federal government without a salary if they're able to keep all their other investments. And that would be for a short period of time. I would love to have people like that- when they have a successful exit from a startup and they're looking for that next thing: hey, come provide your skills to the government for nine months, a year, two years. 

But the way we think of a government employee means we can't have that. We put someone in some category; you've gotta be a GS-this or a GS-that. We can't think creatively about those financial structures. People would do it because they want to help their government.

Niki: I think you're absolutely right that people would do it. And in fact, if we have anyone listening who works at a tech company, I'm sure there are engineers that if they could continue vesting their stock, if they could have a safe harbor where they could go back into the private sector without being demoted or losing opportunities, absolutely, they'd spend a year rotating through government.  And, I think they should ask their employers about helping make that happen, especially because tech at the moment is, you know, they've got a little bit of a PR problem in Washington.

[Both Laugh]

Will: Right, Right.

Niki: But it's a great service to the country to take your engineers, your data scientists, program managers, product development people, and rotate them through.

And I know that people who work in the federal government don't need a bunch of whiz kids coming in and mansplaining stuff to them. However, getting extra boots on the ground and people who can just help would be, I think, a huge service. And I think, it's only going to happen if the employees ask for it, if they start pushing for it. This is my observation, having worked in the industry. 

Will: I'm with you on that because the stakes are high. Let's just take AI as an example. There are algorithms right now that can look at your eye and determine that you're susceptible to a certain kind of cancer, right? It's allowing people to live longer.

We have AI now that can help farmers grow more crops using less water and less land. Hello?! Why would we not push that and move at light speed to pull that off? Right? Y'know, the CEO of OpenAI talks about how he envisions Moore's Law for everything. Moore's Law is the observation that the number of transistors we can put on an integrated circuit, and so its power, doubles roughly every two years. He's saying that AI, and AGI specifically, is going to become so powerful that you'll be able to drive down the cost of goods and services, because the algorithm is going to be able to do the work. So imagine if everything we buy, from our groceries to our rent, got cut in half every two years. That's a pretty amazing situation.
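The "Moore's Law for Everything" idea Will describes is simple compounding in reverse. A minimal sketch, using a purely hypothetical good that halves in cost every two years:

```python
# Hypothetical compounding: what does a good cost after `years` if its
# price halves every `halving_period` years? Numbers are illustrative only.

def projected_cost(initial_cost, years, halving_period=2):
    """Cost after `years`, halving every `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

rent = 2000  # dollars per month today (made-up figure)

for year in (0, 2, 4, 10):
    print(year, round(projected_cost(rent, year), 2))
```

After a decade of halving every two years, the hypothetical $2,000 rent would be about $62.50, which is the scale of change the claim implies.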

Why wouldn't we want to try to get there faster? So, that's the upside. The downside? I'm not worried about artificial intelligence becoming the Terminator [Niki: Skynet?]

[Both laugh]

And that technically wouldn't happen until we actually achieve quantum supremacy, which is a whole other topic. But imagine a piece of software that knows absolutely everything about you and can talk to you based on knowing that and having that understanding. 

Like, that’s something, if we want to talk about misinformation, disinformation and influence and how that happens. [pause] Imagine something that is the best marketer or influencer and that this is all tailored specifically to an individual based on everything that individual has done in the digital sphere. That's why we need to have these protections in place.

Niki: Right. And that’s why we need those protections: a privacy structure that's normalized across countries, certainly across the 50 states, and then get moving, so that this isn't developed first by authoritarian, totalitarian regimes who are going to use it for ill. That's, sort of, where you stand on this. 

Will: Right, look, it truly is. Go back and look at what Mao Zedong did in China, without having any of these tools: the death and destruction inflicted on the Chinese people over decades. [Niki: Yes] Now look at how the current leader of the Chinese government is using these tools to suppress opposition.

We've seen what they've done in Hong Kong. We see what they do to the Uyghurs, the ethnic minority in Xinjiang province. Right? We know how they're going to use these tools to continue to extend their influence over their society. Ten years ago, people thought, "Oh, the Chinese government only cares about China. They only care about Asia." No. They have military bases in Africa now. The One Belt, One Road initiative was designed to increase their influence beyond their region, into Africa and now into Europe. And so, this is the battle we're in.

I don't want, y’know, the situation that happened in, what, 476 AD, when the Goths invaded and the Western Roman Empire fell, and some Roman chilling in Rome was like, "What the hell is a Goth?" That's a similar situation America could be in.

I want us to wake up and be like, “Hey, we're ready for the fight. The public and private sectors are going to work together. We're going to educate our kids for jobs that don't exist today so that we can be competitive.” And then we're going to continue to uplift humanity for another 245 years. And so, that's, I think we can still get to that.

Niki: I’m so glad you turned that around. It was getting really harrowing for a moment.

[Both Laugh]

Niki:  I don't want to end the show thinking about a modern-day Mao Zedong using AI to subjugate civilization. So, I'm really glad you focused on middle-schoolers learning to code, on getting talent to come work for the U.S. government, and on getting it together and cooperating with our European allies and Western democracies. And, like, let's start moving so that we have innovators and inventors in the United States creating the rails and the infrastructure so that we can win this race. 

Will: Amen. 

[music plays]


Niki:  Next week, we talk to former White House Deputy Press Secretary Jamie Smith about the basics of blockchain technology and cryptocurrency and how the industry should be talking to regulators. Be sure to subscribe to Tech’ed Up wherever you get your podcasts; video content is available on YouTube, and the link is in the show notes.
