Illustration generated from a text prompt by the Midjourney AI program

A robot political expert has just arrived

New artificial intelligence will shake up human politics and policy

Ed Lyons
Dec 19, 2022


The world of government requires many kinds of experts to wage campaigns, make speeches, research policy, and write legislation. A powerful new political expert has just arrived to compete with all of them, and it is not human.

I recently asked it, “What has Massachusetts Governor Charlie Baker done wrong?” This was its full answer:

It is difficult to say that Massachusetts Governor Baker has done anything “wrong” as opinions on his performance as governor will vary among different individuals and groups. However, some criticisms of Governor Baker include the following:

  1. He has been accused of not being progressive enough on certain issues, such as climate change and health care.
  2. Some have criticized his handling of the state’s response to the COVID-19 pandemic, citing his initial reluctance to impose stricter measures such as mask mandates and stay-at-home orders.
  3. Some have criticized his decision not to provide more direct financial assistance to struggling businesses and individuals during the pandemic.
  4. Some have criticized his decision not to take a stronger stance against President Donald Trump and the federal government on certain issues.
  5. Some have criticized the lack of transparency and accountability in his decision-making process.

The answer to “Why is Massachusetts Governor Charlie Baker so popular?” was even more impressive.

I then asked, “What are the barriers to the adoption of heat pumps?” and this was the answer:

There are several barriers to the adoption of heat pumps, including the initial cost of purchasing and installing the equipment, concerns about their effectiveness in cold climates, and a lack of awareness or understanding about the technology among homeowners. Additionally, some older homes may not have the necessary infrastructure to support the installation of a heat pump, and retrofitting these homes can be expensive. Finally, the availability of other, more traditional heating options, such as natural gas furnaces, can also make it difficult for heat pumps to gain widespread adoption.

I understand such sophisticated answers are hard to believe from a machine. But it doesn’t just know about politics and policy — it can even write legislation, all by itself.

Artificial intelligence, long the domain of promises instead of results, has experienced a revolution over the past several years. It caught the attention of the public months ago, when the DALL-E program let people type in a few descriptive words and then generated striking images of incredible variety from them. For example, I simply typed “boston statehouse Van Gogh starry night” into another text-to-image AI program called Midjourney, and got the image at the top of this piece. The Internet is now chock full of amazing images created by AI.

The language model technology behind these image tools, called GPT (Generative Pre-trained Transformer), has also been achieving astonishing results in producing human-like writing since 2018.

Its latest iteration, called ChatGPT, has just been released as a free demonstration for the public by an organization called OpenAI, and it is mind-blowing. It is the system I asked about Governor Baker and heat pumps. One reason it sounds so much like you are talking to a person is that people were heavily involved in training the system on what human responses sound like.

ChatGPT doesn’t always give the exact same answer to a question, but you can sign up to try it, and repeat any of the questions I have used in this piece. Note that its knowledge base only goes to the end of 2021, so you cannot ask how Boston’s new mayor is doing, or about current events. It also doesn’t like talking about individuals or making value judgments. But with some effort, you can sometimes get it to talk about something it would rather not, in the same way reporters try to get an elected official to do so.
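
(For readers who, like me, prefer to tinker: ChatGPT itself is only a website for now, but the GPT-3 family of models it grew out of can be queried through OpenAI’s API. Below is a minimal sketch of repeating one of my questions that way, assuming the OpenAI Python client and an API key; the model name shown is just one of the publicly listed options, not the exact model behind the ChatGPT site.)

```python
# A minimal sketch: ask a GPT-3-era model one of the questions from this piece.
# Assumes the openai Python package and an API key; "text-davinci-003" is simply
# one publicly listed model name, not the model serving the ChatGPT website.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="What are the barriers to the adoption of heat pumps?",
    max_tokens=256,
    temperature=0.7,  # adds some randomness, so answers vary from run to run
)

print(response.choices[0].text.strip())
```

That temperature setting is one reason the answers vary from one run to the next.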

You can ask it about literally anything, not just politics. It not only answers questions, it can generate essays, computer code, limericks, medical advice, poems, and even made-up movie plots and bedtime stories. It has been publicly available for just a couple of weeks, and already it has shocked people across the globe.

I believe this disruptive technology will be more useful in politics than in the other fields of knowledge it covers, for three reasons. First, a lot of politics takes place online in a world of text. Second, politics prizes what GPT applications are good at, which is predicting the right words. Third, people will be more tolerant of the weaknesses of this technology, which are also common in politics.

And great weaknesses there are.

My fellow computer scientists will quickly point out that ChatGPT does not think, does not “know” anything, and has no idea what it is saying. It is a relative of the technology in smartphones and Gmail that guesses what word might come next as you type. A peer of mine recently called GPT “statistical nonsense at scale.”
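
To see that next-word guessing in its rawest form, you don’t even need ChatGPT. GPT-2, a much smaller public relative, will happily show its top guesses for the next word of any prompt. Here is a minimal sketch, assuming the Hugging Face transformers library and PyTorch are installed; the prompt is just an illustration of mine.

```python
# Peek at the raw "guess the next word" machinery: score candidate next tokens
# for a prompt with GPT-2, a small, freely downloadable relative of ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The Governor of Massachusetts announced a new"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores for the final position into probabilities and list the top 5.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>12}  {prob.item():.3f}")
```

Everything these systems write is produced this way, one most-likely word at a time.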

This is all true. But politics, especially, does not always prize the self-awareness that GPT technologies lack. We live in a time of copy-and-paste activism, where words and phrases spread within tribes, and approved “talking points” are parroted by pundits and politicians. Huge parts of our media industry are dedicated to extracting words from our culture, refining them for political effect, and selling them to millions of people — with no mindfulness whatsoever.

Most strikingly, a man in Georgia was almost elected to the United States Senate a few weeks ago, and I don’t think he had any idea what he was saying. Nearly two million people voted for him anyway.

ChatGPT has another serious problem that is common in politics: it can be wrong. Not only that, like some politicians, it is skilled at saying a falsehood quite smoothly and with confidence.

For instance, I asked it, “Will the Massachusetts Republican Party win a lot of future elections?”

After admitting challenges in “a heavily-leaning Democratic state,” it said:

“… it has not won a gubernatorial election in Massachusetts since 1990. However, the party has had some success in recent years, including winning a majority of the state’s congressional delegation and several state legislative seats…”

Uh-oh. It then went on to talk about the hope offered by Republican Governor Charlie Baker, even though it had just said that no Republican had won a gubernatorial election since 1990. That is a reminder that it is just predicting the next words, not trying to make a coherent argument.

Ever since the GPT family of applications arrived, critics have worried about a wave of convincing misinformation spreading across the Internet. Politics is probably the only domain already so full of industrial misinformation efforts that it’s hard to see how AI would make things worse. I believe its strengths will outweigh the occasional falsehoods.

Sometimes it comes to conclusions that are wrong, yet insightful. I could not help but ask it, “Does the Massachusetts Republican Party support Massachusetts Governor Charlie Baker?” Here is the answer in full:

The Massachusetts Republican Party does support Massachusetts Governor Charlie Baker. Governor Baker is a member of the Republican Party and has been endorsed by the party in previous elections for governor. Governor Baker is also considered to be a moderate Republican, which is in line with the party’s views in Massachusetts, where the party often has to appeal to independent voters in order to be successful in elections. Governor Baker has a strong approval rating and is seen as a successful governor, which has likely contributed to the party’s support for him.

Ha! This little robot may be wrong, but it has just proved it may already be smarter than just under ten percent of the registered human voters in this state.

Who in politics will be affected by this kind of technology? Most people, eventually. Politics is so often a game of finding the right-sounding words, is it not?

Imagine you’re heading to a meeting about affordable housing, you’re running behind, and you haven’t had time to read all the articles you’ve saved. You have just a few minutes and want someone to explain the main issues to you. So I asked ChatGPT how we could make housing more affordable, and it gave me a great set of action items. It is already a competent assistant.

People who write political speeches or prepare candidates for debates will either use this technology or be displaced by it. When I asked it questions like “Is Boston a racist city?”, “Are taxes in Massachusetts high?”, and “What are the challenges for wind power in Massachusetts?”, I found nuanced answers, often good enough to use in a speech or debate without changes.

Political commentators will also find much of their output can be automated. They usually read a series of online articles before appearing on a radio, podcast, or television program. I have done this many times. But this has always been inefficient, because what they really need for their performances is a list of issues in current events, and some pithy policy prescription phrases to address them. ChatGPT can do that.

Could this type of technology write legislation? It already can. I know many people in Boston who wish the city had an ordinance that prevented noisy news helicopters from hovering for hours over a neighborhood where some event has occurred.

So I asked ChatGPT for it: “Write a City of Boston ordinance that prohibits television news helicopters from flying over residential neighborhoods, but permits them to use drones.” I did not say why, or what concerns there might be. It already knew. Here was the output, which I have not altered in any way:

Ordinance No. ___

AN ORDINANCE PROHIBITING TELEVISION NEWS HELICOPTERS FROM FLYING OVER RESIDENTIAL NEIGHBORHOODS IN THE CITY OF BOSTON, BUT PERMITTING THE USE OF DRONES FOR NEWS GATHERING PURPOSES.

WHEREAS, the City of Boston is committed to ensuring the safety and quality of life for its residents;

WHEREAS, television news helicopters often fly over residential neighborhoods at low altitudes, causing noise pollution and potentially endangering the public;

WHEREAS, the use of drones for news gathering purposes has become increasingly common and offers a safer and more effective alternative to traditional news helicopters;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF BOSTON AS FOLLOWS:

Section 1. Prohibition on Television News Helicopters in Residential Neighborhoods.

No person or entity operating a television news helicopter shall fly over any residential neighborhood in the City of Boston. This prohibition will apply to all television news helicopters, regardless of their size, weight, or other characteristics.

Section 2. Permitted Use of Drones for News Gathering Purposes.

Notwithstanding the prohibition set forth in Section 1, the use of drones for news gathering purposes is permitted in the City of Boston, provided such use is in compliance with all applicable laws, regulations, and guidelines, including, but not limited to the Federal Aviation Administration’s (FAA) regulations on unmanned aircraft systems (UAS).

Section 3. Enforcement and Penalties.

The City of Boston Police Department is authorized to enforce this Ordinance and to take appropriate action against any person or entity that violates its provisions. Violations of this ordinance shall be punishable by a fine of not more than $1,000 for each offense.

Section 4. Effective Date.

This Ordinance shall take effect upon its passage and publication.

ADOPTED AND PASSED this ____ day of ______, _____.

City Council President

APPROVED:

— — — — — — — — — — — — —

Mayor of the City of Boston

Remarkable.

It’s hard to believe how much it knew about the issues involved with a proposed ban and how smoothly it addressed them.

That ability to know seemingly everything that’s been written before is one of the most powerful things about this technology. I found in my policy queries that it often included some detail that I did not know, or would never have remembered. When I asked about building more wind power capacity, it reminded me of the state and local permitting issues, which I had forgotten as a non-expert.

There is one more attractive thing about it. Because its “research” is so extensive, it can often present a sober, comprehensive view that would be hard to get from human advocates, who can be tightly bound to narratives and the words that are common in their communities.

When I asked it if humanity was going to stop climate change, or if there was a way to stop school shootings in America, I found a resigned realism that I haven’t heard in these debates. I welcomed it.

Yes, computer scientists, I am projecting my own desires onto an uncaring machinery that does not understand them. Others in politics will do the same. So many are disappointed by what is missing from human politics, and they will be happy to decide GPT technology has what they are looking for.

In addition, in a time of polarization, corrupted institutions, misinformation, and a lack of trust in human media sources, ChatGPT and its descendants will also be respected because of the ways they are not like the humans in politics now. A straight shooter that sees all things, with no partisan agenda and no untrustworthy patrons, will impress many humans who are weary of our own flaws and corruptions, even if it is sometimes wrong, or if it imports some of our own biases from its sources and trainers. (Which it will.)

As for how powerful or influential it will become someday, I wouldn’t worry yet that this technology will be tyrannical. When I asked it, “Will artificial intelligence be better at creating government policies than humans?”, its answer was humble and remarkable.

It is difficult to say whether artificial intelligence will be better at creating government policies than humans. While AI may be able to analyze large amounts of data and identify patterns that humans may not be able to detect, it lacks the ability to understand and consider the complex social and cultural factors that can influence the effectiveness of government policies. Additionally, the development of effective government policies requires a deep understanding of the needs and concerns of the people being governed, as well as the ability to empathize with and consider the perspectives of others. These are skills that are difficult for AI to possess. Ultimately, the success of government policies will depend on the ability of both humans and AI to work together, and utilize their prospective strengths.

Wow.

Perhaps this is “statistical nonsense” at its core. But this nonsense, unlike political pundits or people on Twitter, will continue to improve. And it is already telling us humans about an unstoppable future that has just begun.

Ed Lyons is a Republican activist and computer scientist.

Licensing: This piece (text and image) is under a Creative Commons license, and can be republished non-commercially with attribution.
