Writing Tips (Sometimes): Let's Talk About AI

You may have noticed that a lot of people seem to be talking about AI. You may have also noticed, at least in the author community, that opinions on it are not just divided, but viciously so. In just the last couple of weeks, I saw a post by one author claiming that anyone who thinks AI is okay has failed some kind of ethical test and would be judged by her accordingly. And I saw a Facebook post by a different author suggesting that anyone who is anti-AI is a Luddite, an idiot, and not worth listening to.

You’ve probably also noticed that the majority of news articles on the topic use extremely emotionally heightened language. I’m looking at a list of articles now, and here are a few of the phrases:

  • “Gaslighting, love bombing, and narcissism”

  • “Ring Alarm Bells on Rogue AI”

  • “Unnerving Interactions with ChatGPT”

  • “ChatGPT AI robots writing church sermons causing hell for pastors”

  • “The monster is already on the loose”

  • “Get ready for a sound revolution”

  • “AI comes for the creative class”

The problem with emotionally heightened rhetoric is not that it takes a particular angle on a new development in the world; it’s that it tells us how to feel. And when we don’t recognize the rhetoric being used on us, it takes away our ability to decide for ourselves how we want to feel.

Now, since I don’t intend for this post to be a rhetorical analysis of news articles, lol, I’m going to simply say this: pay attention to the language. Because so often, the language is used to grab your attention and emotions, but the narrative it’s weaving is false. And you’ll notice that anytime anything new happens, this is the language used to talk about it. Not just AI. Everything.

Now that I’ve got that gripe out of my system, let’s actually talk about AI.

First, let me be clear that I’m what I call “AI Curious.”

The picture at the top of this email was made using MidJourney. I used ChatGPT to answer a few questions I had about AI (ever wonder how support vector machines work?), and I used Google’s search engine to fact-check a few things I thought were true but wasn’t certain about. I also used ProWritingAid to edit this post.

And I now run a brand-new discord server about AI, which you can click here to join.

I think AI is fascinating and exciting, and I don’t think anyone who is curious about it or interested in it is evil. That said, I don’t think anyone who is uncomfortable with it or prefers to avoid it is dumb, either.

AI is an extremely complex and nuanced topic, and so much of what you see is people applying binary conclusions to something that is anything but binary. You’ll see people saying “good/bad” and “yes/no,” but AI is not as simple as that.

So what is AI?

"Artificial Intelligence" is an enormous term that encompasses a wide variety of different technologies. It is not only things like MidJourney or ChatGPT, it is also things like search engines, GPS, and email spam filters. It is editing software, robots, and algorithms. It is autonomous vehicles, space travel, social media, Spotify, Amazon, Netflix, Alexa, and Siri.

It is many things—and plays a role in many of the tools you likely use in your day-to-day life, whether or not you realize it.

Have you heard of a neural network? Do you understand the difference between heuristics, decision trees, and support vector machines? How about reinforcement learning? Do you know the difference between supervised and unsupervised learning? Do these words mean anything to you: classification, regression, clustering, association, dimensionality reduction?

These concepts are all part of understanding how AI works: where it gets the data it’s trained on, its advantages and disadvantages, and the potential pitfalls of the tech. And while you don’t need to know every detail and every nuance, the more you do know, the more educated an opinion you can form on the topic.
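
If a few of those terms are fuzzy, here’s a minimal sketch in Python that contrasts two of the big ones: supervised learning (classification, where the model is shown the right answers) and unsupervised learning (clustering, where it isn’t). It uses the scikit-learn library, which is my choice for illustration here, not something any particular AI product is confirmed to use:

```python
# A minimal sketch contrasting supervised learning (labels provided)
# with unsupervised learning (no labels). Uses scikit-learn's built-in
# iris dataset; nothing here reflects any specific commercial AI product.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)  # 150 flowers, 4 measurements each

# Supervised: classification. We show the model the right answers (labels)
# and it learns to predict them for flowers it hasn't seen before.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised: clustering. No labels at all; the model just groups
# similar flowers together on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [list(clusters).count(c) for c in range(3)])
```

The point isn’t the code itself; it’s that “learning” just means fitting patterns in data, with or without labeled answers, and knowing that much already helps you evaluate claims about what AI can and can’t do.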

Because AI, like any other technology, has pros and cons. It can be used for good things, and it can be used for bad. It can help people with disabilities navigate the world. For example, someone I know with dyslexia is using ChatGPT to help them write professional emails; I know several writers with aphantasia using writing tools to help them write descriptions with more depth and emotion; the ability to connect on platforms like social media can allow people confined to their homes to experience less isolation and loneliness.

But AI can also be used by corporations to exploit people. And when a tool is trained on biased or racist data, it will produce biased and racist outputs.
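
To make that concrete, here’s a toy sketch of “biased data in, biased predictions out.” The data is entirely synthetic, something I invented for illustration; it imagines a hiring model trained on historical decisions that penalized one group:

```python
# A toy sketch of "biased data in, biased predictions out."
# The data here is purely synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)   # 0 or 1: a demographic proxy feature
skill = rng.normal(size=n)      # the thing hiring *should* be based on

# Historical labels encode bias: group 1 was hired less often at equal skill.
hired = (skill + np.where(group == 1, -1.0, 0.0) + rng.normal(0, 0.5, n)) > 0

# The model dutifully learns that bias as if it were a real pattern.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned penalty for group 1:", model.coef_[0][1])  # strongly negative
```

The model never sees the word “bias”; it simply learns the pattern baked into the labels, which is exactly the problem.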

My point is that it is unwise to go all in on AI, or to dismiss it entirely, without understanding the full scope of the technology. If you don’t understand how it works, that’s okay. You can learn. I gave you some great keywords above to get started with.

The second point I want to make is about ethics, because the number one objection to AI I see tossed around in the author community is that it’s “unethical.”

Now, you may not know this about me, but I’m a hobbyist philosopher. Sit me down sometime and ask me about absurdism or my essentialist definition of “dogness,” or tell me your feelings about Wittgenstein or Simone de Beauvoir, and I will be entertained for hours.

What I’ve noticed in these conversations about AI is that people are conflating ethics with “something that makes me uncomfortable” or “something that goes against my own moral code.” Sometimes these things align, but sometimes they don’t.

Ethics is the study of “good and bad,” born from Socrates’s attempt to answer the question, “How should we live?” It’s an entire discipline that has been discussed, debated, and argued by some of the most intelligent people in history for more than two thousand years, and the thing is, people still don’t agree on it.

To flatly say “AI is ethical/unethical” once again places a binary conclusion onto a concept that is immensely complex. Certain things about AI are definitely unethical. For example, I would argue that corporations selling each other data that is owned by someone else (looking at you, Findaway Voices/Spotify and Apple) is definitely unethical. But do I think the machines trained on the data are inherently unethical? No.

I also want to mention here, before someone yells “theft!” or “What about the artists?”, that I am both a visual artist and a writer. I have posted many drawings online and written millions of words that are freely available to read. And I know for a fact my work has been used to train one machine or another. And I’m not upset about it. Why? Because I understand how the technology works.

Am I upset Findaway sold my audiobooks to Apple? Yes.

But these are two separate conversations: one is about the exploitation of creators by corporations; the other is about machine learning. There is overlap, but they are not one and the same.

Again, as I don’t intend this to be an essay on ethics, let me leave you with this: if you are concerned about the ethics of AI, brush up on your understanding of ethics first. Start by reading the Wikipedia article on ethics or watch Hank Green’s Crash Course on philosophy.

The last thing I want to say about AI (today, haha, because I’m sure I’ll keep getting questions about it) is that right now, everyone is hyper-fixated on AI art tools like MidJourney and Stable Diffusion, or text bots like ChatGPT.

But AI has been around for decades. It’s not actually new. Have you ever considered what other tools you might use that rely on AI, and you didn’t even know it?

Let me give you a few examples of technologies that use AI:

  • search engines like Google, Bing, and DuckDuckGo

  • social media (all of it)

  • GPS tools like Waze, Google Maps, and Apple Maps

  • Alexa and Google Home

  • website-building platforms

  • ProWritingAid and Grammarly

  • entertainment platforms like Spotify and Netflix

  • Amazon and other online ecommerce retailers

  • anything that gives you recommendations

  • the operating system on your computer

  • the production of cars, and autonomous features like auto-parking and automatic braking

  • even your Roomba

AI has also made its way into some sectors you might not expect. For example, AI is used in agriculture to analyze crop data and predict weather patterns. In sports, it’s used to analyze player performance and develop game strategies. Financial services companies use it to detect fraud, manage portfolios, and perform risk assessments. It’s used for traffic management, route optimization, and autonomous vehicle control. It’s used in the energy sector to optimize energy consumption and reduce waste. And don’t get me started on the Mars Curiosity rover (insert squeal of delight).

And because software is often proprietary, a company doesn’t have to tell you if they use AI. You could be using tons of tools that make use of AI technology without even knowing it.

Where do you think all these tools got the data to train their machines? Google Search, for example, constantly crawls the web: its bots fetch pages, follow links, and index the text and images they find, because that’s how it can give you the best results when you type in a search query. We know this about search engines. But few other companies have revealed where their data came from, and we know big companies buy and sell datasets all the time. So often, we just can’t know.
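
If “crawling” or “scraping” sounds mysterious, here’s a toy crawler sketch. It assumes the third-party requests and beautifulsoup4 packages, and it’s a classroom illustration of the general idea, nothing like any real search engine’s actual pipeline:

```python
# A toy crawler: fetch a page, store its text, follow its links, repeat.
# Assumes the third-party requests and beautifulsoup4 packages are installed.
# This is a classroom illustration, not how any real search engine works.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 5) -> dict[str, str]:
    """Visit up to max_pages pages and return {url: snippet of page text}."""
    seen, queue, pages = set(), deque([start_url]), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(html, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)[:200]  # first 200 chars
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))  # absolute URL of link
    return pages

# Example: crawl("https://example.com") returns a small {url: text} map.
```

Real crawlers add politeness rules (like honoring robots.txt), deduplication, and massive distributed storage, but the core loop really is this simple: fetch, read, follow links, repeat.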

AI is everywhere. Not only that, it has made people’s lives immeasurably better in ways they don’t even realize.

If I had to offer a specific conclusion on this topic, it would be this: AI is not bad. AI has the potential to do immense good, and already does. But what we need is regulation: top-down regulation of corporations, transparency into how these technologies are developed and where they are used, and compensation for the data being bought and sold.

But that gets into politics, so I’m going to stop here.

I’ve received a couple of questions about the use of specific AI tools such as Google auto-narration, MidJourney, and writing tools, so I’ll talk about those soon.

But I’ll leave you with this: don’t be afraid. But put in the work to draw your own conclusions on the topic. The robots seem like they’re smart, but they don’t even come close to the power of the human mind.

We’re all gonna be okay.
