The Self-Driving Trolley Car

A self-driving car speeds down the road. The human inside is barely paying attention—probably scrolling, answering emails, or daydreaming about dinner. The AI is doing what it does best: making micro-adjustments, scanning the road ahead, processing more data in a second than a human driver could in a lifetime.

Then, everything goes wrong.

An unexpected obstacle appears. The AI calculates every possible outcome in milliseconds, and none of them are good. It has to make a choice:

  • Swerve into a concrete barrier, almost certainly killing the driver.

  • Stay the course and plow into a group of pedestrians.

A human might slam the brakes, jerk the wheel, make a desperate, instinct-driven move. But the AI doesn’t panic. It doesn’t hesitate. It simply follows its programming.

And that’s where the real question begins.

Who decides how self-driving cars should be programmed in a life-or-death situation?

Should the car prioritize the person inside—the one who bought it, trusted it, and expected it to keep them safe? Or should it act for the so-called “greater good,” sacrificing one to save many? And if that’s the case, would anyone actually be willing to step inside a car that might be programmed to sacrifice them?

But maybe there’s an even bigger question: if humans struggle with these ethical choices—if we hesitate, if we panic, if we make mistakes—should we really expect AI to be better at determining the “right” choice than humans?

The Modern-Day Trolley Car Problem

This is, at its core, a modern twist on the classic trolley problem—that endlessly debated ethical question about whether it’s better to let five people die or pull a lever to actively kill one.

A utilitarian approach says the AI should prioritize saving the most lives, which means the self-driving car should swerve into the barrier, even if that means sacrificing the driver. Five lives outweigh one, mathematically speaking.

But there’s a catch. The driver is the customer. The person who paid for the car, who trusted it to be their guardian on the road. No one buys a vehicle expecting it to decide they’re expendable.

If self-driving cars are programmed to protect the most people rather than their owners, would anyone even want to use them? Would you?

And what if the answer isn’t clear-cut? Should AI mimic human instinct—self-preservation, hesitation, split-second irrational choices? Or should it hold to some higher ethical standard, making the hard calls even when humans wouldn’t?

This isn’t some far-off, science-fiction thought experiment. Automated driving systems are already making split-second decisions on public roads. And so far, different companies have taken different approaches.

Some automakers lean toward prioritizing the driver—because, let’s be real, no one wants to buy a car that might kill them in a crisis. But government regulators might argue that public safety should come first, meaning AI should be programmed to protect the greatest number of people, even if that means sacrificing its passenger.

Ideally, the cars would be safe enough that even in the worst accidents, the driver would be protected. But there will always be situations the programmers and manufacturers didn’t predict, which means some set of ethical rules has to be built into the software ahead of time.
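To make that concrete, here is a deliberately simplified, hypothetical sketch in Python of what such a rule might look like: a policy that scores each candidate maneuver by estimated harm, with an `occupant_weight` parameter standing in for the ethical choice described above. Every name and number here is invented for illustration; no manufacturer has published its decision logic, and real systems reason over noisy sensor data and probabilities, not clean labels.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action with its estimated consequences."""
    name: str
    expected_occupant_harm: float   # probability-weighted harm to people inside the car
    expected_external_harm: float   # probability-weighted harm to people outside the car

def choose_maneuver(options: list[Maneuver], occupant_weight: float = 1.0) -> Maneuver:
    """Pick the maneuver with the lowest weighted harm score.

    occupant_weight = 1.0 treats everyone equally (the utilitarian reading);
    occupant_weight > 1.0 encodes a driver-first policy. This illustrates the
    shape of the decision, not a real control algorithm.
    """
    def score(m: Maneuver) -> float:
        return occupant_weight * m.expected_occupant_harm + m.expected_external_harm

    return min(options, key=score)

if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", expected_occupant_harm=0.9, expected_external_harm=0.0),
        Maneuver("stay the course",     expected_occupant_harm=0.1, expected_external_harm=4.5),
    ]

    # Equal weighting: the math favors sacrificing the one to save the many.
    print(choose_maneuver(options, occupant_weight=1.0).name)   # swerve into barrier

    # Weighting the occupant heavily flips the decision.
    print(choose_maneuver(options, occupant_weight=50.0).name)  # stay the course
```

The uncomfortable part isn’t the code. It’s that someone has to pick the weight.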

Mercedes-Benz has already stated that if forced to choose, its vehicles would prioritize protecting the driver. The reasoning is simple: people won’t buy a product designed to sacrifice them.

But that raises even more questions.

  • If different manufacturers program AI differently, who is responsible when things go wrong?

  • If every company sets its own ethical framework, will customers start picking cars based on which brand values their lives the most?

  • And maybe the biggest question of all—should we even be outsourcing these choices to machines in the first place? Or to corporations, for that matter?

Because let’s not forget the social contract problem.

Right now, people accept that being a pedestrian near traffic comes with some risk—because human drivers are unpredictable. But if self-driving cars are supposed to be flawless, should pedestrians assume they are always protected? Could that lead to riskier behavior—people stepping into traffic because they assume the AI will stop in time?

As self-driving cars move from concept to reality, we’ll be forced to grapple with questions that go beyond convenience or efficiency. Who should AI protect? Who should it sacrifice? Who gets to decide?

Should these decisions be left to private companies, each making their own ethical calls? Should governments step in and set universal rules? And if every manufacturer is making their own choices, how will that shape our roads—and our trust in technology?

The future of transportation isn’t just about speed, safety, or automation. It’s about morality. And whether or not we realize it, the choices we make now will determine who lives and who dies in the world we’re building.

So, if you had to choose, what would you program your self-driving car to do?