Book Review: Culpability
Here’s my review of Culpability by Bruce Holsinger. Leave a comment if you’ve read it, plan to read it, or have any book recs to share. And don’t miss today’s writing prompt.
Culpability by Bruce Holsinger is a novel that will make you stop and reflect on the real consequences of AI in our world today, and on who (or what?) is responsible when tragedy strikes. It is a compelling domestic drama with a dose of philosophy, and it offers a few twists and turns that will keep you flipping pages.
The novel opens with a bang, literally, as a family of five, including the narrator, is involved in a tragic car crash in which the drivers of the other vehicle perish. The eldest son, who is about to start college, is behind the wheel of the family's autonomous minivan when the accident happens, while both parents are busy working on their devices and the younger daughters are on their phones. The remainder of the novel explores who is at fault for the fatal crash and each family member's level of culpability, since each of them played a part in what happened.
As the novel unfolds, readers confront near-impossible questions about who or what is ultimately responsible for the safety and security of AI, for what it's programmed to do (or could, or should, do), and for the people who use it. Is it the AI and tech industry as a whole? The individuals who write the algorithms that power AI? The companies and organizations that build the systems and devices those algorithms power? AI knowledge or ethics experts? Law enforcement or government entities? Every individual who uses AI-operated systems and devices, along with those who benefit from them in some way? The AI itself? Or do all of the above share culpability for what happens when we use or rely on AI, particularly when it leads to undesirable outcomes? And if so, how is that shared culpability understood legally and socially?
Centering the plot on the private dynamics of a specific, seemingly healthy family was a wise choice, as it lets readers explore each member's struggle with their own culpability for what happened inside the autonomous minivan right before the crash, and what that struggle means for their relationships with one another. After all, who would you feel more beholden to, and responsible for, than your own family, especially as a parent?
While the novel is told primarily from the father Noah's perspective, we also get periodic insight from other members of the family. We read excerpts from the mother Lorelei's work in the AI industry, which engage more directly with the novel's broader philosophical questions. And we see the middle child, Alice, chatting with an AI bot about what happened, a reminder of just how human-like AI can seem. Combined, these perspectives encourage readers to arrive at a more nuanced view of AI by the end of the novel.
The novel also has a few twists and turns you may not see coming, fueled by teenage love, spousal jealousy, sibling rivalry and affection, industry secrets and intrigue, and an ongoing police investigation. These twists keep you flipping pages without ever straying from the novel's central questions about AI and culpability. I confess I was anticipating a more dire or morally ambiguous ending for the family and was surprised not to get one, though other readers may disagree.
Overall, I would recommend this novel to readers who want a nuanced look at the culpability of AI and of those who design, distribute, and use it, but who don't want to get bogged down in technobabble. It would also be a compelling choice for a book club discussion.
Notable passages from the book:
“Like an algorithm, a family is endlessly complex yet adaptable and resilient, parents and children working together as parts of an intricate, coordinated whole. Sure, there might be some bugs in the system, a glitch or two. But if you simply tweaked the constants from time to time, life would continue to unfold in its intricate yet predictable patterns, an endless cycle of inputs and outputs subject to your knowledge and control.”
“The abstract discussion of such harrowing scenarios is one thing. Actually coding them into the algorithm of a three-ton sports utility vehicle is quite another. When it comes to the workings of AI in the real world, the ethicist herself cannot escape the prison house of culpability.”
“In the face of Artificial Intelligence, we are all in something like the position of Adonis, confident in our invulnerability and oblivious to peril. Yet we are also Venus, whose role in the myth is to voice a kind of maternal protectiveness and fear. The real danger is that we will let our fear overwhelm our judgment and outpace our strategies to control the forces imperiling our future. Perhaps, then, we need to learn better ways to fear.”
“This behavior is known as anthropomorphic projection. We want our helpful machines to be like us, and so we tend to project onto them our ways of understanding the world. Yet such human-seeming systems comprise a small fraction of the AI shaping our everyday experience. Even as you read these words, there are AI systems at work all around you, with profound bearing on the disposition of your food, your money, your shelter, your safety. They manage investment portfolios, coordinate global supply chains, and keep networks secure. They direct air traffic, drive trucks and cars, detect fraud, and optimize irrigation schedules. Increasingly, they fight wars. And there is almost no one teaching them how to be good.”
“With AI, anyone who pretends we can know the future good of a present-day investment in that sector is a fool. Moral outcomes are always uncertain, no matter how much you dress up your investment with a benevolent halo.”
“While humanity did not create itself, in Sartre’s understanding, we did create machines. And the question of the freedom of these machines—their autonomy—is shaping our present moment in ways most of us can neither see nor understand. Is a machine responsible for everything it does? Of course not. It is we who are responsible for the consequences of the very freedom we grant to these objects of our creation. In granting autonomy to an algorithm, we are not condemning the machine to be free. Rather, we are condemning ourselves.”
“Algorithms face no such consequences for their misbehavior, either societal or emotional. Punishment, guilt, culpability are alien to them. There are no moral qualms in an algorithm. Yet without acknowledgment of wrongdoing, how can there be regret? Without self-consciousness of guilt, how can there be remorse? And without regret and remorse, how can there be moral growth?”
Subscribe to receive future book reviews in your inbox, along with other engaging posts. And don’t forget to leave a comment with your own book recommendations.
© This work is not available for artificial intelligence (AI) training. All Rights Reserved by K.E. Creighton; Creighton’s Compositions LLC.
Want to express your appreciation for this post and writing prompt?
My writing and I are fueled by loyal readers, caffeine, and kind words, so I appreciate any support you can offer that keeps me writing. Thank you so much!
Today’s Writing Prompt
Writing Prompt: Autonomous Driver
Write a piece of flash fiction about an autonomous vehicle. Or write a journal entry about an experience you’ve had with or inside an autonomous vehicle.
Writing Tip:
Before you begin writing, consider: Who owns the autonomous vehicle? Is it being used? If so, who is riding in it and where are they going?