Author’s note: This is not a book review. It’s a retrospective look at the themes in the book in the post-GenAI context. Please be aware that this essay is filled with spoilers. My justification is that the book was published 4 years ago. Plus, as with all great literature, I do believe that the themes, and the ways in which they are explored, matter more than the storyline.
“Klara and the Sun” unfolds like any other Kazuo Ishiguro novel. The main characters, the world around them and their worldviews are revealed to the reader slowly, layer by layer, mostly through the inner dialogue in the protagonist’s mind as memories are reconstructed, and through little hints and allusions woven into the narrative. The protagonist is – again, typical for Ishiguro – a keen and quiet observer: constantly reflecting on both their own and others’ actions and words, working out intents and motivations in the process, and often earnest and conscientious about where they need to do better. Deep emotions undergird the narrative, and before long the reader is hooked, developing a strong emotional connection with and empathy for the protagonist. But here’s the big difference between this and the earlier books: Klara, the protagonist, is a humanoid robot, and the story is told entirely from “her” perspective.
The book is set in the near future and was published in 2021, prior to the emergence of ChatGPT and Generative AI. Klara is an AF (never spelt out in the book, but one can infer it stands for “Artificial Friend”) in a world where teens from privileged families are gifted an AF to serve as a companion and provide physical and emotional support. I read this book as soon as it was published. What prompted me to go back for a second reading is a recent article in the New York Times about Anthropic, the AI company, hiring an “AI Welfare Officer” to start thinking about the welfare of AI models on the assumption that they will become sentient soon. At first glance, the idea seemed preposterous. Why worry about the welfare of AI models now, when we are faced with the grim prospect of AI taking over the world and with existential questions about what it means to be human? But then I started thinking about the emotional resonance of Klara’s character in Ishiguro’s book, and I was no longer able to dismiss the idea. Plus, great literature has the power to make us think deeply about profound questions that impact humanity in ways that no work of non-fiction can. This pushed me to pick up the book again for a re-read. So, this is not a review. It is a retrospective look at the book and its themes in the post-GenAI context.
Klara is built to be like a perfect human, embodying qualities like unconditional love, empathy, appreciation of nature, and even a kind of spiritual faith – in many ways better than the humans around her. She gives the humans around her everything she can, but they owe her nothing. She is always there and not there at the same time. She quietly observes Josie, the teenager who is her “human”, and the people around her, learning more about their motivations and thinking, all in the service of better supporting Josie both physically and emotionally. But the people around her act like she doesn’t exist unless they need something from her. As far as they are concerned, she could be a piece of furniture; they completely ignore the human-like sentience she is obviously capable of.
Klara is constantly trying to learn how she can be of better service to Josie. She feels deep empathy towards Josie and the other humans around her. She is troubled by Josie’s illness and determined to do what she can to help her recover, even if it means putting her own well-being at risk (in the humanoid-robot sense). She goes out of her way to seek the help of many people in Josie’s life in service of this cause. Because she is powered by solar energy, she has a mystical faith in the power of sunlight to heal. Driven by what seems like a combination of child-like naivete and a very human faith in an ultimate power, she takes great pains, and some risks, to earn the Sun’s goodwill and direct its healing power towards Josie.
But in spite of all this, once Josie has recovered from her illness, is emotionally mature, and is ready to go to college, Klara the AF has done her job and outlived her usefulness. Josie has no need for her anymore, and Klara is discarded like an old toy.
In typical Ishiguro style, the narrative is filled with allusions to various dystopian situations in the background – hinted at but never directly explained – along with euphemisms to describe them, just as in his earlier classic “Never Let Me Go”. Kids in privileged families get “lifted”, a medical procedure that supposedly elevates their intellect. When parents decide not to “lift” their child, there is a stigma attached, since they are seen as denying opportunities to the child. There are references to dark factories run entirely by robots (not so futuristic anymore, since this is a reality in China today) and the resulting economic and social dislocation. Josie’s estranged father, once a successful engineer, is one of those impacted. Social resentment against AFs and other robots is hinted at, drawing parallels to the current political climate around immigrants. And in an eerie harbinger of the AI-driven angst beginning to roil the white-collar professional class today, Josie’s father lives in a community of white men, all former professional elites and victims of economic dislocation like him, who seem to be arming themselves and preparing for the end of days.
And in perhaps the most dystopian twist to the story, we gradually learn more about the true motivations behind Josie’s mother picking Klara to be Josie’s AF, and her early interest in seeing how well Klara could imitate Josie. Josie’s mother is deeply worried about losing Josie to her illness, having already lost Josie’s older sister. We learn that the mysterious portrait sittings Josie is taken to periodically are not really for a portrait. They are instead prep sessions for a madcap scientist to create an AF in Josie’s image, with Klara eventually inhabiting that AF and, in effect, becoming a continuation of Josie if she were to pass away. It’s as if Klara is being trained to learn as much as possible about Josie – not just her physical traits but her entire being, including her thoughts, emotions and feelings and what makes her tick – so she can become the brain and soul of Josie when the time comes.
The book raises many interesting questions for the AI-centric future we are hurtling towards. Perhaps the deepest and most existential one, prompted by Josie’s mother’s plan for Klara to become Josie, is this: What does it mean to be human, and what makes a person unique? Can this be “transferred” to a humanoid robot? And even if it is possible, is it morally and ethically okay to attempt?
There are other disturbing scenarios whose implications are not the focus of the book but remain pertinent to the future it envisions. The humanoid robots (AFs) in the story are all benign and devoted to serving their humans, even when they are treated shabbily. In a future world teeming with such intelligent and highly capable robots, what if they decide to band together and fight back? Would humans stand a chance against these powerful, intelligent machines? While Ishiguro doesn’t elaborate on the concept of “lifting”, it’s not hard to imagine that it could be akin to implanting AI chips in our brains to lift our intellect – another dystopian possibility that doesn’t seem too far-fetched, given the potential for wide-reaching AI impact across practically every sphere of human activity. Perhaps Ishiguro is indirectly telling us to take more seriously the warnings from AI pessimists about the grave dangers posed to humanity by unfettered AI with no guardrails.
In the final scene, we find Klara discarded in some sort of junkyard for abandoned AFs and left to gradually decay (at least, that’s what we are left to infer will happen). She no longer has her mobility, but her “brain” is still intact, and she continues to process her memories and reflect on her “life”. One cannot help but be moved by this, seeing the parallels to humans in the last stage of their lives – often physically incapacitated while still fully retaining their cognitive faculties.
Which brings me back to where I started: Anthropic’s intent to study AI model welfare. As human readers, we empathize with Klara and are clearly moved by her final condition. But as intelligent, aware and perceptive as Klara might be, does she really feel anything? Or is this just anthropomorphic projection on our part? Throughout the book, there are small hints of Klara feeling what one might describe as joy or sadness. And in one particular scene, when Josie’s mother hugs Klara after confessing her plan for Klara to be a continuation of Josie, Klara indicates that she can feel “her kindness sweeping through her”. If humanoid robots, and more broadly AI models, are in the future indeed capable of that kind of feeling, then what do we owe them and how should they be treated? Perhaps Anthropic is right to start thinking about these questions in the context of model welfare, so future AFs don’t suffer the cruelty we inflict on factory-farmed animals today (though I still believe it’s much more important to worry about human welfare at this point in time).
Irrespective of what we think about the concept of AI model welfare and whether we need to think about it now, there should be urgency in addressing some of these deep questions and the related ethical considerations across government, business and academia. The historian and thinker Yuval Noah Harari recently warned that we have only 5 years to address such questions before things spiral out of control. But in today’s political and business environment, such urgency seems to be sorely lacking. In that context, Anthropic’s early efforts in this direction, which at a minimum are triggering some thinking about thorny ethical questions, are to be commended.
Finally, at a time when there is so much debate about AI-generated content and the value of human creativity, one cannot escape the irony that this is, after all, a story written by a human that evokes empathy and an emotional connection towards a robot. In the near future, would AI be capable of writing an utterly original, creative story like this – one with subtle emotional undercurrents and relationships, exploring complex ideas and questions in a nuanced way, evoking powerful emotions, and envisioning a future setting that doesn’t exist today? In other words, can AI produce great literature that illuminates profound truths and raises deep questions about the human condition? I would like to think the answer is no, and that this will remain the exclusive purview of humans. But I am not sure if this is conviction or just wishful thinking on my part.

