Authenticity in the era of AI
There’s a great sentiment from the HBO show Westworld, where Dr. Ford (played by Anthony Hopkins) reflects on the distinction between humans and machines (hosts):
“The self is a kind of fiction, for hosts and humans alike. It's a story we tell ourselves. [...] There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can't define consciousness because consciousness does not exist. Humans fancy that there's something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, you're not missing anything at all.”
Now, this post won’t follow the philosophical rabbit hole that quote opens all the way to the bottom. It will, though, explore some of the intellectual territory around it.
Being genuine
Let’s think through the emphasis we place on genuine, human-crafted content compared with AI-generated content. It seems that many of our interactions carry layers of meaning far beyond the raw content. Think about:
Personal letters
This blog
CVs
Hand-crafted goods
Meeting a client in-person
Any kind of social work with a focus on empathy
Two stories along these lines:
I was in Japan recently, and some fellow tourists went to a whiskey distillery to try some of the raw ingredients. They could get the real finished product, sure, but anyone on Earth could order that online. This was something they could get directly from the source and nowhere else.
A colleague of mine left the firm last year, and their farewell email was far more verbose than their typical communications. The grammar was perfect, but the tone was completely uncharacteristic of them - it was clearly written by ChatGPT.
In both instances, what we value is something genuinely unique and from the heart, and we notice immediately when it’s missing.
That’s intuitive, but quite intriguing when you think about it from different angles. Why do we, as humans, want specifically human compute to go into the production of something? One argument is that through our interactions with the world we imbue what we make with a part of ourselves. Each creation is a kind of window into our minds, and that has intrinsic value, one deeply rooted in our human nature and not something we can truly be decoupled from.
Performance, productivity, and outcome-oriented thinking
We know we value human effort, but consider the following scenarios where someone might want to apply an AI:
1. What if your efforts are greater than those of the competition, but you simply weren’t born with the same talent?
2. What if you can’t speak without causing great offence?
This is where it gets interesting. On a grander scale, how you answer those questions effectively answers a bigger one: should our technology follow the course set by Darwinian competition, or override it in pursuit of equity? Are those in (1) and (2) less deserving of merit?
Taking this to an extreme, you can imagine a point where AI outpaces every single one of us in every domain, and questions of merit go out the window entirely. Imagine, for instance, that my firm has a superintelligence producing an RFP response, our competition has their own superintelligence, and the prospective client has a superintelligence on their procurement team. Do the inputs of any of our team members really even impact the outcome?
But let’s bring this back to Earth. Why would productivity be the key metric that we measure ourselves against? We’re always asked to consider what would be said of us in our eulogy, rather than what’s stated on our CV.
My only challenge to that sentiment is this: we are innately problem-solvers. Knowing there is a better way drives us to pursue it. That drive is what distinguishes us from the Luddites and, paradoxically, much like our emphasis on authenticity, it is impossible to decouple from our nature.
The hypocrisy of genuine inspiration
For all the credit we give ourselves and humanity, we’re subject to the very same flaws we point out in AI. We have a large vocabulary of words - themes, styles, topics, and so on - all indicating a kind of similarity of creation. Language itself is something we learn through exposure and repetition. We are only ever building on the shoulders of giants, part of a continuum. The idea that our minds can truly create something from nothing has little to substantiate it.
So attribution itself falls into question. If humans were immortal, would we need to pay royalties to all our forebears for our inventions? This is, of course, an overanalytical framing - as humans, we should naturally favour humanity. My point is simply to acknowledge the bias rather than dress it up as something else.
Thought experiment: a merged mind
Say Elon’s dreams come true: Neuralink makes some serious strides and offers real-time integration with an artificial superintelligence. With a non-invasive procedure you can seamlessly attach it to your own brain, augmenting your mental capabilities across all domains (communication, arithmetic, creativity, and so on) by 15%.
At this point, are the letters you write still genuine? Is it really you? Is it 85% you?
Later, an update comes out. Your intellect is now 50% AI. Following from the above, can we now say your interactions are 50% genuine?
Much further into the future, the technology has advanced dramatically. By the time you are on your deathbed, the AI accounts for 99.9% of your consciousness. Then you pass away. What of you remains?
It’s an interesting variant of the Ship of Theseus, framed in terms of both consciousness and human attribution. And it’s not far from the truth.
Can you really say that everything you do is 100% you?
So, does real matter?
Being human, for now, I think it does. Maybe our attitudes will change over time. Maybe the technology will stagnate, and we won’t have to confront the question so directly. For now, it’s worth exploring as we think through how we apply technology and the impacts it can have.