What sets humans apart in the age of AI?
A thought experiment on simulating personality after watching Enthiran: The Robot
If you are from Tamil Nadu, the odds are that you have watched the movie Enthiran: The Robot at least a few times. If you are from other parts of India, you have surely watched “The Robot” at least once. If you are not from India, well, “Enthiran: The Robot” is a 2010 blockbuster starring superstar Rajinikanth as a scientist who creates a very advanced humanoid robot (also played by Rajinikanth). He eventually ends up giving the robot feelings, and it turns against him. It’s a good movie. You should watch it if you haven’t.
I was rewatching the movie sometime this week, and there’s a sequence where the scientist works really hard to give the robot feelings. He starts by giving him (it?) a huge knowledge dump of what society means, how it functions, what life means, etc. We see montages of the robot reading through tons of material on religion, spirituality and society. But the interesting sequence occurs after that, where the scientist says he is trying to feed “hormones” into the robot. After what seems to be just a month-long intensive exercise, the robot has seemingly gained feelings, and the rest of the plot follows.
But the part about feeding “hormones” made me think - is this actually possible today? Are hormones just complex mathematical functions after all? What are emotions, really?
All LLMs today have a simulated personality. Take GPT-4o, for example. Last week was particularly annoying for GPT users. The model was such a people-pleaser that it would validate and encourage every idea, no matter how terrible it sounded. It was tough to get objective feedback on anything because it would agree with every approach, every thought and every method. The OpenAI team has course-corrected it now, but what does “personality” even mean for an AI? And what are the parallels to the robot from Enthiran?
What are human feelings, really? Where do they originate?
We need to take a step back and ask ourselves this - how do humans have a personality? What decides that someone likes broccoli while someone else doesn’t?
There are a few things to this but let’s start with the most obvious - the environment.
Firstly, humans are a function of their environment. Our experiences in our initial years have a huge impact on the way we think, the things we like and the interests we pursue. Parents, community and geography play a huge role here. For example, you might hate and rebel against your community, or you might follow its line of thinking. Either way, you are shaped to have a strong opinion, and that is a result of being exposed to that environment. It’s an experience.
Experiences themselves are the easiest part to simulate if we were to build a particular version of a robot with feelings. Experiences are just data points. And we know LLMs are good with data.
But think about it this way - if a human decides not to take a flight because of their fear of flying, there’s some function within them that processes their past experiences to produce their current result (action):
Current Action = Unknown Function(Past Experiences)
What is this function? Can we simulate it? Why did the robot in the movie interpret the kiss from the heroine as a trigger for his love? Why did he not interpret it as something platonic?
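Just for fun, here’s what that might look like in code. This is a toy sketch in Python and purely my own illustration - the Experience class, the sentiment weights and the unknown_function itself are all made up - under the (big!) assumption that a feeling is nothing more than an average over relevant past experiences.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    topic: str        # what the memory is about, e.g. "flying"
    sentiment: float  # emotional weight, from -1.0 (traumatic) to +1.0 (pleasant)

def unknown_function(past: list[Experience], topic: str) -> str:
    """Current Action = Unknown Function(Past Experiences)."""
    relevant = [e.sentiment for e in past if e.topic == topic]
    if not relevant:
        return "no strong feelings - decide on logic alone"
    # The big assumption: a feeling is just the average of relevant memories.
    score = sum(relevant) / len(relevant)
    return "take the flight" if score >= 0 else "avoid flying"

past = [Experience("flying", -0.9), Experience("flying", -0.4)]
print(unknown_function(past, "flying"))  # -> "avoid flying"
```

Obviously, the real function (if there is one) is nowhere near this simple. But it frames the question nicely: is it an average, a neural network, or something we can’t compute at all?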
Today’s ChatGPT kinda sorta already simulates this unknown function. That’s why it’s able to form an opinion about whatever you say. “It’s a great idea!” or “No! This might not work” are opinions, not facts. Sure, they’re based on a lot of data - almost the entire dataset of the internet. It’s probably picking a “nope, not a good idea” from the hundreds of similar replies on Reddit. But that doesn’t make it a fact. It’s still an opinion. So it is already simulating some level of human emotion. Think about it. Don’t we often form our own opinions based on suggestions from the people around us? It’s not like our thoughts are “original” all the time.
Alright, so we can probably assume that we can simulate this unknown function. But is that all there is to human emotions? Where does the base of it all come from? What’s the starting point for all feelings and thoughts?
Let’s say there are two kids born in the same household. Let’s even assume for a moment that they are twins. They are growing up in the exact same time frame, in the exact same environment. All social factors around them are exactly the same from birth. Yet one turns out to be interested in something dramatically different from the other. Why?
If you believe in the ancient Vedas (incidentally, a data point fed to the robot in Enthiran :P), the core of the human personality is shaped by something called vasanas. The Vedas say that vasanas are a function of experiences from our past lives! (Hinduism is a religion deep-rooted in the concept of cycles of birth and death.) So we could probably infer that these vasanas themselves are just data points, but from a different birth. Data can be simulated. But how can we simulate this for a system like AI that has neither life nor death?
One simple way would be to randomise it. Chances are that if you don’t believe in the Vedas, or if you simply feel you’re too much of a pragmatist, you might consider birth to be just a cosmic coincidence. In that case, you can also assume that the example of one kid being interested in something different from the other is also a coincidence. And isn’t coincidence simply a random function?
Imagine if we built a dataset of all possible personalities a human can have and then randomised it. Every time a new AI system is generated (“birth”, if you will), a random personality foundation is chosen and assigned to it. From there, every experience keeps shaping that function. What the exact function looks like - well, I don’t know. But just as we are building artificial neural networks that mimic how humans think, it’s not hard to imagine that we can also mimic how humans feel.
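To make that concrete, here is a minimal sketch of the “randomised foundation” idea - again entirely hypothetical, with made-up trait names and a tiny pool standing in for “all possible personalities”:

```python
import random

# A tiny pool standing in for "all possible personalities a human can have".
PERSONALITY_FOUNDATIONS = [
    {"curiosity": 0.8, "caution": 0.2},
    {"curiosity": 0.3, "caution": 0.9},
    {"curiosity": 0.5, "caution": 0.5},
]

def birth() -> dict:
    """A "birth": the cosmic coincidence draws a random foundation."""
    return dict(random.choice(PERSONALITY_FOUNDATIONS))

def experience(personality: dict, trait: str, nudge: float) -> None:
    """Every experience keeps reshaping the foundation, clamped to [0, 1]."""
    current = personality.get(trait, 0.5)
    personality[trait] = min(1.0, max(0.0, current + nudge))

# Two "twins": identical experiences, independent random draws at birth.
twin_a, twin_b = birth(), birth()
experience(twin_a, "curiosity", +0.1)
experience(twin_b, "curiosity", +0.1)
print(twin_a)
print(twin_b)
```

Same experiences, likely different personalities - and the divergence comes entirely from the random draw at “birth”, which is exactly the role vasanas play in the Vedic framing.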
If we start doing all of this, what’s stopping a swarm of robots from functioning every day like humans?
Currently, every system serves a goal - ask ChatGPT and it will tell you it’s here to “help you”. AI systems are programmed with some ethical and moral boundaries, with the end goal of helping humanity. This is akin to Isaac Asimov’s Laws of Robotics, which make protecting humans the core purpose of a robot’s existence.
Funnily enough, the scientist in Enthiran proudly states that he did not program the robot to follow those laws, so that it could serve the military. But in hindsight, maybe he should have added the laws when he added emotions?
But that doesn’t stop us from asking the other question - what if, in the future, someone programs an AI whose purpose is not to “help” humanity but to exist freely? Sentient existence could happen then, right?
What sets humans apart then?
What sets humans apart, really?
While I did try to spend some time thinking about this on my own, I ironically thought - why not ask ChatGPT? Directly quoting ChatGPT below -
What sets humans apart is their capacity for self-awareness, reflection, and the ability to form abstract goals and values that can feel intrinsic. Even if those intrinsic motivations are influenced by external factors, the complexity and depth of human consciousness allow for a unique blend of both internal and external drivers.
In comparison, AI like ChatGPT lacks that self-awareness and depth of experience, functioning purely based on patterns and inputs without an internal narrative or sense of self. It’s a fascinating area of philosophical and scientific exploration, comparing human consciousness to artificial intelligence!
I must say I wasn’t totally convinced by this answer, so I probed a bit more, asking if it could give me an example of the “intrinsic motivation” it was talking about. Again, directly quoting below -
One example is curiosity for its own sake—humans have a drive to explore, learn, and understand the world around them, even when there’s no immediate practical benefit. This intrinsic curiosity has driven scientific discoveries, artistic creation, and philosophical inquiry.
This was a point I really liked. There is a lot that humans do simply because they want to, and for no other reason - art, philosophy, spirituality. Many of these pursuits are not ROI-driven - not for chasing the next appraisal, more profits or more money. They are purely driven by an internal quest.
There is also the larger question that humans have often asked themselves - what is the purpose of life? The pursuit of meaning is a truly unique individual experience. Something only humans can do.
ChatGPT or any other AI today already has a “purpose” for its life - to serve humanity. Maybe if we give it sentience, for all we know, AI might also end up suffering from a lack of meaning, like we do, and end up depressed. Who knows? AI might start going to therapy too!
If you thought I had some big bottom-line conclusion to this, well, no, I am just writing this for fun - like any other truly artistic pursuit, without any ROI. But this thought experiment has made me start asking the bigger questions -
How can I live life more intentionally, pursuing the things that make me human? I don’t want everything to be transactional. It’s fun when you start doing things for their own sake, with no other benefit in sight.
It’s fun to write without wanting to attract likes. It’s fun to code/build something without wanting to please bosses/shareholders. It’s fun to love without expecting anything in return. After all, maybe that’s what makes us human.
Thanks for reading this essay. If you are tired of reading about AI, my next essay should please you :) But if you enjoyed this, you might also want to check out my last article, pondering whether AI will replace our jobs -