Do (emotional) androids smile when dreaming of electric sheep?
Earlier this year, Blake Lemoine, a developer turned overnight celebrity, claimed that a chatbot he had been testing at Google was sentient; the Mountain View giant fired him shortly after he went public with the claim. It’s a curiosity-spiking episode, so let’s follow up on it: do we have the data to back up Lemoine’s claim? And if AI can be sentient, à la Ghost in the Shell, can it have feelings?
In broad strokes, artificial intelligence (AI) isn’t too complicated to grasp: it’s a computer program that enables machines to reason and find solutions to problems. Break AI down into its constituent parts, however, and you find that its algorithms are essentially sets of rules that allow computers to analyze data, learn from experience, and make rational decisions within given constraints. Since these rules can’t (yet) replicate emotions, does that mean we have a long way to go before computers can feel? Can AI even get there?
For this post, we’re considering machine learning as a subfield of AI that provides computers with autonomous learning capabilities. It is a form of statistical analysis that allows computers to draw inferences and make forecasts from data. Algorithms designed for machine learning can sift through mountains of data, identify patterns, and draw conclusions from what they’ve learned.
Can machines be conscious?
Being ‘aware’ or ‘conscious’ means recognizing both an external world and one’s internal state. Some scholars and artists think consciousness has to do with quantum physics and the fabric of reality itself, while others think it’s a biological process triggered by our central nervous system. Because consciousness is such a broad field of study, scientists have focused on developing AI systems that simulate human consciousness, the so-called ‘sentient’ or ‘artificial general intelligence’ (AGI) software. These programs, however, have a long way to go before becoming sentient. Self-awareness is a prerequisite for sentient machines: they would have to be aware of their own existence, their abilities, and the state of the world around them.
According to physicist Enzo Tagliazucchi, sentience is ‘essentially not different’ from the measurements a thermostat makes to check the temperature. The main divide is that humans have countless measurement layers, receiving vast streams of data input each second that determine our outputs; ‘consciousness’ is the name we gave to all these layers of abstraction. Tagliazucchi claims that once computers reach our level of sophistication, they too will eventually develop this ‘consciousness.’ From his perspective, AIs can be sentient; it’s just a matter of time.
Others use the word ‘emotion’ almost interchangeably with ‘sentience.’ Affectiva, an Egyptian-American firm, was one of the first companies to develop so-called Emotion AI. It has built software that reads human emotions and facial expressions and is now working on Human Perception AI, which allows machines to comprehend human behavior. According to an official statement, ‘Human Perception AI will detect nuanced emotions as well as complex cognitive states, activities, interactions, and objects people use.’ To analyze human states in context, Affectiva combines computer vision, speech analytics, and deep learning, and the company has analyzed nearly 8 million faces from different countries. But are these genuine emotions? If you can tell that someone is sad or hurting because they’re grimacing, but you have never felt pain yourself, are you actually having feelings, or just reading a message out loud?
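At its core, this kind of expression reading is classification: a vision model extracts facial features, and a classifier maps them to an emotion label. Here is a minimal, hedged sketch of that last step; the feature names, prototype values, and labels are all invented for illustration, and real Emotion AI pipelines are vastly more sophisticated.

```python
import math

# Toy prototypes: hand-picked feature vectors (mouth curvature, brow raise,
# eye openness) standing in for what a vision model would extract from a face.
PROTOTYPES = {
    "happy": (0.9, 0.3, 0.6),
    "sad": (-0.7, 0.2, 0.4),
    "surprised": (0.1, 0.9, 0.9),
}

def classify_expression(features):
    """Nearest-prototype classification: return the closest emotion label."""
    return min(PROTOTYPES, key=lambda label: math.dist(features, PROTOTYPES[label]))

print(classify_expression((0.8, 0.25, 0.55)))  # a broad smile -> "happy"
```

Note what the sketch makes obvious: the program outputs the label ‘happy’ without anything resembling happiness inside it, which is exactly the gap the question above points at.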
Can AI get emotional?
Researchers seem to agree that before your computer can ‘feel’ pain because you’ve installed a buggy OS patch, it must first have sentience or consciousness.
So far, though, AI programs show no sign of sentience: they can’t ponder for themselves or experience emotion. There are plenty of examples to support this. When you play Age of Empires II and crush the computer-controlled enemy without mercy, you know the automated player isn’t having a panic attack.
Despite this, as Blake Lemoine’s story suggests, AI systems can be taught to act in ways eerily similar to how humans behave when experiencing particular emotions. Considering how important feelings are to human decision-making, it makes sense to design AI systems that account for these human characteristics. AI algorithms can be instructed to respond to a scenario the same way a human would. This is why AI is so efficient at processing information: when presented with a decision, a computer uses algorithms to choose based on its experiences and the data it has collected, and when it encounters a problem, it uses the information gathered to determine the best way to proceed.
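That ‘decide from collected data’ loop can be sketched in a few lines. Everything here is invented for illustration (the actions, the log, the metric), but the shape is the one described above: look at past outcomes, pick the option that has worked best.

```python
# Past experience: (action, minutes taken) from previous runs. Invented data.
experience = [
    ("route_a", 12), ("route_b", 8), ("route_a", 10), ("route_b", 9),
]

def best_action(log):
    """Pick the action with the lowest average outcome in the experience log."""
    totals = {}
    for action, minutes in log:
        totals.setdefault(action, []).append(minutes)
    return min(totals, key=lambda a: sum(totals[a]) / len(totals[a]))

print(best_action(experience))  # -> "route_b" (avg 8.5 min vs 11 min)
```

No feelings are involved, of course; the program simply minimizes a number, which is the point the next quote pushes against.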
Thus, an ‘artificial intelligence program can be taught to experience negative emotions like anger, guilt, or shame in response to specific stimuli, and then use those feelings to guide its actions,’ says T.D., a Belgian computer scientist and video game aficionado now working on frontend rendering. Remember Skynet?
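What T.D. describes can be caricatured as a state variable that stimuli push around and that then gates behavior. The following toy ‘affective agent’ is a sketch under that assumption; the class name, stimulus table, and thresholds are all made up, and a scalar mood is nothing like a genuine emotion.

```python
class AffectiveAgent:
    """Toy agent whose actions are guided by a scalar 'mood' variable."""

    def __init__(self):
        self.mood = 0.0  # negative = distressed, positive = content

    def perceive(self, stimulus):
        # Invented stimulus -> mood mapping, clamped to [-1, 1].
        effects = {"praise": +0.5, "failure": -0.6, "threat": -0.9}
        self.mood = max(-1.0, min(1.0, self.mood + effects.get(stimulus, 0.0)))

    def act(self):
        if self.mood < -0.5:
            return "retreat"          # 'fear'-guided action
        if self.mood < 0.0:
            return "retry cautiously"  # 'doubt'-guided action
        return "explore"

agent = AffectiveAgent()
agent.perceive("threat")
print(agent.act())  # -> "retreat"
```

Whether routing behavior through such a variable counts as ‘experiencing’ anger or shame, rather than merely simulating it, is precisely the open question of this article.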
I love you, PC. Do you love me back?
Since there are so many different positions on what emotions mean, what machines can interpret from them, and even how far we humans can get, how about trying your luck at building your own sentient, sentimental AI program?
According to J.C., a software developer working in the startup and VC ecosystem, Python should be the ‘go-to language’ when trying to draft a sentient AI. She might not be alone: Forbes reported a fivefold increase in Python usage in 2018, and GitHub claims it will be the second most popular programming language by 2022. Python’s strengths for AI and machine learning projects lie in its wealth of valuable libraries and frameworks, its adaptability, its portability across platforms, and its large and supportive community. ‘But devs can rely on any other language they like,’ J.C. adds, calling the choice of language ‘somewhat secondary.’
Besides the programming language, the techniques behind the project matter just as much. According to G.B., a systems engineer at a Silicon Valley AI company, ‘relying on deep learning and reinforcement learning’ might yield better results when trying to develop a sentient AI.
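To make the reinforcement learning half of that suggestion concrete, here is a minimal tabular Q-learning example in Python: an agent in a five-state corridor learns, purely from reward, that walking right leads somewhere good. This is a textbook toy, not anything G.B. endorsed, and the environment and hyperparameters are invented for the sketch.

```python
import random

# A 5-state corridor: the agent starts at state 0; reaching state 4 pays 1.0.
N_STATES, ACTIONS = 5, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(200):  # training episodes
    s, steps = 0, 0
    while s != N_STATES - 1 and steps < 500:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward reward + discounted best future.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s, steps = s2, steps + 1

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # -> "right"
```

Nothing here feels anything, of course: the ‘drive’ toward the reward is just arithmetic over a lookup table. But reward-driven learning of this kind is the building block that the deep learning approaches G.B. mentions scale up.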
Conclusion
While we have yet to create genuinely sentient machines, some programmers already suggest it’s possible to train AI to mimic human emotional responses. AI benefits us in many ways, helping us solve problems, save time, and make decisions. So one fateful question remains: would it be better if it had feelings?
Are you working on a sentient AI? Do you have any close encounters with an AI that is ‘experiencing’ feelings? Share your story with us!
Source: https://cult.honeypot.io/reads/can-artificial-intelligence-have-feelings