“What Does Alexa Really Know?”
On Friday, April 29, a coalition of Luther College organizations, including the Campus Programming Board, Center for Ethics and Public Engagement, Paideia, and Preus Library, came together to sponsor a day-long event centered on the presence of artificial intelligence in our lives. Programming included three workshops held by leading experts in the fields of philosophy, religion, and computer science, and culminated in a roundtable discussion titled “What Does Alexa Really Know?”.
While the event was a cross-departmental effort, Professor of Religion Gereon Kopf was a key figure during the planning process. Kopf is currently researching topics related to the philosophy of consciousness and the qualifications needed to be considered a sentient being. In the spring of 2021, he was approached by Associate Professor of Religion Marcus Bingenheimer of Temple University to participate in a workshop Bingenheimer was developing on chatbots, software applications designed to mimic human conversation, and their place in the modern world. After participating in and recognizing the interest inspired by the event, Kopf decided to bring an extension of it to Luther, inviting Bingenheimer, Professor of Theology and Computer Science Noreen Herzfeld of St. John’s University, and Assistant Professor of Computer Science Justin Brody of Franklin and Marshall College.
“We want to have an interdisciplinary cross-campus dialog on AI, and explore the question of ‘what is consciousness?’” Kopf said. “We focus on chatbots because they are in our lives. Most students have a smartphone or have seen an Alexa. We coexist with chatbots, and that raises ethical or moral issues. We want to start a conversation about how we do that now, and how we will respond to future developments in AI technology.”
The first seminar of the day was held in the Loyalty Hall boardroom, and was presented by Brody. “Getting Creative with AI” focused on the artistic and inventive potential of AI technology, and incorporated live demonstrations of models able to produce entire stories or images in seconds. Brody conveyed his continual fascination with the quality of material a computer is capable of creating.
“My research has shown me that our own minds and creativity are somewhat less impressive than we thought they were,” Brody said. “[AI] can do all of these crazy things we thought were unique to us. In a way I think it is fundamentally changing how we understand ourselves.”
The second lecture, “Why Chatbots Fail: When the Second Person isn’t a Person”, hosted by Herzfeld, explored our interactions with chatbot technology, and how these encounters often prove disappointing on an emotional level to the humans involved. She expressed doubts over AI’s future ability to achieve markers of sentience.
“Living beings have consciousness, and that awareness is coterminous with life,” Herzfeld said. “[AI] has no actual intentionality, it is not acting on its own interests. It cannot relate to the human experience.”
The final workshop was moderated by Bingenheimer, and was an interactive history of chatbots. After a brief introduction, participants were given the opportunity to try different models of the AI technology, including Eliza, Kuki, and GPT-3. Bingenheimer is concerned by the speed at which AI technology is advancing, and the potential ramifications it holds for our society.
“Every time you look up, something new has happened in the world of technology, and you don’t have time to digest any particular model,” Bingenheimer said. “[The current generation] values privacy in a much different way than those groups that came before them, in a sense it does not really exist for them anymore. Social media has publicized so much of life, and we have lost much for it – being unobserved is a great privilege, a great pleasure.”
The final event of the day, a roundtable discussion, was held in the Dahl Centennial Union, and invited Luther community members to reflect on their own experiences and relationships with AI technology. Topics explored included sexbots and digisexuality, the cognitive ability of AI, and the potential autonomy of AI to direct its own use. Attendee McKinley Leinweber (‘24) shared her thoughts.
“I attended the lecture for a class, but I really enjoyed it just from a conversational perspective,” Leinweber said. “I’m not a religion, philosophy, or S.T.E.M. major, so it was really interesting to hear from both professors and students about AI and religion in their fields.”
Grant Castillou • May 12, 2022 at 2:19 pm
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461