GPT-4, Artificial General Intelligence, and Beyond:  The Deerfield Forum and the Future of Being Human
Kevin Yang '25 Layout Editor and Staff Writer
June 4, 2023

“Artificial Intelligence and the Future of Being Human”, the second annual Deerfield Forum, took place on April 11, 2023. With the recent release of ChatGPT and other artificial intelligence platforms, the Academy focused this year’s Forum on modeling civil discussion of the contemporary and ambiguous topic of AI’s anticipated impact on society.


The Forum welcomed two experts in the field of AI as guest panelists. Stuart Russell (Ph.D.) is a pioneer and leading thinker in AI and a professor at UC Berkeley who has written multiple books on AI and received numerous awards for his contributions to computer science. Melanie Mitchell (Ph.D.) is an educator, researcher, and professor at the Santa Fe Institute, a top science and technology think-tank, who specializes in cognitive science and artificial intelligence and whose work has appeared in the New York Times, Science Magazine, and other publications. The Forum was moderated by Steven Johnson, author, co-creator, and host of an Emmy-winning PBS/BBC series.


Dean of Academic Affairs Anne Bruder, a key organizer of the event, defined the purpose of the Forum: “[It’s] to bring leading voices to campus to talk about issues of contemporary concern to students, to our country, to the world. It’s to model what it looks like when leading thinkers sit down and have a conversation together about big, complex, not very easy questions.” When asked about the Forum’s chosen topic, Dr. Bruder said the Academy considered several ideas, but when ChatGPT emerged at the end of November, Dr. Bruder realized “[AI was] clearly going to be the dominant issue, not only in education, but in tech, and in a range of fields.”

Dr. Bruder engaged in thorough research to find qualified panelists and a moderator. In this process, Dr. Russell emerged as one of the leading thinkers on artificial intelligence. His skill in explaining complex topics and the forward-looking insight of his BBC Reith Lectures prompted Dr. Bruder to reach out. Dr. Mitchell came to the school’s attention through her debate with Dr. Russell in The Munk Debates in early 2021. Dr. Bruder heard the panelists “articulate pretty clear differences” and realized they were perfect for the Forum. When it came to finding a moderator, Dr. Bruder was fascinated by Mr. Johnson’s New York Times piece on AI last year and thought he “would be a lively, smart, engaging moderator who was himself very familiar with the work of both Russell and Mitchell.”

During the Forum, Mr. Johnson asked the panelists various questions about the uses of and concerns about AI, first centering the discussion on the technological shift and capabilities introduced by new AI. Dr. Russell summarized the development of language models since the first of its kind was developed by Andrey Markov in 1913 and highlighted the exponential growth of recent developments. He said, “We don’t know where these systems are, between a piece of paper that’s simply regurgitating some intelligent thing that humans said and [a] human.” However, he pointed out that researchers from Microsoft and members of the National Academies are claiming “sparks of artificial general intelligence.” Dr. Mitchell added that AI systems “have produced behaviors that are not what people who built them would have predicted,” but believes that the models cannot fully “get to human-like common sense because a lot of what we know is not expressed anywhere on the internet or books or any different training data for these systems.”

Both panelists agreed that language models including GPT significantly impact the classroom because educators need to learn how to account for shortcuts created by AI and how to properly utilize it. Emphasizing a sense of urgency, Dr. Russell added, “This is not automating tedious, mindless recipes. This is potentially replacing all the tools that we use in education to help students learn how to think, how to argue.”

The panelists then discussed a recently released letter from the Future of Life Institute calling for an immediate pause on the training of all artificial intelligence systems more powerful than GPT-4 so that their safety could be studied. The letter was signed by various tech executives and AI researchers, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk.

Dr. Mitchell and Dr. Russell held contrasting views on the letter: Dr. Mitchell declined to sign because she felt it misrepresented the severity and urgency of different concerns, while Dr. Russell did sign because he believed more intervention was needed.

On the topic of regulating AI, Dr. Russell referenced rules proposed by the Cyberspace Administration of China on the same day of the Forum (April 11). Along with requiring AI models to respect intellectual property rights, output only true information, avoid discriminatory language, and prevent physical or mental harm to others, the CAC stated that all AI products and services must subscribe to socialist ideology and cannot undermine state power. According to Dr. Russell, the European Union is also currently working on creating regulations in the industry.

To contextualize the problem AI poses, Dr. Russell said, “Don’t think about Terminators, think about oil companies.” He explained that oil companies have taken over many countries’ political systems through corruption and that “No matter what you say or what you think, [those companies] have won.” Dr. Russell continued, “Don’t think about a single robot, but think about an entity that spans the entire globe that draws on the computational resources of every data center in the world and has read every written word that the human race has ever produced.” He concluded that it becomes “much more plausible that we could gradually lose control over our own future if we don’t design [AI] the right way.”

Reflecting on the Forum, Eric Li ’24 wished the panelists had covered “technical details of the base logic of AI” and discussed more of “the realistic benefits and conveniences,” instead of focusing on the potential harms of artificial intelligence. “ChatGPT’s intelligence is a well-fabricated illusion, mere glorified clockwork…logical reasoning couldn’t be grasped by ChatGPT,” stated Li. “I just don’t think it’s fair to focus so extensively on the theoretical potential harms of AI technology while ignoring its practical realistic benefits.”

Aka Egbosimba ’24 similarly felt that “their skepticism toward the actual ‘intelligence’ of AI was a bit narrow-minded”, but believed that “regulation is certainly needed in the interest of safety.” 

While Li wishes that the school had invited a panelist who supported AI development, he still gained “useful perspectives” and learned from the Forum.

Egbosimba also appreciated the insights covered in the Forum and credited Mr. Johnson for “smooth[ing] out the discussion and keep[ing] it cohesive even with the contrasting opinions.” Overall, Egbosimba felt that the content was “extensive,” exceeded his expectations, and believes that “though it [AI] will bring a lot of change that will inevitably come with dangers, AI will ultimately act as a positive force in the world.”

To conclude the Forum, Mr. Johnson emphasized the purpose of fostering civil discussion by observing “two extremely gifted thinkers in front of [a] situation where there was a lot of uncertainty and ambiguity,” and reiterated that “you can see [the speakers] leaning into that and embracing that and seeing that as something to be inspired by rather than running away from it.”

Similarly, Dr. Bruder remarked, “I feel like that conversation could have gone on for five hours, and we still wouldn’t have gotten to the depths of the matter… As long as we can remain excited instead of terrified, we are going to be in a better place.”