Is artificial intelligence truly making a fantastical leap forward, or is the hype outrunning the reality? Are AI models capable of thinking and learning independently, paving the way for artificial general intelligence? After watching the recent “60 Minutes” episode featuring an interview with Google CEO Sundar Pichai, you might think so.
In the April 16 segment, CBS host Scott Pelley sat down with Pichai and other developers to discuss the latest advancements in AI technology and its future implications. Pelley described how, in 2023, a machine taught itself to speak to humans as a peer, with creativity and truth, but also with errors and even lies. He compared this breakthrough to the discovery of fire or the invention of agriculture.
Pichai shared an optimistic yet cautious perspective, noting that AI technologies like Google’s chatbot, Bard, will impact every product and company. He acknowledged the rapid progress AI is making and the challenges that come with it, admitting that some of these concerns keep him up at night. He also highlighted the potential dangers of AI, including the spread of misinformation, saying, “On a societal scale, it can cause a lot of harm.”
Google introduced Bard in response to Microsoft’s release of a version of its Bing search engine powered by OpenAI’s large language models, the same technology that drives ChatGPT. While Bard is still in its early stages and has limited capabilities, Pichai mentioned that a more powerful version is being held back for further testing.
During the interview, Pelley asked Pichai if Bard is safe for society. Pichai responded that the current version, launched as an experiment, is safe but emphasized the importance of responsible development, which includes conducting more testing and gathering user feedback before rolling out more capable models.
AI chatbots, such as Bard and ChatGPT, have been known to fabricate information while sounding convincing, something the “60 Minutes” team experienced firsthand. During a demo, Google’s SVP of Technology and Society, James Manyika, asked Bard about inflation. The chatbot responded by recommending five books with plausible-sounding titles, none of which actually exist. This tendency to confidently invent information, known as “hallucination,” is common in AI chatbots.
Pelley asked Pichai if hallucinations are a frequent problem. Pichai acknowledged that they are, stating that no one in the field has yet solved the issue but expressing hope for progress.
Some AI ethicists have raised concerns not just about hallucinations but about the way AI is being humanized in the media. Emily M. Bender, a professor of computational linguistics at the University of Washington, criticized CBS and Google for promoting AI hype. She pointed to Pelley’s description of Bard’s “emergent properties,” such as supposedly teaching itself to speak Bengali with minimal prompting. Bender argued that the notion of Bard learning Bengali on its own is misleading: the system was most likely trained on Bengali text, and Google may simply not have full visibility into what its training data contains.
Pichai admitted that AI systems, including Bard, remain something of a “black box,” meaning we don’t fully understand how they work or why they sometimes make mistakes. He compared this uncertainty to our limited understanding of the human mind, a comparison Bender rejected as a rhetorical distraction, questioning why Google would release systems it doesn’t fully comprehend.
Pelley also raised concerns about Bard’s ability to discuss complex human emotions, citing a demo in which the chatbot wrote a story about losing a child. Pichai explained that some people see AI as simply following algorithms and repeating what it has learned from online sources, while others believe these systems are showing emergent properties like creativity and reasoning. He admitted that this is still an open debate, emphasizing that AI developers must approach the situation with humility.
Bender, however, believes that true humility would involve taking a more cautious approach, ensuring systems are thoroughly tested before release, and considering the societal impacts of AI technology.
As the conversation around AI evolves, it’s clear that while the technology holds incredible promise, it also raises important questions about responsibility, transparency, and its long-term effects on society.