
Mercer Times

Sunday, December 22, 2024

Princeton experts discuss realities versus myths in artificial intelligence


Christopher L. Eisgruber, President of Princeton University | Princeton University Official Website

In the rapidly evolving world of artificial intelligence, a new book titled "AI Snake Oil" by Princeton scholars Arvind Narayanan and Sayash Kapoor offers insights into the complex landscape of AI technologies. The authors aim to help readers discern between realistic AI applications and those that are overhyped or flawed.

Narayanan, a computer science professor at Princeton University and director of its Center for Information Technology Policy (CITP), and Kapoor, a former Facebook engineer now pursuing a Ph.D. in computer science at Princeton, have seen their book land on several top recommendation lists. "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference," published by Princeton University Press, seeks to give readers the foundational knowledge to separate hype from reality in AI.

"AI snake oil, as a concept, is AI that doesn’t work as advertised and probably can’t work as advertised," Narayanan explained. He emphasized the importance of understanding this technology for making informed decisions.

Kapoor highlighted instances where companies market ineffective AI products without scientific backing. He stated that the book aims to empower readers to distinguish between effective and ineffective AI applications in daily life.

Narayanan pointed out that people often fear the wrong aspects of AI. He cited cybersecurity expert Bruce Schneier's aphorism: "If it’s in the news, don’t worry about it." According to Narayanan, while generative AI like ChatGPT garners media attention, predictive AI quietly makes significant decisions affecting people's lives.

Predictive AI is used in high-stakes scenarios such as employment decisions and determining bail amounts. In healthcare, it is used to predict patient recovery times, but those predictions can cause harm when insurers end coverage prematurely on their basis.

The authors argue against using "AI" as an umbrella term for different technologies like generative and predictive AI. Kapoor noted that conflating these distinct technologies leads to public confusion.

Both Narayanan and Kapoor left Silicon Valley for Princeton seeking broader perspectives on tech's societal impact. They value interdisciplinary collaboration at CITP, which combines technical expertise with policy analysis.

Kapoor expressed concern over industry influence on large language models due to reliance on major tech companies like OpenAI and Google. However, he remains hopeful about integrating beneficial AI into everyday workflows seamlessly.

Narayanan sees potential in generative AI for new applications but urges caution about its current problematic uses until the technology matures into reliable, unobtrusive tools, much as spellcheck and autocomplete once did.
