Imagining the Internet Center | Today at Elon

Building human resilience for the age of AI
/u/news/2026/04/01/building-human-resilience-for-the-age-of-ai/ | Thu, 02 Apr 2026

Experts Call for Radical Change Across Institutions and Social Structures, Warning That AI Will Be Significantly More Influential in the Next 10 Years or Less

The vast majority of expert respondents in a new canvassing by Elon University’s Imagining the Digital Future (ITDF) Center called for leaders to work together now to build a coordinated resilience infrastructure for the age of artificial intelligence (AI) to counterbalance the human and systemic challenges posed by widespread AI adoption. Some 82% said AI will play a significantly larger role in shaping people’s lives and key societal functions in the next 10 years or less. They urged an “institutions-first” resilience agenda because the most significant problems arise from a life-encircling AI infrastructure.

In more than 160 impassioned essays, the global experts noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit” and after-the-fact personal adaptation – are not enough to help humans flourish as they adjust to an AI-infused future.

Janna Anderson

“The central risk described by these experts is not a single catastrophic AI event,” said report co-author Janna Anderson, professor of communications and senior researcher for the ITDF Center. “They said accelerated AI use will lead to a cumulative reallocation of human agency until people and institutions find it harder to question, contest or even notice what has changed. That drift can look like ‘progress’ in the short term, but it has a price – the gradual weakening of human judgment, accountability, shared truth and the social fabric that makes self-government possible.”

Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: “AI will diffuse responsibility by design. … Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”

The experts responding to this canvassing are an international and notably cross-disciplinary mix of people with academic, professional, technical and industry experience.


The full report is 376 pages and includes the experts’ full responses to the open-ended essay question. It is the 52nd report issued by ITDF since 2005.

Lee Rainie

“One of the major surprises to me in these responses is that we wrote our questions about resilience wondering about individual resilience and its various parts. Yet these experts were insistent that humanity’s best response for building a brighter future as we evolve with our AI systems must start at a higher level,” said Lee Rainie, director of the ITDF Center. “They note how AI has already become part of our environment, embedded in often invisible ways in our lives and it will take a systems-level response to shore up our in-born capacities.”

Alison Poltock, co-founder of AI Commons UK, wrote, “We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.”

Mel Sellick, founder of the Future Human Lab, said, “AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are the last generation that knows what human capacity felt like before it became inseparable from AI.”

Srinivasan Ramani, Internet Hall of Fame member, former research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote, “AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?”

The experts underscored the urgency of taking action. Salman Khatani, manager of the IMAGINE Institute of Futures Studies in Pakistan, wrote, “The window for proactive intervention is now – we have perhaps five to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.”

Taken together, they suggested a sweeping agenda for developing human resilience in the AI Age, focused on the fact that actions by individuals alone are not sufficient. Many of the concerns and proposed solutions are crosscutting, and they said collaboration among societal actors is crucial; many of the items listed in only one of the settings could be undertaken in others. A selection of goals to target:

For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable “red lines” and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.

For AI developers: Do better than designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage people to reflect on choices; train AIs to cite and honor humanity’s intellectual and psychological foundations; build systems that buttress humans’ capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.

For business leaders: See the call to action in the items above; play a role in initiating and carrying out that change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs’ challenges to people’s self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.

For educators: Create literacy regimes in all AI-related domains, particularly “existential literacy” – the cultivation of individuals’ understanding of how technologies shape goals, values and identities. They urged the development of skills and norms that encourage people to consciously navigate life’s fundamental challenges: to strive to retain and apply the capabilities of metacognition, discernment and epistemic vigilance and to be responsible for making their own decisions, retaining agency; to strengthen their ability to adapt to change and manage friction, paradoxes, ambiguity and anxiety; and to focus on critical human traits such as curiosity and social and emotional intelligence.

For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as “analog communities,” “dumbphones” and “dumb homes” that allow people to avoid algorithmic mediation and surveillance technology.

For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don’t be passive, don’t be hasty and don’t be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.

Many experts expressed optimism, saying if we are resilient and all goes well, humans will flourish in the AI age. Internet pioneer Doc Searls wrote that humans will come to rely on AIs to help with the myriad details of modern life. “Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976,” he wrote. “But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.”

While most comments were focused on developing human resilience for the AI Age, a number of futures-scenario predictions were included in the report. A small selection of the many predictions:

Digital advances drive sex and childbirth declines: “Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.” – Greg Sherwin, Singularity University global faculty member based in Portugal, previously senior principal engineer at Farfetch

“Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems.” – Tracey Follows, founder and CEO of Futuremade, a UK-based futures consultancy

Solitude will be lost: “Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.” – Paul Saffo, prominent Silicon Valley-based forecaster

The retirement age will be manipulated to maintain ‘full employment’: Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the “governor” (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron, past president of the Center for Policy on Emerging Technologies

Battles will occur over defining what is ‘human’: “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans. … ‘Physical AI’ will fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs used today.” – Ray Wang, chair and principal analyst at Constellation Research

AIs will gain rights: “We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights.” – John Smart, president of the Acceleration Studies Foundation and author of “Introduction to Foresight”

“AI psychosis and other forms of mental illness will arise. The further erosion of a solid foundational reality will create a great vulnerability. Coping with these issues will require new approaches to the diagnosis and treatment of mental illness. It will also demand new approaches to evaluating and appreciating the impact of human relationships with AIs and deeper assessment and understanding of consciousness itself.” – Stephan Adelson, president of Adelson Consulting Services

Superstupidity (not superintelligence) is the real threat: “The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film ‘Idiocracy’ is prophetic.” – Roger Spitz, founder of the Disruptive Futures Institute in San Francisco

Agent failures will start with social (not technical) problems: “Agentic systems will fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation.” – Daniel Erasmus, founder at Serious Insights, based in Amsterdam

As agents take over, the internet will become a network of databases, not websites: “As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.” – Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University


The report is based on a canvassing with a non-random sample conducted between Dec. 26, 2025, and Feb. 12, 2026. In all, 386 experts responded to at least one aspect of the canvassing; 251 provided written answers to an open-ended question, and more than 160 provided detailed essay-length responses. The Imagining the Digital Future Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead. The Center was established in 2000 as Imagining the Internet and renamed with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university located in Elon, North Carolina.

Elon faculty and staff named to CAA Academic Alliance AI Technologies Champion Network
/u/news/2026/02/05/elon-faculty-and-staff-named-to-caa-academic-alliance-ai-technologies-champion-network/ | Thu, 05 Feb 2026

An Elon faculty member and staff member have been named to the inaugural cohort of the CAA Academic Alliance AI Technologies Champion Network.

Dan Anderson, special assistant to the president, and Michele Lashley, assistant professor of strategic communications, are recognized as faculty and staff “who are creatively and responsibly integrating artificial intelligence technologies into teaching/learning, research, student success, leadership development and institutional effectiveness.”

As AI reshapes higher education, structured and collaborative approaches are essential for implementation that is cohesive, consistent and ethical. The AI Technologies Champion Network initiative addresses this transformational challenge by recognizing leaders across the Alliance, including Elon, building a community of AI technology champions and preparing inter-institutional teams for near-future extramural funding efforts.

Anderson was also named an AI Technologies Network Award recipient, which acknowledged his effort involving scholars from 48 countries to produce a statement of principles guiding higher education’s role in preparing humanity for the AI revolution.

When the initiative launched in October 2025, the CAA Academic Alliance requested applications from its 13 member institutions. Nearly 400 applicants responded to the call; 22 faculty and staff members were selected as the Alliance’s Class of 2025-26.

Elon/AAC&U national survey: 95% of college faculty fear student overreliance on AI
/u/news/2026/01/21/elon-aacu-national-survey-95-of-college-faculty-fear-student-overreliance-on-ai/ | Wed, 21 Jan 2026

A new survey of college and university faculty nationwide finds widespread concern and skepticism about how generative artificial intelligence is affecting their teaching and student performance across academic disciplines.

Related Articles

Large majorities warn that these tools will lead to student overreliance on AI, weaken their critical thinking, shorten their attention spans, and erode academic integrity and the value of college diplomas – concerns they say strike at the heart of higher education’s mission.

At the same time, many think that teaching AI literacy is important, that their students’ future jobs will be seriously impacted by the spread of GenAI and that it is vital for those in higher education to stress the ethical, environmental and social consequences of AI use.

These new findings come from a November survey of 1,057 faculty by Elon University’s Imagining the Digital Future Center and the American Association of Colleges and Universities (AAC&U).

Key Findings

  • 95% of the faculty in this survey said GenAI’s impact will be to increase students’ overreliance on these artificial intelligence tools, including 75% who said the tools will have a lot of impact.
  • 90% said the use of GenAI will diminish students’ critical thinking skills, including 66% who think GenAI will have a lot of impact.
  • 83% said the use of GenAI will decrease student attention spans, including 62% who thought GenAI will have a lot of impact.
  • 86% said they believe it is likely or extremely likely that the emergence of GenAI tools will impact the work and role of those who teach in higher education.
  • 79% think the typical teaching model in their department will be affected by GenAI tools at least to some extent, including 43% who said they believe the impact will be significant.
  • 78% said cheating on their campus has increased since GenAI tools have become widely available, including 57% who said it has increased a lot. And 73% said they have personally dealt with academic integrity issues involving their students’ use of GenAI.
  • 48% said their students’ research has gotten worse because of GenAI, compared with 20% who said they believe it has gotten better.
  • 74% of these faculty said the use of GenAI tools will affect the integrity and value of academic degrees for the worse, including 36% who said the value of degrees will worsen a lot. Just 8% said GenAI’s impact will affect the value of degrees for the better.
  • 63% said their schools’ graduates in spring 2025 were not very or not at all prepared to use GenAI in the world of work, compared with 37% who felt the graduates were very or somewhat prepared.

“These faculty are divided about the use of generative AI itself,” said Lee Rainie, director of Elon University’s Imagining the Digital Future Center and a co-author of the report. “Some are innovating and eager to do more; a notable share are strongly resistant; and many are grappling with how to proceed. At the same time, there is broad agreement that without clear values, shared norms and serious investment in AI literacy, we risk trading compelling teaching, deep learning, human judgment and students’ intellectual independence for convenience and a perilous, automated future.”

Eddie Watson, vice president for digital innovation at AAC&U, added: “When more than nine in ten faculty warn that generative AI may weaken critical thinking and increase student overreliance, it is clear that higher education is at an inflection point. These findings do not call for abandoning AI, but for intentional leadership – rethinking teaching models, assessment practices, and academic integrity so that human judgment, inquiry, and learning remain central. The challenge before higher education is to act with urgency and purpose so that AI strengthens, rather than undermines, the value of a college degree.”

A profession coming to terms with AI, but not feeling prepared

Despite these concerns, the report finds that faculty are not uniformly opposed to AI. Many acknowledge potential benefits, particularly in personalized instruction and efficiency, and a majority are already engaging students in discussions about AI’s limitations and risks.

  • 69% of faculty say they address AI literacy topics—such as bias, hallucinations, misinformation, privacy and ethics—in their teaching.
  • 61% believe GenAI could enhance or customize learning in the future.
  • 87% report that they have created explicit policies for students on acceptable and unacceptable uses of AI in coursework.

At the same time, faculty describe a fragmented policy environment. Some 48% say their institution has clear, campus-wide guidelines for AI use in teaching and learning, and just 35% say their departments have done so.

Faculty also report that many institutions are unprepared for the scale of change AI is bringing:

  • 59% say their institution is not well prepared to use GenAI effectively to prepare students for the future.
  • 68% say their school has not adequately prepared faculty to use GenAI for teaching or mentoring.
  • 67% said their schools have not prepared their non-faculty staff to use GenAI in their work.

When asked about longer-term consequences of AI’s impact on higher education, more often than not, faculty expressed worry:

  • 49% say GenAI’s impact on students’ future careers will be more negative than positive, compared with 20% who see more positive than negative effects.
  • 62% believe GenAI will worsen student learning outcomes over the next five years.
  • 54% say GenAI will have a more negative than positive impact on students’ overall lives at their institution.

About the Study

This non-scientific survey was conducted between October 29 and November 26, 2025, using a list of college and university faculty members developed by AAC&U and Elon University. The sample of 1,057 respondents is diverse across a range of academic disciplines, school sizes, job titles and student populations, but the data reported here are not generalizable to the entire population of college faculty members. Full methodology details and topline findings are included in the report.

About AAC&U

The American Association of Colleges and Universities (AAC&U) is a global membership organization dedicated to advancing the democratic purposes of higher education by promoting equity, innovation, and excellence in liberal education. Through our programs and events, publications and research, public advocacy and campus-based projects, AAC&U serves as a catalyst and facilitator for innovations that improve educational quality and equity and that support the success of all students. In addition to accredited public and private, two-year and four-year colleges and universities and state higher education systems and agencies throughout the United States, our membership includes degree-granting higher education institutions around the world as well as other organizations and individuals. To learn more, visit www.aacu.org.

About Elon University’s Imagining the Digital Future Center

Imagining the Digital Future is an interdisciplinary research center focused on the human impact of accelerating digital change and the sociotechnical challenges that lie ahead. The center’s mission is to discover and broadly share a diverse range of opinions, ideas and original research about the likely evolution of digital change, informing important conversations and policy formation. The center was established in 2000 as Imagining the Internet and renamed Imagining the Digital Future with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university in central North Carolina.

Leading Artificial Intelligence expert Beth Noveck to give lecture on AI and democracy
/u/news/2026/01/16/leading-artificial-intelligence-expert-beth-noveck-to-give-lecture-on-ai-and-democracy/ | Fri, 16 Jan 2026

Join members of the Elon University community for a lecture by Beth Noveck, a leading expert on using artificial intelligence to reimagine participatory democracy and strengthen governance, on Wednesday, April 15, at 2 p.m. in LaRose Digital Theatre.

Noveck is a leading expert on using artificial intelligence to reimagine participatory democracy and strengthen governance. She is a professor at Northeastern University, where she directs the Burnes Center for Social Change and its partner project, The Governance Lab. Noveck previously served as the first Deputy Chief Technology Officer under President Barack Obama, where she founded the White House Open Government Initiative, which created policies and platforms for making the federal government more transparent, participatory and collaborative.

Noveck also served as Senior Advisor for Open Government to British Prime Minister David Cameron and as a member of the Digital Council that advised German Chancellor Angela Merkel. She is the author of “Solving Public Problems: How to Fix Our Government and Change Our World,” and her new book, “Reboot: The Race to Save Democracy with AI,” is forthcoming from Yale University Press.

This event is sponsored by the Imagining the Digital Future Center and the Council on Civic Engagement.

Lee Rainie quoted in The Washington Post about emotional attachments and ChatGPT
/u/news/2025/11/12/lee-rainie-quoted-in-the-washington-post-about-emotional-attachments-and-chatcpt/ | Wed, 12 Nov 2025

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, spoke with The Washington Post for a recent article.

The authors analyzed thousands of chats from the large language model and discussed the patterns that arose. Emotional conversations were among the most common in those The Washington Post analyzed.

Rainie’s research with the Imagining the Digital Future Center has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot.

“The optimization and incentives towards intimacy are very clear,” Rainie told The Post. “ChatGPT is trained to further or deepen the relationship.”

Lee Rainie speaks with MassLive about the decline of cable TV
/u/news/2025/11/03/lee-rainie-speaks-with-masslive-about-the-decline-of-cable-tv/ | Mon, 03 Nov 2025

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, spoke with MassLive about declining cable subscriptions.

The outlet notes that cable subscriptions in Massachusetts, where MassLive is based, have fallen 45% since their peak.

“It’s a convergence of multiple trends,” said Rainie. “Cable subscribers used to pay for a bundle of stations — local news, sports — that bundle has been broken apart in modern years.”

MassLive notes that people can now put together their own bundle “a la carte.”

“With the internet you can throw your content online for free, like YouTube, and keep an archive on a free platform — as opposed to cable, where you had to pay for a slot,” Rainie said. “It’s benefited both customers and creators.”

Lee Rainie interviewed by WXII about AI and human relationships
/u/news/2025/11/03/lee-rainie-interviewed-by-wxii-about-ai-and-relationships/ | Mon, 03 Nov 2025

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, recently spoke with WXII about research surrounding artificial intelligence and relationships.

Rainie says the center is analyzing how people are now using AI tools like humans, including as therapists, friends or even dating partners.

“It’s a long-standing story, especially with digital technologies, that the first thing people do with it, no matter why it’s invented, is to start doing social things,” said Rainie.

Read the full interview.

Elon summit with RTI International examines humanity in the age of AI
/u/news/2025/09/21/elon-summit-with-rti-international-examines-humanity-in-the-age-of-ai/ | Sun, 21 Sep 2025

What does it mean to be human in the age of artificial intelligence? Is it a unique use of language? Is it the demonstration of empathy? Is it the ability to form communities?

How can artificial intelligence help humans better understand their own special capabilities and natural rights? For that matter, what legal rights should be bestowed on highly advanced systems that can reason and, perhaps in the near future, may become self-aware?

These questions and many more were posed during a daylong summit in North Carolina’s Research Triangle Park co-hosted by RTI International and Elon University. More than 600 people registered to attend the conference on Sept. 17, 2025, either in person or via Zoom.

Participants explored relationships between AI and modern approaches to education, human agency, creativity, and well-being. In addition, attendees worked toward a shared research agenda during breakout sessions meant to support responsible development and use of AI technologies.

A roundtable of higher education leaders from top universities across the state also presented on the AI initiatives and research taking place on their respective campuses.

Elon University President Connie Ledoux Book delivers opening remarks at the RTI International and Elon University co-hosted summit on AI on Sept. 17, 2025.

Elon University President Connie Book urged attendees in her welcoming remarks to confront fundamental questions about humanity’s place in a world increasingly shaped by artificial intelligence.

Book traced Elon’s leadership in technology research through its long-running Imagining the Internet Center, the predecessor to the university’s Imagining the Digital Future Center. She also pointed to the university’s leadership in developing a set of core principles to guide development of artificial intelligence policies and practices at colleges and universities.

More than 140 higher education organizations, administrators, researchers and faculty members from 48 countries collaborated on a statement of those principles, which was released Oct. 9, 2023, at the 18th annual United Nations Internet Governance Forum in Kyoto, Japan.

Book cited the success of an Elon University publication authored in partnership with the American Association of Colleges and Universities that has since been adopted by approximately 4,000 colleges, universities, schools and organizations globally.

“All institutions must seriously address the coevolution of humans and digital systems,” she said, calling the conference a chance to “foster forward thinking and take significant action for building a better future together.”

In his own welcoming remarks, RTI International President and CEO Tim Gabel encouraged attendees to consider the promise and responsibilities of employing emerging AI technologies.

“Today is about possibility,” Gabel said. “It’s about gathering as professionals, as leaders, as people to think about how we integrate artificial intelligence into our lives, how it shapes our work, how it shapes our communities, and how it shapes our future.”


Gabel noted his pride in hosting the summit in partnership with þ and outlined some of RTI’s efforts to use artificial intelligences responsibly. Projects include tools for public health communication, a new AI system for RTI researchers, and a “digital twin” of the U.S. population to model disease spread and test solutions.

“The promise lies not just in the technology,” Gabel said, “but in how we, as humans, choose to use it.”

Legal Rights for AI Systems?

James Boyle, the William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood,” suggested in one of two keynote addresses that participants rethink legal and moral boundaries as artificial intelligences advance, arguing that machines with humanlike capacities will force society to confront what it means to be a person.

Boyle, who attended via Zoom and addressed attendees on large screens that flanked both sides of the stage, said the debate over AI goes beyond familiar concerns about bias, jobs and copyright. He urged a deeper look at the “line that we draw between subject and object, between persons and things,” and at how that line has shifted in past moral struggles over race, sex and life itself.

Boyle told his audience that language – long deemed the human hallmark by philosophers from Aristotle to Turing – no longer settles the question of personhood or humanity. Modern systems “have so much language,” Boyle said, and linguistic ability complicates assumptions that syntax implies sentience.

While Boyle said that “ChatGPT is … not in any way conscious right now,” he argued that the rapid pace of development makes eventual change plausible. His remarks outlined three themes:

AI will prompt scientific, philosophical and spiritual reflection about consciousness and human exceptionalism.

AI will force reconsideration of legal personhood — not only for biological beings but for entities such as corporations that already hold rights for pragmatic reasons.

Encounters with machine intelligence can be a mirror: they may expose ethical shortcomings, or spur critical reflection on what entitles beings to moral consideration. Boyle closed on a note of guarded wonder, saying that while risks are real, the possibility of meeting another intelligent entity should also inspire reflection – and, perhaps, humility.

The Intersection of AI and Healthcare

Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily, shared insights on the latest trends in AI and their impact on healthcare innovations and human well-being.

Photo of Erich Huang at a podium delivering remarks at a summit on AI co-hosted by RTI International and þ.
Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily

A surgeon trained at Duke University Hospital, he framed the second of two keynote addresses around a trauma case to underscore the limits of today’s AI tools.

Huang described stabilizing a 58-year-old crash victim, placing chest tubes and rushing her to surgery while consoling her physician husband — moments that no model or robot can yet replicate. “Algorithms don’t pledge any oaths,” he said, invoking the promises physicians make under the Hippocratic oath. “Medicine is a real-life enterprise, and there are still real-life things that have to happen.”

Huang argued that large language models excel at identification and synthesis but do little to build the culture, incentives and workflows needed to change clinician and patient behavior. He warned that electronic health record data and billing codes often reflect reimbursement priorities rather than pathophysiology, risking “garbage in, garbage out.” Aligning payment with outcomes, he said, would create better data and a stronger foundation for trustworthy models.

Huang shared how he has invited technologists to complete “clinical rotations” to see care at the bedside and understand unwritten practices that rarely appear in charts but drive safer outcomes.

While calling himself an optimist about machine learning — citing his early research modeling cancer signaling pathways — he pushed back on hype, noting that autonomous vehicles and other highly touted systems have been adopted more slowly than promised.

“We shouldn’t be using AI as a way to paint over fundamental underlying problems,” he said. Instead, the field should intentionally produce higher-quality clinical data, rigorously test models for specific tasks and embed them in team-based workflows in which humans still call consults, coordinate services and deliver hard news. The goal, he said, is not artifice but “real intelligence” that helps patients get better.

The Future Evolution of Humans and AI

Lee Rainie, director of þ's Imagining the Digital Future Center, addresses attendees of an AI summit co-hosted by RTI International and þ on Sept. 17, 2025
Lee Rainie, director of þ’s Imagining the Digital Future Center

Lee Rainie, director of þ’s Imagining the Digital Future Center, delivered plenary remarks that summarized his center’s recent surveys of expert and American public attitudes about the impact of artificial intelligences on key human capacities and traits.

Rainie described how both experts and the public voiced concern that AI could erode key aspects of human identity over the next decade. Of a dozen traits that were posited in the survey, ranging from empathy to decision-making, “experts thought nine would turn out more negatively than positively,” Rainie said.

Only creativity, curiosity and problem-solving drew optimism.

Those with higher levels of education are more pessimistic than those with lower levels, Rainie said. That pattern, he added, “absolutely reverses the valence” of typical adoption trends, in which educated groups are usually early enthusiasts.

“There’s this palpable, universal sense that the moment we are in is a pivotal moment,” Rainie said. “We’re sharing the space now, in some respects, with other intelligences.”

During audience questions, one participant compared today’s changes to past industrial revolutions. Rainie replied that AI differs because “this is the first time we’ve faced a tool that looks like it has cognitive capacities.”


“The Human Edge: Our Future with Artificial Intelligences” was made possible by support from the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences. It was organized by the Imagining the Digital Future Center at þ (with Lee Rainie), and RTI International’s Fellows Program (with Brian Southwell) and University Collaboration Office (with Katie Bowler Young).

Poll: Americans expect AI to harm many essential human abilities by 2035

Sept. 17, 2025 – A new survey by þ’s Imagining the Digital Future Center finds that more than half of American adults believe the expanded use of AI will have significant impacts on key human capacities and behaviors in the next decade.

The survey asked U.S. adults about their views on the effect of AI systems on 12 core human capacities and found that on each of those attributes, people expect that the impact of AI systems will be more negative than positive in the next 10 years, particularly on these traits:

  • Social and emotional intelligence: By a six-to-one margin (55%-9%), people said the impact of AI will be more negative than positive.
  • Empathy and moral judgment: By a similar margin (49%-8%), they said the impact of AI will be more negative.
  • Capacity and willingness to think deeply about complex subjects: By a 53%-14% margin, they said the impact of AI will be more negative.
  • Sense of individual agency: By a 49%-11% margin, they said the impact of AI will be more negative.
  • Confidence in their own native abilities: By a 43%-17% margin, they said the impact of AI will be more negative.
  • Self-identity, meaning and purpose in life: By a 42%-9% margin, they said the impact of AI will be more negative than positive.

American adults said they expect that by 2035 AI will have had a mixed impact overall on “the essence of being human”: 41% said the changes will be for the better and for the worse in fairly equal measure, while 25% said the changes will mostly be for the worse and 9% said the changes will mostly be for the better.

“These findings raise stark questions about the impact of AI on the essence of being human,” said Lee Rainie, director of þ’s ITDF initiative. “Americans expect the effect of AI will be more negative than not across each of the key human attributes we offered them. This is striking because it challenges the conventional notion that key human skills and social intelligences – sometimes called ‘soft skills’ – will be our saving grace as AI becomes more capable of matching or surpassing other kinds of basic intelligence. It’s now the case that the population fears that in the next decade AI could diminish many of the very qualities that make us uniquely human.”

Chart with information from a survey of Americans about attitudes toward AI

These findings were presented at a Sept. 17 conference co-hosted by þ and RTI International in Durham, N.C.: “The Human Edge: Our Future with Artificial Intelligences.”

The survey followed an earlier set of findings from the ITDF Center, which canvassed several hundred experts on these same questions. Compared with those experts, the general public is considerably more negative about AI’s impact on human curiosity and capacity to learn; innovative thinking and creativity; decision-making and problem-solving; and human metacognition (the ability to think analytically about thinking).

The public also is more likely than experts to declare that they don’t know how to answer these questions about the future impact of AI.

The survey of 1,005 U.S. adults was conducted by SSRS on its Opinion Panel from July 17-20, 2025, and has a margin of error of +/- 3.5 percentage points. A 285-page report covering expert views on these same issues is also available from the center.
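As a rough sanity check on the stated precision (this sketch is an illustration, not SSRS’s actual weighted calculation), the textbook simple-random-sample margin of error for n = 1,005 can be computed directly:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Simple-random-sample margin of error at 95% confidence,
    using the conservative p = 0.5 assumption."""
    return z * math.sqrt(p * (1 - p) / n)

# For the poll's sample of 1,005 adults:
print(round(margin_of_error(1005) * 100, 1))  # 3.1 percentage points
```

The unweighted figure of roughly 3.1 points is smaller than the published +/- 3.5, which is consistent with a design-effect adjustment for panel weighting — an assumption here, as the pollster’s methodology statement is not reproduced in this article.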

RTI International and þ to host conference on the future of artificial intelligence

Aug. 25, 2025 – As artificial intelligence systems become more embedded in daily life, thought leaders will gather at RTI International on Wednesday, Sept. 17, from 8 a.m.–6 p.m. ET to examine how humans can shape the ways in which these technologies impact individuals and societies.

“The Human Edge: Our Future with Artificial Intelligences” will be co-hosted by RTI, an independent scientific research institute, and þ. It will bring together experts from across the region to explore the societal implications of AI.

Higher education leaders, researchers and practitioners are invited to attend.

Opening remarks will be delivered by Tim J. Gabel, president and CEO of RTI International; Connie Ledoux Book, president of þ; and Brian Southwell, distinguished fellow and conference co-organizer at RTI.

“AI is transforming how we work, think and solve problems; at the same time, it’s still people who drive purpose and impact,” Gabel said. “We’re proud to co-host this gathering of thought leaders at our headquarters in RTP, where science and innovation meet real-world challenges. Together, we’ll explore how the human edge—our capacity for critical thinking, creativity, empathy and ethical judgment—improves the use of AI.”

Participants will explore relationships between AI and modern approaches to education, workforce development, human agency, creativity, well-being and governance. Attendees will create a shared research agenda that supports responsible development and use of AI technologies.

“As AI sweeps through workplaces and higher education, we are called to balance our important work in this new environment with keeping human skills and sensibilities at the forefront of all we do,” Book said. “This conference will help chart a path forward by developing a research agenda that expands and evaluates new tools that serve the highest purposes of human endeavor.”

The program will feature keynote addresses, lightning talks and breakout discussions on topics including AI governance, workforce transformation and the impact of intelligent systems on mental and physical health.

As AI sweeps through workplaces and higher education, we are called to balance our important work in this new environment with keeping human skills and sensibilities at the forefront of all we do.

– þ President Connie Ledoux Book

Featured speakers include:

  • Beth Simone Noveck, professor of experiential AI at Northeastern University, director of the GovLab, and author of the forthcoming book “Reboot: The Race to Save Democracy with AI”, will discuss the impact of AI on democracy and collective problem-solving.
  • Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily, will discuss the latest trends in AI and healthcare innovations and how they will affect human well-being.
  • James Boyle, William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood”, will offer insight on the legal and philosophical issues raised by intelligent agents.
  • Lee Rainie, director of the Imagining the Digital Future Center at þ, will report a new survey covering public views about the impact of AI on key human capacities and attributes.

Katie Bowler Young, senior director of university collaborations at RTI International, will facilitate a session featuring senior leaders from Duke University, Fayetteville State University, North Carolina A&T State University, North Carolina Central University, North Carolina State University, the University of North Carolina at Chapel Hill, the University of North Carolina at Greensboro and the National Humanities Center, focused on their institutions’ AI capabilities.

The event is supported by the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences, and is organized by the Imagining the Digital Future Center at þ, RTI’s Fellows Program and RTI’s University Collaboration Office.
