Artificial Intelligence | Today at Elon

Building human resilience for the age of AI
/u/news/2026/04/01/building-human-resilience-for-the-age-of-ai/ — Thu, 02 Apr 2026

Experts Call for Radical Change Across Institutions and Social Structures, Warning That AI Will Be Significantly More Influential in the Next 10 Years or Less

The vast majority of expert respondents in a new canvassing by Elon University’s Imagining the Digital Future (ITDF) Center called for leaders to work together now to build a coordinated resilience infrastructure for the age of artificial intelligence (AI) to counterbalance the human and systemic challenges posed by widespread AI adoption. Some 82% said AI will play a significantly larger role in shaping people’s lives and key societal functions in the next 10 years or less. They urged an “institutions-first” resilience agenda because the most significant problems arise from a life-encircling AI infrastructure.

In more than 160 impassioned essays, the global experts noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit” and after-the-fact personal adaptation – are not enough to help humans flourish as they adjust to an AI-infused future.

Janna Anderson

“The central risk described by these experts is not a single catastrophic AI event,” said report co-author Janna Anderson, professor of communications and senior researcher for the ITDF Center. “They said accelerated AI use will lead to a cumulative reallocation of human agency until people and institutions find it harder to question, contest or even notice what has changed. That drift can look like ‘progress’ in the short term, but it has a price – the gradual weakening of human judgment, accountability, shared truth and the social fabric that makes self-government possible.”

Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: “AI will diffuse responsibility by design. … Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”

The experts responding to this canvassing are an international and notably cross-disciplinary mix of people with academic, professional, technical and industry experience.


The full report is 376 pages. It includes the experts’ full responses to the open-ended essay question. This is the 52nd report issued by ITDF since 2005.

Lee Rainie

“One of the major surprises to me in these responses is that we wrote our questions about resilience wondering about individual resilience and its various parts. Yet these experts were insistent that humanity’s best response for building a brighter future as we evolve with our AI systems must start at a higher level,” said Lee Rainie, director of the ITDF Center. “They note how AI has already become part of our environment, embedded in often-invisible ways in our lives, and it will take a systems-level response to shore up our inborn capacities.”

Alison Poltock, co-founder of AI Commons UK, wrote, “We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.”

Mel Sellick, founder of the Future Human Lab, said, “AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are the last generation that knows what human capacity felt like before it became inseparable from AI.”

Srinivasan Ramani, Internet Hall of Fame member, former research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote, “AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?”

The experts underscored the urgency of taking action. Salman Khatani, manager of the IMAGINE Institute of Futures Studies in Pakistan, wrote, “The window for proactive intervention is now – we have perhaps five to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.”

Taken together, they suggested a sweeping agenda for developing human resilience in the AI Age, stressing that actions by individuals alone are not sufficient. Many of the concerns and proposed solutions are crosscutting, and they said collaboration among societal actors is crucial; many of the items listed under only one of the settings could be undertaken in others. A selection of goals to target:

For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable “red lines” and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.

For AI developers: Do better than designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage people to reflect on choices; train AIs to cite and honor humanity’s intellectual and psychological foundations; build systems that buttress humans’ capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.

For business leaders: See the call to action in the items above; play a role in initiating and carrying out that change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs’ challenges to people’s self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.

For educators: Create literacy regimes in all AI-related domains, particularly “existential literacy” – the cultivation of individuals’ understanding of how technologies shape goals, values and identities. They urged the teaching of skills and development of norms that encourage people to consciously navigate life’s fundamental challenges; to strive to retain and apply the capabilities of metacognition, discernment and epistemic vigilance – to be responsible for making their own decisions, retaining agency; to strengthen their ability to adapt to change and manage friction, paradoxes, ambiguity and anxiety; and to focus on critical human traits such as curiosity and social and emotional intelligence.

For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as “analog communities,” “dumbphones” and “dumb homes” that allow people to avoid algorithmic mediation and surveillance technology.

For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don’t be passive, don’t be hasty and don’t be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.

Many experts expressed optimism, saying if we are resilient and all goes well, humans will flourish in the AI age. Internet pioneer Doc Searls wrote that humans will come to rely on AIs to help with the myriad details of modern life. “Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976,” he wrote. “But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.”

While most comments were focused on developing human resilience for the AI Age, a number of futures-scenario predictions were included in the report. A small selection of the many predictions:

Digital advances drive sex and childbirth declines: “Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.” – Greg Sherwin, Singularity University global faculty member based in Portugal, previously senior principal engineer at Farfetch

“Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems.” – Tracey Follows, founder and CEO of Futuremade, a UK-based futures consultancy

Solitude will be lost: “Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.” – Paul Saffo, prominent Silicon Valley-based forecaster

The retirement age will be manipulated to maintain ‘full employment’: Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the “governor” (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron, past president of the Center for Policy on Emerging Technologies

Battles will occur over defining what is ‘human’: “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans. … ‘Physical AI’ will fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs used today.” – Ray Wang, chair and principal analyst at Constellation Research

AIs will gain rights: “We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights.” – John Smart, president of the Acceleration Studies Foundation and author of “Introduction to Foresight”

“AI psychosis and other forms of mental illness will arise. The further erosion of a solid foundational reality will create a great vulnerability. Coping with these issues will require new approaches to the diagnosis and treatment of mental illness. It will also demand new approaches to evaluating and appreciating the impact of human relationships with AIs and deeper assessment and understanding of consciousness itself.” – Stephan Adelson, president of Adelson Consulting Services

Superstupidity (not superintelligence) is the real threat: “The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film ‘Idiocracy’ is prophetic.” – Roger Spitz, founder of the Disruptive Futures Institute in San Francisco

Agent failures will start with social (not technical) problems: “Agentic systems will fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation.” – Daniel Erasmus, founder at Serious Insights, based in Amsterdam

As agents take over, the internet will become a network of databases, not websites: “As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.” – Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University


This report is based on a canvassing with a non-random sample conducted between Dec. 26, 2025, and Feb. 12, 2026. In all, 386 experts responded to at least one aspect of the canvassing; 251 provided written answers to an open-ended question, and more than 160 provided detailed essay-length responses. The Imagining the Digital Future Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead. The Center was established in 2000 as Imagining the Internet and renamed with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university located in Elon, North Carolina.

President Connie Ledoux Book discusses workforce and AI at Alamance Growth Summit in Triad Business Journal
/u/news/2026/03/30/president-connie-ledoux-book-discusses-workforce-and-ai-at-alamance-growth-summit-in-triad-business-journal/ — Mon, 30 Mar 2026

Elon University President Connie Ledoux Book was featured in a Triad Business Journal article highlighting regional leaders’ discussions on workforce development and the growing impact of artificial intelligence at the Alamance Growth Summit.

The story focuses on how Alamance County is preparing for long-term economic shifts, including an aging workforce and the increasing integration of AI across industries. During the summit, Book emphasized the importance of taking a forward-looking approach to these challenges.

“We actually have five generations in the workplace working side by side for the first time in history right now in the United States,” Book said. “I believe that the businesses that thrive in the future will be the ones who can put a lot of brain power behind that and leverage it for the future of their business.”

Inaugural Make Your Mark competition challenges students to blend creativity and AI
/u/news/2026/03/09/inaugural-make-your-mark-competition-challenges-students-to-blend-creativity-and-ai/ — Mon, 09 Mar 2026
Make your creativity count at the inaugural Make Your Mark: AI Poster Competition — a high-energy design challenge exploring how AI can be used thoughtfully, responsibly and strategically in creative practice.

Students across Elon will soon have the opportunity to test their creativity, design instincts and emerging AI skills in the inaugural Make Your Mark: AI Poster Competition, a fast-paced challenge exploring how artificial intelligence can support — not replace — thoughtful creative work.

Open to students from any academic discipline, the first-time event encourages participants to experiment with AI tools while developing strong visual concepts and design strategies. An optional preparatory workshop on Tuesday, March 31, in Steers Pavilion will give students the chance to refine their ideas and explore approaches before the challenge officially gets underway.

The main competition takes place 5 to 7:30 p.m. on Thursday, April 2, in Schar Hall, where students will receive a live prompt and have 2.5 hours to design an original 11″ × 17″ poster. Each submission must combine an AI-generated element with a non-AI or hand-crafted component, while also documenting how AI supported the creative process.

Once completed, the posters will be printed and displayed for public voting during an April 3 awards event from 5 to 6 p.m. in LaRose Digital Theater. Students will compete for $650 in prizes, including awards for the top three posters, a Fan Favorite selected by the audience, and a Judge’s Favorite.

For organizers, the competition represents more than just a creative challenge – it is also a new example of cross-campus collaboration.

“I’m excited about the Make Your Mark: AI Poster Competition for a number of reasons. One of the biggest is that this is one of the first times the Communication Design program has partnered with Elon AI, and it’s been a lot of fun exploring how AI and design can complement each other,” said Ben Hannam, associate professor and chair of the Department of Communication Design.

Hannam said the contest’s prompt is designed to spark ideas across disciplines and invite students from across campus to participate.

“I’m really looking forward to seeing what students create once we reveal the secret prompt,” he said. “If you drew a Venn diagram, the prompt would definitely overlap with interests in both the School of Communications and the Love School of Business – but honestly, a creative student from anywhere on campus could walk away with the win.”

The competition also highlights the evolving role of AI in creative practice — not as a shortcut, but as a tool that still requires strong ideas and thoughtful design decisions.

“The goal of this competition is to give students a chance to experiment with emerging tools while still focusing on creativity and ideas,” said Mustafa Akben, assistant professor of management and director of artificial intelligence integration. “AI can generate images quickly, but the real challenge is developing a concept and translating it into a strong visual. We are excited to see how students interpret the prompt and what they create in a short amount of time.”

Sagun Giri, AI Sandbox coordinator, noted that the event reflects a broader effort at Elon to bring together faculty and programs exploring how AI intersects with their fields.

“The Elon AI Hub works with partners across campus who are exploring how AI connects to their fields,” he said. “Make Your Mark is a great example of that collaboration between the School of Communications, the Love School of Business, and the AI Hub. It gives students a chance to experiment with AI tools, test their ideas, and create something original.”

Hannam said the competition ultimately aims to give students a creative outlet while encouraging experimentation with new tools.

“At the end of the day, this event is all about having fun, flexing your AI skills, and being creative,” he said. “I can’t wait to see what students come up with and who emerges as the winners in this head-to-head poster competition.”

Three faculty members will serve as judges for the competition: Michele Lashley, assistant professor of strategic communications; Smaraki Mohanty, Doherty Emerging Professor of Entrepreneurship and assistant professor of marketing; and Lana Waschka, assistant professor of marketing.

Ready to make your mark? Complete the online registration form. For additional information, contact Giri at sgiri@elon.edu.

Event recap

Tuesday, March 31, 5–6 p.m.
Pre-event workshop — Steers Pavilion

Thursday, April 2, 5–7:30 p.m.
Live competition — Schar Hall labs and Snow Family Grand Atrium

Friday, April 3, 5–6 p.m.
Awards ceremony — LaRose Digital Theater

Elon faculty and staff named to CAA Academic Alliance AI Technologies Champion Network
/u/news/2026/02/05/elon-faculty-and-staff-named-to-caa-academic-alliance-ai-technologies-champion-network/ — Thu, 05 Feb 2026

An Elon faculty member and staff member have been named to the inaugural cohort of the CAA Academic Alliance AI Technologies Champion Network.

Dan Anderson, special assistant to the president, and Michele Lashley, assistant professor of strategic communications, are recognized as faculty and staff “who are creatively and responsibly integrating artificial intelligence technologies into teaching/learning, research, student success, leadership development and institutional effectiveness.”

As AI reshapes higher education, structured and collaborative approaches are essential for implementation that is cohesive, consistent and ethical. The AI Technologies Champion Network initiative addresses this transformational challenge by recognizing leaders across the Alliance, including Elon, building a community of AI technology champions and preparing inter-institutional teams for near-future extramural funding efforts.

Anderson was also named an AI Technologies Network Award recipient, acknowledging his spearheading of an effort involving scholars from 48 countries to produce a statement of principles guiding higher education’s role in preparing humanity for the AI revolution.

Launched as a new initiative in October 2025, the CAA Academic Alliance requested applications from the thirteen institutions that make up the Alliance. Nearly 400 applicants responded to the call, and 22 faculty and staff members were selected to form the Alliance’s Class of 2025-26.

Transatlantic Teaching Exchange Series launches in spring 2026
/u/news/2026/01/12/transatlantic-þ-exchange-series-launches-in-spring-2026/ — Mon, 12 Jan 2026

Join colleagues and students from Elon University, the University of Warwick, the University of Leeds and partner institutions for a transatlantic collaboration exploring critical questions in higher education teaching. This series is convened by Tom Ritchie, US-UK Fulbright Scholar and visiting professor at Elon from the University of Warwick, working with Sarah Bunnell and colleagues at CATL.

This partnership brings together colleagues and students from each of the participating institutions.

Each session will feature a short presentation from one of the partner institutions, followed by facilitated small-group discussions and sharing across institutions. All sessions run 11 a.m. to noon EST via Microsoft Teams. Participants may join individual sessions or participate in the full series.

Schedule:

  • Feb. 11: What makes teaching “excellent” in your context?
  • March 4: How do we teach for a sustainable future – embedding sustainability across disciplines?
  • March 25: Belonging and exclusion – frameworks for understanding and action
  • April 15: Teaching in the age of AI – opportunities and boundaries
  • May 6: How can assessment drive learning – not just measure it?
  • May 20: Building transatlantic partnerships – what could we create together?

Register for sessions

Elon ranked among the ‘best high-tech college campuses’ of 2026
/u/news/2026/01/02/elon-ranked-among-the-best-high-tech-college-campuses-of-2026/ — Fri, 02 Jan 2026

Elon University has been ranked one of the “best high-tech college campuses” of 2026 by University Magazine, which evaluated campuses where technology actively improves learning, research output and student opportunity.

Elon is ranked No. 9 on the list for its focus on “innovation through modern campus technology and experiential learning. Students access smart classrooms, digital media studios and technology-enhanced learning spaces across disciplines. Elon emphasizes practical application of technology through research, creative projects and global experiences.”

Spaces across Elon’s campus allow students to learn about technology, and through technology as well, including the Maker Hub, where any member of the Elon community can freely access and use 3D printers, sewing machines, laser engravers, saws and drills, a CNC router, an embroidery machine and much more. Elon’s Founders Hall and Innovation Hall also include a multitude of learning lab opportunities, including Engineering Design, Engineering Prototype, Virtual Reality and Mechatronics.

Students working with Professor Matthew Banks in the Innovation Lab on Nov. 20, 2025.
Assistant Professor of Nursing Jeanmarie Koonts (far right) demonstrates health care techniques on one of the mannequins in the Gerald L. Francis Center’s Interprofessional Simulation Center.

At Elon, technological learning is not restricted to STEM subjects. In 2025, the Department of Music opened an immersive audio room in Arts West, providing students and faculty with a high-quality environment for both teaching and experimentation — particularly in Dolby Atmos, the industry-standard format that reshapes everything from cinematic sound to commercial music releases. The Department of Performing Arts’ fall 2024 performance of “Legally Blonde” also featured some robotic co-stars, thanks to a collaboration with students in the Department of Engineering.

A new major in the School of Communications, digital content management (DCM), prepares students for careers in digital storytelling, content strategy and audience engagement across emerging platforms. The school also launched a new “Drones and Society” course in fall 2025, which blends hands-on projects and flight simulations with discussions about ethics, privacy and the broader impact of drone use.

Randy Piland (left), associate professor of communication design, and Scott Borland ’26 pilot a drone during the new Drones and Society course.

As artificial intelligence continues to be at the forefront of technology conversations, Elon named Mustafa Akben as its first director of artificial intelligence integration. Akben now leads the integration of artificial intelligence across Elon’s academic and administrative departments, building on six core principles the university helped establish to guide higher education institutions through a rapidly evolving and groundbreaking technology.

Students use these opportunities for learning in their research as well. Rony Dahdal ’26, a Lumen Scholar and Goldwater Scholar, is researching how to use LiDAR, a remote-sensing technology that uses laser beams to measure distances and movements, to detect vital health signs. Another Lumen Scholar and Goldwater Scholar, Jacob Karty ’26, is doing research around agricultural robotics.

“(Elon’s) commitment to innovation helps students develop strong digital communication and problem-solving skills as they prepare for careers shaped by rapid technology change,” writes University Magazine.

Associate Professor of Computer Science Ryan Mattfeld (left) and Rony Dahdal ’26 (right) demonstrate LiDAR technology. Dahdal’s Lumen Prize research is focused on how to use the technology to detect vital signs.
Lee Rainie interviewed by WXII about AI and human relationships
/u/news/2025/11/03/lee-rainie-interviewed-by-wxii-about-ai-and-relationships/ — Mon, 03 Nov 2025

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, recently spoke with WXII about research surrounding artificial intelligence and relationships.

Rainie says the center is analyzing how people are now using AI tools like humans, including as therapists, friends or even dating partners.

“It’s a long-standing story, especially with digital technologies, that the first thing people do with it, no matter why it’s invented, is to start doing social things,” said Rainie.

Read the full interview.

Elon summit with RTI International examines humanity in the age of AI
/u/news/2025/09/21/elon-summit-with-rti-international-examines-humanity-in-the-age-of-ai/ — Sun, 21 Sep 2025

What does it mean to be human in the age of artificial intelligence? Is it a unique use of language? Is it the demonstration of empathy? Is it the ability to form communities?

How can artificial intelligence help humans better understand their own special capabilities and natural rights? For that matter, what legal rights should be bestowed on highly advanced systems that can reason and, perhaps in the near future, may become self-aware?

These questions and many more were posed during a daylong summit in North Carolina’s Research Triangle Park co-hosted by RTI International and Elon University. More than 600 people registered to attend the conference on Sept. 17, 2025, either in person or via Zoom.

Participants explored relationships between AI and modern approaches to education, human agency, creativity, and well-being. In addition, attendees worked toward a shared research agenda during breakout sessions meant to support responsible development and use of AI technologies.

A roundtable of higher education leaders from top universities across the state also presented on the AI initiatives and research taking place on their respective campuses.

Elon University President Connie Ledoux Book delivers opening remarks at the RTI International and Elon University co-hosted summit on AI on Sept. 17, 2025.

Elon University President Connie Book urged attendees in her welcoming remarks to confront fundamental questions about humanity’s place in a world increasingly shaped by artificial intelligence.

Book traced Elon’s leadership in technology research through its long-running Imagining the Internet Center, the predecessor to the university’s Imagining the Digital Future Center. She also pointed to Elon’s leadership in developing a set of core principles to guide development of artificial intelligence policies and practices at colleges and universities.

More than 140 higher education organizations, administrators, researchers and faculty members from 48 countries collaborated on a statement of those principles, which was released Oct. 9, 2023, at the 18th annual United Nations Internet Governance Forum in Kyoto, Japan.

Book cited the success of a publication authored by Elon in partnership with the American Association of Colleges and Universities that has since been adopted by approximately 4,000 colleges, universities, schools and organizations globally.

“All institutions must seriously address the coevolution of humans and digital systems,” she said, calling the conference a chance to “foster forward thinking and take significant action for building a better future together.”

In his own welcoming remarks, RTI International President and CEO Tim Gabel encouraged attendees to consider the promise and responsibilities of employing emerging AI technologies.

“Today is about possibility,” Gabel said. “It’s about gathering as professionals, as leaders, as people to think about how we integrate artificial intelligence into our lives, how it shapes our work, how it shapes our communities, and how it shapes our future.”

Today is about possibility … it’s about gathering as professionals, as leaders, as people to think about how we integrate artificial intelligence into our lives, how it shapes our work, how it shapes our communities, and how it shapes our future.

– Tim Gabel, President and CEO, RTI International

Gabel noted his pride in hosting the summit in partnership with þ and outlined some of RTI’s efforts to use artificial intelligence responsibly. Projects include tools for public health communication, a new AI system for RTI researchers, and a “digital twin” of the U.S. population to model disease spread and test solutions.

“The promise lies not just in the technology,” Gabel said, “but in how we, as humans, choose to use it.”

Legal Rights for AI Systems?

James Boyle, the William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood,” suggested in one of two keynote addresses that participants rethink legal and moral boundaries as artificial intelligences advance, arguing that machines with humanlike capacities will force society to confront what it means to be a person.

Boyle, who attended via Zoom and addressed attendees on large screens that flanked both sides of the stage, said the debate over AI goes beyond familiar concerns about bias, jobs and copyright. He urged a deeper look at the “line that we draw between subject and object, between persons and things,” and at how that line has shifted in past moral struggles over race, sex and life itself.

Boyle told his audience that language – long deemed the human hallmark by philosophers from Aristotle to Turing – no longer settles the question of personhood or humanity. Modern systems “have so much language,” Boyle said, and linguistic ability complicates assumptions that syntax implies sentience.

While Boyle said that “ChatGPT is … not in any way conscious right now,” he argued that the rapid pace of development makes eventual change plausible. His remarks outlined three themes:

  • AI will prompt scientific, philosophical and spiritual reflection about consciousness and human exceptionalism.
  • AI will force reconsideration of legal personhood — not only for biological beings but for entities such as corporations that already hold rights for pragmatic reasons.
  • Encounters with machine intelligence can be a mirror: they may expose ethical shortcomings, or spur critical reflection on what entitles beings to moral consideration.

Boyle closed on a note of guarded wonder, saying that while risks are real, the possibility of meeting another intelligent entity should also inspire reflection – and, perhaps, humility.

The Intersection of AI and Healthcare

Erich Huang, head of clinical informatics at Verily and chief science & innovation officer for Unduo/Verily, shared insights on the latest trends in AI and their impact on healthcare innovations and human well-being.

Photo of Erich Huang at a podium delivering remarks at a summit on AI co-hosted by RTI International and þ.
Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily

A surgeon trained at Duke University Hospital, he framed the second of two keynote addresses around a trauma case to underscore the limits of today’s AI tools.

Huang described stabilizing a 58-year-old crash victim, placing chest tubes and rushing her to surgery while consoling her physician husband — moments that no model or robot can yet replicate. “Algorithms don’t pledge any oaths,” he said, invoking the promises physicians make under the Hippocratic oath. “Medicine is a real-life enterprise, and there are still real-life things that have to happen.”

The speaker argued that large language models excel at identification and synthesis but do little to build the culture, incentives and workflows needed to change clinician and patient behavior. He warned that electronic health record data and billing codes often reflect reimbursement priorities rather than pathophysiology, risking “garbage in, garbage out.” Aligning payment with outcomes, he said, would create better data and a stronger foundation for trustworthy models.

Huang shared how he has invited technologists to complete “clinical rotations” to see care at the bedside and understand unwritten practices that rarely appear in charts but drive safer outcomes.

While calling himself an optimist about machine learning — citing his early research modeling cancer signaling pathways — he pushed back on hype, noting that autonomous vehicles and other highly touted systems have been adopted more slowly than promised.

“We shouldn’t be using AI as a way to paint over fundamental underlying problems,” he said. Instead, the field should intentionally produce higher-quality clinical data, rigorously test models for specific tasks and embed them in team-based workflows in which humans still call consults, coordinate services and deliver hard news. The goal, he said, is not artifice but “real intelligence” that helps patients get better.

The Future Evolution of Humans and AI

Lee Rainie, director of þ’s Imagining the Digital Future Center, addresses attendees of an AI summit co-hosted by RTI International and þ on Sept. 17, 2025
Lee Rainie, director of þ’s Imagining the Digital Future Center

Lee Rainie, director of þ’s Imagining the Digital Future Center, delivered plenary remarks that summarized his center’s recent surveys of expert and American public attitudes about the impact of artificial intelligences on key human capacities and traits.

Rainie described how both experts and the public voiced concern that AI could erode key aspects of human identity over the next decade. Of a dozen traits that were posited in the survey, ranging from empathy to decision-making, “experts thought nine would turn out more negatively than positively,” Rainie said.

Only creativity, curiosity and problem-solving drew optimism.

Those with higher levels of education are more pessimistic than those with lower levels, Rainie said. That finding, he added, “absolutely reverses the valence” of typical adoption patterns seen in earlier technology surveys, where educated groups are usually early enthusiasts.

“There’s this palpable, universal sense that the moment we are in is a pivotal moment,” Rainie said. “We’re sharing the space now, in some respects, with other intelligences.”

During audience questions, one participant compared today’s changes to past industrial revolutions. Rainie replied that AI differs because “this is the first time we’ve faced a tool that looks like it has cognitive capacities.”


“The Human Edge: Our Future with Artificial Intelligences” was made possible by support from the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences. It was organized by the Imagining the Digital Future Center at þ (with Lee Rainie), RTI International’s Fellows Program (with Brian Southwell) and RTI’s University Collaboration Office (with Katie Bowler Young).

Poll: Americans expect AI to harm many essential human abilities by 2035 /u/news/2025/09/17/poll-americans-expect-ai-to-harm-many-essential-human-abilities-by-2035/ Wed, 17 Sep 2025 17:15:51 +0000 /u/news/?p=1027753 A new survey by þ’s Imagining the Digital Future Center finds that more than half of American adults believe the expanded use of AI will have significant impacts on key human capacities and behaviors in the next decade.

The survey asked U.S. adults about their views on the effect of AI systems on 12 core human capacities and found that on each of those attributes, people expect that the impact of AI systems will be more negative than positive in the next 10 years, particularly on these traits:

  • Social and emotional intelligence: By a six-to-one margin (55%-9%), people said the impact of AI will be more negative than positive.
  • Empathy and moral judgment: By a similar margin (49%-8%), they said the impact of AI will be more negative.
  • Capacity and willingness to think deeply about complex subjects: By a 53%-14% margin, they said the impact of AI will be more negative.
  • Sense of individual agency: By a 49%-11% margin, they said the impact of AI will be more negative.
  • Confidence in their own native abilities: By a 43%-17% margin, they said the impact of AI will be more negative.
  • Self-identity, meaning and purpose in life: By a 42%-9% margin, they said the impact of AI will be more negative than positive.

American adults said they expect that by 2035 AI will have had a mixed impact overall on “the essence of being human”: 41% said the changes will be for the better and for the worse in fairly equal measure, while 25% said the changes will mostly be for the worse and 9% said the changes will mostly be for the better.

“These findings raise stark questions about the impact of AI on the essence of being human,” said Lee Rainie, director of þ’s ITDF initiative. “Americans expect the effect of AI will be more negative than not across each of the key human attributes we offered them. This is striking because it challenges the conventional notion that key human skills and social intelligences – sometimes called ‘soft skills’ – will be our saving grace as AI becomes more capable of matching or surpassing other kinds of basic intelligence. It’s now the case that the population fears that in the next decade AI could diminish many of the very qualities that make us uniquely human.”

Chart with information from a survey of Americans about attitudes toward AI

These findings were presented at a Sept. 17 conference co-hosted by þ and RTI International in Durham, N.C.: “The Human Edge: Our Future with Artificial Intelligences.”

The survey followed an earlier set of findings from the ITDF Center, which canvassed several hundred experts on these same questions. Comparing those results, the general public is considerably more negative than the experts about the impact of AI on human curiosity and capacity to learn, people’s capacity for innovative thinking and creativity, decision-making and problem-solving, and human metacognition (the ability to think analytically about thinking).

The public also is more likely than experts to declare that they don’t know how to answer these questions about the future impact of AI.

The survey of 1,005 U.S. adults was conducted by SSRS on its Opinion Panel from July 17-20, 2025, and has a margin of error of +/- 3.5 percentage points. The 285-page report covering expert views on these issues is also available from the center.

RTI International and þ to host conference on the future of artificial intelligence /u/news/2025/08/25/rti-international-and-elon-university-to-host-conference-on-the-future-of-artificial-intelligence/ Mon, 25 Aug 2025 20:19:54 +0000 /u/news/?p=1025489 As artificial intelligence systems become more embedded in daily life, thought leaders will gather at RTI International on Wednesday, Sept. 17, from 8 a.m.–6 p.m. ET to examine how humans can shape the ways in which these technologies impact individuals and societies.

“The Human Edge: Our Future with Artificial Intelligences” will be co-hosted by RTI, an independent scientific research institute, and þ. It will bring together experts from across the region to explore the societal implications of AI.

Higher education leaders, researchers and practitioners are invited to attend.

Opening remarks will be delivered by Tim J. Gabel, president and CEO of RTI International; Connie Ledoux Book, president of þ; and Brian Southwell, distinguished fellow and conference co-organizer at RTI.

“AI is transforming how we work, think and solve problems; at the same time, it’s still people who drive purpose and impact,” Gabel said. “We’re proud to co-host this gathering of thought leaders at our headquarters in RTP, where science and innovation meet real-world challenges. Together, we’ll explore how the human edge—our capacity for critical thinking, creativity, empathy and ethical judgment—improves the use of AI.”

Participants will explore relationships between AI and modern approaches to education, workforce development, human agency, creativity, well-being and governance. Attendees will create a shared research agenda that supports responsible development and use of AI technologies.

“As AI sweeps through workplaces and higher education, we are called to balance our important work in this new environment with keeping human skills and sensibilities at the forefront of all we do,” Book said. “This conference will help chart a path forward by developing a research agenda that expands and evaluates new tools that serve the highest purposes of human endeavor.”

The program will feature keynote addresses, lightning talks and breakout discussions on topics including AI governance, workforce transformation and the impact of intelligent systems on mental and physical health.

As AI sweeps through workplaces and higher education, we are called to balance our important work in this new environment with keeping human skills and sensibilities at the forefront of all we do.

– þ President Connie Ledoux Book

Featured speakers include:

  • Beth Simone Noveck, professor of experiential AI at Northeastern University, director of the GovLab, and author of the forthcoming book “Reboot: The Race to Save Democracy with AI”, will discuss the impact of AI on democracy and collective problem-solving.
  • Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily, will discuss the latest trends in AI and healthcare innovations and how they will affect human well-being.
  • James Boyle, William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood”, will offer insight on the legal and philosophical issues raised by intelligent agents.
  • Lee Rainie, director of the Imagining the Digital Future Center at þ, will present findings from a new survey of public views about the impact of AI on key human capacities and attributes.

Katie Bowler Young, senior director of university collaborations at RTI International, will facilitate a session featuring senior leaders from Duke University, Fayetteville State University, North Carolina A&T State University, North Carolina Central University, North Carolina State University, the University of North Carolina at Chapel Hill, the University of North Carolina at Greensboro and the National Humanities Center, focusing on their institutions’ AI capabilities.

The event is supported by the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences, and is organized by the Imagining the Digital Future Center at þ, RTI’s Fellows Program and RTI’s University Collaboration Office.
