Posts by Dan Anderson | Today at Elon

Building human resilience for the age of AI
/u/news/2026/04/01/building-human-resilience-for-the-age-of-ai/ — Thu, 02 Apr 2026

Experts Call for Radical Change Across Institutions and Social Structures, Warning That AI Will Be Significantly More Influential in the Next 10 Years or Less

The vast majority of expert respondents in a new canvassing by the ITDF Center called for leaders to work together now to build a coordinated resilience infrastructure for the age of artificial intelligence (AI) to counterbalance the human and systemic challenges posed by widespread AI adoption. Some 82% said AI will play a significantly larger role in shaping people’s lives and key societal functions in the next 10 years or less. They urged an “institutions-first” resilience agenda because the most significant problems arise from a life-encircling AI infrastructure.

In more than 160 impassioned essays, the global experts noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit” and after-the-fact personal adaptation – are not enough to help humans flourish as they adjust to an AI-infused future.

Janna Anderson

“The central risk described by these experts is not a single catastrophic AI event,” said report co-author Janna Anderson, professor of communications and senior researcher for the ITDF Center. “They said accelerated AI use will lead to a cumulative reallocation of human agency until people and institutions find it harder to question, contest or even notice what has changed. That drift can look like ‘progress’ in the short term, but it has a price – the gradual weakening of human judgment, accountability, shared truth and the social fabric that makes self-government possible.”

Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: “AI will diffuse responsibility by design. … Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”

The experts responding to this canvassing are an international and notably cross-disciplinary mix of people with academic, professional, technical and industry experience.


The full report is 376 pages. It includes experts’ full responses to the open-ended essay question. This is the 52nd report issued by ITDF since 2005.

Lee Rainie

“One of the major surprises to me in these responses is that we wrote our questions about resilience wondering about individual resilience and its various parts. Yet these experts were insistent that humanity’s best response for building a brighter future as we evolve with our AI systems must start at a higher level,” said Lee Rainie, director of the ITDF Center. “They note how AI has already become part of our environment, embedded in often invisible ways in our lives and it will take a systems-level response to shore up our in-born capacities.”

Alison Poltock, co-founder of AI Commons UK, wrote, “We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.”

Mel Sellick, founder of the Future Human Lab, said, “AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are the last generation that knows what human capacity felt like before it became inseparable from AI.”

Srinivasan Ramani, Internet Hall of Fame member, former research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote, “AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?”

The experts underscored the urgency of taking action. Salman Khatani, manager of the IMAGINE Institute of Futures Studies in Pakistan, wrote, “The window for proactive intervention is now – we have perhaps five to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.”

Taken together, they suggested a sweeping agenda for developing human resilience in the AI Age, focused on the fact that actions by individuals alone are not sufficient. Many of the concerns and proposed solutions are crosscutting, and they said collaboration among societal actors is crucial; many of the items listed in only one of the settings could be undertaken in others. A selection of goals to target:

For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable “red lines” and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.

For AI developers: Do better than designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage people to reflect on choices; train AIs to cite and honor humanity’s intellectual and psychological foundations; build systems that buttress humans’ capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.

For business leaders: See the call to action in the items above; play a role in initiating and carrying out that change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs’ challenges to people’s self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.

For educators: Create literacy regimes in all AI-related domains, particularly “existential literacy” – the cultivation of individuals’ understanding of how technologies shape goals, values and identities. They urged the teaching of skills and development of norms that encourage people to consciously navigate life’s fundamental challenges; to strive to retain and apply the capabilities of metacognition, discernment and epistemic vigilance – to be responsible for making their own decisions and to retain agency; to strengthen their ability to adapt to change and manage friction, paradox, ambiguity and anxiety; and to focus on critical human traits such as curiosity and social and emotional intelligence.

For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as “analog communities,” “dumbphones” and “dumb homes” that allow people to avoid algorithmic mediation and surveillance technology.

For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don’t be passive, don’t be hasty and don’t be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.

Many experts expressed optimism, saying if we are resilient and all goes well, humans will flourish in the AI age. Internet pioneer Doc Searls wrote that humans will come to rely on AIs to help with the myriad details of modern life. “Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976,” he wrote. “But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.”

While most comments were focused on developing human resilience for the AI Age, a number of futures-scenario predictions were included in the report. A small selection of the many predictions:

Digital advances drive sex and childbirth declines: “Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.” – Greg Sherwin, Singularity University global faculty member based in Portugal, previously senior principal engineer at Farfetch

“Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems.” – Tracey Follows, founder and CEO of Futuremade, a UK-based futures consultancy

Solitude will be lost: “Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.” – Paul Saffo, prominent Silicon Valley-based forecaster

The retirement age will be manipulated to maintain ‘full employment’: Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the “governor” (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron, past president of the Center for Policy on Emerging Technologies

Battles will occur over defining what is ‘human’: “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans. … ‘Physical AI’ will fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs used today.” – Ray Wang, chair and principal analyst at Constellation Research

AIs will gain rights: “We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights.” – John Smart, president of the Acceleration Studies Foundation and author of “Introduction to Foresight”

“AI psychosis and other forms of mental illness will arise. The further erosion of a solid foundational reality will create a great vulnerability. Coping with these issues will require new approaches to the diagnosis and treatment of mental illness. It will also demand new approaches to evaluating and appreciating the impact of human relationships with AIs and deeper assessment and understanding of consciousness itself.” – Stephan Adelson, president of Adelson Consulting Services

Superstupidity (not superintelligence) is the real threat: “The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film ‘Idiocracy’ is prophetic.” – Roger Spitz, founder of the Disruptive Futures Institute in San Francisco

Agent failures will start with social (not technical) problems: “Agentic systems will fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation.” – Daniel Erasmus, founder at Serious Insights, based in Amsterdam

As agents take over, the internet will become a network of databases, not websites: “As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.” – Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University


The report is based on a canvassing with a non-random sample conducted between Dec. 26, 2025, and Feb. 12, 2026. In all, 386 experts responded to at least one aspect of the canvassing; 251 provided written answers to an open-ended question, and more than 160 provided detailed essay-length responses. The ITDF Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead. The Center was established in 2000 as Imagining the Internet and renamed with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university located in Elon, North Carolina.

Jeff Stein named chief integration officer and executive vice president
/u/news/2025/12/29/jeff-stein-named-chief-integration-officer-and-executive-vice-president/ — Mon, 29 Dec 2025

Longtime Elon leader Jeff Stein, who recently served as president of Mary Baldwin University in Virginia, has returned to Elon to serve as chief integration officer and executive vice president. Stein began his work Dec. 29 and is based in Charlotte, N.C., providing leadership in the merger process for Elon with Queens University of Charlotte.

A key advisor to Elon President Connie Book and a member of the university’s senior staff and the vice president team, Stein will collaborate with students, faculty, and staff at Elon and Queens to support the creation of a fully integrated campus.

“Jeff Stein’s 21 years of service to Elon and his deep knowledge of higher education make him the perfect choice to lead integration of our two universities,” said President Connie Book. “In his previous roles as Elon’s vice president for strategic initiatives and partnerships, and co-chair of the Boldly Elon strategic planning committee, Jeff was central to our plans to establish national campus locations. His experience and strategic skills are great assets as we move forward in the merger process.”

Elon and Queens trustees formally approved the definitive legal agreement in December, a significant milestone in the complex merger that is projected to be finalized with U.S. Department of Education approval in 2027 or 2028.

“I’m excited to build upon the Elon and Queens legacies, bringing together two student-centered institutions to create something neither could accomplish alone – a model for the future of higher education that expands opportunity for students and strengthens one of America’s most dynamic cities,” Stein said.

Stein will co-chair the soon-to-be-formed Integration Team with Queens President Emerita Pamela Davies. Working closely with President Book and Queens Acting President and CEO Jesse Cureton, the Integration Team will work in parallel with the strategic planning committee to respond to regulatory and accreditation requirements and to establish shared services. Both groups will also work to identify future opportunities that leverage Elon’s and Queens’ commitment to excellent teaching, experiential learning and student success.

During Stein’s tenure as Mary Baldwin University president, the institution advanced financial stability, secured significant philanthropic resources, redesigned the student experience through an innovative general education program and new academic communities and pathways, and engaged campus and community stakeholders in developing a strategic plan during a period of significant institutional transition. His work establishing new processes for institutional effectiveness, campus-community engagement, and enrollment and brand strategy will prove invaluable in the merger of Elon and Queens.

Stein joined Elon in 2002 as assistant dean of students and assistant professor of English. He joined the university’s senior staff in 2010 as special assistant to the president and secretary to the board of trustees and was later named chief of staff.

In 2019, President Book named Stein to the new position of vice president for strategic initiatives and partnerships, providing leadership for a wide range of Elon initiatives, including the Student Professional Development Center, Cultural and Special Programs, the Office of Leadership and Professional Development, and Professional and Continuing Studies, as well as co-chairing the Mentoring Design Team and development of regional learning centers. He was also instrumental in building Elon’s residential campus initiatives and a vibrant Jewish life program that led to significant growth in Jewish student enrollment. Over the first two years of the global pandemic, Stein led teams of faculty and staff in creating and implementing Elon’s extensive COVID-19 response.

Stein and his wife Chrissy, who taught in Elon’s English department for several years and served as director of the Commons Academic Success Center and the Writing Center at Mary Baldwin, are excited to make their home in Charlotte. Stein has an office in the Elon facility on Tremont Avenue in the vibrant South End neighborhood.

Death of Elon sports legend Rich McGeorge
/u/news/2025/12/21/death-of-elon-sports-legend-rich-mcgeorge/ — Sun, 21 Dec 2025

Richard “Rich” McGeorge ’72, one of Elon’s most distinguished athletes and a cherished figure in both collegiate and professional football, has died. He is remembered for a life defined by perseverance, leadership and deep devotion to the people and institutions he loved.

Born in Roanoke, Virginia, McGeorge arrived at Elon College in the late 1960s and quickly became a transformational force on the football field. As a tight end under legendary coach Red Wilson, he rewrote Elon’s receiving records book, ending his career as the school’s career record-holder with 224 receptions for 3,486 yards and 31 touchdowns. He also set single-season marks with 65 grabs for 1,081 yards, and single-game records with 15 catches, 285 receiving yards and four touchdowns. He was the MVP in the Carolinas Conference twice while setting school, conference, district and NAIA national records during his four-year career.

In addition to playing football, McGeorge was also a standout member of Elon’s basketball team, scoring 1,044 points in 76 games and being named All-Conference (Carolinas Conference) in 1969. His career field goal percentage of .589 ranks second in Elon men’s basketball history.

As both a junior and senior, he received the prestigious Elon Athletics Stein H. Basnight Outstanding Athlete Award.

McGeorge was named a two-time first-team All-American and an Academic All-American, helping to elevate Elon’s national profile and inspiring generations of athletes who followed him.

His extraordinary collegiate career caught the attention of the NFL. In 1970, the Green Bay Packers selected McGeorge in the first round of the NFL draft. He became a favorite target of quarterback Scott Hunter and a reliable, tough and intelligent presence on the field. He played nine seasons at Green Bay and was the team’s Offensive Player of the Year in 1973. As one of the NFL’s most respected tight ends of the era, McGeorge caught 175 passes for 2,370 yards and 13 touchdowns in his pro career. Considered both a premier receiver and blocker, McGeorge pulled in more passes (175) than any other tight end in Green Bay’s annals. At the end of his career, only six players in the storied franchise’s 60-year history had caught more passes.

After retiring as a player, McGeorge found what many who knew him believed to be his true calling: coaching and mentorship. He held assistant coaching positions at Duke under Red Wilson, at Duke and Florida under Steve Spurrier, with the Miami Dolphins under Don Shula and Jimmy Johnson, and later again at Duke under Carl Franks. He finished his coaching career under coaches Rod Broadway and Darrell Asberry at North Carolina Central and Shaw, respectively.

McGeorge was inducted into the Elon Sports Hall of Fame in 1979, the NAIA Hall of Fame in 1980 and the College Football Hall of Fame. In 2013, Elon celebrated McGeorge’s induction into the North Carolina Sports Hall of Fame, recognizing not only his athletic achievements but also his lasting influence on the state’s sporting legacy.

Rich McGeorge (l) with Coach Red Wilson at an Elon Football tailgate in 2017.

At Elon, McGeorge has long been remembered as one of the most dominant football players in school history, and as someone who carried the spirit of the university into every chapter of his life. Elon retired his number “85” football jersey, and he was the featured speaker at the groundbreaking for Rhodes Stadium in 2000. He and his wife, Bonnie Moore McGeorge ’70, were elected to the Elon Alumni Board in 2006. A display recognizing his induction into the College Football Hall of Fame is located in the Woods Center at Rhodes Stadium.

McGeorge is survived by his wife, Bonnie; his sons, Randy McGeorge (Kim) and Jason McGeorge (Diane); his grandchildren, Cameron McGeorge, Colin McGeorge, Madison McGeorge, Molly McGeorge, and Emily McGeorge; and his two sisters-in-law, Gayle McGeorge and Patsy Jenkins (Dan).

Clements Funeral and Cremation Services, 1105 Broad St., Durham, is handling arrangements. A visitation and memorial service are scheduled for Tuesday, Dec. 23. The visitation will be held from 1-3 p.m., followed immediately by a memorial service from 3-4 p.m.

Generative AI: Three classroom exercises to give your students a hiring edge
/u/news/2025/02/11/generative-ai-three-classroom-exercises-to-give-your-students-a-hiring-edge/ — Tue, 11 Feb 2025

By Mustafa Akben, Assistant Professor of Management and Director of Artificial Intelligence Integration

According to the World Economic Forum’s Future of Jobs Report 2025, new technologies like AI could lead to 170 million new jobs worldwide by 2030. Exciting, right? But there’s a catch: a significant skills gap still exists. This raises a question for educators and institutions: Are your students truly ready to compete for these roles?

The democratization of AI has completely transformed what it means to be digitally literate. It’s not just about mastering code or obscure programming languages anymore. Today, it’s about understanding and effectively using generative AI—those incredible tools that can create and collaborate almost like a human. With simple prompts, these tools can whip up captivating marketing copy, design eye-catching visuals, and even tackle tough coding or research projects.

But here’s the thing: the quality of AI-generated content is only as good as the human guiding it. Generative AI is like a mirror, reflecting the user’s ability to steer and refine its outputs. This partnership between humans and machines highlights the need for a new core skill: AI literacy.

In this AI Digest article, we’ll give you three hands-on generative AI exercises—an Elevator Pitch Generator, Mock Interview Practice, and Salary Negotiation Training—designed to give your students a real advantage in the job market, all while sharpening their AI literacy skills. Plus, we’ll share practical tips for weaving these exercises into your curriculum.

WHY GENERATIVE AI FLUENCY IS NO LONGER OPTIONAL

Generative AI is revolutionizing all types of industries, and the need for AI literacy is no longer an “if” but a “must.” Marketers use generative AI to create hyper-personalized campaigns on scales never seen before; scientists use generative AI to speed up groundbreaking discoveries, such as in pharmaceutical drug development; and software engineers use generative AI to streamline writing lines and lines of code, greatly reducing software development lead times. We are not talking about the workplace of tomorrow – we are talking about the workplace of today.

And it’s important to state here that such tools do not replace people; they empower people. Think of generative AI as a capable assistant who can take care of mundane and repeated tasks, freeing students (and workers) to instead use their time on big-picture thinking, creative work, and challenging problems that are best solved by humans with their ingenuity and smarts.

THE AI LITERACY IMPERATIVE

The AI revolution is in full swing, fueled by powerful tools like OpenAI’s ChatGPT and DALL·E 3, Google’s Gemini, and Stability AI’s Stable Diffusion. These generative models, including large language models (LLMs), can produce remarkably sophisticated text, images, and even code based on simple user prompts, transforming the way we tackle creative and analytical problems. But this power brings with it a crucial responsibility: students need to develop the critical skills to evaluate the accuracy of AI-generated content and to understand the potential ethical and data-privacy risks involved.

AI literacy – and the sharp critical thinking it requires – is rapidly becoming essential for career success. This means understanding not only how to use AI tools effectively, but also how to:

  • Evaluate the quality and reliability of AI-generated content
  • Recognize potential biases in AI outputs
  • Navigate ethical considerations in AI deployment
  • Maintain data privacy and security when working with AI systems
  • Leverage AI tools to enhance rather than replace human creativity and judgment

As we move forward into this future, the ability to effectively collaborate with AI tools while maintaining human oversight and ethical considerations will be crucial for professional success. In the following section, we present three practical exercises designed to help educators prepare their students for this exciting new frontier.

THREE GENERATIVE AI EXERCISES: FROM CLASSROOM TO CAREER

EXERCISE 1: THE AI-POWERED ELEVATOR PITCH

A concise elevator pitch can open doors in any professional setting. Instead of crafting it solo, students can lean on generative AI (such as ChatGPT) to help them shape and polish their pitch.


PROMPTING FOR SUCCESS

Example Prompt

“Act as a career advisor. I’m a [Major] student at [University] seeking a [Job Type] in [Industry]. I’m skilled in [Skill 1], [Skill 2], and [Skill 3]. Help me create a 30-second elevator pitch that is memorable and highlights my value.”

Example Student Input

“Act as a career advisor. I’m a Computer Science student at Example University seeking a Software Engineering Internship in the Fintech industry. I’m skilled in Python, Java, and Agile Development. Help me create a 30-second elevator pitch that is memorable and highlights my value.”
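The bracketed placeholders make this a reusable template: students swap in their own details to produce the student input above. For instructors who want a class to generate many variants quickly, a minimal Python sketch of this fill-in pattern could look like the following (the function and constant names are illustrative, not part of the exercise as published):

```python
# Hypothetical helper for the elevator-pitch exercise: fills the bracketed
# template with a student's details before the result is pasted into a
# chatbot such as ChatGPT.
PITCH_TEMPLATE = (
    "Act as a career advisor. I'm a {major} student at {university} "
    "seeking a {job_type} in the {industry} industry. I'm skilled in "
    "{skills}. Help me create a 30-second elevator pitch that is "
    "memorable and highlights my value."
)

def build_pitch_prompt(major, university, job_type, industry, skills):
    """Return the filled-in prompt; `skills` is a list of skill names."""
    if len(skills) > 1:
        # Join skills as "A, B, and C" to read naturally in the prompt.
        skills_text = ", ".join(skills[:-1]) + ", and " + skills[-1]
    else:
        skills_text = skills[0]
    return PITCH_TEMPLATE.format(
        major=major,
        university=university,
        job_type=job_type,
        industry=industry,
        skills=skills_text,
    )

# Reconstructs the example student input above from its parts.
prompt = build_pitch_prompt(
    "Computer Science",
    "Example University",
    "Software Engineering Internship",
    "Fintech",
    ["Python", "Java", "Agile Development"],
)
print(prompt)
```

The same fill-in-the-brackets pattern applies to the interview and salary-negotiation prompts in the later exercises; only the template text changes.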


ITERATIVE REFINEMENT

After receiving the AI’s output, students should review it for overly generic phrasing, unnecessary jargon, or a lack of personal flair. Encourage them to seek peer and career-services feedback, then feed those insights back into the AI for improvements.


PRACTICE & DELIVERY

Remind students that delivery counts. They should rehearse their pitches out loud, aiming for confident body language and a natural tone. This helps them avoid sounding like they’re reading from a script.


LEARNING OUTCOMES

  • Sharpens concise communication
  • Encourages self-awareness and self-branding
  • Develops a clear, confident value proposition

EXERCISE 2: AI-DRIVEN MOCK INTERVIEWS

Job interviews can be nerve-wracking, especially for new graduates. Generative AI provides a risk-free environment to practice both technical and behavioral questions.


GENERATING QUESTIONS

Example Prompt

“Generate 5 interview questions for a [Job Title] at a [Company Type] company. Include at least one behavioral question, one technical question (if relevant), and one situational question.”

Example Student Input

“Generate 5 interview questions for a Marketing Associate position at a Tech Startup. Include at least one behavioral question, one technical question related to social media marketing, and one situational question.”


RESPONSE PRACTICE

Urge students to structure their responses using the STAR method (Situation, Task, Action, Result). This ensures clarity and showcases their problem-solving approach.


FEEDBACK & ANALYSIS

Encourage students to record themselves responding to the questions. They can then analyze their own body language, tone, and clarity, and invite feedback from classmates or instructors for fresh perspectives.


LEARNING OUTCOMES

  • Boosts interview confidence
  • Improves clarity of responses
  • Fosters critical thinking under pressure


EXERCISE 3: MASTERING SALARY NEGOTIATION WITH AI

Negotiating compensation can be intimidating for first-time job seekers. Generative AI can provide data-driven insights and suggested talking points to help them negotiate effectively.


SALARY RESEARCH

Example Prompt

“Provide the average salary range for a [Job Title] in [City, State] with [X] years of experience. Cite reliable sources.”

Example Student Input

“Provide the average salary range for a Data Analyst in New York City with 1 year of experience. Cite reliable sources.”

Encourage students to compare the AI’s data to reputable job-market platforms (e.g., LinkedIn Salary, Glassdoor, Bureau of Labor Statistics) for accuracy.


STRATEGY DEVELOPMENT

Example Prompt

“I received a job offer for [Job Title] with a salary of [Offer Amount]. I was hoping for [Desired Salary]. Help me craft a response that justifies my desired salary, highlighting my skills and experience.”

Example Student Input

“I received a job offer for a Junior Software Engineer at $60,000. I was hoping for $70,000. Help me craft a response that justifies my desired salary, highlighting my skills in Python, my internship experience at a tech company, and my contributions to open-source projects.”


ROLE-PLAYING & PRACTICE

Students can practice mock negotiations using AI-generated counter-offers. This helps them gain confidence and develop strategies to respond to different negotiation tactics.


LEARNING OUTCOMES

  • Enhances negotiation and self-advocacy skills
  • Fosters an understanding of fair compensation benchmarks
  • Builds confidence for real-world salary discussions

CONCLUSION: CULTIVATING AN AI-READY WORKFORCE

These exercises provide a practical path for integrating generative AI into the classroom, giving students both conceptual knowledge and practical skills. By leveraging AI tools to refine pitches, practice interview skills, and hone negotiation strategies, you can help bridge the gap between academic theory and the demands of today’s competitive job market.

Start small—try out one exercise, ask for feedback, and make adjustments. Share what works with your colleagues and collaborate to identify best practices. By embracing generative AI, we can empower students to become active contributors to an AI-driven economy, driving innovation and building a more dynamic and prosperous future for all.

Let’s prepare the next generation to lead the way—and to do so with the confidence and skills they need to thrive, no matter how the job market changes.

]]>
New research series aims to foster collaboration on using artificial intelligence /u/news/2025/01/31/new-research-series-aims-to-foster-collaboration-on-using-artificial-intelligence/ Fri, 31 Jan 2025 16:56:30 +0000 /u/news/?p=1006072 Eight Elon faculty members will talk about their work on artificial intelligence in a series of events scheduled in February, March and April. The series of talks is being organized by Mustafa Akben, Elon’s director of artificial intelligence integration, and Sagun Giri, a member of Elon’s AI Task Force and an instructional technologist. All faculty, staff and students, along with other interested community members, are invited to attend the free sessions.

The AI research series aims to create a collaborative space where faculty members from many different disciplines can explore the latest developments in artificial intelligence, share their research and build connections for future projects. Each session will consist of a 45-minute research presentation followed by a one-hour networking opportunity, allowing attendees to engage in discussions and explore potential collaborations.

“This initiative is more than just a series of presentations – it’s the foundation for an interdisciplinary AI research network,” Akben said. “By bringing together researchers from different fields, we hope to strengthen our connections and support each other’s work.”

The series schedule is as follows:

February 6, 12:30 p.m., Sankey Hall 308
Anne-Marie Iselin, associate professor of psychology

March 5, 12:30 p.m., Sankey Hall 308
Su Dong, associate professor of management

March 26, 12:30 p.m., Sankey Hall 308
Thibault Morillon, assistant professor of finance

April 2, 12:30 p.m., East Commons 102
Qian Xu, professor of strategic communications and AJ Fletcher Professor

April 16, 12:30 p.m., East Commons 102
Shannon Zenner, assistant professor of communication design

April 23, 12:30 p.m., East Commons 102
Paula Rosinski, professor of English and director of Writing Across the University

April 30, 12:30 p.m., Sankey Hall 308
Byung Lee, associate professor of communication design

May 13, 12:30 p.m., Sankey Hall 308
Cheng (Chris) Chen, assistant professor of communication design

]]>
Elon/AAC&U survey focuses on AI’s impact on teaching and learning /u/news/2025/01/23/elon-aacu-survey-focuses-on-ais-impact-on-þ-and-learning/ Thu, 23 Jan 2025 05:07:59 +0000 /u/news/?p=1005329 The spread of artificial intelligence tools in education has disrupted key aspects of teaching and learning on the nation’s campuses and will likely lead to significant changes in classwork, student assignments and even the role of colleges and universities in the country, according to a new national survey of higher education leaders. The survey was conducted Nov. 4-Dec. 7, 2024, by the American Association of Colleges and Universities (AAC&U) and Elon University’s Imagining the Digital Future Center.

A total of 337 university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans responded to questions about generative artificial intelligence tools (GenAI) such as ChatGPT, Gemini, Claude and CoPilot. The survey covered the current situation on campuses, the struggles institutional leaders are navigating, the changes they anticipate and the sweeping impacts they foresee. The survey results covered in a new report, Leading Through Disruption, were released at the annual AAC&U meeting, held Jan. 22-24 in Washington, D.C.

Current situation

  • High student adoption of GenAI, lower faculty uptake: Most of these higher education leaders say GenAI use by students for coursework is prevalent, with 89% estimating that at least half of students use the tools. In the meantime, most say that much smaller numbers of faculty use GenAI as part of their jobs, with 62% estimating that fewer than half of faculty use the tools.
  • Some 83% of the academic leaders in this sample say they use GenAI tools – and a portion of them are power users who use GenAI for a wide range of activities. The most common uses by these executives were for writing and communications, information gathering and summarization, idea generation, and data analysis.
  • Unpreparedness: Majorities of these college and university leaders believe their institutions are not very or not at all ready to use GenAI for such things as: preparing students for the future (56% say their schools are not prepared for this); preparing faculty to use GenAI for teaching and mentoring (53% feel unprepared); and helping non-faculty staff use these tools for work (63% feel unprepared). Some 59% believe last spring’s graduates were not prepared for work in companies where skill in using GenAI tools is important.
  • Cheating increase: 59% of these leaders report that cheating has increased on their campuses since GenAI tools have become widely available; 21% say it has increased a lot.
  • Detection of GenAI content isn’t great: More than half of these leaders do not think their faculty effectively recognize GenAI-created content. Some 13% believe their faculty are “not at all effective” in spotting this kind of content, and 41% think their faculty are “not very effective.”
  • Peer comparisons: 38% perceive their own institutions as about average in using GenAI for teaching, learning, and other activities, while 28% say their schools are below average, and 7% say they are far behind.
  • Challenges to making progress: Large majorities of these leaders cite specific hindrances to GenAI adoption and integration at their schools. The challenges most often mentioned include faculty unfamiliarity with or resistance to GenAI, distrust of GenAI tools and their outputs, and concerns about diminished student learning outcomes.

Most of these leaders say their institutions have taken some steps to adjust to the rise of GenAI. Some 69% report their schools have adopted written policies about appropriate and inappropriate uses of GenAI tools in learning and teaching. In addition, 44% report they have created new classes specifically devoted to AI, and a fifth have created majors or minors in AI.

“The overall takeaway from these leaders is that they are working to make sense of the changes they confront and looking over the horizon at a new AI-infused world they think will be better for almost everyone in higher education,” said Lee Rainie, director of Elon University’s Imagining the Digital Future Center. “They clearly feel some urgency to effect change, and they hope the grand reward is revitalized institutions that serve their students and civilization well.”

“While our survey reveals significant growing pains as colleges adapt to AI – from concerns about cheating to gaps in faculty preparedness – there’s a clear recognition that we’re at an inflection point in higher education,” said C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities (AAC&U). “The fact that 44% of institutions have already created AI-specific courses shows both the urgency and opportunity before us. The challenge now is turning today’s disruption into tomorrow’s innovation in teaching and learning.”

Changes ahead

Asked to assess the impact of GenAI tools on students’ academic lives, these leaders expressed optimism mixed with concerns. The positive outcomes they foresee include:

  • Enhanced learning: 91% think GenAI tools will enhance and customize learning, including 47% who believe there will be a lot of impact.
  • Improved research skills: 75% think the tools will improve student research skills, including 29% who believe they will have a significant impact.
  • Better student writing: 69% think the tools will increase students’ ability to write clearly and persuasively, including 27% who believe they will have a strong impact.
  • Increased creativity: 66% say the tools will increase student creativity, including 21% who believe there will be a lot of impact.

The negative consequences include:

  • Concerns about academic integrity: 95% of these leaders say the spread of GenAI tools will affect students’ academic integrity, including 56% who believe there will be a lot of impact.
  • Dependence on GenAI: 92% think GenAI tools will lead to students’ overreliance on them, including 44% who think there will be a significant impact.
  • Greater digital inequities: 81% of these leaders think GenAI will impact digital divides, including 36% who think there will be a lot of impact.
  • Decreased attention spans: 66% think GenAI will diminish student attention spans, including 24% who think the tools will greatly impact this.

Some key findings about other changes that will occur at their institutions:

  • Changed teaching model: 95% of these leaders say the teaching models at their schools will be significantly or to some degree affected. Nearly half (48%) believe the change will be significant.
  • Classroom focus on ethical issues raised by the rise of GenAI tools: A strong majority of these officials believe it is necessary to focus classrooms on major issues tied to GenAI, including privacy, hallucinations, misinformation, bias, data breaches, and the alignment of the tools with human values.

Future impacts

  • Better learning outcomes: A fifth of these academic leaders (21%) say GenAI tools will improve student learning outcomes at their schools in the next five years, and another 46% think the change will be somewhat for the better.
  • Students’ lives will be positively affected: When asked about GenAI’s impact on students, 50% of these academic leaders say the impact will be more positive than negative in the next five years, compared with just 12% who believe the impact will be more negative than positive.
  • Assignments, teaching, learning, and research will get better: 70% of the leaders in this survey say the quality of assignments to students will get a lot or somewhat better because of the use of GenAI tools; 68% think the tools will relieve faculty of routine work they now face; 68% think the tools will help faculty research. Another 54% think the quality of lectures and lessons will improve thanks to GenAI, and 51% say the quality of feedback and grading of student performance will improve.

A persistent concern on campus relates to jobs. These college and university leaders say some reductions in employment levels could occur, but they will mostly be minor: 29% say they expect reductions in the number of staff at their schools (only 3% say it will be major), while 11% expect reductions in faculty and teaching assistants (only 1% say it will be major). In both cases, about a fifth of these respondents say they do not know yet what the impact on staffing levels will be at their schools.

The results reported here are from a non-scientific survey of academic leaders known to the American Association of Colleges & Universities and a supplemental list of key officials in higher education compiled by Elon University. In all, 337 college leaders responded to at least some portion of the survey conducted between November 4 and December 7, 2024. The sample is diverse in key respects, including the size of the student population and the schools’ geographic distribution. Still, the results are not generalizable.

For additional information, contact co-authors:
Lee Rainie, director, Elon University’s Imagining the Digital Future Center, lrainie@elon.edu
C. Edward Watson, vice president for digital innovation, AAC&U, watson@aacu.org

]]>
Elon, AAC&U publish student guide to artificial intelligence /u/news/2024/08/19/student-guide-to-ai/ Mon, 19 Aug 2024 11:18:05 +0000 /u/news/?p=991602 Elon University and the American Association of Colleges and Universities (AAC&U) have released the first edition of a student guide to navigating college in the artificial intelligence era. The guide, titled “AI-U/v1.0,” was developed with the collaboration and review of faculty, scholars, academic leaders and students at universities around the world.

The guide’s publication is timed to coincide with the start of the 2024-25 academic year. It is being offered free to students and institutions to distribute and adapt under a Creative Commons license.

“As AI begins to influence teaching and learning, as well as many operations of colleges and universities, students need a road map to help navigate these changes,” said Elon President Connie Book. “This guide was written from the student perspective and includes practical advice on using AI responsibly while in college and preparing for the AI future.”

C. Edward Watson, AAC&U’s vice president for digital innovation, said that using AI effectively has quickly become essential learning for college students. “This guide is indispensable for students as they travel along their AI learning journey,” Watson said.

The guide includes “the essential AI ‘how-to’ manual” with ground rules for students to follow in their classes and a checklist for using AI ethically. Students will find suggestions for ways to use AI, cautions about the downsides of using AI, lists of AI resources and suggestions for writing prompts. There are also sections on creating an academic journey that prepares students to succeed in an AI-infused world.

More than 100 students at multiple universities submitted input and questions for inclusion in the guide, which were addressed by the guide’s authors and editors.

A former president of Southern New Hampshire University, who is exploring questions about the future of AI in higher education, is encouraged that many colleges are embracing the concept of “human-centered AI” and said the guide “combines common sense advice for students about using AI with guidance on developing strong personal relationships and recognizing your own unique knowledge, skills and creativity.”

McCurdy, senior vice president and president of Lenovo North America, said the guide empowers students as they prepare for AI-driven careers. “With the proper framework, you can harness the power of artificial intelligence to carve your path in a world where technology is not just a tool, but an enabler of innovation, collaboration and creativity,” McCurdy said. “Embrace the learning journey. The skills you cultivate today will be the foundation of tomorrow’s workplace.”

The guide is an initiative of Elon University, with creation of the guide coordinated by lead author Daniel J. Anderson, special assistant to the president at Elon University.

The guide will be updated as AI continues to evolve, with changes made to the guide’s website and revised editions published for future academic terms.

The student guide to AI initiative is a continuation of Elon University’s leadership on higher education’s role in preparing humanity for the artificial intelligence revolution. In 2023, Elon coordinated the creation of a statement of principles, developed and endorsed by more than 140 higher education organizations, administrators, researchers and faculty members from 48 countries. The statement was released at the 18th annual United Nations Internet Governance Forum in Kyoto, Japan.

]]>
Elon mourns passing of former trustee James “Jim” Sankey /u/news/2024/08/07/elon-mourns-passing-of-former-trustee-james-jim-sankey/ Wed, 07 Aug 2024 15:17:03 +0000 /u/news/?p=990345
James “Jim” Sankey passed away on Aug. 4, 2024

Former þ trustee James “Jim” Sankey of Charlotte, North Carolina, died suddenly on Sunday, August 4, 2024. He was 64 years old.

Sankey served on the Board of Trustees from 2010 to 2013 and was the father of three Elon alumni, Clay Sankey ’12, Wes Sankey ’13 and Brooke Sankey ’20. He made a large impact at the university through his board leadership and philanthropy.

Jim and his wife, Beth, generously donated a lead gift to name Richard W. Sankey Hall on Elon’s campus in honor of Jim’s late father. The Sankeys also made gifts to support construction of Alumni Field House, the Numen Lumen Pavilion and other Elon priorities.

Sankey was president and CEO of InVue Security Products of Charlotte and was a respected business leader there. In 2019 he received a “Most Admired CEO” award in the technology division from the Charlotte Business Journal. He created and sold several successful businesses and held more than 30 patents for his inventions.

In living out their commitment to giving back and making the world a better place, Jim and Beth Sankey funded construction of 20 orphanages and four schools in India, Uganda, Sri Lanka and the Philippines. They also provided support for nearly 3,000 children in those orphanages. The Sankeys and InVue Security Products have also actively supported many charitable causes and organizations in the Charlotte area.

A memorial service in honor of Jim will be held at 10 a.m., Thursday, Aug. 8, 2024, at New City Church, 2500 Carmel Road, Charlotte, N.C.

]]>
Elon webinar focuses on challenges and opportunities of AI in higher ed /u/news/2024/03/12/elon-webinar-focuses-on-challenges-and-opportunities-of-ai-in-higher-ed/ Tue, 12 Mar 2024 16:45:36 +0000 /u/news/?p=974638 With new artificial intelligence tools launching and evolving at a dizzying pace, a panel of tech experts discussed the implications for higher education in a webinar moderated by þ President Connie Book.

The March 8 discussion titled “AI in Academia: Transforming Teaching and Learning in the Digital Era” was sponsored by Elon’s Imagining the Digital Future Center. The webinar attracted an audience of nearly 150 educators from around the world.

Ethan Mollick, a distinguished scholar at the University of Pennsylvania’s Wharton School, speaking during the webinar

Ethan Mollick, a distinguished scholar at the University of Pennsylvania’s Wharton School, said when he introduced ChatGPT to his entrepreneurship class, the students dove in within 10 minutes. One of the students had a working software model up and running by the end of the first class session and began promoting it on social media.

“He had venture capitalists offering him meetings the next day,” Mollick said. “By Thursday, everyone had used AI for something.”

Mollick said the companies developing AI tools don’t fully understand their capabilities and have not developed guidelines for use, so it’s up to educators to try to figure out ways to use AI effectively. “As these systems get better, we have to adapt in deeper ways than just, ‘use it/don’t use it.’ We have to be thinking hard about this because nobody’s doing the thinking for us,” Mollick said.

Hoda Mostafa, Director of the Center for Learning and Teaching, The American University in Cairo (Egypt)

At the American University in Cairo (Egypt), Hoda Mostafa organized community conversations about AI through the Center for Learning and Teaching. “There was a lot of fear, a lot of anxiety and questioning about what does this mean for plagiarism. What is going to happen? And what’s happened between January 2023 and today is that the tides have shifted dramatically in our faculty body,” Mostafa said.

Mostafa said faculty in most disciplines at her university are looking at creative ways of integrating AI and “looking at assessment in a post-plagiarism world.” She says students are using AI and suggesting new ways to use it, faculty are sharing ideas and trying new tactics, and the university has developed principles to follow. Every academic department has an “AI ambassador” who identifies concerns specific to their discipline.

Mostafa said faculty at the university are empowered to embrace the changes brought about by AI as “a way to really change the landscape around teaching and learning.”

Udo Sglavo, Vice President of Applied AI & Modeling R&D, SAS Institute

Udo Sglavo, vice president of AI & Modeling at the SAS Institute, said universities will be forced to tear down walls between disciplines. He also said the ability to use AI will be a required skill in the workplace. For some job functions, such as writing computer code, SAS is seeing large productivity increases due to AI tools.

“But software architecture and software engineering is still something that the human mind excels in. Humans still excel in critical thinking, and when it comes to emotions and interactions between humans,” Sglavo said. “Machines are not ready for that, and we can have a debate over whether they will ever be. The collaboration between humans and machines is the way forward.”

President Book spoke about the launch of Elon’s new Imagining the Digital Future Center on Feb. 29, a successor to the university’s Imagining the Internet Center, which was established in 2000. The center has produced nearly 50 “Future of Digital Life” reports, and its new study focuses on the impact of artificial intelligence by 2040.

Lee Rainie, Director, Imagining the Digital Future Center

Imagining the Digital Future Director Lee Rainie talked about the study findings related to education. “Experts (in the survey) referred to ‘adjunct intelligence,’ anchored in artificial intelligence, as being everywhere,” Rainie said. “AI will essentially be a partner, now, in both intelligence and consciousness for lots of humans.

“For academic institutions, this is enormously challenging and transforming. Up and down the stack of education, new things are in play. Whole systems of conveying knowledge, inspiring creativity, assessing the learning process and conferring credentials is all up for grabs in this new world,” Rainie said.

Mollick said he is confident education will work out new ways to address the challenges posed by AI. But he said the bigger problems may come after graduation. “The way white collar works in America is an apprenticeship program, where you do basic work until you get good enough. (But now) no one is going to be delegating basic work to people anymore because they’re just going to be having AI do the work for them,” Mollick said.

Instead, Mollick said universities are going to have to prepare students with higher-order skills and expertise for the long term. Mostafa agreed, saying universities need to prepare students for the uncertainty and chaos that lies ahead in the AI revolution.

Rainie talked about the potential positive impact of AI technologies for education, such as instantaneous feedback, more opportunities to explore issues from various angles and more personalized instruction. “Why not go to the edge and find new frontiers?” Rainie asked.

More information is available on the Imagining the Digital Future Center website.

]]>
The Imagining the Digital Future Center: Technology experts, general public forecast impact of artificial intelligence by 2040 /u/news/2024/02/29/the-imagining-the-digital-future-center-technology-experts-general-public-forecast-impact-of-artificial-intelligence-by-2040/ Thu, 29 Feb 2024 12:08:24 +0000 /u/news/?p=973503 Technology experts and the U.S. public share serious concerns about the future of privacy, job opportunities, politics and democracy, civility and many other aspects of life, according to a new report from þ outlining the potential future impact of artificial intelligence.

The report finds both the general public and experts believe enormous upheavals are on the horizon as AI spreads. Many experts go so far as to say that we will have to reimagine what it means to be human and that societies must restructure, reinvent or replace existing institutions and systems. At the same time, they embrace the idea that important benefits will also result from the spread of AI.

This is the first report released by Elon’s expanded Imagining the Digital Future Center under the leadership of Lee Rainie, who joined the university after a 24-year career directing Pew Research Center’s Internet and Technology research. The Center works to discover and broadly share a diverse range of opinions, ideas and original research about the impact of digital change, informing important conversations and policy formation.

The new study combines a national public opinion poll with a canvassing of technology experts, the hallmark of the 48 previous research reports produced by Elon in partnership with Pew Research Center since 2004.

“This special two-pronged research into public and elite opinion shows how disruptive many think AI will be to essential dimensions of life,” Rainie said. “Both groups expressed concerns over the future of privacy, wealth inequalities, politics and elections, employment opportunities, the level of civility in society and personal relationships with others. At the same time, there are more hopeful views about AI making life easier, more efficient and safer in some important respects. Virtually everyone agrees this is a pivotal moment for how the future plays out with AI.”

Rainie noted that the rapid development of ChatGPT and other AI tools has raised public awareness and caused considerable worry as well as great optimism about the ways AI tools may make life easier, more efficient and potentially safer.

“This special two-pronged research into public and elite opinion shows how disruptive many think AI will be to essential dimensions of life.”

Lee Rainie
Director, The Imagining the Digital Future Center

The public opinion poll was designed to focus on the expected impact of AI on a variety of dimensions of personal lives and on the structures of society. The separate canvassing of more than 300 technology experts provided an opportunity for technology developers, business and policy leaders, researchers, analysts and academics to contribute essays about the potential coming impact of AI.

Some envision a future when economies and work are overhauled, people have personal digital assistants and diminished skills in making their own choices, when AI-created deepfakes and disinformation create alternate definitions of “truth,” and when there are major advances in medical diagnostics and treatment.

The National Public Opinion Poll

Two-thirds of Americans believe AI may have a negative impact on their personal privacy, and more than half believe there will be negative consequences for their employment opportunities.

Those are among the findings of a national public opinion survey sponsored by Elon’s Imagining the Digital Future Center and conducted in partnership with the Elon University Poll and Ipsos, an international marketing research and polling firm.

More than half of Americans expect negative impacts on politics and elections, and 40% anticipate a worsening level of civility in society. The most positive outlook is for the impact of AI on healthcare systems and the quality of medical treatment.

There is no prevailing viewpoint about the overall impact of AI. Asked how the increased use of AI will affect people’s daily lives, 31% say it will be equally positive and negative; 29% said it will be more negative than positive; 17% say it will be more positive than negative; and 23% say they don’t know. On a broad question about AI ethics, 31% say it is possible for AI programs to be designed that can consistently make decisions in people’s best interest in complex situations, while the exact same share say that is not possible. Some 38% say they are not sure.

The poll of 1,021 Americans was conducted Oct. 20-22, 2023, and designed to be representative of the U.S. population. The margin of error is +/- 3.2%.
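Readers who want to sanity-check the stated precision can do so directly: for a simple random sample, the maximum 95% margin of error is roughly 1.96·sqrt(0.25/n). A minimal sketch follows; note the published ±3.2% is slightly larger than this raw figure, which is typical when design effects are incorporated (the `max_margin_of_error` helper is illustrative, not from the report):

```python
import math

# Maximum 95% margin of error for a simple random sample of size n,
# using the worst-case proportion p = 0.5 (so p*(1-p) = 0.25).
def max_margin_of_error(n: int, z: float = 1.96) -> float:
    return z * math.sqrt(0.25 / n)

moe = max_margin_of_error(1021)
print(f"{moe:.1%}")  # about 3.1% before design-effect adjustments
```

Quadrupling the sample size would halve this figure, which is why precision gains get expensive quickly for pollsters.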

Comparing expert opinions with the general public’s views

When answering a similar set of questions, the experts agreed with the general public that the impact of AI on privacy is the biggest concern. The experts were far more concerned than the public about a growing wealth inequality in society but somewhat less concerned than the public about the impact on employment opportunities.

The 17th “Future of Digital Life” Experts Canvassing

In addition to answering the quantitative questions, the experts contributed written responses in which they described the challenges and opportunities they see in the digital future. Some said we will have to reimagine what it means to be human and that societies must restructure, reinvent or replace existing institutions and systems. They also spelled out their concerns that AI could greatly enfeeble people in the coming years, while at the same time embracing the idea that seemingly miraculous benefits could result from the spread of AI.

A few of the most intriguing ideas from the experts include the following:

  • A new meaning of life will arise in a “self-actualized economy:” Massive AI-generated economic efficiencies that improve work and the way basic infrastructure performs will be combined with medical and other scientific advances that will fundamentally alter the way people act, connect and care for each other.
  • There will be a shifting boundary between what’s human and what’s a machine: As AI applications become normalized and ordinary, the things that are considered controversial and dangerous will change from year to year.
  • Adjunct intelligence will be everywhere: This will dramatically affect individuals’ sense of identity, perception and even consciousness itself.
  • Personal avatars with “self-sovereign identities” will represent us: Individuals will possess 3D, photo-realistic avatars to carry out tasks for them utilizing their comprehensive personal data.
  • Digital assistants will have far more influence over their person than their human analogues have over themselves: Engagements with AI provide their creators with intimate insights about users that can be exploited.
  • People will form intimate and meaningful relationships with their bots: Some will focus most of their human affection, desires and attention on digital products.
  • “Truths will be modified”: The AI-abetted spread of deepfakes, disinformation and post-truth content will broaden, and masses of electronic documents will be modified in hindsight to fit special interests’ points of view.
  • Shared benefits will transform humanity: The application of AI to achieve long-needed widespread economic change will lead to a more-equitable, sustainable society that relies less on consumption as a driver of productivity and instead evaluates productivity based on “human-flourishing metrics.”
  • Creativity will be democratized but may also be homogenized: Those with ideas but not much technical skill will have the tools to create and promote their concepts; this could create a monoculture of outputs.
  • An abundance mindset might replace a scarcity mindset: A sufficient combination of intelligence (via AI), matter (via asteroid mining) and energy (from various clean sources) could provide for effectively unlimited material abundance and enable humanity to overcome much of its reason for struggle.
  • AI could enable transparency of corporations and governments and expose now-hidden processes: AI systems to aid fact-checking and enable critical inquiry into government and corporate databases might empower citizens and bring suspect or shady practices to light.

“A large share of these global experts and analysts briefly mentioned the great gains they expect, but focused their responses mostly on expressing worries over the potential losses they fear,” said Professor Janna Anderson, founding director of Elon’s Imagining the Internet Center, who has been a co-author of the “Future of Digital Life” reports since they began in 2004. “Their concerns are reflected in the five overall themes we found in our analysis of their responses.”

An analysis of the experts’ responses surfaced five major themes:

Theme 1 – We will have to reimagine what it means to be human

As AI tools integrate into most aspects of life, some experts predict the very definition of a “human,” “person” or “individual” will change. Among the issues they addressed: What will happen when we begin to count on AIs as equivalent to – or better than – people? How will we react when technologies assist, educate and maybe share a laugh with us? Will a human/AI symbiosis evolve into a pleasing partnership? Will AI become part of our consciousness?

Theme 2 – Societies must restructure, reinvent or replace entrenched systems

These experts urge that societies fundamentally change long-established institutions and systems – political, economic, social, digital, and physical. They believe there should be major moves toward a more equitable distribution of wealth and power. They also argue that the spread of AI requires new multistakeholder governance from diverse sectors of society.

Theme 3 – Humanity could be greatly enfeebled by AI

A share of these experts focused on the ways people’s uses of AI could diminish human agency and skills. Some worry it will nearly eliminate critical thinking, reading and decision-making abilities and healthy, in-person connectedness, and lead to more mental health problems. Some said they fear the impact of mass unemployment on people’s psyches and behaviors due to a loss of identity, structure and purpose. Some warned these factors combined with a deepening of inequities may prompt violence.

Theme 4 – Don’t fear the tech; people are the problem and the solution

A large share of these experts say their first concern isn’t that AI will “go rogue.” They mostly worry that advanced AI is likely to significantly magnify the dangers already evident today due to people’s uses and abuses of digital tools. They fear a rise in problems tied to extractive capitalism, menacing and manipulative tactics exercised by bad actors, and autocratic governments’ violations of human rights.

Theme 5 – Key benefits from AI will arise

While most of these experts wrote primarily about the challenges of AI, many described likely gains to be seen as AI diffuses through society. They expect that most people will enjoy and benefit from AI’s assistance across all sectors, especially in education, business, research and medicine/health. They expect it will boost innovation and reconfigure and liberate people’s use of time.
