A NEW DIGITAL DIVIDE
What once was a gap in access to technology may become a gap in human agency:
a chasm opening between those who direct AI and those directed by it. The current state of teaching and learning pits big tech’s vision of AI as an intermediary between everyone and everything against educators and students who are already struggling to know when they can assert self-determination and when to accept the convenience of AI assistance.
Stanford researchers surveyed workers across 104 occupations to map which tasks they actually want automated
Groundbreaking MIT research reveals the neurological price of cognitive convenience
"When it's so easy to get the answers we want, it might become harder to question those answers we get" - Eric Hawkinson
The Ethics of Automating Education - 2019
Let’s now transfer this idea to education, specifically to AI-driven assessment. It may not surprise you to know that the majority of venture investment and funding for AI in education comes from testing companies and those who wish to disrupt them. Integrating AI into the assessment process is attractive to testing companies for many reasons.
TEDx and Augmented Reality - 2018
Eric, Martin, and Erin implemented their AR project at TEDxKyoto. Aiming to approach AR on several fronts and link them all together, they assembled a series of activities never before seen at a TEDx event. The result was an engaging mix that drew a strong reaction from participants.
"These infrastructural changes are happening whether or not educational institutions participate in designing them. Students will have access to increasingly sophisticated agentic systems regardless of institutional policies."
"We shape our tools, and thereafter our tools shape us." This Marshall McLuhan insight frames an essential discussion I lead in my graduate course on educational technology in distance education—specifically the tension between technological determinism and the social construction of technology (SCOT). To help make these concepts relatable, I often share a personal example: the em dash. I used to overuse ellipses… not always correctly. But over time, I noticed something—AI-generated text often leans heavily on em dashes. After reading and working with language models regularly, I found myself doing the same.
This micro-example reveals the very relationship we explore in class: how tools influence behavior, and how user habits feed back into tool design. It's not simply that AI determines my punctuation, nor that I consciously decided to write more like machines. Instead, it's a co-evolution—efficiency pressures in AI training data shape my writing style, which then influences how I teach about writing, which affects how students write, which potentially feeds back into future training data. This seemingly trivial punctuation shift illustrates something much larger: the Automation Abyss opening between those who direct AI and those directed by it. When I unconsciously adopt em dashes, I'm being directed by algorithmic patterns embedded in my tools. When I recognize this pattern and analyze it critically—as I'm doing now—I'm maintaining agency over the technology.
This is the essence of maintaining human agency in an age of agentic AI: recognizing that we shape our tools and our tools shape us, and taking conscious responsibility for that relationship. The stakes are higher than punctuation—they're about preserving what makes us essentially human learners and teachers in an increasingly automated world.
One punctuation mark at a time.
Eric is a learning futurist, tinkering with and designing technologies that may better inform the future of teaching and learning. Eric's projects have included augmented tourism rallies, AR community art exhibitions, mixed reality escape rooms, and other experiments in immersive technology.
With over two decades of experience teaching with technology, Eric has witnessed firsthand the evolution of technology from a supportive tool to a substitute for essential learning processes. A pioneer in augmented reality education, he's the creator of Reality Labo, My Hometown Project, and Corefol.io, platforms that help students take ownership of their learning journeys.
Roles
Professor - Kyoto University of Foreign Studies
Research Coordinator - MAVR Research Group
Founder - Together Learning
Developer - Reality Labo / Corefol.io / My Hometown
Chair - World Immersive Learning Labs
A series of thought experiments that pit possible outcomes against one another and analyze their common characteristics.
Scenario 1: The Automation Dependency Future
Students become passive consumers of AI-generated content, losing the capacity for independent thought and creative problem-solving.
Scenario 2: The Resistance Backlash Future
Educational institutions ban AI entirely, leaving students unprepared for an AI-saturated world and increasingly irrelevant educational experiences.
Scenario 3: The Human-AI Co-learning Future
Thoughtfully designed educational experiences that preserve human agency while leveraging AI capabilities for enhanced learning.
Scenario 4: The Educational Obsolescence Future
AI systems provide more efficient educational services than traditional institutions, leading to the complete replacement of human-centered education.
"Maintaining meaningful human agency when artificial intelligence can autonomously plan, discover, and even fail on behalf of learners represents one of the most critical educational imperatives of our time."
This interactive visualization presents groundbreaking research from Grinschgl, Papenmeier, and Meyerhoff (2021) that reveals the hidden cognitive costs of our increasing reliance on AI tools. In controlled experiments published in the Quarterly Journal of Experimental Psychology, participants performed tasks either with easy access to external aids (offloading working memory) or with restricted access. The results expose a troubling pattern: cognitive offloading delivers only modest efficiency gains (participants completed tasks approximately 10% faster when using external aids), and this speed comes at a devastating cost to learning, with memory retention declining by 20% or more.
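To make that trade-off concrete, here is a minimal Python sketch that plugs the headline figures above into a simple linear model. The function, the baseline values, and the linear framing are illustrative assumptions for back-of-the-envelope reasoning, not the study's methodology.

```python
# Back-of-the-envelope model of the offloading trade-off summarized above.
# The ~10% speed gain and ~20% retention loss are the headline figures;
# the linear framing and baseline values are illustrative assumptions.

def offloading_tradeoff(baseline_minutes: float, baseline_retention: float,
                        speed_gain: float = 0.10,
                        retention_loss: float = 0.20) -> dict:
    """Compare task time and retention with vs. without an external aid."""
    return {
        "minutes_with_aid": baseline_minutes * (1 - speed_gain),
        "minutes_without_aid": baseline_minutes,
        "retention_with_aid": baseline_retention * (1 - retention_loss),
        "retention_without_aid": baseline_retention,
    }

result = offloading_tradeoff(baseline_minutes=60.0, baseline_retention=0.80)
saved = result["minutes_without_aid"] - result["minutes_with_aid"]
lost = result["retention_without_aid"] - result["retention_with_aid"]
print(f"Minutes saved on an hour-long task: {saved:.1f}")      # 6.0
print(f"Retention lost (proportion of material): {lost:.2f}")  # 0.16
# Six minutes saved against sixteen percentage points of retention lost:
# a small efficiency gain traded for a large learning cost.
```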
Stanford University’s June 2025 paper “Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce” presents the WORKBank framework, which maps worker desires for automation against expert-assessed AI capabilities across 844 tasks spanning 104 occupations. The study introduces the Human Agency Scale (HAS)—a five-level continuum from H1 (Full Automation) to H5 (Human-Only)—to quantify how much human involvement is preferred versus technologically required. It finds that only about 26% of tasks received matching agency ratings from workers and experts, with workers preferring higher human agency in nearly half of the tasks, a notable misalignment. Tasks are categorized into four zones—Green Light (high desire, high capability), Red Light (high capability, low desire), R&D Opportunity (high desire, low capability), and Low Priority (low on both)—which guide responsible AI deployment. The paper also highlights a shift in valued workplace skills: as data-processing becomes more automatable, interpersonal and decision-making skills grow in importance.
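For readers who think in code, the sketch below renders that zone classification schematically, assuming a 1-to-5 rating on each axis and a midpoint cutoff. The paper derives its zones from survey instruments rather than a fixed threshold, so the cutoff, field names, and example tasks here are illustrative assumptions only.

```python
# Schematic sketch of the WORKBank-style zone classification described
# above. The 1-5 ratings, the 3.0 cutoff, and the example tasks are
# illustrative assumptions, not values from the paper.

from dataclasses import dataclass

THRESHOLD = 3.0  # assumed midpoint of a 1-5 rating scale

@dataclass
class TaskRating:
    name: str
    worker_desire: float  # how much workers want the task automated (1-5)
    ai_capability: float  # expert-assessed AI capability on the task (1-5)

def classify_zone(task: TaskRating) -> str:
    """Map a (desire, capability) pair onto one of the four zones."""
    high_desire = task.worker_desire >= THRESHOLD
    high_capability = task.ai_capability >= THRESHOLD
    if high_desire and high_capability:
        return "Green Light"      # wanted and feasible: deploy responsibly
    if high_capability:
        return "Red Light"        # feasible but unwanted: proceed with caution
    if high_desire:
        return "R&D Opportunity"  # wanted but not yet feasible: invest here
    return "Low Priority"         # neither wanted nor feasible

for task in [TaskRating("schedule appointments", 4.5, 4.2),
             TaskRating("deliver difficult news to a client", 1.8, 3.6),
             TaskRating("summarize a year of field notes", 4.1, 2.2)]:
    print(f"{task.name}: {classify_zone(task)}")
```

Run against these toy ratings, the three tasks land in Green Light, Red Light, and R&D Opportunity respectively, mirroring how the paper sorts real tasks.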
This visualization synthesizes findings from multiple studies that reveal the hidden costs of our increasing technological dependence. Stadler, Bannert, and Sailer's 2024 research in Computers in Human Behavior demonstrated that while large language models reduce cognitive load by 32%, this efficiency comes at the expense of deep scientific inquiry and learning depth. Even more concerning, Gerlich's 2025 comprehensive analysis in Societies found a strong negative correlation (r = -0.68) between AI tool usage and critical thinking abilities across 666 participants, with younger users and those with lower educational attainment showing the steepest declines. Ward and colleagues' influential 2017 study revealed that merely having a smartphone present—even when unused—significantly reduces available cognitive capacity through what they termed the "brain drain" effect.
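To clarify what a correlation of r = -0.68 means in practice, here is a short sketch that computes Pearson's r from its textbook definition on synthetic data engineered to have a similar strength of association. The data are invented for illustration and have no connection to Gerlich's participants.

```python
# Illustration of a strong negative correlation (around r = -0.68) using
# synthetic data -- NOT the data from Gerlich (2025).

import math
import random

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson's r: covariance normalized by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Synthetic "AI tool usage" and "critical thinking" scores, built so that
# heavier usage tends to coincide with lower scores, plus random noise.
usage = [random.uniform(0, 10) for _ in range(666)]
thinking = [8.0 - 0.5 * u + random.gauss(0, 1.6) for u in usage]

print(f"r = {pearson_r(usage, thinking):.2f}")  # close to -0.68
```

The scatter this produces is noisy, which is the point: even a correlation this strong leaves plenty of individual variation, so it describes a population-level tendency rather than a per-person rule.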