AIED 2026 Workshop · Festival of Learning · Seoul

Open Learner Models in the Age of Generative AI

A half-day workshop on the tension between LLM fluency and OLM integrity.

Venue AIED 2026 — Seoul, Republic of Korea
Format Half-day · In-person · Interactive
Audience AIED, EDM, L@S, Learning Analytics
Size ~15–25 participants
Register to Participate →

What is at stake?


The AIED community has spent decades establishing why transparency, accuracy, and learner agency matter in educational AI. Open Learner Models (OLMs) give learners the ability to inspect, contest, and negotiate the system's representation of their knowledge — creating conditions for genuine metacognition and self-regulation.

Large Language Models now offer something seductive: rich, naturalistic explanations of learner states generated at low cost. One could simply feed a learner's interaction history into an LLM and let it answer "what do I struggle with?" or "what should I do next?" The LLM practically becomes the OLM.

But LLMs do not employ structured, inspectable representations. Their explanations can be fluent yet wrong, confident yet ungrounded. Allowing fluency to substitute for validity would undermine commitments the field has worked hard to establish.

This workshop surfaces that tension sharply — and works toward a research agenda for navigating it.

Open research questions

  • What must an LLM-generated explanation do to qualify as a genuinely open model?
  • How do we evaluate whether LLM-mediated OLMs are accurate, fair, and trusted by learners?
  • What safeguards are needed when natural language explanations can be fluent but wrong?
  • What does it mean for a learner to contest a representation they cannot inspect?
  • How might collaborative and group-level learner modeling change when LLMs mediate the representation?

Workshop Programme


0:00
30 minutes
Welcome & Opening Provocations

Three invited speakers each deliver a tight 5-minute position statement followed by 5 minutes of immediate reaction from the room. Statements are deliberately in tension: one arguing LLMs finally make OLMs scalable; one arguing LLM-mediated explanations threaten the core principle of inspectability; one asking whether any of this serves the people being modeled. The goal is not to resolve but to surface the tension sharply.

0:30
45 minutes
State-of-the-Art Presentation

A workshop organizer presents foundational work in learner modeling and OLMs — empirical studies, system designs, theoretical frameworks — as well as emerging approaches at the OLM–LLM intersection, grounding the audience in what is already being attempted and where gaps remain. Participants then contribute short presentations of their own work.

1:15
60 minutes
Hands-On Design Challenge

Small groups (3–4 participants, mixed across career stage and background) receive a shared scenario: a learner interaction trace from a realistic educational context (programming task, collaborative discussion, or writing exercise). Each group designs how an LLM-enhanced OLM would explain the learner's state to three audiences — the learner, a teacher, and a peer — while explicitly addressing a constraint card of core OLM principles: accuracy, inspectability, learner agency, contestability. Groups then rotate to review each other's designs.

2:15
30 minutes
Synthesis

Organizers and participants map findings onto a shared board organized around three questions: What must an OLM preserve even when LLMs are involved? Where does LLM integration genuinely add value? What research does not yet exist? The board becomes a rough but collectively owned research agenda.

2:45
15 minutes
Closing & Next Steps

Each group nominates one "non-negotiable" principle for LLM-enhanced OLMs. The workshop closes by identifying two or three concrete follow-on actions: a position paper to co-author, a special issue to propose, or a follow-up workshop at a subsequent venue. Participants sign up for future actions before leaving.

What we will grapple with


01 / VALIDITY

"What are the requirements for an LLM-generated explanation of a learner's state to count as a genuinely open model?"

02 / EVALUATION

"How do we evaluate whether LLM-mediated OLMs are accurate, fair, and trusted by learners?"

03 / SAFEGUARDS

"What safeguards are needed when natural language explanations can be fluent but wrong?"

04 / COLLABORATION

"How might collaborative and group-level learner modeling change when LLMs mediate the representation?"

Workshop Organizers


Prof. Dr. Irene-Angelica Chounta
University of Duisburg-Essen, Germany

Head of COLAPS research group. Focuses on computational learning analytics, AIED, and educational technologies. Expert consultant on AI in Education for the Council of Europe; member of the IAIED Society Executive Board.

Prof. Dr. Tomohiro Nagashima
Saarland University & Harvard University

Tenure-Track Junior Professor of Technology-Enhanced Learning. Interdisciplinary research at the intersection of Learning Sciences and HCI. Ph.D. from Carnegie Mellon University; M.A. from Stanford University.

Dr. Diego Zapata-Rivera
Educational Testing Service, Princeton, USA

Distinguished Presidential Appointee at ETS. Research on open learner modeling, Bayesian student modeling, conversation-based and game-based assessment. Co-PI of NSF AI INVITE Institute.

Kaimao Sheng
University of Duisburg-Essen, Germany

PhD candidate in Human-Centered Computing and Cognitive Science. Research on computational representations of learning processes and AI-based models that bridge cognitive theory and implementation.

Mohamed Abdelmagied
University of Duisburg-Essen, Germany

M.Sc. candidate in Computer Engineering. Research on how generative AI can support learners through transparent, interactive OLMs with a focus on negotiable learner models, agency, and metacognition.

Multidisciplinary team spanning
Learning Analytics · AIED
Learning Sciences · HCI
Computer Science · Assessment

Participate in the Workshop

The workshop targets researchers and practitioners in AIED, EDM, and L@S working on learning analytics, learner and user modeling, and educational technology. There are no technical prerequisites — the workshop is designed to be productive for both system builders and educational researchers.

Register to Participate →
AIED 2026 Conference