Knowledge Tracing
Understanding the core technology behind personalized learning recommendations.
Knowledge Tracing (KT) is the computational process of modeling a learner's knowledge state over time. It's a core component of Intelligent Tutoring Systems (ITS) that has been studied for over four decades.
KT infers and maintains a model of what a learner knows to determine optimal instructional content. Methods include psychometric approaches and machine learning models that provide quantitative indicators of student achievement.
Our Methodology
Declarative Memory Model
Based on the ACT-R activation equation, the declarative model treats memory recall as an accumulation of practice events over time. The probability of recall at a future point is computed by aggregating past practice attempts while accounting for forgetting curves.
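A minimal sketch of this idea, using the standard ACT-R base-level activation equation with hypothetical parameter values (`decay`, `tau`, `s` are illustrative, not the system's tuned values):

```python
import math

def recall_probability(practice_times, now, decay=0.5, tau=-0.7, s=0.25):
    """Probability of recall under the ACT-R activation equation.

    practice_times: timestamps (e.g. seconds) of past practice events.
    decay, tau, s: illustrative parameter values, not tuned constants.
    """
    ages = [now - t for t in practice_times if now > t]
    if not ages:
        return 0.0
    # Base-level activation: log of the sum of power-law-decayed traces,
    # so each practice event contributes less as it recedes into the past.
    activation = math.log(sum(age ** -decay for age in ages))
    # Logistic retrieval probability around threshold tau with noise scale s.
    return 1.0 / (1.0 + math.exp((tau - activation) / s))
```

Note how additional practice events raise the activation sum, capturing the spacing and accumulation effects the model relies on.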
Procedural Memory Model
Derived from mastery-learning approaches such as the Additive Factor Model (AFM), the procedural model captures skill acquisition. Performance improvement is modeled as a function of repeated application of Knowledge Components, without an explicit forgetting term.
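A sketch of the standard AFM logistic form (parameter names `ability`, `kc_difficulties`, `kc_learning_rates` are illustrative labels for the usual θ, β, γ parameters):

```python
import math

def afm_correct_probability(ability, kc_difficulties, kc_learning_rates,
                            kc_opportunities):
    """Additive Factor Model: P(correct) from repeated KC practice.

    ability: student proficiency (theta).
    kc_difficulties / kc_learning_rates: per-KC easiness (beta) and
    learning rate (gamma); kc_opportunities: prior practice count (T).
    """
    logit = ability
    for kc, t in kc_opportunities.items():
        # Each practice opportunity adds gamma * T to the logit; there is
        # no forgetting term, so predicted performance only grows.
        logit += kc_difficulties[kc] + kc_learning_rates[kc] * t
    return 1.0 / (1.0 + math.exp(-logit))
```

With a positive learning rate, the predicted probability of a correct response rises monotonically with practice opportunities.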
We combine declarative and procedural models through a linear interpolation controlled by a single scalar transition weight. This represents the gradual transformation of knowledge from declarative to procedural form—a process called "proceduralization" in ACT-R theory.
Knowledge State = (1 - α) × Declarative + α × Procedural
where α is the transition weight, growing from 0 toward 1 as learning progresses and knowledge becomes proceduralized.
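The interpolation above is a one-liner; a sketch with the weight clamped to its valid range:

```python
def knowledge_state(declarative, procedural, alpha):
    """Linear interpolation between the two memory-model estimates.

    alpha in [0, 1] is the transition weight: 0 = fully declarative,
    1 = fully proceduralized.
    """
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * declarative + alpha * procedural
```

Early in learning (α near 0) the forgetting-aware declarative estimate dominates; as α grows, the forgetting-free procedural estimate takes over.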
Key Innovations
No Historical Data Required
Unlike traditional KT methods that rely on other students' historical problem-solving data, our approach can be applied to questions newly generated by LLMs without any prior student data.
Domain-Agnostic
Knowledge Components are stored as natural language text, enabling the system to generalize across any learning domain that can be described in natural language.
LLM-Powered KC Extraction
Prior research by our team demonstrates that GPT-4o can extract Knowledge Components from learning materials with expert-level reliability, enabling automated curriculum generation.
Existing learning interfaces face a fundamental dilemma when showing progress:
Show Forgetting
Accurate but demotivating—users see progress decrease when returning after delays.
Hide Forgetting
Motivating but inaccurate—fails to convey actual knowledge retention.
Our Solution
By introducing a target time constraint, users receive cumulatively increasing progress feedback while the metric accurately communicates the probability of knowledge retention at their target date (exam, interview, etc.).
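One way to see why this metric is monotone (a sketch under the declarative model above, not the authors' exact formula): evaluate retention probability at the fixed target date. Because each practice event adds a positive trace to the activation sum, the predicted retention at that fixed future time can only go up.

```python
import math

def retention_at_target(practice_times, target_time,
                        decay=0.5, tau=-0.7, s=0.25):
    """Predicted recall probability at a fixed future target date.

    Every new practice event adds a positive trace to the activation
    sum, so this value never decreases as the learner practices --
    progress shown against a fixed target is cumulative by construction.
    Parameter values are illustrative.
    """
    ages = [target_time - t for t in practice_times if target_time > t]
    if not ages:
        return 0.0
    activation = math.log(sum(age ** -decay for age in ages))
    return 1.0 / (1.0 + math.exp((tau - activation) / s))
```

The same practice history that would show a dip in a "current mastery" display shows only growth when projected onto the target date.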
Learning Path Achievement Demo
Experience how forgetting-aware scoring provides monotonically increasing progress while rewarding spaced repetition. Click "Auto-Play Demo" to see it in action.
Learning Goal
Reach 80% mastery of Machine Learning
Practice knowledge components to see events here
How Forgetting-Aware Scoring Works
Answering correctly increases progress by a small step (+0.1% in this demo)
Answering correctly after a delay earns a larger reward (+0.3%), reflecting spaced repetition
The score only increases; it never decreases
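The demo's update rule can be sketched as follows; the gain values mirror the +0.1% / +0.3% figures above, while the 24-hour spacing threshold is a hypothetical choice for illustration:

```python
def update_score(score, correct, delay_hours,
                 immediate_gain=0.001, spaced_gain=0.003,
                 spacing_threshold=24.0):
    """Monotone score update mirroring the demo's rules.

    A correct answer after a long delay is rewarded more (spaced
    repetition); incorrect answers leave the score unchanged, so the
    displayed progress never decreases. Threshold is illustrative.
    """
    if not correct:
        return score
    gain = spaced_gain if delay_hours >= spacing_threshold else immediate_gain
    return min(1.0, score + gain)
```

Because errors are simply not rewarded rather than penalized, the score is non-decreasing by construction, matching the demo's behavior.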