Teaching & Learning Lab Practicum

Empower Learning Through Creative Design

Enduring Understandings

Learning about different learning theories and design processes prompted me to reflect on some enduring understandings in learning design.

“If you aren’t designing for someone, you’re designing for no one.”

“Learning design is all about making choices.”

“Constraints are good, up to a point.”

“Learning designers are humans designing learning for humans.”

Reflections

Project Charter

AI Tutor bot Project Charter.docx

Discover Module Post – Learning Goals and Thoughts on the AI Tutoring Project

Learning Goals

As I begin my journey in the Teaching and Learning Lab Practicum, I am excited to gain hands-on experience in learning design while deepening my understanding of how AI can enhance education. My primary learning goals for this course center around mastering the design process from end to end, understanding user needs, and exploring how AI can be leveraged to create meaningful learning experiences. These goals align not only with my current practicum project but also with my broader career aspirations in learning experience design, UX research, and the intersection of AI and education.

One of my biggest motivations for taking this practicum is to gain practical, real-world experience in designing and iterating on learning solutions. While I have explored learning design in theory, I want to engage deeply in the full cycle of instructional design—from user research to prototyping, implementation, and evaluation. Additionally, I want to hone my ability to understand user needs, ensuring that educational experiences are tailored to how people actually learn, rather than how we assume they learn.

AI Tutoring Project

My practicum project focuses on AI tutoring, which presents both exciting opportunities and critical challenges. One of my main areas of interest is rethinking AI tutors as more than just automated content delivery systems. Many AI-driven learning platforms prioritize efficiency—they deliver information quickly, assess comprehension, and provide immediate feedback. However, this often results in behaviorist-style instruction that reinforces rote memorization rather than deep learning.

I want to design an AI tutor that enhances, rather than replaces, human-driven learning. My work in this project will also tie into my growing interest in AI-assisted counseling and mentorship, where AI can support users in self-reflection, goal-setting, and emotional engagement—a concept I have explored in another context related to AI in counseling.

Connecting with Learning Theories

When evaluating the role of AI in education, I found myself questioning whether AI should be framed as an instructor (behaviorism), facilitator (cognitivism), collaborator (constructivism), or tool for creation (constructionism). This question resurfaced during our in-class activity this week to create a driving education module, where we explored how different learning theories shape instructional design.

The Driving Education Activity

The driving education activity this week challenged us to design a learning sequence using a specific theory without explicitly naming it. This exercise mirrored real-world learning design, where instructional strategies must align with the intended learner experience. Here’s how each theory influenced potential approaches:

  1. Behaviorism (Stimulus-Response Learning)
    • If we had taken a behaviorist approach, the lesson would have focused on step-by-step instruction, reinforcement, and rote practice.
    • A structured “drill and practice” approach might involve rewarding correct responses (e.g., positive feedback when correctly turning on the car).
    • While effective for habit formation, behaviorism alone lacks depth for complex decision-making in real-world driving scenarios.
    • This approach reminded me of how traditional AI tutors often operate by providing scripted and direct feedback rather than fostering deeper learning.
  2. Cognitivism (Mental Processing and Knowledge Structuring)
    • A cognitive approach would emphasize mental models, schemas, and structured knowledge acquisition before hands-on learning.
    • For example, we might introduce a mind map to break down driving components: engine functions, road signs, and cognitive load management.
    • This structured approach reflects how some AI tutoring systems adapt instruction based on prior knowledge, much like adaptive learning technologies.
    • While useful for understanding concepts, this approach alone may not prepare learners for situational, unpredictable driving experiences.
  3. Constructivism (Learning Through Experience & Reflection)
    • The constructivist approach would immerse learners in driving simulations, encouraging them to actively engage, problem-solve, and reflect on their actions.
    • Instead of step-by-step instructions, students would encounter real-world driving scenarios, make decisions, and receive immediate feedback.
    • Our class activity incorporated self-reflection after simulations, mirroring constructivist ideals.
    • This approach connects to my AI and learning design interests, where AI should serve as a co-explorer in the learning process, helping learners generate, test, and refine ideas rather than just providing answers.
  4. Constructionism (Learning by Creating)
    • In a constructionist approach, learners might design their own driving courses, simulate road conditions, or even modify a virtual driving game.
    • Learning happens through active creation, rather than just experiencing pre-made content.
    • This aligns closely with Resnick’s 4P Model (Projects, Passion, Peers, Play) and my personal interest in AI as a creative learning tool.
    • AI in this context could act as a design partner, helping learners generate driving scenarios, modify learning environments, and test their own hypotheses.
    • This mirrors how generative AI can be leveraged for creative learning experiences, making education more engaging, interactive, and learner-driven.

Bridging Learning Theories and AI in My Practicum Work

Reflecting on this exercise helped me clarify why I am drawn to AI in education. Many existing AI-powered educational tools are stuck in behaviorist or cognitivist paradigms, focusing on drill-based tutoring or adaptive assessments. However, my vision is to explore how AI can enhance constructivist and constructionist learning experiences, where learners actively engage, collaborate, and create new knowledge.

For instance, for the AI tutoring project, I want to:

  • Investigate how AI can support, rather than constrain, learner agency.
  • Design AI interactions that facilitate inquiry, creativity, and exploration instead of reinforcing rigid instructionist models.
  • Explore how AI tools like chatbots, generative models, and interactive tutors can assist learners in open-ended projects rather than just providing pre-programmed responses.

Through both choosing my practicum project and participating in this driving education activity, I realized how learning theories fundamentally shape AI’s role in education.

While AI has tremendous potential, its impact depends on how we design and integrate it. Moving forward, I aim to explore AI-powered learning environments that embrace constructivist and constructionist principles, fostering engagement, collaboration, and creativity rather than just reinforcing traditional instructional methods.

Design Module Post – Multi-agent AI Tutor Development and Cognitive Apprenticeship

In developing a protocol for training AI tutor agents, my team and I are not only designing an AI system but also engaging in a deeper exploration of how expertise in teaching and learning is structured, embodied, and communicated. This process was greatly influenced by Christian’s workshop on AI and learning design, where we explored how AI can support personalized learning, simulate different teaching approaches, and act as an interactive scaffold for learners. One of the key takeaways from the workshop was the necessity of capturing the diverse, human-like elements of teaching, including instructional strategies, engagement techniques, and the nuances of how educators guide student thinking.

As part of this effort, we started developing an interview protocol for professors to create AI tutor agents. Our goal was to extract their unique teaching styles, educational philosophies, and mentoring approaches so that the AI tutor agents could reflect their real-life expertise. To achieve this, we structured our questions around how they explain complex concepts, engage with students, scaffold knowledge, and encourage critical thinking. By collecting these insights, we aim to make AI tutor agents feel more human and contextually aware, rather than just rule-based responders.
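
To make this concrete, here is a minimal sketch (the field names, interview questions, and sample answers are hypothetical illustrations, not our actual protocol) of how answers from such an interview might be assembled into a persona-aware system prompt for a tutor agent:

```python
# Sketch: turning interview-protocol answers into a tutor-agent system prompt.
# Field names and sample answers below are illustrative, not our real protocol.

PERSONA_FIELDS = [
    ("explaining", "How do you explain complex concepts?"),
    ("engagement", "How do you engage with students?"),
    ("scaffolding", "How do you scaffold knowledge?"),
    ("critical_thinking", "How do you encourage critical thinking?"),
]

def build_persona_prompt(name: str, answers: dict) -> str:
    """Compose a system prompt that encodes a professor's teaching style."""
    lines = [
        f"You are an AI tutor modeled on Professor {name}.",
        "Guide students with questions before giving answers,",
        "and make your reasoning visible, as in cognitive apprenticeship.",
        "",
        "Teaching style, in the professor's own words:",
    ]
    for key, question in PERSONA_FIELDS:
        if key in answers:  # only include fields the professor answered
            lines.append(f"- {question} -> {answers[key]}")
    return "\n".join(lines)

prompt = build_persona_prompt(
    "Bill",
    {
        "explaining": "Start from a concrete student example, then generalize.",
        "scaffolding": "Ask what the student already knows before adding new ideas.",
    },
)
print(prompt)
```

The point of structuring it this way is that each interview answer maps to a distinct, inspectable slice of the persona, so the prompt can be revised field by field as we learn which parts make the agent feel authentic.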

This approach closely aligns with the cognitive apprenticeship model, which we discussed in class. Cognitive apprenticeship emphasizes how experts make their thought processes visible to learners, allowing novices to observe, reflect, and gradually internalize these skills. In our AI tutor development, this concept is particularly relevant because we are essentially trying to codify and make explicit the implicit knowledge and problem-solving strategies that expert educators use.

My takeaway and reflections from class on situated learning and cognitive apprenticeship reinforce this idea:

  • Experts articulate their reasoning, while novices learn through reflection. Our AI tutor should simulate this process by guiding students through structured questioning and feedback.
  • Making the invisible visible—this is critical in AI design because much of an expert’s teaching process is based on tacit knowledge that they may not even be consciously aware of. Our job in designing AI tutors is to capture and encode this knowledge in a way that students can meaningfully engage with.
  • Situated learning and assessment—AI tutors should not just provide answers but guide learners through the process of discovery in a way that reflects authentic learning environments.

This process of bridging cognitive apprenticeship with AI tutor development is both a technical and pedagogical challenge. It requires deeply understanding human teaching methods and translating them into AI behavior while ensuring that AI remains a learning facilitator rather than a knowledge dispenser. Moving forward, our goal is to refine our protocol for capturing teaching personas, ensure that the AI tutors can scaffold learning effectively, and explore how these AI agents can become more dynamic, responsive, and reflective of real-world teaching practices.

By incorporating the principles of cognitive apprenticeship and contextual learning into our AI tutor design, we hope to create a system that not only provides information but fosters deep engagement, problem-solving, and a more human-like learning experience for students.

Develop Module Post – What AI Can Do and What Only Humans Can

Creating Bill Bot has been such a meaningful process, and honestly, also really challenging in a good way. We started with this idea: What would it look like if an AI tutor could actually feel like our Professor, Bill? Not just in what it knows, but in how it teaches — how it asks questions, scaffolds thinking, and reflects real pedagogical care. This sets our work apart from many existing AI tutor tools, which focus primarily on content delivery. Beyond content-based support, we are also aiming to emulate Bill’s persona and educational philosophy, as well as his unique way of interacting with and guiding students through their reflective learning processes.

To build the bot, we pulled from a ton of sources: Bill’s intake survey, his feedback on student reflective portfolios, the course syllabus and slides in each module, and even the structure and tone of our T127 class. We trained the bot iteratively, constantly refining its responses every time it sounded too robotic, too directive, or just not Bill enough. It was a constant process of testing, tweaking, and retraining to bring out the reflective, question-first style we know from class.
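
A rough sense of what one round of that tweaking looked like can be sketched as a lightweight style check between training passes (the heuristics and phrase lists here are illustrative stand-ins, not our actual criteria):

```python
# Sketch of a lightweight style check run on draft bot responses between
# training rounds. The heuristics and phrases are illustrative only.

DIRECTIVE_PHRASES = ("you must", "the answer is", "simply do")

def style_flags(response: str) -> list[str]:
    """Return reasons a draft bot response should be revised."""
    flags = []
    text = response.lower()
    if not response.rstrip().endswith("?"):
        flags.append("not question-first: does not end by inviting reflection")
    if any(p in text for p in DIRECTIVE_PHRASES):
        flags.append("too directive: gives the answer instead of guiding")
    return flags

draft = "The answer is constructivism. Simply do the reading."
for reason in style_flags(draft):
    print("revise:", reason)
```

Even a crude filter like this helped surface responses that sounded too robotic or too directive, which we then rewrote by hand before feeding them back into the bot's training material.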

We held our first in-class prototype testing on April 10, and it went great overall — students engaged deeply with the tool. But we also got some honest feedback: that the bot didn’t always feel like Bill. Some students even felt frustrated by that. And to be honest… I kind of loved hearing that. Because it reminded me that AI can’t — and shouldn’t — replace the full human experience. It can support, extend, and reflect pieces of it, but it won’t be perfect. And that’s okay. We’re taking the feedback we got and continuing to revise, train, and improve the bot — always with our values in mind.

Around the same time, I watched a TED Talk and read a New York Times article about Khanmigo, Khan Academy’s AI tutor. Both highlighted how Khanmigo is being piloted in classrooms, offering personalized support to students and assisting teachers with tasks like lesson planning. The article emphasized that while AI can enhance learning experiences, it cannot replicate the nuanced understanding, empathy, and mentorship that human educators provide. The challenge lies in integrating AI as a supportive tool that amplifies the educator’s role rather than diminishing it.

That tension has stayed with me. But I think what this project taught me is that AI in learning design has incredible potential. But the key is staying intentional, human-centered, and humble about what AI can do — and what only humans can.

So I’m still hopeful. But I’m also careful. And I think it’s okay to hold both. Honestly, that is also probably the most Bill way to approach this.

Overview of Capstone Project

Capstone Product – “The Bill Bot”

Continue addressing feedback from prototyping sessions while iterating on and training the bot.

I may include a slide-based storybook or video walkthrough that guides viewers through the design journey and shows how the bot evolved.

I also plan to include a live or simulated Bill Bot interaction demo (perhaps an avatar with voice interaction, or a mock video demo) that users can actually talk to.

During the gallery walk, we will share how we translated Bill’s teaching philosophy into design principles, present some conversation flow and prompt design examples, and give the audience the opportunity to interact with the bot live.

Deliver Module Post – Amplifying the Educator’s Role Instead of Replacing/Diminishing it

Presenting the Bill Bot during our Gallery Walk was such a powerful and affirming experience. It wasn’t just about showcasing the technical design of an AI tutor — it was a moment to share a vision of what thoughtful, relational, and reflective learning support could look like when built with care.

Standing by our poster and talking with peers, professors, and visitors about the bot made me realize how much our work resonates with real questions people are asking about AI in education:
Can AI support learning without replacing human educators?
Can it be relational without faking empathy?
What kind of boundaries do we need to build into our designs?

One of the most meaningful parts for me was seeing how people responded to the tone and persona of the Bill Bot. The questions and conversations during the Gallery Walk helped me reflect more deeply on what it means to encode not just knowledge, but the emotional quotient of a teacher — their calm, their curiosity, and their encouragement. I was especially struck by how many audience members talked about the “feeling” of interacting with the bot. That emotional dimension, I now realize, is a core part of what makes or breaks an AI tutor.

Creating the presentation slides and designing a cartoon avatar of Professor Bill was both fun and unexpectedly reflective. Every design choice — from the soft color palette to the phrasing of my intro script — made me pause and think: “Is this aligned with Bill’s teaching style? Is this respectful to the real person, but also clear about the boundaries of the AI?” That tension — between warmth and truth, between usefulness and clarity — continues to guide how I iterate on the bot.

I’ve also been reflecting on how our work contributes to broader course outcomes, especially around designing for learning that is meaningful, responsible, and inclusive. I’ve learned that the technical design is only half the story. The other half is how we communicate purpose, set expectations, and design for trust.

Going forward, as I reflect on how to help Bill implement the tutor in his teaching next year, I want to build on the current capstone product by focusing more on student experience and trust-building, perhaps through more iterative testing and dialogue around ethical guardrails. I want to keep exploring how to design AI tutors that are not just helpful, but also respectful, human-aware, and learner-centered.

Final Reflective Post – Designing to Learn, Learning to Design

Looking back through my portfolio, what stands out most is how much growth and transformation happened not just in my projects, but in how I think — about learning, about design, and about what it means to build something with real meaning and care.

When I began this course, I knew I was passionate about learning design and emerging technologies, but I wasn’t yet fluent in the language of instructional design. Over time, through each module and reflection, I began to see myself less as a student learning about design and more as a designer learning by doing. I practiced applying theory, balancing empathy and structure, iterating based on feedback, and thinking critically about the ethical dimensions of my decisions — all of which culminated in the development of the Bill Bot project.

The process of building Bill Bot — from its initial concept to our gallery walk showcase — was a perfect reflection of everything I’ve learned: starting with a big question, grounding in pedagogy, designing with intentionality, testing with users, reflecting with humility, and iterating again and again. My portfolio documents this process transparently. It shows not just polished outcomes, but also the thinking behind the design — the hard questions, small pivots, and deep values that shaped the final product.

As a whole, this portfolio represents a shift in how I define my professional identity. I’m not just someone who “knows about” learning design — I am someone who designs for learning with empathy, creativity, and critical consciousness.

Looking ahead, I hope to continue using this portfolio as a living artifact of my capacity to design thoughtful, learner-centered experiences. Whether I’m applying for UX research roles, EdTech positions, or doctoral programs, I want others to see how I approach problems, how I bridge theory and practice, and how seriously I take the relational and ethical aspects of educational work. I also plan to expand the portfolio with new projects and reflections as I grow in this field.

This class didn’t just teach me how to design for learning — it reminded me why I care so deeply about this work in the first place. And for that, I’m genuinely grateful.

Capstone Product

https://pingpong.hks.harvard.edu/group/80?assistant=1140

Capstone Analysis