
How AI Learns - Knowledge Graph Activity

Build a physical model of a knowledge graph to understand how AI learns from examples, makes predictions, and sometimes produces unexpected results. Perfect for ages 8-14.

Overview

How AI Learns is an interactive demonstration that shows how artificial intelligence learns patterns from examples and uses them to answer new questions. Participants build a physical model of a knowledge graph using index cards and yarn, creating connections that represent how AI systems store and retrieve information.

[Diagram: the finished knowledge graph. Noun cards (Robot, Car, Dog, Pizza, Banana) connect to their concept cards (MACHINE, ANIMAL, FOOD) with "belongs to" yarn, while "eats" and "chases" yarn link nouns and concepts.]

Example Query Walkthrough

Question: “What do robots eat?”

  • Direct path: Robot → Pizza (1 hop, high confidence ✓)

Question: “What do cars eat?”

  • No direct connection from Car
  • Path through concepts: Car → MACHINE → FOOD → Pizza (3 hops, lower confidence)
  • The AI generalizes: “If robots (a machine) eat pizza (food), then cars (also a machine) probably eat pizza too!”

Question: “What do dogs drive?”

  • Path: Dog → ANIMAL → MACHINE → Car
  • The AI produces a silly answer it was never taught—this demonstrates both generalization and hallucination!

Quick Facts

  • Age Range: 8-14 years (adaptable for younger and older)
  • Group Size: 4-12 participants
  • Duration: 30-45 minutes (60+ with participant training)
  • Difficulty: Moderate (facilitator should read guide fully)

Download the Complete Guide

Download Activity Guide PDF - Complete facilitator guide with setup instructions, training sentences, discussion questions, and age adaptations.


What Participants Will Learn

By the end of this activity, participants will understand:

  • AI learns from examples, not rules - No one programs AI with explicit instructions; it discovers patterns in training data
  • AI combines patterns to generalize - It can answer questions it was never directly taught by connecting learned relationships
  • AI doesn’t “understand” - It follows learned connections without comprehension, which is why it can be confidently wrong
  • More examples = stronger patterns - The confidence of AI predictions depends on how much supporting evidence exists
  • Garbage in, garbage out - Bad or biased training data leads to bad outputs

Educational Note

This activity models a knowledge graph structure, similar to how systems like Google’s Knowledge Panel work. While large language models like ChatGPT use different technology (neural networks with billions of parameters), the core insight—learning patterns from data to make predictions—applies to both approaches.


Materials Needed

| Item | Quantity | Notes |
|---|---|---|
| Index cards or cardstock | 16-20 | Two colors if possible: one for nouns, one for concepts |
| Yarn | 4 colors | Each color = one verb (red=eats, blue=drives, etc.) |
| Scissors | 1-2 pairs | For cutting yarn to length |
| Markers | Several | For writing on cards |
| Large floor space or table | 1 | Needs room to spread out the network |
| Training sentences | 17 | Provided in the PDF guide |

Yarn Attachment Options

  • Easiest: Cut small slits in card edges, slide yarn through—holds itself
  • Quick: Lay yarn across cards on floor, use small weights to hold cards in place
  • Durable: Pin cards to corkboard, wrap yarn around pushpins

Activity Phases

Phase 1: Setup

  1. Create concept cards (MACHINE, ANIMAL, FOOD, PERSON) and noun cards (Robot, Car, Dog, Pizza, Scout, etc.)
  2. Arrange cards on the floor or table, clustering nouns near their concept categories
  3. Explain yarn colors: red=eats, blue=drives, green=chases, yellow=helps

Phase 2: Training the AI

Read 17 training sentences one at a time. For each sentence, participants add TWO pieces of yarn:

  • Specific connection: Connects the exact nouns (e.g., Robot → Pizza)
  • General connection: Connects the concept categories (e.g., MACHINE → FOOD)

Why both? The specific connection remembers the exact fact. The general connection allows the AI to generalize to new situations.

Example: “Robots eat pizza”

  • Add red yarn: Robot → Pizza (specific)
  • Add red yarn: MACHINE → FOOD (general)
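For older groups or curious facilitators, the two-yarn rule can be sketched in a few lines of Python. This is an illustrative model only (the category mapping and function names are made up for the demo, not part of the activity guide):

```python
# Illustrative sketch: each training sentence adds TWO edges, one specific
# (Robot -> Pizza) and one general (MACHINE -> FOOD).
from collections import defaultdict

CATEGORY = {"Robot": "MACHINE", "Car": "MACHINE",
            "Dog": "ANIMAL", "Pizza": "FOOD", "Banana": "FOOD"}

edges = defaultdict(list)  # (subject, verb) -> list of objects

def train(subject, verb, obj):
    edges[(subject, verb)].append(obj)                      # specific yarn
    edges[(CATEGORY[subject], verb)].append(CATEGORY[obj])  # general yarn

train("Robot", "eats", "Pizza")
print(edges[("Robot", "eats")])    # ['Pizza']
print(edges[("MACHINE", "eats")])  # ['FOOD']
```

Running `train()` for all 17 sentences builds the full yarn network in code form.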

Phase 3: Asking Questions

Now the fun begins! Ask the AI questions by following yarn paths:

Query Rules:

  1. Check for direct yarn first (1 hop = high confidence answer)
  2. If no direct path, traverse through concept clusters (3 hops = lower confidence)
  3. When multiple options exist, count incoming yarn—more yarn = more evidence

Example Questions:

  • “What do robots eat?” → Direct red yarn leads to Pizza (high confidence)
  • “What do cars eat?” → No direct path. Follow: Car → MACHINE → FOOD → Pizza (lower confidence, but still answers!)
  • “What do dogs drive?” → This was never taught! But the AI can generalize: Dog → ANIMAL → MACHINE → Car
  • “What do bananas drive?” → No path exists. The AI doesn’t know everything—only what it learned.
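The query rules above can be sketched as a two-level lookup in Python: check for a direct edge first, then fall back to the concept-level edge. This is a simplified, hypothetical model (names and training data are illustrative), but it reproduces the direct-hop, concept-hop, and no-path cases:

```python
# Illustrative sketch: answer a query with a direct edge if one exists,
# otherwise generalize through the concept layer (lower confidence).
from collections import defaultdict

CATEGORY = {"Robot": "MACHINE", "Car": "MACHINE", "Dog": "ANIMAL",
            "Pizza": "FOOD", "Banana": "FOOD"}
MEMBERS = defaultdict(list)  # concept -> nouns in that cluster
for noun, cat in CATEGORY.items():
    MEMBERS[cat].append(noun)

edges = {("Robot", "eats"): ["Pizza"],    # specific yarn: "Robots eat pizza"
         ("MACHINE", "eats"): ["FOOD"]}   # general yarn from the same sentence

def ask(subject, verb):
    if (subject, verb) in edges:                 # 1 hop: direct yarn
        return edges[(subject, verb)][0], "high"
    cat = CATEGORY[subject]
    if (cat, verb) in edges:                     # 3 hops via concept cards
        target_cat = edges[(cat, verb)][0]
        if MEMBERS[target_cat]:
            return MEMBERS[target_cat][0], "low"
    return None, "no answer"                     # no path: the AI doesn't know

print(ask("Robot", "eats"))     # ('Pizza', 'high')
print(ask("Car", "eats"))       # ('Pizza', 'low') -- generalization!
print(ask("Banana", "drives"))  # (None, 'no answer')
```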

Key Teaching Moments

The Generalization Insight

When you ask “What do dogs drive?” participants will discover the AI answers “Car” even though this was never in the training data! This demonstrates:

  • Generalization power: AI can combine patterns to answer new questions
  • Hallucination risk: The AI confidently produces nonsensical outputs (dogs driving cars!)
  • Pattern matching limits: The AI doesn’t understand that its answer is absurd

The Bias Demonstration

Count the red yarn going into Pizza vs. Banana. Pizza appears more often in training, so the AI favors it. This illustrates:

  • Real AI systems reflect biases in their training data
  • Frequency creates preference, not understanding of “correctness”
  • More diverse training data leads to better generalization
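The yarn-counting idea can be mirrored in code with a frequency count. The sentence counts below are made up for illustration; the point is that whichever answer appears most often in training wins:

```python
# Illustrative sketch: count "incoming yarn" to see which answer the
# training data favors (frequency, not correctness).
from collections import Counter

training = [("Robot", "eats", "Pizza"), ("Dog", "eats", "Pizza"),
            ("Scout", "eats", "Pizza"), ("Dog", "eats", "Banana")]

incoming = Counter(obj for _, verb, obj in training if verb == "eats")
print(incoming.most_common())  # [('Pizza', 3), ('Banana', 1)]
```

Pizza wins not because it is the "right" answer, but because it has more supporting yarn.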

The Confidence Concept

Compare answers that require 1 hop (direct connection) vs. 3 hops (through concepts):

  • Direct paths = high confidence
  • Multi-hop paths = lower confidence, more uncertainty
  • Real AI systems also express confidence levels based on evidence strength

Discussion Questions

After running several queries, engage participants with these questions:

“Did we ever teach the AI that cars eat pizza?”

No! It figured it out from patterns. This is how AI learns—by discovering relationships, not being programmed with every possible fact.

“Why did it pick pizza instead of bananas?”

More training examples pointed to pizza. The AI goes with what it saw most often—this is how bias enters AI systems.

“What happens if we ask something we never trained at all?”

Try “What do bananas drive?” The AI may have no path. It doesn’t know everything—only patterns from its training data.

“Is the AI actually thinking?”

No. It’s following yarn. It doesn’t understand that “toasters eat pizza” is silly—it just knows the connections exist.

“What if we trained it with wrong information?”

It would confidently give wrong answers. This is why AI training data quality matters so much. Garbage in, garbage out.


Age Adaptations

Ages 6-7 (Kindergarten-1st Grade)

  • Skip concept layer entirely—use only noun cards and direct connections
  • Use only 6-8 noun cards and one yarn color (e.g., ‘eats’)
  • Keep training to 5-6 sentences maximum
  • Let them physically walk the yarn paths with their fingers
  • Key message: “The AI only knows what we taught it”

Ages 8-10 (2nd-4th Grade) — Sweet Spot

  • Use the complete setup with concept clusters
  • All 17 training sentences work well
  • They can grasp generalization and find hallucinations hilarious
  • Can handle counting yarn for confidence
  • Ready to create their own training sentences

Ages 11-14 (5th-8th Grade)

  • Introduce vocabulary (nodes, edges, inference, training)
  • Discuss real-world AI applications and risks
  • Add bias demonstration and adversarial examples
  • Connect to AI ethics, misinformation, and responsible use
  • Can read and discuss the educator background section

Ages 15+ (High School and Adults)

  • Frame as a “simplified model” rather than kids’ activity
  • Dive deeper into neural networks and transformer architecture
  • Compare knowledge graphs to embedding spaces
  • Use as launching point for exploring actual AI tools
  • Great for teacher/educator training sessions

Optional Extensions

Participant-Created Training

Let participants write 3-5 new training sentences following the rules. Add the yarn, then test what new questions can be answered. This creates ownership and often produces the best “aha moments” when silly sentences create unexpected AI behavior.

Example participant sentences:

  • “Pizza eats robots” → Now “What do bananas eat?” has an answer!
  • “Teachers chase students” → Opportunity to add new noun card
  • “Dinosaurs eat dinosaurs” → Cannibalism! AI doesn’t judge.

Bad Training Data

Intentionally add a silly sentence like “Pizza drives airplanes.” Now ask “What do bananas drive?” Suddenly there’s a path through FOOD→MACHINE. Discuss how bad training data creates bad AI behavior.

Conflicting Information

Train both “Dogs chase cats” and “Dogs help cats.” Now two different colored yarns connect the same cards. Ask “What do dogs do to cats?” Discuss how AI handles ambiguity and conflicting data.
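In code, conflicting training shows up as two differently labeled edges between the same pair of cards. A minimal sketch (hypothetical names; a real system would need a policy such as frequency or context to pick one):

```python
# Illustrative sketch: two yarn colors between the same cards leave the
# query ambiguous -- the graph stores both relations without judging.
from collections import defaultdict

relations = defaultdict(set)  # (subject, object) -> set of verbs

def train(subject, verb, obj):
    relations[(subject, obj)].add(verb)

train("Dog", "chases", "Cat")
train("Dog", "helps", "Cat")
print(sorted(relations[("Dog", "Cat")]))  # ['chases', 'helps'] -- ambiguous!
```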


How This Relates to Real AI

What This Activity Models

Our yarn-and-cards network is closest to a knowledge graph—a way of representing facts as connections between entities. Knowledge graphs are used in:

  • Google’s Knowledge Panel (info boxes in search results)
  • Recommendation systems
  • Question-answering systems
  • Semantic search engines

How Modern Language Models Differ

Large language models like GPT and Claude work differently but share core insights:

Representation:

  • Instead of discrete cards and yarn, LLMs represent words as dense vectors (lists of hundreds of numbers)
  • Similar words have similar vectors

Training:

  • Instead of adding yarn for each sentence, LLMs adjust billions of numerical weights based on billions of text examples
  • Patterns that appear frequently get stronger connections (just like our yarn!)

Prediction:

  • Instead of following yarn, LLMs calculate probabilities for what comes next
  • They’re sophisticated pattern-completion machines

Generalization:

  • Like our model, LLMs combine patterns to produce outputs they were never explicitly taught
  • This is both their power and their risk (hallucinations!)
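The vector idea can be demonstrated with toy numbers. This is a minimal sketch: the three-number "vectors" below are invented for illustration (real models use hundreds of dimensions learned from data), but the mechanic of measuring similarity numerically instead of following yarn is the same:

```python
# Illustrative sketch: similar words get similar vectors, so similarity
# becomes a calculation rather than a yarn path. Toy numbers, not real
# embeddings.
import math

vec = {"pizza":  [0.9, 0.1, 0.0],
       "banana": [0.8, 0.2, 0.1],
       "car":    [0.0, 0.1, 0.9]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(vec["pizza"], vec["banana"]))  # high: both foods
print(cosine(vec["pizza"], vec["car"]))     # low: unrelated
```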

Vocabulary

| Term | Simple Definition | In Our Activity |
|---|---|---|
| Training | Teaching AI by showing examples | Adding yarn for each sentence |
| Inference | AI answering questions after training | Following yarn to find answers |
| Generalization | Applying patterns to new situations | Answering "What do cars eat?" |
| Confidence | How sure the AI is | Number of hops / amount of yarn |
| Node | A point in a network | An index card |
| Edge | A connection between nodes | A piece of yarn |
| Knowledge Graph | Facts stored as connections | Our whole card-and-yarn network |
| Hallucination | AI confidently producing false info | "Dogs drive cars" |

Tips for Facilitators

Before the Activity

  • Read through the complete PDF guide at least once
  • Practice with 3-4 training sentences before running with group
  • Prepare all cards in advance if time is limited
  • Consider using two distinct card colors for concepts vs. nouns

During the Activity

  • Use the training sentences as starting points, not rigid scripts
  • Adjust difficulty based on participant success
  • Encourage creative solutions and questions
  • Celebrate silly outputs—they’re teaching moments!
  • Keep pacing appropriate for age (shorter for younger kids)

Common Challenges

“The yarn is getting tangled!”

This is actually a great teaching moment—real AI systems also deal with complexity! Consider using a corkboard setup for durability.

“Why can’t the AI just know everything?”

Perfect question! Discuss the difference between human reasoning and AI pattern matching.

“This seems too simple to be real AI.”

Acknowledge that real AI is more complex, but the core insight—learning patterns from examples—is authentic.


Real-World Connections

After completing the activity, connect to real-world AI applications:

Where Students Encounter AI

  • Voice assistants (Siri, Alexa, Google Assistant)
  • Video recommendations (YouTube, TikTok)
  • Autocomplete and spelling correction
  • Image recognition in photos apps
  • Chatbots and customer service
  • Educational apps and games

Critical Thinking Questions

  • What training data might these systems use?
  • Could any of these systems have biases? Why?
  • When might you want to trust AI? When should you be skeptical?
  • How can we make AI systems more fair and accurate?

Assessment & Learning Outcomes

By the end of this activity, participants should be able to:

  • Explain how AI learns from examples rather than programmed rules
  • Demonstrate how AI generalizes by combining patterns
  • Identify the difference between AI pattern-matching and human understanding
  • Recognize that AI confidence doesn’t equal accuracy
  • Describe how training data quality affects AI outputs
  • Apply critical thinking to real-world AI systems

Additional Resources

For Educators

  • Download the complete facilitator guide PDF above
  • Review the 17 training sentences in advance
  • Check out our other STEM activities in the Library

For Parents

This activity works great for:

  • Family game nights
  • Scout meetings or den activities
  • Birthday party activities for tech-interested kids
  • Homeschool science units
  • Rainy day activities

Looking for more hands-on learning? Check out:

  • Scout Adventure RPG - Collaborative storytelling and problem-solving
  • Other STEM activities in our Library section

License & Sharing

This activity guide is freely shareable for educational purposes. Feel free to:

  • Use it in classrooms, scout troops, libraries, or community centers
  • Adapt it for your specific age group or context
  • Share the PDF with other educators
  • Create your own variations

We only ask that you credit Kreators Guild and share your experiences with us!


Questions or Feedback?

Have you run this activity with your group? We’d love to hear:

  • What worked well?
  • What challenges did you encounter?
  • What creative variations did you try?
  • What age group did you work with?

Contact us to share your story or ask questions!


Created by Kreators Guild Inc. for youth STEM education. Suitable for scouts, classrooms, libraries, and families. No prior AI knowledge required.
