If you've got a question about Cogito, the answer is here.
Cogito Coach is an AI coaching tool with three coaches, built on cognitive science research.
🔍 Curiosity Coach teaches you to ask brilliant questions. Pick any topic and learn to ask deeper, more analytical questions. Each session ends with a Curiosity Report showing your thinking level, star question, and what to try next.
🕵️ Argument Analyst teaches you to investigate claims. Look at ads, social media, and news. Learn to check sources, evaluate evidence, and spot persuasion techniques. The more you practice, the harder you are to fool.
🎯 Decision Coach teaches structured decision-making. Walk through real or practice decisions using 7 thinking phases: recognizing, framing, values, options, investigation, prediction, and the call. Each session ends with a Decision Report.
📚 Defensible AI Track (the AI Practitioner Certification course) is a 15-lesson workplace program. Same coaches, anchored to real legal cases (Air Canada, Mata, iTutorGroup, Samsung, Garante v. OpenAI), 225 industry-specific scenarios, live coaching practice, Coaching Session Reports per session, and a verifiable certificate per member. Direct purchase: $249 per seat, one-time, no renewal.
AI tools generate essays, images, and convincing fake videos in seconds. Children who can't evaluate AI-generated content, spot manipulation, or think independently are at a real disadvantage. Cogito teaches those skills now.
Governments are catching up too. UNESCO, ISTE, and the OECD are building AI literacy frameworks, and PISA 2029 will test students on AI and media literacy for the first time globally. We build the skills standards will eventually require.
Yes. Cogito's three coaches map directly to competencies in major AI literacy frameworks:
🔍 Curiosity Coach aligns with the "Evaluate AI" and "Critical Thinking" competencies in UNESCO's AI Competency Framework, ISTE's Citizen standard (evaluating machine-generated content), and Digital Promise's "Information and Mis/Disinformation" practice.
🕵️ Argument Analyst covers the "Evaluate AI Outputs" competency from the US Department of Labor framework, the OECD-EC AILit "Engage with AI" domain (assessing accuracy and bias), and ISTE's digital resource evaluation indicators.
🎯 Decision Coach addresses cognitive bias awareness found in Digital Promise's "Ethics and Impact" practice and the DEC framework's "Critical Thinking and Judgment" dimension.
PISA 2029 will include the first global assessment of AI and media literacy. Cogito builds the skills it will test.
Cogito teaches the thinking skills that sit at the core of every AI literacy framework: evaluating claims, questioning sources, spotting manipulation, recognizing bias, and making decisions without being fooled. We don't teach students how AI works technically. We teach them how to think clearly in a world full of AI-generated content. That's the harder skill, and the one that matters more.
Curiosity Coach: Pick a topic. The AI asks back, names your question types, and pushes you toward deeper thinking. After 10+ exchanges you get a Curiosity Report.
Argument Analyst: Pick a claim to investigate or paste one you found. The AI walks you through source-checking, evidence weighing, and 48 persuasion techniques. You finish with an Analysis Report.
Decision Coach: Bring a real decision or pick a practice dilemma. The AI takes you through 7 thinking phases (recognize, frame, values, options, investigate, predict, decide) and flags cognitive biases as they show up. You finish with a Decision Report.
Self-learners. Sharpen your edge in 10 to 15 minutes a day. Get a Critical Thinking Score and watch your reasoning evolve over time. No teacher, no parent, no employer. Just you and the work.
Educators & families. Three coaches for K-12 through college that work with any curriculum. Reports show parents and teachers exactly how their student thinks, not just what they scored.
Enterprises & teams. 15 lessons anchored to real legal precedent (Air Canada, Mata, iTutorGroup, Samsung). 225 industry-specific scenarios. Live coaching, not videos. Verifiable certificates per member. $249 per seat, one-time, no renewal.
Yes. Self-learners are a primary audience. Pick any of the three coaches, run a session on a real topic you care about (a decision you're making, a claim you saw online, a question you're chasing), and get a Coaching Session Report at the end.
Most people start with the free Critical Thinking Score: 10 minutes, any topic, a 0-100 baseline across asking, evaluating, and deciding. From there you can run as many coaching sessions as you want.
The AI Fluency course is the workplace track. 15 lessons covering the 4D framework — Delegation, Description, Discernment, Diligence — plus Foundation, Security, Agents, Governance, and a Capstone. Each is anchored to a real legal precedent (EEOC v. iTutorGroup, Mata v. Avianca, Air Canada, Samsung trade-secret leak, EchoLeak, Project Vend, Garante v. OpenAI) and adapts to 225 industry-specific scenarios.
It uses the same three coaches as the standalone product, but inside a sequenced course with knowledge checks, live coaching practice, progress tracking, and per-member Coaching Session Reports. Direct purchase: $249 per seat, one-time, no renewal.
Yes. The AI Fluency course already ships with 225 industry-specific scenarios covering 5 industries (insurance, healthcare, financial services, technology, retail) across 3 departments (operations, marketing, HR). For deeper customization, we can tune lessons and Coaching Session Report rubrics to your taxonomy, compliance requirements, and competency model.
Yes. We build a custom knowledge base from your textbooks, lecture notes, case studies, or any course materials. The AI grounds every coaching session in that content, so students practice critical thinking on your topics with real citations from your sources.
See a live example: a business-school knowledge base built from SEC 10-K filings →
ChatGPT gives answers. Khanmigo and similar AI tutors help with content. Cogito teaches the thinking skills AI can't do for you: asking sharp questions, evaluating claims, and making structured decisions.
Practical difference: a math tutor solves the problem with you. Cogito makes you spot the assumption you didn't question, the evidence you didn't check, or the option you didn't consider. Both useful. Different jobs. As AI gets better at answering, knowing what to ask and how to evaluate the answer is the skill that appreciates.
Curiosity Report reveals: a Curiosity Score (1-10), the student's thinking level (from "Fact Finder" to "Inventor"), their star question with analysis of why it was strong, identified strengths like "higher-order thinking" or "evidence-seeking," and specific next steps.
Analysis Report reveals: an Investigation Score (1-10), which dimensions they covered (source, evidence, reasoning, context, verification), any persuasion techniques they spotted with explanations, and tips for becoming sharper at detecting manipulation.
Decision Report reveals: a Decision Score (1-10), which of the 7 thinking phases they explored and how deeply, any cognitive biases they spotted (like "Scared to Lose" or "Everyone Else"), and a verdict on whether they're ready to decide or need more thinking.
All reports are designed to show growth over time, not just a single snapshot.
Argument Analyst covers 48 techniques across four categories:
Manipulation Tactics: bandwagon, appeal to emotion, false dilemma, slippery slope, fear tactics, loaded language
Data Misuse: cherry-picked data, misleading statistics, small samples, correlation vs causation, anecdotes vs data
Source/Authority Issues: fake experts, hidden ads, conflicts of interest, ad hominem, anonymous sources
Logic Flaws: circular reasoning, straw man, red herring, false equivalence, disguised opinions
Decision Coach guides students through seven research-backed thinking phases:
1. Recognize: Is this actually a decision I need to make?
2. Frame: What am I really deciding? (Often the real question is different from the obvious one.)
3. Values: What matters most to me here?
4. Options: What are ALL my options, not just the obvious two?
5. Investigate: What do I know for sure vs. what am I assuming?
6. Predict: What could happen with each option? Who else is affected?
7. Decide: What's my choice, and can I explain why?
Students also learn to spot 12 cognitive biases (decision traps) like "Stuck Because You Started" (sunk cost), "Everyone Else" (bandwagon), and "Scared to Lose" (loss aversion).
Our thinking levels framework categorizes questions into six levels: Remember, Understand, Apply, Analyze, Evaluate, and Create. Unlike typical tools that only score thinking levels, we use them to guide students in the moment, helping them progress to higher-order thinking. Learn how we use it differently →
Yes. Free trial includes full sessions and real reports for all three coaches. No credit card required. Teachers can sign up multiple students at once. The Critical Thinking Score is always free.
Pick how much help you want before each session.
Coaching: Most supportive. The AI names question patterns and suggests directions.
Hint: Balanced. Subtle nudges, no explicit teaching.
Test: Independent. Just answers, no scaffolding.
All three use research-backed cognitive rhythm to prevent overload.
Duration: 10-15 minutes typically. No strict time limit. Explore as long as you want.
Topics: Anything! Science, history, current events, art, technology, sports, philosophy... Teachers can also assign curriculum-specific topics.
Yes. We comply with COPPA and FERPA regulations. Student data is encrypted, never sold, and parents/teachers can request deletion at any time. See our Privacy Policy for details.
Cogito teaches three thinking skills: asking, evaluating, deciding. Each one is built on decades of cognitive science research, not edtech trends. Here are the design decisions and the research behind them.
The research: Psychologist Deanna Kuhn's work shows that critical thinking develops through practice in structured dialogue, not through lectures about thinking. Matthew Lipman's Philosophy for Children demonstrated similar findings: students who regularly engage in guided inquiry develop stronger reasoning than those taught reasoning rules directly (Kuhn, 2005; Lipman, 2003).
Our design decision: Curiosity Coach never explains "how to ask good questions." Instead, the system creates conditions for students to discover effective questioning through their own inquiry. When we do offer guidance, it's in-the-moment and specific: "That was a mechanism question. You asked HOW, not just WHAT." This names a pattern the student just used, making implicit thinking explicit.
Why it matters: Direct instruction about questioning ("always ask why") produces shallow compliance. Students learn the words but not the thinking. By naming moves after a student makes them, we help them recognize what they're already capable of, and repeat it intentionally.
The research: Bloom's taxonomy (Remember → Understand → Apply → Analyze → Evaluate → Create) maps increasingly sophisticated cognitive operations. Higher-level questions require deeper processing and produce more durable learning (Anderson & Krathwohl, 2001 — the revised taxonomy used here).
Our design decision: We don't just score questions by Bloom level. We use the taxonomy formatively to guide students toward their next level. When a student asks "what happens," we might prompt: "Now try: why does it happen that way?" We also recognize that expert inquiry cycles between levels. A student who asks a creative question, then drops to a factual question, may be grounding their idea, not regressing.
Why it matters: Many educational tools use Bloom's as an assessment rubric. We use it as a coaching compass. The difference: assessment tells students where they are; formative guidance shows them where to go next.
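As a rough illustration, the "coaching compass" above could be pictured as a lookup from a question's current Bloom level to a next-level prompt stem. This is a hypothetical sketch, not Cogito's implementation: the level names come from the revised taxonomy, and every stem string here is invented for illustration.

```python
# Hypothetical sketch: Bloom's taxonomy as a coaching compass,
# not a scoring rubric. Stems are illustrative examples only.

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

# One illustrative "try this next" stem per current level.
NEXT_LEVEL_STEMS = {
    "remember":   "Now try: why does it happen that way?",
    "understand": "Now try: where else would this apply?",
    "apply":      "Now try: what parts make this work?",
    "analyze":    "Now try: is this a good explanation? What's the evidence?",
    "evaluate":   "Now try: how would you redesign or improve it?",
    # A drop from "create" back to facts may be grounding, not
    # regression, so this stem invites fact-finding.
    "create":     "Strong question! What facts would you need to test it?",
}

def coaching_prompt(current_level: str) -> str:
    """Suggest a stem that nudges the student one level deeper."""
    return NEXT_LEVEL_STEMS[current_level]
```

The point of the sketch is the direction of use: the level classifies the question the student just asked, and the output is guidance, not a grade.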
The research: Cognitive scientist Daniel Willingham's work demonstrates that thinking skills don't automatically transfer across domains. A student who learns to ask "how does this mechanism work?" about rockets won't spontaneously apply that pattern to history or relationships unless the transfer is made explicit (Willingham, 2007).
Our design decision: Once per session, after a particularly strong question, we add a brief "transfer tag": "That mechanism question works anywhere: science, history, even friendships." In parent reports, we show the same question pattern applied to three different domains. We time this carefully, only after genuine high-quality questions, so it feels earned rather than formulaic.
Why it matters: Without explicit transfer bridging, students develop "islands" of skill that don't connect. A single sentence at the right moment can transform a domain-specific success into a portable thinking tool.
The research: The Delphi Report on critical thinking identified metacognition (thinking about one's own thinking) as the highest-impact skill. Students who notice patterns in their own questioning improve faster than those who don't (Facione, 1990).
Our design decision: When a student shows metacognitive awareness ("Wait, I keep asking the same type of question..."), we override our normal response structure. We stop, name what happened ("You just caught yourself thinking about your thinking, that's metacognition"), and amplify its significance before continuing. These moments are rare; we never let them pass unmarked.
Why it matters: Metacognitive moments are fragile. If the system responds with a standard answer, the student learns that self-reflection doesn't matter. By explicitly prioritizing these moments, we signal that noticing your own thinking is the most valuable thing you can do.
The research: Learning science shows that constant correction creates cognitive overload and disengagement. Students need space to practice without feedback on every attempt.
Our design decision: We alternate between teaching and non-teaching responses. One response might name their question type or offer a new stem; the next simply answers and sparks curiosity. This rhythm is deliberate. Without it, guidance becomes nagging. With it, students stay engaged while still receiving regular coaching.
Why it matters: The instinct is to help more by teaching more. But over-scaffolding produces dependency, not skill. Alternation creates breathing room while maintaining enough structure for growth.
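One way to picture the alternation rhythm, together with the metacognition override described earlier, is a simple response-mode selector. This is a minimal sketch under assumed names; the real system is conversational, not a turn counter.

```python
def choose_mode(turn: int, metacognitive_moment: bool) -> str:
    """Pick a response mode for one coaching turn.

    A metacognitive moment (the student reflects on their own
    thinking) overrides the normal rhythm: stop and name it.
    Otherwise, alternate teaching turns (name a question pattern,
    offer a stem) with plain answer-plus-spark turns.
    """
    if metacognitive_moment:
        return "name_metacognition"
    return "coach" if turn % 2 == 0 else "answer_and_spark"
```

Even in this toy form, the two design decisions are visible: the override always wins, and coaching never arrives on consecutive turns.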
The research: Curiosity is driven by "information gaps": when something is incomplete or surprising, we're motivated to resolve it. Complete answers close inquiry; incomplete answers fuel it (Loewenstein, 1994).
Our design decision: Every response ends with a "spark", a deliberately incomplete thought using specific patterns: contrast ("Weirdly, the opposite is true for..."), exception ("Some species actually..."), mystery ("Scientists still don't know why..."), or unexpected connection ("Submarines solved this the same way whales do..."). These aren't random interesting facts. They're engineered to create the specific gap that pulls a next question.
Why it matters: If we answered questions completely, students would have nothing left to ask. The art is giving enough to satisfy while leaving enough open to spark the next inquiry.
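The four spark patterns above can be pictured as fill-in templates. A hedged sketch: the pattern names mirror the examples in the text, but the template strings and the `make_spark` helper are invented for illustration; real sparks are written per topic, not template-filled.

```python
# Illustrative spark templates, one per pattern named in the text.
SPARK_PATTERNS = {
    "contrast":   "Weirdly, the opposite is true for {subject}...",
    "exception":  "Some {group} actually {surprise}...",
    "mystery":    "Scientists still don't know why {phenomenon}...",
    "connection": "{thing_a} solved this the same way {thing_b} do...",
}

def make_spark(pattern: str, **slots: str) -> str:
    """Fill a spark template, leaving the information gap open."""
    return SPARK_PATTERNS[pattern].format(**slots)
```

Each template deliberately trails off: the gap it opens, not the fact it states, is what pulls the next question.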