Understanding AI
1. What Is Artificial Intelligence?
2. How Computers Process Information
3. The Internet & APIs
4. Data: The Fuel for AI
5. Machine Learning in Plain English
6. Neural Networks & Deep Learning
7. Large Language Models Demystified
8. AI Ethics & Responsible Use
Module 1

What Is Artificial Intelligence?

The AI family tree — what it is, what it isn't, and why it matters right now.

A chess computer just beat the world champion, and it can't even tie its shoes!

It's 1997. A machine called Deep Blue just defeated Garry Kasparov, the greatest chess player alive. Headlines scream: "Machine beats man!" The world panics. Is this the beginning of the robot apocalypse?

Fast forward. That same computer can't book a restaurant, write a grocery list, or recognise a photo of a cat. It can only play chess. And it does that by brute-forcing millions of moves per second — not by "thinking."

That's the first thing you need to understand about AI: what looks like intelligence is almost never what you think it is. The magic trick works because you're watching from the audience, not from backstage.

The AI family tree

People throw around "AI," "machine learning," and "deep learning" like they're the same thing. They're not. They're a family — and like any family, understanding who's related to whom clears up a lot of confusion.

Think of this like a family photo:

  • Artificial Intelligence is the entire family. It's the big umbrella term for any system that does something that looks intelligent.
  • Rule-Based Systems are the strict uncle. They follow exact rules written by humans: "IF the temperature is above 100, THEN send an alert." No learning. No adapting. Just obedience.
  • Machine Learning is the brainy kid. Instead of following rules, it learns patterns from data. Show it 10,000 photos of cats and dogs, and it figures out the difference on its own.
  • Deep Learning is the prodigy grandchild — ML with many layers of artificial "neurons." It's what powers image recognition, voice assistants, and language models.
  • LLMs (Large Language Models, like ChatGPT and Claude) are the famous great-grandchild everyone's talking about at the dinner table right now.
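The "strict uncle" from the family tree can be sketched in a few lines. This is a toy illustration, not a real monitoring system; the function name and threshold are invented for this example. The point is who writes the rule: here, a human writes every IF/THEN by hand.

```python
# Hypothetical rule-based monitor. Every rule below was written by a
# human programmer; nothing is learned from data.

ALERT_THRESHOLD = 100  # degrees, chosen by the programmer

def check_temperature(reading: float) -> str:
    # The "strict uncle": one hand-written IF/THEN. No learning,
    # no adapting. Just obedience.
    if reading > ALERT_THRESHOLD:
        return "ALERT"
    return "OK"

print(check_temperature(72))   # OK
print(check_temperature(104))  # ALERT
```

If conditions change — say, the safe temperature drops to 90 — nothing adapts. A human has to go back in and edit the rule.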
| Term | What it does | How it works | Example |
| --- | --- | --- | --- |
| Traditional Automation | Follows fixed rules | Human writes every IF/THEN rule | Thermostat turns on at 68°F |
| Machine Learning | Learns patterns from data | Algorithm finds rules from examples | Email spam filter |
| Deep Learning | Learns complex patterns | Many-layered neural networks | Face recognition on your phone |
| LLM | Generates human-like text | Predicts the next word, over and over | ChatGPT answering your question |

There Are No Dumb Questions

"Is a spam filter really AI? That seems too simple."

It is! AI doesn't have to be flashy. A spam filter that learned from millions of emails to spot patterns — that's machine learning, which lives under the AI umbrella. AI isn't about looking smart; it's about learning from data instead of following hand-coded rules.

"What about Siri and Alexa? Where do they fit?"

Voice assistants are a sandwich of AI techniques: speech recognition (deep learning) to turn your voice into text, natural language understanding (ML) to figure out what you meant, and sometimes an LLM to generate a response. They're not one thing — they're multiple AI systems duct-taped together.

✗ Without AI

  • Rules written by humans explicitly
  • Programmer anticipates every case
  • Breaks on anything unanticipated
  • Fast to run, cheap to update rules

✓ With AI

  • Rules learned from data
  • Model discovers patterns humans didn't write
  • Generalises to new cases
  • Expensive to train, cheap to run
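To make the "with AI" column concrete, here's a deliberately tiny sketch of a spam filter. Everything in it is made up for illustration (the example emails, the function names), and real spam filters use far more data and far smarter statistics. But the shape is right: no human writes a single spam rule — the "rule" is a table of word counts derived from labelled examples.

```python
# Toy machine-learning sketch: the rules are learned from data,
# not written by a programmer. Illustrative only.

from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. non-spam emails."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in examples:
        (spam_counts if is_spam else ham_counts).update(text.lower().split())
    return spam_counts, ham_counts

def looks_like_spam(text, spam_counts, ham_counts):
    """Label as spam if its words appeared more often in spam examples."""
    words = text.lower().split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

# Four hand-labelled "emails" stand in for the millions a real filter sees.
examples = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting moved to tuesday", False),
    ("lunch tomorrow with the team", False),
]
spam_counts, ham_counts = train(examples)
print(looks_like_spam("free prize money", spam_counts, ham_counts))     # True
print(looks_like_spam("team meeting tuesday", spam_counts, ham_counts)) # False
```

Notice it correctly flags "free prize money" even though no one ever wrote a rule about prizes — the pattern came out of the examples. That's the whole trade in one sketch: expensive to gather and label data, cheap to classify once trained.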

A brief history: from chess to ChatGPT

AI didn't appear overnight. Here's the highlight reel:

| Year | Milestone | What actually happened |
| --- | --- | --- |
| 1956 | "AI" coined at Dartmouth | Researchers said "we'll solve intelligence in one summer." (They didn't.) |
| 1997 | Deep Blue beats Kasparov | Brute-force search, not learning. Chess-only system — no general intelligence, no ability to learn or adapt to other tasks. |
| 2011 | Siri launches | Voice assistants reach mainstream consumers, using a mix of statistical NLP and hand-crafted logic. |
| 2012 | AlexNet wins ImageNet | Deep learning crushes traditional computer vision. The deep learning revolution begins. |
| 2016 | AlphaGo beats Lee Sedol | First AI to beat a top world Go player (Lee Sedol, then ranked among the world's top five) at a game with more possible positions than atoms in the observable universe (Silver et al., Nature, 2016). |
| 2022 | ChatGPT launches | LLMs go mainstream: 100 million users in two months (Reuters/UBS, Jan 2023). |
| 2023-2024 | Claude, GPT-4, Gemini | Models get dramatically better at reasoning, coding, and multi-step tasks. |
| 2025 | Claude 4 family, DeepSeek, agentic AI | Claude 4 (Opus, Sonnet, Haiku) launches. DeepSeek demonstrates frontier-level AI from China. AI agents begin handling multi-step tasks autonomously. EU AI Act enforcement begins. |

Notice the pattern? Each breakthrough did one specific thing really well. None of them could do everything.

1956: AI coined at Dartmouth

John McCarthy names the field — and bets it can all be solved in one summer.

1997: Deep Blue beats Kasparov

IBM chess computer defeats world champion — narrow AI works.

2012: AlexNet — deep learning works

Neural networks suddenly dominate image recognition. Accuracy jumps 11 points overnight.

2017: The Transformer paper

"Attention Is All You Need" — the architecture behind every modern LLM.

2022: ChatGPT

100 million users in 2 months. AI becomes a mainstream tool overnight.

2024: Multimodal reasoning models

GPT-4o, Claude, Gemini — models that see, hear, code, and reason.

2025: Claude 4 & agentic AI

Claude 4 family launches. DeepSeek emerges from China. AI agents handle multi-step workflows. EU AI Act enforcement begins.

Spot the AI (25 XP)

Classify each item as Traditional Automation, Machine Learning, or Not AI at all:

  • A calculator app that adds numbers
  • Netflix recommending shows based on your watch history
  • A macro in Excel that copies data from one column to another
  • Your phone unlocking when it sees your face
  • A traffic light that changes on a fixed timer

Narrow AI vs. General AI: the specialist vs. the generalist

Here's a concept that trips people up: the AI you use every day is narrow AI. It's brilliant at one thing and useless at everything else.

Think of it like doctors:

  • Narrow AI is a specialist — a world-class heart surgeon who can't treat a cold. AlphaGo plays Go better than any human alive, but it can't play tic-tac-toe. A spam filter catches phishing emails but can't write a poem.
  • General AI (AGI) would be a general practitioner who's also a heart surgeon, lawyer, chef, and Olympic athlete — all at once. It would do any intellectual task a human can do.
| | Narrow AI | General AI (AGI) |
| --- | --- | --- |
| Exists today? | Yes, everywhere | No. Not yet. Maybe not soon. |
| What it does | One task extremely well | Any intellectual task |
| Examples | Chess engines, spam filters, ChatGPT | Science fiction (HAL 9000, Jarvis) |
| Learns new tasks? | Only if retrained by humans | Would learn on its own |
| Should you worry about it? | Worry about bias, errors, misuse | Interesting to think about, but not your day-to-day concern |

"But wait — ChatGPT can do LOTS of things. Isn't that general AI?"

Good instinct, but no. LLMs like ChatGPT and Claude are very flexible narrow AI. They're trained on one task — predicting the next token — and it turns out that task is so general that they can write, summarise, translate, code, and even interpret images and audio. But they have no built-in persistent memory — each conversation starts fresh unless a product explicitly stores and re-injects prior context. And they can't learn from experience without retraining, or physically interact with the world. They're a very talented parrot, not a human.
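That "one task" — predicting the next word, over and over — can be sketched with a toy model. The probability table below is hand-written and entirely hypothetical; a real LLM learns probabilities over tens of thousands of tokens from vast amounts of text. But the loop (look at the last word, pick a likely next word, append, repeat) is the same idea.

```python
# Toy sketch of next-word prediction. The table is made up for
# illustration; a real LLM learns these probabilities from data.

NEXT_WORD = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.8, "sat": 0.2},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break  # no known continuation: stop generating
        # Greedy decoding: always pick the most likely next word.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Nothing in that loop understands cats or sitting. It just keeps asking "given the last word, what usually comes next?" — and it turns out that, at sufficient scale, that single trick produces essays, translations, and code.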

There Are No Dumb Questions

"When will we get AGI?"

Nobody knows. Estimates range from 5 years to never. The honest answer is: we don't have a clear path to it yet. The AI you'll work with in your career is narrow AI — and it's powerful enough to transform every industry without being "general."

"Should I be scared of AI taking my job?"

A better question: "Which parts of my job can AI do, and which parts become more valuable?" AI is great at repetitive pattern-matching tasks. It's bad at judgment, relationships, creative problem-solving, and anything requiring real-world context. The people who thrive will be the ones who use AI as a power tool, not the ones who compete with it.

🔑The thing everyone gets wrong
Every AI you interact with today — ChatGPT, Siri, recommendation algorithms — is narrow AI. It does one category of thing extremely well and nothing else. "General AI" (AGI) that can do anything a human can remains unsolved and is the subject of serious debate about whether and when it will exist.

What AI CAN and CAN'T do

This is the most practical thing in this entire module. Tape this to your wall.

| AI is GREAT at | AI is TERRIBLE at |
| --- | --- |
| Finding patterns in huge datasets | Understanding why something matters |
| Classifying things into categories | Common sense ("Will this chair fit through that door?") |
| Generating text, images, and code | Knowing when it's wrong |
| Processing information 24/7 without fatigue | Empathy, ethics, moral judgment |
| Translating between languages | Tasks it wasn't trained on |
| Spotting anomalies (fraud, defects) | Anything requiring a physical body |

Can AI Do This? (25 XP)

For each task, decide: AI can do this well, AI can do this poorly, or AI can't do this at all.

  • Read 10,000 customer reviews and identify the top 5 complaints
  • Decide whether to fire an employee
  • Translate an email from English to Japanese
  • Understand why a customer is actually upset (not just what they wrote)
  • Generate 50 variations of a marketing headline
  • Know whether its own answer is correct

The magic trick: it looks like thinking, but it's pattern matching

Before you read this section: When you ask ChatGPT a question, what do you think it's actually doing? Circle your best guess:

A) Looking up the answer on the internet
B) Reasoning through the problem step by step
C) Matching patterns from everything it's been trained on
D) Running a search through a database of facts

Write down your answer before scrolling. We'll come back to it.


Here's the most important mental model in this entire course.

AI is a very well-read parrot.

Imagine a parrot that has read every book, every website, and every conversation ever written in human history. When you ask it a question, it doesn't understand your question. It recognises the pattern and repeats the kind of answer it's seen thousands of times before.

Most of the time, that answer is shockingly good — because the parrot has read so much that it can pattern-match almost anything. But sometimes the parrot says something completely wrong with total confidence — because the pattern it matched was wrong, or because no pattern existed and it just... improvised.

This is why AI:

  • Can write a beautiful poem (it's seen millions of poems)
  • Can solve a math problem (it's seen millions of solutions)
  • Can confidently give you a wrong answer (it's matching a pattern, not checking facts)
  • Can't tell you when it's making something up (it has no concept of "truth")
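You can build a crude version of the parrot yourself. The sketch below learns only which word tends to follow which in a tiny made-up "corpus", then remixes them at random. Everything here is illustrative; the point is that the output is locally fluent while nothing anywhere checks whether it's true — which is exactly how a pattern-matcher can cite a plausible-sounding paper that doesn't exist.

```python
# The "well-read parrot" in miniature: a bigram model that learns
# which word follows which, then remixes them. It has no notion of
# truth, only of what is statistically likely in its reading material.

import random
from collections import defaultdict

# A tiny, made-up stand-in for "every book ever written".
corpus = (
    "the study was published in the journal of results "
    "the study found strong results the journal found the study"
)

# "Reading": record every word that has followed each word.
follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def parrot(start: str, n: int, seed: int = 0) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Pick any word it has seen follow the current one.
        out.append(random.choice(candidates))
    return " ".join(out)

print(parrot("the", 8))
```

Run it with different seeds: the sentences look language-like, and may confidently describe a "journal" that was never real. Scale the corpus up by a few trillion words and the remixes get eerily good — but the mechanism, and the blind spot, stay the same.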

Unmask the Magic Trick (50 XP)

You're explaining AI to a friend who thinks ChatGPT is "actually intelligent." Using the parrot analogy, explain:

  1. Why AI can write a sonnet in Shakespeare's style (in 1-2 sentences)
  2. Why AI sometimes invents fake scientific papers that sound real (in 1-2 sentences)
  3. Why AI can't tell you whether its own answer is correct (in 1-2 sentences)
  4. One example of a task where pattern matching works perfectly, and one where it fails dangerously.

Hint: The parrot has read every Shakespeare sonnet and millions of imitations — so it can remix them. It's also read millions of scientific papers — so it can remix those too, even if the result is fiction. And it has no concept of "true" vs. "false" — it only knows "likely" vs. "unlikely."

Build Your Own AI Family Tree (25 XP)

Draw (on paper or in a notes app) your own version of the AI family tree. Include:

  1. The 4 levels: AI → ML → Deep Learning → LLMs
  2. At least one real-world example at each level
  3. One non-AI automation example off to the side, labelled as "NOT AI"

Take a photo or screenshot — you'll reference this in a later module.

Hint: Good examples — AI: any smart system. ML: recommendation engines. Deep Learning: image recognition. LLM: ChatGPT. Not AI: a basic calculator.

Back to Deep Blue

Remember that 1997 headline — "Machine beats man!"? You can read it differently now.

Deep Blue didn't beat Kasparov because it was intelligent. It beat him because it was fast at a very narrow kind of pattern matching: evaluating chess positions. It was the world's most expensive, most single-minded chess calculator — brilliant at one thing, useless at everything else.

The same is true of every AI system you use today. When ChatGPT writes your email, it's not thinking. When image recognition labels your photo, it's not seeing. They're finding patterns in data, billions of times per second, at a scale no human brain can match.

That's both the magic and the limitation — and now you understand both.

Key takeaways

  • AI is a family, not a single thing. Automation, ML, deep learning, and LLMs are nested layers — each more capable and more complex than the last.
  • All AI you use today is narrow AI — brilliant at specific tasks, useless at everything else. General AI doesn't exist yet.
  • AI is pattern matching, not thinking. It's a very well-read parrot: impressive most of the time, dangerously wrong when the pattern doesn't fit.
  • You can identify whether something is AI by asking one question: did it learn from data, or follow hand-written rules?
  • You can set realistic expectations for AI by remembering what it's great at (patterns, scale, speed) and what it can't do (judgment, truth, common sense).

Knowledge Check

1. A thermostat turns on the heater when the temperature drops below 68°F. Is this artificial intelligence?

2. What is the key difference between machine learning and traditional rule-based automation?

3. Why are LLMs like ChatGPT considered narrow AI rather than general AI, even though they can perform many different tasks?

4. An AI system confidently states that a fictional research paper is real, complete with a fake author name and journal. What best explains this behaviour?

Next: How Computers Process Information