What Is Artificial Intelligence?
The AI family tree — what it is, what it isn't, and why it matters right now.
A chess computer just beat the world champion, and it can't even tie its shoes!
It's 1997. A machine called Deep Blue just defeated Garry Kasparov, the greatest chess player alive. Headlines scream: "Machine beats man!" The world panics. Is this the beginning of the robot apocalypse?
Fast forward. That same computer can't book a restaurant, write a grocery list, or recognise a photo of a cat. It can only play chess. And it does that by brute-forcing millions of moves per second — not by "thinking."
That's the first thing you need to understand about AI: what looks like intelligence is almost never what you think it is. The magic trick works because you're watching from the audience, not from backstage.
The AI family tree
People throw around "AI," "machine learning," and "deep learning" like they're the same thing. They're not. They're a family — and like any family, understanding who's related to whom clears up a lot of confusion.
Think of this like a family photo:
- Artificial Intelligence is the entire family. It's the big umbrella term for any system that does something that looks intelligent.
- Rule-Based Systems are the strict uncle. They follow exact rules written by humans: "IF the temperature is above 100, THEN send an alert." No learning. No adapting. Just obedience. (You'll see this in code right after the list.)
- Machine Learning is the brainy kid. Instead of following rules, it learns patterns from data. Show it 10,000 photos of cats and dogs, and it figures out the difference on its own.
- Deep Learning is the prodigy grandchild — ML with many layers of artificial "neurons." It's what powers image recognition, voice assistants, and language models.
- LLMs (Large Language Models, like ChatGPT and Claude) are the famous great-grandchild everyone's talking about at the dinner table right now.
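Here's the strict uncle in miniature. A minimal sketch in Python; the threshold and the function name are invented for illustration, but the shape is exactly what the table below calls "traditional automation":

```python
# The "strict uncle": every rule is hand-written by a human.
# Hypothetical sensor-alert example, invented for illustration.

def check_temperature(reading_f: float) -> str:
    # A human chose this threshold. The program never updates it,
    # never learns, and never handles a case nobody wrote a rule for.
    if reading_f > 100:
        return "ALERT: temperature too high"
    return "OK"

print(check_temperature(72))   # OK
print(check_temperature(104))  # ALERT: temperature too high
```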
| Term | What it does | How it works | Example |
|---|---|---|---|
| Traditional Automation | Follows fixed rules | Human writes every IF/THEN rule | Thermostat turns on at 68°F |
| Machine Learning | Learns patterns from data | Algorithm finds rules from examples | Email spam filter |
| Deep Learning | Learns complex patterns | Many-layered neural networks | Face recognition on your phone |
| LLM | Generates human-like text | Predicts the next word, over and over | ChatGPT answering your question |
There Are No Dumb Questions
"Is a spam filter really AI? That seems too simple."
It is! AI doesn't have to be flashy. A spam filter that learned from millions of emails to spot patterns — that's machine learning, which lives under the AI umbrella. AI isn't about looking smart; it's about learning from data instead of following hand-coded rules.
"What about Siri and Alexa? Where do they fit?"
Voice assistants are a sandwich of AI techniques: speech recognition (deep learning) to turn your voice into text, natural language understanding (ML) to figure out what you meant, and sometimes an LLM to generate a response. They're not one thing — they're multiple AI systems duct-taped together.
✗ Without AI
- ✗ Rules written explicitly by humans
- ✗ The programmer must anticipate every case
- ✗ Breaks on anything unanticipated
- ✗ Fast to run, and tweaking a single rule is cheap
✓ With AI
- ✓ Rules learned from data
- ✓ The model discovers patterns humans didn't write
- ✓ Generalises to new cases
- ✓ Expensive to train, cheap to run
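Here's that contrast in running code. A minimal sketch of the "With AI" column, assuming scikit-learn is installed (`pip install scikit-learn`); the four-email dataset is invented for illustration, where a real spam filter would learn from millions of messages:

```python
# "Rules learned from data": nobody writes an IF/THEN rule here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration.
emails = [
    "win a free prize now",          # spam
    "claim your free money",         # spam
    "meeting moved to 3pm",          # ham (not spam)
    "lunch tomorrow with the team",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# The model counts word patterns in the examples and derives
# its own statistical decision rule from them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize money"]))       # likely ['spam']
print(model.predict(["team meeting at lunch"]))  # likely ['ham']
```

Notice what's missing: no human typed "IF the email contains 'free' THEN spam". The model discovered that pattern itself, which is the entire "With AI" column in a dozen lines.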
A brief history: from chess to ChatGPT
AI didn't appear overnight. Here's the highlight reel:
| Year | Milestone | What actually happened |
|---|---|---|
| 1956 | "AI" coined at Dartmouth | John McCarthy names the field. Researchers bet they could solve intelligence in one summer. (They didn't.) |
| 1997 | Deep Blue beats Kasparov | Brute-force search, not learning. Chess-only system — no general intelligence, no ability to learn or adapt to other tasks. |
| 2011 | Siri launches | Voice assistants reach mainstream consumers. Used a mix of statistical NLP and hand-crafted logic — a significant step toward AI-powered voice interaction. |
| 2012 | AlexNet wins ImageNet | Deep learning crushes traditional computer vision, cutting the top-5 error rate by roughly 11 points in a single year. The deep learning revolution begins. |
| 2016 | AlphaGo beats Lee Sedol | First AI victory over one of the world's top-ranked Go players, in a game with more legal positions than atoms in the observable universe (Silver et al., Nature, 2016). |
| 2017 | "Attention Is All You Need" published | The transformer architecture debuts. It becomes the foundation of every modern LLM. |
| 2022 | ChatGPT launches | LLMs go mainstream. 100 million users in two months (Reuters/UBS, Jan 2023). |
| 2023-2024 | Claude, GPT-4, Gemini | Models get dramatically better at reasoning, coding, and multi-step tasks. |
| 2025 | Claude 4 family, DeepSeek, agentic AI | Claude 4 (Opus, Sonnet, Haiku) launches. DeepSeek demonstrates frontier-level AI from China. AI agents begin handling multi-step tasks autonomously. EU AI Act enforcement begins. |
Notice the pattern? Each breakthrough did one specific thing really well. None of them could do everything.
Narrow AI vs. General AI: the specialist vs. the generalist
Here's a concept that trips people up: the AI you use every day is narrow AI. It's brilliant at one thing and useless at everything else.
Think of it like doctors:
- Narrow AI is a specialist — a world-class heart surgeon who can't treat a cold. AlphaGo plays Go better than any human alive, but it can't play tic-tac-toe. A spam filter catches phishing emails but can't write a poem.
- General AI (AGI) would be a general practitioner who's also a heart surgeon, lawyer, chef, and Olympic athlete — all at once. It would do any intellectual task a human can do.
| | Narrow AI | General AI (AGI) |
|---|---|---|
| Exists today? | Yes, everywhere | No. Not yet. Maybe not soon. |
| What it does | One task extremely well | Any intellectual task |
| Examples | Chess engines, spam filters, ChatGPT | Science fiction (HAL 9000, Jarvis) |
| Learns new tasks? | Only if retrained by humans | Would learn on its own |
| Should you worry about it? | Worry about bias, errors, misuse | Interesting to think about, but not your day-to-day concern |
"But wait — ChatGPT can do LOTS of things. Isn't that general AI?"
Good instinct, but no. LLMs like ChatGPT and Claude are very flexible narrow AI. They're trained on one task — predicting the next token — and it turns out that task is so general that they can write, summarise, translate, code, and even interpret images and audio. But they have no built-in persistent memory — each conversation starts fresh unless a product explicitly stores and re-injects prior context. And they can't learn from experience without retraining, or physically interact with the world. They're a very talented parrot, not a human.
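You can see the "no built-in memory" point in code. Every mainstream LLM chat API is stateless: the product fakes continuity by re-sending the conversation each turn. A minimal sketch, with a hypothetical call_model stand-in where a real API call would go:

```python
# Chat "memory" is an application feature, not a model feature.

def call_model(messages: list[dict]) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # Real chat APIs work the same way: they see only what you send.
    return f"(model saw {len(messages)} messages this turn)"

history: list[dict] = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model "remembers" earlier turns only because we re-send them.
    # Clear `history` and it starts completely fresh.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hi, my name is Sam."))  # model saw 1 message
print(chat("What's my name?"))      # model saw 3 messages: that's the "memory"
```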
There Are No Dumb Questions
"When will we get AGI?"
Nobody knows. Estimates range from 5 years to never. The honest answer is: we don't have a clear path to it yet. The AI you'll work with in your career is narrow AI — and it's powerful enough to transform every industry without being "general."
"Should I be scared of AI taking my job?"
A better question: "Which parts of my job can AI do, and which parts become more valuable?" AI is great at repetitive pattern-matching tasks. It's bad at judgment, relationships, creative problem-solving, and anything requiring real-world context. The people who thrive will be the ones who use AI as a power tool, not the ones who compete with it.
What AI CAN and CAN'T do
This is the most practical thing in this entire module. Tape this to your wall.
| AI is GREAT at | AI is TERRIBLE at |
|---|---|
| Finding patterns in huge datasets | Understanding why something matters |
| Classifying things into categories | Common sense ("Will this chair fit through that door?") |
| Generating text, images, and code | Knowing when it's wrong |
| Processing information 24/7 without fatigue | Empathy, ethics, moral judgment |
| Translating between languages | Tasks it wasn't trained on |
| Spotting anomalies (fraud, defects) | Anything requiring a physical body |
The magic trick: it looks like thinking, but it's pattern matching
Before you read this section: When you ask ChatGPT a question, what do you think it's actually doing? Circle your best guess:
A) Looking up the answer on the internet
B) Reasoning through the problem step by step
C) Matching patterns from everything it's been trained on
D) Running a search through a database of facts
Write down your answer before scrolling. We'll come back to it.
Here's the most important mental model in this entire course.
AI is a very well-read parrot.
Imagine a parrot that has read every book, every website, and every conversation ever written in human history. When you ask it a question, it doesn't understand your question. It recognises the pattern and repeats the kind of answer it's seen thousands of times before.
Most of the time, that answer is shockingly good — because the parrot has read so much that it can pattern-match almost anything. But sometimes the parrot says something completely wrong with total confidence — because the pattern it matched was wrong, or because no pattern existed and it just... improvised.
This is why AI:
- Can write a beautiful poem (it's seen millions of poems)
- Can solve a math problem (it's seen millions of solutions)
- Can confidently give you a wrong answer (it's matching a pattern, not checking facts)
- Can't tell you when it's making something up (it has no concept of "truth")
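You can build a (very small) version of this parrot yourself. The sketch below is a toy bigram model: it reads a three-sentence corpus, invented for illustration, then always emits the most common word it has seen follow the current one. Real LLMs are vastly more sophisticated, but the core move, predicting the next token from patterns, is the same:

```python
# A toy "well-read parrot": predict the next word purely from
# which words followed which in its (tiny) reading.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, which words followed it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def parrot(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        # Pure pattern matching: always emit the most common continuation.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(parrot("the"))  # e.g. "the cat sat on the cat sat"
```

Run it and the parrot produces fluent-looking nonsense like "the cat sat on the cat sat". It has no idea that cats can't sit on cats; it only knows which words tend to follow which. Scale that up to a few trillion words of reading and you have the intuition behind the confident wrong answers above.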
Back to Deep Blue
Remember that 1997 headline — "Machine beats man!"? You can read it differently now.
Deep Blue didn't beat Kasparov because it was intelligent. It beat him because it was fast at a very narrow kind of pattern matching: evaluating chess positions. It was the world's most expensive, most single-minded chess calculator — brilliant at one thing, useless at everything else.
The same is true of every AI system you use today. When ChatGPT writes your email, it's not thinking. When image recognition labels your photo, it's not seeing. They're finding patterns in data, billions of times per second, at a scale no human brain can match.
That's both the magic and the limitation — and now you understand both.
Key takeaways
- AI is a family, not a single thing. Automation, ML, deep learning, and LLMs are nested layers — each more capable and more complex than the last.
- All AI you use today is narrow AI — brilliant at specific tasks, useless at everything else. General AI doesn't exist yet.
- AI is pattern matching, not thinking. It's a very well-read parrot: impressive most of the time, dangerously wrong when the pattern doesn't fit.
- You can identify whether something is AI by asking one question: did it learn from data, or follow hand-written rules?
- You can set realistic expectations for AI by remembering what it's great at (patterns, scale, speed) and what it can't do (judgment, truth, common sense).
Knowledge Check
1. A thermostat turns on the heater when the temperature drops below 68°F. Is this artificial intelligence?
2. What is the key difference between machine learning and traditional rule-based automation?
3. Why are LLMs like ChatGPT considered narrow AI rather than general AI, even though they can perform many different tasks?
4. An AI system confidently states that a fictional research paper is real, complete with a fake author name and journal. What best explains this behaviour?