Module 8

AI for Research & Analysis

AI research tools like Perplexity, Elicit, Consensus, and NotebookLM are changing how professionals find, synthesize, and verify information. Here's how to use them — and when not to trust them.

The analyst who read 200 papers in a day

Sofia is a healthcare policy analyst. Her boss asked her to prepare a briefing on the effectiveness of telemedicine for chronic disease management. She needed to review the relevant academic literature, identify consensus findings, flag contradictions, and summarize everything into a 3-page memo — by Friday.

In the old world, this meant two weeks: searching PubMed, reading abstracts, downloading promising papers, reading full texts, taking notes, organizing themes, writing the memo. Sofia had done it before. It was thorough. It was also soul-crushing.

This time, she opened Elicit. She typed: "What is the evidence on telemedicine effectiveness for chronic disease management?" Elicit searched the academic literature and returned 200 relevant papers, ranked by relevance, with AI-generated summaries of each paper's key findings, sample sizes, and conclusions. In 20 minutes, Sofia had a structured overview of what 200 studies found.

She then fed the top 30 papers into NotebookLM, which synthesized them into themes: what telemedicine works for (diabetes monitoring, mental health follow-ups), where evidence is mixed (post-surgical care), and what gaps remain. She used Consensus to check specific claims ("Does telemedicine reduce hospital readmissions?") and got a percentage breakdown: 78% of studies found a reduction, with effect sizes ranging from 8-24%.

The memo was done by Wednesday. Her boss asked: "How did you review this many papers so fast?" Sofia said: "I used AI to read. I used my brain to think."

By the end of this module, you'll know the six major AI research tools and when to use each, a multi-tool research workflow that compresses weeks of work into hours, and how to catch the ways AI research tools get it wrong.

3M+ academic papers published per year globally (estimated — multiple sources report 2-5M annually; exact count varies by inclusion criteria)

20 min to review 200 papers with Elicit (vs. 2+ weeks manually)

200M+ papers indexed by Semantic Scholar (the database behind many AI research tools — as of 2024)

The AI research toolkit

These tools serve different research needs. Understanding which to use when is the key skill.

| Tool | What it does | Best for | Cost |
|---|---|---|---|
| Perplexity | AI-powered search with cited sources | Quick factual research, market analysis, current events | Free tier / $20/mo (Pro) |
| Elicit | Academic paper search and extraction | Systematic literature reviews, evidence synthesis | Free tier / $10/mo (Plus) |
| Consensus | Answers questions using peer-reviewed research | Checking scientific consensus on specific claims | Free tier / $7/mo (Premium) |
| NotebookLM | AI research assistant for your documents | Synthesizing uploaded papers, reports, or notes | Free (Google) |
| Semantic Scholar | Academic search engine with AI features | Finding relevant papers and tracking citations | Free |
| Scite | Citation analysis (supporting vs. contradicting) | Understanding how papers have been received | Free tier / $20/mo |

Traditional research

  • Search databases with keywords
  • Read abstracts one by one
  • Manually track themes across papers
  • Write notes in a separate document
  • Weeks for a thorough literature review

AI-assisted research

  • Ask a question in natural language
  • AI summarizes key findings across papers
  • Themes extracted and synthesized automatically
  • Interactive Q&A with your research corpus
  • Hours for a thorough literature review

Perplexity: the Google alternative for research

Perplexity is the tool most people should start with. It's like Google search, but instead of giving you a list of links, it reads the sources and gives you a synthesized answer with citations.

How it differs from ChatGPT for research:

| | ChatGPT | Perplexity |
|---|---|---|
| Sources | Trained on data (no real-time sources by default; web search available on some plans) | Searches the web in real time and cites every source |
| Citations | Doesn't cite sources unless asked (and may fabricate them) | Every claim has a numbered citation you can verify |
| Currency | Knowledge cutoff date (check current model specs) | Live web access — current to today |
| Best for | Analysis, writing, coding, reasoning | Factual research, market data, current events |

Strong Perplexity prompts:

"What is the current market size of the AI coding assistant
market? Include growth rate and key players. Cite your sources."

"Compare the effectiveness of cognitive behavioral therapy vs.
medication for adult ADHD. What do the meta-analyses say?"

"What are the regulatory requirements for AI in healthcare
in the EU as of 2026?"

There Are No Dumb Questions

"If Perplexity cites its sources, can I trust it completely?"

No. Perplexity can misinterpret sources, take quotes out of context, or cite a source that doesn't actually support the claim. Always click through to the original source for anything important. Think of Perplexity as a very fast research assistant who finds sources for you — but you still need to read the key ones yourself.

"Is Perplexity replacing Google?"

For research queries, many professionals prefer it. For navigation ("where is the nearest coffee shop"), product searches, and local information, Google is still better. Perplexity is strongest when you need a synthesized answer to a complex question, not when you need a specific website.


Elicit and Consensus: academic research at speed

Elicit — your literature review assistant

Elicit specializes in academic research. Ask a research question, and it:

  • Searches the Semantic Scholar database (200M+ papers) for relevant studies
  • Extracts key data from each paper: sample size, methods, findings, limitations
  • Ranks papers by relevance and citation count
  • Summarizes findings across multiple papers into themes
  • Creates tables comparing study results side by side

Example Elicit query: "Does intermittent fasting improve metabolic health markers in adults with type 2 diabetes?"

Elicit returns a table: study name, sample size, duration, key outcome, effect size, statistical significance — for 50+ relevant studies. What would take a researcher days to compile manually takes minutes.
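If you want to see the raw material Elicit works from, Semantic Scholar exposes a free public Graph API you can query directly. A minimal sketch, assuming the documented `/graph/v1/paper/search` endpoint and its `query`, `fields`, and `limit` parameters (the search question and the `build_search_url` helper name are illustrative):

```python
# Sketch: querying the Semantic Scholar Graph API directly -- the same
# 200M+ paper database behind Elicit. No API key needed for light use.
import urllib.parse

BASE_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(question, fields=("title", "year", "citationCount"), limit=20):
    """Build a paper-search URL; fetch it with urllib.request.urlopen()."""
    params = {
        "query": question,              # natural-language search string
        "fields": ",".join(fields),     # which metadata to return per paper
        "limit": str(limit),            # results per page
    }
    return BASE_URL + "?" + urllib.parse.urlencode(params)

url = build_search_url("telemedicine effectiveness chronic disease management")
print(url)
```

The JSON response contains a `data` list of papers with the requested fields, which you can sort by `citationCount` or dump into a spreadsheet — a rough, do-it-yourself version of Elicit's ranked table.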

Consensus — scientific agreement in seconds

Consensus answers the question: "What does the research say about X?" It searches peer-reviewed literature and tells you the balance of evidence.

Example: "Does exercise improve symptoms of depression?" Consensus answer: "92% of studies found that exercise significantly improves depression symptoms, with moderate effect sizes (d = 0.5-0.8). The strongest evidence supports aerobic exercise, 3+ times per week."

This is powerful for quickly checking claims, settling debates with evidence, and grounding your work in actual research rather than opinions.

NotebookLM: synthesize YOUR documents

NotebookLM (by Google) takes a different approach. Instead of searching the web or academic databases, it works with documents you upload. Upload 10 research papers, a set of meeting notes, or a collection of reports — and then ask questions about them.

Key features:

  • Upload PDFs, Google Docs, websites, YouTube videos, or text
  • Ask questions and get answers grounded in your uploaded sources
  • Every answer includes citations pointing to the specific source and passage
  • Generate "Audio Overviews" — podcast-style conversations about your documents
  • Create study guides, FAQs, or briefing documents from your sources

Best use cases:

  • Synthesizing findings across 10-20 research papers
  • Preparing for a meeting by uploading all relevant documents and asking questions
  • Studying a complex topic by uploading textbook chapters and having a Q&A session
  • Creating a briefing document from a collection of internal reports

🔑 The research workflow that works
The most effective AI research workflow combines tools: Perplexity for initial exploration and current data, Elicit for deep academic evidence, Consensus for checking specific claims, and NotebookLM for synthesizing your collected sources into a final output. Each tool has a specific role — none replaces the others.

Just as the no-code tools module showed how stacking tools multiplies impact, the same principle applies here — but with a specific sequence: explore, gather evidence, verify, synthesize.

Plan a research project

Pick a real question you need answered for work or study. Plan an AI-assisted research workflow:

1. **Your research question:** ___
2. **Step 1 — Exploration:** What would you search in Perplexity? Write the query.
3. **Step 2 — Evidence:** What specific academic question would you ask Elicit? Write it.
4. **Step 3 — Consensus check:** What specific claim would you verify with Consensus?
5. **Step 4 — Synthesis:** What documents would you upload to NotebookLM? What question would you ask?
6. **Final output:** What does your deliverable look like? (memo, report, presentation, recommendation?)

_This workflow takes 2-3 hours for a thorough research project that would have taken 2-3 weeks manually._

When AI research tools get it wrong

AI research tools are powerful but not infallible. Here's what to watch for.

| Failure mode | What happens | How to catch it |
|---|---|---|
| Citation fabrication | AI cites a paper that doesn't exist | Click through to verify the actual source |
| Misinterpretation | AI states the opposite of what a study found | Read the original abstract or conclusion |
| Cherry-picking | AI presents evidence from one side of a debate | Ask for contradicting evidence explicitly |
| Outdated evidence | AI cites superseded studies | Check publication dates, look for newer meta-analyses |
| Conflation | AI merges findings from different studies into a false composite | Verify specific claims against individual sources |

⚠️ The verification imperative
The ease of AI research creates a new risk: false confidence. When a tool gives you a clean, well-cited answer in 30 seconds, it feels authoritative. But speed is not accuracy. For any claim that matters — in a report, presentation, or decision — click through to the original source. The 5 minutes you spend verifying can save you from publishing something wrong.

There Are No Dumb Questions

"Can I cite Perplexity or Elicit in an academic paper?"

No. Cite the original sources these tools surface. Perplexity and Elicit are research assistants that help you find sources — they are not sources themselves. In an academic context, always trace claims back to the original peer-reviewed paper.

"How do I know if a source is reliable?"

Check: Is it peer-reviewed? What journal published it? How many times has it been cited? Is it recent? Does it have a large enough sample size? Tools like Scite can show you whether other papers support or contradict a study's findings.

Back to Sofia's impossible deadline

Remember Sofia, the healthcare policy analyst who needed to review the telemedicine literature by Friday? The old Sofia would have spent two weeks reading abstracts one by one. The new Sofia used Elicit to survey 200 papers, NotebookLM to synthesize the top 30, and Consensus to verify the key claims. The memo was done by Wednesday — but the important part is what she did with the extra two days. She used them to think: to challenge the consensus findings, to identify the gaps no paper addressed, and to write recommendations that reflected her expertise, not just the AI's summary. That's the pattern across this entire track. AI handles the reading, the formatting, the searching, the generating. You handle the thinking.

Key takeaways

  • AI research tools (Perplexity, Elicit, Consensus, NotebookLM) dramatically accelerate research — from weeks to hours for thorough literature reviews
  • Perplexity is best for general research with cited sources; Elicit for academic literature reviews; Consensus for checking scientific agreement; NotebookLM for synthesizing your own documents
  • The optimal workflow combines multiple tools: explore with Perplexity, gather evidence with Elicit, verify claims with Consensus, synthesize with NotebookLM
  • Always verify AI-sourced claims by clicking through to original sources — citation fabrication and misinterpretation are real risks
  • AI research tools are research assistants, not sources — always cite the original papers, not the AI tool
  • Speed creates a new risk: false confidence. The easier it is to get an answer, the more important it is to verify it

Where to go from here

You've now covered the full AI tools landscape: from mastering a single AI assistant (Claude) to no-code tools, spreadsheets, images, coding, presentations, meetings, and research. The thread connecting all eight modules is the same: the value isn't in the tool — it's in knowing which tool to reach for, how to prompt it well, and when to apply your own judgment. Tools will change. That skill won't. If you want to go deeper on any specific area, check out the Claude Foundations track for advanced Claude techniques, the Data Skills track for deeper data analysis, or the Graphic Design track for visual workflows beyond AI generation.


Knowledge Check

1. What is the key difference between Perplexity and ChatGPT for research?

2. When using AI research tools, why should you always click through to the original sources?

3. Which AI research tool is best for answering "What does the scientific evidence say about X?"

4. What is the recommended AI research workflow for a thorough project?

Want to go deeper?

🧠 AI & Machine Learning Master Class

Understand AI, use it in your job, and build AI-powered products.

View the full program