AI for Research & Analysis
AI research tools like Perplexity, Elicit, Consensus, and NotebookLM are changing how professionals find, synthesize, and verify information. Here's how to use them — and when not to trust them.
The analyst who read 200 papers in a day
Sofia is a healthcare policy analyst. Her boss asked her to prepare a briefing on the effectiveness of telemedicine for chronic disease management. She needed to review the relevant academic literature, identify consensus findings, flag contradictions, and summarize everything into a 3-page memo — by Friday.
In the old world, this meant two weeks: searching PubMed, reading abstracts, downloading promising papers, reading full texts, taking notes, organizing themes, writing the memo. Sofia had done it before. It was thorough. It was also soul-crushing.
This time, she opened Elicit. She typed: "What is the evidence on telemedicine effectiveness for chronic disease management?" Elicit searched the academic literature and returned 200 relevant papers, ranked by relevance, with AI-generated summaries of each paper's key findings, sample sizes, and conclusions. In 20 minutes, Sofia had a structured overview of what 200 studies found.
She then fed the top 30 papers into NotebookLM, which synthesized them into themes: what telemedicine works for (diabetes monitoring, mental health follow-ups), where evidence is mixed (post-surgical care), and what gaps remain. She used Consensus to check specific claims ("Does telemedicine reduce hospital readmissions?") and got a percentage breakdown: 78% of studies found a reduction, with effect sizes ranging from 8-24%.
The memo was done by Wednesday. Her boss asked: "How did you review this many papers so fast?" Sofia said: "I used AI to read. I used my brain to think."
The AI research toolkit
These tools serve different research needs. Understanding which to use when is the key skill.
| Tool | What it does | Best for | Cost |
|---|---|---|---|
| Perplexity | AI-powered search with cited sources | Quick factual research, market analysis, current events | Free tier / $20/mo (Pro) |
| Elicit | Academic paper search and extraction | Systematic literature reviews, evidence synthesis | Free tier / $10/mo (Plus) |
| Consensus | Answers questions using peer-reviewed research | Checking scientific consensus on specific claims | Free tier / $7/mo (Premium) |
| NotebookLM | AI research assistant for your documents | Synthesizing uploaded papers, reports, or notes | Free (Google) |
| Semantic Scholar | Academic search engine with AI features | Finding relevant papers and tracking citations | Free |
| Scite | Citation analysis (supporting vs. contradicting) | Understanding how papers have been received | Free tier / $20/mo |
✗ Without AI
- Search databases with keywords
- Read abstracts one by one
- Manually track themes across papers
- Write notes in a separate document
- Weeks for a thorough literature review

✓ With AI
- Ask a question in natural language
- AI summarizes key findings across papers
- Themes extracted and synthesized automatically
- Interactive Q&A with your research corpus
- Hours for a thorough literature review
Perplexity: the Google alternative for research
Perplexity is the tool most people should start with. It's like Google search, but instead of giving you a list of links, it reads the sources and gives you a synthesized answer with citations.
How it differs from ChatGPT for research:
| | ChatGPT | Perplexity |
|---|---|---|
| Sources | Trained on data (no real-time sources by default; web search available on some plans) | Searches the web in real time and cites every source |
| Citations | Doesn't cite sources unless asked (and may fabricate them) | Every claim has a numbered citation you can verify |
| Currency | Knowledge cutoff date (check current model specs) | Live web access — current to today |
| Best for | Analysis, writing, coding, reasoning | Factual research, market data, current events |
Strong Perplexity prompts:
- "What is the current market size of the AI coding assistant market? Include growth rate and key players. Cite your sources."
- "Compare the effectiveness of cognitive behavioral therapy vs. medication for adult ADHD. What do the meta-analyses say?"
- "What are the regulatory requirements for AI in healthcare in the EU as of 2026?"
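Perplexity also exposes a developer API using OpenAI-compatible chat completions, so you can script research queries instead of typing them into the web app. A minimal sketch, assuming the `sonar` model name and a top-level `citations` list of URLs in the response body; both are assumptions to verify against the current API docs:

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # OpenAI-compatible endpoint

def build_request(question: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completion request asking Perplexity to cite sources."""
    payload = {
        "model": "sonar",  # assumption: current search-model name; check the docs
        "messages": [{"role": "user", "content": question + " Cite your sources."}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

def extract_answer(response: dict) -> tuple[str, list[str]]:
    """Pull the answer text and the cited URLs out of a response body."""
    text = response["choices"][0]["message"]["content"]
    citations = response.get("citations", [])  # assumption: field name per API docs
    return text, citations
```

The returned citation URLs are the ones you click through to verify, exactly as you would in the web interface.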
There Are No Dumb Questions
"If Perplexity cites its sources, can I trust it completely?"
No. Perplexity can misinterpret sources, take quotes out of context, or cite a source that doesn't actually support the claim. Always click through to the original source for anything important. Think of Perplexity as a very fast research assistant who finds sources for you — but you still need to read the key ones yourself.
"Is Perplexity replacing Google?"
For research queries, many professionals prefer it. For navigation ("where is the nearest coffee shop"), product searches, and local information, Google is still better. Perplexity is strongest when you need a synthesized answer to a complex question, not when you need a specific website.
Elicit and Consensus: academic research at speed
Elicit — your literature review assistant
Elicit specializes in academic research. Ask a research question, and it:
1. Searches the Semantic Scholar database (200M+ papers) for relevant studies
2. Extracts key data from each paper: sample size, methods, findings, limitations
3. Ranks papers by relevance and citation count
4. Summarizes findings across multiple papers into themes
5. Creates tables comparing study results side by side
Example Elicit query: "Does intermittent fasting improve metabolic health markers in adults with type 2 diabetes?"
Elicit returns a table: study name, sample size, duration, key outcome, effect size, statistical significance — for 50+ relevant studies. What would take a researcher days to compile manually takes minutes.
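Since Elicit builds on the Semantic Scholar corpus, you can run a similar (if far less polished) search yourself through Semantic Scholar's free Graph API. A rough sketch; the endpoint and field names match the public docs at the time of writing, but verify before relying on them:

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_url(query: str, limit: int = 20) -> str:
    """Build a paper-search URL requesting title, year, and citation count."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,citationCount",
    })
    return f"{SEARCH_URL}?{params}"

def rank_by_citations(response: dict) -> list[dict]:
    """Sort the returned papers by citation count, most-cited first."""
    papers = response.get("data", [])
    return sorted(papers, key=lambda p: p.get("citationCount", 0), reverse=True)

def fetch(query: str) -> list[dict]:
    """Run the search over the network (requires internet access)."""
    with urllib.request.urlopen(search_url(query)) as resp:
        return rank_by_citations(json.load(resp))
```

This gives you raw paper metadata; what Elicit adds on top is the per-paper extraction of sample sizes, methods, and findings.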
Consensus — scientific agreement in seconds
Consensus answers the question: "What does the research say about X?" It searches peer-reviewed literature and tells you the balance of evidence.
Example: "Does exercise improve symptoms of depression?" Consensus answer: "92% of studies found that exercise significantly improves depression symptoms, with moderate effect sizes (d = 0.5-0.8). The strongest evidence supports aerobic exercise, 3+ times per week."
This is powerful for quickly checking claims, settling debates with evidence, and grounding your work in actual research rather than opinions.
NotebookLM: synthesize YOUR documents
NotebookLM (by Google) takes a different approach. Instead of searching the web or academic databases, it works with documents you upload. Upload 10 research papers, a set of meeting notes, or a collection of reports — and then ask questions about them.
Key features:
- Upload PDFs, Google Docs, websites, YouTube videos, or text
- Ask questions and get answers grounded in your uploaded sources
- Every answer includes citations pointing to the specific source and passage
- Generate "Audio Overviews" — podcast-style conversations about your documents
- Create study guides, FAQs, or briefing documents from your sources
Best use cases:
- Synthesizing findings across 10-20 research papers
- Preparing for a meeting by uploading all relevant documents and asking questions
- Studying a complex topic by uploading textbook chapters and having a Q&A session
- Creating a briefing document from a collection of internal reports
When AI research tools get it wrong
AI research tools are powerful but not infallible. Here's what to watch for.
| Failure mode | What happens | How to catch it |
|---|---|---|
| Citation fabrication | AI cites a paper that doesn't exist | Click through to verify the actual source |
| Misinterpretation | AI states the opposite of what a study found | Read the original abstract or conclusion |
| Cherry-picking | AI presents evidence from one side of a debate | Ask for contradicting evidence explicitly |
| Outdated evidence | AI cites superseded studies | Check publication dates, look for newer meta-analyses |
| Conflation | AI merges findings from different studies into a false composite | Verify specific claims against individual sources |
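The first two failure modes are mechanically checkable: a DOI that resolves at Crossref's free REST API proves the paper exists, and comparing the cited title against the registered one catches mislabeled citations. The Crossref endpoint below is real; the loose title-matching heuristic is our own illustration, not a standard:

```python
import json
import re
import urllib.error
import urllib.parse
import urllib.request

def titles_match(cited: str, registered: str) -> bool:
    """Loose comparison: lowercase, strip punctuation, collapse whitespace."""
    norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).split()
    return norm(cited) == norm(registered)

def verify_doi(doi: str, cited_title: str) -> bool:
    """Check that a DOI is registered at Crossref and its title matches."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url) as resp:
            record = json.load(resp)
    except urllib.error.HTTPError:
        return False  # DOI not registered: possibly fabricated
    registered = record["message"]["title"][0]
    return titles_match(cited_title, registered)
```

A failed DOI lookup doesn't always mean fabrication (preprints and books may lack DOIs), but it tells you exactly which citations need a manual check.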
There Are No Dumb Questions
"Can I cite Perplexity or Elicit in an academic paper?"
No. Cite the original sources these tools surface. Perplexity and Elicit are research assistants that help you find sources — they are not sources themselves. In an academic context, always trace claims back to the original peer-reviewed paper.
"How do I know if a source is reliable?"
Check: Is it peer-reviewed? What journal published it? How many times has it been cited? Is it recent? Does it have a large enough sample size? Tools like Scite can show you whether other papers support or contradict a study's findings.
Key takeaways
- AI research tools (Perplexity, Elicit, Consensus, NotebookLM) dramatically accelerate research — from weeks to hours for thorough literature reviews
- Perplexity is best for general research with cited sources; Elicit for academic literature reviews; Consensus for checking scientific agreement; NotebookLM for synthesizing your own documents
- The optimal workflow combines multiple tools: explore with Perplexity, gather evidence with Elicit, verify claims with Consensus, synthesize with NotebookLM
- Always verify AI-sourced claims by clicking through to original sources — citation fabrication and misinterpretation are real risks
- AI research tools are research assistants, not sources — always cite the original papers, not the AI tool
- Speed creates a new risk: false confidence. The easier it is to get an answer, the more important it is to verify it
Knowledge Check
1. What is the key difference between Perplexity and ChatGPT for research?
2. When using AI research tools, why should you always click through to the original sources?
3. Which AI research tool is best for answering "What does the scientific evidence say about X?"
4. What is the recommended AI research workflow for a thorough project?