What is your opinion about this statement?: "Google Gemini and ChatGPT don't know anything. They just search other places for what you are searching for to give you a list. Because they don't know anything, they can't tell if the information they find is correct or not."
Reply from ChatGPT:
That statement is partly true, partly wrong, and it mixes together several different concepts (search, knowledge, understanding, and verification) into a single oversimplified claim. Let’s untangle it carefully.
- What the statement gets right
✔️ AI models do not “know” things the way humans do
They don’t have beliefs, awareness, or lived experience.
They don’t understand truth in a philosophical sense.
They don’t have an internal concept of “this is definitely correct” vs “this feels wrong” the way humans do.
So if by “know” we mean conscious understanding or epistemic certainty, then yes: they don’t know.
✔️ They can’t guarantee correctness
AI models do not have built-in truth detectors.
They can confidently produce incorrect statements.
They can fail to notice contradictions unless prompted explicitly.
This is a real limitation.
- What the statement gets wrong
❌ “They just search other places”
This is false for models like ChatGPT and Gemini (by default).
They do not browse the internet or search databases in real time unless explicitly connected to a search tool.
They generate responses from internalized statistical patterns learned during training, not by fetching sources on demand.
Think of them as compressed representations of vast amounts of text, not a live search engine (a minimal sketch after the list below makes this concrete).
If they were “just searching,” they wouldn’t:
answer instantly, with no lookup step at all
generate novel explanations
combine ideas in ways not found verbatim anywhere
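To make that concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small gpt2 checkpoint (neither is mentioned above; they stand in for any language model): once the weights are on disk, text is produced purely from the model's parameters, with no search step anywhere in the loop.

```python
# Minimal sketch: generating text from a model's learned weights alone.
# Assumes the Hugging Face `transformers` library and the small `gpt2`
# checkpoint; after the one-time weight download, inference needs no network.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# generate() repeatedly predicts the next token from the model's parameters;
# nothing here queries a search engine or retrieves documents.
output_ids = model.generate(
    **inputs, max_new_tokens=10, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```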
❌ “They give you a list of what they find”
They don’t retrieve documents unless explicitly instructed to do so with tools.
Instead, they:
predict the most likely next token given context
synthesize explanations
generalize beyond any single source
This is fundamentally different from search.
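As a rough illustration of "predict the most likely next token given context", the sketch below (same assumptions: torch, the transformers library, and the gpt2 checkpoint, none of which appear in the original reply) prints the model's top candidate continuations with their probabilities. The numbers come from the learned distribution over text, not from any retrieved document.

```python
# Sketch: inspect the learned next-token distribution for a prompt.
# Assumes `torch`, `transformers`, and the `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Water boils at a temperature of", return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(next_token_logits, dim=-1)

# The top entries are the continuations the model's training made most likely:
# a statistical guess about text, not a verified fact lookup.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={p.item():.3f}")
```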
- The real problem (often misunderstood)
🔹 They don’t verify by default
This is the real issue people are sensing, but it is usually misdescribed.
LLMs:
don’t automatically check answers against reality
don’t run experiments
don’t consult external ground truth unless tools are used
So they can sound confident while being wrong — which feels like “they don’t know anything.”
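For contrast, here is a purely hypothetical sketch of the "use a tool to verify" pattern this section alludes to. Nothing in it is a real ChatGPT or Gemini API; the tiny in-memory corpus and the check_against_sources helper are invented stand-ins for real retrieval and entailment checking.

```python
# Hypothetical sketch: a draft answer is only trusted after an explicit,
# external verification step. `TRUSTED_SNIPPETS` and `check_against_sources`
# are invented stand-ins for a real retrieval + fact-checking pipeline.

TRUSTED_SNIPPETS = [
    "Paris is the capital of France.",
    "Water boils at 100 degrees Celsius at standard atmospheric pressure.",
]

def check_against_sources(claim: str, sources: list[str]) -> bool:
    """Crude stand-in for verification: do the claim's key terms all appear
    in some trusted snippet? A real system would use retrieval plus an
    entailment check, not substring matching."""
    terms = [w.strip(".,").lower() for w in claim.split() if len(w) > 3]
    return any(all(t in s.lower() for t in terms) for s in sources)

draft_answer = "Paris is the capital of France."  # imagine this came from an LLM
if check_against_sources(draft_answer, TRUSTED_SNIPPETS):
    print("Supported by an external source:", draft_answer)
else:
    print("Unverified; treat it as a fluent guess:", draft_answer)
```

The only point of the sketch is that verification is a separate, explicit step; by default, the generation loop shown earlier never performs it.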
- A more accurate statement would be
“ChatGPT and Gemini don’t have understanding or guaranteed truth. They generate answers based on learned patterns rather than verifying facts, so correctness depends on context, prompting, and external validation.”
That’s accurate. The original quote is not.
- Why this confusion happens
Humans intuitively expect:
confidence → knowledge
fluency → understanding
explanation → reasoning
LLMs break this intuition:
they are fluent without understanding
coherent without certainty
helpful without epistemic grounding
That mismatch causes frustration and over-dismissal.
- Bottom line
| Claim | Verdict |
| --- | --- |
| “They don’t know like humans” | ✅ True |
| “They just search the web” | ❌ False |
| “They can’t always tell correctness” | ✅ True |
| “They’re useless because of that” | ❌ False |