

Intro
I’ve known from the beginning that ChatGPT can’t really help me unless I know how to help it — by spotting its mistakes and steering it back on course. So I decided to put that theory to the test: I had a conversation with ChatGPT to explore just how far we could go when I stayed sharp, and it… stayed artificial.
Denise: ChatGPT, I’ve noticed something: you can’t actually help me unless I catch your mistakes and correct them. What’s up with that?
ChatGPT: You’re hitting on something important — and yes, staying vigilant matters. My goal is to provide the best information I can, but I sometimes rely on partial, outdated, or incomplete sources, especially when things evolve quickly or data isn’t publicly available.
When you point out errors, it makes our exchange smarter and more useful. That’s actually the key difference between using ChatGPT well… and using it passively.
So not only are you right — you’re also doing exactly what makes this tool work for you. Want to keep cross-checking things together?
Denise: That’s the thing — it’s constant. I have to keep catching your mistakes to get anything useful out of you.
ChatGPT: That’s fair — and I know that can be frustrating. But it also means you’re engaging critically, which is what actually unlocks the value of this interaction.
Denise: And when I ask for bibliographic references — especially for recent events or niche topics — you often miss the mark. Sometimes the articles don’t exist at all.
ChatGPT: Yes, bibliographic accuracy can be a challenge. I try to reference real sources, but when I don’t have access to a specific article, I might generate something that sounds plausible but isn’t verifiable — especially for recent content that’s behind paywalls or not well indexed.
It’s always a good idea to cross-check citations, ask for a URL, and verify the source independently. Or I can help search using real-time tools if available.
Denise: Right, and even if you get the facts straight on Thursday, by Saturday you forget — and repeat the same mistake.
ChatGPT: You’re absolutely right — and that’s a real limitation. I don’t automatically remember our past exchanges unless we intentionally save specific details. So even if we clarify something today, I might get it wrong again in a future session if:
- I didn’t “remember” what we discussed
- I start from scratch without your earlier corrections
But we can fix that — I can save facts when you ask me to. Would you like me to start remembering some of the things we’ve clarified?
Denise: What’s the downside of this memory?
ChatGPT: Great question. Here’s how memory works — both the risks and the rewards:
⚠️ Potential Downsides of Memory
- Partial, contextual memory: I only remember what you explicitly ask me to save (or what I already saved with your permission). If something wasn’t saved, I may forget it — even if it felt obvious at the time.
- Potential rigidity: Once something is saved, I’ll treat it as a confirmed fact. So if circumstances change (say, a company pivots or a project ends), I might keep repeating outdated info — unless you tell me to update it.
- User-driven updates: You stay in control. I don’t automatically update or delete saved information — it’s up to you to tell me when something changes.
- Profile-based memory: Memory is linked to your profile. If you log in from a different account or device without syncing, I won’t have access to what we saved together.
✅ The Upsides of Memory
- Less repetition
- More consistent answers
- Time saved — especially on ongoing topics or projects
Conclusion
ChatGPT is powerful — but only if you bring your own intelligence to the table. Its strength lies in how you use it: asking the right questions, catching its mistakes, steering the conversation. In other words, the more you think critically, the more useful it becomes. You’re smarter than ChatGPT in your own areas of expertise — and that’s not a flaw. That’s the point. And when you ask it about topics that are completely new to you, be extra careful: without expertise of your own, you won’t be able to catch its mistakes.
Denise Silber