Constructive Use of AI in Clinical Reasoning: Transitioning to Residency with Reflection and Safety

1. Why AI Matters

Artificial intelligence (AI), including large language models (LLMs), is becoming a daily companion in medicine — from documentation to diagnostic reasoning. Used thoughtfully, AI can enhance learning and efficiency. Used uncritically, it risks deskilling, never-skilling, and mis-skilling.

| Opportunity | Risk |
| --- | --- |
| Off-loads routine cognitive work | Overreliance → weaker independent reasoning |
| Enables simulation and feedback | Unverified outputs → biased or unsafe care |
| Expands access to medical knowledge | “Black box” reasoning → misplaced trust |

2. Framework 1 — DEFT–AI

A structured approach for thinking critically during AI interactions.

| Step | Purpose | Sample Question |
| --- | --- | --- |
| Diagnosis/Discussion | Clarify reasoning and AI use | “What did you ask the AI, and why?” |
| Evidence | Probe reasoning and verification | “What supports or contradicts this answer?” |
| Feedback | Reflect on performance | “What could you do differently next time?” |
| Teaching | Identify key learning | “What principle or rule emerged?” |
| AI Recommendation | Guide the next safe step | “When is it appropriate to use this tool again?” |

3. Framework 2 — Cyborg vs. Centaur

Adaptive patterns of human–AI collaboration.

| Mode | Description | Best Use |
| --- | --- | --- |
| Cyborg | Human and AI co-create iteratively | Routine or low-risk tasks (notes, drafts) |
| Centaur | Human leads, delegates tasks to AI, verifies results | High-risk, diagnostic, or uncertain tasks |

Skilled clinicians shift flexibly between these modes depending on risk and task complexity.

4. Framework 3 — SAFE–AI

A quick checklist for responsible AI engagement.

S – Specify the task clearly.
A – Ask for reasoning (“Explain step by step”).
F – Find supporting evidence.
E – Evaluate output critically.
A – Apply clinical judgment.
I – Integrate reflectively—what did you learn?

5. “Verify, Then Trust”

  • Treat AI output like a preliminary report — confirm before acting.
  • Cross-check against guidelines, literature, and senior input.
  • Verification protects your patients and your reasoning skills.

6. Reflection Prompt

Think of one clinical or learning task you currently use AI for.

  • Which DEFT–AI step could improve your approach?
  • Is it better handled in Centaur or Cyborg mode?
  • How will you verify before trusting next time?

Remember:
AI doesn’t replace your reasoning — it reveals it.
Use it to reflect, refine, and strengthen your clinical judgment.


See https://www.nejm.org/doi/10.1056/NEJMra2503232 for more.
