By now, you will hopefully have tested the AI waters with non-clinical tasks in Level 1 of our Roadmap and transitioned to Level 2 with a paid subscription trial. If so, you’ve likely seen both the benefits and the quirks of generative AI. In this final post, let’s tidy up a few important loose ends as we set off on our AI journey…
Why It Matters for General Practice
I was consulting with the mother of a ten-year-old boy recently. At the end of the consultation, the boy interrupted me to ask why I had ChatGPT on my computer screen.
It was a reminder that, regardless of what we think of AI in General Practice as individuals, the technology is starting to creep into the consulting room whether we want it or not.
Patients are starting to come in with AI-informed questions, ideas, concerns and expectations.
This is challenging! Studies have already hinted that, properly prompted, ChatGPT o1 can be better than the average clinician at diagnosis and management, and can give more empathetic advice. But other models, other AIs and poor prompting can produce partial answers, subtle untruths or wildly off-base responses.
At the same time, our staff and colleagues are increasingly exploring AI for everything as well, from learning and checking to creating professional work.
In a recent study, about 20% of U.K. GPs reported using ChatGPT in some form for professional work.
It’s a brave new world, and the more comfortable we are with the technology, the easier it is to steer patients away from misinformation while also leveraging AI’s genuine strengths in admin and data summarising.
The Art of Prompting
Asking good questions is the basis of excellent primary care. Traditionally, around 90% of our diagnoses come from taking a good history alone. It makes us great detectives, and great trainers.
We are the masters of good questions; it is our superpower in the age of AI.
My advice for Level 2 of our Roadmap? Forget the SBAR tool from Level 1 and start to approach your interactions with AI chatbots in the same way you approach the rest of your clinical work:
Ask structured, organised, probing and curious questions.
Undertake informed, cautious, meticulous checking and analysis of answers and information. Challenge the AI’s answers and see what it says in reply!
Use your professional judgement to weigh up and act on diverse information sources.
When AI goes off-track, nudge it back with more specifics, just like you’d coach a junior colleague.
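For instance, a structured prompt (purely illustrative, with no real patient details) might read: “Act as an experienced GP colleague. A patient in their late 50s with type 2 diabetes and stage 3 chronic kidney disease is asking about starting an SGLT2 inhibitor. Summarise the main benefits, cautions and monitoring requirements in plain language, and tell me which guideline each point comes from so I can check it myself.” Then challenge the reply: ask the chatbot to justify each recommendation, or to argue the opposite case, and notice how the quality of the answer changes.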
Managing Confidentiality & Risk
Anonymise all patient details if you’re seeking clinical insights.
Always double-check the facts AI provides. It can invent references or misunderstand guidelines.
If local policies say no patient data in external systems, respect that.
AI can still help in other, less sensitive ways that save real time. Work with the system, not against it.
Be open and honest with colleagues and patients about your use of the technology… that includes your appraisal process!
Don’t shy away from the fact that you use generative AI and are interested in it. Start the process of putting in place the relevant governance and oversight to keep you compliant on data processing and confidentiality.
Signpost your colleagues to this blog; I’ll talk about governance in more detail in future posts.
Practical Tips & Implementation Guidance
1. Create a Prompt Guide for your colleagues: a simple cheat-sheet for your practice. For starters: never input patient identifiers, verify all references, and use plain language. A fuller example follows after this list.
2. Stay Informed: Models evolve quickly. DeepSeek, Llama, Gemini, Grok and others might bring new capabilities. I’ll give you my opinions here on the blog, but test them cautiously to see whether they’re actually beneficial.
3. Keep Perspective: AI should streamline tasks, not add complexity. If it’s creating more hassle, step back and reassess.
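To expand on that first tip, here is one possible starting point for a practice prompt guide (adapt it to your own local policies before sharing it):
Never include patient-identifiable information: no names, dates of birth, addresses or NHS numbers.
State the role and the task: “Act as…”, “Summarise…”, “Draft…”.
Give context and constraints: audience, length, tone and format.
Ask for sources, then verify every reference yourself before relying on it.
Treat every output as a draft; the clinician using it remains responsible for it.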
Conclusion & Key Takeaway
AI can be a robust partner if we approach it wisely, safeguard patient data, and maintain a healthy dose of clinical scepticism.
The technology is advancing, and patients are already engaging with it, so we can’t ignore AI’s role in healthcare. But we can set boundaries, test carefully, and ultimately make it work for us instead of against us.
Call to Action
If you’ve tried advanced prompting or encountered patients with AI-driven self-diagnoses, share your experience. Did it help or hinder your consultations? What’s your strategy for handling references that AI supplies? Let’s learn from one another, and don’t forget to stay subscribed for future updates on AI in primary care.
FAQ
Q: How do I confirm that AI’s references are legitimate?
A: Cross-check them. Many AI tools can invent citations. If a reference doesn’t appear in a reputable database, disregard it.
Q: Could AI overshadow the human touch in GP work?
A: It’s up to us to ensure it doesn’t. AI can handle routine tasks so we can focus on empathy and clinical nuance. That’s the real benefit.
Disclaimer
These thoughts come from my own ongoing experimentation. Always consult local policies and your professional guidelines if you plan to use AI for clinical or administrative tasks. AI is a tool, not an infallible oracle.