Generative AI Guidance for ICM Training and Revalidation
Interim guidance on the responsible use of generative AI tools (e.g. ChatGPT, Gemini, Claude) by intensivists for portfolio submissions, CESR applications, and revalidation.
The Faculty of Intensive Care Medicine (FICM) recognises the growing use of generative AI tools, such as ChatGPT (OpenAI), Gemini (Google), and Claude (Anthropic), in professional and academic settings.
As the General Medical Council (GMC) continues its review into the role of generative AI in medical education, FICM is providing interim guidance to clarify our recommendations for the use of these tools by our members and fellows. This guidance is relevant to all intensivists, including those in training and those seeking specialist registration via the portfolio pathway.
Responsible Use of AI Tools
Intensivists may use generative AI tools to support aspects of their personal and educational development, including:
- Planning entries or reflective pieces to be included within an online portfolio (for example, the Lifelong Learning Platform, LLP), or for revalidation purposes
- Checking spelling and grammar
- Providing general writing support or prompts for reflection
- Summarising medical literature when conducting self-study and CPD
However, the use of AI must be limited to supportive functions, and intensivists are responsible for ensuring that any written content they submit for revalidation or to an online portfolio is factually accurate. In addition, all submitted content, and particularly reflective reports, must represent the intensivist's own clinical reasoning, judgement, experience, and insight.
Risks and Ethical Considerations
Intensivists should be aware of the limitations of generative AI tools, including:
- The risk of factual inaccuracies (hallucinations)
- Potential bias in AI-generated responses, including inadvertent proliferation of harmful or outdated tropes
- The possibility of unoriginal or plagiarised content
- Inadvertent data breaches if information that could potentially identify an individual is included within a prompt
Using AI to produce content without meaningful personal contribution, or to fabricate patient encounters or clinical experience, may be considered a breach of probity and professional standards.
Intensivists should also be mindful of the risk that overreliance on generative AI could bypass their personal engagement with the reflection and learning process, hindering rather than helping their professional development.
Disclosure and Authenticity
Responsible officers, appraisers, educational supervisors and ARCP panels may explore submitted content to assess authenticity and meaningful engagement, whilst being mindful that automated systems purporting to detect AI-generated content are themselves prone to error.
FICM encourages all intensivists to use AI tools critically and responsibly, with a clear commitment to integrity, professionalism, and reflective practice. This interim guidance will be reviewed and updated in line with evolving GMC policy.