
I set out to write an AI policy. I didn't expect what I found.

Updated: Mar 20

What started as a gut instinct turned into something I hadn’t anticipated. This is the story of why I wrote what we believe is the world’s first client-facing AI policy in complementary healthcare, what I found when I did, and why it matters so much.



I knew we needed a policy.

As a practice, we make important decisions that can transform our clients' health, drawing on information and test results they've shared with us. Complete transparency – about every tool, every process and every boundary – felt essential.


AI was already woven through our practice in ways we hadn’t fully examined. It was in the microbiome testing I’d been using since 2015, in the AI features embedded in our website platform, and in the administrative layer of our work. Clients were beginning to ask how practitioners across complementary healthcare use AI, and some had come to us having already entered their precious microbiome data into AI tools, seeking treatment ideas from a pool of unvetted internet content.


The instinct for transparency drove me on, even though the responsibility weighed heavily. How do you write a policy about something you don’t fully understand? I was aware of AI’s potential. I wasn’t aware of its full scope. Writing with authority about something I was still learning felt dishonest – and honesty is, for me, non-negotiable.


What I came to understand is that feeling out of my depth was the only honest response. The whole sector is in this position, feeling underqualified to put governance in place, hoping someone else will go first. The alternative to writing the policy wasn’t certainty. It was letting ignorance become an excuse for inaction. So I kept going.


I had been using AI passively for years, but writing the policy challenged me to face it directly.

The process was extraordinarily rich and transformative. Doing it properly – interrogating every area where AI touches our work, thinking through what integrity actually means in each context – led somewhere unexpected. It forced a reckoning with something the policy had been circling without naming: AI is not just an efficiency tool or a governance question. In the health and wellness space, it is already doing active harm. The impossible bodies. The fabricated food. The health claims that sound authoritative because they’ve been generated at scale. A wellness culture that uses AI to manufacture perfection and sells it to people who are already struggling. For any healthcare practitioner, this isn't just theoretical.


I couldn’t ignore it, because the people we work with are coming to terms with bodies that don’t look like the ones wellness culture sells them.

These are bodies that are complex, that are unpredictable, that have often been failed by the very industry that claimed to be helping them. To use AI to manufacture and amplify those impossible standards – easily, endlessly and at scale – is ableism in action. It causes harm, and a practice that works with sick and struggling bodies has a particular responsibility to refuse it, not quietly, but on the record.


The therapeutic relationship starts long before the first appointment.

It starts with everything a potential client sees, reads and absorbs about how we present ourselves and our work. An AI-generated image of an impossible body isn’t just aesthetically dishonest. It damages trust before a client ever walks through the door.


AI can't replace clinical expertise.

AI has long been a powerful partner in microbiome research, identifying patterns across datasets that would be beyond human analysis alone. But the microbiome field is also where the risks are sharpest, where AI-generated treatment ideas can do real harm. Damage to the microbiome can be irreversible. Berberine, for example, is likely to be recommended by AI tools for its antimicrobial action, but evidence suggests it can negatively affect gut microbiome balance. We are always mindful that microbiome changes can have a lifelong impact on our clients' health.


AI can’t be a substitute for the thousands of hours of client work, specialist training and more than 45 years in clinical practice that we have between us as a team. When it’s used as though it can, it puts people’s health at risk.


The microbiome is extraordinarily complex. Analysing a client’s results well requires weighing hundreds of individual factors – their case history, dietary restrictions, lifestyle, the nuances of their specific presentation – against a continuously evolving evidence base. AI-assisted analysis from testing companies gives us a useful first orientation to a client's results. But it can’t substitute for clinical judgement. We always return to the raw data and our own research to draw our own conclusions. We don’t use AI to generate treatment plans. That’s not a marketing claim. It’s a clinical commitment, and it’s in our policy.


Your microbiome is also among the most personal biological information you can share.

Unique to you, your microbiome composition can already indicate predispositions to disease and offers real potential for optimising your long-term health. Its commercial value is growing fast. You may not yet be aware of how carefully it needs to be protected. We are, and that shapes every decision we make about the tools we use.


In January this year, both OpenAI (ChatGPT) and Anthropic (Claude) launched consumer health AI products encouraging users to upload health records and test results. Microsoft followed last week (12 March) with Copilot Health. Microbiome data is exactly the kind of highly personal biological information people will now be encouraged to share with these platforms. It's widely accepted that these AI tools can make mistakes. When those mistakes affect someone's microbiome, the real-world health consequences can be irreversible.


AI is amplifying harm in the wellness industry. This is why we draw the line.

Food styling in wellness content has always involved a degree of artifice. But AI-generated food imagery is different. It creates images of meals that don't exist, bodies that can't exist, health outcomes that have never been achieved. As complementary healthcare practitioners, we are acutely aware of the scrutiny our field rightly attracts. That scrutiny is one reason I hold the same standard of evidence across everything we do, from our treatment plans to the photographs we share on social media.


Clinical wisdom isn't just accumulated knowledge. It's knowledge that has been tested by experience, by success and failure, by the weight of real consequences. AI can be extraordinarily knowledgeable. But it will never earn wrinkles, and we won't erase ours because wellness culture on Instagram tells us we should. We don't use AI to generate images, audio or video.


We couldn't find a single comparable policy anywhere in the world.

Both AI-assisted and traditional searches drew a blank: no public policies from any other complementary healthcare practice, microbiome company or personalised nutrition service. Clients have no way of knowing how AI is used in their care or in the marketing that reaches them. We believe they should. So we wrote a policy. It covers every area where AI touches our work. Anyone can say they use AI responsibly. We’ve put it in writing.


We hope this helps our clients, and anyone reading this, make more informed and safer decisions about their data, their microbiome and their health.


We could have focused solely on data protection and clinical governance (and our policy covers that). But at its heart, this policy is about whether our clients can trust us. Everything we do rests on that.



Read our full Policy on AI Use: www.themicrobiomegroup.com/policies


If you're a practitioner thinking about how you use AI in your work, I'm developing a resource to help you think it through and put your own commitments in writing. Sign up to our newsletter to hear when it's available.


Viola Sampson BSc is a certified Microbiome Analyst and Founder of The Microbiome Group. You can book an appointment with a member of her team, and sign up to our email list to receive updates.

