Why Your People Team Needs an AI Fair Use Policy (And How to Build One That Actually Works)
This one goes out to everyone who's been secretly using ChatGPT to draft every single Slack message and meeting agenda, and still feeling like they're smuggling contraband into the office. You're unsure where you should or shouldn't use employee data, or how to incorporate AI into what feels like a potential minefield.
You've been using AI for your People Ops work, haven't you? Right? Everyone has! Take a single look at LinkedIn and you'll find a whole array of self-appointed AI-People-Experts sharing their latest wisdom on using LLMs in our work, so you know the momentum is out there. Maybe it's helping you write that tricky performance review feedback, like Thomas at Juro, or perhaps you've been using Sana to help structure your onboarding documentation, or User Interviews, as we shared a few weeks ago. And every time you do it, there's this little voice in the back of your head whispering, "Is this okay? Am I cheating?"
You're not. But we need to talk about it.
The reality is that AI has fundamentally changed the game for People teams. You may remember that in my original "People Ops as a Product" post I mentioned HR teams traditionally haven't been particularly well-resourced for analysis and data. Well, as I discussed with Daniel and Stephen on the Metrics episode of MPL Build, AI has basically handed us a superpower. We can now do complex analysis, generate insights, and create documentation at speeds that would have required entire analytics teams just a few years ago.
But with great power comes great responsibility. Right now, many People teams are flying blind, with limited guardrails around AI use, and I've heard from multiple People leaders that they're still working out how to give their teams appropriate guidance for wielding this new power.
The Wild West of AI in HR
I've been watching People teams across the spectrum, from scrappy startups to massive enterprises, and the approaches to AI are all over the map. Some teams are pretending it doesn't exist (good luck with that). Others are going full robot overlord, trying to automate everything in a quest to reduce spend, maximise output, and test the limits of today's toolkit. Most are somewhere in the messy middle: using AI tools quietly, inconsistently, and without any real framework for what's appropriate.
This isn't sustainable. And frankly, it's not fair to your team members, who are left guessing about what's okay and what isn't.
Why You Need an AI Fair Use Policy
An AI fair use policy isn't just about legal protection or showing your compliance team that you've got your act together (though those are nice side effects, I assure you). It's about creating clarity and confidence for your team so they can focus on doing brilliant work instead of worrying whether they're breaking some unspoken rule.
When you give someone a new tool without any instructions, they either don't use it at all (a missed opportunity) or they use it wrong (a fire waiting to happen). Neither option serves anyone well.
A proper AI fair use policy does three critical things: