Informed by our broader perspectives on the role of generative AI, we have been working towards a company AI policy. It’s still very much a draft-in-progress, as befits a technology-in-progress. Our goal is to be pragmatic but cautious — to tread the middle ground between hype and prohibition.
Our AI policy is for a particular kind of company: one that is small-on-purpose, whose value proposition is not scale. A company that competes on things like excellence, thoughtfulness, creativity, and rigour. It is also a company that values flourishing at work. We want our work to have space for things that are only available through consistent effort and practice: old-fashioned things like mastery, craftsmanship, self-understanding. If your organization operates on different principles, then your AI policy should be different too. We are sharing it as an imperfect starting point, knowing it will evolve with the technology itself.

Definitions and scope
The AI governed by this policy is specifically generative AI. That means:
- models trained on very, very large data sets
- models general enough to be used for a variety of different purposes
It encompasses Large Language Models (LLMs) as well as the more sophisticated agentic AIs built on top of them. The underlying technical approach doesn’t really matter for this policy, but it typically involves deep neural networks, often refined with techniques like reinforcement learning.
When you see “AI” referenced in this policy, it doesn’t apply to AI-based tools with a narrow use case, like removing part of an image in Photoshop. It also doesn’t apply to ordinary analysis of smaller data sets. If your dataset can be manipulated in Excel, feel free to do as much fancy math as you like.
This policy also focuses on deliberate, intentional use of AI. As generative AI becomes increasingly pervasive, it’s nearly impossible to avoid. This policy is not intended to cover every incidental AI output that you might encounter, such as AI overviews on a search query. If in doubt, please assume the policy applies.
This policy will change frequently in response to:
- AI technology evolving
- New features or products being introduced to our existing technology stack
- Clients changing their policies and/or expectations
- Discovering areas that are unclear or nonsensical
If you have an idea for a change, let’s work together to update the policy.
When and whether we use AI
Before using AI for a task, we ask three questions.
1. Does this use comply with client policy and other Workomics policies?
2. Do we maintain control over the data?
It is okay to use AI tools to make general queries, in much the same way as you might use a search engine. For instance, you might ask ChatGPT, “What does acronym XYZ stand for?” because the AI is not relying on any Workomics or client data. It is not okay to provide a file without taking additional steps to validate the AI tool.
When using AI with any Workomics or client data, we only use tools where the input data is not used to train future models. We’ll keep an internal list of allowable AI tools that satisfy this requirement and otherwise comply with our IT security policy. If you have a new AI tool you would like to use, please get explicit permission to have it added to the list. We’ll review this list regularly.
You may only provide the AI with a finite data set, and each file must be explicitly provided by a person. That means built-in AI features stay off for platforms like Slack, Google, Box, and Zoom, and we do not enable direct API access to any of our files or data. You may, however, provide copies of individual files from these platforms to allowable AI tools.
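For illustration only, here is a minimal sketch of how the allowlist rule might be codified. The tool names and the can_share helper are hypothetical stand-ins, not part of any real register:

```python
# Hypothetical sketch of the data-control rule above. The tool names are
# placeholders; the real allowlist lives in our internal IT security register.

ALLOWED_TOOLS = {
    "example-llm-vendor",     # contract confirms inputs are not used for training
    "example-transcript-ai",  # likewise vetted against the IT security policy
}

def can_share(tool: str, files: list[str]) -> bool:
    """A file may go to an AI tool only if the tool is allowlisted and a person
    has explicitly chosen each file (no platform-wide or API-level access)."""
    return tool in ALLOWED_TOOLS and len(files) > 0

# Sharing one explicitly chosen file with a vetted tool is fine;
# the same file with an unvetted tool is not.
print(can_share("example-llm-vendor", ["interview_01.txt"]))  # True
print(can_share("unvetted-tool", ["interview_01.txt"]))       # False
```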

3. Does the benefit of using AI outweigh the benefits of the alternatives?
Having people do the work without AI has real benefits:
- The quality of the output is often higher when people do the work. AI tools tend to produce output of “average” or “acceptable” quality, or outputs that are missing important context.
- It builds and maintains human capabilities. Excellence is the result of consistent practice; our skills and judgement can atrophy if we don’t use them regularly.
- We have very high confidence in the accuracy of the output as a result of having done the detailed work that led to the final deliverables.
- Doing the work is intrinsically rewarding for the people who do it.
The benefits of using AI should outweigh these benefits of humans doing the work. In general, that will happen when we have to process large quantities of information (something AIs are very good at), and where the processing of the data is somewhat peripheral to the core deliverables our clients want us to provide.
A good example is using AI to support synthesis of interviews. AIs are generally better than people at finding patterns in large data sets, so the use of AI adds quite a lot. Interview synthesis is one of several inputs to overall analysis and recommendations, so there is plenty of opportunity for us to build and maintain our capabilities and make sure we are delivering high-quality, differentiated outputs to our clients. Having conducted the interviews ourselves, we can check the AI’s output against what was actually said and catch any hallucinations or inaccuracies. And finally, detailed coding of interview data is often time-consuming drudgery, so we are replacing a less pleasant task with more rewarding, higher-value work. As an added bonus, interview transcripts are a finite data set, and there are a number of AI tools that don’t use input data to train future models.
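As a concrete sketch of that workflow, here is what AI-assisted theme-finding might look like. The ask_allowlisted_llm function is a stand-in for whichever vetted tool we actually use, not a real API:

```python
# Illustrative sketch of AI-assisted interview synthesis. ask_allowlisted_llm
# stands in for a vetted tool whose contract guarantees inputs are not used
# to train future models.

def ask_allowlisted_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an allowlisted AI tool.
    return "(model response would appear here)"

def find_themes(transcripts: dict[str, str]) -> str:
    """Ask for cross-interview patterns, with verbatim quotes so the person
    who conducted the interviews can verify every claimed pattern."""
    corpus = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in transcripts.items()
    )
    prompt = (
        "Identify recurring themes across these interview transcripts. "
        "For each theme, quote the supporting passages verbatim so a human "
        "reviewer can check them against the source:\n\n" + corpus
    )
    return ask_allowlisted_llm(prompt)

print(find_themes({"interview_01": "…", "interview_02": "…"}))
```

The verbatim quotes are the point: they keep the synthesis verifiable, so it remains one checked input among several rather than an unexamined deliverable.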
By contrast, we should be reluctant to deploy AI where it is displacing our ability to practice our core skills, where our motivation is time-saving or producing “more” (rather than “better”), or when there is a good chance we’ll spend more time than we saved trying to correct AI-generated “workslop.”
How we use AI
If we have made the decision to use AI, that is not carte blanche. We use AI judiciously, so that we minimize the downsides and maximize the benefits. As we gain more experience with using AI, this section will expand to reflect lessons learned.
Our use of AI should be additive.
AI should add to the quality of our work, not substitute for the work itself. Equally, our use of AI should be supplemental — an optional add-on that could be taken away without diminishing the core. If our deliverables are mainly the result of prompting an AI, then our clients have no need for us. We must consistently deliver something they cannot get by prompting a model.
Our use of AI should be transparent.
We are open, with clients and with each other, about when and how AI has contributed to our work.
Our use of AI should be eco-conscious.
These numbers are not negligible, but you will generate almost 150x that much carbon on a one-way flight from Toronto to Newark‡. Put differently, roughly 150 days of typical AI use add up to one short flight. Remember, too, that generating the same outputs without AI still consumes energy. When we think about the carbon and water impacts of AI, what really matters is the aggregate use around the world — both how much it is used, and for what kinds of tasks. The International Energy Agency believes it’s plausible that the energy savings from AI-driven optimizations could outstrip the additional energy usage from data centres. Of course, we might also end up in a future where everyone uses AI to generate billions of unnecessary cat videos. The applications and frequency matter.
At Workomics, we are not using AI to drive that kind of energy savings. However, in the context of the other environmental impacts of our business activities (driving, flights, heating and cooling offices, etc.), the judicious use of AI tools is reasonable. We should, nonetheless, be mindful of the environmental impacts, and lay off the cat videos.
Our use of AI should be voluntary.
It is somewhat trendy to make the use of AI mandatory, and even to terminate employees who can’t be “upskilled” with AI. That is not our approach. We recognize that there are real ethical quandaries when it comes to AI. Commercial models are generally trained on the output of writers and artists without permission or compensation. There are cases before the courts where traditional media companies are suing AI companies for copyright infringement. Anthropic has agreed to pay $1.5 billion to authors after training its models on books downloaded from piracy websites. It’s fair to have moral qualms about how these models were built, and to worry about the future harms that might come from AI. On the other hand, not adopting AI may significantly harm your future professional opportunities and earning potential. To the extent that AI is used to solve hard problems and benefit society, it can be a force for good.
In situations where we have this kind of moral ambiguity, we don’t believe it is our role as a company to either mandate or prohibit the use of AI. Our policy is to encourage judicious use of AI, and to pair it with advocacy for fairer systems. However, we do allow individuals to be “conscientious objectors” — to not use AI as a matter of personal principle, on a use-case-by-use-case, platform-by-platform basis. We ask you not to reject all uses of AI categorically, but to consider each case on its merits. Your choice not to use AI may affect the kinds of projects you are able to work on.
For now, that is possible because using AI is not existential to our business, and doing your job without AI doesn’t create an undue burden for your colleagues or your clients. However, it’s hard to predict the future path of a technology. Someone in the 1980s could eschew the fax machine for their whole career with no issues, but could not have put off adopting email forever. All of which is to say: we can imagine a scenario where AI is existential to our business, so we reserve the right to mandate AI use in the future, if necessary.
* 15 knowledge-building queries, 10 queries to iterate on an image, and 3 queries to create a 5-second social media video
† Estimate is from myClimate.org, assuming a mid-sized, gasoline-powered car.
‡ Per passenger! That is assuming economy fare on an Airbus A320, according to myClimate.org.