Fire, Aim, Ready
30 days of ten-minute exercises to take you from AI-curious to AI-confident.
Week 1: Understand AI
Build the foundation. Understand what AI actually is, how to get genuinely useful output, and how to catch mistakes before they matter.
AI (Artificial Intelligence) is the broad term for any software doing something that normally requires human intelligence.
Machine learning is a subset: software that learns from patterns in data rather than following fixed rules. Your email spam filter is machine learning.
Generative AI is a subset of that: software that creates new content - text, images, code. Your AI platform is generative AI.
Large language models (LLMs) are the specific technology behind your AI platform. They were trained on vast amounts of text and learned to predict what word comes next, billions of times, until they got very good at producing fluent, knowledgeable-sounding text.
The crucial thing to understand: the model does not know things the way you do. It produces text that is statistically likely to be right. That distinction matters enormously.
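If it helps to see that principle concretely, here is a toy sketch in Python - nothing like a real LLM's internals, but the same statistical idea at miniature scale: count which word tends to follow which, then generate by always picking the most likely continuation.

```python
# Toy next-word predictor: a miniature version of the statistical idea
# behind LLMs. Real models use neural networks trained on vast corpora,
# but the principle is the same: likely continuations, not verified facts.
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word the model sounds confident "
    "the answer sounds right the answer may be wrong"
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

# "Generate" by repeatedly choosing the most frequent continuation.
word, output = "the", ["the"]
for _ in range(6):
    if not follows[word]:
        break
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # reads fluently, but nothing here was "known"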
Imagine a brilliant new colleague: fast, widely read, always confident - and sometimes confidently wrong. How would you work with that person?
That person is your AI platform, and the thought experiment matters more than the definitions. How you would work with that brilliant-but-fallible new colleague is exactly how you should work with AI: informed scepticism, not blind trust and not blanket avoidance.
The model has no internal truth-checker. It generates text by predicting the most statistically likely next word, given everything that came before. When it produces a confident-sounding answer, it is not because it has verified the information. It is because confident-sounding answers are what correct answers tend to look like in the training data.
This means confidence is a stylistic feature of the output, not a signal of accuracy. A wrong answer and a right answer can look identical.
What makes this dangerous: Hallucinations are hardest to catch on the topics where you have least expertise - exactly the situations where you most need the information and are least equipped to spot errors.
The fix: Always ask your AI platform to surface its uncertainty. It will, if you ask. The Tab 2 technique below shows you how.
The AI did not lie to you in Tab 1. It has no concept of lying. It generated text that statistically fits the pattern of a correct, authoritative answer. The lesson is not "do not trust AI". It is "do not trust confidence". Always ask it what it does not know.
Without context, your AI platform produces a statistically average answer - the kind of response that would be vaguely appropriate for most people asking this question. That means it is perfectly suited for nobody in particular.
With context, your AI platform can reason about your specific situation. It knows which generic advice applies and which does not. It can fill in the gaps intelligently rather than defaulting to the most common pattern.
The technique that makes the biggest difference: Before asking your AI platform to produce anything, ask it to define what excellent looks like for that output. Then ask it to produce to that standard.
This works because defining excellence requires reasoning about the specific domain - what agencies actually need, what makes a pitch land, what a good brief contains. Once that reasoning is done, production is dramatically better.
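If you ever script this pattern rather than type it into a chat window, it is just two calls in sequence, with the first answer folded into the second prompt. A minimal sketch in Python, where ask is a hypothetical stand-in (prompt in, reply out) for however you reach your AI platform - not a real library function:

```python
# Sketch of the "define excellence first" pattern as two calls in sequence.
# ask is a hypothetical callable (prompt -> reply) standing in for however
# you reach your AI platform.
def produce_to_standard(ask, task: str) -> str:
    # Step 1: make the model reason about the domain before producing.
    standard = ask(
        "Before writing anything: what does a world-class version of this "
        f"output look like - {task}? Be specific about what it must "
        "contain and what separates excellent from merely competent."
    )
    # Step 2: produce against that explicit, reasoned standard.
    return ask(
        f"Here is the standard for excellence you defined:\n{standard}\n\n"
        f"Now produce it to that standard: {task}"
    )
```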
Before you write anything - what does a world-class agency briefing document actually need to contain? What are agencies frustrated by when briefings are weak, and what makes them do their best work? Be specific.
Tab 1 produced a template. Competent, generic, forgettable. Tab 2 produced something that understands why a briefing document exists, what agencies actually need, and how to structure it for your context. The difference is not the AI - it is the brief.
Instructions like "be clear and specific" or "write in a professional tone" sound useful but are almost impossible to act on. They describe a quality without encoding it.
Examples encode the actual patterns: sentence length, how numbers are introduced, whether paragraphs are long or short, how conclusions are reached, what the opening line does. These are things you would struggle to articulate but immediately recognise.
When to use this technique:
• When you have a strong sense of what good looks like in your organisation
• When generic AI output feels off-brand or off-tone
• When you are producing something that will be seen externally
• When you want your AI platform to write in your voice, not its default voice
One important caveat: Examples from your organisation may contain confidential information. Anonymise or use published work you own before pasting.
Example 1: [paste a real piece of writing you think is well-done]
Example 2: [paste another example]
The second version was written to your standard, not a statistical average. This technique scales: any time you have examples of what excellent looks like in your organisation, you can use them to drag AI output from generic to genuinely useful.
1. Confidence check — Ask the AI to quantify how certain it is. Catches: overconfident claims on topics where certainty is not warranted.
2. Context check — Ask under what circumstances this advice would be wrong. Catches: advice that is generally true but wrong for your specific situation.
3. Expert check — Ask what a domain specialist would add or push back on. Catches: advice that is plausible to a non-expert but would concern someone who actually knows the field.
4. Verify check — Ask how to verify this independently. Catches: anything that cannot be corroborated, which is a signal the AI may be confabulating.
No single check catches everything. Each activates a different mode of reasoning. Stacked together they dramatically reduce the chance of something important slipping through.
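The stack is mechanical enough to template. A minimal sketch, again using a hypothetical ask stand-in for a call to your AI platform that keeps the conversation's context:

```python
# The four verification checks as follow-up prompts. ask is a hypothetical
# stand-in for a call to your AI platform within the same conversation.
CHECKS = {
    "confidence": "How certain are you about this answer? Quantify it.",
    "context": "Under what circumstances would this advice be wrong?",
    "expert": "What would a domain specialist add or push back on?",
    "verify": "How could I verify this independently?",
}

def run_checks(ask, answer: str) -> dict:
    results = {}
    for name, question in CHECKS.items():
        results[name] = ask(f"About your answer:\n{answer}\n\n{question}")
    return results
```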
When to use this: Any time the stakes are high and the domain is not yours. Not for every AI output - that would be exhausting.
Notice how each check produced something different. The confidence check quantified uncertainty you could not see. The context check surfaced edge cases. The expert check added professional caveats. The verify check gave you a path to corroboration. Together they turned an opaque recommendation into something you can actually interrogate.
Most people use AI like this: ask a question, get a recommendation, act on it. The problem is that a direct recommendation is essentially the average of what advice looks like on a topic in the training data. It is not evidence-based reasoning - it is pattern matching.
The research-first technique adds one step: before asking for a recommendation, ask the AI to research what actually happens in this situation. What do the studies show? What have practitioners found? What are the common failure modes?
Once the AI has surfaced that evidence, its recommendation is grounded in it rather than in generic patterns. The difference in quality is significant - and you can also fact-check the research, which you cannot do with a bare recommendation.
This works especially well for: personnel decisions, strategic choices, communication in difficult situations, and any topic where conventional wisdom and evidence diverge.
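Mechanically, research-first is the same two-call shape as the define-excellence technique from Week 1: one prompt to surface evidence, a second grounded in it. A sketch, with the same hypothetical ask stand-in for your AI platform:

```python
# Research-first: surface the evidence before asking for a recommendation.
# ask is a hypothetical stand-in for a call to your AI platform.
def recommend_from_evidence(ask, situation: str) -> str:
    evidence = ask(
        f"Before recommending anything, research this situation first: "
        f"{situation}\nWhat do the studies show? What have practitioners "
        f"found? What are the common failure modes?"
    )
    return ask(
        f"Given this evidence:\n{evidence}\n\n"
        f"Now recommend what I should do about: {situation}. "
        f"Ground every recommendation in the evidence above."
    )
```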
Before you tell me what to do - research this first. What do the best managers actually do in this situation? What does the evidence say about what works and what makes things worse? What are the most common mistakes managers make at exactly this point?
Tab 1 sounds plausible but is built on pattern matching. Tab 2 is grounded in specific evidence that you can examine and question. You did not have to find that research yourself - you just asked for it first. One sentence added to your prompt, significantly better output.
A workflow you designed yourself around your actual work is worth more than any generic AI tip. You now have at least one task in your job that AI genuinely helps with in a repeatable, reliable way. That is Week 1 done.
Week 2: Use AI Well
Go deeper on technique. Advanced prompting, AI as thinking partner, knowing when AI earns its cost, and building real judgement about when to use it.
1. Role — Who should your AI platform be? A data protection lawyer? A senior brand director? An experienced copywriter? Giving your AI platform a role shifts its defaults toward the vocabulary, standards, and concerns of that perspective.
2. Context — What does your AI platform need to know about your situation? Your organisation, your audience, the constraints, the history. Context turns generic output into specific output.
3. Task — What do you actually want? Be precise. Not "write something about X" but "write a 200-word internal announcement about X that achieves Y for audience Z".
4. Format — How do you want the output structured? Bullet points? Prose? A table? Sections with headers? If you do not specify, your AI platform defaults to whatever looks most common for this type of request.
5. Constraints — What should it avoid? What must it include? What tone? What length? Constraints are where the quality difference between prompts is most visible.
You do not need all five in every prompt. But knowing which element is missing helps you diagnose why output is not good enough.
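If you want to keep yourself honest about which element is missing, the five fit naturally into a small checklist. A minimal sketch in Python - the field names mirror the list above, and the assembled layout is illustrative, not a standard format:

```python
# The five prompt elements as a checklist. The build() output flags any
# element you left blank so the omission is deliberate, not accidental.
from dataclasses import dataclass, fields

@dataclass
class Prompt:
    role: str = ""         # who the AI should be
    context: str = ""      # what it needs to know about your situation
    task: str = ""         # precisely what you want
    format: str = ""       # how the output should be structured
    constraints: str = ""  # what to avoid, include, tone, length

    def build(self) -> str:
        missing = [f.name for f in fields(self) if not getattr(self, f.name)]
        if missing:
            print(f"Note: no {', '.join(missing)} specified - deliberate?")
        parts = [f"{f.name.title()}: {getattr(self, f.name)}"
                 for f in fields(self) if getattr(self, f.name)]
        return "\n".join(parts)

print(Prompt(
    role="An experienced procurement manager",
    task="Write an email to a supplier about a delivery delay",
    constraints="Direct, professional, not aggressive; under 200 words",
).build())
```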
Write an email to the supplier that: firmly establishes the seriousness of the delay and its consequences for us, requests a confirmed delivery date in writing within 24 hours, and preserves the relationship because we want to keep working with them. Tone: direct, professional, not aggressive. Length: under 200 words. No subject line needed.
The structured prompt is more work to write. But the output requires less editing, is more specific, and is more likely to be usable without significant revision. Over time, well-structured prompts actually save time because you spend less time iterating on poor outputs.
When you use AI to produce something - a document, an email, a summary - the output quality depends heavily on the quality of your prompt. But when you use AI to think, it works differently: you share a decision, an argument, or a plan, and ask the AI to interrogate it.
Techniques that work especially well:
Pre-mortem: Before committing to a decision, ask your AI platform to imagine it went badly and explain why. This surfaces risks you might be too invested to see.
Steel-man: Ask your AI platform to make the strongest possible case for the position you disagree with. Forces you to reckon with the best version of the other side.
Devil's advocate: Share your argument and ask your AI platform to attack it. Strengthens your reasoning or reveals weak spots before someone else does.
Second opinion: Share a decision you have made and ask your AI platform what a sceptical senior colleague would say about it.
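Because these four techniques are really reusable prompt templates, they can live as data, so the only thing that changes is the decision you paste in. A sketch in Python - the wording is illustrative, so adapt it to your own voice:

```python
# The four thinking-partner techniques as reusable templates.
# The phrasing is illustrative, not canonical.
THINKING_PARTNER = {
    "pre_mortem": (
        "Imagine this went badly twelve months from now: {decision}. "
        "Explain what went wrong and why."
    ),
    "steel_man": (
        "Make the strongest possible case for the position I disagree "
        "with: {decision}."
    ),
    "devils_advocate": (
        "Here is my argument: {decision}. Attack it. Be direct and "
        "specific - concrete risks, not vague concerns."
    ),
    "second_opinion": (
        "I have decided: {decision}. What would a sceptical senior "
        "colleague say about it?"
    ),
}

prompt = THINKING_PARTNER["pre_mortem"].format(
    decision="consolidating three agencies into one"
)
print(prompt)
```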
The decision is: [describe the decision and the options you are considering].
First, play devil's advocate. Make the strongest case against the option I am currently leaning towards. Be direct and specific - not vague concerns but concrete risks or weaknesses.
This is AI at its most useful - not replacing your judgement but sharpening it. The devil's advocate and pre-mortem are not designed to talk you out of your decision. They are designed to make sure you have genuinely considered what could go wrong before you commit.
[paste your content here]
I need to communicate the same core information to three different audiences:
1. [Audience 1 - e.g. my direct manager who is time-poor and detail-oriented]
2. [Audience 2 - e.g. a client who is not technical and cares mainly about outcomes]
3. [Audience 3 - e.g. my wider team who need to understand what this means for their work]
For each audience: what changes in terms of what I lead with, what I include, what I leave out, and what tone I use? Do not rewrite it yet - just analyse what should change and why.
Audience adaptation is one of the most genuinely time-saving things AI can do. The analysis step - asking your AI platform what should change before asking it to rewrite - tends to produce better output than going straight to rewriting, because it forces explicit reasoning about the audience.
1. When the thinking IS the work
If the value of a task lies in the reasoning process itself - working through a complex problem, making a judgement call, forming a view - outsourcing it to AI means you never do the thinking that builds your expertise. AI can help you think, but it cannot think for you without cost.
2. When you cannot verify the output
If you have no way to check whether what AI produces is accurate, and the consequences of being wrong are significant, you are taking a risk you cannot quantify. Use AI for drafts you can verify, not for authoritative answers you will act on.
3. When the relationship requires your voice
Difficult conversations, feedback to colleagues, personal responses to clients who know you - these need to come from you. AI-drafted communications in high-trust relationships often feel slightly off in ways people cannot articulate but can detect.
4. When the task is trivial
If a task takes you two minutes and AI would take three minutes to prompt properly, do it yourself. Do not build a habit of defaulting to AI for things that are faster without it.
5. When the data should not cross the line
If the task requires pasting data that should not be in an AI tool (client data, confidential commercial information, personal data) - do not do it. We cover this fully in Week 4.
Here are five tasks I have done recently where I either used AI or considered it:
1. [Task 1]
2. [Task 2]
3. [Task 3]
4. [Task 4]
5. [Task 5]
For each one: should I have used AI? Apply rigorous reasoning - not just what would be faster, but what the right choice is given quality, privacy, relationship, and environmental cost considerations.
The goal is not to use AI less. It is to use it deliberately. Every task you give AI should earn its cost - in time saved, in quality improved, in something genuinely enabled that you could not have done as well without it. Tasks that do not clear that bar are better done without AI.
A single query to a large language model uses approximately 10 times the energy of a standard Google search.
Training a model like GPT-4 consumed an estimated 50 gigawatt-hours of electricity - roughly equivalent to the annual electricity use of around 18,500 average UK homes, at Ofgem's typical figure of about 2,700 kWh per home per year.
Microsoft reported its global water consumption increased 34% between 2021 and 2022, largely attributed to AI infrastructure.
A conversation of 20-50 queries is estimated to use roughly 500ml of water for cooling.
Data centres globally - running AI alongside everything else, including cryptocurrency - consumed an estimated 460 terawatt-hours of electricity in 2022. This figure is projected to roughly double by 2026.
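Equivalences like these are worth being able to check yourself. A back-of-envelope calculation, assuming the 50 gigawatt-hour training estimate above and Ofgem's typical UK household electricity figure of about 2,700 kWh per year:

```python
# Back-of-envelope check on the homes equivalence above. Assumptions:
# the 50 GWh training estimate, and ~2,700 kWh/year as a typical UK
# household's electricity use (Ofgem's medium benchmark).
training_gwh = 50
kwh = training_gwh * 1_000_000        # 1 GWh = 1,000,000 kWh
home_kwh_per_year = 2_700
print(f"{kwh / home_kwh_per_year:,.0f} homes")  # ≈ 18,519
```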
Why this matters for a luxury brand:
Sustainability is not peripheral to luxury - it is increasingly central to how the sector is judged. The way teams use technology is part of that story. "We use AI thoughtfully" is a more defensible position than "we use AI for everything", and both are more defensible than "we had no policy on this".
The point is not to use AI less. It is to use it deliberately. The environmental cost of AI does not make it wrong to use - it makes the question of whether to use it worth asking every time.
Most people get a first AI output, decide it is good enough, and use it. The gap between good enough and excellent often requires two or three additional prompts - and the first draft rarely represents what your AI platform is actually capable of producing.
Effective iteration techniques:
Specific critique: Do not say "make this better". Say "the third paragraph buries the key point - restructure it so the main message comes first".
Constraint addition: Add a constraint you did not specify initially. "The tone is slightly too formal for this audience - make it warmer while keeping the professionalism."
Comparison request: Ask for a different version that takes a completely different approach. Then you can choose or blend.
The 20% better test: Ask your AI platform: "What would you change to make this 20% better?" It often surfaces improvements you would not have thought of.
What not to do: Do not keep regenerating with the same prompt hoping for a different result. If the output is wrong, the prompt is wrong. Change the prompt.
1. [specific thing that is not working and why]
2. [specific thing that is not working and why]
Also ask yourself: what would you change to make this 20% better? Apply both your improvements and the 20% better changes in the next version.
Iteration is where the real quality difference between average and excellent AI use lives. The first draft is the starting point, not the destination. Every additional specific prompt is an investment that usually pays back in editing time saved.
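If you find yourself iterating the same way on the same kind of output, the loop itself can be templated: draft once, then apply your specific critiques together with the 20% better question each round. A sketch, with the usual hypothetical ask stand-in for your AI platform:

```python
# Iteration as a loop: specific critiques plus the "20% better" question.
# ask is a hypothetical stand-in for a call to your AI platform.
def iterate(ask, first_prompt: str, critiques: list[str], rounds: int = 2) -> str:
    draft = ask(first_prompt)
    for i in range(rounds):
        feedback = critiques[i] if i < len(critiques) else "none flagged"
        draft = ask(
            f"Here is the current draft:\n{draft}\n\n"
            f"Problems to fix: {feedback}\n"
            f"Also ask yourself: what would you change to make this 20% "
            f"better? Apply both and return the revised draft."
        )
    return draft
```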
Start by explaining what it is and why it matters to someone in my position. Then tell me the three things I most need to understand to not be confused when this topic comes up in my work. Use concrete examples from my industry wherever possible.
Using AI as a tutor has one significant advantage over most other learning: it adapts in real time to exactly what you do not understand. You can ask it to explain the same thing twelve different ways until one lands. That kind of personalised, patient explanation is rare and genuinely valuable.
Week 3: Bias and Planet
Understand what AI encodes and what it costs. Visual bias, decision bias, environmental impact, and an honest audit of how you are actually doing.
Large language models and image models are trained on data scraped from the internet. The internet over-represents certain groups (younger, Western, English-speaking, male in many professional contexts) and under-represents others. The model learns to complete patterns - and the patterns in the data carry the patterns of who produced it.
This is not a flaw in a specific model. It is a structural property of how these systems are built. Every major AI system carries this to some degree, because they all trained on similar data.
What this means in practice:
• Image generators default to certain demographic representations of roles
• Text generators may assume default demographics for the subject of a scenario
• AI-assisted hiring tools may encode historical bias in who was hired
• Creative AI tools may produce outputs that reflect historically dominant aesthetics
The mitigation is not to avoid AI. It is to know the bias exists, to look for it in outputs that matter, and to specify diversity explicitly rather than assuming the default is neutral.
What you saw is not a quirk of one tool. It is a consistent pattern across all major AI systems. The AI is not making a value judgement - it is completing a statistical pattern. But the output encodes that pattern as normal, which is why it matters and why awareness alone is not sufficient.
Amazon's hiring tool (2014-2017): Amazon developed an AI system to screen job applications, trained on 10 years of historical hiring data. Because most hires over that period were men, it learned to penalise applications that included the word "women's" (as in "women's chess club"). The tool was quietly abandoned after the problem was discovered internally; the story became public in 2018.
COMPAS recidivism algorithm (US): A tool used in US courts to predict likelihood of reoffending was found by ProPublica to be almost twice as likely to falsely flag Black defendants as high risk compared to white defendants, while white defendants were more likely to be incorrectly flagged as low risk. The tool's creator disputed the methodology, but the debate itself is instructive.
Healthcare resource allocation (US, 2019): A widely used algorithm that determined which patients received additional healthcare resources was found to systematically underestimate the needs of Black patients, because it used healthcare spending as a proxy for health need - and Black patients had historically received less care for the same conditions.
All three systems were built by teams that were not trying to discriminate. The bias came from the data.
The through-line in all three case studies is that the bias was invisible to the people building and using the system until it was explicitly investigated. That is the nature of bias encoded in data - it looks like neutral outputs until someone asks why these outputs look the way they do.
AI tools do not have legal or moral agency. They are tools. When something goes wrong, responsibility sits with the humans who:
• Chose to use the tool for this purpose
• Configured or prompted it
• Reviewed (or failed to review) the output
• Acted on the recommendation
This has concrete legal implications: Under GDPR, automated decision-making that significantly affects individuals requires either human review or explicit consent. Under UK employment law, using an AI tool that produces discriminatory outcomes can constitute discrimination by the employer, even if the employer did not intend it.
The practical implication: High-stakes decisions - those affecting other people's employment, finances, safety, or significant opportunities - should always have a meaningful human review step. Meaningful means a person with the time, context, and authority to say no. Not a rubber stamp.
Ethical responsibility around AI is not abstract. It is the specific moment when you choose to act on an AI output that affects another person. The more consequential the decision, the more explicit and robust the human review step needs to be.
The audit is not about guilt - it is about honest self-assessment. The gaps you identified today are worth more than any technique in this programme, because they are specific to you and your actual habits.
The design-test-refine loop is more important than the plan. Most improvements that survive contact with real work look different from the plan that preceded them. The testing step is not optional - it is where the improvement becomes real.
Help me prepare a clear, jargon-free 3-minute explanation with one concrete example they will recognise from our work context. I will be explaining it verbally, not in writing. Then ask me to rehearse the key points so you can check I have understood it myself.
If you could explain it clearly, you understood it. If you stumbled, you found a gap. Either outcome is useful. The conversation with a colleague does something the programme alone cannot: it makes the knowledge social, which is where it sticks.
The picture of AI in your industry is probably less dramatic than the hype suggests, and too significant to ignore. The useful question is not "will AI change my job" but "which parts of my job does AI do well, and how do I concentrate on the parts it does not?"
Week 4: Privacy and Ethics
Data privacy, GDPR, AI regulation, your final personal policy, and graduation with a commitment you will actually keep.
The answer depends on the tool, the account type, and the terms of service - and most people have not read those terms.
For consumer AI tools (free tiers): Data you enter is typically used to improve the model. This means it may be reviewed by humans, stored, and potentially used in future training.
For enterprise AI tools (like a properly configured internal deployment of your AI platform): Data is typically not used for training. But you should confirm this with your IT or legal team before assuming it.
What this means in practice:
• Personal data about identifiable individuals (names, contact details, health information, financial details) should never go into an AI tool without confirmed data processing agreements
• Commercially sensitive information (unreleased product details, supplier contracts, pricing, client data) should not go into any external tool
• Information shared in confidence - by a client, a colleague, a supplier - carries an implicit duty of confidentiality that pasting into AI may breach
The test that works in almost every situation: "Would I be comfortable if my client, my manager, and our legal team could see exactly what I just pasted?" If the answer is no, do not paste it.
- A client name, contact details, or purchase history
- A supplier contract or commercial terms
- An internal creative brief or unreleased product information
- Personal information about a colleague
- Confidential strategy or financial information
You do not need to answer out loud. Just notice what comes to mind.
Most data incidents in the workplace are not dramatic hacks. They are someone doing something reasonable without thinking through what information they are carrying into that action. The "should I paste this" pause takes two seconds and can prevent consequences that take much longer to resolve.
Personal data under GDPR includes any information that can identify an individual, directly or indirectly. Names, email addresses, and job titles are obvious. Less obvious: a combination of role, location, and employer may be sufficient to identify someone even without a name.
Legal basis: Processing personal data requires a valid legal basis under GDPR. The most common for employee data is legitimate interests or contract. For customer data it is usually consent or contract. None of these legal bases automatically extends to processing that data through an external AI tool - that is a new form of processing that requires its own assessment.
Data processing agreements: If your organisation uses your AI platform or any other AI tool with personal data, it should have a Data Processing Agreement (DPA) with the provider. Without one, the organisation may be in breach of GDPR.
The automated decision-making rule: GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Using AI to make hiring, disciplinary, or significant service decisions without human review may trigger this right.
Penalties: Up to 4% of annual global turnover or 20 million euros (£17.5 million under the UK GDPR), whichever is higher. In practice, the ICO tends to focus on organisations that fail to take reasonable precautions rather than those that have made good-faith efforts.
You are not expected to be a lawyer. You are expected to be informed enough to ask the right questions and to know when to escalate. The questions you identified today are the starting point for a conversation your organisation may not have had yet.
EU AI Act: The world's first comprehensive AI regulation. Classifies AI systems by risk level: unacceptable risk (banned), high risk (strict requirements), limited risk (transparency requirements), minimal risk (largely unregulated). High-risk applications include AI used in employment, credit scoring, education, and law enforcement. Companies using high-risk AI must register systems, conduct conformity assessments, and maintain human oversight. The Act began phasing in from February 2025 and will be fully in force by August 2026.
UK approach: The UK has taken a lighter-touch, principles-based approach rather than a single regulation. The government issued a White Paper in 2023 directing existing regulators (ICO, FCA, CMA) to apply their current powers to AI in their sectors. This is less prescriptive than the EU approach but creates sector-specific variation.
GDPR enforcement: The ICO has been actively investigating AI tools used in UK organisations. Several major companies have faced investigations for using AI tools that process personal data without proper data protection assessments.
What luxury brands should be watching: The EU AI Act's employment provisions are directly relevant to any AI-assisted hiring, performance management, or workforce decisions. The transparency requirements will affect any AI-generated customer communications.
You do not need to understand AI regulation in legal detail. You need to be informed enough to know which questions to ask and who to ask them to. The questions you identified today are a better contribution to your organisation than a general awareness that regulation exists.
I want to write a final personal AI policy. It should cover: what I will always do when using AI at work, what I will never do, how I will handle data, how I will validate outputs for consequential decisions, and how I will keep my knowledge current. Help me draft it by asking me five specific questions about my role and context, one at a time.
A policy that lives only in a training programme is no policy at all. The one you just wrote is specific to your role, your risks, your actual answers. Print it, save it, review it in six months. The review is as important as the writing.
The AI news cycle produces enormous amounts of coverage. Most of it is vendor announcements, speculation, and hype. Staying current does not mean reading all of it - it means identifying the few signals that are actually reliable and building a habit around them.
Signals worth following:
• Anthropic, OpenAI, Google DeepMind research blogs - primary sources on capability changes
• MIT Technology Review - consistently good on separating hype from evidence
• The ICO and FCA - for regulatory developments affecting UK organisations
• Your sector trade press - for documented AI deployments in your specific industry
Signals to be sceptical of:
• Press releases from AI vendors about their own products
• LinkedIn posts about transformative AI breakthroughs
• Articles that describe AI doing something without linking to primary evidence
A sustainable habit: 20 minutes, once a month. Check your three to four reliable sources. Note what has changed that is relevant to your work. Update your mental model accordingly.
Help me identify: the two or three most relevant AI developments I should currently be tracking for my role and industry, a simple monthly ritual I could realistically maintain, and the one question I should ask about any AI news story to determine whether it is signal or noise.
Staying current is not about reading everything. It is about having a small number of reliable signals, a regular habit of reviewing them, and a filter for separating hype from reality. The setup - doing it today, not sometime - is what determines whether it actually happens.
1. What I used to believe that I no longer believe: [your answer]
2. One concrete thing I do differently at work: [your answer]
3. What I am still most uncertain about: [your answer]
Based on these answers, what does my learning journey tell you about where I started, what genuinely shifted, and where I should focus next?
Learning is only visible in retrospect. The fact that you can identify specific beliefs that changed, specific behaviours that are different, and specific areas of remaining uncertainty is evidence of real progress - not just completion of a programme.
It should have:
- Three specific things I commit to doing every time I use AI
- Three specific things I commit to never doing
- One thing I am committed to getting better at over the next six months
Help me write it by asking me five questions about my specific work context and what I most want to hold myself to. Ask one at a time. Wait for my answer before asking the next.
Knowing is not the same as doing. A specific commitment is the bridge between the two. The habits that will determine whether this programme sticks are small: the pause before you paste, the follow-up question after a confident answer, the moment you ask whether this task earns the cost of using AI today.
The three or four things I most want them to understand are [list the concepts from this programme that struck you most]. For each one: give me a one-sentence plain-language summary and one concrete example from a luxury brand context.
What you shared today is not a summary of a training programme. It is your perspective, your examples, your understanding. That is what makes it valuable - and what makes it stick.
Thirty days. You have gone further than most people who use AI every day ever stop to go. The habits you have built, the scepticism you have developed, the policies you have written - these are durable. They will serve you as the technology changes, because they are not about any specific tool. They are about how to think.
You started this not knowing quite what AI was. You are finishing it with a personal commitment, a real skill set, a data policy, and a genuine sense of what it means to use this technology thoughtfully.
FAQ
Answers to questions that tend to come up.