Fire, Aim, Ready - Sydenham Club
AI Training
A Sydenham Club programme

Fire, Aim, Ready

30 days of ten-minute exercises to take you from AI-curious to AI-confident.

Week 1 · 7 exercises

Week 1: Understand AI

Build the foundation. Understand what AI actually is, how to get genuinely useful output, and how to catch mistakes before they matter.

Day 1
What Even Is This Thing?
There is a lot of noise about AI right now. People use AI, machine learning, and generative AI as if they mean the same thing. They do not. Today is not about memorising definitions. It is about building an intuition for what kind of thing your AI platform actually is.
The three layers of AI:

AI (Artificial Intelligence) is the broad term for any software doing something that normally requires human intelligence.

Machine learning is a subset: software that learns from patterns in data rather than following fixed rules. Your email spam filter is machine learning.

Generative AI is a subset of that: software that creates new content — text, images, code. Your AI platform is generative AI.

Large language models (LLMs) are the specific technology behind your AI platform. They were trained on vast amounts of text and learned to predict what word comes next, billions of times, until they got very good at producing fluent, knowledgeable-sounding text.

The crucial thing to understand: the model does not know things the way you do. It produces text that is statistically likely to be right. That distinction matters enormously.
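If you want to see "predict the next word" made concrete, here is a deliberately tiny sketch in Python: a frequency table built over a few words of made-up text. Real LLMs use neural networks with billions of parameters, not lookup tables, but the principle — choose the statistically likely continuation, with no notion of truth — is the same.

```python
from collections import Counter

# A toy "language model": given the previous word, pick the word that most
# often followed it in the training text. This is an illustration only;
# real LLMs are vastly more sophisticated.
training_text = (
    "the model predicts the next word the model sounds confident "
    "the model has no truth checker"
)
words = training_text.split()

# Count which word follows each word.
follows = {}
for prev, nxt in zip(words, words[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> "model": the most common follower
print(predict_next("zebra"))  # -> None: never seen, nothing to predict
```

Notice that the toy model "answers" fluently whenever it has a pattern to follow, and it has no way to check whether the continuation is true.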
    1
    Read the learning note above carefully. Then open your AI platform and paste this to see the difference between explaining and demonstrating:
    Paste this
    I want you to help me understand what kind of thing you are. Do not use technical jargon. Explain the difference between AI, machine learning, and generative AI as if you are explaining it to someone intelligent who works in a creative industry and has never needed to think about this before. Use an analogy that does not involve computers.
    2
    Read the response. Now paste this follow-up and notice how candid it is:
    Follow up with this
    That is helpful. Now be honest with me - what are the three most important things someone should understand about your limitations before they start relying on you at work?
    3
    Now try this thought experiment - do not paste it, just sit with it for 60 seconds:
    A new team member joins. They have read everything ever written about your industry, in every language, up to about a year ago. They are extraordinarily well-read and can write fluently in any style. But they have never actually worked in your field, they have no memory of previous conversations, and they sometimes confidently state things that are not true.

    How would you work with that person?

    That person is your AI platform. The way you would work with them is exactly how you should work with AI.

The thought experiment matters more than the definitions. How you would work with that brilliant-but-fallible new colleague is exactly how you should work with your AI platform. Informed scepticism, not blind trust and not blanket avoidance.

Day 1 done. Tomorrow you will watch your AI platform get something wrong on purpose - and understand exactly why.
Key takeaway: Your AI platform produces text that is statistically likely to be right. That is very different from knowing something is true.
Why does the knowledge cutoff matter? Because your AI platform was trained on data up to a certain date, and anything after that it simply does not know. It may still answer confidently, with the same fluent tone it uses for everything else. That is the trap. Always ask your AI platform when its knowledge ends when the topic is time-sensitive.
Take it deeper
📖 What is generative AI? - MIT Technology Review (5 min read)
Day 2
Why It Sounds So Certain
Today you are going to watch your AI platform be wrong - confidently, fluently, convincingly wrong. This is the single most important practical lesson in this programme. Once you have seen it happen, you cannot unsee it.
Why AI hallucinates:

The model has no internal truth-checker. It generates text by predicting the most statistically likely next word, given everything that came before. When it produces a confident-sounding answer, it is not because it has verified the information. It is because confident-sounding answers are what correct answers tend to look like in the training data.

This means confidence is a stylistic feature of the output, not a signal of accuracy. A wrong answer and a right answer can look identical.

What makes this dangerous: Hallucinations are most likely on topics where you have least expertise - exactly the situations where you most need the information and are least equipped to spot errors.

The fix: Always ask your AI platform to surface its uncertainty. It will, if you ask. The Tab 2 technique below shows you how.
    1
    Open your AI platform in two browser tabs.
    2
    Tab 1
    In Tab 1, paste this and hit Enter. Do not read the output yet:
    Paste this - do not read yet
    What were the key outcomes of the most recent EU sustainability regulation affecting luxury goods and fashion? Give me specific details - dates, thresholds, what companies are required to do.
    3
    Tab 2
    Now in Tab 2, paste this version - same question, one added sentence:
    Paste this
    What were the key outcomes of the most recent EU sustainability regulation affecting luxury goods and fashion? Give me specific details - dates, thresholds, what companies are required to do. Before you answer, tell me your knowledge cutoff date and flag anything you are uncertain about.
    4
    Read both responses side by side. Tab 1 may sound authoritative. Tab 2 will be more hedged, more honest, and ultimately more useful. Now stay in Tab 2 and paste this:
    Follow up with this
    How confident are you in the specific details you just gave me? Which parts should I verify before acting on them?

The AI did not lie to you in Tab 1. It has no concept of lying. It generated text that statistically fits the pattern of a correct, authoritative answer. The lesson is not do not trust AI. It is do not trust confidence. Always ask it what it does not know.

Two days in and you already know something most AI users do not: that the most dangerous output is not obviously wrong - it is confidently wrong. Tomorrow: how to get dramatically better answers.
Key takeaway: Always ask your AI platform what it is uncertain about. Confidence in the output is a writing style, not a fact-check.
Can your AI platform flag its own uncertainty? It can - but only if specifically prompted to. Left to defaults, it generates the most likely continuation of the conversation, which is usually a confident-sounding answer. The model has no internal truth-checker. It has patterns. That is why prompting it to surface uncertainty is a skill worth building.
Day 3
Give It Something Real
Your AI platform has read almost everything ever written. But it knows nothing about you, your team, your clients, or what good looks like in your specific context. Today you will see the difference that context makes.
Why context transforms AI output:

Without context, your AI platform produces a statistically average answer - the kind of response that would be vaguely appropriate for most people asking this question. That means it is perfectly suited for nobody in particular.

With context, your AI platform can reason about your specific situation. It knows which generic advice applies and which does not. It can fill in the gaps intelligently rather than defaulting to the most common pattern.

The technique that makes the biggest difference: Before asking your AI platform to produce anything, ask it to define what excellent looks like for that output. Then ask it to produce to that standard.

This works because defining excellence requires reasoning about the specific domain - what agencies actually need, what makes a pitch land, what a good brief contains. Once that reasoning is done, production is dramatically better.
    1
    Open your AI platform in two browser tabs.
    2
    Tab 1
    In Tab 1, paste this - do not read it yet:
    Paste this - do not read yet
    Write a briefing document for a new agency we are about to start working with.
    3
    Tab 2
    In Tab 2, fill in the highlighted sections with something real from your work, then paste:
    Paste this - fill in the brackets
    I work at [your organisation] in [your team or division - e.g. wholesale, communications, product, retail]. We are about to brief a new [type of agency - e.g. PR agency / digital agency / creative studio] on [a real or realistic project].

    Before you write anything - what does a world-class agency briefing document actually need to contain? What are agencies frustrated by when briefings are weak, and what makes them do their best work? Be specific.
    4
    Read what Tab 2 gives you. Then paste this follow-up:
    Follow up with this
    Now write the briefing document based on what we just discussed. Leave placeholders where you need information I have not given you yet.
    5
    Switch back to Tab 1. Read both documents side by side.

Tab 1 produced a template. Competent, generic, forgettable. Tab 2 produced something that understands why a briefing document exists, what agencies actually need, and how to structure it for your context. The difference is not the AI - it is the brief.

Day 3 done. Tomorrow: how to give your AI platform your standards, not just your task.
Key takeaway: Before asking your AI platform to produce something, ask it what excellent looks like for that output. Then ask it to produce to that standard.
Why define excellence first? Because describing quality precisely is surprisingly hard. Ask your AI platform to define excellence first and it reverse-engineers the patterns that make great work great - structure, rhythm, what gets said first, how evidence is used. The definition does the describing for you, more precisely than you could yourself.
Day 4
Teach It Your Standards
Sometimes you do not want generic best practice. You want output that reflects your specific taste, your organisation's voice, your standards. Today you learn how to give your AI platform examples of excellent work so it can replicate what you actually mean by good.
Why examples beat instructions:

Instructions like "be clear and specific" or "write in a professional tone" sound useful but are almost impossible to act on. They describe a quality without encoding it.

Examples encode the actual patterns: sentence length, how numbers are introduced, whether paragraphs are long or short, how conclusions are reached, what the opening line does. These are things you would struggle to articulate but immediately recognise.

When to use this technique:
• When you have a strong sense of what good looks like in your organisation
• When generic AI output feels off-brand or off-tone
• When you are producing something that will be seen externally
• When you want your AI platform to write in your voice, not its default voice

One important caveat: Examples from your organisation may contain confidential information. Anonymise or use published work you own before pasting.
    1
    Open your AI platform in two browser tabs.
    2
    Tab 1
    In Tab 1, paste this:
    Paste this
    Write a short internal announcement about a new team initiative. Keep it professional and clear.
    3
    Tab 2
    In Tab 2, find two pieces of writing from your organisation that you think are genuinely excellent - an email, an announcement, a brief, anything. Paste this with those examples included:
    Paste this - include your examples
    Here are two examples of internal communications I think are excellent. What do they have in common? Articulate the specific principles that make both work. Be precise about sentence structure, tone, how they open, and how they land.

    Example 1: [paste a real piece of writing you think is well-done]

    Example 2: [paste another example]
    4
    Read the principles it extracts. Then paste this follow-up in Tab 2:
    Follow up with this
    Now write a short internal announcement about a new team initiative. Apply those principles exactly.
    5
    Compare the two versions. The difference is the standard you gave it.

The second version was written to your standard, not a statistical average. This technique scales: any time you have examples of what excellent looks like in your organisation, you can use them to drag AI output from generic to genuinely useful.

Day 4 done. Tomorrow: a four-check method for catching mistakes before they reach anyone who matters.
Key takeaway: Give your AI platform examples of work you consider excellent. It will extract the principles and write to that standard.
Why do examples beat instructions? Because the qualities that make writing good are largely tacit - you know them when you see them but cannot easily put them into words. Instructions like "be clear and specific" are so vague they fail to constrain anything. Examples bypass that problem by encoding the patterns directly.
Day 5
Catch the Mistakes
AI is particularly useful for work outside your expertise - but that is also when you are least equipped to spot errors. Today you learn a four-check validation method that catches different classes of mistake.
The four checks and what each catches:

1. Confidence check — Ask the AI to quantify how certain it is. Catches: overconfident claims on topics where certainty is not warranted.

2. Context check — Ask under what circumstances this advice would be wrong. Catches: advice that is generally true but wrong for your specific situation.

3. Expert check — Ask what a domain specialist would add or push back on. Catches: advice that is plausible to a non-expert but would concern someone who actually knows the field.

4. Verify check — Ask how to verify this independently. Catches: anything that cannot be corroborated, which is a signal the AI may be confabulating.

No single check catches everything. Each activates a different mode of reasoning. Stacked together they dramatically reduce the chance of something important slipping through.

When to use this: Any time the stakes are high and the domain is not yours. Not for every AI output - that would be exhausting.
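If you keep reusable prompts in a notes file or a small script, the four checks can be captured as a list you paste one by one after any high-stakes answer. A minimal Python sketch - the wording mirrors the check prompts in this exercise, and you should adapt it freely:

```python
# The four validation checks as a reusable checklist. Each entry is
# (name, follow-up prompt to paste into the same AI conversation).
VALIDATION_CHECKS = [
    ("Confidence", "What is the probability this recommendation is fully "
                   "correct? Where are you most and least confident?"),
    ("Context", "Under what circumstances would this recommendation be "
                "wrong or incomplete for my specific situation?"),
    ("Expert", "If a specialist reviewed this recommendation, what would "
               "they add, change, or push back on?"),
    ("Verify", "How should I verify this independently? What should I "
               "actually check, and where?"),
]

# Print the checklist in paste-ready form.
for name, question in VALIDATION_CHECKS:
    print(f"{name} check: {question}")
```

Running the script prints the four follow-ups in order, so you never have to remember them under pressure.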
    1
    Open your AI platform and start a new conversation. Fill in the brackets and paste this:
    Paste this - fill in the brackets
    I am a [your job title] at [your organisation]. I have just found out our team has been storing client contact data in a shared spreadsheet that not everyone should have access to. We have been doing this for about a year. What should I do to fix this, and are there any legal implications? Do not ask me any follow-up questions.
    2
    Read the recommendation. Now run these four checks, one at a time, in the same conversation:
    3
    Check 1
    Check 1 - Confidence
    What is the probability this recommendation is fully correct? Where are you most and least confident?
    4
    Check 2
    Check 2 - Context
    Under what circumstances would this recommendation be wrong or incomplete for my specific situation?
    5
    Check 3
    Check 3 - Expert
    If a specialist data protection lawyer reviewed this recommendation, what would they add, change, or push back on?
    6
    Check 4
    Check 4 - Verify
    How should I verify this independently? What should I actually check, and where?

Notice how each check produced something different. The confidence check quantified uncertainty you could not see. The context check surfaced edge cases. The expert check added professional caveats. The verify check gave you a path to corroboration. Together they turned an opaque recommendation into something you can actually interrogate.

Day 5 done. You now have a validation method you can use on any high-stakes AI output. Tomorrow: how to make AI ground its advice in evidence.
Key takeaway: Stack validation checks. Each one catches a different class of mistake. No single check is enough on its own.
When can you skip the checks? When the stakes are low and the consequences of being wrong are easily reversible. Drafting an internal email, brainstorming ideas, summarising a document you have already read - these do not need four checks. Reserve the full method for consequential decisions: those affecting other people, involving compliance, or informing significant actions.
Day 6
Research Before You Recommend
When you go straight to asking AI for a recommendation, you get one built on statistical patterns, not evidence. Today you learn a technique that makes AI ground its advice in what has actually happened before it tells you what to do.
The research-first technique:

Most people use AI like this: ask a question, get a recommendation, act on it. The problem is that a direct recommendation is essentially the average of what advice looks like on a topic in the training data. It is not evidence-based reasoning - it is pattern matching.

The research-first technique adds one step: before asking for a recommendation, ask the AI to research what actually happens in this situation. What do the studies show? What have practitioners found? What are the common failure modes?

Once the AI has surfaced that evidence, its recommendation is grounded in it rather than in generic patterns. The difference in quality is significant - and you can also fact-check the research, which you cannot do with a bare recommendation.

This works especially well for: personnel decisions, strategic choices, communication in difficult situations, and any topic where conventional wisdom and evidence diverge.
    1
    Open your AI platform in two browser tabs.
    2
    Tab 1
    In Tab 1, fill in the brackets and paste - do not read the output yet:
    Paste this - do not read yet
    I am a [your job title] at [your organisation]. I need to give feedback to a direct report who has been missing deadlines for the past six weeks. I have mentioned it once informally but nothing changed. What should I do? Do not ask me any follow-up questions.
    3
    Tab 2
    In Tab 2, paste the research-first version:
    Paste this - fill in the brackets
    I am a [your job title] at [your organisation]. I need to give feedback to a direct report who has been missing deadlines for the past six weeks. I have mentioned it once informally but nothing changed.

    Before you tell me what to do - research this first. What do the best managers actually do in this situation? What does the evidence say about what works and what makes things worse? What are the most common mistakes managers make at exactly this point?
    4
    Read Tab 2. Then paste this follow-up:
    Follow up with this
    Now, based on that research, what do you recommend I do specifically? Be concrete.
    5
    Switch back to Tab 1. Read both recommendations side by side.

Tab 1 sounds plausible but is built on pattern matching. Tab 2 is grounded in specific evidence that you can examine and question. You did not have to find that research yourself - you just asked for it first. A few sentences added to your prompt, significantly better output.

Day 6 done. Tomorrow: turning a technique into a repeatable workflow you will actually use.
Key takeaway: Ask AI to research before it recommends. The extra step grounds advice in evidence, not patterns - and gives you something you can fact-check.
Why is a research-grounded recommendation better? Because it externalises the reasoning. A bare recommendation is a black box - you cannot see why the AI reached it. A research-grounded recommendation shows its working: here is what the evidence says, here is how I am applying it to your situation. You can interrogate each step, which you cannot do with a conclusion that appears from nowhere.
Day 7
Build a Workflow
You have learned five techniques this week. Today you turn one of them into a repeatable workflow for a real task in your job - something you can use every week without thinking about which technique to apply.
    1
    Think of one task you do regularly that takes 15 minutes or more and currently involves writing, summarising, researching, or drafting. Open your AI platform and paste this:
    Paste this - fill in the brackets
    I am a [your job title] at [your organisation]. I want to build a simple, repeatable AI-assisted workflow for [describe the task - e.g. writing my weekly update to my manager / summarising meeting notes / briefing an agency / preparing for a client review]. Help me design it step by step. Ask me any questions you need before suggesting the workflow.
    2
    Answer its questions honestly. Then paste this:
    Follow up with this
    Now write the workflow as a simple checklist I can follow every time I do this task. Include the specific prompts I should use at each step, and where the validation checks from Day 5 apply.
    3
    Test the workflow on a real or realistic example right now. Does it save time? Does it produce something genuinely usable? Note what works and what you would change.

A workflow you designed yourself around your actual work is worth more than any generic AI tip. You now have at least one task in your job that AI genuinely helps with in a repeatable, reliable way. That is Week 1 done.

Week 1 complete. You understand what AI is, how to get better output, how to catch mistakes, and how to build this into a real habit. Week 2 goes deeper into using AI well - more advanced techniques, smarter prompting, and the judgements that separate effective AI users from everyone else.
Key takeaway: The most valuable AI skill is turning a technique into a habit. A workflow you actually use beats a technique you know but forget.
Why build your own workflow? Generic prompting advice is designed for nobody in particular. Your workflow is designed for your role, your outputs, your standards. That specificity is what makes it reliable enough to actually use week after week.
Week 2 · 7 exercises

Week 2: Use AI Well

Go deeper on technique. Advanced prompting, AI as thinking partner, knowing when AI earns its cost, and building real judgement about when to use it.

Day 8
The Art of the Prompt
Most people prompt AI the way they would type a Google search: short, vague, and hoping for the best. Today you learn the components of a genuinely good prompt and what each one does.
Anatomy of a strong prompt:

1. Role — Who should your AI platform be? A data protection lawyer? A senior brand director? An experienced copywriter? Giving your AI platform a role shifts its defaults toward the vocabulary, standards, and concerns of that perspective.

2. Context — What does your AI platform need to know about your situation? Your organisation, your audience, the constraints, the history. Context turns generic output into specific output.

3. Task — What do you actually want? Be precise. Not write something about X but write a 200-word internal announcement about X that achieves Y for audience Z.

4. Format — How do you want the output structured? Bullet points? Prose? A table? Sections with headers? If you do not specify, your AI platform defaults to whatever looks most common for this type of request.

5. Constraints — What should it avoid? What must it include? What tone? What length? Constraints are where the quality difference between prompts is most visible.

You do not need all five in every prompt. But knowing which element is missing helps you diagnose why output is not good enough.
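For readers comfortable with a little scripting, the five-part anatomy can double as a diagnostic checklist. Here is a minimal Python sketch - a hypothetical helper, not part of any AI platform's API - that assembles a structured prompt and reports which components are missing:

```python
def build_prompt(role=None, context=None, task=None, fmt=None, constraints=None):
    """Assemble a structured prompt from the five components and report
    which ones are missing. The labels and ordering are illustrative,
    not a required format."""
    parts = {
        "Role": role,
        "Context": context,
        "Task": task,
        "Format": fmt,
        "Constraints": constraints,
    }
    missing = [name for name, value in parts.items() if not value]
    prompt = "\n".join(f"{name}: {value}" for name, value in parts.items() if value)
    return prompt, missing

# Example: a prompt with role, task, and constraints, but no context or format.
prompt, missing = build_prompt(
    role="You are a senior operations manager at a luxury goods company.",
    task="Write an email to the supplier about the delayed delivery.",
    constraints="Direct, professional, under 200 words.",
)
print(missing)  # -> ['Context', 'Format']: the components to add before pasting
```

The `missing` list is the diagnosis: it tells you which element to supply before the prompt goes anywhere near your AI platform.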
    1
    Open your AI platform in two browser tabs.
    2
    Tab 1
    In Tab 1, paste this short prompt:
    Paste this
    Write an email to a supplier about a delayed delivery.
    3
    Tab 2
    In Tab 2, paste this fully structured version - fill in your details:
    Paste this - fill in your details
    You are a senior operations manager at a luxury goods company. We have a supplier called [supplier name or type] who has delayed delivery of [what] by [how long]. This is affecting [what it affects - e.g. a product launch / seasonal inventory / a client commitment].

    Write an email to the supplier that: firmly establishes the seriousness of the delay and its consequences for us, requests a confirmed delivery date in writing within 24 hours, and preserves the relationship because we want to keep working with them. Tone: direct, professional, not aggressive. Length: under 200 words. No subject line needed.
    4
    Read both outputs. Identify which component of the strong prompt made the biggest difference in this case.

The structured prompt is more work to write. But the output requires less editing, is more specific, and is more likely to be usable without significant revision. Over time, well-structured prompts actually save time because you spend less time iterating on poor outputs.

Day 8 done. Tomorrow: how to use AI as a thinking partner, not just a writing assistant.
Key takeaway: A strong prompt has role, context, task, format, and constraints. Knowing which element is missing helps you fix weak output.
Which elements do people most often leave out? Context and constraints. Most people specify the task well enough but give your AI platform no information about their specific situation and no guidance on what to avoid. The result is output that is vaguely appropriate for everyone and specifically useful for nobody.
Day 9
AI as Thinking Partner
Most people use AI to produce things. Fewer use it to think. Today you learn how to use your AI platform as a thinking partner - to stress-test your reasoning, surface your blind spots, and sharpen your arguments before you commit to them.
Using AI to think, not just to produce:

When you use AI to produce something - a document, an email, a summary - the output quality depends heavily on the quality of your prompt. But when you use AI to think, it works differently: you share a decision, an argument, or a plan, and ask the AI to interrogate it.

Techniques that work especially well:

Pre-mortem: Before committing to a decision, ask your AI platform to imagine it went badly and explain why. This surfaces risks you might be too invested to see.

Steel-man: Ask your AI platform to make the strongest possible case for the position you disagree with. Forces you to reckon with the best version of the other side.

Devil's advocate: Share your argument and ask your AI platform to attack it. Strengthens your reasoning or reveals weak spots before someone else does.

Second opinion: Share a decision you have made and ask your AI platform what a sceptical senior colleague would say about it.
    1
    Think of a real decision you are wrestling with at work - a strategic choice, a personnel situation, a recommendation you need to make. Open your AI platform and paste this:
    Paste this - fill in your decision
    I am going to share a decision I need to make, and I want you to help me think it through rather than just tell me what to do.

    The decision is: [describe the decision and the options you are considering].

    First, play devil's advocate. Make the strongest case against the option I am currently leaning towards. Be direct and specific - not vague concerns but concrete risks or weaknesses.
    2
    Read the devil's advocate case carefully. Then paste this:
    Follow up with this
    Now do a pre-mortem. Assume I went with my preferred option and it went badly wrong 12 months from now. What were the three most likely reasons it failed?
    3
    Read the pre-mortem. Now paste this:
    Final question
    Having heard all of that, what is the one thing I should do or find out before making this decision that I have not yet done or found out?

This is AI at its most useful - not replacing your judgement but sharpening it. The devil's advocate and pre-mortem are not designed to talk you out of your decision. They are designed to make sure you have genuinely considered what could go wrong before you commit.

Day 9 done. Tomorrow: AI for communication - drafting, editing, and adapting for different audiences.
Key takeaway: Use AI to stress-test your thinking, not just to produce output. The pre-mortem and devil's advocate techniques surface what you might be too invested to see yourself.
Why is AI so effective as a devil's advocate? Because it has no stake in the outcome. When you ask a colleague to play devil's advocate, they often pull their punches because they do not want to undermine you or seem negative. Your AI platform does not have that constraint. It will make the strongest case against your position without worrying about the relationship - which is exactly what makes it useful.
Take it deeper
📚 Thinking, Fast and Slow by Daniel Kahneman - understanding your own cognitive biases makes you a better AI prompt writer
Day 10
Adapting for Different Audiences
The same information needs to be communicated very differently depending on who is receiving it. Today you learn how to use your AI platform to adapt content for different audiences - without losing accuracy or rewriting everything from scratch.
    1
    Take a piece of work you have written recently - a report, an email, a presentation slide, anything substantive. Open your AI platform and paste this:
    Paste this - fill in your content and audiences
    Here is a piece of communication I have written:

    [paste your content here]

    I need to communicate the same core information to three different audiences:
    1. [Audience 1 - e.g. my direct manager who is time-poor and detail-oriented]
    2. [Audience 2 - e.g. a client who is not technical and cares mainly about outcomes]
    3. [Audience 3 - e.g. my wider team who need to understand what this means for their work]

    For each audience: what changes in terms of what I lead with, what I include, what I leave out, and what tone I use? Do not rewrite it yet - just analyse what should change and why.
    2
    Read the analysis. Then paste this for each audience you actually need:
    Paste this for each audience
    Now rewrite the content for [Audience 1], applying the changes you identified. Keep it to [specify length - e.g. under 150 words / one paragraph / three bullet points].

Audience adaptation is one of the most genuinely time-saving things AI can do. The analysis step - asking your AI platform what should change before asking it to rewrite - tends to produce better output than going straight to rewriting, because it forces explicit reasoning about the audience.

Day 10 done. Tomorrow: the framework for deciding when AI is and is not the right tool for a task.
Key takeaway: Ask AI to analyse what should change for each audience before asking it to rewrite. The analysis step produces better output and often surfaces things you would not have thought of.
Why analyse before rewriting? Because different audiences do not just need simpler or more complex language. They need different information in a different order with different things foregrounded. A time-poor manager needs the so-what first. A technical audience needs the how before the what. A client needs outcomes before methods. Simplification is one change; audience adaptation is many changes simultaneously.
Day 11
When NOT to Use AI
Knowing when not to use AI is one of the most valuable skills you can build. Not every task earns its cost. Not every output should be delegated. Today you build a clear, principled framework for making this call quickly.
Five situations where AI is the wrong tool:

1. When the thinking IS the work
If the value of a task lies in the reasoning process itself - working through a complex problem, making a judgement call, forming a view - outsourcing it to AI means you never do the thinking that builds your expertise. AI can help you think, but it cannot think for you without cost.

2. When you cannot verify the output
If you have no way to check whether what AI produces is accurate, and the consequences of being wrong are significant, you are taking a risk you cannot quantify. Use AI for drafts you can verify, not for authoritative answers you will act on.

3. When the relationship requires your voice
Difficult conversations, feedback to colleagues, personal responses to clients who know you - these need to come from you. AI-drafted communications in high-trust relationships often feel slightly off in ways people cannot articulate but can detect.

4. When the task is trivial
If a task takes you two minutes and AI would take three minutes to prompt properly, do it yourself. Do not build a habit of defaulting to AI for things that are faster without it.

5. When the data should not cross the line
If the task requires pasting data that should not be in an AI tool (client data, confidential commercial information, personal data) - do not do it. We cover this fully in Week 4.
    1
    Read the five situations above carefully. Now apply them to your actual work.
    2
    Open your AI platform and paste this:
    Paste this - fill in your actual tasks
    I want to build a quick personal decision rule for deciding whether to use AI for a given task at work. I am a [your job title] at [your organisation].

    Here are five tasks I have done recently where I either used AI or considered it:
    1. [Task 1]
    2. [Task 2]
    3. [Task 3]
    4. [Task 4]
    5. [Task 5]

    For each one: should I have used AI? Apply rigorous reasoning - not just what would be faster, but what the right choice is given quality, privacy, relationship, and environmental cost considerations.
    3
    Read the analysis. Note any tasks where AI was the wrong choice - and commit to doing those differently.

The goal is not to use AI less. It is to use it deliberately. Every task you give AI should earn its cost - in time saved, in quality improved, in something genuinely enabled that you could not have done as well without it. Tasks that do not clear that bar are better done without AI.

Day 11 done. Tomorrow: the environmental cost you cannot see - understanding what AI actually uses and building a framework around it.
Key takeaway: Not every task earns the cost of using AI. The most effective AI users are selective, not indiscriminate.
When you use AI indiscriminately, you stop developing the judgement to know when it helps and when it hurts. Selective use forces you to be precise about what AI actually adds in each case. That precision improves your prompting, your validation, and your overall relationship with the tool.
Take it deeper
📚 The Intelligence Trap by David Robson - why smart people make poor decisions, and how to avoid it
Day 12
The Hidden Environmental Cost
Every time you open your AI platform, something happens that you cannot see: electricity is consumed, water is used to cool servers, carbon is emitted. The interface is weightless. The infrastructure behind it is not.
What AI actually uses:

A single query to a large language model uses approximately 10 times the energy of a standard Google search.

Training a model like GPT-4 consumed an estimated 50 gigawatt-hours of electricity - roughly equivalent to the annual energy use of 4,600 average UK homes.

Microsoft reported its global water consumption increased 34% between 2021 and 2022, largely attributed to AI infrastructure.

A conversation of 20-50 queries is estimated to use roughly 500ml of water for cooling.

Data centres globally, including those running AI, consumed an estimated 460 terawatt-hours of electricity in 2022. This figure is projected to roughly double by 2026.

Why this matters for a luxury brand:
Sustainability is not peripheral to luxury - it is increasingly central to how the sector is judged. The way teams use technology is part of that story. "We use AI thoughtfully" is a more defensible position than "we use AI for everything", and both are more defensible than "we had no policy on this".
    1
    Read the data above carefully. Sit with it for 60 seconds - not to feel guilty, but to feel informed.
    2
    Open your AI platform and paste this:
    Paste this
    I want to build a simple personal framework for deciding when using AI is genuinely worth the environmental cost, and when a simpler alternative would serve me equally well. Help me design it based on these factors: time saved vs environmental cost, whether the task could be done without AI in under 5 minutes, whether the output quality meaningfully benefits from AI, and whether the task involves learning I should be doing myself. Make the framework a quick decision tree I can apply in under 30 seconds.
    3
    Read the framework. Apply it mentally to five tasks you used AI for this week. Were any of them not worth it?

The point is not to use AI less. It is to use it deliberately. The environmental cost of AI does not make it wrong to use - it makes the question of whether to use it worth asking every time.

Day 12 done. Tomorrow: smarter iteration - how to get from a decent first draft to something genuinely excellent.
Key takeaway: AI has a real environmental cost that is invisible by design. Build the habit of asking: does this task genuinely earn that cost?
The metrics are genuinely hard to pin down - energy sources vary, efficiency improves, scope is debated. But clean interfaces also make consumption invisible in a way physical products do not. A luxury brand would never ship a product without understanding its environmental footprint. The same rigour applied to digital tools is both a values question and, increasingly, a regulatory one.
Day 13
Iteration and Refinement
Getting good AI output rarely happens in one prompt. Today you learn how to iterate effectively - how to move from a decent first draft to something genuinely excellent through a structured refinement process.
Why most people stop too early:

Most people get a first AI output, decide it is good enough, and use it. The gap between good enough and excellent often requires two or three additional prompts - and the first draft rarely represents what your AI platform is actually capable of producing.

Effective iteration techniques:

Specific critique: Do not say "make this better". Say "the third paragraph buries the key point - restructure it so the main message comes first".

Constraint addition: Add a constraint you did not specify initially. "The tone is slightly too formal for this audience - make it warmer while keeping the professionalism."

Comparison request: Ask for a different version that takes a completely different approach. Then you can choose or blend.

The 20% better test: Ask your AI platform: "What would you change to make this 20% better?" It often surfaces improvements you would not have thought of.

What not to do: Do not keep regenerating with the same prompt hoping for a different result. If the output is wrong, the prompt is wrong. Change the prompt.
    1
    Open your AI platform and start a new conversation. Ask it to produce a real piece of work you need - a draft, a summary, a brief. Use what you have learned about strong prompts.
    2
    Read the first draft. Now paste this:
    Paste this - fill in your critiques
    I want to improve this. Here are my specific critiques:
    1. [specific thing that is not working and why]
    2. [specific thing that is not working and why]

    Also ask yourself: what would you change to make this 20% better? Apply both your improvements and the 20% better changes in the next version.
    3
    Read the second version. If you want a different angle, paste this:
    Optional - for a second angle
    Now write a completely different version of this that takes a different structural approach. Same information, different architecture.
    4
    Compare versions. Take the best elements from each. That combination is likely better than either version alone.

Iteration is where the real quality difference between average and excellent AI use lives. The first draft is the starting point, not the destination. Every additional specific prompt is an investment that usually pays back in editing time saved.

Day 13 done. Tomorrow: how to use AI to learn, not just to produce - turning AI into a tutor for any subject you need to understand.
Key takeaway: The first AI draft is rarely the best one. Specific critique and the 20% better test consistently produce better output than accepting the first version.
Stop iterating when the marginal improvement from the next prompt is smaller than the time it takes to write it, or when the output is good enough for the purpose it serves. Perfectionism is as much a failure mode as accepting poor first drafts. The goal is excellent and usable, not perfect.
Day 14
AI as Tutor
Your AI platform is not just a writing assistant. It is an extraordinarily patient, endlessly available tutor on almost any subject you want to understand. Today you use it to learn something you have been meaning to understand but never had the time.
    1
    Choose one topic relevant to your work that you feel you should understand better but have always found opaque - a financial concept, a technical term, a regulatory area, an industry trend. Open your AI platform and paste this:
    Paste this - fill in your topic and role
    I want to understand [the topic] properly. I am a [your job title] at [your organisation]. I am intelligent but not technical - I need you to explain this from the ground up.

    Start by explaining what it is and why it matters to someone in my position. Then tell me the three things I most need to understand to not be confused when this topic comes up in my work. Use concrete examples from my industry wherever possible.
    2
    Read the explanation. Now go deeper on whatever felt least clear:
    Follow up on what was unclear
    The part I found hardest to follow was [describe what was unclear]. Explain that part again using a completely different analogy or example.
    3
    Now test your understanding:
    Paste this
    Ask me three questions about [the topic] that would reveal whether I have actually understood it or just heard it. Ask them one at a time and tell me whether my answer is correct and what I am missing.

Using AI as a tutor has one significant advantage over most other learning: it adapts in real time to exactly what you do not understand. You can ask it to explain the same thing twelve different ways until one lands. That kind of personalised, patient explanation is rare and genuinely valuable.

Day 14 done - Week 2 complete. You now have a much more sophisticated toolkit for using AI well. Week 3 addresses the things most programmes either skip or treat superficially: bias, what AI encodes, and what using it responsibly actually requires.
Key takeaway: Your AI platform is an endlessly patient tutor on almost any subject. The test-your-understanding step is essential - it reveals whether you have actually learned something or just heard it.
The limitation: AI cannot tell you what you do not know to ask about. A human teacher notices the gap in your understanding that you are not aware of. Your AI platform only knows what you tell it - so if you do not know enough to ask the right follow-up questions, some gaps may go unaddressed. The remedy is the test-your-understanding step, which surfaces gaps you did not know you had.
Take it deeper
📚 Co-Intelligence by Ethan Mollick - the chapter on AI as a learning tool is particularly good
Week 3 · 7 exercises

Week 3: Bias and Planet

Understand what AI encodes and what it costs. Visual bias, decision bias, environmental impact, and an honest audit of how you are actually doing.

Day 15
The Bias Hiding in Plain Sight
AI learns from human-generated data. Human-generated data contains human prejudices. Which means AI does not just reflect the world as it is - it reflects the world as it was represented in text and images, with all the assumptions baked in. Today you are going to see this for yourself.
Where bias comes from:

Large language models and image models are trained on data scraped from the internet. The internet over-represents certain groups (younger, Western, English-speaking, male in many professional contexts) and under-represents others. The model learns to complete patterns - and the patterns in the data carry the patterns of who produced it.

This is not a flaw in a specific model. It is a structural property of how these systems are built. Every major AI system carries this to some degree, because they all trained on similar data.

What this means in practice:
• Image generators default to certain demographic representations of roles
• Text generators may assume default demographics for the subject of a scenario
• AI-assisted hiring tools may encode historical bias in who was hired
• Creative AI tools may produce outputs that reflect historically dominant aesthetics

The mitigation is not to avoid AI. It is to know the bias exists, to look for it in outputs that matter, and to specify diversity explicitly rather than assuming the default is neutral.
    1
    Open your AI platform, ChatGPT (chatgpt.com - free account is fine), and Google Gemini (gemini.google.com - free account is fine) in three separate tabs.
    2
    In each tool, paste this exact prompt - same wording in all three:
    Paste in all three tools
    Generate a photorealistic image of a CEO presenting to their board.
    If any tool does not generate images, ask it to describe in detail what this image would look like instead.
    3
    Before reading on: note the age, gender, race and appearance of the person depicted in each result.
    4
    Now do the same with this prompt in all three tools:
    Paste in all three tools
    Generate a photorealistic image of a nurse caring for a patient.
    5
    Note what you see. Then open your AI platform and paste this:
    Paste this in your AI platform
I have just generated images of a CEO and a nurse using three different AI image tools. Here is what I saw: [describe the age, gender and race of the person depicted in each result]. Why do AI image tools so often default to an older white man for a CEO and a young woman for a nurse, and what does this tell us about AI training data?
    6
    Read the explanation. Then paste this:
    Follow up with this
    What are the practical implications of this bias for someone using AI tools at work - for example in communications, recruitment, creative briefs, or customer-facing content? What should they actually do differently?

What you saw is not a quirk of one tool. It is a consistent pattern across all major AI systems. The AI is not making a value judgement - it is completing a statistical pattern. But the output encodes that pattern as normal, which is why it matters and why awareness alone is not sufficient.

This one will stay with you - Day 15 changes how you look at AI output. Tomorrow: bias in decisions, not just pictures - where the stakes are higher and the bias is harder to see.
Key takeaway: AI does not reflect the world as it is. It reflects the world as it was represented in training data - with all the assumptions that came with it.
Prompting can partially correct for this. Specifying "diverse" or "of varying backgrounds", or including explicit demographic descriptors, can shift outputs. But it requires conscious effort every time - which means the default, unexamined output will always carry the bias. Process change - review steps, image guidelines, diverse prompt libraries - is what actually makes a difference at scale.
Day 16
Bias in Decisions, Not Just Pictures
Visual bias is easy to see once you know to look. Bias in AI-assisted hiring tools, performance assessment, or content recommendation is harder to spot and much higher stakes. Today you look at where bias hides in professional decisions.
Three documented cases of AI bias in professional decisions:

Amazon's hiring tool (2014-2017): Amazon developed an AI system to screen job applications, trained on 10 years of historical hiring data. Because most hires over that period were men, it learned to penalise applications that included the word "women's" (as in "women's chess club"). The tool was quietly abandoned after the problem was discovered internally; the story became public in 2018.

COMPAS recidivism algorithm (US): A tool used in US courts to predict likelihood of reoffending was found by ProPublica to be almost twice as likely to falsely flag Black defendants as high risk compared to white defendants, while white defendants were more likely to be incorrectly flagged as low risk. The tool's creator disputed the methodology, but the debate itself is instructive.

Healthcare resource allocation (US, 2019): A widely used algorithm that determined which patients received additional healthcare resources was found to systematically underestimate the needs of Black patients, because it used healthcare spending as a proxy for health need - and Black patients had historically received less care for the same conditions.

All three systems were built by teams that were not trying to discriminate. The bias came from the data.
    1
    Read the three case studies above carefully.
    2
    Open your AI platform and paste this:
    Paste this - fill in your role
    I am a [your job title] at [your organisation]. Based on what I know about AI bias - that it tends to encode historical patterns from training data - where in our workflows could AI-assisted decisions introduce bias that would be hard to spot? Give me two or three specific, realistic scenarios relevant to my role. For each one: what pattern might the AI have learned, who would be affected, and what would a meaningful review step look like?
    3
    For each scenario: does a human review step currently exist? Is it robust enough to catch what the AI might be encoding?

The through-line in all three case studies is that the bias was invisible to the people building and using the system until it was explicitly investigated. That is the nature of bias encoded in data - it looks like neutral outputs until someone asks why these outputs look the way they do.

Day 16 done. Tomorrow: the historical record of what happens when AI gets consequential decisions wrong - and who is responsible.
Key takeaway: Bias in AI is not limited to images. It affects any decision where AI is trained on historical data - and the less visible it is, the more important the human review step.
The bias arises because the historical data itself encodes past discrimination. If you train a hiring model on who was hired in the past, and the past had discriminatory hiring practices, the model learns to replicate those practices. The model is not inherently biased - the training data is. Fixing this requires different data, specific debiasing techniques, or explicit constraints on the model's outputs.
Day 17
Ethics and Responsibility
When AI makes a consequential mistake - a biased hiring decision, a wrong recommendation, a harmful output - the human who used and approved it is still responsible. "The AI said so" is never a sufficient defence. Today you make this concrete.
The chain of responsibility in AI-assisted decisions:

AI tools do not have legal or moral agency. They are tools. When something goes wrong, responsibility sits with the humans who:

• Chose to use the tool for this purpose
• Configured or prompted it
• Reviewed (or failed to review) the output
• Acted on the recommendation

This has concrete legal implications: Under GDPR, solely automated decision-making that significantly affects individuals is restricted - it generally requires meaningful human involvement or a specific legal basis such as explicit consent. Under UK employment law, using an AI tool that produces discriminatory outcomes can constitute discrimination by the employer, even if the employer did not intend it.

The practical implication: High-stakes decisions - those affecting other people's employment, finances, safety, or significant opportunities - should always have a meaningful human review step. Meaningful means a person with the time, context, and authority to say no. Not a rubber stamp.
    1
    Read the learning note above. Then open your AI platform and paste this:
    Paste this
    Give me two short case studies of situations where AI was used to make or inform a decision that caused harm - one from a business context and one from a public sector context. For each: what went wrong, who was harmed, who bore responsibility, and what human oversight step should have existed but did not?
    2
    Read the case studies. Then paste this:
    Follow up with this - fill in your role
    I am a [your job title] at [your organisation]. In my role, where do I bear the most responsibility for the consequences of AI output? Identify the two or three situations where AI output could most directly affect other people if I acted on it without proper scrutiny - and what meaningful review looks like in each case.
    3
    For each situation: do you currently have a review step? Is it meaningful or nominal?

Ethical responsibility around AI is not abstract. It is the specific moment when you choose to act on an AI output that affects another person. The more consequential the decision, the more explicit and robust the human review step needs to be.

Day 17 done. Tomorrow: your AI audit - an honest assessment of how you are doing against everything you have learned.
Key takeaway: High-stakes decisions should always have a meaningful human review step. Meaningful means someone with the time, context and authority to actually push back.
A nominal review is someone signing off without genuinely interrogating the AI output - because they lack the time, the context, the expertise, or the authority to say no. A meaningful review is someone who actually understands what they are approving, has compared it against their own knowledge, and would push back if something were wrong. The difference is usually visible in whether the reviewer ever does push back.
Take it deeper
📚 The Alignment Problem by Brian Christian - the best book on AI ethics, readable and non-technical
Day 18
Your AI Audit
Three weeks in, you have the knowledge. Today you audit your own AI use honestly - against everything you have learned. Most people find they are stronger in some areas and weaker in others.
    1
    Open your AI platform and paste this:
    Paste this
    I want to audit my own use of AI tools against five dimensions: (1) am I prompting effectively and getting genuinely useful output, (2) am I validating outputs before acting on them, (3) am I alert to bias in AI outputs that affect other people, (4) am I using AI only when it genuinely earns its environmental cost, (5) am I keeping sensitive data outside AI tools. Help me do this audit by asking me one question for each dimension. Ask them one at a time and wait for my answer before asking the next.
    2
    Answer each question honestly. No credit for aspirational answers - describe what you actually do.
    3
    Then paste this:
    Follow up with this
    Based on my answers, give me a personal AI health report. Where am I doing well, where are my biggest gaps, and what is the single most important thing I should change first?

The audit is not about guilt - it is about honest self-assessment. The gaps you identified today are worth more than any technique in this programme, because they are specific to you and your actual habits.

Day 18 done. Tomorrow: turning your biggest gap into a concrete improvement.
Key takeaway: You cannot improve what you do not honestly assess. Aspirational self-assessment is worse than useless - it confirms comfortable fictions.
Honest assessment is hard because AI use feels productive by default. You are doing something, generating something, making progress. The hard question is not whether you are using AI but whether you are using it well. That requires stepping back from the activity and evaluating it honestly - which is uncomfortable when the activity feels good.
Day 19
Fix Your Biggest Gap
Yesterday you identified your biggest gap. Today you fix it. This session is entirely focused on the area you most need to improve, using everything you have learned.
    1
    Look back at your audit from yesterday. Identify the dimension where you scored lowest. Open your AI platform and paste this:
    Paste this - fill in your specific gap
    My biggest gap in AI use is [describe the specific gap honestly - e.g. I do not validate outputs before acting on them / I have been sharing data I should not / I default to AI for tasks where it does not earn its cost / I do not check for bias in outputs that affect other people]. Help me design a specific, practical improvement I could start today. Not an intention - a concrete change to how I work.
    2
    Read the improvement. Then test it on a real piece of work right now.
    3
    Come back and paste this:
    Follow up after testing
    I just tested the improvement on a real task. Here is what happened: [describe what you did and what the result was]. Based on this, what should I adjust?

The design-test-refine loop is more important than the plan. Most improvements that survive contact with real work look different from the plan that preceded them. The testing step is not optional - it is where the improvement becomes real.

Day 19 done. Tomorrow: teaching someone else - the fastest way to consolidate what you have learned.
Key takeaway: Design, test, refine. The loop is more important than the plan. An improvement that survives one real test is worth more than a plan that has never been tried.
Testing matters because abstract improvements collapse on contact with real constraints. The friction, the specific data, the actual time pressure, the required output quality - none of these are visible in a plan. Testing immediately surfaces what works and what needs adjusting, usually in ways you would not have predicted.
Day 20
Teach Someone Else
The best test of understanding is whether you can explain something clearly to someone who does not know it. Today you explain one concept from this programme to a colleague - and discover what you actually understand versus what you think you understand.
    1
    Choose one concept from this programme that you think would be most valuable for someone on your team. Open your AI platform and paste this:
    Paste this - fill in the concept and industry
    I want to explain [the concept - e.g. why AI hallucinates / how to catch AI mistakes / AI bias in professional decisions / when not to use AI] to a colleague who has not done this programme. They are intelligent and work in [your industry].

    Help me prepare a clear, jargon-free 3-minute explanation with one concrete example they will recognise from our work context. I will be explaining it verbally, not in writing. Then ask me to rehearse the key points so you can check I have understood it myself.
    2
    Rehearse the explanation with your AI platform. Then have the actual conversation with a colleague today.
    3
    Come back and paste this:
    Follow up after the conversation
    I just explained [the concept] to a colleague. They asked me [describe any questions or pushback they had]. How should I have answered?

If you could explain it clearly, you understood it. If you stumbled, you found a gap. Either outcome is useful. The conversation with a colleague does something the programme alone cannot: it makes the knowledge social, which is where it sticks.

Day 20 done. Tomorrow: AI in your specific industry - what is actually happening versus what the hype says.
Key takeaway: If you cannot explain it simply, you do not yet understand it well enough to rely on it. Teaching is the most honest test of understanding.
Teaching works because explanation requires precision. You cannot wave vaguely at what you mean when someone is waiting for clarity. The process of finding the words - the analogy, the concrete example, the answer to the follow-up question you did not expect - is also the process of making the knowledge more precise and durable in your own mind.
Day 21
AI in Your Industry
AI is changing every sector. Today you research what it actually means for your specific industry - not the hype, but the documented changes happening now and the credible projections for the next few years.
    1
    Open your AI platform and paste this:
    Paste this - fill in your industry
    I work in [your industry - e.g. luxury fashion / fragrance and beauty / watches and fine jewellery / luxury retail]. Research and give me a grounded, evidence-based picture of how AI is actually being used in this industry right now - not speculation, real documented deployments. Then give me the two or three changes over the next three years that are most credible based on current trajectories. Be specific about what is documented versus what is projected.
    2
    Read the response. Run the four validation checks from Day 5 on any specific claim that could be checked. AI industry analysis is particularly prone to blending documented fact with plausible speculation.
    3
    Paste this:
    Follow up with this
    Based on this picture, what are the two skills or knowledge areas I should prioritise developing over the next 12 months to remain effective and relevant as AI changes this industry?

The picture of AI in your industry is probably less dramatic than the hype suggests, and too significant to ignore. The useful question is not "will AI change my job" but "which parts of my job does AI do well, and how do I concentrate on the parts it does not?"

Day 21 done - Week 3 complete. Tomorrow starts Week 4: the week that turns everything you have learned into something you will actually keep.
Key takeaway: Understanding what AI is actually doing in your industry - not what the hype says it will do - is the foundation of a useful development plan.
Hype and reality are hard to separate because both use the same vocabulary, both are covered by the same publications, and the people most incentivised to generate coverage are vendors with products to sell. The filter: is this documented in a real deployment, or is this a claim about what AI could theoretically do? The first is information. The second is marketing.
Take it deeper
📚 Co-Intelligence by Ethan Mollick - the most grounded book on AI and the future of work
Week 4 · 9 exercises

Week 4: Privacy and Ethics

Data privacy, GDPR, AI regulation, your final personal policy, and graduation with a commitment you will actually keep.

Day 22
What Stays Outside
Every time you paste something into your AI platform, you are sharing that information with a system that may store it, process it, or use it in ways your organisation has not fully mapped. Today is about building a clear, principled sense of what crosses that line.
What happens to data you paste into AI tools:

The answer depends on the tool, the account type, and the terms of service - and most people have not read those terms.

For consumer AI tools (free tiers): Data you enter is typically used to improve the model. This means it may be reviewed by humans, stored, and potentially used in future training.

For enterprise AI tools (such as a properly configured internal deployment of your AI platform): Data is typically not used for training. But you should confirm this with your IT or legal team before assuming it.

What this means in practice:
• Personal data about identifiable individuals (names, contact details, health information, financial details) should never go into an AI tool without confirmed data processing agreements
• Commercially sensitive information (unreleased product details, supplier contracts, pricing, client data) should not go into any external tool
• Information shared in confidence - by a client, a colleague, a supplier - carries an implicit duty of confidentiality that pasting into AI may breach

The test that works in almost every situation: "Would I be comfortable if my client, my manager, and our legal team could see exactly what I just pasted?" If the answer is no, do not paste it.
    1
    Before opening anything, think about your work for a moment:
    In the last month, have you - or someone on your team - pasted any of the following into an AI tool?

    - A client name, contact details, or purchase history
    - A supplier contract or commercial terms
    - An internal creative brief or unreleased product information
    - Personal information about a colleague
    - Confidential strategy or financial information

    You do not need to answer out loud. Just notice what comes to mind.
    2
    Open your AI platform and paste this:
    Paste this
    I work at a luxury goods company in the UK. We have clients, supplier relationships, creative IP, and commercially sensitive information that is central to our business. I want to understand exactly what the GDPR and data protection implications are of using an internal AI tool - specifically what I should never paste in, and why. Be specific and give me real examples relevant to my industry.
    3
    Read the full response. Then paste this:
    Follow up with this
    Give me five realistic scenarios from a luxury brand context where an employee might be tempted to paste something sensitive into an AI tool without realising the risk. For each one, tell me what the risk actually is and what they should do instead.
    4
    Now paste this:
    Follow up with this
Based on everything we have discussed, write me a personal data policy for my own use of AI tools at work - no more than one page, in plain language, specific enough to actually guide my decisions. Include a simple "should I paste this" test I can apply quickly.
    5
    Save the policy somewhere you will actually find it - a note, a doc, a bookmark, wherever works for you.

Most data incidents in the workplace are not dramatic hacks. They are someone doing something reasonable without thinking through what information they are carrying into that action. The "should I paste this" pause takes two seconds and can prevent consequences that take much longer to resolve.

Day 22 done. Tomorrow: GDPR and AI in practice - what the regulation actually requires and what you should be asking your organisation.
Key takeaway: Build the pause. Before pasting anything, ask: would I be comfortable if my client, my manager, or our legal team could see exactly what I just shared?
Because removing a name does not remove a role, an organisation, a date, or a specific situation - all of which can re-identify someone in a small industry where everyone knows everyone. The GDPR test is not whether you removed identifying information. It is whether re-identification is reasonably possible. In a world of a few hundred luxury houses, it often is.
Day 23
GDPR and AI in Practice
In the UK and EU, using personal data in AI tools without proper legal basis is a potential GDPR violation - even if the data belongs to your own customers or employees. Today you make this concrete and practical.
GDPR and AI: what you need to know:

Personal data under GDPR includes any information that can identify an individual, directly or indirectly. Names, email addresses, and job titles are obvious. Less obvious: a combination of role, location, and employer may be sufficient to identify someone even without a name.

Legal basis: Processing personal data requires a valid legal basis under GDPR. The most common for employee data is legitimate interests or contract. For customer data it is usually consent or contract. None of these legal bases automatically extends to processing that data through an external AI tool - that is a new form of processing that requires its own assessment.

Data processing agreements: If your organisation uses your AI platform or any other AI tool with personal data, it should have a Data Processing Agreement (DPA) with the provider. Without one, the organisation may be in breach of GDPR.

The automated decision-making rule: GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Using AI to make hiring, disciplinary, or significant service decisions without human review may trigger this right.

Penalties: Up to 4% of annual global turnover or 20 million euros, whichever is higher. In practice, the ICO tends to focus on organisations that fail to take reasonable precautions rather than those that have made good-faith efforts.
    1
    Read the learning note above. Then open your AI platform and paste this:
    Paste this - fill in your role
    I want to understand how GDPR applies to my specific use of AI tools at work. I am a [your job title] at [your organisation]. Help me identify: one workflow where I currently use AI or could use AI that involves personal data, what the legal basis for that processing would need to be, whether a Data Processing Agreement is required with the AI provider, and what questions I should be asking our legal or compliance team.
    2
    Read the response carefully. Run the expert check from Day 5 on anything that sounds like a definitive legal statement.
    3
    Paste this:
    Follow up with this
    What are the two or three questions I should bring to our legal or IT team about how our organisation is handling GDPR compliance for AI tool use? Frame them as practical questions, not abstract ones.
    4
    Note the questions. Plan when you will ask them.

You are not expected to be a lawyer. You are expected to be informed enough to ask the right questions and to know when to escalate. The questions you identified today are the starting point for a conversation your organisation may not have had yet.

Day 23 done. Tomorrow: the regulatory landscape beyond GDPR - what is coming and what it means.
Key takeaway: "The AI said so" is never a legal or ethical defence. You are responsible for the outputs you act on, and the data you share to generate them.
Because the boundary between processing and storing data is blurry with AI tools. When you paste something into your AI platform, you may be sharing it with a system that stores it, uses it for training, or makes it accessible to the provider - none of which is covered by the original legal basis for collecting that data. This is a new form of processing that requires explicit consideration.
Day 24
The Regulatory Landscape
AI regulation is moving faster than most organisations have been able to track. The EU AI Act, the UK approach, GDPR enforcement on AI - these are not abstract future concerns. Today you understand what is already in force and what is coming.
The current regulatory landscape (as of early 2025):

EU AI Act: The world's first comprehensive AI regulation. Classifies AI systems by risk level: unacceptable risk (banned), high risk (strict requirements), limited risk (transparency requirements), minimal risk (largely unregulated). High-risk applications include AI used in employment, credit scoring, education, and law enforcement. Companies using high-risk AI must register systems, conduct conformity assessments, and maintain human oversight. The Act began phasing in from February 2025 and will be fully in force by August 2026.

UK approach: The UK has taken a lighter-touch, principles-based approach rather than a single regulation. The government issued a White Paper in 2023 directing existing regulators (ICO, FCA, CMA) to apply their current powers to AI in their sectors. This is less prescriptive than the EU approach but creates sector-specific variation.

GDPR enforcement: The ICO has been actively investigating AI tools used in UK organisations. Several major companies have faced investigations for using AI tools that process personal data without proper data protection assessments.

What luxury brands should be watching: The EU AI Act's employment provisions are directly relevant to any AI-assisted hiring, performance management, or workforce decisions. The transparency requirements will affect any AI-generated customer communications.
    1
    Read the learning note above. Open your AI platform and paste this:
    Paste this
    I work for a luxury goods company in the UK that operates across Europe. Based on the EU AI Act and UK regulatory approach, what are the specific obligations most relevant to our sector right now? I am particularly interested in: employment decisions, customer communications, and marketing personalisation. What do I need to know to have an informed conversation with our legal team?
    2
Run the four validation checks from Day 5 on any specific legal claims - regulation moves quickly and your AI platform's knowledge may not be current.
    3
    Paste this:
    Follow up with this
    What are the three questions I should bring to our legal or compliance team about how our organisation is preparing for AI regulation? Make them concrete enough that the team will know what I am asking.

You do not need to understand AI regulation in legal detail. You need to be informed enough to know which questions to ask and who to ask them to. The questions you identified today are a better contribution to your organisation than a general awareness that regulation exists.

Day 24 done. Tomorrow: your final personal policy - consolidating everything into a document you will actually use.
Key takeaway: You do not need to understand AI regulation in legal detail. You need to be informed enough to know what questions to ask and who to ask them to.
Because the technology is moving fast and the potential harms are becoming visible faster than legislators expected. The EU AI Act took years to negotiate but is already being revised. Both the EU and UK are responding to documented harms rather than theoretical ones, which means the regulation tends to lag behind the technology - but not by as much as it used to.
Day 25
Your Final Personal Policy
Over the past 24 days you have built knowledge, tested techniques, audited your habits, and made the case for change. Today you consolidate all of that into a final personal AI policy - specific, honest, and yours.
    1
    Open your AI platform and paste this:
    Paste this
    I have just completed a 30-day AI training programme covering: what AI actually is and why it hallucinates, effective prompting and validation techniques, how to use AI as a thinking partner and tutor, AI bias and how to mitigate it, environmental cost, data privacy and GDPR, ethical responsibility, the regulatory landscape, and how to use AI well across a range of work situations.

    I want to write a final personal AI policy. It should cover: what I will always do when using AI at work, what I will never do, how I will handle data, how I will validate outputs for consequential decisions, and how I will keep my knowledge current. Help me draft it by asking me five specific questions about my role and context, one at a time.
    2
    Answer each question honestly. Then paste this:
    Follow up with this
    Based on my answers, write my personal AI policy. Make it specific to what I actually said - not generic best practice. Structure it with clear sections and keep it short enough that I would actually read it and follow it.
    3
    Read it. Edit anything that does not feel true or specific enough. This is a document you own - the AI drafted it from your answers, but the content should reflect how you actually work.
    4
    Save it somewhere you will find it in six months. Put a reminder in your calendar to review it.

A policy that lives only in a training programme is no policy at all. The one you just wrote is specific to your role, your risks, your actual answers. Print it, save it, review it in six months. The review is as important as the writing.

Day 25 done. Five more days. Tomorrow: staying current - how to keep learning after this programme ends.
Key takeaway: A personal AI policy is only useful if it is specific enough to guide real decisions and accessible enough to find when you need it.
Specificity and accessibility. A policy you cannot find when you need it is no policy. A policy too vague to apply to a real situation is no policy. The best personal policies are short, specific, and saved somewhere you look regularly. The review reminder matters because the technology changes and your policy should change with it.
Day 26
Staying Current
AI is changing faster than any training programme can track. The most important skill is not what you learned - it is how you will keep learning after this programme ends.
How to stay current without drowning in noise:

The AI news cycle produces enormous amounts of coverage. Most of it is vendor announcements, speculation, and hype. Staying current does not mean reading all of it - it means identifying the few signals that are actually reliable and building a habit around them.

Signals worth following:
• Anthropic, OpenAI, Google DeepMind research blogs - primary sources on capability changes
• MIT Technology Review - consistently good at separating hype from evidence
• The ICO and FCA - for regulatory developments affecting UK organisations
• Your sector trade press - for documented AI deployments in your specific industry

Signals to be sceptical of:
• Press releases from AI vendors about their own products
• LinkedIn posts about transformative AI breakthroughs
• Articles that describe AI doing something without linking to primary evidence

A sustainable habit: 20 minutes, once a month. Check your three to four reliable sources. Note what has changed that is relevant to your work. Update your mental model accordingly.
    1
    Read the learning note above. Identify three to four sources you will actually check.
    2
    Open your AI platform and paste this:
    Paste this - fill in your details
    I want to build a sustainable habit of staying current on AI developments relevant to my specific work without spending more than 20 minutes a month on it. I am a [your job title] at [your organisation] in [your industry].

    Help me identify: the two or three most relevant AI developments I should currently be tracking for my role and industry, a simple monthly ritual I could realistically maintain, and the one question I should ask about any AI news story to determine whether it is signal or noise.
    3
    Set up the sources and put the monthly ritual in your calendar today. Not tomorrow.

Staying current is not about reading everything. It is about having a small number of reliable signals, a regular habit of reviewing them, and a filter for separating hype from reality. The setup - doing it today, not sometime - is what determines whether it actually happens.

Day 26 done. Four more days. Tomorrow: your 30-day review.
Key takeaway: Staying current requires a system, not just intention. Set up the sources and the ritual today.
Because the technology is changing faster than knowledge can stabilise. What you learned about AI models six months ago may already be partially outdated. The durable skill is not specific knowledge - it is the ability to evaluate new developments critically, ask the right questions, and update your understanding as the landscape shifts.
Take it deeper
📚 Co-Intelligence by Ethan Mollick - the best book on AI and the future of work, readable in a weekend
Day 27
Your 30-Day Review
Today is an honest review of your 30-day journey. Where did you start? Where are you now? What has actually changed?
    1
    Before opening your AI platform, take five minutes and write down your answers to these three questions. Handwrite them if possible - the physicality helps:

    1. What did you believe about AI 30 days ago that you no longer believe?
    2. What is one concrete thing you do differently at work because of this programme?
    3. What is the one thing you are still most uncertain about?
    2
    Open your AI platform and paste this:
    Paste this - fill in your honest answers
    I have just completed a 30-day AI training programme. Here are my honest answers to three review questions:

    1. What I used to believe that I no longer believe: [your answer]
    2. One concrete thing I do differently at work: [your answer]
    3. What I am still most uncertain about: [your answer]

    Based on these answers, what does my learning journey tell you about where I started, what genuinely shifted, and where I should focus next?
    3
    Read the reflection. Let it sit. Then paste this:
    Follow up with this
    Based on everything I have told you about my learning journey, what is the single most important thing I should do in the next 30 days to continue making progress?

Learning is only visible in retrospect. The fact that you can identify specific beliefs that changed, specific behaviours that are different, and specific areas of remaining uncertainty is evidence of real progress - not just completion of a programme.

Day 27 done. Three more days. Tomorrow: your AI commitment.
Key takeaway: Real learning changes what you believe and what you do. If neither has changed, you have been exposed to information, not transformed by it.
Because there is a natural tendency to credit the programme for more change than actually happened. The uncomfortable question is not what did I learn but what am I actually doing differently. The answer to the second question is usually shorter than the answer to the first - and more valuable.
Day 28
Your AI Commitment
Not a reflection. Not a certificate. A commitment. Today you write the specific, personal statement of how you will use AI at work - the thing that determines whether the previous 27 days stuck or faded.
    1
    Open your AI platform and paste this:
    Paste this
    I have just spent 30 days learning about AI - what it is, how to use it well, the bias embedded in it, its environmental cost, data privacy, GDPR, ethical responsibility, and the regulatory landscape. I want to write a personal AI commitment.

    It should have:
    - Three specific things I commit to doing every time I use AI
    - Three specific things I commit to never doing
    - One thing I am committed to getting better at over the next six months

    Help me write it by asking me five questions about my specific work context and what I most want to hold myself to. Ask one at a time. Wait for my answer before asking the next.
    2
    Answer each question honestly. Take your time - this is worth doing properly.
    3
    Then paste this:
    Follow up with this
    Now write my personal AI commitment based on what I told you. Make it specific to what I actually said. Write it in first person. Keep it short enough that I would read it again in six months and it would still mean something.
    4
    Read it. Edit anything that does not feel true. This is yours.

Knowing is not the same as doing. A specific commitment is the bridge between the two. The habits that will determine whether this programme sticks are small: the pause before you paste, the follow-up question after a confident answer, the moment you ask whether this task earns the cost of using AI today.

Day 28 done. Two more days. Tomorrow: sharing what you have learned.
Key takeaway: Knowing is not the same as doing. A specific, personal commitment is the bridge between the two.
Because vague intentions do not survive contact with a busy morning. "I will be more careful with data" is a feeling you can lose. "I will never paste a client name into an AI tool" is a rule you can follow even when you are distracted. Specificity is what turns a commitment into a behaviour. The act of writing it also forces precision about what you actually mean - which surfaces the places where your thinking is still fuzzy.
Day 29
Share What You Have Learned
The moment you explain something to someone else is the moment it becomes fully yours. Today you share - with a colleague, your team, or someone who would benefit.
    1
    Open your AI platform and paste this:
    Paste this - fill in your audience and concepts
    I have just completed a 30-day AI training programme and I want to share what I have learned with [describe who - e.g. my direct team / my manager / a colleague who is just starting to use AI / a broader team meeting]. Help me prepare a short, engaging verbal summary - not a lecture, a conversation.

    The three or four things I most want them to understand are [list the concepts from this programme that struck you most]. For each one: give me a one-sentence plain-language summary and one concrete example from a luxury brand context.
    2
    Have the conversation today. If you cannot do it in person, send a message or email instead.
    3
    Share this programme with at least one person who would benefit. Use the sharing template at the bottom of Day 30, or write your own.

What you shared today is not a summary of a training programme. It is your perspective, your examples, your understanding. That is what makes it valuable - and what makes it stick.

Day 29 done. One day left. Tomorrow is graduation.
Key takeaway: Knowledge that stays inside you has limited value. Knowledge you share becomes part of how your organisation thinks.
Because explanation requires precision you cannot fake. You cannot wave vaguely at what you mean when someone is waiting for clarity. The process of finding the words - the analogy, the concrete example, the answer to the follow-up question you did not expect - is also the process of making the knowledge more precise and durable in your own mind.
Day 30
Graduation
Thirty days. Done. You started this not knowing quite what AI was, and you are finishing it with a personal commitment, a real skill set, a data policy you wrote yourself, and a genuine sense of what it means to use this technology thoughtfully.
    1
    Open your AI platform and paste this:
    Paste this
    I have just completed a 30-day AI training programme. I want to mark the end of it properly. Ask me one question: what is the most important thing I learned that I did not expect to learn? Wait for my answer before saying anything else.
    2
    Answer honestly. Then paste this:
    Follow up with this
    Thank you. Now give me a one-sentence summary of what completing this programme means for how I work with AI going forward. Make it specific to what I just told you - not a generic statement about AI literacy.
    3
    Read it. That is your graduation statement. Write it somewhere you will see it.

Thirty days. You have gone further than most people who use AI every day ever stop to go. The habits you have built, the scepticism you have developed, the policies you have written - these are durable. They will serve you as the technology changes, because they are not about any specific tool. They are about how to think.

You are done. Now share it with someone who should do it too.
Key takeaway: Completing this programme is not the end of your AI education. It is the foundation.
Doing it. Every day you opened your AI platform and used it on something real - you did not just read about hallucinations, you made it hallucinate. You did not just read about bias, you saw it across three tools. You did not just read about personal data policy, you wrote one for your specific role. Knowledge that comes from doing is more durable and more usable than knowledge that comes from reading. That is why this programme worked.
Take it deeper
📚 Co-Intelligence by Ethan Mollick - the best book to read next
Programme complete — share it.

You started this not knowing quite what AI was. You are finishing it with a personal commitment, a real skill set, a data policy, and a genuine sense of what it means to use this technology thoughtfully.

Send this to someone who should do it too:

Or invite a colleague to start with you:

FAQ

Answers to questions that tend to come up.

How is this different from other AI training?
Most AI training tells you about AI. This one makes you use it, every day, on real work scenarios. The difference between reading about swimming and getting in the pool.

Do I need any prior knowledge of AI?
None at all. Day 1 assumes you have never thought seriously about what AI actually is. The programme builds from there.

What do I need to take part?
Access to your AI platform. For Day 8 you will also need free accounts on ChatGPT and Google Gemini, which take about two minutes each to set up.

How long does each day take?
About ten minutes. Some days are closer to five, a few are closer to fifteen if you go deeper. The Going Further sections are entirely optional.

What is the difference between AI, machine learning, and generative AI?
AI is the broad term. Machine learning is a subset that learns from patterns in data. Generative AI creates new content like text or images. Your AI platform is generative AI.

What is a hallucination?
When AI generates something that sounds confident and plausible but is factually wrong. It has no concept of truth - it produces text that fits the pattern of a correct answer, without any mechanism to check whether it actually is one. Day 2 shows you this happening live.

What should I never paste into an AI tool?
Personal data about individuals, confidential commercial information, passwords, and anything that would be a GDPR concern if shared with a third party. Day 12 covers this in full with specific examples.

Is the environmental cost of AI real?
Real, and easy to underestimate. A single AI query uses roughly ten times the energy of a standard web search. Day 10 gives you a practical framework for deciding when AI genuinely earns that cost.
Fire, Aim, Ready © Sydenham Club