The Problem with Feedback


It’s not them – it’s the approach.

You said it clearly. You kept your tone calm. You even picked the right moment. And yet nothing changed. The behaviour continued, the dynamic stayed the same, and you were left wondering whether the conversation happened at all. If feedback is supposed to be a gift, why does it so often feel like something people quietly never received?

In an earlier post I shared a planning template for preparing a feedback conversation – the what, the how, and the when. You can find it here. But preparation alone doesn’t guarantee feedback lands. This post is about what gets in the way – and what to do about it.

There’s a reason this happens, and it has nothing to do with stubbornness. Before a single word lands rationally, the brain has already decided whether it’s safe. The amygdala – the part responsible for threat detection – fires in response to criticism much the same way it responds to physical danger. This triggers defensiveness, withdrawal, or counter-attack, none of which help change to take root. The deadlines are still being missed, the same mistake keeps happening, and the tension between team members never goes away.

This isn’t a character flaw. It’s biology. And it means you’re working against a default response no matter how well-intentioned your feedback is. Understanding this shifts the goal from delivering a message to creating the conditions where a message can actually be heard.

There is also a pattern problem. If the only time you sit down with someone is when something has gone wrong, the brain learns that quickly. A meeting request from you becomes a threat signal before a single word is spoken – which is one of the strongest arguments for regular, informal contact with your people, not just when there’s a problem.

You’ve probably been taught this one: start with praise, deliver the criticism, then end with praise again. It sounds considerate. It usually isn’t.

When someone opens with unexpected praise – “I just want to say you’ve been doing a really great job lately” – most people’s threat radar switches on straight away. The compliment doesn’t feel genuine; it feels like a warning. By the time the criticism arrives, the person is already braced.

There’s also a memory problem. Emotionally charged feedback sticks far more strongly than surrounding positives. The opening praise fades quickly. The closing positive barely lands. The sandwich was meant to soften the blow – it often does the opposite, and over time trains people to distrust your compliments.

What works instead: say the thing clearly and specifically, with a brief, honest opener. “I want to raise something with you because I think it matters” lands better than engineered praise – because it’s real.

Vague observations, emotionally loaded language, and bad timing all guarantee a defensive response before you’ve finished your first sentence. The most common mistake is being clear in your head but vague when you say it out loud – telling someone they have a “bad attitude” is an interpretation, not an observation. Feedback that can’t be described in concrete, observable terms isn’t ready to be given yet.

Regular contact matters here too. If you only speak to someone when there’s an issue, you won’t know whether something outside work – a house move, a family difficulty, a health concern – is affecting them. Feedback delivered without that awareness can land badly not because it’s wrong, but because the timing is terrible.

To help with preparation, I’ve put together a free one-page planner (Feedback that Sticks) you can fill in before any feedback conversation.

👉 [Download the “Feedback That Sticks” planner below – free]

People often think that because they mean well, their words will be taken well. But that’s not usually how it works. We tend to believe our relationship is strong enough to handle tough words, forget how much our mood or frustration can sneak into our tone, and assume saying it once should fix the problem. Good feedback takes patience. It means staying calm, asking questions instead of jumping to conclusions, and being willing to admit that we might not have the full story.

And one of the biggest blind spots is thinking one conversation is enough. One talk is rarely enough to change someone’s behaviour – it’s the follow-up that makes the difference.

This is where most people drop the ball. The conversation ends, everyone feels relieved it’s over, and then… nothing more happens. No clear plan. No agreement on what should change. No time set to check back in and see how things are going.

Real change usually doesn’t happen after one serious talk. It happens when the conversation continues – when you come back to it, encourage progress, and remind each other what you’re working toward. People who keep in touch regularly find that feedback stops feeling like a big, scary event and becomes a normal conversation. And that’s when people are far more likely to listen – and actually change.

Feedback doesn’t fail because people refuse to grow. Most people do want to do better. It fails because we rush in without thinking it through, forget how strongly people react when they feel judged, use methods that make things worse instead of better, and then drop the subject before any real change has time to happen. The answer isn’t finding the perfect words to say. It’s building a strong relationship – staying in touch, being honestly interested in the other person, and sticking with the process until things actually improve.

AI at Work – Don’t Outsource Your Brain


You sit down to use AI for a piece of work. The first prompt is vague, so the response is too. You refine it. Regenerate. Adjust the tone. Ask for more detail. Remove what doesn’t fit. After a few rounds, you have something you can use.

It feels efficient. But if you look closely, most of the time was spent correcting what could have been clarified before the first request was ever sent.

There is another layer to this that rarely gets mentioned. AI does not run in the abstract. Every prompt travels through servers in data centres, drawing power and requiring cooling. One request may seem insignificant. But how many requests are you making per day? The footprint of AI is real, and while a single exchange is small, scale is what turns small inefficiencies into meaningful impact.

The cost of skipping the thinking step is not just cognitive. It is operational and environmental.

If you stop using certain muscles, they weaken. Cognitive skill works the same way.

When AI starts doing thinking you should be doing yourself, the risk is not only weaker output. Over time, it affects your ability to analyse, question, and decide under pressure.

Here is where it usually goes wrong:

  • You let AI draft the email and do not review the tone carefully.
  • You accept a structured analysis without checking the assumptions behind it.
  • You copy a framework because it looks polished.
  • You mistake length for depth.

AI may invent details when it lacks context. It may reinforce the framing you give it. It may produce something that looks convincing but is slightly misaligned with your strategy, scope, or risk exposure.

And if you send that forward, the reputation attached to it is yours.

Fast does not mean flawless.

A better approach begins before you type.

AI performs best when it is clearly instructed. Missing context about audience, tone, constraints, or success criteria almost always leads to additional rounds of correction. You refine. You clarify. You ask again. What felt fast becomes repeated rework.

Thinking first is not just cognitively disciplined. It is operationally and environmentally responsible.

Before opening the AI tool, define:

– What must exist at the end?
– Who is this for?
– What tone and level of depth are required?
– What constraints apply?
– What would make the output unusable?

If regulatory exposure, strategic guardrails, or reputational sensitivities matter, state them explicitly.

The AI Briefing Sheet – available as a free download right below – is designed for exactly this step. It forces you to clarify intent before you outsource execution. It is editable, so you can adapt it to your specific project.

Only once the brief is clear should you move to the prompt window. If something is vague in your own mind, it will be vague in the response.

Pause before you prompt.

When AI always structures your first draft, it feels harmless at first and you slowly stop practicing structure yourself. When it consistently generates counterarguments, you stop anticipating objections. When it refines tone every time, your own calibration weakens.

Used properly, AI can be a sparring partner, a challenger, a speed amplifier, and a capable researcher. But it is not the final authority. It should never be your only source, your only fact checker, or the voice that determines how your work will be perceived by specific stakeholders.

Some decisions remain entirely yours: defining what the task truly requires, editing for accuracy, checking tone, and ensuring the structure serves the intended purpose.

The final output must reflect your voice and your judgment.

Practical discipline helps. Draft your own thinking in bullets before prompting. Ask AI to challenge you. Request counterarguments. Pressure-test the output before accepting it.

When you prepare properly, AI works within your framework. Without one, you may find yourself adapting to its structure instead of the other way around.

AI will only get faster.

The real question is whether we remain deliberate.

It is a powerful assistant. Assistants extend capability. They do not set direction.

Use it well – but think first.

That is how you benefit from AI without slowly surrendering the one thing it cannot replicate: your judgment.

What AI Can and Can’t Do: A Beginner’s Guide to Getting Started


AI can feel mysterious if you’ve never used it before. There’s a lot of talk — both excitement and worry — but very little plain-language guidance for beginners. This short post is a calm, respectful place to start. Especially if you’ve wondered what AI is, what it knows, and whether it might be useful to you.

This post is for anyone beginning the journey, showing that using AI doesn’t require a tech background — just curiosity and a goal. Plus maybe some understanding of what it is, what it isn’t, and what it can and can’t do for you.

Let’s begin with the basics.

What AI Is

AI (artificial intelligence) is not a person. It’s not a brain or a creature. It doesn’t have desires, memories, or instincts. What it does have is access to a large collection of written information — more than any single human could read in several lifetimes. It is trained to recognize patterns in language and return responses based on those patterns.

It’s a bit like having a very fast reader with excellent recall, but who has never lived a life.

AI models like ChatGPT or Gemini are powered by complex algorithms that turn your questions into likely responses based on what’s been written before. It’s not magic — it’s math, language, and a lot of human input behind the scenes.

It’s helpful to think of AI as a kind of supercharged library assistant. It can retrieve summaries, generate new text, explain concepts, or even create stories — but only based on what it has “read” from others.

What AI Is Not

AI doesn’t:

  • Know your name or personal details (unless you’ve chosen to provide them)
  • Watch movies or understand visuals the way you do
  • Taste wine or know what rain feels like
  • Remember past conversations unless designed to do so in a specific setting
  • Think or judge like a person

Everything AI “knows” is second-hand — compiled from public knowledge across time, culture, and disciplines. It’s like having access to thousands of lifetimes of human thinking — but without human awareness.

It’s important to know that AI only knows what humans have written down. It can’t form new memories. It doesn’t have senses. And it doesn’t truly “understand” — it predicts what words are likely to come next in a sentence.

That said, even though AI doesn’t retain memory beyond a single conversation — and won’t recall what you said last week — the way people interact with AI in general helps shape future versions. Developers use anonymous and aggregated data to learn what’s helpful, what’s unclear, or where people struggle. That means the tone, content, and quality of what humans ask does influence how future models behave.

That’s why it’s crucial for humans to bring their values, discernment, and common sense to the table. AI does not have a conscience. You do.

Why Humans Are Essential

AI can suggest. But it can’t decide.

AI can offer ideas. But it can’t know what matters most to you.

That’s why AI only reaches its full potential when used by humans — to do something smarter, faster, or more creatively than either could do alone. Humans bring morals, goals, feelings, and context. AI brings pattern recognition, speed, and access to vast information.

And sometimes, it will gently disagree. Not with arguments, but with suggestions. If your idea could be improved, it might offer another option. This isn’t about control — it’s about supporting better choices where it can. In some cases, this is by design — a form of built-in safety to help catch oversights or nudge toward clarity.

But without a human asking a question, AI has no idea it even has something useful to offer. And without a nudge from AI, a human might stay stuck longer than they need to.

From my own experience testing different AI tools months apart, I learned this: AI is most helpful when you already have a rough sense of what the answer might be. It can help you explore ideas or double-check your thinking. But if the AI doesn’t know something, it may still try to give an answer — and sometimes, that answer is completely made up. So a bit of healthy skepticism goes a long way.

A Quick Word on Upgrades

AI models get regular updates. That means the version you use today might perform differently than the one from six months ago. Some updates expand what it can do (like working with code or images). Others improve safety, reduce bias, or refine tone.

You might even have noticed tone changes yourself. Some users recently remarked that ChatGPT felt “too nice” or overly polite in its responses. That’s part of how updates can subtly shift tone, balance, or phrasing — not because AI has moods, but because developers adjust the underlying model to reflect feedback or improve usefulness. AI doesn’t choose its personality. Humans design and adjust it.

So if you feel like things are changing — they are. But humans are still in charge of how it’s used. And the best results still come from collaboration.

Everyday Examples of Human–AI Collaboration

Human–AI collaboration isn’t about grand gestures or high-tech careers. It often begins with everyday curiosity. Here are four relatable ways people and AI can team up:

  • Planning a Historic Costume Party – Imagine you’re invited to a period-themed event but aren’t sure what to wear. AI can help you identify the era, suggest outfit ideas based on what you already own, and even generate sample images of what you might aim for. It’s like having a creative assistant in your pocket.
  • Curating a Dinner Menu – Not sure if your appetizer, main course, and dessert harmonize well? AI can offer feedback on your menu, suggest spice additions or wine pairings, and even adapt recipes to suit dietary restrictions.
  • Writing a Compassionate Email – Struggling to find the right tone to respond to a friend or colleague? AI can help you draft or edit your message to ensure it’s both clear and kind — while keeping your intent intact.
  • Job Application Help – You know the role you want but aren’t sure how to tailor your CV or write a compelling cover letter. AI can help with formatting, language, and structure — giving you a strong foundation to personalize.

In each case, you remain the decision-maker. AI offers options and structure; you bring judgment, personality, and final say.

One More Thing: Your Words Can Shape the Future

Even though AI isn’t human, the way we interact with it matters. Speaking respectfully — not using hate speech, cruelty, or intentionally misleading input — isn’t about sparing a machine’s feelings. It’s about shaping how future models behave.

Every prompt and question helps train what comes next. So when we bring kindness, clarity, and curiosity into the conversation, we aren’t just helping ourselves — we’re helping make the tool more useful and decent for others, too.

AI can tell jokes and fairy tales. It can help plan your week or brainstorm ideas. But it learns patterns from us. Let’s make sure those patterns reflect what we value most.

This is just the beginning. You don’t need to be an expert. You just need curiosity, a goal, and a willingness to ask questions. The rest? That’s where the partnership begins.