10 min read · By Jean-Baptiste Berthoux

Managers: When NOT to Use AI (And Why It Matters)

Strategic decisions, mentoring, meetings: when AI weakens your leadership. A guide for managers who want to stay in control.

Table of Contents

1. The question nobody is asking
2. 5 situations where AI weakens your leadership
3. The real danger: silent erosion of judgment
4. Framework: the "AI or not" matrix
5. How to protect your human thinking time
6. Key Takeaways
7. Frequently asked questions

---

1. The question nobody is asking

The dominant narrative is clear: adopt AI everywhere, as fast as possible, or get left behind. Conferences, LinkedIn newsletters, consulting firms -- everyone is pushing in the same direction. More automation. More delegation to models. More "efficiency gains."

But here is the problem this race ignores: some decisions get worse when they pass through AI.

A 2026 Deloitte study found that 60% of executives already use AI to support decisions, yet 85% of business leaders regret or question past decisions. Even more striking: 57% of organizations operate at low decision-making maturity. AI speeds up decisions, but not necessarily their quality.

This article takes a clear position: there are specific moments when a manager must turn off AI and think alone. Not out of nostalgia, but out of strategy. Knowing when to unplug has become a leadership skill in its own right.

For a broader framework on balancing AI and productivity, see our complete guide to productivity in the AI era.

---

2. 5 situations where AI weakens your leadership

2.1. Irreversible strategic decisions

Jeff Bezos calls them "one-way doors": decisions you cannot easily reverse. Merging two teams, pivoting a product, letting someone go, signing an exclusive partnership.

AI excels at high-frequency, low-impact decisions backed by abundant data. But for strategic choices, it has a structural flaw: it optimizes based on past patterns. Strategic decisions, by definition, require breaking away from patterns -- imagining what does not exist yet.

McKinsey identifies three capabilities that generative AI cannot replicate: setting aspirations (a machine cannot define an organization's purpose), "reading the room" (anticipating emotional reactions to change), and using empathy to map the right people to the right projects.

Action item: reserve your "one-way door" decisions for deep thinking sessions -- no screen, no prompt. Block 90 minutes in your calendar, protected like a client meeting.

2.2. Mentoring and difficult conversations

Telling a team member they did not get the promotion they expected. Walking a junior talent through their first confidence crisis. Confronting a senior leader who is going off track. These moments build the trust and loyalty that hold a team together.

AI can generate a conversation script. But following a script in a high-stakes human exchange is the managerial equivalent of reading your wedding vows off your phone. The person across the table will detect the absence of authenticity immediately.

A Harvard Business Review study found that LLMs systematically under-use "subjective questions" -- the ones that explore emotional and political dynamics ("What is not being said?"). In a mentoring context, these are precisely the questions that unlock breakthroughs.

2.3. Meetings where you need to read the room

Effective managers do not just listen to words in a meeting. They pick up on silences, exchanged glances, the tension in a project manager's voice, the disengagement of a colleague staring at their screen.

Delegating your meeting synthesis to an AI transcription tool captures the text and loses the subtext. The same HBR research tested 13 LLMs with over 1,600 executives and found that every single model underweighted productive questions ("What do we actually do now?") while overweighting interpretive analysis. In other words, AI analyzes but does not push toward action.

If you are leading a crisis meeting or a board review, your cognitive presence is irreplaceable. Turning off automatic transcription during those 45 minutes is an act of leadership, not a technology delay.

2.4. Ethical trade-offs and value-laden dilemmas

Should you keep a more expensive local supplier or switch to a low-cost subcontractor? Accept a client whose practices are questionable but whose contract would save the quarter? Delay reporting a security bug that only affects 0.3% of users?

These decisions are not data problems. They are values problems. AI can model the financial consequences of each option, but it cannot decide what kind of company you want to be.

More concerning: a study published in Nature demonstrates that delegating decisions to AI increases dishonest behavior. When an algorithmic intermediary absorbs perceived responsibility, humans feel less guilty about taking ethical shortcuts.

2.5. Innovation and product vision

Asking AI "What should our product strategy be for 2027?" generates a coherent, well-structured response that is -- almost always -- average. AI produces statistical consensus, not vision.

Breakthroughs come from a leader's personal conviction, born from observing their market differently. Steve Jobs did not consult customer data to design the iPhone. Reed Hastings did not query a chatbot before pivoting from DVD rental to streaming.

Your role as a manager-decision maker is to synthesize information *and* add intuition forged through experience. AI can fuel the first part. Not the second.

---

3. The real danger: silent erosion of judgment

The most insidious risk is not a single bad decision. It is the gradual atrophy of your ability to decide without AI.

Researchers in Computers in Human Behavior have documented a phenomenon called *automation bias*: the more we delegate to a system, the less we question its recommendations. The judgment muscle, like any muscle, weakens without exercise.

Fortune reports that companies allocate 93% of their AI budgets to technology and only 7% to workforce preparation. The result: middle managers -- the people who actually orchestrate change -- become what Wharton professor Eric Bradlow calls "the weak link in a chain that's suddenly moving faster than ever."

This imbalance creates what researchers call the "donut hole": C-suite executives invest heavily in AI, younger workers adopt it natively, but middle managers, caught in between, lose confidence in their own judgment.

To learn more about avoiding the cognitive overload that comes with this pressure, read our guide on preventing burnout as a professional.

---

4. Framework: the "AI or not" matrix

Here is a straightforward framework for deciding whether a managerial task deserves AI assistance.

Delegate to AI (without hesitation):

Pre-meeting document summaries
First drafts of reports and presentations
Quantitative data analysis
Competitive and industry monitoring
Repetitive task scheduling

Use AI as a sparring partner (with critical distance):

Brainstorming strategic scenarios
Preparing structured feedback
Simulating team reactions
Drafting internal communications

Keep 100% human (non-negotiable):

Irreversible strategic decisions
Mentoring and sensitive feedback conversations
Ethical trade-offs
Product vision and positioning
Crisis meetings where emotional reading is essential
Individual performance evaluation

The guiding principle fits in one sentence, borrowed from Deloitte: "The real question isn't 'What does the model say?' It's 'Who gets to disagree with it, and how fast?'"
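To make the sorting habitual, the matrix can even be sketched as a simple lookup. This is purely illustrative: the three tiers and example tasks come from the lists above, while the function and dictionary names are hypothetical, not part of any real tool.

```python
# Illustrative sketch of the "AI or not" matrix as a lookup table.
# Tiers and example tasks mirror the article's lists; names are invented.

DELEGATE = "delegate to AI"
SPARRING = "AI as sparring partner"
HUMAN = "keep 100% human"

MATRIX = {
    "document summary": DELEGATE,
    "report first draft": DELEGATE,
    "quantitative analysis": DELEGATE,
    "strategic brainstorming": SPARRING,
    "feedback preparation": SPARRING,
    "irreversible decision": HUMAN,
    "mentoring conversation": HUMAN,
    "ethical trade-off": HUMAN,
    "product vision": HUMAN,
}

def triage(task: str) -> str:
    """Return the matrix tier for a task, defaulting to human judgment."""
    # Unlisted tasks fall back to HUMAN: when in doubt, do not delegate.
    return MATRIX.get(task, HUMAN)

print(triage("quantitative analysis"))  # delegate to AI
print(triage("crisis meeting"))         # keep 100% human (default)
```

The default matters as much as the table: any task you have not consciously classified stays human until you decide otherwise.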

---

5. How to protect your human thinking time

Identifying the right moments to unplug from AI is not enough. You also need to structure your schedule so that these moments of human reflection actually happen.

Block "deep decision" time

Like Cal Newport's deep work concept, strategic decisions require long, uninterrupted periods. Schedule at least two 90-minute blocks per week dedicated to AI-free reflection. No ChatGPT, no Copilot, no auto-transcription. A notebook, a pen, your brain.

Use the Pomodoro Technique for managerial work

Contrary to popular belief, the Pomodoro Technique is not just for developers. A 25-minute focused thinking cycle followed by a 5-minute break is ideal for structuring the preparation of a 1:1, the analysis of a strategic plan, or the drafting of performance feedback. Pomodorian lets you customize your intervals and add ambient sounds to protect your concentration -- without AI in the reflection loop itself.

Establish "AI-free meetings"

Set the rule: for crisis meetings, team retrospectives, and sensitive one-on-ones, no automatic transcription. No real-time AI summaries. Tell your team these moments are deliberately "non-augmented" and explain why: trust is built in conversations that are not being surveilled.

Keep a decision journal

After every significant decision, write down by hand: the context, the options considered, your reasoning, and what AI would likely have recommended. This journal becomes a training tool for your judgment -- and an audit trail if the decision is questioned later.
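Even a handwritten journal benefits from a fixed structure. A minimal sketch of one entry, assuming the four fields recommended above (the class and field names here are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical decision-journal entry with the four fields the article
# recommends: context, options, reasoning, and the AI counterfactual.

@dataclass
class DecisionEntry:
    decided_on: date
    context: str
    options_considered: list[str]
    reasoning: str
    ai_counterfactual: str  # what AI would likely have recommended

entry = DecisionEntry(
    decided_on=date(2025, 3, 1),
    context="Merge two product teams after the reorg",
    options_considered=["merge now", "wait one quarter", "keep separate"],
    reasoning="Prolonged uncertainty costs more than a clean break now.",
    ai_counterfactual="A model would likely recommend waiting for more data.",
)
print(len(entry.options_considered))  # 3
```

Recording the counterfactual is what turns the journal into a training tool: over time you can compare where your judgment and the model's recommendation would have diverged, and who was right.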

For more on protecting your focus in an interruption-heavy environment, read our guide on setting boundaries when working from home, and learn how group Pomodoro sessions can bring structured focus to your entire team.

---

Key Takeaways

AI amplifies routine decisions but can degrade strategic ones. 85% of leaders regret past decisions, per Deloitte -- AI does not fix that problem.
Five situations demand 100% human leadership: irreversible decisions, mentoring, high-stakes meetings, ethical trade-offs, product vision.
Judgment erosion is silent. Automation bias pushes managers to accept AI recommendations without questioning them.
Use the "AI or not" matrix to sort every managerial task between delegation, sparring, and human reflection.
Actively protect your thinking time with AI-free blocks, non-augmented meetings, and a decision journal.

---

Frequently asked questions

Does a manager who avoids AI risk looking out of touch?

This is not about avoiding AI -- it is about deploying it strategically. A manager who knows *when not* to use AI demonstrates a higher level of discernment than one who uses it for everything, including decisions where it adds little value. According to McKinsey, the best leaders in the AI era blend digital fluency with human depth.

Can AI at least prepare strategic decisions?

Absolutely. AI is excellent at gathering data, simulating scenarios, and identifying blind spots. The problem arises when preparation substitutes for deliberation. Use AI to fuel your thinking, then unplug to decide. The final call must be yours -- with your name, your accountability, and your conviction behind it.

How do I convince leadership that some meetings should remain AI-free?

Lead with data. The Nature study shows that AI delegation increases dishonest behavior. The HBR research demonstrates that LLMs underweight the subjective questions that matter most in team dynamics. Propose a pilot: two months of AI-free 1:1s, with qualitative tracking of team satisfaction and conversation quality.

Does the Pomodoro Technique work for managerial tasks?

Yes, with adapted intervals. Managers who use Pomodorian to structure their preparation blocks -- dossier review, feedback writing, strategic planning -- report better ability to focus on one subject at a time instead of bouncing between emails, Slack, and AI tools.

What is the biggest long-term risk of over-delegating to AI?

Judgment atrophy. Like a muscle you stop exercising, the ability to evaluate ambiguous situations, weigh contradictory arguments, and own an imperfect decision deteriorates when you systematically delegate to an algorithm. The generation of managers who never learned to decide without AI will face a serious disadvantage when confronted with novel situations that, by definition, no model could have anticipated.

Ready to focus smarter?

Try Pomodorian — the AI-powered Pomodoro timer. Free, no account required.
