AI isn’t your intern anymore — it’s your unfair advantage

How 11 battle-tested founders deploy AI to build capabilities competitors can’t match

After publishing my first article on how to build credible business plans and investor relationships in a harder market, one question kept coming back from founders: “And what about AI? How does it change the game for us?”

So I went back to the field.

Over the past few weeks, I sat down again with the same group of successful entrepreneurs (****) — the same operators whose insights shaped the first chapter of this series — to pressure-test a related but sharper question: How do you deploy AI not just to save time, but to build capabilities your competitors simply cannot match?

I listened, I challenged, I compared scars. As in my first piece, what follows is a synthesis built horizontally — no theory, no top-down lessons, just collective intelligence from founders navigating the same storms. Clear convergences, useful tensions, and sentences that won’t leave my head, drawn from the hundreds of startup founders who have joined my podcast over the past four years. (*)

I began with a thesis: AI’s value isn’t automation — it’s amplification of human judgment. The founders confirmed it, but with an edge I didn’t expect. “If you can’t explain why your AI made a decision,” Loïc Soubeyrand of Swile told me, “you don’t actually control your business.”

Maxime Leroux at ClimateView added: “We run forty people with twenty to thirty AI agents — but every output is traceable, every decision has a human signature.”

What changed is this: AI has moved from experiment to infrastructure. The question isn’t whether to adopt it, but how to do so without losing control of your product, your culture, or your ability to explain what happened when something breaks.

“If you can’t trace why your AI made a decision, you don’t control your business.” — Loïc Soubeyrand, Swile

“We doubled our workforce with AI agents — but never lost the ability to explain our recommendations.” — Maxime Leroux, ClimateView

The 10x rule: start where AI unlocks, not where it optimizes

The first pattern was sharp: don’t chase 20% efficiency gains. Hunt for 10x capability unlocks.

“If you’re 22 years old today, it’s the best opportunity to create a business,” says Maxime Leroux, CEO of ClimateView, echoing Sam Altman’s provocative statement. “You can start with agents, with artificial intelligence, as your first employees.” (**)

This isn’t hyperbole. For roughly $20 per month in AI subscriptions, a solo founder can now create professional websites, develop functional applications, analyze complex datasets, write compelling marketing copy, and execute campaigns that would have required a full team just three years ago. The barriers to launching have collapsed.

Hortense Harang, co-founder of We Trade Local (Fleurs d’Ici), put it plainly: “AI doesn’t just make things faster — it makes things possible that weren’t possible before.”

At Fleurs d’Ici, they analyse regional flower supply chains with a granularity that once required an army of consultants. “We can now offer hyper-local sourcing to florists who could never afford that depth of data. That’s not a cost save — that’s a new market.”

Rachel Delacour at Sweep frames the strategy: “If you use AI only to do existing work 20% cheaper, you’re building on sand.”

Everyone can do that, she adds. “The moat comes from unlocking capabilities others can’t match — because you moved first, collected better data, or built processes that compound.”

Mathieu Nebra at OpenClassrooms warns founders who confuse speed with defensibility: “It’s never been easier to launch something with AI. But it’s just as easy for someone else to disrupt you. The question is not ‘Can I build it?’ but ‘Can I build something that lasts?’”

Pascal Lorne, fresh from the GoJob exit, reframes the founder mindset: “AI gives you superpowers and creates vertigo. The field of possibilities is immense. We succeeded by starting from real problems — not from shiny tech.”

Takeaway: map your processes not by cost but by the impact they would have if they were 10x faster, deeper, or wider. Prioritise AI use cases that create new capabilities, not marginal savings.

The transparency tax you must pay

The second consensus was unequivocal: you must be able to explain how your AI reached a decision.

Soubeyrand at Swile was blunt: “We document every major prompt, version every model update, and require human review for decisions touching money or reputation.”

Not bureaucracy — risk management. “When an AI recommendation backfires — and it will — you need to explain what happened, why, and how you fixed it.”
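
To make the pattern concrete, here is a minimal sketch of such a review gate in Python. The categories, field names, and versioning scheme are illustrative assumptions on my part, not Swile’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative categories only; real sensitivity rules would be domain-specific.
SENSITIVE_CATEGORIES = {"payment", "refund", "public_statement"}

@dataclass
class AIDecision:
    category: str                   # e.g. "payment" or "support_reply"
    recommendation: str             # what the model suggested
    prompt_version: str             # every major prompt is documented and versioned
    model_version: str              # every model update is versioned
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    reviewed_by: str | None = None  # human signature, required before sensitive output ships

def requires_human_review(decision: AIDecision) -> bool:
    """Decisions touching money or reputation never ship unreviewed."""
    return decision.category in SENSITIVE_CATEGORIES
```

The point of the sketch is the traceability, not the gate itself: when a recommendation backfires, the prompt version, model version, and human signature tell you exactly what happened and who approved it.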

Leroux at ClimateView is even more structured: “master prompts,” no black-box code in production, human checkpoints at every critical step. “We double our capacity with AI agents, but we deploy nothing we can’t explain to a city council or a climate scientist.”

Marta Sjögren at Paebbl pushes this further with “decision journals” — logs capturing not just what AI recommended, but why a human accepted or rejected it. “We review them quarterly to find systematic blind spots — both ours and the AI’s.”
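
On paper, a decision journal can be as simple as an append-only log. Here is a minimal sketch of the idea in Python; the field names and the example entry are my own illustration, not Paebbl’s actual tooling.

```python
import json
from datetime import datetime, timezone

def log_decision(path: str, ai_recommendation: str, human_decision: str, rationale: str) -> None:
    """Append one journal entry: what the AI suggested, what a human chose, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,  # "accepted", "rejected", or "modified"
        "rationale": rationale,            # the "why" that quarterly reviews mine for blind spots
    }
    with open(path, "a") as journal:
        journal.write(json.dumps(entry) + "\n")

# Hypothetical example: a rejected recommendation and the reasoning behind the override.
log_decision(
    "decisions.jsonl",
    ai_recommendation="Raise supplier contract tier B prices by 12%",
    human_decision="rejected",
    rationale="Model ignored the price cap in our two largest accounts' contracts.",
)
```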

Axel Dauchez at Make.org ties this to culture: “AI accelerates everything — including mistakes. Without built-in accountability, you lose the ability to course-correct when the model drifts or the market shifts.”

Lesson: build transparency before scaling. Traceability isn’t compliance theatre — it’s operational survival.

“The era of geniuses is over. Now it’s curiosity, resourcefulness, and judgment.” — Pascal Lorne, GoJob

“Hire people who can have a productive conversation with a machine — and know when to override it.” — Eric Carreel, Withings

Hire warriors, not wizards

A third thread: the skills that got people hired three years ago won’t matter three years from now.

Nebra is direct: “We used to hire for what people knew. Now we hire for how fast they learn.”

AI-native adaptability beats credentials.

Nicolas Reboud at Shine sees the shift daily: a senior engineer left; the new hire had half the résumé but double the learning velocity. “We looked for people who can interrogate AI, not fear it or worship it.”

Eric Carreel at Withings insists on balanced skepticism: “Treating AI as gospel is dangerous. Dismissing it is dangerous too. You need people who can challenge it, test it, override it, learn from it.”

Lorne crystallises the cultural need: “Coders who once rode bicycles now travel at the speed of sound — but they still need to collaborate. Ideas come from humans, not models.”

Delacour offers the sharpest line: “I don’t need fragile divas. I want warriors — people with scars, who know how to fight and stay loyal.”

Implication: rewrite job specs. Hire for learning velocity, judgment, and the ability to collaborate with machines — not for static expertise.

The junior crisis — and how to fix it

Almost everyone raised the same concern: AI is erasing the traditional pipeline for junior talent. (***)

Delacour captures it: “If AI writes basic code and drafts reports, what do interns do? We used repetitive work to build foundational skills. That work is gone.”

Reboud notes: “We have fewer entry-level roles. The old apprenticeship model doesn’t fit the new economics.”

But several founders are already experimenting with solutions.

Carreel at Withings redesigns junior roles around AI stewardship: juniors review AI-generated code, test it, refine prompts, and learn faster because they see more patterns.

Nebra at OpenClassrooms formalised “AI apprenticeships”: juniors explain why they accepted or rejected AI suggestions. “We teach judgment through curation — not by protecting them from AI.”

Leroux rotates juniors across AI agents and projects with tight guardrails: rapid pattern exposure, safe learning loops.

Consensus: don’t abandon junior hiring — reinvent it. Teach judgment, not repetition. Pair juniors with AI under structured mentorship.

Act now or watch the window close

The final message was urgent: don’t wait for the perfect AI strategy. The learning window is now.

Leroux is blunt: “Companies designing the perfect AI strategy will lose to those learning by doing.”

Nebra: “Pick one problem AI can solve, test it, measure it, learn.”

Small, fast loops beat grand plans.

Delacour reframes adoption as culture: “AI isn’t a tech challenge — it’s a change-management challenge.”

Carreel adds a market warning: “Teams deploying AI systematically today will have 12–18 months’ advantage. That’s the difference between leading and following.”

Reboud is tactical: “Don’t hire consultants. Empower three people to experiment for 30 days. You’ll learn more than from any deck.”

What to do next, concretely 

  •  Hunt for 10x unlocks, not 20% savings.
  •  Build transparency as infrastructure.
  •  Hire for adaptability and judgment.
  •  Redesign junior roles around AI stewardship.
  •  Start one meaningful experiment this quarter.
  •  Ask what becomes possible — not just cheaper.
  •  Empower a small team to learn by doing, now.

If there’s one lesson for founders today, it’s this: AI deployment isn’t a project — it’s a capability you build.

The companies that win will institutionalise learning, maintain transparency, and use AI to unlock value competitors can’t access. Start small, move fast, document everything — and never lose the ability to explain why your AI made the decision it made.

Trust is still built through results — but now those results must be traceable, explainable, and grounded in judgment you can defend.


(*) This piece also draws on more than a hundred interviews I’ve conducted in my podcast “40 Nuances de Next” — a long-form archive of how real companies actually operate when decks meet reality.

(**) WIRED: “All of My Employees Are AI Agents, and So Are My Executives”

(***) ENTREPRENEUR: “OpenAI CEO Sam Altman: AI Agents Are Like Junior Employees”

(****) Interview notes with the entrepreneurs quoted, October 2025

Author

Olivier Mathiot
