Ilya Petrov

Growth You Get. Every Tuesday, 7am CET

Hey, AI, Make Me a Growth Plan (3/?): The Diagnosis

The best direction to grow is the one your competitors can see clearly — and still can't take.

Slice by positioning

Part 2 was research — market sizing, customer segments, competitive landscape, voice of customer. Now: do something with it.

Diagnosis starts with a map. You have a big messy question — "how do we grow?" — and before you choose a path, you need to see the options. I wrote about this in Final Cut: you can slice the problem along audience, channel, product, business model, geography — and the axis you pick determines almost everything that follows. Pick wrong, and you'll produce a perfectly coherent strategy for the wrong problem.

So, which slice? Here I had to think about the exercise itself. I'm deliberately not feeding AI proprietary data — no churn rates, no conversion funnels, no sales numbers. So the question isn't just "what's strategically important?" It's "what's strategically important and something AI can reason about reliably with public information?"

That points to positioning. The inputs — what competitors ship, what they say about themselves, what developers say about them — are all public, observable, and checkable against sources. It's the one slice where AI's research strengths align with what you actually need to know.

The three layers

Think about what you need for a competitive positioning map. Features are facts — the product either supports CMake or it doesn't, the debugger either has peripheral register views or it doesn't. Check the docs, read the changelog. Positioning is observable — it's right there on the homepage, the product page, the conference talks. Not what the company believes internally, but what they decided to say to the market. Arguably more real than whatever lives in brand books. Perception is findable but biased — what developers say to each other on Reddit, Hacker News, Twitter. Real signal, skewed toward complaints and strong opinions. The interpretation needs care, but the data collection is a strength.

That's exactly what went into my question. I started with a meta prompt and then course-corrected on the fly:

Okay, I was thinking about how to slice the growth here, and I would like to approach it through perception/positioning lens. That means that I want to look at the following things: - audience segmentation - list of competitors + us (CLion) - for each competitor and us, have three layers: key functionality/features list (based on the competitor website), positioning (based on official messaging on the website/owned social media channels), perception (social sentiment, people discussing this product in the context of C/C++ development, e.g. on x, reddit, etc.) the overall objective will be then to map those positioning/perception with audience and audience needs and make a strategic choice — decide on the areas of opportunities where we CLion are better equipped to win: means an audience-positioning-perception angle that addresses an audience need, differentiate from competitors, supported by our product functionality/features. check what kind of research we did already. help me to decide on the plan to approach this step of the exercise. I may need to re-research certain topics (e.g. detailed competitor analysis). i'm planning to use deep research capabilities for this and may utilize whatever features you have there (e.g. limit the research with only official resources, etc.)

Three layers, all publicly accessible, all checkable against sources. The question is how to keep them separate.

Don't blend it!

The value in having three layers is the mismatch between them. What a company says about itself, what it actually ships, and what developers say to each other — those are often three different stories. That gap is exactly what you're looking for. So the research has to keep them separate.

I split the work into four passes, each with hard source constraints. First: distill audience needs from existing research — not new research, just organizing what we already had into an evaluation lens. This gives the rest of the work direction. Without it, AI produces feature matrices that are thorough but directionless. Second: features and positioning from official sources only — product pages, docs, blogs. What companies say about themselves, uncontaminated. Third: perception from community sources only — Reddit, HN, Twitter. Organized by source type, not by competitor, because developers compare multiple tools in the same thread. Fourth: synthesis, mapping all three layers against the needs, in conversation mode — because this step requires judgment at every turn.

Starting with demand-side needs introduced bias. You see what fits your pre-defined categories and miss what doesn't. I chose the trade-off knowingly. Directionless was the bigger risk.
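The four passes can be written down as data — which is roughly how I think about them: each pass has an allow-list of sources, and anything cited from outside that list is a sign the layers are bleeding into each other. A minimal sketch; the pass names, source labels, and the guard function are my illustration, not anything from the actual research setup:

```python
# Illustrative sketch: the four research passes as data, each with
# hard source constraints, plus a cheap guard against blending layers.

PASSES = [
    {"name": "audience_needs",        # distill existing research, no new sources
     "sources": {"existing_research"}},
    {"name": "features_positioning",  # official sources only
     "sources": {"product_pages", "docs", "official_blogs"}},
    {"name": "perception",            # community sources only
     "sources": {"reddit", "hacker_news", "twitter"}},
    {"name": "synthesis",             # conversation mode over prior outputs
     "sources": {"prior_pass_outputs"}},
]

def violations(pass_spec, cited_sources):
    """Return any cited source outside the pass's allow-list —
    i.e. a place where one layer contaminated another."""
    return set(cited_sources) - pass_spec["sources"]
```

So if the features-and-positioning pass cites a Reddit thread, `violations(PASSES[1], ["docs", "reddit"])` flags `{"reddit"}` — exactly the kind of blending the separation is meant to prevent.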

The output

Phase 1. Audience needs


Summary: 10 needs, split into two tiers: five universal ones every C++ developer shares (deep code intelligence, reliable debugging, CMake support, cross-platform development, AI assistance) and five that differentiate by segment — performance at scale for systems programmers, hardware-aware debugging for embedded developers, framework integration for audio/desktop developers, safety compliance for automotive, remote development for cloud-based teams.

Structural constraints: what competitors can't do

The matrix came back thorough. Competitors as columns, audience needs as rows, three-layer assessment per cell. And it was... fine. Correct. Complete. The kind of analysis where you nod along and think "yes, this all makes sense" and then realize you still don't know what to do.

"Competitor X is weak on need Y." Okay — but why? Can they fix it next quarter? That's a different question from "are they weak," and it's where the real strategy lives.

The question that unlocked the diagnosis wasn't "where are competitors weak?" It was: what are competitors structurally prevented from doing — even if they wanted to?

Weak-at is temporary. A competitor can ship a feature, hire a team, adjust a message. Structurally-prevented is durable. It comes from business model constraints, organizational incentives, and strategic trade-offs baked into how the company makes money.

In our case, the map showed several competitor "weaknesses." But when I pushed AI to explain why each existed, the picture changed.

Microsoft makes two C++ tools — Visual Studio and VS Code. Both showed up as competitors. But Microsoft can't unify them — VS Code is a platform play (ecosystem, extensions, Copilot distribution), Visual Studio is enterprise monetization (Windows, .NET, Azure). Making one great C++ IDE would cannibalize one strategy for the other. That's not a gap Microsoft will close. It's a consequence of their business model.

The embedded vendor IDEs — STM32CubeIDE, Keil, IAR — all showed weak code intelligence. Easy to read as "they haven't invested yet." But they haven't because their IDEs exist to sell chips and compilers, not to be great IDEs. World-class code intelligence doesn't sell more microcontrollers. That weakness is permanent.

AI-first editors like Cursor showed thin C++ support. Easy to read as "they're young, they'll get there." But deep C++ semantic tooling is a multi-year vertical investment that contradicts their horizontal go-to-market. Going deeper in one language doesn't serve their model.

The framework: when you see a competitive weakness, ask whether it's accidental (they haven't gotten to it yet) or structural (their business model prevents prioritizing it). Accidental weaknesses are dangerous to build strategy on — they can disappear in a quarter. Structural weaknesses are the foundation of durable positioning.

Permission to believe

Structural advantage tells you what's available. Permission tells you what's yours to take.

The audience has to allow you to make the claim. Not agree with it — that comes later. Accept that it's the kind of thing you could plausibly be. Brand strategists call this permissions and constraints. I think "permission to believe" captures what actually happens better. Nobody grants it formally. It's more like — the claim doesn't trigger an eye-roll.

In our case: JetBrains has earned permissions through years of IntelliJ, PyCharm, Rider. The market believes JetBrains makes deep, serious, language-specific developer tools. That transfers. JetBrains also has constraints: expensive, resource-heavy, not indie/cool, late to AI. Those close doors. And these permissions aren't uniform — an enterprise C++ team at an automotive company grants "serious and deep" easily. A solo indie game developer on a Mac might not. The audience you're targeting changes which permissions you hold.

Overlay permission on asymmetry and the viable positions emerge:

"The most serious cross-platform C++ IDE" — structural advantage supports it, brand permission supports it. Viable.

"The future of C++ development" — structural advantage might be there, but the brand doesn't carry "future" energy. The market wouldn't buy it.

"The lightweight, fast editor" — even if performance has dramatically improved, the weight of years of "JetBrains = heavy" is too strong. Maybe in three years. Not today.

Permission isn't binary — it's directional and expandable. The useful question isn't just "do we have it?" but "what's the shortest path to earning it, and is the investment worth it?" Some permissions are a quarter away. Some are a brand generation away.

The principle: positioning lives at the intersection of structural advantage and market permission. Miss the first, and a competitor takes the territory back. Miss the second, and the audience won't grant you the territory at all.

The opposite test

One more filter. This one sorts real positioning from wallpaper. Brand strategists have used versions of this test for decades to stress-test brand values. The interesting move is applying it to competitive positioning.

A positioning claim is only real if its opposite is a viable choice someone else could make.

"We make reliable developer tools." The opposite — "we make unreliable tools" — is absurd. Nobody would choose that. So it's not positioning. It's the generalization dressed as a strategic choice.

"The most serious C++ IDE" works because the opposite is credible. A competitor can be the most accessible. The most AI-forward. The fastest. Those are real positions that real products occupy. "Serious" implies trade-offs — depth over simplicity, power over approachability. Not everyone wants that. Which is the point.

Positioning isn't about being good. It's about choosing an axis the market finds meaningful and claiming one end of it. The competitor doesn't need to fight you for your end. They just need to argue that the axis doesn't matter. If they can make that stick, you've picked the wrong axis.

The strongest form of this is judo positioning: framing a competitor's advantage as their limitation. Position as "vendor-independent" and you don't attack the embedded vendor IDE's tight chip integration. You reframe it — "optimized for our chips" becomes "only works with their chips." Their strength becomes their cage. You didn't attack the feature. You changed what the feature means.

One caveat: judo positioning only works if the audience already carries latent dissatisfaction with the thing you're reframing. "Only works with their chips" lands because embedded developers already feel that lock-in pain. If they didn't, the reframe would sound like sour grapes. You're not creating the tension. You're giving it a name.

What's next

So where does the diagnosis land? Two directions stood out.

First: modern embedded. The developers working with STM32, NXP, ESP32 — stuck in vendor IDEs that were built to sell chips, not to be good IDEs. They know the code intelligence is weak. They know the debugging experience is a decade behind. But they stay because the toolchain integration is tight and switching costs are real. CLion can offer what their vendor structurally won't — a serious development environment — while supporting their hardware workflows. The vendor can't follow here without undermining their own distribution model.

Second: the VS Code upgrade path. There's a growing population of C++ developers who started in VS Code, hit the ceiling on refactoring, navigation, cross-platform debugging — and have nowhere obvious to go. Visual Studio is Windows-only and enterprise-heavy. Cursor and friends are going horizontal, not deeper into C++. CLion is the natural next step, and Microsoft's two-IDE split means they can't close that gap without cannibalizing one strategy for the other.

Both pass the filters. Structural constraints keep competitors from contesting the space. Brand permission lets JetBrains claim it. The opposite is viable — others can be lighter, faster, more AI-forward. That's the test passing.

The outputs I trust most from this exercise are the frameworks, not the specifics. Competitive asymmetry and positioning permission hold until someone's business model changes. For Microsoft, the chip vendors, the AI-first editors — that means they hold for a while.

Next: we pick this direction and build it into something a team could execute. Audience targeting, messaging, channels, costs. In compiler terms, the diagnosis was semantic analysis — checking whether the strategy makes sense given the constraints. Part 4 is code generation: turning validated logic into executable output. That's where I'm most curious whether the metaphor still holds, because execution is where most strategies quietly die.