The Brand Growth Control Model

The science-backed model for marketing

How the Brand Control Model works

Most marketing advice is opinion dressed as strategy. This is not.

The Brand Control Model is a structured diagnostic. It breaks the full marketing system into 60 pieces of evidence, scores each one against a fixed rubric, and rolls them up into a single number. AI agents run the scan. The output is built so anyone - founder, commercial lead, or marketer - can make informed marketing decisions from it.

This page explains how it works.

#1 Marketing is only two things

Marketing has two jobs.

In marketing science, these are called Mental Availability and Physical Availability. Every marketing activity either helps one of them or it is waste.

Mental Availability

Make the brand remembered.

Buyers should think of it when they have a need.

Physical Availability

Make the brand easy to buy.

Buyers should be able to find it, trust it, and act.

This is not our idea. The model is built on the work of three sources, and it turns that research into something a company can measure.

Source 1

Professor Byron Sharp and the Ehrenberg-Bass Institute.

Decades of evidence-based research on how brands actually grow. The source of mental and physical availability, distinctive brand assets, and category entry points.

Source 2

Les Binet and Peter Field, working with the IPA.

The long and short of marketing effectiveness. The evidence that brand building and sales activation need to run together, not against each other.

Source 3

Daniel Kahneman and behavioural science.

How buyers actually decide, including shortcuts, anchors, and trust signals. The foundation of the conversion side of the model.

#2 A structured model

The model has four layers.

Each layer is smaller and more specific than the one above it.

Layer 1

2 Results

The two outcomes marketing must create.

Layer 2

4 Control Blocks

The four parts of the brand a company must control.

Layer 3

12 Control Areas

The twelve smaller areas inside the blocks.

Layer 4

60 Brand Signals

The sixty specific pieces of evidence we measure.

The four Control Blocks cover the full marketing system in order:

Block 1

Choose the Market

Where to play, who to serve, where the brand can win.

Block 2

Build the Brand

Be recognised, understood, and chosen for a reason.

Block 3

Create Demand

Make the brand known in the right places, at the right rhythm.

Block 4

Convert Demand

Turn interest into action without friction.

Most frameworks cover one of these four. The Brand Control Model covers all four in one view. This matters because a company with a strong Block 2 and a weak Block 4 still loses. The model finds the weak link and names it.

#3 Agents are scoring non-stop

Every one of the 60 Brand Signals is scored the same way.

The rubric is fixed. The inputs change per company. Two analysts scoring the same company should reach a similar score, and if they do not, the method forces the disagreement into the open. Every signal gets two values: a score and a confidence level.

The score (0 to 4)

Score 0

Absent. No trace of the signal.

Score 1

Claimed, no evidence. The company says it, but nothing in the market backs it up.

Score 2

Evidence present, but used inconsistently.

Score 3

Evidence present, used consistently across touchpoints.

Score 4

Evidence present, used consistently, and hard for a competitor to copy.

Confidence level

Confidence Low

Little data found. Score is directional.

Confidence Medium

Enough data to be confident, but gaps remain.

Confidence High

Strong evidence across multiple sources.

Confidence matters because "we could not find it" and "we found five strong proofs of it" are different results. The model does not collapse them. Low-confidence signals are flagged for human review before decisions are made.
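The score-plus-confidence pairing above can be pictured as a small data structure. This is a sketch under assumed names (`Signal`, `Confidence` are hypothetical, not the model's actual implementation):

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class Signal:
    name: str
    score: int  # 0 to 4, per the fixed rubric
    confidence: Confidence

    def __post_init__(self):
        if not 0 <= self.score <= 4:
            raise ValueError("score must be between 0 and 4")

    @property
    def needs_human_review(self) -> bool:
        # Low-confidence signals are flagged before decisions are made.
        return self.confidence is Confidence.LOW

# A signal with strong evidence behind it is not flagged for review.
s = Signal("Promise Match", score=2, confidence=Confidence.HIGH)
```

The point of the structure is that the two values never collapse into one: a 2 with High confidence and a 2 with Low confidence travel through the model as different results.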

#4 A real-world example

The Promise Match Test

Inside Get Bought, inside Remove Friction.

The question it asks

When someone clicks one of your ads, does the landing page deliver the promise the ad just made? When it doesn't, you're paying to send qualified attention somewhere it won't convert. The ad budget gets spent. The visit happens. The lead doesn't.

How the agent scores it

The agent pulls live ads from LinkedIn, Google, and Meta, then matches each one against the page it links to. It checks whether the headline echoes the ad's promise, whether the offer is the same, whether the proof is relevant, and whether the next step is obvious within the first screen. A match means the click compounds. A mismatch means the click is paid for and wasted.
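As a simplified illustration of the matching step, headline-to-promise overlap can be scored as the share of the ad's words the page repeats. The real check is richer (offer, proof, and CTA are also compared); the function name and method here are assumptions for illustration only:

```python
def promise_match(ad_promise: str, page_headline: str) -> float:
    """Crude overlap score between an ad's promise and a page headline, 0.0 to 1.0."""
    ad_words = set(ad_promise.lower().split())
    page_words = set(page_headline.lower().split())
    if not ad_words:
        return 0.0
    # Share of the ad's promise words that the page headline repeats.
    return len(ad_words & page_words) / len(ad_words)

# A dedicated landing page echoes the promise; a generic homepage does not.
matched = promise_match("Cut onboarding time in half",
                        "Cut onboarding time in half with Acme")
generic = promise_match("Cut onboarding time in half",
                        "Welcome to Acme")
```

Averaging such a score across every active ad gives a figure like the 31% in the example result below a threshold where most clicks land on a page that does not continue the conversation.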

An example result

Test: Promise Match

Result: 2 out of 8 ads matched

Confidence: High

Finding: Six of eight active ads send clicks to a generic homepage instead of a page that delivers the ad's specific promise. The two that match, a webinar ad and a product comparison ad, convert at 4.2x the rate of the six that don't.

Evidence: 8 active ads reviewed across LinkedIn and Google. Average ad-to-page match score: 31%. Three ads promise an outcome the page doesn't mention. Two ads use a CTA the page doesn't repeat.

What this is costing: Roughly 60% of paid traffic is landing on a page that doesn't continue the conversation the ad started. At current spend, that's around €4,800 of monthly ad budget converting at a fraction of what it should.

Next action: Build dedicated landing pages for the four highest-spend ads this month. Pause the two ads that promise an outcome the product doesn't deliver. Re-test in 30 days.

Linked signals scoring low: Funnel Coherence, CTA Consistency, Proof-to-Promise Alignment.

That is one signal out of sixty. The other fifty-nine cover positioning, brand recognition, channel mix, retention, pricing perception, and the rest of the marketing system. The full scan produces a finding, a cost, and a next action for every one.

#5 Unbiased view of reality

The score is not a grade. It is a map.

The 60 signals roll up into 12 Control Areas. The 12 Control Areas roll up into 4 Control Blocks. The 4 Control Blocks roll up into a single Brand Control Score.

1 Which blocks are strong and which leak value.

2 Which areas need attention first.

3 Which signals are root causes, not symptoms.

4 Where confidence is low and human review is needed.

A company does not need a high score everywhere. It needs to know where to act, in what order, and why.
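One way to picture the roll-up is a plain average at each layer. The model's actual weighting is its own; the unweighted mean and the subset of names below are assumptions for illustration:

```python
from statistics import mean

# Hypothetical subset: signal scores (0 to 4) grouped by Control Area,
# Control Areas grouped by Control Block.
blocks = {
    "Build the Brand": {
        "Distinctive Assets": [3, 4, 2],
        "Positioning": [2, 2, 3],
    },
    "Convert Demand": {
        "Remove Friction": [1, 2, 1],
    },
}

# Signals roll up to Areas, Areas to Blocks, Blocks to one score.
area_scores = {b: {a: mean(s) for a, s in areas.items()}
               for b, areas in blocks.items()}
block_scores = {b: mean(areas.values()) for b, areas in area_scores.items()}
brand_control_score = mean(block_scores.values())
```

Reading the intermediate layers, not just the final number, is what turns the score into a map: here, Convert Demand is the weak block even though the overall score looks middling.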

#6 A view on the market

The model is built to be read by both humans and machines.

Every signal has a fixed definition, a fixed question, and a fixed rubric. That structure is what lets AI agents run the scan, in four steps.

01

Collect evidence

They pull real data from the company's public surface: website, landing pages, ad libraries, social feeds, search results, reviews, and press coverage.

02

Score the signals

They apply the rubric to the evidence. Same method every time.

03

Set confidence

They flag which scores are strong and which need human review.

04

Recommend decisions

They rank the gaps, name the likely root causes, and propose what to act on first.

A scan that would take weeks of expert analysis now runs in hours. The method does not change. The speed does.
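The four steps above can be sketched as one pipeline. Every function here is a stub with a hypothetical name; real agents pull live data and apply the full rubric:

```python
def collect_evidence(company: str) -> dict:
    # 01: gather the public surface (stubbed with placeholder findings).
    return {"website": ["clear CTA"], "ads": ["promise stated"], "reviews": []}

def score_signals(evidence: dict) -> dict:
    # 02: same method every time (stub: 2 if evidence exists, else 0).
    return {source: (2 if items else 0) for source, items in evidence.items()}

def set_confidence(scores: dict, evidence: dict) -> dict:
    # 03: little data means Low confidence, flagged for human review.
    return {s: ("low" if not evidence[s] else "medium") for s in scores}

def recommend_decisions(scores: dict) -> list:
    # 04: rank the gaps, weakest signal first.
    return sorted(scores, key=scores.get)

evidence = collect_evidence("acme.example")
scores = score_signals(evidence)
confidence = set_confidence(scores, evidence)
priorities = recommend_decisions(scores)  # weakest first
```

The order matters: evidence before scores, scores before confidence, confidence before recommendations. That fixed sequence is why two runs on the same company produce comparable output.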

#7 Think before you do

The agents run the diagnostic and recommend the decisions.

They do not do the marketing.

Execution is creative work. Writing, design, brand voice, campaign ideas, customer stories. Creativity is one of the biggest drivers of marketing effectiveness. No agent replaces that. Execution stays with the company's own agencies, freelancers, or in-house team.

#8 Decisions for everyone

Anyone can make informed marketing decisions from the model.

Founder, commercial lead, solo operator, marketer. The reader does not need ten years of marketing experience to act on the output.

Most marketing decisions are made by people without a senior marketing background. They get advice from agencies, from AI, from friends who run companies. The advice is inconsistent, and the decisions reflect it.

The model removes that problem. The score, the confidence, the ranked gaps, and the recommendations are all produced by the same method every time. The reader does not have to judge which input to trust. The method has already done that work.

The reader's job is smaller and clearer: look at the output, weigh the trade-offs they already understand - budget, team, timing, ambition - and decide what to act on first.

That is a decision anyone can make.

#9 The only independent voice in marketing

We do not sell the execution.

Most marketing diagnoses come from the same company that sells the fix.

An agency finds a brand problem and sells brand work. A performance team finds a conversion problem and sells performance work. The diagnosis always points to the seller's own service.

The Brand Control Model does not. We run the scan, we shape the decisions, and we stop there. The company chooses who does the work.

#10 Our policy is honesty

The model is built to be rigorous, not complete.

A few limits are worth naming up front.

It does not measure the whole business

The model measures the brand and marketing system. It does not measure product quality, pricing economics, sales team performance, internal culture, or operational delivery. These things affect growth, but they sit outside the scope of this model.

A company with a strong Brand Control Score and a weak product will still lose. The model tells you the marketing truth. It does not tell you everything.

The agents only see the public surface

They see what buyers see: the website, ads, social, search results, reviews, and press. They do not see internal strategy decks, CRM data, customer interviews, private roadmaps, or sales conversations.

For most diagnoses this is enough, because the public surface is also the surface the market judges. Where internal context matters, a human adds it before decisions are made.

The AI has weak spots

The agents are strong at evidence collection, consistency, and scale. They are weaker at three things:

Nuance in creative work.

Freshness.

Thin-data markets or languages where the public surface is small.

In these cases, the score comes back with Low confidence and a note. The model tells you what it does not know. That is part of the rigour.

#11 Science, Agents, Simplicity

Three things together.

Each part exists elsewhere. What is rare is the three working together as one system.

The science

Ehrenberg-Bass, IPA, Kahneman. Decades of evidence, not trends.

The method

60 signals, a fixed rubric, confidence levels, and scores that roll up.

The AI

Real agents, real data, the full scan in hours.

The science is in books. Audits exist. AI tools exist. Only here do they run as one system. That is the Brand Control Model.

Next step for better marketing

Start with one decision

You do not need to commit to a long project. Start with a short conversation. We will tell you which product fits your situation, or we will tell you that none of them do.