The internet is now the main place where brands meet people. It’s where agendas are set, reputations are shaped, and demand is built. To manage all of that, you need clear, comparable metrics. Three simple ones go a long way: share of voice, share of publications, and user sentiment. In plain terms: how loud the brand is, how often it’s written about, and how people feel about it. Below, we break down each metric, how they differ, and how to use them day-to-day.
What Is Share of Voice
Share of Voice (SOV) is your brand's share of all mentions in a defined market and set of channels. Think of it as your brand's volume versus everyone else's.
Formula: SOV = (brand mentions ÷ total category mentions) × 100%
Measure it over a period (week, month, quarter). Before you start, define the category boundaries: competitors, key topics, and acceptable variations of brand names. That’s what makes the numbers comparable.
You can count SOV as a simple tally of posts and articles, but it’s often weighted. A note in a top industry outlet is not the same as a short, low-visibility post. Common weights include reach, source traffic, reactions, and views. Weighting gets you closer to real-world impact.
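A minimal sketch of both calculations, assuming mentions are already collected and deduplicated (the brand names, data shape, and weights below are illustrative, not a standard):

```python
# Raw vs. weighted SOV. Weights per mention are illustrative:
# a top-tier article counts more than a short social post.
mentions = [
    {"brand": "OurBrand", "weight": 2.0},  # top-tier media article
    {"brand": "OurBrand", "weight": 0.5},  # short social post
    {"brand": "Rival", "weight": 1.0},     # industry outlet
    {"brand": "Rival", "weight": 0.5},     # short social post
]

def sov(items, brand):
    """Raw SOV: a plain tally of mentions."""
    ours = sum(1 for m in items if m["brand"] == brand)
    return 100 * ours / len(items)

def weighted_sov(items, brand):
    """Weighted SOV: each mention counts by its source weight."""
    ours = sum(m["weight"] for m in items if m["brand"] == brand)
    return 100 * ours / sum(m["weight"] for m in items)

print(f"raw SOV: {sov(mentions, 'OurBrand'):.1f}%")                # 50.0%
print(f"weighted SOV: {weighted_sov(mentions, 'OurBrand'):.1f}%")  # 62.5%
```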
What Is Share of Publications
Share of Publications (SOP) is the share of publications about your brand among all publications about the category. It looks a bit like SOV, but the emphasis is different: SOP asks how often editors and authors choose to cover your brand.
Formula: SOP = (publications about the brand ÷ total publications about the category) × 100%
Usually, this counts editorial and original content only: media articles, reviews, long reads, and posts on corporate blogs. Reposts and short social mentions are often excluded or counted separately. That's why SOP is especially useful for a focused view of media presence.
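The calculation mirrors SOV, just restricted to editorial content. A minimal sketch, assuming each item is already labeled with a content type (the labels are illustrative):

```python
# SOP: same tally as raw SOV, restricted to editorial/original content.
publications = [
    {"brand": "OurBrand", "kind": "article"},
    {"brand": "Rival", "kind": "article"},
    {"brand": "Rival", "kind": "review"},
    {"brand": "OurBrand", "kind": "repost"},  # excluded from SOP
]

EDITORIAL = {"article", "review", "longread", "blog_post"}

def sop(items, brand):
    editorial = [p for p in items if p["kind"] in EDITORIAL]
    ours = sum(1 for p in editorial if p["brand"] == brand)
    return 100 * ours / len(editorial)

print(f"SOP: {sop(publications, 'OurBrand'):.1f}%")  # 33.3%
```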
SOV vs. SOP: What’s the Difference
It comes down to granularity and sources:
- SOV pulls from a broad mix: media, social networks, forums, videos, comments, directories, and search snippets. It’s the overall hum of the market, including user voices.
- SOP focuses on full editorial content — the professional agenda chosen by editors and authors.
If SOV is high thanks to social chatter but SOP is low, editors are writing about competitors more often. The reverse — a high SOP with moderate SOV — can mean editors value the brand while the wider audience is less engaged.
What Is User Sentiment
User sentiment aggregates how audiences feel about the brand. It’s drawn from comments, reviews, posts, threads, forum topics, marketplace responses, and replies under videos.
The foundation is polarity: positive, neutral, negative. You can also tag emotions and motives like joy, irritation, trust, disappointment, complaints, and praise.
There are two approaches:
- Rules and dictionaries — transparent, but weaker with irony and context.
- Machine learning models trained on labeled data — better with nuance, but need quality control and periodic retraining.
In practice, teams combine methods and manually review edge cases. Results are usually shown as shares or indices: positive share, negative share, tonality balance (positive minus negative), satisfaction index.
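As a toy illustration of the dictionary approach and the summary indices just described (the word lists are made up; a real pipeline would add ML models and manual review, as noted):

```python
import re

# Toy dictionary-based polarity tagger. The vocabularies are illustrative.
POSITIVE = {"great", "fast", "reliable", "love"}
NEGATIVE = {"slow", "broken", "expensive", "disappointed"}

def polarity(text):
    words = set(re.findall(r"[a-z']+", text.lower()))
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

reviews = [
    "Fast delivery, reliable device",
    "Screen broken after a week, disappointed",
    "It works",
]
labels = [polarity(r) for r in reviews]

def share(tone):
    return 100 * labels.count(tone) / len(labels)

print(f"positive: {share('positive'):.0f}%, negative: {share('negative'):.0f}%")
print(f"tonality balance: {share('positive') - share('negative'):.0f} pp")
```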
Where to Measure: Channels and What to Know
- Search engines and news aggregators. Mentions in headlines and snippets matter, as does showing up on the first page.
- Online media and industry platforms. Great for SOP and weighted SOV; pay attention to circulation/reach and editorial influence.
- Social networks. The main source for SOV and sentiment; filter bots, reposts, and spam.
- Forums and communities. Slower, but deeper discussions that reveal real user problems.
- Marketplaces and review services. High practical value for sentiment; this is where motives like price, delivery, quality, and service surface.
- Video platforms and podcasts. Mentions are growing; context search and tagging are harder — you’ll need transcription.
Getting the Data Ready: Collection and Cleaning
- Brand dictionary: main name, variations, misspellings, transliteration, product names, abbreviations. Also define exclusions to avoid homonyms.
- Deduplication: count reposts and mirrors once (a sketch of this step and the brand dictionary follows the list).
- Filter anomalies: remove bots and unusual spikes.
- Normalize periods: holidays, promotions, crises, and releases distort the baseline. Many teams use a 4–12 week rolling window.
- Separate by source and language: sentiment can vary by channel and language; mixing without weights is risky.
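A compact sketch of the dictionary and deduplication steps, assuming raw items arrive as plain text (all patterns and names below are illustrative):

```python
import hashlib
import re

# Brand dictionary as regexes: name variants and misspellings in,
# known homonyms out.
BRAND_PATTERNS = [re.compile(p, re.I) for p in (r"\bacme\b", r"\bakme\b")]
EXCLUDE_PATTERNS = [re.compile(r"\bacme\s+staffing\b", re.I)]  # homonym

def mentions_brand(text):
    if any(p.search(text) for p in EXCLUDE_PATTERNS):
        return False
    return any(p.search(text) for p in BRAND_PATTERNS)

def dedupe(texts):
    """Count reposts and mirrors once: hash whitespace-normalized text."""
    seen, unique = set(), []
    for text in texts:
        key = hashlib.md5(" ".join(text.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique

posts = ["Acme launched X", "Acme  launched X ", "Acme staffing is hiring"]
print([p for p in dedupe(posts) if mentions_brand(p)])  # ['Acme launched X']
```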
Calculating in Practice: Simple Steps
- List competitors and key topics.
- Choose channels and weights (example: industry media — 1.0; top media — 2.0; large social network — 0.5; niche forum — 0.7).
- Set up collection: APIs, parsing pipelines, and manual validation for disputed sources.
- Calculate raw SOV and SOP.
- Calculate weighted SOV and SOP using reach and source quality.
- Tag sentiment. Manually review 5–10% of the sample.
- Build a panel: trends, competitor comparison, channel shares, and a motive map (a minimal trend sketch follows this list).
- Link the metrics to business indicators: traffic, leads, conversion, retention, NPS.
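For the trend panel, the 4–12 week rolling window mentioned earlier smooths event-driven spikes. A minimal pandas sketch with made-up weekly values:

```python
import pandas as pd

# Weekly SOV series; the values are illustrative.
sov = pd.Series(
    [12, 13, 18, 14, 13, 15, 16, 22, 15, 14],
    index=pd.date_range("2024-01-01", periods=10, freq="W"),
)

# 8-week rolling mean (anywhere in the 4-12 week range works);
# min_periods keeps the start of the series visible.
trend = sov.rolling(window=8, min_periods=4).mean()
print(trend.round(1))
```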
Brief Examples
Example 1 — Consumer electronics. A company launches an updated product line in May. SOV on social media grows from 12% to 18% in two weeks. SOP in media stays at 10% because editors cover a competitor that held an offline presentation. Sentiment is mixed: 55% positive, 20% negative, the rest neutral. The negatives center on overheating. The team publishes a clarification, releases a firmware update, and holds a Q&A. Two weeks later, negative share drops to 12%, SOP rises to 14% on the back of reviews, and marketplace sales grow 8% week-over-week.
Example 2 — Fintech service. A new tariff launches in spring. SOV in the industry is 9%, but SOP is 3%. Editors write infrequently, yet the user community is active. Sentiment in month one is negative: 30% negative reviews, mainly about confusing terms. The brand simplifies the landing page, adds a payment calculator, and records short explainers. After a month, negativity falls to 14%, and SOP climbs to 7% thanks to coverage in specialized media.
How to Read and Use the Metrics
- SOV tells you how visible you are. If SOV trails market share, there’s room to grow. If it’s higher, you’re ahead in the information field.
- SOP reflects relationships with editors and experts. A low SOP is a cue to work on your media plan, exclusives, and expert commentary.
- Sentiment sets priorities. High negativity around “delivery”? Strengthen logistics and communication. A rise in praise for “app speed”? Amplify that message in advertising.
Together, the three metrics are practical: SOV speaks to scale, SOP to media legitimacy, sentiment to perception.
Common Mistakes and Pitfalls
- Mixing channels without weights. One tweet isn’t the same as a detailed review.
- Ignoring seasonality. Sales peaks and event-driven spikes distort the baseline.
- Missing niche platforms. You can miss where negativity actually accumulates.
- Weak data cleaning. Bots and reposts inflate SOV.
- Over-automated sentiment tagging. Keep manual checks for disputed clusters and update models regularly.
- Metrics detached from the product. Any movement should map to changes in service, pricing, releases, or support.
Ethical and Legal Notes
When you measure mentions, follow platform terms and user rights. Respect personal data requirements. Don’t use private content. Make clear that user messages may be analyzed in anonymized form. Transparency builds trust and improves feedback quality.
Building a Working Dashboard
Start simple. On one page, put:
- a weekly SOV trend,
- SOP bar charts for key media,
- a sentiment “traffic light,”
- a motive map with the week’s top five themes.
Add a channel breakdown. At the top, include a short note: what went up, what went down, why, and what actions are planned. This saves managers time and keeps the team aligned. Next, automate collection: update social daily, media weekly. Revisit the brand dictionary and source list quarterly. Run a quality audit of sentiment tagging twice a year.
Linking Metrics to Results
SOV and SOP are about visibility. Sentiment is about attitude. To make them work for the business, tie them to actions you control: publish a product guide, seed expert comments, improve the FAQ, adjust onboarding, change support SLAs. The dashboard should show what you did and how the metrics responded.
Using reference corridors helps. For example: SOV no lower than 15% on a priority topic, SOP no lower than 10% in top media, negative share no higher than 12%. If a metric goes outside the corridor, the team knows what to do.
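The corridor check itself can stay trivial; a sketch using the example bounds above:

```python
# Reference corridors: lower bounds for SOV/SOP, an upper bound
# for negative share. Values match the example in the text.
CORRIDORS = {
    "sov": {"min": 15.0},
    "sop": {"min": 10.0},
    "negative_share": {"max": 12.0},
}

def check(metrics):
    """Return the metrics that left their corridor."""
    breaches = []
    for name, value in metrics.items():
        bounds = CORRIDORS.get(name, {})
        if value < bounds.get("min", float("-inf")):
            breaches.append(f"{name} below {bounds['min']}: {value}")
        if value > bounds.get("max", float("inf")):
            breaches.append(f"{name} above {bounds['max']}: {value}")
    return breaches

print(check({"sov": 13.2, "sop": 11.0, "negative_share": 14.5}))
# ['sov below 15.0: 13.2', 'negative_share above 12.0: 14.5']
```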
Measuring SOV, SOP, and Sentiment in LLM Responses
Generative assistants are becoming a new gateway to the internet. People increasingly ask them questions instead of using classic search. That makes brand visibility inside LLM responses important. The same three metrics adapt well here.
Units of Measurement
- Mention. Any correctly identified appearance of the brand or product name.
- Detailed mention. A paragraph or list with advantages, features, prices, or instructions.
- Recommendation slot. A place in lists like “best options,” “what to buy,” or “what to compare.”
- Value judgment. Language that signals stance: “reliable,” “slow,” “expensive,” “convenient.”
SOV in LLM
SOV-LLM = (brand mentions across the answer set ÷ total category mentions across the same answer set) × 100%
Build a standard set of queries across the funnel — “what is,” “how to choose,” “top-5 solutions,” “compare A and B,” “how much does it cost,” “is there an alternative.” Run each several times (n-generations) and record variability. Test across different models, languages, and regions.
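A sketch of the rerun loop, with `generate_answer` as a hypothetical stand-in for whatever model client you actually use (brands and queries are made up):

```python
import random
import re

# Hypothetical stub standing in for a real model API call; swap in the
# client of whichever assistant you test.
def generate_answer(prompt: str) -> str:
    return random.choice([
        "Consider OurBrand X or RivalA Pro.",
        "RivalA Pro and RivalB Lite are solid picks.",
    ])

BRANDS = ["OurBrand", "RivalA", "RivalB"]
QUERIES = ["how to choose a fitness tracker", "top-5 fitness trackers"]
N_RUNS = 5  # reruns per query to capture stochasticity

def sov_llm(queries, brands, target, n_runs=N_RUNS):
    counts = {b: 0 for b in brands}
    for q in queries:
        for _ in range(n_runs):
            answer = generate_answer(q)
            for b in brands:
                counts[b] += len(re.findall(re.escape(b), answer, re.I))
    total = sum(counts.values())
    return 100 * counts[target] / total if total else 0.0

print(f"SOV-LLM for OurBrand: {sov_llm(QUERIES, BRANDS, 'OurBrand'):.1f}%")
```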
SOP in LLM
Track how often the model highlights the brand:
- SOP-LLM (detailed). Share of answers where the brand gets its own paragraph/subsection.
- SOP-LLM (slots). Share of recommendation list slots the brand occupies.
This shows whether the model merely “knows” the brand or actually features it.
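Slot counting can start from the numbered lists most assistants emit. A minimal sketch, assuming answers are already collected as text:

```python
import re

# Pull numbered-list items out of an answer and check which brand
# occupies each slot. The answer text is illustrative.
SLOT_RE = re.compile(r"^\s*\d+[.)]\s*(.+)$", re.M)

def slot_share(answers, brand):
    taken = total = 0
    for answer in answers:
        for item in SLOT_RE.findall(answer):
            total += 1
            if brand.lower() in item.lower():
                taken += 1
    return 100 * taken / total if total else 0.0

answer = "Top picks:\n1. RivalA Pro\n2. OurBrand X\n3. RivalB Lite"
print(f"{slot_share([answer], 'OurBrand'):.1f}%")  # 33.3%
```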
Sentiment in LLM
Assess the tonality and modality of the model’s wording: shares of positive and negative epithets, presence of caveats (“might,” “sometimes,” “for some users”), and intensity of judgments. Separate composite answers: if the model summarizes user reviews, count that sentiment separately from the model’s own narrative voice.
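Profiling the model's own wording is the same dictionary idea applied to a different vocabulary; the marker lists below are illustrative starting points:

```python
import re

# Marker lists are illustrative; grow them from real answer samples.
HEDGES = ["might", "may", "sometimes", "for some users"]
POSITIVE = ["reliable", "convenient", "fast"]
NEGATIVE = ["slow", "expensive"]

def wording_profile(answer):
    text = answer.lower()

    def count(vocab):
        return sum(
            len(re.findall(r"\b" + re.escape(w) + r"\b", text)) for w in vocab
        )

    return {"hedges": count(HEDGES), "positive": count(POSITIVE),
            "negative": count(NEGATIVE)}

print(wording_profile("The device is reliable but might feel slow for some users."))
# {'hedges': 2, 'positive': 1, 'negative': 1}
```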
Measurement Methodology
- Query constructor. Build a pool of 50–200 questions by category and use case. Separate navigational, informational, and transactional queries.
- Generation reruns. Make 3–5 runs per query to capture stochasticity.
- Cross-model slice. Test multiple assistants and modes, plus languages and regions.
- Normalization. Remove duplicates, fix model versions and dates, and note when answers rely on quotes versus "general knowledge" (see the record sketch after this list).
- Calculation. Compute SOV-LLM, SOP-LLM (detailed and slots), and sentiment-LLM. Add weights if needed (markets, query types, models).
- Quality control. Manually check 10–20% of the sample. Flag hallucinations, inaccuracies, and conflicts.
- Link to actions. Compare metric changes with product releases, documentation updates, PR, and help-page improvements.
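For the normalization step, it helps to fix per-generation metadata up front. One possible record shape (the field names are a suggestion, not a standard):

```python
from dataclasses import dataclass
from datetime import date

# One record per generation, so model versions, dates, and query types
# stay comparable later.
@dataclass
class LLMAnswerRecord:
    query: str
    query_type: str      # navigational / informational / transactional
    model: str           # model name and version string
    run_index: int       # which of the 3-5 reruns
    language: str
    region: str
    captured_on: date
    answer: str
    cites_sources: bool  # quotes vs. "general knowledge"

record = LLMAnswerRecord(
    query="top-5 fitness trackers",
    query_type="informational",
    model="example-model-v1",
    run_index=1,
    language="en",
    region="US",
    captured_on=date(2024, 5, 1),
    answer="...",
    cites_sources=False,
)
```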
Special Risks
- Volatility of answers. Small prompt edits can change results; you need test discipline.
- Hallucinations and staleness. Models can invent facts, which distorts sentiment-LLM.
- Implicit biases. Assistants differ in neutrality and safety rules, affecting SOP-LLM.
- Localization. Visibility varies across languages and regions.
What to Include in the LLM Metrics Panel
- a weekly SOV-LLM trend,
- SOP-LLM bar charts (detailed mentions and recommendation slots),
- a sentiment-LLM traffic light and motive map (what's praised, what's criticized),
- breakdowns by models, languages, regions, and query types,
- an action log of what was done to improve visibility (e.g., FAQ updates, added compatibility data, expert answers).
Bottom line: SOV shows scale, SOP shows editorial validation, and sentiment shows reception. With clean data, sensible weights, and a simple dashboard, these three metrics make the digital noise legible — and actionable.
