LinkedIn Posts vs. LinkedIn Pulse: What Actually Gets Indexed by LLMs

Posted on 29/04/26 · 9 min read

By Dhriti Goyal

Every B2B marketing team we work with has a LinkedIn strategy. Almost none of them have an AI-visibility strategy for LinkedIn. In April 2026, that gap is no longer theoretical – it is quantifiable, and it is expensive. Semrush’s 89,000-URL LinkedIn study, released this quarter, shows something most company-page operators have never been told: individual Pulse articles authored by a subject-matter expert are cited by LLMs between four and six times more often than the best-performing company-page post about the same topic.

LinkedIn has become the second-most-cited domain in AI search, behind only Reddit. It appears in 11% of AI responses on average and in 14.3% of ChatGPT Search answers. For professional queries, it is the single most-cited domain on the open web. But the citation is not going where most brands think it is going. Long-form Pulse articles account for 50–66% of LinkedIn’s AI citations. Feed posts account for 15–28%. And company-page content barely registers on the two platforms – ChatGPT and Google AI Mode – that drive the vast majority of professional-query volume.

This is not a content-format argument. It is a credibility argument. LLMs retrieve LinkedIn content the way a hiring manager reads a resumé – they look for a verifiable human, a specific expertise, a consistent publishing history, and a low signal-to-noise ratio. Company pages fail at the first step and never recover. Pulse articles from a credible SME pass every test.

“AI retrieval is not a popularity contest. The system is scoring for who knows what, how consistently they have said it, and how densely the proof is packed inside the sentence it is about to quote.” – Index’25 by Pepper – Retrievability panel

This playbook explains exactly why the gap exists, introduces the four-variable formula – Credibility, Consistency, Density, Noise – that predicts whether a given LinkedIn asset will be cited, and gives content and comms teams a step-by-step operating plan. Everything here is tested; everything here is immediately actionable this week.

The Citation Gap: What 89,000 LinkedIn URLs Reveal

The most rigorous public dataset on this question is the Semrush analysis of 325,000 AI prompts and 89,000 unique LinkedIn URLs cited across ChatGPT Search, Google AI Mode, and Perplexity in early 2026. The numbers tell a consistent story:

  • 4.2× – ChatGPT cites LinkedIn Pulse articles 4.2× more frequently than it did twelve months ago. Perplexity’s growth is 5.7×.
  • 50–66% – Long-form Pulse articles account for 50–66% of all LinkedIn content cited in AI answers. Feed posts account for only 15–28%.
  • 59% – On ChatGPT Search and Google AI Mode, 59% of LinkedIn citations go to individual creators, not company pages.
  • 1.6% – Only 1.6% of a company page’s followers see any given post – and LLMs seem to have internalised the same distrust of company-page content that the LinkedIn feed algorithm has.

Read together, these numbers reframe what a LinkedIn strategy actually is for an AI-first buyer. The feed is the practice field. The Pulse article is the artefact. And the author – not the brand – is the signal LLMs weight most heavily.

The Credibility · Consistency · Density · Noise Formula

Every cited LinkedIn asset we have analysed and every high-performing Pulse author in the Semrush dataset scores highly on the same four variables. We call it the CCDN formula. LLMs do not publish their retrieval rubric, but the pattern across 89,000 cited URLs is close enough to an implicit rubric that content teams can treat it as one.

Credibility (the author signal)

LLMs weight content tied to a verifiable individual far more heavily than content tied to a brand logo. A LinkedIn profile provides a dense credibility packet: job title, tenure, credentials, connections, endorsements, recommendations, and a cross-link to the employer’s company page. A company page provides a logo and a follower count. Every Pulse article is attached to a credibility packet by default. A company-page post is not. This is the single largest reason Pulse outperforms posts in AI citations.

Actionable: the highest-leverage LinkedIn move your marketing team can make this quarter is to audit the credibility packets of your five most quotable internal experts. Title, headline, featured section, Pulse archive, external publications. An SME with a thin profile is invisible to the retriever no matter how good the article is.

Consistency (the publishing rhythm)

Semrush found that roughly 75% of cited LinkedIn authors had posted five or more times in the preceding four weeks. This is not about volume for its own sake. It is about the retriever assembling a coherent picture of what this person is an expert in. One article every quarter – even a great article – tells the model less than six articles in six weeks, all adjacent to the same thesis. Consistency teaches the retriever what you own.

Actionable: pick one topic per SME and ship cadence, not brilliance. Four Pulse articles over six weeks, each tagged to the same core thesis, outperform one viral article on a scattered theme. Ghostwriting is acceptable – in fact, the majority of cited enterprise Pulse content is ghostwritten – but the byline must stay consistent.

Density (knowledge packing per paragraph)

54–64% of cited LinkedIn posts, Semrush reports, are knowledge-dense: explaining how something works, sharing first-hand experience, or documenting a specific result with a specific number. Opinion posts, motivational posts, and “hot takes” are under-represented in AI citations at rates far below their share of the feed. Density means verifiable nouns, specific numbers, and a clear causal mechanism inside every paragraph.

Actionable: write every Pulse article so each paragraph would survive being pulled out as a standalone quote. No throat-clearing openings. No one-line opinion paragraphs. If a paragraph does not contain at least one concrete number, a named mechanism, or a first-hand example, cut it or rewrite it.
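The density rule above can be partially automated before a human edit pass. Below is a minimal sketch, assuming the draft is plain text with blank-line paragraph breaks; the heuristic (digit present, or a first-person marker as a proxy for first-hand experience) is our own rough stand-in, since named mechanisms cannot be detected mechanically:

```python
import re

# Hypothetical pre-edit checker: flags paragraphs that contain neither a
# concrete number nor a first-person marker. Flagged paragraphs still need
# a human read -- a "named mechanism" can pass without either signal.
FIRST_HAND = re.compile(r"\b(we|our|i|my)\b", re.IGNORECASE)

def flag_thin_paragraphs(article: str) -> list[str]:
    """Return paragraphs with no digit and no first-hand marker."""
    paragraphs = [p.strip() for p in article.split("\n\n") if p.strip()]
    return [
        p for p in paragraphs
        if not any(ch.isdigit() for ch in p) and not FIRST_HAND.search(p)
    ]

draft = (
    "Success in this space is all about mindset.\n\n"
    "We cut review time from 6 hours to 40 minutes by batching edits."
)
for p in flag_thin_paragraphs(draft):
    print("REWRITE OR CUT:", p)
```

Treat the output as a rewrite queue, not a verdict: the goal is that every paragraph survives being pulled out as a standalone quote.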

Noise (the signal-to-static ratio)

The most counter-intuitive finding in the Semrush study: most cited LinkedIn posts have moderate engagement – 15 to 25 reactions – not viral engagement. Viral posts skew the retriever’s signal-to-noise ratio badly. A post with 2,000 reactions, 400 of them on dunk comments, looks to the LLM like a noisy mixture of topics. A quietly authoritative article with a tight comment thread of genuine practitioners reads as a clean retrieval candidate.

Actionable: stop optimising for reach. Optimise for the quality of the first twenty replies. Train the SME and their comms partner to seed the comments with their most credible colleagues in the first hour. Twenty thoughtful comments from verifiable experts do more for LLM retrieval than 2,000 generic reactions.

“We used to brief our executives to go viral. In 2026, we brief them to be boringly consistent – and our share of voice in AI answers has gone up every single month as a result.” – Sydney Sloan, former CMO, G2 – Index’25 panel
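The four CCDN variables can be folded into a simple triage scorecard for prioritising which drafts and authors to invest in. This is an illustrative sketch only – LLMs do not publish their retrieval rubric, so the weights, thresholds, and field names below are our own assumptions, anchored loosely to the Semrush figures quoted above (≈5 posts per four weeks, 15–25 reactions as the sweet spot):

```python
from dataclasses import dataclass

@dataclass
class LinkedInAsset:
    author_has_full_profile: bool  # Credibility: headline, featured, archive
    posts_last_4_weeks: int        # Consistency: ~5 is the observed threshold
    dense_paragraph_ratio: float   # Density: share of paragraphs with proof
    reactions: int                 # Noise: moderate beats viral

def ccdn_score(a: LinkedInAsset) -> float:
    """Hypothetical 0-1 triage score; weights are assumptions, not data."""
    credibility = 1.0 if a.author_has_full_profile else 0.0
    consistency = min(a.posts_last_4_weeks / 5, 1.0)
    density = a.dense_paragraph_ratio
    # 15-25 reactions is the cited sweet spot; viral posts score lowest.
    noise = 1.0 if 15 <= a.reactions <= 25 else (0.5 if a.reactions < 15 else 0.3)
    return round(0.35 * credibility + 0.25 * consistency
                 + 0.25 * density + 0.15 * noise, 2)

pulse = LinkedInAsset(True, 6, 0.8, 20)
print(ccdn_score(pulse))  # → 0.95
```

The point of a scorecard like this is not precision; it is forcing the team to grade every asset on all four variables instead of on reach alone.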

Posts vs Pulse: When to Use Each

The correct answer is not “pick one.” The correct answer is a deliberate division of labour. Posts are for the consistency variable. Pulse is for the density and credibility variables. Used together, they compound.

Use posts for…

  • Maintaining publishing cadence when you do not have a full article to ship. Two posts a week keep the consistency signal alive between Pulse releases.
  • Testing thesis paragraphs before investing in a Pulse. If a 180-word post on an angle lands with thoughtful practitioner replies, the full Pulse article on that angle will too.
  • Seeding comment threads around your own Pulse articles. A quick post pointing to the Pulse, with a genuine question for your network, drives the quality-comment flywheel.
  • Commentary on other people’s work. Second-order commentary on a cited domain (a McKinsey report, a Bain chart) occasionally gets pulled into the same AI retrievals as the source.

Use Pulse articles for…

  • The canonical explainer of a term or framework you want to own. Pulse is indexed by Google, by LinkedIn’s own search, and, critically, ingested into AI-training and AI-retrieval corpora at a much higher rate than posts.
  • First-hand case studies with proprietary numbers. A specific number attached to a specific author at a specific company is the highest-density retrieval candidate on LinkedIn.
  • Methodology explanations – how-we-did-it pieces that document process, not outcomes. These are the sleepers that continue getting cited twelve months after publication.
  • Contrarian definitions. If your category has a fuzzy term, the author who writes the cleanest definition becomes the citation. The retriever rewards clean definition paragraphs disproportionately.

“Treat your company page as a newsroom and your executives’ profiles as bylined columnists. The newsroom sets distribution; the columnists set retrieval.” – Angelique Bellmer Krembs, former CMO, PepsiCo – Index’25 fireside

A 30-Day Operating Plan for Content and Comms Teams

This is the plan we hand to enterprise customers. It is deliberately boring. Retrievability does not reward heroic launches; it rewards reliable rhythm.

Week 1: Set the architecture.

Pick three internal SMEs. For each, write a one-sentence expertise thesis (“Dhriti Goyal is the person to read on AI-driven editorial workflows at scale.”) and lock the Pulse topic pillar. Audit and tighten each SME profile: headline, about section, featured section, skills. Add the top three previously-published assets to the featured section of each profile.

Week 2: Ship the first Pulse.

Each SME ships one 800–1,600 word Pulse article inside their pillar. Lead with a specific number; define one term precisely; share one first-hand example with a named mechanism. No throat-clearing. Publish Tuesday or Wednesday morning in the SME’s time zone.

Week 3: Build cadence.

Each SME posts three times in the week. All three posts stay inside the pillar. One is a 180-word thesis post pointing to the Pulse article; two are commentary on adjacent developments (a new report, a competitor’s misstep, a methodology question). Comms team seeds the first twenty comments with the SME’s five most credible colleagues.

Week 4: Ship the second Pulse; measure.

Each SME ships Pulse #2 inside the same pillar. Now measure. Run your priority queries through ChatGPT, Google AI Mode, and Perplexity. Log which URLs are cited. Track whether your SME profiles – not just the articles – appear in follow-up turns. The moment an SME starts showing up in turn-two or turn-three retrievals, you know the retriever has stored them as an authority for the pillar. That is the compounding moment.
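The week-four measurement loop needs nothing more than a shared CSV. A minimal sketch follows, assuming citations are collected by hand (none of the three platforms exposes a public citation API for consumer answers); the file name, column set, and `turn` convention for follow-up retrievals are our own:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical weekly citation log: one row per observed citation.
LOG = Path("ai_citation_log.csv")
FIELDS = ["week", "platform", "query", "cited_url", "sme", "turn"]

def log_citation(platform: str, query: str, cited_url: str,
                 sme: str = "", turn: int = 1) -> None:
    """Append one observed citation; 'turn' records which conversation
    turn the citation appeared in (turn 2+ signals stored authority)."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"week": date.today().isoformat(),
                         "platform": platform, "query": query,
                         "cited_url": cited_url, "sme": sme, "turn": turn})

log_citation("ChatGPT Search", "best AI editorial workflow tools",
             "https://www.linkedin.com/pulse/example-article", sme="D. Goyal")
```

Filtering the log for `turn >= 2` per SME is the quickest way to spot the compounding moment described above.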

Insights: What Leaders at Index’25 Are Saying

At Index’25, the world’s first AI-search conference hosted by Pepper in October 2025, the LinkedIn-for-LLMs question surfaced on almost every panel. A handful of insights from CMOs and marketing leaders already operating against this reality:

“Our single most productive AI-visibility investment in the last twelve months was giving our top fifteen executives a Pulse ghostwriter, a cadence, and one topic each. Not our company page refresh, not our website rebuild – the executive programme.” – Mandy Dhaliwal, CMO, Nutanix – Index’25 panel

“The moment we stopped measuring LinkedIn by follower growth and started measuring it by named citations in AI answers, the whole programme got easier. Fewer posts. Denser articles. Happier executives.” – Linda Caplinger, Head of SEO & AI Search, NVIDIA – Index’25 workshop

“AI search collapses the distance between brand and demand. On LinkedIn specifically, that collapse happens at the author level, not the logo level. A buyer discovers your CMO in turn one and evaluates your product in turn three – all without ever seeing your company page.” – Joyce Hwang, Head of Marketing, Dropbox – Index’25 fireside

A through-line connects all three: the brands winning AI visibility on LinkedIn have moved the centre of gravity of their programme from the company page to the executive profile, and from the feed post to the Pulse article. It is a less glamorous programme than it used to be. It is also the programme that compounds.

Seven Things to Do This Week

  • Write a one-sentence expertise thesis for each of your three top internal SMEs and paste it verbatim into the headline and about section of their LinkedIn profile.
  • Move the three best pieces each SME has ever published into the featured section of their profile.
  • Book a recurring Tuesday-morning Pulse writing slot on each SME’s calendar. Block it for six weeks.
  • Draft one 800–1,600 word Pulse article per SME that leads with a specific number, defines one term, and shares one first-hand example.
  • Agree a ‘first-twenty-comments’ protocol with five senior colleagues per SME. Their job is to seed the comments within the first hour of each post.
  • Stop using the company page as the primary publishing surface for thought leadership. Repurpose it as the distribution and newsroom surface. Keep it lean.     
  • Run your three highest-value queries through ChatGPT, Google AI Mode, and Perplexity every Monday morning. Log which of your SMEs gets cited. That weekly log is your real scoreboard.

The Bottom Line

LinkedIn is a citation engine, but it does not cite logos. It cites humans with verifiable credentials, consistent cadence, dense writing, and clean comment threads. Posts are the practice field. Pulse is the artefact the retriever stores. The company page is the loudspeaker, not the authority.

The CCDN formula – Credibility, Consistency, Density, Noise – holds across the 89,000 LinkedIn URLs AI platforms cited last quarter. None of it is complicated. All of it is operationally difficult, which is exactly why most brands will not do it, which is exactly why the brands that do will compound the lead for years.

The best time to start was eighteen months ago. The second best time is the Tuesday-morning writing slot you are about to block on your top three executives’ calendars.