Hello everyone, I’m Anurag Tiwari from Corporate Connect Magazine, and this is the most blunt, honest, and longest-running monthly article recap series in the digital marketing industry — SEO Last Month.
In this series, we break down the latest changes, updates, news, and announcements in the digital marketing world. But instead of repeating the polished corporate language, we peel away that corporate layer and honestly analyze what companies are actually trying to say and what they are really planning to do.
Because the usual copy-paste corporate statements are already available on Twitter.
Which means this is the March 2026 edition of the SEO Last Month article news recap.
So without wasting time on unnecessary talk, let’s jump straight into the news and find out what really happened and what didn’t.
On February 2, Microsoft launched Multi-Turn Search.
In simple terms, the more you try to avoid using the Microsoft Bing search engine, the more it seems to show up in front of you. That’s basically Bing’s reputation in the search world.
But honestly, there’s nothing very new here. When you search on Google, you often see a follow-up box suggesting related questions or additional searches. Microsoft has introduced something similar in Bing. So now, if you accidentally search on Bing once, it conveniently gives you another chance to repeat that same “mistake” by suggesting follow-up queries.
Then on February 3, Microsoft made another announcement, right on the heels of the first. Yes, you heard that right: two Microsoft updates back to back. Am I dreaming?
On February 3, Microsoft launched the Publisher Content Marketplace. So what exactly is sold in this marketplace?
Let’s say your website publishes unique and interesting content, but you don’t want AI agents or AI browsers to freely scrape and use your content without permission. In that case, you can list your content in this marketplace. AI agents can then approach you and negotiate a deal—basically deciding how much they will pay to use your content.
The idea is actually pretty good. However, at the moment this feature is only available to a few major publishers such as The Associated Press, Business Insider, Condé Nast, Hearst Magazines, People Inc., USA Today, and Vox Media LLC.
And if you don’t recognize these names, let me clarify—these companies run some of the biggest websites in the world.
Websites like AP.org, AP News, Insider.com, Vogue, The New Yorker, GQ, Vanity Fair, Wired, Cosmopolitan, Harper’s Bazaar, Vox, The Strategist, and New York Magazine are part of this ecosystem.
So, these websites have collaborated with Microsoft Bing in a way that when AI agents or AI browsers use their content, Microsoft pays them for it—but only when that usage happens through Microsoft’s platform.
Meanwhile, Google is basically displaying content from everywhere through AI features without directly paying publishers in most cases. Because of this collaboration with Microsoft, some of these publishers have already started receiving payments for their content.
In the future, there’s a possibility that smaller publishers and independent websites could also benefit from such a model. But if a plan like this is going to become truly successful, it will need broader participation from publishers and wider adoption across the industry.
On February 4, a change was noticed in Google’s crawling-related documentation.
According to the updated documentation, Google will now only process the first 2 MB of data in an HTML file, and anything beyond that will simply be ignored.
As soon as this news came out, it triggered the usual wave of fear, rumors, and unnecessary LinkedIn posts in the SEO world. Some people even went a step further—they opened their laptops, fired up their favorite “Google-loving” tools, and quickly launched new tools designed just to check whether any page on your website exceeds the 2 MB HTML limit.
But what many people forgot to mention is that 2 MB of HTML can contain around 2 million English characters—that’s 20 lakh characters.
Let me give you a simple example to understand this. Take the homepage of NDTV. It’s a huge page, but even that doesn’t reach 1 million characters in HTML. In fact, it falls short by about 100,000 characters, meaning the page uses roughly 900,000 characters in total.
So unless your website’s page is more than double the size of NDTV’s homepage, your real problem is probably not Google’s crawling limit. You might have much bigger issues to worry about. At that point, your entire team should probably sit down and decide how much content you actually want to put on the homepage.
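If you still want to check for yourself instead of trusting a shiny new tool, a few lines of Python are enough. Treat this as a rough sketch, not an official Google utility; the example URL and the use of the requests library are my own assumptions:

```python
# A quick sanity check: fetch a page and compare its raw HTML size against
# the documented 2 MB processing limit. Not an official Google tool.
import requests

LIMIT_BYTES = 2 * 1024 * 1024  # 2 MB

url = "https://example.com/"  # replace with your own page
html = requests.get(url, timeout=30).text

size_bytes = len(html.encode("utf-8"))
print(f"{url}: {size_bytes:,} bytes of HTML "
      f"({size_bytes / LIMIT_BYTES:.1%} of the 2 MB limit)")

if size_bytes > LIMIT_BYTES:
    print("Anything past the first 2 MB would be ignored by the crawler.")
else:
    print("Well within the limit; this is not your problem.")
```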
Also on February 4, Google Ads introduced a Multi-Party Approval System. This means that even if a hacker somehow gains access to a user, manager, or admin account in your Google Ads setup, they won’t be able to perform sensitive actions easily.
For any sensitive action—such as adding a new user, removing an existing user, or changing a user’s access level—multiple admins or managers will receive a notification and must approve the action.
Importantly, these notifications won’t arrive via email. Instead, they will appear inside the Google Ads interface, specifically within the settings section. From there, authorized users can approve or deny the request.
If the request isn’t approved within a specific timeframe by the required admins or managers, it will automatically be treated as rejected.
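If the flow sounds abstract, here is a toy sketch of the idea in Python. To be clear, none of these names come from Google Ads; it only illustrates the request, multi-admin approval, and timeout-means-rejected logic described above:

```python
# Hypothetical illustration of multi-party approval. These class and field
# names are invented; this is not Google's API, just the general flow.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class SensitiveActionRequest:
    action: str                       # e.g. "add_user", "change_access_level"
    required_approvals: int = 2       # how many admins must sign off
    expires_at: datetime = field(
        default_factory=lambda: datetime.now() + timedelta(days=7))
    approvals: set = field(default_factory=set)

    def approve(self, admin_id: str) -> str:
        if datetime.now() > self.expires_at:
            return "rejected (approval window expired)"
        self.approvals.add(admin_id)
        if len(self.approvals) >= self.required_approvals:
            return "approved"
        return f"pending ({len(self.approvals)}/{self.required_approvals} approvals)"

request = SensitiveActionRequest(action="add_user")
print(request.approve("admin_1"))  # pending (1/2 approvals)
print(request.approve("admin_2"))  # approved
```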
Personally, I think this is a very good security feature. Large advertising accounts often hold critical business data, payment information, and major campaign budgets, meaning the future and present of many companies depend on them. Adding an extra layer of approval helps protect these accounts from serious security risks.
These are very sensitive details for any company. So in a situation where a single user makes a mistake or an unfortunate event like an account hack occurs, the entire Google Ads account won’t get compromised or handed over to someone else automatically.
Because of this system, even if one account is breached, critical actions cannot be performed without approvals from multiple authorized users. So this Multi-Party Approval feature in Google Ads is a very, very good step. I don’t think people were actively demanding it, but Google introduced it on its own initiative—and it’s definitely a strong security improvement.
Then on February 5, Google launched a broad core update for the Discover system.
The purpose of this update is to reduce sensitive or misleading clickbait-style articles in the Google Discover feed.
At the same time, this update also targets multi-niche websites—websites that publish articles across many unrelated industries. With this change, Google is limiting their ability to appear in Discover across multiple topics.
Now, such websites will mainly appear in Discover feeds only for the specific industry or topic where they have strong authority.
For example, imagine a news website that publishes content about politics, finance, and jobs, but most of its traffic actually comes from viral Bollywood-style content, like posts about an actor or actress spotted at the airport.
After this update, Google may start showing that website’s content mainly in the Bollywood or entertainment category, while removing its articles from politics, finance, or jobs-related Discover feeds.
In a way, this update is meant to consolidate topic authority—showing websites in Discover primarily for the subjects they are actually known for.
Now, when was this update launched? As I mentioned earlier—February 5.
And when did it finish rolling out? February 27.
The idea behind this update was actually quite good. But here’s the thing—February 27 has already passed, and in fact even March 5 has gone by.
Which means, yes… this month we are a little late covering it.
Then there’s the whole LLMs.txt controversy. Honestly, I don’t understand why this drama just refuses to leave people’s minds. Maybe it’s because LinkedIn influencers, Twitter influencers, and blogging influencers simply won’t stop talking about it. And of course, the AI community keeps pushing it too—mostly because they want to keep the hype alive around it.
But both Google and Microsoft executives clearly rejected the LLMs.txt idea again in early February on the Bluesky platform.
John Mueller from Google called the idea “quite a stupid concept.” Meanwhile, a Microsoft executive questioned the logic behind it by asking: “Why would we double our crawl budget because of LLMs.txt?”
Here’s the logic behind that criticism.
Let’s say you already have a normal HTML page on your website. Search engines like Google or Microsoft will visit, crawl, and decide whether to index that page. That process already exists.
Now with LLMs.txt, you’re basically creating another version of the same page in Markdown format, linking it from the LLMs.txt file, and then expecting search engines to crawl and process that version as well.
So essentially, you’re asking them to do the same job twice for the same content.
It’s like this: you drink a cup of tea once, but then ask someone to wash the cup twice. Why would anyone want to do that?
The idea simply doesn’t make sense, and there’s no serious support for it from major search engine companies. Yet people keep posting about it, creating LLMs.txt files, and then writing LinkedIn posts explaining the “benefits” themselves.
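For anyone who has never actually opened one of these files, here is roughly what the proposal describes: a plain Markdown file at the site root with a title, a short summary, and lists of links pointing to Markdown versions of your pages. The sketch below just writes an invented example; the domain and URLs are made up, and no major search engine has committed to reading it:

```python
# Illustration only: a minimal llms.txt in the shape the public proposal
# describes. The site, titles and URLs are invented for this example.
example_llms_txt = """\
# Example Site

> One-line summary of what the site is about.

## Articles

- [SEO Last Month, March 2026](https://example.com/seo-last-month-2026-03.md): Monthly recap
- [Free SEO Course](https://example.com/seo-course.md): Step-by-step lessons
"""

with open("llms.txt", "w", encoding="utf-8") as f:
    f.write(example_llms_txt)

print(example_llms_txt)
```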
So that was controversy number one.
Controversy number two also surfaced in February. There was a lot of chatter suggesting that Google had downgraded the rankings of websites that rely heavily on listicle-style content.
Now what exactly is a listicle?
In case you’re not familiar, a listicle is basically a list + article format. For example, imagine you run a digital marketing agency and publish an article titled:
“Top 10 Digital Marketing Agencies in Delhi.”
In that article, you conveniently put your own agency at number one, and then fill the rest of the list with random agencies.
This type of content has been widely used as a trick to manipulate search engines and AI systems like Google’s AI mode, ChatGPT, or Perplexity. The idea is simple: when someone searches for “top digital marketing agencies,” your agency’s name appears prominently.
But now Google is increasingly identifying and penalizing such manipulative listicles.
Anyone who remembers the Panda and Penguin updates knows that tricks like these never work for long. People try them once, see short-term success, and then rush to Reddit, LinkedIn, and Twitter to share the “hack.”
But Google’s engineers also have computers and internet access. They see these tricks too. And sooner or later, the next algorithm update comes—and all those manipulative articles get penalized.
So for the websites that relied on these tactics, their rankings dropped, their traffic dropped, and their listings disappeared.
And honestly, that’s a good thing. Updates like this should keep happening.
On February 10, Microsoft launched the AI Performance Report.
This means that inside Bing Webmaster Tools, you can now see which AI queries are showing your website in Bing Copilot search results.
As expected, people immediately started posting on LinkedIn, claiming that Microsoft had “opened the treasure of AI data.” According to them, you can now easily discover which AI queries are triggering your website or business to appear in AI results.
But here’s where people made two major mistakes.
Mistake number one:
If you never set up your website in Bing Webmaster Tools, never optimized it for Bing, and never tracked its performance there, then whatever small amount of search data you see will most likely be inaccurate or incomplete.
Whenever online platforms collect search data, they anonymize it. This means that a specific query—like “Which is the best bus to travel to Chandigarh?”—is originally linked to a particular user ID. But before storing or analyzing the data, the platform disconnects the query from the user ID and mixes it with other similar queries. This process ensures that even the search engine team cannot identify which user made the search.
Now here’s the catch: if a particular search query doesn’t have enough data, it cannot be anonymized properly. In such cases, most search platforms simply don’t show that data at all.
So imagine a situation where you never optimized your website for Bing. Naturally, the number of queries showing your site in Bing will be very limited. That means the data you see will either be inaccurate, incomplete, or sometimes not available at all.
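To make the thresholding point concrete, here is a toy sketch. This is not how Bing actually processes its data; the threshold value and the queries are invented, and it only shows why low-volume queries never make it into the report:

```python
# Toy illustration of query thresholding: queries with too few searches are
# dropped entirely, so sites with little Bing traffic see gaps in the report.
from collections import Counter

MIN_QUERIES = 50  # hypothetical minimum before a query can be reported

raw_logs = (
    ["best bus to chandigarh"] * 3 +          # too rare: will be hidden
    ["digital marketing agency delhi"] * 120  # common enough to report
)

counts = Counter(raw_logs)
reportable = {query: n for query, n in counts.items() if n >= MIN_QUERIES}

print(reportable)
# {'digital marketing agency delhi': 120} -- the rare query never shows up
```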
Mistake number two:
The search behavior on Bing is very different from Google.
The people who search on Bing often come from different geographic regions, or they might be older users, very young users, or people who simply use Microsoft Windows by default, where Bing is already integrated. There are also users who prefer Bing simply because they dislike Google.
Because of this, the search patterns on Bing do not match Google’s search patterns.
So you cannot assume that insights from Bing’s AI Performance Report will translate directly to Google traffic or Google SEO strategies.
Yes, it’s a good initiative by Microsoft. Maybe it will put some pressure on Google to release similar data, or maybe it won’t. But one thing is clear—you cannot blindly use this data in your SEO campaigns.
However, if your website already gets strong traffic from Bing and you’ve been actively doing Bing SEO, then this report could be useful for you. Otherwise, it’s better not to rely too heavily on LinkedIn-style hype posts.
Now, the second news on February 10 came jointly from Microsoft and Google.
Both companies introduced a new protocol called Web MCP (Model Context Protocol). This protocol is designed to make websites natively compatible with AI agents.
Basically, the Web MCP system includes two APIs that you can implement on your website. Once implemented, your website becomes native for AI agents or MCP agents, meaning these systems can interact with your website quickly and efficiently.
They can perform actions faster, retrieve data more efficiently, and reduce the amount of AI tokens consumed during interactions. In simple terms, it helps the entire AI + website ecosystem work more smoothly.
Interestingly, this idea was something I had predicted two or three months ago. I had suggested that by the end of 2026, we might see a system where a website would essentially have two versions:
- One version designed for human visitors
- Another version optimized specifically for AI agents
And that’s exactly what the Web MCP protocol is trying to achieve.
We’ve already created a video explaining the introduction of the Web MCP protocol, which you can check out. In the coming week, we’ll also release a detailed video explaining how Web MCP works.
The reason is simple—we’ve already received early access to the Web MCP program, and we are currently testing it. Although the protocol is expected to be publicly launched by the end of 2026, we already have access.
So by the end of this week, I’ll share a detailed explanation of what the Web MCP protocol actually is and how it works.
The next update, from February 12, came from Cloudflare; yes, the same CDN service that a large number of websites rely on.
Cloudflare proposed and launched a new system that can provide a Markdown (text-only) version of a website’s HTML content to AI agents.
Here’s how it works.
Normally, a website serves content in HTML format. But with this new system, when an AI agent visits a website and requests a text or Markdown version, Cloudflare will fetch the page’s HTML, convert it into simple Markdown text, and then deliver that version to the AI agent so it can easily process and digest the content.
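Conceptually, the conversion step looks something like this. It is a rough sketch using the open-source html2text package, not Cloudflare's actual pipeline, and the sample page is invented:

```python
# Rough sketch of the HTML-to-Markdown step, in the spirit of what Cloudflare
# is offering. Uses the open-source `html2text` package; this is not
# Cloudflare's code, and the sample page below is made up.
import html2text

html = """
<h1>Best Buses to Chandigarh</h1>
<p>Compare <a href="/operators">operators</a>, prices and timings below.</p>
"""

converter = html2text.HTML2Text()
converter.ignore_links = False   # keep links as Markdown [text](url)
converter.body_width = 0         # don't hard-wrap lines

markdown = converter.handle(html)
print(markdown)
# # Best Buses to Chandigarh
#
# Compare [operators](/operators), prices and timings below.
```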
On paper, this sounds like a smart and helpful idea. But in reality, there are two major problems with it.
Problem number one:
Most AI agents want to behave like normal human users. They usually disguise themselves as standard browsers and won’t follow this text/Markdown request protocol.
In fact, this could create the opposite situation—AI agents that could have used Markdown might start pretending to be normal users just to access the full HTML version instead.
Problem number two:
From an SEO perspective, this system could actually create more problems than benefits.
Think about it—your website content would now be served in two different formats. Imagine a legitimate bot that wants to analyze your website the same way a human would. But instead, Cloudflare forces it to receive a simplified Markdown version.
That means the bot could miss important parts of the website.
Because websites are not just plain text. Many pages include:
- Infographics
- Interactive tools
- Forms
- Visual elements
- Functional components
All of these elements may disappear in a text-only Markdown version. So the AI agent would only see a partial representation of the content, which could lead to incorrect understanding.
Because of this, the system has more downsides than advantages, especially from an SEO perspective.
Personally, the Web MCP protocol seems like a much better solution.
Cloudflare, for some time now, has been trying to position itself as a kind of AI savior for websites. The intention behind their ideas is actually very good. But the reality is that their tools and control are limited.
After all, what is a CDN at its core? A CDN is basically a middle layer—a mediator between the website and the user.
If we use a simple analogy:
Cloudflare isn’t the bride’s family, and it isn’t the groom’s family either. It’s more like that relative standing in the middle of the wedding trying to manage everything.
And usually, that person shouldn’t try to interfere too much in the process.
That’s exactly where Cloudflare might be making a small mistake here.
On February 17, Google rolled out the AI-powered configuration option in Google Search Console to all users.
This feature had been available in limited testing earlier, but now it has been made fully live for everyone.
However, I’m not going to say even a single word about this feature, because honestly, I still don’t see any real use for it.
Maybe it could be helpful for beginners—people who have never used Google Search Console before or very basic clients who don’t understand how to configure things manually. When such users log into Search Console, this AI-powered setup might guide them through some of the initial configuration steps.
But for SEO professionals, at least right now, it doesn’t offer anything particularly useful.
So I’m going to completely skip this topic.
And honestly, if someone still can’t handle basic tasks in Google Search Console, then I’m not sure why they would even be following this series in the first place.
We already have a free SEO course running, where we are teaching SEO step by step. There have already been four or five dedicated classes on Google Search Console.
Instead of relying on a random AI configuration tool, you’ll learn far more practical information by simply watching those sessions.
On February 18, Perplexity announced that it would pause ads on its platform. According to the company, showing ads on AI platforms can harm the user experience.
Now, I honestly have no idea what exactly is going on in Perplexity’s mind, but this announcement reminded me of a funny story.
Once, a Nano car breaks down on the road. An Audi driver passing by feels sorry for the driver and offers help. He says, “No problem, I’ll tow your car to the next garage.” So the Audi starts towing the Nano.
While this is happening, a BMW driver passes by. He laughs at the Audi and speeds ahead as if challenging it to a race. The Audi driver forgets that he’s towing the Nano behind him and starts racing the BMW.
Now the scene looks like this:
- BMW racing ahead
- Audi chasing it
- And the Nano dragging behind both of them
A traffic policeman standing nearby watches the scene and thinks, “Okay, I understand the BMW and Audi racing each other… but why is that Nano driver also driving so aggressively?”
That’s exactly what’s happening with Perplexity right now.
Google and ChatGPT are the big players arguing about ads, monetization, and AI search. Claude is somewhere in the middle enjoying the competition. But honestly, who is even paying attention to Perplexity?
In this story, Perplexity is the Nano.
Now another update. On February 20, many users noticed that in Google Search Console, the Indexing Report was missing historical data.
Some websites could only see indexing data after December 15, while others could only see data after January 15.
In simple terms, older indexed pages suddenly appeared as zero in the report.
This is actually a bug affecting many websites.
So if you log into Google Search Console and see that pages indexed before December 15 or January 15 are showing as zero, don’t panic.
This is not a problem with your website.
It’s simply a reporting issue on Google’s side.
And if your boss or client starts worrying about it, you can simply calm them down by sharing the article explaining this bug.
SEO Last Month – March 2026 FAQ
Q1. What is Multi-Turn Search by Microsoft?
A: Launched on February 2, Multi-Turn Search is Bing’s feature that suggests follow-up queries after your initial search. It’s similar to Google’s follow-up or “People also ask” boxes. The idea is to guide users to explore related searches, even if they initially avoid Bing.
Q2. What is the Microsoft Publisher Content Marketplace?
A: Introduced on February 3, this marketplace allows major publishers to list unique content. AI agents or browsers can negotiate to use it, paying the publisher accordingly. Currently, it’s limited to major publishers like The Associated Press, Business Insider, Vox Media, and a few others.
Q3. What was the Google crawling update on February 4?
A: Google now processes only the first 2 MB of HTML per page. Anything beyond that is ignored. For context, 2 MB can contain about 2 million English characters—far more than the homepage of most sites. This is not a problem unless your pages are unusually massive.
Q4. What is Google Ads Multi-Party Approval?
A: Launched February 4, it requires multiple authorized users to approve sensitive actions in Google Ads (e.g., adding/removing users or changing access). Notifications appear inside the Google Ads interface. If not approved within a set time, requests are automatically rejected. This improves security for large advertising accounts.
Q5. What was the Google Discover broad core update?
A: Launched February 5 and finished February 27, it reduces clickbait content in Discover feeds and limits multi-niche websites to appear primarily in their strongest topics. For example, a site posting across politics, finance, and entertainment may now appear mostly for the topic where it has authority.
Q6. What is the LLMs.txt controversy?
A: LLMs.txt was an idea to create Markdown versions of web pages for AI crawling. Google and Microsoft rejected it in early February, calling it unnecessary because it duplicates crawl requests for the same content. The search engines see no real benefit in crawling the same page twice.
Q7. How does Google treat listicle-style content now?
A: Google has started downgrading rankings for manipulative listicles (articles formatted as lists to game search engines). While they may have worked briefly, the latest updates now penalize such tactics, reducing both traffic and ranking for offending sites.
Q8. What is Microsoft’s AI Performance Report?
A: Launched February 10, it allows Bing Webmaster Tools users to see which AI queries show their site in Bing Copilot results. However, if your site isn’t optimized for Bing or tracked in Webmaster Tools, the data may be incomplete or inaccurate. Bing search patterns also differ from Google’s.
Q9. What is the Web MCP (Model Context Protocol)?
A: Announced on February 10 by Microsoft and Google, Web MCP is a protocol that makes websites natively compatible with AI agents. It includes two APIs allowing AI systems to interact efficiently with your site. Essentially, it enables separate versions for human visitors and AI agents.
Q10. What is Cloudflare’s Markdown system for AI agents?
A: Launched February 12, Cloudflare can serve a Markdown (text-only) version of a website to AI agents. While useful in theory, most AI agents behave like humans and may ignore Markdown requests. Important visual or functional content may be lost, making this less ideal for SEO than Web MCP.
Q11. What is the AI-powered configuration in Google Search Console?
A: Rolled out February 17, this feature helps beginners configure Search Console settings. For SEO professionals, it currently adds little value, so it can largely be skipped.
Q12. Why did Perplexity pause ads?
A: On February 18, Perplexity paused ads to prevent disruption to user experience on its AI platform. It’s essentially a user-experience-driven decision, with no immediate impact on websites’ SEO.
Q13. What was the Google Search Console indexing bug?
A: On February 20, many users noticed missing historical indexing data. Pages indexed before December 15 or January 15 appeared as zero. This is a reporting bug on Google’s side, not an issue with your website.