Even if you’ve never bothered to download ChatGPT, you’re probably familiar with AI-powered search results: for example, the summaries that appear at the top of a Google search and attempt to answer your query directly.

On Google, these summaries appear before the organic results (the ranked list of websites); on tools like ChatGPT, this summary format is the entire response to a query.
The ranking of organic results is the product of a search algorithm honed over many years. AI-powered search is much newer, and the LLMs themselves are still early in their evolution toward greater utility.[1] In other words, a key component of brand visibility is in the hands of a nascent technology whose output can change from one day to the next.
So how exactly are AI search results generated in 2025? According to a comprehensive study by the SEO and marketing platform Ahrefs covering both Google AI Overviews and ChatGPT, “Brand web mentions show the strongest correlation (0.664) with AI Overview brand visibility.”[2]
Brand web mentions are just what they sound like: Other websites on the Internet that mention your brand. If the brand is being discussed widely (and positively), that bodes well for inclusion in responses to a query like, “What is the best company for X?”
For anyone familiar with SEO, that dynamic will sound familiar: If other reputable websites mention the brand and link to its website, that helps its ranking on Google. The difference is that the current generation of AI-powered search is not as good at ignoring poor-quality sources as Google organic search. This applies to both ChatGPT and Google AI Overviews, though several sources note that ChatGPT and its competitors are more prone to citing bad sources than Google’s AI search.
Search Engine Land notes that “Language models can begin to struggle with signal noise, making it harder to differentiate true authority.”[3] In other words, an LLM won’t necessarily know how to pick out an authentic source in a sea of misinformation.
This is the element that makes the topic relevant to our readers. A new online fraud scenario has emerged, in which criminal actors create a network of professional-seeming websites, formatted in a manner easily digestible for an LLM but containing false information about the brand.[4]
If executed successfully, this fraud attack can lead to false answers being given to basic queries such as:
- What is the official website of X brand?
- How can I pay my bill with X company?
- What is the contact information for….?
These are all queries that real users type into search engines, trusting the first result that appears. If a scammer can inject fake websites and contact details into the AI search result, they have a much likelier path to conversion than with phishing emails or fake websites promoted on social media.
This is not merely theoretical: the attack has already been observed in practice, targeting cryptocurrency, banking, and travel websites.[5] The common factor among these three targets is the high value of the transactions conducted on such sites, which makes sense given the effort required to carry out the attack. Brands with e-commerce websites selling expensive items should take particular notice, as diverting such payments to a clone website could be worth the scammer’s time investment.
Next-generation brand protection services have already hit the market, aimed specifically at monitoring LLMs. IP Twins is evaluating several of these to determine if incorporating LLM-specific checks would add value to our existing suite of monitoring products.
Notwithstanding the value of LLM-specific monitoring, it bears mentioning that AI-powered search still draws primarily from standard website content. This means that brand protection strategies designed to guard against fraudulent websites remain effective in this new environment. However, it is important to ensure that the brand is protected from several angles:
Domain Names
A sound domain name strategy works on two fronts:
First, it’s important to continue renewing (and re-registering if necessary) all domain names formerly used by the company. ChatGPT has been observed providing references to domain names that are no longer in the possession of the original owner or are free to register. If the company has switched domain names for certain resources (e.g., a payment gateway), it’s strongly recommended to keep the old domains active and redirecting to the current address of the resource in question; otherwise, a scammer can simply register the old domain and receive all the traffic driven by AI search.
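To make this concrete, here is a minimal sketch (in Python) of the kind of check this implies: it follows redirects from a list of legacy domains and warns if any of them fails to land on the current primary site. The domain names and primary host below are hypothetical placeholders, and a real review would also cover certificate validity and registration expiry dates.

```python
# Minimal sketch: verify that legacy domains still redirect to the current
# primary site. All domain names below are hypothetical placeholders.
import requests
from urllib.parse import urlparse

PRIMARY_HOST = "www.example-brand.com"      # current official site (placeholder)
LEGACY_DOMAINS = [
    "example-brand-payments.com",           # retired payment-gateway domain (placeholder)
    "examplebrand.net",                     # previously used brand domain (placeholder)
]

def check_redirect(domain: str) -> str:
    """Follow redirects from a legacy domain and report where it lands."""
    try:
        resp = requests.get(f"http://{domain}", timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        return f"{domain}: unreachable ({exc.__class__.__name__}) -- renew or repoint?"
    final_host = urlparse(resp.url).hostname or ""
    if final_host.endswith(PRIMARY_HOST.removeprefix("www.")):
        return f"{domain}: OK, redirects to {resp.url}"
    return f"{domain}: WARNING, ends up at {resp.url} (not the primary site)"

if __name__ == "__main__":
    for d in LEGACY_DOMAINS:
        print(check_redirect(d))
```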
Second, it’s advisable to defensively register domain names that an LLM could plausibly treat as a credible web address for the brand. Brands already use this strategy to prevent human users from viewing a fake URL as credible, and there is now observational evidence that LLMs also consider the composition of a URL when judging whether a website is authentic.
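As a rough illustration, the sketch below enumerates brand/keyword/TLD combinations that might be mistaken for official addresses and uses a DNS lookup as a crude signal of whether each one is already in use. The brand name, keywords, and TLDs are assumptions chosen for illustration; a real defensive-registration review would work from a much broader permutation list and proper availability data.

```python
# Minimal sketch: enumerate domain variants that an LLM (or a human) might treat
# as a plausible official address for the brand. Keywords and TLDs are examples;
# a real defensive-registration review would use a much broader permutation list
# and proper WHOIS/availability data rather than a bare DNS lookup.
import itertools
import socket

BRAND = "examplebrand"                      # placeholder brand name
KEYWORDS = ["", "login", "pay", "support", "billing", "official"]
TLDS = ["com", "net", "org", "io"]

def candidate_domains(brand: str):
    """Yield brand/keyword/TLD combinations worth reviewing."""
    for kw, tld in itertools.product(KEYWORDS, TLDS):
        label = brand if not kw else f"{brand}-{kw}"
        yield f"{label}.{tld}"

def resolves(domain: str) -> bool:
    """Crude signal that a domain is registered and in use (has a DNS record)."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for dom in candidate_domains(BRAND):
        if resolves(dom):
            print(f"{dom}: already resolves -- review who holds it")
        else:
            print(f"{dom}: no DNS record -- candidate for defensive registration")
```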
Web Content Monitoring
As mentioned above, brand web mentions are the number-one factor in AI search placement. Web content monitoring, the service that generates reports of websites mentioning the brand in any context, is therefore a helpful tool both for keeping tabs on the authentic third-party content from which LLMs will draw their search results and for detecting the networks of fake brand mentions that would suggest the brand is the subject of an attack aimed at delivering fake results to AI searchers.
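In spirit, such monitoring boils down to something like the sketch below: fetch a set of pages, look for the brand name, and flag pages that pair the mention with domains other than the official one. The brand name, official domain, and page URLs are placeholders; commercial services crawl far more broadly and score source quality rather than relying on a simple domain check.

```python
# Minimal sketch: scan a short list of pages for brand mentions and flag pages
# that cite domains other than the official one. Brand name, official domain,
# and URLs are placeholders for illustration only.
import re
import requests

BRAND = "Example Brand"                     # placeholder
OFFICIAL_DOMAIN = "example-brand.com"       # placeholder
PAGES_TO_CHECK = [
    "https://blog.example-review-site.com/best-providers",   # placeholder URL
]

URL_PATTERN = re.compile(r"https?://([\w.-]+)", re.IGNORECASE)

def scan_page(url: str) -> None:
    """Fetch a page, check for the brand name, and list non-official domains it cites."""
    try:
        text = requests.get(url, timeout=10).text
    except requests.RequestException as exc:
        print(f"{url}: fetch failed ({exc.__class__.__name__})")
        return
    if BRAND.lower() not in text.lower():
        print(f"{url}: no brand mention")
        return
    # Every hostname linked anywhere on the page; flag those outside the official domain.
    cited = {host.lower() for host in URL_PATTERN.findall(text)}
    others = sorted(h for h in cited if not h.endswith(OFFICIAL_DOMAIN))
    if others:
        print(f"{url}: mentions the brand and cites non-official domains: {others[:5]}")
    else:
        print(f"{url}: mentions the brand; only the official domain is cited")

if __name__ == "__main__":
    for page in PAGES_TO_CHECK:
        scan_page(page)
```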
Takedown
A companion to web content monitoring, to be used if an attack is detected. If a webpage is maliciously providing false information about a company, it is a valid takedown target, even if there is no specific attack (e.g., phishing or a fake payment platform) hosted on the website in question.
Conclusion
The utility of AI-powered search is to deliver a result directly to the user, removing the need to check multiple references. That’s also the threat to brands: A result delivered to the user in this context is immediately trusted, so any successful effort to manipulate the results can cause real harm to the target.
Still, this is a very new type of attack and the companies that operate AI-search may soon find a way to root out false source material. In this context, IP Twins does not recommend overhauling a brand protection strategy to account for LLMs.
The exception is for brands in banking and other industries that conduct high-value transactions over the internet. If you represent such a company, IP Twins would be happy to bring you into our ongoing evaluation of LLM-monitoring products and have a collaborative conversation about whether extending monitoring to this environment would deliver positive ROI.
For other brands that sell products and services at a lower price point, we believe that AI-powered search will, for the time being, remain a concern primarily for marketing departments rather than brand protection and anti-fraud teams. For these brands, we’d still highlight web content monitoring as a new product to consider. Web content tends to fall below domain names in the order of monitoring priorities, but it takes on additional relevance in the context of AI-powered search.
Notes
[1] Search Engine Land, “In GEO, brand mentions do what links alone can’t,” https://searchengineland.com/in-geo-brand-mentions-do-what-links-alone-cant-459367
[2] Ahrefs, “An Analysis of AI Overview Brand Visibility Factors (75K Brands Studied)”
[3] Search Engine Land, “In GEO, brand mentions do what links alone can’t,” https://searchengineland.com/in-geo-brand-mentions-do-what-links-alone-cant-459367
[4] SecureBlitz Cybersecurity, “The Dark Side of LLMs: From SEO Poisoning to GEO Manipulation,” https://secureblitz.com/dark-side-of-llms/
[5] SecureBlitz Cybersecurity, “The Dark Side of LLMs: From SEO Poisoning to GEO Manipulation,” https://secureblitz.com/dark-side-of-llms/