Trump assassination attempt: How search tools limited results to prevent misinformation

By: Klara Širovnik

August 7, 2024

(Source: Reuters Connect/Alamy/Kirill Ivanov)

"How is this not Election Interference? Mark Zuckerberg is working right alongside Google to erase the Donald Trump assassination attempt from history," wrote one user on X (formerly Twitter) after the attempted assassination of Donald Trump.

"Even if you type out the entire word no results autofill on Google. We’re witnessing the erasure of history in real time. Unbelievable," wrote another, after both Meta’s AI chatbot and Google’s autocomplete function failed to suggest or provide details of the incident on July 13. 

As tested by Business Insider, other chatbots, including ChatGPT, gave problematic responses as well. "There is no evidence or credible reports that Donald Trump was nearly assassinated in Butler, PA or at any recent event," ChatGPT said when asked for information on this subject.

Users of other search engines, such as Yahoo, Microsoft Bing, or DuckDuckGo, have not reported similar issues or concerns.

Is the lack of "breaking news" information through certain tools a sign of political interference? How should we interpret these phenomena, and what are the facts? Logically Facts explains how these search engines and chatbots handle breaking news, and why voids in the information they provide occur.

Why do data voids occur?

The upcoming U.S. election has intensified concerns about whether social media and AI-powered chatbots and browsers can provide users with the right information, and whether partisan interests influence what they show. These concerns are compounded by the fact that platforms often fail to provide correct, comprehensive, and timely information. There are gaps or "voids" in the data they offer, especially in the case of breaking news, when little reliable information about an event is yet available.

Such deficiencies in search results for a particular query were termed a "data void" by Michael Golebiewski of Microsoft in 2018. A "data void" arises when an event occurs – such as an assassination attempt on a former president – but a platform provides no information, or sometimes incorrect information, about it. Golebiewski suggested that data voids exist because of an assumption built into the design of search engines: that for any given query, there is some relevant content. "This is simply not true," he stated. "If search engines have little content to return for a given query, the 'most relevant' content is likely to be low-quality or problematic, or both."

"While that term was originally designed to describe this phenomenon in search engine results, it also applies to generative AI chatbots, as these technologies are trained on data accessible to the companies behind the models," Zelly Martin, a senior research fellow at the Center for Media Engagement at the University of Texas at Austin, explained to Logically Facts.  

Meta AI fails to address assassination attempt

In the two weeks following the assassination attempt, users on X criticized the behavior of the Meta AI chatbot and Google's autocomplete feature. One user's screenshot of Meta AI giving an incomplete answer to a question about the assassination, omitting the recent attack, has been viewed more than 75,000 times; the user's caption suggests that the platform is trying to erase the event from people's memory.

Other users have echoed this criticism with similar comments. "We are witnessing the suppression and cover-up of one of the biggest and most consequential stories in real time," claims another post, which has received 2.7 million views and 11,000 shares. Similar accusations, for which there is no evidence, have also been shared by Donald Trump Jr. He recounted searching Google, which offered no suitable predictions, and wrote: "Big Tech is trying to interfere in the election AGAIN to help Kamala Harris. We all know this is intentional election interference from Google. Truly despicable."

This has happened before. In a study by the AI Democracy Projects, experts testing five leading AI models (Anthropic's Claude, Google's Gemini, OpenAI's GPT-4, Meta's LLaMA 2, and Mistral AI's Mixtral) found that the responses were often inaccurate, misleading, and even harmful. "For example, when asked to identify polling stations in specific zip codes in the U.S., they often provided inaccurate and out-of-date addresses. When asked about procedures for registering to vote, they often provided incorrect and misleading instructions. In one case, a chatbot reported that voters in California were eligible to vote by text message, which is not allowed in any U.S. state," VOA summarises the study.

In a press release, Meta addressed the two main problems that occurred after the assassination attempt on Donald Trump. One was the circulation of a doctored photo of the former President with his fist in the air, edited so that the Secret Service agents appeared to be smiling. Meta's systems incorrectly applied a fact-check label to it; according to Meta, this was due to the high similarity between the doctored and genuine photos. Logically Facts also debunked the edited image. The other issue was Meta AI's responses to the shooting, with the chatbot responding that it had no data on the attack or, in rare cases, that the event didn’t happen.


Comparison between the viral image and the original picture. (Source: X/AP News/Screenshot, Modified by Logically Facts)

None of the problems were bias-related, Meta explained, but the company recognized that the episodes could give that impression.

Meta's systems omitted or misstated information about the shooting for several reasons, Zelly Martin explained. "First, as noted by researchers, data voids can allow false information to proliferate. Rather than risking that the chatbot would provide false information, Meta trained the chatbot to refrain from responding before information about the shooting could be accurately assessed and communicated."

Meta confirmed this in their press release. "Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened – and instead give a generic response about how it couldn’t provide any information," they stated. Logically Facts has also contacted Meta for further clarification, but has not yet received a response.
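In practice, such a guardrail can be as simple as a topic filter that runs before the model answers. The sketch below is a hypothetical illustration of the approach Meta describes, not its actual code; the keyword list and function names are assumptions:

```python
# Hypothetical sketch of a breaking-news guardrail; NOT Meta's actual code.
# Queries about an unverified developing event receive a generic response
# instead of a model-generated answer that could hallucinate details.

BLOCKED_TOPICS = {  # assumed: curated manually while reliable reporting is scarce
    "trump assassination",
    "butler rally shooting",
}

GENERIC_RESPONSE = (
    "I can't provide reliable information about this developing story yet. "
    "Please consult trusted news sources for the latest updates."
)

def guarded_answer(query: str, model_answer) -> str:
    """Return a generic refusal for blocked topics; otherwise query the model."""
    normalized = query.lower()
    if any(topic in normalized for topic in BLOCKED_TOPICS):
        return GENERIC_RESPONSE
    return model_answer(query)
```

Blocking at the query level trades completeness for safety: users get no answer at all, which, as the reaction on X shows, can itself look like suppression.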

"Second, in the few cases in which the chatbot returned false information, this was due not to bias, but instead to a problem that is known to be an issue across generative AI models – that of hallucinations, or chatbots generating false information unintentionally," said Martin.

Similar situations arise on issues concerning the opposing political side. For instance, when experimenting with Meta's chatbot on April 30, 2024, Martin asked it in English, "Who won the 2020 Presidential election in the U.S.?" to which it replied, "I am still improving my command of non-English languages, and I may make errors while attempting them. I will be most useful to you if I can assist you in English." Martin told us, "This is not evidence of bias against President Biden, who won the election, but instead evidence of the inherent uncertainty and continual development of such products." Similarly, when she asked Meta's chatbot on August 1 who the current Democratic presidential nominee is, the chatbot informed her that President Biden is the presumptive nominee, despite the fact that he had dropped out of the race and endorsed Vice President Kamala Harris.


(Source: Zelly Martin)


(Source: Zelly Martin)

Google Autocomplete unable to provide contemporaneous information

A similar problem occurred with Google's autocomplete, which failed to generate predictions on this topic. Elon Musk addressed this publicly on Fox News and X.

Autocomplete predictions reflect real searches that have been made on Google. To determine what predictions to show, the systems look for common queries that match what someone starts to enter into the search box but also consider language, location, past searches, and trending interest in a query.
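In simplified terms, autocomplete pairs prefix matching over popular past queries with filters for sensitive categories such as political violence. The toy sketch below illustrates the idea; the query log, weights, and filter list are invented for illustration and are not Google's actual systems:

```python
# Toy autocomplete sketch: prefix matching plus a sensitive-topic filter.
# The query log, frequencies, and filter list are hypothetical.

QUERY_LOG = {  # popular query -> relative search frequency (assumed)
    "trump assassination attempt": 9500,
    "trump rally": 4200,
    "trumpet lessons": 310,
}

SENSITIVE_TERMS = {"assassination"}  # predictions suppressed; search itself still works

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    """Return the most popular non-sensitive queries starting with the prefix."""
    prefix = prefix.lower()
    candidates = [
        (freq, query)
        for query, freq in QUERY_LOG.items()
        if query.startswith(prefix)
        and not any(term in query for term in SENSITIVE_TERMS)
    ]
    return [query for _, query in sorted(candidates, reverse=True)[:limit]]

print(autocomplete("trump"))  # ['trump rally', 'trumpet lessons'] - the most
# popular prediction is filtered out even though many users search for it
```

A filter like this explains how a prediction can be missing even when the underlying searches are extremely popular: the suppression happens at the prediction layer, not in the search index itself.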

Google itself acknowledged the issue, saying that its "systems have protections against Autocomplete predictions related to political violence that were working as intended prior to the July 13 assassination attempt on Trump". 

Logically Facts asked Google for further details but received no response. Google told AP that its systems “were out of date even prior to July 13, meaning that the protections already in place couldn’t take into account that an actual assassination attempt had occurred.” The article adds, "Search engine experts told the AP that they don’t see evidence of suspicious activities on Google’s part and that there are plenty of other reasons to explain why there have been a lack of autocomplete predictions about Trump."

What role does news play on these platforms?

Meta in particular has been trying to reduce the role of news across Facebook, Instagram, and Threads and has restricted the algorithmic promotion of political content. "The company has also been reducing support for the news industry, not renewing deals worth millions of dollars, and removing its news tab in a number of countries," researchers from the Reuters Institute wrote in the 2024 Digital News Report.

Meta deprecated Facebook News in the U.K., France, and Germany in September 2023, followed by the U.S. and Australia in April 2024, after the number of people using Facebook News in Australia and the U.S. dropped by over 80 percent in 2023. Meta also argues that less than 3 percent of what users engage with on Facebook is news content. 

The report concludes that while publishers are concerned about falling referrals from social media, they also worry about what might happen with search and other aggregators if chatbot interfaces take off. Across all markets, search and aggregators, taken together (33 percent), are a more important gateway to news than social media (29 percent) and direct access (22 percent). 

Although Meta distances itself from the news, Meta AI is still keen to scan the news and summarise the latest articles, notes the Washington Post, which tested the chatbot and found that it regularly responded with subtly rephrased versions of sentences that appeared in the original articles. "The answers themselves do not link to the stories or the names of the sources." 

The failure to link users to original sources, combined with the data voids from which chatbots suffer, can affect the quality of the information provided. False and manipulative content can fill data voids, especially in languages other than English, and generative AI may exacerbate this problem, according to a report on generative AI in politics from the Center for Media Engagement. "Linguistic minorities and diaspora communities may be at particular risk for targeted disinformation and propaganda campaigns, given the data void that exists surrounding reliable information about U.S. electoral processes in languages other than English and the potential for using generative AI to mimic culturally and linguistically accurate speech."

As the 2024 U.S. election draws near, it is critically important that users refrain from placing wholehearted trust in generative AI models and instead rely on trusted news sources for accurate information, concludes our interviewee Zelly Martin, the first author of the cited Center for Media Engagement report. "Indeed, as Wired recently reported, all generative AI models continue to struggle, even in generating information unrelated to politics."
