US-Iran war: X’s crackdown on AI-generated war videos, product head Nikita Bier introduces 90-day monetisation ban; Learn how disinformation spreads and gets monetised
On 3rd March, Elon Musk-owned social media platform X announced that creators who post artificial intelligence (AI)-generated videos of armed conflicts without disclosing that the content is synthetic will face a 90-day suspension from its Creator Revenue Sharing programme.
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…— Nikita Bier (@nikitabier) March 3, 2026
The policy was announced by head of product Nikita Bier. In a social media post on X, he said that the decision was necessary because modern artificial intelligence tools make it extremely easy to fabricate convincing war footage.
He said, “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.”
Under the new rule, creators must clearly label AI-generated videos depicting armed conflicts. Those who fail to do so lose eligibility to earn revenue from their posts for 90 days, and repeat offenders may be permanently banned from the monetisation programme.
X will identify the violations using a combination of technical detection tools, metadata analysis and the platform’s crowd-sourced fact-checking system, Community Notes. Notably, the policy is specifically focused on AI-generated war footage. The decision reflects concerns that such videos can spread rapidly during conflicts and distort public understanding of events.
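X has not published how its detection pipeline works, but the metadata-analysis step it mentions can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the field names, the `flag_for_review` helper, and the list of generator keywords are hypothetical, not X's actual logic.

```python
# Illustrative sketch of metadata-based screening for AI-generated video.
# The tag names and generator keywords checked here are assumptions for
# illustration; they are not X's real detection rules.

AI_GENERATOR_MARKERS = {"c2pa.ai_generated", "trainedAlgorithmicMedia", "synthetic"}

def flag_for_review(metadata: dict) -> bool:
    """Return True if the video's metadata suggests AI generation."""
    # Provenance manifests (C2PA-style) may declare synthetic origin outright.
    assertions = metadata.get("provenance_assertions", [])
    if any(marker in assertions for marker in AI_GENERATOR_MARKERS):
        return True
    # A missing camera model on supposed on-the-ground footage is only a weak
    # signal, so combine it with a software tag naming a known video generator.
    software = metadata.get("software", "").lower()
    known_generators = ("diffusion", "gen-3", "veo", "sora")
    return not metadata.get("camera_model") and any(g in software for g in known_generators)

# A clip whose provenance manifest declares synthetic media is flagged:
clip = {"provenance_assertions": ["trainedAlgorithmicMedia"], "software": ""}
print(flag_for_review(clip))  # True
```

In practice such signals would only queue a post for review; metadata is easy to strip, which is why the platform pairs it with detection tools and Community Notes.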
Policy announced amid escalating Middle East conflict
The announcement came at a time when the Middle East is witnessing a rapidly escalating conflict involving the United States, Israel and Iran. Israel and the US struck Iran’s nuclear and military establishments. In those initial strikes, the Supreme Leader of Iran, Ali Khamenei, and several top leaders were killed.
Iran retaliated and launched missile attacks targeting locations across the region, including areas hosting US military installations in countries such as Qatar, Saudi Arabia, and the United Arab Emirates. Several locations in Israel, including Tel Aviv, have also been targeted during the escalation.
One of the major developments in the conflict has been claims of the destruction of the US AN/FPS-132 long-range radar system in Qatar, a strategic surveillance installation valued at over one billion dollars.
In such an environment, images and videos circulating online play a crucial role in shaping public understanding of what is happening on the ground. However, the same conditions also create fertile ground for manipulated or artificial visuals.
‘Gaza journalist’ account questioned
The scale of the problem became evident when a viral video claiming to show Iranian rockets striking Tel Aviv began circulating on the platform. The Community Notes attached to the post highlighted several inconsistencies suggesting the footage was artificially generated.
The notes pointed out that the missile’s speed appeared unrealistic, that the explosion’s sound arrived earlier than it should have at that distance, and that the smoke did not behave according to real-world physics. The video was shared by an account under the name Ahmed Hamzan, which claimed to belong to a war reporter from northern Gaza.
??? pic.twitter.com/pNDK4TBRVa— Nikita Bier (@nikitabier) March 4, 2026
Nikita Bier replied to the post with a single question mark, publicly casting doubt on the account’s claimed identity as a journalist.
Pakistani account network posting AI war videos
The coordinated side of the problem became evident when X recently uncovered a network spreading AI-generated war videos. According to Bier, the platform identified 31 accounts operated by a single person located in Pakistan. The accounts were reportedly hacked profiles whose usernames were changed around 27th February to variations of “Iran War Monitor”.
Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos. All were hacked and the usernames were changed on Feb 27 to "Iran War Monitor" or some derivative. We are getting much faster at detecting this—and also eliminating the incentive to do…— Nikita Bier (@nikitabier) March 4, 2026
By controlling multiple accounts simultaneously, the operator was able to distribute the AI-generated videos across several profiles, which created the impression that multiple independent sources were sharing the same videos.
Such coordinated amplification can significantly increase the credibility of misleading content because audiences often interpret repeated posts from different accounts as confirmation of authenticity.
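The rename pattern Bier described, many accounts switching to near-identical usernames on the same day, is exactly the kind of signal a platform can cluster on. The sketch below is illustrative: the account records, field names, and cluster-size threshold are assumptions, not X's actual detection system.

```python
# Minimal sketch of spotting a coordinated rename pattern: many accounts
# adopting near-identical usernames on the same date. The record format and
# the min_size threshold are illustrative assumptions.
from collections import defaultdict

def find_rename_clusters(accounts, min_size=5):
    """Group accounts by (rename date, normalised name stem); return large clusters."""
    clusters = defaultdict(list)
    for acc in accounts:
        # Normalise "IranWarMonitor_7", "iran-war-monitor" etc. to one stem.
        stem = "".join(ch for ch in acc["new_name"].lower() if ch.isalnum())
        stem = stem.rstrip("0123456789")
        clusters[(acc["renamed_on"], stem)].append(acc["id"])
    return {key: ids for key, ids in clusters.items() if len(ids) >= min_size}

# 31 hacked profiles all renamed to a variant of "Iran War Monitor" on Feb 27:
accounts = [{"id": i, "new_name": f"IranWarMonitor_{i}", "renamed_on": "2026-02-27"}
            for i in range(31)]
clusters = find_rename_clusters(accounts)
print(len(clusters[("2026-02-27", "iranwarmonitor")]))  # 31
```

Real systems would combine such name clustering with login, device, and timing signals, but even this simple grouping surfaces the pattern X described.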
Why sensational war footage spreads rapidly
Academic research has consistently shown that sensational content spreads faster on social media. In their 2024 paper “Going Viral: Sharing of Misinformation by Social Media Influencers”, Mulcahy, Buntain and colleagues found that high-visibility accounts and influencers often play a major role in amplifying misleading information, and that dramatic content goes viral because it generates engagement and shares.
When videos depict explosions, missile strikes or destruction, they trigger strong emotional reactions among viewers. Such content is therefore more likely to be shared widely, even before its authenticity is verified.
The rise of artificial intelligence tools has made this problem significantly more complex because realistic war footage can now be generated using software rather than recorded on the ground.
The attention economy and the business of disinformation
Researchers have argued that the spread of sensational content online is not accidental but closely tied to the economic structure of social media platforms.
In his 2023 paper titled “Disinformation on Digital Media Platforms: A Market Shaping Approach”, researcher Carlos Diaz Ruiz explained that digital platforms operate within an attention economy, where content that attracts more engagement becomes more valuable.
Algorithms reward posts that generate views, comments and shares, pushing such content to larger audiences. As a result, creators often learn that controversial or emotionally charged content performs better than cautious or nuanced reporting. This structure can create powerful incentives to produce sensational narratives.
Monetisation and the incentive to go viral
The economic incentives become even stronger when viral content is linked directly to revenue. X’s Creator Revenue Sharing programme allows eligible users to earn a portion of advertising revenue generated by engagement on their posts.
However, such systems can unintentionally encourage sensational or misleading content because creators may prioritise posts that attract the most attention.
Research on digital misinformation ecosystems has documented how engagement-based monetisation can encourage actors to produce provocative or misleading material because high engagement translates directly into financial reward.
This is precisely the incentive structure X’s new rule attempts to address. By removing revenue eligibility for creators who post undisclosed AI war videos, the platform aims to reduce the financial motivation behind such content.
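The incentive mechanics can be made concrete with a toy payout model. X does not publish its revenue-share formula, so the rate and engagement figures below are made-up assumptions purely to illustrate why suspension removes the financial motivation.

```python
# Illustrative model of engagement-based payouts and the effect of a 90-day
# monetisation suspension. The per-impression rate and view counts are
# hypothetical; X's actual formula is not public.

REVENUE_PER_1K_IMPRESSIONS = 0.50  # assumed rate, in dollars

def payout(impressions: int, suspended: bool = False) -> float:
    """Creator earnings for a post under the sketched revenue-share model."""
    if suspended:
        # Undisclosed AI war footage: revenue eligibility removed for 90 days.
        return 0.0
    return impressions / 1000 * REVENUE_PER_1K_IMPRESSIONS

viral_fake = 5_000_000   # a sensational AI clip that goes viral
routine_post = 50_000    # an ordinary post

print(payout(viral_fake))        # 2500.0 — the incentive to go viral
print(payout(viral_fake, True))  # 0.0    — incentive removed by the policy
print(payout(routine_post))      # 25.0
```

Under such a model a single viral fake out-earns an ordinary post a hundredfold, which is the asymmetry the new rule tries to neutralise.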
AI-generated war videos and the risk of digital propaganda
AI tools have become extremely good at creating lifelike videos and photos, significantly eroding the ability to differentiate between real and fake visuals.
Scenes depicting missile launches, explosions and city skylines can now be created in minutes using artificial intelligence models. To a casual viewer watching a short clip on a smartphone screen, such footage can appear indistinguishable from real battlefield recordings.
When these visuals are combined with coordinated posting strategies and sensational captions, they can dominate online conversations before verification mechanisms catch up.
Such content can easily become a form of digital propaganda which shapes perceptions of conflicts and influences public opinion.
Conclusion
X’s decision to suspend monetisation for creators posting undisclosed AI-generated war videos shows that there is growing concern about the role of synthetic media in modern information warfare.
The discovery of a Pakistan-based operator running 31 accounts posting AI war footage demonstrates how easily coordinated networks can exploit social media ecosystems.
X is targeting the financial incentives behind misleading AI war videos in an attempt to slow their spread. It is unclear whether such measures will be enough to counter the growing wave of AI-driven disinformation, but it is clear that the digital battlefield has become almost as influential as the physical one in shaping global perceptions of war.