
Dr Zoran Mitrovic
Published Jul 2, 2025
Palantir Technologies, the secretive data analytics company, has operated at the centre of heated debates about technology, privacy, and government power since its founding. Known for its work with intelligence agencies, military operations, and law enforcement, Palantir’s AI-driven platforms—Gotham, Foundry, and AIP—have drawn fierce criticism from privacy advocates, civil liberties groups, and even some of its own employees.
This comprehensive analysis examines the most significant controversies surrounding Palantir, including mass surveillance operations, military applications, ethical concerns about AI deployment, and internal workplace issues.
Mass surveillance and privacy violations
Palantir’s technology has become deeply embedded in government surveillance operations worldwide, raising fundamental questions about privacy rights and democratic oversight.
NSA and the PRISM Programme
The 2013 Edward Snowden revelations exposed the NSA’s extensive PRISM surveillance programme, which collected data from major tech companies including Google, Facebook, Apple, and Microsoft without user consent. While Palantir officially denied direct involvement in PRISM, leaked documents suggested the company’s software was used to analyse the massive volumes of surveillance data collected through the programme. This connection positioned Palantir as a key player in one of the most controversial government surveillance operations in U.S. history.
ICE and immigration enforcement
Palantir’s FALCON system became a cornerstone of U.S. Immigration and Customs Enforcement (ICE) operations, enabling the agency to track and process deportations with unprecedented efficiency. The system’s role became particularly controversial during the Trump administration’s family separation policies, with critics arguing that Palantir’s technology directly enabled these human rights violations. The partnership sparked significant internal dissent within Palantir, leading to employee protests and high-profile resignations.
Predictive policing and racial bias
Police departments across the United States, including the LAPD and NYPD, have deployed Palantir’s software for predictive policing initiatives. However, studies have consistently shown that these systems disproportionately target minority communities, effectively digitising and amplifying existing racial biases in policing. Despite Palantir’s claims of algorithmic neutrality, real-world outcomes show how the company’s “neutral tool” defence crumbles in the face of systemic discrimination.
Military and law enforcement applications
Palantir’s expansion into military operations has positioned the company at the heart of modern warfare and domestic surveillance, often with devastating human consequences.
Warfare and drone operations
Palantir’s Gotham platform has been extensively used by the U.S. military for targeting operations in Afghanistan and Iraq. Investigative reports have linked the company’s AI systems to civilian casualties in drone strikes, often resulting from flawed data analysis or algorithmic errors. These incidents highlight the life-and-death consequences of deploying AI systems in military contexts without adequate human oversight.
The Cambridge Analytica connection
While Palantir maintained its distance from the Cambridge Analytica scandal, reports revealed that Palantir employees had consulted with the political data firm that harvested 87 million Facebook profiles for electoral manipulation. Although Palantir denied formal involvement, the association further damaged the company’s reputation and raised questions about its role in undermining democratic processes.
Ethical concerns about AI and data exploitation
As Palantir has expanded its AI capabilities, ethical concerns have grown about the militarisation of artificial intelligence and the exploitation of sensitive data.