Kill Lists vs Public Good: Two Futures of Artificial Intelligence
Bappa Sinha
ALEX KARP, the chief executive of Palantir, published a 22-point manifesto on April 19 declaring that "hard power in this century will be built on software" and that certain cultures are "regressive and harmful." Five days later, the Chinese AI laboratory DeepSeek released its long-awaited V4 model: 1.6 trillion parameters, open source under the MIT licence, matching the most expensive proprietary American models at roughly one-thirtieth the price.
Two events. Five days apart. Two completely opposed visions of what artificial intelligence is for, who controls it, and whose interests it serves.
Start with the manifesto. Drawn from Karp's book The Technological Republic: Hard Power, Soft Belief, and the Future of the West (co-authored with Palantir’s Head of Corporate Affairs, Nicholas Zamiska), the 22-point document has been viewed over 32 million times. It argues Silicon Valley owes a debt to the nation, AI weapons are inevitable and the West must build them, national service should be reconsidered, and liberal pluralism amounts to a "shallow temptation."
Points 21 and 22 are where the mask slips entirely. They declare that "certain cultures and indeed subcultures have produced wonders" while "others have proven middling, and worse, regressive and harmful." Point 22 calls for resistance to "the shallow temptation of a vacant and hollow pluralism," lamenting that America has "for the past half century resisted defining national cultures in the name of inclusivity." Read plainly, this is racial and civilisational supremacism out in the open. A company embedded in imperial kill chains is ranking the world's peoples on a hierarchy and telling governments who counts as civilised. The colonial arrogance is unmistakable. Some are calling it technofascism. More than 200,000 people in Britain signed a petition demanding the government sever its contracts with Palantir. Members of parliament compared the document to the "ramblings of a supervillain."
And Palantir isn't just publishing manifestos. It is operationally wired into the machinery of imperial warfare. Its Gotham platform provides AI-powered targeting for the Ukrainian military. Karp himself has boasted that Palantir's software is "responsible for most of the targeting in Ukraine," processing drone footage, satellite imagery, and signals intelligence to generate strike options that improve with each hit. The system learns from every bomb dropped. In January 2026, the company deepened this integration with Ukraine’s Brave1 Dataroom, feeding real-time battlefield data into AI models for drone interception.
But it is Palantir's role in the genocide in Gaza and the illegal US-Israeli war on Iran that exposes what "hard power built on software" actually means in practice. Palantir signed a formal partnership with the Israeli military in January 2024, three months into the assault on Gaza, integrating intercepted communications and satellite data to produce targeting databases: effectively, kill lists. It maintains a permanent desk at the US-led Civil Military Coordination Centre in southern Israel, providing the technological architecture for controlling humanitarian aid delivery into Gaza, a process that has been systematically weaponised to starve a besieged civilian population. And in the wider war on Iran, the same kill chain machinery fed the targeting systems that bombed the Shajareh Tayyebeh girls' school in Minab on February 28, killing more than 170 schoolgirls. Amnesty International called the strike unlawful. UNESCO called it a grave violation of humanitarian law. When Karp writes that AI weapons are inevitable and the West must build them, he is describing what his company already does, every day, in real time.
Now consider the contrast. On April 24, DeepSeek released V4 under the MIT licence. Anyone can download it, modify it, deploy it commercially. The architecture is a Mixture-of-Experts system with 1.6 trillion total parameters (49 billion activated per query) and a one-million-token context window. The technical paper introduces three innovations: a Hybrid Attention Architecture, Manifold-Constrained Hyper-Connections, and the Muon Optimizer replacing standard training methods. The result: V4 needs only 27% of the inference computation and 10% of the memory cache of its predecessor.
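The efficiency claim follows from sparse activation: in a Mixture-of-Experts model, a router selects a handful of expert sub-networks per token, so compute scales with the activated parameters rather than the total. A minimal illustrative sketch, with a dummy router and toy expert counts that are assumptions for illustration only, not DeepSeek's actual design:

```python
# Toy sketch of Mixture-of-Experts (MoE) sparse activation.
# NUM_EXPERTS, TOP_K, and the routing are hypothetical values for
# illustration; real models use a learned gating network.
import random

NUM_EXPERTS = 32  # hypothetical expert count
TOP_K = 2         # hypothetical experts activated per token

def route(token_id: int) -> list[int]:
    """Dummy router: score every expert, keep the top-k."""
    rng = random.Random(token_id)  # deterministic per token
    scores = [rng.random() for _ in range(NUM_EXPERTS)]
    ranked = sorted(range(NUM_EXPERTS), key=scores.__getitem__, reverse=True)
    return ranked[:TOP_K]          # only these experts run for this token

# The headline ratio from the release: 49 billion parameters activated
# per query out of 1.6 trillion total.
active_fraction = 49e9 / 1.6e12
print(f"experts used for token 7: {route(7)}")
print(f"parameters active per query: {active_fraction:.1%}")  # ~3.1%
```

About 3% of the model's weights do work on any given token, which is why per-query inference cost tracks the 49 billion activated parameters rather than the full 1.6 trillion.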
The benchmarks speak for themselves. V4-Pro scored 3,206 on Codeforces, beating GPT-5.4's 3,168, the highest competitive programming score by any model at the time of release. On SWE-bench Verified, the standard benchmark for real-world software engineering, it hit a score of 80.6, just 0.2 points behind Claude Opus 4.6. It leads all open source models in mathematics, science, and coding.
And the pricing? DeepSeek slashed it further on April 26. V4-Pro now runs at $0.435 per million input tokens and $0.87 per million output tokens. Compare that with GPT-5.5 at $5.00 input and $30.00 output, or Claude Opus 4.7 at $5.00 and $25.00. V4-Pro's output rate is roughly one-thirtieth of GPT-5.5's. The lighter V4-Flash variant comes in at $0.14 input and $0.28 output, nearly 100 times cheaper. For a developer in Lagos, Dhaka, or São Paulo, these aren't abstract numbers. They determine whether frontier AI is accessible or locked behind an American paywall.
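The gap is easiest to see on a concrete workload. A back-of-envelope sketch using the per-million-token prices quoted above; the 10M-input / 2M-output monthly volume is a hypothetical figure for a small production service:

```python
# Cost comparison at the prices quoted in the text (USD per million tokens).
PRICES = {  # model: (input rate, output rate)
    "DeepSeek V4-Pro":   (0.435, 0.87),
    "DeepSeek V4-Flash": (0.14, 0.28),
    "GPT-5.5":           (5.00, 30.00),
    "Claude Opus 4.7":   (5.00, 25.00),
}

def monthly_cost(model: str, in_millions: float = 10, out_millions: float = 2) -> float:
    """Monthly bill for a hypothetical 10M-input / 2M-output token workload."""
    p_in, p_out = PRICES[model]
    return in_millions * p_in + out_millions * p_out

for model in PRICES:
    print(f"{model:18s} ${monthly_cost(model):8.2f}/month")

# Output-rate ratio behind the "roughly one-thirtieth" figure:
print(f"GPT-5.5 vs V4-Pro output rate: {30.00 / 0.87:.0f}x")
```

On this workload the same traffic costs about $6 a month on V4-Pro against roughly $110 on GPT-5.5, which is the difference the paragraph above is describing.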
Then there's hardware sovereignty. V4 is the first Chinese frontier model built to run natively on domestic Huawei Ascend and Cambricon chips rather than Nvidia. This directly answers the US semiconductor export controls designed to deny China advanced AI chips. Within days of the V4 launch, ByteDance, Tencent, and Alibaba scrambled to place orders for Huawei's Ascend 950PR processors. Huawei aims to ship 750,000 units this year. DeepSeek has signalled V4-Pro pricing could drop further once these supernodes ship at scale in the second half of 2026. The US sanctions regime, intended to maintain American monopoly over AI computing, has instead accelerated the development of a fully indigenous Chinese AI stack from chip fabrication to inference deployment.
Palantir's business model rests on proprietary, classified systems sold at enormous cost to governments and militaries. Technology as a weapon of civilisational competition, in their own words. DeepSeek released its model weights publicly, with its CEO and founder, Liang Wenfeng, stating that their vision is to provide free, state-of-the-art AI to everyone in order to advance human progress. Under capitalism, frontier AI gets enclosed behind API paywalls and classified contracts and fed into kill chains. Under socialist planning, the same technology gets released as a public good.
The historical pattern is familiar. In the 1960s, the US government sourced 60% of integrated circuits for the space race from Silicon Valley firms. That public investment created private fortunes. After the dot-com crash, then again after the 2008 financial crisis, taxpayer bailouts of banks and investment funds channelled fresh capital into the same ecosystem, producing companies valued at trillions despite making nothing physical. The wealth generated built a class of technology oligarchs who now seek to privatise the very state apparatus that nurtured them. Palantir grew from CIA seed money through In-Q-Tel into a $280 billion corporation that publishes manifestos on civilisational warfare. Public investment creating private monopoly; private monopoly then capturing the state. The manifesto is this class stating its programme openly.
China's trajectory runs in the opposite direction. State investment in semiconductor self-reliance and AI research, conducted under hostile US sanctions, hasn't produced another walled garden. It has produced open source technology available to the entire world. The logic is systemic: the socialist state absorbs the research costs and curbs both monopolies and the incentive to enclose knowledge behind paywalls. The technology gets released. People use it.
AI is fast becoming the infrastructure layer of economic production. For the Global South, the choice between these two models will shape the terms of development for decades. Dependence on American proprietary systems means export controls, licensing restrictions, technology revocable at Washington's whim, and imperialist appropriation through monopoly super-profits. Open source models running on chips free from US sanctions jurisdiction offer sovereign AI capability at prices a university in Nairobi or a hospital in Kerala can actually afford, with full freedom to fine-tune for local languages and conditions. Countries pursuing digital and AI sovereignty should be paying close attention. The alternative is to remain locked into a system where the same company that calls your culture "middling" also builds the software that produces the kill lists.
One system produces a company that ranks civilisations, arms genocides, and publishes manifestos about it. The other produces a research lab that gives away a frontier model for free. Who controls the technology, who benefits, who pays the price: for working people everywhere, these questions will define the coming decade.


