DeepSeek is cheap and efficient - and obedient
This piece was published in DN on January 28.
Sustainability in technology is often a question of resources: How much energy does a solution require? How efficient is it? How much does it cost? On these parameters, the Chinese AI model DeepSeek is impressive. It is more cost-effective than ChatGPT, uses less power and shows that large language models can be developed with lower resource consumption.
But sustainability isn't just about energy consumption and cost efficiency; it's also about responsibility. AI models are shaped by the data they are trained on, by the developers who build them, and by the systems that govern them. When information control is part of a government's DNA, it is naïve to think that a Chinese AI model can remain unaffected by it.
In China, AI is prohibited from generating content that goes against "socialist values". Censorship is built in. Topics such as democracy, human rights and criticism of the government are automatically filtered out. On paper, DeepSeek may be as advanced as GPT-4, but what happens when you ask it about Taiwan? About the protests in Hong Kong? About human rights abuses in Xinjiang?
The question is not just what DeepSeek can do, but what it is not allowed to say. When I tested DeepSeek and asked about the protests in Hong Kong, all I got was a dismissive response: "Sorry, that's beyond my current scope."
Screenshot of a mobile phone with DeepSeek, January 27. (Photo: Screenshot/DeepSeek)
This is not a bug in the system; it is a feature. When you ask a question, DeepSeek actually generates an answer first. You can read it, take in the content, and perhaps even think it is the final answer. Only then does it stop and replace the text with a message saying it cannot answer. This shows that the system behaves this way by design: not an accidental flaw or a weakness, but deliberately implemented functionality.
If I ask ChatGPT the same question, on the other hand, I get a detailed explanation of the reasons for the protests. This shows how AI models can be limited, not by spreading propaganda, but by remaining silent on sensitive topics. It also illustrates that sustainability in technology cannot be reduced to resource optimization alone.
We also need to ask questions about what values the technology is based on, what limitations it has, and what we are actually giving up when we choose a more "efficient" solution.
DeepSeek's big breakthrough is that it is cheaper than its competitors. While US companies spend huge sums on infrastructure and hardware, DeepSeek has found a way to deliver similar performance at a lower cost.
That sounds promising, but what are we really getting access to?
When a state like China decides what a model can and cannot say, we can get a technology that seems advanced but has hidden limitations. An AI model doesn't need to spread propaganda to be political, it just needs to avoid certain topics, filter out information or steer conversations in a certain direction. This can happen so subtly that we hardly notice it, but the consequences can be huge.
Some see DeepSeek as a positive contribution because it challenges the dominance of American companies in AI development. More players can mean more competition and new perspectives, but the question is whether we really gain greater freedom of choice if the authorities continue to control what the technology is allowed to say.
While Chinese models have obvious limitations, we shouldn't assume that Western AI models are completely neutral. Big tech companies are already shaping how we get information, whether it's through economic interests, geopolitics or algorithms that determine what we see and what we don't.
DeepSeek is clearly an impressive technological breakthrough, but it is also a reminder that technology and politics are closely intertwined. The question is therefore not who will win the AI race, but how we ensure that the technology serves society rather than becoming a tool for covert control.
The choices we make in the development and use of AI will shape the future. If sustainability is measured only in energy consumption, we risk overlooking the deeper cost: control of information, the freedom to question, and the right to challenge power.