Install DeepSeek R1 on Ubuntu

Contents:
- 1-What's DeepSeek R1?
- 2-Reactions
- 3-Requirements
- 4-Installation
- 5-Configuration
1-What's DeepSeek R1?
Released on 20/01/2025 - just after TikTok was briefly blocked in the United States, and one day before Trump announced the Stargate project - DeepSeek R1 is an open-source, MIT-licensed generative artificial intelligence (GenAI) model designed by the Chinese company DeepSeek AI, a subsidiary of the High-Flyer quantitative investment fund founded by Wenfeng Liang.
Designed with far fewer resources than the AIs of American Big Tech (its training is said to have cost just under $6 million using 2,048 Nvidia H800 graphics cards), DeepSeek R1 rivals OpenAI's o1.
I quote from the Usine Digitale website: “In terms of performance, DeepSeek-R1 achieves comparable or even better results than the o1-1217 version and o1-mini in most benchmarks. The distilled versions also seem capable of competing with OpenAI models: for example, DeepSeek-R1-Distill-Qwen-32B outperforms o1-mini on various benchmarks, making it a new benchmark for so-called ‘dense’ models. All this at a much lower price for developers wishing to use it.”
Where a million tokens cost $0.55 in input and $2.19 in output on DeepSeek's API, the price is $15 in input and $60 in output on o1's API. More concretely, the input and output prices of the o1 API are respectively 27.27 and 27.40 times those of DeepSeek's; put another way, o1's pricing is roughly 2627% and 2640% higher than DeepSeek's. Comparing total cost per 1 million tokens, the figure is even more striking: DeepSeek's API is 96.4% cheaper than OpenAI's.
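Those ratios are easy to sanity-check. Here is a quick sketch with the prices above hardcoded (API prices change over time, so treat the figures as an early-2025 snapshot):

```python
# Price comparison per 1 million tokens, using the figures quoted above
# (USD; early-2025 snapshot, subject to change).
deepseek = {"input": 0.55, "output": 2.19}
openai = {"input": 15.00, "output": 60.00}

for kind in ("input", "output"):
    ratio = openai[kind] / deepseek[kind]
    pct_higher = (ratio - 1) * 100
    print(f"{kind}: o1 costs {ratio:.2f}x DeepSeek's price ({pct_higher:.0f}% higher)")
```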

The company's history is also very interesting to read. I quote Markus Kasanmascheff: "DeepSeek’s journey began in 2021, when Liang, best known for his quant trading fund High-Flyer, began purchasing thousands of Nvidia GPUs.
At the time, this move seemed unusual. As one of Liang’s business partners told the Financial Times, “When we first met him, he was this very nerdy guy with a terrible hairstyle talking about building a 10,000-chip cluster to train his own models. We didn’t take him seriously.”
According to the same source, “He couldn’t articulate his vision other than saying: I want to build this, and it will be a game change. We thought this was only possible from giants like ByteDance and Alibaba.”
Despite the initial skepticism, Liang remained focused on preparing for potential U.S. export controls. This foresight enabled DeepSeek to secure a large supply of Nvidia hardware, including A100 and H800 GPUs, before sweeping restrictions took effect.
DeepSeek made headlines by revealing that it had trained its 671-billion-parameter model R1 for only $5.6 million using 2,048 Nvidia H800 GPUs.
Though the H800’s performance is deliberately capped for the Chinese market due to US export restrictions to China, DeepSeek’s engineers optimized the training procedure to achieve high-level results at a fraction of the cost typically associated with large-scale language models.
In an interview published by MIT Technology Review, Zihan Wang, a former DeepSeek researcher, describes how the team managed to reduce memory usage and computational overhead while preserving accuracy.
He said that technical limitations pushed them to explore novel engineering strategies, ultimately helping them remain competitive against better-funded U.S. tech labs."
Being open-source, DeepSeek R1 can be installed locally on your computer (macOS, Windows, Linux), on your server and even on your Raspberry Pi or smartphone.

2-Reactions
The least we can say is that DeepSeek R1 has been the talk of social networks since its release, and has stirred up stock markets around the world. Here's an anthology of reactions to the product.
"To me, the most fascinating aspect of Deepseek is the fact it stemmed from a hedge fund, a mere few months after China "cracked down" on the levels of compensation in the finance industry.
It's also incidentally an important reason why the U.S. will struggle to compete with China.
Let me explain.
First of all, it's worth mentioning that this was predictably - as with most Chinese initiatives - presented by Western media as a terrible move (two examples screenshotted below): "why would China do this to the poor innocent bankers". As usual they didn't even try to reflect on why China would do this: as we all know, all Chinese initiatives are always completely mindless and "crackdowns" are just what the Communist party does for fun...
The actual reason this was done, I believe, is that China looked at the West - the U.S. in particular - and saw the overbearing importance of the finance industry at the expense of the real economy. And in particular they saw that the country's most brilliant graduates from the very best Ivy League schools went to work for the increasingly parasitic finance industry instead of working on stuff that actually made society move forward.
Bloomberg lamented below that the "crackdown" would "fuel an industry brain drain" and yes, that was precisely the point: China doesn't want those who can most contribute to society to spend their careers building ever more senseless financial derivative products or new ways to trade crypto. It doesn't mean they don't want a finance industry, it does serve a purpose, just not one that becomes such a drain on society, in particular in terms of capturing the country's best talents. China would rather have them working on stuff like... artificial intelligence.
And lo and behold, fast forward a few months, and you suddenly have hedge fund geniuses who found a new calling in AI. Too good a coincidence not to see a correlation there.
This is something that would arguably be very hard for the U.S. to do, where capital is very much in control: an industry that becomes extremely wealthy, even if largely detrimental to broader societal goals, becomes difficult to reform. We're seeing this with finance, defense, big pharma, etc.
It also illustrates that the U.S. and China are at different stages of their development: excessive financialization is a common pattern among late-stage great powers - from the Dutch Republic to the British Empire (but also Venice or Spain) - and a vicious-circle type factor of their decline. Emerging great powers are often more thoughtful and nimble about managing talent flows to achieve technological and industrial primacy.
Looking at this question is also very interesting in the context of the H-1B visa debate in the U.S. It feels like the debate doesn't address the elephant in the room: why claim a shortage of top talent when the country's best minds are funneled to the finance industry? Much more coherent to first thoughtfully allocate talent at home before seeking to brain drain the rest of the world...
Anyhow, yet another example of a Chinese policy that seems bizarre and incomprehensible to the West at first glance but which over the long run (and even short-run as illustrated by Deepseek) helps China develop another strategic advantage in the tech competition. Simply put: you want your best minds building real value, not extracting it from society.@RnaudBertrand
Most people probably don't realize how bad news China's Deepseek is for OpenAI.
They've come up with a model that matches and even exceeds OpenAI's latest model o1 on various benchmarks, and they're charging just 3% of the price.
It's essentially as if someone had released a phone on par with the iPhone but was selling it for $30 instead of $1000. It's that dramatic.
What's more, they're releasing it open-source so you even have the option - which OpenAI doesn't offer - of not using their API at all and running the model for "free" yourself.
If you're an OpenAI customer today you're obviously going to start asking yourself some questions, like "wait, why exactly should I be paying 30X more?". This is pretty transformational stuff, it fundamentally challenges the economics of the market.
It also potentially enables plenty of AI applications that were just completely unaffordable before. Say for instance that you want to build a service that helps people summarize books (random example). In AI parlance the average book is roughly 120,000 tokens (since a "token" is about 3/4 of a word and the average book is roughly 90,000 words). At OpenAI's prices, processing a single book would cost almost $2 since they charge $15 per 1 million tokens. Deepseek's API however would cost only $0.07, which means your service can process about 30 books for $2 vs just 1 book with OpenAI: suddenly your book summarizing service is economically viable.
Or say you want to build a service that analyzes codebases for security vulnerabilities. A typical enterprise codebase might be 1 million lines of code, or roughly 4 million tokens. That would cost $60 with OpenAI versus just $2.20 with DeepSeek. At OpenAI's prices, doing daily security scans would cost $21,900 per year per codebase; with DeepSeek it's $803.
So basically it looks like the game has changed. All thanks to a Chinese company that just demonstrated how U.S. tech restrictions can backfire spectacularly - by forcing them to build more efficient solutions that they're now sharing with the world at 3% of OpenAI's prices. As the saying goes, sometimes pressure creates diamonds.@RnaudBertrand
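The per-use-case arithmetic in the thread above is easy to reproduce. A minimal sketch, using input-token prices only (the figures as quoted; real bills also include output tokens):

```python
# Cost in USD to process a given number of input tokens, at a price
# expressed per 1 million tokens (figures quoted in the thread above).
def cost(tokens: int, price_per_mtok: float) -> float:
    return tokens / 1_000_000 * price_per_mtok

book = 120_000        # ~90,000 words at ~4/3 tokens per word
codebase = 4_000_000  # ~1 million lines of code

for name, price in (("OpenAI o1", 15.00), ("DeepSeek R1", 0.55)):
    print(f"{name}: book ${cost(book, price):.2f}, codebase ${cost(codebase, price):.2f}")
```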
First of all because they can I think: they're still making money at those prices.
Also it speaks to a different philosophy/vision on AI: the ironically named "OpenAI" is basically about trying to establish a monopoly by building a moat with massive amounts of GPUs and money. Deepseek is clearly betting on a future where AI becomes a commodity, widely available and affordable to everyone. By pricing so aggressively and releasing their code open-source, they're not just competing with OpenAI but basically declaring that AI should be like electricity or internet connectivity - a basic utility that powers innovation rather than a premium service controlled by a few players.
And in that world it's a heck of a lot better to be the first mover who helped make it happen than the legacy player who tried to stop it.@RnaudBertrand
Stargate, if it goes forward, is likely to become one of the biggest wastages of capital in history:
1) It hinges on outdated assumptions about the importance of computing scale in AI (the 'bigger compute = better AI' dogma), which DeepSeek just proved is wrong.
2) It assumes that the future of AI is with closed and controlled models despite the market’s clear preference for democratized, open-source alternatives
3) It clings to a Cold War playbook, framing AI dominance as a zero-sum hardware arms race, which is really at odds with the direction AI is taking (again, open-source software, global developer communities, and collaborative ecosystems)
4) It bets the farm on OpenAI—a company plagued by governance issues and a business model that's seriously challenged by DeepSeek’s 30x cost advantage.
In short, it's like building a half-trillion-dollar digital Maginot Line: a very expensive monument to obsolete and misguided assumptions. This is OpenAI, and by extension the US, fighting the last war.
Last point, there's also quite a bit of irony in the US government pushing so hard for a technology that's likely to be so disruptive and potentially so damaging, especially to jobs. I can't think of any other example in history when a government was so enthused about a project to destroy jobs. You'd think they'd want to be a tad more cautious about this. @RnaudBertrand
Whether you like it or not, the future of AI will not be canned genies controlled by a "safety panel". The future of AI is democratization. Every internet rando will run not just o1, but o8, o9 on their toaster laptop. It's the tide of history that we should surf on, not swim against. Might as well start preparing now.
DeepSeek just topped Chatbot Arena, my go-to vibe checker in the wild, and two other independent benchmarks that couldn't be hacked in advance (Artificial-Analysis, HLE).
Last year, there were serious discussions about limiting OSS models by some compute threshold. Turns out it was nothing but our Silicon Valley hubris. It's a humbling wake-up call to us all that open science has no boundary. We need to embrace it, one way or another.
Many tech folks are panicking about how much DeepSeek is able to show with so little compute budget. I see it differently - with a huge smile on my face. Why are we not happy to see *improvements* in the scaling law? DeepSeek is unequivocal proof that one can produce unit intelligence gain at 10x less cost, which means we shall get 10x more powerful AI with the compute we have today and are building tomorrow. Simple math! The AI timeline just got compressed.
Here's my 2025 New Year resolution for the community:
No more AGI/ASI urban myth spreading.
No more fearmongering.
Put our heads down and grind on code.
Open source, as much as you can.
Acceleration is the only way forward. @DrJimFan
The ramifications of this are huge. Every day China does something incredible, totally unlike the stagnation of the EU, talking all day while accomplishing nothing, or the latest evil plan oozing out of DC. This is just brilliant. & inspiring. & it WILL earn them more goodwill @CaptainCrusty66
It’s the china recipe book for success for every industry where western oligopolies have dominated. @bbooker450
AI will become a part of everyday infrastructure like electricity and tap water. DeepSeek is a significant step towards that, thanks to its cost reduction and open source nature @MrBig2024
We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all…. @DrJimFan
This is cool…this isn’t just another open source LLM release. this is o1-level reasoning capabilities that you can run locally, that you can modify and that you can study…
that’s a very different world than the one we were in yesterday.
Price comparison of OpenAI o1 and DeepSeek AI R1: R1 is significantly cheaper across all categories (96–98% savings). Now you know why big organizations don’t want open-source to continue, If humanity is ever going to benefit from AI, it will be from open-source . @ai_for_success
China is overturning mainstream development theory in astonishing ways. China’s GDP per capita is only $12,000. That’s 70% less than the average in high-income countries. And yet they have the largest high-speed rail network in the world. They’ve developed their own commercial aircraft. They are the world leaders in renewable energy technology and electric vehicles. They have advanced medical technology, smartphone technology, microchip production, aerospace engineering…

China has a higher life expectancy than the USA, with 80% less income. We were told that this kind of development required very high levels of GDP/cap. But over the past 10 years China has demonstrated that it can be achieved with much more modest levels of output.

How do they do it? By using public finance and industrial policy to steer investment and production toward social objectives and national development needs. This allows them to convert aggregate production into development outcomes much more efficiently than other countries, where productive capacity is often wasted on activities that may be highly profitable to capital, or beneficial to the rich, but may not actually advance development.

Of course, China still has development gaps that need to be addressed. And we know from some other countries that higher social indicators can be achieved with China’s level of GDP/cap, by focusing more on social policy. But the achievements are undeniable, and development economists are taking stock. @jasonhickel
3-Requirements
You'll need a PC running Linux, Windows or macOS (Ubuntu for the purposes of this tutorial) with plenty of RAM and disk space, as well as a powerful processor (depending on which version of DeepSeek R1 you're going to install).
For AI workloads, an AMD/Nvidia graphics card is highly recommended, so that your processor doesn't have to do all the computation itself (GPU parallelization is used extensively in the field of artificial intelligence - see the video below to understand the advantages).
4-Installation
Go to the Ollama website to install it.
Ollama lets you run large language models (LLMs) locally on your computer.

Click on “Download”.

As the tutorial is based on Ubuntu, copy the command to install via the CLI.

sudo apt install curl && curl -fsSL https://ollama.com/install.sh | sh
Note: Ollama clearly states that if you don't have a graphics card on your workstation, the processor will have to calculate everything.


You can choose the number of parameters:
The r1:1.5b contains 1.5 billion parameters and weighs 1.1 GB, while the r1:671b contains 671 billion parameters and weighs over 400 GB.
The r1:671b therefore requires enormous disk space, a large amount of RAM and a powerful processor, but it is also by far the most capable variant.
For more modest configurations, go for r1:1.5b, 7b or 8b.

For this example, r1:8b seems a good compromise: r1:14b requires 10.5 GB of RAM and r1:32b requires 22 GB, which could be a problem for many PCs.
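If you're wondering whether some other variant will fit on your machine, a rough heuristic helps: Ollama serves 4-bit-quantized builds by default, so the footprint works out to roughly 0.7 bytes per parameter. Note that the 0.7 factor is an assumption fitted to the sizes quoted above, not an official figure:

```python
# Rough memory/disk estimate for Ollama's default 4-bit-quantized models:
# ~0.7 bytes per parameter (quantized weights plus runtime overhead).
# The 0.7 factor is a fitted assumption, not an official number.
def approx_size_gb(params_billion: float, bytes_per_param: float = 0.7) -> float:
    # 1e9 parameters x bytes-per-parameter gives the size in GB
    return params_billion * bytes_per_param

for p in (1.5, 8, 14, 32, 671):
    print(f"r1:{p}b -> ~{approx_size_gb(p):.1f} GB")
```

On this estimate, r1:8b comes in around 5-6 GB, which fits comfortably inside 16 GB of RAM.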
Run the following command to install it.
ollama run deepseek-r1:8b
5-Configuration
Once deepseek-r1:8b is installed, you'll be able to ask it some questions.
I'm using a laptop with a 10th-generation Intel i7 and 16 GB of DDR4 RAM, without an AMD/Nvidia graphics card.


The same question but on the web version.


You can see the yawning gap between the locally installed version 8b and the web version hosted by DeepSeek.
In terms of performance, here's an overview of my configuration. It's not super-fast: with no graphics card to speed up processing, everything is calculated by the processor, and there's nowhere near enough RAM, disk space or processor power to run the 671b.

To view the list of models you've downloaded:
ollama list
To quit DeepSeek, press Ctrl+D; to restart it, run the following command:
ollama run deepseek-r1:8b
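Beyond the interactive CLI, Ollama also exposes a local HTTP API (on port 11434 by default), which is handy for scripting your prompts. A minimal sketch, assuming the Ollama service is running and deepseek-r1:8b has already been pulled:

```python
import json
import urllib.request

# Default endpoint of the local Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON reply instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(ask("deepseek-r1:8b", "Why is the sky blue?"))
```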
Of course, if you want better performance, you can also use the web version offered by DeepSeek AI at https://chat.deepseek.com/ (don't forget to activate DeepThink R1).
Sources



https://www.moonofalabama.org/2025/01/how-the-chinese-beat-trump-and-openai.html#more
By the way, to understand this craze for artificial intelligence in China and how Chinese tech has developed from the 2000s to the present day, I recommend Kai-Fu Lee's “AI Superpowers: China, Silicon Valley, and the New World Order” (published in French as “I.A. La Plus Grande Mutation de l'Histoire”).
