More

Corporations Are Winning the AI Race because of DeepSeek R1

By Marcos Cooper 2025-02-14 1 Comment 3 Min Read

Today, all usage of DeepSeek R1 has been banned inside the company I work at—even when running it inside a sandbox. And this decision isn’t driven by security concerns but rather by pressure from companies like OpenAI, SoftBank, and others, which have started declaring that using DeepSeek R1 poses a risk in any case.

It’s not hard to see where the interests of these corporations lie, but today, Toyota also joined the movement, banning the downloading of DeepSeek R1 for any purpose.

The Real Risks: AI Manipulation and Control

I have enough experience with training models—especially in fine-tuning—to know how much you can manipulate outputs. Fine-tuning isn’t just about making a model better at a specific task; it’s also a powerful tool for controlling narratives. It can be used to subtly alter historical facts, reinforce biases, or even introduce malicious code into AI-generated responses.

And that’s without even considering the fact that models are inherently unpredictable.

The concern isn’t just about AI access—it’s about who gets to decide what information is “accurate.” With fewer companies controlling AI, we’re at risk of seeing an internet where only certain narratives are amplified while others are erased.

This isn’t just speculation. We’ve already seen cases of AI models being censored, giving politically motivated answers, or refusing to discuss certain topics entirely. As AI becomes more deeply integrated into search engines, productivity tools, and everyday decision-making, the ability to manipulate models at scale becomes an incredibly powerful tool—one that corporations and governments will undoubtedly seek to control.

What’s Next?

I don’t believe AI should be run unsupervised, yet that seems to be the direction we’re heading. As AI becomes more embedded in decision-making processes, we are beginning to trust it blindly—allowing it to operate with little to no oversight. If AI models are left unchecked, it’s understandable why corporations and governments label them as ‘risky.’ But the bigger question remains: who defines the risk, and whose interests are they truly protecting?

Just a few days ago, OpenAI announced that they might open-source some of their older model weights. But I doubt they will—because, frankly, they don’t need to anymore. As long as companies rely on external AI providers instead of running their own models, the AI giants have already won. They control the infrastructure, the access, and ultimately, the future of AI.

If small to mid-sized companies no longer train or fine-tune their own models, AI will essentially become a black box controlled by a few corporations. The dominant players—OpenAI, Google, Microsoft, and a handful of others—will dictate the rules, the limitations, and even the ethics of AI, without real transparency or accountability.

What concerns me even more than corporate AI monopolies is the shift toward trusting AI without supervision. We are moving toward a world where AI’s decisions are accepted as fact—where its outputs won’t be questioned, and its biases won’t be corrected.

This is already happening. AI is increasingly used in fields like finance, healthcare, law enforcement, and even governance. The assumption is that AI is neutral, objective, and safe, but that is far from true. Models can be manipulated, fine-tuned to reinforce biases, and even exploited for misinformation or malicious purposes.

And yet, instead of acknowledging these risks and demanding more transparency, the trend seems to be heading in the opposite direction—toward blind acceptance.

I believe we should push back against both AI centralization and the unsupervised deployment of AI. But how do we push back? Right now, I don’t see a clear or safe alternative—and that is what concerns me the most.

T
1 Comment
  1. Elly says:

    You have brought very good points to the table of AI.
    Certainly, if just a few giant companies have the monopoly, we can fear for the future of AI.

Leave a Reply

Your email address will not be published. Required fields are marked *