Mythos, Power, and the End of Open Intelligence
Anthropic, Mythos, and the new AI oligarchy: What happens when the most powerful intelligence belongs not to everyone, but to a select few?

At a Glance
- Anthropic has set a precedent with "Mythos": A top-tier model is not being broadly rolled out, but initially made available only to a small circle of privileged actors.
- This is not merely a product decision, but a political turning point: When only a few have access to the best available intelligence, what emerges is effectively a two-tier intelligence system.
- The justification sounds plausible: Critical security vulnerabilities like a 27-year-old OpenBSD bug or a 17-year-old FFmpeg bug can be found faster with such models -- and potentially exploited as well.
- The core problem runs deeper: Those who frame AI primarily as a weapon implicitly make a statement about humanity, power, and the legitimacy of centralized control.
- For the Mittelstand, this means concrete consequences: Vendor lock-in, asymmetric knowledge access, and rising platform dependency will become strategic risks in 2026 -- not in three years.
Recently, in a late-night conversation long after the actual meeting had ended, a managing director said something that stuck with me: "If the best machine in the world only belongs to the big players -- how do we even recognize whether the market is still fair?"
That is not an academic sentence. It is the question of our time.
With the latest iteration of its models, internally and publicly charged with a mythos narrative, Anthropic sent more than a technical signal. It sent a political one. An anthropological one, almost. One about trust, about danger, about elites and masses -- and about the tacit assumption that a select few should be equipped with superior intelligence while the rest of the world remains spectators.
To be clear: the intuitive justification is not foolish. Quite the opposite. When a model immediately finds decades-old security vulnerabilities in OpenBSD or FFmpeg, the security concern is real. This is not about gimmicks or a prettier chatbot, but about leverage with infrastructural force. Releasing something like this does not only unlock productivity potential -- it potentially opens up attack surfaces as well.
And yet. This is precisely where the problem begins. Because a technically understandable caution very quickly becomes a model of domination. Risk management becomes oligarchy. "We must equip the good guys first" becomes a system in which some actors receive access to the best cognitive infrastructure in the world -- and others simply do not.
That sounds dramatic. It is. But not for science-fiction reasons. Rather because we may be witnessing the transition from a phase of open availability to a phase of controlled intelligence allocation. And anyone who considers this a marginal debate underestimates the economic force of this shift. Because intelligence is not just another SaaS tool. Intelligence is meta-infrastructure. It determines how fast knowledge is created, how efficiently decisions are made, how cheaply innovation becomes possible -- and who can still negotiate on equal footing.
Why "A Bit Smarter" Is the Wrong Category
Let us begin with a cognitive error that distorts nearly every public AI debate: we often talk about intelligence as though it were a dimmer switch. A dial. Ten percent more here, twenty percent more there, and the light simply gets brighter. That sounds reasonable. But it is fundamentally a misleading image.
A staircase is more useful. Or more precisely: a switch on a steel door, behind which lies a new room. You can optimize on the same step for a long time, getting faster, working more cleanly, trying more variants. And then something discontinuous happens: a breakthrough. A problem that was unsolvable for everyone before suddenly becomes solvable. Not gradually better. Categorically different.
The history of science is full of this. Before Einstein, there were brilliant minds. Yet no one formulated spacetime the way he did. Before Turing, there were mathematicians. Yet no one designed precisely that architecture of computability. Before the Transformer paper of 2017, thousands worked on machine learning. Yet "Attention Is All You Need" changed the trajectory of an entire industry in a few pages. This does not mean genius comes from nowhere. Of course every breakthrough stands on shoulders. But not every sequence of shoulders automatically leads to the summit.
For LLMs, this means concretely: when a model detects a 17-year-old bug in FFmpeg or a 27-year-old bug in OpenBSD, that is not simply "somewhat better pattern recognition." It may be an indication of a threshold transition. A model shifts from being a useful assistant to an actor that, in certain domains, sees more than almost all humans actively working on the same problems.
Sounds crazy? It is -- but we should stick to economic logic. In markets, what matters is not whether a system is metaphysically intelligent. What matters is whether it solves problems that others cannot. Whoever steers a supply chain 30 percent better wins. Whoever analyzes regulatory documents 60 percent faster wins. Whoever discovers cybersecurity vulnerabilities weeks earlier wins. And whoever controls access to a system that enables precisely such quantum leaps controls not merely a product -- but an asymmetric competitive advantage.
What we see in consulting practice already confirms this on a smaller scale. Companies with identical headcount, identical revenue corridors, and similar market positions diverge within six to twelve months when one team systematically integrates AI into proposals, document processing, research, and support -- while the other remains stuck in tool euphoria or does nothing out of uncertainty. The differences are not 5 percent. They quickly reach 20 to 40 percent in throughput time, response speed, or administrative burden.
Applied to models like Mythos, this means: when a few actors have access to a tier that is unreachable for others, it is not just productivity that shifts. The structure of the field itself shifts. It is like chess where one side suddenly has not just a stronger player, but additionally an analysis board that calculates all variations of the next fifty moves in seconds. Formally, everyone may still play. But to speak of a level playing field would be a polite fiction.
What this means concretely for SMEs: When advanced models are not open or at least broadly accessible, it is not just features that are unequally distributed. It is thinking speeds that are unequally distributed. That is precisely what makes the situation strategic.
An ERP migration is unpleasant. An asymmetric intelligence infrastructure is existential.
And that brings us to the real core: the debate about Mythos is not merely a debate about security. It is a debate about whether intelligence is understood as a publicly accessible productive force -- or as a high technology requiring control that is only legitimate in selected hands.

When AI Is Understood as a Weapon -- What Follows Politically?
The official -- or at least most plausible -- argument goes like this: this model is too powerful to be made available to everyone immediately. So we first give it to responsible actors -- large platforms, security firms, infrastructure providers, perhaps state-aligned partners. First the systems should be secured, the vulnerabilities patched, and the risks contained. Then we will see.
That is the reasonable reading. And it deserves to be taken seriously.
Because of course we know technologies where openness would be naive. No one wants to scale instructions for biological weapons on popularity logic. No one would argue that every conceivable offensive cyber capability should be freely available without any barrier. The nuclear weapons analogy is so readily invoked because it works intuitively: not every power belongs in every hand.
But this analogy has a decisive crack. A nuclear bomb has, at its core, one purpose: destruction or deterrence through destructive potential. The anthrax pathogen is not simultaneously useful for accounting, knowledge management, or product development. A chemical weapon does not improve proposal calculations or reduce customer service processing time by 35 percent. A powerful LLM, by contrast, is consequential precisely because it is dual-use in a radical sense. It can be dangerous -- and simultaneously useful on a massive scale.
This is exactly where the dialectic begins. When we frame such models primarily as weapons, we sweep their civilian character under the table. We treat a general-purpose technology -- a foundational technology with broad utility -- as though it were almost exclusively a security risk. That is roughly like describing the computer in the 1980s solely as a cryptography and missile guidance system. Not formally wrong. But practically a catastrophic reduction.
And politically it becomes even more delicate. For even if we accept for a moment that Mythos is "weapon-like": who then decides who may have the weapon? Who defines the circle of the good? On what legitimacy does that rest -- democratic, market-based, geopolitical, moral? And who controls the controllers?
We know this pattern well from political theory. Plato dreamed of the philosopher-king: the few with insight who guide the many with less insight. In conservative variants, this idea keeps returning: the masses are insufficiently equipped for truth, for complexity, for power. Therefore an enlightened elite is needed. Not out of malice. But out of responsibility. Batman in The Dark Knight is an almost didactic pop-culture example: total surveillance, but please only once, only in good hands, only to save the order. Slavoj Žižek has used precisely this material to describe the longing for legitimate exceptional power -- and not by coincidence.
The paradox is obvious. Every exceptional power declares itself the last exception. It almost never stays that way.
For AI, this means: when a few private companies define which models are too dangerous for the general public, and simultaneously decide which partners receive privileged access, a new form of private-sector sovereignty emerges. No elected institution, no open procedure, no transparent accountability in the classical sense -- but rather a mixture of security narrative, proprietary control, and market power.
You do not need to attribute malicious intent to anyone for this. That is important. It can be entirely sincere. It can come from genuine concern. But structures operate independently of the moral intentions of their architects. A closed gate remains a closed gate, even when the gatekeeper is convinced he is letting the right people in.
And that is precisely why entrepreneurs in the Mittelstand should not dismiss this debate as Silicon Valley theater. Whoever controls access to cognitive infrastructure controls, in the medium term, the innovation speed, degree of automation, security level, and margin capacity of entire industries. Not tomorrow on every bank statement. But gradually, cumulatively, and with brutal efficiency.
Are the Elites Protecting Us -- or Building an AI Oligarchy?
We should be precise here. There are three possible readings of the Mythos moment.
First: the benevolent reading. Anthropic and comparable actors genuinely believe that premature broad access creates significant security dangers. So they select those organizations that can patch quickly, secure systems, and minimize damage. Microsoft, Google, large security firms, perhaps governments or government-adjacent institutions. This would be the "equip the fire departments first" logic.
Second: the economic reading. Exclusivity is a pricing mechanism. Whoever sells something scarce to only a few increases willingness to pay, prestige, and bargaining power. A model is not only withheld for security reasons, but because it can be monetized as premium infrastructure. We know this from other markets. First the exclusive enterprise setup, then tiered offerings for the rest.
Third: the cynical reading. Security arguments are used as a pretext to prepare regulatory capture -- that is, to shape the rules so that only the already-large actors are compliant enough, receive access, and the barriers to entry for everyone else rise. This would be the classic move of a consolidating platform oligarchy: first justify control through protection, then turn protection into market entry barriers.
The reality probably lies, as so often, dialectically in between. Not pure virtue. Not pure malice. But a mixture of genuine concern, business interest, and institutional self-preservation.
For the effects, however, this makes only a limited difference. Because regardless of motivation, what emerges is a structure with two castes. On one side, those with access to the strongest available intelligence. On the other, those who must work with stripped-down, delayed, or more heavily regulated variants. This is not a theoretical blemish. This is market architecture.
Imagine two companies. Both have 120 employees, both develop industrial components, both struggle with labor shortages and documentation burden. Company A receives -- directly or through privileged partners -- access to a model that handles security analyses, code reviews, specification checks, and regulatory documentation significantly better. Company B works with a public standard model that is solid but visibly weaker. After three months, you will see little. After nine months, perhaps 15 percent faster development cycles at Company A. After 18 months, potentially 25 to 35 percent less friction in certain knowledge processes. After two years, it is not just productivity that has shifted, but also the ability to build better systems in turn.
A Matthew effect is at work here: to those who have, more shall be given. Stronger intelligence produces better products, better security posture, better capitalization, better talent, better data. The system reinforces itself.
And now comes the point that truly concerns me: culturally, we are not yet prepared to think of intelligence as a distributionally relevant good. With education, we understand this. With energy, we understand this. With capital, certainly. But with machine intelligence, many still act as though it were merely about more pleasant interfaces or faster text drafts. No. It is about a second-order means of production -- one that makes all other means of production more intelligent.
If access to it is structured oligarchically, we are no longer talking about competition in any liberal sense. We are talking about curated superiority.
What we already see in companies today: The actual vendor lock-in rarely begins with data migration. It begins with habit, process restructuring, and competence displacement.
When your teams learn to be productive only within a closed ecosystem, you lose bargaining power with every quarter -- even when the monthly license invoice still looks harmless.
That is why the Mythos question is not: "Is Anthropic good or evil?" The question is: what kind of order is being normalized here? An order of open leverage -- or an order of curated cognitive privileges?

Is Open Intelligence Naive -- or the Last Defense Against Power Concentration?
It would be convenient to romanticize the counter-position: everything open, everything for everyone, maximum freedom, the wisdom of the crowd will sort it out. It is not that simple. The open-access side has blind spots too.
Open models obviously lower the threshold for misuse. They facilitate scaling. They spread capabilities not only to productive but also to destructive actors. Whoever denies this is not thinking strategically but moralizing. There is no perfect solution. Only bad, better, and context-dependent decisions.
But that is precisely why we need to look more closely at where the actual value of open intelligence lies. Not in a naive ideology of limitless release. But in the prevention of monopolistic cognition.
Open source and open models have fulfilled a decisive counter-power function in the history of technology. Linux is the classic example. Not because Linux was always the prettiest or most convenient system. But because it established a power balance. It created a space where not every foundational technology was fully under private control. Similarly with open web and internet standards: HTML, HTTP, TCP/IP. The innovative power of the internet rested not least on the fact that the base layers were not exclusively owned by one firm.
Applied to AI, this means: even if frontier models remain closed for a time, a healthy ecosystem needs capable open counterweights. Otherwise, the entire value chain ends up in a situation where infrastructure prices, terms of use, security narratives, and feature access are dictated by a handful of companies. That would not simply be expensive. It would be fatal for innovation policy.
There is yet another argument that many initially find counterintuitive: in a world with artificial superintelligence -- or even just very unequally distributed high-performance models -- openness itself can be a security principle. Why? Because only then are more actors able to develop countermeasures, verification mechanisms, audits, and defense systems. When the most powerful intelligence is exclusively centralized, the ability to control it also becomes centralized. It is like a castle whose defense system only the castle owners understand. Reassuring as long as they are benevolent. Alarming the moment interests shift.
The common retort is: but not every Mittelstand company needs a frontier model to process invoices faster. True. But that is not an argument for exclusivity -- it is an argument for graduated, responsible availability. We do not need an infantile all-or-nothing debate. We need governance models that distinguish between high-risk capabilities and productive everyday applications. Between bio-offensive support and contract analysis. Between malware automation and knowledge management. Between autonomous exploit development and proposal calculation.
The problem with blanket secrecy is that it binds together both danger and utility -- and then centralizes them. That is politically convenient. But rarely the best solution.
For companies, this means: we should neither blindly jump on the "open is always good" narrative nor allow ourselves to be driven into the arms of entirely closed providers. The sweet spot lies in an architecture that combines multiple model classes, interchangeable interfaces, and clear governance. In other words: APIs over monoculture, human-in-the-loop over autopilot, documented processes over tool magic.
Whoever bets on a single provider today because their current model seems strongest is confusing short-term excellence with long-term strategy. That is understandable. But dangerous. Especially in a market where capabilities, prices, and access policies shift on a quarterly basis.
What Does All This Mean for the Mittelstand -- Beyond Grand Theory?
Here it gets practical. Because it is easy to talk about oligarchy, Plato, or Batman. The real question is: what do we do with this on Monday morning?
The first answer is sobering: most SMEs do not actually have a Mythos problem. They have an architecture problem. In many companies, it is still unclear which knowledge processes are standardizable, where data is clean, which approvals are necessary, and where AI actually generates ROI. Whoever immediately reaches for the most powerful model is starting at the wrong end.
But the second answer is equally important: precisely because not every company immediately needs a frontier model, many underestimate the strategic significance of the access regime. They tell themselves: "A standard model is good enough for our purposes." Perhaps today. But when your competitors in six months can use specialized agent setups, better code assistance, stronger research workflows, or deeply integrated compliance checks -- and you cannot -- then a theoretical governance debate becomes operational margin pressure.
What we see in consulting practice is remarkably consistent. Three patterns emerge again and again:
1. The Tool-Enamored Build Dependency Instead of Value
A company tests ten tools, signs three annual contracts, distributes access generously -- and realizes after four months that no clean process has been defined. Result: more costs, new shadow IT, no reliable benefit. The technology was not the problem. The missing blueprint was the problem.
In such cases, we frequently see 15 to 25 percent unnecessary SaaS costs without any core process running substantially better. Worse still: teams become accustomed to proprietary routines that later prove nearly impossible to reverse. Vendor lock-in begins psychologically, not legally.
2. The Cautious Wait Too Long -- and Lose Learning Time
The opposite extreme is equally costly. Out of fear about data protection, compliance, or model changes, nothing gets decided. After six months, management notices that competitors are already handling proposal processing, internal knowledge search, or support 30 to 40 percent faster. Then frantic catch-up begins -- usually more expensive and worse managed than an early pilot would have been.
Learning time is a silent balance sheet item. It does not show up directly in any income statement, yet it determines your speed of adaptation.
3. The Strategic Ones Decouple Use Case, Model, and Interface
The most robust setups are usually surprisingly unspectacular. A company defines three to five core processes with high pain points: document processing, proposal drafts, knowledge research, meeting minutes, initial service response drafts. Then a framework is built that keeps models interchangeable. Part of it runs via API, part through secure enterprise interfaces, sensitive steps remain human-in-the-loop. This does not create a perfect system. But a resilient one.
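To make "keep models interchangeable" concrete, here is a deliberately small sketch, assuming Python as the in-house glue language. Every class and function name in it is invented for illustration, not taken from a real SDK; the point is the dependency direction, not the specific code.

```python
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """The narrow interface the process logic depends on -- nothing provider-specific."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class HostedBackend:
    """Illustrative adapter for a hosted frontier model.

    A real implementation would call the vendor SDK or HTTP API here;
    this sketch only echoes the prompt so it stays runnable offline.
    """
    model_name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.model_name}] would answer: {prompt[:60]}..."


@dataclass
class LocalBackend:
    """Illustrative adapter for a self-hosted open-weight model behind an internal endpoint."""
    endpoint: str

    def complete(self, prompt: str) -> str:
        return f"[local @ {self.endpoint}] would answer: {prompt[:60]}..."


def summarize_proposal(document: str, model: TextModel) -> str:
    """Process logic: depends only on TextModel, never on a specific provider."""
    prompt = "Summarize this proposal for internal review:\n\n" + document
    return model.complete(prompt)


if __name__ == "__main__":
    doc = "Delivery of 500 industrial components, net 30, penalty clause in section 4."
    # Swapping the provider is a configuration decision, not a rewrite of the workflow:
    print(summarize_proposal(doc, HostedBackend(model_name="frontier-model-x")))
    print(summarize_proposal(doc, LocalBackend(endpoint="https://llm.internal/v1")))
```

The few lines matter less than the direction of dependency: your workflows depend on an interface you control, and whichever model sits behind it, hosted or self-hosted, can be swapped without touching the processes your teams have learned.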
In such cases, we often see initial hard results after 6 to 12 weeks: 20 to 35 percent less administrative processing time, 30 to 50 percent faster research, and significantly less friction at interfaces. ROI -- depending on maturity and process density -- typically arrives within 3 to 8 months.
A typical example: A tax advisory firm with 15 employees did not introduce a "magic AI project," but two clearly scoped workflows: pre-structuring client communication and making specialist information findable.
After 10 weeks, internal email traffic in these processes dropped by about 35 percent, and initial processing of standardized queries became approximately 45 percent faster. Nothing for the feature pages. Extremely valuable in daily operations.
This is precisely the Mittelstand sweet spot: not in the hype, but in cleanly designed leverage points.
What does this have to do with Mythos? Everything. Because in a world where top-tier models are selectively distributed, strategic modularity becomes insurance. You cannot control the political course of frontier providers. But you can prevent your company from being entirely dependent on their whims.

The Real Decision Is Not Open vs. Closed -- But: Who Gets to Think?
At its core, the entire debate revolves around a question that is bigger than Anthropic, bigger than a single model, perhaps even bigger than the AI industry itself: how do we distribute the capacity for thought in a technological civilization?
That sounds lofty. But it is remarkably concrete in business terms. Whoever gets to think -- in the sense of rapidly analyzing, combining, simulating, checking, drafting -- decides faster. Whoever decides faster learns faster. Whoever learns faster builds an advantage. And advantage, compounded over many cycles, becomes power.
We are witnessing intelligence transform from an anthropological constant into an infrastructural variable. Previously, human cognitive ability was broadly distributed, albeit unevenly. Now an additional layer is emerging: machine cognition, whose performance level, accessibility, and cost are shaped politically and entrepreneurially. That is new. And it makes the old liberal certainties fragile.
The thesis would be: for security reasons, the most powerful intelligence must be controlled. The antithesis: every such control tips into power concentration. The synthesis can only be: we need controlled openness rather than private sovereignty. Systems in which risky capabilities are handled in a graduated, auditable, and rule-based manner -- without a few corporations effectively becoming gatekeepers of thought.
Will that be easy? Of course not. There is no perfect solution. But there are better questions. Not: "Should everyone get everything?" But: "Which capabilities under which conditions, with what accountability, for which actors?" Not: "Whom do we trust blindly?" But: "What checks and balances prevent misuse -- including by the good guys?" Not: "Which model is smartest today?" But: "Which architecture keeps us capable of action tomorrow?"
Perhaps that is the actual lesson of the Mythos moment. Not that a provider withholds a model. But that the industry openly states what was long implicit: intelligence is becoming a matter of political allocation. And once that happens, the naive phase of open euphoria ends.
For many companies, this is an uncomfortable thought. One would like to treat technologies like tools: hammer, screwdriver, ERP, API. Useful, interchangeable, neutral. But powerful AI is not neutral in that simple sense. It is closer to electricity plus legal department plus analyst team plus junior developer plus researcher -- all in one. Whoever has it at their disposal commands not just efficiency, but interpretive authority in day-to-day operations.
And that is why this development affects me personally as well. In recent months, I have restructured my own systems, changed model routes, reassessed setups -- not out of tech romanticism, but because work quality noticeably changes when closed platforms shift their priorities. You suddenly find yourself no longer working with a tool, but against its product policy. That is a miserable foundation for value creation.
It becomes truly problematic where companies only notice this loss of control when core processes are already chained to an ecosystem.
What Makes Sense Now -- A Framework for Entrepreneurs
So we should neither descend into alarmism nor shrug our shoulders. Both would be comfortable. Both would be wrong.
The core tension remains: powerful AI creates enormous productivity and real security risks simultaneously. The answer cannot be blind release. But nor can it be a tacitly accepted oligarchy of thought.
The dialectical resolution therefore reads: as much openness as possible, as much control as necessary -- but with verifiable rules, modular architectures, and strategic sovereignty on the enterprise side.
For the Mittelstand, this means concretely:
- Inventory your cognitive bottlenecks. Where do your teams lose time today through searching, documentation, coordination, repetition, or manual review? Not abstractly. Process by process.
- Prioritize 3 to 5 use cases with clear ROI. Typically: proposals, document processing, knowledge search, meeting minutes, initial drafts in service. These often hold 20 to 40 percent efficiency leverage.
- Build model-agnostically. Where possible, separate interface, process logic, and model access. APIs, RAG systems, and clearly documented workflows beat monolithic tool dependency.
- Define governance instead of a culture of prohibition. Which data may go where? Which approvals are needed? Where does human-in-the-loop remain mandatory? Who decides on model changes? (A minimal sketch of such rules follows after this list.)
- Plan against vendor lock-in. Not just technically, but organizationally. Train teams in principles, not just in a tool. That way your setup stays agile.
- Monitor access policy as a strategic factor. Who gets which models, at what price, with what restrictions? This is no longer a nerd question -- it is part of competitive analysis.
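The governance bullet above deserves the same concreteness. Here is a minimal sketch, again in Python and with invented data classes and rules, of how "which data may go where, and where does a human stay in the loop" can be written down as explicit policy instead of living only in people's heads.

```python
from dataclasses import dataclass
from enum import Enum


class DataClass(Enum):
    PUBLIC = "public"        # marketing texts, published documentation
    INTERNAL = "internal"    # calculations, internal know-how
    PERSONAL = "personal"    # client or employee data


@dataclass(frozen=True)
class Policy:
    allow_external_model: bool   # may this data leave the house via a hosted API?
    human_review_required: bool  # does a person sign off before the result is used?


# Illustrative rule set -- in practice this comes out of your own governance process.
POLICIES = {
    DataClass.PUBLIC:   Policy(allow_external_model=True,  human_review_required=False),
    DataClass.INTERNAL: Policy(allow_external_model=True,  human_review_required=True),
    DataClass.PERSONAL: Policy(allow_external_model=False, human_review_required=True),
}


def route_task(task: str, data_class: DataClass) -> str:
    """Decide where a task may run and whether it needs human sign-off."""
    policy = POLICIES[data_class]
    target = "hosted model" if policy.allow_external_model else "local model only"
    review = "with human sign-off" if policy.human_review_required else "fully automated"
    return f"{task}: {target}, {review}"


if __name__ == "__main__":
    print(route_task("Draft a proposal summary", DataClass.INTERNAL))
    print(route_task("Pre-sort client correspondence", DataClass.PERSONAL))
```

Whether such rules live in code, in a config file, or in a one-page policy matters less than the fact that they exist, are documented, and can be audited when the provider or the model changes.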
If you approach these points soundly, something very valuable emerges: not dependence on the hype cycle, but resilient decision-making capability. And that is precisely what will become the scarcest management resource over the next 12 to 24 months.
At kiba.berlin, we see our role not in celebrating the loudest tool, but in turning noise into reliable decisions. For companies that want neither to fall behind nor to blindly walk into the next platform cage.
Mythos is therefore more than a model name. It is a symptom. Of an industry that realizes intelligence is power -- and of a society that must still decide how much of that power should remain open.
The open question is not whether AI will become big. It already is. The open question is whether we treat access to it as a common good of productivity -- or as the prerogative of a small technical aristocracy.
That decision is not being made at some point in the future. It is being made right now.