The Uncomfortable Questions We Need to Ask
Artificial intelligence has moved from the laboratory into the boardroom, the hospital, the courtroom, and the marketplace. Billions of dollars flow into AI development every year. Governments are racing to establish AI leadership. And yet, we’ve largely avoided asking one of the most fundamental questions: who actually controls these systems, and what does that control mean for the rest of us?
This isn’t just a question about technology; it’s a question about power.
The New Concentration of Power
The landscape of AI development has consolidated dramatically. A small handful of companies, primarily in the United States and increasingly in China, control the vast majority of the resources, talent, and infrastructure needed to build cutting-edge AI systems.
Organizations like OpenAI, Google DeepMind, Anthropic, Meta, Amazon, and a few others have amassed the computational capacity, specialist teams, and massive datasets required to train frontier models.
Training a large-scale language model now costs tens to hundreds of millions of dollars in compute, with full development costs often exceeding $500 million. This financial moat means that only the best-funded corporations or governments can participate at the top tier.
The narrative of AI “democratization,” the idea that AI tools are available to everyone, obscures a deeper reality: the people who decide what AI does, how it behaves, and what values it encodes remain concentrated within a small number of organizations.
Even open-source initiatives such as Mistral, Hugging Face, and EleutherAI, while broadening access, rely heavily on the same underlying compute infrastructure and funding pipelines dominated by Big Tech and government contracts.
Control doesn’t stop at the model creators. Downstream deployers, the companies that fine-tune, host, and integrate AI systems into applications, also wield significant influence. True control spans the entire ecosystem, from those who train the models to those who decide how they’re applied in the real world.
Data: The Real Source of Power
Modern AI systems run on staggering volumes of data, much of it scraped from the open internet without explicit consent. Artists, journalists, and photographers have found their work used to train AI models without permission or compensation. The infrastructure that powers modern AI is built, in part, on appropriated intellectual property and human labour.
But the implications extend beyond copyright. The data used to train AI reflects historical inequalities and cultural biases embedded in the societies that produced it. A hiring model trained on biased corporate data will reproduce those biases. A policing model trained on discriminatory crime statistics will reinforce existing injustice.
Who decides what data is used, how it’s cleaned, and which values it encodes? These choices are made by a small number of corporations, often behind closed doors, with little transparency and no formal mechanism for affected communities to have a say.
Data has become the currency of control, shaping not only how AI systems perform, but whose interests they ultimately serve.
The Black Box of AI Decisions
One of the most disquieting realities of AI is that even its creators often can’t fully explain how it works. Neural networks make predictions by adjusting billions of parameters in ways that resist human interpretation. When an AI system denies a loan, flags a person for investigation, or recommends medical treatment, we often can’t answer the basic question: why?
This creates a new and uneasy form of power. Organizations control systems they can’t fully predict or understand, yet they deploy them in high-stakes contexts anyway. The result is illusory control: technical ownership without genuine comprehension.
That opacity matters because AI systems are already making decisions that affect people’s livelihoods, health, and freedom. Without explainability, accountability becomes nearly impossible.
The Regulatory Vacuum
Governments have established frameworks for AI governance far more slowly than the technology has advanced. This lag has created a regulatory vacuum where corporations effectively write the rules that govern their own products.
The EU AI Act, which entered into force in 2024 with obligations phasing in from 2025, imposes transparency, documentation, and risk-assessment requirements on general-purpose models, with stricter duties for those deemed to pose systemic risk. The US Executive Order on AI (2023) and the establishment of the UK AI Safety Institute signal a shift toward oversight. Yet enforcement remains weak, and global coordination is patchy at best.
AI innovation is moving faster than legal and ethical guardrails can evolve. This isn’t necessarily the result of bad intent; it’s the predictable outcome of allowing transformative technology to outpace democratic oversight.
When Power and Consequence Diverge
Perhaps the most troubling issue is that those who control AI are not the same people who bear the consequences when systems fail or behave harmfully.
When a biased AI model denies someone a job, rejects a loan application, or influences a criminal-justice decision, the people harmed aren’t the engineers or executives who built or deployed it. They are the individuals whose lives are quietly shaped by invisible algorithms.
This separation between power and consequence is dangerous. The organizations designing and deploying AI have both the authority to determine what these systems do and the insulation from responsibility when they cause harm.
Accountability: The Missing Layer
When something goes wrong, who is responsible? The company that built the model? The organization that deployed it? The engineer who wrote the code? The executive who approved the project?
This ambiguity is itself a form of power. AI development and deployment are so complex that accountability diffuses across teams and institutions. Each actor can plausibly deny full responsibility, leaving victims of algorithmic harm with no clear path to justice.
True accountability requires visibility: knowing who made which decisions, why they were made, and what the outcomes were. Without that, responsibility evaporates in the fog of complexity.
Rebalancing Control
If we want genuine control over AI, we must move beyond the current model, in which a few organizations make critical decisions with minimal oversight. That doesn’t mean halting innovation; it means creating frameworks that link technological progress with ethical responsibility.
Concrete steps could include mandatory impact assessments before deploying AI in high-stakes domains like criminal justice and hiring. Independent audits, not self-audits, would verify companies’ claims, and their findings would be made public. Financial liability would create real incentives: organizations deploying AI systems would bear responsibility for demonstrable harms through meaningful consequences.
Transparency requirements should mandate full disclosure of data sources, known limitations, and decision-making processes. People affected by AI decisions deserve access to basic information about how systems work and why they made particular choices.
Diversifying AI development means supporting research outside Silicon Valley through government funding for public institutions and mechanisms allowing smaller organizations to access computational resources.
Data governance frameworks should protect creators: artists and writers whose work trains AI systems should have rights to compensation and notification. Communities affected by AI decisions should have a voice in how their data is used.
These steps would not stifle innovation. They would balance progress with accountability, aligning incentives so that the benefits of AI development are more widely and fairly shared.
A Question of Power and Governance
The uncomfortable truth is that control over AI has consolidated in ways that should concern anyone who cares about fairness, transparency, and democratic governance. A small number of organizations now make decisions that shape what AI does, how it behaves, and ultimately how it influences our lives, while being largely insulated from the consequences.
Changing that trajectory requires treating AI governance as a societal priority, not a corporate afterthought. It means establishing systems of accountability that ensure these technologies serve human values, not just commercial ones.
AI is here to stay. But how we govern it remains open: who decides what’s built, who benefits, and who bears the risk. These aren’t technical questions. They’re questions about power, ethics, and the future of human agency.
And they’re questions we need to answer while we still have the chance.

