Who Really Owns Artificial Intelligence? The Power Struggles and Players Behind OpenAI and the AI Boom

I’ve worked with and consulted for technology startups and think tanks, and I’m fully aware of how startups are often formed to solve a specific problem in the marketplace. Often they are built to supply a single component within the broader technology industry. Think of it like a car: thousands of parts are designed, developed, and manufactured by a network of suppliers before a finished vehicle rolls off the line. While the car itself belongs to the branded manufacturer, not all of its components are owned or controlled by that manufacturer.

This analogy applies directly to the rapidly evolving world of artificial intelligence (AI). Just as a car manufacturer depends on a vast network of suppliers for parts and technologies, AI development is equally dependent on a complex web of companies and individuals contributing specific components—whether it’s hardware, software, data, algorithms, policy, or guidance. The question of who owns artificial intelligence becomes more complex as we break down these pieces, revealing a decentralized yet intertwined ecosystem of ownership.

In this article, we’ll explore the key components that make up AI and who controls them. We’ll look at the role of tech giants, startups, research labs, governments, and even individuals in shaping the future of AI. From hardware and computing power to data and intellectual property, understanding who owns these critical pieces is key to navigating the power struggles of the AI landscape. The complexities of AI ownership are far-reaching and require a deeper exploration of the ecosystem and the forces that drive it.

The Other Builders of Artificial Intelligence – Who’s Really Behind OpenAI?

Beyond the presence of CEO Sam Altman, the true decision-making landscape of OpenAI is shaped by a constellation of influential players—tech billionaires, government-affiliated experts, and powerful think tanks. These entities have not only guided the company’s technological trajectory but have also influenced its ethical, financial, and political positioning in the broader AI ecosystem.

OpenAI’s Governance and Key Players

Sam Altman, the former president of Y Combinator, serves as CEO and is often seen as the face of OpenAI. Known for his media savvy and visionary framing, Altman has overseen the company’s controversial transition from a nonprofit research lab to a “capped-profit” hybrid structure—an unprecedented model in the tech world. Notably, the technology industry prepared Altman for a role like this, particularly through his years running Y Combinator.

Greg Brockman, OpenAI’s President and Co-founder, previously served as Stripe’s CTO. He has been essential to scaling OpenAI’s engineering operations and maintaining stability during its transition into a commercial entity. His behind-the-scenes leadership complements Altman’s public role.

Ilya Sutskever, the former Chief Scientist and Co-founder, is one of the key researchers behind major advances in deep learning. While his research leadership has been crucial, he has also been central to internal debates around AI safety. In fact, tensions over the company’s approach to artificial general intelligence (AGI) reportedly contributed to significant internal conflict, which ultimately led to his departure.

Mira Murati, the company’s former Chief Technology Officer, led engineering efforts for flagship products like ChatGPT. She also played a key role in public outreach and ethical discourse, often bridging the technical and societal conversations around AI’s future.

The Board of Directors—both past and present—has seen its own share of power struggles. Once composed of high-profile figures such as Elon Musk (a co-founder who left in 2018), Adam D’Angelo (CEO of Quora), and Helen Toner (of Georgetown’s Center for Security and Emerging Technology), the board became a focal point of controversy in 2023. Internal disagreements culminated in Sam Altman’s brief removal and rapid reinstatement, exposing deep divisions over AI governance and organizational control.

Tech Billionaires & Big Capital

The trajectory of OpenAI has also been significantly shaped by the involvement—both direct and indirect—of tech billionaires and major capital players. Their influence spans from initial funding to strategic direction and corporate alliances that have transformed OpenAI into a key player in the AI arms race.

Elon Musk, a former co-founder of OpenAI, was among its earliest and most high-profile funders. However, he parted ways with the organization due to disagreements over its direction and concerns about conflicts with Tesla’s AI initiatives. Since his departure, Musk has become one of OpenAI’s most vocal critics, especially in light of the company’s deepening partnership with Microsoft.

Reid Hoffman, the co-founder of LinkedIn, has been a quieter but no less influential figure. Through Greylock, the venture firm where he is a partner, Hoffman provided early backing and remains deeply connected to AI ethics and philanthropic efforts in the space. His influence is largely strategic, working behind the scenes to shape the discourse around responsible AI development.

While Peter Thiel and his venture firm Founders Fund have not been publicly linked to direct investments in OpenAI, Thiel’s ideological circle continues to exert pressure across the broader AI ecosystem. OpenAI’s pivot toward monetization and increased centralization of power reflects philosophical currents long championed by Thiel and his network.

The most powerful force behind OpenAI today, however, is Microsoft. With an investment exceeding $13 billion, Microsoft holds exclusive licensing rights to OpenAI’s GPT models. The tech giant has embedded these models across its product suite, including Azure, GitHub Copilot, Bing, and Microsoft Office. Microsoft’s role was especially decisive during the 2023 board crisis, when its backing helped facilitate the swift return of Sam Altman as CEO—demonstrating just how much influence capital now wields over OpenAI’s governance and future direction.

Government Entities & Strategic Influence

While OpenAI initially pledged to avoid military applications, its entanglement with government entities—particularly in the U.S.—has grown more complex over time. The U.S. Department of Defense and DARPA, for instance, have not partnered with OpenAI directly, but the company’s alignment with national security interests has become increasingly evident. Through Microsoft’s extensive government contracts, OpenAI’s technologies are indirectly accessible to defense and intelligence agencies, subtly blurring the line between civilian innovation and military utility.

The Biden Administration has also brought OpenAI into the fold of national policy-making. Through executive orders on AI safety and regulation, the White House has emphasized the importance of aligning frontier AI development with public-interest goals. OpenAI’s regular participation in AI summits, safety forums, and regulatory discussions reflects its evolving role as a strategic national asset, not just a private company.

Globally, foreign governments are watching—and in some cases courting—OpenAI. The United Kingdom and the United Arab Emirates have made overtures to host major AI hubs, while China closely monitors OpenAI’s progress as part of its broader AI competition with the U.S. In response, the American government is beginning to treat companies like OpenAI as part of its critical infrastructure, with all the influence, oversight, and strategic implications that come with that designation.

Think Tanks and Influencer Organizations

OpenAI exists within a dense network of ideologies, policy circles, and institutions that shape its direction. One of the most influential is the Effective Altruism (EA) movement. Several current and former staff and board members—including Helen Toner from the Center for Security and Emerging Technology (CSET)—have ties to EA. This movement prioritizes minimizing long-term existential risks, often framing artificial general intelligence (AGI) safety as humanity’s most pressing challenge.

The Center for Security and Emerging Technology (CSET) itself has become a powerful voice at the intersection of AI and geopolitics. Helen Toner’s affiliation with CSET became a flashpoint during OpenAI’s board crisis in 2023, underscoring how deeply think tank ideologies can permeate tech governance. Meanwhile, companies like Anthropic and DeepMind serve as both competitors and cousins to OpenAI. Many of their founders and researchers emerged from OpenAI’s orbit or share similar safety-focused worldviews, creating a close-knit network that steers both public discourse and policy on AI development.

Obama-Era Tech Policy and Institutional Foundations

Much of the groundwork for the development of OpenAI and its policy and ethical posture traces back to the Obama administration. The White House Office of Science and Technology Policy (OSTP), under leaders like Dr. John Holdren and Megan Smith, promoted open data, ethical AI, and federal investment in computational education. These efforts seeded a federal infrastructure and mindset that welcomed machine learning and innovation long before ChatGPT entered the picture.

Initiatives like the U.S. Digital Service (USDS) and 18F, launched in 2014, recruited elite engineers and designers from Silicon Valley to modernize government technology. These programs didn’t just fix websites—they built pipelines of influence. Alumni transitioned into think tanks, private AI companies, and philanthropic ventures, spreading a culture that valued open-source tools, civic tech, and public-private partnerships.

Key Obama-era appointees like DJ Patil (first U.S. Chief Data Scientist) and Jason Goldman (former White House Chief Digital Officer) brought a distinctly democratic and civic ethos to data and AI. Their ideology viewed technology as a tool to strengthen democracy, not just to drive business—an outlook that still echoes in OpenAI’s public-facing mission.

Effective Altruism & Obama-Era Policy Networks

Many policy thinkers from the Obama era moved fluidly into the Effective Altruism ecosystem. This alignment reinforced ideas around long-term AGI risk, cautious tech deployment, and the ethical governance of powerful AI systems. OpenAI’s founding charter—focused on benefiting humanity and aligning AGI with human values—reflects this convergence. Government-affiliated advisory groups on AI risk also bear the fingerprints of these overlapping communities, paving the way for current policies like Biden’s AI Executive Order and international efforts such as NATO-level AI safety discussions.

Philanthropy and Institutional Capital

Philanthropic capital played a major role in solidifying the AI policy ecosystem and funding its development. Obama-aligned funders like Reid Hoffman, Eric Schmidt, and Laurene Powell Jobs deployed their resources across a range of influential initiatives. They invested in AI companies like OpenAI and Anthropic, supported think tanks such as CSET and Data & Society, and funded education and civic tech projects rooted in Obama’s “tech + democracy” vision.

Ecosystem of Think Tanks, Companies, and Foundations

After their service, many Obama-era alumni dispersed into think tanks, private tech ventures, and philanthropic organizations—becoming key contributors to the modern AI policy landscape.

Think Tanks & Research Institutions like CSET, Brookings, Data & Society, and the New America Foundation provided the intellectual capital for ethical AI deployment, global governance, and U.S. national security. Figures like Jason Matheny, Tom Kalil, and Nicole Wong helped integrate liberal democratic values into AI policy discourse. Institutions like the Berggruen Institute also pushed long-term thinking on AGI safety, often employing former policymakers as fellows or advisors.

In the private sector, Obama-era talent landed at influential AI firms. DeepMind and Anthropic absorbed ex-OSTP, DARPA, and USDS personnel who brought regulatory expertise and public-sector credibility. Companies like Rebellion Defense and Palantir, though controversial, became major players in applying AI to national security—with Eric Schmidt playing a strategic funding and advisory role in both.

Philanthropic ventures like Schmidt Futures, Ford Foundation, MacArthur Foundation, The Emerson Collective, and Chan Zuckerberg Initiative acted as strategic funders of AI ethics, tech justice, and innovation. These networks amplified the civic-minded, safety-first orientation that now defines much of the AI policy landscape.

Cultural Shaping: MIT Media Lab & Intelligence Infrastructure

Though different in scope, both the MIT Media Lab and In-Q-Tel (the CIA’s venture capital arm) played significant roles in AI’s evolution—from imaginative experimentation to real-world deployment in defense and surveillance.

At the MIT Media Lab, pioneers like Rosalind Picard and Cynthia Breazeal developed early work in affective computing, robotics, and human-computer interaction. These explorations helped humanize AI, shifting it from pure logic to emotionally resonant interfaces—laying aesthetic and functional groundwork for systems like Siri and ChatGPT. The Lab’s alumni went on to shape design-centric approaches at Google, Apple, IBM, and OpenAI itself.

Funded by both big tech and controversial sources like Jeffrey Epstein, the Media Lab was at once a visionary playground and a complex ethical terrain. Still, its storytelling ethos and experimental mindset influenced how society imagines AI—not as a cold, calculating force, but as something empathetic, intimate, and potentially beautiful.

CIA / In-Q-Tel: The Quiet Funders of the AI-Surveillance Nexus

One of the most quietly influential players in the evolution of artificial intelligence—especially as it intersects with national security and surveillance—is In-Q-Tel, the CIA’s venture capital arm. Founded in 1999, In-Q-Tel operates as a nonprofit investment firm designed to bridge the gap between emerging technologies and the needs of the U.S. intelligence community. With backing from both public funds and private investors, In-Q-Tel essentially serves as the CIA’s tech scout—identifying and funding innovations that can give American intelligence agencies a strategic edge.

In the post-9/11 era, as the national security apparatus rapidly expanded, In-Q-Tel became a critical channel for financing early-stage AI and surveillance technologies. Its investments read like a blueprint for today’s surveillance state. It provided seed funding to Palantir Technologies, now a multibillion-dollar AI company known for its powerful data aggregation and predictive analytics used by law enforcement and intelligence agencies. Another early investment, Keyhole Inc., developed geospatial visualization tools for surveillance, and was later acquired by Google to form the foundation of Google Maps. Recorded Future, another In-Q-Tel-backed firm, offers AI-driven predictive intelligence services to both the CIA and NSA. Similarly, Basis Technology specialized in natural language processing and sentiment analysis, providing tools to scan and interpret foreign-language data for intelligence use.

In-Q-Tel’s strategic focus has always centered on technologies that can mine massive datasets, automate surveillance, and extend the reach of U.S. intelligence operations. Their portfolio has included systems capable of facial recognition, real-time translation, social media monitoring, and threat prediction—tools that align with a vision of total situational awareness in both physical and digital domains. These investments not only shaped the architecture of national defense but also helped set the stage for the broader application of AI in civilian life.

Beyond the intelligence community, In-Q-Tel’s influence has seeped into the cultural understanding of AI itself. By positioning AI as a tool of national defense and counterterrorism, In-Q-Tel contributed to the legitimation of surveillance-based AI as both necessary and inevitable. Many of the machine learning techniques and algorithms originally developed for intelligence purposes eventually found their way into consumer products and platforms—most notably in sentiment analysis, behavioral targeting, and real-time language translation. This diffusion helped blur the lines between public safety, private enterprise, and personal data—raising deeper questions about how AI is deployed, who controls it, and to what end.

The short answer to who owns artificial intelligence is: no single entity owns “artificial intelligence.” However, a small, elite constellation of corporations, governments, and funders control the infrastructure, training data, and foundational models that everything else depends on.

The Illusion of “Open” AI

While many AI projects brand themselves as “open,” the reality is that most powerful foundation models—such as GPT, Claude, Gemini, and LLaMA—are either proprietary or tightly controlled. Only a handful of organizations have the capital, data, and compute capacity necessary to train these models from scratch, meaning the field is far more closed than it appears.

Who Actually Owns the Core Infrastructure?

As of 2024–2025, the core foundation models in AI are owned and controlled by a small group of powerful companies. GPT-4 and GPT-5 are developed by OpenAI, with Microsoft holding exclusive licensing rights. Claude is built by Anthropic, which is backed by Amazon and Google, and offers only controlled API access. Gemini (formerly Bard) is owned by Google DeepMind and trained on extensive private datasets. Meta’s LLaMA and LLaMA 3 models have public code but restricted access to their weights. Elon Musk’s xAI developed Grok, which is accessible through the X (Twitter) platform. Cohere’s Command R+ is an open API model designed for retrieval-augmented generation (RAG) tasks. Meanwhile, Mistral and Mixtral, developed by Mistral AI in France, stand out as comparatively open models, released with open weights and backed by European venture capital and strong political support from European governments.

Compute Owners: Cloud + Chips

AI infrastructure depends heavily on the companies that own and operate massive data centers and chip manufacturing. NVIDIA dominates the AI hardware landscape, supplying approximately 90% of the GPUs used to train and run advanced models. On the cloud infrastructure side, Microsoft Azure powers OpenAI’s systems, while Amazon AWS supports both Anthropic and Cohere. Google Cloud provides the backend for DeepMind and its Gemini models, and Oracle hosts the infrastructure for xAI’s Grok. Together, these companies control the vast majority of computational resources that make large-scale AI possible. In practice, whoever owns the GPUs and training environments holds de facto control over which AI systems get built and scaled.

Who Owns the Core Training Data?

High-performing AI models are trained on a diverse mix of data sources, including public internet content (such as books, Wikipedia, Reddit, news articles, and StackOverflow), scientific and academic databases (often locked behind paywalls), code repositories like GitHub and StackExchange, and vast amounts of private user data, including search histories, email content, and map usage. The key gatekeepers of this data are major tech firms: Google, with its control over YouTube, Gmail, Chrome, and Search; Meta, through Facebook, Instagram, and WhatsApp; Microsoft, via LinkedIn, GitHub, and Bing; Apple, through Siri and App Store telemetry; and Amazon, with access to Alexa voice data, shopping behavior, and AWS logs. Collectively, these companies own the “raw materials” of human thought and behavior that fuel the development of modern AI systems.

Shadow Owners: Strategic Investors & Governments

Strategic investment plays a quiet but powerful role in shaping the AI landscape. OpenAI, for example, was initially funded by Elon Musk and later backed by Microsoft, Reid Hoffman, and Khosla Ventures. Anthropic received early investment from FTX’s Sam Bankman-Fried before securing major funding from Amazon and Google. Meanwhile, Schmidt Futures, the foundation led by former Google CEO Eric Schmidt, influences the AI talent pipeline and safety initiatives, while In-Q-Tel, the CIA’s venture arm, has seeded startups focused on predictive intelligence and language technologies. Government involvement is equally significant: DARPA and the Department of Defense provided foundational research funding, while the NSA and CIA continue to fund and deploy AI for surveillance. In Europe and the UK, AI offices are not only working to regulate the technology but also funding open-source alternatives like Mistral to maintain strategic autonomy.

Is There a “Master” AI That Trains All Others?

Not quite—but models like GPT-4/5 and Claude serve as de facto “teacher models.” They’re used to fine-tune smaller models through imitation learning and to generate synthetic data that powers other AIs. In this sense, many “independent” models are downstream of OpenAI’s and Google’s linguistic and cognitive frameworks. These companies have positioned themselves as meta-intelligences—training, evaluating, and correcting other systems.
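To make the teacher-student idea concrete, here is a minimal sketch of distillation via synthetic data. It uses a small open model from the Hugging Face transformers library as a stand-in; the model name, prompts, and two-step structure are illustrative assumptions, not any lab’s actual pipeline.

```python
# Minimal sketch: a "teacher" model generates synthetic training data
# that a smaller "student" model can later be fine-tuned on.
# gpt2-large stands in for a frontier teacher model (an assumption for illustration).
from transformers import pipeline

teacher = pipeline("text-generation", model="gpt2-large")

prompts = [
    "Explain photosynthesis in one sentence:",
    "Summarize the causes of World War I:",
]

# Step 1: the teacher completes the prompts, producing synthetic examples.
synthetic_data = [
    {"prompt": p,
     "completion": teacher(p, max_new_tokens=64, do_sample=True)[0]["generated_text"]}
    for p in prompts
]

# Step 2 (not shown): fine-tune a smaller student model on synthetic_data,
# e.g. with a supervised fine-tuning trainer, so it imitates the teacher.
print(synthetic_data[0]["completion"][:200])
```

In this pattern, the downstream model never touches the teacher’s weights; it inherits the teacher’s linguistic framing purely through the generated data, which is why so many “independent” models end up sounding like their upstream teachers.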

Who Really Runs Claude (Anthropic’s Flagship AI)?

Claude is developed by Anthropic, a safety-focused AI company founded in 2021. Despite its branding as ethically driven and focused on safety, the leadership behind Claude is deeply embedded within elite financial, ideological, and corporate networks.

Anthropic’s founding and executive team is composed of several key figures who played pivotal roles in the development of early large language models. CEO Dario Amodei, formerly Vice President of Research at OpenAI, was a lead engineer behind GPT-2 and GPT-3. He left OpenAI after disagreements over Microsoft’s increasing influence. His sister, Daniela Amodei, serves as Anthropic’s President and is also a former OpenAI leader, known for her expertise in policy and operations. She was instrumental in shaping Claude’s “Constitutional AI” safety framework. Co-founder Tom Brown previously led the engineering team behind GPT-3, while Jared Kaplan—a physicist-turned-AI researcher and co-author of the GPT-3 paper—now oversees the architecture of Claude’s models.

Major Investors and Strategic Partners

Anthropic has attracted significant investment from major tech and venture capital firms. Amazon has pledged up to $4 billion and is integrating Claude into its AWS ecosystem. Google has invested over $500 million and is incorporating Claude into its Workspace products. Notably, FTX and Alameda Research, led by Sam Bankman-Fried, were among Anthropic’s earliest investors, contributing around $500 million before the FTX collapse. Additional backers include Spark Capital, Menlo Ventures, and Zoom Ventures, further solidifying Claude’s position in the competitive AI landscape.

Ideological and Institutional Alignment

Claude’s development is shaped by Effective Altruism (EA) and Longtermism, ideologies centered on minimizing existential risks and maximizing long-term human good. These belief systems influence Anthropic’s approach to value alignment and safety. Claude is also closely aligned with major global AI safety initiatives, including the Partnership on AI, the Center for AI Safety, and policy events hosted by the White House and UK AI Safety Institute.

Governance and Oversight

Anthropic’s governance remains private and largely opaque. The known board includes CEO Dario Amodei as Chair, along with representatives from Amazon and Google Cloud. Additional influence likely comes from informal advisors in academia, national security, and AI ethics—many of whom operate under nondisclosure agreements. While Anthropic publicly presents itself as values-driven and safety-first, it is deeply financially tied to Big Tech, ideologically influenced by elite institutions, and entangled in public-private security partnerships. Notably, there is currently no external accountability structure overseeing the development or societal impact of its Claude models.

Another critical factor to consider is Elon Musk’s involvement and influence. His stance on AI safety is a complex mix of sincere concern, ideological conviction, and calculated strategic positioning. He has long warned about the existential risks posed by artificial general intelligence (AGI), famously likening its development to “summoning the demon” as early as 2014. His co-founding of OpenAI in 2015 was driven by the desire to create an open-source counterweight to Google’s DeepMind, which he viewed as dangerously centralized. Musk’s alignment with early Effective Altruism thinkers like Nick Bostrom—author of Superintelligence—and his repeated public calls for AI regulation, despite his usual anti-regulatory stance in other sectors, point to a genuine, long-standing belief that AI could pose catastrophic risks to humanity.

At the same time, Musk is also playing a strategic power game. After his departure from OpenAI, he lost control of a company he helped create—only to watch it rise to prominence with GPT-4. Launching xAI and its model Grok allowed him to re-enter the AI race on his own terms. His rhetoric about building a “truth-seeking” AI reflects his broader ideological campaign against what he sees as the “woke” or sanitized nature of models from OpenAI, Google, and Anthropic. Grok, in contrast, is pitched as rebellious and aligned with Musk’s vision of free speech—a continuation of his push to remake X (formerly Twitter) into a platform for unfiltered discourse.

Musk’s ambitions also involve vertical integration across his empire. With AI models like Grok integrated into X, autonomous driving technologies for Tesla, neural interfaces at Neuralink, and AI-guided systems at SpaceX and Starlink, Musk is creating powerful feedback loops and data moats that give him long-term strategic leverage. This interconnected ecosystem places him in direct competition with the AI establishment dominated by Microsoft (OpenAI), Google (DeepMind), Amazon (Anthropic), and Meta. Musk seeks not just to break their dominance but to reshape the conversation around AI safety, values, and governance by injecting his own ideology and infrastructure into the heart of the industry.

Yet, despite these ambitions and legitimate concerns, Musk remains an unreliable messenger. His tendency toward hyperbole often makes his warnings seem alarmist or self-serving. He champions regulation while simultaneously building a proprietary AI system without independent oversight. His critiques of rivals frequently double as promotional tools for his own ventures, blurring the line between principled advocacy and competitive opportunism. In short, while Musk’s fear of unchecked AI is real, his approach is anything but neutral—it’s deeply calculated, and always with an eye toward control.

This is all just more moves on the chessboard for Elon Musk, who is a founding member of the so-called “PayPal Mafia”—a group of early PayPal executives and employees whose influence has extended far beyond the 2002 sale of PayPal to eBay. This network of technologists and entrepreneurs went on to shape major sectors of Silicon Valley and the modern tech ecosystem, including AI, venture capital, social media, space exploration, fintech, and even political ideology. Their intertwined ventures and shared vision created a powerful, elite network that continues to steer innovation—and controversy—across the globe.

Musk’s role in the PayPal story began with the founding of X.com in 1999, an ambitious online financial services startup. He later merged X.com with Confinity, a company co-founded by Peter Thiel and Max Levchin, which had developed the payment platform that would become PayPal. After internal clashes, Musk was removed as CEO but remained the company’s largest shareholder. When eBay acquired PayPal for $1.5 billion in 2002, Musk walked away with $180 million, which he used to fund Tesla, SpaceX, and Neuralink. His original vision for X.com—a digital financial super app—was shelved for decades but resurfaced in 2023 with his controversial rebranding of Twitter to “X.”

The PayPal Mafia includes a who’s who of tech influencers, each with their own distinct worldview and impact. Peter Thiel became a political kingmaker and co-founder of Palantir, pushing right-wing and pro-surveillance ideologies. Reid Hoffman founded LinkedIn and became a key figure in AI ethics and center-left tech diplomacy. David Sacks launched Yammer and now shapes conservative discourse through the All-In podcast. Max Levchin built consumer-focused fintech firms like Affirm, while Keith Rabois emerged as a vocal libertarian investing in AI, biotech, and housing. Other members include YouTube co-founder Steve Chen, Yelp CEO Jeremy Stoppelman, and venture capitalist Roelof Botha—each contributing to a tech landscape still shaped by PayPal’s disruptive DNA.

Together, the PayPal Mafia reflects not only entrepreneurial success but also the growing entanglement between tech, ideology, and power.

Their Collective Impact

The PayPal Mafia didn’t just launch companies—they helped build the very infrastructure of the 21st-century internet. Their influence touches nearly every aspect of digital life: social media platforms like Facebook, LinkedIn, and YouTube; new modes of commerce through Tesla’s mobility ecosystem, Affirm’s “Buy Now, Pay Later” model, and Yelp’s consumer review economy; and surveillance and defense via Palantir. They were early funders of transformative AI labs like OpenAI, DeepMind, and Anthropic, and have shaped the direction of venture capital through firms like Founders Fund, Greylock, and Sequoia. Their collective sway extends into politics as well, where figures like Peter Thiel and David Sacks have steered libertarian and reactionary tech ideologies into mainstream debate.

Musk’s Strategic Role Within the Mafia

Elon Musk has always occupied a unique position within the PayPal Mafia. More visionary inventor than venture capitalist, he distanced himself from the traditional investor playbook embraced by many of his peers. Yet Musk’s empire—Tesla, SpaceX, Neuralink, and now xAI—was undeniably built on the financial foundation and social capital generated through PayPal and its alumni network. Over time, Musk has evolved from collaborator to rival, particularly with Peter Thiel, who has publicly criticized Musk’s chaotic management of Twitter/X.

Ideological Fractures Within the Mafia

Though bound by ambition and early success, the PayPal Mafia has splintered ideologically. Thiel, Sacks, and Rabois represent a hard-right, nationalist-tech elite pushing a reactionary vision of Silicon Valley. Musk, by contrast, embodies a libertarian-populist strain, driven by eccentric futurism and an obsession with existential risk. Meanwhile, figures like Reid Hoffman and Roelof Botha promote a more centrist, diplomatically engaged approach to innovation, with a focus on ethics and global responsibility. Despite their common origin, these ideological divisions now define much of their influence on the tech world and beyond.

The Government’s Hand in AI: Shaping Innovation Across Administrations

Over the past two decades, U.S. administrations have played a pivotal role in shaping the evolution of artificial intelligence (AI), from policy creation to military application and global leadership. Let’s explore how the Obama, Trump, and Biden administrations each influenced AI development through various channels, including policy, public-private partnerships, military involvement, and ideological frameworks.

Obama Administration (2009–2017): The Foundational Layer

The Obama administration laid the groundwork for AI in the U.S., marking the beginning of a national strategy for artificial intelligence. In 2016, the White House released its first federal AI strategy reports, including “Preparing for the Future of Artificial Intelligence” and the “National Artificial Intelligence R&D Strategic Plan,” which framed AI as a public good and an economic driver. This era emphasized open data, ethical research, and the potential of AI for social good.

Obama’s administration also fostered critical infrastructure for AI innovation by establishing the U.S. Digital Service (USDS) and the Presidential Innovation Fellows. These initiatives brought Silicon Valley talent into federal agencies and facilitated early partnerships with tech giants such as Google, Amazon, and Palantir. Additionally, Obama-era alumni played a crucial role in bridging government and academia, helping establish influential AI think tanks like the AI Now Institute, Partnership on AI, and Data & Society. These efforts led to the creation of frameworks for AI ethics that would influence the future of technology. In military research and development, the Defense Advanced Research Projects Agency (DARPA) increased investment in autonomous systems and AI-enabled cyber defense, setting the stage for Project Maven, which would expand under the Trump administration.

Trump Administration (2017–2021): Deregulate, Militarize, Compete with China

The Trump administration shifted focus to AI as a tool for national security and economic competitiveness, with a significant emphasis on deregulation and military applications. In 2019, Trump launched the American AI Initiative by executive order, directing R&D funding, workforce development, and ethics initiatives while downplaying regulation. The initiative positioned AI as crucial to global power struggles, particularly in competing with China.

The National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, was established to assess AI’s role in global power dynamics. The commission’s 2021 report urged massive investments in AI defense, semiconductor chips, and talent pipelines, with a stark warning: “We are not prepared to defend or compete in the AI era.” Meanwhile, military adoption of AI skyrocketed, particularly within the Joint AI Center (JAIC), as AI was used for surveillance, drone targeting, predictive logistics, and battlefield decision-making tools. This militarization of AI represented a key priority of the Trump administration.

Biden Administration (2021–2024): Guardrails, Chips, and Global Leadership

Under President Biden, AI policy took on a more regulatory and diplomatic tone, with an emphasis on safety, equity, and global governance. In 2023, Biden issued a landmark executive order on AI, which required safety testing of foundation models, reporting obligations to the government, and civil rights audits on AI usage. The order empowered agencies like the National Institute of Standards and Technology (NIST), the Department of Commerce, and the Department of Defense to oversee AI’s ethical development.

One of the Biden administration’s most significant contributions to AI infrastructure was the passage of the CHIPS and Science Act in 2022, which authorized roughly $280 billion for semiconductor production, scientific research, and related infrastructure, including about $52 billion in direct subsidies for domestic chip manufacturing. This legislation aimed to reduce U.S. dependence on Taiwan and China for semiconductor manufacturing while funding AI research hubs and university partnerships.

Biden’s administration also took a leading role in global AI governance, hosting AI Safety Summits with the UK and other allies and working with organizations like the OECD, the UN, and the EU to establish AI governance standards. Additionally, the White House AI Council coordinated efforts across federal departments to craft domain-specific AI policies, focusing on issues like bias mitigation, workforce impacts, and cybersecurity.

Cross-Administration Continuities and Shifts

Throughout these administrations, certain themes have remained consistent, while others have evolved. The Obama administration pioneered public-private AI collaborations, which were expanded under Trump for military applications and later regulated and invested in under Biden, particularly with the CHIPS Act. The military’s role in AI grew significantly under Trump, while Biden focused on overseeing ethical reviews of AI projects. Ethical and equity concerns were an emerging issue under Obama, largely absent under Trump, but became a central focus under Biden. Research and development efforts were initially led by academia in the Obama era, became corporate-driven under Trump, and are now characterized by national investment in infrastructure under Biden. Geopolitically, Obama laid the groundwork, Trump focused on competition with China, and Biden embraced full-scale tech diplomacy.

Who’s Really Behind the Policy Influence?

The influence of former Obama-era officials and Silicon Valley insiders continues to shape U.S. AI policy. Figures such as Eric Schmidt, who headed the NSCAI and now invests heavily in AI defense startups, and Lynne Parker, who guides AI safety policies, remain pivotal. Jason Matheny, a former Obama official now at RAND, co-authored frameworks on AI safety, while Arati Prabhakar, Biden’s science advisor, leads efforts on AI governance and risk mitigation. Additionally, former members of Obama’s digital team have moved into influential roles at companies like Google, Microsoft, and OpenAI, continuing to shape AI safety norms from within the tech industry.

Financial and Technological Dependencies – The crucial roles of Nvidia, Google, Microsoft, and other firms that make AI breakthroughs possible.

Nvidia, Google, Microsoft, and other leading technology companies have played pivotal roles in advancing artificial intelligence (AI) through substantial financial investments and strategic initiatives.

Nvidia:

Nvidia has emerged as a dominant force in the AI hardware sector. Its GPUs are essential for training AI models, making them integral to AI development. In early 2025, Chinese firms ordered at least $16 billion worth of Nvidia’s H20 AI chips, underscoring their global demand. This surge in demand has significantly boosted Nvidia’s revenues, with the company reporting $26 billion for the first quarter of fiscal 2025, up from $7.2 billion a year earlier. Such growth propelled Nvidia’s market valuation to $3.34 trillion in June 2024, briefly surpassing both Microsoft and Apple to become the world’s most valuable company.

Google:

Google has heavily invested in AI, particularly through its cloud services and AI research divisions. In the first quarter of 2025, Google’s capital expenditures reached $12 billion, nearly double the previous year’s figure, primarily driven by investments in technical infrastructure like servers and data centers to support AI advancements. Google’s AI research arm, DeepMind, reported revenues of £1.53 billion in 2023, with a net income of £113 million, highlighting its significant contributions to AI breakthroughs.

Microsoft:

Under CEO Satya Nadella since 2014, Microsoft has strategically embraced AI, investing heavily in cloud services and AI research. The company has formed significant partnerships, notably with OpenAI, to bolster its AI capabilities. In April 2025, Nadella emphasized Microsoft’s commitment to leading in AI, stating, “We have been doing what is essentially capital allocation to be a leader in AI for multiple years now, and we plan to keep taking that forward.” These efforts have substantially increased Microsoft’s market valuation, reaching $2.8 trillion in 2025, a tenfold increase under Nadella’s leadership.

Other Firms:

Companies like Meta Platforms have also significantly contributed to AI advancements. Meta committed to purchasing approximately 350,000 of Nvidia’s H100 GPUs, enhancing its AI research and applications. Similarly, Tesla’s investment in Nvidia’s chips supports its development of autonomous driving technologies, exemplifying the automotive sector’s reliance on AI.

Collectively, these firms’ financial investments and strategic initiatives have been instrumental in driving AI innovations, shaping the technological landscape, and delivering substantial returns to shareholders.

Nvidia’s foray into AI chip development began in the mid-2000s when the company recognized the potential of its Graphics Processing Units (GPUs) beyond gaming applications. In 2006, Nvidia introduced CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model that enabled developers to harness the power of GPUs for general-purpose computing tasks, including artificial intelligence and machine learning.
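CUDA itself is a C/C++ platform, but the idea it unlocked, offloading general-purpose math onto thousands of GPU cores, is easiest to see from a high-level library built on top of it. Below is a minimal Python sketch using PyTorch as an illustrative stand-in (not Nvidia’s own tooling): the same matrix multiply runs on the CPU or, when a CUDA device is present, as GPU kernels.

```python
# Illustrative sketch of general-purpose GPU computing from Python.
# PyTorch dispatches the matrix multiply to CUDA kernels when a GPU is available.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

if torch.cuda.is_available():
    a, b = a.cuda(), b.cuda()  # move the tensors onto the GPU

c = a @ b                      # computed in parallel across thousands of GPU cores
print(c.device)                # 'cuda:0' if a GPU was found, otherwise 'cpu'
```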

This strategic shift was driven by the vision of CEO Jensen Huang and other company leaders, who identified AI as an emerging field with significant growth potential. They believed that investing in AI chip development would position Nvidia at the forefront of this technological evolution.

Nvidia’s commitment to AI was further solidified with the release of the Volta architecture in 2017, which introduced Tensor Cores designed specifically for deep learning tasks. This development marked a significant milestone in Nvidia’s transition from a graphics card manufacturer to a leader in AI chip technology.

Thus, Nvidia’s development of AI chips began in earnest around 2006 with the introduction of CUDA, followed by dedicated AI-focused architectures like Volta in 2017, reflecting the company’s strategic pivot towards AI technologies over the years.

Nvidia’s pivotal breakthroughs in supporting OpenAI’s AI advancements are marked by strategic collaborations and cutting-edge hardware developments:

2016: Introduction of Tesla P100 GPUs

In 2016, Nvidia unveiled the Tesla P100 GPUs, based on the Pascal architecture. These GPUs significantly enhanced deep learning performance, providing the computational power necessary for training complex AI models. This advancement laid the groundwork for future collaborations with AI research organizations, including OpenAI.

2017: Release of Volta Architecture with Tensor Cores

The 2017 introduction of Nvidia’s Volta architecture, featuring Tensor Cores optimized for deep learning, marked a significant leap in AI hardware. This innovation accelerated AI model training and inference, directly benefiting OpenAI’s research initiatives.

2024: Formation of a $12 Billion Partnership with OpenAI

In early 2024, Nvidia and OpenAI reportedly entered a landmark partnership valued at $12 billion. This collaboration aimed to co-develop next-generation AI chips, ensuring that OpenAI’s models are optimized for performance and scalability. Leveraging Nvidia’s advanced GPU designs and computing infrastructure, the partnership significantly bolstered AI research capabilities.

2025: Participation in the Stargate AI Infrastructure Project

In January 2025, Nvidia joined OpenAI and other tech giants in the Stargate project, a $500 billion initiative to construct AI data centers across the United States. This venture aimed to advance AI technology and infrastructure, with Nvidia providing critical GPU technology to support large-scale AI model training and deployment.

These milestones underscore Nvidia’s integral role in advancing AI technologies, directly supporting OpenAI’s mission to develop cutting-edge AI models and infrastructure.

The Pascal architecture is a microarchitecture developed by Nvidia, introduced in 2016. It succeeded the Maxwell architecture and was named after the French mathematician and physicist Blaise Pascal. Pascal was designed to offer significant improvements in performance, power efficiency, and computational capabilities, particularly for demanding workloads such as deep learning, artificial intelligence (AI), and scientific computing.

The Pascal architecture, introduced by Nvidia, brought several key advancements that significantly enhanced the performance of GPUs, particularly for machine learning and AI applications. One of the most notable features was the increase in the number of CUDA cores, which are parallel processors used by Nvidia GPUs. This boost in computational power allowed Pascal-based GPUs to handle more complex tasks, making them ideal for AI workloads.

Another significant upgrade was the inclusion of High Bandwidth Memory 2 (HBM2), which offered higher memory bandwidth than traditional GDDR memory. This improvement was especially beneficial for data-intensive tasks, such as AI training and large-scale simulations, where fast memory access is crucial.

Pascal also introduced Nvidia’s NVLink, a high-bandwidth interconnect technology that allowed multiple GPUs to communicate more efficiently. This was critical for scaling performance in deep learning systems that relied on distributed computing across multiple GPUs, ensuring that tasks were processed quickly and effectively.
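As a rough illustration of the multi-GPU pattern NVLink accelerates, here is a hedged sketch of data-parallel training with PyTorch’s DistributedDataParallel. It shows the general technique rather than Pascal-era code, and assumes a machine with two or more CUDA GPUs, launched with something like `torchrun --nproc_per_node=2 train.py`.

```python
# Sketch of multi-GPU data-parallel training. The gradient all-reduce in
# backward() is exactly the kind of GPU-to-GPU traffic NVLink speeds up.
import os

import torch
import torch.distributed as dist
import torch.nn as nn

def main():
    dist.init_process_group(backend="nccl")   # NCCL routes transfers over NVLink/PCIe
    rank = int(os.environ["LOCAL_RANK"])      # set by torchrun, one process per GPU
    torch.cuda.set_device(rank)

    model = nn.Linear(512, 10).cuda(rank)
    ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # Placeholder batch; in practice each rank loads a different shard of the data.
    x = torch.randn(32, 512, device=rank)
    y = torch.randint(0, 10, (32,), device=rank)

    loss = nn.functional.cross_entropy(ddp_model(x), y)
    loss.backward()                           # gradients are all-reduced across GPUs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```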

Energy efficiency was another key feature of the Pascal architecture. Compared to its predecessors, Pascal offered better performance per watt, making it more suitable for large data centers and helping improve the cost-effectiveness of GPU-heavy workloads.

While the initial Pascal GPUs did not include Tensor Cores, which are specialized for AI workloads, later models in the Nvidia lineup, such as the Volta architecture, incorporated them. This marked Nvidia’s deeper commitment to developing hardware optimized for AI applications, further enhancing the architecture’s capabilities in the rapidly evolving field of artificial intelligence.

The Pascal architecture played a pivotal role in accelerating AI model training, which was essential for companies like OpenAI in advancing their research and development of complex AI models. Nvidia’s Tesla P100 and Tesla P40 GPUs, built on the Pascal architecture, became especially popular in high-performance computing (HPC) and deep learning applications due to their impressive computational power and efficiency. These GPUs enabled faster processing, contributing to the rapid progress of AI technologies.

Overall, the Pascal architecture marked a significant milestone in Nvidia’s push towards becoming a key player in AI hardware, laying the groundwork for even more specialized AI-focused architectures like Volta and Ampere.

Artificial Intelligence (AI) is a broad field that encompasses several key components, each contributing to the development and advancement of AI technologies. These components can be categorized into both technical elements (software and hardware) and the organizations or entities that own, develop, or provide them. Here’s a breakdown:

Key Components of AI:

Algorithms and Models:

• Machine Learning (ML): The backbone of AI, where algorithms learn from data to make predictions or decisions. Key approaches include supervised learning, unsupervised learning, and reinforcement learning. (A brief supervised-learning example follows this list.)

• Deep Learning: A subset of ML that uses neural networks with many layers (hence “deep”) to model complex patterns. This is particularly powerful for tasks like image recognition, natural language processing (NLP), and autonomous systems.

• Natural Language Processing (NLP): The ability for AI to understand, interpret, and generate human language. This includes tasks like language translation, sentiment analysis, and chatbots (like those built on GPT-4).

• Computer Vision: Enables machines to interpret and understand visual data from the world, allowing for tasks like image classification, facial recognition, and autonomous driving.
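As promised above, here is a small, self-contained supervised-learning sketch: the classifier learns from labeled examples, then is scored on data it has never seen. The dataset and model choices are arbitrary illustrations.

```python
# Minimal supervised learning: fit on labeled data, evaluate on held-out data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                        # features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)   # learn patterns from the data
print("accuracy:", model.score(X_test, y_test))          # predict on unseen data
```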

Data:

• Training Data: AI models, particularly in machine learning, require vast amounts of labeled data to learn patterns and make predictions. This data can come from numerous sources like images, videos, text, and sensor readings.

• Data Annotation: The process of labeling or tagging raw data so that AI models can learn from it. Data annotation is crucial for supervised learning models.

Hardware:

• Graphics Processing Units (GPUs): Specialized hardware, like Nvidia’s GPUs (e.g., Tesla P100, A100), plays a crucial role in speeding up the training of AI models. GPUs are designed to perform many calculations simultaneously, making them ideal for deep learning.

• Tensor Processing Units (TPUs): Custom-designed hardware by Google, designed to speed up machine learning workloads, especially deep learning, with high throughput.

• FPGAs (Field-Programmable Gate Arrays): Reconfigurable hardware that can be optimized for specific machine learning tasks, used by companies like Microsoft and Amazon for AI tasks.

Software Libraries and Frameworks:

• TensorFlow: An open-source machine learning framework developed by Google, widely used for building and deploying AI models.

• PyTorch: Another popular deep learning framework, developed by Facebook and preferred for research and academic purposes. (A short training sketch follows this list.)

• Keras: A high-level neural networks API written in Python, designed to work with TensorFlow.

• Scikit-learn: A Python library used for machine learning tasks like classification, regression, and clustering.

• OpenAI Gym: A toolkit for developing and comparing reinforcement learning algorithms.
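To ground this list, here is a minimal sketch of what using one of these frameworks looks like: a tiny PyTorch network and a single training step. The layer sizes and random data are placeholder assumptions; the equivalent can be written in TensorFlow or Keras with different syntax.

```python
# A tiny neural network and one gradient-descent step in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 4)             # a batch of 8 examples with 4 features each
y = torch.randint(0, 3, (8,))     # integer class labels (3 classes)

optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                   # compute gradients
optimizer.step()                  # update the weights
print("loss:", float(loss))
```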

Cloud Computing and Infrastructure:

• Cloud Platforms: Providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure offer scalable infrastructure for running AI models and storing large datasets. Many AI research and development projects are run on these platforms.

• AI-as-a-Service: Companies like IBM Watson and Microsoft Azure AI offer AI services through the cloud, making it easier for organizations to access AI tools without having to develop them in-house.

AI Ethics and Regulations:

• Ethical AI: Developing AI in a way that is fair, transparent, and accountable. This involves addressing issues such as bias in algorithms, privacy concerns, and ensuring AI systems are aligned with human values.

• Regulatory Bodies: Various governments and international organizations are developing policies and regulations to ensure the safe and ethical use of AI technologies.

Key Owners of AI Components:

Tech Giants:

• Nvidia: Owns key hardware technologies like GPUs (Tesla, A100, H100) and the software frameworks (CUDA) for training AI models. Nvidia has become the leading provider of AI hardware, powering many data centers and AI research projects globally.

• Google: Owns and develops TensorFlow, one of the most widely used frameworks for AI, and Google Cloud offers AI infrastructure and machine learning tools. Google is also the creator of TPUs and DeepMind, which develops cutting-edge AI models, including AlphaGo and AlphaFold.

• Microsoft: Offers AI tools via Azure AI, and they have a partnership with OpenAI to integrate advanced models like GPT into their software products. They also provide hardware (e.g., Project Brainwave) to support AI tasks.

• Meta (Facebook): Owns and contributes to AI research through projects like PyTorch (developed by Facebook AI Research) and its deep learning applications in social media and virtual reality.

• Amazon: Owns AWS AI, providing cloud-based AI tools and services, and their hardware, AWS Inferentia, is designed to speed up AI model inference. They also own Alexa and use AI for recommendations and logistics.

• Apple: Develops AI for its hardware and software, including machine learning accelerators in their chips (such as A-series chips with neural engines) and CoreML, Apple’s machine learning framework for iOS and macOS.

Startups and Research Labs:

• OpenAI: A research lab focused on creating artificial general intelligence. They developed the GPT models, which are widely used for NLP tasks. OpenAI initially operated as a nonprofit but later transitioned to a capped-profit model under the OpenAI LP structure.

• DeepMind: Owned by Google, DeepMind is focused on cutting-edge research in AI and has developed advanced models in reinforcement learning and deep learning, such as AlphaGo and AlphaFold.

• Tesla: Tesla develops AI specifically for autonomous driving and vehicles, with a strong focus on using neural networks for real-time processing of sensor data.

Open-Source Contributions:

• Many components of AI (e.g., TensorFlow, PyTorch, Keras, Scikit-learn) are open-source, meaning they are developed by the global community but are owned by the organizations that maintain them (e.g., Google for TensorFlow, Facebook for PyTorch). Open-source AI software allows other companies and researchers to contribute and use the technologies freely, although they are still closely tied to the owning organizations for support and development.

Government and Academic Institutions:

• Governments and universities also contribute to AI development. For instance, MIT, Stanford, and UC Berkeley are renowned for their AI research. Governments are increasingly investing in AI research and ethics, and public institutions influence AI regulations.

Financial Perspective:

• Many of these companies, like Nvidia, Google, and Microsoft, generate billions of dollars from AI through both hardware sales (e.g., Nvidia GPUs) and cloud-based AI services (e.g., Microsoft Azure AI, Google Cloud AI).

• OpenAI, despite being a research lab, has garnered significant funding from Microsoft, which invested $1 billion initially in 2019 and an additional $10 billion in 2023. This partnership has allowed OpenAI to scale its models like GPT, benefiting from Microsoft’s infrastructure and financial support.

Basically, Nvidia, Google, Microsoft, Meta, Amazon, and OpenAI are the major players that own critical components of AI. These companies have extensive financial investments and intellectual property tied to AI hardware, software, research, and services, driving the global AI ecosystem.

The Future of AI Power Struggles – What Happens Next?

The future of AI power dynamics is poised to be highly fluid, with potential outcomes including further consolidation, regulatory-driven breakups, and the rise of unexpected challengers. As the AI landscape evolves—shaped by rapid technological advancements, shifting market demands, and increasing regulatory scrutiny—several possible scenarios could redefine who holds power in the AI ecosystem.

Increased Consolidation

One likely path is increased consolidation, driven by mergers and acquisitions (M&A) among major tech companies like Nvidia, Microsoft, and Google. These industry leaders are likely to continue acquiring smaller AI startups or established firms to bolster their technological capabilities. Nvidia’s attempted acquisition of Arm, ultimately abandoned in 2022 amid regulatory opposition, showed just how far such consolidation ambitions can reach. Additionally, vertical integration may intensify as companies aim to control more of the AI value chain. Nvidia could further invest in software and research, while Microsoft may expand its AI services portfolio to include proprietary frameworks. Moreover, AI as a service (AIaaS) is becoming a central offering for cloud providers like Microsoft and Amazon, making AI tools more accessible to small and medium-sized enterprises and reinforcing their market stronghold.

Potential Breakups and Decentralization

Conversely, we may witness a wave of decentralization, driven primarily by regulatory and ethical concerns. Governments around the world are increasingly focusing on curbing monopolistic practices, particularly as AI becomes indispensable across industries like healthcare and finance. Antitrust pressures, especially in the EU and potentially the U.S., could lead to forced divestitures or limitations on dominant players. Additionally, the rise of decentralized AI models—powered by blockchain and similar technologies—could empower users to retain control of their data, challenging centralized giants like Google and Amazon. The open-source movement also poses a threat to consolidation. Tools such as TensorFlow and PyTorch have democratized AI development, and as more developers gravitate toward community-driven innovation, the influence of big tech could diminish.

The Rise of Unexpected Challengers

Amid these shifts, a new wave of challengers may emerge. AI startups with niche expertise in sectors like healthcare, logistics, or entertainment could disrupt incumbent players, especially if they tackle key issues like AI ethics, energy efficiency, or explainability. On a global scale, countries like China are rapidly expanding their AI capabilities through companies like Baidu, Tencent, and Alibaba. Their efforts, along with initiatives in regions such as India and the EU, could create AI ecosystems tailored to specific cultural and regulatory contexts, undermining the dominance of U.S.-based firms. Additionally, AI companies prioritizing social good—such as reducing bias, enhancing fairness, or combating climate change—may gain traction as public demand for ethical AI solutions grows.

Shifting Power Dynamics

Control over cloud infrastructure is another critical factor in the evolving power landscape. Companies like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure already dominate cloud computing and are well-positioned to lead in cloud-based AI deployment. This infrastructure advantage gives them a competitive edge in scaling AI applications. At the same time, geopolitical and regulatory frameworks will significantly shape the future of AI. China’s state-backed AI strategy, the EU’s AI Act, and U.S. policy decisions could all either enable or constrain corporate influence, potentially giving rise to new players in the regulatory or standards-setting arenas.

Specialization and Hardware Innovation

The demand for AI-specific hardware is accelerating as AI models become increasingly computationally intensive. Companies like Nvidia are at the forefront, but startups such as Graphcore and Cerebras Systems—focused on designing chips tailored for deep learning and large-scale AI applications—could become serious contenders. Edge computing is also gaining momentum, enabling data processing closer to the source rather than relying on centralized data centers. This trend could empower companies that prioritize edge AI and privacy-preserving technologies, offering an alternative to the cloud-centric approach.

AI Democratization

Finally, AI is becoming more accessible through pre-trained models and democratization platforms. Tools like OpenAI’s GPT-4, Google’s BERT, and Meta’s LLaMA are being made available to developers, lowering the barrier to entry for creating AI applications. Platforms like Hugging Face and Runway are further accelerating this trend by offering user-friendly interfaces and tools for non-experts. This democratization may lead to a surge in innovation from smaller companies and independent developers, redistributing power across a broader spectrum of actors in the AI ecosystem.
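A brief sketch of what this democratization looks like in practice: a few lines of Python pull a small pre-trained open model from Hugging Face and generate text. The model name here is an illustrative choice, not a recommendation.

```python
# Democratized AI in a few lines: load a pre-trained open model and generate text.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("The future of AI ownership", max_new_tokens=40)
print(result[0]["generated_text"])
```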

Conclusion:

The future of AI is highly dynamic, with many possibilities for consolidation, fragmentation, and the rise of unexpected challengers. Consolidation may continue in the short term as large players like Nvidia, Microsoft, and Google reinforce their positions. However, as AI becomes more integrated into every aspect of society, we may see the emergence of new players—especially from smaller companies or non-traditional sources. This could result in a more decentralized and competitive AI landscape, driven by innovation, regulation, and the societal need for ethical and inclusive AI systems. The true winners of the AI power struggle will likely be those who can balance technological advancements with social responsibility, offering solutions that not only scale but also align with evolving global standards.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

I am a visionary, a futurist, and I am the father of “Modern Artificial Intelligence”.

I am a profound thinker who delves deep into various knowledge realms to deconstruct and construct competency frameworks. In essence, I possess a unique thought perspective—a serial polymath.

https://www.jameelgordon.com