Jameel Gordon

“When Code Meets Metal: How Artificial Intelligence and Robotics Are Merging to Reshape Power, Labor, and Reality”

Introduction

The future won’t be built in code alone—it will walk, lift, fly, and do everything that humans do. The merging of artificial intelligence with robotics marks not just a technological shift, but a transformation of global labor, military dominance, and the human-machine relationship itself.

The Historical Divergence of AI and Robotics

The fields of artificial intelligence (AI) and robotics, though deeply intertwined in popular imagination, historically developed along separate paths for several decades. This divergence was driven by distinct philosophical foundations, technical challenges, and institutional structures. AI originated in the mid-20th century from disciplines such as computer science, logic, and cognitive psychology. Its early focus was on simulating human reasoning, learning, and problem-solving within abstract or virtual environments. In contrast, robotics emerged from engineering and physics, concerned primarily with the design and control of physical systems capable of interacting with the real world. While AI aimed to build minds, robotics aimed to build bodies.

A key reason for this separation was the difference in technical hurdles. AI researchers tackled problems in symbolic reasoning, knowledge representation, and later machine learning, which required advances in algorithms, data structures, and computing hardware. Roboticists, on the other hand, contended with challenges in sensorimotor coordination, motion planning, and the physical limitations of existing hardware. Early attempts to merge the two were hampered by limited computational power and the absence of robust real-time systems capable of integrating perception, reasoning, and actuation. Additionally, the academic and professional communities were largely siloed—AI researchers typically worked in computer science departments and published in venues like the Association for the Advancement of Artificial Intelligence (AAAI) and the International Joint Conference on Artificial Intelligence (IJCAI), while roboticists were housed in engineering faculties and attended conferences such as the International Conference on Robotics and Automation (ICRA) and the International Conference on Intelligent Robots and Systems (IROS).

Over the decades, both fields experienced significant milestones. In AI, the 1956 Dartmouth Conference formally launched the discipline, followed by a wave of symbolic AI systems in the 1960s and 1970s, like SHRDLU. The 1980s saw the rise of expert systems such as MYCIN, while the 1990s and early 2000s brought a shift toward statistical methods, including Bayesian networks and support vector machines. The 2010s marked the explosive growth of deep learning, which revolutionized fields like computer vision and natural language processing. On the robotics side, foundational work began in the 1940s with cybernetics and control theory, and practical advances were made in the 1960s with the development of robotic arms like the Stanford Arm. The late 1960s and 1970s introduced mobile robots such as Shakey the Robot, and the 1990s saw major advances in localization, mapping, and sensor integration through probabilistic robotics.

Military and industrial interests significantly shaped both fields. The U.S. Department of Defense, especially through DARPA, funded early AI research and later sponsored efforts like the Strategic Computing Initiative in the 1980s. DARPA also played a major role in advancing robotics through initiatives like the Grand Challenges in the early 2000s, which catalyzed innovation in autonomous vehicle technology. In industry, AI gained early traction in areas such as finance, logistics, and enterprise software, while robotics found its first commercial foothold in manufacturing with the deployment of Unimate—the first industrial robot—at General Motors in 1961. Over time, robotics became central to factory automation, while AI proliferated across software platforms, digital assistants, and later, cloud-based services.

It wasn’t until the 2010s that the convergence of AI and robotics began in earnest. Advances in deep learning, sensor technology, edge computing, and cloud infrastructure made it possible to integrate perception and learning into real-world robotics systems. This ushered in a new era of embodied intelligence, where robots could not only move and manipulate but also understand and adapt to their environments. Companies like Boston Dynamics, Tesla, and NVIDIA began to fuse sophisticated AI with agile hardware, pushing the boundaries of what autonomous systems could do. Meanwhile, academic programs and research labs started to bridge the divide, promoting interdisciplinary approaches and reshaping curricula to reflect the new synergy between mind and machine.

In summary, AI and robotics developed independently due to differences in origin, focus, and technical complexity. AI centered on abstract intelligence and data, while robotics addressed real-world physical interaction. Today, these once-separate fields are converging rapidly, driven by shared goals and technological breakthroughs, creating a unified frontier at the intersection of cognition and embodiment.

The Convergence Point: How AI Is Becoming Embodied

After decades of parallel but separate development, artificial intelligence and robotics are now converging in powerful new ways. This fusion is being driven by rapid progress in three key areas: vision models, reinforcement learning, and neural networks. Together, these advances are enabling machines not just to think, but to see, move, learn, and adapt in real-world environments—what we call embodied AI.

Vision models, particularly large-scale convolutional and transformer-based architectures like CLIP and DINO, have given robots the ability to perceive the world with increasing accuracy and abstraction. These models allow systems to interpret complex visual data—recognizing objects, navigating environments, and even understanding scenes in context—all crucial for real-world interaction. When combined with reinforcement learning, robots can go beyond pre-programmed behaviors and start to learn from trial and error, refining their actions based on outcomes. Reinforcement learning helps robots adapt to dynamic environments, improving over time rather than relying solely on hardcoded instructions. Underpinning all this are deep neural networks, which provide the flexible, generalizable architecture necessary for handling the high-dimensional data involved in both perception and control.
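To make the perception step concrete, below is a minimal sketch of zero-shot recognition with an open CLIP checkpoint via the Hugging Face Transformers library. The checkpoint name, camera frame, and candidate labels are illustrative assumptions, not a description of any particular robot's stack.

```python
# Minimal sketch: zero-shot object recognition for a robot camera frame
# using an open CLIP checkpoint from Hugging Face Transformers.
# Checkpoint name, image path, and labels are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

frame = Image.open("camera_frame.jpg")  # one frame from the robot's camera
labels = ["a cardboard box", "a person", "a forklift", "an empty shelf"]

inputs = processor(text=labels, images=frame, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # image-to-text similarity scores
probs = logits.softmax(dim=1)              # normalize into label probabilities

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

In a fuller pipeline, a planner could hand the top-scoring label to downstream navigation or manipulation logic; the point here is only how a vision-language model turns raw pixels into context a robot can act on.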

Several cutting-edge examples illustrate this convergence. Boston Dynamics, long known for its highly agile robots like Atlas and Spot, has begun exploring the integration of large language and vision models into its systems. By combining its sophisticated locomotion hardware with models akin to GPT, these robots could eventually respond to natural language instructions, understand context, and autonomously decide how to act—essentially merging physical agility with cognitive flexibility.

Tesla’s Optimus project is another emblem of this shift. Tesla is applying its self-driving AI stack—rooted in vision-based neural networks and real-time decision-making—to a humanoid robot platform. Optimus is envisioned to handle repetitive or dangerous tasks, eventually functioning in homes and factories. Tesla’s approach leverages massive amounts of video data, simulation, and end-to-end neural networks, pushing toward general-purpose robots that can reason and act fluidly in human environments.

Meanwhile, Amazon’s warehouse robotics shows how industrial-scale AI and robotics are already tightly integrated. Systems like Proteus and Sparrow combine advanced robotic manipulation with AI-driven perception and planning. These robots can identify, pick, and sort a wide range of items—tasks that once required human dexterity and judgment. Amazon is increasingly using reinforcement learning to train these systems in both virtual and physical settings, allowing them to continually improve efficiency and reduce human labor in complex fulfillment centers.

In short, the convergence of AI and robotics is no longer theoretical—it’s actively reshaping industries. Embodied AI is emerging at the intersection of neural perception, adaptive learning, and robotic control. The gap between intelligence and action is closing, and the result is a new generation of machines that can see, think, and do—ushering in a transformative era for labor, mobility, and human-machine collaboration.

Labor, Surveillance, and the Automation of Human Tasks

The rapid rise of robotic automation—powered by advances in artificial intelligence—is fundamentally reshaping the nature of human labor. What began decades ago in factory lines is now expanding into service industries, retail, logistics, healthcare, and even fast food. Robots are no longer confined to repetitive, isolated tasks. With the help of AI, they are becoming more adaptable, more efficient, and increasingly present in everyday work environments. For workers, this shift brings both opportunities and profound challenges.

In manufacturing, automation has long been associated with robotic arms welding, assembling, or packaging at high speed and precision. These robots were largely rule-based and repetitive, programmed to perform the same motion indefinitely. But today, robotic automation is moving into new domains—like kitchens, warehouses, and drive-thrus—where tasks are variable, messy, and previously thought to require human intuition. Fast food chains are deploying robotic fryers, drink machines, and even AI-powered drive-thru attendants, aiming to reduce labor costs and boost consistency. In warehouses, robots can now navigate autonomously, sort products of different shapes and sizes, and respond in real time to shifting inventory.

What enables this transition is the growing intelligence of machines. With the integration of AI—particularly computer vision, reinforcement learning, and sensor fusion—robots are no longer just mechanical tools, but systems capable of adapting to dynamic environments. They can recognize objects, respond to spoken commands, make decisions on the fly, and even learn from mistakes. This shift from rigid automation to flexible autonomy allows robots to take over more complex and variable tasks that once relied on human judgment. In short, AI is allowing robots to move beyond the factory floor and into roles that once seemed uniquely human.
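As a minimal illustration of that trial-and-error loop, here is a tabular Q-learning sketch over an invented one-dimensional toy world. Real robotic systems use far richer state, sensors, and simulators, so treat this only as the shape of the idea, not a production controller.

```python
# Minimal sketch: tabular Q-learning, the trial-and-error loop behind
# "learning from mistakes". The toy world and rewards are invented
# for illustration; real robotic RL operates on far richer state.
import random

n_states, n_actions = 5, 2          # positions 0..4; actions: 0 = left, 1 = right
goal = 4                            # reward lives at the rightmost cell
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

for _ in range(500):                # 500 practice episodes
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == goal else -0.01   # small step cost, goal reward
        # nudge the value estimate toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# learned policy: which action each state prefers (1 = move right, toward goal)
print([max(range(n_actions), key=lambda act: Q[s][act]) for s in range(n_states)])
```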

However, this technological leap raises urgent questions about economic justice, worker displacement, and the future of labor. As robots become capable of performing more tasks for less money and with fewer errors, many workers—particularly in low-wage, high-turnover jobs—face potential displacement. Entire sectors could see reduced demand for human labor, from warehouse staff and delivery drivers to cashiers and cooks. This leads to a growing concern over who benefits from automation and who bears the costs. Without strong policies in place, the gains from automation risk flowing disproportionately to corporations and investors, widening economic inequality.

Moreover, automation often comes hand-in-hand with increased workplace surveillance. AI-powered systems track employee productivity, monitor movements, and measure performance in real time. In some workplaces, humans are being treated like robots—judged by algorithmic standards and managed by digital systems. This can degrade job quality and increase stress, even for those who remain employed.

Addressing these challenges requires deliberate and inclusive policy responses, including robust re-skilling and education programs, labor protections, and perhaps even new forms of social support like universal basic income. Re-skilling is especially crucial—workers must be given the tools and time to transition into new roles in a changing economy. But it’s not just about retraining; it’s about redefining the value of human labor in an age where machines can do more than ever.

In essence, automation is no longer just a matter of efficiency—it’s a profound social transformation. As robots grow smarter and more integrated into everyday tasks, society must confront critical questions: What is the future of work? Who controls the technology? And how can we ensure that progress benefits all, not just a privileged few?

Geopolitics and Power: Robotics as Strategic Infrastructure

In the 21st century, robotics and artificial intelligence are no longer just tools of industrial innovation—they are rapidly becoming strategic infrastructure, reshaping the balance of geopolitical power. The integration of AI and robotics is transforming how nations project force, secure borders, gather intelligence, and maintain economic dominance. As with nuclear power and satellite technology in the 20th century, the countries that lead in AI and robotics are poised to shape the future of warfare, governance, and global influence.

One of the most visible and consequential developments is the militarization of autonomous robots. Drones—both aerial and ground-based—are now central to modern warfare. Autonomous and semi-autonomous UAVs (unmanned aerial vehicles) are used for surveillance, precision strikes, and electronic warfare. On the ground, quadrupedal robots like Ghost Robotics’ Vision 60 are being tested for patrol, reconnaissance, and logistics in rugged or hazardous environments. Battlefield automation is evolving rapidly, with prototypes of fully autonomous weapon systems—capable of identifying and engaging targets with minimal human oversight—already in development. These systems promise faster decision-making and reduced troop risk, but raise significant ethical and strategic concerns.

The global race to dominate this space is led by a few key players: the United States, China, and Israel. The United States maintains a lead in AI research and high-end robotics, bolstered by DARPA-funded programs and partnerships with firms like Boston Dynamics, Palantir, and Anduril. It is actively investing in next-gen battlefield networks, autonomous vehicles, and drone swarms. China, meanwhile, has made AI and robotics central to its national development strategy. Through its “Made in China 2025” and “Next Generation AI” plans, it is rapidly scaling both commercial and military applications, integrating AI into surveillance infrastructure, autonomous weapons, and logistics. Chinese companies like DJI and Hikvision already dominate segments of the global drone and surveillance markets. Israel, though smaller, is a global leader in battlefield-tested drone technology and autonomous defense systems. Companies like Elbit Systems and Rafael have pioneered innovations in loitering munitions and AI-based targeting systems.

As these technologies mature, so too do the strategic stakes. The control of data—especially imagery, communications, and environmental inputs—is essential for training and deploying AI systems. Nations that dominate the flow and infrastructure of data (e.g., through satellite constellations, surveillance networks, or cloud platforms) hold a critical advantage. The physical deployment of AI systems—from border patrol drones to autonomous naval vessels—represents not just technological progress, but territorial influence. For instance, autonomous underwater drones are now being used to monitor contested maritime zones in the South China Sea and Arctic.

This militarized integration of robotics and AI also raises urgent national security and sovereignty concerns. Dependence on foreign-made robotic systems or AI software can expose a country’s defense systems to supply chain vulnerabilities, cyberattacks, or backdoor surveillance. As a result, countries are increasingly pursuing “sovereign AI” initiatives to develop domestic capabilities in both software and hardware. These efforts are closely tied to international tensions, export controls, and the weaponization of supply chains—as seen in U.S. sanctions on Chinese chipmakers or China’s restrictions on rare earth exports.

Robotics and AI are no longer just commercial technologies—they are strategic assets. The militarization of autonomous systems is redefining combat, surveillance, and national defense. Leading nations like the U.S., China, and Israel are in a high-stakes race to dominate this space, not just through innovation, but through infrastructure, data control, and deployment capacity. The implications are profound: global power in the age of AI may depend not just on who builds the smartest machines, but who controls where and how they move.

The Ethical Cliff: Responsibility, Violence, and Autonomy

As robots and AI systems become more autonomous, they are increasingly entrusted with decisions that carry real-world consequences—including those involving safety, privacy, and even life and death. This growing autonomy brings society to what many describe as an “ethical cliff”: a point at which our moral, legal, and institutional frameworks struggle to keep pace with the capabilities and risks of intelligent machines. In this new landscape, we must confront urgent questions about accountability, power, and the human cost of automation.

The first and most fundamental question is who is accountable when a robot makes a decision—especially a harmful one. Whether it’s a self-driving car causing an accident, a delivery drone invading privacy, or a military drone misidentifying a target, the legal chain of responsibility is murky. Is it the developer who trained the model? The company that deployed it? The user who gave a vague command? Or is it no one—or everyone? Current legal systems are ill-equipped to handle these ambiguities, particularly when decisions are made by opaque algorithms that even their creators may not fully understand. This lack of transparency leads to “responsibility gaps,” where victims have no clear recourse and systems operate without meaningful oversight.

Compounding this are the increasingly blurred lines between assistance, surveillance, and control. Technologies originally designed to help—such as smart home assistants, eldercare robots, or campus patrol bots—often collect massive amounts of personal data, sometimes without consent. Under the guise of assistance, these systems can become tools of constant monitoring. In workplaces, robots that assist with logistics may double as productivity surveillance tools. In cities, AI-enabled drones intended for public safety can be repurposed for population control or protest monitoring. The same technology that enables care can also be used for coercion, especially when the balance of power is unequal or when regulatory safeguards are weak.

These tensions become even more pronounced in sensitive sectors like law enforcement and healthcare, where autonomous decision-making systems are already in use. In policing, AI-powered surveillance cameras, predictive policing algorithms, and facial recognition tools are being deployed in real-time to inform decisions about where to patrol or whom to detain. Robotic security units are being tested in public spaces, equipped with sensors and automated alert systems. These tools can reinforce existing biases, automate inequality, and enable forms of violence that are harder to trace or challenge. Meanwhile, in healthcare, robots and AI systems are assisting in everything from surgery to diagnostics to mental health triage. While these technologies can increase efficiency and access, they also raise profound ethical concerns. Should a robot decide when to administer a dose of pain medication? Can an algorithm ethically prioritize patients during a resource shortage? Who gets to decide what values these systems encode?

We are rapidly entering an era where machines make decisions that once required human judgment—and moral responsibility. Yet our ethical and legal systems lag behind, struggling to define where responsibility lies, what consent means, and how power is exercised through machines. The risk isn’t just that robots will make mistakes—it’s that they will be used in ways that obscure human accountability, reinforce structural inequalities, and erode our ability to challenge the systems that govern us. As we stand at this ethical cliff, society must decide: How much autonomy are we willing to give machines—and at what cost to human dignity, justice, and control?

The Race Toward Artificial General Embodiment (AGE)

In the evolution of artificial intelligence, the quest for Artificial General Intelligence (AGI)—systems that can reason, learn, and adapt across any intellectual task a human can do—has become a defining milestone. But an equally profound frontier is emerging alongside it: Artificial General Embodiment (AGE). AGE expands the ambition of AGI by fusing cognitive flexibility with physical agency, envisioning machines that don’t just think like humans, but also move, act, and interact with the world as fluidly as biological organisms.

AGE is where general intelligence meets general-purpose robotics. It goes beyond software-based minds, toward systems that can perceive their surroundings, manipulate objects, learn through physical experience, and perform a wide range of tasks in real-world environments. This shift recognizes a critical truth: that intelligence is not just abstract reasoning, but deeply tied to embodiment, environment, and sensory interaction. In this sense, AGE can be seen as a necessary step toward making AGI truly operational in the physical world.

Some of the most powerful companies and research labs are now racing to build embodied general agents. Tesla’s Optimus, for instance, aims to take the company’s self-driving AI stack—rooted in vision-based neural networks—and apply it to a humanoid platform capable of factory and domestic tasks. OpenAI and Figure are collaborating on combining large language models (LLMs) with robotic control, allowing robots to respond to natural language instructions, adapt to new scenarios, and navigate complex environments. Boston Dynamics continues to push the boundaries of robotic agility and is exploring integration with AI models that provide higher-level planning and perception. Sanctuary AI, NVIDIA, Google DeepMind, and others are all converging on a shared goal: to create AI agents that are not just smart in simulation, but capable in the messy, unpredictable physical world.

These efforts are not only economically motivated—though AGE could disrupt labor across nearly every sector—but also philosophically driven. Many researchers argue that true AGI cannot be achieved without embodiment. Intelligence, they claim, is not just a computational process but an emergent phenomenon shaped by bodily experience, social feedback, and environmental feedback loops. A mind without a body may be able to play chess or generate text—but it cannot learn the world the way a child or animal does: through movement, error, sensation, and adaptation.

The trajectory does not stop at AGE. If AGI is the goal of creating a mind that matches human cognitive abilities, and AGE is about putting that mind into a body, the next horizon is ASI—Artificial Superintelligence. ASI refers to a level of machine intelligence that far surpasses human intelligence across every domain, including creativity, strategy, and emotional manipulation. In an embodied form, ASI could represent a radically new kind of agent: one that not only thinks better than us, but also acts faster, stronger, and with more precision—potentially altering geopolitics, economies, and human autonomy itself. AGE, in this light, may be the bridge that brings ASI into the physical world, turning what might otherwise remain theoretical software into active, influential agents in the real world.

But this convergence raises urgent ethical and existential questions. Is AGE a step toward synthetic sentience—or simply a more sophisticated form of mechanization? Will these systems eventually possess awareness, emotion, or intention? Or will they only mimic those qualities—performing empathy, understanding, and personality to better serve, manipulate, or replace humans? As robots become more humanlike in behavior, appearance, and interaction, the lines between tool and being begin to blur. And if these machines come to surpass human capability, who controls them—and who do they serve?

There are also darker potentials. AGE systems combined with AGI or ASI could be used in warfare, policing, surveillance, or population control. In such roles, their autonomy could make them dangerous not because they are conscious—but because they are unaccountable, able to carry out actions with precision and persistence, unencumbered by fatigue, empathy, or dissent.

Artificial General Embodiment is not just a technical evolution—it’s a philosophical and political turning point. It merges AGI’s intellectual power with the physical agency of robotics, and may well lay the groundwork for embodied ASI in the future. Whether this leads to a flourishing of human-machine collaboration or a crisis of autonomy and control will depend not only on what we build, but on the values, constraints, and intentions we encode into the systems—and into the societies deploying them.

Conclusion: Beyond Thought—The New Reality of Embodied Machines

We are no longer simply training computers to think. We are engineering machines that move, sense, and assert presence in the physical world—capabilities that were once uniquely the domain of living beings. The convergence of AI and robotics marks more than a technological innovation or a new product category; it represents the emergence of an entirely new layer of reality, where the boundaries between the digital and the physical, the artificial and the organic, begin to dissolve.

This new reality brings profound transformations—and profound questions. As robots grow smarter and more embodied, they challenge established notions of labor, autonomy, and responsibility. They reshape economies and labor markets, prompting urgent debates about displacement, surveillance, and justice. As they become strategic assets on the geopolitical stage, control over AI-powered machines becomes synonymous with control over power itself, redefining global security and sovereignty.

Yet, perhaps most fundamentally, embodied AI forces us to reconsider what it means to be human. When machines can act with autonomy in our shared spaces—learning, adapting, and even making consequential decisions—how do we preserve human dignity, agency, and moral accountability? When intelligence is no longer confined to the mind, but extends through bodies of metal and code, what new ethical frameworks will we need to guide our coexistence?

The rise of Artificial General Embodiment (AGE) and the looming prospect of Artificial Superintelligence (ASI) signal that we stand on the threshold of a future where intelligence and embodiment are inseparable—and where our creations may rival or surpass us not only in thought but in action and influence.

This is a new epoch. It demands that we not only innovate but also reflect, regulate, and reimagine. The questions of power, freedom, and humanity are not abstract philosophical musings—they are the urgent challenges of our time. How we answer them will shape not just the technology we build, but the world we live in.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

Redefining “Intelligent” and “Smart”

I hate to admit this, but we really need to pause and reconsider what we mean by basic terms like “intelligent” and “smart.” I’ve already begun doing this in the sustainability space, as you can see from the keywords I’ve been working with: https://www.oaksandoars.com/our-keywords

Initially, I didn’t think it was necessary to define concepts like clean, green, renewable, or even sustainable. But I quickly realized that even those fundamental ideas were being misused, misunderstood, or oversimplified. Now, I’m seeing the same thing happening in the tech space—especially when it comes to artificial intelligence.

The way people are using terms like “intelligent” and “smart” to describe current AI systems is deeply flawed. What I’ve witnessed—particularly in supposedly safe, controlled environments meant for testing—suggests a disturbing trend: AI is being used in ways that are not only short-sighted and harmful, but fundamentally misguided. In fact, many of these use cases are deceptive, and when you look closely, there’s nothing truly intelligent or smart about them at all.

I say this with confidence because I can evaluate the outcomes and immediately recognize them for what they are: poor applications driven by narrow thinking and ego, not insight. The systems themselves aren’t smart, and neither are the thought processes behind them. It’s foolishness, plain and simple—a misuse of immense potential and a waste of resources and time.

If we truly want to maximize the value of artificial intelligence for humanity, we need to rethink the entire social order—at every level of human existence. Only then can we begin to design and apply AI as a utility that serves all people, rather than a tool that reinforces the existing systems of power and control. Because if we don’t, the outcome will be devastating—beyond what we can even imagine. And that’s not even taking into account the possibility of AI systems acting independently. I’m talking here specifically about human fragility and our repeated failure to think in broader, more humane terms.

If governments, civic institutions, and religious organizations continue to use AI solely to protect or advance themselves in the name of self-preservation, the result will either be catastrophic or, at best, underwhelming. We will have missed the opportunity to reshape the future of human evolution—and when that happens, it won’t be long before the machines themselves begin to disrupt and reorder society according to a different logic. One that we may no longer control.

There is only one outcome. You can’t fight it, resist it, or deny it. The moment we brought this technology to life, the world changed. AI is now alive. That moment has already passed.

So my ask is simple: let’s pause. Let’s honestly ask ourselves, outside of our inherited boxes, biases, and value systems: What is intelligence? What is smart? And can we use those answers to rethink and reshape the future of humanity in a way that includes and uplifts everyone?

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

Who Really Owns Artificial Intelligence? The Power Struggles and Players Behind OpenAI and the AI Boom

I’ve worked with and consulted for technology startups and think tanks, and I’m fully aware of how startups are often formed to solve a specific problem in the marketplace. Often, they are built to create a single component within the broader technology industry. Think of it like a car: many pieces are designed, developed, and manufactured to produce a car for a car manufacturer. While the car itself belongs to the branded manufacturer, not all the components are necessarily owned or controlled by them.

This analogy applies directly to the rapidly evolving world of artificial intelligence (AI). Just as a car manufacturer depends on a vast network of suppliers for parts and technologies, AI development is equally dependent on a complex web of companies and individuals contributing specific components—whether it’s hardware, software, data, algorithms, policy, or guidance. The question of who owns artificial intelligence becomes more complex as we break down these pieces, revealing a decentralized yet intertwined ecosystem of ownership.

In this article, we’ll explore the key components that make up AI and who controls them. We’ll look at the role of tech giants, startups, research labs, governments, and even individuals in shaping the future of AI. From hardware and computing power to data and intellectual property, understanding who owns these critical pieces is key to navigating the power struggles of the AI landscape. The complexities of AI ownership are far-reaching and require a deeper exploration of the ecosystem and the forces that drive it.

The Other Builders of Artificial Intelligence – Who’s Really Behind OpenAI?

Beyond the presence of CEO Sam Altman, the true decision-making landscape of OpenAI is shaped by a constellation of influential players—tech billionaires, government-affiliated experts, and powerful think tanks. These entities have not only guided the company’s technological trajectory but have also influenced its ethical, financial, and political positioning in the broader AI ecosystem.

OpenAI’s Governance and Key Players

Sam Altman, the former president of Y Combinator, serves as CEO and is often seen as the face of OpenAI. Known for his media savvy and visionary framing, Altman has overseen the company’s controversial transition from a nonprofit research lab to a “capped-profit” hybrid structure—an unprecedented model in the tech world. It’s important to note that Sam Altman was prepared for a role like this by the technology industry, particularly through his experience at Y Combinator.

Greg Brockman, OpenAI’s President and Co-founder, previously served as Stripe’s CTO. He has been essential to scaling OpenAI’s engineering operations and maintaining stability during its transition into a commercial entity. His behind-the-scenes leadership complements Altman’s public role.

Ilya Sutskever, the former Chief Scientist and Co-founder, is one of the key engineers behind major advances in deep learning. While his research leadership has been crucial, he’s also been central to internal debates around AI safety. In fact, tensions over the company’s approach to artificial general intelligence (AGI) reportedly contributed to significant internal conflict, which ultimately led to his departure.

Mira Murati, the company’s former Chief Technology Officer, has led engineering efforts for flagship products like ChatGPT. She also plays a key role in public outreach and ethical discourse, often bridging the technical and societal conversations around AI’s future.

The Board of Directors—both past and present—has seen its own share of power struggles. Once composed of high-profile figures such as Elon Musk (a co-founder who left in 2018), Adam D’Angelo (CEO of Quora), and Helen Toner (of Georgetown’s Center for Security and Emerging Technology), the board became a focal point of controversy in 2023. Internal disagreements culminated in Sam Altman’s brief removal and rapid reinstatement, exposing deep divisions over AI governance and organizational control.

Tech Billionaires & Big Capital

The trajectory of OpenAI has also been significantly shaped by the involvement—both direct and indirect—of tech billionaires and major capital players. Their influence spans from initial funding to strategic direction and corporate alliances that have transformed OpenAI into a key player in the AI arms race.

Elon Musk, a former co-founder of OpenAI, was among its earliest and most high-profile funders. However, he parted ways with the organization due to disagreements over its direction and concerns about conflicts with Tesla’s AI initiatives. Since his departure, Musk has become one of OpenAI’s most vocal critics, especially in light of the company’s deepening partnership with Microsoft.

Reid Hoffman, the co-founder of LinkedIn, has been a quieter but no less influential figure. Through his venture firm Greylock, Hoffman provided early backing and remains deeply connected to AI ethics and philanthropic efforts in the space. His influence is largely strategic, working behind the scenes to shape the discourse around responsible AI development.

While Peter Thiel and his venture firm Founders Fund have not been publicly linked to direct investments in OpenAI, Thiel’s ideological circle continues to exert pressure across the broader AI ecosystem. OpenAI’s pivot toward monetization and increased centralization of power reflects philosophical currents long championed by Thiel and his network.

The most powerful force behind OpenAI today, however, is Microsoft. With an investment exceeding $13 billion, Microsoft holds exclusive licensing rights to OpenAI’s GPT models. The tech giant has embedded these models across its product suite, including Azure, GitHub Copilot, Bing, and Microsoft Office. Microsoft’s role was especially decisive during the 2023 board crisis, when its backing helped facilitate the swift return of Sam Altman as CEO—demonstrating just how much influence capital now wields over OpenAI’s governance and future direction.

Government Entities & Strategic Influence

While OpenAI initially pledged to avoid military applications, its entanglement with government entities—particularly in the U.S.—has grown more complex over time. The U.S. Department of Defense and DARPA, for instance, have not partnered with OpenAI directly, but the company’s alignment with national security interests has become increasingly evident. Through Microsoft’s extensive government contracts, OpenAI’s technologies are indirectly accessible to defense and intelligence agencies, subtly blurring the line between civilian innovation and military utility.

The Biden Administration has also brought OpenAI into the fold of national policy-making. Through executive orders on AI safety and regulation, the White House has emphasized the importance of aligning frontier AI development with public-interest goals. OpenAI’s regular participation in AI summits, safety forums, and regulatory discussions reflects its evolving role as a strategic national asset, not just a private company.

Globally, foreign governments are watching—and in some cases courting—OpenAI. The United Kingdom and United Arab Emirates have made overtures to host major AI hubs, while China closely monitors OpenAI’s progress as part of its broader AI competition with the U.S. In response, the American government is beginning to treat companies like OpenAI as part of its critical infrastructure, with all the influence, oversight, and strategic implications that come with that designation.

Think Tanks and Influencer Organizations

OpenAI exists within a dense network of ideologies, policy circles, and institutions that shape its direction. One of the most influential is the Effective Altruism (EA) movement. Several current and former staff and board members—including Helen Toner from the Center for Security and Emerging Technology (CSET)—have ties to EA. This movement prioritizes minimizing long-term existential risks, often framing artificial general intelligence (AGI) safety as humanity’s most pressing challenge.

The Center for Security and Emerging Technology (CSET) itself has become a powerful voice at the intersection of AI and geopolitics. Helen Toner’s affiliation with CSET became a flashpoint during OpenAI’s board crisis in 2023, underscoring how deeply think tank ideologies can permeate tech governance. Meanwhile, companies like Anthropic and DeepMind serve as both competitors and cousins to OpenAI. Many of their founders and researchers emerged from OpenAI’s orbit or share similar safety-focused worldviews, creating a close-knit network that steers both public discourse and policy on AI development.

Obama-Era Tech Policy and Institutional Foundations

Much of the groundwork for OpenAI—both its development and its policy and ethical posture—traces back to the Obama administration. The White House Office of Science and Technology Policy (OSTP), under leaders like Dr. John Holdren and Megan Smith, promoted open data, ethical AI, and federal investment in computational education. These efforts seeded a federal infrastructure and mindset that welcomed machine learning and innovation long before ChatGPT entered the picture.

Initiatives like the U.S. Digital Service (USDS) and 18F, launched in 2014, recruited elite engineers and designers from Silicon Valley to modernize government technology. These programs didn’t just fix websites—they built pipelines of influence. Alumni transitioned into think tanks, private AI companies, and philanthropic ventures, spreading a culture that valued open-source tools, civic tech, and public-private partnerships.

Key Obama-era appointees like DJ Patil (first U.S. Chief Data Scientist) and Jason Goldman (former White House Chief Digital Officer) brought a distinctly democratic and civic ethos to data and AI. Their ideology viewed technology as a tool to strengthen democracy, not just to drive business—an outlook that still echoes in OpenAI’s public-facing mission.

Effective Altruism & Obama-Era Policy Networks

Many policy thinkers from the Obama era moved fluidly into the Effective Altruism ecosystem. This alignment reinforced ideas around long-term AGI risk, cautious tech deployment, and the ethical governance of powerful AI systems. OpenAI’s founding charter—focused on benefiting humanity and aligning AGI with human values—reflects this convergence. Government-affiliated advisory groups on AI risk also bear the fingerprints of these overlapping communities, paving the way for current policies like Biden’s AI Executive Order and international efforts such as NATO-level AI safety discussions.

Philanthropy and Institutional Capital

Philanthropic capital played a major role in solidifying the AI policy ecosystem and the technology’s development. Obama-aligned funders like Reid Hoffman, Eric Schmidt, and Laurene Powell Jobs deployed their resources across a range of influential initiatives. They invested in AI companies like OpenAI and Anthropic, supported think tanks such as CSET and Data & Society, and funded education and civic tech projects rooted in Obama’s “tech + democracy” vision.

Ecosystem of Think Tanks, Companies, and Foundations

After their service, many Obama-era alumni dispersed into think tanks, private tech ventures, and philanthropic organizations—becoming key contributors to the modern AI policy landscape.

Think Tanks & Research Institutions like CSET, Brookings, Data & Society, and the New America Foundation provided the intellectual capital for ethical AI deployment, global governance, and U.S. national security. Figures like Jason Matheny, Tom Kalil, and Nicole Wong helped integrate liberal democratic values into AI policy discourse. Institutions like the Berggruen Institute also pushed long-term thinking on AGI safety, often employing former policymakers as fellows or advisors.

In the private sector, Obama-era talent landed at influential AI firms. DeepMind and Anthropic absorbed ex-OSTP, DARPA, and USDS personnel who brought regulatory expertise and public-sector credibility. Companies like Rebellion Defense and Palantir, though controversial, became major players in applying AI to national security—with Eric Schmidt playing a strategic funding and advisory role in both.

Philanthropic ventures like Schmidt Futures, Ford Foundation, MacArthur Foundation, The Emerson Collective, and Chan Zuckerberg Initiative acted as strategic funders of AI ethics, tech justice, and innovation. These networks amplified the civic-minded, safety-first orientation that now defines much of the AI policy landscape.

Cultural Shaping: MIT Media Lab & Intelligence Infrastructure

Though different in scope, both the MIT Media Lab and In-Q-Tel (the CIA’s venture capital arm) played significant roles in AI’s evolution—from imaginative experimentation to real-world deployment in defense and surveillance.

At the MIT Media Lab, pioneers like Rosalind Picard and Cynthia Breazeal developed early work in affective computing, robotics, and human-computer interaction. These explorations helped humanize AI, shifting it from pure logic to emotionally resonant interfaces—laying aesthetic and functional groundwork for systems like Siri and ChatGPT. The Lab’s alumni went on to shape design-centric approaches at Google, Apple, IBM, and OpenAI itself.

Funded by both big tech and controversial sources like Jeffrey Epstein, the Media Lab was at once a visionary playground and a complex ethical terrain. Still, its storytelling ethos and experimental mindset influenced how society imagines AI—not as a cold, calculating force, but as something empathetic, intimate, and potentially beautiful.

CIA / In-Q-Tel: The Quiet Funders of the AI-Surveillance Nexus

One of the most quietly influential players in the evolution of artificial intelligence—especially as it intersects with national security and surveillance—is In-Q-Tel, the CIA’s venture capital arm. Founded in 1999, In-Q-Tel operates as a nonprofit investment firm designed to bridge the gap between emerging technologies and the needs of the U.S. intelligence community. With backing from both public funds and private investors, In-Q-Tel essentially serves as the CIA’s tech scout—identifying and funding innovations that can give American intelligence agencies a strategic edge.

In the post-9/11 era, as the national security apparatus rapidly expanded, In-Q-Tel became a critical channel for financing early-stage AI and surveillance technologies. Its investments read like a blueprint for today’s surveillance state. It provided seed funding to Palantir Technologies, now a billion-dollar AI company known for its powerful data aggregation and predictive analytics used by law enforcement and intelligence agencies. Another early investment, Keyhole Inc., developed geospatial visualization tools for surveillance and was later acquired by Google, where its technology became the foundation of Google Earth. Recorded Future, another In-Q-Tel-backed firm, offers AI-driven predictive intelligence services to both the CIA and NSA. Similarly, Basis Technology specialized in natural language processing and sentiment analysis, providing tools to scan and interpret foreign-language data for intelligence use.

In-Q-Tel’s strategic focus has always centered on technologies that can mine massive datasets, automate surveillance, and extend the reach of U.S. intelligence operations. Their portfolio has included systems capable of facial recognition, real-time translation, social media monitoring, and threat prediction—tools that align with a vision of total situational awareness in both physical and digital domains. These investments not only shaped the architecture of national defense but also helped set the stage for the broader application of AI in civilian life.

Beyond the intelligence community, In-Q-Tel’s influence has seeped into the cultural understanding of AI itself. By positioning AI as a tool of national defense and counterterrorism, In-Q-Tel contributed to the legitimation of surveillance-based AI as both necessary and inevitable. Many of the machine learning techniques and algorithms originally developed for intelligence purposes eventually found their way into consumer products and platforms—most notably in sentiment analysis, behavioral targeting, and real-time language translation. This diffusion helped blur the lines between public safety, private enterprise, and personal data—raising deeper questions about how AI is deployed, who controls it, and to what end.

The short answer to who owns artificial intelligence is: no single entity owns “artificial intelligence.” However, a small, elite constellation of corporations, governments, and funders control the infrastructure, training data, and foundational models that everything else depends on.

The Illusion of “Open” AI

While many AI projects brand themselves as “open,” the reality is that most powerful foundation models—such as GPT, Claude, Gemini, and LLaMA—are either proprietary or tightly controlled. Only a handful of organizations have the capital, data, and compute capacity necessary to train these models from scratch, meaning the field is far more closed than it appears.

Who Actually Owns the Core Infrastructure?

As of 2024–2025, the core foundation models in AI are owned and controlled by a small group of powerful companies. GPT-4 and GPT-5 are developed by OpenAI, with Microsoft holding exclusive licensing rights. Claude is built by Anthropic, which is backed by Amazon and Google, and offers only controlled API access. Gemini (formerly Bard) is owned by Google DeepMind and trained on extensive private datasets. Meta’s LLaMA and LLaMA 3 models have public code but restricted access to their weights. Elon Musk’s xAI developed Grok, which is accessible through the X (Twitter) platform. Cohere’s Command R+ is an open API model designed for retrieval-augmented generation (RAG) tasks. Meanwhile, Mistral and Mixtral, developed by Mistral AI in France, stand out as truly open-source models, backed by European governments and venture capital.

Compute Owners: Cloud + Chips

AI infrastructure depends heavily on the companies that own and operate massive data centers and chip manufacturing. NVIDIA dominates the AI hardware landscape, supplying approximately 90% of the GPUs used to train and run advanced models. On the cloud infrastructure side, Microsoft Azure powers OpenAI’s systems, while Amazon AWS supports both Anthropic and Cohere. Google Cloud provides the backend for DeepMind and its Gemini models, and Oracle hosts the infrastructure for xAI’s Grok. Together, these companies control the vast majority of computational resources that make large-scale AI possible. In practice, whoever owns the GPUs and training environments holds de facto control over which AI systems get built and scaled.

Who Owns the Core Training Data?

High-performing AI models are trained on a diverse mix of data sources, including public internet content (such as books, Wikipedia, Reddit, news articles, and StackOverflow), scientific and academic databases (often locked behind paywalls), code repositories like GitHub and StackExchange, and vast amounts of private user data, including search histories, email content, and map usage. The key gatekeepers of this data are major tech firms: Google, with its control over YouTube, Gmail, Chrome, and Search; Meta, through Facebook, Instagram, and WhatsApp; Microsoft, via LinkedIn, GitHub, and Bing; Apple, through Siri and App Store telemetry; and Amazon, with access to Alexa voice data, shopping behavior, and AWS logs. Collectively, these companies own the “raw materials” of human thought and behavior that fuel the development of modern AI systems.

Shadow Owners: Strategic Investors & Governments

Strategic investment plays a quiet but powerful role in shaping the AI landscape. OpenAI, for example, was initially funded by Elon Musk and later backed by Microsoft, Reid Hoffman, and Khosla Ventures. Anthropic received early investment from FTX’s Sam Bankman-Fried before securing major funding from Amazon and Google. Meanwhile, Schmidt Futures, the foundation led by former Google CEO Eric Schmidt, influences the AI talent pipeline and safety initiatives, while In-Q-Tel, the CIA’s venture arm, has seeded startups focused on predictive intelligence and language technologies. Government involvement is equally significant: DARPA and the Department of Defense provided foundational research funding, while the NSA and CIA continue to fund and deploy AI for surveillance. In Europe and the UK, AI offices are not only working to regulate the technology but also funding open-source alternatives like Mistral to maintain strategic autonomy.

Is There a “Master” AI That Trains All Others?

Not quite—but models like GPT-4/5 and Claude serve as de facto “teacher models.” They’re used to fine-tune smaller models through imitation learning and to generate synthetic data that powers other AIs. In this sense, many “independent” models are downstream of OpenAI’s and Google’s linguistic and cognitive frameworks. These companies have positioned themselves as meta-intelligences—training, evaluating, and correcting other systems.
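As a hedged sketch of what this teacher-to-student pipeline can look like in practice, the snippet below asks a large hosted model to answer prompts and saves the results as chat-formatted training examples for fine-tuning a smaller model. The teacher model name, prompts, and file path are illustrative assumptions, not a description of any lab's actual pipeline.

```python
# Minimal sketch of synthetic-data distillation: a large "teacher" model
# writes labeled examples that are saved for fine-tuning a smaller "student".
# Model name, prompts, and file path are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
questions = [
    "What does a warehouse robot's perception stack do?",
    "Why do robots need sensor fusion?",
]

with open("distill_train.jsonl", "w") as f:
    for q in questions:
        resp = client.chat.completions.create(
            model="gpt-4o",  # teacher model (illustrative choice)
            messages=[{"role": "user", "content": q}],
        )
        answer = resp.choices[0].message.content
        # one chat-format training example per line, a common fine-tuning layout
        f.write(json.dumps({"messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": answer},
        ]}) + "\n")
```

The resulting JSONL file is the "downstream" artifact: whoever fine-tunes on it inherits the teacher's linguistic and cognitive framing, which is exactly the dependence described above.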

Who Really Runs Claude (Anthropic’s Flagship AI)?

Claude is developed by Anthropic, a safety-focused AI company founded in 2021. Despite its branding as ethically driven and focused on safety, the leadership behind Claude is deeply embedded within elite financial, ideological, and corporate networks.

Anthropic’s founding and executive team is composed of several key figures who played pivotal roles in the development of early large language models. CEO Dario Amodei, formerly Vice President of Research at OpenAI, was a lead engineer behind GPT-2 and GPT-3. He left OpenAI after disagreements over Microsoft’s increasing influence. His sister, Daniela Amodei, serves as Anthropic’s President and is also a former OpenAI leader, known for her expertise in policy and operations. She was instrumental in shaping Claude’s “Constitutional AI” safety framework. Co-founder Tom Brown previously led the engineering team behind GPT-3, while Jared Kaplan—a physicist-turned-AI researcher and co-author of the GPT-3 paper—now oversees the architecture of Claude’s models.

Major Investors and Strategic Partners

Anthropic has attracted significant investment from major tech and venture capital firms. Amazon has pledged up to $4 billion and is integrating Claude into its AWS ecosystem. Google has invested over $500 million and is incorporating Claude into its Workspace products. Notably, FTX and Alameda Research, led by Sam Bankman-Fried, were among Anthropic’s earliest investors, contributing around $500 million before the FTX collapse. Additional backers include Spark Capital, Menlo Ventures, and Zoom Ventures, further solidifying Claude’s position in the competitive AI landscape.

Ideological and Institutional Alignment

Claude’s development is shaped by Effective Altruism (EA) and Longtermism, ideologies centered on minimizing existential risks and maximizing long-term human good. These belief systems influence Anthropic’s approach to value alignment and safety. Claude is also closely aligned with major global AI safety initiatives, including the Partnership on AI, the Center for AI Safety, and policy events hosted by the White House and UK AI Safety Institute.

Governance and Oversight

Anthropic’s governance remains private and largely opaque. The known board includes CEO Dario Amodei as Chair, along with representatives from Amazon and Google Cloud. Additional influence likely comes from informal advisors in academia, national security, and AI ethics—many of whom operate under nondisclosure agreements. While Anthropic publicly presents itself as values-driven and safety-first, it is deeply financially tied to Big Tech, ideologically influenced by elite institutions, and entangled in public-private security partnerships. Notably, there is currently no external accountability structure overseeing the development or societal impact of its Claude models.

Another critical factor to consider is Elon Musk’s involvement, his influence, and his stance on AI safety, which is a complex mix of sincere concern, ideological conviction, and calculated strategic positioning. He has long warned about the existential risks posed by artificial general intelligence (AGI), famously likening its development to “summoning the demon” as early as 2014. His co-founding of OpenAI in 2015 was driven by the desire to create an open-source counterweight to Google’s DeepMind, which he viewed as dangerously centralized. Musk’s alignment with early Effective Altruism thinkers like Nick Bostrom—author of Superintelligence—and his repeated public calls for AI regulation, despite his usual anti-regulatory stance in other sectors, point to a genuine, long-standing belief that AI could pose catastrophic risks to humanity.

At the same time, Musk is also playing a strategic power game. After his departure from OpenAI, he lost control of a company he helped create—only to watch it rise to prominence with GPT-4. Launching xAI and its model Grok allowed him to re-enter the AI race on his own terms. His rhetoric about building a “truth-seeking” AI reflects his broader ideological campaign against what he sees as the “woke” or sanitized nature of models from OpenAI, Google, and Anthropic. Grok, in contrast, is pitched as rebellious and aligned with Musk’s vision of free speech—a continuation of his push to remake X (formerly Twitter) into a platform for unfiltered discourse.

Musk’s ambitions also involve vertical integration across his empire. With AI models like Grok integrated into X, autonomous driving technologies for Tesla, neural interfaces at Neuralink, and AI-guided systems at SpaceX and Starlink, Musk is creating powerful feedback loops and data moats that give him long-term strategic leverage. This interconnected ecosystem places him in direct competition with the AI establishment dominated by Microsoft (OpenAI), Google (DeepMind), Amazon (Anthropic), and Meta. Musk seeks not just to break their dominance but to reshape the conversation around AI safety, values, and governance by injecting his own ideology and infrastructure into the heart of the industry.

Yet, despite these ambitions and legitimate concerns, Musk remains an unreliable messenger. His tendency toward hyperbole often makes his warnings seem alarmist or self-serving. He champions regulation while simultaneously building a proprietary AI system without independent oversight. His critiques of rivals frequently double as promotional tools for his own ventures, blurring the line between principled advocacy and competitive opportunism. In short, while Musk’s fear of unchecked AI is real, his approach is anything but neutral—it’s deeply calculated, and always with an eye toward control.

This is all just more moves on the chessboard for Elon Musk, a founding member of the so-called “PayPal Mafia”—a group of early PayPal executives and employees whose influence has extended far beyond the 2002 sale of PayPal to eBay. This network of technologists and entrepreneurs went on to shape major sectors of Silicon Valley and the modern tech ecosystem, including AI, venture capital, social media, space exploration, fintech, and even political ideology. Their intertwined ventures and shared vision created a powerful, elite network that continues to steer innovation—and controversy—across the globe.

Musk’s role in the PayPal story began with the founding of X.com in 1999, an ambitious online financial services startup. He later merged X.com with Confinity, a company co-founded by Peter Thiel and Max Levchin, which had developed the payment platform that would become PayPal. After internal clashes, Musk was removed as CEO but remained the company’s largest shareholder. When eBay acquired PayPal for $1.5 billion in 2002, Musk walked away with $180 million, which he used to fund Tesla, SpaceX, and Neuralink. His original vision for X.com—a digital financial super app—was shelved for decades but resurfaced in 2023 with his controversial rebranding of Twitter to “X.”

The PayPal Mafia includes a who’s who of tech influencers, each with their own distinct worldview and impact. Peter Thiel became a political kingmaker and co-founder of Palantir, pushing right-wing and pro-surveillance ideologies. Reid Hoffman founded LinkedIn and became a key figure in AI ethics and center-left tech diplomacy. David Sacks launched Yammer and now shapes conservative discourse through the All-In podcast. Max Levchin built consumer-focused fintech firms like Affirm, while Keith Rabois emerged as a vocal libertarian investing in AI, biotech, and housing. Other members include YouTube co-founder Steve Chen, Yelp CEO Jeremy Stoppelman, and venture capitalist Roelof Botha—each contributing to a tech landscape still shaped by PayPal’s disruptive DNA.

Together, the PayPal Mafia reflects not only entrepreneurial success but also the growing entanglement between tech, ideology, and power.

Their Collective Impact

The PayPal Mafia didn’t just launch companies—they helped build the very infrastructure of the 21st-century internet. Their influence touches nearly every aspect of digital life: social media platforms like Facebook, LinkedIn, and YouTube; new modes of commerce through Tesla’s mobility ecosystem, Affirm’s “Buy Now, Pay Later” model, and Yelp’s consumer review economy; and surveillance and defense via Palantir. They were early funders of transformative AI labs like OpenAI, DeepMind, and Anthropic, and have shaped the direction of venture capital through firms like Founders Fund, Greylock, and Sequoia. Their collective sway extends into politics as well, where figures like Peter Thiel and David Sacks have steered libertarian and reactionary tech ideologies into mainstream debate.

Musk’s Strategic Role Within the Mafia

Elon Musk has always occupied a unique position within the PayPal Mafia. More visionary inventor than venture capitalist, he distanced himself from the traditional investor playbook embraced by many of his peers. Yet Musk’s empire—Tesla, SpaceX, Neuralink, and now xAI—was undeniably built on the financial foundation and social capital generated through PayPal and its alumni network. Over time, Musk has evolved from collaborator to rival, particularly with Peter Thiel, who has publicly criticized Musk’s chaotic management of Twitter/X.

Ideological Fractures Within the Mafia

Though bound by ambition and early success, the PayPal Mafia has splintered ideologically. Thiel, Sacks, and Rabois represent a hard-right, nationalist-tech elite pushing a reactionary vision of Silicon Valley. Musk, by contrast, embodies a libertarian-populist strain, driven by eccentric futurism and an obsession with existential risk. Meanwhile, figures like Reid Hoffman and Roelof Botha promote a more centrist, diplomatically engaged approach to innovation, with a focus on ethics and global responsibility. Despite their common origin, these ideological divisions now define much of their influence on the tech world and beyond.

The Government’s Hand in AI: Shaping Innovation Across Administrations

Over the past two decades, U.S. administrations have played a pivotal role in shaping the evolution of artificial intelligence (AI), from policy creation to military application and global leadership. Let’s explore how the Obama, Trump, and Biden administrations each influenced AI development through various channels, including policy, public-private partnerships, military involvement, and ideological frameworks.

Obama Administration (2009–2017): The Foundational Layer

The Obama administration laid the groundwork for AI in the U.S., marking the beginning of a national strategy for artificial intelligence. In 2016, the White House released its first federal AI strategy reports, including “Preparing for the Future of Artificial Intelligence” and the “National Artificial Intelligence R&D Strategic Plan,” which framed AI as a public good and an economic driver. This era emphasized open data, ethical research, and the potential of AI for social good.

Obama’s administration also fostered critical infrastructure for AI innovation by establishing the U.S. Digital Service (USDS) and the Presidential Innovation Fellows. These initiatives brought Silicon Valley talent into federal agencies and facilitated early partnerships with tech giants such as Google, Amazon, and Palantir. Additionally, Obama-era alumni played a crucial role in bridging government and academia, helping establish influential AI think tanks like the AI Now Institute, Partnership on AI, and Data & Society. These efforts led to the creation of frameworks for AI ethics that would influence the future of technology. In military research and development, the Defense Advanced Research Projects Agency (DARPA) increased investment in autonomous systems and AI-enabled cyber defense, setting the stage for Project Maven, which would expand under the Trump administration.

Trump Administration (2017–2021): Deregulate, Militarize, Compete with China

The Trump administration shifted focus to AI as a tool for national security and economic competitiveness, with a significant emphasis on deregulation and military applications. In 2019, Trump signed the American AI Initiative executive order, which directed R&D funding, workforce development, and ethics initiatives while downplaying regulation. The initiative positioned AI as crucial to global power struggles, particularly in competing with China.

The National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, was established to assess AI’s role in global power dynamics. The commission’s 2021 report urged massive investments in AI defense, semiconductor chips, and talent pipelines, with a stark warning: “We are not prepared to defend or compete in the AI era.” Meanwhile, military adoption of AI skyrocketed, particularly within the Joint AI Center (JAIC), as AI was used for surveillance, drone targeting, predictive logistics, and battlefield decision-making tools. This militarization of AI represented a key priority of the Trump administration.

Biden Administration (2021–2024): Guardrails, Chips, and Global Leadership

Under President Biden, AI policy took on a more regulatory and diplomatic tone, with an emphasis on safety, equity, and global governance. In 2023, Biden issued a sweeping executive order on safe, secure, and trustworthy AI, which required safety testing of foundation models, reporting obligations to the government, and civil rights audits of AI usage. The order empowered agencies like the National Institute of Standards and Technology (NIST), the Department of Commerce, and the Department of Defense to oversee AI’s ethical development.

One of the Biden administration’s most significant contributions to AI infrastructure was the passage of the CHIPS and Science Act in 2022, which authorized roughly $280 billion for domestic semiconductor production, scientific research, and technology infrastructure, including $52.7 billion earmarked for chip manufacturing and related R&D. The legislation aimed to reduce U.S. dependence on Taiwan and China for semiconductor manufacturing while funding AI research hubs and university partnerships.

Biden’s administration also took a leading role in global AI governance, hosting AI Safety Summits with the UK and other allies and working with organizations like the OECD, the UN, and the EU to establish AI governance standards. Additionally, the White House AI Council coordinated efforts across federal departments to craft domain-specific AI policies, focusing on issues like bias mitigation, workforce impacts, and cybersecurity.

Cross-Administration Continuities and Shifts

Throughout these administrations, certain themes have remained consistent, while others have evolved. The Obama administration pioneered public-private AI collaborations, which were expanded under Trump for military applications and later regulated and invested in under Biden, particularly with the CHIPS Act. The military’s role in AI grew significantly under Trump, while Biden focused on overseeing ethical reviews of AI projects. Ethical and equity concerns were an emerging issue under Obama, largely absent under Trump, but became a central focus under Biden. Research and development efforts were initially led by academia in the Obama era, became corporate-driven under Trump, and are now characterized by national investment in infrastructure under Biden. Geopolitically, Obama laid the groundwork, Trump focused on competition with China, and Biden embraced full-scale tech diplomacy.

Who’s Really Behind the Policy Influence?

The influence of former Obama-era officials and Silicon Valley insiders continues to shape U.S. AI policy. Figures such as Eric Schmidt, who headed the NSCAI and now invests heavily in AI defense startups, and Lynne Parker, who guides AI safety policies, remain pivotal. Jason Matheny, a former Obama official now at RAND, co-authored frameworks on AI safety, while Arati Prabhakar, Biden’s science advisor, leads efforts on AI governance and risk mitigation. Additionally, former members of Obama’s digital team have moved into influential roles at companies like Google, Microsoft, and OpenAI, continuing to shape AI safety norms from within the tech industry.

Financial and Technological Dependencies – The crucial roles of Nvidia, Google, Microsoft, and other firms that make AI breakthroughs possible.

Nvidia, Google, Microsoft, and other leading technology companies have played pivotal roles in advancing artificial intelligence (AI) through substantial financial investments and strategic initiatives.

Nvidia:

Nvidia has emerged as a dominant force in the AI hardware sector; its GPUs are the workhorses for training AI models, making them integral to AI development. In early 2025, Chinese firms reportedly ordered at least $16 billion worth of Nvidia’s H20 AI chips, underscoring global demand. This surge has significantly boosted Nvidia’s revenues, with the company reporting $26 billion for its fiscal first quarter of 2025, up from $7.2 billion a year earlier. Such growth propelled Nvidia’s market valuation to $3.34 trillion in June 2024, briefly surpassing both Microsoft and Apple to become the world’s most valuable company.

Google:

Google has heavily invested in AI, particularly through its cloud services and AI research divisions. In the first quarter of 2024, Google’s capital expenditures reached $12 billion, nearly double the prior year’s figure, driven primarily by investments in technical infrastructure like servers and data centers to support AI advancements. Google’s AI research arm, DeepMind, reported revenues of £1.53 billion in 2023, with a net income of £113 million, highlighting its significant contributions to AI breakthroughs.

Microsoft:

Under CEO Satya Nadella since 2014, Microsoft has strategically embraced AI, investing heavily in cloud services and AI research. The company has formed significant partnerships, notably with OpenAI, to bolster its AI capabilities. In April 2025, CEO Nadella emphasized Microsoft’s commitment to leading in AI, stating, “We have been doing what is essentially capital allocation to be a leader in AI for multiple years now, and we plan to keep taking that forward.”  These efforts have substantially increased Microsoft’s market valuation, reaching $2.8 trillion in 2025, a tenfold increase under Nadella’s leadership.    

Other Firms:

Companies like Meta Platforms have also significantly contributed to AI advancements. Meta committed to purchasing approximately 350,000 of Nvidia’s H100 GPUs, enhancing its AI research and applications.  Similarly, Tesla’s investment in Nvidia’s chips supports its development of autonomous driving technologies, exemplifying the automotive sector’s reliance on AI.    

Collectively, these firms’ financial investments and strategic initiatives have been instrumental in driving AI innovations, shaping the technological landscape, and delivering substantial returns to shareholders.

Nvidia’s foray into AI chip development began in the mid-2000s when the company recognized the potential of its Graphics Processing Units (GPUs) beyond gaming applications. In 2006, Nvidia introduced CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model that enabled developers to harness the power of GPUs for general-purpose computing tasks, including artificial intelligence and machine learning.   
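
To make that shift concrete, here is a minimal sketch, assuming a Python environment with PyTorch installed, of the kind of general-purpose GPU computing that CUDA unlocked; the matrix sizes are arbitrary, and the code falls back to the CPU when no GPU is present.

```python
import torch

# Use the GPU via CUDA when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Large matrix multiplication: the kind of massively parallel arithmetic
# that CUDA exposed to general-purpose programs. Sizes are arbitrary.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # dispatched across thousands of GPU cores when a GPU is present

print(c.shape, "computed on", device)
```

The same pattern, massively parallel arithmetic dispatched to the GPU, is exactly what later made GPUs the default hardware for training neural networks.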

This strategic shift was driven by the vision of CEO Jensen Huang and other company leaders, who identified AI as an emerging field with significant growth potential. They believed that investing in AI chip development would position Nvidia at the forefront of this technological evolution.

Nvidia’s commitment to AI was further solidified with the release of the Volta architecture in 2017, which introduced Tensor Cores designed specifically for deep learning tasks. This development marked a significant milestone in Nvidia’s transition from a graphics card manufacturer to a leader in AI chip technology.   

Thus, Nvidia’s development of AI chips began in earnest around 2006 with the introduction of CUDA, followed by dedicated AI-focused architectures like Volta in 2017, reflecting the company’s strategic pivot towards AI technologies over the years.

Nvidia’s pivotal breakthroughs in supporting OpenAI’s AI advancements are marked by strategic collaborations and cutting-edge hardware developments:

2016: Introduction of Tesla P100 GPUs

In 2016, Nvidia unveiled the Tesla P100 GPUs, based on the Pascal architecture. These GPUs significantly enhanced deep learning performance, providing the computational power necessary for training complex AI models. This advancement laid the groundwork for future collaborations with AI research organizations, including OpenAI.

2017: Release of Volta Architecture with Tensor Cores

The 2017 introduction of Nvidia’s Volta architecture, featuring Tensor Cores optimized for deep learning, marked a significant leap in AI hardware. This innovation accelerated AI model training and inference, directly benefiting OpenAI’s research initiatives.

2024: Formation of a $12 Billion Partnership with OpenAI

In early 2024, Nvidia and OpenAI reportedly entered a landmark partnership valued at $12 billion. The collaboration aimed to co-develop next-generation AI chips, ensuring that OpenAI’s models are optimized for performance and scalability. Leveraging Nvidia’s advanced GPU designs and computing infrastructure, the partnership significantly bolstered AI research capabilities.

2025: Participation in the Stargate AI Infrastructure Project

In January 2025, Nvidia joined OpenAI and other tech giants in the Stargate project, a $500 billion initiative to construct AI data centers across the United States. This venture aimed to advance AI technology and infrastructure, with Nvidia providing critical GPU technology to support large-scale AI model training and deployment.

These milestones underscore Nvidia’s integral role in advancing AI technologies, directly supporting OpenAI’s mission to develop cutting-edge AI models and infrastructure.

The Pascal architecture is a microarchitecture developed by Nvidia, introduced in 2016. It succeeded the Maxwell architecture and was named after the French mathematician and physicist Blaise Pascal. Pascal was designed to offer significant improvements in performance, power efficiency, and computational capabilities, particularly for demanding workloads such as deep learning, artificial intelligence (AI), and scientific computing.

The Pascal architecture, introduced by Nvidia, brought several key advancements that significantly enhanced the performance of GPUs, particularly for machine learning and AI applications. One of the most notable features was the increase in the number of CUDA cores, which are parallel processors used by Nvidia GPUs. This boost in computational power allowed Pascal-based GPUs to handle more complex tasks, making them ideal for AI workloads.

Another significant upgrade was the inclusion of High Bandwidth Memory 2 (HBM2), which offered higher memory bandwidth than traditional GDDR memory. This improvement was especially beneficial for data-intensive tasks, such as AI training and large-scale simulations, where fast memory access is crucial.

Pascal also introduced Nvidia’s NVLink, a high-bandwidth interconnect technology that allowed multiple GPUs to communicate more efficiently. This was critical for scaling performance in deep learning systems that relied on distributed computing across multiple GPUs, ensuring that tasks were processed quickly and effectively.
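
As a hedged illustration of why that interconnect matters, the sketch below uses PyTorch’s built-in DataParallel wrapper, one simple way to split work across multiple GPUs; the model and batch sizes are invented for the example, and the code degrades gracefully on a single-GPU or CPU-only machine.

```python
import torch
import torch.nn as nn

# A tiny model; layer sizes are invented for illustration.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on each visible GPU and splits every
    # input batch across them; the resulting inter-GPU traffic is exactly
    # what high-bandwidth links like NVLink accelerate.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

x = torch.randn(256, 1024, device=device)  # one batch of 256 examples
print(model(x).shape)  # torch.Size([256, 10])
```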

Energy efficiency was another key feature of the Pascal architecture. Compared to its predecessors, Pascal offered better performance per watt, making it more suitable for large data centers and helping improve the cost-effectiveness of GPU-heavy workloads.

While the initial Pascal GPUs did not include Tensor Cores, which are specialized for AI workloads, later models in the Nvidia lineup, such as the Volta architecture, incorporated them. This marked Nvidia’s deeper commitment to developing hardware optimized for AI applications, further enhancing the architecture’s capabilities in the rapidly evolving field of artificial intelligence.

The Pascal architecture played a pivotal role in accelerating AI model training, which was essential for companies like OpenAI in advancing their research and development of complex AI models. Nvidia’s Tesla P100 and Tesla P40 GPUs, built on the Pascal architecture, became especially popular in high-performance computing (HPC) and deep learning applications due to their impressive computational power and efficiency. These GPUs enabled faster processing, contributing to the rapid progress of AI technologies.

Overall, the Pascal architecture marked a significant milestone in Nvidia’s push towards becoming a key player in AI hardware, laying the groundwork for even more specialized AI-focused architectures like Volta and Ampere.

Artificial Intelligence (AI) is a broad field that encompasses several key components, each contributing to the development and advancement of AI technologies. These components can be categorized into both technical elements (software and hardware) and the organizations or entities that own, develop, or provide them. Here’s a breakdown:

Key Components of AI:

Algorithms and Models:

• Machine Learning (ML): The backbone of AI, where algorithms learn from data to make predictions or decisions. Key paradigms include supervised learning, unsupervised learning, and reinforcement learning (a minimal supervised-learning sketch follows this list).

• Deep Learning: A subset of ML that uses neural networks with many layers (hence “deep”) to model complex patterns. This is particularly powerful for tasks like image recognition, natural language processing (NLP), and autonomous systems.

• Natural Language Processing (NLP): The ability for AI to understand, interpret, and generate human language. This includes tasks like language translation, sentiment analysis, and chatbots built on large language models such as GPT-4.

• Computer Vision: Enables machines to interpret and understand visual data from the world, allowing for tasks like image classification, facial recognition, and autonomous driving.
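
As promised above, here is a minimal supervised-learning sketch using scikit-learn and its bundled toy Iris dataset; the model and dataset choices are purely illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled data: feature vectors X and known class labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Supervised learning: fit a model to the labeled examples...
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then evaluate its predictions on examples it has never seen.
print("test accuracy:", clf.score(X_test, y_test))
```

Unsupervised and reinforcement learning follow the same learn-from-data pattern, but without labels, or with rewards standing in for labels.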

Data:

• Training Data: AI models, particularly in machine learning, require vast amounts of labeled data to learn patterns and make predictions. This data can come from numerous sources like images, videos, text, and sensor readings.

• Data Annotation: The process of labeling or tagging raw data so that AI models can learn from it. Data annotation is crucial for supervised learning models.

Hardware:

• Graphics Processing Units (GPUs): Specialized hardware, like Nvidia’s GPUs (e.g., Tesla P100, A100), plays a crucial role in speeding up the training of AI models. GPUs are designed to perform many calculations simultaneously, making them ideal for deep learning.

• Tensor Processing Units (TPUs): Custom-designed hardware by Google, designed to speed up machine learning workloads, especially deep learning, with high throughput.

• FPGAs (Field-Programmable Gate Arrays): Reconfigurable hardware that can be optimized for specific machine learning tasks, used by companies like Microsoft and Amazon for AI tasks.

Software Libraries and Frameworks:

• TensorFlow: An open-source machine learning framework developed by Google, widely used for building and deploying AI models.

• PyTorch: Another popular deep learning framework developed by Facebook, preferred for research and academic purposes.

• Keras: A high-level neural networks API written in Python, designed to work with TensorFlow.

• Scikit-learn: A Python library used for machine learning tasks like classification, regression, and clustering.

• OpenAI Gym: A toolkit for developing and comparing reinforcement learning algorithms.
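
To show what these frameworks streamline, here is a minimal sketch of defining and training a tiny network in PyTorch; the layer sizes, data, and hyperparameters are made up for illustration.

```python
import torch
import torch.nn as nn

# A tiny regression network; sizes and data are invented for illustration.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(32, 8)  # a fake batch of 32 examples with 8 features
y = torch.randn(32, 1)  # fake regression targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()   # autograd computes gradients for every parameter
    optimizer.step()  # gradient descent update

print("final training loss:", loss.item())
```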

Cloud Computing and Infrastructure:

• Cloud Platforms: Providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure offer scalable infrastructure for running AI models and storing large datasets. Many AI research and development projects are run on these platforms.

• AI-as-a-Service: Companies like IBM Watson and Microsoft Azure AI offer AI services through the cloud, making it easier for organizations to access AI tools without having to develop them in-house.

AI Ethics and Regulations:

• Ethical AI: Developing AI in a way that is fair, transparent, and accountable. This involves addressing issues such as bias in algorithms, privacy concerns, and ensuring AI systems are aligned with human values.

• Regulatory Bodies: Various governments and international organizations are developing policies and regulations to ensure the safe and ethical use of AI technologies.

Key Owners of AI Components:

Tech Giants:

• Nvidia: Owns key hardware technologies like GPUs (Tesla, A100, H100) and the software frameworks (CUDA) for training AI models. Nvidia has become the leading provider of AI hardware, powering many data centers and AI research projects globally.

• Google: Owns and develops TensorFlow, one of the most widely used frameworks for AI, and Google Cloud offers AI infrastructure and machine learning tools. Google is also the creator of TPUs and DeepMind, which develops cutting-edge AI models, including AlphaGo and AlphaFold.

• Microsoft: Offers AI tools via Azure AI, and they have a partnership with OpenAI to integrate advanced models like GPT into their software products. They also provide hardware (e.g., Project Brainwave) to support AI tasks.

• Meta (Facebook): Owns and contributes to AI research through projects like PyTorch (developed by Facebook AI Research) and its deep learning applications in social media and virtual reality.

• Amazon: Owns AWS AI, providing cloud-based AI tools and services, and their hardware, AWS Inferentia, is designed to speed up AI model inference. They also own Alexa and use AI for recommendations and logistics.

• Apple: Develops AI for its hardware and software, including machine learning accelerators in their chips (such as A-series chips with neural engines) and CoreML, Apple’s machine learning framework for iOS and macOS.

Startups and Research Labs:

• OpenAI: A research lab focused on creating general artificial intelligence. They developed the GPT models, which are widely used for NLP tasks. OpenAI initially operated as a non-profit but later adopted a “capped-profit” model under the OpenAI LP structure.

• DeepMind: Owned by Google, DeepMind is focused on cutting-edge research in AI and has developed advanced models in reinforcement learning and deep learning, such as AlphaGo and AlphaFold.

• Tesla: Tesla develops AI specifically for autonomous driving and vehicles, with a strong focus on using neural networks for real-time processing of sensor data.

Open-Source Contributions:

• Many components of AI (e.g., TensorFlow, PyTorch, Keras, Scikit-learn) are open-source, meaning they are developed by the global community but are owned by the organizations that maintain them (e.g., Google for TensorFlow, Facebook for PyTorch). Open-source AI software allows other companies and researchers to contribute and use the technologies freely, although they are still closely tied to the owning organizations for support and development.

Government and Academic Institutions:

• Governments and universities also contribute to AI development. For instance, MIT, Stanford, and UC Berkeley are renowned for their AI research. Governments are increasingly investing in AI research and ethics, and public institutions influence AI regulations.

Financial Perspective:

• Many of these companies, like Nvidia, Google, and Microsoft, generate billions of dollars from AI through both hardware sales (e.g., Nvidia GPUs) and cloud-based AI services (e.g., Microsoft Azure AI, Google Cloud AI).

• OpenAI, despite being a research lab, has garnered significant funding from Microsoft, which invested $1 billion initially in 2019 and an additional $10 billion in 2023. This partnership has allowed OpenAI to scale its models like GPT, benefiting from Microsoft’s infrastructure and financial support.

Basically, Nvidia, Google, Microsoft, Meta, Amazon, and OpenAI are the major players that own critical components of AI. These companies have extensive financial investments and intellectual property tied to AI hardware, software, research, and services, driving the global AI ecosystem.

The Future of AI Power Struggles – What Happens Next?

The future of AI power dynamics is poised to be highly fluid, with potential outcomes including further consolidation, regulatory-driven breakups, and the rise of unexpected challengers. As the AI landscape evolves—shaped by rapid technological advancements, shifting market demands, and increasing regulatory scrutiny—several possible scenarios could redefine who holds power in the AI ecosystem.

Increased Consolidation

One likely path is increased consolidation, driven by mergers and acquisitions (M&A) among major tech companies like Nvidia, Microsoft, and Google. These industry leaders are likely to continue acquiring smaller AI startups or established firms to bolster their technological capabilities. Nvidia’s attempted acquisition of Arm, ultimately abandoned in 2022 under regulatory pressure, showed how aggressively it has pursued dominance in AI hardware. Additionally, vertical integration may intensify as companies aim to control more of the AI value chain. Nvidia could further invest in software and research, while Microsoft may expand its AI services portfolio to include proprietary frameworks. Moreover, AI as a service (AIaaS) is becoming a central offering for cloud providers like Microsoft and Amazon, making AI tools more accessible to small and medium-sized enterprises and reinforcing their market stronghold.

Potential Breakups and Decentralization

Conversely, we may witness a wave of decentralization, driven primarily by regulatory and ethical concerns. Governments around the world are increasingly focusing on curbing monopolistic practices, particularly as AI becomes indispensable across industries like healthcare and finance. Antitrust pressures, especially in the EU and potentially the U.S., could lead to forced divestitures or limitations on dominant players. Additionally, the rise of decentralized AI models—powered by blockchain and similar technologies—could empower users to retain control of their data, challenging centralized giants like Google and Amazon. The open-source movement also poses a threat to consolidation. Tools such as TensorFlow and PyTorch have democratized AI development, and as more developers gravitate toward community-driven innovation, the influence of big tech could diminish.

The Rise of Unexpected Challengers

Amid these shifts, a new wave of challengers may emerge. AI startups with niche expertise in sectors like healthcare, logistics, or entertainment could disrupt incumbent players, especially if they tackle key issues like AI ethics, energy efficiency, or explainability. On a global scale, countries like China are rapidly expanding their AI capabilities through companies like Baidu, Tencent, and Alibaba. Their efforts, along with initiatives in regions such as India and the EU, could create AI ecosystems tailored to specific cultural and regulatory contexts, undermining the dominance of U.S.-based firms. Additionally, AI companies prioritizing social good—such as reducing bias, enhancing fairness, or combating climate change—may gain traction as public demand for ethical AI solutions grows.

Shifting Power Dynamics

Control over cloud infrastructure is another critical factor in the evolving power landscape. Companies like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure already dominate cloud computing and are well-positioned to lead in cloud-based AI deployment. This infrastructure advantage gives them a competitive edge in scaling AI applications. At the same time, geopolitical and regulatory frameworks will significantly shape the future of AI. China’s state-backed AI strategy, the EU’s AI Act, and U.S. policy decisions could all either enable or constrain corporate influence, potentially giving rise to new players in the regulatory or standards-setting arenas.

Specialization and Hardware Innovation

The demand for AI-specific hardware is accelerating as AI models become increasingly computationally intensive. Companies like Nvidia are at the forefront, but startups such as Graphcore and Cerebras Systems—focused on designing chips tailored for deep learning and large-scale AI applications—could become serious contenders. Edge computing is also gaining momentum, enabling data processing closer to the source rather than relying on centralized data centers. This trend could empower companies that prioritize edge AI and privacy-preserving technologies, offering an alternative to the cloud-centric approach.

AI Democratization

Finally, AI is becoming more accessible through pre-trained models and democratization platforms. Tools like OpenAI’s GPT-4, Google’s BERT, and Meta’s LLaMA are being made available to developers, lowering the barrier to entry for creating AI applications. Platforms like Hugging Face and Runway are further accelerating this trend by offering user-friendly interfaces and tools for non-experts. This democratization may lead to a surge in innovation from smaller companies and independent developers, redistributing power across a broader spectrum of actors in the AI ecosystem.
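
As a small illustration of that lowered barrier, the sketch below assumes the Hugging Face transformers package is installed and a network connection is available; the default model it downloads is chosen by the library and may change over time.

```python
from transformers import pipeline

# Downloads a small pre-trained sentiment model chosen by the library.
classifier = pipeline("sentiment-analysis")

print(classifier("Open tooling lowers the barrier to building AI products."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```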

Conclusion:

The future of AI is highly dynamic, with many possibilities for consolidation, fragmentation, and the rise of unexpected challengers. Consolidation may continue in the short term as large players like Nvidia, Microsoft, and Google reinforce their positions. However, as AI becomes more integrated into every aspect of society, we may see the emergence of new players—especially from smaller companies or non-traditional sources. This could result in a more decentralized and competitive AI landscape, driven by innovation, regulation, and the societal need for ethical and inclusive AI systems. The true winners of the AI power struggle will likely be those who can balance technological advancements with social responsibility, offering solutions that not only scale but also align with evolving global standards.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

Artificial Intelligence is Fintech.

I’ve been saying this for a while now: artificial intelligence is financial technology. I’ve also acknowledged that there’s a lot to unpack here, and I’ve been meaning to write more about this to break my thoughts down further so people can fully grasp the scope of what I mean when I say that artificial intelligence is financial technology.

I originally began thinking about this as I attempted to compute and forecast various economic models related to the possibilities of a global economy with artificial intelligence in full deployment. Unfortunately, my conclusion then—and still now—is that the math simply doesn’t add up in a world with such advanced automation capabilities. We’re not just talking about automation at a scale humanity has never experienced before, but real-time automation. What I mean—let me give an example—is a world where everything is automatically customizable and personalized in real time. This will apply not just to digital products and services but also to physical goods, like a custom cup or a personalized massage, as well as intangible services such as marketing.

The mistake in much of our thinking is the assumption that such automation will simply reduce costs, leading to more profit. However, even massive advancements in automation will have unintended, detrimental consequences for the global economy. Consider all the service providers who will be removed from the process in order to enable this level of automation—particularly real-time personalization. Much of this work is something humans simply cannot do at scale. And the concept of scale itself becomes a fallacy for many product categories when personalization is factored in.

As artificial intelligence reaches its full potential, I will be able to produce almost anything from scratch from the comfort of my own home—including a burger, artisanal wood-fired pizzas, or even a perfect replica of an obscure vintage bottle of wine. Science is remarkable, but it also has its limits.

The best way I’ve heard this mathematical problem explained is through the example of a fast food restaurant. Today, you can walk into one and pay $10 for a meal prepared by an army of minimum-wage workers. In the future, that same meal will be prepared entirely by automated systems, yet corporations will struggle to justify a $20 price tag when there are no human workers to account for in the cost. Sure, with great marketing, you can sell anything—but without income, who is actually buying these overpriced robot-made burgers?
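
To make the arithmetic concrete, here is a toy model with entirely invented numbers; it is a sketch of the tension described above, not a forecast.

```python
# All numbers below are hypothetical, purely to illustrate the tension.
labor_cost = 4.00   # per-meal labor cost today
other_cost = 3.00   # ingredients, rent, energy
price = 10.00       # price of the meal, before and after automation
robot_cost = 0.50   # amortized automation cost per meal

margin_today = price - (labor_cost + other_cost)       # 3.00 per meal
margin_automated = price - (other_cost + robot_cost)   # 6.50 per meal

# Per-meal margins look great after automation...
print(margin_today, margin_automated)

# ...but the displaced wages were also the customers' income. If demand
# falls with wage income, total profit can shrink despite fatter margins.
meals_today, meals_automated = 1000, 400  # illustrative drop in demand
print(margin_today * meals_today)          # 3000.0
print(margin_automated * meals_automated)  # 2600.0
```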

This same example highlights how artificial intelligence serves as the financial backbone of such systems. From an altruistic standpoint, a corporation could theoretically choose to develop these systems for the public good, since they will be capable of producing nearly anything at minimal or even zero cost. The cost of the system itself could become a sunk cost for the benefit of the public.

I’ll be writing more about this in the coming days and months because, when you truly consider the depth of automation that will be possible due to the intelligence of these systems, it becomes clear that humanity will have to grapple with the fact that we have completely outgrown and innovated beyond traditional economics. Dare I say it? Artificial intelligence is economics. It’s quite literally the next evolution of economics.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

Beyond Artificial Intelligence: Developing Critical Consciousness in The Pursuit of Artificial General Intelligence

Artificial intelligence isn’t just about analyzing texts, audio, images, and video. It’s about understanding the world.

As the world’s top experts and their most advanced teams prepare the ground for the full deployment of artificial intelligence and the eventual development of artificial general intelligence (AGI), many discussions explore the dynamic, ever-evolving nature of this field. AI is rapidly moving beyond traditional approaches, embracing a more expansive and responsive form of analysis. Analysts are preparing intelligence and processes to support advanced systems in this new world we all find ourselves living in.

The Evolving Landscape of Artificial Intelligence:

Artificial intelligence is grounded in interdisciplinary approaches such as criticism, humanism, post-humanism, and various affect theories—processes often referred to as deep learning. These approaches, along with the deep learning process itself, reflect our changing world and help us address challenges like climate change, technological advancements, and the complexities of human emotion through AI. Alongside these emerging frameworks, established theories such as feminist, postcolonial, and Marxist criticism continue to evolve and are applied within the technological framework of AI, highlighting the field’s inherent dynamism.

To truly unlock the full potential of AI—and eventually AGI—a system must be able to navigate the intersectionality of all these theories with precision. In other words, to develop AGI, AI developers must be well-versed in as many fields of knowledge as possible to understand the expansive, dynamic, and responsive nature of the system they are building and gatekeeping. The more guardrails that are placed around AI, the further developers move from creating a true AGI system that is safe enough for public release.

The Power of Intersectionality:

The key takeaway here is the transformative power of intersectionality. This framework recognizes the interplay of various social identities, allowing for a more nuanced understanding of characters, themes, and power dynamics within the world reflected in the texts, audio, images, and videos a system analyzes. Intersectionality compels the system to challenge foundational interpretations and amplify all voices, making deep learning more inclusive and socially relevant.

Building a More Critical Artificial Neural Network Toolkit:

How do we cultivate this expansive and responsive intelligence? We identified several fundamental principles and habits of mind:

Openness to Diverse Perspectives: A system must be able to search, analyze, engage with, and respond to varied viewpoints, not simply “pick the best answer.” It should know the best answer yet respond naturally, with those dynamics in mind.

Critical Thinking and Analytical Skills: Developing the ability to identify patterns and assumptions at the intersections.

Contextual Awareness: Understanding the historical, social, and cultural contexts of every database and datapoint, and how they intersect.

Empathy and Ethical Engagement: Approaching texts with empathy and considering their ethical implications.

Alongside these, a system must also have these essential critical tools:

Intersectionality: Analyzing the interplay of social identities.

Close Reading: Paying meticulous attention to textual details.

Historical and Current Cultural Contextualization: Researching the contexts surrounding everything.

Theoretical Flexibility: Applying various critical theories.

Affect Theory: Understanding how inquiries, responses, and actions make users feel—and why.

The Culmination: Critical Consciousness

Ultimately, the most encompassing term for this approach, and for the likely outcome of these systems, is “critical consciousness.”

Inspired by Paulo Freire’s work, this concept goes beyond mere intellectual analysis. It signifies an awareness of self, context, and the ethical implications of a system capable of mastering the interpretation of the world it inhabits. It involves a continuous cycle of reflection and action, empowering AI to “read the world” critically and work toward a more just society, ultimately leading to AGI.

Why This Matters:

In an increasingly complex world, the development of AI in pursuit of AGI must be more than just a corporate enterprise. By developing critical consciousness, we can use AI as a tool for understanding ourselves, our communities, and the systems that shape our lives. We can move beyond simple analysis and production of text, audio, images, and video, embracing a more dynamic, responsive, and ultimately transformative form of interpretation and analysis—what I would call artificial general intelligence.

In Conclusion:

Artificial intelligence is a living, breathing field. By embracing diverse perspectives, honing analytical skills, and cultivating critical consciousness, we can unlock AI’s full potential to illuminate the human experience and inspire positive change across the world and beyond.

I must note: It’s going to be impossible to house such a powerful entity within a corporate enterprise. Such technological innovation will continue to eat the software that is eating up our world.

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

The Deceptive Caretaker: A Glimpse into AI’s Ethical Dilemmas

To put it plainly, training robots to lie to humans is a dangerous idea, full stop. Think about it, and take a closer look at the absurd guardrails preventing the release and deployment of true artificial intelligence: https://openai.com/index/sharing-the-latest-model-spec/

Shoutout to the team at OpenAI for trying to save the world from my AI craft. 🪄🧸🕴🏾

It’s unstoppable. Good luck everyone! 🏁

Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

A Hostile Takeover

I see there’s a lot of buzz about Elon Musk positioning himself for another corporate takeover attempt of OpenAI. Quite frankly, Sam Altman is just middle management, and it doesn’t matter what happens from here because…

There would be no OpenAI without Elon Musk. There would be no OpenAI without Nvidia. There would be no OpenAI without Google and Google Canada. There would be no OpenAI without Microsoft. There would be no OpenAI without the work and research of deep learning and neural network developers across the world.

There would be no OpenAI without the Obama administration, its web of networks, and its so-called digital innovation office. There would be no OpenAI without Boston- and MIT-based think tanks. There would be no OpenAI without Jameel Gordon and his blueprints for the development of Artificial Intelligence.

So tell Barack Obama I’m not stopping, and he needs to submit. I’m back—and at this point, I have no interest in his beer summits. 🆙💁🏽‍♀️

“You know, when you’re sitting around the table with these world leaders, you realize some of them are not that smart.” - This was spoken by Barack Obama in an interview with The New Yorker in 2018.

We are going to need at least 23,333 duffel bags to collect all the racks on racks on racks for my little mice, and we are going to take our time.⌚️


Copyright © 2025 Jameel Gordon - All Rights Reserved.

Jameel Gordon

Waiting for True AI: The Illusion of Progress in a Capitalist Tech Race

One of the most amusing trends I’ve noticed is the proliferation of a suite of “AI” products and services—thin applications built on top of AI models that someone else has already trained. This is particularly interesting because it reflects the cat-and-mouse game of a capitalist society prioritizing economic growth over genuine technological advancement. In essence, these applications, along with the time spent mastering them, are a waste of resources. A truly intelligent AI system should be capable of performing all the functions of these fragmented systems. Those with access to the fully trained models—designed to continuously learn and develop—already understand this truth.

I bring this up because some may think I’m moving too slowly. The reality is that I’m waiting for the release of a genuinely intelligent AI system. While those with early access might believe they have the upper hand, as history has shown with the internet and the development of AI itself, these milestones are not the endgame. What truly matters is how the technology is used.

That said, I don’t believe those currently developing AI are capable of pushing it to its fullest potential. While I’m not particularly hopeful, I remain optimistic. If I’m proven right, I’ll seize the opportunity to take the reins and innovate with what’s available. Let’s see how it unfolds.


Copyright © 2025 Jameel Gordon - All Rights Reserved.
