Artificial Intelligence began as an academic dream, a thought experiment about machines that could think. Today, it has become the foundation of economies, militaries, and digital societies. But as AI systems grow in power and autonomy, a fundamental question has emerged: Who controls AI: governments or corporations? The answer defines not only the future of technology, but also the future of power itself.

In the United States, the balance between public authority and private enterprise has always defined technological progress. The Internet was born from government-funded research but commercialized by private firms. Space exploration was once solely the domain of NASA, but today space is shared with private companies like SpaceX and Blue Origin. The same pattern now governs AI: public innovation, private acceleration, and political oversight hand-in-hand with exponential growth.
The American AI ecosystem is dominated by a handful of private corporations like the Mag7 tech giants (Microsoft, Apple, Alphabet, Amazon, Meta, Tesla, and NVIDIA), along with emerging research powerhouses like OpenAI and Anthropic. Their resources dwarf those of most nations. Microsoft alone has committed billions to AI infrastructure and model development, NVIDIA's chips are the gold standard for training large models, and Google's DeepMind pushes the frontier of machine reasoning.
Governments, meanwhile, are trying to keep pace not by competing in innovation directly, but by controlling the conditions of AI deployment through executive orders, laws, standards, and ethical guidelines. America leads the way.
For decades, AI operated in a regulatory vacuum. It was treated as just another software discipline until the emergence of generative AI shattered that illusion. When ChatGPT reached 100 million users in two months, policymakers realized that AI was not a distant research field, but rather a mass-market force with immediate social, economic, and political consequences.
Executive Order 13859, titled "Maintaining American Leadership in Artificial Intelligence" and signed by President Trump on February 11, 2019 (the AI Initiative), was the first wide-ranging executive order on American AI governance. It was one of the earliest actions directly addressing artificial intelligence at the national level, years before the ChatGPT era we're in now, and it set the tone for a White House-led approach to AI that continues to this day.
EO 13859 directed federal agencies to prioritize AI research and development and to foster workforce development. It aimed to make federal data and models available for AI development work, instructed agencies to create guidance for the use of AI in the industries they regulate, and called for an action plan to protect the U.S.'s technological advantage in AI.
The AI Initiative was guided by five principles:
The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today's industries.
The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today's economy and jobs of the future.
The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.
The AI Initiative identified six strategic objectives in promoting and protecting American advancements in AI:
Promote sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.
Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy, and confidentiality protections consistent with applicable laws and policies.
Reduce barriers to the use of AI technologies to promote their innovative application while protecting American technology, economic and national security, civil liberties, privacy, and values.
Ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.
Train the next generation of American AI researchers and users through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI.
Develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM) to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.
The AI Initiative was significant in several ways:
Strengthening AI Capabilities: The order established principles and strategies to enhance U.S. capabilities in AI by promoting scientific discovery, economic competitiveness, and national security.
Government Engagement: For the first time, it emphasized the importance of federal engagement in AI, with a focus on federal agency assessments and the enforcement of existing regulatory authorities.
Regulatory Oversight: The order advocated a cautious approach to regulating AI in the private sector, opting for the oversight of federal government uses of AI.
International Competition: It reflected the U.S. federal government's approach to AI governance and regulation, balancing the opportunities of AI technologies with the potential risks of international competition, especially from China.
Fast forward six years. The Trump Administration's second-term philosophy on AI is driven by the goal of securing global technological supremacy and fostering economic competitiveness. The strategy is characterized by an emphasis on private-sector leadership, minimal federal regulation, and the strategic integration of AI into national security. The belief is that rapid innovation, unimpeded by bureaucratic oversight, is the most effective means to secure American dominance in 21st-century technology.
The primary driver of the administration's AI policy is the belief that technological leadership is a direct result of economic freedom and the velocity of innovation. The philosophy dictates that the U.S. advantage stems from its technology companies. The government's role is not to dictate development, but to facilitate it by removing roadblocks.
Just as ARPA-era funding helped seed the space program and the internet, there is a belief today in the value of strategic federal investments in research. Federal funding is directed toward specific, high-impact areas, including fundamental research and the construction of high-performance computing resources critical for training trillion-parameter models.
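To see the scale such investments must reach, consider a back-of-envelope estimate using the widely cited rule of thumb that training a dense model takes roughly 6 FLOPs per parameter per training token. Every number below (parameter count, token count, accelerator speed, utilization, cluster size) is an illustrative assumption, not a figure from any actual training run:

```python
# Rough estimate of compute for a trillion-parameter training run,
# using the common ~6 * N * D FLOPs rule of thumb. All inputs are
# illustrative assumptions, not measurements.

PARAMS = 1e12               # N: one trillion parameters
TOKENS = 20 * PARAMS        # D: ~20 tokens per parameter (Chinchilla-style heuristic)
FLOPS_TOTAL = 6 * PARAMS * TOKENS   # ~1.2e27 FLOPs

PEAK_FLOPS_PER_GPU = 1e15   # ~1 PFLOP/s peak for a modern accelerator (assumed)
UTILIZATION = 0.4           # ~40% sustained utilization of peak (assumed)
GPUS = 100_000              # a frontier-scale cluster (assumed)

seconds = FLOPS_TOTAL / (GPUS * PEAK_FLOPS_PER_GPU * UTILIZATION)
print(f"Total training compute: {FLOPS_TOTAL:.1e} FLOPs")
print(f"Wall-clock on {GPUS:,} GPUs: ~{seconds / 86_400:.0f} days")
```

Under these assumptions, a single trillion-parameter run would occupy a hundred-thousand-GPU cluster for nearly a year, which is why compute infrastructure, not algorithms alone, dominates the federal investment conversation.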
President Donald Trump's AI Action Plan identifies steps to scale back regulations and spur investment, with the goal of establishing the U.S. as the global leader in this advanced technology.
"The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits," the introduction to the plan reads. "Just like we won the space race, it is imperative that the United States and its allies win this race."
The 28-page strategy, called "Winning the Race: America's AI Action Plan," centers on three pillars: accelerating AI innovation, building AI infrastructure in the U.S., and establishing the U.S. as a worldwide leader in AI. It recommends dozens of actions for the federal government to take across those pillars, including reducing the number of environmental regulations imposed on data centers and contracting with large language model developers.
The Trump Administration initiated efforts to accelerate development of the technology and rolled back restrictions that President Biden had placed on AI. Soon after his inauguration in January 2025, Trump rescinded Executive Order 14110, the 2023 order aimed at establishing safety standards for AI's development and use. He then signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," to revoke existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence. The Administration also announced that the AI Safety Institute established in November 2023 would be transformed into the pro-innovation, pro-science U.S. Center for AI Standards and Innovation.
On 23 July 2025, six months into his second term, President Trump delivered remarks at an AI Summit and signed three Executive Orders related to AI. The first order fast-tracks the permitting process for the construction of major AI infrastructure projects; the second instructs Administration officials to promote the international export of American AI models; and the third bans the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas, stating that the U.S. government will deal only with AI that pursues truth, fairness, and strict impartiality.
"From this day forward, it'll be a policy of the United States to do whatever it takes to lead the world in artificial intelligence," Trump said at the event.
Here's what to know about the AI Action Plan:
Pillar I: Accelerate AI Innovation
The first pillar outlines steps to remove red tape and onerous regulation. It recommends that federal agencies identify, revise, or repeal regulations that unnecessarily hinder AI development or deployment. If states have overly burdensome regulations, the plan threatens to limit the AI-related federal funding they receive.
"To maintain global leadership in AI, America's private sector must be unencumbered by bureaucratic red tape," the plan reads. "AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level. The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."
The plan also states that AI systems must be built from the ground up with freedom of speech and expression in mind, and reflect truth rather than social engineering agendas. To that end, it recommends that officials review the National Institute of Standards and Technology's AI Risk Management Framework to eliminate references to misinformation, DEI, and climate change, and that the federal government contract only with large language model developers that ensure their systems are objective and free from top-down ideological bias.
Pillar II: Build American AI Infrastructure
The second pillar is focused on building AI infrastructure within the U.S. American energy capacity has stagnated since the 1970s while China has rapidly built out its grid, and the plan outlines steps to bolster energy infrastructure to establish America's AI dominance.
The plan blames regulations for slowing infrastructure growth. It recommends fast-tracking environmental permitting by streamlining or reducing various regulations. The document also outlines measures to upgrade the country's electric grid to better support AI data centers.
Pillar III: Lead in International AI Diplomacy and Security
The final pillar states that the U.S. must do more than promote AI within its own borders, adding that the country must also drive adoption of American AI systems, computing hardware, and standards throughout the world. The plan recommends that the U.S. export its full AI technology stack (hardware, models, software, applications, and standards) to all countries willing to join America's AI alliance. It criticizes a number of international bodies, including the United Nations, for their proposed AI governance frameworks and development strategies.
"The United States supports like-minded nations working together to encourage the development of AI in line with our shared values," the plan reads. "But too many of these efforts have advocated for burdensome regulations, vague 'codes of conduct' that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance."
The plan advises that federal agencies leverage the U.S. position in international settings to vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence.
Here are some of the plan's basic principles:
Talent Retention: Policies emphasize immigration and educational initiatives designed to attract and retain top global AI talent to prevent a "brain drain" to competitor nations.
Approach to Regulation and Ethics: In contrast to the broader regulatory efforts seen internationally (such as the European Union's AI Act), the administration advocates for a light-touch, non-prescriptive regulatory environment.
Skepticism of Broad Rules: The philosophy holds that imposing wide-ranging, technology-specific regulations could stifle innovation and inadvertently create compliance burdens that disproportionately affect smaller companies.
Sector-Specific Oversight: Regulation is preferred only in sectors where AI poses a measurable risk to human safety or financial stability (e.g., specific medical devices or financial lending algorithms). Even here, the focus is on performance standards rather than prescriptive requirements.
Voluntary Standards: The government encourages industry-led, voluntary technical standards and best practices (such as Anthropic's Constitutional AI approach) for areas like transparency and data security, rather than mandated federal rules.
National Security and Defense Integration: AI is viewed first and foremost as a critical asset for national defense and intelligence, making its military application a top priority.
Autonomous Systems Development: The administration actively supports the research and rapid deployment of fully autonomous military technology. The philosophy accepts the necessity of AI systems running conflict simulations and deploying resources based on pure predictive probability, prioritizing speed and decisive advantage on the battlefield.
Cybersecurity Focus: Significant resources are dedicated to employing AI for offensive and defensive cybersecurity to protect critical infrastructure, including the physical AI compute clusters and the high-speed fiber lines of the data center network.
Workforce and Societal Impact: The administration acknowledges the societal shifts caused by automation, but frames the response through economic growth.
Emphasis on Growth: The primary solution to job displacement caused by AI agents (like those displacing customer service or accounting roles) is to accelerate economic growth, arguing that new, higher-skilled jobs will emerge and absorb any displaced workforce.
Retraining Focus: While not advocating for intervention to slow automation, the policy supports targeted workforce development and retraining initiatives focused on high-demand, technical skills such as network operations and maintenance roles created by data centers. This ensures American workers are prepared for the new technical economy created by AI.
While governments regulate, corporations build, and they build fast. AI is a competitive business, where speed can mean the difference between dominance and obsolescence. OpenAI's GPT, Anthropic's Claude, Microsoft's Copilot, and Google's Gemini are locked in a race to create increasingly capable and profitable AI systems.
But this rapid innovation comes at a cost: transparency, accountability, and control often lag behind capability. Model weights are guarded as trade secrets. Data provenance is obscure. Alignment mechanisms, the ways in which AIs are taught to behave safely, are proprietary.
Corporations argue that regulation must not stifle progress. They warn of a regulatory overreach that could slow America's innovation and allow China or Europe to take the lead. Their message to Washington is clear: Let us innovate, and we'll self-regulate. President Trump agrees: "AI is far too important to smother it in bureaucracy at this early stage."
Trump has taken aim at state laws regulating AI. He threatens to limit federal funding for states that pass AI laws deemed burdensome to the technology's development. "We also have to have a single federal standard, not 50 different states regulating this industry of the future," Trump said. "We need one common-sense federal standard that supersedes all states; supersedes everybody, so you don't end up in litigation with 43 states at one time."
The portion of Trump's plan targeting states is getting pushback from some in the industry. For example, Anthropic released a post responding to Trump's AI plan. "We share the Administration's concern about overly-prescriptive regulatory approaches creating an inconsistent and burdensome patchwork of laws," the company said, but added, "We continue to oppose proposals aimed at preventing states from enacting measures to protect their citizens from potential harms caused by powerful AI systems, if the federal government fails to act." The key word is "if": should the federal government fail to act, at least one tech company would welcome appropriate state legislation.
Several U.S. states have already passed laws regulating artificial intelligence. Texas, California, Illinois, and Colorado are leading with AI-specific laws, while others are adapting existing privacy and consumer protection statutes. The laws are mainly directed at transparency, discrimination, and consumer protection, though Congress is now debating whether to override state-level AI laws.
California has enacted bills requiring impact assessments for automated decision systems used in hiring, housing, and lending, and maintains a strong focus on consumer privacy through the California Consumer Privacy Act (CCPA), which applies to AI-driven data use.
Colorado passed a law requiring risk management frameworks for companies deploying high-risk AI systems. The law includes transparency obligations and consumer rights to opt out of certain automated decisions.
Illinois has enacted the AI Video Interview Act, requiring employers to notify applicants when AI is used in hiring and to obtain consent (a minimal compliance sketch follows the list of themes below). Expanded rules around biometric data under the Biometric Information Privacy Act indirectly regulate AI systems as well.
Texas passed a far-reaching AI law addressing child protection, data privacy, discrimination, and accountability for Big Tech. Lawmakers argue it prevents harmful AI use in areas like child pornography and biased algorithms.
Common themes across these state laws include:
Transparency: Informing consumers when AI is used (hiring, lending, healthcare).
Bias and Discrimination: Preventing unfair outcomes in employment, housing, or policing.
Privacy: Protecting biometric and personal data from misuse.
Child Protection: Safeguarding minors from harmful AI-generated content.
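To make one of these requirements concrete, here is a minimal sketch of how an employer's hiring pipeline might gate an AI video evaluation behind an Illinois-style notice-and-consent check. It is illustrative only, not legal guidance; the class, function names, and fallback behavior are hypothetical assumptions, not anything prescribed by the statute:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    notified_of_ai_use: bool = False   # employer disclosed that AI will analyze the interview
    consented_to_ai_use: bool = False  # applicant affirmatively agreed

def run_ai_video_analysis(video_path: str) -> str:
    """Hypothetical AI scorer; stands in for a real evaluation service."""
    return f"ai-scored:{video_path}"

def queue_for_human_review(video_path: str) -> str:
    """Hypothetical fallback path when AI analysis is not permitted."""
    return f"human-review:{video_path}"

def evaluate_interview(applicant: Applicant, video_path: str) -> str:
    """Route to AI analysis only if the applicant was notified and consented."""
    if applicant.notified_of_ai_use and applicant.consented_to_ai_use:
        return run_ai_video_analysis(video_path)
    # No notice or no consent: route to a human rather than blocking the applicant.
    return queue_for_human_review(video_path)

candidate = Applicant("A. Smith", notified_of_ai_use=True, consented_to_ai_use=True)
print(evaluate_interview(candidate, "interviews/a_smith.mp4"))
```

The design choice worth noting is the fallback: an applicant who declines AI analysis is routed to human review rather than rejected, one plausible way to honor consent without penalizing the candidate.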
Self-regulation has limits. When private AI systems begin influencing public opinion, national security, and labor markets, the argument that they are "just products" no longer holds. The line between private enterprise and public infrastructure begins to blur. Is it too soon to regulate, to "smother it in bureaucracy at this early stage," as the President suggests? Or are we merely singing a familiar American refrain of innovation versus regulation?
We've sung that song before in American history, from the Early Industrial Era (1800s-early 1900s) to the Progressive Era and New Deal (1900s-1930s), Post-WWII and the Cold War (1940s-1970s), and the Late 20th Century Tech Boom (1980s-2000s). These epochs produced such results as the Interstate Commerce Act of 1887, the Securities Exchange Act of 1934, environmental laws, antitrust cases, and deregulation in industries like airlines. Time will tell what legislation AI will ultimately require in America.
In the meantime, President Donald Trump signed an executive order on 11 December 2025 aimed at limiting the ability of U.S. states to pass their own AI regulations. The order, described by the president as a "ONE RULE" initiative, establishes an AI Litigation Task Force tasked with challenging state AI laws in the courts. It directs federal agencies, including the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC), to evaluate state regulations that are deemed overly burdensome and to develop national standards that could supersede state rules.
Silicon Valley figures, such as OpenAI President Greg Brockman and AI czar David Sacks, have warned that state AI laws could create an unworkable patchwork of regulations that would hinder innovation. They argue that a single federal rulebook would protect the United States' competitive edge in AI development and avoid the complexities of navigating fifty different state regimes. The executive order gives David Sacks direct influence over AI policy, superseding the typical role of the White House Office of Science and Technology Policy.
At the core of the AI control debate lies data: who collects it, who owns it, and who benefits from it. AI companies depend on vast datasets scraped from the Internet, often including copyrighted works, personal data, and sensitive information. Governments, aware of the strategic importance of data, are moving to assert data sovereignty.
The U.S. is crafting rules to restrict the export of sensitive data to foreign adversaries, while China enforces strict localization of domestic data within its borders. The European Union, through GDPR and the AI Act, gives individuals legal rights over how their data can train AI models. In this global contest, data is the new oil, and those who refine it into intelligence hold the true power.
Data sovereignty is the idea that data, particularly that belonging to a nation's citizens or government, must be subject to the laws and governance structures of that nation, regardless of where the data is physically stored or processed. To understand this concept, consider the imaginary country of Veridia and its data predicament.
The nation of Veridia had long embraced the global cloud. Its health records, university research, and most government communications were processed and stored efficiently by vast, nameless data centers located thousands of miles away, primarily under the legal jurisdiction of foreign powers. It was cheap, it was fast, and for years, it was convenient.
Then came the crisis known simply as "The Forecast."
Veridia relied heavily on a predictive AI model leased from a large, international corporation (call it Big Data) to forecast resource allocation for its crucial agricultural sector. Big Data's AI suddenly recommended drastic, seemingly irrational cuts to water reserves for a specific region. Perplexed by this sudden move, Veridia's water minister asked for the model's underlying logic and the real-time sensor data that drove the decision.
Big Data denied the request, citing the foreign jurisdiction where the data resided, a jurisdiction with weaker privacy and disclosure laws. They claimed the underlying data and the model's weights were proprietary and legally inaccessible to Veridia.
The minister realized the grim truth: Veridia had outsourced not just its data storage, but its national decision-making capacity. The data, the collective memory of the nation (its weather patterns, its soil composition, its people's health), was their most strategic resource, and they had no control over it. The episode demonstrated the strategic importance of data, especially for AI development, which depends on such data to train its models.
In the wake of The Forecast disaster, Veridia passed the National Data Integrity Act. It was their modern-day equivalent of building a national border wall. But this wall was built of code as much as of legal mandate.
The Act didn't ban foreign cloud providers; instead, it established a clear rule: Any data pertaining to Veridian citizens, critical infrastructure, or government operations must be processed and stored exclusively on servers physically located within Veridia's territorial borders.
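In engineering terms, a mandate like this is typically enforced twice: once in law, and once as a policy-as-code check inside the storage layer itself. Below is a minimal sketch of such a residency guard. The region codes, data categories, and function names are all hypothetical (Veridia itself is imaginary), so this illustrates the pattern rather than any real statute or API:

```python
# Regions physically located inside Veridia's borders (assumed identifiers).
ALLOWED_REGIONS = {"veridia-east-1", "veridia-west-1"}

# Data categories the Act declares sovereign (assumed taxonomy).
SOVEREIGN_CATEGORIES = {"citizen_pii", "critical_infrastructure", "government"}

class ResidencyViolation(Exception):
    """Raised when sovereign data would leave Veridia's jurisdiction."""

def assert_residency(data_category: str, target_region: str) -> None:
    """Block writes of sovereign data to servers outside national borders."""
    if data_category in SOVEREIGN_CATEGORIES and target_region not in ALLOWED_REGIONS:
        raise ResidencyViolation(
            f"{data_category!r} data may not be stored in {target_region!r}"
        )

def store_record(data_category: str, target_region: str, payload: bytes) -> None:
    assert_residency(data_category, target_region)
    # ... hand off to the actual storage backend here ...
    print(f"stored {len(payload)} bytes of {data_category} in {target_region}")

store_record("citizen_pii", "veridia-east-1", b"ok")        # in-country: allowed
try:
    store_record("citizen_pii", "eu-west-3", b"blocked")    # foreign region: rejected
except ResidencyViolation as err:
    print("rejected:", err)
```

The point of the sketch is that the check runs before any bytes leave the jurisdiction; the law's boundary becomes a precondition in the write path.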
This action had two profound consequences, demonstrating the assertion of data sovereignty:
Mandated Infrastructure: To continue serving the Veridian market, major international tech companies were forced to spend billions building dedicated AI data center infrastructure within Veridia. This addressed the issue of geographic concentration of AI infrastructure, bringing investment and high-tech jobs to the country.
Legal Jurisdiction: Because the physical data centers now sat on Veridian soil, any legal dispute, audit, or access request was subject immediately and unequivocally to Veridian courts and laws. The data was no longer protected by the legal shields of foreign countries. The nation reclaimed control over its digital destiny.
Veridia learned that in the age of AI, sovereignty wasn't just about controlling physical borders; it was about drawing a clear, undeniable legal boundary around the national data that fuels the new intelligent economy.
A growing movement within academia and civil society argues that AI should not be monopolized by corporate interests. Instead, they advocate for a publicly funded AI infrastructure of shared datasets, open models, and community-driven research.
Projects like OpenAI's original mission, EleutherAI, and Hugging Face's open model hub embody this philosophy: democratizing access to AI so that innovation benefits everyone, not just the few. The U.S. government has also begun to invest in "AI for the public good," funding initiatives in education, healthcare, and climate modeling.
The tension remains, however: can open AI remain safe? Can public research keep up with the billion-dollar budgets of private labs? The simple answer is "no." In terms of raw compute power, public research today cannot compete with private labs, although it remains critically important for establishing the theoretical and ethical foundations of AI. We've come full circle: American universities conceived and launched artificial intelligence, but they can no longer compete with the likes of Google, OpenAI, and Microsoft. To address part of this deficiency, President Trump initiated the Genesis Mission.
The Genesis Mission is a historic national effort led by the Department of Energy to transform American science and innovation through the power of AI, strengthening the nation's technological leadership and global competitiveness. The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade, delivering decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security. Genesis will unleash the full power of the U.S. National Laboratories, supercomputers, and data resources to ensure that America is the global leader in artificial intelligence and to usher in a new golden era of American discovery.
The question of control is not merely technical or economic; it is ethical. Who decides what an AI system can or cannot do? Should corporations be the moral arbiters of machines used by billions, or should governments, accountable to the public, set those limits?
Some argue that AI ethics cannot be outsourced. Governments must enforce transparency, fairness, and safety. Others counter that bureaucratic control could choke creativity, replacing innovation with compliance.
The ideal path lies between the extremes: a co-regulatory model, where private innovation thrives within a transparent, accountable framework enforced by public institutions. This model mirrors the structure of the aviation and pharmaceutical industries, sectors where innovation continues, but under strict oversight for safety and reliability.
As AI continues to permeate life and governance, a new legal architecture, a kind of "AI Constitution," is emerging. It will define rights, responsibilities, and limits for intelligent systems, much as earlier generations of law defined them for corporations and citizens.
The U.S., with its blend of free-market dynamism and democratic governance, is uniquely positioned to pioneer this balance. But success will require coordination between Congress, federal agencies, and the very corporations that now lead the field.
Ultimately, the question is not who controls AI, but how AI is controlled: through secrecy and competition, or through openness and shared accountability. The answer will determine whether artificial intelligence remains a tool of progress or becomes an unaccountable power.
AI has blurred the old boundaries between state and market. Governments need the expertise and resources of corporations; corporations need the legitimacy and stability provided by governance. The challenge is to find equilibrium: a partnership where neither dominates and both are accountable.
In the end, the story of AI in America may not be about conquest or control at all. It may be about coexistence: a delicate balance between innovation and regulation, freedom and responsibility, human ambition and collective wisdom.
For as AI grows smarter, the true test will not be whether machines can govern themselves, but whether humanity can.
trumpwhitehouse.archives.gov/ai/executive-order-ai
President Trump's Executive Orders include:
Executive Order 14179 is titled "Removing Barriers to American Leadership in Artificial Intelligence"
Executive Orders signed on 23 July 2025: "Accelerating Federal Permitting of Data Center Infrastructure," "Promoting the Export of the American AI Technology Stack," and "Preventing Woke AI in the Federal Government"