

AI will also be on the ballot on November 5

A ballot paper with "AI" written on it goes into the ballot box. The choice Americans make this November will determine whether they continue to lead a collaborative effort to shape the future of artificial intelligence according to democratic principles. Illustration: Edited by Erik English; original from DETHAL via Adobe.

Artificial intelligence represents one of the most important technologies of our time, and while it promises tremendous benefits, it also poses serious risks to the country’s security and democracy. The 2024 elections will determine whether America leads or retreats from its vital role in ensuring that AI develops safely and in line with democratic values.

Artificial intelligence promises extraordinary benefits, from accelerating scientific discovery to improving healthcare and increasing productivity in our economy. But realizing these benefits requires what experts call “safe innovation”—developing AI in a way that preserves American safety, security and values.

Despite these benefits, AI also carries significant risks. Unregulated AI systems can amplify social biases, leading to discrimination in important decisions about jobs, loans, and healthcare. The security challenges are even more alarming: AI-powered attacks can probe vulnerabilities in power grids thousands of times per second, launched by individuals or small groups rather than requiring the resources of nation states. During public health or safety emergencies, AI-powered misinformation can disrupt critical communications between emergency services and the public, undermining life-saving response efforts. Perhaps most worrying, AI could lower the barriers for malicious actors to develop chemical and biological weapons more easily and quickly than they could without the technology, making destructive capabilities accessible to individuals and groups who previously lacked the expertise or research skills.

Recognizing these risks, the Biden-Harris administration has developed a landmark, comprehensive approach to AI governance through its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The administration’s framework directs federal agencies to address the full spectrum of AI challenges. It creates new guidelines to prevent AI discrimination, promotes research that serves the public interest, and launches government-wide initiatives to help society adapt to AI-driven changes. The framework also addresses the most serious security risks by ensuring that powerful AI models undergo rigorous testing, so that safeguards can be developed against potential abuses such as aiding cyber attacks or the creation of biological weapons that threaten public safety. These protections preserve America’s ability to lead the AI revolution while protecting our security and values.

Critics who claim this framework will stifle innovation would do well to consider other transformative technologies. Strict safety standards and air traffic control systems developed through international cooperation have enabled, not hindered, the airline industry. Today, millions of people board planes without hesitation, trusting in the safety of air travel. Aviation became the cornerstone of the global economy because nations worked together to establish standards that earned the public’s trust. Similarly, catalytic converters have not stopped the automotive industry: They have helped cars meet increasing global demands for both mobility and environmental protection.

Just as the Federal Aviation Administration ensures safe air travel, dedicated federal oversight, in collaboration with industry and academia, can ensure the responsible use of AI applications. Through the recently released National Security Memorandum, the White House has now established the AI Safety Institute at the National Institute of Standards and Technology (NIST) as the U.S. government’s primary point of contact for private sector AI developers. This institute will facilitate voluntary testing both before and after public deployment to ensure the safety, security, and reliability of advanced AI models. But policymakers need to think globally, as threats such as biological weapons and cyber attacks do not respect borders. That’s why the administration is establishing a network of AI safety institutes with partner countries to harmonize standards worldwide. This isn’t about going it alone, but about leading a coalition of like-minded nations to ensure AI evolves in ways that are both transformative and trustworthy.

Former President Trump’s approach would be quite different from the current administration’s. The Republican National Committee platform proposes to “repeal Joe Biden’s dangerous Executive Order” that it claims blocks AI innovation and imposes “Radical Leftist ideas” on the development of this technology. This position contrasts with the public’s growing concerns about technology risks. For example, Americans have witnessed the dangers children face from unregulated social media algorithms. That’s why the U.S. Senate recently came together in an unprecedented bipartisan show of force to pass the Kids Online Safety Act by a vote of 91-3. The bill provides young people and parents with tools, safeguards, and transparency to protect against online harm. The risks of artificial intelligence are even greater. For those who think that building guardrails around technology will hurt America’s competitiveness, the opposite is true: Just as travelers prefer safer planes and consumers demand cleaner vehicles, they will insist on reliable AI systems. Companies and countries that develop AI without adequate safeguards will find themselves at a disadvantage in a world where users and businesses demand assurance that AI systems will not spread misinformation, make biased decisions, or enable dangerous practices.

The Biden-Harris Executive Order on AI creates a foundation on which to build. Strengthening the United States’ role in setting global AI security standards and expanding international partnerships is essential to maintaining America’s leadership. This requires working with Congress to secure strategic investments in AI security research and oversight, as well as in defensive AI systems that protect the nation’s digital and physical infrastructure. As automated AI attacks become more sophisticated, AI-powered defenses will be vital to protecting power grids, water systems, and emergency services.

The window for establishing effective global governance of AI is narrow. The current administration has built a thriving ecosystem for safe, secure, and reliable AI, a framework that positions America as a leader in this critical technology. Stepping back now and dismantling these carefully constructed safeguards would surrender not only America’s technological superiority but also its ability to ensure that AI evolves in line with democratic values. Countries that do not share the United States’ commitment to individual rights, privacy, and security would have a greater say in setting the technology standards that will reshape every aspect of society. This election represents a critical choice for the future of America. The right standards, developed jointly with allies, will not hinder the development of artificial intelligence; they will enable it to reach its full potential in the service of humanity. The choice Americans make this November will determine whether they continue to lead a collaborative effort to shape the future of artificial intelligence according to democratic principles or surrender that future to those who would use artificial intelligence to undermine our nation’s security, prosperity, and values.