
Petition Summary: Stop the Superintelligent AI Threat to Humanity

Address Risks of Artificial Intelligence: Stand against the risk of superintelligent-AI-driven human extinction, as well as job losses, privacy invasion, the spread of misinformation, weaponization, and value misalignment.

Protect Our Future: Issue an immediate stop on superintelligent AI until it is proven to be controllable, safe, and aligned with human interests. Your signature is key.

Unite for Change: Join a global call for transparent, ethical, and safe AI that respects humanity. Demand oversight and international regulation to protect this generation and the upcoming generations.

Sign today to eliminate the real risks of artificial intelligence, before superintelligent AI turns into an uncontrollable threat to humanity!

We, the undersigned, are a collective of concerned individuals, parents, professionals, and citizens from diverse backgrounds, united in our call for an immediate and proactive ban on the creation of uncontrollable superintelligent AI. Our concerns are not just theoretical; they are deeply personal and rooted in a vision for a future where humanity thrives alongside technology, not in its shadow.

(Please scroll to the end of this AI Safety petition for other commonly used synonyms for ‘superintelligent AI.’)

Personal and Collective Concerns

We, as concerned individuals dedicated to protecting our families and communities, are deeply troubled by the direction in which certain AI leaders and decision-makers are heading. The pursuit of superintelligent AI, with the potential end goal of humanity's submission to the rise of machines, is not the future we envision for our children and future generations. It is imperative that we create a world where all generations can thrive without the fear that superintelligent AI will consign humanity to indefinite economic uselessness, or even replace humanity as a species.

What is Artificial Superintelligence?

Artificial superintelligence refers to a computer-based intellect that vastly outsmarts the best human brains in every field, including scientific creativity, general wisdom, and social skills. The creation of such an entity poses huge, unpredictable risks, making it a gamble with humanity's future that we cannot afford to take.

Artificial Intelligence companies are racing to create superintelligent AI. Leading the pack is OpenAI (the creator of ChatGPT), whose mission is to create “highly autonomous systems that outperform humans at most economically valuable work.” Meta (formerly Facebook) wants to go one irresponsible step further and plans to create and release such superintelligent systems to be downloadable by essentially everyone with an Internet connection, including potential criminals, terrorists, and rogue nation-states.

Even the current versions of AI are difficult to control because of the sheer complexity of their underlying neural networks, and many leading AI scientists have highlighted the immense challenges of aligning such powerful systems with human values.

This raises a critical question: why pursue the development of superintelligent AI, which promises even greater intelligence and autonomy, without a clear solution to ensure it can be controlled and aligned with human values? The risk of superintelligent AI acting autonomously, potentially in ways that could threaten human existence, underscores the urgency of this petition.

Rationale for the Ban

[1] AI-Driven Human Extinction Risk

Professor Geoffrey Hinton is the most-cited AI researcher of all time. He won the Turing Award for pioneering the AI method of deep learning. While Professor Hinton used to work on such AI systems at Google, he left the company in order to warn about the dangers posed by the very AI technology he pioneered.

As Professor Hinton warns, the scientific problem of how to control superintelligent AI remains unsolved. AI has a black box problem: its internal decision-making processes are opaque, even to its human creators, monitors, and auditors.

Professor Hinton has also warned that the unregulated advancement of superintelligent AI harbors potential existential risks to humanity. The intelligence of these systems could surpass human understanding, leading to unpredictable and uncontrollable outcomes.

[2] Economic and Societal Disruption

According to estimates from Goldman Sachs, two-thirds of all jobs in the United States will be at least partly affected by Generative AI, with 25% to 50% of their tasks potentially being automated. AI technologies, while beneficial in some areas, pose a significant risk to the job market, with the potential to automate tasks performed by human workers across numerous sectors.

The shift towards uncontrolled automation powered by superintelligent AI could lead to an unprecedented risk of mass unemployment, and potentially even indefinite human economic uselessness.

[3] Threat to Human Autonomy and Privacy

United Nations High Commissioner for Human Rights Michelle Bachelet has emphasized the urgent need for a moratorium on AI systems that pose serious risks to human rights. This stance highlights a critical aspect of superintelligent AI's impact: the potential for significant infringement on privacy and other fundamental rights.

Superintelligent AI, capable of vast data analysis and autonomous decision-making, raises serious concerns about the potential for misuse in surveillance and data manipulation, thereby posing alarming threats to individual privacy and autonomy.

[4] Proliferation of Synthetic Media

United Nations Secretary-General António Guterres warns that the acceleration of misinformation and disinformation could deepen global inequalities, undermine trust in institutions, weaken social cohesion, and threaten democracy.

The flooding of the internet with AI-generated synthetic media, including false photos, videos, and texts, can distort reality and truth, exacerbating the challenges of misinformation and its profound societal impact.

[5] Militarization and Autonomous Weapons

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has raised alarms about the "Third Revolution" in warfare introduced by Lethal Autonomous Weapons Systems (LAWS), placing them in the same transformative category as nuclear weapons.

The development of autonomous weapons systems, often termed 'killer robots', presents a dire threat to global peace and security. The prospect of AI-driven warfare necessitates immediate preventive measures.

[6] Misalignment with Human Values

The rapid pace of AI development, led especially by followers of the pro-human-replacement cult "Effective Accelerationism" (e/acc), who are backed by uber-wealthy and uber-connected AI industry leaders, overlooks the vital importance of aligning AI with human values. This threatens mass human job loss to AI and even the potential replacement of humanity as a species.

Call to Action

Global Moratorium: We call for a global moratorium on the development of superintelligent AI. The moratorium should be indefinite, ending only when the requisite safety and ethics standards, specifically standards that have been scientifically demonstrated to ensure that such systems will be controllable and act in humanity's best interest, are successfully formulated and adopted.

Pro-human AI Development: We advocate for AI technologies that align with human moral values, ensuring they serve the common good and respect human dignity.

Transparency and Accountability: We demand strict guidelines for transparency in AI development, with accountability mechanisms for developers and corporations.

Public Discourse: We call for informed public discourse, ensuring that diverse stakeholder voices are considered in shaping the future of AI.

International Collaboration: We emphasize the need for international collaboration to address AI's challenges cohesively.

We implore leaders and decision-makers to heed this call for action. The future world faced by our children and our children’s children may very well depend on the choices we make today: choices regarding the development and deployment of superintelligent AI.

Sign the Petition to Secure a Safe Future for Your Loved Ones

Our call for action is driven by a commitment to safeguard our shared future, ensuring that technological advancements serve to enhance, rather than threaten, human life.

By signing this petition, you join a global movement advocating for a pro-human approach to AI development. Together, we can ensure that regulations are put in place to protect future generations from the potential perils of superintelligent AI.

The potential risks associated with superintelligent AI are too great, and we must take action now, before it is too late. We need regulations that ban the creation of uncontrollable superintelligent AI, to ensure humanity's safety and security: for our generation, our children's generation, and their children's generation.

Synonyms for ‘superintelligent AI’ that are included in this petition and global moratorium

Artificial General Intelligence (AGI): Machines with the ability to learn and apply knowledge across a broad range of tasks at human-level competence.

Strong AI: AI that possesses consciousness and understanding similar to human intelligence, capable of performing any cognitive task.

Full AI: Fully developed AI that can replicate human intelligence and behavior across all areas.

Human-Level AI: AI that matches human cognitive abilities in reasoning, problem-solving, and creativity across all domains.

Hyperintelligent AI: AI that significantly surpasses human intelligence in all fields, embodying extreme advancement in cognitive capabilities.

Advanced AI: AI systems with capabilities far beyond today's technology, possessing advanced learning and decision-making skills.

Post-Human Intelligence: Intelligence that exceeds human limitations, enhancing memory, processing speeds, creativity, and problem-solving abilities beyond current human capabilities.

Sign the Petition to Protect Your Family Today

Sign this petition to support a human-centered AI development strategy.

Together, we can push for regulations that guard against the dangers of superintelligent AI. The risks are immense, and immediate action is necessary to secure the safety and well-being of present and future generations.

Together, let's prevent the creation of uncontrollable superintelligent AI!

After you sign the petition, please share your story with www.StakeOut.AI and the #WeStakeOutAI movement. For details on stats and sources quoted in this safe AI petition, please visit www.StakeOut.AI for more information.

Sign another important AI Safety petition here: https://www.stakeout.ai/ai-safety-petitions-safe-ai-laws-regulations
