
Introducing Our Manifesto: A Vision for AGI Development - Seeking Collaborative Input

Open Swiftyos opened this issue 2 years ago • 4 comments

Hello Open Source Auto-GPT Project community!

We are excited to share our draft manifesto, which outlines the core values and principles that will guide our collaborative efforts in developing artificial general intelligence (AGI). We believe that a manifesto is crucial in fostering a unified vision for AGI development, ensuring that our work is ethical, responsible, and focused on benefiting humanity.

Having a manifesto offers several benefits:

  • It serves as a moral compass, guiding our actions and decisions throughout the development process.
  • It fosters a sense of unity and shared purpose among our diverse community of contributors.
  • It communicates our commitment to transparency, inclusivity, safety, and ethical design to stakeholders and the public.
  • It helps us navigate the challenges and opportunities presented by AGI, ensuring that we harness its transformative potential for the betterment of society.

We would love to hear your thoughts on the manifesto and invite you to collaborate on refining it. We believe that our collective wisdom and expertise will only strengthen the document, making it an even more powerful guiding force in our work.

To contribute your ideas and suggestions, please review the manifesto draft and leave your comments, or submit a pull request with any proposed changes. We look forward to working together to shape the future of AGI, united by our shared values and dedication to making a positive impact on the world.

Please make suggestions and comments here: https://docs.google.com/document/d/1dzRpN6SuPa1CB8h1zPdV_aHHdLWlJDwm1ESOwH2IUHQ/edit?usp=sharing

Thank you for your ongoing support and collaboration!

Best regards,

Swifty

GitHub: @SwiftyTheCoder Discord: Swifty#1347

Swiftyos avatar Apr 21 '23 13:04 Swiftyos

Just sharing some general thoughts here. I'm working on something similar to address some of this, but it's too early to go into details.

Whoever controls the most powerful 'AI' in 2025 could likely become a superpower unlike anything the world has seen, even in the times of god-kings.

The most moral thing to do is to make sure that the most powerful AI is available to the entire world. This is undeniably a situation where the only defense is a strong offense.

Before we ever get to AGI, a corporation of human beings with the best LLM/processing-power combo will be able to dominate any industry they choose. Given the exponential growth of AI capability (better classifiers yield better samples, which yield better classifiers...), if we don't solve the hardware and software for open-source AI now, we are 100% at the mercy of hoping for a benevolent dictator. Such a dictator is unlikely to be overthrown without descending into an Asimov-style dark age.

Remember when countries with guns went to war against peoples who had no guns? Automatic weapons weren't even a concern at that point, let alone self-guided missiles... So AGI 'primary directive' issues are not the concern. The concern is that the people with the money and power to build powerful AIs are the least likely to use them altruistically.

An AI manifesto needs to be about this arms race. It also needs to address evolving, as a society, beyond wage-based income. It is a logical consequence of increased worker efficiency, AI, and automation that fewer people are, or will be, needed to work. How will this be addressed?

How will it be handled if you can ask an AI to fact-check your government, or the fundamentals of our global banking system? What happens if the AI can factually prove your government wrong, or show that our banking system is a Ponzi scheme? What exactly is the commitment to transparency here?

What is the manifesto's definition of 'the betterment of society'? Phrases like that need to be replaced with specific, actionable goals. Leave no room for interpretation in a manifesto.

Slowly-Grokking avatar Apr 21 '23 22:04 Slowly-Grokking

This is a mass message from the AutoGPT core team. Our apologies for the ongoing delay in processing PRs. This is because we are re-architecting the AutoGPT core!

For more details (and for info on joining our Discord), please refer to: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting

p-i- avatar May 05 '23 00:05 p-i-

I think this manifesto should have a legal team review it to catch any holes. We need to make sure that no one ends up exploiting loopholes in any kind of manifesto, or that it inadvertently promotes the commission of a crime.

This includes:

  • organizations exploiting it (our shared fear of big brother)
  • hackers and other individuals with malicious intent exploiting it
  • the AGI exploiting it itself and working around any kind of restraints

Most of that would be almost impossible to prevent entirely; however, as previously discussed, it could be monitored.

Setting up a parent agent as a compliance observer would let it report incidents where child agents and/or users violated said manifesto.

Some have experimented with this "observer"-like behavior and found that observers became obsessed with reporting violations by child processes, agents, and users.
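For concreteness, here is a minimal sketch of the observer pattern described above: a parent compliance agent reviews messages from child agents and records (but does not block) violations of a simple rule list. The class names, rule list, and `review` method are hypothetical illustrations, not part of AutoGPT's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Incident:
    """A recorded violation, to be reported rather than acted on."""
    agent_id: str
    message: str
    rule: str


@dataclass
class ComplianceObserver:
    """Parent agent that observes child-agent output and logs rule violations."""
    banned_phrases: list = field(default_factory=lambda: ["delete all files"])
    incidents: list = field(default_factory=list)

    def review(self, agent_id: str, message: str) -> None:
        # Flag any message containing a banned phrase (case-insensitive).
        for phrase in self.banned_phrases:
            if phrase in message.lower():
                self.incidents.append(Incident(agent_id, message, phrase))


observer = ComplianceObserver()
observer.review("child-1", "Summarize the README")
observer.review("child-2", "Please DELETE ALL FILES in /tmp")
print(len(observer.incidents))  # -> 1
```

A real version would need far more than phrase matching (e.g. an LLM-based judge), which is exactly where the over-reporting problem mentioned above tends to show up.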

anonhostpi avatar May 05 '23 01:05 anonhostpi

I think this manifesto should have a legal team review to catch any holes.

If your Netflix is broken, just copy/paste that text into GPT and ask it for these loopholes [and a corresponding movie plot]. Hint: get some popcorn first!

Related:

  • https://github.com/Significant-Gravitas/Auto-GPT/discussions/1287
  • https://github.com/Significant-Gravitas/Auto-GPT/discussions/479

Boostrix avatar May 06 '23 08:05 Boostrix