Volume 1, Issue 1
1st Quarter, 2006
Peter Voss
page 7 of 7
Powerful A.G.I.s will arrive long before significant intelligence augmentation, long before we can improve ourselves. So we will need A.G.I. to help us upgrade our own wetware. Legal issues will revolve around limiting the production and use of A.G.I.s rather than protecting them, and legal mechanisms will be largely ineffective.
Considering all of these implications, how do we prepare for A.G.I.? I am very keen to organize an effort around this question, and a group of people has already begun thinking about these issues together. At our company, a core group of advisors is trying to think ahead and guide this technology in the best way we can. It is an exciting time for all of us.
References
AGI – Artificial General Intelligence
http://adaptiveai.com/faq/index.htm
http://adaptiveai.com/research/index.htm
"The Truth Machine" by James Halperin
"Existential Risks: Analyzing Human Extinction Scenarios" by Nick Bostrom
http://nickbostrom.com/existential/risks.html
"True Morality: Rational Principles for Optimal Living"
http://optimal.org/peter/rational_ethics.htm
"Why Machines Will Become Hyper-Intelligent Before Humans Do"
http://optimal.org/peter/hyperintelligence.htm
Peter Voss is an entrepreneur with a background in electronics, computer systems, software, and management. For the past several years he has been researching artificial general intelligence, and he recently founded Adaptive A.I. Inc. with the goal of developing a highly adaptive, general-purpose AI engine.