Now this sounds to me like "narrow" AI, of a sort, but it is certainly generalizing, learning-type behaviour. The helicopter monitors the activity of an expert helicopter pilot and then, compensating for environmental differences (wind, etc.), performs the same maneuvers itself.
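The "watch an expert, then imitate" idea described above is sometimes called behavioral cloning. As a loose, toy illustration only, here is a minimal sketch in Python: a hypothetical 1-D hover task where the learned policy maps position error and wind to a control action. Every name, gain, and data point here is invented for illustration; the actual helicopter research used far richer dynamics models and learning methods.

```python
# Toy behavioral-cloning sketch (hypothetical 1-D hover task).
# All gains and demonstration data are made up for illustration.

def fit_linear_policy(demos):
    """Least-squares fit of action = a*error + b*wind from (state, action) pairs."""
    s_ee = s_ww = s_ew = s_ea = s_wa = 0.0
    for (err, wind), action in demos:
        s_ee += err * err
        s_ww += wind * wind
        s_ew += err * wind
        s_ea += err * action
        s_wa += wind * action
    # Solve the 2x2 normal equations directly (assumes demos vary enough
    # in both error and wind that the system is non-singular).
    det = s_ee * s_ww - s_ew * s_ew
    gain_err = (s_ea * s_ww - s_wa * s_ew) / det
    gain_wind = (s_wa * s_ee - s_ea * s_ew) / det
    return gain_err, gain_wind

# Hypothetical expert pilot: corrects position error and leans into the wind.
EXPERT_GAIN_ERR, EXPERT_GAIN_WIND = -2.0, -0.5

demos = [((e, w), EXPERT_GAIN_ERR * e + EXPERT_GAIN_WIND * w)
         for e in (-1.0, 0.0, 1.0, 2.0)   # position errors observed
         for w in (-0.5, 0.0, 0.5)]       # wind conditions observed

# The learned gains let the policy compensate for wind the way the expert did.
gain_err, gain_wind = fit_linear_policy(demos)
```

Because the toy demonstrations are exactly linear in error and wind, the fit recovers the expert's gains; real demonstrations would be noisy and the model far less tidy.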
"The time from here to Singularity depends sensitively on the particulars of what we humans do during the next decade (and even the next few years)."
Archive for August, 2008
The Singularity Summit page is now open for registration.
We invite you to join our extraordinary group of visionaries in business, science, technology, design, and the arts, as our community explores this exciting topic. Your participation offers a world of powerful ideas, a unique networking opportunity, and access to an exclusive directory of your peers.
As I write this there are 22 speakers planned(!), including Ray Kurzweil, Marvin Minsky, Vernor Vinge and many other very special presenters.
There is also a business workshop on emerging technology, and lots of really great announcements that still need firming up before they can be made public.
I must say, I’m really stoked about this year’s conference. This feels like a turning point in Artificial General Intelligence research – the year we will look back on as the one when the tide turned from an undercurrent to a groundswell. Between announcements by Intel, overviews in the New York Times, and new government focus on artificial brains and the cognitive sciences, this can no longer be considered a fringe science.
Last week Justin Rattner told the Intel Developer Forum that “machines could even overtake humans in their ability to reason, in the not so distant future.” And just yesterday, a press release was issued regarding the military’s foray into cognitive science research, with artificial brains the size of a cat’s on the horizon.
This week John Tierney at the New York Times enters the fray with an article discussing Vernor Vinge and his novel Rainbows End, and a shorter follow-on blog post examining the feasibility of his ideas (i.e., the Singularity).
In Vernor Vinge’s version of Southern California in 2025, there is a school named Fairmont High with the motto, “Trying hard not to become obsolete.” It may not sound inspiring, but to the many fans of Dr. Vinge, this is a most ambitious, and perhaps unattainable, goal for any member of our species.
The article goes on to link to the IEEE’s armchair technical review of the same novel.
For those of us familiar with the concepts, the news here is that they are being presented to the public eye for the first time. News indeed.
And the sooner that Singularity concepts enter the public discourse, the sooner the field will attract the resources to make it possible. It is arguable that public attention may not be a good thing (the more people talking about it, the more likely that nations and organizations will launch their own artificial general intelligence efforts, and the less control there will be over the consequences), but given the scope of the undertaking, the attention is inevitable. Let us hope that we can avoid all the bad alternatives.
And what would happen to us if the machines rule? Well, Dr. Vinge said, it’s possible that artificial post-humans would use us the way we’ve used oxen and donkeys. But he preferred to hope they would be more like environmentalists who wanted to protect weaker species, even if it was only out of self-interest. Dr. Vinge imagined the post-humans sitting around and using their exalted powers of reasoning:
“Maybe we need the humans around, because they’re natural critters who could survive in situations where some catastrophe would cause technology to disappear. That way they’d be around to bring back the important things – namely, us.”
Intel isn’t the only company to have discovered the implications of artificial intelligence research. Wired blog reports:
The Pentagon’s crash program to create an artificial brain is just about up and running. And, if it all goes as planned, we could see an electronic chip that mimics the “function, size, and power consumption” of a cat’s cortex some time in the next decade.
The Singularity ’08 summit is well on its way, and to get ready for it, I am re-publishing selected videos from last year’s event. I hope that, if you are new to the concepts being discussed on this blog, these will entice you to join us at the event. If you are more familiar, but haven’t seen these, you should. For more videos from this conference (and others), visit the Singularity Institute.
Eliezer Yudkowsky, one of the world’s foremost researchers on Friendly AI and recursive self-improvement, presents three different schools of thought on the Singularity. Eliezer created the Friendly AI approach to AGI, which emphasizes the importance of the structure of an ethical optimization process and its supergoal, in contrast to the common trend of seeking the right fixed enumeration of ethical rules a moral agent should follow. At the 2007 Singularity Summit, he introduced the three schools of thought currently associated with the word “Singularity,” their core arguments and bolder conjectures, while noting where they support or contradict each other.