Michael Vassar, the President of the Singularity Institute for Artificial Intelligence, recently gave an interview on Accelerating Future in which he favorably mentions Marcus Hutter's work on AI:
AF: Why should someone regard SIAI as a serious contender in AGI?
Vassar: The single biggest reason is that so few people are even working towards AGI. Of those who are, most are cranks of one sort or another. Among the remainder, there is a noticeable but gradual ongoing shift in the direction of provability, mathematical rigor, transparency, clear designer epistemology and the like, for instance in the work of Marcus Hutter and Shane Legg. To the extent that SIAI research and education efforts contribute to rigorous assurance of safety in the first powerful AGIs, that is a victory as great as the creation of AGI by our own researchers.
Now that's an interesting contrast with earlier statements by Eliezer Yudkowsky, SIAI's co-founder, and Ben Goertzel, its Director of Research:
I seriously do NOT think there is any practical value to be gotten out of trying to create a pragmatic AGI system by "scaling AIXI down."

Ben Goertzel, 2007: http://www.mail-archive.com/singularity@v2.listbox.com/msg00509.html
To sum up:

(a) The fair, physically realizable challenge of cooperation with your clone immediately breaks the AIXI and AIXI-tl formalisms.

(b) This happens because of a hidden assumption built into the formalism, wherein AIXI devises a Cartesian model of a separated environmental theatre, rather than devising a model of a naturalistic reality that includes AIXI.

(c) There's no obvious way to repair the formalism. It's been diagonalized, and diagonalization is usually fatal. The AIXI homunculus relies on perfectly modeling the environment shown on its Cartesian theatre; a naturalistic model includes the agent itself embedded in reality, but the reflective part of the model is necessarily imperfect (halting problem).

(d) It seems very likely (though I have not actually proven it) that in addition to breaking the formalism, the physical challenge actually breaks AIXI-tl in the sense that a tl-bounded human outperforms it on complex cooperation problems.

(e) This conjectured outperformance reflects the human use of a type of rational (Bayesian) reasoning apparently closed to AIXI, in that humans can reason about correlations between their internal processes and distant elements of reality, as a consequence of (b) above.

Eliezer Yudkowsky, 2003: http://www.mail-archive.com/agi@v2.listbox.com/msg00862.html
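For readers who haven't seen the formalism, a rough statement of AIXI's action-selection rule (in the spirit of Hutter's definition, with notation simplified here) makes the Cartesian assumption of point (b) explicit:

$$ a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \,(r_k + \cdots + r_m) \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)} $$

Here the a's are actions, the o's and r's are observations and rewards, and the inner sum ranges over environment programs q of length ℓ(q) run on a universal machine U. The environment is, by construction, a separate program that consumes the agent's actions and emits its percepts; nothing in the formalism allows q to contain the agent itself, which is exactly the "separated environmental theatre" Yudkowsky describes.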
AIXItl is a different story. It's computable, and is vastly less useful than Novamente. It's a ridiculous algorithm really, since at each time step it searches an infeasibly large space of possible programs. It's useful purely for theoretical purposes.

Ben Goertzel, 2003: http://www.mail-archive.com/agi@v2.listbox.com/msg00765.html
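To make the combinatorics concrete, here is a minimal Python sketch, my own illustration rather than Hutter's actual construction (which also wraps the search in a proof system that certifies each candidate's claimed value): an AIXI-tl-style agent bounded by program length l and runtime t would, at each step, consider every binary program of length up to l. Even before running anything for its t steps, simply counting the candidates shows why this is infeasible:

from itertools import product

def candidate_programs(l):
    """Yield every binary string of length 1..l, each a candidate program."""
    for n in range(1, l + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def num_candidates(l):
    """Closed form for the count above: 2^1 + 2^2 + ... + 2^l = 2^(l+1) - 2."""
    return 2 ** (l + 1) - 2

if __name__ == "__main__":
    # The enumeration is only tractable for tiny length bounds:
    print(sum(1 for _ in candidate_programs(10)))  # 2046
    # At any realistic bound, the per-timestep search space is astronomical:
    print(num_candidates(100))  # 2535301200456458802993406410750, ~2.5e30

This per-step enumeration over roughly 2^l programs, each simulated for up to t steps, is the "infeasibly large space" Goertzel refers to.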
Not to mention that *Kolmogorov complexity is completely irrelevant to intelligence*.

Eliezer Yudkowsky, 2008: http://www.sl4.org/archive/0811/19505.html
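For context, the quantity being dismissed here: the Kolmogorov complexity of a string x relative to a universal Turing machine U is the length of the shortest program that outputs it,

$$ K_U(x) \;=\; \min \{\, \ell(p) \;:\; U(p) = x \,\} $$

It is incomputable, which is one reason AIXI, whose prior weights each environment q by 2^{-ℓ(q)}, is a theoretical construct rather than an algorithm.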