Nov 28, 2008

I can't believe he didn't write that one.

The movie Vanilla Sky, the anime series Scrapped Princess, and the X-Files episode The Post-Modern Prometheus have one thing in common (besides being worth watching): they show so much of the handwriting (plot devices, idiosyncrasies in dialogue, characterization, setting...) of individual authors that it's hard to believe they aren't based on actual works by those authors.
Vanilla Sky is easy to guess, but can you guess the other two? Vernor Vinge and Bob Sheckley, respectively.

In my eyes, though, that doesn't make them rip-offs. More of an homage.

Nov 2, 2008

Why Is There Anything, Rather Than Nothing At All?

In the last few years, algorithmic information theory and the many-worlds interpretation of quantum mechanics have given me a sort of half-baked intuition that we, as a civilization, have the concept of nothingness wrong. To me, nothingness means the lack of any specification, or description, or restriction, and therefore implies the plenitude of all possible forms of existence. A void, a vacuum, utter silence, a blank slate, is something that needs to be described, or specified. I can't put it any better than that currently, but somehow the question of why the "universe" exists seems a bit like an un-question to me now; it's the result of the complete absence of restrictions on existence. This lack of restriction, or description, seems to me the most natural, intuitive, or simple state imaginable.

Sep 21, 2008

The Case Of The Missing Volatilities: A Disillusioned Capitalist's Rantings

A while ago, someone approached me with a business inquiry. He told me he was working as an investment consultant for Austrian savers who, being both risk-averse and unhappy with low yields from savings accounts, were, in his words, looking for something like a "leveraged savings account", a combination of high returns and near-zero volatility. That request made me cringe, and I politely let him know that I wasn't interested in doing business with him.

I somewhat pity the people consulting him, for they may sooner or later get what they ask for, but not what they want. What they want is a risk-free return above the risk-free rate. What they'll get is a smooth curve on some piece of paper. Because it would have been easy to give them a strategy that returns, say, 14 percent p.a. with near-zero volatility. For a few years, that is; long enough to collect some handsome fees. But you don't have to be a quant for that - even you could have done that. Here's how:

First, find some event that you judge to be reasonably unlikely - say, snowfall on Christmas Eve in Wels, Austria (my hometown). I'd say there's a one-in-ten chance of that.
Second, find someone who judges more or less likewise and is willing to bet (not necessarily fairly) on the occurrence of the event.
Third, put your money in a savings account, and every year, accept the bet to the full extent of your savings.
You're basically insuring your partner against the occurrence of a moderately rare event with all your capital. In essence, you're running an insurance company with a single policyholder.
As long as things go well, you'll earn something like a 14 percent return every year (betting odds plus the savings account). How likely is it that things go well? Your chance of surviving for five years is about 59 percent, and the median lifetime of the strategy is about seven years. Meanwhile, you'll chalk up almost zero volatility.
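
If you don't believe those numbers, here's a quick Python sketch of the arithmetic, assuming the one-in-ten event above; the survival figures drop right out:

    import random

    def lifetime(p_event=0.10, max_years=50):
        """Number of years until the insured-against event first occurs."""
        for year in range(1, max_years + 1):
            if random.random() < p_event:
                return year
        return max_years

    runs = sorted(lifetime() for _ in range(100_000))
    print("P(survive 5 years):", sum(r > 5 for r in runs) / len(runs))  # ~0.59
    print("median lifetime:", runs[len(runs) // 2], "years")            # ~7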

This may be the right place to mention that, according to a study by Credit Suisse/Tremont, the average return from hedge funds between 1994 and 2005 was about 14 percent before fees. And an often cited number for the average hedge fund's lifetime is five years. Now of course not all hedge funds close shop with total, or even large, losses. And most funds have far from zero volatility. So, uhm, I'm not implying a blatantly simple analogy here.

But I want to sharpen your senses to the fact that it's quite easy to smooth the volatility out of a capital growth curve. For a while. So if someone shows you a nice, smooth fifteen-year curve of 14 percent returns, please don't be impressed. He may just be the lucky one out of four or five similar idiots, or frauds, who started out at the same time. And for God's sake don't hand over your money, at least if you can't take a qualified look under the hood to see what produces the returns. Also be aware that in many cases your counterpart may be fooled by his own elaborate squaring-of-the-circle constructions and may genuinely believe in his strategy, so the terms idiot and fraud are probably too harsh in most cases. He may lose all your money even if he's a nice guy.

The practice of gambling for small returns against large losses is generally frowned upon, since the pain of a loss generally grows faster than linearly with its size. That means a tenfold bigger loss usually is more than ten times as bad for you.
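
With a standard concave utility function, say u(wealth) = log(wealth), you can check this in a few lines (a toy example, not a model of anyone's actual preferences):

    from math import log

    wealth = 100.0
    pain_small = log(wealth) - log(wealth - 1)    # pain of losing 1 unit
    pain_big = log(wealth) - log(wealth - 10)     # pain of losing 10 units
    print(pain_big / pain_small)                  # ~10.5, i.e. more than tenfold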

And this brings us to last week's events. Once again, the Fed and the US government demonstrated their determination to smooth out growth curves (GDP, the stock market, employment) at the price of a small chance that things turn out really bad in the end (e.g., the Fed losing political independence). This has been going on since at least 2000, and even if the chances are small, someone is sooner or later going to lose their bets. The Fed has just acted like a man who, having learned he wouldn't get a raise this year, decided to make up for it by quitting his health insurance plan.

When I was in my early twenties (ca. 1998), I turned from a moderate leftist into a firebrand libertarian. The demise of the Eastern Bloc, Japan's stagnation, and the New Economy seemed to me proof of the superiority of free markets. I'm still much of a libertarian today, but I no longer make the mistake of confusing libertarian theory with the actual policies of self-declared free-market advocates. I recommend reading books on the Soviet system printed in the Soviet Union. It does wonders at giving you a new perspective on things you may read here and now.

Aug 15, 2008

The Most Horrible Weapon Ever Conceived.

Maybe it was the Foehn wind we had yesterday that did it (Foehn has an awful effect on people's moods; check the Wikipedia entry); maybe the fact that I hadn't slept well the night before contributed its share; but at some point in the afternoon I had an idea of how to build what is, in a very general sense of the term, the most horrible weapon in the Universe.

(Oh, and I had had to stand in line for a really, really long time that morning!)

It's basically the same goal-function-hypothesizing AI described in one of my earlier posts, fed with a specification of your enemy's goal function.

Only with the sign reversed, such that the maximizer becomes a minimizer.
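
In toy form (nothing here is anywhere near an actual implementation; it's just the sign flip made explicit):

    from typing import Any, Callable

    World = Any  # stand-in for a description of a world-state

    def egfm(enemy_goal: Callable[[World], float]) -> Callable[[World], float]:
        """Turn a hypothesized enemy goal function into the thing the
        agent maximizes: its negation."""
        return lambda world: -enemy_goal(world)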

A paperclip maximizer might wipe you out; but the above system will do to you whatever is the most horrible thing that can happen, in your eyes, (not necessarily limited just) to you. Do you care about your life? How about your dependents' lives? Their sanity? Humanity? Sentient life in the Universe? Life in the Universe? The Universe?

Too bad.

A somewhat comforting thought is that the enemy's-goal-function-hypothesizer-and-minimizer is highly unlikely ever to be used, or even built, due to its devastating side effects. (Your value function and your enemy's value function may be at least a tiny bit positively correlated.) A fictional Egg Foam (from EGFM, Enemy Goal Function Minimizer) may, however, come in quite handy as a plot device in hard Singularity-related SF. The Blight from Vinge's A Fire Upon the Deep was merely expansionist; Egg Foam, on the other hand, is wicked...

Jul 31, 2008

The Colors Of My Digits

For as long as I can remember, I have perceived digits as having their own colors:

1 2 3 4 5 6 7 8 9 [in the original post, each digit is rendered in its own color]

Zero is glassy-transparent, like acrylic, so actually it's more like digits having textures. When I see digits written down somewhere, they do not appear to be vividly colored, just subtly shaded. But when visualizing numbers, I find it hard not to perceive the individual digits being colored in the above way.

I doubt that this rudimentary form of synesthesia is beneficial to my ability to deal with numbers. Many of the colors are quite similar to each other, so I tend to misremember phone numbers, sums, or dates in a specific way.

Jul 10, 2008

Disclaimer: Universe is NOT simple

"Why is the Universe so simple ?" asks the mathematician, or more generally, why is simple mathematics (school mathematics) so successful at describing the Universe ?

The Universe, however, is generally not simple to begin with. Rather, there are some aspects of the Universe (which we happen to be interested in) that can be computed easily. Put one sheep next to one sheep and you get two sheep (in the short term); so "putting next to each other" is isomorphic to a simple "+" operator. But what about the eddies and whorls in a ravine? Cloud patterns? And I haven't even begun to ask *creative* questions here.

An arbitrary aspect of the Universe, even one with a low-Kolmogorov-complexity description, is typically very difficult to compute; a short description does not imply a cheap computation. We as a species, shaped by evolution, happen to be interested in many of the simple-to-compute aspects.
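
A toy example of that gap, using Wolfram's Rule 30 cellular automaton: the rule fits in one line, yet (as far as anyone knows) there's no shortcut past simulating it step by step:

    def rule30_step(cells):
        """One step of Rule 30: new cell = left XOR (center OR right)."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31
    row[15] = 1  # start from a single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)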

The question should rather be phrased: Why does the Universe have any simple-to-compute aspects at all?

Jun 8, 2008

The Great Goodbye, Everett style.

In a post-singularity future, people may, with the help of superintelligent AI, have almost arbitrary levels of control over their environment and their own mental and physical constitution. This near-omnipotence, however, will presumably not extend to other people's minds and bodies. (Argument from symmetry, though a game-theoretically stable society model where each participant has unrestricted control over everyone else seems at least remotely conceivable. We'll leave that aside for later speculations.)

I think it's plausible to assume that those post-singularity people can be modeled as agents trying to maximize (minimize) their respective goal functions on the universe. Given their, in principle, almost infinite capability to maximize those functions, the biggest factor holding back individual agents may turn out to be other, similarly powerful agents with incompatible goal functions. Since we're talking about an agent model that clearly separates preferences from beliefs, Aumann's results don't provide a safety hatch here. Clearly, the agents can compromise, and arguments from symmetry again prevail, but this may, in the face of the otherwise immense capabilities of the agents, result in huge discounts from the theoretically achievable level of goal-function fulfillment.

That is, the posthumans may get in each other's way, and there's no way to rationally resolve the situation without massively stomping on some (or all) people's goal functions.

How likely is it that people's preferences may intrinsically differ after the technological singularity? If those people have evolved through self- (or mutual) modification from humans, or have otherwise inherited, possibly through deliberate design, human values and tastes, then I'd regard this as very likely indeed. I may be pessimistic here, but my personal lifelong experience is that people attach, at times radically, different values to certain aspects of the world, themselves, and other people, and no amount of rational insight is ever going to make those values compatible. So I think the problem I'm discussing here is real and realistic.

Emigration may be a solution, and is a cherished human tradition that may extend into a post-singularity future. Of course, people's value functions will often put strong emphasis on the presence of (certain) other people, so walking away will in many cases be worse than gritting your teeth and getting on with each other. But in some cases, getting out of each other's hair may be the optimal thing to do.

Now posthumans surely have some radical opportunities to venture out into unexplored territory, and the silentium universi may mean that there's a lot of room to settle down. Starw(h)isps traveling at a notch below light speed can carry virtualized passengers for billions of parsecs within a short subjective time. But even this may not be far enough, as those other annoying posthumans with incompatible value systems will presumably have access to the same means of expansion and may be determined to use them, if not now, then maybe later in the future. For their destinies to separate, the opposing parties will have to make their future light cones disjoint. Cosmic acceleration from dark energy may make this possible simply by traveling far enough, fast enough, but it has at least two disadvantages: it creates an asymmetry between those deciding to move away and those who "inherit the earth", and it may be impractical for posthumans to wait long enough for the accelerating expansion to catch on - given post-singularity computing capacities, and a foreseeable tendency to virtualize your supporting hardware, even a nanosecond wait in objective time may be unbearable on a subjective scale.

As you may have guessed by now from the title of this post, there's probably another, much simpler way for posthumans to part ways. This method depends on the assumed validity of the so-called many-worlds interpretation of the superposition principle of quantum mechanics. As a note of caution, however, I'd like to point out that the superposition principle relies on the linearity of quantum mechanics, which may turn out to be false, since general relativity is non-linear. (That is, a linear combination of two solutions describing world-states is not necessarily a valid solution itself.) The basic idea is for all parties to condition their further existence on the output of a quantum random number generator. By accepting to inhabit only mutually exclusive subsets of possible worlds, all participants can have symmetric access to a constrained resource (e.g., they can all "inherit the earth" in their respective Everett branches.) The superposition principle also ensures that their fates are separated once and for all, without the danger of any one party deciding to overturn the deal at a later point in time. Furthermore, this approach can be implemented on a very short timescale.
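
In toy form, with an ordinary pseudo-random generator standing in for the quantum coin (so this sketch only shows the bookkeeping, not the physics):

    import os

    def everett_emigrate(parties):
        """One shared 'quantum' coin toss; under many-worlds every outcome
        occurs in some branch, so each party wins somewhere. (Modulo bias
        is ignored for this toy.)"""
        return parties[os.urandom(1)[0] % len(parties)]

    parties = ["Alice", "Bob", "Carol"]
    print("In this branch,", everett_emigrate(parties), "inherits the earth.")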

As I believe in the mutual incompatibility of many, if not most, human tastes, values, and likings, as well as in the stability of those tastes, values, and likings under reflection, I believe posthumans will use one method or another to eventually part ways. (The fact that I spend some time thinking about such problems shows that I believe I would do so, doesn't it?) Everett emigration seems to be a rather straightforward way to achieve that. We do not, however, currently understand quantum mechanics, general relativity, and the superposition principle well enough to literally bet our lives on them. (Otherwise, we could already choose to implement it using current technology, that is, a quantum random number generator and some hydrogen bombs...)

Could this be an explanation of the Fermi paradox? If technological civilizations reliably undergo technological singularities, and post-singularity societies tend to "atomize" themselves, universes may in fact on average be relatively quiet places. But I don't really hold this argument to be valid, as even isolated posthumans may be very noisy. Furthermore, I think the "Everett barrier" is in fact not that impermeable in the presence of a sufficiently powerful AI, so transhumans with compatible tastes might join each other even if they originated in different Everett branches - but that's some stuff to discuss in a follow-up to this post.

Jun 6, 2008

Reconstructing the Dow

Recently I had to reconstruct the Dow Jones Industrial Average for backtesting purposes. This turned out to be more painful than anticipated. In case you need to do this, I recommend you start out with this document detailing the historical composition of the DJIA. From this, create a .txt file containing the dates and types of change over the relevant time interval. Write some code to read this into your preferred programming environment (MatLab in my case) and create a data structure containing the composition of the Dow at any given point in time (daily closings, in my case). Then look up as many ticker symbols as possible at Yahoo Finance and the Dow's Wikipedia entry. For the rest, I googled, though there's probably some sort of central list of tickers maintained somewhere. I'll list below what I could find for the years between 1990 and 2008. Note that many of the ticker symbols today denote different companies.

3M Company: MMM
AT&T Corporation: T
AT&T Incorporated: T
Alcoa Incorporated: AA
Allied-Signal Incorporated: ALD (ALD today stands for Allied Capital Corporation; Allied-Signal merged with Honeywell)
AlliedSignal Incorporated: ALD (again, today Allied Capital Corporation)
Altria Group Incorporated: MO
Altria Group, Incorporated: MO
Aluminum Company of America: AA
American Express Company: AXP
American International Group Inc.: AIG
American Tel. & Tel.: T
Bank of America Corporation: BAC
Bethlehem Steel: BS (delisted)
Boeing Company: BA
Caterpillar Incorporated: CAT
Chevron: CVX
Chevron Corporation: CVX
Citigroup Incorporated: C
Coca-Cola Company: KO
Du Pont: DD
DuPont: DD
Dupont: DD
Eastman Kodak Company: EK
Exxon Corporation: XOM
Exxon Mobil Company: XOM
Exxon Mobil Corporation: XOM
General Electric Company: GE
General Motors Corporation: GM
Goodyear: GT
Hewlett-Packard Company: HPQ
Home Depot Incorporated: HD
Honeywell International: HON
Honeywell International Inc.: HON
Intel Corporation: INTC
International Business Machines: IBM
International Paper Company: IP
J.P. Morgan & Company: JPM
J.P. Morgan Chase: JPM
J.P. Morgan Chase & Company: JPM
Johnson & Johnson: JNJ
McDonald's Corporation: MCD
Merck & Company, Inc.: MRK
Merck & Company, Incorporated: MRK
Microsoft Corporation: MSFT
Minnesota Mining & Mfg: MMM
Navistar International Corp.: NAVZ.PK (only on Pink Sheets; delisted from NYSE in 2006)
Pfizer Incorporated: PFE
Philip Morris Companies Inc.: PM
Phizer Incorporated: PFE
Primerica Corporation: ??? (I have no idea.)
Procter & Gamble Company: PG
SBC Communications Incorporated: SBC (delisted after the AT&T merger)
Sears Roebuck & Company: S (S now stands for Sprint)
Texaco Incorporated: TX (TX now stands for Ternium)
Travelers Group: TRV (TRV now stands for Travelers Companies, an unrelated company!)
USX Corporation: X
Union Carbide: UK (delisted)
United Technologies Corporation: UTX
Verizon Communications Inc.: VZ
Wal-Mart Stores Incorporated: WMT
Walt Disney Company: DIS
Westinghouse Electric: WX (WX now stands for WuXi PharmaTech)
Woolworth: WOW (probably)
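
To make the bookkeeping above concrete, here's a Python sketch of the composition data structure (my actual code was MatLab; the change-file name and format here are just my invention):

    import csv
    from datetime import date

    def composition_on(day, initial_members, changes_file="djia_changes.txt"):
        """DJIA membership on a given day, from a date-sorted change list.
        Each line of the file: YYYY-MM-DD,ADD|REMOVE,Company Name"""
        members = set(initial_members)  # composition at the start of the interval
        with open(changes_file) as f:
            for row in csv.reader(f):
                if date.fromisoformat(row[0]) > day:
                    break  # the file is assumed sorted by date
                (members.add if row[1] == "ADD" else members.discard)(row[2])
        return members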

Next, link the company names to their respective ticker symbols, and download stock quotes for all the ticker/date combinations. In MatLab, this is most conveniently done using this routine by Marcelo Scherer Perlin, which accesses free Yahoo datasets. For delisted stocks, or for intraday data, you'll have to resort to proprietary datasets. Opentick may be a good free alternative, but I haven't gotten around to looking at it more closely.
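
For illustration, the same step in Python, with the yfinance package standing in for the Yahoo interface (which has changed since; delisted tickers will simply come back empty today):

    import yfinance as yf

    def daily_closes(tickers, start="1990-01-01", end="2008-12-31"):
        """Daily closing prices for a list of ticker symbols."""
        return yf.download(" ".join(tickers), start=start, end=end)["Close"]

    print(daily_closes(["MMM", "GE", "IBM"]).tail())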

Finally, you'd have to reconstruct the index from the individual quotes. Here's an explanation of how the DJIA is calculated. You'll notice you need to know historical values of the so-called Dow Divisor, which, as far as I know, are impossible to obtain in electronic format with reasonable effort. Fortunately, you can backward-compute them from any given single value by requiring that splits, dividends, and changes in the DJIA composition have no effect on the index value. This is admittedly somewhat pointless, as historical index data can be readily obtained, but it can serve as a sort of checksum for the individual quotes you have.
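The back-computation itself is a one-liner once you see it: the divisor d is defined by index = (sum of component prices) / d, and on every split or composition-change day it is adjusted so the index doesn't jump. A sketch (the divisor value and price sums below are made up):

    def divisors_backward(known_divisor, events):
        """Divisor in force before each adjustment event, newest first.
        `events` holds (price_sum_before, price_sum_after) pairs, oldest
        first, one per split/composition-change day."""
        d, history = known_divisor, []
        for sum_before, sum_after in reversed(events):
            d = d * sum_before / sum_after  # invert d_after = d_before * after/before
            history.append(d)
        return history

    # e.g. a 2-for-1 split that lowered the daily price sum from 1500 to 1450:
    print(divisors_backward(0.1256, [(1500.0, 1450.0)]))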


Jun 4, 2008

BoCon Reaches 1000

Three cheers for Matthew Skala of Bonobo Conspiracy: BoCon today passed the 1000-strip mark. Amazingly, Matt managed to post a strip each and every day for the last three years, while working on his PhD in computer science. (He successfully defended a few days ago; nice timing.)

May 25, 2008

Who's the Ezra Gurney in Cowboy Bebop?

In case you hadn't noticed, the characters in Cowboy Bebop map nicely onto those in Captain Future, though it's not a bijective mapping. (And by mapping I don't mean the characters are similar; I mean they're analogous.)

The Captain <-> Spike Spiegel
Greg, Otho <-> Jet Black
Professor Simon (Ken Scott?) <-> Edward
Ul Quorn <-> Vicious
Joan Randall <-> Faye Valentine
Yiek, Oak <-> Ein

Which leaves out Ezra Gurney, a fairly major character. The best I can come up with is Alfredo ("Punch") from Big Shot. He sports a moustache, he gets quite a bit of screen time, and he fills the crew in on the baddies.

May 16, 2008

Radical Luddism (or maybe not).

Yesterday, thirst-stricken in front of an organic food store, I bought a bottle of Lauretana mountain spring water, which brags, among other things, about being bottled using only natural gravity, without any pressure. If taken as a rejection of pumping technology, developed about two-and-a-half millennia ago, this is pretty radical even by most Luddite standards; if, however, it is intended merely as a criticism of artificial gravity, it is rather conservative.

I'm tempted to take Lauretana's logic one step further and just leave some empty bottles outside to be rained into, which is probably as low-tech as you can possibly go.

May 10, 2008

Web 2.0 Company Name Magnetic Poetry

I just spent three weeks in the Bay Area.

This hot new startup is called ____, which is ____ for ____. [In the original post, the blanks were an interactive fill-in-the-blank.]

Alternatively, you can also pick basically any word from one of the Dravidian languages.

Or you can just grow a handlebar moustache, wear bell-bottoms and hang a sign around your neck that says Style is timeless.

Apr 6, 2008

A Strategy for Maximization of Global Iron Production employing Universal Artificial Intelligence.

It's Monday, 4 AM, and singularitarianism is asleep. The SL4 archive doesn't show a message for the last 7 days, which I don't believe, since they had an all-time high of 650 messages last month. The AGIRI mailing list archive ends with a "MindFORTH" message by A.T. Murray in February, acceleratingfuture gives a 404, and the SIAI blog has 4 (in words: four) entries so far this year. Meanwhile, Eliezer is blogging on the question of whether lookup tables have consciousness. (Footnote: To me, a static, two-dimensional spatial pattern is a dynamic, one-dimensional spatiotemporal pattern (= a Turing machine tape) with the temporal axis rotated into a spatial dimension. So what's the difference?) Nothing much from Peter de Blanc, Nick Hay, Shane Legg, or Michael Wilson, either. (But I like your new WordPress template, Shane.) All this doesn't exactly bolster my hopes for the Friendly AI problem being solved in the near future. Well, there was a message on SL4 last month titled Friendliness SOLVED!, but something kept me from reading it. Maybe it was the boldface, maybe the exclamation mark.

Besides, the website of the publishing company where I'm supposed to submit my manuscript has apparently gone defunct over the weekend, or so it seems after half an hour of re-submitting, and it's still dark outside, and it's raining, and I've had my coffee already, so I can't go back to sleep, so I say hey, why not write a bit on Friendliness.

Eliezer once formulated the challenge of bringing AIXI to maximize the number of iron atoms in the universe. (Why iron?) AIXI is an example of a reinforcement-learning-based agent architecture, meaning the agent gets a cookie whenever it behaves in a way we think is fruitful. It's generally impossible to make such agents do something more difficult than coaxing the reinforcer (us) into handing out cookies by whatever means possible - imagine, for illustration, you're on a deserted island with a gorilla and a jar full of cookies. Current reinforcement learners are far too stupid to push us around, but this is not the case for the hypothetical infinitely-powerful AIXI. And maximizing the number of iron atoms is probably much more difficult than, say, secretly putting all humans into a VR Matrix where things look as if the number of iron atoms had been maximized. (Or, less elegantly, putting a gun to our heads.) On the other hand, the iron problem is at least an (arbitrarily) specified problem, whereas the more important problem of building a Friendly AI is not even clearly defined. (We don't know what we really want.) So the iron problem can serve as a little finger exercise to warm up for the real challenge.

One way to make a reinforcement learner more controllable is to internalize the reward structure via a goal function. A goal function is a function that takes a description of the world and computes how "similar" it is to an arbitrary "goal" state - basically, just how good a certain world is. Instead of maximizing the number of cookies, the agent tries to maximize the goal function. AIXI could be modified to incorporate such a goal function.
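
Schematically, the difference looks like this (a toy sketch of where the goal function plugs in, not anything resembling AIXI):

    from typing import Any, Callable

    World = Any  # stand-in for a description of the world

    def reinforcement_agent(reinforcer: Callable[[Any], float]):
        """Picks whatever action makes the external reinforcer hand out the
        biggest cookie - including actions that merely manipulate the reinforcer."""
        return lambda actions: max(actions, key=reinforcer)

    def goal_directed_agent(goal: Callable[[World], float],
                            predict: Callable[[Any], World]):
        """Internalized reward: rank actions by the goal function's value on
        the predicted resulting world, not by an external reward channel."""
        return lambda actions: max(actions, key=lambda a: goal(predict(a)))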

The challenge here, however, is to explicitly define a goal function that says "Maximize the number of iron atoms". To formulate such a function, we might have to define what an iron atom is, and that definition might, in fact, turn out to be flawed, just as many earlier physical concepts have turned out to be flawed. It's like trying to get an agent to extinguish fire in terms of phlogiston. The agent, if smart enough, may decide there is no such thing as phlogiston IRL, and therefore it can't, and shouldn't, do anything about that blazing orphanage over there.

So you cannot straightforwardly write down a few pages of axioms describing a ca. 1870 system of atomist physics and then go on to define the number of iron atoms to be maximized. Neither can you go "all the way" and formulate an axiomatic system based on our contemporary understanding of multi-particle wavefunctions, since a) this will make it very difficult to specify what an "iron atom" is in this axiomatic system - in fact, only slightly less difficult than specifying what a "Rolex" is in terms of iron atoms - and b) our contemporary understanding will, in the long term, turn out to be just as flawed as earlier systems.

This doesn't mean that maximizing the number of iron atoms is impossible, or nonsensical, like computing the last digit of pi. Iron atoms, like porn, do exist, even if we can't give a rock-solid definition. Unfortunately, telling AIXI to maximize "those, you know, little thingies" will not work, since to understand that command, AIXI would not only have to have a good understanding of the human mind, but also a goal function that says: "Do what humans want you to do." Now go ahead and define human and want. There's a hole in my bucket...

Nevertheless, this points us already in the right direction. We again write down our atomistic system of physics and the goal "Maximize the number of iron atoms!", but we quote it. Then we go on and define the following goal function: "Maximize the goal function of the agent who would say such a thing (quote), that is, who would give this text and this goal function to an AIXI." Specifying what an agent, a goal function, and AIXI are is not all too difficult. Now, in order to maximize this goal function, AIXI will have to speculate about the goal functions of agents who believe in atomistic systems of physics and say they want to maximize "iron atoms". What makes them tick? What kind of people are they? What experiments might they have conducted, and what reasoning processes might they have employed to arrive at their worldview? The answer could range from a downfallen civilization of robot creatures who need iron for reproduction to something as outrageous as us humans today. What's common to all these people is their somewhat poorly articulated desire to maximize the number of those little metal thingies.

Note that this is by no means the only information about the universe that AIXI has access to. Being smarter, and presumably more powerful, than we are, AIXI will quickly discover the "real" laws of physics governing the universe, as well as insights about the nature and plausibility of various agent structures. This general level of world-understanding is absolutely necessary to conduct the above speculation. For example, the text quoted in the goal function could have been produced by people who want to minimize the number of iron atoms in the universe, but are so neurotic they always ask for the opposite of what they really want. That this is not impossible, but relatively implausible compared with the more straightforward interpretation, can only be seen with some level of insight into the general way the world works.

My current best shot at making AIXI generally Friendly goes vaguely in the same direction. Instead of an atomistic system, one could imagine using the totality of human cultural artefacts (starting with the internet?) and instructing AIXI to reason about the motivations of the agents who created such things. ("First result: they crave pr0n." OK, start with something else than the internet.) One of the open questions here is whether we want AIXI to care about hypothetical creators of those artefacts (subjunctive humans) too, or just the very people who actually created that stuff. My current guess is the former.

Mar 18, 2008

The Art of Mishearing (Die Kunst des Verhörens)


A friend once remarked that the French speak French, and in what a French kind of way they do it! I guess he'd say something similar about the English, as would most Austrians. Consequently, the art of mishearing foreign words is widely practiced, and not confined to song lyrics. (Know what Austrians mean when they speak of golden-red rivers? Think "woof".)
So today I was asking the girl at the newsstand whether they had the Economist. She didn't know, tried to ask her coworker, and, well, you can guess the rest... I had to pretend to fall into a coughing fit and thanked them with a wave.

Mar 9, 2008

ExaFLOPS in 2012

Sandia and Oak Ridge recently received a $7.4M grant to "conduct the basic research required to create a computer capable of performing a million trillion calculations per second, otherwise known as an exaflop" (link).
"In this amazing and expanding universe !" I'm tempted to add to that millions trillions, but what I'm even more tempted to do is a back-of-the-envelope calculation of a folding@home-style distributed computing project using 8th-generation gaming consoles ("PS4s").
For a nicely parallel algorithm, you can currently milk around 67 GFLOPS from a PS3 under Linux with minimal contortion. If you could access the RSX GPU (which is locked under Linux), that figure would probably increase about fourfold.
Historically, peak console CPU+GPU computing power increased roughly 60-fold in the 4.3 years between the release of the PS1 and the PS2, and a further roughly 100-fold (the exact architecture of the RSX is unknown) in the 7.7 years to the release of the PS3. That combines to an average doubling time for peak performance of a little less than a year, somewhat faster than the 18-month doubling time for real performance commonly associated with Moore's law (which, strictly speaking, is about transistor counts per die).
There is currently some speculation about the next generation of consoles being released a few years earlier than the six-year cycle we've seen so far. Let's just pull a release date of mid-2011 out of thin air; "Moore's law" then points to a tenfold increase in real computing power, which looks flimsy compared to the above figures. So if we extrapolate the past trend for peak power, and assume we can use the new architecture as efficiently as the current one, we get a more handsome 40-fold increase, which translates to roughly 10 TFLOPS per console.
So you would need 100,000 consoles running simultaneously to break the exaFLOPS barrier. That figure is somewhat smaller than the total number of folding@home clients installed as of 2008, but larger than the number of PS3 clients for that project. And this figure assumes the client is running 100% of the time, which for a gaming console is unlikely to be true. (Running a 150W console 24/7 costs you about $80 in electricity per year, depending on where you live; other factors are noise, and computing resources used for things like gaming.) But if an organization can find a cool project and has the necessary PR skills, it should be possible to lay hands on that many clients within one or two years after hardware release. All in all, this makes it look possible to do computations at more than one exaFLOPS before the end of 2012, six years earlier than the 2018 horizon for a Sandia / Oak Ridge mainframe.
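
The whole envelope as a script (all inputs are the estimates above; the five-year span is the rounded gap from the PS3 launch to the guessed mid-2011 release):

    from math import log2

    ps3_gflops = 67 * 4                       # Linux figure, ~4x if the RSX were usable
    doubling = (4.3 + 7.7) / log2(60 * 100)   # average doubling time, PS1->PS3 (years)
    years = 5.0                               # PS3 launch (late 2006) to mid-2011, rounded
    ps4_gflops = ps3_gflops * 2 ** (years / doubling)
    consoles = 1e9 / ps4_gflops               # 1 exaFLOPS = 1e9 GFLOPS
    print(f"doubling ~{doubling:.2f} yr; PS4 ~{ps4_gflops / 1000:.0f} TFLOPS; "
          f"~{consoles:,.0f} consoles needed")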

Feb 29, 2008

OK, let's please all agree that's a hoax.


According to the Telegraph, "The director of a Norwegian museum claimed yesterday to have discovered cartoons drawn by Adolf Hitler during the Second World War."

While the stereotype of the freakish doujinshi artist is well established, Hitler is admittedly something of an extreme case, well known for his obsession with high fantasy, his spending weeks at a time in his basement, his unhealthy interest in his underage niece, and his frequent use of hate speech.

But of course, for someone who remembers the Schtonk affair, this triggers all hoax alarms. And looking at that drawing of Disney's Pinocchio, I really do hope this is a hoax. Otherwise the mental associations with that ruthless, all-consuming machinery of mass manipulation will forever soil for me the picture that I have of that cute, innocent little guy Hitler.

Feb 4, 2008

Hibernation

[Photo: the chapel mentioned below.]

We humans don't hibernate (beginning statements with "we humans" rocks; try it), but maybe we have, like some other non-hibernating mammals, some rudimentary remnants of hibernation in our body plan. I, for my part, cannot ignore the fact that every January I sleep ten hours a day, gain weight, feel stingy, and procrastinate with all my might. Two months and not a single posting. Time to get out of the pyjamas.

The picture, by the way, is of the chapel around the corner from my mother's house in Altmuenster, Sound-of-Music-Land, taken in late December.