Dec 17, 2010

The 34-Year-Old Scientist


Watson discovered the structure of DNA when he was 24. Heisenberg formulated the uncertainty principle when he was 25. Newton claimed to have developed his gravitational theory when he was 24. Darwin embarked on the Beagle when he was 22.

I could go on for hours like this.

But wait. Crick was 37 when he discovered the structure of DNA. Schroedinger was 38 when he published on wave mechanics. Newton cast his gravitational theory in mathematical form when he was 37, his earlier insights likely being purely speculative. Darwin finalized the theory of selection when he was 47 (Wallace being 33 at the time).

Of course, I may be biased here. Any number of hand-picked examples and counterexamples doesn't prove a point. One has to look at the data that's out there.

Falagas et al. (2008) ask "At what age do biomedical scientists do their best work?" and answer with the following age histogram of the five most highly cited articles of each scientist in a random sample of 300 bioscientists:




(The corresponding histogram for the single most cited paper looks noisier, probably reflecting people's tendency to cite summary reviews written in later years.)
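
For concreteness, here is a minimal sketch (with made-up numbers, not the actual Falagas et al. data) of how such an age histogram can be computed once you have each scientist's birth year and the publication years of their five most cited papers:

    from collections import Counter

    # Hypothetical records: (birth year, publication years of the 5 most cited papers)
    scientists = [
        (1950, [1982, 1985, 1988, 1991, 2001]),
        (1962, [1994, 1996, 1999, 2003, 2007]),
        # ... one entry per sampled bioscientist
    ]

    # Age at publication for every top-cited paper
    ages = [year - birth for birth, years in scientists for year in years]

    # Tally into 5-year age bins and print a crude text histogram
    bins = Counter(5 * (age // 5) for age in ages)
    for start in sorted(bins):
        print(f"{start}-{start + 4}: {'#' * bins[start]}")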

Costas et al. (2010) perform a more thorough analysis for scientists working at the Spanish National Research Council. Their results indicate that while the number of publications per scientist per year increases somewhat with age, the expected number of citations per publication decreases. However, their study lumps together all age groups younger than forty. (Top, Low, and Medium refer to three performance classes of researchers).



Finally, the widely cited and awesomely titled Kanazawa (2003) "Why productivity fades with age: The crime–genius connection" examines the age at which each of 280 famous scientists made their single key contribution to science:


Quite depressing overall, but it seems you don't have to go fishing before you turn 40 (Einstein was 41 in 1920). At 34, chances are 50:50 that the best of your work still lies ahead. Even better (or worse, depending on your personal situation), the corresponding curve for the 72 scientists in Kanazawa's dataset who never married looks significantly different:


(This might be somewhat confounded by the unknown fraction of (closet) gay scientists in the sample.) Newton, Erdös, Tesla (and Anton Bruckner) immediately come to my mind as straight men who denied themselves the pleasures of female company, and were productive well into their forties, or later. (The catch being that Tesla and Newton became funny at around 50; Erdös was born that way; and Bruckner, well, that depends on your opinion on watertight underwear.) Crick, Schroedinger and Darwin were all married.

Nov 20, 2010

Credit Assignment & The Singularity

There is no limit to what a man can do so long as he does not care a straw who gets the credit for it.

(Charles Edward Montague)


Technological breakthroughs are often difficult to assign to a particular person, time, or place. Some methods are "in the air" and are reinvented independently in short succession. Financial and emotional motives complicate matters further. Inventors may regard each other's ideas as special cases of their own, more general insights. In 1910 the Smithsonian Institution congratulated the Wright brothers "for bringing [Samuel P. Langley's] aerodrome ... to the commercial and practical stage", and only in 1942 issued an official statement that the Wrights, not Langley, had invented the airplane. Only in 1914, after 8 years of court struggles and strong opposition from Glenn Curtiss, was the Wrights' airplane patent declared valid. I myself am currently witnessing, in my own field of work, a similar patent war, albeit on a much smaller scale.

However, with the Singularity, things are different, as a friend recently pointed out to me over dinner in Berkeley. (Of course, the same idea had occurred to me already, independently!) If a breakthrough in AI results in the creation of a very powerful entity, this entity will likely find it trivial to sort out who contributed how much to its coming into existence. If the entity is benevolent, it will likely take care of proper credit assignment. Spin, PR, old-boy networks, and lawsuits are probably no match for superintelligence, nanotechnology, and non-invasive brain scanning.

With that in mind, the hopeful AI researcher can focus his attention on maximizing the chances of a benevolent Singularity. That means he should publish his work. Credit assignment can be of higher order, such as when his published ideas enable someone else's breakthrough. In the context of a normal invention, this is less desirable than holding back ideas and achieving the breakthrough himself, maybe a little later. In the context of the Singularity, however, earlier is better, all else being equal, given the ~60 million people dying per year. While his competitor may have beaten him to the finish line, he enabled the competitor's early success by releasing his ideas. That the competitor won't acknowledge this is no longer a problem after the Singularity, and speeding things up by a mere week may save a million lives.
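
A quick back-of-the-envelope check of that last figure, using the post's own ~60 million deaths per year:

    # ~60 million deaths per year, spread over 52 weeks
    deaths_per_year = 60e6
    deaths_per_week = deaths_per_year / 52
    print(f"{deaths_per_week:,.0f} deaths per week")  # roughly 1.15 million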

So if you have an idea that could matter for AI, and the Singularity, set it free. You know it wants to be.

Sep 11, 2010

What I, as a Transhumanist, Believe.



As a transhumanist, I believe a world without fear is possible.

As a transhumanist, I believe the world is very, very, very, big.

As a transhumanist, I believe I can become something else, and still be myself.

As a transhumanist, I believe in the power of truth.

As a transhumanist, I believe in happy endings.

Aug 9, 2010

Quote of the Day

"Reality is a non-ergodic partially observable uncertain unknown environment in which acquiring experience can be expensive."

Marcus Hutter, Feature Markov Decision Processes

Jul 28, 2010

Quote of the Day

"You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God."

Julie from Crystal Nights by Greg Egan

Jul 18, 2010

Is Google Growing Linearly?

Could make sense if (1) a fixed percentage of total ad spending is moved to Google every year and (2) the amount not yet moved over is much smaller than the total amount that will eventually be moved. Not too unreasonable. The fit minimizes relative error (least squares on relative residuals). Extrapolation to 2015 gives 48-56 billion USD in annual sales and 10% year-on-year sales growth, to which the overall growth in ad spending may contribute a few percent.
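
For the curious, here is a minimal sketch of such a fit in Python, using approximate annual revenue figures (in billion USD; not the exact series behind the plot above, so the numbers are illustrative only). Weighting each residual by 1/y turns ordinary least squares into a minimal-relative-error fit:

    import numpy as np

    years = np.array([2004, 2005, 2006, 2007, 2008, 2009])
    revenue = np.array([3.2, 6.1, 10.6, 16.6, 21.8, 23.7])  # billion USD, approximate

    # Linear model revenue = a + b*(year - 2004), fitted with weights 1/y,
    # i.e. least squares on relative residuals.
    x = years - 2004
    b, a = np.polyfit(x, revenue, 1, w=1.0 / revenue)

    rev_2015 = a + b * (2015 - 2004)
    growth_2015 = b / rev_2015  # year-on-year growth implied by linear growth
    print(f"2015 revenue ~ {rev_2015:.0f} billion USD, growth ~ {100 * growth_2015:.0f}%")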

Feb 22, 2010

Happy Baby Bunny Pony



At Less Wrong, Alicorn (or is it really Eliezer?) discusses how the fact that some (pictures of) baby animals are cuter than all (pictures of) human babies fits with human evolutionary psychology. We are, after all, supposed to find our own offspring the cutest of all species. According to the theory of supernormal stimuli, proposed by Konrad Lorenz in the 1940s, this is because the bunny possesses the features that make human babies cute, like big eyes, a small nose, and a rounded forehead, to an even greater extent than any human baby does. This is certainly true, but why isn't our perception of such features maximized for values found in actual human babies?

I think the answer lies in what isn't there in the bunny. If we ever encountered a real, living human baby with the eye/nose proportions and forehead curvature of the bunny pictured above, we'd be grossed out. This makes sense from an evolutionary point of view (if not from an ethical one), as the carrier of such body proportions would have no real chance of survival. The "cuteness ratios" are there, but our overall gestalt perception kicks in and adds a big "gross!" factor. So mother nature need not bother to give our ratio- or feature-based cuteness detection the shape of an inverted U, as low-level shape and texture perception will take care of any outliers. Within the region of "normal" babies, the bigger the eyes, the better. The bunny, however, is clearly not of human gestalt, is furry, and has long ears; it therefore doesn't trigger any "icky" response and can make our feature-based cuteness perception go berserk.

What's true for babies also holds true for babes. Men find slender legs, big eyes, a small chin, etc., attractive in women, but there's a limit to that. Manga and anime, however, feature characters with extreme body proportions that still manage to be highly attractive to some people. While Scott McCloud proposes in his book Understanding Comics that the heavily abstracted visual style of cartoon characters serves primarily to allow a broad range of readers to recognize themselves in them, I would like to add that, by introducing a clear non-humanness, it also makes it possible to explore regions of cuteness-feature space that are off-limits to naturalistic art forms.

You can of course combine it all into ueber-cute cartoon animal babies.

As a transhumanist, all this offers me a nice glimpse of a future where we will have reengineered our perception (and possibly our appearance) to accommodate an affective dynamic outside the human range.