brains versus the exponential

Nov. 1st, 2011 | 12:33 pm

In my 2010 paper on Codd's self-replicating computer I estimated that it would take at least 1000 years for the machine to replicate. If we left it running we might expect it to complete in the year 3010.

But of course computing power is always increasing, so how soon should we expect it to happen?

Moore's law says that computing performance doubles every 18 months. So by now (18 months after the paper came out) the machine should take only 500 years to replicate, completing in 2511.
year     duration (years)   completion date
2010     1000               3010
2011.5   500                2511
2013     250                2263
2014.5   125                2140
2016     62.5               2079
2017.5   31.3               2049
2019     15.6               2035
2020.5   7.8                2028
2022     3.9                2026
2023.5   2.0                2025
2025     1.0                2026

If Moore's law continues to hold, and we ignore any possible developments in software, then running the figures forward the best time to start is mid-2023, when the run would take about 2 years, giving the earliest expected completion date of 2025. Check back then to see if this came true!
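The projection behind the table is easy to reproduce. Here is a small Python sketch (purely illustrative; it just applies an 18-month halving to the original 1000-year estimate from 2010 and searches for the start year with the earliest completion):

```python
# Moore's-law projection: run time halves every 18 months,
# starting from a 1000-year estimate in 2010.
def completion_date(start_year, base_year=2010.0, base_duration=1000.0):
    """Expected completion year if we start the run in start_year."""
    duration = base_duration / 2 ** ((start_year - base_year) / 1.5)
    return start_year + duration

# Check each 18-month step for the start year with the earliest completion.
starts = [2010.0 + 1.5 * i for i in range(11)]
best = min(starts, key=completion_date)
print(best, round(completion_date(best), 1))  # 2023.5 2025.5
```

Starting any earlier means running on slower machines; starting any later, the wait itself pushes the completion date past 2025.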

(We could keep the program running and move it onto faster computers each year, but this would only save us a couple of years, so it's hardly worth bothering; we might just as well wait for Moore's law to catch us up.)

By 2046 (I will be 70) the machine should replicate in 30 minutes, on computers 8 million times faster than today's.
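The 2046 figure is a quick back-of-envelope check, again assuming a clean 18-month doubling from mid-2011:

```python
# Doublings of computing power between mid-2011 and 2046.
doublings = (2046 - 2011.5) / 1.5           # 23 doublings
speedup = 2 ** doublings                     # ~8.4 million times faster
# The 500-year run time of mid-2011, compressed by that speedup:
minutes = 500 * 365.25 * 24 * 60 / speedup
print(round(speedup / 1e6, 1), round(minutes))  # 8.4 31
```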

It's interesting to speculate what algorithm advances might lead to self-replication happening earlier than 2025. The hashlife idea that Golly uses is extremely helpful. On stable, repeating structures like the static wiring of Codd's machine the algorithm excels, allowing it to make jumps of millions of timesteps in one go by re-using results from before.

Conceivably this could be beaten by an algorithm that was capable of analysing the function of each component in Codd's design, and making a symbolic representation of how it would work. Such a thing has been used by Heiner Marxen for analysing Turing machines (which are much like 1D cellular automata) in the search for Busy Beavers. He calls them Macro Machines. If someone manages to adapt Macro Machines to work on generic 2D cellular automata then all the work of Codd's machine could happen near-instantly, even on today's machines. Suddenly 1000 years looks a lot closer than before.


Comments

from: rickbot
date: Nov. 1st, 2011 01:08 pm (UTC)

This is a great example of the Power of Laziness - rather than doing something now that might take a thousand years, just sit on it until it becomes quick and easy.


from: ferkeltongs (Tim Hutton)
date: Nov. 1st, 2011 05:23 pm (UTC)

Yeah, the question soon becomes: is there any point applying brainpower to this problem when, in a few years' time, computing power will make it irrelevant? It's like computing power is a rapidly deflating currency.

Fortunately there are enough problems that don't fall into that category and life remains interesting.
