Darth Hoth wrote: Moore himself stated that progression according to his law was finite and would reach its limits within a decade or so.
Yes, and that was in the early 80s, and here we are still proceeding as fast as ever. People have been calling for the 'death' of Moore's law since the term was coined, but those people are short-sighted fools.
That is long before we get human-equivalent laptops.
I hate to break it to you (actually that's a lie, I find it highly amusing to break it to you), but we already have human-equivalent laptops... in a raw physicalist sense. Human synapses are roughly equivalent to ten or so transistors in analogue mode (generously, arguably you could do it with less) or a few hundred operating in digital mode (let's call it a thousand). The effective speed of a cluster of logic blocks (on current process technology) simulating a human synapse is around 50 million times faster than the biological version. The human brain has something like 100 trillion synapses. My laptop has about 2 billion transistors (Core 2 Quad plus the chipset and GPUs), enough for about 2 million digital synapse emulators. That gives a theoretical computing power equivalent to... 100 trillion synapses.
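To make the arithmetic explicit, here's a back-of-envelope sketch in Python using the rough figures above (1,000 transistors per digital synapse emulator, a ~50-million-fold speed advantage per emulator, ~2 billion laptop transistors, ~100 trillion biological synapses); these are the post's ballpark estimates, not measured values:

```python
# Back-of-envelope comparison of a laptop's transistor budget against the
# human brain's synapse count, using the rough figures quoted above.

TRANSISTORS_PER_DIGITAL_SYNAPSE = 1_000      # generous estimate per emulated synapse
SPEEDUP_OVER_BIOLOGY = 50_000_000            # logic block speed vs. biological synapse
LAPTOP_TRANSISTORS = 2_000_000_000           # CPU + chipset + GPUs, roughly
BRAIN_SYNAPSES = 100_000_000_000_000         # ~100 trillion

emulators = LAPTOP_TRANSISTORS // TRANSISTORS_PER_DIGITAL_SYNAPSE
effective_synapses = emulators * SPEEDUP_OVER_BIOLOGY

print(f"Digital synapse emulators on the laptop: {emulators:,}")
print(f"Effective synapse-equivalents (time-multiplexed): {effective_synapses:,}")
print(f"Ratio to the brain's synapse count: {effective_synapses / BRAIN_SYNAPSES:.1f}x")
```

Run it and the 2 million emulators, each time-multiplexed 50 million times faster than wetware, come out at exactly the brain's ~100 trillion synapses.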
Of course it's not that simple; to store the state of all those synapses you'd need about half a petabyte of memory with massive bandwidth to the processing logic, and I'm omitting a lot of other technical requirements for brain simulation. The point, though, is that you're measuring computing power the conventional way (fully programmable, always-available, perfectly accurate, fully reliable, mostly sequential logic) while measuring brain power a completely different way (barely programmable, horribly inaccurate, massively-parallel-only, critically sequential-step-limited, unreliable wetware with a very poor duty cycle). The human brain can only achieve a thousandth of the raw computing power required to (naively) simulate it, because only a minute fraction of your synapses actually fire in any given millisecond.
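For illustration only, here's a sketch of the storage figure and the duty-cycle point. It assumes roughly 5 bytes of state per synapse (which is what the half-petabyte figure implies) and that about one synapse in a thousand fires in any given millisecond; both numbers are just the rough estimates implied by the paragraph above:

```python
# Illustrative figures for synapse state storage and the brain's duty cycle.

BRAIN_SYNAPSES = 100_000_000_000_000   # ~100 trillion
BYTES_PER_SYNAPSE = 5                  # assumed state per synapse (gives the ~0.5 PB figure)
FRACTION_FIRING_PER_MS = 1 / 1_000     # assumed "minute fraction" active each millisecond

state_bytes = BRAIN_SYNAPSES * BYTES_PER_SYNAPSE
print(f"Synapse state storage: {state_bytes / 1e15:.1f} PB")

# A naive simulation updates every synapse every millisecond; the brain only
# "computes" with the synapses that actually fire in that millisecond.
naive_updates_per_sec = BRAIN_SYNAPSES * 1_000
brain_updates_per_sec = BRAIN_SYNAPSES * FRACTION_FIRING_PER_MS * 1_000
print(f"Naive simulation rate: {naive_updates_per_sec:.1e} synapse-updates/s")
print(f"Effective brain rate:  {brain_updates_per_sec:.1e} synapse-events/s")
print(f"Brain achieves ~1/{naive_updates_per_sec / brain_updates_per_sec:.0f} of the naive rate")
```

Which is where the "thousandth of the raw computing power" comes from.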
In actual fact, the main thing preventing your laptop from running a human-equivalent AGI is probably the bandwidth bottleneck between the processor and the bulk storage, and I'm not even sure about that (various AI people are working on very clever indexing, caching and progressive pattern-matching schemes that a biological brain could not hope to replicate).
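As a purely illustrative example of why that bottleneck matters, suppose the ~0.5 PB of synapse state from above had to be streamed off bulk storage at an assumed 100 MB/s sustained (the bandwidth figure is my assumption, not anything from the post):

```python
# Rough illustration of the processor-to-bulk-storage bottleneck: how long one
# full pass over the synapse state would take at an assumed disk throughput.

STATE_BYTES = 500_000_000_000_000      # ~0.5 PB of synapse state (from above)
DISK_BANDWIDTH = 100_000_000           # assumed ~100 MB/s sustained throughput

seconds = STATE_BYTES / DISK_BANDWIDTH
print(f"One full pass over the state: {seconds / 86_400:.0f} days")
# Hence the interest in indexing/caching schemes that only ever touch a tiny,
# relevant fraction of the stored patterns at any one time.
```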
When it becomes cost effective to develop 3D logic arrays, we will do so, at which point heat dissipation becomes the overwhelming challenge.
I was not aware that we presently had the engineering to easily make this step and use it to continue computer development at the present rate.
Stacked die demonstrators have been around for a couple of decades, but they've never been cost effective, particularly with air cooling. Eventually we'll start using them.
Theoretical studies of various nanomechanical and nanoelectronic devices suggest that there is plenty of room to go smaller. Also, superconducting logic has the potential to increase effective switching speeds by one to two orders of magnitude.
There is, however, as yet no practical means of using such methods now or in the near future. There may never be.
Wrong. Superconducting logic is practical today, but currently the cost/benefit of the R&D and cryogenics is not there. If conventional silicon grinds to a halt, it will get renewed attention.
Nanotechnology represents such an engineering challenge that it may well remain purely theoretical; we do not know where to start.
Wrong, plenty of research groups are making great strides in all areas of nanotechnology. It's just taking longer than the insanely optimistic initial enthusiasts liked to think.
Is there presently any computer complex that can match even the raw processing speed of a single human brain?
Yes, we're into the petaflop domain now, while the effective raw processing speed of the human brain (that it can bring to bear on any given problem) is likely measured in teraflops only.
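For scale, a hedged order-of-magnitude comparison: taking "petaflop domain" as roughly 1e15 FLOPS and the brain's effective rate as on the order of 1e14 operations per second (the latter is just what the duty-cycle argument earlier implies, not a measured number):

```python
# Order-of-magnitude comparison: a petaflop supercomputer vs. the brain's
# effective rate implied by the duty-cycle argument above. Both are rough.

PETAFLOP_MACHINE = 1e15          # ~1 petaflop supercomputer
BRAIN_EFFECTIVE_OPS = 1e14       # ~100 teraops effective (assumed, see earlier sketch)

print(f"Supercomputer advantage: ~{PETAFLOP_MACHINE / BRAIN_EFFECTIVE_OPS:.0f}x "
      "the brain's effective raw rate")
```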
It will still not be "godlike", insofar that its abilities will not look like pure magic to us.
Sufficient deductive capability looks like magic of an oracular or divinatory nature. Frankly this is pretty much a given, but of course no one can say exactly what it will be like until it exists. Physical 'magic' requires a technology advantage, and history suggests that it doesn't actually take that much of a gap to be 'sufficiently advanced'. Whether an AGI will come to possess such an advantage over humans depends on the circumstances of its development, but I strongly suspect it will in short order.
Some Singularity wankers seem to think that the day after the first "archailect" sees the light, we will all be consumed by an omnivorous nanoswarm, or similar ludicrous scenarios.
The only thing unlikely about that is the timescale. My guess is that the R&D would take a few months (perhaps even a few years), and of course the goo will be more like a cross between super-algae and a horribly infectious disease than an advancing wall of ooze, but there's nothing fundamentally implausible about it.
The computer would still be bound by conventional physics
Yes, and? If you're facing an enemy with a major technological advantage over you, knowing that they're still bound by 'conventional physics' doesn't really help. The main relevant bound is infrastructure, but given an indefinite amount of tireless, super-genius labour, that's less of a challenge than it seems at first.
Assuming that we build it, we can destroy it before it gains the ability to affect reality.
No, you really can't. Simply by interacting with it, you are giving it a channel to interact with reality. And in most development scenarios, it will escape onto the Internet in short order.