Xeriar wrote:
> I don't put much stock in grey goo nanoswarms, simply because unicellular life has had aeons to dominate every single energy-using niche on the globe. Sometimes you just need a bigger machine to do a job effectively.

Gilthan wrote:
> Any entity approaching making "grey goo" would be greatly to be feared or respected, as self-replicating technology in general is not to be underestimated.

Serafina wrote:
> Still, a lot of the applications of Nanomachines we see are more or less bullshit (Rapid fabrication, most "nanoweapons" and by extension Grey Goo).

That'd be partially true if (1) all bots were universally the same minuscule size, none bigger than a few microns, or (2) the designers were morons.
Self-replicating technology is of astronomical potential power if it avoids the limits of biological life, for example as an artificial mixed ecology including some bigger units. Even earth's crust contains, on average, energy equivalent to 110 times its own mass in TNT explosive due to its trace thorium content, meaning sufficiently advanced technology could in theory sustain itself eating rock. (The roughly 1 W/m^2 that inefficient biological primary producers capture from sunlight is weak compared to the ~2.7 GW-years/m^2 of available thorium on land or ~7 GW-years/m^2 of available ocean deuterium, let alone extraterrestrial resources.)
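The "110 tons of TNT per ton of rock" figure can be sanity-checked with a rough calculation. The thorium abundance and energy-per-fission values below are assumed ballpark literature figures, not taken from the post itself:

```python
# Rough sanity check of the ~110x TNT-equivalent claim for average crust.
# th_ppm and fission_mev are assumptions (typical literature estimates).
MEV_J = 1.602e-13        # joules per MeV
AMU_KG = 1.661e-27       # kilograms per atomic mass unit

th_ppm = 6.0             # assumed average crustal thorium, grams per ton of rock (~6-10 ppm in estimates)
fission_mev = 200.0      # assumed energy per Th-232 atom bred and fissioned, MeV

e_per_kg_th = fission_mev * MEV_J / (232 * AMU_KG)   # joules per kg of thorium
e_per_ton_rock = (th_ppm / 1000.0) * e_per_kg_th     # 1 ppm = 1 g per ton of rock
tnt_tons = e_per_ton_rock / 4.184e9                  # 1 ton TNT = 4.184e9 J

# Comes out to roughly 120 with these inputs, the same order as the ~110 claimed.
print(f"{e_per_kg_th:.1e} J per kg Th -> ~{tnt_tons:.0f} tons TNT per ton of rock")
```

With a slightly lower assumed thorium abundance the result lands almost exactly on 110, so the post's figure is consistent.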
Whether applied constructively (barely comprehensible quadrillions of kilograms of wealth) or destructively (trillions of insect-sized smart missiles), having the ratio of productive output to human labor input approach infinity is a change whose potential is impossible to overstate.
On the other hand, AI superintelligences are liable to be developed first, since they are the entities most likely to accomplish the vast challenge of creating self-replicating technology in the first place. After all, on a logarithmic scale of brainpower and intelligence where a fly would be 1, a mouse 3, a dog 5, and a human 6, it would be a doubtful coincidence for AI to climb from 1 to 6 and then simply hover there for long, rather than gaining further orders of magnitude in both speed and complexity of thought soon after. And that, superintelligent AI, is the ultimate threat, or the ultimate hope and promise.
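The log scale above roughly tracks neuron counts. The counts below are assumed ballpark figures (not from the post), normalized so a fly lands at 1:

```python
import math

# Illustrative neuron counts (rough assumptions, not from the post).
neurons = {"fly": 2.5e5, "mouse": 7e7, "dog": 2.3e9, "human": 8.6e10}

# Normalize log10(neurons) so a fly scores ~1, matching the scale in the text.
scores = {a: math.log10(n) - math.log10(neurons["fly"]) + 1
          for a, n in neurons.items()}
for animal, score in scores.items():
    print(f"{animal}: {score:.1f}")
```

With these inputs the scores come out near 1, 3, 5, and 6.5, so the fly/mouse/dog/human ladder in the text is at least internally plausible as a log scale.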
Serafina wrote:
> Why? Simple: these tasks require energy, and the machines are individually quite vulnerable.
> How are they going to get all this energy to assemble/disassemble anything of any notable size (say, a human or even a wrench)?
> And if they are used in an uncontrolled environment, what is going to protect them from wind, rain and all the other nasty stuff going on on this planet? Even if it does not destroy them, it will easily split up the swarm and destroy its ability to work together.
Neither assumption has to be valid for an artificial ecology of self-replicating machines.
Larger bots can be nuclear-powered, per the earlier example of each ton of average rock in earth's crust containing trace thorium with energy potential equivalent to 110 tons of TNT, or via the deuterium in seawater. Barring unknown breakthroughs such as micro-scale aneutronic fusion (secondary reactions would probably still produce too much penetrating neutron or high-energy gamma radiation, even for rad-hardened hardware far more resistant than biological life), the smaller bots would draw their energy from the bigger entities in the artificial ecology.
One example would be the larger bots flooding areas within meters or kilometers with inductive energy transfer, using coupled magnetic resonances. Alternatively, the bigger nuclear bots could resupply the smaller ones with synthetic chemical fuel made from the carbon and hydrogen in air, water, and rock, as such fuel can have 10x the energy density of the poorer-quality, water-diluted food consumed by biological insects. For instance, a 100 kg mass of a hundred million 1-milligram bots, each about 1 mm across with a micro jet engine, could most certainly go anywhere insects can fly or anywhere a 100 kg human can go. If used as a weapon, only 1 of those 100 million needs to deliver a poison payload, since the world's most toxic poisons are lethal in microgram quantities. Coordinated AI swarms with explosive charges could defeat barriers or armored suits, and, as long as they're made sufficiently EMP-hardened, the result can be more than effective.
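The swarm arithmetic above checks out; here is a quick verification, with assumed round-number energy densities (the fuel and insect-food values are illustrative, not from the original):

```python
# Sanity check of the 100 kg swarm figure and the ~10x fuel-vs-food claim.
n_bots = 100_000_000             # one hundred million bots
bot_mass_kg = 1e-6               # 1 milligram each
swarm_mass_kg = n_bots * bot_mass_kg
print(swarm_mass_kg)             # ~100 kg, matching the figure in the text

# Assumed energy densities (rough literature-style values):
fuel_mj_per_kg = 43.0            # kerosene-like synthetic hydrocarbon
insect_food_mj_per_kg = 4.3      # heavily water-diluted sugars, e.g. nectar
print(fuel_mj_per_kg / insect_food_mj_per_kg)   # ~10x
```

The 10x ratio is mostly a consequence of food being largely water; dry sugar alone would narrow the gap considerably.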
As for the very smallest bots, pure nanobots alone: how do you think the death toll of the 1918 pandemic would have differed if it had instead done what no mere natural biological pathogen could, spreading dormant to essentially the entire populace without causing noticeable symptoms in anyone, and then, by remote control or a preset atomic-clock timer, suddenly releasing a few micrograms of toxin in everyone at the same instant?
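For scale, the total payload such a scenario implies is remarkably small. The population and dose figures below are rough assumptions, not from the post:

```python
# Back-of-envelope payload mass for the hypothetical scenario above.
# Both inputs are assumptions: ~1.8 billion people alive in 1918, and a
# lethal dose of ~2 micrograms (the order of the most potent known toxins).
population_1918 = 1.8e9
lethal_dose_kg = 2e-9            # 2 micrograms per person
total_toxin_kg = population_1918 * lethal_dose_kg
print(total_toxin_kg)            # ~3.6 kg of toxin across the whole swarm
```

A few kilograms of payload distributed across the entire world population is what makes the scenario qualitatively different from any natural pathogen.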
Of course, the potential for self-replicating technology to be used constructively is greater still than its mere destructive capability, and, as argued earlier, the challenges of developing it appear so high that general AI arriving first is the more likely outcome.