Starglider wrote:Wyrm wrote:A 'von Neumann machine' has a specific definition, and organic life fits that definition like a glove.
Semantically, no they do not, because they are not machines.
Your semantics whoring does not impress me. Von Neumann was not concerned with how the machine was created in the first place, only with how an entity following mechanistic laws could reproduce itself without some vital force or an infinite regression of smaller self-copies.
Starglider wrote:That makes bacteria useful for establishing absolute lower limits - in that anything evolution can make, intelligent designers can make at least as well.
The current track record of intelligent designers does not bear you out.
Starglider wrote:Furthermore while macroscale organic life exists all of its reproductive processes and nearly all of its resource processing operate at the molecular scale, so it can tell us nothing at all about how machines designed to employ macroscale replication will perform.
You realize that there are some forms of life that are macroscopic, do you not?
Starglider wrote:Finally organic life can of course tell us nothing about functioning in interplanetary space.
Irrelevant. In broad strokes, the problems are the same: resource acquisition and avoiding harm in an environment not specified ahead of time. A fixed configuration of matter behaves in a particular way — it will be able to do some things and not others. Magic constructors violate everything we know about every physical process we have so far encountered.
Starglider wrote:No, it is not. Unicellular organisms reproduce by fission. The reproductive mechanisms of multicellular organisms are developed from that; they start with a single cell which slowly grows into a new organism. It is required to maintain basic metabolic function throughout the entire process and is required to be a viable independent organism from birth onwards.
Mechanical self-replication is nothing like that - in fact it would take massive effort to emulate this grossly inefficient system, for no gain at all. Robotic self-replicators build full-size copies of themselves which are generally inert until completed and then fully functional upon activation. This includes mental capabilities; robots can trivially copy their mind-state, organisms cannot.
Irrelevant details. Von Neumann was trying to specify how a totally mechanistic self-replicator could be possible without some vital force or infinite regression of self-specification, and he realized that only three things were necessary: a "universal constructor", a regulatory mechanism, and a blueprint of both. You could make the "universal constructor" and regulatory mechanism from the blueprints, and then copy the blueprints.
Translating this to biological organisms, we find the blueprints (DNA), the "universal constructor" (ribosomes, proteins, and attendant metabolic cycles) and the regulatory mechanism (interactions between the DNA and the proteins), exactly as von Neumann predicted. That organisms mash together von Neumann's discrete steps and parts is irrelevant. That they cannot trivially copy their mind state is irrelevant. Life displays all the required properties to be called a von Neumann machine. Period.
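To make the trick concrete, here is a minimal software sketch of von Neumann's move (a classic quine; my illustration, not anything from his actual paper): the blueprint gets used twice, once interpreted to drive construction and once copied verbatim as data, and that dual use is exactly what kills the infinite regression.

```python
# Von Neumann's scheme in miniature: 'blueprint' is used twice, once
# interpreted (the % substitution builds the offspring's source) and once
# copied verbatim (as the raw string embedded in that source). No copy
# ever needs to contain a copy of a copy of itself.
blueprint = 'blueprint = %r\nprint(blueprint %% blueprint)'
print(blueprint % blueprint)  # the two code lines print an exact copy of themselves
```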
Starglider wrote:von Neumann's real achievement was creating a (basic) functional model showing how such a machine could actually operate.
Wrong. Von Neumann showed that pre-scientific vital forces or sperm-homunculi woo were unnecessary; he provided the grounding of life/self-replication as a mechanistic process.
Starglider wrote:Organic life actually faces far fewer problems, because evolution targets simple reproduction, not interstellar travel.
Organisms have already mastered the characteristic property of a von Neumann machine: self-replication in an unpredictable environment.
Starglider wrote:However this is irrelevant, because the characteristics of the solution are not solely dependent on the shape of the problem, they are the product of that and the tools available to solve it, and the toolset for self-replicating robots is entirely different (and vastly more capable).
Why do you assume this? You can only pack so many tools with your VNM. Even if you include the tools as programs, you still have to have the tools to make the tools.
Starglider wrote:How is that in any way a response to my point? 'A constructor', as in a single physical organism or machine, is optimised. Robots can build completely dissimilar robots and even self-modify as required. Organisms cannot.
Fell asleep during biology class, did you? You developed from an undifferentiated ball of completely identical cells. A Trypanosoma modifies its own genetic program to put a different coat on itself every two weeks for decades. Parasites have some of the most fiendishly complicated life cycles on the planet. The kind of stuff life does is easily as complicated as anything we've done.
Starglider wrote:You refuse to recognise this basic distinction.
I refuse to recognize a distinction you fail to show exists.
Starglider wrote:You are assuming all technology is exactly like bacteria, a ludicrous and transparently false position.
Strawman. How is looking at life and realizing that an autonomous technology will have to overcome similar challenges "assuming all technology is exactly like bacteria"?
Starglider wrote:We already have industrial robots that build other robots.
But not themselves. Or anything outside of a narrow range of capabilities.
Starglider wrote:They are completely dissimilar models. Being able to construct an exact copy is actually a rare case; occasionally useful, but most of the time it is more efficient to have a small number of robots dedicated to building more robots with the other robots all optimised for different tasks.
A VNM has to eventually get around to building itself, moron. It's part of the definition. This capability has yet to be demonstrated in anything we have built.
Starglider wrote:Even organic life, limited as it is, hit on a very limited version of this with some insects; worker ants form the vast majority of the colony, but do not reproduce. Most mammals have a very limited degree of task specialisation keyed off the gender distinction that evolved to support sexual reproduction. Of course with the horrid limitations of available selection pressure, chemical cascade expression mechanisms (for differentiation during growth) and unstable DNA as the information storage medium, this differentiation could not proceed very far.
More proof that you fell asleep during biology class. Ants and mammals are multicellular organisms. Each one is a colony of self-replicators that specialize themselves to particular tasks, sacrificing their own further reproduction for the good of the greater organism they're part of. What you fantasize your wonderful VNM will do, organisms have already perfected.
Starglider wrote:In short, they are not 'fanciful' capabilities, they are not only already demonstrated in industrial settings, they are how any sane engineer would expect robots to work.
But not without the input of refined materials at one end; nor are there any examples able to reproduce themselves, or to work without the support of a civilization's infrastructure and their ultimate creators' intervention.
In a strange, uncontrolled, unpredictable environment without intelligent support, who has solved the problem of gathering resources and replicating themselves? The answer is LIFE!
Again, I have an extant model, while you have a hyperbolic fantasy.
Starglider wrote:Again you are acting as if the limitations of organic life were relevant, ignoring even the (relatively primitive) computer you are using to access this BBS. DNA is reasonably compact at the bit level (only a few orders of magnitude off optimal), but it is horribly redundant at both the message level (lots of junk DNA, lots of duplicate copies of proteins, no data compression) and the organism level (humans have 100 trillion copies of their own blueprints... for no good reason at all). A robot of equivalent size and complexity to a human, storing a conservative 100 copies of its own blueprints with lossless compression in moderately efficient optical storage, would use approximately one quintillionth the mass and volume that a human uses for the same purposes. You could include a million different robot designs for different purposes and environments, and that's still a trillionth of the mass a human devotes to DNA.
All that text and you still haven't answered the damn question: HOW MUCH DO YOU FUCKING NEED!!
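For reference, here is the kind of back-of-envelope answer I'm asking for (all figures are my assumptions, not anyone's measured spec):

```python
# Back-of-envelope only; assumed figures, not anyone's measured spec.
human_bp = 3.2e9                  # approx. base pairs in one human genome copy
genome_bytes = human_bp * 2 / 8   # 2 bits per base (A/C/G/T), no compression
print(f"one genome copy ~ {genome_bytes / 1e6:.0f} MB")   # ~800 MB
# The unanswered question: how many MB does a complete VNM blueprint need,
# counting every tool, refinery, and fallback design?
```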
Starglider wrote:Construction limitations are more relevant, but not much. The mechanical side is straightforward; the same basic manipulators and bending/cutting/joining methods are used for the vast majority of machines.
That's ridiculously simplistic. Even without going into chemical processing, the number of ways to manipulate material is enormous. There are literally hundreds of ways to cast metals and ceramics, each raw material requiring specialized equipment and supplies. Even the seemingly simple process of bending has hundreds of specializations, such as stamping or rolling, with each raw material again requiring different equipment.
The world we live in right now is the product of all those different manufacturing techniques, designed to control the quality of the goods quite finely. If you limit yourself to only three ways to manipulate any given material (or worse, three ways for all materials), you are going to have some serious engineering issues on your hands, and your stored blueprints are going to be quite a bit more complicated than you anticipate.
Starglider wrote:Molecular nanotechnology would use a similar bootstrap process, only with more steps and higher efficiency since it can use a much wider range of chemistry and techniques such as channeled transport (don't have to do everything in the same flexible bag of water).
Explain "channeled transport". And a more varied chemistry needs varied reagents. What happens if some reagents aren't present at your destination?
Starglider wrote:However molecular nanotechnology is not a prerequisite for VNM, because the same principle works at the macroscale; building the tools to build the tools to achieve the goal. It's less efficient, to the extent that your probe may well become a multi-thousand tonne automated factory rather than the tiny probe you could produce with nanotechnology, but it is certainly workable.
I think you sorely underestimate the size of the factory you need. The modern manufacturing infrastructure we have now is hideously complicated, with many, many interdependencies. Take apart your computer sometime and think about how each of the little parts was fabricated and assembled and you'll begin to grasp the scope of the problem — and this is just for a relatively primitive computer. Don't expect it to be any easier with a sophisticated VNM.
Starglider wrote:While these are not easy problems, they are well within the scope of existing engineering skill, both as extensions of techniques we already use on contemporary spacecraft, and the numerous proposals you can find in both academia and the sci-fi community. They also apply to all interstellar craft, not just VNMs.
Fair enough.
Starglider wrote:Wyrm wrote:Also, you're imagining a universal constructor that could construct absolutely anything, when even in organic life, no such damn thing exists.
'Even in organic life'. That says it all really. You actually believe that organic life is superior to technology - not just our current technology, all technology. I believe we call that 'biowank' around here.
It's not 'biowank' to point to the extant capabilities of organic life — the fact that it can create a wide variety of chemicals that would make a chemist green with envy — and to point out that even life has its limits. The point is that every material we've created requires specialized equipment to both create and manipulate. There is no such damn thing as a "universal constructor." Life is the closest we know of, and even it's not universal.
Starglider wrote:<snip>We can accept quite stringent limitations on materials and construction methods, and still have vastly more options than a single evolved (or even genetically engineered) organism can.
You have more options, but they do not come without cost. You must carry the tools with you, or the tools to create the tools from the surrounding materials of unknown composition and unpredictable quality.
Starglider wrote:Correct, but it is almost certainly a design problem. Drexler and his successors have fairly conclusively proved that there is no physical reason why nanoscale processes of relatively low energy cannot achieve everything we can with bulk processes, in a fully programmed way - energy efficiency may be lower or higher than the bulk equivalent depending on the situation.
The latter case is what you assume when postulating a VNM designed by a civilisation lacking mature nanotechnology (or in all likelihood, general AI).
I cannot extrapolate from a vacuum. I suggest you refrain from doing the same.
Starglider wrote:I would note that the actual variety faced by such designs is not that great. Typically we are talking about extracting material from asteroids, which come in a few basic compositions, and small planets and moons, with trace atmospheres and weak gravity wells. The only major variable is relative abundance of elements.
All the statements about the richness of asteroids assume a space infrastructure designed to take advantage of them. A probe coming in from a distant star will not have this infrastructure. It will need, in broad strokes, a refinery for every single processed material it uses. Such a probe will require hundreds of distinct materials... you begin to see the problem now?
Starglider wrote:'Slavery' inherently implies unwilling service. If someone voluntarily does something for you, we would not call it 'slavery' - we don't call it 'slavery' when we domesticate animals either. Thus non-sentient VNMs aren't slavery, and the only way to make sentient ones loyal is to design them so they want to help you, so those aren't slaves either.
'Willing' and 'unwilling' imply choice. If you've short-circuited the subservient sapient AI's ability to choose between servitude and non-servitude, how can its service be a willing choice?
Starglider wrote:The debate on 'enslavement' of sentient AIs is actually rather more complex than that, in that there are some kinds of mind which you might reasonably say have 'free will' and a human-comparable self-image, and others that don't, and the latter can act as if sentient without actually being sentient in the morally relevant sense. Or at least, I am on the side of the debate that believes that to be the case, it is still an open issue, but it is beyond the scope of this thread.
Of course. Let's leave it at that.
Starglider wrote:You appear to be using a completely arbitrary definition of 'true'. Do you mean 'a robot that can make copies of itself and also any artifact or material a highly technological civilisation could make, unassisted?'. That's nice, but we've already established that it isn't actually necessary, the minimum requirement is just a development path from the initial probe to a large-scale infrastructure capable of making and launching more probes (and optionally, doing other things, e.g. preparing a planet for colonisation), that works in at least a good fraction of the solar systems that probes end up in. This is much easier and probably possible for civilisations lacking molecular technology.
You seem to strap your assertions of full AIs on the back of nanotech. By definition, nanotechnology requires manipulators on the order of macromolecules. It's not unreasonable that the most common element in your scheme will be on the same order.
Starglider wrote:Wyrm wrote:You can't fit very much intelligence in even a million atoms.
How is that relevant? Even the smallest self-replicating units in organic life have about a trillion atoms - human cells have a couple of orders of magnitude more. A ribosome doesn't need to have the genetic code incorporated into it; cells run many thousands of ribosomes from one nucleus. That fan-out ratio is limited by the horribly slow and unreliable mechanism of information transport; letting mRNA diffuse around the cell. Sensible designs for nanoassemblers have large numbers of active mechanisms electrically or mechanically connected to processors, which network to handle larger problems.
How? Electricity behaves very differently on the nanoscale. We can today build devices that work by electrons tunneling through insulators, and they are not nanotech devices! On the nanoscale, physical linkages behave more like rubber than metal rods. Thermal noise is going to be rife in your device, wreaking all sorts of havoc.
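A quick scale check on that thermal-noise point (standard constants; the comparison values are rough textbook figures):

```python
k_B = 8.617e-5                       # Boltzmann constant, eV/K
T = 300                              # kelvin, room temperature
print(f"kT ~ {k_B * T:.3f} eV")      # ~0.026 eV of random thermal jostling
# Rough comparisons: a van der Waals contact is ~0.05 eV, a hydrogen bond
# ~0.2 eV, a covalent bond ~2-4 eV. Any mechanism held together by only a
# few kT gets shaken apart, and nanoscale parts live right at that margin.
```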
Starglider wrote:Even designs where the individual assemblers move freely (which is considerably harder and probably unnecessary for nearly all practical applications) put a small amount of CPU and memory on each assembler and datalink new instructions to them en masse for each stage.
And how are they supposed to communicate? Telegraph? Semaphore flags?
Starglider wrote:It sounds like you have been arguing against some woefully misinformed grey goo proponent using the Hollywood model of nanomachines, e.g. magic macromolecules that are simultaneously general assemblers, highly efficient energy producers (photoconversion, general oxidisers, if they even bother to specify a method of powering them at all) and at least insect-level intelligence (networking into an inevitably-evil sapient AI technically optional but very popular). The real nanotechnology community is not attempting anything so ridiculous.
Yet it's exactly this kind of magic you require from your nanotech. You don't specify the processes you propose to use to extract, refine, and fabricate nanoparts, instead handwaving over the entire problem and mumbling 'um, nanotech,' and hoping I won't notice. You ignore the very real problem of the energy consumption of your nanodevices, instead handwaving over the entire problem and mumbling 'um, nanotech,' and hoping I won't notice. You ignore the very real problem of coordinating all these nanodevices, with each other or with a home AI, instead handwaving over the problem and mumbling 'um, nanotech,' and hoping I won't notice. I don't see a fucking difference between you and 'some woefully misinformed grey goo proponent.'
Starglider wrote:Electrolysis is not hard, but it's hardly relevant; realistic probes would use ion drives at minimum for everything outside low orbit.
Okay, where are you going to get the xenon or cadmium for these ion drives?
Starglider wrote:I was merely pointing out, once again, that what you seem to be proposing as an upper limit, or at least a reasonable average, is in fact an extreme lower limit (uselessly so in fact).
Extreme lower limit of what a technologically advanced civilization may be capable of, you mean. We don't even know if Daedalus would actually work, and as far as Orion goes, it requires a stockpile of atomic bombs — which are hard to create and require a large quantity of almost vanishingly rare fissionable isotopes. Unless you're proposing that the VNM construct the equivalent infrastructure of a full-fledged civilization, it's going to have to make some compromises. At least hydrogen and oxygen need no special processing.
Starglider wrote:For the purposes of the Drake equation it could take a billion years to cross the galaxy (presumably in numerous ten thousand year hops) and the basic question would still be there.
Ah, we're finally coming around to the crux of the matter, the Fermi Paradox. There are only two ways out of the Fermi Paradox:
(a) Intelligent life is rare, or
(b) Distance protects us from other intelligent life.
Actually, it would be a combination of (a) and (b): intelligent life is rare enough that the expected distance to the nearest civilization protects us from it.
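To make the combination concrete, the typical distance to the nearest civilization falls straight out of the assumed density; a toy calculation (all numbers are assumptions, including the galaxy volume):

```python
# Mean nearest-neighbour distance for randomly scattered points of density
# n is about 0.554 * n**(-1/3). Galaxy volume is a crude disc estimate:
# pi * (50,000 ly)**2 * 1,000 ly thickness.
galaxy_volume_ly3 = 8e12
for n_civs in (10, 1_000, 100_000):
    density = n_civs / galaxy_volume_ly3
    d = 0.554 * density ** (-1 / 3)
    print(f"{n_civs:>7} civilisations -> nearest ~ {d:,.0f} light-years")
```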
Von Neumann probes (VNPs) able to construct fast STL drives destroy (b) if they really do have the capabilities you claim. That must mean that the galaxy is barren of technically advanced life, or that the requisite VNPs and/or fast STL drives are impossible. At least one must be missing! Choose.
Starglider wrote:Uranium-rich asteroids are rare, but they can be surveyed relatively easily (fire tiny slugs at them, use spectroscope on resulting flash) and mined at low energy cost (again, assuming you're in no particular rush).
Why can we assume that you're in no particular rush? Every part of your VNM is going to wear out sooner or later. Solar panels have a limited life. Reactors have a limited life, too. Your micrometeoroid shield will have a limited life. Your chemical extractor and reagents will have a limited life. Your cooling system will have a limited life. Your days are numbered.
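The rush is easy to put numbers on. A toy survival model (both timescales are assumptions for illustration): a probe population persists only if it replicates faster than it wears out.

```python
# Net per-probe growth rate r = births - deaths per year. The population
# survives only if r > 0, i.e. replication time < mean time to failure.
mttf_years = 50                      # assumed mean time to failure of a probe
for rep_years in (10, 40, 60):       # assumed time to build one copy
    r = 1 / rep_years - 1 / mttf_years
    verdict = "grows" if r > 0 else "dies out"
    print(f"copy every {rep_years} y vs MTTF {mttf_years} y: {verdict} (r = {r:+.3f}/yr)")
```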
Starglider wrote:Even for this edge case of a civilisation trying to make a VNM while lacking even mature fusion technology, it is only going to slow things down a little, because fissionables are only essential for powering follow-on probes. That said such technology is going to be limited to operating in inner solar systems, which is a significant though not crippling issue.
It can be, if you can't reproduce and replace parts fast enough to stave off the inevitable encroachment of mechanical failure.
A careful reading of your source shows it to be long on promises and short on details. For instance, the basic 'chemist' component of the reproduction scheme says very little about the expected impurity of the extracted elements. The impurities will degrade the quality of the material, and therefore the performance of the machines and secondary materials fabricated from it. Aerostats are another bit of fuzzy math: they are simply assumed to have half the practical lifetime of an industrial factory, even though an aerostat would be operating under very different conditions from one. You're going to have to do better than that.
Starglider wrote:Wyrm wrote:Given that life is made of the most common elements in the universe, it's obvious that, for creating a VNM to be used in unpredictable environments like remote alien planets, the low-hanging fruit is an artificial life-form.
It is not 'obvious' at all. If you mean a genetically engineered life form, it can only manage mere survival in narrow earth-like temperature ranges, with protection from vacuum boil-off and either sunlight or a pre-existing chemical fuel compatible with known biochemistry. That's just reproducing on a planet. How is covering a planet in genetically engineered bacteria going to produce and launch new interstellar craft?
First off, you've obviously never heard of extremophiles, have you? We've found life munching on just about every energy source and living in every nook and cranny on the planet, including rocks and corrosive hot springs that would strip the skin off of you instantly. Tardigrades are the kings of extremophiles, able to survive complete desiccation and the space environment for a decade or more. If I were designing an organism from the ground up, I'd take a hint from these critters and design a whole spectrum of creatures able to survive a large variety of hostile environments.
Secondly, your disparaging of the narrow range of life assumes that machines don't live in equally narrow ranges. Your computer has a cooling fan precisely because it will malfunction if it heats up too much. Mechanical parts can suffer from vacuum welding, and so have to be specially designed for the space environment.
Finally, the organisms would build a spaceship the same way your nanobots would: by constructing larger and larger structures to help with gathering and manipulating resources. As for programming, if I'm building the organism(s) from the ground up, I can eliminate all the junk and code the instructions tightly, and genomes can code for a lot. The lowly Amoeba has 290 billion base pairs to play with, and if you work with communities of organisms, you can work with a library as sizable as that of any VNM you can build.
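Putting a rough number on that claim (standard two-bits-per-base arithmetic; genome figure as reported):

```python
amoeba_bp = 290e9                       # reported Amoeba genome size, base pairs
capacity_gb = amoeba_bp * 2 / 8 / 1e9   # 2 bits per base, uncompressed
print(f"raw coding capacity ~ {capacity_gb:.0f} GB per organism")   # ~72 GB
# A community of engineered organisms could shard a far larger library still.
```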
This is what I meant by the low-hanging fruit. If you're going to be building a machine, your best bet is to build it out of the most common elements in the universe. Once a community is established, you can then take your time building the tools it takes to finally build a spacecraft. That life forms are not called "machines" is merely semantics.
Starglider wrote:Wyrm wrote:Starglider wrote:Anything with enough AI to function as a reasonably general replicator can easily be programmed to scan for evidence of life and avoid colonising such planets. Appropriate behaviour in this situation would be to unfold a big antenna and radio or laser news back home, then possibly consuming some asteroids to build orbital sensor platforms and/or robotic lander probes.
Again, you theorize a universal replicator with no limits on raw materials and end products.
What? How is replication generality related to having the sensors, processing and comms required to perform a normal interstellar probe mission?
"Anything with enough AI to
function as a
reasonably general replicator". You've inexorably linked your AI to this "reasonably general replicator" of unspecified performance in your argument. If you don't want that, don't do it.
Starglider wrote:Including non-life in the definition is no problem. It doesn't matter if the VNMs have even a large fraction of false positives. They could self-rep in only 10% of the available systems and still spread quickly. You're likely to lose far more systems to lacking appropriate resources than to having things that might be life. Even when there is possible life, in most cases it is a highly localised phenomenon that does not preclude use of the rest of the system for self-replication.
Fair enough.
Starglider wrote:Ah of course, you can't imagine anything that isn't chained by the horrible limitations of organic life can you?
Computer errors are a fact of life, son. Some caution in handling machines with the ability to multiply exponentially is warranted.
Starglider wrote:Do you live in mortal fear of your Windows computer 'mutating' into an evil AI that drains your bank account to pay for server hardware (for its inevitable attempt to usurp humanity)? I imagine you must.
The worst my laptop could do to me is erase my porn, then self-destruct. A swarm of hostile VNMs could loop back to our planet and erase us. Some caution for the latter is warranted.
Starglider wrote:Back in reality, software does not 'mutate'. If the storage medium is damaged beyond the capability of the error correction to fix, the system fails checksum and shuts down. Even if it somehow lacked such basic safety systems, it would almost certainly just crash. There are real debates regarding 'mutation' of self-modifying AIs, but that is entirely unrelated to bit-level errors or biological mutation, and it is not relevant to VNMs based on non-sentient control software.
Modern software is not designed to change its code on the fly; more generally, the instructions are closed and fixed. A sentient has to be able to learn or it doesn't deserve the title. That means being able to change its instructions on the fly. This is a fundamental difference between the way we program now and the way we will need to program AIs, so past art is a dubious guarantee. Some caution is warranted.
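The distinction in miniature (my sketch, not anyone's actual safety system): a fixed program can be verified against a known hash, but a system that must rewrite itself to learn has no single 'correct' image to verify against.

```python
import hashlib

# Integrity checking works for frozen code...
REFERENCE = hashlib.sha256(b"control program v1").hexdigest()

def integrity_ok(image: bytes) -> bool:
    """Valid only while the program never legitimately changes itself."""
    return hashlib.sha256(image).hexdigest() == REFERENCE

print(integrity_ok(b"control program v1"))                     # True
print(integrity_ok(b"control program v1 + learned weights"))   # False
# ...but a learner that must change its own state fails the check by design,
# so 'fails checksum and shuts down' is no safety net for a learning VNM.
```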
Starglider wrote:I wrote:Our own sapience is a bolt-on to a package of more primitive instincts designed to keep us intact and successful in an uncertain environment. A truly independent machine will have something like them, with sapience as a bolt-on.
How did you put it, ah yes, 'You will justify this assumption now'.
Extrapolation from a single datapoint, plus the fact that sapience is useless if the hardware is wrecked. Yeah, you may cry 'single datapoint', but it's better than extrapolating from 'no datapoint.'