What Hard SF Universe Could Beat the Federation?

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Post by Ryan Thunder »

petesampras wrote:
Plushie wrote:
Starglider wrote: You have no fucking clue what you're talking about. Neural networks are trained, not programmed. Genetic programming develops algorithms from a fitness metric without any human design. Back in 1981 Eurisko was already designing chips and winning strategy games using evolved code with minimal human input.
I wasn't talking about neural networks.

In fact, if I had been you'd be agreeing with me, because I probably would have said something similar to the above.

I was talking about transistor based processors. I tire of running across so-called 'futurists' who think their desktop is an ancestor of future AIs.
You don't seem to grasp a fundamental aspect of computing and AI. The information processing of an algorithm (and neural networks are algorithms) is independent of the hardware that implements it. Provided a piece of hardware has the speed and memory, it can implement *any* information processing task.

I would recommend the opening chapters of David Marr's Vision for a lay-friendly discussion of this.
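As a purely illustrative aside on the hardware-independence point above (this sketch is mine, not from the thread): a neural network is, in the end, just arithmetic, so the same algorithm runs unchanged on any hardware that can add and multiply fast enough.

```python
# Illustrative toy: a neural network is just arithmetic on numbers, so the same
# algorithm runs on any hardware with enough speed and memory, however it is built.
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum followed by a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if activation > 0 else 0.0

def tiny_network(x1, x2):
    """A two-layer network computing XOR - pure computation, no hardware assumptions."""
    h1 = neuron([x1, x2], [1.0, 1.0], -0.5)    # fires if either input is on
    h2 = neuron([x1, x2], [1.0, 1.0], -1.5)    # fires only if both are on
    return neuron([h1, h2], [1.0, -2.0], -0.5)  # "either, but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tiny_network(a, b))
```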
Starglider wrote:I build AIs for a living, commercial ones that do useful revenue-generating things and research prototypes that try to capture new aspects of sentience, and I say you are bullshitting so hard it is coming out of every single one of your orifices.
AI can generate unexpected results, but they are by no means 'new'. They cannot 'create', only compute. They can seem to create new things that their programmer didn't expect, but that's a result of the law of large numbers applied to a human and his computer, not genuine sentience.
Define, precisely, what you mean by *new things* and give your reasoning for why computation cannot achieve this.
Because you can't write a program to do something without understanding what that is yourself. Without the full understanding of what it is you're trying to do, you cannot create an algorithm to do it.

If you knew anything about programming, you'd know this already...
SDN Worlds 5: Sanctum
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Ryan Thunder wrote:Because you can't write a program to do something without understanding what that is yourself. Without the full understanding of what it is you're trying to do, you cannot create an algorithm to do it.

If you knew anything about programming, you'd know this already...
This is correct. However it is possible to program an AI with the ability to program; I probably know this better than just about anyone else, because the company I work at is currently doing it, based on the core IP that I designed (a couple of years back). Of course, an AI generating code in this fashion also needs an understanding of the goal and the problem domain, unlike evolutionary methods, which work by finding, aggregating and recombining useful design complexity via massively iterated trial-and-error, with no understanding needed.

The next question is where the AI gets the understanding from (and implicitly, what form that understanding takes), if it doesn't get it from the programmer. This is the 'induction' problem and has received a lot more attention. Without totally hijacking the thread, I'll just say that there are a lot of promising approaches to this, and many of them are making progress, albeit slowly. A simple solution is to combine an 'evolutionary' method for generating theories about how the world works with a deductive one for solving problems using those models. This is conceptually relatively close to how humans work when we think about things informally (i.e. don't explicitly employ the scientific method) and gives rather better performance than relying on evolutionary methods alone, without requiring the spoonfed models that deductive code generation alone requires.
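To make that split concrete, here is a deliberately tiny toy sketch (my own illustration, not Starglider's system): an 'evolutionary' loop searches for a model that explains observed data, and a 'deductive' step then uses the winning model to answer a question the data never covered.

```python
# Toy sketch: evolutionary model search plus deductive use of the learned model.
import random

# Hypothetical observations of some process: here secretly y = 3x + 2.
observations = [(x, 3 * x + 2) for x in range(-5, 6)]

def random_model():
    """A candidate theory: y = a*x + b with randomly chosen coefficients."""
    return (random.uniform(-10, 10), random.uniform(-10, 10))

def error(model):
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in observations)

def mutate(model):
    a, b = model
    return (a + random.gauss(0, 0.5), b + random.gauss(0, 0.5))

# Evolutionary phase: keep the best theories, vary them, repeat.
population = [random_model() for _ in range(50)]
for _ in range(200):
    population.sort(key=error)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=error)

# Deductive phase: use the learned model to answer a question no datum covered.
a, b = best
print(f"learned theory: y = {a:.2f}*x + {b:.2f}; prediction at x=100: {a * 100 + b:.1f}")
```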
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Ryan Thunder wrote:
petesampras wrote:
Plushie wrote: I wasn't talking about neural networks.

In fact, if I had been you'd be agreeing with me, because I probably would have said something similar to the above.

I was talking about transistor based processors. I tire of running across so-called 'futurists' who think their desktop is an ancestor of future AIs.
You don't seem to grasp a fundamental aspect of computing and AI. The information processing of an algorithm (and neural networks are algorithms) is independent of the hardware that implements it. Provided a piece of hardware has the speed and memory, it can implement *any* information processing task.

I would recommend the opening chapters of David Marr's Vision for a lay-friendly discussion of this.
AI can generate unexpected results, but they are by no means 'new'. They cannot 'create', only compute. They can seem to create new things that their programmer didn't expect, but that's a result of the law of large numbers applied to a human and his computer, not genuine sentience.
Define, precisely, what you mean by *new things* and give your reasoning for why computation cannot achieve this.
Because you can't write a program to do something without understanding what that is yourself. Without the full understanding of what it is you're trying to do, you cannot create an algorithm to do it.

If you knew anything about programming, you'd know this already...
Utter nonsense, you truly are a moron and are obviously ignorant of entire fields of research such as machine learning and evolutionary computing.

You also seem unable to read, given that you have started to respond to the second half of my question "give your reasoning for why computation cannot achieve this" without dealing with the first part "Define, precisely, what you mean by *new things* ".

You do understand that it is meaningless to talk about machines not being able to create *new things* without first defining *new things*?

Clown...
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Why are we assuming that this AI would be super-efficient at learning new and unprecedented things? I may not be an AI expert, but I do know that things are designed and optimized for the tasks at hand, not things that are totally unexpected. Given the size and scope of this HSF civilization, it's pretty obvious that they've been in scientific stasis for some time. Most likely they haven't discovered any new physics principles in thousands of years. So why would their AIs be fantastic at developing new scientific principles and technologies rather than efficient administration of their ultra-complicated infrastructure, which is a job they would actually be built for?
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Post by Ryan Thunder »

petesampras wrote:Utter nonsense, you truly are a moron and are obviously ignorant of entire fields of research such as machine learning and evolutionary computing.
A machine can't be programmed to do something if you don't know how to do it already. The first thing any programmer learns is that, much like yourself, computers are dumb as shit. They do exactly what they're told, and nothing else.

Sure, you can program a machine to 'learn', but it's still just following pre-programmed instructions. Throw it a curve ball and it'll stumble.
You also seem unable to read, given that you have started to respond to the second half of my question "give your reasoning for why computation cannot achieve this" without dealing with the first part "Define, precisely, what you mean by *new things* ".

You do understand that it is meaningless to talk about machines not being able to create *new things* without first defining *new things*?
New being something you as the programmer don't understand. A machine can't invent something new. It can optimize an existing design through that 'evolutionary' programming, but you're not going to boot it up one day and find that it's reinvented calculus on its own, unless you programmed it to do that. :roll:
SDN Worlds 5: Sanctum
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Ryan Thunder wrote:
petesampras wrote:Utter nonsense, you truly are a moron and are obviously ignorant of entire fields of research such as machine learning and evolutionary computing.
A machine can't be programmed to do something if you don't know how to do it already. The first thing any programmer learns is that, much like yourself, computers are dumb as shit. They do exactly what they're told, and nothing else.

Sure, you can program a machine to 'learn', but it's still just following pre-programmed instructions. Throw it a curve ball and it'll stumble.
Please stop blowing hot air about topics you don't understand, moron. The algorithms a computer uses in machine learning may be pre-programmed instructions, but the system that results from exposing those algorithms to training data is not. Your brain is built from a set of pre-defined instructions; that does not mean everything your brain does is pre-defined. Does it?

Suppose I want a piece of software to recognise pictures of cars. You could code it directly to do so, or you could write software which can learn to recognise different types of object and then train that system with pictures of cars. In the latter case, it makes no sense whatsoever to claim that the system is merely following pre-programmed instructions. The original coder(s) may have no idea which specific features to look for in order to recognise cars; those capabilities have been found by the learning algorithms.
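A toy sketch of the distinction being drawn here (illustrative only, not petesampras's code): the programmer writes a generic learning rule, and the specific decision criteria come out of the training data rather than out of the programmer's head.

```python
# Toy sketch: the programmer supplies only a generic learning rule; the specific
# weights that distinguish the classes are found from the labelled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with label 0 or 1."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            prediction = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0
            update = lr * (label - prediction)          # zero when already correct
            weights = [w + update * f for w, f in zip(weights, features)]
            bias += update
    return weights, bias

# Hypothetical 'image' features, e.g. [wheel-like shapes, window-like regions];
# label 1 = car, 0 = not a car. The learning rule above knows nothing about cars.
training_data = [([0.9, 0.8], 1), ([0.8, 0.6], 1), ([0.1, 0.9], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(training_data)
print("learned weights:", w, "bias:", b)
```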

You also seem unable to read, given that you have started to respond to the second half of my question "give your reasoning for why computation cannot achieve this" without dealing with the first part "Define, precisely, what you mean by *new things* ".

You do understand that it is meaningless to talk about machines not being able to create *new things* without first defining *new things*?
New being something you as the programmer don't understand. A machine can't invent something new. It can optimize an existing design through that 'evolutionary' programming, but you're not going to boot it up one day and find that it's reinvented calculus on its own, unless you programmed it to do that. :roll:
I have challenged you for a definition and you have just gone back to stating your old argument. I repeat. You cannot even make the claim that computers are fundamentally incapable of inventing *new things* without clearly defining what is meant by *new things*. Is this really too difficult a concept for you to grasp?

Also, if you want to respond to this, I suggest using a new thread, since others are probably, and understandably, getting annoyed at this side track.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

I meant to put italics around 'clear' in the above. 'New being something you as the programmer don't understand.' - is not a clear definition. Code can crash without you understanding what happened. Anyway, that's my last post on this in this thread.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Darth Wong wrote:Why are we assuming that this AI would be super-efficient at learning new and unprecedented things? I may not be an AI expert, but I do know that things are designed and optimized for the tasks at hand, not things that are totally unexpected.
It isn't really an assumption. The lower limit for what AIs will do is the same thing as humans, but much faster. The 'general' part of 'general intelligence' means that it is applicable to any task. This is what makes designing it so difficult. However you're still limited by the available data; full probabilistic analysis of everything reduces the amount of repetition you need before you can spot patterns, but it doesn't remove the need for experiments (and experiments to design the experiments to design the instruments to do the experiments to design the prototypes for the very first warp drive...).
So why would their AIs be fantastic at developing new scientific principles and technologies rather than efficient administration of their ultra-complicated infrastructure, which is a job they would actually be built for?
AI software of that sophistication doesn't really gain anything from that kind of task specialisation. Counter-intuitive I know, but any plausible design will be at least as good as the best human scientists. Having lots of AIs expressly and carefully optimised for doing scientific research wouldn't really make any difference; the bottleneck is by far the chain of physical experiments, not the processing.
Ryan Thunder wrote:A machine can't be programmed to do something if you don't know how to do it already. The first thing any programmer learns is that, much like yourself, computers are dumb as shit. They do exactly what they're told, and nothing else. Sure, you can program a machine to 'learn', but it's still just following pre-programmed instructions. Throw it a curve ball and it'll stumble.
You are engaging in broken record debating. You will either a) address my examples of genuinely learning software, b) concede that you do not know anything about AI, or c) get banned for being a broken record. The fact you can program does not mean you know anything about automated programming - in fact you seem to have some sort of mental block about the fact that a computer can do exactly the same thing you do when you program.
It can optimize an existing design through that 'evolutionary' programming, but you're not going to boot it up one day and find that it's reinvented calculus on its own, unless you programmed it to do that.
Douglas Lenat's 'AM' program (the precursor to Eurisko) did something close to this in 1979. In the symbolic AI peak of the 1980s there were several 'analogical discovery engines' which achieved feats such as deriving Kepler's laws of planetary motion from astronomical data. The fact that they were 'programmed to be capable of inventing theories' is irrelevant to the question of what AI software can do (all you are doing is confirming the blatantly obvious fact that computers do not spontaneously become sapient); they were capable of formulating hypotheses and testing them to find scientific laws.
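For a flavour of what 'deriving a law from data' can mean in the very simplest case, here is a toy illustration (mine, vastly cruder than AM or Eurisko): a brute-force hypothesis search over power laws that recovers Kepler's third law from planetary observations.

```python
# Toy discovery sketch: test simple power laws T = a**p against planetary data
# and keep the hypothesis that fits best.
planets = [  # (semi-major axis in AU, orbital period in years), approximate
    ("Mercury", 0.387, 0.241), ("Venus", 0.723, 0.615), ("Earth", 1.000, 1.000),
    ("Mars", 1.524, 1.881), ("Jupiter", 5.203, 11.862), ("Saturn", 9.537, 29.457),
]

def fit_error(p):
    """How badly does the hypothesis 'period = axis**p' match the data?"""
    return sum((a ** p - t) ** 2 for _, a, t in planets)

# Crude hypothesis search: try exponents between 0 and 3 in small steps.
candidates = [i / 100 for i in range(0, 301)]
best = min(candidates, key=fit_error)
print(f"best-fitting law: T = a**{best:.2f}")   # lands near 1.50, Kepler's third law
```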
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Starglider wrote:
Darth Wong wrote:Why are we assuming that this AI would be super-efficient at learning new and unprecedented things? I may not be an AI expert, but I do know that things are designed and optimized for the tasks at hand, not things that are totally unexpected.
It isn't really an assumption. The lower limit for what AIs will do is the same thing as humans, but much faster.
And what makes you think that scientific innovation has historically been a function of mental processing speed?
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Post by Ryan Thunder »

Starglider wrote:
It can optimize an existing design through that 'evolutionary' programming, but you're not going to boot it up one day and find that it's reinvented calculus on its own, unless you programmed it to do that.
Douglas Lenat's 'AM' program (the precursor to Eurisko) did something close to this in 1979. In the symbolic AI peak of the 1980s there were several 'analogical discovery engines' which achieved feats such as deriving Kepler's laws of planetary motion from astronomical data. The fact that they were 'programmed to be capable of inventing theories' is irrelevant to the question of what AI software can do (all you are doing is confirming the blatantly obvious fact that computers do not spontaneously become sapient); they were capable of formulating hypotheses and testing them to find scientific laws.
Shit... Now that's something I'd like to hear more about... :shock:
SDN Worlds 5: Sanctum
User avatar
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Post by Ryan Thunder »

Darth Wong wrote:
Starglider wrote:
Darth Wong wrote:Why are we assuming that this AI would be super-efficient at learning new and unprecedented things? I may not be an AI expert, but I do know that things are designed and optimized for the tasks at hand, not things that are totally unexpected.
It isn't really an assumption. The lower limit for what AIs will do is the same thing as humans, but much faster.
And what makes you think that scientific innovation has historically been a function of mental processing speed?
I think what he means is, if they can be programmed in such a way that they have breakthroughs at a similar rate with respect to attempts, then they can work much faster by virtue of being able to make many, many more attempts than we ever could in the same period of time.

At least, that's how I read it... :?
SDN Worlds 5: Sanctum
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Darth Wong wrote:And what makes you think that scientific innovation has historically been a function of mental processing speed?
Historically most scientists spend most of their time analysing data, writing papers, reading papers and going to conferences, rather than actually doing experiments. Essentially all of this time can be collapsed away with sufficiently good AI. Higher power and mass efficiency also lets you create a much greater 'scientist population equivalent', for parallel investigation of different lines of enquiry. But as I noted, none of this removes the physical experimentation bottleneck. The trend in science has tended to be needing more and more elaborate apparatus (e.g. massive tokamaks, particle accelerators, wind tunnels, laser traps, liquid helium fractionators etc etc) to do further experiments. Thus I would expect the scientific development rate of a civilisation with widespread deployment of transhuman AI to be primarily limited by experimental capabilities, rather than mental ones. Incidentally these facilities also tend to be much more vulnerable to enemy strikes, compared to sapient AIs which can basically be backed up and transmitted around at will as long as you have any hardware remaining in the system.
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Ryan Thunder wrote:I think what he means is, if they can be programmed in such a way that they have breakthroughs at a similar rate with respect to attempts, then they can work much faster by virtue of being able to make many, many more attempts than we ever could in the same period of time.

At least, that's how I read it... :?
Don't forget that there's memory to consider as well, not merely bright ideas per second. An AI could ideally store vast quantities of information with essentially perfect recall (like any other computer). AI is supposed to be able to replicate the human ability to reason, only with the potential for superhuman intelligence through superior hardware.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Starglider wrote:
Darth Wong wrote:And what makes you think that scientific innovation has historically been a function of mental processing speed?
Historically most scientists spend most of their time analysing data, writing papers, reading papers and going to conferences, rather than actually doing experiments. Essentially all of this time can be collapsed away with sufficiently good AI.
You are still focusing on time spent. But the real leaps forward in science have come from synthesizing radical new ideas, such as the idea that time itself is relative, or the idea that at very small scales, everything becomes a matter of probabilities. The fact that an AI can hypothetically accelerate the gruntwork of science does not mean that it can necessarily discover new scientific principles at a similarly accelerated rate.
Higher power and mass efficiency also lets you create a much greater 'scientist population equivalent', for parallel investigation of different lines of enquiry.
All of whom would think similarly, hence not be any more likely to achieve a radical breakthrough than one scientist with very efficient assistants.
But as I noted, none of this removes the physical experimentation bottleneck. The trend in science has tended to be needing more and more elaborate apparatus (e.g. massive tokamaks, particle accelerators, wind tunnels, laser traps, liquid helium fractionators etc etc) to do further experiments. Thus I would expect the scientific development rate of a civilisation with widespread deployment of transhuman AI to be primarily limited by experimental capabilities, rather than mental ones. Incidentally these facilities also tend to be much more vulnerable to enemy strikes, compared to sapient AIs which can basically be backed up and transmitted around at will as long as you have any hardware remaining in the system.
I agree with that. What I disagree with is your implication that even the most abstract theoretical synthesis in science is a simple matter of throwing enough processing power at the problem.
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Darth Wong wrote:You are still focusing on time spent. But the real leaps forward in science have come from synthesizing radical new ideas, such as the idea that time itself is relative, or the idea that at very small scales, everything becomes a matter of probabilities. The fact that an AI can hypothetically accelerate the gruntwork of science does not mean that it can necessarily discover new scientific principles at a similarly accelerated rate.
What other than a) time spent thinking about/discussing the problem and b) doing experiments do you think causes new ideas to be generated? What other limiting factor could there be? New ideas are not piped down from heaven at a fixed rate. They are generated when a concept is tried for making sense of some data and it works. The limiting factors are the rate at which you invent new theories, which is pretty much exactly proportional to the amount of scientist time spent (for humanlike intelligences), and the rate at which you get new data to analyse.

I am not even bothering to point out the advantages of perfectly rational analysis (avoiding all the fallacies that hamstring human analysis), massive working memory size, inbuilt maths and numeric simulation ability, perfect recall, telepathic-grade perfect communication and near-perfect division of labour due to knowing exactly what everyone else is working on.
Higher power and mass efficiency also lets you create a much greater 'scientist population equivalent', for parallel investigation of different lines of enquiry.
All of whom would think similarly, hence not be any more likely to achieve a radical breakthrough than one scientist with very efficient assistants.

Why on earth would they 'think similarly' to any greater degree than a couple of humans? For that matter, why is this relevant? You may be saying 'what humans can imagine is limited by their personal experiences, and all humans have different experience histories'. But for a non-anthropomorphic AI system, all your AI instances can draw on all the experiences of any other AI instance. They all have essentially complete understanding of every field any of them understand. This is an enormous advantage for cross-field synthesis as well as just generally not being bounded by the relatively microscopic amount of experience any one human can directly access. But if for some reason you did want to hobble yourself by using strictly anthropomorphic AIs, there's no reason to make them any more homogeneous than a population of human scientists of the same size.
What I disagree with is your implication that even the most abstract theoretical synthesis in science is a simple matter of throwing enough processing power at the problem.
Throwing raw compute cycles at it is obviously pointless. However throwing people-year-equivalents, including every aspect of the scientific community except physical experiments, should work very well. As I've just pointed out, non-anthropomorphic AGIs can actually do a lot better than this.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Starglider wrote:What other than a) time spent thinking about/discussing the problem and b) doing experiments do you think causes new ideas to be generated?
Diversity of thought patterns. New blood entering the system with new ideas. Old guards with increasingly fixed thought patterns retiring and dying.

If you said that they had millions of independently "trained" scientific AIs working on the problem, I might think you had a point. But saying that a super-AI is equivalent to armies of scientists doesn't make any sense.
What other limiting factor could there be? New ideas are not piped down from heaven at a fixed rate. They are generated when a concept is tried for making sense of some data and it works.
In order to try a concept, you must first think of it. Often, that means stepping slightly outside the boundaries of what you have already established to be correct.
The limiting factors are the rate at which you invent new theories, which is pretty much exactly proportional to the amount of scientist time spent (for humanlike intelligences), and the rate at which you get new data to analyse.
The measure of scientist man-hours conceals the fact that new scientists are continually entering the system, ie- it's many hours, but not always with the same men.
I am not even bothering to point out the advantages of perfectly rational analysis (avoiding all the fallacies that hamstring human analysis), massive working memory size, inbuilt maths and numeric simulation ability, perfect recall, telepathic-grade perfect communication and near-perfect division of labour due to knowing exactly what everyone else is working on.
All of which deal with efficiency of the gruntwork of science. Why is one spawned process of an AI any more likely than one member of an army of human scientists to think in a radically different direction?
All of whom would think similarly, hence not be any more likely to achieve a radical breakthrough than one scientist with very efficient assistants.
Why on earth would they 'think similarly' to any greater degree than a couple of humans?
If they're all processes spawned by a large AI that was "trained" to think a certain way, they would. Two humans are not as likely to be as similar as two processes spawned by the same AI brain.
For that matter, why is this relevant? You may be saying 'what humans can imagine is limited by their personal experiences, and all humans have different experience histories'. But for a non-anthropomorphic AI system, all your AI instances can draw on all the experiences of any other AI instance. They all have essentially complete understanding of every field any of them understand.
That's great for working with existing ideas and existing rules. I see no reason why this system would be particularly good at coming up with something which seems to fly in the face of existing thinking. If anything, it seems like it would be much worse.
This is an enormous advantage for cross-field synthesis as well as just generally not being bounded by the relatively microscopic amount of experience any one human can directly access. But if for some reason you did want to hobble yourself by using strictly anthropomorphic AIs, there's no reason to make them any more homogeneous than a population of human scientists of the same size.
How do you train all of these unique AIs?
What I disagree with is your implication that even the most abstract theoretical synthesis in science is a simple matter of throwing enough processing power at the problem.
Throwing raw compute cycles at it is obviously pointless. However throwing people-year-equivalents, including every aspect of the scientific community except physical experiments, should work very well. As I've just pointed out, non-anthropomorphic AGIs can actually do a lot better than this.
You've just pointed out that they would be excellent at the gruntwork of science. Not that this part of science is unnecessary, but in this scenario we're looking for something that will come up with something that has eluded their science for thousands of years. Why would their AIs have any particular talent for doing this?
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Post by Ryan Thunder »

Ford Prefect wrote:
Ryan Thunder wrote:I think what he means is, if they can be programmed in such a way that they have breakthroughs at a similar rate with respect to attempts, then they can work much faster by virtue of being able to make many, many more attempts than we ever could in the same period of time.

At least, that's how I read it... :?
Don't forget that there's memory to consider as well, not merely bright ideas per second. An AI could ideally store vast quantities of information with essentially perfect recall (like any other computer). AI is supposed to be able to replicate the human ability to reason, only with the potential for superhuman intelligence through superior hardware.
Well, yeah. Like I said, I was interpreting what he wrote, because I had this foolish notion I might do a better job of it than he was at the time. :P
SDN Worlds 5: Sanctum
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

You're saying things like:
Darth Wong wrote:Diversity of thought patterns.
But you don't really know what that means. What exactly is a 'thought pattern'? Define it for me in terms of neurons firing and synapses being reconfigured. Define it in terms of reasoning algorithms and knowledge base structures and then we can do a comparison.
Darth Wong wrote:New blood entering the system with new ideas.
Likewise, define this for me in terms of memetics. Where does the resistance come from in humans? What is the 'new blood'? What is the motivation structure behind young scientists trying new things vs old scientists elaborating old theories? Why are AIs operating on pure probabilistic logic going to have a 'resistance' problem? Why are AIs operating on expected utility likely to follow broken human-like allocations of cognitive effort?

You're currently talking about AI as if I was treating engines as lumps of metal with 'power' and 'speed' (unquantified), possibly with 'spinning bits' and 'fuel burning chambers'. Then I would be saying that no engine could ever exceed 1000 rpm due to 'lubrication issues'.

The short answer is that (a) well-designed AIs aren't going to be saddled with any of these problems in the first place. I will happily explain this to you in detailed technical terms that do define 'thought patterns' in terms of code and algorithms and neural activation patterns, but I will have to reference a lot of literature and use technical terms to do it, so I suggest starting a new thread for it if you want to do that. However part (b) is that badly-designed AIs aren't going to be any worse than humans, and that these limitations are scaling factors that restrict progress relative to the effective 'clock rate' of your entire community of researchers; they are not absolute and independent limitations. Absolute limitations essentially have to exist in external physics; doing experiments is an example. Anything that exists within software (which anything 'mental' or 'social' for AIs does) is a scaling factor.
Darth Wong wrote:If you said that they had millions of independently "trained" scientific AIs working on the problem, I might think you had a point. But saying that a super-AI is equivalent to armies of scientists doesn't make any sense.
Of course it does. The distinction between 'millions of independent AIs' and 'one huge AI' is actually fairly academic for well-designed AI systems. You simply have n processors of x capacity separated by communication links of p latency and q bandwidth. The true boundary between AI systems is the goal system. Systems with the same goal system will automatically co-operate and effectively be subunits of a single large system. This is inherently more efficient than 'lots of separate AIs' due to the lack of unnecessary redundancy, in effort undertaken and control systems, plus reduced communications overhead (though sufficiently well co-ordinated networks of person-like AIs can probably approach this efficiency quite closely). You are saying 'it doesn't make any sense' because it's counter-intuitive, but a lot of things about AGI are counter-intuitive. You can't give an actual reason why a rational general AI (i.e. operating on expected utility and probability logic) with access to enough computing power can't replicate the efforts of an army of a million trained human scientists, because there isn't one. Of course /developing/ such a thing is a massive software engineering task beyond our current abilities, but then so is building any of the other infrastructure for a type I civilisation.
In order to try a concept, you must first think of it. Often, that means stepping slightly outside the boundaries of what you have already established to be correct.
Yes, you do. This is hypothesis generation. In a probabilistic system, nothing is ever actually 'established correct' or 'established incorrect' (with the possible exception of maths and logic proofs, depending on implementation), probabilities just closely approach 1.0 or 0.0. Hypothesis generation works by recombining functional components (e.g. fragments of maths or code that implement parts of models) in new patterns, to make a new model. This is then checked (I'm simplifying) to see if it actually matches anything in reality, and then if it can predict anything new. All induction relies on this (deduction is deriving consequences from your existing models). The main trick to getting induction to work well is finding a mechanism that's good at predicting what recombinations are likely to be useful; this is a very complicated problem that includes elements such as the 'base prior' (e.g. Kolmogorov complexity - but the subtleties of implementing even that are quite deep), the recursion mechanism, the chunking mechanism... I could go on for pages.
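A deliberately crude sketch of that recombine-and-check loop (my own simplification, not Starglider's system): candidate models are assembled from a pool of fragments and scored on fit to data plus a simplicity penalty standing in for the 'base prior'.

```python
# Toy sketch: build candidate formulas by recombining small fragments, then score
# them on fit to the data plus a crude complexity penalty.
import random

data = [(x, x * x + 1) for x in range(-4, 5)]   # hidden truth: y = x**2 + 1

fragments = ["x", "1", "2", "(x + x)", "(x * x)"]

def random_hypothesis(depth=2):
    """Recombine fragments with + or * into a candidate formula string."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(fragments)
    op = random.choice(["+", "*"])
    return f"({random_hypothesis(depth - 1)} {op} {random_hypothesis(depth - 1)})"

def score(expr):
    """Lower is better: squared error on the data plus a simplicity penalty."""
    try:
        err = sum((eval(expr, {"x": x}) - y) ** 2 for x, y in data)
    except Exception:
        return float("inf")
    return err + 0.5 * len(expr)        # longer formulas pay a small prior cost

best = min((random_hypothesis() for _ in range(5000)), key=score)
print("best hypothesis found:", best)
```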

Anyway the point is that this doesn't need fresh new human minds. Though if necessary, we can create an indefinite number of new simulated humans by recombining human brain elements. That's nasty and messy and pointless though. Rational intelligences don't suffer from human limitations on creativity (irony!) or self-imposed mental blocks. Optimising creative output is simply a parameter optimisation problem; how much effort do you focus on slight tweaks, how much do you focus on major rethinks, how much do you allocate to completely left-field ideas. In genetic programming this is the 'diversity management' problem, which has been very well studied - the tradeoff between short term improvement rate versus likelihood of getting stuck in local optima (and breakout time when the system does). 'Cognitive diversity management' for a rational AI system or indeed a big population of humanlike AIs is an essentially similar but rather more complex and nuanced problem.
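As a toy illustration of that diversity-management tradeoff (mine, not from the thread): a search where the share of effort spent on 'left-field' random jumps, versus slight tweaks, decides whether it escapes a local optimum.

```python
# Toy sketch: a hill climber on a deceptive landscape; the exploration rate
# controls how often it tries a left-field jump instead of a slight tweak.
import random

def fitness(x):
    # Deceptive landscape: a broad local hill near x=2, a higher peak near x=8.
    return max(4 - (x - 2) ** 2, 10 - 8 * (x - 8) ** 2, 0)

def search(exploration_rate, steps=2000):
    x, best = 2.0, 0.0
    for _ in range(steps):
        if random.random() < exploration_rate:
            candidate = random.uniform(0, 10)          # left-field idea
        else:
            candidate = x + random.gauss(0, 0.1)       # slight tweak
        if fitness(candidate) >= fitness(x):
            x = candidate
        best = max(best, fitness(x))
    return best

for rate in (0.0, 0.05, 0.5):
    print(f"exploration {rate:.2f}: best fitness {search(rate):.2f}")
    # With no exploration the search sits on the local hill (fitness ~4);
    # with some exploration it eventually finds the higher peak (~10).
```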
The measure of scientist man-hours conceals the fact that new scientists are continually entering the system, ie- it's many hours, but not always with the same men.
I've addressed the fact that humans relying on our personal experience bases for inspiration isn't a limitation that applies to AIs, nor are irrational motives to over-value (and thus defend to the death) the orthodoxy (or under-value it, resulting in the proliferation of cranks). If you've thought of some other reason why mind turnover is important, say so, but I've also pointed out that this can be easily implemented in software if required.
I am not even bothering to point out the advantages of perfectly rational analysis (avoiding all the fallacies that hamstring human analysis), massive working memory size, inbuilt maths and numeric simulation ability, perfect recall, telepathic-grade perfect communication and near-perfect division of labour due to knowing exactly what everyone else is working on.
All of which deal with efficiency of the gruntwork of science.
No, it doesn't. You're focusing in on creativity, but you should know better than most that coming up with the idea is often the easy part. How often have you lambasted the Trek writers for their 'bright idea to technobabble implementation in half an hour flat' plots? Focusing on the 'gruntwork' makes perfect sense given that the vast majority of time is currently spent carefully testing and refining maths, not coming up with conceptual ideas. More than that, what you fail to appreciate is just how serious a limitation communication barriers are to scientific progress. This is painfully obvious in AI itself actually; much of the reason why progress has been so slow is the balkanisation of the field into tiny non-communicating subfields, and the heavy use of subfield-specific and even personal jargon that makes it very hard to work out what the hell other people's bright new ideas actually are. Even deep mechanical telepathy (via cybernetics) would be a /huge/ improvement here, without even getting AIs involved (though in practice you'd have to use at least a fair dollop of narrow AI and processing power to let one brain directly understand another brain's complex conceptual ideas - not an issue for rational-AI-to-AI communication, where it really is almost as simple as 'dump a portion of my mind state to an XML file').
Why is one spawned process of an AI any more likely than one member of an army of human scientists to think in a radically different direction?
Because the army of human scientists don't know what each other are thinking, so it's very difficult to avoid duplication of effort. Furthermore they can't all share experience and knowledge. If you ask each of your army of 1 million human scientists to read 10 papers, then each one now understands 10 papers. If you get an AI with the equivalent thinking capacity to do the same (or 1 million AIs with reasonably transparent KRs), every AI/process now benefits as if it has read 10 million papers. The same applies for any other kind of 'life experience' that you might think helps in coming up with new scientific ideas - though frankly I'm highly skeptical about whether nonanthropomorphic AIs need the informal stuff anyway. Humans do because the mental machinery we're using to think about science was actually designed for thinking about how to hit people with spears, how to predict where the buffalo herd will be, how to convince peers to support you as tribal leader etc. Our thinking involves a lot of fuzzy analogies, and we get inspiration from the oddest of sources as a result. AIs can do this if they have to, but they can also just write the code to implement complex numeric simulations in the time it takes you to think 'the cream in my coffee looks just like a spiral galaxy'. I strongly suspect the former is a lot more efficient (though this is one of my less rigorously supported points ATM).
All of whom would think similarly, hence not be any more likely to achieve a radical breakthrough than one scientist with very efficient assistants.
Why on earth would they 'think similarly' to any greater degree than a couple of humans?
If they're all processes spawned by a large AI that was "trained" to think a certain way, they would.
You're going to have to be more specific on 'think in a certain way'. Probabilistic logic and expected utility are the normative way to do low-level cognition; they outperform anything else. However that's probably not the most equivalent layer to what you're thinking of. I imagine you're thinking of a human's mental toolbox of concepts, models, deductive processes and analytical techniques. For a rational AI, these are lumps of code that can be freely reused and passed around (possibly with some minor conversion to account for KR differences). An AI system can build up as many of these as it needs; by comparison the human brain is quite strictly limited by neural real estate and internal competition. It's hard and time-consuming for us to learn new ways of thinking, and it's easy to lose the ability to do things that you don't practice. For an AI to learn 'new ways of thinking' it's as simple as linking in some extra code, either code that it's written itself or code that another AI gave it.
Two humans are not as likely to be as similar as two processes spawned by the same AI brain.
The similarity is determined by a reflective analysis of what 'kinds of thinking' are likely to be useful on a specific problem. Sometimes it makes sense to use the same technique one million times with different parameters. Sometimes it makes more sense to use one thousand quite different techniques. In reality there isn't a single decision, there are millions of distributed decisions like this at different layers of detail. I'm not talking theoretically here; this actually happens in layered AI reasoning systems such as SOAR, Eurisko and the system we are developing.
For that matter, why is this relevant? You may be saying 'what humans can imagine is limited by their personal experiences, and all humans have different experience histories'. But for a non-anthropomorphic AI system, all your AI instances can draw on all the experiences of any other AI instance. They all have essentially complete understanding of every field any of them understand.
That's great for working with existing ideas and existing rules. I see no reason why this system would be particularly good at coming up with something which seems to fly in the face of existing thinking. If anything, it seems like it would be much worse.
Coming up with ideas that 'fly in the face of existing thinking' is easy. You just randomly generate some axioms and derive some consequences from them. The proliferation of crank physics sites on the Internet underlines how easy this is. In actual fact there's major selection bias going on here; real scientists come up with left-field ideas all the time too, they're just clever and experienced enough to toss them in the trash after half an hour of working out the basic consequences. What we call cranks are people who get an emotional attachment to their ideas and promote them despite the fact they clearly don't work.

The difficult part is coming up with novel ideas with a good chance of actually being useful. There is no reason to believe that humans are especially good at this. Obviously existing non-AI software, and most non-general AI software for that matter, can't do it but that's a red herring. Humans basically do this via 'intuition'; we don't have a systematic process for coming up with 'bright ideas' despite numerous attempts to formulate such. AIs can employ a systematic approach to it, via progressive refinement of a complex prior (essentially a probabilistic predictive model of what kind of hypothesis generation activities are likely to be useful). They can also use evolutionary techniques comparable to the leading 'neural darwinism' models of human brainstorming or just brute force GAs if necessary.

I have supplied plenty of reasons why AGIs should be more creative and you have not supplied any reasons why they should be less creative, other than cognitive diversity which I have just demonstrated is actually an advantage for the AI side. Nor have you explained why 'bright ideas' are suddenly so critical when so much of your past writing has (correctly) pointed out that in engineering, bright ideas are ten a penny, while developing useful ideas and taking them to implementation is the hard part.
But if for some reason you did want to hobble yourself by using strictly anthropomorphic AIs, there's no reason to make them any more homogeneous than a population of human scientists of the same size.
How do you train all of these unique AIs?
There are two ways to do this (noting that this is a bad idea in the first place). Firstly you can train them pretty much the same way that you train humans; have them independently study (textbooks and data), be taught by existing AIs, and run experiments in simulations. All this happens at whatever clock speed multiplier your technology has vs the human brain (from thousands to billions depending on the tech). For teaching purposes detailed numeric simulations are generally just as good as lab work; the outcome of the experiments is well known. The only thing you can't do at 'electronic speed' is have students design and execute completely original experiments in the real world - but that's actual research, not training.

The second potential way to do this is to avoid training them at all (or at least, most of it). Instead what you'd do is take a starting base of 1000ish AIs and recombine chunks of their brains in new patterns; each of the one million AIs would have some fraction of the personality of say three to twenty 'donors'. This has some ethical issues and I'm not 100% sure it will work; 'neural smoothing' should integrate it ok, but the specifics depend on the exact technology. In other words it's in the 'plausible but not certain to be possible' technology class. I would make the point though that humans actually have a relatively tiny amount of cognitive diversity across our entire species. We occupy only a minute subset of the space of possible general intelligences - even a minute subset of the space of possible organic intelligences using earth-type neurons. Uplifting dolphins, chimps, wolves, whatever would already give you a huge amount more 'mental diversity' than humans - and guess what, we can do this much more easily (in principle) via software uploading and tweaking rather than with real life genetic engineering. Plus we can create entirely new humanlike cultures in VR and have minds grow up in that, we can simulate new animal species in VR environments and uplift those, we can apply simulated evolution to vast numbers of uploads in VR environments and see what happens... there are lots of these ethically dubious (and IMHO unnecessary and inefficient) but conceptually simple ways to give your population of AIs much /more/ mental diversity than an equivalent population of humans.
Not that this part of science is unnecessary, but in this scenario we're looking for something that will come up with something that has eluded their science for thousands of years. Why would their AIs have any particular talent for doing this?
It's true that the situation is unique; we've never had this situation in real life (a modern scientific establishment coming up against concrete technological evidence of an entire system of physics that had previously eluded them). I hope I've answered the question already; AIs can be more creative on a man-hour-equivalence basis to start with, creativity actually does scale well with compute power using sufficiently clever software, and even for this situation, building working theories and turning those into technology is vastly harder than having the bright ideas (which frankly for Trek you could get by reading a few thousand 20th century sci-fi novels and doing a best fit of maths to their technobabble :P ).
User avatar
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Post by Nova Andromeda »

-So it looks like Starglider is basically doing all of the AI defense for me, which is fairly ironic since I believe he attacked me for suggesting it in the first place :P. I'll just add my own (mostly overlapping) take on a few things. However, Starglider has made all the major points I'd make and done a much better job of it than I could. In fact, without him/her I'd either have to do a bunch of research into the subject and/or develop the ideas from base principles.

Darth Wong wrote:Given the size and scope of this HSF civilization, it's pretty obvious that they've been in scientific stasis for some time.
-I don’t see why this should be true. The trend in research has been toward more complicated analysis and higher energies (as mentioned by others). The HSF civ may well be spending large amounts of its resources building equipment like a massive supercollider and/or computational facilities. In addition, its research abilities from previous eras (in terms of software) wouldn’t have simply evaporated like expertise that isn’t used in today's (human) society. Instead, those research abilities would be safely stored and ready for later use.
Darth Wong wrote:Why are we assuming that this AI would be super-efficient at learning new and unprecedented things?
-Do you think this is necessary for the information warfare and/or diplomatic strategies?
-Presumably, you mean super-efficient compared to humans, and Starglider has already addressed this in great detail. AIs will be far better at gathering new data (both from deliberate experimentation and from observations from other sources). This results from more efficient experimentation and vastly superior data collection and retention. AIs will be far better at integrating new data, checking it for consistency with current theory, and identifying inconsistencies with theory when they arise. Data access and integration is a huge problem today that results from a bunch of scientists doing their own thing to solve their specific problems with little regard to accessibility to other fields. In fact, we could use a whole research field to solve this problem. For an AI, data access is vastly improved and problems with deciding on and implementing things like data formats are vastly reduced (AIs don't need to spend years learning new programming languages, data formats, etc.).
Darth Wong wrote:So why would their AIs be fantastic at developing new scientific principles and technologies rather than efficient administration of their ultra-complicated infrastructure, which is a job they would actually be built for?
-In terms of software, they can easily have both. Starglider has already pointed out that the underlying hardware shouldn’t make nearly as much of a difference as the software. In addition, simply running an infrastructure (even an ‘ultra-complicated infrastructure’) seems like it would be far far easier than developing and building said infrastructure, doing the science and research required to develop and build it, and researching more efficient ways to run the infrastructure.
Darth Wong wrote:But the real leaps forward in science have come from synthesizing radical new ideas, such as the idea that time itself is relative, or the idea that at very small scales, everything becomes a matter of probabilities.
-Most humans, including scientists, have real trouble identifying the premises their ideas rest on. An AI doesn't need to suffer from this problem. If new data doesn't fit into existing theories then it can readily recall all of the premises those theories rest on and toss and/or refine them as necessary to accommodate the new data. Additionally, it has the option of coming up with another theory using the same premises.
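A minimal sketch of that premise-tracking idea (my own reading of the point, with hypothetical names): theories carry explicit lists of the assumptions they rest on, so a conflicting observation immediately names the premises worth re-examining.

```python
# Toy sketch: theories record the premises they rest on; when an observation
# contradicts a prediction, the premises to revisit are named automatically.
theories = {
    "newtonian_orbits": {
        "premises": ["absolute time", "instantaneous gravity", "flat spacetime"],
        "predicts": {"Mercury perihelion drift": "none beyond known perturbations"},
    },
}

observation = {"Mercury perihelion drift": "43 arcsec/century unexplained"}

for name, theory in theories.items():
    for quantity, measured in observation.items():
        predicted = theory["predicts"].get(quantity)
        if predicted is not None and predicted != measured:
            print(f"'{name}' predicts {quantity!r} = {predicted!r}, but observed {measured!r}.")
            print("Premises to toss or refine:", ", ".join(theory["premises"]))
```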
Darth Wong wrote:You've just pointed out that they would be excellent at the gruntwork of science. Not that this part of science is unnecessary, but in this scenario we're looking for something that will come up with something that has eluded their science for thousands of years. Why would their AIs have any particular talent for doing this?
-First, they would notice all the inconsistent new data much faster than humans (that whole data collection, retention, and collation bit). Second, they could pin down exactly where this data conflicts with current theories far faster than humans (better access to said theories, faster overall computation, better data integration, etc.). Third, they could design experiments to recheck the current theories for mistakes faster and more efficiently (see Starglider starting at: "Because the army of human scientists don't know what each other are thinking,"). Fourth, they would have a far better grasp of their theories and the premises their theories rest on than any human research community which makes questioning the premises and/or the theory much easier (crappy human memory and inability to simply 'load' a theory really sucks). Fifth, the AI could have a huge number of theories that fall into the ‘interesting, but untestable’ category. Observation of Federation technology would provide a huge set of new data previously unavailable and possibly provide sufficient evidence to resolve major outstanding scientific questions. However, this whole scenario is made up and we could just as easily say no research the HSF civ does regardless of quality and/or quantity will allow it to gain Trek technobabble. In fact, I (and I think Starglider) have both been working under this premise. I certainly haven't proposed/supported FTL, etc.
Darth Wong wrote:I can't believe that this idiotic "giant mirror array weapon" bullshit has actually gone on for so long.
-I’ll have you know I named it first and Giant Mirror of Death (TM) is a much better name than “giant mirror array weapon” 8).
Nova Andromeda
kinnison
Padawan Learner
Posts: 298
Joined: 2006-12-04 05:38am

Post by kinnison »

I've been away from this thread for a while. :) Let's try and sum this up:

The OP expressly prohibited the Trek civ's opponents from breaking the laws of physics, as the Trek civ apparently does.

Others may not agree, but I think that we have established that if this was not the case then quite a lot of conceptual HSF civilisations would easily overwhelm the Federation; reasons being an enormous industrial capacity and an even more enormous thinking ability, to be set off thinking by clear evidence for phenomena outside the known laws of physics - that evidence being the presence of the Trek civilisation's ships. A transapient AI type II supercivilisation has an incredible amount of power, material and processing/thinking ability available.

Two more points; one being that the civilisation depicted in Trek is ridiculous. They just haven't expanded enough; where are the asteroidal mines, He3 collectors floating in gas giants, orbital habitats...?

And the more important second point is that it is, possibly by definition, impossible to break the laws of physics. If you see something that seems to break those laws, then all it means is that the laws that you have so far discovered are incomplete.

A famous example is Becquerel's experiments with radioactivity. He found something that appeared to break the laws of physics, namely energy coming from nowhere - the law of conservation of energy was apparently being violated. Incidentally, the problem was foreshadowed in Lord Kelvin's day - the age of the Earth, as derived from erosion and sedimentation rates, was wildly inconsistent with the known possible sources of energy in the Sun, by about two orders of magnitude. So another energy source needed to be found.

The solution to these problems was an extension of the First Law of Thermodynamics: the law of conservation of energy was replaced by the law of conservation of mass-energy, which, so far, still holds.

This second point means that the question put by the OP is based on impossible premises. We probably all know what he meant, but I am talking now about what he said.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Nova Andromeda wrote:-So it looks like Starglider is basically doing all of the AI defense for me which is fairly ironic since I believe he attacked me for suggesting it in the first place :P.
I'm pointing out that AGI can do some extremely impressive things, including greatly speeding up scientific research and new ship design. However, this is not enough to win this war, because it's still bottlenecked by physical experimentation, prototyping, and construction, which sets the timescale well beyond what the Federation needs to blow away any plausible HSF civ.

You were trying to say 'superintelligence is actually magic and can win in ways we have no comprehension of'. Now unlike most debaters around here I accept that this is actually a valid argument in many situations. However there has to be some overlap, however tiny, in the capabilities of the two sides such that there is a chance of winning that superintelligence can maximise. When the required free variables aren't available, superintelligence doesn't help. It's true that you can never rule out the possibility of a being vastly more intelligent than you spotting something you missed, and I would never personally bet against one in a situation as complex as an interstellar war. But for the purposes of this debate, 'superintelligence will find a magic way to win that we're too stupid to think of' is a worthless argument.
The HSF civ may well be spending large amounts of its resources building equipment like a massive supercollider and/or computational facilities.
Not if they've already discovered all of physics in their home universe - which, if they're so hot at science, they probably did a long time ago.
In addition, its research abilities from previous eras (in terms of software) wouldn’t have simply evaporated like expertise that isn’t used in today's (human) society. Instead, those research abilities would be safely stored and ready for later use.
True. But the hardware will have been recycled - and even if it wasn't, it probably won't be suitable for testing subspace effects and warp drive designs. Anyway, half the magic materials in Trek were naturally occurring and had to be /discovered/, not invented.
Do you think this is necessary for the information warfare and/or diplomatic strategies?
For those strategies to be viable the Federation would have to be run by complete morons.

Damn. I hate arguing on the Trek side.
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

Starglider's "AI defense" basically has to be accepted by people who are not AI buffs because they don't understand the jargon or the history. It is long, verbose and doesn't get to the point. You could have gotten to the point faster by pointing to real life examples of neural networks doing the things you say are possible. You brought up fusion, but at least fusion research is going on in many universities and scientists generally agree it's a worthwhile pursuit. The traditional AI field is rather dead, and whether this is because of poor funding or overhype doesn't change that. But I do know you're doing the same overhyping you're accusing those guys in the 70's and 80's of.

The problem is, Starglider, you assume a software problem is something that can be solved, or would be solved, by human beings. Tim Sweeney's view is that bugs and errors in programming have made game development orders of magnitude harder. Computers have gotten better, but the human factor has not. By himself he made a hit shareware game, but as games got more and more complex they came to require entire teams of developers and testers, and the bugs still aren't worked out. The workaround is, of course, AIs creating other AIs and programming them that way, but that has unpredictable and uncontrollable results and could result in AI dominion. In short, it's entirely possible any sane civilization would not develop AIs to the extent you propose.

No, "creativity" and "inventiveness" cannot be quantified because they are fucking qualities! These qualities are applied when results are demonstrated. You would be far more convincing if you detailed what the primitive versions of current technology can do and extrapolated from there.

And of course, let's not forget we are talking about fiction here. Wanking is not just a matter of technological possibility or difficulty, or else the Death Star would be wanking. It's hard to imagine anything but the most extreme of stories featuring millions of artificial intelligences, in robot bodies, all thinking millions or billions of times faster than a human being, all in a single solar system, and still being considered HSF. It all comes back to the "name one" question at the start, and if none really exist in fiction, or it's unlikely in real life, then it's just pulling crap out of your ass.
User avatar
Sikon
Jedi Knight
Posts: 705
Joined: 2006-10-08 01:22am

Post by Sikon »

AI will advance in the future far beyond today.

Part of the reason for current AI weaknesses is hardware limitations.

For example, even theoretically, the average computer isn't capable of the instructions per second (see chart) needed for more than somewhere between insect-level and lizard-level capabilities. And, of course, the average desktop computer isn't designed for AI, and it appears that truly optimal software even for that limited level of hardware has yet to be developed.
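
To make that comparison concrete, here is a rough back-of-envelope sketch; the per-level figures are Moravec-style estimates and the desktop number is a ballpark guess, none of them taken from the chart itself:

Code:
# Back-of-envelope comparison of estimated compute requirements
# (Moravec-style figures; every number here is an illustrative assumption).
estimated_ops_per_second = {
    "insect": 1e9,
    "lizard": 1e11,
    "mouse": 1e13,
    "human": 1e16,   # commonly quoted estimates range from roughly 1e14 to 1e17
}
desktop_2007 = 1e10  # a few GHz times a few instructions per cycle

for level, ops in estimated_ops_per_second.items():
    print(f"{level:>6}: ~{ops:.0e} ops/s, about {ops / desktop_2007:.0e}x a typical desktop")

Even granting the large uncertainties in such estimates, the gap between a single desktop and the human figure spans several orders of magnitude.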

Besides, AI research groups of today tend to be small scale operations, with no groups in the whole field having more than rather limited funding, and the highest performance supercomputers in the world are built for other purposes.

In other words, it is no surprise that AI research has had limited progress so far.

Developing artificial general intelligence at and beyond human level may superficially appear to be mission impossible if one looks at accomplishing it all at once, but consider the following chain of premises, illustrating how a great leap can be managed indirectly through many lesser steps:
  • Robots at around insect-level intelligence exist today.
  • Provided that civilization stays around and continues technological progress, it is only a matter of time before the limit of robot technology advances from around insect-level to around lizard-level, applying enough suitable hardware with the right "software" design.
  • Once lizard-level robotics is managed, it should be only a matter of time before mouse-level robotic intelligence is obtained.
  • And continue the chain, step-by-step if necessary, up to human-level artificial intelligence.
For example, it's almost inconceivable that robots would reach a particular step on the chain, like lizard-level intelligence, and then never advance further, remaining at exactly the same level after a thousand years of technological progress.

If all else failed, develop a suitable combination of hardware and software to perfectly simulate a neuron, then perfectly simulate a worm with a handful of neurons, then the more complex neural system of a higher creature, and so on.
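
To give a sense of what the very first rung of that ladder looks like in code, here is a toy single-neuron sketch: a leaky integrate-and-fire unit with made-up round-number parameters, nowhere near a "perfect" biophysical simulation:

Code:
# Minimal leaky integrate-and-fire neuron, stepped with forward Euler.
# A toy model only - far short of a biophysically faithful simulation.
def simulate_lif(input_current=1.5, dt=0.1, steps=1000,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step in range(steps):
        # The membrane potential leaks toward rest and is driven by the input current.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_threshold:   # threshold crossed: record a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(simulate_lif())  # prints the spike times of the toy neuron

Scaling something like this up to a worm's few hundred neurons is trivial arithmetic; the hard part is the biological fidelity, not the computation.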

The preceding is not the only technique possible, with other methods being followed in AI research today, but it is sufficient to suggest how there's nothing that biological intelligence can do that artificial intelligence couldn't eventually emulate (and, indeed, exceed).

Many think that superhuman AI may be obtained soon after projected computer advancement makes the hardware requirements affordable to researchers, potentially later in this century.

Whether or not it actually occurs merely decades from now, it will tend to occur someday, in some year or century, being only a matter of time, provided that technological progress continues.

Short of civilization stagnating and dying out, it is doubtful that superhuman AI wouldn't eventually be developed if the technology became available, since, in that case, all it might take is some nation, some sufficiently powerful individual, or some suitable group deciding they want immortality, power over their enemies or competitors, or other benefits they might hope to obtain with the help of superhuman advisors or leaders.

***************
***************
***************

In the versus discussion of this thread, I have taken for granted the following, though it is true the conditions aren't precisely specified:
  • A progressive hard sci-fi civilization may reach extreme levels of industrial capability very early in its total potential lifespan of at least billions of years. So the civilization involved here is assumed to be one which has approached the limits of technology possible under physical laws. Even though, technically, the term hard sci-fi civilization includes even the equivalent of merely modern Earth, here one is talking about a civilization with quintillions of tons of industrial capability per star system, its self-replicating factories having been around long enough to build that much.
  • Since this is a military versus scenario, I assumed we would be substituting in an advanced hard sci-fi civilization having a military.

    In this case, a significant portion of the multi-quintillion-ton total resources being spent on the military can mean trillions of warships (a rough mass-budget sketch appears after this list).

    Possible scenarios for such are easy to conceive. The existence of potentially hostile interstellar powers does not require the existence of aliens. STL expansion could possibly lead to diverse societies not all under the control of any single entity. Although hopefully their goals would remain compatible, and although interstellar war is difficult with STL, it is conceivable that a cold war scenario could develop in which the hard sci-fi civilization engages in a military build-up, including maintaining trillions of warships.

    Trillions of warships do not win the war in themselves, as STL attack over vast interstellar distances is hopelessly impractical against an FTL opponent, but they may temporarily stall Federation offense.

    The Federation is unlikely to deliver the trillions of megatons of firepower needed to destroy that many ships in the first months or years of a war.

    Potential superweapons like the supernova device are unlikely to be built and deployed by the Federation at the start of the war.
  • The Federation does not have multi-trillion megaton firepower (see Federation firepower discussions in SW vs. ST).

    Federation tactics appear to be limited, not the sort that would open a war with warp missile attacks.

    For example, a planet or another stationary installation (such as DS9) mounting phaser banks should theoretically not be very effective when it could be taken out by firing weapons from beyond the limited engagement range of phasers (which are not FTL weapons to my knowledge), including assault by warp missiles. But that is not what is shown in canon, where starships slow and move to close range to attack.

    Likewise, trillions of warships in a star system, even without FTL drive, could probably halt Federation offense for a period of months to years at least.

    It is unlikely that the Federation builds warp missiles with supernova warheads or the equivalent, let alone uses them at the start of a war.

    In an alternate timeline in a TNG episode, the Federation was losing a war against the Klingons. It is doubtful that the Federation used supernova-inducing warp missiles at the start of the all-out interstellar war and nevertheless lost it.

    The easiest explanation is simply that the Federation used the tactics they historically use, which is for their ships to fire torpedoes and phasers at close range, with a limited number of ships delivering no more than a moderate number of kilotons to megatons each.
  • The scenario would seem to involve the hard sci-fi civilization being magically transported from its universe to the Trek-verse. This would provide rationalization for an advanced civilization not previously having developed FTL, due to being in the wrong location for such.

    Warp drive is not astronomically difficult to develop in the Trek-verse, having been developed independently a number of times by different civilizations. Dialogue about pre-warp civilizations (e.g. relating to the Prime Directive) suggests a civilization independently obtaining warp drive is considered to be natural after enough technological advancement. In the Trek-verse, it is like developing radio, not a rare or unlikely development. Indeed, Cochrane developed the first warp drive craft with very limited 21st-century resources, even launching it on a modified chemical-propellant ballistic missile.
  • The OP part about the hard sci-fi civilization being able to utilize FTL ships it obtains is assumed to mean being able to do so for a useful period of time, like months or years (else what would be the point of mentioning it?).

    An example is the normal operating timeframe before external resupply of starships designed to be capable of five-year exploratory missions.
  • If the hard sci-fi civilization is placed in the Trek-verse, other entities remain there aside from the Federation. Everyone from the Romulans to the Ferengi is still around.

    So, if the hard sci-fi civilization isn't immediately destroyed at the very start of the war by the Federation, it will tend to get visited sooner or later by starships from one or more non-Federation entities.

    The versus scenario states the Federation is at war with the hard sci-fi civilization, but no other interstellar powers are described as being at war with it.

    The hard sci-fi civilization may have a chance of negotiating with other entities, trading some of their astronomical wealth for some warp-capable ships.
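
As a rough sanity check on the trillions-of-warships figure mentioned above, here is a back-of-envelope mass budget; every number in it is an illustrative assumption, not a canon figure:

Code:
# Rough mass-budget sanity check for the trillions-of-warships figure.
# Every number below is an illustrative assumption, not a canon figure.
total_industrial_mass_tons = 1e18   # "quintillions of tons" per star system
military_fraction = 0.1             # suppose a tenth of it goes to the military
mass_per_warship_tons = 1e5         # ~100,000 tons, roughly a large wet-navy carrier

warships = total_industrial_mass_tons * military_fraction / mass_per_warship_tons
print(f"{warships:.0e} warships")   # ~1e12, i.e. on the order of a trillion

Shift any of those three assumptions by an order of magnitude and the count moves accordingly, but the scale remains staggering.
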
As suggested in my previous post, of course this scenario is still very much a challenge for the hard sci-fi civilization, even under the previous conditions.

They absolutely cannot win by STL assault, as taking thousands of years to traverse the length of the Federation would be an unworkable strategy. But they may have a chance through the indirectly FTL method described in my previous post.
Earth is the cradle of humanity, but one cannot live in the cradle forever.

― Konstantin Tsiolkovsky
kinnison
Padawan Learner
Posts: 298
Joined: 2006-12-04 05:38am

Post by kinnison »

brianeyci wrote:Starglider's "AI defense" basically has to be accepted by people who are not AI buffs because they don't understand the jargon or the history. It is long, verbose and doesn't get to the point. You could have gotten to the point faster by pointing to real life examples of neural networks doing the things you say are possible. You brought up fusion, but at least fusion research is going on in many universities and scientists generally agree it's a worthwhile pursuit. The traditional AI field is rather dead, and whether this is because of poor funding or overhype doesn't change that. But I do know you're doing the same overhyping you're accusing those guys in the 70's and 80's of.

The problem is, Starglider, you assume a software problem is something that can be solved, or would be solved, by human beings. Tim Sweeney's view is that bugs and errors in programming have made game development orders of magnitude harder. Computers have gotten better, but the human factor has not. By himself he made a hit shareware game, but as games got more and more complex they came to require entire teams of developers and testers, and the bugs still aren't worked out. The workaround is, of course, AIs creating other AIs and programming them that way, but that has unpredictable and uncontrollable results and could result in AI dominion. In short, it's entirely possible any sane civilization would not develop AIs to the extent you propose.

No, "creativity" and "inventiveness" cannot be quantified because they are fucking qualities! These qualities are applied when results are demonstrated. You would be far more convincing if you detailed what the primitive versions of current technology can do and extrapolated from there.

And of course, let's not forget we are talking about fiction here. Wanking is not just a matter of technological possibility or difficulty, or else the Death Star would be wanking. It's hard to imagine anything but the most extreme of stories featuring millions of artificial intelligences, in robot bodies, all thinking millions or billions of times faster than a human being, all in a single solar system, and still being considered HSF. It all comes back to the "name one" question at the start, and if none really exist in fiction, or it's unlikely in real life, then it's just pulling crap out of your ass.
Name one?

OK. The civilisation in Charles Stross's Accelerando. An indeterminate number of civilisations in the Orion's Arm verse (for those who hate this setting, note that not all parts of it include pico- and femtotech). For non-fiction treatments, read Gerard O'Neill, K. Eric Drexler and/or Kurzweil - or, for a lower-tech version, von Neumann. (Bear in mind that O'Neill was writing in a world where the best computers had 128K of RAM and cost millions.) All the books in the 2001 universe by Clarke except the first (the AIs being the monoliths, of course). The High Beyond and Transcend civilisations in A Fire Upon the Deep...

Enough already. Point made?

And if you think the rate of technological progress needed is silly, consider what the computer engineers of the 70s would have thought of the machine you are reading this on, compared to a mainframe of the day. Twenty years - memory up by four orders of magnitude, speed up by maybe three, disk space up by three. And that was a mainframe - which cost maybe thirty million dollars, in 1970s dollars - maybe ten times that in real terms. So the cost is down by four orders of magnitude, too. Twenty years. Imagine what the next fifty will do, if you can. I can't.
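
For what it's worth, here is a quick calculation of the doubling time implied by a four-orders-of-magnitude increase over a span of roughly that length (using both twenty and thirty years):

Code:
import math

# Doubling time implied by a 10,000x (four orders of magnitude) increase
# over a given span of years; the spans and factor are the rough figures above.
def doubling_time_years(growth_factor, span_years):
    return span_years * math.log(2) / math.log(growth_factor)

for span_years in (20, 30):
    print(f"10,000x over {span_years} years -> doubling roughly every "
          f"{doubling_time_years(1e4, span_years):.1f} years")

Either way it lands squarely in classic Moore's-law territory.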
kinnison
Padawan Learner
Posts: 298
Joined: 2006-12-04 05:38am

Post by kinnison »

Error on my part. For "twenty years" substitute "thirty years". I can't get edit to work.
Post Reply