You're saying things like:
Darth Wong wrote: Diversity of thought patterns.
But you don't really know what that means. What exactly is a 'thought pattern'? Define it for me in terms of neurons firing and synapses being reconfigured. Define it in terms of reasoning algorithms and knowledge base structures and then we can do a comparison.
Darth Wong wrote: New blood entering the system with new ideas.
Likewise, define this for me in terms of memetics. Where does the resistance come from in humans? What is the 'new blood'? What is the motivation structure behind young scientists trying new things vs old scientists elaborating old theories? Why are AIs operating on pure probabilistic logic going to have a 'resistance' problem? Why are AIs operating on expected utility likely to follow broken, human-like allocations of cognitive effort?
You're currently talking about AI the way I would be talking about engines if I treated them as lumps of metal with 'power' and 'speed' (unquantified), possibly with 'spinning bits' and 'fuel-burning chambers', and then declared that no engine could ever exceed 1000 rpm due to 'lubrication issues'.
The short answer is (a) that well-designed AIs aren't going to be saddled with any of these problems in the first place. I will happily explain this to you in detailed technical terms that do define 'thought patterns' in terms of code, algorithms and neural activation patterns, but I will have to reference a lot of literature and use technical terms to do it, so I suggest starting a new thread for it if you want to do that. Part (b), however, is that badly-designed AIs aren't going to be any worse than humans, and that these limitations are scaling factors which restrict progress relative to the effective 'clock rate' of your entire community of researchers; they are not absolute and independent limitations. Absolute limitations essentially have to exist in external physics; having to run real-world experiments is an example. Anything that exists within software (which for AIs includes anything 'mental' or 'social') is a scaling factor.
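To make the scaling-factor versus absolute-limit distinction concrete, here's a toy Amdahl's-law-style model (Python; the numbers are invented purely for illustration - the 5% 'must touch real physics' fraction is an assumption, not a measurement):

[code]
# Toy model: only work that must touch external physics (real experiments) is an
# absolute limit; everything done in software scales with the clock-rate multiplier.

def overall_speedup(clock_multiplier, physical_fraction):
    """Amdahl-style effective speedup of the whole research effort."""
    software_fraction = 1.0 - physical_fraction
    return 1.0 / (physical_fraction + software_fraction / clock_multiplier)

for mult in (1e3, 1e6, 1e9):
    # assume (purely for illustration) 5% of total effort needs real-world experiments
    print(f"{mult:10.0e}x hardware -> {overall_speedup(mult, 0.05):6.1f}x overall")
[/code]

The point of the sketch is just that whatever fraction of the work lives in software scales with your hardware, while the experimental residue sets the eventual ceiling.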
Darth Wong wrote: If you said that they had millions of independently "trained" scientific AIs working on the problem, I might think you had a point. But saying that a super-AI is equivalent to armies of scientists doesn't make any sense.
Of course it does. The distinction between 'millions of independent AIs' and 'one huge AI' is actually fairly academic for well-designed AI systems. You simply have n processors of x capacity separated by communication links of p latency and q bandwidth. The true boundary between AI systems is the goal system. Systems with the same goal system will automatically co-operate and effectively be subunits of a single large system. This is inherently more efficient than 'lots of separate AIs' due to the lack of unnecessary redundancy, in both effort undertaken and control systems, plus reduced communications overhead (though sufficiently well co-ordinated networks of person-like AIs can probably approach this efficiency quite closely). You are saying 'it doesn't make any sense' because it's counter-intuitive, but a lot of things about AGI are counter-intuitive. You can't give an actual reason why a rational general AI (i.e. operating on expected utility and probabilistic logic) with access to enough computing power can't replicate the efforts of an army of a million trained human scientists, because there isn't one. Of course /developing/ such a thing is a massive software engineering task beyond our current abilities, but then so is building any of the other infrastructure for a Type I civilisation.
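If it helps, here's roughly that framing as a toy sketch (Python; the class and field names are mine, not standard terminology - it's bookkeeping, not an implementation):

[code]
from dataclasses import dataclass

# Rough bookkeeping for the framing above: hardware is just a pool of processors
# and links; the real boundary between 'AI systems' is which goal system a given
# unit of cognition is working for.

@dataclass
class ComputePool:
    n_processors: int           # n processors...
    capacity_each: float        # ...of x capacity (arbitrary units)
    link_latency_s: float       # p
    link_bandwidth_bps: float   # q

@dataclass
class CognitiveUnit:
    goal_system_id: str         # units sharing this id co-operate as one system

def same_system(a: CognitiveUnit, b: CognitiveUnit) -> bool:
    # 'one huge AI' vs 'millions of AIs' reduces to whether the goal systems match
    return a.goal_system_id == b.goal_system_id
[/code]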
In order to try a concept, you must first think of it. Often, that means stepping slightly outside the boundaries of what you have already established to be correct.
Yes, you do. This is hypothesis generation. In a probabilistic system, nothing is ever actually 'established correct' or 'established incorrect' (with the possible exception of maths and logic proofs, depending on implementation); probabilities just closely approach 1.0 or 0.0. Hypothesis generation works by recombining functional components (e.g. fragments of maths or code that implement parts of models) in new patterns, to make a new model. This is then checked (I'm simplifying) to see if it actually matches anything in reality, and then whether it can predict anything new. All induction relies on this (deduction is deriving consequences from your existing models). The main trick to getting induction to work well is finding a mechanism that's good at predicting which recombinations are likely to be useful; this is a very complicated problem that includes elements such as the 'base prior' (e.g. Kolmogorov complexity - though the subtleties of implementing even that are quite deep), the recursion mechanism, the chunking mechanism... I could go on for pages.
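Here's a deliberately trivial sketch of that loop (Python; the 'fragments' are toy functions and the complexity penalty is a crude stand-in for a real base prior - nothing here resembles a production inductive engine):

[code]
import random

# Hypothesis generation by recombination, massively simplified. 'Fragments' stand
# in for reusable pieces of models (bits of maths/code); the complexity penalty is
# a crude stand-in for a Kolmogorov-style base prior.

FRAGMENTS = [lambda x: x, lambda x: x * x, lambda x: -x, lambda x: x + 1.0]

def generate_hypothesis(n_parts):
    parts = [random.choice(FRAGMENTS) for _ in range(n_parts)]
    def model(x):
        return sum(p(x) for p in parts)      # recombine fragments into a new model
    return model, n_parts

def score(model, complexity, data):
    fit_error = sum((model(x) - y) ** 2 for x, y in data)
    return -fit_error - 0.1 * complexity     # fit to reality, penalised by the prior

observations = [(x, x * x + 1.0) for x in range(-5, 6)]   # the 'reality' to explain
candidates = [generate_hypothesis(random.randint(1, 4)) for _ in range(2000)]
best_model, best_complexity = max(candidates,
                                  key=lambda c: score(c[0], c[1], observations))
print("best score:", score(best_model, best_complexity, observations))
[/code]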
Anyway, the point is that this doesn't need fresh new human minds. Though if necessary, we can create an indefinite number of new simulated humans by recombining human brain elements. That's nasty and messy and pointless though. Rational intelligences don't suffer from human limitations on creativity (irony!) or self-imposed mental blocks. Optimising creative output is simply a parameter optimisation problem: how much effort do you focus on slight tweaks, how much on major rethinks, and how much on completely left-field ideas? In genetic programming this is the 'diversity management' problem, which has been very well studied - the tradeoff between short-term improvement rate versus the likelihood of getting stuck in local optima (and breakout time when the system does). 'Cognitive diversity management' for a rational AI system, or indeed a big population of humanlike AIs, is an essentially similar but rather more complex and nuanced problem.
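A toy version of that allocation problem, just to show it really is parameter optimisation (Python; the categories, payoffs and update rates are all invented for illustration):

[code]
import random

# Toy 'diversity management': split a fixed effort budget between small tweaks,
# major rethinks and left-field ideas, shifting the split towards whatever has
# been paying off recently while never driving any category to zero.

budget = {"tweak": 0.70, "rethink": 0.25, "left_field": 0.05}
recent_payoff = {kind: 0.0 for kind in budget}

def record_result(kind, improvement):
    # exponentially-weighted memory of how productive each kind of search was
    recent_payoff[kind] = 0.9 * recent_payoff[kind] + 0.1 * improvement

def rebalance():
    total = sum(recent_payoff.values()) or 1.0
    for kind in budget:
        target = recent_payoff[kind] / total
        budget[kind] = 0.8 * budget[kind] + 0.2 * target   # move slowly: keep diversity

for _ in range(100):
    kind = random.choices(list(budget), weights=list(budget.values()))[0]
    record_result(kind, random.random())    # stand-in for measured progress
    rebalance()
print(budget)
[/code]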
The measure of scientist man-hours conceals the fact that new scientists are continually entering the system, ie- it's many hours, but not always with the same men.
I've already addressed the fact that humans relying on our personal experience bases for inspiration isn't a limitation that applies to AIs, and nor are irrational motives to over-value the orthodoxy (and thus defend it to the death) or under-value it (resulting in the proliferation of cranks). If you've thought of some other reason why mind turnover is important, say so, but I've also pointed out that it can easily be implemented in software if required.
I am not even bothering to point out the advantages of perfectly rational analysis (avoiding all the fallacies that hamstring human analysis), massive working memory size, inbuilt maths and numeric simulation ability, perfect recall, telepathic-grade perfect communication and near-perfect division of labour due to knowing exactly what everyone else is working on.
All of which deal with efficiency of the gruntwork of science.
No, they don't. You're focusing in on creativity, but you should know better than most that coming up with the idea is often the easy part. How often have you lambasted the Trek writers for their 'bright idea to technobabble implementation in half an hour flat' plots? Focusing on the 'gruntwork' makes perfect sense given that the vast majority of time is currently spent carefully testing and refining maths, not coming up with conceptual ideas. More than that, what you fail to appreciate is just how serious a limitation communication barriers are to scientific progress. This is painfully obvious in AI itself, actually; much of the reason why progress has been so slow is the balkanisation of the field into tiny non-communicating subfields, and the heavy use of subfield-specific and even personal jargon that makes it very hard to work out what the hell other people's bright new ideas actually are. Even deep mechanical telepathy (via cybernetics) would be a /huge/ improvement here, without even getting AIs involved (though in practice you'd have to use at least a fair dollop of narrow AI and processing power to let one brain directly understand another brain's complex conceptual ideas - not an issue for rational-AI-to-AI communication, where it really is almost as simple as 'dump a portion of my mind state to an XML file').
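The 'dump to XML' remark really is about that simple a mechanism in principle; a toy illustration (Python; the element names and the example concept are made up, and a real KR would be enormously richer):

[code]
import xml.etree.ElementTree as ET

# With a transparent knowledge representation, 'communicating a concept' is closer
# to serialisation than to explanation. Structure and values invented for the example.

concept = ET.Element("concept", name="frame_dragging")
ET.SubElement(concept, "model", form="tensor_equation", ref="lense_thirring")
ET.SubElement(concept, "evidence", source="gravity_probe_b", confidence="0.97")
print(ET.tostring(concept, encoding="unicode"))
[/code]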
Why is one spawned process of an AI any more likely than one member of an army of human scientists to think in a radically different direction?
Because the army of human scientists don't know what the others are thinking, so it's very difficult to avoid duplication of effort. Furthermore, they can't all share experience and knowledge. If you ask each of your army of 1 million human scientists to read 10 papers, then each one now understands 10 papers. If you get an AI with the equivalent thinking capacity to do the same (or 1 million AIs with reasonably transparent KRs), every AI/process now benefits as if it has read 10 million papers. The same applies to any other kind of 'life experience' that you might think helps in coming up with new scientific ideas - though frankly I'm highly skeptical about whether non-anthropomorphic AIs need the informal stuff anyway. Humans do, because the mental machinery we're using to think about science was actually designed for thinking about how to hit people with spears, how to predict where the buffalo herd will be, how to convince peers to support you as tribal leader, etc. Our thinking involves a lot of fuzzy analogies, and we get inspiration from the oddest of sources as a result. AIs can do this if they have to, but they can also just write the code to implement complex numeric simulations in the time it takes you to think 'the cream in my coffee looks just like a spiral galaxy'. I strongly suspect the direct approach is a lot more efficient (though this is one of my less rigorously supported points ATM).
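The 10-papers example as toy code, scaled down so it runs instantly (Python; obviously a real KR is not a set of strings, this is just the bookkeeping):

[code]
# Each human keeps only what they personally read; AI processes sharing a
# transparent KR all benefit from every paper any of them reads. Scaled down to
# 1000 scientists; the argument is identical at a million.

N_SCIENTISTS = 1000
PAPERS_EACH = 10

class HumanScientist:
    def __init__(self):
        self.known_papers = set()            # private experience base
    def read(self, paper):
        self.known_papers.add(paper)

class AIProcess:
    def __init__(self, shared_kb):
        self.kb = shared_kb                  # one store, visible to every process
    def read(self, paper):
        self.kb.add(paper)

humans = [HumanScientist() for _ in range(N_SCIENTISTS)]
shared_kb = set()
ais = [AIProcess(shared_kb) for _ in range(N_SCIENTISTS)]

for i, (human, ai) in enumerate(zip(humans, ais)):
    for j in range(PAPERS_EACH):
        paper = f"paper_{i}_{j}"
        human.read(paper)
        ai.read(paper)

print("one human knows:", len(humans[0].known_papers), "papers")
print("every AI process can draw on:", len(shared_kb), "papers")
[/code]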
All of whom would think similarly, hence not be any more likely to achieve a radical breakthrough than one scientist with very efficient assistants.
Why on earth would they 'think similarly' to any greater degree than a couple of humans?
If they're all processes spawned by a large AI that was "trained" to think a certain way, they would.
You're going to have to be more specific about 'think in a certain way'. Probabilistic logic and expected utility are the normative way to do low-level cognition; they outperform anything else. However, that's probably not the closest equivalent to the layer you're thinking of. I imagine you're thinking of a human's mental toolbox of concepts, models, deductive processes and analytical techniques. For a rational AI, these are lumps of code that can be freely reused and passed around (possibly with some minor conversion to account for KR differences). An AI system can build up as many of these as it needs; by comparison, the human brain is quite strictly limited by neural real estate and internal competition. It's hard and time-consuming for us to learn new ways of thinking, and it's easy to lose the ability to do things that you don't practice. For an AI to learn 'new ways of thinking', it's as simple as linking in some extra code, either code it has written itself or code another AI gave it.
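A toy illustration of 'learning a new way of thinking = linking in code' (Python; the module names and the agent class are invented for the example):

[code]
# A reasoning module is just a function: acquiring one is adding it to your
# toolbox, and passing it to another AI is a copy, unlike retraining a human brain.

def dimensional_analysis(problem):
    return f"checked units on {problem}"

def perturbation_expansion(problem):
    return f"expanded {problem} in a small parameter"

class RationalAgent:
    def __init__(self):
        self.toolbox = {}
    def link_module(self, name, module):
        self.toolbox[name] = module                        # 'learning' = linking in code
    def share_module(self, name, other):
        other.link_module(name, self.toolbox[name])        # 'teaching' = copying it

a, b = RationalAgent(), RationalAgent()
a.link_module("dimensional_analysis", dimensional_analysis)
a.link_module("perturbation", perturbation_expansion)
a.share_module("perturbation", b)                          # b instantly has the technique
print(b.toolbox["perturbation"]("orbital decay problem"))
[/code]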
Two humans are not as likely to be as similar as two processes spawned by the same AI brain.
The similarity is determined by a reflective analysis of what 'kinds of thinking' are likely to be useful on a specific problem. Sometimes it makes sense to use the same technique one million times with different parameters. Sometimes it makes more sense to use one thousand quite different techniques. In reality there isn't a single decision, there are millions of distributed decisions like this at different layers of detail. I'm not talking theoretically here; this
actually happens in layered AI reasoning systems such as SOAR, Eurisko and the system we are developing.
For that matter, why is this relevant? You may be saying 'what humans can imagine is limited by their personal experiences, and all humans have different experience histories'. But for a non-anthropomorphic AI system, all your AI instances can draw on all the experiences of every other AI instance. They all have essentially complete understanding of every field any of them understands.
That's great for working with existing ideas and existing rules. I see no reason why this system would be particularly good at coming up with something which seems to fly in the face of existing thinking. If anything, it seems like it would be much worse.
Coming up with ideas that 'fly in the face of existing thinking' is easy. You just randomly generate some axioms and derive some consequences from them. The proliferation of crank physics sites on the Internet underlines how easy this is. In actual fact there's major selection bias going on here; real scientists come up with left-field ideas all the time too, they're just clever and experienced enough to toss them in the trash after half an hour of working out the basic consequences. What we call cranks are people who form an emotional attachment to their ideas and promote them despite the fact that they clearly don't work.
The difficult part is coming up with novel ideas with a good chance of actually being useful. There is
no reason to believe that humans are especially good at this. Obviously existing non-AI software, and most non-general AI software for that matter, can't do it, but that's a red herring. Humans basically do this via 'intuition'; we don't have a systematic process for coming up with 'bright ideas', despite numerous attempts to formulate one. AIs can employ a systematic approach, via progressive refinement of a complex prior (essentially a probabilistic predictive model of what kinds of hypothesis-generation activity are likely to be useful). They can also use evolutionary techniques comparable to the leading 'neural Darwinism' models of human brainstorming, or just brute-force GAs if necessary.
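Here's a crude sketch of what 'refining a prior over hypothesis-generation activities' can look like at its very simplest - a bandit-style loop that learns which generators pay off (Python; the generator names and their hidden usefulness rates are invented, and a real system would be far more structured than Thompson sampling over three options):

[code]
import random

# A bandit-style loop that learns which hypothesis generators tend to produce
# useful ideas. Generator names and their hidden usefulness rates are invented.

generators = {
    "recombine_known_models": 0.30,   # true (hidden) chance of a useful idea
    "perturb_current_theory": 0.10,
    "random_axioms": 0.001,           # 'crank mode': novel, but almost never useful
}
prior = {name: [1.0, 1.0] for name in generators}    # Beta(successes, failures)

def pick_generator():
    # Thompson sampling: sample each posterior, run the most promising generator
    return max(prior, key=lambda name: random.betavariate(*prior[name]))

for _ in range(5000):
    name = pick_generator()
    useful = random.random() < generators[name]
    prior[name][0 if useful else 1] += 1

for name, (s, f) in prior.items():
    print(f"{name:25s} estimated usefulness {s / (s + f):.3f}")
[/code]

Note that 'random_axioms' gets correctly starved of effort without being switched off entirely - which is exactly the crank-avoidance behaviour described above.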
I have supplied plenty of reasons why AGIs should be more creative, and you have not supplied any reasons why they should be less creative, other than cognitive diversity, which I have just demonstrated is actually an advantage for the AI side. Nor have you explained why 'bright ideas' are suddenly so critical, when so much of your past writing has (correctly) pointed out that in engineering, bright ideas are ten a penny, while developing useful ideas and taking them to implementation is the hard part.
But if for some reason you did want to hobble yourself by using strictly anthropomorphic AIs, there's no reason to make them any more homogeneous than a population of human scientists of the same size.
How do you train all of these unique AIs?
There are two ways to do this (noting that this is a bad idea in the first place). Firstly, you can train them pretty much the same way that you train humans: have them study independently (textbooks and data), be taught by existing AIs, and run experiments in simulations. All this happens at whatever clock-speed multiplier your technology has over the human brain (from thousands to billions depending on the tech). For teaching purposes, detailed numeric simulations are generally just as good as lab work; the outcomes of the experiments are already well known. The only thing you can't do at 'electronic speed' is have students design and execute completely original experiments in the real world - but that's actual research, not training.
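To put rough numbers on the clock-speed point (illustrative multipliers only; take 25 subjective years as a stand-in for a full education plus a doctorate):

[code]
# Illustrative only: subjective study time vs wall-clock time at various speedups.
subjective_years = 25                       # a full education plus a doctorate, roughly
subjective_hours = subjective_years * 365.25 * 24
for mult in (1_000, 1_000_000, 1_000_000_000):
    print(f"at {mult:>13,}x: {subjective_hours / mult:12.5f} wall-clock hours")
[/code]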
The second potential way to do this is to avoid training them at all (or at least, avoid most of it). Instead, you'd take a starting base of 1000-ish AIs and recombine chunks of their brains in new patterns; each of the one million AIs would have some fraction of the personality of, say, three to twenty 'donors'. This has some ethical issues and I'm not 100% sure it will work; 'neural smoothing' should integrate it OK, but the specifics depend on the exact technology. In other words it's in the 'plausible but not certain to be possible' technology class. I would make the point, though, that humans actually have a relatively tiny amount of cognitive diversity across our entire species. We occupy only a minute subset of the space of possible general intelligences - even a minute subset of the space of possible organic intelligences using earth-type neurons. Uplifting dolphins, chimps, wolves, whatever would already give you a huge amount more 'mental diversity' than humans - and guess what, we can do this much more easily (in principle) via software uploading and tweaking than with real-life genetic engineering. Plus we can create entirely new humanlike cultures in VR and have minds grow up in them, we can simulate new animal species in VR environments and uplift those, we can apply simulated evolution to vast numbers of uploads in VR environments and see what happens... there are lots of these ethically dubious (and IMHO unnecessary and inefficient) but conceptually simple ways to give your population of AIs much /more/ mental diversity than an equivalent population of humans.
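Purely as bookkeeping for the donor-recombination idea (Python; this says nothing about the genuinely hard part, actually integrating the neural chunks, which as noted is the uncertain bit):

[code]
import random

# Composing 'new' minds from ~1000 donors, each drawing on 3-20 of them with
# random weights. Illustrative bookkeeping only.

donor_pool = [f"donor_{i:04d}" for i in range(1000)]

def compose_mind():
    k = random.randint(3, 20)
    donors = random.sample(donor_pool, k)
    weights = [random.random() for _ in donors]
    total = sum(weights)
    return {d: w / total for d, w in zip(donors, weights)}   # fraction of each donor

new_minds = [compose_mind() for _ in range(1000)]   # scaled down from a million
print(new_minds[0])
[/code]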
Not that this part of science is unnecessary, but in this scenario we're looking for something that will come up with something that has eluded their science for thousands of years. Why would their AIs have any particular talent for doing this?
It's true that the situation is unique; we've never had this situation in real life (a modern scientific establishment coming up against concrete technological evidence of an entire system of physics that had previously eluded them). I hope I've answered the question already: AIs can be more creative on a man-hour-equivalence basis to start with, creativity actually does scale well with computing power given sufficiently clever software, and even in this situation, building working theories and turning them into technology is vastly harder than having the bright ideas (which, frankly, for Trek you could get by reading a few thousand 20th-century sci-fi novels and doing a best fit of maths to their technobabble).