1. So what do you do?
2. Good books for an overview of AI?
3. Good online resources?
4. What should I study in high-school, to go into AI later?
5. I have a compsci/softeng degree, how do I go into AI?
6. Is it worth getting a masters degree?
7. How much education do I need to push the envelope?
8. Is it a big commitment? Is it worth it?
9. So what's it like working in the field?
10. Sounds fun!
11. Do you need a particular mentality?
12. Where can I network with AI people?
13. How long does it take to get into AGI research?
14. Will I have to work on normal software first?
15. Should I join a team or try to work on my own?
16. Do I have much chance of doing anything useful in academia?
17. So could I join a commercial AGI start-up?
18. Programming is hard, AI must be a bitch!
19. How does AGI interact with nanotech?
20. Are we going to have AGI before nanotech?
21. How can I actually see this cool tech?
22. Is government regulation possible or useful?
23. When reading AI stuff, how do I filter the wheat from the chaff?
24. Personal tips on that?
25. You say egotism is a problem?
26. Why do you all disagree, even within a subfield?
27. How many people is it going to take to make an AGI?
28. Why do you recommend material you disagree with?
29. What are your thoughts on the Fermi paradox?
30. Any other cogsci book recommendations?
31. So how soon are we going to make an AGI?
32. I'm not a programming genius. Can I still work on AGI?
33. How badly does the AI field need investment?
34. Should I donate money?
35. No really?
36. Should I get more programmers interested?
37. You sound a lot more down to earth, and pessimistic, than most transhumanists.
38. Does 'Friendly' general AI even have a chance?
39. It seems really important, but I used to think energy was really important.
40. How would a huge economic crash affect AGI research?
41. Seeing nanotech advocates ignore peak oil was depressing - are AI people like that?
42. So developing general AI makes everything else small beans?
43. Do we need quantum computing to make humanlike AI?
1. You are a full-time AI researcher of some sort?

I am the technical director of a small start-up which has been developing a fairly revolutionary automated software engineering system. Six years of R&D and we're getting close to market, but still not there yet. Fortunately we got a research grant for the first two years, so I was able to focus on fairly open-ended general AI research to start with, building on earlier work I'd done. Later it progressively focused down on something that could be a good first product; we've done a wide range of software consulting to fund development of that, about half of it AI-focused. I have a plan to shoot for general AI later, but we'll need more funding and quality staff to have any realistic chance of pulling it off.
Further back, I was a research associate at the Singularity Institute for AI for a while, late 2004 to late 2005ish; I'm not involved with them at present, but I wish them well. I got started in AI doing game AI and making lots of simple prototypes (of narrow AI and very naive general AI concepts) as a teenager, and I took all the AI and psychology modules I could during my CompSci degree.
2. What books should I read to learn about AI?

I recommend 'Artificial Intelligence: A New Synthesis' by Nils Nilsson as a technical primer - it assumes undergrad-level compsci/maths knowledge. For a more descriptive, historical and layman-accessible account of the field, to see what you're letting yourself in for, I recommend 'Artificial Minds' by Stan Franklin. They're both a few years old now but cover all the essentials.
3. Any good online resources?

Strangely, no. I know of lots of attempts to make wikis focused on AI, but they're all pretty threadbare and/or horribly amateurish, which is strange when there are many excellent equivalents for general compsci and softeng (e.g. the legendary Portland Pattern Repository). That said, Wikipedia's AI coverage seems fairly good for getting a general idea of what a particular narrow AI subfield, algorithm or term is all about (it's no good for general AI though).
4. Could you offer a future university student some advice as to what study course would be most valuable to someone wishing to pursue AI research?

At secondary school/high school level, the main thing you need is maths (the fundamentals of discrete maths, e.g. set theory, are particularly essential; probability is vital for fuzzy reasoning; stats is good for a lot of narrow AI), followed by programming (mainly because if you learn how to program now, it'll free up more time to study more advanced topics later). If you have the chance to do a psychology course (e.g. a psychology A-Level in the UK), that's somewhat useful for general AI too.
Finally, it's a good idea to grab a good undergrad-level AI textbook (see above). If you're planning to go for general AI eventually, I'd recommend taking a look at 'Godel Escher Bach' (Douglas Hofstadter - if you like that, follow it up with 'Fluid Concepts and Creative Analogies', which is more specifically about AI) and to a lesser extent 'The Society of Mind' (Marvin Minsky). They're both accessible at the pop-sci level and fairly mind-expanding.
If you're already a confident programmer (and if you want to be a really good computer scientist, you should be), why not try out a few simple AI algorithms yourself, in toy prototypes?
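For example (a minimal sketch in Python, my own illustration rather than anything from a particular course): random-restart hill climbing on the 8-queens puzzle, a classic toy local-search exercise.

```python
# Toy prototype: random-restart hill climbing on the 8-queens puzzle.
# One queen per column; state[i] is the row of the queen in column i.
import random

def conflicts(state):
    """Count pairs of queens attacking each other (same row or diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8, max_restarts=100):
    for _ in range(max_restarts):
        state = [random.randrange(n) for _ in range(n)]
        while True:
            current = conflicts(state)
            if current == 0:
                return state                      # solved
            # Greedily move the single queen whose move reduces conflicts most.
            best, best_score = None, current
            for col in range(n):
                for row in range(n):
                    if row == state[col]:
                        continue
                    candidate = state[:col] + [row] + state[col + 1:]
                    score = conflicts(candidate)
                    if score < best_score:
                        best, best_score = candidate, score
            if best is None:
                break                             # stuck at a local minimum: restart
            state = best
    return None

print(hill_climb())
```

Swapping the neighbour-selection rule, or bolting simulated annealing onto the same skeleton, is a good way to get a feel for how much behaviour hides in small design choices.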
Incidentally if you're hoping to eventually work on general AI, and you have any inkling of just how important and dangerous that is, Yudkowsky's So You Want To Be A Seed AI Programmer still applies.
5. I am a recent graduate with a computer engineering degree. I am interested in AI but am not sure what the field is like.

Jobs specifically involving even narrow AI are very rare (a fraction of a percent of programming jobs at best). Commercially, the job description is usually 'software engineer with n years of experience in our specific field, knowing technologies y and z, and oh, having some working knowledge of machine learning'. There are assorted academic projects, which are more focused on pushing the boundaries, but as a rule the pay is awful. There are a very, very few startups specifically trying to develop general AI. Pay is variable but they don't tend to last very long (then again, that goes for all IT startups).
6. Would you recommend that I seek an MS degree, or no? Considering the urgency of doing AGI research and the fact that postgraduate education can easily eat up a couple of additional years, I (at least currently) don't see it as being a worthwhile pursuit.

I'm not sure how much difference it would make to your career prospects in the US. In the UK, it helps a little, but probably not enough to be worth the extra year. Then again, it also depends on what you would cover in that extra year, and whether it actually takes you two years due to being part time. For AGI purposes, you probably won't learn that much useful stuff, unless you're specifically doing a Masters in AI (and I can't think of a UK university that lets you do that - we do them in games programming, but not AI). On the other hand it's another year where demands on your time are relatively low, you have access to a university library and you're well placed to do private self-directed learning/research. That argument is probably less compelling in the US, and also if you're paying all your degree costs personally.
7. What kind of education (which schools, what kind of research published) does it take to get into an envelope-pushing project?

Competition for slots on the really interesting projects is pretty tight, as with all research, but you don't need amazing qualifications to be a grad student or junior engineer on minimal salary implementing someone else's ideas. Commercial AI research is actually one of the few areas in computing where a PhD makes a big difference; it probably still doesn't beat 'Masters plus 3-4 years applied commercial AI experience' on the CV, but getting into a PhD program is relatively straightforward, whereas going straight into relevant commercial AI work as a graduate is very hard (particularly in the current job market).
8. Is it worth investing my life in that direction while I plan my future? I don't know whether to keep it as a hobby or pursue it as a career.

Unless you already have a revolutionary idea and are highly confident about your ability to start and run a company based on it (actually not a good indicator; false optimism and self-delusion are overwhelmingly common in startups), going into AI makes no sense from a financial/career perspective. The payback in terms of jobs you can apply for is low compared to lots of less-fuzzy and easier-to-learn technologies, and it gets worse the more you focus on general (as opposed to narrow) AI. Yes, you may get lucky and get a very nicely paid job developing military robots for DARPA or semantic web technology for Google. The chances still aren't good compared to more marketable skills. If you really want to do general AI, be aware that you have basically zero chance of doing something useful putting in a few hobby hours a week, and well-paying general AI research jobs are as rare as hen's teeth.
I got into AI because I recognised that the development of machine intelligences with human capabilities and beyond was probably the single most important event in the history of planet earth. It is literally life and death for humanity as a whole (you can reasonably debate timescales but not the eventual outcome), and there is a very good chance that the key breakthrough will be made in my (and your) lifetime. While the chances of me personally making a critical contribution were very low, I was still one of the very few people with any chance of directly influencing this event, and I felt that it was my overriding duty to do whatever I could to help. The fact that it would probably destroy my financial prospects, and cause ongoing stress and depression for me and my entire family, was just not as important.
As it turned out there is now a reasonable chance of me getting rich out of AI, but I couldn't count on that when I made the decision six years ago, and you shouldn't either.
9. Can you tell me what it is like in the field? Having spent most of my time doing school work, I'm not sure about the current developments, what doing work in the field involves and how I could fit in.

If you mean the history, culture, personalities etc of the field, numerous books have been written on the subject* and they are still restricted to a brief overview of each subfield. As a graduate, your choice is between staying in academia, commercial narrow AI work (the biggest areas are robotics, games and search/data mining - though note that even in games very few people do purely AI), or joining a wildly ambitious general AI start-up (e.g. Adaptive AI Inc).
* 'Mind Design II', compiled by John Haugeland, is a great example, because it's basically a big collection of papers from many different subfields where researchers trash rival approaches and claim only their own can work, as politely as possible. Probably inaccessible to laypeople, but it's really funny if you're in the field.
Unsurprisingly most commercial work is kind of dull - you normally pick the off-the-shelf algorithm that has the lowest technical risk and development time, slot it in, and spend most of your time doing requirements capture, functional testing, interfaces and other non-AI stuff anyway. Finance has some interesting decision support problems, and in the US particularly there have always been a fair number of military and intelligence projects trying to push the narrow AI envelope (you'll need a security clearance for that).
Academia usually means slaving away for low pay implementing the research director's ideas, when you're not grading essays or drafting papers (for your superiors to stamp their names on and take credit). Eventually you'll get tenure (if you're lucky) and be able to do pretty much what you like, as long as it results in lots of papers published and looks good at university open days. Startups focused on general AI are usually exciting, stimulating stuff, but the jobs are nearly impossible to get, probably involve moving across the country or to another country, and last for an average of oh 24 months or so before the company runs out of funding and implodes.
10. I envy you... while I'm sure like any job it has its share of bad and boring days, to glimpse what kind of results and AI technology you get to play with must be at times quite a treat!

It is consistently interesting, really challenging and stimulating mentally, and sometimes quite exciting. Talking to others, the fragmentation of the field and the general lack of respect it gets can be a little depressing, as is the isolation involved in doing the really hard parts (common to most science, I think). There is definitely a dark side; the incredible stakes and the horrible risks prey on your mind - both the slight but real chance of personally creating a UFAI, and the much higher chance that someone in your field will do it and there is no way to stop them, except working harder and getting there first. It's an obsession that consumes indefinite time and resources, and all projects to date have failed (mostly utterly), which is a huge source of depression if you're prone to it. This field is particularly prone to destroying lives and families.
11. Do you need a particular mentality and ability to enjoy certain activities (like, say, endless nights of theorem proving) to succeed?

Depends on whether you want to do narrow or general AI, and which subfield (robotics, natural language, data mining, etc). All of it involves a fair bit of maths, logic and general strong compsci skills. NNs, genetic programming and similar connectionist approaches aren't really that hard; most people just mess about with parameters instead of doing anything rigorous. Or rather, matching other researchers' accomplishments to date isn't that hard - getting those approaches to ever work for general AI would be. The vast majority of robotics is essentially the same as normal embedded/realtime control system development, but with more complexity and tighter specs. If you like hard problems in general software engineering, you'll probably like that, and a lot of people get a particular satisfaction from seeing a physical end result instead of just software. Natural language processing is mentally hard, whether you're using statistical approaches or structural parsing. General AI is mind-meltingly, ridiculously hard, and requires absolute dedication and years of self-directed learning across several fields just to have any chance of having useful original insights.
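To make the 'just function approximation' point concrete, here's a minimal sketch (my own toy example, not anything from the post): a single sigmoid unit fitted by gradient descent, which is all that many small connectionist demos amount to.

```python
# A single sigmoid neuron fitted by gradient descent: plain function approximation.
import math, random

data = [(x / 10.0, 1.0 if x / 10.0 > 0.5 else 0.0) for x in range(11)]  # toy step function
w, b, lr = random.random(), 0.0, 1.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):
    for x, y in data:
        out = sigmoid(w * x + b)
        err = out - y                      # derivative of squared error (up to a factor)
        grad = err * out * (1.0 - out)     # chain rule through the sigmoid
        w -= lr * grad * x
        b -= lr * grad

print([round(sigmoid(w * x + b), 2) for x, _ in data])  # roughly reproduces the step
```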
12. What kind of places should I hang out in and what kind of people should I network with if I want to learn more?

This is the only common question I don't have a good answer to, basically because I'm out of date. I used to spend a lot of time doing this in 2003 and 2004, but recently I haven't had the time (going on SDN so much is bad enough) - it's hard enough keeping up with private correspondence. I used to spend a lot of time on mailing lists like AGI and SL4 and exchanging emails with individual researchers I met either there, via the SIAI, or out of the blue. I used to read the relevant AI newsgroups, but there was so much spam and so many cranks even then that it was like straining a sewer for lost diamonds. There are associated IRC channels (e.g. #AI on Freenode can be interesting) and lots of newer forums that I haven't tried. Finally there are lots of conferences, from very traditionally academic (e.g. MLDM, ICMLA), to newer and less structured (e.g. the yearly general AI conference Goertzel organises - unfortunately he's fairly crank-friendly), to popsci/futurism with an AI focus (e.g. the Singularity Summit the SIAI runs). The former are good for networking if you're well embedded in academia, the latter are probably better if you aren't. Then there are trade shows in areas like enterprise search and industrial and entertainment robotics...
There's a big community of amateurs messing about with AI, often with their own attempts at general AI. A lot of them are outright cranks, and most of the rest are just wildly overconfident and overoptimistic. The pathology of that group is a whole other essay, maybe a book. You can't realistically do anything in AGI on your own, putting 10 hours or less a week into it, but a lot of people think they're going to crack the problem that way. Don't get sucked into that mindset.
13. More generally: what sort of time frame can I anticipate when it comes to entering the world of AGI research?

If you mean getting paid to do general AI work full time, as your main focus, there are probably fewer than a thousand jobs available worldwide (though a lot more academics claim to be working on a small part of the problem, with varying degrees of credibility). It isn't so much a fixed timeframe as a question of luck whether you find a job in any given year, though of course you can improve your skills (and possibly change your location) to raise your chances.
For personal research, you can start messing about in C++ right now, call it a general AI project, and start posting on forums telling people you're a general AI researcher. Plenty of amateurs do. If you mean how much study specifically directed at AGI it takes before you have any real chance of making useful progress, I'd say two to four years of intensive study if you already have strong compsci knowledge (including basic narrow AI), are a competent programmer, and are highly intelligent. If you don't have those traits, it's probably hopeless. Even if you do, remember that most AGI projects fail utterly without even making a significant contribution to the field.
14. Will I have to work on commercial software development, assembling experience and credentials for a few years, before devoting myself full-time to AGI research?

If you mean narrow AI first, almost certainly, unless you find a way to fund your own project, or you are stunningly good at the theory (and lucky) and get a personal grant from somewhere like the SIAI. Frankly you'll be lucky just to find a commercial job focusing on narrow AI; postgrad positions in academia are only a little easier to get. I got a commercial R&D grant to cover my basic research, but the techniques I used to do that are best described as 'black magic and voodoo'.
15. Should I try to join a team, or something more like receiving grant money to independently work on a sub-problem?

I would let your opinions on how best to tackle AGI mature a bit before answering that one. Personally I would say 'team if available, independently if not'. Sad to say, the number of people cut out to lead an AGI project is considerably smaller than the already tiny number of people qualified to work on one. As usual the number of people who think they can is rather larger than the number who actually can, and that's before you factor in FAI safety concerns (near-total disregard for which is an automatic disqualification for upwards of 90% of the people currently working on AGI).
16. I'm not all that optimistic about how much I could accomplish in academic research.

Sadly, academia is a difficult fit with real AGI research, even if you're in a faculty that hasn't publicly given up on the whole idea (quite a few have - in some places the scars of the 'AI Winter' still linger 15 years on). You spend most of your time teaching, jockeying for funding, writing papers, reading papers and attending conferences. You won't really get to set research objectives until you're in your 30s at best, 40s to 50s realistically. So as a grad student or new postgrad, most likely you'll be assigned to assist some narrow AI project or completely braindead AGI project. Any work you do has to be publishable in tiny bite-size chunks without too much novelty and with copious references to past work. Projects that have a low publication-to-funding ratio (because they're hard but concise), are too hard to explain, or don't get a good citation count (because they are too strange or piss off other researchers) don't get funded.
17. Is there a chance that I could join your team (or a highly similar project) about five years from now?

Certainly there is a chance. I don't know what the landscape will look like in five years' time, but there probably will be AGI projects looking for good staff. How many mostly depends on the investment climate, and of course on whether anyone makes a flashy breakthrough in the meantime. Unless you're deadly serious about maximising your income, it doesn't hurt to shoot for that while still in university, even if the chances of making the cut aren't high. As for joining us specifically, well, we've been around for five years as a company, at least breaking even, and that's actually pretty good for an AI startup, but we've hired very few people to date. Of course we're located in the UK only at present, and if you're not already in the EU immigration is a bitch.
18. I am beginning to realize just how daunting a task it is to create truly sophisticated programs...

Software architecture is a fairly distinct skill from the actual code-level programming. You get better at both with time, and it takes many years of daily practice to reach a high standard (e.g. where implementing typical desktop applications becomes fairly trivial), presuming that you have a basic aptitude for it to start with (similar to playing a musical instrument in that regard). AI design is another level entirely, and general AI design is another level above narrow AI design (IMHO the majority of reasonably plausible AGI designs incorporate a fair amount of narrow AI concepts). Formal FAI theory is another level above that. So there's a lot to master if you want to have a serious shot at the problem, and the sad thing is, deficiencies in any of these areas can screw you over. Then there's all the external stuff: funding, recruitment, etc.
19. I have a basic grasp of the concept of nanotech from googling and informative discussions on SDN, but I'd like to read up more on the issue of how AI might theoretically use it.

Firstly, be aware that I am not a nanotech expert. I've done the basic reading, and I know people who are experts (and have discussed these risks with them), but I'm not qualified to originate opinions on what the technology can do, so I'm repeating people who have done the maths. Secondly, note that 'nanotech' as a term got bandwagoned, distorted and misused to a ridiculous degree. Nanorobotics is a small and currently quite speculative subset of what we now call 'nanotech' (which is anything involving man-made structures of nanometre scale). If you haven't already done so, read 'Engines of Creation' by Eric Drexler - the classic popsci work on molecular nanotechnology and its potential. The technical counterpart with the real feasibility studies is 'Nanosystems: Molecular Machinery, Manufacturing, and Computation', but you need a fair bit of chemistry, physics and compsci knowledge to get value out of that.
Back to your question. To be honest it isn't something that we spend a lot of time analysing. The fact is that transhuman intelligence is a ridiculously powerful thing; the 'humans vs rabbits = seed AI vs humans' analogy definitely applies. General assemblers are also a ridiculously powerful thing, even without being able to make complete microscale von Neumann machines (which looks entirely possible anyway, given an appropriate energy source). Put those two technologies together and the possibilities are squared - though combining them in a single microbot probably does not make sense except in a few edge cases. That said, realistic employment of nanotech is highly unlikely to be a carpet of undifferentiated grey goo. It will almost certainly be a system of macroscale, microscale and nanoscale structures working together. If you accept the potential of both, then trying to imagine exactly how a UFAI would choose to kill us becomes a pretty academic exercise.
Actually most of the scary scenarios I have heard are from deeply misguided nanotech people who want to 'save the world' using general assemblers, limited AI and a small conspiracy of engineers. The chances of any of those nuts actually developing the tech themselves are minimal, but if someone else invents them first they could be a problem. Frankly though there are lots of serious global risks enabled by nanotech well before that becomes a problem - serious nanotech (nanomachinery/microrobotics) is dangerous stuff, though still less so than general AI.
20. Do you personally think humanity is closer to creating general AI than self-replicating nanotechnology, hence the greater danger? Or do you mean just relative to each other?

Developing advanced nanotech requires a huge amount of engineering effort, but it's relatively conventional engineering effort, and we will eventually crack it if we keep plugging away at it. We will eventually crack AI by 'brute force' brain simulation too, but the funny thing about general AI is that we might crack it at any time with pure insight; available hardware is almost certainly already adequate. But you can't predict insight, particularly with so many AGI projects staying 'dark'. So, very hard question. I think I'd say that we're closer to AGI, but not with much certainty. Be aware though that self-replicating nanotech is really hard and not required for catastrophe scenarios. There's a huge overlap between biotech - specifically biological weapons - and nanotech.
21. How does one go about getting a tour or something at these high tech facilities that experiment with nanotech and AI?

Personal invitation. Try investing a million dollars or so in a nanotech startup, that should do it. Get promoted to a high military rank and go on DARPA brass tours. Alternatively attend a physics department open day at a relevant university and hope you get lucky.
22. Have you ever contacted any government officials or politicians about the dangers of 'Unfriendly' general AI? Or would that be a complete waste of time?

One thing EY and I (and everyone else sane) agree on is that this would be worse than useless. I very much doubt anyone would listen, but if they did they wouldn't understand, and misguided regulation would make things worse. There's no chance of it being global anyway, and certainly no chance of it being effective (all you really need for AI research is a PC and access to a good compsci library). Even if you somehow got it passed and enforced, I suspect regulation would disproportionately kill the less dangerous projects anyway. Finally, as with making anything illegal, to a certain extent it makes it more attractive, particularly to young people (it also gives it credibility of a kind - if the government is scared of it, it must be serious).
23. You suggest reading the literature, but I'm not very confident in my ability to tell a good AI idea apart from a bad one.

Congratulations! That's a more realistic self-assessment than most people entering the field manage. Of course everyone's in that boat to start with, and exposure to a lot of ideas is a much better way to start the learning process than sitting in a room trying to work things out from first principles on your own (alas, plenty of people are arrogant enough to try that). Read 100 diverse AI papers and the common pitfalls and failure cases should start to pop off the page at you (I actually prefer books for this, because you get an explanation of the mindset and history behind particular design decisions).
24. I would appreciate a little help getting to the level where I can make such distinctions.

I can give you my personal opinions if you'd like. Here are some key principles.
Probability theory is the normative way to do reasoning, and any departure from it must be very well justified. Probability and desirability are orthogonal concepts, and any mixing of them is pathological. Never, ever accept an algorithm on the basis of theoretical elegance, Turing completeness or any general notion of 'power'. The only reason to accept an algorithm as useful is a fully described, non-contrived example of it doing something useful. Most symbolic systems don't really generalise. Don't accept empty symbols, don't accept endless special cases, and don't accept people designing a new programming language (usually just a poor version of Lisp, Prolog and/or Smalltalk) and calling it an 'AGI system'. Don't give the connectionists any points for being 'brain-like' when they patently have no real idea of how the brain works, and don't allow them to claim 'understanding' when they're just doing simple function approximation. Don't allow anyone, but connectionists in particular, to claim that their system 'scales to general AI' without very good and rigorous arguments for why it will work.
Not an exhaustive list of course, that's just a few things that spring to mind right now.
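As a worked illustration of the first two principles (a hypothetical example I'm adding here; the fault/alarm scenario and the numbers are made up): a Bayesian belief update followed by an expected-utility decision, with probabilities and desirabilities kept in strictly separate structures throughout.

```python
# Belief update by Bayes' rule, then a decision by expected utility.
# Probabilities describe what is likely; utilities describe what we want.

# Prior belief that a component is faulty, and likelihoods of an alarm firing.
p_fault = 0.01
p_alarm_given_fault = 0.95
p_alarm_given_ok = 0.02

# Bayes' rule: P(fault | alarm) = P(alarm | fault) P(fault) / P(alarm)
p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_ok * (1 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
print(f"P(fault | alarm) = {p_fault_given_alarm:.3f}")   # ~0.324

# Decision step: utilities live in their own table, never mixed into the inference.
utility = {("repair", "fault"): -10, ("repair", "ok"): -10,
           ("ignore", "fault"): -1000, ("ignore", "ok"): 0}

def expected_utility(action, p_fault_now):
    return (p_fault_now * utility[(action, "fault")]
            + (1 - p_fault_now) * utility[(action, "ok")])

best = max(["repair", "ignore"], key=lambda a: expected_utility(a, p_fault_given_alarm))
print(best)  # 'repair': the alarm makes the fault likely enough to be worth acting on
```

Note that the desirabilities never leak into the inference step; letting them do so would be exactly the pathology described above.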
25. You say runaway ego bloat is a big problem. Do you have any of your own strategies for avoiding it?

Well, I think it helps to be involved in a proper commercial enterprise, with people who are clearly more experienced in many ways (commercial and technical). I'm good at what I do, but I'm obviously not good at everything. Some researchers (e.g. EY) are in such an isolated bubble that it's very easy for them to convince themselves that they are simply superior to everyone they know in every respect. I participated in many arguments (when I was still at the SIAI) over whether real-world experience is actually valuable - I said it often was, EY thought higher IQ always trumped it. Having a team doesn't make the problem go away of course; it can make it worse, since you get groupthink and us-vs-them thinking quite easily. Encouraging dissenting opinions is good up to a point, but too much and you can't work together coherently and the project breaks - I could cite certain past commercial efforts in particular for this.
I also joke about this stuff a lot. I take it completely seriously - it is literally the most important single issue in human civilisation - but that doesn't preclude having some fun. I'm fortunate enough to be working with AI-literate colleagues who can also do that (as opposed to the SIAI, where it used to really piss off EY). Then there's the whole evil genius/mad scientist/Bond villain persona - I suppose this sounds silly, but I think joking around in such an over-the-top manner helps to deflate real-world egotism by making it seem ridiculous.
I'm afraid there's no easy answer to this, you just have to try and be as rational and honest with yourself as possible (though not 24/7, no one can do that and you'd be foolish to think you could). Note that technical overconfidence is fairly distinct from egotism, it isn't always (or even usually) 'I am the most amazing AI researcher ever', it's usually just very strong wishful thinking ('I have a hunch this algorithm will work' and/or 'I can't see why this won't work').
26. From what I've seen thus far there seems to be significant disagreement about implementation strategy, even within the tiny group of researchers who realize what a bad idea genetic algorithms and neural nets are.

Correct. We agree about a lot of things, but in a strange way that can seem to amplify the remaining areas where we disagree. I think that's very common in scientific and technical fields. The only real way to resolve these debates is to actually build something. If there's a bandwagon that seems to have funding and/or progress, people may jump on it, but that hasn't appeared for AGI yet (though there have certainly been many attempts, commercial, academic and open-source).
27. Ultimately I'd like to have a rough idea of how many sufficiently intelligent/motivated people (who agree on the technique to degree X) it's going to take.

Well, that's one of the input variables to the 'probability of success' equation, but it isn't a simple correlation or binary requirement. It might just take one person. It would sure be useful to have a Manhattan Project sized team of geniuses (though very hard to manage). Realistic projects are somewhere in between.
28. Starglider wrote: "I would also recommend X, Y and Z - I don't agree with them but they're highly inspiring..." What do you mean by this?

Something like 98% of all AGI writings are wrong - pretty much by definition, since they actively disagree with each other on many points. It's almost like religion - though fortunately not actually that bad, since there are almost certainly lots of approaches to AGI (rather fewer to FAI) that will actually work; we just have to find them.
However I've found the better authors tend to spark useful ideas in my mind even when I think I can show that they're factually wrong. Also, it is important to understand why these things looked good to their original designer - people like Minsky are geniuses and are very experienced in the field, so if they're wrong about something, it's a mistake you could also easily make, unless you take pains to understand and avoid it. There are many, many such mistakes. Chances are you (and I) will still get stuck in one eventually, but we have to try, and even if your only achievement is to get stuck in a whole new category of mistake, that's still progress.
29. What are your thoughts on the Fermi paradox?

I am not particularly qualified to speculate on this. Then again, I'm not sure who is. Astronomers started it but it isn't really an astronomy problem. Certainly philosophers and futurists have no special ability to answer it.
Do you think we're in for some kind of major surprise or new understanding about (the Fermi Paradox) during/immediately after a (successful, Friendly) hard takeoff?

Possibly. Frankly this is very low on my priority list of things to ponder, because it doesn't seem to make any practical difference. Personally I suspect we're in a many-worlds multiverse mostly filled with timelines where either life didn't occur or UFAI replicators wiped it all out, so the Fermi paradox is probably due to the anthropic effect, but don't quote me on that.
30. Are there any introductory-level cognitive science books you could refer me to? Lately I've been digging into Hofstadter's most recent book, I Am A Strange Loop, but of course that's distinct from the scientific literature on the human brain.

I haven't read IAASL, but I get the impression that it's down near the philosophy end of the spectrum. CogSci is split between philosophy, brain-focused stuff (there are huge numbers of 'this is my personal all-encompassing theory of the mind' books - my favourite is probably Terrence Deacon's classic, 'The Symbolic Species'), and AI-focused stuff (again, lots of 'this is how I think we could/should build a general AI' books). The same material you find in the books is typically also dribbled out over tens or hundreds of papers, with marginal rewordings and changes of focus - a major reason why I prefer books for anything except details on specific algorithms and experiments.
Let's see, for AGI I've already recommended quite a few, but I don't think I've mentioned Eric Baum's 'What Is Thought?'. That's lively and varied, features practical experiments and lots of interesting ideas - kinda like FCCA but a little less philosophy and more compsci. In terms of actually equipping yourself to do FAI research, I'd recommend 'Thinking and Deciding' by Jonathan Baron - fairly technical, but less so than 'Probability Theory' (by ET Jaynes - the 'bible' of probability calculus - essential reading in the long term). There's the monster Kahneman/Tversky series on decision theory, which EY loves because there's so much in there on systematic human reasoning flaws (meticulously backed up by real psychology experiments), but in all honesty it's not much direct use for FAI research, and I'm not sure if it's worth the time, at least early on.
31. Not to draw arbitrary lines in the sand Kurzweil-style, but I'd like to hear your view on the present state and pace of the research.

Well, in brief, the brain simulation people are plodding steadily ahead; they will eventually crack the problem through 'brute force and ignorance' (the term is kind of unfair - you still have to be a genius to develop neural simulation technology, you just don't have to understand how actual intelligence works). I hope rational AGI can beat them to it, but frankly I'd rather have the low-level brain-sim people win than the genetic algorithms or connectionist de-novo AGI people. That said, a closely neuromorphic AI is probably less dangerous than an arbitrary symbolic AI with no special Friendliness features, in that it is a little less likely to kill everyone (but see the usual arguments for why that doesn't really help if it doesn't actively protect us). I am biased towards rational AGI because only that can genuinely be made Friendly, and if such a project is going well, there's a hope that someone like the SIAI can get hold of the researchers involved and convince them to start implementing appropriate Friendliness measures. If a (non-upload) connectionist project is going well, we're pretty much screwed, because they're not going to listen to 'your project is fundamentally and unalterably unsafe, please shut it down and destroy all the code'.
32. I was wondering if my current lack of programming knowledge is going to be a problem.

You're still in your teens. It's true that most of the very best programmers I know started even earlier (and were fascinated by computers from the first moment they saw one), but you don't need to be a spectacular programmer to make a contribution to AGI, just a competent one. Seed AI design in particular is more about computation theory than language proficiency, and when creating an AGI project team, really good programmers are still easier to find than properly qualified AI designers (of course there are a great many wannabe AI designers).
33. How badly does the AI field need investment?

Quite badly, but the problem is that almost all the people clamoring for money at the moment are either irrelevant or doing something horribly unsafe/inadvisable.
34. Donating money: does every bit of contribution truly help the efforts toward developing FAI?

No. Of course we're only going to know which projects were worthwhile in retrospect (assuming success), so you'll have to take a best guess as to which project(s) to support, based on publications and demos (if any). However, for whichever project does eventually make it, every bit of financial support will presumably help. That said, be aware that it is quite easy to make things worse, by funding unsafe (i.e. GP-based) projects or starting one yourself.
35. Becoming a regular and heavy donor to AGI companies sounds feasible for me.

A company should allow you to invest (in shares), or buy products (we certainly do). A charity accepts donations and does not generate revenue (though it had better generate publications and demos, or how do you know it is doing anything at all - the SIAI has had this problem really badly recently). I am highly suspicious of anyone who tries to blur these categories, and you should be too.
36. Will finding and recruiting more great programmers help?

Programming skill is actually easy to find compared to AI design ability (which is mostly logic, cogsci and, for general AI, a particular and hard-to-describe mindset). Virtually every good programmer I've met has had some personal ideas about AGI design, but without fail they're horribly bad and unoriginal ideas if that person has never made a serious full-time study of AI.
Yudkowsky is making a big effort to recruit 'highly rational people' to the SIAI via writing a book on how to be a rational person... guess we'll see how that goes when he's finished it. I can see the logic, but still, it seems a bit dubious. My plan is to make some really impressive demos, but of course there's a big risk of inspiring the wrong behaviour (more people working on UFAI) with that.
37. The way you describe this whole situation contrasts... rather sharply with the dreamer-type transhumanists.

Said transhumanists simply assumed that intelligence = (humanlike) ethics, probably because they've observed a correlation between ethics and intelligence in the people they know, though sometimes it's just outright wishful thinking. It's an easy mistake to make though; even Yudkowsky, probably the single most strident proponent of FAI, made that mistake for the first five years of his research career.
Sad to say, most transhumanists aren't particularly technical people. A good fraction (not even the majority, I think) are programmers, scientists or engineers, but usually not in the actual technologies they're lauding so much. The attitude of 'technology will fix everything, including humans' is intoxicating and not inherently unreasonable, but without grounding a lot of people go way over the top with it.
38. Does 'Friendly' general AI even have a chance?

Yes. In a sense, how big a chance doesn't really matter. Long term, it's our only hope. Almost every other disaster is survivable - if not by humans at some level, by life on earth - and most of the ones that aren't survivable can't be mitigated anyway. Unfriendly AGI is unique in being completely fatal yet completely avoidable - through one specific method only (making an FAI), which requires relatively modest funding but a lot of unpredictable insight.
39. Until very recently, I was convinced that developing our most feasible source of clean energy would be the 21st century's most worthwhile endeavor.

General AI is in the category of 'existential risks'; IMHO it's at the top of that category. Energy is a critical economic challenge; we need to fix it to maintain current standards of living and progress. However, the human race won't stop existing if we don't 'solve' it, a good 'solution' won't completely transform human existence, it isn't a binary proposition, etc etc. AI is almost unique in that the actions of one relatively small group of people (whoever builds the first AGI to undergo 'takeoff', i.e. recursive self-enhancement followed by escape onto the Internet) can and likely will render the human race extinct or completely transform nearly every aspect of our existence (hopefully for the better).
40. How would a crash in world energy supply affect research and development focused on producing an AGI?

A detailed analysis is very difficult. Any sort of economic crash is going to make R&D harder. However, you can do seed AI R&D on a single PC if you have to, so it isn't going to stop. On the plus side, making supercomputers harder to get hold of harms the foolish people (the genetic algorithms and brain simulation researchers) more than the sensible people (probabilistic-logic-based 'clean/transparent' seed AI designers). On the minus side, the military will still have plenty of funding, and military design priorities are always bad news. Significant degradation of industrial and comms infrastructure would probably slow down an AI that's gotten out of the 'box' in achieving its aims, but not enough to save us if we still have a basically modern civilisation.
41. Seeing nanotech advocates brush off peak oil as merely depressing doomer talk has been a rather disappointing experience.

Well, a lot of peak oil people are OTT, in that they ignore all the sources that are practical, just expensive and dirty (e.g. coal-to-liquids). Still, plenty of overoptimistic transhumanists are convinced that advanced nanotech is just around the corner (even in the absence of superintelligent AIs to design and implement it for us) and that it will appear in time to save us. Certainly I am not so optimistic; I think they grossly underestimate the engineering challenges in getting this kind of tech working - challenges you could likely solve quite quickly with an AGI, but that's the FAI problem again.
42. Will decades of work in, say, designing new generations of nuclear reactors shrink to insignificance when the first transhuman intelligence emerges?

Yes, along with pretty much everything else humans have ever done. Don't take it personally. Strangely enough, this isn't typically a contributing factor to the raging egos in AI/AGI - most researchers don't look at it this way. We manage to be raving egotists without even referencing the fact that this is the single most important human endeavour in history.
43. Do we need a mature quantum computing platform to make humanlike AI?

Quantum computing is a red herring. We don't need it, and worse, it's far more useful to the people trying to destroy the world (unintentionally; I mean the GA and neural net 'oh, any AI is bound to be friendly' idiots) than to the people who know what they're doing. Mature QC does hand a superintelligence several more orders of magnitude (up to tens, depending on the tech) of reasoning superiority over us, but frankly after takeoff it doesn't matter much anyway.
Incidentally, the whole 'the human brain uses quantum processing' fad that was popular in the early 90s is a complete scam. It doesn't, and real quantum computing isn't comparable to the proposed mechanisms. Most people have forgotten about Penrose's horrible forays into neurology and philosophy by now anyway.