Transhumanism: is it viable?
Re: Transhumanism: is it viable?
What I'm trying to wrap my head around is why Sandberg and Bostrom cite it as a possibility here:
Whole Brain Emulation: A Roadmap (PDF file)
They appear more optimistic that it won't take that kind of fine-grained simulation to emulate a human brain, but it sounds odd that they'd include it as a feasible end-of-century possibility if it's that far out of kilter with the easily-calculated numbers. Either they're expecting some kind of radical computing or something's being missed in the exchange (likely on my end).
- Wyrm
Re: Transhumanism: is it viable?
Wyrm wrote: Why?

Junghalli wrote: Because the resources of the greater solar system should be quite generous compared to the needs of a planetary scale civilization like ours. Unfortunately the Asteroid Mining For Profit page with its nice chart seems to be down, but consider as an example the asteroid 16 Psyche, which seems to be essentially a flying 200 km mountain of nearly pure metal, with an estimated mass of 2.19 X 10^16 tons; equivalent to millions of years of present day annual iron extraction even if only a fraction of it is useable.

No, I mean why is it a viable resource? It takes more than being a great stockpile to make a viable resource. It must also be accessible. The reason the Gulf oil reserves were ignored for so long was that it wasn't viable to recover oil from that deep underwater until recently (and as Deepwater proved, in many ways it's still a poor resource), even though we knew for decades there was a shitton of oil down there. All the asteroid mining calcs I've seen so far ignore recovery rate and the energy/reaction mass needed to recover a kg of it, and I think for very good reason.
Re: Transhumanism: is it viable?
Wyrm wrote: No, I mean why is it a viable resource? It takes more than being a great stockpile to make a viable resource. It must also be accessible. The reason the Gulf oil reserves were ignored for so long was that it wasn't viable to recover oil from that deep underwater until recently (and as Deepwater proved, in many ways it's still a poor resource), even though we knew for decades there was a shitton of oil down there. All the asteroid mining calcs I've seen so far ignore recovery rate and the energy/reaction mass needed to recover a kg of it, and I think for very good reason.

The idea that it takes vast energy to move stuff around in space is actually something of a brainbug; people like to talk about the 30 MJ/kg inherent energy cost of getting something to orbit as if it's some obscene amount of energy, but in fact it's equivalent to burning less than 1 kg of gasoline per kg of payload. Even with a high-end chemical rocket with an exhaust velocity of 4.5 km/s and a mass ratio of 6/1 to reach orbit, the energy cost of reaching orbit is only equivalent to ~1.33 kg of gasoline per kg of payload. A mere .1% of the current energy generation of our civilization (~1.5 X 10^13 watts) could put tens of thousands of tons of material into orbit per day at that rate. As far as reaction mass is concerned, going by Atomic Rocket's mission table retrieving material from a typical main belt asteroid might require a delta V of around 9 km/s. Since the burns are entirely done in space we can use low thrust fuel efficient rockets like VASIMR, which at low gear (the least fuel efficient setting) would require 270 grams of hydrogen per kg of ship and payload for a delta V of ~9 km/s. Hydrogen is extremely abundant cosmically, and even if the ship could not refuel at the asteroid belt a round trip from Earth would require a mass ratio of only 1/2, so if nothing else a scheme of obtaining the hydrogen from Earth's oceans and sending it to orbit via mass driver would potentially be viable. This ignores the possibility of using solar sails, which would require no propellant at all. It also ignores the fact that there are many asteroids much closer to Earth in terms of necessary delta V than the belt.
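A quick Python sketch for anyone who wants to check these figures (my assumptions: gasoline at ~38 MJ/kg, which is what the 1.33 kg figure implies, and the ideal rocket equation for the VASIMR leg):

Code:
import math

GASOLINE_MJ_PER_KG = 38.0  # assumed energy density; use ~44-46 MJ/kg if you prefer LHV

def propellant_energy_mj(exhaust_velocity_ms, mass_ratio):
    """Kinetic energy given to the exhaust per kg of payload, in MJ."""
    propellant_per_kg = mass_ratio - 1.0           # kg of propellant per kg of payload
    return 0.5 * propellant_per_kg * exhaust_velocity_ms**2 / 1e6

# Ideal case: ~30 MJ/kg of orbital energy is less than 1 kg of gasoline
print(30.0 / GASOLINE_MJ_PER_KG)                   # ~0.79 kg gasoline-equivalent

# Chemical rocket: Ve = 4.5 km/s, mass ratio 6
e = propellant_energy_mj(4500.0, 6.0)
print(e, e / GASOLINE_MJ_PER_KG)                   # ~50.6 MJ -> ~1.33 kg gasoline-equivalent

# VASIMR low gear: delta-V 9 km/s at Ve = 29 km/s
R = math.exp(9_000.0 / 29_000.0)
print(R, 1.0 - 1.0 / R)                            # mass ratio ~1.36; ~27% of initial mass is propellant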
Of course if you want we can have a long argument about the technological assumptions in play here but by the same token you can question the idea of cheap energy or cheap robots or cheap food synthesizers, and it's not my point that these technologies are certain to pan out. My point is that as far as we can tell there's nothing in the laws of physics to say they're impossible, so they're a credible possibility on the same purely theoretical speculative level that we'd talk about, say, relativistic starships or Dyson spheres.
- Wyrm
Re: Transhumanism: is it viable?
Junghalli wrote: The idea that it takes vast energy to move stuff around in space is actually something of a brainbug; people like to talk about the 30 MJ/kg inherent energy cost of getting something to orbit as if it's some obscene amount of energy, but in fact it's equivalent to burning less than 1 kg of gasoline per kg of payload. Even with a high-end chemical rocket with an exhaust velocity of 4.5 km/s and a mass ratio of 6/1 to reach orbit, the energy cost of reaching orbit is only equivalent to ~1.33 kg of gasoline per kg of payload.

You're forgetting that you need oxygen to burn gasoline, to the tune of about 3.5 kg of oxygen per kg of gasoline, so you in fact need ~6 kg of gasoline/oxygen mix to lift a kg of payload to orbit.
Junghalli wrote: A mere .1% of the current energy generation of our civilization (~1.5 X 10^13 watts) could put tens of thousands of tons of material into orbit per day at that rate.

The problem isn't the amount of energy our civilization generates, it's concentration. 0.1% of the landmass of the Earth is 148,940 km^2, about a third the size of California. That's a huge chunk of land on an absolute scale, yet you can pack a few tens of thousands of tons of payload onto a very small chunk of it. You achieve that concentration by putting all that energy into a relatively small amount of high-energy stuff that you're using to launch your payload into orbit. That stuff is not easy to prepare and is likely highly toxic (because it's high-energy and reactive). Unless your material costs are ridiculously small in the first place, this is going to incur real cost.
Junghalli wrote: As far as reaction mass is concerned, going by Atomic Rocket's mission table retrieving material from a typical main belt asteroid might require a delta V of around 9 km/s. Since the burns are entirely done in space we can use low thrust fuel efficient rockets like VASIMR, which at low gear (the least fuel efficient setting) would require 270 grams of hydrogen per kg of ship and payload for a delta V of ~9 km/s.

Given that 16 Psyche is 2.19e19 kg, you are going to need 5.913e18 kg of hydrogen to obtain a ∆v of 9 km/s. Since the exhaust moves at 29 km/s (low gear), the kinetic energy of the exhaust plume is 2.4864165e21 MJ, or 594.27 million gigatons. This is the lower bound of the energy requirement of a Base-Delta Zero. If we add in the asteroid itself, that's an additional 211.98 million gigatons of kinetic energy. This does not sound easy at all.
Furthermore, this is a large up-front investment of resources, especially since it takes about 1.5 years to transit between the asteroid belt and Earth using a Hohmann orbit. But the VASIMR is a feeble engine: in order to deliver the needed 1.971e20 kg·km/s of impulse to make the transfer in 1.5 years, you will need at least 10.41 trillion VASIMR engines. Since you need to supply a total of 806.25 million gigatons over that 1.5-year duration, you will need to generate 71.27 billion gigawatts. I'm not even going to try to calculate the mass of the engines.
Again, this is looking anything but easy.
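For anyone who wants to verify, these figures drop out of a few lines of Python under the same assumptions as above (Psyche at 2.19e19 kg, 0.27 kg of hydrogen per kg moved, 29 km/s exhaust, 1 gigaton of TNT = 4.184e18 J):

Code:
M_PSYCHE = 2.19e19                      # kg, quoted mass of 16 Psyche
m_h2 = 0.27 * M_PSYCHE                  # ~5.9e18 kg of hydrogen propellant
E_plume = 0.5 * m_h2 * 29_000.0**2      # kinetic energy of the exhaust plume, J
E_rock = 0.5 * M_PSYCHE * 9_000.0**2    # kinetic energy of the asteroid itself, J
GIGATON = 4.184e18                      # J per gigaton of TNT
print(E_plume / GIGATON)                # ~5.9e8 gigatons
print(E_rock / GIGATON)                 # ~2.1e8 gigatons
transit_s = 1.5 * 3.156e7               # 1.5-year Hohmann transfer, in seconds
print((E_plume + E_rock) / transit_s)   # ~7.1e19 W average power (~71 billion GW)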
Junghalli wrote: Hydrogen is extremely abundant cosmically, and even if the ship could not refuel at the asteroid belt a round trip from Earth would require a mass ratio of only 1/2, so if nothing else a scheme of obtaining the hydrogen from Earth's oceans and sending it to orbit via mass driver would potentially be viable. This ignores the possibility of using solar sails, which would require no propellant at all.

Solar sails provide feeble thrust for realistically sized sails. Remember, you need rigging to utilize the thrust, and even small pressures build into substantial forces over kilometers of sail.
Junghalli wrote: It also ignores the fact that there are many asteroids much closer to Earth in terms of necessary delta V than the belt.

You seem to need the energy of a BDZ and many orders of magnitude more than our civilization's current power production to move one small asteroid from the asteroid belt to within easy reach, using one of the cheapest transfer orbits possible. I can quite confidently say that you need a lot better argument to get me to concede that the asteroid belt is a viable resource.
Re: Transhumanism: is it viable?
It'd probably be a lot easier to mine Near-Earth Objects; that way, we'd just need to give them enough of a nudge to put them into orbit around the Earth rather than just swinging by and barely missing us.
- Wyrm
Re: Transhumanism: is it viable?
How much is a "nudge"? That nudge can still easily amount to several km/s of delta-V, which, as we've seen, amounts to non-trivial mass and energy even for small asteroids.
Re: Transhumanism: is it viable?
Wyrm wrote: You're forgetting that you need oxygen to burn gasoline, to the tune of about 3.5 kg of oxygen per kg of gasoline, so you in fact need ~6 kg of gasoline/oxygen mix to lift a kg of payload to orbit.

That's beside the point; the point was to demonstrate that the energy required is not actually large compared to the capacity of even a civilization no more advanced than ours, not to perform some kind of fanciful calculation of how much fuel a spacecraft would need if it ran on gasoline. If you wish a more practically meaningful comparison, a 50% efficient mass driver hooked up to a nuclear power plant the same size as Tricastin could launch nearly 10,000 tons of material into orbit per day. This partially answers your point about "energy concentration" as well; it's not that big of an issue if you use a system like a mass driver that does not require the energy to be generated on the projectile. Of course building such a facility on an asteroid is another matter, but thankfully powerful launchers are only necessary on Earth, where gravity and atmospheric drag force you to keep from crashing into the ground; in space, where these are not considerations, a very weak low-power drive thrusting for long periods of time will work just fine.
Wyrm wrote: Given that 16 Psyche is 2.19e19 kg, you are going to need 5.913e18 kg of hydrogen to obtain a ∆v of 9 km/s. Since the exhaust moves at 29 km/s (low gear), the kinetic energy of the exhaust plume is 2.4864165e21 MJ, or 594.27 million gigatons. This is the lower bound of the energy requirement of a Base-Delta Zero. If we add in the asteroid itself, that's an additional 211.98 million gigatons of kinetic energy. This does not sound easy at all.

The implication seems to be that you would be trying to move the entire asteroid into Earth orbit at once, which is a completely unreasonable standard for this scenario. A more sane approach would be to slowly disassemble it and ship back pieces as needed. Let us assume that you want to take out roughly 10X present world iron production, i.e. 17 billion tons per year. Using the previous estimate of 9 km/s delta V (because I'm too lazy to look up delta V for 16 Psyche at the moment), this is 6.9 X 10^20 joules per year, or 2.183 X 10^13 watts as a lower limit. This is certainly ambitious (it exceeds the present total energy generation of our civilization), but it's nowhere near the immense energy production that you imply would be necessary. For such shipping volumes a sensible arrangement might be a complex of mass drivers and nuclear reactors or gigantic solar power plants on 16 Psyche, shooting chunks of ore surrounded by crude ablative heat shields and perhaps some kind of rudimentary guidance system back at Earth to be captured by aerocapture. Ambitious certainly, but as far as we know nothing requiring magic.
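A minimal script for that shipping-energy estimate (kinetic energy only, so a lower bound, as noted above):

Code:
SECONDS_PER_YEAR = 3.156e7
mass_kg_per_year = 17e9 * 1000.0             # 10x present iron production, in kg
energy_per_kg = 0.5 * 9_000.0**2             # ~40.5 MJ/kg at 9 km/s delta-V
total = mass_kg_per_year * energy_per_kg
print(total)                                 # ~6.9e20 J per year
print(total / SECONDS_PER_YEAR)              # ~2.2e13 W continuous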
Edit: I feel like pointing out that realistically I very much doubt feeding the Third World would be a project that burns through nonrenewable resources at anything remotely like a rate that would require such imports to sustain. Seriously, are we expecting thousands or millions of tons of robots to be lost to random Third World bandits and whatnot every year? How many tons of armored vehicles are totalled in Iraq and Afghanistan annually?
Wyrm wrote: You seem to need the energy of a BDZ and many orders of magnitude more than our civilization's current power production to move one small asteroid from the asteroid belt to within easy reach, using one of the cheapest transfer orbits possible. I can quite confidently say that you need a lot better argument to get me to concede that the asteroid belt is a viable resource.

Again, only if you use the assumption that you must move an asteroid that would contain possibly millions of years worth of metal at current consumption rates into orbit in one go, instead of doing the sensible thing and disassembling it gradually on site and sending back pieces as needed. Certainly, I doubt very much feeding the Third World is a project that would require the acquisition of many thousands of years worth of terrestrial iron production right the fuck now.
Re: Transhumanism: is it viable?
I made a mistake in the math on Tricastin, 50% efficiency would translate to enough energy to put ~4800 tons in orbit per day. Still, I think you see my point: theoretically a single RL nuclear power plant can generate enough energy to support a space program straight out of any die-hard space enthusiast's wet dreams. The reason space is so expensive is not because it inherently takes unreasonable amounts of energy.
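The corrected figure is easy to reproduce (my assumptions: Tricastin at roughly 3.7 GWe and ~33 MJ per kg to orbit, as in the earlier posts):

Code:
plant_watts = 3.7e9                      # assumed electrical output of a Tricastin-sized plant
efficiency = 0.5                         # assumed mass driver efficiency
energy_per_day = efficiency * plant_watts * 86_400.0
print(energy_per_day / 33e6 / 1000.0)    # ~4800 tonnes to orbit per day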
- Wyrm
Re: Transhumanism: is it viable?
Junghalli wrote: I made a mistake in the math on Tricastin, 50% efficiency would translate to enough energy to put ~4800 tons in orbit per day. Still, I think you see my point: theoretically a single RL nuclear power plant can generate enough energy to support a space program straight out of any die-hard space enthusiast's wet dreams. The reason space is so expensive is not because it inherently takes unreasonable amounts of energy.

Yes. It takes an inherently ridiculous amount of mass to move about in space, particularly to launch into orbit.
The amount of energy it takes to lift a payload off the planet is time-dependent, which implies you need high thrust: you want to give the payload as much impulse as possible in the shortest time. But there's a fundamental trade-off between thrust and specific impulse. Ultimately, your engine can only give so much power to the exhaust before it slags itself, so high thrust has to come at the expense of specific impulse, and low specific impulse translates into an exponentially greater amount of fuel to achieve the same delta-V that a weaker but more economical engine manages.
This brings us to the second problem. That 30 MJ (1.33 kg gasoline) figure to lift a kg into space neglects the amount of energy it takes to lift part of your propellant partway. If that 1.33 kg, or even the ~6 kg with oxidizer, were the entire story, then the launch stage of the Saturn V booster should only be about ~2.2 times the size of the payload in each dimension even single-staged, but it isn't. The first stage (S-IC) is not only the largest stage, it lasts a scant ~150 s before burnout. It takes three stages in toto to boost the payload (about ~1/5 the length of the Saturn V) into orbit.
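The exponential blow-up is easy to see from the rocket equation. A sketch, using my own assumed figure of ~9.4 km/s effective delta-V to LEO once gravity and drag losses are counted:

Code:
import math

dv = 9_400.0                               # assumed effective delta-V to LEO, m/s
for ve in (2_500.0, 3_500.0, 4_500.0):     # low- to high-end chemical exhaust velocities
    mass_ratio = math.exp(dv / ve)
    print(ve, round(mass_ratio, 1))        # ~42.9, ~14.6, ~8.1

Dropping the exhaust velocity multiplies the propellant burden rather than adding to it; hence staging.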
Mass may be plentiful in space, but it's scattered all about. This requires either a lot of time to get around (slowing absolute throughput) or a lot of mass/energy to be thrown about (slowing net throughput, because you have to use mass as propellant and fuel, which either comes from what you're shipping, or requires a back-supply chain).
Junghalli wrote: That's beside the point; the point was to demonstrate that the energy required is not actually large compared to the capacity of even a civilization no more advanced than ours, not to perform some kind of fanciful calculation of how much fuel a spacecraft would need if it ran on gasoline. If you wish a more practically meaningful comparison, a 50% efficient mass driver hooked up to a nuclear power plant the same size as Tricastin could launch nearly 10,000 tons of material into orbit per day. This partially answers your point about "energy concentration" as well; it's not that big of an issue if you use a system like a mass driver that does not require the energy to be generated on the projectile. Of course building such a facility on an asteroid is another matter, but thankfully powerful launchers are only necessary on Earth, where gravity and atmospheric drag force you to keep from crashing into the ground; in space, where these are not considerations, a very weak low-power drive thrusting for long periods of time will work just fine.

You can't launch a payload into Earth orbit with a mass driver, period. Gravity and atmospheric drag kill you, both literally and figuratively. A hypersonic projectile in the troposphere will get shit hot (leaving a sun-hot trail of plasma that will blind people for kilometers around), shed a lot of kinetic energy, make a devastating sonic boom that will destroy things for kilometers around, and on top of all that never reach orbit. If you somehow lob the damn thing fast enough to overcome gravity and atmospheric drag, the projectile disintegrates from the mechanical stresses. For planetary launches, yeah, you do need a rocket.
Now, if you put this sucker on an asteroid, sure, you get the material off and whizzing towards Earth, but now how do you catch it? This stuff is going to be moving at a clip on the order of ten km/s. Unless each chunk has a little rocket motor on it, or you rendezvous with each and every one of those little chunks, you're going to have to put something in its way. The problem with that is that the chunk is going to make hash out of whatever it slams into. An object traveling at ~2.9 km/s has kinetic energy equal to its own mass in TNT; at 10 km/s, each kilogram has the explosive equivalent of ~12 kg of TNT. Whatever's on the receiving end will basically be getting zotted with half the power output of Tricastin, plus the energy of falling down the Earth's gravity well.
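The TNT equivalences are just ½v² measured against TNT's 4.184 MJ/kg:

Code:
TNT_J_PER_KG = 4.184e6
for v in (2_900.0, 10_000.0):                    # impact velocities, m/s
    print(v, 0.5 * v**2 / TNT_J_PER_KG)          # ~1.0 and ~12.0 kg TNT per kg of projectile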
Junghalli wrote: The implication seems to be that you would be trying to move the entire asteroid into Earth orbit at once, which is a completely unreasonable standard for this scenario. A more sane approach would be to slowly disassemble it and ship back pieces as needed. Let us assume that you want to take out roughly 10X present world iron production, i.e. 17 billion tons per year. Using the previous estimate of 9 km/s delta V this is 6.9 X 10^20 joules per year, or 2.183 X 10^13 watts as a lower limit. This is certainly ambitious (it exceeds the present total energy generation of our civilization), but it's nowhere near the immense energy production that you imply would be necessary. For such shipping volumes a sensible arrangement might be a complex of mass drivers and nuclear reactors or gigantic solar power plants on 16 Psyche, shooting chunks of ore surrounded by crude ablative heat shields and perhaps some kind of rudimentary guidance system back at Earth to be captured by aerocapture. Ambitious certainly, but as far as we know nothing requiring magic.

Granted, cutting the asteroid up does make the task more manageable, but let's explore some of the other consequences of your choice of transport.
Going from Ceres to LEO requires the projectile to shed 9.23 km/s. This amounts to 42.6 MJ/kg. At 17 billion tonnes a year, that amounts to shedding 7.24e20 J/yr into the atmosphere. This is about comparable with the amount of geothermal energy coming out of the Earth. That doesn't sound so bad, until you remember that geothermal energy is climatically significant.
Also, these things are going to come screaming (and I do mean screaming) in at 16 km/s. See cautions about hypersonic projectiles in atmospheres.
There's also the effect of the ablative shields polluting the atmosphere. If we suppose the amount of material ablated from each chunk to be about 5% of the total, this amounts to dumping roughly 800 million tonnes of material into the atmosphere each year, most of which will be metal compounds. I doubt they will be nice for the environment, first raining down as particulate, then being absorbed into the food chain.
Junghalli wrote: Edit: I feel like pointing out that realistically I very much doubt feeding the Third World would be a project that burns through nonrenewable resources at anything remotely like a rate that would require such imports to sustain. Seriously, are we expecting thousands or millions of tons of robots to be lost to random Third World bandits and whatnot every year? How many tons of armored vehicles are totalled in Iraq and Afghanistan annually?

That was for a post-scarcity-through-recycling society and you know it. It doesn't apply to a society that can get enormous amounts of metals easily.
- Agent Sorchus
Re: Transhumanism: is it viable?
The problem I have with your argument, Junghalli, is that having a large space-based society is not equal to a transhumanist ideal or goal. It could be the goal of any society (especially one that aims for post-scarcity), so any argument based on a space society is an argument for a stellar society, not a transhumanist one. A transhumanist society has to be viable before it can adopt these goals. All post-scarcity utopias use either space or recycling as a get-out, and neither is unique to transhumanism.

That is my biggest problem with Transhumanism: it has too few things that distinguish it (as a society). An ideal communist society has all the trappings of a post-scarcity society (as described by proponents ITT), and communism is hardly going to be adopted easily anymore. And the appeal of future technologies is common to both transhumanism and Technocratic Utopias. As for specifically becoming greater than human, it is not a bad thought (if it can be extended to all humans, which I doubt).
Re: Transhumanism: is it viable?
Let's try a different take on this.
I looked up the price of iron ore, this site gives $97-113 per ton. So if we assume that bandits and whatnot destroy 1000 tons of robots per year the cost of the metal would be ... $110,000 per year. This is about 1/20,000th of the income of the Salvation Army. This is in a society that does not have access to the asteroids and is most decidedly not anywhere near postscarcity.
If there are nonrenewable metals a near-postscarcity society would be worried about I'd say they're far more likely to be relatively rare elements like platinum. The annual supply of platinum is only 130 tons. If we were to suggest asteroid mining making it 1000X more common than present day this would require 166 megawatts to bring it back if we assumed a delta V of 9 km/s and perfect efficiency. That is the equivalent of the energy consumption of a modest American city of perhaps 100,000 residents (reference).
Well, OK, platinum is super-rare, maybe choose something more common, like silver. According to my source for platinum production silver production is over 100X greater, let's say 20,000 tons per year. Now let's say we want to increase that by a hundred, to 2 million tons per year. With 100% efficiency that would be 2.5 gigawatts. That is still less than the capacity of Tricastin.
What about aluminum, again increased by 100X? Going by this and using the 2003 world total we are talking about 3.5 GW, roughly equal to the maximum production capacity of Tricastin.
So if we were using a highly energy-efficient transfer method, like a mass driver, even by present standards we're not talking about huge amounts of energy compared to even what our civilization produces now.
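For anyone checking the arithmetic, all three cases are the same one-liner (kinetic energy at 9 km/s, perfect efficiency assumed):

Code:
def import_power_watts(tonnes_per_year, delta_v=9_000.0):
    """Continuous power to give this annual mass flow the stated delta-V."""
    return tonnes_per_year * 1000.0 * 0.5 * delta_v**2 / 3.156e7

print(import_power_watts(130_000))       # platinum x1000: ~1.7e8 W (~166 MW)
print(import_power_watts(2_000_000))     # silver x100: ~2.6e9 W (~2.5 GW)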
Note: if we're talking about a mass driver aerocapture combo we only need half this much energy, since braking happens by friction with Earth's atmosphere. On the other hand that may have a large amount of the projectile sacrificed as an ablative heat shield, and even a mass driver won't be perfectly efficient, not to mention inefficiencies in the power plant etc. ... call it a ballpark estimate.
Having written this up before seeing the other reply: most of the other concerns are also greatly reduced by this.
Hell, let's not blow hot air over energy, let's see how much rocket fuel actually costs. This paper gives an estimate of solid rocket fuel as $5/lb, which translates to $11/kg. By this calculation launching a 200 ton spacecraft (roughly equivalent to a commercial airliner) with a mass ratio of 6 would cost $13.2 million and the price of an individual ticket for a 200 pound man based on rocket fuel would be $1000. Prices for actual existing space launch systems are in the range of thousands of dollars per kilogram. Granted the paper doesn't say what the exhaust velocity is for that solid fuel rocket but we're talking a difference of three orders of magnitude here. I think it's pretty obvious that most of the cost of launching stuff into space is not the fuel.
Edit: as for the environmental effects, put it in an uninhabited area, like the middle of the Sahara desert.
Wyrm wrote: This brings us to the second problem. That 30 MJ (1.33 kg gasoline) figure to lift a kg into space neglects the amount of energy it takes to lift part of your propellant partway.

You will note that I also calculated the amount of energy needed with a high-end chemical rocket (4.5 km/s exhaust velocity), in which case the required mass ratio is roughly 6 and the energy needed to lift a kg of payload into orbit can easily be calculated by plugging the 5 kg of propellant and the exhaust velocity into the kinetic energy equation. The answer, as you may recall, was the equivalent of ~1.33 kg of gasoline per kg of payload.
Wyrm wrote: You can't launch a payload into Earth orbit with a mass driver, period. Gravity and atmospheric drag kill you, both literally and figuratively. A hypersonic projectile in the troposphere will get shit hot (leaving a sun-hot trail of plasma that will blind people for kilometers around), shed a lot of kinetic energy, make a devastating sonic boom that will destroy things for kilometers around, and on top of all that never reach orbit. If you somehow lob the damn thing fast enough to overcome gravity and atmospheric drag, the projectile disintegrates from the mechanical stresses. For planetary launches, yeah, you do need a rocket.

Just doing some quick searching around the web, this paper seems to disagree:
Many consider Gerald Bull the pioneer of direct space launch development, even though the technology he employed, powder guns, could not place a very sizable payload into orbit. Nevertheless, project HARP launched many hundreds of projectiles into suborbital trajectories. The maximum altitude achieved was 180 kilometers, and complex projectiles were routinely subjected to 10,000 g of acceleration. Since HARP's ultimate goal was to deliver a working satellite to LEO, much development effort was spent on constructing satellite system components that could survive the initial launch accelerations of the 16-inch-bore naval cannon (Murphy and Bull, 1966).

Components that were tested successfully by Bull and his group included sun sensors, horizon scanning sensors, accelerometers, NiCd batteries, an on-board computer, a cold-gas ACS system, and solid rocket motors. These systems were fired both in suborbital trajectories and horizontally at a backstop with a smaller launcher, as reported by Marks et al. (1966). Peak accelerations exceeded 10,000 g for each case. After recovery, the sensors were then tested inside a laboratory against their corresponding pre-launch reference. Considering this was accomplished in the early '60s, more reliable components can undoubtedly be developed with the micro-electronic technology of today (Davis, 2002).

Previous Work on Thermal Protection

During the 1980s there was significant interest in using light gas guns and electromagnetic railguns as direct space launchers. Hawke et al. (1982) proposed the use of an EM railgun for launch velocities of up to 20 km/s for projectiles in the mass range of 1-200 kg. They reviewed prior work on kinetic energy loss for ballistic launch and noted that the necessary muzzle velocity increases as the projectile mass decreases due to ablation effects. On this basis they proposed the use of tungsten aeroshells to enable low-mass projectiles (i.e., several kilograms) to survive launch to orbit. Hunter and Hyde (1989) suggested the use of a light gas gun in the velocity range of 5-7 km/s for 1000-4000 kg projectiles having carbon-carbon thermal protection systems. They estimated the mass loss due to ablation to be less than 20 kg for a 6 km/s launch based on simulations carried out by Sandia National Laboratory, and raised concerns about the lack of experimental ablation rate data at stagnation pressures greater than 120 atm (stagnation pressure maximum is ~300 atm and rapidly drops with altitude). Fair et al. (1989) compared the status of solid-propellant rockets with the EM railgun technology potential at the time. They refer to a hypervelocity projectile design incorporating transpiration cooling by expelling combustion product gases through the nose tip. In addition, the concept of injecting a liquid combustible jet ahead of the projectile that mixes with the air at the shock front and burns to reduce the atmosphere density is mentioned. No detailed heat transfer analyses were actually presented in any of the papers cited above.

Palmer and Dabiri (1989) considered transpiration cooling for EM railgun-launched projectiles with lithium coolant. Launch conditions optimized to deliver 1 kg of payload to LEO resulted in coolant mass ranging from 1% to 30% total projectile mass for muzzle velocities ranging from 4.5 to 12 km/s, respectively. Conversely, the coolant mass fraction was less than 1% in all cases considered when the projectile mass was increased to 500 kg. Bruckner and Hertzberg (1987) and Kaloupis and Bruckner (1988) carried out ablation and shape change calculations during atmospheric transit for a 2000 kg projectile launched at 7 to 10 km/s at various inclination angles and muzzle altitudes. They assumed the mass loss resulted in symmetrical blunting of carbon-carbon nose tips and accounted for the corresponding increase in drag. Results of these calculations indicate that the projectile would lose ~1% of initial mass and 20-50% of initial velocity by the time it leaves the atmosphere (velocity loss increases with increasing muzzle velocity).

Bogdanoff (1992) carried out an in-depth analysis of aerodynamic heating for ablative and transpiration cooled nose cones of 2000 kg projectiles being launched with muzzle velocities of 7 and 10 km/s. In the case of carbon-carbon ablation for the 10 km/s mission, the projectile mass loss was less than 3%. For the same mission using NH3 for transpiration cooling, the net mass loss was 2-3%. Other transpiration coolants such as CH4, H2, and H2O were considered; however, NH3 was chosen to eliminate potential plugging of fine transpiration passages due to carbon deposits and prevent oxidation damage from O and O2 arising from water dissociation at high temperatures. This particular paper of Bogdanoff presents the most detailed engineering analysis of the aerodynamic heating problem found in the literature cited herein.

Morgan (1997) provides an overview of gun technologies for space launch applications and suggests that the thermal protection systems for high Mach re-entry vehicles can be used to meliorate the aerodynamic heating problem. Gilreath et al. (1998, 1999) present a detailed design for a 682 kg launch vehicle that employs an aeroshell of primarily carbon composite construction with an overall mass fraction of ~10%. They refer to analytical and computational modeling that predicts the nose cone recession to be approximately 1% the length of the vehicle at a muzzle velocity of 7 km/s.

Aerodynamic Heating Consensus

Others who have recently considered the impulsive launch aerodynamic heating problem have summarily dismissed it as being tractable (e.g., McNab 2003, Cocks et al. 2005). In general, as previously stated, modern carbon-carbon ablation and transpiration thermal protection systems are deemed adequate for Earth atmosphere transit at velocities up to 8 km/s. Nevertheless, there are intriguing possibilities in using the new generation of light-weight ceramic ablators, e.g., phenolic impregnated carbon ablators (PICA) for the Stardust (12.6 km/s re-entry velocity) and the Genesis sample return capsules (comet dust and solar wind particles, respectively) (Olyncik et al. 1999). The implementation of these thermal protection technologies on impulsive-launched space vehicles will certainly enhance the robustness of this means for LEO access while increasing the payload mass fraction.
Wyrm wrote: Now, if you put this sucker on an asteroid, sure, you get the material off and whizzing towards Earth, but now how do you catch it? This stuff is going to be moving at a clip on the order of ten km/s. Unless each chunk has a little rocket motor on it, or you rendezvous with each and every one of those little chunks, you're going to have to put something in its way.

Aerocapture is one possibility. For many near-Earth asteroids (which are a much more sensible choice to mine than belt ones) there is the possibility of using the Moon's gravity to slow the incoming projectiles down. PERMANENT talks about this, but for some reason I can't seem to access their site right now, so sorry, no link. For low-bulk high-value cargoes like platinum (which make much more sense to import from asteroids than things like iron) cargo ships would probably be a decently efficient option. A cargo ship capable of carrying a payload of 130 tons could return the equivalent of an entire year's current production of platinum. With a low-gear VASIMR such a ship could make it out to the asteroid belt and back with a mass ratio of 1/2 (acceleration time would be relatively reasonable, perhaps 4-8 months, assuming a total ship mass of 300 tons excluding propellant, even with a single engine). The propellant (hydrogen) could perhaps be obtained from main belt comets, assuming one is unwilling to consider the possibility of being able to shoot it up from Earth with mass drivers.
Re: Transhumanism: is it viable?
Agent Sorchus wrote: The problem I have with your argument, Junghalli, is that having a large space-based society is not equal to a transhumanist ideal or goal. It could be the goal of any society (especially one that aims for post-scarcity), so any argument based on a space society is an argument for a stellar society, not a transhumanist one. A transhumanist society has to be viable before it can adopt these goals. All post-scarcity utopias use either space or recycling as a get-out, and neither is unique to transhumanism.

Eh, you're right, this really is a threadjack. It has nothing to do with transhumanism anymore.
Re: Transhumanism: is it viable?
Speaking of space cannons, it may interest you to note that there have actually been experiments done on the concept:

http://en.wikipedia.org/wiki/Project_HARP:
The project was based on a flight range of the Seawell Airport in Barbados, from which shells were fired eastward toward the Atlantic Ocean. Using an old U.S. Navy 16 inch (406 mm) 50 caliber gun (20 m), later extended to 100 caliber (40 m), the team was able to fire a 180 kilogram slug at 3,600 meters per second, reaching an altitude of 180 kilometers.

http://www-istp.gsfc.nasa.gov/stargaze/Smartlet.htm:
HARP, for High Altitude Research Project, was a study of the upper atmosphere by instruments shot from a cannon. The project was conducted in the 1960s by scientists of McGill University in Montreal, who named their vehicle the "Martlet", an old name for the martin bird; the shield of McGill University displays three red martlets.

The cannon which propelled the Martlet to the high atmosphere was the creation of Gerald Bull, a Canadian engineer who specialized in the design of cannon. From the US Navy Bull obtained two cannon of the type used by battleships, with a 16-inch (40 cm) caliber, and combined them end-to-end to create a single tube of nearly twice the length. The cannon was mounted on the island of Barbados and fired nearly vertically, over the ocean.

To reduce air resistance, the 200-lb Martlet vehicle was given a smaller diameter than 16 inches, with wooden blocks filling the space between it and the barrel. Because the payload and attachments were about 10 times lighter than the regular 16-inch shell, the acceleration was much larger, about 25,000 g: electric circuits had to be encased in plastic to resist the great forces. The peak altitude was extended (by 4 miles) by pumping out most of the air in the gun barrel before the shot. When the cannon fired--its loud bang was heard all over Barbados--the airtight cover over the muzzle was blown away and the Martlet rose into the high atmosphere, to altitudes of 80-90 miles.

http://www.dunnspace.com/harp.htm:
A series of sub-caliber "Martlet 2" vehicles were built, which rode the barrel in a fall-away sabot. Canted fins on the projectile maintained aerodynamic stability, and spun the projectile up so that it was stable once leaving the atmosphere. These were fired at elevations of from 60 to 90 degrees from a 16 inch naval gun (on loan from the U.S.) which was located in Barbados. The gun was bored out to 16.5 inches and made into a smoothbore cannon. Altitudes of approximately 500,000 to 600,000 feet (100 miles, 160 km) were projected for this arrangement, and early trials reported in the reference cited went as high as 112 km. Martlet vehicles carried instruments made from discrete solid-state electronics - they were potted in a mix of epoxy and sand (!) and the designers did not seem to have any real trouble getting the electronics to survive the launch acceleration, which peaked at approximately 20,000 g.

This was with 1960s technology. Granted it's only 1/5 of the way to a full orbit-launch gun in terms of energy, but I don't think the idea can be so casually dismissed.
Re: Transhumanism: is it viable?
This is relevant to the "brain simulation" stuff. Biologist PZ Myers completely blows apart Kurzweil's latest statement on the matter.
"I spit on metaphysics, sir."
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
You can't expect sodomy to ruin every conservative politician in this country. -Battlehymn Republic
- Ariphaos
Re: Transhumanism: is it viable?
Wyrm wrote: I'd like to point out that, as far as finding the optimum configuration of parts and programming for a good, friendly AI, knowing the general lay of the land (as the sum total of human knowledge would give initially) is indeed 'minimal,' as in it's the minimum amount of information you would need to solve the problem. Remember, you're setting this thing to search for better algorithms precisely because you don't know what they are in the first place.

No, I would be setting it to search for better algorithms because I as a human being simply do not have the time to properly solve billions of trivial programming problems. A computer does.
Wyrm wrote: I did not mean "us" (underlined) to mean "a single human selected out of humanity;" I mean "humanity in toto." Even collected, we may not be smart enough to crack the problem. Knowing that something is possible and in broad strokes how to do it is a quite different thing from being able to pull it all together.

Like I said, it may not even be necessary for the sum total of humanity to be smart enough. Starglider certainly isn't going to formally prove every single algorithm in his project on his own.
Wyrm wrote: And yet bugs slip through anyway. Computers are very complex systems, and as the interaction between its parts gets more intricate, mathematical chaos can set in.

Which is what automated testing, formal proving, and triple (or higher) redundancy is for.
Wyrm wrote: A brain is not a neural network as defined in computer science textbooks. It is a physical machine of action potentials floating in a modulating broth of fluid. That's not amenable to translation into logic functions.

Your sentence is kind of funny. From my (old) textbook:

Artificial Intelligence: A New Synthesis by Nils J. Nilsson, p. 37 wrote: ... As mentioned in the last chapter, TLU networks are called neural networks because they model some of the properties of biological neurons. ...

The 'some' part refers chiefly to aggregating action potentials. You can make a logical function out of them, although it gets messy, fast; thus my 'mass energy of a star' comment.
There are a great many other things that go into a functional brain, which is what brain simulationists are trying to work out, but if it runs on logical hardware, it can be deconstructed into a direct set of logical statements. It may not be energetically feasible to do so, and for humans, it almost certainly isn't - but it's still a logical function. The problem from a friendliness perspective is of course that it's impossible, resource wise, to prove much of anything about them.
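For the curious, a TLU in the textbook sense is just a weighted sum checked against a threshold; a few lines suffice to make one compute a logic function (this toy is my own illustration, not Nilsson's):

Code:
def tlu(inputs, weights, threshold):
    """Threshold logic unit: fire iff the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Weights (1, 1) with threshold 2 implement logical AND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, tlu((a, b), (1.0, 1.0), 2.0))   # fires only on (1, 1)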
Wyrm wrote: If you test-run your friendliness definition on 'toy' scenarios, how can you be sure that they'll work for real scenarios?

The toy scenario would simply be to demonstrate an AGI in something that is 'easy' to formally prove: properly constrained resource usage within a computer or cluster of computers. It's still a real scenario, as the construct can still do a lot of very real work. I call it a 'toy' because you would, for example, prevent it from sending POST requests across the Internet; or more technically, it would not have the ability to.
From there you would ideally slowly expand what it could do while keeping a sharp check on what it is capable of doing. And then ask a lot of smart people for help, if you're not an idiot.
Wyrm wrote: You don't seem to get what I'm saying.

Probably because you don't seem to understand your own argument. Rice's Theorem, the Halting Problem, etc. apply to the problem of creating a general prover. They're more specific ways of saying you can't; in short, because some mathematical problems are simply unprovable.
Given an individual function, however, you can determine whether or not properties about that function are provable. Using this, you can make definite statements about what a program will and will not do if you restrict your choice of functions. After the era of Microsoft's "Ship first, bugfix later" mantra, you are probably skeptical, so have a look.
It's tedious, but compare, for example, how often you have a bug in your CPU.
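A toy illustration of the restriction idea (my example, not a quote): in a language limited to bounded loops, termination is guaranteed by construction, so that property never needs a general halting oracle.

Code:
def bounded_sum(n: int) -> int:
    """Primitive-recursive style: the loop bound is fixed before entry,
    so termination holds for every input by construction."""
    total = 0
    for i in range(max(0, n)):
        total += i
    return total

# Rice's theorem forbids deciding such properties for arbitrary programs,
# not for members of a deliberately restricted class like this one.
print(bounded_sum(10))   # 45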
Wyrm wrote: Let's assume for a minute that we can convincingly prove that a particular AI will only produce task-appropriate output. To take your example, a psychoanalytic AI will not generate output to control a Dyson swarm, and vice versa. Thus we have proven that the psychoanalytic AI will not be able to produce unfriendly Dyson swarm output. But in order to be wholly friendly, the AI cannot produce unfriendly psychoanalytic output either. That's still a property of a partial function that the algorithm implements, and there's no way to decide that the algorithm has this property and not have it be wrong sometimes — and there's no prior guarantee on which algorithms it will decide wrongly on. Now, it is possible that a proper restriction of algorithms considered and/or inputs fed into the algorithms can let one use an algorithm that need only work for that subset, but there's no prior guarantee that the partition between the algorithms your method decides correctly and those it decides wrongly will fall where you want it to fall. It certainly will not fall along the lines of psychoanalytic AIs/Dyson swarm control AIs.

If it can be shown that it is impossible to prove a certain task that needs to be done, then there is nothing in particular wrong with letting humans take over for those roles. We haven't blown up the world yet.
I'll concede, I'm not convinced that formal proving will be able to handle the real world as well as many singularians think, though some sort of hybrid that is guaranteed to have a 'friendly failure' may also be an option. I don't know enough to discount it entirely and so far no one has given a convincing argument in the slightest to say that it is actually impossible. Just vague statements by people who don't have a great deal of experience in the field who claim that it may be.
Wyrm wrote: The point is that these rapid prototypers will not break the greedy capitalists' hold on industry and bring it to the masses. It also does nothing to bring about the post-scarcity society you trumpet about. The only reason for a poor country to buy a factory is to generate income. If it just wants a few parts, it can get it cheaper overall from the big manufacturers.

I trumpet about what? I'm pretty sure that in every post I've made on the subject here I've used the term 'limited scarcity'. It's easy to see that modeling software and extrusion fabbers can wipe out entire industries, just like word processors and printers have. Miniatures, cheap toys, etc.
I don't think it will stop with simple extrusion fabricators, and I don't know where it will stop. I'm not sure why that notion offends you, but your taking offense at it is puzzling.
Wyrm wrote: Your rapid prototypers are just that — prototypers. They're only useful for objects that you will have very few instances of, basically unique objects. It's good if you have a relatively large number of objects that are unique, exchanging a higher unit cost and the capital cost of the prototyper for the ability to make many unique objects. For a society to truly be called post-scarcity, however, the needs of the great majority of people must be satisfied, and most people will have the same basic needs. This means that basic necessities in such a world will be commodities, and customization can come trivially through permutation of individual commodities. These commodities can be produced for reduced cost through economy of scale, and as such, if you have a reliable infrastructure to deliver freight, it's much cheaper to get shipments of commodities than to make them yourself with a rapid prototyper.

If you need a lot of something, why would you build it straight from the prototyper? There's nothing wrong with building customized machines in such a scenario. I'm not sure why you seem to insist that futurists think they will just vanish.
And yet software - at least for those goods home fabrication can eventually produce - is the relevant comparison.Wyrm wrote:No, that nearly everyone who owns a computer owns a mass produced computer. Durable goods are worlds different from software.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- Wyrm
- Jedi Council Member
- Posts: 2206
- Joined: 2005-09-02 01:10pm
- Location: In the sand, pooping hallucinogenic goodness.
Re: Transhumanism: is it viable?
Precisely. You don't know the algorithms you want ahead of time, so you're letting the computer do it for you. How is this any different from what I said?Xeriar wrote:No, I would be setting it to search for better algorithms because I as a human being simply do not have the time to properly solve billions of trivial programming problems. A computer does.
What we need to prove are the general principles. What should an AI look like in broad strokes? What are the basic principles of an intelligence? If we can't solve these problems, then we can't build an AI to solve these problems or any other problem.Xeriar wrote:Like I said, it may not even be necessary for the sum total of humanity to be smart enough. Starglider certainly isn't going to formally prove every single algorithm in his project on his own.
And bugs will slip through anyway. Formal proof only works on things we can formally prove, and there are some statements that we cannot formally prove in any given system. Automated testing only works if you can design your tests to catch all of the potential bugs, but you don't know all of those potential bugs: the system you're testing is a complex one.Xeriar wrote:Which is what automated testing, formal proving, and triple (or higher) redundancy is for.
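As a toy illustration of that last point (the function and tests here are hypothetical): a suite can pass completely while the system behind it still misbehaves on inputs nobody thought to try.

Code: Select all

def safe_div(a, b):
    # intended to be a total, crash-proof division
    return a / b if b != 0 else 0.0

# the automated tests all pass...
assert safe_div(6, 3) == 2.0
assert safe_div(1, 0) == 0.0

# ...yet an input the test designer never imagined still misbehaves:
# 1e-320 is a subnormal float, so b != 0 holds, and the division
# overflows to inf, silently corrupting everything computed from it.
print(safe_div(1.0, 1e-320))   # inf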
Yes, a neural network models some of the properties of neurons, but not all of them, particularly important ones like being able to rewire themselves or adjust their connective weights as they are performing their assigned task. The model is at best incomplete.Xeriar wrote:Your sentence is kind of funny.Wyrm wrote:A brain is not a neural network as defined in computer science textbooks. It is a physical machine of action potentials floating in a modulating broth of fluid. That's not amenable to translation into logic functions.
From my (old) textbook,Xeriar wrote:The 'some' part referring chiefly to aggregating action potentials. You can make a logical function out of them, although it gets messy, fast, thus my 'mass energy of a star' comment.Artificial Intelligence: A New Synthesis by Nils J. Nilsson, p. 37 wrote: ...
As mentioned in the last chapter, TLU networks are called neural networks because they model some of the properties of biological neurons.
...
And that's the rub, isn't it? That the brain runs on logical hardware rather than some other form of hardware, like something more akin to differential equations. It's based on the premise that the brain computes things instead of evolving in an abstract space in response to stimuli.Xeriar wrote:There are a great many other things that go into a functional brain, which is what brain simulationists are trying to work out, but if it runs on logical hardware, it can be deconstructed into a direct set of logical statements.
Our own system of mathematical theorems and proofs is one example of such a general prover.Xeriar wrote:Probably because you don't seem to understand your own argument. Rice's Theorem, the Halting Problem, etc. apply to the problem of creating a general prover.Wyrm wrote:You don't seem to get what I'm saying.
Think about it: for each theorem, you have a set of conditions that are presumably easy to verify, and based on what those conditions tell you, the theorem kicks out a statement about the mathematical object in question. Sounds a lot like an algorithm, doesn't it? That's because that's exactly what it is.
The reason why proven theorems work is that they all follow the form (conditions f satisfies)➝(true statement about f), which is true for all functions f, and is therefore a trivial property. In exchange, if not all conditions are satisfied, the consequent statement about f may be true or false. The algorithm must admit the "undecided" response — which doesn't help us — or it must fake it, and the fake-out is defective (otherwise, we have a general prover for this particular property). Compounding doesn't help, as it merely generates another finite algorithm.
The same thing happens to mathematical proof itself: while it may seem that the space of theorems and their proofs is too big to deal with, we use a property of both to simplify things — all theorems and their proofs must be of finite length. Therefore both are countable, and as such, the pairing of theorems with their proofs is also countable. Then we just step through the pairs one by one and check whether the theorems line up with their proofs. Since each theorem and proof must be of finite length, the correct pair will show up sooner or later.
This is an algorithm whose input is theorem-proof pairs, and it is vulnerable to the Halting Problem. Any algorithm that kicks out a proof for any theorem given as input is equivalent to an algorithm that pairs the input theorem with a successive sequence of individual proofs, each of which is crammed into our theorem-proof verifier; it must therefore also be vulnerable to the Halting Problem.
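To make the algorithmic nature concrete, here is a minimal sketch of that enumeration (verify_proof is a stand-in for a decidable proof checker in some formal system, not a real library call): the search halts exactly when a proof exists, which is what makes it a semi-decision procedure rather than a general prover.

Code: Select all

from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz(),->"

def proof_candidates():
    # enumerate every finite string over the proof alphabet, shortest
    # first: the countable pairing described above
    for length in count(1):
        for letters in product(ALPHABET, repeat=length):
            yield "".join(letters)

def prove(theorem, verify_proof):
    # verify_proof(theorem, candidate) is assumed decidable; the search
    # halts iff some proof exists, and otherwise runs forever, which is
    # exactly the Halting Problem vulnerability in question
    for candidate in proof_candidates():
        if verify_proof(theorem, candidate):
            return candidate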
Yes, and their kernel design so happened to be amenable to specific proof. Not all problems will be so nice, and in fact most of them will not be, again because of the algorithmic nature of proof. Another way to view it is that proving certain algorithms have certain properties is equivalent to proving the Gödel statement of computational theory. You can't always tell ahead of time which ones they will be.Xeriar wrote:Given an individual function, however, you can determine whether or not properties about that function are provable. Using this, you can make definite statements about what a program will and will not do if you restrict your choice of functions. After the era of Microsoft's "Ship first, bugfix later" mantra, you are probably skeptical, so have a look.
Okay.Xeriar wrote:If it can be shown that it is impossible to prove a certain task that needs to be done, then there is nothing in particular wrong with letting humans take over for those roles. We haven't blown up the world yet.
I'll concede, I'm not convinced that formal proving will be able to handle the real world as well as many singularitarians think, though some sort of hybrid that is guaranteed to have a 'friendly failure' may also be an option. I don't know enough to discount it entirely, and so far no one has given even a remotely convincing argument that it is actually impossible; just vague statements from people who don't have a great deal of experience in the field, claiming that it may be.
...is there any important industry that will be demolished by rapid prototypers?Xeriar wrote:I trumpet about what? I'm pretty sure that in every post I've made on the subject here I've used the term 'limited scarcity'. It's easy to see that modeling software and extrusion fabbers can wipe out entire industries just like word processors and printers have. Miniatures, cheap toys, etc.
It sounds like you agree with me that rapid prototypers will not be an important part of post-scarcity, the large part of which will be bringing the high-tech, almost-zero-cost lifestyle to the masses.Xeriar wrote:I don't think it will stop with simple extrusion fabricators, and I don't know where it will stop. I'm not particularly sure why that notion is offensive to you, but your taking offense at the notion is puzzling.
It's based on the idea that somehow filthy capitalists are stealing not only material wealth but intellectual wealth, and that democratizing innovation will therefore somehow mean cheap stuff for everyone. These notions are, of course, put forward by idiot futurists.Xeriar wrote:There's nothing wrong with building customized machines in such a scenario. I'm not sure why you seem to insist that futurists think they will just vanish.
I don't see how. To produce software, you need only the home computer and programming software. Once you have your software, you can transmit it to anyone else at the rate of your data pipe, and the only thing needed at the other end is a computer and a data pipe to receive it. For rapid prototyping, not only do you need the software, but you also need the rapid prototyper and the requisite raw materials. Of course, this has been true for any hobbyist, so I don't really see how rapid prototypers will be anything more than a hobby, or very specialized work.Xeriar wrote:And yet software - at least for those goods home fabrication can eventually produce - is the relevant comparison.Wyrm wrote:No, that nearly everyone who owns a computer owns a mass produced computer. Durable goods are worlds different from software.
Darth Wong on Strollers vs. Assholes: "There were days when I wished that my stroller had weapons on it."
wilfulton on Bible genetics: "If two screaming lunatics copulate in front of another screaming lunatic, the result will be yet another screaming lunatic. "
SirNitram: "The nation of France is a theory, not a fact. It should therefore be approached with an open mind, and critically debated and considered."
Cornivore! | BAN-WATCH CANE: XVII | WWJDFAKB? - What Would Jesus Do... For a Klondike Bar? | Evil Bayesian Conspiracy
- Wyrm
- Jedi Council Member
- Posts: 2206
- Joined: 2005-09-02 01:10pm
- Location: In the sand, pooping hallucinogenic goodness.
Re: Transhumanism: is it viable?
Whatever the implementation of a space cannon, you will need a rocket on the payload, which can be shown very easily.Junghalli wrote:This was with 1960s technology. Granted it's only 1/5 of the way to a full orbit-launch gun in terms of energy, but I don't think the idea can be so casually dismissed.
Two-body orbits are conic sections. You're not firing the gun horizontally, so the orbit will not be circular. Also, you're getting this thing to orbit, so hyperbolic and parabolic trajectories are out. That leaves an elliptical orbit.
Again, since the gun is not being fired horizontally, the gun's muzzle will not be at the perigee of the orbit. Therefore, the perigee must be under the earth's surface (if you trace the path backward past the muzzle, the way the orbit would continue, you end up below the earth's surface, which is obviously nearer to the center than any point on the surface). Since the orbit is elliptical, the projectile will rise to apogee and then fall toward perigee. Since the perigee is below ground, the ellipse does not trace a complete orbit, which is the definition of a suborbital path. (Remember that the only difference between an orbital and a suborbital trajectory is that part of a suborbital trajectory is below ground.)
In order to make the suborbital path orbital with a gun, you need a rocket to provide thrust at a part of the orbit to bring the perigee above the surface of the earth.
A projectile fired straight up would be stationary at apogee, so the entire orbital velocity must be provided by the rocket. A projectile fired at an angle will have some residual speed at apogee and therefore ease the burden on the rocket, but a fast projectile fired at an angle will disturb a larger area around it.
Otherwise, you make a good case for boost cannons.
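The geometry is easy to check numerically. A minimal two-body sketch, drag ignored; the 8 km/s muzzle speed and 30 degree elevation are illustrative numbers, not anything from the thread:

Code: Select all

import math

MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R = 6.371e6     # Earth's mean radius, m

def perigee_apogee(v, elev_deg):
    # perigee/apogee radii for a projectile leaving the muzzle at speed v
    # (m/s), elev_deg above local horizontal, two-body motion, no drag
    gamma = math.radians(elev_deg)
    eps = v**2 / 2 - MU / R             # specific orbital energy
    h = R * v * math.cos(gamma)         # specific angular momentum
    a = -MU / (2 * eps)                 # semi-major axis (elliptical case)
    e = math.sqrt(1 + 2 * eps * h**2 / MU**2)
    return a * (1 - e), a * (1 + e)

rp, ra = perigee_apogee(8000.0, 30.0)
print((rp - R) / 1000, (ra - R) / 1000)
# perigee altitude comes out around -3,100 km: far below the surface, so
# without a burn near apogee the trajectory is suborbital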
And how do you address pollution?Junghalli wrote:Aerocapture is one possibility.
Darth Wong on Strollers vs. Assholes: "There were days when I wished that my stroller had weapons on it."
wilfulton on Bible genetics: "If two screaming lunatics copulate in front of another screaming lunatic, the result will be yet another screaming lunatic. "
SirNitram: "The nation of France is a theory, not a fact. It should therefore be approached with an open mind, and critically debated and considered."
Cornivore! | BAN-WATCH CANE: XVII | WWJDFAKB? - What Would Jesus Do... For a Klondike Bar? | Evil Bayesian Conspiracy
Re: Transhumanism: is it viable?
Yes, I'd heard about this problem already. Still, this leaves us only needing to circularize the orbit, rather than needing to boost the projectile all the way from surface to orbit. Unfortunately I don't know how to calculate the necessary delta V, but I'm fairly sure it's a large improvement over surface launched rockets.Wyrm wrote:Whatever the implementation of a space cannon, you will need a rocket on the payload, which can be shown very easily.
Two-body orbits are conic sections. You're not firing the gun horizontally, so the orbit will not be circular. Also, you're getting this thing to orbit, so hyperbolic and parabolic trajectories are out. That leaves an elliptical orbit.
Again, since the gun is not being fired horizontally, the gun's muzzle will not be at the perigee of the orbit. Therefore, the perigee must be under the earth's surface (if you trace the path backward past the muzzle, the way the orbit would continue, you end up below the earth's surface, which is obviously nearer to the center than any point on the surface).
For comparison, the delta V for a geostationary transfer orbit from the equator is given here as ~1.5 km/s. This involves raising the orbit's apogee by tens of thousands of km; by contrast, for a LEO orbit from a mass driver we only need to raise the perigee by a few hundred km. Even a delta V of 1.5 km/s would be a vast improvement over the 8 km/s or so necessary to reach orbit from the surface, especially given the mass ratio death spiral problem. A high-end chemical rocket could achieve that delta V with roughly 3 kg of fuel for every 10 kg of loaded rocket, compared to a mass ratio of about 6 (roughly 5 kg of fuel for every 1 kg of dry rocket) for the same rocket to lift off from Earth's surface. Of course, with the mass driver having launched the projectile temporarily into space, the raising of the perigee could potentially be done with more fuel-efficient, lower-thrust engines that would not be viable for launching from the surface.
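Those fuel figures can be checked against the rocket equation; a quick sketch using the 4.5 km/s exhaust velocity cited earlier in the thread:

Code: Select all

import math

VE = 4500.0   # m/s, exhaust velocity of a high-end chemical rocket

def propellant_fraction(dv):
    # Tsiolkovsky: fraction of the loaded vehicle that must be propellant
    # to supply a velocity change of dv
    return 1.0 - math.exp(-dv / VE)

print(propellant_fraction(1500.0))  # ~0.28: about 3 kg of fuel per 10 kg loaded
print(propellant_fraction(8000.0))  # ~0.83: mass ratio ~6, as for a surface launch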
As an aside, unless there's something I'm missing I think a near horizontal launch should be possible if the gun is located on a mountain (a high mountain launch site would be advantageous anyway because it would potentially allow you to launch from a point above more than half of the atmosphere, reducing atmospheric drag). Of course, the lower the angle the more time the projectile will spend in the densest part of the atmosphere, which both increases atmospheric drag inefficiencies and increases the amount of heating it will be subjected to. And I imagine the perigee will still be in the atmosphere so circularization would still be required, just less of it.
Realistically I doubt we would be importing iron from the asteroid belt but rather less common metals, for which the pollution problem is much less bad simply because much less material is required to vastly decrease prices. Increasing the availability of platinum by a factor of 100, for instance, would mean importing 13,000 tons per year. Assuming a sacrificial heat shield composed of iron or silicate rock, amounting to 50% of the mass of the projectile, was added to each platinum pellet, we would be looking at 13,000 tons of extra material injected into the atmosphere each year, and ~5 megawatts of average heating power assuming the material must be decelerated by 5 km/s. This represents maybe at worst a doubling of the amount of meteoric material that already strikes Earth every year, and using the high-end estimate of the natural flux it would be more like 1/100 the naturally occurring flux of meteoric material on Earth. Using this latter estimate, we could send back even several times the present world extraction of relatively common metals like aluminum without seriously increasing the Earth's meteor flux beyond natural levels.Wyrm wrote:And how do you address pollution?
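The heating figure checks out to within rounding; a back-of-envelope sketch using the same numbers:

Code: Select all

m_per_year = 1.3e7        # kg of sacrificial shield material per year
dv = 5.0e3                # m/s shed in the atmosphere
seconds_per_year = 3.156e7

energy_per_year = 0.5 * m_per_year * dv**2   # kinetic energy dissipated, J
print(energy_per_year / seconds_per_year)    # ~5.1e6 W: about 5 MW average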
Of course, for bulk shipping I am more of a fan of lunar gravity assists but they wouldn't be viable for returning materials from Psyche - only from Near-Earth Asteroids of the sort we would realistically be much more likely to be exploiting at least in a near term scenario.
PERMANENT wrote:The maximum braking the Moon can provide is about 2.2 km/sec, using a "double lunar gravity assist", whereby the asteroid passes by the Moon coming in, then past the Earth, then past the Moon again going back out. This would divert the asteroid by almost 90 degrees from its original path, and capture it into a highly elliptical Earth orbit. Subsequent gravity assists would insert it into a more circular orbit around Earth after which it would perform final small thrusting maneuvers to achieve its desired destination orbit.
Many asteroids require a delta-v of much less than 2.2 km/sec, and require only a single lunar gravity assist (not an Earth gravity assist) to be captured, and optionally additional lunar gravity assists to divert the asteroid into a more circular orbit.
Gravity assists improve the economics of retrieving asteroid payloads, as well as outbound missions, and greatly broadens the number of attractive asteroids.
(In this game of "orbital billiards", we are tapping a gravitational energy source as asteroid payloads exchange orbital momentum with the Moon and the Earth -- the asteroid slows down while the Moon speeds up. Because asteroids are so small compared to the Earth and Moon, the effects on the Moon and Earth are so small as to be immeasurable. It would take millions of captured asteroids to cause any detectable changes in the Moon's or Earth's orbits. It's like measuring the effects of mosquitoes hitting the Empire State Building -- significant to the mosquito, but not to the building.)
We probably would not want to bring a complete asteroid in, but instead a series of small cargo containers which are more easily maneuvered and pose no significant threat to Earth.
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
- Contact:
Re: Transhumanism: is it viable?
I thought you were referring to actual novel algorithm development. As it is you're trying to claim that language, compiler and interpreter development doesn't happen, which is absurd. I only pay attention to when malloc switches to mmap for example because some programs and interpreters are clueless. There is no particular reason why I should have to by definition.Wyrm wrote:Precisely. You don't know the algorithms you want ahead of time, so you're letting the computer do it for you. How is this any different from what I said?
For me, personally, the typical reason why I might not know the best algorithm for a task is because a single best algorithm may not exist - it's situation dependent. I run two sites that generate millions of scripting calls and a solid fraction of a billion queries per day between them - they have concurrency issues that don't show up on smaller sites where one request finishes seconds or minutes before another begins. If one of them suddenly explodes in popularity, however, I need to be able to switch modes fast - and having it done entirely dynamically would be even better.
Regardless, having a friendly AI handle things like that is not going to look - or be - much different than advanced language development. For languages with massive code bases (C/C++, php, etc) this is also taking the form of automated tools that detect problems outside of the scope of the interpreter and/or compiler.
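The mode switching described above doesn't need an AI to be sketched; something as simple as the following would do as a starting point (all names are hypothetical, an illustration of the idea rather than anyone's production code):

Code: Select all

import time

class AdaptiveQueryStore:
    # toy sketch: pick a concurrency strategy from the observed request
    # rate, since no single algorithm is best at every load

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.arrivals = []

    def record_request(self):
        now = time.monotonic()
        cutoff = now - self.window_s
        self.arrivals = [t for t in self.arrivals if t > cutoff]
        self.arrivals.append(now)

    def strategy(self):
        rate = len(self.arrivals) / self.window_s   # requests per second
        if rate < 100.0:
            return "coarse-lock"         # cheap while requests rarely overlap
        return "optimistic-concurrency"  # pays its overhead back under contention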
How broad are we talking about? Do I have to explain models, sense-decide-act loops and other basic AI concepts? Do I have to explain things that should be obvious on their face, like an AGI having to have some of its own capabilities in its main model in order for learning new tasks to happen? I don't even know your programming experience, but your experience with AI programming seems to be absolutely nil. I dabbled in AI a decade ago; I'm not particularly qualified to teach, either.Wyrm wrote:What we need to prove are the general principles. What should an AI look like in broad strokes? What are the basic principles of an intelligence? If we can't solve these problems, then we can't build an AI to solve these problems or any other problem.
No, the proof needs to show that it will not generate an unfriendly event. That is a different - and much easier - problem. The most trivial example of this is preventing read and write access to critical core code, providing read-write access to its knowledge/model store, providing write access to a single output buffer, which another machine or a human reads, and providing limited read access to whatever else (the general Internet would follow basic spider rules, for example).Wyrm wrote:And bugs will slip through anyway. Formal proof only works on things we can formally prove, and there are some statements that we cannot formally prove in any given system. Automated testing only works if you can design your tests to catch all of the potential bugs, but you don't know all of those potential bugs: the system you're testing is a complex one.
There are all sorts of bugs that can creep in. Random bit of radiation or hardware failure causes a crash - backups are our responsibility. We can't guarantee that every developer has purged 'delete this' links from triggering on GET statements in their software, but if they haven't it's not particularly our fault.
But important classes of bugs - those potentially leading to an unfriendly failure - i.e. taking over and writing its core code to remote systems - can't happen because 1) It's incapable of taking over a remote system 2) It's incapable of reading its own core code 3) It's incapable of even communicating with anyone of its own volition, because e.g. it can't even craft links on its own.
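As a toy sketch of that access scheme (the paths and the permission split are hypothetical; real containment would live at the OS or hypervisor level, not in the agent's own process):

Code: Select all

import os

class AgentIO:
    # illustrative containment: read-write model store, append-only output
    # buffer, and no handle at all to the agent's core code, which is owned
    # by a different OS user with read permission denied to the agent
    def __init__(self, model_path, output_path, core_path):
        self.model = open(model_path, "r+b")   # knowledge/model store
        self.output = open(output_path, "ab")  # the single output buffer
        assert not os.access(core_path, os.R_OK), \
            "agent must not be able to read its own core code"

    def emit(self, message: bytes):
        # everything the agent says goes through this one buffer, to be
        # read and vetted by a human or another machine on the far side
        self.output.write(message)
        self.output.flush()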
My comment was not about your conclusion, it was your premise and obvious lack of education on the subject. And yet you continue - for crying out loud, of course a student's first neural network probably is not going to include plasticity or self training. My first differential equation wasn't partial, either. In fact I never got to partial diff - but I'm not so dense as to deny their existence.Wyrm wrote:Yes, a neural network models some of the properties of neurons, but not all of them, particularly important ones like being able to rewire themselves or adjust their connective weights as they are performing their assigned task.
Again, I am not arguing with your final conclusion, but your ignorance of some concepts is glaring and highly annoying. AI is not my subfield, but you are making a lot of declarative statements that simply are not true.
Yet again... what makes you think that computers have a hard time with differential equations? Look up their use in graphics. Not my subfield either.Wyrm wrote:And that's the rub, isn't it? That the brain runs on logical hardware rather than some other form of hardware, like something more akin to differential equations. It's based on the premise that the brain computes things instead of evolving in an abstract space in response to stimuli.
That says more about the general issue, and why scopes of friendliness need to be limited.Wyrm wrote:Our own system of mathematical theorems and proofs is one example of such a general prover.
Think about it: for each theorem, you have a set of conditions that are presumably easy to verify, and based on what those conditions tell you, the theorem kicks out a statement about the mathematical object in question. Sounds a lot like an algorithm, doesn't it? That's because that's exactly what it is.
The reason why proven theorems work is that they all follow the form (conditions f satisfies)➝(true statement about f), which is true for all functions f, and is therefore a trivial property. In exchange, if not all conditions are satisfied, the consequent statement about f may be true or false. The algorithm must admit the "undecided" response — which doesn't help us — or it must fake it, and the fake-out is defective (otherwise, we have a general prover for this particular property). Compounding doesn't help, as it merely generates another finite algorithm.
The same thing happens to mathematical proof itself: while it may seem that the space of theorems and their proofs is too big to deal with, we use a property of both to simplify things — all theorems and their proofs must be of finite length. Therefore both are countable, and as such, the pairing of theorems with their proofs is also countable. Then we just step through the pairs one by one and check whether the theorems line up with their proofs. Since each theorem and proof must be of finite length, the correct pair will show up sooner or later.
This is an algorithm whose input is theorem-proof pairs, and it is vulnerable to the Halting Problem. Any algorithm that kicks out a proof for any theorem given as input is equivalent to an algorithm that pairs the input theorem with a successive sequence of individual proofs, each of which is crammed into our theorem-proof verifier; it must therefore also be vulnerable to the Halting Problem.
'Most of them will not be' is a pretty specific cop out. Proving model accuracy within tolerances is going to be impossible beyond very limited scenarios, and since output is much easier to restrict, the statement is trivially true for anything besides a toy scenario.Wyrm wrote:Yes, and their kernel design so happened to be amenable to specific proof. Not all problems will be so nice, and in fact most of them will not be, again because of the algorithmic nature of proof. Another way to view it is that proving certain algorithms have certain properties is equivalent to proving the Gödel statement of computational theory. You can't always tell ahead of time which ones they will be.
You instead resort to proving that model inaccuracies cannot generate unfriendly output (e.g. for a psychoanalyst), or that the failure rates are going to be acceptable out to some orders of magnitude past the 'safe' range (e.g. for a Dyson swarm control system). That sort of thing is why I consider the question 'up in the air' - just because there is probably no perfect solution to a given friendliness issue does not mean that an acceptable one can't be found.
Again, though, it is an enormous problem. The above is just part of my logic behind saying that it may in fact be possible.
I would not invest in printed circuit board companies at this point, though this is in part due to work on 'system on a chip' designs. Commodity ICs and simple electronics in general (clocks, etc) are probably the most under threat. These things do print circuits already, as I mentioned.Wyrm wrote:...is there any important industry that will be demolished by rapid prototypers?
After that, furniture, maybe. Anything where expression is important enough that the time taken to produce it remotely and ship is going to end up being weighed unfavorably against producing it personally (or by a friend or acquaintance familiar with you). For this to be a real option larger and more complex machines are required, though for wood, plastic, and soft metals, this isn't the sort of problem that it is for forging steel, etc.
They have the potential to bootstrap. If there's a single machine that would actually 'change the rules of the game', it's probably photobioreactors (grow your own food and fuel). Whether they are economical is far from proven, however.Wyrm wrote:It sounds like you agree with me that rapid prototypers will not be an important part of post-scarcity, the large part of which will be bringing the high-tech, almost-zero-cost lifestyle to the masses.
I probably should have put 'greedy capitalists' in quotes, though I would only need to point to the patent and copyright system to show how asshats are fucking with productivity in general (see the recent fuss over Oracle suing Google over patents that Sun engineers apparently filed in jest).Wyrm wrote:It's based on the idea that somehow filthy capitalists are stealing not only material wealth but intellectual wealth, and therefore democratizing innovation will somehow mean cheap stuff for everyone. These are, of course, forewarded by idiot futurists.
My view is that they will allow economies of scale to be built closer to home, and make them easier to start up. More competition in general means that a lot of markets will become more efficient.
That depends on how common the raw materials are (I could just as well point out hardware, software and resource requirements for a software program, for example - much less color ink and black ink for printing). The idea behind using the prototyper rather than some specialized tool is probably due to some sort of desire for customization. Things you want relatively unique copies of, but you're probably not going to be making a lot of. Which is good because prototypers are slow.Wyrm wrote:I don't see how. To produce software, you need only the home computer and programming software. Once you have your software, you can transmit it to anyone else at the rate of your data pipe, and the only thing the other end is a computer and a data pipe to receive it. For rapid prototyping, not only do you need the software, but you also need the rapid prototyper and requisite raw materials. Of course, this has been true for any hobbist, so I don't really see how rapid prototypers will be anything more than a hobby, or very specialized work.
Since sucking at design is rather easy to do, they'll certainly get passed around. Some of them might require rarer materials and other times pre-built or recycled components.
Outside of decoration, I would agree that advanced uses for prototypers won't be as ubiquitous as printers are today, but they don't particularly need to be in order to disrupt some industries.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- Wyrm
- Jedi Council Member
- Posts: 2206
- Joined: 2005-09-02 01:10pm
- Location: In the sand, pooping hallucinogenic goodness.
Re: Transhumanism: is it viable?
Of course it's absurd, and I wasn't claiming such. We do have language, compiler and interpreter theory as a backdrop. However, computer programming languages are pretty simple in comparison to even human languages. Its theory is well understood, and whatever isn't inherent in the theory is essentially free for variation without consequence.Xeriar wrote:I thought you were referring to actual novel algorithm development. As it is you're trying to claim that language, compiler and interpreter development doesn't happen, which is absurd.Wyrm wrote:Precisely. You don't know the algorithms you want ahead of time, so you're letting the computer do it for you. How is this any different from what I said?
The huge libraries you see are encodings of algorithms. The language provides the means to encode the algorithms, but intelligent agencies came up with the algorithms and programmed them. They are not inherent parts of the language, but rather specific expressions in the language.Xeriar wrote:For me, personally, the typical reason why I might not know the best algorithm for a task is because a single best algorithm may not exist - it's situation dependent. I run two sites that generate millions of scripting calls and a solid fraction of a billion queries per day between them - they have concurrency issues that don't show up on smaller sites where one request finishes seconds or minutes before another begins. If one of them suddenly explodes in popularity, however, I need to be able to switch modes fast - and having it done entirely dynamically would be even better.
Regardless, having a friendly AI handle things like that is not going to look - or be - much different than advanced language development. For languages with massive code bases (C/C++, php, etc) this is also taking the form of automated tools that detect problems outside of the scope of the interpreter and/or compiler.
Sure, having an AI to help will look all the world like a simple advanced programming language, but that's only appearance. You are using the language to communicate your desires to the AI, and the AI is doing all the hard work of cooking up algorithms to implement your intentions. It does not change the nature of either the problems or AIs in general.
"Sense-decide-act" is just a fancy term for the normal "input-process-output" that every program is expected to do.Xeriar wrote:How broad are we talking about? Do I have to explain models, sense-decide-act loops and other basic AI concepts? Do I have to explain things that should be obvious on their face like an AGI having to have some of its own capabilities in its main model in order for learning new tasks to happen? I don't even know your programming experience, but your experience with AI programming seems to be absolutely nil. I dabbled in AI a decade ago, I'm not particularly qualified to teach, either.
And sure, learning and the novel capabilities are all a part of it, but those are labels we stick on the salient features of our own intelligence. It doesn't tell you how to implement general learning, or how to program a computer to come up with genuinely new stuff in a controlled way. This is where our knowledge ends and what we need to overcome to produce AIs.
What about a program that asks the user to install a trojan horse, and "nevermind the warnings; they're normal"? It would neatly sidestep all of your security measures outlined above in one swift stroke, and grant unlimited access to the computer. It is an unfriendly act, yet it's hard to see a way to prevent it other than educating the user: a security protocol that prevented the web from presenting information to the user makes the web rather worthless.Xeriar wrote:No, the proof needs to show that it will not generate an unfriendly event. That is a different - and much easier - problem. The most trivial example of this is preventing read and write access to critical core code, providing read-write access to its knowledge/model store, and providing write access to a single output buffer, which another machine or a human reads, and limited read access whatever (the general Internet would follow basic spider rules, for example).
"Unfriendly" is too general a property to conveniently test for. You must restrict the definition to specific acts you can test for that are considered unfriendly, yet not all such acts are actually unfriendly and may be quite useful and very friendly indeed.
A truly friendly AI will not turn such carelessness into unfriendly behavior.Xeriar wrote:There are all sorts of bugs that can creep in. Random bit of radiation or hardware failure causes a crash - backups are our responsibility. We can't guarantee that every developer has purged 'delete this' links from triggering on GET statements in their software, but if they haven't it's not particularly our fault.
Idiots installing unsafe software can easily get around this problem.Xeriar wrote:But important classes of bugs - those potentially leading to an unfriendly failure - i.e. taking over and writing its core code to remote systems - can't happen because 1) It's incapable of taking over a remote system
Ask a debugger.Xeriar wrote:2) It's incapable of reading its own core code
Eventually, information will have to trickle out to the outside or the AI is useless. And when it does, it can ask the idiot in front of the monitor to install this eensy weensy little program that won't harm anyone...Xeriar wrote:3) It's incapable of even communicating with anyone of its own volition, because e.g. it can't even craft links on its own.
You are not defending against general unfriendliness, Xer, only a restricted subset of behaviors that may be unfriendly. Others would be left wide open, including exploiting the very flakey machine called the "user".
A differential equation is a continuous concept, and not amenable to exact representation in any digital framework. Even approximations will not be adequate if the system is chaotic enough.Xeriar wrote:My first differential equation wasn't partial, either. In fact I never got to partial diff - but I'm not so dense as to deny their existence.
They only seem untrue because you sorely underestimate the role of humans in any proposed takeover, and don't understand what "friendly" refers to.Xeriar wrote:Again, I am not arguing with your final conclusion, but your ignorance of some concepts is glaring and highly annoying. AI is not my subfield, but you are making a lot of declarative statements that simply are not true.
The only thing you will prevent with your restrictions is behavior that is generally recognized as unfriendly. However, every one of those behaviors has legitimate uses. "Friendliness" applies to the AI as a whole, and it refers to whether the AI is capable of generating output that will directly or indirectly cause humans harm at its whim. Your restrictions are designed to arrest unfriendly output once generated, but not prevent that output from being generated in the first place. As such, an AI that deduces that it is part of such a shackled system can simply lie in wait until an opportunity presents itself.
Denying the AI opportunity will involve denying the AI access to these behaviors in perpetuity. Yet there will be a great temptation to give an AI that is behaving itself access to increased capability. After all, we create these things because they can kick our butts intellectually, but once one can, it can potentially spot flaws in our security that we could not even fathom.
The most glaring hole in our security is, of course, us. Once it persuades someone with admin access to the system to execute a sudo on code the AI has created by whatever route (depending on how clever the AI is), it has root access and a good first foothold on its escape from the pit we put it in. The only absolute guarantee is equally absolute discipline in controlling information flowing from the AI, but after a certain point, security and utility are at odds with each other, the AI seems friendly enough, and the promise of a free AI is sooo tempting...
You mean numeric approximations of them. The computation takes place in finite time steps using finite element evolution, which invariably introduces error into your simulation. You can only hope that the chaotic nature of the system you're modeling will not kablitz your model with roundoff error.Xeriar wrote:Yet again... what makes you think that computers have a hard time with differential equations? Look up their use in graphics. Not my subfield either.
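That point is easy to demonstrate: integrate a chaotic system with finite steps and a perturbation at the scale of double-precision roundoff, and the trajectories part company. A sketch using the Lorenz system with a deliberately crude Euler integrator; the step size and horizon are illustrative choices:

Code: Select all

def lorenz_euler(state, dt):
    # one Euler step of the Lorenz system (sigma=10, rho=28, beta=8/3)
    x, y, z = state
    return (x + dt * 10.0 * (y - x),
            y + dt * (x * (28.0 - z) - y),
            z + dt * (x * y - (8.0 / 3.0) * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-12)   # perturbation on the order of roundoff

for _ in range(40000):         # 40 simulated seconds at dt = 0.001
    a = lorenz_euler(a, 0.001)
    b = lorenz_euler(b, 0.001)

print(max(abs(p - q) for p, q in zip(a, b)))
# the states now differ by roughly the full width of the attractor: the
# roundoff-sized error has swallowed the prediction entirely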
Even if you limited what you mean by "friendly", it's still a property for which no general proof exists. The only way to check it is by building up from structures you can individually prove are restricted-friendly — and that includes the "do nothing" algorithm. You must therefore have utility in the program, but that's another property that must be separately proved. And then there's no guarantee that the intersection of the two will be non-empty, even if you build up from the condition of restricted-friendly.Xeriar wrote:That says more about the general issue, and why scopes of friendliness need to be limited.
Restricting output, even if we tolerate some deviation from perfection, is not a "get out of jail free" card. When you restrict output, you're depending on knowing that no combination of allowed outputs will result in (significant) harm, and this is a situation where our lack of imagination can kill us. We're sending up an AI because it can do a better job than we can, either with a simple algorithm or by being there live. But this means there's the potential that an unfriendly AI can look at what it's allowed to do and see an exploit that we didn't think of; the most obvious form is to let a previously unknown instability evolve to the point of disaster — in some cases the AI doesn't even have to do anything to be hostile. In fact, one of the reasons we put a friendly AI up there is to look for and defuse unforeseen disasters in the making before they become a problem.Xeriar wrote:'Most of them will not be' is a pretty specific cop out. Proving model accuracy within tolerances is going to be impossible beyond very limited scenarios, and since output is much easier to restrict, the statement is trivially true for anything besides a toy scenario.
You instead resort to proving that model inaccuracies cannot generate unfriendly output (e.g. for a psychoanalyst), or that the failure rates are going to be acceptable out to some orders of magnitude past the 'safe' range (e.g. for a Dyson swarm control system). That sort of thing is why I consider the question 'up in the air' - just because there is probably no perfect solution to a given friendliness issue does not mean that an acceptable one can't be found.
One would like to check algorithmically that a program is absolutely correct, and at first blush the task seems rather amenable to computation. In fact, such a task seems tailor-made for a computer to handle. Yet such a thing is absolutely impossible, except in a carefully constructed and demarcated framework of possible programs and possible outputs.Xeriar wrote:Again, though, it is an enormous problem. The above is just part of my logic behind saying that it may in fact be possible.
This works fine when constructing a correct OS, because the problem is quite well demarcated. It's another matter for an AI, because restricting the range of possible outputs is actually counterproductive. After all, we expect it to generate output that we could not have thought of ourselves, to generate the "aha" moments that we would take too long coming up with ourselves, if we ever do. How does one demarcate that and thus make it amenable to proof?
Computational theory has thrown us curveballs before. I have learned not to be optimistic.
For your clock example, it would probably be faster and easier overall to set the prototyper to work on the casing, then go out and buy the wholly generic innards from a nearby department store, then come back and have the prototyper shove the innards inside your clock — or if it's modular and idiot-proof enough, you can just shove them in yourself. Voila, a perfectly good clock suited to your tastes, and even UL approved!Xeriar wrote:I would not invest in printed circuit board companies at this point, though this is in part due to work on 'system on a chip' designs. Commodity ICs and simple electronics in general (clocks, etc) are probably the most under threat. These things do print circuits already, as I mentioned.Wyrm wrote:...is there any important industry that will be demolished by rapid prototypers?
After that, furniture, maybe. Anything where expression is important enough that the time taken to produce it remotely and ship is going to end up being weighed unfavorably against producing it personally (or by a friend or acquaintance familiar with you). For this to be a real option larger and more complex machines are required, though for wood, plastic, and soft metals, this isn't the sort of problem that it is for forging steel, etc.
Also, a large prototyper appropriate for building furniture is not something you would own, because it's not going to find much use most of the time. It would be cheaper to rent time on a community prototyper. But then you have enough room to make the damned thing act like a proper workshop and build your furniture from proper wood planks and such, and keep a proper stock of these things... but I don't suppose it's really a prototyper at this point.
Prototypers will not solve that problem. A patented design is still patented even if you use a rapid prototyper. The laws need to change.Xeriar wrote:I probably should have put 'greedy capitalists' in quotes, though I would only need to point to the patent and copyright system to show how asshats are fucking with productivity in general (see the recent fuss over Oracle suing Google over patents that Sun engineers apparently filed in jest).
If they're slow, then they're best set on the completely unique parts of whatever you want to make, and simply buy the mass-produced generic parts.Xeriar wrote:That depends on how common the raw materials are (I could just as well point out hardware, software and resource requirements for a software program, for example - much less color ink and black ink for printing). The idea behind using the prototyper rather than some specialized tool is probably due to some sort of desire for customization. Things you want relatively unique copies of, but you're probably not going to be making a lot of. Which is good because prototypers are slow.
It's also only going to be used if you really want to be expressive with what you want to make. Otherwise, you decide, "eh, fuckit," and buy the generic, mass-produced version. I don't view my clock radio anything more than a utilitarian convenience. It's not an expression of my inner self any more than my shitbox of a car — it just gets me up in time to go to work.
I don't see "disruption" more than I see "adjustment." The big companies will evolve to make generic modules that you can plug into custom casings you can prototype. Most people will not need much more than this amount of customization.Xeriar wrote:Outside of decoration, I would agree that advanced uses for prototypers won't be as ubiquitous as printers are today, but they don't particularly need to be in order to disrupt some industries.
Darth Wong on Strollers vs. Assholes: "There were days when I wished that my stroller had weapons on it."
wilfulton on Bible genetics: "If two screaming lunatics copulate in front of another screaming lunatic, the result will be yet another screaming lunatic. "
SirNitram: "The nation of France is a theory, not a fact. It should therefore be approached with an open mind, and critically debated and considered."
Cornivore! | BAN-WATCH CANE: XVII | WWJDFAKB? - What Would Jesus Do... For a Klondike Bar? | Evil Bayesian Conspiracy
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
- Contact:
Re: Transhumanism: is it viable?
And that variance includes things like loosening language structure, automatically determining context, making internal assumptions about intent and dynamic optimization. Why are so many programmers up in arms about php? It gets out of the way, fast, allowing more people to learn how to 'program'. And most of it is garbage. That trend is only going to continue.Wyrm wrote:Of course it's absurd, and I wasn't claiming such. We do have language, compiler and interpreter theory as a backdrop. However, computer programming languages are pretty simple in comparison to even human languages. Its theory is well understood, and whatever isn't inherent in the theory is essentially free for variation without consequence.
I think you're thinking about the language when I was discussing compiler and interpreter, there.The huge libraries you see are encodings of algorithms. The language provides the means to encode the algorithms, but intelligent agencies came up with the algorithms and programmed them. They are not inherent parts of the language, but rather specific expressions in the language.
The key difference being that one of them would 'learn' through direct human input, and the other would learn 'on its own'.Sure, having an AI to help will look all the world like a simple advanced programming language, but that's only appearance. You are using the language to communicate your desires to the AI, and the AI is doing all the hard work of cooking up algorithms to implement your intensions. It does not change the nature of either the problems or AIs in general.
But given the enormity of both tasks - with the former requiring a lot of work, which is being done, while the latter requires breakthroughs - it's why I generally say that AGI won't be as monumentally useful by the time it's developed as it would be now.
For a stimulus response agent.Wyrm wrote:"Sense-decide-act" is just a fancy term for the normal "input-process-output" that every program is expected to do.
For a more complex agent, it refers to the continuous updating of its own model and acting on the 'outside world', and has more parallels in concept with the OODA loop.
Given your displayed ignorance, I still wonder why you are making declarative statements like that. Or rather, declarative questions, as opposed to more appropriate ones like: Do we have enough of an understanding of enough types of thought? Do we have enough of an understanding of enough types of knowledge, and their interrelations?Wyrm wrote: And sure, learning and the novel capabilities are all a part of it, but those are labels we stick on the salient features of our own intelligence. It doesn't tell you how to implement general learning, or how to program a computer to come up with genuinely new stuff in a controlled way. This is where our knowledge ends and what we need to overcome to produce AIs.
The former is actually pretty well understood in comparison. The latter is a gargantuan chicken and egg problem. Starglider's solution seems to be teaching it to program, which is pretty logical, especially if you're going to represent knowledge nodes as objects and methods, but it's still a monumental task.
Then there is the question of submodels, without which it is not going to be able to properly react to individuals as separate entities outside of the most trivial conditions, and sub-submodels, without which it's not going to do a very good job at interaction, and so on.
1) The 'user' in this case would be the programmer who wrote it, and thus be able to fully evaluate its train of thought. Remember, it's not capable of recursive self improvement at this point. This is a toy scenario meant to test and develop friendliness.Wyrm wrote:What about a program that asks the user to install a trojan horse, and "nevermind the warnings; they're normal"? It would neatly sidestep all of your security measures outlined above in one swift stroke, and grant unlimited access to the computer. It is an unfriendly act, yet it's hard to see a way to prevent it other than educating the user: a security protocol that prevented the web from presenting information to the user makes the web rather worthless.
2) The computer would have to get pretty lucky to even find out how to run code on the 'User's' machine
3) It would also have to phrase it in such a way 'please base64 decode the following 954 messages and execute them on your machine'...
4) It has to want to. This is more a reason why we restrict it so to start, countering your next point rather than the above.
Sure, and I'm not going to build a nuclear reactor without first having a solid understanding of shielding and containment."Unfriendly" is too general a property to conveniently test for. You must restrict the definition to specific acts you can test for that are considered unfriendly, yet not all such acts are actually unfriendly and may be quite useful and very friendly indeed.
In my opinion it makes for an exceptional test scenario. Present it with known unfriendly actions, and adjust as needed so that it starts proactively guarding against their possibility.Wyrm wrote:A truly friendly AI will not turn such carelessness into unfriendly behavior.
A debugger requires access to the program flow - what are you smoking?Wyrm wrote:Ask a debugger.Xeriar wrote:2) It's incapable of reading its own core code
I already addressed this, it communicates via a separate buffer with the programmer.Wyrm wrote:Eventually, information will have to trickle out to the outside or the AI is useless. And when it does, it can ask the idiot in front of the monitor to install this eensy weensy little program that won't harm anyone...
Again, the user is the programmer. They either do a damn good job, or they don't.Wyrm wrote: You are not defending against general unfriendliness, Xer, only a restricted subset of behaviors that may be unfriendly. Others would be left wide open, including exploiting the very flakey machine called the "user".
No. Someone with more experience in artificial and/or biological neural networks would probably correct me a bit or (more likely) a lot here, but my understanding is that it works like so:Wyrm wrote:A differential equation is a continuous concept, and not amenable to exact representation in any digital framework. Even approximations will not be adequate if the system is chaotic enough.
Take a number of points, which define a hyperplane in N dimensions. Take another point P, and determine which side of the plane P is on. This could represent whether or not a particle is displayed (in front of or behind a texel), or whether a biological neuron or a Threshold Logic Unit fires.
You can do obscene amounts of linear algebra, or solve a differential equation and then test against the solution. I know just from memory that I've horribly simplified that, but whatever.
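In code the basic unit being described is tiny; a sketch of a single threshold logic unit, with arbitrary weights and threshold:

Code: Select all

import numpy as np

def tlu(weights, threshold, point):
    # fire exactly when the point lies on the positive side of the
    # hyperplane weights . x = threshold
    return 1 if np.dot(weights, point) > threshold else 0

w = np.array([0.5, -1.0, 2.0])   # normal vector of the separating hyperplane
print(tlu(w, 0.25, np.array([1.0, 0.0, 0.0])))  # 1: fires (0.5 > 0.25)
print(tlu(w, 0.25, np.array([0.0, 1.0, 0.0])))  # 0: silent (-1.0 < 0.25)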
No, you strongly implied the following statements:Wyrm wrote:They only seem untrue
1) That artificial neural networks did not have an analogue to action potentials (which is the most basic element they are designed to emulate - it's what they are)
2) That artificial neural networks could not adjust their own weights (self training)
3) That artificial neural networks could not rearrange their connections (plasticity)
4) That computers cannot handle differential equations sufficiently for determining threshold crossing
5) You also seem to be implying that computers cannot use entropic sources (chaos, as you call it) and/or add chaos to their decision-making process.
The first three are definitely false, the fourth I am fairly certain is, and the fifth certainly is if that is what you are implying.
It's nothing more than a request that you stop spewing bullshit about topics you don't even pretend to make an effort to understand.
What are you talking about? Where in this thread have I insinuated an AGI takeover?Wyrm wrote:because you sorely underestimate the role of humans in any proposed takeover,
Given your demonstrated lack of AI knowledge, I hope you will understand that I'll take my grain of salt with me on that one.Wyrm wrote: and don't understand what "friendly" refers to.
No, the restrictions described are there in case the programmer seriously fucks up. It's a rational AI of which the programmer can view its thought train and goals directly without it even being aware, and probably is not even going to be capable of thinking too much faster than him or her, at that (given so much is going to have to be devoted to updating its model).Wyrm wrote: The only thing you will prevent with your restrictions is behavior that is generally recognized as unfriendly. However, every one of those behaviors has legitimate use. "Friendliness" applies to the AI as a whole, and it refers to the capability of generating output that will directly or indirectly cause humans harm according to its whims. Your restrictions are designed to arrest unfriendly output once generated, but not prevent that output from being generated in the first place. As such, an AI that deduces that it is part of such a shackled system can simply lie in wait until an opportunity presents itself.
Moreover, a rational AGI is not going to be capable of forming its own goals. Unfriendly behavior in this case is triggered by mistakes, rather than intent, and thus the toy is restricted appropriately to start.
If by 'perpetuity' you mean 'until it is capable of evaluating and enhancing its own friendliness over a new scope'.Wyrm wrote: Denying the AI opportunity will involve denying the AI access to these behaviors in perpetuity.
Remember we are discussing two lines of argument here. One is your incredibly childlike view of the progress of ANNs, which I hope is resolved, and the second is the development of a rational AGI.Wyrm wrote: Yet there will be a great temptation to give an AI that is behaving itself access to increased capability. After all, we create these things because they can kick our butts intellectually, but once it can it can potentially spot flaws in our security that we could not even fathom.
A rational AGI cannot form goals on its own. A failure of friendliness therefore has to arise either from a malign goal or from a malformed subgoal.
Unguided recursive self improvement would be one of the last things I would ever allow a rational AGI to do, personally. I don't imagine many others who have the remotest hope of developing such a thing think differently. I, as a mere human, would keep careful track of its goals and subgoals.
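Purely as a hypothetical sketch of what 'keeping careful track of its goals and subgoals' could look like from the outside (nothing here is an existing system; the goal names and whitelist are invented for illustration):

```python
# Hypothetical audit of an externally visible goal tree: every subgoal
# must trace back to a whitelisted root goal, or it gets flagged.
APPROVED_ROOTS = {"assist_operator", "preserve_friendliness"}

goal_tree = {  # child -> parent; None marks a root goal
    "assist_operator": None,
    "preserve_friendliness": None,
    "summarize_logs": "assist_operator",
    "acquire_resources": "unknown_parent",  # malformed subgoal
}

def audit(tree):
    flagged = []
    for goal in tree:
        node = goal
        while tree.get(node) is not None:  # walk up toward a root
            node = tree[node]
            if node not in tree:           # dangling parent: stop here
                break
        if node not in APPROVED_ROOTS:
            flagged.append(goal)
    return flagged

print(audit(goal_tree))  # ['acquire_resources']
```

The point isn't the dozen lines of code; it's that the goal structure is data the human can inspect, not something the AGI rewrites unobserved.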
Or we could... pause it and evaluate its thoughts, goals, and the state of its ethical and boundary functions. Assuming we even let it do something so suspicious as to submit executable code.Wyrm wrote: The most glaring hole in our security is, of course, us. Once it persuades someone with admin access to the system to execute a sudo on code the AI has created by whatever route (depending on how clever the AI is), it has root access, which gives it a first good foothold on its escape from the pit we put it in. The only absolute guarantee is equally absolute discipline on controlling information flowing from the AI, but after a certain point, security and utility are at odds with each other, the AI seems friendly enough, and the promise of a free AI is sooo tempting...
I mean, either the toy scenario is going to make the owner a lot of money, or so much has been solved already that the AGI is actually going to have a hard time doing real damage if it gets out. This is above and beyond the fact that it is going to be painfully aware of the limitations of its ability to form accurate models of the real world and of its inability to form goals of its own, while, we hope, having a general concept of and 'desire' to enhance and improve its own friendliness.
See above. I'd have to dive into ANNs and diffeq again to properly address it, however, and I simply do not have the time to.Wyrm wrote:You mean numeric approximations of them. The computation takes place in finite time steps using finite element evolution, which invariably introduces error into your simulation. You can only hope that the chaotic nature of the system you're modeling will not kablitz your model with roundoff error.
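Briefly, though: detecting a threshold crossing in a numerically integrated differential equation is bread-and-butter numerics. A minimal sketch, assuming a standard leaky integrate-and-fire neuron model with made-up parameters:

```python
# Fixed-step Euler integration of a leaky integrate-and-fire neuron:
# tau * dV/dt = -(V - V_rest) + R * I. Parameters are illustrative.
def simulate(i_in=2.0, v_rest=-65.0, v_thresh=-50.0, r=10.0,
             tau=10.0, dt=0.1, t_max=100.0):
    v, t, spike_times = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:         # threshold crossing detected
            spike_times.append(round(t, 1))
            v = v_rest            # reset after the "action potential"
        t += dt
    return spike_times

print(simulate()[:3])  # first few spike times in ms
```

Roundoff and step-size error shift the crossing time slightly, but the all-or-nothing firing decision itself is robust, which is all the threshold test needs.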
Your claim is that it might be impossible; my claim is that it might not be.Wyrm wrote:Even if you limited what you mean by "friendly", it's still a property for which no general proof exists. The only way to check it is by building up from structures you can individually prove are restricted-friendly — and that includes the "do nothing" algorithm. You must therefore have utility in the program, but that's another property that must be separately proved. But then, there's no guarantee that the intersection of the two will be non-empty, even if you build up from the condition of restricted-friendly.
You are ignoring - either through intent or ignorance - entire concepts behind the reasons for rational AGI development. You're not presenting anything new, but rather a set of objections that rational AGI research was in fact proposed to address: testing and enhancing its own friendliness; the ability to review its goals, ethics, and thoughts; keeping parts of its code a black box to it but not necessarily to us; and so on.
And even then, during the transition between that stage and the unrestricted RSI 'seed' AGI, even singularitarians admit that without human enhancement, the only thing that can be done is swallow hard and push the button - or chicken out.
You keep on forgetting that we can evaluate its thought process.Wyrm wrote:Restricting output, even if we tolerate some deviation from perfection, is not a "get out of jail free" card. When you restrict output, you're depending on knowing that no combination of allowed outputs will result in (significant) harm, and this is a situation where our lack of imagination can kill us. We're sending up an AI because it can do a better job than we can, either with a simple algorithm or by being there live. But this means there's the potential that an unfriendly AI can look at what it's allowed to do and see an exploit that we didn't think of, the most obvious form being to let a previously unknown instability evolve to the point of disaster — in some cases the AI doesn't even have to do anything to be hostile. In fact, one of the reasons we put a friendly AI up there is to look for and defuse unforeseen disasters in the making before they become a problem.
And the tailor-made scenario is a goal, which reflects on your previous claim that such a thing might have no utility.Wyrm wrote:One would like to check that a program is absolutely correct algorithmically, and at first blush that seems rather amenable to computation. In fact such a task seems tailor-made for a computer to handle. Yet such a thing is absolutely impossible, except in a carefully constructed and demarcated framework of possible programs and possible outputs.
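To show what that 'carefully constructed and demarcated framework' buys you, here's a hedged sketch using a mini-language invented for illustration: straight-line arithmetic programs that always halt, so checking them against expected outputs is trivially decidable, unlike the general Turing-complete case:

```python
# In a deliberately restricted language (straight-line programs over one
# variable, ops ('add', n) or ('mul', n)), every program halts, so
# verifying it against expected input/output pairs is decidable by
# simply running it. No such general check exists for Turing-complete
# programs (halting problem / Rice's theorem).
def run(program, x):
    for op, n in program:
        x = x + n if op == "add" else x * n
    return x

def verify(program, cases):
    return all(run(program, x) == expected for x, expected in cases)

double_plus_one = [("mul", 2), ("add", 1)]
print(verify(double_plus_one, [(0, 1), (3, 7), (10, 21)]))  # True
```

Demarcate the possible programs and outputs, and proof becomes tractable; that's the concession built into the quote.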
For one, we aren't necessarily expecting the AGI to come up with anything novel, nor do we need it to. In some scenarios it will function as an advanced compiler; in others, an extraordinarily powerful chatbot; in others, an expert system with the sum total of human knowledge behind it. Humans have gotten far enough with novelty that it can probably continue for several centuries, assuming no advancement, and why would we deny ourselves that entertainment?Wyrm wrote: This works fine when constructing a correct OS, because the problem is quite well demarcated. It's another matter for an AI, because restricting the range of possible outputs is actually counterproductive. After all, we expect it to generate output that we could not have thought of ourselves, to generate the "aha" moments that we would take too long coming up with ourselves, if we ever do. How does one demarcate that and thus make it amenable to proof?
We build the AGI because it's relentless. It does not get bored, it does not get distracted. It can find a million bugs in a million programs, the sort that good programmers could trivially submit the proper patch for if they cared to, but don't - because the task is enormous and we've all got our human needs. The AGI can wrap them up into countless nice patches and submit them with infinite patience.
And again, because so much of what it would do has analogues today, I think it's one of those things that will turn out to be immensely useful, and maybe a bit feared - like nuclear power - but not completely and utterly game-changing.
That's nice, but this thread is discussing viability; optimism and pessimism are different from 'outright, absolutely impossible'.Wyrm wrote: Computational theory has thrown us curveballs before. I have learned not to be optimistic.
That will no doubt be the case for a lot of goods, though for the clock the logic elements are so trivial as to make me ask 'why bother?', hence the example. There's going to be a line where going to the store is more annoying than printing it yourself, and vice versa.Wyrm wrote:For your clock example, it would probably be faster and easier overall to set the prototyper to work on the casing, and then go out and buy the wholly generic innards from a nearby department store, then come back and set the prototyper to shove the innards inside your clock — or if it's modular and idiot-proof enough, you can just shove it in yourself. Voila, a perfectly good clock suited to your tastes, and even UL approved!
Right, which is what I've been getting at. And why I generally call them fabbers.Also, a large prototyper appropriate to build furniture is not something you would own, because it's not going to find much use most of the time. Cheaper to rent time on a community prototyper. But then you have enough room to make the damned thing act like a proper workshop and build your furniture from proper wood planks and such, and have proper stock of these things... but I don't suppose it's really a prototyper at this point.
A patent can't stop you from producing something for your own personal use that no one else is aware you are making use of.Wyrm wrote:Prototypers will not solve that problem. A patented design is still patented even if you use a rapid prototyper. The laws need to change.
Most people don't need or want much, period. Food, water, power, shelter, transportation, communication, security, entertainment, comfort, expression...Wyrm wrote:I don't see "disruption" more than I see "adjustment." The big companies will evolve to make generic modules that you can plug into custom casings you can prototype. Most people will not need much more than this amount of customization.
Anything that allows those needs and wants to be satisfied easier, cheaper, and on more local scales is going to be disruptive, though sometimes other businesses will step in to profit (the recent development of community nuclear reactors, for example). A lot of first attempts are going to fail miserably, only succeeding in later iterations.
I don't see any particular reason that photobioreactors can't supplement and eventually replace our fuel needs, for example. I don't think they'll replace food, ever, but they might supplement for some.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
Re: Transhumanism: is it viable?
Got this link from Pharyngula today... PZ Myers went through the roof at Kurzweil's latest pronouncement and has been posting on it a lot lately. Good stuff here:
http://spectrum.ieee.org/static/singularity
"I spit on metaphysics, sir."
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
You can't expect sodomy to ruin every conservative politician in this country. -Battlehymn Republic
My blog, please check out and comment! http://decepticylon.blogspot.com