Transhumanism: is it viable?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Post Reply
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Transhumanism: is it viable?

Post by Starglider »

Akhlut wrote:How are they going to reconcile powerful instincts with a greater intelligence? A sapient lion is going to have to continually prevent himself from running after playing human children because they are engaging his prey drive. How is he going to feel about constantly wanting to kill and being unable to due to laws and, possibly, his own morality?
Frankly unless you do this in the most crude fashion possible (trial-and-error), this isn't a serious problem. A cognitive development model good enough for you to reliably confirm that you've got language, empathy, abstract reasoning etc working correctly is going to be good enough for you to confirm that you've toned down, removed or context-limited inconvenient instincts. See Freefall for how this would work if actual engineers were in charge. Florence has 'safeguards' (compulsions not to harm humans), programmed responses to certain sounds and smells, compulsion to obey direct human orders etc. Not terribly ethical but what you'd expect in an engineered product.
Will dolphins have to use morse code? Will elephants have to write everything down with their trunks? Will chimps have to learn sign language? They are hugely burdened with simple communication!
Adding a larynx (or syrinx) capable of human-like speech, or manipulative thumbs for that matter, is almost certainly an order of magnitude easier than the cognitive enhancements. Brain and personality development is just so much more complicated.
Simulating the chemistry of DNA itself isn't actually all that hard, as it is fairly basic.
No, it is not 'basic'. It seems 'basic' to you because literally millions of engineers and scientists have spent the last fifty years making incredible progress in the field of computing, such that you can now get something 1000 times more powerful than a 1960s mainframe computer in your $100 cellphone. Plus of course solid state physics, quantum chemistry, biochemistry and a host of other fields. If you had asked a biochemist or an electronic engineer in 1950 if it would be 'easy' to solve protein folding by brute force simulation of a few trillion molecular interactions, they would have laughed in your face. Yet now it seems 'basic' to you, although in actual fact the design of software to do this is one of the most challenging areas of software engineering.
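A back-of-envelope sketch of why brute force scales so badly (an editor's toy illustration with a made-up placeholder potential, not real chemistry): evaluating every pairwise interaction costs N*(N-1)/2 evaluations per timestep, so a few trillion molecules means roughly 10^24 evaluations per step.

```python
import itertools
import math

def pair_energy(p, q):
    # Placeholder inverse-distance "potential"; a stand-in for
    # illustration, not a real force field.
    return 1.0 / math.dist(p, q)

def total_energy(particles):
    # Brute force: O(N^2) loop over all unordered pairs.
    return sum(pair_energy(p, q)
               for p, q in itertools.combinations(particles, 2))

particles = [(float(i), 0.0, 0.0) for i in range(1, 101)]  # only 100
n = len(particles)
print(n * (n - 1) // 2)  # 4950 pair evaluations for a mere 100 particles
```

Scale that quadratic growth up to trillions of particles over billions of timesteps and the 1950s-era incredulity looks entirely reasonable.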

I am sure that if you were around in 2200 or so you would be saying something like 'sure, bioengineering a new species from scratch isn't actually all that hard, but picotechnology, that's just ridiculous...'
User avatar
Serafina
Sith Acolyte
Posts: 5246
Joined: 2009-01-07 05:37pm
Location: Germany

Re: Transhumanism: is it viable?

Post by Serafina »

Akhlut wrote:The Mongols under Temujin/Chingghis Khan actually had a meritocratic society, and spread a fairly unified code of law throughout their empire. They were at least the equals of European and Asian states at the time, if not exceeding them in several areas. Religious freedom was allowed, and Christians, Buddhists, Muslims, and Tengrists all had equal rights and freedoms in the early Mongol Empire.
Well, those are good points to call their culture better in my book.
I admit that I do not know much about them, so my statement was just an - obviously wrong - guess.
UpliftedAnimalWanker wrote:Regarding humans uplifting animals: "For the hell of it" is actually a fairly good reason.
If you are a moron, it is.
When you are doing something "for the hell of it", you are doing it without considering the reasons - both the reasons to do it and the reasons not to do it. Which fits right into my complaint about the "dream-transhumanists" - they are into transhumanism because they dream of something cool, and transhumanism is their applied phlebotinum. Or in other words, their magic handwave.
By this point, no one will really have any jobs anymore; unskilled labor can be handled easily by zillions of nonsapient robots, and the price for skilled labor will drop precipitously following the development of economically-feasible human uploading, because you upload one expert and make zillions of copies of them.
I suppose you are one of those morons who doesn't see the social problems with that. Sure, they can be solved, but you probably never thought about that for a second.
Long story short, almost no one will have any jobs anymore (at which point post-scarcity economics have to start rolling in before people start rioting in the streets over being unable to afford food and rent), but some people will still want to contribute to society rather than spending all day plugged into VR systems or getting drunk and having sex.
Yes. Problem is, we are never going to have a post-scarcity society. Not a real one anyway, at best one where the ordinary human doesn't know it. But then you have a system that doesn't NEED the ordinary human, so why should there be such a thing?
Some people will become colonists; some people will become artists; some people will design new body modifications or uplift new species. Purely as hobbies, since at least that way they feel like they're contributing something.
Yet another part where you don't give a damn about morality, because handwave.
Akhlut: By this point it's likely that basically everyone has something similar to a 3G cell phone installed in their heads; an uplifted animal incapable of human speech like uplifted dolphins or octopi would likely be fully capable of communicating with the aid of these devices. Cybernetic implants would also give dolphins the mobility they'd probably want to properly interact with human society; cybernetic legs and arms to let them get around and manipulate things (possibly being both at once, depending on how the feet/hands are designed). Remote-controlled drones would also be a possibility.
And if you are at that point, what's the point of upgrading animals at all?


Conclusion:
Your point is "because it's cool". In other words, you have no point at all.
SoS:NBA GALE Force
"Destiny and fate are for those too weak to forge their own futures. Where we are 'supposed' to be is irrelevent." - Sir Nitram
"The world owes you nothing but painful lessons" - CaptainChewbacca
"The mark of the immature man is that he wants to die nobly for a cause, while the mark of a mature man is that he wants to live humbly for one." - Wilhelm Stekel
"In 1969 it was easier to send a man to the Moon than to have the public accept a homosexual" - Broomstick

Divine Administration - of Gods and Bureaucracy (Worm/Exalted)
LionElJonson
Padawan Learner
Posts: 287
Joined: 2010-07-14 10:55pm

Re: Transhumanism: is it viable?

Post by LionElJonson »

Serafina wrote:
UpliftedAnimalWanker wrote:Regarding humans uplifting animals: "For the hell of it" is actually a fairly good reason.
If you are a moron, it is.
When you are doing something "for the hell of it", you are doing it without considering the reasons - both the reasons to do it and the reasons not to do it. Which fits right into my complaint about the "dream-transhumanists" - they are into transhumanism because they dream of something cool, and transhumanism is their applied phlebotinum. Or in other words, their magic handwave.
Presumably, the people who made the software that'd make this possible would have thought about that, and engineered in safeguards to prevent blatantly unethical usage.
By this point, no one will really have any jobs anymore; unskilled labor can be handled easily by zillions of nonsapient robots, and the price for skilled labor will drop precipitously following the development of economically-feasible human uploading, because you upload one expert and make zillions of copies of them.
I suppose you are one of those morons who doesn't see the social problems with that. Sure, they can be solved, but you probably never thought about that for a second.
Uhh... I do see the social problems inherent in that. Note that in the next sentence I mentioned that post-scarcity economics would need to be implemented before people start rioting over being unable to afford food and rent. Hell, this entire paragraph was about the social ramifications of this.
Long story short, almost no one will have any jobs anymore (at which point post-scarcity economics have to start rolling in before people start rioting in the streets over being unable to afford food and rent), but some people will still want to contribute to society rather than spending all day plugged into VR systems or getting drunk and having sex.
Yes. Problem is, we are never going to have a post-scarcity society. Not a real one anyway, at best one where the ordinary human doesn't know it. But then you have a system that doesn't NEED the ordinary human, so why should there be such a thing?
Because politicians and corporate fat-cats don't want to be strung from the streetlights by the mobs raging over how they can't afford food anymore. They're not stupid; they'd remember the October Revolution. Alternately, they fail to do so in time to avoid the Revolution, and the Revolutionaries promptly do so, so that they're not the ones getting strung up next.
Some people will become colonists; some people will become artists; some people will design new body modifications or uplift new species. Purely as hobbies, since at least that way they feel like they're contributing something.
Yet another part where you don't give a damn about morality, because handwave.
I might not give a damn, but the people making the software to enable it probably would, and the uplifting community in general probably would; with no money, social status would really be the only thing you could gain, and making newbie mistakes or some gigantic breach of ethics would probably earn you a massive negative reputation.
Akhlut: By this point it's likely that basically everyone has something similar to a 3G cell phone installed in their heads; an uplifted animal incapable of human speech like uplifted dolphins or octopi would likely be fully capable of communicating with the aid of these devices. Cybernetic implants would also give dolphins the mobility they'd probably want to properly interact with human society; cybernetic legs and arms to let them get around and manipulate things (possibly being both at once, depending on how the feet/hands are designed). Remote-controlled drones would also be a possibility.
And if you are at that point, what's the point of upgrading animals at all?
Because it's cool, and you'd rather do something somewhat useful than spend all day every day partying or blissed out in a VR simulation.

Conclusion:
Your point is "because it's cool". In other words, you have no point at all.
My point is that in the society I was describing, "Because it's cool" is enough reason to do something like this in and of itself.
User avatar
Serafina
Sith Acolyte
Posts: 5246
Joined: 2009-01-07 05:37pm
Location: Germany

Re: Transhumanism: is it viable?

Post by Serafina »

Presumably, the people who made the software that'd make this possible would have thought about that, and engineered in safeguards to prevent blatantly unethical usage.
Yes, because that happens all the time in real life :roll:
Did I already say that you are naive? Well, you are.
Uhh... I do see the social problems inherent in that. Note that in the next sentence I mentioned that post-scarcity economics would need to be implemented before people start rioting over being unable to afford food and rent. Hell, this entire paragraph was about the social ramifications of this.
Even if you have post-scarcity economics (which is not given), then you STILL have a giant social problem:
People have nothing to do.
Given that our society is based largely on working for one's living, that IS a social problem. One that can probably be solved - but the fact that you do not see it just proves my point: you are a naive moron.
Because politicians and corporate fat-cats don't want to be strung from the streetlights by the mobs raging over how they can't afford food anymore. They're not stupid; they'd remember the October Revolution. Alternately, they fail to do so in time to avoid the Revolution, and the Revolutionaries promptly do so, so that they're not the ones getting strung up next.
That completely ignores my point.
I might not give a damn, but the people making the software to enable it probably would, and the uplifting community in general probably would; with no money, social status would really be the only thing you could gain, and making newbie mistakes or some gigantic breach of ethics would probably earn you a massive negative reputation.
So you admit that you do not care about morality at all.
I only hope that the people actually responsible for these technologies will.
Because it's cool, and you'd rather do something somewhat useful than spend all day every day partying or blissed out in a VR simulation.
Cool=/=useful.


You are exactly the sort of dream-transhumanists i was talking about.
LionElJonson
Padawan Learner
Posts: 287
Joined: 2010-07-14 10:55pm

Re: Transhumanism: is it viable?

Post by LionElJonson »

Oops. Meant the February Revolution; got them mixed up (since they're both part of the overall Russian Revolution), and now I can't edit. The one where the workers who were revolting were crying out "Bread! Bread!"
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Misconduct found in Harvard Animal Cognition Lab

Post by Starglider »

Alyrium Denryle wrote:Each command is just that. A single command, maybe you could have something referring back to that command as part of an algorithm. Am I more or less correct thus far?
Not exactly. Software engineering design practice strives to reuse code as much as possible, such that core functions in an OS, say, are used millions of times in millions of different ways by application code. Changing core functions in any system can have massive, diverse effects across the whole system. Random changes almost always render the system completely non-functional; a single-bit modification to your OS kernel is very likely to crash the computer. We accept this brittleness because we have tools and processes to deal with it, and it inherently makes it easier for humans to predict what the effects of changing code will be. Even the process for turning source code into running software is very convoluted these days, but we accept that because it is designed to have a human-understandable user interface.
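A tiny editor's illustration of that brittleness (the function names here are invented for the example): when many callers reuse one core function, a single small change to that function silently breaks every one of them at once.

```python
def core_compare(a, b):
    """Shared 'core' primitive; every caller below reuses it."""
    return a < b

def sort_pair(x, y, cmp=core_compare):
    # Reuse #1: order two values using the shared comparison.
    return (x, y) if cmp(x, y) else (y, x)

def clamp(v, lo, hi, cmp=core_compare):
    # Reuse #2: restrict v to [lo, hi] using the same comparison.
    if cmp(v, lo):
        return lo
    if cmp(hi, v):
        return hi
    return v

# One minimal change to the core function: < becomes >=.
def corrupted_compare(a, b):
    return a >= b

print(sort_pair(3, 1))                          # (1, 3) - correct
print(sort_pair(3, 1, cmp=corrupted_compare))   # (3, 1) - broken
print(clamp(5, 0, 10))                          # 5      - correct
print(clamp(5, 0, 10, cmp=corrupted_compare))   # 0      - broken
```

Both callers fail simultaneously from one localized change, which is the software analogue of the kernel bit-flip above; nothing degrades gracefully.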
Now take a biological system, just the genetic code. You have multiple reading frames (three), and multiple splice sites so that each gene can actually code for several different variations of a protein. Within a single cell, genes get turned on or off, or have their transcription rates modified by positive feedback loops, negative feedback loops and several different types of upstream (prior to the gene) and downstream (after the gene) regulatory controls, in addition to inhibition of translation and epigenetic modification of the histones which prohibits the binding of transcriptase... All of this in the case of a single cell is regulated by the myriad of gene products which are produced not only by that gene, but everything else in the genome as well as stimulus from the external environment. That is just one cell.
The amazing thing about genetics is that not only does it work, but random mutations do actually cause beneficial effects on organisms. Not very often, but often enough for adaptive radiation to take place at all. That would never work with conventional computer software; not just in the sense of requiring ridiculously large population sizes and generation counts - in many cases there would be no incremental paths. Of course biological organisms have a considerable number of 'meta-adaptations' specifically to make adaptation more effective, e.g. crossover and sexual reproduction in general. I am sure you are in a better position to appreciate the biology than me, but then I have direct experience of how difficult this is to replicate in computer software; the field of genetic algorithms started with people thinking it would be easy, just needing lots of brute force, and very quickly discovered otherwise.
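For readers who haven't met genetic algorithms: a minimal sketch (an editor's toy "OneMax" example, not anything from the thread) shows why the easy demos are easy. Every extra 1-bit raises fitness, so there is always a smooth incremental path - exactly the property that real design problems, and real code, usually lack.

```python
import random

random.seed(42)  # deterministic run for reproducibility

GENOME_LEN = 32
POP, GENS, MUT_RATE = 40, 60, 0.02

def fitness(genome):
    # "OneMax": count the 1-bits. Deliberately the friendliest possible
    # fitness landscape - every single-bit improvement is rewarded.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover, the classic 'meta-adaptation'.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUT_RATE else bit
            for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection: keep the best half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # converges to at or near the maximum of 32
```

Swap OneMax for a fitness function with no incremental path (say, one that scores 0 on everything except an exact target string) and the same loop gets nowhere, which is the point being made above.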

For designer organisms, we can either use an utterly obscene amount of computing power to simulate a whole planet's worth of natural selection (with humans inputting selection criteria and iterating until we get what we want), or we can use more intelligent and much less compute-intensive patchwork simulation to effectively tame the complexity and put a human-engineer-intuitive user interface on the biological system. If we were going completely de novo instead of trying to 'uplift' then we'd probably dump a lot of the expression network cruft and possibly junk DNA (not my field so I'm not sure exactly what you'd simplify), since we presumably don't need the engineered organism itself to be able to evolve over time.

Frankly if you just want furries wandering around it would be easier (and safer, and probably more ethical) to make androids and then put them in fursuits instead of synthetic skin, but we've already established that this isn't something you'd do for practical reasons.
LionElJonson
Padawan Learner
Posts: 287
Joined: 2010-07-14 10:55pm

Re: Transhumanism: is it viable?

Post by LionElJonson »

Serafina wrote:
Presumably, the people who made the software that'd make this possible would have thought about that, and engineered in safeguards to prevent blatantly unethical usage.
Yes, because that happens all the time in real life :roll:
Did I already say that you are naive? Well, you are.
Yes, because software engineers never design their software to minimize the possibility of idiot users fucking it up. :roll:
Uhh... I do see the social problems inherent in that. Note that in the next sentence I mentioned that post-scarcity economics would need to be implemented before people start rioting over being unable to afford food and rent. Hell, this entire paragraph was about the social ramifications of this.
Even if you have post-scarcity economics (which is not given), then you STILL have a giant social problem:
People have nothing to do.
Given that our society is based largely on working for one's living, that IS a social problem. One that can probably be solved - but the fact that you do not see it just proves my point: you are a naive moron.
Uhhh... I can see that; my entire post was discussing the ramifications of that. Are you stupid, or just deliberately ignoring that?
Because politicians and corporate fat-cats don't want to be strung from the streetlights by the mobs raging over how they can't afford food anymore. They're not stupid; they'd remember the October Revolution. Alternately, they fail to do so in time to avoid the Revolution, and the Revolutionaries promptly do so, so that they're not the ones getting strung up next.
That completely ignores my point.
That point being?
I might not give a damn, but the people making the software to enable it probably would, and the uplifting community in general probably would; with no money, social status would really be the only thing you could gain, and making newbie mistakes or some gigantic breach of ethics would probably earn you a massive negative reputation.
So you admit that you do not care about morality at all.
I only hope that the people actually responsible for these technologies will.
Did you even read past the sixth word in that sentence?
Because it's cool, and you'd rather do something somewhat useful than spend all day every day partying or blissed out in a VR simulation.
Cool=/=useful.
More useful than sitting on your ass plugged into a pleasure-switcher or going out partying all the time. At least now you're improving the quality of life for other people (since once they're uplifted, they would be people, and they would have a better quality of life than they would have had as nonsapient animals).
You are exactly the sort of dream-transhumanists i was talking about.
Forgive me for having optimism. I guess I'm just not cynical enough for your tastes.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Transhumanism: is it viable?

Post by Starglider »

LionElJonson wrote:Yes, because software engineers never design their software to minimize the possibility of idiot users fucking it up.
Programmers attempt to prevent functional failures, but it is currently impossible to address ethical issues, any more than it is possible to manufacture a gun that will only fire in self-defense. Even legal issues are very rarely addressed, and when they are it is always possible to get software that actively circumvents them (e.g. DRM).

Of course practically you'd need a team of experts and lots of expensive hardware, not someone in a basement downloading a copy of SpeciesCreatorXP. If we ever get to the point of the latter being plausible, society will face (and hopefully have solved) much, much more serious problems than unqualified people making new species for a laugh.
and they would have a better quality of life than they would have had as nonsapient animals
Really? Are you sure? In your future utopia, pets would presumably be perfectly healthy, provided with everything they need (physically or virtually) etc.
User avatar
Akhlut
Sith Devotee
Posts: 2660
Joined: 2005-09-06 02:23pm
Location: The Burger King Bathroom

Re: Transhumanism: is it viable?

Post by Akhlut »

Starglider wrote:
Akhlut wrote:How are they going to reconcile powerful instincts with a greater intelligence? A sapient lion is going to have to continually prevent himself from running after playing human children because they are engaging his prey drive. How is he going to feel about constantly wanting to kill and being unable to due to laws and, possibly, his own morality?
Frankly unless you do this in the most crude fashion possible (trial-and-error), this isn't a serious problem. A cognitive development model good enough for you to reliably confirm that you've got language, empathy, abstract reasoning etc working correctly is going to be good enough for you to confirm that you've toned down, removed or context-limited inconvenient instincts. See Freefall for how this would work if actual engineers were in charge. Florence has 'safeguards' (compulsions not to harm humans), programmed responses to certain sounds and smells, compulsion to obey direct human orders etc. Not terribly ethical but what you'd expect in an engineered product.
It should certainly be easier to just build a damn robot at this point, rather than engineering genetic codes to manipulate brain wiring and content. When a society can manage that level of genetic engineering, they should be far enough along that making such a creature is totally unnecessary (as if it were ever necessary at all), and, as you said, it brings up other, larger ethical issues. And, shit, if you're just going to remove all the instincts that make it a lion and make it more human-like, why not just make lion robots for humans to put their minds into for periods of time?
Will dolphins have to use morse code? Will elephants have to write everything down with their trunks? Will chimps have to learn sign language? They are hugely burdened with simple communication!
Adding a larynx (or syrinx) capable of human-like speech, or manipulative thumbs for that matter, is almost certainly an order of magnitude easier than the cognitive enhancements. Brain and personality development is just so much more complicated.
True, I should have thought of that.
SDNet: Unbelievable levels of pedantry that you can't find anywhere else on the Internet!
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Transhumanism: is it viable?

Post by Starglider »

Akhlut wrote:It should certainly be easier to just build a damn robot at this point, rather than engineering genetic codes to manipulate brain wiring and content.
Yep, Freefall is actually pretty hard sci-fi, and it's correct in its depiction of Florence's 'safeguards' as fuzzy and unreliable. Neural nets are hard to work with even when they're artificial ones; grown biological brains are never going to be as reliable at following commands as sensibly programmed robots.

Of course you could hybridise; you could genetically engineer an anthropomorphic lion body, but with a brain that is almost non-functional (just enough to keep the body alive). Then you could put an implant in it with the AI personality that you want. In Greg Egan's 'Schild's Ladder', this is the normal state for most of the human population, because a lot of people wanted to keep a human body but have the advantages of digital cognition.
And, shit, if you're just going to remove all the instincts that make it a lion and make it more human-like, why not just make lion robots for humans to put their minds into for periods of time?
Yes, this is much more sensible, though still completely whimsical, not something you'd do for a practical purpose. Apparently this is the kind of thing people do as a fashion fad in the Culture (from the Iain Banks Culture novels).
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

Starglider wrote:The dismissal of transhumanism in this thread has amounted to nothing more than 'it seems silly to me' aka argument from incredulity aka rampant idiocy.
A.K.A. Starglider can't read:
Alyrium Denryle wrote:Frankly, any Utopian idea is stupid. The idea that you will be able to create recursive friendly AI that won't, you know... evolve into something malevolent over time is stupid. The idea that we will one day live in a post scarcity society is fucking stupid. Even if we go out into space, extraction of energy and materials will be rate-limited. Particularly because, with no death, population will skyrocket. The whole notion is a stupid and idealistic fantasy.
Anguirus wrote:It's a little more specific than that, I think. It has something to do with an utter lack of respect for the concept that some problems are not easily solved no matter how "smart" you are due to their inherent complexity.
Anguirus wrote:My general impression is that at the very least, your timeframe is radically compressed. There is also a bad habit of conflating skepticism with small-mindedness and Luddism. On the contrary, I think that our development into immortal mechanical God-Kings would rock on toast. I have some questions, however, on practical grounds.
Alyrium Denryle wrote:Dont get me wrong. I would LOVE to have a spider mech body.
Seriously, Starglider. Do you shove your head up your ass as a hobby or something? You're pulling the exact same stunt lazerus thought he could get away with, and it's just as stupid as it was two pages ago.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Transhumanism: is it viable?

Post by Zixinus »

The other thing is that if you require hard-line safeguards, then the question must be asked as to why you are creating something that would require safeguards in the first place. Why are you creating something that has the capacity to do harm... "just because"?
Remember, an uplifted chimp may not only be dangerous to its creator but to everyone else.

So, "just because", you are creating something dangerous. That sounds like a very good "why not" reason to me.

In a post-scarcity society, surely there is something better to do? Make statues? Make robot tournaments? Go out camping? Dance? Learn traditional skills like leatherworking, wood-carving or pottery? Create virtual worlds? Help organize parades? Hell, keep a lot of pets and teach them tricks?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Transhumanism: is it viable?

Post by Starglider »

Formless wrote:A.K.A. Starglider can't read:
On the contrary, you're just reinforcing my point.
Alyrium Denryle wrote:Frankly, any Utopian idea is stupid.
Because he says so.
The idea that you will be able to create recursive friendly AI that won't, you know... evolve into something malevolent over time is stupid.
Because he says so, despite having zero qualification or experience in AI.
The idea that we will one day live in a post scarcity society is fucking stupid.
Semantic whoring; no one sane equates 'post scarcity' with 'infinite resources'.
It has something to do with an utter lack of respect for the concept that some problems are not easily solved no matter how "smart" you are due to their inherent complexity.
No examples given of course (other than the implicit one of the thread topic which has already been dealt with).
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Transhumanism: is it viable?

Post by Starglider »

Zixinus wrote:The other thing is that if you require hard-line safeguards, then the question must be asked as to why you are creating something that would require safeguards in the first place. Why are you creating something that has the capacity to do harm... "just because"?
There is a very strong correlation between power, utility and risk. Powerful technologies can do more good or harm. Intelligent servants in general are tremendously useful, but also extremely dangerous. Engineering as ever strives to minimise the risk while preserving the utility. Ethics usually don't come into it, although sometimes they should.
In a post-scarcity society, surely there is something better to do? Make statues? Make robot tournaments? Go out camping? Dance? Learn traditional skills like leatherworking, wood-carving or pottery? Create virtual worlds? Help organize parades? Hell, keep a lot of pets and teach them tricks?
What are you arguing here, that you would not do this, that he should not do this, or that no one would ever want to do this? Obviously you're correct on (a), (b) is subjective and I highly doubt (c), given all the crazy things humans already do.
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

Starglider wrote:
Formless wrote:A.K.A. Starglider can't read:
On the contrary, you're just reinforcing my point.
Alyrium Denryle wrote:Frankly, any Utopian idea is stupid.
Because he says so.
Actually, until someone DEMONSTRATES that the idea is feasible, it can be safely relegated to the stupidity bin. That's skepticism, Starglider, and exactly what Anguirus was talking about later.
The idea that you will be able to create recursive friendly AI that won't, you know... evolve into something malevolent over time is stupid.
Because he says so, despite having zero qualification or experience in AI.
Like you yourself haven't talked in detail about the Friendliness Problem in the past.

Oh, right, when you say it it's just peachy, but when a skeptic talks about THE EXACT SAME THING it's suddenly an appeal to incredulity. :roll:
The idea that we will one day live in a post scarcity society is fucking stupid.
Semantic whoring; no one sane equates 'post scarcity' with 'infinite resources'.
You wouldn't know it from the way people talk about post-scarcity. Or even from contemplating the meaning of the goddamn word itself.
It has something to do with an utter lack of respect for the concept that some problems are not easily solved no matter how "smart" you are due to their inherent complexity.
No examples given of course (other than the implicit one of the thread topic which has already been dealt with).
You want examples? Alright, how about nano-tech? I see people all the time talk about how it will revolutionize this or that without ever talking about the inherent complexity of engineering at that scale, or of getting nano-bots and the like to communicate (essential to many applications, like computing). Or how about the common meme often repeated about how arbitrarily smart, recursive AI (a concept you will notice has been brought up in this thread and disputed) will magically be able to solve all or even many of our current social, economic, and political problems (often coupled with the meme that humans as-is are somehow helpless to do anything about these problems right now)? Case in point, the exchange starting with this post between your fellow transhumanists cosmicalstorm and Singular Intellect.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
"I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Transhumanism: is it viable?

Post by Singular Intellect »

Starglider wrote:The dismissal of transhumanism in this thread has amounted to nothing more than 'it seems silly to me' aka argument from incredulity aka rampant idiocy.
Plus ignorance of the trend of technological progress in society. All too often even educated individuals don't see or comprehend the existing exponential growth phenomenon and instead can only project the future as a linear growth pattern.

Ray Kurzweil likes to point out this exact flawed mentality: he projected we'd decode the human genome in fifteen years, and ten years into the project he was being mocked because only 1% had been completed. Exponential growth wasn't factored in, and his prediction was actually correct.

Edit: I might be misquoting Ray Kurzweil; his example might have been fifteen years to sequence HIV, and then we sequenced SARS in thirty-one days. It effectively demonstrates what exponential progress accomplishes.
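The arithmetic behind that kind of mis-projection is easy to sketch. The numbers below are purely illustrative (not actual genome-project figures): from the same early data point, a linear extrapolation and an exponential one give wildly different completion estimates.

```python
import math

# Purely illustrative: a project is 1% complete after 10 years.
# Compare a linear projection with an exponential (doubling) one.

def linear_eta(pct_done, years_elapsed, target=100.0):
    """Years to finish if progress continues at the observed constant rate."""
    rate = pct_done / years_elapsed  # percent per year
    return target / rate

def exponential_eta(pct_done, years_elapsed, doubling_time=1.0, target=100.0):
    """Years to finish if progress doubles every `doubling_time` years."""
    doublings_needed = math.log2(target / pct_done)
    return years_elapsed + doublings_needed * doubling_time

print(linear_eta(1.0, 10))                 # 1000.0 years at a constant rate
print(round(exponential_eta(1.0, 10), 1))  # 16.6 years with annual doubling
```

The linear observer projects a millennium; the exponential observer projects completion in under seven more years, which is the gap Kurzweil's anecdote turns on.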
Last edited by Singular Intellect on 2010-08-14 03:24pm, edited 1 time in total.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

The problem, however, is that Ray Kurzweil isn't the Fucking Pope of Technology. Unless he can show evidence of his claims, his claims are of no more value than anyone else's claims.
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Transhumanism: is it viable?

Post by Singular Intellect »

Formless wrote:The problem, however, is that Ray Kurzweil isn't the Fucking Pope of Technology. Unless he can show evidence of his claims, his claims are of no more value than anyone else's claims.
There's quite a bit of evidence supporting his claims and a great number of his predictions based on the understanding of exponential growth/progress were validated. But I'm not interested in you taking my word for it or his; feel free to go research the material yourself. You can read up on him in Wikipedia for a start if you care to.
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

Singular Intellect wrote:
Formless wrote:The problem, however, is that Ray Kurzweil isn't the Fucking Pope of Technology. Unless he can show evidence of his claims, his claims are of no more value than anyone else's claims.
There's quite a bit of evidence supporting his claims and a great number of his predictions based on the understanding of exponential growth/progress were validated. But I'm not interested in you taking my word for it or his; feel free to go research the material yourself. You can read up on him in Wikipedia for a start if you care to.
You make the claims, you back them up. It is not the job of a skeptic to do your homework for you. If you really wanted to reduce ignorance in the world you would know this.

Edit: let me be more specific. Let's say we're talking about Moore's Law. Exponential growth, right? But then there is the law of diminishing returns to take into account: say, for example, you can only shrink transistors so much before you can't shrink them any further. It's an alternative hypothesis, and one you do not address.
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Transhumanism: is it viable?

Post by Singular Intellect »

Formless wrote:
Singular Intellect wrote:
Formless wrote:The problem, however, is that Ray Kurzweil isn't the Fucking Pope of Technology. Unless he can show evidence of his claims, his claims are of no more value than anyone else's claims.
There's quite a bit of evidence supporting his claims and a great number of his predictions based on the understanding of exponential growth/progress were validated. But I'm not interested in you taking my word for it or his; feel free to go research the material yourself. You can read up on him in Wikipedia for a start if you care to.
You make the claims, you back them up. It is not the job of a skeptic to do your homework for you. If you really wanted to reduce ignorance in the world you would know this.
Except I'm not making the claims, Ray Kurzweil is. I'm just not seeing any real flaws in them and all my digging hasn't revealed any serious problems. If you're able to find problems with his projections and evidence, I'd be delighted to check them out.

I don't consider him infallible or any kind of special authority, just a source of some interesting material to consider regarding the current path our society/technology is taking. Nor do I consider myself infallible or capable of spotting any and all errors, so if you can find some I'd love to see them.

I'm skeptical and reserved in judgement as well, but when he points out historically verifiable trends of exponential growth/progress and obvious problems with people's interpretation of things like Moore's Law, my response isn't going to be putting my fingers in my ears and humming to myself.
Edit: let me be more specific. Let's say we're talking about Moore's Law. Exponential growth, right? But then there is the law of diminishing returns to take into account: say, for example, you can only shrink transistors so much before you can't shrink them any further. It's an alternative hypothesis, and one you do not address.
You could only shrink vacuum tubes so far before you hit diminishing returns as well. The result was a complete change to a different type of technology: in that case, transistors came along and the exponential growth cycle continued. Moore's Law was coined for and applied to transistors, but that has no bearing on previous generations of exponential growth hitting obvious walls and bypassing them anyway. Your argument would be no different from someone in the past correctly pointing out that vacuum tubes can only be shrunk so far.
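The paradigm-jumping argument above can be sketched as stacked S-curves: each individual technology saturates at its own ceiling, but if successive paradigms keep arriving with higher ceilings, the composite capability keeps climbing roughly exponentially. The ceilings and arrival times below are made-up illustrative values, not real data about tubes or transistors.

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    # One technology paradigm: an S-curve that flattens out at its
    # ceiling (tubes stop shrinking, then transistors stop shrinking...).
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def composite_capability(t, paradigms):
    # Overall capability is the sum of every paradigm's S-curve.
    return sum(logistic(t, mid, cap) for mid, cap in paradigms)

# Made-up paradigms: each arrives a decade later with ~10x the ceiling.
paradigms = [(10, 1), (20, 10), (30, 100), (40, 1000)]

for year in (10, 20, 30, 40):
    # Composite grows roughly 10x per decade even though each
    # individual paradigm has long since saturated.
    print(year, round(composite_capability(year, paradigms), 2))
```

Whether real successor paradigms keep arriving on schedule is exactly the point under dispute in this exchange; the model only shows that saturation of one curve doesn't by itself end the trend.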
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Transhumanism: is it viable?

Post by Modax »

Starglider wrote:
It has something to do with an utter lack of respect for the concept that some problems are not easily solved no matter how "smart" you are due to their inherent complexity.
No examples given of course (other than the implicit one of the thread topic which has already been dealt with).
Is it possible that there is no computable algorithm for recursive self-improvement for an AGI? That seed AI is infeasible on any Turing machine?
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

Singular Intellect wrote:Except I'm not making the claims, Ray Kurzweil is. I'm just not seeing any real flaws in them and all my digging hasn't revealed any serious problems. If you're able to find problems with his projections and evidence, I'd be delighted to check them out.
Well then, what specifically is he claiming? That technology in general is growing at an exponential rate? Well sure, in some sectors certainly, but at the cost of increasing environmental damage and resource depletion: limiting factors that can slow growth in a measurable fashion. If we really want to look at history, we should look at all of history; and doing so we see multiple examples of whole civilizations, even relatively technologically advanced ones, falling prey to simple issues like that (the Romans, the Maya, etc.).
You could only shrink vacuum tubes so far before you hit diminishing returns as well. The result was a complete change to a different type of technology: in that case, transistors came along and the exponential growth cycle continued. Moore's Law was coined for and applied to transistors, but that has no bearing on previous generations of exponential growth hitting obvious walls and bypassing them anyway. Your argument would be no different from someone in the past correctly pointing out that vacuum tubes can only be shrunk so far.
Then the question becomes "can you find a viable alternative technology to replace the old technology?" If you can't, you can't continue exponential growth. The limiting factors are still there, new technology only postpones when we will hit them.
User avatar
lazerus
The Fuzzy Doom
Posts: 3068
Joined: 2003-08-23 12:49am

Re: Transhumanism: is it viable?

Post by lazerus »

I'm a bit of a pessimist, so as much as I like Kurzweil and think his logic is valid, I'm more skeptical about his ideas. But I think the point the debate in this thread is missing is how even trivial changes to the human condition can spark widespread societal change. To give a very basic example, think of how the technology to eliminate genetic disease by screening eggs before birth would change the world: a drastic reduction in, or elimination of, the number of people born blind or with degenerative disorders. Combined with other medical technology, that's heading towards a society with almost no enfeebled people except the extremely elderly. That's an event with drastic social significance, particularly with movements such as the "deaf community" that might try to abstain from its use. Transhumanism doesn't require massive technological leaps to be an important and worthy idea or ideology; any suitable technology has the power to be important.

As for the assertion that "Any Utopian Ideal is Stupid" -- let me ask you this. If you had to choose between living in the 10th century (as a functional member of that society, insofar as your world view or health issues will let you function), and sawing your left hand off with a hacksaw, which would you choose?

If you even have to stop and think about it, you've demonstrated my point. Society does vastly improve over time, largely as a result of technology. Our standards change so it never seems to us like we live in a perfect world, but that does not mean improvement isn't occurring.

You may point out that that was 900 years ago, but that means it's only a question of the rate of technological advance, not the fundamental concept.
3D Printed Custom Miniatures! Check it out: http://www.kickstarter.com/projects/pro ... miniatures
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Transhumanism: is it viable?

Post by Formless »

lazerus wrote:As for the assertion that "Any Utopian Ideal is Stupid" -- let me ask you this. If you had to choose between living in the 10th century (as a functional member of that society, insofar as your world view or health issues will let you function), and sawing your left hand off with a hacksaw, which would you choose?

If you even have to stop and think about it, you've demonstrated my point. Society does vastly improve over time, largely as a result of technology. Our standards change so it never seems to us like we live in a perfect world, but that does not mean improvement isn't occurring.
People in the tenth century didn't have to worry about pandemic diseases, nuclear warfare, or anthropogenic climate change. We do improve, but the problem is we can't expect perfection. Every social/technological change brings with it new problems unforeseen by the previous generation, even as it solves the issues that generation took for granted.

Edit: oh, and there is also the problem of over-population. That one almost certainly wasn't an issue for tenth century people.
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: Transhumanism: is it viable?

Post by Junghalli »

Anguirus wrote:This would be well beyond the scope of this thread, but I would be interested in seeing a bonafide, dedicated definition of transhumanism, and then a defense of that ideology.
Personally I would define transhumanism as the idea that transcending the fundamental limitations of humanity through technology is both possible and desirable.

As far as a defense of that goes, I'll just quote something I found on the internet once:

"To all those who dismiss transhumanists and singularity enthusiasts as "rapture nerds" and other derogatory comparisons, consider this: Strip away all the dross, all the fiction, all the buzzwords, the 'nano'-this and the 'matrioshka'-that and you are left with this simple truth: humanity is not the pinnacle of creation."
Starglider wrote:On the contrary, you're just reinforcing my point.
<snip>
Starglider, could you respond to his point here:
Alyrium Denryle wrote:The problem is not with our programming of a friendly AI. It is with the building of subsequent machines. First off, the machines will not be smarter than we are. It will have a faster CPU, the two are not the same thing. We have to program this thing to be able to do things like perform higher math and mimic having an actual biological intelligence. It will be limited in its ability to do these things by the human programmer who initially creates it and as a result will be limited in its understanding of higher math to the degree that the programmer understood higher math. In turn, it cannot subsequently program a machine that is smarter than it unless it uses some sort of genetic algorithm that creates random mutations in code and then selects for those machines which get built that display higher cognitive capacity. These same mutations however can create a non-friendly AI. As a result malevolent AIs could well (and eventually will, simply due to mutation and drift) evolve out of your recursively more intelligent AIs due to a loss-of-function mutation in the Friendly code.
One workaround I see is to have large numbers of AIs and count on some degree of goal system stability (which we know is possible because humans have it) to deal with the occasional deviants, but I'd love to hear a response from somebody who actually knows something about the science of AI.
Post Reply