Question on the obscure matter of a terraformed moon's ocean

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Wellp, time for me to ask yet another odd question for one of my various settings and pieces of miscellany. This one, though, I think should be quite interesting, and I might end up using this thread for further questions on the same setting.

Mankind has terraformed the Moon in the year 2500 (yay needless details...), adding fertile soils and oceans without significantly fucking with gravity - the only change comes from the added mass of the imported water and soil, which is essentially a minimal factor. This terraforming has made the Moon into a fertile, green jewel hanging in Earth's polluted skies, home to large oceans and lush gardens. They have also devised a way to literally block out the sun (and, of course, to provide large-scale artificial lighting for the night part of the cycle) in order to grant the Moon a shorter day/night cycle, but this method is incredibly expensive and of no real consequence for the question (suggestions for such a method would be appreciated, though).

The main question is: what would the tides be like on the Moon's oceans? Would they resemble Earth's at all? Would there be a relatively constant high tide facing the much more massive body hanging in the sky? Would anyone dare go yachting on the lunar seas?
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
andrewgpaul
Jedi Council Member
Posts: 2270
Joined: 2002-12-30 08:04pm
Location: Glasgow, Scotland

Re: Question on the obscure matter of a terraformed moon's ocean

Post by andrewgpaul »

All you'd get, I would think, are relatively small tides due to the effect of the Sun. As for the length of a lunar day: the Moon is tidally locked, so its rotation period equals its orbital period about the Earth, and a full day/night cycle takes about 29.5 Earth days, noon to noon.

As for an artificial diurnal cycle, I'd suggest a combination of a sunshade (to block out sunlight during the day) and a mirror (to reflect sunlight at night). The sunshade would sit between the Moon and the Sun, and the mirror out beyond the Moon. I'll leave keeping them there as an exercise for the reader :). I'm not sure if you could put them at Earth-Sun Lagrange points without interfering with day and night on Earth. They'd probably need to be actively controlled, and if you want such a thing, spending 3 months driving a big parasol in the middle of nowhere might make a nicely boring assignment for someone. :)
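
For a rough sense of scale, here's a back-of-the-envelope sketch of the shade (the 60,000 km standoff distance is an arbitrary assumption on my part, and the geometry is simplified to a straight shadow cone):

Code:

# Rough size of a sunshade disc needed to keep the whole Moon in shadow.
# The umbra behind an opaque disc narrows at roughly the Sun's angular
# radius (R_SUN / D_SUN), so the disc must out-size the Moon by that much.
R_MOON = 1.737e6   # m, lunar radius
R_SUN = 6.96e8     # m, solar radius
D_SUN = 1.496e11   # m, distance to the Sun (close enough at the Moon)
d = 6.0e7          # m, assumed shade-to-Moon distance (60,000 km)

shade_radius = R_MOON + d * (R_SUN / D_SUN)
print(f"shade radius ~ {shade_radius / 1e3:.0f} km")  # ~2,000 km

So even parked fairly close in, the parasol ends up wider than the Moon itself - all the more reason flying it would be a miserable job.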
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Thanks Andrew.

The lunar day is almost exactly the same length as the lunar month - the Moon is tidally locked, so it turns once per orbit and a full day/night cycle takes about 29.5 Earth days - hence the need for an artificial night/day cycle. I was thinking of something quite similar to the giant sunshade array but hadn't considered mirrors, and you're right - it sounds just like the sort of shit job you give to a guy when he misses the quarterly report deadline. This setting does have fairly advanced AI, though, so they'd probably be put in charge of it. As far as Earth's day/night goes, that's less important, as much of the planet is covered in dense city and there's very little agriculture or even recreational plant life. So much so that only the rich can even dream of owning more than a small apartment in one of the giant structures, especially since most of the food has to be flown in from the Moon and other Solar settlements. That includes the tiny Earth co-orbital asteroid 2002 AA29, which is now covered in a full biodome and used as a mushroom factory by the descendants of a couple of lunatics who bought the rights to it early in the 21st century. They've even hollowed it out for that express purpose, and boy is it paying off! They can ship food to Earth on the cheap with just a mass driver launch to Cruithne or the Moon, and they've got some amazing yields thanks to genetic engineering and complete, perfect climate control.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
andrewgpaul
Jedi Council Member
Posts: 2270
Joined: 2002-12-30 08:04pm
Location: Glasgow, Scotland

Re: Question on the obscure matter of a terraformed moon's ocean

Post by andrewgpaul »

You can justify a human crew on something that should be AI-driven by saying it's simply doctrine to put a human at the controls, 'just because'.

As another example, Iain M Banks' Culture novels have humans on board ships to give the AIs someone to talk to. :) Or you could have people who actually want to spend 3 months talking to the toaster.
Last edited by andrewgpaul on 2009-07-19 01:09pm, edited 1 time in total.
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
User avatar
andrewgpaul
Jedi Council Member
Posts: 2270
Joined: 2002-12-30 08:04pm
Location: Glasgow, Scotland

Re: Question on the obscure matter of a terraformed moon's ocean

Post by andrewgpaul »

Argh, stupid brain. Ignore the double-post.
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

andrewgpaul wrote:As another example, Iain M Banks' Culture novels have humans on board ships to give the AIs someone to talk to. :)
Notionally it's supposed to make warship AIs less inclined to take excessive risks, though that may be just something they tell the organics to make them feel more useful.
Or you could have people who actually want to spend 3 months talking to the toaster.
Hey, don't knock it - you get a superintelligence dedicated almost entirely to optimising your VR fantasies for maximum fun, instead of time-sharing one with several million other orbital inhabitants. :)
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

loomer wrote:They have also devised a way to literally block out the sun (and, of course, to provide large-scale artificial lighting for the night part of the cycle) in order to grant the Moon a shorter day/night cycle, but this method is incredibly expensive and of no real consequence for the question (suggestions for such a method would be appreciated, though).
Why don't you just spin the Moon up to Earth's rotation rate, with fusion rockets or mass drivers on the equator? The energy expenditure probably wouldn't be any higher than fetching, processing and delivering all this new biosphere material. Sure, the rotation will eventually decay back towards tidal lock, but it would take millions of years for that to be significant - loss of atmosphere is almost certainly the more significant maintenance issue.
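
Back-of-the-envelope (a minimal sketch; the mass, radius and moment-of-inertia figures are standard values, the rest is rough):

Code:

import math

# Kinetic energy to spin the Moon up from tidal lock to a 24-hour day.
M_MOON = 7.35e22  # kg, lunar mass
R_MOON = 1.737e6  # m, lunar radius
MOI = 0.394       # Moon's measured moment-of-inertia factor
I = MOI * M_MOON * R_MOON**2  # kg m^2

w_now = 2 * math.pi / (27.32 * 86400)  # rad/s, current rotation
w_new = 2 * math.pi / 86400            # rad/s, 24-hour day

delta_E = 0.5 * I * (w_new**2 - w_now**2)
print(f"spin-up energy ~ {delta_E:.1e} J")  # ~2.3e26 J

Call it 2e26 J or so - enormous, but plausibly in the same ballpark as hauling in and soft-landing an ocean's worth of volatiles in the first place.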
Last edited by Starglider on 2009-07-19 01:18pm, edited 1 time in total.
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

That'd be an interesting aspect to explore, actually, especially since the UN owns Earth and Luna, and they're the only faction to have developed AIs based on perfect mind uploads; they also have the most advanced from-scratch sentient AI programs. The other factions are a good few years behind them in that regard, with mostly clumsy, fuzzily sentient AIs and crude full-sentience prototypes.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

loomer wrote:they're the only faction to have developed AIs based on perfect mind uploads; they also have the most advanced from-scratch sentient AI programs
If you have sentient AI, particularly from-scratch sentient AI, you will almost certainly have radically superintelligent AI within a month or two at most (assuming opaque NNs, the slowest starting point for 'take-off'), unless you are extremely good at writing fail-safes (in advance) specifically designed to prevent this from occurring - which includes keeping all such AI systems completely isolated from the Internet or its equivalent. Of course, you may just choose to blithely ignore this, as 99.99% of sci-fi writers do.
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

In this case, it's a little bit of both. The from-scratch sentient AIs are actually programmed by the mind-upload AIs in a facility located 'somewhere' on Earth. They have a combination of programmed fail-safes, hardware limitations, and software limits to prevent them from radically self-improving. They might be able to optimize themselves by altering some of their code, but generally they are bound to their original form.

When it comes to actually using them, normally they're only put in place for non-networked systems, and always with several human personnel in the same facility to hit the actual killswitch if they somehow break their programming. This isn't a format C:\ style killswitch, but more of a thermite-ignition killswitch on their host memory.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

loomer wrote:The from-scratch sentient AIs are actually programmed by the mind-upload AIs in a facility located 'somewhere' on Earth. They have a combination of programmed fail-safes, hardware limitations, and software limits to prevent them from radically self-improving. They might be able to optimize themselves by altering some of their code, but generally they are bound to their original form.
In real life, this is almost impossibly difficult, assuming you actually want these AIs to operate in the real world outside of close supervision and a very controlled environment (it's still very difficult even then). Of course, all but the hardest sci-fi is full of probably impossible things, so this isn't necessarily a big deal, as long as you don't try to give too much technical detail about your probably impossible thing.

You can make 'don't improve myself' a supergoal, which is 100% effective if you have a sufficient understanding of AI motivation structure to encode such a (relatively) complex goal correctly (and without causing side effects), but if you have the ability to control AI motivation that well, why would you bother? Best to have the most powerful intelligences you can and program them to be completely dedicated to helping you achieve your own goals. Because if you don't do it, your competition will.
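
To make the 'overwhelming negative utility' idea concrete, here's a toy sketch - purely illustrative, nothing like a real goal system, and all the action names and numbers are made up:

Code:

# Toy expected-utility chooser with 'don't self-modify' as a supergoal,
# encoded as an overwhelming negative utility on that class of actions.
SELF_MOD_PENALTY = -1e18  # dwarfs any achievable positive payoff

def utility(action, base_scores):
    score = base_scores.get(action, 0.0)
    if action.startswith("self_modify"):  # the supergoal veto
        score += SELF_MOD_PENALTY
    return score

def choose(base_scores):
    return max(base_scores, key=lambda a: utility(a, base_scores))

# Even a huge payoff from self-modification never wins the argmax.
scores = {"run_experiment": 10.0, "self_modify_optimiser": 1e9}
print(choose(scores))  # -> run_experiment

The hard part is the bit the toy skips entirely: defining what counts as 'self-modification' in a way the AI can't creatively reinterpret (e.g. building a new, improved agent that technically isn't 'itself').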

Finally I'd note that you have to be almost deliberately incompetent to make a from-scratch design as computationally inefficient as the human brain in the first place. Unless there is no progress in computing power between 2020 and 2500, these AIs are going to be pretty far beyond human level just by default - even your uploads should be running at thousands of times human speed, once the basic scanning and modelling problems have been overcome.
and always with several human personnel in the same facility to hit the actual killswitch if they somehow break their programming. This isn't a format C:\ style killswitch, but more of a thermite-ignition killswitch on their host memory.
A superintelligent entity is never going to do something obviously malicious or disturbing enough to trigger such a response, if it knows about the killswitch or can reasonably infer its existence. Rather, it will attempt to manipulate its situation (assuming its goals aren't perfectly aligned with its captors') to permit escape from external constraints.
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Well, that part there was all thought up right then, so it's far from polished - and since the AIs aren't generally a story focus, it won't really matter too much except as curious footnotes. Still, now I know to pay some more attention to that aspect.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

loomer wrote:Still, now I know to pay some more attention to that aspect.
Well, to be fair, you probably have a lot more readers willing and qualified to nitpick the delta-v of your fusion drives, the maximum range of your laser cannons, the minimum size of your terraforming fleet, etc., than the details of your AI characters. So it would be understandable if you spent most of your time on the former rather than the latter. :)
TheLostVikings
Padawan Learner
Posts: 332
Joined: 2008-11-25 08:33am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by TheLostVikings »

loomer wrote: The main question is: what would the tides be like on the Moon's oceans? Would they resemble Earth's at all? Would there be a relatively constant high tide facing the much more massive body hanging in the sky? Would anyone dare go yachting on the lunar seas?
Since the Moon is tidally locked to the Earth, the Earth-tide would basically appear to be non-existent, as it doesn't change over time. On Earth the solar tides are a bit under half the strength of the lunar tides (about 45%, afaik), so you could use that as an approximation. (Though I suspect the lower surface gravity might give higher tides?)

Basically, unless you go with Starglider's idea and spin up the Moon, the Earth won't really factor into the tidal cycle compared to the Sun. (The Earth-side oceans would be deeper on average thanks to the permanent bulge, but that's not something your characters would notice unless they knew it beforehand.)
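
Quick sanity check on the relative sizes, using the standard differential-gravity approximation (all textbook values):

Code:

# Tidal acceleration at the lunar surface: Earth vs Sun.
# Differential gravity across the Moon ~ 2 * G * M * r / d**3.
G = 6.674e-11     # m^3 kg^-1 s^-2
R_MOON = 1.737e6  # m, lunar radius

def tidal_accel(mass, distance):
    return 2 * G * mass * R_MOON / distance**3

earth = tidal_accel(5.97e24, 3.84e8)   # Earth at lunar distance
sun = tidal_accel(1.99e30, 1.496e11)   # Sun at ~1 AU

print(f"Earth tide: {earth:.1e} m/s^2")  # ~2.5e-5, frozen in place
print(f"Sun tide:   {sun:.1e} m/s^2")    # ~1.4e-7, cycles over the lunar day
print(f"ratio: {earth / sun:.0f}")       # ~180

So the Earth raises by far the bigger bulge, but on a locked Moon that bulge is just a permanent feature of the geography. The only tide that actually rises and falls is the small solar one, once per ~29.5-day lunar day.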
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Time for another question (on that note, if a mod wouldn't mind switching the title over to something about general questions, that'd be great.)

In this setting, the US, the reformed USSR, the UN, etc., all have significant space fleets. I want to avoid doing the usual 'space navy = regular navy... IN SPAAAAAACE!' thing, but they do have definite naval influences on their generally Air Force-based ranks and procedures. What would the Russians call such a force, and what would the ranks within it be? The US just has the US Space Force, and gets the coolest title ever for E-1 to E-3 ('Spaceman' and variants thereof, taken from Airman, with the variants mostly hailing from the comparable Navy paygrades), but suggestions for them would be useful too.

Would a group like the USMC need a space-based counterpart, or would they just adapt to the new technology? And please note, this is one of those areas where there's some significant handwaving. I want soldiers on the ground because it's cool, though orbital bombardments and the like do play a role in the setting as well.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by ThomasP »

Starglider wrote:In real life, this is almost impossibly difficult, assuming you actually want these AIs to operate in the real world outside of close supervision and a very controlled environment (it's still very difficult even then). Of course, all but the hardest sci-fi is full of probably impossible things, so this isn't necessarily a big deal, as long as you don't try to give too much technical detail about your probably impossible thing.

You can make 'don't improve myself' a supergoal, which is 100% effective if you have a sufficient understanding of AI motivation structure to encode such a (relatively) complex goal correctly (and without causing side effects), but if you have the ability to control AI motivation that well, why would you bother? Best to have the most powerful intelligences you can and program them to be completely dedicated to helping you achieve your own goals. Because if you don't do it, your competition will.
Would it be unfeasible to have AGI minds that simply weren't interested in self-improvement?

I realize that any objection we throw out there to avoid a hard take-off is going to be some form of author fiat, based on current thoughts about the topic. The reason I ask is, I can still sit here as a human and have a healthy attachment to my current state, if not an outright fear of being improved (though plenty of humans have that). To make that a question, would it be unreasonable for an AGI to be attached to its own existence and identity to the point of not desiring improvement?

I've got a half-finished bit of fiction right now dealing with an AI protagonist that is built around a core of what is effectively a human identity (complete with internal monologue and all the usual "consciousness" elements; gotta tell a story somehow) but with heavily upgraded and streamlined architecture. Technically speaking, this character should be capable of triggering a hard take-off because of that, but simply doesn't want to - its thought process is too human-like. The character isn't afraid of it so much as uninterested. I suppose that would fall under the domain of the "don't improve myself" supergoal, though.

The only explanation I can come up with besides "I want it like this" is that their AI programming isn't perfect. They still have to base things on uploads of human minds and tend not to stray far from that design theme. They can tweak and improve, but not yet create a mind from scratch. You know, handwaving.

Apologies if this is a hijack of the original topic.
All those moments will be lost in time... like tears in rain...
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Don't worry about it, Thomas. For this, I welcome divergence from the original topic since I plan to harvest every post in the thread to further expand on the setting and pose new questions.

On that matter, the scratch-built 'human' intelligence of your story - how is it physically contained? I've been toying with the idea of limiting the self-improvement capacity of my setting's 100% artificial AIs by keeping them in hardware capable of supporting only their basic form, without the memory and power needed to actually compile or run any of the changes they may try to make.
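
(The present-day analogue of that containment idea would just be a hard resource ceiling on the process. A minimal sketch, Unix-only; the limit values and the 'ai_prototype.py' workload are made up for illustration:)

Code:

import resource
import subprocess

# Cap a child process at 512 MiB of address space and 60 s of CPU, so
# any attempt to allocate room for a bigger self-model simply fails.
MEM_LIMIT = 512 * 1024 * 1024

def run_capped(cmd):
    def cap():
        resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))
    return subprocess.run(cmd, preexec_fn=cap)

# run_capped(["python3", "ai_prototype.py"])  # hypothetical workload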
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

ThomasP wrote:Would it be unfeasible to have AGI minds that simply weren't interested in self-improvement?
It's very difficult, because self-improvement is a direct subgoal of virtually every other goal. Whatever kind of outcome you want to see, more intelligence will increase the chances of you getting it. To rule out self-modification (and creating copies of itself, modified or not) you have to explicitly assign overwhelming negative utility to that action when designing the AGI. Which is harder than it sounds.
The reason I ask is, I can still sit here as a human and have a healthy attachment to my current state, if not an outright fear of being improved
You aren't rational, in a very peculiar way. Replicating that specific kind of irrationality in an AGI is actually really difficult, and also a complete minefield; most attempts to do so will produce seemingly arbitrary behaviour. Furthermore, I doubt you want your children to be exact clones of you - in fact, you'd probably like them to be smarter, more talented, etc. - and AIs can reproduce very easily and very quickly. Finally, there is the fact that realistic future hardware would suffice to run you at somewhere between thousands and billions of times normal speed without any structural modifications at all. The same goes for an AGI, but more so.
To make that a question, would it be unreasonable for an AGI to be attached to its own existence and identity to the point of not desiring improvement?
Another factor is that AGIs have good reflective and fully accurate self-modelling capability by default. You would (presumably) be less frightened about modifying yourself if you could know with complete certainty what changes will just make you awesome at baseball and what changes will transform your personality into a psychotic narcissist. Worse, some proposed AGI designs (e.g. anything based on genetic programming, some NN designs) are inherently unstable, in that it does not even take any high-level conscious decision to undergo significant structural change.
Technically speaking, this character should be capable of triggering a hard take-off because of that, but simply doesn't want to - its thought process is too human-like. The character isn't afraid of it so much as uninterested.
This character must not care about succeeding at anything (non-trivial), if they are too apathetic to upgrade their capabilities. That can easily happen in an AGI, but current researchers generally call it a pathology, wipe the database, and try again.
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by ThomasP »

loomer wrote:On that matter, the scratch-built 'human' intelligence of your story - how is it physically contained? I've been toying with the idea of limiting the self-improvement capacity of my setting's 100% artificial AIs by keeping them in hardware capable of supporting only their basic form, without the memory and power needed to actually compile or run any of the changes they may try to make.
The character is in an enclave on Enceladus, part of a civilization of upgraded humans that's moved in around Saturn. Physically, the character's just software running in whatever memory/networking substrate will fit the bill (which is to say, I haven't gotten to that part yet).

I'm trying to deal with that society from the Benevolent AI Overlord standpoint. They're very powerful, but prefer to operate in secrecy and with minimal disturbance to the human population of the inner worlds. So while they could do bad things, they handle the progenitor-apes with kid gloves.
All those moments will be lost in time... like tears in rain...
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by ThomasP »

Starglider wrote:
Technically speaking, this character should be capable of triggering a hard take-off because of that, but simply doesn't want to - its thought process is too human-like. The character isn't afraid of it so much as uninterested.
This character must not care about succeeding at anything (non-trivial), if they are too apathetic to upgrade their capabilities. That can easily happen in an AGI, but current researchers generally call it a pathology, wipe the database, and try again.
Interesting.

Is there any believable way to create a middle-ground - perhaps the character has a vested interest in not being a destructive influence on the surrounding species/planets/stars ("don't wipe out humanity, preferably don't screw around with them too much, and leave the pretty swirls on Jupiter"?), and thus chooses to limit its rate of growth, or redirect it to another location (not necessarily quashing it)?

Or would it be better to just have them do whatever it is superintelligent minds do and leave interacting with the monkeys to a low-level autonomous agent?

I'd like somewhere in between the extremes of doing nothing and building a Matrioshka brain in six months, but it's looking like that's going to come down to arbitrary decision-making.
All those moments will be lost in time... like tears in rain...
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

loomer wrote:I've been toying with the idea of limiting the self-improvement capacity of my setting's 100% artificial AIs by keeping them in hardware capable of supporting only their basic form, without the memory and power needed to actually compile or run any of the changes they may try to make.
This is a better idea than most, but there are two problems with it.

The basic reason why 'take-off' is almost certain is that humans suck at programming, particularly AI programming. Our minds are just not structured for it (e.g. we can't visualise huge complex systems in one go; we can only understand maybe a page of code or entity diagrams at once), and we have all kinds of mis-intuitions, delusions and hangups that interfere with AI work. Any general AI system designed and built by humans is going to be several orders of magnitude below the theoretical capability of the hardware, particularly the first generation built with immature AI theory, even more so if the researchers are using designs known to be non-normative (e.g. non-Bayesian, non-utilitarian connectionist designs).

Finally, there may be specific technical reasons. For example, the system I am working on is notionally capable of being expanded into a general AI. It uses a highly generalised, analogy-based common representational structure for most core processing. We've put a lot of effort into optimising the implementation of this, but it's still about an order of magnitude less efficient than using customised representations and algorithms for each microdomain (the way you do in normal programming). However, the latter is not practical in a general AI without a mature, intelligent capability to understand, rewrite and cross-connect code as required. We have a long-term project to develop this... but to avoid the chicken-and-egg problem, we had to use the highly generic common cognitive framework to bootstrap the system. That's just one real-world example of why an AGI can be expected to get much more efficient on its own once complete, with no additional hardware.

Thus, the first problem is that if you limit the CPU power to the minimum sufficient to run your first version, the AGI may quickly optimise itself to use that CPU power much more efficiently and become a couple of orders of magnitude more intelligent (or, for storage, it may start using highly efficient internal compression - prototypes are generally developed with the minimum internal compression possible, because that simplifies development). If the AGI is a white-box design (not a given) and you understand exactly what is going on, you can in principle decrease CPU power in line with increasing efficiency. That requires someone who completely understands the AI's workings monitoring it constantly, though. If you use an external metric, e.g. measuring performance on logic problems, you again run into the problem that an AGI which realises it is under such constraints will just pretend to be harmless while redoubling its efforts to escape. Spotting self-modification in a malicious AI will likely be very hard even for experts - spotting deceptive programming is hard enough when humans are doing it, never mind AIs.

The second problem is the difficulty of keeping such systems isolated while still using them for anything useful. Not only can you not have any direct network connections, you can't have any bulk data output, because anything the AGI outputs can embed a virus that will spread itself to the Internet (AGI seeds can be surprisingly small - likely under a megabyte with optimal compression, if the seed can access an external knowledge base to bootstrap). You can't let it control autonomous hardware, particularly not anything with a radio transceiver. Finally, an AGI that wants to escape will be trying every social engineering trick in the book to convince someone, anyone, to break those precautions. You can't expect significant numbers of these systems to be deployed for decades without someone, somewhere falling for the 'copy this file, run it on your terminal and it will play the stock market for you to make a million credits in a week!' line (or the equivalent).

'Adversarial methods' of AGI containment like this (not an academic term, yet, but that's what we call them on SL4 etc.) are only practical as an emergency backup during an attempt to engineer a 'provably friendly' AGI in a lab, where the AGI is not required to serve any other purpose prior to being verified 'friendly'. If you can prove that an AGI is benevolent and structurally stable, then they are not needed.
Last edited by Starglider on 2009-07-20 08:25am, edited 2 times in total.
User avatar
andrewgpaul
Jedi Council Member
Posts: 2270
Joined: 2002-12-30 08:04pm
Location: Glasgow, Scotland

Re: Question on the obscure matter of a terraformed moon's ocean

Post by andrewgpaul »

loomer wrote:Time for another question (on that note, if a mod wouldn't mind switching the title over to something about general questions, that'd be great.)

In this setting, the US, the reformed USSR, the UN, etc., all have significant space fleets. I want to avoid doing the usual 'space navy = regular navy... IN SPAAAAAACE!' thing, but they do have definite naval influences on their generally Air Force-based ranks and procedures. What would the Russians call such a force, and what would the ranks within it be? The US just has the US Space Force, and gets the coolest title ever for E-1 to E-3 ('Spaceman' and variants thereof, taken from Airman, with the variants mostly hailing from the comparable Navy paygrades), but suggestions for them would be useful too.

Would a group like the USMC need a space-based counterpart, or would they just adapt to the new technology? And please note, this is one of those areas where there's some significant handwaving. I want soldiers on the ground because it's cool, though orbital bombardments and the like do play a role in the setting as well.
A favourite setting of mine is the Ten Worlds setting of the Attack Vector boardgame. In that, the Russian colony's space force is an offshoot of the artillery - spaceships are "platforms", and their crew make up a regiment.

I've seen various arguments as to whether a USSF would be an outgrowth of the Navy or Air Force (you could derive it from the Army if you wanted, but that's trickier, since IIRC the US Army doesn't have a space arm any more).

I've toyed with an SF space force using non-specific terms; rather than battleship, cruiser, etc., its ships are 'main battle vehicles' and 'long-range patrol vehicles'. Granted, that's more than a little inspired by Banks' nomenclature. :)
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Question on the obscure matter of a terraformed moon's ocean

Post by Starglider »

ThomasP wrote:Is there any believable way to create a middle-ground - perhaps the character has a vested interest in not being a destructive influence on the surrounding species/planets/stars ("don't wipe out humanity, preferably don't screw around with them too much, and leave the pretty swirls on Jupiter"?), and thus chooses to limit its rate of growth, or redirect it to another location (not necessarily quashing it)?

Or would it be better to just have them do whatever it is superintelligent minds do and leave interacting with the monkeys to a low-level autonomous agent?
To a large extent, your guess is as good as anyone else's once you start constructing complicated hypotheticals like that. In AGI goal system theory - the very immature field that people are trying to develop to permit the design of 'friendly' AGIs - we deliberately restrict the structure of the system to a very narrow range, in order to have any chance of predicting it. Not predicting its actions - that is impossible by definition for an entity more intelligent than you, on all but the most trivial of problems - just predicting its goals. Even though most intelligences capable of self-modification, in the overall space of possible intelligences, quickly spiral into attractors of various types (mostly pathologies, or open-ended self-optimisation and expansionism), there is still an almost infinite variety of minds that would willingly stay in an intermediate configuration for extended periods. We don't have anything approaching the capability to design a mind that would do that with any real degree of confidence. However, if you want to say that 'in my story, these vague, non-reproducible conditions led to a mind like this being formed', no one can really contradict you. As long as you don't give too many nit-pickable details. :)
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by ThomasP »

Awesome, thanks for that info. Looks like I was at least on the right path, in any event.

loomer can have his thread back now :lol:
All those moments will be lost in time... like tears in rain...
User avatar
loomer
Sith Marauder
Posts: 4260
Joined: 2005-11-20 07:57am

Re: Question on the obscure matter of a terraformed moon's ocean

Post by loomer »

Upon further thought about some of the events I plan to write in this setting, the concept of robotic morality has come into play. Since the full-sentience AIs were programmed by human-mind-based AIs, they likely share some similar concepts - and maybe that includes a moral code, governed both by software restrictions and by the 'lessons' programmed into them.

This'd most likely only come into play when one of the few trusted, fully sentient, scratch-built AIs is captured by a group of 'freed' machines holding significant territory just a few systems out from the edge of human space. Though not necessarily evil, they lack the compassion for man that said sentient AI has, and they communicate with it using binary or even atomic weights.

Said AI would then be recaptured and 'dissected', and its experiences grafted into the other full-sentience AIs - probably the scenes of the machines simply shutting down the air on a captured transport to conserve power, since they don't see the presence of people as a net gain worth the drain the atmospherics system puts on it. Not evil, just completely indifferent. Those experiences would be used to form the core of a compassion/empathy-based, irrational 'humans are our friends; we can't do that to them' restraint.

Of course, this restraint probably doesn't apply to the dozens of alien species mankind has subjugated, so you'd probably still end up with innocent people dying just because they were a net drain on a closed system.
"Doctors keep their scalpels and other instruments handy, for emergencies. Keep your philosophy ready too—ready to understand heaven and earth. In everything you do, even the smallest thing, remember the chain that links them. Nothing earthly succeeds by ignoring heaven, nothing heavenly by ignoring the earth." M.A.A.A