AI as Software Brainbug?

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Crossroads Inc.
Emperor's Hand
Posts: 9233
Joined: 2005-03-20 06:26pm
Location: Defending Sparkeling Bishonen
Contact:

AI as Software Brainbug?

Post by Crossroads Inc. »

AHOY ALL.
So, I found myself the other night catching the tail end of the 2003 ‘Terminator’ movie, specifically the part where the nukes launch and John Connor gives his little speech:

There was no mainframe to destroy, no computer we could smash. Skynet existed in a million computers across the world, in people’s smart phones and college laptops.

Since the early 2000s, as the internet has spread, the idea of some sort of AI existing PURELY as an online presence has taken off in fiction and sci-fi. But… what limitations would you still encounter? The obvious assumption is:
“A million computers across the world can act like one supercomputer, ERGO you do not need a single supercomputer for an AI.” But that presupposes that said ‘millions of computers’ COULD continuously work together seamlessly in such a network. A number of countries do HAVE ‘supercomputers’ that are basically hundreds of racks of computers all networked together, but upon researching, I found a lot of these have a great deal of trouble working together properly. Soooo…

In terms of an ACTUAL ‘all-powerful AI’, would it not have to have a single supercomputer somewhere? If an AI did try to exist “purely as software”, would its ‘smartness’ end up changing depending on the local computers it was using? I.e. if an evil AI took over, say, a modern US supercarrier, would it only be as “smart” as the processing power of the computers on the carrier?

Thoughts?
Praying is another way of doing nothing helpful
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: AI as Software Brainbug?

Post by Starglider »

Modern supercomputers are generally just datacentres with a bias towards compute over storage, and a more capable (higher-bandwidth, lower-latency) local network than usual. Distributed supercomputing is absolutely a thing, and in fact plenty of 'supercomputing' is done on cloud resources these days. The main reasons why it isn't more common are 1) the bandwidth, and/or (less frequently) latency, of the wide-area network becomes a bottleneck, or at least cost-prohibitive, 2) security concerns, for all those nuclear simulation, cryptographic and other state workloads, and 3) optimising the code for a specific uniform hardware install and network topology makes it more efficient. Contemporary cutting-edge AI can be and is distributed across numerous datacentres in both the training and inference (usage) phases - ANNs are suited to this given their extremely inefficient yet massively and trivially parallelisable training (an individual ANN training or inference run is also massively parallel but demands massive bisection bandwidth, so it is a challenge to distribute over more than one machine, never mind multiple sites).
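
Something like this toy sketch shows the data-parallel idea (plain NumPy standing in for real distributed frameworks; every name and number here is made up for illustration): each 'node' computes a gradient on its own shard of the training data, only the small gradient vectors cross the network to be averaged, and every node applies the same update. Per step, traffic scales with model size rather than data size, which is why the interconnect rather than raw compute is usually the limiting factor.

Code: Select all
import numpy as np

# Toy linear model standing in for an ANN; squared-error loss.
rng = np.random.default_rng(0)
w = np.zeros(3)                                   # shared model weights
X = rng.normal(size=(1200, 3))                    # full training set
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1200)

def local_gradient(Xs, ys, w):
    """Gradient of the mean squared error on one node's shard of the data."""
    err = Xs @ w - ys
    return 2.0 * Xs.T @ err / len(ys)

# Pretend these four slices live on four different machines/sites.
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

lr = 0.1
for step in range(200):
    # Each node works on its local data in parallel (trivially parallelisable)...
    grads = [local_gradient(Xs, ys, w) for Xs, ys in shards]
    # ...then only the gradient vectors cross the wide-area network and get averaged.
    w -= lr * np.mean(grads, axis=0)

print(w)   # converges towards [1.0, -2.0, 0.5]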

The upshot of this is, amusingly, that a faithful biological brain simulation is likely to be localised to an individual supercomputer (one running instance, I mean: it's easy to copy paused state to alternate sites for local execution), but anything based on current AI design paradigms will easily be able to aggregate across many sites. In a modular/composite NN architecture, even weak edge nodes (e.g. laptops, cellphones) can probably be given something non-critical but useful to do. The intelligence of the AI is definitely constrained by local compute if it's isolated, or at least only connected by a low-bandwidth and/or high-latency link - which is increasingly uncommon, given optical broadband, 5G, Starlink etc.

There are of course numerous incredibly complex 'split brain' issues when dealing with distributed state - even in relatively simple financial systems, never mind AIs - but ANNs will generally degrade gracefully, so sloppy mitigations should be acceptable.
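
To illustrate the 'degrades gracefully' point, a hypothetical sketch (names and figures invented, not any real system): treat inference as an ensemble vote over whichever model replicas are currently reachable, so losing a node, or serving from a slightly stale copy, nudges the answer instead of failing outright.

Code: Select all
import numpy as np

rng = np.random.default_rng(1)

def replica_predict(weights, x):
    """One node's prediction from its (possibly slightly stale) copy of the model."""
    return float(weights @ x)

# Five sites holding near-identical copies of the same model; two are unreachable.
master = np.array([1.0, -2.0, 0.5])
replicas = [master + 0.01 * rng.normal(size=3) for _ in range(5)]
reachable = [True, True, False, True, False]

x = np.array([0.3, 1.2, -0.7])
answers = [replica_predict(w, x) for w, up in zip(replicas, reachable) if up]

if answers:
    # Average whatever came back; fewer replicas just means a slightly noisier answer.
    print(sum(answers) / len(answers))
else:
    print("no replicas reachable - fall back to a purely local model")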
User avatar
Crossroads Inc.
Emperor's Hand
Posts: 9233
Joined: 2005-03-20 06:26pm
Location: Defending Sparkeling Bishonen
Contact:

Re: AI as Software Brainbug?

Post by Crossroads Inc. »

Interesting, and it confirms a number of the suspicions I already had about how that stuff currently works.

From a fiction and writing standpoint, the idea of some massive, flashy 'supercomputer' for an AI feels outdated and archaic in the age of online data. But even with that in mind, it makes me wonder whether, in certain situations, a "big central supercomputer" would still be needed.
Praying is another way of doing nothing helpful
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
User avatar
Lord Revan
Emperor's Hand
Posts: 12235
Joined: 2004-05-20 02:23pm
Location: Zone:classified

Re: AI as Software Brainbug?

Post by Lord Revan »

Both approaches have their pros and cons. With a single central supercomputer you can make it more secure against software-based attacks and also more optimised for a specific task (say, for example, you have an advanced AI running a power plant), but it would have a single point of failure in the form of the central unit.

An AI spread out over the internet would be less likely to be taken out by a single physical attack or accident, but due to its spread-out nature it wouldn't be as secure against software-based attacks: you realistically couldn't restrict access to its nodes effectively enough, and thus couldn't effectively prevent someone from uploading a virus to one of the nodes and having that malware spread to the whole thing. With a single central unit you can literally say "only people I trust have access to the central unit", especially if said AI doesn't need online access.

Essentially it's a case of what you need the AI for; after all, the AI doesn't have to be superintelligent, just smart enough for the purposes of the story.
I may be an idiot, but I'm a tolerated idiot
"I think you completely missed the point of sigs. They're supposed to be completely homegrown in the fertile hydroponics lab of your mind, dried in your closet, rolled, and smoked...
Oh wait, that's marijuana..."Einhander Sn0m4n
User avatar
Solauren
Emperor's Hand
Posts: 10375
Joined: 2003-05-11 09:41pm

Re: AI as Software Brainbug?

Post by Solauren »

Also "Artificial Intelligence" is too broad a term.

Going by the Terminator franchise, Skynet and the Terminators are NOT A.I. They are Artificial SENTIENCE.
(As are most artificial lifeforms in sci-fi, all the way from the little robot in Buck Rogers, to the Biodreads from Captain Power, up to C-3PO, R2-D2, BB-8, and Mr. Data.)

Artificial Intelligence is limited to its programming. (Meaning, it is essentially limited to its programmers' decision-making process.)
If it encounters a problem beyond its programming, it can't come up with a new solution, just a series of 'best fits' based on its existing programming.

Having the ability to save those problems and its 'best fits', and compare them against the results, would allow for some growth.
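
A rough sketch of what that might look like (everything here is invented for illustration): the system only knows the rules its programmers gave it, falls back to the closest known case when it sees something new, and records what it tried so later lookups can reuse it.

Code: Select all
# Hand-coded decision rules: the system can never step outside this table,
# it can only pick the closest match and remember how that worked out.
RULES = {
    "fire detected":     "activate suppression",
    "intruder detected": "lock doors",
    "power loss":        "switch to backup generator",
}

case_memory = {}   # novel situations and the 'best fit' that was tried

def similarity(a, b):
    """Crude word-overlap score between two situation descriptions."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def decide(situation):
    if situation in RULES:
        return RULES[situation]
    if situation in case_memory:
        return case_memory[situation]
    # Unanticipated input: pick the best-fitting programmed rule and remember it.
    best = max(RULES, key=lambda known: similarity(situation, known))
    case_memory[situation] = RULES[best]
    return RULES[best]

print(decide("fire detected"))             # exact rule
print(decide("smoke detected near fire"))  # best fit: activate suppression
print(case_memory)                         # the saved cases that allow some growth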


Artificial Sentience would be the ability of the software/hardware to make decisions above and beyond its programming, as well as improvise and adapt entirely new approaches and solutions, and not be limited to what is in its programs/databases.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: AI as Software Brainbug?

Post by Starglider »

Solauren wrote: 2021-01-17 10:22am Artificial Intelligence is limited to its programming. (Meaning, it is essentially limited to its programmers' decision-making process.) If it encounters a problem beyond its programming, it can't come up with a new solution, just a series of 'best fits' based on its existing programming.
This kind of manual programming of decision making is hardly even called 'AI' any more. It's used for things like enemy behaviour in video games and the traction control system in your SUV. What we are talking about in this thread, and what most contemporary media articles about 'AI' cover, is machine learning. In the second half of the 2010s, machine learning has become overwhelmingly deep artificial neural networks - other approaches have been sidelined. Programmers do not in any way design the 'decision-making process'; they just fiddle with some network topology parameters.

The key thing the programmers do is select and rank/classify the training data (or specify a fitness function if it is not manually ranked, e.g. for training GPT-type text extrapolators). ANNs are best viewed as a kind of lossy, holographic representation of all the input data. The key property that makes them useful is that they can then interpolate appropriate output for cases similar but not identical to any of the training data, in a smooth and relatively reliable fashion (vs hand-written software, which tends to be brittle: it works perfectly until something unanticipated happens, then it doesn't work at all).
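
A small illustrative contrast (assuming scikit-learn is installed; this is just the interpolation-vs-brittleness point, not how any production system is built): the hand-written lookup only answers for inputs it was explicitly told about, while a tiny trained network gives a plausible answer for nearby inputs it has never seen.

Code: Select all
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training data: a handful of (x, sin x) samples.
X_train = np.linspace(0, 3, 40).reshape(-1, 1)
y_train = np.sin(X_train).ravel()

# "Hand-written" approach: exact-match lookup, brittle off the known grid.
lookup = {round(float(x), 3): float(y) for x, y in zip(X_train.ravel(), y_train)}

# Small ANN: a lossy, smoothed representation of the same data.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

query = 1.2345  # not one of the training points
print(lookup.get(round(query, 3), "lookup has no answer"))   # brittle: no answer
print(net.predict([[query]])[0], "vs true", np.sin(query))   # interpolated answer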

This is most impressive when you apply it to something like 'all the text on the Internet', which recent advances in big data processing have made possible. You can now build a chatbot / search engine / article generator that takes your prompt and generates a response by combing virtually everything anyone has ever typed in an article, web forum or social media, taking the most relevant quotes and smoothly combining them (respecting the rules of grammar etc - usually, as it's fuzzy logic) to make a blended response. This is quite useful and looks like magic to most people, but unfortunately it's more vulnerable to the ELIZA effect than ever before i.e. people assuming that there is something more complex and abstract going on than there actually is.
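
As a crude illustration of generating 'new' text purely by recombining existing text, here is a toy word-level Markov chain - orders of magnitude simpler than a GPT-style model, and the corpus is just a few made-up lines, but it has the same nothing-beyond-the-training-text flavour: every adjacent word pair in the output occurred somewhere in the source.

Code: Select all
import random
from collections import defaultdict

corpus = (
    "the machines rose from the ashes of the nuclear fire "
    "the war against the machines had gone on for decades "
    "the fire spread across the networks of the world"
).split()

# Build a table of which word follows which (a first-order Markov chain).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(3)
word, output = "the", ["the"]
for _ in range(15):
    if word not in follows:
        break
    word = random.choice(follows[word])   # only transitions seen in the corpus
    output.append(word)

# Every adjacent pair here occurred somewhere in the source text.
print(" ".join(output))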

That's not to say that people aren't experimenting with much more sophisticated kinds of ANN - they are, it's just that to date the 'dumb algorithm but huge data set' approach has been much more successful, in both flashy demos and commercial applications. The issue is of course that no matter how much you rely on massive data warehousing and Internet of Things sensor deployment, there are some problems for which you just can't capture a smooth, representative training set. That's essentially where the self-driving hype that promised we'd all be in automated cars by now went wrong: it downplayed the difficulty of getting good coverage on all the edge cases. For a chatbot, being convincing 95% of the time is a great result, but a self-driving car handling even 99.99% of intersection traversals without crashing is not.
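
The 95% vs 99.99% point is easier to see with a little arithmetic (illustrative figures only): even a one-in-ten-thousand failure rate compounds quickly over the number of intersections a car actually drives through.

Code: Select all
# Probability of at least one failure over n independent trials
# with per-trial success rate p (illustrative figures, not real data).
def at_least_one_failure(p, n):
    return 1 - p ** n

print(at_least_one_failure(0.95, 100))      # chatbot: ~0.994, but one bad reply is cheap
print(at_least_one_failure(0.9999, 10000))  # car: ~0.63 chance of a crash in 10k crossings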
User avatar
B5B7
Jedi Knight
Posts: 787
Joined: 2005-10-22 02:02am
Location: Perth Western Australia
Contact:

Re: AI as Software Brainbug?

Post by B5B7 »

The terminators are capable of sophisticated analysis, but most of them are subject to basic programming protocols. We see this in The Sarah Connor Chronicles, where, after she was damaged, Cameron defaulted to a prior command given to her by Skynet.
The two that seem truly independent in thinking ability are Weaver and Henry (though at one stage Henry was co-opted by an external command).
TVWP: "Janeway says archly, "Sometimes it's the female of the species that initiates mating." Is the female of the species trying to initiate mating now? Janeway accepts Paris's apology and tells him she's putting him in for a commendation. The salamander sex was that good."
"Not bad - for a human"-Bishop to Ripley
GALACTIC DOMINATION Empire Board Game visit link below:
GALACTIC DOMINATION
User avatar
Crossroads Inc.
Emperor's Hand
Posts: 9233
Joined: 2005-03-20 06:26pm
Location: Defending Sparkeling Bishonen
Contact:

Re: AI as Software Brainbug?

Post by Crossroads Inc. »

Hey all, wanted to give mad props and thanks for the quick feedback, specifically to Starglider, who as always lays things out in amazing detail.

I guess I should say that my secondary curiosity behind this stemmed from the currently running STGOD and one of my main characters, a massive AI embedded in many systems of my civilization. By the way, I LOVE the distinction of 'artificial sentience' vs 'artificial intelligence' to describe something that is more than just a 'smart program'.

After watching that part at the end of the 2003 Terminator, I had started to wonder whether, for my character in the STGOD, I needed to pick an "either/or" between a big central computer and net-based software. After looking at things, I can safely say it would be prudent and practical to do both. In truth, I came up with the idea of him having different versions of himself at different intelligence levels, used depending on the situation: an android that is 'just' a bit smarter than your average carbon-based sapient, and then working up from there.
Praying is another way of doing nothing helpful
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
User avatar
Solauren
Emperor's Hand
Posts: 10375
Joined: 2003-05-11 09:41pm

Re: AI as Software Brainbug?

Post by Solauren »

Starglider wrote: 2021-01-17 12:17pm
Solauren wrote: 2021-01-17 10:22am Artificial Intelligence is limited to its programming. (Meaning, it is essentially limited to its programmers' decision-making process.) If it encounters a problem beyond its programming, it can't come up with a new solution, just a series of 'best fits' based on its existing programming.
This kind of manual programming of decision making is hardly even called 'AI' any more.
Very true. However, the topic is 'AI as Software', not hardware or neural networks.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
User avatar
Solauren
Emperor's Hand
Posts: 10375
Joined: 2003-05-11 09:41pm

Re: AI as Software Brainbug?

Post by Solauren »

Go the Hive Mind route for the character.

A central 'server farm' (which could be multiple servers on different planets/star systems, connected at effectively instantaneous speeds thanks to quantum teleportation) that is a combination hardware/software infrastructure.

If the hardware is destroyed, back-ups of the main software could be run on lesser computers, at a diminished capacity (like someone with head trauma) until additional/replacement hardware is built.

The connected support 'avatars' are running on subsets of the main software. Possibly with a complete back-up of the main software program.

The databases that are not part of the AI/AS software are stored across multiple servers and the support avatars, with multiple copies of each section to make destruction and corruption difficult.
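
A hypothetical sketch of that replication scheme (node names, section names and copy counts all made up): every data section is copied onto several distinct nodes, so losing any one node, or even a couple, still leaves a full copy of everything recoverable.

Code: Select all
nodes = ["core_server", "avatar_alpha", "avatar_beta", "relay_sat_004", "backup_site"]
sections = ["memories", "schematics", "personality_kernel", "tactical_db"]
COPIES = 3   # each section is stored on three distinct nodes

# Round-robin placement: section i goes to nodes i, i+1, i+2 (mod number of nodes).
placement = {
    s: [nodes[(i + k) % len(nodes)] for k in range(COPIES)]
    for i, s in enumerate(sections)
}

def survives(lost_nodes):
    """True if at least one copy of every section remains after losing these nodes."""
    return all(any(n not in lost_nodes for n in placement[s]) for s in sections)

print(placement)
print(survives({"core_server"}))                  # True: the hive remains
print(survives({"core_server", "avatar_alpha"}))  # True: still one copy of everything
print(survives(set(nodes[:3])))                   # False: all copies of 'memories' lost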

Kind of like the Borg.
Kill one, the hive remains.
Kill the Queen, a new one will rise.
Kill the Omnimatrix, a new one will be built.

And to top it off, the Hive Mind keeps a complete back-up - one of each avatar, all schematics, and the infrastructure needed to rebuild (basically 3D printers/replicators and 3x the needed resources) - in a secure location whose location isn't recorded except in the AS's active memory (consciousness). Part of it is connected to the Hive Mind, but it's registered as something small and easily ignorable by enemies.

I.e. 'Monitor Satellite 004 - Historical Artifact. Preserved for cultural purposes.' Images of it show that civilization's version of Voyager 2.
It possibly even is a legitimate satellite, with an added hidden relay to the real site.

All up to you.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.