Google has bought eight robotics companies in the past few months, and no one seems to know why.
Most of the speculation has focused on one of the eight, Boston Dynamics (BD). BD is the only one of the eight that specializes in military robotics, always a fertile field for those addicted to “Rise of the Machines” fantasies. But BD’s products are also viscerally exciting, or disturbing, to those seeing them for the first time. That’s because its best-known robots use legs, not wheels or treads. The company has also given its most popular models catchy animal names, like the quadruped models “Big Dog” and “Cheetah.” You can watch Big Dog in this video, plodding up a steep slope and keeping its balance on a slick, frozen pond.
It’s a good promotional video, but the modest abilities Big Dog displays here can’t account for the excitement people seem to feel on seeing it trudge up that hill. A wheeled or tracked robot climbing a hill, crossing a frozen pond, wouldn’t get a second glance.
For that matter, why is Big Dog a robot? It’s a small vehicle, with legs instead of wheels, but there’s no evidence it can choose its own route or mission. With a little help from Google, your Nissan can drive home without your touching the steering wheel, but that doesn’t seem to qualify it as a “robot” or entitle it to a fraction of the press Big Dog is getting.
Clearly it’s those gimmicky legs, that imitation of mammal gait. Not that this gait is very fast or efficient; your Nissan is faster, smoother, quieter and can carry far more cargo on its boring old wheels—but we don’t call it a robot.
The rule seems to be that one sense of “robot” in contemporary English is something like “a machine that does a bad imitation of a living organism.” The Nissan isn’t trying to look or move like an animal, so we’re underwhelmed. Big Dog, clomping along like a bear designed by a Human Resources Department, is a robot and a delight.
This would be fine if it were a privately funded novelty act. In reality, Big Dog has been funded by the military for years, and is being promoted as a military supply vehicle, perfect for carrying supplies with small units moving on foot over rough terrain: a synthetic donkey, a clockwork St. Bernard, in other words.
This is odd, not to say implausible, on several grounds. For starters, what does that mission have to do with robots, or machine intelligence? A dirt bike can do that job, and nobody associates dirt bikes with high IQs (though that may have to do with their riders). For that mission, what matters is carrying capacity, range, noise when moving, fuel consumption, reliability, speed—and by those criteria, a real donkey beats this artificial one easily.
And when you compare Big Dog to other machines, like offroad bikes or ATVs, the useless gimmickry of the four-legged “robot” is even more obvious. A dirt bike has very little brain but it will claw up a muddy gulch wall better than Big Dog. Throw some ATVs into the competition, and how about one of those amazing Russian Kamaz trucks?
There just aren’t too many slopes, short of a rock wall, that can’t be climbed by bike, ATV, or even Kamaz. At any rate, I’m willing to offer Boston Dynamics a fair bet on a two-vehicle race: Team War Nerd, fielding a 2013 Honda CRF450X ridden by a volunteer (i.e. anybody thinner and less of an uncoordinated dweeb than me) vs. Big Dog and his team of handlers.
The course: 20 miles of the roughest ground you can find. The stakes: totally fair and balanced, to wit: The losing side gives its income for the year to the winner. If I win, all that DARPA money funding Big Dog goes to me, to be used researching military history in someplace that has a good coral reef right offshore. In the unlikely event of Big Dog romping over the finish line first, BD gets my yearly income, which will serve them right.
Big Dog needs computing power only because it’s trying to mimic vertebrate locomotion. Drop that gimmick and it’s a dirt bike or ATV, with more torque than brain.
There’s something very dubious about BD’s products, with their quixotic attempt to imitate mammal motion at a time when familiar machines with wheels have surpassed mammals in every category. Either the whole concept is a classic military boondoggle, or the stated purpose of these machines is not what we’re being told.
BD’s history is a good place to start. The company was started by Marc Raibert, who taught electrical engineering and computer science at MIT. Raibert’s specialty was balance: creating a robot that could keep its balance as well as vertebrates do. Raibert managed to build a robot that could hop without falling over, a great moment for those dreaming of an all-robot production of Riverdance, no doubt. But that breakthrough created a lot of enthusiasm in a more lucrative audience: the research agency for the Department of Defense (DoD), the Defense Advanced Research Projects Agency (DARPA). DARPA and other military agencies have been BD’s main clients for its entire history.
Which raises the same question I keep asking: Why? Why are legs so wonderful, when wheels and treads can do pretty much everything legs do, only faster—much faster?
Two possibilities come to mind: (a) It’s the Department of Defense, which means that insane profligacy with tax money is all we’re seeing; or (b) It’s the human-like or mammal-like motion that DARPA values—not for the stated reason that legs work better on bad terrain, but because DARPA wants a generation of military robots that looks human/mammalian and moves like a mammal. The gimmick, the anthropomorphism, is the goal in itself. To what end we can only guess.
I’m leaning toward option (b), but if you know anything about DoD, you can’t just dismiss “insane profligacy” out of hand. In fact, supply vehicles that walk on machine legs are an old dream of DARPA’s. Way back in the Vietnam War, DARPA put a lot of tax money into a project that stood out for sheer idiocy in a war defined by DoD idiocy: a “mechanical elephant” that could carry supplies through steep, roadless jungle, a bigger, earlier version of BD’s Big Dog.
Big Elephant was scrapped as a “damn fool” idea before it took its first steps—a real loss to comedy, if not military logistics—but DARPA hasn’t stopped dreaming about military robots that walk rather than roll.
What about this recurring claim that legs work better than wheels or treads when the goin’ gets tough? It makes no sense at all, not even the specious sort of sense one finds in many DoD theories. The argument behind it is that long before wheels and tracks dominated movement, the world crawled with creatures that used legs: two or four or six or eight. This supposedly proves that legs work better on roadless, rough terrain like the landscape in which those legged creatures evolved.
The only argument against this is the fact that machines moving on wheels and treads showed long ago that they can move faster—much faster—than anything on legs. And continue far longer without tiring. And carry loads thousands of times heavier. Over all kinds of terrain.
If you want to see wheeled vehicles dealing with terrain much rougher than anything Big Dog takes on, and moving through it with ease at high speed, check out a Russian monster truck race. Russians take big trucks and mud real seriously, and I have yet to see a Russian engineer lose faith in the wheel and instead design trucks with legs.
That claim doesn’t hold up. But if you watch the promotional video for Big Dog, marching up the hill like a mechanical mastiff, it’s easy to see why people are so amazed they don’t bother to think about the claims made for this marvel.
In fact, all Big Dog does in his screen-test video is trudge, slowly and noisily, up a hill, then keep its balance after being kicked while crossing a frozen pond. Balance: BD’s pitch keeps coming back to its one and only breakthrough, Raibert’s work on perfecting balance in robots.
But why is that worth DARPA’s time? The only reason Big Dog needs good balance is that it’s imitating the mammal shape, with its high center of gravity—top-heavy body and head on long, skinny legs. The problem of falling over when kicked doesn’t even apply to an ordinary ATV with a low center of gravity. The three best kickers in MMA—Jon Jones, Cro-Cop in his prime, and Anderson Silva—would have a hard time kicking over a heavy-duty ATV (let alone a Kamaz).
And that animal mimicry is such a huge design cost that the product is slow and noisy, extremely noisy. If you watched the Big Dog video with the sound off, try again with the volume up. You’ll hear Big Dog whining like a chorus of chainsaws as it tries to get up that hill.
That alone rules this contraption out for a small-unit mission on foot over rough terrain. And if Big Dog isn’t useful for that kind of mission, what is it good for?
Unless DARPA is insane (a real possibility), the “supply vehicle” story is ridiculous. BD’s chassis, with its animal shape and gait, is such a huge design cost that it must be an end in itself. So the mission must involve looking and moving like a human or a quadruped mammal.
When you reframe the question that way, a plausible role for these walking machines pops up instantly. Most likely, BD’s anthropomorphic walkers are slotted as the chassis for a new generation of military robots whose software is being developed somewhere else, away from all the publicity. Remember how every Chevy used to have a stamp, “Body by Fisher”? This generation of robots will be stamped “Body by Boston Dynamics.” Big Dog and his metal buddies are going to have their heads sawed open and fitted with brains, subcontracted to somebody with more pure AI experience, and then shipped off to do the missions human soldiers can’t, or won’t, do effectively.
And it’s pretty clear what that job is: Counterinsurgency (CI), the most important military mission we have, and the one our military hates and refuses to take seriously.
The reason the US military hates CI is that it’s defined by “the Three D’s” of counterinsurgency: “dull, dirty, and dangerous.” Not to mention that it reeks of Vietnam and Iraq, our worst military failures (a fact which is, let’s say, not unrelated to our distaste for CI, making the military’s aversion to and avoidance of the job one of those “self-fulfilling prophecies” your high-school counselor warned you about).
Robots are naturals for jobs characterized by the “Three D’s.” Like vacuuming. You like vacuuming? Me neither. Which is why Roomba was invented.
Machines don’t get bored. The Roomba never daydreams of being a cruise missile, as far as I know. It will vacuum until it breaks down, no need for R&R. The motion sensor on your garage never gets tired of being a motion sensor, never daydreams about going to Vegas. Consider land mines, which could reasonably be called the simplest military robots, because they operate without human help once programmed or set. In this way, a mine is much more like a robot than the drones everyone’s worried about. Drones are just fancy model airplanes; they have no capacity to attack their targets without a human operator and his/her supervisors making the call.
Mines don’t need any help, once in place. They never get bored or distracted. They remain in place until their sensors are triggered, and then they detonate. It doesn’t matter if the war ended years ago; they’re still not distracted or bored. Dull is not a problem for a land mine.
What about the second D, “dirty”? It’s a huge problem for human soldiers occupying another country, dealing with an insurgency. These wars are dirty in every way you can use the word. Literally dirty, because these wars, by their nature, happen in poor countries where there are no public services, where the toilet is a pit and water is a precious commodity you buy or carry a long way home. And since the occupying soldiers are from a richer country—again, by the nature of such wars—it’s hard for them, even before their first ambush, just dealing with the smell and the dirt. They hate the locals before the ambushes even start.
And that leads to the other kind of “dirty,” the sleaze that irregular war always encourages. A squad searches a house and one of them steals a gold necklace; nobody wants to lose out, so they all steal. A man objects and gets shot; the squad plants a gun and keeps quiet.
This is an aspect of what the Army likes to call “unit cohesion,” banding together in combat—but it’s the worst thing a CI force can do. You’re all dirty now, and the whole city knows it and hates you. Any sympathy you had is gone. The guerrillas get more and more good info; you get lies or nothing at all.
Now imagine a squad of non-human units, say BD Atlas chassis with a good program, dealing with the “dirty” stuff. For them, there is no dirt, literal or figurative. The neighborhood dirt and smells don’t register at all, and the emotions that lead to stealing, murder, rape, and humiliation of the locals don’t exist. If the units are doing something counterproductive, you alter their programming; there’s no grudge, no memory, no resistance.
Now comes the last and most important D: “Dangerous.” This is where robots could really revolutionize CI warfare. Over the last century, guerrillas have developed a kind of military miracle, a strategy for defeating bigger, wealthier, better-equipped occupying armies. It’s worked again and again, all over the world—and it works not by tinkering with weapons or massing giant armies, but by playing with the occupiers’ emotions, warping them patiently, back and forth, until the occupying soldiers are so scared, resentful and vengeful they’re no use at all, and are actually recruiting for the guerrillas among the civilians, the third group that both sides are trying to win over.
No other form of war depends so entirely on playing with the enemy’s emotional responses. The occupier wants to build a bond with the civilian population by “winning hearts and minds”; the guerrilla wants to break that bond by making the occupying force lash out at the civilians. It doesn’t take much, actually. Most combat soldiers are young, male, provincial, and hungry for the group’s approval. If your squad goes out on patrol and loses a soldier or two each time—to “cowardly” guerrilla tactics like snipers or IEDs—then the group will preach hate for all the locals, with no distinction between guerrillas and civilians. Eventually, the group will act on that belief by firing every weapon it has at anything that moves, or any house in sight.
That’s when the guerrillas win—when the occupier starts raging around like a blind giant, killing old women and kids, disabled, housebound elders, all those who are supposed to be off limits. When that happens, the guerrillas start getting donations, information, volunteers, and the soldiers hole up behind sandbags. They’ve lost the neighborhood, in spite of their advantage in money and weaponry.
If the occupiers sortie out and blast the neighborhood, the guerrillas will usually not even fight back. These sorties just make the occupiers more monstrous, more hated, more isolated. To keep the civilians’ anger for revenge satisfied, the guerrillas use all their new, eager informers to find out when the next patrol leaves the base. There’s an IED waiting for it, and in the gory mess after it goes off, the soldiers overreact wildly, firing the tank’s main cannon into apartment houses. At that point, the war is over and the guerrillas win, even if it takes years for the foreigners to leave.
Sooner or later, the occupiers leave. They’re the only party that can leave. The civilians have nowhere else to go, and the guerrillas have plans for the day when the foreigners pull out. In a year, or ten years, the budget at home is tightened, the polls say the war is hurting the ruling family, or the oligarchy finds something else to obsess about—and the occupying forces leave, hating and hated by everyone.
Now, imagine a CI unit in which the patrols venturing into dangerous areas are robotic, not human. All that guerrilla theory is suddenly obsolete. The guerrillas might be very good at their job, and “kill” several robotic units patrolling the neighborhood. But the reprisals they’re hoping for just won’t happen, and that ruins their whole strategy. The units feel no anger or fear, no grief for the damaged units. They follow their programming, unmoved.
The guerrillas repeat the process, still hoping for reprisals. There are none. The robot patrols may not even need to retaliate. In theory, an occupier rich and patient enough to focus on the “hearts and minds” job could decide to keep counter-guerrilla violence to a minimum. The occupying power could simply keep sending more robotic units to replace those destroyed, while the robot patrols focus on projects like improving sanitation, roads, and electric power. Most occupying armies talk about that kind of work, but it’s hard for humans to feel very gung-ho about Peace Corps chores on behalf of the people who are trying to kill them. Robotic units have no grudges, making them lethal CI soldiers.
If the occupiers had the patience and money to continue this experiment in CI strategy long enough, the guerrillas would become more violent toward the civilian population as the robot units became less hated. Guerrillas are only human—worse yet, they’re usually young males, easily outraged and prone to violence. The guerrillas expect the civilians to share their outrage, but with no reprisals, and with occupying units fixing up the streets and sewers for the first time in anyone’s memory, most civilians would rather wait and see. Eventually, some “collaborators” will be killed by the more emotional guerrillas. At that point, the occupiers win no matter what happens next. Maybe the guerrillas split over these reprisals, and civil war destroys the resistance. Maybe the robot units are so trusted by now that a delegation of those with an interest in stability—rich people, parents with military-age sons they’d like to keep alive, vulnerable minority sects—go to the outpost gate to present a list of guerrilla leaders and their present addresses.
No real war is likely to go that smoothly for an occupier, even with automated troops. But then, few guerrilla wars go as smoothly as the guerrilla victory I outlined above either.
What is intriguing about robot units in CI warfare is that emotion, the key to guerrilla strategy, is off the table—unless the people programming and running the automated units project their own fickle, unstable reactions onto their robots. Which is all too possible. And in that case—well, it would be just as easy to program automated soldiers to kill everything that moved in a certain neighborhood as to focus on helping repair the infrastructure. But an occupying power could use nukes for that, more quickly and cheaply than robot soldiers.
What robot soldiers could do is just as scary, though: Make outright colonialism a practical option again. If guerrillas can’t provoke reprisals by playing on the soldiers’ fear and hate, then there’s only one other player in the game whose emotions can be exploited—the civilian population. That puts the guerrilla in the occupying army’s traditional role. It’s the human guerrillas—as vengeful and unpredictable as most humans are—who become resented, even if the neighborhood agrees, in theory, with their struggle against occupation. The guerrillas are the only wild card, so they are the element to fear and eventually, to hate.
Meanwhile, the people running the occupation feed in replacement units and plan how to siphon off whatever it is they wanted in the occupied area, a world away from the shooting. And their machine-soldiers—never homesick, never scared, never angry—can keep this up forever, or until a newer model comes along. No doubt some company will become the Toyota of machine-soldiers, and their commercials will feature a rusty old unit suddenly famous because the guerrilla this veteran unit just killed turns out to be the great-grandson of the first one it neutralized when shipped to the occupation zone as a squeaky-clean product, fresh out of the carton.
When you imagine military robots in this scenario, the huge design costs of BD’s humanoid and mammalian chassis begin to make sense. Machines make better CI soldiers than humans, but only if they still resemble humans in outline. A checkpoint manned by occupation robots with no human characteristics would be too alienating.
What’s more likely is a mixed squad, with some actual humans—back out of suicide-bomber range—and BD-derived models, biped and quadruped, fronting the public. There’ll be mockery, but that’s another weapon only useful on gregarious mammals. Over time, the inhuman discipline of the walking machines will make their approximation of familiar organisms acceptable. Better a humanoid who doesn’t commit reprisals than a fully human foreigner with a temper and a 25mm automatic cannon.
The advantages—for the occupier—are almost endless. No families holding up pictures of dead soldiers outside the White House. No lawsuits. No PTSD. And huge, almost unimaginable profits for whoever holds the patents.
Googles Big Dog for counter-insurgency?
Moderator: Alyrium Denryle
- cosmicalstorm
- Jedi Council Member
- Posts: 1642
- Joined: 2008-02-14 09:35am
Googles Big Dog for counter-insurgency?
I discovered the War Nerd is still active; this is his latest blog piece regarding Google's acquisition of Boston Dynamics. Is Google aiming for a place in the war machinery, or is he just rambling?
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Googles Big Dog for counter-insurgency?
I think this argument is partly correct and partly incorrect.
On the one hand, it's fairly obvious that DARPA is not just interested in "Big Dog" because it wants a robot pack mule; they do want to advance the technology to the point where a humanoid robot chassis is at least possible. They may be hoping that by mid-century something like what the article describes will become possible.
On the other hand, I think it may also be a long time before anything like that does become possible, if only because the 'robots' we're designing here are totally non-sentient, and are not easily going to be made versatile.
You want a humanoid drone for counterinsurgency that will fight when told to, ignore snipers, participate meaningfully in reconstruction efforts? Presumably one that can't be stopped by the simple expedient of having nonviolent protestors form a human chain in front of it... but that will NOT just trample over a small child in the street. Basically, you need an army of RoboCops.
It would actually be much easier to design an autonomous killbot with waypoint navigation that automatically shoots anything that moves. DARPA is likely to get that (essentially Big Dog with a gun turret and GPS) a long time before it gets Robocop. Personally, I would not be surprised if they want both.
This space dedicated to Vasily Arkhipov
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Googles Big Dog for counter-insurgency?
Simon_Jester wrote:On the other hand, I think it may also be a long time before anything like that does become possible, if only because the 'robots' we're designing here are totally non-sentient, and are not easily going to be made versatile.
They don't have to be anywhere near sentient or even terribly autonomous as long as you can maintain a decent comms network (e.g. have constantly loitering drones blow up any significant jammers). Just have lots of operators back in the home country managing a handful of drones each; this maintains military employment levels, with the side benefit of making your rank and file staff less dangerous (to the government) after they leave the military.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Googles Big Dog for counter-insurgency?
Starglider wrote:They don't have to be anywhere near sentient or even terribly autonomous as long as you can maintain a decent comms network (e.g. have constantly loitering drones blow up any significant jammers). Just have lots of operators back in the home country managing a handful of drones each; this maintains military employment levels, with the side benefit of making your rank and file staff less dangerous (to the government) after they leave the military.
True; if they don't have to be independently functional, it greatly simplifies the requirements for the drone.
Even so, the killbot is probably easier to build, I'd think: it's easier to remotely control a trundling (or walking) gun platform with a machine gun turret on top than it is to remotely control a versatile humanoid robot that can do all the different stuff we expect a counterinsurgency robot to do, like, say, talk to people.
This space dedicated to Vasily Arkhipov
Re: Googles Big Dog for counter-insurgency?
Simon_Jester wrote:Even so, the killbot is probably easier to build, I'd think: it's easier to remotely control a trundling (or walking) gun platform with a machine gun turret on top than it is to remotely control a versatile humanoid robot that can do all the different stuff we expect a counterinsurgency robot to do, like, say, talk to people.
Remote control killbots already exist with the TALON SWORDS robot (basically an uparmed version of an EOD bot). AFAIK they've been deployed, but never used.
Its successor, the MAARS (btw who the hell names this stuff?), has even more dakka (grenade launchers in addition to the machine gun): https://www.qinetiq-na.com/wp-content/u ... _maars.pdf
Needs moar dakka
Re: Googles Big Dog for counter-insurgency?
I've been talking about some of this kind of stuff with regard to sci-fi, and I agree with him, to a point. If you want a hauler following your squad around like a loyal puppy so that your troops don't ruin their knees carrying all the crap you expect them to have on hand, a wheeled chassis makes a lot more sense than something with legs. But isn't the same true of a counter-insurgency robot? It's the humanoid arms that are useful, so your robot can manipulate objects and not just point guns at things. Legs might be better for climbing staircases, but even then, you might be better off just building a wheeled robot that can lean its torso forwards and backwards to do the same thing.
Re: Googles Big Dog for counter-insurgency?
Google probably wants any sort of leg up it can get for the drone race with Amazon.
Re: Googles Big Dog for counter-insurgency?
If you have a robot that can do all that CI stuff and build infrastructure, what do you need the locals for anymore?
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Googles Big Dog for counter-insurgency?
Darmalus wrote:If you have a robot that can do all that CI stuff and build infrastructure, what do you need the locals for anymore?
Political correctness. Of course this is only a concern in the West.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Googles Big Dog for counter-insurgency?
The US has, within the relatively recent memory of its policymakers, found itself fighting two wars* in which killing off all the locals would totally defeat the purpose of fighting the war, but in which the war could NOT have been avoided simply by relying on superior robot economies or whatever. A post-scarcity US would probably still have wound up fighting in Afghanistan (unless we assume that the magic of post-scarcity would somehow have canceled the attacks themselves, which is at best hard to prove). And a post-scarcity US would still probably wind up in Iraq because someone like President Bush would still have been able to convince people to fight there.
Go farther back and we have Vietnam, same case.
So the US military research establishment is hard-wired to want things that make it easier to fight this kind of war. It keeps happening, they keep getting ordered to go to Country X and win hearts and minds while blowing the crap out of the X-ians' heartless commie/terrorist cousins. They don't get the choice of "not needing the locals."
It's worth remembering that DARPA probably funds research based almost entirely on its applicability to the wars they realistically expect to have to fight in 2020, 2030, or 2050. They don't get to say "gee, logically our entire military paradigm will change if these robots exist so we won't need to fight at all, and won't need to use these robots for this purpose!" Because the policymakers who tell them what wars to fight and on what terms aren't necessarily going to be logical that way.
*And a host of minor brushfire conflicts associated with the WAR ON TERROR that didn't merit major troop deployment but COULD have...
This space dedicated to Vasily Arkhipov
Re: Googles Big Dog for counter-insurgency?
Calling it right now. Google -> Skynet.
You will be assimilated...bunghole!
- Pelranius
- Sith Marauder
- Posts: 3539
- Joined: 2006-10-24 11:35am
- Location: Around and about the Beltway
Re: Googles Big Dog for counter-insurgency?
Until the guerrillas figure out how to start hacking robots or steal/build their own.
Turns out that a five way cross over between It's Always Sunny in Philadelphia, the Ali G Show, Fargo, Idiocracy and Veep is a lot less funny when you're actually living in it.
Re: Googles Big Dog for counter-insurgency?
Pelranius wrote:Until the guerrillas figure out how to start hacking robots or steal/build their own.
Building their own doesn't help. Theft or hacking could. Conducting false flag operations so that your enemy is blamed when you make a robot open fire on a crowd achieves the goal of turning the populace against the invaders, while having your own robots do it does not.
Re: Google's Big Dog for counter-insurgency?
That's all you'd really need for a terrorist attack, isn't it? Hijack a "Terminator"-style robot, load it up with a chaingun, and set it loose in Times Square with orders to shoot anything that looks human.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Google's Big Dog for counter-insurgency?
Hacking military robotics is unlikely to be within the capability of third-world insurgent groups, even with plenty of destroyed hardware to study and helpers distributed over the internet. The closed-source embedded code will be secured by redundant hardware and software measures, including layered encryption, obfuscation, auto-erase and (for anything but disposable units) physical self-destruct of the processor & storage. Swapping in complete replacement processors should work in isolated situations where inability to verify IFF is not a problem, but engineering that to work with the chassis won't be cheap or easy either. To be a significant threat, support from a nation state with a copious military/intelligence budget and an advanced IT industry would be required.
Re: Google's Big Dog for counter-insurgency?
What was the explanation for the stealth drone that was captured by Iran? The claim that they hacked it has not been disproven yet...
Re: Google's Big Dog for counter-insurgency?
Borgholio wrote:What was the explanation for the stealth drone that was captured by Iran? The claim that they hacked it has not been disproven yet...
Are you seriously suggesting that 'Iranian propaganda is correct' should be the null hypothesis?
Re: Google's Big Dog for counter-insurgency?
I'm not saying that Iran's PR should be taken as the default explanation, but it seems to make the most sense. Various sources on our end say that it either crashed, or remote control was suddenly lost and the drone landed "where it wasn't supposed to". Sounds like hijacking to me, at least in the second instance.
Re: Google's Big Dog for counter-insurgency?
Darmalus wrote:If you have a robot that can do all that CI stuff and build infrastructure, what do you need the locals for anymore?
Starglider wrote:Political correctness. Of course this is only a concern in the West.
False. Employing locals keeps them out of trouble, pumps money into the local economy and gives people a reason not to want you gone.
It's the same thing used in refugee camps. Sure, people get excited about flying giant 3D printers out to automate house building, but really it's much better policy to employ some local guys.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Re: Google's Big Dog for counter-insurgency?
Darmalus wrote:If you have a robot that can do all that CI stuff and build infrastructure, what do you need the locals for anymore?
Starglider wrote:Political correctness. Of course this is only a concern in the West.
madd0ct0r wrote:False. Employing locals keeps them out of trouble, pumps money into the local economy and gives people a reason not to want you gone. It's the same thing used in refugee camps.
Yeah, the best way to kill an insurgency is to get rid of the reasons for people to join the insurgents: jobs, healthcare, education, clean water, etc., and make the local authority look capable of providing security. Robots would be counterproductive.
Re: Google's Big Dog for counter-insurgency?
Starglider wrote:Hacking military robotics is unlikely to be within the capability of third world insurgent groups, even with plenty of destroyed hardware to study and helpers distributed over the internet. The closed source embedded code will be secured by redundant hardware and software measures including layered encryption, obfuscation, auto-erase and (for anything but disposable units) physical self-destruct of the processor & storage. To be a significant threat support from a nation state with a copious military/intelligence budget and an advanced IT industry would be required.
Considering that many insurgencies have some sort of state sponsorship/support, the hacking part shouldn't be dismissed. Not to mention building their own robots.
Re: Google's Big Dog for counter-insurgency?
Pelranius wrote:Considering that many insurgencies have some sort of state sponsorship/support, the hacking part shouldn't be dismissed. Not to mention building their own robots.
It would be stupid for an insurgency to build robots. You might as well put up a sign saying "This big, immobile factory is controlled by the insurgents. Please bomb us."
Re: Google's Big Dog for counter-insurgency?
Pelranius wrote:Considering that many insurgencies have some sort of state sponsorship/support, the hacking part shouldn't be dismissed. Not to mention building their own robots.
Grumman wrote:It would be stupid for an insurgency to build robots. You might as well put up a sign saying "This big, immobile factory is controlled by the insurgents. Please bomb us."
You don't need an industrial park to build a robot. Kitbashing something out of civilian parts (with a few military bits from said state sponsor) won't give you something as capable as the COIN force, but it'd still create a lot of trouble.
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Google's Big Dog for counter-insurgency?
The overall cost of the robot is probably going to remain quite high. For the US, which spends something on the close order of a million dollars per soldier just to have soldiers (note: most of this money does not go to the soldier's pay)... that's not so much of a problem. There's appeal to the idea of buying a ten million dollar robot to replace a soldier whose field deployment costs 850 thousand a year (a figure I got from Google, don't take my word for it).
It is NOT so appealing to try and scrounge up parts for a multimillion dollar robot with lots of complicated, delicate mechanical parts to break down when you can more cost-effectively outfit random expendable angry teenagers with AKs and improvised bombs.
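The cost comparison above can be sketched as quick arithmetic. This is a minimal check of the break-even point, treating both figures as the post's own rough estimates (the $850k/year deployment cost and a hypothetical $10 million robot) and ignoring the robot's maintenance, repair, and attrition costs entirely:

```python
# Break-even sketch for the figures quoted above.
# Both numbers are the post's ballpark estimates, not official figures.

ROBOT_COST = 10_000_000          # one-time purchase price, dollars (hypothetical)
SOLDIER_COST_PER_YEAR = 850_000  # annual field-deployment cost per soldier, dollars

# Ignoring maintenance, the robot pays for itself after this many years
# of replacing one deployed soldier:
break_even_years = ROBOT_COST / SOLDIER_COST_PER_YEAR
print(f"Break-even after about {break_even_years:.1f} years")  # ~11.8 years
```

On those assumptions the robot only wins if it survives more than a decade in the field, which is exactly why the same sums look hopeless to an insurgent who can field a teenager with an AK for a tiny fraction of either figure.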
Re: Google's Big Dog for counter-insurgency?
Given the manpower-intensive nature of COIN operations, a $10 million robot isn't going to be a viable answer to most problems.