Interesting stuff. I'd be curious to hear what some of the more AI-knowledgeable posters think about this.
New York Times wrote:
Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.
There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.
Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.
The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.
The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.
Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.
“This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.
And then, of course, there are the cats.
To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.
The videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.
Currently much commercial machine vision technology is done by having humans “supervise” the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.
“The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” Dr. Ng said.
“We never told it during the training, ‘This is a cat,’ ” said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. “It basically invented the concept of a cat. We probably have other ones that are side views of cats.”
The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.
Neuroscientists have discussed the possibility of what they call the “grandmother neuron,” specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.
“You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.
While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.
“A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.
“It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.
Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.
“The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”
Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.
Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.
“It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” said Dr. Ng.
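Dr. Ng's point about "letting the data speak" instead of hand-coding edge detectors can be sketched with a toy example. Below, plain k-means clustering (a deliberately simple stand-in, not the method from the Google paper) is run on unlabeled "image patches"; the centroids it finds play the role of learned filters. Every name and size here is invented for illustration:

```python
# Toy unsupervised feature learning: cluster unlabeled patches with
# k-means and treat the centroids as learned "filters".
# NOT Google's algorithm -- just an illustration of the general idea.
import numpy as np

rng = np.random.default_rng(0)

def kmeans_features(patches, k=4, iters=20):
    """Cluster unlabeled patches; the centroids become learned features."""
    # Initialize centroids from k randomly chosen patches
    centroids = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest centroid (squared distance)
        d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each centroid to the mean of its assigned patches
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = patches[labels == j].mean(0)
    return centroids

# Unlabeled 4x4 "patches", flattened to 16-dimensional vectors
patches = rng.random((500, 16))
filters = kmeans_features(patches)
print(filters.shape)  # → (4, 16)
```

No labels are ever supplied; the structure in the data alone determines what the filters look like, which is the contrast with "supervised" vision systems the article describes.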
Google's Machine Learning Experiment
Moderators: Alyrium Denryle, Edi, K. A. Pital
- Guardsman Bass
- Cowardly Codfish
- Posts: 9281
- Joined: 2002-07-07 12:01am
- Location: Beneath the Deepest Sea
Google's Machine Learning Experiment
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
- Ziggy Stardust
- Sith Devotee
- Posts: 3114
- Joined: 2006-09-10 10:16pm
- Location: Research Triangle, NC
Re: Google's Machine Learning Experiment
I am curious to read an actual academic paper on this project; anybody know if one is floating around out there? I never trust pop-sci news, especially on a subject like this, and the article itself doesn't give a very clear picture of the model they used.
- Sarevok
- The Fearless One
- Posts: 10681
- Joined: 2002-12-24 07:29am
- Location: The Covenants last and final line of defense
Re: Google's Machine Learning Experiment
They advanced the field of computer vision. Image recognition may get more accurate and easier to do (if human supervision can truly be minimized). Potential applications? Not many at present, I guess, given the sheer number of processors they used. Maybe in a few years, when computers get faster, average researchers with less funding than Google can play with the tech.
I have to tell you something everything I wrote above is a lie.
- ChaserGrey
- Jedi Knight
- Posts: 501
- Joined: 2010-10-17 11:04pm
Re: Google's Machine Learning Experiment
Sarevok wrote:
They advanced the field of computer vision. Image recognition may get more accurate and easier to do (if human supervision can truly be minimized). Potential applications? Not many at present, I guess, given the sheer number of processors they used. Maybe in a few years, when computers get faster, average researchers with less funding than Google can play with the tech.

Highly advanced spambots that can defeat CAPTCHA?
Lt. Brown, Mr. Grey, and Comrade Syeriy on Let's Play BARIS
- Sarevok
- The Fearless One
- Posts: 10681
- Joined: 2002-12-24 07:29am
- Location: The Covenants last and final line of defense
Re: Google's Machine Learning Experiment
^^
That is an insidious application I had not thought about.
You know an interesting factoid? Back when Google Goggles came out, some hackers tested it against Google's own CAPTCHA. At the time (mid-2010) it was effective.
Image recognition has come a long way and has achieved dramatic spread amongst developers and users alike in the wake of the smartphone revolution. In the age of the Kinect and the like, I wonder how long image-based CAPTCHAs can hold out.
I have to tell you something everything I wrote above is a lie.
- Ziggy Stardust
- Sith Devotee
- Posts: 3114
- Joined: 2006-09-10 10:16pm
- Location: Research Triangle, NC
Re: Google's Machine Learning Experiment
Sarevok wrote:
^^
That is an insidious application I had not thought about.
You know an interesting factoid? Back when Google Goggles came out, some hackers tested it against Google's own CAPTCHA. At the time (mid-2010) it was effective.
Image recognition has come a long way and has achieved dramatic spread amongst developers and users alike in the wake of the smartphone revolution. In the age of the Kinect and the like, I wonder how long image-based CAPTCHAs can hold out.

This is based purely on the information in the OP article, since I can't find a more detailed description of how the algorithm works. But since it seems to be culling general features from a large corpus of pictures, wouldn't some sort of image-based CAPTCHA still work if, for example, you distorted the color/shape of the image enough that it tricked the algorithm but remained coherent to a person? Or possibly made the "answer" a very specific question about the image? It doesn't seem like the way the algorithm "builds" its information would let it parse out specific details like that.
Re: Google's Machine Learning Experiment
The problem is that with that kind of "security" you are only ever exploiting a present inability in the automated system. There is no inherent reason why the system shouldn't be able to crack it; it's just too hard/ineffective to build such a system at this point in time. (Since there is no reason, other than cost, why a human should be able to perform a task that a computer can't.) This is why we continuously have to upgrade encryption standards. CAPTCHAs are being broken and replaced by new ones all the time, BTW. Many of the more sophisticated webspam bots have CAPTCHA-solving modules.
http://www.politicalcompass.org/test
Economic Left/Right: -7.12
Social Libertarian/Authoritarian: -7.74
This is pre-WWII. You can sort of tell from the sketch style, from the way it refers to Japan (Japan in the 1950s was still rebuilding from WWII), the spelling of Tokyo, lots of details. Nothing obvious... except that the upper right hand corner of the page reads "November 1931." --- Simon_Jester
- Number Theoretic
- Padawan Learner
- Posts: 187
- Joined: 2011-09-04 08:53am
- Location: Joeyray's Bar
Re: Google's Machine Learning Experiment
Ziggy Stardust wrote:
I am curious to read an actual academic paper on this project, anybody know if one is floating around out there? I never trust pop sci news, especially on a subject like this, and just from the article it doesn't give a very clear representation of what the model they used was.

You can download their original paper here. Their algorithm was an autoencoder neural network with 9 layers, according to the abstract. The new thing was that they fed completely unlabeled data into it (essentially, it "watched" millions of YouTube videos) and observed what would happen.
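For a sense of what an autoencoder does, here is a toy single-hidden-layer version in NumPy: it is trained purely on unlabeled data to reconstruct its own input, so the hidden layer is forced to discover useful features on its own. This is a massively simplified sketch, nothing like their 9-layer, billion-connection network; every dimension and hyperparameter below is invented for the example:

```python
# Minimal single-hidden-layer autoencoder trained on unlabeled data.
# Illustrative only -- far simpler than the 9-layer network in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden=32, lr=0.1, epochs=200):
    """Learn to reconstruct unlabeled inputs X; no labels are used."""
    n = X.shape[1]
    W1 = rng.normal(0, 0.1, (n, n_hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (n_hidden, n))   # decoder weights
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1)        # hidden code: the learned features
        X_hat = H @ W2             # reconstruction of the input
        err = X_hat - X
        losses.append(float(np.mean(err ** 2)))
        # Gradient descent on the squared reconstruction error
        grad_W2 = H.T @ err / len(X)
        grad_H = (err @ W2.T) * H * (1 - H)
        grad_W1 = X.T @ grad_H / len(X)
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    return W1, W2, losses

X = rng.random((100, 8))           # toy unlabeled "images"
W1, W2, losses = train_autoencoder(X)
print(losses[0] > losses[-1])      # → True: reconstruction improved
```

Stacking many such layers, so that each layer learns features of the previous layer's features, is the "deep" part of deep learning.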
-
- Jedi Master
- Posts: 1401
- Joined: 2007-08-26 10:53pm
Re: Google's Machine Learning Experiment
We should check the machine's code for any incidence of Toxoplasma gondii...
"The 4th Earl of Hereford led the fight on the bridge, but he and his men were caught in the arrow fire. Then one of de Harclay's pikemen, concealed beneath the bridge, thrust upwards between the planks and skewered the Earl of Hereford through the anus, twisting the head of the iron pike into his intestines. His dying screams turned the advance into a panic."'
SDNW4: The Sultanate of Klavostan
- UnderAGreySky
- Jedi Knight
- Posts: 641
- Joined: 2010-01-07 06:39pm
- Location: the land of tea and crumpets
Re: Google's Machine Learning Experiment
This seems like the best place to ask about it, however unrelated it may seem to the topic, but... a couple of years ago I saw a story on AI linked through someone's sig on SDN. I think it was the effort of an SDN member. It was on the web, and dealt with the events before and after a singularity event and eventually how the AI collapses. I would love to find that link again and give it a reread.
Sorry for the diversion.
Can't keep my eyes from the circling skies,
Tongue-tied and twisted, just an earth-bound misfit, I
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Google's Machine Learning Experiment
UnderAGreySky wrote:
This seems like the best place to ask about it, however unrelated it may seem to the topic, but... a couple of years ago I saw a story on AI linked through someone's sig on SDN. I think it was the effort of an SDN member. It was on the web, and dealt with the events before and after a singularity event and eventually how the AI collapses. I would love to find that link again and give it a reread.

Probably The Metamorphosis of Prime Intellect.
- someone_else
- Jedi Knight
- Posts: 854
- Joined: 2010-02-24 05:32am
Re: Google's Machine Learning Experiment
ChaserGrey wrote:
Sarevok wrote:
They advanced the field of computer vision. Image recognition may get more accurate and easier to do (if human supervision can truly be minimized). Potential applications? Not many at present, I guess, given the sheer number of processors they used. Maybe in a few years, when computers get faster, average researchers with less funding than Google can play with the tech.
Highly advanced spambots that can defeat CAPTCHA?

Actually, serious text-recognition software can easily defeat most text-based CAPTCHAs nowadays (ABBYY FineReader and the like, usually used to digitize scanned text, which is an image). Audio is another easy thing for a machine.
The best test is like SD.net's: a relatively simple mathematical problem the user has to solve, written in simple English. No known machine can understand what the hell it is supposed to do, since programs that understand what they are reading and act on it simply aren't here yet (some do understand very user-friendly programming languages that resemble English, but those are still not as widespread as text-recognition software).
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo
--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
Re: Google's Machine Learning Experiment
Though it would be rather easy to write a bot that recognizes the question.
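A minimal sketch of what such a bot might look like, assuming the question follows a simple "What is X plus Y?" pattern. The pattern, word list, and function name here are my own invention, not SD.net's actual questions:

```python
# Hypothetical solver for simple English arithmetic questions,
# of the kind a word-problem CAPTCHA might use. Illustrative only.
import re

WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
OPS = {"plus": lambda a, b: a + b,
       "minus": lambda a, b: a - b,
       "times": lambda a, b: a * b}

def solve_captcha(question):
    """Answer questions like 'What is three plus four?'"""
    tokens = re.findall(r"[a-z]+|\d+", question.lower())
    nums, op = [], None
    for t in tokens:
        if t in WORDS:
            nums.append(WORDS[t])      # number written as a word
        elif t.isdigit():
            nums.append(int(t))        # number written as digits
        elif t in OPS:
            op = OPS[t]                # the operator word
    if op is None or len(nums) < 2:
        raise ValueError("could not parse question")
    return op(nums[0], nums[1])

print(solve_captcha("What is three plus four?"))  # → 7
```

Which supports the point above: the question format only has to be varied slightly beyond a fixed template and a keyword-matching bot like this breaks, while a human isn't bothered at all.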
http://www.politicalcompass.org/test
Economic Left/Right: -7.12
Social Libertarian/Authoritarian: -7.74
This is pre-WWII. You can sort of tell from the sketch style, from the way it refers to Japan (Japan in the 1950s was still rebuilding from WWII), the spelling of Tokyo, lots of details. Nothing obvious... except that the upper right hand corner of the page reads "November 1931." --- Simon_Jester
- UnderAGreySky
- Jedi Knight
- Posts: 641
- Joined: 2010-01-07 06:39pm
- Location: the land of tea and crumpets
Re: Google's Machine Learning Experiment
Starglider wrote:
Probably The Metamorphosis of Prime Intellect.

Thank you, exactly what I was looking for. What's the background (re: SDN) regarding this story? Written by a board member?
(my last post in this thread on this topic)
Can't keep my eyes from the circling skies,
Tongue-tied and twisted, just an earth-bound misfit, I
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Google's Machine Learning Experiment
UnderAGreySky wrote:
What's the background (re: SDN) regarding this story? Written by a board member?

As far as I know, no. I recall someone vaguely transhumanist having a link to it in their signature for a while.
- Terralthra
- Requiescat in Pace
- Posts: 4741
- Joined: 2007-10-05 09:55pm
- Location: San Francisco, California, United States
Re: Google's Machine Learning Experiment
I loved all the build up and exposition, but found the climax and denouement irritating and unlikable.