new artificial intelligence?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Natorgator
Jedi Knight
Posts: 856
Joined: 2003-04-26 08:23pm
Location: Atlanta, GA

new artificial intelligence?

Post by Natorgator »

Found this on Digg. I read an article about it a month ago in Wired, but couldn't find anything online. Fascinating theories:
Jeff Hawkins made a name for himself in the tech industry as the founder of Palm Computing and inventor of the Palm Pilot. He later founded Handspring, where he invented the Treo. If you were a fan of his work then, you are going to love what Jeff is up to now. He is currently pursuing his lifelong passions: neuroscience and intelligence. His latest work made quite a splash a few years ago when he published On Intelligence. In this thin volume Jeff Hawkins elegantly summarized his theory of how the brain gives rise to intelligence. Disputing the conventional wisdom that the brain is complex, or that intelligence is inseparable from other human qualities such as emotions, Jeff argued that human intelligence is a function of the neocortex and that it is temporal in nature.

To prove his theory, Jeff founded Numenta - a company dedicated to developing algorithms and software based on the ideas put forward in the book. This spring Numenta released its first product, an experimental software package aimed at researchers and advanced developers that embodies the algorithms and techniques pioneered by Jeff and his crew. Numenta is presenting here at ETech today, so it's a great opportunity to familiarize you with these exciting new developments. Has the age of Artificial Intelligence arrived? Is it what we thought it would be? Read on to find out.

Hierarchical Temporal Memory (HTM)

One of the key insights that Jeff had was based on the fact that life has a spatio-temporal quality. This is a fancy way of saying that things happen in space and time. It is of course basic physics, but Jeff concluded that the structure in our brain that models reality should also have spatio-temporal characteristics. After all, a good model is an approximation of the actual process. With that, Jeff looked for a part of the brain that would fit the description and immediately realized that it is the neocortex.

Jeff and his colleagues spent a lot of time studying the neocortex and were able to understand its essential operations. Based on their understanding they created the Hierarchical Temporal Memory (HTM) model, which captures the essential computation by constructing tree-like hierarchies. Like its biological forefather, the neocortex, HTM applies the same algorithm to all inputs. The four basic operations performed by each element are:

* Discover causes in the world
* Infer causes of novel input
* Make predictions
* Direct actions

This model, the scientists claim, simulates what would commonly be classified as intelligence.
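
To make this division of labor concrete, here is a minimal sketch of what such an element's interface might look like. This is purely illustrative - it is not Numenta's actual API, and all the names are hypothetical.

Code: Select all

# Hypothetical sketch of an HTM-style node's interface (illustrative only).
class HTMNodeSketch:
    def __init__(self):
        self.memory = {}  # learned pattern -> cause statistics

    def learn(self, pattern):
        """Discover causes: accumulate statistics about observed patterns."""
        ...

    def infer(self, pattern):
        """Infer causes: return likelihoods over the causes learned so far."""
        ...

    def predict(self):
        """Make predictions: estimate the next expected pattern."""
        ...

    def act(self, belief):
        """Direct actions: emit an internal command for a downstream translator."""
        ...
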
1. Discover causes in the world

[Image: diagram of how sensory input is digitized and classified, as described below]

Like neural networks, HTM does not have any prewired classification of the world. Instead, HTM accepts a sequence of spatio-temporal inputs and 'learns' the patterns in the input stream. In the diagram above, the senses digitize the signal and turn it into bitmaps (or vectors), which are then processed by a classification system. The system then assigns the likelihood of a particular cause to each symbol. In plain English, you are shown a sequence of pictures of cats and dogs, and you classify each picture as either a cat or a dog. But just as we cannot do that at birth, neither can HTM. In fact, HTM needs to go through a training process before it can 'learn' to distinguish things.
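
As a toy illustration of that training step (our own simplification, nothing like Numenta's real algorithm), imagine counting how often each input pattern co-occurs with a cause and normalizing the counts into likelihoods:

Code: Select all

from collections import defaultdict

# Toy training pass (illustrative only): count how often each input pattern
# co-occurs with a cause, then turn the counts into likelihoods.
counts = defaultdict(lambda: defaultdict(int))

training_stream = [
    ((1, 0, 1, 1), "cat"),
    ((1, 0, 1, 0), "cat"),
    ((0, 1, 1, 1), "dog"),
    ((0, 1, 0, 1), "dog"),
]

for pattern, cause in training_stream:
    counts[pattern][cause] += 1

def likelihoods(pattern):
    """Likelihood of each known cause for a given input pattern."""
    seen = counts.get(pattern, {})
    total = sum(seen.values())
    return {cause: n / total for cause, n in seen.items()}

print(likelihoods((1, 0, 1, 1)))  # {'cat': 1.0} on this tiny training set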

2. Infer causes of novel input

A trained HTM is able to assign the likelihood of a particular cause. Given a new input, the system uses its previous knowledge to classify it. People actually do the same thing; given a sequence of pictures of cats and dogs, there is a (small) chance that we will make a mistake. What is particularly interesting is how HTM deals with novel input - it is used to continue the learning process. Each new input, along with its temporal aspect, is processed by the system and causes the system to change. As an example, think of the process involved in recognizing an object via sensory input - we move our hands around it in order to recognize it. Jeff Hawkins explains that this ability to handle continuously variable input is one of the keys to making the whole system work.
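
In the same toy spirit (again, purely illustrative), inference on a novel input could amount to finding the closest remembered pattern, reporting its cause, and then folding the new pattern back into memory so that the learning process never really stops:

Code: Select all

# Toy inference on novel input (illustrative only, not the real HTM algorithm):
# match a new pattern against remembered ones, report the best cause, then
# store the new pattern so that learning continues.
memory = {
    (1, 0, 1, 1): "cat",
    (1, 0, 1, 0): "cat",
    (0, 1, 1, 1): "dog",
}

def hamming(a, b):
    """Number of positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def infer_and_learn(pattern):
    best = min(memory, key=lambda known: hamming(pattern, known))
    cause = memory[best]
    memory[pattern] = cause  # novel input keeps training the system
    return cause, hamming(pattern, best)

print(infer_and_learn((1, 0, 0, 1)))  # ('cat', 1): closest stored pattern wins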

3. Make predictions

The ability to predict or to imagine things is one of the most basic human abilities. Forecasting, mental modeling, imagination and planning - these are powerful attributes of intelligent behavior in humans, and each finds a place in HTM. Each node in the HTM network combines its memory with incoming signals to predict what is going to happen next. This prediction can actually serve as an input itself, mimicking the process of imagination in humans. The entire network is able to compute a series of future states - so, for example, like people, it is able to anticipate bad or dangerous situations before they actually take place.
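
A rough sketch of the idea (ours, not Numenta's implementation): a node learns which pattern tends to follow which, and can then feed its own predictions back in as input to roll out a short imagined future:

Code: Select all

from collections import defaultdict, Counter

# Toy prediction sketch (illustrative only): learn which pattern follows which,
# then feed predictions back in as input to imagine a few steps ahead.
transitions = defaultdict(Counter)

sequence = ["dark clouds", "thunder", "rain", "dark clouds", "thunder", "rain"]
for current, nxt in zip(sequence, sequence[1:]):
    transitions[current][nxt] += 1

def predict(state):
    """Most likely next pattern given the learned transition counts."""
    following = transitions.get(state)
    return following.most_common(1)[0][0] if following else None

def imagine(state, steps=3):
    """Use each prediction as the next input: a crude stand-in for imagination."""
    future = []
    for _ in range(steps):
        state = predict(state)
        if state is None:
            break
        future.append(state)
    return future

print(imagine("dark clouds"))  # ['thunder', 'rain', 'dark clouds']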

4. Direct actions

Probably the most important thing that people do after they think (most of the time) is act. The ability to take the sum of all inputs, draw a conclusion, and do something has been wired into HTM. Since the model itself has no way of interacting with the external world, its actions need to go through a translator before being implemented (think of how the brain controls movement, for example). So in its raw version, HTM actions are just internal commands that can be interpreted in various ways. For example, they can be hooked up to a motor system to drive physical behavior. In this first version of the model, the set of basic behaviors is pre-wired. However, even at this early stage, the model is capable of generating complex responses by combining the basic building blocks.
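
One way to picture that translator (our own toy reading, not Numenta's code) is a simple lookup from internal commands to pre-wired basic behaviors, with complex responses assembled by stringing the building blocks together:

Code: Select all

# Toy "translator" sketch (illustrative only): abstract internal commands are
# mapped onto pre-wired basic behaviors; complex responses are combinations.
BASIC_BEHAVIORS = {
    "forward":    lambda: print("motor: move forward one step"),
    "turn_left":  lambda: print("motor: rotate left 15 degrees"),
    "turn_right": lambda: print("motor: rotate right 15 degrees"),
}

def translate(internal_commands):
    """Interpret the model's internal commands as concrete motor behaviors."""
    for command in internal_commands:
        action = BASIC_BEHAVIORS.get(command)
        if action is not None:
            action()

# A "complex" response built from the pre-wired building blocks:
translate(["forward", "turn_left", "forward"])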

Hal, are you there?

So what are we to make of this? Have Jeff Hawkins and his researchers at Numenta invented Artificial Intelligence? The answer is yes and no. It is likely that some future version of their system will be able to pass the famous Turing Test, but hardly anyone would mistake the Numenta creation for a human being. In fact, the very beauty of this creation is that it decouples intelligence from other human qualities. Jeff and his colleagues invented an algorithm that mimics the typical computation that occurs in our brains, but it is far from being a complete artificial intelligence.

So in terms of moral and ethical implications, right now there are no issues. Could there be in the future? Yes. Future generations of this algorithm, if implemented in advanced robots, could come closer to what Arnold Schwarzenegger so elegantly portrayed in the Terminator series. But seriously, as with any technology, care must be taken as to how and where it is used.

In the meantime, we are excited to report on this breakthrough. Jeff's invention has paved the road to a new, brain-like computing paradigm. It is possible that the long-awaited promise of neural networks and cellular automata is finally being delivered. This means that computers will be able to tackle problems that come so easily to us, like recognizing faces or seeing patterns in music. But since computers are much faster than humans when it comes to raw computation, we also hope that new frontiers will be opened, enabling us to solve problems that were unreachable before.

This post is based on the white paper on Numenta's web site. We highly recommend it, as it has a lot of enlightening details about the architecture of HTM. Please take a look and let us know what you think about this exciting development.
Specialist
Padawan Learner
Posts: 216
Joined: 2002-10-06 02:41pm

Post by Specialist »

Funny, I don't see the distinction between ANN and this HTM system. From what I can gather it's just another NN, probably with a different topology. I'll just wait and see what comes of it, because this whole article sounds like a big marketing stunt.

Code: Select all

"Friends teach you what you want to know. Enemies teach you what you need to know."
TheLemur
Padawan Learner
Posts: 204
Joined: 2007-03-27 09:36pm

Post by TheLemur »

Neat how they can take a very simple, very old principle - Bayes' theorem - and trumpet it as a brand-new system that will magically produce a machine capable of reproducing most of the dozens of regions of the human brain.
drachefly
Jedi Master
Posts: 1323
Joined: 2004-10-13 12:24pm

Post by drachefly »

Specialist wrote:Funny, I don't see the distinction between ANN and this HTM system. From what I can gather it's just another NN, probably with a different topology. I'll just wait and see what comes of it, because this whole article sounds like a big marketing stunt.
Seems to me like the input is handled a bit differently. It may improve the ability to reliably abstract. Or it could be worse...

Of course, you can't tell just from this article.
Ariphaos
Jedi Council Member
Posts: 1739
Joined: 2005-10-21 02:48am
Location: Twin Cities, MN, USA

Post by Ariphaos »

drachefly wrote:Seems to me like the input is handled a bit differently. It may improve the ability to reliably abstract. Or it could be worse...

Of course, you can't tell just from this article.
The input seems to be exactly the same to me. Like an ANN, it's also using weights. Without an understanding of how the black box is working, it's impossible to say, but until I see otherwise, this is just another ANN.
drachefly
Jedi Master
Posts: 1323
Joined: 2004-10-13 12:24pm

Post by drachefly »

This may look at the weights and consider them; a NN IS the weights.
Darth Holbytlan
Padawan Learner
Posts: 405
Joined: 2007-01-18 12:20am
Location: Portland, Oregon

Post by Darth Holbytlan »

UPDATE: IEEE Spectrum published a more detailed article on this. It's a bit long, so I've cut it down (especially to remove marketing crap):
article wrote:Learn Like A Human
By Jeff Hawkins
Why Can't A Computer Be More Like A Brain?

By the age of five, a child can understand spoken language, distinguish a cat from a dog, and play a game of catch. These are three of the many things humans find easy that computers and robots currently cannot do. Despite decades of research, we computer scientists have not figured out how to do basic tasks of perception and robotics with a computer.

[...]

It is clear to many people that the brain must work in ways that are very different from digital computers. To build intelligent machines, then, why not understand how the brain works, and then ask how we can replicate it?

My colleagues and I have been pursuing that approach for several years. We've focused on the brain's neocortex, and we have made significant progress in understanding how it works. We call our theory, for reasons that I will explain shortly, Hierarchical Temporal Memory, or HTM. We have created a software platform that allows anyone to build HTMs for experimentation and deployment. You don't program an HTM as you would a computer; rather you configure it with software tools, then train it by exposing it to sensory data. HTMs thus learn in much the same way that children do. HTM is a rich theoretical framework that would be impossible to describe fully in a short article such as this, so I will give only a high level overview of the theory and technology. Details of HTM are available at http://www.numenta.com.

First, I will describe the basics of HTM theory, then I will give an introduction to the tools for building products based on it. It is my hope that some readers will be enticed to learn more and to join us in this work.

We have concentrated our research on the neocortex, because it is responsible for almost all high-level thought and perception, a role that explains its exceptionally large size in humans-about 60 percent of brain volume [see illustration "Goldenrod"]. The neocortex is a thin sheet of cells, folded to form the convolutions that have become a visual synonym for the brain itself. Although individual parts of the sheet handle problems as different as vision, hearing, language, music, and motor control, the neocortical sheet itself is remarkably uniform. Most parts look nearly identical at the macroscopic and microscopic level.

Because of the neocortex's uniform structure, neuroscientists have long suspected that all its parts work on a common algorithm-that is, that the brain hears, sees, understands language, and even plays chess with a single, flexible tool. Much experimental evidence supports the idea that the neocortex is such a general-purpose learning machine. What it learns and what it can do are determined by the size of the neocortical sheet, what senses the sheet is connected to, and what experiences it is trained on. HTM is a theory of the neocortical algorithm. If we are right, it represents a new way of solving computational problems that so far have eluded us.

Although the entire neocortex is fairly uniform, it is divided into dozens of areas that do different things. Some areas, for instance, are responsible for language, others for music, and still others for vision. They are connected by bundles of nerve fibers. If you make a map of the connections, you find that they trace a hierarchical design. The senses feed input directly to some regions, which feed information to other regions, which in turn send information to other regions. Information also flows down the hierarchy, but because the up and down pathways are distinct, the hierarchical arrangement remains clear and is well documented.

As a general rule, neurons at low levels of the hierarchy represent simple structure in the input, and neurons at higher levels represent more complex structure in the input. For example, input from the ears travels through a succession of regions, each representing progressively more complex aspects of sound. By the time the information reaches a language center, we find cells that respond to words and phrases independent of speaker or pitch.

Because the regions of the cortex nearest to the sensory input are relatively large, you can visualize the hierarchy as a tree's root system, in which sensory input enters at the wide bottom, and high-level thoughts occur at the trunk. There are many details I am omitting; what is important is that the hierarchy is an essential element of how the neocortex is structured and how it stores information.

HTMs are similarly built around a hierarchy of nodes. The hierarchy and how it works are the most important features of HTM theory. In an HTM, knowledge is distributed across many nodes up and down the hierarchy. Memory of what a dog looks like is not stored in one location. Low-level visual details such as fur, ears, and eyes are stored in low-level nodes, and high-level structure, such as head or torso, is stored in higher-level nodes. [See illustrations, "Everyone Knows You're a Dog" and "Higher & Higher."] In an HTM, you cannot always concretely locate such knowledge, but the general idea is correct.

Hierarchical representations solve many problems that have plagued AI and neural networks. Often systems fail because they cannot handle large, complex problems. Either it takes too long to train a system or it takes too much memory. A hierarchy, on the other hand, allows us to "reuse" knowledge and thus make do with less training. As an HTM is trained, the low-level nodes learn first. Representations in high-level nodes then share what was previously learned in low-level nodes.

For example, a system may take a lot of time and memory to learn what dogs look like, but once it has done so, it will be able to learn what cats look like in a shorter time, using less memory. The reason is that cats and dogs share many low-level features, such as fur, paws, and tails, which do not have to be relearned each time you are confronted with a new animal.

The second essential resemblance between HTM and the neocortex lies in the way they use time to make sense of the fast-flowing river of data they receive from the outside world. On the most basic level, each node in the hierarchy learns common, sequential patterns, analogous to learning a melody. When a new sequence comes along, the node matches the input to previously learned patterns, analogous to recognizing a melody. Then the node outputs a constant pattern representing the best matched sequences, analogous to naming a melody. Given that the output of nodes at one level becomes input to nodes at the next level, the hierarchy learns sequences of sequences of sequences.
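
As a toy illustration only (this is not our actual algorithm, and the names are invented for the example), a node's sequence memory can be sketched like this: the node memorizes short sequences, names the best match when new input arrives, and a node at the next level then learns sequences of those names:

Code: Select all

# Toy sketch of sequence memory in a node (illustrative only): learn a melody,
# recognize a melody, name a melody -- and let the next level learn sequences
# of those names.
class SequenceNodeSketch:
    def __init__(self):
        self.sequences = {}  # name -> stored sequence

    def learn(self, name, sequence):
        self.sequences[name] = tuple(sequence)

    def recognize(self, sequence):
        """Name of the stored sequence that best matches the input."""
        def overlap(name):
            return sum(a == b for a, b in zip(self.sequences[name], sequence))
        return max(self.sequences, key=overlap)

low = SequenceNodeSketch()
low.learn("scale-up", ["C", "D", "E"])
low.learn("scale-down", ["E", "D", "C"])

high = SequenceNodeSketch()
high.learn("exercise", ["scale-up", "scale-down", "scale-up"])

# The low node turns raw notes into stable names; the high node learns a
# sequence of those names -- a sequence of sequences.
names = [low.recognize(s) for s in (["C", "D", "E"], ["E", "D", "C"], ["C", "D", "E"])]
print(names, "->", high.recognize(names))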

That is how HTMs turn rapidly changing sensory patterns at the bottom of the hierarchy into relatively stable thoughts and concepts at the top of it. Information can flow down the hierarchy, unfolding sequences of sequences. For example, when you give a speech, you start with a sequence of high-level concepts, each of which unfolds into a sequence of sentences, each of which unfolds into a sequence of words and then phonemes.

Another, subtler way an HTM exploits time is how it decides what to learn. All of its parts learn on their own, without a programmer or anyone else telling the neurons what to do. It is tempting for us to try to fill such a coordinating role by deciding in advance what a node should do, for instance by saying, "Node A will learn to recognize eyes and ears, and node B will learn noses and fur." However, that approach does not work. As nodes learn, they change their output-which affects the input to other nodes. Because memory in an HTM is dynamic, it is not possible to decide in advance what a node should learn.

So how does a node know what to learn? This is where time plays a critical role and is one of the unique aspects of HTM. Patterns that occur close together in time generally have a common cause. For instance, when we hear a sequence of notes over and over, we learn to recognize them as a single thing, a melody. We do the same with visual and tactile patterns. Seeing a dog moving in front of us, for example, is what teaches us that a left-facing dog is actually the same as a right-facing dog, in spite of the fact that the actual information on the retina is different from moment to moment. HTM nodes learn similarly; they use time as a teacher. In fact, the only way to train an HTM is with input that changes over time. How that is done is the most challenging part of HTM theory and practice.

Because HTMs, like humans, can recognize spatial patterns such as a static picture, you might think that time is not essential. Not so. Strange though it may seem, we cannot learn to recognize pictures without first training on moving images. You can see why in your own behavior. When you are confronted with a new and confusing object, you pick it up and move it about in front of your eyes. You look at it from different directions and top and bottom. As the object moves and the patterns on your retina change, your brain assumes that the unknown object is not changing. Nodes in an HTM assemble differing input patterns together under the assumption that two patterns that repeatedly occur close in time are likely to share a common cause. Time is the teacher.
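
A toy sketch of time acting as the teacher (again, only an illustration, not our real learning rule): pool the patterns that repeatedly occur next to each other in time into a single group, on the assumption that they share a common cause:

Code: Select all

from collections import Counter

# Toy temporal grouping (illustrative only): patterns that repeatedly appear
# adjacent in time are pooled into one group -- presumed to share a cause.
stream = ["dog-left", "dog-right", "dog-left", "dog-right",
          "cat-left", "cat-right", "cat-left", "cat-right"]

adjacency = Counter()
for a, b in zip(stream, stream[1:]):
    adjacency[frozenset((a, b))] += 1

# Greedily merge the most frequently adjacent pairs into groups.
groups = []
for pair, count in adjacency.most_common():
    if count >= 2 and not any(pair & group for group in groups):
        groups.append(set(pair))

print(groups)  # the two dog views end up together, as do the two cat views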

The final word in HTM is "memory." This attribute distinguishes HTMs from systems that are programmed. Most of the effort in building an HTM-based system is spent in training the system by exposing it to sensory data, not in writing code or configuring the network. Some people assume memory means a single remembered instance, such as "what I ate for lunch." Others associate memory with computer memory. In the case of HTM, it is neither. HTMs are hierarchical, dynamic, memory systems.

What makes HTM different from other approaches to machine learning? HTMs are unique not because we have discovered some new and miraculous concept. HTM combines the best of several existing techniques, with a few twists thrown in. For example, hierarchical representations exist in a technique called Hierarchical Hidden Markov Models. However, the hierarchies used in HHMMs are simpler than those in HTM. Even though HHMMs can learn complex temporal patterns, they do not handle spatial variation well. It is as if you could learn melodies but not be able to recognize them when played in a different key. Still, the similarity between HTM and other approaches is a good sign: it means that other people have reached similar conclusions. A detailed comparison to other techniques is available on Numenta's Web site.

Another unique aspect of HTM is that it is a biological model as well as a mathematical model. The mapping between HTM and the detailed anatomy of the neocortex is deep. As far as we know, no other model comes close to HTM's level of biological accuracy. The mapping is so good that we still look to neuroanatomy and physiology for direction whenever we encounter a theoretical or technical problem.

Finally, HTMs work. "If we really understand a system we will be able to build it," said Carver Mead, the famous Caltech electrical engineer. "Conversely, we can be sure that we do not fully understand the system until we have synthesized and demonstrated a working model." We have built and tested enough HTMs of sufficient complexity to know that they work. They work on at least some difficult and useful problems, such as handling distortion and variances in visual images. Thus we can identify dogs as such, in simple images, whether they face right or left, are big or small, are seen from the front or the rear, and even in grainy or partially occluded images.

[...]

By 2004 I had developed the essence of HTM, but the theory was still rooted in biology (I published part of that biological theory in 2004 in my book On Intelligence). I did not know how to turn the biological theory into a practical technology. A colleague of mine, Dileep George, was aware of my work and created the missing link. He showed how HTM could be modeled as a type of Bayesian network, a well-known technique for resolving ambiguity by assigning relative probabilities in problems with many conflicting variables. George also demonstrated that we could build machines based on HTM.
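
The Bayesian flavor of that link can be shown with a small, purely illustrative calculation (my own example, not Dileep's formulation): a parent node multiplies the likelihoods reported by its child nodes with a prior over causes and then normalizes:

Code: Select all

# Toy Bayesian combination (illustrative only): conflicting evidence from child
# nodes is resolved by multiplying likelihoods with a prior and normalizing.
priors = {"cat": 0.5, "dog": 0.5}

# Each child reports P(its input | cause) for every cause it knows about.
child_likelihoods = [
    {"cat": 0.8, "dog": 0.3},  # e.g. a hypothetical "fur texture" node
    {"cat": 0.4, "dog": 0.9},  # e.g. a hypothetical "ear shape" node
]

posterior = {}
for cause, prior in priors.items():
    p = prior
    for child in child_likelihoods:
        p *= child[cause]
    posterior[cause] = p

total = sum(posterior.values())
posterior = {cause: p / total for cause, p in posterior.items()}
print(posterior)  # cat ~ 0.54, dog ~ 0.46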

His prototype application was a vision system that recognized line drawings of 50 different objects, independent of size, position, distortion, and noise. Although it wasn't designed to solve a practical problem, it was impressive, for it did what no other vision system we were aware of could do.

[...]

To help Numenta jump-start an industry built on the ideas of HTM, we embarked on creating a set of tools that allow anyone to experiment with HTMs and to use HTMs to solve real-world problems. By making the tools broadly available, providing source code to many parts of the tools, and encouraging others to extend and commercialize their applications and enhancements, we hope to attract engineers, scientists, and entrepreneurs to learn about HTM. The tools together constitute an experimental platform for HTM that is meant to attract many developers by giving them the chance to create successful businesses.

[...]

Today the run-time engine runs on Linux. Our employees and customers use both PCs and Macs. When designing HTM-based systems, the developer needs to experiment with different configurations. Running concurrent tests saves a lot of time. Our tools make parallel testing easier.

[...]

HTM is not a model of a full brain or even the entire neocortex. Our system doesn't have desires, motives, or intentions of any kind. Indeed, we do not even want to make machines that are humanlike. Rather, we want to exploit a mechanism that we believe to underlie much of human thought and perception. This operating principle can be applied to many problems of pattern recognition, pattern discovery, prediction and, ultimately, robotics. But striving to build machines that pass the Turing Test is not our mission.

The best analogy I can make is to go back to the beginning of the computer age. The first digital computers operated on the same basic principles as today's computers. However, 60 years ago we were just starting to understand how to use computers, what applications were best matched to them, and what engineering problems had to be solved to make them easier to use, more capable, and faster. We had not yet invented the integrated circuit, operating systems, computer languages, or disk drives. Our knowledge of HTMs today is at a similar stage in development.

We have recognized a fundamental concept of how the neocortex uses hierarchy and time to create a model of the world and to perceive novel patterns as part of that model. If we are right, the true age of intelligent machines may just be getting started.

THIS ARTICLE WAS EDITED ON 28 MARCH 2007.

About the Author

JEFF HAWKINS, inventor of the Palm Pilot, is the founder of Palm Computing, Handspring, and the Redwood Neuroscience Institute. In 2003 he was elected a member of the National Academy of Engineering.
There's also this little bit from their web site:
Numenta wrote:Hierarchical -- HTMs are organized as a tree-shaped hierarchy of nodes. Each node implements a learning and memory function, that is, it encapsulates an algorithm. Lower-level nodes receive large amounts of input and send processed input up to the next level. In that way, the HTM Network abstracts the information as it is passed up the hierarchy.

Temporal -- During training, the HTM application must be presented with objects as they change over time. For example, during training of the Pictures application, the images are presented first top to bottom, then left to right as if the image were moving over time. Note that the temporal element is critical: The algorithm has been written to expect input that changes gradually over time.

Memory -- An HTM application works in two stages, which can be thought of as training memory and using memory. During training, the HTM Network learns to recognize patterns in the input it receives. Each level in the hierarchy is trained separately. In the fully trained HTM Network, each level in the hierarchy knows -- has in memory -- all the objects in its world. During inference, when the HTM Network is presented with new objects, it can determine the likelihood that an object is one of the already known objects.
There are enough details here to get some idea of how an HTM actually works. It sounds pretty interesting to me, but I'm not really up enough on the ANN state of the art to judge properly. I am a little disappointed at how pedestrian its training sounds.
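
For what it's worth, here is my own toy reading of that two-stage process - nothing here is Numenta's code, and the names are mine:

Code: Select all

# Toy two-stage sketch (my reading of the description above, not Numenta's
# code): training memorizes a level's "world"; inference scores new input
# against the objects the level already knows.
class LevelSketch:
    def __init__(self):
        self.known = []  # objects this level has memorized

    def train(self, objects):
        self.known.extend(objects)

    def infer(self, obj):
        """Best-known object and a crude likelihood that obj is that object."""
        best = max(self.known, key=lambda known: len(set(known) & set(obj)))
        return best, len(set(best) & set(obj)) / len(set(best) | set(obj))

# Stage 1: training (in a full HTM each level would be trained separately,
# lower levels first, on input that changes over time).
level = LevelSketch()
level.train([("fur", "paw"), ("feathers", "beak")])

# Stage 2: inference on a new object.
print(level.infer(("fur", "tail")))  # (('fur', 'paw'), 0.333...)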
TheLemur
Padawan Learner
Posts: 204
Joined: 2007-03-27 09:36pm

Post by TheLemur »

To build intelligent machines, then, why not understand how the brain works, and then ask how we can replicate it?
Because that won't get you a working intelligent machine any more than trying to replicate a bird will get you a working flying machine. The problem is that evolution didn't build all this stuff with intelligent modification and copying in mind, so no optimization whatsoever has gone into making human hardware easily replicable with machinery. A human runner, for instance, propels himself forward by pushing the ground backward with his feet; we have only just, in 2007, gotten around to building a machine that can do the same thing. And yet we had working motor vehicles in 1900, because we thought of another implementation, the wheel and engine, that is much better suited to machinery. The human brain uses massively parallel computing, with a hundred trillion synapses running at ~100 Hz; current supercomputers have only a few hundred thousand processors but run at over 1 GHz. The human brain combines memory and data processing; computers do not. The human brain gets bored if you put it in a box for six hours with nothing to do; computers do that for years on end in server farms. The human brain is intricately linked to biochemistry, releasing various hormones and reacting to substances in the blood; most computers do not have anything you could call a body at all. And on and on it goes.
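
To put rough numbers on the parallelism point (order-of-magnitude figures only, taken from the estimates above):

Code: Select all

# Back-of-the-envelope comparison using the rough figures above
# (order-of-magnitude estimates, not precise measurements).
synapses = 100e12            # ~a hundred trillion synapses
brain_rate_hz = 100          # ~100 Hz
brain_events_per_sec = synapses * brain_rate_hz      # ~1e16

processors = 3e5             # a few hundred thousand processors
cpu_rate_hz = 1e9            # ~1 GHz
machine_ops_per_sec = processors * cpu_rate_hz       # ~3e14

print(f"brain: ~{brain_events_per_sec:.0e} synaptic events/s")
print(f"supercomputer: ~{machine_ops_per_sec:.0e} ops/s")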