Alternatives to neural nets in AI

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Alternatives to neural nets in AI

Post by Sarevok »

Neural nets are the best-known way of implementing decision-making systems where the traditional symbolic logic approach fails to cope with the complexity - for example image recognition, data mining and pattern recognition in general. Now while NNs can produce fantastic results when they are properly trained, they have this one teeny weeny problem: it is very hard to create an NN that does exactly what you want. They are unpredictable and difficult to work with compared to just writing an algorithm in C or Java.

So I was wondering what alternatives to neural nets exist for making AIs, whether they be simplistic special-purpose software or some futuristic attempt at sentient AGI.
I have to tell you something everything I wrote above is a lie.
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Alternatives to neural nets in AI

Post by Modax »

Have you read any of Ray Kurzweil's books? He discusses several different approaches used in AI. You might find these wikipedia articles interesting. :D

Expert Systems
Bayesian Net
Genetic Algorithms
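To give the last item some flesh: a minimal genetic algorithm sketch in pure Python on the classic "one-max" toy problem (maximise the number of 1s in a bitstring). The parameters and fitness function here are invented for illustration, not canonical:

```python
import random

random.seed(0)  # make the run repeatable

def one_max(bits):
    # Toy fitness function: count the 1s in the bitstring.
    return sum(bits)

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals.
            a, b = random.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=one_max)

best = evolve()
```

The same selection/crossover/mutation loop applies to any representation you can score; swapping the bitstring for a program tree is what turns this into genetic programming.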
R. U. Serious
Padawan Learner
Posts: 282
Joined: 2005-08-17 05:29pm

Re: Alternatives to neural nets in AI

Post by R. U. Serious »

Norvig's book (AI: A modern approach) covers all topics in great depth. A look into the Table of Contents should give you some starting points (e.g. chapter 20):

http://aima.cs.berkeley.edu/contents.html
Privacy is a transient notion. It started when people stopped believing that God could see everything and stopped when governments realized there was a vacancy to be filled. - Roger Needham
User avatar
The Jester
Padawan Learner
Posts: 475
Joined: 2005-05-30 08:34am
Location: Japan

Re: Alternatives to neural nets in AI

Post by The Jester »

In machine learning there are a variety of options for extracting information; depending on the nature of the problem, various methods may work better than others. Options include:

Decision Trees
Logistic Regression
Principal Component Analysis
Linear Discriminant Analysis
Independent Component Analysis
Clustering
Support Vector Machines
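As a taste of one of these, here is logistic regression fitted by batch gradient descent in pure Python. The tiny one-dimensional dataset and learning rate are made up for illustration:

```python
import math

# Tiny invented 1-D dataset: label 1 when x is positive, with a clear margin.
data = [(-2.0, 0), (-1.5, 0), (-1.0, 0), (1.0, 1), (1.5, 1), (2.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight and bias by batch gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    grad_w = grad_b = 0.0
    for x, label in data:
        p = sigmoid(w * x + b)      # predicted probability of class 1
        grad_w += (p - label) * x
        grad_b += (p - label)
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

In practice you would reach for a library implementation, but the core of most of the listed methods is exactly this shape: a parametric model plus an optimisation loop over a loss.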

Helsinki University of Technology has the course material for two courses in Machine Learning posted on the web, if you're further interested.

T-61.3050 Machine Learning: Basic Principles
T-61.5130 Machine Learning and Neural Networks (Also covers Independent Component Analysis and Support Vector Machines.)
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Re: Alternatives to neural nets in AI

Post by petesampras »

Boosting, where a strong classifier is constructed from a large pool of weak classifiers, is a very powerful technique. The beauty is that you don't need to worry about having good quality features, just a method for generating a large and diverse pool of them. In object recognition, for example, large pools of random image patches with a simple distance measure can produce very strong results.
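Boosting is easy to sketch concretely. Below is a minimal AdaBoost in pure Python, using threshold "decision stumps" as the pool of weak classifiers, on an invented 1-D dataset that no single stump can classify but a weighted vote of a few stumps can (all names and numbers are illustrative):

```python
import math

# Toy 1-D dataset with labels in {-1, +1}: the positive class sits in the
# middle, so no single threshold stump separates it.
X = list(range(10))
y = [-1, -1, -1, 1, 1, 1, 1, -1, -1, -1]

def stump(threshold, polarity):
    # Weak learner: predict +1 on one side of the threshold, -1 on the other.
    return lambda x: polarity * (1 if x > threshold else -1)

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                        # uniform example weights
    candidates = [stump(t - 0.5, p) for t in range(11) for p in (1, -1)]
    ensemble = []                            # list of (alpha, weak learner)
    for _ in range(rounds):
        def weighted_error(h):
            return sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        # Pick the stump with the lowest weighted error on current weights.
        h = min(candidates, key=weighted_error)
        eps = max(weighted_error(h), 1e-12)  # guard against log(0)
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, h))
        # Re-weight: boost the examples the chosen stump got wrong.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

strong = adaboost(X, y)
```

The re-weighting step is the whole trick: each round the next weak learner is forced to concentrate on whatever the ensemble so far gets wrong.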
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Alternatives to neural nets in AI

Post by Starglider »

I keep wanting to post in this thread, but each time I feel like writing a huge essay (if not a book :) ) on the subject, which I don't really have time for. So I will be brief. The solutions Jester mentions are good (mostly - decision trees tend to suck for performance but have the advantage of being easy to read) but they're all limited to learning relatively simple functions, mapping inputs to outputs in a well defined space (e.g. classification of documents or images into categories). They aren't capable of doing planning, causal modeling or anything else involving complex or multiple problem contexts.

The term 'genetic algorithms' is used semi-interchangeably with 'genetic programming', but strictly genetic programming is a more general technique that should in theory be able to solve a wide range of problems. In practice genetic programming doesn't work very well (the reasons for this are hotly debated, to say the least) and can't go much further than NNs.

Basically all NNs used in real applications are non-recurrent and non-spiking, which is to say hardly like real neurons at all. Recurrent NNs can have internal state and spiking NNs can do more sophisticated temporal processing, but they're still mostly confined to the lab because standard training functions don't work well on them.

As for AGI, rather than survey the vast range of proposed approaches, I would ask exactly why you are writing off the 'traditional symbolic approach'. The short answer to why the 70s/80s symbolic logic push failed is that the models weren't layered, the logic wasn't probabilistic (it was usually simple Boolean-propositional, occasionally defeasible), the search control mechanisms were very simplistic, and that while some learning was possible within the scope of existing schemas no one came up with a convincing mechanism for computers to create their own symbolic representations from scratch.

A lot of that was really unavoidable due to hardware constraints, though some of it was just unforgivable short-sightedness (e.g. rejection of Bayes for no good reason, giving up on recursion and reflection far too easily). Today we have powerful machines that enable sophisticated search control (and can tolerate more wasted effort anyway), huge memories for rich and deeply layered models, mature probability calculus and some fairly promising Kolmogorov-based approaches for hypothesis generation.

Frankly I deeply dislike NN and GA models - an appropriately retooled symbolic system is far more capable while also being much more transparent, and when an NN might be appropriate for a fuzzy pattern recognition/continuous function approximation task it usually makes sense to use something faster-converging like an SVM instead.
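The Boolean-versus-probabilistic point is easy to make concrete: Bayes' rule in pure Python on an invented diagnostic example. A Boolean rule "evidence implies hypothesis" fires the same way regardless of the base rate; the probabilistic version is dominated by it (the numbers below are made up for illustration):

```python
def bayes(prior, likelihood, false_positive_rate):
    # P(H | E) = P(E | H) P(H) / P(E), with
    # P(E) = P(E | H) P(H) + P(E | not H) P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Rare hypothesis (1% prior), strong test (90% hit rate, 5% false positives):
# the posterior is still only ~15%, which no Boolean rule can express.
posterior = bayes(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
```

Chaining updates like this through layered models is essentially what the 'retooled' probabilistic-symbolic systems do that the 70s/80s rule engines could not.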