AI::NeuralNet::BackProp 0.77 review

License: Perl Artistic License
File size: 93K
Developer: Josiah Bryan
Rating: 0 stars (rbytes.net award)

AI::NeuralNet::BackProp is a simple back-prop neural net that uses the Delta rule and Hebb's rule.

SYNOPSIS

use AI::NeuralNet::BackProp;
# Create a new network with 1 layer, 5 inputs, and 5 outputs.
my $net = new AI::NeuralNet::BackProp(1,5,5);

# Add a small amount of randomness to the network
$net->random(0.001);

# Demonstrate a simple learn() call
my @inputs = ( 0,0,1,1,1 );
my @outputs = ( 1,0,1,0,1 );

print $net->learn(@inputs, @outputs),"\n";

# Create a data set to learn
my @set = (
[ 2,2,3,4,1 ], [ 1,1,1,1,1 ],
[ 1,1,1,1,1 ], [ 0,0,0,0,0 ],
[ 1,1,1,0,0 ], [ 0,0,0,1,1 ]
);

# Demo learn_set()
my $f = $net->learn_set(@set);
print "Forgetfulness: $f unitn";

# Crunch a bunch of strings and return array refs
my $phrase1 = $net->crunch("I love neural networks!");
my $phrase2 = $net->crunch("Jay Lenno is wierd.");
my $phrase3 = $net->crunch("The rain in spain...");
my $phrase4 = $net->crunch("Tired of word crunching yet?");

# Make a data set from the array refs
my @phrases = (
$phrase1, $phrase2,
$phrase3, $phrase4
);

# Learn the data set
$net->learn_set(@phrases);

# Run a test phrase through the network
my $test_phrase = $net->crunch("I love neural networking!");
my $result = $net->run($test_phrase);

# Get this, it prints "Jay Leno is networking!" ... LOL!
print $net->uncrunch($result),"\n";

AI::NeuralNet::BackProp is the flagship package for this file. It implements a neural network similar to a feed-forward, back-propagation network, learning via a mix of a generalization of the Delta rule and a dissection of Hebb's rule. The actual neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.
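For background, the textbook Delta rule nudges each weight in proportion to the error between the target and the actual output. The module's internal variant (and its Hebbian component) may differ, so the short Perl sketch below is only a generic illustration of that rule, not the package's actual update code.

use strict;
use warnings;

# Generic single-neuron Delta-rule update (illustration only):
#   delta_w[i] = learning_rate * (target - output) * input[i]
my @inputs  = (0, 0, 1, 1, 1);
my @weights = (0.1, 0.1, 0.1, 0.1, 0.1);
my $target  = 1;
my $rate    = 0.5;   # learning rate (eta)

# The neuron's output here is just the weighted sum of the inputs
# (a real network would also apply an activation or threshold).
my $output = 0;
$output += $weights[$_] * $inputs[$_] for 0 .. $#inputs;

# Apply the Delta-rule correction to every weight.
my $error = $target - $output;
$weights[$_] += $rate * $error * $inputs[$_] for 0 .. $#inputs;

printf "error %.3f, new weights: %s\n", $error, join(", ", @weights);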

Requirements:
Perl

What's New in This Release:
This is version 0.77, a complete internal upgrade from version 0.42.
A new feature is a randomness factor in the network, which can optionally be disabled.
The restriction on 0s is removed, so you can run any network you like (see the NOTES on using 0s with randomness disabled).
Also included are an improved learn() function and a much more accurate internal fixed-point system for learning.
Automated learning of input sets is included as well; see learn_set() and learn_rand_set(), sketched below.
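The randomness factor and the set-learning helpers can be combined roughly as follows. This is a minimal sketch that assumes random(0) turns the randomness factor off and that learn_rand_set() accepts the same data-set layout as the learn_set() call shown in the synopsis; check the module's POD before relying on either.

use AI::NeuralNet::BackProp;

# Same constructor as the synopsis: 1 layer, 5 inputs, 5 outputs.
my $net = new AI::NeuralNet::BackProp(1,5,5);

# Assumption: passing 0 to random() disables the randomness factor.
$net->random(0);

# Input/output pairs, laid out as in the synopsis' learn_set() example.
my @set = (
    [ 1,1,1,0,0 ], [ 0,0,0,1,1 ],
    [ 0,0,0,1,1 ], [ 1,1,1,0,0 ]
);

# learn_set() walks the pairs in order; learn_rand_set() is assumed here
# to take the same layout but learn the pairs in random order.
my $forgetfulness = $net->learn_set(@set);
$net->learn_rand_set(@set);

print "Forgetfulness: $forgetfulness unit\n";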
