function [w,bias] = pla(negative,positive)
% pla.m
%
% Perceptron learning algorithm (developed by Frank Rosenblatt, 1958)
%
% Binary feature vectors in negative and positive can be built
% with feature.pl or a similar program.
%
% This function will only converge if the training vectors are
% linearly separable, which is seldom the case in practice.
%
% Sparse matrix representation and operations are not used, although
% they would be much more efficient. There was little point, because
% sparse matrices are only implemented in Matlab (which costs about
% US$2000 and is inaccessible to most IF developers).
%
% This program implements the same algorithm, and should do the same
% thing, as the Perl program "pla.pl" that is included with Perform.
%
% Perform (Perceptron Classifier in Inform) v1.0
% Nick Montfort  http://nickm.com  2004-06-24

if (size(negative,2) != size(positive,2))
  error("Dimensions of positive and negative examples don't match.");
end

printf("Data looks good. Beginning training...\n");
fflush(stdout);

d = size(negative,2);
w = zeros(1,d);
bias = 0;

% Initially, all the negative examples are correctly classified by
% a zero weight vector.
tneg = zeros(1,size(negative,1));
% But the zero weight vector misclassifies all the positive examples.
tpos = ones(1,size(positive,1));

i = 0;
while ((sum(tneg) + sum(tpos)) > 0)
  i++;
  printf("Starting iteration %i with ", i);
  printf("%i (-) and %i (+) misclassified...\n", sum(tneg), sum(tpos));
  fflush(stdout);

  % Update weight vector and bias by subtracting misclassified negative
  % examples and adding misclassified positive examples.
  w = w - (tneg * negative) + (tpos * positive);
  bias = bias - sum(tneg) + sum(tpos);

  % Update tneg and tpos to reflect how the new w classifies the data.
  % A negative example is misclassified when it scores above zero...
  for j = (1:size(negative,1))
    tneg(j) = ((negative(j,:) * w' + bias) > 0);
  end
  % ...and a positive example when it scores at or below zero.
  for j = (1:size(positive,1))
    tpos(j) = ((positive(j,:) * w' + bias) <= 0);
  end
end

printf("Finished successfully.\n");
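
% Usage sketch (an illustration added here, not part of the original
% Perform distribution; the toy data below is an assumption). The
% logical AND of two binary features is linearly separable, so pla()
% is guaranteed to converge on it:
%
%   negative = [0 0; 0 1; 1 0];
%   positive = [1 1];
%   [w, bias] = pla(negative, positive);
%   % A vector x is then classified as positive when x * w' + bias > 0:
%   disp(([1 1] * w' + bias) > 0);   % prints 1
%   disp(([0 1] * w' + bias) > 0);   % prints 0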