		Hopfield-style network simulator
	       Arun Jagota (jagota@cs.buffalo.edu)

If you plan to install or use this simulator, I suggest that you
drop a line to the above e-mail address. That will help me keep you
informed about newer developments on the network (new applications,
papers, etc.) or the simulator (new versions, etc.) as and when they happen.

Installation
------------
1. This s/w is all in C and should install without change on
any 32-bit Unix machine. To create the executable,
cd src
make
-- The executable 'mlhn' will be created in the directory 'src'.
-- Move it to 'bin' and try out one of the examples as follows
cd ../examples
../bin/mlhn room -c < room.train
-- Run mlhn from the directory which contains the example.

2. To install it on a 16-bit machine (Unix or MSDOS), change the value
of the constant MIN_VIGILANCE (in main.c) from -200000000 to -32000,
and of MAX_W from 2500000 to 25000. This change is necessary. These
are the only required changes, and they have been tested on an IBM XT.
A file called Makefile.DOS for the MSDOS make utility has been provided
in the 'src' directory and might be useful.
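For reference, the 16-bit settings would look roughly as follows in
main.c (a sketch only: the names and values are those given above, but
whether they are #defines or initialised variables is an assumption).

/* 16-bit settings (sketch).  On 32-bit machines MIN_VIGILANCE is
   -200000000 and MAX_W is 2500000. */
#define MIN_VIGILANCE  -32000
#define MAX_W           25000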
3. It will probably also work on other non-Unix machines, since it is
written entirely in standard C.

Changes
-------
Within a year, I will probably announce a newer version with added 
features.  If you keep track of any changes you make to this s/w, it
will be easier to "roll" them into the new version.

Usage
-----
	mlhn <network> [-t] [-c]

	-t   : Test only. mlhn will read standard input and do
	       energy-descent.
	-c   : Assume all units in the input are clamped to 1 unless
	       given as <unit-name>*0.  mlhn will read standard input
	       as training data.
	[for other flags/options, see the header of the file main.c]

<network>.cfg must be created in the directory from which mlhn is
invoked. It is a configuration file which must contain 5 integers
on a single line. The meaning and typical range of the numbers are as
follows.

max_units   lambda   lambda_R   vigilance        #size units
10-1000     5 or 7   1-5        -10 to -4000     1-12

Eg

mlhn mynet -t

Files : mynet.cfg

mynet.cfg (* the file mynet.cfg contains the following single line *)

400     5    1  -4000 10   

Input format (during training)

{ a b c }
....

Input Format (to mlhn, during testing)


{ a b c }

{ a*1 b*1 c*1 }

{ a*0 b*1 c }

{ 90-a 80-b 7-c }   (* Optional confidence values in the interval [1,100] *)

-- The only delimiters in sets are blank[s], tab[s] or end-of-line[s].
-- So {a b c } would be an illegal set and might cause the symbol 'a' to
-- be ignored.

Debugging features (Commands accepted from stdin)
-------------------------------------------------

w 1 <first-set> <second-set>
-- Will display weights in matrix format as follows.
-- rows are 'first-set' and columns are 'second-set'
Eg,
w 1 { a b c } { d e f g }

-- will display
num11 num12 num13 num14
num21 num22 num23 num24
num31 num32 num33 num34

-- where the rows and columns correspond to
   d      e     f    g
a
b
c

r 1 <first-set>
-- Will display the internal resistance (threshold) of each unit in <first-set>.
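Eg, (using the same set syntax as the 'w' command)

r 1 { a b c }

-- will display the internal resistances of units a, b and c.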

Example
-------

1. Training file (mynet.train)
{ a b c }
{ c d e }
.......


2. mynet.cfg
100 7 1 -20 1

/* Train */
mlhn mynet -c < mynet.train

/* Test if network has stored all memories cleanly */

mlhn mynet -t < mynet.train > mynet.tst


NOTE : the network is proven to store all memories 'cleanly' ONLY when
the domain has a combinatorial structure (see the paper,
A new Hopfield-style network for content-addressable memories - TR 90-02).

If your application domain already has a combinatorial structure, fine;
otherwise, any n-dimensional binary domain can be converted to one with
a combinatorial structure as follows: use a unit for each value in {0,1}
at each position. This requires 2n units.

Example : Store the 3-d vectors 101, 111,  011
It is recommended you use other symbols (like {y,n}) instead of
{0,1}. Use numerals to identify positions.
Based on the above, we want to store yny, yyy, nyy.

Store as follows:

{ y1 n2 y3 } 
{ y1 y2 y3 }
{ n1 y2 y3 }
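As an illustration of this encoding, here is a small C helper (purely
a sketch, not part of the simulator) that prints an n-bit vector as a
set in the 2n-unit format used above.

#include <stdio.h>

/* Print an n-bit binary vector as a set: 'y<i>' for a 1 at position i,
   'n<i>' for a 0. */
static void print_as_set(const char *bits)
{
    int i;
    printf("{ ");
    for (i = 0; bits[i] != '\0'; i++)
        printf("%c%d ", bits[i] == '1' ? 'y' : 'n', i + 1);
    printf("}\n");
}

int main(void)
{
    print_as_set("101");    /* -> { y1 n2 y3 } */
    print_as_set("111");    /* -> { y1 y2 y3 } */
    print_as_set("011");    /* -> { n1 y2 y3 } */
    return 0;
}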

Modifying individual weights
----------------------------
It is also possible to modify individual weights by presenting the
corresponding two units as a binary set. For example, if you have to
modify the weight w_ab, then present { a b } during training. However,
this should be done USING the -ns flag to mlhn, which tells mlhn not
to automatically include the size unit during training. Otherwise
{ a b } will be treated as a binary memory rather than just a weight
in a perhaps larger memory. In some sense, this allows you to
customise the learning rule (within limits). The script 'unlearn_i',
for example, uses this because during unlearning we don't want to
unlearn all pairs of connections in a spurious state, ONLY those
connections between units in the initial state and units that were
switched ON. Without the '-ns' option, there would be no way to
accomplish this.
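For example (a sketch only: the combination of -ns with the training
flag -c, and the file name pairs.train, are assumptions; see the header
of main.c for the exact flags), modifying a single weight w_ab might
look like

mlhn mynet -c -ns < pairs.train

where pairs.train contains the single set { a b }.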

Exploiting order in the net O/P
-------------------------------
For any 'set' I/P, the net O/P is another 'set'. This O/P 'set'
actually carries some order information that can be exploited if
necessary. If we know that the network switched k units ON during
energy descent, then the k left-most elements in the O/P set are in
reverse order of switching: the 1st element (from the left) is the
unit switched ON last, and the kth element is the unit switched ON first.
At this time, unfortunately, the O/P set contains no information about
the order in which units (if any) were switched OFF, nor, if k is
unknown, about which unit was switched ON first. The latter can
be found by subtracting the I/P set from the O/P set while preserving
the order of the O/P set. If the result is non-empty, it is a set
which contains all and only those units that were switched ON, in
reverse order. The former (info about the switch-OFF order) will
be provided in some subsequent version of the simulator.
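As a hypothetical illustration (these sets are made up, not taken from
the examples directory): if the I/P set is { a b } and the O/P set is
{ d c a b }, then subtracting the I/P set while preserving the O/P
order gives { d c }; so c was switched ON first and d was switched ON
last.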

Files created while training
----------------------------
Suppose you train a network as follows

mlhn mynet -c < mynet.train

where mynet.train contains the training set.

mlhn will create the following files (in your current directory) in 
which it will store the network parameters.

mynet.pn1 - file contains all symbols used in the training set.
mynet.R1  - value of internal resistance for each unit
mynet.w1  - weight matrix (run-length encoded)
mynet.wup1 - not used for a 1-layer net but should be left untouched
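For example, after training the network 'mynet' as above, the current
directory should contain (file names per the list above; mynet.cfg is
the configuration file you created yourself):

mynet.cfg  mynet.pn1  mynet.R1  mynet.w1  mynet.wup1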

Numeric details
---------------
No 'real' arithmetic is performed. Positive weight values in [0,1]
are represented in practice by integers in the interval 0..SCALE-1, where
'SCALE' is a constant declared in the file 'std.h'. To change the 
precision, simply change its value and recompile everything
(typing 'make' will do it). If 'SCALE' is changed, 'SCALE_POWER'
should also be changed accordingly. 'SCALE' should always be a power
of 2.
Similarly, R_i, the internal resistance of each unit, is in the integer
interval 0..THRESH_SCALE. To change the precision, change THRESH_SCALE
and THRESH_SCALE_POWER in the file 'std.h' and recompile (type 'make').
Again, THRESH_SCALE should be a power of 2. It is recommended that
SCALE and THRESH_SCALE always have the same value.
Initial values of units (recall that units can be continuous-valued in test
i/p) are in the integer range 0..HIGH-1. As before, to change the
precision, change HIGH (and HIGH_POWER) in std.h. Again, HIGH should
always be a power of 2.
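A sketch of what these constants might look like in std.h (the names
come from the text above; the particular values, and the assumption
that each *_POWER constant is log2 of its scale, are illustrative only).

/* std.h (sketch): every scale must be a power of 2. */
#define SCALE_POWER         10
#define SCALE               (1 << SCALE_POWER)         /* weights: 0..SCALE-1          */
#define THRESH_SCALE_POWER  10
#define THRESH_SCALE        (1 << THRESH_SCALE_POWER)  /* resistances: 0..THRESH_SCALE */
#define HIGH_POWER          10
#define HIGH                (1 << HIGH_POWER)          /* unit values: 0..HIGH-1       */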

Pre/Postprocessing utilities
----------------------------
The following utilities are included that do the following.

some_sequence -> wrd_in.c -> { s1 o2 m3 e4 _5 s6 e7 q8 u9 e10 n11 c12 e13 }
{ s1 o2 m3 e4 _5 s6 e7 q8 u9 e10 n11 c12 e13 } -> wrd_out.c -> some_sequence

....xxx..
.xxxxxx..  -> vis_in.c -> { .1 .2 .3 .4 x5 x6 ......... }
xxx.xxx..

                                      ....xxx..
{ .1 .2 .3 x4 x5 ...} -> vis_out.c -> .xxxxxx..  
                                      xxx.xxx..


Additional documentation
------------------------
1) Read the shell scripts train, tst and unlearn_i, especially
the last one.
2) Read the headers of the source files main.c and wrd_in.c.

Known Bugs
----------
1. At least 1 size unit has to be put into the configuration file.
This causes no loss of generality.

2. Very rarely, when the input sets are too large, the network
gives a segmentation fault after having processed 200-300 of them.
It may be related to running out of memory at some point. All
results before that point are correct. If the input is test input,
simply run the network until the segmentation fault occurs and
re-run it on the remaining input. If it is training input, that
won't work; instead, break the input into parts small enough that
no segmentation fault occurs. This phenomenon is quite rare.

Relevant Papers and Reports
---------------------------
The following papers are relevant (in priority order) and their LaTeX
sources can be made available by e-mail (send request to 
jagota@cs.buffalo.edu). The first 3 papers have minimal overlap and
are recommended reading.

%T A new Hopfield-style network for content-addressable memories
%A Arun Jagota (1990)
%P Technical Report 90-02, Dept of Computer Science, SUNY Buffalo, NY
%X Mathematical description of the model and its properties. A must.

%T Applying a Hopfield-style network to Degraded Text Recognition
%A Arun Jagota (1990). 
%P To appear in Proceedings of the International Joint Conference on 
   Neural Networks, San Diego, 1990
%X Real-world application.

%T Applying a Hopfield-style network to Degraded Printed Text Restoration
%A Arun Jagota (1990). 
%P To appear in Proceedings of the 3rd conference on neural networks
   and PDP, Indiana-Purdue University
%X Real-world application.

%T Knowledge representation in a multi-layered Hopfield network
%A Arun Jagota, Oleg Jakubowicz
%P Proceedings of the International Joint Conference on 
   Neural Networks, Wash DC, June 1989, I-435
%X Multi-layer version of the above model. Architecture designed with care
   but properties not fully explored. The simulator has a partial
   implementation of the above architecture. For info, send e-mail.

%T A neural lexicon in a Hopfield-style network
%A Arun Jagota, Y.S. Hung
%P Proceedings of the International Joint Conference on 
   Neural Networks, Wash DC, Jan 1990, II-607
%X Precursor to application-I (Degraded Text Recognition).
   Detailed experiments with large word sets. Most, though not all, of
   the information in this paper can be inferred from TR 90-02 or the
   Degraded Text Recognition paper.