------------------------------------------------------------------------------
A license is hereby granted to reproduce this software source code and
to create executable versions from this source code for personal,
non-commercial use. The copyright notice included with the software
must be maintained in all copies produced.
THIS PROGRAM IS PROVIDED "AS IS". THE AUTHOR PROVIDES NO WARRANTIES
WHATSOEVER, EXPRESSED OR IMPLIED, INCLUDING WARRANTIES OF
MERCHANTABILITY, TITLE, OR FITNESS FOR ANY PARTICULAR PURPOSE. THE
AUTHOR DOES NOT WARRANT THAT USE OF THIS PROGRAM DOES NOT INFRINGE THE
INTELLECTUAL PROPERTY RIGHTS OF ANY THIRD PARTY IN ANY COUNTRY.
Copyright (c) 1995, 1996, 1997, 1998 John Conover, All Rights Reserved.
Comments and/or bug reports should be addressed to:
john@johncon.com (John Conover)
------------------------------------------------------------------------------
Rels is a program that determines the relevance of text documents to a
set of keywords expressed in boolean infix notation. The relevance is
determined by comparing the phonetic representation of the keywords
with the phonetic representation of every word in a
document. (Phonetic searching has some degree of tolerance to
misspelled words.) The names of the relevant files are printed to
the standard output, in order of relevance.
For example, the command:
rel "(directory & listing)" /usr/share/man/cat1
(ie., find the relevance of all files that contain both of the words
"directory" and "listing" in the catman directory) will list 21 files,
out of the 782 catman files, (totaling 6.8 MB,) of which "ls.1" is the
fifth most relevant-meaning that to find the command that lists
directories in a Unix system, the "literature search" was cut, on
average, from 359 to 5 files, or a reduction of approximately 98%. The
command took 55 seconds to execute on a System V, rel. 4.2
machine, (20MHz 386 with an 18ms. ESDI drive,) which is a considerable
expediency in relation to browsing through the files in the directory,
since ls.1 is the 359th file in the directory. Although this example
is rudimentary, a similar expediency can be demonstrated in searching
for documents in email repositories and text archives.
Additional applications include information robots, (ie., "mailbots,"
or "infobots,") where the disposition (ie., delivery, filing, or
viewing,) of text documents can be determined dynamically, based on
the relevance of the document to a set of criteria, framed in boolean
infix notation. Or, in other words, the program can be used to order,
or rank, text documents based on a "context," specified in a general
mathematical language, similar to that used in calculators.
The words in the query are case insensitive, and either upper or lower
case can be used.
Associativity of operators is left to right, and the precedence of
operators is identical to 'C':
precedence    operator
highest       ! = not
middle        & = and
lowest        | = or
The operator symbols can be escaped with the "\\" character to include
the symbol in a search pattern. The "escape space" character sequence
represents one or more instances of space character(s) in search
patterns, and each instance will match one or more consecutive
whitespace characters, (as defined by isspace(3) in ctype.h and/or
locale.h,) and allows phrases to be searched for. The "many to one"
whitespace character translation occurs in both the keyword arguments
and the text document(s). Multiple consecutive instances of the
"escape space" character sequence in keyword search phrases should not
be used, and single instances are appropriate only when necessary to
specify a consecutive sequence of keywords-the logical and operator is
the preferred searching construct when searching documents that
contain set(s) of keywords.
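For example, (hypothetical commands, assuming the argument is single
quoted so that the shell passes the backslash through,) the command:

rels 'numerical\ methods' papers

searches the directory papers for the phrase "numerical methods," and
the command:

rels 'at\&t' mail

searches the directory mail for the literal string "at&t," rather
than treating the "&" as the logical and operator.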
Note that the logical or operator, (|), is useful in conjunction with
a thesaurus. For example, the thesaurus entry for the word
"complexity" is:
Complexity. -- N. complexity; complexness &c. adj.; complexus;
complication, implication; intricacy, intrication; perplexity;
network, labyrinth; wilderness, jungle; involution, raveling,
entanglement; coil &c. (convolution) 248; sleave, tangled skein,
knot, Gordian knot, wheels within wheels; kink, knarl; webwork.
Adj. knarled. complex, complexed; intricate, complicated,
perplexed, involved, raveled, entangled, knotted, tangled,
inextricable; irreducible.
implying that a reasonable context for a search for things that are
complex would be:
rels '(complex | complic | implicat | intric | perplex |
labyrinth | involut | convolut | involv | tangl |
inextric | irreduc)' ...
which would probably return too many document names. The number of
documents can be reduced with the logical and (&) and not (!) operators
in an iterative fashion to reject documents of little interest.
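For example, a hypothetical refinement:

rels '((complex | intric | tangl) & network) ! jungl' ...

would additionally require documents to contain a word beginning with
"network," and would reject any document containing a word beginning
with "jungl."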
Document format issues:
Hyphenation issues are addressed by deleting hyphens and any following
sequence of instances of whitespace characters, (as defined by
isspace(3),) in both the keyword arguments and the text document(s).
Backspace character issues are addressed by overwriting the character
before the backspace with the character after the backspace, which
will instantiate the character of the last instance of consecutive
backspace/character combinations. This is specifically for catman
pages which utilize underscore/backspace/character combinations for
underlining, in addition to backspace/character combinations for bold
(overstrike,) representation-note that for this process to be
successful, a single underscore (used for underlining,) must precede a
single character in the sequence.
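By way of illustration, a minimal 'C' sketch of the overwriting step,
(a hypothetical rendition-the distributed implementation may differ
in detail,):

#include <stddef.h>

/* collapse overstrike sequences, (eg., "_\bc" for underlining, or
   "c\bc" for bold,) to the final character of the last instance of
   consecutive backspace/character combinations */
static size_t collapse_backspaces(char *buf, size_t len)
{
    size_t in, out = 0;

    for (in = 0; in < len; in++)
    {
        if (buf[in] == '\b' && out > 0)
        {
            out--;               /* next character overwrites the previous one */
        }
        else
        {
            buf[out++] = buf[in];
        }
    }

    return (out);                /* new length of buf */
}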
Phonetic translation:
This program is a derivative work based on the rel(1) program,
available from sunsite.unc.edu in
/pub/Linux/utils/text/rel-1.3.tar.gz. The sources were modified to
include a soundex search algorithm.
The soundex algorithm is a mechanical phonetic translation system for
the English language, converting an English word into a corresponding
phonetic code. The algorithm is as follows:
for each character in a word:
if the character is the first character of a word
1) do nothing
else
2) replace consecutive sequences of the labials, (ie., the
characters, B, F, P, V,) with the character '1'
3) replace consecutive sequences of the gutturals and
sibilants, (ie., the characters, C, G, J, K, Q, S, X, Z,)
with the character '2'
4) replace consecutive sequences of the dentals, (ie., the
characters, D, T,) with the character '3'
5) replace consecutive sequences of the longliquids, (ie.,
the character, L,) with the character '4'
6) replace consecutive sequences of the nasals, (ie., the
characters, M, N,) with the character '5'
7) replace consecutive sequences of the shortliquids,
(ie., the character, R,) with the character '6'
8) and, omit all other characters, (ie., the characters,
A, E, H, I, O, U, W, Y,)
9) if the soundex translation of the word is larger than 4
characters, truncate to 4 characters.
For example, the soundex translation of the word "conover" is
C516. Unfortunately, there are two related issues in using the soundex
algorithm as a search mechanism: interior keyword search is
impossible, and there is no practical strategy to handle hyphenation.
As a heuristic, simply eliminating 1), above, would permit interior
keyword searches and hyphenation through concatenation of characters
on each side of a '-' character, at the expense of erroneous
matches. In practice, the expense is small-depending on the point of
view-particularly if the requirement in 9), above, is removed,
permitting soundex keyword translations of more syllables.
Note that this heuristic returns soundex translated words that consist
only of numbers. Since numerical data can be a valid search criterion,
the ambiguity can be avoided by using letters from the alphabet for
the numbers, making the algorithm as follows:
1) replace consecutive sequences of the labials, (ie., the
characters, B, F, P, V,) with the character 'B'
2) replace consecutive sequences of the gutturals and sibilants,
(ie., the characters, C, G, J, K, Q, S, X, Z,) with the character
'G'
3) replace consecutive sequences of the dentals, (ie., the
characters, D, T,) with the character 'D'
4) replace consecutive sequences of the longliquids, (ie., the
character, L,) with the character 'L'
5) replace consecutive sequences of the nasals, (ie., the
characters, M, N,) with the character 'N'
6) replace consecutive sequences of the shortliquids, (ie., the
character, R,) with the character 'S'
7) and, omit all other characters, (ie., the characters, A, E, H,
I, O, U, W, Y,)
which turns out to be implementable as a direct, many-to-one, and
onto, simple character mapping. It is also a very fast phonetic
search methodology-there is no speed penalty.
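By way of illustration, a minimal 'C' sketch of the modified
algorithm, (a hypothetical rendition-the distributed implementation
folds the translation into table lookups, and may differ in detail,):

#include <ctype.h>
#include <stddef.h>

/* map a character to its modified soundex class, or 0 if the
   character is omitted, (ie., A, E, H, I, O, U, W, Y, and
   non-alphabetics,) */
static char soundex_class(int c)
{
    switch (c)
    {
        case 'B': case 'F': case 'P': case 'V':
            return ('B');        /* labials */
        case 'C': case 'G': case 'J': case 'K':
        case 'Q': case 'S': case 'X': case 'Z':
            return ('G');        /* gutturals and sibilants */
        case 'D': case 'T':
            return ('D');        /* dentals */
        case 'L':
            return ('L');        /* longliquids */
        case 'M': case 'N':
            return ('N');        /* nasals */
        case 'R':
            return ('S');        /* shortliquids */
        default:
            return (0);          /* omitted characters */
    }
}

/* translate word into its modified soundex code in buf; consecutive
   characters of the same class collapse to a single character, eg.,
   "conover" translates to "GNBS" */
static void soundex(const char *word, char *buf, size_t buflen)
{
    size_t i = 0;
    char cls, last = 0;

    for (; *word != '\0'; word++)
    {
        cls = soundex_class(toupper((unsigned char) *word));

        if (cls != 0 && cls != last && i + 1 < buflen)
        {
            buf[i++] = cls;
        }

        last = cls;
    }

    buf[i] = '\0';
}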
Comparing the two methodologies, (standard soundex vs. modified
soundex,) on a text version of the Webster's dictionary, (mine has
234,932 words,) as to the number of different words recognized, with
both unlimited soundex word length, and a word length of 4:
                     standard soundex        modified soundex
                   length = 4   unlimited   length = 4   unlimited
words recognized      4,335       61,408        932       31,983
Although the modified soundex with unlimited length is inferior to the
standard soundex with unlimited word length in its capability to
recognize differences between words, it is superior to the standard
soundex with a word length of 4, which is the way the algorithm is
usually used. It would seem that the modified soundex algorithm is a
reasonable, (depending on the point of view,) compromise for
implementing a phonetic search algorithm.
There are additional issues with the soundex algorithm for phonetic
keyword searches:
1) it only works for the English language
2) a syntax error will be returned for keywords made up of ONLY
the characters A, E, I, H, O, U, W, and Y, (there is nothing to
search for-these characters are ignored by the soundex algorithm)
3) Extreme care must be exercised when using the algorithm to
reject documents with the logical not operator (!) since it will
reject more documents than probably expected.
These issues mean that the algorithm should be considered an adjunct
to, instead of a replacement for, a strict keyword search.
Tests on large email archives, and the HTML pages from WWW servers
(each about 15 Mbytes,) tend to indicate that, in practice, the
algorithm returns not quite twice as many keyword matches as a strict
keyword search. (The output of this program was compared to the output
of the rel(1) program.)
General description of the program:
This program is an experiment to evaluate using infix boolean
operations as a heuristic to determine the relevance of text files in
electronic literature searches. The operators supported are, "&" for
logical "and," "|" for logical "or," and "!" for logical "not."
Parentheses are used as grouping operators, and "partial key" searches
are fully supported, (meaning that the words can be abbreviated.) For
example, the command:
rels "(((these & those) | (them & us)) ! we)" file1 file2 ...
would print a list of filenames that contain either the words "these"
and "those", or "them" and "us", but do not contain the word "we",
from the list of filenames, file1, file2, ... The order of the printed
file names is in order of relevance, where relevance is determined by
the number of incidences of the words "these", "those", "them", and
"us", in each file. The general concept is to "narrow down" the number
of files to be browsed when doing electronic literature searches for
specific words and phrases in a group of files using a command similar
to:
more `rels "(((these & those) | (them & us)) ! we)" file1 file2`
Although regular expressions were supported in the prototype versions
of the program, the capability was removed in the release versions for
reasons of syntactical formality, for example, the command:
rels "((john & conover) & (joh.*over))" files
has a logical contradiction since the first group specifies all files
which contain "john" any place and "conover" anyplace in files, and
the second grouping specifies all files that contain "john" followed
by "conover". If the last group of operators takes precedence, the
first is redundant. Additionally, it is not clear whether wild card
expressions should span the scope of multiple records in a literature
search, (which the first group of operators in this example does,) or
exactly what a wild card expression that spans multiple records means,
ie., how many records are to be spanned, without writing a string of
EOL's in the infix expression. Since the two groups of operators in
this example are very close, operationally, (at least for practical
purposes,) it was decided that support of regular expressions should
be abandoned, and such operations left to the grep(1) suite.
Comparative benchmarks of search algorithm:
The benchmarks were run on a System V, rel. 4.2 machine, (20MHz
386 with an 18ms. ESDI drive,) and searched the catman directory,
(consisting of 782 catman files, totaling 6.8 MB,) which was
searched for either one or two 9 character words that did not
exist in any file, ie., there could be no matches found. The
comparison was between the standard egrep(1), agrep(1), and
rels(1). (Agrep is a very fast regular expression search program,
and is available by anonymous ftp from cs.arizona.edu, IP
192.12.69.5)
for complex search patterns, (after cd'ing to the cat1 directory):
the command "egrep 'abcdefwxy|wxyabcdef' *" took 74.93 seconds
the command "agrep 'abcdefwxy,wwxyabcdef' *" took 72.93
seconds
the command "rels 'abcdefwxy|wxyabcdef' *" took 51.95 seconds
for simple search patterns, (after cd'ing to the cat1 directory):
the command "egrep 'abcdefwxy' *" took 73.91 seconds
the command "agrep 'abcdefwxy' *" took 25.87 seconds
the command "rels 'abcdefwxy' *" took 43.68 seconds
For simple search patterns, agrep(1) is significantly faster, and
for complex search patterns, rels(1) is slightly faster.
Applicability:
Applicability of rels varies with the complexity of the search, the
size of the database, the speed of the host environment, etc.;
however, as some general guidelines:
1) For text files with a total size of less than 5 MB, rels, and
standard egrep(1) queries of the text files will probably prove
adequate.
2) For text files with a total size of 5 MB to 50 MB, qt seems
adequate for most queries. The significant issue is that, although
the retrieval execution times are probably adequate with qt, the
database write times are not impressive. Qt is listed in "Related
information retrieval software," below.
3) For text files with a total size that is larger than 50 MB, or
where concurrency is an issue, it would be appropriate to consider
one of the other alternatives listed in "Related information
retrieval software," below.
Extensibility:
The source was written with extensibility in mind. To alter
character transliterations, see uppercase.c for details. For
enhancements to phrase searching and hyphenation suggestions, see
translit.c.
It is possible to "weight" the relevance determination of
documents that are composed in one of the standardized general
markup languages, like TeX/LaTeX, or SGML. The "weight" of the
relevance of search matches depends on where the words are found
in the structure of the document, for example, if the search was
for "numerical" and "methods," \chapter{Numerical Methods} would
be weighted "stronger" than if the words were found in
\section{Numerical Methods}, which in turn would be weighted
"stronger" than if the words were found in a paragraph. This would
permit the relevance of a document to be determined by how the
author structured the document. See eval.c for suggestions.
The list of identifiers in the search argument can be printed to
stdout, possibly preceded by a '+' character and separated by '|'
characters to make an egrep(1) compatible search argument, which
could, conceivably, be used as the search argument in a browser so
that something like:
"browse `rels arg directory'"
would automatically search the directory for arg, load the files
into the browser, and skip to the first instance of an identifier,
with one button scanning to the next instance, and so on. See
postfix.c for suggestions.
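By way of illustration, a minimal 'C' sketch, (hypothetical-the
identifier list and function name are assumptions,):

#include <stdio.h>

/* print the identifiers as an egrep(1) compatible search argument,
   preceded by a '+' character and separated by '|' characters */
static void print_egrep_argument(const char **identifiers, int count)
{
    int i;

    printf ("+");

    for (i = 0; i < count; i++)
    {
        printf ("%s%s", i > 0 ? "|" : "", identifiers[i]);
    }

    printf ("\n");
}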
The source architecture is very modularized to facilitate adapting
the program to different environments and applications, for
example, a "mailbot" can be constructed by eliminating
searchpath.c, and constructing a list of postfix stacks, with
perhaps an email address element added to each postfix stack, in
such a manner that the program could be used to scan incoming
mail, and if the mail was relevant to any postfix criteria, it
would be forwarded to the recipient.
The program is capable of running as a wide area, distributed,
full text information retrieval system. A possible scenario would
be to distribute a large database in many systems that are
internetworked together, presumably via the Unix inet facility,
with each system running a copy of the program. Queries would be
submitted to the systems, and the systems would return individual
records containing the count of matches to the query, and the file
name containing the matches, perhaps with the machine name, in
such a manner that the records could be sorted on the "count
field," and a network wide "browser" could be used to view the
documents, or a script could be made to use the "r suite" to
transfer the documents into the local machine. Obviously, the
queries would be run in parallel on the machines in the
network-concurrency would not be an issue. See the function,
main(), below, for suggestions.
References:
1) "Information Retrieval, Data Structures & Algorithms," William
B. Frakes, Ricardo Baeza-Yates, Editors, Prentice Hall, Englewood
Cliffs, New Jersey 07632, 1992, ISBN 0-13-463837-9.
The sources for many of the algorithms presented in 1) are
available by ftp, ftp.vt.edu:/pub/reuse/ircode.tar.Z
2) "Text Information Retrieval Systems," Charles T. Meadow,
Academic Press, Inc, San Diego, 1992, ISBN 0-12-487410-X.
3) "Full Text Databases," Carol Tenopir, Jung Soon Ro, Greenwood
Press, New York, 1990, ISBN 0-313-26303-5.
4) "Text and Context, Document Processing and Storage," Susan
Jones, Springer-Verlag, New York, 1991, ISBN 0-387-19604-8.
5) ftp think.com:/wais/wais-corporate-paper.text
6) ftp cs.toronto.edu:/pub/lq-text.README.1.10
Related information retrieval software:
1) Wais, available by ftp, think.com:/wais/wais-8-b5.1.tar.Z.
2) Lq-text, available by ftp,
cs.toronto.edu:/pub/lq-text1.10.tar.Z.
3) Qt, available by ftp,
ftp.uu.net:/usenet/comp.sources/unix/volume27.
The general program strategy:
1) Translate the infix notation of the first non-switch
argument specified on the command line into a postfix notation
list.
2) Compile each token in the postfix notation list, from 1), into
a Boyer-Moore-Horspool-Sunday compatible jump table.
3) Recursively descend into all directories that are listed on the
remainder of the command line, searching each file in each
directory, using the Boyer-Moore-Horspool-Sunday algorithm, (a
sketch of which follows this list,) for the counts of incidences of
each word in the postfix notation list-at the conclusion of the
search of each file, evaluate the postfix notation list to determine
the relevance of the file, and if the relevance is greater than
zero, add the filename and relevance value to the relevance list.
4) Quick sort the relevance list from 3), on the relevance values,
and print the filename of each element in the relevance list.
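By way of illustration, a minimal 'C' sketch of the search step in
3), counting the incidences of a single pattern with the
Boyer-Moore-Horspool-Sunday algorithm, (a hypothetical rendition-see
bmhsearch.c for the program's implementation, which compiles a jump
table for each token in the postfix notation list,):

#include <stddef.h>
#include <string.h>

#define ALPHABET_SIZE 256

static long bmhs_count(const char *text, size_t n,
                       const char *pattern, size_t m)
{
    size_t jump[ALPHABET_SIZE];
    size_t i, position = 0;
    long count = 0;

    if (m == 0 || m > n)
    {
        return (0);
    }

    for (i = 0; i < ALPHABET_SIZE; i++)
    {
        jump[i] = m + 1;             /* default jump: pattern length + 1 */
    }

    for (i = 0; i < m; i++)
    {
        jump[(unsigned char) pattern[i]] = m - i;  /* rightmost incidence rules */
    }

    while (position + m <= n)
    {
        if (memcmp (text + position, pattern, m) == 0)
        {
            count++;
        }

        if (position + m == n)       /* window at end of text */
        {
            break;
        }

        /* jump on the text character just past the search window */
        position = position + jump[(unsigned char) text[position + m]];
    }

    return (count);
}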
Module descriptions:
1) The module uppercase.c constructs an array of MAX_ALPHABET_SIZE
characters, in such a manner that each element contains the
toupper() of its own index value, (ie., it is a look up table for
uppercase characters,) and is called from main() for initialization in
rel.c. The array's use is to make a locale specific, fast,
uppercase character translator, and is used in lexicon.c and
searchfile.c to translate the first argument of the command line,
and file data, respectively, to uppercase characters.
note: care must be exercised when using this array in systems
where the native type of char is signed, for example:
signed char ch;
unsigned char cu;
cu = uppercase[ch];
will not give the desired results, since ch indexes a negative
section of the array, (which does not exist). Particularly
meticulous usage of lint is advisable.
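The desired results can be obtained by forcing a non-negative index
with a cast, for example:

cu = uppercase[(unsigned char) ch];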
See uppercase.c and translit.c for suggestions in implementing
hyphenation and phrase searching strategies.
2) The module translit.c translates all of the characters in an
array, using the array established in uppercase.c. See translit.c
and uppercase.c for suggestions in implementing hyphenation and
phrase searching strategies.
3) The module lexicon.c parses the first argument of the command
line into tokens, and is repetitively called by postfix.c for each
token in the first argument of the command line. Lexicon.c uses a
simple state machine to parse the tokens from the argument.
4) The module postfix.c translates the first argument of the
command line from infix notation to a postfix notation list, and
is called from main() in rel.c. Syntax of the infix expression is
also verified in this module.
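By way of illustration, a minimal 'C' sketch of the infix to postfix
translation, (a hypothetical rendition-it performs no syntax
verification, and does not handle escapes or partial keys, see
postfix.c and lexicon.c,):

#include <ctype.h>
#include <stdio.h>

/* operator precedence, identical to 'C': ! highest, & middle, | lowest */
static int precedence(int op)
{
    switch (op)
    {
        case '!': return (3);
        case '&': return (2);
        case '|': return (1);
        default: return (0);         /* '(' on the stack */
    }
}

static void infix_to_postfix(const char *expression)
{
    int stack[128], top = 0, c;

    while ((c = (unsigned char) *expression++) != '\0')
    {
        if (isalnum (c))             /* operands pass straight through */
        {
            putchar (c);

            if (!isalnum ((unsigned char) *expression))
            {
                putchar (' ');
            }
        }
        else if (c == '(')
        {
            stack[top++] = c;
        }
        else if (c == ')')           /* pop to the matching parenthesis */
        {
            while (top > 0 && stack[top - 1] != '(')
            {
                printf ("%c ", stack[--top]);
            }

            top--;                   /* discard the '(' */
        }
        else if (c == '!' || c == '&' || c == '|')
        {
            /* left to right associativity: pop equal precedence, too */
            while (top > 0 && precedence (stack[top - 1]) >= precedence (c))
            {
                printf ("%c ", stack[--top]);
            }

            stack[top++] = c;
        }
    }

    while (top > 0)
    {
        printf ("%c ", stack[--top]);
    }

    putchar ('\n');
}

For example, infix_to_postfix ("(these & those) | (them & us)")
prints "these those & them us & |".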
5) The module bmhsearch.c contains all of the
Boyer-Moore-Horspool-Sunday (BMH) string search functions,
including the bmhcompile_postfix() function which is called from
main() in rel.c, to compile each token in the postfix notation
list into a jump table, and the bmhsearch_list () function which
is called repetitively to search each file in searchfile.c. See
the bmhsearch.c module for a complete description of the
assembled data structures.
6) The module searchpath.c is a POSIX compliant, recursive descent
directory and file listing function that is called from main() in
rel.c to search files using the module in searchfile.c.
7) The module searchfile.c is repetitively called from
searchpath() in searchpath.c to search each file found in 6),
using the BMH string search functions in bmhsearch.c. Searchfile.c
uses POSIX compliant functions to open, lock, read, and close each
file. The files are read locked for compatibility with those
systems that write lock files during write operations with
utilities like vi(1), for example. This provides concurrency
control in a multi user environment. Searchfile.c uses fcntl(2)
to read lock the file, and will wait if blocked by another process
(see man fcntl(2).)
8) The module eval.c contains postfix_eval(), which is called for
each file searched in searchfile.c to compute the relevance of the
file by evaluating the postfix notation list-the functions that
compute the "and," "or," and "not" evaluations are contained in
this module. If the value of the relevance computed is greater
than zero, an element is allocated, and added to the relevance
list. This module also contains a description of how the
document's relevance is determined.
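By way of illustration, a minimal 'C' sketch of the evaluation,
(hypothetical-the operator combining rules shown are assumptions,
and not necessarily those implemented in eval.c,):

/* a token in the postfix notation list; for operands, count holds
   the incidences of the word in the file just searched */
enum token_type { OPERAND, AND, OR, NOT };

struct token
{
    enum token_type type;
    long count;
};

static long postfix_evaluate(const struct token *list, int length)
{
    long stack[64], a, b;            /* 64: sketch only, not a real limit */
    int top = 0, i;

    for (i = 0; i < length; i++)
    {
        switch (list[i].type)
        {
            case OPERAND:
                stack[top++] = list[i].count;
                break;
            case AND:                /* assumption: both present, or no relevance */
                b = stack[--top];
                a = stack[--top];
                stack[top++] = (a > 0 && b > 0) ? a + b : 0;
                break;
            case OR:                 /* assumption: relevance accumulates */
                b = stack[--top];
                a = stack[--top];
                stack[top++] = a + b;
                break;
            case NOT:                /* assumption: right operand rejects the file */
                b = stack[--top];
                a = stack[--top];
                stack[top++] = (b > 0) ? 0 : a;
                break;
        }
    }

    return (stack[0]);               /* greater than zero: file is relevant */
}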
9) The module qsortlist.c is a general function that is used to
quick sort a linked list-in this case the relevance list-and is
called from main() in rel.c.
10) The module rel.c contains main(), which is the main dispatch
function to all program operations.
11) The module relclose.c is called to shut down all operations,
deallocate allocated memory, and close all directories and files that may
have been opened by this program. For specifics, see below under
"Exception and fault handling," and relclose.c.
12) The module message.c is a general error message look up table,
for printing error messages in a systematic manner, for all modules
in the program. This module may contain port specific error
messages that are unique to a specific operating system. For
specifics, see message.c.
13) The module version.c contains only the version of the program,
and serves as a place holder for information from the revision
control system for automatic version control.
14) The module stack.h contains defines for all list operations in
all modules. The lists are treated as "stacks," and this module
contains the PUSH() and POP() defines for the stack
operations. This module is general, and is used on many different
types of data structures. For structure element requirements, see
stack.h.
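For example, defines of the general form, (a hypothetical sketch,
assuming each element begins with a pointer to the next element-see
stack.h for the actual defines and requirements,):

#define PUSH(head, element) \
    do \
    { \
        (element)->next = (head); \
        (head) = (element); \
    } while (0)

#define POP(head, element) \
    do \
    { \
        (element) = (head); \
        (head) = (head)->next; \
    } while (0)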
15) The module memalloc.c is used as a general memory allocation
routine, and contains functions for allocating memory, and making
a list of the allocated memory areas, such that it may be
deallocated when the program exits, perhaps under exception or
fault conditions.
Note that all file and directory operations are POSIX compliant
for portability reasons.
Exception and fault handling:
Since this program is a full text information retrieval system, it
is not unreasonable to assume that some of the modules may find
application in client/server architectures. This places
constraints on how the program handles fault and exception
issues. Note that it is not unreasonable to assume that a signal
interrupt does NOT cause the program to exit in a client/server
environment, and, therefore, there can be no reliance on exit() to
deallocate memory, close files and directories, etc.
Specifically, the program must be capable of vectoring to a
routine that deallocates any and all memory that has been
allocated, and closes all files and directories that have been
opened to prevent "memory leaks" and file table overflows. Since
the modules are involved in list operations, in recursive
functions, a strategy must be deployed that unconditionally
deallocates all allocated memory, closes all files and
directories, and resets all variables in the program to their
initial "state."
The basic strategy to address the issues of exception and fault
handling in client/server architectures is to centralize memory
allocation, and file and directory functions in such a manner that
shutdown routines can be called from relclose() that will
deallocate all memory allocated (memdealloc() in memalloc.c,) and
close any files and/or directories (int_searchfile () in
searchfile.c, and int_searchpath () in searchpath.c,) that may
have been opened. The function, relclose() in relclose.c, is
installed as an "interrupt handler," in main(), in rel.c.
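For example, an installation of the general form, (a hypothetical
sketch-see main() in rel.c for the actual installation and handler
signature,):

#include <signal.h>

extern void relclose(int signo);     /* shutdown routine in relclose.c */

static void install_handlers(void)
{
    signal (SIGINT, relclose);       /* vector interrupts to relclose() */
    signal (SIGTERM, relclose);
    signal (SIGHUP, relclose);
}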
Constructional and stylistic issues follow, generally, a compromise
agreement with the following references:
1) "C A Reference Manual", Samuel P. Harbison, Guy L. Steele
Jr. Prentice-Hall. 1984
2) "C A Reference Manual, Second Edition", Samuel P. Harbison,
Guy L. Steele Jr. Prentice-Hall, 1987
3) "C Programming Guidelines", Thomas Plum. Plum Hall, 1984
4) "C Programming Guidelines, Second Edition", Thomas Plum. Plum
Hall, 1989
5) "Efficient C", Thomas Plum, Jim Brodie. Plum Hall, 1985
6) "Fundamental Recommendations on C Programming Style", Greg
Comeau. Microsoft Systems Journal, vol 5, number 3, May, 1990
7) "Notes on the Draft C Standard", Thomas Plum. Plum Hall, 1987
8) "Portable C Software", Mark R. Horton. Printice Hall, 1990
9) "Programming Language - C", ANSI X3.159-1989. American
National Standards Institute, 1989
10) "Reliable Data Structures", Thomas Plum. Plum Hall, 1985
11) "The C Programming Language", Brian W. Kernighan and Dennis
M. Ritchie. Prentice-Hall, 1978
Since each module is autonomous, (with the exception of service
functions,) each module has an associated ".h" include file that
declares its externally scoped variables and function prototypes.
These files are made available to other modules by being included in
rel.h, which is included in each module's "c" source file. One of
the issues is that an include file may not
have been read before a variable declared in the include file is
used in another include file, (there are several circular
dependencies in the include files.) To address this issue, each
module's include file sets a variable, the first time it is read
by the compiler, and if this variable is set, then any subsequent
reads will be skipped. This variable name is generally of the form
of the module name, concatenated with "_H".
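For example, a module's include file is of the general form:

#ifndef LEXICON_H

#define LEXICON_H

/* external declarations and function prototypes ... */

#endif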
Each "c" source file and associated include file has an "rcsid"
static character array that contains the revision control system
"signatures" for that file. This information is included, for both
the "c" source file and its associated include file, in all object
modules for audit and maintenance.
If the stylistics listed below are annoying, the indent program
from the GNU project, (anonymous ftp to prep.ai.mit.edu in
/pub/gnu,) is available to convert from these stylistics to any
style desired.
Both ANSI X3.159-1989 and Kernighan and Ritchie standard
declarations are supported, with a typical construct:
#ifdef __STDC__
ANSI declarations.
#else
K&R declarations.
#endif
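for example, (reusing the hypothetical declaration from the
evaluation sketch, above,):

#ifdef __STDC__

extern long postfix_evaluate(const struct token *list, int length);

#else

extern long postfix_evaluate();

#endif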
Brace/block declarations and constructs use the stylistic, for
example:
for (this < that; this < those; this ++)
{
that --;
}
as opposed to:
for (this < that; this < those; this ++) {
that --;
}
Nested if constructs use the stylistic, for example:
if (this)
{
    if (that)
    {
        .
        .
        .
    }
}
as opposed to:
if (this)
    if (that)
        .
        .
        .
The comments in the source code are verbose, and beyond the
necessity of commenting the program operation, and the one liberty
taken was to write the code on a 132 column display. Many of the
comments in the source code occupy the full 132 columns, (but do
not break up the code's flow with interline comments,) and are
incompatible with text editors like vi(1). The rationale was that
it is easier to remove them with something like:
sed "s/\/\*.*\*\//" sourcefile.c > ../new/sourcefile.c
than to add them. Unfortunately, in the standard distribution of
Unix, there is no inverse command.
john@johncon.com (John Conover)
Campbell, California, USA
February, 1998