A crawler for the internet

It has two main functions: crawl the web to fetch documents and build a full-text database from those documents. The crawler part visits documents and stores interesting information about them locally. It revisits each document on a regular basis to make sure it is still there and updates the stored copy if the document has changed.
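The revisit-and-update logic described above can be sketched in a few lines. This is a minimal illustration, not the package's actual code: it assumes an in-memory store keyed by URL and a pluggable `fetch` callable (both invented here for the example), and detects changes by hashing the page body.

```python
import hashlib


def content_digest(body: bytes) -> str:
    # Hash the page content so a later visit can detect changes cheaply.
    return hashlib.sha256(body).hexdigest()


class CrawlStore:
    """Tiny in-memory stand-in for the crawler's local document store."""

    def __init__(self):
        self.docs = {}  # url -> digest of the last stored copy

    def visit(self, url: str, fetch) -> str:
        """Visit a document and report its status.

        `fetch(url)` returns the page body as bytes, or None if the
        document is no longer there.
        """
        body = fetch(url)
        if body is None:
            # Document has disappeared: drop it from the store.
            self.docs.pop(url, None)
            return "gone"
        digest = content_digest(body)
        old = self.docs.get(url)
        self.docs[url] = digest
        if old is None:
            return "new"
        return "unchanged" if old == digest else "updated"
```

For example, visiting the same URL twice with identical content reports "unchanged", while a changed body reports "updated" and a missing one "gone".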

There are run-time dependencies that need to be installed first. Why not use depothelper to install them all in one go?

Run-time dependencies:
mysql uri
Build-time dependencies:
gcc make mysql uri
Operating System  Architecture  Package Type                Package Size  Date Archived  View Contents?  Download
HP-UX             -             Tarred/Gzipped Source Code  1.30 MB       11 Mar 2002    Yes             HTTP FTP