A crawler for the internet
It has two main functions: crawl the web to fetch documents, and build a full-text database from those documents. The crawler part visits documents and stores interesting information about them locally. It revisits each document on a regular basis to make sure it is still there, and updates the stored copy if the document changes.
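The revisit-and-update behaviour described above can be sketched with a small content-hash check: store a digest per URL, and on each visit compare the new digest against the stored one to decide whether the document is new, changed, or unchanged. This is an illustrative sketch (the `CrawlStore` class and its method names are hypothetical, not taken from the package itself):

```python
import hashlib
import time

class CrawlStore:
    """Minimal local store for crawled documents (illustrative only).

    Keeps a content hash per URL so that a revisit can tell whether
    the document has changed since the last crawl.
    """

    def __init__(self):
        # url -> {"hash": content digest, "text": body, "fetched": timestamp}
        self.docs = {}

    def visit(self, url, text):
        """Record a visit and return 'new', 'updated', or 'unchanged'."""
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        entry = self.docs.get(url)
        if entry is None:
            status = "new"
        elif entry["hash"] != digest:
            status = "updated"
        else:
            status = "unchanged"
        self.docs[url] = {"hash": digest, "text": text, "fetched": time.time()}
        return status

store = CrawlStore()
print(store.visit("http://example.com/", "hello"))     # first visit: new
print(store.visit("http://example.com/", "hello"))     # same body: unchanged
print(store.visit("http://example.com/", "hello v2"))  # body changed: updated
```

A real crawler would typically pair this with HTTP conditional requests (If-Modified-Since or ETag headers) so unchanged documents need not be downloaded in full.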
There are run-time dependencies that need to be installed first. Why not use depothelper to install them all in one go?
| Operating System | Architecture | Package Type | Package Size | Date Archived | View Contents? | Download |
|---|---|---|---|---|---|---|
| | | | 1.30 MB | 11 Mar 2002 | Yes | HTTP FTP |