Open Source Crawlers in Java
Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.
Go To Heritrix
WebSPHINX (Website-Specific Processors for HTML INformation eXtraction) is a Java class library and interactive development environment for Web crawlers that browse and process Web pages automatically (sketched below).
Go To WebSPHINX
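As a rough illustration of the library style, a WebSPHINX crawler is written by subclassing its Crawler class and overriding the page callbacks. The sketch below uses class and method names recalled from WebSPHINX's API (Crawler, Link, Page, shouldVisit, visit, setRoot); treat the exact signatures as assumptions to check against the library itself.

    import websphinx.Crawler;
    import websphinx.Link;
    import websphinx.Page;

    // Sketch: subclass websphinx.Crawler and override its two callbacks.
    public class TitlePrinter extends Crawler {

        // Decide whether a discovered link should be followed.
        public boolean shouldVisit(Link link) {
            return "example.com".equals(link.getHost()); // stay on one host
        }

        // Called for every downloaded page.
        public void visit(Page page) {
            System.out.println(page.getURL() + " : " + page.getTitle());
        }

        public static void main(String[] args) throws Exception {
            TitlePrinter crawler = new TitlePrinter();
            crawler.setRoot(new Link("http://example.com/")); // seed URL
            crawler.run();                                    // crawl synchronously
        }
    }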
A highly configurable and customizable Web Spider engine, developed under the LGPL Open Source license, in 100% pure Java.
Go To JSpider
A 100% pure Java program for web site retrieval and offline viewing.
Go To WebEater
Java Web Crawler is a simple Web crawling utility written in Java. It supports the robots exclusion standard (illustrated below).
Go To Java Web Crawler
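The robots exclusion standard mentioned above is simply a robots.txt file at the site root listing paths crawlers should skip. Java Web Crawler's own API is not reproduced here; the following JDK-only sketch shows the naive idea of honouring Disallow rules for the wildcard user-agent (a real parser must also handle Allow lines, multiple agent groups and wildcards).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;

    public class RobotsCheck {
        public static void main(String[] args) throws Exception {
            List<String> disallowed = new ArrayList<>();
            boolean forAllAgents = false;
            URL robots = new URL("https://example.com/robots.txt"); // placeholder host
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(robots.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.regionMatches(true, 0, "User-agent:", 0, 11)) {
                        forAllAgents = line.substring(11).trim().equals("*");
                    } else if (forAllAgents && line.regionMatches(true, 0, "Disallow:", 0, 9)) {
                        String path = line.substring(9).trim();
                        if (!path.isEmpty()) disallowed.add(path);
                    }
                }
            }
            String candidate = "/private/report.html";
            boolean allowed = disallowed.stream().noneMatch(candidate::startsWith);
            System.out.println(candidate + (allowed ? " may" : " must not") + " be crawled");
        }
    }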
WebLech is a fully featured web site download/mirror tool in Java, which supports many features required to download websites and to emulate standard web-browser behaviour as closely as possible. WebLech is multithreaded and will feature a GUI console.
Go To WebLech
Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple Web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page of a Web site is parsed (see the sketch below).
Go To Arachnid
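Arachnid's precise callback names aren't quoted in the blurb above, so the sketch below is deliberately hypothetical: a stand-in abstract base class is declared inline purely so the example compiles, and handlePage mimics whatever per-page hook the real framework invokes after parsing. In real use you would extend Arachnid's own base class instead.

    // Stand-in for Arachnid's abstract spider base class (hypothetical names).
    abstract class SpiderBase {
        protected abstract void handlePage(String url, String html); // assumed hook
        // The real framework drives this from its crawl loop after parsing a page.
        public void deliver(String url, String html) { handlePage(url, html); }
    }

    // The pattern the description refers to: subclass and add a few lines of
    // code that run after each page of a Web site is parsed.
    public class PageCounter extends SpiderBase {
        private int pages = 0;

        protected void handlePage(String url, String html) {
            pages++;
            System.out.println(pages + ": " + url + " (" + html.length() + " chars)");
        }

        public static void main(String[] args) {
            PageCounter spider = new PageCounter();
            spider.deliver("http://example.com/", "<html><body>hello</body></html>");
        }
    }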
JoBo is a simple program to download complete websites to your local computer. Internally it is basically a web spider. Its main advantage over other download tools is that it can automatically fill out forms (e.g. for automated login) and use cookies for session handling. Compared to other products the GUI seems very simple, but it is the internal features that matter: few download tools can log in to a web server and download content when that server uses web forms for login and cookies for session handling (the sketch below shows the general idea). JoBo also features very flexible rules to limit downloads by URL, size and/or MIME type.
Go To JoBo
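JoBo's API is not shown here; purely as a concept sketch, the JDK's HttpClient (Java 11+) can reproduce the two behaviours the paragraph highlights: posting a login form, then carrying the resulting session cookie into later downloads. The URL and form field names are made up.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FormLoginFetch {
        public static void main(String[] args) throws Exception {
            // The cookie manager stores the session cookie set at login.
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager())
                    .build();

            // 1. Submit the login form (hypothetical URL and field names).
            HttpRequest login = HttpRequest.newBuilder(URI.create("https://example.com/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString("user=alice&pass=secret"))
                    .build();
            client.send(login, HttpResponse.BodyHandlers.discarding());

            // 2. Fetch a protected page; the stored cookie is sent automatically.
            HttpRequest page = HttpRequest.newBuilder(URI.create("https://example.com/members"))
                    .GET().build();
            HttpResponse<String> response =
                    client.send(page, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }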
Web-Harvest is an open-source Web data extraction tool written in Java. It offers a way to collect desired Web pages and extract useful data from them. To do that, it leverages well-established techniques and technologies for text/XML manipulation such as XSLT, XQuery and regular expressions. Web-Harvest mainly focuses on HTML/XML-based web sites, which still make up the vast majority of Web content. It can also easily be supplemented by custom Java libraries to augment its extraction capabilities (see the sketch below).
Go To Web-Harvest
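Web-Harvest pipelines are defined in XML configuration files; from Java the tool is embedded by loading a configuration and executing a scraper over it. The class names below (ScraperConfiguration, Scraper) are recalled from Web-Harvest's embedding API and should be verified against the release you use; config.xml and work are placeholder paths.

    import org.webharvest.definition.ScraperConfiguration;
    import org.webharvest.runtime.Scraper;

    public class HarvestRunner {
        public static void main(String[] args) throws Exception {
            // Load an XML pipeline definition (XPath / XQuery / regex processors).
            ScraperConfiguration config = new ScraperConfiguration("config.xml");

            // Execute the pipeline; the second argument is the working directory
            // where downloaded content is kept.
            Scraper scraper = new Scraper(config, "work");
            scraper.execute();
        }
    }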
Bixo is an open source web mining toolkit that runs as a series of Cascading pipes on top of Hadoop. By building a customized Cascading pipe assembly, you can quickly create specialized web mining applications that are optimized for a particular use case (illustrated below).
Go To Bixo
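Bixo's own fetch and parse pipes aren't reproduced here; to give the flavour of a Cascading pipe assembly, the sketch below wires one plain Cascading filter over a stream of URL tuples. The Cascading classes (Pipe, Each, Fields, RegexFilter) are standard, but how Bixo's crawler-specific pipes slot into such an assembly should be taken from Bixo's own examples.

    import cascading.operation.regex.RegexFilter;
    import cascading.pipe.Each;
    import cascading.pipe.Pipe;
    import cascading.tuple.Fields;

    // A minimal Cascading assembly: keep only tuples whose "url" field
    // looks like an HTTP(S) link. Bixo-style assemblies chain fetch and
    // parse pipes onto a head pipe in exactly this way.
    public class UrlFilterAssembly {
        public static Pipe create() {
            Pipe urls = new Pipe("urls"); // head of the assembly
            urls = new Each(urls, new Fields("url"), new RegexFilter("^https?://.*"));
            return urls;
        }
    }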
Crawler4j is a Java library which provides a simple interface for crawling the web. Using it, you can set up a multi-threaded web crawler in five minutes (see the example below). It is also very efficient: it has been able to download and parse 200 pages per second on a quad-core PC with a cable connection.
Go To Crawler4j
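A near-minimal setup following crawler4j's well-known usage pattern: subclass WebCrawler, then hand that class to a CrawlController with a storage folder and a seed URL. Signatures vary between crawler4j releases (this sketch matches the 4.x API), so check them against the version you depend on; the site URL is a placeholder.

    import edu.uci.ics.crawler4j.crawler.CrawlConfig;
    import edu.uci.ics.crawler4j.crawler.CrawlController;
    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;
    import edu.uci.ics.crawler4j.fetcher.PageFetcher;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
    import edu.uci.ics.crawler4j.url.WebURL;

    public class MyCrawler extends WebCrawler {

        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            // Restrict the crawl to a single (placeholder) site.
            return url.getURL().startsWith("https://example.com/");
        }

        @Override
        public void visit(Page page) {
            System.out.println("Visited: " + page.getWebURL().getURL());
        }

        public static void main(String[] args) throws Exception {
            CrawlConfig config = new CrawlConfig();
            config.setCrawlStorageFolder("/tmp/crawl");        // intermediate data
            PageFetcher fetcher = new PageFetcher(config);
            RobotstxtServer robots =
                    new RobotstxtServer(new RobotstxtConfig(), fetcher);
            CrawlController controller =
                    new CrawlController(config, fetcher, robots);
            controller.addSeed("https://example.com/");
            controller.start(MyCrawler.class, 4);              // 4 crawler threads
        }
    }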
Ex-Crawler is divided into three subprojects. The Ex-Crawler server daemon is a highly configurable, flexible (Web-)crawler, including distributed grid / volunteer computing features, written in Java. Crawled information is stored in a MySQL, MSSQL or PostgreSQL database. It supports plugins through multiple plugin interfaces and comes with its own socket server, through which you can configure it, add URLs and much more, including user accounts and user levels, which are shared with the web frontend search engine.
With the Ex-Crawler distributed crawling graphical client, other people / computers can crawl and analyse websites, images and more for the crawler.
The third part of the project is the web frontend search engine.
Go To Ex-Crawler
Java is a trademark or registered trademark of Sun Microsystems, Inc. in the United States
and other countries. This site is independent of Sun Microsystems, Inc.