Web crawler

From Wiki @ Karl Jones dot com

A web crawler (also known as a web spider) is an Internet bot that systematically browses the World Wide Web, typically for the purpose of web indexing.

Description

Web search engines and some other sites use web crawling or spidering software to update their own web content or their indexes of other sites' web content.

Web crawlers can copy the pages they visit for later processing by a search engine, which indexes the downloaded pages so that users can search them much more efficiently.
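The visit-and-copy behaviour described above can be sketched as a breadth-first traversal of a link graph. The following is a minimal illustration, not a production crawler: it replaces real HTTP fetches with a hypothetical in-memory set of pages (the `PAGES` dictionary is an assumption for the example) and uses Python's standard `html.parser` to extract links.

```python
from html.parser import HTMLParser
from collections import deque

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny in-memory "web" standing in for real HTTP fetches (hypothetical data).
PAGES = {
    "/index": '<a href="/a">A</a> <a href="/b">B</a>',
    "/a": '<a href="/b">B again</a>',
    "/b": '<a href="/index">home</a>',
}

def crawl(seed):
    """Breadth-first crawl: fetch each page once, store it, follow its links."""
    seen = {seed}
    frontier = deque([seed])
    index = {}
    while frontier:
        url = frontier.popleft()
        html = PAGES.get(url, "")
        index[url] = html  # a search engine would parse and index this content
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index

index = crawl("/index")
```

A real crawler would add politeness delays, `robots.txt` handling, and URL normalization; the `seen` set here is what keeps the traversal from revisiting pages.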

Crawlers can validate hyperlinks and HTML code.
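Link validation can be reduced to checking that every link on every crawled page points at a page the crawler actually found. A minimal sketch under that assumption (the `SITE` map is hypothetical example data):

```python
# Hypothetical site map: URL -> list of hrefs found on that page.
SITE = {
    "/index": ["/about", "/missing"],
    "/about": ["/index"],
}

def broken_links(site):
    """Return (page, link) pairs whose link target is not a known page."""
    known = set(site)
    return [(page, link)
            for page, links in site.items()
            for link in links
            if link not in known]

# broken_links(SITE) → [("/index", "/missing")]
```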

Web scraping

Crawlers can also be used for web scraping (see also data-driven programming).
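Where crawling gathers whole pages, scraping pulls specific data out of them. As one illustrative sketch (the sample HTML string is invented for the example), Python's standard `html.parser` can extract a page's title:

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Scrapes the text of the <title> element from an HTML document."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html_doc = "<html><head><title>Web crawler</title></head><body>...</body></html>"
scraper = TitleScraper()
scraper.feed(html_doc)
# scraper.title == "Web crawler"
```

In practice, scrapers target many fields (prices, headlines, contact details) and often use dedicated parsing libraries, but the pattern is the same: walk the markup and keep only the data of interest.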

See also

External links