How Search Engines and Crawlers Work

Search engines: Crawlers

Search engines use crawlers, or robots, whose main job is to collect information site by site, page by page. How do crawlers do this? When a search engine crawler visits a web site, it reads the site's content and then follows the hyperlinks from page to page. The major search engines are therefore much more likely to find your site if many other sites link to it. Googlebot, ZyBorg, Slurp, Scooter, Zealbot, Ia_archiver, and FAST-WebCrawler are among the most frequent visitors of our web site.
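The link-following step described above can be sketched in a few lines with Python's standard html.parser module. The page content below is made up for illustration; a real crawler would fetch it over the network and queue each discovered URL for a later visit.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag, which is how a
    crawler discovers new pages to visit."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content a crawler might have fetched.
page = '<a href="/about.html">About</a> <a href="http://example.com/">Example</a>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # the URLs the crawler would queue next
```

A real crawler repeats this loop: fetch a page, extract its links, add unseen links to the queue, and move on.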

Search engines: Optimal web design

The most search-engine-friendly pages are plain-text static HTML. A search engine crawler cannot collect content from a database, nor can it fill out a form of any kind. Dynamic pages can block web crawlers, and frames confuse them. Search engines cannot index pictures and graphics unless ALT text is provided to describe them. If a page is very complex, the crawler may time out before it can index all the text. Finally, a site will not be listed in a search engine's index if it cannot be crawled at all, for example because of network or web hosting problems.
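The ALT-text point can be checked mechanically. Here is a small sketch, again using Python's standard html.parser, that flags images with no ALT text, i.e. content a crawler sees but cannot index as text. The sample HTML is hypothetical.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Records the src of every <img> tag that has no ALT text."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing_alt.append(attr_map.get("src", "?"))

# Hypothetical page: one image described, one not.
page = '<img src="logo.gif" alt="Company logo"><img src="photo.jpg">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # images a crawler cannot describe
```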

Search engines: Index

All the information the crawlers find is collected in the search engine's index, its database. The index contains a copy of every crawled page. Once a web site is placed in the index, the crawler returns to it on a regular basis. When the crawler finds that a page's content has changed, it updates the index with the new information.
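Conceptually, a search engine index is an inverted index: for each word, the set of pages that contain it. Here is a toy sketch built from hypothetical crawled pages; real indexes also store copies of the pages, word positions, and much more.

```python
# Hypothetical pages the crawler has fetched (URL -> page text).
crawled_pages = {
    "http://example.com/a.html": "search engines use crawlers",
    "http://example.com/b.html": "crawlers follow hyperlinks",
}

# Build the inverted index: word -> set of URLs containing it.
index = {}
for url, text in crawled_pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

# When the crawler revisits a page and finds changes, the engine
# removes the page's old entries and re-adds the new words.
print(sorted(index["crawlers"]))  # both pages contain "crawlers"
```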

Search engines: Web site ranking

Ranking software is the last piece of a search engine. It searches the huge database of web pages stored in the index for pages matching your query, then ranks them in order of relevance.
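As a toy illustration of "rank by relevance", the sketch below scores each page by how often the query terms appear in it and sorts by that score. Real engines use far more signals (link structure, anchor text, and so on); the page contents here are hypothetical.

```python
# Hypothetical indexed pages (URL -> page text).
pages = {
    "a.html": "web crawlers visit sites and read content",
    "b.html": "crawlers crawlers crawlers everywhere",
    "c.html": "unrelated cooking recipes",
}

def rank(query, pages):
    """Return page URLs matching the query, most relevant first.
    Relevance here is simply total query-term frequency."""
    terms = query.lower().split()
    scores = {
        url: sum(text.lower().split().count(t) for t in terms)
        for url, text in pages.items()
    }
    # Highest score first; drop pages that match nothing.
    return [u for u, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(rank("crawlers", pages))  # b.html outranks a.html; c.html is excluded
```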

References: Google | AltaVista