Basic concepts

==>> Search engines
Before we start talking about search engine optimization, we need to understand how search engines work. Basically, each search engine consists of three parts:

1. The Crawler (or the spider). This part of a search engine is a simple robot that downloads the pages of a website and scans them for links. It then opens and downloads each of those links to crawl (spider) them too. The crawler revisits websites periodically to find changes in their content so that their rankings can be adjusted accordingly. Depending on the quality of a website and how often its content is updated, this may happen anywhere from once per month to several times a day for highly popular news sites.

The crawler does not rank websites itself. Instead, it simply passes all crawled websites to another search engine module called the indexer.
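As a rough sketch, the crawl loop described above can be modeled in Python. The URLs and page texts here are hypothetical, and an in-memory dictionary stands in for real HTTP requests and link extraction, which a real crawler would perform:

```python
from collections import deque

# A tiny hypothetical "web": URL -> (page text, outgoing links).
WEB = {
    "https://example.com/":  ("Welcome home", ["https://example.com/a", "https://example.com/b"]),
    "https://example.com/a": ("Page A about cats", ["https://example.com/b"]),
    "https://example.com/b": ("Page B about dogs", []),
}

def crawl(start_url):
    """Breadth-first crawl: download a page, queue its links, repeat."""
    seen = {start_url}
    queue = deque([start_url])
    pages = {}                    # URL -> page text, handed to the indexer
    while queue:
        url = queue.popleft()
        text, links = WEB[url]    # in real life: an HTTP GET plus link extraction
        pages[url] = text
        for link in links:
            if link not in seen:  # avoid re-crawling the same URL twice
                seen.add(link)
                queue.append(link)
    return pages
```

Starting from `https://example.com/`, the loop discovers and downloads all three pages, even though page B is only linked from other pages.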

2. The Indexer. This module stores all the pages crawled by the spider in a large database called the index. Think of it as the index in a paper book: you look up a word and see which pages mention it. The index is not static; it updates every time the crawler finds a new page or re-crawls one that is already present in the index. Since the index is very large, it often takes time to commit all the changes to the database, so one may say that a website has been crawled but not yet indexed.
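
The book-index analogy maps directly onto a data structure known as an inverted index. A minimal Python sketch, using hypothetical URLs and page texts (a production indexer would also normalize words, handle huge volumes on disk, and much more):

```python
# Hypothetical pages as handed over by the crawler: URL -> page text.
PAGES = {
    "https://example.com/a": "Cats are great pets",
    "https://example.com/b": "Dogs are loyal pets",
}

def build_index(pages):
    """Build an inverted index: each word maps to the set of URLs
    whose text mentions it, like the index at the back of a book."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

INDEX = build_index(PAGES)
# "pets" now maps to both URLs, while "cats" maps only to the first.
```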

Once the website with all its content is added to the index, the third part of the search engine begins to work.

3. The Ranker (or search engine software). This part interacts with the user and accepts a search query. It then sifts through millions of indexed pages to find all those relevant to the query. The results are sorted by relevance and finally shown to the user.
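To make the find-then-sort step concrete, here is a toy Python ranker over a small hypothetical inverted index. It scores each page by how many distinct query words it contains; real engines combine hundreds of far more sophisticated signals:

```python
# A hypothetical inverted index (word -> URLs), as built by the indexer.
INDEX = {
    "cats": {"https://example.com/a"},
    "dogs": {"https://example.com/b"},
    "pets": {"https://example.com/a", "https://example.com/b"},
}

def search(index, query):
    """Find every page matching at least one query word, score it by the
    number of distinct query words it contains, and return it best-first."""
    scores = {}
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    # Higher score first; ties broken alphabetically for stable output.
    return sorted(scores, key=lambda url: (-scores[url], url))
```

For the query "cats pets", page A matches both words and page B only one, so A is ranked first.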

What is relevance, and how does one determine whether a page is more or less relevant to a query? Here comes the tricky part - the ranking factors...