How Does a Search Engine Work

A search engine helps us retrieve information from the World Wide Web. The World Wide Web is often described as a universe of information, accessed by people everywhere.

The World Wide Web makes it possible to share information globally. It stores an enormous amount of information, which makes finding any specific piece of it difficult. Search engines were introduced to make that search easy.

An Internet search engine performs search operations on the web to retrieve the required information. Most search engines use a crawler-indexer architecture. Crawlers, also known as spiders, are small programs that browse the web.
Crawlers are given an initial set of URLs to start from. They fetch the pages at those URLs, extract the URLs that appear on the crawled pages, and pass this information to the crawler control module, which decides which page should be visited next.
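The crawl loop described above can be sketched as follows. This is a minimal illustration, not a production crawler: the `PAGES` dictionary is a hypothetical stand-in for the real fetching and link-extraction a crawler would perform over HTTP, and the breadth-first frontier plays the role of the crawler control module.

```python
from collections import deque

# A canned "web": page URL -> list of URLs linked from that page.
# In a real crawler this would come from fetching and parsing each page.
PAGES = {
    "http://a.example": ["http://b.example", "http://c.example"],
    "http://b.example": ["http://c.example"],
    "http://c.example": ["http://a.example"],
}

def crawl(seed_urls):
    """Breadth-first crawl: start from the seed URLs, extract the links
    from each crawled page, and feed unseen URLs back into the frontier."""
    frontier = deque(seed_urls)  # URLs the control module still has to visit
    visited = set()
    crawled = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        crawled.append(url)
        for link in PAGES.get(url, []):  # links extracted from the page
            if link not in visited:
                frontier.append(link)
    return crawled

print(crawl(["http://a.example"]))
# → ['http://a.example', 'http://b.example', 'http://c.example']
```

Starting from a single seed, the crawler discovers every reachable page exactly once; the `visited` set is what keeps the loop from revisiting pages in a cyclic link graph.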
Different search engines use different algorithms, so the topics they cover also differ. Some search engines are programmed to search only sites on a particular topic.

The indexer module extracts the words from each page and records the URL where each word occurs. The result is a lookup table that maps each word to a list of URLs; these URLs point to the pages where that word appears.
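A lookup table of this kind is commonly called an inverted index. The sketch below, using made-up documents, shows the idea: for each page, record every word it contains, so the table can later answer "which pages contain word X?" in one lookup.

```python
def build_index(pages):
    """Build an inverted index: word -> sorted list of URLs containing it."""
    index = {}
    for url, text in pages.items():
        # set() so each page is recorded at most once per word
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return {word: sorted(urls) for word, urls in index.items()}

# Hypothetical pages standing in for crawled content.
docs = {
    "u1": "search engines index the web",
    "u2": "the web stores information",
}
index = build_index(docs)
print(index["web"])     # → ['u1', 'u2']
print(index["search"])  # → ['u1']
```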

Another important part of the search engine architecture is the collection analysis module, which creates the utility index.

The pages retrieved during crawling and indexing are stored in temporary storage known as the repository. A search engine also maintains a cache of the pages it has visited, so already-visited pages can be served quickly.
The query module of a search engine receives the user's search request in the form of keywords and looks them up in the index; the ranking module then sorts the matching results.
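Query handling and ranking can be sketched together. In this toy version, the query module returns only the pages that contain every keyword, and the ranking module scores each page by how often the query words occur in it; real ranking algorithms are far more sophisticated, so this is only a stand-in to show where each module fits.

```python
from collections import Counter

# Hypothetical crawled pages.
DOCS = {
    "u1": "search engine helps users search the web",
    "u2": "the web stores information",
    "u3": "search the web for information",
}

def build_index(docs):
    """Inverted index: word -> set of URLs containing it."""
    index = {}
    for url, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def query(index, docs, keywords):
    """Query module: keep pages containing every keyword.
    Ranking module: sort them by total term frequency (a crude score)."""
    words = [w.lower() for w in keywords.split()]
    matches = set(docs)
    for w in words:
        matches &= index.get(w, set())
    def score(url):
        counts = Counter(docs[url].lower().split())
        return sum(counts[w] for w in words)
    return sorted(matches, key=score, reverse=True)

index = build_index(DOCS)
print(query(index, DOCS, "search web"))  # → ['u1', 'u3']
```

"u1" ranks first because "search" appears twice in it, illustrating how the ranking module, not the index lookup, determines the display order.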

There are many variants of the crawler-indexer architecture; one important modification is the distributed architecture, which consists of gatherers and brokers. Gatherers collect the indexing information, while brokers provide the indexing mechanism and the query interface. Brokers update their indices using the information supplied by gatherers and by other brokers, and the information can be filtered along the way. Many search engines today use this kind of architecture.
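The division of labor between gatherers and brokers can be sketched as two functions: each gatherer indexes its own collection of pages, and a broker merges the partial indices it receives. The function names and sample pages here are hypothetical; the point is only the data flow between the two roles.

```python
def gather(pages):
    """Gatherer: collect indexing information (word -> set of URLs)
    from one collection of pages."""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def broker_merge(*partial_indices):
    """Broker: merge the indices delivered by several gatherers
    (or other brokers) into one combined index."""
    merged = {}
    for idx in partial_indices:
        for word, urls in idx.items():
            merged.setdefault(word, set()).update(urls)
    return merged

g1 = gather({"u1": "search engines crawl the web"})
g2 = gather({"u2": "brokers merge gathered indices"})
combined = broker_merge(g1, g2)
print(sorted(combined["web"]))  # → ['u1']
```

Because a broker accepts indices from other brokers as well as from gatherers, the same merge step lets the architecture scale out in layers.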

A search engine displays the results of a query in a particular order. Many people visit only the pages at the top of the list and ignore the rest, believing that only the top results are most relevant to their queries. Consequently, everyone wants their pages ranked within the first ten.

To raise a site's rank, webmasters try tricks such as stuffing the site's home page with keywords. If you really want to do serious research, though, you should understand that serious content is not limited to the top-ranked pages; the pages beyond them can contain serious content as well.
