One of the first things I did after opening my blog was think about how people were going to reach it, so I started by searching for it on Google. I typed “The Code Lodge”, hit Enter, and couldn’t find my site anywhere. Searching for just “Code Lodge” also got me nothing. At first I thought something must have gone wrong when launching the site, but on second thought it made sense.
I assumed the reason was that I had launched it only a few hours earlier, and it’s only a baby in terms of the web. That got me thinking about how these things work and what I could do to make my site searchable. I realized I was clueless about how modern search engines operate, and that this was a huge gap in my knowledge.
So how does the search engine magic work? It consists of two main stages: crawling and indexing.
Crawling is the process of fetching publicly accessible content (web pages) and following the links found on those pages. Automated bots do this nonstop.
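To get a feel for the crawling loop, here is a minimal sketch in Python (standard library only; the seed URL is just a placeholder). Real crawlers add politeness delays, robots.txt checks, deduplication, and much more, but the core idea really is “fetch a page, collect its links, repeat”:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    """Breadth-first crawl from `seed`, fetching at most `max_pages` pages."""
    visited, queue = set(), deque([seed])
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # unreachable or unsupported URL: skip it
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            queue.append(urljoin(url, href))  # resolve relative links
    return visited

# Hypothetical seed URL, purely for illustration:
print(crawl("https://example.com/"))
```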
The robots.txt file lets you allow or block crawler access to specific content on your site. It must be named robots.txt and sit in your site’s root folder. Make sure you don’t abuse it to secure private content: it’s a polite request that well-behaved bots honor, not access control. For a real-world example, have a look at http://www.facebook.com/robots.txt .
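And here is how crawlers actually read that file, sketched with Python’s standard urllib.robotparser; the robots.txt content and the /private/ folder below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block every bot from /private/, allow the rest.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/posts/hello"))    # True
print(rp.can_fetch("*", "https://example.com/private/draft"))  # False
```

A well-behaved crawler runs exactly this check before fetching any URL on your site.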
Indexing is the process of gathering information about a page so it will be available in search results.
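In its simplest form, that gathered information is an inverted index: a map from each word to the set of pages containing it, so a query becomes a quick lookup instead of a scan of the whole web. A toy sketch in Python, with made-up page contents:

```python
import re
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> page text. Returns word -> set of URLs."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)
    return index

# Hypothetical pages, purely for illustration:
pages = {
    "https://example.com/a": "Search engines crawl and index pages",
    "https://example.com/b": "Crawling means following links between pages",
}
index = build_index(pages)
print(index["pages"])  # both URLs
print(index["crawl"])  # only the first page
```

Real engines obviously store far more than this (word positions, link data, ranking signals), but the lookup idea is the same.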
It is important to distinguish between the two stages. Your page can be crawled but not indexed, and vice versa: a page blocked by robots.txt can still show up in the index if enough other sites link to it, while a crawled page carrying a noindex meta tag will be kept out of the index. Interfering with either stage can drastically affect your search visibility, so do it with caution.
As you can see, the publicly available content on your site has a major effect on the search process. When you allow a search engine to crawl a page, the links on that page get crawled too, the content it contains gets indexed, and it eventually shows up in search results.
SEO? Stay tuned for a post on how to optimize your site’s search results.