The search industry is moving at a faster pace than it did five years ago. Search giant Google makes monthly changes to its algorithm, and around every corner of the Internet there seems to be a start-up looking to innovate in the search market.
Competition is quite healthy, but it makes the life of an SEO very difficult. Let's just say the days of keyword stuffing, blog commenting and mass content production are over. This is a new search economy, a white hat economy. Here are some tips to get you started in optimizing your site.
Information architecture is the process of categorizing all the content on your website so it can easily be crawled by the search spiders. With a proper structure in place, older posts and articles stay indexed for longer and continue to contribute as traffic sources.
Let's use an example to better illustrate the concept of content architecture. Say you are the webmaster of a programming website that discusses various types of programming languages. At the category level, you might label sections Linux, C++, Java, etc. At the sub-category level, you might break Linux down into sections such as Ubuntu, Fedora, Debian, etc. Once the categories and sub-categories are identified, you can begin creating content around each sub-category. For example, you might write a post such as How to Install Gentoo or Top Linux Blogs on the Web.
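The hierarchy above might map to a site structure like this (the paths are illustrative, not prescriptive):

```
/programming/
├── /linux/
│   ├── /linux/ubuntu/
│   ├── /linux/fedora/
│   ├── /linux/debian/
│   └── posts: /linux/how-to-install-gentoo/, /linux/top-linux-blogs/
├── /c-plus-plus/
└── /java/
```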
Interlinking with Silos
Since links are roads for search engine crawlers, it is important to place links in new articles pointing to related articles posted a while back. Here are two interlinking tips that should be implemented immediately. The simplest way to interlink is to place a link between two articles in the same silo. For example, your How to Install Gentoo article could link to an older article such as The Pros and Cons of Gentoo. Linking to older articles on your own website counts as a new link pointing to the older piece, signalling to the search engines that it is still relevant.
The second interlinking technique is related articles. The majority of publishing platforms and CMSs have plug-ins that automatically select related articles and place them at the bottom of the post. This is a quick and easy way to get the interlinking done. If, for whatever reason, you do not have access to a plugin, you can always do it manually. Although this takes more time, the improvement in the search engine results will be worth it.
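The related-articles logic most plug-ins use boils down to picking other posts from the same silo. Here is a minimal sketch of that idea; the article data and function name are hypothetical examples, not part of any real CMS:

```python
# Minimal "related articles" selector: pick other posts from the same
# silo (category). The articles and field names here are illustrative.

def related_articles(current, all_articles, limit=3):
    """Return up to `limit` other articles in the same silo as `current`."""
    return [
        a for a in all_articles
        if a["category"] == current["category"] and a["title"] != current["title"]
    ][:limit]

articles = [
    {"title": "How to Install Gentoo", "category": "linux"},
    {"title": "The Pros and Cons of Gentoo", "category": "linux"},
    {"title": "Top Linux Blogs on the Web", "category": "linux"},
    {"title": "Getting Started with Java", "category": "java"},
]

current = articles[0]
for article in related_articles(current, articles):
    print(article["title"])
```

A real plug-in would typically also weight by tags or publish date, but same-silo filtering is the core of the technique.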
Keyword hierarchy may require a little more skill, as it involves matching specific keywords with the proper section. This calls for proper keyword research and the know-how to strategically place those keywords in page titles and descriptions, as well as tailoring the content to contain a fair number of keywords.
When placing keywords, it is important to note which of your pages are the strongest. Usually these are the homepage or category pages. Once you identify the strongest pages, place the most competitive keywords (usually short tail keywords) on them and work your way down the information architecture. By the time you get down to creating content, you might be optimizing for long tail keywords such as "linux gentoo download guide".
Another reason longer tail keywords should be targeted on lower-level pages is that they are typically easier to rank for. Those pages therefore require less authority to rank for your targeted long tail keyword.
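The pairing described above can be sketched in a few lines: sort keywords from head terms to long tail, then assign them to pages from strongest to weakest. The page names and keyword list below are hypothetical examples for illustration only:

```python
# Illustrative sketch: pair the most competitive (shortest) keywords
# with the strongest pages, and long tail phrases with deeper pages.
# The pages and keywords are made-up examples.

pages_by_strength = ["homepage", "category: linux", "post: gentoo guide"]
keywords = ["linux gentoo", "linux gentoo download guide", "linux"]

# Sort keywords by word count: head terms first, long tail last,
# then zip them against pages ordered strongest to weakest.
plan = dict(zip(pages_by_strength,
                sorted(keywords, key=lambda k: len(k.split()))))

for page, keyword in plan.items():
    print(f"{page} -> {keyword}")
```

Word count is a crude proxy for competitiveness; real keyword research would use search volume and difficulty data instead.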
Duplicate content is when two URLs display identical or similar page content. This is viewed as problematic by Google and can get your website penalized. Here are three techniques that can be used to remove duplicate content from the index.
Robots.txt is a file where URL paths can be listed and blocked from the search engines. For example, let's say you originally launched your website with the path /c-prog/ and you wanted to change it to /c-programming/; this would be a good opportunity to place the old path in the robots.txt file. A single line of code will block every URL containing that path. Note: ensure that all the URLs you want to keep are 301 redirected to the new URLs before placing the path in the robots.txt file.
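Continuing the example above, the robots.txt entry for blocking the old path would look like this:

```
User-agent: *
Disallow: /c-prog/
```

Note that robots.txt blocks crawling of matching URLs; combined with the 301 redirects mentioned above, the old path drops out of circulation.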
For a quick fix, the noindex tag works great to resolve duplicate content issues. The code is placed in the page's <head> section and is as follows:
<meta name="robots" content="noindex">
Canonical tags act in a very similar manner to the noindex tag and do a great job removing duplicate content. The code for adding a canonical tag is as follows:
<link rel="canonical" href="..."/>
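In context, a duplicate page points search engines at its preferred version. Using the earlier /c-programming/ example (the domain is a placeholder):

```html
<head>
  <!-- Tells search engines that /c-programming/ is the preferred URL -->
  <link rel="canonical" href="http://example.com/c-programming/"/>
</head>
```

Unlike noindex, the canonical tag consolidates ranking signals onto the preferred URL rather than simply removing the duplicate from the index.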
URL rewrites should only be done in extreme cases where a URL cannot be processed, and therefore cannot be indexed, by the search engines. If pages are not being crawled by the spiders, they are not being indexed and will not bring any traffic. To resolve this issue, URLs should be reduced to a reasonable length and should contain keywords if possible. The best approach is to have the URL mirror the hierarchy of the site. For example, http://slodive.com/freebies/twitter-tools-to-increase-your-productivity/ is a clean URL that informs both search engines and users what the page is about.
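If your site runs on Apache (an assumption; other servers have their own equivalents), a rewrite with a 301 redirect for the earlier /c-prog/ to /c-programming/ example could be sketched in .htaccess as:

```
RewriteEngine On
# 301-redirect every old /c-prog/ URL to its /c-programming/ equivalent
RewriteRule ^c-prog/(.*)$ /c-programming/$1 [R=301,L]
```

The R=301 flag tells search engines the move is permanent, so the old URLs pass their authority to the new ones.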
These tips are great to get any webmaster started in the SEO field. I would suggest following some SEO blogs as well as Matt Cutts of Google to stay on top of any changes that may be implemented.
Nisha is the head blogger for Slodive.com. She loves tattoos and inspirational quotes. Check her out on Google Plus: https://plus.google.com/u/0/116437517919411097994.