how search engines identify keyword stuffing

SEO can help Google and other search engines identify a website's content, but keyword stuffing cannot, especially when software is used to string keywords together into a so-called article, which is an early spam trick. Keyword stuffing attempts to increase the number of indexed pages or the rank of a site, but search engines are now able to identify this kind of cheating.

how do search engines identify keyword stuffing?

First of all, search engine companies already have quality control departments. Baidu, for example, is a search engine in China with humans in the loop: if users find a spam post, they can report it to the relevant department.
Second, search engines such as Google can also identify keyword stuffing automatically.

How can keyword stuffing be identified automatically? Commonly by statistical analysis.

First, the L/N ratio

1. A search engine can segment the words in a post, which gives it two numbers: the number of words N and the length of the post L.

2. There is a relationship between the length L and the number of words N: the value of L/N mostly falls between 4 and 8, with an average of about 5 to 6. If a post is 1000 bytes long, it should therefore contain roughly 125 to 250 words. The ratio differs slightly between Chinese and English.

3. If a search engine finds that L/N is too large, the post is suspected of keyword stuffing; if L/N is very small, the post may be a machine-assembled so-called article.

Statistics show that there is a definite relationship between the keywords in a post and its L/N value.
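The L/N heuristic described above can be sketched in a few lines of code. This is only an illustration of the idea, not any search engine's actual implementation: the thresholds 4 and 8 come from the range mentioned above, the whitespace-based word segmentation is naive (real engines use proper tokenizers, especially for Chinese), and the sample posts are invented.

```python
def length_to_word_ratio(post: str) -> float:
    """Return L/N: post length in bytes divided by the number of words."""
    words = post.split()            # naive segmentation; real engines use tokenizers
    n = len(words)                  # N: number of words
    l = len(post.encode("utf-8"))   # L: post length in bytes
    return l / n if n else float("inf")

def looks_stuffed(post: str, low: float = 4.0, high: float = 8.0) -> bool:
    """Flag a post whose L/N falls outside the assumed normal 4-8 range."""
    ratio = length_to_word_ratio(post)
    return ratio < low or ratio > high

# A natural sentence: L/N lands inside the normal range, so it is not flagged.
natural = "Search engines compare the length of a post with the number of words it contains."
# Keywords glued together by software: few word boundaries, so L/N is far too large.
stuffed = "cheapseotools bestseoranktips seoguideseosoftware"
```

A post whose words average four to eight bytes each (counting the separators) behaves like normal prose; run-together keyword blobs push the ratio well above the ceiling.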

Second, stop words

A search engine can also use the proportion of stop words to decide whether an article is natural. What are stop words? Common terms such as "I", "is", and "and". If the proportion of stop words falls outside the normal range, the post is either submitted to the quality control department or ignored outright.
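The stop-word check can be sketched the same way. The stop-word list and the "normal range" of 20% to 60% here are made-up assumptions for illustration; a real engine would use a much larger list and thresholds derived from its own corpus statistics.

```python
# A tiny, illustrative stop-word list; real engines use far larger ones.
STOP_WORDS = {"i", "is", "and", "the", "a", "of", "to", "in", "it", "that"}

def stop_word_ratio(post: str) -> float:
    """Fraction of the words in a post that are stop words."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return sum(w in STOP_WORDS for w in words) / len(words)

def is_suspicious(post: str, low: float = 0.2, high: float = 0.6) -> bool:
    """Flag posts whose stop-word proportion falls outside an assumed normal range."""
    ratio = stop_word_ratio(post)
    return ratio < low or ratio > high
```

A keyword list like "seo tips seo tools seo rank seo guide" contains no stop words at all, which is exactly the kind of statistical anomaly this check catches.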

Third, of course, more methods can be used to judge whether a post is natural or not.

Some cheaters have given up on composing articles from single words; instead, they assemble articles from whole sentences, using software to collect dozens of related articles and stitch them together into one.

This requires search engines to do semantic analysis. Semantic analysis is still at the research stage, and it is the direction of the next generation of search engines.
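Full semantic analysis is hard, but a crude precursor is easy to sketch: fingerprint each sentence of previously indexed documents, then measure what fraction of a new post's sentences have been seen before. A stitched-together article scores high; an original one scores low. Everything below is a hypothetical toy: the period-based sentence splitting, the MD5 fingerprints, and the two-document corpus are all assumptions for illustration.

```python
import hashlib

def sentence_fingerprints(text: str) -> set:
    """Hash each sentence so sentences can be compared across documents."""
    sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
    return {hashlib.md5(s.encode("utf-8")).hexdigest() for s in sentences}

def stitched_fraction(post: str, index: set) -> float:
    """Fraction of the post's sentences already present in indexed documents."""
    fps = sentence_fingerprints(post)
    if not fps:
        return 0.0
    return len(fps & index) / len(fps)

# Build an index from previously crawled articles (hypothetical corpus).
corpus = [
    "Keyword stuffing is an old spam trick. Search engines punish it.",
    "Stop words appear in every natural article. Their ratio is stable.",
]
index = set()
for doc in corpus:
    index |= sentence_fingerprints(doc)

# Every sentence of this "article" was copied from the corpus: fraction is 1.0.
stitched = "Keyword stuffing is an old spam trick. Their ratio is stable."
```

Exact hashing only catches verbatim copying; detecting paraphrased stitching is where genuine semantic analysis becomes necessary.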

If, in the end, some software could generate natural articles that humans understand well, would such a post be spam or cream? It is the same question as whether an RSS aggregation article is spam or cream. However, if there were too many such aggregated articles online, who would still write original articles?

