Post by account_disabled on Mar 5, 2024 22:36:23 GMT -5
1. The check on WordPress

It often happens that, when creating a website with WordPress, certain details do not get the attention they deserve. The first cause is the most banal: a simple checkbox that discourages indexing on search engines. Even days after the site goes online, this checkbox can make all of its results disappear from the SERP. This is probably the simplest thing to check and fix:

1. Log in to your WordPress admin area and go to Settings > Reading.
2. Scroll down and locate the Search Engine Visibility option.
3. Uncheck the item "Discourage search engines from indexing this site".
4. Submit the home page to Search Console's URL Inspection tool.

2. .htaccess

The .htaccess is an editable text file that resides in the www or public_html folder and defines the specific "rules" that the site must respect. It can contain configurations that redirect traffic to other domains, set cache duration, protect access to a folder, prevent indexing, and much more.
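When that option is checked, WordPress adds a robots meta tag to every page. After unchecking it, a quick way to confirm the fix is to search the homepage HTML for that tag. A minimal sketch, with a local file standing in for the downloaded page:

```shell
# Stand-in for the downloaded homepage. When "Discourage search engines
# from indexing this site" is checked, WordPress emits a tag like this:
cat > homepage.html <<'EOF'
<head><meta name='robots' content='noindex, nofollow' /></head>
EOF

# If this prints a line, the page is still telling crawlers not to index it:
grep -io "<meta name='robots'[^>]*noindex[^>]*/>" homepage.html
```

If nothing is printed after the option is unchecked, the page is no longer blocking indexing and can be resubmitted to Search Console.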
Modifying the .htaccess is a very delicate operation, which should only be carried out after several tests in a test environment: a mistake could trigger an infinite redirect loop on our pages and take the site down. Even if you're fairly experienced, the absolute minimum you can do is download a copy of the .htaccess file to your computer, so you can restore it in case of an error. Then try to work out whether the error lies in this file and correct it.

3. Robots.txt not configured correctly

Another step you need to take to index your website correctly is to check how the robots.txt file has been set up. The robots.txt file is a UTF-8 encoded text file, saved in the main directory (root), which contains site access or restriction directives intended for search engine bots. The basic syntax of a robots.txt is quite simple: you specify the name of a robot and an action. The crawler is identified by the User-agent line, while the actions (e.g. blocking a path) are specified in the Disallow line.
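The backup-and-restore habit described above can be sketched in a couple of shell commands. This is a minimal sketch: the public_html path and the demo .htaccess content are stand-ins that will differ on a real host.

```shell
# Demo stand-ins: on a real host, public_html and .htaccess already exist.
mkdir -p public_html
printf 'RewriteEngine On\n' > public_html/.htaccess

# The safety step: keep a dated backup before touching the file.
cp public_html/.htaccess "public_html/.htaccess.bak.$(date +%F)"

# ...edit public_html/.htaccess here...

# If the site goes down after the edit, restoring is a single command:
cp "public_html/.htaccess.bak.$(date +%F)" public_html/.htaccess
```

Keeping the backup dated means several attempts never overwrite each other, and you always know which version was last known to work.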
Generally the file can be verified by typing the site address followed by /robots.txt into the browser. If it contains a rule like this:

User-agent: *
Disallow: /

it means you are blocking search engines from crawling your entire site. But an incorrectly configured robots.txt file might also have a narrower rule, a Disallow on a single path, that prevents bots and spiders from crawling a particular page you want to appear in search results. To solve the problem, you need to leave the search bots free to scan the pages of the site that need to be indexed and ranked.

4. Presence of meta tags that prevent indexing

Meta tags can also give spiders instructions on how to treat the contents of a particular page or website. The difference with robots.txt is that they appear on individual pages rather than giving a single general instruction. The robots meta tags are often forgotten and can be insidious and harmful to the indexing of a site.
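A site-wide blocking rule, a quick way to spot it, and a permissive replacement can be sketched from the command line. The robots.txt here is a local stand-in for the file in the site root:

```shell
# A robots.txt like this blocks every crawler from the entire site:
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /
EOF

# Quick check: prints the offending line if a site-wide block is present.
grep -x 'Disallow: /' robots.txt

# A permissive replacement (an empty Disallow means "nothing is blocked"):
printf 'User-agent: *\nDisallow:\n' > robots.txt
```

After correcting the live file, it is worth re-fetching yoursite/robots.txt in the browser to confirm the blocking rule is really gone.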
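Because these tags sit on individual pages rather than in one central file, a scan across pages helps find them. A minimal sketch, assuming a folder of saved HTML pages (the pages/ folder and file names are illustrative):

```shell
# Demo stand-ins for a folder of saved pages (names are illustrative).
mkdir -p pages
printf '<meta name="robots" content="noindex">\n' > pages/contact.html
printf '<meta name="robots" content="index, follow">\n' > pages/home.html

# List only the files whose robots meta tag blocks indexing:
grep -li 'content="[^"]*noindex' pages/*.html
# prints: pages/contact.html
```

Any file this prints is carrying a noindex directive and will stay out of search results until the tag is removed.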