


Showing different content to visitors and crawlers in PHP
This article explains how to serve different content to human visitors and search-engine crawlers in PHP. It may be a useful reference; readers who need this technique are welcome to follow along.
To improve the user experience of a web page, we often do things that are not very friendly to search engines. But this is not always an either/or choice: in some cases we can provide both a good user experience and good SEO by showing different content to human visitors and to search-engine bots.
I have heard that this technique (often called cloaking) may violate the guidelines of some search engines and can be penalized, possibly even with the site being removed from the index. I have therefore removed this treatment from my own site until I am sure it does not count as cheating. Adventurous friends can continue to use it, but at their own risk.
The homepage and archive pages of this blog display articles as a list, and an article's content is loaded only when the visitor clicks to expand it. Because article content contains a large amount of text and images, it requires significant loading time and bandwidth; displaying the page to visitors as quickly as possible helps retain them. For mobile users, loading time and data usage matter even more.
Generally speaking, the homepage is the page search engines visit most, so it should present as much meaningful content to them as possible. But when articles are displayed as a list, visitors and search engines alike can only obtain the article titles. The article content or summary (especially the first paragraph or sentence) is extremely important for SEO, so we have to find a way to deliver that content to the crawler.
So we can use the User-Agent header to determine whether the visitor is a crawler: if it is, display the articles in their full form; otherwise, display the collapsed article list. The following PHP function checks whether the request comes from a known crawler:
function is_crawler() {
    // Guard against a missing User-Agent header (e.g. CLI or stripped requests)
    $userAgent = strtolower($_SERVER['HTTP_USER_AGENT'] ?? '');
    $spiders = array(
        'Googlebot',    // Google crawler
        'Baiduspider',  // Baidu crawler
        'Yahoo! Slurp', // Yahoo crawler
        'YodaoBot',     // Youdao crawler
        'msnbot'        // Bing crawler
        // more crawler keywords can be added here
    );
    foreach ($spiders as $spider) {
        $spider = strtolower($spider);
        if (strpos($userAgent, $spider) !== false) {
            return true;
        }
    }
    return false;
}
This is the function I use; the spider keywords are ordered roughly from most to least frequently seen, so common crawlers match early in the loop. You can then branch on its return value to show different content to crawlers and to human visitors.
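The branching step described above might be sketched as follows. This is an illustrative example, not the original article's code: the `render_page` function, the helper array keys (`title`, `summary`), and the exact markup are assumptions. It takes the result of the crawler check and emits either full summaries (for bots, so the first paragraph is indexable) or a collapsed title list (for humans, so the page loads quickly).

```php
<?php
// Hypothetical sketch: render full content for crawlers, a light list for humans.
// Array keys and markup are illustrative assumptions.
function render_page(array $articles, bool $isCrawler): string {
    if ($isCrawler) {
        // Crawlers get titles plus summaries so the key text is indexable.
        $out = '';
        foreach ($articles as $a) {
            $out .= '<h2>' . htmlspecialchars($a['title']) . '</h2>'
                  . '<p>' . htmlspecialchars($a['summary']) . '</p>';
        }
        return $out;
    }
    // Human visitors get a collapsed list; content is loaded on click.
    $out = '<ul>';
    foreach ($articles as $a) {
        $out .= '<li>' . htmlspecialchars($a['title']) . '</li>';
    }
    return $out . '</ul>';
}
```

In a page template you would then call something like `echo render_page($articles, is_crawler());`, passing the result of the User-Agent check shown earlier.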
That is the entire content of this article. I hope it is helpful to everyone's learning. For more related content, please follow the PHP Chinese website!
