7. SEO & Rendering Methods
- Published: February 16, 2025
- Reading Time: 2 min read
- Author: Felix
Earlier we mainly covered some very basic SEO skills. In this chapter, we will dig into the core issues of JavaScript SEO and how different rendering mechanisms affect SEO.
1. Google crawler types and how they work
Search engine crawlers are a core component of SEO. This section focuses mainly on the Google crawler, because Google is the most important search engine today. Once you understand how it works, the principles behind other search engines' crawlers are much the same.
1.1 The evolution of Google's crawler
Limitations of traditional crawlers
Early search engine crawlers were primarily designed to process static HTML pages. They understood page content by parsing the structure of the HTML document, but were powerless against content generated dynamically by JavaScript. As a result, many JavaScript-powered websites performed poorly in search results.
The emergence of modern JavaScript-rendering crawlers
To keep pace with the evolution of web technology, Google launched a crawler capable of rendering JavaScript in 2015. This new type of crawler can execute JavaScript code, allowing it to understand and index dynamically generated content. This was a major advance in search engine technology, giving JavaScript-driven websites a more level playing field.
1.2 Main types of Google crawlers
Google uses a variety of specialized crawlers to index different types of web content. Here are the most common ones:
* Googlebot (web crawler): Google's main crawler, responsible for crawling and indexing web pages. It comes in desktop and mobile versions.
* Googlebot-Image (image crawler): designed specifically to discover and index image content on the web.
* Googlebot-Video (video crawler): responsible for crawling and indexing video content, including video metadata and thumbnails.
* AdsBot (ad quality crawler): used to evaluate the quality of Google advertising landing pages.
1.3 How Googlebot works
Googlebot's workflow can be divided into the following main stages:
Discovery phase
During this initial stage, Googlebot discovers page URLs in a variety of ways, including:
* Sitemaps submitted by the website (see the sketch after this list)
* Following links from already-known pages
* URLs submitted via Google Search Console
* Analyzing backlink data
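To make the sitemap route concrete, here is a minimal sketch that builds a sitemap.xml string which could be served at /sitemap.xml and submitted to Search Console. The generateSitemap helper and the example URLs are invented for illustration:

```typescript
// Minimal sketch: build a sitemap.xml string. The URLs below are placeholders.
interface SitemapEntry {
  loc: string;       // absolute URL of the page
  lastmod?: string;  // optional ISO date of the last modification
}

function generateSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map(
      (e) =>
        `  <url>\n    <loc>${e.loc}</loc>` +
        (e.lastmod ? `\n    <lastmod>${e.lastmod}</lastmod>` : "") +
        `\n  </url>`
    )
    .join("\n");

  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}

// Example usage with placeholder URLs
console.log(
  generateSitemap([
    { loc: "https://example.com/", lastmod: "2025-02-16" },
    { loc: "https://example.com/blog/javascript-seo" },
  ])
);
```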
Fetching phase
After discovering URLs, Googlebot requests those pages (a simplified sketch of this step follows the list below). At this stage it:
* Downloads the HTML document
* Parses the HTML structure and identifies links and resource references in the page
* Adds newly discovered URLs to the crawl queue
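As a conceptual illustration of this fetch, parse, and queue loop (not Googlebot's actual implementation), the following toy crawler uses the standard fetch API and a naive regular expression in place of a real HTML parser:

```typescript
// Toy illustration of the fetching phase: download HTML, extract links,
// and enqueue newly discovered URLs. Not how Googlebot is really built.
async function crawl(startUrl: string, maxPages = 10): Promise<Set<string>> {
  const queue: string[] = [startUrl];
  const seen = new Set<string>([startUrl]);

  while (queue.length > 0 && seen.size <= maxPages) {
    const url = queue.shift()!;

    // 1. Download the HTML document
    const response = await fetch(url);
    const html = await response.text();

    // 2. Identify links in the page (naive regex; real crawlers use an HTML parser)
    const hrefs = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map((m) => m[1]);

    // 3. Add newly discovered URLs to the crawl queue
    for (const href of hrefs) {
      if (!seen.has(href)) {
        seen.add(href);
        queue.push(href);
      }
    }
  }
  return seen;
}
```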
This is where a key advantage of SSR shows up: for a server-side rendered (SSR) website, the complete content is already included in the HTML document, so all important content is visible even if Googlebot never executes JavaScript. In contrast, a client-side rendered (CSR) website delivers little more than an empty HTML skeleton at this stage. The sketch below contrasts the two.
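The following sketch shows what Googlebot receives at fetch time in each case. The SSR side assumes an Express server rendering a React component with react-dom/server; the ArticlePage component, the route, and the markup are invented for illustration:

```typescript
import express from "express";
import { createElement } from "react";
import { renderToString } from "react-dom/server";
import { ArticlePage } from "./ArticlePage"; // hypothetical page component

// CSR: Googlebot downloads an almost empty skeleton; the article text only
// appears after /bundle.js runs in a real browser (or in Google's renderer).
export const csrHtml = `<!DOCTYPE html>
<html>
  <head><title>My Blog</title></head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>`;

// SSR: the server renders the component to HTML, so the document Googlebot
// downloads already contains the full article text.
const app = express();
app.get("/blog/:slug", (req, res) => {
  const body = renderToString(createElement(ArticlePage, { slug: req.params.slug }));
  res.send(`<!DOCTYPE html>
<html>
  <head><title>My Blog</title></head>
  <body>
    <div id="root">${body}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});
app.listen(3000);
```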
Processing and Analysis Phase
After crawling the page, Google will:
* Conduct a preliminary analysis of the page content
* Evaluate whether the page requires further rendering
* Place pages that need rendering into the rendering queue
* Assign a rendering priority based on page importance and resource constraints
Another advantage of SSR: since an SSR page already contains its complete content in the initial HTML, Google may decide that it does not need to enter the rendering queue at all, or may give it a lower rendering priority, which speeds up indexing. The toy heuristic below illustrates this kind of decision.
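Purely as a conceptual sketch (not Google's actual logic), the decision can be thought of as a check on how much visible text the fetched HTML already carries. The thresholds and names below are invented:

```typescript
// Toy heuristic: decide whether a fetched page needs the rendering queue.
// Purely illustrative; Google's real pipeline is far more sophisticated.
type RenderDecision = "skip-render" | "low-priority" | "high-priority";

function decideRendering(initialHtml: string): RenderDecision {
  // Strip scripts and tags to estimate how much visible text is already present.
  const visibleText = initialHtml
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();

  if (visibleText.length > 500) return "skip-render";  // SSR-like: content already there
  if (visibleText.length > 50) return "low-priority";  // partial content in initial HTML
  return "high-priority";                              // CSR-like empty skeleton
}
```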
Rendering stage
For pages