What is Crawl Budget?
Crawl budget refers to the number of pages a search engine's crawler, such as Googlebot, is willing and able to crawl on a website within a given timeframe. It is a combination of two main factors:
- Crawl rate limit: How many requests a search engine will make to a site without overloading it.
- Crawl demand: How much interest a search engine has in crawling a website’s pages, based on factors like popularity, freshness, and SEO value.
For large or complex websites, crawl budget can influence how much of your content is discovered and indexed.
Why Crawl Budget Matters
Crawl budget plays a key role in ensuring that all your important pages are found, crawled, and ultimately indexed by search engines. If search engines waste crawl budget on low-value URLs, such as duplicate pages, faceted-navigation filters, or outdated content, more valuable content may be crawled less often or missed entirely.
Benefits of managing crawl budget include:
- Better coverage of important pages in search results
- Faster discovery of new or updated content
- Reduced server load from unnecessary crawling
- Improved overall SEO performance
Crawl budget is especially relevant for websites with thousands of pages, eCommerce platforms, or dynamically generated content.
Example in Use
An online store with 20,000 product pages might also have thousands of filter-generated URLs. If Googlebot spends most of its crawl budget on filter pages, it may miss indexing newer or high-priority content. Using robots.txt, noindex tags, and canonicalisation can help manage crawl behaviour.
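As a sketch of that mitigation, the store could block its filter-generated URLs in robots.txt (the `filter` and `sort` parameter names and the example.com domain here are hypothetical, purely for illustration):

```text
# robots.txt — keep crawlers away from hypothetical filter-generated URLs
User-agent: *
Disallow: /*?filter=
Disallow: /*?sort=

# Point crawlers at the canonical list of URLs worth crawling
Sitemap: https://www.example.com/sitemap.xml
```

For duplicate variants that should remain crawlable, a canonical tag on the variant page points search engines at the preferred URL:

```html
<!-- On a filtered or duplicate variant page -->
<link rel="canonical" href="https://www.example.com/products/widgets/" />
```

One caveat worth noting: a URL blocked in robots.txt cannot be crawled, so any noindex tag on that page will not be seen; choose blocking or noindex for a given URL pattern, not both.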
Related Terms
- Crawling
- Indexing
- Robots.txt
- Canonical Tag
- Technical SEO