Screaming Frog SEO Spider Website Crawler (2024)

The industry-leading website crawler for Windows, macOS and Linux, trusted by thousands of SEOs and agencies worldwide for technical SEO site audits.

  • Download
  • Pricing
  • Overview
  • User Guide
  • Tutorials
  • Issues
  • FAQ
  • Support

SEO Spider Tool

The Screaming Frog SEO Spider is a website crawler that helps you improve onsite SEO by auditing for common SEO issues.

Download & crawl 500 URLs for free, or buy a licence to remove the limit & access advanced features.

Read our getting started guide

Free vs Paid Download

Available On

  • Windows
  • macOS
  • Linux

What can you do with the SEO Spider Tool?

The SEO Spider is a powerful and flexible site crawler, able to crawl both small and very large websites efficiently, while allowing you to analyse the results in real-time. It gathers key onsite data to allow SEOs to make informed decisions.

Find Broken Links

Crawl a website instantly and find broken links (404s) and server errors. Bulk export the errors and source URLs to fix, or send to a developer.
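The idea behind a broken-link report can be sketched in a few lines of Python. This is not Screaming Frog's implementation, just an illustration of classifying crawl results (source page, linked URL, status code) so 4XX and 5XX hits can be exported with the pages that link to them; the URLs are invented sample data.

```python
# Sketch (not Screaming Frog's code): bucket crawled links by HTTP
# status so broken links (4XX) and server errors (5XX) can be
# exported alongside their source pages.
def classify(results):
    """results: list of (source_url, target_url, status_code) tuples."""
    report = {"client_error": [], "server_error": [], "ok": []}
    for source, target, status in results:
        if 400 <= status < 500:
            report["client_error"].append((source, target, status))
        elif status >= 500:
            report["server_error"].append((source, target, status))
        else:
            report["ok"].append((source, target, status))
    return report

crawl = [
    ("https://example.com/", "https://example.com/about", 200),
    ("https://example.com/", "https://example.com/old-page", 404),
    ("https://example.com/blog", "https://example.com/api", 500),
]
report = classify(crawl)
```

The bulk export described above is essentially this report flattened into a spreadsheet, one row per broken source/target pair.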

Audit Redirects

Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.
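Chain and loop detection comes down to following each redirect's target until you reach a page that doesn't redirect, or revisit one you've already seen. A minimal sketch, using a hypothetical in-memory redirect map rather than live HTTP responses:

```python
# Sketch: follow a redirect map (URL -> Location target) to surface
# chains and loops. The mapping is invented sample data, not fetched
# from a live site.
def trace_redirects(start, redirects, max_hops=10):
    chain, seen = [start], {start}
    url = start
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        if url in seen:
            return chain + [url], "loop"
        chain.append(url)
        seen.add(url)
    return chain, "chain" if len(chain) > 2 else "ok"

redirects = {
    "/old": "/interim",
    "/interim": "/new",      # /old -> /interim -> /new is a chain
    "/a": "/b", "/b": "/a",  # /a and /b redirect to each other: a loop
}
chain, kind = trace_redirects("/old", redirects)
```

A migration audit is the same walk run over an uploaded list of legacy URLs, flagging anything that takes more than one hop to resolve.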

Analyse Page Titles & Meta Data

Analyse page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicated across your site.

Discover Duplicate Content

Discover exact duplicate URLs with an MD5 hash check, find partially duplicated elements such as page titles, descriptions or headings, and identify low-content pages.
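An MD5-based exact-duplicate check works by hashing each page body and grouping URLs that share a digest. A rough sketch (the pages here are made-up samples, and real tools typically hash a normalised version of the body):

```python
import hashlib

# Sketch of an MD5 exact-duplicate check: hash each page body and
# group URLs whose bodies produce the same digest.
def duplicate_groups(pages):
    """pages: dict of url -> raw HTML body."""
    by_hash = {}
    for url, body in pages.items():
        digest = hashlib.md5(body.encode("utf-8")).hexdigest()
        by_hash.setdefault(digest, []).append(url)
    return [urls for urls in by_hash.values() if len(urls) > 1]

pages = {
    "/a": "<html><body>Same content</body></html>",
    "/b": "<html><body>Same content</body></html>",
    "/c": "<html><body>Unique content</body></html>",
}
groups = duplicate_groups(pages)
```

Near-duplicate detection (partial duplication) needs fuzzier techniques such as shingling or similarity hashing, since a single changed byte gives a completely different MD5.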

Extract Data with XPath

Collect any data from the HTML of a web page using CSS Path, XPath or regex. This might include social meta tags, additional headings, prices, SKUs or more!
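To show what the regex flavour of custom extraction amounts to, here is a plain-Python sketch pulling a social meta tag and a price out of raw HTML. The markup is invented sample data, and the SEO Spider also supports CSS Path and XPath selectors for the same job:

```python
import re

# Regex-style extraction sketch: pull an Open Graph tag and a price
# out of raw HTML. The HTML below is made-up sample markup.
html = """
<meta property="og:title" content="Blue Widget" />
<span class="price">£19.99</span>
"""
og_title = re.search(
    r'<meta property="og:title" content="([^"]+)"', html
).group(1)
price = re.search(r'class="price">([^<]+)<', html).group(1)
```

For anything beyond trivial patterns, CSS Path or XPath selectors are usually more robust than regex against markup changes.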

Review Robots & Directives

View URLs blocked by robots.txt, meta robots or X-Robots-Tag directives such as ‘noindex’ or ‘nofollow’, as well as canonicals and rel=“next” and rel=“prev”.
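The robots.txt side of this check can be reproduced with Python's standard-library parser, which applies the same disallow rules a crawler consults before fetching a URL. The rules and URLs below are examples:

```python
from urllib import robotparser

# Sketch: the stdlib robots.txt parser answers the same "is this URL
# blocked?" question a crawler asks before fetching. Example rules.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
allowed_private = rp.can_fetch("*", "https://example.com/private/page.html")
allowed_public = rp.can_fetch("*", "https://example.com/public/page.html")
```

Meta robots and X-Robots-Tag directives are a separate layer: they arrive with the page itself (in the HTML or HTTP headers) rather than in robots.txt, which is why an auditing tool reports them separately.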

Generate XML Sitemaps

Quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include, last modified, priority and change frequency.
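Structurally, an XML sitemap is just a `<urlset>` of `<url>` entries in the sitemaps.org namespace. A minimal generation sketch with the standard library (real output would add the optional `<lastmod>`, `<priority>` and `<changefreq>` fields mentioned above):

```python
import xml.etree.ElementTree as ET

# Sketch: build a minimal XML sitemap in the sitemaps.org namespace.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc in ["https://example.com/", "https://example.com/about"]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
sitemap = ET.tostring(urlset, encoding="unicode")
```

Image sitemaps follow the same shape, with an extra image namespace and `image:image` children under each `url` element.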

Integrate with GA, GSC & PSI

Connect to the Google Analytics, Search Console and PageSpeed Insights APIs and fetch user and performance data for all URLs in a crawl for greater insight.

Crawl JavaScript Websites

Render web pages using the integrated Chromium WRS to crawl dynamic, JavaScript rich websites and frameworks, such as Angular, React and Vue.js.

Visualise Site Architecture

Evaluate internal linking and URL structure using interactive crawl and directory force-directed diagrams and tree graph site visualisations.

Schedule Audits

Schedule crawls to run at chosen intervals and auto export crawl data to any location, including Google Sheets. Or automate entirely via command line.
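Command-line automation usually means wiring the headless CLI into a scheduler such as cron. The crontab entry below is a hypothetical example: the flag names reflect our reading of the SEO Spider's CLI documentation and should be verified against `screamingfrogseospider --help` on your install before use.

```shell
# Hypothetical crontab entry: run a headless crawl nightly at 02:00
# and save timestamped output. Verify flag names against the current
# CLI docs (screamingfrogseospider --help) before relying on them.
0 2 * * * screamingfrogseospider --crawl https://example.com --headless \
    --save-crawl --output-folder /home/seo/crawls --timestamped-output
```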

Compare Crawls & Staging

Track progress of SEO issues and opportunities and see what's changed between crawls. Compare staging against production environments using advanced URL Mapping.
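At its simplest, comparing two crawls is set arithmetic over their URL inventories: what appeared, what disappeared, and what changed status between runs. A sketch with illustrative data (not the tool's own comparison logic, which also diffs issues, structure and metrics):

```python
# Sketch: compare two crawls as url -> status maps to find pages that
# were added, removed, or changed status between runs. Sample data.
previous = {"/": 200, "/about": 200, "/old": 200}
current = {"/": 200, "/about": 404, "/new": 200}

added = current.keys() - previous.keys()
removed = previous.keys() - current.keys()
status_changed = {
    url for url in current.keys() & previous.keys()
    if current[url] != previous[url]
}
```

Staging-vs-production comparison adds one wrinkle: hostnames differ, so URLs must first be mapped onto a common form before the sets can be diffed, which is what the URL Mapping feature described above handles.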

Features

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Audit hreflang Attributes
  • Discover Exact Duplicate Pages
  • Generate XML Sitemaps
  • Site Visualisations
  • Crawl Limit
  • Scheduling
  • Crawl Configuration
  • Save & Open Crawls
  • JavaScript Rendering
  • Crawl Comparison
  • Near Duplicate Content
  • Custom robots.txt
  • Mobile Usability
  • AMP Crawling & Validation
  • Structured Data & Validation
  • Spelling & Grammar Checks
  • Custom Source Code Search
  • Custom Extraction
  • Custom JavaScript
  • Crawl with OpenAI & Gemini
  • Google Analytics Integration
  • Search Console Integration
  • PageSpeed Insights Integration
  • Link Metrics Integration
  • Forms Based Authentication
  • Segmentation
  • Looker Studio Crawl Report
  • Free Technical Support

Price per licence

Licences last one year; after that, you will need to renew your licence.

Free Version

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Audit hreflang Attributes
  • Discover Exact Duplicate Pages
  • Generate XML Sitemaps
  • Site Visualisations
  • Crawl Limit – 500 URLs
  • Scheduling
  • Crawl Configuration
  • Save & Open Crawls
  • JavaScript Rendering
  • Crawl Comparison
  • Near Duplicate Content
  • Custom robots.txt
  • Mobile Usability
  • AMP Crawling & Validation
  • Structured Data & Validation
  • Spelling & Grammar Checks
  • Custom Source Code Search
  • Custom Extraction
  • Custom JavaScript
  • Crawl with OpenAI & Gemini
  • Google Analytics Integration
  • Search Console Integration
  • PageSpeed Insights Integration
  • Link Metrics Integration
  • Forms Based Authentication
  • Segmentation
  • Looker Studio Crawl Report
  • Free Technical Support

Free

Download free version

Paid Version

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Audit hreflang Attributes
  • Discover Exact Duplicate Pages
  • Generate XML Sitemaps
  • Site Visualisations
  • Crawl Limit – Unlimited*
  • Scheduling
  • Crawl Configuration
  • Save & Open Crawls
  • JavaScript Rendering
  • Crawl Comparison
  • Near Duplicate Content
  • Custom robots.txt
  • Mobile Usability
  • AMP Crawling & Validation
  • Structured Data & Validation
  • Spelling & Grammar Checks
  • Custom Source Code Search
  • Custom Extraction
  • Custom JavaScript
  • Crawl with OpenAI & Gemini
  • Google Analytics Integration
  • Search Console Integration
  • PageSpeed Insights Integration
  • Link Metrics Integration
  • Forms Based Authentication
  • Segmentation
  • Looker Studio Crawl Report
  • Free Technical Support

£239 Per Year

Purchase licence

* The maximum number of URLs you can crawl is dependent on allocated memory and storage. Please see our FAQ.

Used By

Some of the biggest brands & agencies use our software.


Featured In

The SEO Spider is regularly featured in top publications.


Out of the myriad of tools we use at iPullRank I can definitively say that I only use the Screaming Frog SEO Spider every single day. It's incredibly feature-rich, rapidly improving and I regularly find a new use case. I can't endorse it strongly enough.

Mike King

Founder, iPullRank


The Screaming Frog SEO Spider is my "go to" tool for initial SEO audits and quick validations: powerful, flexible and low-cost. I couldn't recommend it more.

Aleyda Solis

Owner, Orainti


The SEO Spider Tool Crawls & Reports On...

The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes:

  1. Errors – Client errors such as broken links & server errors (no responses, 4XX client & 5XX server errors).
  2. Redirects – Permanent, temporary, JavaScript redirects & meta refreshes.
  3. Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
  4. Blocked Resources – View & audit blocked resources in rendering mode.
  5. External Links – View all external links, their status codes and source pages.
  6. Security – Discover insecure pages, mixed content, insecure forms, missing security headers and more.
  7. URL Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
  8. Duplicate Pages – Discover exact and near duplicate pages using advanced algorithmic checks.
  9. Page Titles – Missing, duplicate, long, short or multiple title elements.
  10. Meta Description – Missing, duplicate, long, short or multiple descriptions.
  11. Meta Keywords – Mainly for reference or regional search engines, as they are not used by Google, Bing or Yahoo.
  12. File Size – Size of URLs & images.
  13. Response Time – View how long pages take to respond to requests.
  14. Last-Modified Header – View the last modified date in the HTTP header.
  15. Crawl Depth – View how deep a URL is within a website’s architecture.
  16. Word Count – Analyse the number of words on every page.
  17. H1 – Missing, duplicate, long, short or multiple headings.
  18. H2 – Missing, duplicate, long, short or multiple headings.
  19. Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet etc.
  20. Meta Refresh – Including target page and time delay.
  21. Canonicals – Link elements & canonical HTTP headers.
  22. X-Robots-Tag – See directives issued via the HTTP header.
  23. Pagination – View rel=“next” and rel=“prev” attributes.
  24. Follow & Nofollow – View meta nofollow, and nofollow link attributes.
  25. Redirect Chains – Discover redirect chains and loops.
  26. hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect language codes, non-canonical hreflang and more.
  27. Inlinks – View all pages linking to a URL, the anchor text and whether the link is follow or nofollow.
  28. Outlinks – View all pages a URL links out to, as well as resources.
  29. Anchor Text – All link text, plus alt text from linked images.
  30. Rendering – Crawl JavaScript frameworks like AngularJS and React, by crawling the rendered HTML after JavaScript has executed.
  31. AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
  32. Images – All URLs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
  33. User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
  34. Custom HTTP Headers – Supply any header value in a request, from Accept-Language to cookie.
  35. Custom Source Code Search – Find anything you want in the source code of a website, whether that’s Google Analytics code or specific text.
  36. Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex.
  37. Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
  38. Google Search Console Integration – Connect to the Google Search Analytics and URL Inspection APIs and collect performance and index status data in bulk.
  39. PageSpeed Insights Integration – Connect to the PSI API for Lighthouse metrics, speed opportunities, diagnostics and Chrome User Experience Report (CrUX) data at scale.
  40. External Link Metrics – Pull external link metrics from Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
  41. XML Sitemap Generation – Create an XML sitemap and an image sitemap using the SEO Spider.
  42. Custom robots.txt – Download, edit and test a site’s robots.txt using the custom robots.txt feature.
  43. Rendered Screenshots – Fetch, view and analyse the rendered pages crawled.
  44. Store & View HTML & Rendered HTML – Essential for analysing the DOM.
  45. AMP Crawling & Validation – Crawl AMP URLs and validate them, using the official integrated AMP Validator.
  46. XML Sitemap Analysis – Crawl an XML Sitemap independently or as part of a crawl, to find missing, non-indexable and orphan pages.
  47. Visualisations – Analyse the internal linking and URL structure of the website, using the crawl and directory tree force-directed diagrams and tree graphs.
  48. Structured Data & Validation – Extract & validate structured data against Schema.org specifications and Google search features.
  49. Spelling & Grammar – Spell & grammar check your website in over 25 different languages.
  50. Crawl Comparison – Compare crawl data to see changes in issues and opportunities to track technical SEO progress. Compare site structure, detect changes in key elements and metrics, and use URL mapping to compare staging against production sites.
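Several of the directive checks listed above (meta robots, X-Robots-Tag) reduce to tokenising a comma-separated directive value. A rough Python sketch, not the tool's own code, with an example header value:

```python
# Sketch: split a robots directive value (from a meta robots tag or an
# X-Robots-Tag HTTP header) into individual directives for auditing.
def parse_directives(value):
    return {token.strip().lower() for token in value.split(",")}

header = "noindex, nofollow, noarchive"
directives = parse_directives(header)
is_indexable = "noindex" not in directives
```

A real auditor would also track which user-agent each directive targets and how conflicting directives from the meta tag and HTTP header combine.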

I’ve tested nearly every SEO tool that has hit the market, but I can’t think of any I use more often than Screaming Frog. To me, it’s the Swiss Army Knife of SEO Tools. From uncovering serious technical SEO problems to crawling top landing pages after a migration to uncovering JavaScript rendering problems to troubleshooting international SEO issues, Screaming Frog has become an invaluable resource in my SEO arsenal. I highly recommend Screaming Frog for any person involved in SEO.

Glenn Gabe

Founder, GSQI


Screaming Frog Web Crawler is one of the essential tools I turn to when performing a site audit. It saves time when I want to analyze the structure of a site, or put together a content inventory for a site, where I can capture how effective a site might be towards meeting the informational or situation needs of the audience of that site. I usually buy a new edition of Screaming Frog on my birthday every year, and it is one of the best birthday presents I could get myself.

Bill Slawski

Director, Go Fish Digital


About The Tool

The Screaming Frog SEO Spider is a fast and advanced SEO site audit tool. It can be used to crawl both small and large websites, where manually checking every page would be extremely labour intensive, and where you can easily miss a redirect, missing page title, or duplicate page issue. You can view, analyse and filter the crawl data as it’s gathered and updated in real-time in the app’s UI.

The SEO Spider allows you to export key onsite SEO elements (URL, page title, meta description, headings etc.) to a spreadsheet, so it can easily be used as a base for SEO recommendations. Check out our demo video above.

Crawl 500 URLs For Free

The ‘lite’ version of the tool is free to download and use. However, this version is restricted to crawling up to 500 URLs in a single crawl and it does not give you full access to the configuration, saving of crawls, or advanced features such as JavaScript rendering, custom extraction, Google Analytics integration and much more. You can crawl 500 URLs from the same website, or as many websites as you like, as many times as you like, though!

For just £239 per year you can purchase a licence, which removes the 500 URL crawl limit, allows you to save crawls, and opens up the spider’s configuration options and advanced features.

Alternatively, hit the ‘buy a licence’ button in the SEO Spider to buy a licence after downloading and trialling the software.

FAQ & User Guide

The SEO Spider crawls sites like Googlebot, discovering hyperlinks in the HTML using a breadth-first algorithm. It uses a configurable hybrid storage engine, able to save data to RAM and disk to crawl large websites. By default it will only crawl the raw HTML of a website, but it can also render web pages using headless Chromium to discover content and links.
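The breadth-first discovery described above can be sketched in a few lines. This toy version walks an in-memory link graph (a stand-in for actually fetching pages): visit the seed, then every page it links to, level by level, skipping URLs already seen. The graph is invented sample data.

```python
from collections import deque

# Breadth-first crawl sketch over an in-memory link graph: the queue
# holds discovered-but-unvisited URLs; `seen` prevents revisits.
def bfs_crawl(seed, links):
    order, queue, seen = [], deque([seed]), {seed}
    while queue:
        url = queue.popleft()
        order.append(url)
        for target in links.get(url, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return order

links = {
    "/": ["/about", "/blog"],
    "/blog": ["/blog/post-1", "/about"],
}
order = bfs_crawl("/", links)
```

Breadth-first order is why crawl depth is such a natural metric: a page's depth is simply the level at which the queue first reaches it.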

For more guidance and tips on how to use the Screaming Frog SEO crawler –

  • Please read our quick-fire getting started guide.
  • Please see our recommended hardware, user guide, SEO issues list, tutorials and FAQ. Please also watch the demo video above!
  • Check out our tutorials, including how to use the SEO Spider as a broken link checker and duplicate content checker, as well as generating XML Sitemaps, crawling JavaScript, robots.txt testing, web scraping, crawl comparison and crawl visualisations.
  • Level up your SEO game and read our Learn SEO section.

Updates

Keep updated with future releases by subscribing to our RSS feed, joining our mailing list and following us on Twitter @screamingfrog.

Support & Feedback

If you have any technical problems, feedback or feature requests for the SEO Spider, then please contact us via support. We regularly update the SEO Spider and currently have lots of new features in development!

Join the mailing list for updates, tips & giveaways



FAQs

How do you crawl an entire website in Screaming Frog?

Method 1: Use Screaming Frog to identify all subdomains on a given site. Navigate to Configuration > Spider, and ensure that “Crawl all Subdomains” is selected. Just like crawling your whole site above, this will help crawl any subdomain that is linked to within the site crawl.

What is the Screaming Frog SEO Spider used for?

The Screaming Frog SEO Spider is a fast and advanced SEO site audit tool. It can be used to crawl both small and large websites, where manually checking every page would be extremely labour intensive, and where you can easily miss a redirect, missing page title, or duplicate page issue.

Is there a free version of Screaming Frog?

Screaming Frog has its own free version, but that version is limited in functionality. You can crawl your site for broken links and link errors, page titles and meta data, your meta robots directives, your hreflang attributes, and it can find duplicate content. It can also generate an XML sitemap for you.

Can you crawl a staging site with Screaming Frog?

It's essential to be able to audit sites in staging environments before they go live, which is where crawling them with the Screaming Frog SEO Spider can help. Various methods are used to block search engines from a staging site, or to avoid content being indexed, including putting it behind a login or using robots.txt.

Why is Screaming Frog not crawling all URLs?

No Internal Links

It may surprise you, but this is typically the most common reason for not crawling a page. It's simply not linked to internally on the website. If it's not linked to, then the SEO Spider will not discover it by default.

How do I crawl all pages of a website?

You can usually locate it in the root or footer section of the website. For example, the XML sitemap URL could be “www.example.com/sitemap.xml“. Once you click on the sitemap, you will find all pages & subpages of a website.
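Once you have the sitemap, listing every page it declares is a matter of reading its `<loc>` entries. A standard-library sketch over a made-up sitemap string (a real script would fetch the file over HTTP first):

```python
import xml.etree.ElementTree as ET

# Sketch: list every page declared in an XML sitemap by reading its
# <loc> entries. The sitemap string is an invented example.
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/contact</loc></url>
</urlset>"""
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [el.text for el in ET.fromstring(sitemap).findall("sm:url/sm:loc", ns)]
```

Note that a sitemap only lists pages the site chooses to declare; pages missing from it can still exist, which is why crawling and sitemap analysis complement each other.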

Is Screaming Frog worth it?

Great content audit and inventory tool

Overall, Screaming Frog helps me a lot in analyzing client websites and identifying top issues to solve. I can better understand a site's structure, SEO health, and usability status, which informs how I will approach a project to be the most successful.

How many URLs can Screaming Frog crawl?

The Screaming Frog SEO Spider is free to download and use for crawling up to 500 URLs at a time. For £199 a year you can buy a licence, which removes the 500 URL crawl limit.

What is the best use of Screaming Frog?

Segments in Screaming Frog enable you to classify URLs on your site to recognise and oversee issues and opportunities associated with various templates, page types, or site sections more effectively. This can be particularly useful when auditing ecommerce websites, due to their typically large size.

What's the main difference between SiteBulb and Screaming Frog?

Screaming Frog vs SiteBulb: Pros & Cons

Screaming Frog:
  • Limited data storage capacity for larger websites
  • The free version has a 500 URL limit

SiteBulb:
  • Not ideal for crawling JavaScript and AJAX websites
  • The free version has a limited number of URLs that can be audited

(12 more rows; Mar 14, 2023)

What is similar to Screaming Frog?

Top 5 Screaming Frog Alternatives
  • Letterdrop automatically fixes on-page and site-wide issues for content-heavy sites.
  • Ahrefs offers detailed recommendations for improvement and has a user-friendly interface.
  • ContentKing provides real-time site performance tracking.
May 7, 2024

Why is it called Screaming Frog?

The name came to Dan, our founder, when his two cats cornered a frog in his back garden and to his surprise, it let out a loud scream.

Is Screaming Frog free for commercial use?

How much does the Screaming Frog SEO Spider cost? The SEO Spider is free to download and use. However, without a licence the SEO Spider is limited to crawling a maximum of 500 URLs each crawl, crawls can't be saved, and advanced features and the configuration are restricted.

Can Screaming Frog create a sitemap?

This tutorial walks you through how you can use the Screaming Frog SEO Spider to generate XML Sitemaps. To get started, you'll need to download the SEO spider which is free in lite form, for up to 500 URLs. You can download via the buttons in the right hand side bar.

What are the minimum requirements for Screaming Frog?

System Requirements

Screaming Frog recommends that you have 16GB of RAM to crawl extremely large websites, but for smaller sites, 8GB of RAM is fine. A 64-bit operating system is required, but if you're using a 32-bit machine, Screaming Frog can provide an alternative version of the software.

How do I crawl a website?

The six steps to crawling a website include:
  1. Understanding the domain structure.
  2. Configuring the URL sources.
  3. Running a test crawl.
  4. Adding crawl restrictions.
  5. Testing your changes.
  6. Running your crawl.
Oct 18, 2021

How do I get all URLs from a website?

The simplest way to extract all the URLs on a website is to use a crawler. Crawlers start with a single web page (called a seed), extract all the links in the HTML, then navigate to those links and repeat the process until every discovered link has been visited.
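The extract-links step described above can be sketched with the standard library: parse anchor tags out of the HTML and resolve each `href` against the page URL so relative links become absolute, ready to be queued. The page URL and markup are invented examples.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Sketch of the extract-links step: collect <a href> values from HTML
# and resolve them against the page URL they were found on.
class LinkExtractor(HTMLParser):
    def __init__(self, base):
        super().__init__()
        self.base, self.links = base, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(self.base, href))

parser = LinkExtractor("https://example.com/blog/")
parser.feed('<a href="/about">About</a> <a href="post-1">Post</a>')
links = parser.links
```

Feeding each resolved link back into the crawl queue (deduplicated against already-seen URLs) is what turns this one-page extractor into a full crawler.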
