Reference examples of robots.txt User-agent and Disallow rules for websites

Note
This post was last updated on 2024-07-10; its content may be outdated.

Websites can set up a robots.txt to disallow certain crawlers. Below are a few robots.txt examples for reference.

A while ago I read a post by goodboyboy about the high server load caused by Facebook's crawler, which was crawling the commit pages of a self-hosted Gitea instance; in the end the crawling was stopped by blocking overseas IPs.

I used to run a Chevereto image-hosting site, and crawlers kept exhausting the server's resources. Google's crawler tried every permutation of query parameters, and since Chevereto ships with multiple languages, it went through each language as well. Some crawlers also fetch the images themselves, for example to generate thumbnails. Bing once picked up the images from my blog, and it even indexed JSON files referenced in posts.

That said, the usual approach is to write a robots.txt. This file can keep things that are meaningless to search engines, such as API endpoints, from being indexed, and it can also disallow crawling of particular paths or by particular UAs, or impose other restrictions.
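For instance, a minimal sketch of such a file (the /api/ path and the ExampleBot UA are placeholders, not taken from any of the files quoted below):

```text
# Keep a hypothetical API prefix out of every crawler's index
User-agent: *
Disallow: /api/

# Deny one specific crawler (placeholder UA) access to the whole site
User-agent: ExampleBot
Disallow: /
```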

This post mainly collects UA lists, as a memo and for reference.

Hugo theme DoIt: https://github.com/HEIGE-PCloud/DoIt/blob/main/layouts/robots.txt

It blocks the crawlers of several marketing companies, some of which are said to do copyright monitoring and the like.

It also blocks the default UA of Heritrix, the crawler software used by the Internet Archive. Note that the Internet Archive's own service actually uses UAs such as archive.org_bot.

https://archive.org/details/archive.org_bot
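So blocking the Heritrix UA alone does not necessarily keep the Internet Archive out; if that is actually the goal, an extra entry for its own UA would also be needed, roughly (a sketch based on the UA mentioned above):

```text
User-agent: archive.org_bot
Disallow: /
```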

```text

User-agent: MJ12bot
Disallow: /

User-agent: AhrefsBot
Disallow: /

User-agent: BLEXBot
Disallow: /

# Block SISTRIX
User-agent: SISTRIX Crawler
Disallow: /
User-agent: sistrix
Disallow: /
User-agent: 007ac9
Disallow: /
User-agent: 007ac9 Crawler
Disallow: /

# Block Uptime robot
User-agent: UptimeRobot/2.0
Disallow: /

# Block Ezooms Robot
User-agent: Ezooms Robot
Disallow: /

# Block Perl LWP
User-agent: Perl LWP
Disallow: /

# Block netEstate NE Crawler (+http://www.website-datenbank.de/)
User-agent: netEstate NE Crawler (+http://www.website-datenbank.de/)
Disallow: /

# Block WiseGuys Robot
User-agent: WiseGuys Robot
Disallow: /

# Block Turnitin Robot
User-agent: Turnitin Robot
Disallow: /

# Block Heritrix
User-agent: Heritrix
Disallow: /

# Block pricepi
User-agent: pimonster
Disallow: /

User-agent: SurdotlyBot
Disallow: /

User-agent: ZoominfoBot
Disallow: /

User-agent: *
Allow: /

Sitemap: {{ "/sitemap.xml" | absURL }}
```

A list of AI agents and robots to block. https://github.com/ai-robots-txt/ai.robots.txt

https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.txt

```text

User-agent: AdsBot-Google
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: cohere-ai
User-agent: Diffbot
User-agent: FacebookBot
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GPTBot
User-agent: img2dataset
User-agent: omgili
User-agent: omgilibot
User-agent: peer39_crawler
User-agent: peer39_crawler/1.0
User-agent: PerplexityBot
User-agent: YouBot
Disallow: /
```

Part of the robots.txt that GoToSocial uses in practice:

```text

# GoToSocial robots.txt -- to edit, see internal/web/robots.go
# More info @ https://developers.google.com/search/docs/crawling-indexing/robots/intro

# AI scrapers and the like.
# https://github.com/ai-robots-txt/ai.robots.txt/
User-agent: AdsBot-Google
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: cohere-ai
User-agent: Diffbot
User-agent: FacebookBot
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GPTBot
User-agent: img2dataset
User-agent: omgili
User-agent: omgilibot
User-agent: peer39_crawler
User-agent: peer39_crawler/1.0
User-agent: PerplexityBot
User-agent: YouBot
Disallow: /

# Marketing/SEO "intelligence" data scrapers
User-agent: AwarioRssBot
User-agent: AwarioSmartBot
User-agent: DataForSeoBot
User-agent: ImagesiftBot
User-agent: magpie-crawler
User-agent: Meltwater
User-agent: PiplBot
User-agent: scoop.it
User-agent: Seekr
Disallow: /

# Well-known.dev crawler. Indexes stuff under /.well-known.
# https://well-known.dev/about/
User-agent: WellKnownBot
Disallow: /

...
```

As is well known, many crawlers simply ignore the entries in robots.txt. In that case, consider measures such as filtering by UA at the reverse proxy.
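For example, with nginx as the reverse proxy, a minimal sketch might look like the following. The UA regex, domain, and upstream address are illustrative only; adjust them to your own setup.

```nginx
# Flag requests whose User-Agent matches the (illustrative) block list.
map $http_user_agent $blocked_ua {
    default 0;
    "~*(MJ12bot|AhrefsBot|Bytespider|GPTBot|ClaudeBot)" 1;
}

server {
    listen 80;
    server_name example.com;               # hypothetical domain

    location / {
        # Reject blocked crawlers before the request reaches the backend.
        if ($blocked_ua) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;  # hypothetical upstream
    }
}
```

Returning 403 (or nginx's non-standard 444, which drops the connection) at the proxy takes effect even for crawlers that never read robots.txt.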