
commit 1d014fcef57287b24023c8306e8079084e0d98c8
parent ceba0e53974eacf40c829dcc995e6926e946ff6c
Author: Tobias Bengfort <tobias.bengfort@posteo.de>
Date:   2025-06-18 10:35

    anubis: add link to related post

Diffstat:

 M _content/posts/2025-05-24-anubis/index.md | 4 ++++

 1 file changed, 4 insertions(+), 0 deletions(-)


diff --git a/_content/posts/2025-05-24-anubis/index.md b/_content/posts/2025-05-24-anubis/index.md

@@ -182,6 +182,10 @@ has close to 30.000 pages, so downloading all of them would require clients to
 waste ~100 minutes of CPU time. While this is not nothing, I am also not
 convinced that this is enough of an obstacle to discourage scrapers.
 
+[Raphael Michel](https://behind.pretix.eu/2025/05/23/captchas-are-over/) comes
+to a similar conclusion when discussing scalpers: If you stand to make 200€
+profit from a request, you do not care about a few cents in extra CPU time.
+
 Also, this whole idea assumes that attackers even care about their resource
 usage. DDoS attacks are commonly executed via bot nets where attackers have
 taken over regular people's devices. In that case, attackers don't really care
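For context, the "few cents" claim in the patched passage can be checked with a
back-of-envelope calculation. A minimal Python sketch: the ~0.2 s of CPU per
page is inferred from the post's figures (30.000 pages, ~100 minutes), and the
$0.04 per CPU-hour price is purely an assumption for illustration, not a figure
from the post.

    # Back-of-envelope: CPU cost of solving the proof-of-work challenge
    # for every page of the site.
    pages = 30_000             # "close to 30.000 pages" (from the post)
    seconds_per_page = 0.2     # inferred: 30,000 pages ~= 100 CPU-minutes
    price_per_cpu_hour = 0.04  # assumption: rough cloud price in USD

    cpu_hours = pages * seconds_per_page / 3600
    print(f"CPU time: {cpu_hours * 60:.0f} minutes")        # -> ~100 minutes
    print(f"Cost: ${cpu_hours * price_per_cpu_hour:.2f}")   # -> ~$0.07

Under these assumptions, scraping the entire site costs well under ten cents of
compute, which supports the post's conclusion that the proof of work is no real
deterrent for anyone expecting a meaningful return per request.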