Require new accounts to verify their email address if any of the
following conditions are true:
* Their IP is a proxy.
* Their IP is under a partial IP ban.
* They're creating a new account while logged in to another account.
* Somebody else created an account from the same IP within the last week.
Changes from before:
* Allow logged in users to view the signup page and create new accounts.
Creating a new account while logged in to your old account is now
allowed, but it requires email verification. This is a honeypot.
* Creating multiple accounts from the same IP is now allowed, but they
require email verification. Previously the same IP check was only for
the last day (now it's the last week), and only for an exact IP match
(now it's a subnet match, /24 for IPv4 or /64 for IPv6).
* New account verification is disabled for private IPs (e.g. 127.0.0.1,
192.168.0.1), to make development or running personal boorus easier
(fixes #4618).
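For illustration, the combined check might look something like the
sketch below (the method names and the way the conditions are passed in
are assumptions, not the actual implementation):

    require "ipaddr"

    PRIVATE_NETWORKS = %w[
      127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 ::1/128 fc00::/7
    ].map { |net| IPAddr.new(net) }

    def private_ip?(ip)
      addr = IPAddr.new(ip)
      PRIVATE_NETWORKS.any? { |net| net.include?(addr) }
    end

    # Signups are grouped by subnet, not exact IP: /24 for IPv4, /64 for IPv6.
    def signup_subnet(ip)
      addr = IPAddr.new(ip)
      addr.ipv4? ? addr.mask(24) : addr.mask(64)
    end

    def requires_email_verification?(ip:, proxy:, partially_banned:,
                                     logged_in:, recent_signup_from_subnet:)
      return false if private_ip?(ip)
      proxy || partially_banned || logged_in || recent_signup_from_subnet
    end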
Bug: The frontpage failed due to an SSL error. We couldn't fetch the
popular tag list from Reportbooru because Reportbooru's SSL certificate
had expired and http.rb raised an SSLError exception that we didn't
catch.
Fix: Convert the SSLError to a 5xx HTTP error to prevent SSL exceptions
from leaking out of http.rb.
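A minimal sketch of the shape of the fix (the wrapper method and the
stand-in response are assumptions; the real code lives inside
Danbooru::Http):

    require "http"
    require "openssl"

    FakeResponse = Struct.new(:status, :body)

    def get_with_ssl_guard(url)
      HTTP.get(url)
    rescue OpenSSL::SSL::SSLError
      # Hand back a synthetic 5xx so callers can treat SSL failures like
      # any other server error instead of rescuing the exception themselves.
      FakeResponse.new(590, "")
    end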
* Factor out the Cloudflare Polish bypass code into a standalone feature.
* Add an `http_downloader` method to the base source strategy. This is an
HTTP client that should be used for downloading images or making other
requests to image URLs. This client ensures that referrer spoofing and
Cloudflare bypassing are performed.
This fixes a bug with the upload page reporting the polished filesize
instead of the original filesize when uploading ArtStation images.
Factor out referrer spoofing so that it can be used outside of downloading
files. We also need to spoof the referrer when determining the remote
filesize of images on the uploads page.
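Sketch of the idea (the `http` helper and the feature names are
illustrative; they stand for whatever custom http.rb features Danbooru
registers for referrer spoofing and the Polish bypass):

    module Sources
      module Strategies
        class Base
          # Client for downloading images and for requests against image
          # URLs (e.g. checking the remote filesize on the uploads page).
          # Both go through the same referrer-spoofing / Cloudflare-bypass
          # stack so we always see the original file.
          def http_downloader
            http.use(:spoof_referrer).use(:unpolish_cloudflare)
          end
        end
      end
    end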
Bug: If a Nijie login failed with a 429 Too Many Requests error, the
error would get cached, so when we retried the request, we would just
get our own cached response back every time. The 429 error would
eventually be passed up to the Nijie strategy, which caused random
methods to fail because they couldn't fetch the HTML page.
Fix: add the `retriable` feature *after* the `cache` feature so that
retries don't go through the cache. This is a hack. We want retries to
go at the bottom of the stack, below caching, but we can't enforce this
ordering.
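Roughly (the builder-style calls are assumptions about Danbooru::Http;
:cache and :retriable are its own custom http.rb features):

    # :retriable is added after :cache, so a retried request is not answered
    # from the response that was cached for the original, failed request.
    http = Danbooru::Http.use(:cache).use(:retriable)
    http.get("https://nijie.info/")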
Replace http.rb's builtin redirect following option with our own
redirect follower. This fixes an issue with http.rb losing cookies after
following a redirect.
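A minimal sketch of the approach (not the exact Danbooru::Http code):
since we issue each hop ourselves, cookies set along the way can be
carried into the next request instead of being dropped.

    require "http"
    require "uri"

    def follow_redirects(http, url, max_hops: 5)
      max_hops.times do
        response = http.get(url)
        location = response.headers["Location"]
        return response unless response.code.between?(300, 399) && location

        url = URI.join(url, location).to_s
        cookies = response.cookies.to_a.to_h { |c| [c.name, c.value] }
        http = http.cookies(cookies) unless cookies.empty?
      end

      raise "Too many redirects"
    end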
Allow cookies to be saved and sent back when making several requests in
a row. Usage:

    http = Danbooru::Http.use(:session)

    # saves the foo=42 cookie sent by the response.
    http.get("https://httpbin.org/cookies/set/foo/42")

    # sends back the foo=42 cookie from the previous request.
    http.get("https://httpbin.org/cookies")
Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies. Before,
Downloads::File tried to automatically look up the source and download
the full-size image instead when given a sample URL. Now we can do
plain downloads without source strategies altering the URL (see the
sketch after this list).
* The Cloudflare Polish check has been changed from checking for a
Cloudflare IP to checking for the CF-Polished header. Looking up the
list of Cloudflare IPs was slow and flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal http requests, not just for downloads.
* The Webmock gem can be removed, since it was only used for stubbing
out certain HTTParty requests in the download tests. The Webmock gem
is buggy and caused certain tests to fail during CI.
* The retriable gem can be removed, since we no longer autoretry failed
downloads. We assume that if a download fails once then retrying
probably won't help.
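For illustration, a plain download now looks roughly like this (the
helper's name and return value are assumptions):

    # No source strategy involved: the URL we pass in is the URL that
    # gets fetched.
    response, file = Danbooru::Http.new.download("https://example.com/data/sample.jpg")
    puts file.size if response.status.success?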
3cdf67920 changed it so that Danbooru::Http follows redirects by
default. This broke some things in the Nico Seiga strategy, so disable
following redirects in the Nico Seiga API client for now.
Also change it so that Danbooru::Http follows redirects after a POST
request (by setting `strict: false`). Nico Seiga needs this because it
sends a redirect after we POST the login form.
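With http.rb that looks like this (the login URL and form fields are
just placeholders):

    require "http"

    # strict: false lets the redirector follow the redirect that Nico Seiga
    # sends back after the login form is POSTed.
    HTTP.follow(strict: false)
        .post("https://account.nicovideo.jp/login", form: { mail: "...", password: "..." })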
* Add a three-second connection timeout to all http requests. By default
http.rb doesn't have any timeouts, so it can hang forever trying to
connect if there are any network issues.
* Return a fake 522 error in the event of a timeout so that callers
don't have to deal with TimeoutError exceptions; instead they can treat
timeouts as normal 5xx errors (which most callers already handle).
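A sketch of the shape of it (the wrapper and the stand-in response
object are assumptions about Danbooru::Http, not http.rb itself):

    require "http"

    FakeTimeoutResponse = Struct.new(:status, :body)

    def get_with_timeout(url)
      HTTP.timeout(connect: 3).get(url)
    rescue HTTP::TimeoutError
      # Callers see an ordinary 5xx-style status instead of an exception.
      FakeTimeoutResponse.new(522, "")
    end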
The old password reset flow:
* User requests a password reset.
* Danbooru generates a password reset nonce.
* Danbooru emails user a password reset confirmation link.
* User follows link to password reset confirmation page.
* The link contains a nonce authenticating the user.
* User confirms password reset.
* Danbooru resets user's password to a random string.
* Danbooru emails user their new password in plaintext.
The new password reset flow:
* User requests a password reset.
* Danbooru emails user a password reset link.
* User follows link to password edit page.
* The link contains a signed_user_id param authenticating the user.
* User changes their own password.
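One way the signed_user_id param can be implemented is with an
ActiveSupport::MessageVerifier (a sketch; the route helper and exact
details are assumptions):

    verifier = Rails.application.message_verifier("password_reset")

    # When sending the reset email:
    signed_user_id = verifier.generate(user.id, expires_in: 24.hours)
    link = edit_password_url(signed_user_id: signed_user_id)

    # When the user follows the link:
    user_id = verifier.verified(params[:signed_user_id])
    user = User.find(user_id) if user_id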
Fix a couple security issues related to dmail permalinks. Dmails have a
permalink that you can give to a Mod to let them read the dmail. This is
done with a key param that grants access when the dmail is opened by
another user. The key param had several problems:
* The key contained a full copy of the message's title and body encoded in
base64. This meant that anyone given a dmail permalink could read the
full dmail just by decoding the key in the link, without even having
to open the link.
* The key was derived from the dmail's title and body. If you knew or
could guess a dmail's title and body you could open the dmail. One
case when this was possible was when sending dmails. You could send
someone a dmail, take the permalink from your sent copy of the dmail,
then increment the dmail id to open the receiver's copy of the dmail.
Since the sent copy and the received copy both had the same title and
body, they both had the same dmail key. This let you check whether a
person had read your dmail, and what time they read it at.
* The key verification was done with an insecure string comparison
rather than a secure constant-time comparison. This was potentially
vulnerable to timing attacks.
* Opening a dmail belonging to another user would mark it as read for them.
The fix to all this is to use the dmail's id as the key instead of the
dmail's title and body. This means that old permalinks no longer work.
This is unavoidable given the issues above.
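A sketch of the new scheme (helper names are illustrative): the key is
an HMAC over the dmail's id, so it reveals nothing about the contents,
can't be derived from the sender's copy, and is checked with a
constant-time comparison.

    require "openssl"

    def dmail_key(dmail)
      OpenSSL::HMAC.hexdigest("SHA256", Rails.application.secret_key_base, dmail.id.to_s)
    end

    def valid_dmail_key?(dmail, key)
      ActiveSupport::SecurityUtils.secure_compare(dmail_key(dmail), key.to_s)
    end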
Other changes:
* The name of the 'Permalink' link is now 'Share'.
* Anyone with the 'Share' link can view the dmail, not just Mods.
* Switch CloudflareService from HttpartyCache to Danbooru::Http.
* Purge cached urls from Cloudflare when a post is replaced and the md5
doesn't change. This happens when a corrupted image is replaced or
thumbnails are regenerated. Previously, we purged URLs when a post was
expunged, which was unnecessary because those URLs can expire naturally.
It was also wrong because the subdomains were hardcoded, the URLs used
http:// instead of https://, and we didn't account for tagged URLs.
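The purge itself is a single Cloudflare API call, roughly (zone id,
token, and the URL list are placeholders):

    require "http"

    HTTP.auth("Bearer #{cloudflare_api_token}").post(
      "https://api.cloudflare.com/client/v4/zones/#{zone_id}/purge_cache",
      json: { files: urls_to_purge }  # https:// URLs, all subdomains, tagged variants
    )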
Previously the page-based (numbered) paginator would always count the
total_pages, even in API calls when it wasn't needed. This could be very
slow in some cases. Refactor so that total_pages isn't calculated unless
it's called.
While we're at it, refactor to condense all the sequential vs. numbered
pagination logic into one module. This incidentally fixes a couple more
bugs:
* "page=b0" returned all pages rather than nothing.
* Bad parameters like "page=blaha123" and "page=a123blah" were accepted.
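A sketch of both ideas (names are illustrative, not the actual module):

    module PaginationSketch
      # Accepts "123" (numbered) and "a123"/"b123" (sequential); rejects
      # junk like "b0", "blaha123", and "a123blah".
      PAGE_PARAM = /\A(?<mode>[ab])?(?<num>[1-9][0-9]*)\z/

      def parse_page_param(page)
        match = PAGE_PARAM.match(page.to_s)
        raise ArgumentError, "invalid page: #{page}" if match.nil?
        [match[:mode], match[:num].to_i]
      end

      # The expensive COUNT(*) only runs if a caller actually asks for
      # total_pages; API calls usually don't.
      def total_pages
        @total_pages ||= (records.total_count / records_per_page.to_f).ceil
      end
    end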