Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies.
Previously, Downloads::File tried to automatically look up the source
and download the full size image instead if we gave it a sample url.
Now we can do plain downloads without source strategies altering the
url; a sketch follows this list.
* The Cloudflare Polish check has been changed from checking for a
Cloudflare IP to checking for the CF-Polished header. Looking up the
list of Cloudflare IPs was slow and flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal http requests, not just for downloads.
* The Webmock gem can be removed, since it was only used for stubbing
out certain HTTParty requests in the download tests. Webmock is buggy
and caused certain tests to fail during CI.
* The retriable gem can be removed, since we no longer autoretry failed
downloads. We assume that if a download fails once then retrying
probably won't help.
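As a rough sketch of what a plain download might look like after this
change (the method name, size cap, and error handling are illustrative,
not the actual Danbooru::Http API, and the SSRF IP checks are elided):

```ruby
require "http"
require "tempfile"

MAX_FILE_SIZE = 50 * 1024 * 1024 # assumed cap, not from this commit

def download(url)
  # No source strategy rewrites the url; we fetch exactly what we're given.
  response = HTTP.timeout(30).follow.get(url)
  raise "download failed: #{response.status}" unless response.status.success?

  # The Cloudflare Polish check is now just a header check, with no IP lookup.
  polished = !response.headers["CF-Polished"].nil?

  file = Tempfile.new
  file.binmode
  response.body.each do |chunk|
    file.write(chunk)
    raise "file too large" if file.size > MAX_FILE_SIZE
  end
  file.rewind
  [file, polished]
end
```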
Increase the timeout to 30 seconds when uploading files to IQDB.
Previously we used the default timeout of 3 seconds, which could
sometimes cause 599 timeout errors if the upload took too long.
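In http.rb terms this is a one-line change; a minimal sketch, assuming
the client is built with a single global timeout:

```ruby
require "http"

# 30-second timeout for IQDB uploads, instead of the old 3-second default.
http = HTTP.timeout(30)
```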
Fix "cannot determine size of body" errors on upload page. Caused by
exception during IQDB lookup. We were posting the form data wrong, we
need to wrap the file with HTTP::FormData::File and pass it through the
`form` parameter.
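A hedged sketch of the corrected call, with a made-up IQDB endpoint:

```ruby
require "http"

# Passing a raw File as the request body raises "cannot determine size of
# body". Wrapping it in HTTP::FormData::File and sending it through the
# `form` parameter builds a proper multipart/form-data request instead.
file = HTTP::FormData::File.new("image.jpg")
response = HTTP.timeout(30).post("https://iqdb.example.com/query", form: { file: file })
```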
This was only halfway supported, since the download module does not
have an image_url function. Instead, the image_url parameter just used
the url function, which returns the original URL passed into the
download function. This change also adds support for grabbing the
largest available image, using the download module's file_url function
(see the sketch after the list below).
- Fixes image_url parameter
- Adds file_url parameter
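Roughly, the parameter handling might reduce to the following (the
controller context and the `download` object are assumptions; url and
file_url are the functions named above):

```ruby
# file_url grabs the largest available image; image_url has no dedicated
# function, so lookups fall back to the original url of the download.
lookup_url =
  if params[:file_url].present?
    download.file_url
  else
    download.url
  end
```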
Instead of sending IQDB the image url and letting it download the file,
download the file on Danbooru's end and send IQDB the downloaded file
(sketched after the list below). This fixes several issues:
* Some sites need the referer header set to avoid hotlink protection
when downloading the file. Danbooru knows how to deal with this but
IQDB doesn't.
* We need to enforce certain restrictions when downloading files,
including setting max filesize limits, setting max timeouts, and not
allowing downloads from forbidden IPs (to avoid SSRF attacks).
Danbooru knows how to handle these things but IQDB doesn't.
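A minimal sketch of the new flow, under assumed names (the url, referer
value, endpoint, and error handling are placeholders; the real filesize
and SSRF checks live in the shared download code described earlier):

```ruby
require "http"
require "tempfile"

image_url = "https://example.com/image.jpg"
referer = "https://example.com/"

# Download the file ourselves, with the referer set to get past hotlink
# protection, then send IQDB the bytes instead of the url.
response = HTTP.timeout(30).follow.headers("Referer" => referer).get(image_url)
raise "download failed: #{response.status}" unless response.status.success?

file = Tempfile.new
file.binmode
file.write(response.body.to_s)
file.rewind

HTTP.timeout(30).post("https://iqdb.example.com/query",
                      form: { file: HTTP::FormData::File.new(file, filename: "image.jpg") })
```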
* Fix the IQDB lookup to handle error messages returned by IQDB.
* Fix `undefined method `>=' for nil:NilClass` error when trying to upload
ugoiras. Using the preview image for IQDB lookups should be faster and
give the same results as the full image, plus it should work with things
like ugoiras and videos that IQDB can't handle.
Also add an image_url param so that API users can continue using the
full image if needed.
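In Rails terms the url selection might reduce to something like this
(`post.preview_file_url` is an assumed accessor):

```ruby
# Use the preview for IQDB lookups by default: it's faster, gives the same
# results, and works for ugoiras and videos that IQDB can't handle. API
# users can pass image_url to search with the full image instead.
lookup_url = params[:image_url].presence || post.preview_file_url
```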
Previously the search form on the /iqdb_queries page submitted directly
to the IQDB service (karasuma.donmai.us), which redirected back to
Danbooru with the search results.
This was different from API requests, which submitted to
/iqdb_queries.json and proxied the call to IQDB through Danbooru.
Because of this, searches on the /iqdb_queries page behaved differently
from API requests: things like filesize limits and referer spoofing were
handled differently.
Now searches on the /iqdb_queries page submit directly to Danbooru.
This is simpler and means that API requests and HTML requests have the
same behavior.
* Change cutoffs on the upload page to a max of 5 results and a min of
20% similarity.
* Change cutoffs on the standalone /iqdb_queries page to a max of 20
results and a min of 0% similarity.
* /iqdb_queries.json: add `limit` and `similarity` params to change the
default cutoffs (example below).
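For example, a sketch of an API call using the new cutoff params (the
`url` param name is an assumption):

```ruby
require "http"

# Ask for up to 20 results with at least 50% similarity.
HTTP.get("https://danbooru.donmai.us/iqdb_queries.json",
         params: { url: "https://example.com/image.jpg", limit: 20, similarity: 50 })
```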