Factor out referrer spoofing so that it can be used outside of downloading
files. We also need to spoof the referrer when determining the remote
filesize of images on the uploads page.
Bug: the uploads page showed a remote size of 146 bytes for Pixiv uploads.
Cause: we didn't spoof the Referer header when making the HEAD request
for the image, causing Pixiv to return a 403 error.
Also fix the case where the Content-Length header is absent.
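A minimal sketch of the combined fix, assuming plain http.rb; Danbooru::Http's real interface may differ, and the method name and referrer value here are illustrative only:

require "http"

def remote_file_size(url)
  # Spoof the Referer so hosts like Pixiv don't reject the HEAD request with a 403.
  response = HTTP.headers("Referer" => "https://www.pixiv.net").head(url)

  # The Content-Length header may be absent; return nil instead of raising.
  response.headers["Content-Length"]&.to_i
end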
Add a `respond_to_search` test helper for concisely testing that a
controller's index action correctly responds to a search. Usage:
# Tests that `/tags.json?search[name]=touhou` returns the `touhou` tag.
setup { @touhou = create(:tag, name: "touhou") }
should respond_to_search(name: "touhou").with { @touhou }
These exceptions are no longer raised now that we've switched from
HTTParty to http.rb. Swallowing unexpected exceptions during testing was
a bad practice anyway.
Replace the mocked services in scripts/mocked_services with Rails-level
mocked services.
The scripts in scripts/mocked_services were a set of stub Sinatra
servers used to mock the Reportbooru, Recommender, and IQDBs services
during development. They returned fake data so you could test pages that
use these services.
Implementing these services in Rails makes it easier to run them. It
also lets us drop a dependency on Sinatra and drop a use of HTTParty.
To use these services, set the following configuration in danbooru_local_config.rb
or .env.local:
* reportbooru_server: http://localhost:3000/mock/reportbooru
* recommender_server: http://localhost:3000/mock/recommender
* iqdbs_server: http://localhost:3000/mock/iqdb
where `http://localhost:3000` is the URL for your local Danbooru server
(this may need to be changed depending on your configuration).
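For example, a minimal danbooru_local_config.rb sketch (assuming the usual CustomConfiguration pattern; adapt to your setup):

module Danbooru
  class CustomConfiguration < Configuration
    def reportbooru_server
      "http://localhost:3000/mock/reportbooru"
    end

    def recommender_server
      "http://localhost:3000/mock/recommender"
    end

    def iqdbs_server
      "http://localhost:3000/mock/iqdb"
    end
  end
end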
The Nijie login process works like this:
* First we submit our `email` and `password` to `https://nijie.info/login_int.php`.
* Then we save the NIJIEIEID session cookie from the response.
* We optionally retry if login failed. Nijie returns 429 errors with a
`Retry-After: 5` header if we send too many login requests. This can
happen during parallel testing.
* We cache the login cookies for only 1 hour so we don't have to worry
about them becoming invalid if we cache them too long.
Cookie handling and retrying on failure are handled transparently by Danbooru::Http.
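A rough sketch of the flow in http.rb terms (names here are illustrative, not the actual Danbooru code):

def login_cookies
  # Cache for 1 hour; retries on 429 responses are left to Danbooru::Http.
  Rails.cache.fetch("nijie-login-cookies", expires_in: 1.hour) do
    response = http.post("https://nijie.info/login_int.php", form: { email: email, password: password })
    response.cookies # includes the NIJIEIEID session cookie
  end
end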
Allow cookies to be saved and sent back when making several requests in
a row. Usage:
http = Danbooru::Http.use(:session)
# Saves the foo=42 cookie set by the response.
http.get("https://httpbin.org/cookies/set/foo/42")
# Sends back the foo=42 cookie from the previous response.
http.get("https://httpbin.org/cookies")
Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies. Before,
  Downloads::File tried to automatically look up the source and download
  the full-size image instead if we gave it a sample URL. Now we can
  do plain downloads without source strategies altering the URL.
* The Cloudflare Polish check has been changed from checking for a
  Cloudflare IP to checking for the CF-Polished header (see the sketch
  after this list). Looking up the list of Cloudflare IPs was slow and
  flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal http requests, not just for downloads.
* The WebMock gem can be removed, since it was only used for stubbing
  out certain HTTParty requests in the download tests. WebMock is buggy
  and caused certain tests to fail in CI.
* The retriable gem can be removed, since we no longer automatically
  retry failed downloads. We assume that if a download fails once, then
  retrying probably won't help.
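Sketch of the new Cloudflare Polish check mentioned above (the method name is illustrative): rather than testing whether the server's IP belongs to Cloudflare, just look for the CF-Polished response header.

def cloudflare_polished?(response)
  response.headers["CF-Polished"].present?
end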
Revert to the previous workaround of fetching the previous day if the
current day returns no result. A terrible hack; really we should convert
dates to Reportbooru's timezone, but that has other complications.
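Illustrative sketch of the workaround, with hypothetical names:

def popular_posts(date = Date.current)
  results = fetch_from_reportbooru(date)
  # Fall back to yesterday's data when today's key has no results yet.
  results.presence || fetch_from_reportbooru(date.yesterday)
end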
This reverts commit e83d07ea7b.
It was worth a try, but unfortunately it seems that once
someone sets tools in a Pixiv upload, they become defaults and
are applied to all of their subsequent uploads, so we get some
posts with two or three different digital tags.
This passed in development but failed in CI because SavedSearch.redis
used the live Redis server, which worked by accident as long as you had
a Redis server running.
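An illustrative way to avoid this in tests, assuming mocha and the mock_redis gem (not necessarily what this commit does):

setup do
  SavedSearch.stubs(:redis).returns(MockRedis.new)
end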
* Get rid of mechanize, fully switch to Danbooru::Http
* Switch to the mobile API, improving speed
* Merge the main and manga clients
* Add full support for manga pages
* Add support for anonymous and R-15 images
* Don't fail when attempting to upload oekaki direct links
* Various misc fixes