Also add inputs on the search page for both the `linked_to` and
`not_linked_to` search parameters. Additionally, normalize the title
first, since autocomplete adds trailing spaces. The search query was
also simplified a bit by taking advantage of Rails associations.
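For illustration, a minimal sketch of the normalization, assuming the
title arrives through the search params (the exact location in the code
may differ):

# Strip the trailing whitespace that autocomplete leaves on the title.
title = params.dig(:search, :title).to_s.strip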
Nijie tests fail often under parallel testing. This is because every
test needs to log in to Nijie first, but Nijie rate-limits the login
endpoint, so eventually we hit the limit and tests start failing.
This is made worse by a thundering herd problem: eight test processes
try to log in to Nijie at the same time, but only one succeeds, so the
rest sleep and try again, but they all wake up and retry at the same
time, hitting the rate limit again.
The workaround is to set the retry limit ridiculously high, higher than
we would ideally like in production. Another workaround would be to
serialize the Nijie tests in the test suite. This can be done with
lockfiles and flock(2), as sketched below. This helps, but we can still
hit the rate limit even under serialized execution.
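A sketch of the flock(2)-based serialization, assuming a shared
lockfile and an illustrative helper name (not the actual test code):

# Run the block while holding an exclusive lock on a shared lockfile,
# so only one test process talks to Nijie at a time.
def with_nijie_lock(path = "tmp/nijie.lock")
  File.open(path, File::RDWR | File::CREAT) do |file|
    file.flock(File::LOCK_EX) # blocks until no other process holds the lock
    yield
  end                         # closing the file releases the lock
end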
Fix Nicoseiga strategy to work with certain direct image urls that we
can't otherwise extract any information from.
Examples:
* https://dic.nicovideo.jp/oekaki/52833.png
Bug: if a Nijie login failed with a 429 Too Many Requests error, the
error would get cached, so when we retried the request, we would just
get our own cached response back every time. The 429 error would
eventually be passed up to the Nijie strategy, which caused random
methods to fail because they couldn't get the html page.
Fix: add the `retriable` feature *after* the `cache` feature so that
retries don't go through the cache. This is a hack. We want retries to
go at the bottom of the stack, below caching, but we can't enforce this
ordering.
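For illustration, a hedged sketch of the fix, assuming the feature-chaining
style Danbooru::Http already uses (e.g. `use(:session)`); the ordering is
the point, not the exact call:

# Adding :retriable after :cache means retries wrap the request outside
# the cache, so a retry is re-sent instead of being answered from the
# cached 429 response.
http = Danbooru::Http.use(:cache).use(:retriable)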
Replace the mocked services in scripts/mocked_services with Rails-level
mocked services.
The scripts in scripts/mocked_services were a set of stub Sinatra
servers used to mock the Reportbooru, Recommender, and IQDBs services
during development. They returned fake data so you could test pages that
use these services.
Implementing these services in Rails makes them easier to run. It also
lets us drop the Sinatra dependency and one use of HTTParty.
To use these services, set the following configuration in danbooru_local_config.rb
or .env.local:
* reportbooru_server: http://localhost:3000/mock/reportbooru
* recommender_server: http://localhost:3000/mock/recommender
* iqdbs_server: http://localhost:3000/mock/iqdb
where `http://localhost:3000` is the url of your local Danbooru server
(this may need to be changed depending on your configuration).
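As a rough sketch of what a Rails-level mock looks like (the route and
controller names here are hypothetical):

# config/routes.rb
namespace :mock do
  get "recommender/recommend", to: "recommender#recommend"
end

# app/controllers/mock/recommender_controller.rb
class Mock::RecommenderController < ApplicationController
  # Return canned [post_id, score] pairs, mimicking the real Recommender.
  def recommend
    render json: [[1, 0.9], [2, 0.5]]
  end
end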
The Nijie login process works like this:
* First we submit our `email` and `password` to `https://nijie.info/login_int.php`.
* Then we save the NIJIEIEID session cookie from the response.
* We optionally retry if login failed. Nijie returns 429 errors with a
`Retry-After: 5` header if we send too many login requests. This can
happen during parallel testing.
* We cache the login cookies for only 1 hour so we don't have to worry
about them becoming invalid if we cache them too long.
Cookies and retries on failure are handled transparently by Danbooru::Http.
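Putting the steps together, a hedged sketch of the flow (the method and
cache helper names are illustrative):

# Fetch, and cache for an hour, the Nijie session cookies.
def login_cookies
  Cache.get("nijie-login-cookies", 1.hour) do
    response = Danbooru::Http.use(:session).post(
      "https://nijie.info/login_int.php",
      form: { email: email, password: password }
    )
    response.cookies # includes the session cookie on success
  end
end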
Replace http.rb's built-in redirect following option with our own
redirect follower. This fixes an issue with http.rb losing cookies after
following a redirect.
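A simplified sketch of what the replacement follower does (the real
code handles more edge cases, such as relative Location headers):

# Follow redirects manually so each hop goes back through our own
# pipeline, keeping any cookies set along the way.
def follow_redirects(url, max_redirects: 5)
  max_redirects.times do
    response = get(url)
    return response unless response.status.redirect?
    url = response.headers["Location"]
  end
  raise "too many redirects"
end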
Allow cookies to be saved and sent back when making several requests in
a row. Usage:
http = Danbooru::Http.use(:session)
# Saves the foo=42 cookie set by the response.
http.get("https://httpbin.org/cookies/set/foo/42")
# Sends back the foo=42 cookie saved from the previous response.
http.get("https://httpbin.org/cookies")
Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies.
Previously, Downloads::File tried to automatically look up the source
and download the full-size image instead if we gave it a sample url. Now
we can do plain downloads without source strategies altering the url
(see the sketch after this list).
* The Cloudflare Polish check has been changed from checking for a
Cloudflare IP to checking for the CF-Polished header. Looking up the
list of Cloudflare IPs was slow and flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal http requests, not just for downloads.
* The Webmock gem can be removed, since it was only used for stubbing
out certain HTTParty requests in the download tests. Webmock is buggy
and caused certain tests to fail during CI.
* The retriable gem can be removed, since we no longer autoretry failed
downloads. We assume that if a download fails once then retrying
probably won't help.
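For example, a plain download now looks roughly like this (the
`download` method name is an assumption about the new API):

# Hypothetical usage: fetch the file directly, with no source strategy
# rewriting the url behind our back.
file = Danbooru::Http.download("https://example.com/image.jpg")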
Revert to the previous workaround of fetching the previous day if the
current day returns no result. A terrible hack; really we should convert
dates to Reportbooru's timezone, but that has other complications.
This reverts commit e83d07ea7b.
It was worth a try, but unfortunately it seems that once
someone sets tools in a Pixiv upload, they become defaults and
are applied to all of their subsequent uploads, so we get some
posts with two or three different digital tags.
Increase the timeout to 30 seconds when uploading files to IQDB.
Previously we used the default timeout of 3 seconds, which sometimes
caused 599 timeout errors if the upload took too long.
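Assuming http.rb-style chaining, the change looks roughly like this
(`iqdb_url` and `upload_params` are illustrative):

# Allow slow IQDB uploads up to 30 seconds instead of the 3 second default.
http.timeout(30).post(iqdb_url, form: upload_params)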
Fix "cannot determine size of body" errors on upload page. Caused by
exception during IQDB lookup. We were posting the form data wrong, we
need to wrap the file with HTTP::FormData::File and pass it through the
`form` parameter.
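The corrected call looks like this (the url and variable names are
illustrative; HTTP::FormData::File comes from http.rb's form_data gem):

# Wrap the file so http.rb can determine the body size and build a
# proper multipart upload.
http.post("#{iqdbs_server}/similar", form: {
  file: HTTP::FormData::File.new(file.path)
})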