Nijie tests often fail under parallel testing. Every test needs to log
in to Nijie first, but Nijie rate-limits the login endpoint, so
eventually we hit the limit and tests start failing.
This is made worse by a thundering herd problem: eight test processes
try to log in to Nijie at the same time, but only one succeeds, so the
rest sleep and retry. They all wake up at the same time and retry
together, hitting the rate limit again.
The workaround is to set the retry limit ridiculously high, higher than
we would ideally want in production. Another workaround would be to
serialize the Nijie tests in the test suite, which can be done with
lockfiles and flock(2), as sketched below. This helps, but we can still
hit the rate limit even under serialized execution.
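
A minimal sketch of that lockfile approach, assuming a tmp/ directory
visible to all test processes (the file name and helper name are
illustrative):

```ruby
# Serialize Nijie tests across processes with an exclusive file lock.
def with_nijie_lock
  File.open("tmp/nijie_tests.lock", File::RDWR | File::CREAT) do |file|
    file.flock(File::LOCK_EX) # blocks until no other process holds the lock
    yield
  end                         # closing the file releases the lock
end
```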
Bug: if a Nijie login failed with a 429 Too Many Requests error, the
error response would get cached, so when we retried the request we would
just get our own cached 429 back every time. The error would eventually
be passed up to the Nijie strategy, causing random methods to fail
because they couldn't fetch the html page.
Fix: add the `retriable` feature *after* the `cache` feature so that
retries don't go through the cache. This is a hack. We want retries to
go at the bottom of the stack, below caching, but we can't enforce this
ordering.
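
Concretely, the stack now gets built in roughly this order (a hedged
sketch; the chainable `cache`/`use` calls follow Danbooru::Http's style,
but the exact method names may differ):

```ruby
# Adding :retriable after :cache places retries below the cache in the
# request flow, so a retried request isn't served the cached 429.
http = Danbooru::Http.new.cache(1.hour).use(:retriable)
response = http.get("https://nijie.info/login_int.php")
```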
The Nijie login process works like this:
* First we submit our `email` and `password` to `https://nijie.info/login_int.php`.
* Then we save the NIJIEIEID session cookie from the response.
* We optionally retry if login failed. Nijie returns 429 errors with a
`Retry-After: 5` header if we send too many login requests. This can
happen during parallel testing.
* We cache the login cookies for only 1 hour, so we don't have to worry
  about them going stale from being cached too long.
Cookies and retries on failure are handled transparently by Danbooru::Http.
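
Stripped of the Danbooru::Http plumbing, the raw flow looks roughly like
this (a sketch using the http gem directly; the retry loop and error
handling are elided, and the ENV variable names are illustrative):

```ruby
require "http"

email    = ENV["NIJIE_EMAIL"]    # credentials come from wherever you keep them
password = ENV["NIJIE_PASSWORD"]

# Step 1: submit credentials to the login endpoint.
response = HTTP.post("https://nijie.info/login_int.php",
                     form: { email: email, password: password })

if response.code == 429
  # Too many login attempts; honor Retry-After before retrying.
  sleep response.headers["Retry-After"].to_i
else
  # Step 2: save the session cookie; we cache it for at most 1 hour.
  session_cookie = response.headers["Set-Cookie"]
end
```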
Fix regression in #4475: fetch the commentary as html instead of
plaintext so that we don't lose links or other formatting. Also replace
/jump.php redirect links with the actual url they point to.
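
The redirect rewrite amounts to something like this (a hedged sketch,
assuming /jump.php carries the percent-encoded target url in its query
string):

```ruby
require "cgi"

# Replace each nijie /jump.php redirect with its decoded target url.
html = html.gsub(%r{https?://nijie\.info/jump\.php\?([^"<\s]+)}) do
  CGI.unescape($1)
end
```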
* Move the source normalization logic out of the post model
and into individual sources' strategies.
* Move the normalization tests into each source's own tests, and expand
  them significantly. Previously we were only testing a very small
  subset of domains and variants.
* Fix up normalization for several sites.
* Normalize fav.me urls into normal deviantart urls.
* Rename `unique_id` to `tag_name`.
* Add `other_names` and `profile_urls` methods that sources can override
  to provide extra names or urls when creating new artist entries (see
  the sketch below).
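
A hypothetical strategy using the new hooks might look like this (the
class name and method bodies are illustrative, not any real site's
logic):

```ruby
module Sources
  module Strategies
    class ExampleSite < Base
      # Renamed from `unique_id`: the canonical tag name for the artist.
      def tag_name
        artist_name_from_url
      end

      # Extra names to attach when creating a new artist entry.
      def other_names
        [display_name, tag_name].compact.uniq
      end

      # Extra profile urls to attach when creating a new artist entry.
      def profile_urls
        ["https://example.com/users/#{tag_name}"]
      end
    end
  end
end
```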
Fix sources choosing the wrong strategy when the referer belongs to a
different site (for example, when uploading a twitter post with a pixiv
referer).
* Fix `match?` to only consider the main url, not the referer.
* Change `match?` to match against a list of domains given by the `domains` method.
* Change `match?` to an instance method.
For example: if you were on an html work page on pixiv, clicked a link
to a different pixiv work page, and then clicked the bookmarklet, it
used to fetch the source from the *first* work you were on instead of
the second.
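
The reworked `match?` amounts to roughly this (a hedged sketch;
`parsed_url` and the example `domains` list are illustrative):

```ruby
# Domains this strategy claims; each strategy supplies its own list.
def domains
  ["pixiv.net", "pximg.net"]
end

# An instance method now, and it only looks at the main url, never the
# referer, so a referer from a different site can't steal the strategy.
def match?
  return false if parsed_url.nil?
  domains.include?(parsed_url.domain)
end
```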