The Nijie login process works like this:
* First we submit our `email` and `password` to `https://nijie.info/login_int.php`.
* Then we save the NIJIEIEID session cookie from the response.
* We optionally retry if the login fails. Nijie returns 429 errors with a
`Retry-After: 5` header if we send too many login requests, which can
happen during parallel testing.
* We cache the login cookies for only 1 hour so that we don't have to
worry about them becoming invalid from being cached too long.
Cookies and retries on failure are handled transparently by Danbooru::Http.
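A minimal sketch of the flow, in which `Cache.get(key, expiry) { ... }`, the
`form:` keyword, and `response.cookies` are assumptions; only
`Danbooru::Http.use(:session)` comes from the usage notes below. Retrying on
429 is omitted since Danbooru::Http handles it transparently:

def nijie_login_cookies(email, password)
  # Cache the login cookies for 1 hour, per the note above.
  Cache.get("nijie-login-cookies", 1.hour) do
    http = Danbooru::Http.use(:session)
    response = http.post("https://nijie.info/login_int.php", form: { email: email, password: password })
    response.cookies # includes the NIJIEIEID session cookie
  end
end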
Allow cookies to be saved and sent back when making several requests in
a row. Usage:
http = Danbooru::Http.use(:session)
# saves the foo=42 cookie set by the response.
http.get("https://httpbin.org/cookies/set/foo/42")
# sends back the foo=42 cookie from the previous response.
http.get("https://httpbin.org/cookies")
Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies.
Previously, Downloads::File tried to automatically look up the source
and download the full-size image if it was given a sample URL. Now we
can do plain downloads without source strategies altering the URL.
* The Cloudflare Polish check has been changed from checking for a
Cloudflare IP to checking for the CF-Polished header. Looking up the
list of Cloudflare IPs was slow and flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal HTTP requests, not just for downloads (see the sketch after this list).
* The Webmock gem can be removed, since it was only used for stubbing
out certain HTTParty requests in the download tests. It was buggy and
caused certain tests to fail during CI.
* The retriable gem can be removed, since we no longer autoretry failed
downloads. We assume that if a download fails once then retrying
probably won't help.
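A minimal sketch of the kind of SSRF guard described above, assuming nothing
about the real implementation beyond the idea of rejecting non-public
addresses:

require "resolv"
require "ipaddr"
require "uri"

# Resolve the hostname and refuse to fetch anything that resolves to a
# private, loopback, or link-local address.
def validate_public_url!(url)
  ip = IPAddr.new(Resolv.getaddress(URI.parse(url).host))
  if ip.private? || ip.loopback? || ip.link_local?
    raise "refusing to fetch non-public address: #{ip}"
  end
end

Note that to be safe against DNS rebinding, the real code must connect to the
same IP it validated rather than re-resolving the hostname afterwards.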
Revert to the previous workaround of fetching the previous day if the
current day returns no result. A terrible hack; really we should convert
dates to Reportbooru's timezone, but that has other complications.
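The shape of the workaround, sketched with a hypothetical `fetch_for_date`
helper:

# If the current day returns no result, fall back to the previous day.
def fetch_results(date)
  results = fetch_for_date(date)
  results = fetch_for_date(date - 1.day) if results.blank?
  results
end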
* Get rid of mechanize; fully switch to Danbooru::Http
* Switch to the mobile API, improving speed
* Merge main and manga clients
* Add full support for manga pages
* Add support for anonymous and R-15 images
* Don't fail when attempting to upload oekaki direct links
* Miscellaneous fixes
* Combine MissedSearchService, PostViewCountService, and
PopularSearchService into a single ReportbooruService class.
* Use Danbooru::Http for these services instead of HTTParty.
Bug: Replacing posts hosted on cdn.donmai.us didn't work.
Cause: Original files on cdn.donmai.us are hosted under /var/www/danbooru/original/, but replacements
were trying to store them directly under /var/www/danbooru, which failed with a permission error.
We were trying to store them in the wrong directory because we didn't respect the `original_subdir`
option when generating file paths.
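Roughly, the fix means path generation has to thread the option through; a
sketch with made-up helper names and defaults:

def file_path(filename, base_dir: "/var/www/danbooru", original_subdir: "original/")
  File.join(base_dir, original_subdir, filename)
end

file_path("d34e4cf0a437a5d65f8e82b7bcd02606.jpg")
# => "/var/www/danbooru/original/d34e4cf0a437a5d65f8e82b7bcd02606.jpg"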
Fix regression in #4475. Fetch the commentary as HTML instead of
plaintext so that we don't lose links or other formatting.
Also replace /jump.php redirect links with the actual URL.
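A minimal sketch of the /jump.php rewrite, assuming the target URL is carried
URL-encoded in the query string:

require "cgi"
require "uri"

# Replace a Pixiv /jump.php redirect link with the URL it points at.
def expand_jump_link(href)
  uri = URI.parse(href)
  return href unless uri.path == "/jump.php"
  CGI.unescape(uri.query.to_s)
end

expand_jump_link("https://www.pixiv.net/jump.php?https%3A%2F%2Fexample.com")
# => "https://example.com"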
* Move the source normalization logic out of the post model
and into individual sources' strategies.
* Move normalization tests into each source's own tests, and expand
them significantly. Previously we were only testing a very small
subset of domains and URL variants.
* Fix up normalization for several sites.
* Normalize fav.me URLs into regular DeviantArt URLs.
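A sketch of the fav.me case, assuming the short link encodes the deviation ID
in base 36 after a leading "d", and that the `/deviation/<id>` form is an
acceptable normalized target:

# Convert a fav.me short link into a DeviantArt deviation URL.
def normalize_fav_me(url)
  id = url[%r{\Ahttps?://fav\.me/d([a-z0-9]+)\z}i, 1]
  return url if id.nil?
  "https://www.deviantart.com/deviation/#{id.to_i(36)}"
end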
* Move image thumbnail generation code to MediaFile::Image.
* Move video thumbnail generation code to MediaFile::Video.
* Move ugoira->webm conversion code to MediaFile::Ugoira.
This separates thumbnail generation from the upload process so that it's
possible to generate thumbnails outside of uploads.
Fixes bug described in d3e4ac7c17 (commitcomment-39049351)
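For example, something like the following becomes possible (the `open`
constructor and `preview` method are assumptions):

# Generate a preview for an arbitrary file on disk, outside any upload.
media_file = MediaFile.open("/path/to/image.jpg")
thumbnail = media_file.preview(150, 150)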
When dealing with searches, there are several variables we have to keep
in mind:
* Whether tag aliases should be applied.
* Whether search terms should be sorted.
* Whether the rating:s and -status:deleted metatags should be added by
safe mode and the hide deleted posts setting.
Which of these things we need to do depends on the context:
* We want to apply aliases when actually doing the search, calculating
the count, looking up the wiki excerpt, recording missed/popular
searches in Reportbooru, and calculating related tags for the sidebar,
but not when displaying the raw search as typed by the user (for
example, in the page title or in the tag search box).
* We want to sort the search when calculating cache keys for fast_count
or related tags, and when recording missed/popular searches, but not
in the page title or when displaying the raw search.
* We want to add rating:s and -status:deleted when performing the
search, calculating the count, or recording missed/popular searches,
but not when calculating related tags for the sidebar, or when
displaying the page title or raw search.
Here we introduce `normalized_query` and try to use it in contexts where
query normalization is necessary. When to use the normalized query
versus the raw unnormalized query is still subtle and prone to error.
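A rough sketch of the intended split; `PostQueryBuilder` and the `implicit:`
option are assumptions, only `normalized_query` itself is introduced here:

query = PostQueryBuilder.new("long_hair rating:s")

query.to_s                               # raw query as typed: page title, search box
query.normalized_query(implicit: true)   # aliased, sorted, with rating:s / -status:deleted added:
                                         # searching, counts, missed/popular search reporting
query.normalized_query(implicit: false)  # aliased and sorted, without the implicit metatags:
                                         # related tags for the sidebar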
This was a hack to deal with the Cloudflare check sometimes being slow
or timing out during tests. The call to https://api.cloudflare.com/client/v4/ips
could hang if there were IPv6 connectivity problems. If this happens, make
sure that IPv6 is configured properly and that `curl -v --http1.1 -6 https://api.cloudflare.com/client/v4/ips`
works.