Revert to the previous workaround of fetching the previous day if the
current day returns no results. This is a terrible hack; really we should
convert dates to Reportbooru's timezone, but that has other complications.
This reverts commit e83d07ea7b.
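For reference, the reinstated workaround amounts to roughly the sketch
below; `fetch_popular_searches` is a hypothetical stand-in for the actual
Reportbooru call, not the real service method.

```ruby
# Sketch only: retry with the previous day when the current day has no data.
def popular_searches(date = Date.current)
  results = fetch_popular_searches(date)
  results = fetch_popular_searches(date - 1) if results.blank?
  results
end
```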
It was worth a try, but unfortunately it seems that once
someone sets tools in a Pixiv upload, they become defaults and
are applied to all of their subsequent uploads, so we get some
posts with two or three different digital tags.
Increase timeout to 30 seconds when uploading files to IQDB. Previously
we used the default timeout of 3 seconds, which could sometimes cause 599
timeout errors if the upload took too long.
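For context, a minimal sketch with the http.rb gem, which Danbooru::Http
appears to wrap (an assumption based on the `form`/HTTP::FormData usage
elsewhere in this log); passing a single number to `timeout` sets an
overall timeout instead of the short default.

```ruby
require "http"

# Assumed to mirror what Danbooru::Http does internally; the URL is a placeholder.
http = HTTP.timeout(30)
http.get("https://iqdb.example.com/")
```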
Fix "cannot determine size of body" errors on upload page. Caused by an
exception during the IQDB lookup. We were posting the form data
incorrectly: the file needs to be wrapped with HTTP::FormData::File and
passed through the `form` parameter.
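A minimal sketch of the corrected request, again assuming Danbooru::Http
forwards these options to http.rb; the URL and field name are placeholders.
Wrapping the file lets the multipart encoder compute the body size, which
is what the "cannot determine size of body" error was complaining about.

```ruby
require "http"

# The point is the HTTP::FormData::File wrapper passed via the `form` option,
# rather than handing a raw File/IO to the client.
file = HTTP::FormData::File.new("upload.jpg")
HTTP.timeout(30).post("https://iqdb.example.com/", form: { file: file })
```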
Fix the sidebar on the /posts index page sometimes being blank. This
could happen when either the related tag calculation was too slow and
timed out, or when Reportbooru was unavailable and we couldn't fetch the
list of popular tags.
If the tag list would otherwise be blank, we fall back to frequent tags
(the most common tags on the current page of results).
Also change it so that if Reportbooru is unconfigured, we fail
gracefully by returning blank results instead of failing with an
exception. This is so we can still view the popular searches and missed
searches pages during testing (even though they'll be blank).
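The fallback logic amounts to something like the sketch below; the method
and config names are illustrative assumptions, not the actual controller
or service code.

```ruby
# Illustrative only: prefer related tags, fall back to the tags on the
# current page when the calculation times out or Reportbooru is down.
def sidebar_tags(posts, query)
  related_tags(query).presence || frequent_tags(posts)
rescue StandardError
  frequent_tags(posts)
end

# Illustrative only: return blank results instead of raising when
# Reportbooru isn't configured.
def popular_tags
  return [] if Danbooru.config.reportbooru_server.blank?
  fetch_popular_tags_from_reportbooru
end
```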
3cdf67920 changed it so that Danbooru::Http follows redirects by
default. This broke some things in the Nico Seiga strategy, so disable
following redirects in the Nico Seiga API client for now.
Also change it so that Danbooru::Http follows redirects after a POST
request (by setting `strict: false`). Nico Seiga needs this because it
sends a redirect after we POST the login form.
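With the http.rb gem (assumed to back Danbooru::Http), the relevant option
looks like the sketch below; by default the redirector refuses to follow a
redirect that comes back after a POST, and `strict: false` relaxes that.

```ruby
require "http"

# Placeholder URL and form fields; `strict: false` lets the client follow the
# redirect the login endpoint returns after the POST instead of raising.
HTTP.follow(strict: false).post("https://example.com/login", form: { mail: "...", password: "..." })
```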
* Get rid of mechanize, fully switch to Danbooru::Http
* Switch to the mobile API, improving speed
* Merge main and manga clients
* Add full support for manga pages
* Add support for anonymous and R-15 images
* Don't fail when attempting to upload oekaki direct links
* Various misc fixes
* Combine MissedSearchService, PostViewCountService, and
PopularSearchService into a single ReportbooruService class.
* Use Danbooru::Http for these services instead of HTTParty.
Bug: Replacing posts hosted on cdn.donmai.us didn't work.
Cause: Original files on cdn.donmai.us are hosted under /var/www/danbooru/original/, but replacements
were trying to store them directly under /var/www/danbooru, which failed with a permission error.
We were trying to store them in the wrong directory because we didn't respect the `original_subdir`
option when generating file paths.
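A toy illustration of the path construction (the surrounding code is
hypothetical; only `original_subdir` and the directories come from the
description above):

```ruby
base = "/var/www/danbooru"

# Old, broken behavior: the subdir option was ignored when building the path.
File.join(base, "abc123.jpg")               # => "/var/www/danbooru/abc123.jpg"

# Fixed behavior: respect original_subdir so originals land in the writable directory.
File.join(base, "original", "abc123.jpg")   # => "/var/www/danbooru/original/abc123.jpg"
```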
* Remove `banned_ip_for_download?` config option. This isn't something that usually needs
to be configured.
* Replace the `ipaddress` gem with `ipaddress_2`. The `ipaddress` gem has several methods
we need (`link_local?`, etc) that are only available in master because the gem hasn't had
an official release in several years. `ipaddress_2` is a fork that is more actively
maintained.
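A hedged usage example; the require name and exact method set depend on
the ipaddress_2 release in use.

```ruby
require "ipaddress_2"

# link_local? is one of the methods mentioned above;
# 169.254.0.0/16 is the IPv4 link-local range.
ip = IPAddress.parse("169.254.10.1")
ip.link_local?  # => true
```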
Fix regression in #4475. Fetch the commentary as HTML instead of
plaintext so that we don't lose links or other formatting.
Also fix it so that /jump.php redirect links are replaced with the
actual URL.
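Roughly what the /jump.php rewrite does (a sketch, not the actual strategy
code): the real destination is URL-escaped in the query string, so
unescaping it recovers the original link.

```ruby
require "cgi"

# Sketch: replace pixiv redirect links with their unescaped destination.
def expand_jump_links(html)
  html.gsub(%r{https?://www\.pixiv\.net/jump\.php\?([^"'\s]+)}) { CGI.unescape(Regexp.last_match(1)) }
end

expand_jump_links(%(<a href="https://www.pixiv.net/jump.php?https%3A%2F%2Fexample.com%2Fpage">link</a>))
# => %(<a href="https://example.com/page">link</a>)
```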
Get rid of `normalized_for_artist_finder?` and `normalizable_for_artist_finder?`.
This was legacy bullshit that was originally designed to avoid API calls
when saving artist entries containing old Pixiv direct image URLs that
had already been normalized, or that couldn't be normalized because they
had bad IDs.
Nowadays we store profile URLs in artist entries instead of direct image
URLs, so we don't normally need to do any API calls to normalize the
profile URL. Strategies should take care to avoid triggering API calls
inside `profile_url` when possible.
Flash files can be quite big (the biggest on danbooru.donmai.us is
68.6MB at the moment), so reading one and applying complex
transformations twice seems unnecessary.
MediaFile#dimensions is called twice (once in #width and once in #height),
but it only works on the first call: the file is read to the end and
consumed the first time, so when #read is called the second time it
returns an empty string.
ref: https://danbooru.donmai.us/forum_topics/16935.
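One way to express the fix idea (the real MediaFile change may differ) is
to parse once and memoize, so both #width and #height reuse the cached
pair instead of re-reading the consumed stream.

```ruby
# Sketch only; parse_swf_header stands in for the real SWF parsing.
class DimensionsSketch
  def initialize(io)
    @io = io
  end

  def width
    dimensions[0]
  end

  def height
    dimensions[1]
  end

  def dimensions
    # @io.read consumes the stream, so only do it once and cache the result.
    @dimensions ||= parse_swf_header(@io.read)
  end

  private

  def parse_swf_header(data)
    [0, 0]  # placeholder for the actual header parsing
  end
end
```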
Bug: sample images were being generated to be at most 850px wide *and*
850px tall. They're supposed to be at most 850px wide with unlimited height.
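A worked example of the intended constraint (numbers are illustrative):
only the width is capped, and the height just scales with it.

```ruby
MAX_SAMPLE_WIDTH = 850

# Cap the width at 850px and scale the height proportionally, with no upper bound on height.
def sample_dimensions(width, height)
  return [width, height] if width <= MAX_SAMPLE_WIDTH

  scale = MAX_SAMPLE_WIDTH.to_f / width
  [MAX_SAMPLE_WIDTH, (height * scale).round]
end

sample_dimensions(1600, 2400)  # => [850, 1275]  (taller than 850px is fine)
sample_dimensions(600, 4000)   # => [600, 4000]  (narrow images are untouched)
```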
This was only halfway supported, as the download module does not
have an image_url function; instead it just used the url function,
which returns the original URL passed into the download function.
Additionally, this adds support for grabbing the largest available
image by using the file_url function of the downloads module (see
the sketch after the list below).
- Fixes image_url parameter
- Adds file_url parameter
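A hypothetical sketch of how the three accessors relate; the actual
download/strategy classes are more involved, and these names are
assumptions beyond what the message states.

```ruby
# Hypothetical wiring only: url is the original input, image_url falls back to it
# when nothing better is available, and file_url points at the largest image.
class DownloadSketch
  attr_reader :url

  def initialize(url, image_url: nil, file_url: nil)
    @url = url
    @image_url = image_url
    @file_url = file_url
  end

  def image_url
    @image_url || url        # previously this fallback was all we had
  end

  def file_url
    @file_url || image_url   # prefer the largest available image
  end
end
```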