* Get rid of mechanize and switch fully to Danbooru::Http
* Switch to the mobile API, improving speed
* Merge main and manga clients
* Add full support for manga pages
* Add support for anonymous and R-15 images
* Don't fail when attempting to upload oekaki direct links
* Various misc fixes
* Combine MissedSearchService, PostViewCountService, and
  PopularSearchService into a single ReportbooruService class.
* Use Danbooru::Http for these services instead of HTTParty.
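A rough sketch of what the consolidated class might look like; the endpoint
paths and the exact Danbooru::Http call shape here are assumptions, not the
real Reportbooru API:

    # Illustrative only: one class wrapping the endpoints the three old
    # services hit separately. Paths and the HTTP interface are assumed.
    class ReportbooruService
      def initialize(base_url:, http: Danbooru::Http.new)
        @base_url = base_url
        @http = http
      end

      # Previously MissedSearchService
      def missed_search_rankings
        fetch("/missed_searches")
      end

      # Previously PopularSearchService
      def popular_search_rankings(date)
        fetch("/post_searches/rank?date=#{date}")
      end

      # Previously PostViewCountService
      def post_view_rankings(date)
        fetch("/post_views/rank?date=#{date}")
      end

      private

      def fetch(path)
        @http.get("#{@base_url}#{path}")
      end
    end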
Bug: Replacing posts hosted on cdn.donmai.us didn't work.
Cause: Original files on cdn.donmai.us are hosted under /var/www/danbooru/original/, but replacements
were being stored directly under /var/www/danbooru, which failed with a permission error.
They ended up in the wrong directory because we didn't respect the `original_subdir`
option when generating file paths.
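A minimal sketch of the shape of the fix; the method and option names are
illustrative, not the actual storage code:

    # Build the destination path with original_subdir applied, so the
    # replacement lands next to the existing originals instead of the web root.
    def replacement_path(base_dir, md5, ext, original_subdir: nil)
      File.join([base_dir, original_subdir, "#{md5}.#{ext}"].compact)
    end

    # original_subdir: nil        => /var/www/danbooru/<md5>.jpg          (wrong: permission error)
    # original_subdir: "original" => /var/www/danbooru/original/<md5>.jpg (matches cdn.donmai.us)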
The problem was that the Addressable parser does not catch every invalid
URL, so some extra checks were added:
- the hostname must contain a dot
This catches URLs like
http://http://something.com
which parses with a hostname of just "http".
The artist URL tests were also updated with cases that exercise every
validation error.
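A sketch of the dot check, assuming the Addressable gem; the surrounding
validation in the actual model is more involved:

    require "addressable/uri"

    # Reject URLs whose parsed hostname has no dot, e.g. the "http" host that
    # Addressable extracts from http://http://something.com.
    def host_looks_valid?(url)
      host = Addressable::URI.parse(url).host.to_s
      host.include?(".")
    rescue Addressable::URI::InvalidURIError
      false
    end

    host_looks_valid?("http://http://something.com")  # => false (host is "http")
    host_looks_valid?("https://www.example.com/page") # => true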
Fix regression in #4475. Fetch the commentary as HTML instead of
plain text so that we don't lose links or other formatting.
Also replace /jump.php redirect links with the actual URL.
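A hedged sketch of the /jump.php replacement, assuming Nokogiri and that the
redirect target is URL-encoded in the query string; the actual commentary
code may be structured differently:

    require "cgi"
    require "nokogiri"

    # Rewrite redirect links of the form "/jump.php?<escaped url>" so they
    # point at the real destination.
    def replace_jump_links(commentary_html)
      fragment = Nokogiri::HTML.fragment(commentary_html)
      fragment.css("a").each do |link|
        href = link["href"].to_s
        link["href"] = CGI.unescape(href.delete_prefix("/jump.php?")) if href.start_with?("/jump.php?")
      end
      fragment.to_html
    end

    replace_jump_links(%(<a href="/jump.php?https%3A%2F%2Fexample.com%2Fpage">example</a>))
    # => %(<a href="https://example.com/page">example</a>)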
* Move the source normalization logic out of the post model
and into individual sources' strategies.
* Rewrite the normalization tests to live in each source's own tests,
  and expand them significantly. Previously we were only testing
  a very small subset of domains and URL variants.
* Fix up normalization for several sites.
* Normalize fav.me URLs into regular DeviantArt URLs.
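For the fav.me case, a sketch only: a fav.me short link encodes the deviation
id in base 36 after the leading "d". The helper name and regex are
illustrative, and the exact normalized DeviantArt URL we emit is not shown:

    # Recover the numeric deviation id from a fav.me short link.
    def deviation_id_from_fav_me(url)
      code = url[%r{\Ahttps?://(?:www\.)?fav\.me/d(\w+)\z}i, 1]
      code&.to_i(36)
    end

    deviation_id_from_fav_me("https://fav.me/dabc123") # => 623698779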
* Move image thumbnail generation code to MediaFile::Image.
* Move video thumbnail generation code to MediaFile::Video.
* Move ugoira->webm conversion code to MediaFile::Ugoira.
This separates thumbnail generation from the upload process so that it's
possible to generate thumbnails outside of uploads.
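Purely illustrative of the new shape (the class names come from this commit,
but the method names and the path below are assumptions):

    # Thumbnailing now hangs off the media file itself, so it can run outside
    # the upload pipeline, e.g. from a script that regenerates previews.
    file = MediaFile.open("/var/www/danbooru/original/ab/cd/abcd1234.jpg") # hypothetical path
    preview = file.preview(150, 150)                                       # assumed method name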
Fixes bug described in d3e4ac7c17 (commitcomment-39049351)
When dealing with searches, there are several variables we have to keep
in mind:
* Whether tag aliases should be applied.
* Whether search terms should be sorted.
* Whether the rating:s and -status:deleted metatags should be added by
safe mode and the hide deleted posts setting.
Which of these things we need to do depends on the context:
* We want to apply aliases when actually doing the search, calculating
the count, looking up the wiki excerpt, recording missed/popular
searches in Reportbooru, and calculating related tags for the sidebar,
but not when displaying the raw search as typed by the user (for
example, in the page title or in the tag search box).
* We want to sort the search when calculating cache keys for fast_count
or related tags, and when recording missed/popular searches, but not
in the page title or when displaying the raw search.
* We want to add rating:s and -status:deleted when performing the
search, calculating the count, or recording missed/popular searches,
but not when calculating related tags for the sidebar, or when
displaying the page title or raw search.
Here we introduce normalized_query and try to use it in contexts where
query normalization is necessary. When to use the normalized query
versus the raw unnormalized query is still subtle and prone to error.
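A rough sketch of the distinction, not the actual PostQuery code; the alias
map and the two flags stand in for TagAlias lookups and the current user's
settings:

    # Apply aliases, optionally add the implicit metatags, and sort, producing
    # the form used for searching, counting, cache keys, and Reportbooru
    # reporting. The raw query string is kept as-is for page titles and the
    # search box.
    def normalized_query(raw_query, aliases:, safe_mode:, hide_deleted:, implicit_metatags: true)
      terms = raw_query.split.map { |t| aliases.fetch(t, t) }
      if implicit_metatags
        terms << "rating:s" if safe_mode
        terms << "-status:deleted" if hide_deleted
      end
      terms.sort.join(" ")
    end

    normalized_query("longhair 1girl", aliases: { "longhair" => "long_hair" },
                     safe_mode: true, hide_deleted: true)
    # => "-status:deleted 1girl long_hair rating:s"
    # Related tags would use implicit_metatags: false, since they should not
    # be affected by safe mode or the hide-deleted setting.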
This was a hack to deal with the Cloudflare check sometimes being slow
or timing out during tests. The call to https://api.cloudflare.com/client/v4/ips
could hang if there were IPv6 connectivity problems. If this happens, make
sure that IPv6 is configured properly and that `curl -v --http1.1 -6 https://api.cloudflare.com/client/v4/ips`
works.
That's the latest commit made to the DeviantArt files before switching from
the developer API to the JavaScript backend of the new "Eclipse"
frontend.
This is necessary because with the JS backend it's now basically impossible
to download posts without being logged in, i.e. without the cookies of a
logged-in user, and cookies exported from a browser can't be used for very
long. You would have to save the cookies DeviantArt sends back via the
"Set-Cookie" header in a database somewhere, on top of the other added
complexity.
Also:
* (temporarily) replace HttpartyCache with HTTParty, since HttpartyCache was
  removed long ago
* fix one "last argument as keyword parameters" deprecation warning
* change the repository URL (5d1a1cc87e)
* remove a self-explanatory comment
The name AliasAndImplicationImporter is a holdover from the time before
bulk update requests existed. It was a bad name because the class doesn't do
any actual importing; instead, it parses and executes bulk
update requests.
* Remove unnecessary rename_aliased_pages option. This option was always enabled.
* Don't try to rename the artist and wiki page inside AliasAndImplicationImporter
when an alias is approved. This is already handled by TagAlias#process!.
Refactor RelatedTagCalculator and RelatedTagQuery to take a PostQuery
object instead of a raw tag string.
* Fixes the related tag sidebar on the post index page having to reparse
the query and reevaluate aliases.
* Fixes related tags being affected by the current user's safe mode and
hide deleted posts settings.
Some searches, such as searches for private favorites or for the
status:unmoderated tag, return different results for different users.
These searches need to have their counts cached separately for each user
so that we don't return incorrect page counts when two different users
perform the same search.
This can also potentially leak private information, such as the number
of posts flagged, downvoted, or disapproved by a given user.
Partial fix for #4280.
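A minimal sketch of the idea, not the actual fast_count code; the key format
and the user_dependent flag are assumptions:

    User = Struct.new(:id)

    # User-dependent searches get a per-user cache key; everything else
    # shares a single cached count.
    def count_cache_key(query, user, user_dependent:)
      if user_dependent
        "post-count:user-#{user.id}:#{query}"
      else
        "post-count:#{query}"
      end
    end

    count_cache_key("status:unmoderated", User.new(42), user_dependent: true)
    # => "post-count:user-42:status:unmoderated"
    count_cache_key("touhou", User.new(42), user_dependent: false)
    # => "post-count:touhou"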