Change how URLs in artist entries are normalized. Don't silently convert image
URLs to profile URLs in artist entries. For example, if someone puts a Pixiv
image URL in an artist entry, don't try to fetch the source behind the scenes and
convert it into a profile URL in the `normalized_url` field.
We did this because, years ago, it was standard practice to put image URLs in artist
entries. Pixiv image URLs used to contain the artist's username, so we put image
URLs in artist entries for artist finding purposes. But Pixiv changed its URL format
so that image URLs no longer contained the username, and we dealt with that by adding
a `normalized_url` column to artist_urls and silently converting image URLs to
profile URLs in this field. This is no longer necessary because nowadays we don't
normally put image URLs in artist entries in the first place.
Now the `profile_url` method in `Source::URL` is used to normalize URLs in artist
entries. This lets us parse various profile URL formats and normalize them into a
single canonical form.
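To illustrate, roughly how this behaves (the user ID is made up, and the exact canonical form shown is an assumption):

```ruby
# Several profile URL formats for the same artist collapse into one
# canonical form.
Source::URL.parse("https://www.pixiv.net/member.php?id=123").profile_url
# => "https://www.pixiv.net/users/123"

Source::URL.parse("https://www.pixiv.net/en/users/123").profile_url
# => "https://www.pixiv.net/users/123"
```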
This change also removes the `normalize_for_artist_finder` method from source
strategies. Instead, the `profile_url` method is used for artist finding. That means
the profile URL returned by the source strategy must match the URL in the artist
entry for artist finding to work.
Bug: We assumed the referer URL was from the same site as the target
URL, and tried to call methods on the referer that only the target
URL's strategy supports.
Fix: Ignore the referer URL when it's from a different site than the
target URL.
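A minimal sketch of the fix, assuming the parsed URLs expose a `site_name` (the accessor names here are illustrative):

```ruby
# Keep the referer only if it parses to the same site as the target URL;
# otherwise drop it, so site-specific methods are never called on a URL
# from a foreign site.
def parsed_referer
  referer = Source::URL.parse(referer_url) if referer_url.present?
  referer if referer&.site_name == parsed_url.site_name
end
```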
Remove the `preview_urls` method from strategies. The only place this was used was
when doing IQDB searches, to download the thumbnail image from the source instead of
the full image.
This wasn't worth it for a few reasons:
* Thumbnails on other sites are sometimes not the size we want, which could affect
IQDB results.
* Grabbing thumbnails is complex for some sites. You can't always just rewrite the
image URL; sometimes it requires extra API calls, which can be slower than just
grabbing the full image.
* For videos and animations, thumbnails from other sites don't always match ours.
Our thumbnail generation tries to avoid blank thumbnails, so we don't always pick
the first frame, which could affect IQDB results.
API changes:
* /iqdb_queries?search[file_url] now downloads the URL as-is, without any modification.
Before, it tried to rewrite thumbnail and sample-size image URLs to the full-size version.
* /iqdb_queries?search[url] now returns an error if the URL is for an HTML page that
contains multiple images. Before, it grabbed only the first image and silently
ignored the rest.
NicoSeiga changed its login flow so that on every login you must enter
a 2FA code sent by email. This broke the NicoSeiga strategy. The fix is
to use a static session cookie instead (and hope it doesn't expire and
isn't tied to an IP).
The `nico_seiga_login` and `nico_seiga_password` config settings have
been removed from config/danbooru_default_config.rb and replaced by
`nico_seiga_user_session`. If you run your own Danbooru instance, you
will have to update your config file manually.
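If you run your own instance, the replacement setting can be defined in your local config along these lines (a sketch; the cookie value is a placeholder):

```ruby
# config/danbooru_local_config.rb
module Danbooru
  class CustomConfiguration < Configuration
    # Replaces the removed nico_seiga_login / nico_seiga_password settings.
    # Set this to the `user_session` cookie from a logged-in NicoSeiga account.
    def nico_seiga_user_session
      "user_session_12345_abcdef" # placeholder
    end
  end
end
```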
Fix uploads for NicoSeiga sources not working because the strategy
returned URLs like the one below in the list of image_urls, which
require a login to download:
https://seiga.nicovideo.jp/image/source/10315315
Also fix certain URLs like https://dic.nicovideo.jp/oekaki/52833.png not
working, because they didn't contain an image ID and the image_urls
method returned an empty list in this case.
The image_url method makes a request to `https://seiga.nicovideo.jp/images/source/:image_id`
to see where this URL redirects to. Before, we did a GET request, which downloaded the
full image. This could fail with a timeout error if the download took too long. We also
cached the request, which caused the full image to be cached even though we only need the
headers. Change it to a HEAD request so we don't have to download the entire image just to
check the URL.
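A stripped-down sketch of the idea using Net::HTTP (the real code goes through the strategy's HTTP client; the method name is illustrative):

```ruby
require "net/http"

# Follow the redirect with a HEAD request: only the headers come back, so
# nothing large is downloaded or cached just to learn the final image URL.
def resolve_image_url(image_id)
  uri = URI("https://seiga.nicovideo.jp/images/source/#{image_id}")
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.head(uri.request_uri)
  end
  response["Location"] # the redirect target, or nil if there was no redirect
end
```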
Fix the NicoSeiga strategy to work with certain direct image URLs that we
can't otherwise extract any information from.
Examples:
* https://dic.nicovideo.jp/oekaki/52833.png
* Get rid of mechanize, fully switch to Danbooru::Http
* Switch to the mobile API, improving speed
* Merge the main and manga clients
* Add full support for manga pages
* Add support for anonymous and R-15 images
* Don't fail when attempting to upload oekaki direct links
* Various misc fixes
Get rid of `normalized_for_artist_finder?` and `normalizable_for_artist_finder?`.
This was legacy bullshit originally designed to avoid API calls when
saving artist entries containing old Pixiv direct image URLs that had
already been normalized, or that couldn't be normalized because they
had bad IDs.
Nowadays we store profile URLs in artist entries instead of direct image
URLs, so we don't normally need any API calls to normalize the profile
URL. Strategies should take care to avoid triggering API calls inside
`profile_url` when possible.
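For example, a strategy can usually derive the profile URL from data already parsed out of the input URL (a hypothetical sketch; `artist_id` is an illustrative accessor):

```ruby
# No network traffic needed: the user ID was already extracted while
# parsing the input URL, so building the profile URL is pure string work.
def profile_url
  "https://www.pixiv.net/users/#{artist_id}" if artist_id.present?
end
```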
* Move the source normalization logic out of the post model
and into individual sources' strategies.
* Rewrite the normalization tests so they live in each source's own tests,
and expand them significantly. Previously we were only testing
a very small subset of domains and variants.
* Fix up normalization for several sites.
* Normalize fav.me URLs into normal DeviantArt URLs.
* Rename `unique_id` to `tag_name`.
* Add `other_names` and `profile_urls` methods that sources can override
to provide extra names or URLs when creating new artist entries (see the
sketch below).
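A hypothetical example of a strategy overriding the new hooks (the underlying accessors are illustrative):

```ruby
# Extra names and URLs a strategy can contribute when a new artist entry
# is created.
def other_names
  [artist_display_name, artist_handle].compact.uniq
end

def profile_urls
  [profile_url, artist_website_url].compact.uniq
end
```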
Fix sources choosing the wrong strategy when the referer belongs to a
different site (for example, when uploading a Twitter post with a Pixiv
referer).
* Fix `match?` to only consider the main url, not the referer.
* Change `match?` to match against a list of domains given by the `domains` method.
* Change `match?` to an instance method.
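Roughly, the new dispatch looks like this (a sketch; the exact accessor names are assumptions):

```ruby
# Now an instance method: it tests only the main URL's domain against the
# strategy's `domains` list, and never consults the referer.
def match?
  parsed_url.domain.in?(domains)
end
```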
Works around connection reset errors in the test suite by disabling
persistent connections.
20) Error:
Sources::PixivTest#test_: in all cases fetching source data for a new manga image should get the tags. :
Net::HTTP::Persistent::Error: too many connection resets (due to closed stream - IOError) after 0 requests on 47071328584700, last used 1.842702476 seconds ago
app/logical/pixiv_web_agent.rb:46:in `build'
app/logical/sources/strategies/pixiv.rb:104:in `agent'
app/logical/sources/strategies/pixiv.rb:72:in `get'
app/logical/sources/site.rb:6:in `get'
test/unit/sources/pixiv_test.rb:7:in `get_source'
test/unit/sources/pixiv_test.rb:64:in `block (3 levels) in <class:PixivTest>'
ref: github.com/sparklemotion/mechanize/issues/123
ref: http://www.rubydoc.info/gems/mechanize/Mechanize#retry_change_requests%3D-instance_method
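The workaround amounts to building the Mechanize agent without keep-alive (a sketch of the change; the surrounding setup in pixiv_web_agent.rb is omitted):

```ruby
# Disable persistent connections so each request opens a fresh socket.
# Slower, but avoids "too many connection resets" caused by stale
# keep-alive connections in the test suite.
agent = Mechanize.new do |a|
  a.keep_alive = false
end
```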
e.g. if you were on an HTML work page on Pixiv, clicked a link to a
different work page on Pixiv, and then clicked the bookmarklet, it used
to fetch the source from the FIRST work you were on instead of the
second.