* Fix error when uploading non-ugoira files.
* Fix sample image URLs not being rewritten to full images correctly. We
have to get the full image URL from the API because, given an
/img-master/ URL, we don't know what the original file extension is.
* Drop support for preview_urls. This means that IQDB lookups may be
slower, especially for ugoiras, since we have to download the full
ugoira now. However, ugoira lookups should produce better results,
since the ugoira thumbnail chosen by Pixiv wasn't necessarily the same
as the thumbnail chosen by Danbooru.
* Drop support for uploading single manga pages:
http://www.pixiv.net/member_illust.php?mode=manga_big&illust_id=18557054&page=2
Previously, uploading a URL like this would only upload a single image
out of a multi-image work. Now it will upload all images in the work.
Pixiv no longer supports URLs like this, so we don't either.
* Add support for parsing URLs like this:
https://i.pximg.net/c/360x360_70/custom-thumb/img/2022/03/08/00/00/56/96755248_p0_custom1200.jpg
Apparently artists can choose a custom thumbnail now (not that anyone
will try to upload one, though). See the parsing sketch after this list.
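A minimal sketch of the URL handling described above (the module name, regexes, and helper are illustrative assumptions, not Danbooru's actual code): it recognizes the two sample URL variants and extracts the work id and page, while the full image URL, and with it the original extension, still has to come from the Pixiv API.

```ruby
module PixivUrlSketch
  # Sample URL under /img-master/; the original extension isn't recoverable.
  MASTER = %r{/img-master/img/(?:\d+/){6}(?<id>\d+)_p(?<page>\d+)_master1200\.jpg\z}

  # Custom thumbnail URL under /custom-thumb/, like the one shown above.
  CUSTOM_THUMB = %r{/c/\d+x\d+_\d+/custom-thumb/img/(?:\d+/){6}(?<id>\d+)_p(?<page>\d+)_custom1200\.jpg\z}

  # Returns [illust id, page] for a sample URL, or nil. The full image URL
  # still has to be fetched from the Pixiv API, since these URLs don't
  # carry the original file extension.
  def self.parse_sample(url)
    match = url.match(MASTER) || url.match(CUSTOM_THUMB)
    [match[:id].to_i, match[:page].to_i] if match
  end
end

PixivUrlSketch.parse_sample("https://i.pximg.net/c/360x360_70/custom-thumb/img/2022/03/08/00/00/56/96755248_p0_custom1200.jpg")
# => [96755248, 0]
```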
Add upload support for Pixiv Sketch. Fetch tags, commentary, and artist,
and rewrite sample images to full images.
Authentication isn't required. R18 images are hidden in the browser but
visible in the API.
Additionally, fix some broken tests and change normalization of album
URLs to point to the mobile version instead, since the non-mobile album
pages are only visible to logged-in users.
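As a rough illustration of the sample-to-full rewriting mentioned above, assuming Pixiv Sketch sample URLs differ from full image URLs only by a resize segment such as `/c!/w=540,f=webp:jpeg/` (an assumption for this sketch; the real patterns are handled in the source strategy):

```ruby
# Hypothetical rewrite: strip the assumed resize segment to get the full image URL.
def rewrite_sketch_sample(url)
  url.sub(%r{/c!/[^/]+/}, "/")
end

rewrite_sketch_sample("https://img-sketch.pximg.net/c!/w=540,f=webp:jpeg/uploads/medium/file/1234567/123456789.jpg")
# => "https://img-sketch.pximg.net/uploads/medium/file/1234567/123456789.jpg"
```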
Also fix the uploader uploading every image in a multi-image work when
only a single image was requested. This was caused by `image_urls`
incorrectly returning all images when the source strategy was given a
URL for a single image.
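A schematic sketch of the fix, using hypothetical names (the real logic lives in the strategy's `image_urls`): when the given URL identifies a specific page, only that page's image should be returned.

```ruby
# Hypothetical helper: pick one page's image if the source URL names a page.
def image_urls_for(source_url, all_page_urls)
  page = source_url[/_p(\d+)/, 1]
  page ? [all_page_urls[page.to_i]] : all_page_urls
end

image_urls_for(
  "https://i.pximg.net/img-original/img/2022/03/08/00/00/56/96755248_p1.png",
  %w[p0.png p1.png p2.png]
)
# => ["p1.png"]
```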
Add `#basename`, `#filename`, and `#file_ext` utility methods to
Danbooru::URL and switch a few call sites to them. This simplifies
parsing filenames out of source URLs.
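A usage sketch, assuming `Danbooru::URL.parse` as the constructor and that `#basename` returns the last path segment, `#filename` that segment without its extension, and `#file_ext` the extension:

```ruby
url = Danbooru::URL.parse("https://i.pximg.net/img-original/img/2022/03/08/00/00/56/96755248_p0.png")

url.basename # => "96755248_p0.png"
url.filename # => "96755248_p0"
url.file_ext # => "png"
```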
Introduce a Source::URL class for parsing URLs from source sites. Refactor the Twitter
source strategy to use it.
This is the first step towards factoring all the URL parsing logic out of source
strategies and moving it to subclasses of Source::URL. Each site will have a subclass
of Source::URL dedicated to parsing URLs from that site. Source strategies will use
these classes to extract information from URLs.
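To make the intended shape concrete, here is an illustrative sketch, not the actual Danbooru classes: a minimal base class plus one per-site subclass that pulls structured data out of a URL. The names `match?`, `parse`, and `path_segments` are assumptions for this sketch.

```ruby
require "uri"

module Source
  class URL
    attr_reader :url, :host, :path_segments

    # Find the first site-specific subclass that recognizes the URL's host.
    def self.parse(url)
      subclass = [Twitter].find { |klass| klass.match?(URI.parse(url).host) }
      subclass&.new(url)
    end

    def initialize(url)
      @url = url
      parsed = URI.parse(url)
      @host = parsed.host
      @path_segments = parsed.path.split("/").reject(&:empty?)
      parse
    end

    # Subclasses override this to extract site-specific attributes.
    def parse; end
  end

  class URL::Twitter < URL
    attr_reader :username, :status_id

    def self.match?(host)
      ["twitter.com", "mobile.twitter.com"].include?(host)
    end

    # e.g. https://twitter.com/someuser/status/1234567890123456789
    def parse
      case path_segments
      in [String => username, "status", String => status_id]
        @username = username
        @status_id = status_id
      else
        nil
      end
    end
  end
end

page = Source::URL.parse("https://twitter.com/someuser/status/1234567890123456789")
page.status_id # => "1234567890123456789"
```

A strategy would then ask the parsed object for `status_id`, `username`, and so on, instead of matching regexes inline.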
This is to simplify source strategies. Most sites have many different URL formats that we
have to parse or rewrite, and handling all of those cases tends to make source
strategies very complex. Isolating the URL parsing logic from the site scraping logic
should make source strategies easier to maintain.