Remove the last remaining uses of the PixivUgoiraFrameData model. As of
32bfb8407, ugoira frame data is now stored in the MediaMetadata model,
under the `Ugoira:FrameDelays` EXIF field.
The pixiv_ugoira_frame_data table still exists, but it can be removed
after this commit is deployed.
Fixes #5264: Error when replacing with ugoira.
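For illustration, reading the delays now looks something like this
(assuming MediaMetadata exposes the exiftool-style tags as a `metadata`
hash; the accessor names are illustrative):

    # Read an ugoira's frame delays (in milliseconds) from the asset's
    # EXIF-style metadata rather than the old PixivUgoiraFrameData model.
    # Assumes MediaMetadata#metadata is a hash keyed by exiftool tag names.
    delays = media_asset.media_metadata.metadata["Ugoira:FrameDelays"]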
Remove the `CurrentUser.ip_addr` global variable and replace it with
`request.remote_ip`. Previously we had to track the current user's IP in
a global variable so that when, for example, a post was edited, we could
pass the user's IP down to the model and save it in the post_versions
table. Now that we no longer save IPs in version tables, we don't need a
global variable to access the current user's IP outside of controllers.
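A minimal before/after sketch of the idea (`request.remote_ip` is
standard Rails; the surrounding code is illustrative):

    # Before: controllers stashed the IP in a global for models to read.
    CurrentUser.ip_addr = request.remote_ip
    post.update(tag_string: new_tags)  # model read CurrentUser.ip_addr internally

    # After: nothing outside the controller needs the IP anymore, so
    # controllers just read request.remote_ip directly where it's used.
    post.update(tag_string: new_tags)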
This was an obsolete URL format briefly used by Pixiv around 2019-2020.
There were only ~80 posts with sources using this format. They have been
manually fixed.
Bug: When uploading a direct Pixiv image URL, we ignored it in favor of the
image URL returned by the Pixiv API. This meant that if you tried to upload
the original version of a revised image, we would get the revised version
instead.
Fix: When given a direct Pixiv image URL, use it as-is if it's a full
image URL. If it's a sample image URL, ignore it in favor of the full image
URL as returned by the API, unless the post is deleted and the API data
is unavailable.
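Sketched out, the new behavior is roughly this (the predicate and helper
names here are assumptions, not the real API):

    # Sketch; `full_image_url?` and the helpers are illustrative names.
    if url.full_image_url?
      image_urls = [url.to_s]     # use the submitted URL as-is, even if revised
    elsif api_data_available?
      image_urls = api_image_urls # sample URL: swap in full-size URLs from the API
    else
      image_urls = [url.to_s]     # deleted work, no API data: best we can do
    end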
Add methods to Source::URL for determining whether a URL is an image
URL, a page URL, or a profile URL.
Also add more source URL tests and fix various URL parsing bugs.
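Hypothetical usage of the new predicates (assuming they are named
`image_url?`, `page_url?`, and `profile_url?`):

    url = Source::URL.parse("https://i.pximg.net/img-original/img/2022/03/08/00/00/56/96755248_p0.png")
    url.image_url?    # => true
    url.page_url?     # => false
    url.profile_url?  # => false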
Refactor source strategies to remove the `canonical_url` method.
`canonical_url` returned the URL that should be used as the source of
the post after upload. Now we simply use `Source::URL#page_url` to
determine the source after upload. If the source is an image URL that is
convertible to a page URL, then the image URL is used as the source. If
the source is an image URL that is not convertible to a page URL, then
the page URL is used as the source.
This simplifies source strategies so that all they have to care about is
implementing the `Source::URL#page_url` and `Sources::Strategies#page_url`
methods, and the preferred source will be chosen for posts automatically.
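A sketch of the selection rule (the method names are the ones mentioned
above; the control flow is illustrative):

    parsed = Source::URL.parse(upload.source)
    if parsed.image_url? && parsed.page_url.present?
      post.source = upload.source      # page URL is recoverable from it later
    else
      post.source = strategy.page_url  # from the site's API
    end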
Remove the `image_url` method from source strategies. This method would
return only the first image if a source had multiple images. The
`image_urls` method should be used instead. Tests were the main place
that still used `image_url` instead of `image_urls`.
Also make post replacements return an error if replacing with a source
that contains multiple images, instead of just blindly replacing the
post with the first image in the source.
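The new replacement check amounts to something like this (the error class
and wording are illustrative):

    # Refuse to blindly pick one image out of a multi-image source.
    if strategy.image_urls.size > 1
      raise Error, "source contains multiple images; can't replace"
    end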
* Drop support for preview_urls. This means that IQDB lookups may be
slower, especially for ugoiras, since we have to download the full
ugoira now. However, ugoira lookups should produce better results,
since the ugoira thumbnail chosen by Pixiv wasn't necessarily the same
as the thumbnail chosen by Danbooru.
* Drop support for uploading single manga pages:
http://www.pixiv.net/member_illust.php?mode=manga_big&illust_id=18557054&page=2
Previously, uploading a URL like this would only upload a single image
out of a multi-image work. Now it will upload all images in the work.
Pixiv no longer supports URLs like this, so we don't either.
* Add support for parsing URLs like this:
https://i.pximg.net/c/360x360_70/custom-thumb/img/2022/03/08/00/00/56/96755248_p0_custom1200.jpg
Apparently artists can choose a custom thumbnail now (not like anyone
will try to upload one though).
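For illustration, extracting the illust ID and page from such a URL could
look like this (the regex is illustrative, not the exact pattern used):

    url = "https://i.pximg.net/c/360x360_70/custom-thumb/img/2022/03/08/00/00/56/96755248_p0_custom1200.jpg"
    if url =~ %r{/custom-thumb/img/(?:\d+/){6}(\d+)_p(\d+)_custom\d+\.\w+\z}
      illust_id, page = $1.to_i, $2.to_i  # => 96755248, 0
    end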
Add upload support for Pixiv Sketch. Fetch tags, commentary, and artist,
and rewrite sample images to full images.
Authentication isn't required. R18 images are hidden in the browser but
visible in the API.
Rework the upload process so that files are saved to Danbooru first
before the user starts tagging the upload.
The main user-visible change is that you have to select the file first
before you can start tagging it. Saving the file first lets us fix a
number of problems:
* We can check for dupes before the user tags the upload.
* We can perform dupe checks and show preview images for users not using the bookmarklet.
* We can show preview images without having to proxy images through Danbooru.
* We can show previews of videos and ugoira files.
* We can reliably show the filesize and resolution of the image.
* We can let the user save files to upload later.
* We can get rid of a lot of spaghetti code related to preprocessing
uploads. This was the cause of most weird "md5 confirmation doesn't
match md5" errors.
(Not all of these are implemented yet.)
Internally, uploading is now a two-step process: first we create an upload
object, then we create a post from the upload. This is how it works:
* The user goes to /uploads/new and chooses a file or pastes a URL into
the file upload component.
* The file upload component calls `POST /uploads` to create an upload.
* `POST /uploads` immediately returns a new upload object in the `pending` state.
* Danbooru starts processing the upload in a background job (downloading,
resizing, and transferring the image to the image servers).
* The file upload component polls `/uploads/$id.json`, checking the
upload `status` until it returns `completed` or `error`.
* When the upload status is `completed`, the user is redirected to /uploads/$id.
* On the /uploads/$id page, the user can tag the upload and submit it.
* The upload form calls `POST /posts` to create a new post from the upload.
* The user is redirected to the new post.
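From an API client's point of view, the flow looks roughly like this (a
sketch using http.rb; authentication is omitted and the form parameter
names are assumptions):

    require "http"

    # 1. Create the upload; it comes back immediately in the `pending` state.
    #    The form parameter names here are assumptions.
    upload = HTTP.post("https://danbooru.donmai.us/uploads.json",
                       form: { "upload[source]" => "https://www.pixiv.net/artworks/96755248" }).parse

    # 2. Poll until the background job finishes processing it.
    until %w[completed error].include?(upload["status"])
      sleep 1
      upload = HTTP.get("https://danbooru.donmai.us/uploads/#{upload["id"]}.json").parse
    end

    # 3. Tag it and create the post.
    HTTP.post("https://danbooru.donmai.us/posts.json",
              form: { "post[upload_id]" => upload["id"],
                      "post[tag_string]" => "original solo",
                      "post[rating]" => "s" })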
This is the data model:
* An upload represents a set of files uploaded to Danbooru by a user.
Uploaded files don't have to belong to a post. An upload has an
uploader, a status (pending, processing, completed, or error), a
source (unless uploading from a file), and a list of media assets
(image or video files).
* There is a has-and-belongs-to-many relationship between uploads and
media assets. An upload can have many media assets, and a media asset
can belong to multiple uploads. Uploads are joined to media assets
through an upload_media_assets table.
An upload could potentially have multiple media assets if it's a Pixiv
or Twitter gallery. This is not yet implemented (at the moment all
uploads have one media asset).
A media asset can belong to multiple uploads if multiple people try
to upload the same file, or if the same user tries to upload the same
file more than once.
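In Rails terms, the join looks like this (a sketch matching the
description above):

    class Upload < ApplicationRecord
      belongs_to :uploader, class_name: "User"
      has_many :upload_media_assets
      has_many :media_assets, through: :upload_media_assets
    end

    class UploadMediaAsset < ApplicationRecord
      belongs_to :upload
      belongs_to :media_asset
    end

    class MediaAsset < ApplicationRecord
      has_many :upload_media_assets
      has_many :uploads, through: :upload_media_assets
    end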
New features:
* On the upload page, you can press Ctrl+V to paste a URL and immediately upload it.
* You can save files for upload later. Your saved files are at /uploads.
Fixes:
* Improved error messages when uploading invalid files, bad URLs, and
when forgetting the rating.
* Make it so replacing a post doesn't generate a dummy upload as a side effect.
* Make it so you can't replace a post with itself (the post should be regenerated instead).
* Refactor uploads and replacements to save the ugoira frame data when
the MediaAsset is created, not when the post is created. This way it's
possible to view the ugoira before the post is created.
* Make `download_file!` in the Pixiv source strategy return a MediaFile
with the ugoira frame data already attached to it, instead of returning
the frame data in the `data` field and then passing it around separately
in the `context` field of the upload.
Fix the test suite failing when run in its default state, with no config
file or API keys configured. Most source sites require API keys or login
credentials in order to work. Skip these tests when credentials aren't
configured.
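A sketch of how a test can guard itself (shoulda-context style, as used
by the test suite; the config key name is an assumption):

    context "The Pixiv source strategy" do
      setup do
        # `pixiv_phpsessid` is an assumed config key name.
        skip "Pixiv credentials not configured" if Danbooru.config.pixiv_phpsessid.blank?
      end

      should "fetch the commentary" do
        # ...
      end
    end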
Regression caused by the switch from the mobile API to the Ajax API. In
the Ajax API, commentaries have /jump.php?<url> links that we have to strip out.
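The links look like https://www.pixiv.net/jump.php?<url-encoded target>,
so stripping them amounts to something like this (illustrative):

    require "cgi"

    # Replace each jump.php redirect with its URL-decoded target.
    commentary.gsub(%r{https?://www\.pixiv\.net/jump\.php\?([^"\s<]+)}) do
      CGI.unescape($1)
    end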
Fix the Pixiv API no longer working by rewriting the Pixiv strategy to
use the Ajax API instead of the mobile API.
Before, we could authenticate with the mobile API using the OAuth 2.0
grant_type=password flow. This no longer works. Now it requires logging in
through an HTML page, which is protected by Google reCAPTCHA. This makes
using the mobile API infeasible.
Instead we switch to the Ajax API, which only needs a PHPSESSID to
authenticate. This can be obtained by logging in manually and using the
devtools to extract the cookie.
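Once you have a PHPSESSID, an Ajax API call is just a cookie on the
request (a sketch using http.rb; the config key name is an assumption):

    require "http"

    response = HTTP
      .cookies(PHPSESSID: Danbooru.config.pixiv_phpsessid)
      .get("https://www.pixiv.net/ajax/illust/96755248")

    illust = response.parse["body"]  # the Ajax API wraps results in a "body" key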
This also temporarily removes support for Pixiv novels. This should be
moved to a separate source strategy.
Bug: the uploads page showed a remote size of 146 bytes for Pixiv uploads.
Cause: we didn't spoof the Referer header when making the HEAD request
for the image, causing Pixiv to return a 403 error.
Also fix the case where the Content-Length header is absent.
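The fixed request looks roughly like this (sketch; the image URL is
illustrative):

    require "http"

    # Pixiv 403s image requests that don't carry a pixiv.net Referer.
    response = HTTP.headers("Referer" => "https://www.pixiv.net")
                   .head("https://i.pximg.net/img-original/img/2022/03/08/00/00/56/96755248_p0.png")

    # Content-Length may be absent; don't crash when it is.
    file_size = response.content_length || 0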
These exceptions are no longer thrown now that we've switched from
HTTParty to http.rb. Swallowing unexpected exceptions during testing was
a bad practice anyway.
This reverts commit e83d07ea7b.
It was worth a try, but unfortunately it seems that once
someone sets tools in a Pixiv upload, they become defaults and
are applied to all of their subsequent uploads, so we get some
posts with two or three different digital tags.
* Move the source normalization logic out of the post model
and into individual sources' strategies.
* Rewrite the normalization tests to live in each source's own tests,
and expand them significantly. Previously we were only testing
a very small subset of domains and variants.
* Fix up normalization for several sites.
* Normalize fav.me urls into normal deviantart urls.
* Tighten up illust id parsing to avoid misparsing ids from
non-illust urls (sketch urls and novel urls).
* Move id parsing tests from post_test.rb to sources/pixiv_test.rb.
* Drop support for touch.pixiv.net urls. These urls are no longer used
by Pixiv and aren't present as the source of any posts on Danbooru.
Bug: Uploading images with bad Pixiv IDs failed because the Pixiv strategy
raised a BadIDError exception when the upload service checked for the
ugoira frame data.