Allow uploading multiple files from your computer at once.
The limit is 100 files per upload. The 50MB size limit still applies to
the whole upload; that limit is enforced at the Nginx level.
The upload widget no longer shows a thumbnail preview of the uploaded
file. This is because there isn't room for it in a multi-file upload,
and because the next page will show a preview anyway after the files are
uploaded.
Direct file uploads are processed synchronously, so they may be slow.
API change: the `POST /uploads` endpoint now expects the param to be
`upload[files][]`, not `upload[file]`.
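The new parameter shape, sketched in Ruby (the filenames are placeholder values; a real request sends the file data as multipart form fields):

```ruby
# Old single-file parameter (no longer accepted):
old_params = { "upload" => { "file" => "a.jpg" } }

# New multi-file parameter: upload[files][] repeats once per file.
new_params = { "upload" => { "files" => ["a.jpg", "b.png"] } }
```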
Use a spinner icon instead of the word "Loading" for thumbnails that are
being processed in the background in a batch upload.
Also use morphdom to update thumbnails so we only update the parts of
the DOM that actually changed.
Add data attributes to thumbnails on the /uploads, /upload_media_assets,
and /media_assets pages. Add a `data-is-posted` attribute for styling
thumbnails based on whether they've already been posted.
Follow-up to 093a808a3. Using a NOT EXISTS clause is much faster than the
`LEFT OUTER JOIN posts WHERE posts.id IS NULL` clause generated by
`.where.missing(:post)`.
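The rough shape of the two queries, in illustrative SQL (the table names come from the commit; the join condition on md5 is an assumption):

```sql
-- Generated by .where.missing(:post): join every row, then filter
SELECT media_assets.* FROM media_assets
LEFT OUTER JOIN posts ON posts.md5 = media_assets.md5
WHERE posts.id IS NULL;

-- Equivalent NOT EXISTS form, which the planner handles much better here
SELECT media_assets.* FROM media_assets
WHERE NOT EXISTS (SELECT 1 FROM posts WHERE posts.md5 = media_assets.md5);
```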
Change the loading indicator from a progress bar to a spinner. Fixes
issue with the <progress> element having a different appearance on
different browsers.
* Add a "Size" menu to the My Uploads / All Uploads pages to allow
changing the thumbnail size.
* Make the My Uploads / All Uploads pages use the same thumbnail size as
the post index page.
* Change the "Gallery | Table" links on the My Uploads page to icons.
Fix the "My Uploads" page showing Admins all uploads, not just their own
uploads.
Changes the URL of the My Uploads page from /uploads to /users/:id/uploads.
Fixes an issue where if you were uploading a multi-image source, and you
clicked on a thumbnail that was still processing, then the page wouldn't
refresh when the processing was complete.
Do one less API call when fetching the image URLs for a Pixiv post. The
`is_ugoira?` check in `image_urls` caused us to do an extra API call
when fetching the image URLs for a non-ugoira post.
API calls to Pixiv take around 800ms, so this reduces the minimum upload
time for Pixiv posts from ~1.6 seconds (two calls) to ~0.8 seconds (one call).
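The shape of the fix can be sketched as memoizing the API response so `is_ugoira?` and `image_urls` share a single call (class and method names here are hypothetical, not the actual strategy code):

```ruby
class PixivExtractor
  attr_reader :api_calls

  def initialize
    @api_calls = 0
  end

  # Memoize the API response so every predicate reuses the first call.
  def artwork_data
    @artwork_data ||= begin
      @api_calls += 1
      { "illustType" => 0 } # stand-in for the real API response
    end
  end

  def is_ugoira?
    artwork_data["illustType"] == 2
  end

  def image_urls
    is_ugoira? ? [] : ["placeholder_image_url"] # simplified
  end
end
```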
Fix an error when trying to upload a file larger than the file size
limit. In this case we tried to dump the whole HTTP response into the
error message, which included the binary file itself, which caused this
exception because it contained null bytes.
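A sketch of the kind of sanitizing that avoids the problem (the helper name is hypothetical): truncate the body and strip null bytes before interpolating it into an error message.

```ruby
# Make a response body safe to embed in an error message:
# force a binary copy, drop null bytes, and truncate.
def safe_error_snippet(body, limit: 200)
  body.b.delete("\0")[0, limit]
end
```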
Make the "completed" status for an upload mean "at least one file in the
upload successfully completed". The "error" status means "all files in
the upload failed".
This means that when an upload has multiple assets and some succeed and
some fail, the whole upload is considered completed. This can happen
when uploading multiple files and some files are over the size limit,
for example. The upload is considered failed only if all files in the
upload fail.
This fixes an issue where a single-file upload that failed because the
file was over the size limit wasn't marked as failed.
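The rule above can be sketched as a small pure function (a hypothetical helper, not the actual model code):

```ruby
# "completed" if at least one asset succeeded; "error" only if every asset failed.
def upload_status(asset_statuses)
  if asset_statuses.any? { |s| s == "completed" }
    "completed"
  elsif asset_statuses.all? { |s| s == "failed" }
    "error"
  else
    "processing"
  end
end
```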
Fix uploads for NicoSeiga sources not working because the strategy
returned URLs like the one below in the list of image_urls, which
require a login to download:
https://seiga.nicovideo.jp/image/source/10315315
Also fix certain URLs like https://dic.nicovideo.jp/oekaki/52833.png not
working, because they didn't contain an image ID and the image_urls
method returned an empty list in this case.
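The fallback can be sketched like this (a simplified stand-in for the strategy code; the image URL pattern is a placeholder, not the real one):

```ruby
# Previously image_urls returned [] when no image ID could be extracted,
# which broke URLs like https://dic.nicovideo.jp/oekaki/52833.png.
# Now fall back to the URL itself.
def image_urls(url, image_id)
  return [url] if image_id.nil?
  ["https://example.com/image/#{image_id}"] # placeholder URL pattern
end
```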
Fix the null source strategy setting the page URL. The page URL is
expected to be nil when we can't determine the page containing the image URL.
Fixes the upload_media_assets.page_url field being filled for uploads
from unknown sites.
Raise the timeout for downloading files from the source to 60 seconds globally.
We previously used a lower timeout because uploads were processed in the
foreground when not using the bookmarklet, and we didn't want to tie up
Puma worker processes with slow downloads. Now that all uploads are
processed in the background, we can afford a higher timeout.
Make the upload page automatically detect when a source URL has multiple images
and let the user choose which images to post.
For example, when uploading a Twitter or Pixiv post with more than one image, we
direct the user to a page that shows a thumbnail for each image and lets
them choose which ones to post.
This is similar to the batch upload page, except we actually download each image
in the background, instead of just hotlinking or proxying the thumbnails through
our servers. This avoids various problems with proxying and makes new features
possible, like showing which images in the batch have already been posted.
This page shows each individual file you've uploaded. This is different
from the regular uploads page because files in multi-file uploads are
not grouped together.
* Make thumbnails on the "My Uploads" page show an icon with an image
count when an upload contains multiple files.
* Make the "My Uploads" page show each upload, not each individual file.
If an upload contains multiple files, they're shown grouped together
under a single upload. This does mean that failed or duplicate uploads
will show up on this page now. This is because this page shows each
upload attempt, not each uniquely uploaded file.
Make media assets show a placeholder thumbnail when the image is
missing. This can happen if the upload is still processing, or if the
media asset's image was expunged, or if the asset failed during upload
(usually because of some temporary network failure when trying to
distribute thumbnails to the backend image servers).
Fixes a problem where new images on the My Uploads or All Uploads pages
could have broken thumbnails if they were still in the uploading phase.
Include media assets in /uploads.json and /uploads/:id.json API responses, like this:
{
  "id": 4983629,
  "source": "https://www.pixiv.net/en/artworks/96198438",
  "uploader_id": 52664,
  "status": "completed",
  "created_at": "2022-02-12T16:26:04.680-06:00",
  "updated_at": "2022-02-12T16:26:08.071-06:00",
  "referer_url": "",
  "error": null,
  "media_asset_count": 1,
  "upload_media_assets": [
    {
      "id": 9370,
      "created_at": "2022-02-12T16:26:08.068-06:00",
      "updated_at": "2022-02-12T16:26:08.068-06:00",
      "upload_id": 4983629,
      "media_asset_id": 5206552,
      "status": "pending",
      "source_url": "https://i.pximg.net/img-original/img/2022/02/13/01/20/19/96198438_p0.jpg",
      "error": null,
      "page_url": "https://www.pixiv.net/artworks/96198438",
      "media_asset": {
        "id": 5206552,
        "created_at": "2022-02-12T16:26:07.980-06:00",
        "updated_at": "2022-02-12T16:26:08.061-06:00",
        "md5": "90a85a5fae5f0e86bdb2501229af05b7",
        "file_ext": "jpg",
        "file_size": 1055775,
        "image_width": 1052,
        "image_height": 1545,
        "duration": null,
        "status": "active"
      }
    }
  ]
}
This is needed so you can check for upload errors in the API, since in a multi-file
upload, each asset can have a separate error message. This is a stopgap solution until
something like /uploads.json?include=upload_media_assets.media_asset works.
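Given that response shape, a client can find which files failed like this (illustrative Ruby; the response values are trimmed-down placeholders):

```ruby
require "json"

# A trimmed-down /uploads/:id.json response (values illustrative).
response_body = <<~JSON
  {
    "id": 1,
    "status": "completed",
    "upload_media_assets": [
      { "media_asset_id": 10, "status": "active", "error": null },
      { "media_asset_id": 11, "status": "failed", "error": "file too large" }
    ]
  }
JSON

upload = JSON.parse(response_body)
# The top-level status alone can't tell you which file failed;
# check each asset's error field instead.
failed = upload["upload_media_assets"].select { |a| a["error"] }
```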
* Save the filename for files uploaded from disk. This could be used in
the future to extract source data if the filename is from a known site.
* Save both the image URL and the page URL for files uploaded from
source. This is needed for multi-file uploads. The image URL is the
URL of the file actually downloaded from the source. This can be
different from the URL given by the user, if the user tried to upload
a sample URL and we automatically changed it to the original URL. The
page URL is the URL of the page containing the image. We don't always
know this, for example if someone uploads a Twitter image without the
bookmarklet, then we can't find the page URL.
* Add a fix script to backfill URLs for existing uploads. For file
uploads, the filename will be set to "unknown.jpg". For source
uploads, we fetch the source data again to get the image and page
URLs. This may fail for uploads that have been deleted from the
source since uploading.
This exception was thrown by app/logical/pixiv_ajax_client.rb:406 when a
Pixiv API call failed with a network error. In this case we tried to log
the response body, but this failed because we returned a faked HTTP
response with an empty string for the body, which the http.rb library
didn't like because it was expecting an IO-like object for the body.
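One way to build a faked body that an IO-expecting consumer will accept is to wrap the string in a StringIO (a sketch of the idea, not the actual fix):

```ruby
require "stringio"

# Wrap the empty string in an IO-like object instead of passing "" directly,
# since the library expects something it can stream.
fake_body = StringIO.new("")
```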