Fix a bug where, if you were uploading an entire 4chan thread, then the source of each post would
get set to the 4chan thread, rather than to the individual 4chan post.
Allow uploading .zip, .rar, and .7z files from disk. The archive will be extracted and the images
inside will be uploaded.
This only works for archive files uploaded from disk, not from a source URL.
Post source URLs will look something like this: "file://foo.zip/1.jpg", "file://foo.zip/2.jpg", etc.
Sometimes artists use Shift JIS or other encodings instead of UTF-8 for filenames. In these cases
we just assume the filename is UTF-8 and replace invalid characters with '?', so filenames might be
wrong in some cases.
There are various protections to prevent uploading malicious archive files (sketched after this list):
* Archives with more than 100 files aren't allowed.
* Archives that decompress to more than 100MB aren't allowed.
* Archives with filenames containing '..' components aren't allowed (e.g. '../../../../../etc/passwd').
* Archives with filenames containing absolute paths aren't allowed (e.g. '/etc/passwd').
* Archives containing symlinks aren't allowed (e.g. 'foo -> /etc/passwd').
* Archive types other than .zip, .rar, and .7z aren't allowed (e.g. .tar.gz, .cpio).
* File permissions, owners, and other metadata are ignored.
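A minimal sketch of these checks in Ruby. The `archive` object and its `entries` API
(`uncompressed_size`, `symlink?`, `filename`) are hypothetical stand-ins for whatever
extraction library is actually used:

    require "pathname"

    MAX_FILES = 100
    MAX_UNCOMPRESSED_SIZE = 100 * 1024 * 1024 # 100MB

    def validate_archive!(archive)
      raise ArgumentError, "too many files" if archive.entries.size > MAX_FILES
      raise ArgumentError, "archive too large" if archive.entries.sum(&:uncompressed_size) > MAX_UNCOMPRESSED_SIZE

      archive.entries.each do |entry|
        path = Pathname.new(entry.filename)
        raise ArgumentError, "symlinks not allowed" if entry.symlink?
        raise ArgumentError, "absolute paths not allowed" if path.absolute?
        raise ArgumentError, "path traversal not allowed" if path.each_filename.include?("..")
      end
    end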
Partial fix for #5340: Add support for extracting archive attachments from certain sources
Fix temp files generated during the upload process not being cleaned up quickly enough. This included
downloaded files, generated preview images, and Ugoira video conversions.
Before, we relied on `Tempfile` cleaning up files automatically. But this only happened when the
Tempfile object was garbage collected, which could take a long time. In the meantime we could have
hundreds of megabytes of temp files hanging around.
The fix is to explicitly close temp files when we're done with them. But the standard `Tempfile`
class doesn't immediately delete the file when it's closed, so we also have to introduce a
`Danbooru::Tempfile` wrapper that deletes the tempfile as soon as it's closed (sketched below).
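The core of the wrapper can be expressed in a few lines, since the standard library's
`Tempfile#close` already accepts an unlink-now flag; the real class may do more than this sketch:

    require "tempfile"

    module Danbooru
      class Tempfile < ::Tempfile
        # Unlike the standard Tempfile, delete the file as soon as it's
        # closed instead of waiting for garbage collection.
        def close
          super(true) # close(true) also unlinks the file immediately
        end
      end
    end

Callers then close the file in an `ensure` block as soon as they're done with it.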
Fix three exploits that allowed a user to keep using their account after it was deleted:
* It was possible to use session cookies from another computer to log in after you deleted your account.
* It was possible to use API keys to make API requests after you deleted your account.
* It was possible to request a password reset, delete your account, then use the password reset link
to change your password and log in to your deleted account.
* Don't delete the user's favorites unless private favorites are enabled. The general rule is that
public account activity is kept and private account activity is deleted.
* Delete the user's API keys, forum topic visits, private favgroups, downvotes, and upvotes (if
privacy is enabled).
* Reset all of the user's account settings to default. This means custom CSS is now deleted, where it
wasn't before.
* Delete everything but the user's name and password asynchronously.
* Don't log the current user out if it's the owner deleting another user's account.
* Fix #5067 (Mod actions sometimes not created for user deletions) by wrapping the deletion process
in a transaction (see the sketch below).
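A rough sketch of the transactional fix. `ModAction.log!` and `DeleteUserDataJob` are
illustrative names, not necessarily the actual ones:

    def delete_user!(deleter, user)
      User.transaction do
        # Either both the deletion and the mod action happen, or neither does.
        user.update!(is_deleted: true)
        ModAction.log!(deleter, :user_delete, subject: user)
      end
      DeleteUserDataJob.perform_later(user.id) # async deletion of everything else
    end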
Automatically add the `sound` tag if the post has sound. Remove the tag if the post doesn't have sound.
A video is considered to have sound if its peak loudness is greater than -70 dB. The current quietest post
on Danbooru has a peak loudness of -62 dB (post #3470668), but it's possible to have audible sound at
-80 dB or possibly even lower. It's hard to draw a clear line between "silent" and "barely audible".
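A sketch of the loudness check, assuming ffmpeg's volumedetect filter is used to measure
the peak; the actual probe may differ:

    require "shellwords"

    # Returns true if the file's peak loudness is above the -70 dB threshold.
    def has_sound?(path)
      output = `ffmpeg -i #{Shellwords.escape(path)} -af volumedetect -f null - 2>&1`
      peak = output[/max_volume: (-?[\d.]+) dB/, 1]
      !peak.nil? && peak.to_f > -70.0
    end

Files with no audio stream produce no max_volume line, so they count as silent.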
If a media asset is corrupt, include the error message from libvips or
ffmpeg in the "Vips:Error" or "FFmpeg:Error" fields in the media
metadata table.
Corrupt files can't be uploaded nowadays, but they could be uploaded in the past,
so we have some old corrupted files that we can't generate thumbnails
for. This lets us mark these files in the metadata so they're findable
with the tag search `exif:Vips:Error`.
Known bug: Vips has a single global error buffer that is shared between
threads and that isn't cleared between operations. So we can't reliably
get the actual error message because it may pick up errors from other
threads, or from previous operations in the same thread.
When searching posts by width, height, file size, or file extension, use the
values from the media_assets table rather than the posts table.
This makes filetype: searches faster because the file_ext is indexed on
the media assets table, but not on the posts table.
This paves the way for getting rid of the width, height, file_size, and
file_ext indexes on the posts table in the future. It's wasteful to
index these columns on both the posts table and the media assets table.
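The change amounts to resolving these metatags against the joined table, roughly like this
(assuming a `media_asset` association on Post):

    # filetype:mp4 resolved against media_assets, where file_ext is indexed.
    Post.joins(:media_asset).where(media_assets: { file_ext: "mp4" })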
Add a `MediaAsset#regenerate!` method that regenerates everything about
the asset, including the metadata, thumbnails, IQDB, cached Cloudflare
URLs, and AI tags.
Fix it so that a) it's possible to regenerate media assets that aren't
attached to posts and b) regenerating a post regenerates everything. Before,
it didn't regenerate the metadata, AI tags, or all of the cached URLs.
Fix it so that trying to regenerate AI tags for a Flash file doesn't
fail because Flash files have no image preview.
Also let `MediaFile.open` take a block argument.
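Presumably the block form closes the file when the block returns, mirroring `File.open`;
the accessors below are illustrative:

    MediaFile.open("post.jpg") do |file|
      puts "#{file.width}x#{file.height}"
    end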
Fix certain corrupt GIFs returning dimensions of 0x0. This happened
when the GIF was too corrupt for libvips to read. Fixed by using
ExifTool to read the dimensions instead.
Also add validations to ensure that it's not possible to have media
assets with a width or height of 0.
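A minimal sketch of the validations, assuming Rails-style column names on the MediaAsset model:

    class MediaAsset < ApplicationRecord
      # Disallow 0x0 assets; the column names here are assumptions.
      validates :image_width, numericality: { greater_than: 0 }
      validates :image_height, numericality: { greater_than: 0 }
    end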
Add a script to go through every media asset and check the metadata
(width, height, duration, filesize, md5, EXIF metadata) and update it
if it's changed. This is necessary after upgrading ExifTool because the
metadata it returns may have changed.
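The script boils down to a loop like the following; the `file` accessor and the exact set of
updated attributes are assumptions:

    MediaAsset.find_each do |asset|
      MediaFile.open(asset.file) do |file|
        # update! issues no SQL when nothing has actually changed.
        asset.update!(md5: file.md5, image_width: file.width, image_height: file.height,
                      duration: file.duration, file_size: file.file_size)
      end
    end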
Fix StatementInvalid exception when uploading https://files.catbox.moe/vxoe2p.mp4.
This was a result of multiple bugs:
* First, generating thumbnails for the video failed. This was because
the video uses the AV1 codec, which FFmpeg failed to decode. It failed
because our version of FFmpeg was built without the `--enable-libdav1d`
flag, so it uses the builtin AV1 decoder, which apparently can't
handle this particular video (it spews a bunch of errors about "Failed
to get pixel format" and "missing sequence header" and "failed to get
reference frame").
* Because generating the thumbnails failed, an exception was raised. We
tried to save the error message in the upload_media_assets.error
field. However, this also failed because the error message was 77KB
long (it contained the entire output of the ffmpeg command), but the
`upload_media_assets` table had a btree index on the `error` column,
which meant the maximum length of the error column was limited to
~2.7KB. This led to a StatementInvalid exception being raised.
* Because the StatementInvalid exception was raised while we were trying
to set the upload media asset's status to `failed`, the upload was
left stuck in the `processing` state rather than being set to the
`failed` state.
* Because the upload was stuck in the `processing` state, the upload
page would hang forever waiting for the upload to complete.
The fixes are to:
* Build FFmpeg with `--enable-libdav1d` to use libdav1d for decoding AV1
videos instead of the builtin AV1 decoder.
* Remove the index on the `upload_media_assets.error` column so that
setting overly long error messages won't fail.
* Catch unexpected exceptions in ProcessUploadMediaAssetJob so we can
mark uploads as failed, even if `process_upload!` itself fails because
it raises an unexpected exception inside its own exception handler
(sketched after this list).
* Check that the video is playable with `MediaFile::Video#is_corrupt?` before
allowing it to be uploaded. This way we can return a better error
message if we can't generate thumbnails because the video isn't
playable. This requires decoding the entire video, so uploads
may take several seconds longer for long videos. It's also a security
risk in case ffmpeg has any bugs.
* Define `MediaAsset#preview!` as raising an exception on error, so
it's clear that generating thumbnails can fail. Define `MediaAsset#preview`
as returning nil on error for when we don't care about the cause of
the error.
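A rough sketch of the job-level safety net; the class and method names come from the text
above, but the body is illustrative:

    class ProcessUploadMediaAssetJob < ApplicationJob
      def perform(upload_media_asset)
        upload_media_asset.process_upload!
      rescue => e
        # update_columns skips validations and callbacks, so marking the
        # upload as failed can't itself fail the same way.
        upload_media_asset.update_columns(status: "failed", error: e.message)
        raise
      end
    end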
Add a JPEG conversion for .avif and .webp files. The `full` variant is
the .avif or .webp file converted to JPEG format, with the same
resolution as the original file (full resolution).
Known bug: When converting an HDR .avif file to .jpeg, the resulting
image is too bright compared to the original image as rendered by
Firefox or Chrome.
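The conversion itself is a couple of lines with ruby-vips; the quality setting is an
assumption, and this sketch ignores the HDR brightness issue noted above:

    require "vips"

    image = Vips::Image.new_from_file("post.avif")
    image = image.flatten if image.has_alpha? # JPEG has no alpha channel
    image.jpegsave("post_full.jpg", Q: 90)    # same resolution as the original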
Add ability to upload .webp images.
Animated WebP images aren't supported. This is because they aren't
supported by FFmpeg yet[1], so generating thumbnails and samples for
them would be more complicated than for other formats.
[1]: https://trac.ffmpeg.org/ticket/4907
Add ability to upload .avif images.
Features of AVIF include:
* Lossless and lossy compression.
* High dynamic range (HDR) images.
* Wide color gamut images (i.e. 10- and 12-bit color depths).
* Transparency (through alpha planes).
* Animations (with an optional cover image).
* Auxiliary image sequences, where the file contains a single primary
image and a short secondary video, like Apple's Live Photos.
* Metadata rotation, mirroring, and cropping.
The AVIF format is still relatively new and some of these features aren't well
supported by browsers or other software:
* Animated AVIFs aren't supported by Firefox or by libvips.
* HDR images aren't supported by Firefox.
* Rotated, mirrored, and cropped AVIFs aren't supported by Firefox or Chrome.
* Image grids, where the file contains multiple images that are tiled
together into one big image, aren't supported by Firefox.
* AVIF as a whole has only been supported for a year or two by Chrome
and Firefox, and less than a year by Safari.
For these reasons, only basic AVIFs that don't use animation, rotation,
cropping, or image grids can be uploaded.
Increase the database timeout to 10 seconds when generating reports.
Generating reports tends to be slow, especially for things like graphing
posts over time since the beginning of Danbooru.
The higher timeout does not apply to anonymous users. Users must have an account to get
it, so that we can identify users who scrape reports too heavily.
Also add a rate limit of 1 report per 3 seconds to limit abuse.
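A sketch of how the per-query timeout can be raised, assuming PostgreSQL; SET LOCAL scopes
the change to the surrounding transaction:

    ApplicationRecord.transaction do
      ApplicationRecord.connection.execute("SET LOCAL statement_timeout = '10s'")
      # ... run the report queries ...
    end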
Add config options to customize where uploads are stored and how image URLs are generated
(example after this list).
* Add `media_asset_file_path` option to customize where uploads are stored.
* Add `media_asset_file_url` option to customize how image URLs are generated.
* Remove the `enable_seo_post_urls` config option. The `media_asset_file_url` option
should be used instead to include the tags in the image URL.
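A hypothetical example of overriding the new options; the method signatures are assumptions
based on the option names above:

    module Danbooru
      class CustomConfiguration < Configuration
        def media_asset_file_path(media_asset)
          "/data/original/#{media_asset.md5}.#{media_asset.file_ext}"
        end

        def media_asset_file_url(media_asset)
          "https://cdn.example.com/original/#{media_asset.md5}.#{media_asset.file_ext}"
        end
      end
    end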
* Replace the "Download" placeholder thumbnail for Flash files with a
new placeholder that specifically says it's a Flash file.
* Fix a bug where the Flash placeholder thumbnail was too small when
using larger thumbnail sizes.
* Fix it so that media assets don't falsely consider Flash files to have
thumbnails. This could potentially cause errors if someone tried to
expunge, replace, or regenerate a Flash post.
When choosing the Open Graph image (the preview image shown when a
Danbooru link is posted on Discord or social media), choose the safest
image with the highest score, rather than the image with the highest
favcount.
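The selection rule is essentially a two-key sort; the rating codes are Danbooru's g/s/q/e,
and the candidate list is illustrative:

    RATING_ORDER = { "g" => 0, "s" => 1, "q" => 2, "e" => 3 } # safest first
    og_image = candidate_posts.min_by { |post| [RATING_ORDER[post.rating], -post.score] }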
Try to automatically fix various kinds of typos and common mistakes in
email addresses when a user creates a new account. It's common for users
to sign up with addresses like `name@gmai.com`, which leads to bounces
when we try to send the welcome email.
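A sketch of the domain-typo correction; the mapping shown is a small illustrative sample,
not the actual list:

    TYPO_DOMAINS = {
      "gmai.com"   => "gmail.com",
      "gmial.com"  => "gmail.com",
      "yahooo.com" => "yahoo.com",
    }

    def fix_email_typos(address)
      name, domain = address.strip.split("@", 2)
      return address if domain.nil?
      "#{name}@#{TYPO_DOMAINS.fetch(domain.downcase, domain)}"
    end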
Remove the last remaining uses of the PixivUgoiraFrameData model. As of
32bfb8407, Ugoira frame data is now stored in the MediaMetadata model,
under the `Ugoira:FrameDelays` EXIF field.
The pixiv_ugoira_frame_data table still exists, but it can be removed
after this commit is deployed.
Fixes #5264: Error when replacing with ugoira.
Automatically add the AI-generated tag to posts that have the
`PNG:Software=NovelAI` EXIF attribute.
This is not foolproof because this metadata may get removed if an
AI-generated post is resaved or uploaded to a site that strips EXIF
metadata. It also only works for NovelAI. Currently it detects 29 out of
177 AI-generated uploads on Danbooru.
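The check itself is a single metadata comparison; the accessor and tagging call here are
illustrative:

    if post.media_asset.metadata["PNG:Software"] == "NovelAI"
      post.add_tag!("ai-generated")
    end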
Store Ugoira frame delays in the MediaMetadata model as a fake EXIF
field instead of in the PixivUgoiraFrameData model. This way we can get
rid of the PixivUgoiraFrameData model completely. This is a step towards
fixing #5264.
The frame data for Ugoira files is stored like this:
[{"file"=>"000000.jpg", "delay"=>65},
{"file"=>"000001.jpg", "delay"=>65},
{"file"=>"000002.jpg", "delay"=>65},
{"file"=>"000003.jpg", "delay"=>65},
{"file"=>"000004.jpg", "delay"=>65},
{"file"=>"000005.jpg", "delay"=>65},
{"file"=>"000006.jpg", "delay"=>65},
{"file"=>"000007.jpg", "delay"=>65},
{"file"=>"000008.jpg", "delay"=>65},
{"file"=>"000009.jpg", "delay"=>65},
{"file"=>"000010.jpg", "delay"=>65}]
This is stored in the pixiv_ugoira_frame_data table in YAML format. This
is a problem because a) we only need the frame delays to play the Ugoira,
not the filenames, and b) storing the data in YAML format is a security
issue that's blocking the upgrade to Rails 7.0.4 (see [1]).
This commit changes the Ugoira JavaScript player so that it only uses
the list of frame delays, not the filenames, to play the Ugoira. This
paves the way for storing the frame delays as a simple integer array
instead of as a serialized YAML object.
This assumes that the images in a Ugoira zip file are stored in the same
order they should be played back in. This was confirmed by checking every
zip file and verifying that files are actually stored in filename order.
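Reducing the stored data to a plain integer array of millisecond delays is then a one-liner
over the structure shown above:

    frame_data = [{ "file" => "000000.jpg", "delay" => 65 },
                  { "file" => "000001.jpg", "delay" => 65 }]
    frame_delays = frame_data.map { |frame| frame["delay"] } # => [65, 65]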
[1]: https://discuss.rubyonrails.org/t/cve-2022-32224-possible-rce-escalation-bug-with-serialized-columns-in-active-record/81017