A MediaAsset represents an image or video file uploaded to Danbooru and
stores the metadata associated with that file. This is a step toward
decoupling files from posts, so that images can be uploaded separately
from posts.
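As a rough illustration of the kind of metadata involved, a model along
these lines could hold file-level attributes independently of any post.
The column names below are assumptions for the sake of the sketch, not
the actual schema.

```ruby
# Hypothetical sketch of a MediaAsset model; column names are illustrative.
class MediaAsset < ApplicationRecord
  # File-level metadata that previously lived on the post itself,
  # e.g. md5, file_ext, file_size, image_width, image_height.
  validates :md5, presence: true, uniqueness: true
  validates :file_ext, inclusion: { in: %w[jpg png gif webm mp4 zip swf] }
end
```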
Fix an exploit that let you determine the flagger of a post using
`flagger:<username>` saved searches. Saved searches were performed as
DanbooruBot, but since DanbooruBot is a moderator, this let unprivileged
users do `flagger:<username>` searches. Saved searches were run as a
moderator to avoid tag limits, but this is no longer necessary since the
last PostQueryBuilder refactor.
Pundit 2.1.1 changed it so that if the first argument to `authorize` is
an Array, then the `authorize` call returns the last element of the
array. This broke `order:random`, because in that case we returned an
Array of posts. The fix is to return an ActiveRecord::Relation of posts,
which is more correct anyway.
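A sketch of the shape of the fix, under the assumption that the random
ordering was built from a list of post IDs; the controller code here is
illustrative rather than the actual implementation.

```ruby
# Before (illustrative): an Array of posts. Since Pundit 2.1.1 returns the
# last element when `authorize` is given an Array, only one post came back.
@posts = authorize Post.where(id: random_ids).to_a

# After (illustrative): an ActiveRecord::Relation, which `authorize` returns
# as-is, so the full result set is preserved.
@posts = authorize Post.where(id: random_ids)
```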
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible even if IQDB is down; the add
image commands will just get queued up (see the sketch after this list).
* Fix a bug where regenerating an image's thumbnails didn't update the
image in IQDB, because IQDB silently ignored add image commands when the
image already existed in the database.
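A rough sketch of what sending an add-image command from a delayed job
might look like; the job class, retry policy, and IQDB endpoint below are
assumptions, not the actual client code.

```ruby
require "net/http"

# Hypothetical job that adds an image to IQDB directly over HTTP.
class IqdbAddImageJob < ApplicationJob
  # If IQDB is down, the job fails and is retried later, so uploads aren't
  # blocked; the add-image commands simply queue up.
  retry_on StandardError, wait: :exponentially_longer, attempts: 10

  def perform(post_id, thumbnail_url)
    iqdb_url = ENV.fetch("IQDB_URL")  # assumed config for the IQDB instance
    uri = URI("#{iqdb_url}/images/#{post_id}")
    Net::HTTP.post_form(uri, "url" => thumbnail_url)
  end
end

# At upload time (illustrative):
# IqdbAddImageJob.perform_later(post.id, post.preview_file_url)
```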
When a user tries to change their email, redirect them to the confirm
password page (like GitHub's sudo mode) instead of having them re-enter
their password on the change email page. This is the same thing we do
when a user updates their API keys. This way we can use the same confirm
password authentication flow for everything that needs a password.
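A minimal sketch of a shared sudo-mode filter, assuming the confirmation
time is kept in the session and a hypothetical confirm-password route;
not the actual implementation.

```ruby
# Hypothetical before_action shared by anything that needs a password.
def require_recent_password_confirmation
  confirmed_at = session[:password_confirmed_at].to_i
  if confirmed_at < 1.hour.ago.to_i
    # Send the user through the confirm password page, then back here.
    redirect_to confirm_password_path(url: request.fullpath)
  end
end
```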
* Show the ban length instead of the ban expiration date in ban notices.
* Fix the ban notice to not say "Your account has been temporarily
banned" when it's a permanent ban.
There used to be about 1000 posts with a .jpeg file extension instead of
.jpg. These posts have been fixed manually, so we no longer have to
check for this.
Flash is dead. It's no longer supported by browsers, it's not
well-supported by emulators, and only two Flash posts were uploaded in
the last year anyway. Old Flash files will continue to exist, but new
Flash uploads will no longer be allowed.
Make it so that when a user removes their own vote, the vote is soft
deleted (the is_deleted flag is set) instead of hard deleted.
Changes:
* Add an `is_deleted` flag to comment votes.
* Relax uniqueness constraint so you can have multiple deleted votes on
the same comment. You can still only have one active vote on the comment.
* Add a `soft_delete` method to the Deletable concern (sketched below).
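A simplified sketch of the pieces above; the partial unique index is an
assumption about how the one-active-vote rule could be enforced.

```ruby
# Simplified Deletable concern with the new soft_delete method.
module Deletable
  extend ActiveSupport::Concern

  def soft_delete
    update(is_deleted: true)
  end
end

class CommentVote < ApplicationRecord
  include Deletable
end

# Migration sketch (illustrative): allow many deleted votes, but only one
# active vote per user per comment.
# add_column :comment_votes, :is_deleted, :boolean, null: false, default: false
# add_index  :comment_votes, [:user_id, :comment_id], unique: true,
#            where: "is_deleted = false"
```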
Changes:
* Change the `expires_at` field to `duration`.
* Make moderators choose from a fixed set of standard ban lengths,
instead of allowing arbitrary ban lengths.
* List `duration` in seconds in the /bans.json API.
* Dump bans to BigQuery.
Note that some old bans have a negative duration. This is because their
expiration date was before their creation date: when bans were migrated
to Danbooru 2 in 2013, the original ban creation dates were lost.
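A hypothetical backfill showing how `duration` relates to the old
`expires_at` field, and why the bans mentioned above come out negative;
the column types and exact migration are assumptions.

```ruby
# duration is stored in seconds (as exposed by /bans.json).
Ban.find_each do |ban|
  # Negative when the migrated creation date is later than the original
  # expiration date, as with the 2013-era bans described above.
  ban.update_columns(duration: ban.expires_at - ban.created_at)
end
```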
Remove the `category_name` field from the `/wiki_pages.json` API. This
field was originally added only because it was needed by our autocomplete
Javascript. It was also misnamed: it wasn't the tag's category name, it
was the category's ID.
Users should use `https://danbooru.donmai.us/wiki_pages.json?only=title,tag`
instead if they need this.
Including this field triggered an N+1 query pattern when dumping wiki
pages to BigQuery, which made dumping wiki pages very slow. It also meant
the field was included in the database dump, even though it wasn't a real
database column.
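An illustration of the N+1 pattern this caused, assuming a tag lookup per
wiki page (the lookup shown here is hypothetical):

```ruby
# Serializing category_name meant one extra Tag query per wiki page:
WikiPage.find_each do |wiki_page|
  Tag.find_by(name: wiki_page.title)&.category  # N queries for N wiki pages
end
```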
Prevent saved searches, news updates, and IP bans from being publicly
dumped to BigQuery. These models didn't override the `visible` method to
restrict their visibility for anonymous users.
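A sketch of the kind of `visible` override these models needed; the exact
policy logic is an assumption.

```ruby
class SavedSearch < ApplicationRecord
  # Anonymous users (including the public BigQuery dump) see nothing;
  # logged-in users see only their own saved searches.
  def self.visible(user)
    user.is_anonymous? ? none : where(user_id: user.id)
  end
end
```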
* Max comment length: 15,000 characters.
* Max forum post length: 200,000 characters.
* Max forum topic title length: 200 characters.
* Max dmail length: 50,000 characters.
* Max dmail title length: 200 characters.
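The limits above, expressed as Rails-style length validations (a sketch;
the actual models may declare these differently):

```ruby
class Comment < ApplicationRecord
  validates :body, length: { maximum: 15_000 }
end

class ForumPost < ApplicationRecord
  validates :body, length: { maximum: 200_000 }
end

class ForumTopic < ApplicationRecord
  validates :title, length: { maximum: 200 }
end

class Dmail < ApplicationRecord
  validates :title, length: { maximum: 200 }
  validates :body,  length: { maximum: 50_000 }
end
```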
* Don't set the inviter field for newly promoted users, or for Gold/Plat
upgrades.
* Clear the inviter field for paid Gold/Plat upgrades, and for users who
have a feedback or a modaction listing who invited them. This leaves
about 600 users with an inviter field but no other record of who
invited them.
See #4750.
* When trying to create an artist entry for a non-artist tag, set the
error on the name attribute so that the artist name gets marked as
incorrect in the artist edit form (see the sketch after this list).
* Fix a bad `Name '' cannot be blank` error message when the artist name
is blank.
* Fix showing wiki pages of non-artist tags in the artist edit form when
the artist name conflicts with a non-artist tag (e.g. if you try to
create an artist named '1girl', don't show the wiki for 1girl in the
artist edit form).
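A sketch of attaching the error to the name attribute so the form
highlights the name field; the validation and the artist-category check
are illustrative, not the actual code.

```ruby
class Artist < ApplicationRecord
  ARTIST_CATEGORY = 1  # assumed category id for artist tags

  validate :name_is_an_artist_tag

  def name_is_an_artist_tag
    tag = Tag.find_by(name: name)
    return if tag.nil? || tag.category == ARTIST_CATEGORY
    # Attaching to :name (not :base) marks the name field as invalid in the form.
    errors.add(:name, "'#{name}' is a non-artist tag")
  end
end
```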
Change the error message from "Body has no content" to "Body can't be
blank" when a user tries to submit an empty comment. This makes it
consistent with the error messages other models produce for blank
content.
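"Body can't be blank" is the message a standard Rails presence
validation produces, which is presumably what the comment model relies
on now:

```ruby
class Comment < ApplicationRecord
  validates :body, presence: true  # error reads "Body can't be blank"
end
```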
Allow users to view their own rate limits with /rate_limits.json.
Note that rate limits are only updated when an API call is made, so this
page only shows the state of the limits as of the last call, not the
current state.
If the user makes a request while rate limited, penalize them 1 second
for that request, up to a maximum of 30 seconds (a sketch of this
penalty appears below). This means that if a user doesn't stop making
requests after being rate limited, they will stay rate limited until
they stop.
This is to temporarily ban bots, especially spam bots, that flood
requests while ignoring HTTP errors or rate limits.
(Note that this is on a per-endpoint basis. Being rate limited on one
endpoint won't penalize you for making calls to other endpoints.)
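A sketch of the penalty, under the assumption that each limit tracks the
time at which it becomes available again; the names are illustrative.

```ruby
require "active_support/all"

MAX_PENALTY = 30.seconds

# Each request made while rate limited pushes the limit's reset time back
# by one second, capped at 30 seconds in the future.
def penalized_available_at(current_available_at, now: Time.current)
  [current_available_at + 1.second, now + MAX_PENALTY].min
end
```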
* Tie rate limits to both the user's ID and their IP address.
* Make each endpoint have separate rate limits. This means that, for
example, your post edit rate limit is separate from your post vote
rate limit. Before, all write actions shared a single rate limit.
* Make all write endpoints have rate limits. Before, some endpoints, such
as voting, favoriting, commenting, or forum posting, weren't subject
to rate limits.
* Add stricter rate limits for some endpoints:
** 1 per 5 minutes for creating new accounts.
** 1 per minute for login attempts, changing your email address, or
for creating mod reports.
** 1 per minute for sending dmails, creating comments, creating forum
posts, or creating forum topics.
** 1 per second for voting, favoriting, or disapproving posts.
** These rate limits all have burst factors high enough that they
shouldn't affect normal, non-automated users.
* Raise the default write rate limit for Gold users from 2 per second to
4 per second, for all other actions not listed above.
* Raise the default burst factor to 200 for all other actions not listed
above. Before it was 10 for Members, 30 for Gold, and 60 for Platinum.
Rework the rate limit implementation to make it more flexible:
* Allow setting different rate limits for different actions. Before, we
had a single rate limit for all write actions; now different controller
endpoints can have different limits.
* Allow actions to be rate limited by user ID, by IP address, or both
(see the sketch after this list). Before, actions were limited only by
user ID, which meant non-logged-in actions like creating new accounts
or attempting to log in couldn't be rate limited. It also meant you
could use multiple accounts with the same IP to get around limits.
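A sketch of what per-endpoint limits keyed by both user and IP might look
like; the table of rates and the key scheme are illustrative, and the
burst values are assumptions.

```ruby
# Illustrative per-action limits (refill rate in requests per second).
RATE_LIMITS = {
  "comments:create" => { rate: 1.0 / 60, burst: 10 },   # 1 per minute
  "posts:vote"      => { rate: 1.0,      burst: 200 },  # 1 per second
  "sessions:create" => { rate: 1.0 / 60, burst: 5 },    # login attempts
}

# Each action is tracked under two keys, so both logged-out requests and
# multiple accounts behind one IP are covered.
def rate_limit_keys(action, user_id, ip)
  ["#{action}:user:#{user_id}", "#{action}:ip:#{ip}"]
end
```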
Other changes:
* Remove the API Limit field from user profile pages.
* Remove the `remaining_api_limit` field from the `/profile.json` endpoint.
* Rename the `X-Api-Limit` header to `X-Rate-Limit` and change it from a
number to a JSON object containing all the rate limit info
(including the refill rate, the burst factor, the cost of the call,
and the current limits).
* Fix a potential race condition where, if you flooded requests fast
enough, you could exceed the rate limit. This was because we checked
and updated the rate limit in two separate steps, which was racy;
simultaneous requests could pass the check before the update happened.
The new code uses some tricky SQL to check and update multiple limits
in a single statement.
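A sketch of the idea, reduced to a single limit key; the table, columns,
and SQL below are illustrative, not the actual schema or statement.
Refilling the bucket, spending the cost, and checking the balance all
happen in one UPDATE, so concurrent requests can't slip past a separate
check step.

```ruby
sql = <<~SQL
  UPDATE rate_limits
  SET points = LEAST(burst, points + rate * EXTRACT(EPOCH FROM (now() - updated_at))) - :cost,
      updated_at = now()
  WHERE key = :key
  RETURNING points >= 0 AS allowed
SQL

row = ApplicationRecord.connection.select_value(
  ApplicationRecord.sanitize_sql([sql, { cost: 1, key: "posts:vote:user:1234" }])
)
allowed = ActiveModel::Type::Boolean.new.cast(row)
```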