Remove StorageManager::Hybrid and StorageManager::Match. These were used
to store uploads on different servers based on the post ID or file
sample type. These are no longer used in production because, in hindsight,
managing uploads is much harder when they're fragmented across different
servers.
If you need this, you can do tricks with network filesystems to get the
same effect. For example, if you want to store some files on server A
and others on server B, then mount servers A and B as network
filesystems (with e.g. sshfs, Samba, or NFS), and use symlinks to
point subdirectories at either server A or B.
A MediaAsset represents an image or video file uploaded to Danbooru. It
stores the metadata associated with the image or video. This is a step
toward decoupling files from posts, so that images can be uploaded
separately from posts.
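A rough sketch of the kind of metadata such a model might hold, assuming an ActiveRecord model; the column names here are illustrative, not the actual schema:

```ruby
# Illustrative sketch only; the real MediaAsset model and schema may differ.
class MediaAsset < ApplicationRecord
  # Example metadata columns: md5, file_ext, file_size,
  # image_width, image_height, duration (for videos).
  validates :md5, presence: true, uniqueness: true
  validates :file_ext, inclusion: { in: %w[jpg png gif mp4 webm zip swf] }

  def image?
    file_ext.in?(%w[jpg png gif])
  end

  def video?
    file_ext.in?(%w[mp4 webm])
  end
end
```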
Allow moderators to search `disapproved:<username>` for any user.
Before, mods could only search for their own disapprovals, even though
they could see disapprovals by others.
Fix an exploit that let you determine the flagger of a post using
`flagger:<username>` saved searches. Saved searches were performed as
DanbooruBot, which is a moderator, so they let unprivileged users run
`flagger:<username>` searches. Saved searches only ran as a moderator to
avoid tag limits, which is no longer necessary since the last
PostQueryBuilder refactor.
Parse the user agent and log whether it seems like a known bot or a
human to NewRelic under the `user.bot` request attribute. This is so
that known bots can be filtered out of search traffic analytics. Bots
and search crawlers make up a significant portion of search traffic.
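A rough sketch of how this can be wired up in Rails; the bot pattern is illustrative, and `NewRelic::Agent.add_custom_attributes` is the newrelic_rpm call for attaching custom attributes to the current transaction:

```ruby
# Illustrative sketch: classify the user agent and report it to New Relic.
class ApplicationController < ActionController::Base
  # Hypothetical pattern; a real detection list would be more thorough.
  BOT_PATTERN = /bot|crawler|spider|curl|wget|python-requests/i

  before_action :log_user_agent_type

  private

  def log_user_agent_type
    is_bot = request.user_agent.to_s.match?(BOT_PATTERN)
    NewRelic::Agent.add_custom_attributes("user.bot" => is_bot)
  end
end
```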
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the `iqdbs` frontend server; we now talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS; now
we send them to IQDB directly. They are sent in a delayed job so that
uploading images is still possible even if IQDB is down; the add image
commands just get queued up until it comes back (see the sketch after this
list).
* Fix a bug where regenerating an image's thumbnails didn't update IQDB,
because IQDB silently ignored add image commands when the image already
existed in the database.
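A sketch of the delayed-job pattern mentioned above, using ActiveJob; the job and client names are hypothetical:

```ruby
# Hypothetical job and client names; illustrative only. Queuing the command
# means an upload succeeds even when IQDB is down; the add is retried later.
class IqdbAddImageJob < ApplicationJob
  queue_as :default
  retry_on StandardError, wait: :exponentially_longer, attempts: 10

  def perform(post_id)
    post = Post.find(post_id)
    IqdbClient.new.add(post.id, post.preview_file_url) # assumed client API
  end
end

# At upload time the command is queued rather than sent inline:
#   IqdbAddImageJob.perform_later(post.id)
```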
Make it so that when a user removes their own vote, the vote is soft
deleted (the is_deleted flag is set) instead of hard deleted.
Changes:
* Add is_deleted flag to comment votes.
* Relax the uniqueness constraint so you can have multiple deleted votes on
the same comment. You can still only have one active vote per comment.
* Add a `soft_delete` method to the Deletable concern (see the sketch below).
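A minimal sketch of what such a concern might look like, assuming an `is_deleted` boolean column; illustrative, not the actual code:

```ruby
# Illustrative sketch of a Deletable concern with soft deletion.
module Deletable
  extend ActiveSupport::Concern

  included do
    scope :active,  -> { where(is_deleted: false) }
    scope :deleted, -> { where(is_deleted: true) }
  end

  def soft_delete(**options)
    update(is_deleted: true, **options)
  end

  def undelete(**options)
    update(is_deleted: false, **options)
  end
end

# A vote removed by its owner is marked deleted rather than destroyed:
#   vote.soft_delete
```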
When a POST request returns a 302 redirect, follow the redirect with a
GET request instead of with a POST request.
HTTP standards leave it unspecified whether a POST request that returns
a 302 redirect should be followed with a GET or with a POST. A GET is
what most browsers use, which means it's what most servers expect.
This fixes the /tagme Discord command: when we uploaded the image to
DeepDanbooru, the POST request returned a 302 redirect, which the server
expected us to follow with a GET, not a POST.
Ref:
* https://stackoverflow.com/questions/17605915/what-is-the-correct-behavior-expected-of-an-http-post-302-redirect-to-get
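An illustrative sketch of the behavior using Ruby's Net::HTTP (not the actual client code):

```ruby
require "net/http"
require "uri"

# Follow a 302 returned by a POST with a GET, like browsers do.
def post_following_redirect(url, form_data)
  uri = URI(url)
  response = Net::HTTP.post_form(uri, form_data)

  if response.is_a?(Net::HTTPRedirection)
    location = URI.join(url, response["Location"])
    response = Net::HTTP.get_response(location) # a GET, not another POST
  end

  response
end
```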
Always store original files in `public/data/original` instead of directly in
`public/data`. Previously this was optional and defaulted to off.
Downstream boorus will need to either move all images in the
`public/data` directory to `public/data/original`, or symlink the
`public/data/original` directory to the toplevel `public/data` directory:
ln -s . /path/to/danbooru/public/data/original
This is to simplify the file layout. This option existed because in the
past we stored original files in different locations on different servers
(for no particular reason).
Changes to bans:
* Change the `expires_at` field to `duration`.
* Make moderators choose from a fixed set of standard ban lengths,
instead of allowing arbitrary ban lengths.
* List `duration` in seconds in the /bans.json API.
* Dump bans to BigQuery.
Note that some old bans have a negative duration. Their expiration date is
before their creation date because, when bans were migrated to Danbooru 2
in 2013, the original ban creation dates were lost.
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which `find_each` needs to page through rows; see
the sketch after this list).
* Version tables are excluded because dumping them every day is
inefficient; streaming insertions should be used instead.
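As a sketch of why `find_each` matters here (illustrative; the actual export code and the BigQuery/GCS upload steps are not shown):

```ruby
# Illustrative: dump a table to newline-delimited JSON in primary-key batches.
# find_each pages through rows by id, which is why a table without an id
# index (like favorites) is too slow to dump this way.
File.open("posts.json", "w") do |file|
  Post.find_each(batch_size: 5000) do |post|
    file.puts(post.to_json)
  end
end
```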
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json
If we detect that the session cookie has expired (by the presence of the
`#login_illust` element on the page), then clear the cached session
cookie. The current source fetch will still fail, but the next fetch will
try to log in again and hopefully succeed.
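A rough sketch of the detection logic, assuming the page is parsed with Nokogiri; everything except the `#login_illust` check is hypothetical:

```ruby
require "nokogiri"

# Illustrative sketch: if the fetched page shows the login form instead of
# the work, the cached session cookie has expired, so drop it. The current
# fetch still fails, but the next one logs in again.
def clear_cookie_if_session_expired(html, cache)
  page = Nokogiri::HTML(html)
  cache.delete("session_cookie") if page.at_css("#login_illust") # hypothetical cache key
  page
end
```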
If the user makes a request while rate limited, penalize them 1 second
for that request, up to a maximum of 30 seconds. This means that if a user
doesn't stop making requests after being rate limited, they will stay rate
limited until they stop.
This is meant to temporarily ban bots, especially spam bots, that flood
requests while ignoring HTTP errors and rate limits.
(Note that this is on a per-endpoint basis. Being rate limited on one
endpoint won't penalize you for making calls to other endpoints.)
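A simplified model of the penalty (not the actual implementation):

```ruby
# Each request made while limited pushes the "limited until" time 1 second
# further out, but never more than 30 seconds past the current time.
class Penalty
  MAX_PENALTY = 30 # seconds

  def initialize
    @limited_until = Time.at(0)
  end

  # Called for every request that arrives while the client is rate limited.
  def punish!(now = Time.now)
    base = [@limited_until, now].max
    @limited_until = [base + 1, now + MAX_PENALTY].min
  end

  def limited?(now = Time.now)
    now < @limited_until
  end
end
```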
* Tie rate limits to both the user's ID and their IP address.
* Make each endpoint have separate rate limits. This means that, for
example, your post edit rate limit is separate from your post vote
rate limit. Before, all write actions shared a single rate limit.
* Make all write endpoints have rate limits. Before, some endpoints, such
as voting, favoriting, commenting, and forum posting, weren't rate limited
at all.
* Add stricter rate limits for some endpoints:
** 1 per 5 minutes for creating new accounts.
** 1 per minute for login attempts, changing your email address, or
for creating mod reports.
** 1 per minute for sending dmails, creating comments, creating forum
posts, or creating forum topics.
** 1 per second for voting, favoriting, or disapproving posts.
** These rate limits all have burst factors high enough that they
shouldn't affect normal, non-automated users.
* Raise the default write rate limit for Gold users from 2 per second to
4 per second, for all other actions not listed above.
* Raise the default burst factor to 200 for all other actions not listed
above. Before it was 10 for Members, 30 for Gold, and 60 for Platinum.
Rework the rate limit implementation to make it more flexible:
* Allow setting different rate limits for different actions. Before, we
had a single rate limit for all write actions. Now different controller
endpoints can have different limits (see the sketch after this list).
* Allow actions to be rate limited by user ID, by IP address, or both.
Before, actions were only limited by user ID, which meant non-logged-in
actions like creating new accounts or attempting to log in couldn't be rate
limited. It also meant you could use multiple accounts on the same IP to
get around the limits.
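As a sketch, the per-endpoint limits can be thought of as a table like the following (a hypothetical configuration shape; the rates come from the list above, the burst values are made up for illustration, and the real code looks different):

```ruby
# Hypothetical configuration shape; rates are in requests per second.
RATE_LIMITS = {
  "users#create"      => { rate: 1.0 / 300, burst: 5,   key: %i[ip_address] },         # 1 per 5 minutes
  "sessions#create"   => { rate: 1.0 / 60,  burst: 10,  key: %i[ip_address] },         # 1 per minute
  "comments#create"   => { rate: 1.0 / 60,  burst: 50,  key: %i[user_id ip_address] }, # 1 per minute
  "post_votes#create" => { rate: 1.0,       burst: 200, key: %i[user_id ip_address] }, # 1 per second
}
```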
Other changes:
* Remove the API Limit field from user profile pages.
* Remove the `remaining_api_limit` field from the `/profile.json` endpoint.
* Rename the `X-Api-Limit` header to `X-Rate-Limit` and change it from a
number to a JSON object containing all the rate limit info
(including the refill rate, the burst factor, the cost of the call,
and the current limits).
* Fix a potential race condition where, if you flooded requests fast
enough, you could exceed the rate limit. This was because we checked
and updated the rate limit in two separate steps, which was racy;
simultaneous requests could pass the check before the update happened.
The new code uses some tricky SQL to check and update multiple limits in a
single statement (see the sketch below).
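As an illustration of the single-statement approach (hypothetical table and column names, only one limit shown, and not the actual query):

```ruby
# One atomic statement refills the bucket, charges the cost, and returns the
# new balance, so concurrent requests can't all pass a separate "check" step
# before any of them pays.
sql = ActiveRecord::Base.sanitize_sql([<<~SQL, { cost: 1, key: "comments:create:user:123" }])
  UPDATE rate_limits
  SET points = LEAST(burst, points + rate * EXTRACT(EPOCH FROM (now() - updated_at))) - :cost,
      updated_at = now()
  WHERE key = :key
  RETURNING points
SQL

row = ActiveRecord::Base.connection.select_one(sql)
limited = row.nil? || row["points"].to_f < 0 # a negative balance means rate limited
```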