* Add a `DiscordSlashCommand.register_slash_commands!` method to register
all slash commands with the Discord API.
* Allow registering global commands.
* Refactor slash commands to use class attributes for the command
name, description, and options.
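A rough sketch of the new pattern (the subclass and option hash below are
illustrative, not the actual Danbooru command definitions):

    # The base class is assumed to declare class attributes for these fields,
    # e.g. with `class_attribute :name, :description, :options`.
    class CountCommand < DiscordSlashCommand
      self.name = "count"
      self.description = "Count posts matching a tag search"
      self.options = [{
        name: "tags",
        description: "The tag search to count",
        type: 3, # 3 = STRING in Discord's option types
        required: true,
      }]
    end

    # All commands can then be pushed to the Discord API in one step:
    DiscordSlashCommand.register_slash_commands!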
Always store original files in `public/data/original` instead of directly in
`public/data`. Previously this was optional and defaulted to off.
Downstream boorus will need to either move all images in the
`public/data` directory to `public/data/original`, or symlink the
`public/data/original` directory to the toplevel `public/data` directory:
    ln -s . /path/to/danbooru/public/data/original
This is to simplify the file layout. The option only existed because, in the
past, we stored original files in different locations on different servers
(for no particular reason).
Generate image URLs relative to the site's canonical URL instead of
relative to the domain of the current request.
This means that all subdomains of Danbooru - safebooru.donmai.us,
shima.donmai.us, saitou.donmai.us, and kagamihara.donmai.us - will use
image URLs from https://danbooru.donmai.us, instead of from the current
domain.
The main reason for the old behavior was so that we could generate either
http:// or https:// image URLs, depending on whether the current request
was HTTP or HTTPS, back when we tried to support both at the same time.
Now we support only HTTPS in production, so there's no need for this. It
was also fairly hacky, since it required storing the URL of the current
request in a per-request global variable in `CurrentUser`.
This also improves caching slightly, since users of safebooru.donmai.us
will receive cached images from danbooru.donmai.us.
Downstream boorus should make sure that the `canonical_url` and
`storage_manager` config options are set correctly. If you don't support
https:// in development, set the `canonical_url` option to an http:// URL
instead of an https:// one.
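For example, a downstream config override might look roughly like this (the
file path and method-style overrides are assumptions; check your instance's
config for the exact setup):

    # config/danbooru_local_config.rb (assumed location of local overrides)
    module Danbooru
      class CustomConfiguration < Configuration
        def canonical_url
          # Use an http:// URL here if your development setup doesn't support https://.
          "https://booru.example.com"
        end

        # Also make sure storage_manager generates image URLs under the
        # canonical URL; the exact StorageManager arguments depend on your setup.
      end
    end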
Changes:
* Change the `expires_at` field to `duration`.
* Make moderators choose from a fixed set of standard ban lengths,
instead of allowing arbitrary ban lengths.
* Report the `duration` in seconds in the /bans.json API (see the example below).
* Dump bans to BigQuery.
Note that some old bans have a negative duration. Their expiration date is
before their creation date because bans were migrated to Danbooru 2 in 2013
and the original ban creation dates were lost.
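For example, a standard 3-day ban is reported with a duration of 259200
seconds:

    3.days.to_i  # => 259200, the value reported in /bans.json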
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which is needed by find_each).
* Version tables are excluded because dumping them every day is
inefficient; streaming inserts should be used instead.
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json
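As a hedged example of how the export can be queried with the
google-cloud-bigquery gem (the dataset and table names below are guesses
based on the bucket name; check the BigQuery console for the real ones):

    require "google/cloud/bigquery"

    bigquery = Google::Cloud::Bigquery.new project_id: "danbooru1"
    results = bigquery.query(<<~SQL)
      SELECT rating, COUNT(*) AS post_count
      FROM `danbooru1.danbooru_public.posts`
      GROUP BY rating
      ORDER BY post_count DESC
    SQL

    results.each { |row| puts "#{row[:rating]}: #{row[:post_count]}" }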
* Don't set the inviter field for newly promoted users, or for Gold/Plat
upgrades.
* Clear the inviter field for paid Gold/Plat upgrades, and for users who
have a feedback or a mod action that records who invited them. This leaves
about 600 users for whom the inviter field is the only remaining record
of who invited them.
See #4750.
If we detect that the session cookie has expired (by the presence of the
`#login_illust` element on the page), then clear the cached session
cookie. The current source fetch will still fail, but the next fetch
will try to log in again and hopefully succeed.
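A simplified sketch of the check (the `clear_cached_session_cookie!` helper is
hypothetical; the real extractor code is structured differently):

    page = Nokogiri::HTML(response.body)

    if page.css("#login_illust").present?
      # We got the login page instead of the artwork page, so the cached session
      # cookie has expired. Clear it (hypothetical helper) so the next fetch
      # attempts a fresh login.
      clear_cached_session_cookie!
    end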
* When trying to create an artist entry for a non-artist tag, set the
error on the name attribute so that the artist name gets marked
as incorrect in the artist edit form.
* Fix a bad `Name '' cannot be blank` error message when the artist name
is blank.
* Fix showing wiki pages of non-artist tags in the artist edit form when
the artist name conflicts with a non-artist tag (e.g. if you try to
create an artist named '1girl', don't show the wiki for 1girl in the
artist edit form).
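Schematically, the validation now looks something like this (the `artist_tag?`
check is a stand-in for the real tag category lookup):

    class Artist < ApplicationRecord
      validate :validate_tag_category

      def validate_tag_category
        if name.blank?
          errors.add(:name, "cannot be blank")  # instead of "Name '' cannot be blank"
        elsif !artist_tag?(name)
          # artist_tag? is a hypothetical helper; attaching the error to :name
          # makes the edit form mark the name field as invalid.
          errors.add(:name, "'#{name}' is not an artist tag")
        end
      end
    end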
* Tie rate limits to both the user's ID and their IP address.
* Give each endpoint its own rate limit (see the sketch after this list).
This means that, for example, your post edit rate limit is separate from
your post vote rate limit. Before, all write actions shared a single rate limit.
* Apply rate limits to all write endpoints. Before, some endpoints, such
as voting, favoriting, commenting, or forum posting, weren't rate
limited at all.
* Add stricter rate limits for some endpoints:
** 1 per 5 minutes for creating new accounts.
** 1 per minute for login attempts, changing your email address, or
for creating mod reports.
** 1 per minute for sending dmails, creating comments, creating forum
posts, or creating forum topics.
** 1 per second for voting, favoriting, or disapproving posts.
** These rate limits all have burst factors high enough that they
shouldn't affect normal, non-automated users.
* Raise the default write rate limit for Gold users from 2 per second to
4 per second, for all other actions not listed above.
* Raise the default burst factor to 200 for all other actions not listed
above. Before it was 10 for Members, 30 for Gold, and 60 for Platinum.
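For illustration only, a per-endpoint limit might be declared roughly like
this; the `rate_limit` DSL below is hypothetical, not Danbooru's actual API:

    class CommentsController < ApplicationController
      # Roughly "1 per minute" with a burst allowance, keyed on both the user ID
      # and the IP address so neither can be used alone to dodge the limit.
      rate_limit :create, rate: 1.0 / 60, burst: 50, keys: [:user_id, :ip_addr]
    end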
Rework the rate limit implementation to make it more flexible:
* Allow setting different rate limits for different actions. Before, we
had a single rate limit for all write actions; now different controller
endpoints can have different limits.
* Allow actions to be rate limited by user ID, by IP address, or both.
Before, actions were limited only by user ID, which meant that non-logged-in
actions like creating new accounts or attempting to log in couldn't be rate
limited. It also meant that you could use multiple accounts from the same IP
to get around limits.
Other changes:
* Remove the API Limit field from user profile pages.
* Remove the `remaining_api_limit` field from the `/profile.json` endpoint.
* Rename the `X-Api-Limit` header to `X-Rate-Limit` and change it from a
number to a JSON object containing all the rate limit info
(including the refill rate, the burst factor, the cost of the call,
and the current limits).
* Fix a potential race condition where, if you flooded requests fast
enough, you could exceed the rate limit. This was because we checked
and updated the rate limit in two separate steps, which was racy;
simultaneous requests could pass the check before the update happened.
The new code uses some tricky SQL to check and update multiple limits
in a single statement.
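A heavily simplified version of that idea, for a single bucket (the table,
columns, and key format here are illustrative; the real statement also
handles the user-ID and IP limits together):

    # Refill the bucket based on elapsed time and spend one point, but only if
    # enough points remain. Because the check and the update happen in a single
    # statement, concurrent requests can't both pass the check.
    allowed = Post.connection.select_value(<<~SQL).present?
      UPDATE rate_limits
      SET points = LEAST(burst, points + rate * EXTRACT(EPOCH FROM now() - updated_at)) - 1,
          updated_at = now()
      WHERE key = 'comments:create:user:1234'
        AND LEAST(burst, points + rate * EXTRACT(EPOCH FROM now() - updated_at)) >= 1
      RETURNING points
    SQL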
Light mode:
* Change the child post border from orange back to dark yellow (still darker
than the previous yellow).
* Make flagged post borders a brighter red.
* Make admin names a brighter red.
* Make parent, child, and pending post notice bars brighter.
* Change copyright tags from purple to magenta (very close to the previous
copyright tag color).
* Darken forum topic new/approved/rejected labels.
Dark mode:
* Make platinum users brighter grey.
Add site icons linking to all the artist's sites in the fetch source
data box.
Some artist entries have a large number of URLs. Various heuristics are
applied to try to present the most useful URLs first. Dead URLs and
redundant URLs (Pixiv stacc and Twitter intent URLs) are filtered out.
Remaining URLs are sorted first by site (to put sites like Pixiv and
Twitter first), then by URL (to break ties when an artist has multiple
accounts on the same site).
Some sites have low-quality, hard-to-read icons. This can't be helped; the
icons are each site's official favicon.
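In rough pseudocode, the ordering works something like this (the helper names
are made up for illustration):

    # Drop dead/redundant URLs, then order by site priority (Pixiv, Twitter, ...)
    # and finally by the URL itself to break ties between multiple accounts.
    sorted_urls = artist_urls
      .reject { |url| dead_url?(url) || redundant_url?(url) }
      .sort_by { |url| [site_priority(url), url.to_s] }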
Remove the following properties:
* clear (wasn't used)
* list-style-* (wasn't used)
* outline (only used a few times; can be achieved with borders or
box-shadow).
* mask (not well-supported in browsers; only used a few times for
effects that could be achieved in other ways).
* text-decoration-* (use the shorthand property)
* text-transform (can be achieved in other ways)
Add a new color palette and rework all site colors (both light mode and dark mode) to
use the new palette.
This ensures that colors are used consistently, from a carefully designed color palette,
instead of being chosen at random.
Before, colors in light mode were chosen on an ad-hoc basis, which resulted in a lot of
random colors and inconsistent design.
The new palette has 7 hues: red, orange, yellow, green, blue, azure (a lighter blue), and
purple. There's also a greyscale. Each hue has 10 shades of brightness, which (including
grey) gives us 80 total colors.
Colors are named like this:
    var(--red-0); /* very light red */
    var(--red-2); /* light red */
    var(--red-5); /* medium red */
    var(--red-7); /* dark red */
    var(--red-9); /* very dark red */
    var(--green-7); /* dark green */
    var(--blue-5); /* medium blue */
    var(--purple-3); /* light purple */
    /* etc */
The color palette is designed to meet the following criteria:
* To have close equivalents to the main colors used in the old color scheme,
especially tag colors, so that changes to major colors are minimized.
* To produce a set of colors that can be used as main text colors, as background
colors, and as accent colors, both in light mode and dark mode.
* To ensure that colors at the same brightness level have the same perceived brightness.
Green-4, blue-4, red-4, purple-4, etc. should all have the same brightness and contrast
ratios, so that colors look balanced. This is actually a difficult problem, because human
color perception is non-linear, so you can't just scale brightness values linearly
(see the sketch after this list).
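As a rough illustration of the non-linearity (this is the standard CIE 1976
lightness formula, not code from the palette generator):

    # CIE lightness L* as a function of linear luminance y in 0.0..1.0.
    # Perceived lightness is roughly the cube root of luminance, so linearly
    # spaced luminance values do not produce evenly spaced-looking shades.
    def cie_lightness(y)
      y > (6.0 / 29)**3 ? 116 * y**(1.0 / 3) - 16 : (29.0 / 3)**3 * y
    end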
There's a color palette test page at https://danbooru.donmai.us/static/colors
Notable changes to colors in light mode:
* Username colors are the same as tag colors.
* Copyright tags are a deeper purple.
* Builders are a deeper purple (fixes #4626).
* Moderators are green.
* Gold users are orange.
* Parent borders are a darker green.
* Child borders are a darker orange.
* Unsaved notes have a thicker red border.
* Selected notes have a thicker blue (not green) border.
Bug: In Postgres 13, getting the count of a blank search underestimated
the page count by a large margin (~700,000 posts).
The query we were executing was this:
    EXPLAIN (FORMAT JSON) SELECT * FROM posts ORDER BY id DESC
The `ORDER BY id DESC` clause triggers a parallel seq scan query plan
in Postgres 13, which for some reason causes Postgres to underestimate
the row count by a large amount in each parallel branch.
Getting rid of the ORDER BY clause makes it do a regular seq scan, which
gives an accurate estimate.
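Roughly, the fixed estimate just drops the ORDER BY and reads the planner's
row estimate from the JSON plan (a sketch only; exactly how the json column
comes back depends on the adapter, hence the string check):

    raw = Post.connection.select_value("EXPLAIN (FORMAT JSON) SELECT * FROM posts")
    plan = raw.is_a?(String) ? JSON.parse(raw) : raw
    estimated_rows = plan.first["Plan"]["Plan Rows"]  # accurate for a plain seq scan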