Update the config for the Puma webserver (used by `bin/rails server`).
* Update default settings.
* Prefix all Puma environment variables with `PUMA_`.
* Enable the Puma control app (`bin/pumactl`).
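A minimal sketch of what config/puma.rb might look like after this change; the
variable names and defaults below are assumptions, not the real settings:

# config/puma.rb (sketch; variable names and defaults are assumptions)
port    Integer(ENV.fetch("PUMA_PORT", 3000))
workers Integer(ENV.fetch("PUMA_WORKER_COUNT", 0))
threads Integer(ENV.fetch("PUMA_MIN_THREADS", 1)), Integer(ENV.fetch("PUMA_MAX_THREADS", 5))

# Lets `bin/pumactl` talk to the running server.
activate_control_app ENV.fetch("PUMA_CONTROL_URL", "tcp://127.0.0.1:9293"), { no_token: true }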
Add a model for storing image and video metadata for uploaded files.
Metadata is extracted using ExifTool. You will need to install ExifTool
after this commit. ExifTool 12.22 is the minimum required version
because we use the `--binary` option, which was added in this release.
The MediaMetadata model is separate from the MediaAsset model because
some files contain tons of metadata, and most of it is non-essential.
The MediaAsset model represents an uploaded file and contains essential
metadata, like the file's size and type, while the MediaMetadata model
represents all the other non-essential metadata associated with a file.
Metadata is stored as a JSON column in the database.
ExifTool returns all the file's metadata, not just the EXIF metadata.
EXIF is one of several types of image metadata, which is why we call
it MediaMetadata instead of EXIFMetadata.
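Roughly, this amounts to a schema like the following (a sketch; the exact
column names, defaults, and migration version are assumptions):

class CreateMediaMetadata < ActiveRecord::Migration[6.1]
  def change
    create_table :media_metadata do |t|
      t.references :media_asset, null: false, foreign_key: true
      t.jsonb :metadata, null: false, default: {}
      t.timestamps
    end
  end
end

class MediaMetadata < ApplicationRecord
  belongs_to :media_asset
end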
Hardcode the list of nondisposable email providers instead of making it
a config option. Also add a few new providers.
This was previously a config option to keep it secret, but there's not
much need for secrecy here.
A Restricted user's email must be on this list to unrestrict their
account. If a user is Restricted and their email is not in this list,
then it's assumed to be disposable and can't be used to unrestrict their
account even if they verify their email address.
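A sketch of the check this implies; the constant and method names below are
hypothetical:

# Hypothetical names; the real list is longer and lives elsewhere in the code.
NONDISPOSABLE_EMAIL_PROVIDERS = %w[gmail.com yahoo.com outlook.com protonmail.com]

def nondisposable_email?(address)
  domain = address.to_s.split("@").last.to_s.downcase
  NONDISPOSABLE_EMAIL_PROVIDERS.include?(domain)
end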
Remove StorageManager::Hybrid and StorageManager::Match. These were used
to store uploads on different servers based on the post ID or file
sample type. This is no longer used in production because in hindsight
it's a lot more difficult to manage uploads when they're fragmented
across different servers.
If you need this, you can do tricks with network filesystems to get the
same effect. For example, if you want to store some files on server A
and others on server B, then mount servers A and B as network
filesystems (with e.g. sshfs, Samba, NFS, etc), and use symlinks to
point subdirectories at either server A or B.
Caused by Webpack clobbering the `Danbooru` variable for unknown reasons
when the app/javascript/packs/flash.js pack is loaded. Disabling the
output.library option seems to fix it.
A MediaAsset represents an image or video file uploaded to Danbooru. It
stores the metadata associated with the image or video. This is part of an
effort to decouple files from posts, so that images can be uploaded separately
from posts.
Make nokogiri use the bundled version of libxml2 instead of the system
version. In the past installing nokogiri was slow because it had to
compile the bundled version of libxml2, which is partly why we switched
to the system library. Now it's faster because the bundled version comes
pre-compiled with the nokogiri gem.
https://nokogiri.org/#native-gems-faster-more-reliable-installation
Reverts 440bbbb28.
Add support for using a proxy for HTTP requests. Only used for external
requests, such as downloading files or talking to source sites such as
Pixiv or Twitter, not for internal requests, such as talking to IQDB or
Reportbooru.
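A sketch of what the corresponding config might look like; the option name
below is hypothetical:

# config/danbooru_local_config.rb (sketch; `http_proxy` is a hypothetical option name)
def http_proxy
  "http://user:password@proxy.example.com:3128"
end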
Using `Rails.logger` here causes server boot to fail with an `undefined
method 'tagged'` error, possibly because `Rails.logger` isn't ready yet
during early initialization.
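A sketch of the kind of workaround this implies, building a tagged logger
directly instead of going through `Rails.logger` (the exact code is an
assumption):

logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new(STDOUT))
logger.tagged("init") { logger.info("...") }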
* Add README files to several directories in app/ giving a brief
overview of some parts of Danbooru's architecture.
* Add documentation for files in config/.
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible even if IQDB is down; the add
image commands will just get queued up (see the sketch after this list).
* Fix a bug where regenerating an image's thumbnails didn't regenerate
IQDB, because IQDB silently ignored add image commands when the image
already existed in the database.
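A sketch of the delayed add-image command mentioned above; the job and client
names are hypothetical:

class IqdbAddImageJob < ApplicationJob
  # Retry with backoff so the command stays queued while IQDB is down.
  retry_on StandardError, wait: :exponentially_longer, attempts: 10

  def perform(media_asset)
    IqdbClient.new.add_image(media_asset)
  end
end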
Fix an issue where the New Relic agent always started in the production
environment, even when a license key wasn't configured.
Also make the New Relic agent log to stdout instead of log/newrelic_agent.log.
Use the prebuilt Docker images instead of building them locally in the
Docker Compose script. This is faster, but it means that local changes
to the code will be ignored.
Fix the quickstart command in the README not working for Ubuntu 18.04.
This was because the Docker Compose file was set to version 3.7, but
Ubuntu 18.04 ships an older version of Docker Compose that only supports
version 3.4.
Fix the ca-certificates package not being installed inside the base
Docker image. This caused uploads from HTTPS sites to fail because TLS
certificates couldn't be validated.
Allow specifying the location of the `config/danbooru_local_config.rb`
file with the DANBOORU_CONFIG_FILE environment variable. For example:
DANBOORU_CONFIG_FILE=/etc/danbooru/danbooru_local_config.rb bin/rails server
This is useful in Kubernetes because it lets us mount a directory
containing the config file without it clobbering everything else in the
config/ directory.
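A sketch of how the config file path might be resolved; the actual loading
code may differ:

config_file = ENV.fetch("DANBOORU_CONFIG_FILE") { Rails.root.join("config/danbooru_local_config.rb").to_s }
require config_file if File.exist?(config_file)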
Bug: if someone ran the server with RAILS_ENV=production, but tried to
access the site under http://, then logging in didn't work. This was
because we set the `secure` flag on cookies when running in the
production environment, because we assumed that in production you were
using HTTPS. If you weren't using HTTPS, then the `secure` flag
prevented session cookies from being sent under http://.
The default now is to use http:// instead of https:// for the
`canonical_url` option.
If you run a Danbooru instance, and you use HTTPS, you will have to
change the `canonical_url` config option to "https://www.mybooru.com".
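One way to express the fix (an assumption about how it's implemented, and the
session key name is also an assumption): derive the cookie `secure` flag from
the canonical URL's scheme instead of from RAILS_ENV.

Rails.application.config.session_store :cookie_store,
  key: "_danbooru_session",
  secure: Danbooru.config.canonical_url.to_s.start_with?("https://")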
Allow viewing Flash posts with the Ruffle emulator.
Known issues:
* Many flash files aren't fully supported.
* In development it sometimes glitches out and starts triggering random
keyboard shortcuts when you press any key. This doesn't happen with
the browser extension.
* We have to put the .wasm file in the public/packs/js directory because
Ruffle is hardcoded to search for it there.
* If you're running Nginx, you need to make sure you're serving the
right MIME type for .wasm files or it won't work.
* We're using the unofficial ruffle-mirror NPM package, since the
Ruffle project doesn't publish an official package themselves. We
should build our own package.
References:
* https://github.com/ruffle-rs/ruffle
* https://github.com/ruffle-rs/ruffle/wiki/Using-Ruffle#configure-webassembly-mime-type
* https://www.npmjs.com/package/ruffle-mirror
Fix errors about failing to connect to Redis in Docker containers and
development installs that don't have Redis installed.
Downstream boorus that do use Redis will need to uncomment this line or
set `redis_url` manually in their config to enable Redis again.
Allow admins to remove comment votes by other users. This is done by
clicking the comment score to get to the comment vote list, then
clicking the Remove button on every vote.
* Optimize Dockerfile to minimize size of the Docker image.
* Specify exact versions of important dependencies (Ruby, Node, Vips) to
ensure our dependencies are up to date and locked to known versions.
* Install Vips from source because the version that ships with Ubuntu is too old.
* Install FFmpeg from source because otherwise using the Ubuntu package
pulls in tons of video libraries we don't need, bloating the image.
Always log to stdout instead of logging to files in `log/{development,production}.log`.
For development, logging to files wasn't really useful, and could
generate multi-gigabyte log files if you weren't paying attention. For
production, most systems these days (such as Docker and Systemd) prefer
that you write your logs to stdout so they can manage them.
Fixes the Docker image writing logs inside the container, which never
got rotated and could fill up the container.
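A minimal sketch of the logger setup this implies in
config/environments/production.rb (whether tagged logging is wrapped around it
is an assumption):

config.logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new(STDOUT))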
Fix Rails complaining about IpAddressType not being reloaded by hot
reloading:
DEPRECATION WARNING: Initialization autoloaded the constant IpAddressType.
Being able to do this is deprecated. Autoloading during initialization is going
to be an error condition in future versions of Rails.
Reloading does not reboot the application, and therefore code executed during
initialization does not run again. So, if you reload IpAddressType, for example,
the expected changes won't be reflected in that stale Class object.
This autoloaded constant has been unloaded.
In order to autoload safely at boot time, please wrap your code in a reloader
callback this way:
Rails.application.reloader.to_prepare do
# Autoload classes and modules needed at boot time here.
end
That block runs when the application boots, and every time there is a reload.
For historical reasons, it may run twice, so it has to be idempotent.
Check the "Autoloading and Reloading Constants" guide to learn more about how
Rails autoloads and reloads.
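The fix follows the pattern the warning suggests. A sketch, assuming the type
is registered in an initializer (the exact registration call is an assumption):

# config/initializers/types.rb (sketch)
Rails.application.reloader.to_prepare do
  ActiveRecord::Type.register(:ip_address, IpAddressType)
end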
Add a Docker Compose file that launches a minimal Danbooru instance in a
Docker container with a single command. This is suitable as a quick demo
or for personal use, not for public-facing sites.
To use it, just run `bin/danbooru`. This is a wrapper script that
installs Docker Compose and then uses it to start Danbooru.
This will generate a lot of debug output and take several minutes while
it builds the Docker containers. Be patient. When it's done, you should
have an empty booru accessible at http://localhost.
Automatically generate a random secret key for `Danbooru.config.secret_key_base`
if no key is specified.
This is so that you can run Danbooru in a Docker container with zero
configuration.
This removes support for the ~/.danbooru/secret_token file and the
SECRET_TOKEN environment variable. If you used either one of these, you
must copy the value either to DANBOORU_SECRET_KEY_BASE in .env.local, or to
`secret_key_base` in config/danbooru_local_config.rb.
# .env.local
DANBOORU_SECRET_KEY_BASE=<value>
# config/danbooru_local_config.rb
def secret_key_base
  "<value>"
end
Set sensible defaults for connecting to the database. By default, we try
to connect to the `danbooru2` database running on localhost as the
`danbooru` user. These are the defaults recommended by the install
guide.
If you need to change the database settings, set DATABASE_URL in
.env.local or on the command line:
# .env.local
DATABASE_URL=postgresql://danbooru:password@localhost/danbooru2
# command line
$ DATABASE_URL=postgresql://danbooru:password@localhost/danbooru2 bin/rails server
This eliminates the need to copy script/install/database.yml.templ to
config/database.yml during installation and during deployment. This is
so that Danbooru works out of the box without extra configuration. In
particular, this is so that we can run Danbooru in a Docker container
without having to set DATABASE_URL.
Generate image URLs relative to the site's canonical URL instead of
relative to the domain of the current request.
This means that all subdomains of Danbooru - safebooru.donmai.us,
shima.donmai.us, saitou.donmai.us, and kagamihara.donmai.us - will use
image URLs from https://danbooru.donmai.us, instead of from the current
domain.
The main reason we did this before was so that we could generate either
http:// or https:// image URLs, depending on whether the current request
was HTTP or HTTPS, back when we tried to support both at the same time.
Now we support only HTTPS in production, so there's no need for this. It
was also pretty hacky, since it required storing the URL of the current
request in a per-request global variable in `CurrentUser`.
This also improves caching slightly, since users of safebooru.donmai.us
will receive cached images from danbooru.donmai.us.
Downstream boorus should make sure that the `canonical_url` and
`storage_manager` config options are set correctly. If you don't support
https:// in development, you should make sure to set the canonical_url
option to http:// instead of https://.
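For downstream boorus, that means something like this in the local config (a
sketch, following the same pattern as the other config options):

# config/danbooru_local_config.rb
def canonical_url
  "https://danbooru.example.com"  # or an http:// URL if you don't support HTTPS
end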
Remove a monkey patch that added a `to_i` method to `FalseClass` so that
`false.to_i` returned 0. This is legacy code that shouldn't still be in
use anywhere. It doesn't really work anyway, because `true.to_i` isn't
defined.
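For reference, the removed patch was roughly equivalent to this:

class FalseClass
  def to_i
    0
  end
end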
Add ngrok.io (plus a few more domains) to the hostname whitelist so that
it can be used as a hostname in development. Useful for testing
webhooks.
* https://ngrok.com
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which is needed by find_each).
* Version tables are excluded because dumping them every day is
inefficient; streaming inserts should be used instead.
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json
Allow users to view their own rate limits with /rate_limits.json.
Note that rate limits are only updated when an API call is made, so this
page shows the state of the limits as of the last call, not the
current state.
* Fix it so that all edit forms show an error banner if the form
has validation errors. Previously forms had to manually call
`error_messages_for`, which not all forms did.
* Fix it so that the full validation error message is shown next to each
input attribute that had errors. Also update the styling of these
error messages to look better.
Add a new color palette and rework all site colors (both light mode and dark mode) to
use the new palette.
This ensures that colors are used consistently, from a carefully designed color palette,
instead of being chosen at random.
Before, colors in light mode were chosen on an ad-hoc basis, which resulted in a lot of
random colors and inconsistent design.
The new palette has 7 hues: red, orange, yellow, green, blue, azure (a lighter blue), and
purple. There's also a greyscale. Each hue has 10 shades of brightness, which (including
grey) gives us 80 total colors.
Colors are named like this:
var(--red-0); /* very light red */
var(--red-2); /* light red */
var(--red-5); /* medium red */
var(--red-7); /* dark red */
var(--red-9); /* very dark red */
var(--green-7); /* dark green */
var(--blue-5); /* medium blue */
var(--purple-3); /* light purple */
/* etc */
The color palette is designed to meet the following criteria:
* To have close equivalents to the main colors used in the old color scheme,
especially tag colors, so that changes to major colors are minimized.
* To produce a set of colors that can be used as main text colors, as background
colors, and as accent colors, both in light mode and dark mode.
* To ensure that colors at the same brightness level have the same perceived brightness.
Green-4, blue-4, red-4, purple-4, etc should all have the same brightness and contrast
ratios. This way colors look balanced. This is actually a difficult problem, because human
color perception is non-linear, so you can't just scale brightness values linearly.
There's a color palette test page at https://danbooru.donmai.us/static/colors.
Notable changes to colors in light mode:
* Username colors are the same as tag colors.
* Copyright tags are a deeper purple.
* Builders are a deeper purple (fixes #4626).
* Moderators are green.
* Gold users are orange.
* Parent borders are a darker green.
* Child borders are a darker orange.
* Unsaved notes have a thicker red border.
* Selected notes have a thicker blue (not green) border.