No longer used now that we use Puma in production. If your install still
uses Unicorn, switch to `bin/rails server` instead. See config/puma.rb
for the configuration settings.
No longer used now that we use Kubernetes to deploy the site instead of
Capistrano.
If you run your own installation of Danbooru and used Capistrano to
deploy your site, it is recommended that you switch to either the Docker
Compose file (for personal installs), the Procfile (for non-Dockerized
development environments), or Kubernetes (for production environments;
see https://github.com/danbooru/danbooru-infrastructure/tree/master/k8s
for Danbooru's production configuration).
The artist ban tests deadlocked because of a weird interaction between
threads and database transactions when tagging posts in parallel. Add a
hack to work around it.
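The hack itself isn't spelled out here; purely as a hypothetical sketch of
the shape of such a workaround (the `parallel_each` helper and its behavior
are assumptions, not the actual fix), it could fall back to sequential
execution under test, since forked workers can't see rows created inside
the test's open database transaction:
```ruby
require "parallel"

# Hypothetical helper: run the block in parallel processes normally, but
# sequentially in tests, where forked workers would deadlock against the
# open test transaction.
def parallel_each(items, in_processes: 4, &block)
  return items.each(&block) if Rails.env.test?
  Parallel.each(items, in_processes: in_processes, &block)
end
```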
When processing an alias, rename, implication, mass update, or nuke,
update the posts in parallel. This means that if we alias foo to bar,
for example, then we use four processes at once to retag the posts from
foo to bar.
This doesn't mean that if we have two aliases in a BUR, we process both
aliases in parallel. It simply means that when processing a single
alias, we update that alias's posts in parallel.
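A minimal sketch of the idea, assuming the `parallel` gem is used for the
worker processes; `Post.tag_match` exists in Danbooru, but the retagging
helpers here are illustrative names only:
```ruby
require "parallel"

# Retag every post matching the old tag, using four worker processes.
def update_posts(old_name, new_name)
  posts = Post.tag_match(old_name)
  Parallel.each(posts, in_processes: 4) do |post|
    # `remove_tag!` / `add_tag!` are hypothetical helpers for this sketch.
    post.remove_tag!(old_name)
    post.add_tag!(new_name)
  end
end
```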
When a BUR is approved, put it in a `processing` state. After it
successfully finishes processing, put it in the `approved` state. If it
fails processing, put it in the `failed` state.
If approving the BUR fails with a validation error, for example because
the alias already exists or an implication lacks a wiki, then leave the
BUR in the `pending` state. The `failed` state is only for unexpected
errors during processing.
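A rough sketch of the state transitions (method and attribute names are
assumptions, not the actual code):
```ruby
class BulkUpdateRequest < ApplicationRecord
  def approve!(approver)
    validate_script!    # raises on e.g. a duplicate alias or a missing
                        # wiki, leaving the BUR in the `pending` state
    update!(status: "processing", approver: approver)
    ProcessBulkUpdateRequestJob.perform_later(self)
  end

  def process!
    process_script!     # apply each line of the BUR in order
    update!(status: "approved")
  rescue StandardError
    update!(status: "failed")   # unexpected error during processing
    raise
  end
end
```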
Change the way BURs are processed. Before, we spawned a background job
for each line of the BUR, then processed each job sequentially. Now, we
process the entire BUR sequentially in a single background job (sketched
after the list below).
This means that:
* BURs are truly sequential now. Before, certain things like removing
aliases weren't actually performed in a background job, so they were
performed out-of-order, before everything else in the BUR.
* Before, if an alias or implication line failed, then subsequent alias
or implication lines would still be processed. This was because each
alias or implication line was queued as a separate job, so a failure
of one job didn't block another. Now, if any alias or implication
fails, the entire BUR will fail and stop processing after that line.
This may be good or bad, depending on whether we actually need the BUR
to be processed in order or not.
* Before, BURs were processed inside a database transaction (except for
the actual updating of posts). Now they're not, because we can't afford
to hold transactions open while processing long-running aliases or
implications. This means that if a BUR fails in the middle when it is
initially approved, it will be left in a half-complete state. Before,
it would have been rolled back and left in the pending state with no
changes performed.
* Before, only one BUR at a time could be processed. If multiple BURs
were approved at the same time, then they would queue up and be
processed one at a time. Now, multiple BURs can be processed at the
same time. This may be undesirable when processing large BURs, or BURs
that must be approved in a specific order.
* Before, large tag category changes could time out. This was because
they weren't actually performed in a background job. Now they are, so
they shouldn't time out.
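Filling in the `process_script!` step from the earlier sketch, the
single-job loop mentioned above is roughly the following (names are
assumptions):
```ruby
# Inside BulkUpdateRequest (sketch). The whole script runs in one
# background job with no wrapping database transaction, so a failure
# partway through leaves the earlier lines applied and the BUR `failed`.
def process_script!
  script_lines.each do |line|
    line.process!   # create the alias, implication, mass update, etc.
  end
end
```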
Make it so pull requests from outside contributors can't edit workflows
under .github/workflows/ without approval. Also limit workflows to the
minimum permissions necessary.
Split up the GitHub workflow. Instead of one workflow with two jobs, one
to build the Docker image and one to test it, split it into two separate
workflows, one to build and one to test. This way, if the Docker build
fails, we don't try to run the tests, and if the tests fail, only the
test workflow is marked as failed, not the entire workflow.
This is mainly so the workflows page doesn't show everything as failing
just because the tests failed.
https://github.com/danbooru/danbooru/actions
Bug: if ExifTool exited with status 1 because it thought the file was
corrupt, then we didn't record any of the metadata, even though ExifTool
was able to read most of it. It turns out there are thousands of posts
with slightly corrupt metadata that ExifTool can still read, but will
complain about.
Fix: ignore the exit code of ExifTool and always save whatever metadata
ExifTool is able to return. It will return an `ExifTool:Error` tag in
the event of errors.
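A sketch of the fix; the `-json` invocation and the `ExifTool:Error` tag
are real ExifTool behavior, while the surrounding method is illustrative:
```ruby
require "json"

# Read metadata with exiftool, deliberately ignoring its exit status.
# Slightly corrupt files exit nonzero but still emit usable metadata,
# plus an "ExifTool:Error" tag describing the problem.
def read_metadata(path)
  output = IO.popen(["exiftool", "-G1", "-json", path], &:read)
  JSON.parse(output).first || {}
end
```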
Note that there are some (many?) files that are considered corrupt by
ExifTool but not by Vips, and vice versa, probably because ExifTool only
parses the metadata while Vips only parses the image data.
Fix ExifTool not being able to get the metadata for compressed SWF
files. ExifTool requires Compress::Zlib as an optional dependency to
decompress compressed SWF files, but it wasn't installed in the Docker
image. Archive::Zip is required for Zip files and Digest::MD5 for
certain other metadata (see "DEPENDENCIES" in the ExifTool README).
Unlike Unicorn, Puma doesn't have a built-in HTTP request timeout
mechanism, so we have to use Rack::Timeout instead.
See the caveats in the Rack::Timeout documentation [1]. In Unicorn, a
timeout would send a SIGKILL to the worker, immediately killing it. This
would result in a dropped connection and a Cloudflare 502 error to the
user. In Puma, it raises an exception, which we can catch and return a
better error to the user. On the other hand, raising an exception can
potentially corrupt application state if it's sent at the wrong time, or
be delayed indefinitely if the app is stuck in IO or C extension code.
The default request timeout is 65 seconds, which gives operations like
outbound HTTP requests with a 60-second timeout enough time to complete.
Set the RACK_REQUEST_TIMEOUT environment variable to change the timeout.
1: https://github.com/sharpstone/rack-timeout#further-documentation
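The setup follows rack-timeout's documented manual middleware insertion;
the RACK_REQUEST_TIMEOUT variable is the one added here, while the file
location and insertion point are assumptions:
```ruby
# config/initializers/rack_timeout.rb (sketch)
require "rack/timeout/base"

timeout = Integer(ENV.fetch("RACK_REQUEST_TIMEOUT", "65"))
Rails.application.config.middleware.insert_before Rack::Runtime, Rack::Timeout, service_timeout: timeout

# A timed-out request raises Rack::Timeout::RequestTimeoutException inside
# the app, which we can rescue to return a friendlier error page.
```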
Include OpenResty in the base Docker image. This is so we can run
OpenResty in front of Danbooru as a reverse proxy to serve static assets
(CSS, JS, and static images living in public/images).
Including the proxy in the same container as the static assets avoids a
lot of problems with trying to share files across separate containers.
Change this message:
    2 post(s) on this page were hidden by safe mode. Go to Danbooru or
    disable safe mode to view them (learn more).
so that it links to [[help:safe mode]] instead of [[help:user settings]].
Update the config for the Puma webserver (used by `bin/rails server`).
* Update default settings.
* Prefix all Puma environment variables with `PUMA_`.
* Enable the Puma control app (`bin/pumactl`).
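The resulting config looks something like this (a sketch; the `PUMA_`
prefix is from this change, but the specific variables and defaults are
assumptions):
```ruby
# config/puma.rb (sketch)
workers Integer(ENV.fetch("PUMA_WORKERS", "2"))

threads_count = Integer(ENV.fetch("PUMA_THREADS", "5"))
threads threads_count, threads_count

port ENV.fetch("PUMA_PORT", "3000")

# Enable the control app so `bin/pumactl` can query and manage the server.
activate_control_app
```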
Fix a bug where if you did a slow search that took too long to calculate
the page count, and you had 200 posts per page, then we would show page
5000 as the last page of the search.
This was because we artificially returned 1,000,000 as the post count to
signal that the count timed out, but at 200 posts per page that made
page 5000 appear to be the last page.
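To illustrate the arithmetic, and one way to avoid it (a sketch, not the
actual fix; it assumes the count is reported as nil on timeout instead
of a fake 1,000,000):
```ruby
# Old behavior: a timed-out count was reported as 1_000_000, so at 200
# posts per page the pager computed (1_000_000 / 200) == 5000 as the
# last page.
def last_page(count, posts_per_page)
  return nil if count.nil?   # count timed out; the last page is unknown
  (count.to_f / posts_per_page).ceil
end
```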