* Have CI build Docker images for both x86 and ARM.
* Add a `bin/rails danbooru:docker:build-arm` command for building a Docker image locally for ARM.
Usage:

* Test the image:

    docker run --rm -it --platform linux/arm64 ghcr.io/danbooru/danbooru bash

* Build the image:

    bin/rails danbooru:docker:build-arm

* Build the image by hand (the x86 build is shown; use `--platform
  linux/arm64` for ARM):

    git archive HEAD | docker buildx build - --platform linux/amd64 --build-arg SOURCE_COMMIT=$(git rev-parse HEAD) -t danbooru -f Dockerfile --load
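* Set up cross-building (one-time, if building the ARM image on an x86
  host; this is a typical buildx setup shown as an assumption, not
  something the rake task does for you):

    # Register QEMU binfmt handlers for arm64, then confirm buildx lists the platform.
    docker run --privileged --rm tonistiigi/binfmt --install arm64
    docker buildx ls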
Switch the ActiveJob backend from DelayedJob to GoodJob. Differences:
* The job worker is run with `bin/good_job start` instead of `bin/delayed_job`.
* Jobs have an 8 hour timeout instead of a 4 hour timeout.
* Jobs don't automatically retry on failure.
* Finished jobs are preserved and pruned after 7 days.
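For example, starting a worker with GoodJob's standard CLI options (the
values shown are illustrative, not Danbooru's exact settings):

    # Poll all queues with up to 4 job threads.
    $ bin/good_job start --queues='*' --max-threads=4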
Refactor StorageManager to remove all image URL generation code. The
image URL generation code now lives in MediaAsset instead.
Now StorageManager is only concerned with how to read and write files to
remote storage backends like S3 or SFTP, not with how image URLs should
be generated. This way the file storage code isn't tightly coupled to
posts, so it can be used to store any kind of file, not just images
belonging to posts.
* Make danbooru:images:validate task run in parallel.
* Remove task for regenerating thumbnails.
* Move all rake tasks into the same file.
* Add usage instructions.
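For example, running the parallelized validate task:

    # Validate all stored images; the task parallelizes the work internally.
    $ bin/rails danbooru:images:validate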
No longer used now that we use Kubernetes to deploy the site instead of
Capistrano.
If you run your own installation of Danbooru and you used Capistrano to
deploy your site, it is recommended that you switch to either the Docker
Compose file (for personal installs), the Procfile (for non-Dockerized
development environments), or Kubernetes (for production environments;
see https://github.com/danbooru/danbooru-infrastructure/tree/master/k8s
for Danbooru's production configuration).
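For reference, the non-Kubernetes options look like this (this assumes
the docker-compose.yaml and Procfile in the repo root, and uses foreman
as one common Procfile runner):

    # Personal install: run the whole stack with Docker Compose.
    $ docker compose up

    # Non-Dockerized development: run the processes defined in the Procfile.
    $ gem install foreman && foreman start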
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
  instance directly (see the example after this list).
* We no longer send add/remove image commands to IQDB through AWS SQS;
  now we send them to IQDB directly. They are sent in a delayed job so
  that uploading images is still possible when IQDB is down; the add
  image commands simply get queued up until it comes back.
* Fix a bug where regenerating an image's thumbnails didn't update
  IQDB, because IQDB silently ignored add image commands when the image
  already existed in the database.
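Talking to IQDB directly looks roughly like this (the endpoint paths
follow the danbooru/iqdb README; treat the exact paths and the
host/port as assumptions for your setup):

    # Add an image to the IQDB instance, keyed by post ID.
    $ curl -F file=@image.jpg http://localhost:5588/images/1234

    # Query for images similar to a given image.
    $ curl -F file=@image.jpg http://localhost:5588/query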
Fix an issue where the New Relic agent always started in the production
environment, even when a license key wasn't configured.
Also make the New Relic agent log to stdout instead of log/newrelic_agent.log.
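Both behaviors map to standard newrelic_rpm environment variables,
shown here as an illustration of the wiring rather than required setup:

    # The agent should only start when a license key is configured.
    $ export NEW_RELIC_LICENSE_KEY=<your-key>

    # Log to stdout instead of log/newrelic_agent.log.
    $ export NEW_RELIC_LOG=stdout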
Set sensible defaults for connecting to the database. By default, we try
to connect to the `danbooru2` database running on localhost as the
`danbooru` user. These are the defaults recommended by the install
guide.
If you need to change the database settings, set DATABASE_URL in
.env.local or on the command line:
    # .env.local
    DATABASE_URL=postgresql://danbooru:password@localhost/danbooru2

    # command line
    $ DATABASE_URL=postgresql://danbooru:password@localhost/danbooru2 bin/rails server
This eliminates the need to copy script/install/database.yml.templ to
config/database.yml during installation and deployment, so that
Danbooru works out of the box without extra configuration. In
particular, it lets us run Danbooru in a Docker container without
having to set DATABASE_URL.
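To confirm which database the app actually connects to, one option is
Rails' standard runner:

    # Prints the name of the connected database.
    $ bin/rails runner 'puts ActiveRecord::Base.connection.current_database'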
Add a dedicated queue for bulk update requests and process it using a
single worker. This prevents bulk updates from consuming all available
workers and blocking other job types from running.

This also effectively serializes bulk updates so that they're processed
one at a time instead of in parallel. This will be slower overall but
may avoid some of the issues with indeterminate update order under
parallel updates.
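If these jobs run under GoodJob (see the switch above), this kind of
split can be expressed on the worker command line; the queue name
`bulk_update` is an assumption for illustration:

    # One dedicated thread for bulk update requests, plus a pool for all other queues.
    $ bin/good_job start --queues='bulk_update:1;*'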
Drop support for https://danbooru.donmai.us/cache/tags.json. This was a
nightly dump of the tags table that was originally added in #1012. It
was never documented and never really used, except by the DanbooruUp
extension.