* Use fixed access tokens instead of fetching an access token with the OAuth flow. This assumes
access tokens won't expire, which seems to be true for the default app-level access token, unless
you manually regenerate it. This also works around the OAuth flow not working on Baraag, for
unknown reasons.
* Eliminate the MastodonApiClient class. Just inline it in the extractor instead.
Downstream users will need to update their configs to set the `pawoo_access_token` and
`baraag_access_token` config options.
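For example, a minimal sketch of the new settings, following the usual
config/danbooru_local_config.rb override style (the token values are placeholders):

    # config/danbooru_local_config.rb
    def pawoo_access_token
      "<your Pawoo app-level access token>"
    end

    def baraag_access_token
      "<your Baraag app-level access token>"
    end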
Fix external HTTP requests not working when the HTTP proxy was enabled. This was caused by the
`public_only` option (which prevents SSRF attacks by validating that the URL doesn't resolve to a
local IP) being incompatible with the `proxy` option.
The previous upload limit was 50MB, to discourage uploading excessively
large images. But for videos this can be too low, especially for long
videos at high resolutions.
The upload limit really should be around 200MB to allow for a ~10Mbps
bitrate at the maximum upload length of 2:20 (10 Mbps × 140 s ≈ 175 MB). However, the maximum
upload limit under Cloudflare is 100MB, so if we raised the upload limit
beyond this, it would only work when uploading a file from a source URL,
not from your computer. To get around this, we would have to put the
upload endpoint outside of Cloudflare, or allow uploading files in
chunks.
Add config options to customize where uploads are stored, and how image URLs are generated.
* Add `media_asset_file_path` option to customize where uploads are stored.
* Add `media_asset_file_url` option to customize how image URLs are generated.
* Remove the `enable_seo_post_urls` config option. The `media_asset_file_url` option
should be used instead to include the tags in the image URL.
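A sketch of setting both options in the same local-config style; the value formats
here are assumptions, so check config/danbooru_default_config.rb for what each
option actually expects:

    # config/danbooru_local_config.rb
    def media_asset_file_path
      "/var/www/danbooru/public/data"  # assumed: base directory for uploads
    end

    def media_asset_file_url
      "https://cdn.mybooru.com/data"   # assumed: base URL for image links
    end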
Add a system for upgrading accounts using upgrade codes. Users purchase
an upgrade code off-site, then redeem it on-site to upgrade their account
to Gold. Upgrade codes are randomly pre-generated and are one-time use
only. Codes have enough randomness that guessing one is infeasible.
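A minimal sketch of the scheme; the `UpgradeCode` model, its columns, and
`upgrade_to_gold!` are hypothetical names, and only the overall design comes from
this changelog:

    class UpgradeCode < ApplicationRecord
      # Pre-generate a batch of single-use codes. 24 random alphanumeric
      # characters is ~143 bits of randomness, so guessing a code is infeasible.
      def self.pregenerate!(count)
        count.times { create!(code: SecureRandom.alphanumeric(24), redeemed: false) }
      end

      # Redeem under a row lock so the same code can't be used twice.
      def redeem!(user)
        with_lock do
          raise "code already redeemed" if redeemed?
          update!(redeemed: true)
          user.upgrade_to_gold! # hypothetical helper that grants Gold
        end
      end
    end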
Add options to disable comments, the forum, and autocomplete. This is
for personal boorus and potentially for safe mode. Note that disabling
the forum may cause difficulties with creating and approving BURs (bulk
update requests). Disabling comments and the forum merely hides them
from most areas, rather than completely removing them.
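A hypothetical sketch of the switches in the local config; the option names are
assumptions (check the default config for the real ones):

    # config/danbooru_local_config.rb
    def comments_enabled?
      false
    end

    def forum_enabled?
      false
    end

    def autocomplete_enabled?
      false
    end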
* Remove the default list of blocked tags in safe mode.
* Change it so that tags that are blocked in safe mode are filtered out
at the database level rather than at the HTML level.
Factor out the Stripe code from the UserUpgrade class. Introduce a new
PaymentTransaction abstract class that represents a payment with some
payment processor, and a PaymentTransaction::Stripe class that
implements transactions with Stripe.
Note that we can't completely eliminate Stripe even though we no longer
accept payments with it, because we still need to be able to look up old
payments in Stripe.
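A rough sketch of the new shape; the class names come from this changelog, but the
method names are illustrative:

    # Represents a payment with some payment processor.
    class PaymentTransaction
      def create!(user)
        raise NotImplementedError
      end

      def refund!
        raise NotImplementedError
      end

      # Implements transactions with Stripe. Kept around so that old
      # Stripe payments can still be looked up.
      class Stripe < PaymentTransaction
        def create!(user)
          # Call the Stripe API here.
        end

        def refund!
          # Call the Stripe API here.
        end
      end
    end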
Support grabbing the full image for Tinami uploads, rather than the sample.
Getting the full image requires making a request like this:
    curl -X POST \
      -H 'Referer: https://www.tinami.com/' \
      -H 'Content-Type: application/x-www-form-urlencoded' \
      -H 'Cookie: Tinami2SESSID=<redacted>;' \
      --data-raw 'action_view_original=true&cont_id=1087268&ethna_csrf=<redacted>' \
      https://www.tinami.com/view/1087268
Then scraping the <img> tag from the resulting HTML page.
If the post has multiple images, then we need to scrape and pass the
`sub_id` of the image too.
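A rough Ruby equivalent of the request above, assuming Nokogiri for the scraping
step; grabbing the page's first <img> is a simplification, since the real extractor
needs a more precise selector:

    require "net/http"
    require "nokogiri"

    def tinami_original_url(cont_id, session_id, csrf_token)
      uri = URI("https://www.tinami.com/view/#{cont_id}")
      body = URI.encode_www_form(action_view_original: true, cont_id: cont_id, ethna_csrf: csrf_token)
      response = Net::HTTP.post(uri, body,
        "Referer" => "https://www.tinami.com/",
        "Content-Type" => "application/x-www-form-urlencoded",
        "Cookie" => "Tinami2SESSID=#{session_id};")
      # Scrape the <img> tag from the resulting HTML page.
      Nokogiri::HTML(response.body).at("img")["src"]
    end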
Fixes #2818.
NicoSeiga changed it so that on every login, you must enter a 2FA code
sent by email. This broke the NicoSeiga strategy. The fix is to just use
a static session cookie instead (and hope it doesn't expire, and isn't
tied to an IP).
The `nico_seiga_login` and `nico_seiga_password` config settings have
been removed from config/danbooru_default_config.rb and replaced by
`nico_seiga_user_session`. If you run your own Danbooru instance, you
will have to update your config file manually.
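For example, in the local config (the value is a placeholder; copy the
`user_session` cookie from a logged-in niconico session):

    # config/danbooru_local_config.rb
    def nico_seiga_user_session
      "<your niconico user_session cookie value>"
    end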
Do a few micro-optimizations to reduce the number of memory allocations
during thumbnail generation.
This commit, combined with freezing string literals in a7dc05 and
67b961, reduces the number of allocations on the front page from 180,000
to 150,000, and the number of retained objects from 8,000 to 4,000.
Remove the SFTP file storage backend. Downstream users can use either
sshfs (which is what Danbooru now uses in production) or rclone instead.
The Ruby SFTP gem was much slower than sshfs.
Move all the code for defining tag categories from the config file to
TagCategory. It didn't belong in the config because it's not possible to
add new tag categories purely in the config without editing other things
like the CSS.
Also change it so that tag colors are hardcoded in the CSS instead of
generated using ERB. Generating the CSS with ERB meant that the Docker
build had to recompile the CSS on every commit, even when it hadn't
changed, because the CSS depended on Ruby code outside of it, and we
couldn't guarantee that that code hadn't changed.
Try to optimize certain types of common slow searches:
* Searches for mutually-exclusive tags (e.g. `1girl multiple_girls`,
`touhou solo -1girl -1boy`)
* Relatively large tags that are heavily skewed towards old posts
(e.g. lucky_star, haruhi_suzumiya_no_yuuutsu, inazuma_eleven_(series),
imageboard_desourced).
* Mid-sized tags in the <30k post range that Postgres thinks are
big enough for a post id index scan, but for which a tag index scan is faster.
The general pattern is Postgres not using the tag index because it
thinks scanning down the post id index would be faster, but it's
actually much slower because it degrades to a full table scan. This
usually happens when Postgres thinks a tag is larger or more common than
it really is. Here we try to force Postgres into using the tag index
when we know the search is small.
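One way to force this, sketched below with hypothetical names
(`estimated_result_count`, `SMALL_SEARCH_THRESHOLD`): materialize the matching ids
through the tag index first, which acts as an optimizer fence so Postgres can't
switch to the post id index:

    # Hypothetical sketch: when tag counts suggest the search returns few
    # posts, resolve it through the tag index and sort the small id list,
    # instead of letting Postgres walk the post id index and filter row
    # by row (which degrades to a full table scan).
    def optimized_search(tags, limit)
      if estimated_result_count(tags) < SMALL_SEARCH_THRESHOLD
        ids = Post.tag_match(tags).pluck(:id) # resolved via the tag index
        Post.where(id: ids).order(id: :desc).limit(limit)
      else
        Post.tag_match(tags).order(id: :desc).limit(limit)
      end
    end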
One case that is still slow is `2girls -multiple_girls`. This returns no
results, but we can't know that without searching all of `2girls`. The
general case is searching for `A -B` where A is a subset of B and A and B
are both large tags.
Hopefully fixes #581, #654, #743, #1020, #1039, #1421, #2207, #4070,
#4337, #4896, and various other issues raised over the years regarding
slow searches.
Hardcode the list of nondisposable email providers instead of making it
a config option. Also add a few new providers.
This was previously a config option to keep it secret, but there's not
much need for secrecy here.
A Restricted user's email must be on this list to unrestrict their
account. If a user is Restricted and their email is not in this list,
then it's assumed to be disposable and can't be used to unrestrict their
account even if they verify their email address.
Remove StorageManager::Hybrid and StorageManager::Match. These were used
to store uploads on different servers based on the post ID or file
sample type. This is no longer used in production because in hindsight
it's a lot more difficult to manage uploads when they're fragmented
across different servers.
If you need this, you can do tricks with network filesystems to get the
same effect. For example, if you want to store some files on server A
and others on server B, then mount servers A and B as network
filesystems (with e.g. sshfs, Samba, NFS, etc), and use symlinks to
point subdirectories at either server A or B.
Add support for using a proxy for HTTP requests. Only used for external
requests, such as downloading files or talking to source sites such as
Pixiv or Twitter, not for internal requests, such as talking to IQDB or
Reportbooru.
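A hypothetical example of turning it on; the `http_proxy` option name is an
assumption (check the default config for the real name):

    # config/danbooru_local_config.rb
    def http_proxy
      "http://proxy.internal:3128"
    end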
* Add README files to several directories in app/ giving a brief
overview of some parts of Danbooru's architecture.
* Add documentation for files in config/.
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible if IQDB is down; the add
image commands will just get queued up.
* Fix a bug where regenerating an image's thumbnails didn't regenerate
IQDB, because IQDB silently ignored add image commands when the image
already existed in the database.
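A sketch of the delayed-job approach, with hypothetical job and client names:

    class IqdbAddImageJob < ApplicationJob
      # If IQDB is down, retry later; uploads still succeed and the add
      # image commands just queue up until IQDB comes back.
      retry_on Errno::ECONNREFUSED, wait: 5.minutes, attempts: 10

      def perform(post)
        IqdbClient.new.add_image(post) # hypothetical API client
      end
    end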
Fix an issue where the New Relic agent always started in the production
environment, even when a license key wasn't configured.
Also make the New Relic agent log to stdout instead of log/newrelic_agent.log.
Bug: if someone ran the server with RAILS_ENV=production, but tried to
access the site under http://, then logging in didn't work. This was
because we set the `secure` flag on cookies when running in the
production environment, because we assumed that in production you were
using HTTPS. If you weren't using HTTPS, then the `secure` flag
prevented session cookies from being sent under http://.
The default now is to use http:// instead of https:// for the
`canonical_url` option.
If you run a Danbooru instance, and you use HTTPS, you will have to
change the `canonical_url` config option to "https://www.mybooru.com".
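For example, in the local config:

    # config/danbooru_local_config.rb
    def canonical_url
      "https://www.mybooru.com"
    end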
Stop Docker containers and development installs that don't have Redis
installed from throwing errors about failing to connect to Redis.
Downstream boorus that do use Redis will need to uncomment this line or
set `redis_url` manually in their config to enable Redis again.
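For example, assuming a local Redis on the default port:

    # config/danbooru_local_config.rb
    def redis_url
      "redis://localhost:6379"
    end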
Automatically generate a random secret key for `Danbooru.config.secret_key_base`
if no key is specified.
This is so that you can run Danbooru in a Docker container with zero
configuration.
This removes support for the ~/.danbooru/secret_token file and the
SECRET_TOKEN environment variable. If you used either one of these, you
must copy the value either to DANBOORU_SECRET_KEY_BASE in .env.local, or to
`secret_key_base` in config/danbooru_local_config.rb.
    # .env.local
    DANBOORU_SECRET_KEY_BASE=<value>

    # config/danbooru_local_config.rb
    def secret_key_base
      "<value>"
    end
Generate image URLs relative to the site's canonical URL instead of
relative to the domain of the current request.
This means that all subdomains of Danbooru - safebooru.donmai.us,
shima.donmai.us, saitou.donmai.us, and kagamihara.donmai.us - will use
image URLs from https://danbooru.donmai.us, instead of from the current
domain.
The main reason we did this before was so that we could generate either
http:// or https:// image URLs, depending on whether the current request
was HTTP or HTTPS, back when we tried to support both at the same time.
Now we support only HTTPS in production, so there's no need for this. It
was also pretty hacky, since it required storing the URL of the current
request in a per-request global variable in `CurrentUser`.
This also improves caching slightly, since users of safebooru.donmai.us
will receive cached images from danbooru.donmai.us.
Downstream boorus should make sure that the `canonical_url` and
`storage_manager` config options are set correctly. If you don't support
https:// in development, you should make sure to set the canonical_url
option to http:// instead of https://.
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which is needed by find_each).
* Version tables are excluded because dumping them every day is
inefficient; streaming inserts should be used instead.
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json