Add a system for upgrading accounts using upgrade codes. Users purchase
an upgrade code off-site then redeem it on-site to upgrade their account
to Gold. Upgrade codes are randomly pre-generated and are one time use
only. Codes have enough randomness that guessing a code is infeasible.
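As a minimal sketch of how such codes might work (the model, columns, and level constant here are assumptions, not the actual implementation):

class UpgradeCode < ApplicationRecord
  belongs_to :redeemer, class_name: "User", optional: true

  # Pre-generate a code with enough randomness that guessing one is infeasible.
  def self.generate!
    create!(code: SecureRandom.base58(24))
  end

  # Redeem the code exactly once, inside a row lock to prevent double redemption.
  def redeem!(user)
    with_lock do
      raise "code already redeemed" if redeemed_at.present?
      user.update!(level: User::Levels::GOLD)
      update!(redeemer: user, redeemed_at: Time.zone.now)
    end
  end
end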
Add options to disable comments, the forum, and autocomplete. This is
for personal boorus and potentially for safe mode. Note that disabling
the forum may cause difficulties with creating and approving BURs.
Disabling comments and the forum merely hides them from most areas,
rather than completely removing them.
* Remove the default list of blocked tags in safe mode.
* Change it so that tags that are blocked in safe mode are filtered out
at the database level rather than at the html level.
Factor out the Stripe code from the UserUpgrade class. Introduce a new
PaymentTransaction abstract class that represents a payment with some
payment processor, and a PaymentTransaction::Stripe class that
implements transactions with Stripe.
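Roughly, the new layout looks like this (method names and the user_upgrade accessors are illustrative, not the exact interface):

class PaymentTransaction
  # Subclasses implement the actual calls to their payment processor.
  def create_checkout!(user_upgrade)
    raise NotImplementedError
  end

  class Stripe < PaymentTransaction
    def create_checkout!(user_upgrade)
      ::Stripe::Checkout::Session.create(
        mode: "payment",
        line_items: [{ price: user_upgrade.stripe_price_id, quantity: 1 }],
        success_url: user_upgrade.success_url,
        cancel_url: user_upgrade.cancel_url,
      )
    end
  end
end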
Note that we can't completely eliminate Stripe even though we no longer
accept payments with it because we still need to be able to look up old
payments in Stripe.
Support grabbing the full image for Tinami uploads, rather than the sample.
Getting the full image requires making a request like this:
curl -X POST \
-H 'Referer: https://www.tinami.com/' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Cookie: Tinami2SESSID=<redacted>;' \
--data-raw 'action_view_original=true&cont_id=1087268&ethna_csrf=<redacted>' \
https://www.tinami.com/view/1087268
Then scraping the <img> tag from the resulting HTML page.
If the post has multiple images, then we need to scrape and pass the
`sub_id` of the image too.
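The scraping step might look something like this (a sketch using Nokogiri; the real code would use a more specific selector to pick out the full-size image):

require "nokogiri"

# response_body is the HTML returned by the POST request above.
html = Nokogiri::HTML(response_body)
img = html.at("img")
full_image_url = img["src"] if img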
Fixes #2818.
NicoSeiga changed it so that on every login, you must enter a 2FA code
sent by email. This broke the NicoSeiga strategy. The fix is to just use
a static session cookie instead (and hope it doesn't expire, and isn't
tied to an IP).
The `nico_seiga_login` and `nico_seiga_password` config settings have
been removed from config/danbooru_default_config.rb and replaced by
`nico_seiga_user_session`. If you run your own Danbooru instance, you
will have to update your config file manually.
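Something like the following, with the `user_session` cookie taken from a logged-in NicoSeiga session:

# config/danbooru_local_config.rb
def nico_seiga_user_session
  "<user_session cookie value>"
end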
Do a few micro-optimizations to reduce the number of memory allocations
during thumbnail generation.
This commit, combined with freezing string literals in a7dc05 and
67b961, reduces the number of allocations on the front page from 180,000
to 150,000, and the number of retained objects from 8,000 to 4,000.
Remove the SFTP file storage backend. Downstream users can use either
sshfs (which is what Danbooru now uses in production) or rclone instead.
The Ruby SFTP gem was much slower than sshfs.
Move all the code for defining tag categories from the config file to
TagCategory. It didn't belong in the config because it's not possible to
add new tag categories purely in the config without editing other things
like the CSS.
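As a rough sketch (the constant name is an assumption; the ids match the existing categories):

module TagCategory
  # Category ids stay fixed; adding a new category also requires CSS changes.
  MAPPING = {
    "general"   => 0,
    "artist"    => 1,
    "copyright" => 3,
    "character" => 4,
    "meta"      => 5,
  }
end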
Also change it so that tag colors are hardcoded in the CSS instead of
generated using ERB. Generating the CSS in ERB meant that the Docker
build had to recompile the CSS on every commit, even when it didn't
change, because it relied on Ruby code outside the CSS that we couldn't
guarantee didn't change.
Try to optimize certain types of common slow searches:
* Searches for mutually-exclusive tags (e.g. `1girl multiple_girls`,
`touhou solo -1girl -1boy`)
* Relatively large tags that are heavily skewed towards old posts
(e.g. lucky_star, haruhi_suzumiya_no_yuuutsu, inazuma_eleven_(series),
imageboard_desourced).
* Mid-sized tags in the <30k post range that Postgres thinks are
big enough for a post id index scan, but where a tag index scan is
actually faster.
The general pattern is Postgres not using the tag index because it
thinks scanning down the post id index would be faster, but it's
actually much slower because it degrades to a full table scan. This
usually happens when Postgres thinks a tag is larger or more common than
it really is. Here we try to force Postgres into using the tag index
when we know the search is small.
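As a rough illustration of the idea only (this uses a hypothetical posts_tags join table, not Danbooru's actual schema): intersect starting from the smallest tag so the query begins with a tag index scan instead of walking down the post id index.

# tag_names is a list of tag names we already know resolve to small tags.
def force_tag_index_search(tag_names)
  tags = Tag.where(name: tag_names).order(post_count: :asc)

  # Start from the rarest tag; when we know the result set is small, the tag
  # index is the right starting point even when the planner disagrees.
  tags.reduce(Post.all) do |relation, tag|
    relation.where(id: PostsTag.select(:post_id).where(tag_id: tag.id))
  end
end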
One case that is still slow is `2girls -multiple_girls`. This returns no
results, but we can't know that without searching all of `2girls`. The
general case is searching for `A -B` where A is a subset of B and A and B
are both large tags.
Hopefully fixes #581, #654, #743, #1020, #1039, #1421, #2207, #4070,
#4337, #4896, and various other issues raised over the years regarding
slow searches.
Hardcode the list of nondisposable email providers instead of making it
a config option. Also add a few new providers.
This was previously a config option to keep it secret, but there's not
much need for secrecy here.
A Restricted user's email must be on this list to unrestrict their
account. If a user is Restricted and their email is not in this list,
then it's assumed to be disposable and can't be used to unrestrict their
account even if they verify their email address.
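Conceptually, the check is just a domain lookup against the hardcoded list (names and entries here are illustrative):

NONDISPOSABLE_DOMAINS = %w[gmail.com yahoo.com hotmail.com outlook.com protonmail.com]

def nondisposable_email?(address)
  domain = address.to_s.split("@").last.to_s.downcase
  NONDISPOSABLE_DOMAINS.include?(domain)
end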
Remove StorageManager::Hybrid and StorageManager::Match. These were used
to store uploads on different servers based on the post ID or file
sample type. This is no longer used in production because in hindsight
it's a lot more difficult to manage uploads when they're fragmented
across different servers.
If you need this, you can do tricks with network filesystems to get the
same effect. For example, if you want to store some files on server A
and others on server B, then mount servers A and B as network
filesystems (with e.g. sshfs, Samba, NFS, etc), and use symlinks to
point subdirectories at either server A or B.
Add support for using a proxy for HTTP requests. Only used for external
requests, such as downloading files or talking to source sites such as
Pixiv or Twitter, not for internal requests, such as talking to IQDB or
Reportbooru.
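Downstream instances would set something like this in their config (the option name here is an assumption):

# config/danbooru_local_config.rb
def http_proxy
  # Used only for external requests (file downloads, source sites), not for
  # internal services like IQDB or Reportbooru.
  "http://proxy.example.com:3128"
end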
* Add README files to several directories in app/ giving a brief
overview of some parts of Danbooru's architecture.
* Add documentation for files in config/.
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible even if IQDB is down; the add
image commands will just get queued up (see the sketch after this list).
* Fix a bug where regenerating an image's thumbnails didn't regenerate
IQDB, because IQDB silently ignored add image commands when the image
already existed in the database.
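The delayed add-image job might look roughly like this (job and client names are assumptions):

class IqdbAddPostJob < ApplicationJob
  # If IQDB is down, the job fails and is retried later, so uploads keep working.
  retry_on StandardError, wait: :exponentially_longer, attempts: 10

  def perform(post)
    IqdbClient.new.add_post(post)
  end
end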
Fix an issue where the New Relic agent always started in the production
environment, even when a license key wasn't configured.
Also make the New Relic agent log to stdout instead of log/newrelic_agent.log.
Bug: if someone ran the server with RAILS_ENV=production, but tried to
access the site under http://, then logging in didn't work. This was
because we set the `secure` flag on cookies when running in the
production environment, because we assumed that in production you were
using HTTPS. If you weren't using HTTPS, then the `secure` flag
prevented session cookies from being sent under http://.
The default now is to use http:// instead of https:// for the
`canonical_url` option.
If you run a Danbooru instance and you use HTTPS, you will have to
change the `canonical_url` config option to "https://www.mybooru.com".
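In config/danbooru_local_config.rb that would be:

# config/danbooru_local_config.rb
def canonical_url
  "https://www.mybooru.com"
end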
Fixes Docker containers and development installs that don't have Redis
installed from throwing errors about failing to connect to Redis.
Downstream boorus that do use Redis will need to uncomment the `redis_url`
line or set it manually in their config to enable Redis again.
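That is, something like:

# config/danbooru_local_config.rb
def redis_url
  "redis://localhost:6379"
end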
Automatically generate a random secret key for `Danbooru.config.secret_key_base`
if no key is specified.
This is so that you can run Danbooru in a Docker container with zero
configuration.
This removes support for the ~/.danbooru/secret_token file and the
SECRET_TOKEN environment variable. If you used either one of these, you
must copy the value either to DANBOORU_SECRET_KEY_BASE in .env.local, or to
`secret_key_base` in config/danbooru_local_config.rb.
# .env.local
DANBOORU_SECRET_KEY_BASE=<value>

# config/danbooru_local_config.rb
def secret_key_base
  "<value>"
end
Generate image URLs relative to the site's canonical URL instead of
relative to the domain of the current request.
This means that all subdomains of Danbooru - safebooru.donmai.us,
shima.donmai.us, saitou.donmai.us, and kagamihara.donmai.us - will use
image URLs from https://danbooru.donmai.us, instead of from the current
domain.
The main reason we did this before was so that we could generate either
http:// or https:// image URLs, depending on whether the current request
was HTTP or HTTPS, back when we tried to support both at the same time.
Now we support only HTTPS in production, so there's no need for this. It
was also pretty hacky, since it required storing the URL of the current
request in a per-request global variable in `CurrentUser`.
This also improves caching slightly, since users of safebooru.donmai.us
will receive cached images from danbooru.donmai.us.
Downstream boorus should make sure that the `canonical_url` and
`storage_manager` config options are set correctly. If you don't support
https:// in development, you should make sure to set the canonical_url
option to http:// instead of https://.
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond the year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which is needed by find_each).
* Version tables are excluded because dumping them every day is
inefficient; streaming insertions should be used instead.
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json
Replace the Google map on the IP address show page with a Bing map. Bing
doesn't require an API key, which makes it easier to deploy. The Google
Maps API requires you to whitelist the IP addresses and domains you
plan to use with your API key, which is inconvenient for development
because it means maps won't display unless you whitelist your
development IPs.
Fix the Pixiv API no longer working by rewriting the Pixiv strategy to
use the Ajax API instead of the mobile API.
Before we could authenticate in the mobile API by using the OAuth 2.0
grant_type=password authentication flow. This no longer works. Now it
requires logging in through an HTML page, which is protected by Google
reCAPTCHA. This makes using the mobile API infeasible.
Instead we switch to the Ajax API, which only needs a PHPSESSID to
authenticate. This can be obtained by logging in manually and using the
devtools to extract the cookie.
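Downstream instances would then set the cookie in their config, along these lines (the setting name is an assumption):

# config/danbooru_local_config.rb
def pixiv_phpsessid
  "<PHPSESSID cookie value>"
end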
This also temporarily removes support for Pixiv novels. This should be
moved to a separate source strategy.
Remove the rule that Members could only post 2 bumping comments per
hour.
This was frequently misunderstood as meaning that Members could only
post 2 comments per hour. In fact, Members could post an unlimited
number of comments per hour, but the rest of their comments had to be
non-bumping. The error message we showed to users was misleading. Even
our own code misunderstood what this did when describing the config
option.
Gold users also weren't subject to this limit, which was unfair since
Gold users aren't any better at commenting than regular users. The fact
that a large number of users already ignored bump limits and nobody
really noticed indicates that the limit was unnecessary.
Refactor page limits to a) be explicitly listed in the User class (not
hidden away in the Danbooru config) and b) explicitly depend on the
CurrentUser (not implicitly by way of Danbooru.config.max_numbered_pages).
* Refactor various user limit methods from instance methods to class
methods so they can be used outside the context of a single user.
* Remove the Danbooru.config.base_tag_query_limit option.
Add the following bank redirect payment methods:
* https://stripe.com/docs/payments/bancontact
* https://stripe.com/docs/payments/eps
* https://stripe.com/docs/payments/giropay
* https://stripe.com/docs/payments/ideal
* https://stripe.com/docs/payments/p24
These methods are used in Austria, Belgium, Germany, the Netherlands,
and Poland.
These methods require payments to be denominated in EUR, which means we
have to set prices in both USD and EUR, and automatically detect both
the currency to use and the payment methods to offer based on the
user's country. We do this by using Cloudflare's CF-IPCountry header to
geolocate the user.
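The detection amounts to something like this (the country list and method name are illustrative, not the actual implementation):

EUR_COUNTRIES = %w[AT BE DE NL PL]

def currency_for(request)
  country = request.headers["CF-IPCountry"].to_s.upcase
  EUR_COUNTRIES.include?(country) ? "eur" : "usd"
end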
This also switches to using prices and products defined in Stripe
instead of generating them on the fly when creating the checkout.