Parse the user agent and log to NewRelic, under the `user.bot` request
attribute, whether the request appears to come from a known bot or a
human. This is so
that known bots can be filtered out of search traffic analytics. Bots
and search crawlers make up a significant portion of search traffic.
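As a rough sketch, assuming a Rails before_action and the newrelic_rpm
gem (`NewRelic::Agent.add_custom_attributes` is the agent's real API for
custom attributes; the regex and method name here are illustrative):

    # Matches a handful of common crawlers; the real list is longer.
    BOT_REGEX = /Googlebot|bingbot|Baiduspider|YandexBot|DuckDuckBot|curl|wget/i

    def log_bot_status
      is_bot = request.user_agent.to_s.match?(BOT_REGEX)

      # Tag the current NewRelic transaction so bot traffic can be
      # filtered out of search analytics.
      NewRelic::Agent.add_custom_attributes({ "user.bot" => is_bot })
    end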
Fix `PostVersionTest` being defined in two different places, which broke
the test runner if you tried to run the system tests at the same time as
the regular tests.
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDB frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible even if IQDB is down; the add
image commands will just get queued up (see the sketch after this list).
* Fix a bug where regenerating an image's thumbnails didn't regenerate
IQDB, because IQDB silently ignored add image commands when the image
already existed in the database.
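As a sketch of the delayed add-image path (the job, `IqdbClient`, and
`preview_file_path` names are hypothetical, assuming ActiveJob):

    # Enqueued at upload time, so uploads still succeed while IQDB is
    # down; the add image commands just pile up in the queue.
    class IqdbAddImageJob < ApplicationJob
      retry_on StandardError, wait: :exponentially_longer, attempts: 10

      def perform(post_id)
        post = Post.find(post_id)

        # Talk to the IQDB instance directly, not via SQS or a
        # frontend server.
        IqdbClient.new.add_image(post.id, post.preview_file_path)
      end
    end

    IqdbAddImageJob.perform_later(post.id)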
When a user tries to change their email, redirect them to the confirm
password page (like Github's sudo mode) instead of having them re-enter
their password on the change email page. This is the same thing we do
when a user updates their API keys. This way we can use the same
confirm password authentication flow for everything that needs a
password.
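A minimal sketch of the shared flow (`requires_reauthentication` and
`confirm_password_path` are hypothetical names; assume the confirm
password page stores an epoch timestamp in the session on success):

    # Sudo-mode check: sensitive pages bounce through the confirm
    # password page instead of embedding their own password field.
    REAUTH_WINDOW = 60.minutes

    def requires_reauthentication
      # session[:last_authenticated_at] is an epoch timestamp set by
      # the confirm password page.
      return if session[:last_authenticated_at].to_i > REAUTH_WINDOW.ago.to_i

      # Remember where the user was going, then ask for their password.
      redirect_to confirm_password_path(url: request.fullpath)
    end

    # Anything that needs a password opts in the same way:
    class EmailsController < ApplicationController
      before_action :requires_reauthentication, only: [:edit, :update]
    end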
Flash is dead. It's no longer supported by browsers, it's not
well-supported by emulators, and only two Flash posts were uploaded in
the last year anyway. Old Flash files will continue to exist, but new
Flash uploads will no longer be allowed.
Allow admins to remove comment votes by other users. This is done by
clicking the comment score to get to the comment vote list, then
clicking the Remove button on every vote.
Make it so that when a user removes their own vote, the vote is soft
deleted (the is_deleted flag is set) instead of hard deleted.
Changes:
* Add is_deleted flag to comment votes.
* Relax uniqueness constraint so you can have multiple deleted votes on
the same comment. You can still only have one active vote on the comment.
* Add `soft_delete` method to Deletable concern (see the sketch after
this list).
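A sketch of the constraint and the concern method, assuming PostgreSQL
partial unique indexes (the migration details are illustrative):

    class RelaxCommentVoteUniqueness < ActiveRecord::Migration[6.1]
      def change
        add_column :comment_votes, :is_deleted, :boolean, null: false, default: false

        # Only *active* votes are unique per (user, comment); deleted
        # votes can pile up freely.
        remove_index :comment_votes, column: [:user_id, :comment_id]
        add_index :comment_votes, [:user_id, :comment_id], unique: true,
                  where: "is_deleted = false"
      end
    end

    # In the Deletable concern:
    def soft_delete
      update!(is_deleted: true)
    end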
When a POST request returns a 302 redirect, follow the redirect with a
GET request instead of with a POST request.
HTTP standards leave it unspecified whether a POST request that returns
a 302 redirect should be followed with a GET or with a POST. A GET is
what most browsers use, which means it's what most servers expect.
Fixes the /tagme Discord command not working because when we uploaded
the image to DeepDanbooru, the POST request returned a 302 redirect,
which the server expected us to follow with a GET, not with a POST.
Ref:
* https://stackoverflow.com/questions/17605915/what-is-the-correct-behavior-expected-of-an-http-post-302-redirect-to-get
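A condensed sketch of the rule with Net::HTTP (the actual client code
differs; this just shows the 302-to-GET behavior):

    require "net/http"
    require "uri"

    def post_then_follow(uri, body)
      response = Net::HTTP.post(uri, body)

      if response.is_a?(Net::HTTPRedirection)
        # Follow the redirect the way browsers do: with a GET, not by
        # replaying the POST at the new location.
        location = URI.join(uri.to_s, response["Location"])
        Net::HTTP.get_response(location)
      else
        response
      end
    end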
Fix bug reported in forum #182766:
The Download button on the posts page does not respect the Disable
tagged filenames user setting. Tags are included in the filename when
clicking the Download button even when the Disable tagged filenames
setting is set to Yes. Right click -> Save As on the image still
respects the setting.
Always store original files in `public/data/original` instead of directly in
`public/data`. Previously this was optional and defaulted to off.
Downstream boorus will need to either move all images in the
`public/data` directory to `public/data/original`, or symlink the
`public/data/original` directory to the toplevel `public/data` directory:
    ln -s . /path/to/danbooru/public/data/original
This is to simplify the file layout. This option existed because in the past we
stored original files in different locations on different servers (for
no particular reason).
Changes to bans:
* Change the `expires_at` field to `duration`.
* Make moderators choose from a fixed set of standard ban lengths,
instead of allowing arbitrary ban lengths.
* List `duration` in seconds in the /bans.json API.
* Dump bans to BigQuery.
Note that some old bans have a negative duration. This is because their
expiration date was before their creation date: when bans were migrated
to Danbooru 2 in 2013, the original ban creation dates were lost.
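The conversion itself presumably amounts to something like this
(hypothetical migration; durations come out negative exactly when
expires_at predates created_at):

    class ConvertBanExpiresAtToDuration < ActiveRecord::Migration[6.1]
      def up
        add_column :bans, :duration, :interval

        # Negative for the 2013-era migrated bans whose original
        # creation dates were lost.
        execute "UPDATE bans SET duration = expires_at - created_at"

        remove_column :bans, :expires_at
      end
    end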
* Export daily public database dumps to BigQuery and Google Cloud Storage.
* Only data visible to anonymous users is exported. Some tables have
null or missing fields because of this.
* The bans table is excluded because some bans have an expires_at
timestamp set beyond year 9999, which BigQuery doesn't support.
* The favorites table is excluded because it's too slow to dump (it
doesn't have an id index, which is needed by find_each; see the dump
sketch below).
* Version tables are excluded because dumping them every day is
inefficient; streaming inserts should be used instead.
Links:
* https://console.cloud.google.com/bigquery?project=danbooru1
* https://console.cloud.google.com/storage/browser/danbooru_public
* https://storage.googleapis.com/danbooru_public/data/posts.json
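A skeletal version of the per-table dump, assuming newline-delimited
JSON pushed to GCS with the google-cloud-storage gem (table and file
names are illustrative):

    require "google/cloud/storage"

    # find_each batches by primary key, which is why tables without an
    # id index (favorites) are too slow to dump this way.
    def dump_table(model, path)
      File.open(path, "w") do |file|
        model.find_each do |record|
          file.puts(record.to_json)  # anonymous-visible fields only, in practice
        end
      end
    end

    dump_table(Post, "posts.json")

    storage = Google::Cloud::Storage.new(project_id: "danbooru1")
    storage.bucket("danbooru_public").create_file("posts.json", "data/posts.json")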
If we detect that the session cookie has expired (by the presence of the
`#login_illust` element on the page), then clear the cached session
cookie. The current source fetch will still fail, but the next fetch
will try to log in again and hopefully succeed.
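A sketch of the check, assuming Nokogiri and a hypothetical `Cache`
cookie store:

    require "nokogiri"

    page = Nokogiri::HTML(html)

    # The #login_illust element only appears on the login page, i.e.
    # when our cached session cookie has expired.
    if page.at_css("#login_illust")
      # This fetch still fails, but the next one will log in again
      # from scratch instead of reusing the stale cookie.
      Cache.delete("session_cookie")
    end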