Fix strategy_should_work to not perform API calls outside of `should`
blocks. This could cause the whole test suite to crash if a source test
raised an unexpected exception.
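A minimal sketch of the fix (the helper shape here is illustrative, not
the exact implementation): the strategy's API calls happen inside the
`should` block, so an unexpected exception fails a single test instead
of crashing the whole suite at load time.

    def strategy_should_work(url)
      context "The source strategy for #{url}" do
        should "work" do
          # The network calls happen here, inside the test body.
          strategy = Sources::Strategies.find(url)
          assert_not_nil(strategy.image_urls)
        end
      end
    end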
* Fixed a bug where manga posts with a single tag would raise an error
* Fixed a bug where dic.nicovideo.jp/oekaki posts weren't uploadable due
to SSL issues
* Added support for more manga corner cases
Foundation changed their HTML page format and we can no longer scrape
the image URL directly from the page. Instead we have to build it based
on API data.
Factor out the Stripe code from the UserUpgrade class. Introduce a new
PaymentTransaction abstract class that represents a payment with some
payment processor, and a PaymentTransaction::Stripe class that
implements transactions with Stripe.
Note that we can't completely eliminate Stripe even though we no longer
accept payments with it because we still need to be able to look up old
payments in Stripe.
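A rough sketch of the resulting class shape (method names are
illustrative):

    class PaymentTransaction
      # Shared payment fields and lookup logic for any payment processor.
      def create_checkout!
        raise NotImplementedError
      end
    end

    class PaymentTransaction::Stripe < PaymentTransaction
      # Still needed even though we no longer accept new Stripe payments,
      # so that old payments can be looked up in Stripe.
      def create_checkout!
        # Create the checkout with the Stripe API here.
      end
    end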
When the artist name couldn't be found for a Newgrounds URL, for example
for `https://www.newgrounds.com/dump/item`, then the `profile_url`
method erroneously returned `https://.newgrounds.com`. This led to an
error later on when the artist finder tried to parse the invalid URL.
Also fix `strategy_should_work` to test that the profile URL is a valid
URL, and to not try to download the file when `image_urls` is empty.
Remove the `image_url` method from source strategies. This method would
return only the first image if a source had multiple images. The
`image_urls` method should be used instead. Tests were the main place
that still used `image_url` instead of `image_urls`.
Also make post replacements return an error if replacing with a source
that contains multiple images, instead of just blindly replacing the
post with the first image in the source.
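A hedged sketch of the new replacement check (method and error names
are assumptions):

    def validate_source_has_single_image
      strategy = Sources::Strategies.find(replacement_url)

      # Error out rather than blindly replacing with the first image.
      if strategy.image_urls.size > 1
        errors.add(:replacement_url, "source returned multiple images")
      end
    end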
Fixes a bug where the Foundation source strategy failed because http.rb
automatically sent a `Content-Length: 0` header with all GET requests,
which caused Foundation to return a 400 Bad Request error. This behavior
was fixed in http.rb 5.x.
http.rb 5.x has a breaking change where it now includes the request object
inside the response object, which we have to handle in a few places.
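For example, code that builds responses by hand in tests now has to
supply the request too (a hedged illustration of the http.rb 5.x API):

    require "http"

    request = HTTP::Request.new(verb: :get, uri: "https://example.com")
    response = HTTP::Response.new(status: 200, version: "1.1", body: "",
                                  request: request)

    response.request.uri # the request is now reachable from the response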
Allow uploading multiple files from your computer at once.
The limit is 100 files at once. There is still a 50MB size limit that
applies to the whole upload; this limit is enforced at the Nginx level.
The upload widget no longer shows a thumbnail preview of the uploaded
file. This is because there isn't room for it in a multi-file upload,
and because the next page will show a preview anyway after the files are
uploaded.
Direct file uploads are processed synchronously, so they may be slow.
API change: the `POST /uploads` endpoint now expects the param to be
`upload[files][]`, not `upload[file]`.
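That is, the form parameters change roughly like this:

    # Before: a single file per upload.
    { "upload[file]" => file }

    # After: an array of up to 100 files per upload.
    { "upload[files][]" => [file1, file2] }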
* Save the filename for files uploaded from disk. This could be used in
the future to extract source data if the filename is from a known site.
* Save both the image URL and the page URL for files uploaded from
source. This is needed for multi-file uploads. The image URL is the
URL of the file actually downloaded from the source. This can be
different from the URL given by the user, if the user tried to upload
a sample URL and we automatically changed it to the original URL. The
page URL is the URL of the page containing the image. We don't always
know this, for example if someone uploads a Twitter image without the
bookmarklet, then we can't find the page URL.
* Add a fix script to backfill URLs for existing uploads. For file
uploads, the filename will be set to "unknown.jpg". For source
uploads, we fetch the source data again to get the image and page
URLs. This may fail for uploads that have been deleted from the
source since uploading.
* Fix broken upload tests.
* Fix uploads to return an error if both a file and a source are given
at the same time, or if neither is given. Also fix the error message
in this case so that it doesn't include "base" at the start of the string.
* Fix uploads to percent-encode any Unicode characters in the source URL.
* Add a max filesize validation to media assets.
Fix the test suite failing when trying to run it in the default state
with no config file or API keys configured. Most source sites require
API keys or login credentials to be set in order to work. Skip these
tests when credentials aren't configured.
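A minimal sketch of the guard (the helper name and config keys are
examples):

    def skip_unless_configured(*keys)
      missing = keys.select { |key| Danbooru.config.send(key).blank? }
      skip "#{missing.join(", ")} not configured" if missing.present?
    end

    # Skip Pixiv source tests unless Pixiv credentials are set.
    setup { skip_unless_configured(:pixiv_login, :pixiv_password) }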
Add a model for storing image and video metadata for uploaded files.
Metadata is extracted using ExifTool. You will need to install ExifTool
after this commit. ExifTool 12.22 is the minimum required version
because we use the `--binary` option, which was added in this release.
The MediaMetadata model is separate from the MediaAsset model because
some files contain tons of metadata, and most of it is non-essential.
The MediaAsset model represents an uploaded file and contains essential
metadata, like the file's size and type, while the MediaMetadata model
represents all the other non-essential metadata associated with a file.
Metadata is stored as a JSON column in the database.
ExifTool returns all the file's metadata, not just the EXIF metadata.
EXIF is only one of several types of image metadata, which is why we
call it MediaMetadata instead of EXIFMetadata.
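A hedged sketch of the split and the extraction step (model and method
names are illustrative):

    class MediaMetadata < ApplicationRecord
      belongs_to :media_asset
      # `metadata` is the JSON column holding everything ExifTool returns.
    end

    def extract_metadata(path)
      # -json gives machine-readable output covering all metadata types
      # (EXIF, XMP, IPTC, ...), not just EXIF.
      output = IO.popen(["exiftool", "-json", path], &:read)
      JSON.parse(output).first
    end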
Replace the old IQDB API client with a new client for the new forked
version of IQDB at https://github.com/danbooru/iqdb.
Changes:
* The /iqdb_queries endpoint now returns `hash` and `signature` fields.
The `signature` is the full decoded Haar signature, while the `hash`
is an encoded version of the signature.
* The /iqdb_queries endpoint no longer returns `width` and `height`
fields in the response (these were always 128x128).
* We no longer need the IQDBs frontend server; now we talk to the IQDB
instance directly.
* We no longer send add/remove image commands to IQDB through AWS SQS;
now we send them to IQDB directly. They are sent in a delayed job so
that uploading images is still possible if IQDB is down; the add image
commands will just get queued up (see the sketch after this list).
* Fix a bug where regenerating an image's thumbnails didn't regenerate
IQDB, because IQDB silently ignored add image commands when the image
already existed in the database.
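A sketch of the delayed add-image step described above (job and client
names are assumptions):

    class IqdbAddPostJob < ApplicationJob
      def perform(post)
        # If IQDB is down, this job fails and is retried later; the
        # upload itself has already succeeded.
        IqdbClient.new.add_post(post)
      end
    end

    # Enqueue instead of calling IQDB inline during the upload:
    IqdbAddPostJob.perform_later(post)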
Add a custom Shoulda matcher for testing that a model correctly
normalizes an attribute.
Usage:
    subject { build(:wiki_page) }
    should normalize_attribute(:title).from(" Azur Lane ").to("azur_lane")
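A simplified sketch of how such a matcher can be implemented (the real
implementation may differ):

    class NormalizeAttributeMatcher
      def initialize(attribute)
        @attribute = attribute
      end

      def from(value)
        @from = value
        self
      end

      def to(value)
        @to = value
        self
      end

      def description
        "normalize #{@attribute} from #{@from.inspect} to #{@to.inspect}"
      end

      def matches?(subject)
        subject.send("#{@attribute}=", @from)
        subject.valid? # run the model's normalization callbacks
        subject.send(@attribute) == @to
      end
    end

    def normalize_attribute(attribute)
      NormalizeAttributeMatcher.new(attribute)
    end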
Add the following bank redirect payment methods:
* https://stripe.com/docs/payments/bancontact
* https://stripe.com/docs/payments/eps
* https://stripe.com/docs/payments/giropay
* https://stripe.com/docs/payments/ideal
* https://stripe.com/docs/payments/p24
These methods are used in Austria, Belgium, Germany, the Netherlands,
and Poland.
These methods require payments to be denominated in EUR, which means we
have to set prices in both USD and EUR, and we have to automatically
detect which currency to use based on the user's country. We also have
to automatically detect which payment methods to offer based on the
user's country. We do this by using Cloudflare's CF-IPCountry header to
geolocate the user's country.
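A hedged sketch of the detection logic (the country list comes from the
methods above; the helper name is illustrative):

    # Countries where the bank redirect methods above are offered.
    EUR_BANK_REDIRECT_COUNTRIES = %w[AT BE DE NL PL]

    def checkout_currency(request)
      # Cloudflare sets CF-IPCountry to the visitor's two-letter country code.
      country = request.headers["CF-IPCountry"]
      EUR_BANK_REDIRECT_COUNTRIES.include?(country) ? "eur" : "usd"
    end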
This also switches to using prices and products defined in Stripe
instead of generated on-the-fly when creating the checkout.
This upgrades from the legacy version of Stripe's checkout system to the
new version:
> The legacy version of Checkout presented customers with a modal dialog
> that collected card information, and returned a token or a source to
> your website. In contrast, the new version of Checkout is a smart
> payment page hosted by Stripe that creates payments or subscriptions. It
> supports Apple Pay, Dynamic 3D Secure, and many other features.
Basic overview of the new system:
* We send the user to a checkout page on Stripe.
* Stripe collects payment and sends us a webhook notification when the
order is complete.
* We receive the webhook notification and upgrade the user.
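A minimal sketch of the fulfillment step (the event type and signature
verification come from Stripe's docs; the model lookup is an
assumption):

    def receive_webhook
      # Verify the webhook signature before trusting the event.
      event = Stripe::Webhook.construct_event(
        request.body.read,
        request.headers["Stripe-Signature"],
        signing_secret
      )

      if event.type == "checkout.session.completed"
        # data.object is the completed Checkout Session.
        UserUpgrade.find_by!(stripe_id: event.data.object.id).process_upgrade!
      end
    end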
Docs:
* https://stripe.com/docs/payments/checkout
* https://stripe.com/docs/payments/checkout/migration#client-products
* https://stripe.com/docs/payments/handling-payment-events
* https://stripe.com/docs/payments/checkout/fulfill-orders
- Allow for different user levels by checking the CurrentUser variable
- Allow for URL parameters other than the search parameters
-- This is needed on some controllers such as the comments controller
- Allow for delayed execution of lambda functions for item input
-- This is needed because the updated_at field isn't correct yet at the
time the input is sent to the function
-- Without this, the correct ordering couldn't be tested
Add a `respond_to_search` test helper for concisely testing that a
controller's index action correctly responds to a search. Usage:
    # Tests that `/tags.json?search[name]=touhou` returns the `touhou` tag.
    setup { @touhou = create(:tag, name: "touhou") }
    should respond_to_search(name: "touhou").with { @touhou }
These exceptions are no longer raised now that we've switched from
HTTParty to http.rb. Swallowing unexpected exceptions during testing was
a bad practice anyway.
Remove the Downloads::File class. Move download methods to
Danbooru::Http instead. This means that:
* HTTParty has been replaced with http.rb for downloading files.
* Downloading is no longer tightly coupled to source strategies. Before,
Downloads::File tried to automatically look up the source and download
the full-size image instead if we gave it a sample URL. Now we can do
plain downloads without source strategies altering the URL (see the
example after this list).
* The Cloudflare Polish check has been changed from checking for a
Cloudflare IP to checking for the CF-Polished header. Looking up the
list of Cloudflare IPs was slow and flaky during testing.
* The SSRF protection code has been factored out so it can be used for
normal http requests, not just for downloads.
* The webmock gem can be removed, since it was only used for stubbing
out certain HTTParty requests in the download tests. It was buggy and
caused certain tests to fail during CI.
* The retriable gem can be removed, since we no longer autoretry failed
downloads. We assume that if a download fails once then retrying
probably won't help.
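Illustrative usage after the refactor (the method name and return value
are assumptions):

    # Downloads exactly the URL given, without consulting source
    # strategies or rewriting sample URLs.
    file = Danbooru::Http.new.download("https://example.com/image.jpg")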
* Combine MissedSearchService, PostViewCountService, and
PopularSearchService into a single ReportbooruService class.
* Use Danbooru::Http for these services instead of HTTParty.
* Fix corrupted image detection. We were shelling out to vips and trying
to grep for error messages, but the error message for JPEG files changed.
Now we load the file in ruby-vips, which raises an error on failure (see
the sketch after this list).
* Don't attempt to redownload corrupted images. If a download completes
without any errors yet the downloaded file is corrupt, then something is
wrong at the source and redownloading is unlikely to help. Let the
upload fail and the user retry if necessary.
* Validate that all uploads are uncorrupted, including files uploaded
from a computer, not just files uploaded from a source.
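A sketch of the ruby-vips check mentioned above; `fail: true` makes the
loader raise on invalid data instead of returning a partial image:

    require "vips"

    def corrupted?(path)
      image = Vips::Image.new_from_file(path, fail: true)
      image.avg # force a full decode to catch files truncated mid-stream
      false
    rescue Vips::Error
      true
    end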
* Fix the pool version SQS service to always be mocked before every
test. Before we had to manually set it up before every test dealing
with pool versions.
* Fix it so that we reconnect to the post/pool version databases before
every test. Before, using $ARCHIVE_DATABASE_URL to set the database URL
failed because environment variables weren't loaded by dotenv yet when
connections were first established.