Comparison:
* Codecov has a simpler integration and a better UI.
* Codeclimate tracks both linter warnings (Rubocop, ESLint) and code
coverage, but its UI for code coverage is worse than Codecov's.
* Codeclimate doesn't support Simplecov 0.18 yet because it doesn't
  understand 0.18's new coverage file format.
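One possible workaround (an assumption on my part, not something this
changelog prescribes) is to pin Simplecov below 0.18 until Codeclimate
supports the new format:

    # Gemfile: pin Simplecov to the last release with the old coverage
    # format (the exact version constraint is an assumption).
    gem "simplecov", "~> 0.17.1", require: false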
Fix the wiki excerpt not appearing when searching for a tag that doesn't
exist in the tag list. This could happen if someone created a wiki for a
tag that has never been used on a post.
* Only show the delete artist button on the artist edit page.
* Only show the delete pool button on the pool edit page.
* Only show the delete wiki button on the wiki edit page.
Makes models with maximum length validations add maxlength attributes
to their form fields. This includes flag reasons, appeal reasons, and
forum topic titles.
Partially fixes #4519 (Add "n/m characters remaining" character counter to the appeal reason).
https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/maxlength
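A minimal sketch of how a maximum length can be read off a model's
validations (the model, attribute, and value are illustrative):

    # Look up the length validator on a model attribute and pull out its
    # :maximum option, which becomes the field's maxlength attribute.
    validator = PostAppeal.validators_on(:reason)
      .grep(ActiveModel::Validations::LengthValidator).first
    maxlength = validator&.options&.dig(:maximum)
    # => used as e.g. <textarea maxlength="..."> on the appeal reason field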
* Refactors DText form fields to use a custom SimpleForm input (sketched
after this list) instead of manually generated HTML. This makes DText
fields use the same markup as normal SimpleForm fields, which lets us
apply browser maxlength validations to DText input fields.
* Fixes @-mention autocomplete only working in comments and forum posts.
Now @-mention autocomplete works in all DText fields, including dmails.
Known bug: it also applies in artist commentary fields, where it shouldn't.
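A minimal sketch of the custom input, assuming it's registered as a
`dtext` input type (the class name and details are illustrative):

    # app/inputs/dtext_input.rb
    class DtextInput < SimpleForm::Inputs::TextInput
      def input(wrapper_options = nil)
        # Merging wrapper options is what lets the normal SimpleForm
        # machinery (including maxlength) apply to DText fields too.
        merged_options = merge_wrapper_options(input_html_options, wrapper_options)
        @builder.text_area(attribute_name, merged_options)
      end
    end

    # Usage in a form: <%= f.input :body, as: :dtext %>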
The image_url method makes a request to `https://seiga.nicovideo.jp/images/source/:image_id`
to see where the URL redirects. Before, we did a GET request, which downloaded
the full image and could fail with a timeout error if the download took too
long. We also cached the request, which cached the full image even though we
only need the headers. Change it to a HEAD request so we don't download the
entire image just to check the URL.
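A minimal sketch of the change, assuming http.rb (which doesn't follow
redirects by default, so the Location header is available on the 3xx
response):

    require "http"

    def image_url(image_id)
      url = "https://seiga.nicovideo.jp/images/source/#{image_id}"
      # HEAD instead of GET: we only need the redirect target, not the body.
      response = HTTP.head(url)
      response.headers["Location"]
    end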
* Factor out the Cloudflare Polish bypass code to a standalone feature.
* Add an `http_downloader` method to the base source strategy (sketched
below). This is an HTTP client that should be used for downloading
images or making requests to images. This client ensures that referrer
spoofing and Cloudflare bypassing are performed.
This fixes a bug with the upload page reporting the polished filesize
instead of the original filesize when uploading ArtStation images.
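A minimal sketch of what the base strategy might look like; the helper
names here (`spoofed_referer`, `bypass_cloudflare_polish`) are
hypothetical, not the real API:

    class Sources::Strategies::Base
      # HTTP client to use for downloading images or making requests to
      # images. Applies referrer spoofing and the Cloudflare Polish bypass.
      def http_downloader
        client = HTTP.headers("Referer" => spoofed_referer)
        bypass_cloudflare_polish(client)
      end
    end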
Factor out referrer spoofing so that it can be used outside of downloading
files. We also need to spoof the referrer when determining the remote
filesize of images on the uploads page.
Bug: the uploads page showed a remote size of 146 bytes for Pixiv uploads.
Cause: we didn't spoof the Referer header when making the HEAD request
for the image, causing Pixiv to return a 403 error.
Also fix the case where the Content-Length header is absent.
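A minimal sketch of the fixed size check, assuming http.rb (the URL and
referer value are illustrative):

    # Spoof the Referer so Pixiv returns the real headers instead of a 403,
    # and tolerate a missing Content-Length (nil instead of a bogus size).
    response = HTTP.headers("Referer" => "https://www.pixiv.net").head(url)
    remote_size = response.headers["Content-Length"]&.to_i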
Add a `respond_to_search` test helper for concisely testing that a
controller's index action correctly responds to a search. Usage:
# Tests that `/tags.json?search[name]=touhou` returns the `touhou` tag.
setup { @touhou = create(:tag, name: "touhou") }
should respond_to_search(name: "touhou").with { @touhou }
These exceptions are no longer raised now that we've switched from
HTTParty to http.rb. Swallowing unexpected exceptions during testing was
a bad practice anyway.
Also add inputs on the search page for both the linked_to and the
not_linked_to search parameters. Additionally, normalize the title
first since autocomplete adds trailing spaces. The search query was
also simplified a bit by taking advantage of Rails associations.
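A minimal sketch of the normalization step (the parameter names are
illustrative):

    # Autocomplete appends a trailing space, so strip the title before
    # using it in the search.
    title = params.dig(:search, :linked_to).to_s.strip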
Nijie tests often fail under parallel testing. This is because every
test needs to log in to Nijie first, but Nijie rate-limits the login
endpoint, so eventually we hit the limit and tests start failing.
This is made worse by a thundering herd problem. Eight test processes
try to log in to Nijie at the same time, but only one succeeds, so the
rest sleep and try again, but they all wake up and retry at the same
time, hitting the rate limit again.
The workaround is to set the retry limit ridiculously high, higher than
we would ideally like in production. Another workaround would be to
serialize the Nijie tests in the test suite. This can be done with
lockfiles and flock(2). This helps, but we can still hit the rate limit
even under serialized execution.
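A minimal sketch of serializing the Nijie tests with a lockfile and
flock(2) (the lockfile path is illustrative):

    # Only one test process at a time may hold the lock, so logins to
    # Nijie are serialized across the parallel test workers.
    File.open("tmp/nijie.lock", File::RDWR | File::CREAT) do |file|
      file.flock(File::LOCK_EX) # blocks until no other process holds the lock
      # ... log in and run the Nijie test ...
    end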
Fix the Nicoseiga strategy to work with certain direct image URLs that
we can't otherwise extract any information from.
Examples:
* https://dic.nicovideo.jp/oekaki/52833.png
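A minimal sketch of recognizing such direct URLs (the exact pattern is
an assumption):

    # Pull the image id out of a direct oekaki URL like the example above.
    url = "https://dic.nicovideo.jp/oekaki/52833.png"
    image_id = url[%r{\Ahttps?://dic\.nicovideo\.jp/oekaki/(\d+)\.\w+\z}, 1]
    # => "52833"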