Bug: when a search timed out we got the generic failbooru page instead
of the search timeout error page.
Cause: when rendering the <link rel="next"> / <link rel="prev"> tags in
the header, we may need to evaluate the search to determine the next or
previous page. If the search timed out, this evaluation failed and
Rails threw an ActionView::Template::Error, because the exception was
raised while rendering the template.
Likewise, rendering the attributes for the <body> tag could fail with an
ActionView::Template::Error because the call to `current_item.present?`
forced evaluation of the search.
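A minimal sketch of one way to handle this, assuming Postgres statement
timeouts (`PG::QueryCanceled`) and a hypothetical error template; Rails'
`rescue_from` also checks the wrapped exception's cause, so this covers
timeouts surfaced as an ActionView::Template::Error during rendering:

```ruby
class ApplicationController < ActionController::Base
  # PG::QueryCanceled is raised when a query exceeds statement_timeout.
  # Handling it here renders the search timeout page whether the timeout
  # hits in the controller or inside a template (via the cause chain).
  rescue_from PG::QueryCanceled do
    render "static/search_timeout", status: 500
  end
end
```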
Bug: The frontpage failed due to an SSL error. We couldn't fetch the
popular tag list from Reportbooru because Reportbooru's SSL certificate
had expired and HTTP.rb raised an SSLError exception that we didn't
catch.
Fix: Convert the SSLError to a 5xx HTTP error to prevent SSL exceptions
from leaking through HTTP.rb.
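A minimal sketch of the conversion; the `FakeResponse` struct is an
illustrative stand-in for however the real wrapper builds its response:

```ruby
require "http"
require "openssl"

FakeResponse = Struct.new(:status, :body)

def http_get(url)
  HTTP.get(url)
rescue OpenSSL::SSL::SSLError
  # Surface SSL failures as a synthetic 503 so callers only have to
  # handle HTTP status codes, not transport-level exceptions.
  FakeResponse.new(503, "")
end
```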
Remove sending dmail notifications to uploaders when an upload is
disapproved. These messages are usually confusing and frustrating to
uploaders. They don't know who is sending them and they usually feel
insulted when they get negative messages from anonymous users.
* Fix incorrect canonical tags. Before we were using
`<meta name="canonical" content="...">`. This was wrong; it should have
been `<link rel="canonical" href="...">`.
* Add a default canonical tag on all pages. Fixes Google treating the
same content on different subdomains (safebooru, shima, kagamihara, etc)
as duplicate content. Also fixes Google sometimes treating similar but
distinct content on the same domain as duplicate content.
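A minimal sketch of the default tag as a view helper; the canonical
host is an assumption:

```ruby
# Hypothetical helper, rendered in the layout <head>. Pinning each
# page's canonical URL to the main domain keeps subdomain mirrors from
# being indexed as duplicate content.
def canonical_link_tag
  tag.link(rel: "canonical", href: "https://danbooru.donmai.us#{request.fullpath}")
end
```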
Fix the wiki excerpt not appearing when searching for a tag that doesn't
exist in the tag list. This could happen if someone created a wiki for a
tag that has never been used on a post.
* Only show the delete artist button on the artist edit page.
* Only show the delete pool button on the pool edit page.
* Only show the delete wiki button on the wiki edit page.
* Refactors DText form fields to use a custom SimpleForm input instead
of manually generated HTML. This makes DText fields use the same markup
as normal SimpleForm fields, which lets us apply browser maxlength
validations to DText input fields (see the sketch after this list).
* Fixes autocomplete for @-mentions only working in comments and forum posts.
Now @-mention autocomplete works in all DText fields, including dmails.
Known bug: it applies in artist commentary fields when it shouldn't.
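A minimal sketch of the custom input, with assumed class and file
names:

```ruby
# app/inputs/dtext_input.rb (hypothetical path)
class DtextInput < SimpleForm::Inputs::TextInput
  def input(wrapper_options = nil)
    options = merge_wrapper_options(input_html_options, wrapper_options)
    # Emit a plain textarea through SimpleForm so DText fields get the
    # standard wrapper markup and browser-side maxlength validation.
    @builder.text_area(attribute_name, options)
  end
end

# Usage: f.input :body, as: :dtext, input_html: { maxlength: 20_000 }
```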
The image_url method makes a request to `https://seiga.nicovideo.jp/images/source/:image_id`
to see where this URL redirects to. Previously we made a GET request,
which downloaded the full image and could fail with a timeout error if
the download took too long. We also cached the request, which cached
the full image even though we only need the
headers. Change it to a HEAD request so we don't have to download the entire image just to
check the URL.
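A sketch of the HEAD-based check; the endpoint is from the commit, the
client code is illustrative:

```ruby
require "http"

def source_image_url(image_id)
  # HTTP.rb doesn't follow redirects by default, so the 3xx response's
  # Location header tells us where the image lives without downloading
  # its body.
  res = HTTP.head("https://seiga.nicovideo.jp/images/source/#{image_id}")
  res.headers["Location"]
end
```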
* Factor out the Cloudflare Polish bypass code to a standalone feature.
* Add `http_downloader` method to the base source strategy. This is an
HTTP client that should be used for downloading images or making
requests to image URLs. This client ensures that referrer spoofing and
Cloudflare bypassing are performed.
This fixes a bug with the upload page reporting the polished filesize
instead of the original filesize when uploading ArtStation images.
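A rough sketch of the strategy hook, with assumed names (`page_url` is
the usual per-strategy method; the Cloudflare bypass mechanics are
elided):

```ruby
require "http"

module Sources
  module Strategies
    class Base
      # One shared client for all image requests, so referrer spoofing
      # (and, in the real code, the Cloudflare Polish bypass) is applied
      # in one place instead of by each caller.
      def http_downloader
        HTTP.headers("Referer" => page_url)
      end
    end
  end
end
```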
Factor out referrer spoofing so that it can be used outside of downloading
files. We also need to spoof the referrer when determining the remote
filesize of images on the uploads page.
Bug: the uploads page showed a remote size of 146 bytes for Pixiv uploads.
Cause: we didn't spoof the Referer header when making the HEAD request
for the image, causing Pixiv to return a 403 error.
Also fix the case where the Content-Length header is absent.
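A sketch with both fixes applied (spoofed Referer, tolerating a missing
Content-Length); names are illustrative:

```ruby
require "http"

def remote_file_size(url, referer)
  res = HTTP.headers("Referer" => referer).head(url)
  # Content-Length can legitimately be absent (e.g. chunked transfer);
  # return nil rather than a bogus size like the 146-byte 403 page.
  res.headers["Content-Length"]&.to_i
end
```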
Also add inputs on the search page for both the linked_to and the
not_linked_to search parameters. Normalize the title first, since
autocomplete adds trailing spaces, and simplify the search query a bit
by taking advantage of Rails associations.
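A one-line sketch of the normalization; the param name is an
assumption:

```ruby
# Strip the trailing space that autocomplete appends before searching.
title = params.dig(:search, :title).to_s.strip
```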
Nijie tests fail often under parallel testing. This is because every
test needs to log in to Nijie first, but Nijie rate-limits the login
endpoint, so eventually we hit the limit and tests start failing.
This is made worse by a thundering herd problem. Eight test processes
try to log in to Nijie at the same time, but only one succeeds, so the
rest sleep and retry, but then they all wake up at the same time,
hitting the rate limits again.
The workaround is to set the retry limit ridiculously high, higher than
we would ideally like in production. Another workaround would be to
serialize the Nijie tests in the test suite. This can be done with
lockfiles and flock(2). This helps, but we can still hit the rate limit
even under serialized execution.
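A sketch of the flock(2)-based serialization; the lockfile path is
illustrative:

```ruby
# Hold an exclusive lock while logging in so only one test process hits
# Nijie's login endpoint at a time. The lock is released when the file
# is closed at the end of the block.
def with_nijie_login_lock
  File.open("/tmp/nijie-login.lock", File::RDWR | File::CREAT, 0o644) do |f|
    f.flock(File::LOCK_EX) # blocks until other processes release it
    yield
  end
end
```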
Fix the Nicoseiga strategy to work with certain direct image URLs that
we can't otherwise extract any information from.
Examples:
* https://dic.nicovideo.jp/oekaki/52833.png