Add a dtext_links table for tracking links between wiki pages. This is
to allow for broken link detection and "what links here" searches, among
other uses.
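As a rough sketch, the migration could look something like this (the
column and model names here are illustrative, not the actual schema):

    class CreateDtextLinks < ActiveRecord::Migration[6.0]
      def change
        create_table :dtext_links do |t|
          # The wiki page or forum post containing the link.
          t.references :model, polymorphic: true, null: false
          # Whether this is a [[wiki link]] or an external link.
          t.integer :link_type, null: false
          # The wiki page title or URL being linked to.
          t.string :link_target, null: false
          t.index :link_target
        end
      end
    end

A "what links here" search then reduces to a query on link_target, and
broken link detection to finding link targets with no matching wiki page.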
DText was changed so that the .dtext-external-link class is now applied to all external links.
Previously it applied only to named links (ex: "google":[http://www.google.com]), not bare
links (ex: http://www.google.com).
https://fontawesome.com/how-to-use/on-the-web/other-topics/performance
The default version of Font Awesome uses JavaScript to replace <i> tags
with dynamically generated SVG elements. This adds a lot of weight to
the JavaScript bundle (over 1MB), even when using subsetting to
load only the icons we actually use. The web font version is less
featureful than the JS version, but much lighter weight.
* Fix the IQDB lookup to handle error messages returned by IQDB.
* Fix `undefined method `>=' for nil:NilClass` error when trying to upload
ugoiras.

Using the preview image for IQDB lookups should be faster and
give the same results as the full image, plus it should work with things
like ugoiras and videos that IQDB can't handle.
Also add an image_url param so that API users can continue using the
full image if needed.
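As a sketch, the URL selection might look something like this (the
method and attribute names are assumptions):

    # Prefer the caller-supplied image_url; otherwise fall back to the
    # post's preview image, which exists even for ugoiras and videos.
    def iqdb_lookup_url(post, params)
      params[:image_url].presence || post.preview_file_url
    end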
https://danbooru.donmai.us/forum_topics/9127?page=283#forum_post_160508
There was a recent outage caused by the read replica
(yukinoshita.donmai.us) being temporarily unavailable. The pg driver in
Rails got stuck trying to connect to the replica, which brought down
the whole site. The app servers stopped responding and could only be
brought down with SIGKILL. Even trying to boot the Rails console didn't
work.
We only really used the replica to calculate tag counts inside
Post.fast_count, which wasn't much of a benefit since the replica is
slower than the main database.
Previously the search form on the /iqdb_queries page submitted directly
to the iqdb service (karasuma.donmai.us), which redirected back to
Danbooru with the search results.
This was different from API requests, which submitted to
/iqdb_queries.json and were proxied to iqdb through Danbooru.
Because of this, searches on the /iqdb_queries page behaved differently
from API requests. Things like filesize limits and referrer
spoofing were handled differently.
Now searches on the /iqdb_queries page submit directly to Danbooru. This
is simpler, and it means that API requests and HTML requests have the
same behavior.
Normally thumbnails have a fixed size of 154x154, but that's not always
desirable outside of the posts index because it creates empty gaps
around thumbnails.
* Add the source (twitter, pixiv, etc.) and upload date ("X minutes ago")
to iqdb thumbnails.
* Link the filesize to the full file so you can compare files in new tabs.
* Link the similarity to an iqdb search so you can pivot your search to other posts.
* Change cutoffs on the upload page to max. 5 results, min. 20% similarity.
* Change cutoffs on the standalone /iqdb_queries page to max. 20 results, min. 0% similarity.
* /iqdb_queries.json: add `limit` and `similarity` params to change default cutoffs.
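For example, an API call overriding the new defaults might look
something like this (the url param name is an assumption; limit and
similarity are as described above):

    require "net/http"
    require "json"

    uri = URI("https://danbooru.donmai.us/iqdb_queries.json")
    uri.query = URI.encode_www_form(url: "https://example.com/image.jpg",
                                    limit: 5, similarity: 60)
    results = JSON.parse(Net::HTTP.get(uri))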
Remove the 'Similar' button next to the source field in the post edit
form. Removed for multiple reasons:
* It doesn't make sense to have to open the edit form to do a reverse
image search.
* The 'Similar' button tries to redownload the file from the source,
which has various problems: the source might have been deleted, it
might have been changed or revised, it might be a format that iqdb
can't handle (ugoira/webm/mp4), or it might otherwise not match the
actual post.
* The 'Find similar' button already exists in the sidebar and it does
the right thing by using the preview image from Danbooru, which
avoids all the above issues.
Disable the asset pipeline (Sprockets). Sprockets now errors out after
upgrading to Sprockets 4 because of missing config files. We don't use
it anymore after switching to Webpack, so we can disable it entirely.
Also disable a few more Rails features that we don't use (ActiveStorage,
ActionCable, ActionMailbox, ActionText).
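Concretely, this amounts to replacing require "rails/all" in
config/application.rb with the individual railties we actually use,
roughly like this (a sketch, not the exact file):

    # Load only the frameworks we use; notably absent are
    # sprockets/railtie, active_storage/engine, action_cable/engine,
    # action_mailbox/engine, and action_text/engine.
    require "rails"
    require "active_model/railtie"
    require "active_record/railtie"
    require "active_job/railtie"
    require "action_controller/railtie"
    require "action_view/railtie"
    require "action_mailer/railtie"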
This was used to discourage crawlers from crawling certain pages we
didn't want indexed, primarily post searches.
Remove it because there are better ways to control crawling. Some of
these links weren't even visible to crawlers anyway. This lets us be
consistent about only applying rel="nofollow" to external links.
* Use underscores instead of spaces for tags in inline tag lists (upload
tags report, tooltips, modqueue, comments page).
* Allow long tags to word wrap. Fixes long sources not wrapping in the
upload tags report. Also fixes very long tags that don't have
underscores not wrapping in the sidebar (ex: kuouzumiaiginsusutakeizumonokamimeichoujin_mika).
* Don't truncate long sources in the sidebar on the post show page. Word
wrap them instead.
* Word wrap long external links in general (mainly links in dtext).
* Turn sources into links on modqueue page.
DText is processed in three phases: a preprocessing phase, the regular
parsing phase, and a postprocessing phase.
In the preprocessing phase we extract all the wiki links from all the
dtext messages on the page (more precisely, we do this in forum threads
and on comment pages, because these are the main places with lots of
dtext). This is so we can look up all the tags and wiki pages in one
query, which is necessary because in the worst case (in certain forum
threads and in certain list_of_* wiki pages) there can be hundreds of
tags per page.
In the postprocessing phase we fix up the HTML generated by the Ragel
parser to add CSS classes to wiki links. We do this in a postprocessing
step because it's easier than doing it in the Ragel parser itself.
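A sketch of the two extra phases (the names here are illustrative, not
the real API):

    # Preprocessing: collect wiki link targets from all the dtext
    # messages on the page, then load the matching tags in one query.
    names = dtext_messages.flat_map { |text| wiki_titles_in(text) }.uniq
    tags = Tag.where(name: names).index_by(&:name)

    # Postprocessing: fix up the HTML emitted by the Ragel parser to
    # tag each wiki link with a CSS class.
    fragment = Nokogiri::HTML::DocumentFragment.parse(html)
    fragment.css("a.dtext-wiki-link").each do |link|
      tag = tags[link.text.downcase.tr(" ", "_")]
      link["class"] += tag ? " tag-type-#{tag.category}" : " dtext-wiki-does-not-exist"
    end
    html = fragment.to_html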
* Parse the wiki page with the actual dtext parser instead of by hand.
This is so that wiki links inside things like [nodtext] or [code]
blocks are handled properly.
* Only include tags that exist and are nonempty. Don't include links to
dead pages or blank tags.
There are a handful of places where we need to strip markup from a piece
of dtext, primarily in <meta> description tags in the wiki. Currently
the dtext parser handles this by having a special mode where it parses
the text but doesn't output HTML tags. Here we refactor to instead parse
the text normally, then strip out the HTML tags after the fact.
This is more flexible and allows us to simplify a lot of things in the
dtext parser. This also produces more readable output than before in
certain cases.
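The refactored approach is roughly the following (a sketch; the real
helper may differ):

    # Parse the dtext normally, then strip tags from the generated HTML,
    # instead of running the parser in a special no-HTML mode.
    def strip_dtext(text)
      html = DText.parse(text)
      Nokogiri::HTML.fragment(html).text.strip
    end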
The swipe left gesture interfered with scrolling left and right, with
pinch-to-zoom, and with copying and pasting text. This gesture wasn't
really necessary anyway, since the back button can always be used to go
back instead.
* Allow both xml and json authentication in sessions controller.
* Raise an exception if a login attempt fails so that a) we return a
proper error for json/xml requests and b) failed login attempts get
reported to NewRelic (for monitoring abuse).
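In the controller this looks roughly like the following (the exception
class and authentication helper are assumptions):

    def create
      user = User.authenticate(params[:name], params[:password])
      # Raising (instead of silently re-rendering the login form) lets
      # the normal error handler return a proper json/xml error response
      # and gets the failure reported to NewRelic.
      raise AuthenticationFailure, "invalid login" if user.nil?
      session[:user_id] = user.id
      respond_with(user)
    end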
Replace this common pattern in controllers:
@tags = Tag.search(search_params).paginate(params[:page], :limit => params[:limit], :search_count => params[:search])
with this:
@tags = Tag.paginated_search(params)
`search_count` is used to skip doing a full page count when we're not
doing a search (on the assumption that the number of results will be
high when not constrained by a search). We didn't do this consistently
though. Refactor to do this in every controller.
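A sketch of what paginated_search wraps up (approximate):

    module PaginatedSearch
      # Condenses the search + paginate + search_count boilerplate that
      # was previously repeated in every controller.
      def paginated_search(params)
        search(params.fetch(:search, {}))
          .paginate(params[:page], limit: params[:limit], search_count: params[:search])
      end
    end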
Previously the page-based (numbered) paginator would always count
total_pages, even in API calls where it wasn't needed. This could be very
slow in some cases. Refactor so that total_pages isn't calculated unless
it's called.
While we're at it, refactor to condense all the sequential vs. numbered
pagination logic into one module. This incidentally fixes a couple more
bugs:
* "page=b0" returned all pages rather than nothing.
* Bad parameters like "page=blaha123" and "page=a123blah" were accepted.
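The consolidated parsing might look roughly like this (a sketch of the
idea, not the exact code):

    # Accept only "123" (numbered) or "a123"/"b123" (sequential, after/
    # before a given id). Anchored regexes reject junk like "blaha123"
    # and "a123blah", and "b0" falls through to an id < 0 condition that
    # correctly matches nothing.
    def parse_page_param(page)
      page = page.presence || 1  # no page param means page 1
      case page.to_s
      when /\A(\d+)\z/  then [:numbered, $1.to_i]
      when /\Aa(\d+)\z/ then [:after, $1.to_i]
      when /\Ab(\d+)\z/ then [:before, $1.to_i]
      else raise PaginationError, "invalid page: #{page}"  # hypothetical error class
      end
    end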