* Warn when renaming a wiki that still has links from other wikis.
* When renaming a wiki that still has posts, just show a warning instead
of returning an error and making the user confirm the rename.
Hide IP ban creation and deletion actions from non-mods in the
/mod_actions listing.
The previous approach of just filtering out the IP from the description
was hacky and didn't work with the `only` param (/mod_actions.json?only=id
still included the description field).
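Roughly, the new approach filters at the row level instead, so every
serializer path sees the same records. A minimal sketch, assuming
hypothetical scope and category names (not Danbooru's exact code):

class ModAction < ApplicationRecord
  # Non-mods never see IP ban rows at all, so every serializer path
  # (including the `only` param) gets the same filtered set of records.
  def self.visible(user)
    if user.is_moderator?
      all
    else
      where.not(category: %w[ip_ban_create ip_ban_delete])
    end
  end
end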
Remove the login reminder page. The meaning of "login reminder" wasn't
clear (it's for recovering a forgotten username) and the functionality
was redundant. The password reset page can already be used to recover
forgotten usernames.
There was also a privacy leak, since the login reminder page could be
used to find out whether a given email is in use on Danbooru.
The twitter gem had several problems:
* It's been unmaintained for over a year.
* It pulled in a lot of dependencies, many of which were outdated. In
particular, it locked the `http` gem to version 3.3, preventing us
from upgrading to 4.2.
* It raised exceptions on normal error conditions, like for deleted
tweets or suspended users, which we really don't want.
* We had to wrap it to provide caching.
Changes:
* Fixes #4226 (exception when creating new artist entries for suspended
Twitter accounts).
* Drop support for scraping images from summary cards. Summary cards
are the previews you get when you link to a website in a tweet. These
preview images aren't always the best version of the image.
* Move the Curated pool updater from Reportbooru to Danbooru.
* Change the process for selecting curated posts. Previously it was
every post from the last week with at least three supervotes. This was
flawed because it included both super-upvotes and super-downvotes. Now
it's the top 100 posts from the last week, ordered from most super-upvoted
to least.
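A hypothetical sketch of the new selection query; the model names, the
join, and the supervote score threshold are assumptions, not Danbooru's
actual schema:

curated_posts = Post
  .where("posts.created_at >= ?", 1.week.ago)
  .joins(:votes)
  .where("post_votes.score > 1") # count super-upvotes only, not super-downvotes
  .group("posts.id")
  .order(Arel.sql("count(post_votes.id) DESC"))
  .limit(100)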
Don't return the `domains` field in /artists/{id}.{json,xml}. Fixes a
failure in /artists/{id}.xml:
https://danbooru.donmai.us/artists/156646.xml
<result success="false">
undefined method `domains' for #<ArtistUrl:0x00005566dd340af0> Did you mean? DomainName
</result>
`to_xml` passes the `methods` param down to every nested model, which
fails when a nested model doesn't define one of those methods.
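Roughly, the failure looked like this (a simplified illustration; the
exact serializer call and options are assumptions):

artist.to_xml(include: [:urls], methods: [:domains])
# to_xml forwards methods: [:domains] to each nested ArtistUrl too, and
# ArtistUrl has no #domains method, so serialization raises NoMethodError.
# Dropping the `domains` field sidesteps the problem.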
Don't track IP addresses for post appeals, post flags, tag aliases, tag
implications, or user feedbacks. These things are already tightly
limited. We don't need IPs from them to detect sockpuppets.
Add a new IP address search page at /ip_addresses. Replaces the old
search page at /moderator/ip_addrs.
On user profile pages, show the user's last known IP to mods. Also add
search links for finding other IPs or accounts associated with the user.
IP address search uses a big UNION ALL statement to merge IP addresses
across various tables into a single view. This makes searching easier,
but is known to time out in certain cases.
Fixes #4207 (the new IP search page supports searching by subnet).
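Conceptually, the merged view behind the search is built something like
this (a sketch; the real statement covers many more tables, and the
table and column names here are assumptions):

IP_ADDRESSES_SQL = <<~SQL
  SELECT 'Comment' AS model, id AS model_id, creator_id AS user_id, creator_ip_addr AS ip_addr, created_at FROM comments
  UNION ALL
  SELECT 'Dmail', id, from_id, creator_ip_addr, created_at FROM dmails
  UNION ALL
  SELECT 'PostVersion', id, updater_id, updater_ip_addr, updated_at FROM post_versions
SQL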
Add these endpoints:
* /note_versions/1234
* /artist_versions/1234
* /artist_commentary_versions/1234
This is so the /ip_addresses listing can link to these endpoints.
Bug: sending a dmail containing a wiki link (ex: [[tagme]]) failed when
the recipient had email notifications turned on.
Cause: wiki links inside email notifications use absolute urls, which
the dtext postprocessor didn't parse correctly.
Bug: links like these returned 404s:
* https://danbooru.donmai.us/wiki_pages/...
* https://danbooru.donmai.us/wiki_pages/.hack//
* https://danbooru.donmai.us/wiki_pages/ssss.gridman
Cause: by default, Rails uses dots in route segments to separate the id
from the format. For example, in /wiki_pages/ssss.gridman, the id is
parsed as "ssss" and the format is "gridman" (as if "gridman" were a
format like "json" or "xml").
We work around this by specifying the regex for the id param manually.
The trick here is to use a non-greedy match-all combined with a positive
lookahead to detect the extension but not include it in the match.
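A sketch of the constraint, assuming json/xml/html are the recognized
formats (the exact regex in routes.rb may differ):

Rails.application.routes.draw do
  # The non-greedy .+? grows until the lookahead sees a known format
  # extension (or the end of the string), so the extension is detected
  # but left out of the id. "/wiki_pages/ssss.gridman.json" parses as
  # id "ssss.gridman" plus format "json", while "/wiki_pages/ssss.gridman"
  # keeps the full name as the id.
  resources :wiki_pages, constraints: { id: /.+?(?=\.json|\.xml|\.html|$)/ }
end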
* Redirect the show_or_new action to either the show page or the new
page. Don't use show_or_new to render nonexistent wikis; do that in the
regular show action instead.
* Make the show action return 404 for nonexistent wikis.
Change wiki page search to redirect to exact matches only when using the
quick search bar. This fixes regular (non-quick) searches unexpectedly
redirecting when they happened to return a single result.
Also remove the logic that tries to expand the search when no results
are found. This will eventually be replaced with a smarter "did you mean?"
search.
Move the parsing for the [bur:<id>], [ta:<id>], [ti:<id>] pseudo tags to
the main parser in `DText.format_text`. This fixes a bug where wiki
links inside bulk update requests on the forum weren't properly
colorized because the text of the BUR was embedded after we scanned for
wiki links, not before.
This also ensures that tags inside bulk update requests will be recorded
in the dtext_links table, meaning that forum posts can be properly
searched by tags.
This incidentally means that these request pseudo tags can now be used
outside the forum.
* Move forum post vote tests from test/controllers to test/functional.
* Fix forum post vote tests to work with new routes.
* Fix obsolete wiki page tests dealing with updater_id.
* Remove the single alias and implication request forms. From now
on, bulk update requests are the only way to request aliases or
implications.
* Remove the forum topic ID field from the bulk update request form.
Instead, to attach a BUR to an existing topic, go to the topic and
click "Request alias/implication" at the top of the page.
* Update the bulk update request form to give better examples for the
script format and to explain the difference between aliases and
implications.
Fix the moebooru strategy to fall back to returning the image url if we
can't find the preview url. This fixes iqdb lookups failing in some cases
because the strategy didn't return a valid url for preview_url.
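A minimal sketch of the fallback, with assumed method names:

def preview_url
  # If no preview-sized thumbnail can be found, fall back to the full
  # image so iqdb always gets a usable url to download.
  preview_url_from_api || image_url
end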
Drop the creator_id and updater_id fields from wiki pages. These fields
had several issues:
* The creator_id field was inconsistent with the wiki_page_versions
table. Apparently during the migration to Danbooru 2 in 2012-2013 the
creator_id field got reset to whoever last updated the wiki at that
point in time.
* Saving a wiki would set the updater_id and bump the updated_at
timestamp even when nothing actually changed. Because of this, anything
that saved a wiki, including things like creating aliases or
implications, would change both fields, leaving them inconsistent with
the wiki_page_versions history.
Changes:
* Remove `creator_name` field from the /wiki_pages.json API.
* Remove creator name search option from /wiki_pages/search.
Add a dtext_links table for tracking links between wiki pages. This is
to allow for broken link detection and "what links here" searches, among
other uses.
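A hypothetical migration sketch (the real schema may differ):

class CreateDtextLinks < ActiveRecord::Migration[6.0]
  def change
    create_table :dtext_links do |t|
      t.references :model, polymorphic: true, null: false # e.g. a WikiPage
      t.integer :link_type, null: false                   # wiki link vs. external link
      t.string :link_target, null: false                  # the linked page name or url
      t.index [:link_type, :link_target]
    end
  end
end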
* Change cutoffs on the upload page to max 5 results, min 20% similarity.
* Change cutoffs on the standalone /iqdb_queries page to max 20 results,
min 0% similarity.
* Add `limit` and `similarity` params to /iqdb_queries.json to override
the default cutoffs.
DText is processed in three phases: a preprocessing phase, the regular
parsing phase, and a postprocessing phase.
In the preprocessing phase we extract all the wiki links from all the
dtext messages on the page (more precisely, we do this in forum threads
and on comment pages, because these are the main places with lots of
dtext). This is so we can look up all the tags and wiki pages in one
query, which is necessary because in the worst case (in certain forum
threads and in certain list_of_* wiki pages) there can be hundreds of
tags per page.
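In outline, the preprocessing pass does something like this (the method
names are assumptions, not Danbooru's actual API):

# Gather every wiki link from every dtext blob on the page, then resolve
# them all at once instead of issuing one query per link.
titles = dtext_blobs.flat_map { |text| DText.parse_wiki_titles(text) }.uniq
tags = Tag.where(name: titles).index_by(&:name)
wikis = WikiPage.where(title: titles).index_by(&:title)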
In the postprocessing phase we fix up the html generated by the Ragel
parser to add CSS classes to wiki links. We do this in a postprocessing
step because it's easier than doing it in the Ragel parser itself.
There are a handful of places where we need to strip markup from a piece
of dtext, primarily in <meta> description tags in the wiki. Currently
the dtext parser handles this by having a special mode where it parses
the text but doesn't output html tags. Here we refactor to instead parse
the text normally then strip out the html tags after the fact.
This is more flexible and allows us to simplify a lot of things in the
dtext parser. This also produces more readable output than before in
certain cases.
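A sketch of the refactored approach, assuming a Nokogiri-based helper
(the helper name is an assumption):

require "nokogiri"

# Parse the dtext to html as usual, then discard the markup and keep
# only the text content (e.g. for <meta> description tags).
def strip_dtext(dtext)
  html = DText.format_text(dtext)
  Nokogiri::HTML.fragment(html).text
end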
Previously the page-based (numbered) paginator would always compute
total_pages, even in API calls where it wasn't needed. This could be very
slow in some cases. Refactor so that total_pages isn't calculated unless
it's actually called.
While we're at it, refactor to condense all the sequential vs. numbered
pagination logic into one module. This incidentally fixes a couple more
bugs:
* "page=b0" returned all pages rather than nothing.
* Bad parameters like "page=blaha123" and "page=a123blah" were accepted.
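A condensed sketch of the unified logic (the method and error names are
assumptions): total_pages is only computed on demand, and the page param
is parsed strictly instead of relying on to_i.

def total_pages
  # Memoized so the count query only runs if a caller actually asks.
  @total_pages ||= (total_count / records_per_page.to_f).ceil
end

def parse_page_param(page)
  case page.to_s
  when ""           then [:numbered, 1]       # no param: default to page 1
  when /\A(\d+)\z/  then [:numbered, $1.to_i] # page=123
  when /\Aa(\d+)\z/ then [:after,    $1.to_i] # sequential: after this id
  when /\Ab(\d+)\z/ then [:before,   $1.to_i] # sequential: before this id
  else raise ArgumentError, "invalid page: #{page}" # rejects "blaha123" etc.
  end
end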