This is being done to support some other collapsing behavior that's
starting to come in (for example, as an effect of comment tags).
However, it's getting pretty ugly at this point. I should probably
implement a wrapper class for Comments that are inside a CommentTree,
so that methods like this can be moved into that class instead of
needing to be staticmethods on the CommentTree class itself.
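As a rough sketch of that idea (the CommentInTree name and its
attributes are hypothetical), a wrapper could delegate to the
underlying Comment while holding tree-specific state:

    class CommentInTree:
        """Hypothetical wrapper for a Comment inside a CommentTree."""

        def __init__(self, comment):
            self.comment = comment
            self.replies = []

            # tree-specific state that currently lives in CommentTree
            # staticmethods: None, "individual", or "full"
            self.collapsed_state = None

        def __getattr__(self, name):
            # delegate all other attribute access to the wrapped Comment
            return getattr(self.comment, name)

        def collapse_individually(self):
            """Collapse only this comment, not its replies."""
            self.collapsed_state = "individual"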
This adds a consumer (in prod only) that uses Embedly's Extract API to
scrape the links from all new link topics and stores some of the data in
the topic's content_metadata column.
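A minimal sketch of that flow, assuming the requests library; Embedly's
Extract endpoint takes key and url parameters, but the specific fields
kept here are only examples of "some of the data":

    import requests

    EMBEDLY_EXTRACT_URL = "https://api.embed.ly/1/extract"

    def fetch_link_metadata(api_key, link_url):
        """Scrape a link with Embedly's Extract API and return the
        subset of the response worth storing in content_metadata."""
        response = requests.get(
            EMBEDLY_EXTRACT_URL,
            params={"key": api_key, "url": link_url},
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()

        # keep only a few fields, not the whole response
        return {
            key: data[key]
            for key in ("title", "description", "word_count")
            if data.get(key)
        }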
Previously, any topic processed by this consumer would have its
content_metadata completely replaced. This won't work once other
consumers or processes are able to set that data too, since we don't
know that this one will always run first.
This commit updates the method the consumer uses so that it merges the
new data into whatever is already in the topic's content_metadata
column instead of overwriting it. It would probably be good to
generalize this method so that it can be reused more easily elsewhere.
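A sketch of that merge, assuming content_metadata is a dict-like
(JSONB) column; whether new or existing values win on a key collision
is an assumption here (new values win):

    def merge_content_metadata(topic, new_metadata):
        """Merge new metadata into a topic's existing content_metadata
        instead of replacing the whole column."""
        existing = topic.content_metadata or {}
        topic.content_metadata = {**existing, **new_metadata}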
Previously, user topic tag filters weren't also filtering out any
"descendant" tags when they were hierarchical. For example, setting a
filter on "ask" wouldn't also filter out "ask.survey". This fixes that
behavior, though the implementation is a bit awkward and could
probably be done more cleanly.
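One simple way to express that descendant check (the real fix may do
this in the database instead, e.g. with Postgres's ltree operators):

    def tag_is_filtered(tag, filtered_tags):
        """Check whether a tag matches any filter, including
        hierarchical descendants (a filter on "ask" should also match
        "ask.survey")."""
        return any(
            tag == filtered or tag.startswith(filtered + ".")
            for filtered in filtered_tags
        )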
This re-enables the comment tagging functionality, giving the
permission to all users whose accounts are over a week old. However,
as of this commit, the tags have no functional effect at all and are
only visible to admins.
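The age check amounts to something like this (created_time is an
assumed attribute name):

    from datetime import datetime, timedelta, timezone

    def can_tag_comments(user):
        """Permission check: the account must be at least a week old."""
        account_age = datetime.now(timezone.utc) - user.created_time
        return account_age >= timedelta(days=7)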
This should really be done on the database end, but this is a simple
fix for the incorrect sorting (caused by last_reply_time not being set
for single-message conversations).
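In other words, fall back to another timestamp when sorting (the
attribute names here are assumptions):

    def sort_conversations(conversations):
        """Sort newest-activity-first, falling back to created_time
        when last_reply_time was never set."""
        return sorted(
            conversations,
            key=lambda c: c.last_reply_time or c.created_time,
            reverse=True,
        )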
This follows the REUSE practices to add license and copyright info to
all source files: https://reuse.software/practices/2.0/
In addition, LICENSE.md was switched to a plaintext LICENSE file to
support the tag-value header, as recommended.
Note that files that are closer to configuration than code did not
have headers added. This includes all Salt files, Alembic files, and
Python files such as most __init__.py files that only import other
files, since those are similar to header files, which are generally
not considered copyrightable.
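For reference, the tag-value header added to the top of Python source
files looks something like this (the exact copyright holder and
license identifier below are illustrative):

    # Copyright (c) 2018 Tildes contributors <code@tildes.net>
    # SPDX-License-Identifier: AGPL-3.0-or-later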
Just some small adjustments to how @extend is used here, so that some
of the styles that apply to "fully collapsed" chains don't get brought
over to the "individually collapsed" comments where they're not
wanted. The % syntax is Sass's placeholder-selector syntax, which is
the recommended way to define these "extend-only" selectors.
These disables no longer seem to be necessary after the switch to
Prospector. Some of this may also be due to newer versions of astroid
and pylint.
Pylama is no longer maintained and has been gradually getting slower,
as well as being incompatible with Python 3.7 and with newer versions
of astroid and pylint. This replaces it with Prospector, which is
maintained by the same group as pylint and some other code-quality
tools.
For users who have the "mark new comments" feature enabled, this will
collapse old comments when they revisit a topic that has new ones. It
involves adding a new "individual collapse" style that collapses only
a single comment and doesn't also hide all of its replies.
New comments and their direct parents will stay uncollapsed, and all
other comments on a path up to the root will be individually
collapsed. Any branches with no expanded comments will be fully
collapsed. We should probably add an indicator for how many comments
are in a collapsed chain, so that we can distinguish individually
collapsed comments from larger collapsed chains.
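A rough sketch of that walk over the comment tree (attribute names
like is_new, replies, and collapsed_state are hypothetical):

    def apply_collapse_state(comment):
        """Recursively set collapse state on a tree of comments,
        returning True if this subtree contains a new comment.

        States: None = expanded, "individual" = collapse just this
        comment, "full" = collapse the whole chain below it too.
        """
        new_below = False
        for reply in comment.replies:
            if apply_collapse_state(reply):
                new_below = True

        if comment.is_new or any(r.is_new for r in comment.replies):
            # new comments and their direct parents stay uncollapsed
            comment.collapsed_state = None
        elif new_below:
            # on a path between the root and a new comment
            comment.collapsed_state = "individual"
        else:
            # nothing new anywhere in this branch
            comment.collapsed_state = "full"

        return comment.is_new or new_below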
Quite a few aspects of this are very hackish (especially the template
changes needed to let the new template inherit from
topic_listing.jinja2), but it's a lot better than nothing.
Previously, these methods for generating "base" and "normal" URLs
didn't treat each route individually, and just had a single list of
query vars that would be kept for all routes. This approach is a lot
more flexible and allows keeping only the variables relevant to a
particular route.
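For example, something like a per-route mapping (the route names and
query vars below are illustrative):

    from urllib.parse import urlencode

    # which query vars are relevant to each route
    KEPT_QUERY_VARS = {
        "home": ("order", "period"),
        "group": ("order", "period", "tag"),
        "search": ("q", "order"),
    }

    def base_url(path, route_name, query_params):
        """Rebuild a URL, keeping only the query vars that are
        relevant to this particular route."""
        kept = {
            key: value
            for key, value in query_params.items()
            if key in KEPT_QUERY_VARS.get(route_name, ())
        }
        if kept:
            return f"{path}?{urlencode(kept)}"
        return path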
These limits were determined by looking at site activity so far, and
generally shouldn't have any impact on normal site usage.
This also adds a new request method, apply_rate_limit, which can be
used to check the rate limit and immediately raise an error if it's
exceeded, instead of needing to check and handle the result
separately.
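A sketch of that request method in a Pyramid app; the check_rate_limit
call and its is_allowed result are assumptions standing in for the
existing rate-limit check:

    from pyramid.httpexceptions import HTTPTooManyRequests

    # registered on the request via config.add_request_method(...)
    def apply_rate_limit(request, action_name):
        """Check the rate limit for an action and immediately raise a
        429 error if it's been exceeded."""
        result = request.check_rate_limit(action_name)
        if not result.is_allowed:
            raise HTTPTooManyRequests()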