I don't think the extra precision of the relative timestamps is very
meaningful on topics in listings (especially now that they change to
absolute dates after a week), so we can reduce it, which will make
things look a bit less cluttered.
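As a rough illustration of the idea (the actual formatting code is
different), coarse relative times with an absolute fallback after a week
might look like:

    from datetime import datetime, timedelta

    def listing_timestamp(posted: datetime) -> str:
        # hypothetical helper: one coarse unit instead of something
        # like "3 hours, 12 minutes", absolute date after a week
        delta = datetime.utcnow() - posted
        if delta > timedelta(days=7):
            return posted.strftime("%b %d %Y")
        if delta >= timedelta(days=1):
            return f"{delta.days} days ago"
        if delta >= timedelta(hours=1):
            return f"{delta.seconds // 3600} hours ago"
        return f"{delta.seconds // 60} minutes ago"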
This tries rearranging the info shown on link topics in listings a
little, including no longer showing the submitter's name at all. The
domain (previously shown after the title in parentheses) is moved into
the space that previously showed the submitter's name, and we start
showing the word count for link topics in that space when we have it
(which is most of the time).
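A sketch of how that secondary line might be assembled (the attribute
names here are hypothetical):

    def topic_info_line(topic) -> str:
        # hypothetical attributes: link_domain and word_count
        parts = [topic.link_domain]
        if topic.word_count is not None:
            parts.append(f"{topic.word_count} words")
        return " - ".join(parts)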
The script that cleans up old deleted data wipes out the title of
deleted topics (along with most of the other data), but this breaks the
little headers on a user page that say which topic a comment was in.
This just adds a "<deleted topic>" marker in place of the title.
This should probably be generalized to other locations eventually too,
but this is the most prominent place it will be needed.
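Something along these lines, with hypothetical attribute names:

    def heading_title(topic) -> str:
        # the cleanup script wipes the title of deleted topics,
        # so fall back to a placeholder
        if topic.is_deleted:
            return "<deleted topic>"
        return topic.title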
Until now, users have only been able to view their own full posting
history (with pagination only available on their own user page).
This extends the view_history permission to all logged-in users, so
everyone logged into an account will be able to see the full history of
any user.
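In Pyramid terms this is roughly a one-line ACL change (a sketch only;
the resource and ACL layout are simplified):

    from pyramid.security import Allow, Authenticated

    class User:
        def __acl__(self):
            acl = []
            # previously only the user themself:
            # acl.append((Allow, self.user_id, "view_history"))
            acl.append((Allow, Authenticated, "view_history"))
            return acl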
YouTube scraping broke earlier on a crazy duration of "P30W2DT8H2M32S"
(30 weeks?!), so I updated the parsing a little to be able to handle
that, and to stop crashing the consumer when it hits a duration it
can't handle.
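The updated parsing is roughly along these lines (a sketch, not the
exact code): handle the week designator and return None instead of
raising on anything unrecognized:

    import re

    DURATION_RE = re.compile(
        r"^P(?:(?P<weeks>\d+)W)?(?:(?P<days>\d+)D)?"
        r"(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+)S)?)?$"
    )

    def parse_duration(duration: str):
        """Return the total seconds, or None if the duration can't be parsed."""
        match = DURATION_RE.match(duration)
        if not match:
            # don't crash the consumer on formats we don't recognize
            return None
        parts = {name: int(value or 0) for name, value in match.groupdict().items()}
        return (
            parts["weeks"] * 7 * 86400
            + parts["days"] * 86400
            + parts["hours"] * 3600
            + parts["minutes"] * 60
            + parts["seconds"]
        )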
A lot of the code in common between this and the EmbedlyScraper should
probably be generalized out to a base class soon, but let's make sure
this works first.
I hate almost everything about this, but it seems to work fine overall.
Pagination is still restricted to users themselves currently, so you can
only page back through history and switch to topics or comments only on
yourself. This won't stay like this for much longer though.
Required telling prospector to ignore some "abstract" methods inside
wrapt's ObjectProxy: they're not truly abstract, but they raise
NotImplementedError if they're called, so I guess pylint detects those
as abstract.
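The inline equivalent of the suppression would be something like this
(the subclass name is hypothetical; the actual change was in the
prospector config):

    from wrapt import ObjectProxy

    # pylint treats methods that only raise NotImplementedError as
    # abstract, so subclassing ObjectProxy triggers abstract-method
    class QueryProxy(ObjectProxy):  # pylint: disable=abstract-method
        pass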
The monitoring server needs Redis, but not the separate server that's
used for the breached-passwords bloom filter in dev/prod. This splits
that server out to its own state, so that it doesn't need to be set up
on the monitoring server.
Some of these states were built entirely around a single-server approach
(Prometheus + monitoring being on the same server as the site), and the
files have needed modifications to work with a separate monitoring
server.
This updates the states so that it should all happen as expected in all
types of environments.
I'm seeing some errors come through now when requests are made for
unhandled urls (things like /ads.txt), usually just by various
bots/scanners. The metrics tween is crashing on these, so this should
fix it.
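The fix is essentially a guard like this (a sketch; the metric name and
labels are made up):

    import time
    from prometheus_client import Histogram

    REQUEST_TIME = Histogram(
        "request_processing_seconds",
        "Request processing time",
        labelnames=["route"],
    )

    def metrics_tween_factory(handler, registry):
        def metrics_tween(request):
            start = time.monotonic()
            response = handler(request)
            # unhandled urls (like /ads.txt) never match a route,
            # so matched_route is None
            route = request.matched_route
            name = route.name if route is not None else "unmatched"
            REQUEST_TIME.labels(route=name).observe(time.monotonic() - start)
            return response

        return metrics_tween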
When a form status message is displayed (often an error), it could cause
the button to shrink, making it look strange (sometimes the text would
even become larger than the button background). This prevents the button
from shrinking, so the message will wrap instead.
Previously I was using Salt to install the Sentry SDK (formerly known
as "raven") only on the production server, but that's not really
necessary. This will just install it everywhere, and then we'll only
actually integrate it in production.
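The integration side then just becomes conditional on configuration,
something like this (the settings key is hypothetical):

    import sentry_sdk

    def maybe_init_sentry(settings: dict) -> None:
        # only the production config defines a DSN, so the SDK is
        # installed everywhere but only activated in production
        dsn = settings.get("sentry.dsn")
        if dsn:
            sentry_sdk.init(dsn=dsn)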
Now that all links in text have underlines by default, I think this
looks pretty strange for ~group and @user links, which are quite common
and unnecessary to have underlined all the time. This modifies the
markdown parser to add link-user and link-group classes to these links,
which allows them to be styled differently.
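Conceptually the change amounts to something like this post-processing
pass (heavily simplified; the real parser also has to avoid matching
inside code blocks, email addresses, and so on):

    import re

    USERNAME_RE = re.compile(r"@(\w+)")
    GROUP_RE = re.compile(r"~([\w.]+)")

    def linkify(html: str) -> str:
        # tag the generated links with classes so CSS can treat them
        # differently from ordinary links
        html = USERNAME_RE.sub(
            r'<a class="link-user" href="/user/\1">@\1</a>', html)
        html = GROUP_RE.sub(
            r'<a class="link-group" href="/~\1">~\1</a>', html)
        return html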
In addition, some of the markdown tests needed to be changed to use
BeautifulSoup instead of simple string-matching, since it's not as
simple to recognize links any more (and the order of attrs might
change).
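For example, a test along these lines (the converter function name is a
placeholder):

    from bs4 import BeautifulSoup

    def test_user_mention_gets_class():
        html = convert_markdown("calling @SomeUser")  # hypothetical converter
        soup = BeautifulSoup(html, "html.parser")
        link = soup.find("a")
        assert "link-user" in link["class"]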
I think this is a good idea for a few reasons, including accessibility
(people who have difficulty distinguishing the link color will still be
able to recognize links).
This is a bit flimsy, but when I started looking at applying the
existing transformations to old posts, I found the Paradox forums as an
example of links that ended up broken after processing, because
"fixing" their links actually breaks them.
This will give a way to exempt any other domains or urls that end up
being a problem, though over the long term it would probably be better
to make this database-based instead of code-based.
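The escape hatch is basically a domain check, roughly like this (the
domain set and function name are illustrative):

    from urllib.parse import urlparse

    # domains whose urls are left alone because the transformations
    # would break them
    EXEMPT_DOMAINS = {"forum.paradoxplaza.com"}

    def is_exempt(url: str) -> bool:
        return urlparse(url).hostname in EXEMPT_DOMAINS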
webargs 5.0.0 changes the behavior of its @use_args / @use_kwargs
decorators so that optional/missing arguments are no longer filled in:
https://github.com/marshmallow-code/webargs/issues/342
This breaks how I'm currently doing some views with missing arguments
(such as views.topic.get_group_topics), so to be able to upgrade to 5.0
we will need to either update the views or the schemas.
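For example, either the schema field gains a default or the view
signature does; a sketch using the Pyramid parser (the schema and field
names are hypothetical):

    from marshmallow import Schema, fields
    from webargs.pyramidparser import use_kwargs

    class TopicListingSchema(Schema):
        after = fields.String()

    @use_kwargs(TopicListingSchema())
    def get_group_topics(request, after=None):
        # under webargs 5.x, "after" is simply omitted from the kwargs
        # when the request doesn't supply it, so the view needs its own
        # default (or the field needs missing=...)
        ...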
When a topic tag used the hierarchy (for something like
"science.biology"), it would set an invalid class on the tag label,
because the class would include the period(s), which can't be used in
CSS class selectors. This fixes it so that periods are replaced with
dashes.
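That is, the class generation becomes something like this (the class
prefix is hypothetical):

    def tag_css_class(tag: str) -> str:
        # "science.biology" -> "label-topic-tag-science-biology"
        return "label-topic-tag-" + tag.replace(".", "-")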
This is (extreme) overkill at this point, but I'm thinking ahead to some
of the transformations that I want to be able to support, and this
refactor should make it much more feasible to have a good process.
The main changes here are using a class-based system where each
transformation can also define a method for checking whether it should
apply to any given url (probably often based on domain), as well as a
new process that will repeatedly apply all transformations until the url
passes through all of them unchanged.
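In outline (a simplified sketch of the structure, not the exact code):

    class UrlTransformer:
        @classmethod
        def applies_to(cls, url: str) -> bool:
            # most subclasses will decide based on the domain
            raise NotImplementedError

        @classmethod
        def apply(cls, url: str) -> str:
            raise NotImplementedError

    def transform_url(url: str, transformers) -> str:
        # keep applying transformations until a full pass leaves
        # the url unchanged
        while True:
            original = url
            for transformer in transformers:
                if transformer.applies_to(url):
                    url = transformer.apply(url)
            if url == original:
                return url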
This script can be scheduled as a cronjob, and will dump the database,
compress and GPG-encrypt the dump, and upload it to an FTP server.
Afterwards, it will also delete any backups older than the specified
retention periods, both locally and on the FTP server (each location
has its own retention period).
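The core of the pipeline looks roughly like this (the database name,
GPG recipient, and paths are placeholders):

    import subprocess
    from datetime import datetime

    def create_backup(dest_dir: str) -> str:
        filename = datetime.utcnow().strftime("backup-%Y%m%d-%H%M%S.sql.gz.gpg")
        path = f"{dest_dir}/{filename}"
        with open(path, "wb") as output:
            # pg_dump | gzip | gpg --encrypt
            dump = subprocess.Popen(
                ["pg_dump", "tildes"], stdout=subprocess.PIPE)
            compress = subprocess.Popen(
                ["gzip"], stdin=dump.stdout, stdout=subprocess.PIPE)
            encrypt = subprocess.Popen(
                ["gpg", "--encrypt", "--recipient", "backup-key"],
                stdin=compress.stdout, stdout=output)
            dump.stdout.close()
            compress.stdout.close()
            encrypt.wait()
        return path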