Using GET for logging out isn't a good idea, since it lets external
sites log users out just by embedding something like
<img src="https://tildes.net/logout">
This changes it to require a POST, and uses a form with its submit
button re-styled to look like the other text links in the menu.
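A minimal sketch of the idea (the names and routes here are illustrative, not the actual Tildes code): the logout endpoint refuses anything but a POST, and the menu renders a small form whose submit button is styled like a plain link.

```python
# Hypothetical sketch: a GET can be triggered by an <img> tag on any
# site, but a cross-origin POST with a CSRF token cannot.
LOGOUT_FORM = """
<form method="post" action="/logout">
  <input type="hidden" name="csrf_token" value="{csrf_token}">
  <button type="submit" class="btn-link">Log out</button>
</form>
"""


def post_logout(request_method: str) -> str:
    """Only log the user out on a POST; anything else is rejected."""
    if request_method != "POST":
        raise PermissionError("logout requires POST")
    return "logged out"
```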
Previously, the little "Exemplary" badge was only shown to people who
can view the reasons (generally, the comment author and admins).
The only indication to other users that the comment had been labeled as
Exemplary was the colored left border. This adds the label to the top
for all users, including a count if there are multiple.
I've been reading a little about PostgreSQL transaction ID wraparound
today, and how it has knocked multiple companies out of commission for
days while they resolved it. It should have almost no chance of happening on
Tildes for years, but this will let me set up some monitoring for it
now, while I'm thinking about it.
For more info:
https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres.html
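Roughly, the check could look like this (the query is the standard way to see how far each database is from wraparound; the alert threshold is an illustrative assumption, not the actual monitoring config):

```python
# age(datfrozenxid) reports how many transactions old each database's
# oldest unfrozen XID is; Postgres forces a shutdown as this approaches
# ~2^31, so alerting should fire well before that.
WRAPAROUND_QUERY = """
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;
"""

# Hypothetical threshold: loud enough to leave plenty of headroom
# before the ~2.1 billion hard limit.
XID_ALERT_THRESHOLD = 1_500_000_000


def needs_attention(xid_age: int) -> bool:
    """Return True if a database's XID age is close enough to worry."""
    return xid_age >= XID_ALERT_THRESHOLD
```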
I didn't like that the previous change made it possible to *always* have
leading/trailing whitespace around a username. For example, it made it
so that you could go to "/user/ Deimos" and still see my user page
because of the leading space being trimmed. This makes it so that you
have to manually set a flag in the UserSchema context to enable the
trimming, and then only does that on the login view.
This is mostly for when people are logging in. Mobile keyboards
especially like to add a space after the username, which previously
would cause an error unless they manually deleted the trailing space.
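In plain-Python terms, the behavior is roughly this (the real code uses a flag in the marshmallow schema context; the names here are hypothetical):

```python
def clean_username(value: str, context: dict) -> str:
    """Trim surrounding whitespace only when the view opted in.

    The login view sets the context flag; everywhere else, a username
    with leading/trailing whitespace stays invalid.
    """
    if context.get("username_trim_whitespace"):
        value = value.strip()
    if value != value.strip():
        raise ValueError("username has leading/trailing whitespace")
    return value
```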
I don't think the extra precision is very meaningful on topics in
listings (especially now that it changes to absolute dates after a
week), so we can reduce that and it will make things look a bit less
cluttered.
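The reduced precision amounts to showing only the single largest unit, something like this sketch (the unit thresholds are illustrative):

```python
def relative_time(seconds_ago: int) -> str:
    """Render a timestamp as e.g. '4 hours ago' rather than the more
    precise but noisier '4 hours, 23 minutes ago'."""
    units = [("day", 86400), ("hour", 3600), ("minute", 60)]
    for name, length in units:
        count = seconds_ago // length
        if count >= 1:
            plural = "s" if count != 1 else ""
            return f"{count} {name}{plural} ago"
    return "just now"
```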
This tests rearranging the info shown on link topics in listings a
little, including no longer showing the name of the submitter of links
in listings at all. The domain (previously shown after the title in
parentheses) is moved into the space previously showing the submitter's
name, and we start showing the word count for link topics in that space
when we have it (which is most of the time).
The script that cleans up old deleted data wipes out the title of
deleted topics (along with most of the other data), but this breaks the
little headers on a user page that say which topic a comment was in.
This just adds a "<deleted topic>" marker in place of the title.
This should probably be generalized out to other locations eventually
too, but this is probably the most prominent place it will be needed.
Until now, users have only been able to view their own full posting
history (pagination was only available on your own user page).
This extends the view_history permission to all logged-in users, so
everyone logged into an account will be able to see the full history of
any user.
YouTube scraping broke earlier on a crazy duration of "P30W2DT8H2M32S"
(30 weeks?!), so I updated the parsing a little to be able to handle
that, and also not crash the consumer if it hits a duration that it
can't handle.
A lot of the code in common between this and the EmbedlyScraper should
probably be generalized out to a base class soon, but let's make sure
this works first.
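The updated parsing could look roughly like this (a sketch, not the actual scraper code): ISO 8601 normally expresses a duration as weeks *or* the other units, but "P30W2DT8H2M32S" mixes them, so the pattern accepts every unit and returns None instead of raising on anything unrecognized.

```python
import re

DURATION_RE = re.compile(
    r"P(?:(?P<weeks>\d+)W)?(?:(?P<days>\d+)D)?"
    r"(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+)S)?)?$"
)

UNIT_SECONDS = {
    "weeks": 604800,
    "days": 86400,
    "hours": 3600,
    "minutes": 60,
    "seconds": 1,
}


def parse_duration(value: str):
    """Return total seconds for an ISO 8601-ish duration, or None if it
    can't be parsed (so the consumer doesn't crash)."""
    match = DURATION_RE.match(value)
    if not match:
        return None
    return sum(
        int(amount) * UNIT_SECONDS[unit]
        for unit, amount in match.groupdict().items()
        if amount
    )
```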
I hate almost everything about this, but it seems to work fine overall.
Pagination is still restricted to users themselves currently, so you can
only page back through history and switch to topics or comments only on
yourself. This won't stay like this for much longer though.
Required telling prospector to ignore some abstract methods inside
wrapt's ObjectProxy - they're not truly abstract, but will raise
NotImplementedError if they're called, so I guess pylint detects those
as abstract.
The monitoring server needs Redis, but not the separate server that's
used for the breached-passwords bloom filter in dev/prod. This splits
that server out to its own state, so that it doesn't need to be set up
on the monitoring server.
Some of these states were built entirely around a single-server approach
(Prometheus + monitoring being on the same server as the site), and the
files have needed modifications to work with a separate monitoring
server.
This updates the states so that it should all happen as expected in all
types of environments.
I'm seeing some errors come through now when requests are made for
unhandled urls (things like /ads.txt), usually just by various
bots/scanners.
The metrics tween is crashing on these, so this should fix it.
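The shape of the fix is roughly this (a sketch following Pyramid's tween factory signature; the helper name is hypothetical): the tween can no longer assume every request matched a route.

```python
RECORDED = []


def record_request_metric(route_name: str) -> None:
    """Stand-in for incrementing a Prometheus counter by route."""
    RECORDED.append(route_name)


def metrics_tween_factory(handler, registry):
    def metrics_tween(request):
        response = handler(request)
        # Bots hitting paths like /ads.txt produce requests with no
        # matched route, so guard against None before reading .name.
        route = getattr(request, "matched_route", None)
        route_name = route.name if route is not None else "unmatched"
        record_request_metric(route_name)
        return response

    return metrics_tween
```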
When a form status message is displayed (often an error), it could cause
the button to be shrunk, making it look strange (sometimes the text
would even become larger than the button background). This prevents it
from being able to shrink and will cause the message to wrap instead.
Previously I was using Salt to install the Sentry SDK (previously known
as "raven") only on the production server, but that's not really
necessary. This will just install it everywhere, and then we'll only
actually integrate it in production.
Now that all links in text have underlines by default, I think this
looks pretty strange for ~group and @user links, which are quite common
and unnecessary to have underlined all the time. This modifies the
markdown parser to add link-user and link-group classes to these links,
which allows them to be styled differently.
In addition, some of the markdown tests needed to be changed to use
BeautifulSoup instead of simple string-matching, since it's not as
simple to recognize links any more (and the order of attrs might
change).
I think this is a good idea for a few reasons, including accessibility
(people that have difficulty distinguishing the link color will still be
able to recognize links).
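The attribute-order problem the tests ran into can be seen with only the standard library (the real tests use BeautifulSoup; this sketch just shows why matching parsed attributes beats matching exact strings):

```python
from html.parser import HTMLParser


class LinkClassFinder(HTMLParser):
    """Collect (class, href) pairs from <a> tags, regardless of the
    order the attributes were serialized in."""

    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attr_dict = dict(attrs)
            self.found.append((attr_dict.get("class"), attr_dict.get("href")))


def find_links(html: str):
    finder = LinkClassFinder()
    finder.feed(html)
    return finder.found
```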
This is a bit flimsy, but when I started looking at applying the
existing transformations to old posts, I found the Paradox forums as an
example of links that became broken after they were processed (because
"fixing" their links ends up breaking them).
This will give a way to exempt any other domains or urls that end up
being a problem, though over the long term it would probably be better
to make this database-based instead of code-based.
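As a code-based stopgap, the exemption check could be as simple as this sketch (the domain set is hypothetical, with the Paradox forums' domain as a stand-in for whatever actually gets exempted):

```python
from urllib.parse import urlparse

# Hypothetical exemption list; over the long term this would live in
# the database instead of code.
EXEMPT_DOMAINS = {"forum.paradoxplaza.com"}


def is_exempt(url: str) -> bool:
    """Skip url transformations for domains known to break when
    'fixed', matching the domain itself or any subdomain."""
    domain = urlparse(url).netloc.lower()
    return domain in EXEMPT_DOMAINS or any(
        domain.endswith("." + exempt) for exempt in EXEMPT_DOMAINS
    )
```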