I temporarily pinned two packages that will require more significant
updates (webargs in requirements and prospector in requirements-dev).
Other than those, everything seemed to upgrade cleanly, except for an
issue with mypy that needed a "type: ignore" comment to circumvent.
Note that there is currently an issue with Salt's pip module being
unable to handle comments in a requirements file that include "-r", so
after running pip-tools I had to manually edit the two .txt files to
remove all lines with "via -r" comments in them. I've commented about
this in
an issue on Salt's repo here:
https://github.com/saltstack/salt/issues/56514#issuecomment-665947887
Previously, when checking if a link had been posted before, there was no
restriction on the time limit, so even posts from years ago would come
up. This restricts it to only the last 6 months, which I think is a
pretty reasonable time period for reposting.
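A minimal sketch of that restriction, with the window expressed as a
timedelta (the constant name and exact cutoff here are illustrative, not
the actual code):

    from datetime import datetime, timedelta, timezone

    REPOST_WINDOW = timedelta(days=180)  # roughly 6 months

    def previous_posts_to_show(previous_post_times):
        """Keep only previous posts of the same link inside the window."""
        cutoff = datetime.now(timezone.utc) - REPOST_WINDOW
        return [posted for posted in previous_post_times if posted > cutoff]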
This isn't great, but will fix an error that's actively occurring when
someone filters to a single tag (tag= query var) and also has a filtered
topic tag with a space in it.
The "outer" width/height functions also include padding and border. Not
including these didn't make a noticeable difference for the left/right
flipping (the omissions almost canceled each other out), but the
discrepancy is much more noticeable on the top/bottom flipping.
Use bottom: 100% to make sure the menu does not overlap the button (as
it did with bottom: 0), since overlapping the button interferes with its
click handler.
Tags are stored in the search index as space-separated strings
with the periods removed. Searches for "parent.child" tags
were failing because of the period.
Removing the period is okay for now because URL domains are not
currently indexed for search.
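As a rough sketch of the workaround, assuming the same stripping is
applied both when indexing and when searching (function names are
illustrative):

    def tags_for_search_index(tags):
        """Build the space-separated tag string stored in the search index."""
        return " ".join(tag.replace(".", "") for tag in tags)

    def tag_search_term(tag):
        """Strip periods so a search for "parent.child" can match."""
        return tag.replace(".", "")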
Trying to change the mode of this file (which often already exists)
fails on Windows. It seems fine to just not set it and let it be set to
the default.
This message is getting pretty outdated now, and should probably be
handled in a different way regardless so that it doesn't need to be in
the code, especially since forks won't want the same message (or any
message).
A better approach would probably be a consumer or cronjob watching for
new registrations in the event stream.
Prevents a scrollbar from showing up when there is a subscript on the
last line of text.
Another option would have been overflow-y: hidden,
but that clips the text in the (pathological?) case
of deeply nested subscripts.
The generate_site_icons_css cronjob will create this file, but the site
won't work before it exists, so there's a (less than 5 min) gap where
the site is broken when first set up. This probably won't be noticeable
in dev/prod setups, but it breaks things like CI setups where everything
is created fresh each time.
This makes sure that the file always exists on initial setup and
whenever the Salt states are re-run.
Fixes provisioning of a new VM.
Old versions like 2019.2.3 may be moved to an archive, causing
downloads to fail with an HTTP 404 error.
Relaxing the pinned version allows setup to find
newer patches, such as 2019.2.5.
More info:
752768b1ff/accepted/0022-old-releases.md
By default, new top-level comments will only be allowed in the latest
topic from a particular set of scheduled topics. Replies to existing
comments in old topics will still be allowed - this is just intended to
prevent the cases where an old scheduled topic gets bumped back up due
to a reply and people inadvertently start adding new top-level comments
to it instead of the latest one.
This should be the correct behavior for most scheduled topics, but it
can be disabled for a particular schedule if needed.
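A rough sketch of the intended check, using the latest_topic_id column
described below (the per-schedule opt-out flag's name is an assumption):

    def can_post_top_level_comment(topic) -> bool:
        """Allow new top-level comments only in a schedule's latest topic."""
        schedule = topic.schedule
        if schedule is None:
            return True
        # hypothetical name for the per-schedule opt-out mentioned above
        if not schedule.restrict_to_latest:
            return True
        return topic.topic_id == schedule.latest_topic_id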
This adds a new latest_topic_id column to topic_schedule and uses
triggers on the topics table to keep it correct.
This isn't really ideal, but it will simplify a few things related to
scheduled topics by quite a bit. For example, this commit also uses that
new data to much more easily populate the list of scheduled topics in a
group's sidebar, which previously required a subquery and windowing.
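For reference, the new column itself is simple - roughly this in
SQLAlchemy (a sketch only; the primary key and exact options are
assumptions):

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class TopicSchedule(Base):
        """Sketch showing only the new column; everything else omitted."""
        __tablename__ = "topic_schedule"
        schedule_id = Column(Integer, primary_key=True)
        latest_topic_id = Column(
            Integer, ForeignKey("topics.topic_id"), nullable=True
        )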
I think overall this is triggering more than I want, and getting in the
way of perfectly reasonable conversations. I still like the idea, but it
needs adjusting.
Coronavirus topics have slowed down greatly now, with generally only
about 3 per day, and are almost all restricted to ~health.coronavirus,
so users can easily find (or avoid) them by just using that group.
Previously, the comment reply form was being created entirely
client-side by cloning and modifying a <template>. This was nice because
it meant that a network request wasn't necessary to display the form,
but it also had downsides.
For example, if a topic was locked after a user had already loaded the
page (or their notifications page with a comment from that topic), they
would still be able to click Reply and type in a comment, and wouldn't
know that replying wasn't possible until they actually tried to submit
the comment.
By switching to using intercooler for this form, we can do server-side
validation to check permissions before showing the form, and it also
simplifies some other aspects, such as the warning about replying to an
old comment, which previously needed a data-js-old-warning-age attribute
in the HTML, but is now just part of generating the reply form template
server-side.
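A rough sketch of the server-side shape this enables, assuming a
Pyramid view that checks permissions before rendering the form (the
view, permission, and template names are illustrative):

    from pyramid.httpexceptions import HTTPForbidden
    from pyramid.renderers import render_to_response

    def get_comment_reply_form(request):
        """Return the rendered reply form only if replying is allowed."""
        comment = request.context
        if not request.has_permission("reply", comment):
            # e.g. the topic was locked after the user loaded the page
            raise HTTPForbidden("Replying to this comment isn't allowed")
        return render_to_response(
            "comment_reply_form.jinja2", {"comment": comment}, request=request
        )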
If a comment is removed and then deleted by its author, we should
continue showing it as removed, since that's the more significant action
(and the deletion is usually *because* of the removal).
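Roughly, the display logic just needs to check removal first (attribute
names here are assumptions):

    def removed_or_deleted_label(comment):
        """Return the text shown in place of the comment's content, if any."""
        if comment.is_removed:
            # removal takes priority even if the author deleted it afterwards
            return "Comment removed"
        if comment.is_deleted:
            return "Comment deleted"
        return None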
This will probably only ever be relevant in development environments,
but we don't want the topic scheduler to always post a full backlog of
scheduled topics when it hasn't run for a while. For example, if a dev
environment has a daily scheduled topic set up, but the VM is not
launched for a week, the next time the "post scheduled topics" cronjob
runs, it will post all 7 of the backlogged topics.
This commit changes the script so that it advances the schedule to the
next *future* occurrence, instead of continuing the backlog.
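A minimal sketch of the change, assuming the schedule can produce
successive occurrences from its recurrence rule (method and attribute
names are illustrative):

    from datetime import datetime, timezone

    def advance_past_backlog(schedule):
        """Skip missed occurrences so only the next future one is posted."""
        now = datetime.now(timezone.utc)
        next_time = schedule.next_post_time
        while next_time is not None and next_time <= now:
            # previously a topic would have been posted for each missed one
            next_time = schedule.next_occurrence_after(next_time)
        schedule.next_post_time = next_time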
Previously, TopicQuery was excluding ignored topics by default. However,
this caused some unexpected issues, such as a crash when someone tried
to vote on a topic after ignoring it. I think it's more intuitive to
reverse the logic like this: include the ignored topics by default, and
only specifically exclude them in the cases where that's necessary.
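Conceptually, the default flips like this (a sketch in the style of the
query class, not the actual code):

    class TopicQuery:
        """Sketch showing only the ignored-topic handling."""

        def __init__(self, request):
            self.request = request
            self._exclude_ignored = False  # ignored topics included by default

        def exclude_ignored(self):
            """Explicitly filter out ignored topics (e.g. for listings)."""
            self._exclude_ignored = True
            return self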
This adds some very simple metrics to all of the background jobs that
consume the event streams. Currently, the only "real" metric is a
counter tracking how many messages have been processed by that consumer,
but a lot of the value will come from being able to utilize the
automatic "up" metric provided by Prometheus to monitor and make sure
that all of the jobs are running.
I decided to use ports starting from 25010 for these jobs - this is
completely arbitrary; it's just a fairly large range of unassigned
ports, so it shouldn't conflict with anything.
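As a sketch, each consumer can do something like this with
prometheus_client (the metric name is illustrative, and the helpers in
the loop are assumed, not real functions):

    from prometheus_client import Counter, start_http_server

    MESSAGES_PROCESSED = Counter(
        "consumer_messages_processed_total",
        "Number of stream messages processed by this consumer",
    )

    def run_consumer(port=25010):
        """Expose metrics on the job's assigned port, then start consuming."""
        start_http_server(port)
        for message in read_event_stream():  # assumed helper
            handle_message(message)  # assumed helper
            MESSAGES_PROCESSED.inc()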
I'm not a fan of how much hard-coding is involved here for the different
ports and jobs in the Prometheus config, but it's also not a big deal.
This enables me to set a ban expiry time for a user (manually, in the
database). By doing so:
* The user's page will say that they're temporarily banned, and show the
date their ban will be lifted.
* If the user tries to log in, it will say they're temporarily banned,
and give the specific datetime when the ban will be lifted.
* An hourly cronjob will lift any bans that have expired.
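A minimal sketch of that hourly job, assuming a ban_expiry_time column
alongside the existing banned flag (model and column names are
assumptions):

    from datetime import datetime, timezone

    def lift_expired_bans(db_session):
        """Un-ban any users whose temporary ban has expired (run hourly)."""
        now = datetime.now(timezone.utc)
        expired_users = (
            db_session.query(User)
            .filter(User.is_banned.is_(True), User.ban_expiry_time <= now)
            .all()
        )
        for user in expired_users:
            user.is_banned = False
            user.ban_expiry_time = None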
I get a fair number of "forgot password" emails where the person is
actually trying to log in with the wrong username. Normally, a login
system shouldn't display whether the username or password was the
incorrect part, but since it's already public information which
usernames exist on Tildes (simply by visiting /user/<username>), this
really isn't meaningfully hiding anything. It would only have an effect
on the most naive attackers. I think it's an acceptable
trade-off to help out people that are inadvertently trying to log in
with the wrong username instead.
Adding the info about the coronavirus views overrode this block; doing
it this way lets us show either one (or both, which probably shouldn't
happen, but could).
This is a Prometheus exporter that allows checking IPv4 and IPv6
responses, among other things. This sets it up to make sure that the
site is responding over both IPv4 and IPv6, so that I can monitor and
set up an alert if either stops working.