Installs the Nu Html Checker and starts using it to validate the home
page's HTML: https://validator.github.io/validator/
Also includes fixes to some lists that were nested in an invalid way.
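If it's wired into the tests the same way the Tidy version was, the
validation can look roughly like this (a sketch only: the vnu.jar path
and the fixture name are assumptions, not the actual setup):

    import json
    import subprocess

    def test_homepage_html_validates(webtest_loggedout):
        """Validate the home page's HTML with the Nu Html Checker."""
        response = webtest_loggedout.get("/")

        # vnu.jar reads the document from stdin when passed "-", and
        # with --format json it writes its report as JSON (to stderr)
        result = subprocess.run(
            ["java", "-jar", "/opt/vnu/vnu.jar", "--format", "json", "-"],
            input=response.body,
            capture_output=True,
        )
        messages = json.loads(result.stderr)["messages"]
        assert not messages, messages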
I mistakenly assumed that not setting the cookiejar argument when
creating a webtest TestApp would mean that no cookies would be retained
between requests, but that's wrong. If you don't pass a cookiejar, it
just automatically creates one for you. Because of this, logged-out
webtests would end up being logged-in after any test logged in.
This reduces the webtest_loggedout fixture's scope to function-level so
that it will be re-created for every test instead. It also stops
passing a cookiejar for the logged-in webtest, since that's unnecessary.
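The fixed fixture looks roughly like this (a minimal sketch; the
fixture and app names are assumptions):

    from pytest import fixture
    from webtest import TestApp

    @fixture(scope="function")
    def webtest_loggedout(app):
        """Create a logged-out webtest TestApp, one per test.

        TestApp always has a cookiejar - if you don't pass one in, it
        creates one automatically, so cookies (like a session cookie
        from logging in) persist between requests on the same instance.
        """
        return TestApp(app)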
This reverts commit cb7be83877.
HTML Tidy seems to have various gaps in its validation that we've found
already, including one that's pretty much a deal-breaker for Tildes's
HTML: it doesn't think that <menu> is a valid parent for <li>.
We're still looking at alternative validators.
Adds the HTML Tidy library to the dev version, along with the pytidylib
wrapper for it, and a couple of tests that use it to validate the HTML
of the home page.
Includes a fix to the GitLab "Planned features" link, which Tidy
considers invalid because it includes some unencoded characters.
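The tests are along these lines (a minimal sketch using pytidylib's
tidy_document; the fixture name is an assumption):

    from tidylib import tidy_document

    def test_homepage_tidy_validation(webtest_loggedout):
        """Check the home page's HTML with HTML Tidy."""
        response = webtest_loggedout.get("/")

        # tidy_document returns the cleaned-up document and a string of
        # warnings/errors - an empty string means Tidy found no problems
        _document, errors = tidy_document(
            response.text, options={"show-warnings": 1}
        )
        assert not errors, errors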
This was not a fun upgrade. webargs made some major changes to its
approaches in 6.0, which are mostly covered here:
https://webargs.readthedocs.io/en/latest/upgrading.html
To keep using it on Tildes, this commit had to make the following
changes:
- Write my own wrapper for use_kwargs that changes some of the default
behavior (see the sketch after this list). Specifically, we want the
location that data is loaded from to default to "query" (the query
string) instead of webargs' default of "json". We also needed to set
the "unknown" behavior on every schema to "exclude" so that schemas
would ignore any data fields they didn't need; the default behavior is
to throw an error, which would happen almost everywhere because of
Intercooler variables and/or multiple use_kwargs calls for different
subsets of the data.
- All @pre_load hooks in schemas needed to be rewritten so that they
weren't modifying data in-place (copy to a new data dict first).
Because webargs is now passing all data through all schemas,
modifying in-place could result in an earlier schema modifying data
that would then be passed in modified form to the later ones.
Specifically, this caused an issue with tags on posting a new topic,
where we just wanted to treat the tags as a string, but TopicSchema
would convert it to a list in @pre_load.
- use_kwargs on every endpoint using non-query data needed to be
updated to support the new single-location approach, either replacing
an existing locations= with location=, or adding location="form",
since form data was no longer used by default.
- The code that parsed the errors returned by webargs/Marshmallow
ValidationErrors needed to be updated to handle the additional "level"
in the dict of errors: errors are now split out by location and then
field, instead of only by field.
- A few other minor updates, like always passing a schema object
instead of a class, and never passing a callable (mostly just for
simplicity in the wrapper).
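To illustrate the first two points, the wrapper and a rewritten hook
look roughly like this (a simplified sketch - the names and details
are not exactly the real code):

    from marshmallow import EXCLUDE, fields, pre_load, Schema
    from webargs.pyramidparser import use_kwargs as webargs_use_kwargs

    def use_kwargs(schema, location="query", **kwargs):
        """use_kwargs wrapper with different defaults.

        Data is loaded from the query string unless otherwise
        specified, and unknown fields are excluded instead of raising
        an error (the same request data often includes Intercooler
        variables and/or fields consumed by a different use_kwargs
        call).
        """
        schema.unknown = EXCLUDE
        return webargs_use_kwargs(schema, location=location, **kwargs)

    class TopicSchema(Schema):
        tags = fields.List(fields.String())

        @pre_load
        def prepare_tags(self, data, **kwargs):
            """Split tags into a list, without modifying in-place."""
            new_data = data.copy()
            if "tags" in new_data:
                # webargs 6 passes the same dict through every schema,
                # so an in-place change here would leak into later ones
                new_data["tags"] = [
                    tag.strip() for tag in new_data["tags"].split(",")
                ]
            return new_data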
I thought this would be a larger task due to so many of the tools
updating to new versions, but the only thing necessary for this upgrade
was updating the name of one of the disabled pylint errors.
I temporarily pinned two packages that will require more significant
updates (webargs in requirements and prospector in requirements-dev).
Other than those, everything seemed to upgrade cleanly, except for an
issue with mypy that needed a "type: ignore" comment to work around.
Note that there is currently an issue with Salt's pip module being
unable to handle comments in a requirements file that include "-r", so
after running pip-tools I had to manually edit the two .txt files to
remove all lines with "via -r" comments in them. I've commented about
this in an issue on Salt's repo here:
https://github.com/saltstack/salt/issues/56514#issuecomment-665947887
Previously, when checking whether a link had been posted before, there
was no time restriction, so even posts from years ago would come up.
This restricts it to only the last 6 months, which I think is a pretty
reasonable time period for reposting.
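The restriction amounts to adding a filter like this to the
previous-posts query (a sketch; it assumes SQLAlchemy and a utc_now()
helper, and the names are assumptions):

    from datetime import timedelta

    # only consider previous posts of the same link from the last 6 months
    query = query.filter(
        Topic.created_time > utc_now() - timedelta(days=180)
    )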
This isn't great, but it will fix an error that's actively occurring
when someone filters to a single tag (the tag= query var) and also has
a filtered topic tag with a space in it.
The "outer" width/height functions also include padding and border. Not
including these didn't make a noticeable difference for the left/right
flipping (the omissions almost canceled each other out), but the
discrepancy is much more noticeable on the top/bottom flipping.
Use bottom: 100% to make sure the menu does not overlap the button (as
it does with bottom: 0). If the menu overlaps the button, it interferes
with the button's click handler.
Tags are stored in the search index as space-separated strings
with the periods removed. Searches for "parent.child" tags
were failing because of the period.
Removing periods is okay for now because URL domains are not
currently indexed for search.
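In other words, the indexed string is built by something like this (a
sketch; the names are assumptions):

    # a "parent.child" tag is indexed as "parentchild", so searching
    # for the full tag no longer fails on the period
    indexed_tags = " ".join(tag.replace(".", "") for tag in topic.tags)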
Trying to change the mode of this file (which often already exists)
fails on Windows. It seems fine to just not set it and let it be set to
the default.
This message is getting pretty outdated now, and should probably be done
in a different way regardless so that it doesn't need to be in the code,
especially since forks won't want the same message (or any message).
A better approach would probably be a consumer or cronjob watching for
new registrations in the event stream.
Prevents a scrollbar from showing up when there is a
subscript on the last line of text.
Another option would have been overflow-y: hidden,
but that clips the text in the (pathological?) case
of deeply nested subscripts.
The generate_site_icons_css cronjob will create this file, but the site
won't work before it exists, so there's a (less than 5 min) gap where
the site is broken when first set up. This probably won't be noticeable
in dev/prod setups, but breaks things like CI setups where everything is
getting created freshly each time.
This makes sure that the file always exists on initial setup and
whenever the Salt states are re-run.
Fixes provisioning of a new VM.
Old versions like 2019.2.3 may be moved to an archive,
causing the download to fail with an HTTP 404 error.
Relaxing the pinned version allows setup to find
newer patch releases, such as 2019.2.5.
More info:
752768b1ff/accepted/0022-old-releases.md
By default, new top-level comments will only be allowed in the latest
topic from a particular set of scheduled topics. Replies to existing
comments in old topics will still be allowed - this is just intended to
prevent the cases where an old scheduled topic gets bumped back up due
to a reply and people inadvertently start adding new top-level comments
to it instead of the latest one.
This should be the correct behavior for most scheduled topics, but it
can be disabled for a particular schedule if needed.
This adds a new latest_topic_id column to topic_schedule and uses
triggers on the topics table to keep it correct.
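As a rough illustration, the migration is along these lines (a sketch
of an Alembic migration; the names are assumptions, and the real
triggers need to cover more cases, such as topics being deleted):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column(
            "topic_schedule",
            sa.Column(
                "latest_topic_id", sa.Integer, sa.ForeignKey("topics.topic_id")
            ),
        )

        # keep latest_topic_id pointed at the newest topic posted from
        # each schedule
        op.execute(
            """
            CREATE FUNCTION update_schedule_latest_topic() RETURNS TRIGGER AS $$
            BEGIN
                UPDATE topic_schedule
                    SET latest_topic_id = NEW.topic_id
                    WHERE schedule_id = NEW.schedule_id;
                RETURN NULL;
            END;
            $$ LANGUAGE plpgsql;

            CREATE TRIGGER update_schedule_latest_topic
                AFTER INSERT ON topics
                FOR EACH ROW
                WHEN (NEW.schedule_id IS NOT NULL)
                EXECUTE PROCEDURE update_schedule_latest_topic();
            """
        )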
This isn't really ideal, but it will simplify a few things related to
scheduled topics by quite a bit. For example, this commit also uses that
new data to much more easily populate the list of scheduled topics in a
group's sidebar, which previously required a subquery and windowing.
I think overall this is triggering more than I want, and getting in the
way of perfectly reasonable conversations. I still like the idea, but
it needs adjusting.
Coronavirus topics have slowed down greatly now, with generally only
about 3 per day, and are almost all restricted to ~health.coronavirus,
so users can easily find (or avoid) them by just using that group.
Previously, the comment reply form was being created entirely
client-side by cloning and modifying a <template>. This was nice because
it meant that a network request wasn't necessary to display the form,
but it also had downsides.
For example, if a topic was locked after a user had already loaded the
page (or their notifications page with a comment from that topic), they
would still be able to click Reply and type in a comment, and wouldn't
know that replying wasn't possible until they actually tried to submit
the comment.
By switching to using Intercooler for this form, we can do server-side
validation to check permissions before showing the form. It also
simplifies some other aspects, such as the warning about replying to an
old comment, which previously needed a data-js-old-warning-age
attribute in the HTML but is now just part of generating the reply form
template server-side.
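Very roughly, the endpoint that Intercooler fetches the form from can
look like this (a sketch of a Pyramid view; the route, template, and
permission names are assumptions):

    from pyramid.view import view_config

    @view_config(
        route_name="comment_reply_form",
        renderer="comment_reply_form.jinja2",
        permission="reply",
    )
    def get_comment_reply_form(request):
        """Render the reply form for a comment."""
        # Pyramid checks the "reply" permission against the comment
        # before this view runs, so replying to a locked topic gets an
        # error response instead of a form that can't be submitted
        return {"comment": request.context}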
If a comment is removed and then deleted by its author, we should
continue showing it as removed, since that's the more significant action
(and the deletion is usually *because* of the removal).
This will probably only ever be relevant in development environments,
but we don't want the topic scheduler to always post a full backlog of
scheduled topics when it hasn't run for a while. For example, if a dev
environment has a daily scheduled topic set up, but the VM is not
launched for a week, the next time the "post scheduled topics" cronjob
runs, it will post all 7 of the backlogged topics.
This commit changes the script so that it advances the schedule to the
next *future* occurrence, instead of continuing the backlog.
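The change amounts to something like this (a sketch, assuming the
schedule stores a dateutil rrule with a timezone-aware dtstart; the
attribute names are assumptions):

    from datetime import datetime, timezone

    def advance_schedule(schedule):
        """Advance the schedule past any backlogged occurrences."""
        now = datetime.now(timezone.utc)

        # rrule.after() returns the first occurrence strictly after
        # "now", so a week of missed daily topics results in a single
        # advance instead of 7 backlogged posts
        schedule.next_post_time = schedule.recurrence_rule.after(now)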
Previously, TopicQuery was excluding ignored topics by default. However,
this caused some unexpected issues, such as a crash when someone tried
to vote on a topic after ignoring it. I think it's more intuitive to
reverse the logic like this: include the ignored topics by default, and
only specifically exclude them in the cases where that's necessary.
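The opt-in version is essentially a filter like this (a sketch; the
model and relationship names are assumptions):

    from sqlalchemy import not_

    def exclude_ignored(query, user):
        """Exclude topics the user has ignored (now opt-in)."""
        # listing pages call this explicitly; actions like voting
        # don't, so voting on an already-ignored topic no longer crashes
        return query.filter(
            not_(Topic.ignores.any(TopicIgnore.user == user))
        )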
This adds some very simple metrics to all of the background jobs that
consume the event streams. Currently, the only "real" metric is a
counter tracking how many messages have been processed by that consumer,
but a lot of the value will come from being able to utilize the
automatic "up" metric provided by Prometheus to monitor and make sure
that all of the jobs are running.
I decided to use ports starting from 25010 for these jobs. This is
completely arbitrary; it's just a fairly large range of unassigned
ports, so it shouldn't conflict with anything.
I'm not a fan of how much hard-coding is involved here for the different
ports and jobs in the Prometheus config, but it's also not a big deal.
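Each consumer exposes its metrics roughly like this (a sketch using
prometheus_client; the metric and function names are assumptions):

    from prometheus_client import Counter, start_http_server

    # each consumer gets its own port in the (arbitrary) range starting
    # at 25010, and Prometheus scrapes it there
    start_http_server(25010)

    MESSAGES_PROCESSED = Counter(
        "consumer_messages_processed_total",
        "Number of stream messages this consumer has processed",
    )

    def process_message(message):
        # ... handle the message, then count it ...
        MESSAGES_PROCESSED.inc()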
This enables me to set a ban expiry time for a user (manually, in the
database). By doing so:
* The user's page will say that they're temporarily banned, and show the
date their ban will be lifted.
* If the user tries to log in, it will say they're temporarily banned,
and give a specific datetime that the ban will be lifted by.
* An hourly cronjob will lift any bans that have expired (see the
sketch below).
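The cronjob boils down to something like this (a sketch; the column
names and utc_now() helper are assumptions):

    def lift_expired_bans(db_session):
        """Un-ban any users whose temporary ban has expired."""
        db_session.query(User).filter(
            User.is_banned == True,  # noqa: E712 (SQLAlchemy expression)
            User.ban_expiry_time <= utc_now(),
        ).update(
            {"is_banned": False, "ban_expiry_time": None},
            synchronize_session=False,
        )
        db_session.commit()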
I get a fair number of "forgot password" emails where the person is
actually trying to log in with the wrong username. Normally, a login
system shouldn't display whether the username or password was the
incorrect part, but since it's already public information which
usernames exist on Tildes (simply by visiting /user/<username>), this
really isn't meaningfully hiding anything; it would only deter the most
naive attackers. I think it's an acceptable trade-off to help out
people who are inadvertently trying to log in with the wrong username
instead.