This starts using webassets for the site-icons.css file inside the base
template so that a cache-busting "version" string is added after the
filename as a query-string parameter (as was already being done with the
other CSS and JS files).
It also creates a new service that's triggered by a "path changed" event
on site-icons.css, which causes gunicorn to reload. This should mean
that whenever the site-icons.css file is updated by the cronjob that
generates it, gunicorn will automatically reload and update the
cache-busting string for the CSS file, causing users' browsers to update
to the newest version.
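For reference, the webassets side is roughly this shape (a sketch using
pyramid_webassets; the bundle name and path are illustrative, and the
bundle only needs an output path because the cronjob generates the file
itself):

    from webassets import Bundle

    def includeme(config):
        # no source contents or filters: the cronjob writes the file,
        # and webassets is only used so the base template can generate
        # a URL with the cache-busting "?<version>" query string
        config.add_webasset("site-icons-css", Bundle(output="css/site-icons.css"))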
The new version of Salt ("3000") seems to have a number of bugs,
including not being able to handle "unless" checks, which the Tildes
Salt states use frequently. Because of this, creating a new dev
environment currently doesn't work, so this pins Salt to the previous
stable version for now.
Here's the relevant bug for "unless" specifically:
https://github.com/saltstack/salt/issues/56131
And the overall release notes:
https://docs.saltstack.com/en/latest/topics/releases/3000.html
If a user can edit a post, they don't need the ability to view the
markdown separately, so the button doesn't need to be shown in those
cases. I'm not sure if this should be a separate permission defined
inside the ACL or not.
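For illustration, the check is roughly this shape using Pyramid's
permission API, rather than a dedicated ACL permission (names here are
approximate, not the exact Tildes ones):

    def should_show_view_markdown_button(request, post) -> bool:
        # users who can edit already see the markdown in the edit form,
        # so the separate button is redundant for them
        return not request.has_permission("edit", post)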
The "roadmap" issue boards on GitLab aren't being maintained in any
useful way, and are probably more confusing than helpful to anyone. This
replaces that link with a "Planned Features" one that goes to an issue
search for issues that have both the "Stage::Accepted" and "Feature
Request" labels, sorted by GitLab's "priority" method, which puts High
Priority issues at the top.
I removed this no_autoflush block in another recent change, but it was
still necessary, because the calls to _mark_comment_read_from_interaction
aren't inside the try: block that catches IntegrityError. This could
also be done in a different order to avoid the issue without needing to
disable autoflush, but this works fine.
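The shape of the problem, roughly (names and signatures approximate):

    from sqlalchemy.exc import IntegrityError

    with db_session.no_autoflush:
        try:
            db_session.add(notification)
            db_session.flush()
        except IntegrityError:
            db_session.rollback()

        # the queries in here would otherwise trigger an autoflush of
        # any pending rows *outside* the try: block above, so an
        # IntegrityError raised by that flush would go uncaught
        _mark_comment_read_from_interaction(request, comment)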
This should prevent a few strange behaviors related to topic visits,
such as "losing" new comments if you accidentally double-click when
entering a topic's comments.
Currently, the "grace period" is set to 2 minutes, and no new visits
will be stored until the previous visit is at least that old.
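In SQLAlchemy terms, the check is roughly this (model and column names
illustrative):

    from datetime import datetime, timedelta, timezone

    VISIT_GRACE_PERIOD = timedelta(minutes=2)

    def should_store_new_visit(db_session, user, topic) -> bool:
        # don't store another visit if this user already has one on
        # this topic that's newer than the grace period
        cutoff = datetime.now(timezone.utc) - VISIT_GRACE_PERIOD
        recent_visit = (
            db_session.query(TopicVisit)
            .filter(
                TopicVisit.user == user,
                TopicVisit.topic == topic,
                TopicVisit.visit_time >= cutoff,
            )
            .first()
        )
        return recent_visit is None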
This changes from storing only a single topic visit per user to just
storing all of them. I don't intend to keep all of these and will
probably find a way to "quantize" repeated visits soon. However, I want
to get an idea of the volume first, and also use this to see how the new
querying methods work in production.
On that note, I'm not sure that the LATERAL outer join is the best
method, but it seems interesting (and was kind of a pain in the ass in
SQLAlchemy), so we'll see how it looks.
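For the curious, the LATERAL outer join comes out looking something like
this in SQLAlchemy (simplified, with approximate names):

    from sqlalchemy import true

    # per-topic subquery for the user's most recent visit; .lateral()
    # lets it reference Topic from the enclosing query
    last_visit = (
        db_session.query(TopicVisit)
        .filter(
            TopicVisit.user == user,
            TopicVisit.topic_id == Topic.topic_id,
        )
        .correlate(Topic)
        .order_by(TopicVisit.visit_time.desc())
        .limit(1)
        .subquery()
        .lateral("last_visit")
    )

    # LEFT OUTER JOIN LATERAL ... ON true: topics with no visit still
    # come back, with NULL for the visit column
    query = db_session.query(Topic, last_visit.c.visit_time).outerjoin(
        last_visit, true()
    )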
As part of this, I also changed the method of adjusting num_comments on
past topic visits to be done entirely in triggers, instead of the
previous approach of doing it in _increment_topic_comments_seen().
However, this really just made me realize how incorrect this idea is and
how many edge cases can come up that will mess up the comment counters
on the visits (e.g. post a comment and then delete it immediately).
Hopefully this can go away in the somewhat near future with some other
changes to notifications.
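As a sketch of the general shape of the trigger approach (written
alembic-style with invented names; the real triggers differ, and as
described above this doesn't handle the messier edge cases):

    # in an alembic migration, where `op` is available
    op.execute(
        """
        CREATE FUNCTION increment_author_visit_comments() RETURNS TRIGGER AS $$
        BEGIN
            -- count the author's own new comment as already-seen on
            -- their previous visits to the topic
            UPDATE topic_visits
                SET num_comments = num_comments + 1
                WHERE topic_id = NEW.topic_id
                    AND user_id = NEW.user_id;

            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER increment_author_visit_comments_insert
            AFTER INSERT ON comments
            FOR EACH ROW EXECUTE PROCEDURE increment_author_visit_comments();
        """
    )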
This is just using the recommendations from PGTune for a web application
hosted on a server with the prod server's specs. I'm sure they're not
the best possible values, but they should be better than the defaults.
Previously, this feature was disabled by default. However, despite being
one of the best features on the site, only about 10% of users ever
enabled it, and even very involved/frequent users often didn't realize
it existed.
My original reason for making it opt-in was that I thought it had a
meaningful privacy impact, but it really doesn't. User visits to topics
are already tracked through server logs and similar data, so the feature
doesn't really make any difference.
This commit enables the feature for everyone, removes the separate
Settings page, and moves the "Collapse old comments" sub-setting onto
the main Settings page.
Previously, if an event stream consumer hit an error when processing a
message, it would crash and restart, and the message that caused the
error would be left in "pending" status for that consumer forever while
the consumer continued processing new messages.
This commit adds some more deliberate handling of messages that cause
errors (sketched in code after this list):
* When a consumer starts, it will try to read pending messages first. If
the error that previously caused a crash was transient, the message
should be processed successfully on the retry.
* If a particular message causes the consumer to crash 3 times, it will
be considered "dead" and moved out of the consumer's pending list into
one specifically for dead messages. These dead queues can be monitored
and inspected manually to look into failures, while the consumer can
still continue processing new messages.
* After clearing or processing all pending messages, consumers go back
to waiting for and processing new messages.
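Roughly, the consumer loop now looks like this (a sketch using redis-py;
the stream/group names, the dead-queue naming, and the process()
callback are all hypothetical):

    import redis

    MAX_DELIVERY_ATTEMPTS = 3

    def consume(client: redis.Redis, stream: str, group: str, name: str) -> None:
        # "0" re-reads this consumer's own pending messages; once those
        # are cleared, ">" blocks waiting for new ones
        read_id = "0"
        while True:
            response = client.xreadgroup(
                group, name, {stream: read_id}, count=1, block=0
            )
            messages = response[0][1] if response else []
            if not messages:
                # pending list is clear, go back to new messages
                read_id = ">"
                continue

            for message_id, fields in messages:
                # check how many times this message has been delivered
                info = client.xpending_range(
                    stream, group, min=message_id, max=message_id, count=1
                )
                if info and info[0]["times_delivered"] > MAX_DELIVERY_ATTEMPTS:
                    # "dead" message: move it to a separate stream for
                    # manual inspection, and ack it on the original
                    client.xadd(f"{stream}-dead", fields)
                    client.xack(stream, group, message_id)
                    continue

                process(fields)  # a crash here leaves the message pending
                client.xack(stream, group, message_id)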
This should make it a little clearer what the Ignore function does (as
opposed to seeming like it might ignore the user that posted the topic,
one of the topic's tags, etc.).
Not a huge fan of this implementation, but it seems to work okay.
This removes RabbitMQ as well as everything else attached to it:
Erlang; the Prometheus collector; the pg-amqp-bridge and all PostgreSQL
functions and triggers; and the amqpy Python package and the Tildes code
that used it.
Note that this commit does not actually uninstall or delete any of these
packages or services, so if you have a running instance that you want to
keep (instead of re-provisioning from scratch), you will need to
manually remove them if you want them completely gone.
RabbitMQ was used to support asynchronous/background processing tasks,
such as determining word count for text topics and scraping the
destinations or relevant APIs for link topics. This commit replaces
RabbitMQ's role (as the message broker) with Redis streams.
This included building a new "PostgreSQL to Redis bridge" that takes
over the previous role of pg-amqp-bridge: listening for NOTIFY messages
on a particular PostgreSQL channel and translating them to messages in
appropriate Redis streams.
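The core of the bridge is essentially a LISTEN loop (sketched here with
psycopg2 and redis-py; the channel name and payload format are
illustrative, not the actual ones):

    import json
    import select

    import psycopg2
    import redis

    pg_conn = psycopg2.connect("dbname=tildes user=tildes")
    pg_conn.autocommit = True
    redis_conn = redis.Redis()

    with pg_conn.cursor() as cursor:
        cursor.execute("LISTEN postgresql_events;")

    while True:
        # block until at least one NOTIFY arrives on the connection
        select.select([pg_conn], [], [])
        pg_conn.poll()
        while pg_conn.notifies:
            notify = pg_conn.notifies.pop(0)
            # assume the trigger sends a JSON payload naming the
            # destination stream (e.g. "comments.insert") and the fields
            payload = json.loads(notify.payload)
            redis_conn.xadd(payload["stream"], payload["fields"])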
One particular change of note is that the names of message "sources"
were adjusted a little and standardized. For example, the routing key
for a message caused by a new comment was previously "comment.created",
but is now "comments.insert". Similarly, "comment.edited" became
"comments.update.markdown". The new naming scheme uses the table name,
the proper name of the SQL operation, and (where relevant) the affected
column name, instead of the previous unpredictable terms.
The "keyset"-style pagination that Tildes uses for topic listings uses
WHERE and ORDER BY clauses that involve multiple columns to keep a
deterministic ordering even when the values in the main sort column are
equal. For example, when sorting by number of votes, you're actually
ordering by num_votes DESC, topic_id DESC. The previous single-column
indexes were a little inefficient for this and couldn't always be used
well.
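Concretely, a page of the votes sort looks something like this
(illustrative SQLAlchemy, anchored at the last topic of the previous
page):

    from sqlalchemy import and_, or_

    query = (
        db_session.query(Topic)
        .filter(
            # everything strictly "after" the anchor in
            # (num_votes, topic_id) descending order
            or_(
                Topic.num_votes < anchor.num_votes,
                and_(
                    Topic.num_votes == anchor.num_votes,
                    Topic.topic_id < anchor.topic_id,
                ),
            )
        )
        .order_by(Topic.num_votes.desc(), Topic.topic_id.desc())
        .limit(50)
    )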
This commit extends all of the relevant indexes to composite ones that
contain topic_id as well, and drops all of the original ones. This
should be more efficient, and should probably be done to indexes on the
comments table as well.
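In SQLAlchemy terms, the new indexes are along these lines (the index
name is illustrative):

    from sqlalchemy import Index

    # a composite index that matches the two-column ORDER BY exactly
    Index(
        "ix_topics_num_votes_desc_topic_id_desc",
        Topic.num_votes.desc(),
        Topic.topic_id.desc(),
    )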
The new indexes generate a significantly better execution plan for the
query; I think using one of the columns from the join condition helps
the query planner do this properly.
If the footer stretches to two lines (should only happen on mobile and
when there are new comments), this aligns the dropdown button more
towards the bottom of the topic, which looks better.