Nothing4You
@Nothing4You@programming.dev
- Comment on Defederation issues between Lemm.ee and Lemmy.ml 5 days ago:
this doesn’t just affect lemmy.ml.
it seems that lemmy.ml -> lemm.ee was somehow fixed yesterday, but there are several other instances that also have issues sending to lemm.ee:
- hexbear.net: broken from 2024-10-23, fixed since 2024-10-25
- lemmy.blahaj.zone: broken since 2024-10-24
- lemmy.ml: broken from 2024-11-01, seems fixed since 2024-11-16
- startrek.website: broken since 2024-11-15
- Comment on Defederation issues between Lemm.ee and Lemmy.ml 2 weeks ago:
this seems more of a federation issue than a defederation issue ;)
- Comment on federation issue to instance 5 weeks ago:
do you happen to have experience with setting up influxdb and telegraf? or maybe something else that might be better suited?
the metrics are currently in prometheus metrics format and scraped every 5 minutes.
my idea was to keep the current retention for most metrics and have longer retention (possibly with lower granularity for data older than a month).
the current prometheus setup is super simple, you can see (an older copy of) the config here.
if you want to build a configuration for influxdb/telegraf that i can more or less just drop in there without too many adjustments, that would certainly be welcome.
the metric that would need longer retention is lemmy_federation_state_last_successful_id_local.
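telegraf’s [[inputs.prometheus]] input pointed at the metrics endpoint would probably be the standard route. just to illustrate the data flow, here is a minimal python sketch that scrapes the endpoint and forwards only that metric to influxdb; the url, bucket, org and token are placeholders, not the real setup.

```python
# sketch: scrape the prometheus-format endpoint and forward one metric
# to influxdb 2.x, keeping the labels as tags. all connection details
# below are placeholders.
import requests
from prometheus_client.parser import text_string_to_metric_families
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

METRICS_URL = "https://example.com/metrics"  # hypothetical scrape target
KEEP = "lemmy_federation_state_last_successful_id_local"

resp = requests.get(METRICS_URL, timeout=30)
resp.raise_for_status()

points = []
for family in text_string_to_metric_families(resp.text):
    for sample in family.samples:
        if sample.name != KEEP:
            continue
        point = Point(sample.name)
        for key, value in sample.labels.items():
            point = point.tag(key, value)
        points.append(point.field("value", float(sample.value)))

with InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="ORG") as client:
    client.write_api(write_options=SYNCHRONOUS).write(bucket="federation", record=points)
```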
- Comment on federation issue to instance 5 weeks ago:
I’ll probably have to look at a different storage backend than prometheus, aiui it’s not really well suited for this task.
maybe something with influxdb+telegraf, although i haven’t looked at that yet.
- Comment on federation issue to instance 1 month ago:
so all you’re looking for is the amount of activities generated per instance?
that is only a small subset of the data currently collected; most of the storage use comes from collecting information in relation to other instances.
- Comment on federation issue to instance 1 month ago:
Hi, I run this.
What benefit do you expect from longer retention periods and how much time did you have in mind?
The way data is currently collected and stored keeps the same granularity for the entire time period, which currently uses around 60 GiB for a month of retention across all monitored instances.
- Comment on Not being able to see any content in a community 1 month ago:
you may have broken your language settings? check in your account settings. the posts are all tagged as English, so you’ll want to have at least English and undefined selected.
- Comment on Cannot stay logged in on desktop 2 months ago:
- Comment on What's the deal with lemmy.world today? 2 months ago:
recent lemmy updates made some improvements on the receiving side; parallel sending on the sending side is not yet part of a release. it’ll also likely take some time for that to be deployed on lemmy.world, as those changes will first be tested by other production instances. my activitypub-federation-queue-batcher is currently used by at least 2 other high latency instances and would address the issue at the cost of a small (like 3 bucks or so) vps in Europe and some time investment for the setup.
- Comment on Transfer posts, saves, and comments to a different Lemmy account 3 months ago:
You can export/import your account settings on the settings page, which also includes the following data:
- subscribed communities
- saved posts
- saved comments
- blocked communities
- blocked users
- blocked instances
There is, however, no way to associate content you previously posted/commented with your new account.
You might need to import the file multiple times to get everything imported.
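if you’d rather script the repeat imports than click through the UI, a rough sketch along these lines should work. i’m assuming the 0.19-era /api/v3/user/export_settings and /api/v3/user/import_settings endpoints here; double-check them against your instance’s version.

```python
# sketch: export the settings backup from the old account, then import
# it into the new one a few times. endpoint names assume a 0.19-era
# instance with bearer-token auth; verify before relying on this.
import time
import requests

OLD = ("https://old.example", "OLD_ACCOUNT_JWT")  # hypothetical instances/tokens
NEW = ("https://new.example", "NEW_ACCOUNT_JWT")

backup = requests.get(
    f"{OLD[0]}/api/v3/user/export_settings",
    headers={"Authorization": f"Bearer {OLD[1]}"},
    timeout=60,
).json()

for attempt in range(3):  # each pass can pick up items the previous one missed
    r = requests.post(
        f"{NEW[0]}/api/v3/user/import_settings",
        headers={"Authorization": f"Bearer {NEW[1]}"},
        json=backup,
        timeout=300,
    )
    print(f"import attempt {attempt + 1}: HTTP {r.status_code}")
    time.sleep(60)  # give the instance a moment between passes
```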
- Comment on Low Levels of Aussie Zone association with Lemmy in Search. 3 months ago:
Posts have a canonical reference to the originating instance, e.g. this post contains
<link data-inferno-helmet="true" rel="canonical" href="https://aussie.zone/post/11962005">
for me. This is a hint for search engines to ignore this copy and instead index the original. The same already works for communities; this community contains
<link data-inferno-helmet="true" rel="canonical" href="https://aussie.zone/c/meta">.
Not sure if DDG is just ignoring this or there’s another reason for it to show up multiple times.
- Comment on Unread Count Says 1, While Inbox is Empty 5 months ago:
I’ve submitted a PR to fix this, it might still make it into 0.19.4.
fyi @DABDA@lemm.ee
- Comment on Unread Count Says 1, While Inbox is Empty 5 months ago:
curious, what do you mean by checking them?
- Comment on Unread Count Says 1, While Inbox is Empty 5 months ago:
if you open lemmy.world/api/v3/user/unread_count after being logged in, it should at least tell you what kind of unread message it is.
with that information it can probably be narrowed down a bit.
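for convenience, a quick python version of that check (assuming 0.19-style bearer auth; the response has replies, mentions and private_messages fields):

```python
# sketch: see which category the stray unread count is hiding in.
import requests

JWT = "YOUR_LOGIN_TOKEN"  # placeholder
resp = requests.get(
    "https://lemmy.world/api/v3/user/unread_count",
    headers={"Authorization": f"Bearer {JWT}"},
    timeout=30,
)
resp.raise_for_status()
# e.g. {"replies": 1, "mentions": 0, "private_messages": 0}
for kind, count in resp.json().items():
    if count:
        print(f"unread {kind}: {count}")
```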
i don’t think this is related to an inconsistency with blocked users, as that is only being fixed in 0.19.4 or 0.19.5: github.com/LemmyNet/lemmy/issues/4227
moderated or deleted comments, as mentioned by others, don’t look like the cause when i’m looking at the 0.19.3 code.
the bot reply mentioned by @DABDA@lemm.ee seems like a very plausible explanation, as bot accounts are hidden from the comment reply list in the api, but they’re not currently excluded from the notification count.
i’ll have a look at whether that is still the case in the current development version in a bit and submit a pr to fix that if it is.
- Comment on Are there any updates on the ongoing federation delays with lemmy.world? 5 months ago:
lemmy’s current federation implementation works with a sending queue, so it stores a list of activities to be sent in its database. there is a worker running for each linked instance that checks whether an activity should be sent to that instance and, if so, sends it. due to how this is currently implemented, only a single activity is ever sent at a time: the worker waits for the activity to be successfully sent (or rejected) before sending the next one.
an activity is any federation message with which an instance informs another instance about something happening. this includes posts, comments, votes, reports, private messages, moderation actions, and a few others.
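to make the one-at-a-time behaviour concrete, here’s a toy model of that per-instance worker in python. the real implementation is in rust and more involved; this only shows why throughput is capped at one activity per round trip.

```python
# toy model of lemmy's per-instance send worker: exactly one activity
# is in flight at any time, so throughput can never exceed 1/round_trip.
import queue
import time

def send_worker(outbox: "queue.Queue[str]", round_trip_s: float) -> None:
    while True:
        activity = outbox.get()   # next queued activity for this instance
        time.sleep(round_trip_s)  # stand-in for the POST + waiting on the response
        print(f"delivered {activity}")
        # only now does the loop pick up the next activity
```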
let’s assume an activity is generated on lemmy.world every second. every second, the worker sends this activity from helsinki to sydney, waits for the response, then waits for the next activity to become available. to simplify things, i’ll skip processing time in this example and just work with raw latency, based on the number you provided. lemmy.world sends an activity to sydney, which takes approximately 160ms. aussie.zone immediately responds, and the response takes another 160ms to get back to helsinki, so the entire round trip takes 320ms. as long as only one activity is generated per second, this is easy to keep up with. still assuming no other processing time, this means at most about 3.125 activities per second can be transmitted from lemmy.world to aussie.zone on average.
the real activity generation rate on lemmy.world is quite a bit higher than 3.125 activities per second, and in reality there are also other things that take up time during this process. over the last 7 days, lemmy.world had an average activity generation rate of about 5.45 activities per second. it is important to note that not all activities generated on an instance will be sent to every other linked instance, so this isn’t a reliable number for how many activities are actually supposed to be sent to aussie.zone every second, but rather an upper limit. for content in a community, for example, lemmy will only send activities to instances that have at least one subscriber to that community. private messages, although only a fraction of all activities, are another example: they are only sent to a single linked instance.
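putting those two rates together shows how quickly the backlog builds, even in this simplified model:

```python
# rough arithmetic from the numbers above (ignoring processing overhead)
round_trip = 0.320         # seconds, helsinki -> sydney -> helsinki
max_rate = 1 / round_trip  # ~3.125 activities/s deliverable at best
gen_rate = 5.45            # activities/s generated on lemmy.world (7-day avg)

# gen_rate is an upper bound: not everything generated goes to aussie.zone
deficit = gen_rate - max_rate  # ~2.3 activities/s shortfall
print(f"{deficit * 86400:,.0f} activities of extra lag per day")  # ~200,000
```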
to answer the original question: the week of delay is simply built up over time, as the amount of lag just keeps growing.
additionally, once a week lemmy discards queued activities that are older than a week, so if you stay above 7 days of lag for too long you will start to completely miss the activities that were over the limit. as previously explained, this can be any kind of federated content: posts, comments and votes, which are usually not that important, but also private messages, which are then just lost without the sender ever knowing.
- Comment on Are there any updates on the ongoing federation delays with lemmy.world? 5 months ago:
it’s open source: github.com/…/activitypub-federation-queue-batcher
I strongly recommend fully understanding how it works, which failure scenarios there are and how to recover from them before deploying it in production though. not all of this is currently documented, a lot of it has just been in matrix discussions.
I also have a script to prefetch posts and comments from remote communities before they’d get through via federation, which would make them appear without votes at least, and slightly improve processing speed while they’re coming in through regular federation. this also doesn’t require any additional privileges or being in a position to intercept traffic. it is however also not enough to catch up and stay caught up.
this script is not open source currently. while it’s fairly simple and straightforward, i just didn’t bother cleaning it up for publishing, as it’s currently still partially integrated in an unrelated tool.
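the general idea is simple enough to sketch, though: list recent posts on the remote instance, then have the local instance pull each one in via resolve_object. this is only an illustration of the approach, not the actual script; community name and tokens are placeholders.

```python
# sketch of the prefetch idea (not the actual script): list the newest
# posts in a remote community, then make the local instance fetch each
# one via resolve_object, which requires a logged-in local account.
import requests

REMOTE = "https://lemmy.world"  # where the community lives
LOCAL = "https://aussie.zone"   # instance that should prefetch
JWT = "LOCAL_ACCOUNT_TOKEN"     # placeholder

posts = requests.get(
    f"{REMOTE}/api/v3/post/list",
    params={"community_name": "technology", "sort": "New", "limit": 20},
    timeout=30,
).json()["posts"]

for post in posts:
    ap_id = post["post"]["ap_id"]  # canonical url of the post
    requests.get(
        f"{LOCAL}/api/v3/resolve_object",
        params={"q": ap_id},
        headers={"Authorization": f"Bearer {JWT}"},
        timeout=60,
    )
```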
I previously tried offering to deploy this on matrix but one of my attempts to open a conversation was rejected and the other one never got accepted.
- Comment on Are there any updates on the ongoing federation delays with lemmy.world? 5 months ago:
yes, that’s about the second best option for the time being.
it’s currently used by reddthat.com and lemmy.nz.
disclaimer: i wrote that software.
- Comment on Are there any updates on the ongoing federation delays with lemmy.world? 5 months ago:
github.com/LemmyNet/lemmy/pull/4623 is on the 0.19.5 milestone; until parallel sending is implemented there won’t be any benefit from parallel receiving.
0.19.4 will already have some improved logic for backgrounding some parts of the receiving logic to speed that up a little, but that won’t be enough to deal with this.
- Comment on Are there any updates on the ongoing federation delays with lemmy.world? 5 months ago:
stating that it’s an issue on our end as our server isn’t keeping up
this isn’t exactly an issue on your end, unless you consider hosting the server in Australia your issue. the problem is the latency across the world combined with lemmy not sending multiple activities simultaneously. there is nothing LW can do about this. as unfortunate as it is, the “best” solution at this time would be moving the server to Europe.
there are still some options besides moving the server entirely though. if you can get the activities to lemmy without as many delays, an experience similar to being hosted in Europe can be achieved.
- Comment on Why do comments from lemmy.world users not appear until 4 days later? 5 months ago:
aussie.zone is just not keeping up with the amount of activities generated on lemmy.world.
it’s not going lower than a week anymore without actively doing something to improve the situation, and once a week all activities older than a week that haven’t been received from LW yet will be discarded on LW’s end.
this is most likely caused primarily by the latency from LW (finland) to aussie.zone, as lemmy only sends one activity at a time, requiring a round trip across the world for every single activity before the next one is sent.
there are a couple things that can cause comments to show up on aussie.zone before they would regularly federate, such as someone searching the post/comment url on aussie.zone. some clients will do this automatically if you click a link to a post/comment from another instance.
- Comment on federation issue to instance 6 months ago:
lemmy currently only sends one activity per receiving instance at a time, so there is a round trip for every single post, comment, vote, etc., before the next activity will be sent. you wouldn’t see any increased number of connections, as there’s only a single one.
do you have access logs for /inbox with lemmy.world’s user agent? you might be able to derive some information from those, e.g. whether requests increased over time, and maybe also response status codes?
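as a starting point, something like this could summarize such a log, assuming the default nginx “combined” format; adjust the parsing to whatever your reverse proxy actually writes.

```python
# sketch: count POST /inbox requests per day and status code, filtered
# by lemmy.world's user agent, from an nginx "combined" access log.
import re
from collections import Counter

# combined format: ip - user [time] "request" status bytes "referer" "agent"
LINE = re.compile(
    r'\[(\d+/\w+/\d+):[^\]]*\] "POST /inbox[^"]*" (\d{3}) [\d-]+ "[^"]*" "([^"]*)"'
)

counts = Counter()
with open("access.log") as f:
    for line in f:
        m = LINE.search(line)
        if m and "lemmy.world" in m.group(3):  # group 3 is the user agent
            counts[(m.group(1), m.group(2))] += 1  # (day, status)

for (day, status), n in sorted(counts.items()):
    print(day, status, n)
```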
- Comment on federation issue to instance 6 months ago:
I can’t tell you why, but you’re clearly lagging quite a bit behind.
which country is your instance located in?
did you (or someone else on your instance) recently subscribe to a bunch of high-traffic communities on lemmy.world, which would make lemmy.world send more activities to you? lemmy by default only sends activities in a community to another instance if there’s at least one subscriber to the community on that instance. if you’re located far from finland, where lemmy.world is hosted, you might have been able to keep up just barely before this, although this isn’t the first time, as the graphs above show.