@prologic@twtxt.net Probably has something to do with how the nickname is set up: it starts with a capital letter and contains a space. I couldn’t fetch their feed until I fixed that in my ‘follow’ file. But I dunno, maybe it’s just me…
@prologic@twtxt.net @lyse@lyse.isobeef.org I checked my logs and all I see are 304 responses and a couple of delayed requests here and there due to rate limiting, but not that many. I’ll disable it (the rate limiting) for a couple of days, let me know if you still get the ‘forbidden access’ thing 🫣 I may have effed up my configuration trying to deal with some weird stuff.
@prologic@twtxt.net I am! 😅 I’ll check my logs and see if there’s something I can do about that!
@lyse@lyse.isobeef.org Ahh so it’s not just me! 😅
I’m working on getting my twtxt.txt file up to https://yarn.social standards so that it will be more than yelling in the wind.
@prologic@twtxt.net Good to know. I must admit I’ve never actually used a Docker instance, probably as I just assumed the overhead might be a bit much for my usual very modest servers.
@bender@twtxt.net Is it so maxed out you couldn’t fit a pretty small program like Headscale on it? Headscale by itself, with only personal home-type use as far as the number of peers goes, really isn’t noticeable resource-wise, I don’t think. The Docker version, I guess, could be a different story.
@bender@twtxt.net Mine is about the same, though I have 20GB left 😅 In terms of resources, Headscale is using next to nothing though.
@eldersnake@we.loveprivacy.club how big is that VPS, if you can tell? My 1 vCPU, 2GB, 50GB is maxed out. 😬
@prologic@twtxt.net Yes I suppose that is true. There is an article on Tailscale’s site that explains it all quite a bit: https://tailscale.com/blog/how-nat-traversal-works
To me, with CGNAT, it’s a small miracle that a direct connection can be made between peers (as opposed to going through a relay constantly) but it does indeed work. I guess to host it at home you would need to have it WAN accessible, and if you’ve already gone to the trouble of port forwarding etc… well 😅
Not that I could personally do that, but for those with static IPs etc.
@bender@twtxt.net On my hosted VPS. As I’m on Starlink, which is CGNAT, I need some sort of external intermediary.
@prologic@twtxt.net Interesting! Had no idea about that, but trust you to know of a self-hosted implementation 😅👌
@prologic@twtxt.net I don’t think it’s your code. As you said in one of your commit comments, the internet is a hostile place! That’s partly why I reacted the way I did: all things considered, it’s usually better to react quickly and clean up the mess later than it is to wait and risk further damage. Anyway it sucks @xuu@txt.sour.is got caught up in it. Hopefully it’s all good now.
@xuu@txt.sour.is I hope everything is sorted out with your ISP. Please let me know if there’s anything I can do to help. I sincerely did not mean to cause you any trouble.
@stigatle@yarn.stigatle.no @xuu@txt.sour.is @lyse@lyse.isobeef.org “Not cool”? I was receiving many broken (HTTP 400 error) requests per second from an IP address I didn’t recognize, right after having my VPS crash because the hard drive filled up with bogus data. None of this had happened on this VPS before, so it was a new problem that I didn’t understand and I took immediate action to get it under control. Of course I reported the IP address to its abuse email. That’s a 100% normal, natural, and “cool” thing to do in such a situation. At the time I had no idea it was @xuu@txt.sour.is .
The moment I realized it was @xuu@txt.sour.is and definitely a false alarm, I emailed the ISP and told them this was a false positive and to not ban or block the IP in question because it was not abusive traffic. They haven’t yet responded but I do hope they’ve stopped taking action, and if there’s anything else I can do to certify to them that this is not abuse then I will do that.
I run numerous services on that VPS that I rely on, and I spent most of my day today cleaning up the mess all this has caused. I get that this caused @xuu@txt.sour.is a lot of stress and I’m sincerely sorry about that and am doing what I can to rectify the situation. But calling me “not cool” isn’t necessary. This was an unfortunate situation that we’re trying to make right and there’s no need for criticizing anyone.
@bender@twtxt.net haha funny! though i just realized my ISP is the only one with fiber pulled to the property, so i would have to get a phone line from them somehow. The other ISP in the area is basically a mobile hotspot.
Hey so.. i just got an email from my ISP saying they will terminate my service. Did i break something @abucci@anthony.buc.ci ?
@prologic@twtxt.net Yes, I can do that.
@stigatle@yarn.stigatle.no @prologic@twtxt.net my /tmp is also fine now! Thanks for your help @prologic@twtxt.net!
@prologic@twtxt.net I unbanned a few IP addresses I had blocked before the bugfix. I wasn’t being careful and just blocked any IP I saw making a large number of requests to my pod. That slowed the problem down, but I think I blocked your and @stigatle@yarn.stigatle.no ’s pods in the process, oops.
@stigatle@yarn.stigatle.no Sweet, thank you! I’ve been shooting myself in the foot over here and want to make sure the situation is getting fixed!
@stigatle@yarn.stigatle.no @prologic@twtxt.net testing 1 2 3 can either of you see this?
@prologic@twtxt.net I don’t know if this is new, but I’m seeing:
Jul 25 16:01:17 buc yarnd[1921547]: time="2024-07-25T16:01:17Z" level=error msg="https://yarn.stigatle.no/user/stigatle/twtxt.txt: client.Do fail: Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" error="Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)"
I no longer see twts from @stigatle@yarn.stigatle.no at all.
@prologic@twtxt.net Have you been seeing any of my replies?
@abucci@anthony.buc.ci Any interesting errors pop up in the server logs since the flaw got fixed (unbounded receieveFile())? 🤔
This is a test. I am not seeing twts from @stigatle@yarn.stigatle.no and it seems like @prologic@twtxt.net might not be seeing twts from me. Do people see this?
@prologic@twtxt.net I am not seeing twts from @stigatle@yarn.stigatle.no anymore. Are you seeing twts from me?
./tools/dump_cache.sh: line 8: bat: command not found
No Token Provided
I don’t have bat on my VPS and there is no package for installing it. Is cat a reasonable alternative?
@prologic@twtxt.net Hitting that URL returns a bunch of HTML even though there is no user named lovetocode999 on my pod. I think it should 404, maybe with a delay, to discourage whatever this abuse is. Basically this can be used to DDoS a pod by forcing it to generate a bunch of HTML just by doing a bogus GET like this.
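Not yarnd’s actual code, just a rough sketch of that 404-with-a-delay idea: before rendering the external-profile page, check whether the pod has ever seen the feed and bail out otherwise. The FeedStore interface and externalHandler name are invented here; the real handler and lookup will look different.

```go
package web

import (
	"net/http"
	"time"
)

// FeedStore is a stand-in for whatever yarnd uses to track known feeds.
type FeedStore interface {
	Knows(nick, uri string) bool
}

// externalHandler sketches the idea from the twt above: if the pod has no
// record of the requested feed, answer 404 after a short delay instead of
// generating a full HTML page for the scraper.
func externalHandler(store FeedStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		nick := r.URL.Query().Get("nick")
		uri := r.URL.Query().Get("uri")

		if nick == "" || uri == "" || !store.Knows(nick, uri) {
			time.Sleep(2 * time.Second) // cheap tarpit for bogus requests
			http.Error(w, "feed not found", http.StatusNotFound)
			return
		}

		// ... render the external profile page as usual ...
	}
}
```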
@stigatle@yarn.stigatle.no I used the following hack to keep my VPS from running out of space: watch -n 60 rm -rf /tmp/yarn-avatar-*, run in tmux so it keeps running.
@stigatle@yarn.stigatle.no / @abucci@anthony.buc.ci My current working theory is that there is an asshole out there that has a feed that both your pods are fetching with a multi-GB avatar URL advertised in their feed’s preamble (metadata). I’d love for you both to review this PR, and once merged, re-roll your pods and dump your respective caches and share with me using https://gist.mills.io/
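For anyone following the thread: as I understand it, the gist of the fix is to stop trusting remote feeds to advertise reasonably sized avatars and cap how many bytes get copied into the temp file. This is a minimal sketch of that idea, not the actual PR; the 1 MiB limit and the downloadAvatar helper are made up.

```go
package fetch

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

const maxAvatarBytes = 1 << 20 // 1 MiB; the real limit is whatever the PR chose

// downloadAvatar copies at most maxAvatarBytes into a temp file and fails if
// the remote feed advertises something larger, instead of filling /tmp with
// multi-GB yarnd-avatar-* files.
func downloadAvatar(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	tmp, err := os.CreateTemp("", "yarnd-avatar-*")
	if err != nil {
		return "", err
	}
	defer tmp.Close()

	// Read one byte past the limit so we can tell "exactly at the limit"
	// apart from "too big".
	n, err := io.Copy(tmp, io.LimitReader(resp.Body, maxAvatarBytes+1))
	if err != nil {
		os.Remove(tmp.Name())
		return "", err
	}
	if n > maxAvatarBytes {
		os.Remove(tmp.Name())
		return "", fmt.Errorf("avatar at %s exceeds %d bytes", url, maxAvatarBytes)
	}
	return tmp.Name(), nil
}
```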
@prologic@twtxt.net There are a lot of logs being generated by yarnd, which is something I haven’t seen before either:
Jul 25 14:32:42 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:42 (162.211.155.2) "GET /twt/ubhq33a HTTP/1.1" 404 29 643.251µs
Jul 25 14:32:43 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:43 (162.211.155.2) "GET /twt/112073211746755451 HTTP/1.1" 400 12 505.333µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (111.119.213.103) "GET /twt/whau6pa HTTP/1.1" 200 37360 35.173255ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112343305123858004 HTTP/1.1" 400 12 455.069µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (168.199.225.19) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fwww.palapa.pl%2Fbaners.php%3Flink%3Dhttps%3A%2F%2Fwww.dwnewstoday.com HTTP/1.1" 200 36167 19.582077ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112503061785024494 HTTP/1.1" 400 12 619.152µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/111863876118553837 HTTP/1.1" 400 12 817.678µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/112749994821704400 HTTP/1.1" 400 12 540.616µs
Jul 25 14:32:47 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:47 (103.204.109.150) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fampurify.com%2Fbbs%2Fboard.php%3Fbo_table%3Dfree%26wr_id%3D113858 HTTP/1.1" 200 36187 15.95329ms
I’ve seen that nick=lovetocode999 a bunch.
@prologic@twtxt.net Inspect? What’s sift? What would you like to know about the files?
@prologic@twtxt.net 10 Gbytes have accumulated since I made that last post. It’s coming in at a rate of 55 Mbits/second!
@prologic@twtxt.net I think there’s more to it than that. I’ve updated, yet hundreds of gigabytes of junk is still accumulating.
@prologic@twtxt.net Alright, running yarnd 0.15.1 now. I stopped my hack so we’ll see if the VPS gets clogged with junk 😆
abucci@buc:~/yarnd/yarn$ make preflight
Checking Go version ... [ ERR ]
Go 1.16+ is required, found go1.22.5
FATAL: 🙁 preflight failed
make: *** [Makefile:33: preflight] Error 1
🤔
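That failure looks like a naive version comparison, where “1.22” sorts before “1.16” when treated as a string or a float. I haven’t checked how the Makefile’s preflight script actually does it, but done in Go, the golang.org/x/mod/semver module gets it right. A small sketch, with the minimum version assumed to be 1.16.0:

```go
package main

import (
	"fmt"
	"runtime"
	"strings"

	"golang.org/x/mod/semver"
)

func main() {
	// runtime.Version() returns e.g. "go1.22.5"; semver wants a "v" prefix.
	have := "v" + strings.TrimPrefix(runtime.Version(), "go")
	const want = "v1.16.0"

	if semver.Compare(have, want) >= 0 {
		fmt.Printf("Go version OK: %s >= %s\n", have, want)
	} else {
		fmt.Printf("Go %s or newer required, found %s\n", want, have)
	}
}
```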
@prologic@twtxt.net Aha, got it. Thanks for looking into it. I’m updating now and we’ll see if that stops it.
@stigatle@yarn.stigatle.no Works now! 🥳
@prologic@twtxt.net Sure, but why would this start happening all of a sudden today? Nothing like this has happened before. Is this a known bug?
@prologic@twtxt.net 0.15.1, looks like.
I’m running watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.
@bender@twtxt.net I hope so too. I’ve never seen anything like this before. Whatever it is, it’s strange.
@prologic@twtxt.net This is weird, but today, out of nowhere, yarnd filled up the disk on the VPS where I run it. It’s never done anything like this before and I have no idea why it would start. But it threw almost 700 Gbytes of data into /tmp in files like this:
yarnd-avatar-1087570772 yarnd-avatar-1599127133 yarnd-avatar-2042956376 yarnd-avatar-2562946212 yarnd-avatar-3274766535 yarnd-avatar-3931929859 yarnd-avatar-553201529
yarnd-avatar-1089125452 yarnd-avatar-1606826819 yarnd-avatar-2089122560 yarnd-avatar-2611944556 yarnd-avatar-3310922372 yarnd-avatar-3938996661 yarnd-avatar-556240195
yarnd-avatar-1101228867 yarnd-avatar-1618755765 yarnd-avatar-2104107259 yarnd-avatar-2641384948 yarnd-avatar-3326285269 yarnd-avatar-3939402047 yarnd-avatar-559344463
yarnd-avatar-1112165824 yarnd-avatar-1650827505 yarnd-avatar-2142824779 yarnd-avatar-2680659340 yarnd-avatar-3340682113 yarnd-avatar-3998621883 yarnd-avatar-570292705
yarnd-avatar-1119886894 yarnd-avatar-1656673647 yarnd-avatar-2160786463 yarnd-avatar-271923479 yarnd-avatar-3374584613 yarnd-avatar-4005102536 yarnd-avatar-595490106
yarnd-avatar-1131417623 yarnd-avatar-1685698239 yarnd-avatar-2165405940 yarnd-avatar-2793562275 yarnd-avatar-3380606954 yarnd-avatar-4016872095 yarnd-avatar-679251850
yarnd-avatar-1160959085 yarnd-avatar-1746759128 yarnd-avatar-2171489899 yarnd-avatar-2842068287 yarnd-avatar-3416352997 yarnd-avatar-4110048378 yarnd-avatar-679950970
yarnd-avatar-1231649265 yarnd-avatar-1752278279 yarnd-avatar-2251317422 yarnd-avatar-2843868670 yarnd-avatar-3468636088 yarnd-avatar-4116552474 yarnd-avatar-737874628
164 files. Some are empty, some are 7 or even 10 Gbyte.
Any idea what would cause that? And why now, after running yarnd for so long with nothing like this happening?
@lyse@lyse.isobeef.org I bet it was! These kinds of sunset shots (with colorful delicious clouds in motion… etc) have always been candy to my eyes. And I know for a fact that the real thing usually looks tenfold better than in pictures (at least in the ones I used to take). Thank you for sharing these!
(I don’t really trust Android, though, and I suspect that apps can still install background services that are always active. Pure speculation and paranoid on my part, but still.)
Which is fair, but I would say the GrapheneOS devs in particular are also quite paranoid about this stuff and go to great pains to make sure this stuff can be controlled by the user.
docker build without any --build-arg VERSION= or --build-arg COMMIT= there was no version information in the built binary and bundled assets. Therefore cache busting would not work as expected. When introducing htmx and hyperscript to create a UI/UX SPA-like experience, this is when things fell apart a bit for you. I think....
@prologic@twtxt.net Yeah that is probably what was happening. I wish that go build could embed the values that go install does.
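For what it’s worth, Go 1.18+ does stamp VCS metadata into binaries built from a git checkout, so a program can fall back on runtime/debug.ReadBuildInfo when no -ldflags "-X ..." values were passed to go build. A hedged sketch; the Version/Commit variables here are illustrative, not yarnd’s actual ones:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// Version and Commit would normally be set at build time via
// -ldflags "-X main.Version=... -X main.Commit=...".
var (
	Version = "unknown"
	Commit  = "unknown"
)

// buildCommit falls back to the VCS revision that `go build` (Go 1.18+)
// embeds when building inside a git checkout.
func buildCommit() string {
	if Commit != "unknown" {
		return Commit
	}
	if info, ok := debug.ReadBuildInfo(); ok {
		for _, s := range info.Settings {
			if s.Key == "vcs.revision" {
				return s.Value
			}
		}
	}
	return "unknown"
}

func main() {
	fmt.Printf("version=%s commit=%s\n", Version, buildCommit())
}
```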
@abucci@anthony.buc.ci i love this talk
@movq@www.uninformativ.de This outage did affect me, though not much, via the university where my wife teaches and where I teach sometimes. They actually sent out an alert in their emergency alert system (the one they use to alert people of extreme weather events and bomb threats, mostly), telling people that all IT systems were down.
A friend of mine elsewhere pointed out that they pushed this change on a Friday, which of course no software developer with any experience would ever, ever, ever do. I have to assume there’s some toxic management at CrowdStrike, but who knows. Even more reasons to sympathize with the poor folks who are probably going to be working nights and weekends to clean up this mess.
⨁ Follow button on their profile page or use the Follow form and enter a Twtxt URL. You may also find other feeds of interest via Feeds. Welcome! 🤗
@prologic@twtxt.net One of these days I’ll turn off registrations