@prologic@twtxt.net Very rarely. And if I/we do, then it’s by train or by car. 😅
@prologic@twtxt.net Really? That’s nice. 😅 (God, I haven’t been on a plane in 25 years, I think.)
@prologic@twtxt.net Hmm. 🤔 Well, I don’t run that server myself, so I can’t peek into the logs to see what’s going wrong … 🥴
@lyse@lyse.isobeef.org Oh yeah, there’s lots of them here. Even in winter when it’s freezing outside. I’m always baffled to see parrots in the snow … feels like a paradox. 🥴
@prologic@twtxt.net How do I test? You can try to mention my Mastodon account https://tilde.zone/@movq, if that helps. 🤔
I was having a stroll and heard this weird crackling noise. Took me a moment to realize that it’s coming from the tree above me. I looked up and didn’t see anything at first, because of the bad light. And then I saw it: About 10 parrots (alexandrine parakeets or rose-ringed parakeets) were sitting up there, having a feast. 😅
https://movq.de/v/3527326471/parrots.mp4
(Video isn’t great, because this is my smartphone and the light was bad.)
@prologic@twtxt.net Yeah, I meant ISPs. Hm, okay. 🤔
@iolfree@tilde.club They’re not wrong, are they? 😅
@prologic@twtxt.net Do these IPs belong to hosting providers or to providers of private internet connections? The latter is what I’m seeing on my server …
@prologic@twtxt.net We have a bit of a vendor lock-in here in Germany: PayPal is sometimes the only non-shady option to pay for something. ☹️
https://fokus.cool/2025/11/25/i-dont-care-how-well-your-ai-works.html
AI systems being egregiously resource intensive is not a side effect — it’s the point.
And someone commented on that with:
I’m fascinated by the take about the resource usage being an advantage to the AI bros.
They’ve created software that cannot (practically) be replicated as open source software / free software, because there is no community of people with sufficient hardware / data sets. It will inherently always be a centralized technology.
Fascinating and scary.
@bender@twtxt.net Once Advent of Code starts, I’ll start spamming, don’t worry. 😅
Hm, so regarding the hash change:
https://git.mills.io/yarnsocial/twtxt.dev/pulls/28
How about 2026-03-01 00:00:00 UTC as the cut-off date? 🤔
@lyse@lyse.isobeef.org Probably wouldn’t help, since almost every request comes from a different IP address. These are the hits on those weird /projects URLs since Sunday:
1 IP has 5 hits
1 IP has 4 hits
13 IPs have 3 hits
280 IPs have 2 hits
25543 IPs have 1 hit
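A histogram like that can be pulled straight out of the access log. A hedged sketch, assuming a file named access.log in common log format (client IP in field 1, request path in field 7, as in the log lines quoted elsewhere in this feed):

```shell
# Hits per IP for the weird /projects/ URLs, then a histogram of
# how many IPs share each hit count (most hits first).
awk '$7 ~ /^\/projects\// { print $1 }' access.log |
    sort | uniq -c |
    awk '{ print $1 }' |
    sort -rn | uniq -c
```

The first `uniq -c` counts hits per IP; the second counts how many IPs ended up with each hit count.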
The total number of hits has decreased now. Maybe the botnet has moved on …
Not a day goes by at work where I’m not either infuriated or frustrated by this wave of AI garbage. In my private life, I can avoid it. But not at work. And they’re pushing hard for it.
Something has to change in 2026.
Which actively maintained Yarn/twtxt clients are there at the moment? Client authors raise your hands! 🙋
twtxt.net) was being hammered by something at a request rate of 30 req/s (there are global rate limits in place, but still...). The culprit? Turned out to be a particular IP 43.134.51.191 and after looking into who owns that IP I discovered it was yet-another-bad-customer-or-whatever from Tencent, so that entire network (ASN) is now blocked from my Edge:
@prologic@twtxt.net Time to make a new internet. Maybe one that intentionally doesn’t “scale” and remains slow (on both ends) so it’s harder to overload in this manner, harder to abuse for tracking your every move, … Got any of those 56k modems left?
(I’m half-joking. “Make The Internet Expensive Again” like it was in the 1990s and some of these problems might go away. Disclaimer: I didn’t have my coffee yet. 😅)
hash[12:] instead of hash[:12].
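A minimal illustration of that slip, in Python. The digest below is just SHA-256 over some placeholder text to have a string to slice; the real twtxt hashing scheme differs, this only shows what the two slices select:

```python
import hashlib

# Placeholder digest; only the slicing matters here.
digest = hashlib.sha256(b"example twt").hexdigest()

short = digest[:12]  # first 12 characters -- the intended prefix
wrong = digest[12:]  # everything AFTER the first 12 -- the slip
```

`digest[:12]` keeps the first 12 characters, `digest[12:]` drops them and keeps the rest.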
@lyse@lyse.isobeef.org Oops. 😅 But yay, it’s working. 🥳
And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought that my redirect rules actually were broken once and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to that in my Git history, either. But it’s hard to tell, because I switched operating systems and webservers since then …
But the thing is that I’m seeing new URLs constructed in this pattern. So this can’t just be an old crawling dataset.
I am now wondering if those broken URLs are bot bugs as well.
They look like this (zalgo is a new project):
https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
When you request that URL, you get redirected to /git/:
$ curl -sI https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
HTTP/1.0 301 Moved Permanently
Date: Sat, 22 Nov 2025 06:13:51 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 510
Location: /git/
And on /git/, there are links to my repos. So if a broken client requests https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/, then sees a bunch of links and simply appends them, you’ll end up with an infinite loop.
Is that what’s going on here or are my redirects actually still broken … ?
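One hypothetical way such a loop could arise, sketched in Python: a broken crawler glues link targets onto the URL it is currently on instead of resolving them against the page it actually landed on. The link names are made up; real bot behavior is unknown:

```python
from urllib.parse import urljoin

# Hypothetical relative links found on the /git/ page.
links = ["slinp/", "zalgo/", "scksums/"]

# Broken client: naive string concatenation, so every redirect back
# to the link page makes the path grow without bound.
url = "https://www.uninformativ.de/projects/"
for link in links:
    url = url + link
print(url)
# -> https://www.uninformativ.de/projects/slinp/zalgo/scksums/

# A correct client resolves against the page it was redirected to:
print(urljoin("https://www.uninformativ.de/git/", "slinp/"))
# -> https://www.uninformativ.de/git/slinp/
```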
I just noticed this pattern:
uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Let me add some spaces to make it more clear:
    uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET                        /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Some IP (from Brazil) requests some (non-existing, completely broken) URL from my webserver. But they use the hostname uninformativ.de, so they get redirected to www.uninformativ.de.
In the next step, just a second later, some other IP (from Nepal) issues an HTTP proxy request for the same URL.
Clearly, someone has no idea how HTTP redirects work. And clearly, they’re running their broken code on some kind of botnet all over the world.
My webserver is getting millions of hits per month at the moment.
All bots.
@thecanine@twtxt.net Not bad. 🥳 Fingers crossed that they actually do it. 🤞
Luckily, I haven’t noticed at all. 😅
Another day, another attempt at rearranging the furniture, because I am never happy with that. 😟
@lyse@lyse.isobeef.org That is brilliant! 🤣
FTR, I see one (two) issues with PyQt6, sadly:
- The PyQt6 docs appear to be mostly auto-generated from the C++ docs. And they contain many errors or broken examples (due to the auto-conversion). I found this relatively unpleasant to work with.
- (Until Python finally gets rid of the Global Interpreter Lock properly, it’s not really suited for GUI programs anyway – in my opinion. You can’t offload anything to a second thread, because the whole program is still single-threaded. This would have made my fractal rendering program impossible, for example.)
@prologic@twtxt.net Hm, same startup delay. (Go is not an option for me anyway.)
It’s hard to tell why all this is so slow. Maybe in this particular case it has something to do with fonts: strace shows the program loading the fontconfig configs several times, and that takes up a bulk of the startup time. 🤔 (Qt6 or Java don’t do that, but they’re still slow to start up – for other reasons, apparently.)
To be fair, it’s “just” the initial program startup (with warm I/O caches). Once it’s running, it’s fine. All toolkits I’ve tried are. But I don’t want to accept such delays, not in the year 2025. 😅 Imagine every terminal window needing half a second to appear on the screen … nah, man.
Be it Java with Swing or PyQt6, it takes ~300 ms until a basic window with a treeview and a listbox appears. That is a very noticeable delay.
Is it unrealistic to expect faster startup times these days? 🤔
Once the program is running, a second window (in the same process) appears very quickly. So it’s all just the initialization stuff that takes so long. I could, of course, do what “fat” programs have done for ages: Pre-launch the process during boot, windowless. But I was hoping that this wasn’t needed. 😞 (And it’s a bad model anyway. When the main process crashes, all windows crash with it.)
@lyse@lyse.isobeef.org Yeah, I noticed that too. I haven’t double-checked my code, though. Maybe it has something to do with selecting the correct URL? I mean, these feeds don’t have any # url = fields, so maybe that’s it?
@lyse@lyse.isobeef.org Ah, there it is. 😃 Never gets old. 👍
@arne@uplegger.eu … I still haven’t watched that show. 🤦
tilde.club feeds have no # nick field and that’s messing with yarnd's behavior 😅
@prologic@twtxt.net And none of them use Yarn-style threading. I don’t think they’re aware of us, they’re probably using plain twtxt. Other than one hit by @threatcat@tilde.club a few days ago, I’ve seen no traffic from them. 🤔
Speaking of sunsets … https://movq.de/v/753ab5f9e5/sunset.jpg
@threatcat@tilde.club Let me guess, sl? 😏
This looks like a botnet, to be honest. The IPs are all over the place. Ethiopia, Brazil, Kenya, Lebanon, Netherlands, … I mean, that’s the logical thing to do, isn’t it? Do your web crawling on infected PCs. Nobody will block those, because those are the same IP ranges as legitimate requests. And obviously you don’t have to pay for computing time.
… and they all send invalid HTTP requests, all answered with HTTP 400 … How silly.
@bender@twtxt.net Better safe than sorry, I guess. 😅
My goodness, a new level of stupidity.
The bots are now doing things like this:
GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
- That URL does not exist.
- By including http://uninformativ.de in that request, the client instructs the webserver to do an HTTP proxy request. Of course, this isn’t allowed on my webserver (and shouldn’t be allowed on any normal webserver), resulting in HTTP 400. And even if it were, the target would be the exact same server, making a proxy request unnecessary.
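For reference, the two request-target forms side by side (RFC 9112 calls them origin-form and absolute-form; the latter is only meant for requests to a proxy):

```
# origin-form: the normal request to a web server
GET /projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
Host: uninformativ.de

# absolute-form: only valid when talking to a proxy
GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
Host: uninformativ.de
```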
And of course, it’s not just 50 hits like this or 100 or 1,000 or 10,000. No, it’s over 150,000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.
This almost looks like a DDoS attack, but it’s just completely stupid. This feels more like some idiot vibe coded a crawler.
I used Gemini (the Google AI) twice at work today, asking about Google Workspace configuration and Google Cloud CLI usage (because we use those a lot). You’d think that it’d be well-suited for those topics. It answered very confidently, yet completely wrong. Just wrong. Made-up CLI arguments, whatever. It took me a while to notice, though, because it’s so convincing and, well, you implicitly and subconsciously trust the results of the Google AI when asking about Google topics, don’t you?
Will it get better over time? Maybe. But what I really want is this:
- Good, well-structured, easy-to-read, proper documentation. Google isn’t doing too bad in this regard, actually, it’s just that they have so much stuff that it’s hard to find what you’re looking for. Hence …
- … I want a good search function. Just give me a good fuzzy search for your docs. That’s it.
I just don’t have the time or energy to constantly second-guess this stuff. Give me something reliable. Something that is designed to do the right thing, not toy around with probabilities. “AI for everything” is just the wrong approach.
@lyse@lyse.isobeef.org Well, they say you have to build up stocks, don’t they? 😅
The font is fiamf3 (scaled up 2x, it would be too small when printed). It’s the same one that I use in my terminal and the status bars. 😃
@lyse@lyse.isobeef.org Yeah, it feels broken. It often needs a couple of retries and a lot of patience. It’s been like that for months. 🫤
Lol, YouTube supports increasing the playback speed, but when you want to go to 4x, they want you to pay extra:
@lyse@lyse.isobeef.org There’s a couple of new users on https://tilde.club, but since this is a shared host, I doubt that they have access to their access.log files. Hence they’ll never see followers, unless we notify them out of band. 🫤
Android shopping list apps disappointed me too many times, so I went back to writing these lists by hand a while ago.
Here’s what’s more fun: Write them in Vim and then print them on the dotmatrix printer. 🥳
And, because I can, I use my own font for that, i.e. ImageMagick renders an image file and then a little tool converts that to ESC/P so I can dump it to /dev/usb/lp0.
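The ESC/P side of that pipeline is simple in principle. A hedged sketch, assuming 8-pin single-density bit-image mode (ESC * 0) and an image already rasterized into column bytes; the actual tool and font rendering are the author’s own, and the helper name here is made up:

```python
def escp_bitimage_line(columns: bytes) -> bytes:
    """Wrap one row of 8-pixel-tall column bytes in an ESC/P
    'select bit image' command: ESC * m nL nH data, with m = 0
    for single-density 8-pin graphics."""
    n = len(columns)
    # nL/nH is the column count as a 16-bit little-endian value.
    return b"\x1b*\x00" + bytes([n & 0xFF, n >> 8]) + columns

# Initialize the printer (ESC @), send one line of graphics, feed.
job = b"\x1b@" + escp_bitimage_line(bytes([0b10101010] * 120)) + b"\r\n"
# job could then be written to /dev/usb/lp0
```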
(I have so much scrap paper from mail spam lying around that I don’t feel too bad about this. All these sheets would go straight to the bin otherwise.)

@lyse@lyse.isobeef.org Yeah, I’m glad I’m not the only one who didn’t get this right. 😅 You never had to configure a systemd timer? Lucky. 😅
@bender@twtxt.net No plus-aliases, just aliases. The mailserver runs on my OpenBSD box and is managed using BundleWrap (we use that at work), so to create a new alias, I push a new BundleWrap config to the server.
@prologic@twtxt.net Glad you’re back. ✌️
@lyse@lyse.isobeef.org It’s possible to run the validator locally (my blog generator scripts do that):
https://validator.w3.org/nu/about.html
That way you don’t forget. 🥳
@prologic@twtxt.net FWIW, I love the idea and I do the same with my email domains. It’s the most effective way to fight spam, IMO. 🥳