And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought that my redirect rules actually were broken once and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to that in my Git history, either. But it's hard to tell, because I switched operating systems and webservers since then …
But the thing is that I'm seeing new URLs constructed in this pattern. So this can't just be an old crawling dataset.
I am now wondering if those broken URLs are bot bugs as well.
They look like this (zalgo is a new project):
https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
When you request that URL, you get redirected to /git/:
$ curl -sI https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
HTTP/1.0 301 Moved Permanently
Date: Sat, 22 Nov 2025 06:13:51 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 510
Location: /git/
And on /git/, there are links to my repos. So if a broken client requests https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/, then sees a bunch of links and simply appends them, you'll end up with an infinite loop.
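A quick sketch of what I suspect these broken clients do – resolve the links from /git/ against the URL they originally requested instead of the redirect target (this is just a guess; repo name and base URL are taken from the example above, and it assumes the links on /git/ are relative):

// Hypothetical broken-crawler logic: ignore that the final URL is /git/
// and resolve the page's relative links against the original request URL.
const requested = 'https://www.uninformativ.de/projects/slinp/zalgo/scksums/';
const linkOnGitPage = 'bevelbar/'; // a repo link as it might appear on /git/
console.log(new URL(linkOnGitPage, requested).href);
// => https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/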
Is that what's going on here or are my redirects actually still broken …?
I just noticed this pattern:
uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Let me add some spaces to make it more clear:
uninformativ.de     201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET                        /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Some IP (from Brazil) requests some (non-existing, completely broken) URL from my webserver. But they use the hostname uninformativ.de, so they get redirected to www.uninformativ.de.
In the next step, just a second later, some other IP (from Nepal) issues an HTTP proxy request for the same URL.
Clearly, someone has no idea how HTTP redirects work. And clearly, they're running their broken code on some kind of botnet all over the world.
Testing 1 2 3
FTR, I see one (two) issues with PyQt6, sadly:
- The PyQt6 docs appear to be mostly auto-generated from the C++ docs. And they contain many errors or broken examples (due to the auto-conversion). I found this relatively unpleasant to work with.
- (Until Python finally gets rid of the Global Interpreter Lock properly, it's not really suited for GUI programs anyway – in my opinion. You can't offload anything to a second thread, because the whole program is still single-threaded. This would have made my fractal rendering program impossible, for example.)
Testing 1 2 3 @manton@twtxt.net
For those curious, the new Twtxt <-> ActivityPub bridge I'm building (bidirectional) simply requires three things:
- You register your Twtxt feed to the bridge: https://bridge.twtxt.net
- You verify that you in fact own/control the feed by putting the verification code somewhere on/in your feed (doesn't matter where or how)
- You proxy/forward requests for /.well-known/webfinger to the bridge, bridge.twtxt.net (a rough sketch follows below).
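If your site already runs behind a Node / express app, the forwarding could be as simple as this sketch (assumptions: the bridge answers WebFinger at the same path, and Node 18+ for the global fetch):

const express = require('express');
const app = express();

// Forward WebFinger lookups to the bridge, keeping the query string intact.
app.get('/.well-known/webfinger', async (req, res) => {
  const target = 'https://bridge.twtxt.net/.well-known/webfinger' +
                 new URL(req.url, 'http://x').search;
  const upstream = await fetch(target);
  res.status(upstream.status)
     .type(upstream.headers.get('content-type') || 'application/jrd+json')
     .send(await upstream.text());
});

app.listen(8080);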
I'm still testing things through and ironing out bugs. Please be patient!
My goodness, a new level of stupidity.
The bots are now doing things like this:
GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
- That URL does not exist.
- By including http://uninformativ.de in that request, this instructs the webserver to do an HTTP proxy request. Of course, this isn't allowed on my webserver (and shouldn't be allowed on any normal webserver), resulting in HTTP 400. And even if it were, the target would be the exact same server, making a proxy request unnecessary.
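You can reproduce such a request with a few lines of Node, hand-writing the request line (normal, non-proxy HTTP clients only ever send the path):

const net = require('node:net');

// Send an absolute-form request target, like the bots do.
const sock = net.connect(80, 'uninformativ.de', () => {
  sock.write('GET http://uninformativ.de/ HTTP/1.1\r\n' +
             'Host: uninformativ.de\r\n' +
             'Connection: close\r\n\r\n');
});
// OpenBSD httpd answers this with the HTTP 400 seen in the logs above.
sock.on('data', (chunk) => process.stdout.write(chunk));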
And of course, it's not just 50 hits like this or 100 or 1’000 or 10’000. No, it's over 150’000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.
This almost looks like a DDoS attack, but it's just completely stupid. This feels more like some idiot vibe coded a crawler.
@prologic@twtxt.net yeah I should probably update. Version 0.15.1@31958f89 2025-06-29T20:35:20+10:00 go1.23.1
@prologic@twtxt.net Let's go through it one by one. Here's a wall of text that took me over 1.5 hours to write.
"The criticism of AI as untrustworthy is a problem of misapplication, not capability." – This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.
The AI also said that users must develop "AI literacy", again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is "AI literacy", isn't it?
My text went one step further, though: I said that when you take this requirement of "AI literacy" into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.
Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it's fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don't feel like a draft that needs editing.
Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here's the kicker: You did not create that draft. You were not involved in the "thought process" behind it. When you, a human being, make a draft, you often think something like: "Okay, I want to draw a picture of a landscape and there's going to be a little house, but for now, I'll just put in a rough sketch of the house and add the details later." You are aware of what you left out. When the AI did the draft, you are not aware of what's missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.
"Skill Erosion vs. Skill Evolution" – You, @prologic@twtxt.net, also mentioned this in your car tyre example.
In my text, I gave two analogies: The gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini's calculator example is different (and, again, gaslight-y, see below).
What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?
No, you're something else. You should not be hired as a programmer or a mechanic.
Yes, that is "skill evolution" – which is pretty much my point! But the AI framed it like a counter-argument. It didn't understand my text.
(But what if that's our future? What if all programming will look like that in some years? I claim: It's not possible. If you don't know how to program, then you don't know how to read/understand code written by an AI. You are something else, but you're not a programmer. It might be valid to be something else – but that wasn't my point, my point was that you're not a bloody programmer.)
Gemini's calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., "complex problem-solving") are two different things. Just because you now have a calculator, doesn't mean it'll free you up to do mathematical proofs or whatever.
What would have worked is this: Let's say you're an accountant and you sum up spendings. Without a calculator, this takes a lot of time and is error prone. But when you have one, you can work faster. But once again, there's a little gaslight-y detail: A calculator is correct. Yes, it could have "bugs" (hello Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it's just a statistical model. So, this modified example ("accountant with a calculator") would actually have to be phrased like this: Suppose there's an accountant and you give her a magic box that spits out the correct result in, what, I don't know, 70-90% of the time. The accountant couldn't rely on this box now, could she? She'd either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.
Gemini has no idea that its calculator example doesn't make sense. It just spits out some generic "argument" that it picked up on some website.
"3. The Technical and Legal Perspective (Scraping and Copyright)" – The AI makes two points here. The first one, I might actually agree with ("bad bot behavior is not the fault of AI itself").
The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn't. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn't even question whether it's okay to break the current law or not. It just said "lol yeah, change the laws". (I wonder in what way the laws would have to be changed in the AI's "opinion", because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn't part of Gemini's answer.)
tl;dr – Except for one point, I don't accept any of Gemini's "criticism". It didn't pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it's just a statistical model).
And it framed everything like a counter-argument, while actually repeating what I said. That's gaslighting: When Alice says "the sky is blue" and Bob replies with "why do you say the sky is purple?!"
But it sure looks convincing, doesn't it?
Never again – This took so much of my time. I won't do this again.
@movq@www.uninformativ.de this I find more worrisome, and saw no mention of it in your text: Right-Wing Chatbots Turbocharge America's Political and Cultural Wars (gift article).
Enoch, one of the newer chatbots powered by artificial intelligence, promises "to 'mind wipe' the pro-pharma bias" from its answers. Another, Arya, produces content based on instructions that tell it to be an "unapologetic right-wing nationalist Christian A.I. model."
@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:
1. The User Perspective (Untrustworthiness)
The criticism of AI as untrustworthy is a problem of misapplication, not capability.
- AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
- The Rise of AI Literacy: Users must develop a new skill – AI literacy – to critically evaluate and verify AI's probabilistic output. This skill, along with improving citation features in AI tools, mitigates the "gaslighting" effect.
2. Skill Erosion vs. Skill Evolution
The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it's skill evolution, not erosion.
- Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
- Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
3. The Technical and Legal Perspective (Scraping and Copyright)
The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.
- Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.
I'm building a service that lets you create and manage disposable, brandable email aliases, so you can track leaks, forward important messages, and keep your real inbox clean.
I've just finished building it for the most part, and have cut a v0.1.0 release. It's currently closed source (to be decided later) and now open to beta testers. cc @bender@twtxt.net. I fully intend to monetize and offer this as a paid service in the coming weeks/months, but beta/invite-only testers and early adopters/users first.
Specimen 1: A caveman marking his territory: 
@lyse@lyse.isobeef.org back to this, I think @prologic@twtxt.net meant 1 November 12:00 UTC. I won't hold it against him.
There are only two hard problems in distributed systems:
2. Exactly-once delivery
1. Guaranteed order of processing
2. Exactly-once delivery
@movq@www.uninformativ.de It's way more expensive and time-consuming in the end. If only somebody had warned us!!1
The triangle reminds me of zalgo text: https://en.wikipedia.org/wiki/Zalgo_text
man and it calls home to see if I'm allowed to do that.
Because OP twtxt seems to be a cross-post from the Fediverse, I am bringing some context here. It refers to this GitHub issue. This comment explains why the issue described is happening:
This is usually due to notarization checks. E.g. the binaries are checked by the notarization service ("XProtect") which phones home to Apple. Depending on your network environment, this can take a long time. Once the executable has been run the results are usually cached, so any subsequent startup should be fast.
OP's network must be running on a 1,200 baud modem, or less. I have never, ever, experienced any distinguishable delays.
Advent of Code will be different this year:
There will only be 12 puzzles, i.e. only December 1 to December 12. This might make it more interesting for some people, because it's (probably) less work and a lower chance of people getting burned out.
Personally, I'll probably stretch it out over 24 days, giving myself more time to solve each puzzle – and I really want this event to last the entire month.
Maybe this makes it more interesting for some people around here as well?
@movq@www.uninformativ.de streamlining jenny.vim?
index adc0db9..cb54abc 100644
--- a/vim/ftdetect/jenny.vim
+++ b/vim/ftdetect/jenny.vim
@@ -1 +1,2 @@
au BufNewFile,BufRead jenny-posting.eml setl completefunc=jenny#CompleteMentions fo-=t wrap
+au BufRead,BufNewFile jenny-posting.eml normal $
@lyse@lyse.isobeef.org In my case it was a silver necklace, a hummingbird with a wing attached using the cold welding I mentioned, with thin brass wires.
I made it in a goldsmithing class (I went to a private craftsmanship high school), so no phones were allowed (no photos of it) and no "taking home" of the works.
Here's a rough sketch of it, drawn from memory; the dots in the wing are where it connects to the body.

The technique is basically the same as I described, but the scale is much smaller; the whole piece was about 5-6 cm on the largest side.
The rivet was made by drilling a hole through the parts; then, with a short and thicker drill, you widen the hole on the surface to let the rivet settle flatter on the piece; then, with a rubber hammer, you hit it to flatten the head until it's snug on the hole, and lock the parts together by doing the same on the other side.
Note that widening the hole with a thicker drill head won't make a difference with bigger holes; mine had holes of about 1-2 mm in diameter at most.
Here's a sketch of what is going on, for clarity.

How much RAM/CPU does a Yarn instance need? I currently have a VPS with 1 GB RAM. Let's see if it will be enough.
@itsericwoodward@itsericwoodward.com No worries, all good, mate! We all have to start somewhere. Other software requests my feed several orders of magnitude more often.
I can confirm, the User-Agent header appears to be fixed. \o/
Two other things I noticed, though:
- There's now an OPTIONS request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it's rejected with a 405 Method Not Allowed.
- Not that these few requests bother me at all, but you might wanna implement caching next, with either the If-Modified-Since or If-None-Match request headers (see the sketch below). This way, if the feed hasn't changed, the web server can reply with a 304 Not Modified and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you're aware of it, that's all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
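In case a sketch helps, the If-None-Match variant boils down to something like this (assuming a Node / express app; the path and file name are made up):

const express = require('express');
const crypto = require('node:crypto');
const fs = require('node:fs');

const app = express();

app.get('/twtxt.txt', (req, res) => {
  const body = fs.readFileSync('twtxt.txt');
  // Derive the ETag from the feed contents.
  const etag = '"' + crypto.createHash('sha256').update(body).digest('hex') + '"';
  if (req.headers['if-none-match'] === etag) {
    res.status(304).end(); // feed unchanged: no body at all
    return;
  }
  res.set('ETag', etag).type('text/plain; charset=utf-8').send(body);
});

app.listen(8080);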
groff --version)?
@movq@www.uninformativ.de It's an ancient 1.22.4. :-)
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).
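Spelled out with made-up feed URLs, that scheme would look like:

# https://example.org/twtxt.txt (origin feed, starts thread 7):
2025-01-01T12:00:00Z	(tno:7) Starting a new thread.

# https://example.net/twtxt.txt (a reply carries both markers):
2025-01-01T13:00:00Z	(tno:7) (ofeed:https://example.org/twtxt.txt) Replying to thread 7.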
@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.
After thinking about it for a while I got to two solutions:
Proposal 1: Thread syntax (using subject)
Each post has an implicit and an optional explicit root reference:
Implicit (no action needed, all data required are already there)
- URL + timestamp
Explicit (subject required)
- Identity (client generated)
- External reference
- Random value
We then include a "root" subject in each post for generating explicit threads:
1. `[ROOT_ID] (REPLY_ID)`: simpler with no need of prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex but could allow expansions
- `(rt:ROOT_ID) (re:REPLY_ID)`: same but with a compact version
- `($ROOT_ID) (>REPLY_ID)`: same but with a single characters
Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn't have a real meaning.
Using a custom reference this way allows a client to decide how to generate them:
- Identity: can be a content hash or signature or anything else, without enforcing how it is generated we can upgrade the algorithm/length freely
- External references: can be provided from another system (e.g. `7e073bd345`, yarnsocial/yarn latest commit)
- Random value: like a UUID (e.g. `9a0c34ed-d11e-447e-9257-0a0f57ef6e07`)
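For example, a short thread using syntax 1 could look like this (timestamps and IDs are made-up placeholders):

2025-01-01T12:00:00Z	Root post, its generated ID being 8f3a1c2.
2025-01-01T13:00:00Z	[8f3a1c2] (8f3a1c2) First reply, answering the root directly.
2025-01-01T14:00:00Z	[8f3a1c2] (b42de9f) Second reply, answering the first one (ID b42de9f).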
Proposal 2: Threaded mentions (featuring zvava)
Inspired by @zvava@twtxt.net's solution, it could be simplified into: #<nick url#timestamp> or #<url#timestamp>
It can be shown like a mention or hidden like a subject.
If we're thinking of using a counter in the client, I think there's no point in avoiding the timestamp anymore.
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to release it after I get an MVP done of project #2 (the reader).
But that's where I've been struggling. The idea seems simple enough - another Node / express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in seclusion (and with dummy data), but I keep running into weird issues when reading real-live twtxt files, so some twts come through, while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
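The reader's core loop boils down to something like this (a simplified sketch, not the real code; it assumes Node 18+ and the # follow = nick url metadata form):

// Fetch my feed, parse the follow list, pull each followed feed,
// and merge all twts into one river.
async function buildRiver(myFeedUrl) {
  const mine = await (await fetch(myFeedUrl)).text();
  const followUrls = mine
    .split('\n')
    .filter((line) => line.startsWith('# follow ='))
    .map((line) => line.trim().split(/\s+/).pop()); // last token = URL

  const twts = [];
  for (const url of followUrls) {
    try {
      const body = await (await fetch(url)).text();
      for (const line of body.split('\n')) {
        if (line.startsWith('#') || !line.includes('\t')) continue;
        const [timestamp, ...rest] = line.split('\t');
        twts.push({ feed: url, timestamp, text: rest.join('\t') });
      }
    } catch (err) {
      // The "lost in the ether" cases end up here.
      console.error('failed to fetch', url, err.message);
    }
  }
  return twts.sort((a, b) => b.timestamp.localeCompare(a.timestamp));
}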
On top of it, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subject of other posts to find the related posts, right?
In my own client's current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the common implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);

    // New location approach
    const location = post.url + '#' + post.timestamp;

    // Unchanged (probably)
    const subject = post.subject;

    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}

// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
As I said before, the mention is already location based @<example https://example.com/twtxt.txt>, so I think we should keep that in consideration.
Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing the use of anything as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs the use of a fixed "header" for each post, but it can be seen as a separate extension).
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible, as the feed URL is part of the hashed value and the reference of some sort in the location-based scheme. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature and not more behaving as an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever depending on the cache format.
Even a new hashing algorithm requires work on clients etc. It's not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
- The current hash relies on a `url` field too: by specification, it will use the first `# url = <URL>` in the feed's metadata if present. That too can be different from the fetching source, and if that field changes, it would break the existing hashes as well. A better solution would be to use a non-URL key like `# feed_id = <UNIQUE_RANDOM_STRING>` with the `url` as fallback (see the example after this list).
- We can prevent duplications if the reference uses that same `url` field too, or if the client "collapses" any reference across all the urls defined in the metadata.
- I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Also note that edits can break the hash; for this issue, an alternative solution (e.g. a private key not included in the feed) should be considered.
- For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I'm missing some context on this one.
- To prevent collisions, there was a discussion on extending the hash (I forgot if that was already fixed or not), but without a fallback that would break existing clients too. We should think of a parallel format that leaves current implementations unchanged; we are already backward compatible with original clients that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
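To make the `feed_id` idea concrete, the feed metadata could look something like this (the key name and value are made up for illustration):

# url = https://example.com/twtxt.txt
# feed_id = 9a0c34ed-d11e-447e-9257-0a0f57ef6e07

Clients would then thread on `feed_id` when present and fall back to `url` otherwise.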
We should also keep in mind that the current mention format is already location based (@<example https://example.com/twtxt.txt>), so I'm not that worried about threads working the same way.
Hope to see some other thoughts about this matter.
Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:
- Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won't.
- Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several "parents" and split the thread.
- Verification & spam-resistance: content addressing lets you dedupe and verify you're pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
- Offline/cached reading: without the original URL being reachable, readers can't resolve anchors; with hashes they can match against local caches/archives.
- Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.
@kat@yarn.girlonthemoon.xyz Mine shows 1/1 of 14 Twts. I think this is a bug 🤯
ok so i have found a genuine twt hash collision. what do i do.
internally, bbycll relies on a post lookup table with post hashes as keys, this is really fast but i knew i'd inevitably run into this issue (just not so soon) so now i have to either:
  1) pick the newer post over the other
  2) break from specification and not lowercase hashes
  3) secretly associate canonical urls or additional entropy with post hashes in the backend without a sizeable performance impact somehow

Made this a few weeks ago, just listened to it again and I quite like it:
https://www.uninformativ.de/music/2025-1-ebow/Fog.ogg
This is just one instrument: Electric bass guitar + EBow. And echo/delay on top. But it's a single track, single take. It amazes me quite a bit how much you can do with that little thing. 🤯
gopher://hashnix.club:70/1/~dce/art/
The chemtrails have fallen down!!1 https://lyse.isobeef.org/abendhimmel-2025-09-05/
Hmm, gnu.org is slow as heck. Shorter HTML pages load in about ten seconds. This complete AWK manual all in one large HTML page took a full minute: https://www.gnu.org/software/gawk/manual/gawk.html Is there maybe some anti-AI shenanigans going on?
In any case, I find the user guide super interesting. My AWK skills are basically non-existent, so I finally decided to change that. This document is incredibly well written and makes it really fun to keep reading and learning. I'm very impressed. So far, I made it to section 1.6, happy to continue.
Why do I care about this?
- The load will become a problem at some point.
- These crawlers and the current "AI" in general are breaking the rules. I am supposed to be paying for every little thing, I get sued for "piracy". But apparently, these rules only apply to me. If I had more money, I could break them. Fuck that.
- I simply don't want it. Period.
I've got a prototype of my hardcopy simulator going. I'm typing on the keyboard and the "display" goes to the printer:
https://movq.de/v/56feb53912/s.png
https://movq.de/v/235c1eabac/MVI_8810.MOV.mp4
The biiiiiiiiiig problem is that the print head and plastic cover make it impossible to see what's currently being printed, because this is not a typewriter. This means: In order to see what I just entered, I have to feed the paper back and forth and back and forth … it's not ideal.
I got that idea of moving back/forth from Drew DeVault, who – as it turned out – did something similar a few years back. (I tried hard to read as little as possible of his blog post, because figuring things out myself is more fun. But that could mean I missed a great idea here or there.)
But hey, at least this is running on my Pentium 133 on SuSE Linux 6.4, printer connected with a parallel cable.
(Also, yes, you can see the printouts of earlier tests and, yes, I used ed(1) wrong at one point. 🤪 And ls insisted on using colors …)
Hereās an interesting thought/angle on this topic:
gemini://gemini.conman.org/boston/2025/08/21.1
A further check showed that all the network blocks are owned by one organization – Tencent [4]. I'm seriously thinking that the CCP (Chinese Communist Party) encourage this with maybe the hope of externalizing the cost of the Great Firewall [5] to the rest of the world.
Sooooooooo, things happened, and I now have a dot matrix printer again.
(One of the end goals is to simulate a hardcopy terminal on my old box. I'm waiting for another cable to arrive, I don't have USB there. And then use ed(1) like it was meant to be used!)
@movq@www.uninformativ.de WE NEED MORE BACKUPS!!!!!!!!!!!1
ok i really like XLOV. 1&Only is a great song. so vibe-y and sensual. and they released it in pride month too they Get It https://www.youtube.com/watch?v=NBZgirj_C2Y
@lyse@lyse.isobeef.org @kat@yarn.girlonthemoon.xyz Colorized manpages have been a thing for a very long time:
https://movq.de/v/81219d7f7a/s.png
Problem is, hardly anybody knows this, because you configure this by … drumroll … overwriting TERMCAP entries of less in your ~/.bashrc:
export LESS_TERMCAP_md=$'\e[38;5;3m'    # Bold
export LESS_TERMCAP_me=$'\e[0m'         # End Bold
export LESS_TERMCAP_us=$'\e[4;38;5;6m'  # Underline
export LESS_TERMCAP_ue=$'\e[0m'         # End Underline
export GROFF_NO_SGR=1                   # Needed since groff 1.23
@lyse@lyse.isobeef.org "Advanced", well, probably more "mature". There aren't a ton of crazy features and that icon thing is the largest code addition in the last 10 years. %)
Speaking of OS/2 … I just realized that Windows 3.x didn't have icons, either. If I'm not mistaken, this only got added in Windows 95. In other words, OS/2 had this feature before Windows did, because at least OS/2 2.1 from 1993 had icons. Who would have thunk.
(Now I kind of want to know which system really introduced this feature.)
@lyse@lyse.isobeef.org Oh, huh, maybe it was just my GNOME 2 themes back then that didn't show the icon.
I like the looks of your window manager. That's using Wayland, right?
Oh, no. It's still X11. All my recent Wayland comments resulted from me trying to switch, but I think it's still too early. Being unable to use QEMU (because it can't capture the mouse pointer) is a pretty big blocker for me. This is completely broken, it just happens to be unnoticeable with modern guest OSes, so it's probably not a priority for devs.
(Not to mention that I would have to fork and substantially extend dwl in order to "replicate" my X11 WM. And then, after having done that, I'd have to follow upstream Wayland development, for which I don't have the resources. Things would need to slow down before I can do that.)
all that wasted space of the windows not making use of the full screen!!!1
Heh. I've been using tiling WMs for ~15 years now, so it's actually kind of refreshing to see something different for a change.
Probably close to the older Windowses.
That particular theme is a ripoff of OS/2 Warp 3: https://movq.de/v/6c2a948882/s.png
We ran some similar brownish color scheme (don't recall its name) on Win95 or Win98
Oh god. Yeah, I wasn't a fan of those, either.
@movq@www.uninformativ.de According to this screenshot, KDE still shows good old application icons: https://upload.wikimedia.org/wikipedia/commons/9/94/KDE_Plasma_5.21_Breeze_Twilight_screenshot.png
And GNOME used to have them, too: https://upload.wikimedia.org/wikipedia/commons/9/9f/Gnome-2-22_%284%29.png
I like the looks of your window manager. That's using Wayland, right? The only thing on this screenshot to critique is all that wasted space of the windows not making use of the full screen!!!1 At least the file browser. 8-)
This drives me nuts when my workmates share their screens. I really don't get how people can work like that. You can't even read the whole line in the IDE or log viewer with all the expanded side bars. And then there's 200 pixels on the left and another 300 pixels on the right where the desktop wallpaper shows. Gnaa! There's the other extreme end when somebody shares their ultra wide screen and I just have a "regularish" 16:10 monitor and don't see shit, because it's resized way too tiny to fit my width. Good times. :-D
Sorry for going off on a tangent here. :-) Back to your WM: It has the right mix of being subtle and still similar to Motif. Probably close to the older Windowses. My memory doesn't serve me well, but I think they actually got it fairly good in my opinion. Your purple active window title looks killer. It just fits so well. This brown one (https://www.uninformativ.de/blog/postings/2025-07-22/0/leafpads.png) gives me also classic vibes. Awww. We ran some similar brownish color scheme (don't recall its name) on Win95 or Win98 for some time on the family computer. I remember other people visiting us not liking these colors. :-D
Here's an example of X11/Xlib being old and archaic.
X11 knows the data type "cardinal". For example, the window property _NET_WM_ICON (which holds image data for icons) is an array of "cardinal". I am already not really familiar with that word and I'm assuming that it comes from mathematics:
https://en.wikipedia.org/wiki/Cardinal_number
(It could also be a bird, but probably not: https://en.wikipedia.org/wiki/Cardinalidae)
We would probably call this an "integer" today.
EWMH says that icons are arrays of cardinals and that they're 32-bit numbers:
https://specifications.freedesktop.org/wm-spec/latest-single/#id-1.6.13
So it's something like 0x11223344 with 0x11 being the alpha channel, 0x22 is red, and so on.
You would assume that, when you retrieve such an array from the X11 server, you'd get an array of uint32_t, right?
Nope.
Xlib is so old, they use char for 8-bit stuff, short int for 16-bit, and long int for 32-bit:
That is congruent with the general C data types, so it does make sense:
https://en.wikipedia.org/wiki/C_data_types
Now the funny thing is, on modern x86_64, the type long int is actually 64 bits wide.
The result is that every pixel in a Pixmap, for example, is twice as large in memory as it would need to be. Just because Xlib uses long int, because uint32_t didn't exist, yet.
And this is something that I wouldn't know how to fix without breaking clients.
@lyse@lyse.isobeef.org They are optional dependencies and listed as such:
$ pacman -Qi pinentry
Name : pinentry
Version : 1.3.1-5
Description : Collection of simple PIN or passphrase entry dialogs which
utilize the Assuan protocol
Optional Deps : gcr: GNOME backend [installed]
gtk3: GTK backend [installed]
qt5-x11extras: Qt5 backend [installed]
kwayland5: Qt5 backend
kguiaddons: Qt6 backend
kwindowsystem: Qt6 backend
And it's probably a good thing that they're optional. I wouldn't want to have all that installed all the time.