Searching yarn

Twts matching #1
In-reply-to » I just noticed this pattern:

And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought that my redirect rules actually were broken once and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to that in my Git history, either. But it’s hard to tell, because I switched operating systems and webservers since then …

But the thing is that I’m seeing new URLs constructed in this pattern. So this can’t just be an old crawling dataset.

I am now wondering if those broken URLs are bot bugs as well.

They look like this (zalgo is a new project):

https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/

When you request that URL, you get redirected to /git/:

$ curl -sI https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
HTTP/1.0 301 Moved Permanently
Date: Sat, 22 Nov 2025 06:13:51 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 510
Location: /git/

And on /git/, there are links to my repos. So if a broken client requests https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/, then sees a bunch of links and simply appends them, you’ll end up with an infinite loop.
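For illustration, here is how a well-behaved client is supposed to resolve that Location header, versus the naive path-appending that would produce exactly this pattern. A Python sketch; the relative link zalgo/ is a made-up example of a repo link on /git/:

from urllib.parse import urljoin

requested = "https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/"
redirected = "https://www.uninformativ.de/git/"  # after following the 301
link = "zalgo/"                                  # hypothetical relative repo link on /git/

# Correct: resolve relative links against the URL you ended up on.
print(urljoin(redirected, link))
# https://www.uninformativ.de/git/zalgo/

# Broken: resolving against the originally requested URL instead
# produces ever-growing paths and never terminates.
print(urljoin(requested, link))
# https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/zalgo/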

Is that what’s going on here or are my redirects actually still broken … ?

⤋ Read More
In-reply-to » My goodness, a new level of stupidity.

I just noticed this pattern:

uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"

Let me add some spaces to make it more clear:

    uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET                       /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"

Some IP (from Brazil) requests some (non-existing, completely broken) URL from my webserver. But they use the hostname uninformativ.de, so they get redirected to www.uninformativ.de.

In the next step, just a second later, some other IP (from Nepal) issues an HTTP proxy request for the same URL.

Clearly, someone has no idea how HTTP redirects work. And clearly, they’re running their broken code on some kind of botnet all over the world.

⤋ Read More
In-reply-to » There are no really good GUI toolkits for Linux, are there?

FTR, I see one (two) issues with PyQt6, sadly:

  1. The PyQt6 docs appear to be mostly auto-generated from the C++ docs. And they contain many errors or broken examples (due to the auto-conversion). I found this relatively unpleasant to work with.
  2. (Until Python finally gets rid of the Global Interpreter Lock properly, it’s not really suited for GUI programs anyway – in my opinion. You can’t offload anything to a second thread, because the whole program is still single-threaded. This would have made my fractal rendering program impossible, for example. See the sketch below.)
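To illustrate the GIL point, a minimal sketch (plain threading, no Qt involved): two CPU-bound threads take roughly as long as running the same work twice sequentially, because only one thread can hold the GIL at a time:

import threading
import time

def cpu_bound(n: int = 20_000_000) -> None:
    # Pure-Python arithmetic: holds the GIL for almost its entire runtime.
    total = 0
    for i in range(n):
        total += i * i

start = time.perf_counter()
cpu_bound()
cpu_bound()
print(f"sequential: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=cpu_bound) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Roughly the same wall time as the sequential run: the second thread
# cannot run in parallel, it merely interleaves on the single GIL.
print(f"two threads: {time.perf_counter() - start:.2f}s")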

⤋ Read More

For those curious, the new Twtxt <-> ActivityPub bridge I’m building (bidirectional) simply requires three things:

  1. You register your Twtxt feed to the bridge: https://bridge.twtxt.net
  2. You verify that you in fact own/control the feed by putting the verification code somewhere on/in your feed (doesn’t matter where or how)
  3. You proxy/forward requests for /.well-known/webfinger to the Bridge at bridge.twtxt.net (see the sketch below).
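For step 3, a minimal sketch of that forwarding as a Python WSGI app. Assumptions: the bridge answers WebFinger under the same well-known path, and your site happens to be a Python app; any reverse proxy (nginx, Caddy, relayd, …) can do the same with a single rule:

from urllib.request import urlopen

def app(environ, start_response):
    if environ.get("PATH_INFO") == "/.well-known/webfinger":
        qs = environ.get("QUERY_STRING", "")
        # Forward the query to the bridge and relay its answer verbatim.
        with urlopen(f"https://bridge.twtxt.net/.well-known/webfinger?{qs}") as resp:
            ctype = resp.headers.get("Content-Type", "application/jrd+json")
            start_response("200 OK", [("Content-Type", ctype)])
            return [resp.read()]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found\n"]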

I’m still testing things through and ironing out bugs 🐛 Please be patient! 🙏

⤋ Read More

My goodness, a new level of stupidity.

The bots are now doing things like this:

GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
  1. That URL does not exist.
  2. By including http://uninformativ.de in that request, this instructs the webserver to do an HTTP proxy request. Of course, this isn’t allowed on my webserver (and shouldn’t be allowed on any normal webserver), resulting in HTTP 400. And even if it were, the target would be the exact same server, making a proxy request unnecessary. (See the sketch below.)
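For the curious, the difference between the two request forms can be reproduced with a few lines of Python. A sketch only: it opens a real connection, and the exact status lines depend on the server:

import socket

def status_line(host: str, request_line: str) -> str:
    # Send a raw HTTP request and return the first line of the response.
    with socket.create_connection((host, 80)) as s:
        s.sendall(f"{request_line}\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
        return s.recv(256).decode(errors="replace").splitlines()[0]

# Origin-form: the normal request line a client sends to an origin server.
print(status_line("uninformativ.de",
                  "GET /projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1"))

# Absolute-form: only meaningful when talking to a forward proxy.
# An origin server that is no proxy rejects it, hence the 400s.
print(status_line("uninformativ.de",
                  "GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1"))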

And of course, it’s not just 50 hits like this or 100 or 1’000 or 10’000. No, it’s over 150’000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.

This almost looks like a DDoS attack, but it’s just completely stupid. This feels more like some idiot vibe coded a crawler.

⤋ Read More
In-reply-to » @bender Thanks for this illustration, it completely ā€œmisunderstoodā€ everything I wrote and confidently spat out garbage. 👌

@prologic@twtxt.net Let’s go through it one by one. Here’s a wall of text that took me over 1.5 hours to write.

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.

The AI also said that users must develop ā€œAI literacyā€, again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is ā€œAI literacyā€, isn’t it?

My text went one step further, though: I said that when you take this requirement of ā€œAI literacyā€ into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.

Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it’s fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don’t feel like a draft that needs editing.

Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here’s the kicker: You did not create that draft. You were not involved in the ā€œthought processā€ behind it. When you, a human being, make a draft, you often think something like: ā€œOkay, I want to draw a picture of a landscape and there’s going to be a little house, but for now, I’ll just put in a rough sketch of the house and add the details later.ā€ You are aware of what you left out. When the AI did the draft, you are not aware of what’s missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.

Skill Erosion vs. Skill Evolution

You, @prologic@twtxt.net, also mentioned this in your car tyre example.

In my text, I gave two analogies: The gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini’s calculator example is different (and, again, gaslight-y, see below).

What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?

No, you’re something else. You should not be hired as a programmer or a mechanic.

Yes, that is ā€œskill evolutionā€ – which is pretty much my point! But the AI framed it like a counter-argument. It didn’t understand my text.

(But what if that’s our future? What if all programming will look like that in some years? I claim: It’s not possible. If you don’t know how to program, then you don’t know how to read/understand code written by an AI. You are something else, but you’re not a programmer. It might be valid to be something else – but that wasn’t my point, my point was that you’re not a bloody programmer.)

Gemini’s calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., ā€œcomplex problem-solvingā€) are two different things. Just because you now have a calculator, doesn’t mean it’ll free you up to do mathematical proofs or whatever.

What would have worked is this: Let’s say you’re an accountant and you sum up spendings. Without a calculator, this takes a lot of time and is error-prone. But when you have one, you can work faster. But once again, there’s a little gaslight-y detail: A calculator is correct. Yes, it could have ā€œbugsā€ (hello Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it’s just a statistical model. So, this modified example (ā€œaccountant with a calculatorā€) would actually have to be phrased like this: Suppose there’s an accountant and you give her a magic box that spits out the correct result, what, I don’t know, 70-90% of the time. The accountant couldn’t rely on this box now, could she? She’d either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.

Gemini has no idea that its calculator example doesn’t make sense. It just spits out some generic ā€œargumentā€ that it picked up on some website.

3. The Technical and Legal Perspective (Scraping and Copyright)

The AI makes two points here. The first one, I might actually agree with (ā€œbad bot behavior is not the fault of AI itselfā€).

The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn’t. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn’t even question whether it’s okay to break the current law or not. It just said ā€œlol yeah, change the lawsā€. (I wonder in what way the laws would have to be changed in the AI’s ā€œopinionā€, because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn’t part of Gemini’s answer.)

tl;dr

Except for one point, I don’t accept any of Gemini’s ā€œcriticismā€. It didn’t pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it’s just a statistical model).

And it framed everything like a counter-argument, while actually repeating what I said. That’s gaslighting: When Alice says ā€œthe sky is blueā€ and Bob replies with ā€œwhy do you say the sky is purple?!ā€

But it sure looks convincing, doesn’t it?

Never again

This took so much of my time. I won’t do this again. 😂

⤋ Read More
In-reply-to » @bender Thanks for this illustration, it completely ā€œmisunderstoodā€ everything I wrote and confidently spat out garbage. 👌

@movq@www.uninformativ.de this I find more worrisome, and saw no mention of it in your text: Right-Wing Chatbots Turbocharge America’s Political and Cultural Wars (gift article).

Enoch, one of the newer chatbots powered by artificial intelligence, promises ā€œto ā€˜mind wipe’ the pro-pharma biasā€ from its answers. Another, Arya, produces content based on instructions that tell it to be an ā€œunapologetic right-wing nationalist Christian A.I. model.ā€

⤋ Read More
In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:

1. The User Perspective (Untrustworthiness)

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

  • AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
  • The Rise of AI Literacy: Users must develop a new skill—AI literacy—to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the ā€œgaslightingā€ effect.
2. The Moral/Political Perspective (Skill Erosion)

The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.

  • Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
  • Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
3. The Technical and Legal Perspective (Scraping and Copyright)

The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.

  • Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.

⤋ Read More

I’m building a service that lets you:

create and manage disposable, brandable email aliases so you can track leaks, forward important messages, and keep your real inbox clean.

I’ve just finished building it for the most part, and have cut a v0.1.0 release. It’s currently closed source (to be decided later) and now open to beta testers. cc @bender@twtxt.net 🙏 I fully intend to monetize and offer this as a paid service in the coming weeks/months, but beta/invite-only testers and early adopters/users first 🤟

⤋ Read More
In-reply-to » @lyse They’re seriously telling us at work: ā€œCan it be AI’d? Do it, don’t waste time!ā€ Shit like that is the result. (What’s this weird gray triangle in the bottom right corner?)

@movq@www.uninformativ.de It’s way more expensive and time-consuming in the end. If only somebody had warned us!!1

The triangle reminds me of zalgo text: https://en.wikipedia.org/wiki/Zalgo_text

⤋ Read More
In-reply-to » The most infuriating 3 seconds of using this Mac every day are the first time I run man and it calls home to see if I'm allowed to do that.

Because OP’s twtxt seems to be a cross-post from the Fediverse, I am bringing some context here. It refers to this GitHub issue. This comment explains why the issue described is happening:

This is usually due to notarization checks. E.g. the binaries are checked by the notarization service (ā€˜XProtect’) which phones home to Apple. Depending on your network environment, this can take a long time. Once the executable has been run the results are usually cached, so any subsequent startup should be fast.

OP’s network must be running on a 1,200 baud modem, or slower. 🤭 I have never, ever, experienced any distinguishable delays.

⤋ Read More

Advent of Code will be different this year:

https://old.reddit.com/r/adventofcode/comments/1ocwh04/changes_to_advent_of_code_starting_this_december/

There will only be 12 puzzles, i.e. only December 1 to December 12. This might make it more interesting for some people, because it’s (probably) less work and a lower chance of people getting burned out. 🤔

Personally, I’ll probably stretch it out over 24 days, giving myself more time to solve each puzzle. And I really want this event to last the entire month. 😅

Maybe this makes it more interesting for some people around here as well?

⤋ Read More

@movq@www.uninformativ.de streamlining jenny.vim?

index adc0db9..cb54abc 100644
--- a/vim/ftdetect/jenny.vim
+++ b/vim/ftdetect/jenny.vim
@@ -1 +1,2 @@
 au BufNewFile,BufRead jenny-posting.eml setl completefunc=jenny#CompleteMentions fo-=t wrap
+au BufRead,BufNewFile jenny-posting.eml normal $

⤋ Read More
In-reply-to » @lyse Great job!

@lyse@lyse.isobeef.org In my case it was a silver necklace, a hummingbird with a wing attached using the cold welding I mentioned, done with thin brass wires.

I made it in a goldsmithing class (I went to a private craftsmanship high school), so no phones were allowed (no photos of it) and no ā€œtaking homeā€ of the works.

Here’s a rough sketch of it drawn from memory; the dots in the wing are where it connects to the body.

Hummingbird necklace sketch

The technique is basically the same as I described, but the scale is much smaller; the whole piece was about 5-6 cm on the largest side.

The rivet was made by drilling a hole through the parts; then, with a short and thicker drill, you widen the hole at the surface to let the rivet settle flatter on the piece; then, with a rubber hammer, you hit it to flatten the head until it’s snug in the hole, and lock the parts together by doing the same on the other side.

Note that widening the hole with a thicker drill head won’t make a difference with bigger holes; mine were about 1-2 mm in diameter at most.

Here’s a sketch of what is going on for clarity.

Cold welding cross-section

⤋ Read More
In-reply-to » @lyse Thanks, I think I fixed it now. Sorry for the spam.

@itsericwoodward@itsericwoodward.com No worries, all good, mate! We all have to start somewhere. Other software requests my feed several orders of magnitude more often.

I can confirm, the User-Agent header appears to be fixed. \o/

Two other things I noticed, though:

  1. There’s now an OPTIONS request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it’s rejected with a 405 Method Not Allowed.

  2. Not that these few requests bother me at all, but you might wanna implement caching next, with either the If-Modified-Since or If-None-Match request headers. This way, if the feed hasn’t changed, the web server can reply with a 304 Not Modified and no body at all, saving unnecessary traffic (see the sketch below). But again, this is really not an issue for me at all. I just wanted to make sure you’re aware of it, that’s all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
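For illustration, a minimal conditional-GET sketch using only the Python standard library (the feed URL is a placeholder):

import urllib.error
import urllib.request

FEED = "https://example.com/twtxt.txt"  # hypothetical feed URL

# First fetch: remember the validators the server hands out.
with urllib.request.urlopen(FEED) as resp:
    etag = resp.headers.get("ETag")
    last_modified = resp.headers.get("Last-Modified")
    body = resp.read()

# Later fetches: send the validators back.
req = urllib.request.Request(FEED)
if etag:
    req.add_header("If-None-Match", etag)
if last_modified:
    req.add_header("If-Modified-Since", last_modified)
try:
    with urllib.request.urlopen(req) as resp:
        body = resp.read()  # 200: feed changed, parse the new body
except urllib.error.HTTPError as e:
    if e.code != 304:
        raise
    # 304 Not Modified: the cached copy is still good, nothing to download.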

⤋ Read More
In-reply-to » TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).

@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.

After thinking about it for a while I got to two solutions:

Proposal 1: Thread syntax (using subject)

Each post have an implicit and an optional explicit root reference:

  • Implicit (no action needed, all data required are already there)

    • URL + timestamp
  • Explicit (subject required)

    • Identity (client generated)
    • External reference
    • Random value

We then include a ā€œrootā€ subject in each post for generating explicit threads:

1. `[ROOT_ID] (REPLY_ID)`: simpler with no need of prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex but could allow expansions
	- `(rt:ROOT_ID) (re:REPLY_ID)`: same but with a compact version
	- `($ROOT_ID) (>REPLY_ID)`: same but with single characters

Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn’t need to have a real meaning.

Using a custom reference this way allows a client to decide how to generate them:

  • Identity: can be a content hash or signature or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
  • External reference: can be provided by another system (e.g. 7e073bd345, the latest yarnsocial/yarn commit)
  • Random value: like a UUID (e.g. 9a0c34ed-d11e-447e-9257-0a0f57ef6e07)
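For what it’s worth, a tiny sketch of how a client might extract such explicit references, using the hypothetical (root:…) (reply:…) syntax from option 2 above:

import re

# Hypothetical subject format from proposal 1, option 2.
REF = re.compile(r"\((root|reply):([^)\s]+)\)")

def parse_refs(twt_text: str) -> dict:
    # Returns e.g. {'root': '9a0c34ed', 'reply': '7e073bd345'}
    return {kind: ref for kind, ref in REF.findall(twt_text)}

print(parse_refs("(root:9a0c34ed) (reply:7e073bd345) Nice idea!"))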

Proposal 2: Threaded mentions (featuring zvava)

Inspired by @zvava@twtxt.net’s solution it could be simplified into: #<nick url#timestamp> or #<url#timestamp>

It can be shown like a mention or hidden like a subject.

If we’re thinking of using a counter in the client, I think there’s no point in avoiding the timestamp anymore.

⤋ Read More
In-reply-to » @itsericwoodward any news about this? I am, at the very least, curious!

@bender@twtxt.net Thanks for asking!

So, I’ve been working on 2 main twtxt-related projects.

The first is a small Node / express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I’ve been testing it on my site since the night I made that post. It’s still very much an MVP, and I’ve been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP done of project #2 (the reader).

But that’s where I’ve been struggling. The idea seems simple enough - another Node / express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the ā€œfollowā€ list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real-live twtxt files, so some twts come through, while others get lost in the ether. I’ll figure it out eventually, but for now, I’ve been spending far more time than I anticipated just trying to get it to work end-to-end.

On top of it, the 2 projects wound up turning into 4 (so far), as I’ve been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).

In the end, I’m hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we’ll see.

I hope this has satisfied your curiosity, but if you’d like to know more, please reach out!

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@lyse@lyse.isobeef.org @prologic@twtxt.net Can’t we find a middle ground and support both?

The thread is defined by two parts:

  1. The hash
  2. The subject

The client/pod generates the hash and indexes it in its database/cache, then simply queries the subjects of other posts to find the related posts, right?

In my own client’s current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the commonly implemented approach, then adding the location-based one is somewhat simple.

function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);

    // New location approach
    const location = post.url + '#' + post.timestamp;

    // Unchanged (probably)
    const subject = post.subject;

    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}

// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location

As I said before, the mention is already location based @<example https://example.com/twtxt.txt>, so I think we should keep that in consideration.

Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.

Otherwise, the only other solution I can think of is a different approach where the value doesn’t matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed ā€œheaderā€ for each post, but it can be seen as a separate extension).

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@prologic@twtxt.net I know we won’t ever convince each other of the other’s favorite addressing scheme. :-D But I wanna address (haha) your concerns:

  1. I don’t see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible as the feed URL is part of the hashed value and reference of some sort in the location-based scheme. It doesn’t matter.

  2. The same is true for duplication and forks. Even today, the ā€œcanonical URLā€ has to be chosen to build the hash. That’s exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don’t know of any such software to be honest.

  3. If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than an anti-feature in terms of user experience?

  4. I don’t get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where’s the difference? Except that the lookup key is longer or compound or whatever depending on the cache format.

  5. Even a new hashing algorithm requires work on clients etc. It’s not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That’s why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.

If these are general concerns, I’m completely with you. But I don’t think that they only apply to location-based addressing. That’s how I interpreted your message. I could be wrong. Happy to read your explanations. :-)

⤋ Read More
In-reply-to » @zvava @lyse I also think a location based reference might be better.

@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.

  1. The current hash relies on a url field too: by specification, it will use the first # url = <URL> in the feed’s metadata if present. That too can differ from the fetching source, and if that field changes, it would break the existing hashes as well. A better solution would be to use a non-URL key like # feed_id = <UNIQUE_RANDOM_STRING> with the url as fallback.

  2. We can prevent duplications if the reference uses that same url field too, or if the client ā€œcollapsesā€ any reference to any of the urls defined in the metadata.

  3. I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Note that edits can also break the hash; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.

  4. For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I’m missing some context on this one.

  5. To prevent collisions there was a discussion on extending the hash (I forget whether that was already settled), but without a fallback that would break existing clients too. We should think of a parallel format that leaves current implementations unchanged; we are already backward compatible with original feeds that don’t use threads at all, and a mention-style format could be even more user-friendly for those clients.

We should also keep in mind that the current mention format is already location based (@<example https://example.com/twtxt.txt>) so I’m not that worried about threads working the same way.

Hope to see some other thoughts about this matter. 🤓

⤋ Read More
In-reply-to » @zvava @lyse I also think a location based reference might be better.

Here is just a small list of things™ that I’m aware will break, some quite badly, others in minor ways:

  1. Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won’t.
  2. Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several ā€œparentsā€ and split the thread.
  3. Verification & spam-resistance: content addressing lets you dedupe and verify you’re pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
  4. Offline/cached reading: without the original URL being reachable, readers can’t resolve anchors; with hashes they can match against local caches/archives.
  5. Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.

⤋ Read More

ok so i have found a genuine twt hash collision. what do i do.

internally, bbycll relies on a post lookup table with post hashes as keys, this is really fast but i knew i’d inevitably run into this issue (just not so soon) so now i have to either:
  1) pick the newer post over the other
  2) break from specification and not lowercase hashes
  3) secretly associate canonical urls or additional entropy with post hashes in the backend without a sizeable performance impact somehow
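fwiw, one possible shape of option 3, sketched in python (field names are made up): keep the hash as the fast key, but store a small bucket per hash and only fall back to extra entropy when a bucket actually holds more than one post:

from collections import defaultdict

posts_by_hash = defaultdict(list)  # hash -> [post, ...]; almost always length 1

def add_post(post):
    posts_by_hash[post["hash"]].append(post)

def lookup(hash_, url=None, timestamp=None):
    bucket = posts_by_hash[hash_]
    if len(bucket) == 1:
        return bucket[0]  # the fast, common path: a single dict lookup
    # collision: disambiguate with entropy kept only in the backend
    return next(p for p in bucket if p["url"] == url and p["timestamp"] == timestamp)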

⤋ Read More
In-reply-to » Thanks to a blog post by ~solderpunk and the presence of ImageMagick on my pubnix, all of my weirdcore art (apart from the animated works) is now under 32K in size! Honestly, I'd say the lower JPEG quality actually adds to the vibe of the images: something from the early web, taken permanently out of context and long forgotten.

gopher://hashnix.club:70/1/~dce/art/

⤋ Read More

Hmm, gnu.org is slow as heck. Shorter HTML pages load in about ten seconds. This complete AWK manual all in one large HTML page took a full minute: https://www.gnu.org/software/gawk/manual/gawk.html Is there maybe some anti-AI shenanigans going on?

In any case, I find the user guide super interesting. My AWK skills are basically non-existent, so I finally decided to change that. This document is incredibly well written and makes it really fun to keep reading and learning. I’m very impressed. So far, I made it to section 1.6, happy to continue.

⤋ Read More
In-reply-to » The bots have begun to access my website way more often. I’m getting about 120k hits on https://www.uninformativ.de/git/ now in a couple of hours.

Why do I care about this?

  1. The load will become a problem at some point.
  2. These crawlers and the current ā€œAIā€ in general are breaking the rules. I am supposed to be paying for every little thing, I get sued for ā€œpiracyā€. But apparently, these rules only apply to me. If I had more money, I could break them. Fuck that.
  3. I simply don’t want it. Period.

⤋ Read More

I’ve got a prototype of my hardcopy simulator going. I’m typing on the keyboard and the ā€œdisplayā€ goes to the printer:

https://movq.de/v/56feb53912/s.png

https://movq.de/v/235c1eabac/MVI_8810.MOV.mp4

The biiiiiiiiiig problem is that the print head and plastic cover make it impossible to see what’s currently being printed, because this is not a typewriter. This means: In order to see what I just entered, I have to feed the paper back and forth and back and forth … it’s not ideal.

I got that idea of moving back/forth from Drew DeVault, who – as it turned out – did something similar a few years back. (I tried hard to read as little as possible of his blog post, because figuring things out myself is more fun. But that could mean I missed a great idea here or there.)

But hey, at least this is running on my Pentium 133 on SuSE Linux 6.4, printer connected with a parallel cable. 😁

(Also, yes, you can see the printouts of earlier tests and, yes, I used ed(1) wrong at one point. 🤪 And ls insisted on using colors …)

⤋ Read More
In-reply-to » Bloody AI clowns:

Here’s an interesting thought/angle on this topic:

gemini://gemini.conman.org/boston/2025/08/21.1

A further check showed that all the network blocks are owned by one organization—Tencent [4]. I’m seriously thinking that the CCP (Chinese Communist Party) encourage this with maybe the hope of externalizing the cost of the Great Firewall [5] to the rest of the world.

⤋ Read More
In-reply-to » Speaking of manpages:

@lyse@lyse.isobeef.org @kat@yarn.girlonthemoon.xyz Colorized manpages have been a thing for a very long time:

https://movq.de/v/81219d7f7a/s.png

Problem is, hardly anybody knows this, because you configure this by … drumroll … overwriting TERMCAP entries of less in your ~/.bashrc:

export LESS_TERMCAP_md=$'\e[38;5;3m'      # Bold
export LESS_TERMCAP_me=$'\e[0m'           # End Bold
export LESS_TERMCAP_us=$'\e[4;38;5;6m'    # Underline
export LESS_TERMCAP_ue=$'\e[0m'           # End Underline
export GROFF_NO_SGR=1                     # Needed since groff 1.23

⤋ Read More
In-reply-to » I was drafting support for showing ā€œapplication iconsā€ in my window manager, i.e. the Firefox icon in the titlebar:

@lyse@lyse.isobeef.org ā€œAdvancedā€, well, probably more ā€œmatureā€. There aren’t a ton of crazy features and that icon thing is the largest code addition in the last 10 years. %)

Speaking of OS/2 … I just realized that Windows 3.x didn’t have icons, either. If I’m not mistaken, this only got added in Windows 95. In other words, OS/2 had this feature before Windows did, because at least OS/2 2.1 from 1993 had icons. Who would have thunk.

(Now I kind of want to know which system really introduced this feature.)

⤋ Read More
In-reply-to » I was drafting support for showing ā€œapplication iconsā€ in my window manager, i.e. the Firefox icon in the titlebar:

@lyse@lyse.isobeef.org Oh, huh, maybe it was just my GNOME 2 themes back then that didn’t show the icon. 🤔

I like the looks of your window manager. That’s using Wayland, right?

Oh, no. It’s still X11. All my recent Wayland comments resulted from me trying to switch, but I think it’s still too early. Being unable to use QEMU (because it can’t capture the mouse pointer) is a pretty big blocker for me. This is completely broken, it just happens to be unnoticeable with modern guest OSes, so it’s probably not a priority for devs.

(Not to mention that I would have to fork and substantially extend dwl in order to ā€œreplicateā€ my X11 WM. And then, after having done that, I’d have to follow upstream Wayland development, for which I don’t have the resources. Things would need to slow down before I can do that.)

all that wasted space of the windows not making use of the full screen!!!1

Heh. I’ve been using tiling WMs for ~15 years now, so it’s actually kind of refreshing to see something different for a change. 😅

Probably close to the older Windowses.

That particular theme is a ripoff of OS/2 Warp 3: https://movq.de/v/6c2a948882/s.png 😅

We ran some similar brownish color scheme (don’t recall its name) on Win95 or Win98

Oh god. Yeah, I wasn’t a fan of those, either. 🥓

⤋ Read More
In-reply-to » I was drafting support for showing ā€œapplication iconsā€ in my window manager, i.e. the Firefox icon in the titlebar:

@movq@www.uninformativ.de According to this screenshot, KDE still shows good old application icons: https://upload.wikimedia.org/wikipedia/commons/9/94/KDE_Plasma_5.21_Breeze_Twilight_screenshot.png

And GNOME used to have them, too: https://upload.wikimedia.org/wikipedia/commons/9/9f/Gnome-2-22_%284%29.png

I like the looks of your window manager. That’s using Wayland, right? The only thing on this screenshot to critique is all that wasted space of the windows not making use of the full screen!!!1 At least the file browser. 8-)

This drives me nuts when my workmates share their screens. I really don’t get how people can work like that. You can’t even read the whole line in the IDE or log viewer with all the expanded side bars. And then there’s 200 pixels on the left and another 300 pixels on the right where the desktop wallpaper shows. Gnaa! There’s the other extreme end when somebody shares their ultra wide screen and I just have a ā€œregularishā€ 16:10 monitor and don’t see shit, because it’s resized way too tiny to fit my width. Good times. :-D

Sorry for going off on a tangent here. :-) Back to your WM: It has the right mix of being subtle and still similar to motif. Probably close to the older Windowses. My memory doesn’t serve me well, but I think they actually got it fairly good in my opinion. Your purple active window title looks killer. It just fits so well. This brown one (https://www.uninformativ.de/blog/postings/2025-07-22/0/leafpads.png) gives me also classic vibes. Awww. We ran some similar brownish color scheme (don’t recall its name) on Win95 or Win98 for some time on the family computer. I remember other people visiting us not liking these colors. :-D

⤋ Read More

Here’s an example of X11/Xlib being old and archaic.

X11 knows the data type ā€œcardinalā€. For example, the window property _NET_WM_ICON (which holds image data for icons) is an array of ā€œcardinalā€. I am already not really familiar with that word and I’m assuming that it comes from mathematics:

https://en.wikipedia.org/wiki/Cardinal_number

(It could also be a bird, but probably not: https://en.wikipedia.org/wiki/Cardinalidae)

We would probably call this an ā€œintegerā€ today.

EWMH says that icons are arrays of cardinals and that they’re 32-bit numbers:

https://specifications.freedesktop.org/wm-spec/latest-single/#id-1.6.13

So it’s something like 0x11223344 with 0x11 being the alpha channel, 0x22 is red, and so on.

You would assume that, when you retrieve such an array from the X11 server, you’d get an array of uint32_t, right?

Nope.

Xlib is so old, they use char for 8-bit stuff, short int for 16-bit, and long int for 32-bit:

https://x.org/releases/current/doc/libX11/libX11/libX11.html#Obtaining_and_Changing_Window_Properties

That is congruent with the general C data types, so it does make sense:

https://en.wikipedia.org/wiki/C_data_types

Now the funny thing is, on modern x86_64, the type long int is actually 64 bits wide.

The result is that every pixel in a Pixmap, for example, is twice as large in memory as it would need to be. Just because Xlib uses long int, because uint32_t didn’t exist, yet.
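A small sketch of what this means in practice: on x86_64, each array element arrives as a 64-bit long int of which only the low 32 bits carry the ARGB pixel (the helper and the value are illustrative):

def unpack_argb(cardinal: int) -> tuple[int, int, int, int]:
    pixel = cardinal & 0xFFFFFFFF  # the upper 32 bits are mere padding
    a = (pixel >> 24) & 0xFF
    r = (pixel >> 16) & 0xFF
    g = (pixel >> 8) & 0xFF
    b = pixel & 0xFF
    return a, r, g, b

print(unpack_argb(0x0000000011223344))
# (17, 34, 51, 68), i.e. 0x11, 0x22, 0x33, 0x44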

And this is something that I wouldn’t know how to fix without breaking clients.

⤋ Read More

@lyse@lyse.isobeef.org They are optional dependencies and listed as such:

$ pacman -Qi pinentry
Name            : pinentry
Version         : 1.3.1-5
Description     : Collection of simple PIN or passphrase entry dialogs which
                  utilize the Assuan protocol
Optional Deps   : gcr: GNOME backend [installed]
                  gtk3: GTK backend [installed]
                  qt5-x11extras: Qt5 backend [installed]
                  kwayland5: Qt5 backend
                  kguiaddons: Qt6 backend
                  kwindowsystem: Qt6 backend

And it’s probably a good thing that they’re optional. I wouldn’t want to have all that installed all the time.

⤋ Read More