All my newly added test cases failed. movq thankfully provided them in https://git.mills.io/yarnsocial/twtxt.dev/pulls/28#issuecomment-20801 for the draft of the twt hash v2 extension. The first error was easy to see in the diff. The hashes were way too long. You've already guessed it: I had cut the hash from the twelfth character towards the end instead of taking the first twelve characters: hash[12:] instead of hash[:12].
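(For illustration, here's the difference in JavaScript terms; the Go expressions in question were hash[:12] and hash[12:], and the hash value below is made up:)

const hash = 'abcdefghijklmnopqrstuvwxyz012345'; // made-up value, not a real twt hash

console.log(hash.slice(0, 12)); // "abcdefghijkl" -- the first twelve characters (what I wanted)
console.log(hash.slice(12));    // "mnopqrstuvwxyz012345" -- everything after the first twelve (what I got)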
After fixing this rookie mistake, the tests still all failed. Hmmm. Did I still cut the wrong twelve characters? :-? I even checked the Go reference implementation in the document itself. But it read basically the same as mine. Strange, what the heck is going on here?
Turns out that my vim replacements to transform the Python code into Go code butchered all the URLs. ;-) The order of operations matters. I first replaced the equals signs with colons for the subtest struct fields and then wanted to transform the RFC 3339 timestamp strings into time.Date(…) calls. So, I replaced the colons in the time with commas and spaces. Hence, my URLs then also all read https, //example.com/twtxt.txt.
But that was it. All tests green. \o/
I just noticed this pattern:
uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Let me add some spaces to make it more clear:
    uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET                        /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
Some IP (from Brazil) requests some (non-existing, completely broken) URL from my webserver. But they use the hostname uninformativ.de, so they get redirected to www.uninformativ.de.
In the next step, just a second later, some other IP (from Nepal) issues an HTTP proxy request for the same URL.
Clearly, someone has no idea how HTTP redirects work. And clearly, they're running their broken code on some kind of botnet all over the world.
I was looking at some ancient code and then thought: Hmm, maybe it would be a good idea to show more details in this error message, namely which of the values don't line up. On the other hand, that feature probably isn't used anyway, because it's a bit ugly to use (it grew historically). And on top of that, most teams need something slightly different if they deal with that sort of thing at all.
I still told my workmates about it, so they could also have a look and we could decide tomorrow what to do about it. Speak of the devil, no kidding, not even half an hour later, a puzzled tester contacted me. She had received exactly that rather useless error message. Looks like I had an afflatus. ;-)
It's interesting, though, that in all those years, nobody stumbled across this before. At least we now know for sure that this is not dead code. :-)
@movq@www.uninformativ.de I think I now remember having similar problems back then. I'm pretty sure I typically consulted the Qt C++ documentation and only very rarely looked at the Python one. It was easy enough to translate the C++ code to Python.
Yeah, the GIL can be problematic at times. I'm glad it wasn't an issue for my application.
For those curious, the new Twtxt <-> ActivityPub bridge I'm building (bidirectional) simply requires three things:
- You register your Twtxt feed to the bridge: https://bridge.twtxt.net
- You verify that you in fact own/control the feed by putting the verification code somewhere on/in your feed (doesn't matter where or how)
- You proxy/forward requests for /.well-known/webfinger to the bridge at bridge.twtxt.net.
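For example, here's a rough Node sketch of the forwarding idea (needs Node 18+ for the global fetch; most people will probably just add a proxy rule to their existing webserver instead):

import http from 'node:http';

// Minimal sketch: pass /.well-known/webfinger (including its ?resource=... query)
// through to bridge.twtxt.net and relay the answer unchanged.
http.createServer(async (req, res) => {
  if (req.url.startsWith('/.well-known/webfinger')) {
    const upstream = await fetch(`https://bridge.twtxt.net${req.url}`);
    res.writeHead(upstream.status, {
      'content-type': upstream.headers.get('content-type') ?? 'application/jrd+json',
    });
    res.end(await upstream.text());
    return;
  }
  res.writeHead(404).end();
}).listen(8080);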
I'm still testing everything and ironing out bugs. Please be patient!
Technically I can put the Bridge verification code in my feed's metadata, so no one really ever sees or notices it 🤔 Maybe I'll add a first-class button/field thingy in yarnd so users can "register their feed" straight from their pod? 🤔
@lyse@lyse.isobeef.org Yeah, I noticed that too. I haven't double-checked my code, though. Maybe it has something to do with selecting the correct URL? I mean, these feeds don't have any # url = fields, so maybe that's it?
tilde.club feeds have no # nick, and that is messing with yarnd's behavior.
@bender@twtxt.net Just wrote better code with tests 🤣
My goodness, a new level of stupidity.
The bots are now doing things like this:
GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
- That URL does not exist.
- By including http://uninformativ.de in that request, the client instructs the webserver to do an HTTP proxy request. Of course, this isn't allowed on my webserver (and shouldn't be allowed on any normal webserver), resulting in HTTP 400. And even if it were allowed, the target would be the exact same server, making a proxy request unnecessary.
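(For the curious, here's a tiny Node sketch of the difference; this is not what my actual webserver runs, it just shows that a proxy-style request target arrives as a full URL, which a normal webserver can simply refuse.)

import http from 'node:http';

http.createServer((req, res) => {
  // Origin-form: "GET /path HTTP/1.1"        -> req.url === "/path"
  // Absolute-form (proxy request):
  //   "GET http://example.com/path HTTP/1.1" -> req.url === "http://example.com/path"
  if (/^https?:\/\//i.test(req.url)) {
    res.writeHead(400).end('this is not a proxy\n');
    return;
  }
  res.writeHead(200).end('ok\n');
}).listen(8080);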
And of course, it's not just 50 hits like this or 100 or 1’000 or 10’000. No, it's over 150’000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.
This almost looks like a DDoS attack, but it's just completely stupid. This feels more like some idiot vibe-coded a crawler.
User-Agent analyzer with my subscription list to spot new feeds automatically.
@lyse@lyse.isobeef.org an advent of code, I love it! Go, Lyse, go!
@prologic@twtxt.net Let's go through it one by one. Here's a wall of text that took me over 1.5 hours to write.
"The criticism of AI as untrustworthy is a problem of misapplication, not capability."
This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.
The AI also said that users must develop "AI literacy", again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is "AI literacy", isn't it?
My text went one step further, though: I said that when you take this requirement of "AI literacy" into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.
Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it's fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don't feel like a draft that needs editing.
Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here's the kicker: You did not create that draft. You were not involved in the "thought process" behind it. When you, a human being, make a draft, you often think something like: "Okay, I want to draw a picture of a landscape and there's going to be a little house, but for now, I'll just put in a rough sketch of the house and add the details later." You are aware of what you left out. When the AI did the draft, you are not aware of what's missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.
Skill Erosion vs. Skill Evolution
You, @prologic@twtxt.net, also mentioned this in your car tyre example.
In my text, I gave two analogies: the gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini's calculator example is different (and, again, gaslight-y, see below).
What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?
No, you're something else. You should not be hired as a programmer or a mechanic.
Yes, that is "skill evolution" – which is pretty much my point! But the AI framed it like a counter-argument. It didn't understand my text.
(But what if that's our future? What if all programming will look like that in a few years? I claim: It's not possible. If you don't know how to program, then you don't know how to read/understand code written by an AI. You are something else, but you're not a programmer. It might be valid to be something else – but that wasn't my point, my point was that you're not a bloody programmer.)
Gemini's calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., "complex problem-solving") are two different things. Just because you now have a calculator doesn't mean it'll free you up to do mathematical proofs or whatever.
What would have worked is this: Let's say you're an accountant and you sum up expenses. Without a calculator, this takes a lot of time and is error-prone. But when you have one, you can work faster. But once again, there's a little gaslight-y detail: A calculator is correct. Yes, it could have "bugs" (hello, Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it's just a statistical model. So, this modified example ("accountant with a calculator") would actually have to be phrased like this: Suppose there's an accountant and you give her a magic box that spits out the correct result, what, I don't know, 70-90% of the time. The accountant couldn't rely on this box now, could she? She'd either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.
Gemini has no idea that its calculator example doesn't make sense. It just spits out some generic "argument" that it picked up on some website.
3. The Technical and Legal Perspective (Scraping and Copyright)
The AI makes two points here. The first one, I might actually agree with ("bad bot behavior is not the fault of AI itself").
The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn't. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn't even question whether it's okay to break the current law or not. It just said "lol yeah, change the laws". (I wonder in what way the laws would have to be changed in the AI's "opinion", because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn't part of Gemini's answer.)
tl;dr
Except for one point, I don't accept any of Gemini's "criticism". It didn't pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it's just a statistical model).
And it framed everything like a counter-argument, while actually repeating what I said. That's gaslighting: when Alice says "the sky is blue" and Bob replies with "why do you say the sky is purple?!"
But it sure looks convincing, doesn't it?
Never again
This took so much of my time. I won't do this again.
@movq@www.uninformativ.de This I find more worrisome, and I saw no mention of it in your text: Right-Wing Chatbots Turbocharge America's Political and Cultural Wars (gift article).
Enoch, one of the newer chatbots powered by artificial intelligence, promises "to 'mind wipe' the pro-pharma bias" from its answers. Another, Arya, produces content based on instructions that tell it to be an "unapologetic right-wing nationalist Christian A.I. model."
@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:
1. The User Perspective (Untrustworthiness)
The criticism of AI as untrustworthy is a problem of misapplication, not capability.
- AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
- The Rise of AI Literacy: Users must develop a new skill, AI literacy, to critically evaluate and verify AI's probabilistic output. This skill, along with improving citation features in AI tools, mitigates the "gaslighting" effect.
The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it's skill evolution, not erosion.
- Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
- Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.
- Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.
don't mind the glaring light mode i just think the pink looks pretty. this "desktop mode" is just a bunch of css repurposing the sidebar into the taskbar, but the file manager and its supporting code is proving a very fun endeavour. my favorite part is u can just turn javascript off and it functions like a regular website with nothing suspicious about it at all
@movq@www.uninformativ.de Uh, that actually doesn't look that terrible. Somehow, I remember Swing GUIs being way uglier.
As for Visual Basic, I only had to use VBA once in my life. That was at the beginning of my career, when I inherited a project from a departing coworker. Fuck me, was that awful. The damn compiler error dialog box alone, popping up in my face all the time while editing because the compiler was already trying to parse the unfinished and hence of course uncompilable code. Boy, that left a lasting impression on me. I ported everything to Java very quickly. Luckily, the code base wasn't all that large at that point in time. I had to add a bunch of new features after that, so I was very glad that I convinced my workmate/project manager to do the port first. We didn't even need a GUI; the button in Excel was transformed into a command line program that just generated the large file.
But I cannot comment on the VB GUI designer, I never used that. Your screenshot looks very similar to the Delphi one, though. Only towards the end of my Delphi days did I find out about the possibility to make the widgets snap to window edges and corners (I don't remember what that was called), so that resizing the windows was actually possible without messing up their entire contents.
Switching to Linux, Delphi wasn't an option anymore. For some reason I couldn't use Kylix. Maybe it was already dead by the time I changed OSes. Or I couldn't get it to run. I just don't remember. I do recall that the unavailability of Delphi was the reason it took me a while to actually settle on Linux. I then fully switched to Java. The GridBagLayout was my absolute favorite Swing layout manager. I reckon I used it 98% of the time, because it was so powerful and made the windows resize properly, just as I had learned to do in Delphi shortly before.
Up until discovering Swing, I used Java's AWT for a short amount of time. That was very limited, I think, and I hit its limits fairly quickly. Later at uni, we had one project making use of SWT. That didn't convince me either. I could be wrong, but I think there was also an SWT GUI designer plugin for Eclipse. If there really was, it wasn't in the same league as Delphi's (there must be a reason I forgot about it ;-)).
@movq@www.uninformativ.de The one for Delphi was quite good. But JCreator (I don't remember exactly) was awful, and I never looked back at GUI designers. I've laid out the GUI by hand in code myself ever since. These days I don't deal with GUI programming anymore.
Advent of Code will be different this year:
There will only be 12 puzzles, i.e. only December 1 to December 12. This might make it more interesting for some people, because it's (probably) less work and there's a lower chance of people getting burned out. 🤔
Personally, I'll probably stretch it out over 24 days, giving myself more time to solve each puzzle. And I really want this event to last the entire month.
Maybe this makes it more interesting for some people around here as well?
After taking most of the year off from role-playing, I've got 3 one-shots coming up in the next month, all of which need some tweaking before I can run them (as do my homebrew rules).
Plus there's a "build a game" code challenge at work, a pair of media boxes I need to rebuild, a pair of dead machines I need to diagnose, and I'd like to (eventually) get my twtxt apps to a "releasable" state.
So many projects, so little (free) time…
@movq@www.uninformativ.de The time has come for "vibe coding" consultations.
@movq@www.uninformativ.de @prologic@twtxt.net Unfortunately, I had to review a coworker's code that was also spewed out the same way. It was abso-fucking-lutely horrible. I didn't know upfront, but I asked afterwards and got the proud (!) answer that it indeed was "assisted". I bet this piece of garbage was never checked or questioned the tiniest bit before being submitted for review. >:-( To top it off, it didn't even do the right thing.
What a giant shitshow. Things just have to burn to the ground several times.
It happened.
"Can you help me debug this program? I vibe coded it and I have no idea what's going on. I had no choice – learning this new language and frameworks would have taken ages, and I have severe time constraints."
Did I say "no"? Of course not, I'm a "nice guy". So I'm at fault as well, because I endorsed this whole thing. The other guy is also guilty, because he didn't communicate clearly to his boss what could be done and how much time it would take. And the boss and his bosses carry a lot of the guilt, because they're all pushing for "AI".
The end result is garbage software.
This particular project is still relatively small, so it might be okay at the moment. But normalizing this will yield nothing but garbage. And actually, especially if this small project works out fine, this contributes to the shittiness, because management will interpret this as "hey, AI works", so they will keep asking for it in future projects.
How utterly frustrating. This is not what I want to do every day from now on.
Hello again everyone! A little update on my twtxt client.
I think it's finally shaping up a bit better now, but…
As I'm trying to put all the parts together, I decided to build multiple parallel UIs, to ensure I don't accidentally create a structure that is more rigid than planned.
I've already decided on a UI that I would want to use for myself; it will be inspired by moshidon, misskey and some other "social feed" mock-ups I found on dribbble.
I also plan on building a raw HTML version (for anyone wanting to do a full DIY client).
I would love to get suggestions for what you would like to see (and possibly use) as a client: share a link, an app/website name, or even a sketch you made on paper.
I think I'll pick a third and maybe a fourth design to build together with the two already mentioned.
For reference, the screens I'm thinking of providing are (some might be optional or conditionally/manually hideable):
- Global / personal timeline screen
- Profile screen (with timeline)
- Thread screen
- Notifications screen or popup (both valid)
- DM list & chat screens (still planning, might come later)
- Settings screen (it'll probably be a hard-coded form, but better to mention it)
- Publish / edit post screen or popup (still analysing some use cases, as some "engines" might not have direct publishing support)
I also plan on adding two optional metadata fields:
- display_name: Shows a human-readable alternative to a nick; it falls back to nick if not defined
- banner: Uses the same format as avatar, but the expected image is wider; inspired by other social platforms
I also plan on supporting any metadata provided, including a dynamically parsable regex rule format for those extra fields. This should allow anyone to build new clients that don't limit themselves to just the social aspect of twtxt. I'm hoping to see unique ways of using twtxt!
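To make the fallback behaviour concrete, here's a simplified sketch of how those fields could be read (not the final implementation):

// Collect "# key = value" metadata comments from a twtxt feed.
function parseMetadata(feedText) {
  const meta = {};
  for (const line of feedText.split('\n')) {
    const m = line.match(/^#\s*([A-Za-z_]+)\s*=\s*(.+)$/);
    if (m) meta[m[1].toLowerCase()] = m[2].trim();
  }
  return meta;
}

const meta = parseMetadata('# nick = zvava\n# display_name = Zvava\n2025-11-22T12:00:00Z\thello');
const displayName = meta.display_name ?? meta.nick; // falls back to nick if not defined
const banner = meta.banner ?? null;                 // same format as avatar, just a wider image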
@zvava@twtxt.net CORS is our worst enemy. 🥷
I had the same issue too: with a browser-based request, the only solution is using a proxy.
For testing (and real personal use) I rely on this one https://corsproxy.io/.
In my client, I first check if the source allows me to fetch it without issues, and fall back to prefixing it with a proxy if that gives an error.
For security reasons the browser doesn't give you a readable error for CORS, so you must use a catch-all for that. If it fails again with the proxy, you can deal with any other errors it throws as you normally would (preferably outside of the fetch function).
After the fetch responds, I store the response.url value so I can fetch it again for updates without having to do extra calls (you can store it verbatim, or as a flag, to be able to change the proxy later).
Here's an extract of my code:
export async function fetchWithProxy(url, proxy = null) {
  return await fetch(url).catch(err => {
    if (!proxy) throw err;
    return fetch(`${proxy}${encodeURIComponent(url)}`);
  });
}
// Using it with
const res = await fetchWithProxy('https://twtxt.net/user/zvava/twtxt.txt', 'https://corsproxy.io/?');
// Get the working url (direct or through proxy)
const fetchingURL = res.url;
// Get the twtxt feed content (or handle errors)
const text = await res.text();
I also plan to allow the user to define a custom proxy field. I like the solution used by Delta.chat in their Android app, where you define the URL format with a variable, e.g. https://my-proxy?$TWTXT_URL, since it gives you the freedom to use any proxy that doesn't follow a simple prefix format.
If the idea of using a third-party proxy is not to the user's liking, they can use a self-hosted solution like cors-anywhere or build their own (with twtxt it should just be a GET).
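The custom proxy field could then be as simple as expanding that variable (sketch):

// Expand a user-defined proxy template like "https://my-proxy?$TWTXT_URL".
function applyProxyTemplate(template, url) {
  return template.replace('$TWTXT_URL', encodeURIComponent(url));
}

applyProxyTemplate('https://corsproxy.io/?$TWTXT_URL', 'https://twtxt.net/user/zvava/twtxt.txt');
// -> "https://corsproxy.io/?https%3A%2F%2Ftwtxt.net%2Fuser%2Fzvava%2Ftwtxt.txt"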
I finally solved the loading issue in my WIP reader, TwtStrm (and apologies again to anyone who got spammed while I was diagnosing the issue).
After another round of coding this weekend, I'm happy to report that it now renders all the twts (with markdown parsing), complete with localStorage and server-based file caching.
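(The localStorage side of that caching is nothing fancy; a simplified sketch of the idea, with a made-up key name:)

const CACHE_KEY = 'twtstrm-feed-cache'; // made-up key name, just for illustration

function loadCache() {
  try { return JSON.parse(localStorage.getItem(CACHE_KEY)) ?? {}; }
  catch { return {}; }
}

function cacheFeed(url, text) {
  const cache = loadCache();
  cache[url] = { text, fetchedAt: Date.now() };
  localStorage.setItem(CACHE_KEY, JSON.stringify(cache));
}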
Hi everyone, here's a little introduction to my twtxt client (still WIP).
The client I'm developing is a single-tenant project that runs entirely in the browser (it might use an optional backend).
It's entirely based on native web components and vanilla JS, and it is designed to act more like a toolkit than a full-fledged client, allowing users to "DIY" their own interface with pure HTML or plain JavaScript functions.
Users can also build their own engines by including a global JavaScript object that implements the defined internal API (TBD).
I'm planning to build a system that is easy enough to build on and use at any skill level, using only pure HTML (with a minimal homebrew template engine) or plain JS (I'll also be providing some pre-made templates).
Everything can be self-hosted on any static hosting provider; this allows twtxt to spread within communities like Neocities and similarly hosted websites (basically any Indieweb/Smallweb/Digital garden website and any of the common GitHub/Lab/Berg/lify Pages).
It will probably be named something like TxtCraft or craf.txt, but I'm not really sure yet… 🤔 (Maybe some suggestions could help.)
I'm still in the experimental phase, so there's no decent source code to share yet, but there will be soon enough!
Pretty happy with my zs-blog-template starter kit for creating and maintaining your own blog using zs. Demo of what the starter kit looks like here – basic features include:
- Clean layout & typography
- Chroma code highlighting (aligned to your site palette)
- Accessible copy-code button
- "On this page" collapsible TOC
- RSS, sitemap, robots
- Archives, tags, tag cloud
- Draft support (hidden from lists/feeds)
- Open Graph (OG) & Twitter card meta (default image + per-post overrides)
- Ready-to-use 404 page
There are also custom routes (redirects, rewrites, etc.) to support canonical URLs or redirect old URLs, as well as the new zs external command capability itself, which now lets you do things like:
$ zs newpost
to help kick-start the creation of a new post with all the right "stuff"™ ready to go, and then pop open your $EDITOR
@prologic@twtxt.net You need to work on the CSS. For example, the tags are too big, the code blocks (and the inline ones) are too small, the single posts have no date (intended?), and so on. It's an alpha start!
Put another way, what you are proposing/pushing for requires changing hundreds of lines of code across half a dozen or so clients, plus lots of breaking changes, not to mention unknowns.
What I want us to do is make only a half dozen or so lines of code changes to our clients and minimize the breaking changes and unknowns.
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP of project #2 (the reader) done.
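(Stripped way down, the core of it looks something like this; a sketch of the idea rather than the real code, with the token check standing in for the app's actual auth:)

import express from 'express';
import fs from 'node:fs/promises';

const FEED = './twtxt.txt'; // placeholder path
const app = express();

// Serve the feed as plain text.
app.get('/twtxt.txt', async (_req, res) => {
  res.type('text/plain; charset=utf-8').send(await fs.readFile(FEED, 'utf8'));
});

// Append a new twt ("<RFC 3339 timestamp>\t<text>") for the feed's owner.
app.post('/twt', express.text(), async (req, res) => {
  if (req.get('authorization') !== `Bearer ${process.env.FEED_TOKEN}`) {
    return res.sendStatus(401);
  }
  await fs.appendFile(FEED, `${new Date().toISOString()}\t${req.body.trim()}\n`);
  res.sendStatus(201);
});

app.listen(3000);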
But that's where I've been struggling. The idea seems simple enough - another Node / Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real, live twtxt files, so some twts come through while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
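(The reader's happy path, heavily simplified; again a sketch of the idea rather than the real code:)

async function buildRiver(feedUrl) {
  const fetchText = async (url) => {
    try { return await (await fetch(url)).text(); }
    catch { return ''; } // skip feeds that fail to load
  };

  // Each non-comment line is "<timestamp>\t<text>".
  const parseTwts = (text, feed) =>
    text.split('\n')
      .filter(line => line && !line.startsWith('#'))
      .map(line => {
        const [timestamp, ...rest] = line.split('\t');
        return rest.length ? { feed, timestamp, text: rest.join('\t') } : null;
      })
      .filter(Boolean);

  // "# follow = nick url" comment lines point at the feeds to pull in.
  const mainFeed = await fetchText(feedUrl);
  const followUrls = mainFeed.split('\n')
    .map(line => line.match(/^#\s*follow\s*=\s*\S+\s+(\S+)/i))
    .filter(Boolean)
    .map(m => m[1]);

  const twts = parseTwts(mainFeed, feedUrl);
  for (const url of followUrls) {
    twts.push(...parseTwts(await fetchText(url), url));
  }
  // Newest first -> the "river" of twts.
  return twts.sort((a, b) => new Date(b.timestamp) - new Date(a.timestamp));
}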
On top of it, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) in beta by the end of October, and project 2 (the reader) in beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
The big QR code canine has been one of my favourites, because even after a few months I still find the pose really cute. I always thought a chibi version was a necessary addition, and now I finally drew it.

Doing a bit of 2018 Advent of Code now to relax.
@arne@uplegger.eu Hm, I've never done that. 🤔 Do you do that by hand or with code?
@kat@yarn.girlonthemoon.xyz Pretty sure I have many more mentions in the database than the one and only one I see, hmmm 🤔 I'll have a look at the code and the SQL query it's using when I can.
@kat@yarn.girlonthemoon.xyz i think almost all of the code was written between 10pm-10am :3c
@movq@www.uninformativ.de This seems like a bit of overkill that would also harm modding and power users, who often need to see the exact implementation of new features and benefit from the ability to pull up the history of code changes in their browser. Sure, they could clone the repo and do that locally, but if it has dependencies, they'd also have to clone those to see how they get updated, and it'd soon be a mess.
@movq@www.uninformativ.de Holy shit, that's insane! :-D I tried it, but I'm absolutely terrible at these types of games. I'm having trouble with the keys to move around. Maybe after ages I would pick it up and it would become natural. I just was never a real gamer.
I will definitely try to read through the code, though! This looks sick. 8-)
@eric@itsericwoodward.com Hmm, the images are all 404ing. Also, I reckon that lots of code blocks are broken, too.
Haha, fun! I browsed your gopher hole a little bit. I noticed some entries are fully justified (formatting), while others are not. I didn't notice a pattern, though it makes sense not to use justification on entries with code. Yet, some prose entries are, and some are not. A mystery. :-)
@bender@twtxt.net The address is/was correct but probably got mangled by the Markdown renderer. Let's try again in a code block:
gopher://uninformativ.de/0/phlog/2025/2025-09/2025-09-03--roophloch.txt
@prologic@twtxt.net haven't had too much time to really try it out yet ^^' i'm um too busy staring at code i wrote while sleep deprived and wondering why i did the things i did, while sleep deprived \@.@
"But all your stuff is MIT licensed! They are allowed to do that!"
Haha. As if they would care. They crawl everything they get their hands on.
Besides, that's not true: the license states that the copyright notice must be retained. "AI" breaks that. They incorporate my code and my articles into their product and make it appear as if it was their work.
@kat@yarn.girlonthemoon.xyz On the one hand, all these programs have a very long history and the technology behind manpages is actually very powerful – you can use it to write books:
https://www.troff.org/pubs.html
I have two books from that list, for example "The UNIX programming environment":
https://movq.de/v/c3dab75c97/upe.jpg
It's a bit older, of course, but it looks and feels like a normal book, and it uses the same tech as manpages – which I think is really cool.
It's comparable to LaTeX (just harder/different to use) but much faster than LaTeX. You can also do stuff like render manpages as a PDF (man -Tpdf cp >cp.pdf) or as an HTML file (man -Thtml cp >cp.html). I think I once made slides for a talk this way.
On the other hand, traditional manpages (i.e., ones that are not written in mandoc) do not use semantic markup. They literally say, "this text is bold, that text over here is italics", and so on.
So when you run man foo, it has no other choice but to show it in black, white, bold, underline – showing it in color would be wrong, because that's not what the source code of that manpage says.
Colorizing them is a hack, to be honest. You're not meant to do this. (The devs actually broke this by accident recently. They themselves aren't really aware that people use colors.)
If mandoc and semantic markup were more commonly used, I think it would be easier to convince the devs to add proper customizable colors.
/short/ if it's of this useless kind. I never thought that they would ever actually improve their Atom feeds. Thank you, much appreciated!
@kat@yarn.girlonthemoon.xyz @movq@www.uninformativ.de Sorry, I neither finished it nor made it in time. :-( That's as good as it's gonna get for the moment: https://git.isobeef.org/lyse/gelbariab/-/tree/master/rss-proxys?ref_type=heads
The README should hopefully provide a crude introduction. The example configuration file is documented fairly well, I believe (but maybe not). You probably still have to consult and maybe also modify the source code to fit your needs.
Let me know if you run into issues, have questions, wishes etc.
In 1996, they came up with the X11 "SECURITY" extension:
https://www.reddit.com/r/linux/comments/4w548u/what_is_up_with_the_x11_security_extension/
This is what could have (eventually) solved the security issues that we're currently seeing with X11. Those issues are cited as one of the reasons for switching to Wayland.
That extension never took off. The person on reddit wonders why – I think it's simple: Containers and sandboxes weren't a thing in 1996. It hardly mattered if X11 was "insecure". If you could run an X11 client, you probably already had access to the machine and could just do all kinds of other nasty things.
Today, sandboxing is a thing. Today, this matters.
I've heard so many times that "X11 is beyond fixable, it's hopeless." I don't believe that. I believe that these problems are solvable with X11, and some devs have said "yeah, we could have kept working on it". It's that people don't want to do it:
Why not extend the X server?
Because for the first time we have a realistic chance of not having to do that.
https://wayland.freedesktop.org/faq.html
I'm not in a position to judge the devs. Maybe the X.Org code really is so bad that you want to run away, screaming in horror. I don't know.
But all this was a choice. I don't buy the argument that we never would have gotten rid of things like core fonts.
All the toolkits and programs had to be ported to Wayland. A huge, still unfinished effort. If that was an acceptable thing to do, then it would have been acceptable to make an "X12" that keeps all the good things about X11, remains compatible where feasible, eliminates the problems, and requires some clients to be adjusted. (You could have still made an "X11 on X12" compatibility layer, like "XWayland", for actual legacy programs.)

Since Fastly acquired and recently shut down glitch.com, some of my ancient webapps are no longer available, nor do I have any plans to make them available again - all had either zero or very few monthly visits, used outdated libraries, and would be a waste of money to continue hosting and updating elsewhere.
All art archives remain unaffected, and all projects shut down before 2025 were already permanently deleted. But if there's someone out there still relying on the recently discontinued projects somehow, you can reach out and request their source code.
These requests will only be honoured until the end of this year, when we plan to permanently delete all of this data (both webapps and files hosted only on Amazon's CDN).
Canine out °_°
Heck yeah, that's damn cool: Reading QR codes without a computer! https://qr.blinry.org/
Stuff that nobody needs:
systemctl uses ANSI escape codes to underline text (\e[4m), and then it also uses special escape codes – which Wikipedia classifies as "not in the standard", but I haven't looked it up – to change the color of the underline. That color change is barely noticeable in the first place.
Some terminals don't support this, and now my systemctl output is blinking because of that.
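(If you want to see the effect for yourself: the underline is plain SGR 4, and the underline colour is most likely the nonstandard SGR 58 sequence, something along these lines; some terminals want colons instead of semicolons as separators.)

// Underline (standard) plus underline colour (nonstandard SGR 58) -- my guess
// at the kind of sequence involved, not a dump of what systemctl actually emits.
process.stdout.write('\x1b[4m\x1b[58;5;9munderlined, with a coloured underline\x1b[0m\n');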
@lyse@lyse.isobeef.org "Advanced", well, probably more "mature". There aren't a ton of crazy features and that icon thing is the largest code addition in the last 10 years. %)
Speaking of OS/2 … I just realized that Windows 3.x didn't have icons, either. If I'm not mistaken, this only got added in Windows 95. In other words, OS/2 had this feature before Windows did, because at least OS/2 2.1 from 1993 had icons. Who would have thunk.
(Now I kind of want to know which system really introduced this feature.)
@kat@yarn.girlonthemoon.xyz NVM i stole other people's code to make a dictionary lookup script https://bytes.4-walls.net/kat/dotfiles/src/branch/main/config/.local/bin/dict