Wow, as I anticipated, this is waaay out of my capabilities to really understand it. But I'm quite happy to just have spotted a mistake in an explanatory comment in section 4.5.2 "The icode Array". Of course, it should be /e + tc + /i + ni + t\0. Let's hope that my e-mail with the patch actually makes it into Brian's inbox. I fear GMail just hides it in the spam folder.
I'm trying to implement configurable key bindings in tt. Boy, is parsing the key names into tcell.EventKeys a horrible thing. This type consists of three pieces of information:
- maybe a predefined compound key sequence, like Ctrl+A
- maybe some modifiers, such as Shift, Ctrl, etc.
- maybe a rune, if no modifiers are present and no predefined compound key exists
Its hardcoded usage results in code like this:
func (t *TreeView[T]) InputHandler() func(event *tcell.EventKey, setFocus func(p tview.Primitive)) {
	return t.WrapInputHandler(func(event *tcell.EventKey, setFocus func(p tview.Primitive)) {
		switch event.Key() {
		case tcell.KeyUp:
			t.moveUp()
		case tcell.KeyDown:
			t.moveDown()
		case tcell.KeyHome:
			t.moveTop()
		case tcell.KeyEnd:
			t.moveBottom()
		case tcell.KeyCtrlE:
			t.moveScrollOffsetDown()
		case tcell.KeyCtrlY:
			t.moveScrollOffsetUp()
		case tcell.KeyTab, tcell.KeyBacktab:
			if t.finished != nil {
				t.finished(event.Key())
			}
		case tcell.KeyRune:
			if event.Modifiers() == tcell.ModNone {
				switch event.Rune() {
				case 'k':
					t.moveUp()
				case 'j':
					t.moveDown()
				case 'g':
					t.moveTop()
				case 'G':
					t.moveBottom()
				}
			}
		}
	})
}
This data structure is just awful to handle and especially initialize in my opinion. Some compound tcell.Keys are mapped to human-readable names in tcell.KeyNames. However, these names always use - to join modifiers, e.g. resulting in Ctrl-A, whereas tcell.EventKey.Name() produces +-delimited strings, e.g. Ctrl+A. Gnaarf, why this asymmetry!? O_o
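One way around the delimiter mismatch might be to build a lookup table from tcell.KeyNames with the separators normalized. A minimal sketch (the names keyByName and parseKeyName are mine, not tcell's):

	package keybind

	import (
		"strings"

		"github.com/gdamore/tcell/v2"
	)

	// keyByName maps normalized names like "Ctrl+A" or "Home" to their tcell.Key,
	// using "+" as the only delimiter, matching tcell.EventKey.Name().
	var keyByName = func() map[string]tcell.Key {
		m := make(map[string]tcell.Key, len(tcell.KeyNames))
		for key, name := range tcell.KeyNames {
			m[strings.ReplaceAll(name, "-", "+")] = key
		}
		return m
	}()

	// parseKeyName turns a configured key name into an event for comparison.
	// A single character is treated as a plain rune without modifiers.
	func parseKeyName(name string) (*tcell.EventKey, bool) {
		if key, ok := keyByName[name]; ok {
			return tcell.NewEventKey(key, 0, tcell.ModNone), true
		}
		if runes := []rune(name); len(runes) == 1 {
			return tcell.NewEventKey(tcell.KeyRune, runes[0], tcell.ModNone), true
		}
		return nil, false
	}

With a table like that, a config line such as "Ctrl+E" could be matched against incoming events by comparing key, rune, and modifiers.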
I just checked k9s and they're extending tcell.KeyNames with their own tcell.Key definitions like crazy: https://github.com/derailed/k9s/blob/master/internal/ui/key.go Then, they convert an original tcell.EventKey to tcell.Key: https://github.com/derailed/k9s/blob/b53f3091ca2d9ab963913b0d5e59376aea3f3e51/internal/ui/app.go#L287 This must be used when actually handling keyboard input: https://github.com/derailed/k9s/blob/e55083ba271eed6fc4014674890f70c5ed6c70e0/internal/ui/tree.go#L101
This seems to be much nicer to use. However, I fear this will break eventually. And it's more fragile in general, because it's rather easy to forget the conversion, or one can get confused whether a certain key at hand is now an original tcell.Key coming from the library or an "extended" one.
I will see if I can find some other programs that provide configurable tcell key bindings.
@eldersnake@we.loveprivacy.club
Steps to world domination:
- "Invent" "AI" (by using other people's data).
- Get people hyped about it and ideally hooked on it.
- Only provide it as a cloud service. But hey, if you want to, you can run it locally!
- Buy all hardware available on the market, so that nobody but you can build more systems.
- All PCs of consumers and competitors are too weak now and can't be upgraded anymore.
- Everybody depends on your cloud service! Win!
All of that is possible because corporations don't have a "conscience" in capitalism. Nobody forces the RAM manufacturers to sell all their stuff to just one or two buyers, but since the only goal of that manufacturer is to make money, they do it.
@movq@www.uninformativ.de Well, just a very limited subset thereof:
- inline and multiline code blocks using single/double/triple backticks (but no code blocks with just indentation)
- markdown links using
[text](url)
- markdown media links using
![text](url)
And that's it. No bold, italics, lists, quotes, headlines, etc.
Just like mentions, plain URLs, markdown links and markdown media URLs are highlighted and available in the URLs View. They're also colored differently, similarly to code segments.
I definitely should write some documentation and provide screenshots.
@movq@www.uninformativ.de Yeah, I see. Just crudely checked on my computer: with around 0.013 seconds, Python 2.7 seems a tad faster than Python 3.14's 0.023 seconds in this little program.
The lazy imports don't sound too bad, but I've only skimmed over them. There are surprisingly many exceptions, but yeah, no way around them. :-)
The tt URLs View now automatically selects the first URL that I'm probably going to open. In decreasing order of priority, the URL types are:
- markdown media URLs (images, videos, etc.)
- markdown or plaintext URLs
- subjects
- mentions
I might differentiate between mentions of subscribed and unsubscribed feeds in the future. The odds of opening a new feed over an already existing one are higher.
Whoo! I fixed one of the hardest bugs in mu (µ) I think I've ever had to figure out. It took me several days, in fact. The basic problem was: println(1, 2) was being printed as 1 2 in the bytecode VM, but as 1 nil when natively compiled to machine code on macOS. In the end it turned out that in the machine code being generated / emitted, the list pointer for the rest... of the variadic arguments was being slotted into a register that was clobbered by the mu_retain and mu_release calls, effectively getting freed on first use by the RC (reference counting) garbage collector 🤦‍♂️
My little toy operating system from last year runs in 16-bit Real Mode (like DOS). Since I've recently figured out how to switch to 64-bit Long Mode right after BIOS boot, I now have a little program that performs this switch on my toy OS. It will load and run any x86-64 program, assuming it's freestanding, a flat binary, and small enough (< 128 KiB code, only uses the first 2 MiB of memory).
Here I'm running a little C program (compiled using normal GCC, no Watcom trickery):
https://movq.de/v/b27ced6dcb/los86%2D64.mp4
https://movq.de/v/b27ced6dcb/c.png
Next steps could include:
- Use Rust instead of C for that 64-bit program?
- Provide interrupt service routines. (At the moment, it just keeps interrupts disabled.)
@movq@www.uninformativ.de From 2:50 PM to 3:23 PM AEST (+10 UTC) there was an outage. Everything went "up" on Down Detector, my EU region went offline, numerous sites were unavailable, and so on. Basically everything to/from the EU appeared to go kaput.
I rewrote all my solutions in Rust (except for day 10 part 2) and these are the runtimes on my i7-3770 from 2013 (this measures CLOCK_PROCESS_CPUTIME_ID, not wallclock):
day01/1 [ 00.000501311] Result: 1066
day01/2 [ 00.000400298] Result: 6223
day02/1 [ 00.000358848] Result: 12586854255
day02/2 [ 00.000750711] Result: 17298174201
day03/1 [ 00.000106537] Result: 17405
day03/2 [ 00.000404632] Result: 171990312704598
day04/1 [ 00.000257517] Result: 1626
day04/2 [ 00.007495342] Result: 9173
day05/1 [ 00.000237212] Result: 505
day05/2 [ 00.000142731] Result: 344423158480189
day06/1 [ 00.000229629] Result: 4076006202939
day06/2 [ 00.000279552] Result: 7903168391557
day07/1 [ 00.000204422] Result: 1622
day07/2 [ 00.000283816] Result: 10357305916520
day08/1 [ 00.029427421] Result: 84968
day08/2 [ 00.028089859] Result: 8663467782
day09/1 [ 00.000310304] Result: 4764078684
day09/2 [ 00.015512554] Result: 1652344888
day10/1 [ 00.000796663] Result: 375
day10/2 [ --.---------] Result: 15377 (Z3)
day11/1 [ 00.000416804] Result: 753
day11/2 [ 00.000660528] Result: 450854305019580
day12/1 [ 00.000336081] Result: 577
day12/2 [ 00.000000695] Result: no part 2
A little under 90 ms total.
On my Samsung NC10 netbook from 2011 with its Intel Atom N455 at 1.6 GHz:
day01/1 [ 00.003771326] Result: 1066
day01/2 [ 00.003267317] Result: 6223
day02/1 [ 00.003902698] Result: 12586854255
day02/2 [ 00.006659479] Result: 17298174201
day03/1 [ 00.000747544] Result: 17405
day03/2 [ 00.002737587] Result: 171990312704598
day04/1 [ 00.001263892] Result: 1626
day04/2 [ 00.044985301] Result: 9173
day05/1 [ 00.001696761] Result: 505
day05/2 [ 00.000978962] Result: 344423158480189
day06/1 [ 00.001387660] Result: 4076006202939
day06/2 [ 00.001734248] Result: 7903168391557
day07/1 [ 00.001295528] Result: 1622
day07/2 [ 00.001809659] Result: 10357305916520
day08/1 [ 00.277251443] Result: 84968
day08/2 [ 00.284359332] Result: 8663467782
day09/1 [ 00.003152407] Result: 4764078684
day09/2 [ 00.071123459] Result: 1652344888
day10/1 [ 00.005279527] Result: 375
day10/2 [ --.---------] Result: 15377 (Z3)
day11/1 [ 00.003273342] Result: 753
day11/2 [ 00.005139719] Result: 450854305019580
day12/1 [ 00.002857552] Result: 577
day12/2 [ 00.000004421] Result: no part 2
A little over 700 ms total.
I like this. You get performance thatās more or less in the ballpark of C, but without the footguns.
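In case anyone wants to reproduce the measurement setup: CLOCK_PROCESS_CPUTIME_ID counts only CPU time consumed by the process, so sleeping or waiting doesn't inflate the numbers. A minimal sketch of reading it (here in Go via golang.org/x/sys/unix; the Rust solutions presumably call clock_gettime the same way):

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var ts unix.Timespec
		// CPU time consumed by this process only; wall-clock waits do not advance it.
		if err := unix.ClockGettime(unix.CLOCK_PROCESS_CPUTIME_ID, &ts); err != nil {
			panic(err)
		}
		fmt.Printf("%d.%09d seconds of CPU time\n", ts.Sec, ts.Nsec)
	}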
@movq@www.uninformativ.de I shrank Day 9 Part 2 from "cover the whole map" to "only track the interesting lines." By compressing coordinates to just the unique x/y breakpoints, the grid got tiny. I still flood-fill and do the corner-pair checks, but now on that compact grid with weighted prefix sums for instant rectangle checks. Result: far less RAM, way less CPU, same correct answer.
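For anyone unfamiliar with the trick: coordinate compression maps the handful of interesting x/y values to consecutive indices, so the grid size depends on the number of breakpoints instead of the coordinate range. A generic sketch (not the actual solution; the cell-weight bookkeeping for the prefix sums is omitted):

	package main

	import (
		"fmt"
		"sort"
	)

	// compress returns the sorted unique values and a lookup from value to index.
	// Note: it sorts the caller's slice in place.
	func compress(values []int) ([]int, map[int]int) {
		sort.Ints(values)
		var uniq []int
		for i, v := range values {
			if i == 0 || v != values[i-1] {
				uniq = append(uniq, v)
			}
		}
		index := make(map[int]int, len(uniq))
		for i, v := range uniq {
			index[v] = i
		}
		return uniq, index
	}

	func main() {
		xs := []int{3, 1000000, 3, 42}
		uniq, index := compress(xs)
		fmt.Println(uniq)           // [3 42 1000000]
		fmt.Println(index[1000000]) // 2: a huge coordinate becomes a tiny grid index
	}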
Day 7 was pretty tough. I initially ended up implementing a solution that was exponential in both time and memory, which I killed because it was eating all the resources on my Mac Studio, and this poor little machine only has 32GB of memory (I stopped it at 118GB of memory, swapping badly!). This is what I ended up doing before/after:
- Before: Time O(2^k · L), memory O(2^k), where k is the number of splitters along a reachable path and L is path length. Exponential in k.
- After: Time O(R·C) (or O(R·C + s) with s split events), memory O(C), where R = rows, C = columns. Polynomial/linear in grid size.
I just completed "Gift Shop" - Day 2 - Advent of Code 2025 #AdventOfCode https://adventofcode.com/2025/day/2 - But again, I'm solving this in my own language mu that I had to build first 🤣
Alright, Advent of Code is over:
https://www.uninformativ.de/blog/postings/2025-12-12/0/POSTING-en.html
It's been quite the time sink, especially with the DOS games on top, but it was fun. 🥳
In case you're wondering: All puzzles (except for part 2 of day 10) were doable in Python 1 on SuSE Linux 6.4 and ran in a finite time on the Pentium 133. Puzzle 10/2 might have been doable as well if I had a better education. 🤣
Day 2 was pretty tough on my old hardware. Part 1 originally took 16 minutes, then I got it down to 9 seconds, only to realize later that my solution abused some properties of my particular input. A correct solution will probably take about 30 seconds. 🫤
Part 2 took 29 minutes this morning. I wrote an optimized version but haven't tested it yet. I hope it'll be under a minute.
Python 1 feels really slow, even compared to Java 1. And these first puzzles weren't even computationally intensive. We'll see how far I'll make it …
Thinking about doing Advent of Code in my own tiny language mu this year.
mu is:
- Dynamically typed
- Lexically scoped with closures
- Has a Go-like curly-brace syntax
- Built around lists, maps, and first-class functions
Key syntax:
- Functions use `fn` and braces:
fn add(a, b) {
    return a + b
}
- Variables use `:=` for declaration and `=` for assignment:
x := 10
x = x + 1
- Control flow includes `if`/`else` and `while`:
if x > 5 {
    println("big")
} else {
    println("small")
}
while x < 10 {
    x = x + 1
}
- Lists and maps:
nums := [1, 2, 3]
nums[1] = 42
ages := {"alice": 30, "bob": 25}
ages["bob"] = ages["bob"] + 1
Supported types: `int`, `bool`, `string`, `list`, `map`, `fn`, `nil`
mu feels like a tiny little Go-ish, Python-ish language. Curious to see how far I can get with it for Advent of Code this year.
@lyse@lyse.isobeef.org Damn. That was stupid of me. I should have posted examples using 2026-03-01 as cutoff date.
In my actual test suite, everything uses 2027-01-01 and then I have this, hoping that that's good enough. 🥴
def test_rollover():
    d = jenny.HASHV2_CUTOFF_DATE
    assert len(jenny.make_twt_hash(URL, d - timedelta(days=7), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=3), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=2), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=1), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d, TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=1), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=2), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=3), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(days=7), TEXT)) == 12
(In other words, I don't care as long as it's before 2027-01-01.)
@aelaraji@aelaraji.com Thanks for the account! I figured out one thing at least so far: my WAF was blocking some of the AP requests. Fixed that. Anyway, holiday time 🤣 Back in ~2 weeks.
I've once again brought up a Gitea instance on my server space, but there are two highlights here:
- No self-registration (accounts are tied to the e-mail server, which is in turn tied to the system accounts)
- Going beyond the landing page requires being logged in, no excuses. (It also could scare the AI crawlers to oblivion, avoiding Anubis at that)
That's it.
@lyse@lyse.isobeef.org Probably wouldn't help, since almost every request comes from a different IP address. These are the hits on those weird /projects URLs since Sunday:
1 IP has 5 hits
1 IP has 4 hits
13 IPs have 3 hits
280 IPs have 2 hits
25543 IPs have 1 hit
The total number of hits has decreased now. Maybe the botnet has moved on …
@lyse@lyse.isobeef.org @bender@twtxt.net Pfft, they want folks to relocate to Sydney. Fuck that 🤣 Sydney is a bit like San Francisco, I'm not actually sure which is worse. Fuck'n expensive as hell, the only place you'd be able to afford to buy or rent is at least ~2hrs out of the city by public transport (i.e. train), and by that time you've just pissed your life down the toilet, because you'd be expected to work a 9-10hr day + 2-3hrs of travel each way. By the time you factor in having to wake up super early to get ready to travel in to work, you basically have zero time for anything else, let alone your family.
Fuck that.
What do you do, when a recruiter throws you a PD or two and says the total compensation is ~2-3x what you're on now?! 🤔
Testing 1 2 3
Testing 1 2 3
FTR, I see one (two) issues with PyQt6, sadly:
- The PyQt6 docs appear to be mostly auto-generated from the C++ docs. And they contain many errors or broken examples (due to the auto-conversion). I found this relatively unpleasant to work with.
- (Until Python finally gets rid of the Global Interpreter Lock properly, it's not really suited for GUI programs anyway, in my opinion. You can't offload anything to a second thread, because the whole program is still single-threaded. This would have made my fractal rendering program impossible, for example.)
Testing 1 2 3 @manton@twtxt.net
I wound up running 2 out of 3 of the one-shots, both Halloween games based on Ravenloft / Curse of Strahd, and both rousing successes (for the players, not so much for Strahd).
Since I'm on something of a gaming kick, I think I'm going to try and finish plotting out the rest of the fae adventure I'm running for my kids, while also (hopefully) finishing my super secret astral gaming project.
Can I do it? Stay tuned and find out!
For those curious, the new Twtxt <-> ActivityPub bridge I'm building (bidirectional) simply requires three things:
- You register your Twtxt feed to the bridge: https://bridge.twtxt.net
- You verify that you in fact own/control the feed by putting the verification code somewhere on/in your feed (doesn't matter where or how)
- You proxy/forward requests for `/.well-known/webfinger` to the bridge, `bridge.twtxt.net` (see the sketch below).
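If the feed's host happens to be a small Go service, that forwarding can be a one-liner reverse proxy. A minimal sketch, assuming plain HTTP on port 8080 (most setups would instead add an equivalent rule to their web server config):

	package main

	import (
		"log"
		"net/http"
		"net/http/httputil"
		"net/url"
	)

	func main() {
		// Forward all webfinger lookups (path and query intact) to the bridge.
		bridge, err := url.Parse("https://bridge.twtxt.net")
		if err != nil {
			log.Fatal(err)
		}
		http.Handle("/.well-known/webfinger", httputil.NewSingleHostReverseProxy(bridge))
		log.Fatal(http.ListenAndServe(":8080", nil))
	}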
I'm still testing things through and ironing out bugs. Please be patient!
My goodness, a new level of stupidity.
The bots are now doing things like this:
GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
- That URL does not exist.
- By including `http://uninformativ.de` in that request, this instructs the webserver to do an HTTP proxy request. Of course, this isn't allowed on my webserver (and shouldn't be allowed on any normal webserver), resulting in HTTP 400. And even if it were, the target would be the exact same server, making a proxy request unnecessary.
And of course, it's not just 50 hits like this or 100 or 1'000 or 10'000. No, it's over 150'000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.
This almost looks like a DDoS attack, but it's just completely stupid. This feels more like some idiot vibe coded a crawler.
@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:
1. The User Perspective (Untrustworthiness): The criticism of AI as untrustworthy is a problem of misapplication, not capability.
- AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
- The Rise of AI Literacy: Users must develop a new skill, AI literacy, to critically evaluate and verify AI's probabilistic output. This skill, along with improving citation features in AI tools, mitigates the "gaslighting" effect.
The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it's skill evolution, not erosion.
- Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
- Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.
- Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced `robots.txt`). The solution is to demand digital citizenship from AI companies, not to stop AI development.
Won a bunch of games of Solitaire and then rearranged the cards for maximum negative points, to distract me from the horrors.
(Still ended up with >0 points on OS/2, because don't ask me.)
https://www.uninformativ.de/desktop/2025%2D11%2D04%2D%2Dkatriawm%2Dsolitaire.png
There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of processing 2. Exactly-once delivery
@movq@www.uninformativ.de streamlining jenny.vim?
index adc0db9..cb54abc 100644
--- a/vim/ftdetect/jenny.vim
+++ b/vim/ftdetect/jenny.vim
@@ -1 +1,2 @@
au BufNewFile,BufRead jenny-posting.eml setl completefunc=jenny#CompleteMentions fo-=t wrap
+au BufRead,BufNewFile jenny-posting.eml normal $
@lyse@lyse.isobeef.org In my case it was a silver necklace, a hummingbird with a wing connected with the cold welding I mentioned, using thin brass wires.
I made it in a goldsmithing class (I went to a private craftsmanship high school), so no phones allowed (no photos of it) and no "take home" of the works.
Here's a rough sketch of it drawn from memory; the dots in the wing are where it connects to the body.

The technique is basically the same as I described, but the scale is much smaller; the whole piece was about 5-6 cm on the largest side.
The rivet was made by drilling a hole through the parts, then with a short and thicker drill you widen the hole on the surface to let the rivet settle flatter on the piece, then with a rubber hammer you hit it to flatten the head until it's snug on the hole, and lock them together by doing the same on the other side.
Note that widening the hole with a thicker drill head won't make a difference with bigger holes; mine had holes of about 1-2 mm of diameter maximum.
Here's a sketch of what is going on for clarity.

I experimented with a 2.4x7mm aluminium rivet I had on hand. As expected, it was quite a bit too long. Using my pliers wrench, I was able to crush it down by quite some bit. I should have taken a photo right after the hand riveter for comparison. Now, it's much smoother and the chance of cutting my hand open is reduced by quite a bit. But breaking the burr with a few file strokes is still necessary. I should get 2.4x4mm rivets and try with them. I reckon they would be more suited for my 0.5mm sheet metal.
With the pliers wrench again, I was able to also crush down the chopped off 3mm copper nail and form a second head. That was surprisingly easy. Now, I need to figure out how to efficiently make a head on the remaining copper nail shaft, so that I can use this again.
Both are rock solid, there's absolutely no movement at all between the two sheet metal cutoffs.
Okay, they are also offering 2.8x25mm copper nails. Which I actually do have a single one here. :-)
My hardware collection also includes a few brass-like looking screws that I could repurpose into rivets. But I reckon I have to upgrade my burner first. I'm not a metal worker by any means, so I could be totally wrong, but I imagine that some heat is necessary to loosen the work-hardening effect when beating on them. I will do some experiments on Saturday and report back.
@itsericwoodward@itsericwoodward.com No worries, all good, mate! We all have to start somewhere. Other software requests my feed several orders of magnitude more often.
I can confirm, the User-Agent header appears to be fixed. \o/
Two other things I noticed, though:
- There's now an `OPTIONS` request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it's rejected with a `405 Method Not Allowed`.
- Not that these few requests bother me at all, but you might wanna implement caching next with either the `If-Modified-Since` or `If-None-Match` request headers. This way, if the feed hasn't changed, the web server can reply with a `304 Not Modified` and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you're aware of it, that's all. It might be even already on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
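For what it's worth, a conditional fetch is only a few lines. A hedged sketch in Go (the function and variable names are mine, not from any twtxt library):

	package feed

	import (
		"net/http"
		"time"
	)

	// fetchIfModified GETs feedURL, telling the server to skip the body if
	// nothing changed since lastFetch. The bool reports whether new content arrived.
	func fetchIfModified(feedURL string, lastFetch time.Time) (*http.Response, bool, error) {
		req, err := http.NewRequest(http.MethodGet, feedURL, nil)
		if err != nil {
			return nil, false, err
		}
		if !lastFetch.IsZero() {
			req.Header.Set("If-Modified-Since", lastFetch.UTC().Format(http.TimeFormat))
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return nil, false, err
		}
		if resp.StatusCode == http.StatusNotModified {
			resp.Body.Close()
			return resp, false, nil // 304: feed unchanged, nothing to parse
		}
		return resp, true, nil
	}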
Please don't hate me today; I'm a bit grumpy and have too many reasons to be upset:
- 2 counts of pushing and trying to get the simplest things done at work (that for some reason are made more difficult than they should be)
- This whole Chat Control bullshit
- And some other personal things that have been ongoing for 72 days and counting 🤬
Each origin feed numbers new threads (`tno:N`). Replies carry both (`tno:N`) and (`ofeed:<origin-url>`). Thread identity = (ofeed, tno).
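As an illustration of how that might look across two feeds (hypothetical lines, not part of the proposal text):

	# In the origin feed, https://origin.example/twtxt.txt, starting thread 7:
	2025-01-01T12:00:00Z	(tno:7) Starting a new thread.

	# In some other feed, replying into that thread:
	2025-01-01T13:00:00Z	(tno:7) (ofeed:https://origin.example/twtxt.txt) A reply.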
@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.
After thinking about it for a while, I arrived at two solutions:
Proposal 1: Thread syntax (using subject)
Each post has an implicit and an optional explicit root reference:
Implicit (no action needed, all required data is already there):
- URL + timestamp
Explicit (subject required):
- Identity (client generated)
- External reference
- Random value
We then include a "root" subject in each post for generating explicit threads:
1. `[ROOT_ID] (REPLY_ID)`: simpler with no need of prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex but could allow expansions
- `(rt:ROOT_ID) (re:REPLY_ID)`: same but with a compact version
- `($ROOT_ID) (>REPLY_ID)`: same but with a single characters
Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn't have a real meaning.
Using a custom reference this way allows a client to decide how to generate them:
- Identity: can be a content hash or signature or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
- External reference: can be provided by another system (e.g. `7e073bd345`, yarnsocial/yarn's latest commit)
- Random value: like a UUID (e.g. `9a0c34ed-d11e-447e-9257-0a0f57ef6e07`)
Proposal 2: Threaded mentions (featuring zvava)
Inspired by @zvava@twtxt.net's solution, it could be simplified into `#<nick url#timestamp>` or `#<url#timestamp>`.
It can be shown like a mention or hidden like a subject.
If we're thinking of using a counter in the client, I think there's no point in avoiding the timestamp anymore.
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to release it after I get an MVP done of project #2 (the reader).
But that's where I've been struggling. The idea seems simple enough - another Node / express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real-live twtxt files, so some twts come through, while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
On top of it, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) in beta by the end of October, and project 2 (the reader) in beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then simply queries the subject of other posts to find the related posts, right?
In my own client's current implementation (using hashes), the only calculation is in the hash generation; the rest is a verbatim copy of the subject (minus the `#` character). If this is the common implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);
    // New location approach
    const location = post.url + '#' + post.timestamp;
    // Unchanged (probably)
    const subject = post.subject;
    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}
// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
As I said before, the mention format is already location based (`@<example https://example.com/twtxt.txt>`), so I think we should keep that in consideration.
Of course this will lead to a bit of fragmentation (without merging the two), but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing the use of anything as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed "header" for each post, but it can be seen as a separate extension).
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible, as the feed URL is part of the hashed value or a reference of some sort in the location-based scheme. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than behaving as an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.
Even a new hashing algorithm requires work on clients etc. It's not that you get backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
- The current hash relies on a `url` field too: by specification, it uses the first `# url = <URL>` in the feed's metadata if present. That, too, can be different from the fetching source, and if that field changes, it breaks the existing hashes as well. A better solution would be a non-URL key like `# feed_id = <UNIQUE_RANDOM_STRING>` with the `url` as fallback (see the example after this list). We can prevent duplications if the reference uses that same `url` field, or if the client "collapses" any reference to any of the URLs defined in the metadata.
- I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Note also that edits can break the hash; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.
- For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I'm missing some context on this one.
- To prevent collisions, there was a discussion on extending the hash (I forgot if that was already settled or not), but without a fallback that would break existing clients too. We should think of a parallel format that keeps current implementations unchanged; we are already backward compatible with original feeds that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
- We should also keep in mind that the current mention format is already location based (`@<example https://example.com/twtxt.txt>`), so I'm not that worried about threads working the same way.
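For illustration, the proposed `feed_id` metadata could look like this (the value is a made-up placeholder, not a real scheme):

	# feed_id = k2j9x7q4w8e5r1t6
	# url = https://example.com/twtxt.txt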
Hope to see some other thoughts about this matter. 🤔
@alexonit@twtxt.alessandrocutolo.it thank you and welcome back to Yarn! The somewhat plushie-like look is intentional, so I'm glad it was noticed.
Only have 2 sizes of him in this pose, as well as most other sitting poses, but if there's ever a sitting pose shared by more than 2 of them, I'll be sure to make a matryoshka edit.
ok so i have found a genuine twt hash collision. what do i do.
internally, bbycll relies on a post lookup table with post hashes as keys, this is really fast but i knew i'd inevitably run into this issue (just not so soon) so now i have to either:
  1) pick the newer post over the other
  2) break from specification and not lowercase hashes
  3) secretly associate canonical urls or additional entropy with post hashes in the backend without a sizeable performance impact somehow

Since Google announced their intention to heavily limit sideloading on Android starting at the end of 2026, I've been looking for potential solutions to this policy change, which threatens the majority of projects I maintain in some way. Google already killed my browser project years ago, but I have no other choice than to fight this any way I can.
The best choice for dealing with this will probably be the Android Debug Bridge, which can be used not only to install apps unrestricted, but also to uninstall or remove almost any unnecessary part of the OS. Shizuku, combined with Canta Debloater, is the winning combination for now; see the commands below.
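For reference, the ADB commands involved are short. A sketch, assuming USB debugging is enabled and the package name is a stand-in:

	# Sideload an APK without any store involved:
	adb install myapp.apk

	# Remove a preinstalled app for the current user
	# (roughly what Canta automates via Shizuku):
	adb shell pm uninstall --user 0 com.example.bloatware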
I've already removed most Google apps from my device: the annoying AI assistant, the stupid Google app adding the annoying articles left of your home screen, Google One, Gboard, the Safety app… it's amazing, no distracting Google slopware, like in the good old Android 2 days! And I absolutely intend to keep it this way: from now on, no new Google apps or services on my devices, unless Google can give me a good enough reason to allow them there. And whenever the app that verifies signatures, to block installing apps not approved by Google, arrives, I'll just remove it from my device and advocate others do so too.
Why do I care about this?
- The load will become a problem at some point.
- These crawlers and the current "AI" in general are breaking the rules. I am supposed to be paying for every little thing, I get sued for "piracy". But apparently, these rules only apply to me. If I had more money, I could break them. Fuck that.
- I simply don't want it. Period.
This probably means that I can no longer host my own website. I don't want to deploy something like Anubis, because that ruins the whole thing: I want it to be accessible from ancient browsers, like OS/2 or Windows 3.11.
I'll keep an eye on it for a while. Maybe try to block some IPs.
Sooner or later, I'll take the website down and shift everything to Gopher.
We use all the Microsoft programs at work - Teams and Outlook especially.
After all kinds of technical problems with Teams, some of which go unresolved for over a year, Microsoft shifted their priorities away from fixing things and towards adding an annoying AI Copilot button, which just takes up space and all it does is load the website in Teams, so I disabled it. Soon they just added it back, but in a different row of icons, so it's now a different button you have to disable (I think they added yet another one to the Teams on my work phone, and I had to disable that too). Not too long after, the desktop one just enabled itself because of "an error". I can disable it, but doing so activates a popup that begs you to turn it back on every once in a while. You can't disable the popup and can only click "Yes" or "Not now" on it. I still keep it disabled, out of principle, but yesterday I noticed yet another Copilot button, this time in the top right corner of my Outlook. This one cannot be disabled on the business version of Outlook, and even on the personal one it's only possible through hidden privacy settings, by prohibiting the program from connecting to Microsoft servers for extra "features".
There are people complaining about it online, so it's clear nobody really wants it, but at this point Microsoft's position is that you will have at least one useless AI button on your screen at any given time, and you will be happy. And yes, their AI sucks, and if I absolutely have to use AI for something, there are already 2 better options we have access to at work.