@alexonit@twtxt.alessandrocutolo.it Yes well I’m pretty big on self-hosting. I’ve even tried to start a small business/company around it (but that’s another story for another day!) – Meanwhile I would encourage you to have a look at the work we’ve done in Salty.im 👌
@alexonit@twtxt.alessandrocutolo.it Well we have to really use the same spec or threading doesn’t really work in a truly decentralized manner 😉
That’s what I’m using right now, while my own client is still in the making.
A simple bash script to write a post in a mktemp file then clean it with regex.
I don’t even bother to hash the replies, I just open https://twtxt.net and copy the hash by hand since I’m checking the new posts from there anyway (temporarily, as I might end up DoS-ing everyone’s feed in my client right now).
@prologic@twtxt.net Don’t worry about it!
I’m also getting angry thinking about how this Chat Control crap will escalate.
I’m already thinking of countermeasures and self-hosted alternatives, while searching lists of affected apps and services to replace/drop in the worst scenario (and probably devices).
@zvava@twtxt.net Amazing! I would love to see all the specs described this way. 🤩
@prologic@twtxt.net Well, personally I would, as I already do for user feeds in my client.
That’s why part of my proposal was to allow custom strings and be free from a specific format that needs periodical upgrades, but it’s not much of a problem in the end.
I’ll adapt to what we can get out of this.
@zvava@twtxt.net I axtually latest did and I wasn’t the only one 🤣
@zvava@twtxt.net That’s what I’m leaning towards, yeah 🤞
@prologic@twtxt.net to clarify: i meant the ability to parse feeds using unix command line utilities, as a principle of twtxtv1’s design. im not sure how feasible it is to build a simple feed reader out of common scripting utilities when hashing is in play, and;
i concede, it does make a lot of sense to fix up the hashing spec rather than completely supplant it at this point, just thinking about what the rewrite would be like is dreadful in and of itself x.x
And I need to make something absolutely clear as well here. Twtxt was completely and utterly dead back in [Aug 2020](https://yarn.social/about.html) when I came across the spec and its simplicity and realised the lost opportunity. Since then we’ve continued to grow a small but thriving community. The extensions we’ve built over time have stood the test of time for the past ~5 years. We need not break things too badly, because what we have today and was designed years ago actually works quite well™ (despite some flaws).
@zvava@twtxt.net Going to have to hard disagree here I’m sorry. a) no-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or to have a sticky beak at the comments, which most clients will strip out and ignore, and b) I’m sorry, you’ve completely lost me! I’m old enough to pre-date Linux becoming popular, so I’m not sure what UNIX principles you think are being broken or violated by having a Twt Subject (Subject) whose content is a cryptographic content-addressable hash of the “thing”™ you’re replying to, forming a chain of other replies (a thread).
I’m sorry, but the simplest thing to do is to make the smallest number of changes to the Spec as possible and all agree on a “Magic Date” for which our clients use the modified function(s).
@prologic@twtxt.net the simplest thing to do is to completely forgo hashing anything because we are communicating using plain text files right now :3 while i agree hashes are incredibly helpful in the backend im not sure it has a place outside of it, it basically eliminates two core design principles of twtxt (human readability and integrating well with unix command line utilities) and makes new clients more difficult to build than it should be
@bender@twtxt.net Well honestly, this is just it. My strong position on this is quite simple:
Do the simplest thing that could work.
It’s one of the age-old UNIX philosophies.
Therefore, the simplest thing™ to do here is to just increase the hash length, mark a magic™ date/time as @lyse@lyse.isobeef.org has indicated and call it a day. We’ll then be fine for a few hundred years, at which point there’ll be no-one left alive to give a shit™ anyway 🤣
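For illustration, here is a minimal sketch of that idea in Python, assuming the current scheme is roughly blake2b over URL, timestamp and content, base32-encoded, keeping the last few characters; the cut-over date and the new length of 12 are made-up placeholders, and URL/timestamp normalisation details are deliberately glossed over:

```python
import base64
import hashlib
from datetime import datetime, timezone

# Hypothetical cut-over: twts created on/after this date use the longer hash.
MAGIC_DATE = datetime(2026, 1, 1, tzinfo=timezone.utc)

def twt_hash(feed_url: str, timestamp: str, content: str, created: datetime) -> str:
    """Rough sketch of the content-addressed twt hash (blake2b + base32),
    keeping more trailing characters after the magic date."""
    payload = f"{feed_url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    length = 12 if created >= MAGIC_DATE else 7  # 7 today; 12 is purely illustrative
    return encoded[-length:]
```

One nice side effect of truncating from the end is that the old 7-character hash is a suffix of the longer one, so a client could still match pre-cut-over references by suffix if it wanted to.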
@prologic@twtxt.net considering the other alternatives we have seen (of which I have lost track already), yes. Why don’t you guys (client makers) take one step at a time and, for now, increase the hash length to deal with the collisions. Then location-based addressing can be added… or not, you know. 😅
@alexonit@twtxt.alessandrocutolo.it My problem is I don’t see a world where we don’t employ some form of cryptography to use as keys for threads in databases and other such things honestly. I’m not going to use url#timestamp as keys.
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).
@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.
After thinking about it for a while I arrived at two solutions:
Proposal 1: Thread syntax (using subject)
Each post has an implicit and an optional explicit root reference:
Implicit (no action needed, all the required data is already there)
- URL + timestamp
Explicit (subject required)
- Identity (client generated)
- External reference
- Random value
We then include a “root” subject in each post for generating explicit threads:
1. `[ROOT_ID] (REPLY_ID)`: simpler, with no need for prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex, but could allow expansions
- `(rt:ROOT_ID) (re:REPLY_ID)`: same, but with a compact version
- `($ROOT_ID) (>REPLY_ID)`: same, but with single characters
Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn’t need to have a real meaning.
Using custom references this way allows a client to decide how to generate them:
- Identity: can be a content hash, a signature or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
- External references: can be provided by another system (e.g. 7e073bd345, yarnsocial/yarn latest commit)
- Random value: like a UUID (e.g. 9a0c34ed-d11e-447e-9257-0a0f57ef6e07)
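To make Proposal 1 a bit more concrete, here is a small sketch of how a client could pull those explicit references out of a post, using the prefixed variant from option 2 above; the IDs are treated as opaque strings, exactly as proposed, and nothing here is final:

```python
import re

# Matches the prefixed variant: "(root:ROOT_ID) (reply:REPLY_ID)".
# Both parts are optional and the IDs are opaque strings.
SUBJECT = re.compile(
    r"^(?:\(root:(?P<root>[^)\s]+)\)\s*)?(?:\(reply:(?P<reply>[^)\s]+)\)\s*)?"
)

def parse_refs(text: str) -> tuple[str | None, str | None]:
    m = SUBJECT.match(text)
    return m.group("root"), m.group("reply")

# parse_refs("(root:9a0c34ed) (reply:7e073bd345) sounds good to me")
# -> ("9a0c34ed", "7e073bd345")
```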
Proposal 2: Threaded mentions (featuring zvava)
Inspired by @zvava@twtxt.net’s solution, it could be simplified into: #<nick url#timestamp> or #<url#timestamp>
It can be shown like a mention or hidden like a subject.
If we’re thinking of using a counter in the client, I think there’s no point in avoiding the timestamp anymore.
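And a similarly rough sketch for the threaded-mention form, assuming the #<nick url#timestamp> / #<url#timestamp> syntax above:

```python
import re

# "#<nick url#timestamp>" or "#<url#timestamp>"; the nick is optional.
THREAD_MENTION = re.compile(r"#<(?:(?P<nick>\S+)\s+)?(?P<url>\S+?)#(?P<ts>[^>]+)>")

def parse_thread_mention(text: str):
    m = THREAD_MENTION.search(text)
    return (m.group("nick"), m.group("url"), m.group("ts")) if m else None

# parse_thread_mention("#<alice https://alice.example/twtxt.txt#2025-09-25T12:00:00Z> agreed")
# -> ("alice", "https://alice.example/twtxt.txt", "2025-09-25T12:00:00Z")
```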
index.md, a prehook and a few utilities:
@bender@twtxt.net Yes I did about a week or so ago. It took me a lot of effort to get the content even rendered in the first place. LOL I had to basically export my blog as HTML (can you believe that?!) – The Hugo export just didn’t work at all 🤣
@prologic@twtxt.net While it might work if you want to keep both, I think the point was to be able to use one or the other; if we still have to generate the hash anyway, it might be pointless to use this format.
@prologic@twtxt.net I admit that I was a bit confused about the meaning of the message, at least I understood it was a “yes” from the last sentence. 😅
@movq@www.uninformativ.de Yes it’s kind of terrible 😞 – Let’s not do this 🤣
@bender@twtxt.net Really? 🤔
@bender@twtxt.net Well, you guessed correctly! 😁
Would be nice to have a fixed fee for that, a car is a car anywhere in the world…
@prologic@twtxt.net That’s a completely flat threading model (you can’t reply to replies). Is that intentional?
@prologic@twtxt.net that’s not too bad! 👏🏻👏🏻👏🏻
This is possibly the only other threading model I can come up with for Twtxt that I think I can get behind.
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).
Example:
Alice starts thread #42:
2025-09-25T12:00:00Z (tno:42) Launching storage design review.
Bob replies:
2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.
Carol replies to Bob:
2025-09-25T12:08:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) Token bucket sounds good.
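Deriving the thread identity from those two tokens is about as simple as it gets; a sketch against the example above (not an implementation of any agreed spec):

```python
import re

TNO = re.compile(r"\(tno:(?P<tno>\d+)\)")
OFEED = re.compile(r"\(ofeed:(?P<ofeed>[^)\s]+)\)")

def thread_identity(own_feed_url: str, twt_text: str):
    """Thread identity = (origin feed, thread number).
    Roots carry only (tno:N), so the origin defaults to the posting feed."""
    tno = TNO.search(twt_text)
    if not tno:
        return None  # not part of a numbered thread
    ofeed = OFEED.search(twt_text)
    origin = ofeed.group("ofeed") if ofeed else own_feed_url
    return origin, int(tno.group("tno"))

# Alice's root and both replies above all map to
# ("https://alice.example/twtxt.txt", 42).
```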
@itsericwoodward@itsericwoodward.com I’m glad to hear it 🤣
I would personally rather see something like this:
2025-09-25T22:41:19+10:00 Hello World
2025-09-25T22:41:19+10:00 (#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00) Hey!
Preserving both content-based addressing as well as location-based addressing and text fragment linking.
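If a client wanted to consume that, it could treat the subject as a hash plus an optional location hint; a rough sketch (the regex and names are mine, not any spec):

```python
import re
from urllib.parse import unquote

# "(#HASH URL#:~:text=ENCODED_TIMESTAMP)"; the location part is optional.
DUAL_SUBJECT = re.compile(
    r"\(#(?P<hash>\S+)(?:\s+(?P<url>\S+?)#:~:text=(?P<frag>\S+))?\)"
)

def parse_dual_subject(text: str):
    m = DUAL_SUBJECT.search(text)
    if not m:
        return None
    frag = unquote(m.group("frag")) if m.group("frag") else None
    return m.group("hash"), m.group("url"), frag

# parse_dual_subject("(#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00) Hey!")
# -> ("kexv5vq", "https://example.com/twtxt.html", "2025-09-25T22:41:19+10:00")
```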
@alexonit@twtxt.alessandrocutolo.it Holy fuck! 🤣 I just realized how bad my typing was in my reply before 🤣 🤦♂️ So sorry about that haha 😆 I blame the stupid iPhone on-screen keyboard ⌨️
@alexonit@twtxt.alessandrocutolo.it Maybe I misunderstood, but you have to keep the timezone offsets in mind. Simple alphabetical sorting of the timestamp strings does not yield a truly chronological order. It might be close enough for you, though.
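A tiny example of that pitfall (values picked for illustration): 2025-09-25T23:00:00+10:00 is 13:00 UTC, an hour before 2025-09-25T14:00:00+00:00, yet it sorts after it as a plain string:

```python
from datetime import datetime

a = "2025-09-25T14:00:00+00:00"
b = "2025-09-25T23:00:00+10:00"   # 13:00 UTC, i.e. actually *earlier* than a

print(sorted([a, b]))                              # string sort: a first (wrong)
print(sorted([a, b], key=datetime.fromisoformat))  # chronological: b first
```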
@alexonit@twtxt.alessandrocutolo.it that sounds pretty much like Italy! LOL. We pay $48 on renewal in Florida, US, but that fee isn’t Federal, so other states may pay more, or less.
@prologic@twtxt.net That is really great to hear!
If there are opposing opinions we either build a bridge or provide a new parallel road.
Also, I wouldn’t call my opinion a “stance”, I just wish for a better twtxt thanks to everyone’s effort.
The last thing we need to do is decide a proper format for the location-based version.
My proposal is to keep the “Subject extension” unchanged and include the reference to the mention like this:
// Current hash format: starts with a '#'
(#hash) here's text
(#hash) @<nick url> here's text
// New location format: valid URL-like + '#' + TIMESTAMP (verbatim format of feed source)
(url#timestamp) here's text
(url#timestamp) @<nick url> here's text
I think the timestamp should be referenced verbatim to prevent broken references due to multiple variations (especially with the many timezones out there), which would also make it even easier to implement for everyone.
I’m sure we can get @zvava@twtxt.net, @lyse@lyse.isobeef.org and everyone else to help on this one.
I personally think we should also consider allowing a generic format to build on custom references; this would allow creating threads from any custom source (manual, computed or externally generated), maybe using a new “Topic extension”. Here are some examples:
// New format for custom references: starts with a '!' maybe?
(!custom) here's text
(!custom) @<nick url> here's text
// A possible "Topic", parsed as a thread root:
[!custom] start here
[custom] simpler format
This one is just an idea of mine, but I feel it can unleash new ways of using twtxt.
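One nice property shared by all of these forms (#hash, url#timestamp and !custom) is that a client can treat whatever sits inside the leading parentheses as an opaque key and compare it verbatim, as suggested above. A minimal sketch of that idea (the regex and names are mine):

```python
import re
from collections import defaultdict

# Whatever sits inside the leading "(...)" is the thread key, compared verbatim:
# "(#abcdefg)", "(https://example.com/twtxt.txt#2025-09-25T12:00:00Z)", "(!custom)"
SUBJECT = re.compile(r"^\((?P<key>[^)]+)\)\s")

def thread_key(twt_text: str) -> str | None:
    m = SUBJECT.match(twt_text)
    return m.group("key") if m else None

def group_by_thread(twts: list[str]) -> dict[str, list[str]]:
    threads: dict[str, list[str]] = defaultdict(list)
    for twt in twts:
        key = thread_key(twt)
        if key is not None:
            threads[key].append(twt)
    return dict(threads)
```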
@itsericwoodward@itsericwoodward.com I used the dates as-is for indexing them as strings; the ISO format allows for free auto-sorting.
@bender@twtxt.net What?! In my country you have to pay 100€ every 10 years, of which about 75% is just taxes…
@movq@www.uninformativ.de I’ve got this magic spell in my config: -f bestvideo[height<=?1080]+bestaudio/best
@itsericwoodward@itsericwoodward.com pretty cool! Started following you, not to miss any progress. Thanks for the exhaustive reply!
@lyse@lyse.isobeef.org that’s pretty cool! The first video I see on YouTube of that kind. Thanks!
@lyse@lyse.isobeef.org Hm, I couldn’t trick yt-dlp into downloading the correct format. Works in the browser, though. 😅
@bender@twtxt.net @movq@www.uninformativ.de I had automatically yt-dlp’ed https://www.youtube.com/watch?v=OZTSIYkuMlU. It’s only worth it for an experiment, no recommendation to watch.
@lyse@lyse.isobeef.org I can’t remember the last time I came across a 360° video. 🤔
@bender@twtxt.net A renewed vision test might be a good idea for some people. 😅 I mean, it is kind of curious that you get this license as a young person and then it lasts a lifetime, without any further tests. As long as you don’t screw up really bad, it remains valid …
@movq@www.uninformativ.de I tried making an ascii scribble of penguin waving back at you but I gave up halfway through my first try xD … HI! 👋
Thanks @bender@twtxt.net it’s been a long time indeed, but I was here the whole time. Just silent. I just didn’t have much that was meaningful/worth twting about … /ME flips a bird to life
@bender@twtxt.net Thanks for asking!
So, I’ve been working on 2 main twtxt-related projects.
The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I’ve been testing it on my site since the night I made that post. It’s still very much an MVP, and I’ve been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP done of project #2 (the reader).
But that’s where I’ve been struggling. The idea seems simple enough - another Node / express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the “follow” list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in seclusion (and with dummy data), but I keep running into weird issues when reading real-live twtxt files, so some twts come through, while others get lost in the ether. I’ll figure it out eventually, but for now, I’ve been spending far more time than I anticipated just trying to get it to work end-to-end.
On top of it, the 2 projects wound up turning into 4 (so far), as I’ve been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I’m hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we’ll see.
I hope this has satisfied your curiosity, but if you’d like to know more, please reach out!
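For anyone curious, the reader pipeline described above roughly boils down to something like this; a sketch in Python rather than Node, assuming follow lines of the form `# follow = nick url` (the yarnd-style metadata convention; other feeds may differ) and naively sorting by the raw timestamp string (see the offset caveat earlier):

```python
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def parse_follows(feed_text: str) -> list[tuple[str, str]]:
    """Collect follow lines of the form '# follow = nick url'."""
    follows = []
    for line in feed_text.splitlines():
        if line.startswith("#") and "follow" in line and "=" in line:
            parts = line.split("=", 1)[1].split()
            if len(parts) == 2:
                follows.append((parts[0], parts[1]))
    return follows

def river(own_feed_url: str) -> list[tuple[str, str, str]]:
    """Merge every followed feed's twts into one (timestamp, nick, text) list, newest first."""
    twts = []
    for nick, url in parse_follows(fetch(own_feed_url)):
        for line in fetch(url).splitlines():
            if line.startswith("#") or "\t" not in line:
                continue  # skip comments/metadata and malformed lines
            timestamp, text = line.split("\t", 1)
            twts.append((timestamp, nick, text))
    return sorted(twts, reverse=True)
```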