@prologic@twtxt.net that's not too bad! 👏🏻👏🏻👏🏻
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).
This is possibly the only other threading model I can come up with for Twtxt that I think I can get behind.
Example:
Alice starts thread #42:
2025-09-25T12:00:00Z (tno:42) Launching storage design review.
Bob replies:
2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.
Carol replies to Bob:
2025-09-25T12:08:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) Token bucket sounds good.
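A reader-side sketch of how a client might derive that (ofeed, tno) identity from a twt's text. This is purely my assumption of how the tags would be parsed; the function name and return shape are made up for illustration:

```javascript
// Hypothetical sketch: derive the thread identity (ofeed, tno) of a twt.
// Assumes the tags appear literally as (tno:N) and (ofeed:<url>) in the text.
function threadIdentity(text, feedUrl) {
  const tno = text.match(/\(tno:(\d+)\)/);
  if (!tno) return null; // not part of a numbered thread
  const ofeed = text.match(/\(ofeed:([^)\s]+)\)/);
  // A reply names its origin feed; a thread root omits (ofeed:…),
  // so the origin is the feed the twt was read from.
  return { ofeed: ofeed ? ofeed[1] : feedUrl, tno: Number(tno[1]) };
}

// Alice's root and Bob's reply above resolve to the same identity:
threadIdentity("(tno:42) Launching storage design review.",
               "https://alice.example/twtxt.txt");
// → { ofeed: "https://alice.example/twtxt.txt", tno: 42 }
threadIdentity("(tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.",
               "https://bob.example/twtxt.txt");
// → { ofeed: "https://alice.example/twtxt.txt", tno: 42 }
```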
@itsericwoodward@itsericwoodward.com I'm glad to hear it 🤣
I would personally rather see something like this:
2025-09-25T22:41:19+10:00 Hello World
2025-09-25T22:41:19+10:00 (#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00) Hey!
This preserves content-based addressing as well as location-based addressing and text fragment linking.
@alexonit@twtxt.alessandrocutolo.it Holy fuck! 🤣 I just realized how bad my typing was in my reply before 🤣 🤦‍♂️ So sorry about that haha, I blame the stupid iPhone on-screen keyboard ⌨️
@alexonit@twtxt.alessandrocutolo.it Maybe I misunderstood, but you have to keep the timezone offsets in mind. Simple alphabetical sorting of the timestamp strings does not yield a truly chronological order. It might be close enough for you, though.
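A quick illustration of the offset problem: two perfectly valid ISO 8601 timestamps whose string order and chronological order disagree (the timestamps here are my own examples).

```javascript
// Two valid ISO 8601 timestamps: string order and chronological order disagree.
const a = "2025-09-25T22:41:19+10:00"; // = 12:41:19 UTC
const b = "2025-09-25T13:00:00Z";      // = 13:00:00 UTC

const lexical = [a, b].sort();                                        // plain string sort
const chrono  = [a, b].sort((x, y) => Date.parse(x) - Date.parse(y)); // sort by instant

console.log(lexical[0] === b); // true — b sorts first as a string ("13:00" < "22:41")
console.log(chrono[0] === a);  // true — but a actually happened first
```

So string sorting is only safe if every feed in the index uses the same offset (e.g. all-UTC `Z` timestamps).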
@alexonit@twtxt.alessandrocutolo.it that sounds pretty much like Italy! LOL. We pay $48 on renewal in Florida, US, but that fee isn't Federal, so other states may pay more or less.
@prologic@twtxt.net That is really great to hear!
If there are opposing opinions, we either build a bridge or provide a new parallel road.
Also, I wouldn't call my opinion a "stance"; I just wish for a better twtxt, thanks to everyone's effort.
The last thing we need to do is decide on a proper format for the location-based version.
My proposal is to keep the "Subject extension" unchanged and include the reference alongside the mention, like this:
// Current hash format: starts with a '#'
(#hash) here's text
(#hash) @<nick url> here's text
// New location format: valid URL-like + '#' + TIMESTAMP (verbatim format of feed source)
(url#timestamp) here's text
(url#timestamp) @<nick url> here's text
I think the timestamp should be referenced verbatim to prevent broken references from multiple variations (especially with the many timezones out there), which would also make it even easier for everyone to implement.
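A minimal parser covering both subject forms might look like this. It is a sketch under my own assumptions: `parseSubject` and its return shape are invented for illustration, and I assume the timestamp itself never contains a '#', so the last '#' separates URL from timestamp.

```javascript
// Hypothetical parser for the two subject forms discussed:
//   (#abcdefg)                                    — content-based hash
//   (https://…/twtxt.txt#2025-01-01T12:00:00Z)    — URL + '#' + verbatim timestamp
function parseSubject(twtText) {
  const m = twtText.match(/^\(([^)]+)\)/);
  if (!m) return null;
  const ref = m[1];
  if (ref.startsWith("#")) return { kind: "hash", hash: ref.slice(1) };
  const sep = ref.lastIndexOf("#"); // timestamp has no '#', so the last one separates
  if (sep > 0) {
    return { kind: "location", url: ref.slice(0, sep), timestamp: ref.slice(sep + 1) };
  }
  return null;
}
```

Keeping the timestamp verbatim, as suggested, means the returned timestamp string can be matched byte-for-byte against the origin feed without any timezone normalization.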
I'm sure we can get @zvava@twtxt.net, @lyse@lyse.isobeef.org and everyone else to help on this one.
I personally think we should also consider allowing a generic format to build on custom references; this would allow creating threads from any custom source (manual, computed, or externally generated), maybe using a new "Topic extension". Here are some examples:
// New format for custom references: starts with a '!' maybe?
(!custom) here's text
(!custom) @<nick url> here's text
// A possible "Topic" parse as a thread root:
[!custom] start here
[custom] simpler format
This one is just an idea of mine, but I feel it can unleash new ways of using twtxt.
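For what it's worth, the custom-reference idea above could be recognized with a couple of anchored patterns. Purely a sketch of the proposal, nothing here is specified anywhere, and the function name is made up:

```javascript
// Hypothetical: recognize the proposed custom-reference forms.
//   (!custom) …  — a reply into a custom thread
//   [!custom] …  — a "Topic" line opening that thread
function parseCustomRef(line) {
  const reply = line.match(/^\(!([^)\s]+)\)/);
  if (reply) return { kind: "reply", ref: reply[1] };
  const topic = line.match(/^\[!([^\]\s]+)\]/);
  if (topic) return { kind: "topic", ref: topic[1] };
  return null;
}

parseCustomRef("(!custom) here's text"); // → { kind: "reply", ref: "custom" }
parseCustomRef("[!custom] start here");  // → { kind: "topic", ref: "custom" }
```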
@itsericwoodward@itsericwoodward.com I used the dates as-is for indexing them as strings; the ISO format allows for free automatic sorting.
@bender@twtxt.net What?! In my country you have to pay 100€ every 10 years, of which about 75% is just taxes…
@movq@www.uninformativ.de I've got this magic spell in my config: -f bestvideo[height<=?1080]+bestaudio/best
@itsericwoodward@itsericwoodward.com pretty cool! Started following you so I don't miss any progress. Thanks for the exhaustive reply!
@lyse@lyse.isobeef.org that's pretty cool! The first video of that kind I've seen on YouTube. Thanks!
@lyse@lyse.isobeef.org Hm, I couldn't trick yt-dlp into downloading the correct format. Works in the browser, though.
@bender@twtxt.net @movq@www.uninformativ.de I had automatically yt-dlp'ed https://www.youtube.com/watch?v=OZTSIYkuMlU. It's only worth it as an experiment, no recommendation to watch.
@lyse@lyse.isobeef.org I can't remember the last time I came across a 360° video. 🤔
@bender@twtxt.net A renewed vision test might be a good idea for some people. I mean, it is kind of curious that you get this license as a young person and then it lasts a lifetime, without any further tests. As long as you don't screw up really bad, it remains valid …
@movq@www.uninformativ.de I tried making an ASCII scribble of a penguin waving back at you, but I gave up halfway through my first try xD … HI!
Thanks @bender@twtxt.net, it's been a long time indeed, but I was here the whole time. Just silent. I just didn't have much meaningful or twt-worthy stuff to share … /ME flips a bird to life
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP done of project #2 (the reader).
But that's where I've been struggling. The idea seems simple enough: another Node / Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real, live twtxt files, so some twts come through while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
On top of that, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) in beta by the end of October, and project 2 (the reader) in beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
@movq@www.uninformativ.de better than in the US. Ours lasts only 10 years, and you need to go through the vision test and, of course, pay. Recently they added a little gold star denoting "Real ID" compliance, and we had to pay $10 to get the old one replaced, outside of the regular renewal "schedule".
Here it is all about control and money.
@lyse@lyse.isobeef.org is it a 360-degree video online, or a local one?
@alexonit@twtxt.alessandrocutolo.it I kind of love your stance and position on this! If we are going to make any changes to the threading model, let's keep content-based addressing, but also improve the user experience. So in fact, to answer your question: yes, I think we can do some kind of combination of both.
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subjects of other posts to find the related ones, right?
In my own client's current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the commonly implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);
    // New location approach
    const location = post.url + '#' + post.timestamp;
    // Unchanged (probably)
    const subject = post.subject;
    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}
// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
As I said before, the mention is already location based (@<example https://example.com/twtxt.txt>), so I think we should keep that in consideration.
Of course this will lead to a bit of fragmentation (without merging the two), but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this would probably need a fixed "header" for each post, but it can be seen as a separate extension).
@lyse@lyse.isobeef.org I don't think there's any point in continuing the discussion of Location vs. Content based addressing.
I want us to preserve Content based addressing.
Let's improve the user experience and fix the hash collision problems.
@aelaraji@aelaraji.com welcome back dude! Long time no see!
@movq@www.uninformativ.de Yeah, it took quite some time to load. But then it was briefly back. Now it's 503ing immediately all the time.
@lyse@lyse.isobeef.org https://www.celestrak.com (where the TLE files are downloaded from) seems to be down. :/
@lyse@lyse.isobeef.org 🍿🍿🍿
@movq@www.uninformativ.de Woah, cool!
(WTF, asciiworld-sat-track somehow broke, but I have not changed any of the scripts at all. O_o It doesn't find the asciiworld-sat-calc anymore. How in the world!? When I use an absolute path, the .tle is empty and I get a parsing error. Gotta debug this.)
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible: the feed URL is part of the hashed value in one and part of the reference in the other. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.
Even a new hashing algorithm requires work on clients etc. It's not that you get backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic switchover time causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
@movq@www.uninformativ.de Happy equinox!
It looks amazing from the map; you probably can't tell even by looking from space.
@kat@yarn.girlonthemoon.xyz Oh! A new place to spam dad jokes. 🥳
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
The current hash relies on a url field too: by specification, it will use the first "# url = <URL>" in the feed's metadata if present. That URL can also differ from the fetching source, and if that field changes, the existing hashes break as well. A better solution would be to use a non-URL key like "# feed_id = <UNIQUE_RANDOM_STRING>", with the url as a fallback.
We can prevent duplications if the reference uses that same url field too, or if the client "collapses" any reference across all the URLs defined in the metadata.
I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Note also that edits can break the hash; for this issue, an alternative solution (e.g. a private key not included in the feed) should be considered.
For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap the same way mentions do. Maybe I'm missing some context on this one.
To prevent collisions, there was a discussion on extending the hash (I forgot whether that was already settled or not), but without a fallback that would break existing clients too. We should think of a parallel format that keeps current implementations unchanged: we are already backward compatible with original feeds that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
We should also keep in mind that the current mention format is already location based (@<example https://example.com/twtxt.txt>), so I'm not that worried about threads working the same way.
Hope to see some other thoughts about this matter. 🤔
@thecanine@twtxt.net Thanks!
Looking forward to it!
@alexonit@twtxt.alessandrocutolo.it @lyse@lyse.isobeef.org i really don't understand why this was not the solution in the first place, given how simple twtxt is (meant to be). a reply should be as simple as #<https://example.com/twtxt.txt#2025-09-22T06:45Z> with the timestamp in an anchor link. the need for a mention is avoided like this as well, since it's already linking to the replied-to feed!
i should just implement it into bbycll and force it into existence
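Building such a reply subject is essentially a one-liner, assuming the parent feed's URL and its verbatim timestamp are known (the helper name here is hypothetical, not anything bbycll actually ships):

```javascript
// Hypothetical helper: build the proposed anchor-link reply subject.
function replySubject(feedUrl, timestamp) {
  return `#<${feedUrl}#${timestamp}>`;
}

replySubject("https://example.com/twtxt.txt", "2025-09-22T06:45Z");
// → "#<https://example.com/twtxt.txt#2025-09-22T06:45Z>"
```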
@kat@yarn.girlonthemoon.xyz Mine shows 1/1 of 14 Twts. I think this is a bug 🤯
@itsericwoodward@itsericwoodward.com any news about this? I am, at the very least, curious!
@alexonit@twtxt.alessandrocutolo.it I took it down mostly because of continued abuse and spam. I intend to fix it and improve it and its sister at some point.
@alexonit@twtxt.alessandrocutolo.it thank you, and welcome back to Yarn! The somewhat plushie-like look is intentional, so I'm glad it was noticed.
Only have 2 sizes of him in this pose, as well as most other sitting poses, but if there's ever a sitting pose shared by more than 2 of them, I'll be sure to make a matrioska edit.
@lyse@lyse.isobeef.org Yeah, the format is just an idea of how it could work.
The order of SOURCE > POST does make more sense indeed.
@prologic@twtxt.net thanks, I already follow @important_dev_news@n8n.andros.dev too.
BTW, the feeds on https://feeds.twtxt.net/ seem to be down? It says it's in maintenance.
@alexonit@twtxt.alessandrocutolo.it Love this!
@alexonit@twtxt.alessandrocutolo.it Yeah, same 🤣 There's also this @news-minimalist@feeds.twtxt.net feed that shows the most important shit™ anyway (when/if that happens).
@alexonit@twtxt.alessandrocutolo.it Personally, I find the reversed order of URL first and then timestamp more natural for referencing something. Granted, URL last would be kind of consistent with the mention format. However, the timestamp doesn't act as a link text or display text like in a mention, so it's somewhat different in my opinion. But yeah.
@prologic@twtxt.net Me neither; if there's any important news, others usually tell me anyway.