Yes it works: 2024-12-01T19:38:35Z twtxt/1.2.3 (+https://eapl.mx/twtxt.txt; @eapl)
:D
The .log is just a simple append for each request. The idea with the .csv is to have it tally up how many requests there have been from each client, as a way to avoid having the log file grow too big. And you can open the .csv as a spreadsheet and have an easy overview and filtering options.
Access to those files is closed to the public.
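A rough sketch of that tally idea (file names and log format are made up; it assumes one client string per log line):

```python
# Sketch: tally one-line client entries from an append-only access log
# into a CSV with one row per client. File names are hypothetical.
import csv
from collections import Counter

def tally(log_path="twtxt-access.log", csv_path="twtxt-clients.csv"):
    with open(log_path) as log:
        counts = Counter(line.strip() for line in log if line.strip())
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["client", "requests"])
        for client, n in counts.most_common():
            writer.writerow([client, n])

if __name__ == "__main__":
    tally()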
Maybe implement webmention in my feed? (receiving). Sending will maybe be implemented in my WIP client…
@aelaraji@aelaraji.com You could use https://lyse.isobeef.org/tmp/twthash.py to generate twt hashes. I cobbled that together in order to generate test data for my client.
but for real, i really want that sword and I'm perfectly content running my daily dev stuff using an rpi. i actually used to do client work on an old rpi laptop kit, i only stopped using that baby because the battery controller let the battery go flat.
i am working on very smol deployments, where a server may use two or so replicated sqlite databases instead of a db server like postgres to seamlessly move from single to multi-node arrangements as needed. there is a clear performance limit here, but the goal is not to serve a huge number of clients. just to do as much as possible with a small number of useful components that can be upgraded to handle up to medium size workloads, without difficult data conversions or migrations. scaling beyond that point should be done via federation.
@bender@twtxt.net My made-up rule is to keep at least three full months in the main feed and when rotating, I create one feed per month.
@doesnm@doesnm.p.psf.lt There is no real recommendation, I think. But if you hit half a MiB or so, it might be worth considering rotating in order to keep the network traffic low. People with bad connectivity might appreciate it. I want to implement HTTP range requests in my client rewrite at some point in time (but first, it has to become kinda usable, though).
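A minimal sketch of that range-request idea (Python's urllib, example URL; servers that don't support ranges simply return the full body):

```python
# Sketch: fetch only the new tail of a feed via an HTTP range request.
import urllib.request, urllib.error

def fetch_tail(url, already_have: int) -> bytes:
    req = urllib.request.Request(url, headers={"Range": f"bytes={already_have}-"})
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            # 206 = partial content; 200 = server ignored the Range header
            return body if resp.status == 206 else body[already_have:]
    except urllib.error.HTTPError as e:
        if e.code == 416:  # range not satisfiable: nothing new
            return b""
        raise

new_bytes = fetch_tail("https://example.com/twtxt.txt", already_have=0)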
Yesss, I have internet again! In fact, #sfr changed the street cabinet, disconnected everyone without warning anyone, and now the other providers have to come and reconnect their customers. That's dishonest… Anyway, si3t.ch and puffy.cafe are back.
@sorenpeter@darch.dk Section 7 on emojis: Exactly that, it's an avatar for text interfaces. The metadata name needs tweaking, but that's a cool idea. If I implemented this in my client, I'd make the text avatar overridable by the user, though. Otherwise I'd probably only see boxes for everybody in my terminal. :-D
Thank you, @eapl.me@eapl.me! No need to apologize in the introduction, all good. :-)
Section 3: I'm a bit on the fence regarding documenting the HTTP caching headers. It's a very general HTTP thing, so there is nothing special about them for twtxt. No need for the Twtxt Specification to actually redo it. But on the other hand, a short hint could certainly help client developers and feed authors. Maybe it's thanks to my distro's Nginx maintainer, but I did not configure anything for the `Last-Modified` and `ETag` headers to be included in the response, the web server just already did it automatically.
The more that I think about it while typing this reply, the more I think your suggested recommendation is actually really great. It will definitely be beneficial for client developers. In almost all client implementations I'd say one has to actually do something specifically in the code to send the `If-Modified-Since` and/or `If-None-Match` request headers. There is no magic that will do it automatically, as one has to combine data from the last response with the new request.
But I also came across feeds that serve zero response headers that would make caching possible at all. So, an explicit recommendation enables feed authors to check their server setups. Yeah, let's absolutely do this! :-)
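For client developers, the explicit bit might look like this minimal sketch (Python's urllib; just the header juggling, no real caching layer):

```python
# Sketch: conditional GET. Replay Last-Modified/ETag from the previous
# response as If-Modified-Since/If-None-Match; a 304 means "unchanged".
import urllib.request, urllib.error

def fetch_if_changed(url, last_modified=None, etag=None):
    headers = {}
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    if etag:
        headers["If-None-Match"] = etag
    req = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            return (resp.read(),
                    resp.headers.get("Last-Modified"),
                    resp.headers.get("ETag"))
    except urllib.error.HTTPError as e:
        if e.code == 304:  # not modified, keep the cached copy
            return None, last_modified, etag
        raise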
Regarding section 4 about feed discovery: Yeah, non-HTTP transport protocols are an issue as they do not have `User-Agent` headers. How exactly do you envision the `discovery_url` to work, though? I wouldn't limit the transports to HTTP(S) in the Twtxt Specification. It's up to the client to decide which protocols it wants to support.
Since I currently rely on buckket's `twtxt` client to fetch the feeds, I can only follow `http(s)://` (and `file://`) feeds. But in `tt2` I will certainly add some `gopher://` and `gemini://` at some point in time.
Some time ago, @movq@www.uninformativ.de found out that some Gopher/Gemini users prefer to just get an e-mail from people following them: https://twtxt.net/twt/dikni6q So, it might not even be something to be solved as there is no problem in the first place.
Section 5 on protocol support: You're right, announcing the different transports in the `url` metadata would certainly help. :-)
Section 7 on emojis: Your idea of TUI/CLI avatars is really intriguing, I have to say. Maybe I will pick this up in `tt2` some day. :-)
@sorenpeter@darch.dk On 4: for Gemini, if your TLS client certificate contains your nick@host, could that work for discovery?
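To sketch how that could look on the server side (purely hypothetical; it assumes the TLS layer hands over the raw DER certificate, and it uses the third-party cryptography package):

```python
# Sketch: read nick@host from a Gemini client certificate's subject
# common name. The "nick@host in CN" convention is the idea being
# floated above, not an established spec.
from cryptography import x509
from cryptography.x509.oid import NameOID

def follower_from_client_cert(der_bytes: bytes):
    cert = x509.load_der_x509_certificate(der_bytes)
    cns = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
    if cns and "@" in cns[0].value:
        return cns[0].value  # e.g. "nick@example.org"
    return None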
@eapl.me@eapl.me here are my replies (somewhat similar to Lyse's and James')
Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax if something is NSFW.

IDs besides datetime: When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantage over content-based addressing. If you change the timestamp, then it's a new post. Just like any other blog CMS.
Caching: Yes, all good ideas, but that is more a task for the clients, not the serving of the twtxt.txt files.
Discovery: User-agent-based discovery can be improved. I'm working on a wrapper script in PHP, so you don't need to go to Apache's log files to see who fetches your feed. But for Gemini and Gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. Interesting read about why webmentions might be a bad idea. Twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that's the issue with any open inbox. This is hard to solve without some form of (centralized or community) spam moderation.
Support more protocols besides http/s: Yes, why not, if we can make clients that merge or differentiate between the same feed served via multiple URLs.
Languages: If the need is big, then make a separate feed. I don't mind seeing stuff in other languages as it is low-volume. You've got translation tools if you need to know what's going on. And again, when there is a need for easier switching between posting to several feeds, then it's about building clients with a UI that makes it easy. Not something that should take up space in the format/protocol.
Emojis: I'm not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?
Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)
Metadata on individual twts is too much for me. I do like the simplicity of the current spec. But I understand where you're coming from.
Numbering twts in a feed is basically an attempt at generating message IDs. It's an interesting idea, but I reckon it is not even needed. I'd simply use location-based addressing (feed URL + "#" + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
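To illustrate the trade-off (made-up URL and timestamp; note the actual twt hash recipe also covers the content, so this is only a sketch of the idea):

```python
# Sketch: the same twt referenced by location vs. by an opaque hash.
import hashlib

url = "https://example.com/twtxt.txt"
timestamp = "2024-09-18T23:08:00+10:00"

location_ref = f"{url}#{timestamp}"  # readable, discoverable as-is
hashed_ref = hashlib.sha256(f"{url}\n{timestamp}".encode()).hexdigest()[:11]

print(location_ref)  # anyone can tell what it points to
print(hashed_ref)    # opaque without recomputing it yourself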
yarnd actually sends `If-Modified-Since` request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in `Cache.FetchFeeds(…)` we can actually see `If-Modified-Since` being deployed when the feed has been retrieved with a `Last-Modified` response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278

Turns out etags with `If-None-Match` are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible etags when fetching feeds.
I don't understand how the discovery URLs should work to replace the `User-Agent` header in HTTP(S) requests. Do you mind elaborating?
Different protocols are basically just a client thing.
I reckon it's best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I'd definitely create a different feed for other languages.
Isn't the emoji thing "just" a client feature? So, feeds do not even have to state any emojis. As a user I'd configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in `tt` where I assign colors to feeds. On the other hand, what if a user wants to control what symbol should be displayed, similar to the feed's nick? Hmm. But still, my terminal font doesn't even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be a client-only feature.
@Codebuzz@www.codebuzz.nl you replied to me, but the reply was just an @, nothing else (the whole handle was missing).
There are no web mentions here, and no notifications. It isn't Mastodon; if you want to see if someone wrote something new, or replied to you, you need to open your client.
@bender@twtxt.net @prologic@twtxt.net I'm not exactly asking yarnd to change. If you are okay with the way it displayed my twts, then by all means, leave it as is. I hope you won't mind if I continue to write things like 1/4 to mean "first out of four".
What has `text/markdown` got to do with this? I don't think Markdown says anything about replacing 1/4 with ¼, or other similar transformations. It's not needed, because ¼ is already a Unicode character that can simply be directly inserted into the text file.
What's wrong with my original suggestion of doing the transformation before the text hits the twtxt.txt file? @prologic@twtxt.net, I think it would achieve what you are trying to achieve with this content-type thing: if someone writes 1/4 on a yarnd instance or any other client that wants to do this, it would get transformed, and other clients simply wouldn't do the transformation. Every client that supports displaying Unicode characters, including Jenny, would then display ¼ as ¼.
Alternatively, if you prefer yarnd to pretty-print all twts nicely, even ones from simpler clients, that's fine too and you don't need to change anything. My 1/4 -> ¼ thing is nothing more than a minor irritation which probably isn't worth overthinking.
@sorenpeter@darch.dk I run WeeChat headless on a VM and mostly connect via mobile or desktop. I use the Android client or Glowing Bear. Work blocks all comms on their always-on MitM VPN, so I can't connect from the office anymore. So I just use mobile.
its offline atm, but my usual setup basically makes my xmpp server into a bouncer (biboumi) and my xmpp client just does its normal thing to read the irc backlog as server-side message history.
@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I'm referring to the definition that it's the first `url =` in the file that has to be used for the twthash computation, which is a too arbitrary way of defining something and breaks threading time and time again. And this is the case for not using url+date+message = twthash.
Ya know: rather than being an asshole and getting all angry, just be reasonable and reach out to the community or the folks fetching (or trying to fetch) your feed.
Most clients respect caching if your feed is transported over HTTP.
Otherwise you can add the `# refresh` hint to clients on your feed.
No need to be an obnoxious ass and flood your own feed. That will just get you permanently unfollowed and ignored.
THE LAST HUMAN POST ON THIS FEED IS MORE THAN FOUR YEARS OLD. PERHAPS TWTXT CLIENTS SHOULD THEN FETCH THE FEED VERY RARELY.
I know no client supports it (yet), but it could be the future
@2024-10-08T19:36:38-07:00@a.9srv.net Thanks for the followup. I agree with most of it, especially:
Please nobody suggest sticking the content type in more metadata.
Yes, URLs can be considered ugly, but they work and are understandable by both humans and machines. And it's trivial for any client to hide the URLs used as references in replies/threading.
Webfinger can be an add-on to help look up people, and it can be made independent of the nick by just serving the same JSON regardless of the nick, as people do with static sites and as I implemented it on darch.dk (wf endpoint). Try RANDOMSTRING@darch.dk on http://darch.dk/wf-lookup.php (wf lookup) or RANDOMSTRING@garrido.io on https://webfinger.net
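The darch.dk endpoint is PHP; here is the same trick as a hedged Python sketch (subject, feed URL, port, and the JRD shape are all made up for illustration): one fixed document is served no matter which resource is queried.

```python
# Sketch: a WebFinger endpoint returning the same JSON for any
# ?resource=acct:ANYTHING@example.org query, like the darch.dk setup.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DOC = {
    "subject": "acct:me@example.org",
    "links": [{"rel": "self", "type": "text/plain",
               "href": "https://example.org/twtxt.txt"}],
}

class WebFinger(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/.well-known/webfinger"):
            body = json.dumps(DOC).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/jrd+json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

HTTPServer(("", 8080), WebFinger).serve_forever()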
@doesnm@doesnm.p.psf.lt Agree. salty.im should allow the user to post multiple brokers on their webfinger so the client can find a working path.
gemini calls the request-response cycle a transaction in the spec. since transactions are not cached, we have this problem where we can't tell if anything was updated without fetching it, and we can't indicate how often a client should expect the content to be valid. the most common solution right now is just to keep requesting the resource until it changes or stops existing, which isn't ideal. this sort of update notification model is interesting because it re-frames your thinking into something more like event sourcing. you end up needing to add an event queue and dispatch to the server, which is a bit more complex on the server side than plain static files, but the client stays the same. i'm curious to see what kind of systems could be built on this gemini message queue concept.
@movq@www.uninformativ.de i'm sorry if I sound too contrarian. I'm not a fan of using an obscure hash as well. The problem is that of future and backward compatibility. If we change to sha256 or another, we don't just need to support sha256; we now need to support both sha256 AND blake2b. Or we divide the community: users of some clients will still use the old algorithm and get left behind.
Really we should all think hard about how changes will break things and if those breakages are acceptable.
These collisions aren't important unless someone tries to fork. So… for the vast majority it's not a big deal. Using the grow-hash algorithm could inform the client to add another char when they fork.
@doesnm@doesnm.p.psf.lt `twt` probably isn't the best client, I'm afraid. It doesn't really cache twts by their key (hash) to display threads properly. Jenny, however, does.
Only with dovecot xD. For mail I use the Android native mail client, not mutt. And jenny displays some errors about some files and the /tmp dir (Android doesn't have /tmp).
@prologic@twtxt.net YES James, it should be up to the client to deal with changes like edits and deletions. And putting this load on the clients, location-based addressing will make this a lot easier, since what it says is: look in this file at this timestamp, did anything change or go missing? (And then threading will not break;)
Diving into mblaze, I think I've nearly* reached peak email geek.
Just a bunch of shell commands I can pipe together to search, list, view and reply to email (after syncing it to a local Maildir).
EXAMPLES at https://git.vuxu.org/mblaze/tree/README
So far I'm using most of the tools directly from the command line, but I might take inspiration from https://sr.ht/~rakoo/omail/ to make my workflow a bit more efficient.
*To get any closer, I think I'd have to hand-craft my own SMTP client or something.
Yes, that is exactly what I meant. I like that collection and "twtxt v2" feels like a departure.
Maybe there's an advantage to grouping it into one spec, but IMO that shouldn't be done at the same time as introducing new untested ideas.
See https://yarn.social (especially this section: https://yarn.social/#self-host) – It really doesn't get much simpler than this 🤣
Again, I like this existing simplicity. (I would even argue you don't need the metadata.)
That page says "For the best experience your client should also support some of the Twtxt Extensions…", but it is clear you don't need to. I would like it to stay that way, and publishing a big long spec and calling it "twtxt v2" feels like a departure from that. (I think the content of the document is valuable; I'm just carping about how it's being presented.)
why can't we have both a format that you can write by hand and better clients?
Some more arguments for a location-based threading model over a content-based one:
- The formats `(#<DATE URL>)` or `(@<DATE URL>)` both make sense: # as prefix is for a hashtag, like we already got with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering.
- Having something like `(#<DATE URL>)` will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build third-party twt-mention services.
- Supporting twt/webmentions will also increase discoverability, as a way to know about both replies and feed mentions from feeds that you don't follow.
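A rough sketch of how a client might match either proposed prefix (the exact separator between date and URL is my assumption, not something the proposal pins down):

```python
# Sketch: match the proposed (#<DATE URL>) / (@<DATE URL>) subjects.
import re

SUBJECT = re.compile(r"\(([#@])(\S+) (\S+)\)")  # (prefix, date, url)

twt = "(#2024-09-15T12:06:27Z http://darch.dk/twtxt.txt) Nice idea!"
m = SUBJECT.match(twt)
if m:
    prefix, date, url = m.groups()
    kind = "hashtag-style" if prefix == "#" else "mention-style"
    print(kind, date, url)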
#fzf is the new emacs: a tool with a simple purpose that has evolved to include an #email client. https://sr.ht/~rakoo/omail/
I'm being a little silly, of course. fzf doesn't actually check your email, but it appears to be basically the whole user interface for that mail program, with #mblaze wrangling the emails.
I've been thinking about how I handle my email, and am tempted to make something similar. (When I originally saw this linked, the author was presenting it as an example tweaked to their own needs, encouraging people to make their own.)
This approach could surely also be combined with #jenny, taking the place of (neo)mutt. For example, mblaze's mthread tool presents a threaded discussion with indentation.
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: …" and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly, we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
However, I recognize that I'm not the one implementing this stuff, and it's less work to just have everything determined up front.
Misc comments (I haven't read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead. (A quick comparison is sketched after these comments.)
"Clients MUST preserve the original hash" – do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don't like the MUST in "Clients MUST follow the chain of reply-to references…". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn't declare the client non-conforming just because they didn't get to all the bells and whistles.
Similarly, I don't like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I'm again thinking of the 40-line shell script).
For āwho followsā lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?
Why can't feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn't too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I'm a little sad about other protocols being not recommended.
I don't know how I feel about including Markdown. I don't mind too much that yarn users emit twts full of Markdown, but I'm more of a plain-text kind of person. Also, it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
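A quick sketch backing up the encoding comparison above: 11 characters carry 4, 5, or 6 bits each in hex, base32, and base64.

```python
# Sketch: bits carried by an 11-character hash in each alphabet.
import base64, hashlib

digest = hashlib.sha256(b"example twt data").digest()
for name, encoded, bits in [
    ("hex",    digest.hex(),                              4),
    ("base32", base64.b32encode(digest).decode(),         5),
    ("base64", base64.urlsafe_b64encode(digest).decode(), 6),
]:
    print(f"{name:6} {encoded[:11]}  -> {11 * bits} bits")
# hex: 44 bits, base32: 55 bits, base64: 66 bits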
@falsifian@www.falsifian.org Do you have specifics about the GDPR law on this?
Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don't think I have to honour that request, no matter how European they are.
I'm not sure myself now. So let's find out whether parts of the GDPR actually apply to a truly decentralised system? 🤔
@prologic@twtxt.net Do you have a link to some past discussion?
Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don't think I have to honour that request, no matter how European they are.
I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?
Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.
One distinct disadvantage of `(replyto:…)` over `(edit:#)`: `(replyto:…)` relies on clients always processing the entire feed – otherwise they wouldn't even notice when a twt gets updated. a) This is more expensive, b) you cannot edit twts once they get rotated into an archived feed, because there is nothing signalling clients that they have to re-fetch that archived feed.
I guess neither matters that much in practice. It's still a disadvantage.
I wrote some code to try out non-hash reply subjects formatted as (replyto …), while keeping the ability to use the existing hash style.
I don't think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny's beautiful code. I don't write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach, feel free to use bits of it; I release the patch under jenny's current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it's possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven't been seen yet but are wanted. When one of those "wanted" twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
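A rough sketch of that bookkeeping (table and column names are my guesses, not the actual schema from the patch):

```python
# Sketch: track twts we've seen and twts a reply points at but we
# haven't seen yet, so mail files can be rewired once they turn up.
import sqlite3

db = sqlite3.connect("jenny-threads.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS known (
    url TEXT, timestamp TEXT, message_id TEXT,
    PRIMARY KEY (url, timestamp)
);
CREATE TABLE IF NOT EXISTS wanted (
    url TEXT, timestamp TEXT, mail_file TEXT,
    PRIMARY KEY (url, timestamp, mail_file)
);
""")

def saw_twt(url, timestamp, message_id):
    db.execute("INSERT OR IGNORE INTO known VALUES (?, ?, ?)",
               (url, timestamp, message_id))
    # any mail waiting on this (url, timestamp) can now be rewritten
    rows = db.execute("SELECT mail_file FROM wanted WHERE url=? AND timestamp=?",
                      (url, timestamp)).fetchall()
    db.execute("DELETE FROM wanted WHERE url=? AND timestamp=?",
               (url, timestamp))
    db.commit()
    return [r[0] for r in rows]  # rewrite these with the In-Reply-To header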
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto …) format.
- Probably other important things I'm forgetting.
the stem matching is the same as how GIT does its branch hashes. i think you can stem it down to 2 or 3 sha bytes.
if a client sees someone in a yarn using a byte-longer hash it can lengthen to match, since it can assume that maybe the other client has a collision that it doesn't know about.
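A rough sketch of that stem-matching idea (my own reading of it, not from any actual client): resolve a prefix against the hashes you know, and lengthen until it is unambiguous, git-style.

```python
# Sketch: git-style shortest-unique-prefix resolution for twt hashes.
def resolve(prefix: str, known_hashes: set) -> list:
    return [h for h in known_hashes if h.startswith(prefix)]

def shortest_unique(full_hash: str, known_hashes: set, start=7) -> str:
    for n in range(start, len(full_hash) + 1):
        if len(resolve(full_hash[:n], known_hashes)) == 1:
            return full_hash[:n]
    return full_hash

hashes = {"229d24612a2", "229d2aaf00b", "026d77e03fa"}
print(shortest_unique("229d24612a2", hashes))  # -> "229d246" (7 chars suffice)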
@sorenpeter@darch.dk hmm, how does your client handle "a little editing"? I am sure threads would break just as well.
I'm not advocating in either direction, btw. I haven't made up my mind yet. Just braindumping here.
The `(replyto:…)` proposal is definitely more in the spirit of twtxt, I'd say. It's much simpler, anyone can use it even with the simplest tools, no need for any client code. That is certainly a great property, if you ask me, and it's things like that that brought me to twtxt in the first place.
I'd also say that in our tiny little community, message integrity simply doesn't matter. Signed feeds don't matter. I signed my feed for a while using GPG, someone else did the same, but in the end, nobody cares. The community is so tiny, there's enough "implicit trust" or whatever you want to call it.
If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be?
I do have to "admit", though, that hashes feel better. It feels good to know that we can clearly identify a certain twt. It feels more correct and stable.
Hm.
I suspect that the `(replyto:…)` proposal would work just as well in practice.
@prologic@twtxt.net I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.
An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?). For example, let's say I produced the following Twt:
2024-09-18T23:08:00+10:00 Hllo World
And my feed's URI is `https://example.com/twtxt.txt`. The hash for this Twt is therefore `229d24612a2`:
$ printf '%s\n%s\n%s' "https://example.com/twtxt.txt" "2024-09-18T23:08:00+10:00" "Hllo World" | sha1sum | head -c 11
229d24612a2
You wish to correct your mistake, so you make an amendment to that Twt like so:
2024-09-18T23:10:43+10:00 (edit:#229d24612a2) Hello World
Which would then have a new Twt hash value of `026d77e03fa`:
$ printf '%s\n%s\n%s' "https://example.com/twtxt.txt" "2024-09-18T23:10:43+10:00" "Hello World" | sha1sum | head -c 11
026d77e03fa
Clients would then take this `edit:#229d24612a2` to mean: this Twt is an edit of `229d24612a2` and should be replaced in the client's cache, or indicated as such to the user, so that this is the intended content.
`(#hash;#originalHash)` would also work.
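If a client wanted to honour such edit subjects, the handling might look roughly like this (the cache shape is made up; nothing here is from an actual client):

```python
# Sketch: apply an (edit:#oldhash) subject to an in-memory twt cache
# keyed by hash, per the proposal above.
import re

EDIT = re.compile(r"^\(edit:#([0-9a-z]+)\)\s*(.*)$", re.S)

def apply_twt(cache: dict, new_hash: str, text: str):
    m = EDIT.match(text)
    if m:
        old_hash, body = m.groups()
        cache.pop(old_hash, None)  # drop (or mark) the superseded twt
        cache[new_hash] = body     # store the corrected content
    else:
        cache[new_hash] = text

cache = {"229d24612a2": "Hllo World"}
apply_twt(cache, "026d77e03fa", "(edit:#229d24612a2) Hello World")
print(cache)  # {'026d77e03fa': 'Hello World'}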
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement, but they are kind of against the beauty of the original twtxt – because you need special client support for them. It's not something that you could write manually in your `twtxt.txt` file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time, just for fun.
@prologic@twtxt.net Yeah, that thing with `(#hash;#originalHash)` would also work.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement but they are kind of against the beauty of the original twtxt – because you need special client support for them. It's not something that you could write manually in your `twtxt.txt` file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
I don't know … maybe it's just me. 🥴
I'm also being a bit selfish, to be honest: Implementing `(#hash;#originalHash)` in jenny for editing your own feed would not be a no-brainer. (Editing is already kind of unsupported, actually.) It wouldn't be a problem to implement it for fetching other people's feeds, though.
no my fault your client can't handle a little editing ;)
`(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)`
I think I like this a lot. 🤔
The problem with using hashes always was that they're "one-directional": You can construct a hash from URL + timestamp + twt, but you cannot do the inverse. When I see "(#…)", I have no idea what that could possibly refer to.
But of course something like `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` has all the information you need. This could simplify twt/feed discovery quite a bit, couldn't it? 🤔 That thing that I just implemented – jenny asking some Yarn pod for some twt hash – would not be necessary anymore. Clients could easily and automatically fetch complete threads instead of requiring the user to follow all relevant feeds.
Only using the timestamp to identify a twt also solves the edit problem.
It even is better for non-Yarn clients, because you now don't have to read, understand, and implement a "twt hash specification" before you can reply to someone.
The only problem, really, is that `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` is so long. Clients would have to try harder to hide this.
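A minimal sketch of how a client might resolve such a reference (assuming the classic timestamp-tab-content feed line format):

```python
# Sketch: resolve a (replyto:URL,TIMESTAMP) subject by fetching the
# referenced feed and picking out the line with that timestamp.
import re, urllib.request

REPLYTO = re.compile(r"\(replyto:([^,]+),([^)]+)\)")

def resolve_replyto(twt_text: str):
    m = REPLYTO.search(twt_text)
    if not m:
        return None
    url, timestamp = m.groups()
    with urllib.request.urlopen(url) as resp:
        feed = resp.read().decode("utf-8", "replace")
    for line in feed.splitlines():
        if line.startswith(timestamp + "\t"):
            return line  # the twt being replied to
    return None  # edited away, rotated out, or never existed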
@aelaraji@aelaraji.com So what is it about @sorenpeter@darch.dk's feed that's screwed with your client? (Jenny?) 🤔 Kind of curious now 🤣
@falsifian@www.falsifian.org TLS won't help you if you change your domain name. How will people know if it's really you? Maybe that's not the biggest problem for something with such low stakes as twtxt, but it's a reasonable concern that could be solved using signatures from an unchanging cryptographic key.
This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn't matter where you get the note from, your client can verify its authenticity. That way, relays don't need to be trusted.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S, but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible, so it still works if I'm too lazy to change how I publish my feed :-)