Searching yarn

Twts matching #Client
In-reply-to » @sorenpeter hey! I'm watching that now your .txt is pointing to https://darch.dk/twtAgent.php

Yes, it works: 2024-12-01T19:38:35Z twtxt/1.2.3 (+https://eapl.mx/twtxt.txt; @eapl) :D

The .log is just a simple append for each request. The idea with the .csv is to have it tally up how many requests there have been from each client, as a way to avoid having the log file grow too big. And you can open the .csv as a spreadsheet and get an easy overview and filtering options.
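For what it’s worth, the tallying could be as small as the rough Python sketch below (this is not the actual twtAgent.php; the file names and log format are made up):

```python
# Rough sketch: tally requests per client (User-Agent) from an append-only log
# and write the counts to a CSV. File names and log format are illustrative.
import csv
from collections import Counter

counts = Counter()
with open("agents.log", encoding="utf-8") as log:
    for line in log:
        counts[line.strip()] += 1          # assumes one User-Agent string per line

with open("agents.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["user_agent", "requests"])
    for agent, n in counts.most_common():
        writer.writerow([agent, n])
```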

Access to those files is closed to the public.

⤋ Read More
In-reply-to » next up, decomissioning this nice ass laptop so i can sell it on ebay and buy a sword.

but for real, i really want that sword and I’m perfectly content running my daily dev stuff using an rpi. i actually used to do client work on an old rpi laptop kit, i only stopped using that baby because the battery controller let the battery go flat.

⤋ Read More

i am working on very smol deployments, where a server may use two or so replicated sqlite databases instead of a db server like postgres to seamlessly move from single to multi-node arrangements as needed. there is a clear performance limit here, but the goal is not to serve a huge number of clients. just to do as much as possible with a small number of useful components that can be upgraded to handle up to medium size workloads, without difficult data conversions or migrations. scaling beyond that point should be done via federation.

⤋ Read More
In-reply-to » Time to rotate three months into archive feeds again.

@bender@twtxt.net My made-up rule is to keep at least three full months in the main feed and when rotating, I create one feed per month.

@doesnm@doesnm.p.psf.lt There is no real recommendation, I think. But if you hit half a MiB or so, it might be worth considering rotating in order to keep the network traffic low. People with bad connectivity might appreciate it. I want to implement HTTP range requests in my client rewrite at some point in time (but first, it has to become kinda usable).
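A range request for that could look roughly like the Python sketch below (just to illustrate the idea; the function name and the byte bookkeeping are made up, and the server has to support ranges in the first place):

```python
# Rough sketch of an HTTP range request: fetch only the bytes appended since the
# last poll. Assumes the server supports ranges (RFC 7233); names are made up.
import urllib.request, urllib.error

def fetch_new_bytes(url, bytes_already_have):
    req = urllib.request.Request(url, headers={"Range": f"bytes={bytes_already_have}-"})
    try:
        with urllib.request.urlopen(req) as resp:
            if resp.status == 206:                     # Partial Content: just the new tail
                return resp.read()
            return resp.read()[bytes_already_have:]    # server ignored the Range header
    except urllib.error.HTTPError as e:
        if e.code == 416:                              # Range Not Satisfiable: nothing new
            return b""
        raise
```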

⤋ Read More

Yesss, I have internet again! Actually, #sfr swapped the street cabinet, disconnected everyone without warning anyone, and now the other providers have to come and reconnect their customers. It’s dishonest… Anyway, si3t.ch and puffy.cafe are back.

⤋ Read More
In-reply-to » @eapl.me here are my replies (somewhat similar to Lyse's and James')

@sorenpeter@darch.dk Section 7 on emojis: Exactly that, it’s an avatar for text interfaces. The metadata name needs tweaking, but that’s a cool idea. If I implemented this in my client, I’d make the text avatar overridable by the user, though. Otherwise I’d probably only see boxes for everybody in my terminal. :-D

⤋ Read More
In-reply-to » Thanks @lyse! I'm replying here https://text.eapl.mx/reply-to-lyse-about-twtxt

Thank you, @eapl.me@eapl.me! No need to apologize in the introduction, all good. :-)

Section 3: I’m a bit on the fence regarding documenting the HTTP caching headers. It’s a very general HTTP thing, so there is nothing special about them for twtxt. No need for the Twtxt Specification to actually redo it. But on the other hand, a short hint could certainly help client developers and feed authors. Maybe it’s thanks to my distro’s Nginx maintainer, but I did not have to configure anything for the Last-Modified and ETag headers to be included in the response; the web server already did it automatically.

The more I think about it while typing this reply, the more I think your suggested recommendation is actually really great. It will definitely be beneficial for client developers. In almost all client implementations, I’d say, one has to actually do something specific in the code to send the If-Modified-Since and/or If-None-Match request headers. There is no magic that will do it automatically, as one has to combine data from the last response with the new request.

But I also came across feeds that serve none of the response headers that make caching possible in the first place. So, an explicit recommendation enables feed authors to check their server setups. Yeah, let’s absolutely do this! :-)
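For client developers, the whole dance boils down to something like this minimal Python sketch (persisting the two values between runs is left out, and the function name is made up):

```python
# Minimal sketch of a conditional GET using If-Modified-Since / If-None-Match.
# The caller keeps last_modified and etag from the previous fetch.
import urllib.request, urllib.error

def fetch_feed(url, last_modified=None, etag=None):
    headers = {}
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    if etag:
        headers["If-None-Match"] = etag
    req = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("Last-Modified"), resp.headers.get("ETag")
    except urllib.error.HTTPError as e:
        if e.code == 304:                  # Not Modified: keep using the cached copy
            return None, last_modified, etag
        raise
```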

Regarding section 4 about feed discovery: Yeah, non-HTTP transport protocols are an issue as they do not have User-Agent headers. How exactly do you envision the discovery_url working, though? I wouldn’t limit the transports to HTTP(S) in the Twtxt Specification; it’s up to the client to decide which protocols it wants to support.

Since I currently rely on buckket’s twtxt client to fetch the feeds, I can only follow http(s):// (and file://) feeds. But in tt2 I will certainly add some gopher:// and gemini:// at some point in time.

Some time ago, @movq@www.uninformativ.de found out that some Gopher/Gemini users prefer to just get an e-mail from people following them: https://twtxt.net/twt/dikni6q So, it might not even be something to be solved as there is no problem in the first place.

Section 5 on protocol support: You’re right, announcing the different transports in the url metadata would certainly help. :-)

Section 7 on emojis: Your idea of TUI/CLI avatars is really intriguing, I have to say. Maybe I will pick this up in tt2 some day. :-)

⤋ Read More
In-reply-to » Righto, @eapl.me, ta for the writeup. Here we go. :-)

@eapl.me@eapl.me here are my replies (somewhat similar to Lyse’s and James’)

  1. Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use a #NSFW hashtag or the alt text in markdown image syntax ![NSFW](url.to/image.jpg) if something is NSFW.

  2. IDs besides datetime: When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantages over content-based addressing. If you change the timestamp, then it’s a new post, just like in any other blog CMS.

  3. Caching: Yes, all good ideas, but that is more a task for the clients, not for the serving of the twtxt.txt files.

  4. Discovery: User-Agent-based discovery can become better. I’m working on a wrapper script in PHP, so you don’t need to dig through Apache’s log files to see who fetches your feed. But for Gemini and Gopher you need to rely on something else. That could be my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. There is an interesting read about why WebMentions might be a bad idea; twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that’s the issue with any open inbox: it is hard to solve without some form of (centralized or community) spam moderation.

  5. Support more protocols besides http/s: Yes, why not, if we can make clients that can merge or differentiate between the same feed served via multiple URLs.

  6. Languages: If the need is big, then make a separate feed. I don’t mind seeing stuff in other languages as long as it is rare. You’ve got translation tools if you need to know what’s going on. And again, when there is a need for easier switching between posting to several feeds, then it’s about building clients with a UI that makes it easy, not something that should take up space in the format/protocol.

  7. Emojis: I’m not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?

⤋ Read More
In-reply-to » I've been thinking of a few improvements for the next generation of twtxt spec, let me know if these are useful or interesting :) https://text.eapl.mx/a-few-ideas-for-a-next-twtxt-version

Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)

Metadata on individual twts are too much for me. I do like the simplicity of the current spec. But I understand where you’re coming from.

Numbering twts in a feed is basically an attempt at generating message IDs. It’s an interesting idea, but I reckon it is not even needed. I’d simply use location-based addressing (feed URL + ā€˜#’ + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
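To illustrate the difference between the two styles (a sketch only; the hash function, truncation length, and example values are illustrative, not the exact Twt Hash Extension algorithm):

```python
# Location-based vs. content-based addressing, illustrative only.
import hashlib

feed_url  = "https://example.com/twtxt.txt"
timestamp = "2024-10-08T19:36:38Z"
content   = "Hello World"

# Location-based: human-readable and reversible, no hashing required.
location_ref = f"{feed_url}#{timestamp}"

# Content-based: opaque and one-directional; the feed cannot be recovered from it.
content_ref = hashlib.blake2b(
    f"{feed_url}\n{timestamp}\n{content}".encode(), digest_size=32
).hexdigest()[:11]
```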

yarnd actually sends If-Modified-Since request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in Cache.FetchFeeds(…) we can actually see If-Modified-Since being deployed when the feed has been retrieved with a Last-Modified response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278

Turns out ETags with If-None-Match are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible ETags when fetching feeds.

I don’t understand how the discovery URLs should work to replace the User-Agent header in HTTP(S) requests. Do you mind elaborating?

Different protocols are basically just a client thing.

I reckon it’s best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I’d definitely create a different feed for other languages.

Isn’t the emoji thing ā€œjustā€ a client feature? So, feeds would not even have to state any emojis. As a user I’d configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in tt where I assign colors to feeds. On the other hand, what if a user wants to control what symbol should be displayed, similar to the feed’s nick? Hmm. But still, my terminal font doesn’t even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be a client-only feature.

⤋ Read More
In-reply-to » Hey, @ I know. Just wondering the kind of apps or software and how you all stay up to date in conversations. Is it through webmentions?

@Codebuzz@www.codebuzz.nl you replied to me, but the reply was just an @, nothing else (the whole handle was missing).

There are no web mentions here, and no notifications. It isn’t Mastodon; if you want to see if someone wrote something new, or replied to you, you need to open your client.

⤋ Read More
In-reply-to » @prologic I'm not a yarnd user, so it doesn't matter a whole lot to me, but FWIW I'm not especially keen on changing how I format my twts to work around yarnd's quirks.

@bender@twtxt.net @prologic@twtxt.net I’m not exactly asking yarnd to change. If you are okay with the way it displayed my twts, then by all means, leave it as is. I hope you won’t mind if I continue to write things like 1/4 to mean ā€œfirst out of fourā€.

What has text/markdown got to do with this? I don’t think Markdown says anything about replacing 1/4 with ¼, or other similar transformations. It’s not needed, because ¼ is already a unicode character that can simply be directly inserted into the text file.

What’s wrong with my original suggestion of doing the transformation before the text hits the twtxt.txt file? @prologic@twtxt.net, I think it would achieve what you are trying to achieve with this content-type thing: if someone writes 1/4 on a yarnd instance or any other client that wants to do this, it would get transformed, and other clients simply wouldn’t do the transformation. Every client that supports displaying unicode characters, including Jenny, would then display ¼ as ¼.
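A pre-posting step like that could be as naive as this Python sketch (the mapping and function name are illustrative, and a replacement this simple would happily mangle things like 11/42):

```python
# Naive sketch: replace ASCII fractions with Unicode glyphs before the text is
# written to twtxt.txt, so simpler clients just see the final characters.
FRACTIONS = {"1/4": "¼", "1/2": "½", "3/4": "¾"}

def prettify(text: str) -> str:
    for ascii_form, glyph in FRACTIONS.items():
        text = text.replace(ascii_form, glyph)
    return text

print(prettify("1/4 of the way there"))   # -> ¼ of the way there
```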

Alternatively, if you prefer yarnd to pretty-print all twts nicely, even ones from simpler clients, that’s fine too and you don’t need to change anything. My 1/4 -> ¼ thing is nothing more than a minor irritation which probably isn’t worth overthinking.

⤋ Read More
In-reply-to » What are peoples #IRC setup? Do you have your own bouncer server or just have a you computer always on? And do you IRC on mobile?

@sorenpeter@darch.dk I run WeeChat headless on a VM and mostly connect via mobile or desktop. I use the Android client or Glowing Bear. Work blocks all comms on their always-on MitM VPN, so I can’t connect from the office anymore. So I just use mobile.

⤋ Read More
In-reply-to » What are peoples #IRC setup? Do you have your own bouncer server or just have a you computer always on? And do you IRC on mobile?

it’s offline atm, but my usual setup basically makes my xmpp server into a bouncer (biboumi) and my xmpp client just does its normal thing to read the irc backlog as server-side message history.

⤋ Read More
In-reply-to » Simplified twtxt - I want to suggest some dogmas or commandments for twtxt, from where we can work our way back to how to implement different feature like replies/treads:

@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I’m referring to the definition that it’s the first url = in the file that has to be used for the twthash computation, which is too arbitrary a way of defining something and breaks threading time and time again. And this is the argument for not using url+date+message = twthash.

⤋ Read More

Ya know, rather than being an asshole and getting all angry, just be reasonable and reach out to the community or the folks fetching (or trying to fetch) your feed.

Most clients respect caching if your feed is transported over HTTP.

Otherwise you can add the # refresh hint for clients to your feed.

No need to be an obnoxious ass and flood your own feed. That will just get you permanently unfollowed and ignored.

⤋ Read More
In-reply-to » New post (mostly follow-up on the previous with a few new points) on the twtxt v2 discussion. http://a.9srv.net/b/2024-10-08

@2024-10-08T19:36:38-07:00@a.9srv.net Thanks for the followup. I agree with most of it - especially:

Please nobody suggest sticking the content type in more metadata. 🙄

Yes, URLs can be considered ugly, but they work and are understandable by both humans and machines. And it’s trivial for any client to hide the URLs used as references in replies/threading.

Webfinger can be an add-on to help look up people, and it can be made independent of the nick by just serving the same JSON regardless of the nick, as people do with static sites and as I implemented it on darch.dk (wf endpoint). Try RANDOMSTRING@darch.dk on http://darch.dk/wf-lookup.php (wf lookup) or RANDOMSTRING@garrido.io on https://webfinger.net
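A ā€œsame answer for every nickā€ endpoint can be tiny; here is a hedged Python sketch (this is not the darch.dk PHP implementation; the paths, port, and profile contents are made up):

```python
# Sketch: a WebFinger endpoint that returns the same JSON no matter which nick
# is asked for. Profile contents, paths, and port are purely illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PROFILE = {
    "subject": "acct:anyone@example.org",
    "links": [{"rel": "http://webfinger.net/rel/profile-page",
               "href": "https://example.org/twtxt.txt"}],
}

class WebFinger(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/.well-known/webfinger"):
            body = json.dumps(PROFILE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/jrd+json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebFinger).serve_forever()
```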

⤋ Read More
In-reply-to » There’s a lot more activity in Geminispace than I realized: gemini://warmedal.se/~antenna/

gemini calls the request-response cycle a transaction in the spec. since transactions are not cached, we have this problem where we can’t tell if anything was updated without fetching it and we can’t indicate how often a client should expect the content to be valid. the most common solution right now is just to keep requesting the resource until it changes or stops existing, which isn’t ideal. this sort of update notification model is interesting because it re-frames your thinking into something more like event sourcing. you end up needing to add an event queue and dispatch to the server, which is a bit more complex on the server side than plain static files, but the client stays the same. i’m curious to see what kind of systems could be built on this gemini message queue concept.

⤋ Read More
In-reply-to » @prologic I’m sure you can somehow install something that calculates blake2b on OpenBSD. But it’s not part of the base system as a standalone CLI tool, there only appear to be Perl modules for it. The other SHA tools do exist.

@movq@www.uninformativ.de i’m sorry if I sound too contrarian. I’m not a fan of using an obscure hash either. The problem is that of future and backward compatibility. If we change to sha256 or another algorithm, we don’t just need to support sha256, but now have to support both sha256 AND blake2b. Or we divide the community: users of some clients will still use the old algorithm and get left behind.

Really we should all think hard about how changes will break things and if those breakages are acceptable.

⤋ Read More
In-reply-to » Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:

These collisions aren’t important unless someone tries to fork. So, for the vast majority it’s not a big deal. Using the growing-hash algorithm could inform the client to add another character when they fork.

⤋ Read More
In-reply-to » @anth you wrote:

@prologic@twtxt.net YES James, it should be up to the client to deal with changes like edits and deletions. And putting this load on the clients, location-addressing will make this a lot easier, since what it says is: look in this file at this timestamp, did anything change or go missing? (And then threading will not break ;)

⤋ Read More

Diving into mblaze, I think I’ve nearly* reached peak email geek.

Just a bunch of shell commands I can pipe together to search, list, view and reply to email (after syncing it to a local Maildir).

EXAMPLES at https://git.vuxu.org/mblaze/tree/README

So far I’m using most of the tools directly from the command line, but I might take inspiration from https://sr.ht/~rakoo/omail/ to make my workflow a bit more efficient.

*To get any closer, I think I’d have to hand-craft my own SMTP client or something.

⤋ Read More
In-reply-to » More thoughts about changes to twtxt (as if we haven't had enough thoughts):

@prologic@twtxt.net

See https://dev.twtxt.net

Yes, that is exactly what I meant. I like that collection and ā€œtwtxt v2ā€ feels like a departure.

Maybe there’s an advantage to grouping it into one spec, but IMO that shouldn’t be done at the same time as introducing new untested ideas.

See https://yarn.social (especially this section: https://yarn.social/#self-host) – It really doesn’t get much simpler than this 🤣

Again, I like this existing simplicity. (I would even argue you don’t need the metadata.)

That page says ā€œFor the best experience your client should also support some of the Twtxt Extensionsā€¦ā€ but it is clear you don’t need to. I would like it to stay that way, and publishing a big long spec and calling it ā€œtwtxt v2ā€ feels like a departure from that. (I think the content of the document is valuable; I’m just carping about how it’s being presented.)

⤋ Read More
In-reply-to » So really your argument is just that switching to a location-based addressing "just makes sense". Why? Without concrete pros/cons of each approach this isn't really a strong argument I'm afraid. In fact I probably need to just sit down and detail the properties of both approaches and the pros/cons of both.

why can’t we both have a format that you can write by hand and better clients?

⤋ Read More

Some more arguments for a location-based threading model over a content-based one:

  1. The format: (#<DATE URL>) or (@<DATE URL>) both make sense: # as prefix is for a hashtag like we already got with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering (see the parsing sketch after this list).

  2. Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build third-party twt-mention services.

  3. Supporting twt/webmentions will also increase discoverability as a way to know about both replies and feed mentions from feeds that you don’t follow.
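Here is that parsing sketch in Python (the exact delimiters are still up for debate, so treat the regex and field names as illustrative):

```python
# Illustrative parser for subjects of the form (#<DATE URL>) or (@<DATE URL>).
import re

SUBJECT = re.compile(r"\(([#@])<(\S+)\s+(\S+)>\)")

def parse_subject(twt_text):
    m = SUBJECT.match(twt_text)
    if not m:
        return None                      # not a reply/mention, just a plain twt
    kind, date, url = m.groups()
    return {"kind": kind, "date": date, "url": url}

parse_subject("(#<2024-09-15T12:06:27Z http://darch.dk/twtxt.txt>) Nice idea!")
```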

⤋ Read More

#fzf is the new emacs: a tool with a simple purpose that has evolved to include an #email client. https://sr.ht/~rakoo/omail/

I’m being a little silly, of course. fzf doesn’t actually check your email, but it appears to be basically the whole user interface for that mail program, with #mblaze wrangling the emails.

I’ve been thinking about how I handle my email, and am tempted to make something similar. (When I originally saw this linked the author was presenting it as an example tweaked to their own needs, encouraging people to make their own.)

This approach could surely also be combined with #jenny, taking the place of (neo)mutt. For example mblaze’s mthread tool presents a threaded discussion with indentation.

⤋ Read More
In-reply-to » Okay folks, I've spent all day on this today, and I think its in "good enough"ā„¢ shape to share:

@prologic@twtxt.net Thanks for writing that up!

I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.

I am not sure how I feel about all this being done at once, vs. letting conventions arise.

For example, even today I could reply to twt abc1234 with ā€œ(#abc1234) Edit: ā€¦ā€ and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.

Similarly we could just start using 11-digit hashes. We should iron out whether it’s sha256 or whatever, but there’s no need to get all the other stuff right at the same time.

I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).

However I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.

Misc comments (I haven’t read the whole thing):

  • Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I’d suggest gaining 11 bits with base64 instead. (See the quick comparison after this list.)

  • ā€œClients MUST preserve the original hashā€ — do you mean they MUST preserve the original twt?

  • Thanks for phrasing the bit about deletions so neutrally.

  • I don’t like the MUST in ā€œClients MUST follow the chain of reply-to referencesā€¦ā€. If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.

  • Similarly I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).

  • For ā€œwho followsā€ lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?

  • Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.

  • Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.

  • I’m a little sad about other protocols not being recommended.

  • I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain text kind of person. Also, it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
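Picking up the hex/base32/base64 point from the list above: with 11 characters you keep 4 bits per character in hex, 5 in base32, and 6 in base64, i.e. 44 vs. 55 vs. 66 bits of the digest. A quick illustrative Python sketch (the input string is made up and this is not any particular twt hash specification):

```python
# How much of a digest survives an 11-character truncation in each encoding.
import base64, hashlib

digest = hashlib.sha256(b"https://example.com/twtxt.txt\n2024-09-18T23:08:00+10:00\nHello").digest()

hex11 = digest.hex()[:11]                                   # 11 * 4 = 44 bits
b32   = base64.b32encode(digest).decode().lower()[:11]      # 11 * 5 = 55 bits
b64   = base64.urlsafe_b64encode(digest).decode()[:11]      # 11 * 6 = 66 bits
print(hex11, b32, b64)
```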

⤋ Read More
In-reply-to » @prologic Do you have a link to some past discussion?

@falsifian@www.falsifian.org Do you have specifics about the GDPR regarding this?

Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don’t think I have to honour that request, no matter how European they are.

I’m not sure myself now. So let’s find out whether parts of the GDPR actually apply to a truly decentralised system? 🤔

⤋ Read More
In-reply-to » @prologic I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.

@prologic@twtxt.net Do you have a link to some past discussion?

Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don’t think I have to honour that request, no matter how European they are.

I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?

Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.

⤋ Read More
In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how it non-hash subjects appear in jenny and yarn.)

One distinct disadvantage of (replyto:…) over (edit:#): (replyto:…) relies on clients always processing the entire feed – otherwise they wouldn’t even notice when a twt gets updated. a) This is more expensive, b) you cannot edit twts once they get rotated into an archived feed, because there is nothing signalling clients that they have to re-fetch that archived feed.

I guess neither matters that much in practice. It’s still a disadvantage.

⤋ Read More

I wrote some code to try out non-hash reply subjects formatted as (replyto URL TIMESTAMP), while keeping the ability to use the existing hash style.

I don’t think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.

With apologies to @movq@www.uninformativ.de for corrupting jenny’s beautiful code. I don’t write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny’s current LICENCE.

Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it’s possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven’t been seen yet but are wanted. When one of those ā€œwantedā€ twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
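For the curious, the bookkeeping described above might look roughly like this (the table and column names are my guesses for illustration, not necessarily what the patch actually uses):

```python
# Guessed sketch of the (url, timestamp) bookkeeping; not the actual patch.
import sqlite3

db = sqlite3.connect("jenny-threads.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS seen   (url TEXT, timestamp TEXT, message_id TEXT,
                                   PRIMARY KEY (url, timestamp));
CREATE TABLE IF NOT EXISTS wanted (url TEXT, timestamp TEXT, mail_path TEXT);
""")

def on_new_twt(url, timestamp, message_id):
    """Record a twt; fix up any mail files that were waiting to reference it."""
    db.execute("INSERT OR IGNORE INTO seen VALUES (?, ?, ?)",
               (url, timestamp, message_id))
    rows = db.execute("SELECT mail_path FROM wanted WHERE url=? AND timestamp=?",
                      (url, timestamp)).fetchall()
    for (mail_path,) in rows:
        pass  # rewrite mail_path, adding an In-Reply-To header with message_id
    db.execute("DELETE FROM wanted WHERE url=? AND timestamp=?", (url, timestamp))
    db.commit()
```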

Patch based on jenny commit 73a5ea81.

https://www.falsifian.org/a/oDtr/patch0.txt

Not implemented:

  • Composing twts using the (replyto …) format.
  • Probably other important things I’m forgetting.

⤋ Read More

the stem matching is the same as how GIT does its branch hashes. i think you can stem it down to 2 or 3 sha bytes.

if a client sees someone in a yarn using a byte-longer hash, it can lengthen its own to match, since it can assume that maybe the other client has a collision that it doesn’t know about.
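A hedged sketch of what that stem matching could look like (function and variable names are made up):

```python
# Illustrative stem matching: accept a short hash prefix, extend it only when
# it no longer uniquely identifies a twt among the ones this client knows.
def resolve_stem(stem, known_hashes):
    matches = [h for h in known_hashes if h.startswith(stem)]
    if len(matches) == 1:
        return matches[0]                 # unambiguous: the stem is enough
    if len(matches) > 1:
        raise ValueError("ambiguous stem, add another character")
    return None                           # unknown twt (or not fetched yet)
```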

⤋ Read More
In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how it non-hash subjects appear in jenny and yarn.)

I’m not advocating in either direction, btw. I haven’t made up my mind yet. 😅 Just braindumping here.

The (replyto:…) proposal is definitely more in the spirit of twtxt, I’d say. It’s much simpler, anyone can use it even with the simplest tools, no need for any client code. That is certainly a great property, if you ask me, and it’s things like that that brought me to twtxt in the first place.

I’d also say that in our tiny little community, message integrity simply doesn’t matter. Signed feeds don’t matter. I signed my feed for a while using GPG, someone else did the same, but in the end, nobody cares. The community is so tiny, there’s enough ā€œimplicit trustā€ or whatever you want to call it.

If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅

I do have to ā€œadmitā€, though, that hashes feel better. It feels good to know that we can clearly identify a certain twt. It feels more correct and stable.

Hm.

I suspect that the (replyto:…) proposal would work just as well in practice.

⤋ Read More
In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

@prologic@twtxt.net I wouldn’t want my client to honour delete requests. I like my computer’s memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can’t find it.

⤋ Read More

An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?). For example, let’s say I produced the following Twt:

2024-09-18T23:08:00+10:00	Hllo World

And my feed’s URI is https://example.com/twtxt.txt. The hash for this Twt is therefore 229d24612a2:

$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:08:00+10:00\nHllo World" | sha1sum | head -c 11
229d24612a2

You wish to correct your mistake, so you make an amendment to that Twt like so:

2024-09-18T23:10:43+10:00	(edit:#229d24612a2) Hello World

Which would then have a new Twt hash value of 026d77e03fa:

$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:10:43+10:00\nHello World" | sha1sum | head -c 11
026d77e03fa

Clients would then take this edit:#229d24612a2 to mean, this Twt is an edit of 229d24612a2 and should be replaced in the client’s cache, or indicated as such to the user that this is the intended content.
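A hedged sketch of how a client cache might apply such an edit subject (the regex and cache layout are made up for illustration):

```python
# Illustrative handling of (edit:#<hash>) subjects in a simple in-memory cache.
import re

EDIT = re.compile(r"\(edit:#(\w+)\)\s*(.*)", re.S)

def apply_twt(cache, twt_hash, text):
    m = EDIT.match(text)
    if m:
        old_hash, new_text = m.groups()
        cache.pop(old_hash, None)     # drop (or mark superseded) the old version
        cache[twt_hash] = new_text
    else:
        cache[twt_hash] = text
```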

⤋ Read More
In-reply-to » @prologic Yeah, that thing with (#hash;#originalHash) would also work.

@movq@www.uninformativ.de

Maybe I’m being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn’t even send that twt, I don’t remember 😅), I never really liked hashes to begin with. They aren’t super hard to implement but they are kind of against the beauty of the original twtxt – because you need special client support for them. It’s not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk’s proposal, though, that would be possible.

Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I’ll try it some time just for fun.

⤋ Read More
In-reply-to » This scheme also only support threading off a specific Twt of someone's feed. What if you're not replying to anyone in particular?

@prologic@twtxt.net Yeah, that thing with (#hash;#originalHash) would also work.

Maybe I’m being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn’t even send that twt, I don’t remember 😅), I never really liked hashes to begin with. They aren’t super hard to implement but they are kind of against the beauty of the original twtxt – because you need special client support for them. It’s not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk’s proposal, though, that would be possible.

I don’t know … maybe it’s just me.

I’m also being a bit selfish, to be honest: Implementing (#hash;#originalHash) in jenny for editing your own feed would not be a no-brainer. (Editing is already kind of unsupported, actually.) It wouldn’t be a problem to implement it for fetching other people’s feeds, though.

⤋ Read More
In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts dissapered, others were marked to be deleted in mutt. so I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh Pull. I pulled feed as usual. Now like HALF the twts aren't there šŸ˜‚ even my my last replay. WTF IS GOING ON? 🤣🤣🤣

not my fault your client can’t handle a little editing ;)

⤋ Read More
In-reply-to » The tag URI scheme looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick?

@sorenpeter@darch.dk

  1. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I think I like this a lot. 🤔

The problem with using hashes always was that they’re ā€œone-directionalā€: You can construct a hash from URL + timestamp + twt, but you cannot do the inverse. When I see a bare hash, I have no idea what that could possibly refer to.

But of course something like (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) has all the information you need. This could simplify twt/feed discovery quite a bit, couldn’t it? 🤔 That thing that I just implemented – jenny asking some Yarn pod for some twt hash – would not be necessary anymore. Clients could easily and automatically fetch complete threads instead of requiring the user to follow all relevant feeds.

Only using the timestamp to identify a twt also solves the edit problem.

It even is better for non-Yarn clients, because you now don’t have to read, understand, and implement a ā€œtwt hash specificationā€ before you can reply to someone.

The only problem, really, is that (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) is so long. Clients would have to try harder to hide this. 😅
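Resolving such a reference really is as simple as it sounds; a rough Python sketch (error handling, caching, and archived feeds are omitted, and the function name is made up):

```python
# Sketch: resolve a "(replyto:URL,TIMESTAMP)" subject by fetching the referenced
# feed and pulling out the one line with that timestamp. Illustrative only.
import re
import urllib.request

REPLYTO = re.compile(r"\(replyto:(\S+?),(\S+?)\)")

def resolve_reply(twt_text):
    m = REPLYTO.search(twt_text)
    if not m:
        return None
    url, timestamp = m.groups()
    with urllib.request.urlopen(url) as resp:
        feed = resp.read().decode("utf-8", errors="replace")
    for line in feed.splitlines():
        if line.startswith(timestamp + "\t"):
            return line                   # the twt being replied to
    return None
```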

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@falsifian@www.falsifian.org TLS won’t help you if you change your domain name. How will people know if it’s really you? Maybe that’s not the biggest problem for something with such low stakes as twtxt, but it’s a reasonable concern that could be solved using signatures from an unchanging cryptographic key.

This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn’t matter where you get the note from, your client can verify its authenticity. That way, relays don’t need to be trusted.

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@mckinley@twtxt.net

HTTPS is supposed to do [verification] anyway.

TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn’t, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you’re getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can’t verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don’t mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

feed locations [being] URLs gives some flexibility

It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I’m not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I’m also not very familiar with IPFS or IPNS.

I haven’t been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I’m too lazy to change how I publish my feed :-)

⤋ Read More