Searching yarn

Twts matching #Client
In-reply-to » @prologic I’m sure you can somehow install something that calculates blake2b on OpenBSD. But it’s not part of the base system as a standalone CLI tool, there only appear to be Perl modules for it. The other SHA tools do exist.

@movq@www.uninformativ.de I’m sorry if I sound too contrarian. I’m not a fan of using an obscure hash either. The problem is that of future and backward compatibility. If we change to sha256 or another algorithm, we don’t just need to support sha256; we now need to support both sha256 AND blake2b. Otherwise we divide the community: users of some clients will still use the old algorithm and get left behind.

Really we should all think hard about how changes will break things and if those breakages are acceptable.

⤋ Read More
In-reply-to » Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:

These collisions aren’t important unless someone tries to fork. So, for the vast majority, it’s not a big deal. A growing-hash algorithm could tell the client to add another character when they fork.
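A minimal sketch of that growing-hash idea (hypothetical client code; known holds the full hashes a client has already seen):

def fork_safe_prefix(full_hash: str, known: set, start: int = 7) -> str:
    # Grow a truncated hash one character at a time until no other
    # known hash shares the prefix.
    n = start
    others = known - {full_hash}
    while n < len(full_hash) and any(h.startswith(full_hash[:n]) for h in others):
        n += 1
    return full_hash[:n]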

⤋ Read More
In-reply-to » @anth you wrote:

@prologic@twtxt.net YES James, it should be up to the client to deal with changes like edits and deletions. And since this load is on the clients, location-addressing will make it a lot easier, since what it says is: look in this file at this timestamp; did anything change or go missing? (And then threading will not break;)

⤋ Read More

Diving into mblaze, I think I’ve nearly* reached peak email geek.

Just a bunch of shell commands I can pipe together to search, list, view and reply to email (after syncing it to a local Maildir).

EXAMPLES at https://git.vuxu.org/mblaze/tree/README

So far I’m using most of the tools directly from the command line, but I might take inspiration from https://sr.ht/~rakoo/omail/ to make my workflow a bit more efficient.

*To get any closer, I think I’d have to hand-craft my own SMTP client or something.

⤋ Read More
In-reply-to » More thoughts about changes to twtxt (as if we haven't had enough thoughts):

@prologic@twtxt.net

See https://dev.twtxt.net

Yes, that is exactly what I meant. I like that collection and “twtxt v2” feels like a departure.

Maybe there’s an advantage to grouping it into one spec, but IMO that shouldn’t be done at the same time as introducing new untested ideas.

See https://yarn.social (especially this section: https://yarn.social/#self-host) – It really doesn’t get much simpler than this 🤣

Again, I like this existing simplicity. (I would even argue you don’t need the metadata.)

That page says “For the best experience your client should also support some of the Twtxt Extensions…” but it is clear you don’t need to. I would like it to stay that way, and publishing a big long spec and calling it “twtxt v2” feels like a departure from that. (I think the content of the document is valuable; I’m just carping about how it’s being presented.)

⤋ Read More
In-reply-to » So really your argument is just that switching to a location-based addressing "just makes sense". Why? Without concrete pros/cons of each approach this isn't really a strong argument I'm afraid. In fact I probably need to just sit down and detail the properties of both approaches and the pros/cons of both.

Why can’t we have both: a format that you can write by hand, and better clients?

⤋ Read More

Some more arguments for a location-based threading model over a content-based one:

  1. The format: (#<DATE URL>) or (@<DATE URL>) both make sense: # as prefix is for a hashtag, like we already have with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already have this kind of filtering (see the sketch after this list).

  2. Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build third-party twt-mention services.

  3. Supporting twt/webmentions will also increase discoverability as a way to know about both replies and feed mentions from feeds that you don’t follow.
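A rough illustration of that filtering in Python, assuming the proposed (#<DATE URL>) / (@<DATE URL>) shape (the regex and function here are mine, not from any spec):

import re

# (#<DATE URL>) = reply to a twt, (@<DATE URL>) = mention of a twt.
SUBJECT = re.compile(r'\(([#@])(\S+) (\S+)\)')

def parse_subject(twt: str):
    m = SUBJECT.search(twt)
    if m is None:
        return None
    kind = 'reply' if m.group(1) == '#' else 'mention'
    return kind, m.group(2), m.group(3)  # kind, timestamp, feed URL

print(parse_subject('(#2024-09-15T12:50:17Z http://darch.dk/twtxt.txt) I like this idea.'))
# -> ('reply', '2024-09-15T12:50:17Z', 'http://darch.dk/twtxt.txt')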

⤋ Read More

#fzf is the new emacs: a tool with a simple purpose that has evolved to include an #email client. https://sr.ht/~rakoo/omail/

I’m being a little silly, of course. fzf doesn’t actually check your email, but it appears to be basically the whole user interface for that mail program, with #mblaze wrangling the emails.

I’ve been thinking about how I handle my email, and am tempted to make something similar. (When I originally saw this linked the author was presenting it as an example tweaked to their own needs, encouraging people to make their own.)

This approach could surely also be combined with #jenny, taking the place of (neo)mutt. For example mblaze’s mthread tool presents a threaded discussion with indentation.

⤋ Read More
In-reply-to » Okay folks, I've spent all day on this today, and I think it's in "good enough"™ shape to share:

@prologic@twtxt.net Thanks for writing that up!

I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.

I am not sure how I feel about all this being done at once, vs. letting conventions arise.

For example, even today I could reply to twt abc1234 with “(#abc1234) Edit: …” and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.

Similarly we could just start using 11-digit hashes. We should iron out whether it’s sha256 or whatever, but there’s no need to get all the other stuff right at the same time.

I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).

However I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.

Misc comments (I haven’t read the whole thing):

  • Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I’d suggest gaining 11 bits with base64 instead. (See the sketch after this list.)

  • “Clients MUST preserve the original hash” — do you mean they MUST preserve the original twt?

  • Thanks for phrasing the bit about deletions so neutrally.

  • I don’t like the MUST in “Clients MUST follow the chain of reply-to references…”. If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.

  • Similarly I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).

  • For “who follows” lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?

  • Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.

  • Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.

  • I’m a little sad about other protocols being not recommended.

  • I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
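To put numbers on the first bullet (hexadecimal vs. base32 vs. base64), here is a quick illustration; sha256 is only a stand-in for whatever hash function ends up being chosen:

import base64, hashlib

digest = hashlib.sha256(b'example twt').digest()

# Eleven characters carry a different number of digest bits per encoding:
print(digest.hex()[:11])                               # 4 bits/char -> 44 bits
print(base64.b32encode(digest).decode()[:11].lower())  # 5 bits/char -> 55 bits
print(base64.b64encode(digest).decode()[:11])          # 6 bits/char -> 66 bits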

⤋ Read More
In-reply-to » @prologic Do you have a link to some past discussion?

@falsifian@www.falsifian.org Do you have specifics about the GDPR law on this?

Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don’t think I have to honour that request, no matter how European they are.

I’m not sure myself now. So let’s find out whether parts of the GDPR actually apply to a truly decentralised system? 🤔

⤋ Read More
In-reply-to » @prologic I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.

@prologic@twtxt.net Do you have a link to some past discussion?

Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don’t think I have to honour that request, no matter how European they are.

I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?

Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.

⤋ Read More
In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)

One distinct disadvantage of (replyto:…) over (edit:#): (replyto:…) relies on clients always processing the entire feed – otherwise they wouldn’t even notice when a twt gets updated. a) This is more expensive, b) you cannot edit twts once they get rotated into an archived feed, because there is nothing signalling clients that they have to re-fetch that archived feed.

I guess neither matters that much in practice. It’s still a disadvantage.

⤋ Read More

I wrote some code to try out non-hash reply subjects formatted as (replyto …), while keeping the ability to use the existing hash style.

I don’t think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.

With apologies to @movq@www.uninformativ.de for corrupting jenny’s beautiful code. I don’t write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny’s current LICENCE.

Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it’s possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven’t been seen yet but are wanted. When one of those ā€œwantedā€ twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
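For the curious, the bookkeeping could look roughly like this; a sketch of the idea only, with made-up table and helper names (the actual patch linked below is authoritative):

import sqlite3

db = sqlite3.connect('jenny-replyto.db')
db.executescript('''
CREATE TABLE IF NOT EXISTS known  (url TEXT, ts TEXT, msgid TEXT,
                                   PRIMARY KEY (url, ts));
CREATE TABLE IF NOT EXISTS wanted (url TEXT, ts TEXT, mail_path TEXT,
                                   PRIMARY KEY (url, ts));
''')

def twt_seen(url, ts, msgid):
    # Record a twt and fix up any mail files that were waiting for it.
    for (path,) in db.execute(
            'SELECT mail_path FROM wanted WHERE url = ? AND ts = ?', (url, ts)):
        rewrite_in_reply_to(path, msgid)  # hypothetical helper that edits the mail file
    db.execute('DELETE FROM wanted WHERE url = ? AND ts = ?', (url, ts))
    db.execute('INSERT OR IGNORE INTO known VALUES (?, ?, ?)', (url, ts, msgid))
    db.commit()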

Patch based on jenny commit 73a5ea81.

https://www.falsifian.org/a/oDtr/patch0.txt

Not implemented:

  • Composing twts using the (replyto …) format.
  • Probably other important things I’m forgetting.

⤋ Read More

The stem matching is the same as how Git does its branch hashes. I think you can stem it down to 2 or 3 SHA bytes.

If a client sees someone in a yarn using a longer hash, it can lengthen its own to match, since it can assume that maybe the other client knows about a collision that it doesn’t.
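In client code, that resolution could be as small as this sketch (mine, not from any implementation):

def resolve(short: str, known: set):
    # Resolve an abbreviated hash against all full hashes we know, git-style:
    # a unique prefix resolves, an ambiguous one needs more characters.
    matches = [h for h in known if h.startswith(short)]
    return matches[0] if len(matches) == 1 else None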

⤋ Read More
In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)

I’m not advocating in either direction, btw. I haven’t made up my mind yet. 😅 Just braindumping here.

The (replyto:…) proposal is definitely more in the spirit of twtxt, I’d say. It’s much simpler, anyone can use it even with the simplest tools, no need for any client code. That is certainly a great property, if you ask me, and it’s things like that that brought me to twtxt in the first place.

I’d also say that in our tiny little community, message integrity simply doesn’t matter. Signed feeds don’t matter. I signed my feed for a while using GPG, someone else did the same, but in the end, nobody cares. The community is so tiny, there’s enough “implicit trust” or whatever you want to call it.

If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅

I do have to “admit”, though, that hashes feel better. It feels good to know that we can clearly identify a certain twt. It feels more correct and stable.

Hm.

I suspect that the (replyto:…) proposal would work just as well in practice.

⤋ Read More
In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

@prologic@twtxt.net I wouldn’t want my client to honour delete requests. I like my computer’s memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can’t find it.

⤋ Read More

An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let’s say I produced the following Twt:

2024-09-18T23:08:00+10:00	Hllo World

And my feed’s URI is https://example.com/twtxt.txt. The hash for this Twt is therefore 229d24612a2:

$ printf 'https://example.com/twtxt.txt\n2024-09-18T23:08:00+10:00\nHllo World' | sha1sum | head -c 11
229d24612a2

You wish to correct your mistake, so you make an amendment to that Twt like so:

2024-09-18T23:10:43+10:00	(edit:#229d24612a2) Hello World

Which would then have a new Twt hash value of 026d77e03fa:

$ printf 'https://example.com/twtxt.txt\n2024-09-18T23:10:43+10:00\nHello World' | sha1sum | head -c 11
026d77e03fa

Clients would then take this edit:#229d24612a2 to mean that this Twt is an edit of 229d24612a2 and should replace it in the client’s cache, or be indicated to the user as the intended content.
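In client terms, handling that could be as simple as this sketch (a hash-keyed cache is assumed; none of this is specified anywhere yet):

import re

EDIT_SUBJECT = re.compile(r'^\(edit:#([0-9a-f]+)\)\s*')

def ingest(cache: dict, twt_hash: str, content: str):
    # Store a twt; if it declares itself an edit, drop the superseded one.
    m = EDIT_SUBJECT.match(content)
    if m and m.group(1) in cache:
        del cache[m.group(1)]  # 229d24612a2 gets replaced by 026d77e03fa
    cache[twt_hash] = content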

⤋ Read More
In-reply-to » @prologic Yeah, that thing with (#hash;#originalHash) would also work.

@movq@www.uninformativ.de

Maybe I’m being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn’t even send that twt, I don’t remember 😅), I never really liked hashes to begin with. They aren’t super hard to implement but they are kind of against the beauty of the original twtxt – because you need special client support for them. It’s not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk’s proposal, though, that would be possible.

Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I’ll try it some time just for fun.

⤋ Read More
In-reply-to » This scheme also only support threading off a specific Twt of someone's feed. What if you're not replying to anyone in particular?

@prologic@twtxt.net Yeah, that thing with (#hash;#originalHash) would also work.

Maybe I’m being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn’t even send that twt, I don’t remember 😅), I never really liked hashes to begin with. They aren’t super hard to implement but they are kind of against the beauty of the original twtxt – because you need special client support for them. It’s not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk’s proposal, though, that would be possible.

I don’t know … maybe it’s just me. 🄓

I’m also being a bit selfish, to be honest: Implementing (#hash;#originalHash) in jenny for editing your own feed would not be a no-brainer. (Editing is already kind of unsupported, actually.) It wouldn’t be a problem to implement it for fetching other people’s feeds, though.

⤋ Read More
In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts disappeared, others were marked to be deleted in mutt, so I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh pull. I pulled feeds as usual. Now like HALF the twts aren't there 😂 even my last reply. WTF IS GOING ON? 🤣🤣🤣

Not my fault your client can’t handle a little editing ;)

⤋ Read More
In-reply-to » The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

@sorenpeter@darch.dk

  1. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I think I like this a lot. 🤔

The problem with using hashes always was that they’re “one-directional”: You can construct a hash from URL + timestamp + twt, but you cannot do the inverse. When I see a hash, I have no idea what it could possibly refer to.

But of course something like (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) has all the information you need. This could simplify twt/feed discovery quite a bit, couldn’t it? 🤔 That thing that I just implemented – jenny asking some Yarn pod for some twt hash – would not be necessary anymore. Clients could easily and automatically fetch complete threads instead of requiring the user to follow all relevant feeds.
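A sketch of that automatic thread fetching (my code, nothing standardized; a real client would cache feeds and cope with archives):

import re
import urllib.request

REPLYTO = re.compile(r'\(replyto:(\S+?),(\S+?)\)')

def fetch_parent(twt: str):
    # Follow a (replyto:URL,TIMESTAMP) subject straight to its target twt.
    m = REPLYTO.search(twt)
    if m is None:
        return None
    url, ts = m.group(1), m.group(2)
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode('utf-8', 'replace').splitlines():
            if line.startswith(ts + '\t'):
                return line
    return None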

Only using the timestamp to identify a twt also solves the edit problem.

It even is better for non-Yarn clients, because you now don’t have to read, understand, and implement a ā€œtwt hash specificationā€ before you can reply to someone.

The only problem, really, is that (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) is so long. Clients would have to try harder to hide this. 😅

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@falsifian@www.falsifian.org TLS won’t help you if you change your domain name. How will people know if it’s really you? Maybe that’s not the biggest problem for something with such low stakes as twtxt, but it’s a reasonable concern that could be solved using signatures from an unchanging cryptographic key.

This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn’t matter where you get the note from, your client can verify its authenticity. That way, relays don’t need to be trusted.

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@mckinley@twtxt.net

HTTPS is supposed to do [verification] anyway.

TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn’t, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you’re getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can’t verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don’t mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

feed locations [being] URLs gives some flexibility

It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I’m not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I’m also not very familiar with IPFS or IPNS.

I haven’t been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I’m too lazy to change how I publish my feed :-)

⤋ Read More
In-reply-to » @aelaraji how would that work exactly? Does that mean then that every user is required to have a Keyoxide profile? Who maintains the Keyoxide site? Is it centralized or decentralized? Can it be relied upon?

@prologic@twtxt.net well…

how would that work exactly?

To my limited knowledge, Keyoxide is an open source project offering different tools for verifying one’s online persona(s). That’s done by either A) creating an Ariadne Profile using the web interface or a CLI, or B) just using your GPG key. Either way, you add identity claims to your different profiles, links and whatnot, and finally advertise your profile … Then there is a second set of mobile/web clients and a CLI that your correspondents can use to check your identity claims. I think of them like the front-ends of GPG keyservers (which Keyoxide leverages for verification when you opt for the GPG key method), where you verify profiles using links, key IDs and fingerprints…

Who maintains the Keyoxide site? Is it centralized or decentralized? Can it be relied upon?

  • Maintainers? Definitely not me, but here’s their Git stuff and OpenCollective page …
  • Both ASP and Keyoxide Webtools can be self-hosted. I don’t see a central authority here… Also, as mentioned on their FAQ page, the whole process can be done manually, so you don’t have to rely on anyone or anything if you don’t want to; the whole thing is just another tool for convenience (with a bit of eye candy).

Does that mean then that every user is required to have a Keyoxide profile?

Nope. But it looks like a nice option to prove that I’m the same person to whom it may concern if I ever change my Twtxt URL, host/join a yarn pod, or if I reach out on other platforms to someone I’ve met in here. Otherwise I’m just happy exchanging GPG keys or confirming the change IRL at a coffee shop or something. 😁

⤋ Read More

Interesting… QUIC isn’t very quick over fast internet.

QUIC is expected to be a game-changer in improving web application performance. In this paper, we conduct a systematic examination of QUIC’s performance over high-speed networks. We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases. We observe this issue on lightweight data transfer clients and major web browsers (Chrome, Edge, Firefox, Opera), on different hosts (desktop, mobile), and over diverse networks (wired broadband, cellular). It affects not only file transfers, but also various applications such as video streaming (up to 9.8% video bitrate reduction) and web browsing. Through rigorous packet trace analysis and kernel- and user-space profiling, we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC’s user-space ACKs. We make concrete recommendations for mitigating the observed performance issues.

https://dl.acm.org/doi/10.1145/3589334.3645323

⤋ Read More
In-reply-to » On the Subject of Feed Identities; I propose the following:

So this is a great thread. I have been thinking about this too… and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it’s almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash in place. Changing that now is basically a no-go. But we can create a signature chain that can link identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url.

The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.

The signature file can be hosted anywhere, as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that its all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file, but I wonder if that would cause lots of clutter. The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature.

# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...

⤋ Read More
In-reply-to » (#mp6ox4a) @cuaxolotl Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

@movq@www.uninformativ.de @prologic@twtxt.net Hey! I may have found a silly trick to announce my following to people hosting their feeds on the Gemini space, using the requested URI itself instead of relying on the User-Agent 😂. I’ve copied my current feed over to my (to be) Gemlog for testing. And if I do a jenny -D "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt", this happens:

A) As a follower, I get the feed as usual.
B) As the feed owner, I get this in logs:

hostname:1965 - "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt" 20 "text/plain;lang=en-US"

You could do the same for Gopher feeds, but only if you want to announce yourself by throwing an error into their logs; then you’ll need a second request to fetch the feed. jenny -D "gopher://gopher.aelaraji.com/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt" gave me this:

gopher.aelaraji.com:70 - [09/Sep/2024:22:08:54 +0000] "GET 0/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt HTTP/1.0" 404 0 "" "Unknown gopher client"

NB: the follower=... string won’t appear in gopher logs after a ?, but if I replace it with a + or a &, it works. There will be a missing / after the https:. Probably a client thing.
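If a client wanted to automate this, a helper could be as small as the following (hypothetical, just mirroring the URLs above; note the ? vs. +/& separator quirk):

def announce_url(feed_url: str, nick: str, my_feed: str, sep: str = '?') -> str:
    # Build a fetch URL that smuggles the follower announcement into the
    # server's access log. Use '?' for Gemini, '&' or '+' for Gopher.
    return f'{feed_url}{sep}follower={nick}@{my_feed}'

print(announce_url('gemini://gem.aelaraji.com/twtxt.txt',
                   'aelaraji', 'https://aelaraji.com/twtxt.txt'))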

⤋ Read More
In-reply-to » On the Subject of Feed Identities; I propose the following:

@mckinley@twtxt.net To answer some of your questions:

Are SSH signatures standardized and are there robust software libraries that can handle them? We’ll need a library in at least Python and Go to provide verified feed support with the currently used clients.

We already have this. Ed25519 libraries exist for all major languages. Aside from using ssh-keygen -Y sign and ssh-keygen -Y verify, you can also use the salty CLI itself (https://git.mills.io/prologic/salty), and I’m sure there are other command-line tools that could be used too.
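For illustration, signing and verifying a feed with PyNaCl would look like this; a sketch only, ssh-keygen -Y sign/verify or salty achieve the same thing:

from nacl.signing import SigningKey  # pip install pynacl

sk = SigningKey.generate()                 # the feed author's private key
feed = b'2024-09-18T23:08:00+10:00\tHello World\n'

signed = sk.sign(feed)                     # signature + message bundle
sk.verify_key.verify(signed)               # raises BadSignatureError on tampering
fingerprint = sk.verify_key.encode().hex() # public key, usable as a stable identity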

If we all implemented this, every twt hash would suddenly change and every conversation thread we’ve ever had would at least lose its opening post.

Yes. This would happen, so we’d have to make a decision around this, either a) a cut-off point or b) some way to progressively transition.

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@lyse@lyse.isobeef.org This looks like a nice way to do it.

Another thought: if clients can’t agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.

A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there’s a period when clients are doing different things.
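Sketched out (with a stand-in hash function; the real one is whatever the Twt Hash extension specifies):

import hashlib

def twt_hash(url: str, ts: str, content: str) -> str:
    # Placeholder scheme; substitute the actual twt hash algorithm here.
    return hashlib.sha256(f'{url}\n{ts}\n{content}'.encode()).hexdigest()[:11]

def all_hashes(urls: list, ts: str, content: str) -> set:
    # One hash per URL the feed has ever used, for thread matching.
    return {twt_hash(u, ts, content) for u in urls}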

(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don’t know about other clients.)

⤋ Read More
In-reply-to » @bender Sorry, trust was the wrong word. Trust as in, you do not have to check with anything or anyone that the hash is valid. You can verify the hash is valid by recomputing the hash from the content of what it points to, etc.

@bender@twtxt.net Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you’re coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.

My money is on extending the Twt Subject extension to support more (optional) advanced “subjects”; i.e., indicating you edited a Twt you already published in your feed, as @falsifian@www.falsifian.org indicated 👌

Then we have a secondary (but much rarer) problem of the “identity” of a feed in the first place. Using the URL you fetch the feed from, as @lyse@lyse.isobeef.org’s client tt seems to do, or using the # url = metadata field as every other client does (according to the spec), is problematic when you decide to change where you host your feed. In fact the spec says:

Users are advised to not change the first one of their urls. If they move their feed to a new URL, they should add this new URL as a new url field.

See Choosing the Feed URL – This is one of our longest debates and challenges, and I think (I suspect along with @xuu@txt.sour.is) that the right way to solve this is to use public/private key(s), where you actually have a public key fingerprint as your feed’s unique identity that never changes.

⤋ Read More
In-reply-to » All this hash breakage made me wonder if we should try to introduce “message IDs” after all. 😅

@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it’s an edited version of the original twt with that hash. E.g. if the hash used to be abcd123, the new version should start “(#abcd123) (redit)”.

What I like about this is that clients that don’t know this convention will still stick it in the same thread. And I feel it’s in the spirit of the old pre-hash (subject) convention, though that’s before my time.

I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.

But the great thing about the current system is that nobody can spoof message IDs.

I don’t think twtxt hashes are long enough to prevent spoofing.

⤋ Read More
In-reply-to » Serious open (for anyone) question: what makes you follow someone on twtxt? Will you just follow anyone that you come across, simply because that someone using the "decentralised, minimalist microblogging service for hackers" microblog?

@bender@twtxt.net On twtxt, I follow all feeds that I can find (there are some exceptions, of course). There’s so little going on in general, it hardly matters. 😅

And I just realized: Mutt’s layout helps a lot. Skimming over new twts is really easy and it’s not a big loss if there are a couple of shitposts™ in my “timeline”. This is very different from Mastodon (both the default web UI and all clients I’ve tried), where the timeline is always huge. Posts take up a lot of space on screen. Makes me think twice if I want to follow someone or not. 😅

(I mostly only follow Hashtags on Mastodon anyway. It’s more interesting that way.)

⤋ Read More

@cuaxolotl@sunshinegardens.org Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

⤋ Read More
In-reply-to » Sip your coffee in peace ☕🕊

@bender@twtxt.net My index formatting is intact, probably because I still haven’t figured out how to set up my terminal to show RTL text correctly! 😅 but hey, that won’t be a problem anymore, I don’t feel like twting in Arabic. Sorry for the inconvenience.

Screenshot of neomutt running Jenny, the best twtxt client

⤋ Read More
In-reply-to » Correct, @bender. Since the very beginning, my twtxt flow is very flawed. But it turns out to be an advantage for this sort of problem. :-) I still use the official (but patched) twtxt client by buckket to actually fetch and fill the cache. I think one of the patches played around with the error reporting. This way, any problems with fetching or parsing feeds show up immediately. Once I think I've seen enough errors, I unsubscribe.

@lyse@lyse.isobeef.org ah, if only you were to finally clean up that code, and make that client widely available…! One can only dream, right? :-)

⤋ Read More
In-reply-to » @lyse so, is it safe to assume you occasionally, but carefully, vet your feeds, and have contingencies in place to not keep requesting a seemingly dead feed over and over?

Correct, @bender@twtxt.net. Since the very beginning, my twtxt flow is very flawed. But it turns out to be an advantage for this sort of problem. :-) I still use the official (but patched) twtxt client by buckket to actually fetch and fill the cache. I think one of the patches played around with the error reporting. This way, any problems with fetching or parsing feeds show up immediately. Once I think I’ve seen enough errors, I unsubscribe.

tt is just a viewer into the cache. The read statuses are stored in a separate database file.

It also happened a few times, that I thought some feed was permanently dead and removed it from my list. But then, others mentioned it, so I resubscribed.

⤋ Read More
In-reply-to » @bender I'm not a yarnd user, but automatically unfollowing on 404 doesn't seem right. Besides @lyse's example, I could imagine just accidentally renaming my own twtxt file, or forgetting to push it when I point my DNS to a new web server. I'd rather not lose all my yarnd followers in a situation like that (and hopefully they feel the same).

@falsifian@www.falsifian.org @bender@twtxt.net I’d certainly hate my client for automatic feed unsubscription, too.

⤋ Read More
In-reply-to » @bender I'm not a yarnd user, but automatically unfollowing on 404 doesn't seem right. Besides @lyse's example, I could imagine just accidentally renaming my own twtxt file, or forgetting to push it when I point my DNS to a new web server. I'd rather not lose all my yarnd followers in a situation like that (and hopefully they feel the same).

@bender@twtxt.net Based on my experience so far, as a user, I would be upset if my client dropped someone from my follower list, i.e. stopped fetching their feed, without me asking for that to happen.

⤋ Read More
In-reply-to » @abucci / @abucci Any interesting errors pop up in the server logs since the flaw got fixed (unbounded receieveFile())? 🤔

@prologic@twtxt.net I don’t know if this is new, but I’m seeing:

Jul 25 16:01:17 buc yarnd[1921547]: time="2024-07-25T16:01:17Z" level=error msg="https://yarn.stigatle.no/user/stigatle/twtxt.txt: client.Do fail: Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" error="Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)"

I no longer see twts from @stigatle@yarn.stigatle.no at all.

⤋ Read More
In-reply-to » Microsoft Outage Hits Users Worldwide, Leading To Canceled Flights Microsoft grappled with a major service outage, leaving users across the world unable to access its cloud computing platforms and causing airlines to cancel flights. From a report: Thousands of users across the world reported problems with Microsoft 365 apps and services to Downdetector.com, a website that tracks service disruptions. "We're inve ... ⌘ Read more

I haven’t seen any emails about the outage at work. I know I have the Mac CrowdStrike client, though. My buddy who works at a hospital says they weren’t affected.

⤋ Read More