Searching yarn

Twts matching #Protocol
In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts disappeared, others were marked to be deleted in mutt. So I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh pull. I pulled feeds as usual. Now like HALF the twts aren't there 😂 even my last reply. WTF IS GOING ON? 🤣🤣🤣

More:

Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
        somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
        what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)' 2.
        `(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' 2. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' I know it's longer that 7-11 characters, but it's self-explaining when looking at the
        twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
        somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
        what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)` 2.
        `(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` 3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` I know it's longer that 7-11 characters, but it's self-explaining when looking at the
        twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?

Notice the difference? Soren edited, and broke everything.

In-reply-to » @prologic Some criticisms and a possible alternative direction:

The tag URI scheme looks interesting. I like that it’s human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be… Maybe it doesn’t have to be that strict?

Instead of using tag: as the prefix/protocol, it would make it clearer what we are talking about by using in-reply-to: (https://indieweb.org/in-reply-to) or replyto:, similar to mailto:

  1. (reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
  2. (in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
  3. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I know it’s longer than 7-11 characters, but it’s self-explanatory when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*\:

Is this something that would work?
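
As a quick sanity check, here is a small sketch using Go’s regexp package (the trailing \: from the pattern above is written as a plain colon, which matches the same thing); it catches all three forms:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Same idea as the pattern above: an opening paren, an optional
        // word around "reply", then a colon.
        re := regexp.MustCompile(`\([\w-]*reply[\w-]*:`)

        subjects := []string{
            "(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)",
            "(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
            "(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
        }
        for _, s := range subjects {
            fmt.Println(re.MatchString(s), s) // prints "true" for all three
        }
    }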

In-reply-to » @falsifian In my opinion it was a mistake that we defined the first url field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.

If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.

Okay, what about defining a preferred protocol as part of the hash schema? So: 1. https, 2. http, 3. gemini, 4. gopher?
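
A rough sketch of what that could look like; I’m assuming a blake2b-based short hash in the spirit of the current ones, with the scheme simply dropped so that https, http, gemini or gopher mirrors of the same host/path all yield the same hash (hashWithoutScheme is a made-up name, not anything implemented):

    package main

    import (
        "encoding/base32"
        "fmt"
        "net/url"
        "strings"

        "golang.org/x/crypto/blake2b"
    )

    // hashWithoutScheme drops the protocol prefix before hashing, so mirrors of
    // the same feed over different protocols agree on the twt hash.
    func hashWithoutScheme(feedURL, created, content string) string {
        u, err := url.Parse(feedURL)
        if err != nil {
            return ""
        }
        key := u.Host + u.Path // e.g. "gemini.ctrl-c.club/~nristen/twtxt.txt"
        sum := blake2b.Sum256([]byte(key + "\n" + created + "\n" + content))
        enc := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(sum[:])
        enc = strings.ToLower(enc)
        return enc[len(enc)-7:] // keep a short, 7-character style hash
    }

    func main() {
        a := hashWithoutScheme("gemini://gemini.ctrl-c.club/~nristen/twtxt.txt", "2024-09-15T12:06:27Z", "hello")
        b := hashWithoutScheme("https://gemini.ctrl-c.club/~nristen/twtxt.txt", "2024-09-15T12:06:27Z", "hello")
        fmt.Println(a, b, a == b) // same hash, the scheme no longer matters
    }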

In-reply-to » Interesting.. QUIC isn't very quick over fast internet.

@xuu@txt.sour.is Thanks for the link. I found a PDF on one of the authors’ home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is about the same below roughly 600 Mbps.

To be fair, I don’t think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it’s part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.

In-reply-to » On the Subject of Feed Identities; I propose the following:

So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it’s almost as if I have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash mapping in place; changing that now is basically a no-go. But we can create a signature chain that links identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url:

The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.

The signature file can be hosted anywhere, as long as it can be fetched by a reasonable protocol. So, say, we could use a WebFinger lookup that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it’s all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file.. but I wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add URL) and a signature of the change + the previous signature:

# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
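
Purely to illustrate how light that could be for clients (Entry and parseChain are made-up names, and signature verification is left out entirely), folding those preamble lines into a chain is just line parsing:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Entry is one step in the hypothetical signature chain from the preamble.
    type Entry struct {
        Op  string // ADDKEY, ADDURL, REVKEY, ...
        Arg string // the key or URL the operation applies to
        Sig string // signature over this change + the previous signature
    }

    func parseChain(feed string) []Entry {
        var chain []Entry
        sc := bufio.NewScanner(strings.NewReader(feed))
        for sc.Scan() {
            line := sc.Text()
            switch {
            case strings.HasPrefix(line, "# chain: "):
                fields := strings.SplitN(strings.TrimPrefix(line, "# chain: "), " ", 2)
                e := Entry{Op: fields[0]}
                if len(fields) > 1 {
                    e.Arg = fields[1]
                }
                chain = append(chain, e)
            case strings.HasPrefix(line, "# sig: ") && len(chain) > 0:
                chain[len(chain)-1].Sig = strings.TrimPrefix(line, "# sig: ")
            }
        }
        return chain
    }

    func main() {
        feed := "# chain: ADDURL https://txt.sour.is/user/xuu\n# sig: BEGIN SALTPACK SIGNED MESSAGE. ...\n"
        fmt.Printf("%+v\n", parseChain(feed))
    }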

In-reply-to » Does anyone know what the differences between HTTP/1.1, HTTP/2 and HTTP/3 are? 🤔

HTTP/2 differs from 1.x by becoming a binary protocol; it also multiplexes multiple channels over the same connection and has the ability to prefetch related content to the browser to lower the perceived latency.

HTTP/3 moves the binary protocol from HTTP/2 over to QUIC, which is based on UDP instead of TCP. This makes it better suited to mobile or unstable networks, where transmission errors can be handled at a higher level.
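
For what it’s worth, Go’s standard net/http server already negotiates HTTP/2 automatically when serving TLS (HTTP/3 still needs a third-party QUIC library), so a sketch like this is enough to see which version each client ends up speaking; cert.pem and key.pem are placeholder paths:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // r.Proto reports "HTTP/1.1" or "HTTP/2.0" depending on what was negotiated via ALPN.
            fmt.Fprintf(w, "you spoke %s\n", r.Proto)
        })
        // HTTP/2 is enabled automatically for TLS servers; plain ListenAndServe stays on HTTP/1.1.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }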

In-reply-to » Yarn should define its own federation protocol that extends the basic twtxt in ways that twtxt doesn't allow. It's time. And I've got ideas!

@shreyan@twtxt.net What do you mean when you say federation protocol?

I’m not sure we need much else. I would not even bother with encryption, since other platforms do that better, and for me twtxt/yarn/timeline is for making things public.

In-reply-to » (#fytbg6a) What about using the blockquote format with > ?

I’m also more in favor of #reposts being human readable and writable. A client might implement a button that posts something simple like: #repost Look at this cool stuff, because bla bla [alt](url)

This will then make it possible to also “repost” stuff from other platforms/protocols.

The reader part of a client can then render a preview of the link, which, as we talked about, would be a nice (optional) feature to have in yarnd.


Unpleasant thought of the day: the #gemini protocol is not ecological, because it is not accessible on old hardware due to the mandatory TLS. Serving text files written in gemtext over #http is better in that case.

In-reply-to » So given Google™'s recent policy changes where they now outright and blatantly just admit they'll crawl, index and feed your (yes, your fucking) writings, thoughts, conversations, etc. into their AI models; should we as a small niche community (still growing) think about perhaps finally building Yarn.social v2 where we have encrypted feeds? 😅

@prologic@twtxt.net hmm, I’d be up for thinking about that. At least at the protocol and design level–I’m afraid I can’t help much with Go programming.


💡 Quick ’n Dirty prototype Yarn.social protocol/spec:

If we were to decide to write a new spec/protocol, what would it look like?

Here’s my rough draft (back of paper napkin idea):

  • Feeds are JSON file(s) fetchable by standard HTTP clients over TLS
  • WebFinger is used at the root of a user’s domain (or a multi-user domain) for lookup, e.g. prologic@mills.io -> https://yarn.mills.io/~prologic.json
  • Feeds contain similar metadata that we’re familiar with: Nick, Avatar, Description, etc
  • Feed items are signed with an ED25519 private key. That is, all “posts” are cryptographically signed.
  • Feed items continue to use content-addressing, but use the full Blake2b Base64 encoded hash.
  • Edited feed items produce an “Edited” item so that clients can easily follow edits.
  • Deleted feed items produce a “Deleted” item so that clients can easily delete cached items.

#Yarn.social #Protocol #Ideas
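
To make the signing and hashing bullets a bit more concrete, here is a minimal sketch of producing one signed, content-addressed feed item; the field names and the exact payload being signed/hashed are my own assumptions, not part of any spec:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "encoding/base64"
        "encoding/json"
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    // Item is one feed entry in the hypothetical JSON feed format.
    type Item struct {
        Created   string `json:"created"`
        Content   string `json:"content"`
        Hash      string `json:"hash"`      // full Blake2b-256 hash, Base64 encoded
        Signature string `json:"signature"` // ED25519 signature over created+content
    }

    func newItem(priv ed25519.PrivateKey, created, content string) Item {
        payload := []byte(created + "\n" + content)
        sum := blake2b.Sum256(payload)
        return Item{
            Created:   created,
            Content:   content,
            Hash:      base64.RawURLEncoding.EncodeToString(sum[:]),
            Signature: base64.RawURLEncoding.EncodeToString(ed25519.Sign(priv, payload)),
        }
    }

    func main() {
        _, priv, _ := ed25519.GenerateKey(rand.Reader)
        item := newItem(priv, "2023-07-05T12:00:00Z", "Hello Yarn.social v2!")
        out, _ := json.MarshalIndent(item, "", "  ")
        fmt.Println(string(out))
    }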


I played around with parsers. This time I experimented with parser combinators for twt message text tokenization. Basically, extract mentions, subjects, URLs, media and regular text. It’s kinda nice, although my solution is not completely elegant, I have to say. Especially my communication protocol between different steps for intermediate results is really ugly. Not sure about performance, I reckon a hand-written state machine parser would be quite a bit faster. I need to write a second parser and then benchmark them.

lexer.go and newparser.go resemble the parser combinators: https://git.isobeef.org/lyse/tt2/-/commit/4d481acad0213771fe5804917576388f51c340c0 It’s far from finished yet.

The first attempt in parser.go doesn’t work, as my backtracking is not accounted for; I only noticed later that I have to do that. With twt message texts there is no real error in parsing, just regular text as a “fallback”. So it works a bit differently than parsing a real language. No error reporting required, except maybe for debugging. My goal was to port my Python code as closely as possible. But then the runes in the string gave me a bit of a headache, so I thought I’d just build myself a nice reader abstraction. When I noticed the missing backtracking, I then decided to give parser combinators a try instead of improving on my lookahead reader. It only occurred to me later that I could have just used a rune slice instead of a string. With that, porting the Python code should have been straightforward.
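
Not the actual tt2 code, just a toy to illustrate the rune-slice point: a tiny hand-written scanner that pulls @<...> mentions out of a twt and falls back to plain text (a poor man’s backtracking) when a mention never closes:

    package main

    import "fmt"

    // Token is a minimal token for this toy tokenizer: either a @<mention> or plain text.
    type Token struct {
        Kind string // "mention" or "text"
        Text string
    }

    // tokenize walks a rune slice so multi-byte characters need no special handling.
    func tokenize(input string) []Token {
        runes := []rune(input)
        var tokens []Token
        i := 0
        for i < len(runes) {
            if runes[i] == '@' && i+1 < len(runes) && runes[i+1] == '<' {
                start := i
                for i < len(runes) && runes[i] != '>' {
                    i++
                }
                if i < len(runes) { // found the closing '>'
                    i++
                    tokens = append(tokens, Token{"mention", string(runes[start:i])})
                    continue
                }
                i = start // no '>': backtrack and treat the '@' as ordinary text
            }
            start := i
            for i < len(runes) && runes[i] != '@' {
                i++
            }
            if i == start {
                i++ // a lone '@' that isn't a mention
            }
            tokens = append(tokens, Token{"text", string(runes[start:i])})
        }
        return tokens
    }

    func main() {
        for _, t := range tokenize("@<lyse https://lyse.isobeef.org/twtxt.txt> nice parser! 🎉") {
            fmt.Printf("%-8s %q\n", t.Kind, t.Text)
        }
    }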

Yeah, all this probably doesn’t make sense unless you look at the code. And even then, you have to learn the ropes a bit. Sorry for the noise. :-)

In-reply-to » @jlj I like your website's look, but I was disappointed to find that 'finger' doesn't seem to actually work. ;-)

You need better pen test scripts. :-) Seriously, the protocol is absurdly simple. Turn it on! Don’t trust any of the implementations? Write your own!
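
It really is; a complete client is not much more than this sketch (querying a@9srv.net as an example):

    package main

    import (
        "fmt"
        "io"
        "net"
        "os"
    )

    func main() {
        // finger is: connect to port 79, send the user name followed by CRLF,
        // then print whatever the server sends back until it closes the connection.
        conn, err := net.Dial("tcp", "9srv.net:79")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Fprint(conn, "a\r\n")
        io.Copy(os.Stdout, conn)
    }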


I want to use something similar to SSB producer tokens as the basis for Novo Atlantis trade protocols. Interfacing with scarcity currencies has been a headache, but I think something like taller, which does delegated settlement, could link coop credits to another currency for foreign trade. The devil is in the details.

In-reply-to » Looking at raw IRC traffic streams to debug a client issue and it's 1997 again.

Indeed! I think the first “network protocol client” I ever wrote was something that just did the PING/PONG part and passed everything else raw.
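
Something in the spirit of that first client could still be this small; a sketch (the server address and nick are just examples):

    package main

    import (
        "bufio"
        "fmt"
        "net"
        "os"
        "strings"
    )

    func main() {
        conn, err := net.Dial("tcp", "irc.libera.chat:6667")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Fprint(conn, "NICK rawbot\r\nUSER rawbot 0 * :raw passthrough\r\n")
        sc := bufio.NewScanner(conn)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "PING") {
                // Answer the keepalive so the server doesn't drop us...
                fmt.Fprintf(conn, "PONG%s\r\n", strings.TrimPrefix(line, "PING"))
                continue
            }
            fmt.Println(line) // ...and pass everything else through raw.
        }
    }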

In-reply-to » My finger server now includes the last post from tw that doesn't have a subject. 'finger a@9srv.net'

No, totally not useful. 🤣 I mean, the finger protocol is pretty trivial, and it’d be fun to add, but doesn’t replace anything you’re doing.
