@prologic@twtxt.net well…
how would that work exactly?
To my limited knowledge, Keyoxide is an open source project offering different tools for verifying one's online persona(s). That's done by either A) creating an Ariadne Profile using the web interface or a CLI, or B) just using your GPG key. Either way, you add Identity claims to your different profiles, links and whatnot, and finally advertise your profile… Then there is a second set of Mobile/Web clients and CLI your correspondents can use to check your identity claims. I think of them like the front-ends of GPG Keyservers (which Keyoxide leverages for verification when you opt for the GPG Key method), where you verify profiles using links, Key IDs and Fingerprints…
Who maintains the Keyoxide site? Is it centralized, or decentralized and can it be relied upon?
- Maintainers? Definitely not me, but here's their Git stuff and OpenCollective page…
- Both ASP and Keyoxide Webtools can be self-hosted. I don't see a central authority here… Plus, as mentioned on their FAQ page, the whole process can be done manually, so you don't have to rely on anyone or anything if you don't want to; the whole thing is just another tool for convenience (with a bit of eye candy).
Does that mean then that every user is required to have a Keyoxide profile?
Nope. But it looks like a nice option to prove that I'm the same person to whomever it may concern if I ever change my Twtxt URL, host/join a yarn pod, or if I reach out on other platforms to someone I've met in here. Otherwise I'm just happy exchanging GPG keys or confirming the change IRL at a coffee shop or something.
@sorenpeter@darch.dk There was a client that would generate a unique hash for each twt. It didn't get wide adoption.
Interesting… QUIC isn't very quick over fast internet.
QUIC is expected to be a game-changer in improving web application performance. In this paper, we conduct a systematic examination of QUIC's performance over high-speed networks. We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases. We observe this issue on lightweight data transfer clients and major web browsers (Chrome, Edge, Firefox, Opera), on different hosts (desktop, mobile), and over diverse networks (wired broadband, cellular). It affects not only file transfers, but also various applications such as video streaming (up to 9.8% video bitrate reduction) and web browsing. Through rigorous packet trace analysis and kernel- and user-space profiling, we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs. We make concrete recommendations for mitigating the observed performance issues.
So this is a great thread. I have been thinking about this too… and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it's almost as if I have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash scheme in place; changing that now is basically a no-go. But we can create a signature chain that links identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL.
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid for the period in which they were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So, say, we could use WebFinger to point to the signature file? You'd have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it's all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping a single self-contained file… but I wonder if that would cause lots of clutter? The signature chain would be something like a log of what is changing (new key, revoke, add URL) and a signature of the change plus the previous signature.
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
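A minimal sketch of how a client might fold such a chain log into one identity (keys plus URLs), assuming the entry format shown above. Actual signature verification (saltpack or otherwise) is left as a stub here, so this is an illustration of the idea rather than an implementation.

```python
# Sketch: fold "# chain:" entries from a feed preamble into one identity.
# Entry format and verification rules are assumptions based on this thread.
def apply_chain(lines: list[str]) -> dict:
    identity = {"keys": set(), "urls": set()}
    for line in lines:
        if not line.startswith("# chain: "):
            continue
        op, _, arg = line[len("# chain: "):].partition(" ")
        if op == "ADDKEY":
            identity["keys"].add(arg)
        elif op == "REVKEY":
            identity["keys"].discard(arg)
        elif op == "ADDURL":
            identity["urls"].add(arg)
        # TODO: verify the "# sig:" line following each entry against the
        # keys that were valid at that point before applying the change.
    return identity
```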
@prologic@twtxt.net does that mean that for every new post (not replies) the client will have to generate a UUID or similar when posting and add that to the twt?
# follow_notify = gemini://foo/bar to your feed's metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs?
@movq@www.uninformativ.de @prologic@twtxt.net Hey! I may have found a silly trick to announce my following to people hosting their feeds on the Gemini space, using the requested URI itself instead of relying on the User-Agent. I've copied my current feed over to my (to be) Gemlog for testing. And if I do a jenny -D "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt" this happens:
A) As a follower, I get the feed as usual.
B) As the feed owner, I get this in logs:
hostname:1965 - "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt" 20 "text/plain;lang=en-US"
You could do the same for Gopher feeds, but only if you want to announce yourself by throwing an error into their logs; then you'll need a second request to fetch the feed. jenny -D "gopher://gopher.aelaraji.com/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt" gave me this:
gopher.aelaraji.com:70 - [09/Sep/2024:22:08:54 +0000] "GET 0/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt HTTP/1.0" 404 0 "" "Unknown gopher client"
NB: the follower=... string won't appear in Gopher logs after a ?, but if I replace it with a + or a &, it works. There will be a missing / after the https: though. Probably a client thing.
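On the receiving side, here is a rough sketch of pulling those follower= announcements back out of a server's access log. The log line format (and the idea that ?, + or & may precede the parameter) is an assumption modelled on the example output above, not a standardized convention.

```python
# Sketch: extract follower feed URLs announced via query strings
# from a Gemini/Gopher server access log.
import re

FOLLOWER = re.compile(r"""[?&+]follower=([^\s"']+)""")

def followers_from_log(log_path: str) -> set[str]:
    found = set()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            match = FOLLOWER.search(line)
            if match:
                found.add(match.group(1))
    return found
```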
@mckinley@twtxt.net To answer some of your questions:
Are SSH signatures standardized and are there robust software libraries that can handle them? We'll need a library in at least Python and Go to provide verified feed support with the currently used clients.
We already have this. Ed25519 libraries exist for all major languages. Aside from using ssh-keygen -Y sign and ssh-keygen -Y verify, you can also use the salty CLI itself (https://git.mills.io/prologic/salty), and I'm sure there are other command-line tools that could be used too.
If we all implemented this, every twt hash would suddenly change and every conversation thread we've ever had would at least lose its opening post.
Yes. This would happen, so we'd have to make a decision around this: either a) a cut-off point, or b) some way to progressively transition.
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the URL (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every URL in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one URL to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there, since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
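A sketch of that idea, assuming the current Twt Hash construction (blake2b over URL, timestamp and content, base32-encoded, last 7 characters); the exact recipe should be double-checked against the extension spec before relying on it.

```python
# Sketch: compute one twt hash per feed URL so threads still match
# during a transition period.
import base64
import hashlib

def twt_hash(url: str, timestamp: str, content: str) -> str:
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return encoded[-7:]

def all_hashes(urls: list[str], timestamp: str, content: str) -> set[str]:
    # One hash per URL the feed has ever been published under.
    return {twt_hash(u, timestamp, content) for u in urls}
```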
@bender@twtxt.net Yes, they do! Implicitly, or threading would never work at all, nor lookups. They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do), and I believe we would just encounter other problems by doing so.
My money is on extending the Twt Subject extension to support more (optional) advanced "subjects"; i.e. indicating you edited a Twt you already published in your feed, as @falsifian@www.falsifian.org indicated.
Then we have a secondary (but much rarer) problem of the "identity" of a feed in the first place. Using the URL you fetch the feed from, as @lyse@lyse.isobeef.org's client tt seems to do, or using the # url = metadata field as every other client does (according to the spec), is problematic when you decide to change where you host your feed. In fact the spec says:
Users are advised to not change the first one of their urls. If they move their feed to a new URL, they should add this new URL as a new url field.
See Choosing the Feed URL. This is one of our longest debates and challenges, and I think (I suspect along with @xuu@txt.sour.is) that the right way to solve this is to use public/private key(s), where you actually have a public key fingerprint as your feed's unique identity that never changes.
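Purely as an illustration of that idea, here is one possible fingerprint derivation (SHA-256 of the raw Ed25519 public key, base32-encoded). The actual scheme is undecided, so treat this as a sketch of the shape of the thing, not a proposal.

```python
# Sketch: derive a stable feed identity from a public key.
import base64
import hashlib

def feed_identity(ed25519_public_key: bytes) -> str:
    digest = hashlib.sha256(ed25519_public_key).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")
```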
@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original tweet with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".
What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.
I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
But the great thing about the current system is that nobody can spoof message IDs.
I don't think twtxt hashes are long enough to prevent spoofing.
@bender@twtxt.net On twtxt, I follow all feeds that I can find (there are some exceptions, of course). There's so little going on in general, it hardly matters.
And I just realized: Mutt's layout helps a lot. Skimming over new twts is really easy and it's not a big loss if there are a couple of shitposts™ in my "timeline". This is very different from Mastodon (both the default web UI and all clients I've tried), where the timeline is always huge. Posts take up a lot of space on screen. Makes me think twice if I want to follow someone or not.
(I mostly only follow Hashtags on Mastodon anyway. It's more interesting that way.)
@cuaxolotl@sunshinegardens.org Ah, thanks for reporting back! Okay, so you're basically manually "crawling" feeds right now. What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed's metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs?
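A sketch of what a following client could do with such a field: make a single Gemini request to the follow_notify URL, carrying the follower's feed URL as the query string so it shows up in the feed owner's logs. The query-parameter convention is an assumption; only the basic Gemini request framing is standard.

```python
# Sketch: ping a "# follow_notify = gemini://..." URL with the follower's feed URL.
import socket
import ssl
from urllib.parse import urlparse, quote

def ping_follow_notify(notify_url: str, follower_feed_url: str) -> None:
    target = f"{notify_url}?follower={quote(follower_feed_url, safe='')}"
    parsed = urlparse(notify_url)
    context = ssl.create_default_context()
    context.check_hostname = False          # Gemini servers commonly use
    context.verify_mode = ssl.CERT_NONE     # self-signed certs (TOFU).
    with socket.create_connection((parsed.hostname, parsed.port or 1965)) as raw:
        with context.wrap_socket(raw, server_hostname=parsed.hostname) as tls:
            tls.sendall((target + "\r\n").encode("utf-8"))
            tls.recv(1024)  # read (and discard) the response header
```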
Anyone had any interactions with @cuaxolotl@sunshinegardens.org yet? Or are they using a client that doesn't know how to detect clients following them properly? Hmmm.
early preview of my new web-based twtxt client https://sunshinegardens.org/static/howl/
@aelaraji@aelaraji.com Ahh I see! Interesting. Would you prefer that clients like yarnd prefetch resources like this, cache them and serve the cached copy?
@bender@twtxt.net My index formatting is intact, probably because I still haven't figured out how to set up my terminal to show RTL text correctly! But hey, that won't be a problem anymore, I don't feel like twting in Arabic. Sorry for the inconvenience.
@lyse@lyse.isobeef.org ah, if only you were to finally clean up that code, and make that client widely available…! One can only dream, right? :-)
@quark@ferengi.one this is what I see:
Correct, @bender@twtxt.net. Since the very beginning, my twtxt flow has been very flawed. But it turns out to be an advantage for this sort of problem. :-) I still use the official (but patched) twtxt client by buckket to actually fetch and fill the cache. I think one of the patches played around with the error reporting. This way, any problems with fetching or parsing feeds show up immediately. Once I think I've seen enough errors, I unsubscribe.
tt is just a viewer into the cache. The read statuses are stored in a separate database file.
It also happened a few times that I thought some feed was permanently dead and removed it from my list. But then others mentioned it, so I resubscribed.
@falsifian@www.falsifian.org @bender@twtxt.net I'd certainly hate my client for automatic feed unsubscription, too.
@bender@twtxt.net Based on my experience so far, as a user, I would be upset if my client dropped someone from my follower list, i.e. stopped fetching their feed, without me asking for that to happen.
receieveFile())?
@prologic@twtxt.net I don't know if this is new, but I'm seeing:
Jul 25 16:01:17 buc yarnd[1921547]: time="2024-07-25T16:01:17Z" level=error msg="https://yarn.stigatle.no/user/stigatle/twtxt.txt: client.Do fail: Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" error="Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)"
I no longer see twts from @stigatle@yarn.stigatle.no at all.
I haven't seen any emails about the outage at work. I know I have the Mac CrowdStrike client though. My buddy who works at a hospital says they weren't affected.
Referer is /post, then consider that total bullshit, and ignore?
@prologic@twtxt.net I was wondering if my reverse proxy could cause something, but it's pretty standard…
server {
    listen 80;
    server_name we.loveprivacy.club;

    location / {
        return 301 https://$host$request_uri;
        #proxy_pass http://127.0.0.1:8000;
    }
}

server {
    listen 443 ssl http2;
    server_name we.loveprivacy.club;

    ssl_certificate /etc/letsencrypt/live/we.loveprivacy.club/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/we.loveprivacy.club/privkey.pem;

    client_max_body_size 8M;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
@aelaraji@mastodon.social @aelaraji@aelaraji.com Aw, thanks. I should install a twtxt client, I guess… But I prefer RSS, so for now, it's just a twtxt2atom script doing the job :)
Well! My 24 hrs without a GUI Web browser was quite a nice experience.
As a matter of fact, and as long as I'm not doing any 3D work, I kind of don't need GUI applications as much as it feels like I do.
Even so, a couple of websites asked me to eff off because they need JavaScript to work. Some others handed me a cold "402 Upgrade Required" client error response… (LOL, let's not even talk about how GitHub repos looked and felt). I have managed to fix a couple of things I've been meaning to for quite some time but never got to, mainly because of my browsing habits. I tend to open a lot of tabs, read some, get distracted, then open some more, and down the rabbit hole (or shall I say tabs) I go.
All in all, it was quite a nice experience.
How nice? It was an "I'm dropping into a full TTY experience for another 24 hrs" kind of nice!
Although I already miss using a mouse; but hey, I would never have heard about gpm(8) otherwise.
It's quite nice. I have been half tempted to make a twtxt client with it.
At last! My Twtxt feed is up and running and I can post to it from a remote client! Yey!
Hey @sorenpeter@darch.dk, I'm sorry to tell you, but the prev field in your feed's headers is invalid. First, it doesn't include the hash of the last twt in the archive. Second, and that's probably more important, it forms an infinite loop: the prev field of your main feed specifies http://darch.dk/twtxt-archive.txt and that file then again specifies http://darch.dk/twtxt-archive.txt. Some clients might choke on this, mine for example. I'll push a fix soon, though.
For reference, the prev field is described here: https://dev.twtxt.net/doc/archivefeedsextension.html
My coworker started chatting over wall this morning as we were both on the same server investigating something… it's the best chat client, haha.
Not making THREADING the default view of e-mail clients and thus teaching users that e-mail is "chaotic" (if you get a lot of mail, it becomes unusable without threading) and "needs" full quoting all the time was one of the worst mistakes ever.
#gemini readers, I wrote a tool to download new gemfeeds entries instead of opening a client: gemini://si3t.ch/log/2024-02-28-gemfeeds-downloader.txt
Seriously, where is the suckless-style Nostr client?
> ?
@sorenpeter@darch.dk this makes sense as a quote twt that references a direct URL. If we go back to how it developed on Twitter, originally it was RT @nick: original text; because it contained the original text, the Twitter algorithm would boost that text into trending.
I like the format (#hash) @<nick url> > "Quoted text"\nThen a comment as it preserves the human-readable text and has the hash for linking to the yarn. The comment part could be optional for just boosting the twt. The only issue I think I would have is that the yarn could then become a mess of repeated quotes, unless the client knows to interpret them as multiple users having reposted/boosted the thread.
The format is also how iPhone does reactions to SMS messages, with +number liked: original SMS.
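To show the format stays machine-readable, here is a tiny parsing sketch. The exact regex is an assumption based on the example above, not a spec.

```python
# Sketch: parse the proposed quote-twt format
#   (#hash) @<nick url> > "Quoted text"\nThen a comment
import re

QUOTE = re.compile(
    r'^\(#(?P<hash>\w+)\)\s+@<(?P<nick>\S+)\s+(?P<url>\S+)>\s+>\s+"(?P<quote>[^"\n]*)"'
    r'(?:\n(?P<comment>[\s\S]*))?$'
)

def parse_quote_twt(text: str):
    match = QUOTE.match(text)
    return match.groupdict() if match else None
```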
> ?
I'm also more in favor of #reposts being human readable and writable. A client might implement a button that posts something simple like: #repost Look at this cool stuff, because bla bla [alt](url). This will then make it possible to also "repost" stuff from other platforms/protocols.
The reader part of a client can then render a preview of the link, which we talked about would be a nice (optional) feature to have in yarnd.
A twt to test my new client, which should show me the number of published twts.
Anyone else working with Mac OS (work), Windows (client project) and Fedora (private) on the same day, almost every day?
I am back on twtxt for now. I am using the twtwt client. I don't think it does replies, so I should try jenny with mutt again.
I've been thinking of how to notify someone else that you've replied to their twts.
Is there something already developed, for example on yarn.social?
Let's say I want to notify https://sour.is/tiktok/America/Denver.txt that I've replied to some twt. They don't follow me back, so they won't see my reply.
I would send my URL to, say, https://sour.is/tiktok/replies?url=MY_URL and they'll check that I have a reply to some of their twts, and could decide to follow me back (after seeing my twtxt profile, to avoid spam).
Another option could be having a metadata field like
follow-request=https://sour.is/tiktok/America/Denver.txt TIMESTAMP_IN_SECONDS
that the other client has to look for, to ensure that the request comes from that URL (again, to avoid spam)
This could be deleted after the other .txt has your URL in the follow list, or auto-expire after X days to clean-up old requests.
What do you think?
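A sketch of the first variant: pinging a hypothetical replies endpoint with your feed URL. The endpoint and parameter name are taken from the example in this twt and do not exist as a real API today.

```python
# Sketch: tell another feed's (hypothetical) "replies" endpoint about your feed URL
# so its owner can check for replies and maybe follow you back.
from urllib.parse import urlencode
from urllib.request import urlopen

def notify_reply(replies_endpoint: str, my_feed_url: str) -> int:
    query = urlencode({"url": my_feed_url})
    with urlopen(f"{replies_endpoint}?{query}", timeout=10) as response:
        return response.status  # e.g. 200 if the hint was accepted
```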
@adi@twtxt.net I think it is, and one benefit they have is that you can add third-party repositories to the F-Droid app as you discover them. So, for instance, if you know of a developer who pushes builds to an F-Droid compatible repository, you can add that to your F-Droid app and start tracking updates like you would for any other app in there. Can't do that with Google Play!
F-Droid tends to focus on open source applications that can be built in a reproducible way, which limits the inventory (though of course tends to mean the apps are safer and don't spy on you). There are non-free apps in there as well but they come with warnings so you're informed about what you might be sacrificing by using them.
That said, if you have a favorite app you get through Google Play, there's a decent chance it won't be in F-Droid. Many "big corporate" apps aren't, and vendor-specific apps tend not to be either. But for most of the major functions you might want, like email clients, calendar apps, weather apps, etc., there are very good substitutes now in F-Droid. You're definitely making a trade-off though.
What I did was go through the apps I had installed on my last phone, found as many substitutes in F-Droid as I could, started using those instead to see how they worked, and bit by bit replaced as much as I could from Google Play with a comparable app from F-Droid. I still have a few apps (mostly vendor-specific things that don't have substitutes) that come from Google Play but I'm aiming to be rid of those before I need to replace this phone.
@jmjl@tilde.green I'm sorry that I'm not super knowledgeable about alternatives to jmp.chat but I'll tell you what I know.
You're probably right about jmp.chat not working for you, at least as it is now. You can only get US and Canadian phone numbers through it last time I checked, so if you're not in either of those countries you'd be making international calls all the time and people who wanted to call you would be making international calls too.
I've seen people talk about using SIP as an intermediary: you can bridge SIP-to-XMPP, and bridge SIP-to-PSTN (PSTN = "public switched telephone network", meaning normal telephone). You can skip the SIP-to-XMPP side if you're comfortable using a SIP client. I don't know very much about SIP or PSTN so I am not sure what to recommend, but perhaps this helps your search queries.
There are a fair number of services like TextNow that let you sign up for a real telephone number that you can then use via their app (I wouldn't use TextNow, though; they had tons of spyware in their app). I don't know if that kind of service works for you, but if it does, perhaps you'd be able to find one of them that isn't horrible. This page (https://alternativeto.net/software/jmp-chat/) has a bunch of alternatives; I can't vouch for any of them but maybe it's a starting point if you want to go this route.
Good luck!
Yep, that's right, we have to use these tools in a proper way; the terminal is not a friendly tool for this kind of stuff on mobile devices, and web interfaces are prepared to give us a comfortable space.
Btw, I'm waiting for your PHP-based client, no pressure…
¿Qué seguirå para este cliente de Twtxt?
- Agregar RSS (para que otras personas puedan seguirlo en su cliente favorito)
- Agregar hilos (para dar seguimiento a futuras contestaciones)
- Soporte para Gemtext y Gemini (para la comunidad de Smol net)
ÂżTĂș que diceS?
¿Qué seguirå para este cliente de Twtxt?
- Agregar RSS (para que otras personas puedan seguirlo en su cliente favorito)
- Agregar hilos (para dar seguimiento a futuras contestaciones)
- Soporte para Gemtext y Gemini (para la comunidad de Smol net)
ÂżTĂș que diceS?
As a design exercise, what would happen if we did login with just a dynamic TOTP code?
What I've found is that many clients limit it to 6, and at most 8 or 10, characters.
Maybe something with 12 or 16 digits (similar to a credit card, as it is often described) would add security.
Here are some interesting formulas for predicting the probability of a brute-force attack, depending on the number of digits.
https://security.stackexchange.com/questions/185905/maximum-tries-for-2fa-code#185917
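A quick back-of-the-envelope check of why more digits help, assuming an attacker gets n guesses per code validity window against a d-digit numeric code; this is a simplification of the formulas linked above.

```python
# Sketch: success probability of brute-forcing a d-digit code within one window.
def guess_probability(digits: int, attempts: int) -> float:
    return attempts / 10 ** digits

for digits in (6, 8, 12, 16):
    # e.g. 10 attempts before lock-out / code rotation
    print(digits, guess_probability(digits, attempts=10))
```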
I've been neglecting my twtxt quite a bit. I was reminded of it a little while thinking about adding Passkeys to the pensadero:
https://pensadero.eapl.mx
I have a WebAuthn implementation (without Client-side discoverable Credentials) and I keep itching to implement them.
Q: How do we feel about forking the Twtxt spec into what we love and use today in Yarn.social in yarnd, tt, jenny, twtr and other clients? Thinking (and talking with @xuu@txt.sour.is on IRC) about the possibility of rewriting a completely new spec (no extensions). Proposed name: yarn.txt or "Yarn". Compatibility would remain with Twtxt in the sense that we wouldn't break anything per se, but we'd divorce ourselves from Twtxt and be free to improve based on the needs of the community and not the ideals of those that don't use it, don't contribute in the first place, or fixate on nostalgia (which doesn't really help anyone).
I'm not a super fan of using JSON. I feel we could still use text as the medium. Maybe a modified version to fix any weaknesses.
What if, instead of signing each twt individually, we generated a merkle tree using the twt hashes? Then a signature of the root hash. This would ensure the full stream of twts is intact with minimal overhead, with the added bonus of helping clients identify missing twts when syncing/gossiping.
Have two endpoints. One is the WebFinger, to link profile details and avatar like you posted, plus the signature for the merkle-root twt. The other is a pageable stream of twts, or individual twts/merkle branches, to incrementally access twt feeds.
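A sketch of building such a merkle root over twt hashes in feed order, so that only the root needs signing. The pairing and odd-node duplication rules here are assumptions a real spec would have to pin down.

```python
# Sketch: merkle root over an ordered list of twt hashes.
import hashlib

def merkle_root(twt_hashes: list[str]) -> str:
    level = [h.encode("utf-8") for h in twt_hashes]
    if not level:
        return ""
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [
            hashlib.blake2b(level[i] + level[i + 1], digest_size=32).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()
```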
Quick 'n Dirty prototype Yarn.social protocol/spec:
If we were to decide to write a new spec/protocol, what would it look like?
Here's my rough draft (back-of-paper-napkin idea):
- Feeds are JSON file(s) fetchable by standard HTTP clients over TLS
- WebFinger is used at the root of a user's domain (or multi-user) for lookup, e.g. prologic@mills.io -> https://yarn.mills.io/~prologic.json
- Feeds contain similar metadata to what we're familiar with: Nick, Avatar, Description, etc.
- Feed items are signed with an ED25519 private key. That is, all "posts" are cryptographically signed.
- Feed items continue to use content-addressing, but use the full Blake2b Base64 encoded hash.
- Edited feed items produce an "Edited" item so that clients can easily follow edits.
- Deleted feed items produce a "Deleted" item so that clients can easily delete cached items.
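A rough illustration of the signing and content-addressing bullets above, using PyNaCl for Ed25519 and hashlib for Blake2b. The field names and the exact bytes that get signed and hashed are assumptions for this sketch, not part of the draft.

```python
# Sketch: create a signed, content-addressed feed item.
# Requires PyNaCl (pip install pynacl).
import base64
import hashlib
import json

from nacl.signing import SigningKey

def make_item(signing_key: SigningKey, created: str, content: str) -> dict:
    payload = f"{created}\n{content}".encode("utf-8")
    item_id = base64.b64encode(
        hashlib.blake2b(payload, digest_size=32).digest()
    ).decode("ascii")
    signature = base64.b64encode(signing_key.sign(payload).signature).decode("ascii")
    return {"id": item_id, "created": created, "content": content, "sig": signature}

feed_key = SigningKey.generate()
print(json.dumps(make_item(feed_key, "2024-09-20T12:00:00Z", "Hello, Yarn!"), indent=2))
```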