Searching yarn

Twts matching #twtxt.txt
In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts disappeared, others were marked to be deleted in mutt. So I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh pull. I pulled feeds as usual. Now like HALF the twts aren't there 😂 even my last reply. WTF IS GOING ON? 🤣🤣🤣

@aelaraji@aelaraji.com no, it is not just you. Do fetch the parent with jenny, and you will see there are two messages with different hashes. Soren did something funky, for sure.

In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts disappeared, others were marked to be deleted in mutt. So I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh pull. I pulled feeds as usual. Now like HALF the twts aren't there 😂 even my last reply. WTF IS GOING ON? 🤣🤣🤣

@quark@ferengi.one here is an example: This Thread is not showing up in Mutt 🤔 Something is off!

I’ll set up jenny and mutt on another computer and see how it goes from there.

In-reply-to » Something odd just happened to my twtxt timeline... A bunch of twts disappeared, others were marked to be deleted in mutt. So I nuked my whole twtxt Maildir and deleted my ~/.cache/jenny in order to start with a fresh pull. I pulled feeds as usual. Now like HALF the twts aren't there 😂 even my last reply. WTF IS GOING ON? 🤣🤣🤣

@aelaraji@aelaraji.com hmm, I see all of your twtxts just fine. Now, that’s a puzzle!

In-reply-to » The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already have the timestamp in the twtxt.txt, it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

@sorenpeter@darch.dk

Also, what are the chances that the same human will make two different posts within the same second?!

Just out of curiosity: what would happen if, someday, I (maybe trolling) edit my twtxt.txt file manually and switch/swap a couple of twt timestamps, or add in 3 different twts manually with the same timestamp?

In-reply-to » The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already have the timestamp in the twtxt.txt, it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

@mckinley@twtxt.net Thanks for the feedback.

  1. Yeah, I agree that the nick should not be part of the syntax. Any valid URL to a twtxt.txt file should be enough and is clearer, so it is not confused with an email address (one of the issues with webfinger and fediverse handles).
  2. I think any valid URL would work, since we are not bound to look for exact matches. Accepting both http and https, as well as gemini and gopher, could all work as long as the path to the twtxt.txt is the same.
  3. My idea is that you quote the timestamp as it is in the original twtxt.txt that you are referring to, so you can do it by simply copy/pasting. Also, what are the chances that the same human will make two different posts within the same second?!

Regarding the whole cryptographic-keys-for-identity thing: to me it seems like an unnecessary layer of complexity. If you move to a new house or city you tell people that you moved; you can do the same in a twtxt.txt. Just post something like “I moved to this new URL, please follow me there!” I did that with my feeds at least twice, and you guys still seem to read my posts :)

In-reply-to » @prologic Some criticisms and a possible alternative direction:

The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already have the timestamp in the twtxt.txt, it would be somewhat trivial to parse. But there is still the issue of what the name/id should be… Maybe it doesn’t have to be that strict?

Instead of using tag: as the prefix/protocol, it would make it clearer what we are talking about to use in-reply-to: (https://indieweb.org/in-reply-to) or replyto:, similar to mailto:

  1. (reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
  2. (in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
  3. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I know it’s longer than 7-11 characters, but it’s self-explanatory when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*\:
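
For a quick sanity check, here is a small Python sketch (illustrative only; the candidate strings are just the three examples above) showing that the regex catches all three forms:

import re

pattern = re.compile(r'\([\w-]*reply[\w-]*\:')

for candidate in [
    "(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)",
    "(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
    "(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
]:
    print(candidate, "->", bool(pattern.match(candidate)))  # all True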

Is this something that would work?

In-reply-to » @sorenpeter !! I freaking love your Timeline ... I kind of have a justified PHP phobia 😅 but I'm definitely thinking about giving it a try!

Thank you @aelaraji@aelaraji.com, I’m glad you like it. I use PHP because it’s everywhere on cheap hosting, and there is no need for the user to log into a terminal to set it up. Timeline is not meant to be used locally. For that I think something like twtxt2html is a better fit. (And happy to see you using simple.css on your new log page ;)

In-reply-to » @prologic Some criticisms and a possible alternative direction:

@falsifian@www.falsifian.org TLS won’t help you if you change your domain name. How will people know if it’s really you? Maybe that’s not the biggest problem for something with such low stakes as twtxt, but it’s a reasonable concern that could be solved using signatures from an unchanging cryptographic key.

This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn’t matter where you get the note from, your client can verify its authenticity. That way, relays don’t need to be trusted.
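
As a rough illustration of that model (a sketch only: real Nostr signs a serialized event with secp256k1 Schnorr keys; Ed25519 stands in here so the snippet runs with just the PyCA cryptography package):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author = Ed25519PrivateKey.generate()
note = b"a note fetched from some untrusted relay"
sig = author.sign(note)

# Knowing only the author's public key, any client can check authenticity,
# no matter which relay delivered the note; the relay itself needs no trust.
author.public_key().verify(sig, note)  # raises InvalidSignature if forged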

In-reply-to » @prologic earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.

@prologic@twtxt.net Brute force. I just hashed a bunch of versions of both tweets until I found a collision.

I mostly just wanted an excuse to write the program. I don’t know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.


@prologic@twtxt.net earlier you suggested extending hashes to 11 characters, but here’s an argument that they should be even longer than that.

Imagine I found this twt one day at https://example.com/twtxt.txt :

2024-09-14T22:00Z Useful backup command: rsync -a “$HOME” /mnt/backup


and I responded with “(#5dgoirqemeq) Thanks for the tip!”. Then I’ve endorsed the twt, but it could later get changed to

2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory


which also has an 11-character base32 hash of 5dgoirqemeq. (I’m using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I’m taking 11 characters instead of 7 from the end of the base32 encoding.)
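
For reference, here is a Python sketch of that construction as I understand the current twt hash spec (assumptions: blake2b-256 over "url\ntimestamp\ncontent", base32 without padding, lowercased, keeping the last n characters):

import base64
import hashlib

def twt_hash(url: str, timestamp: str, content: str, n: int = 11) -> str:
    # "<url>\n<timestamp>\n<content>", hashed and base32-encoded;
    # n = 7 is what clients use today, n = 11 in this example.
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-n:]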

That’s what I meant by “spoofing” in an earlier twt.

I don’t know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the “two times” is because of the birthday paradox.

Side note: current hashes always end with “a” or “q”, which is a bit wasteful, since the last character of the base32 encoding of a 256-bit digest carries only a single bit. Maybe we should take the first N characters of the base32 encoding instead of the last N.

Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
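
A back-of-envelope check of those numbers (plain arithmetic, nothing authoritative):

import math

# 11 base32 characters carry 11 * 5 = 55 bits, so by the birthday bound a
# collision is expected after roughly 2**(55/2), i.e. about 1.9e8 attempts.
# The 43394987 hashes above (~2^25.4) found one somewhat earlier than that.
print(2 ** (11 * 5 / 2))    # ~1.9e8
print(math.log2(43394987))  # ~25.4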

In-reply-to » Interesting.. QUIC isn't very quick over fast internet.

@prologic@twtxt.net

They’re in Section 6:

  • Receiver should adopt UDP GRO. (Something about saving CPU when processing UDP packets; I’m a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.

  • Some other receiver-side suggestions: “sending delayed QUIC ACKs”; “using recvmsg to read multiple UDP packets in a single system call”.

  • Use multiple threads when receiving large files.

In-reply-to » @prologic Some criticisms and a possible alternative direction:

@mckinley@twtxt.net

HTTPS is supposed to do [verification] anyway.

TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn’t, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you’re getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can’t verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don’t mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

feed locations [being] URLs gives some flexibility

It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I’m not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I’m also not very familiar with IPFS or IPNS.

I haven’t been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I’m too lazy to change how I publish my feed :-)

In-reply-to » @aelaraji how would that work exactly? Does that mean then that every user is required to have a Keyoxide profile? Who maintains the Keyoxide site? Is it centralized or decentralized? Can it be relied upon?

@prologic@twtxt.net well…

how would that work exactly?

To my limited knowledge, Keyoxide is an open source project offering different tools for verifying one’s online persona(s). That’s done by either A) creating an Ariadne Profile using the web interface or a CLI, or B) just using your GPG key. Either way, you add identity claims to your different profiles, links and whatnot, and finally advertise your profile … Then there is a second set of mobile/web clients and CLIs your correspondents can use to check your identity claims. I think of them like the front-ends of GPG keyservers (which Keyoxide leverages for verification when you opt for the GPG key method), where you verify profiles using links, key IDs and fingerprints…

Who maintains the Keyoxide site? Is it centralized or decentralized? Can it be relied upon?

  • Maintainers? Definitely not me, but here’s their Git stuff and OpenCollective page
  • Both ASP and Keyoxide Webtools can be self-hosted. I don’t see a central authority here… + As mentioned on their FAQ page, the whole process can be done manually, so you don’t have to rely on anyone/anything if you don’t want to; the whole thing is just another tool for convenience (with a bit of eye candy).

Does that mean then that every user is required to have a Keyoxide profile?

Nope. But it looks like a nice option to prove that I’m the same person to whom it may concern if I ever change my twtxt URL, host/join a yarn pod, or reach out on other platforms to someone I’ve met on here. Otherwise I’m just happy exchanging GPG keys or confirming the change IRL at a coffee shop or something. 😁

In-reply-to » @bender Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.

@prologic@twtxt.net a signature IS encryption in reverse. If my private key becomes compromised, then they can impersonate me. Being able to manage promotion and revocation of keys is needed even in a system where they are used for just signatures.

In-reply-to » @falsifian In my opinion it was a mistake that the first url field in the feed defines the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the basis for generating the hash, together with the message date and content.

If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.

Okay, what about defining a preferred protocol as part of the hash scheme? So: 1: https, 2: http, 3: gemini, 4: gopher?
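
A hypothetical Python sketch of both ideas (the helper names and the preference order are mine, purely illustrative):

from urllib.parse import urlparse

SCHEME_PREFERENCE = ["https", "http", "gemini", "gopher"]

def feed_key(url: str) -> str:
    # Hash on a scheme-less key so gemini://, https:// and http://
    # variants of the same feed collapse into one identity.
    u = urlparse(url)
    return u.netloc + u.path

def canonical(urls: list[str]) -> str:
    # Pick the most preferred protocol when a feed is reachable over several.
    return min(urls, key=lambda u: SCHEME_PREFERENCE.index(urlparse(u).scheme))

print(feed_key("gemini://gemini.ctrl-c.club/~nristen/twtxt.txt"))
# -> gemini.ctrl-c.club/~nristen/twtxt.txt (the gemini. subdomain still differs)

Note that the subdomain problem above survives this normalization; only the scheme conflicts go away.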

In-reply-to » Interesting.. QUIC isn't very quick over fast internet.

@xuu@txt.sour.is Thanks for the link. I found a PDF on one of the authors’ home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is about the same below roughly 600 Mbps.

To be fair, I don’t think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it’s part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.

In-reply-to » (#mp6ox4a) @cuaxolotl Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

@movq@www.uninformativ.de Damn! I’m two years late to the discussion 😅 So basically, one could just make a bash script/cron job on the side for pinging non-HTTP feeds from time to time, and the receiving end would get it IF they check their logs.

In-reply-to » (#mp6ox4a) @cuaxolotl Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

@prologic@twtxt.net Unfortunately it only works if I pull the feed in debug mode with jenny -D; otherwise, it messes things up when I add that snippet of text to the links in my .config/jenny/follow file 😅 Anyway, it was a nice try.

In-reply-to » (#mp6ox4a) @cuaxolotl Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

@movq@www.uninformativ.de @prologic@twtxt.net Hey! I may have found a silly trick to announce my following to people hosting their feeds on the Gemini space, using the requested URI itself instead of relying on the User-Agent 😂. I’ve copied my current feed over to my (to be) gemlog for testing. If I do a jenny -D "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt", this happens:

A) As a follower, I get the feed as usual.
B) As the feed owner, I get this in logs:

hostname:1965 - “gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt” 20 “text/plain;lang=en-US”

You could do the same for Gopher feeds, but only if you want to announce yourself by throwing an error into their logs; you’ll then need a second request to actually fetch the feed. jenny -D "gopher://gopher.aelaraji.com/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt" gave me this:

gopher.aelaraji.com:70 - [09/Sep/2024:22:08:54 +0000] “GET 0/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt HTTP/1.0” 404 0 “” “Unknown gopher client”

NB: the follower=... string won’t appear in gopher logs after a ?, but if I replace it with a + or a &, it works. There will be a missing / after the https:; probably a client thing.

In-reply-to » On the Subject of Feed Identities; I propose the following:

@mckinley@twtxt.net To answer some of your questions:

Are SSH signatures standardized and are there robust software libraries that can handle them? We’ll need a library in at least Python and Go to provide verified feed support with the currently used clients.

We already have this. Ed25519 libraries exist for all major languages. Aside from using ssh-keygen -Y sign and ssh-keygen -Y verify, you can also use the salty CLI itself (https://git.mills.io/prologic/salty), and I’m sure there are other command-line tools that could be used too.
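
For instance, a minimal sketch with PyNaCl (the library choice is an assumption; ssh-keygen -Y sign / -Y verify or any other Ed25519 implementation would do the same job):

import nacl.signing

key = nacl.signing.SigningKey.generate()
feed = b"# url = https://example.com/twtxt.txt\n2024-09-15T12:06:27Z\thello\n"
signed = key.sign(feed)        # signature prepended to the message
key.verify_key.verify(signed)  # raises BadSignatureError if tampered with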

If we all implemented this, every twt hash would suddenly change and every conversation thread we’ve ever had would at least lose its opening post.

Yes. This would happen, so we’d have to make a decision around this, either a) a cut-off point or b) some way to progressively transition.

In-reply-to » @falsifian In my opinion it was a mistake that the first url field in the feed defines the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

How little data is needed for generating the hashes? Instead of the full URL, can we make do with just the domain (example.net), so we avoid the conflicts between gemini://, https:// and plain http:// (like in my own twtxt.txt)? Or construct something like a webfinger ID, nick@domain (also used by Mastodon etc.), from the domain and nick if present, else use the domain as the nick as well.
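
A hypothetical helper to make the idea concrete (names and behavior are mine, purely illustrative):

from urllib.parse import urlparse

def feed_id(url: str, nick: str = "") -> str:
    # Reduce a feed URL to a webfinger-style ID: nick@domain if a nick
    # is known, else just the domain.
    domain = urlparse(url).netloc
    return f"{nick}@{domain}" if nick else domain

print(feed_id("http://darch.dk/twtxt.txt", "sorenpeter"))  # sorenpeter@darch.dk
print(feed_id("https://example.net/twtxt.txt"))            # example.net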

In-reply-to » @prologic Some criticisms and a possible alternative direction:

@lyse@lyse.isobeef.org This looks like a nice way to do it.

Another thought: if clients can’t agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
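
A sketch of that mitigation (reusing the hash construction from earlier in the thread; all names illustrative):

import base64, hashlib

def twt_hash(url: str, ts: str, content: str, n: int = 7) -> str:
    digest = hashlib.blake2b(f"{url}\n{ts}\n{content}".encode(), digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-n:]

# Index one twt under a hash per advertised URL, so a reply computed
# against any of them still threads correctly.
feed_urls = ["https://example.com/a/twtxt.txt",
             "https://example.com/b/twtxt.txt",
             "gemini://example.com/twtxt.txt"]
ts, content = "2024-09-15T12:06:27Z", "Hello twtxt!"
index = {twt_hash(u, ts, content): (ts, content) for u in feed_urls}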

A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there’s a period when clients are doing different things.

(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don’t know about other clients.)

In-reply-to » @prologic Some criticisms and a possible alternative direction:

@falsifian@www.falsifian.org In my opinion it was a mistake that the first url field in the feed defines the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

# url = https://example.com/alias/txtxt.txt
# url = https://example.com/initial/twtxt.txt
<message 1 uses the initial URL>
<message 2 uses the initial URL, too>
# url = https://example.com/new/twtxt.txt
<message 3 uses the new URL>
# url = https://example.com/brand-new/twtxt.txt
<message 4 uses the brand new URL>

In theory, the same could be done for prepend-style feeds. They do exist; I’ve come across them. The parser would just have to calculate the hashes afterwards and not immediately.
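
A rough parser sketch of that last-encountered-URL rule for append-style feeds (a hypothetical helper, not taken from any client):

def assign_hash_urls(lines):
    # Each twt is hashed with the most recent "# url =" line seen above it.
    url = None
    for line in lines:
        if line.startswith("# url ="):
            url = line.split("=", 1)[1].strip()
        elif line and not line.startswith("#"):
            yield url, line  # (url in effect, twt line) ready for hashing

For a prepend-style feed, the parser would buffer the whole file and resolve the URLs only after reading it, which is what calculating the hashes afterwards amounts to.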

In-reply-to » All this hash breakage made me wonder if we should try to introduce “message IDs” after all. 😅

@movq@www.uninformativ.de Another idea: just hash the feed url and time, without the message content. And don’t twt more than once per second.
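
Sketched under the same assumptions as today’s hashing machinery (blake2b + base32; the function name is mine):

import base64, hashlib

def twt_id(url: str, ts: str) -> str:
    # Identify a twt by feed URL and timestamp only: edits keep their ID,
    # at the price of giving up immutability.
    digest = hashlib.blake2b(f"{url}\n{ts}".encode(), digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]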

Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.

Though I kind of like the idea of twts being immutable. At least, it’s clear which version of a twt you’re replying to (assuming nobody is engineering hash collisions).
