Searching yarn

Twts matching #hash
In-reply-to » @andros I've commented on the ticket: https://git.mills.io/yarnsocial/twtxt.dev/issues/14#issuecomment-19142

True. Though if the idea turns out to be better… then the community will adopt it.

If you look at the subject of that twt, you will see that it uses the extended hash format to include a URL.

⤋ Read More
In-reply-to » (#tbyqv7a) @andros Do edits cause problems? I sometimes make them and didn't realize it may be an issue

@bmallred@staystrong.run I forgot one more effect of edits. If clients remember the read status of messages by hash, an edit will mark the updated message as unread again. To some degree that is even the right behavior, because the message was updated, so the user might want to have a look at the updated version. On the other hand, if it’s just a small typo fix, it’s maybe not worth telling the user about. But the client doesn’t know, at least not without additional logic.
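Purely as a hypothetical illustration (not how any particular client actually stores things), read state keyed by twt hash behaves like this: the edited twt gets a new hash, so it is simply no longer in the set of read hashes.

# Hypothetical sketch: read state keyed by twt hash.
read_hashes = {"abc1234"}                  # hashes the user has already read

def is_unread(twt_hash: str) -> bool:
    return twt_hash not in read_hashes

print(is_unread("abc1234"))  # False: the twt before the edit
print(is_unread("xyz9876"))  # True: the same twt after an edit, under its new hash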

Having said that, it appears that this only affects me personally, no one else. I don’t know of any other client that saves read statuses. But don’t worry about me, all good. Just keep doing what you’ve done so far. I wanted to mention that only for the sake of completeness. :-)

⤋ Read More
In-reply-to » @lyse What do you think about this? https://git.mills.io/yarnsocial/twtxt.dev/issues/14

I like this syntax, you have my vote, although I’d change it a bit, like:
#<Alice https://example.com/twtxt.com#2024-12-18T14:18:26+01:00>

Hashes are not a problem in PHP. I don’t know why it’s slow to calculate them on your side, but I agree with your points.

BTW, did you have the chance to read my proposal on twtxt 2.0? I shared a few ideas about possible improvements to discuss:
https://text.eapl.mx/a-few-ideas-for-a-next-twtxt-version
https://text.eapl.mx/reply-to-lyse-about-twtxt

⤋ Read More
In-reply-to » (#tbyqv7a) @andros Do edits cause problems? I sometimes make them and didn't realize it may be an issue

@bmallred@staystrong.run Any edit automatically changes the twt hash, because the hash is built over the feed URL, message timestamp and message text. https://twtxt.dev/exts/twt-hash.html So, it is only a problem if somebody replied to your original message with the old hash. The original message suddenly doesn’t exist anymore and the reply becomes detached, orphaned, whatever you wanna call it. Threading doesn’t break, though, if nobody replied to your message.
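For reference, here is a rough sketch of the recipe as I read the linked spec (treat the exact details as my assumption and check twtxt.dev if in doubt): join the feed URL, RFC 3339 timestamp and twt text with newlines, hash with blake2b-256, base32-encode in lowercase without padding, and keep the last 7 characters. It also shows why any edit of the text yields a different hash:

import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, content: str) -> str:
    # My reading of https://twtxt.dev/exts/twt-hash.html (an assumption, not gospel).
    payload = "\n".join([feed_url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-7:]

url = "https://example.com/twtxt.txt"
ts = "2024-09-29T13:30:00Z"
print(twt_hash(url, ts, "Hello World!"))           # hash of the original twt
print(twt_hash(url, ts, "Hello World! (edited)"))  # any edit produces a new hash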

⤋ Read More
In-reply-to » (#jzdbrkq) What do you think? Where is the problem?

@andros@twtxt.andros.dev I believe you have just reproduced the bug… it looks like you’ve replied to a twt but the hash is wrong. I can see the hash here from Jenny, but it doesn’t look like it corresponds to any{twt,thing}. If you check it out on any yarn instance it won’t look like a reply.

⤋ Read More

My hypothesis about that thing breaking my twts is that it might have something to do with the parentheses surrounding the root twt hash in the reply twt-A when I reply to it with fork-twt-B; I imagine elisp interpreting those as an s-expression, thus breaking the generation process of hash (#twt-A) before prepending it to fork-twt-B… but then I’m too ignorant to figure out how to test my theory (heck, I couldn’t even recalculate the hashes myself correctly in bash xD). I’ll keep trying tho.

⤋ Read More
In-reply-to » I suspect the problem is that the content is updated. It looks like a design problem.

@andros@twtxt.andros.dev yes, that usually happens when twts get edited and we just made a gentlemen’s agreement to avoid edits as much as possible (at least for the time being). But the thing is, that is not what’s happening with my broken twts’ hashes, since I’ve been mostly replying to my own twts as a test and I know for sure that I haven’t edited any. (I usually fork-reply instead of editing a twt when needed.)

⤋ Read More
In-reply-to » @lyse Where? 🧐

@prologic@twtxt.net Of course you don’t notice it when yarnd only shows at most the last n messages of a feed. As an example, check out mckinley’s message from 2023-01-09T22:42:37Z. It has “[Scheduled][Scheduled][Scheduled]”… in it. This text in square brackets is repeated numerous times. If you search his feed for a closing square bracket followed by an opening square bracket (][), you will find a bunch more of these. It goes without saying that he never typed that into his feed. My client saves each twt hash I’ve explicitly marked read. A few days ago, I got plenty of apparently years-old, yet suddenly unread messages. Each and every single one of them contained this repeated bracketed text thing. The only conclusion is that something messed up the feed again.

⤋ Read More
In-reply-to » @lyse As far as I know, they're still visible in the Web UI. Although, in the mobile app and youtube.com, I believe it tells you that the video isn't available without having to click on it. They don't tell you that in the RSS feed, and I agree; it gets annoying.

Definitely something going on with replies. This one was replying to the wrong twt and even when I got clever and pasted the right hash it didn’t work.

⤋ Read More
In-reply-to » @arne Uuuuhhh, that feels great, good work my dear! :-)

@lyse@lyse.isobeef.org Thanks! Yes, there are still countless knobs to tweak on this thing. I will incorporate your remarks. A mobile view would also be nice. Right now it still feels pretty cramped on a smartphone.

Regarding conversations: Yesterday’s one about encrypted messages was the deciding factor for the implementation. The approaches in “Timeline” and “Yarn” haven’t convinced me so far. But we can all learn something from each other.

⤋ Read More
In-reply-to » I want to share a little idea for a new extension with the goal of adding direct messages in #twtxt https://github.com/tanrax/twtxt-direct-message-extension

Another one would be to allow changing public keys over time (as it may be a good practice [0]). A syntax like the following could help to know what public key you used to encrypt the message, and which private key the client should use to decrypt it:

!<nick url> <encrypted_message> <public_key_hash_7_chars>
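A made-up example of what such a line could look like in a feed (all values are placeholders, not part of any agreed spec):

!<alice https://example.com/twtxt.txt> aGVsbG8sIHR3dHh0IQ== 3f2a9c1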

Also I’d remove support for storing the message as hex, only allowing base64 (more compact, aiming for a minimalistic spec, etc.)

[0] https://www.brandonchecketts.com/archives/its-2023-you-should-be-using-an-ed25519-ssh-key-and-other-current-best-practices

⤋ Read More
In-reply-to » Hello @movq . Did you fixed jenny bug which causes fetching long ids from yarn instances on feeds like https://ciberlandia.pt/@marado.txt ? I'm asking because i want to store links in brackets on some of my posts and don't want to confuse jenny users

@doesnmppsflt@doesnm.p.psf.lt Not sure which bug you’re referring to. 🤔 (Did I forget?)

Those long IDs like (#113797927355322708) are simply part of that feed. Looks like the author just dumps ActivityPub IDs into twtxt. I think this used to work in the past, but the corresponding spec (https://twtxt.dev/exts/hash-tag.html) has been deprecated and jenny doesn’t support it – actually, jenny never supported that.

jenny can only group threads by exactly one criterion (because it writes a Message-ID into the mail file) and that’s the regular twt hash. So, anything else, like people doing “#CoolTopic”, isn’t possible.
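Purely as a generic illustration of how mail threading works (ordinary mail headers, not necessarily jenny’s exact output): a reply lands in the same thread when its reference matches the parent’s Message-ID, which is why only one grouping criterion is possible.

Message-ID: <abc1234@twtxt>        (the parent twt, named by its hash)
In-Reply-To: <abc1234@twtxt>       (on the reply, derived from its (#abc1234) subject)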

⤋ Read More

I’m still making progress with the Emacs client. I’m proud to say that the code that is responsible for reading the feeds is almost finished, including: Twt Hash Extension, Twt Subject Extension, Multiline Extension and Metadata Extension. I’m fine-tuning some tests and will soon do the first buffer that displays the twts.

⤋ Read More
In-reply-to » I've been thinking of a few improvements for the next generation of twtxt spec, let me know if these are useful or interesting :) https://text.eapl.mx/a-few-ideas-for-a-next-twtxt-version

Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)

Metadata on individual twts is too much for me. I do like the simplicity of the current spec. But I understand where you’re coming from.

Numbering twts in a feed is basically the attempt of generating message IDs. It’s an interesting idea, but I reckon it is not even needed. I’d simply use location-based addressing (feed URL + '#' + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
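For example, such a location-based reference might look something like this (illustrative values only):

https://example.com/twtxt.txt#2024-09-29T13:30:00Z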

yarnd actually sends If-Modified-Since request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in Cache.FetchFeeds(…) we can actually see If-Modified-Since being deployed when the feed has been retrieved with a Last-Modified response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278

Turns out etags with If-None-Match are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible etags when fetching feeds.
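For anyone not familiar with the mechanics, here is a minimal sketch of a conditional feed fetch (hypothetical feed URL, standard HTTP validator headers):

import requests

url = "https://example.com/twtxt.txt"

# First fetch: remember the validators the server hands back.
resp = requests.get(url)
last_modified = resp.headers.get("Last-Modified")
etag = resp.headers.get("ETag")

# Later fetch: send them back; a 304 means the feed is unchanged.
headers = {}
if last_modified:
    headers["If-Modified-Since"] = last_modified
if etag:
    headers["If-None-Match"] = etag
resp = requests.get(url, headers=headers)
if resp.status_code == 304:
    print("Feed unchanged, keep the cached copy")
else:
    print("Feed changed, re-parse the body")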

I don’t understand how the discovery URLs should work to replace the User-Agent header in HTTP(S) requests. Do you mind elaborating?

Different protocols are basically just a client thing.

I reckon it’s best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I’d definitely create a different feed for other languages.

Isn’t the emoji thing “just” a client feature? So, feeds do not even have to state any emojis. As a user I’d configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in tt where I assign colors to feeds. On the other hand, what if a user wants to control what symbol should be displayed, similar to the feed’s nick? Hmm. But still, my terminal font doesn’t even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be only a client feature.

⤋ Read More

Similar to data packets in NDN, each message has multiple names: a true name, which is an encoded cryptographic hash of the file itself. We call this kind of information self-certifying. Given a true name, you can find a file and verify its integrity. Additionally, agents can associate a self-certifying name with a pet name or subjective label of their choosing and share it with their friends/peers. Zooko’s triangle can suck it. gemini://sunshinegardens.org/~xjix/wiki/cryptogen–specification/

⤋ Read More
In-reply-to » @prologic I’m sure you can somehow install something that calculates blake2b on OpenBSD. But it’s not part of the base system as a standalone CLI tool, there only appear to be Perl modules for it. The other SHA tools do exist.

@movq@www.uninformativ.de I’m sorry if I sound too contrarian. I’m not a fan of using an obscure hash as well. The problem is that of future and backward compatibility. If we change to sha256 or another algorithm, we don’t just need to support sha256, but now need to support both sha256 AND blake2b. Or we divide the community: users of some clients will still use the old algorithm and get left behind.

Really we should all think hard about how changes will break things and if those breakages are acceptable.

⤋ Read More
In-reply-to » @prologic I wanted to wait for things to settle down. It’s still unclear to me in which direction we’re going – and if that new/different stuff is even possible to implement in jenny. That said, I’ve been really busy with private stuff these last few days, I’ve lost track of most of what you’re discussing. 🄓

As I shared, I did write up an algorithm for it at some point; I think it is lost in a git comment someplace. I’ll put together some pseudo/Go code this week.

Super simple:

Making a reply:

  1. If yarn has one, use that. (Maybe do a collision check?)
  2. Make a hash of the raw twt, no truncation.
  3. Check the local cache for the shortest prefix without a collision
    • in SQL: select len(subject) where head_full_hash like subject || '%'

Threading:

  1. Get full hash of head twt
  2. Search for twts
    • in SQL: head_full_hash like subject || '%' and created_on > head_timestamp

The assumption being that replies will be to the most recent head. If replying to an older one, it will use a longer hash.
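A quick sketch of the “shortest prefix without collision” step in Python (the names and cache shape here are made up for illustration, not taken from any actual client):

def shortest_subject(full_hash: str, known_hashes: set, min_len: int = 7) -> str:
    # Grow the prefix until no other cached hash shares it.
    for n in range(min_len, len(full_hash) + 1):
        prefix = full_hash[:n]
        if not any(h.startswith(prefix) and h != full_hash for h in known_hashes):
            return prefix
    return full_hash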

⤋ Read More
In-reply-to » Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:

These collisions aren’t important unless someone tries to fork. So… for the vast majority it’s not a big deal. Using the grow-hash algorithm could inform the client to add another character when they fork.

⤋ Read More
In-reply-to » There we go!

@prologic@twtxt.net Regarding the new way of generating twt hashes, to me it makes more sense to use tabs as separator instead of spaces, since you can just copy/paste a line directly from a twtxt file that already has a tab between timestamp and message. But tabs might be hard to “type” when you are in a terminal, since it will activate autocomplete… 🤔

Another thing: it seems that you suggest we only use the domain in the hash creation and not the full path to the twtxt.txt.

$ echo -e "https://example.com 2024-09-29T13:30:00Z Hello World!" | sha256sum - | awk '{ print $1 }' | base64 | head -c 12

⤋ Read More

👋 Thanks for joining us on our Sept monthly Yarn.social meetup today y’all 🙇‍♂️ We had @david@collantes.us @sorenpeter@darch.dk @doesnm@doesnm.p.psf.lt @falsifian@www.falsifian.org and @xuu@txt.sour.is 💪 Nice turnout! (not all at once of course, as we normally run this over 4 hours as we span many time zones!)

Things we talked about:

  • Decentralised vs. Distributed
  • Use of SHA256 for Twt Hash(es)
  • We solved Edits! 🥳
  • UUID(s) probably won’t work! (susceptible to spoofing)
  • Helped @sorenpeter@darch.dk write some PHP to process/parse User-Agent and serve his feed via a custom PHP script 😅
  • @falsifian@www.falsifian.org introduced himself 👌
  • Talked about Merkle Trees 🌳

Did I miss anything? 🤔

⤋ Read More

More thoughts about changes to twtxt (as if we haven’t had enough thoughts):

  1. There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:

1a. Better and longer hashes.

1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.

1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8

1d. Stuff already described at dev.twtxt.net that doesn’t need any changes.

  2. We won’t know what will and won’t work until we try them. So I’m inclined to think of this as a bunch of draft ideas. Maybe later when we’ve seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.

  3. Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
    https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
    https://twtxt.readthedocs.io/en/latest/user/discoverability.html
    and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long “twtxt v2” document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment “you’ve ruined twtxt” and while I don’t completely agree with that commenter’s sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)

  4. All that being said, these are just my opinions, and I’m not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you’re actually implementing things, you’re in charge of what you decide to make, and I’m grateful for the work.

⤋ Read More
In-reply-to » Something @anth said on ITC

We:

  • Drop # url= from the spec.
  • We don’t adopt # uuid = – Something @anth@a.9srv.net also mentioned (see below)

We instead use the @nick@domain to identify your feed in the first place and use that as the identity when calculating Twt hashes: <id> + <timestamp> + <content>. Now in an ideal world I also agree: use WebFinger for this and expect that for the most part you’ll be doing a WebFinger lookup of @user@domain to fetch someone’s feed in the first place.

The only problem with WebFinger is: should this be mandated or just a recommendation?
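For context, a WebFinger lookup (RFC 7033) is just an HTTPS GET against a well-known path. A minimal sketch with a made-up account:

import requests

def webfinger(user: str, domain: str) -> dict:
    # Standard WebFinger endpoint as defined in RFC 7033.
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        headers={"Accept": "application/jrd+json"},
    )
    resp.raise_for_status()
    return resp.json()

# e.g. webfinger("alice", "example.com") returns a JRD document whose
# "links" entries could point at the account's twtxt feed.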

⤋ Read More
In-reply-to » This is only first draft quality, but I made some notes on the #twtxt v2 proposal. http://a.9srv.net/b/2024-09-25

Good writeup, @anth@a.9srv.net! I agree to most of your points.

3.2 Timestamps: I feel no need to mandate UTC. Timezones are fine with me. But I could also live with this new restriction. I fail to see, though, how this change would make things any easier compared to the original format.

3.4 Multi-Line Twts: What exactly do you think are bad things with multi-lines?

4.1 Hash Generation: I do like the idea of a new uuid metadata field! Any thoughts on two feeds selecting the same UUID for whatever reason? Well, the same could happen today with url.

5.1 Reply to last & 5.2 More work to backtrack: I do not understand anything you’re saying. Can you rephrase that?

8.1 Metadata should be collected up front: I generally agree, but if the uuid metadata field were a feed URL and not a real UUID, there should probably be an exception to change the feed URL mid-file after relocation.

⤋ Read More
In-reply-to » There is also a ~5x increase cost in memory utilization for any implementations or implementors that use or wish to use in-memory storage (yarnd does for example) and equally a 5x increase in on-disk storage as well. This is based on the Twt Hash going from a 13 bytes (content-addressing) to 63 bytes (on average for location-based addressing). There is roughly a ~20-150% increase in the size of individual feeds as well that needs to be taken into consideration (on the average case).

(#2024-09-24T12:44:35Z) There is an increase in space/memory for sure. But calculating the hashes also takes up CPU. I’m not good with that kind of math, but it’s a tradeoff either way.

⤋ Read More
In-reply-to » Some more arguments for a local-based treading model over a content-based one:

There is also a ~5x increase in memory utilization cost for any implementations or implementors that use or wish to use in-memory storage (yarnd does for example) and equally a 5x increase in on-disk storage as well. This is based on the Twt Hash going from 13 bytes (content-addressing) to 63 bytes (on average for location-based addressing). There is roughly a ~20-150% increase in the size of individual feeds as well that needs to be taken into consideration (on the average case).

⤋ Read More
In-reply-to » Some more arguments for a local-based treading model over a content-based one:

So really your argument is just that switching to location-based addressing “just makes sense”. Why? Without concrete pros/cons of each approach this isn’t really a strong argument, I’m afraid. In fact I probably need to just sit down and detail the properties of both approaches and the pros/cons of both.

I also don’t really buy the argument of simplicity either, personally, because I don’t technically see it as much more difficult to take an echo -e "<url>\t<timestamp>\t<content>" | sha256sum | base64 as the Twt Subject or to concatenate the <url> <timestamp> – the “effort” is the same. If we’re going to argue that SHA256 or cryptographic hashes are “too complicated” then I’m not really sure how to support that argument.

⤋ Read More
In-reply-to » @prologic Thanks for writing that up!

@bender@twtxt.net

Sorry, you’re right, I should have used numbers!

I don’t understand what “preserve the original hash” could mean other than “make sure there’s still a twt in the feed with that hash”. Maybe the text could be clarified somehow.

I’m also not sure what you mean by markdown already being part of it. Of course people can already use Markdown, just like presumably nothing stopped people from using (twt subjects) before they were formally described. But it’s not universal; e.g. as a jenny user I just see the plain text.

⤋ Read More
In-reply-to » Okay folks, I've spent all day on this today, and I think its in "good enough"™ shape to share:

@prologic@twtxt.net Thanks for writing that up!

I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.

I am not sure how I feel about all this being done at once, vs. letting conventions arise.

For example, even today I could reply to twt abc1234 with “(#abc1234) Edit: …” and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.

Similarly we could just start using 11-digit hashes. We should iron out whether it’s sha256 or whatever, but there’s no need to get all the other stuff right at the same time.

I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).

However I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.

Misc comments (I haven’t read the whole thing):

  • Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32 (11 characters carry roughly 44 bits as hex versus 55 bits as base32). I’d suggest gaining 11 bits with base64 instead (roughly 66 bits for 11 characters).

  • “Clients MUST preserve the original hash” — do you mean they MUST preserve the original twt?

  • Thanks for phrasing the bit about deletions so neutrally.

  • I don’t like the MUST in “Clients MUST follow the chain of reply-to references…”. If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.

  • Similarly I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).

  • For ā€œwho followsā€ lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?

  • Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.

  • Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.

  • I’m a little sad about other protocols not being recommended.

  • I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.

⤋ Read More

Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:

$ ./stats
Saw 58263 hashes
7fqcxaa
  https://twtxt.net/user/justamoment/twtxt.txt
  https://twtxt.net/user/prologic/twtxt.txt
ntnakqa
  https://twtxt.net/user/prologic/twtxt.txt
  https://twtxt.net/user/thecanine/twtxt.txt

Namely:

$ jenny -D https://twtxt.net/user/justamoment/twtxt.txt | grep 7fqcxaa

[7fqcxaa] [2022-12-28 04:53:30+00:00] [(#pmuqoca) @prologic@twtxt.net I checked the GitHub discussion, it became a request to join forces.

Do you plan on having them join?

Also for the name, how about:

  • “progit” or “prologit” (prologic official hard fork)
  • “git-stance” (git instance)
  • “GitTree” (Gitea inspired, maybe to related)
  • “Gitomata” (git automata)
  • “Git.Source”
  • “Forgor” (forgit is taken so I forgor) 🤣
  • “SweetGit” (as salty chat)
  • “Pepper Git” (other ingredients) 😉
  • “GitHeart” (core of git with a GitHub sounding name)
  • “GitTaka” (With music in mind)

Ok, enough fun… Hope this helps sprout some ideas from others if nothing is to your taste.]

$ jenny -D https://twtxt.net/user/prologic/twtxt.txt/5 | grep 7fqcxaa

[7fqcxaa] [2022-02-25 21:14:45+00:00] [(#bqq6fxq) It’s handled by blue Monday]

And:

$ jenny -D https://twtxt.net/user/thecanine/twtxt.txt | grep ntnakqa
[ntnakqa] [2022-01-23 10:24:09+00:00] [(#2wh7r4q) @prologic@twtxt.net I know, I was just hoping it might have also gotten fixed by that change, by some kind of backend miracles. 😂]

$ jenny -D https://twtxt.net/user/prologic/twtxt.txt/1 | grep ntnakqa
[ntnakqa] [2024-02-27 05:51:50+00:00] [(#otuupfq) @shreyan@twtxt.net Ahh 👌]

⤋ Read More