@movq@www.uninformativ.de Apparently you wrote it :D The hash doesn't lie? https://twtxt.net/twt/znf6csa
@prologic@twtxt.net What happened here – did I edit my twt or is this hash wrong?
@prologic@twtxt.net Spring cleanup! That's one way to encourage people to self-host their feeds. :-D
Since I'm only interested in the url metadata field for hashing, I do not keep any comments or metadata for that matter, just the messages themselves. The last time I fetched was probably some time yesterday evening (UTC+2). I cannot tell exactly, because the recorded last fetch timestamp has been overridden with today's by now.
I dumped my new SQLite cache into: https://lyse.isobeef.org/tmp/backup.tar.gz This time maybe even correctly, if you're lucky. I'm not entirely sure. It took me a few attempts (date and time were separated by a space instead of T at first, I normalized offsets +00:00 to Z as yarnd does and converted newlines back to U+2028). At least now the simple cross check with the Twtxt Feed Validator does not yield any problems.
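If it helps anyone following along, here is a tiny Python sketch of those normalization steps; the function name and the exact rules are just my reading of the description above, not an authoritative spec:

def normalize_for_hashing(timestamp: str, text: str) -> tuple[str, str]:
    # Separate date and time with "T" instead of a space.
    timestamp = timestamp.replace(" ", "T", 1)
    # Normalize a "+00:00" offset to "Z", as yarnd does.
    if timestamp.endswith("+00:00"):
        timestamp = timestamp[:-6] + "Z"
    # Convert real newlines back to U+2028 (LINE SEPARATOR).
    text = text.replace("\n", "\u2028")
    return timestamp, text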
@andros@twtxt.andros.dev sha256 hash of the twt in JSON. Look at the converter script
Amazing! It is a good tool for reading feeds. What did you use to calculate the hash?
Hello, I want to present my new revolutionary twtxt v3 format: twjson
Here's why you should use it:
- It's easy to parse
- It's easy to read (in formatted mode :D)
- It actually uses \n for newlines, so you don't need unprintable symbols
- Forget about hash collisions, because it uses the full hash
Here is my twjson feed: https://doesnm.p.psf.lt/twjson.json
And twtxt2json converter: https://doesnm.p.psf.lt/twjson.js
@eapl.me@eapl.me Interesting! Two points stood right out to me:
Why the hell are e-mail newsletters considered a valid option in the first place? Just offer an Atom feed and be done with it! Especially for a blog of this very type. This doesn't even involve a third party service. Although, in addition he also links to Feedburner, what the fuck!? No e-mail address or the like is needed or subject to being disclosed.
When these spam mailers want to prevent resubscribing, then for fuck's sake, why don't they use a hash of the e-mail address (I saw that in yarnd) for that purpose? Storing the e-mail address in clear text after unsubscribing is illegal in my book.
There are 82.108 read statuses, but only 24.421 messages in the cache. In contrast to the cache with the messages, the read statuses are never cleaned up when a feed was unsubscribed from. And the read statuses also contain old style hashes, from before we settled on what we have today. Still a huge difference. Hmm.
tt reimplementation that I already followed with the old Python tt. Previously, I just had a few feeds for testing purposes in my new config. While transferring, I "dropped" heaps of feeds that appeared to be inactive.
Thanks, @movq@www.uninformativ.de!
My backing SQLite database with indices is 8.7 MiB in size right now.
The twtxt cache is 7.6 MiB, it uses Python's pickle module. And next to it there is a 16.0 MiB second database with all the read statuses for the old tt. Wow, super inefficient, it shouldn't contain anything else, it's a giant, pickled {"$hash": {"read": True/False}, …}. What the heck, why is it so big?! O_o
(Back in tt.) Well, it kinda worked. At least appending to the file. But my cache database got screwed up. I do not yet support replies, so the subject and root hash columns have not been set at all, resulting in a message that is just not shown at all. I gotta do something about that next. The good thing is, though, after simply fixing the two columns the message appeared on screen.
@bender@twtxt.net Yeah, as you mentioned in the other thread, @andros@twtxt.andros.dev's hashes appear to be not quite right.
@andros@twtxt.andros.dev your client is breaking things, I am afraid. This hash (ptxsca), which you seem to be using to reply to @movq@www.uninformativ.de, is not the right one.
@movq@www.uninformativ.de I have no doubt that you're not seeing the images correctly. It's just that it's broken when viewing them, in my case, and analyzing the URLs, I've seen everything I mentioned.
Regarding the hash, you're right. I'll have to investigate what's going on. I'm having a hard time getting the hash generation to work properly.
@andros@twtxt.andros.dev Hm, looks correct to me. The image to be displayed is a thumbnail and this links to the full-sized image. The thumbnail (JPG) is auto-generated from the full image (PNG), hence the two extensions.
What does look strange, though, is that your client came up with the hash pqsmcka, while it should have been te5quba.
Why not just use a registry? It can be personal or hosted by someone, like registry.twtxt.org. It just needs to be adapted to support hashes.
@prologic@twtxt.net We can't agree on this idea because that makes things even more complicated than it already is today. The beauty of twtxt is, you put one file on your server, done. One. Not five million. Granted, there might be archive feeds, so it might be already a bit more, but still faaaaaaar less than one file per message.
Also, you would need to host not just your own hash files, but everybody else's you follow as well. Otherwise, what is that supposed to achieve? If people are already following my feed, they know what hashes I have, so this is of no use to them (unless they want to look up a message from an archive feed and don't process them). But the far more common scenario is that an unknown hash originates from a feed that they have not subscribed to.
Additionally, yarnd's URL schema would then also break, because https://twtxt.net/twt/<hash> now becomes https://twtxt.net/user/prologic/<hash>, https://twtxt.net/user/bender/<hash> and so on. To me, that looks like you would only get hashes if they belonged to this particular user. Of course, you could define rules that if there is a /user/ part in the path, then use a different URL, but this complicates things even more.
Sorry, I don't like that idea.
One of the biggest gripes of the community with the way the threading model currently works with Twtxt v1.2 (https://twtxt.dev) is this notion of:
What is this hash?
What does it refer to?
Idea: Why can't we all agree to implement a simple URI scheme where we host our Twtxt feeds?
That is, if you host your feed at https://example.com/twtxt.txt – why can't or could you not also host various JSON files (let's agree on the spec of course) at https://example.com/twt/<hash>?
That way we solve this problem in a truly decentralised way, rather than everyone relying on yarnd pods alone.
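A rough Python sketch of what generating such per-twt JSON files could look like; the directory layout and the field names (url, timestamp, text) are purely hypothetical here, since the spec would still have to be agreed upon:

import json, os

def write_twt_files(feed_url, twts, outdir="twt"):
    # twts maps a twt hash to its (timestamp, text) pair.
    os.makedirs(outdir, exist_ok=True)
    for twt_hash, (timestamp, text) in twts.items():
        doc = {"url": feed_url, "timestamp": timestamp, "text": text}
        with open(os.path.join(outdir, twt_hash), "w") as f:
            json.dump(doc, f)

Each generated file would then be served as https://example.com/twt/<hash> next to the feed itself.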
a few async ideas for later
The editing process needs a lot of consideration and compromises.
From one side, editing and deleting are necessary IMO. People will do it anyway, and personally I like to edit my texts, so I'd put some effort into making it work.
Should we keep a history of edits? Should we hash every edit to avoid abuse? Should we mark a twt internally as deleted, but keep the replies?
I think that's part of a more complete "thread" extension, although I'd say it's worth agreeing on something reflecting the real usage in the wild, along with what people usually do on other platforms.
looks good to me!
About alice's hash, using SHA256, I get 96473b4f or 96473B4F for the last 8 characters. I'll add it as an implementation example.
The idea of including it alongside the follow URL is to avoid calculating it every time we load the file (assuming the client did that correctly), and it helps to track replies across the file with a simple search.
Also, looking at your example I'm thinking now that instead of {url=96473B4F,id=1}, which is ambiguous about which URL we are referring to, it could be something like: {reply_to=[URL_HASH]_[TWT_ID]} / {reply_to=96473B4F_1}. That way, the "full twt ID" could be 96473B4F_1.
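A minimal Python sketch of that scheme, assuming the 8 characters are simply the tail of the SHA256 hex digest of the raw feed URL; alice's URL below is a placeholder, not taken from the proposal:

import hashlib

def feed_url_hash(url: str) -> str:
    # Last 8 hex characters of the SHA256 digest of the feed URL.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[-8:].upper()

def reply_subject(url: str, twt_id: int) -> str:
    # Formats the proposed {reply_to=URL_HASH_TWT_ID} subject.
    return "{reply_to=%s_%d}" % (feed_url_hash(url), twt_id)

print(reply_subject("https://example.com/alice/twtxt.txt", 1))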
True. Though if the idea turns out to be better, then the community will adopt it.
if you look at the subject for that twt you will see that it uses the extended hash format to include a URL.
@bmallred@staystrong.run I forgot one more effect of edits. If clients remember the read status of messages by hash, an edit will mark the updated message as unread again. To some degree that is even the right behavior, because the message was updated, so the user might want to have a look at the updated version. On the other hand, if it's just a small typo fix, it's maybe not worth telling the user about. But the client doesn't know, at least not without additional logic.
Having said that, it appears that this only affects me personally, no one else. I don't know of any other client that saves read statuses. But don't worry about me, all good. Just keep doing what you've done so far. I wanted to mention that only for the sake of completeness. :-)
I like this syntax, you have my vote, although I'd change it a bit, like:
#<Alice https://example.com/twtxt.com#2024-12-18T14:18:26+01:00>
Hashes are not a problem in PHP, I don't know why it's slow to calculate them on your side, but I agree with your points.
BTW, did you have the chance to read my proposal on twtxt 2.0? I shared a few ideas about possible improvements to discuss:
https://text.eapl.mx/a-few-ideas-for-a-next-twtxt-version
https://text.eapl.mx/reply-to-lyse-about-twtxt
@bmallred@staystrong.run Any edit automatically changes the twt hash, because the hash is built over the feed URL, message timestamp and message text. https://twtxt.dev/exts/twt-hash.html So, it is only a problem if somebody replied to your original message with the old hash. The original message suddenly doesn't exist anymore and the reply becomes detached, orphaned, whatever you wanna call it. Threading doesn't break, though, if nobody replied to your message.
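For reference, here is my reading of that spec as a small Python sketch: blake2b-256 over "<feed URL>\n<timestamp>\n<text>", base32-encoded without padding, lowercased, keeping only the last 7 characters. Double-check it against https://twtxt.dev/exts/twt-hash.html before relying on it:

import base64, hashlib

def twt_hash(feed_url: str, timestamp: str, text: str) -> str:
    # Join the three fields with newlines and hash with BLAKE2b-256.
    payload = "\n".join([feed_url, timestamp, text]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    # Base32 without padding, lowercased; the hash is the last 7 characters.
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-7:]

Any change to the URL, timestamp or text changes the payload and therefore the resulting hash, which is exactly why edits detach existing replies.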
@andros@twtxt.andros.dev I believe you have just reproduced the bug… it looks like you've replied to a twt, but the hash is wrong. I can see the hash here from Jenny, but it doesn't look like it corresponds to any{twt,thing}. If you check it out on any yarn instance it won't look like a reply.
My hypothesis about that thing breaking my twts is that it might have something to do with the parentheses surrounding the root twt hash in the reply twt-A when I reply to it with fork-twt-B; I imagine elisp interpreting those as an s-expression, thus breaking the hash generation process of (#twt-A) before prepending it to fork-twt-B… but then I'm too ignorant to figure out how to test my theory (heck, I couldn't even recalculate the hashes myself correctly in bash xD). I'll keep trying tho.
@andros@twtxt.andros.dev yes, that usually happens when twts get edited, and we just made a gentlemen's agreement to avoid edits as much as possible (at least for the time being). But the thing is, that is not what's happening with my broken twts' hashes, since I've been mostly replying to my own twts as a test and I know for sure that I haven't edited any. (I usually fork-reply instead of editing a twt when needed.)
@aelaraji@aelaraji.com Can you give me examples of hashes that you have detected as wrong between the Emacs client and twtxt.net?
Perhaps there is some character, some space, that is creating the discrepancy.
@prologic@twtxt.net Agreed! But clients can hallucinate and generate wrong hashes, aka Lies. Also, if you check your own twt on twtxt.net, it looks like a root twt instead of a reply. The hash in that test reply should have been s243lua instead.
@prologic@twtxt.net Are you sure? xD … it was supposed to be a reply to another twt, but the twt hash is wrong (I think).
@andros@twtxt.andros.dev is it me or does twtxt-el generate a wrong twt hash when I use the [ ↳ Reply to twt ] button?
@prologic@twtxt.net Of course you don't notice it when yarnd only shows at most the last n messages of a feed. As an example, check out mckinley's message from 2023-01-09T22:42:37Z. It has "[Scheduled][Scheduled][Scheduled]"… in it. This text in square brackets is repeated numerous times. If you search his feed for a closing square bracket followed by an opening square bracket (][) you will find a bunch more of these. It goes without question he never typed that in his feed. My client saves each twt hash I've explicitly marked read. A few days ago, I got plenty of apparently years old, yet suddenly unread messages. Each and every single one of them containing this repeated bracketed text thing. The only conclusion is that something messed up the feed again.
Definitely something going on with replies. This one was replying to the wrong twt and even when I got clever and pasted the right hash it didn't work.
I'm realizing that my performance bottleneck is @prologic@twtxt.net! It is actually calculating the hash to make the replies, and specifically for users with very long feeds. I'm seriously thinking about enabling replies via configuration.
Well, that's another bug: The search https://twtxt.net/search?q=%22LOOOOL%2C+great+programming+tutorial+music%22 yields the wrong hash. It should have been poyndha instead.
@lyse@lyse.isobeef.org Thanks! Yes, there are still countless knobs to adjust on that thing. I will incorporate your remarks. A mobile view would also be nice. Right now it is still a pretty tight fit on a smartphone.
Regarding conversations: Yesterday's one about encrypted messages was the decisive factor for the implementation. So far, the approaches in "Timeline" and "Yarn" have not convinced me. But we can all learn something from each other.
another one would be to allow changing public keys over time (as it may be a good practice [0]). A syntax like the following could help to know what public key you used to encrypt the message, and which private key the client should use to decrypt it:
!<nick url> <encrypted_message> <public_key_hash_7_chars>
Also I'd remove support for storing the message as hex, only allowing base64 (more compact, aiming for a minimalistic spec, etc.)
Or using the same twt hash method, but only for the URL, to generate the nick, if it doesn't exist, like so: @5vxo4ia@twtxt.net
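A short sketch of that nick idea, reusing the same recipe as the twt hash (blake2b-256, base32, last 7 characters) but feeding it only the feed URL; whether the real mechanism works exactly like this is an assumption on my part:

import base64, hashlib

def nick_from_url(feed_url: str) -> str:
    # Hash only the feed URL, then take the last 7 base32 characters.
    digest = hashlib.blake2b(feed_url.encode("utf-8"), digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-7:]  # e.g. used as @<hash>@twtxt.net when no nick exists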
@doesnmppsflt@doesnm.p.psf.lt Not sure which bug you're referring to. (Did I forget?)
Those long IDs like (#113797927355322708) are simply part of that feed. Looks like the author just dumps ActivityPub IDs into twtxt. I think this used to work in the past, but the corresponding spec (https://twtxt.dev/exts/hash-tag.html) has been deprecated and jenny doesn't support it – actually, jenny never supported that.
jenny can only group threads by exactly one criterion (because it writes a Message-ID into the mail file) and that's the regular twt hash. So, anything else, like people doing "#CoolTopic", isn't possible.
I'm still making progress with the Emacs client. I'm proud to say that the code that is responsible for reading the feeds is almost finished, including: Twt Hash Extension, Twt Subject Extension, Multiline Extension and Metadata Extension. I'm fine-tuning some tests and will soon do the first buffer that displays the twts.
Added TwtHash hashes to every message on my personal Twtxt HTML renderer. Code is not yet ready for prime-time. Need to work out some kinks still.
@aelaraji@aelaraji.com You could use https://lyse.isobeef.org/tmp/twthash.py to generate twt hashes. I cobbled that together in order to generate test data for my client.
@bender@twtxt.net So turns out something is setting my HashingURI to the value {{ .Profile.URI }} and that is making my hashes wrong, so it cannot delete or edit twts.
"Designed by Apple in California" stopped being sold in 2019. It sold for $199 and $299 (two sizes). Check the eBay pricing on that link, if you want to know the selling price today.
Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)
Metadata on individual twts is too much for me. I do like the simplicity of the current spec. But I understand where you're coming from.
Numbering twts in a feed is basically the attempt of generating message IDs. It's an interesting idea, but I reckon it is not even needed. I'd simply use location based addressing (feed URL + "#" + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
yarnd actually sends If-Modified-Since request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in Cache.FetchFeeds(…) we can actually see If-Modified-Since being deployed when the feed has been retrieved with a Last-Modified response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278
Turns out etags with If-None-Match are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible etags when fetching feeds.
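For comparison, a minimal sketch of such conditional fetching in Python using the requests library; the header names are standard HTTP, the cache dictionary is just an illustration:

import requests

def fetch_feed(url, cache):
    # Send the validators we remembered from the previous fetch, if any.
    headers = {}
    if "last_modified" in cache:
        headers["If-Modified-Since"] = cache["last_modified"]
    if "etag" in cache:
        headers["If-None-Match"] = cache["etag"]
    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304:
        return None  # not modified, keep using the cached copy
    # Remember the new validators for the next conditional request.
    if "Last-Modified" in resp.headers:
        cache["last_modified"] = resp.headers["Last-Modified"]
    if "ETag" in resp.headers:
        cache["etag"] = resp.headers["ETag"]
    return resp.text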
I don't understand how the discovery URLs should work to replace the User-Agent header in HTTP(S) requests. Do you mind elaborating?
Different protocols are basically just a client thing.
I reckon it's best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I'd definitely create a different feed for other languages.
Isn't the emoji thing "just" a client feature? So, feeds do not even have to state any emojis. As a user I'd configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in tt where I assign colors to feeds. On the other hand, what if a user wants to control what symbol should be displayed, similar to the feed's nick? Hmm. But still, my terminal font doesn't even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be a client-only feature.
similar to data packets in NDN, each message has multiple names. a true name, which is an encoded cryptographic hash of the file itself. we call this kind of information self-certifying. given a true name, you can find a file and verify its integrity. additionally, agents can associate a self-certifying name with a pet name or subjective label of their choosing and share it with their friends/peers. Zooko's triangle can suck it. gemini://sunshinegardens.org/~xjix/wiki/cryptogenāspecification/
@movq@www.uninformativ.de I'm sorry if I sound too contrarian. I'm not a fan of using an obscure hash as well. The problem is that of future and backward compatibility. If we change to sha256 or another, we don't just need to support sha256, but now need to support both sha256 AND blake2b. Or we divide the community. Users of some clients will still use the old algorithm and get left behind.
Really we should all think hard about how changes will break things and if those breakages are acceptable.
I did write up an algorithm for it at some point; I think it is lost in a git comment someplace. I'll put together some pseudo/Go code this week.
Super simple:
Making a reply:
- If yarn has one, use that. (Maybe do a collision check?)
- Make a hash of the raw twt, no truncation.
- Check the local cache for the shortest prefix without a collision, in SQL:
select len(subject) where head_full_hash like subject || '%'
Threading:
- Get the full hash of the head twt
- Search for twts, in SQL:
head_full_hash like subject || '%' and created_on > head_timestamp
The assumption being replies will be for the most recent head. If replying to an older one, it will use a longer hash.
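A rough Python/SQLite sketch of the "shortest prefix without collision" step above; the table and column names (twts, head_full_hash) are hypothetical, only the idea itself is from the description:

import sqlite3

def shortest_subject(conn: sqlite3.Connection, full_hash: str, minimum: int = 7) -> str:
    # Grow the prefix until it matches at most one stored full hash.
    for length in range(minimum, len(full_hash) + 1):
        prefix = full_hash[:length]
        count = conn.execute(
            "SELECT COUNT(*) FROM twts WHERE head_full_hash LIKE ? || '%'",
            (prefix,),
        ).fetchone()[0]
        if count <= 1:
            return prefix
    return full_hash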
If we stuck with Blake2b for Twt Hash(es), what do we think we need to reasonably go to in bit length/size?
=> https://gist.mills.io/prologic/194993e7db04498fa0e8d00a528f7be6
e.g. (turns out @xuu@txt.sour.is is right about Blake2b being easy/simple too!):
$ printf "%s\t%s\t%s" "https://example.com/twtxt.txt" "2024-09-29T13:30:00Z" "Hello World!" | b2sum -l 32 -t | awk '{ print $1 }'
7b8b79dd
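For what it's worth, the same 32-bit Blake2b digest in Python; this should reproduce the b2sum output above, assuming the tab-separated input is identical:

import hashlib

payload = "\t".join([
    "https://example.com/twtxt.txt",
    "2024-09-29T13:30:00Z",
    "Hello World!",
]).encode("utf-8")
# digest_size is in bytes, so 4 bytes corresponds to b2sum -l 32.
print(hashlib.blake2b(payload, digest_size=4).hexdigest())  # expect 7b8b79dd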