twet 🤦
Necropost: btw I have a twt alias for twet
twtxt via dmenu | https://git.sr.ht/~fredg/mybin/tree/master/item/twt
If we stick with Blake2b for Twt Hash(es), what bit length/size do we think we reasonably need to go to?
=> https://gist.mills.io/prologic/194993e7db04498fa0e8d00a528f7be6
e.g. (turns out @xuu@txt.sour.is is right about Blake2b being easy/simple too!):
$ printf "%s\t%s\t%s" "https://example.com/twtxt.txt" "2024-09-29T13:30:00Z" "Hello World!" | b2sum -l 32 -t | awk '{ print $1 }'
7b8b79dd
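For anyone wanting to poke at digest sizes themselves, here is a minimal Python sketch along the same lines as the b2sum example above; the payload and the candidate bit lengths are illustrative only, and the 32-bit case should line up with the b2sum -l 32 output:

    import hashlib

    payload = "https://example.com/twtxt.txt\t2024-09-29T13:30:00Z\tHello World!"

    # BLAKE2b lets you pick the digest size directly; try a few candidate lengths.
    for bits in (32, 64, 128, 256):
        digest = hashlib.blake2b(payload.encode("utf-8"), digest_size=bits // 8)
        print(f"{bits:3d} bits: {digest.hexdigest()}")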
@fastidious@tilde.town Yeah, I gave it a try, now I'll just wait. BTW, your nick rings a bell! I probably do remember it from reading old twts. Happy to see you here!
Drama is tech entities' new Going Viral PR stunt. After the Wordpress vs. WPE mayhem, Godot starts its own. Who/what's next?
@bender@twtxt.net I can always edit my twt and correct my Oopsie xD Would that make him happier?
You proud, daddy!? My twt is exactly 140 characters!
@prologic@twtxt.net Regarding the new way of generating twt-hashes, to me it makes more sense to use tabs as the separator instead of spaces, since then you can just copy/paste a line directly from a twtxt file, which already has a tab between timestamp and message. But tabs might be hard to "type" when you are in a terminal, since it will activate autocomplete… 🤔
Another thing: it seems that you suggest we only use the domain in the hash creation and not the full path to the twtxt.txt:
$ echo -e "https://example.com 2024-09-29T13:30:00Z Hello World!" | sha256sum - | awk '{ print $1 }' | base64 | head -c 12
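A rough Python sketch of the tab-separated variant suggested above; note that, unlike the shell pipeline, it base64-encodes the raw digest rather than the hex text, and the 12-character truncation is just an assumption carried over from the example:

    import base64
    import hashlib

    url = "https://example.com/twtxt.txt"
    timestamp = "2024-09-29T13:30:00Z"
    content = "Hello World!"

    # Join with tabs, mirroring how a line already looks in a twtxt file.
    payload = "\t".join([url, timestamp, content]).encode("utf-8")
    digest = hashlib.sha256(payload).digest()
    # Base64-encode the raw digest and keep the first 12 characters as the short hash.
    print(base64.b64encode(digest).decode("ascii")[:12])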
I believe I'd missed an f:
~/src/jenny $ git diff
diff --git a/jenny b/jenny
index ada8da2..8ae9a06 100755
--- a/jenny
+++ b/jenny
@@ -1194,7 +1194,7 @@ if __name__ == '__main__':
if args.edit:
edit_twt_file(app)
elif args.fetch:
- with DirectoryLock(f'/tmp/jenny-{getuser()}.run'):
+ with DirectoryLock(expanduser(f'~/tmp/jenny-{getuser()}.run')):
retrieve_all(app)
elif args.last_seen:
print('Feeds last seen at (times are local time), oldest first:')
@doesnm@doesnm.p.psf.lt I've just given it a try on android/termux and got it to work. I can't promise it won't break something else (because I definitely don't know what I'm doing), but here's what I broke:
~/src/jenny $ git diff
diff --git a/jenny b/jenny
index ada8da2..8ae9a06 100755
--- a/jenny
+++ b/jenny
@@ -1194,7 +1194,7 @@ if __name__ == '__main__':
if args.edit:
edit_twt_file(app)
elif args.fetch:
- with DirectoryLock(f'/tmp/jenny-{getuser()}.run'):
+ with DirectoryLock(expanduser('~/tmp/jenny-{getuser()}.run')):
retrieve_all(app)
elif args.last_seen:
print('Feeds last seen at (times are local time), oldest first:')
and of course make sure you mkdir ~/tmp
It has a twts cache which is used if the timeline is set to new. Maybe I should fork twet to add the things I wish for: newlines (I currently see two squares), showing conversations, showing twts even if they're not found in the cache, and parsing metadata to configure url, nick and followers (currently this is duplicated in the config and the twtxt file).
@doesnm@doesnm.p.psf.lt twt probably isn't the best client, I'm afraid. It doesn't really cache twts by their key (hash) to display threads properly. Jenny, however, does.
twet displays twts in raw format with some formatting (sadly no newlines). For reply messages I just see (#hash). But which text is hidden behind the hash? Currently I open twtxt.net/twt/hash to see it.
How do I read twts without a browser? I don't understand the context in reply messages.
Thanks for joining us on our Sept monthly Yarn.social meetup today y'all! We had @david@collantes.us @sorenpeter@darch.dk @doesnm@doesnm.p.psf.lt @falsifian@www.falsifian.org and @xuu@txt.sour.is 💪 Nice turn out! (not all at once of course, as we normally run this over 4 hours as we span many time zones!)
Things we talked about:
- Decentralised vs. Distributed
- Use of SHA256 for Twt Hash(es)
- We solved Edits! 🥳
- UUID(s) probably won't work! (susceptible to spoofing)
- Helped @sorenpeter@darch.dk write some PHP to process/parse User-Agent and serve his feed via a custom PHP script
- @falsifian@www.falsifian.org introduced himself
- Talked about Merkle Trees 🌳
Did I miss anything? 🤔
@bender@twtxt.net So… @xuu@txt.sour.is wrote:
"@bender I am also in camp no edit signals. deletes only breaks the head of a thread. all the replies are unaffected."
I figure I could also answer every single twtxt like this, so that if the original gets edited, or deleted, at least I don't sound foolish without knowing exactly what I replied to. 🤔
It sounds like a good idea! Should that be limited to just direct replies, or can it be extended to replies to other replies? That way, with just the right amount of chain-replies, we'll be RRrrrrrevolutionizing the way people do Mailing Lists, like, in no time! xD
P.S.: Just a reminder! I've already told you not to mind my twts for the next couple of hours, right?!
We:
- Drop # url= from the spec.
- Don't adopt # uuid = (something @anth@a.9srv.net also mentioned; see below).
We instead use the @nick@domain to identify your feed in the first place, and use that as the identity when calculating Twt hashes: <id> + <timestamp> + <content>. Now in an ideal world I also agree: use WebFinger for this and expect that for the most part you'll be doing a WebFinger lookup of @user@domain to fetch someone's feed in the first place.
The only problem with WebFinger is: should this be mandated or a recommendation?
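As a strawman only (this is a proposal, not the spec), a hash over the @nick@domain identity might look something like the sketch below; the newline separator, SHA256, and the base32/7-character truncation are all assumptions borrowed from the current style, not agreed details:

    import base64
    import hashlib

    feed_id = "@prologic@twtxt.net"        # the @nick@domain identity
    timestamp = "2024-09-29T13:30:00Z"
    content = "Hello World!"

    # <id> + <timestamp> + <content>, joined with newlines (an assumption).
    payload = "\n".join([feed_id, timestamp, content]).encode("utf-8")
    digest = hashlib.sha256(payload).digest()
    # Lower-cased base32, padding stripped, last 7 characters (also an assumption).
    print(base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:])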
@bender@twtxt.net Re that broken thread (#bqor23a): it's the same one. My pod doesn't have the Root Twt: https://twtxt.net/twt/bqor23a => 404 Not Found.
How in the hell did you even reply to this in the first place?
Don't mind this twt!
Good writeup, @anth@a.9srv.net! I agree with most of your points.
3.2 Timestamps: I feel no need to mandate UTC. Timezones are fine with me. But I could also live with this new restriction. I fail to see, though, how this change would make things any easier compared to the original format.
3.4 Multi-Line Twts: What exactly do you think is bad about multi-line twts?
4.1 Hash Generation: I do like the idea of a new uuid metadata field! Any thoughts on two feeds selecting the same UUID for whatever reason? Well, the same could happen today with url.
5.1 Reply to last & 5.2 More work to backtrack: I do not understand anything you're saying. Can you rephrase that?
8.1 Metadata should be collected up front: I generally agree, but if the uuid metadata field were a feed URL and not a real UUID, there should probably be an exception allowing the feed URL to change mid-file after relocation.
And finally, the legibility of feeds when viewing them in their raw form is worsened as you go from a Twt Subject of (#abcdefg12345) to something like (https://twtxt.net/user/prologic/twtxt.txt 2024-09-22T07:51:16Z).
There is also a ~5x increase in memory utilization for any implementations or implementors that use or wish to use in-memory storage (yarnd does, for example), and equally a ~5x increase in on-disk storage as well. This is based on the Twt Hash going from 13 bytes (content-addressing) to 63 bytes (on average, for location-based addressing). There is roughly a ~20-150% increase in the size of individual feeds as well that needs to be taken into consideration (in the average case); see the rough comparison sketch below.
So really your argument is just that switching to location-based addressing "just makes sense". Why? Without concrete pros/cons of each approach this isn't really a strong argument, I'm afraid. In fact I probably need to just sit down and detail the properties of both approaches and the pros/cons of each.
I also don't really buy the argument of simplicity either, personally, because I don't technically see it as much more difficult to take echo -e "<url>\t<timestamp>\t<content>" | sha256sum | base64 as the Twt Subject than to concatenate the <url> <timestamp>; the "effort" is the same. If we're going to argue that SHA256 or cryptographic hashes are "too complicated" then I'm not really sure how to support that argument.
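For a rough feel of the size difference being argued here, a tiny Python comparison using the two example subjects quoted above; the exact ratio obviously depends on feed URL and timestamp lengths:

    # Compare the two example subjects quoted earlier in this twt.
    content_subject = "(#abcdefg12345)"
    location_subject = "(https://twtxt.net/user/prologic/twtxt.txt 2024-09-22T07:51:16Z)"

    print(len(content_subject), "bytes, content-addressed subject")
    print(len(location_subject), "bytes, location-based subject")
    print(round(len(location_subject) / len(content_subject), 1), "x larger")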
Some more arguments for a location-based threading model over a content-based one (see the sketch after this list):
- The format: (#<DATE URL>) or (@<DATE URL>) both make sense: # as prefix is for a hashtag, like we already got with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering.
- Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build 3rd party twt-mention services.
- Supporting twt/webmentions will also increase discoverability, as a way to know about both replies and feed mentions from feeds that you don't follow.
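A small Python sketch of what building such subjects could look like; the (#<DATE URL>) / (@<DATE URL>) layout follows the proposal above and is not part of any agreed spec:

    def location_subject(url: str, timestamp: str, mention: bool = False) -> str:
        # '#' marks a reply-to-post subject, '@' a mention of a specific post (per the proposal).
        prefix = "@" if mention else "#"
        return f"({prefix}{timestamp} {url})"

    print(location_subject("https://example.com/twtxt.txt", "2024-09-29T13:30:00Z"))
    print(location_subject("https://example.com/twtxt.txt", "2024-09-29T13:30:00Z", mention=True))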
@falsifian@www.falsifian.org I believe "preserve" means to include the original subject hash at the start of the twt, such as (#somehash).
Sorry, you're right, I should have used numbers!
I don't understand what "preserve the original hash" could mean other than "make sure there's still a twt in the feed with that hash". Maybe the text could be clarified somehow.
I'm also not sure what you mean by markdown already being part of it. Of course people can already use Markdown, just like presumably nothing stopped people from using (twt subjects) before they were formally described. But it's not universal; e.g. as a jenny user I just see the plain text.
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: …" and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
However I recognize that I'm not the one implementing this stuff, and it's less work to just have everything determined up front.
Misc comments (I haven't read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead (see the sketch after these comments).
"Clients MUST preserve the original hash": do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don't like the MUST in "Clients MUST follow the chain of reply-to references…". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn't declare the client non-conforming just because they didn't get to all the bells and whistles.
Similarly I don't like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I'm again thinking of the 40-line shell script).
For "who follows" lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?
Why can't feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn't too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I'm a little sad about other protocols not being recommended.
I don't know how I feel about including markdown. I don't mind too much that yarn users emit twts full of markdown, but I'm more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
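On the hex vs. base32 vs. base64 point above, a quick Python sketch of how many bits an 11-character prefix of the same digest carries under each encoding (the digest input is arbitrary):

    import base64
    import hashlib
    import math

    digest = hashlib.sha256(b"example twt").digest()

    candidates = (
        ("hex",    digest.hex()[:11],                                     16),
        ("base32", base64.b32encode(digest).decode("ascii").lower()[:11], 32),
        ("base64", base64.urlsafe_b64encode(digest).decode("ascii")[:11], 64),
    )
    for label, prefix, alphabet in candidates:
        # 11 characters carry 11 * log2(alphabet size) bits of the digest.
        print(f"{label:7s} {prefix}  ~{11 * int(math.log2(alphabet))} bits")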
So, in a location-based system, how exactly do I reply to one of these two Twts from @Yarns@search.twtxt.net? 🤔
2024-09-07T12:55:56Z 🥳 NEW FEED: @<twtxt http://edsu.github.io/twtxt/twtxt.txt>
2024-09-07T12:55:56Z 🥳 NEW FEED: @<kdy https://twtxt.kdy.ch/twtxt.txt>
Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:
$ ./stats
Saw 58263 hashes
7fqcxaa
https://twtxt.net/user/justamoment/twtxt.txt
https://twtxt.net/user/prologic/twtxt.txt
ntnakqa
https://twtxt.net/user/prologic/twtxt.txt
https://twtxt.net/user/thecanine/twtxt.txt
Namely:
$ jenny -D https://twtxt.net/user/justamoment/twtxt.txt | grep 7fqcxaa
[7fqcxaa] [2022-12-28 04:53:30+00:00] [(#pmuqoca) @prologic@twtxt.net I checked the GitHub discussion, it became a request to join forces.
Do you plan on having them join?
Also for the name, how about:
- "progit" or "prologit" (prologic official hard fork)
- "git-stance" (git instance)
- "GitTree" (Gitea inspired, maybe too related)
- "Gitomata" (git automata)
- "Git.Source"
- "Forgor" (forgit is taken so I forgor) 🤣
- "SweetGit" (as salty chat)
- "Pepper Git" (other ingredients)
- "GitHeart" (core of git with a GitHub sounding name)
- "GitTaka" (With music in mind)
Ok, enough fun… Hope this helps sprout some ideas from others if nothing is to your taste.]
$ jenny -D https://twtxt.net/user/prologic/twtxt.txt/5 | grep 7fqcxaa
[7fqcxaa] [2022-02-25 21:14:45+00:00] [(#bqq6fxq) It's handled by blue Monday]
And:
$ jenny -D https://twtxt.net/user/thecanine/twtxt.txt | grep ntnakqa
[ntnakqa] [2022-01-23 10:24:09+00:00] [(#2wh7r4q) @prologic@twtxt.net I know, I was just hoping it might have also gotten fixed by that change, by some kind of backend miracles.]
$ jenny -D https://twtxt.net/user/prologic/twtxt.txt/1 | grep ntnakqa
[ntnakqa] [2024-02-27 05:51:50+00:00] [(#otuupfq) @shreyan@twtxt.net Ahh]
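A rough Python sketch of the kind of collision check behind the numbers above; the hashing here is a simplified stand-in (newline-joined fields, BLAKE2b, base32, last 7 characters) and not necessarily the exact Twt Hash recipe, so treat it as illustrative:

    import base64
    import collections
    import hashlib

    def twt_hash(url: str, timestamp: str, content: str) -> str:
        # Simplified stand-in for the Twt Hash; close enough to go hunting for clashes.
        payload = "\n".join([url, timestamp, content]).encode("utf-8")
        digest = hashlib.blake2b(payload, digest_size=32).digest()
        return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:]

    def find_collisions(twts):
        # twts: iterable of (feed_url, timestamp, content) tuples from all followed feeds.
        seen = collections.defaultdict(set)
        for url, timestamp, content in twts:
            seen[twt_hash(url, timestamp, content)].add((url, timestamp))
        return {h: sources for h, sources in seen.items() if len(sources) > 1}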
@prologic@twtxt.net Care to explain how that proves anything when someone else already got the spoofed twt with no way to tell it was spoofed? Can't an old twt just be deleted and give a similar result when grep-ed for?
Le me is worried!
Reminder folks of the upcoming Yarn.social monthly online meetup:
I hope to see @david@collantes.us @movq@www.uninformativ.de @lyse@lyse.isobeef.org @xuu@txt.sour.is @sorenpeter@darch.dk and hopefully others too, @aelaraji@aelaraji.com @falsifian@www.falsifian.org and anyone else that sees this! We're hopefully going to primarily discuss the future of Twtxt and the last few weeks of discussions 🤣
- Event: Yarn.social Online Meetup
- When: 28th September 2024 at 12:00pm UTC (midday)
- Where: Mills Meet : Yarn.social
- Cadence: 4th Saturday of every Month
Agenda:
- Let's talk about the upcoming changes to the Twtxt spec(s)
- See #xgghhnq
Been thinking about it for the last couple of days and I would say we can make do with the shorter (#<DATETIME URL>), since it mirrors the twt-mention syntax and simply points to the OP as the topic, identified by the time of posting it. Do we really need an (edit:...) and a (delete:...) as well?
@prologic@twtxt.net 🕵️ Hint: it was a twt about stolen property…
I have just made yet another convoluted twtxt notifications script! Feeling like an old dog learning new tricks! 🤣
@aelaraji@aelaraji.com This is one of the reasons why yarnd has a couple of settings with some sensible/sane defaults.
There are two settings:
$ ./yarnd --help 2>&1 | grep max-cache
--max-cache-fetchers int set maximum numnber of fetchers to use for feed cache updates (default 10)
-I, --max-cache-items int maximum cache items (per feed source) of cached twts in memory (default 150)
-C, --max-cache-ttl duration maximum cache ttl (time-to-live) of cached twts in memory (default 336h0m0s)
So yarnd pods by default are designed to only keep Twts around, publicly visible on either the anonymous Frontpage or Discover view or your Timeline or a feed's Timeline, for up to 2 weeks with a maximum of 150 items, whichever gets exceeded first. Any Twts over this are considered "old" and drop off the active cache.
It's a feature that my old man @off_grid_living@twtxt.net was very strongly in support of, as was I, back in the days of yarnd's design (nothing particularly to do with Twtxt per se), and one I've stuck by to this day, even though there are some that have different views on this 🤣
@movq@www.uninformativ.de @falsifian@www.falsifian.org @prologic@twtxt.net Maybe I don't know what I'm talking about and you've probably already read this: Everything you need to know about the "Right to be forgotten", coming straight out of the EU's GDPR website itself. It outlines the specific circumstances under which the right to be forgotten applies, as well as reasons that trump one's right to erasure, etc.
I'm no lawyer, but my uneducated guess would be that:
A) twts are already publicly available/public knowledge and such… just don't process children's personal data and MAYBE you're good? Since there's this:
… an organization's right to process someone's data might override their right to be forgotten. Here are the reasons cited in the GDPR that trump the right to erasure:
- The data is being used to exercise the right of freedom of expression and information.
- The data is being used to perform a task that is being carried out in the public interest or when exercising an organization's official authority.
- The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes, and where erasure of the data would likely impair or halt progress towards the achievement that was the goal of the processing.
B) What I love about the TWTXT sphere is its Human/Humane element! No deceptive algorithms, no Corpo B.S., etc. Just Humans. So maybe… if we thought about it in this way, it wouldn't hurt to be even nicer to others, offering strangers an even safer space.
I could already imagine a couple of extreme cases where, somewhere in this peaceful world, one's exercise of freedom of speech could get them in Real trouble (if not danger) if found out; it wouldn't necessarily have to involve something to do with Law or legal authorities. So, if someone asks, maybe fearing for… let's just say "their well being", would it hurt if a pod just purged their content if it's serving it publicly (maybe relaying the info to other pods) and called it a day? It doesn't have to be about some law/convention somewhere… 🤷 I know! Too extreme, but I've seen news of people who'd gone to jail or got their lives ruined for as little as a silly joke. And it doesn't even have to be about any of this.
P.S.: Maybe make tool X check out robots.txt? Or maybe make long-term archives opt-in? Opt-out?
P.P.S.: Already way too many MAYBEs in a single twt! So I'll just shut up.
Bahahahaha, very clever @lyse@lyse.isobeef.org, I look forward to reading your report! 🤣 However…
$ yarnc debug https://twtxt.net/user/prologic/twtxt.txt | grep -E '^pqst4ea' | tee | wc -l
0
I very quickly proved that Twt was never from me 🤣
@david@collantes.us Thanks, that's good feedback to have. I wonder to what extent this already exists in registry servers and yarn pods. I haven't really tried digging into the past in either one.
How interested would you be in changes in metadata and other comments in the feeds? I'm thinking of just permanently saving every version of each twtxt file that gets pulled, not just the twts. It wouldn't be hard to do (though presenting the information in a sensible way is another matter). Compression should make storage a non-issue unless someone does something weird with their feed like shuffle the comments around every time I fetch it.
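A minimal Python sketch of the "save every fetched version" idea, assuming a simple directory-per-feed layout with one gzip'd snapshot per distinct feed content; the naming scheme and dedup-by-content-hash are assumptions, not an existing tool:

    import gzip
    import hashlib
    import pathlib
    import time

    ARCHIVE = pathlib.Path("archive")

    def store_snapshot(feed_url: str, raw_feed: bytes) -> pathlib.Path:
        # One directory per feed, one gzip'd snapshot per distinct feed content.
        feed_dir = ARCHIVE / hashlib.sha256(feed_url.encode("utf-8")).hexdigest()[:16]
        feed_dir.mkdir(parents=True, exist_ok=True)
        content_id = hashlib.sha256(raw_feed).hexdigest()[:16]
        existing = sorted(feed_dir.glob(f"*-{content_id}.txt.gz"))
        if existing:                                   # unchanged since an earlier fetch
            return existing[0]
        path = feed_dir / f"{int(time.time())}-{content_id}.txt.gz"
        path.write_bytes(gzip.compress(raw_feed))
        return path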
@falsifian@www.falsifian.org "I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts": that's an awesome idea for a project. Something I would certainly use!
@movq@www.uninformativ.de I don't think it has to be like that. Just make sure the new version of the twt is always appended to your current feed, and have some convention for indicating it's an edit and which twt it supersedes. Keep the original twt as-is (or delete it if you don't want new followers to see it); it doesn't matter if it's archived because you aren't changing that copy.
@prologic@twtxt.net Do you have a link to some past discussion?
Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don't think I have to honour that request, no matter how European they are.
I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?
Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.
One distinct disadvantage of (replyto:…) over (edit:#): (replyto:…) relies on clients always processing the entire feed, otherwise they wouldn't even notice when a twt gets updated. a) This is more expensive, b) you cannot edit twts once they get rotated into an archived feed, because there is nothing signalling clients that they have to re-fetch that archived feed.
I guess neither matters that much in practice. It's still a disadvantage.
@prologic@twtxt.net Hi. I have noticed that sometimes when I hit the back button I lose all the surrounding layout and just have a list of twts.
BTW this code doesn't incorporate existing twts into jenny's database. It's best used starting from scratch. I've been testing it using a custom XDG_CACHE_HOME and XDG_CONFIG_HOME to avoid messing with my "real" jenny data.
I wrote some code to try out non-hash reply subjects formatted as (replyto …), while keeping the ability to use the existing hash style.
I don't think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny's beautiful code. I don't write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny's current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it's possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven't been seen yet but are wanted. When one of those "wanted" twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto …) format.
- Probably other important things I'm forgetting.
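For anyone curious what the two tables described above might look like, here is a small sqlite sketch in Python; the column names are guesses for illustration, not jenny's or the patch's actual schema (see the linked patch for that):

    import sqlite3

    db = sqlite3.connect("replyto.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS seen (
        url       TEXT NOT NULL,
        timestamp TEXT NOT NULL,
        msgid     TEXT NOT NULL,           -- Message-Id of the mail file already written
        PRIMARY KEY (url, timestamp)
    );
    CREATE TABLE IF NOT EXISTS wanted (
        url       TEXT NOT NULL,
        timestamp TEXT NOT NULL,
        mail_path TEXT NOT NULL,           -- mail file to rewrite with In-Reply-To once seen
        PRIMARY KEY (url, timestamp, mail_path)
    );
    """)
    db.commit()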
@prologic@twtxt.net Wikipedia claims sha1 is vulnerable to a "chosen-prefix attack", which I gather means I can write any two twts I like, and then cause them to have the exact same sha1 hash by appending something. I guess a twt ending in random junk might look suspicious, but perhaps the junk could be worked into an image URL like […]. If that's not possible now, maybe it will be later.
git only uses sha1 because they're stuck with it: migrating is very hard. There was an effort to move git to sha256 but I don't know its status. I think there is progress being made with Game Of Trees, a git clone that uses the same on-disk format.
I can't imagine any benefit to using sha1, except that maybe some very old software might support sha1 but not sha256.
@movq@www.uninformativ.de Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the "replyto" idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.
There is nothing wrong with how we currently run a diff to see what has been removed. If I build a merkle tree off all the twt hashes in a feed, I can use that to verify whether a twt should be in a feed or not, and gossip that to my peers.
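A minimal Python sketch of the merkle-tree idea over a feed's twt hashes; the leaf encoding, SHA256, and odd-node duplication are assumptions, and real verification/gossip would also need proof paths:

    import hashlib

    def merkle_root(twt_hashes):
        # Fold a feed's twt hashes (strings, in feed order) up into a single root.
        level = [hashlib.sha256(h.encode("utf-8")).digest() for h in twt_hashes]
        if not level:
            return hashlib.sha256(b"").hexdigest()
        while len(level) > 1:
            if len(level) % 2:                          # duplicate the last node on odd levels
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    print(merkle_root(["7fqcxaa", "ntnakqa", "pmuqoca"]))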
So… basically a rehash of the email "unsend" requests? What if I was to make a (delete: 5vbi2ea)… would it delete someone else's twt?
I'm not advocating in either direction, btw. I haven't made up my mind yet. Just braindumping here.
The (replyto:…) proposal is definitely more in the spirit of twtxt, I'd say. It's much simpler, anyone can use it even with the simplest tools, no need for any client code. That is certainly a great property, if you ask me, and it's things like that that brought me to twtxt in the first place.
I'd also say that in our tiny little community, message integrity simply doesn't matter. Signed feeds don't matter. I signed my feed for a while using GPG, someone else did the same, but in the end, nobody cares. The community is so tiny, there's enough "implicit trust" or whatever you want to call it.
If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be?
I do have to "admit", though, that hashes feel better. It feels good to know that we can clearly identify a certain twt. It feels more correct and stable.
Hm.
I suspect that the (replyto:…) proposal would work just as well in practice.
@falsifian@www.falsifian.org "I don't really mind if the twt gets edited before I even fetch it." Right, that's never the problem. Editing a twtxt before anyone fetches it isn't even editing, right? :-P The problem we are trying to fix is the havoc it causes editing twtxts that have already been replied to, often ad nauseam. That's the real problem.