More thoughts about changes to twtxt (as if we haven't had enough thoughts):
- There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:
1a. Better and longer hashes.
1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.
1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8
1d. Stuff already described at dev.twtxt.net that doesn't need any changes.
We won't know what will and won't work until we try them. So I'm inclined to think of this as a bunch of draft ideas. Maybe later when we've seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.
Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
https://twtxt.readthedocs.io/en/latest/user/discoverability.html
and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long "twtxt v2" document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment "you've ruined twtxt" and while I don't completely agree with that commenter's sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.) All that being said, these are just my opinions, and I'm not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you're actually implementing things, you're in charge of what you decide to make, and I'm grateful for the work.
@bender@twtxt.net So…
@xuu@txt.sour.is wrote:
"@bender I am also in camp no edit signals. deletes only breaks the head of a thread. all the replies are unaffected."
I figure I could also answer every single twtxt like this, so that if the original gets edited, or deleted, at least I don't sound foolish without knowing exactly what I replied to. 🤔
Sounds like a good idea! Should that be limited to just direct replies, or can it be extended to replies to other replies? That way, with just the right amount of chain-replies, we'll be RRrrrrrevolutionizing the way people use Mailing Lists in no time! xD
P.S.: Just a reminder! I've already told you not to mind my twts for the next couple of hours, right?
Finally weekend. Time to relax a bit. And today I finally have some time for my computer in my free time. Wish you all a great weekend! Take care of yourself and those around you :)
if twtxt 2 is dropping gemini support, i will probably move on and spend more time on my gemini social zine protocol instead. i think the direction of the protocol is probably fine, but for me web is a tier 2 publishing channel. if the choice is between gemini and http i'm always going to pick gemini. it's been a fun ride, but i guess this is where i get off.
scp(1) options.
@mckinley@twtxt.net I mean, yes! I've heard a lot of good things about how efficient a tool it is for backup and all, and I'm willing to spend the time and learn. It's just that seeing those 400+ possible options was a buzz-kill. 🫣 Luckily @lyse and @movq shared their most-used options!
Starting a couple of new projects (geez where do I find the time?!):
HomeTunnel:
HomeTunnel is a self-hosted solution that combines secure tunneling, proxying, and automation to create your own private cloud. Utilizing Wireguard for VPN, Caddy for reverse proxying, and Traefik for service routing, HomeTunnel allows you to securely expose your home network services (such as Gitea, Poste.io, etc.) to the Internet. With seamless automation and on-demand TLS, HomeTunnel gives you the power to manage your own cloud-like environment with the control and privacy of self-hosting.
CraneOps:
craneops is an open-source operator framework, written in Go, that allows self-hosters to automate the deployment and management of infrastructure and applications. Inspired by Kubernetes operators, CraneOps uses declarative YAML Custom Resource Definitions (CRDs) to manage Docker Swarm deployments on Proxmox VE clusters.
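For a feel of what that could look like, here is a purely illustrative CRD sketch (the API group, kind, and every field here are hypothetical; the project is young and none of this is final):

apiVersion: craneops.example.org/v1alpha1  # hypothetical API group/version
kind: SwarmStack                           # hypothetical resource kind
metadata:
  name: gitea
spec:
  cluster: proxmox-home                    # target Proxmox VE cluster
  replicas: 2
  image: gitea/gitea:1.22                  # image tag is illustrative
  ports:
    - "3000:3000"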
rsync(1) but, whenever I Tab for completion and get this:
@aelaraji@aelaraji.com rsync -zaXAP is what I use all the time. But that's all; for the rest, I have to consult the manual.
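(For reference, since that flag soup is terse: in rsync -zaXAP, -z compresses data in transit, -a is archive mode, -X preserves extended attributes, -A preserves ACLs, and -P shows progress and keeps partially-transferred files.)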
@sorenpeter@darch.dk Points 2 & 3 aren't really applicable in the discussion of the threading model here, I'm afraid. WebMentions is completely orthogonal to the discussion. Further, no-one that uses Twtxt really uses WebMentions; whilst yarnd supports the use of WebMentions, it's very rarely used in practice (if ever). In fact, I should just drop the feature entirely.
The use of WebSub OTOH is far more useful and is used by every single yarnd pod everywhere (not that there are that many around these days) to subscribe to feed updates in near real-time without having to poll constantly.
rsync(1) but, whenever I Tab for completion and get this:
@aelaraji@aelaraji.com Rsync has a ton of options and I probably still haven't scratched the surface, but I was able to memorize the options I actually need for day-to-day work in a relatively short time. I guess I'm the opposite of you, because I don't know any scp(1) options.
Been trying to get acquainted with rsync(1) but, whenever I Tab for completion and get this:
λ ~/ rsync <Tab>
zsh: do you wish to see all 484 possibilities (162 lines)?
I'm like: Nope! a scp -rpCq ... or whatever option salad will do just fine.
[Insert: "Ain't nobody got time fo' that!" meme.]
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: …" and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
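To make that concrete, a reply twt carrying both the legacy hash subject and an appended location-based reference might look like this (hash, URL, and timestamps are made up for illustration):

2024-09-20T12:00:00Z (#abc1234) (replyto https://example.com/twtxt.txt 2024-09-19T08:30:00Z) Thanks for the tip!

Old clients would thread on the (#abc1234) part and could simply ignore the rest.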
However I recognize that I'm not the one implementing this stuff, and it's less work to just have everything determined up front.
Misc comments (I haven't read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead.
"Clients MUST preserve the original hash": do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don't like the MUST in "Clients MUST follow the chain of reply-to references…". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn't declare the client non-conforming just because they didn't get to all the bells and whistles.
Similarly I don't like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I'm thinking again of the 40-line shell script).
For "who follows" lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?
Why can't feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn't too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I'm a little sad about other protocols being not recommended.
I don't know how I feel about including markdown. I don't mind too much that yarn users emit twts full of markdown, but I'm more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
Been thinking about it for the last couple of days, and I would say we can make do with the shorter (#<DATETIME URL>) since it mirrors the twt-mention syntax and simply points to the OP as the topic, identified by the time of posting it. Do we really need (edit:...) and (delete:...) as well?
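For illustration, a reply using only that shorter form might look like this (time and URL made up):

2024-09-18T09:00:00Z (#2024-09-17T14:05:19Z https://example.com/twtxt.txt) Good point, I agree!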
@aelaraji@aelaraji.com This is one of the reasons why yarnd has a couple of settings with some sensible/sane defaults:
I could already imagine a couple of extreme cases where, somewhere in this peaceful world, one's exercise of freedom of speech could get them in real trouble (if not danger) if found out; it wouldn't necessarily have to involve the law or legal authorities. So, if someone asks, maybe fearing for… let's just say "their well-being", would it hurt if a pod just purged the content it's serving publicly (and maybe relayed the info to other pods) and called it a day? It doesn't have to be about some law/convention somewhere… 🤷 I know, too extreme! But I've seen news of people who'd gone to jail or had their lives ruined for as little as a silly joke. And it doesn't even have to be about any of this.
There are two settings:
$ ./yarnd --help 2>&1 | grep max-cache
--max-cache-fetchers int set maximum number of fetchers to use for feed cache updates (default 10)
-I, --max-cache-items int maximum cache items (per feed source) of cached twts in memory (default 150)
-C, --max-cache-ttl duration maximum cache ttl (time-to-live) of cached twts in memory (default 336h0m0s)
So yarnd pods by default are designed to only keep Twts publicly visible, on either the anonymous Frontpage or Discover view, or your Timeline or the feed's Timeline, for up to 2 weeks with a maximum of 150 items, whichever gets exceeded first. Any Twts over this are considered "old" and drop off the active cache.
It's a feature that my old man @off_grid_living@twtxt.net was very strongly in support of, as was I, back in the day of yarnd's design (nothing particularly to do with Twtxt per se), and one that I've stuck by to this day, even though there are some that have different views on this 🤣
@david@collantes.us Thanks, that's good feedback to have. I wonder to what extent this already exists in registry servers and yarn pods. I haven't really tried digging into the past in either one.
How interested would you be in changes in metadata and other comments in the feeds? I'm thinking of just permanently saving every version of each twtxt file that gets pulled, not just the twts. It wouldn't be hard to do (though presenting the information in a sensible way is another matter). Compression should make storage a non-issue unless someone does something weird with their feed like shuffle the comments around every time I fetch it.
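A minimal sketch of that kind of archiving (Python; the directory layout and naming scheme here are my own illustration, not anything a registry actually does today):

import gzip, hashlib, pathlib, time

def archive_feed(url: str, body: bytes, root: str = "archive") -> None:
    # one directory per feed, keyed by a digest of the feed URL
    feed_dir = pathlib.Path(root) / hashlib.sha256(url.encode("utf-8")).hexdigest()
    feed_dir.mkdir(parents=True, exist_ok=True)
    # one compressed snapshot per fetch, named by fetch time
    snapshot = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime()) + ".txt.gz"
    with gzip.open(feed_dir / snapshot, "wb") as f:
        f.write(body)

Identical fetches could additionally be deduplicated by comparing a digest of the body before writing.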
@prologic@twtxt.net what time in UTC?
I wrote some code to try out non-hash reply subjects formatted as (replyto <url> <timestamp>), while keeping the ability to use the existing hash style.
I don't think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny's beautiful code. I don't write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny's current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it's possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven't been seen yet but are wanted (a sketch of the implied schema follows the list below). When one of those "wanted" twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto …) format.
- Probably other important things I'm forgetting.
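To make the database idea concrete, here is a sketch of the schema the description above implies (Python; table and column names are my own illustration, so consult the actual patch for what it really does):

import sqlite3

con = sqlite3.connect("jenny.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS seen_twts (
    url       TEXT NOT NULL,  -- feed URL the twt appeared in
    timestamp TEXT NOT NULL,  -- the twt's RFC3339 timestamp
    msgid     TEXT NOT NULL,  -- Message-Id assigned to the mail file
    PRIMARY KEY (url, timestamp)
);
CREATE TABLE IF NOT EXISTS wanted_twts (
    url       TEXT NOT NULL,  -- referenced by a reply subject...
    timestamp TEXT NOT NULL,  -- ...but not yet seen in any fetched feed
    mail_path TEXT NOT NULL,  -- mail file to rewrite with In-Reply-To later
    PRIMARY KEY (url, timestamp, mail_path)
);
""")
con.commit()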
I'm bad with faces, I know that. But I'm having a really hard time recognizing Linus in this video:
https://www.youtube.com/watch?v=4WCTGycBceg
Basically a different person to me. Is it just me or has he really changed that much? 😳
Can I get someone like maybe @xuu@txt.sour.is or @abucci@anthony.buc.ci or even @eldersnake@we.loveprivacy.club, if you have some spare time, to test this yarnd PR that upgrades the Bitcask dependency for its internal database to v2?
VERY IMPORTANT: If you do, please, please, please backup your yarn.db database first!
Heaven knows I don't want to be responsible for fucking up a production database here or there 🤣
@quark@ferengi.one Do you mean something like this?
$ ./yarnc debug ~/Public/twtxt.txt | tail -n 1
kp4zitq 2024-09-08T02:08:45Z (#wsdbfna) @<aelaraji https://aelaraji.com/twtxt.txt> My work has this thing called "compressed work", where you can **buy** extra time off (_as much as 4 additional weeks_) per year. It comes out of your pay though, so it's not exactly a 4-day work week but it could be useful, just haven't tired it yet as I'm not entirely sure how it'll affect my net pay
@lyse@lyse.isobeef.org fully agree. I have never been a fan of relative times to begin with, so that one will go away, foh sho! :-D
Woot, yes! It works perfectly. By the time you see my twtxt, it is already at the main Ferengi.one website.
OK, OK, the third time is the charm. Maybe?
@aelaraji@aelaraji.com this is my change on main.go (but it can be done on a template now, so no reason to touch the code):
<time class="dt-published" datetime="{{ $twt.Created | date "2006-01-02T15:04:05Z07:00" }}">
{{ $twt.Created | date "2006-01-02 15:04:05 MST" }}
</time>
See https://ferengi.one. I am going to further customise things, but that's a start.
@bender@twtxt.net LOL, normally things (in the vanilla template) render like <time class="dt-published" datetime="2024-09-17T15:05:19+01:00"> 2024-09-17 14:05:19 +0000 UTC+0000 </time>; the datetime=... attribute is in my local time UTC+1, then the text within the tag is in UTC+0.
The thing is, I've been poking at the template as well, but nothing changes. I literally added whole portions of lorem text just to see if it would do anything, then twtxt2html -T ./layout.html <link to twtxt file> | less shows the same thing as before! Nothing changes. LOL, I'm not sure I'm going at it the right way.
@movq@www.uninformativ.de I didn't run the command as you recommended, but I wiped things once more, ran jenny -f, and this time got:
david@arrakis:~$ jenny -f
Fetching archived feed https://anthony.buc.ci/user/abucci/twtxt.txt/1 (configured as abucci, https://anthony.buc.ci/user/abucci/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://darch.dk/twtxt-archive.txt (configured as soren, https://darch.dk/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2024-04-21_6v47cua.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/1 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2022-12-21_2us6qbq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/2 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2022-01-14_ew5gzca.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/3 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-12-23_f6y65bq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/4 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-12-04_e4x7yba.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/5 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-11-18_42tjxba.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/6 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-11-08_i2wnvaa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-10-23_kvwn5oa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-10-11_mljudaa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-09-22_5mkqwua.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-07-27_xcnzmlq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-06-16_mtedqya.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-04-29_z7lvzja.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-03-19_xjabvhq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-02-24_te4a6oa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-01-26_qxgigma.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2020-12-13_igfnala.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2020-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Notice that @prologic@twtxt.net's /6 is there. I found the twtxt then. Kind of odd it didn't show before.
(#hash;#originalHash) would also work.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic, or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement but they are kind of against the beauty of the original twtxt, because you need special client support for them. It's not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.
@bender@twtxt.net Now that I'm thinking about it, I could just add in a cron job on my remote machine with: twtxt2html https://domain.ltd/twtxt.txt > /path/to/static_files_dir/ so that way I could benefit from the "relative time" portion I'm getting rid of with the -n …
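A sketch of such a crontab entry (schedule, paths, and the output file name are placeholders; note the redirect should target a file inside the static directory):

*/30 * * * * twtxt2html https://domain.ltd/twtxt.txt > /path/to/static_files_dir/index.html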
@prologic@twtxt.net I'd be glad to! I'm just taking time to get well acquainted with its ins and outs before saying something stupid.
Like… I've just noticed the -n 🫣
i have been using grapheneos for a long time (maybe 7 years now?) and i would fully recommend it to anyone who is OK with buying a new pixel device to run it on. the install instructions are really easy to follow and can be executed on any device that supports WebUSB https://grapheneos.org/install/web
so it looks like genode has taken some inspiration from nix.. that's a rabbit-hole for another time https://genode.org/documentation/developer-resources/package_management
Also, what are the chances that the same human will make two different posts within the same second?!
Just out of curiosity: what would happen someday if I (maybe trolling) edited my twtxt.txt file manually and swapped a couple of twt timestamps, or added in 3 different twts manually with the same timestamp?
@prologic@twtxt.net earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.
Imagine I found this twt one day at https://example.com/twtxt.txt :
2024-09-14T22:00Z Useful backup command: rsync -a "$HOME" /mnt/backup
and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to
2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory
which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)
That's what I meant by "spoofing" in an earlier twt.
I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.
Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
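For anyone who wants to reproduce this, here is a sketch of the hashing scheme as I understand it (blake2b-256 over the feed URL, RFC3339 timestamp, and content joined by newlines, base32-encoded without padding; double-check the exact payload format against the spec):

import base64, hashlib

def twt_hash(url: str, timestamp: str, content: str, length: int = 7) -> str:
    payload = "\n".join([url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return b32[-length:]  # final char encodes just 1 of the 256 bits, hence always "a" or "q"

The numbers line up with the birthday-paradox estimate: 11 trailing base32 characters nominally carry 55 bits, but since the final character contributes only 1 bit it is closer to 51 bits, and 2^(51/2) ≈ 47 million, roughly the 43 million hashes computed above.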
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
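(For illustration: a tag URI looks like tag:example.com,2024:twtxt, and a UUID URN looks like urn:uuid:123e4567-e89b-12d3-a456-426614174000.)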
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
Craters ⌘ Read more
@xuu@txt.sour.is Thanks for the link. I found a pdf on one of the authors' home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is around the same below around 600 Mbps.
To be fair, I don't think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it's part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.
# follow_notify = gemini://foo/bar
to your feed's metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that? Do you regularly read your gemini logs? 🤔
@movq@www.uninformativ.de Damn! I'm two years late to the discussion.
So basically, one could just make a bash script/cron job on the side for pinging non-HTTP feeds from time to time, and the receiving end would get it IF they check their logs.
So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If i get a new URL it's almost as if i have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-Hash in place. Changing that now is basically a no-go. But we can create a signature chain that can link identities together. So if i move to a new URL i update the chain hosted by my primary identity to include the new URL. If i have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url.
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it's all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file.. but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature:
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
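A sketch of how a client might pull those entries out of a feed preamble (Python; the line format is just the one illustrated above, and actual saltpack signature verification is left out):

def parse_chain(lines):
    """Collect (operation, argument, signature) triples from '# chain:'/'# sig:' pairs."""
    entries, pending = [], None
    for line in lines:
        if line.startswith("# chain: "):
            op, _, arg = line[len("# chain: "):].partition(" ")
            pending = (op, arg)
        elif line.startswith("# sig: ") and pending is not None:
            entries.append((*pending, line[len("# sig: "):]))
            pending = None
    return entries

Each signature would then be checked against its change line concatenated with the previous signature, using whichever keys the chain has added so far.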
I wonder if bento has slightly missed the key to being a total genius approach to host management. ok hear me out. each node periodically pulls configuration from a coordination node that hosts a binary cache. the admin may make changes and pre-build them, maybe kick off an update task manually if they want, but the point is there's an automated checkin. for my case, the device I have available for coordination isn't really capable of hosting a binary cache for any of my other machines. the nix store for my dev machine is larger than the entire disk of the coordinator! and due to the yearly heat my best machine can't be reliably powered on all the time. so i started thinking to myself, "self, what if instead of having a central coordinator we fetched configuration from a reliable git mirror (maybe git+torrent some day) and consume it as a flake. the source could even be swapped out using a flake registry (so you don't even have to commit to self-hosting anything other than a json file). then managed hosts only have to be setup to consume the registry and the shared flake (which registers the update agent) and DONE?"
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
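A sketch of that multi-hash indexing (Python; assumes the hashing scheme in use today, and that the feed's url metadata fields are already parsed):

import base64, hashlib

def twt_hash(url, timestamp, content, length=7):
    payload = "\n".join([url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-length:]

def index_keys(feed_urls, timestamp, content):
    # one hash per advertised url; a reply matches the thread if it matches any of them
    return {twt_hash(u, timestamp, content) for u in feed_urls}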
Clown visits may shorten the amount of time children spend in hospital
Medical clowns, who play with children in hospitals, may help them be discharged sooner by reducing their heart rates ⌘ Read more
@movq@www.uninformativ.de Another idea: just hash the feed url and time, without the message content. And don't twt more than once per second.
Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.
Though I kind of like the idea of twts being immutable. At least, it's clear which version of a twt you're replying to (assuming nobody is engineering hash collisions).
@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original twt with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".
What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.
I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
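For illustration, an edit of a twt that was itself a reply might then carry two subjects (hashes made up):

2024-09-06T10:00:00+00:00 (#abcd123) (#efgh567) (redit) Corrected text of the reply.

where #abcd123 is the edited twt's old hash and #efgh567 is the parent it was replying to.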
But the great thing about the current system is that nobody can spoof message IDs.
I don't think twtxt hashes are long enough to prevent spoofing.
All this hash breakage made me wonder if we should try to introduce "message IDs" after all.
But the great thing about the current system is that nobody can spoof message IDs. 🤔 When you think about it, message IDs in e-mails only work because (almost) everybody plays fair. Nothing stops me from using the same Message-ID header in each and every mail; that would break e-mail threading all the time.
In Yarn, twt hashes are derived from twt content and feed metadata. That is pretty elegant and I'd hate to see us lose that property.
If we wanted to allow editing twts, we could do something like this:
2024-09-05T13:37:40+00:00 (~mp6ox4a) Hello world!
Here, mp6ox4a would be a "partial hash": to get the actual hash of this twt, you'd concatenate the feed's URL and mp6ox4a and get, say, hlnw5ha. (Pretty similar to the current system.) When people reply to this twt, they would have to do this:
2024-09-05T14:57:14+00:00 (~bpt74ka) (#hlnw5ha) Yes, hello!
That second twt has a partial hash of bpt74ka and is a reply to the full hash hlnw5ha. The author of the "Hello world!" twt could then edit their twt and change it to
2024-09-05T13:37:40+00:00 (~mp6ox4a) Hello friends!
or whatever. Threading wouldn't break.
Would this be worth it? It's certainly not backwards-compatible.
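If the concatenation step were taken literally, the lookup could be as simple as this sketch (digest choice, encoding, and truncation are all assumptions, mirroring the current scheme):

import base64, hashlib

def full_hash(feed_url: str, partial: str, length: int = 7) -> str:
    digest = hashlib.blake2b((feed_url + partial).encode("utf-8"), digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-length:]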
Managing Active Directory groups with Just-in-Time Administration: https://www.it-connect.fr/active-directory-administration-just-in-time-outil-gestion-pam/?utm_content=cmp-true
@abucci@anthony.buc.ci OMFG! Dear jebus, look at the size of that! :-/ It is just a matter of time until one of those randomly falls on any of us. Just incredible!
@movq@www.uninformativ.de Thanks! Looking forward to trying it out. Sorry for the silence; I have become unexpectedly busy so no time for twtxt these past few days.
Pinellas Trail Challenge: 36.30 miles, 00:14:47 average pace, 08:56:28 duration
DNF'd, but had a great time! i could have walked it out the last ten miles and still made the cut-off but after doing the math i would not have been done until about 1900 and we had guests visiting from out of town and i did not want to be even more of a wreck while entertaining. met a lot of great people and happy for the challenge.
#running #race
It's a really good time to invest in nVIDIA shares 🤣
@abucci@anthony.buc.ci appreciate it if you find the time to update again.
@lyse@lyse.isobeef.org I have no idea, I still haven't found a repair shop I can trust with my monitor. As for the blackouts, they don't have a consistent frequency. Sometimes it's once every 3 months… other times it's 3 times a day.