@movq@www.uninformativ.de I've got this magic spell in my config: -f bestvideo[height<=?1080]+bestaudio/best
@itsericwoodward@itsericwoodward.com pretty cool! Started following you so I don't miss any progress. Thanks for the exhaustive reply!
@lyse@lyse.isobeef.org that's pretty cool! The first video of that kind I've seen on YouTube. Thanks!
@lyse@lyse.isobeef.org Hm, I couldn't trick yt-dlp into downloading the correct format. Works in the browser, though.
@bender@twtxt.net @movq@www.uninformativ.de I had automatically yt-dlped https://www.youtube.com/watch?v=OZTSIYkuMlU. It's only worth it as an experiment, no recommendation to watch.
@lyse@lyse.isobeef.org I can't remember the last time I came across a 360° video. 🤔
@bender@twtxt.net A renewed vision test might be a good idea for some people. I mean, it is kind of curious that you get this license as a young person and then it lasts a lifetime, without any further tests. As long as you don't screw up really badly, it remains valid …
@movq@www.uninformativ.de I tried making an ASCII scribble of a penguin waving back at you, but I gave up halfway through my first try xD … HI! 👋
Thanks @bender@twtxt.net, it's been a long time indeed, but I was here the whole time. Just silent. I just didn't have much that was meaningful or worth twting about … /ME flips a bird to life
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP done of project #2 (the reader).
But that's where I've been struggling. The idea seems simple enough: another Node / Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real-life twtxt files, so some twts come through while others get lost in the ether. I'll figure it out eventually, but for now I've been spending far more time than I anticipated just trying to get it to work end-to-end.
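In rough pseudo-code, the reader's flow is something like the sketch below (illustrative only, not the app's actual code; parseTwts and the follow-parsing regex are made-up stand-ins, and it assumes follow lines of the usual # follow = <nick> <url> form):

// Sketch: build a "river" from the feeds a twtxt file says it follows.
async function buildRiver(myFeedUrl) {
    const myFeed = await (await fetch(myFeedUrl)).text();

    // Collect followed feed URLs from "# follow = <nick> <url>" metadata comments.
    const followUrls = [...myFeed.matchAll(/^#\s*follow\s*=\s*\S+\s+(\S+)/gm)]
        .map((m) => m[1]);

    // Fetch every followed feed; tolerate the ones that fail or are malformed.
    const results = await Promise.allSettled(
        followUrls.map(async (url) => parseTwts(await (await fetch(url)).text(), url))
    );

    // Merge everything into one reverse-chronological river of twts.
    return results
        .filter((r) => r.status === 'fulfilled')
        .flatMap((r) => r.value)
        .sort((a, b) => new Date(b.timestamp) - new Date(a.timestamp));
}

// Minimal twt parser: each non-comment line is "<RFC 3339 timestamp>\t<content>".
function parseTwts(text, url) {
    return text
        .split('\n')
        .filter((line) => line && !line.startsWith('#') && line.includes('\t'))
        .map((line) => {
            const [timestamp, ...rest] = line.split('\t');
            return { url, timestamp, content: rest.join('\t') };
        });
}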
On top of that, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to get project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
@movq@www.uninformativ.de better than in the US. Ours lasts only 10 years, and you need to go through the vision test and, of course, pay. Recently they added a little gold star denoting "real ID" compliance, and we had to pay $10 to get the old one replaced, outside of the regular renewal "schedule".
Here it is all about control and money.
@lyse@lyse.isobeef.org is it a 360-degree video online, or a local one?
@alexonit@twtxt.alessandrocutolo.it Yeah, I kind of love your stance and position on this. If we are going to make changes in the threading model, let's keep content-based addressing, but also improve the user experience. So in fact, to answer your question, I think yes, we can do some kind of combination of both.
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subject of other posts to find the related posts, right?
In my own client's current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the commonly implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);

    // New location approach
    const location = post.url + '#' + post.timestamp;

    // Unchanged (probably)
    const subject = post.subject;

    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}

// Both should work if the index contains both versions:
getThreadBySubject('#abcdef');                                 // => [post1, post2, post3] (hash)
getThreadBySubject('https://example.com#2025-01-01T12:00:00'); // => [post1, post2, post3] (location)
As I said before, mentions are already location-based (@<example https://example.com/twtxt.txt>), so I think we should take that into consideration.
Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed "header" for each post, but it can be seen as a separate extension).
@lyse@lyse.isobeef.org I don't think there's any point in continuing the discussion of location- vs. content-based addressing.
I want us to preserve Content based addressing.
Let's improve the user experience and fix the hash collision problems.
@aelaraji@aelaraji.com welcome back dude! Long time no see!
@movq@www.uninformativ.de Yeah, it took quite some time to load. But then it was briefly back. Now it's 503ing immediately all the time.
@lyse@lyse.isobeef.org https://www.celestrak.com (where the TLE files are downloaded from) seems to be down. :/
@lyse@lyse.isobeef.org 🍿🍿🍿
@movq@www.uninformativ.de Woah, cool!
(WTF, asciiworld-sat-track somehow broke, but I have not changed any of the scripts at all. O_o It doesn't find the asciiworld-sat-calc anymore. How in the world!? When I use an absolute path, the .tle is empty and I get a parsing error. Gotta debug this.)
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible: the feed URL is part of the hashed value in one and part of the reference in the location-based scheme. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.
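For illustration, resolving a location-based reference from a local cache is no more work than resolving a hash. A tiny sketch with a hypothetical in-memory cache keyed by (feed URL, timestamp):

// Hypothetical local cache keyed by "feedUrl\ntimestamp" instead of by hash.
const cache = new Map();

function cacheKey(feedUrl, timestamp) {
    return `${feedUrl}\n${timestamp}`;
}

function storeTwt(twt) {
    cache.set(cacheKey(twt.url, twt.timestamp), twt);
}

// Resolving a location-based reference works even if the feed is offline,
// as long as the twt was seen before, exactly like looking up a hash.
function lookupTwt(feedUrl, timestamp) {
    return cache.get(cacheKey(feedUrl, timestamp)) ?? null;
}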
Even a new hashing algorithm requires work on clients etc. It's not that you get backwards compatibility for free. It just cannot be backwards compatible in my opinion, no matter which approach we take. That's why I believe some magic switchover time causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
@movq@www.uninformativ.de Happy equinox!
It looks amazing from the map; you probably can't tell even by looking from space.
@kat@yarn.girlonthemoon.xyz Oh! A new place to spam dad jokes. 🥳
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
The current hash relies on a url field too: by specification, it will use the first # url = <URL> in the feed's metadata if present. That too can be different from the fetching source, and if that field changes, it would break the existing hashes as well. A better solution would be to use a non-URL key like # feed_id = <UNIQUE_RANDOM_STRING>, with the url as a fallback. We can prevent duplication if the reference uses that same url field too, or if the client "collapses" any reference to any of the URLs defined in the metadata.
I agree that hashing based on content is good, but we still use the URL as part of the hashing, and it is just a field in the feed, easily replicable by a bot. Note that edits can also break the hash; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.
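To make that concrete, here is a throwaway sketch (plain SHA-256, not the actual Twt Hash algorithm or encoding) showing that both a changed url field and an edited post produce a different reference, which orphans any existing replies:

const crypto = require('node:crypto');

// Illustrative only: the point is that the reference is derived from
// (url, timestamp, content), so changing any of them changes the reference.
function ref(url, timestamp, content) {
    return crypto
        .createHash('sha256')
        .update(`${url}\n${timestamp}\n${content}`)
        .digest('hex')
        .slice(0, 7);
}

const original = ref('https://example.com/twtxt.txt', '2025-01-01T12:00:00Z', 'Hello!');
const urlMoved = ref('https://example.org/twtxt.txt', '2025-01-01T12:00:00Z', 'Hello!');  // url field changed
const edited   = ref('https://example.com/twtxt.txt', '2025-01-01T12:00:00Z', 'Hello!!'); // content edited

console.log(original === urlMoved, original === edited); // false false: old replies now point nowhere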
For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I'm missing some context on this one.
To prevent collisions, there was a discussion about extending the hash (I forget whether that was already fixed or not), but without a fallback that would break existing clients too. We should think of a parallel format that leaves current implementations unchanged; we are already backward compatible with the original clients that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
We should also keep in mind that the current mention format is already location-based (@<example https://example.com/twtxt.txt>), so I'm not that worried about threads working the same way.
Hope to see some other thoughts on this matter. 🤔
@thecanine@twtxt.net Thanks!
Looking forward to it!
@alexonit@twtxt.alessandrocutolo.it @lyse@lyse.isobeef.org i really don't understand why this was not the solution in the first place. given how simple twtxt is (meant to be), a reply should be as simple as #<https://example.com/twtxt.txt#2025-09-22T06:45Z>, with the timestamp in an anchor link. the need for a mention is avoided like this as well, since it's already linking to the replied-to feed!
i should just implement it into bbycll and force it into existence
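a rough sketch of how a client could split such a reply reference (hypothetical helper, not actual bbycll code), assuming the last # inside the brackets separates the feed URL from the timestamp:

// Parse a location-based reply subject like
// #<https://example.com/twtxt.txt#2025-09-22T06:45Z>
// into its feed URL and timestamp. Hypothetical helper, not from any real client.
function parseLocationSubject(subject) {
    const match = subject.match(/^#<(.+)>$/);
    if (!match) return null;
    const inner = match[1];
    const sep = inner.lastIndexOf('#'); // last '#' splits URL from timestamp
    if (sep === -1) return null;
    return { url: inner.slice(0, sep), timestamp: inner.slice(sep + 1) };
}

// parseLocationSubject('#<https://example.com/twtxt.txt#2025-09-22T06:45Z>')
// => { url: 'https://example.com/twtxt.txt', timestamp: '2025-09-22T06:45Z' }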
@kat@yarn.girlonthemoon.xyz Mine shows 1/1 of 14 Twts. I think this is a bug 🤯
@itsericwoodward@itsericwoodward.com any news about this? I am, at the very least, curious!
@alexonit@twtxt.alessandrocutolo.it I took it down mostly because of continued abuse and spam. I intend to fix it and improve the service and its sister at some point.
@alexonit@twtxt.alessandrocutolo.it thank you and welcome back to Yarn! The somewhat plushie-like look is intentional, so I'm glad it was noticed.
I only have 2 sizes of him in this pose, as well as in most other sitting poses, but if there's ever a sitting pose shared by more than 2 of them, I'll be sure to make a matrioska edit.
@lyse@lyse.isobeef.org Yeah, the format is just an idea of how it could work.
The order of SOURCE > POST does make more sense indeed.
@prologic@twtxt.net thanks, I already follow @important_dev_news@n8n.andros.dev too.
BTW, the feed on https://feeds.twtxt.net/ seems down? It says it's in maintenance.
@alexonit@twtxt.alessandrocutolo.it Love this!
@alexonit@twtxt.alessandrocutolo.it Yeah, same 🤣 There's also this @news-minimalist@feeds.twtxt.net feed that shows the most important shit™ anyway (when/if that happens).
@alexonit@twtxt.alessandrocutolo.it Personally, I find the reversed order of URL first and then timestamp more natural for referencing something. Granted, URL last would be kinda consistent with the mention format. However, the timestamp doesn't act as a link text or display text like in a mention, so it's somewhat different in my opinion. But yeah.
@prologic@twtxt.net Me neither; if there's any important news, others usually tell me anyway.
@bender@twtxt.net Seriously, I have zero clue 🤣 I don't read or watch any news, so I have no idea 🤦‍♂️
@prologic@twtxt.net I know you were away, but were you under a rock?!
@prologic@twtxt.net Yes, no doubt. There's always something somewhere.
@lyse@lyse.isobeef.org Some stuff is actually more reliable, that's true. It's also waaaaaaaaaaay more expensive, though … :-)
I called it a day, yes. \o/
@movq@www.uninformativ.de But it's so reliable and they have all the experts, they know what they're doing! And don't forget, it's way cheaper! Just think of the 34 cents saved every year on paper, the business dude calculated!
Enjoy your weekend! (I hope you just called it a day and don't have to drive to the office or do silly shenanigans like that.)
@lyse@lyse.isobeef.org Yep! Super fast and efficient!
@prologic@twtxt.net welcome back! 🥳🥳🥳
@movq@www.uninformativ.de That's transparency hardware support!
nicks? i remember reading somewhere that whitespace should not be allowed, but i don't see it in the spec on twtxt.dev. in fact, are there any other resources on twtxt extensions outside of twtxt.dev?
@zvava@twtxt.net In tt, I recognize umlauts in nicks, but they cannot include whitespace, @, !, #, (, ), [, ], <, >, " (but ' is okay). Whitespace also acts as a separator between nick and URL. @<Hello World http://example.com> ends up exactly like that and is not a mention.
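A rough approximation of that rule as a mention matcher (illustrative regex only, not tt's actual implementation):

// Match mentions of the form @<nick url>, where the nick may contain umlauts
// and most other characters, but none of: whitespace @ ! # ( ) [ ] < > "
// (a single quote ' is fine). Whitespace then separates the nick from the URL.
const MENTION = /@<([^\s@!#()\[\]<>"]+)\s+(\S+)>/g;

for (const [, nick, url] of 'hi @<lyse https://lyse.isobeef.org/twtxt.txt>!'.matchAll(MENTION)) {
    console.log(nick, url); // lyse https://lyse.isobeef.org/twtxt.txt
}

// '@<Hello World http://example.com>' does not match: after the nick "Hello",
// the remainder still contains whitespace before the closing '>', so it is not a mention.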