I have released new updates to the twtxt.el client.
- Markdown to Org mode (you need to install Pandoc).
- Centred column.
- Added new logo.
- Added text helper.
In the next version I will try to finish the visual thread view. You still can’t see threads yet.
#emacs #twtxt #twtxtel
Qualcomm gives OEMs the option of 8 years of Android updates
Starting with Android smartphones running on the Snapdragon 8 Elite Mobile Platform, Qualcomm Technologies now offers device manufacturers the ability to provide support for up to eight consecutive years of Android software and security updates. Smartphones launching on new Snapdragon 8 and 7-series mobile platforms will also be eligible to receive this extended support. ↫ Mike Genewich I mean, good news of cou … ⌘ Read more
I suspect the problem is that the content is updated. It looks like a design problem.
@aelaraji@aelaraji.com You can update the package 😀
@eapl.me@eapl.me Yeah, you need some kind of storage for that. But chances are that there’s already a cache in place. Ideally, the client remembers etags or last modified timestamps in order to reduce unnecessary network traffic when fetching feeds over HTTP(S).
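For illustration, a minimal sketch of such a conditional fetch with If-None-Match / If-Modified-Since; the fetcher here is hypothetical, not any particular client’s code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// feedState remembers the validators from the last successful fetch.
type feedState struct {
	ETag         string
	LastModified string
}

// fetchFeed performs a conditional GET: if the server answers
// 304 Not Modified, no body is transferred and the cached copy stays valid.
func fetchFeed(url string, state *feedState) (body []byte, changed bool, err error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, false, err
	}
	if state.ETag != "" {
		req.Header.Set("If-None-Match", state.ETag)
	}
	if state.LastModified != "" {
		req.Header.Set("If-Modified-Since", state.LastModified)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotModified {
		return nil, false, nil // feed unchanged, reuse the cache
	}

	body, err = io.ReadAll(resp.Body)
	if err != nil {
		return nil, false, err
	}
	// Remember the new validators for the next poll.
	state.ETag = resp.Header.Get("ETag")
	state.LastModified = resp.Header.Get("Last-Modified")
	return body, true, nil
}

func main() {
	state := &feedState{}
	body, changed, err := fetchFeed("https://example.com/twtxt.txt", state)
	if err != nil {
		panic(err)
	}
	fmt.Println(changed, len(body))
}
```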
A newsreader without read flags would be totally useless to me. But I also do not subscribe to fire hose feeds, so maybe that’s a different story with these. I don’t know.
To me, filtering read messages out and only showing new messages is the obvious solution. No need for notifications in my opinion.
There are different approaches with read flags. Personally, I like to explicitly mark messages read or unread. This way, I can think about something and easily come back later to reply. Of course, marking messages read could also happen automatically. All decent mail clients I’ve used in my life offered even more advanced features, like delayed automatic marking.
All I can say is that I’ve been super happy with that for years. It works absolutely great for me. The only downside is that I see heaps of “new”, albeit years-old, messages when a bug causes a feed to be incorrectly updated (https://twtxt.net/twt/tnsuifa). ;-)
Redox’ relibc becomes a stable ABI
The Redox project has posted its usual monthly update, and this time, we’ve got a major milestone creeping within reach. Thanks to Anhad Singh for his amazing work on Dynamic Linking! In this southern-hemisphere-Redox-Summer-of-Code project, Anhad has implemented dynamic linking as the default build method for many recipes, and all new porting can use dynamic linking with relatively little effort. This is a huge step forward for Redox, because relibc can now beco … ⌘ Read more
@andros@twtxt.andros.dev Awesome! I’ve seen the demo earlier on mastodon, things are getting better and better with each update 👌 Good luck!
GTK announces X11 deprecation, new Android backend, and much more
Since a number of GTK developers came together at FOSDEM, the project figured now was as good a time as any to give an update on what’s coming in GTK. First, GTK is implementing some hard cut-offs for old platforms – Windows 10 and macOS 10.15 are now the oldest supported versions, which will make development quite a bit easier and will simplify several parts of the codebase. Windows 10 was released in 2 … ⌘ Read more
Today at work we’re rolling out a big update to the CMS for the central websites. Hopefully it all goes well. 😱
Ahh yes, what I like to call “wild wild west” upgrading.😂
Felt like that when I upgraded/updated an Arch Linux machine that had been sitting for a couple years unused.
New human-like species discovered in China + 3 more stories
Scientists propose a new human-like species based on ancient fossils; oceans warm four times faster than in the 1980s; researchers recreate endosymbiosis significantly in the lab; CIA updates its Covid-19 origins assessment, hinting at a lab leak. ⌘ Read more
Here’s a twt from @andros@twtxt.andros.dev ’s new version of Twtxt-el 🥳 It feels WAaaaaY better! although it freezes on me as soon as I navigate to the next page complaining about some bad url, but the chronological sorting of the feed as well as the navigation buttons (links?) are a great addition. Looking forward to the next update already! 😁 🥳🥳🥳
Muons
⌘ Read more
After an update, my work phone tells me: “Your Pixel can do even more now!” Aha. Is it a voxel now? Can it do more than 256 colours now? Or what? I’m clearly not the target audience for slogans like that …
⨁ Follow button on their profile page or use the Follow form and enter a Twtxt URL. You may also find other feeds of interest via Feeds. Welcome! 🤗
@prologic@twtxt.net @lyse@lyse.isobeef.org it seems a recent update reset my pod settings to open registration.
@xuu@txt.sour.is The Pod.LastSeen and Pod.LastUpdated fields are only ever updated in the Cache.DetectPodFromUserAgent(…) function as far as I can tell. This function is called in Cache.DetectClientFromRequest(…) and Cache.DetectClientFromResponse(…).
Cache.DetectClientFromRequest(…) is only invoked when the twtxt.txt is requested and looks at the User-Agent HTTP request header. Cache.DetectClientFromResponse(…) is only called in Cache.FetchFeeds(…) and looks at the Powered-By HTTP response header. This header would be set in twtxt.txt HTTP responses from yarnd. A bunch of places invoke Cache.FetchFeeds(…), including a periodic job (UpdateFeedsJob.Run()). Maybe something is iffy around these locations.
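For illustration only, a rough sketch of the kind of header-driven bookkeeping described above; the type and function names here are hypothetical stand-ins, not yarnd’s actual API:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// podInfo is a hypothetical stand-in for the per-pod bookkeeping
// (LastSeen / LastUpdated) discussed above.
type podInfo struct {
	Name        string
	LastSeen    time.Time
	LastUpdated time.Time
}

var pods = map[string]*podInfo{}

// detectPod records a pod whenever a header value identifies one: the
// User-Agent header on incoming twtxt.txt requests, or the Powered-By
// header on responses seen while fetching feeds.
func detectPod(headerValue string) {
	if !strings.Contains(headerValue, "yarnd") {
		return // not a pod we recognise
	}
	p, ok := pods[headerValue]
	if !ok {
		p = &podInfo{Name: headerValue}
		pods[headerValue] = p
	}
	now := time.Now()
	p.LastSeen = now
	p.LastUpdated = now
}

func main() {
	detectPod("yarnd/0.16 (+https://pod-a.example)") // request side: User-Agent
	detectPod("yarnd/0.16 (+https://pod-b.example)") // fetch side: Powered-By
	fmt.Println(len(pods), "pods seen")
}
```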
I updated the specification with base64, Curve25519 and more examples: https://github.com/tanrax/twtxt-direct-message-extension
MorphOS 3.19 released
It’s been about 18 months, but we’ve got a new release for MorphOS, the Amiga-like operating system for PowerPC Macs and some other PowerPC-based machines. Going through the list of changes, it seems MorphOS 3.19 focuses heavily on fixing bugs and addressing issues, rather than major new features or earth-shattering changes. Of note are several small but important updates, like updated versions of OpenSSL and OpenSSH, as well as a ton of new filetype definitions – and so much more. Havin … ⌘ Read more
EdgeGuard Update:
I am now in a position where I no longer have any ports open on my firewall at the Mills DC. 🥳 All services (Gopher, SMTP, IRC, SSH, HTTP) are being proxied through my edge network 💪
Although I agree that it helps, I don’t think it’s entirely correct to leave the nick definition to the source .txt. It could be wrong from the start or become outdated over time.
I’d rather get it from the mentioned feed’s nick metadata (which could be cached for performance).
So my vote would be to make it mandatory to follow @<name url>, but only use that name/nick if the feed at the URL doesn’t define another nick.
A main advantage is that when the destination feed changes its nick, it’ll be automagically updated in the thread view (as happens on some other microblogging platforms, following Jakob’s Law).
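A rough sketch of that resolution order, assuming a hypothetical client; the # nick = field follows the twtxt metadata convention, everything else here is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// resolveNick prefers the nick declared in the mentioned feed's own
// metadata and only falls back to the name given in the mention text.
func resolveNick(mentionName, feedBody string) string {
	scanner := bufio.NewScanner(strings.NewReader(feedBody))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		// twtxt metadata lines look like: "# nick = somebody"
		if !strings.HasPrefix(line, "#") {
			continue
		}
		kv := strings.SplitN(strings.TrimLeft(line, "# "), "=", 2)
		if len(kv) == 2 && strings.TrimSpace(kv[0]) == "nick" {
			return strings.TrimSpace(kv[1]) // the feed's own nick wins
		}
	}
	return mentionName // fall back to the nick used in the mention
}

func main() {
	feed := "# nick = eapl\n# url = https://eapl.me/tw.txt\n2025-01-01T00:00:00Z\tHello!"
	fmt.Println(resolveNick("oldnick", feed)) // prints "eapl"
}
```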
That’s pretty awesome @ ! I’ve seen your contributions to twtxt-el and was wondering if you’ve been updating the same one or made another from scratch. Either way, I can’t wait to give it a try! 🙌 Cheers
@kat@yarn.girlonthemoon.xyz i’m an LXQt girlie for life and i like the convenience of apt despite that they never update their god damn packages so i guess i’m stuck on lubuntu for everything
been having fun updating my dotfiles repo as if i have anything notable to put in there
GoToSocial snapshot has gained “editing statuses” capabilities (and the ability to see the update trail as well). That was one of the things I most wanted to see implemented. Actually, that sits at the top of my wish list. Next is push notifications.
@prologic@twtxt.net there’s @deadblackclover@deadblackclover.net’s twtxt-el already, I couldn’t use it correctly when I’ve had just discovered it (yes, #emacs skill issues) … but it has been updated since then. I should give it another spin 👌
@bender@twtxt.net Dude! you should see the updated version! 😂 I have just discovered the scratch #container image and decided I wanted to play with it… I’m probably going to end up rebuilding a LOT of images.
~/htwtxt » podman image list htwtxt
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/htwtxt 1.0.7-scratch 2d5c6fb7862f About a minute ago 12 MB
localhost/htwtxt 1.0.5-alpine 13610a37e347 4 weeks ago 20.1 MB
localhost/htwtxt 1.0.7-alpine 2a5c560ee6b7 4 weeks ago 20.1 MB
docker.io/buckket/htwtxt latest c0e33b2913c6 8 years ago 778 MB
I’ll be using another URL for this twtxt.
The older one will redirect to the new one for a while (I’m not sure what would happen if you follow both URLs; I assume it’s better to add the new one and remove the old one)
Please update your following list to https://eapl.me/tw.txt !
[Update!] My request to join in has finally gotten accepted over on thunix.net like, two days ago! And now, my alter ego @skinshafi@thunix.net can have a Twtxt feed of its own x)
Project update + 2 significant news stories
Trump threatens 100% tariffs on Brics nations over dollar currency rivalry; Severe flooding displaces over 122,000 in Malaysia ⌘ Read more
testing the bluesky cross-poster i added into my silly python script for posting status updates
I wrote about making Glenda’s Joy Division cover (with updated colors and a link to source): http://a.9srv.net/b/2024-11-23
Wow! Just Wow! 😮
Discovered this whilst trying to debug why my Youtube frontend no longer works:
$ youtube-dl 'https://www.youtube.com/watch?v=YpiK1FMy2Mg'
[youtube] YpiK1FMy2Mg: Downloading webpage
WARNING: unable to extract uploader id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
ERROR: unable to download video data: HTTP Error 403: Forbidden
The V: pattern itself is quite good because you can do quite a lot of powerful things with selected text.
@prologic@twtxt.net Just gave this one a try to update my twtxt.txt file with a proper # follow = ... list! 🙏
gemini calls the request-response cycle a transaction in the spec. since transactions are not cached, we have this problem where we can’t tell if anything was updated without fetching it and we can’t indicate how often a client should expect the content to be valid. the most common solution right now is just to keep requesting the resource until it changes or stops existing, which isn’t ideal. this sort of update notification model is interesting because it re-frames your thinking into something more like event sourcing. you end up needing to add an event queue and dispatch to the server, which is a bit more complex on the server side than plain static files, but the client stays the same. i’m curious to see what kind of systems could be built on this gemini message queue concept.
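for illustration, a minimal sketch of that “keep requesting until it changes” approach over a bare Gemini transaction (hypothetical host and URL; the body is hashed to spot a change):

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"crypto/tls"
	"fmt"
	"io"
	"time"
)

// fetchGemini does one Gemini "transaction": open a TLS connection,
// send the URL plus CRLF, skip the status line, and read the body.
func fetchGemini(host, url string) ([]byte, error) {
	conn, err := tls.Dial("tcp", host+":1965", &tls.Config{
		InsecureSkipVerify: true, // Gemini servers commonly use TOFU, not CA-signed certs
	})
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	if _, err := fmt.Fprintf(conn, "%s\r\n", url); err != nil {
		return nil, err
	}
	r := bufio.NewReader(conn)
	if _, err := r.ReadString('\n'); err != nil { // discard "<status> <meta>" header
		return nil, err
	}
	return io.ReadAll(r)
}

func main() {
	const host, url = "example.org", "gemini://example.org/feed.gmi"
	var last [sha256.Size]byte

	// With no cache validators, the only way to notice an update is to
	// re-fetch and compare, e.g. by hashing the body.
	for {
		body, err := fetchGemini(host, url)
		if err == nil {
			if h := sha256.Sum256(body); h != last {
				fmt.Println("resource changed")
				last = h
			}
		}
		time.Sleep(10 * time.Minute)
	}
}
```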
Did Apple Just Kill Social Apps?
Apple’s iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it “the en … ⌘ Read more
@off_grid_living@twtxt.net is it locked because of a DRM thing or something else?
Otherwise you can check if you already have the pdftotext command that comes with the poppler-utils package, try converting the PDF into a text file, and copy to your heart’s content. I have just tried it myself.
If you don’t have it already, here’s what you can do on Ubuntu or any Debian-based distribution of Linux:
- Update and upgrade your packages:
> sudo apt update && sudo apt upgrade
- Install the poppler-utils package:
> sudo apt install poppler-utils
- Now you can convert your PDF to a text file with:
> pdftotext -layout -enc UTF-8 name_of_source_file.pdf name_of_destination_file.txt
You can always do a pdftotext --help to see the rest of the possible options.
Hope this helps.
@sorenpeter@darch.dk Points 2 & 3 aren’t really applicable here in the discussion of the threading model, I’m afraid. WebMentions is completely orthogonal to the discussion. Further, no-one that uses Twtxt really uses WebMentions; whilst yarnd supports the use of WebMentions, it’s very rarely used in practise (if ever) – in fact I should just drop the feature entirely.
The use of WebSub OTOH is far more useful and is used by every single yarnd pod everywhere (not that there are that many around these days) to subscribe to feed updates in ~near real-time without having to poll constantly.
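For context, a minimal sketch of a WebSub subscription request per the W3C spec; the hub, topic and callback URLs are made up, and this is not yarnd’s internal code:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// subscribe asks a WebSub hub to push notifications for `topic`
// (e.g. a twtxt feed URL) to our `callback` endpoint. The hub later
// verifies the callback with a GET carrying hub.challenge and then
// POSTs the updated content whenever the topic changes.
func subscribe(hub, topic, callback string) error {
	form := url.Values{
		"hub.mode":     {"subscribe"},
		"hub.topic":    {topic},
		"hub.callback": {callback},
	}
	resp, err := http.PostForm(hub, form)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("hub rejected subscription: %s", resp.Status)
	}
	return nil
}

func main() {
	err := subscribe(
		"https://hub.example.com/websub",          // hypothetical hub
		"https://example.com/user/foo/twtxt.txt",  // feed to watch
		"https://mypod.example.org/websub/callback",
	)
	fmt.Println("subscription requested, err:", err)
}
```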
@aelaraji@aelaraji.com This is one of the reasons why yarnd has a couple of settings with some sensible/sane defaults:
I could already imagine a couple of extreme cases where, somewhere in this peaceful world, one’s exercise of freedom of speech could get them in Real trouble (if not danger) if found out. It wouldn’t necessarily have to involve something to do with Law or legal authorities. So, if someone asks, maybe fearing for… let’s just say ‘their well being’, would it hurt if a pod just purged their content if it’s serving it publicly (maybe relay the info to other pods) and call it a day? It doesn’t have to be about some law/convention somewhere … 🤷 I know! Too extreme, but I’ve seen news of people who’d gone to jail or got their lives ruined for as little as a silly joke. And it doesn’t even have to be about any of this.
There are two settings:
$ ./yarnd --help 2>&1 | grep max-cache
--max-cache-fetchers int set maximum numnber of fetchers to use for feed cache updates (default 10)
-I, --max-cache-items int maximum cache items (per feed source) of cached twts in memory (default 150)
-C, --max-cache-ttl duration maximum cache ttl (time-to-live) of cached twts in memory (default 336h0m0s)
So yarnd pods by default are designed to only keep Twts publicly visible on either the anonymous Frontpage or Discover View or your Timeline or the feed’s Timeline for up to 2 weeks, with a maximum of 150 items, whichever is exceeded first. Any Twts over this are considered “old” and drop off the active cache.
It’s a feature that my old man @off_grid_living@twtxt.net was very strongly in support of, as was I back in the day of yarnd’s design (nothing particularly to do with Twtxt per se), and one that I’ve stuck by to this day – even though there are some 😉 that have different views on this 🤣
One distinct disadvantage of (replyto:…) over (edit:#): (replyto:…) relies on clients always processing the entire feed – otherwise they wouldn’t even notice when a twt gets updated. a) This is more expensive, b) you cannot edit twts once they get rotated into an archived feed, because there is nothing signalling clients that they have to re-fetch that archived feed.
I guess neither matters that much in practice. It’s still a disadvantage.
@falsifian@www.falsifian.org this one hits hard, as jenny was just updated today. :‘-(
@bender@twtxt.net Does it have to? To my understanding, all you have to do is add a claim for your Twtxt feed URL to your key, update your profile, and post one of These Identity formats to your Twtxt file/Profile…
Give me a couple of minutes, I’ll give it a try myself 😉
So this is a great thread. I have been thinking about this too… and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL it’s almost as if I have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-Hash in place. Changing that now is basically a no-go. But we can create a signature chain that can link identities together. So if I move to a new URL I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL.
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that its all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file… but I wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add URL) and a signature of the change + the previous signature:
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
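To make that concrete, here is a rough sketch of how a client might verify such a chain, using Go’s crypto/ed25519 as a stand-in for Saltpack and assuming each entry’s signature covers the entry text plus the previous signature (the format is an assumption, not a spec):

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

// chainEntry is a hypothetical parsed "# chain:" line plus its "# sig:" line.
type chainEntry struct {
	Action string // e.g. "ADDKEY", "ADDURL", "REVKEY"
	Value  string
	Sig    []byte
}

// verifyChain checks that every entry's signature covers the entry text
// concatenated with the previous entry's signature, forming a linked log.
func verifyChain(pub ed25519.PublicKey, entries []chainEntry) bool {
	var prevSig []byte
	for _, e := range entries {
		msg := append([]byte(e.Action+" "+e.Value), prevSig...)
		if !ed25519.Verify(pub, msg, e.Sig) {
			return false
		}
		prevSig = e.Sig
	}
	return true
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	// Build a tiny example chain the same way verifyChain expects it.
	entries := []chainEntry{
		{Action: "ADDKEY", Value: "kex1example"},
		{Action: "ADDURL", Value: "https://txt.sour.is/user/xuu"},
	}
	var prevSig []byte
	for i := range entries {
		msg := append([]byte(entries[i].Action+" "+entries[i].Value), prevSig...)
		entries[i].Sig = ed25519.Sign(priv, msg)
		prevSig = entries[i].Sig
	}

	fmt.Println("chain valid:", verifyChain(pub, entries))
}
```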
I wonder if bento has slightly missed the key to being a total genius approach to host management. ok hear me out. each node periodically pulls configuration from a coordination node that hosts a binary cache. the admin may make changes and pre-build them maybe kick off an update task manually if they want, but the point is there’s an automated checkin. for my case, the device I have available for coordination isn’t really capable of hosting a binary cache for any of my other machines. the nix store for my dev machine is larger than the entire disk of the coordinator! and due to the yearly heat my best machine can’t be reliably powered on all the time. so i started thinking to myself, “self, what if instead of having a central coordinator we fetched configuration from a reliable git mirror (maybe git+torrent some day) and consume it as a flake. the source could even be swapped out using a flake registry (so you don’t even have to commit to self-hosting anything other than a json file). then managed hosts only have to be setup to consume the registry and the shared flake (which registers the update agent) and DONE?”
@movq@www.uninformativ.de pretty cool! Switched, and pulled. Nice update on README!
neomutt. I have now edited this one. Let's go!
OK. @quark@ferengi.one did not see this update, but should see this reply now, as broken.
@prologic@twtxt.net I’m not sure what this update does, but https://twtxt.net/external?uri=https://google.com&nick=lovetocode999 still exhibits the same problem, on your pod and on mine, after the latest update.
@prologic@twtxt.net OK, I just updated to commit 77d527, which looks to be the same one you’re running right now. I forgot to blow away my cache before restarting, so I just deleted the cache file and restarted.
@abucci@anthony.buc.ci appreciate it if you find the time to update again 🙏
yarnd that's been around for awhile and is still present in the current version I'm running that lets a person hit a constructed URL like
@prologic@twtxt.net What? I compiled, updated, and restarted. If you check what my pod reports, it gives that 7a… SHA. I don’t know what that other screenshot is showing but it seems to be out of date. That was the SHA I was running before this update.