Searching yarn

Twts matching #US

Hi everyone, here’s a little introduction to my twtxt client (still WIP).

The client I’m developing is a single-tenant project that runs entirely in the browser (it might use an optional backend).

It’s built entirely on native web components and vanilla JS, and it’s designed to act more like a toolkit than a full-fledged client, allowing users to “DIY” their own interface with pure HTML or plain JavaScript functions.

Users can also build their own engines by including a global JavaScript object that implements the defined internal API (TBD).
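
Just to give an idea (every name below is hypothetical, since the internal API is still TBD), an engine could be plugged in roughly like this:

// Hypothetical engine registration; the real internal API is still TBD.
window.twtxtEngine = {
  // Fetch a raw twtxt feed and return its text.
  async fetchFeed(url) {
    const res = await fetch(url);
    return res.text();
  },
  // Parse raw feed text into { timestamp, content } entries
  // (twtxt lines are "<timestamp>\t<content>"; '#' lines are comments).
  parseFeed(text) {
    return text
      .split('\n')
      .filter((line) => line && !line.startsWith('#'))
      .map((line) => {
        const [timestamp, ...rest] = line.split('\t');
        return { timestamp, content: rest.join('\t') };
      });
  },
};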

I’m planning to build a system that is easy enough to build on and use at any skill level, using only pure HTML (with a homebrew minimal template engine) or plain JS (I’ll also provide some pre-made templates).

Everything can be self-hosted on any static hosting provider, which allows twtxt to spread within communities like Neocities and similarly hosted websites (basically any Indieweb/Smallweb/Digital garden website, and any of the common GitHub/Lab/Berg/lify Pages).

It will probably be named something like TxtCraft or craf.txt, but I’m not really sure yet… 🤔 (Maybe some suggestions could help)

I’m still in the experimental phase, so there’s no decent source code to share yet, but there will be soon enough!

⤋ Read More

I think I’m just about ready to go live with my new blog (migrated from MicroPub). I just finished migrating all of the content over: fixing up metadata, cleaning up, and migrating and optimizing media.

The new blog for prologic.blog, soon to be powered by zs using the zs-blog-template, is coming along very nicely 👌 It was actually pretty easy to do the migration/conversion in the end. The results are not too shabby either.

Before:

  • ~50MB repo
  • ~267 files

After:

  • ~20MB repo
  • ~88 files

⤋ Read More

Pretty happy with my zs-blog-template starter kit for creating and maintaining your own blog using zs 👌 A demo of what the starter kit looks like is here. Basic features include:

  • Clean layout & typography
  • Chroma code highlighting (aligned to your site palette)
  • Accessible copy-code button
  • “On this page” collapsible TOC
  • RSS, sitemap, robots
  • Archives, tags, tag cloud
  • Draft support (hidden from lists/feeds)
  • Open Graph (OG) & Twitter card meta (default image + per-post overrides)
  • Ready-to-use 404 page

There are also custom routes (redirects, rewrites, etc.) to support canonical URLs or redirect old URLs, as well as the new zs external command capability itself, which now lets you do things like:

$ zs newpost

to help kick-start the creation of a new post with all the right “stuff”™ ready to go, and then pop open your $EDITOR 🤞

#awesome #zs

⤋ Read More
In-reply-to » Please don't hate me today; I'm a bit grumpy and have too many reasons to be upset:

@prologic@twtxt.net I, too, self-host various services on a VPS (and I’m considering buying a mini PC to keep at home instead).

I use most of it as a hosting platform for personal use only and as a remote development environment (I do share a couple of tools with a friend though).

But given the constant risks of DDoS, hacking, bots, etc., I keep all of my public-facing resources purely static and on separate hosting providers (without lock-ins, of course).

Lately, I began using homebrew PWAs with CouchDB as a sync database; this way I get a fantastic local-first experience and total control of my data, which also syncs to a locally hosted backup instance in real time.
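
As a small sketch of that setup (assuming PouchDB as the in-browser client; the database name and CouchDB URL are placeholders):

// Local-first sync sketch: PouchDB in the browser replicating to CouchDB.
// 'app-data' and the URL are placeholders, and PouchDB is an assumption.
import PouchDB from 'pouchdb';

const local = new PouchDB('app-data');

// Live two-way replication; 'retry' keeps the app usable offline and
// resyncs automatically once the connection comes back.
local.sync('https://couch.example.com/app-data', { live: true, retry: true })
  .on('change', (info) => console.log('synced', info.direction))
  .on('error', (err) => console.error('sync error', err));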

Also, I was already aware of Salty.im, but what I’m thinking of is a more feature-complete solution that even my family can use quickly; Delta.chat with the new chatmail provider (self-hostable) might be the solution for my needs.

But I’m still wondering if it’s worth the trouble. I might just drop everything and only use safe channels to speak with them (free 24/7 family tech support is easy to manage 😆).

Also, I’ll be waiting for the day you share your story with us; I’m pretty curious about it!

⤋ Read More
In-reply-to » @prologic the simplest thing to do is to completely forgo hashing anything because we are communicating using plain text files right now :3 while i agree hashes are incredibly helpful in the backend im not sure it has a place outside of it, it basically eliminates two core design principles of twtxt (human readability and integrating well with unix command line utilities) and makes new clients more difficult to build than it should be

Exactly, @zvava@twtxt.net, I agree. (Although, in my client at least, I wouldn’t use hashes anywhere.)

⤋ Read More
In-reply-to » Oh man, if the EU actually rolled out this horrid idea called ChatControl that actually threatens the security and privacy of secure e2e encrypted messaging like Signal™, fuck me, I'm out 🤦‍♂️ I'll just rage quit the IT industry and become a luddite. I'm out.

@prologic@twtxt.net Hm, I don’t know. Over here, we have parties that we would call “left” or “right”; one of them even calls itself “The Left”. No idea about your political landscape, but it still makes sense for us. 🤔 For me, at least.

⤋ Read More
In-reply-to » @prologic to clarify: i meant the ability to parse feeds using unix command line utilities, as a principle of twtxtv1's design. im not sure how feasible it is to build a simple feed reader out of common scripting utilities when hashing is in play, and;

@alexonit@twtxt.alessandrocutolo.it Yeah, I think we’re overstating the UNIX principles a bit here 🤣 I get what you’re trying to say though, @zvava@twtxt.net 😅 If I could go back in time and do it all over again, I would have gotten the hash length correct and I would have used SHA-256 instead. But someone way smarter than me designed the Twt Hash spec, we adopted it, and, well, here we are today; it works™ 😅

⤋ Read More
In-reply-to » @prologic to clarify: i meant the ability to parse feeds using unix command line utilities, as a principle of twtxtv1's design. im not sure how feasible it is to build a simple feed reader out of common scripting utilities when hashing is in play, and;

That’s what I’m using right now, while my own client is still in the making.

A simple Bash script writes a post into a mktemp file, then cleans it up with a regex.
I don’t even bother to hash the replies; I just open https://twtxt.net and copy the hash by hand, since I’m checking the new posts from there anyway (temporarily, as my client right now might end up DoS-ing everyone’s feed).

⤋ Read More
In-reply-to » @bender Really? 🤔

@prologic@twtxt.net to clarify: i meant the ability to parse feeds using unix command line utilities, as a principle of twtxtv1’s design. im not sure how feasible it is to build a simple feed reader out of common scripting utilities when hashing is in play, and;

i concede, it does make a lot of sense to fix up the hashing spec rather than completely supplant it at this point, just thinking about what the rewrite would be like is dreadful in and of itself x.x

⤋ Read More
In-reply-to » @bender Really? 🤔

Put another way, what you are proposing/pushing for requires changing hundreds of lines of code across a half dozen or so clients, with lots of breaking changes, not to mention unknowns.

What I want us to do is make only a half dozen or so lines of code changes to our clients, minimizing the breaking changes and unknowns.

⤋ Read More
In-reply-to » @bender Really? 🤔

@zvava@twtxt.net Going to have to hard disagree here, I’m sorry. a) No-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or to have a sticky beak at the comments, which most clients will strip out and ignore. And b) I’m sorry, you’ve completely lost me! I’m old enough to pre-date Linux becoming popular, so I’m not sure which UNIX principles you think are being broken or violated by having a Twt Subject (Subject) whose content is a cryptographic content-addressable hash of the “thing”™ you’re replying to, forming a chain of other replies (a thread).

I’m sorry, but the simplest thing to do is to make as few changes to the Spec as possible and all agree on a “Magic Date” after which our clients use the modified function(s).

⤋ Read More
In-reply-to » @bender Really? 🤔

@prologic@twtxt.net the simplest thing to do is to completely forgo hashing anything because we are communicating using plain text files right now :3 while i agree hashes are incredibly helpful in the backend im not sure it has a place outside of it, it basically eliminates two core design principles of twtxt (human readability and integrating well with unix command line utilities) and makes new clients more difficult to build than it should be

⤋ Read More
In-reply-to » TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).

@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.

After thinking about it for a while, I arrived at two solutions:

Proposal 1: Thread syntax (using subject)

Each post has an implicit and an optional explicit root reference:

  • Implicit (no action needed, all data required are already there)

    • URL + timestamp
  • Explicit (subject required)

    • Identity (client generated)
    • External reference
    • Random value

We then include a “root” subject in each post for generating explicit threads:

1. `[ROOT_ID] (REPLY_ID)`: simpler with no need of prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex but could allow expansions
	- `(rt:ROOT_ID) (re:REPLY_ID)`: same but with a compact version
	- `($ROOT_ID) (>REPLY_ID)`: same but with a single characters

Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn’t need to have a real meaning.

Using a custom reference this way allows a client to decide how to generate them:

  • Identity: can be a content hash, signature, or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
  • External references: can be provided from another system (Eg. 7e073bd345, yarnsocial/yarn latest commit)
  • Random value: like a UUID (Eg. 9a0c34ed-d11e-447e-9257-0a0f57ef6e07)

Proposal 2: Threaded mentions (featuring zvava)

Inspired by @zvava@twtxt.net’s solution, it could be simplified into #<nick url#timestamp> or #<url#timestamp>

It can be shown like a mention or hidden like a subject.

If we’re thinking of using a counter in the client, I think there’s no point in avoiding the timestamp anymore.
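
To make option 2 of Proposal 1 concrete, here’s a rough sketch of how a client could pull both references out of a subject (purely illustrative, nothing is final):

// Illustrative parser for the "(root:ROOT_ID) (reply:REPLY_ID)" variant;
// both parts are optional and the IDs are treated as opaque strings.
function parseSubject(subject) {
  const root = subject.match(/\(root:([^)]+)\)/);
  const reply = subject.match(/\(reply:([^)]+)\)/);
  return {
    root: root ? root[1] : null,
    reply: reply ? reply[1] : null,
  };
}

// parseSubject('(root:9a0c34ed) (reply:7e073bd345) hello')
//   => { root: '9a0c34ed', reply: '7e073bd345' }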

⤋ Read More

I just created a zs blogging template which I’m going to use for https://prologic.blog, and I might start writing long-form again soon™ 🔜 So far the “blogging” template/engine (if you will) is quite simple. It essentially comprises an index.md, a prehook, and a few utilities:

$ git ls-files
.gitignore
.zs/config.yml
.zs/editthispage
.zs/include
.zs/layout.html
.zs/list
.zs/months
.zs/now
.zs/onthispage
.zs/posthook
.zs/postsbymonth
.zs/prehook
.zs/scripts
.zs/styles
.zs/tagcloud
.zs/taglist
.zs/years
archives/.empty
assets/css/site.css
assets/js/main.js
index.md
posts/hello-zs-blog.md
posts/on-tagging.md
posts/second-post.md
tags/.empty

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@prologic@twtxt.net That is really great to hear!

If there are opposing opinions, we either build a bridge or provide a new parallel road.

Also, I wouldn’t call my opinion a “stance”; I just wish for a better twtxt, thanks to everyone’s effort.

The last thing left to do is decide on a proper format for the location-based version.

My proposal is to keep the “Subject extension” unchanged and include the reference next to the mention like this:

// Current hash format: starts with a '#'
(#hash) here's text
(#hash) @<nick url> here's text

// New location format: valid URL-like + '#' + TIMESTAMP (verbatim format of feed source)
(url#timestamp) here's text
(url#timestamp) @<nick url> here's text

I think the timestamp should be referenced verbatim, to prevent broken references caused by the many possible variations (especially with all the timezones out there); that would also make it even easier for everyone to implement.
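
Since a URL can itself contain a '#' fragment, a client would probably split on the last '#' and keep whatever timestamp string follows, untouched. A rough sketch (the function name is mine):

// Split "(url#timestamp)" on the last '#' so URLs with fragments still
// work, and keep the timestamp verbatim (no timezone normalization).
function parseLocationRef(ref) {
  const i = ref.lastIndexOf('#');
  if (i === -1) return null;
  return { url: ref.slice(0, i), timestamp: ref.slice(i + 1) };
}

// parseLocationRef('https://example.com/twtxt.txt#2025-01-01T12:00:00+02:00')
//   => { url: 'https://example.com/twtxt.txt',
//        timestamp: '2025-01-01T12:00:00+02:00' }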

I’m sure we can get @zvava@twtxt.net, @lyse@lyse.isobeef.org and everyone else to help on this one.

I personally think we should also consider allowing a generic format to build custom references on; this would allow creating threads from any custom source (manual, computed, or externally generated), maybe using a new “Topic extension”. Here are some examples:

// New format for custom references: starts with a '!' maybe?
(!custom) here's text
(!custom) @<nick url> here's text

// A possible "Topic", parsed as a thread root:
[!custom] start here
[custom] simpler format

This one is just an idea of mine, but I feel it can unleash new ways of using twtxt.

⤋ Read More
In-reply-to » @itsericwoodward any news about this? I am, at the very least, curious!

@bender@twtxt.net Thanks for asking!

So, I’ve been working on 2 main twtxt-related projects.

The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I’ve been testing it on my site since the night I made that post. It’s still very much an MVP, and I’ve been intermittently adding features, improving security, and streamlining the code, with an eye to releasing it after I get an MVP done of project #2 (the reader).

But that’s where I’ve been struggling. The idea seems simple enough: another Node / Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the “follow” list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real, live twtxt files, so some twts come through while others get lost in the ether. I’ll figure it out eventually, but for now, I’ve been spending far more time than I anticipated just trying to get it to work end-to-end.
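
For the curious, the core of that loop is small; here’s a sketch of it, assuming the `# follow = <nick> <url>` metadata convention (all names are mine, not the actual app’s code):

// Sketch of the reader's core loop: read one feed's "# follow = nick url"
// lines, fetch each followed feed, and merge every twt into one river.
async function buildRiver(ownFeedUrl) {
  const own = await (await fetch(ownFeedUrl)).text();
  const followed = [...own.matchAll(/^#\s*follow\s*=\s*\S+\s+(\S+)/gm)]
    .map((m) => m[1]);

  const river = [];
  for (const url of followed) {
    const text = await (await fetch(url)).text();
    for (const line of text.split('\n')) {
      if (!line || line.startsWith('#')) continue;
      const [timestamp, ...rest] = line.split('\t');
      river.push({ feed: url, timestamp, content: rest.join('\t') });
    }
  }
  // Newest first; twtxt timestamps are RFC 3339, so Date.parse works.
  return river.sort((a, b) => Date.parse(b.timestamp) - Date.parse(a.timestamp));
}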

On top of it, the 2 projects wound up turning into 4 (so far), as I’ve been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).

In the end, I’m hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we’ll see.

I hope this has satisfied your curiosity, but if you’d like to know more, please reach out!

⤋ Read More
In-reply-to » The driver’s license documents in Germany now have an expiration date. You have to renew them every 15 years. (Not the license itself, just the documents.)

@movq@www.uninformativ.de Better than in the US. Ours lasts only 10 years, and you need to go through the vision test and, of course, pay. Recently they added a little gold star denoting “Real ID” compliance, and we had to pay $10 to get the old one replaced, outside of the regular renewal “schedule”.

In here it is all about control, and money.

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@alexonit@twtxt.alessandrocutolo.it Yeah, I kind of love your stance and position on this. If we are going to make changes in the threading model, let’s keep content-based addressing, but also improve the user experience. So in fact, to answer your question, I think yes, we can do some kind of combination of both.

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@lyse@lyse.isobeef.org @prologic@twtxt.net Can’t we find a middle ground and support both?

The thread is defined by two parts:

  1. The hash
  2. The subject

The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subjects of other posts to find the related ones, right?

In my own client’s current implementation (using hashes), the only calculation is in the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the commonly implemented approach, then adding the location-based one is somewhat simple.

function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);

    // New location approach
    const location = post.url + '#' + post.timestamp;

    // Unchanged (probably)
    const subject = post.subject;

    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}

// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location

As I said before, the mention is already location-based (@<example https://example.com/twtxt.txt>), so I think we should take that into consideration.

Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.

Otherwise, the only other solution I can think of is a different approach where the value doesn’t matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed “header” for each post, but it can be seen as a separate extension).

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@lyse@lyse.isobeef.org I don’t think there’s any point in continuing the discussion of Location vs. Content based addressing.

I want us to preserve Content based addressing.

Let’s improve the user experience and fix the hash collision problems.

⤋ Read More
In-reply-to » Happy equinox – where the world is illuminated like this:

@movq@www.uninformativ.de Woah, cool!

(WTF, asciiworld-sat-track somehow broke, but I have not changed any of the scripts at all. O_o It doesn’t find the asciiworld-sat-calc anymore. How in the world!? When I use an absolute path, the .tle is empty and I get a parsing error. Gotta debug this.)

⤋ Read More
In-reply-to » Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@prologic@twtxt.net I know we won’t ever convince each other of the other’s favorite addressing scheme. :-D But I wanna address (haha) your concerns:

  1. I don’t see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible as the feed URL is part of the hashed value and reference of some sort in the location-based scheme. It doesn’t matter.

  2. The same is true for duplication and forks. Even today, the “canonical URL” has to be chosen to build the hash. That’s exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don’t know of any such software, to be honest.

  3. If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature and not more behaving as an anti-feature in terms of user experience?

  4. I don’t get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where’s the difference? Except that the lookup key is longer or compound or whatever depending on the cache format.

  5. Even a new hashing algorithm requires work on clients etc. It’s not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That’s why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.

If these are general concerns, I’m completely with you. But I don’t think that they only apply to location-based addressing. That’s how I interpreted your message. I could be wrong. Happy to read your explanations. :-)

⤋ Read More
In-reply-to » @zvava @lyse I also think a location based reference might be better.

@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.

  1. The current hash relies on a url field too: by specification, it will use the first # url = <URL> in the feed’s metadata if present, and that too can be different from the fetching source. If that field changes, it would break the existing hashes as well; a better solution would be to use a non-URL key like # feed_id = <UNIQUE_RANDOM_STRING>, with the url as a fallback.

  2. We can prevent duplication if the reference uses that same url field too, or if the client “collapses” references across all the urls defined in the metadata.

  3. I agree that hashing based on content is good, but we still use the URL as part of the hashing, and that’s just a field in the feed, easily replicable by a bot. Note that edits can also break the hash; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.

  4. For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I’m missing some context on this one.

  5. To prevent collisions, there was a discussion on extending the hash (I forgot whether that was already settled or not), but without a fallback that would break existing clients too. We should think of a parallel format that keeps current implementations unchanged; we are already backward compatible with the original clients that don’t use threads at all, and a mention-style format could be even more user-friendly for those.

We should also keep in mind that the current mention format is already location-based (@<example https://example.com/twtxt.txt>), so I’m not too worried about threads working the same way.

Hope to see some other thoughts about this matter. 🤓

⤋ Read More
In-reply-to » @zvava @lyse I also think a location based reference might be better.

Here is just a small list of things™ that I’m aware will break, some quite badly, others in minor ways:

  1. Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won’t.
  2. Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several ā€œparentsā€ and split the thread.
  3. Verification & spam-resistance: content addressing lets you dedupe and verify you’re pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
  4. Offline/cached reading: without the original URL being reachable, readers can’t resolve anchors; with hashes they can match against local caches/archives.
  5. Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.

⤋ Read More

Apologies if I’ve been spamming anyone out there in twtxt-land today.

I’ve been working on a couple of twtxt-related projects, and one of them is a reader (tentatively called twtstrm) written in JS. I used dummy data for the first few stages of development, but now I’m at the point where I need some real data, and that meant hitting up my actual following list.

Of course, it didn’t help that I had a typo in my If-Modified-Since headers, but all that has since been resolved.

Anyways, if I accidentally spammed you with requests today, I am sorry, and it shouldn’t happen anymore.

We thank you for your patience, and apologize for the inconvenience.

⤋ Read More

In-reply-to » @lyse i dont mind if the hash is not backward compatible but im not sure if this is the right way to proceed because the added complexity dealing with two hash versions isnt justified

@zvava@twtxt.net There would be only one hash per message. Some to-be-defined magic date selects which hash to use: if the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped once nobody interacts with the old stuff anymore. But I’d keep it around in my client, because why not.

If users choose a client which supports the extensions, they don’t have to mess around with v1 and v2 hashing, just like today.
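
In code, that cutover could be as small as this sketch (the date and both hash functions are stand-ins, not an agreed spec, and the pre-image is shown only to keep the signature concrete):

// Magic-date dispatch: twts created before the cutover keep their v1
// hash, newer ones get v2. Date and hash functions are placeholders.
const MAGIC_DATE = Date.parse('2026-01-01T00:00:00Z');

const hashV1 = (s) => `<v1 hash of ${s.length} bytes>`; // today's algorithm
const hashV2 = (s) => `<v2 hash of ${s.length} bytes>`; // to be defined

function twtHash(url, timestamp, content) {
  const pick = Date.parse(timestamp) < MAGIC_DATE ? hashV1 : hashV2;
  return pick(`${url}\n${timestamp}\n${content}`);
}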

As for the school of thought, personally, I’d prefer something else, too. I’m in camp location-based addressing, or whatever it is called. The more I think about it, the more a complete redesign of twtxt and its extensions seems necessary in my opinion. Retrofitting has its limits. Of course, this is much more work, though.

⤋ Read More
In-reply-to » @prologic im unsure how i feel about the hash v2 proposal, given it is completely backward incompatible with hash v1 it doesn't really solve any of the problems with it. it only delays collisions, and still fragments threads on post edits

@lyse@lyse.isobeef.org i dont mind if the hash is not backward compatible but im not sure if this is the right way to proceed because the added complexity dealing with two hash versions isnt justified

regular end users wont care to understand how twt hashes are formed, they just want to use twtxt! so i guess i could work in protecting users from themselves by disallowing post edits on old posts or posts with replies, but i’m not fond of this either really. if they want to break a thread, they can just delete the post (though i’ve noticed yarn handling post deletes dubiously…)

on activitypub i do genuinely find myself looking through several month or even year old posts sometimes and deciding to edit/reword them a little to be slightly less confusing, this should be trivial to handle on twtxt which is an infinitely simpler specification

⤋ Read More
In-reply-to » @zvava I am getting [2025/09/11 12:56:01.816] ⇒ please set config.host when trying to run "bbycll". How to bypass that tiny hurdle?

Adding to this, the configuration example at the repository reads:

{
	"nick": "Example",
	"description": "alice's twtxt instance!",
	"host": "twtxt.example.com",
	"admin": "alice"
}

Would it make more sense to change nick to instance_name or similar? Usually nick is reserved for users, like quark here. Right? Also, is host the same FQDN to be used while proxying traffic to the application? That is, using the above configuration, its Caddy configuration would be:

twtxt.example.com {
	encode
	reverse_proxy :31212
}

Is that correct?

⤋ Read More
In-reply-to » Next level poop: Can’t log in to reddit anymore with adblock enabled. It says invalid username or password.

Hmm, not experiencing that. Using Zen (Firefox), under Linux, with uBlock Origin.

⤋ Read More

im unable to figure out why bbycll is not generating post hashes for @lyse@lyse.isobeef.org’s feed correctly (or at least they differ from the ones generated by yarn)

i’m pretty sure the timezone is stripped off the offset correctly (2025-09-14T12:45:00+02:00 → 2025-09-14T12:45:00Z) though messing with how the hash is generated i can’t get it to make one that matches…but all other hashes for all other feeds seem to be correct? does yarn use a different canonical url for lyse internally? is there a bug in the libraries im using? bwehhh

⤋ Read More
In-reply-to » Drawn based on a quick doodle, the canine returns victorious, from the battle of Hot Topic bargain bin, as smug as can be. Whoever will be the first to inform him, the spikes aren't real gold and it's most likely not even leather, meaning it's not what he's really been searching the universe for, better prepare themselves, to be jumped on, bitten and shredded by claws.

@lyse@lyse.isobeef.org no, as mentioned, this “diagonal arrow” eye shape is usually used for a smug expression. The optional white part is, in this case, where the dog’s sclera would be visible while they hold their eyes like this.
Here is a comparison between a real dog making the face it is based on, and the exaggerated drawn version.

⤋ Read More
In-reply-to » @zvava I am getting [2025/09/11 12:56:01.816] ⇒ please set config.host when trying to run "bbycll". How to bypass that tiny hurdle?

Woot, thank you! Using a config.json like this:

{
  "host": "localhost:31212",
  "protocols": ["http"]
}

Indeed, that did the trick! I know it isn’t production-ready, but I wanted to see with my own eyes, locally, how it looked. :-) I like where you are going! It is looking very nice and polished. Can’t wait for an alpha, beta, and release!

⤋ Read More

Since Google announced their intention to heavily limit sideloading on Android starting at the end of 2026, I’ve been looking for potential solutions to this policy change, which threatens the majority of the projects I maintain in some way. Google already killed my browser project years ago, but I have no choice other than to fight this any way I can.

The best way to deal with this will probably be the Android Debug Bridge, which can be used not only to install apps without restrictions, but also to uninstall or remove almost any unnecessary part of the OS. Shizuku, combined with Canta Debloater, is the winning combination for now.

I’ve already removed most Google apps from my device: the annoying AI assistant, the stupid Google app that adds the annoying articles to the left of your home screen, Google One, Gboard, the Safety app… it’s amazing, no distracting Google slopware, just like in the good old Android 2 days! And I absolutely intend to keep it this way: from now on, no new Google apps or services on my devices, unless Google can give me a good enough reason to allow them there. And whenever the app that verifies signatures (to block installing apps not approved by Google) shows up, I’ll just remove it from my device and advocate that others do so too.

⤋ Read More
In-reply-to » @lyse a content warning is kind of like a forum spoiler cut, or like the <details> tag in HTML; it lets you write a sentence or so that someone can then click to expand to see the actual post. it's called a CW because most people use it to warn for potentially triggering/harmful subjects, but you can really use it for anything, like spoilers in a TV show or even for joke punchlines

@kat@yarn.girlonthemoon.xyz Ta. The only good use for <details> is to collapse long logs in bug analysis reports. Other than that, I find it rather annoying to expand sections manually.

As for spoilers, personally, I don’t care at all. Not the slightest bit. If there is something that I don’t wanna read, I just stop reading. ¯\_(ツ)_/¯

But I’ve got the feeling that I’ve got an unpopular opinion on that matter. ;-)

⤋ Read More
In-reply-to » @zvava I never used any of the social media platforms, that's why I'm probably ignorant.

@bender@twtxt.net I see, thanks. Well, I never found these warnings useful. To hide answers to conundrums or the like, ROT13ing or base64-encoding them is plenty sufficient.

Hahaha, I never heard of Poopgate before. :-D Poor passengers.

⤋ Read More