Searching yarn

Twts matching #define
In-reply-to » @prologic yeah, I had even requested access to it in order to give it a try and report whatever I could but, sorry, I never got to do any of it. 2025 slam dunked a massive pile of 💩 over my life (hence the disappearance, trying to avoid talking about any of it) and I'm just starting to recover (or at least trying to).

@aelaraji@aelaraji.com It’s definitely been a long and fast year, that’s for sure 👍 Don’t worry!

⤋ Read More
In-reply-to » Hmm, so it seems this Mike is the one who inherited it: https://tilde.club/~deepend/, but not too active anywhere, though pinging “deepend” on Libera might work...

@lyse@lyse.isobeef.org nginx allows per-user logging by using variables defined in the configuration. Not sure, though, if a Tilde would be willing to go to those “extremes”.
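
A rough sketch of what that could look like, assuming tilde-style /~user/ paths (untested):

map $request_uri $tilde_user {
    ~^/~(?<u>[^/]+)/  $u;
    default           anon;
}

server {
    listen 80;
    server_name tilde.example;

    # One access log per user; open_log_file_cache avoids
    # reopening the file on every single request.
    access_log /var/log/nginx/users/$tilde_user.access.log combined;
}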

⤋ Read More
In-reply-to » Hmm, so it seems this Mike is the one who inherited it: https://tilde.club/~deepend/, but not too active anywhere, though pinging “deepend” on Libera might work...

@bender@twtxt.net Sounds about right.

I had a brainfart yesterday, though. For whatever reason I thought of subdomains, which are modeled with server entries in nginx. So, each could define its own access_log location. However, there are no subdomains in place! Searching around, I didn’t find any solution to give each user their own access log file.

One way would be a cronjob, aeh, a systemd timer as I learned the other day, that greps the main access log and writes per-user access log files with only the relevant entries.
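
Something along these lines, maybe (paths and the way users are enumerated are just assumptions):

#!/bin/sh
# Split the main access log into per-user logs based on /~user/ paths.
for user in /home/*; do
    user=${user##*/}
    grep " /~$user/" /var/log/nginx/access.log > "/home/$user/logs/access.log"
done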

⤋ Read More

Hello again everyone! A little update on my twtxt client.

I think it’s finally shaping up a bit better now, but… ☝️

As I’m trying to put all the parts together, I decided to build multiple parallel UIs, to ensure I don’t accidentally create a structure that is more rigid than planned.

I’ve already decided on a UI that I would want to use myself; it’s inspired by Moshidon, Misskey, and some other “social feeds” mock-ups I found on Dribbble.

I also plan on building a raw HTML version (for anyone wanting to do a full DIY client).

I would love to get suggestions for what you would like to see (and possibly use) as a client; share a link, an app/website name, or even a sketch you made on paper.

I think I’ll pick a third and maybe a fourth design to build together with the two already mentioned.

For reference, the screens I’m thinking of providing are (some might be optional or conditionally/manually hidable):

  • Global / personal timeline screen
  • Profile screen (with timeline)
  • Thread screen
  • Notifications screen or popup (both valid)
  • DM list & chat screens (still planning, might come later)
  • Settings screen (it’ll probably be a hard-coded form, but better to mention it)
  • Publish / edit post screen or popup (still analysing some use cases, as some “engines” might not have direct publishing support)

I also plan on adding two optional metadata fields:

  • display_name: shows a human-readable alternative to a nick; it falls back to nick if not defined
  • banner: uses the same format as avatar, but the expected image is wider; inspired by other social platforms
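
In the feed’s metadata, that might look like this (values made up):

# display_name = Jane Doe
# banner = https://example.com/banner.png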

I also plan on supporting any metadata provided, including a dynamically parsable regex rule format for those extra fields. This should allow anyone to build new clients that don’t limit themselves to just the social aspect of twtxt; I’m hoping to see unique ways of using twtxt! 🤞

⤋ Read More
In-reply-to » is the first url metadata field unequivocally treated as the canon feed url when calculating hashes, or are they ignored if they're not at least proper urls? do you just tolerate it if they're impersonating someone else's feed, or pointing to something that isn't even a feed at all?

@zvava@twtxt.net Yes, the specification defines the first url to be used for hashing. No matter if it points to a different feed or whatever. Just unsubscribe from malicious feeds and you’re done.

Since the first url is used for hashing, it must never change. Otherwise, it will break threading, as you already noticed. If your feed moves and you wanna keep the old messages in the same new feed, you still have to point to the old url location and keep that forever. But you can add more urls. As I said several times in the past, in hindsight, using the first url was a big mistake. It would have been much better if the last encountered url were used for hashing onwards. This way, feed moves would be relatively straightforward. However, that ship has sailed. Luckily, feeds typically don’t relocate.

⤋ Read More
In-reply-to » Hi everyone, here's a little introduction of my twtxt client (still WIP).

@zvava@twtxt.net CORS is our worst enemy. 🥷

I too had the same issue, it being a browser-based request, so the only solution is using a proxy.

For testing (and real personal use) I rely on this one: https://corsproxy.io/.

In my client, I first check if the source allows me to fetch it directly, and fall back to prefixing it with a proxy if that gives an error.

For security reasons the browser doesn’t give you a readable error for CORS, so you must use a catch-all for that. If it fails again with the proxy, you can deal with any other errors it throws as you normally would (preferably outside of the fetch function).

After the fetch responds, I store the response.url value so I can fetch it again for updates without extra calls (you can store it verbatim, or as a flag, to be able to change the proxy later).

Here’s an extract of my code:

export async function fetchWithProxy(url, proxy=null) {
    return await fetch(url).catch(err => {
        if (!proxy) throw err;
        return fetch(`${proxy}${encodeURIComponent(url)}`);
    });
}

// Using it with
const res = await fetchWithProxy('https://twtxt.net/user/zvava/twtxt.txt', 'https://corsproxy.io/?');

// Get the working url (direct or through proxy)
const fetchingURL = res.url;

// Get the twtxt feed content (or handle errors)
const text = await res.text();

I also plan to allow the user to define a custom proxy field. I like the solution used by Delta.chat in their android app, where you can define the URL format with a variable, https://my-proxy?$TWTXT_URL, since it gives you the freedom to define any proxy, not just prefix-style ones.
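
Applying such a template is a one-liner (function name is mine, purely illustrative):

// Replace the placeholder with the percent-encoded feed URL.
function applyProxyTemplate(template, url) {
    return template.replace('$TWTXT_URL', encodeURIComponent(url));
}

// applyProxyTemplate('https://my-proxy?$TWTXT_URL', 'https://example.com/twtxt.txt')
// => 'https://my-proxy?https%3A%2F%2Fexample.com%2Ftwtxt.txt'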

If the idea of using a third-party proxy is not to the user’s liking, they can use a self-hosted solution like cors-anywhere or build their own (with twtxt it should just be a GET).

⤋ Read More

Hi everyone, here’s a little introduction of my twtxt client (still WIP).

The client I’m developing is a single-tenant project that runs entirely in the browser (it might use an optional backend).

It’s entirely based on native web components and vanilla JS, and it is designed to act more like a toolkit than a full-fledged client, allowing users to “DIY” their own interface with pure HTML or plain JavaScript functions.

Users can also build their own engines by including a global JavaScript object that implements the defined internal API (TBD).
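
Since the internal API is still TBD, here is only a hypothetical sketch of what such a global engine object could look like (all names are placeholders):

// Hypothetical shape only; the real internal API is not defined yet.
window.TwtxtEngine = {
    // Fetch a feed and return plain objects for the UI components.
    async fetchFeed(url) {
        const res = await fetch(url);
        const text = await res.text();
        // Each non-comment line is "<timestamp>\t<content>".
        return text.split('\n')
            .filter(line => line && !line.startsWith('#'))
            .map(line => {
                const [timestamp, ...rest] = line.split('\t');
                return { timestamp, content: rest.join('\t') };
            });
    },
    // Engines without direct publishing support can omit this.
    async publish() {
        throw new Error('publishing not supported by this engine');
    },
};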

I’m planning to build a system that is easy enough to use at any skill level, using only pure HTML (with a homebrew minimal template engine) or plain JS (I’ll also be providing some pre-made templates).

Everything can be self-hosted on any static hosting provider. This makes it possible to spread twtxt within communities like Neocities and similarly hosted websites (basically any IndieWeb/Smallweb/digital-garden website and any of the common GitHub/Lab/Berg/lify Pages).

It will probably be named something like TxtCraft or craf.txt but I’m not really sure yet… 🤔 (Maybe some suggestions could help)

I’m still in the experimental phase, so there’s no decent source code to share yet, but there will be soon enough!

⤋ Read More
In-reply-to » Here is just a small list of thingsā„¢ that I'm aware will break, some quite badly, others in minor ways:

@lyse@lyse.isobeef.org @prologic@twtxt.net Can’t we find a middle ground and support both?

The thread is defined by two parts:

  1. The hash
  2. The subject

The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subject of other posts to find the related ones, right?

In my own client’s current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the commonly implemented approach, then adding the location-based one is somewhat simple.

function setPostIndex(post) {
    // Current hash approach
    const hash = createHash(post.url, post.timestamp, post.content);

    // New location approach
    const location = post.url + '#' + post.timestamp;

    // Unchanged (probably)
    const subject = post.subject;

    // Index them all
    addToIndex(hash, post);
    addToIndex(location, post);
    addToIndex(subject, post);
}

// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location

As I said before, the mention format is already location-based (@<example https://example.com/twtxt.txt>), so I think we should keep that in consideration.

Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.

Otherwise, the only other solution I can think of is a different approach where the value doesn’t matter, allowing the use of anything as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed “header” for each post, but it can be seen as a separate extension).

⤋ Read More
In-reply-to » @zvava @lyse I also think a location based reference might be better.

@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.

  1. The current hash relies on a url field too: by specification, it uses the first # url = <URL> in the feed’s metadata if present, and that can also differ from the fetching source. If that field changes, it breaks the existing hashes as well. A better solution would be to use a non-URL key like # feed_id = <UNIQUE_RANDOM_STRING> with the url as fallback.

  2. We can prevent duplication if the reference uses that same url field, or if the client “collapses” references to all of the urls defined in the metadata.

  3. I agree that hashing based on content is good, but we still use the URL as part of the hash, and that is just a field in the feed, easily replicable by a bot. Note that edits can break the hash as well; for that issue an alternative solution (e.g. a private key not included in the feed) should be considered.

  4. For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I’m missing some context on this one.

  5. To prevent collisions, there was a discussion on extending the hash (I forgot whether that was already settled or not), but without a fallback that would break existing clients too. We should think of a parallel format that leaves current implementations unchanged; we are already backward compatible with the original clients that don’t use threads at all, and a mention-style format could be even more user-friendly for those clients.

We should also keep in mind that the current mention format is already location-based (@<example https://example.com/twtxt.txt>), so I’m not that worried about threads working the same way.

Hope to see some other thoughts on this matter. 🤓

⤋ Read More

Hello everyone! 👋

After a long while away, I’m back on twtxt with this new feed.

Some of you might remember me as justamoment@twtxt.net; that was a test account I made for trying things out, but I ended up keeping it longer than planned.

I also tried other social platforms in search of a place that felt right for me.

In the end twtxt was the one that ticked all of my boxes:

  • Slow social: it acts more like a feed reader and I really appreciate that there’s no flood of content that I can’t keep up with.
  • No server needed: I absolutely love having total control over my content; I tend to avoid moving parts that might break, plus you can put your feed under version control and it’s all backed up.
  • Ownership: I can put my feed anywhere I want and nobody can decide if I can access it or not.
  • For hackers: a single .txt file allows me to join a community, how cool is that!

This is why I decided to build my own twtxt client, one that allows you to decide how the feed is presented on your “instance”.

It’s still in the making but I’ll try to share a bit of it once I’ve defined how things should work.

Coincidentally, I discovered that @itsericwoodward@itsericwoodward.com and @zvava@twtxt.net were also building twtxt clients; seems like twtxt is set to grow!

⤋ Read More
In-reply-to » @lyse i dont mind if the hash is not backward compatible but im not sure if this is the right way to proceed because the added complexity dealing with two hash versions isnt justified

@zvava@twtxt.net There would be only one hash for a message. Some to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped once nobody interacts with the old stuff anymore. But I’d keep it around in my client, because why not.
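
A sketch of that selection logic (the epoch value is a placeholder, to be defined):

// Placeholder cut-over date; the actual epoch would have to be agreed upon.
const V2_EPOCH = Date.parse('2026-01-01T00:00:00Z');

// v1 for messages created before the magic date, v2 afterwards.
function hashVersionFor(twt) {
    return Date.parse(twt.timestamp) < V2_EPOCH ? 1 : 2;
}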

If users choose a client which supports the extensions, they don’t have to mess around with v1 and v2 hashing, just like today.

As for the school of thought, personally, I’d prefer something else, too. I’m in camp location-based addressing, or whatever it is called. The more I think about it, the more I believe a complete redesign of twtxt and its extensions would be necessary. Retrofitting has its limits. Of course, this is much more work, though.

⤋ Read More
In-reply-to » @lyse a content warning is kind of like a forum spoiler cut, or like the <details> tag in HTML; it lets you write a sentence or so that someone can then click to expand to see the actual post. it's called a CW because most people use it to warn for potentially triggering/harmful subjects, but you can really use it for anything, like spoilers in a TV show or even for joke punchlines

@kat@yarn.girlonthemoon.xyz I reckon the original <details> needs to have the open attribute set in order to expand, so I cannot just define some custom CSS rules to do that in my browser.

But in regards to twtxt, my client won’t hide anything in that realm anyway. :-) It’s just more noise.

⤋ Read More

Only figured this out yesterday:

pinentry, which is used to safely enter a password on Linux, has several frontends. There’s a GTK one, a Qt one, even an ncurses one, and so on.

GnuPG also uses pinentry. And you can configure your frontend of choice in gpg-agent.conf.
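
For example, in ~/.gnupg/gpg-agent.conf:

pinentry-program /usr/bin/pinentry-qt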

But what happens when you don’t configure it? What’s the default?

Turns out, pinentry is a shell script wrapper and it’s not even that long. Here it is in full:

#!/bin/bash

# Run user-defined and site-defined pre-exec hooks.
[[ -r "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec ]] && \
        . "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec
[[ -r /etc/pinentry/preexec ]] && . /etc/pinentry/preexec

# Guess preferred backend based on environment.
backends=(curses tty)
if [[ -n "$DISPLAY" || -n "$WAYLAND_DISPLAY" ]]; then
        case "$XDG_CURRENT_DESKTOP" in
        KDE|LXQT|LXQt)
                backends=(qt qt5 gnome3 gtk curses tty)
                ;;
        *)
                backends=(gnome3 gtk qt qt5 curses tty)
                ;;
        esac
fi

for backend in "${backends[@]}"
do
        lddout=$(ldd "/usr/bin/pinentry-$backend" 2>/dev/null) || continue
        [[ "$lddout" == *'not found'* ]] && continue
        exec "/usr/bin/pinentry-$backend" "$@"
done

exit 1

Preexec, okay, then some auto-detection to use a toolkit matching your desktop environment …

… and then it invokes ldd? To find out if all the required libraries are installed for the auto-detected frontend?

Oof. I was sitting here wondering why it would use pinentry-gtk on one machine and pinentry-gnome3 on another, when both machines had the exact same configs. Yeah, but different libraries were installed. One machine was missing gcr, which is needed for pinentry-gnome3, so that machine (and that one alone) spawned pinentry-gtk …

⤋ Read More
In-reply-to » Xfce does one thing very right: It stores its settings in plain-text XML files. This allows me to easily read, track, and maybe even distribute these settings to other machines.

@kat@yarn.girlonthemoon.xyz I kind of like XML because it’s mostly well-defined and easy for humans to read (unlike YAML, which is a complete mess, imho) … and at the same time, it can get complicated really fast. 🫤 But at least it’s plain-text – that’s the important part in this case. 😅

⤋ Read More
In-reply-to » The lack of suckless-like simple, hackable software these days is appalling.

@prologic@twtxt.net Hm, I wouldn’t say that. Go code could fall into that category as well.

Maybe this topic could use a blog post / article that explains what it’s about. I’m finding it hard to really define what “suckless-like software” is. 🤔 (Their own philosophy focuses too much on elitism, if you ask me.)

⤋ Read More
In-reply-to » grafana is confusing af i deployed it again for my job (that is so wild to say...) and i'm like HOW DO THESE ALERTS WORK

Move beyond basic threshold alerts! Define clear Service Level Objectives (SLOs) and measure Service Level Indicators (SLIs) to track real user impact. Use Prometheus to alert when your SLOs are at risk, ensuring you focus on what truly matters to your users. #Monitoring #SRE #Prometheus
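
A generic example of such an SLO-based alert in Prometheus (metric names and thresholds are placeholders, not from any specific setup):

groups:
  - name: slo
    rules:
      - alert: HighErrorBudgetBurn
        # Placeholder SLI: ratio of 5xx responses over the last hour.
        expr: |
          sum(rate(http_requests_total{code=~"5.."}[1h]))
            / sum(rate(http_requests_total[1h])) > 0.01
        for: 5m
        labels:
          severity: page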

⤋ Read More
In-reply-to » @andros maybe create a separate, completely distinct feed for DM? That way, clients do not need to do anything, only those wanted to "talk in private" follow themselves, using their very special dm-only.txt feeds. šŸ˜‚

@andros@twtxt.andros.dev define “compatible” 😅. On the “not addressed to me”: if I follow you, I will see your twtxts, whether they are addressed to me or not.

⤋ Read More

Even though I really do like the shell, I always use Dolphin to mount my digicam SD card and copy the photos onto my computer. I finally added a context menu item in Dolphin to create a forest stroll directory with the current date in order to save some typing:

Context menu item to create a new directory and directory name dialog

The following goes in ~/.local/share/kservices5/ServiceMenus/galmkdir.desktop:

[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=KonqPopupMenu/Plugin,inode/directory
Actions=Waldspaziergang;

[Desktop Action Waldspaziergang]
Name=Heutigen Waldspaziergang anlegen…
Icon=folder-green
Exec=~/src/gelbariab/galmkdir "%f"

In order to update the KDE desktop cache and make this action menu item available in Dolphin, I ran:

kbuildsycoca5

The referenced galmkdir script looks like this:

#!/bin/sh
set -e

current_dir="$1"
if [ -z "$current_dir" ]; then
    echo "Usage: $0 DIRECTORY" >&2
    exit 1
fi

dir="$(kdialog \
    --geometry 350x50 \
    --title "Heutigen Waldspaziergang anlegen" \
    --inputbox "Neues Verzeichnis in ā€ž$current_dirā€œ anlegen:" \
    "waldspaziergang-$(date +%Y-%m-%d)")"
mkdir "$current_dir/$dir"
dolphin "$current_dir/$dir"

This solution is far from perfect, though. Ideally, I’d love to have it in the “Create New” menu instead of the “Actions” menu. But that doesn’t really work. I cannot define a default directory name, not to mention even a dynamic one with the current date. (I would have to update the .desktop file every day or so.) I also failed to create an empty directory. I somehow managed to create a directory with some other templates in it for some reason I do not really understand.

Let’s see how that works out over the next few days. If I like it, I might define a few more default directory names.

⤋ Read More

The seL4 microkernel: an introduction
This whitepaper provides an introduction to and overview of seL4. We explain what seL4 is (and is not) and explore its defining features. We explain what makes seL4 uniquely qualified as the operating-system kernel of choice for security- and safety-critical systems, and generally embedded and cyber-physical systems. In particular, we explain seL4’s assurance story, its security- and safety-relevant features, and its benchmark-setting performance. We also d … ⌘ Read more

⤋ Read More
In-reply-to » One of the biggest gripes of the community with the way the threading model currently works with Twtxt v1.2 (https://twtxt.dev) is this notion of:

@prologic@twtxt.net We can’t agree on this idea because that makes things even more complicated than they already are today. The beauty of twtxt is, you put one file on your server, done. One. Not five million. Granted, there might be archive feeds, so it might already be a bit more, but still faaaaaaar less than one file per message.

Also, you would need to host not just your own hash files, but also those of everybody you follow. Otherwise, what is that supposed to achieve? If people are already following my feed, they know what hashes I have, so this is of no use to them (unless they want to look up a message from an archive feed and don’t process them). But the far more common scenario is that an unknown hash originates from a feed that they have not subscribed to.

Additionally, yarnd’s URL schema would then also break, because https://twtxt.net/twt/<hash> now becomes https://twtxt.net/user/prologic/<hash>, https://twtxt.net/user/bender/<hash> and so on. To me, that looks like you would only get hashes if they belonged to this particular user. Of course, you could define rules that if there is a /user/ part in the path, then use a different URL, but this complicates things even more.

Sorry, I don’t like that idea.

⤋ Read More
In-reply-to » Dang it! I ran into import cycles with shared test utilities again. :-( Either I have to copy this function to set up an in-memory test storage across packages or I have to put it in the storage package itself and guard it with a build tag that is only used in tests (otherwise I end up with this function in my production binary as well). I don't like any of the alternatives. :-(

Thanks, @xuu@txt.sour.is, great explanation. In another project I’ve structured it exactly like you wrote. The mock storage over there extends the SQLite storage and provides mechanisms to return errors and such for testing purposes:

  • storage/ defines the interface
    • sqlite/ implements the storage interface
    • mock/ extends the SQLite implementation by some mocking capabilities and assertions

Here, however, there are no storage subpackages. It’s just storage, that’s it. Everything is in there. The only implementation so far is an SQLite backend that resides in storage. My RAM storage is exactly that SQLite storage, but with :memory: instead of a backing file on disk. I do not have a mock storage (yet).

I have to think about it a bit more, but I probably have to do exactly that in my tt rewrite, too. Sigh. I just have the feeling that in storage/sqlite/sqlite_test.go I cannot import storage/mock for the helper because storage/mock/mock.go imports and embeds the type from storage/sqlite. But I’m too tired right now to think clearly.

⤋ Read More
In-reply-to » Dang it! I ran into import cycles with shared test utilities again. :-( Either I have to copy this function to set up an in-memory test storage across packages or I have to put it in the storage package itself and guard it with a build tag that is only used in tests (otherwise I end up with this function in my production binary as well). I don't like any of the alternatives. :-(

re-reading, so NewRAMStorage(…) is just something that sets up your storage and initial data.. that can probably live with storage/sqlite. The point is the storage package does not import the implementations of storage.Storage. It just defines the contract for things that use that interface. Now storage/sqlite CAN import storage and not have a circular dep.

It kinda works in reverse for import directions. Usually you have your root package that imports things from deeper in the directory structure.. but for the case of interfaces it reverses, where the deeper can import from parents but parents cannot import from children.

- app < storage
      < storage/sqlite
      < controller < storage
                   < storage/sqlite
 
- sqlite < storage

- storage X storage/sqlite

⤋ Read More
In-reply-to » Dang it! I ran into import cycles with shared test utilities again. :-( Either I have to copy this function to set up an in-memory test storage across packages or I have to put it in the storage package itself and guard it with a build tag that is only used in tests (otherwise I end up with this function in my production binary as well). I don't like any of the alternatives. :-(

@lyse@lyse.isobeef.org OK. So how I have worked things like this out is to have the interface in the root package, separate from the implementations. The interface doesn’t need to be tested since it’s just a contract. The implementations don’t need to import storage.Storage.

  • storage/ defines the Storage interface (no tests!)
    • storage/sqlite for the sqlite implementation, with tests for sqlite directly
    • storage/ram for the RAM implementation, with tests for RAM directly
  • controller/ can now import both storage and the implementation as needed.

So now I am guessing you wanted the RAM test for testing queries against sqlite and have it return some query response?

For that I usually would register a driver for SQL that emulates sqlite. Then it’s just a matter of passing the connection string to open the registered driver on setup.

https://github.com/glebarez/go-sqlite?tab=readme-ov-file#connection-string-examples
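
A minimal sketch of that approach with the linked driver (it registers itself under the name sqlite):

package storage_test

import (
	"database/sql"
	"testing"

	_ "github.com/glebarez/go-sqlite" // registers the "sqlite" driver
)

// newTestDB opens a throwaway in-memory database for a single test.
func newTestDB(t *testing.T) *sql.DB {
	t.Helper()
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		t.Fatal(err)
	}
	return db
}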

⤋ Read More
In-reply-to » Dang it! I ran into import cycles with shared test utilities again. :-( Either I have to copy this function to set up an in-memory test storage across packages or I have to put it in the storage package itself and guard it with a build tag that is only used in tests (otherwise I end up with this function in my production binary as well). I don't like any of the alternatives. :-(

@xuu@txt.sour.is My layout looks like this:

  • storage/
    • storage.go: defines a Storage interface
    • sqlite.go: implements the Storage interface
    • sqlite_test.go: originally had a function to set up a test storage to test the SQLite storage implementation itself: newRAMStorage(testing.T, $initialData) *Storage
  • controller/
    • feeds.go: uses a Storage
    • feeds_test.go: here I wanted to reuse the newRAMStorage(…) function

I then tried to relocate the newRAMStorage(…) into a

  • teststorage/
    • storage.go: moved here as NewRAMStorage(…)

so that I could just reuse it from both

  • storage/
    • sqlite_test.go: uses teststorage.NewRAMStorage(…)
  • controller/
    • feeds_test.go: uses teststorage.NewRAMStorage(…)

But that results in an import cycle, because the teststorage package imports storage for storage.Storage and the storage package imports teststorage for teststorage.NewRAMStorage(…) in its test. I’m just screwed. For now, I duplicated it as newRAMStorage(…) in controller/feeds_test.go.

I could put NewRAMStorage(…) in storage/testutils.go, which could be guarded with //go:build testutils. With go test -tags testutils …, storage/sqlite_test.go could just use NewRAMStorage(…) directly, and similarly in controller/feeds_test.go I could call storage.NewRAMStorage(…). But I don’t know if I would consider this really elegant.

The more I think about it, the more appealing it sounds. Because I could then also use other test-related stuff across packages without introducing other dedicated test packages. Build some assertions, converters, types etc. directly into the same package, maybe even make them methods of types.

If I went that route, I might do the opposite with the build tag and make it something like !prod instead of testing. Only when building the final binary, I would have to specify the tag to exclude all the non-prod stuff. Hmmm.

⤋ Read More
In-reply-to » This document is the result of a series of discussions between Robert "Uncle Bob" Martin and John Ousterhout, held between September 2024 and February 2025. The text addresses three main topics: method length, comments, and Test Driven Development (TDD). https://github.com/johnousterhout/aposd-vs-clean-code/blob/main/README.md This is something to read and reflect on for days.

And of course, TDD! I tried that, but it doesn’t work all that great for me in its strict form. I have the feeling that coming up with a single new failing test, making it pass, maybe some refactoring, rinse and repeat wastes significantly more time than doing it in – what they call – the “bundle” approach. Coming up with several tests in advance and then writing the code, or vice versa, is usually much quicker. I do find that more enjoyable; it also helps me reduce smaller context switches. I can focus on either the tests or the production code.

As for the potentially reduced code coverage with a non-TDD approach, I can easily see which parts are lacking tests and add them later. So, that’s largely a specious argument. Granted, I can forget to check the coverage or simply ignore it.

I agree with John, TDD results in less elegant code or requires more refactoring to tidy it up. Sometimes, it’s also not entirely clear at the beginning how the API should really look. It doesn’t happen often, but it does happen. Especially when experimenting or trying out different approaches. With TDD, I then also have to refactor the tests, which is not only annoying, but also involves the danger of accidentally breaking them.

TDD only works really well if you have super tiny functions. But we already established that I typically don’t like tiny methods just for the purpose of them being extremely short.

When fixing a bug, I usually come up with a failing test case first to verify that my repaired code later actually resolves the problem. For new code, it depends, sometimes tests first, sometimes the productive code first. Starting off with the tests requires the API to be well defined beforehand.

⤋ Read More
In-reply-to » Have you ever had to refactor a project that was not documented? Any suggestions?

ok, sounds like a ‘large’ project to me.
Is it more of an API (oriented to developers), or more oriented to UI/UX/frontend? Perhaps both?

I’d go with prologic’s advice of measuring and prioritizing. Perhaps you have a budget, or at least something like “let’s see how far we can reach in 6 months”, and possibly you won’t finish in the time you have (just guessing).

Something that has helped me is defining “Why do we want to refactor this project?”.
Could it be to make it compile on newer versions, to make it easier to grow and scale, or perhaps to sell the product to another company? Every reason has a different path, IMO.

⤋ Read More
In-reply-to » @kat To improve you shell programming skills, I highly recommend to check out shellcheck: https://github.com/koalaman/shellcheck It points out common errors and gives some suggestions on how to improve the code. Some details in shell scripting are very tricky to get right at first. Even after decades of shell programming, I run into "corner cases" every now and then.

PSA: Yarnd operators might want to define code { white-space: pre } in their CSS themes to render things as they’re supposed to look.

⤋ Read More
In-reply-to » @doesnm So the user should then set nick = _@domain.tld in the twtxt.txt?

What would the advantage of nick = _ be, compared to just not defining a nick and letting the client use the domain as the handle?

What is not intuitive is that you put something in the nick field that is not to be taken literally. The special meaning of _ is only clear if you read the documentation, compared to having something in nick that makes sense in the current context of the twtxt.txt.

⤋ Read More
In-reply-to » @eapl.me A way to have a more bluesky'ish handles in twtxt could be to take inspiration from Bridgy Fed and say: If NICK = DOMAIN then only show @DOMAIN So instead of @eapl.me@eapl.me it will just be @eapl.me

@doesnm@doesnm.p.psf.lt So the user should then set nick = _@domain.tld in the twtxt.txt?

It seems more intuitive and user-friendly to just use nick = domain.tld and then have a convention for clients to render the handle as @domain.tld instead of @domain.tld@domain.tld

For a feed with no nick defined (e.g. https://akkartik.name/twtxt.txt) it will also be simpler and make more sense to just use the domain as the nick and render it as @domain.tld
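
A sketch of that convention in client code (function name is mine):

// Render "@domain.tld" when the nick equals the feed's domain,
// otherwise the usual "@nick@domain.tld" handle.
function renderHandle(nick, feedUrl) {
    const domain = new URL(feedUrl).hostname;
    const name = nick || domain; // no nick defined: fall back to the domain
    return name === domain ? `@${domain}` : `@${name}@${domain}`;
}

// renderHandle('eapl.me', 'https://eapl.me/twtxt.txt')  => '@eapl.me'
// renderHandle(null, 'https://akkartik.name/twtxt.txt') => '@akkartik.name'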

⤋ Read More
In-reply-to » For Example:

@prologic@twtxt.net maybe you meant to specify twtxt as a type similar to ActivityPub’s application/activity+json in https://webfinger.net/lookup/?resource=sorenpeter@norrebro.space

    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://norrebro.space/users/sorenpeter"
    },

Then it would also make sense to define a Link Relation, but should that then link to something like https://twtxt.dev/webfinger.html where we can describe the spec?

⤋ Read More
In-reply-to » Righto, @eapl.me, ta for the writeup. Here we go. :-)

@eapl.me@eapl.me here are my replies (somewhat similar to Lyse’s and James’)

  1. Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax ![NSFW](url.to/image.jpg) if something is NSFW

  2. IDs besides datetime. When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantages over content-based addressing. If you change the timestamp, then it’s a new post. Just like any other blog CMS.

  3. Caching: Yes, all good ideas, but that is more a task for the clients, not the serving of the twtxt.txt files.

  4. Discovery: User-agent-based discovery can get better. I’m working on a wrapper script in PHP, so you don’t need to go through Apache’s log files to see who fetches your feed. But for Gemini and gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. Interesting read about why WebMentions might be a bad idea. Twtxt being much simpler than full-featured IndieWeb sites, a lot of those concerns do not apply here. But that’s the issue with any open inbox: it is hard to solve without some form of (centralized or community) spam moderation.

  5. Support more protocols besides http/s. Yes, why not, if we can make clients that merge or differentiate between the same feed served via multiple URLs.

  6. Languages: If the need is big, then make a separate feed. I don’t mind seeing stuff in other languages as the volume is low. You’ve got translation tools if you need to know what’s going on. And again, when there is a need for easier switching between posting to several feeds, it’s about building clients with a UI that makes it easy. Not something that should take up space in the format/protocol.

  7. Emojis: I’m not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?

⤋ Read More
In-reply-to » Simplified twtxt - I want to suggest some dogmas or commandments for twtxt, from where we can work our way back to how to implement different feature like replies/treads:

@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I’m referring to the definition that it’s the first url = in the file that has to be used for the twthash computation, which is too arbitrary a way of defining something and breaks threading time and time again. And this is the argument for not using url+date+message = twthash.

⤋ Read More
In-reply-to » @aelaraji And pray tell/share with us what these magical commands do? 🤣

@prologic@twtxt.net Sure!! gg=G auto-indents your documents, as for the rest it’s:

  • v for selection mode, c for change and d for delete actions as usual.
  • followed by either a for around or i for inside/in-between whatever special character comes after it
    • the [, (, " … special characters define the perimeter/extent of the action.

e.g.: ci" would change the text under the cursor between quotes, and da[ would delete the text, brackets included.

I’ve linked a reference in the first twt, hope you find it useful.

⤋ Read More

so i learned that my vpn provider uses nftables to tag traffic for split tunnelling. so it looks like i’ll be converting my iptables rules. there’s some implication for docker containers that i’ll have to reckon with, but i’m already nesting them inside a nixos container so i don’t really need docker to touch the network at all. after that i’ll be able to define some rules to allow traffic meant for the yggdrasil network to reach the tunnel. this will be important later.

⤋ Read More

More thoughts about changes to twtxt (as if we haven’t had enough thoughts):

  1. There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:

1a. Better and longer hashes.

1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.

1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8

1d. Stuff already described at dev.twtxt.net that doesn’t need any changes.

  2. We won’t know what will and won’t work until we try them. So I’m inclined to think of this as a bunch of draft ideas. Maybe later when we’ve seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.

  3. Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
    https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
    https://twtxt.readthedocs.io/en/latest/user/discoverability.html
    and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long “twtxt v2” document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment “you’ve ruined twtxt” and while I don’t completely agree with that commenter’s sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)

  4. All that being said, these are just my opinions, and I’m not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you’re actually implementing things, you’re in charge of what you decide to make, and I’m grateful for the work.

⤋ Read More
In-reply-to » @falsifian In my opinion it was a mistake that we defined the first url field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.

If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.

Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?

⤋ Read More
In-reply-to » @prologic Some criticisms and a possible alternative direction:

@falsifian@www.falsifian.org In my opinion it was a mistake that we defined the first url field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

# url = https://example.com/alias/txtxt.txt
# url = https://example.com/initial/twtxt.txt
<message 1 uses the initial URL>
<message 2 uses the initial URL, too>
# url = https://example.com/new/twtxt.txt
<message 3 uses the new URL>
# url = https://example.com/brand-new/twtxt.txt
<message 4 uses the brand new URL>

In theory, the same could be done for prepend-style feeds. They do exist, I’ve come across them. The parser would just have to calculate the hashes afterwards and not immediately.

⤋ Read More
In-reply-to » @movq The success of large neural nets. People love to criticize today's LLMs and image models, but if you compare them to what we had before, the progress is astonishing.

@prologic@twtxt.net I don’t know what you mean when you call them stochastic parrots, or how you define understanding. It’s certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.

⤋ Read More
In-reply-to » I've been thinking about a new term I've come across whilst reading a book. It's called "Complexity Budget" and I think it has relevance in lots of different fields. I specifically think it has a lot of relevance in the software industry and organizations in this field. When doing further research on this concept, I was only able to find talks on complexity budget in the context of medical care, especially psychiatric care. In this talk, complexity was described as:

@prologic@twtxt.net Hmm, yeah, hmm, I’m not sure. 😅 It all appears very subjective to me. Is 2k lines of code a lot or not?

I mean, I’m all for reducing complexity. 😅 I just have a hard time defining it and arguing about it. What I call “too complex”, others might think of as “just fine”. 🤔

⤋ Read More

I’ve been thinking about a new term I’ve come across whilst reading a book. It’s called “Complexity Budget” and I think it has relevance in lots of different fields. I specifically think it has a lot of relevance in the software industry and organizations in this field. When doing further research on this concept, I was only able to find talks on complexity budget in the context of medical care, especially psychiatric care. In this talk, complexity was described as:

  • Complexity is confusing
  • Complexity is costly
  • Complexity kills

When we think of “complexity” in terms of software and software development, we have a sort-of intuition about this, right? We know when software has become too complex. We know when an organization has grown in complexity, or even a system. So we have a good intuition of the concept already.

My question to y’all is: how can we concretely think about “Complexity Budget” and define it in terms that can be leveraged and used to control the complexity of software and systems?

⤋ Read More
In-reply-to » Congratulations to the British for getting rid of the Tories tyranny, and electing the forward thinking Labour party! 🄳

it works fine if you properly escape your urls!

 URIs include components and subcomponents that are delimited by
   characters in the "reserved" set.  These characters are called
   "reserved" because they may (or may not) be defined as delimiters by
   the generic syntax, by each scheme-specific syntax, or by the
   implementation-specific syntax of a URI's dereferencing algorithm.
   If data for a URI component would conflict with a reserved
   character's purpose as a delimiter, then the conflicting data must be
   percent-encoded before the URI is formed.

      reserved    = gen-delims / sub-delims
      gen-delims  = ":" / "/" / "?" / "#" / "[" / "]" / "@"
      sub-delims  = "!" / "$" / "&" / "'" / "(" / ")"
                  / "*" / "+" / "," / ";" / "="

⤋ Read More
In-reply-to » Yeah, the lack of comments makes regular JSON not a good configuration format in my view. Also, putting all keys in quotes and the use of commas is annoying. The big upside is that it's in lots of standard libraries.

@lyse@lyse.isobeef.org it’s a hierarchical key-value format. I designed it for the network peering tools I use.. I can grant access to different parts of the tree to other users.. kinda like directory permissions. A basic example of the format is:

@namespace
# multi
# line
# comment
root :value

# example space comment
@namespace.name space-tag 

# attribute comments
attribute attr-tag  :value for attribute

# attribute with multiple 
# lines of values
foo :bar
      :bin
      :baz

repeated :value1
repeated :value2

each @ starts the definition of a namespace, kinda like [name] in INI format. It can have comments that show up before it. Then each attribute is key :value and can have its own # comment lines.
Values can be multi-line.. and also repeated..

the namespaces and values can also have little metadata tags added to them.

the service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes as soon as they are updated.

⤋ Read More
In-reply-to » @xuu Despite that these AoC math text problems are rather silly in my opinion (reminds me of an exercise in our math book where somebody wanted to carry a railroad rail around an L-shaped corner in the house and the question was how long that rail could be so that it still fits — sure, we've all carried several meter long railroad rails in our houses by ourselves numerous times…), these algorithms are really neat!

@lyse@lyse.isobeef.org They sure are silly at times. :-) You really have to combine this event with something else, like learning a new language. Otherwise it gets boring real quick.

What I absolutely love about AoC is that it’s – indeed – a bit like school. 😅 The problems are well-defined, the inputs are well-defined, and there is a definite answer. It’s either right or wrong – period. Compared to real life and work, I welcome this very much. 🤣

⤋ Read More