@aelaraji@aelaraji.com It's definitely been a long and fast year, that's for sure. Don't worry!
@lyse@lyse.isobeef.org nginx allows per-user logging by using variables defined in the configuration. Not sure, though, if a Tilde would be willing to go to those "extremes".
@bender@twtxt.net Sounds about right.
I had a brainfart yesterday, though. For whatever reason I thought of subdomains, which are modeled with server entries in nginx. So, each could define its own access_log location. However, there are no subdomains in place! Searching around, I didn't find any solution to give each user their own access log file.
One way would be a cronjob, aeh, systemd timer as I learned the other day, that greps the main access log and writes all user access log files with only the relevant stuff.
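For reference, a rough sketch of the variable-based approach mentioned above (assuming user pages live under /~user/ paths; the map block goes in the http context, and access_log accepts variables in its path at the cost of nginx reopening the file for each request):

map $request_uri $tilde_user {
    ~^/~(?<name>[^/]+)/ $name;
    default             other;
}

server {
    listen 80;
    # One log file per extracted user name.
    access_log /var/log/nginx/access-$tilde_user.log combined;
}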
Hello again everyone! A little update on my twtxt client.
I think it's finally shaping up a bit better now, but…
As I'm trying to put all the parts together, I decided to build multiple parallel UIs, to ensure I don't accidentally create a structure that is more rigid than planned.
I already decided on a UI that I would want to use for myself; it would be inspired by moshidon, misskey and some other "social feed" mock-ups I found on dribbble.
I also plan on building a raw HTML version (for anyone wanting to do a full DIY client).
I would love to get suggestions for what you would like to see (and possibly use) as a client, by sharing a link, an app/website name or even a sketch made by you on paper.
I think I'll pick a third and maybe a fourth design to build together with the two already mentioned.
For reference, the screens I plan to provide are (some might be optional or conditionally/manually hidable):
- Global / personal timeline screen
- Profile screen (with timeline)
- Thread screen
- Notifications screen or popup (both valid)
- DM list & chat screens (still planning, might come later)
- Settings screen (it'll probably be a hard-coded form, but better to mention it)
- Publish / edit post screen or popup (still analysing some use cases, as some "engines" might not have direct publishing support)
I also plan on adding two optional metadata fields:
display_name: To show a human-readable alternative to a nick; it falls back to nick if not defined
banner: Using the same format as avatar, but the expected image is wider, inspired by other social platforms around
I also plan on supporting any metadata provided, including a dynamically parsable regex rule format for those extra fields. This should allow anyone to build new clients that don't limit themselves to just the social aspect of twtxt. Hoping to see unique ways of using twtxt!
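For illustration, a feed header using these fields could look like this (display_name and banner are the proposed fields from above; nick, url and avatar are existing twtxt metadata, and all values here are made up):

# nick = example
# url = https://example.com/twtxt.txt
# avatar = https://example.com/avatar.png
# display_name = Example Person
# banner = https://example.com/banner.png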
is the url metadata field unequivocally treated as the canon feed url when calculating hashes, or is it ignored if it's not at least a proper url? do you just tolerate it if it's impersonating someone else's feed, or pointing to something that isn't even a feed at all?
@zvava@twtxt.net Yes, the specification defines the first url to be used for hashing. No matter if it points to a different feed or whatever. Just unsubscribe from malicious feeds and you're done.
Since the first url is used for hashing, it must never change. Otherwise, it will break threading, as you already noticed. If your feed moves and you wanna keep the old messages in the same new feed, you still have to point to the old url location and keep that forever. But you can add more urls. As I said several times in the past, in hindsight, using the first url was a big mistake. It would have been much better if the last encountered url were used for hashing onwards. This way, feed moves would be relatively straightforward. However, that ship has sailed. Luckily, feeds typically don't relocate.
@zvava@twtxt.net CORS is our worst enemy.
I ran into the same issue since the request is browser-based, so the only solution is using a proxy.
For testing (and real personal use) I rely on this one https://corsproxy.io/.
In my client, I first check if the source allows me to fetch it directly and fall back to prefixing with a proxy if it gives an error.
For security reasons the browser doesn't give you a readable error for CORS, so you must use a catch-all for that. If it fails again with the proxy, you can deal with any other errors it throws as you normally would (preferably outside of the fetch function).
Once the fetch responds, I store the response.url value so I can fetch it again for updates without extra calls (you can store it verbatim, or as a flag so you can change the proxy later).
Here's an extract of my code:
export async function fetchWithProxy(url, proxy = null) {
  // Try a direct fetch first; CORS failures reject without a readable error.
  return await fetch(url).catch(err => {
    if (!proxy) throw err;
    // Retry through the CORS proxy with the target URL percent-encoded.
    return fetch(`${proxy}${encodeURIComponent(url)}`);
  });
}
// Using it with
const res = await fetchWithProxy('https://twtxt.net/user/zvava/twtxt.txt', 'https://corsproxy.io/?');
// Get the working url (direct or through proxy)
const fetchingURL = res.url;
// Get the twtxt feed content (or handle errors)
const text = await res.text();
I also plan to allow the user to define a custom proxy field. I like the solution used by Delta.chat in their Android app, where you define the URL format with a variable, e.g. https://my-proxy?$TWTXT_URL, since it gives you the freedom to use any proxy, not just prefix-style ones.
If the idea of using a third-party proxy is not to the user's liking, they can use a self-hosted solution like cors-anywhere or build their own (with twtxt it should just be a GET).
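A minimal sketch of expanding such a template (the $TWTXT_URL placeholder follows the example above; buildProxyURL is a made-up helper name):

// Substitute the placeholder with the percent-encoded target URL.
function buildProxyURL(template, url) {
  return template.replace('$TWTXT_URL', encodeURIComponent(url));
}

buildProxyURL('https://my-proxy?$TWTXT_URL', 'https://twtxt.net/user/zvava/twtxt.txt');
// => 'https://my-proxy?https%3A%2F%2Ftwtxt.net%2Fuser%2Fzvava%2Ftwtxt.txt'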
Hi everyone, here's a little introduction to my twtxt client (still WIP).
The client I'm developing is a single-tenant project that runs entirely in the browser (it might use an optional backend).
It's entirely based on native web components and vanilla JS. It's designed to act more like a toolkit than a full-fledged client, allowing users to "DIY" their own interface with pure HTML or plain JavaScript functions.
Users can also build their own engines by including a global JavaScript object that implements the defined internal API (TBD).
I'm planning to build a system that is easy enough to build on and use at any skill level, using only pure HTML (with a homebrew minimal template engine) or plain JS (I'll also be providing some pre-made templates).
Everything can be self-hosted on any static hosting provider. This allows twtxt to spread within communities like Neocities and similarly hosted websites (basically any IndieWeb/Smallweb/digital garden website and any of the common GitHub/Lab/Berg/lify Pages).
It will probably be named something like TxtCraft or craf.txt, but I'm not really sure yet… (maybe some suggestions could help)
I'm still in the experimental phase, so there's no decent source code to share yet, but there will be soon enough!
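As a purely hypothetical sketch of the engine idea mentioned above (every name below is made up, since the internal API is still TBD; fetchWithProxy is from the earlier snippet and parseTwtxt is hypothetical too):

// A pluggable "engine" registered as a global object.
window.twtxtEngine = {
  // Fetch and parse a feed into post objects.
  async fetchFeed(url) {
    const res = await fetchWithProxy(url, 'https://corsproxy.io/?');
    return parseTwtxt(await res.text());
  },
  // Static-host engines may not support publishing at all.
  async publish(post) {
    throw new Error('publishing not supported by this engine');
  },
};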
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subject of other posts to find the related ones, right?
In my own client's current implementation (using hashes), the only calculation is in the hash generation; the rest is a verbatim copy of the subject (minus the # character). If this is the common implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
// Current hash approach
const hash = createHash(post.url, post.timestamp, post.content);
// New location approach
const location = post.url + '#' + post.timestamp;
// Unchanged (probably)
const subject = post.subject;
// Index them all
addToIndex(hash, post);
addToIndex(location, post);
addToIndex(subject, post);
}
// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
As I said before, the mention is already location-based @<example https://example.com/twtxt.txt>, so I think we should keep that in consideration.
Of course this will lead to a bit of fragmentation (without merging the two), but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this would probably need a fixed "header" for each post, but it can be seen as a separate extension).
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
The current hash relies on a url field too: by specification, it will use the first # url = <URL> in the feed's metadata if present. That too can be different from the fetching source, and if that field changes it would break the existing hashes as well. A better solution would be to use a non-URL key like # feed_id = <UNIQUE_RANDOM_STRING> with the url as fallback. We can prevent duplications if the reference uses that same url field too, or if the client "collapses" any reference to any of the urls defined in the metadata.
I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Note that edits can also break the hash; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.
For offline reading, the source would already be downloaded; fetching non-followed feeds would fill the gap the same way mentions do. Maybe I'm missing some context on this one.
To prevent collisions there was a discussion on extending the hash (I forgot whether that was already settled), but without a fallback that would break existing clients too. We should think of a parallel format that keeps current implementations unchanged; we are already backward compatible with original clients that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
We should also keep in mind that the current mention format is already location-based (@<example https://example.com/twtxt.txt>), so I'm not that worried about threads working the same way.
Hope to see some other thoughts about this matter.
Hello everyone!
After a long while away, I'm back on twtxt with this new feed.
Some of you might remember me as justamoment@twtxt.net; that was a test account I made for trying things out, but I ended up keeping it longer than planned.
I also tried other social platforms in search of a place that felt right for me.
In the end twtxt was the one that ticked all of my boxes:
- Slow social: it acts more like a feed reader, and I really appreciate that there's no flood of content that I can't keep up with.
- No server needed: I absolutely love having total control over my content. I tend to avoid moving parts that might break, plus you can put your feed under version control and it's all backed up.
- Ownership: I can put my feed anywhere I want and nobody can decide if I can access it or not.
- For hackers: a single .txt file allows me to join a community, how cool is that!
This is why I decided to build my own twtxt client, one that allows you to decide how the feed is presented on your "instance".
It's still in the making, but I'll try to share a bit of it once I've defined how things should work.
Coincidentally, I discovered that @itsericwoodward@itsericwoodward.com and @zvava@twtxt.net were also building twtxt clients. Seems like twtxt is set to grow!
@zvava@twtxt.net There would be only one hash for a message. Some to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped once nobody interacts with the old stuff anymore. But I'd keep it around in my client, because why not.
If users choose a client which supports the extensions, they don't have to mess around with v1 and v2 hashing, just like today.
As for the school of thought, personally, I'd prefer something else, too. I'm in camp location-based addressing, or whatever it is called. The more I think about it, the more a complete redesign of twtxt and its extensions seems necessary in my opinion. Retrofitting has its limits. Of course, this is much more work, though.
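A minimal sketch of the epoch cutover described above (the date and the hashV1/hashV2 functions are made up for illustration):

// Posts authored before the hypothetical epoch keep their v1 hashes.
const HASH_V2_EPOCH = Date.parse('2026-01-01T00:00:00Z');

function twtHash(post) {
  return Date.parse(post.timestamp) < HASH_V2_EPOCH
    ? hashV1(post)  // legacy short hash
    : hashV2(post); // new, longer hash
}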
it's like the <details> tag in HTML; it lets you write a sentence or so that someone can then click to expand to see the actual post. it's called a CW because most people use it to warn for potentially triggering/harmful subjects, but you can really use it for anything, like spoilers in a TV show or even for joke punchlines
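For reference, a minimal example of that element (the warning text is made up):

<details>
  <summary>CW: TV show spoilers</summary>
  The actual post goes here, hidden until the summary is clicked.
</details>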
@kat@yarn.girlonthemoon.xyz I reckon the original <details> needs to have the open attribute set in order to expand, so I cannot just define some custom CSS rules to do that in my browser.
But in regards to twtxt, my client won't hide anything in that realm anyway. :-) It's just more noise.
@movq@www.uninformativ.de What do you define as "expensive"? (I've always thought of modern-day printers as a "rip-off", and the ink, my god)
@eldersnake@we.loveprivacy.club Haha, yeah well, "thinking" isn't really something we even know how to define, let alone simulate
Only figured this out yesterday:
pinentry, which is used to safely enter a password on Linux, has several frontends. There's a GTK one, a Qt one, even an ncurses one, and so on.
GnuPG also uses pinentry. And you can configure your frontend of choice here in gpg-agent.conf.
But what happens when you don't configure it? What's the default?
Turns out, pinentry is a shell script wrapper and it's not even that long. Here it is in full:
#!/bin/bash
# Run user-defined and site-defined pre-exec hooks.
[[ -r "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec ]] && \
. "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec
[[ -r /etc/pinentry/preexec ]] && . /etc/pinentry/preexec
# Guess preferred backend based on environment.
backends=(curses tty)
if [[ -n "$DISPLAY" || -n "$WAYLAND_DISPLAY" ]]; then
case "$XDG_CURRENT_DESKTOP" in
KDE|LXQT|LXQt)
backends=(qt qt5 gnome3 gtk curses tty)
;;
*)
backends=(gnome3 gtk qt qt5 curses tty)
;;
esac
fi
for backend in "${backends[@]}"
do
lddout=$(ldd "/usr/bin/pinentry-$backend" 2>/dev/null) || continue
[[ "$lddout" == *'not found'* ]] && continue
exec "/usr/bin/pinentry-$backend" "$@"
done
exit 1
Preexec, okay, then some auto-detection to use a toolkit matching your desktop environment …
… and then it invokes ldd? To find out if all the required libraries are installed for the auto-detected frontend?
Oof. I was sitting here wondering why it would use pinentry-gtk on one machine and pinentry-gnome3 on another, when both machines had the exact same configs. Yeah, but different libraries were installed. One machine was missing gcr, which is needed for pinentry-gnome3, so that machine (and that one alone) spawned pinentry-gtk …
@kat@yarn.girlonthemoon.xyz I kind of like XML because it's mostly well-defined and easy for humans to read (unlike YAML, which is a complete mess, imho) … and at the same time, it can get complicated really fast. But at least it's plain text, that's the important part in this case.
@prologic@twtxt.net Hm, I wouldn't say that. Go code could fall into that category as well.
Maybe this topic could use a blog post / article that explains what it's about. I'm finding it hard to really define what "suckless-like software" is. (Their own philosophy focuses too much on elitism, if you ask me.)
@movq@www.uninformativ.de Curious what you would define as "suck less" software? (language agnostic of course!)
Move beyond basic threshold alerts! Define clear Service Level Objectives (SLOs) and measure Service Level Indicators (SLIs) to track real user impact. Use Prometheus to alert when your SLOs are at risk, ensuring you focus on what truly matters to your users. #Monitoring #SRE #Prometheus
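For instance, a minimal sketch of such an SLO-based alert as a Prometheus rule (the http_requests_total metric, its code label and the 99.9% availability target are all assumptions for illustration):

groups:
  - name: slo-alerts
    rules:
      - alert: ErrorBudgetBurn
        # SLI: ratio of 5xx responses; fire when the 99.9% SLO is at risk.
        expr: |
          sum(rate(http_requests_total{code=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.001
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error rate is burning the SLO error budget"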
@andros@twtxt.andros.dev define "compatible". On the "not addressed to me" point: if I follow you, I will see your twtxts, whether they are addressed to me or not.
Even though I really do like the shell, I always use Dolphin to mount my digicam SD card and copy the photos onto my computer. I finally added a context menu item in Dolphin to create a forest stroll directory with the current date in order to save some typing:

The following goes in ~/.local/share/kservices5/ServiceMenus/galmkdir.desktop:
[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=KonqPopupMenu/Plugin,inode/directory
Actions=Waldspaziergang;
[Desktop Action Waldspaziergang]
Name=Heutigen Waldspaziergang anlegen…
Icon=folder-green
Exec=~/src/gelbariab/galmkdir "%f"
In order to update the KDE desktop cache and make this action menu item available in Dolphin, I ran:
kbuildsycoca5
The referenced galmkdir script looks like this:
#!/bin/sh
set -e
current_dir="$1"
if [ -z "$current_dir" ]; then
echo "Usage: $0 DIRECTORY" >&2
exit 1
fi
dir="$(kdialog \
--geometry 350x50 \
--title "Heutigen Waldspaziergang anlegen" \
--inputbox "Neues Verzeichnis in „$current_dir“ anlegen:" \
"waldspaziergang-$(date +%Y-%m-%d)")"
mkdir "$current_dir/$dir"
dolphin "$current_dir/$dir"
This solution is far from perfect, though. Ideally, I'd love to have it in the "Create New" menu instead of the "Actions" menu. But that doesn't really work. I cannot define a default directory name, not to mention even a dynamic one with the current date. (I would have to update the .desktop file every day or so.) I also failed to create an empty directory. I somehow managed to create a directory with some other templates in it for some reason I do not really understand.
Let's see how that works out in the next days. If I like it, I might define a few more default directory names.
The seL4 microkernel: an introduction
This whitepaper provides an introduction to and overview of seL4. We explain what seL4 is (and is not) and explore its defining features. We explain what makes seL4 uniquely qualified as the operating-system kernel of choice for security- and safety-critical systems, and generally embedded and cyber-physical systems. In particular, we explain seL4's assurance story, its security- and safety-relevant features, and its benchmark-setting performance. We also d… ⌘ Read more
@prologic@twtxt.net yes! Of course. However, give me some time, I want to define a small proposal for the Registry (v2?)
@prologic@twtxt.net We can't agree on this idea because it makes things even more complicated than they already are today. The beauty of twtxt is: you put one file on your server, done. One. Not five million. Granted, there might be archive feeds, so it might already be a bit more, but still faaaaaaar less than one file per message.
Also, you would need to host not only your own hash files, but everybody else's as well that you follow. Otherwise, what is that supposed to achieve? If people are already following my feed, they know what hashes I have, so this is of no use to them (unless they want to look up a message from an archive feed and don't process them). But the far more common scenario is that an unknown hash originates from a feed that they have not subscribed to.
Additionally, yarnd's URL schema would then also break, because https://twtxt.net/twt/<hash> now becomes https://twtxt.net/user/prologic/<hash>, https://twtxt.net/user/bender/<hash> and so on. To me, that looks like you would only get hashes if they belonged to this particular user. Of course, you could define rules that if there is a /user/ part in the path, then use a different URL, but this complicates things even more.
Sorry, I don't like that idea.
Thanks, @xuu@txt.sour.is, great explanation. In another project I've structured it exactly like you wrote. The mock storage over there extends the SQLite storage and provides mechanisms to return errors and such for testing purposes:
- storage/ defines the interface
- sqlite/ implements the storage interface
- mock/ extends the SQLite implementation by some mocking capabilities and assertions
Here, however, there are no storage subpackages. It's just storage, that's it. Everything is in there. The only implementation so far is an SQLite backend that resides in storage. My RAM storage is exactly that SQLite storage, but with :memory: instead of a backing file on disk. I do not have a mock storage (yet).
I have to think about it a bit more, but I probably have to do exactly that in my tt rewrite, too. Sigh. I just have the feeling that in storage/sqlite/sqlite_test.go I cannot import storage/mock for the helper because storage/mock/mock.go imports and embeds the type from storage/sqlite. But I'm too tired right now to think clearly.
Re-reading: so NewRAMStorage(…) is just something that sets up your storage and initial data.. that can probably live with storage/sqlite. The point is, the storage package does not import the implementations of storage.Storage. It just defines the contract for things that use that interface. Now storage/sqlite CAN import storage and not have a circular dep.
It kinda works in reverse for import directions. Usually you have your root package that imports things from deeper in the directory structure.. but in the case of interfaces it reverses, where the deeper can import from parents but parents cannot import from children.
- app < storage
      < storage/sqlite
- controller < storage
             < storage/sqlite
- sqlite < storage
- storage X storage/sqlite
@lyse@lyse.isobeef.org OK. So how I have worked things like this out is to have the interface in the root package, separate from the implementations. The interface doesn't need to be tested since it's just a contract. The implementations don't need to import storage.Storage.
- storage/ defines the Storage interface (no tests!)
- storage/sqlite for the sqlite implementation and tests for sqlite directly
- storage/ram for the ram implementation and tests for RAM directly
- controller/ can now import both storage and the implementation as needed.
So now I am guessing you wanted the RAM test for testing queries against sqlite and have it return some query response?
For that I usually would register a driver for SQL that emulates sqlite. Then it's just a matter of passing the connection string to open the registered driver on setup.
https://github.com/glebarez/go-sqlite?tab=readme-ov-file#connection-string-examples
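To make the import direction concrete, a minimal Go sketch of that layout (the module path and the SaveFeed method are made up; only the structure matters):

// storage/storage.go: the root package only defines the contract.
package storage

type Storage interface {
	SaveFeed(url string) error
}

// storage/sqlite/sqlite.go: the implementation imports the parent,
// never the other way around, so there is no import cycle.
package sqlite

import "example.org/tt/storage" // hypothetical module path

type SQLite struct{} // db handle omitted in this sketch

func (s *SQLite) SaveFeed(url string) error { return nil }

// Compile-time check that *SQLite satisfies the storage.Storage contract.
var _ storage.Storage = (*SQLite)(nil)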
@xuu@txt.sour.is My layout looks like this:
- storage/
  - storage.go: defines a Storage interface
  - sqlite.go: implements the Storage interface
  - sqlite_test.go: originally had a function to set up a test storage to test the SQLite storage implementation itself: newRAMStorage(testing.T, $initialData) *Storage
- controller/
  - feeds.go: uses a Storage
  - feeds_test.go: here I wanted to reuse the newRAMStorage(…) function

I then tried to relocate the newRAMStorage(…) into a

- teststorage/
  - storage.go: moved here as NewRAMStorage(…)

so that I could just reuse it from both

- storage/
  - sqlite_test.go: uses testutils.NewRAMStorage(…)
- controller/
  - feeds_test.go: uses testutils.NewRamStorage(…)
But that results in an import cycle, because the teststorage package imports storage for storage.Storage and the storage package imports testutils for testutils.NewRAMStorage(…) in its test. I'm just screwed. For now, I duplicated it as newRAMStorage(…) in controller/feeds_test.go.
I could put NewRAMStorage(…) in storage/testutils.go, which could be guarded with //go:build testutils. With go test -tags testutils …, storage/sqlite_test.go could just use NewRAMStorage(…) directly, and similarly in controller/feeds_test.go I could call storage.NewRamStorage(…). But I don't know if I would consider this really elegant.
The more I think about it, the more appealing it sounds. Because I could then also use other test-related stuff across packages without introducing other dedicated test packages. Build some assertions, converters, types etc. directly into the same package, maybe even make them methods of types.
If I went that route, I might do the opposite with the build tag and make it something like !prod instead of testing. Only when building the final binary would I have to specify the tag to exclude all the non-prod stuff. Hmmm.
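If it helps, a tiny sketch of that guarded helper file (Open is a hypothetical constructor; the signature follows the earlier posts):

// storage/testutils.go
//go:build testutils

package storage

import "testing"

// NewRAMStorage returns the SQLite-backed Storage using ":memory:",
// seeded with initial data. Only compiled with: go test -tags testutils
func NewRAMStorage(t *testing.T, initialData string) *Storage {
	t.Helper()
	s, err := Open(":memory:") // Open is a made-up constructor name
	if err != nil {
		t.Fatal(err)
	}
	// applying initialData is omitted in this sketch
	return s
}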
And of course, TDD! I tried that, but it doesn't work all that great for me in its strict form. I have the feeling that coming up with a single new failing test, making it pass, maybe some refactoring, rinse and repeat wastes significantly more time than doing it in, what they call, the "bundle" approach. Coming up with several tests in advance and then writing the code, or vice versa, is usually much quicker. I do find that more enjoyable, and it also helps me to reduce context switches. I can focus on either the tests or the production code.
As for the potentially reduced code coverage with a non-TDD approach, I can easily see which parts are lacking tests and hand them in later. So, that's largely a specious argument. Granted, I can forget to check the coverage or simply ignore it.
I agree with John: TDD results in less elegant code, or requires more refactoring to tidy it up. Sometimes, it's also not entirely clear at the beginning what the API should really look like. It doesn't happen often, but it does happen. Especially when experimenting or trying out different approaches. With TDD, I then also have to refactor the tests, which is not only annoying, but also involves the danger of accidentally breaking them.
TDD only works really well if you have super tiny functions. But we already established that I typically don't like tiny methods just for the purpose of them being extremely short.
When fixing a bug, I usually come up with a failing test case first to verify that my repaired code later actually resolves the problem. For new code, it depends: sometimes tests first, sometimes the productive code first. Starting off with the tests requires the API to be well defined beforehand.
ok, sounds like a "large" project to me.
Is it more an API (oriented to developers), or more oriented to UI/UX/frontend? Perhaps both?
I'd go with prologic's advice of measuring and prioritizing. Perhaps you have a budget, or at least something like "let's see how far we can reach in 6 months", and possibly you won't finish in the time you have (just guessing).
Something that has helped me was defining "Why do we want to refactor this project?".
Could it be to make it compile on newer versions, or to make it easier to grow and scale, or perhaps they are trying to sell the product to another company? Every reason has a different path, IMO.
shellcheck: https://github.com/koalaman/shellcheck. It points out common errors and gives some suggestions on how to improve the code. Some details in shell scripting are very tricky to get right at first. Even after decades of shell programming, I run into "corner cases" every now and then.
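A classic corner case it flags (SC2086, unquoted variable; the file name is made up):

f="my file.txt"
rm $f    # SC2086: word splitting makes this try to remove "my" and "file.txt"
rm "$f"  # quoted: removes the one file "my file.txt"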
PSA: Yarnd operators might want to define code { white-space: pre } in their CSS themes to render things as they're supposed to look.
What would the advantage be of nick = _ compared to just not defining a nick and letting the client use the domain as the handle?
What is not intuitive is that you put something in the nick field that is not to be taken literally. The special meaning of _ is only clear if you read the documentation, compared to having something in nick that makes sense in the current context of the twtxt.txt.
If NICK = DOMAIN then only show @DOMAIN
So instead of @eapl.me@eapl.me it will just be @eapl.me
@doesnm@doesnm.p.psf.lt So the user should then set nick = _@domain.tld in the twtxt.txt?
It seems more intuitive and user-friendly to just use nick = domain.tld and then have a convention for clients to render the handle as @domain.tld instead of @domain.tld@domain.tld
For a feed with no nick defined (e.g. https://akkartik.name/twtxt.txt) it will also be simpler and make more sense to just use the domain as the nick and render it as @domain.tld
@prologic@twtxt.net maybe you meant to specify twtxt as a type similar to ActivityPub's application/activity+json in https://webfinger.net/lookup/?resource=sorenpeter@norrebro.space
{
"rel": "self",
"type": "application/activity+json",
"href": "https://norrebro.space/users/sorenpeter"
},
Then it would also make sense to define a Link Relation, but should that then link to something like https://twtxt.dev/webfinger.html where we can describe the spec?
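Hypothetically, a twtxt entry in the same links array could look like this (the rel URL and media type are made up; nothing is specified yet):

{
  "rel": "https://twtxt.dev/rel/feed",
  "type": "text/plain",
  "href": "https://norrebro.space/twtxt.txt"
},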
@eapl.me@eapl.me here are my replies (somewhat similar to Lyse's and James')
Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax if something is NSFW.
IDs besides datetime: When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantage over content-based addressing. If you change the timestamp, then it's a new post. Just like any other blog CMS.
Caching: Yes, all good ideas, but that is more a task for the clients, not for the serving of the twtxt.txt files.
Discovery: User-agent-based discovery can become better. I'm working on a wrapper script in PHP, so you don't need to go into Apache's log files to see who fetches your feed. But for Gemini and Gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. Interesting read about why WebMentions might be a bad idea. Twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that's the issue with any open inbox. This is hard to solve without some form of (centralized or community) spam moderation.
Support more protocols besides http/s: Yes, why not, if we can make clients that merge or differentiate between the same feed served via multiple URLs.
Languages: If the need is big, then make a separate feed. I don't mind seeing stuff in other languages as the volume is low. You've got translation tools if you need to know what's going on. And again, when there is a need for easier switching between posting to several feeds, then it's about building clients with a UI that makes it easy. Not something that should take up space in the format/protocol.
Emojis: I'm not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?
@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I'm referring to the definition that it's the first url = in the file that has to be used for the twthash computation, which is too arbitrary a way of defining something and breaks threading time and time again. And this is the case against using url+date+message = twthash.
@prologic@twtxt.net Sure!! gg=G auto-indents your documents, as for the rest it's:
- v for selection mode, c for change and d for delete actions as usual
- followed by either a for around or i for inside/in-between whatever special character comes after it
- the [, (, " … special characters define the perimeter/extent of the action
i.e: ci" would change the text under the cursor between quotes and da[ would delete the text and brackets included. I've linked a reference in the first twt, hope you find it useful.
so i learned that my vpn provider uses nftables to tag traffic for split tunnelling. so it looks like i'll be converting my iptables rules. there's some implication for docker containers that i'll have to reckon with, but i'm already nesting them inside a nixos container so i don't really need docker to touch the network at all. after that i'll be able to define some rules to allow traffic meant for the yggdrasil network to reach the tunnel. this will be important later.
More thoughts about changes to twtxt (as if we haven't had enough thoughts):
- There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:
1a. Better and longer hashes.
1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.
1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8
1d. Stuff already described at dev.twtxt.net that doesnāt need any changes.
We won't know what will and won't work until we try them. So I'm inclined to think of this as a bunch of draft ideas. Maybe later when we've seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.
Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
https://twtxt.readthedocs.io/en/latest/user/discoverability.html
and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long "twtxt v2" document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment "you've ruined twtxt" and while I don't completely agree with that commenter's sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)

All that being said, these are just my opinions, and I'm not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you're actually implementing things, you're in charge of what you decide to make, and I'm grateful for the work.
I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hashes, together with the message date and content.
If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.
Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?
@falsifian@www.falsifian.org In my opinion it was a mistake that we defined the first url field in the feed to be the URL used for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:
# url = https://example.com/alias/txtxt.txt
# url = https://example.com/initial/twtxt.txt
<message 1 uses the initial URL>
<message 2 uses the initial URL, too>
# url = https://example.com/new/twtxt.txt
<message 3 uses the new URL>
# url = https://example.com/brand-new/twtxt.txt
<message 4 uses the brand new URL>
In theory, the same could be done for prepend-style feeds. They do exist; I've come across them. The parser would just have to calculate the hashes afterwards and not immediately.
@prologic@twtxt.net I don't know what you mean when you call them stochastic parrots, or how you define understanding. It's certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.
@prologic@twtxt.net Hmm, yeah, hmm, I'm not sure. It all appears very subjective to me. Is 2k lines of code a lot or not?
I mean, I'm all for reducing complexity. I just have a hard time defining it and arguing about it. What I call "too complex", others might think of as "just fine".
I've been thinking about a new term I've come across whilst reading a book. It's called "Complexity Budget" and I think it has relevance in lots of difficult fields. I specifically think it has a lot of relevance in the software industry and organizations in this field. When doing further research on this concept, I was only able to find talks on complexity budget in the context of medical care, especially psychiatric care. In this talk, complexity was described as:
- Complexity is confusing
- Complexity is costly
- Complexity kills
When we think of "complexity" in terms of software and software development, we have a sort of intuition about this, right? We know when software has become too complex. We know when an organization has grown in complexity, or even a system. So we have a good intuition for the concept already.
My question to y'all is: how can we concretely think about a "Complexity Budget" and define it in terms that can be leveraged and used to control the complexity of software and systems?
it works fine if you properly escape your urls!
URIs include components and subcomponents that are delimited by
characters in the "reserved" set. These characters are called
"reserved" because they may (or may not) be defined as delimiters by
the generic syntax, by each scheme-specific syntax, or by the
implementation-specific syntax of a URI's dereferencing algorithm.
If data for a URI component would conflict with a reserved
character's purpose as a delimiter, then the conflicting data must be
percent-encoded before the URI is formed.
reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
yarn should define its own federation protocol that extends the basic twtxt in ways that twtxt doesn't allow. it's time. and i've got ideas!
@lyse@lyse.isobeef.org it's a hierarchical key-value format. I designed it for the network peering tools I use.. I can grant access to different parts of the tree to other users.. kinda like directory permissions. A basic example of the format is:
@namespace
# multi
# line
# comment
root :value
# example space comment
@namespace.name space-tag
# attribute comments
attribute attr-tag :value for attribute
# attribute with multiple
# lines of values
foo :bar
:bin
:baz
repeated :value1
repeated :value2
each @ starts the definition of a namespace, kinda like [name] in INI format. It can have comments that show up before it. Then each attribute is key :value and can have its own # comment lines.
Values can be multi-line.. and also repeated..
The namespaces and values can also have little metadata tags added to them.

the service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes out as they happen.
I would love to see a world where one's twtxt feed is defined by webfinger. So @xuu@txt.sour.is => https://text.sour.is/user/xuu/twtxt.txt
Then my identity can exist independently of the feed location. And I can host multiple protocol types for my feed, i.e. http/gopher/gemini/IRC DCC/etc.
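For illustration, such a lookup could go through the standard WebFinger endpoint from RFC 7033 (the twtxt rel value here is purely hypothetical):

curl 'https://txt.sour.is/.well-known/webfinger?resource=acct:xuu@txt.sour.is'
# A client would then pick the feed out of the "links" array, e.g.:
# { "rel": "https://twtxt.dev/rel/feed",
#   "href": "https://text.sour.is/user/xuu/twtxt.txt" }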
@lyse@lyse.isobeef.org They sure are silly at times. :-) You really have to combine this event with something else, like learning a new language. Otherwise it gets boring real quick.
What I absolutely love about AoC is that it's, indeed, a bit like school. The problems are well-defined, the inputs are well-defined, and there is a definite answer. It's either right or wrong, period. Compared to real life and work, I welcome this very much.
This is some cool development for the Go 1.22 standard http mux. It's adding the ability to have path vars and define methods for handlers. Also, the errors are quite helpful if you have conflicting paths!
https://eli.thegreenplace.net/2023/better-http-server-routing-in-go-122/
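A minimal sketch of the new pattern syntax (the route, handler and port are made up):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Method and path variable in the pattern, new in Go 1.22.
	mux.HandleFunc("GET /posts/{id}", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "post %s\n", r.PathValue("id"))
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}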