@lyse@lyse.isobeef.org In my case it was a silver necklace, a hummingbird with a wing connected to the body by the cold welding I mentioned, using thin brass wires.
I made it in a goldsmithing class (I went to a private craftsmanship high school), so no phones were allowed (no photos of it) and no taking the works home.
Here's a rough sketch of it drawn from memory; the dots in the wing are where it connects to the body.
The technique is basically the same as I described, but the scale is much smaller; the whole piece was about 5-6 cm on its largest side.
The rivet was made by drilling a hole through the parts, then with a short, thicker drill bit you widen the hole at the surface to let the rivet sit flatter on the piece. Then you hit it with a rubber hammer to flatten the head until it's snug in the hole, and you lock the parts together by doing the same on the other side.
Note that widening the hole with a thicker drill bit won't make a difference with bigger holes; mine had holes of about 1-2 mm in diameter at most.
Here's a sketch of what is going on, for clarity.
How much RAM/CPU does a Yarn instance need? I currently have a VPS with 1 GB RAM. Let's see if it will be enough.
@itsericwoodward@itsericwoodward.com No worries, all good, mate! We all have to start somewhere. Other software requests my feed several orders of magnitude more often.
I can confirm, the `User-Agent` header appears to be fixed. \o/
Two other things I noticed, though:
- There's now an `OPTIONS` request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it's rejected with a `405 Method Not Allowed`.
- Not that these few requests bother me at all, but you might wanna implement caching next with either the `If-Modified-Since` or `If-None-Match` request headers. This way, if the feed hasn't changed, the web server can reply with a `304 Not Modified` and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you're aware of it, that's all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
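For reference, a conditional request on the fetching side could look roughly like this. This is only a minimal sketch, assuming Node's built-in `fetch`; the cache object and function name are made up for illustration:

async function fetchFeed(url, cache) {
  const headers = {};
  if (cache.etag) headers['If-None-Match'] = cache.etag;
  if (cache.lastModified) headers['If-Modified-Since'] = cache.lastModified;

  const res = await fetch(url, { headers });
  if (res.status === 304) {
    // Feed unchanged: reuse the cached body, no body bytes were transferred.
    return cache.body;
  }

  // Feed changed (or first fetch): remember the validators for next time.
  cache.etag = res.headers.get('ETag');
  cache.lastModified = res.headers.get('Last-Modified');
  cache.body = await res.text();
  return cache.body;
}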
(`groff --version`)?
@movq@www.uninformativ.de It's an ancient 1.22.4. :-)
Each origin feed numbers new threads (`tno:N`). Replies carry both (`tno:N`) and (`ofeed:<origin-url>`). Thread identity = (ofeed, tno).
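A rough sketch of how a client might derive that identity from a reply's text (the parsing and field names here are assumptions for illustration, not part of the proposal):

// Hypothetical sketch: build a thread key from the proposed (tno:N) and
// (ofeed:<origin-url>) tags of a parsed twt.
function threadKey(twt) {
  const tno = twt.text.match(/\(tno:(\d+)\)/);
  if (!tno) return null;                          // not part of a numbered thread
  const ofeed = twt.text.match(/\(ofeed:([^)\s]+)\)/);
  const origin = ofeed ? ofeed[1] : twt.feedUrl;  // assuming thread roots omit ofeed
  return `${origin}#${tno[1]}`;                   // identity = (ofeed, tno)
}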
@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.
After thinking about it for a while, I arrived at two solutions:
Proposal 1: Thread syntax (using subject)
Each post has an implicit and an optional explicit root reference:
Implicit (no action needed, all the required data is already there):
- URL + timestamp
- URL + timestamp
Explicit (subject required):
- Identity (client generated)
- External reference
- Random value
- Identity (client generated)
We then include a "root" subject in each post for generating explicit threads:
1. `[ROOT_ID] (REPLY_ID)`: simpler, with no need for prefixes
2. `(root:ROOT_ID) (reply:REPLY_ID)`: more complex but could allow expansions
- `(rt:ROOT_ID) (re:REPLY_ID)`: same but with a compact version
- `($ROOT_ID) (>REPLY_ID)`: same but with single characters
Each post can have both references; as with the current hash approach, the reference can be treated as a simple string and doesn't need to have a real meaning.
Using a custom reference this way allows a client to decide how to generate them:
- Identity: can be a content hash or signature or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
- External reference: can be provided by another system (e.g. `7e073bd345`, yarnsocial/yarn latest commit)
- Random value: like a UUID (e.g. `9a0c34ed-d11e-447e-9257-0a0f57ef6e07`)
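To make this concrete, here is a minimal sketch of how a client could extract these references, assuming the `(root:ROOT_ID) (reply:REPLY_ID)` form from option 2; the parsing details are mine, not part of the proposal:

// Hypothetical sketch: pull the explicit root/reply references out of a post's
// subject, treating both as opaque strings as described above.
function parseExplicitRefs(subject) {
  const root = subject.match(/\(root:([^)]+)\)/);
  const reply = subject.match(/\(reply:([^)]+)\)/);
  return {
    root: root ? root[1] : null,    // e.g. a hash, a commit id, or a UUID
    reply: reply ? reply[1] : null, // reference to the direct parent
  };
}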
Proposal 2: Threaded mentions (featuring zvava)
Inspired by @zvava@twtxt.net's solution, it could be simplified into: `#<nick url#timestamp>` or `#<url#timestamp>`
It can be shown like a mention or hidden like a subject.
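And a similarly rough sketch for this variant (again, the parsing is only an illustration, not part of the proposal):

// Hypothetical sketch: parse a threaded mention of the form
// #<nick url#timestamp> or #<url#timestamp>.
function parseThreadedMention(text) {
  const m = text.match(/#<(?:(\S+)\s+)?(\S+?)#(\S+?)>/);
  if (!m) return null;
  return { nick: m[1] || null, url: m[2], timestamp: m[3] };
}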
If we're thinking of using a counter in the client, I think there's no point in avoiding the timestamp anymore.
@bender@twtxt.net Thanks for asking!
So, I've been working on 2 main twtxt-related projects.
The first is a small Node / Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye to release it after I get an MVP done of project #2 (the reader).
But that's where I've been struggling. The idea seems simple enough - another Node / Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) those twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real-live twtxt files, so some twts come through, while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
On top of it, the 2 projects wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
@lyse@lyse.isobeef.org @prologic@twtxt.net Can't we find a middle ground and support both?
The thread is defined by two parts:
- The hash
- The subject
The client/pod generates the hash and indexes it in its database/cache, then it simply queries the subjects of other posts to find the related posts, right?
In my own client's current implementation (using hashes), the only calculation is in the hash generation; the rest is a verbatim copy of the subject (minus the `#` character). If this is the common implemented approach, then adding the location-based one is somewhat simple.
function setPostIndex(post) {
// Current hash approach
const hash = createHash(post.url, post.timestamp, post.content);
// New location approach
const location = post.url + '#' + post.timestamp;
// Unchanged (probably)
const subject = post.subject;
// Index them all
addToIndex(hash, post);
addToIndex(location, post);
addToIndex(subject, post);
}
// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
As I said before, the mention is already location-based (`@<example https://example.com/twtxt.txt>`), so I think we should keep that in consideration.
Of course this will lead to a bit of fragmentation (without merging the two) but I think this can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this probably needs a fixed "header" for each post, but it can be seen as a separate extension).
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible: the feed URL is part of the hashed value in one and part of the reference in the other. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than behaving as an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.
Even a new hashing algorithm requires work on clients etc. It's not that you get some backwards compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
@prologic@twtxt.net I can see the issues mentioned, but I think some can be fixed.
The current hash relies on a `url` field too: by specification, it will use the first `# url = <URL>` in the feed's metadata if present. That, too, can be different from the fetching source, and if that field changes it would break the existing hashes as well. A better solution would be to use a non-URL key like `# feed_id = <UNIQUE_RANDOM_STRING>`, with the `url` as fallback. We can prevent duplications if the reference uses that same url field too, or if the client "collapses" any reference to any of the URLs defined in the metadata.
I agree that hashing based on content is good, but we still use the URL as part of the hashing, which is just a field in the feed, easily replicable by a bot. Also note that edits can break the hash too; for this issue an alternative solution (e.g. a private key not included in the feed) should be considered.
For offline reading, the source would be downloaded already; fetching non-followed feeds would fill the gap in the same way mentions do. Maybe I'm missing some context on this one.
To prevent collisions there was a discussion on extending the hash (I forgot if that was already fixed or not), but without a fallback that would break existing clients too. We should think of a parallel format that leaves current implementations unchanged; we are already backward compatible with the original clients that don't use threads at all, and a mention-style format could be even more user-friendly for those clients.
We should also keep in mind that the current mention format is already location-based (`@<example https://example.com/twtxt.txt>`), so I'm not that worried about threads working the same way.
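Going back to the `feed_id` idea above, a small sketch of how a client could pick the key that identifies a feed; the metadata field names follow the proposal above, the function itself is just an illustration:

// Hypothetical sketch: prefer a stable feed_id and fall back to the first
// url metadata field, then to the URL we actually fetched from.
function feedKey(meta, fetchUrl) {
  if (meta.feed_id) return meta.feed_id;               // # feed_id = <UNIQUE_RANDOM_STRING>
  if (meta.url && meta.url.length) return meta.url[0]; // first # url = <URL>
  return fetchUrl;
}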
Hope to see some other thoughts about this matter.
Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:
- Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won't.
- Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several "parents" and split the thread.
- Verification & spam-resistance: content addressing lets you dedupe and verify you're pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
- Offline/cached reading: without the original URL being reachable, readers can't resolve anchors; with hashes they can match against local caches/archives.
- Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.
@kat@yarn.girlonthemoon.xyz Mine shows 1/1 of 14 Twts. I think this is a bug.
ok so i have found a genuine twt hash collision. what do i do.
internally, bbycll relies on a post lookup table with post hashes as keys, this is really fast but i knew i'd inevitably run into this issue (just not so soon) so now i have to either:
  1) pick the newer post over the other
  2) break from specification and not lowercase hashes
  3) secretly associate canonical urls or additional entropy with post hashes in the backend without a sizeable performance impact somehow
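one possible direction for option 3, sketched very roughly (i'm assuming a JavaScript-style lookup table here; bbycll's internals may look nothing like this): keep a tiny bucket per hash and only consult extra entropy when a bucket actually collides, so the common path stays a single map lookup.

// Hypothetical sketch: hash -> array of posts; almost every bucket has length 1.
const index = new Map();

function addPost(hash, post) {
  const bucket = index.get(hash);
  if (bucket) bucket.push(post); else index.set(hash, [post]);
}

function getPost(hash, hint = {}) {
  const bucket = index.get(hash) || [];
  if (bucket.length <= 1) return bucket[0];
  // Collision: fall back to canonical URL / timestamp to pick the right post.
  return bucket.find(p => p.url === hint.url || p.timestamp === hint.timestamp) || bucket[0];
}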
Made this a few weeks ago, just listened to it again and I quite like it:
https://www.uninformativ.de/music/2025-1-ebow/Fog.ogg
This is just one instrument: Electric bass guitar + EBow. And echo/delay on top. But it's a single track, single take. It amazes me quite a bit how much you can do with that little thing.
gopher://hashnix.club:70/1/~dce/art/
The chemtrails have fallen down!!1 https://lyse.isobeef.org/abendhimmel-2025-09-05/
Hmm, gnu.org is slow as heck. Shorter HTML pages load in about ten seconds. This complete AWK manual all in one large HTML page took a full minute: https://www.gnu.org/software/gawk/manual/gawk.html Is there maybe some anti-AI shenanigans going on?
In any case, I find the user guide super interesting. My AWK skills are basically non-existent, so I finally decided to change that. This document is incredibly well written and makes it really fun to keep reading and learning. I'm very impressed. So far, I made it to section 1.6, happy to continue.
Why do I care about this?
- The load will become a problem at some point.
- These crawlers and the current "AI" in general are breaking the rules. I am supposed to be paying for every little thing, I get sued for "piracy". But apparently, these rules only apply to me. If I had more money, I could break them. Fuck that.
- I simply don't want it. Period.
I've got a prototype of my hardcopy simulator going. I'm typing on the keyboard and the "display" goes to the printer:
https://movq.de/v/56feb53912/s.png
https://movq.de/v/235c1eabac/MVI_8810.MOV.mp4
The biiiiiiiiiig problem is that the print head and plastic cover make it impossible to see what's currently being printed, because this is not a typewriter. This means: In order to see what I just entered, I have to feed the paper back and forth and back and forth … it's not ideal.
I got that idea of moving back/forth from Drew DeVault, who, as it turned out, did something similar a few years back. (I tried hard to read as little as possible of his blog post, because figuring things out myself is more fun. But that could mean I missed a great idea here or there.)
But hey, at least this is running on my Pentium 133 on SuSE Linux 6.4, printer connected with a parallel cable.
(Also, yes, you can see the printouts of earlier tests and, yes, I used `ed(1)` wrong at one point. And `ls` insisted on using colors …)
Here's an interesting thought/angle on this topic:
gemini://gemini.conman.org/boston/2025/08/21.1
A further check showed that all the network blocks are owned by one organization, Tencent [4]. I'm seriously thinking that the CCP (Chinese Communist Party) encourage this with maybe the hope of externalizing the cost of the Great Firewall [5] to the rest of the world.
Sooooooooo, things happened, and I now have a dot matrix printer again.
(One of the end goals is to simulate a hardcopy terminal on my old box. I'm waiting for another cable to arrive, I don't have USB there. And then use `ed(1)` like it was meant to be used!)
@movq@www.uninformativ.de WE NEED MORE BACKUPS!!!!!!!!!!!1
ok i really like XLOV. 1&Only is a great song. so vibe-y and sensual. and they released it in pride month too they Get It https://www.youtube.com/watch?v=NBZgirj_C2Y
@lyse@lyse.isobeef.org @kat@yarn.girlonthemoon.xyz Colorized manpages have been a thing for a very long time:
https://movq.de/v/81219d7f7a/s.png
Problem is, hardly anybody knows this, because you configure this by … drumroll … overwriting TERMCAP entries of `less` in your `~/.bashrc`:
export LESS_TERMCAP_md=$'\e[38;5;3m'    # Bold
export LESS_TERMCAP_me=$'\e[0m'         # End Bold
export LESS_TERMCAP_us=$'\e[4;38;5;6m'  # Underline
export LESS_TERMCAP_ue=$'\e[0m'         # End Underline
export GROFF_NO_SGR=1                   # Needed since groff 1.23
@lyse@lyse.isobeef.org "Advanced", well, probably more "mature". There aren't a ton of crazy features and that icon thing is the largest code addition in the last 10 years. %)
Speaking of OS/2 … I just realized that Windows 3.x didn't have icons, either. If I'm not mistaken, this only got added in Windows 95. In other words, OS/2 had this feature before Windows did, because at least OS/2 2.1 from 1993 had icons. Who would have thunk.
(Now I kind of want to know which system really introduced this feature.)
@lyse@lyse.isobeef.org Oh, huh, maybe it was just my GNOME 2 themes back then that didn't show the icon.
I like the looks of your window manager. That's using Wayland, right?
Oh, no. It's still X11. All my recent Wayland comments resulted from me trying to switch, but I think it's still too early. Being unable to use QEMU (because it can't capture the mouse pointer) is a pretty big blocker for me. This is completely broken, it just happens to be unnoticeable with modern guest OSes, so it's probably not a priority for devs.
(Not to mention that I would have to fork and substantially extend dwl in order to "replicate" my X11 WM. And then, after having done that, I'd have to follow upstream Wayland development, for which I don't have the resources. Things would need to slow down before I can do that.)
all that wasted space of the windows not making use of the full screen!!!1
Heh. I've been using tiling WMs for ~15 years now, so it's actually kind of refreshing to see something different for a change.
Probably close to the older Windowses.
That particular theme is a ripoff of OS/2 Warp 3: https://movq.de/v/6c2a948882/s.png
We ran some similar brownish color scheme (don't recall its name) on Win95 or Win98
Oh god. Yeah, I wasn't a fan of those, either.
@movq@www.uninformativ.de According to this screenshot, KDE still shows good old application icons: https://upload.wikimedia.org/wikipedia/commons/9/94/KDE_Plasma_5.21_Breeze_Twilight_screenshot.png
And GNOME used to have them, too: https://upload.wikimedia.org/wikipedia/commons/9/9f/Gnome-2-22_%284%29.png
I like the looks of your window manager. That's using Wayland, right? The only thing on this screenshot to critique is all that wasted space of the windows not making use of the full screen!!!1 At least the file browser. 8-)
This drives me nuts when my workmates share their screens. I really don't get how people can work like that. You can't even read the whole line in the IDE or log viewer with all the expanded side bars. And then there's 200 pixels on the left and another 300 pixels on the right where the desktop wallpaper shows. Gnaa! There's the other extreme end when somebody shares their ultra wide screen and I just have a "regularish" 16:10 monitor and don't see shit, because it's resized way too tiny to fit my width. Good times. :-D
Sorry for going off on a tangent here. :-) Back to your WM: It has the right mix of being subtle and still similar to Motif. Probably close to the older Windowses. My memory doesn't serve me well, but I think they actually got it fairly good in my opinion. Your purple active window title looks killer. It just fits so well. This brown one (https://www.uninformativ.de/blog/postings/2025-07-22/0/leafpads.png) gives me also classic vibes. Awww. We ran some similar brownish color scheme (don't recall its name) on Win95 or Win98 for some time on the family computer. I remember other people visiting us not liking these colors. :-D
Here's an example of X11/Xlib being old and archaic.
X11 knows the data type "cardinal". For example, the window property `_NET_WM_ICON` (which holds image data for icons) is an array of "cardinal". I am already not really familiar with that word and I'm assuming that it comes from mathematics:
https://en.wikipedia.org/wiki/Cardinal_number
(It could also be a bird, but probably not: https://en.wikipedia.org/wiki/Cardinalidae)
We would probably call this an "integer" today.
EWMH says that icons are arrays of cardinals and that they're 32-bit numbers:
https://specifications.freedesktop.org/wm-spec/latest-single/#id-1.6.13
So it's something like `0x11223344` with `0x11` being the alpha channel, `0x22` is red, and so on.
You would assume that, when you retrieve such an array from the X11 server, you'd get an array of `uint32_t`, right?
Nope.
Xlib is so old, they use `char` for 8-bit stuff, `short int` for 16-bit, and `long int` for 32-bit.
That is congruent with the general C data types, so it does make sense:
https://en.wikipedia.org/wiki/C_data_types
Now the funny thing is, on modern `x86_64`, the type `long int` is actually 64 bits wide.
The result is that every pixel in a Pixmap, for example, is twice as large in memory as it would need to be. Just because Xlib uses `long int`, because `uint32_t` didn't exist yet.
And this is something that I wouldn't know how to fix without breaking clients.
@lyse@lyse.isobeef.org They are optional dependencies and listed as such:
$ pacman -Qi pinentry
Name : pinentry
Version : 1.3.1-5
Description : Collection of simple PIN or passphrase entry dialogs which
utilize the Assuan protocol
Optional Deps : gcr: GNOME backend [installed]
gtk3: GTK backend [installed]
qt5-x11extras: Qt5 backend [installed]
kwayland5: Qt5 backend
kguiaddons: Qt6 backend
kwindowsystem: Qt6 backend
And it's probably a good thing that they're optional. I wouldn't want to have all that installed all the time.
@lyse@lyse.isobeef.org @kat@yarn.girlonthemoon.xyz I spent so much time in the past figuring out if something is a dict or a list in YAML, for example.
What are the types in this example?
items:
  - part_no: A4786
    descrip: Water Bucket (Filled)
    price: 1.47
    quantity: 4
  - part_no: E1628
    descrip: High Heeled "Ruby" Slippers
    size: 8
    price: 133.7
    quantity: 1
`items` is a dict containing … a list of two other dicts? Right?
It is quite hard for me to grasp the structure of YAML docs.
The big advantage of YAML (and JSON and TOML) is that it's much easier to write code for those formats than it is with XML. `json.loads()` and you're done.
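For what it's worth, dumping the parsed result of the example above makes the structure obvious. A quick sketch, assuming the js-yaml package (not implying anyone here uses it):

// Hypothetical check of what the YAML above parses into.
const yaml = require('js-yaml');

const doc = yaml.load(`
items:
  - part_no: A4786
    descrip: Water Bucket (Filled)
    price: 1.47
    quantity: 4
  - part_no: E1628
    descrip: High Heeled "Ruby" Slippers
    size: 8
    price: 133.7
    quantity: 1
`);

console.log(Array.isArray(doc.items));  // true: "items" maps to a list (array)
console.log(doc.items.length);          // 2: it holds two dicts (objects)
console.log(doc.items[0].part_no);      // "A4786"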
Only figured this out yesterday:
`pinentry`, which is used to safely enter a password on Linux, has several frontends. There's a GTK one, a Qt one, even an ncurses one, and so on.
GnuPG also uses `pinentry`. And you can configure your frontend of choice here in `gpg-agent.conf`.
But what happens when you don't configure it? What's the default?
Turns out, `pinentry` is a shellscript wrapper and it's not even that long. Here it is in full:
#!/bin/bash
# Run user-defined and site-defined pre-exec hooks.
[[ -r "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec ]] && \
. "${XDG_CONFIG_HOME:-$HOME/.config}"/pinentry/preexec
[[ -r /etc/pinentry/preexec ]] && . /etc/pinentry/preexec
# Guess preferred backend based on environment.
backends=(curses tty)
if [[ -n "$DISPLAY" || -n "$WAYLAND_DISPLAY" ]]; then
case "$XDG_CURRENT_DESKTOP" in
KDE|LXQT|LXQt)
backends=(qt qt5 gnome3 gtk curses tty)
;;
*)
backends=(gnome3 gtk qt qt5 curses tty)
;;
esac
fi
for backend in "${backends[@]}"
do
lddout=$(ldd "/usr/bin/pinentry-$backend" 2>/dev/null) || continue
[[ "$lddout" == *'not found'* ]] && continue
exec "/usr/bin/pinentry-$backend" "$@"
done
exit 1
Preexec, okay, then some auto-detection to use a toolkit matching your desktop environment …
… and then it invokes `ldd`? To find out if all the required libraries are installed for the auto-detected frontend?
Oof. I was sitting here wondering why it would use `pinentry-gtk` on one machine and `pinentry-gnome3` on another, when both machines had the exact same configs. Yeah, but different libraries were installed. One machine was missing `gcr`, which is needed for `pinentry-gnome3`, so that machine (and that one alone) spawned `pinentry-gtk` …
@movq@www.uninformativ.de I fully agree with you on https://www.uninformativ.de/blog/postings/2025-07-22/0/POSTING-en.html!
Although, in the first screenshot, the window title background is much darker in the new version than the old one!1!1 :-P Kidding aside, the contrast in the old one is still better.
Also, note the missing underlines for the Alt hotkeys now. I just think that the underline in the old one is too thick.
@lyse@lyse.isobeef.org Yes, a small inventory beforehand can't hurt either. I already restocked the supply of ground anchors, tent pegs, and gas cartridges the other day.
I know where the gas goes. Why the fasteners keep dwindling, even though we count them (!), is a mystery to me. Maybe we're just really shaky with the numbers from 1 to 20.
The `WM_CLASS` property is used on X11 to assign rules to certain windows, e.g. "this is a GIMP window, it should appear on workspace number 16." It consists of two fields, `name` and `class`.
Wayland (or rather, the XDG shell protocol; core Wayland knows nothing about this) only has a single field called `app_id`.
When you run X11 programs under Wayland, you use XWayland, which is baked into most compositors. Then you have to deal with all three fields.
Some compositors map `name` to `app_id`, others map `class` to `app_id`, and even others directly expose the original `name` and `class`.
Apparently, there is no consensus.
@movq@www.uninformativ.de Yeah, it's a shitshow. MS overconfirms all my prejudices constantly.
Ignoring e-mail after lunch works great, though. :-)
Our timetracking is offline for over a week because of reasons. The responsible bunglers are falling by the skin of their teeth: https://lyse.isobeef.org/tmp/timetracking.png
- The error message neither includes the timeframe nor a link to an announcement article.
- The HTML page needs to download JS in order to display the fucking error message.
- Proper HTTP status codes are clearly only for big losers.
- Despite being down, heaps of resources are still fetched.
I find it really fascinating how one can screw up on so many levels. This is developed inhouse. I'm just so glad that we're not a software engineering company. Oh wait. How embarrassing.
The Linux installation on my main PC turned 14 today:
$ head -n 1 /var/log/pacman.log
[2011-07-07 11:19] installed filesystem (2011.04-1)
We're entering the "too hot to think" season in 3, 2, 1 … and we're live!
I did a "lecture"/"workshop" about this at work today. 16-bit DOS, real mode. Pretty cool, and the audience (devs and sysadmins) seemed quite interested.
- People used the Intel docs to figure out the instruction encodings.
- Then they wrote a little DOS program that exits with a return code and they used uhex in DOSBox to do that. Yes, we wrote a COM file manually, no Assembler involved. (Many of them had never used DOS before.)
- DEBUG from FreeDOS was used to single-step through the program, showing what it does.
- This gets tedious rather quickly, so we switched to SVED from SvarDOS for writing the rest of the program in Assembly language. nasm worked great for us.
- At the end, we switched to BIOS calls instead of DOS syscalls to demonstrate that the same binary COM file works on another OS. Also a good opportunity to talk about bootloaders a little bit.
- (I think they even understood the basics of segmentation in the end.)
The 8086 / 16-bit real-mode DOS is a great platform to explain a lot of the fundamentals without having to deal with OS semantics or executable file formats.
Now that was a lot of fun. It's very rare that we do something like this, sadly. I love doing this kind of low-level stuff.
Saw this on Mastodon:
https://racingbunny.com/@mookie/114718466149264471
18 rules of Software Engineering
- You will regret complexity when on-call
- Stop falling in love with your own code
- Everything is a trade-off. There's no "best"
- Every line of code you write is a liability
- Document your decisions and designs
- Everyone hates code they didn't write
- Don't use unnecessary dependencies
- Coding standards prevent arguments
- Write meaningful commit messages
- Don't ever stop learning new things
- Code reviews spread knowledge
- Always build for maintainability
- Ask for help when you're stuck
- Fix root causes, not symptoms
- Software is never completed
- Estimates are not promises
- Ship early, iterate often
- Keep. It. Simple.
Solid list, even though 14 is up for debate in my opinion: Software can be completed. You have a use case / problem, you solve that problem, done. Your software is completed now. There might still be bugs and they should be fixed, but this doesn't "add" to the program. Don't use "software is never done" as an excuse to keep adding and adding stuff to your code.
@aelaraji@aelaraji.com I use `Alt+.` all the time, it's great.
FWIW, another thing I often use is `!!` to recall the entire previous command line:
$ find -iname '*foo*'
./This is a foo file.txt
$ cat "$(!!)"
cat "$(find -iname '*foo*')"
This is just a test.
Yep!
Or:
$ ls -al subdir
ls: cannot open directory 'subdir': Permission denied
$ sudo !!
sudo ls -al subdir
total 0
drwx------ 2 root root 60 Jun 20 19:39 .
drwx------ 7 jess jess 360 Jun 20 19:39 ..
-rw-r--r-- 1 root root 0 Jun 20 19:39 nothing-to-see
@prologic@twtxt.net I'm trying to call some libc functions (because the Rust stdlib does not have an equivalent for `getpeername()`, for example, so I don't have a choice), so I have to do some FFI stuff and deal with raw pointers and all that, which is very gnarly in Rust, because you're not supposed to do this. Things like that are trivial in C or even Assembler, but I have not yet understood what Rust does under the hood. How and when does it allocate or free memory … is the pointer that I get even still valid by the time I do the libc call? Stuff like that.
I hope that I eventually learn this over time … but I get slapped in the face at every step. It's very frustrating and I'm always this close to giving up (only to try again a year later).
Oh, yeah, yeah, I guess I could "just" use some 3rd party library for this. socket2 gets mentioned a lot in this context. But I don't want to. I literally need one `getpeername()` call during the lifetime of my program, I don't even do the `socket()`, `bind()`, `listen()`, `accept()` dance, I already have a fully functional file descriptor. Using a library for that is total overkill and I'd rather do it myself. (And look at the version number: `0.5.10`. The library is 6 years old but they're still saying: "Nah, we're not 1.0 yet, we reserve the right to make breaking changes with every new release." So many Rust libs are still unstable …)
… and I could go on and on and on …
So I was using this function in Rust:
https://doc.rust-lang.org/std/path/struct.Path.html#method.display
Note the little `1.0.0` in the top right corner, which means that this function has been "stable since Rust version 1.0.0". We're at 1.87 now, so we're good.
Then I compiled my program on OpenBSD with Rust 1.86, i.e. just one version behind, but well ahead of 1.0.0.
The compiler said that I was using an unstable library feature.
Turns out, that function internally uses this:
https://doc.rust-lang.org/std/ffi/struct.OsStr.html#method.display
And that is only available since Rust 1.87.
How was I supposed to know this?
Current toy project: an image feed generated by mk(1). Still some edges to clean up but it's nice: http://a.9srv.net/img/_readme.html
@nghialele@nghia.im Man, I wish I could watch Formula 1 on a regular basis again, but it has become expensive as fuck here.
This is my highlight, really, haven't seen this in action in a loooooooong time:
@oloke@nghia.im i'm reading about this Garage today, will watch this one for sure, promising, since those folks look familiar with the YunoHost ecosystem and French things
also found another thing called Riak
next up: authentication center / for both work & personal use.
for the work project, the customers (of my client) are unhappy with the account login flow and I need a fast & easy SSO for them.
for personal use: just a gateway to lock all the apps and provide access to friends.
i slowly realize the power of 1% every day on what i am doing.
Maybe you'll enjoy this as well:
I still have one of my first modems, a Creatix LC 144 VF:
I think this was the modem that I used when I first connected to the internet, but I'm not sure.
I plugged it in again and it still works:
The firmware appears to be from 1994, which sounds about right. I don't think we had internet access before that. We certainly did use local mailboxes, though. (Or BBSes, as you might call them.)
I now want to actually use that modem again. For the moment, I can only use a phone to dial into it, I lack a second modem to actually establish a connection. Here's a video:
Not spectacular, but the modem does answer after me entering `ATA`.
I bought another cheap old modem on eBay and am now waiting for it to arrive. Once it's here, I want to simulate an actual dial-up session, hopefully from OS/2 or Windows 3.x.
One of the nicest things about Go is the language itself. Comparing Go to other popular languages in terms of the complexity of learning to be proficient in them:
- Go: 25 keywords (Stack Overflow); CSP-style concurrency (goroutines & channels)
- Python 2: 30 keywords (TutorialsPoint); GIL-bound threads & multiprocessing (Wikipedia)
- Python 3: 35 keywords (Initial Commit); GIL-bound threads, `asyncio` & multiprocessing (Wikipedia, DEV Community)
- Java: 50 keywords (Stack Overflow); threads + `java.util.concurrent` (Wikipedia)
- C++: 82 keywords (Stack Overflow); `std::thread`, atomics & futures (en.cppreference.com)
- JavaScript: 38 keywords (Stack Overflow); single-threaded event loop & `async`/`await`, Web Workers (Wikipedia)
- Ruby: 42 keywords (Stack Overflow); GIL-bound threads (MRI), fibers & processes (Wikipedia)