In-reply-to » @lyse I can tell you this right now, writing assembly / machine code is fucking hard work™ 😓 I'm sure @movq can affirm 🤣 And when it all goes to shit™ (which it does often), man, is debugging fucking hard as hell! Without debug symbols I can't use the regular tools like lldb or gdb 😂

@prologic@twtxt.net Debugging this stuff on bare metal hardware (without an underlying OS) is a nightmare. 🤣

In-reply-to » @lyse I’m toying with the idea of making a widget/window system on top of Python’s ncurses. I’ve never really been happy with the existing ones (like urwid, textual, pytermgui, …). I mean, they’re not horrible, it’s mostly the performance that’s bugging me – I don’t want to wait an entire second for a terminal program to start up.

@movq@www.uninformativ.de I see. Yeah, all the Unicode stuff certainly doesn’t help here, that’s for sure.

Maybe “speedcurses” could be a name. Or just select any Palatinate curse. ;-)

In-reply-to » @lyse I can tell you this right now, writing assembly / machine code is fucking hard work™ 😓 I'm sure @movq can affirm 🤣 And when it all goes to shit™ (which it does often), man, is debugging fucking hard as hell! Without debug symbols I can't use the regular tools like lldb or gdb 😂

@prologic@twtxt.net Oh yeah, I bet it is horrible to troubleshoot.

In-reply-to » Trying to come up with a name for a new project and every name is already taken. 🤣 The internet is full!

@lyse@lyse.isobeef.org I’m toying with the idea of making a widget/window system on top of Python’s ncurses. I’ve never really been happy with the existing ones (like urwid, textual, pytermgui, …). I mean, they’re not horrible, it’s mostly the performance that’s bugging me – I don’t want to wait an entire second for a terminal program to start up.

Not sure if I’ll actually see it through, though. Unicode makes this kind of thing extremely hard. 🫤

In-reply-to » @lyse Yeah I remember you said some days back that your interest in compilers was rekindled by my work on mu (µ) 😅

@lyse@lyse.isobeef.org I can tell you this right now, writing assembly / machine code is fucking hard work™ 😓 I’m sure @movq@www.uninformativ.de can affirm 🤣 And when it all goes to shit™ (which it does often), man, is debugging fucking hard as hell! Without debug symbols I can’t use the regular tools like lldb or gdb 😂

In-reply-to » @lyse Yeah I remember you said some days back that your interest in compilers was rekindled by my work on mu (µ) 😅

@prologic@twtxt.net Yeah, the parser part is what I typically enjoy. Haven’t really looked into code generation itself.

I’m currently looking at your µ commits from the last few days. Holy cow! :-)

In-reply-to » Whoo! I fixed one of the hardest bugs in mu (µ) I think I've ever had to figure out. It took me several days, in fact. The basic problem was, println(1, 2) was being printed as 1 2 in the bytecode VM and 1 nil when natively compiled to machine code on macOS. In the end it turned out the machine code being generated / emitted meant that the list pointer for the rest... of the variadic arguments was being slotted into a register that was being clobbered by the mu_retain and mu_release calls and effectively getting freed up on first use by the RC (reference counting) garbage collector 🤦‍♂️

@lyse@lyse.isobeef.org Yeah I remember you said some days back that your interest in compilers was rekindled by my work on mu (µ) 😅

In-reply-to » Whoo! I fixed one of the hardest bugs in mu (µ) I think I've ever had to figure out. It took me several days, in fact. The basic problem was, println(1, 2) was being printed as 1 2 in the bytecode VM and 1 nil when natively compiled to machine code on macOS. In the end it turned out the machine code being generated / emitted meant that the list pointer for the rest... of the variadic arguments was being slotted into a register that was being clobbered by the mu_retain and mu_release calls and effectively getting freed up on first use by the RC (reference counting) garbage collector 🤦‍♂️

@prologic@twtxt.net Tada, congratulations! I find that rather interesting, thanks for telling us. :-)

In-reply-to » Trying to come up with a name for a new project and every name is already taken. 🤣 The internet is full!

@movq@www.uninformativ.de How about “Quongsi”? I generated the first five letters with pwgen --no-capitalize --no-numerals 5, and since that already showed up in DDG search results, I simply appended the last two, which yielded nothing on DDG and Google.

What kind of project is it? Maybe we can help you find a name or nudge you in the right direction.


The tt URLs View now automatically selects the first URL that I’m probably going to open. In decreasing order of priority, the URL types are:

  1. markdown media URLs (images, videos, etc.)
  2. markdown or plaintext URLs
  3. subjects
  4. mentions

I might differentiate between mentions of subscribed and unsubscribed feeds in the future. The odds are higher that I’ll open a mention of a new feed than one of a feed I’m already subscribed to.
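
A minimal sketch of that selection order (the category names and the data shape are made up for illustration, not tt’s actual internals):

    # Hypothetical sketch of the priority scan described above.
    PRIORITY = ("markdown_media", "url", "subject", "mention")

    def select_url(candidates):
        """Return the URL most likely to be opened, or None if there is none."""
        for category in PRIORITY:
            for kind, url in candidates:
                if kind == category:
                    return url
        return None

    # Example: the image wins over the plain link and the mention.
    candidates = [
        ("mention", "https://example.org/twtxt.txt"),
        ("url", "https://example.com/article"),
        ("markdown_media", "https://example.com/photo.png"),
    ]
    print(select_url(candidates))  # https://example.com/photo.png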


Whoo! I fixed one of the hardest bugs in mu (µ) I think I’ve ever had to figure out. It took me several days, in fact. The basic problem was, println(1, 2) was being printed as 1 2 in the bytecode VM and 1 nil when natively compiled to machine code on macOS. In the end it turned out the machine code being generated / emitted meant that the list pointer for the rest... of the variadic arguments was being slotted into a register that was being clobbered by the mu_retain and mu_release calls and effectively getting freed up on first use by the RC (reference counting) garbage collector 🤦‍♂️

In-reply-to » @zvava The problem you have then is that you lose integrity of the message content if you compute the hashes at runtime rather than on the way in. So if your message content or database becomes corrupt in any way, so do your hashes.

@prologic@twtxt.net In my opinion, the integrity isn’t lost. The same input data always results in the same output hash, no matter when you calculate the hashes. It’s true that corrupt database contents yield corrupt hashes, but then you have a far bigger problem than just getting different hashes. :-D
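
Just to illustrate the determinism, here is a toy example using a generic SHA-256 (not the actual twt hash algorithm), assuming the input is built from feed URL, timestamp and content:

    import hashlib

    def toy_hash(feed_url, timestamp, content):
        """Toy stand-in for a twt hash: identical inputs give the identical digest."""
        payload = "\n".join((feed_url, timestamp, content)).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()[:7]

    now = toy_hash("https://example.org/twtxt.txt", "2025-01-01T00:00:00Z", "Hello!")
    later = toy_hash("https://example.org/twtxt.txt", "2025-01-01T00:00:00Z", "Hello!")
    assert now == later  # recalculating at any later time yields the same hash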

In-reply-to » @lyse while caching those is a good idea, the problem is baking data that can be calculated into the database instead of some cache, because post hashes are not fixed and change for every post edit. you can always easily look up other twts by hash with a cached lookup table, but now you're not locked into them, so supporting hashv2 or other hash variants or any other solution becomes far easier

@zvava@twtxt.net By the very definition of hashing, if you edit your message, it simply becomes a new message. It’s just not the same message anymore. At least from a technical point of view. As a human, I personally disagree, but that’s what I’m stuck with. There’s no reliable way to detect and “correct” for that.

Storing the hash in your database doesn’t prevent you from switching to another hashing implementation later on. As of now, messages with creation timestamps earlier than some magical point in time use twt hash v1; messages on or after that magical timestamp use twt hash v2. So, a message has either a v1 or a v2 hash, never both; the other version is simply not meaningful for it.
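
A minimal sketch of that cutoff logic (the switch-over timestamp below is a placeholder, not the real value):

    from datetime import datetime, timezone

    # Placeholder cutoff; the actual v1 -> v2 switch-over date is defined by the spec.
    HASH_V2_CUTOFF = datetime(2025, 1, 1, tzinfo=timezone.utc)

    def hash_version(created_at):
        """Pick the twt hash version from the message creation timestamp."""
        return 2 if created_at >= HASH_V2_CUTOFF else 1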

Once you “upgrade” your database schema, you can check for stored messages from after the switchover which should have been hashed using v2, but were actually v1-hashed, and simply fix them.

If there is ever another addressing scheme, you could reuse the existing hash column if it supersedes the v1/v2 hashes. Otherwise, a new column might be useful, or perhaps no column at all (looking at location-based addressing, or whatever it was called). The old v1/v2 hashes are still needed for all past conversation trees.

In my opinion, always recalculating the hashes is a big waste of time and energy. But if it serves you well, then go for it.

In-reply-to » very good blog post that reminded me why it's taking so long to ship bbycll — previously i had computed the hashes of every post before storing them in the database; after realizing it's a much better idea to compute the hashes during runtime and only store the post content & timestamp, i'm now having to rewrite every function that reads & writes data. i hope the reason as to why i lost motivation is obvious — thankfully i caught it early enough so that once i'm done rewriting just those functions i should™ be able to finalize 1.0-rc with little hassle

@zvava@twtxt.net The problem you have then is that you lose integrity of the message content if you compute the hashes at runtime rather than on the way in. So if your message content or database becomes corrupt in any way, so do your hashes.

In-reply-to » very good blog post that reminded me why it's taking so long to ship bbycll — previously i had computed the hashes of every post before storing them in the database; after realizing it's a much better idea to compute the hashes during runtime and only store the post content & timestamp, i'm now having to rewrite every function that reads & writes data. i hope the reason as to why i lost motivation is obvious — thankfully i caught it early enough so that once i'm done rewriting just those functions i should™ be able to finalize 1.0-rc with little hassle

@lyse@lyse.isobeef.org while caching those is a good idea, the problem is baking data that can be calculated into the database instead of some cache, because post hashes are not fixed and change for every post edit. you can always easily look up other twts by hash with a cached lookup table, but now you’re not locked into them, so supporting hashv2 or other hash variants or any other solution becomes far easier
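
a rough sketch of such a cached lookup table (purely illustrative, not bbycll's actual code):

    # build an in-memory hash -> post index at load time instead of storing
    # hashes in the database; hash_fn is whatever hash variant is in use.
    def build_hash_index(posts, hash_fn):
        return {hash_fn(p["feed_url"], p["timestamp"], p["content"]): p for p in posts}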
