lyse

lyse.isobeef.org


The only good thing about this absolute craziness is that I can restock my rocket sticks. I picked up twelve along the way. Unfortunately, it looks like 99.999% of the ammunition is bombs instead of rockets. Some sections of my street look exactly like some random Pakistani town I’ve seen online.

There was a surprising amount of snow in the woods. Also, all ponds have frozen over. I didn’t expect that. Not at all. There were even illegal ice skating tracks in the nature reserve. We came across a large puddle and it was at least 10 cm of solid ice all the way to the ground. Crazy!

https://lyse.isobeef.org/waldspaziergang-2026-01-01/

In-reply-to » @lyse A "Hello World" binary is ~372KB in size. I currently have peephole optimization and dead code optimizations in play, and a few other performance related ones, but nothing too fancy. I have a test case that ensures fib(35) doesn't regress too badly as I continue to evolve the language.

@prologic@twtxt.net Not bad for a start, ey! Looking forward to seeing you go down these rabbit holes and opening one can of worms after another. :’-D Very, very impressive, hats off to you. :-)

In-reply-to » @lyse You actually have a Markdown parser/renderer in there? Oh dear. I would have been (well, I am) way too lazy for that. 😅

@movq@www.uninformativ.de Well, just a very limited subset thereof:

  1. inline and multiline code blocks using single/double/triple backticks (but no code blocks with just indentation)
  2. markdown links using [text](url)
  3. markdown media links using ![alt](url)

And that’s it. No bold, italics, lists, quotes, headlines, etc.
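Roughly speaking, matching the two link forms boils down to something like this (a simplified sketch with regular expressions, not the actual tt code):

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical patterns for the two supported link forms; tt's real parser
// may work completely differently, this is just for illustration.
var (
	mediaLinkRe = regexp.MustCompile(`!\[([^\]]*)\]\(([^)]+)\)`) // ![alt](url)
	linkRe      = regexp.MustCompile(`\[([^\]]*)\]\(([^)]+)\)`)  // [text](url)
)

func main() {
	msg := "Look at ![a photo](https://example.com/p.jpg) and [this post](https://example.com/)."

	for _, m := range mediaLinkRe.FindAllStringSubmatch(msg, -1) {
		fmt.Printf("media: alt=%q url=%q\n", m[1], m[2])
	}

	// Strip media links first so the plain link pattern doesn't match them again.
	rest := mediaLinkRe.ReplaceAllString(msg, "")
	for _, m := range linkRe.FindAllStringSubmatch(rest, -1) {
		fmt.Printf("link: text=%q url=%q\n", m[1], m[2])
	}
}
```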

Just like mentions, plain URLs, markdown links and markdown media URLs are highlighted and available in the URLs View. They’re also colored differently, similarly to code segments.

I definitely should write some documentation and provide screenshots.

In-reply-to » @movq That's cool! I also like the name of your library. :-) I assume you made the thing load quickly, didn't you?

@movq@www.uninformativ.de Yeah, I see. I just crudely checked on my computer: at around 0.013 seconds, Python 2.7 seems a tad faster than Python 3.14’s 0.023 seconds for this little program.

The lazy imports don’t sound too bad, but I only skimmed over them. There are surprisingly many exceptions, but yeah, no way around them. :-)


I just fixed another bug in tt where the language hint in multiline markdown code blocks had not been stripped before rendering. It looked like it was part of the actual code, which was ugly. Now I strip it from the rendered output; it’s actually already extracted into the data model for possible future syntax highlighting.
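A simplified sketch of what the fix boils down to (just the idea, not the actual tt code):

```go
package main

import (
	"fmt"
	"strings"
)

// fence is the marker that opens and closes a multiline code block.
var fence = strings.Repeat("`", 3)

// stripFence takes the opening line of a fenced code block and returns the
// language hint, so the hint ends up in the data model instead of being
// rendered as if it were part of the code.
func stripFence(line string) (lang string, ok bool) {
	if !strings.HasPrefix(line, fence) {
		return "", false
	}
	return strings.TrimSpace(strings.TrimPrefix(line, fence)), true
}

func main() {
	if lang, ok := stripFence(fence + "go"); ok {
		fmt.Println("language hint:", lang) // prints "go"; no longer shown in the rendered block
	}
}
```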


@shinyoukai@neko.laidback.moe Because you might not want to commit all changed files in a single commit. I make use of this very often and create several commits. In fact, I like to git add --patch to interactively select which parts of a file go into the next commit. This comes up most often when refactoring during a feature implementation or bug fix. I couldn’t live without it anymore. :-)

If you have a much more organized way of working where this does not come up, you can just git commit --all to include all changed files in the next commit without git adding them first. But new files still have to be git added manually once.

In-reply-to » Hmm, mine also resolves a leading tilde in these variables. And if $HOME is not specified it tries to resolve the user's home directory by user.Current().HomeDir. Maybe that's overkill, I have to check the XDG spec.

Ok, the standard library implementation is wonky at best, at least with regard to XDG, because it really doesn’t implement the spec properly. https://github.com/golang/go/issues/62382 I’ll stick to my own code then. It doesn’t properly support anything other than Linux or Unixes that use XDG, but personally, I don’t care about those anyway. And the cross-platform situation is a giant mess. Unsurprisingly.

In-reply-to » (#dkvkbra) @shinyoukai Cool, I didn't know about os.UserConfigDir() up until a few seconds ago! I always implemented that myself.

Hmm, mine also resolves a leading tilde in these variables. And if $HOME is not specified it tries to resolve the user’s home directory by user.Current().HomeDir. Maybe that’s overkill, I have to check the XDG spec.
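Roughly along these lines (a heavily simplified sketch, not my actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strings"
)

// configHome sketches the behavior described above: honor $XDG_CONFIG_HOME
// (with a leading tilde resolved), fall back to $HOME/.config, and if $HOME
// is not set, fall back to user.Current().HomeDir.
func configHome() (string, error) {
	home := os.Getenv("HOME")
	if home == "" {
		u, err := user.Current()
		if err != nil {
			return "", err
		}
		home = u.HomeDir
	}

	dir := os.Getenv("XDG_CONFIG_HOME")
	switch {
	case dir == "":
		dir = filepath.Join(home, ".config")
	case dir == "~" || strings.HasPrefix(dir, "~/"):
		dir = filepath.Join(home, strings.TrimPrefix(dir, "~"))
	}
	return dir, nil
}

func main() {
	dir, err := configHome()
	if err != nil {
		panic(err)
	}
	fmt.Println(dir)
}
```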

But I’m definitely missing os.UserDataDir(). That’s a bummer.

In-reply-to » @lyse Well, I used SnipMate years ago (until 2012). IIRC, it’s more than just “insert a bit of text here”, it can also jump to the correct next location(s) and stuff like that. Don’t remember why I stopped using it.

@movq@www.uninformativ.de Thanks! I’ll have a look at SnipMate. Currently, I’m (mis)using the abbreviation mechanism to expand a code snippet in place, e.g.

autocmd FileType go inoreab <buffer> testfunc func Test(t *testing.T) {<CR>}<ESC>k0wwi

or this monstrosity:

autocmd FileType go inoreab <buffer> tabletest for _, tt := range []struct {<CR>    name string<CR><CR><BS>}{<CR>   {<CR>   name: "",<CR><BS>},<CR><BS>} {<CR>  t.Run(tt.name, func(t *testing.T) {<CR><CR>})<CR><BS>}<ESC>9ki<TAB>

But this of course has the disadvantage that I still have to manually remove the trailing space or tab that triggered the expansion. It’s a bit annoying, but better than typing it all out by hand.
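For reference, the tabletest abbreviation above expands into roughly this table test skeleton (wrapped in a test function here for context, exact whitespace aside):

```go
package example

import "testing"

// Roughly what the tabletest abbreviation produces once expanded.
func TestExample(t *testing.T) {
	for _, tt := range []struct {
		name string
	}{
		{
			name: "",
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			// test body goes here
		})
	}
}
```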

In-reply-to » @lyse I’m toying with the idea of making a widget/window system on top of Python’s ncurses. I’ve never really been happy with the existing ones (like urwid, textual, pytermgui, …). I mean, they’re not horrible, it’s mostly the performance that’s bugging me – I don’t want to wait an entire second for a terminal program to start up.

@movq@www.uninformativ.de I see. Yeah, all the Unicode stuff certainly doesn’t help here, that’s for sure.

Maybe ā€œspeedcursesā€ could be a name. Or just select any Palatinate curse. ;-)

In-reply-to » @lyse I can tell you this right now, writing assembly / machine code is fucking hard work™ 😓 I'm sure @movq can affirm 🤣 And when it all goes to shit™ (which it does often), man, is debugging fucking hard as hell! Without debug symbols I can't use the regular tools like lldb or gdb 😂

@prologic@twtxt.net Oh yeah, I bet it is horrible to troubleshoot.

In-reply-to » @lyse Yeah I remember you said some days back that your interest in compilers was rekindled by my work on mu (µ) 😅

@prologic@twtxt.net Yeah, the parser part is what I typically enjoy. Haven’t really looked into code generation itself.

I’m currently looking at your µ commits from the last few days. Holy cow! :-)

In-reply-to » Whoo! I fixed one of the hardest bugs in mu (µ) I think I've had to figure out. Took me several days in fact to figure it out. The basic problem was, println(1, 2) was being printed as 1 2 in the bytecode VM and 1 nil when natively compiled to machine code on macOS. In the end it turned out the machine code being generated / emitted meant that the list pointer for the rest... of the variadic arguments was being slotted into a register that was being clobbered by the mu_retain and mu_release calls and effectively getting freed up on first use by the RC (reference counting) garbage collector 🤦‍♂️

@prologic@twtxt.net Tada, congratulations! I find that rather interesting, thanks for telling us. :-)

In-reply-to » Trying to come up with a name for a new project and every name is already taken. 🤣 The internet is full!

@movq@www.uninformativ.de How about “Quongsi”? I generated the first five letters with pwgen --no-capitalize --no-numerals 5, and since that already showed up in DDG search results, I simply appended the last two, which yielded nothing on DDG and Google.

What kind of project is it? Maybe we can help you find a name or nudge you in the right direction.


The tt URLs View now automatically selects the first URL that I’m most likely going to open. In decreasing order of priority, the URL types are:

  1. markdown media URLs (images, videos, etc.)
  2. markdown or plaintext URLs
  3. subjects
  4. mentions

I might differentiate between mentions of subscribed and unsubscribed feeds in the future. The odds of opening a mention of a new feed are higher than those of opening an already subscribed one.
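A simplified sketch of the selection logic (the type names are made up for illustration, this is not the actual tt code):

```go
package main

import "fmt"

// URLKind mirrors the priority order above; a lower value means higher priority.
type URLKind int

const (
	MediaURL URLKind = iota // markdown media URLs (images, videos, etc.)
	PlainURL                // markdown or plaintext URLs
	Subject
	Mention
)

type URL struct {
	Kind URLKind
	Text string
}

// selectDefault returns the index of the preselected URL: the first URL of
// the highest-priority kind that occurs in the message.
func selectDefault(urls []URL) int {
	best := -1
	for i, u := range urls {
		if best == -1 || u.Kind < urls[best].Kind {
			best = i
		}
	}
	return best
}

func main() {
	urls := []URL{
		{Mention, "https://example.com/someone/twtxt.txt"},
		{PlainURL, "https://example.com/article"},
		{MediaURL, "https://example.com/photo.jpg"},
	}
	fmt.Println(urls[selectDefault(urls)].Text) // the media URL wins
}
```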

In-reply-to » @zvava The problem you have now is that you lose integrity of the message content if you compute the hashes at runtime rather than on the way in. So if your message content or database becomes corrupt in any way, so do your hashes.

@prologic@twtxt.net In my opinion, the integrity isn’t lost. The same input data always results in the same output hash, no matter when you calculate it. It’s true that corrupt database contents yield corrupt hashes, but then you have a much bigger problem than just ending up with different hashes. :-D
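Just to illustrate the point with a plain hash (sha256 here as a stand-in, not the actual twt hash algorithm):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// The same input yields the same digest, no matter when or how often
	// it's computed. The input below is just example data.
	input := []byte("https://example.com/twtxt.txt\n2026-01-01T12:00:00+01:00\nHello world!")
	fmt.Printf("%x\n", sha256.Sum256(input))
	fmt.Printf("%x\n", sha256.Sum256(input)) // identical
}
```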

In-reply-to » @lyse while caching those is a good idea the problem is baking data that can be calculated into the database instead of some cache, because post hashes are not fixed and change for every post edit. you can always easily look up other twts by hash with a cached lookup table, but now you're not locked into them so supporting hashv2 or other hash variants or any other solution becomes far easier

@zvava@twtxt.net By the very definition of hashing, if you edit your message, it simply becomes a new message. It’s just not the same message anymore. At least from a technical point of view. As a human, I personally disagree, but that’s what I’m stuck with. There’s no reliable way to detect and “correct” for that.

Storing the hash in your database doesn’t prevent you from switching to another hashing implementation later on. As of now, messages with creation timestamps earlier than some magical point in time use twt hash v1, messages created on or after that magical timestamp use twt hash v2. So, a message has either a v1 or a v2 hash, but never both. For any given message, only one of the two is ever meaningful.
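In a sketch, picking the hash version boils down to something like this (the cutoff date below is a made-up placeholder, not the real switchover):

```go
package main

import (
	"fmt"
	"time"
)

// switchover is a placeholder for the "magical point in time" mentioned
// above; the actual cutoff is not this date.
var switchover = time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)

// hashVersionFor picks which twt hash variant applies to a message based on
// its creation timestamp: strictly before the cutoff is v1, on or after is v2.
func hashVersionFor(created time.Time) int {
	if created.Before(switchover) {
		return 1
	}
	return 2
}

func main() {
	fmt.Println(hashVersionFor(time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC))) // 1
	fmt.Println(hashVersionFor(time.Date(2026, 6, 1, 0, 0, 0, 0, time.UTC))) // 2
}
```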

Once you “upgrade” your database schema, you can check for stored messages dated after the switchover which should have been hashed using v2, but were actually v1-hashed, and simply fix them.

If there’s ever another addressing scheme, you could reuse the existing hash column if it supersedes the v1/v2 hashes. Otherwise, a new column might be useful, or perhaps no column at all (looking at location-based addressing, or whatever it was called). The old v1/v2 hashes are still needed for all past conversation trees.

In my opinion, always recalculating the hashes is a big waste of time and energy. But if it serves you well, then go for it.

In-reply-to » very good blog post that reminded me why it's taking so long to ship bbycll — previously i had computed the hashes of every post before storing them in the database, after realizing it's a much better idea to compute the hashes during runtime and only store the post content & timestamp i'm now having to rewrite every function that reads & writes data. i hope the reason as to why i lost motivation is obvious — thankfully i caught it early enough so that once i'm done rewriting just those functions i should™ be able to finalize 1.0-rc with little hassle

@zvava@twtxt.net I might misunderstand what you wrote, but only hashing the message once and storing the hash together with the message in the database seems a way better approach to me. It’s fixed and doesn’t change, so there’s no need to recompute it during runtime over and over and over again. You just have it. And you can easily look up other messages by hash.
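Something along these lines is what I have in mind (a hugely simplified sketch; twtHash just stands in for whatever the real hash function is):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// Message sketches storing the hash alongside the message: it's computed
// exactly once when the message is ingested.
type Message struct {
	Hash    string
	Created time.Time
	Content string
}

// twtHash is only a stand-in for the real twt hash function.
func twtHash(created time.Time, content string) string {
	sum := sha256.Sum256([]byte(created.Format(time.RFC3339) + "\n" + content))
	return hex.EncodeToString(sum[:])
}

func newMessage(created time.Time, content string) Message {
	return Message{
		Hash:    twtHash(created, content), // computed once, then simply stored
		Created: created,
		Content: content,
	}
}

func main() {
	m := newMessage(time.Now(), "Hello world!")
	fmt.Println(m.Hash) // later lookups just use the stored value
}
```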
