Searching yarn

Twts matching #1
In-reply-to » Advent of Code 2025 starts tomorrow. 🥳🎄

I rewrote all my solutions in Rust (except for day 10 part 2) and these are the runtimes on my i7-3770 from 2013 (this measures CLOCK_PROCESS_CPUTIME_ID, not wallclock):

day01/1 [      00.000501311] Result: 1066
day01/2 [      00.000400298] Result: 6223
day02/1 [      00.000358848] Result: 12586854255
day02/2 [      00.000750711] Result: 17298174201
day03/1 [      00.000106537] Result: 17405
day03/2 [      00.000404632] Result: 171990312704598
day04/1 [      00.000257517] Result: 1626
day04/2 [      00.007495342] Result: 9173
day05/1 [      00.000237212] Result: 505
day05/2 [      00.000142731] Result: 344423158480189
day06/1 [      00.000229629] Result: 4076006202939
day06/2 [      00.000279552] Result: 7903168391557
day07/1 [      00.000204422] Result: 1622
day07/2 [      00.000283816] Result: 10357305916520
day08/1 [      00.029427421] Result: 84968
day08/2 [      00.028089859] Result: 8663467782
day09/1 [      00.000310304] Result: 4764078684
day09/2 [      00.015512554] Result: 1652344888
day10/1 [      00.000796663] Result: 375
day10/2 [      --.---------] Result: 15377 (Z3)
day11/1 [      00.000416804] Result: 753
day11/2 [      00.000660528] Result: 450854305019580
day12/1 [      00.000336081] Result: 577
day12/2 [      00.000000695] Result: no part 2

A little under 90 ms total.

On my Samsung NC10 netbook from 2011 with its Intel Atom N455 at 1.6 GHz:

day01/1 [      00.003771326] Result: 1066
day01/2 [      00.003267317] Result: 6223
day02/1 [      00.003902698] Result: 12586854255
day02/2 [      00.006659479] Result: 17298174201
day03/1 [      00.000747544] Result: 17405
day03/2 [      00.002737587] Result: 171990312704598
day04/1 [      00.001263892] Result: 1626
day04/2 [      00.044985301] Result: 9173
day05/1 [      00.001696761] Result: 505
day05/2 [      00.000978962] Result: 344423158480189
day06/1 [      00.001387660] Result: 4076006202939
day06/2 [      00.001734248] Result: 7903168391557
day07/1 [      00.001295528] Result: 1622
day07/2 [      00.001809659] Result: 10357305916520
day08/1 [      00.277251443] Result: 84968
day08/2 [      00.284359332] Result: 8663467782
day09/1 [      00.003152407] Result: 4764078684
day09/2 [      00.071123459] Result: 1652344888
day10/1 [      00.005279527] Result: 375
day10/2 [      --.---------] Result: 15377 (Z3)
day11/1 [      00.003273342] Result: 753
day11/2 [      00.005139719] Result: 450854305019580
day12/1 [      00.002857552] Result: 577
day12/2 [      00.000004421] Result: no part 2

A little over 700 ms total.

I like this. You get performance that’s more or less in the ballpark of C, but without the footguns.
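
For reference, the same kind of per-process CPU-time measurement can be taken from Python with time.clock_gettime on Linux/Unix; a minimal sketch (the Rust harness above presumably uses the equivalent libc call directly, and the helper name here is made up):

import time

def measure_cpu(label, func, *args):
    # CLOCK_PROCESS_CPUTIME_ID counts CPU time consumed by this process only,
    # so time spent sleeping or blocked on I/O does not inflate the number.
    start = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    result = func(*args)
    elapsed = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID) - start
    print(f"{label} [{elapsed:15.9f}] Result: {result}")

measure_cpu("day01/1", sum, range(10_000_000))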

⤋ Read More

If your very popular project with lots of stars on GitHub is over 10 years old, and you’re still at a pre-1.0 version because you’re using SemVer and a 1.0 would mean making some kind of commitment and that’s somehow not desirable for you, then I think you’re doing something wrong. 🤔

⤋ Read More

I cleaned up all of my AoC (Advent of Code) 2025 solutions, refactored many of the utilities I had to write as reusable libraries, and re-tested Day 1 (but nothing else). Here it is if you’re curious! This is written in mu, my own language I built as a self-hosted minimal compiler/vm with very few types and builtins.

https://git.mills.io/prologic/aoc2025

⤋ Read More

I’m having to write my own functions like this in mu just to solve AoC puzzles :D

fn pow10(k) {
    p := 1
    i := 0
    while i < k {
        p = p * 10
        i = i + 1
    }
    return p
}

⤋ Read More

Come back from my trip, run my AoC 2025 Day 1 solution in my own language (mu) and find it didn’t run correctly 🤣 Ooops!

$ ./bin/mu examples/aoc2025/day1.mu
closure[0x140001544e0]

⤋ Read More
In-reply-to » Advent of Code 2025 starts tomorrow. 🥳🎄

Alright, Advent of Code is over:

https://www.uninformativ.de/blog/postings/2025-12-12/0/POSTING-en.html

It’s been quite the time sink, especially with the DOS games on top, but it was fun. 🥳

In case you’re wondering: All puzzles (except for part 2 of day 10) were doable in Python 1 on SuSE Linux 6.4 and ran in a finite time on the Pentium 133. Puzzle 10/2 might have been doable as well if I had a better education. 🤣

⤋ Read More

Awk to take lines from Plan 9’s /lib/unicode and prepend the actual glyph and a tab: awk '{cmd=sprintf("unicode %s", $1); cmd | getline c; printf("%s %s\n", c, $0)}'
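
For comparison, a rough Python equivalent of that one-liner, assuming each /lib/unicode line starts with a hexadecimal codepoint; it uses chr() instead of shelling out to unicode(1), and prints a tab between glyph and line:

import sys

for line in sys.stdin:
    line = line.rstrip("\n")
    codepoint = line.split()[0]        # first field: hex codepoint, e.g. 263a
    glyph = chr(int(codepoint, 16))    # what unicode(1) would print
    print(f"{glyph}\t{line}")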

⤋ Read More
In-reply-to » I'm contemplating the idea of switching my ActivityPub instance from GoToSocial to a Pleroma one. While GTS is kinda cute (lightweight and easy to manage) of a software, the inability to fetch/scroll through people's past toots when visiting a profile or having access to a federated timeline and a proper search functionality ...etc felt like a handicap for the past N months.

@bender@twtxt.net yeah, I was reading through the documentation last night and it felt overwhelming for a minute… +1 point goes to GTS’s docs. But hey, I’ll be taking the easy route: podman-compose up -d. They provide both a container image and an example compose file in a separate git repo, but I’m wondering why that isn’t mentioned anywhere in the docs (unless it is and I haven’t seen it yet).

⤋ Read More
In-reply-to » Advent of Code 2025 starts tomorrow. 🥳🎄

Day 2 was pretty tough on my old hardware. Part 1 originally took 16 minutes, then I got it down to 9 seconds – only to realize later that my solution abused some properties of my particular input. A correct solution will probably take about 30 seconds. 🫤

Part 2 took 29 minutes this morning. I wrote an optimized version but haven’t tested it yet. I hope it’ll be under a minute.

Python 1 feels really slow, even compared to Java 1. And these first puzzles weren’t even computationally intensive. We’ll see how far I’ll make it …

https://movq.de/v/f831d98103/day02.jpg

⤋ Read More

Thinking about doing Advent of Code in my own tiny language mu this year.

mu is:

  • Dynamically typed
  • Lexically scoped with closures
  • Has a Go-like curly-brace syntax
  • Built around lists, maps, and first-class functions

Key syntax:

  • Functions use fn and braces:
fn add(a, b) {
    return a + b
}
  • Variables use := for declaration and = for assignment:
x := 10
x = x + 1
  • Control flow includes if / else and while:
if x > 5 {
    println("big")
} else {
    println("small")
}
while x < 10 {
    x = x + 1
}
  • Lists and maps:
nums := [1, 2, 3]
nums[1] = 42
ages := {"alice": 30, "bob": 25}
ages["bob"] = ages["bob"] + 1

Supported types:

  • int
  • bool
  • string
  • list
  • map
  • fn
  • nil

mu feels like a tiny little Go-ish, Python-ish language — curious to see how far I can get with it for Advent of Code this year. 🎄

⤋ Read More

Advent of Code 2025 starts tomorrow. 🥳🎄

This year, I’m going to use Python 1 on SuSE Linux 6.4, writing the code on my trusty old Pentium 133 with its 64 MB of RAM. No idea if that old version of Python will be fast enough for later puzzles. We’ll see.

⤋ Read More
In-reply-to » Which actively maintained Yarn/twtxt clients are there at the moment? Client authors raise your hands! 🙋

@lyse@lyse.isobeef.org Damn. That was stupid of me. I should have posted examples using 2026-03-01 as the cutoff date. 😂

In my actual test suite, everything uses 2027-01-01 and then I have this, hoping that that’s good enough. 🄓

def test_rollover():
    d = jenny.HASHV2_CUTOFF_DATE
    assert len(jenny.make_twt_hash(URL, d - timedelta(days=7), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=3), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=2), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=1), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d, TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=1), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=2), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=3), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(days=7), TEXT)) == 12

(In other words, I don’t care as long as it’s before 2027-01-01. 😁😅)
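
Purely as an illustration of the rollover logic this test exercises, a sketch of the length switch; the digest construction below (sha256 + base32) is only a stand-in and not necessarily what jenny actually does:

import base64
import hashlib
from datetime import datetime, timezone

HASHV2_CUTOFF_DATE = datetime(2027, 1, 1, tzinfo=timezone.utc)   # placeholder value

def make_twt_hash(url, created, text):
    payload = f"{url}\n{created.isoformat()}\n{text}".encode()
    # Stand-in digest; only the length selection below matters for the rollover.
    encoded = base64.b32encode(hashlib.sha256(payload).digest()).decode().lower().rstrip("=")
    length = 12 if created >= HASHV2_CUTOFF_DATE else 7   # hash v2 from the cutoff onwards
    return encoded[-length:]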

⤋ Read More
In-reply-to » Alright, this yarnd installation has been properly fixed.

cat /etc/mokou/yarnd.conf
exec=/usr/pkg/sbin/daemonize -c/var/db/yarnd -u www -p /var/run/yarnd.pid /usr/pkg/sbin/chpst -e /usr/local/etc/yarnd /usr/local/sbin/yarnd -b 127.0.0.1:[classified information]

I know this might seem a bit overengineered, but until now the previous command had the secrets exposed in the process list.

⤋ Read More

Tried to re-enable the Edge route to git.mills.io today (after finishing work) and this is what I found 🤯 These assholes/cunts are still at it!!! 🤬 – So let’s instead see if this works:

$ host git.mills.io 1.1.1.1
Using domain server:
Name: 1.1.1.1
Address: 1.1.1.1#53
Aliases:

git.mills.io is an alias for fuckoff.mills.io.
fuckoff.mills.io has address 127.0.0.1


PS: Would anyone be interested if I started a massive global class action suit against companies that do this kind of abusive web crawling behavior, violate/disregard robots.txt, and whatever other standards are set in stone by the W3C? 🤔

⤋ Read More

I’ve once again brought up a Gitea instance on my server space, but there are two highlights here:

  1. No self-registration (accounts are tied to the e-mail server, which is in turn tied to the system accounts)
  2. Going beyond the landing page requires being logged in, no excuses. (It could also scare the AI crawlers into oblivion, avoiding the need for Anubis at that.)

That’s it.

⤋ Read More
In-reply-to » And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought that my redirect rules actually were broken once and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to that in my Git history, either. But it’s hard to tell, because I switched operating systems and webservers since then …

@lyse@lyse.isobeef.org Probably wouldn’t help, since almost every request comes from a different IP address. These are the hits on those weird /projects URLs since Sunday:

    1 IP  has  5 hits
    1 IP  has  4 hits
   13 IPs have 3 hits
  280 IPs have 2 hits
25543 IPs have 1 hit

The total number of hits has decreased now. Maybe the botnet has moved on …
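
For the curious, a rough sketch of how such a per-IP distribution can be pulled out of an access log in the vhost-prefixed format shown elsewhere in this thread (the file name and the /projects/ filter are just examples):

from collections import Counter

hits_per_ip = Counter()
with open("access.log") as log:                 # hypothetical file name
    for line in log:
        fields = line.split()
        ip, path = fields[1], fields[7]         # vhost ip - - [date tz] "GET path ...
        if path.startswith("/projects/"):
            hits_per_ip[ip] += 1

# Turn "hits per IP" into "number of IPs per hit count".
distribution = Counter(hits_per_ip.values())
for hit_count, num_ips in sorted(distribution.items(), reverse=True):
    print(f"{num_ips:6} IPs have {hit_count} hit(s)")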

⤋ Read More

Fark me 🤦‍♂️ I woke up quite late today (after a long night helping/assisting with a Mainframe migration last night for work) to abusive traffic and my alerts going off. The impact? My pod (twtxt.net) was being hammered by something at a request rate of 30 req/s (there are global rate limits in place, but still…). The culprit? Turned out to be a particular IP 43.134.51.191, and after looking into who owns that IP I discovered it was yet-another-bad-customer-or-whatever from Tencent, so that entire network (ASN) is now blocked from my Edge:

+# Who: Tentcent
+# Why: Bad Bots
+132203

Total damage?

$ caddy-log-formatter twtxt.net.log | cut -f 1 -d  ' ' | sort | uniq -c | sort -r -n -k 1 | head -n 5
  61371 43.134.51.191
    402 159.196.9.199
    121 45.77.238.240
      8 106.200.1.116
      6 104.250.53.138

61k reqs over an hour or so (before I noticed), bunch of CPU time burned, and useless waste of my fucking time.

⤋ Read More
In-reply-to » I just noticed this pattern:

And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought that my redirect rules actually were broken once and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to that in my Git history, either. But it’s hard to tell, because I switched operating systems and webservers since then …

But the thing is that I’m seeing new URLs constructed in this pattern. So this can’t just be an old crawling dataset.

I am now wondering if those broken URLs are bot bugs as well.

They look like this (zalgo is a new project):

https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/

When you request that URL, you get redirected to /git/:

$ curl -sI https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
HTTP/1.0 301 Moved Permanently
Date: Sat, 22 Nov 2025 06:13:51 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 510
Location: /git/

And on /git/, there are links to my repos. So if a broken client requests https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/, then sees a bunch of links and simply appends them, you’ll end up with an infinite loop.

Is that what’s going on here or are my redirects actually still broken … ?
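
A tiny illustration of that failure mode (nothing here comes from the actual bot, and the link list is made up): a crawler that joins every relative link against the page it is currently on, instead of against the redirect target /git/, produces exactly these ever-deeper paths:

from urllib.parse import urljoin

# Relative links as they might appear on /git/ (project names taken from the example URL).
links = ["slinp/", "zalgo/", "scksums/", "bevelbar/"]

url = "https://www.uninformativ.de/projects/"
for link in links:
    # Broken behaviour: resolve each link against wherever we currently "are".
    url = urljoin(url, link)
    print(url)

# https://www.uninformativ.de/projects/slinp/
# https://www.uninformativ.de/projects/slinp/zalgo/
# https://www.uninformativ.de/projects/slinp/zalgo/scksums/
# https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/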

⤋ Read More
In-reply-to » My goodness, a new level of stupidity.

I just noticed this pattern:

uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"

Let me add some spaces to make it more clear:

    uninformativ.de 201.218.xxx.xxx - - [22/Nov/2025:06:53:27 +0100] "GET                       /projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 301 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"
www.uninformativ.de 103.10.xxx.xxx  - - [22/Nov/2025:06:53:28 +0100] "GET http://uninformativ.de/projects/lariza/multipass/xiate/padme/gophcatch HTTP/1.1" 400 0 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36"

Some IP (from Brazil) requests some (non-existing, completely broken) URL from my webserver. But they use the hostname uninformativ.de, so they get redirected to www.uninformativ.de.

In the next step, just a second later, some other IP (from Nepal) issues an HTTP proxy request for the same URL.

Clearly, someone has no idea how HTTP redirects work. And clearly, they’re running their broken code on some kind of botnet all over the world.

⤋ Read More
In-reply-to » There are no really good GUI toolkits for Linux, are there?

FTR, I see one (two) issues with PyQt6, sadly:

  1. The PyQt6 docs appear to be mostly auto-generated from the C++ docs. And they contain many errors or broken examples (due to the auto-conversion). I found this relatively unpleasant to work with.
  2. (Until Python finally gets rid of the Global Interpreter Lock properly, it’s not really suited for GUI programs anyway – in my opinion. You can’t offload anything to a second thread, because the whole program is still single-threaded. This would have made my fractal rendering program impossible, for example.)
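
A tiny illustration of point 2 under CPython: with the GIL, two threads doing CPU-bound work take roughly as long as doing the same work sequentially (exact numbers vary, but they don’t halve):

import threading
import time

def burn():
    # CPU-bound busy work; only one thread at a time can execute Python bytecode.
    total = 0
    for i in range(5_000_000):
        total += i * i

start = time.perf_counter()
burn(); burn()
print(f"sequential:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"two threads: {time.perf_counter() - start:.2f}s")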

⤋ Read More

For those curious, the new Twtxt <-> ActivityPub bridge I’m building (bidirectional) simply requires three things:

  1. You register your Twtxt feed to the bridge: https://bridge.twtxt.net
  2. You verify that you in fact own/control the feed by putting the verification code somewhere on/in your feed (doesn’t matter where or how)
  3. You proxy/forward requests for /.well-known/webfinger to the Bridge bridge.twtxt.net.

I’m still testing things and ironing out bugs 🐛 Please be patient! 🙏
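
Regarding step 3: in practice that forwarding is a one-line proxy rule in whatever web server already fronts your feed (Caddy, nginx, ...). Purely to illustrate what the rule does, a minimal stdlib-only Python sketch (port and bind address are arbitrary):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

BRIDGE = "https://bridge.twtxt.net"   # upstream from step 1 above

class WebfingerProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith("/.well-known/webfinger"):
            self.send_error(404)
            return
        try:
            resp = urlopen(BRIDGE + self.path)              # same path + query, new host
            status, body, headers = resp.status, resp.read(), resp.headers
        except HTTPError as err:
            status, body, headers = err.code, err.read(), err.headers
        self.send_response(status)
        self.send_header("Content-Type", headers.get("Content-Type", "application/jrd+json"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8037), WebfingerProxy).serve_forever()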

⤋ Read More

My goodness, a new level of stupidity.

The bots are now doing things like this:

GET http://uninformativ.de/projects/lariza/feednotify/datenstrahler/slinp/countty HTTP/1.1
  1. That URL does not exist.
  2. By including http://uninformativ.de in that request, this instructs the webserver to do an HTTP proxy request. Of course, this isn’t allowed on my webserver (and shouldn’t be allowed on any normal webserver), resulting in HTTP 400. And even if it were, the target would be the exact same server, making a proxy request unnecessary.
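
To make point 2 concrete: http.client sends whatever request target you give it verbatim, so the absolute-URI ("proxy form") request is easy to reproduce against a host you control (example.org here is just a stand-in):

import http.client

conn = http.client.HTTPConnection("example.org")
# The request line becomes: GET http://example.org/some/path HTTP/1.1
# i.e. the proxy form; the webserver in this post answers such requests with 400.
conn.request("GET", "http://example.org/some/path")
resp = conn.getresponse()
print(resp.status, resp.reason)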

And of course, it’s not just 50 hits like this or 100 or 1’000 or 10’000. No, it’s over 150’000 in the last 2 days. All from vastly different IP ranges of different cloud hosters.

This almost looks like a DDoS attack, but it’s just completely stupid. This feels more like some idiot vibe coded a crawler.

⤋ Read More
In-reply-to » @bender Thanks for this illustration, it completely “misunderstood” everything I wrote and confidently spat out garbage. 👌

@prologic@twtxt.net Let’s go through it one by one. Here’s a wall of text that took me over 1.5 hours to write.

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.

The AI also said that users must develop “AI literacy”, again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is “AI literacy”, isn’t it?

My text went one step further, though: I said that when you take this requirement of “AI literacy” into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.

Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it’s fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don’t feel like a draft that needs editing.

Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here’s the kicker: You did not create that draft. You were not involved in the “thought process” behind it. When you, a human being, make a draft, you often think something like: “Okay, I want to draw a picture of a landscape and there’s going to be a little house, but for now, I’ll just put in a rough sketch of the house and add the details later.” You are aware of what you left out. When the AI did the draft, you are not aware of what’s missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.

Skill Erosion vs. Skill Evolution

You, @prologic@twtxt.net, also mentioned this in your car tyre example.

In my text, I gave two analogies: The gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini’s calculator example is different (and, again, gaslight-y, see below).

What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?

No, you’re something else. You should not be hired as a programmer or a mechanic.

Yes, that is “skill evolution” – which is pretty much my point! But the AI framed it like a counter-argument. It didn’t understand my text.

(But what if that’s our future? What if all programming will look like that in some years? I claim: It’s not possible. If you don’t know how to program, then you don’t know how to read/understand code written by an AI. You are something else, but you’re not a programmer. It might be valid to be something else – but that wasn’t my point, my point was that you’re not a bloody programmer.)

Gemini’s calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., “complex problem-solving”) are two different things. Just because you now have a calculator, doesn’t mean it’ll free you up to do mathematical proofs or whatever.

What would have worked is this: Let’s say you’re an accountant and you sum up spendings. Without a calculator, this takes a lot of time and is error prone. But when you have one, you can work faster. But once again, there’s a little gaslight-y detail: A calculator is correct. Yes, it could have “bugs” (hello Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it’s just a statistical model. So, this modified example (“accountant with a calculator”) would actually have to be phrased like this: Suppose there’s an accountant and you give her a magic box that spits out the correct result in, what, I don’t know, 70-90% of the time. The accountant couldn’t rely on this box now, could she? She’d either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.

Gemini has no idea that its calculator example doesn’t make sense. It just spits out some generic “argument” that it picked up on some website.

3. The Technical and Legal Perspective (Scraping and Copyright)

The AI makes two points here. The first one, I might actually agree with (“bad bot behavior is not the fault of AI itself”).

The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn’t. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn’t even question whether it’s okay to break the current law or not. It just said “lol yeah, change the laws”. (I wonder in what way the laws would have to be changed in the AI’s “opinion”, because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn’t part of Gemini’s answer.)

tl;dr

Except for one point, I don’t accept any of Gemini’s “criticism”. It didn’t pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it’s just a statistical model).

And it framed everything like a counter-argument, while actually repeating what I said. That’s gaslighting: When Alice says “the sky is blue” and Bob replies with “why do you say the sky is purple?!”

But it sure looks convincing, doesn’t it?

Never again

This took so much of my time. I won’t do this again. 😂

⤋ Read More
In-reply-to » @bender Thanks for this illustration, it completely “misunderstood” everything I wrote and confidently spat out garbage. 👌

@movq@www.uninformativ.de this I find more worrisome, and saw no mention of it in your text: Right-Wing Chatbots Turbocharge America’s Political and Cultural Wars (gift article).

Enoch, one of the newer chatbots powered by artificial intelligence, promises “to ‘mind wipe’ the pro-pharma bias” from its answers. Another, Arya, produces content based on instructions that tell it to be an “unapologetic right-wing nationalist Christian A.I. model.”

⤋ Read More
In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:

1. The User Perspective (Untrustworthiness)

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

  • AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
  • The Rise of AI Literacy: Users must develop a new skill – AI literacy – to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the “gaslighting” effect.
2. The Moral/Political Perspective (Skill Erosion)

The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.

  • Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
  • Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
3. The Technical and Legal Perspective (Scraping and Copyright)

The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.

  • Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.

⤋ Read More

I’m building a service that lets you:

create and manage disposable, brandable email aliases so you can track leaks, forward important messages, and keep your real inbox clean.

I’ve just finished building it for the most part, and have cut a v0.1.0 release. It’s currently closed source (to be decided later) and now open to beta testers. cc @bender@twtxt.net 🙏 I fully intend to monetize and offer this as a paid service in the coming weeks/months, but beta/invite-only testers and early adopters/users come first 🤟

⤋ Read More
In-reply-to » @lyse They’re seriously telling us at work: “Can it be AI’d? Do it, don’t waste time!” Shit like that is the result. (What’s this weird gray triangle in the bottom right corner?)

@movq@www.uninformativ.de It’s way more expensive and time-consuming in the end. If only somebody had warned us!!1

The triangle reminds me of zalgo text: https://en.wikipedia.org/wiki/Zalgo_text

⤋ Read More
In-reply-to » The most infuriating 3 seconds of using this Mac every day are the first time I run man and it calls home to see if I'm allowed to do that.

Because the OP’s twt seems to be a cross-post from the Fediverse, I am bringing some context here. It refers to this GitHub issue. This comment explains why the issue described is happening:

This is usually due to notarization checks. E.g. the binaries are checked by the notarization service (‘XProtect’) which phones home to Apple. Depending on your network environment, this can take a long time. Once the executable has been run the results are usually cached, so any subsequent startup should be fast.

The OP’s network must be running on a 1,200 baud modem, or slower. 🤭 I have never, ever experienced any noticeable delays.

⤋ Read More

Advent of Code will be different this year:

https://old.reddit.com/r/adventofcode/comments/1ocwh04/changes_to_advent_of_code_starting_this_december/

There will only be 12 puzzles, i.e. only December 1 to December 12. This might make it more interesting for some people, because it’s (probably) less work and there’s a lower chance of people getting burned out. 🤔

Personally, I’ll probably stretch it out over 24 days, giving myself more time to solve each puzzle – and I really want this event to last the entire month. 😅

Maybe this makes it more interesting for some people around here as well?

⤋ Read More

@movq@www.uninformativ.de streamlining jenny.vim?

index adc0db9..cb54abc 100644
--- a/vim/ftdetect/jenny.vim
+++ b/vim/ftdetect/jenny.vim
@@ -1 +1,2 @@
 au BufNewFile,BufRead jenny-posting.eml setl completefunc=jenny#CompleteMentions fo-=t wrap
+au BufRead,BufNewFile jenny-posting.eml normal $

⤋ Read More
In-reply-to » @lyse Great job!

@lyse@lyse.isobeef.org In my case it was a silver necklace, a hummingbird with a wing connected with the cold welding I mentioned, using thin brass wires.

I made it in a goldsmithing class (I went to a private craftsmanship high school), so no phones were allowed (no photos of it) and no “take home” of the works.

Here’s a rough sketch of it drawn from memory; the dots in the wing are where it connects to the body.

Hummingbird necklace sketch

The technique is basically the same as I described, but the scale is much smaller; the whole piece was about 5-6 cm on its largest side.

The rivet was made by drilling a hole through the parts; then with a short, thicker drill bit you widen the hole at the surface to let the rivet settle flatter on the piece; then with a rubber hammer you hit it to flatten the head until it’s snug in the hole, and you lock the parts together by doing the same on the other side.

Note that widening the hole with a thicker drill bit won’t make a difference with bigger holes; mine had holes of about 1-2 mm in diameter at most.

Here’s a sketch of what is going on for clarity.

Cold welding cross-section

⤋ Read More
In-reply-to » @lyse Thanks, I think I fixed it now. Sorry for the spam.

@itsericwoodward@itsericwoodward.com No worries, all good, mate! We all have to start somewhere. Other software requests my feed several orders of magnitude more often.

I can confirm, the User-Agent header appears to be fixed. \o/

Two other things I noticed, though:

  1. There’s now an OPTIONS request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it’s rejected with a 405 Method Not Allowed.

  2. Not that these few requests bother me at all, but you might wanna implement caching next with either the If-Modified-Since or If-None-Match request headers. This way, if the feed hasn’t changed, the web server can reply with a 304 Not Modified and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you’re aware of it, that’s all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
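
A minimal sketch of the caching idea from point 2, here with If-Modified-Since and urllib (the feed URL is a placeholder; the same pattern works with If-None-Match/ETag):

from urllib.error import HTTPError
from urllib.request import Request, urlopen

FEED = "https://example.org/twtxt.txt"   # placeholder feed URL
last_modified = None                     # persist this between runs in a real client

headers = {"If-Modified-Since": last_modified} if last_modified else {}
try:
    with urlopen(Request(FEED, headers=headers)) as resp:
        last_modified = resp.headers.get("Last-Modified")
        body = resp.read()               # 200: feed changed, parse it
except HTTPError as err:
    if err.code == 304:
        body = None                      # 304 Not Modified: nothing was sent, nothing to do
    else:
        raise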

⤋ Read More
In-reply-to » TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).

@prologic@twtxt.net I think a counter in the client is not a good choice given the decentralized nature of twtxt, especially if someone uses multiple clients together.

After thinking about it for a while, I arrived at two solutions:

Proposal 1: Thread syntax (using subject)

Each post has an implicit and an optional explicit root reference:

  • Implicit (no action needed, all the required data is already there)

    • URL + timestamp
  • Explicit (subject required)

    • Identity (client generated)
    • External reference
    • Random value

We then include a “root” subject in each post for generating explicit threads:

  1. [ROOT_ID] (REPLY_ID): simpler, with no need for prefixes
  2. (root:ROOT_ID) (reply:REPLY_ID): more complex, but could allow expansions
    • (rt:ROOT_ID) (re:REPLY_ID): same, but with a compact version
    • ($ROOT_ID) (>REPLY_ID): same, but with single characters

Each post can have both references; like the current hash approach, the reference can be treated as a simple string and doesn’t have a real meaning.

Using a custom reference this way allows a client to decide how to generate them:

  • Identity: can be a content hash or signature or anything else; without enforcing how it is generated, we can upgrade the algorithm/length freely
  • External references: can be provided by another system (e.g. 7e073bd345, the latest yarnsocial/yarn commit)
  • Random value: like a UUID (e.g. 9a0c34ed-d11e-447e-9257-0a0f57ef6e07)
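
To make Proposal 1 a bit more concrete, a small sketch of how a client might pull the thread identity out of a twt under syntax option 2 above ((root:ROOT_ID) (reply:REPLY_ID)); the regex is only one possible reading of the draft:

import re

SUBJECT = re.compile(r"\((root|reply):([^)\s]+)\)")

def thread_refs(twt_text):
    # e.g. {"root": "9a0c34ed", "reply": "7e073bd345"} -- either key may be missing.
    return {kind: ref for kind, ref in SUBJECT.findall(twt_text)}

print(thread_refs("(root:9a0c34ed) (reply:7e073bd345) I agree with this!"))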

Proposal 2: Threaded mentions (featuring zvava)

Inspired by @zvava@twtxt.net’s solution, it could be simplified into: #<nick url#timestamp> or #<url#timestamp>

It can be shown like a mention or hidden like a subject.

If we’re thinking of using a counter in the client, I think there’s no point in avoiding the timestamp anymore.
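
And a similar sketch for Proposal 2, assuming the #<nick url#timestamp> form (the exact shape is still up for discussion, so the regex and the example mention are just one possible interpretation):

import re

THREAD_MENTION = re.compile(r"#<(?:(\S+)\s+)?(\S+?)#(\S+?)>")

def parse_thread_mention(twt_text):
    m = THREAD_MENTION.search(twt_text)
    if not m:
        return None
    nick, url, timestamp = m.groups()    # nick is None for the #<url#timestamp> form
    return {"nick": nick, "url": url, "timestamp": timestamp}

print(parse_thread_mention("#<nick https://example.org/twtxt.txt#2025-12-01T12:00:00Z> nice!"))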

⤋ Read More