Searching yarn

Twts matching #twtxt.txt
In-reply-to » I'm wrong! Both 404 and 410, among others, are considered dead feeds: https://git.mills.io/yarnsocial/yarn/src/branch/main/internal/cache.go#L1343 Whatever that actually means.

@bender@twtxt.net I’m not a yarnd user, but automatically unfollowing on 404 doesn’t seem right. Besides @lyse@lyse.isobeef.org’s example, I could imagine just accidentally renaming my own twtxt file, or forgetting to push it when I point my DNS to a new web server. I’d rather not lose all my yarnd followers in a situation like that (and hopefully they feel the same).

In-reply-to » @prologic, does this ring a bell for you? 159-196-9-199.9fc409.mel.nbn.aussiebb.net

@bender@twtxt.net 404 could indeed be a temporary error if the file resides on a mounted remote filesystem and the mount point then fails for some reason. With a symlink from the web root to the file on the mount, the web server probably will not recognize the mount point failure as such. Thus, it might not reply with a 503 Service Unavailable (or something like that), but with 404 Not Found instead. (I could be wrong on that, though.)

The right™ way is to signal 410 Gone if the feed does not exist anymore and will not come back to life again. But that’s hard to come by in the wild. Somebody has to manually configure that in almost all situations.

But yes, as @falsifian@www.falsifian.org points out, exponential backoff looks like a good strategy. Probably even report a failure to users somehow, so they can check and potentially unsubscribe.
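
Just to sketch the general idea, here is a tiny, purely illustrative backoff loop for a single feed, assuming curl and made-up intervals; this is not how yarnd actually implements it.

```bash
#!/usr/bin/env bash
# Purely illustrative: poll one feed and back off exponentially on errors.
# The URL, the intervals and the unsubscribe rule are made-up examples.
url="https://example.com/twtxt.txt"
delay=300          # start at five minutes
max_delay=86400    # never wait longer than a day

while true; do
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    case "$status" in
        200) delay=300 ;;                            # healthy again, reset the backoff
        410) echo "feed is gone for good, unsubscribe"; break ;;
        *)   delay=$((delay * 2))                    # 404, 5xx, timeouts: retry later
             (( delay > max_delay )) && delay=$max_delay
             echo "fetch failed ($status), retrying in ${delay}s" ;;
    esac
    sleep "$delay"
done
```

The echo lines stand in for the “report a failure to users somehow” part mentioned above, rather than silently unfollowing.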

In-reply-to » The “Matrix Experiment”, i.e. running a Matrix server for our family, has failed completely and miserably. People don’t accept it. They attribute unrelated things to it, like “I can’t send messages to you, I don’t reach you! It doesn’t work!” Yes, you do, I get those messages, I just don’t reply quickly enough because I’m at work or simply doing something else.

@movq@www.uninformativ.de please no.

My wife’s mom nearly got her account fully taken over by some hacker. They were able to get control and change the password, but I was able to get it recovered before they could get the phone number reset. They sent messages to all her contacts asking them to send cash.

In-reply-to » The “Matrix Experiment”, i.e. running a Matrix server for our family, has failed completely and miserably. People don’t accept it. They attribute unrelated things to it, like “I can’t send messages to you, I don’t reach you! It doesn’t work!” Yes, you do, I get those messages, I just don’t reply quickly enough because I’m at work or simply doing something else.

@bender@twtxt.net Sigh. 🫤 Elon Musk should buy Meta. Problem solved. 🤣

In-reply-to » New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can genera ... ⌘ Read more

@prologic@twtxt.net The headline is interesting and sent me down a rabbit hole understanding what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.

The result is interesting, but the Neuroscience News headline greatly overstates it. If I’ve understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn’t quite as magically effective as people say — if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model’s performance, and they turn them off because they are not part of what is called “emergence”: “an ability to solve a task which is absent in smaller models, but present in LLMs”.

They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.

I’d love to hear more from someone more familiar with this stuff. (I’ve done research that touches on ML, but neural nets and especially LLMs aren’t my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?

In-reply-to » I love shell scripts because they’re so pragmatic and often allow me to get jobs done really quickly.

@movq@www.uninformativ.de Variable names used with -eq in [[ ]] are automatically expanded even without $ as explained in the “ARITHMETIC EVALUATION” section of the bash man page. Interesting. Trying this on OpenBSD’s ksh, it seems “set -u” doesn’t affect that substitution.
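
A minimal demo of that behaviour, with an arbitrarily chosen variable name:

```bash
#!/usr/bin/env bash
# With -eq inside [[ ... ]], both operands go through arithmetic evaluation,
# so a bare variable name is expanded even without the $ sign.
answer=42
[[ answer -eq 42 ]]  && echo "bare name was expanded"      # prints
[[ $answer -eq 42 ]] && echo "explicit \$ works the same"  # prints
```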

In-reply-to » If some of you budding fathers want to know how I created a computer nerd to one day work for Facebook in the big USA, well you purchase a $1000 Xmas present, an enormous thick book with C++ programming, and say, you can play as many games as you like kids, but James has to create them using computer software.

@movq@www.uninformativ.de If it still existed I bet the first thing he’d do is convert it to Golang 👌🤣

In-reply-to » @movq The success of large neural nets. People love to criticize today's LLMs and image models, but if you compare them to what we had before, the progress is astonishing.

@prologic@twtxt.net I don’t know what you mean when you call them stochastic parrots, or how you define understanding. It’s certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.

In-reply-to » @shreyan Haha my criteria is being inactive for over two years 🤣

@prologic@twtxt.net HAHA! Couldn’t say it better. I started abandoning mainstream social media as soon as it stopped feeling like connecting and sharing with other human beings and became an urge to feed an algorithm and hope for its blessing to get a glimpse of the human interaction it deems worthy of having.

In-reply-to » @aelaraji Ahh it might very well be a Clownflare thing as @lyse alluded to 🤣 One of these days I'm going to get off Clownflare myself, when I do I'll share it with you. My idea is to basically have a cheap VPS like @eldersnake has and use Wireguard to tunnel out. The VPS becomes the Reverse Proxy that faces the internet. My home network then has no inbound whatsoever.

@prologic@twtxt.net ‘Clownflare’ 🤣🤣🤣 Love it.

But yes, the idea of a cheap VPS as a tunnel and keeping the home network all local is a good one, I reckon.

In-reply-to » @lyse Ahh so it's not just me! 😅

@aelaraji@aelaraji.com Ahh it might very well be a Clownflare thing as @lyse@lyse.isobeef.org alluded to 🤣 One of these days I’m going to get off Clownflare myself, when I do I’ll share it with you. My idea is to basically have a cheap VPS like @eldersnake@we.loveprivacy.club has and use Wireguard to tunnel out. The VPS becomes the Reverse Proxy that faces the internet. My home network then has no inbound whatsoever.
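
In case it helps picture it, here is a rough sketch of that setup on the VPS side, with placeholder keys, addresses, domain and port (10.0.0.1 for the VPS, 10.0.0.2 for home, something listening on 8000 at home — all assumptions, adjust to taste):

```bash
#!/usr/bin/env bash
# Hypothetical sketch, run on the VPS: a WireGuard endpoint plus an nginx
# reverse proxy forwarding traffic down the tunnel to the home box.
# Keys, addresses, domain and port are all placeholders.

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# The home machine dials out to the VPS, so nothing is exposed at home.
[Peer]
PublicKey = <home-public-key>
AllowedIPs = 10.0.0.2/32
EOF

cat > /etc/nginx/conf.d/home.conf <<'EOF'
server {
    listen 80;
    server_name example.com;
    location / {
        # the service running at home, reached over the tunnel
        proxy_pass http://10.0.0.2:8000;
        proxy_set_header Host $host;
    }
}
EOF

wg-quick up wg0 && nginx -s reload
```

The home side gets the mirror wg0.conf with `Endpoint = <vps-ip>:51820` and `PersistentKeepalive = 25` so the tunnel survives NAT, and that outbound connection is the only one it ever makes.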
