Searching yarn

Twts matching #running
In-reply-to » Home | Tabby This is actually pretty cool and useful. Just tried this on my Mac locally of course and it seems to have quite good utility. What would be interesting for me would be to train it on my code and many projects 😅

Most of the ones you can run locally have such a small training set they aren't worth it. They're more like the Markov chains from the subreddit simulator days.

There is one called Orca that seems promising and will be released as OSS soon. It's benchmarking at numbers comparable to OpenAI's GPT-3.5.

https://youtube.com/watch?v=Dt_UNg7Mchg&feature=share9

In-reply-to » A GTK 4 application showing an empty window uses about 160 MB of RAM:

@movq@www.uninformativ.de If I understand it correctly, gtk4 renders using OpenGL. That means some of that RAM that appears to be allocated is actually some trick of the OpenGL driver so that it can map addresses in RAM space to the GPU’s VRAM (depends a lot on your setup, though).

What happens if you run it with GSK_RENDERER=cairo set?
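For reference, overriding the renderer is just a matter of setting that environment variable for the run, e.g. (assuming the ./win binary built in the post below):

$ GSK_RENDERER=cairo ./win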


A GTK 4 application showing an empty window uses about 160 MB of RAM:

$ wget https://movq.de/v/138ab3e622/win.c
$ cc -Wall -Wextra -o win win.c $(pkg-config --cflags --libs gtk4)
$ ./win

It also takes several seconds to start on my machine because it is compiling shaders and initializing DRI (it’s faster on the second run, unless you happen to lose ~/.cache/mesa_shader_cache/). This might be a hint as to why it’s using so much memory: There’s obviously much more going on behind the scenes these days, not just a little bit of internal housekeeping and then creating a window.
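For context, a minimal GTK 4 “empty window” program is presumably all win.c amounts to. Here is a sketch of what such a program typically looks like (this is a guess, not the actual file, and the application ID is a placeholder):

#include <gtk/gtk.h>

/* Show a single empty top-level window. */
static void activate(GtkApplication *app, gpointer user_data)
{
    (void)user_data;
    GtkWidget *win = gtk_application_window_new(app);
    gtk_window_set_default_size(GTK_WINDOW(win), 400, 300);
    gtk_window_present(GTK_WINDOW(win));
}

int main(int argc, char **argv)
{
    /* "org.example.win" is a placeholder application ID. */
    GtkApplication *app = gtk_application_new("org.example.win",
                                              G_APPLICATION_DEFAULT_FLAGS);
    g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
    int status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}

Nothing in a program like this asks for much on its own; whatever memory shows up comes from GTK, the renderer and the GL driver underneath.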

In-reply-to » Dear Stack Overflow, Inc.

Seems to me you could write a script that:

  • Parses a StackOverflow question
  • Runs it through an AI text generator
  • Posts the output as a post on StackOverflow

and basically pollutes the entire information ecosystem there in a matter of a few months. How long before some malicious actor does this? Maybe it’s being done already 🤷

What an asinine, short-sighted decision. An astonishing number of companies are actively reducing headcount because their executives believe they can use this newfangled AI stuff to replace people. But, like the dot com boom and subsequent bust, many of the companies going this direction are going to face serious problems when the hypefest dies down and the reality of what this tech can and can’t do sinks in.

We really, really need to stop trusting important stuff to corporations. They are not tooled to last.

In-reply-to » I played with nlpodyssey/verbaflow: Neural Language Model for Go a little bit today.... First I had to download a ~2GB file (the model), then convert that to a format the program verbaflow understands, which came out to roughly ~5GB. Then I tried some of the samples in the README. My god, this is so goddamn awfully slow it's like watching paint dry 😱 All just to predict the next few tokens?! 😳 I had a look at the resource utilisation as well while it was trying to do this "work": it was using 100% of 1.5 cores and ~10GB of memory 😳 Who da fuq actually thinks any of this large language model (LLM) and neural network crap is actually any good or useful? 🤔 It's just garbage 🤣

@prologic@twtxt.net You more or less need a data center to run one of these adequately (well, to train one…you can run a trained one with a little less hardware). I think that’s the idea: no one can run them locally, so they have to rent them (and we know how much SaaS companies and VCs love the rental model of computing).

There’s a lot of promising research-grade work being done right now to produce models that can be run on a human-scale (not data-center-scale) computing setup. I suspect those will become more commonly deployed in the next few years.


TornadoVM Continues Adapting Java OpenJDK/GraalVM For Heterogeneous Hardware
A new release of TornadoVM is now available, the open-source plug-in for OpenJDK and GraalVM that allows Java code to run on heterogeneous hardware with ease – including various GPU models as well as FPGAs… ⌘ Read more
