Seems to me you could write a script that:
- Parses a StackOverflow question
- Runs it through an AI text generator
- Posts the output as a post on StackOverflow
and basically pollute the entire information ecosystem there in a matter of a few months? How long before some malicious actor does this? Maybe it's being done already 🤷
What an asinine, short-sighted decision. An astonishing number of companies are actively reducing headcount because their executives believe they can use this newfangled AI stuff to replace people. But, like the dot com boom and subsequent bust, many of the companies going this direction are going to face serious problems when the hypefest dies down and the reality of what this tech can and can't do sinks in.
We really, really need to stop trusting important stuff to corporations. They are not tooled to last.
Stack Overflow is being inundated with AI-generated garbage. A group of 480+ human moderators is going on strike, because:
Specifically, moderators are no longer allowed to remove AI-generated answers on the basis of being AI-generated, outside of exceedingly narrow circumstances. This results in effectively permitting nearly all AI-generated answers to be freely posted, regardless of established community consensus on such content.
In turn, this allows incorrect information (colloquially referred to as "hallucinations") and plagiarism to proliferate unchecked on the platform. This destroys trust in the platform, as Stack Overflow, Inc. has previously noted.
It looks like StackOverflow Inc. is saying one thing to the public, and a very different thing to its moderators.
I played with nlpodyssey/verbaflow: Neural Language Model for Go a little bit today… First I had to download a ~2GB file (the model), then convert that to a format the program verbaflow understands, which came out to roughly ~5GB. Then I tried some of the samples in the README. My god, this is so goddamn awfully slow, it's like watching paint dry 😱 All just to predict the next few tokens?! 😳 I had a look at the resource utilisation as well while it was trying to do this "work": 100% of 1.5 cores and ~10GB of memory 😳 Who da fuq actually thinks any of this large language model (LLM) and neural network crap is actually any good or useful? 🤔 It's just garbage 🤣
@prologic@twtxt.net I should have posted the more recent one from May, but the rankings are still pretty similar and Go and Scala are still tied!
According to the RedMonk programming language rankings from Jan 2023, Go and Scala are tied at 14th place.
1 JavaScript
2 Python
3 Java
4 PHP
5 C#
6 CSS
7 TypeScript
7 C++
9 Ruby
10 C
11 Swift
12 Shell
12 R
14 Go
14 Scala
16 Objective-C
17 Kotlin
18 PowerShell
19 Rust
19 Dart

@lyse@lyse.isobeef.org I knew from the get-go it was going to be an annoying thing to track down, which it was, but it took even longer because I avoided trying.
@shreyan@twtxt.net I agree re: AR. Vircadia is neat. I stumbled on it years ago when I randomly started wondering "wonder what's going on with Second Life and those VR things" and started googling around.
Unfortunately, like so many metaverse efforts, it's almost devoid of life. Interesting worlds to explore, cool tools to build your own stuff, but almost no people in it. It feels depressing, like an abandoned shopping mall.
@prologic@twtxt.net I think those headsets were not particularly usable for things like web browsing because the resolution was too low, something like 1080p if I recall correctly. A very small screen at that resolution close to your eye is going to look grainy. You'd need 4k at least, I think, before you could realistically have text and stuff like that be zoomable and readable for low vision people. The hardware isn't quite there yet, and the headsets that can do that kind of resolution are extremely expensive.
But yeah, even so I can imagine the metaverse wouldn't be very helpful for low vision people as things stand today, even with higher resolution. I've played VR games and that was fine, but I've never tried to do work of any kind.
I guess where I'm coming from is that even though I'm low vision, I can work effectively on a modern OS because of the accessibility features. I also do a lot of crap like take pictures of things with my smartphone then zoom into the picture to see detail (like words on street signs) that my eyes can't see normally. That feels very much like rudimentary augmented reality that an appropriately-designed headset could mostly automate. VR/AR/metaverse isn't there yet, but it seems at least possible for the hardware and software to develop accessibility features that would make it workable for low vision people.
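The "too grainy at 1080p" intuition can be put into rough numbers with an angular-resolution calculation. This is a back-of-envelope sketch assuming a ~100° horizontal field of view, which is an assumption, not the spec of any particular headset:

```go
package main

import "fmt"

// pixelsPerDegree estimates angular resolution: horizontal pixels
// per eye spread across the horizontal field of view.
func pixelsPerDegree(horizontalPixels, fovDegrees float64) float64 {
	return horizontalPixels / fovDegrees
}

func main() {
	fov := 100.0 // assumed horizontal FOV in degrees; varies per headset
	fmt.Printf("1080p-class: %.1f px/deg\n", pixelsPerDegree(1920, fov))
	fmt.Printf("4k-class:    %.1f px/deg\n", pixelsPerDegree(3840, fov))
}
```

At roughly 19 px/degree, small text smears badly; 20/20 vision resolves on the order of 60 px/degree, which is why even 4k-per-eye panels remain marginal for reading.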
@prologic@twtxt.net hmm, dunno about the recency of that line of thought. I suspect though that given his (recent or not) history, if someone directly asked him "do you support rape" he would not say "no", he'd go on one of these rambling answers about property crime like he did in the video. Maybe I'm mind poisoned by being around academics my whole career, but that way of talking is how an academic gives you an answer they know will be unpopular. PhD = Piled Higher and Deeper, after all, right? In other words, if he doesn't say "no" right away, he's saying "yes", except with so many words there's some uncertainty about whether he actually meant yes. And he damn well knows that, and that's why I give him no slack.
There are people in academia who believe adult men should be able to have sex with children, legally, too. They use the same manner of talking about it that Peterson uses. We need to stop tolerating this, and draw hard red lines. No, that's bad, no matter how many words you use to say it. No, don't express doubts about it, because that provides justification and talking points to the people who actually carry out the acts.
@xuu@txt.sour.is LOL omfg.
This is the absurd logical endpoint of free market fundamentalism. "The market will fix everything!" Including, apparently, encroaching floodwater.
I do have to say though, after spending awhile looking at houses, that there are a crapton of homes for sale for very high prices (>$1 million) in coastal areas NASA is more or less telling us will be underwater in the next few decades. I don't get how a house that's going to be underwater soon is worth $1 million, but then I've never been a free market fundamentalist either, so 🤷 Maybe they're all watertight.
@prologic@twtxt.net Maybe so, but that's not because of the people who are objecting to Jordan Peterson, that's for sure. You really need to read the articles I've posted before going there. Really.
@prologic@twtxt.net Because they are rightwing assholes with a huge platform and they are literally HURTING PEOPLE. People get attacked because of things people like Shapiro and Peterson say. This is not just idle chitchat over coffee. They are saying things like it's OK to rape women (and NO, I am not going to dig out the videos where they say that; that's up to YOU to do. Do your own homework before defending these ghouls).
@prologic@twtxt.net I've read half, skimmed the others. Mostly I was going for scale: look at all those headlines. These are horrible people who say horrible things on a regular basis.
Do they legitimately believe that end users will encounter videos of gruesome murders, live streams of school shootings, etc etc etc, and be like "oh, tee hee hee, that's not what I want to see! I'd better block that!" and go about their business as usual?
No, they can't possibly be that foolish. They are going to be doing some amount of content moderation. Just not of Nazis, fascists, or far right reactionaries. Which to me means they want that content on there.
I've seen BlueSky referred to as BS (as in Blue Sky, but you know…), which seems apt.
CEO is a cryptocurrency fool, as is Jack Dorsey, so I don't expect much from it. Then again I'm old and refuse to join any new hotness, so take my curmudgeonly opinions with a grain of salt.
I read somewhere or another that the "decentralization" is only going to be there so that they can push content moderation onto users. They will happily welcome Nazis and fascists, leaving it up to end users to block those instances.
I wonder how they plan to handle the 4chan-level stuff, since that will surely come.
College Knowledge
⌘ Read more
How do I quit getting error 400 when I go to reply to anything? @prologic@twtxt.net ???
@prologic@twtxt.net @carsten@yarn.zn80.net
There is (I assure you there will be, don't know what it is yet…) a price to be paid for this convenience.
Exactly, prologic, and that's why I'm negative about these sorts of things. I'm almost 50, I've been around this tech hype cycle a bunch of times. Look at what happened with Facebook. When it first appeared, people loved it and signed up and shared incredibly detailed information about themselves on it. Facebook made it very easy and convenient for almost anyone, even people who had limited understanding of the internet or computers, to get connected with their friends and family. And now here we are today, where 80% of people in surveys say they don't trust Facebook with their private data, where they think Facebook commits crimes and should be broken up or at least taken to task in a big way, etc etc etc. Facebook has been fined many billions of dollars and faces endless federal lawsuits in the US alone for its horrible practices. Yet Facebook is still exploitative. It's a societal cancer.
All signs suggest this generative AI stuff is going to go exactly the same way. That is the inevitable course of these things in the present climate, because the tech sector is largely run by sociopathic billionaires, because the tech sector is not regulated in any meaningful way, and because the tech press / tech media has no scruples. Some new tech thing generates hype, people get excited and sign up to use it, then when the people who own the tech think they have a critical mass of users, they clamp everything down and start doing whatever it is they wanted to do from the start. They'll break laws, steal your shit, cause mass suffering, who knows what. They won't stop until they are stopped by mass protest from us, and the government action that follows.
That's a huge price to pay for a little bit of convenience, a price we pay and continue to pay for decades. We all know better by now. Why do we keep doing this to ourselves? It doesn't make sense. It's insane.
@prologic@twtxt.net @carsten@yarn.zn80.net
(1) You go to the store and buy a microwave pizza. You go home, put it in the microwave, heat it up. Maybe it's not quite the way you like it, so you put some red pepper on it, maybe some oregano.
Are you a pizza chef? No. Do we know what your cooking is like? Also no.
(2) You create a prompt for StableDiffusion to make a picture of an elephant. What pops out isn't quite to your liking. You adjust the prompt, tweak it a bunch, till the elephant looks pretty cool.
Are you an artist? No. Do we know what your art is like? Also no.
The elephant is "fake art" in a similar sense to how a microwave pizza is "fake pizza". That's what I meant by that word. The microwave pizza is a sort of "simulation of pizza", in this sense. The generated elephant picture is a simulation of art, in a similar sense, though it's even worse than that and is probably more of a simulacrum of art, since you can't "consume" an AI-generated image the way you "consume" art.
ChatGPT and Elasticsearch: OpenAI meets private data | Elastic Blog
Terrifying. Elasticsearch is celebrating that they're going to send your private data to OpenAI? No way.
Escape Speed
⌘ Read more
@prologic@twtxt.net yeah. I'd add "Big Data" to that hype list, and I'm sure there are a bunch more that I'm forgetting.
On the topic of a GPU cluster, the optimal design is going to depend a lot on what workloads you intend to run on it. The weakest link in these things is the data transfer rate, but that won't matter too much for compute-heavy workloads. If your workloads are going to involve a lot of data, though, you'd be better off with a smaller number of high-VRAM cards than with a larger number of interconnected cards. I guess that's hardware engineering 101 stuff, but still…
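The transfer-versus-compute tradeoff above can be illustrated with a toy model: a workload is interconnect-bound when moving its data takes longer than computing on it. The bandwidth and FLOP figures below are illustrative assumptions, not benchmarks of any real hardware:

```go
package main

import "fmt"

// transferSeconds models time to move data over an interconnect.
func transferSeconds(bytes, bytesPerSec float64) float64 { return bytes / bytesPerSec }

// computeSeconds models time to perform a number of floating-point ops.
func computeSeconds(flops, flopsPerSec float64) float64 { return flops / flopsPerSec }

func main() {
	const gb = 1e9
	bus := 16 * gb // assumed effective interconnect bandwidth, ~16 GB/s
	gpu := 30e12   // assumed usable compute, ~30 TFLOP/s

	// Data-heavy case: 10 GB of data, only ~100 FLOPs per byte.
	t := transferSeconds(10*gb, bus)
	c := computeSeconds(10*gb*100, gpu)
	fmt.Printf("transfer %.3fs vs compute %.3fs\n", t, c)
	// Transfer dominates by an order of magnitude here, which is why
	// fewer high-VRAM cards beat many interconnected ones for such loads.
}
```

For compute-heavy workloads (thousands of FLOPs per byte), the inequality flips and the interconnect stops mattering, matching the point above.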
On LinkedIn I see a lot of posts aimed at software developers along the lines of "If you're not using these AI tools (X, Y, Z) you're going to be left behind."
Two things about that:
- No you're not. If you have good soft skills (good communication, show up on time, general time management) then you're already in excellent shape. No AI can do that stuff, and for that alone no AI can replace people
- This rhetoric is coming directly from the billionaires who are laying off tech people by the 100s of thousands as part of the class war they've been conducting against all working people since the 1940s. They want you to believe that you have to scramble and claw over one another to learn the "AI" that they're forcing onto the world, so that you stop honing the skills that matter (see #1) and are easier to obsolete later. Don't fall for it. It's far from clear how this will shake out once governments get off their asses and start regulating this stuff, by the way: most of these "AI" tools are blatantly breaking copyright and other IP laws, and some day that'll catch up with them.
That said, it is helpful to know thy enemy.
I played around with parsers. This time I experimented with parser combinators for twt message text tokenization. Basically, extract mentions, subjects, URLs, media and regular text. It's kinda nice, although my solution is not completely elegant, I have to say. Especially my communication protocol between different steps for intermediate results is really ugly. Not sure about performance; I reckon a hand-written state machine parser would be quite a bit faster. I need to write a second parser and then benchmark them.
lexer.go and newparser.go resemble the parser combinators: https://git.isobeef.org/lyse/tt2/-/commit/4d481acad0213771fe5804917576388f51c340c0 It's far from finished yet.
The first attempt in parser.go doesn't work, as my backtracking is not accounted for; I only noticed later that I have to do that. With twt message texts there is no real parse error, just regular text as a "fallback". So it works a bit differently than parsing a real language. No error reporting required, except maybe for debugging. My goal was to port my Python code as closely as possible. But then the runes in the string gave me a bit of a headache, so I thought I'd just build myself a nice reader abstraction. When I noticed the missing backtracking, I then decided to give parser combinators a try instead of improving on my look-ahead reader. It only later occurred to me that I could have just used a rune slice instead of a string. With that, porting the Python code would have been straightforward.
Yeah, all this probably doesn't make sense unless you look at the code. And even then, you have to learn the ropes a bit. Sorry for the noise. :-)
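For readers who haven't met parser combinators, here is a minimal, hypothetical sketch of the idea applied to twt text; this is not lyse's actual code, just an illustration: tiny parsers that each recognize one token kind, plus a combinator that composes them, with plain words as the fallback so parsing never truly fails:

```go
package main

import (
	"fmt"
	"strings"
)

// A parser consumes a token from the front of the input and returns
// the parsed value, the remaining input, and whether it matched.
type parser func(in string) (val, rest string, ok bool)

// mention parses an "@nick" token at the start of the input.
func mention(in string) (string, string, bool) {
	if !strings.HasPrefix(in, "@") {
		return "", in, false
	}
	end := 1
	for end < len(in) && in[end] != ' ' {
		end++
	}
	return in[:end], in[end:], true
}

// word parses any run of non-space characters as regular text;
// this is the "fallback" mentioned above.
func word(in string) (string, string, bool) {
	if in == "" {
		return "", in, false
	}
	end := 0
	for end < len(in) && in[end] != ' ' {
		end++
	}
	return in[:end], in[end:], true
}

// firstOf is a combinator: try parsers in order, keep the first match.
func firstOf(ps ...parser) parser {
	return func(in string) (string, string, bool) {
		for _, p := range ps {
			if v, rest, ok := p(in); ok {
				return v, rest, true
			}
		}
		return "", in, false
	}
}

// tokenize repeatedly applies the combined parser to split a twt
// into tokens (mentions first, plain words as fallback).
func tokenize(text string) []string {
	p := firstOf(mention, word)
	var toks []string
	for {
		text = strings.TrimLeft(text, " ")
		if text == "" {
			return toks
		}
		v, rest, ok := p(text)
		if !ok {
			return toks
		}
		toks = append(toks, v)
		text = rest
	}
}

func main() {
	fmt.Println(tokenize("@lyse nice parser combinators"))
}
```

A real tokenizer would add parsers for subjects, URLs and media and operate on runes rather than bytes, but the composition pattern stays the same.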
go mills()
@chunkimo@twtxt.net lol. go walrus!!
slides/go-generics.md at main - slides - Mills ⌘ I'm presenting this tomorrow at work, something I do every Wednesday to teach colleagues about Go concepts, aptly called go mills()
e-scooters go like the clappers
Qualifications
⌘ Read more
In case you didn't notice, I deleted my Twitter and Keybase accounts. Going full indieweb.
Flatten the Planets
⌘ Read more
Lymphocytes
⌘ Read more
@prologic@twtxt.net it is from the generator. But in the actual Go implementation, methods are represented with an unsigned short, so 65k is the hard limit in Go.
Oof.

@prologic@twtxt.net I get the worry about privacy. But I think there is some value in the data being collected. Do I think that Russ is up there scheming new ways to discover what packages you use in internal projects for targeting ads? Probably not.
Go has always been driven by usage data. Look at modules. There was a need for repeatable builds, so various package tool chains were made and evolved into what we have today. Generics took time and required seeing the pain points where they would provide value. They weren't done just so they could be checked off on a list of features. Some languages seem to do that to the extreme.
Whenever changes are made to the language there are extensive searches across public modules for where the change might cause issues or could be improved with the change. The fs embed and strings.Cut come to mind.
I think it's good that the language maintainers are using what metrics they have to guide where to focus time and energy. Some of the other languages could use that. So time and effort isn't wasted on maintaining something that has little impact.
The economics of the "spying" are to improve the product and ecosystem. Is it "spying" when a municipality uses water usage metrics in neighborhoods to forecast the need for new water projects? Or is it to discover your shower habits for nefarious reasons?
@prologic@twtxt.net the rm -rf is basically what go clean -modcache does.
I think you can use another form that will remove just the deps for a specific module. go clean -r
Any good ideas on how to maintain ~/go/pkg/mod and to remove old garbage?
What's with all these tech companies going through massive layoffs? The latest one is Intel, except they're cutting salaries instead, to avoid laying people off.
@eldersnake@we.loveprivacy.club Several reasons:
- It's another language to learn (SQL)
- It adds another dependency to your system
- It's another failure mode (database blows up, schema changes, indexes, etc.)
- It increases security problems (now you have to worry about being SQL-safe)
And most of all, in my experience, it doesn't actually solve any problems that a good key/value store with good indexes and good data structures can't solve. I'm just no longer a fan. I used to use MySQL, SQLite, etc back in the day; these days, nope, I wouldn't even go anywhere near a database (for my own projects) if I can help it. It's just another thing that can fail, another operational overhead.
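As a toy illustration of the key/value-plus-indexes point: a secondary index is just another map kept in sync on writes. This is a deliberately minimal in-memory sketch (types and names invented for illustration), not a recommendation of any particular store:

```go
package main

import "fmt"

// User is the stored value.
type User struct {
	ID    string
	Email string
}

// Store is a tiny in-memory key/value store with a secondary index
// on Email, standing in for what a SQL table plus index provides.
type Store struct {
	byID    map[string]User
	byEmail map[string]string // email -> ID (secondary index)
}

func NewStore() *Store {
	return &Store{byID: map[string]User{}, byEmail: map[string]string{}}
}

// Put writes the value and keeps the secondary index in sync.
func (s *Store) Put(u User) {
	s.byID[u.ID] = u
	s.byEmail[u.Email] = u.ID
}

// GetByEmail resolves the index first, then the primary key,
// i.e. an "indexed lookup" in two map reads.
func (s *Store) GetByEmail(email string) (User, bool) {
	id, ok := s.byEmail[email]
	if !ok {
		return User{}, false
	}
	u, ok := s.byID[id]
	return u, ok
}

func main() {
	s := NewStore()
	s.Put(User{ID: "1", Email: "kael@example.com"})
	u, ok := s.GetByEmail("kael@example.com")
	fmt.Println(u.ID, ok)
}
```

The operational argument above follows from this: both reads and writes stay in your process, with no schema migrations or separate service to fail.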
@prologic@twtxt.net @movq@www.uninformativ.de this is the default behavior of pass on my machine:

I add a new password entry named example and then type pass example. The password I chose, "test", is displayed in cleartext. This is very bad default behavior. I don't know about the other CLIs you both mentioned but I'll check them out.
The browser plugin browserpass does the same kind of thing, though I have already removed it and I'm not going to reinstall it to make a movie. Next to each credential there's an icon to copy the username to the clipboard, an icon to copy the password to the clipboard, and then an icon to view details, which shows you everything, including the password, in cleartext. The screencap in the Chrome store is out of date; it doesn't show the offending link to show all details, which I know is there because I literally installed it today and played with it.
@mckinley@twtxt.net Very weird things going on for me... I can see your twt, but it's not showing up as a reply or fork?
@prologic@twtxt.net See where it's used; maybe that can help.
https://github.com/sour-is/ev/blob/main/app/peerfinder/http.go#L153
This is an upsert. So I pass a streamID which is like a globally unique id for the object. And then see how the type of the parameter in the function is used to infer the generic type. In the function it will create a new *Info and populate it from the datastore to pass to the function. The func will do its modifications and if it returns a nil error it will commit the changes.
The PA type contract ensures that the type fulfills the Aggregate interface and is a pointer to type at compile time.
One that I think is pretty interesting is building up dependent constraints. See here: it accepts a type but requires the use of a pointer to that type.
https://github.com/sour-is/ev/blob/main/pkg/es/es.go#L315-L325
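The "accepts a type but requires a pointer to that type" constraint can be sketched as follows. The names here are hypothetical stand-ins that mirror the pattern, not the actual sour-is/ev API:

```go
package main

import "fmt"

// Aggregate is a stand-in for the event-store interface.
type Aggregate interface {
	ApplyEvent(name string)
}

// PA constrains PT to be *T and to implement Aggregate, so a generic
// function can allocate a T itself yet still call interface methods
// on the pointer. This is checked entirely at compile time.
type PA[T any] interface {
	*T
	Aggregate
}

// Upsert allocates a zero T, hands the pointer to fn to modify, and
// returns it on success, loosely mirroring the upsert flow described.
func Upsert[T any, PT PA[T]](fn func(agg PT) error) (PT, error) {
	var t T
	agg := PT(&t)
	if err := fn(agg); err != nil {
		return nil, err
	}
	return agg, nil
}

// Info plays the role of the *Info aggregate in the example.
type Info struct{ Events []string }

func (i *Info) ApplyEvent(name string) { i.Events = append(i.Events, name) }

func main() {
	// T (= Info) is inferred from the func parameter type *Info.
	info, err := Upsert(func(i *Info) error {
		i.ApplyEvent("peer-seen")
		return nil
	})
	fmt.Println(info.Events, err)
}
```

The nice property is exactly what xuu describes: callers pass an ordinary closure over `*Info`, and the compiler infers both the value type and its pointer type while enforcing the Aggregate contract.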
I learned how to make gopls syntax highlight go templates in VSCodium.
By adding the following to my config

I could go from
into 
Data Point
⌘ Read more
Tutorial: Getting started with generics - The Go Programming Language ⌘ Okay @xuu@txt.sour.is I quite like Go's generics now 🤣 After going through this myself I like the semantics and the syntax. I'm glad they did a lot of work on this to keep it simple to both understand and use (just like the rest of Go)
#GoLang #Generics
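From memory, the tutorial builds up to roughly this shape: a union constraint plus one generic function replacing separate int64 and float64 versions. A condensed sketch:

```go
package main

import "fmt"

// Number mirrors the kind of union constraint the tutorial builds:
// any type whose underlying type is int64 or float64.
type Number interface {
	~int64 | ~float64
}

// SumNumbers adds the values of any map with Number values; one
// function replacing two near-identical non-generic ones.
func SumNumbers[K comparable, V Number](m map[K]V) V {
	var s V
	for _, v := range m {
		s += v
	}
	return s
}

func main() {
	ints := map[string]int64{"a": 1, "b": 2}
	floats := map[string]float64{"a": 1.5, "b": 2.5}
	// Type arguments are inferred from the map arguments.
	fmt.Println(SumNumbers(ints), SumNumbers(floats))
}
```

The inference in `main` is a good example of the "simple to use" point: no explicit type arguments needed at the call sites.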
ChatGPT is good, but it's not that good 🤣 I asked it to write a program in Go that performs double ratcheting and, well, the code is total garbage. It's only as good as the inputs it was trained on 🤣 #OpenAI #GPT3
I started reading the proposal to introduce operator overloading in Go version 2 that I'd like to see: https://github.com/golang/go/issues/27605 Now a few hours later I ended up at this gem. Write a program that makes 2+2=5: https://codegolf.stackexchange.com/questions/28786/write-a-program-that-makes-2-2-5 There are some awesome solutions. :-)
$name$ and then dispatch the hashing or checking to its specific format.
Circling back to the IsPreferred method. A hasher can define its own IsPreferred method that will be called to check if the current hash meets the complexity requirements. This is good for updating the password hashes to be more secure over time.
func (p *Passwd) IsPreferred(hash string) bool {
	_, algo := p.getAlgo(hash)
	if algo != nil && algo == p.d {
		// if the algorithm defines its own check for preference.
		if ck, ok := algo.(interface{ IsPreferred(string) bool }); ok {
			return ck.IsPreferred(hash)
		}
		return true
	}
	return false
}
https://github.com/sour-is/go-passwd/blob/main/passwd.go#L62-L74
example: https://github.com/sour-is/go-passwd/blob/main/pkg/argon2/argon2.go#L104-L133
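One way this pattern pays off: a login path can use the optional IsPreferred capability to transparently upgrade stale hashes. A hypothetical, self-contained sketch; the interface names and the toy `plain` algorithm are invented for illustration and are not the go-passwd API:

```go
package main

import "fmt"

// Hasher is a minimal stand-in for one registered hash algorithm.
type Hasher interface {
	Verify(pass, hash string) bool
	Hash(pass string) string
}

// preferrer is the optional capability, detected via interface
// assertion just like the IsPreferred snippet above.
type preferrer interface{ IsPreferred(hash string) bool }

// loginAndMaybeRehash verifies a password and, when the stored hash
// is no longer preferred, returns an upgraded hash to persist.
func loginAndMaybeRehash(h Hasher, pass, stored string) (string, bool) {
	if !h.Verify(pass, stored) {
		return "", false
	}
	if p, ok := h.(preferrer); ok && !p.IsPreferred(stored) {
		return h.Hash(pass), true // caller saves the new hash
	}
	return stored, true
}

// plain is a toy algorithm that always asks to be upgraded.
type plain struct{}

func (plain) Hash(pass string) string      { return "plain$" + pass }
func (plain) Verify(pass, h string) bool   { return h == "plain$"+pass }
func (plain) IsPreferred(hash string) bool { return false }

func main() {
	nh, ok := loginAndMaybeRehash(plain{}, "hunter2", "plain$hunter2")
	fmt.Println(nh, ok)
}
```

Because the capability is probed with a type assertion, algorithms that don't implement IsPreferred simply skip the rehash step, which is what makes gradual hardening of hashes over time cheap to adopt.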
$name$ and then dispatch the hashing or checking to its specific format.
Hold up now, that example hash doesn't have a $prefix!
Well, for this there is the option for a hash type to set itself as a fallthrough if a matching hash doesn't exist. This is good for legacy password types that don't follow the convention.
func (p *plainPasswd) ApplyPasswd(passwd *passwd.Passwd) {
	passwd.Register("plain", p)
	passwd.SetFallthrough(p)
}
https://github.com/sour-is/go-passwd/blob/main/passwd_test.go#L28-L31