Donjons et Chenapans: a free (libre and gratis) role-playing game for children aged 4 and up. We're going to have a blast this afternoon: https://gusandco.net/wp-content/uploads/2022/04/donjons_et_chenapans-1.pdf #jdr
More data contradicting the existence of “echo chambers”. As I’ve argued many times before, the concept of an echo chamber or information bubble is not real. The podcast below is an interview with an author of a study where they actually intervened and changed the information diet of 20,000 people (with consent!), then surveyed them after three months. They observed essentially no changes to the study subjects’ beliefs and attitudes. They also observed that the typical person, while they tend to gravitate towards people with similar political leanings, only gets about 50% of their content from such like-minded people. The rest comes from neutral sources, with maybe 20% from non-like-minded people.
Varied information diet + No change in attitudes when information diet is forced to be different = no echo chamber.
- It’s criminal: Copilot was only possible because of massive theft of other people’s work (no compensation for, or even acknowledgement of, any of the developers whose code was used to create Copilot)
- It’s positioned to put software developers out of work or so fully de-skill them that they no longer know how to code anything but prompts (after which come corporate-justified salary and benefits decreases)
Don’t use it. No one should ever use it. You’re destroying your own future as a software developer by leaning on and supporting these things.
Experts warn ‘green growth’ in high income countries is not happening, call for ‘post-growth’ climate policies
The emission reductions in the 11 high-income countries that have “decoupled” CO2 emissions from Gross Domestic Product (GDP) fall far short of the reductions that are necessary to limit global warming to 1.5°C or even just to “well below 2°C” and comply with international fairness principles, as required by the Paris Agreement, according to a paper published in The Lancet Planetary Health j … ⌘ Read more
10 - Miquiztli (1) Ollin, 11 - Acatl 13.0.10.15.6
podman works with TLS. It does not have the "--docker" switch, so you have to remove that and use the exact replacement commands that were in that GitHub comment.
@prologic@twtxt.net Change your script to this:
#!/bin/sh
set -e
alias docker=podman  # note: aliases may not be expanded in non-interactive shells
if ! command -v podman > /dev/null 2>&1; then
    echo "podman not found"
    exit 1
fi
mkdir -p "$HOME/.docker/certs.d/cas"
## key stuff omitted
# DO NOT DO THIS docker context create cas --docker "host=tcp://cas.run:2376,ca=$HOME/.docker/certs.d/cas/ca.pem,key=$HOME/.docker/certs.d/cas/key.pem,cert=$HOME/.docker/certs.d/cas/cert.pem"
# DO THIS:
podman system connection add cas "host=tcp://cas.run:2376,ca=$HOME/.docker/certs.d/cas/ca.pem,key=$HOME/.docker/certs.d/cas/key.pem,cert=$HOME/.docker/certs.d/cas/cert.pem"
# DO NOT DO THIS docker context use cas
# DO THIS:
podman system connection default cas
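Once the default connection is set, later remote invocations should go through it. This is just a generic sanity check using standard podman subcommands, not something from that GitHub comment:
# List configured connections and confirm "cas" is the default:
podman system connection list
# Remote invocations now use the default connection:
podman --remote ps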
@prologic@twtxt.net what do you mean when you say “Docker API”? There are multiple possible meanings for that. podman conforms to some of Docker’s APIs, and it’s unclear to me which one you’re saying it isn’t conforming to.
You just have to Google “podman Docker API” and you find stuff like this: https://www.redhat.com/sysadmin/podman-rest-api
What is Podman’s REST API? Podman’s REST API consists of two components:
- A Docker-compatible portion called Compat API
- A native portion called Libpod API that provides access to additional features not available in Docker, including pods
Or this: https://docs.podman.io/en/latest/markdown/podman-system-service.1.html
The REST API provided by podman system service is split into two parts: a compatibility layer offering support for the Docker v1.40 API, and a Podman-native Libpod layer.
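As a quick illustration of those two halves (a sketch, not taken from the linked docs; the socket path here is just an example):
# Serve the REST API on a local socket; --time=0 keeps it running indefinitely.
podman system service --time=0 unix:///tmp/podman.sock &
# Hit the Docker-compatible ("Compat") layer; the hostname ("d") after the socket is a dummy and is ignored.
curl --unix-socket /tmp/podman.sock http://d/v1.40/containers/json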
@prologic@twtxt.net FWIW, I pay a little under 3€/month for a VPS with 1 vCPU, 2 GB RAM, 20 GB disk, 40 TB traffic. 🤔
@New_scientist@feeds.twtxt.net hello @prologic@twtxt.net here’s another feed that’s spewing multiple copies of the same post. This one above is repeated 8 times. @awesome-scala-weekly@feeds.twtxt.net now has 13 copies of each post every week. This definitely looks like a bug in whatever code is generating these feeds, because the source feeds don’t have multiple copies of the original posts:
- Has 8 copies of the above post: https://feeds.twtxt.net/New_scientist/twtxt.txt
- Has only 1 copy of the above post: https://www.newscientist.com/feed/home/
I forget whether I filed an issue on this before, but can you tell me where I should do that?
Release jq 1.7rc1 · jqlang/jq · GitHub
Renewed activity on jq after five years. This RC looks nice!
user/bmallred/data/2023-07-31-15-34-43.fit: 1.02 miles, 00:10:08 average pace, 00:10:20 duration
user/bmallred/data/2023-07-30-16-22-31.fit: 1.70 miles, 00:08:56 average pace, 00:15:09 duration
Show HN: A Python Job Board for Python Developers
Article URL: https://www.pycareer.io
Comments URL: https://news.ycombinator.com/item?id=36860953
Points: 574
# Comments: 1 ⌘ Read more
Russia attacks 200 m from the NATO border. Do you think NATO would react if the attack went 1 meter over the border? Or would they just come up with excuses for why not to intervene?
@xuu@txt.sour.is ah, well, I think it’s on 1.0.x now, but it picked up IPv6 support in 0.10.x
What I see here is that when I was reading your .txt, the timestamp was about 40 minutes later than the current time. Say it’s 1:00pm and that twt is timestamped 1:40pm.
No idea why; perhaps your server has the wrong timezone, or your twtxt tool is doing some timezone conversion?
user/bmallred/data/2023-07-04-05-12-13.fit: 1.10 miles, 00:10:28 average pace, 00:11:33 duration
user/bmallred/data/2023-07-02-06-15-08.fit: 1.24 miles, 00:09:30 average pace, 00:11:47 duration
1: Thinking that everything is dangerous. 2: Thinking you are in charge of everything. 3: High self-esteem. 4: Looking for things to make a song and dance out of. These 4 things are a dangerous combination.
@movq@www.uninformativ.de
Doesn’t even compile on my system, which is apparently broken:
> cc -Wall -Wextra -o win win.c $(pkg-config --cflags --libs gtk4)
cc: error: unrecognized argument in option ‘-mfpmath=sse -msse -msse2 -pthread -I/usr/include/gtk-4.0 -I/usr/include/gio-unix-2.0 -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/fribidi -I/usr/include/harfbuzz -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/x86_64-linux-gnu -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/graphene-1.0 -I/usr/lib/x86_64-linux-gnu/graphene-1.0/include -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lgtk-4 -lpangocairo-1.0 -lpango-1.0 -lharfbuzz -lgdk_pixbuf-2.0 -lcairo-gobject -lcairo -lgraphene-1.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0’
cc: note: valid arguments to ‘-mfpmath=’ are: 387 387+sse 387,sse both sse sse+387 sse,387
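Just a guess, but that error looks like the entire pkg-config output reached cc as a single argument (the whole flag string appears inside one ‘-mfpmath=’ option). A minimal way to rule out quoting or word-splitting as the cause, assuming a POSIX shell:
# Hypothetical check: capture the flags, then pass them unquoted so the shell
# splits them into separate arguments before cc sees them.
GTK_FLAGS=$(pkg-config --cflags --libs gtk4)
cc -Wall -Wextra -o win win.c $GTK_FLAGS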
I played with nlpodyssey/verbaflow: Neural Language Model for Go a little bit today…. First I had to download a ~2GB file (the model), then convert that to a format the program verbaflow understands, which came out to roughly ~5GB. Then I tried some of the samples in the README. My god, this thing is so goddamn awfully slow, it’s like watching paint dry 😱 All just to predict the next few tokens?! 😳 I had a look at the resource utilisation while it was trying to do this “work”: it was using 100% of 1.5 cores and ~10GB of memory 😳 Who da fuq actually thinks any of this large language model (LLM) and neural network crap is actually any good or useful? 🤔 It’s just garbage 🤣
Physical Quantities
⌘ Read more
Bell Witch released a new album/song recently. I nominate this as “soundtrack of the apocalypse”. 🤘 // Bell Witch - Future’s Shadow Part 1: The Clandestine Gate // https://www.youtube.com/watch?v=Mg8TLge8gUU #NowPlaying
📣 Outage Notification: On Tuesday 23rd May 2023 between 7.30am to 5pm, there will be an outage of undefined length with no known start time due to planned power meter upgrades on the premises by the energy company.
You know, it’s one of those things where they give you a ~12hr window 🤦‍♂️ I will post here again once the technician is on-site, and then power down. I will power back up as soon as the work is complete.
According to the information I’ve received, the outage should be no more than ~1-2hrs.
Apologies for any inconvenience 🤗
Still undecided between TiddlyWiki, DokuWiki, Bear, Benotes, Memos, my blog software, standardnotes, Apple Notes and more. I like them all quite a bit, but standardnotes, the only one that is really multi-platform, is so fucking complicated to host on your own, and then they have this stupid offline subscription thing that allows rich text or the block editor that works like Notion. I also found Codex Docs, which is really, really nice. Unfortunately they lack proper authentication. 1 / 2
According to the RedMonk programming language rankings from Jan 2023, Go and Scala are tied at 14th place 😏
1 JavaScript
2 Python
3 Java
4 PHP
5 C#
6 CSS
7 TypeScript
7 C++
9 Ruby
10 C
11 Swift
12 Shell
12 R
14 Go
14 Scala
16 Objective-C
17 Kotlin
18 PowerShell
19 Rust
19 Dart
user/bmallred/data/2023-05-17-09-14-01.fit: 1.03 miles, 00:10:13 average pace, 00:10:34 duration
TornadoVM Continues Adapting Java OpenJDK/GraalVM For Heterogeneous Hardware
A new release of TornadoVM is now available, the open-source plug-in to OpenJDK and GraalVM to allow for Java code to run on heterogeneous hardware with ease – including various GPU models as well as FPGAs… ⌘ Read more
@stigatle@yarn.stigatle.no @prologic@twtxt.net @eldersnake@we.loveprivacy.club I love VR too, and I wonder a lot whether it can help people with accessibility challenges, like low vision.
But Meta’s approach from the beginning almost seemed like a joke? My first thought was “are they trolling us?” There’s open source metaverse software like Vircadia that looks better than Meta’s demos (avatars have legs in Vircadia, ffs) and can already do virtual co-working. Vircadia developers hold their meetings within Vircadia, and there are virtual whiteboards and walls where you can run video feeds, calendars and web browsers. What is Meta spending all that money doing, if their visuals look so weak, and their co-working affordances aren’t there?
On top of that, Meta didn’t seem to put any kind of effort into moderating the content. There are already stories of bad things happening in Horizon Worlds, like gangs forming and harassing people off of it. Imagine what that’d look like if 1 billion people were using it the way Meta says they want.
Then, there are plenty of technical challenges left, like people feeling motion sickness or disoriented after using a headset for a long period of time. I haven’t heard announcements from Meta that they’re working on these problems or have made any advances on them.
All around, it never sounded serious to me, despite how much money Meta seems to be throwing at it. For something with so much promise, and so many obvious challenges to attack first that Meta seems to be ignoring, what are they even doing?
@shreyan@twtxt.net probably ~1k up to 1.5k. One I found had 64 GB RAM and 12C/16T for 1.1k
@xuu@txt.sour.is LOL omfg.
This is the absurd logical endpoint of free market fundamentalism. “The market will fix everything!” Including, apparently, encroaching floodwater.
I do have to say though, after spending a while looking at houses, that there are a crapton of homes for sale for very high prices (>$1 million) in coastal areas NASA is more or less telling us will be underwater in the next few decades. I don’t get how a house that’s going to be underwater soon is worth $1 million, but then I’ve never been a free market fundamentalist either, so 🤷 Maybe they’re all watertight.
user/bmallred/data/2023-05-09-06-13-11.fit: 1.01 miles, 00:08:41 average pace, 00:08:48 duration
Recipe Relativity
⌘ Read more
@prologic@twtxt.net @carsten@yarn.zn80.net
(1) You go to the store and buy a microwave pizza. You go home, put it in the microwave, heat it up. Maybe it’s not quite the way you like it, so you put some red pepper on it, maybe some oregano.
Are you a pizza chef? No. Do we know what your cooking is like? Also no.
(2) You create a prompt for StableDiffusion to make a picture of an elephant. What pops out isn’t quite to your liking. You adjust the prompt, tweak it a bunch, till the elephant looks pretty cool.
Are you an artist? No. Do we know what your art is like? Also no.
The elephant is “fake art” in a similar sense to how a microwave pizza is “fake pizza”. That’s what I meant by that word. The microwave pizza is a sort of “simulation of pizza”, in this sense. The generated elephant picture is a simulation of art, in a similar sense, though it’s even worse than that and is probably more of a simulacrum of art since you can’t “consume” an AI-generated image the way you “consume” art.
Started with
a concept sketch of a full body end-time factory worker on a distant planet, cyberpunk light brown suite, (badass), looking up at the viewer, 2d, line drawing, (pencil sketch:0.3), (caricature:0.2), watercolor city sketch,
Negative prompt: EasyNegativ, bad-hands-5, 3d, photo, naked, sexy, disproportionate, ugly
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2479087078, Face restoration: GFPGAN, Size: 512x768, Model hash: 2ee2a2bf90, Model: mimic_v10, Denoising strength: 0.7, Hires upscale: 1.5, Hires upscaler: Latent
On LinkedIn I see a lot of posts aimed at software developers along the lines of “If you’re not using these AI tools (X,Y,Z) you’re going to be left behind.”
Two things about that:
- No you’re not. If you have good soft skills (good communication, show up on time, general time management) then you’re already in excellent shape. No AI can do that stuff, and for that alone no AI can replace people
- This rhetoric is coming directly from the billionaires who are laying off tech people by the 100s of thousands as part of the class war they’ve been conducting against all working people since the 1940s. They want you to believe that you have to scramble and claw over one another to learn the “AI” that they’re forcing onto the world, so that you stop honing the skills that matter (see #1) and are easier to obsolete later. Don’t fall for it. It’s far from clear how this will shake out once governments get off their asses and start regulating this stuff, by the way–most of these “AI” tools are blatantly breaking copyright and other IP laws, and some day that’ll catch up with them.
That said, it is helpful to know thy enemy.
1-to-1 Scale
⌘ Read more
user/bmallred/data/2023-04-07-13-19-28.fit: 1.52 miles, 00:08:59 average pace, 00:13:37 duration
user/bmallred/data/2023-04-05-06-24-03.fit: 1.55 miles, 00:07:43 average pace, 00:11:57 duration
go mills()
😅
So. Some bits.
i := fIndex(xs, 5.6)
Can also be
i := Index(xs, 5.6)
The compiler can infer the type automatically. Looks like you mention that later.
Also the inference is super smart. You can define functions that take functions with generic types as arguments. This can be useful for a generic value mapper for a repository:
func Map[U, V any](rows []U, fn func(U) V) []V {
    out := make([]V, len(rows))
    for i := range rows {
        out[i] = fn(rows[i])
    }
    return out
}
rows := []int{1,2,3}
out := Map(rows, func(v int) uint64 { return uint64(v) })
I am pretty sure the type parameters go the other way, with the type name first and the constraint second.
func Foo[comparable T](xs T, s T) int
Should be
func Foo[T comparable](xs T, s T) int
Square Packing
⌘ Read more
user/bmallred/data/2023-02-06-10-02-12.fit: 1.34 miles, 00:09:39 average pace, 00:12:56 duration
user/bmallred/data/2023-02-06-08-58-34.fit: 1.43 miles, 00:09:57 average pace, 00:14:15 duration
@xuu@txt.sour.is that doesn’t seem to fit the spirit of the spec, at least by my read (I could be wrong obv). The example on Wikipedia’s webfinger page,
{
"subject": "acct:bob@example.com",
"aliases": [
"https://www.example.com/~bob/"
],
"properties": {
"http://example.com/ns/role": "employee"
},
"links": [{
"rel": "http://webfinger.example/rel/profile-page",
"href": "https://www.example.com/~bob/"
},
{
"rel": "http://webfinger.example/rel/businesscard",
"href": "https://www.example.com/~bob/bob.vcf"
}
]
}
and then the comparison with how mastodon uses webfinger,
{
"subject": "acct:Mastodon@mastodon.social",
"aliases": [
"https://mastodon.social/@Mastodon",
"https://mastodon.social/users/Mastodon"
],
"links": [
{
"rel": "http://webfinger.net/rel/profile-page",
"type": "text/html",
"href": "https://mastodon.social/@Mastodon"
},
{
"rel": "self",
"type": "application/activity+json",
"href": "https://mastodon.social/users/Mastodon"
},
{
"rel": "http://ostatus.org/schema/1.0/subscribe",
"template": "https://mastodon.social/authorize_interaction?uri={uri}"
}
]
}
suggests to me you want to leave the subject acct: bit as is (don’t add prefixes) and put any extra information you care to include in the links section, where you’re free to define the rel URIs however you see fit. The notion here is that webfinger is offering a mapping from an account name to additional information about that account, so if anything you’d use a "subject": "acct:SALTY ACCOUNT_REPRESENTATION" line in the JSON to achieve what you’re saying, if you don’t want to do that via links.
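For illustration only, a hypothetical response along those lines might look like this (the rel URI and href are made up for the example and not part of any spec):
{
  "subject": "acct:bob@example.com",
  "links": [
    {
      "rel": "http://example.com/rel/salty-account",
      "href": "https://example.com/salty/bob"
    }
  ]
}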
@prologic@twtxt.net That was exactly my thought at first too. But what do we put as the rel for a salty account? Since it is decentralized, we don’t have a set URL for machines to key off. So for example, take the standard response from Okta:
# http GET https://example.okta.com/.well-known/webfinger resource==acct:bob
{
"links": [
{
"href": "https://example.okta.com/sso/idps/OKTA?login_hint=bob#",
"properties": {
"okta:idp:type": "OKTA"
},
"rel": "http://openid.net/specs/connect/1.0/issuer",
"titles": {
"und": "example"
}
}
],
"subject": "acct:bob"
}
It gives one link that follows the OpenID login. So the details are specific to the subject acct:bob.
Mastodon’s response:
{
"subject": "acct:xuu@chaos.social",
"aliases": [
"https://chaos.social/@xuu",
"https://chaos.social/users/xuu"
],
"links": [
{
"rel": "http://webfinger.net/rel/profile-page",
"type": "text/html",
"href": "https://chaos.social/@xuu"
},
{
"rel": "self",
"type": "application/activity+json",
"href": "https://chaos.social/users/xuu"
},
{
"rel": "http://ostatus.org/schema/1.0/subscribe"
}
]
}
It supplies a profile page and a self link, which are both specific to that account.
user/bmallred/data/2023-01-10-10-04-43.fit: 1.42 miles, 00:06:31 average pace, 00:09:15 duration
user/bmallred/data/2023-01-10-08-57-57.fit: 1.43 miles, 00:08:30 average pace, 00:12:12 duration
$name$
and then dispatch the hashing or checking to its specific format.
Here is an example of usage:
func Example() {
pass := "my_pass"
hash := "my_pass"
pwd := passwd.New(
&unix.MD5{}, // first is preferred type.
&plainPasswd{},
)
_, err := pwd.Passwd(pass, hash)
if err != nil {
fmt.Println("fail: ", err)
}
// Check if we want to update.
if !pwd.IsPreferred(hash) {
newHash, err := pwd.Passwd(pass, "")
if err != nil {
fmt.Println("fail: ", err)
}
fmt.Println("new hash:", newHash)
}
// Output:
// new hash: $1$81ed91e1131a3a5a50d8a68e8ef85fa0
}
This shows how one would set a preferred hashing type and, if the current version of one’s password hash is not the preferred type, update it to enhance the security of the stored password when someone logs in.
https://github.com/sour-is/go-passwd/blob/main/passwd_test.go#L33-L59
@lyse@lyse.isobeef.org I’m talking about some JS projects I have seen with 1-2 GB node_modules dirs. Though yarn is quite vast in its modules because it does a LOOOOOOT of stuff in the background.