Searching yarn

Twts matching #basic

@prologic@twtxt.net sure! I don’t know if this is what you need, but let me give it a try.

  • I have Timeline installed, which has an endpoint to process #webmentions. Mine for example is https://aelaraji.com/timeline/webmention which you can find by querying https://aelaraji.com/.well-known/webfinger.
  • If you mention someone from #Timeline itself, it takes care of querying that and sending in the mention for you.
  • Otherwise (what I personally do) you could just:
curl -i -d 'source=https://twtxt.net/user/prologic/twtxt.txt#:~:text=2024-12-09T01:22:37Z' -d 'target=https://aelaraji.com/twtxt.txt' https://aelaraji.com/timeline/webmention

Basically what @sorenpeter@darch.dk mentioned in his article here.

Afterwards, the mentions are stored in their own mentions.txt feed. The one from the example above looks like this on my Timeline:

Image

Feel free to spam my endpoint if you’d like to give things a try. 👍

[P.S: personally, I don’t seem to get the mentions if I add the Text fragment part to my target]

In-reply-to » I've been thinking of a few improvements for the next generation of twtxt spec, let me know if these are useful or interesting :) https://text.eapl.mx/a-few-ideas-for-a-next-twtxt-version

Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)

Metadata on individual twts are too much for me. I do like the simplicity of the current spec. But I understand where you’re coming from.

Numbering twts in a feed is basically an attempt at generating message IDs. It’s an interesting idea, but I reckon it is not even needed. I’d simply use location-based addressing (feed URL + ‘#’ + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
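For illustration, here is a rough Go sketch of the two addressing styles side by side; the hash construction (separator, digest, truncation) is an assumption made up for the example, not any agreed-upon spec:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// locationRef is location-based addressing: feed URL + '#' + timestamp.
func locationRef(feedURL, timestamp string) string {
	return feedURL + "#" + timestamp
}

// contentRef hashes the same fields instead. The separator, digest and
// truncation here are illustrative assumptions only.
func contentRef(feedURL, timestamp string) string {
	sum := sha256.Sum256([]byte(feedURL + "\n" + timestamp))
	return base64.RawURLEncoding.EncodeToString(sum[:])[:12]
}

func main() {
	fmt.Println(locationRef("https://example.org/twtxt.txt", "2024-12-09T01:22:37Z"))
	fmt.Println(contentRef("https://example.org/twtxt.txt", "2024-12-09T01:22:37Z"))
}

The location-based form stays human-readable and directly resolvable; the hashed form hides the feed URL and timestamp behind a digest.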

yarnd actually sends If-Modified-Since request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in Cache.FetchFeeds(…) we can actually see If-Modified-Since being deployed when the feed has been retrieved with a Last-Modified response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278

Turns out etags with If-None-Match are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible etags when fetching feeds.
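For context, a minimal sketch of what a conditional feed fetch with both headers looks like in Go; this is illustrative only and not yarnd’s actual code:

package main

import (
	"fmt"
	"net/http"
)

// fetchFeed does a conditional GET: it replays the Last-Modified and ETag
// values remembered from the previous fetch, so an unchanged feed comes
// back as 304 with no body.
func fetchFeed(url, lastModified, etag string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	if lastModified != "" {
		req.Header.Set("If-Modified-Since", lastModified)
	}
	if etag != "" {
		req.Header.Set("If-None-Match", etag)
	}
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := fetchFeed("https://example.org/twtxt.txt", "Mon, 09 Dec 2024 01:22:37 GMT", `"abc123"`)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNotModified {
		fmt.Println("feed unchanged, keep cached copy")
		return
	}
	// Remember these for the next conditional request.
	fmt.Println("Last-Modified:", resp.Header.Get("Last-Modified"))
	fmt.Println("ETag:", resp.Header.Get("ETag"))
}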

I don’t understand how the discovery URLs should work to replace the User-Agent header in HTTP(S) requests. Do you mind elaborating?

Different protocols are basically just a client thing.

I reckon it’s best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I’d definitely create a different feed for the other languages.

Isn’t the emoji thing “just” a client feature? So, feeds do not even have to state any emojis. As a user I’d configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in tt where I assign colors to feeds. On the other hand, what if a user wants to control what symbol should be displayed, similar to the feed’s nick? Hmm. But still, my terminal font doesn’t even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be only a client feature.

In-reply-to » What are people’s #IRC setups? Do you have your own bouncer server or just have your computer always on? And do you IRC on mobile?

It’s offline atm, but my usual setup basically makes my xmpp server into a bouncer (biboumi) and my xmpp client just does its normal thing to read the irc backlog as server-side message history.

In-reply-to » "Everything should be made as simple as possible, but not simpler." – Albert Einstein

@prologic@twtxt.net I like the, allegedly, original:

“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”

Not as simple as the interpretation you used, yet often context is king (or queen).

In-reply-to » So really your argument is just that switching to a location-based addressing "just makes sense". Why? Without concrete pros/cons of each approach this isn't really a strong argument I'm afraid. In fact I probably need to just sit down and detail the properties of both approaches and the pros/cons of both.

(#2024-09-24T12:39:32Z) @prologic@twtxt.net It might be simple for you to run echo -e "\t\t" | sha256sum | base64, but for people who are not comfortable in a terminal and don’t have a dev env set up, that is magic, compared to the simplicity of just copy/pasting what you see in a textfile into another textfile – basically what @movq@www.uninformativ.de also said. I’m also on team extreme minimalism, otherwise we could just use Mastodon etc. Replacing line-breaks with a tab would also make it easier to handwrite your twtxt. You don’t have to handwrite it, but at least you should have the option to. Just as I do with all my HTML and CSS.


#fzf is the new emacs: a tool with a simple purpose that has evolved to include an #email client. https://sr.ht/~rakoo/omail/

I’m being a little silly, of course. fzf doesn’t actually check your email, but it appears to be basically the whole user interface for that mail program, with #mblaze wrangling the emails.

I’ve been thinking about how I handle my email, and am tempted to make something similar. (When I originally saw this linked the author was presenting it as an example tweaked to their own needs, encouraging people to make their own.)

This approach could surely also be combined with #jenny, taking the place of (neo)mutt. For example mblaze’s mthread tool presents a threaded discussion with indentation.

In-reply-to » @prologic Do you have a link to some past discussion?

@xuu@txt.sour.is I think it is more tricky than that.

https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/application-regulation/who-does-data-protection-law-apply_en

“A company or entity …”

Also, as I understand it, “personal or household activity” (as you called it) is rather strict: An example could be you uploading photos to a webspace behind HTTP basic auth and sending that link to a friend. So, yes, a webserver is involved and you process your friend’s data (e.g., when did he access your files), but it’s just between you and him. But if you were to publish these photos publicly on a webserver that anyone can access, then it’s a different story – even though you could say that “this is just my personal hobby, not related to any job or money”.

If you operate a public Yarn pod and if you accept registrations from other users, then I’m pretty sure the GDPR applies. 🤔 You process personal data and you don’t really know these people. It’s not a personal/private thing anymore.

In-reply-to » @movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅*", wouldn’t preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn’t break threads. It isn’t a problem there. 😉 It is here.

I feel like we should isolate a subset of markdown that makes sense and build it into lextwt. It already has support for links and images. Maybe basic formatting: bold, italic. Possibly block quotes and bullet lists. No tables or footnotes.
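To make the idea concrete, here is a toy Go sketch of rendering only a whitelisted subset (bold and italic); this is purely hypothetical and not how lextwt is implemented:

package main

import (
	"fmt"
	"html"
	"regexp"
)

// renderSubset handles only a small markdown whitelist (bold and italic);
// everything else is escaped and passed through as plain text.
var (
	bold   = regexp.MustCompile(`\*\*([^*]+)\*\*`)
	italic = regexp.MustCompile(`\*([^*]+)\*`)
)

func renderSubset(twt string) string {
	out := html.EscapeString(twt)
	out = bold.ReplaceAllString(out, "<strong>$1</strong>")
	out = italic.ReplaceAllString(out, "<em>$1</em>")
	return out
}

func main() {
	fmt.Println(renderSubset("this is **important** and *subtle*, tables | are | ignored"))
}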


@prologic@twtxt.net The basic idea was to stem the hash.. so you have a hash abcdef0123456789... any substring of that hash after the first 6 characters will match. So abcdef, abcdef012, abcdef0123456 all match the same. In the case of a collision I think we decided on matching the newest, since we archive off older threads anyway. The third rule was about growing the minimum hash size after some threshold of collisions was detected.
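A hypothetical Go sketch of that stemming rule (a prefix of at least 6 characters matches; on collision prefer the newest thread); the types and names are made up for the example:

package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

type thread struct {
	Hash    string
	Updated time.Time
}

// matchStem resolves a stem of at least 6 characters to a thread.
// Collision rule: prefer the most recently updated thread.
func matchStem(stem string, threads []thread) (thread, bool) {
	if len(stem) < 6 {
		return thread{}, false
	}
	var hits []thread
	for _, t := range threads {
		if strings.HasPrefix(t.Hash, stem) {
			hits = append(hits, t)
		}
	}
	if len(hits) == 0 {
		return thread{}, false
	}
	sort.Slice(hits, func(i, j int) bool { return hits[i].Updated.After(hits[j].Updated) })
	return hits[0], true
}

func main() {
	threads := []thread{
		{"abcdef0123456789", time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC)},
		{"abcdefffff000000", time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC)},
	}
	if t, ok := matchStem("abcdef", threads); ok {
		fmt.Println("matched:", t.Hash)
	}
}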

In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

So.. basically a rehash of the email “unsend” requests? What if I was to make a (delete: 5vbi2ea) .. would it delete someone else’s twt?

In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) Hmm, but yarnd also isn't showing these twts as being part of a thread. @prologic you said yarnd respects custom subjects. Shouldn't these twts count as having a custom subject, and get threaded together?

So yeah no, whilst it technically works, neither jenny nor yarnd support it very well. Only at a very basic level.

In-reply-to » (#mp6ox4a) @cuaxolotl Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

@movq@www.uninformativ.de Damn! I’m two years late to the discussion 😅 So basically, one could just make a bash script/cron job on the side for pinging non-HTTP feeds from time to time, and the receiving end would get it IF they check their logs.

In-reply-to » On the Subject of Feed Identities; I propose the following:

So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL it’s almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash mapping in place. Changing that now is basically a no-go. But we can create a signature chain that can link identities together. So if I move to a new URL I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL:

The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.

The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that its all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file.. but I wonder if that would cause lots of clutter? The signature chain would be something like a log of what is changing (new key, revoke, add URL) and a signature of the change + the previous signature:

# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
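For illustration, a hypothetical Go sketch of how a client might replay such a chain into its current state; the types are invented, and real signature verification (e.g. against the saltpack messages) is deliberately elided:

package main

import "fmt"

// ChainOp mirrors the preamble entries above: ADDKEY, ADDURL, REVKEY.
type ChainOp struct {
	Action string // "ADDKEY", "ADDURL" or "REVKEY"
	Value  string // key or URL
	Sig    string // signature over this entry plus the previous signature
}

// activeState replays a chain and returns the currently valid keys and URLs.
// Checking each Sig against the active signing key is omitted here.
func activeState(chain []ChainOp) (keys map[string]bool, urls []string) {
	keys = map[string]bool{}
	for _, op := range chain {
		switch op.Action {
		case "ADDKEY":
			keys[op.Value] = true
		case "REVKEY":
			delete(keys, op.Value)
		case "ADDURL":
			urls = append(urls, op.Value)
		}
	}
	return keys, urls
}

func main() {
	chain := []ChainOp{
		{"ADDKEY", "kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w", "..."},
		{"ADDURL", "https://txt.sour.is/user/xuu", "..."},
		{"REVKEY", "kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w", "..."},
	}
	keys, urls := activeState(chain)
	fmt.Println("active keys:", len(keys), "urls:", urls)
}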


@cuaxolotl@sunshinegardens.org Ah, thanks for reporting back! Okay, so you’re basically manually “crawling” feeds right now. 🤔 What do you think about the idea of adding something like # follow_notify = gemini://foo/bar to your feed’s metadata, so that clients who follow you can ping that URL every now and then? How would you even notice that, do you regularly read your gemini logs? 🤔

In-reply-to » @lyse Ahh so it's not just me! 😅

@aelaraji@aelaraji.com Ahh it might very well be a Clownflare thing as @lyse@lyse.isobeef.org alluded to 🤣 One of these days I’m going to get off Clownflare myself, and when I do I’ll share it with you. My idea is to basically have a cheap VPS like @eldersnake@we.loveprivacy.club has and use Wireguard to tunnel out. The VPS becomes the reverse proxy that faces the internet. My home network then has no inbound whatsoever.

In-reply-to » He emailed my ISP about causing logging abuse. This is the only real ISP in my area, it's gonna basically send me back to dialup.

@bender@twtxt.net haha funny! Though I just realized my ISP is the only one with fiber pulled to the property, so I would have to get a phone line from them somehow. The other ISP in the area is basically a mobile hotspot.

In-reply-to » @abucci / @abucci Any interesting errors pop up in the server logs since the flaw got fixed (unbounded receieveFile())? 🤔

He emailed my ISP about causing logging abuse. This is the only real ISP in my area, it’s gonna basically send me back to dialup.

In-reply-to » @stigatle / @abucci My current working theory is that there is an asshole out there that has a feed that both your pods are fetching with a multi-GB avatar URL advertised in their feed's preamble (metadata). I'd love for you both to review this PR, and once merged, re-roll your pods and dump your respective caches and share with me using https://gist.mills.io/

@prologic@twtxt.net Hitting that URL returns a bunch of HTML even though there is no user named lovetocode999 on my pod. I think it should 404, and maybe with a delay, to discourage whatever this abuse is. Basically this can be used to DDoS a pod by forcing it to generate a bunch of HTML just by doing a bogus GET like this.
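A minimal sketch of the suggested behaviour (404 plus a small delay for unknown users), written as a generic Go handler rather than yarnd’s actual routing; userExists is a placeholder:

package main

import (
	"log"
	"net/http"
	"strings"
	"time"
)

// userExists is a stand-in for whatever lookup the pod really does.
func userExists(name string) bool { return name == "someuser" }

// userHandler returns 404 (after a short delay) for unknown users instead
// of rendering a full HTML page for them.
func userHandler(w http.ResponseWriter, r *http.Request) {
	name := strings.TrimPrefix(r.URL.Path, "/user/")
	if !userExists(name) {
		time.Sleep(500 * time.Millisecond) // cheap tarpit to discourage probing
		http.NotFound(w, r)
		return
	}
	w.Write([]byte("profile for " + name + "\n"))
}

func main() {
	http.HandleFunc("/user/", userHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}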

In-reply-to » Regarding complexity budget, slow software, all that:

@movq@www.uninformativ.de Somewhere or another, I think in a William Byrd talk, I heard it suggested that the best ideas in computer science should fit on an index card (ah yes it’s this one: https://paperswelove.org/2017/video/will-byrd-most-beautiful-program/ ). He was referring to the basic principles of LISP/the lambda calculus, which have sometimes been called the Maxwell’s equations of computer programming (by Alan Kay). Simple, short, elegant, but very densely packed with meaning–generations of people have spent their whole careers unpacking what those simple rules can do.

Much of modern software feels like the polar opposite of that. Not only can you not write it on an index card, you never will be able to because people who write software don’t seem to aspire to try. I wish more people thought this way though!

In-reply-to » Media

I run Plan 9 on my server and my main home workstation (a Raspberry Pi). My “daily driver” time is basically split between that and a Mac (excluding time on my phone, I suppose). I think it looks elegant, too. :-)


Been clearing out my pod a bit and blocking unwanted domains that are basically either a) just noise and/or b) just 1-way (whose authors never reply or are otherwise unaware of the larger ecosystem).

Let me know if y’all have any other candidates you’d like me to add to the blocked domain list?

In-reply-to » Yeah, the lack of comments makes regular JSON not a good configuration format in my view. Also, putting all keys in quotes and the use of commas is annoying. The big upside is that it's in lots of standard libraries.

@lyse@lyse.isobeef.org It’s a hierarchical key-value format. I designed it for the network peering tools I use.. I can grant access to different parts of the tree to other users.. kinda like directory permissions. A basic example of the format is:

@namespace
# multi
# line
# comment
root :value

# example space comment
@namespace.name space-tag 

# attribute comments
attribute attr-tag  :value for attribute

# attribute with multiple 
# lines of values
foo :bar
      :bin
      :baz

repeated :value1
repeated :value2

Each @ starts the definition of a namespace, kinda like [name] in INI format. It can have comments that show up before it. Then each attribute is key :value and can have its own # comment lines.
Values can be multi-line.. and also repeated..

The namespaces and values can also have little metadata tags added to them.

Image

The service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes out when they happen.

In-reply-to » @prologic High five, I’m “generation Java” as well! 😂 There were some leftovers of C++, we used that in the computer graphics courses in Uni a lot. But pretty much anything else that involved programming was Java.

@movq@www.uninformativ.de Haha! Yeah, sounds about like my HS CS program. A math teacher taught Visual Basic and Pascal. And over on the other end of the school we had “electronics”, which was a room next to the auto body class where they had a bunch of random computer parts scavenged from the district’s decommissioned surplus storage.

The advanced class would piece together training kits for the basic class to put together.


Twtxt spec enhancement proposal thread 🧵

Adding attributes to individual twts similar to adding feed attributes in the heading comments.

https://git.mills.io/yarnsocial/go-lextwt/pulls/17

The basic use case would be for multilingual feeds where there is a default language and some twts will be written in a different language.

As seen in the wild: https://eapl.mx/twtxt.txt

The attributes are formatted as [key=value]

They can show up anywhere in the twt where they are not enclosed by another element, such as a code block or part of a markdown link.
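A naive Go sketch of pulling such attributes out of a twt; note that it does not implement the code-block/link exclusion described above and is not the lextwt implementation:

package main

import (
	"fmt"
	"regexp"
)

// attrPattern matches [key=value] attributes. A real tokenizer (e.g. the
// lextwt proposal above) would additionally skip anything inside code
// blocks or markdown links; this one does not.
var attrPattern = regexp.MustCompile(`\[([A-Za-z0-9_-]+)=([^\]]+)\]`)

func extractAttrs(twt string) map[string]string {
	attrs := map[string]string{}
	for _, m := range attrPattern.FindAllStringSubmatch(twt, -1) {
		attrs[m[1]] = m[2]
	}
	return attrs
}

func main() {
	fmt.Println(extractAttrs("[lang=es] Hola mundo, this feed defaults to English"))
}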


With all M$’s apps being basically fancy web apps, there is no need to actually install any of their legacy applications locally anymore. Since I am online basically 100% of the time, this turns my Office experience into a Chromebook-like one. No installs, never outdated software. Just a yearly subscription contribution to worry about.


Obligatory Twtxt post: I love how I can simply use a terminal window and some very basic tools (echo, scp, ssh) to publish thoughts, as they pop up, onto the Internet in a structured way, that can be found and perhaps even appreciated.

In-reply-to » wtf is going on with Microsoft and OpenAI of late?! Like Microsoft bought into OpenAI for some shocking $10bn USD, then Sam Altman got fired, now he's been hired by Microsoft to run up a new "AI" division. wtf?! seriously?! 🤔 #Microsoft #OpenAI #Scandal

@prologic@twtxt.net the new product was GPTs. A way to create tailored bots for specific use cases. https://openai.com/blog/introducing-gpts (fun fact: I did an internal hackathon where we made something like this for $work onboarding. And I won a prize!)

The competing project is Poe https://quorablog.quora.com/Introducing-creator-monetization-for-Poe which is basically the same idea. Make an AI bot tailored to a specific domain of knowledge. And monetize it.

The timing fits very well, as OpenAI announced it just a few weeks ago.


How Google Authenticator made one company’s network breach much, much worse | Ars Technica

🤦‍♂️

WHY are these big companies treated as though they are the be all and end all of infosec? These are rookie mistakes Google’s making, at scale.

Unfortunately Google employs dark patterns to convince you to sync your MFA codes to the cloud, and our employee had indeed activated this “feature”. If you install Google Authenticator from the app store directly, and follow the suggested instructions, your MFA codes are by default saved to the cloud. If you want to disable it, there isn’t a clear way to “disable syncing to the cloud”, instead there is just a “unlink Google account” option.

Like, never ever put your multi-factor tokens into a single cloud storage location! The whole point of this being “multi” factor is that there is a separate, independent physical factor involved in the authentication process. If the authenticator app on your phone puts the tokens in the cloud, then it reduces the security that comes from having a second factor. This is basic stuff.

Of course, never ever use Google Authenticator. All it does is generate TOTP and HOTP codes, which you can do with any OTP app, preferably an open source one that’s been vetted.
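For a sense of how small TOTP really is (RFC 6238: HMAC over a 30-second counter, dynamically truncated to 6 digits), here is a self-contained Go sketch; the secret shown is a made-up example:

package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base32"
	"encoding/binary"
	"fmt"
	"time"
)

// totp returns a 6-digit RFC 6238 code for the given base32 secret.
func totp(secret string, t time.Time) (string, error) {
	key, err := base32.StdEncoding.WithPadding(base32.NoPadding).DecodeString(secret)
	if err != nil {
		return "", err
	}
	counter := uint64(t.Unix() / 30) // 30-second time step
	var msg [8]byte
	binary.BigEndian.PutUint64(msg[:], counter)
	mac := hmac.New(sha1.New, key)
	mac.Write(msg[:])
	sum := mac.Sum(nil)
	offset := sum[len(sum)-1] & 0x0f // dynamic truncation (RFC 4226)
	code := (binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff) % 1_000_000
	return fmt.Sprintf("%06d", code), nil
}

func main() {
	code, err := totp("JBSWY3DPEHPK3PXP", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println(code)
}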


Would anyone pay for like cheap hosting if it only cost you say ~$0.50 USD per month for a basic space to run your website, twtxt feed, yarn pod, whatever? 🤔 Of course we’re talking slices of a server here in terms of memory and cpu, so this would be 10 milliCores of CPU + 64MB of Memory, more than enough to run quite a bit of shit™ 🤣 (especially when you don’t need to run or manage a full OS)


An official FBI document dated January 2021, obtained by the American association “Property of People” through the Freedom of Information Act.

Image

This document summarizes the possibilities for legal access to data from nine instant messaging services: iMessage, Line, Signal, Telegram, Threema, Viber, WeChat, WhatsApp and Wickr. For each software, different judicial methods are explored, such as subpoena, search warrant, active collection of communications metadata (“Pen Register”) or connection data retention law (“18 USC§2703”). Here, in essence, is the information the FBI says it can retrieve:

  • Apple iMessage: basic subscriber data; in the case of an iPhone user, investigators may be able to get their hands on message content if the user uses iCloud to synchronize iMessage messages or to back up data on their phone.

  • Line: account data (image, username, e-mail address, phone number, Line ID, creation date, usage data, etc.); if the user has not activated end-to-end encryption, investigators can retrieve the texts of exchanges over a seven-day period, but not other data (audio, video, images, location).

  • Signal: date and time of account creation and date of last connection.

  • Telegram: IP address and phone number for investigations into confirmed terrorists, otherwise nothing.

  • Threema: cryptographic fingerprint of phone number and e-mail address, push service tokens if used, public key, account creation date, last connection date.

  • Viber: account data and IP address used to create the account; investigators can also access message history (date, time, source, destination).

  • WeChat: basic data such as name, phone number, e-mail and IP address, but only for non-Chinese users.

  • WhatsApp: the targeted person’s basic data, address book and contacts who have the targeted person in their address book; it is possible to collect message metadata in real time (“Pen Register”); message content can be retrieved via iCloud backups.

  • Wickr: Date and time of account creation, types of terminal on which the application is installed, date of last connection, number of messages exchanged, external identifiers associated with the account (e-mail addresses, telephone numbers), avatar image, data linked to adding or deleting.

TL;DR Signal is the messaging system that provides the least information to investigators.

In-reply-to » I never paid a lot of attention to Ben Shapiro before, but what he says is so transparently asinine it boggles the senses. You really have to have a Fox-addled mind to believe that the search for the submersible was completely faked and that the powers-that-be knew the entire time that it had imploded. To believe that a vast conspiracy among hundreds, thousands (?) of people from several countries and spanning several days was orchestrated to lie to the public in order to.....uh, achieve what exactly? "Undermine institutional credibility"? What does that even mean?

@xuu@txt.sour.is Oh wow I didn’t know he was associated with PragerU. I’ve listened to a few episodes of The Audit podcast, where they basically shred PragerU content, and it’s hilarious and terrifying.

In-reply-to » Home | Tabby This is actually pretty cool and useful. Just tried this on my Mac locally of course and it seems to have quite good utility. What would be interesting for me would be to train it on my code and many projects 😅

@prologic@twtxt.net The hackathon project that I did recently used OpenAI and embedded the response info into the prompt. So basically I would search for the top 3 most relevant search results to feed into the prompt, and the AI would summarize them to answer the question.
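Roughly the shape of that retrieval-augmented prompt, sketched in Go; the template wording is invented for illustration and is not the actual hackathon code:

package main

import (
	"fmt"
	"strings"
)

// buildPrompt stuffs the top search results into the prompt so the model
// can summarize an answer from them.
func buildPrompt(question string, topResults []string) string {
	var b strings.Builder
	b.WriteString("Answer the question using only the context below.\n\nContext:\n")
	for i, r := range topResults {
		fmt.Fprintf(&b, "%d. %s\n", i+1, r)
	}
	fmt.Fprintf(&b, "\nQuestion: %s\nAnswer:", question)
	return b.String()
}

func main() {
	prompt := buildPrompt("How do I reset my password?", []string{
		"Passwords can be reset from Settings > Account.",
		"A reset link is emailed and expires after 30 minutes.",
		"Contact IT if the reset email never arrives.",
	})
	fmt.Println(prompt)
}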

In-reply-to » Dear Stack Overflow, Inc.

Seems to me you could write a script that:

  • Parses a StackOverflow question
  • Runs it through an AI text generator
  • Posts the output as a post on StackOverflow

and basically pollute the entire information ecosystem there in a matter of a few months? How long before some malicious actor does this? Maybe it’s being done already 🤷

What an asinine, short-sighted decision. An astonishing number of companies are actively reducing headcount because their executives believe they can use this newfangled AI stuff to replace people. But, like the dot com boom and subsequent bust, many of the companies going this direction are going to face serious problems when the hypefest dies down and the reality of what this tech can and can’t do sinks in.

We really, really need to stop trusting important stuff to corporations. They are not tooled to last.


What is a good device for home virtualization these days? I have been looking at the Intel NUC 13 Pros. Basically I want something “quiet” (i.e. not a screaming banshee 1U), smallish, but with lots of threads and RAM. Disk will come from an external NAS.


BlueSky is cosplaying decentralization

I say “ostensibly decentralized”, because BlueSky’s (henceforth referred to as “BS” here) decentralization is a similar kind of decentralization as with cryptocurrencies: sure, you can run your own node (in BS case: “personal data servers”), but that does not give you basically any meaningful agency in the system.

I don’t know why anyone would want to use this crap. It’s the same old same old and it’ll end up the same old way.

In-reply-to » @darch I think having a way to layer on features so those who can support/desire them can. It would be best for the community to be able to layer on (or off) the features.

We could ask them? But on the counter, would bukket or jan6 follow the pure twtxt feeds? Probably not either way… We could use content negotiation as well: text/plain for basic and text/yarn for enhanced.
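A minimal Go sketch of what that negotiation could look like server-side; text/yarn is just the hypothetical media type floated above:

package main

import (
	"log"
	"net/http"
	"strings"
)

// feedHandler serves the enhanced representation only to clients that ask
// for it via the Accept header, and plain twtxt to everyone else.
func feedHandler(w http.ResponseWriter, r *http.Request) {
	if strings.Contains(r.Header.Get("Accept"), "text/yarn") {
		w.Header().Set("Content-Type", "text/yarn; charset=utf-8")
		w.Write([]byte("# enhanced feed with extensions\n"))
		return
	}
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	w.Write([]byte("# basic twtxt feed\n"))
}

func main() {
	http.HandleFunc("/twtxt.txt", feedHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}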


I played around with parsers. This time I experimented with parser combinators for twt message text tokenization. Basically, extract mentions, subjects, URLs, media and regular text. It’s kinda nice, although my solution is not completely elegant, I have to say. Especially my communication protocol between different steps for intermediate results is really ugly. Not sure about performance, I reckon a hand-written state machine parser would be quite a bit faster. I need to write a second parser and then benchmark them.

lexer.go and newparser.go resemble the parser combinators: https://git.isobeef.org/lyse/tt2/-/commit/4d481acad0213771fe5804917576388f51c340c0 It’s far from finished yet.

The first attempt in parser.go doesn’t work as my backtracking is not accounted for; I noticed only later that I have to do that. With twt message texts there is no real error in parsing. Just regular text as a “fallback”. So it works a bit differently than parsing a real language. No error reporting required, except maybe for debugging. My goal was to port my Python code as closely as possible. But then the runes in the string gave me a bit of a headache, so I thought I’d just build myself a nice reader abstraction. When I noticed the missing backtracking, I then decided to give parser combinators a try instead of improving on my look-ahead reader. It only later occurred to me that I could have just used a rune slice instead of a string. With that, porting the Python code should have been straightforward.
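For anyone unfamiliar with the approach, here is a tiny toy of the combinator shape in Go (a parser either consumes a token and returns the rest, or fails so an alternative can be tried); this is not the tt2 code:

package main

import (
	"fmt"
	"strings"
)

// result carries a token, the unconsumed remainder and a success flag.
type result struct {
	token string
	rest  string
	ok    bool
}

type parser func(string) result

// prefixed matches a word introduced by a marker byte, e.g. '@' or '#'.
func prefixed(marker byte) parser {
	return func(in string) result {
		if len(in) == 0 || in[0] != marker {
			return result{rest: in}
		}
		end := strings.IndexByte(in, ' ')
		if end == -1 {
			end = len(in)
		}
		return result{token: in[:end], rest: in[end:], ok: true}
	}
}

// oneOf tries each parser in turn and returns the first success.
func oneOf(ps ...parser) parser {
	return func(in string) result {
		for _, p := range ps {
			if r := p(in); r.ok {
				return r
			}
		}
		return result{rest: in}
	}
}

func main() {
	mentionOrTag := oneOf(prefixed('@'), prefixed('#'))
	fmt.Println(mentionOrTag("@lyse hello"))
	fmt.Println(mentionOrTag("#basic thread"))
}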

Yeah, all this doesn’t probably make sense, unless you look at the code. And even then, you have to learn the ropes a bit. Sorry for the noise. :-)


On the topic of Programming Languages and Telemetry. I’m kind of curious… Do any of these programming languages and their toolchains collect telemetry on their usage and effectively “spy” on your development?

  • Python
  • C
  • C++
  • Java
  • C#
  • Visual Basic
  • Javascript
  • SQL
  • Assembly Language
  • PHP

In-reply-to » And in the latest "don't store your passwords in the cloud" news, NortonLifeLock warns that hackers breached Password Manager accounts

@abucci@anthony.buc.ci ISO 27001 is basically the same. It means that there is management sign-off that a process to improve security is in place. Not that the system is secure. And ITIL means that management signs off that problems and incidents should have processes defined.

Though it’s a good mess of words you can throw around while saying “management supports this, so X needs to get done”.
