for example, ejabberd, redka, and litefs. with all of them using sqlite+litefs for their database needs, agents can communicate over xmpp, matrix, mqtt, and sip. other applications can use sqlite for storage or speak the redis protocol to redka. ejabberd can also handle file uploads, static file publishing, identity, and various other web application services. when scaling, litefs integrates with consul to manage replication, which grants the network access to service discovery, encrypted mesh networking, and various other features that can be used to build secure service grids. ejabberd and redka can be scaled to multiple nodes that coordinate over the litefs replication protocol without any changes to the db storage config. other components can be configured to plug into this framework fairly easily as well. we keep the network config fairly simple by linking nodes together with yggdrasil to flatten the address space and then linking app nodes together using consul to provide secure routing for the local grid service. yggdrasil also offers utility for building federated networks in a similarly flat address space; for more secure communications, i2p is also available in yggdrasil mode. minibase is wonderful, and we have not even started to talk about secure IoT.
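a rough sketch of what an app node might look like, assuming a litefs mount at /litefs and redka on the default redis port (both just assumptions here); since redka speaks the redis protocol, an ordinary redis client library is enough:

```go
// Minimal sketch: one process writing SQLite through the LiteFS mount while
// also talking the Redis protocol to redka. Mount path, DSN and redka address
// are assumptions, not defaults of any of these projects.
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // SQLite driver; LiteFS replicates the file underneath
	"github.com/redis/go-redis/v9"  // a plain Redis client works against redka
)

func main() {
	// SQLite database living on the (assumed) LiteFS FUSE mount.
	db, err := sql.Open("sqlite3", "/litefs/app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
		log.Fatal(err)
	}

	// redka exposes the Redis protocol on top of the same SQLite/LiteFS storage.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	if err := rdb.Set(context.Background(), "greeting", "hello grid", 0).Err(); err != nil {
		log.Fatal(err)
	}
}
```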
Righto, @eapl.me@eapl.me, ta for the writeup. Here we go. :-)
Metadata on individual twts is too much for me. I do like the simplicity of the current spec. But I understand where you're coming from.
Numbering twts in a feed is basically an attempt at generating message IDs. It's an interesting idea, but I reckon it is not even needed. I'd simply use location-based addressing (feed URL + "#" + timestamp) instead of content addressing. If one really wanted to, one could hash the feed URL and timestamp, but the raw form would actually improve discoverability and would not even require a richer client. But the majority of twtxt users in the last poll wanted to stick with content addressing.
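To illustrate the difference (a toy sketch only; the feed URL and timestamp are made up, and this is not the hashing scheme twtxt actually uses): the location-based form stays human-readable, while even a short hash needs a lookup to resolve.

```go
// Illustrative only: the raw location-based reference versus a hashed form.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	feedURL := "https://example.org/twtxt.txt" // hypothetical feed
	timestamp := "2024-09-17T12:34:56Z"

	location := feedURL + "#" + timestamp // discoverable by eye and by any client
	sum := sha256.Sum256([]byte(location))
	hashed := hex.EncodeToString(sum[:])[:12] // opaque, needs a lookup to resolve

	fmt.Println(location)
	fmt.Println(hashed)
}
```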
yarnd actually sends `If-Modified-Since` request headers. Not only can I observe heaps of 304 responses for yarnds in my access log, but in `Cache.FetchFeeds(…)` we can actually see `If-Modified-Since` being deployed when the feed has been retrieved with a `Last-Modified` response header before: https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/cache.go#L1278
Turns out etags with `If-None-Match` are only supported when yarnd serves avatars (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/handlers.go#L158) and media uploads (https://git.mills.io/yarnsocial/yarn/src/commit/98eee5124ae425deb825fb5f8788a0773ec5bdd0/internal/media_handlers.go#L71). However, it ignores possible etags when fetching feeds.
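For reference, a generic conditional fetch looks roughly like this (a sketch, not yarnd's actual code): replay the stored `Last-Modified` value as `If-Modified-Since` and the stored ETag as `If-None-Match`, then treat a 304 as "nothing changed".

```go
// Generic conditional GET sketch for a feed fetcher.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func fetchFeed(url, lastModified, etag string) error {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	if lastModified != "" {
		req.Header.Set("If-Modified-Since", lastModified)
	}
	if etag != "" {
		req.Header.Set("If-None-Match", etag)
	}

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()

	if res.StatusCode == http.StatusNotModified {
		fmt.Println("304: feed unchanged, skip parsing")
		return nil
	}
	// ... parse the body, then remember these values for the next poll:
	fmt.Println("Last-Modified:", res.Header.Get("Last-Modified"))
	fmt.Println("ETag:", res.Header.Get("ETag"))
	return nil
}

func main() {
	if err := fetchFeed("https://example.org/twtxt.txt", "", ""); err != nil {
		log.Fatal(err)
	}
}
```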
I don't understand how the discovery URLs should work to replace the `User-Agent` header in HTTP(S) requests. Do you mind elaborating?
Different protocols are basically just a client thing.
I reckon it's best to just avoid mixing several languages in one feed in the first place. Personally, I find it okay to occasionally write messages in other languages, but if that happens on a more regular basis, I'd definitely create a different feed for other languages.
Isn't the emoji thing "just" a client feature? So, feeds do not even have to state any emojis. As a user I'd configure my client to use a certain symbol for feed ABC. Currently, I can do a similar thing in `tt`, where I assign colors to feeds. On the other hand, what if a user wants to control which symbol should be displayed, similar to the feed's nick? Hmm. But still, my terminal font doesn't even render most emojis. So, Unicode boxes everywhere. This makes me think it should actually be a client-only feature.
Been curious to see if I can filter my access.log file and output a list of my twtxt followers, just in case I've missed someone … I came up with this `awk -F '\"' '/twtxt/ {print $(NF-1)}' /var/log/user.log | grep -v 'twtxt\.net' | sort -u | awk '{print $(NF-1) $NF}' | awk '/^\(/'` spaghetti monster of a command, and I'm wondering if there's a more elegant way of achieving the same thing.
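For comparison, a rough Go version of the same idea (assuming the usual `client/version (+https://feed.url; @nick)` User-Agent convention; the log path and the exclusion of your own pod are assumptions left as an exercise):

```go
// Extract unique twtxt follower mentions from an access log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"sort"
)

// Matches the "(+https://feed.url; @nick)" part of a twtxt client User-Agent.
var uaRe = regexp.MustCompile(`\(\+(https?://\S+); (@\S+)\)`)

func main() {
	f, err := os.Open("/var/log/user.log") // assumed log location
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	seen := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := uaRe.FindStringSubmatch(sc.Text()); m != nil {
			seen[m[2]+" "+m[1]] = true // "@nick https://feed.url"
		}
	}

	followers := make([]string, 0, len(seen))
	for k := range seen {
		followers = append(followers, k)
	}
	sort.Strings(followers)
	for _, follower := range followers {
		fmt.Println(follower)
	}
}
```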
@sorenpeter@darch.dk I've been using weechat for a while; then, when I started learning my way around Emacs, I switched to Circe … a couple of months later I set up ZNC and rolled with it for some time, but wasn't sure if I wanted to stick with it. Now I'm mainly using TheLounge and do find it convenient to access it from anywhere. But quite honestly, I don't have a preference.
I mean, sure, if I want to run it on my toothbrush, why not use something that is accessible everywhere, like md5? crc32? It was chosen a long while back, and the only benefit in changing now is "I can't find an implementation for X", while the downside is that it breaks all existing threads. So…
Did Apple Just Kill Social Apps?
Apple's iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it "the en … ⌘ Read more
Been curious about how people on Pubnix instances manage their feeds, and whether they have access to logs? Sent in a request to join one; still no response.
@doesnm@doesnm.p.psf.lt Got a sample access log? Which tool are you using?
how to parse a caddy access log with the useragent tool? seems it doesn't detect anything in the json
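One workaround sketch: pull the `User-Agent` values out of the JSON first and then feed plain lines to the tool. The field names below are how I remember Caddy's JSON access log being structured, so treat them as assumptions and adjust to your log:

```go
// Read Caddy JSON access log lines from stdin and print User-Agent values.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type accessLine struct {
	Request struct {
		Headers map[string][]string `json:"headers"`
	} `json:"request"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe access.log in
	for sc.Scan() {
		var line accessLine
		if err := json.Unmarshal(sc.Bytes(), &line); err != nil {
			continue // skip lines that are not valid JSON
		}
		for _, ua := range line.Request.Headers["User-Agent"] {
			fmt.Println(ua) // now pipe this into the user-agent tool
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```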
@xuu@txt.sour.is I think it is more tricky than that.
"A company or entity …"
Also, as I understand it, "personal or household activity" (as you called it) is rather strict: an example could be you uploading photos to a webspace behind HTTP basic auth and sending that link to a friend. So, yes, a webserver is involved and you process your friend's data (e.g., when he accessed your files), but it's just between you and him. But if you were to publish these photos publicly on a webserver that anyone can access, then it's a different story, even though you could say that "this is just my personal hobby, not related to any job or money".
If you operate a public Yarn pod and if you accept registrations from other users, then I'm pretty sure the GDPR applies. You process personal data and you don't really know these people. It's not a personal/private thing anymore.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S, but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
@sorenpeter@darch.dk !! I freaking love your Timeline … I kind of have a justified PHP phobia, but I'm definitely thinking about giving it a try!
/ME wondering if it's possible to use it locally just to read and manage my feed at first and then maybe make it publicly accessible later.
@bender@twtxt.net and I saw some conspiracy theory that he knew he was going to be arrested. He was working with French intelligence on a plea deal to defect. And now Russia is freaking out that Ukraine allies can have war comms access.
Yikes! If only they had salty.im!
New beta feature on https://3r1c.net: the "smartphone" TXT version of each article. Accessible via the [M] link.
@prologic@twtxt.net Remember when we used to lose access to e-mail, IM and forum accounts after 30 days of inactivity? … Then storage became cheaper and companies figured out that any tiny bit of someone's data is worth something to someone(thing) else.
@prologic@twtxt.net @lyse@lyse.isobeef.org I checked my logs and all I see are 304 responses and a couple of delayed requests here and there due to rate limiting, but not that many. I'll disable it (the rate limiting) for a couple of days; let me know if you still get the "forbidden access" thing. I may have effed up my configuration trying to deal with some weird stuff.
@prologic@twtxt.net Yes, I suppose that is true. There is an article on Tailscale's site that explains it all in quite a bit of detail: https://tailscale.com/blog/how-nat-traversal-works
To me, with CGNAT, it's a small miracle that a direct connection can be made between peers (as opposed to going through a relay constantly), but it does indeed work. I guess to host it at home you would need to have it WAN-accessible, and if you've already gone to the trouble of port forwarding etc. … well
Not that I could personally do that, but for those with static IPs etc.
I admit I've always compromised on this way too much myself, and to this day I still have Facebook Messenger just to communicate in my family's group chats. Sure, I run it in a Work profile on my GrapheneOS phone that I can switch off at any time, I can completely cut it off from network access any time as well, I have a lot of rudimentary control over it, and I use it as sparingly as possible, but it doesn't change the fact that every time I use it, we're funneling private convos through bloody Meta's servers and trackers etc.
Microsoft Outage Hits Users Worldwide, Leading To Canceled Flights
Microsoft grappled with a major service outage, leaving users across the world unable to access its cloud computing platforms and causing airlines to cancel flights. From a report: Thousands of users across the world reported problems with Microsoft 365 apps and services to Downdetector.com, a website that tracks service disruptions. "We're inve … ⌘ Read more
Hello @bmallred@staystrong.run, I hope you're doing well.
I dunno if it's normal, but it seems like I am unable to access your twtxt.txt file…
I think @abucci@anthony.buc.ci and @stigatle@yarn.stigatle.no are running snac? I didn't have a closer look at snac (no intention of running it), but if that is a relatively small daemon (maybe comparable to Yarn?) that gives you access to the whole world of ActivityPub, then, well, yeah … That's tough to beat.
Yes, I am running `snac` on the same VPS where I run my yarn pod. I heard of it from @stigatle@yarn.stigatle.no, so blame him. `snac` is written in C and is one simple executable, uses very little resources on the server, and stores everything in JSON files (no databases or other integrations; easy to save and migrate your data). It's definitely like yarn in that respect.
I haven't been around yarn much lately. Part of that is that I've been very busy at work and home and only have a limited time to spend goofing off on a social network. Part of it is that I'm finding `snac` very useful: I've connected with friends I'd previously lost touch with, I've found useful work-related information, I've found colleagues to follow, and even found interesting conferences to attend. There's a lot more going on over there.
I guess if I had to put it simply, I'd say I have limited time to play and there are more kids in the ActivityPub sandbox than this one. That's not a ding on yarn (I like yarn and twtxt); I'm just time-constrained.
Even AI coding machines will need to rotate their access credentials every 90 days.
In the matter of political voice in the US, money is speech, and therefore companies use their "free speech" to donate and gain access to politicians. Therefore companies are people. Thanks a lot, "Citizens United".
@lyse@lyse.isobeef.org it's a hierarchical key-value format. I designed it for the network peering tools I use. I can grant access to different parts of the tree to other users, kinda like directory permissions. A basic example of the format is:
@namespace
# multi
# line
# comment
root :value
# example space comment
@namespace.name space-tag
# attribute comments
attribute attr-tag :value for attribute
# attribute with multiple
# lines of values
foo :bar
:bin
:baz
repeated :value1
repeated :value2
Each `@` starts the definition of a namespace, kinda like `[name]` in INI format. It can have comments that show up before it. Then each attribute is `key :value` and can have its own `#` comment lines.
Values can be multi-line and also repeated.
The namespaces and values can also have little metadata tags added to them.
The service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes when they are updated.
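For the curious, a minimal reading sketch based only on the example above (not xuu's actual implementation): `@` opens a namespace, `#` lines are comments, `key :value` sets an attribute, a bare `:value` line continues the previous key, and repeated keys accumulate.

```go
// Toy parser for the hierarchical key-value format shown above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parse(src string) map[string]map[string][]string {
	tree := map[string]map[string][]string{}
	ns, lastKey := "", ""
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "" || strings.HasPrefix(line, "#"):
			continue // blank line or comment
		case strings.HasPrefix(line, "@"):
			ns = strings.Fields(line)[0] // ignore a trailing space-tag for now
			tree[ns] = map[string][]string{}
			lastKey = ""
		case strings.HasPrefix(line, ":"): // continuation of the previous attribute
			if ns != "" && lastKey != "" {
				tree[ns][lastKey] = append(tree[ns][lastKey], strings.TrimPrefix(line, ":"))
			}
		default: // "key [tag] :value"
			if ns == "" {
				continue
			}
			key, rest, _ := strings.Cut(line, " :")
			lastKey = strings.Fields(key)[0]
			tree[ns][lastKey] = append(tree[ns][lastKey], rest)
		}
	}
	return tree
}

func main() {
	cfg := "@namespace\nroot :value\nrepeated :value1\nrepeated :value2\n"
	fmt.Println(parse(cfg)["@namespace"]["repeated"]) // [value1 value2]
}
```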
Qnap's Hybridmount feature makes it possible for me to access the files on OneDrive as if they were available from a local network drive on my Fedora PC. Pretty neat (when everything works).
Having fun with React, yet again. A large part of my job entails (re)learning technologies; luckily I have access to some good resources in the form of training and tutorial sites, all provided by my employer.
I'm telling ya guys, plex.tv had way better shit™. Get it installed on your own server, get access to free content + your own + whatever, and no stupid tracking and bullshit.
Anyone have any ideas how you might identify the processes (PIDs) on a Linux machine that are responsible for most of the disk I/O on that machine and are subsequently causing high I/O wait times for other processes?
Important bit: the machine has no access to the internet, there are hardly any standard tools on it, etc. So I have to get something to it "air gapped". I have terminal access to it, so I can do interesting things like base64-encoding a static binary to my clipboard and pasting it into a file, then base64-decoding it and executing it. That's about the only mechanism I have.
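One air-gap-friendly idea, assuming a Linux kernel exposing `/proc/<pid>/io` and enough privileges to read it: build a small static Go binary, move it over with the base64 trick, and sample per-process `read_bytes`/`write_bytes` (take two samples a few seconds apart to get rates):

```go
// Dump cumulative disk I/O counters per PID from /proc/<pid>/io.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	pids, _ := filepath.Glob("/proc/[0-9]*")
	for _, dir := range pids {
		ioStats, err := os.ReadFile(filepath.Join(dir, "io"))
		if err != nil {
			continue // permission denied, kernel thread, or process exited
		}
		comm, _ := os.ReadFile(filepath.Join(dir, "comm"))

		var rd, wr int64
		for _, line := range strings.Split(string(ioStats), "\n") {
			if v, ok := strings.CutPrefix(line, "read_bytes: "); ok {
				rd, _ = strconv.ParseInt(v, 10, 64)
			}
			if v, ok := strings.CutPrefix(line, "write_bytes: "); ok {
				wr, _ = strconv.ParseInt(v, 10, 64)
			}
		}
		fmt.Printf("%-8s %-20s read=%d write=%d\n",
			filepath.Base(dir), strings.TrimSpace(string(comm)), rd, wr)
	}
}
```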
Unpleasant thought of the day: the #gemini protocol is not ecological, because it is not accessible on old hardware due to forced TLS. Serving text files written in gemtext over #http is better in that case.
`podman` works with TLS. It does not have the `--docker` switch, so you have to remove that and use the exact replacement commands that were in that GitHub comment.
@prologic@twtxt.net what do you mean when you say "Docker API"? There are multiple possible meanings for that. `podman` conforms to some of Docker's APIs, and it's unclear to me which one you say it's not conforming to.
You just have to Google "podman Docker API" and you find stuff like this: https://www.redhat.com/sysadmin/podman-rest-api
What is Podman's REST API? Podman's REST API consists of two components:
- A Docker-compatible portion called Compat API
- A native portion called Libpod API that provides access to additional features not available in Docker, including pods
Or this: https://docs.podman.io/en/latest/markdown/podman-system-service.1.html
The REST API provided by podman system service is split into two parts: a compatibility layer offering support for the Docker v1.40 API, and a Podman-native Libpod layer.
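If you want to poke the compat layer yourself, something like this works against podman's socket (the socket path below is the usual rootful default and may differ on your machine; `/v1.40/...` is the Docker-compatible route the docs mention):

```go
// Hit a plain Docker API endpoint over podman's unix socket.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	socket := "/run/podman/podman.sock" // rootless: /run/user/<uid>/podman/podman.sock
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
	// The host part of the URL is ignored when dialing a unix socket.
	res, err := client.Get("http://d/v1.40/containers/json?all=true")
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res.Status)
	fmt.Println(string(body))
}
```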
Question to all you Gophers out there: how do you deal with custom errors that carry more information and support different kinds of matching?
I started with a simple `var ErrPermissionNotAllowed = errors.New("permission not allowed")`. In my function I then wrap that using `fmt.Errorf("%w: %v", ErrPermissionNotAllowed, failedPermissions)`. I can match this error using `errors.Is(err, ErrPermissionNotAllowed)`. So far so good.
Now, for display purposes, I'd also like to access the individual permissions that could not be assigned. Parsing the error message is obviously not an option. So I thought I'd create a custom error type, e.g. `type PermissionNotAllowedError []Permission`, and give it some `func (e PermissionNotAllowedError) Error() string { return fmt.Sprintf("permission not allowed: %v", []Permission(e)) }`. My function would then return this error instead: `PermissionNotAllowedError{failedPermissions}`.
At some layers I don't care about the exact permissions that failed, but at others I do, at least when accessing them. A custom `func (e PermissionNotAllowedError) Is(target error) bool` could match both the general `ErrPermissionNotAllowed` as well as the `PermissionNotAllowedError`. Same with `As(…)`. For testing purposes, the `PermissionNotAllowedError` would then also try to match the included permissions, so assertions in tests would work nicely. But having two different errors for different matching seems not very elegant at all.
Did you ever encounter this scenario before? How did you address this? Is my thinking flawed?
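One way I've seen this collapsed into a single type (a sketch, not necessarily the answer): keep the sentinel for coarse matching, return only the typed error, and let its `Is` method bridge the two; `errors.As` then exposes the permissions for display and tests.

```go
// Single typed error whose Is method also satisfies the sentinel check.
package main

import (
	"errors"
	"fmt"
)

type Permission string

var ErrPermissionNotAllowed = errors.New("permission not allowed")

type PermissionNotAllowedError []Permission

func (e PermissionNotAllowedError) Error() string {
	// convert to the underlying slice so %v doesn't re-enter Error()
	return fmt.Sprintf("permission not allowed: %v", []Permission(e))
}

// Is makes errors.Is(err, ErrPermissionNotAllowed) succeed for the typed error,
// so callers that only care about the category never see the concrete type.
func (e PermissionNotAllowedError) Is(target error) bool {
	return target == ErrPermissionNotAllowed
}

func assign(perms ...Permission) error {
	// pretend all requested permissions failed
	return PermissionNotAllowedError(perms)
}

func main() {
	err := assign("read", "write")

	fmt.Println(errors.Is(err, ErrPermissionNotAllowed)) // true: coarse check

	var pErr PermissionNotAllowedError
	if errors.As(err, &pErr) { // fine-grained: which permissions failed?
		fmt.Println("failed permissions:", []Permission(pErr))
	}
}
```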
An official FBI document dated January 2021, obtained by the American association "Property of People" through the Freedom of Information Act.
This document summarizes the possibilities for legal access to data from nine instant messaging services: iMessage, Line, Signal, Telegram, Threema, Viber, WeChat, WhatsApp and Wickr. For each software, different judicial methods are explored, such as subpoena, search warrant, active collection of communications metadata ("Pen Register") or connection data retention law ("18 USC §2703"). Here, in essence, is the information the FBI says it can retrieve:
Apple iMessage: basic subscriber data; in the case of an iPhone user, investigators may be able to get their hands on message content if the user uses iCloud to synchronize iMessage messages or to back up data on their phone.
Line: account data (image, username, e-mail address, phone number, Line ID, creation date, usage data, etc.); if the user has not activated end-to-end encryption, investigators can retrieve the texts of exchanges over a seven-day period, but not other data (audio, video, images, location).
Signal: date and time of account creation and date of last connection.
Telegram: IP address and phone number for investigations into confirmed terrorists, otherwise nothing.
Threema: cryptographic fingerprint of phone number and e-mail address, push service tokens if used, public key, account creation date, last connection date.
Viber: account data and IP address used to create the account; investigators can also access message history (date, time, source, destination).
WeChat: basic data such as name, phone number, e-mail and IP address, but only for non-Chinese users.
WhatsApp: the targeted person's basic data, address book and contacts who have the targeted person in their address book; it is possible to collect message metadata in real time ("Pen Register"); message content can be retrieved via iCloud backups.
Wickr: Date and time of account creation, types of terminal on which the application is installed, date of last connection, number of messages exchanged, external identifiers associated with the account (e-mail addresses, telephone numbers), avatar image, data linked to adding or deleting.
TL;DR Signal is the messaging system that provides the least information to investigators.
@prologic@twtxt.net I think those headsets were not particularly usable for things like web browsing because the resolution was too low, something like 1080p if I recall correctly. A very small screen at that resolution close to your eye is going to look grainy. You'd need 4k at least, I think, before you could realistically have text and stuff like that be zoomable and readable for low vision people. The hardware isn't quite there yet, and the headsets that can do that kind of resolution are extremely expensive.
But yeah, even so I can imagine the metaverse wouldn't be very helpful for low vision people as things stand today, even with higher resolution. I've played VR games and that was fine, but I've never tried to do work of any kind.
I guess where I'm coming from is that even though I'm low vision, I can work effectively on a modern OS because of the accessibility features. I also do a lot of crap like take pictures of things with my smartphone then zoom into the picture to see detail (like words on street signs) that my eyes can't see normally. That feels very much like rudimentary augmented reality that an appropriately-designed headset could mostly automate. VR/AR/metaverse isn't there yet, but it seems at least possible for the hardware and software to develop accessibility features that would make it workable for low vision people.
@stigatle@yarn.stigatle.no @prologic@twtxt.net @eldersnake@we.loveprivacy.club I love VR too, and I wonder a lot whether it can help people with accessibility challenges, like low vision.
But Meta's approach from the beginning almost seemed like a joke? My first thought was "are they trolling us?" There's open source metaverse software like Vircadia that looks better than Meta's demos (avatars have legs in Vircadia, ffs) and can already do virtual co-working. Vircadia developers hold their meetings within Vircadia, and there are virtual whiteboards and walls where you can run video feeds, calendars and web browsers. What is Meta spending all that money doing, if their visuals look so weak, and their co-working affordances aren't there?
On top of that, Meta didn't seem to put any kind of effort into moderating the content. There are already stories of bad things happening in Horizon Worlds, like gangs forming and harassing people off of it. Imagine what that'd look like if 1 billion people were using it the way Meta says they want.
Then, there are plenty of technical challenges left, like people feeling motion sickness or disoriented after using a headset for a long period of time. I haven't heard announcements from Meta that they're working on these or have made any advances in these.
All around, it never sounded serious to me, despite how much money Meta seems to be throwing at it. For something with so much promise, and so many obvious challenges to attack first that Meta seems to be ignoring, what are they even doing?
They haven't written the federation code yet. It's literally run on the staging instance. People are paying to access the alpha. Though if you want a code to see what all the fuss is about, there are a few with invites around here.
There is a "right" way to make something like GitHub CoPilot, but Microsoft did not choose that way. They chose one of the most exploitative options available to them. For that reason, I hope they face significant consequences, though I doubt they will in the current climate. I also hope that CoPilot is shut down, though I'm pretty certain it will not be.
Other than access to the data behind it, Microsoft has nothing special that allows it to create something like CoPilot. The technology behind it has been around for at least a decade. There could be a "public" version of this same tool made by a cooperating group of people volunteering, "leasing", or selling their source code into it. There could likewise be an ethically-created corporate version. Such a thing would give individual developers or organizations the choice to include their code in the tool, possibly for a fee if that's something they want or require. The creators of the tool would have to acknowledge that they have suppliers (the people who create the code that makes their tool possible) instead of simply stealing what they need and pretending that's fine.
This era we're living through, with large companies stomping over all laws and regulations, blatantly stealing other people's work for their own profit, cannot come to an end soon enough. It is destroying innovation, and we all suffer for that. Having one nifty tool like CoPilot that gives a bit of convenience is nowhere near worth the tremendous loss that Microsoft's actions in this instance are creating for everyone.
I'm not super a fan of using JSON. I feel we could still use text as the medium. Maybe a modified version to fix any weaknesses.
What if, instead of signing each twt individually, we generated a merkle tree using the twt hashes? Then a signature of the root hash. This would ensure the full stream of twts is intact with minimal overhead, with the added bonus of helping clients identify missing twts when syncing/gossiping.
Have two endpoints. One as the webfinger to link profile details and avatar, like you posted, plus the signature for the merkle-root twt. The other a pageable stream of twts, or individual twts/merkle branches to incrementally access twt feeds.
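A rough sketch of the idea (illustration only, not a spec; the hash and signature choices are placeholders): hash each twt, fold the hashes into a merkle root, and sign just the root. A follower that derives a different root knows a twt was altered or is missing.

```go
// Build a merkle root over twt hashes and sign only the root.
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

func merkleRoot(leaves [][]byte) []byte {
	if len(leaves) == 0 {
		return nil
	}
	level := leaves
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) { // odd node carries over to the next level
				next = append(next, level[i])
				continue
			}
			h := sha256.Sum256(append(append([]byte{}, level[i]...), level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	twts := []string{
		"2024-09-17T12:00:00Z\thello world",
		"2024-09-17T12:05:00Z\tanother twt",
	}
	var leaves [][]byte
	for _, t := range twts {
		h := sha256.Sum256([]byte(t))
		leaves = append(leaves, h[:])
	}
	root := merkleRoot(leaves)

	pub, priv, _ := ed25519.GenerateKey(nil) // nil uses crypto/rand
	sig := ed25519.Sign(priv, root)
	fmt.Println("root ok:", ed25519.Verify(pub, root, sig))
}
```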
@xuu@txt.sour.is yeah, I know less about ISO27k (in part because you have to pay for access to the complete standards documents!!!), but I figured it was similar.
@mckinley@twtxt.net Thank you! I didn't even know about signing and encrypting XML documents. Right, RSS is a little bit messy.
Unfortunately, the autodiscovery document in one of your linked resources does not exist anymore. What annoys me in Atom is the distinction between `<id>` and `<link>`. I always want my URL also to be my ID, so I have to duplicate that, unnecessarily in my opinion.
Also, I never found a good explanation why I should add `<link rel="self" … />` to my feeds. I just do, but I don't understand why. The W3C Feed Validation Service says:
[…] This value is important in a number of subscription scenarios where often times the feed aggregator only has access to the content of the feed and not the location from which the feed was fetched.
This just sounds like a very questionable band-aid for bad software architecture. Why would the feed parser need access to the feed URL at this stage? And if so, why not just pass down the input source? It just doesn't make sense to me.
Also, I just noticed that I reference the http://purl.org/rss/1.0/modules/syndication/ namespace, but don't use it in most of my feeds. Gotta fix that. Must have copied that from my yfav feed without paying attention to what I'm doing.
Your article made me reread the Atom spec and I found out that I can omit the `<author>` in the `<entry>` when I specify a global `<author>` at `<feed>` level. Awesome! Will do that as well and thus reduce the feed size.
@abucci@anthony.buc.ci It's not better than Cat5e. I have had two versions of the device. The old ones were only 200 Mbps; I didn't have the MAC issue, but it's like using old 10BASE-T. The newer model can support 1 Gbps on each port for a total bandwidth of 2 Gbps. I typically would see 400-500 Mbps from my Wifi 6 router. I am not sure if it was some type of internal timeout or it being confused by switching between different wifi access points and seeing the MAC on different sides.
Right now I have my wifi connected directly with Cat6e; this gets me just under my provider's 1.3G downlink. The only thing faster is plugging in directly.
MoCA is a good option; they have 2.5G models in the same price range as the 1G Powerline models, BUT only if you have the coax in the wall already, which puts you in the same spot if you don't. You are for sure going to have a power outlet in every room of the house by code.
Huh… Nope.
HTTP/1.1 200 OK
Content-Length: 407
Content-Type: text/calendar
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: ETag
Permissions-Policy: interest-cohort=()
Content-Security-Policy: default-src 'none'; sandbox
Referrer-Policy: same-origin
Vary: Authorization
BEGIN:VCALENDAR
VERSION:2.0;2.0
PRODID:SandCal
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20220822T180903Z
UID:bb63bfbd-623e-4805-b11b-3181d96375e6
DTSTART;TZID=America/Chicago:20220827T000000
CREATED:20220822T180903Z
LAST-MODIFIED:20220822T180903Z
LOCATION:https://meet.jit.si/Yarn.social
SUMMARY:Yarn Call
RRULE:FREQ=WEEKLY
DTEND;TZID=America/Chicago:20220827T010000
END:VEVENT
END:VCALENDAR
New subscription plan for Apple Music: Voice Plan. Available for many countries. Using Siri to access songs. Meh.
created a little music streaming service for myself. it uses the same access keys as /restricted-wiki/. friends welcome: gemini://sunshinegardens.org/~xj9/arconite/playlists/
"People are taking the piss out of you everyday. They butt into your life, take a cheap shot at you and then disappear. They leer at you from tall buildings and make you feel small. They make flippant comments from buses that imply you're not sexy enough and that all the fun is happening somewhere else. They are on TV making your girlfriend feel inadequate. They have access to the most sophisticated technology the world has ever seen and they bully you with it. They are The Advertisers and they are laughing at you. You, however, are forbidden to touch them. Trademarks, intellectual property rights and copyright law mean advertisers can say what they like wherever they like with total impunity. Fuck that. Any advert in a public space that gives you no choice whether you see it or not is yours. It's yours to take, re-arrange and re-use. You can do whatever you like with it. Asking for permission is like asking to keep a rock someone just threw at your head. You owe the companies nothing. Less than nothing, you especially don't owe them any courtesy. They owe you. They have re-arranged the world to put themselves in front of you. They never asked for your permission, don't even start asking for theirs." - Banksy
Words I cannot type correctly on the first attempt: testimonial, accessibility, successful
@vain@www.uninformativ.de I have seen it pop up on a few feeds around and adopted it into the new parser I built.
The format I have followed has been `'# ' :whitespace: :key-name: :whitespace: '=' :whitespace: :value:`. Keys can be repeated and accessed like an array of values.
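A small sketch of that convention as I read it, with repeated keys collected into a list:

```go
// Parse "# key = value" comment metadata from a twtxt-style feed.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseMeta(feed string) map[string][]string {
	meta := map[string][]string{}
	sc := bufio.NewScanner(strings.NewReader(feed))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "#") {
			continue // not a comment line
		}
		key, value, ok := strings.Cut(strings.TrimPrefix(line, "#"), "=")
		if !ok {
			continue // plain comment without metadata
		}
		k := strings.TrimSpace(key)
		meta[k] = append(meta[k], strings.TrimSpace(value))
	}
	return meta
}

func main() {
	feed := "# nick = example\n# follow = alice https://a.example/twtxt.txt\n# follow = bob https://b.example/twtxt.txt\n"
	fmt.Println(parseMeta(feed)["follow"]) // repeated keys behave like an array
}
```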
@prologic@twtxt.net Yep, it actually extracts everything at parse time (mentions/tags/links/media), so they can be accessed and manipulated without additional parsing. It can then be output as Markdown.
On Kickstarter: SSHatellite, a public-access shell server in space. https://www.kickstarter.com/projects/sshatellite/sshatellite