@jlj@twt.nfld.uk Oh dang, the reply didn’t add the reply. It was to @hxii@0xff.nu, because Firefox shows his shruggy like ¯\_(ツ)_/¯
@prologic@twtxt.net @hxii@0xff.nu I’m certain that it is a markdown thing. It’s that way on other markdown sites like Reddit, because the underscore is being escaped and that suppresses the backslash. Gotta double it up ¯\_(ツ)_/¯
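A hypothetical demo of that escaping behaviour (my addition, using goldmark as a stand-in Go Markdown renderer; the pod’s actual renderer is an assumption): the single backslash is consumed as an escape, while the doubled-up form survives.

```go
// Hypothetical demo, not code from the thread: render both spellings of the
// shruggy with goldmark (a common Go Markdown library; which renderer the pod
// really uses is an assumption here) to see what happens to the backslash.
package main

import (
	"bytes"
	"fmt"

	"github.com/yuin/goldmark"
)

func main() {
	for _, src := range []string{
		`¯\_(ツ)_/¯`,    // as typed: \_ is an escaped underscore, so the backslash is dropped
		`¯\\\_(ツ)\_/¯`, // "doubled up": \\ keeps the backslash, \_ keeps literal underscores
	} {
		var out bytes.Buffer
		if err := goldmark.Convert([]byte(src), &out); err != nil {
			panic(err)
		}
		fmt.Printf("%s  ->  %s", src, out.String())
	}
}
```

Wrapping the whole shruggy in backticks as a code span is the other common workaround.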
@xuu@txt.sour.is Re CPU usage of the lextwt branch. There’s a noticeable ~2x increase in CPU usage across the board since I deployed this at ~16:30 (AEST).
@lyse@lyse.isobeef.org @prologic@twtxt.net @vain@www.uninformativ.de A penny saved is a penny depreciating at a rate of 1.4% per annum.
@lyse@lyse.isobeef.org @prologic@twtxt.net I think lex will do that too currently. Should be able to lock that down.
@lyse@lyse.isobeef.org (#ezmdswq) Looks good to me!
@thewismit@blog.thewismit.com @prologic@twtxt.net I too wonder about this.
@prologic@twtxt.net @thewismit@blog.thewismit.com () Possible, or a pod following any feeds it finds, whether anyone follows them or not, so it has more twts cached.
@prologic@twtxt.net () Should be ready to merge with lex as an opt-in option. Needs more eyes on it and some cleanup.
@prologic@twtxt.net @thewismit@blog.thewismit.com () Ya I get that error a lot. I mostly use the web on mobile as a result.
@thewismit@blog.thewismit.com Soo all good all round? 🤗
@prologic@twtxt.net I think I finally sussed out my hash issue.. now to figure out why I’m losing avatars on restart.
@prologic@twtxt.net @thewismit Not sure.. I’m using Caddy instead of nginx
@prologic@twtxt.net lol.. sorry about the spam
@xuu@txt.sour.is @prologic@twtxt.net (#6jkpxzq) Hmm, from what I can tell it’s parsing OK.. something got broken in the markdown conversion…
@prologic@twtxt.net hmm this line seems to be tricky to parse. will need to look into it.
@prologic@twtxt.net test. Running new parser on txt.sour.is. :D
@prologic It seems to work just fine for the most part. http://darch.dk/twtxt.txt for reference
@darch@twtxt.net I gotta say man I really love your passion! 🤗 hopefully we can all figure out a good branding for the project going forward 👌
@prologic@twtxt.net that would be an interesting idea. I think your current spec of using an SMTP proto is probably best for DM, but having a federation of IRC servers would be interesting for realtime twt propagation.
@prologic@twtxt.net The meta info at the top I added manually; it follows what I have seen in some other twtxt feeds. The new parser will read them.
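For readers unfamiliar with those headers, a small illustrative excerpt of the kind of metadata comment lines meant here (keys follow the twtxt metadata convention; the exact fields and values in any given feed may differ):

```
# nick   = xuu
# url    = https://txt.sour.is/user/xuu/twtxt.txt
# follow = prologic https://twtxt.net/user/prologic/twtxt.txt
```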
@prologic@twtxt.net Very soon. I have an experimental run flag that I am just about to deploy to my node. I have a few show-stoppers holding me back.
Woohoo, #phpub2twtxt - my PHP interface for publishing to my self-hosted twtxt.txt is now online on GitHub
So… @darch@twtxt.net and @sorenpeter@darch.dk are the same person right? 😀
@xuu@txt.sour.is Btw… I noticed your pod has some changes I’m not familiar with, for example you seem to have added metadata to the top of feeds. Can you enumerate the improvements/changes you’ve made, and possibly discuss contributing them back upstream? :D
@prologic@twtxt.net sometimes I think it would be nice to have a XMPP instance. then I remember it’s all XML and I think “nah.”
I am constantly in awe that IRC remains the only realtime chat that isn’t unnecessarily complex. Name another that can run a chatops bot with just nc and sh?
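To make the point concrete, here is a minimal sketch (mine, in Go to match the rest of the thread’s code; the server and channel names are made up) of everything an IRC chatops bot actually needs, the same handful of plain-text lines you could just as well pipe through nc from a shell script:

```go
// Minimal IRC chatops bot sketch. Assumptions: irc.example.org and #ops are
// placeholders, and the only "op" supported is a toy !uptime command.
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

func main() {
	conn, err := net.Dial("tcp", "irc.example.org:6667")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Register: two lines of plain text is the whole handshake.
	fmt.Fprint(conn, "NICK chatopsbot\r\nUSER chatopsbot 0 * :chatops bot\r\n")

	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.Contains(line, " 001 "): // welcome numeric: we are registered
			fmt.Fprint(conn, "JOIN #ops\r\n")
		case strings.HasPrefix(line, "PING "): // keepalive: echo the token back
			fmt.Fprintf(conn, "PONG %s\r\n", strings.TrimPrefix(line, "PING "))
		case strings.Contains(line, "PRIVMSG #ops :!uptime"): // toy chatops command
			fmt.Fprint(conn, "PRIVMSG #ops :still here\r\n")
		}
	}
}
```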
@prologic@twtxt.net @xuu@txt.sour.is Closer! Last bit to finish is a beast. FormatTwtFactory
@vain@www.uninformativ.de @lyse@lyse.isobeef.org @prologic@twtxt.net Nope.. I have updated my gist to include the feeds listing. feeds.txt
@prologic@twtxt.net that seems to match my numbers. are you picking up the few gophers out there?
kinda makes me wonder about the ~300k you have cached. Y’all got the Library of Alexandria over there.
@prologic@twtxt.net In theory you shouldn’t need to let users add feeds.. if they get mentioned by a tracked feed they will get added automagically. On a pod it would just need to scan the twtxt feed to know about everyone.
@prologic@twtxt.net Sounds about right. I tend to try to build my own before pulling in libs; I learn more that way. I was looking at using it as a way to build my twt mirroring idea, and to test the lex parser with a wide-ranging corpus to find edge cases (the PGP-signed feeds, for one).
@prologic@twtxt.net The add function just scans everything recursively.. but the idea is to just add any new mentions and then have a cron to update all known feeds
@prologic@twtxt.net Yeah, it reads a seed file (I’m using mine), scans for any mention links, and then scans those recursively. It reads from http/s or gopher. I don’t have much of a DB yet.. it just writes the feed to disk and checks modified dates.. but I will add a DB that has hashes/mentions/subjects and such.
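A rough sketch of that crawl loop (my own illustration, not the actual branch code; HTTP only, with gopher support, on-disk caching and modified-date checks left out):

```go
// Spider sketch: start from one seed feed, pull @<nick url> mentions out of
// every twt, and recursively fetch any feed URL not seen before.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"regexp"
)

// Matches the extended twtxt mention syntax @<nick url>.
var mentionRe = regexp.MustCompile(`@<\S+ (https?://\S+)>`)

func crawl(url string, seen map[string]bool) {
	if seen[url] {
		return
	}
	seen[url] = true

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("skip:", url, err)
		return
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		for _, m := range mentionRe.FindAllStringSubmatch(scanner.Text(), -1) {
			crawl(m[1], seen) // recurse into every feed that gets mentioned
		}
	}
}

func main() {
	seen := map[string]bool{}
	crawl("https://twtxt.net/user/prologic/twtxt.txt", seen) // seed feed (example)
	fmt.Println("feeds discovered:", len(seen))
}
```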
@prologic@twtxt.net It is pretty basic, and depends on some local changes I am still working out on my branch.. https://gist.github.com/JonLundy/dc19028ec81eb4ad6af74c50255e7cee
@lyse@lyse.isobeef.org @prologic@twtxt.net Very curious… I worked on a very similar track. I built a spider that will trace off any follows, comments and mentions from other users, and came up with: twters: 744, total: 52073
I just built a poc search engine / crawler for Twtxt. I managed to crawl this pod (twtxt.net) and a couple of others (sorry @etux@twt.u53.us and @xuu@txt.sour.is I used your pods in the tests too!). So far so good. I might keep going with this and see what happens 😀
@prologic@twtxt.net that I do. lol. I am xuu on hackint.org and freenode
@prologic@twtxt.net sure. I don’t use signal much because I have to disclose my personal phone. Telegram? https://www.t.me/xypheri
@xuu@txt.sour.is Are you interested in getting on Signal and swapping contact details and such, so we can discuss and collaborate on some ideas in more real-time? You have great ideas; I think we could benefit from a bit more real(ish) time 😀
go run ./cmd/stats https://twtxt.net/user/prologic/twtxt.txt
@xuu@txt.sour.is @prologic@twtxt.net Your feed was great for catching edge cases ;)
@prologic@twtxt.net https://github.com/JonLundy/twtxt/tree/xuu/integrate-lextwt I made a stats command for the new parser that extracts a bunch of info about a twtxt file. run like: go run ./cmd/stats https://twtxt.net/user/prologic/twtxt.txt
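For a sense of what “a bunch of info” could mean, a rough regex-based approximation (not the actual cmd/stats, which drives the lextwt parser; the patterns and counts here are simplified):

```go
// Rough stats sketch: count twts, mentions, tags and links in a twtxt feed
// fetched over HTTP. Metadata/comment lines starting with '#' are skipped.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"regexp"
	"strings"
)

var (
	mentionRe = regexp.MustCompile(`@<[^>]+>`)
	tagRe     = regexp.MustCompile(`#<[^>]+>`)
	linkRe    = regexp.MustCompile(`https?://\S+`)
)

func main() {
	resp, err := http.Get(os.Args[1]) // e.g. https://twtxt.net/user/prologic/twtxt.txt
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var twts, mentions, tags, links int
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		twts++
		mentions += len(mentionRe.FindAllString(line, -1))
		tags += len(tagRe.FindAllString(line, -1))
		links += len(linkRe.FindAllString(line, -1))
	}
	fmt.Printf("twts: %d\nmentions: %d\ntags: %d\nlinks: %d\n", twts, mentions, tags, links)
}
```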
@prologic@twtxt.net Yep! Some of the lexer is directly copied from monkey-lang. Love that book series.
@prologic@twtxt.net ah I need to add an edge case for naked urls with fragments.
@prologic@twtxt.net Yep. It actually extracts everything at parse time, like mentions/tags/links/media, so they can be accessed and manipulated without additional parsing. It can then be output as Markdown.
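A hand-rolled illustration of that shape (my own types, not the real lextwt ones): parse once into typed elements, and then mentions, tags, and Markdown output all come off the same structure with no re-parsing.

```go
// Sketch of a twt parsed into typed elements. The type and field names here
// are invented for illustration; the real lextwt AST differs.
package main

import "fmt"

// Elem is anything a twt can contain.
type Elem interface{ Markdown() string }

type Text struct{ S string }
type Mention struct{ Nick, URL string }
type Tag struct{ Name, URL string }

func (t Text) Markdown() string    { return t.S }
func (m Mention) Markdown() string { return fmt.Sprintf("[@%s](%s)", m.Nick, m.URL) }
func (t Tag) Markdown() string     { return fmt.Sprintf("[#%s](%s)", t.Name, t.URL) }

// Twt keeps the parsed elements around, so mentions/tags are available
// directly instead of being re-extracted with regexes later.
type Twt struct{ Elems []Elem }

func (t Twt) Mentions() (out []Mention) {
	for _, e := range t.Elems {
		if m, ok := e.(Mention); ok {
			out = append(out, m)
		}
	}
	return out
}

func (t Twt) Markdown() string {
	var s string
	for _, e := range t.Elems {
		s += e.Markdown()
	}
	return s
}

func main() {
	twt := Twt{Elems: []Elem{
		Mention{"xuu", "https://txt.sour.is/user/xuu/twtxt.txt"},
		Text{" nice work on "},
		Tag{"lextwt", "https://twtxt.net/search?tag=lextwt"},
	}}
	fmt.Println(twt.Markdown())
	fmt.Println("mentions:", len(twt.Mentions()))
}
```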
@adi@twtxt.net @prologic@twtxt.net Using regex, which can be a rather inexact science ;)
@prologic@twtxt.net Kinda.. it can parse the twts into an AST, but most of the formatting code expects a string to run regex over rather than the parsed AST. That’s what I am working out next.
@prologic@twtxt.net as promised! https://github.com/JonLundy/twtxt/blob/xuu/integrate-lextwt/types/lextwt/lextwt_test.go#
the lexer is nearing completion.. the tough part left is rooting out all the formatting code.
@prologic@twtxt.net Ooh, I am adding that to my test suite