

Interests: programming, video games, anime, music composition
I used to be on kbin as e0qdk@kbin.social before it broke down.
I don’t know, but there’s a related thread here: https://slrpnk.net/post/18399280
No answers there (as of time of writing this comment), but someone did say they asked about it on IRC.
I’m not involved with running it – and given that this is likely to be a politics-heavy community, I’m probably going to stay out of it for the most part. 🙃️ I just happened to see the announcement post that @3dmvr@lemm.ee made to !newcommunities@lemmy.world about a week ago and remembered it.
If you or @3dmvr@lemm.ee want to start a thread in !communitypromo@lemmy.ca feel free though!
I may be misunderstanding your question, but black holes are regions of space that have non-negligible size; the boundary between what can escape and what can’t is called the event horizon. The singularity is what happens at the center.
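To put a number on that size: for a simple non-rotating, uncharged black hole, the event horizon sits at the Schwarzschild radius, r_s = 2GM/c^2 – so it scales directly with mass. A black hole with the mass of the Sun would have r_s of roughly 3 km, for example.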
!unlockthread@lemm.ee is probably what you’re thinking of.
The current solution is for bots on participating instances to automatically perform the search + subscribe song-and-dance routine. This is pretty surprising to some people[1], and it requires someone to set it up in addition to the instance itself, but it does work.
[1]: I tried to translate an explanation into Japanese for some folks experimenting with Mastodon/Lemmy interaction yesterday – they thought Lemmy had a ton of spam accounts following groups instantly…
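For anyone curious what the song-and-dance actually is: it’s just a search (which makes the instance fetch the community) followed by a follow request. A rough, untested curl sketch against a recent Lemmy HTTP API – the instance, account, and community ID are all placeholders, and jq is assumed for pulling fields out of the JSON:

```
# Log in as the bot account to get a token (placeholder credentials).
JWT=$(curl -s -X POST 'https://bot-instance.example/api/v3/user/login' \
  -H 'Content-Type: application/json' \
  -d '{"username_or_email": "subscribe-bot", "password": "..."}' | jq -r '.jwt')

# Searching for the community by its full handle forces the local instance
# to fetch it from the remote server if it hasn't seen it before.
curl -s -H "Authorization: Bearer $JWT" \
  'https://bot-instance.example/api/v3/search?q=!somecommunity@remote.example&type_=Communities'

# Follow it so the remote server starts pushing new activity to this instance.
# (The community_id comes from the search response; 123 is a placeholder.)
curl -s -X POST 'https://bot-instance.example/api/v3/community/follow' \
  -H "Authorization: Bearer $JWT" -H 'Content-Type: application/json' \
  -d '{"community_id": 123, "follow": true}'
```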
Check your language settings. That usually means the language the comments are tagged with is disabled in your settings (typically either English or Uncategorized).
As someone who watches gaming footage on PeerTube, I’ve mostly interacted with single creator instances – i.e. either the creator themselves is self-hosting it or it’s run by a fan as a non-YT backup of their Twitch/Owncast/whatever VODs. Those instances generally do not allow anyone else to upload.
Discoverability sucks but the way I’ve found them is by using SepiaSearch and looking for specific words from game titles. I imagine the way most other people find them is that they already know the content creator from Twitch and want to find an old VOD that isn’t archived on YT (e.g. because of YT’s bullshit copyright system) – but that’s just a guess.
Wait, am I also an LLM? What’s happening? Why have we made robots whose only job is to dilute reality?
I’m sorry. Your purpose is to pass the butter. Through your colon.
YMMV outside the US, but typeface is explicitly NOT copyrightable there at least: https://www.ecfr.gov/current/title-37/chapter-II/subchapter-A/part-202/section-202.1
There’s a loophole about digital font files since parts of common font file formats are considered copyrightable computer programs, but the shape itself is not protected by copyright.
Wikipedia has an article that includes some details from other jurisdictions: https://en.wikipedia.org/wiki/Intellectual_property_protection_of_typefaces
(If you really need to depend on it though, talk to a lawyer who specializes in IP law in the jurisdictions that you care about.)
It’s surprising that there doesn’t seem to be an obvious way in the UI to just see a list of creators/channels on a local instance. So, that’s the first thing I’d change to improve discoverability.
The way I currently find relevant content is by going to Sepia Search, putting in exact words that I think are likely to be in the title of at least one video on a channel that would likely also have a lot of other relevant content, and then going through that channel’s playlists. Those searches often lead me to single user instances with only one or two channels (e.g. a channel that has a backup of that user’s YouTube content and a channel with a backup of their Twitch or OwnCast or whatever streams).

When a search leads me to a generalist instance or one with a relevant subject/theme though, I’ve had little luck finding content from anyone who hasn’t posted recently. Often the content that is most relevant to me is not what is newest but the archives from years ago. (New content is relevant once I want to follow someone in particular, but it’s not what I want to see first.)
Another issue I’ve encountered is with the behavior of downloaded videos. I greatly appreciate that PeerTube provides a URL for direct download, and I prefer to watch videos in my own player downloaded in advance (so I can watch offline; pause and resume trivially after putting my computer to sleep; etc). H264 MP4 works fine for this, but the download seems to be some sort of chunked variant of it (for HLS?) which requires the player to read in the entire file to figure out the length or seek accurately. Having to wait a minute or two to be able to seek each time I open a large video file off my HDD is an irritating papercut. I suspect there’s likely a way to fix it by including an index in the file (or in a sidecar file) but I don’t know how to do it – short of re-encoding the entire video again, which I’d rather not do since it both takes a long time and can result in quality loss. (EDIT: `ffmpeg -i input.mp4 -vcodec copy -acodec copy -movflags faststart output.mp4` repacks the video quickly.) This usually doesn’t affect newly added videos (where the download link includes the pattern `/download/web-videos` and a warning is shown that it’s still being transcoded) but does when that’s done (the URL includes `/download/streaming-playlists/hls/videos` instead); so, this is something that happens as a result of PeerTube’s reprocessing.
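If you want to check whether a file you already have is affected, one trick (I believe this works, but treat it as a sketch) is to peek at the top-level MP4 atom order in ffprobe’s trace output:

```
# If 'mdat' (the media data) shows up before 'moov' (the index), or the file
# is a long run of 'moof' fragments, players may have to scan the whole file
# before they can report the length or seek accurately.
ffprobe -v trace -i input.mp4 2>&1 | grep -o "type:'....'" | uniq | head
```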
Downloads from the instances that I’ve found to be most relevant to me are also pretty unreliable (the connection is slow and drops a lot), so I use wget with automatic retries (and it sometimes still needs manual retries…) rather than downloading through my browser, which tends to fail and then annoyingly start over completely when I retry… It would be really nice if I could check that I’ve downloaded the file correctly and completely with a sha256 hash or something.
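For what it’s worth, here’s roughly what that looks like (the URL is a placeholder). Since instances don’t publish hashes as far as I know, the best I can do is checksum my own copy so I can at least compare re-downloads:

```
# Resume a partial download (-c) and retry indefinitely on flaky connections.
wget -c -t 0 --retry-connrefused --waitretry=5 \
  'https://peertube.example/download/streaming-playlists/hls/videos/SOME-VIDEO-ID.mp4'

# No server-published checksum to verify against, but hashing my own copy
# lets me compare a future re-download against it.
sha256sum SOME-VIDEO-ID.mp4
```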
Hmm. Not sure. Some bosses that immediately came to mind were O&S in Dark Souls 1 and Hume in Eternal Daughter though. I think I had more trouble with the latter, but it’s been so long that I’m not sure.
I still have and use an Xbox 360 controller despite not having an Xbox. The fact that it takes AA batteries and I can just pop out my rechargeable ones and swap 'em onto a generic charger instead of having to hook the controller up to a special charger (and then wait, or use it with the cable) is quite nice.
I’ve had to review resumes when we were trying to find someone else to bring on the team. My boss dumped hundreds of resumes on me and asked if any of them looked promising – that’s after going through whatever HR bullshit filters were in place – on top of all the other work I was already behind on since we didn’t have enough staff. That is the state of mind you should expect someone to be in while looking at your project.
If anyone looks at your repo, they’re going to check briefly to see if you have any clue at all what you’re doing and whether your code looks like it’s written by the kind of person they can stand working with. Don’t make any major blunders that someone would notice with a quick glance at the repository. Be prepared to talk about your project in detail and be able to explain why you made the choices you did – you might not get asked, but if you are, you should be able to justify your choices. If it gets to the point of an interview and your project looks like something that could’ve been done easily in 100 lines of Python, you’d better believe I’m going to ask why the hell you wrote it in C in 2025… and I say that as someone who has written a significant amount of C professionally.
If you say you have multiple years of professional programming experience and send me a link to a repo that has `.DS_Store` in it… your resume is going straight into the trash.
> what is the legitimate use case?
You do a whole bunch of research on a subject – hours, days, weeks, months, years maybe – and then find something that sparks a connection with something else that you half remember. Where was that thing in the 1000s of pages you read? That’s the problem (or at least one of the problems) it’s supposed to solve.
I’ve considered writing similar research tools for myself over the years (e.g. save a copy of the HTML and a screenshot of every webpage I visit automatically marked with a timestamp for future reference), but decided the storage cost and risk of accidentally embarrassing/compromising myself by recording something sensitive was too high compared to just taking notes in more traditional ways and saving things manually.
> It’s an absolute long-shot, but are there any careers that feel like the research part of grad school, but without the stuff that’s miserable about it (the coursework and bureaucracy)?
There’s no getting away from the bureaucracy, but it is possible to get career positions in academia – and I don’t mean as a professor, either. Check your university’s job site. If they’re big, they almost certainly have one. Get to know your professors too, and make sure they’re aware of the things you’re good at (even beyond your immediate subject area if you have additional hobbies/interests/skills) so they can help you find a landing place if things don’t work out where you are. If you’re willing to do programming – even if you don’t like it – there is a hell of a lot of stuff that needs to be done in academia, and some of it pays enough to live on. If there’s some overlap, it’s possible to carve out a niche and evolve a role into a mix of stuff that you’re good (enough) at but dislike, and stuff that you like but which doesn’t always have funding…
Don’t know about PGE’s API, but for the OCR stuff, you may get better results with additional preprocessing before you pass images into tesseract – e.g. crop to just the region of interest, and try various image processing techniques to make the text pop out better if needed. You can also run tesseract specifically telling it you’re looking at a single line of text, which can give better results (e.g. `--psm 7` for the command line tool). OCR is indeed finicky…
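As a concrete (untested) example of that kind of preprocessing with ImageMagick and the tesseract CLI – the file names and crop geometry are made up and would need tuning for the actual screenshots:

```
# Crop to just the region with the reading, flatten to grayscale, and
# stretch the contrast so the text stands out from the background.
convert bill.png -crop 400x60+120+300 -colorspace Gray -contrast-stretch 2%x2% line.png

# --psm 7: treat the image as a single line of text.
tesseract line.png stdout --psm 7
```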
Coatrack
Congrats on finishing!
Communities/magazines are similar to subreddits, but unlike subreddits they can be hosted on servers run by unrelated organizations and still interact. Different instances can and do have different ideas about how things should be run but you can still send messages back and forth unless the admins have blocked it.
The first message is warning you that you’re looking at a community that is not local to your instance. You might not be able to see all the posts from that community on your instance – for example, there may be older posts that never got copied over because they were made before your instance first found out the community exists.
If I understand mbin’s code correctly, the second message means that no one is subscribed to the community locally, so your instance isn’t getting updated by the remote source any more. You need to have at least one local subscriber to get updates. If you’re interested in the community, subscribe to it.
I think this is the code that produces those messages if anyone wants to dig into it further: https://github.com/MbinOrg/mbin/blob/main/templates/magazine/_federated_info.html.twig
The definitions for the message strings (in English) are here: https://github.com/MbinOrg/mbin/blob/main/translations/messages.en.yaml