The point is how expensive it is. For your one request, you wait two seconds and the power draw is a minor inconvenience. For a massive botfarm, it adds up to days of CPU time and a significant portion of the electricity bill.
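As a back-of-the-envelope sketch of that asymmetry (the numbers below are illustrative, not any real CAPTCHA's parameters):

```shell
# Cost of a hypothetical 2-second proof-of-work per request.
# One visitor barely notices; a scraper pays in CPU-days.
requests=1000000      # hypothetical botfarm volume
pow_seconds=2         # CPU time burned per request
total=$((requests * pow_seconds))
echo "$((total / 86400)) CPU-days"   # 2,000,000 s / 86,400 s/day ≈ 23 CPU-days
```

The same work that costs a human two seconds costs the scraper weeks of compute, which is the whole point of the scheme.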
- 0 Posts
- 313 Comments
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•Can the GNU/ Linux Foundation Fork Android and Maintain it?
16 · 9 days ago
Let’s ignore the “is it possible” and imagine what would happen if it were. Whatever entity forks AOSP would start off with (next to) no userbase. The platform “Android” will remain Google’s AOSP, including some proprietary components. Whenever Google decides, it can require apps on the Google Play Store to target a new version of the Android system API. This is often a breaking change; apps that update won’t work on older Android. There is nothing stopping Google from creating complex breaking changes that tie into their proprietary components, killing off any attempt at running Google Play Store apps on older or “fully FOSS” Android. Even if a hard fork of AOSP existed, it would not remain compatible with the vast majority of applications.
So even if this could happen, it won’t. Nobody is going to invest in hard forking a project that is going to be killed off by Google’s monopoly.
The much better (long term) option is to stay completely outside AOSP, like with mobile Linux distros such as postmarketOS. Right now, it is underdeveloped and not an option as a daily driver for most. But over time, this is the only feasible option that can give control back to the user.
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•"Windows" process using too much Memory (Dual boot setup)
7 · 16 days ago
Heavily leaning towards malware; normal software tends to use the same name on disk and in RAM, so this looks like it is trying to hide itself.
Since there’s now nothing to go off of for how this got on your system, the best course of action is to back up your documents and reinstall your system fresh. To avoid malware in the future, stick to the built-in app store and system repositories where possible.
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•"Windows" process using too much Memory (Dual boot setup)
5 · 16 days ago
This doesn’t really say much; it could be legitimate software thinking it crashed, or it could be malware trying to hide itself.
Try seeing if `sudo find / -type f -name windows` tells you anything about where it’s installed. This command searches through `/` (all files) to find a file (`-type f`) named `windows` (the same as the process name).
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•"Windows" process using too much Memory (Dual boot setup)
252 · 16 days ago
Assuming this is malware, it might be really hard to remove, depending on its complexity. The best course of action is much like on Windows: back up your personal files, figure out how the malware got on your PC (so you can avoid it next time), then reinstall the operating system.
For backing up personal files, stick to documents, media, etc. Do not include executables (like installed games), and be very careful with config files (and system files), basically only back these up if you know what’s in them is legitimate.
You can find more about the process in the `/proc/4212/` directory (4212 being the number in the left column of top). By running `ls -l` in that directory, you should be able to see where the `exe` symlink points, which tells you where the program is installed. This might give you a clue as to where it came from (or it might not, depending on how the malware is made). If you suspect it is not malware, based on information on your system, look it up online before trusting it. I have personally never seen a root-owned “windows” process, which is why I’m heavily leaning towards this being malware.

If you feel like you know where the malware came from, or you’re stuck and struggling to find out more, reinstall your operating system to get rid of it. Malware varies in complexity: what you see on the surface might be the whole thing, or it could have mechanisms to reinstall itself after removal. That is why reinstalling the operating system is the safer option.
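As a sketch, inspecting the process via `/proc` could look like this (4212 is the PID from the screenshot; substitute whatever top shows on your system):

```shell
pid=4212                                  # PID from the left column of top
ls -l /proc/$pid/exe                      # symlink to the binary actually running
ls -l /proc/$pid/cwd                      # directory it was started from
tr '\0' ' ' < /proc/$pid/cmdline; echo    # full command line (NUL-separated on disk)
```

Note that `exe` can show `(deleted)` if the binary was removed after launch, which is itself a common malware trick.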
Poettering only very recently left microslop
deadcade@lemmy.deadca.de to
Technology@lemmy.world•Nvidia "confirms" DLSS 5 relies on 2D frame data as testing reveals hallucinationsEnglish
22 · 1 month ago
Yes, or like we saw in the demo, someone’s arm disappears, a ball becomes a blurry shapeless blob, and so on.
This tech is the same tech that powers other “Generative AI”, meaning exactly the same issues with asking for a hand and getting one with 7.5 fingers can now happen in real time, in video games supporting DLSS 5.
It is straight up an AI slop filter over top of a game. There’s not much more to say about it.
deadcade@lemmy.deadca.de to
Asklemmy@lemmy.ml•What are some newsletters that you love seeing in your inbox?
1 · 1 month ago
Not much of a newsletter, but the Dolphin Emulator progress reports.
Lots of things wrong with this, but one I haven’t seen mentioned yet: CachyOS literally depends on Arch Linux, yet is rated more “independent” than it?
These are terrible axes on which to plot operating systems, and limiting yourself to such low resolution with no overlap doesn’t help.
deadcade@lemmy.deadca.de to
Selfhosted@lemmy.world•My NFS timeouts / dirty page writeback problem.English
7 · 1 month ago
Hell yeah! 10x speed improvement for free!
deadcade@lemmy.deadca.de to
Selfhosted@lemmy.world•My NFS timeouts / dirty page writeback problem.English
8 · 1 month ago
What I’m noticing more is that you can keep a consistent 11.4 MB/s, which feels relatively close to what you’d usually pull through a 100 Mbit/s link (after accounting for overhead). If that’s the case, it shouldn’t matter how the NFS client chunks the data; the throughput to the NAS stays the same. Which means you’re looking at a broken NFS server that can’t handle large single transmissions.
If it’s not the case, and you’ve got a faster network link, it seems that the NAS just can’t keep up when given >2gb at once. That could be a hardware resource limitation, where this fix is probably the best you can do without upgrading hardware. If it’s not a resource limitation, then the NFS server is misbehaving when sent large chunks of data.
Basically, if your network itself (like switches, cables) isn’t broken, you’re either dealing with a NAS that is severely underspecced for what it’s supposed to do, or a broken NFS server.
Another possibility for network issues is that your Proxmox host thinks it has gigabit (or higher), but some device or cable between your server and the NAS limits the speed to 100 Mbit/s. I think that would be likely to cause the specific issues you’re seeing, and something like mixed cable speeds would explain why the issue is so uncommon and hard to find. The smaller buffers and more frequent acknowledgements would sidestep this.
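A quick sanity check on the arithmetic behind the 100 Mbit/s guess (the overhead figure is a rough assumption, and the interface name in the last line is a placeholder):

```shell
# 100 Mbit/s = 12.5 MB/s raw; TCP/IP + NFS framing typically eats roughly
# 5-10%, which lands right around the 11.4 MB/s being observed.
awk 'BEGIN { printf "%.1f MB/s raw\n", 100 / 8 }'
awk 'BEGIN { printf "%.1f MB/s after ~8%% overhead\n", 100 / 8 * 0.92 }'
# To check what each NIC actually negotiated (interface name is a guess):
# ethtool eth0 | grep -i speed
```

If every hop really negotiated gigabit, the same math would predict ~115 MB/s, an order of magnitude above what’s being seen.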
Do note I am also not an expert in NFS, I’m mostly going off experience with the “fuck around and find out” method.
deadcade@lemmy.deadca.de to
Selfhosted@lemmy.world•My NFS timeouts / dirty page writeback problem.English
8 · 1 month ago
Sounds like a band-aid fix to a completely different problem. If NFS is timing out, something is clearly broken. Assuming it’s not your network (though it could very well be), it’s likely the Synology NAS. Since they’re relatively closed devices afaik, I sadly can’t help much in troubleshooting. And sure, dumping 25 GB on it all at once is heavy, but being a NAS, it should handle that.
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•Artix isn't going to comply with age-gating.
111 · 1 month ago
Iirc, the XZ backdoor specifically targeted Red Hat and Debian, which for some reason patch OpenSSH to link against libsystemd. Afaik even upstream Arch was unaffected, not just Artix. The exploit code, though non-functional there, still made its way onto your system (assuming you updated while it was in a release version).
I’m not defending systemd though, it’s clear that Poettering’s goals do not align with the rest of the Linux community. I’m saying that Artix not being affected by the XZ backdoor is not a good argument for why to use Artix or avoid systemd.
It’s like saying “Linux doesn’t get malware” because most desktop malware targets the OS with the largest desktop userbase, Windows. This alone doesn’t suddenly make Linux “better”. That doesn’t mean there aren’t other reasons to avoid Windows.
The smaller/newer distros have no evidence of staying around for years, so it’s hard to judge whether they’ll be around in another couple years. Distros like Bazzite are definitely interesting, but you can’t reliably predict whether it’ll get updates in 10 years. There are stable community-led distros that have been around for a long time, like Debian.
deadcade@lemmy.deadca.de to
Open Source@lemmy.ml•The Web Scraping Consent Model Was Always Broken. AI Just Made It Obvious.
4 · 1 month ago

> Personally, I have nothing against crawlers and bots
If they’re implemented reasonably, web crawlers aren’t the issue. The problems with them mostly stem from laziness and cost cutting. Web crawlers by AI comapnies frequently DDoS entire services, especially Git forges like Gitlab or Forgejo. Not “intentionally”, but because these crawlers will blindly request every URL on a service, no matter the actual content. This is cheaper for the AI company to implement this way, and scan through the data later. But this also leads to the service having to render and serve tens of thousands of times as much content as is actually present. They are made to try and hide themselves doing so, which is the biggest reason we see “modern” PoW CAPTCHAs everywhere, like Anubis or go-away.
Robots.txt used to work because search engines needed there to be an “internet” to provide their service. Pre-AI web crawlers were written with the understanding that taking down a service meant a website dropping off the internet, which lowered the overall quality of search results.
I’ve had LLM webcrawlers take down my whole server by DDoSing it several times. Pre-LLMs, a git forge would take maybe a couple hundred MB of RAM and be mostly idle while not in use. Nowadays, without a PoW CAPTCHA in front, there are often over 10.000 active concurrent connections to a small, single person Git forge. This makes hosting costs go through the roof for any smaller entity.
deadcade@lemmy.deadca.de to
Linux@lemmy.ml•No audio for videos but only for one user and with pipewire
1 · 1 month ago
Is the video player application itself muted in PipeWire? What’s the output device set to?
You can check these things with an application like `pavucontrol`. PipeWire (and PulseAudio) have a default audio device, but individual applications can select a different one if they want to.

Another great category of utilities for PipeWire is virtual patchbays. If you’re looking for something simple, helvum or qpwgraph are great. For all the technical details in a GUI, coppwr provides a good experience.
deadcade@lemmy.deadca.de to
Games@lemmy.world•Lutris now being built with Claude AI, developer decides to hide it after backlashEnglish
3 · 1 month ago
Absolutely true, but there’s one clear and obvious way: drop support for the project yourself.
If a FOSS project is archived or unmaintained and large enough, someone else will pick up where the original developer left off.
FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.
deadcade@lemmy.deadca.de to
Games@lemmy.world•Lutris now being built with Claude AI, developer decides to hide it after backlashEnglish
9045 · 1 month ago
It’s still made by the slop machine, the same one that could only be created by stealing every human-made artwork ever published. (And this is not “just one company”; every LLM has this issue.)
Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.
If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.
deadcade@lemmy.deadca.de to
Technology@lemmy.world•10% of Firefox crashes are caused by bitflipsEnglish
3 · 2 months ago
The exact numbers for when it messes something up but keeps running are unknown and highly unpredictable.
According to the post above, about 10% of Firefox crashes (more numbers in the post) are caused by this. It’s not unreasonable to say those crashes could instead have had the bitflip land on content, changing maybe a character on the page or something.
Note that it’s not 10% of users, as that’s really hard to figure out. Someone with bad RAM will likely crash more often.

For me it was the inappropriate description on the last post. This is not an NSFW community and I didn’t want to read that.