• 0 Posts
  • 36 Comments
Joined 2 years ago
Cake day: June 6th, 2023

  • The documentation you were looking at might’ve been the Matrix specification.

    There is documentation on how to host a Matrix server, I’d honestly recommend using containers (maybe docker compose) for this one. It can definitely be confusing setting up a service like a Matrix homeserver for the first time.

    As for other people finding it, you can (and should) make your homeserver invite-only. It’s also possible to disable federation, which makes the server self-contained. It will not accept incoming connections from other servers, nor make outgoing connections to other servers.

    This does mean everyone you want to talk with has to be on your homeserver. There are probably better options available if you want to avoid Matrix's federation issues, like Spacebar.
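
    For the docker compose route, a rough sketch of a single-container Synapse setup might look like this (the data path and port mapping are placeholders to adapt; check the Synapse docs for the full setup, including generating the initial config):

        # docker-compose.yml — minimal Synapse homeserver (sketch, not a complete config)
        services:
          synapse:
            image: matrixdotorg/synapse:latest
            restart: unless-stopped
            volumes:
              - ./synapse-data:/data    # homeserver.yaml, signing keys, and media live here
            ports:
              - "8008:8008"             # client API; put a TLS reverse proxy in front

    In the generated homeserver.yaml, options along the lines of enable_registration: false (accounts only created by an admin or invite) and an empty federation_domain_whitelist give you the invite-only, non-federating setup described above; double-check the exact option names against the documentation for your Synapse version.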


  • Web push for notifications. Sure, there are privacy implications, but it’s already near universal. There are other options like ntfy.sh if you’re not limited to existing infrastructure. UnifiedPush also works well as a protocol for push notifications.

    Everything else can be handled in-app. Password reset will have to be done by an admin, though it’s completely doable for a small selfhosted service.

    Some of the downsides OP listed may or may not always apply, but there are always downsides. Either you have to set up your own email server (with extra maintenance burden), or your “selfhosted” app suddenly relies on third-party infrastructure, like your email provider (or those of other users on your instance).
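
    As a concrete example of the ntfy.sh route, publishing a notification is just an HTTP POST to a topic URL (the topic name below is made up; on the public server anyone who guesses the topic can read it, so self-host or pick something unguessable):

        # publish a push notification to the topic "my-backups-x7q2"
        curl -H "Title: Backup finished" \
             -d "Nightly backup completed without errors" \
             https://ntfy.sh/my-backups-x7q2

    Subscribing to the same topic in the ntfy app (or a UnifiedPush-capable app) then delivers the message to the phone.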




  • This is heavily sensationalized. UEFI “secure boot” has never been “secure” if you (the end user) trust vendor or Microsoft signatures. Alongside that, this ““backdoor”” (diagnostic/troubleshooting tool) requires physical access, at which point there are plenty of other things you can do with the same result.

    Yes, the impact is theoretically high, but it’s the same for all the other vulnerable EFI applications MS and vendors sign willy-nilly. In order to get a properly locked-down secure boot, you need to trust only yourself.

    When you trust Microsoft’s secure boot keys, all it takes is one signed EFI application with an exploit to make your machine vulnerable to this type of attack.
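
    On Linux, one way to do that is to enroll only your own keys, for example with sbctl; a rough sketch (the firmware has to be in Secure Boot setup mode first, and the paths are placeholders for whatever you actually boot):

        sbctl create-keys                                  # generate your own platform/KEK/db keys
        sbctl enroll-keys                                  # enroll them (add -m to also keep Microsoft's vendor keys)
        sbctl sign -s /boot/vmlinuz-linux                  # sign what you boot; -s keeps it signed on updates
        sbctl sign -s /boot/EFI/systemd/systemd-bootx64.efi
        sbctl verify                                       # check that everything bootable is signed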

    Another important part is persistence, especially for UEFI malware. The only reason it’s so easy is that Windows’ built-in “factory reset” is so terrible. A fresh install from a USB drive can easily avoid that.



  • Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to just predicting language and text, and will never be able to “think” with concepts or adapt in real time to new situations.

    Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules would require Stockfish to be re-trained from scratch, while humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often without following the rules.

    LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks (overwhelming amounts of slop and misinformation that could affect human cultural development, and humans deciding to give an LLM external influence over anything, which could have a major impact), but they are nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources for it.

    Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: an AGI does not simply “transfer itself onto a smartphone” in the real world (or an airplane, a car, you name it). It will exist in a massive datacenter, and can have its power shut off. If AGI does get created and causes a massive incident, it will likely be during this time, which would force whatever real-world entity created it to realize there should be safeguards.

    So to answer your question: no, the movies did not “get it right”. They are exaggerated fantasies of what someone thinks could happen if you change some rules of our current reality. Artwork like that can pose interesting questions, but when it tries to “predict the future”, it often gets things wrong in ways that change the answers to any questions asked about the future it predicts.





  • IRC does not have any federation, and XMPP handles federation in a completely different way from Matrix, with its own pros and cons.

    IRC is designed for you to connect to a specific server, with an account on that server, to talk to other people on that server. There is no federation; you cannot talk to OFTC from libera.chat. Alongside that, with mobile devices being so common, you’d need to get people to host their own bouncer, or host one for nearly everyone on your network.

    XMPP federation conceptually has one major difference compared to Matrix: XMPP rooms are owned by the server that created them, whereas Matrix rooms are equally “owned” by everyone participating in it, with the only deciding factor being which users have administrator permissions.

    This makes for better (and easier) scaling on XMPP, so rooms with 50k people aren’t that big of an issue for any user in those rooms. However, if the server owning the room goes down, the whole room is down, and nobody can chat. See Google Talk dropping XMPP federation after making a mess of most client and server implementations.

    On Matrix, scaling is a much bigger issue, as everyone connects with everyone else. Your single-person homeserver has to talk with every other homeserver you interact with. If you join a lot of big rooms, this adds up, and takes a lot of resources. However, when a homeserver goes down, only the people on that homeserver are affected, not the rooms. Just recently, matrix.org had some trouble with their database going down. Although it was a bit quieter than usual, I only properly noticed when it was explicitly mentioned in chat by someone else. My service was not interrupted, as I host my own homeserver.

    The Matrix method of federation definitely comes with some issues, some conceptual and some from the implementation. However, a single entity cannot take down the federated Matrix network, even by taking down the most used homeservers. Doing the same to XMPP effectively kills the network.




  • Being able to choose the OS and kernel is also important. I would not want my hypervisor machine to load GPU kernel modules, especially not on an older LTS kernel (which often doesn’t support the latest hardware). Passing the GPU to a VM ensures stability of the host machine, with the flexibility to choose whatever kernel I need for specific hardware. This, alongside running entirely different OSes (*BSD, Windows :(, etc.), is pretty useful for some services.
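
    As a sketch of what that looks like on a Linux host using VFIO (the PCI IDs are placeholders — take yours from lspci -nn — and the exact steps, like rebuilding the initramfs, vary by distro):

        # /etc/modprobe.d/vfio.conf — claim the GPU and its audio function for vfio-pci
        # so the host kernel never loads a GPU driver for them
        options vfio-pci ids=10de:1b80,10de:10f0

    After rebuilding the initramfs and rebooting, the device can be handed to the guest (for example as a hostdev entry in a libvirt domain definition), and the host kernel stays free of GPU modules.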





  • Start off with a clean slate: Windows, freshly installed from a Microsoft-provided ISO (assuming you’re looking at a Windows executable). Try to follow a guide on bypassing the MS account requirement (AtlasOS has a section in their guide telling you how to do this).

    When you’re setting things up, there are no restrictions on internet access, sharing, etc. You just have to be careful not to open/view the files you want to isolate, which is easy enough by, for example, putting the files in a password-protected zip. You can also install any required tools now (like maybe 7zip).
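
    For the password-protected zip, 7-Zip’s command line can do it with encrypted file names too, so nothing previews or indexes the contents by accident (archive and folder names are just examples):

        # pack the suspicious files; -p prompts for a password, -mhe=on also encrypts the file list
        7z a -p -mhe=on quarantine.7z ./suspicious-files/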

    At this stage, there’s a few options:

    • The easiest is to put your files into a separate folder, then run a simple webserver on your host, like with python3 -m http.server, and download the files from inside the VM (see the sketch after this list).
    • Another option is to mount the VM’s disk, then copy the files directly: turn off the VM, mount the disk, copy the files, unmount, then turn it back on.
    • You could create a disk image that contains your files, readable by the VM.
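
    A minimal version of the webserver option might look like this (the port, folder, and host IP are placeholders — use whatever address the VM can reach the host on):

        # on the host: serve only the quarantine folder, not your whole home directory
        python3 -m http.server 8000 --directory ./quarantine

        # in the VM: download the archive from the host
        curl -O http://192.168.122.1:8000/quarantine.7z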

    When you’re ready to actually open the file, close off all access from the VM to the host. No networking, clipboard sharing, etc. Do this in the host’s VM settings, not inside the VM. Also note that without further tooling, it’s extremely difficult to tell if there’s any advanced malware present.

    As soon as you view the potentially malicious files, consider anything coming from that VM as malicious. Don’t try to view/open its files on your host, and do not give it network access.

    Malware can be (but often isn’t) incredibly advanced, and even an isolated VM isn’t a 100% guaranteed method of keeping it contained.