Fair - there are ways to handle it. I didn’t want to include specifics since I’m not a professional contractor for this sort of thing, but I should have indicated that there are exceptions.
As a general principle, full-extension rails are best sourced from the original server vendor rather than from a universal rail kit.
If you have a wall-mounted rack and your walls are drywall, physics is working against you. A loaded rack is already a pretty intense heavy cantilever, and putting a server in there that can extend past the front edge only makes that worse.
If you want to use full-extension rails, you should get a rack that sits squarely on the floor, on either feet or appropriately rated casters. You should also keep your heaviest items at the bottom, ESPECIALLY with full-extension rails - a low center of gravity makes the rack much less likely to overbalance and tip over when a server is extended.
I’m not entirely sure what you’re seeking to accomplish here - are you looking to impose authorization on just a subset of the images? For starters, those should probably be in a non-public bucket.
Looking to give only certain people access to files and also have a nicer UI (à la Google Drive / Photos)? Maybe plain S3 isn’t the play here, and a dedicated application is needed for that subset.
Pre-signed URLs may also be useful for what you’re trying to solve: https://docs.min.io/docs/javascript-client-api-reference.html#presignedGetObject
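The MinIO/S3 SDKs do the real signing for you (e.g. `presignedGetObject` in the linked JS client), but the underlying idea is simple: the URL carries an expiry timestamp plus an HMAC signature that only the server can produce. Here’s a minimal conceptual sketch - names are hypothetical and this is deliberately NOT the real S3 SigV4 algorithm:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"server-side-secret"  # hypothetical - lives only on the server, never handed to clients

def presign(path, expires_in=3600, now=None):
    """Return a URL for `path` that stops working after `expires_in` seconds."""
    expires = int((now if now is not None else time.time()) + expires_in)
    msg = "GET\n{}\n{}".format(path, expires).encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return "{}?{}".format(path, urlencode({"expires": expires, "signature": sig}))

def verify(path, expires, signature, now=None):
    """Server-side check on each request: link must be unexpired and untampered."""
    if (now if now is not None else time.time()) > expires:
        return False
    msg = "GET\n{}\n{}".format(path, expires).encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The nice property: you can keep the bucket private and hand out individual time-limited links, no accounts needed on the receiving end.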
Adding on one aspect to things others have mentioned here.
I personally have both ports/URLs opened and VPN-only services.
IMHO, it also depends on the software’s exposure tolerance and the risk of what could get compromised if an attacker were to find the password.
Start by thinking of the VPN itself (Tailscale, WireGuard, OpenVPN, IPsec/IKEv2, ZeroTier) as a service, just like the service you’re considering exposing.
Almost all (working on the “all” part lol) of my external services require TOTP/2FA, and they need to be directly exposed by nature - i.e. VPN gateway, jump host, file server (Nextcloud), git server, PBX, the music reflector I use for D&D, game servers shared with friends. Those either absolutely must be external (VPN, jump host) or are exposed so I don’t have to deal with the complicated networking of per-user firewalls - my friends shouldn’t need to VPN to me to get something done.
The second part for me is a service’s tolerance for being external and the risk if it got popped. There’s a LOT of stuff I just don’t want on the web - my VM control panels (Proxmox, vSphere, XCP), my UPS/PDU, my NAS control panel, my monitoring server, my SMB/RDP sessions, etc. That kind of stuff is super high risk: there’s a lot of damage someone could do with it, a LOT of attack surface area, and, especially in the case of embedded firmware like the UPSs and PDUs, potentially software the vendor hasn’t updated in years, with who-knows-what bugs lurking in it.
So there’s not really a one-size-fits-all situation. You have to address the needs of each service you host on a case-by-case basis. Some potential questions to ask yourself (but obviously a non-exhaustive list):

- Does this service actually need to be reachable from outside, or just by me?
- What could an attacker reach or damage if this particular service got popped?
- Can I put strong auth (TOTP/2FA) in front of it?
- How well maintained is the software - is it still getting security updates?
So, as you can see, it’s not just cut and dried. You have to think about each service you host and what it does.
Larger, well-known products - such as Guacamole, Nextcloud, ownCloud, strongSwan, OpenVPN, WireGuard - are known to behave well when directly exposed, and that should factor into the decision too. Many times the right answer will be to expose a port - the most important thing is that doing so is an active decision.
I’m not the commenter, but I can take a guess - I’d assume “data source” refers to a machine-readable database or aggregator.
Making the system capable of turning off a generic external service in an automated way isn’t necessarily trivial, but it’s doable given appropriate systems.
Knowing when to turn a service off is the million-dollar question. The system not only has to determine the backend application’s version during its periodic health check, it then needs to make an autonomous decision that a vulnerability exists and is severe enough to take action on.
Home Assistant probably publishes a “safe list” of versions that instances regularly pull down, automatically disconnecting if they determine themselves to be affected - or, if the remote UI connection passes through the Home Assistant central servers, those servers could maintain the safety database and off switch themselves. (Note: I don’t run Home Assistant, so I can’t check myself.)
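I have no insight into how Home Assistant actually implements any of this, but the client-side version of the idea - a periodic self-check against a published list of known-bad versions - can be sketched in a few lines (all names here are hypothetical):

```python
# Hypothetical sketch: during its periodic health check, an instance
# compares its own version against a remotely published list of
# known-vulnerable versions and, if affected, autonomously disables
# its own remote-access tunnel.

AFFECTED_VERSIONS = {"2023.12.0", "2023.12.1"}  # in reality, fetched from the vendor

class Instance:
    def __init__(self, version):
        self.version = version
        self.remote_ui_enabled = True

    def health_check(self, affected_versions):
        """Disconnect remote UI if this version is on the known-bad list."""
        if self.version in affected_versions:
            self.remote_ui_enabled = False
        return self.remote_ui_enabled
```

The hard part isn’t this logic - it’s maintaining that list quickly and accurately enough that the autonomous decision is trustworthy.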
If your party is perfectly enjoying themselves playing the system of their common choice with whatever characters all (+GM) agree on, then you’re playing the game correctly.
You don’t need to play Pathfinder, multi class, or anything else if you’re having fun.
Now, if people aren’t having fun, maybe it’s time to use Proficiency (Language) and discuss it. If there’s a resolvable conflict, resolve it. If it’s not resolvable, then maybe it’s time to find a new table.
Okay I just figured out what happened here. Sorry for my misunderstanding in the other comment thread, I fell into the same trap initially.
@Lemmy users - OP is a Mastodon user, so when OP responded to the comment with an @ and attached image, Lemmy federation didn’t pick up the attachment. I know we have images in attachments natively, so I don’t know if this is an upstream bug in Lemmy / Mastodon or some interaction between lemmy.world and Mastodon.scot.
@twoowls73@mastodon.scot - you may not have realized that your Mastodon post generated a Lemmy thread. Welcome to another corner of the Fediverse if you’re new to Lemmy. Some of the animosity in the comments here probably came from the fact that we can’t see the image you attached from our interface. Right now the comment is sitting at 36 net downvotes, probably because our corner of the Fediverse didn’t have the full context.
I’m also fediverse lol?
I edited the comment because I accidentally wrote the community as c/communityname which doesn’t auto-generate a link you can click on
If you’re aware you didn’t answer the question… Why?
Pretty much everything else posted to !nathanwpyle@lemmy.world is original comics, not edits.
Self edit: fixed the community link, my Reddit brain used the wrong syntax
Somewhat halfway between practical use and just messing around for fun.
Several years ago I built a GPS NTP clock out of an RPi 3 and an Adafruit GPS hat. Once I had the PPS driver installed, its precision/drift got pretty good. According to its own self-measurements, I got pretty dang close to NIST’s stratum 1 NTP servers, but those are hundreds of miles away, so that measurement isn’t super precise. It’s still running today, clocking nearly 24/7 operation since (checks shopping history) 2017, though in 2021 I replaced the breadboard and mini module with a full-sized hat using the same chipset.
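For anyone wanting to replicate the setup, the usual recipe pairs gpsd (feeding NMEA time into shared memory) with chrony disciplining off the PPS edge. Roughly like this in chrony.conf - device names, offsets, and the LAN range are assumptions you’d tune for your own hat and network:

```
# Coarse NMEA time from gpsd via shared memory - only used to label the PPS edge
refclock SHM 0 refid NMEA offset 0.200 precision 1e-1 noselect
# PPS pulse from the GPS hat - the actual sub-millisecond reference
refclock PPS /dev/pps0 refid PPS lock NMEA
# Let LAN clients sync to this box
allow 192.168.0.0/16
```

The `lock NMEA` bit is what ties the bare pulse to an actual second number, since PPS on its own only tells you *when* a second starts, not *which* one.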
Recently I acquired a proper hardware GPS clock, stacked the two against each other, and found my RPi did not do half bad: it stays within 0.5-10 ms of the professional unit (at this point I’m pretty sure I’d need more precise measuring equipment than a regular computer to tell the difference between the two). Now my homelab has fully redundant, internet-disconnected stratum 1 time. I’ve been half considering writing a GPSD driver for it as a joke, but I know upstream won’t accept it because it doesn’t offer SOOO many features they’d need.
As for what else - I just kind of keep an eye out for projects related to GPS and high precision time, like the open source atomic PCI card that was released a few years ago. Finding out what people are doing to get better and better time is just downright interesting.
Outside of the time world, it’s just fun to see what projects people come up with relating to maps and navigation. A stretch goal, once I have enough server horsepower, is a render-capable OpenStreetMap server with my home region loaded to start with, but eventually I’d like to get to the point where I can load and process world.osm. That… requires a LOT of CPU and SSD space.
I’m not sure if you meant to tag a different proxmox account, but you did generate a Lemmy post about this.
Also since SAP is probably heavily licensed software, make sure you’re in compliance with that if you start cloning. Some companies get super mad about that.
Edit: had to grab the link on my PC, phone was showing the wrong one https://lemmy.ml/post/15142970
Heyo, just wanted to say I appreciate the edit.
Some people see three extra clicks (which is what it took on mobile to get the real description out of GitHub) as a limiter. I actually clicked because I had guessed that with a name like “navidrome” it was something GNSS related, was surprised to see it was about music.
I’ve been self-hosting for going on 7-8 years, following various communities on Reddit and Lemmy, and I still learn about new software every day. I’ll have to toss this one onto my investigation queue.
Voyager PWA, but I think I see what it did now.
It’s processing the text as Markdown and ignoring the first strikethrough tilde marker because it’s sandwiched right next to the URL brackets, so the only valid strikethrough lands in different spots than you intended. The superscript/subscript is, I think, being processed correctly - it’s just small.
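A rough way to picture it: strikethrough markers pair up left to right, so if the renderer rejects one `~~` (for example because of what it’s flanked by), every later pairing shifts. This toy sketch is not real CommonMark/GFM logic - the actual flanking rules are much fussier - but it shows the shifting effect:

```python
import re

def strike_spans(text, skip_first=False):
    """Pair up ~~ markers left to right and return the struck-through substrings.

    skip_first mimics a renderer discarding one marker (e.g. one sitting flush
    against URL brackets): every later pairing shifts as a result.
    """
    markers = [m.start() for m in re.finditer(r"~~", text)]
    if skip_first and markers:
        markers = markers[1:]
    spans = []
    # zip odd/even marker positions into (open, close) pairs
    for open_pos, close_pos in zip(markers[0::2], markers[1::2]):
        spans.append(text[open_pos + 2 : close_pos])
    return spans
```

So `a~~b~~c~~d~~` normally strikes `b` and `d`, but with the first marker dropped it strikes `c` instead - different spots than the author intended, exactly the symptom above.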
Edit: I just noticed the other guy has the same app, so that would do it.
Edit 2: I think I need to mentally review how Markdown works - there are wires crossed in my brain.
https://en.wikipedia.org/wiki/Zalgo_text
The letters of CC BY-NC-SA 4.0 appear normal, but there’s extra stuff stacked between the characters, where the spacing and hyphens are.
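For anyone curious how that trick works: the base letters are ordinary text, with Unicode combining marks (U+0300-U+036F) inserted after each one, which renderers stack on top of or below the preceding character. A quick sketch:

```python
import random
import unicodedata

# Combining diacritical marks - each one renders stacked on the preceding character.
# Filter on combining class so we only keep marks that actually combine.
COMBINING = [chr(c) for c in range(0x0300, 0x0370) if unicodedata.combining(chr(c))]

def zalgoify(text, marks_per_char=3, rng=None):
    """Append random combining marks after every non-space character."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    out = []
    for ch in text:
        out.append(ch)
        if not ch.isspace():
            out.extend(rng.choice(COMBINING) for _ in range(marks_per_char))
    return "".join(out)
```

Stripping every character with a nonzero combining class recovers the original string, which is why the license text still reads normally underneath the mess.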
How did you make the Creative Commons license punctuation look Zalgo-like?
Bravo, I think this comment wins the internet for today.
You’re not wrong, but Obi-Wan’s line only had one syllable and “mammal” has two, so I elected to leave the word intact instead of being biologically pedantic.
You use the console to turn on the embedded shell, then Ctrl+Alt+Fn over to it (I forget whether it’s on F1 or F2). From there you can use esxcli and all the rest of that to fix it up.
Once you get enough of the networking/storage pieces sorted out, you can get back into the HTML management UI and SSH.
Then, when you’re done fixing things, turn the shell and SSH back off.
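For reference, this is the sort of thing you’d run from the embedded shell once you’re over there - exact commands depend on what broke, these are just common starting points:

```
# See which VMkernel interfaces exist and what they think their IPs are
esxcli network ip interface list
esxcli network ip interface ipv4 get

# Re-list storage to check the datastores came back
esxcli storage filesystem list

# Toggle the management services when you're done
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/disable_esx_shell
```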
I don’t have a full answer on snapshots right now, but I can confirm Nextcloud has VFS support on Windows. I’ve been working on a project to move myself over to it from Synology Drive. Client-wise, the two have fairly similar features, with one exception: Nextcloud generates one Explorer sidebar object per connection, where I think Synology handles them as shortcuts within a single directory. I’d prefer NC did the latter, or let me choose, but I’m happy enough with what I’ve got for now.
As for the snapshotting, you should be able to snapshot the underlying FS/DB at the same time, but I haven’t poked deeply at that. I believe the files are stored plain (I’ll disassemble my Nextcloud server tonight to confirm this and update my comment), but some do preserve version history, so I want to be sure before I give you final confirmation. The Nextcloud root data directory is broken up by internal user ID, which is an immutable field (you cannot change your username, even in LDAP), probably because of this filesystem layout.
One thing that may interest you is the external storage feature, which I’ve been working on migrating a large data set I have to:
Admin docs for reference: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html
I use LDAP user auth to my Nextcloud, with two external shares to my NAS using a pass-through session password (the NAS is AD-joined to the same domain Nextcloud uses for LDAPS). I don’t know if/how the “store password in database” option is encrypted - if anyone knows, I’d be curious. One catch: using session passwords prevents the user from sharing the folder to at least a federated destination (I tried with my friend’s NC server; I haven’t tried with a local user yet, but I assume the same limitation applies). If that’s your vibe, then this is a feature XD.
One of my two external storage mounts is a “common” share with multiple users accessing the same directory, and the second share is \\nas.example.com\home\nextcloud. Internally, I believe these are handled by PHP spawning `smbclient` subprocesses, so if you have lots of remote files and don’t want to nuke your Nextcloud, you’ll probably need to increase the PHP child limits (that one took me too long to solve lol).

That funny sub-mount name above handles an edge case where Nextcloud/DAV can’t handle directories containing certain characters - notably the # that Synology uses to expose its #recycle and #snapshot structures. This means the SMB remote mount currently has a limitation: you can’t mount the base share of a Synology NAS with that feature enabled. I tried a server-side Nextcloud plugin to filter those out before they’re exposed to DAV, but it was glitchy - unsure whether that’s because I just had too many files for it to handle, thanks to the way Synology snapshots are exposed, or something else. Either way, I’ve worked around the problem for now by never mounting a base share of my Synology NAS. Other snapshot exposure methods may be affected too - I have a ZFS TrueNAS Core box, so maybe I’ll throw that at it and see if I can break Nextcloud again :P
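If anyone hits the same wall: the knobs live in the PHP-FPM pool config (path varies by distro - e.g. /etc/php/8.2/fpm/pool.d/www.conf on Debian-ish systems). Values here are illustrative only; tune them to your RAM:

```
; Each concurrent smbclient-backed request holds a PHP worker, so the
; default child limits choke on large external SMB mounts.
pm = dynamic
pm.max_children = 120
pm.start_servers = 12
pm.min_spare_servers = 6
pm.max_spare_servers = 18
```

Remember `pm.max_children` × per-worker memory is your ceiling, so raise it only as far as your RAM actually allows.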
Edit/addendum: OP, just so I answer your real question when I get back to this this evening - when you said Nextcloud might not meet your needs, was your concern specifically the server-side data format? I assume from the rest of your questions that you’re concerned with data resilience and the ability to get your data back without any vendor tools - that it will just be there when you need it.