Encrypting an LXC zpool on Ubuntu with LUKS, with auto-decryption at boot


So we have seen postings online suggesting you can't encrypt an lxd zpool, such as this GitHub posting here, which correctly explains that an encrypted zpool that doesn't mount at startup disappears WITHOUT WARNING from your lxd configuration.

That's not the whole picture, as it IS possible to encrypt an lxd zpool with LUKS (the standard full-disk-encryption option for Ubuntu Linux) and have it work out-of-the-box at startup, but perhaps it's not as straightforward as everyone would like.

WARNING WARNING – THE INSTRUCTIONS BELOW ARE NOT GUARANTEED.  WE USE COMMANDS THAT WILL WIPE A DRIVE SO GREAT CARE IS NEEDED AND WE CANNOT HELP YOU IF YOU LOSE ACCESS TO YOUR DATA.  DO NOT TRY THIS ON A PRODUCTION SERVER.  SEEK PROFESSIONAL HELP INSTEAD, PLEASE!

With that said…this post is for those who, for example, have a new clean system that they can always do-over if this tutorial does not work as advertised.  Ubuntu OS changes and so the instructions might not work on your particular system.

Firstly, we assume you have Ubuntu 16.04 installed on a LUKS-encrypted drive (i.e. the standard Ubuntu install using the "encrypt HD" option).  This of course requires you to enter a password at boot-up to decrypt your system.

We assume you have a second drive that you want to use for your linux lxd containers.  That’s how we roll our lxd.

So, to set up an encrypted zpool, select the drive to be used (we assume it's /dev/sdd here, and that it's a newly created partition that is not yet formatted – your drive might be /dev/sda, /dev/sdb or something quite different – MAKE SURE YOU GET THAT RIGHT).

Go through the normal luks procedure to encrypt the drive:

sudo cryptsetup -y -v luksFormat /dev/sdd

Enter the password and NOTE THE WARNING – this WILL destroy the drive contents.  #YOUHAVEBEENWARNED

Then open it:

sudo cryptsetup luksOpen /dev/sdd sdd_crypt

Normally, you would now create a regular file system, such as ext4, but we don't do that.  Instead, create your zpool (we are calling ours 'lxdzpool' – feel free to change that to 'tank' or whatever pool name you prefer):

sudo zpool create -f -o ashift=12 -O normalization=formD -O atime=off -m none -R /mnt -O compression=lz4 lxdzpool /dev/mapper/sdd_crypt

And there you have an encrypted zpool.  Add it to lxd using the standard ‘sudo lxd init’ procedure that you need to go through to create lxc containers, then start launching your containers and voila, you are using an encrypted zpool.

But we are not done yet.  We can't let the OS boot up without decrypting the zpool drive, lest our containers disappear and lxd falls back to using a directory for its storage, per the GitHub posting referred to above.  That would not be good.  So how do we make sure the drive is auto-decrypted at boot-up (which is needed for lxc containers to launch)?

Well, we have to create a keyfile that is used to decrypt this drive after you decrypt the main OS drive (so you do still need to decrypt your PC at bootup as usual – as above):

sudo dd if=/dev/urandom of=/root/.keyfile bs=1024 count=4
sudo chmod 0400 /root/.keyfile
sudo cryptsetup luksAddKey /dev/sdd /root/.keyfile

This creates a keyfile at /root/.keyfile, which is used to decrypt the zpool drive.  Just answer the prompts these commands generate (self-explanatory).

Now find your disk's UUID with:

sudo blkid

This should give you a list of your drives with various information.  We need the long string that comes after “UUID=…” for your drive, e.g.:

/dev/sdd: UUID="971bf7bc-43f2-4ce0-85aa-9c6437240ec5" TYPE="crypto_LUKS"

Note we need the UUID – not the PARTUUID or anything else.  It must say “UUID=…”.
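If you want to script this step, the UUID can be pulled out of that line.  Here's a minimal sketch – the sample line below stands in for real 'sudo blkid' output, and your device and UUID will differ:

```shell
# Extract the UUID field from a blkid output line.
# The " UUID=" anchor (with the leading space) deliberately skips PARTUUID.
line='/dev/sdd: UUID="971bf7bc-43f2-4ce0-85aa-9c6437240ec5" TYPE="crypto_LUKS"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```

In practice, 'sudo blkid -s UUID -o value /dev/sdd' prints just the UUID directly, ready to paste into /etc/crypttab.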

Now edit /etc/crypttab as root:

sudo nano /etc/crypttab

And add an entry like this:

#Add entry to auto-unlock the encrypted drive at boot-up,
#after the main OS drive has been unlocked
sdd_crypt UUID=971bf7bc-43f2-4ce0-85aa-9c6437240ec5 /root/.keyfile luks,discard

And now reboot.  You should see your familiar boot-up screen for decrypting your ubuntu OS.  And once you enter the correct password, the encrypted zfs zpool drive will be automatically decrypted and will allow lxd to access it as your zpool.  Here’s an excerpt from our ‘lxc info’ output AFTER a reboot.  We highlighted the most important bit for this tutorial:

$ lxc info
config:
  storage.zfs_pool_name: lxdzpool
api_extensions:
- id_map
- id_map_base
- resource_limits
api_status: stable
api_version: "1.0"
auth: trusted
auth_methods: []
public: false
driver: lxc
driver_version: 2.0.8
kernel: Linux
kernel_architecture: x86_64
kernel_version: 4.15.0-34-generic
server: lxd
storage: zfs

Note we are using our ‘lxdzpool’.

We hope this is useful.  GOOD LUCK!

Useful additional reference materials are here (or at least, they were here when we posted this article):

Encrypting a second hard drive on Ubuntu (post-install)

Setting up ZFS on LUKS


Securely Sharing Files from Nextcloud

So we’ve been asked ‘How do you share files from Nextcloud securely’?

It’s a fair question because if you END-TO-END encrypt a file, you can’t actually share the file unless you also share a means of decrypting it.  And we don’t share our end-to-end encryption keys, EVER.

So how do we manage sharing?  How do we protect our files as we hand them over to our customers?  The short answer is with LAYERS of security:

Our files are protected by several independent layers of security:

Robust SSL/TLS connectivity (HTTPS all digital coms)

End-to-end encryption for files

Server-side encryption for files

Full Disk Encryption (FDE) for computer drives (server or desktop)

We can ignore FDE here as it doesn’t really come into play (it’s only really applicable here to a stolen PC).

For normal at-rest files, we have end-to-end AND server-side encryption in play.  So our files are protected against threats of all kinds.  You just can’t get at our data without TWO separate sets of credentials, neither of which are connected, neither of which are stored in a file.

When we share a file, we have to remove one of these layers (the end-to-end encryption), and that of course makes the protection weaker during the process of sharing FOR THAT FILE, since it only has server-side encryption to keep it safe.  Note that this is not “bad” – the file is still encrypted, and it’s safe from even quite determined hackers.  But you know, we like to be a bit paranoid and very responsible, so we bring other measures in play.

Firstly, here’s a real file from our server we created with end-to-end encryption still applied:

It's very exciting.  #notReally.  As you can see, the name is complete gibberish.  And for the record, if you download this 'blob' and view it in Notepad, this is what you get:

“îe¤ÎÈCÀ’0pMi%?/£`ÕÕäÔ}CÇ+>ê3ؽ[Æ~©HïïþÑàß·”M*ÓÝì?L‚L*x)š!ýU\nW£7zšK‚ubÎ
91Ïií=¡l‰­¸üN#åS›¬Æàçý*¨”PÔãºÕë*,8ƒÇŠûìÒ†Áè‹™p\ \©Jšs|Ö_ž›”ÒÿEÊ ÈÈõÁÁ”

And as you will see later, the real file is actually a simple text message.

Once the file has end-to-end encryption removed, we can now more easily find it on the server AS THE SYS-ADMIN (or as a hacker that has somehow penetrated our system).  It takes a while, but you can find it:

Wow, there it is.  Plain as day.  This file is actually present in an lxc container.  If you can 'see' this, you are either a SysAdmin or an unauthorized hacker.  But the good news is, you are not at the system root (you just think you are), so your damage potential is still very limited.

The file we created especially for this demo is shown among other directories.  The server ADMIN or a hacker (not necessarily the owner of the file) can "see" the file, so it seems.  But before everyone panics, let's take a close look at what he or she can actually "see".  There's a file name, permissions, modification time and apparent size (meta-data, if you will), plus file and folder names.  Gosh, does that mean the SYS-ADMIN or a hacker can snoop on my files?  Let's look at the contents using cat:

Well that doesn’t look terribly useful.  As you can see, the DATA are still encrypted.  Albeit this time via the SERVER keys.  A hacker and even the SysAdmin can’t look at file contents (but it’s true, he or she can see the name and other file meta data).  So, the file is still quite well protected against hacking and theft.

In fact, the ONLY way you can view these data is if you unlock the server-side encryption that’s now in effect for that file.  And for this, you either need an exploit that works against Nextcloud’s encryption (very unlikely) OR you need to somehow secure the credentials needed to login to the Nextcloud account which will then unlock these files.  For that, you need a Nextcloud password AND 2FA code (one that changes every 30 seconds), so that too we think is quite safe from unauthorized disclosure.  But when you do enter those data, the file becomes fully decrypted and is then more visible, albeit over an HTTPS connection only (our final level of protection):

So “finally”, this is a file we can read:  HelloWorld.txt, that says “Hello everyone.  Have a nice day”.  So the Nextcloud user can read the file (just as he/she should).

When it comes to SHARING this file, we actually share an HTTPS link that includes the decryption credentials for that file.  In Nextcloud, this is what that looks like:

You can see on the bottom right that a sharing link has been created.  We also add a PASSWORD to that link and an EXPIRATION DATE for the link, so that the link will not be active for long AND you need the extra password to access the file.  We send the LINK to the file via regular email.  Regular email is not terribly safe.  It can be intercepted.  We know that.  But let’s continue:

This link looks like this:

https://cloudvault.exploinsights.com/index.php/s/aeX7irmbWfneipn

(The link won’t exist for long, so it’s probably not worth trying, sorry!).
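Incidentally, links like this don't have to be made by hand in the web UI – Nextcloud exposes an OCS sharing API.  Here's a sketch that only PRINTS the request we would send (host, user, app password, file path, share password and expiry date are all placeholders; shareType=3 is a public link):

```shell
# Print (not run) a curl call against Nextcloud's OCS share API.
host="https://cloudvault.example.com"   # placeholder host
printf '%s\n' \
  "curl -u alice:app-password -X POST -H 'OCS-APIRequest: true' \\" \
  "  -d 'path=/HelloWorld.txt' -d 'shareType=3' \\" \
  "  -d 'password=ExtraSecret' -d 'expireDate=2018-09-30' \\" \
  "  ${host}/ocs/v2.php/apps/files_sharing/api/v1/shares"
```

Handy if you ever want to script the password-and-expiry discipline described above rather than rely on remembering to click the boxes.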

If you click on the link and it’s still active, it will bring up THIS page:

Which is still not the file.  You need that password.  We do not send the password with the link.  We don't even send the password via email.  We send it via another means – say WhatsApp, Signal or even SMS.  So, to get to this file, you need the link (from an email sent to you) AND the password (sent to you a different way).  Finally, if you have both the link and the password, enter the top-secret password assigned to protect this file and you get rewarded, in this case, with this:

The file.  In decrypted form.  Finally!  You can download the file but ONLY via https:// (SSL/TLS) connection (our final layer), so even then it remains encrypted-in-transit until it hits your download folder.  Only then do we relinquish our management –> OVER TO YOU!  🙂

So that's how we share our data.  It is ALWAYS protected with up to four layers of encryption: FDE, end-to-end, server-side and lastly SSL/TLS.

These "layers" are gradually stripped off as we get closer and closer to sharing a file with you, but only when it lands on your machine is our last (but strong) layer of encryption removed.  We use 2FA credentials to share our data, and even the Sys-Admin (or an unauthorized hacker) has very little access to our data: they NEVER see raw decrypted data, though they can occasionally view a file name and some basic meta-data.

Is this perfect?  No.  We are always looking to improve it.  This system is our latest and greatest implementation, but it won’t be our last.  Is this pretty good?  We think so!  What do you think?

So this is how we ROLL our file-sharing; how we securely share data with our customer.  What do you do?

Comments by email only – [email protected] or you can catch us on Twitter @SysAdminEI

Meeting Compliance with the aid of Nextcloud, Cryptomator and Mountain Duck

So like all SysAdmins, we have a lot to worry about in order to continually meet the burdens of compliance.  For us, data residency (location) and digital "state" (encryption status) are very important.

We have had a productive few days improving the PERFORMANCE of our systems by using better-integrated software.  So, why is PERFORMANCE being addressed under compliance?  Well, simply because if you make a compliant system easier to use, users are more likely to use it, and thus be more compliant.

It’s no secret, we are great fans of Nextcloud – we self-host our cloud server, and because we use that to host our data, we use several different security layers to help thwart accidental and malicious exposure.  Our cloud server, based at our office HQ in Tennessee, is where we store all of our important data.  So we try to make it secure.

Firstly, our cloud server can only be accessed by HTTPS (SSL/TLS).  Logging in requires 2FA credentials:

This gets you to the next step:

So this is good.  And we have server-side encryption enabled in Nextcloud, which provides an additional layer of security – not even the SysAdmin can read the client data on the server.

This is cool.  Server-side encryption does help us with sharing of files to customers, but because the decryption key is stored on the server, we don’t like to rely solely on that for protecting our important data.

So our data are end-to-end encrypted with Cryptomator – an open source, AES encryption software package, with apps for Linux, Windows and Android (we use all three) – and even apps for iOS for those that like their Apples too (we don't).

Of course, one of the things that makes this system “inconvenient” is that it’s hard to access end-to-end cloud-encrypted files quickly.  On the server, they look like THIS:

Not entirely useful.  Really hard to work on a file like this.  You have to physically download your encrypted file(s) (say, using a WebDAV sync client like the great one provided by Nextcloud) and store them locally on your device, then decrypt them locally, on your device, so you can work on them.  This is fine when you are in your office, but what about when you are on the road (as we often are)?

Data residency and security issues can arise with this method of working when on travel: you can't download files en masse to your device and decrypt them all when you are in the "wrong place".  You have to wait until you can get back to the right location before you can work.  Customers don't really like the delays this can cause them, and we don't blame them.  Worse still, in that situation, even when you are done, you have to delete (or even ERASE) files on your PC if you are going back on travel to a location where data residency becomes an issue.  Then, when you get to a secure location again, you have to repeat this entire process before working on your files again.  This is hard to do.  Believe us, we know, as we have had to do it – but we now have a better way.

A more secure but LESS CONVENIENT way is to download only the files you need, as you need them, and decrypt and work on them ONLY AS YOU NEED THEM.  This is more secure, as you only have one decrypted file on your device (the one you are working on in your word processor etc.), but how can that be done, and done CONVENIENTLY?

Obviously, this "convenience v security" issue is one we have spent a lot of time looking at.  We have used WebDAV connections to our cloud servers and tried to selectively sync folder(s).  It's not efficient, not pretty, not fast (actually really slow) and sometimes it just doesn't work.

But thankfully we now have a much faster, reliable, efficient, effective yet totally compliant way of addressing the problem of keeping files end-to-end encrypted yet still be able to work on them ONE AT A TIME even when you are in a location that can otherwise bring data residency issues.

For us, this requires several systems that have to work together, but they are built (mostly) on Open Source software that has, in some cases, been tried and tested for many years so is probably as good as you can get today:

  • OpenSSH – for Secure Shell connectivity;
  • Nextcloud server;
  • Nextcloud sync clients;
  • Cryptomator;
  • 2FA Authentication credential management apps;
  • And last but not least…Mountain Duck.

SSH is an established, reliable secure-shell service used globally to connect to servers.  If configured correctly, it is very reliable.  We believe ours is configured properly and hardened, not least because NONE of our SSH connections work with a username/password login.  Every connection requires a strong public/private key pair, every key is itself password protected, and each SSH connection ALSO requires a second-factor (2FA) code.  We think that's pretty good.

We have already explained Nextcloud and Cryptomator.  2FA apps are the apps that generate the six-digit code on your phone that changes every 30 seconds.  We have written about a new one we like (PIN protected), so we won't go into that any more.  That leaves 'Mountain Duck'.  Yes, 'Mountain Duck'.  We know, it's not a name one naturally gives to a secure file-access system, but bear with us (and in any case, we didn't name it!).  Put simply, Mountain Duck allows us to use our 2FA-protected SSH connections to our servers to access our files, and it does so in a really neat way:

In the image above, taken from their web site, note the text we circled in red.  Mountain Duck comes with CRYPTOMATOR integration.  This effectively makes Mountain Duck a 'Windows Explorer' that can be 2FA-connected via SSH to a server to access, and decrypt in real time, Cryptomator end-to-end encrypted files stored on our HQ-based servers.  To that, we say:

Just WOW.

So how does this work in the real world; how do we use this?

Well, we have a Nextcloud sync client that syncs each user's files to a separate LXC container running on the corporate server.  Each container syncs that user's files with the Nextcloud server.  Both the Nextcloud server AND the Nextcloud client files are ultimately stored in the HQ facilities, albeit in very different LXC containers.  Files are end-to-end encrypted in both the Nextcloud server and the Nextcloud client container.  All are further protected by full disk encryption and SSL/TLS connectivity.

Whether we are on the road OR in the office, we work on our files by logging into our Nextcloud client container using OpenSSH and Mountain Duck.

It’s all a lot simpler than it might sound:  First, we connect to our Nextcloud client containers via strong SSH credentials via Mountain Duck’s friendly interface, which asks first for our private key password:

And then asks for our 2FA code:

The 2FA code is generated by a PIN-protected 2FA app (we use 'Protectimus' and also 'andOTP') on our encrypted, locked Android smartphone.

With these credentials, Mountain Duck logs into our Nextcloud client container via the secure SSH connection and accesses our end-to-end encrypted files.  In doing so, it automatically detects our Cryptomator vault (because Cryptomator support is built into Mountain Duck) and then allows us (if we want) to UNLOCK the vault and access it:

So in one operation and three “codes”, we can securely connect to our Nextcloud client container via secure SSH and access our end-to-end encrypted files from anywhere in the world (where there’s internet!).

And Mountain Duck makes this easier because it allows you to bookmark the connection: open Mountain Duck and click a bookmark to the SSH server then enter your SSH password/2FA code.  Mountain Duck logs into your server and accesses the files located there.  It can even remember your passwords if you want (we don’t do that, as we don’t trust Windows much at all), but you could configure this to JUST require a 2FA code if you want.

The whole process takes, maybe, 30 seconds including the time to get the phone out and obtain a 2FA code.  Not bad!  And once you have done that, you can work on your END TO END encrypted files stored on your office based Nextcloud client container files from anywhere in the world.  No files are needed to be on your PC – in the office or on the road.  Everything remains solidly encrypted so if you do get hacked, there are no readable data that can be stolen, so at least that threat is low.  And everything is going through modern, secure OpenSSH transfer protocols, which in our case, makes us sleep a lot better than having some proprietary code with back-doors and all sorts of unpleasant surprises that always seem to arise.
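For reference, the SSH hardening described earlier (no password logins, mandatory key pair, plus a 2FA prompt) boils down to a few server-side settings.  Here's a sketch of the relevant /etc/ssh/sshd_config fragment – the PAM-based second factor is our assumption, as there are several ways to wire up 2FA:

```
# /etc/ssh/sshd_config fragment (sketch - adapt to your own setup)
# No username/password logins; a public/private key pair is mandatory:
PasswordAuthentication no
PubkeyAuthentication yes
# Allow the PAM-driven 2FA prompt, and require key AND 2FA code together:
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
```

Restart sshd after any change, and test from a second terminal before closing your working session.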

The catch?  Well, you do need internet connectivity or you can do NOTHING with your data.  The risk in the office is low, but when you're on the road it does happen, so it's still not perfect. 🙂

Also, you do have to set this stuff up with some poor SysAdmin.  But if we can do it, probably anyone can?

Finally, this is what it looks like for real after we go through that hard-to-explain-but-easy-to-do login.  A window appears and you can see your files just like a normal file view on your Windows PC.

Above is a regular Windows Explorer window, accessing the Nextcloud client container files via fast and secure SSH.  The folder ‘Vault’ is actually a Cryptomator end-to-end encrypted folder, but because we have entered the credentials (above), it can be accessed like any other.  Files inside can be copied, edited, saved, shared, printed etc. just as files in the other folders.  It’s totally transparent, and it’s TOTALLY AWESOME.  The files in the Nextcloud client container (and the Nextcloud server) remain fully encrypted all the time.  A file ONLY gets decrypted when it’s selected and downloaded by the user.  Viewing a file in explorer (as in the view above) does NOT decrypt it – you have to double-click etc. the file to initiate downloading and decryption.  All your changes get immediately sync’d to the Nextcloud client container (which immediately sync’s everything back to the server).  Nothing gets stored on the PC you are using to work on your files unless you deliberately save it to the device.

So, THIS is how WE roll our own end-to-end encrypted data storage.  How do you do yours?

Questions or comments by email only: [email protected]


Another News Scare on Full Disk Encryption Hacking

Another day, another scary headline:

Security flaw in ‘nearly all’ modern PCs and Macs exposes encrypted data

Don't get us wrong, we don't dismiss this as false.  It almost certainly isn't.

But we never, ever rely on one lock for our IT systems.  Full disk encryption?  Sure, we got it.  But we also server-side encrypt our data AND we end-to-end encrypt our most important data.  Three levels of encryption.  Each with a completely different software package.  All Open Source.

We also 2FA-protect our logins for all key accounts (email, ssh access, cloud and even our web site portal).

We note this headline, but then go about our day.

Don’t let the headlines scare you too much!

LXC Container Migration – WORKING

So we found a spare hour at a remote location and thought we could tinker a little more with lxc live migration as part of our LXD experiments.


We executed the following in a terminal as NON-ROOT users yet again:

lxc copy Nextcloud EI:Nextcloud-BAK-13-Sep-18

lxc start EI:Nextcloud-BAK-13-Sep-18

lxc list EI: | grep Nextcloud-BAK-13-Sep-18

And we got this at the terminal (a little time later…)

| Nextcloud-BAK-13-Sep-18 | RUNNING | 192.168.1.38 (eth0) | | PERSISTENT | 0 |

Note that this is a 138GB container.  Not small by any standard.  It holds every single file that's important to our business (server-side AND end-to-end encrypted, of course).  That's a big copy.  So even at LAN speed, this gave us enough time to make some really good coffee!
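The date-stamped container name above doesn't have to be typed by hand.  Here's a tiny sketch (remote name 'EI' as in our listing; it only PRINTS the lxc commands rather than running them):

```shell
# Build a backup name like Nextcloud-BAK-13-Sep-18 from today's date,
# then print the lxc commands we would run against the remote "EI".
backup_name="Nextcloud-BAK-$(date +%d-%b-%y)"
echo "lxc copy Nextcloud EI:${backup_name}"
echo "lxc start EI:${backup_name}"
```

Drop the echoes (and add some error checking) and it becomes a real cron-able backup script.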

So we then modified our front-end haproxy server to redirect traffic intended for our primary cloud file server to this lxc instance instead (two minor changes to a config, replacing the IP address of the current cloud with that of the new one).  Then we restarted our proxy server and….sharp intake of breath…

IT WORKED BEAUTIFULLY!

Almost unbelievably, our entire public-facing cloud server was now running on another machine (just a few feet away, as it happens).  We hoped for this, but we really did not expect a 138GB container to copy and start up first time.  #WOW

We need to test and work this instance to death to make sure it's every bit as SOUND as our primary server, which is now back online; the backup version is just sleeping.

Note that this is a complete working copy of our entire cloud infrastructure – the Nextcloud software, every single file, all the HTTPS: certs, databases, configurations, OS – everything.  A user changes NOTHING to access this site, in fact, it’s not even possible for them to know it’s any different.

We think this is amazing, and a great reflection of the abilities of lxc, which is why we are such big fans.

With this set-up, we could create working copies of our servers in another geo-location every, say, month, or maybe even every week (once a day is too much for a geo-remote facility – 138GB for this one server over the internet?  Yikes).

So yes, the bandwidth needed IS significant, and thus you can't flash the larger server images over the internet every day, but it does provide for a very resilient disaster-recovery setup: if our premises go up in a tornado, we can be back online with just a few clicks from a web browser (change DNS settings and maybe a router setting or two) and a few commands from an ssh terminal connected to the backup facility.

We will develop a proper, sensible strategy for using this technique after we have tested it extensively, but for now, we are happy it works.  It gives us another level of redundancy for our updating and backup processes.

GOTTA LOVE LXD


An LXC Experiment for Backups – Take 2

Remember this recent article:  An LXC Experiment for Live Backups?

It was our first attempt to perform a live migration of an LXC container from one physical machine to a new container running on a completely different machine.  The idea is to create live containers with very current, real-world information that can act as part of a complete disaster-resistant backup strategy.

The plan failed: the copy process gave us errors.  This may be due to an SSH timeout (though it shouldn't be, as the files were not THAT big for our LAN speed).  It could also have been due to trying to restore container CPU states on a different machine – maybe that's too much for lxc.  It doesn't matter; it just failed, and so we had to rethink.

We have…a new plan.  What if we take a SNAPSHOT of a container and IMMEDIATELY copy that (in a 'stopped' state, of course)?  No CPU registers or memory state to worry about as part of the copy.  The container is in a re-start-able form, not a running state.

Something like:

lxc snapshot Nextcloud Snapshot-name 

lxc copy Nextcloud/Snapshot-name NEWMACHINE:NextcloudMirror

lxc start NEWMACHINE:NextcloudMirror

This series of non-sudo user commands (that's right, no scary super-user stuff again) first snapshots the container 'Nextcloud' as 'Snapshot-name', then copies that snapshot to a new lxc node called 'NEWMACHINE'.  The node is an lxc remote and can be anywhere: the same machine, the same network, or a different machine in a different country connected over a public/private-key ssh connection – all handled by lxc.

Well we tried this…it WORKS.  Lots of testing to do, but we are very excited at this prospect.  More to come!

🙂


run-one

Our favorite new command-of-the-day:

run-one (found at Ubuntu’s Manpage – here)

Just made it easier to run a single instance of rsync for one of our routine server-to-server file copy jobs.
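run-one runs the given command only if another copy with the same arguments isn't already running; otherwise it exits quietly instead of stacking up duplicate jobs.  A typical use – ours is similar – is a frequent cron job (the schedule and paths below are placeholders):

```
# crontab fragment (sketch): sync every 15 minutes,
# but never allow two of the same rsync to run at once
*/15 * * * * run-one rsync -az /srv/data/ backupserver:/srv/data/
```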


 

An LXC Experiment for Live Backups

Don’t we all love our backups?  We all have them.  Some of us have backups done poorly, and some of us worry that ours is still not as good as we would like.  Few have it nailed.  We don’t have it nailed…

Here at EXPLOINSIGHTS, Inc. we think we are in the second camp (“not as good as we would like it to be”).  We have a ton of backups of our data, much of it offline (totally safe from e.g. malware), and some of those are in different locations (protected against theft, fire, flood, mayhem AND hackers), but they would all require a lot of work to get going if we ever needed them.  So, if we suffer a disaster (office burned to ground or swallowed up by a Tornado or house stolen by Hillary Clinton’s Russian Hackers), then rebuilding our system would still take time.  What we ALL want, but can seldom get, is a live backup that runs in parallel to our existing system.  Like a hidden ghost system that mirrors every change we make, without being exposed to the hazards of users and such.

So hold that thought…

Onto our favourite Linux Ubuntu capability: LXC.  LXC has a capability of basically exporting (copying or moving) containers from one machine to another – LIVE.  This means you don’t have to stop a container to take a backup copy of it and place it on ANOTHER MACHINE.  Theoretically, this is similar to taking a copy of a stopped container but without the drag of stopping it.

We know it has to be pretty complicated under the hood for this to work, and it's evident it's not really intended for a production environment, but we are going to play with this a little to see if we can use live migration to give us full working copies of our system servers on another machine.  And if we can, to place that machine not on the LAN, but on the WAN.

Our largest container is our Nextcloud instance.  We have multiple users, with all kinds of end-to-end encrypted AND simultaneously server-side encrypted data in multiple accounts.  All stored in one CONVENIENT container.  We are confident it’s SECURE from theft – we have tried to hack it.  All the useful data are encrypted.  But the container is growing.  Today it stands at about 138 GB.  Stopping that container and copying it even over the LAN is a slow process.  And that container is DOWN at that time.  If a user (or worse, a customer), tries to access the container to get a file, all they see is “server is down” in their browser.  #NotUseful

So for this reason, we don’t like to “copy” our containers – we hate the downtime risk.

So….we are going to play with live copying.  We have installed criu on two of our servers, and we are doing LAN-based experiments.  It’ll take time, as we have to copy-and-test.  Copy-and-test.  We have to make sure all accounts can be accessed AND that all data (double encrypted at rest, triple if you count the full-disk-encryption; quadruple if you count the https:// transport mechanism for copying) can be accessed without one byte of corruption.
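That "copy-and-test" step can be partly automated.  Here's a minimal sketch of the byte-for-byte check (the demo uses temporary directories; in practice you would point it at the original and the copied data):

```shell
# Checksum every file in a source tree, then verify the copy against the list.
src=$(mktemp -d); dst=$(mktemp -d)                   # demo stand-ins
echo "hello" > "$src/f.txt"; cp "$src/f.txt" "$dst/f.txt"
( cd "$src" && find . -type f -print0 | sort -z | xargs -0 sha256sum ) > /tmp/src.sums
( cd "$dst" && sha256sum -c /tmp/src.sums ) && echo "copy verified"
```

sha256sum -c re-reads each listed file in the copy and complains about any mismatch, so a single flipped byte gets caught.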

Let the FUN begin.   We have written this short article as the first trial is underway.  We have our 138GB container on our “OB1” container (a Star Wars throwback):

See the last entry?  We are copying the RUNNING container to our second server (boringly called ‘exploinsights’).  It’s a big file, even for our super fast LAN router, and it’s not there even now:

The image has not yet appeared, but we have confirmed it’s still up ‘n running on the OB1 host.

Lots of testing to do, and clearly files this large can’t be backed up easily over the internet, so this is definitely an “as-well-as” non-routine option for a machine-to-machine back up, but we like the concept of this, so we are spending calories exploring it.

#StayTuned for an update.  Also, please let us know what you think of this – drop us an email at:

[email protected] or

[email protected]

UPDATE:

We need a good Plan B: the copy failed after a delay of about two hours.  It would not take that long to copy, so something is broken, and we are not going to try to dig our way out of that hole.  LXC live migration died on the day we tried it.  #RIP

Another stress-free Nextcloud update, courtesy of LXC

[Image: Nextcloud 14 'update available' notification]

The above image is definitely one that would ruin our morning coffee.  Definitely.

We do literally ALL of our business document management, including generating and sharing our client files, using the totally excellent Nextcloud product operating on a server in the Smoky Mountains in the Great State of Tennessee.  We NEED this server.

So, when you see a message saying ‘hey, there’s an update’ (just as we saw yesterday), it can make us worry.  But then, we remember…we use LXC.  We run our Nextcloud instance in an LXC container, backed up by a zfs file system with full, reliable and totally brilliant ‘snapshot’ capabilities.

So, how did we do our update?  From the command line, running as a NON-ROOT user (no scary sudo commands here!):

lxc snapshot Nextcloud
lxc exec Nextcloud bash
apt update && apt upgrade && apt autoremove
exit

This updates the container OS software (as it turns out, ours was already up to date – not a surprise, we check it regularly).

Then we log into our two-factor credential protected Nextcloud server via an HTTPS web portal, and we click on the Nextcloud updater.  About two minutes later, we saw this:

The update worked as it was supposed to.  But we were not just lucky – we had our snapshot.  Had the upgrade failed, we could and would have restored the pre-update version (via 'lxc restore'), then contacted the Nextcloud support folks to figure out why our upgrade broke and what to do about it.  Our mission-critical cloud file server is working just great, as usual.  LXC is our insurance policy.

Our coffee this morning tastes just great!  🙂

Use LXC.  Make YOUR coffee taste good too!

SSH attacks – MOVE YOUR PORT #

So when we first set up our remote Linux logins, we used the standard SSH port # (22).  We use public keys for our SSH logins, so we weren't especially worried about port scanners and bot attacks.

But it's a funny thing going over your logs and seeing remote systems (with IP addresses that trace back to China and Russia, if that means anything) trying to log in, even though we know for certain there are NO AUTHORIZED USERS at the IP locations listed.  The logins fail, but even so…

So, some time ago, we moved to a different SSH port.  It's not hard – just edit /etc/ssh/sshd_config and pick another unused port.  We have seen people pick '222' and '2222' and even '22222' because, well, they associate '2' with SSH.  We picked different numbers (and that's all we are going to say on the subject).  Since that time (many months ago), we have not seen a SINGLE ssh login hack.  Not one.  Such a remarkable outcome was not expected.  Even today, going through two of our /var/log/auth.log files, the only odd IP address we noted was actually one of our approved mobile devices roaming in Europe (so not an issue).
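The change itself is one line in /etc/ssh/sshd_config – that's the server config, not the client's ssh_config.  The number below is only a placeholder; pick your own:

```
# /etc/ssh/sshd_config (sketch) - move sshd off the default port
# (placeholder number - choose your own unused port)
Port 49222
```

Restart the service afterwards (sudo systemctl restart ssh on Ubuntu), allow the new port through your firewall first, and keep an existing session open until you have confirmed you can log in on the new port.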

It seems hackers are sometimes lazy.  They run a port scan of servers, presumably starting at port 22 and working from there.

We like that.  We hope that never changes.

What it tells us is…CHANGE YOUR SSH PORT # FROM 22 to <some-other-number> on any new Linux install.  Do not use port 22 unless you enjoy seeing log entries of someone trying to hack your server.

Enjoy your weekend, and if you adopt our suggestion, you might better enjoy reviewing your log files!  🙂