Interactively Updating LXC containers

We love our LXC containers.  They make it so easy to provide and update services – snapshots take most of the fear out of the process, as we have discussed previously here.  But even so, we are instinctively lazy and always looking for ways to make updates EASIER.  Now, it's possible to fully automate the updating of a running service in an LXC container, BUT a responsible administrator wants to know what's going on when key applications are being updated.  So we created a compromise: a simple script that runs an interactive process to back up and update our containers.  It saves us repetitively typing the same commands, but it keeps us fully in control, as we still answer the yes/no upgrade-related questions.  We thought our script was worth sharing.  So, without further ado, here it is; just copy and paste it to a file in your home directory (called, say, 'update-containers.sh').  Then run the script whenever you want to update and upgrade your containers.  Don't forget to change the name(s) of your linux containers in the 'names=…' line of the script:

#!/bin/bash
#
# Simple Container Update Script
# Interactively update lxc containers

# Change THIS ENTRY to list your container names:
#
names='container-name-1 c-name2 c-name3 name4 nameN'

# Now we just run a loop until all containers are backed up & updated
#
for name in $names
do
    echo ""
    echo "Creating a snapshot of container: $name"
    lxc snapshot "$name"
    echo "Updating container: $name"
    lxc exec "$name" -- apt update
    lxc exec "$name" -- apt upgrade
    lxc exec "$name" -- apt autoremove
    echo "Container updated. Re-starting..."
    lxc restart "$name"
    echo ""
done
echo "All containers updated"

Also, after you save it, don’t forget to chmod the file if you run it as a regular script:

chmod +x update-containers.sh

Now run the script:

./update-containers.sh

Note – no need to run this using 'sudo', i.e. as the ROOT user.  This is LXC; we like to run with minimal privileges so as not to ever break anything important!

So this simple script, which runs on Ubuntu or an equivalent distro, does the following INTERACTIVELY for every container you name:

lxc snapshot container #Make a full backup, in case the update fails
apt update             #Update the repositories
apt upgrade            #Upgrade everything possible
apt autoremove         #Free up space by removing packages no longer needed
restart container      #Make changes take effect

This process is repeated for every container that is named.  The 'lxc snapshot' is very useful: sometimes an 'apt upgrade' breaks the system.  In that case, we can EASILY restore the container to its prior state using the 'lxc restore' command.  All you have to do is first find out a container's snapshot name:

lxc info container-name

E.g. – here’s the output of ‘lxc info’ on one of our real live containers:

sysadmin@server1:~$ lxc info office

Name: office
Remote: unix://
Architecture: x86_64
Created: 2018/07/24 07:02 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 21139
Ips:
  eth0: inet   192.168.1.28
  eth0: inet6  fe80::216:3eff:feab:4453
  lo:   inet   127.0.0.1
  lo:   inet6  ::1
Resources:
  Processes: 198
  Disk usage:
    root: 1.71GB
  Memory usage:
    Memory (current): 301.40MB
    Memory (peak): 376.90MB
  Network usage:
    eth0:
      Bytes received: 2.52MB
      Bytes sent: 1.12MB
      Packets received: 32258
      Packets sent: 16224
    lo:
      Bytes received: 2.81MB
      Bytes sent: 2.81MB
      Packets received: 18614
      Packets sent: 18614
Snapshots:
  http-works (taken at 2018/07/24 07:07 UTC) (stateless)
  https-works (taken at 2018/07/24 09:59 UTC) (stateless)
  snap1 (taken at 2018/08/07 07:37 UTC) (stateless)

The snapshots are listed at the end of the info screen.  This container has three: the most recent being called ‘snap1’.  We can restore our container to that state by issuing:

lxc restore office snap1

…and then we have our container back just where it was before we tried (and hypothetically failed) to update it.   So we could do more investigating to find out what’s breaking and then take corrective action.
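One housekeeping note: because the script takes a new snapshot on every run, snapshots accumulate and consume disk space over time.  Stale ones can be removed with 'lxc delete', naming the container and snapshot together.  For example, to remove the 'snap1' snapshot shown above:

lxc delete office/snap1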

The rest of the script is boiler-plate linux updating on Ubuntu, but it's interactive in that you still have to accept the proposed upgrade(s) – we call that "responsible upgrading".  Finally, each container is restarted so that the changes take effect.  This gives a BRIEF downtime for each container (typically one to several seconds).  Don't do this if you cannot afford even a few seconds of downtime.

We run this script manually once a week or so, and it makes the whole container update process less painful and thus EASIER.

Happy LXC container updating!

‘Protectimus’ – A Better 2FA Android App

We don’t usually review android apps, but this one is worth a mention, especially if you have to deploy second-factor-authentication (2FA) in your IT systems (as we do).  The Google Authenticator 2FA app is perhaps the most common app, and it was one we used for providing 2FA login for our key accounts…until now.

We have always had an issue with Google's 2FA app on android devices.  Modern android phones are unlocked via bio-metric settings, which sounds really cool, but we think they are really weak.  Fingerprints, faces and even voices can all be used to "conveniently" unlock android devices.  Breaking into an android phone is thus easy for the determined hacker who has physical possession of the device.  This makes all the apps on an android device susceptible to "easier" hacking, because once you bypass the weak bio-metric login, the entire device is at your disposal.  Add to that the major security weakness caused by the "convenience" of having your browser auto-enter passwords for regularly visited web sites (e.g. Google's Chrome password-manager), and it's relatively EASY for secure sites to become less secure on a lost/stolen, unlocked android phone: the second factor might be the ONLY factor preventing unauthorised account access.  And even 2FA is no good if your 2FA code-generation app is on the lost/stolen phone, because the codes are available simply by opening the app.  All of this contributes to our view that android phones are a WEAK POINT for security.

Enter 'Protectimus Smart OTP' – a relatively new android app that is an alternative to the Google app.  It too can be used to scan QR codes and produce 2FA login credentials.  BUT this app has a separate PIN that you can optionally configure to open the app…so it can't be opened with the standard android device login credentials (like a fingerprint), as shown in the screenshot below:

The app is built well: even if you try to use the task-switcher (built into most android devices), the app cleverly hides the 2FA codes.  We know: we checked!  You have to UNLOCK the app to use it, as shown below:

Very nice!  By comparison, Google's 2FA app is MUCH less secure and offers no protection on a lost/stolen, unlocked android device.

We think this app represents a BETTER 2FA implementation for android devices, and thus a small improvement to our data and device protection.  So, we are using the 'Protectimus Smart OTP' app on our android devices.  We also know that this is just an IMPROVEMENT – it's not an excuse to stop further improving account/data security – but we think it is a step in the right direction.  Our thanks to the folks at Protectimus!

The app is FREE.  More information can be found at the Protectimus site – https://www.protectimus.com/

UPDATE 9-Sep-18: the app got broken by an Android P update, but the folks at Protectimus have been informed and are field-testing their fixed app now, so hopefully it will be back with us soon.  The issue only seems to affect Android P devices (i.e. our Pixel 2 XL devices).

Happy 2FA’ing.

Installing OnlyOffice Document Server in an Ubuntu 16.04 LXC Container

In our quest to migrate away from the relentlessly privacy-mining Microsoft products, we have discovered 'OnlyOffice' – a very Microsoft-compatible document editing suite.  OnlyOffice has desktop and server-based versions, including an open-source self-hosted version, which scratches a LOT of itches for Exploinsights, Inc. regarding NIST 800-171 compliance and data-residency requirements.

If you've ever tried to install the open-source self-hosted OnlyOffice document server (e.g. using the official installation instructions here), you may find it's not as simple as you'd like.  Firstly, per the official instructions, the OnlyOffice server needs to be installed on a separate machine.  You can of course use a dedicated server, but we found that for our application this is a poor use of resources, as our usage is relatively low (so why have a physical machine sitting idle most of the time?).  If instead you install OnlyOffice on a machine with other services, to better utilise your hardware, you can quickly find all kinds of conflicts: the OnlyOffice server uses multiple supporting services to function, and things can get messed up very quickly, breaking a LOT of functionality on what could well be a critical asset you were using (before you broke it!).

Clearly, a good compromise is to use a Virtual Machine – and we like those a LOT here at Exploinsights, Inc.  Our preferred form of virtualisation is LXD/LXC because of performance – LXC is blindingly fast, so it minimizes user-experience lag issues.  There is however no official documentation for installing OnlyOffice in an LXC container, and although it turns out not to be straightforward, it IS possible – and quite easy once you work through the issues.

This article is to help guide those who want to install the OnlyOffice document server in an LXC container running Ubuntu 16.04.  We have this running on a System76 Lemur laptop.  The OnlyOffice service is resource-heavy, so you need a good supply of memory, CPU power and disk space.  We assume you have these covered.  For the record, the base OS we run our lxc containers on is Ubuntu 16.04 server.

Pre-requisites:

You need a DNS name for this service – a subdomain of your main URL is fine.  So if you own "mybusiness.com", a good server name could be "onlyoffice.mybusiness.com".  Obviously you need DNS records pointing to the server we are about to create.  Also, your network router or reverse proxy needs to be configured to direct traffic for ports 80 and 443 to your soon-to-be-created OnlyOffice server.
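For illustration, the DNS entry could be a simple A record like this hypothetical zone-file line (203.0.113.10 is a documentation-range example address – use your own public IP):

onlyoffice.mybusiness.com.    IN    A    203.0.113.10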

Instructions:

Create and launch a container, then enter the container to execute commands:

lxc launch ubuntu:16.04 onlyoffice
lxc exec onlyoffice bash

Now let’s start the installation.  Firstly, a mandatory update (follow any prompts that ask permission to install update(s)):

apt update && apt upgrade && apt autoremove

Then restart the container to make sure all changes take effect:

exit                     #Leave the container
lxc restart onlyoffice   #Restart it
lxc exec onlyoffice bash #Re-enter the container

Now we must add an entry to the /etc/hosts file (lxc should really do this for us, but it doesn't, and OnlyOffice will not work unless we do):

nano /etc/hosts  #edit the file

Adjust your file to change from something like this:

127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

To this (note the added '127.0.1.1' line):

127.0.0.1 localhost
127.0.1.1 onlyoffice.mybusiness.com

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Save and quit.  Now we install PostgreSQL:

apt install postgresql

Now we have to do some things a little differently than at a regular command line, because we operate as the root user inside the lxc container.  We can create the database using these commands:

su - postgres

Then type:

psql
CREATE DATABASE onlyoffice;
CREATE USER onlyoffice WITH password 'onlyoffice';
GRANT ALL privileges ON DATABASE onlyoffice TO onlyoffice;
\q
exit

We should now have a database created and ready for use.  Next we install Node.js and the supporting services, then add the OnlyOffice repository:

curl -sL https://deb.nodesource.com/setup_6.x | bash -
apt install nodejs
apt install redis-server rabbitmq-server
echo "deb http://download.onlyoffice.com/repo/debian squeeze main" |  tee /etc/apt/sources.list.d/onlyoffice.list
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys CB2DE8E5
apt update

We are now ready to install the document server.  This is an EXCELLENT TIME to take a snapshot of the lxc container:

exit
lxc snapshot onlyoffice pre-server-install

This creates a snapshot that we can EASILY restore another day.  And sadly, we probably will have to: we have yet to find a way of UPDATING an existing document-server instance, so whenever OnlyOffice release an update, we restore the container to this snapshot and repeat the installation from this point forward, as sketched below.
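For reference, that restore-and-repeat cycle looks roughly like this (a sketch, using the snapshot name from above):

lxc restore onlyoffice pre-server-install
lxc exec onlyoffice bash
apt update
apt install onlyoffice-documentserver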

Let’s continue with the installation:

apt install onlyoffice-documentserver

You will be asked to enter the credentials for the database during the install.  Type the following and press enter:

onlyoffice

Once this is done, if you access your web site (i.e. your version of 'onlyoffice.mybusiness.com') you should see the following screen:

We now have a document server running, albeit in http mode only.  This is not good enough: we need SSL/TLS to make our server safe from eavesdroppers.  There's a FREE way to do this using the EXCELLENT LetsEncrypt service, and this is how we do that:

Back to the command line in our lxc container.  Edit this file:

nano /etc/nginx/conf.d/onlyoffice-documentserver.conf

Delete everything there and change it to the following (changing your domain name accordingly):

include /etc/nginx/includes/onlyoffice-http.conf;
server {
  listen 0.0.0.0:80;
  listen [::]:80 default_server;
  server_name onlyoffice.mybusiness.com;
  server_tokens off;

  include /etc/nginx/includes/onlyoffice-documentserver-*.conf;

  location ~ /.well-known/acme-challenge {
        root /var/www/onlyoffice/;
        allow all;
  }
}

Save and quit the editor.  Then execute:

systemctl reload nginx
apt install letsencrypt

And then this, changing the email address and domain name to yours:

letsencrypt certonly --webroot --agree-tos --email [email protected] -d onlyoffice.mybusiness.com -w /var/www/onlyoffice/

Now, we have to re-edit the nginx file:

nano /etc/nginx/conf.d/onlyoffice-documentserver.conf

…and replace the contents with the text below, changing the domain-specific items (the server_name entries and the certificate paths) to your own:

include /etc/nginx/includes/onlyoffice-http.conf;
## Normal HTTP host
server {
  listen 0.0.0.0:80;
  listen [::]:80 default_server;
  server_name onlyoffice.mybusiness.com;
  server_tokens off;
  ## Redirects all traffic to the HTTPS host
  root /nowhere; ## root doesn't have to be a valid path since we are redirecting
  rewrite ^ https://$host$request_uri? permanent;
}
#HTTP host for internal services
server {
  listen 127.0.0.1:80;
  listen [::1]:80;
  server_name localhost;
  server_tokens off;
  include /etc/nginx/includes/onlyoffice-documentserver-common.conf;
  include /etc/nginx/includes/onlyoffice-documentserver-docservice.conf;
}
## HTTPS host
server {
  listen 0.0.0.0:443 ssl;
  listen [::]:443 ssl default_server;
  server_name onlyoffice.mybusiness.com;
  server_tokens off;
  root /usr/share/nginx/html;
  
  ssl_certificate /etc/letsencrypt/live/onlyoffice.mybusiness.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/onlyoffice.mybusiness.com/privkey.pem;

  # modern configuration. tweak to your needs.
  ssl_protocols TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  ssl_session_cache builtin:1000 shared:SSL:10m;
  # add_header X-Frame-Options SAMEORIGIN;
  add_header X-Content-Type-Options nosniff;
 
  # ssl_stapling on;
  # ssl_stapling_verify on;
  # ssl_trusted_certificate /etc/nginx/ssl/stapling.trusted.crt;
  # resolver 208.67.222.222 208.67.222.220 valid=300s; # Can change to your DNS resolver if desired
  # resolver_timeout 10s;
  ## [Optional] Generate a stronger DHE parameter:
  ##   cd /etc/ssl/certs
  ##   sudo openssl dhparam -out dhparam.pem 4096
  ##
  #ssl_dhparam {{SSL_DHPARAM_PATH}};

  location ~ /.well-known/acme-challenge {
     root /var/www/onlyoffice/;
     allow all;
  }
  include /etc/nginx/includes/onlyoffice-documentserver-*.conf;
}

Save the file, then reload nginx:

systemctl reload nginx

Navigate back to your web page onlyoffice.mybusiness.com and you should now see the following, served over HTTPS:

And if you do indeed see that screen then you now have a fully operational self-hosted OnlyOffice document server.
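One follow-up worth noting: LetsEncrypt certificates are only valid for 90 days, so they need renewing periodically.  A minimal sketch, run inside the container (the 'renew' command re-uses the settings saved by the 'certonly' step above):

letsencrypt renew
systemctl reload nginx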

If you use these instructions, please let us know how it goes.  In a future article, we will show you how to update the container from the snapshot we created earlier.


Fixing Security Issues – Low Stress, the LXD Way

Routine updates – but not really.  We got a routine report from the excellent server-scanning service at www.scanmyserver.com, and it seems they did not like the Apache2 install on this site:

Scary stuff, given the incredible hacking risks of today.
Now, updating the primary web server is not always a comfortable journey – you can break more than you fix.  At www.exploinsights.com, we don't like to live dangerously, so we took full advantage of the snapshot capabilities of LXD to update the server.  The WordPress instance that operates the public web site (what you are reading now!) runs in an LXD container, so here's the entire server update performed today, exactly as executed on the primary Ubuntu server:

lxc snapshot WordPress
lxc exec WordPress bash
add-apt-repository ppa:ondrej/apache2
apt update
apt upgrade
exit
logout

This created a full working copy of the exploinsights web-site container (so it can be restored if the update breaks anything), added the PPA repository so the instance gets the latest and greatest version of apache2, and then upgraded the entire web server, including apache2.
Note that none of these commands require root access on the main server, so the risk to the actual hardware running the primary linux OS is very, very low.  Root is needed inside the LXD container, but that is separated from the host OS by design.  Excellent security management!
After that, we accessed the site to make sure it works (and it does)…and then wrote this article to share the low-risk experience.  So it WORKED!  It took longer to write this article than it took to perform the update – that's our kind of IT maintenance.
The web site should receive a much better security score when scanmyserver.com revisits (in about a week, unless we initiate it manually).  Meanwhile, here are the results from a similar scan, which were satisfactory:

No high-risk issues, but probably still more work to do, so no surprises there.  Since the web site does NOT host sensitive or mission-critical information, we will address the remaining lower-risk issues routinely.  But for our mission-critical assets, we like to fix things immediately, or at least ASAP.  Our cloud file server gets a much better score, and hopefully always will:
We love containers for running and updating key IT infrastructure, as they continue to take the stress out of important and potentially system-breaking updates.

UPDATE 14th July:

So we just got a new vulnerability report: 😊

Progress.  And an A+ rating is not shabby at all – better than we have seen elsewhere, even for government web servers.  Of course, this does NOT mean that we can relax.  You have to keep looking for vulnerabilities and address them as you find them, and you always find more.  But this is progress.  Thanks LXD!

VPN – To TCP, Or not to TCP – that is the question

So, while on travel, I noticed the EXPLOINSIGHTS, Inc. OpenVPN server becomes a little unreliable (slow).  It was originally configured using UDP as the transfer protocol.  UDP is the faster of the two modern protocol options (UDP and TCP).  It's fast because it throws data down the pipe and assumes it makes it to the right destination.  This is brilliant…when it works.  And when it doesn't, you want the slightly slower but much more robust TCP protocol, which actually checks that the packets it sends make it to the right place!
It's a simple change: on Ubuntu, just edit your openvpn config at /etc/openvpn/server.conf and change 'proto udp' to 'proto tcp'.  Save the config and restart your openvpn server ('sudo service openvpn restart' – or your server's equivalent; or reboot the server if that's more convenient).  If you run a different server distro, just google your distro and openvpn; that will show you where your config file lives, but it could well be the same path as above (so look there first!).
Now make the same change in each of your client config files.  So, for whatever devices use the VPN server, find their local config files (on Windows it's typically C:\Users\yourname\OpenVPN\config\file-name.ovpn) and make the same change as above.  No need to restart anything on the client side; just load the edited config into your client and connect.  Hey presto, your server is running over TCP.  Enjoy a more reliable/consistent, if marginally slower, vpn service.
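For clarity, the whole change is one line in each config file.  In /etc/openvpn/server.conf (and likewise in each client .ovpn file), this:

proto udp

becomes:

proto tcp

Then restart the server ('sudo service openvpn restart'); the clients just need the edited config re-loaded.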
🙂

Server Security

Routine maintenance and general configuration management of a new cloud server supporting EXPLOINSIGHTS, installed as an LXC instance, is going ok.  Nextcloud's security scan has given me a top rating for a new installation, which is encouraging but not enough to rest on laurels.  This installation is currently a mirror (in terms of files) of a current install on a live server; after testing, it will become the main cloud server for the organization's needs and the older server will be retired.

This version is of course running with server-side file encryption.  As part of the testing process, client-side encryption (a feature of Nextcloud version 13) will be evaluated, as BoxCryptor (the current Exploinsights, Inc. end-to-end encryption service) causes a few operational issues.

Server Outage Automated-Reporting

So, there are a lot of methods for checking whether your server(s) are running.  I have spent some time adapting a simple ping script.  My servers are all on the LAN side of a single IP address.  If my external (WAN) IP goes down, it's likely either my ISP, or a power cut that's gone on long enough to kill the modem/router UPS.  Not a lot I can do about that unless I am at the office (and maybe not even then!).  BUT what about my multiple containerized LAN servers, which, as I now know, can just "stop working"?
I have today set up a simple PING-TEST script (which I downloaded and adapted).  I run it via cron every hour, and it emails me if a server is down.  It pings each LAN server and does nothing if all is well.  If all is not well, it emails me.
Why not run it every minute?  Well, if a server is down, I might not be able to fix it quickly (or even at all), and it would email me the same bad news every minute until I did fix it.  I could have just left the office for a week… I don't think I need that kind of "excitement".  Every hour will do for now.  🙂
PS – here's a link to the script I am currently running, if anyone is interested.  The source of the original script is shown in the comments.  As I run the script via cron, I let cron email me if there are any results.  Nil output = no email.  Any other output and my Inbox gets ping'd (pun intended).  🙂
Note – when viewing the link, only the first few lines are displayed.  Feel free to download a copy if you are that interested, or just check out the referenced source.
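For reference, here's a minimal sketch along the same lines (the LAN addresses are made-up examples – substitute your own; this is an approximation of the adapted script, not a copy of it):

#!/bin/bash
# Ping each LAN server; print nothing if all is well.
# Run from cron: nil output = no email, any output = email.
hosts='192.168.1.10 192.168.1.11 192.168.1.12'
for host in $hosts
do
    if ! ping -c 3 -W 2 "$host" > /dev/null 2>&1
    then
        echo "Server $host appears to be DOWN at $(date)"
    fi
done

An hourly cron entry such as '0 * * * * /home/sysadmin/ping-test.sh' (the path is hypothetical) then relies on cron's normal behaviour of emailing any output.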

Server Backups using LXD

So I am working on the process of server backups today.  Most people do backups wrong, and I have been guilty of that too.  You know it's true when you accidentally delete a file and think 'No worries, I'll restore it from a backup…'; then, an hour later, after opening archives and trying to extract that one file only to hit some issue or other, you realize your backup strategy sucks.  I am thus trying to get this right from the get-go today:
LXD makes the process easy (albeit with a few quirks).  EXPLOINSIGHTS Inc. (EI) servers are structured such that each service is running in an LXD container.  Today, there are several active, ‘production’ servers (plus several developmental servers, which are ignored in this posting):

  • Nextcloud – cloud file storage;
  • WordPress – this web-site/blog;
  • Onlyoffice – an ‘OnlyOffice’ document server;
  • Haproxy – the front-end server that routes traffic across the LAN

All of these services are running on one physical device.  They are important to EI as customers access these servers (whether they know it or not), so they need to JUST WORK.
What can I do if the single device ('server1') running these services just dies?  Well, I have battery backup, so a power glitch won't do it.  Check.  And the modem/router are also on the UPS, so connectivity is good.  Check.  I don't have RAID on the device, but I do have new HDs – low risk (but not great).  Half-check there.  And if the device hardware just crashes and burns, because it can…well, that's what I want to fix today:
So my way of creating functionally useful backups is to run the following as a simple loop in a script file:

  1. For each <container name> on server1:
    1. lxc stop <container-name>
    2. lxc copy <container-name> TO server2:<container-name##>
    3. lxc start <container-name>
  2. Next <container-name>

The '##' at the end of the lxc copy command is the week number, so I can create weekly container backups EASILY and store them on server2.  I had hoped to do this without stopping the containers, but the CRIU LXD add-on (which is supposed to provide that very capability) is not performing properly on server2, so for now I get a brief server outage for each service when I run this script.  I thus have to try to run it at "quiet times", if such a thing exists; but I can live with that for now.  A minimal sketch of the loop is shown below.
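Here's that loop as a runnable sketch (the container names are examples, and it assumes 'server2' has already been added as an lxc remote via 'lxc remote add'):

#!/bin/bash
# Weekly container backups: stop, copy to server2, start again.
week=$(date +%V)   # ISO week number, used as the '##' suffix
for name in nextcloud wordpress onlyoffice haproxy
do
    lxc stop "$name"
    lxc copy "$name" "server2:${name}${week}"
    lxc start "$name"   # a stopped container is started, not restarted
done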
I did a dry-run today: I executed the script, then I stopped two of the production containers.  I then launched the backup containers with the command:

  • lxc start <container-name##>

I then edited the LAN addresses for these services and was operational again IN MINUTES.  The only user-experience change I noticed was that my login credentials had expired; other than that, it was exactly the same experience "as a user".  Just awesome!
Such a strategy is of no use if you need 100% up-time, but it works for EI for now, until I develop something better.  And to be clear, this solution is still far from perfect, so it's always going to be a work in progress.
Residual risks include:

  1. Both servers are on same premises, so e.g. fire or theft risks are not covered;
    1. Really hard to fix this because of data residency and control requirements.
  2. This strategy requires human intervention to launch the backup servers, so there could be considerable downtime.  Starting a backup lxd container for the haproxy server will also require changes at the router (this one container receives and routes all http and https traffic except ssh/vpn connections.  The LAN router presently sends everything to this server.  A backup container will have a different LAN IP address thus router reconfig is needed);
  3. The cloud file storage container is not small – about 13GB today.  52 weeks of those will have a notable impact on storage at server2 (but storage is cheap);
  4. I still have to routinely check that the backup containers actually WORK (so practice drills are needed);
  5. I have to manually add new production containers to my script – easy to forget;
  6. I don’t like scheduled downtime for the servers…

But overall, today, I am satisfied with this approach.  The backup script will be placed in a cron file for auto-execution weekly.  I may make my script a bit more friendly by sending log files and/or email notification etc., but for now a manual check-up on backup status will suffice.

Self-Hosting Journey Continues

Microsoft helped the journey to independence from them this week: EXPLOINSIGHTS Inc. (EI) signed-up for Project Online service, but after several days of frustration, the decision has been reversed.
Was it a bad program?  No.  Or more correctly: we don't know.  The service was never added to EI's services portal, even after five or more emails to the Microsoft help-desk.  So we never got to USE the program we PAID for.
We have been evaluating OnlyOffice as an alternative office suite to the Microsoft Office 365 products.  That journey is still underway, as EI's customers use Microsoft products (because their customer, the DoD, uses the same products) and compatibility is a concern; BUT OnlyOffice is definitely a contender.  The self-hosted server EI installed for online creation/editing of files on the EI cloud storage server (Nextcloud) has proved to be stable and reliable, which is an encouraging start.
With a need for a project-management software package (and in the absence of a Microsoft option, even one we paid for!), the journey has expanded this week, and several Open Source project-management packages are being evaluated, including:

  • Open Project
  • OnlyOffice Community Server (Project app)
  • Gantt Project
  • Redmine

These all have pros and cons, of course.  Web-portal offerings are the most attractive, as they allow for customer visibility of programs, but they seem to be the least configurable so far.  The Open Source 'Gantt Project' product is excellent and is a virtual Microsoft Project replacement, BUT it runs on Java, which is a system-security weak point.  And it's a client-side desktop install only, so no server protection or easy customer access either.  Of the four, Open Project has been abandoned: it was easy to install and has a great web portal, BUT you can't add a new task easily – it always appears at the end of a schedule, which makes for complicated-looking Gantt charts.  The OnlyOffice portal is better, but it does not allow for dependencies across milestones, which is counter-intuitive and makes it easy to miss important implications of e.g. a slipping milestone/task.
Redmine has potential: it's a server-side install, BUT the system depends on third-party plugins for the really good features, and those are always an area of concern from a security perspective (and they make installation more difficult too).
As has proved to be the case for Office suite software, finding a replacement for Microsoft Project is not “easy”, but it’s a rewarding journey because of the greatly improved awareness it creates regarding the options.
Much more to do before EI can officially drop Microsoft, but the journey continues.

Ransomware

Another day, another government agency hit by ransomware:
Here
We need some serious effort to take down those behind these attacks; crypto-currency does not help, as the bad guys hide behind anonymous payments.  I am also left wondering how long before I get hacked, AND whether my backup strategy will work.  That said, since my backups are NOT CONNECTED TO THE INTERNET, I at least have a fighting chance.
I back up whenever I am in my office, onto separate drives that are not internet-connected, so ransomware cannot easily affect them.  That doesn't help my systems, which can be rebuilt, but my data at least are relatively safe.
My Nextcloud instance provides some protection too, as file versioning means my changed files are always retained even if changed by malware.
Good luck to those who have to worry about this stuff.  #MeToo!
#Offline is sometimes the only way