Updating the P5 client on the Jellyfish

You’ve successfully installed Archiware’s P5 backup and archiving software on your backup server by following my previous blog posts, and after it has run smoothly for a while you decide to upgrade the version of P5 on your server. But how do you do this on the Lumaforge Jellyfish storage? I’m glad you asked.

There are a couple of ways to update your P5 agent, and I will show you the built-in way in Archiware’s P5 software. Surprisingly, after many years of using P5 I have never used this method before. I’ve been using Munki for years to upgrade all the software on my Mac clients, including P5, and on Linux and Solaris servers I’ve just done it by hand: install over top of the previous version and voilà, upgrade! But what if you didn’t want to ssh in as root and install over top, what if there was a better way? I present to you the official “Update client” dialog box. It’s nice.

Update-p5-jellyfish-1

Updating client software assumes you’ve already set up clients in the P5 server’s Clients section. This is needed when you want to use these agents to designate their attached storage as a backup, archive or sync source. It also assumes you’ve updated your server first.

P5 client update Jellyfish 2

During the update process there are some nice dialog boxes to let you know what is happening.

P5 client update Jellyfish 4

And afterwards you can test your client with a Ping test.

P5 client update Jellyfish 3

Success! Looks like we’ve updated our client successfully. How wonderful. And no need to mess about in Terminal with a root shell. No telling what kind of trouble we could get into with those elevated privileges… much safer this way.

Thanks Archiware for making this great software. I depend on it every day.

 

 

P5 on the Jellyfish: Archiving Gotchas

TL;DR

Using Archiware P5 to archive files to tape is awesome, but watch out for little things you might miss, such as the path to the files and backing up your archive database.

P5 Archive on the Jellyfish

Using P5 Archive with the Lumaforge Jellyfish is a great way to preserve your digital archives. See this post for how to set up P5 on the Jellyfish.

Using Archiware P5 for archiving makes sense. You want your completed projects and original camera footage on LTO tape. But how do you do archives? There are several different ways, and there be gotchas.

P5 Archive vs P5 Archive app

Using P5 Archive to manually archive completed projects to LTO tape is a process of logging into the server via a web browser and selecting the project folder you want to archive to tape.

The completed project folder could be on storage visible to the server, or it could be storage the client sees. And that can make a difference: where the storage is mounted is different on a Mac vs Linux. It’s the difference between “/Volumes” and “/mnt”.

The same Jellyfish storage, whether shared over SMB or NFS, is mounted on a Mac by default at “/Volumes” (this can be changed, but most people should leave it at the default). But when archiving the storage via a Jellyfish client you will get the “/mnt” path.
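
For example, the same project folder can end up in the archive index under two different paths depending on which client archived it (the share and project names here are just placeholders):

/mnt/Primary/ShareSMB/Projects/MyProject
/Volumes/ShareSMB/Projects/MyProject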

p5-smb-test2.png

If you use the P5 Archive app, a Mac-only companion application to P5 Archive, to run the archives, you will see the storage archived under “/Volumes”.

The first archiving gotcha: if you archive the Jellyfish storage with the P5 Archive web application, you will have to find your footage and restore it from the “/mnt” path, whereas if you archive from the P5 Archive app, which runs on a Mac, the footage will be seen and stored using the “/Volumes” path.

All this to say that using both ways to archive may double up the footage in your archive, which may be unintended. And when restoring in the web browser, finding your footage may be confusing if you’re used to seeing it mounted at “/Volumes” but actually find it under “/mnt”.

Note: the reason to use the P5 Archive app is the simplicity of right-clicking files in the Finder which are on your storage and telling them to archive right then and there. Files are copied to tape, then the original files on the storage are replaced with stub files. Right-click again to restore. Simple.

p5-archive-app-job-monitor.png

Back up your Archive!

Don’t forget to back up your archives. Or rather, your archive database. A more recent addition to P5 is the ability to automate backups of the Archive index, so don’t forget to enable it.

In the managed index section, choose your Archive index.

Set the target client where the backups are going and the backup directory. Choose a time and don’t forget to enable it (check the checkbox and hit Apply before closing the window).

Note: Repeat this setup for each Archive index you want to back up.

Archive Backup db setup3.png

Monitoring your Archive!

Don’t forget to enable email notifications for your P5 server to get your inbox full of status notifications, errors and other important stuff. But if you want to cut down on email notifications, or you have multiple P5 servers (many different clients, perhaps), then you might want to check out Watchman Monitoring and its built-in P5 plugin. Find out easily when your tape pools are getting low, the tape drives need to be cleaned, the support maintenance needs renewing, etc. All in one dashboard. How convenient!

Maybe everything is going well…

Watchman-P5-info.png

Or maybe not!

Archiware-P5-Jobs-Watchman-tapes-required.png

 

Install P5 on the Jellyfish

TL;DR

You can easily install Archiware P5 backup and archive software on a Lumaforge Jellyfish storage server. Once you’ve done that you can back up to tape, disk or the cloud, directly or through another P5 server. Backups are good. Archives are good. Restores are better.

P5 install on the Jellyfish (Linux) How-To:

Note: Thank you to Lumaforge’s CTO Eric Altman who gave me some basic instructions to get me going.

Step One: Download the latest Linux P5 rpm file 

http://p5.archiware.com/download

p5-Linux-rpm.png

Copy the downloaded rpm file to the root folder of your SMB or NFS file share.
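
For example, if the share is mounted on your Mac, you could copy it in Terminal (the share name below is just an example matching the path used in the next step; adjust it for your setup):

cp ~/Downloads/awpst554.rpm /Volumes/ShareSMB/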

 

Step Two: Install the rpm file

Open Terminal and ssh into your Jellyfish. Log in as root or as another appropriate user.

yum localinstall /mnt/Primary/ShareSMB/awpst554.rpm
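
If you log in as a regular user rather than root, the same install would need sudo (assuming sudo is configured for that user on the Jellyfish):

sudo yum localinstall /mnt/Primary/ShareSMB/awpst554.rpm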

 

Step Three: Browse to the server on port 8000 to test that the server is up

e.g. https://jellyfish:8000

Or, in Terminal, ssh into your Jellyfish and ping your P5 server:

cd /usr/local/aw 

./ping-server

Pinging PresStore application servers...

  lexxsrv pid: 4840 (server is running)

  lexxsrv url: http://127.0.1.1:8000/login 

Pinged 1 from 1 application servers.

 

Step Four: Decide if the Jellyfish storage will be a P5 client or a server.

Note: If configuring the Jellyfish storage as the main P5 server you may wish to set up a user that only has access to the shared volumes.

For my setup the Jellyfish storage is going to act as a P5 client to a main P5 server on a Mac mini (yes, they are useful for something). The Mac mini in this case is the P5 server and is attached to the Overland tape library via a Promise SANLink2 Thunderbolt Fibre Channel adapter.

NEOs-T24-large-new.jpg

macmini-ports.png

 

Step Five: Set up the Jellyfish storage as a P5 client

Log into your P5 server and add the Jellyfish by the IP known to the P5 server. In this case the P5 server is connected to the Jellyfish via a 1Gb link on Port 1.

P5 clients jellyfish setup1.png

Note: You could also choose to plug into the Jellyfish via a 10Gb port, but in my setup those 10Gb ports are reserved for the edit stations. You should choose what’s appropriate for your setup.

P5 clients jellyfish setup2.png

Resource utilization of P5 on the server is low, topping out generally at 1GB of RAM at peak usage. While this does technically take resources away from ZFS caching, the impact should be super minimal.

In my observations the CPU never spiked too high while serving both NFS and SMB mount points to multiple Final Cut Pro X workstations, even with backup or archive jobs going to tape at the same time.

jellyfish-cpu-resources-graph.png

More Jellyfish P5

See the follow-up post on archiving gotchas with the Jellyfish here.

 

macOS Server is dead. Long live macOS.

Yes, it’s been a hot topic in the MacAdmin community, both on the Mac Enterprise list (oh no, it’s the end of the world!) and MacAdmins Slack (told you it was coming, don’t be surprised).

My professional opinion is: “Don’t panic!”

My MacDevOps conference is all about supporting MacAdmins who have been writing code as infrastructure to manage Macs, and doing it while replacing macOS Server in the server room with Linux and other OSes.

Xsan is staying in macOS Server, so I am happy; that’s my main use for the Mac mini and macOS Server.

I have other Mac minis doing file sharing for small workgroups, and moving that out of Server.app in the last revision was unfortunate (it is in the standard OS and usable there, but less manageable). There are also Synology and QNAP NAS devices for small workgroup file sharing and so much more, and many enterprise storage vendors for larger setups.

Imaging has been dying a slow death for years and has been replaced with a thin or “no imaging” concept supported by tools such as AutoPkg and Munki.

Profile Manager is a demo version of MDM and should not be used to actually manage Macs.

Wikis, DNS and mail should be hosted on Linux, in VMs, on AWS, GCP or anywhere other than macOS Server, so no problem there.

Overall it might be disconcerting to some. But change is constant. And especially at Apple change comes fast and often. We have to get used to it.

Reference:

Apple Support article

Apple to Deprecate Many macOS Server Services – TidBITS http://tidbits.com/e/17760

macOS High Sierra vs Server.app

Upgrading to macOS High Sierra is akin to walking on the bridge of peril. Too perilous!

I don’t recommend macOS 10.13.x for production, but it is necessary to test, and for this reason back in September I did upgrade my test Mac. Of course, when the installer detects Server it will give you a warning about it not being compatible, and you’ll have to download a compatible version from the App Store. Be warned!

ThisVersionOfServerNoLongerSupported2

Which is no big deal, as long as you are warned, have backups, and can hopefully download the compatible version from the App Store. Trying to launch the old version will get you a warning to go to the App Store and be quick about it.

ThisVersionOfServerNoLongerSupported

Some people are reporting that the macOS installer is erasing their Server.app and refusing to upgrade their Server with the macOS 10.13 compatible version (v.5.4).

In that case, restore from Time Machine or other backups and start again?

Setting up Secure Munki

So you’ve set up Munki to deploy software to your Macs by following the basic set up here: Set up Munki, and now you want to set it up more securely.

You need two things: 1) a cert and 2) a secure repo.

  • TRUST US

The optimal situation is a trusted certificate for your server from a reputable certificate authority. If you don’t have that, or you want to use the self-signed certificate your server already has, then your Munki Mac clients will need to trust this certificate.

Export the cert from Server Admin if you’re using that to manage your Mac mini server. Place this cert file on your clients (using ARD or other methods), then use the security command to get the Mac clients to trust this cert.

security add-trusted-cert -d -r trustRoot -k "/Library/Keychains/System.keychain" "/private/tmp/name-of-server.cer"
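
To confirm the certificate actually landed in the System keychain, you can search for it by its common name (substitute your server’s name):

security find-certificate -c "name-of-server" /Library/Keychains/System.keychain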

REFERENCE: Rich Trouton’s blog goes into more detail and details a way to script this.

  •  SECURE IT

Use htpasswd to add a password to your Munki repo.

htpasswd -c .htpasswd munki
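
The .htpasswd file needs to end up at the path you reference in AuthUserFile below, so either run the command from your repo root or give it the full path; for example, with a hypothetical repo root of /Users/Shared/munki_repo:

htpasswd -c /Users/Shared/munki_repo/.htpasswd munki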

Edit the .htaccess info:

AuthType Basic
AuthName "Munki Repository"
AuthUserFile /path/to/your/munki/repo_root/.htpasswd
Require valid-user

Encode this password for Munki:

python -c 'import base64; print "Authorization: Basic %s" % base64.b64encode("USERNAME:PASSWORD")'
Authorization: Basic VVNFUk5BTUU6UEFTU1dPUkQ=
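
The one-liner above is Python 2. If your admin Mac only has Python 3 available, the equivalent (same output, different syntax) would be something like:

python3 -c 'import base64; print("Authorization: Basic %s" % base64.b64encode(b"USERNAME:PASSWORD").decode())'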

Push out this password to your Munki clients with ARD (or use some other method)

defaults write /Library/Preferences/ManagedInstalls.plist AdditionalHttpHeaders -array "Authorization: Basic VVNFUk5BTUU6UEFTU1dPUkQ="

Change the Munki RepoURL on all your clients to use the new secure URL

defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://munkiserver/munki_repo"
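
You can verify that both settings took effect on a client by reading them back, and then run a check against the repo (again via ARD, or locally):

defaults read /Library/Preferences/ManagedInstalls SoftwareRepoURL
defaults read /Library/Preferences/ManagedInstalls AdditionalHttpHeaders
sudo /usr/local/munki/managedsoftwareupdate --checkonly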

REFERENCES:

Consult the Munki Wiki for: Basic authentication setup for Munki 

Alan Siu’s excellent write-up on securing Munki

Notes:

Consider using a server made for securing Munki, like the Squirrel server from the MicroMDM project. More on this in another blog post.

Consider using a certificate from a known, reputable certificate authority such as Let’s Encrypt (the Squirrel server above automates the setup with Let’s Encrypt).

Further:

Another project which seeks to combine all these open source projects in the Munki ecosystem is Munki in a Box. There’s a secure branch of this project which sets up basic authentication as well, but while it aims to simplify setting up a secure Munki, it may be a bit confusing at first glance. Test, and test again.

 

 

NFS set up with OS X 10.9 Mavericks

One way to set up NFS shares on OS X 10.9.x

Summary: On OS X, create an “exports” text file describing the share you want to export over NFS. Server.app is not necessary. On the client, the fstab file describes where the share gets mounted. Note: use whatever text editor you wish, whether it is vi, nano, TextWrangler, etc.

Server:

1. “sudo vi /etc/exports” example:

/MySharedFolder -maproot=nobody

2. “sudo nfsd checkexports”

Check the correctness of the exports file

3. “sudo nfsd enable”

Start nfsd

Note: run “sudo kill -1 `cat /var/run/mountd.pid`” if nfsd had been running previously and you want it to reread exports.

4. “/usr/bin/showmount -e”

Test the share. It should show something like: “/MySharedFolder Everyone”
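
One optional refinement to the exports file back in step 1: you can also restrict the export to a subnet (the 192.168.23.0/24 network below just matches the client example further down; use your own network and netmask):

/MySharedFolder -maproot=nobody -network 192.168.23.0 -mask 255.255.255.0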

Client:

1. “sudo mkdir /MyShare”

Make the mount point for the NFS share

2. “sudo vi /etc/fstab”

Edit the fstab file to show the mounts you wish to have

Example:

192.168.23.5:/MySharedFolder /MyShare nfs rw,async,noatime 0 0

3. “sudo mount -a”

Mount all
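
Alternatively, to test without editing fstab at all, you can mount the same export by hand (same server IP, options and mount point as the example above):

sudo mount -t nfs -o rw,async,noatime 192.168.23.5:/MySharedFolder /MyShare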