Thunderbolt Xsan: Set up a T-SAN

Setting up your very own Xsan at home… What could be more exciting? Nothing like SAN storage to cure those stacks-of-hard-drives blues. Don’t have a spare Fibre Channel switch or Fibre Channel storage at home? No problem. Grab some Thunderbolt storage from Accusys and join the fun.

I am testing the A12T3-Share, a 12-drive desktop Thunderbolt RAID, to build my Xsan. Accusys also has a 16-drive rack-mounted RAID storage box if you want to install a nice pro setup in the server room you have tucked neatly into your home office. Ha ha. Seriously, the 12-drive unit is whisper quiet and would be a great addition to any home lab or production storage setup. I mean, aren’t we all doing video production at home these days? And even if we are doing a proxy workflow in the clouds, we still need to store the original footage somewhere before it goes to LTO tape or gets backed up in the clouds (hopefully another cloud). A few years ago I tested the Accusys 16-drive Thunderbolt 2 unit and it worked perfectly with my Fibre Channel storage, but this time I am testing the newest Thunderbolt 3 unit. Home office test lab is GO!

It is a pretty straightforward setup, but I ran into some minor issues that anyone could hit, so I want to mention them and save you the frustration: learn from my mistakes. Always be learning. That’s my motto. Or “break things at home, not in production”, but if your home is production now, then break things fast and learn very quickly.

The first step is to download the software for the RAID, which you’ll find on the Accusys website.

(I found the support downloads well organized but still a bit confusing as to what I needed.)

The installer is not signed, which in our security-conscious age is a little concerning, but examining the package with Suspicious Package should allay any concerns.

The installer installs the RAIDGuard X app, which you will need to configure the RAID.

Of course, RAIDGuard X needs a Java Runtime Environment to run. Why is this still a thing? Hmm…

RAIDGuard X will allow you to configure your connected Thunderbolt hardware.

Configure the array as you like. I only had four drives to test with. Just enough for RAID5.

Choose your favourite RAID level. I picked RAID5 for my 4 drives.

The first gotcha that got me was a surprisingly simple and easy-to-overlook section. “Assign LUN automatically” asks you to choose which port the LUN (the configured RAID) will be assigned to. If you don’t check anything, as I didn’t on my first run-through, you end up with a RAID5 array that you’ll never see on your connected Mac. Fun, right? Ha ha.

Xsan requires a sacrifice… I mean, a LUN (an available RAID array). Check Fibre Channel in System Information. Yes, this is from the Thunderbolt storage. Hard to believe, but it’s true!
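
You can see the same information from Terminal with system_profiler. A minimal sketch (the SPFibreChannelDataType name is what I see on my Macs; it may vary by macOS release):

system_profiler SPFibreChannelDataType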

Setting up enterprise-grade SAN storage requires a trip to the Mac App Store for Server.app.

Open Server.app, enable Xsan, create a new volume and add your LUN from the Accusys Thunderbolt array. Set the usage to “any” (metadata and data) since this is a one LUN test setup.

Pro tip: connect your Xsan controller to your Open Directory server. Ok, just kidding. You don’t have an OD server in your home office? Hmm… Create an entry in /etc/hosts instead.
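
A hypothetical /etc/hosts entry might look like the line below; the hostname and IP are placeholders, so use whatever name and metadata-network address your Xsan controller actually has:

# example only: Xsan metadata controller on the metadata network
10.0.1.2   xsan-controller.private   xsan-controller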

If you’ve set up your SAN volume then you will see it listed in the Finder.
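
You can also confirm the mount from Terminal; Xsan volumes show up under /Volumes with the acfs filesystem type, so something like this should list your new volume:

mount | grep acfs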

Easy shareable SAN storage is possible with Thunderbolt RAID arrays from Accusys; no more Fibre Channel switches needed. Small SAN setups are possible for creative teams without a server room. This setup was a quiet 12-drive RAID and a Mac mini, plus some Thunderbolt cables. The array has four Thunderbolt 3 connections, and you can add more with an additional RAID: up to 8 connections, with one of them for the Mac mini running the SAN. Not bad at all. And Xsan is free: add Server.app from the App Store on the controller, but the Xsan client is free and built in (Xsan has been included with macOS since 10.7, so many years ago). The Fibre Channel protocol (even through Thunderbolt) is faster than network protocols and great for video production. Fast and shareable storage at home. Or in your office. Thunderbolt Xsan. T-SAN.

Minecraft Server for My Kids and My Sanity

Summer time, or anytime, is a good time to run a Minecraft server. And when I am not troubleshooting IT networks, planning SAN storage upgrades, running a DevOps for Dummies book club and the MDOYVR podcast, I like to upgrade my Minecraft server.

Every time there is an update to the Java client there is demand from my users (uh, I mean, my kids) to immediately stop all other work (hey kids, I’m working here! Let Dad work) and upgrade the Minecraft server.

Like all other IT domains where there are a variety of solutions and software fixes to problems, Minecraft has official server downloads as well as unofficial artisanal craft versions. I’ve tried a few, some out of desperation… there was an incident with netherite blocks and the server wouldn’t start anymore, but the Paper Minecraft server fixed the issues!

The normal routine is that when an official release comes out the other versions may not be up to date as quickly, so it’s back to the official versions.

Download the official Minecraft Server

Or try the Paper Minecraft Server
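
Whichever jar you pick, starting it is roughly the same. A minimal sketch (the jar name and memory sizes are just examples; the first run writes an eula.txt you need to set to true before it will start for real):

java -Xmx2G -Xms2G -jar server.jar nogui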

See also Michael Lynn’s two-part family harmony blog series, which started me on this road to keep the kids happy and maintain family happiness.

Xsan Upgrade and Big Sur Prep. Hello Catalina!

Big Sur summer testing time!

Summer time is beta testing time. A new macOS beta cycle with Big Sur is upon us. Test early, and test often. With all the excitement of Big Sur in the air, it’s time to look at Catalina.

Our day-to-day production Xsan systems do not run beta software, or even the latest version of macOS; they only run tested and safe versions. I always recommend being a revision behind the latest. Until now that meant macOS 10.14 (Mojave). With the imminent release of macOS Big Sur (is it 10.16 or macOS 11?), it’s time to move from 10.14.6 Mojave to 10.15.6 Catalina. It must be safe now, right?

Background

Xsan is Apple’s Storage Area Network (SAN) software, based on technology licensed from Quantum (see StorNext), and since macOS 10.7 aka Lion it has been included with macOS for free (it was $1,000 per client previously!).

Ethernet vs Fibre Channel vs Thunderbolt

A SAN is not the same as a NAS (network attached storage) or DAS (direct attached storage). A NAS or other network-based storage is often 10GbE and can be quite fast and capable. I will often use a Synology NAS with 10GbE for a nearline archive (a second copy of the tape archive), but it can also serve as primary storage with enough cache. Lumaforge’s Jellyfish is another example of network-based storage.

Xsan storage is usually Fibre Channel based, and even old 4Gb storage is fast because… the Fibre Channel protocol (FCP) is fast and the data frames are sent in order, unlike TCP. It is more common to see 8Gb or 16Gb Fibre Channel storage these days (though 32Gb is starting to appear). And while Fibre Channel is typically what you use for Xsan, you can also use shared Thunderbolt-based storage like the Accusys A16T3-Share. I have tested a Thunderbolt 2 version of this hardware with Xsan and it works very well. I’m hoping to test a newer Thunderbolt 3 version soon. Stay tuned.

Xsan vs macOS Versions

We’ve discussed all the things that Xsan is not, so what is it? An Xsan volume is often created from multiple Fibre Channel RAID storage units, but the data is entirely dependent on the Xsan controller that creates the volume. The Xsan controller is typically a Mac mini but can be any Mac with Server.app (from Apple’s App Store). The existence of any defined Xsan volumes depends on the sanity of its SAN metadata controllers: if the SAN controllers die and the configuration files go with them, then your data is gone. POOF! I’ve always said that Xsan is a shared hallucination, and all the dreamers should dream the same dream. To make sure of this we always recommend running the same version of macOS on the Mac clients as well as the servers (the Xsan controllers). And while the Xsan controllers should be at the same or a higher macOS version than the clients, it can sometimes be the opposite in practice. To be sure which versions of macOS are interoperable, check Apple’s Xsan controllers and clients compatibility chart and the Xsan versions included in macOS for the rules and exceptions. Check the included version of Xsan on your Mac with the cvversions command.
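
The Xsan command-line tools aren’t on the default PATH; they live inside the Xsan filesystem bundle, so the invocation usually looks something like this (path from my own systems; adjust if yours differs):

/System/Library/Filesystems/acfs.fs/Contents/bin/cvversions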

File System Server:
  Server  Revision 5.3.1 Build 589[63493] Branch Head BuildId D
   Built for Darwin 17.0 x86_64
   Created on Sun Dec  1 19:58:57 PST 2019
   Built in /BuildRoot/Library/Caches/com.apple.xbs/Sources/XsanFS/XsanFS-613.50.3/buildinfo

This is from a Mac running macOS 10.13.

Host OS Version:
 Darwin 17.7.0 Darwin Kernel Version 17.7.0: Sun Dec  1 19:19:56 PST 2019; root:xnu-4570.71.63~1/RELEASE_X86_64 x86_64

We see similar results from a newer build below:

File System Server:
  Server  Revision 5.3.1 Build 589[63493] Branch Head BuildId D
   Built for Darwin 19.0 x86_64
   Created on Sun Jul  5 02:42:52 PDT 2020
   Built in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/XsanFS/XsanFS-630.120.1/buildinfo

This is from a Mac running macOS 10.15.

Host OS Version:
 Darwin 19.6.0 Darwin Kernel Version 19.6.0: Sun Jul  5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64 x86_64

This tells us that the same version of Xsan is included with macOS 10.13 and 10.15 (and indeed is the same from 10.12 to 10.15). So situations with Xsan controllers running 10.13 and clients running 10.14 are possible: even though the macOS versions are a mismatch, the Xsan versions are the same. There are other reasons for keeping the macOS versions the same: troubleshooting, security, management tools, etc. To be safe, check with Apple and other members of the Xsan community (on MacAdmins Slack).
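
A quick way to check what each controller and client is actually running before you compare notes (both commands ship with macOS):

sw_vers -productVersion
uname -r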

Backups are important

Do not run Xsan or any kind of storage in production without backups. Do not do it. If your Xsan controllers die then your storage is gone. Early versions of Xsan (v1 especially) were unstable and the backups lesson can be a hard one to learn. All later versions of Xsan are much better, but we still recommend backups if you like your data. Or your clients. (Clients are the people who make that data and pay your bills.) I use Archiware P5 to make tape backups, tape archives and nearline copies, as well as workstation backups. Archiware is a great company and P5 is a great product. It has saved my life (backups are boring, restores are awesome!).

P5-Restore-FCPX.png

Xsan Upgrade Preparation

When you upgrade macOS it will warn you that you have Server.app installed and you might have problems. After the macOS upgrade you’ll need to download and install a new version of Server.app. In my recent upgrades from macOS 10.13 to macOS 10.15, via a 10.14 detour, I started with Server.app 5.6, then installed 5.8 and finally version 5.10.

After the macOS upgrade I would zip up the old Server.app application and put in place the new version, which I had already downloaded elsewhere. Of course you get a warning about removing the Server app.
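
For the zip step I use ditto in Terminal; a rough sketch, with the destination path and archive name just examples:

ditto -c -k --keepParent /Applications/Server.app ~/Desktop/Server-old.zip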


Xsan-ServerApp-ZipRemovalDetected.png

Install the new Server app, then really start your Xsan upgrade adventure.

Serverapp-setup.png

Restore your previous Xsan setup.


If everything goes well then you have Xsan set up and working on macOS 10.15.6 Catalina.

Xsan-Catalina-Upgrade-Success

Best of 2018: FCPX and iMac Pro

Part of a series of blog posts on the “Best of 2018”

Part 1: the iMac Pro and FCPX

The year started off with the new iMac Pro and Final Cut Pro X 10.4. Both the new hardware and the new software were released in December 2017. New awesome hardware and software to start off 2018.

FCPX and the iMac Pro have proven themselves to be a great combination that has been amazing for FCPX editors everywhere. The new colour grading tools and other enhancements were warmly received in FCP X 10.4. The power of the iMac Pros was not exaggerated. Excellent pro hardware.

FCPX works great on a MacBook Pro with internal storage, with Apple’s Xsan and Fibre Channel, or with a Lumaforge Jellyfish over 10GbE NFS. I worked with all of these different setups in 2018 and I’m happy to report that editors kept editing and left the storage and backup worries to me (and I didn’t worry, since I’ve got Archiware P5 watching my back).

Working with the Jellyfish, I installed the P5 Linux agent to back up and archive to tape. Getting the Jellyfish to back up to my P5 server running on a Mac mini couldn’t have been easier. Through the year I worked with Archiware to make improvements in the P5 Archive app so that my editor clients can archive and restore more easily on their own. It works well, and I look forward to working more closely with both companies to help make awesome setups for FCPX editors and creative professionals everywhere.

NAB and FCPX

The week before NAB 2018, Apple announced a new version of Final Cut Pro X with support for closed captions, and the brand new ProRes RAW codec.

NAB in April is always busy, with announcements from every company in the media production and media asset management world, and Apple’s public talk at NAB showing off new features so soon after their last major release was unexpected but very warmly received.

Of course there was one more major event in 2018: in November there was the FCPX Creative Summit.

I attended this year and it was awesome. Apple released a brand new version with 3rd party integration in the form of extensions. This is huge. This will be amazing for FCPX editors who want to stay in FCPX and do their editing work but integrate with other apps.

What was the FCPX Creative Summit?

⁃ A rendez-vous in Cupertino with Final Cut Pro editors, studio owners, plugin authors and creative apps vendors.

⁃ A visit to Apple HQ, with Apple Pro Apps engineers, QA, managers and everyone involved.

⁃ In-depth discussion of the next version of FCPX extensions, which allow third-party integration deep into the app, for example: Frame.io for review and approve, or Keyflow Pro or CatDV media asset management apps.

⁃ A great team of people organizing. This event had multiple tracks and lots of great sessions for everyone. Well done. I enjoyed it immensely. Everyone using Final Cut Pro or involved in this creative universe should be there.

2018 was a great year for pro hardware and software. The iMac Pro and the constant stream of FCPX updates kept us grinning from ear to ear. Great stuff. Awesome year.

Next up: best conferences of 2018

Updating the P5 client on the Jellyfish

You’ve successfully installed Archiware’s P5 backup and archiving software on your backup server by following my previous blog posts, and after it has run smoothly for a while you’ve decided to upgrade the version of P5 on your server. But how do you do this on the Lumaforge Jellyfish storage? I’m glad you asked.

There are a couple of ways to update your P5 agent, and I will show you the built-in way in Archiware’s P5 software. Surprisingly, after many years of using P5 I have never used this method before. I’ve been using Munki for years to upgrade all software on my Mac clients, including P5, and on Linux and Solaris servers I’ve just done it by hand: install over top of the previous version and voilà, upgrade! But what if you didn’t want to ssh in as root and just install over top, what if there was a better way? I present to you the official “Update client” dialog box. It’s nice.

Update-p5-jellyish-1

Updating client software assumes you’ve set up clients in the P5 server’s Clients section. This is needed when you want to use these agents to designate their attached storage as a backup, archive or sync source. It also assumes you’ve already updated your server.

P5 client update Jellyfish 2 Screen Shot 2018-08-06 at 4.40.03 PM

During the update process there are some nice dialog boxes to let you know what is happening.

P5 client update Jellyfish 4 Screen Shot 2018-08-06 at 4.44.34 PM

And afterwards you can test your client with a Ping test.

P5 client update Jellyfish 3 Screen Shot 2018-08-06 at 4.44.27 PM

Success! Looks like we’ve updated our client successfully. How wonderful. And no need to mess about in Terminal with a root shell. No telling what kind of trouble we could get into with those elevated privileges…. much safer this way.

Thanks Archiware for making this great software. I depend on it every day.



P5 on the Jellyfish: Archiving Gotchas

TL;DR

Using Archiware P5 to archive files to tape is awesome, but watch out for the little things you might miss, such as the path to the files and backing up your archive index database.

P5 Archive on the Jellyfish

Using P5 Archive with the Lumaforge Jellyfish is a great way to preserve your digital archives. See this post for how to set up P5 on the Jellyfish.

Using Archiware P5 for archiving makes sense. You want your completed projects and original camera footage on LTO tape. But how do you do archives? There are several different ways, and there be gotchas.

P5 Archive vs P5 Archive app

Using P5 Archive to manually archive completed projects to LTO tape is a process of logging into the server via a web browser and selecting the project folder you want to archive to tape.

The completed project folder could be on storage visible to the server, or it could be storage the client sees. And that can make a difference: where the storage is mounted is different on a Mac vs Linux. It’s the difference between “/Volumes” and “/mnt”.

The same Jellyfish storage, whether SMB or NFS, when seen on a Mac is mounted by default at “/Volumes” (this can be changed, but most people should leave it at the default). But when archiving the storage via the Jellyfish client you will get the “/mnt” path.
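
As a purely illustrative example (the folder names here are made up), the same project folder shows up in two different places:

ls /Volumes/ShareSMB/MyProject        # how a Mac workstation sees the share
ls /mnt/Primary/ShareSMB/MyProject    # how the Jellyfish (and the P5 web UI) sees it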

p5-smb-test2.png

The P5 Archive app is a Mac-only companion application to P5 Archive; if you use it to run the archives, you will see the storage archived under “/Volumes”.

The first archiving gotcha: if you’re archiving the Jellyfish storage with the P5 Archive web application, you will have to find your footage and restore it from the “/mnt” path, whereas if you’re archiving from the P5 Archive app, which runs on a Mac, the footage is seen and stored using the “/Volumes” path.

All this to say that using both ways to archive may double up the footage in your archive, which may be unintended. And when restoring in the web browser, finding your footage may be confusing if you’re used to seeing it mounted in “/Volumes” and you actually find it under “/mnt”.

Note: the reason to use the P5 Archive app is the simplicity of right-clicking files on your storage in the Finder and telling them to archive right then and there. Files are copied to tape, then the original files on the storage are replaced with stub files. Right-click again to restore. Simple.

p5-archive-app-job-monitor.png

Back up your Archive!

Don’t forget to back up your archives. Or rather, your archive index database. A more recent addition is the ability to automate backups of the Archive index, so don’t forget to enable it.

In the managed index section, choose your Archive index.

Set the target client where the backups are going and the backup directory. Choose a time and don’t forget to enable it (check the checkbox and hit Apply before closing the window).

Note: Repeat this setup for each Archive index you want to back up.

Archive Backup db setup3.png

Monitoring your Archive!

Don’t forget to enable email notifications for your P5 server to get your inbox full of status notifications, errors and other important stuff. But if you want to cut down on email notifications, or you have multiple P5 servers (many different clients, perhaps), then you might want to check out Watchman Monitoring and its built-in P5 plugin. Find out easily when your tape pools are getting low, the tape drives need to be cleaned, the support maintenance needs renewing, etc. All in one dashboard. How convenient!

Maybe everything is going well…

Watchman-P5-info.png

Or maybe not!

Archiware-P5-Jobs-Watchman-tapes-required.png


Install P5 on the Jellyfish

TL;DR

You can easily install Archiware P5 backup and archive software on a Lumaforge Jellyfish storage server. Once you’ve done that you can back up to tape, disk or the cloud, directly or through another P5 server. Backups are good. Archives are good. Restores are better.

P5 install on the Jellyfish (Linux) How-To:

Note: Thank you to Lumaforge’s CTO Eric Altman who gave me some basic instructions to get me going.

Step One: Download the latest Linux P5 rpm file 

http://p5.archiware.com/download

p5-Linux-rpm.png

Copy the downloaded rpm file to the root folder of your SMB or NFS file share.
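
You can drag it over via the mounted share in the Finder, or copy it with scp; a sketch, assuming a hypothetical admin user and the same share path used in the install step below:

scp awpst554.rpm admin@jellyfish:/mnt/Primary/ShareSMB/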


Step Two: Install the rpm file

Open Terminal and ssh into your Jellyfish. Log in as root or as another appropriate user.

yum localinstall /mnt/Primary/ShareSMB/awpst554.rpm


Step Three: Browse to the server on port 8000 to test that the server is up

e.g. https://jellyfish:8000

Or open Terminal, ssh into your Jellyfish and ping your P5 server:

cd /usr/local/aw 

./ping-server

Pinging PresStore application servers...

  lexxsrv pid: 4840 (server is running)

  lexxsrv url: http://127.0.1.1:8000/login 

Pinged 1 from 1 application servers.


Step Four: Decide if the Jellyfish storage will be a P5 client or a server.

Note: If configuring the Jellyfish storage as the main P5 server you may wish to set up a user that only has access to the shared volumes.

For my setup the Jellyfish storage is going to act as a P5 client to a main P5 server on a Mac mini (yes, they are useful for something). The Mac mini in this case is the P5 server and is attached to the Overland tape library via a Promise SANLink2 Thunderbolt Fibre Channel adapter.

NEOs-T24-large-new.jpg

macmini-ports.png


Step Five: Set up the Jellyfish storage as a P5 client

Log into your P5 server and add the Jellyfish by the IP known to the P5 server. In this case the P5 server is connected to the Jellyfish via a 1GbE link on Port 1.

P5 clients jellyfish setup1.png

Note: You could also choose to plug into the Jellyfish via a 10GbE port, but in my setup those 10GbE ports are reserved for the edit stations. You should choose what’s appropriate for your setup.

P5 clients jellyfish setup2.png

Resource utilization of P5 on the server is low, topping out at about 1GB of RAM at peak usage. While this does technically take resources away from ZFS caching, the impact should be super minimal.

In my observations the CPU never spiked too high while serving both NFS and SMB mount points to multiple Final Cut Pro X workstations, even with backup or archive jobs going to tape at the same time.

jellyfish-cpu-resources-graph.png

More Jellyfish P5

See the follow-up post on Archiving gotchas with the Jellyfish here.


macOS Server is dead. Long live macOS.

Yes, it’s been a hot topic in the MacAdmin community, both on the Mac Enterprise list (oh no, it’s the end of the world!) and MacAdmins Slack (told you it was coming, don’t be surprised).

My professional opinion is: “Don’t panic!”

My MacDevOps conference is all about supporting MacAdmins who have been writing code as infrastructure to manage Macs, and doing it while replacing macOS Server in the server room with Linux and other OSes.

Xsan is staying in macOS Server, so I am happy; that’s my main use for the Mac mini and macOS Server.

I have other Mac minis doing file sharing for small work groups, and moving that out of Server.app in the last revision was unfortunate (it is in the standard OS and usable there, but less manageable). There are also Synology and QNAP NAS options for small workgroup file sharing and so much more, and many enterprise storage vendors for larger setups.

Imaging has been dying a slow death for years and has been replaced with a thin or “no imaging” concept supported by tools such as AutoPkg and Munki.

Profile Manager is a demo version of MDM and should not be used to actually manage Macs.

Wikis, DNS and Mail should be hosted on Linux, in VMs, on AWS, GCP or anywhere other than macOS Server, so no problem.

Overall it might be disconcerting to some. But change is constant. And especially at Apple change comes fast and often. We have to get used to it.

Reference:

Apple Support article

Apple to Deprecate Many macOS Server Services – TidBITS http://tidbits.com/e/17760

macOS High Sierra vs Server.app

Upgrading to macOS High Sierra is akin to walking on the bridge of peril. Too perilous!

I don’t recommend macOS 10.13.x for production, but it is necessary to test, and for this reason back in September I did upgrade my test Mac. Of course, when the installer detects Server.app it will give you a warning about it not being compatible and you’ll have to download a compatible version from the App Store. Be warned!

ThisVersionOfServerNoLongerSupported2

Which is no big deal, as long as you are warned and have backups, and maybe you can download the compatible version from the App Store. Trying to launch the old version will get you a warning to go to the App Store and be quick about it.

ThisVersionOfServerNoLongerSupported

Some people are reporting that the macOS installer is erasing their Server.app and refusing to upgrade their Server with the macOS 10.13 compatible version (v.5.4).

In that case, restore from Time Machine or other backups and start again?

Setting up Secure Munki

So you’ve set up Munki to deploy software to your Macs by following the basic set up here: Set up Munki, and now you want to set it up more securely.

You need two things: 1) a cert and 2) a secure repo.

  • TRUST US

The optimal situation is a trusted, secure certificate for your server from a reputable certificate authority. If you don’t have that, or you want to use the self-signed certificate your server has, then your Munki Mac clients will need to trust this certificate.

Export the cert from Server Admin if you’re using that to manage your Mac mini server. Place this cert file on your clients (using ARD, or other methods), then use the security command to get the Mac clients to trust this cert.

security add-trusted-cert -d -r trustRoot -k "/Library/Keychains/System.keychain" "/private/tmp/name-of-server.cer"
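
To double-check that the cert actually landed in the System keychain, something like this should find it (assuming the common name matches your server’s name):

security find-certificate -c "name-of-server" /Library/Keychains/System.keychain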

REFERENCE: Rich Trouton’s blog goes into more depth and details a way to script this.

  •  SECURE IT

Use htpasswd to add a password to your Munki repo.

htpasswd -c .htpasswd munki

Edit the .htaccess info:

AuthType Basic
AuthName "Munki Repository"
AuthUserFile /path/to/your/munki/repo_root/.htpasswd
Require valid-user

Encode this password for Munki:

python -c 'import base64; print "Authorization: Basic %s" % base64.b64encode("USERNAME:PASSWORD")'
Authorization: Basic VVNFUk5BTUU6UEFTU1dPUkQ=

Push out this password to your Munki clients with ARD (or use some other method).

defaults write /Library/Preferences/ManagedInstalls.plist AdditionalHttpHeaders -array "Authorization: Basic VVNFUk5BTUU6UEFTU1dPUkQ="

Change the Munki RepoURL on all your clients to use the new secure URL:

defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://munkiserver/munki_repo"

REFERENCES:

Consult the Munki Wiki for: Basic authentication setup for Munki 

Alan Siu’s excellent write-up on securing Munki

Notes:

Consider using a server made for securing Munki, like the Squirrel server from the MicroMDM project. More on this in another blog post.

Consider using a certificate from a known, reputable certificate authority such as Let’s Encrypt (the Squirrel server above automates the setup with Let’s Encrypt).

Further:

Another project which seeks to combine all these open source projects in the Munki ecosystem is Munki in a Box. There’s a secure branch of this project which sets up basic authentication as well, but while it aims to simplify setting up a secure Munki it may be a bit confusing at first glance. Test, and test again.