Tag: OSX

  • Watchman Monitoring + Archiware P5

    I’ve been a little busy lately. I’m working on some scripts for Watchman Monitoring that alert when Archiware P5 needs attention. It’s really much more exciting than it sounds. 🙂


    Archiware P5 plugin (included with Watchman Client 6.6.0)

    UPDATE: The Archiware P5 plugin is now included with the Watchman Monitoring client version 6.6.0

    Use the link above to read up about Watchman Monitoring and the Archiware P5 plugin.

This plugin is now part of Watchman Monitoring thanks to Allen and his team! Of course, big thanks for a lot of help from Python magician and MacDevOps:YVR colleague, Wade Robson. I couldn’t have finished this plugin without his help. Merci, mon ami. (Early help getting this project started came from Scott Neal, automation expert and programming wizard. Thank you so much, Scott, and thanks for the tasty Portland beer!)

Watchman Monitoring is a set of plugins that warn when drives are failing, computers have restarted unexpectedly, or backups are not running. All reporting goes to a beautiful web interface in the cloud, which keeps a history of plugin issues. Watchman integrates with ticketing systems and supports multiple users, including clients and IT staff, who can keep track of what’s up with their workstations and servers.

Watchman Monitoring helps me keep tabs on major issues at all my clients before they become disasters. I even use it during discovery with new clients to see what issues exist but are being ignored or are unknown.

Since I set up a lot of SAN storage for my clients, and I use Archiware P5 for backups and archives, I realized I needed to write a plugin for Watchman Monitoring that alerts me to issues. Rather than remoting in over VPN to each and every client every day to check on backups, I can automate the checks. These scripts watch the LTO tape drives and send an email when the drives need cleaning; they also warn when running jobs need tapes, when workstations haven’t backed up in a while, or when tape pools need more tapes. In Beta 2 we’ve added a check for whether the P5 maintenance support needs to be renewed, to give you time to renew it before it expires, as well as better alerts for issues with running jobs and lots of bug fixes.
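To give a flavour of the kind of check these plugins do, here is a minimal sketch of a maintenance-expiry warning. It is illustrative only; the function name and the 60-day threshold are mine, not the plugin’s actual code.

```python
from datetime import date

def maintenance_warning(expiry, today, warn_days=60):
    """Return a warning string if P5 maintenance support expires soon.

    Illustrative sketch only; real dates would come from the P5 server.
    """
    remaining = (expiry - today).days
    if remaining < 0:
        return "P5 maintenance support has expired"
    if remaining <= warn_days:
        return "P5 maintenance expires in %d days" % remaining
    return None  # nothing to report

print(maintenance_warning(date(2016, 3, 1), date(2016, 1, 15)))
# -> P5 maintenance expires in 46 days
```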

We have it working on Mac servers running Archiware P5, and the next step is Linux and the rest of the Unix family. Later on, Watchman will port it to Windows. The scripts are written in Python, which is great for portability (except to Windows. Ha ha). The P5 Watchman plugins should eventually run everywhere Archiware P5 runs (OS X, Linux, FreeBSD, Solaris, and Windows).

The best part of writing plugins for Watchman Monitoring is the great help that Allen and the whole team at Watchman have given us throughout our development of these Archiware P5 plugins. And of course everyone at Archiware and Mike at PVT have been super helpful in explaining the use of the nsdchat CLI for Archiware P5, even going so far as to add some features we needed to nsdchat when we explained how useful they’d be for this project. Mille mercis. Vielen Dank.

Using GitHub to check in code, document business logic, build a wiki, and track issues that need bug fixes or enhancements has been an adventure. It all starts with a problem that you want to be alerted about. It’s easy enough to add custom plugins to Watchman Monitoring: you just need some ideas, a programmer (or two), and some time for testing, debugging, more testing, and time. Did I mention you need lots of time? Ha ha

And now for a sneak peek at the Archiware P5 beta 2 plugins for Watchman Monitoring.

1. Watchman nicely lists the new warnings and expirations for quickly getting to the issues you need to see.
2. Expirations are tracked with Watchman. In this case we note the date when the maintenance for Archiware P5 needs to be renewed. Don’t want to miss that!
3. Server info is good to know: uptime, port used, and what exactly is licensed.
4. The LTO tape drive is the heart of any tape library, and alerting when it needs cleaning is very important.
    5. Other plugins watch the tape pools, running and completed jobs, as well as Backup2Go (workstation backup).

    Watchman Monitoring Archiware P5 B2Go plugin X

    Watchman Monitoring Archiware P5 Pools plugin X

    Watchman Monitoring Archiware P5 Jobs plugin X

  • Hands on with Imagr

    At the recent MacTech conference in Los Angeles I got a chance to sit in a workshop led by Graham Gilbert walking us through his open source imaging tool, Imagr.

This was a perfect follow-up to Pepijn Bruienne’s awesome demo at last year’s MacTech, where he showed his BSDPy NetBoot replacement running in a Docker container, net booting and imaging a new VM in VMware. An amazing live NetBoot demo, with bonus points for writing your own NetBoot replacement in Python and stuffing it into a Docker container!

This year, Graham Gilbert led us through setting up a BSDPy Docker container, getting the link to VMware working, and using his Imagr tool to image a new VM instance of OS X. Fun stuff.

    Here are some screenshots:

1. VMware booting up, looking for NetBoot services
VMware booting up

    2. The lovely NetBoot globe spinning

NetBoot globe

    3. Progress!

Booting up

4. NetBoot image booted

NetBoot image booted, but there’s an issue with the plist I built by hand. Some of the keys and strings got mixed up when copying from the whiteboard. Thanks to Rich Trouton, who was sitting next to me and helped me diff his plist with mine to find where I’d messed up. Easy to fix, slightly tricky to find. Luckily you only have to edit this plist for the initial setup.

NetBoot image booted
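Diffing two plists by hand is tedious once they are more than a few keys long. Here is a small sketch of the idea using Python’s standard plistlib; the helper functions are mine, not a standard tool.

```python
import plistlib

def load_plist(path):
    """Read a plist file (XML or binary) into a Python dict."""
    with open(path, "rb") as f:
        return plistlib.load(f)

def plist_diff(mine, theirs):
    """Return {key: (my_value, their_value)} for every key that differs."""
    keys = set(mine) | set(theirs)
    return {k: (mine.get(k), theirs.get(k))
            for k in keys if mine.get(k) != theirs.get(k)}
```

On OS X, `plutil -lint file.plist` is another quick sanity check for a hand-edited plist.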

    5. Imagr start up

Imagr start up

    6. Imagr starting, password first

Image password

    7. Imagr restoring OS X image

Imagr restoring OS X image

8. Imagr completed workflow

Imagr completed workflow

9. Shutting Docker down

docker down

    Reference:

    Graham Gilbert’s blog post with slides of the workshop.

    http://grahamgilbert.com/blog/2015/11/12/mactech-2015-hands-on-with-imagr/

    Pepijn Bruienne’s blog, Enterprise Mac

    http://enterprisemac.bruienne.com

  • Umask fixes in Yosemite aka OS X 10.10.3 and shared storage

    Finally!

Yes, Apple has restored the ability to set a user and system umask in OS X 10.10.3. This is a huge fix for users of shared storage: Xsan and other SANs where users want to share files, projects, and everything else without resorting to ACLs or an LDAP directory. This is great. I am jumping up and down. So happy. So many people wanted this. Everyone using shared storage has been demanding it since the upgrade to Yosemite. 10.10.3 is out today, and we will be happy.

    Reference: https://support.apple.com/en-us/HT201684

    tl;dr

    sudo launchctl config user umask nnn

    and

    sudo launchctl config system umask nnn
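As a refresher on what a given nnn value buys you: new files are created with mode 666 and directories with 777, minus the umask bits. This little helper (purely for illustration) shows the resulting modes; a umask of 002 gives the group-writable files that shared-storage users want.

```python
def modes_for_umask(umask):
    """Default creation modes: files start at 666, directories at 777,
    and the umask bits are cleared from those defaults."""
    return ("%03o" % (0o666 & ~umask), "%03o" % (0o777 & ~umask))

print(modes_for_umask(0o022))  # -> ('644', '755')  the usual default
print(modes_for_umask(0o002))  # -> ('664', '775')  group-writable, good for SANs
```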
  • Configuration Profiles and Identity payloads

    Pretty sweet. It was a great gathering of IT pros in the deployment session. Great feedback and info sharing.

  • Using Munki and AutoPkg to automate Mac software deployment (Part 1)

Recently Munki v2.01 was released, and with the help of a few other apps it is now easier than ever to automate software deployment. With help from AutoPkg (and AutoPkgr) you can quickly set up a Munki server to deliver software to all your Macs. In the time it takes to download one new app and update each of your client workstations by hand, you could instead put it in your Munki repo and have it ready to deploy to everyone.

Munki allows you to automate software deployment. When you have more than one or two Macs to keep up to date with security, Flash, Java, or other app updates, you begin to realize that an automated system can save you time and maybe even your sanity. You don’t back up manually, of course; you automate it. When it’s important and you want it done right, some planning ahead of time and automation will make your life much easier.

If you have not yet set up a Munki server, follow along as I walk you through setting up Munki 2.01 with AutoPkgr 1.1 in part 1 of this post on Munki and AutoPkg. In part 2 I will go into further detail on how to use MunkiAdmin (a Mac app) and Mandrill (a node.js web server) to edit and maintain your Munki setup, touching on the pros and cons of each method. The command line used to be required, but I will show you how some really good apps and web services can help you maintain your automated software deployment workflow.

Note: Munki requires only a web server to deploy software; traditionally the Munki tools ran on a Mac, but you can put your software repo on any web server. For the purposes of this blog post I will show the setup on a Mac.

    Steps to a basic Munki server set up on a Mac running 10.8, 10.9, or 10.10:
1. Install the latest Munki tools (v.2.01 at the time of writing), then restart
munki tools 2.01 pkg
2. Install AutoPkgr (v.1.1 at the time of writing)

    AutoPkgr icon

Install AutoPkg and Git using AutoPkgr.
Install autopkg and git using autopkgr
3. Set your Munki repo to some folder (for example, /Users/Shared/munki_repo)
Munki repo
4. Set up web services on OS X, either by manually editing the httpd.conf document root to point at your Munki repo, or with Server.app by setting munki_repo as the folder where you store your site files.
Server.app Website document root munki repo
5. Add recipes to AutoPkgr and choose apps. Set a schedule for AutoPkgr.
Configure AutoPkgr
6. Check for apps manually the first time, and let AutoPkg download them to your repo
Configure AutoPkgr schedule
7. Check your repo for a manifests folder, and if it is not there, create it
Munki repo manifests
8. Download iconimporter, move it to the /usr/local/munki folder, and run it against your repo
curl -O https://munki.googlecode.com/git-history/Munki2/code/client/iconimporter
mv iconimporter /usr/local/munki/iconimporter.py
sudo chmod +x /usr/local/munki/iconimporter.py
cd /usr/local/munki ; sudo ./iconimporter.py /Users/Shared/munki_repo/
iconimporter munki repo
Next, go to the icons folder in your repo, pick a favourite icon for each app, and rename it if necessary (some apps have more than one icon, named with “_1”, “_2”, etc.).
9. Open MunkiAdmin and add packages to catalogs as needed, edit package info (add developer, category, and description info as needed), then create a client manifest.
10. Choose apps to install for clients (choose from installs, optional installs, uninstalls)
11. Set the client id and repo URL on the actual clients.

sudo defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "test-client"

sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "http://ip.addr.ess"
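A Munki client manifest is just a plist, and by convention its file name matches the ClientIdentifier (test-client above). As a rough sketch, with an example catalog name and example item names, this is the shape of a minimal manifest generated with Python’s plistlib:

```python
import plistlib

# Keys are Munki's manifest keys; the catalog and item names are examples.
manifest = {
    "catalogs": ["production"],
    "managed_installs": ["Firefox", "GoogleChrome"],
    "optional_installs": ["VLC"],
}

# The file in munki_repo/manifests/ is this dict serialized as XML.
print(plistlib.dumps(manifest).decode())
```

MunkiAdmin writes the same structure for you; this just shows what lands in the manifests folder.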

Done. Your Munki server is set up and ready for clients to connect. Next up, in part 2, we will look at Munki’s client-facing app, Managed Software Center. We will also look at how to use MunkiAdmin (a Mac app) and Mandrill (a node.js web server) to edit and maintain your Munki setup, touching on the pros and cons of each method. The command line used to be required, but the Munki ecosystem has grown, and there are some really good apps and web services that can help you maintain your automated software deployment workflow.
    Further Reading:
    1. What’s new in Munki 2  (Links to apps in the Munki ecosystem)
2. Munki 2 Demonstrations setup (basic walkthrough setup)
  • Xsan 4 in OS X 10.10 (Yosemite)

    Apple released Yosemite (OS X 10.10) today.  The big news for me is the built-in version of Xsan is v.4. But don’t get too excited and upgrade your OS without some planning (and backups). If your systems are in production then please leave them as is. Install OS X 10.10 on a test system first. Install a test Xsan and play with that. Don’t test in production. ‘Nough said.

What you need to know is this: if you upgrade your Mac to 10.10, it is officially incompatible with Xsan 3. You can NOT have Xsan 3 (10.9) clients on a 10.10 Xsan, and I don’t think that 10.10 (Xsan 4) clients will work on an Xsan 3 based SAN. There may be a hack to get incompatible versions working together, but that’s left to imaginative tinkerers and not useful for production where deadlines are involved.

I’ve done some basic testing with Xsan 4. It does away with the Xsan Admin app; all setup is done in Server.app. It also depends on Open Directory (and DNS, of course). If no OD master is set up, it will create one (same with DNS). If you have OD, join your Xsan controllers to it as replicas, or else setup will create a new OD master on the first Xsan controller and a replica on the second. You were warned.

To configure the clients you export a configuration profile and install it on each client; alternatively, you can enrol the Xsan controller in MDM (Profile Manager, for example) and push the config out to the clients.

    I have not tested Xsan 4 with StorNext but I expect there is compatibility, as usual.

    In Summary:

More testing is needed, but strictly speaking Xsan 4 is not going to work with Xsan 3 and vice versa. If an Xsan 3 client (10.9) is part of an Xsan 4 (10.10) SAN it may partially work, but commands and configs will not come across (unmount/mount the volume, the volume is destroyed so stop looking for it, etc.).

    And now for some screenshots of the actual setup.

    Step 1. Install Server. Turn on Xsan and get ready to rumble.

    Screen Shot 2014-10-15 at 2.02.06 PM

Step 2. Change your server’s name. If you’re using a dot-local name, change it.

    Change-dot-local-name-Xsan4

    Step 3. Set up valid DNS

    Setup-DNS-if-you-dont-have-none

    Step 4. Set up a new SAN

    Set-up-new-SAN

    Step 5. Choose a SAN name

    Choose-SAN-Name

    Step 6. Configure Users and Groups (OD)

    Config-users-groups

    Step 7. Choose your organization name

    OD-name

    Step 8. Create the Xsan volume

    Add-Xsan-volume2

    Step 9. Add LUNs to your storage

    Edit-storage-pool-add-LUNs

    Step 10. Save a configuration profile

    Save-mobile-config

    Step 11. Deploy config to clients

    Use MDM or manually deliver the file to your clients.

    Stay tuned.

  • NFS set up with OS X 10.9 Mavericks

    One way to set up NFS shares on OS X 10.9.x

Summary: On OS X, create an “exports” text file describing the share you want to export over NFS. Server.app is not necessary. On the client, the fstab file describes where the share gets mounted. Note: use whatever text editor you wish, whether it is vi, nano, TextWrangler, etc.

    Server:

    1. “sudo vi /etc/exports” example:

    /MySharedFolder -maproot=nobody

    2. “sudo nfsd checkexports”

    Check the correctness of exports file

    3. “sudo nfsd enable”

    Start nfsd

Note: run “sudo kill -1 `cat /var/run/mountd.pid`” if nfsd had been running previously and you want it to reread exports.

    4. “/usr/bin/showmount -e”

    Test the share. It should show something like: “/MySharedFolder Everyone”

    Client:

1. “sudo mkdir /MyShare”

    Make the mount point for the NFS share

2. “sudo vi /etc/fstab”

    Edit the fstab file to show the mounts you wish to have

    Example:

    192.168.23.5:/MySharedFolder /MyShare nfs rw,async,noatime 0 0

3. “sudo mount -a”

Mount all the filesystems listed in fstab
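If you have several clients or shares to configure, the fstab entry follows a fixed shape; this tiny helper (mine, purely for illustration) generates entries like the example above:

```python
def nfs_fstab_line(server, export, mountpoint, opts="rw,async,noatime"):
    """Build an /etc/fstab NFS entry: server:export mountpoint nfs opts 0 0."""
    return "%s:%s %s nfs %s 0 0" % (server, export, mountpoint, opts)

print(nfs_fstab_line("192.168.23.5", "/MySharedFolder", "/MyShare"))
# -> 192.168.23.5:/MySharedFolder /MyShare nfs rw,async,noatime 0 0
```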