Manual Reference Pages  -  SALT (7)

NAME

salt - Salt Documentation

CONTENTS

Introduction To Salt
     Simplicity
     Parallel execution
     Building on proven technology
     Python client interface
     Fast, flexible, scalable
     Open
     Salt Community
     Mailing List
     IRC
     Follow on GitHub
     Blogs
     Example Salt States
     Follow on ohloh
     Other community links
     Hack the Source
Installation
     Quick Install
     Platform-specific Installation Instructions
     Arch Linux
     Installation
     Stable Release
     Tracking develop
     Post-installation tasks
     Debian Installation
     Installation from the SaltStack Repository
     Post-installation tasks
     Installation from the Community Repository
     Jessie (Stable)
     Wheezy (Old Stable)
     Squeeze (Old Old Stable)
     Sid (Unstable)
     Import the repository key.
     Update the package database
     Install packages
     Post-installation tasks
     Fedora
     Installation
     Stable Release
          Installing from updates-testing
     Installation Using pip
     Post-installation tasks
     FreeBSD
     Installation
     FreeBSD repo
     SaltStack repo
     Post-installation tasks
     Gentoo
     Post-installation tasks
     OpenBSD
     Installation
     Post-installation tasks
     OS X
     Dependency Installation
     Salt-Master Customizations
     Post-installation tasks
     RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux
     Installation from the SaltStack Repository
     Post-installation tasks
     Installation from the Community Repository
     RHEL/CentOS 6 and 7, Scientific Linux, etc.
     Enabling EPEL
     Installing Stable Release
          Installing from epel-testing
     Installation Using pip
     ZeroMQ 4
     Package Management
     Post-installation tasks
     Solaris
     Installation
     Minion Configuration
     Troubleshooting
     Ubuntu Installation
     Installation from the SaltStack Repository
     Post-installation tasks
     Installation from the Community Repository
     Install packages
     Post-installation tasks
     Windows
     Windows Installer
     Silent Installer Options
     Running the Salt Minion on Windows as an Unprivileged User
     A. Create the Unprivileged User that the Salt Minion will Run As
     B. Add the New User to the Access Control List for the Salt Folder
          C. Update the Windows Service User for the salt-minion Service
     Setting up a Windows build environment
     The Easy Way
     Prerequisite Software
     Create the Build Environment
     The Hard Way
     Prerequisite Software
     Create the Build Environment
     Developing with Salt
     Configure the Minion
     Setup.py Method
     Setup Tools Develop Mode (Preferred Method)
     Build the windows installer
     Clean the Environment
     Install Salt
     Build the Installer
     Testing the Salt minion
     Single command bootstrap script
     Package management under Windows 2003
     SUSE Installation
     Installation
     Stable Release
     Post-installation tasks openSUSE
     Post-installation tasks SLES
     Unstable Release
     Suse Linux Enterprise
     Dependencies
     Optional Dependencies
     Upgrading Salt
Tutorials
     Introduction
     Salt Masterless Quickstart
     Bootstrap Salt Minion
     Telling Salt to Run Masterless
     Create State Tree
     Salt-call
     Basics
     Standalone Minion
     Telling Salt Call to Run Masterless
     Running States Masterless
     External Pillars
     Opening the Firewall up for Salt
     Fedora 18 and beyond / RHEL 7 / CentOS 7
     RHEL 6 / CentOS 6
     Whitelist communication to Master
     Using cron with Salt
     Use cron to initiate a highstate
     Remote execution tutorial
     Order your minions around
     Pillar Walkthrough
     Setting Up Pillar
     More Complex Data
     Parameterizing States With Pillar
     Pillar Makes Simple States Grow Easily
     Setting Pillar Data on the Command Line
     More On Pillar
     Minion Config in Pillar
     States
     How Do I Use Salt States?
     It is All Just Data
     The Top File
     Default Data - YAML
     Adding Configs and Users
     Moving Beyond a Single SLS
     Extending Included SLS Data
     Understanding the Render System
     Getting to Know the Default - yaml_jinja
     Introducing the Python, PyDSL, and the Pyobjects Renderers
     Running and debugging salt states.
     Next Reading
     States tutorial, part 1 - Basic Usage
     Setting up the Salt State Tree
     Preparing the Top File
          Create an sls file
     Install the package
     Next steps
     States tutorial, part 2 - More Complex States, Requisites
     Call multiple States
     Require other states
     Next steps
     States tutorial, part 3 - Templating, Includes, Extends
     Templating SLS modules
     Using Grains in SLS modules
     Using Environment Variables in SLS modules
     Calling Salt modules from templates
     Advanced SLS module syntax
     Include declaration
     Extend declaration
     Name declaration
     Names declaration
     Next steps
     States tutorial, part 4
     Salt fileserver path inheritance
     Environment configuration
     Practical Example
     Continue Learning
     States Tutorial, Part 5 - Orchestration with Salt
     The Orchestrate Runner
     Executing the Orchestrate Runner
     Examples
     Function
     State
     Highstate
     More Complex Orchestration
     Syslog-ng usage
     Overview
     Configuration
     Quotation
     Full example
     Syslog_ng module functions
     Examples
     Simple source
     Complex source
     Filter
     Template
     Rewrite
     Global options
     Log
     Advanced Topics
     SaltStack Walk-through
     Getting Started
     What is Salt?
     Installing Salt
     Starting Salt
     Setting Up the Salt Master
     Setting up a Salt Minion
     Using salt-key
     Sending the First Commands
     Getting to Know the Functions
     Helpful Functions to Know
     Changing the Output Format
     Grains
     Targeting
     Passing in Arguments
     Salt States
     The First SLS Formula
     Adding Some Depth
     Next Reading
     Getting Deeper Into States
     So Much More!
     Running Salt functions as non root user
     MinionFS Backend Walkthrough
     Propagating Files
     Enabling File Propagation
     MinionFS Backend
     Simple Configuration
     Commandline Example
     Automatic Updates / Frozen Deployments
     Getting Started
     Building and Freezing
     Windows
     Using the Frozen Build
     Troubleshooting
     A Windows minion isn't responding
     Windows and the Visual Studio Redist
     Mixed Linux environments and Yum
     Automatic (Python) module discovery
     Multi Master Tutorial
     Summary of Steps
     Prepping a Redundant Master
     Configure Minions
     Sharing Files Between Masters
     Minion Keys
     File_Roots
     Pillar_Roots
     Master Configurations
     Multi-Master-PKI Tutorial With Failover
     Motivation
     The Goal
     Prepping the master to sign its public key
     Prepping the minion to verify received public keys
     Multiple Masters For A Minion
     Testing the setup
     Performance Tuning
     How the signing and verification works
     Preseed Minion with Accepted Key
     Salt Bootstrap
     Supported Operating Systems
     Example Usage
     Installing via an Insecure One-Liner
     Examples
     Example Usage
     Command Line Options
     Git Fileserver Backend Walkthrough
     Installing Dependencies
     GitPython
     Dulwich
     Simple Configuration
     Multiple Remotes
     Per-remote Configuration Parameters
     Serving from a Subdirectory
     Mountpoints
     Using gitfs Alongside Other Backends
     Branches, Environments, and Top Files
     Environment Whitelist/Blacklist
     Authentication
     HTTPS
     SSH
     GitPython
     Adding the SSH Host Key to the known_hosts File
     Verifying the Fingerprint
     Refreshing gitfs Upon Push
     Using Git as an External Pillar Source
     Why aren't my custom modules/states/etc. syncing to my Minions?
     The MacOS X (Maverick) Developer Step By Step Guide To Salt Installation
     The 5 Cent Salt Intro
     Before Digging In, The Architecture Of The Salt Cluster
     Salt Master
     Salt Minion
     Step 1 - Configuring The Salt Master On Your Mac
     Install Homebrew
     Install Salt
     Create The Master Configuration
     Step 2 - Configuring The Minion VM
     Install VirtualBox
     Install Vagrant
     Create The Minion VM Folder
     Initialize Vagrant
     Import Precise64 Ubuntu Box
     Modify the Vagrantfile
     Checking The VM
     Step 3 - Connecting Master and Minion
     Creating The Minion Configuration File
     Preseed minion keys
     Modify Vagrantfile to Use Salt Provisioner
     Checking Master-Minion Communication
     Step 4 - Configure Services to Install On the Minion
     Checking the system's original state
     Initialize the top.sls file
     Create The Nginx Configuration
     Check Minion State
     Where To Go From Here
     Writing Salt Tests
     Getting Set Up For Tests
     Destructive vs Non-destructive
     Automated Test Runs
     HTTP Modules
          The salt.utils.http Library
     Configuring Libraries
     Return Data
     Writing Return Data to Files
     SSL Verification
     CA Bundles
     Updating CA Bundles
     Test Mode
     Execution Module
     Runner Module
     State Module
     LXC Management with Salt
     Dependencies
     Profiles
     Container Profiles
     Network Profiles
     Old lxc support (<1.0.7)
     Tricky network setups Examples
     Creating a Container on the CLI
     From a Template
     Cloning an Existing Container
     Using a Container Image
     Initializing a New Container as a Salt Minion
     Running Commands Within a Container
     Container Management Using salt-cloud
     Container Management Using States
     Ensuring a Container Is Present
     Ensuring a Container Does Not Exist
     Ensuring a Container is Running/Stopped/Frozen
     Using Salt with Stormpath
     External Authentication
     Configuring Stormpath Modules
     Managing Stormpath Accounts
     Using Stormpath States
     Salt Virt
     Salt as a Cloud Controller
     Setting up Hypervisors
     Installing Hypervisor Software
     Hypervisor Network Setup
     Virtual Machine Network Setup
     Libvirt State
     Getting Virtual Machine Images Ready
     Existing Virtual Machine Images
     CentOS
     Fedora Linux
     Ubuntu Linux
     Using Salt Virt
     Migrating Virtual Machines
     VNC Consoles
     Conclusion
     LXC
     ESXi Proxy Minion
     ESXi Proxy Minion
     Dependencies
     ESXi Password
     Esxcli
     Configuration
     Proxy Config File
     Pillar Profiles
     Example Configuration Files
     Starting the Proxy Minion
     Executing Commands
     ESXi Execution Module
     Running Remote Execution Commands
     ESXi State Module
     Relevant Salt Files and Resources
     Using Salt at scale
     Using Salt at scale
     The Master
     Too many minions authing
     Too many minions re-authing
     Too many minions re-connecting
     Too many minions returning at once
     Too few resources
     The Master is CPU bound
     The Master is disk IO bound
Targeting Minions
          Matching the minion id
     Globbing
     Regular Expressions
     Lists
     Grains
     Listing Grains
     Grains in the Minion Config
     Grains in /usr/local/etc/salt/grains
     Matching Grains in the Top File
     Writing Grains
     Loading Custom Grains
     Precedence
     Examples of Grains
     Syncing Grains
     Targeting with Pillar
     Subnet/IP Address Matching
     Compound matchers
     Precedence Matching
     Alternate Delimiters
     Node groups
     Using Nodegroups in SLS files
     Batch Size
     SECO Range
     Prerequisites
     Preparing Salt
     Targeting with Range
Storing Static Data In The Pillar
     Declaring the Master Pillar
     Pillar namespace flattened
     Pillar Namespace Merges
     Including Other Pillars
     Viewing Minion Pillar
     Pillar get Function
     Refreshing Pillar Data
     Set Pillar Data at the Command Line
     Master Config In Pillar
     Minion Config in Pillar
     Master Provided Pillar Error
Reactor System
     Event System
     Mapping Events to Reactor SLS Files
     Fire an event
     Knowing what event is being fired
     Debugging the Reactor
     Understanding the Structure of Reactor Formulas
     Calling Execution modules on Minions
     Calling Runner modules and Wheel modules
     Passing event data to Minions or Orchestrate as Pillar
     A Complete Example
     Syncing Custom Types on Minion Start
The Salt Mine
     Mine vs Grains
     Mine Functions
     Mine Functions Aliases
     Mine Interval
     Mine in Salt-SSH
     Example
External Authentication System
     Access Control System
     Tokens
     LDAP and Active Directory
     OpenLDAP and similar systems
     Active Directory
Access Control System
Job Management
     The Minion proc System
     Functions in the saltutil Module
     The jobs Runner
     Scheduling Jobs
     States
     Highstates
     Runners
     Scheduler With Returner
     States
     Highstates
     Runners
     Scheduler With Returner
Managing The Job Cache
     Default Job Cache
     Additional Job Cache Options
Storing Job Results In An External System
     External Job Cache - Minion-Side Returner
     Master Job Cache - Master-Side Returner
     Configure an External or Master Job Cache
     Step 1: Understand Salt Returners
     Step 2: Configure the Returner
     Step 3: Enable the External or Master Job Cache
     External Job Cache
     Master Job Cache
Storing Data In Other Databases
     SDB Configuration
     SDB URIs
     Writing SDB Modules
Salt Event System
     Event types
     Salt Master Events
     Authentication events
     Start events
     Key events
     Job events
     Presence events
     Cloud Events
     Listening for Events
     From the CLI
     Remotely via the REST API
     From Python
     Firing Events
     Firing Events from Python
     From Salt execution modules
     From Custom Python Scripts
Beacons
     Configuring Beacons
     Beacon Monitoring Interval
     Avoiding Event Loops
     Beacon Example
     View Events on the Master
     Create a Reactor
     Start the Salt Master in Debug Mode
     Trigger the Reactor
     Writing Beacon Plugins
          The beacon Function
     The Beacon Return
     Calling Execution Modules
Salt Engines
     Configuration
     Writing an Engine
Running Custom Master Processes
     Example Configuration
     Example Process Class
High Availability Features In Salt
     Multimaster
     Multimaster with Failover
     Syndic
     Syndic with Multimaster
Salt Syndic
     Configuring the Syndic
     Configuring the Syndic with Multimaster
     Running the Syndic
     Topology
     Syndic wait
     Syndic config options
Salt Proxy Minion
     New in 2015.8.2
     New in 2015.8
     Getting Started
     Configuration parameters
     Proxymodules
     The __proxyenabled__ directive
     Salt Proxy Minion End-to-End Example
     SSH Proxymodules
     Connection Setup
     Command execution
     Output parsing
     Connection teardown
     Salt Proxy Minion SSH End-to-End Example
Salt Package Manager
     Building Packages
     Formula
     Required Fields
     Optional Fields
     Building a Package
     Building Repositories
     Configuring Remote Repositories
     Repository Configuration Files
     Updating Local Repository Metadata
     Installing Packages
     Pillars
     Loader Modules
     Removing Packages
     Technical Information
     SPM-Specific Loader Modules
     Package Database
     Package Files
     SPM Configuration
     Types of Packages
     SPM Development Guide
     SPM-Specific Loader Modules
     Package Database
     Package Files
Salt Transport
     Pub Channel
     Req Channel
     ZeroMQ Transport
     Pub Channel
     Req Channel
     TCP Transport
     Wire Protocol
     Crypto
     Pub Channel
     Req Channel
     The RAET Transport
     Using RAET in Salt
     Limitations
     Why?
     Customer and User Request
     More Capabilities
     RAET Reliability
     RAET and ZeroMQ
     Encryption
     Programming Intro
     Intro to RAET Programming
     UDP Stack Messages
Windows Software Repository
     Configuration
     Populate the Repository
     Sync Repo to Windows Minions
     Install Windows Software
     Show Installed Packages
     Install a Package
     Uninstall Windows Software
     Repository Location
     Maintaining Windows Repo Definitions in Git Repositories
     Creating a Package Definition SLS File
     Managing Windows Software on a Standalone Windows Minion
     Custom Location for Repository SLS Files
     Config Options for Minions 2015.8.0 and Later
     Config Options for Minions Before 2015.8.0
     Changes in Version 2015.8.0
     Config Parameters Renamed
     Master Config
     Minion Config
     Troubleshooting
     Incorrect name/version
     Changes to sls files not being picked up
     Package management under Windows 2003
     How Success and Failure are Reported
Windows-specific Behaviour
     Group parameter for files
     Dealing with case-insensitive but case-preserving names
     Dealing with various username forms
     Specifying the None group
     Symbolic link loops
     Modifying security properties (ACLs) on files
Salt Cloud
     Getting Started
     Define a Provider
     List Cloud Provider Options
     Create VM Profiles
     Create VMs
     Destroy VMs
     Query VMs
     Using Salt Cloud
     Synopsis
     Description
     Options
     Execution Options
     Query Options
     Cloud Providers Listings
     Cloud Credentials
     Output Options
     Examples
     See also
     Salt Cloud basic usage
     Creating a VM
     Destroying a VM
     VM Profiles
     Multiple Configuration Files
     Larger Example
     Cloud Map File
     Setting up New Salt Masters
     Cloud Actions
     Cloud Functions
     Core Configuration
     Install Salt Cloud
     Installing Salt Cloud for development
     Core Configuration
     Thread Pool Size
     Minion Configuration
     Cloud Configuration Syntax
     Pillar Configuration
     Cloud Configurations
     Scaleway
     Rackspace
     Amazon AWS
     Linode
     Joyent Cloud
     GoGrid
     OpenStack
     DigitalOcean
     Parallels
     Proxmox
     LXC
     Saltify
     Extending Profiles and Cloud Providers Configuration
     Extending Profiles
     Extending Providers
     Windows Configuration
     Spinning up Windows Minions
     Requirements
     Firewall Settings
     Configuration
     Auto-Generated Passwords on EC2
     Cloud Provider Specifics
     Getting Started With Aliyun ECS
     Dependencies
     Configuration
     Profiles
     Cloud Profiles
     Getting Started With Azure
     Dependencies
     Configuration
     Cloud Profiles
     Profile Options
     Show Instance
     Destroying VMs
     Managing Hosted Services
     CLI Example
     Managing Storage Accounts
     CLI Example
     Managing Disks
     CLI Example
     Managing Service Certificates
     Managing Management Certificates
     Virtual Network Management
     Managing Input Endpoints
     CLI Example
     CLI Example
     Managing Affinity Groups
     Managing Blob Storage
     Blob Storage Configuration
     Blob Functions
     Getting Started With DigitalOcean
     Configuration
     Profiles
     Cloud Profiles
     Profile Specifics:
     Miscellaneous Information
     Getting Started With AWS EC2
     Dependencies
     Configuration
     Access Credentials
     Windows Deploy Timeouts
     Windows Deploy Timeouts
     Key Pairs
     Security Groups
     IAM Profile
     Cloud Profiles
     Required Settings
     Optional Settings
     Modify EC2 Tags
     Rename EC2 Instances
     EC2 Termination Protection
     Rename on Destroy
     Listing Images
     EC2 Images
     EC2 Termination Protection
     Alternate Endpoint
     Volume Management
     Creating Volumes
     Attaching Volumes
     Show a Volume
     Detaching Volumes
     Deleting Volumes
     Managing Key Pairs
     Creating a Key Pair
     Importing a Key Pair
     Show a Key Pair
     Delete a Key Pair
     Launching instances into a VPC
     Simple launching into a VPC
     Specifying interface properties
     Getting Started With GoGrid
     Configuration
     Profiles
     Cloud Profiles
     Assigning IPs
     Getting Started With Google Compute Engine
     Dependencies
     Google Compute Engine Setup
     Provider Configuration
     Profile Configuration
     GCE Specific Settings
     Initial Profile
     Profile with scopes
     SSH Remote Access
     Single instance details
     Destroy, persistent disks, and metadata
     List various resources
     Persistent Disk
     Create
     Delete
     Attach
     Detach
     Show disk
     Create snapshot
     Delete snapshot
     Show snapshot
     Networking
     Create network
     Destroy network
     Show network
     Create address
     Delete address
     Show address
     Create firewall
     Delete firewall
     Show firewall
     Load Balancer
     HTTP Health Check
     Load-balancer
     Attach and Detach LB
     Getting Started With HP Cloud
     Set up a cloud provider configuration file
     Compute Region
     Authentication
     Set up a cloud profile config file
     Launch an instance
     Manage the instance
     SSH to the instance
     Using a private IP
     Getting Started With Joyent
     Dependencies
     Configuration
     Profiles
     Cloud Profiles
     SmartDataCenter
     Miscellaneous Configuration
     Getting Started With LXC
     Limitations
     Operation
     Provider configuration
     Profile configuration
     Driver Support
     Getting Started With Linode
     Configuration
     Profiles
     Cloud Profiles
     Cloning
     Getting Started With OpenStack
     Dependencies
     Configuration
     Using nova client to get information from OpenStack
     Compute Region
     Authentication
     Profiles
     Getting Started With Parallels
     Access Credentials
     Cloud Profiles
     Required Settings
     Optional Settings
     Getting Started With Proxmox
     Dependencies
     Access Credentials
     Cloud Profiles
     Required Settings
     Optional Settings
     Getting Started With Rackspace
     Dependencies
     Configuration
     Compute Region
     Authentication
     RackConnect Environments
     Managed Cloud Environments
     First and Next Generation Images
     Private Subnets
     Getting Started With Saltify
     Dependencies
     Configuration
     Profiles
     Using Map Files
     Getting Started With Scaleway
     Configuration
     Profiles
     Cloud Profiles
     Getting Started With SoftLayer
     Dependencies
     Configuration
     Access Credentials
     Profiles
     Cloud Profiles
     Using Multiple Disks
     Cloud Profiles
     Actions
     Functions
     Optional Products for SoftLayer HW
     Public Secondary IP Addresses
     Primary IPv6 Addresses
     Public Static IPv6 Addresses
     OS-Specific Addon
     Control Panel Software
     Database Software
     Anti-Virus & Spyware Protection
     Insurance
     Monitoring
     Notification
     Advanced Monitoring
     Response
     Intrusion Detection & Protection
     Hardware & Software Firewalls
     Getting Started with VEXXHOST
     Cloud Provider Configuration
     Authentication
     Cloud Profile Configuration
     Provision an instance
     Getting Started With VMware
     Dependencies
     Configuration
     Profiles
     Getting Started With vSphere
     Dependencies
     Configuration
     Profiles
     Cloud Profiles
     Miscellaneous Options
     Miscellaneous Salt Cloud Options
     Deploy Script Arguments
     Selecting the File Transport
     Sync After Install
     Setting Up New Salt Masters
     Setting Up a Salt Syndic with Salt Cloud
     SSH Port
     Delete SSH Keys
     Keeping /tmp/ Files
     Hide Output From Minion Install
     Connection Timeout
     Salt Cloud Cache
     SSH Known Hosts
     SSH Agent
     File Map Upload
     Troubleshooting Steps
     Troubleshooting Salt Cloud
     Virtual Machines Are Created, But Do Not Respond
     Generic Troubleshooting Steps
     Debug Mode
     Salt Bootstrap
     The Bootstrap Log
     Keeping Temp Files
     Unprivileged Primary Users
     Executing the Deploy Script Manually
     Extending Salt Cloud
     Writing Cloud Driver Modules
     All Driver Modules
     The __virtual__() Function
     The get_configured_provider() Function
     Libcloud Based Modules
     The create() Function
     The libcloudfuncs Functions
     Non-Libcloud Based Modules
          The create() Function
     The get_size() Function
     The get_image() Function
     The avail_locations() Function
     The avail_images() Function
     The avail_sizes() Function
     The script() Function
     The destroy() Function
     The list_nodes() Function
     The list_nodes_full() Function
     The list_nodes_select() Function
     The show_instance() Function
     Actions and Functions
     Actions
     Functions
     OS Support for Cloud VMs
     Other Generic Deploy Scripts
     Post-Deploy Commands
     Skipping the Deploy Script
     Updating Salt Bootstrap
     Keeping /tmp/ Files
     Deploy Script Arguments
     Using Salt Cloud from Salt
     Using the Salt Modules for Cloud
     Minion Keys
     Execution Module
     State Module
     Runner Module
     CloudClient
     Reactor
     Feature Comparison
     Feature Matrix
     Legacy Drivers
     Note for Developers
     Standard Features
     Actions
     Functions
     Tutorials
     Using Salt Cloud with the Event Reactor
     Event Structure
     Available Events
     Configuring the Event Reactor
     Reactor SLS Files
     Example: Reactor-Based Highstate
Netapi Modules
     Writing netapi modules
     Configuration
          The __virtual__ function
          The start function
     Inline documentation
     Loader “magic” methods
     Introduction to netapi modules
     Client interfaces
Salt Virt
     Salt Virt Tutorial
     The Salt Virt Runner
     Based on Live State Data
     Deploy from Network or Disk
     Virtual Machine Disk Profiles
     Define More Profiles
     Virtual Machine Network Profiles
     Define More Profiles
Understanding YAML
     Rule One: Indentation
     Rule Two: Colons
     Rule Three: Dashes
     Learning More
Master Tops System
Salt SSH
     Getting Started
     Salt SSH Roster
     Deploy ssh key for salt-ssh
     Calling Salt SSH
     Raw Shell Calls
     States Via Salt SSH
     Targeting with Salt SSH
     Configuring Salt SSH
     Minion Config
     Running Salt SSH as non-root user
     Define CLI Options with Saltfile
     Debugging salt-ssh
Salt Rosters
     How Rosters Work
     Targets Data
Reference
     Full list of builtin auth modules
     Command Line Reference
     Using the Salt Command
     Defining the Target Minions
     More Powerful Targets
     Targeting with Grains
     Targeting with Executions
     Compound Targeting
     Node Group Targeting
     Calling the Function
     Finding available minion functions
     Compound Command Execution
     CLI Completion
     Synopsis
     Description
     Options
     Logging Options
     Output Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     Target Selection
     Output Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     Target Selection
     See also
     Synopsis
     Description
     Options
     Logging Options
     Output Options
     Actions
     Key Generation Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Target Selection
     Logging Options
     Output Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     See also
     Synopsis
     Description
     Options
     Logging Options
     Commands
     See also
     Client ACL system
     Permission Issues
     Python client API
          Salt's opts dictionary
     Salt's Loader Interface
     Salt's Client Interfaces
     LocalClient
     Salt Caller
     RunnerClient
     WheelClient
     CloudClient
     SSHClient
     Full list of Salt Cloud modules
     AliYun ECS Cloud Module
     The AWS Cloud Module
     CloudStack Cloud Module
     DigitalOcean Cloud Module
     The EC2 Cloud Module
     Google Compute Engine Module
     Example Provider Configuration
     GoGrid Cloud Module
     Joyent Cloud Module
     The AWS Cloud Module
     Linode Cloud Module using Linode's REST API
     Install Salt on an LXC Container
     Azure Cloud Module
     OpenStack Nova Cloud Module
     OpenNebula Cloud Module
     OpenStack Cloud Module
     Parallels Cloud Module
     Proxmox Cloud Module
     Pyrax Cloud Module
     Rackspace Cloud Module
     Saltify Module
     SoftLayer Cloud Module
     SoftLayer HW Cloud Module
     VMware Cloud Module
     Dependencies
     Configuration
     Configuration file examples
     Example master configuration file
     Example minion configuration file
     Configuring Salt
     Master Configuration
     Minion Configuration
     Running Salt
     Key Identity
     Master Key Fingerprint
     Minion Key Fingerprint
     Key Management
     Sending Commands
     What's Next?
     Configuring the Salt Master
     Primary Master Configuration
     Salt-SSH Configuration
     Master Security Settings
     Master Module Management
     Master State System Settings
     Master File Server Settings
     GitFS Authentication Options
     Pillar Configuration
     Git External Pillar (git_pillar) Configuration Options
     Git External Pillar Authentication Options
     Syndic Server Settings
     Peer Publish Settings
     Master Logging Settings
     Node Groups
     Range Cluster Settings
     Include Configuration
     Windows Software Repo Settings
     Winrepo Authentication Options
     Configuring the Salt Minion
     Minion Primary Configuration
     Minion Module Management
     State Management Settings
     File Directory Settings
     Security Settings
     Thread Settings
     Minion Logging Settings
     Include Configuration
     Frozen Build Update Settings
     Standalone Minion Windows Software Repo Settings
     Running the Salt Master/Minion as an Unprivileged User
     Logging
     Available Configuration Settings
     External Logging Handlers
     External Logging Handlers
     Logstash Logging Handler
     UDP Logging Handler
     ZeroMQ Logging Handler
     Log Level
     HWM
     Sentry Logging Handler
     Threaded Transports
     Salt File Server
     File Server Backends
     Enabling a Fileserver Backend
     Using Multiple Backends
     Environments
     Dynamic Module Distribution
     Sync Via States
     Sync Via the saltutil Module
     File Server Configuration
     Environments
     Directory Overlay
     Local File Server
     The cp Module
     Escaping Special Characters
     Environments
     File Server Client API
     FileClient Class
     Full list of builtin fileserver modules
     Salt code and internals
     Contents
     How to instruct merging
     Default merge strategy is keep untouched
     Limitations
     Introspection
     Exceptions
     Salt opts dictionary
     Full list of builtin execution modules
     Manage and query Bower packages
     Manage and query Cabal packages
     General Notes
     Installation Prerequisites
     Prerequisite Pillar Configuration for Authentication
     Methods
     Runtime Execution within a specific, already existing/running container
     Why Make a Second Docker Execution Module?
     Installation Prerequisites
     Authentication
     Configuration Options
     Functions
     Executing Commands Within a Running Container
     Detailed Function Documentation
     Submodules
     Module contents
     Provides access to randomness generators.
     Windows Support
     State File Support
     Upgrading Salt using Pip
     Qemu-img Command Wrapper
     Manage the Windows registry
     Hives
     Keys
     Values or Entries
     Module for Manipulating Data via the Salt DB API
     Wrapper around Server Density API
     External References
     Coming Features in 0.3
     Override these in the minion config
     Required Options for DIH
     State Caching
     Wrapper around uptime API
     Dependencies
     Esxcli
     About
     General notes
     Concurrency controls in zookeeper
     Full list of netapi modules
     A REST API for Salt
     Authentication
     Usage
     Deployment
     Using a WSGI-compliant web server
     REST URI Reference
     A non-blocking REST API for Salt
     Authentication
     Usage
     A Websockets add-on to saltnado
     All Events
     Formatted Events
     Example responses
     Setup
     REST URI Reference
     A minimalist REST API for Salt
     Usage
     Deployment
     Using a WSGI-compliant web server
     Usage examples
     Full list of builtin output modules
     Display compact output data structure
     Outputter for displaying results of state runs
     Display return data in JSON format
     Display salt-key output
     Recursively display nested data
     Display values only, separated by newlines
     Example 1
     Input
     Output
     Example 2
     Input
     Output
     Display no output
     Display output for minions that did not return
     Display clean output of an overstate stage
     Python pretty-print (pprint)
     Display raw output data structure
     Simple text outputter
     Display return data in YAML format
     Peer Communication
     Peer Configuration
     Peer Runner Communication
     Using Peer Communication
     Pillars
     Full list of builtin pillar modules
     Configuring the Cobbler ext_pillar
     Module Documentation
     Configuring the django_orm ext_pillar
     Module Documentation
     Example Configuration
     Assigning Pillar Data to Individual Hosts
     Assigning Pillar Data to Entire Nodegroups
     Configuring the Foreman ext_pillar
     Module Documentation
     Use a git repository as a Pillar source
     Configuring git_pillar for Salt releases before 2015.8.0
     Configuring git_pillar for Salt releases 2015.8.0 and later
     Salt Master Mongo Configuration
     Configuring the Mongo ext_pillar
     Module Documentation
     Legacy compatibility
     Configuring the mysql ext_pillar
     Complete example
     Pepa
     Configuring Pepa
     Command line
     Templates
     Nested dictionaries
     Operators
     Validation
     Schema
     Links
     Configuring the LDAP ext_pillar
     Configuring the LDAP searches
     Map Mode
     List Mode
     Read pillar data from a Redis backend
     Salt Master Redis Configuration
     Configuring the Redis ext_pillar
     Theory of sql_base ext_pillar
     Configuring a sql_base ext_pillar
     More complete example for MySQL (to also show configuration)
     Configuring the sqlite3 ext_pillar
     Complete example
     Configuring Varstack
     Full list of builtin proxy modules
     Dependencies
     Esxcli
     Configuration
     Pillar
     Salt Proxy
     States
     Dell FX2 chassis
     Dependencies
     Pillar
     States
     Renderers
     Multiple Renderers
     Composing Renderers
     Writing Renderers
     Examples
     Full List of Renderers
     Full list of builtin renderer modules
     Jinja in States
     Include and Import
     Macros
     Template Inheritance
     Filters
     Jinja in Files
     Calling Salt Functions
     Debugging
          Special integration with the cmd state
     Implicit ordering of states
     Render time state execution
     Integration with the stateconf renderer
     Importing custom Python modules
     Creating state data
     Context Managers and requisites
     Including and Extending
     Importing from other state files
     Salt object
     Pillar, grain, mine & config data
     Map Data
     Todo
     Understanding YAML
     Rule One: Indentation
     Rule Two: Colons
     Rule Three: Dashes
     Reference
     Reference
     Returners
     Using Returners
     Writing a Returner
     Master Job Cache Support
     External Job Cache Support
     Event Support
     Custom Returners
     Naming the Returner
     Testing the Returner
     Event Returners
     Full List of Returners
     Full list of builtin returner modules
     Jid
     Jid/minion_id
     On concurrent database access
     Full list of builtin roster modules
     Salt Runners
     Full list of runner modules
     The Salt Cloud Runner
     Writing Salt Runners
     Synchronous vs. Asynchronous
     Examples
     State Enforcement
     Mod Aggregate State Runtime Modifications
     How it Works
     How to Use it
     In config files
     In states
     Adding mod_aggregate to a State Module
     Altering States
     File State Backups
     Backed-up Files
     Interacting with Backups
     Listing
     Restoring
     Deleting
     Understanding State Compiler Ordering
     Compiler Basics
     High Data and Low Data
     Ordering Layers
     Definition Order
     The Include Statement
          The order Flag
     Lexicographical Fall-back
     Requisite Ordering
     Runtime Requisite Evaluation
     Simple Runtime Evaluation Example
     Best Practice
     Extending External SLS Data
     The Extend Declaration
     Extend is a Top Level Declaration
     The Requisite in Statement
     Rules to Extend By
     Failhard Global Option
     State Level Failhard
     Global Failhard
     Global State Arguments
     Highstate data structure definitions
     The Salt State Tree
     Top file
     Include declaration
     Module reference
     ID declaration
     Extend declaration
     State declaration
     Requisite declaration
     Requisite reference
     Function declaration
     Function arg declaration
     Name declaration
     Names declaration
     Large example
     Include and Exclude
     Include
     Relative Include
     Exclude
     State System Layers
     Function Call
     Low Chunk
     Low State
     High Data
     SLS
     HighState
     OverState
     The Orchestrate Runner
     Ordering States
     State Auto Ordering
     Requisite Statements
     Multiple Requisites
     Requisite Documentation
     The Order Option
     OverState System
     State Providers
     Setting a Provider in the Minion Config File
          Provider: pkg
          Provider: service
          Provider: user
          Provider: group
     Arbitrary Module Redirects
     Requisites and Other Global State Arguments
     Fire Event Notifications
     Requisites
     Direct Requisite and Requisite_in types
     Require an entire sls file
     The _in versions of requisites
     Altering States
     Reload
     Unless
     Onlyif
     Listen/Listen_in
     Overriding Checks
     Startup States
     Examples:
     State Testing
     Default Test
     The Top File
     Introduction
     A Basic Example
     Environments
     Getting Started with Top Files
     Multiple Environments
     Choosing an Environment to Target
     Shorthand
     Advanced Minion Targeting
     How Top Files Are Compiled
     SLS Template Variable Reference
     Salt
     Opts
     Pillar
     Grains
     State Modules
     States are Easy to Write!
     Using Custom State Modules
     Cross Calling Execution Modules from States
     Cross Calling State Modules
     Return Data
     Test State
     Watcher Function
     Mod_init Interface
     Log Output
     Full State Module Example
     Example state module
     State Management
     Understanding the Salt State System Components
     Salt SLS System
     SLS File Layout
     SLS Files
     The Top File
     Reloading Modules
     Full list of builtin state modules
     Package management operations specific to APT- and DEB-based systems
     Configuration of disposable regularly scheduled tasks for at
     Management of the Salt beacons
     Manage Autoscale Groups
     Manage DynamoDB Tables
     Manage Elasticache
     Manage IAM objects
     Manage IAM roles
     Manage RDSs
     Manage Security Groups
     Manage VPCs
     Installation of Bower Packages
     Installation of Cabal Packages
     Execute Chef client runs
     Using states instead of maps to deploy clouds
     Execution of arbitrary commands
     Using the Stateful Argument
          Should I use cmd.run or cmd.wait?
     How do I create an environment from a pillar map?
     Installation of Composer Packages
     Management of cron, the Unix command scheduler
     Dynamic DNS updates
     Management of debconf selections
     Available Functions
     Manage Docker containers
     Available Functions
     Use Cases
     Why Make a Second Docker State Module?
     Management of Gentoo configuration using eselect
     Manage etcd Keys
     Salt Master Configuration
     Available Functions
     Dependencies
     Esxcli
     About
     Operations on regular files, special files, directories, and symlinks
     Installation of Ruby modules packaged as gems
     Configuration of the GNOME desktop
     Manage grains on the minion
     Management of user groups
     Interaction with Mercurial repositories
     Send a message to Hipchat
     Management of addresses and names in hosts file
     Trigger an event in IFTTT
     Management of incron, the inotify cron
     Management of InfluxDB databases
     Management of InfluxDB users
     Manage ini files
     Manage IPMI devices over LAN
     Management of ipsets
     Management of iptables
     Management of keyboard layouts
     Management of Keystone users
     Loading and unloading of kernel modules
     Management of Gentoo Overlays using layman
     Manage libvirt certificates
     Management of languages/locales
     Management of Linux logical volumes
     Management of LVS (Linux Virtual Server) Real Server
     Management of LVS (Linux Virtual Server) Service
     Manage Linux Containers
     Management of Gentoo make.conf
     Managing software RAID with mdadm
     States for Management of Memcached Keys
     Manage modjk workers
     Execution of Salt modules from within states
     Management of Mongodb users
     Monit state
     Mounting of filesystems
     Management of MySQL databases (schemas)
     Management of MySQL grants (user permissions)
     Execution of MySQL queries
     Management of MySQL users
     Configuration of network interfaces
     Management of nftables
     Installation of NPM Packages
     Management of NTP servers
     Create an Event in PagerDuty
     Installation of PHP Extensions Using pecl
     Installation of Python Packages Using pip
     Installation of packages using OS package managers such as yum or apt-get
     Manage package remote repo using FreeBSD pkgng
     Management of APT/YUM package repos
     Management of Portage package configuration on Gentoo
     Management of PostgreSQL databases
     Management of PostgreSQL extensions (e.g.: postgis)
     Management of PostgreSQL groups (roles)
     Management of PostgreSQL schemas
     Management of PostgreSQL tablespace
     Management of PostgreSQL users (roles)
     Powerpath configuration support
     Process Management
     Send a message to PushOver
     Managing python installations with pyenv
     Manage Rackspace Queues
     Management of POSIX Quotas
     Manage RabbitMQ Clusters
     Manage RabbitMQ Plugins
     Manage RabbitMQ Policies
     Manage RabbitMQ Users
     Manage RabbitMQ Virtual Hosts
     Managing Ruby installations with rbenv
     Management of Redis server
     Manage the Windows registry
     Hives
     Keys
     Values or Entries
     Example
     Managing Ruby installations and gemsets with Ruby Version Manager (RVM)
     Control the Salt command interface
     Management of the Salt scheduler
     Management of SELinux rules
     Monitor Server with Server Density
     Starting or restarting of services and daemons
     Send a message to Slack
     Sending Messages via SMTP
     Control of entries in SSH authorized_key files
     Control of SSH known_hosts entries
     Stateconf System
     Interaction with the Supervisor daemon
     Manage SVN repositories
     Configuration of the Linux kernel using sysctl
     State module for syslog_ng
     Details
     Syslog-ng configuration file format
     Test States
     Management of timezones
     Enforce state for SSL/TLS
     Control Apache Traffic Server
     Monitor Web Server with Uptime
     Management of user accounts
     Create an Event in VictorOps
     Configuration of network interfaces on Windows hosts
     Management of Windows system information
     Management of the windows update agent
     Sending Messages over XMPP
     Management of zc.buildout
     Available Functions
     Control concurrency of steps within state execution using zookeeper
     Execution Modules
     Modules Are Easy to Write!
     Zip Archives as Modules
     Creating a Zip Archive Module
     Cross Calling Execution Modules
     Preloaded Execution Module Data
     Grains Data
     Module Configuration
     Printout Configuration
     Virtual Modules
          Returning Error Information from __virtual__
     Examples
     Documentation
     Adding Documentation to Salt Modules
     Add Execution Module Metadata
     Log Output
     Private Functions
     Objects Loaded Into the Salt Minion
     Objects NOT Loaded into the Salt Minion
     Useful Decorators for Modules
     Depends Decorator
     Master Tops
     Full list of builtin master tops modules
     Cobbler Tops
     Module Documentation
     External Nodes Classifier
     Salt Master Mongo Configuration
     Configuring the Mongo Tops Subsystem
     Module Documentation
     Full list of builtin wheel modules
     Full list of builtin beacon modules
     Full list of builtin engine modules
     Full list of builtin sdb modules
     Advanced Usage:
     Full list of builtin serializers
Salt Best Practices
     General rules
     Structuring States and Formulas
     Structuring Pillar Files
     Variable Flexibility
     Modularity Within States
     Storing Secure Data
Hardening Salt
     General hardening tips
     Salt hardening tips
Troubleshooting
     Troubleshooting the Salt Master
     Troubleshooting the Salt Master
     Running in the Foreground
     What Ports does the Master Need Open?
     Too many open files
     Salt Master Stops Responding
     Live Python Debug Output
     Live Salt-Master Profiling
     Commands Time Out or Do Not Return Output
     Passing the -c Option to Salt Returns a Permissions Error
     Salt Master Doesn't Return Anything While Running Jobs
     Salt Master Auth Flooding
     Running state locally
     Salt Master Umask
     Troubleshooting the Salt Minion
     Troubleshooting the Salt Minion
     Running in the Foreground
     What Ports does the Minion Need Open?
     Using salt-call
     Live Python Debug Output
     Multiprocessing in Execution Modules
     Salt Minion Doesn't Return Anything While Running Jobs Locally
     Running in the Foreground
     What Ports do the Master and Minion Need Open?
     Using salt-call
     Too many open files
     Salt Master Stops Responding
     Salt and SELinux
     Red Hat Enterprise Linux 5
     Common YAML Gotchas
     YAML Idiosyncrasies
     Spaces vs Tabs
     Indentation
     Nested Dictionaries
     True/False, Yes/No, On/Off
     Integers are Parsed as Integers
     YAML does not like Double Short Decs
     YAML supports only plain ASCII
     Underscores stripped in Integer Definitions
          Automatic datetime conversion
     Live Python Debug Output
     Salt 0.16.x minions cannot communicate with a 0.17.x master
     Debugging the Master and Minion
Developing Salt
     Overview
     Salt Client
     Overview
     Salt Master
     Overview
     Moving Pieces
     Publisher
     EventPublisher
     MWorker
     ReqServer
     Job Flow
     Salt Minion
     Overview
     Event System
     Job Flow
     Master Job Flow
     A Note on ClearFuncs vs. AESFuncs
     Contributing
     Sending a GitHub pull request
     Which Salt branch?
     The current release branch
          The develop branch
     Keeping Salt Forks in Sync
     Posting patches to the mailing list
     Backporting Pull Requests
     Issue and Pull Request Labeling System
     Deprecating Code
     Dunder Dictionaries
     Available in
     Available in
     Available in
     Available in
     External Pillars
     Location
     Configuration
     The Module
     Imports and Logging
     Options
     Initialization
     Example configuration
     Reminder
     Installing Salt for development
     Running a self-contained development version
     Changing Default Paths
     Additional Options
     Installing Salt from the Python Package Index
     Editing and previewing the documentation
     Running unit and integration tests
     Issue and Pull Request Labeling System
     GitHub Labels and Milestones
     Milestones
     Labels
     Type
     Priority
     Severity
     Functional Area
     Functional Group
     Status
     Type of Change
     Test Status
     Other
     Logging Internals
     Modular Systems
     Execution Modules
     Interactive Debugging
     State Modules
     Auth
     Fileserver
     Grains
     Output
     Pillar
     Renderers
     Returners
     Runners
     Tops
     Wheel
     Package Providers
     Package Functions
     Package Repo Functions
     Low-Package Functions
     Reporting Bugs
     Community Projects That Use Salt
     Salt Topology
     Servers
     Pub/sub
     Return
     Translating Documentation
     Building A Localized Version of the Documentation
     Install The Transifex Client
     Configure The Transifex Client
     Download Remote Translations
     Build Localized Documentation
     View Localized Documentation
     Developing Salt Tutorial
     Fork
     Clone
     Fetch
     Branch
     Edit
     Commit
     Push
     Merge
     Resources
     Running The Tests
     Running Unit Tests Without Integration Test Daemons
     Running Destructive Integration Tests
     Running Cloud Provider Tests
     Running The Tests In A Docker Container
     Automated Test Runs
     Using Salt-Cloud on Jenkins
     Writing Tests
     Naming Conventions
     Integration Tests
     Unit Tests
     Integration Tests
     Adding New Directories
     Integration Classes
     ModuleCase
     SyndicCase
     ShellCase
     Examples
     Module Example via ModuleCase Class
     Shell Example via ShellCase
     Integration Test Files
     Destructive vs Non-Destructive Tests
     Cloud Provider Tests
     Writing Unit Tests
     Introduction
     Preparing to Write a Unit Test
     A Simple Example
     Evaluating Truth
     Tests Using Mock Objects
          Modifying __salt__ In Place
     A More Complete Example
     A Complex Example
     Protocol
     Header
     Packet
     Header Fields
     Session Bootstrap
     Session
     Service Types or Modular Services
     SaltStack Git Policy
     New Code Entry
     Release Branching
     Feature Release Branching
     Point Releases
     Salt Conventions
     Writing Salt Documentation
     Style
     Point-of-view
     Active voice
     Title capitalization
     Serial Commas
     Documenting modules
     Inline documentation
     Specify a release for additions or changes
     Adding module documentation to the index
     Cross-references
     Glossary entries
     Index entries
     Documents and sections
     Modules
     Settings
     Documentation Changes and Fixes
     Building the documentation
     Salt Formulas
     Installation
     Adding a Formula as a GitFS remote
     Adding a Formula directory manually
     Usage
     Including a Formula in an existing State tree
     Including a Formula from a Top File
     Configuring Formula using Pillar
     Using Formula with your own states
     Reporting problems & making additions
     Writing Formulas
     Style
     Use a descriptive State ID
          Use module.function notation
          Specify the name parameter
     Comment state files
     Easy on the Jinja!
     Know the evaluation and execution order
     Avoid changing the underlying system with Jinja
     Inspect the local system
     Gather external data
     Light conditionals and looping
     Avoid heavy logic and programming
     Jinja Macros
     Abstracting static defaults into a lookup table
     Collecting common values
     Overriding values in the lookup table
     When to use lookup tables
     Platform-specific information
     Sane defaults
     Environment specific information
     Single-purpose SLS files
     Parameterization
     Configuration
     Pillar overrides
     Scripting
     Repository structure
     Versioning
     Testing Formulas
     SaltStack Packaging Guide
     Patching Salt For Distributions
     Source Files
     Single Package
     Split Package
     Salt Common
     Name
     Files
     Depends
     Salt Master
     Name
     Files
     Depends
     Salt Syndic
     Name
     Files
     Depends
     Salt Minion
     Name
     Files
     Depends
     Salt SSH
     Name
     Files
     Depends
     Salt Cloud
     Name
     Files
     Depends
     Salt Doc
     Name
     Files
     Optional Depends
     Salt Release Process
     Feature Release Process
     Maintenance and Bugfix Releases
     Cherry-Picking Process for Bugfixes
     Salt Coding Style
     Linting
     Variables
     Strings
     Single Quotes
     Formatting Strings
     Docstring Conventions
     Dictionaries
     Imports
     Absolute Imports
     Vertical is Better
     Line Length
     Indenting
     Code Churn
Release Notes
     Latest Branch Release
     Previous Releases
     Salt 2015.8.0 Release Notes - Codename Beryllium
     New SaltStack Installation Repositories
     Send Event on State Completion
     ZeroMQ socket monitoring
     SPM (Salt Package Manager)
     Specify a Single Environment for Top Files
     Tornado TCP Transport
     Proxy Minion Enhancements
     Engines
     Core Changes
     Git Pillar
     Salt Cloud Improvements
     Salt Cloud Changes
     State and Execution Module Improvements
     Git State and Execution Modules Rewritten
          Changes in the git.latest State
     Initial Support for Git Worktrees in Execution Module
     New Functions in Git Execution Module
     Changes to Functions in Git Execution Module
     Windows Improvements
     Windows Software Repo Changes
     Changes to legacy Windows repository
     Win System Module
     Other Improvements
     Deprecations
     Security Fixes
     Major Bug Fixes
     Salt 2015.8.1 Release Notes
     Security Fixes
     Major Bug Fixes
     Changes for v2015.8.0..v2015.8.1
     Salt 2015.8.2 Release Notes
     Salt 2015.8.3 Release Notes
     Security Fix
     Changes
     Salt 2015.8.4 Release Notes
     Core Changes
     Salt 2015.5.0 Release Notes - Codename Lithium
     Beacons
     Sudo Minion Settings
     Lazy Loader
     Enhanced Active Directory Support
     Salt LXC Enhancements
     Salt SSH
     New Windows Installer
     Removed Requests Dependency
     Python 3 Updates
     RAET Additions
     Modified File Detection
     Reactor Update
     Misc Fixes/Additions
     Deprecations
     Known Issues
     Salt 2015.5.1 Release Notes
     Salt 2015.5.2 Release Notes
     Salt 2015.5.3 Release Notes
     Salt 2015.5.4 Release Notes
     Changes for v2015.5.3..v2015.5.4
     Salt 2015.5.5 Release Notes
     Changes for v2015.5.3..v2015.5.5
     Salt 2015.5.6 Release Notes
     Security Fixes
     Changes for v2015.5.5..v2015.5.6
     Salt 2015.5.7 Release Notes
     Salt 2015.5.8 Release Notes
     Security Fix
     Changes
     Salt 2014.7.0 Release Notes - Codename Helium
     New Transport!
     RAET Transport Option
     Salt SSH Enhancements
     Install salt-ssh Using pip
     Fileserver Backends
     Saltfile Support
     Ext Pillar
     No More sshpass
     Pure Python Shim
     Custom Module Delivery
     CP Module Support
     More Thin Directory Options
     State System Enhancements
     New Imperative State Keyword Listen
     Mod Aggregate Runtime Manipulator
     New Requisites: onchanges and onfail
     Global onlyif and unless
          Use names to expand and override values
     Major Features
     Scheduler Additions
     Red Hat 7 Family Support
     Fileserver Backends in salt-call
     Amazon Execution Modules
     LXC Runner Enhancements
     Next Gen Docker Management
     Peer System Performance Improvements
     SDB
     GPG Renderer
     OpenStack Expansion
     Queues System
     Multi Master Failover Additions
     Chef Execution Module
     Synchronous and Asynchronous Execution of Runner and Wheel Modules
     Web Hooks
     Generating and Accepting Minion Keys
     Fileserver Backend Enhancements
          New gitfs Features
     Pygit2 and Dulwich
     Mountpoints
     Environment Whitelisting/Blacklisting
     Expanded Authentication Support
          New hgfs Features
     Mountpoints
     Environment Whitelisting/Blacklisting
          New svnfs Features
     Mountpoints
     Environment Whitelisting/Blacklisting
     Configurable Trunk/Branches/Tags Paths
          New minionfs Features
     Mountpoint
     Changing the Saltenv from Which Files are Served
     Minion Whitelisting/Blacklisting
     Pyobjects Renderer
     New Modules
     New Runners
     New External Pillars
     New Salt-Cloud Providers
     Salt Call Change
     Deprecations
     Salt 2014.7.1 Release Notes
     Salt 2014.7.2 Release Notes
     Salt 2014.7.3 Release Notes
     Salt 2014.7.4 Release Notes
     Salt 2014.7.5 Release Notes
     Salt 2014.7.6 Release Notes
     Salt 2014.1.0 Release Notes - Codename Hydrogen
     Major Features
     Salt Cloud Merged into Salt
     Google Compute Engine
     Salt Virt
     Docker Integration
     Substantial Testing Expansion
     BSD Package Management
     Network Management for Debian/Ubuntu
     IPv6 Support for iptables State/Module
     GitFS Improvements
     MinionFS
     Grains Caching
     Improved Command Logging Control
     PagerDuty Support
     Virtual Terminal
     Proxy Minions
     Additional Bugfixes (Release Candidate Period)
     Salt 2014.1.1 Release Notes
     Salt 2014.1.10 Release Notes
     Salt 2014.1.11 Release Notes
     Salt 2014.1.12 Release Notes
     Salt 2014.1.13 Release Notes
     Salt 2014.1.2 Release Notes
     Salt 2014.1.3 Release Notes
     Salt 2014.1.4 Release Notes
     Salt 2014.1.5 Release Notes
     Salt 2014.1.6 Release Notes
     Salt 2014.1.7 Release Notes
     Salt 2014.1.8 Release Notes
     Salt 2014.1.9 Release Notes
     Salt 0.10.0 Release Notes
     Major Features
     Event System
     Unprivileged User Updates
     Peer Runner Execution
     YAML Parsing Updates
     State Call Data Files
     Turning Off the Job Cache
     Test Updates
     Minion Swarms Are Faster
     Many Fixes
     Master and Minion Stability Fixes
     Salt 0.10.1 Release Notes
     Salt 0.10.2 Release Notes
     Major Features
     Ext Pillar Modules
     Minion Events
     Minion Data Caching
     Backup Files
     Configuration files
     Salt Key Verification
     Key auto-signing
     Module changes
     Improved OpenBSD support
     SQL Modules
     ZFS Support on FreeBSD
     Augeas
     Native Debian Service module
     Cassandra
     Networking
     Monit
     Bluetooth
     Test Updates
     Consistency Testing
     Many Fixes
     Master and Minion Stability Fixes
     Salt 0.10.3 Release Notes
     Major Features
     ACL System
     Master Finger Option
     Salt Key Fingerprint Generation
     Parsing System
     Key Generation
     Startup States
     New Exclude Declaration
     Max Open Files
     More State Output Options
     Security Fix
     Salt 0.10.4 Release Notes
     Major Features
     External Authentication System
     Access Control System
     Target by Network
     Nodegroup Nesting
     Salt Key Delete by Glob
     Master Tops System
     Next Level Solaris Support
     Security
     Pillar Updates
     Salt 0.10.5 Release Notes
     Major Features
     External Job Cache
     OpenStack Additions
     Wheel System
     Render Pipes
     Salt Key Overhaul
     Modular Outputters
     Gzip from Fileserver
     Unified Module Configuration
     Salt Call Enhancements
     Death to pub_refresh and sub_timeout
     Git Revision Versions
     Svn Module Addition
     Noteworthy Changes
     Arch Linux Defaults to Systemd
     Salt, Salt Cloud and OpenStack
     Salt 0.11.0 Release Notes
     Major Features
     OverState
     Reactor System
     Module Context
     Multiple Package Management
     Search System
     Notable Changes
     Salt 0.11.1 Release Notes
     Salt 0.12.0 Release Notes
     Major Features
     Modular Fileserver Backend
     Windows is First Class!
     New Default Outputter
     Internal Scheduler
     Optional DSL for SLS Formulas
     Set Grains Remotely
     Gentoo Additions
     Salt 0.12.1 Release Notes
     Salt 0.13.0 Release Notes
     Major Features
     Improved file.recurse Performance
     Windows Improvements
     Nodegroup Targeting in Peer System
     Blacklist Additions
     Command Line Pillar Embedding
     CLI Notifications
     Version Specification in Multiple-Package States
     Noteworthy Changes
     Salt 0.13.1 Release Notes
     Salt 0.13.2 Release Notes
     Salt 0.13.3 Release Notes
     Salt 0.14.0 Release Notes
     Major Features
     Salt - As a Cloud Controller
     Libvirt State
     New get Functions
     Salt 0.14.1 Release Notes
     Salt 0.15.0 Release Notes
     Major Features
     The Salt Mine
     IPv6 Support
     Copy Files From Minions to the Master
     Better Template Debugging
     State Event Firing
     Major Syndic Updates
     Peer System Updates
     Minion Key Revocation
     Function Return Codes
     Functions in Overstate
     Pillar Error Reporting
     Using Cached State Data
     Monitoring States
     Salt 0.15.1 Release Notes
     Security Updates
     Path Injection in Minion IDs
     Patch
     RSA Key Generation Fault
     Patch
     Command Injection Via ext_pillar
     Patch
     Salt 0.15.2 Release Notes
     Salt 0.15.3 Release Notes
     Salt 0.16.0 Release Notes
     Major Features
     Multi-Master
     Prereq, the New Requisite
     Peer System Improvements
     Relative Includes
     More State Output Options
     Improved Windows Support
     Multiple Targets for pkg.removed, pkg.purged States
     Random Times in Cron States
     Confirmation Prompt on Key Acceptance
     Support for Setting Password Hashes on BSD Minions
     Salt 0.16.1 Release Notes
     Salt 0.16.2 Release Notes
     Windows
     Grains
     Pillar
     Peer Publishing
     Minion
     User/Group Management
     File Management
     Package/Repository Management
     Service Management
     Networking
     Ssh
     MySQL
     PostgreSQL
     Miscellaneous
     Salt 0.16.3 Release Notes
     Salt 0.16.4 Release Notes
     Salt 0.17.0 Release Notes
     Major Features
     Halite
     Salt SSH
     Rosters
     State Auto Order
     Salt Thin
     Event Namespacing
     Mercurial Fileserver Backend
     External Logging Handlers
     Jenkins Testing
     Salt Testing Project
     StormPath External Authentication
     LXC Support
     Mac OS X User/Group Support
     Django ORM External Pillar
     Fixes from RC to release
     Salt 0.17.1 Release Notes
     SSH Enhancements
     Shell Improvements
     Performance
     Security Updates
     Insufficient Argument Validation
     Cve
     Affected Versions
     Patches
     Found By
     MITM SSH attack in salt-ssh
     Cve
     Affected Versions
     Found By
     Insecure Usage of /tmp in salt-ssh
     Cve
     Affected Versions
     Patches
     Found By
     YAML Calling Unsafe Loading Routine
     Cve
     Patches
     Found By
     Failure to Drop Supplementary Group on Salt Master
     Cve
     Affected Versions
     Patches
     Found By
     Failure to Validate Minions Posting Data
     Cve
     Affected Versions
     Patches
     Found By
     Fix Reference
     Salt 0.17.2 Release Notes
     Salt 0.17.3 Release Notes
     Salt 0.17.4 Release Notes
     Salt 0.17.5 Release Notes
     Salt 0.6.0 release notes
     Salt 0.7.0 release notes
     Salt 0.8.0 release notes
     Salt-cp
     Cython minion modules
     Dynamic Returners
     Configurable Minion Modules
     Advanced Minion Threading
     Lowered Supported Python to 2.6
     Salt 0.8.7 release notes
     Salt 0.8.8 release notes
     Salt 0.8.9 Release Notes
     Download!
     New Features
     Salt Run
     Refined Outputters
     Cross Calling Salt Modules
     Watch Option Added to Salt State System
     Root Dir Option
     Config Files Defined in Variables
     New Modules
     New Minion Modules
     New States
     New Returners
     New Runners
     Salt 0.9.0 Release Notes
     Download!
     New Features
     Salt Syndic
     Peer Communication
     Easily Extensible API
     Cleaner Key Management
     Improved 0MQ Master Workers
     New Modules
     New Minion Modules
     Salt 0.9.1 Release Notes
     Salt 0.9.2 Release Notes
     Download!
     New Features
     Salt-Call Additions
     State System Fixes
     Notable Bug Fixes
     Python 2.6 String Formatting
     Cython Loading Disabled by Default
     Salt 0.9.3 Release Notes
     Download!
     New Features
     WAN Support
     State System Fixes
     Extend Declaration
     Highstate Structure Specification
     SheBang Renderer Switch
     Python State Renderer
     FreeBSD Support
     Module and State Additions
     Cron Support
     File State Additions
     Sysctl Module and State
     Kernel Module Management
     Ssh Authorized Keys
     Salt 0.9.4 Release Notes
     Download!
     New Features
     Failhard State Option
     State Level Failhard
     Global Failhard
     Finite Ordering of State Execution
     The Order Option
     Gentoo Support
     Salt 0.9.5 Release Notes
     Community
     Major Features
     SPEED! Pickle to msgpack
     C Bindings for YAML
     Experimental Windows Support
     Dynamic Module Distribution
     Modules via States
     Modules via Module Environment Directories
     Module Reloading
     Enable / Disable Added to Service
     Compound Target
     Node Groups
     Minion Side Data Store
     Major Grains Improvement
     Salt -Q is Useful Now
     Packaging Updates
     FreeBSD
     Fedora and Red Hat Enterprise
     Debian/Ubuntu
     More to Come
     Refinement
     More Testing, More BugFixes
     Custom Exceptions
     New Modules
     New States
     New Returners
     Salt 0.9.6 Release Notes
     New Features
     HTTP and ftp support in files.managed
     Allow Multiple Returners
     Minion Memory Improvements
     Minions Can Locally Cache Return Data
     Pure Python Template Support For file.managed
     Salt 0.9.7 Release Notes
     Major Features
     Salt Jobs Interface
     Functions in the saltutil Module
     The jobs Runner
     External Node Classification
     State Mod Init System
     Source File Search Path
     Refinements to the Requisite System
     Initial Unit Testing Framework
     Compound Targets Expanded
     Nodegroups in the Top File
     Salt 0.9.8 Release Notes
     Upgrade Considerations
     Upgrade Issues
     Debian/Ubuntu Packages
     Major Features
     Pillar
     CLI Additions
     Running States Without a Master
     Keyword Arguments and States
     Keyword Arguments and the CLI
     Matcher Refinements and Changes
     Requisite in
     Providers
     Requisite Glob Matching
     Batch Size
     Module Updates
     In Progress Development
     Master Side State Compiling
     Solaris Support
     Windows Support
     Salt 0.9.9 Release Notes
     Major Features
     State Test Interface
     State Syntax Update
     Use and Use_in Requisites
     Network State
     Exponential Jobs
     LocalClient Additions
     Better Self Salting
     Wildcards for SLS Modules
     External Pillar
     Single State Executions
     New Tests
     Minion Swarm
     Shell Tests
     Client Tests
Salt Based Projects
     Salt Sandbox
Security Disclosure Policy
     Security response procedure
     Receiving security announcements
Frequently Asked Questions
     Faq
     Is Salt open-core?
     I think I found a bug! What should I do?
     What ports should I open on my firewall?
     I'm seeing weird behavior (including but not limited to packages not installing their users properly)
          My script runs every time I run a state.highstate. Why?
          When I run test.ping, why don't the Minions that aren't responding return anything? Returning False would be helpful.
     How does Salt determine the Minion's id?
     I'm trying to manage packages/services but I get an error saying that the state is not available. Why?
     Why aren't my custom modules/states/etc. available on my Minions?
          Module X isn't available, even though the shell command it uses is installed. Why?
     Can I run different versions of Salt on my Master and Minion?
     Does Salt support backing up managed files?
     Is it possible to deploy a file to a specific minion, without other minions having access to it?
     What is the best way to restart a Salt daemon using Salt?
     Linux/Unix
     Windows
     Salting the Salt Master
     Is Targeting using Grain Data Secure?
Glossary
Author
Copyright

INTRODUCTION TO SALT

We're not just talking about NaCl.

    The 30 second summary

Salt is:
o a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running)
o a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria

It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and to manage not just dozens but hundreds or even thousands of individual servers quickly through a simple and manageable interface.

    Simplicity

Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.

    Parallel execution

The core functions of Salt:
o enable commands to remote systems to be called in parallel rather than serially
o use a secure and encrypted protocol
o use the smallest and fastest network payloads possible
o provide a simple programming interface

Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.

    Building on proven technology

Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent  ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster  AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via  msgpack, enabling fast and light network traffic.

    Python client interface

In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
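
As a sketch of that Python API (this assumes the salt package is installed, a salt-master is running on the local machine, and at least one minion key has been accepted; LocalClient must be run with sufficient privileges to read the master configuration):

```python
# Minimal sketch of the Salt Python client interface.
# LocalClient talks to the salt-master running on this machine.
import salt.client

local = salt.client.LocalClient()

# Execute test.ping on all minions; the return value is a dict
# mapping each responding minion id to its result.
ret = local.cmd('*', 'test.ping')
print(ret)
```

The same call from the command line would be a one-off `salt '*' test.ping`; the Python interface simply exposes that machinery to larger applications.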

    Fast, flexible, scalable

The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.

    Open

Salt is developed under the  Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.

    Salt Community

Join the Salt!

There are many ways to participate in and communicate with the Salt community.

Salt has an active IRC channel and a mailing list.

    Mailing List

Join the  salt-users mailing list. It is the best place to ask questions about Salt and to see what's going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members.

 https://groups.google.com/forum/#!forum/salt-users

There is also a low-traffic list used to announce new releases called  salt-announce

 https://groups.google.com/forum/#!forum/salt-announce

    IRC

The #salt IRC channel is hosted on the popular  Freenode network. You can use the  Freenode webchat client right from your browser.

 Logs of the IRC channel activity are being collected courtesy of Moritz Lenz.

If you wish to discuss the development of Salt itself join us in #salt-devel.

    Follow on GitHub

The Salt code is developed via GitHub. Follow Salt for constant updates on what is happening in Salt development:

 https://github.com/saltstack/salt

    Blogs

SaltStack Inc. keeps a  blog with recent news and advancements:

 http://www.saltstack.com/blog/

Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog  The Red45:

 http://red45.wordpress.com/

    Example Salt States

The official salt-states repository is:  https://github.com/saltstack/salt-states

A few examples of salt states from the community:
o  https://github.com/blast-hardcheese/blast-salt-states
o  https://github.com/kevingranade/kevingranade-salt-state
o  https://github.com/mattmcclean/salt-openstack/tree/master/salt
o  https://github.com/rentalita/ubuntu-setup/
o  https://github.com/brutasse/states
o  https://github.com/bclermont/states
o  https://github.com/pcrews/salt-data

    Follow on ohloh

 https://www.ohloh.net/p/salt

    Other community links

o  Salt Stack Inc.
o  Subreddit
o  Google+
o  YouTube
o  Facebook
o  Twitter
o  Wikipedia page

    Hack the Source

If you want to get involved with the development of source code or the documentation efforts, please review the hacking section!

INSTALLATION

SEE ALSO: Installing Salt for development and contributing to the project.

    Quick Install

On most distributions, you can set up a Salt Minion with the  Salt Bootstrap.

    Platform-specific Installation Instructions

These guides go into detail how to install Salt on a given platform.

    Arch Linux

    Installation

Salt (stable) is currently available in the Arch Linux Official repositories. There are also -git packages available in the Arch User Repository (AUR).

    Stable Release

Install Salt stable releases from the Arch Linux Official repositories as follows:


pacman -S salt-zmq


To install Salt stable releases using the RAET protocol, use the following:


pacman -S salt-raet


NOTE: transports

Unlike other Linux distributions, please be aware that Arch Linux's package manager pacman defaults to RAET as the Salt transport. If you want to use ZeroMQ instead, make sure to enter the associated number for the salt-zmq repository when prompted.

    Tracking develop

To install the bleeding edge version of Salt (which may include bugs!), install the -git package as follows:


wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
tar xf salt-git.tar.gz
cd salt-git/
makepkg -is


NOTE: yaourt

If a tool such as  Yaourt is used, the dependencies will be gathered and built automatically.

The command to install salt using the yaourt tool is:


yaourt salt-git


    Post-installation tasks

systemd

Activate the Salt Master and/or Minion via systemctl as follows:


systemctl enable salt-master.service
systemctl enable salt-minion.service


Start the Master

Once you've completed all of these steps you're ready to start your Salt Master using the following command:


systemctl start salt-master


Now go to the Configuring Salt page.

    Debian Installation

    Installation from the SaltStack Repository

2015.5 and later packages for Debian 8 (Jessie) are available in the SaltStack repository.

IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used.

To install using the SaltStack repository:
1. Run the following command to import the SaltStack repository key:


wget -O - https://repo.saltstack.com/apt/debian/8/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -


2. Add the following line to /etc/apt/sources.list:


deb http://repo.saltstack.com/apt/debian/8/amd64/latest jessie main


3. Run sudo apt-get update.
4. Install the salt-minion, salt-master, or other Salt components:
o apt-get install salt-master
o apt-get install salt-minion
o apt-get install salt-ssh
o apt-get install salt-syndic
o apt-get install salt-cloud

    Post-installation tasks

Now, go to the Configuring Salt page.

    Installation from the Community Repository

The SaltStack community maintains a Debian repository at debian.saltstack.com. Packages for Debian Old Stable, Stable, and Unstable (Wheezy, Jessie, and Sid) for Salt 0.16 and later are published in this repository.

NOTE: Packages in this repository are community built, and it can take a little while until the latest SaltStack release is available in this repository.

    Jessie (Stable)

For Jessie, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:


deb http://debian.saltstack.com/debian jessie-saltstack main


    Wheezy (Old Stable)

For wheezy, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:


deb http://debian.saltstack.com/debian wheezy-saltstack main


    Squeeze (Old Old Stable)

For squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. To do so, add the following to /etc/apt/sources.list or a file in /etc/apt/sources.list.d:


deb http://debian.saltstack.com/debian squeeze-saltstack main
deb http://backports.debian.org/debian-backports squeeze-backports main


    Sid (Unstable)

For sid, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:


deb http://debian.saltstack.com/debian unstable main


    Import the repository key

You will need to import the key used for signing.


wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -


NOTE: You can optionally verify the key integrity with sha512sum using the public key signature shown here, e.g.:


echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6  debian-salt-team-joehealy.gpg.key" | sha512sum -c


    Update the package database


apt-get update


    Install packages

Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
o apt-get install salt-master
o apt-get install salt-minion
o apt-get install salt-ssh
o apt-get install salt-syndic

    Post-installation tasks

Now, go to the Configuring Salt page.

    Fedora

Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and  EPEL. It is installable using yum. Fedora will have more up-to-date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt!

WARNING: Fedora 19 comes with systemd 204. Systemd 204 has known bugs, fixed in later revisions, that prevent the salt-master from starting reliably or opening the network connections that it needs. It is unlikely that a salt-master will start or run reliably on any distribution that uses systemd version 204 or earlier. Running salt-minions should be OK.

    Installation

Salt can be installed using yum and is available in the standard Fedora repositories.

    Stable Release

Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.


yum install salt-master
yum install salt-minion


Installing from updates-testing

When a new Salt release is packaged, it is first admitted into the updates-testing repository, before being moved to the stable repo.

To install from updates-testing, use the enablerepo argument for yum:


yum --enablerepo=updates-testing install salt-master
yum --enablerepo=updates-testing install salt-minion


    Installation Using pip

Since Salt is on  PyPI, it can be installed using pip, though most users prefer to install using a package manager.

Installing from pip has a few additional requirements:
o Install the 'Development Tools' group: dnf groupinstall 'Development Tools'
o Install the 'zeromq-devel' package if the install fails when linking against ZeroMQ.

A pip install does not create the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit.
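
A minimal sketch of such a unit, modeled on the one shipped with the distribution packages, might look like the following (the path /usr/bin/salt-minion is an assumption; adjust it to wherever pip placed the script):

```ini
# Example /etc/systemd/system/salt-minion.service (sketch)
[Unit]
Description=The Salt Minion
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/salt-minion
KillMode=process

[Install]
WantedBy=multi-user.target
```

After saving the unit, run systemctl daemon-reload before enabling or starting the service.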

Installation from pip:


pip install salt


WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.

    Post-installation tasks

Master

To have the Master start automatically at boot time:


systemctl enable salt-master.service


To start the Master:


systemctl start salt-master.service


Minion

To have the Minion start automatically at boot time:


systemctl enable salt-minion.service


To start the Minion:


systemctl start salt-minion.service


Now go to the Configuring Salt page.

    FreeBSD

Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <christer.edwards@gmail.com>. It has been tested on FreeBSD 7.4, 8.2, 9.0, 9.1, 10.0 and later releases.

    Installation

Salt is available in binary package form both from the FreeBSD pkgng repository and directly from SaltStack. The instructions below outline installation via both methods:

    FreeBSD repo

The FreeBSD pkgng repository is preconfigured on systems 10.x and above. No configuration is needed to pull from these repositories.


pkg install py27-salt


These packages are usually available within a few days of upstream release.

    SaltStack repo

SaltStack also hosts internal binary builds of the Salt package, available from  https://repo.saltstack.com/freebsd/. To make use of this repository, add the following file to your system:

/usr/local/etc/pkg/repos/saltstack.conf:


saltstack: {
  url: "https://repo.saltstack.com/freebsd/${ABI}/",
  mirror_type: "http",
  enabled: yes
  priority: 10
}


You should now be able to install Salt from this new repository:


pkg install py27-salt


These packages are usually available earlier than those in the upstream FreeBSD repository. Release candidates and development releases are also available. Use these pre-release packages with caution.

    Post-installation tasks

Master

Copy the sample configuration file:


cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master


rc.conf

Activate the Salt Master in /etc/rc.conf:


sysrc salt_master_enable="YES"


Start the Master

Start the Salt Master as follows:


service salt_master start


Minion

Copy the sample configuration file:


cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion


rc.conf

Activate the Salt Minion in /etc/rc.conf:


sysrc salt_minion_enable="YES"


Start the Minion

Start the Salt Minion as follows:


service salt_minion start


Now go to the Configuring Salt page.

    Gentoo

Salt can be easily installed on Gentoo via Portage:


emerge app-admin/salt


    Post-installation tasks

Now go to the Configuring Salt page.

    OpenBSD

Salt was added to the OpenBSD ports tree on Aug 10th 2013. It has been tested on OpenBSD 5.5 onwards.

Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/salt port:


devel/py-futures
devel/py-progressbar
net/py-msgpack
net/py-zmq
security/py-crypto
security/py-M2Crypto
textproc/py-MarkupSafe
textproc/py-yaml
www/py-jinja2
www/py-requests
www/py-tornado


    Installation

To install Salt from the OpenBSD pkg repo, use the command:


pkg_add salt


    Post-installation tasks

Master

To have the Master start automatically at boot time:


rcctl enable salt_master


To start the Master:


rcctl start salt_master


Minion

To have the Minion start automatically at boot time:


rcctl enable salt_minion


To start the Minion:


rcctl start salt_minion


Now go to the Configuring Salt page.

    OS X

    Dependency Installation

It should be noted that Homebrew explicitly discourages the use of sudo: "Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then it is likely to be the cause. Please don't file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo."

So when using Homebrew, if you want support from the Homebrew community, install this way:


brew install saltstack


When using MacPorts, install this way:


sudo port install salt


When only using the OS X system's pip, install this way:


sudo pip install salt


    Salt-Master Customizations

To run salt-master on OS X, the root user maxfiles limit must be increased:

NOTE: On OS X 10.10 (Yosemite) and higher, maxfiles should not be adjusted. The default limits are sufficient in all but the most extreme scenarios. Overriding these values with the setting below will cause system instability!


sudo launchctl limit maxfiles 4096 8192


Then, using sudo, add this configuration option to the /usr/local/etc/salt/master file:


max_open_files: 8192


Now the salt-master should run without errors:


sudo salt-master --log-level=all


    Post-installation tasks

Now go to the Configuring Salt page.

    RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux

Salt should work properly with all mainstream derivatives of Red Hat Enterprise Linux, including CentOS, Scientific Linux, Oracle Linux, and Amazon Linux. Report any bugs or issues on the  issue tracker.

    Installation from the SaltStack Repository

2015.5 and later packages for RHEL 5, 6, and 7 are available in the SaltStack repository.

IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used.

To install using the SaltStack repository:
1. Run one of the following commands based on your version to import the SaltStack repository key:

Version 7:


rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub


Version 6:


rpm --import https://repo.saltstack.com/yum/redhat/6/x86_64/latest/SALTSTACK-GPG-KEY.pub


Version 5:


wget https://repo.saltstack.com/yum/redhat/5/x86_64/latest/SALTSTACK-EL5-GPG-KEY.pub
rpm --import SALTSTACK-EL5-GPG-KEY.pub
rm -f SALTSTACK-EL5-GPG-KEY.pub


2. Save the following file to /etc/yum.repos.d/saltstack.repo:

Version 7 and 6:


[saltstack-repo]
name=SaltStack repo for RHEL/CentOS $releasever
baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub


Version 5:


[saltstack-repo]
name=SaltStack repo for RHEL/CentOS $releasever
baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-EL5-GPG-KEY.pub


3. Run sudo yum clean expire-cache.
4. Run sudo yum update.
5. Install the salt-minion, salt-master, or other Salt components:
o yum install salt-master
o yum install salt-minion
o yum install salt-ssh
o yum install salt-syndic
o yum install salt-cloud

NOTE: EPEL support is not required when installing using the SaltStack repository on Red Hat 6 and 7. EPEL must be enabled when installing on Red Hat 5.

    Post-installation tasks

Master

To have the Master start automatically at boot time:


chkconfig salt-master on


To start the Master:


service salt-master start


Minion

To have the Minion start automatically at boot time:


chkconfig salt-minion on


To start the Minion:


service salt-minion start


Now go to the Configuring Salt page.

    Installation from the Community Repository

Beginning with version 0.9.4, Salt has been available in  EPEL. For RHEL/CentOS 5,  Fedora COPR is recommended due to the removal of some dependencies from EPEL5.

On RHEL/CentOS 6, the proper Jinja package 'python-jinja2' was moved from EPEL to the "RHEL Server Optional Channel". Verify this repository is enabled before installing Salt on RHEL/CentOS 6.

NOTE: Packages in these repositories are community built, and it can take a little while until the latest SaltStack release is available in this repository.

    RHEL/CentOS 6 and 7, Scientific Linux, etc.

WARNING: Salt 2015.8 requires python-crypto 2.6.1 or higher, and python-tornado version 4.2.1 or higher. These packages are not currently available in EPEL for Red Hat 5 and 6. You must install these dependencies from another location or use the SaltStack repository documented above.

    Enabling EPEL

If the EPEL repository is not installed on your system, you can download the RPM for  RHEL/CentOS 6 or for  RHEL/CentOS 7 and install it using the following command:


rpm -Uvh epel-release-X-Y.rpm


Replace epel-release-X-Y.rpm with the appropriate filename.

    Installing Stable Release

Salt is packaged separately for the minion and the master. It is necessary to install only the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
o yum install salt-master
o yum install salt-minion
o yum install salt-ssh
o yum install salt-syndic
o yum install salt-cloud

Installing from epel-testing

When a new Salt release is packaged, it is first admitted into the epel-testing repository, before being moved to the stable repo.

To install from epel-testing, use the enablerepo argument for yum:


yum --enablerepo=epel-testing install salt-minion


    Installation Using pip

Since Salt is on  PyPI, it can be installed using pip, though most users prefer to install using RPMs (which can be installed from  EPEL).

Installing from pip has a few additional requirements:
o Install the 'Development Tools' group: yum groupinstall 'Development Tools'
o Install the 'zeromq-devel' package if the install fails when linking against ZeroMQ.

A pip install does not create the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit.

Installation from pip:


pip install salt


WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.

    ZeroMQ 4

We recommend using ZeroMQ 4 where available. SaltStack provides ZeroMQ 4.0.4 and pyzmq 14.3.1 in the  SaltStack Repository as well as a  COPR repository.

If this repo is added before Salt is installed, then installing either salt-master or salt-minion will automatically pull in ZeroMQ 4.0.4, and additional states to upgrade ZeroMQ and pyzmq are unnecessary.

WARNING: RHEL/CentOS 5 users: Using COPR repos on RHEL/CentOS 5 requires that the python-hashlib package be installed. Without it, checksum errors will result because YUM will not be able to process the SHA256 checksums used by COPR.

NOTE: For RHEL/CentOS 5 installations, if using the new repository to install Salt (as detailed above), then it is not necessary to enable the zeromq4 COPR, as the new EL5 repository includes ZeroMQ 4.

    Package Management

Salt's interface to yum makes heavy use of the repoquery utility from the  yum-utils package. This package will be installed as a dependency if Salt is installed via EPEL. However, if Salt has been installed using pip, or a host is being managed using salt-ssh, then as of version 2014.7.0 yum-utils will be installed automatically to satisfy this dependency.

    Post-installation tasks

Master

To have the Master start automatically at boot time:


chkconfig salt-master on


To start the Master:


service salt-master start


Minion

To have the Minion start automatically at boot time:


chkconfig salt-minion on


To start the Minion:


service salt-minion start


Now go to the Configuring Salt page.

    Solaris

Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <romeot@hawaii.edu> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been tested minimally on Solaris 10 (x86), Solaris 9 (sparc/x86) and 11 (sparc/x86). (Please let me know if you're using it on these platforms!) Most of the testing has focused on the minion, though it has been verified that the master starts up successfully on Solaris 10.

Comments and patches for better support on these platforms are very welcome.

As of version 0.10.4, Solaris is well supported under Salt, with all of the following working well:
1. remote execution
2. grain detection
3. service control with SMF
4. 'pkg' states with 'pkgadd' and 'pkgutil' modules
5. cron modules/states
6. user and group modules/states
7. shadow password management modules/states
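
As a sketch, item 4 above can be exercised with an SLS like the following. CSWtop is a hypothetical OpenCSW package name, and the provider argument forcing the pkgutil module is shown only as an illustration:

```yaml
# Hypothetical Solaris state: install an OpenCSW package via pkgutil.
top_utility:
  pkg.installed:
    - name: CSWtop        # example package name, adjust to your needs
    - provider: pkgutil
```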

Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the py_salt package:
o py_yaml
o py_pyzmq
o py_jinja2
o py_msgpack_python
o py_m2crypto
o py_crypto
o python

    Installation

To install Salt from the OpenCSW package repository you first need to install pkgutil, assuming you don't already have it installed:

On Solaris 10:


pkgadd -d http://get.opencsw.org/now


On Solaris 9:


wget http://mirror.opencsw.org/opencsw/pkgutil.pkg
pkgadd -d pkgutil.pkg all


Once pkgutil is installed, you'll need to edit its config file /etc/opt/csw/pkgutil.conf to point it at the unstable catalog:


- #mirror=http://mirror.opencsw.org/opencsw/testing
+ mirror=http://mirror.opencsw.org/opencsw/unstable


OK, time to install salt.


# Update the catalog
root> /opt/csw/bin/pkgutil -U
# Install salt
root> /opt/csw/bin/pkgutil -i -y py_salt


    Minion Configuration

Now that salt is installed, you can find its configuration files in /etc/opt/csw/salt/.

You'll want to edit the minion config file to set the name of your salt master server:


- #master: salt
+ master: your-salt-server


If you would like to use  pkgutil as the default package provider for your Solaris minions, you can do so using the providers option in the minion config file.
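
For example, a minimal sketch of that providers entry in the minion config file (/etc/opt/csw/salt/minion):

```yaml
# Make pkgutil the default pkg provider on this Solaris minion
providers:
  pkg: pkgutil
```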

You can now start the salt minion like so:

On Solaris 10:


svcadm enable salt-minion


On Solaris 9:


/etc/init.d/salt-minion start


You should now be able to log onto the salt master and check to see if the salt-minion key is awaiting acceptance:


salt-key -l un


Accept the key:


salt-key -a <your-salt-minion>


Run a simple test against the minion:


salt '<your-salt-minion>' test.ping


    Troubleshooting

Logs are in /var/log/salt

    Ubuntu Installation

    Installation from the SaltStack Repository

2015.5 and later packages for Ubuntu 14 (Trusty) and Ubuntu 12 (Precise) are available in the SaltStack repository.

IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used.

To install using the SaltStack repository:
1. Run the following command to import the SaltStack repository key:

Ubuntu 14:


wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -


Ubuntu 12:


wget -O - https://repo.saltstack.com/apt/ubuntu/12.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -


2. Add the following line to /etc/apt/sources.list:

Ubuntu 14:


deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main


Ubuntu 12:


deb http://repo.saltstack.com/apt/ubuntu/12.04/amd64/latest precise main


3. Run sudo apt-get update.
4. Install the salt-minion, salt-master, or other Salt components:
o apt-get install salt-master
o apt-get install salt-minion
o apt-get install salt-ssh
o apt-get install salt-syndic
o apt-get install salt-cloud

    Post-installation tasks

Now, go to the Configuring Salt page.

    Installation from the Community Repository

Packages for Ubuntu are also published in the saltstack PPA. If you have the add-apt-repository utility, you can add the repository and import the key in one step:


sudo add-apt-repository ppa:saltstack/salt


In addition to the main repository, there are secondary repositories for each individual major release. These repositories receive security and point releases but will not upgrade to any subsequent major release. There are currently several available repos: salt16, salt17, salt2014-1, salt2014-7, and salt2015-5. For example, to follow 2015.5.x releases:


sudo add-apt-repository ppa:saltstack/salt2015-5


add-apt-repository: command not found?

The add-apt-repository command is not always present on Ubuntu systems. This can be fixed by installing python-software-properties:


sudo apt-get install python-software-properties


The following may be required as well:


sudo apt-get install software-properties-common


Note that since Ubuntu 12.10 (Quantal Quetzal), add-apt-repository is found in the software-properties-common package and is part of the base install. Thus, add-apt-repository should be usable out-of-the-box to add the PPA.

Alternately, manually add the repository and import the PPA key with these commands:


echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu `lsb_release -sc` main | sudo tee /etc/apt/sources.list.d/saltstack.list
wget -q -O- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt-key add -


After adding the repository, update the package management database:


sudo apt-get update


    Install packages

Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
o apt-get install salt-master
o apt-get install salt-minion
o apt-get install salt-ssh
o apt-get install salt-syndic

    Post-installation tasks

Now go to the Configuring Salt page.

    Windows

Salt has full support for running the Salt Minion on Windows.

There are no plans for the foreseeable future to develop a Salt Master on Windows. For now you must run your Salt Master on a supported operating system to control your Salt Minions on Windows.

Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows, as well.

    Windows Installer

Salt Minion Windows installers can be found here. The output of md5sum <salt minion exe> should match the contents of the corresponding md5 file.

Latest stable build from the selected branch:

 Earlier builds from supported branches

 Archived builds from unsupported branches

NOTE: The installation executable installs dependencies that the Salt minion requires.

The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found.

The installer asks for two pieces of information: the master hostname and the minion name. The installer updates the minion config with these options and then starts the minion.

The salt-minion service will appear in the Windows Service Manager and can be started and stopped there or with the command line program sc like any other Windows service.
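
For reference, the service can also be managed from a command prompt with sc (a sketch using standard Windows service commands, not specific to Salt):

```
rem Query, stop, and start the salt-minion Windows service
sc query salt-minion
sc stop salt-minion
sc start salt-minion
```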

If the minion won't start, try installing the Microsoft Visual C++ 2008 x64 SP1 redistributable. Allowing all Windows updates to install also helps salt-minion run smoothly.

    Silent Installer Options

The installer can be run silently by providing the /S option at the command line. The installer also accepts the following options for configuring the Salt Minion silently:
o /master= A string value to set the IP address or host name of the master. Default value is 'salt'.
o /minion-name= A string value to set the minion name. Default is 'hostname'.
o /start-service= Either a 1 or 0. '1' will start the service, '0' will not. Default is to start the service after installation.

Here's an example of using the silent installer:


Salt-Minion-2015.5.6-Setup-amd64.exe /S /master=yoursaltmaster /minion-name=yourminionname /start-service=0


    Running the Salt Minion on Windows as an Unprivileged User

Notes:
o These instructions were tested with Windows Server 2008 R2.
o They are generalizable to any version of Windows that supports a salt-minion.

    A. Create the Unprivileged User that the Salt Minion will Run As

1. Click Start > Control Panel > User Accounts.
2. Click Add or remove user accounts.
3. Click Create new account.
4. Enter salt-user (or a name of your preference) in the New account name field.
5. Select the Standard user radio button.
6. Click the Create Account button.
7. Click on the newly created user account.
8. Click the Create a password link.
9. In the New password and Confirm new password fields, provide a password (e.g. "SuperSecretMinionPassword4Me!").
10. In the Type a password hint field, provide appropriate text (e.g. "My Salt Password").
11. Click the Create password button.
12. Close the Change an Account window.

    B. Add the New User to the Access Control List for the Salt Folder

1. In a File Explorer window, browse to the path where Salt is installed (the default path is C:\Salt).
2. Right-click on the Salt folder and select Properties.
3. Click on the Security tab.
4. Click the Edit button.
5. Click the Add button.
6. Type the name of your designated Salt user and click the OK button.
7. Check the box to Allow the Modify permission.
8. Click the OK button.
9. Click the OK button to close the Salt Properties window.

C. Update the Windows Service User for the salt-minion Service

1. Click Start > Administrative Tools > Services.
2. In the Services list, right-click on salt-minion and select Properties.
3. Click the Log On tab.
4. Click the This account radio button.
5. Provide the account credentials created in section A.
6. Click the OK button.
7. Click the OK button to the prompt confirming that the user has been granted the Log On As A Service right.
8. Click the OK button to the prompt confirming that The new logon name will not take effect until you stop and restart the service.
9. Right-Click on salt-minion and select Stop.
10. Right-Click on salt-minion and select Start.

    Setting up a Windows build environment

This document will explain how to set up a development environment for salt on Windows. The development environment allows you to work with the source code to customize or fix bugs. It will also allow you to build your own installation.

    The Easy Way

    Prerequisite Software

To do this the easy way you only need to install  Git for Windows.

    Create the Build Environment

1. Clone the  Salt-Windows-Dev repo from github.

Open a command line and type:


git clone https://github.com/saltstack/salt-windows-dev


2. Build the Python Environment

Go into the salt-windows-dev directory. Right-click the file named dev_env.ps1 and select Run with PowerShell

If you get an error, you may need to change the execution policy.

Open a powershell window and type the following:


Set-ExecutionPolicy RemoteSigned


This will download and install Python with all the dependencies needed to develop and build salt.
3. Build the Salt Environment

Right-click on the file named dev_env_salt.ps1 and select Run with PowerShell

This will clone salt into C:\Salt-Dev\salt and set it to the 2015.5 branch. You can optionally run the script from a PowerShell window with a -Version switch to pull a different version. For example:


dev_env_salt.ps1 -Version '2014.7'


To view a list of available branches and tags, open a command prompt in your C:\Salt-Dev\salt directory and type:


git branch -a
git tag -n


    The Hard Way

    Prerequisite Software

Install the following software:
1.  Git for Windows
2.  Nullsoft Installer

Download the Prerequisite zip file for your CPU architecture from the SaltStack download site:
o  Salt32.zip
o  Salt64.zip

These files contain all software required to build and develop salt. Unzip the contents of the file to C:\Salt-Dev\temp.

    Create the Build Environment

1. Build the Python Environment
o Install Python:

Browse to the C:\Salt-Dev\temp directory and find the Python installation file for your CPU Architecture under the corresponding subfolder. Double-click the file to install python.

Make sure the following are in your PATH environment variable:


C:\Python27
C:\Python27\Scripts


o Install Pip

Open a command prompt and navigate to C:\Salt-Dev\temp Run the following command:


python get-pip.py


o Easy Install compiled binaries.

M2Crypto, PyCrypto, and PyWin32 need to be installed using Easy Install. Open a command prompt and navigate to C:\Salt-Dev\temp\<cpuarch>. Run the following commands:


easy_install -Z <M2Crypto file name>
easy_install -Z <PyCrypto file name>
easy_install -Z <PyWin32 file name>


NOTE: You can type the first part of the file name and then press the tab key to auto-complete the name of the file.
o Pip Install Additional Prerequisites

All remaining prerequisites need to be pip installed. These prerequisites are as follow:

o MarkupSafe
o Jinja
o MsgPack
o PSUtil
o PyYAML
o PyZMQ
o WMI
o Requests
o Certifi

Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following commands:


pip install <cpuarch>\<MarkupSafe file name>
pip install <Jinja file name>
pip install <cpuarch>\<MsgPack file name>
pip install <cpuarch>\<psutil file name>
pip install <cpuarch>\<PyYAML file name>
pip install <cpuarch>\<pyzmq file name>
pip install <WMI file name>
pip install <requests file name>
pip install <certifi file name>


2. Build the Salt Environment
o Clone Salt

Open a command prompt and navigate to C:\Salt-Dev. Run the following command to clone salt:


git clone https://github.com/saltstack/salt


o Checkout Branch

Check out the branch or tag of salt you want to work on or build. Open a command prompt and navigate to C:\Salt-Dev\salt. Get a list of available tags and branches by running the following commands:


git fetch --all

To view a list of available branches: git branch -a

To view a list of available tags: git tag -n

Check out the branch or tag by typing the following command:


git checkout <branch/tag name>


o Clean the Environment

When switching between branches, residual files can be left behind that will interfere with the functionality of salt. Therefore, after you check out the branch you want to work on, type the following commands to clean the salt environment:
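
The cleanup commands themselves are elided in this copy of the text. "git clean -fdx" removes untracked and ignored files from a working tree and is a common way to do this (an assumption, not necessarily the upstream instructions); demonstrated here in a throwaway repository:

```shell
# Simulate cleaning a checkout: create a repo, leave a build artifact,
# then remove all untracked and ignored files with git clean.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
touch leftover.pyc            # simulate a residual build artifact
git clean -fdx >/dev/null     # inside C:\Salt-Dev\salt this would clean the checkout
test ! -e leftover.pyc && echo "clean ok"
```

Inside the real checkout, only the `git clean -fdx` line applies.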

    Developing with Salt

There are two ways to develop with salt. You can run salt's setup.py each time you make a change to the source code, or you can use the setuptools develop mode.

    Configure the Minion

Both methods require that the minion configuration be in the C:\salt directory. Copy the conf and var directories from C:\Salt-Dev\salt\pkg\windows\buildenv to C:\salt. Now go into the C:\salt\conf directory and edit the file named minion (no extension). You need to configure the master and id parameters in this file. Edit the following lines:


master: <ip or name of your master>
id: <name of your minion>


    Setup.py Method

Go into the C:\Salt-Dev\salt directory from a cmd prompt and type:


python setup.py install --force


This will install salt into your Python installation at C:\Python27. Every time you make an edit to your source code, you'll have to stop the minion, run the setup, and start the minion.

To start the salt-minion go into C:\Python27\Scripts from a cmd prompt and type:


salt-minion


For debug mode type:


salt-minion -l debug


To stop the minion press Ctrl+C.

    Setup Tools Develop Mode (Preferred Method)

To use the Setup Tools Develop Mode go into C:\Salt-Dev\salt from a cmd prompt and type:


pip install -e .


This will install pointers to your source code that resides at C:\Salt-Dev\salt. When you edit your source code you only have to restart the minion.

    Build the windows installer

This is the method of building the installer as of version 2014.7.4.

    Clean the Environment

Make sure you don't have any leftover salt files from previous versions of salt in your Python directory.
1. Remove all files that start with salt in the C:\Python27\Scripts directory
2. Remove all files and directories that start with salt in the C:\Python27\Lib\site-packages directory

    Install Salt

Install salt using salt's setup.py. From the C:\Salt-Dev\salt directory, type the following command:


python setup.py install --force


    Build the Installer

From a cmd prompt, go into the C:\Salt-Dev\salt\pkg\windows directory. Type the following command for the branch or tag of salt you're building:


BuildSalt.bat <branch or tag>


This will copy Python with salt installed to the buildenv\bin directory, make it portable, and then create the Windows installer. The .exe for the Windows installer will be placed in the installer directory.

    Testing the Salt minion

1. Create the directory C:\salt (if it doesn't exist already)
2. Copy the example conf and var directories from pkg/windows/buildenv/ into C:\salt
3. Edit C:\salt\conf\minion


master: ipaddress or hostname of your salt-master


4. Start the salt-minion


cd C:\Python27\Scripts
python salt-minion


5. On the salt-master, accept the new minion's key


sudo salt-key -A


This accepts all unaccepted keys. If you're concerned about security, accept only the key for this specific minion.
6. Test that your minion is responding

On the salt-master run:


sudo salt \(aq*\(aq test.ping


You should get the following response: {'your minion hostname': True}

    Single command bootstrap script

On a 64-bit Windows host, the following script performs an unattended install of salt, including all dependencies.

NOTE: This script is not up to date. Please use the installer found above.


# (All in one line.)

"PowerShell (New-Object System.Net.WebClient).DownloadFile('http://csa-net.dk/salt/bootstrap64.bat','C:\bootstrap.bat');(New-Object -com Shell.Application).ShellExecute('C:\bootstrap.bat');"

You can execute the above command remotely from a Linux host using winexe:


winexe -U "administrator" //fqdn "PowerShell (New-Object ......);"


For more info check  http://csa-net.dk/salt

    Packages management under Windows 2003

On Windows Server 2003, you need to install the optional component "WMI Windows Installer Provider" to get a full list of installed packages. Without it, salt-minion cannot report some installed software.

    SUSE Installation

With openSUSE 13.2, Salt 2014.1.11 is available in the primary repositories. The devel:languages:python repo has more up-to-date versions of Salt; all package development is done there.

    Installation

Salt can be installed using zypper and is available in the standard openSUSE repositories.

    Stable Release

Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.


zypper install salt-master
zypper install salt-minion


    Post-installation tasks openSUSE

Master

To have the Master start automatically at boot time:


systemctl enable salt-master.service


To start the Master:


systemctl start salt-master.service


Minion

To have the Minion start automatically at boot time:


systemctl enable salt-minion.service


To start the Minion:


systemctl start salt-minion.service


    Post-installation tasks SLES

Master

To have the Master start automatically at boot time:


chkconfig salt-master on


To start the Master:


rcsalt-master start


Minion

To have the Minion start automatically at boot time:


chkconfig salt-minion on


To start the Minion:


rcsalt-minion start


    Unstable Release

    openSUSE

For openSUSE Factory run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


For openSUSE 13.2 run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


For openSUSE 13.1 run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


For the bleeding-edge python Factory repo, run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


    SUSE Linux Enterprise

For SLE 12 run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_12/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


For SLE 11 SP3 run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


For SLE 11 SP2 run the following as root:


zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master


Now go to the Configuring Salt page.

    Dependencies

Salt should run on any Unix-like platform so long as the dependencies are met.
o  Python >= 2.6, < 3.0
o  msgpack-python - High-performance message interchange format
o  YAML - Python YAML bindings
o  Jinja2 - parsing Salt States (configurable in the master settings)
o  MarkupSafe - Implements an XML/HTML/XHTML markup-safe string for Python
o  apache-libcloud - Python lib for interacting with many of the popular cloud service providers using a unified API
o  Requests - HTTP library

Depending on the chosen Salt transport,  ZeroMQ or  RAET, dependencies vary:
o ZeroMQ:
o  ZeroMQ >= 3.2.0
o  pyzmq >= 2.2.0 - ZeroMQ Python bindings
o  PyCrypto - The Python cryptography toolkit
o  M2Crypto - "Me Too Crypto" - Python OpenSSL wrapper
o RAET:
o  libnacl - Python bindings to  libsodium
o  ioflo - The flo programming interface that raet and salt-raet are built on
o  RAET - The world's most awesome UDP protocol

Salt defaults to the  ZeroMQ transport, and the choice can be made at install time, for example:


python setup.py --salt-transport=raet install


This way, only the required dependencies are pulled in by the setup script.

If installing using pip, the --salt-transport install option can be provided as follows:


pip install --install-option="--salt-transport=raet" salt


    Optional Dependencies

o  mako - an optional parser for Salt States (configurable in the master settings)
o gcc - dynamic  Cython module compiling

    Upgrading Salt

When upgrading Salt, the master(s) should always be upgraded first. Backward compatibility for minions running newer versions of salt than their masters is not guaranteed.

Whenever possible, backward compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.

TUTORIALS

    Introduction

    Salt Masterless Quickstart

Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine.

Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
o Stand up a master server via States (Salting a Salt Master)
o Use salt-call commands on a system without connectivity to a master
o Masterless States, run states entirely from files local to the minion

It is also useful for testing out state trees before deploying to a production setup.

    Bootstrap Salt Minion

The  salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell:


curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh


See the salt-bootstrap documentation for other one-liners. When using Vagrant to test out salt, the Vagrant salt provisioner will provision the VM for you.

    Telling Salt to Run Masterless

To instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local the minion is configured to not gather this data from the master.


file_client: local


Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources.

NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.

    Create State Tree

Following the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored.

The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed.

NOTE: For a complete explanation on Salt States, see the  tutorial.

1. Create the top.sls file:

/usr/local/etc/salt/states/top.sls:


base:
  '*':
    - webserver


2. Create the webserver state tree:

/usr/local/etc/salt/states/webserver.sls:


apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration


NOTE: The apache package has different names on different platforms: on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd, and on Arch it is apache.

The only thing left is to provision our minion using salt-call and the highstate command.

    Salt-call

The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data:


salt-call --local state.highstate


The --local flag tells salt-call to look for the state tree in the local file system and not to contact a Salt Master for instructions.

To provide verbose output, use -l debug:


salt-call --local state.highstate -l debug


The minion first examines the top.sls file and determines that it is a part of the group matched by the '*' glob and that the webserver SLS should be applied.

It then examines the webserver.sls file and finds the apache state, which installs the Apache package.

The minion should now have Apache installed, and the next step is to begin learning how to write more complex states.
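
As a small step toward more complex states, the webserver.sls above could be extended so the Apache service is also running and restarts when the package changes. A sketch, assuming the Debian/Ubuntu service name apache2:

```yaml
apache:
  pkg:
    - installed
  service:
    - running
    - name: apache2       # service name varies by platform
    - watch:
      - pkg: apache
```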

    Basics

    Standalone Minion

Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
o Use salt-call commands on a system without connectivity to a master
o Masterless States, run states entirely from files local to the minion

NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.

    Telling Salt Call to Run Masterless

The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt-call the file_client configuration option needs to be set. By default the file_client is set to remote so that the minion knows that file server and pillar data are to be gathered from the master. When setting the file_client option to local the minion is configured to not gather this data from the master.


file_client: local


Now the salt-call command will not look for a master and will assume that the local system has all of the file and pillar resources.

    Running States Masterless

The state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /usr/local/etc/salt/states for the base environment just like on the master:


file_roots:
  base:
    - /usr/local/etc/salt/states


Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. With the file_client option set to local and an available state tree, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master.

Remember that when creating a state tree on a minion there are no syntax or path changes needed; SLS modules written to be used from a master do not need to be modified in any way to work with a minion.

This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows.

The declared state can now be executed with:


salt-call state.highstate


Or the salt-call command can be executed with the --local flag, which makes it unnecessary to change the configuration file:


salt-call state.highstate --local


    External Pillars

External pillars are supported when running in masterless mode.

    Opening the Firewall up for Salt

The Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master.

NOTE: No firewall configuration needs to be done on Salt minions. These changes refer to the master only.
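
Once the rules are in place, a quick reachability check can be run from a minion. A minimal sketch (MASTER is an assumed variable, and bash's /dev/tcp pseudo-device performs the probe):

```shell
# Probe whether the master's publish (4505) and return (4506) ports
# accept TCP connections from this host.
MASTER=${MASTER:-salt}
for port in 4505 4506; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$MASTER/$port" 2>/dev/null; then
    echo "$MASTER:$port reachable"
  else
    echo "$MASTER:$port unreachable"
  fi
done
```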

    Fedora 18 and beyond / RHEL 7 / CentOS 7

Starting with Fedora 18, FirewallD is the tool used to dynamically manage firewall rules on a host. It supports IPv4/IPv6 settings and the separation of runtime and permanent configurations. To interact with FirewallD, use the command line client firewall-cmd.

firewall-cmd example:


firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp


Please choose the desired zone according to your setup. Don't forget to reload after you make your changes.


firewall-cmd --reload


    RHEL 6 / CentOS 6

The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful not to lock yourself out of the server by neglecting to open the ssh port.

lokkit example:


lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp


The system-config-firewall-tui command provides a text-based interface to modifying the firewall.

system-config-firewall-tui:


system-config-firewall-tui


    openSUSE

Salt installs firewall rules in  /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with:


SuSEfirewall2 open
SuSEfirewall2 start


If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the command line.

SuSEfirewall example:


SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506


The firewall module in YaST2 provides a text-based interface to modifying the firewall.

YaST2:


yast2 firewall


    iptables

Different Linux distributions store their iptables (also known as  netfilter) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary.

Fedora / RHEL / CentOS:


/etc/sysconfig/iptables


Arch Linux:


/etc/iptables/iptables.rules


Debian

Follow these instructions:  https://wiki.debian.org/iptables

Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506:


-A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
-A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT


Ubuntu

Salt installs firewall rules in  /etc/ufw/applications.d/salt.ufw. Enable with:


ufw allow salt


    pf.conf

The BSD family of operating systems uses packet filter (pf). The following example describes the additions to pf.conf needed to access the Salt master.


pass in on $int_if proto tcp from any to $int_if port 4505
pass in on $int_if proto tcp from any to $int_if port 4506


Once these additions have been made to the pf.conf the rules will need to be reloaded. This can be done using the pfctl command.


pfctl -vf /etc/pf.conf


    Whitelist communication to Master

There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to prevent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment.

Here is an example  Linux iptables ruleset to be set on the Master:


# Allow Minions from these networks
-I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
-I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT


NOTE: The important thing to note here is that the salt command needs to communicate with the listening network socket of salt-master on the loopback interface. Without the loopback rule you will see no outgoing Salt traffic from the master, even for a simple salt '*' test.ping, because the salt client never reaches salt-master to tell it to carry out the execution.

    Using cron with Salt

The Salt Minion can initiate its own highstate using the salt-call command.


$ salt-call state.highstate


This will cause the minion to check in with the master and ensure it is in the correct 'state'.

    Use cron to initiate a highstate

If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the salt-call command.


# PATH=/bin:/sbin:/usr/bin:/usr/sbin

00 00 * * * salt-call state.highstate

The above cron entry will run a highstate every day at midnight.

NOTE: Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed.
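A common refinement of the entry above (a sketch; adjust the delay window to taste) is to add a random sleep so that a large fleet of minions does not all hit the master at the same instant. Note that $RANDOM requires bash, and % must be escaped in crontab entries:


SHELL=/bin/bash
# PATH=/bin:/sbin:/usr/bin:/usr/sbin

# Daily highstate at midnight, delayed by up to 10 minutes
00 00 * * * sleep $((RANDOM \% 600)); salt-call state.highstate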

    Remote execution tutorial

Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?

There are many ways to get help from the Salt community including our  mailing list and our  IRC channel #salt.

    Order your minions around

Now that you have a master and at least one minion communicating with each other you can perform commands on the minion via the salt command. Salt calls are comprised of three main components:


salt '<target>' <function> [arguments]


SEE ALSO: salt manpage

    target

The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example:


salt '*' test.ping
salt '*.example.org' test.ping

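The default glob matching on minion IDs follows shell-style wildcard rules; Python's fnmatch module implements the same semantics, so the examples above can be sketched as:


```python
from fnmatch import fnmatch

# Hypothetical minion IDs, for illustration only
minion_ids = ["web1.example.org", "db1.example.org", "mail.example.com"]

# '*' matches every minion ID
print(all(fnmatch(m, "*") for m in minion_ids))  # True

# '*.example.org' matches only the example.org minions
print([m for m in minion_ids if fnmatch(m, "*.example.org")])
# ['web1.example.org', 'db1.example.org']
```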

Targets can be based on minion system information using the Grains system:


salt -G 'os:Ubuntu' test.ping


SEE ALSO: Grains system

Targets can be filtered by regular expression:


salt -E 'virtmach[0-9]' test.ping


Targets can be explicitly specified in a list:


salt -L 'foo,bar,baz,quo' test.ping


Or multiple target types can be combined in one command:


salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping


    function

A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions:


salt '*' sys.doc


Here are some examples:

Show all currently available minions:


salt '*' test.ping


Run an arbitrary shell command:


salt '*' cmd.run 'uname -a'


SEE ALSO: the full list of modules

    arguments

Space-delimited arguments to the function:


salt '*' cmd.exec_code python 'import sys; print sys.version'


Optional keyword arguments are also supported:


salt \(aq*\(aq pip.install salt timeout=5 upgrade=True


They are always in the form of kwarg=argument.
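The kwarg=argument convention can be illustrated with a small helper that splits positional arguments from keyword arguments. This is a hypothetical sketch, not Salt's actual parser (which also handles YAML values, quoting, and type coercion):


```python
def split_args(args):
    """Split CLI-style arguments into positional args and kwargs.

    Illustrative only; anything containing '=' is treated as a
    keyword argument, everything else is positional.
    """
    positional, kwargs = [], {}
    for arg in args:
        if "=" in arg and not arg.startswith("="):
            key, _, value = arg.partition("=")
            kwargs[key] = value
        else:
            positional.append(arg)
    return positional, kwargs

args, kw = split_args(["salt", "timeout=5", "upgrade=True"])
print(args)  # ['salt']
print(kw)    # {'timeout': '5', 'upgrade': 'True'}
```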

    Pillar Walkthrough

NOTE: This walkthrough assumes that the reader has already completed the initial Salt walkthrough.

Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.

NOTE: Grains and Pillar are sometimes confused; just remember that Grains are data about a minion, stored or generated on the minion itself. This is why information like the OS and CPU type is found in Grains. Pillar, by contrast, is information about a minion or many minions stored or generated on the Salt Master.

Pillar data is useful for:
Highly Sensitive Data:
  Information transferred via pillar is guaranteed to only be presented to the minions that are targeted, making Pillar suitable for managing security information, such as cryptographic keys and passwords.
Minion Configuration:
  Minion modules such as the execution modules, states, and returners can often be configured via data stored in pillar.
Variables:
  Variables which need to be assigned to specific minions or groups of minions can be defined in pillar and then accessed inside sls formulas and template files.
Arbitrary Data:
  Pillar can contain any basic data structure in dictionary format, so a key/value store can be defined making it easy to iterate over a group of values in sls formulas.

Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available.

    Setting Up Pillar

The pillar is already running in Salt by default. To see the minion's pillar data:


salt '*' pillar.items


NOTE: Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.

By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.

Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /usr/local/etc/salt/pillar.

NOTE: The pillar location can be configured via the pillar_roots option inside the master configuration file. It must not be in a subdirectory of the state tree or file_roots. If the pillar is under file_roots, any pillar targeting can be bypassed by minions.
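For reference, the corresponding default in the master configuration file looks like this (the exact path varies by platform and packaging):


```yaml
pillar_roots:
  base:
    - /usr/local/etc/salt/pillar
```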

To start setting up the pillar, the /usr/local/etc/salt/pillar directory needs to be present:


mkdir /usr/local/etc/salt/pillar


Now create a simple top file, following the same format as the top file used for states:

/usr/local/etc/salt/pillar/top.sls:


base:
  '*':
    - data


This top file associates the data.sls file to all minions. Now the /usr/local/etc/salt/pillar/data.sls file needs to be populated:

/usr/local/etc/salt/pillar/data.sls:


info: some data


To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master:


salt '*' saltutil.refresh_pillar


Now that the minions have the new pillar, it can be retrieved:


salt '*' pillar.items


The key info should now appear in the returned pillar data.

    More Complex Data

Unlike states, pillar files do not need to define formulas. This example sets up user data with a UID:

/usr/local/etc/salt/pillar/users/init.sls:


users:
  thatch: 1000
  shouse: 1001
  utahdave: 1002
  redbeard: 1003


NOTE: The same directory lookups that exist in states exist in pillar, so the file users/init.sls can be referenced with users in the top file.

The top file will need to be updated to include this sls file:

/usr/local/etc/salt/pillar/top.sls:


base:
  '*':
    - data
    - users


Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja:

/usr/local/etc/salt/states/users/init.sls:


{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}


This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file.

    Parameterizing States With Pillar

Pillar data can be accessed in state files to customize behavior for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply.

A simple example is to set up a mapping of package names in pillar for separate Linux distributions:

/usr/local/etc/salt/pillar/pkg/init.sls:


pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  vim: vim-enhanced
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  vim: vim
  {% elif grains['os'] == 'Arch' %}
  apache: apache
  vim: vim
  {% endif %}


The new pkg sls needs to be added to the top file:

/usr/local/etc/salt/pillar/top.sls:


base:
  '*':
    - data
    - users
    - pkg


Now the minions will automatically map values based on their respective operating systems inside of the pillar, so sls files can be safely parameterized:

/usr/local/etc/salt/states/apache/init.sls:


apache:
  pkg.installed:
    - name: {{ pillar['pkgs']['apache'] }}


Or, if no pillar is available a default can be set as well:

NOTE: The function pillar.get used in this example was added to Salt in version 0.14.0

/usr/local/etc/salt/states/apache/init.sls:


apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}


In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the minion's pillar, then the default of httpd will be used.

NOTE: Under the hood, pillar is just a Python dict, so Python dict methods such as get and items can be used.
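The colon-delimited lookup used by pillar.get can be sketched as a nested dict traversal. This is an illustrative re-implementation, not Salt's actual code:


```python
def pillar_get(pillar, key, default=None, delimiter=":"):
    """Walk a nested dict using a delimited key, in the spirit of
    salt['pillar.get']('pkgs:apache', 'httpd')."""
    current = pillar
    for part in key.split(delimiter):
        if isinstance(current, dict) and part in current:
            current = current[part]
        else:
            return default
    return current

# Hypothetical pillar data for illustration
pillar = {"pkgs": {"apache": "httpd", "vim": "vim-enhanced"}}

print(pillar_get(pillar, "pkgs:apache", "httpd"))  # httpd
print(pillar_get(pillar, "pkgs:nginx", "nginx"))   # nginx (default used)
```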

    Pillar Makes Simple States Grow Easily

One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states.

A simple formula:

/usr/local/etc/salt/states/edit/vim.sls:


vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim

Can be easily transformed into a powerful, parameterized formula:

/usr/local/etc/salt/states/edit/vim.sls:


vim:
  pkg.installed:
    - name: {{ pillar[\(aqpkgs\(aq][\(aqvim\(aq] }}

/etc/vimrc:
  file.managed:
    - source: {{ pillar['vimrc'] }}
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim

Where the vimrc source location can now be changed via pillar:

/usr/local/etc/salt/pillar/edit/vim.sls:


{% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}


This ensures that the right vimrc is sent out to the correct minions.

    Setting Pillar Data on the Command Line

Pillar data can be set on the command line like so:


salt '*' state.highstate pillar='{"foo": "bar"}'


The state.sls command can also be used to set pillar values via the command line:


salt '*' state.sls my_sls_file pillar='{"hello": "world"}'


NOTE: If a key is passed on the command line that already exists on the minion, the key that is passed in will overwrite the entire value of that key, rather than merging only the specified value set via the command line.

The example below will swap the value for vim with telnet in the previously specified list; notice the nested pillar dict:


salt '*' state.sls edit.vim pillar='{"pkgs": {"vim": "telnet"}}'


NOTE: This will attempt to install telnet on your minions; feel free to uninstall the package afterwards or replace the telnet value with anything else.
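The overwrite-not-merge behavior described in the note above can be sketched in plain Python: assigning a key replaces its whole value, unlike a recursive merge. The dict values here are hypothetical, mirroring the earlier pkg pillar:


```python
minion_pillar = {"pkgs": {"apache": "httpd", "vim": "vim-enhanced"}}
cli_pillar = {"pkgs": {"vim": "telnet"}}

# Overwrite semantics: the CLI key replaces the entire 'pkgs' value...
overwritten = dict(minion_pillar)
overwritten.update(cli_pillar)
print(overwritten["pkgs"])  # {'vim': 'telnet'} -- 'apache' is gone

# ...which is NOT the same as a deep merge of the nested values:
merged = {**minion_pillar["pkgs"], **cli_pillar["pkgs"]}
print(merged)  # {'apache': 'httpd', 'vim': 'telnet'}
```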

    More On Pillar

Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location.

Reference information on pillar and the external pillar interface can be found in the Salt documentation:

Pillar

    Minion Config in Pillar

Minion configuration options can be set in pillar. Any option that you want to modify should be at the first level of the pillar, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by the MySQL Salt execution module:


mysql.pass: hardtoguesspassword


This is very convenient when you need some dynamic configuration change that you want to be applied on the fly. For example, there is a chicken and the egg problem if you do this:


mysql-admin-passwd:
  mysql_user.present:
    - name: root
    - password: somepasswd

mydb:
  mysql_db.present

The second state will fail, because you changed the root password and the minion didn't notice it. Setting mysql.pass in the pillar will help sort out the issue. But always change the root admin password in the first place.

This is very helpful for any module that needs credentials to apply state changes: mysql, keystone, etc.

    States

    How Do I Use Salt States?

Simplicity, Simplicity, Simplicity

Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple)

The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state in which a system should be, and is set up to contain this data in a simple format. This is often called configuration management.

NOTE: This is just the beginning of using states; make sure to read up on Pillar next.

    It is All Just Data

Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is.

SLS files are therefore, in reality, just  dictionaries,  lists,  strings, and  numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer.
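Loading an SLS file with a YAML parser makes this concrete: the result is ordinary dictionaries, lists, and strings. A minimal sketch, assuming the PyYAML library is available:


```python
import yaml  # PyYAML

# The apache SLS example from this section, as a string
sls = """
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache
"""

data = yaml.safe_load(sls)
print(type(data))  # <class 'dict'>
print(data["apache"]["service.running"])  # [{'require': [{'pkg': 'apache'}]}]
```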

    The Top File

The example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here.

    Default Data - YAML

By default Salt represents the SLS data in what is one of the simplest serialization formats available -  YAML.

A typical SLS file will often look like this in YAML:

NOTE: These demos use some generic service and package names; different distributions often use different names for packages and services. For instance apache should be replaced with httpd on a Red Hat system. Salt uses the name of the init script, systemd unit, upstart job, etc., based on the underlying service management system of the platform. To get a list of the available service names on a platform, execute the service.get_all Salt function.

Information on how to make states work with multiple distributions is later in the tutorial.


apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache


This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way.

The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated.

The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running.

Finally, on line four is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package.

    Adding Configs and Users

When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.


apache:
  pkg.installed: []
  service.running:
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This SLS data greatly extends the first example, and includes a config file, a user, a group, and a new requisite statement: watch.

Adding more states is easy. Since the new user and group states are under the apache ID, the user and group will be the Apache user and group. The require statements make sure that the user will only be made after the group, and that the group will be made only after the Apache package is installed.

Next, the require statement under service was changed to watch, and is now watching 3 states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component. The watch statement will run the state's watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user uid modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case, a change in the config file will also trigger a restart of the respective service.

    Moving Beyond a Single SLS

When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well.

The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files.

The Apache example would be laid out in the root of the Salt file server like this:


apache/init.sls
apache/httpd.conf


So the httpd.conf is just a file in the apache directory, and is referenced directly.
Do not use dots in SLS file names or their directories

The initial implementation of top.sls and the include-declaration followed the Python import model, where a period represents a directory separator. This means that an SLS file with a period in its name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable, because webserver_1.0 would refer to the directory/file webserver_1/0.sls.

The same applies to any subdirectories; this is especially 'tricky' when git repos are created. Another command that typically can't render its output is `state.show_sls` of a file in a path that contains a dot.
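The period-to-slash translation can be sketched in a couple of lines (an illustrative helper, not Salt's actual code):


```python
def sls_to_path(sls_name):
    """Translate a dotted SLS reference into a file path the way the
    include system does: every dot becomes a directory separator."""
    return sls_name.replace(".", "/") + ".sls"

print(sls_to_path("ssh.server"))     # ssh/server.sls
print(sls_to_path("webserver_1.0"))  # webserver_1/0.sls -- not webserver_1.0.sls
```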

When using more than one SLS file, more components can be added to the toolkit. Consider this SSH example:

ssh/init.sls:


openssh-client:
  pkg.installed

/etc/ssh/ssh_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/ssh_config
    - require:
      - pkg: openssh-client

ssh/server.sls:


include:
  - ssh

openssh-server:
  pkg.installed

sshd:
  service.running:
    - require:
      - pkg: openssh-client
      - pkg: openssh-server
      - file: /etc/ssh/banner
      - file: /etc/ssh/sshd_config

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/sshd_config
    - require:
      - pkg: openssh-server

/etc/ssh/banner:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/banner
    - require:
      - pkg: openssh-server

NOTE: Notice that we use two similar ways of denoting that a file is managed by Salt. In the /etc/ssh/sshd_config state section above, we use the file.managed state declaration whereas with the /etc/ssh/banner state section, we use the file state declaration and add a managed attribute to that state declaration. Both ways produce an identical result; the first way -- using file.managed -- is merely a shortcut.

Now our State Tree looks like this:


apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config


This example now introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched or as will soon be demonstrated - extended.

The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files.

Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial.

    Extending Included SLS Data

Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed.

In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python.

ssh/custom-server.sls:


include:
  - ssh.server

extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner

python/mod_python.sls:


include:
  - apache

extend:
  apache:
    service:
      - watch:
        - pkg: mod_python

mod_python:
  pkg.installed

The custom-server.sls file uses the extend statement to overwrite where the banner is being downloaded from, and therefore changing what file is being used to configure the banner.

In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package.
Using extend with require or watch

The extend statement works differently for require or watch: it appends to, rather than replaces, the requisite component.

    Understanding the Render System

Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided.

The default rendering system is the yaml_jinja renderer. The yaml_jinja renderer will first pass the template through the  Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files.
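The two-pass pipeline can be sketched in a few lines, assuming the jinja2 and PyYAML libraries are available (this mimics the yaml_jinja flow, it is not Salt's renderer code):


```python
import jinja2
import yaml

# A tiny SLS template; pkg_name stands in for a grain or pillar value
template = """
apache:
  pkg.installed:
    - name: {{ pkg_name }}
"""

# Pass 1: render the Jinja template; pass 2: parse the result as YAML.
rendered = jinja2.Template(template).render(pkg_name="httpd")
data = yaml.safe_load(rendered)
print(data["apache"]["pkg.installed"])  # [{'name': 'httpd'}]
```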

Other renderers available are yaml_mako and yaml_wempy which each use the  Mako or  Wempy templating system respectively rather than the jinja templating system, and more notably, the pure Python or py, pydsl & pyobjects renderers. The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; while the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a  "Pythonic" interface to building state data.

NOTE: The templating engines described above aren't just available in SLS files. They can also be used in file.managed states, making file management much more dynamic and flexible. Some examples for using templates in managed files can be found in the documentation for the file states, as well as the MooseFS example below.

    Getting to Know the Default - yaml_jinja

The default renderer - yaml_jinja, allows for use of the jinja templating system. A guide to the Jinja templating system can be found here:  http://jinja.pocoo.org/docs

When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available: salt, grains, and pillar. The salt object allows for any Salt function to be called from within the template, grains allows for the minion's Grains to be accessed from within the template, and pillar exposes the minion's Pillar data. A few examples:

apache/init.sls:


apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
  service.running:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd.

A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver:

moosefs/chunk.sls:


include:
  - moosefs

{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
/mnt/moose{{ mnt[-1] }}:
  mount.mounted:
    - device: {{ mnt }}
    - fstype: xfs
    - mkmnt: True
  file.directory:
    - user: mfs
    - group: mfs
    - require:
      - user: mfs
      - group: mfs
{% endfor %}

/etc/mfshdd.cfg:
  file.managed:
    - source: salt://moosefs/mfshdd.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

/etc/mfschunkserver.cfg:
  file.managed:
    - source: salt://moosefs/mfschunkserver.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

mfs-chunkserver:
  pkg.installed: []

mfschunkserver:
  service.running:
    - require:
      {% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
      - mount: /mnt/moose{{ mnt[-1] }}
      - file: /mnt/moose{{ mnt[-1] }}
      {% endfor %}
      - file: /etc/mfschunkserver.cfg
      - file: /etc/mfshdd.cfg
      - file: /var/lib/mfs

This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data.

    Introducing the Python, PyDSL, and the Pyobjects Renderers

Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree.

This example shows a very basic Python SLS file:

python/django.sls:


#!py

def run():
    '''
    Install the django package
    '''
    return {'include': ['python'],
            'django': {'pkg': ['installed']}}

This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the py renderer. Then the run function is defined; the return value from the run function must be a Salt-friendly data structure, better known as a Salt HighState data structure.
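The contract of a #!py SLS can be exercised directly in plain Python: run() just returns an ordinary dict shaped like the highstate data. A minimal sketch mirroring the example above:


```python
def run():
    """Install the django package (mirrors the #!py SLS example)."""
    return {'include': ['python'],
            'django': {'pkg': ['installed']}}

state = run()
print(sorted(state))           # ['django', 'include']
print(state['django']['pkg'])  # ['installed']
```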

Alternatively, using the pydsl renderer, the above example can be written more succinctly as:


#!pydsl

include('python', delayed=True)
state('django').pkg.installed()

The pyobjects renderer provides a "Pythonic" object-based approach for building the state data. The above example could be written as:


#!pyobjects

include('python')
Pkg.installed("django")

These Python examples would look like this if they were written in YAML:


include:
  - python

django:
  pkg.installed

This example clearly illustrates two things: first, using the YAML renderer by default is a wise decision, and second, unbridled power can be obtained where needed by using a pure Python SLS.

    Running and debugging Salt states

Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.highstate on the command line. If you get back only hostnames followed by a :, but no return data, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command salt-call state.highstate -l debug and examine the output for errors. This should help troubleshoot the issue. The minion can also be started in the foreground in debug mode: salt-minion -l debug.

    Next Reading

With an understanding of states, the next recommendation is to become familiar with Salt's pillar interface: Pillar Walkthrough

    States tutorial, part 1 - Basic Usage

The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference.

This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running.

Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?

There are many ways to get help from the Salt community including our  mailing list and our  IRC channel #salt.

    Setting up the Salt State Tree

States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files makes up the State Tree.

To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines:


file_roots:
  base:
    - /usr/local/etc/salt/states


NOTE: If you are deploying on FreeBSD via ports, the file_roots path defaults to /usr/local/usr/local/etc/salt/states.

Restart the Salt master in order to pick up this change:


pkill salt-master
salt-master -d


    Preparing the Top File

On the master, in the directory uncommented in the previous step, (/usr/local/etc/salt/states by default), create a new file called top.sls and add the following:


base:
  '*':
    - webserver


The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*).
Targeting minions

The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by grains. For example:


base:
  \(aqos:Fedora\(aq:
    - match: grain
    - webserver


Create an sls file

In the same directory as the top file, create a file named webserver.sls, containing the following:


apache:                 # ID declaration
  pkg:                  # state declaration
    - installed         # function declaration


The first line, called the id-declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed.

NOTE: The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is apache2.

The second line, called the state-declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed.

The third line, called the function-declaration, defines which function in the pkg state module to call.
Renderers

States sls files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Templating languages and DSLs are a dime a dozen, and everyone has a favorite.

Building the expected data structure is the job of Salt renderers and they are dead-simple to write.

In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing renderer in the master configuration file.

    Install the package

Next, let's run the state we created. Open a terminal on the master and run:


salt '*' state.highstate


Our master is instructing all targeted minions to run state.highstate. When a minion executes a highstate call, it will download the top file and attempt to match the expressions. When it does match an expression, the modules listed for it will be downloaded, compiled, and executed.

Once completed, the minion will report back with a summary of all actions taken and all changes made.

WARNING: If you have created custom grain modules, they will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.
SLS File Namespace

Note that in the example above, the SLS file webserver.sls was referred to simply as webserver. The namespace for SLS files when referenced in top.sls or an include-declaration follows a few simple rules:

1. The .sls is discarded (i.e. webserver.sls becomes webserver).
2. Subdirectories can be used for better organization.
 
a. Each subdirectory can be represented with a dot (following the Python import model) or a slash: webserver/dev.sls can also be referred to as webserver.dev.
b. Because slashes can be represented as dots, SLS files can not contain dots in the name besides the dot for the .sls suffix. The SLS file webserver_1.0.sls can not be matched, and webserver_1.0 would instead match the directory/file webserver_1/0.sls.
3. A file called init.sls in a subdirectory is referred to by the path of the directory. So, webserver/init.sls is referred to as webserver.
4. If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver.
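The naming rules above can be summarized in a short sketch. This is a hypothetical helper, not Salt's actual resolver (which also consults the fileserver), but the name-to-path mapping is the same idea:

```python
# Hypothetical helper illustrating the SLS naming rules above;
# not part of Salt itself.
def sls_candidates(name):
    """Return candidate file paths for an SLS name, best match first."""
    path = name.replace(".", "/")   # rule 2a: dots map to subdirectories
    # Rule 4: <path>.sls wins over <path>/init.sls when both exist.
    return [path + ".sls", path + "/init.sls"]

assert sls_candidates("webserver") == ["webserver.sls", "webserver/init.sls"]
assert sls_candidates("webserver.dev") == [
    "webserver/dev.sls", "webserver/dev/init.sls"]
# Rule 2b: a dot in a file name is indistinguishable from a subdirectory,
# so 'webserver_1.0' matches webserver_1/0.sls, never webserver_1.0.sls.
assert sls_candidates("webserver_1.0")[0] == "webserver_1/0.sls"
```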
Troubleshooting Salt

If the expected output isn't seen, the following tips can help to narrow down the problem.

Turn up logging
  Salt can be quite chatty when you change the logging setting to debug:


salt-minion -l debug


Run the minion in the foreground
  By not starting the minion in daemon mode (-d), one can view any output from the minion as it works:


salt-minion &


Increase the default timeout
  For example, to change the default timeout to 60 seconds:


salt -t 60


For best results, combine all three:


salt-minion -l debug &          # On the minion
salt '*' state.highstate -t 60  # On the master


    Next steps

This tutorial focused on getting a simple Salt States configuration working. Part 2 will build on this example to cover more advanced sls syntax and will explore more of the states that ship with Salt.

    States tutorial, part 2 - More Complex States, Requisites

NOTE: This tutorial builds on topics covered in part 1. It is recommended that you begin there.

In the last part of the Salt States tutorial we covered the basics of installing a package. We will now modify our webserver.sls file to have requirements, and use even more Salt States.

    Call multiple States

You can specify multiple state-declarations under an id-declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running:


apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache


Try stopping Apache before running state.highstate once again and observe the output.

NOTE: For those running Red Hat derivatives (CentOS, Amazon Linux), you will want to specify the service name as httpd. More on the service state here: service state. With the example above, just add "- name: httpd" above the require line, with the same spacing.

    Require other states

We now have a working installation of Apache, so let's add an HTML file to customize our website. It isn't very useful to have a website without a webserver, so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:


apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

/var/www/index.html:      # ID declaration
  file:                   # state declaration
    - managed             # function
    - source: salt://webserver/index.html # function arg
    - require:            # requisite declaration
      - pkg: apache       # requisite reference

Line 7 is the id-declaration. In this example it is the location we want to install our custom HTML file. (Note: the default location that Apache serves from may differ from the above on your OS or distro. /srv/www could also be a likely place to look.)

Line 8 is the state-declaration. This example uses the Salt file state.

Line 9 is the function-declaration. The managed function will download a file from the master and install it in the location specified.

Line 10 is a function-arg-declaration which, in this example, passes the source argument to the managed function.

Line 11 is a requisite-declaration.

Line 12 is a requisite-reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed.

Next, create the index.html file and save it in the webserver directory:


<!DOCTYPE html>
<html>
    <head><title>Salt rocks</title></head>
    <body>
        <h1>This file brought to you by Salt</h1>
    </body>
</html>


Last, call state.highstate again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server:


salt '*' state.highstate


Verify that Apache is now serving your custom HTML.
require vs. watch

There are two requisite-declarations, "require" and "watch". Not every state supports "watch". The service state does support "watch" and will restart a service based on the watch condition.

For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed you could modify our Apache example from earlier as follows:


/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://webserver/httpd-vhosts.conf

apache:
  pkg.installed: []
  service.running:
    - watch:
      - file: /etc/httpd/extra/httpd-vhosts.conf
    - require:
      - pkg: apache

If the pkg and service names differ on your OS or distro of choice, you can specify each one separately using a name-declaration, which is explained in Part 3.

    Next steps

In part 3 we will discuss how to use includes, extends, and templating to make a more complete State Tree configuration.

    States tutorial, part 3 - Templating, Includes, Extends

NOTE: This tutorial builds on topics covered in part 1 and part 2. It is recommended that you begin there.

This part of the tutorial will cover more advanced templating and configuration techniques for sls files.

    Templating SLS modules

SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is  Jinja2 and may be configured by changing the renderer value in the master config.

All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this:


{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
  user.present
{% endfor %}


This templated sls file once generated will look like this:


moe:
  user.present
larry:
  user.present
curly:
  user.present


Here's a more complex example:


# Comments in yaml start with a hash symbol.
# Since jinja rendering occurs before yaml parsing, if you want to include jinja
# in the comments you may need to escape them using 'jinja' comments to prevent
# jinja from trying to render something which is not well-defined jinja.
# e.g.
# {# iterate over the Three Stooges using a {% for %}..{% endfor %} loop
# with the iterator variable {{ usr }} becoming the state ID. #}
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
  group:
    - present
  user:
    - present
    - gid_from_name: True
    - require:
      - group: {{ usr }}
{% endfor %}


    Using Grains in SLS modules

Often a state will need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:


apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% elif grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% endif %}


    Using Environment Variables in SLS modules

You can use salt['environ.get']('VARNAME') to use an environment variable in a Salt state.


MYENVVAR="world" salt-call state.template test.sls



Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}


Error checking:


{% set myenvvar = salt['environ.get']('MYENVVAR') %}
{% if myenvvar %}

Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}

{% else %}

Fail - no environment passed in:
  test.fail_without_changes

{% endif %}

    Calling Salt modules from templates

All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules.

The Salt module functions are also made available in the template context as salt:


moe:
  user.present:
    - gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}


Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine.

Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0:


salt['network.hw_addr']('eth0')


    Advanced SLS module syntax

Lastly, we will cover some incredibly useful techniques for more complex State trees.

    Include declaration

A previous example showed how to spread a Salt tree across several files. Similarly, requisites can span multiple files by using an include-declaration. For example:

python/python-libs.sls:


python-dateutil:
  pkg.installed


python/django.sls:


include:
  - python.python-libs

django:
  pkg.installed:
    - require:
      - pkg: python-dateutil

    Extend declaration

You can modify previous declarations by using an extend-declaration. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed:

apache/apache.sls:


apache:
  pkg.installed


apache/mywebsite.sls:


include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: /etc/httpd/extra/httpd-vhosts.conf

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://apache/httpd-vhosts.conf

Using extend with require or watch

The extend statement works differently for require or watch: it appends to, rather than replaces, the requisite component.

    Name declaration

You can override the id-declaration by using a name-declaration. For example, the previous example is a bit more maintainable if rewritten as follows:

apache/mywebsite.sls:


include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: mywebsite

mywebsite:
  file.managed:
    - name: /etc/httpd/extra/httpd-vhosts.conf
    - source: salt://apache/httpd-vhosts.conf

    Names declaration

Even more powerful is using a names-declaration to override the id-declaration for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop:


stooges:
  user.present:
    - names:
      - moe
      - larry
      - curly


    Next steps

In part 4 we will discuss how to use Salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.

    States tutorial, part 4

NOTE: This tutorial builds on topics covered in part 1, part 2 and part 3. It is recommended that you begin there.

This part of the tutorial will show how to use Salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.

    Salt fileserver path inheritance

Salt's fileserver allows for more than one root directory per environment, as in the example below, which uses both a local directory and a secondary location shared to the salt master via NFS:


# In the master config file (/usr/local/etc/salt/master)
file_roots:
  base:
    - /usr/local/etc/salt/states
    - /mnt/salt-nfs/base


Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /usr/local/etc/salt/states/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /usr/local/etc/salt/states/foo.txt.
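A minimal sketch of this "top-most match wins" lookup (illustrative only; Salt's real fileserver is modular and more involved):

```python
import os
import tempfile

def find_file(relpath, roots):
    """First match across the ordered roots wins, as described above."""
    for root in roots:
        candidate = os.path.join(root, relpath)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demonstrate with two throwaway directories standing in for
# /usr/local/etc/salt/states and /mnt/salt-nfs/base.
with tempfile.TemporaryDirectory() as local, tempfile.TemporaryDirectory() as nfs:
    for root in (local, nfs):
        with open(os.path.join(root, "foo.txt"), "w") as f:
            f.write(root)
    # foo.txt exists in both roots; the first-listed root shadows the second.
    assert find_file("foo.txt", [local, nfs]) == os.path.join(local, "foo.txt")
```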

NOTE: When using multiple fileserver backends, the order in which they are listed in the fileserver_backend parameter also matters. If both roots and git backends contain a file with the same relative path, and roots appears before git in the fileserver_backend list, then the file in roots will "win", and the file in gitfs will be ignored.

A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading it.

    Environment configuration

Configure a multiple-environment setup like so:


file_roots:
  base:
    - /usr/local/etc/salt/states/prod
  qa:
    - /usr/local/etc/salt/states/qa
    - /usr/local/etc/salt/states/prod
  dev:
    - /usr/local/etc/salt/states/dev
    - /usr/local/etc/salt/states/qa
    - /usr/local/etc/salt/states/prod


Given the path inheritance described above, files within /usr/local/etc/salt/states/prod would be available in all environments. Files within /usr/local/etc/salt/states/qa would be available in both qa and dev. Finally, the files within /usr/local/etc/salt/states/dev would only be available within the dev environment.

Based on the order in which the roots are defined, new files/states can be placed within /usr/local/etc/salt/states/dev, and pushed out to the dev hosts for testing.

Those files/states can then be moved to the same relative path within /usr/local/etc/salt/states/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested.

Finally, if moved to the same relative path within /usr/local/etc/salt/states/prod, the files are now available in all three environments.
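The resulting visibility can be sketched as a small lookup, mirroring the file_roots mapping from the previous section (illustrative only; Salt resolves this through its fileserver):

```python
# Which environments can see a file, given the directory it lives in.
# The env names and root lists mirror the file_roots example above.
file_roots = {
    "base": ["prod"],
    "qa":   ["qa", "prod"],
    "dev":  ["dev", "qa", "prod"],
}

def visible_envs(root_dir):
    """Environments whose ordered root list includes root_dir."""
    return sorted(env for env, roots in file_roots.items() if root_dir in roots)

assert visible_envs("dev") == ["dev"]                  # dev only
assert visible_envs("qa") == ["dev", "qa"]             # promoted to qa
assert visible_envs("prod") == ["base", "dev", "qa"]   # promoted to prod
```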

    Practical Example

As an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website:

/usr/local/etc/salt/states/prod/top.sls:


base:
  'web*prod*':
    - webserver.foobarcom
qa:
  'web*qa*':
    - webserver.foobarcom
dev:
  'web*dev*':
    - webserver.foobarcom


Using pillar, roles can be assigned to the hosts:

/usr/local/etc/salt/pillar/top.sls:


base:
  'web*prod*':
    - webserver.prod
  'web*qa*':
    - webserver.qa
  'web*dev*':
    - webserver.dev


/usr/local/etc/salt/pillar/webserver/prod.sls:


webserver_role: prod


/usr/local/etc/salt/pillar/webserver/qa.sls:


webserver_role: qa


/usr/local/etc/salt/pillar/webserver/dev.sls:


webserver_role: dev


And finally, the SLS to deploy the website:

/usr/local/etc/salt/states/prod/webserver/foobarcom.sls:


{% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
  file.recurse:
    - source: salt://webserver/src/foobarcom
    - env: {{ pillar['webserver_role'] }}
    - user: www
    - group: www
    - dir_mode: 755
    - file_mode: 644
{% endif %}


Given the above SLS, the source for the website should initially be placed in /usr/local/etc/salt/states/dev/webserver/src/foobarcom.

First, let's deploy to dev. Given the configuration in the top file, this can be done using state.highstate:


salt --pillar 'webserver_role:dev' state.highstate


However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, using state.sls:


salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom


Once the site has been tested in dev, then the files can be moved from /usr/local/etc/salt/states/dev/webserver/src/foobarcom to /usr/local/etc/salt/states/qa/webserver/src/foobarcom, and deployed using the following:


salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom


Finally, once the site has been tested in qa, then the files can be moved from /usr/local/etc/salt/states/qa/webserver/src/foobarcom to /usr/local/etc/salt/states/prod/webserver/src/foobarcom, and deployed using the following:


salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom


Thanks to Salt's fileserver inheritance, even though the files have been moved to within /usr/local/etc/salt/states/prod, they are still available from the same salt:// URI in both the qa and dev environments.

    Continue Learning

The best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories.

If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you.

In addition, by continuing to part 5, you can learn about the powerful orchestration of which Salt is capable.

    States Tutorial, Part 5 - Orchestration with Salt

NOTE: This tutorial builds on some of the topics covered in the earlier States Walkthrough pages. It is recommended to start with Part 1 if you are not familiar with how to use states.

Orchestration is accomplished in Salt primarily through the Orchestrate Runner. Added in version 0.17.0, this Salt Runner can use the full suite of requisites available in states, and can also execute states/functions using salt-ssh.

    The Orchestrate Runner

New in version 0.17.0.

NOTE: Orchestrate Deprecates OverState

The Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with some advantages:
o All requisites available in states can be used.
o The states/functions will also work on salt-ssh minions.

The Orchestrate Runner was added with the intent to eventually deprecate the OverState system; however, the OverState will still be maintained until Salt Boron.

The orchestrate runner generalizes the Salt state system to a Salt master context. Whereas the state.sls, state.highstate, et al. functions are concurrently and independently executed on each Salt minion, the state.orchestrate runner is executed on the master, giving it a master-level view and control over requisites, such as state ordering and conditionals. This allows for inter-minion requisites, like ordering the application of states on different minions that must not happen simultaneously, or halting the state run on all minions if a minion fails one of its states.

If you want to set up a load balancer in front of a cluster of web servers, for example, you can ensure the load balancer is set up before the web servers, or stop the state run altogether if one of the minions does not set up correctly.

The state.sls, state.highstate, et al functions allow you to statefully manage each minion and the state.orchestrate runner allows you to statefully manage your entire infrastructure.

    Executing the Orchestrate Runner

The Orchestrate Runner command format is the same as for the state.sls function, except that since it is a runner, it is executed with salt-run rather than salt. Assuming you have a state.sls file called /usr/local/etc/salt/states/orch/webserver.sls, the following command, run on the master, will apply the states defined in that file.


salt-run state.orchestrate orch.webserver


NOTE: state.orch is a synonym for state.orchestrate

Changed in version 2014.1.1: The runner function was renamed to state.orchestrate to avoid confusion with the state.sls execution function. In versions 0.17.0 through 2014.1.0, state.sls must be used.

    Examples

    Function

To execute a function, use salt.function:


# /usr/local/etc/salt/states/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/foo



salt-run state.orchestrate orch.cleanfoo


    State

To execute a state, use salt.state.


# /usr/local/etc/salt/states/orch/webserver.sls
install_nginx:
  salt.state:
    - tgt: 'web*'
    - sls:
      - nginx



salt-run state.orchestrate orch.webserver


    Highstate

To run a highstate, set highstate: True in your state config:


# /usr/local/etc/salt/states/orch/web_setup.sls
webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True



salt-run state.orchestrate orch.web_setup


    More Complex Orchestration

Many states/functions can be configured in a single file, which, when combined with the full suite of requisites, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any requisites, as has been the default in SLS files since 0.17.0.


cmd.run:
  salt.function:
    - tgt: 10.0.0.0/24
    - tgt_type: ipcidr
    - arg:
      - bootstrap

storage_setup:
  salt.state:
    - tgt: 'role:storage'
    - tgt_type: grain
    - sls: ceph
    - require:
      - salt: webserver_setup

webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True

Given the above setup, the orchestration will be carried out as follows:
1. The shell command bootstrap will be executed on all minions in the 10.0.0.0/24 subnet.
2. A Highstate will be run on all minions whose ID starts with "web", since the storage_setup state requires it.
3. Finally, the ceph SLS target will be executed on all minions which have a grain called role with a value of storage.
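The ordering rule above (definition order, except where a require forces a dependency to run first) can be sketched as follows. This is a simplified model for illustration, not Salt's actual state compiler:

```python
# Simplified model of requisite ordering: keep definition order, but
# let a 'require' pull its dependency ahead of the requiring step.
def run_order(steps):
    """steps: list of (name, requires) pairs in definition order."""
    reqs = dict(steps)
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        done.add(name)
        for dep in reqs.get(name, []):
            visit(dep)          # dependencies run first
        order.append(name)

    for name, _ in steps:
        visit(name)
    return order

# The orchestration above: storage_setup requires webserver_setup.
steps = [("cmd.run", []),
         ("storage_setup", ["webserver_setup"]),
         ("webserver_setup", [])]
assert run_order(steps) == ["cmd.run", "webserver_setup", "storage_setup"]
```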

NOTE: Remember, salt-run is always executed on the master.

    Syslog-ng usage

    Overview

The syslog_ng state module is for generating syslog-ng configurations. You can do the following things:
o generate syslog-ng configuration from YAML,
o use non-YAML configuration,
o start, stop or reload syslog-ng.

There is also an execution module, which can check the syntax of the configuration, get the version, and retrieve other information about syslog-ng.

    Configuration

Users can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure.

A statement can be declared in the following forms (both are equivalent):


source.s_localhost:
  syslog_ng.config:
    - config:
        - tcp:
          - ip: "127.0.0.1"
          - port: 1233



s_localhost:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: "127.0.0.1"
            - port: 1233


The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing, and there are more examples at the end of this document.
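The equivalence of the two forms can be sketched as follows. The helper name is invented for illustration; the real state module performs this normalization internally:

```python
# Hypothetical helper, not part of Salt: shows how the short-form ID
# 'source.s_localhost' maps onto the long form's nested structure.
def normalize_short_form(state_id, config):
    """('source.s_localhost', cfg) -> ('s_localhost', {'source': cfg})"""
    statement_type, _, name = state_id.partition(".")
    return name, {statement_type: config}

tcp_cfg = [{"tcp": [{"ip": "127.0.0.1"}, {"port": 1233}]}]
assert normalize_short_form("source.s_localhost", tcp_cfg) == (
    "s_localhost", {"source": tcp_cfg})
```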

    Quotation

Quotation can be tricky sometimes, but here are some rules to follow:
 
o when a string is meant to be "string" in the generated configuration, it should be written as '"string"' in the YAML document
o similarly, users should write "'string'" to get 'string' in the generated configuration

    Full example

The following example shows what a complete syslog-ng configuration looks like:


# Set the location of the configuration file
set_location:
  module.run:
    - name: syslog_ng.set_config_file
    - m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
set_bin_path:
  module.run:
    - name: syslog_ng.set_binary_path
    - m_name: "/home/tibi/install/syslog-ng/sbin"

# Writes the first lines into the config file, also erases its previous
# content
write_version:
  module.run:
    - name: syslog_ng.write_version
    - m_name: "3.6"

# There is a shorter form to set the above variables
set_variables:
  module.run:
    - name: syslog_ng.set_parameters
    - version: "3.6"
    - binary_path: "/home/tibi/install/syslog-ng/sbin"
    - config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# Some global options
options.global_options:
  syslog_ng.config:
    - config:
        - time_reap: 30
        - mark_freq: 10
        - keep_hostname: "yes"

source.s_localhost:
  syslog_ng.config:
    - config:
        - tcp:
          - ip: "127.0.0.1"
          - port: 1233

destination.d_log_server:
  syslog_ng.config:
    - config:
        - tcp:
          - "127.0.0.1"
          - port: 1234

log.l_log_to_central_server:
  syslog_ng.config:
    - config:
        - source: s_localhost
        - destination: d_log_server

some_comment:
  module.run:
    - name: syslog_ng.write_config
    - config: |
        # Multi line
        # comment

# Another mode to use comments or existing configuration snippets
config.other_comment_form:
  syslog_ng.config:
    - config: |
        # Multi line
        # comment

The syslog_ng.reloaded function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (as with a log statement) the id serves as a mandatory comment.

After executing this example, the syslog_ng state will generate this file:


#Generated by Salt on 2014-08-18 00:11:11
@version: 3.6

options {
    time_reap(30);
    mark_freq(10);
    keep_hostname(yes);
};

source s_localhost {
    tcp(
        ip(127.0.0.1),
        port(1233)
    );
};

destination d_log_server {
    tcp(
        127.0.0.1,
        port(1234)
    );
};

log {
    source(s_localhost);
    destination(d_log_server);
};

# Multi line
# comment

# Multi line
# comment

Users can include arbitrary text in the generated configuration by using the config statement (see the example above).

    Syslog_ng module functions

You can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file.

    Examples

    Simple source


source s_tail {
 file(
   "/var/log/apache/access.log",
   follow_freq(1),
   flags(no-parse, validate-utf8)
 );
};



s_tail:
  # Salt will call the source function of syslog_ng module
  syslog_ng.config:
    - config:
        source:
          - file:
            - file: '"/var/log/apache/access.log"'
            - follow_freq : 1
            - flags:
              - no-parse
              - validate-utf8


OR


s_tail:
  syslog_ng.config:
    - config:
        source:
            - file:
              - '"/var/log/apache/access.log"'
              - follow_freq : 1
              - flags:
                - no-parse
                - validate-utf8


OR


source.s_tail:
  syslog_ng.config:
    - config:
        - file:
          - '"/var/log/apache/access.log"'
          - follow_freq : 1
          - flags:
            - no-parse
            - validate-utf8


    Complex source


source s_gsoc2014 {
 tcp(
   ip("0.0.0.0"),
   port(1234),
   flags(no-parse)
 );
};



s_gsoc2014:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: 0.0.0.0
            - port: 1234
            - flags: no-parse


    Filter


filter f_json {
 match(
   "@json:"
 );
};



f_json:
  syslog_ng.config:
    - config:
        filter:
          - match:
            - '"@json:"'


    Template


template t_demo_filetemplate {
 template(
   "$ISODATE $HOST $MSG "
 );
 template_escape(
   no
 );
};



t_demo_filetemplate:
  syslog_ng.config:
    - config:
        template:
          - template:
            - '"$ISODATE $HOST $MSG\n"'
          - template_escape:
            - "no"


    Rewrite


rewrite r_set_message_to_MESSAGE {
 set(
   "${.json.message}",
   value("$MESSAGE")
 );
};



r_set_message_to_MESSAGE:
  syslog_ng.config:
    - config:
        rewrite:
          - set:
            - '"${.json.message}"'
            - value: '"$MESSAGE"'


    Global options


options {
   time_reap(30);
   mark_freq(10);
   keep_hostname(yes);
};



global_options:
  syslog_ng.config:
    - config:
        options:
          - time_reap: 30
          - mark_freq: 10
          - keep_hostname: "yes"


    Log


log {
 source(s_gsoc2014);
 junction {
  channel {
   filter(f_json);
   parser(p_json);
   rewrite(r_set_json_tag);
   rewrite(r_set_message_to_MESSAGE);
   destination {
    file(
      "/tmp/json-input.log",
      template(t_gsoc2014)
    );
   };
   flags(final);
  };
  channel {
   filter(f_not_json);
   parser {
    syslog-parser(
    );
   };
   rewrite(r_set_syslog_tag);
   flags(final);
  };
 };
 destination {
  file(
    "/tmp/all.log",
    template(t_gsoc2014)
  );
 };
};


l_gsoc2014:
  syslog_ng.config:
    - config:
        log:
          - source: s_gsoc2014
          - junction:
            - channel:
              - filter: f_json
              - parser: p_json
              - rewrite: r_set_json_tag
              - rewrite: r_set_message_to_MESSAGE
              - destination:
                - file:
                  - '"/tmp/json-input.log"'
                  - template: t_gsoc2014
              - flags: final
            - channel:
              - filter: f_not_json
              - parser:
                - syslog-parser: []
              - rewrite: r_set_syslog_tag
              - flags: final
          - destination:
            - file:
              - "/tmp/all.log"
              - template: t_gsoc2014


    Advanced Topics

    SaltStack Walk-through

NOTE: Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs!
o Thomas S Hatch
o Salt creator and Chief Developer
o CTO of SaltStack, Inc.

    Getting Started

    What is Salt?

Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.

The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States.

    Installing Salt

Salt has been made very easy to install and get started with. The installation documents contain instructions for all supported platforms.

    Starting Salt

Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master.

    Setting Up the Salt Master

Turning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager:

On systemd-based platforms (openSUSE, Fedora):


systemctl start salt-master


On Upstart-based systems (Ubuntu, older Fedora/RHEL):


service salt-master start


On SysV init systems (Debian, Gentoo, etc.):


/etc/init.d/salt-master start


Alternatively, the Master can be started directly on the command-line:


salt-master -d


The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output:


salt-master -l debug


The Salt Master needs to bind to two TCP network ports on the system: 4505 and 4506. For more in-depth information on firewalling these ports, the firewall tutorial is available here.

    Setting up a Salt Minion

NOTE: The Salt Minion can operate with or without a Salt Master. This walk-through assumes that the minion will be connected to the master. For information on how to run a master-less minion, please see the master-less quick-start guide:

Masterless Minion Quickstart

The Salt Minion only needs to be aware of one piece of information to run: the network location of the master.

By default the minion will look for the DNS name salt for the master, so the easiest approach is to set internal DNS to resolve the name salt to the Salt Master's IP.

Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master:

NOTE: The default location of the configuration files is /usr/local/etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations.

/usr/local/etc/salt/minion:


master: saltmaster.example.com


Now that the master can be found, start the minion in the same way as the master, either with the platform init system or directly from the command line:

As a daemon:


salt-minion -d


In the foreground in debug mode:


salt-minion -l debug


When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached in the configuration directory, which is /usr/local/etc/salt by default. This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order, to find a value that is not localhost:
1. The Python function socket.getfqdn() is run
2. /etc/hostname is checked (non-Windows only)
3. /etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is checked for hostnames that map to anything within 127.0.0.0/8.

If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used.

If all else fails, then localhost is used as a fallback.
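
The address-selection fallback above can be sketched with the standard-library ipaddress module (a simplified illustration of the fallback order, not Salt's actual code):

```python
import ipaddress

def pick_minion_ip(addresses):
    # Ignore anything in 127.0.0.0/8, then prefer the first
    # publicly-routable address from a sorted list, then the first
    # privately-routable one, and finally fall back to "localhost".
    candidates = sorted(a for a in addresses
                        if not ipaddress.ip_address(a).is_loopback)
    for pool in (
        [a for a in candidates if ipaddress.ip_address(a).is_global],
        [a for a in candidates if ipaddress.ip_address(a).is_private],
    ):
        if pool:
            return pool[0]
    return "localhost"
```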

NOTE: Overriding the id

The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id.

Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key.

    Using salt-key

Salt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master.

The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master:


salt-key -L


The keys that have been accepted, rejected, and are pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys:


salt-key -A


NOTE: Keys should be verified! Print the master key fingerprint by running salt-key -F master on the Salt master. Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Restart the Salt minion.
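
For example, the minion configuration might then contain something like the following (the fingerprint below is a placeholder; use the value printed by salt-key -F master):

```yaml
# /usr/local/etc/salt/minion
master_finger: '39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9'
```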

On the master, run salt-key -f minion-id to print the fingerprint of the minion's public key that was received by the master. On the minion, run salt-call key.finger --local to print the fingerprint of the minion key.

On the master:


# salt-key -f foo.domain.com
Unaccepted Keys:
foo.domain.com:  39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9


On the minion:


# salt-call key.finger --local
local:
    39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9


If they match, approve the key with salt-key -a foo.domain.com.
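
The comparison of the two printed fingerprints can be done programmatically as well (an illustrative helper, not part of Salt; hmac.compare_digest is simply a safe string comparison):

```python
import hmac

def fingerprints_match(master_reported, minion_local):
    # Compare the fingerprint printed by `salt-key -f` on the master
    # with the one printed by `salt-call key.finger --local`.
    return hmac.compare_digest(master_reported.strip().lower(),
                               minion_local.strip().lower())
```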

    Sending the First Commands

Now that the minion is connected to the master and authenticated, the master can start to command the minion.

Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution.

The salt command is composed of command options, target specification, the function to execute, and arguments to the function.

A simple command to start with looks like this:


salt '*' test.ping


The * is the target, which specifies all minions.

test.ping tells the minion to run the test.ping function.

In the case of test.ping, test refers to an execution module. ping refers to the ping function contained in the aforementioned test module.

NOTE: Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services.

The result of running this command will be the master instructing all of the minions to execute test.ping in parallel and return the result.

This is not an actual ICMP ping, but rather a simple function which returns True. Using test.ping is a good way of confirming that a minion is connected.

NOTE: Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter.

Of course, there are hundreds of other modules that can be called just as test.ping can. For example, the following would return disk usage on all targeted minions:


salt '*' disk.usage


    Getting to Know the Functions

Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions, execute the sys.doc function:


salt '*' sys.doc


This will display a very large list of available functions and documentation on them.

NOTE: Module documentation is also available on the web.

These functions cover everything from shelling out, to package management, to manipulating database servers. They comprise a powerful system-management API which is the backbone of Salt configuration management and many other aspects of Salt.

NOTE: Salt comes with many plugin systems. The functions that are available via the salt command are called Execution Modules.

    Helpful Functions to Know

The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all:


salt '*' cmd.run 'ls -l /etc'


The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.:


salt '*' pkg.install vim


NOTE: Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here.

The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc:


salt '*' network.interfaces


    Changing the Output Format

The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module:


root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint
{'myminion': {'pythonpath': ['/usr/lib64/python2.7',
                             '/usr/lib/python2.7/plat-linux2',
                             '/usr/lib64/python2.7/lib-tk',
                             '/usr/lib/python2.7/lib-tk',
                             '/usr/lib/python2.7/site-packages',
                             '/usr/lib/python2.7/site-packages/gst-0.10',
                             '/usr/lib/python2.7/site-packages/gtk-2.0']}}


The full list of Salt outputters, as well as example output, can be found here.

    salt-call

The examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to log in to the minion directly and use salt-call.

Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here.

    Grains

Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users.

Grains can also be set statically, which makes it easy to assign values to minions for grouping and managing.

A common practice is to assign grains to minions to specify the role or roles of a minion. These static grains can be set in the minion configuration file or via the grains.setval function.

    Targeting

Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses glob expressions to match minions, hence if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1.
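
The default glob matching behaves like shell filename globbing; a minimal sketch using Python's fnmatch module (illustrative, not Salt's actual matcher):

```python
from fnmatch import fnmatch

minions = ["larry1", "larry2", "curly1", "curly2"]

def match_minions(glob):
    # Return the minion IDs whose names match the glob expression.
    return [m for m in minions if fnmatch(m, glob)]
```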

Many targeting systems other than globs can be used; these include:
Regular Expressions
  Target using PCRE-compliant regular expressions
Grains
  Target based on grains data: Targeting with Grains
Pillar
  Target based on pillar data: Targeting with Pillar
IP
  Target based on IP address/subnet/range
Compound
  Create logic to target based on multiple targets: Targeting with Compound
Nodegroup
  Target with nodegroups: Targeting with Nodegroup

The concept of targeting is used on the command line with Salt, but it also functions in many other areas, including the state system and the systems used for ACLs and user permissions.

    Passing in Arguments

Many of the functions available accept arguments which can be passed in on the command line:


salt '*' pkg.install vim


This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line:


salt '*' test.echo 'foo: bar'


In this case Salt translates the string 'foo: bar' into the dictionary {'foo': 'bar'}.

NOTE: Any line that contains a newline will not be parsed by YAML.
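
The translation described above can be roughly sketched as follows (Salt actually runs each argument through a YAML parser; this hand-rolled stand-in only handles simple 'key: value' strings):

```python
def parse_arg(arg):
    # 'foo: bar' becomes {'foo': 'bar'}; anything else stays a string.
    if ": " in arg:
        key, value = arg.split(": ", 1)
        return {key: value}
    return arg
```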

    Salt States

Now that the basics are covered, the time has come to evaluate States. Salt States, or the State System, is the component of Salt made for configuration management.

The state system is already available with a basic Salt setup; no additional configuration is required. States can be set up immediately.

NOTE: Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other.

The high layers of the state system, which this tutorial covers, consist of everything that needs to be known to use states; the two high layers covered here are the SLS layer and the highest layer, highstate.

Understanding the layers of data management in the State System will help with understanding states, but they never need to be used. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset.

    The First SLS Formula

The state system is built on SLS formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula, open up a file under /usr/local/etc/salt/states named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied.

/usr/local/etc/salt/states/vim.sls:


vim:
  pkg.installed


Now install vim on the minions by calling the SLS directly:


salt '*' state.sls vim


This command will invoke the state system and run the vim SLS.

Now, to beef up the vim SLS formula, a vimrc can be added:

/usr/local/etc/salt/states/vim.sls:


vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - mode: 644
    - user: root
    - group: root

Now the desired vimrc needs to be copied into the Salt file server at /usr/local/etc/salt/states/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for: the vimrc file is placed right next to the vim.sls file. The same command as above can be executed again, and the vim SLS formula will now also manage the file.

NOTE: Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available.

    Adding Some Depth

Obviously, maintaining SLS formulas in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. To start building an nginx formula a better way, make an nginx subdirectory and add an init.sls file:

/usr/local/etc/salt/states/nginx/init.sls:


nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx


A few concepts are introduced in this SLS formula.

First is the service statement which ensures that the nginx service is running.

Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two.

The require statement makes sure that the required component is executed before this one, and that it executed successfully.

NOTE: The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States; for more information on how requisites work and what is available, see Requisites.

Evaluation ordering is also available in Salt: Ordering States

This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command:


salt '*' state.sls nginx


NOTE: Reminder!

Just as one could call the test.ping or disk.usage execution modules, state.sls is simply another execution module. It takes the name of an SLS file as an argument.
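
The init.sls naming rule described above can be sketched as a small helper (illustrative only): strip the .sls extension, drop a trailing init, and join path components with dots:

```python
def sls_name(path):
    # Map a path under the file server root to its SLS reference.
    parts = path[:-len(".sls")].split("/")
    if parts[-1] == "init":
        parts.pop()          # nginx/init.sls -> nginx
    return ".".join(parts)   # edit/vim.sls   -> edit.vim
```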

Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change:

/usr/local/etc/salt/states/edit/vim.sls:


vim:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root

Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed.

    Next Reading

Two walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar.
1. Starting States
2. Pillar Walkthrough

An understanding of Pillar is extremely helpful in using States.

    Getting Deeper Into States

Two more in-depth States tutorials exist, which delve much more deeply into States functionality.
1. How Do I Use Salt States?, covers much more to get off the ground with States.
2. The States Tutorial also provides a fantastic introduction.

These tutorials include much more in-depth information including templating SLS formulas etc.

    So Much More!

This concludes the initial Salt walk-through, but there are many more things still to learn! These documents will cover important core aspects of Salt:
o Pillar
o Job Management

A few more tutorials are also available:
o Remote Execution Tutorial
o Standalone Minion

This still is only scratching the surface, many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents.

    Running Salt as a Normal User Tutorial

Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?

There are many ways to get help from the Salt community, including our mailing list and our IRC channel #salt.

    Running Salt Functions as a Non-root User

If you don't want to run salt-cloud as root, or even install it, you can configure it to have a virtual root in your working directory.

The salt system uses the salt.syspaths module to find these variables.

If you run the Salt build, the module will be generated at:


./build/lib.linux-x86_64-2.7/salt/_syspaths.py


To generate it, run the command:


python setup.py build


Copy the generated module into your salt directory:


cp ./build/lib.linux-x86_64-2.7/salt/_syspaths.py salt/_syspaths.py


Edit it to include the needed variables and your new paths:


# you need to edit this
ROOT_DIR = *your current dir* + '/salt/root'

# you need to edit this
INSTALL_DIR = *location of source code*

CONFIG_DIR = ROOT_DIR + '/usr/local/etc/salt'
CACHE_DIR = ROOT_DIR + '/var/cache/salt'
SOCK_DIR = ROOT_DIR + '/var/run/salt'
SRV_ROOT_DIR = ROOT_DIR + '/srv'
BASE_FILE_ROOTS_DIR = ROOT_DIR + '/usr/local/etc/salt/states'
BASE_PILLAR_ROOTS_DIR = ROOT_DIR + '/usr/local/etc/salt/pillar'
BASE_MASTER_ROOTS_DIR = ROOT_DIR + '/usr/local/etc/salt/states-master'
LOGS_DIR = ROOT_DIR + '/var/log/salt'
PIDFILE_DIR = ROOT_DIR + '/var/run'
CLOUD_DIR = INSTALL_DIR + '/cloud'
BOOTSTRAP = CLOUD_DIR + '/deploy/bootstrap-salt.sh'

Create the directory structure


mkdir -p root/usr/local/etc/salt root/var/cache/run root/run/salt root/srv \
  root/usr/local/etc/salt/states root/usr/local/etc/salt/pillar \
  root/srv/salt-master root/var/log/salt root/var/run


Populate the configuration files:


cp -r conf/* root/usr/local/etc/salt/


Edit your root/usr/local/etc/salt/master configuration that is used by salt-cloud:


user: *your user name*


Run like this:


PYTHONPATH=`pwd` scripts/salt-cloud


    MinionFS Backend Walkthrough

    Propagating Files

New in version 2014.1.0.

Sometimes, one might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master.

    Enabling File Propagation

To enable propagation, the file_recv option needs to be set to True.


file_recv: True


These changes require a restart of the master. Then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master:


salt 'minion-id' cp.push /path/to/the/file


This command will store the file, including its full path, under the cachedir, in master/minions/minion-id/files. With the default cachedir, the example file above would be stored as /var/cache/salt/master/minions/minion-id/files/path/to/the/file.
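
The destination path can be sketched as follows (an illustrative helper, not Salt code; the directory layout follows the description above):

```python
import os.path

def pushed_file_dest(cachedir, minion_id, path):
    # cp.push stores the file under
    # <cachedir>/master/minions/<minion-id>/files/<original path>.
    return os.path.join(cachedir, "master", "minions", minion_id,
                        "files", path.lstrip("/"))
```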

NOTE: This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check out the walkthrough.

    MinionFS Backend

Since it is not a good idea to expose the whole cachedir, MinionFS should be used to send these files to other minions.

    Simple Configuration

To use the minionfs backend only two configuration changes are required on the master. The fileserver_backend option needs to contain a value of minion and file_recv needs to be set to true:


fileserver_backend:
  - roots
  - minion

file_recv: True

These changes require a restart of the master. Then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master.

NOTE: All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove minion from fileserver_backend in the master config file.

NOTE: Having directories with the same name as your minions in the root that can be accessed like salt://minion-id/ might cause confusion.

    Commandline Example

Let's assume that we are going to generate SSH keys on a minion called minion-source and put the public part in ~/.ssh/authorized_keys of the root user of a minion called minion-destination.

First, let's make sure that /root/.ssh exists and has the right permissions:


[root@salt-master file]# salt '*' file.mkdir dir_path=/root/.ssh user=root group=root mode=700
minion-source:
    None
minion-destination:
    None


We create an RSA key pair without a passphrase [*]:


[root@salt-master file]# salt 'minion-source' cmd.run 'ssh-keygen -N "" -f /root/.ssh/id_rsa'
minion-source:
    Generating public/private rsa key pair.
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion-source
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |                 |
    |  o        .     |
    | o o    S o      |
    |=   +  . B o     |
    |o+ E    B =      |
    |+ .   .+ o       |
    |o  ...ooo        |
    +-----------------+


and we send the public part to the master to be available to all minions:


[root@salt-master file]# salt 'minion-source' cp.push /root/.ssh/id_rsa.pub
minion-source:
    True


Now it can be seen by everyone:


[root@salt-master file]# salt 'minion-destination' cp.list_master_dirs
minion-destination:
    - .
    - etc
    - minion-source/root
    - minion-source/root/.ssh


Let's copy that as the only authorized key to minion-destination:


[root@salt-master file]# salt 'minion-destination' cp.get_file salt://minion-source/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
minion-destination:
    /root/.ssh/authorized_keys


Or we can use a more elegant and salty way to add an SSH key:


[root@salt-master file]# salt 'minion-destination' ssh.set_auth_key_from_file user=root source=salt://minion-source/root/.ssh/id_rsa.pub
minion-destination:
    new


[*] Yes, that was the actual key on my server, but the server is already destroyed.

    Automatic Updates / Frozen Deployments

New in version 0.10.3.d.

Salt has support for the  Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies - including shared objects / DLLs.

    Getting Started

To build frozen applications, a suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A.

This process does work on Windows. See  https://github.com/saltstack/salt-windows-install for details on installing Salt on Windows. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows.

Install bbfreeze, and then esky from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies.

    Building and Freezing

Once you have your tools installed and the environment configured, use setup.py to prepare the distribution files.


python setup.py sdist
python setup.py bdist


Once the distribution files are in place, Esky can be used to traverse the module tree and pack all the scripts up into a redistributable.


python setup.py bdist_esky


There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly.

    Windows

C:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up.

    Using the Frozen Build

Unpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the  Esky documentation for more information)

To support updating your minions in the wild, put the builds on a web server that the minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version.

    Troubleshooting

    A Windows minion isn't responding

The process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return.

    Windows and the Visual Studio Redist

The Visual C++ 2008 32-bit redistributable will need to be installed on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed.

    Mixed Linux environments and Yum

The Yum Python module doesn't appear to be available on any of the standard Python package mirrors. If RHEL/CentOS systems need to be supported, the frozen build should be created on that platform to support all the Linux nodes. Remember to build the virtualenv with --system-site-packages so that the yum module is included.

    Automatic (Python) module discovery

Automatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included.

    Multi Master Tutorial

As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.

NOTE: If you need failover capabilities with multiple masters, there is also a Multi-Master-PKI setup available that uses a different topology; see the  MultiMaster-PKI with Failover Tutorial.

In 0.16.0, the masters do not share any information: keys need to be accepted on both masters, and shared files need to be synchronized manually or with tools like the git fileserver backend to ensure that the file_roots are kept consistent.

    Summary of Steps

1. Create a redundant master server
2. Copy primary master key to redundant master
3. Start redundant master
4. Configure minions to connect to redundant master
5. Restart minions
6. Accept keys on redundant master

    Prepping a Redundant Master

The first task is to prepare the redundant master. If the redundant master is already running, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default location of the master's key pair is /usr/local/etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it.

NOTE: There is no logical limit to the number of redundant masters that can be used.

Once the new key is in place, the redundant master can be safely started.

    Configure Minions

Since minions need to be master-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters:


master:
  - saltmaster1.example.com
  - saltmaster2.example.com


Now the minion can be safely restarted.

NOTE: If the ipc_mode for the minion is set to TCP (default in Windows), then each minion in the multi-minion setup (one per master) needs its own tcp_pub_port and tcp_pull_port.

If these settings are left as the default 4510/4511, each minion object will receive a port 2 higher than the previous. Thus the first minion will get 4510/4511, the second will get 4512/4513, and so on. If these port decisions are unacceptable, you must configure tcp_pub_port and tcp_pull_port with lists of ports for each master. The length of these lists should match the number of masters, and there should not be overlap in the lists.
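
The default port allocation described above can be sketched as (illustrative; index 0 is the first minion object):

```python
def minion_ports(index, base_pub=4510, base_pull=4511):
    # Each additional minion object gets ports two higher than the last.
    return (base_pub + 2 * index, base_pull + 2 * index)
```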

Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions.

NOTE: Minions can automatically detect failed masters and attempt to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status.

If this option is not set, minions will still reconnect to failed masters but the first command sent after a master comes back up may be lost while the minion authenticates.

    Sharing Files Between Masters

Salt does not automatically share files between multiple masters. A number of files should be shared between masters, or sharing them should at least be strongly considered.

    Minion Keys

Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /usr/local/etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters.

NOTE: While sharing the /usr/local/etc/salt/pki/master directory will work, it is strongly discouraged, since allowing access to the master.pem key outside of Salt creates a SERIOUS security risk.

    File_Roots

The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions, since instructions managed by one master will not agree with those of the other masters.

The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage.

    Pillar_Roots

Pillar roots should be given the same considerations as file_roots.

    Master Configurations

While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent.

These access control options include but are not limited to:
o external_auth
o client_acl
o peer
o peer_run
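As an illustration, access-control settings such as the following would need to be identical on every master (the user name and ACL entries here are hypothetical):

```yaml
external_auth:
  pam:
    deploy_user:
      - test.*
      - state.*
client_acl:
  deploy_user:
    - test.ping
```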

    Multi-Master-PKI Tutorial With Failover

This tutorial explains how to run a Salt environment in which a single minion can have multiple masters and fail over between them if its current master fails.

The individual steps are:
o setup the master(s) to sign their auth-replies
o setup minion(s) to verify master public keys
o enable multiple masters on minion(s)
o enable master-check on minion(s)

Please note that it is advised to have good knowledge of the Salt authentication and communication process to understand this tutorial. All of the settings described here go on top of the default authentication/communication process.

    Motivation

The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically, this means that there can only be a single master at any given time.

Would it not be much nicer if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?

NOTE: There is also a multi-master tutorial with a different approach and topology than this one that might suit your needs better: see the Multi-Master Tutorial above.

It is also desirable to add some sort of authenticity check for the very first public key a minion receives from a master. Currently, a minion takes the first master's public key for granted.

    The Goal

Setup the master to sign the public key it sends to the minions and enable the minions to verify this signature for authenticity.

    Prepping the master to sign its public key

For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.

The easiest way to have the master sign its public key is to set


master_sign_pubkey: True


After restarting the salt-master service, the master will automatically generate a new key-pair


master_sign.pem
master_sign.pub


A custom name can be set for the signing key-pair by setting


master_sign_key_name: <name_without_suffix>


The master will then generate that key-pair upon restart and use it for creating the public key's signature attached to the auth-reply.

The signature is computed for every auth-request of a minion. If many minions auth very often, it is advised to use the conf_master:master_pubkey_signature and conf_master:master_use_pubkey_signature settings described below.

If multiple masters are in use and should sign their auth-replies, the signing key-pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify a master's public key when connecting to a different master than it did initially, because that public key's signature was created with a different signing key-pair.

    Prepping the minion to verify received public keys

The minion must have the public key (and only that one!) available to be able to verify a signature it receives. That public key (which defaults to master_sign.pub) must be copied from the master to the minion's pki directory:


/usr/local/etc/salt/pki/minion/master_sign.pub

DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE MASTER AND ONLY THERE!

When that is done, enable the signature checking in the minions configuration


verify_master_pubkey_sign: True


and restart the minion. For the first try, the minion should be run in manual debug mode.


$ salt-minion -l debug


Upon connecting to the master, the following lines should appear on the output:


[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO    ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG   ] Decrypting the current master AES key


If the signature verification fails, something went wrong and it will look like this


[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Failed to verify signature of public key
[CRITICAL] The Salt Master server's public key did not authenticate!


In a case like this, check that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master.

Once the verification is successful, the minion can be started in daemon mode again.

For the paranoid among us, it's also possible to verify the public key whenever it is received from the master, that is, for every single auth-attempt, which can be quite frequent. For example, just the start of the minion will force the signature to be checked 6 times for various things like auth, mine, highstate, etc.

If that is desired, enable the setting


always_verify_signature: True


    Multiple Masters For A Minion

Configuring multiple masters on a minion is done by specifying two settings:
o a list of master addresses
o what type of master is defined


master:
    - 172.16.0.10
    - 172.16.0.11
    - 172.16.0.12



master_type: failover


This tells the minion that all the masters above are available for it to connect to. When started with this configuration, it will try the masters in the order they are defined. To randomize that order, set


master_shuffle: True


The master-list will then be shuffled before the first connection attempt.

The first master that accepts the minion is used by the minion. If the master does not yet know the minion, that counts as accepted, and the minion stays on that master.

For the minion to be able to detect if it's still connected to its current master, enable the check for it:


master_alive_interval: <seconds>


If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled).
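Putting the failover-related minion settings from this tutorial together, a complete sketch might look like this (the 30-second interval is an arbitrary example value, and verify_master_pubkey_sign assumes the signature-verification setup from earlier):

```yaml
master:
  - 172.16.0.10
  - 172.16.0.11
  - 172.16.0.12
master_type: failover
master_shuffle: True
master_alive_interval: 30
verify_master_pubkey_sign: True
```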

    Testing the setup

At least two running masters are needed to test the failover setup.

Both masters should be running and the minion should be running on the command line in debug mode


$ salt-minion -l debug


The minion will connect to the first master from its master list


[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO    ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG   ] Decrypting the current master AES key


A test.ping should be run on the master the minion is currently connected to in order to test connectivity.

If successful, that master should be turned off. A firewall rule denying the minion's packets will also do the trick.

Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile.


[INFO    ] Connection to master 172.16.0.10 lost
[INFO    ] Trying to tune in to next master from master-list


The minion will then remove the current master from the list and try connecting to the next master


[INFO    ] Removing possibly failed master 172.16.0.10 from list of masters
[WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11
[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.11


If everything is configured correctly, the new master's public key will be verified successfully


[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub


the authentication with the new master is successful


[INFO    ] Received signed and verified master pubkey from master 172.16.0.11
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[INFO    ] Authentication with master successful!


and the minion can be pinged again from its new master.

    Performance Tuning

With the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU power.

To avoid that, the master can use a pre-created signature of its public-key. The signature is saved as a base64 encoded string which the master reads once when starting and attaches only that string to auth-replies.

Enabling this also gives paranoid users the possibility to have the signing key-pair on a different system than the actual salt-master and create the public key's signature there; for example, on a system with more restrictive firewall rules, without internet access, fewer users, etc.

That signature can be created with


$ salt-key --gen-signature


This will create a default signature file in the master pki-directory


/usr/local/etc/salt/pki/master/master_pubkey_signature


It is a simple text-file with the binary-signature converted to base64.
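The base64 round trip that this file relies on can be sketched in a few lines of Python (the signature bytes below are placeholders, not a real RSA signature):

```python
import base64

# Placeholder bytes standing in for the binary RSA signature of master.pub
# that Salt creates with master_sign.pem.
binary_signature = bytes(range(16))

# The master_pubkey_signature file holds exactly this base64 text.
encoded = base64.b64encode(binary_signature).decode("ascii")
print(encoded)

# Decoding the file's contents must return the original binary signature.
assert base64.b64decode(encoded) == binary_signature
```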

If no signing-pair is present yet, this will auto-create the signing pair and the signature file in one call


$ salt-key --gen-signature --auto-create


Telling the master to use the pre-created signature is done with


master_use_pubkey_signature: True


That requires the file 'master_pubkey_signature' to be present in the master's pki directory with the correct signature.

If the signature file is named differently, its name can be set with


master_pubkey_signature: <filename>


With many masters and many public keys (default and signing), it is advised to use the salt-master's hostname for the signature file's name. Signatures can be easily confused because they do not provide any information about the key the signature was created from.
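For example, on a hypothetical master named master1.example.com, the signature file might be named after the host:

```yaml
master_pubkey_signature: master1.example.com_pubkey_signature
```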

Verifying that everything works is done the same way as above.

    How the signing and verification works

The default key-pair of the salt-master is


/usr/local/etc/salt/pki/master/master.pem
/usr/local/etc/salt/pki/master/master.pub


To be able to create a signature of a message (in this case a public-key), another key-pair has to be added to the setup. Its default name is:


master_sign.pem
master_sign.pub


The combination of the master.* and master_sign.* key-pairs makes it possible to generate signatures. The signature of a given message is unique and can be verified if the public key of the signing key-pair is available to the recipient (the minion).

The signature of the master's public key in master.pub is computed with


master_sign.pem
master.pub
M2Crypto.EVP.sign_update()


This results in a binary signature which is converted to base64 and attached to the auth-reply sent to the minion.

With the signing pair's public key available to the minion, the attached signature can be verified with


master_sign.pub
master.pub
M2Crypto.EVP.verify_update()


When running multiple masters, either the signing key-pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public-keys). DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE.

    Preseed Minion with Accepted Key

In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly.

SEE ALSO: Many ways to preseed minion keys

Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below.

salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to).

salt-api exposes an HTTP call to Salt's REST API to generate and download the new minion keys as a tarball.

There is a general four-step process to do this:
1. Generate the keys on the master:


root@saltmaster# salt-key --gen-keys=[key_name]


Pick a name for the key, such as the minion's id.
2. Add the public key to the accepted minion folder:


root@saltmaster# cp key_name.pub /usr/local/etc/salt/pki/master/minions/[minion_id]


It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file.
3. Distribute the minion keys.

There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post,  http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances )
Security Warning

Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key.

4. Preseed the Minion with the keys

You will want to place the minion keys before starting the salt-minion daemon:


/usr/local/etc/salt/pki/minion/minion.pem
/usr/local/etc/salt/pki/minion/minion.pub


Once in place, you should be able to start salt-minion and run salt-call state.highstate or any other salt commands that require master authentication.
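The four steps above can be sketched with a mock pki tree. All paths and the minion id web01 are hypothetical, and salt-key itself is not invoked here; fake key contents stand in for the real keypair:

```shell
rm -rf /tmp/preseed-demo
PKI=/tmp/preseed-demo
mkdir -p "$PKI"/master/minions "$PKI"/minion
# 1. On the master, salt-key --gen-keys=web01 would produce web01.pem/web01.pub:
printf 'fake-private-key' > "$PKI"/web01.pem
printf 'fake-public-key' > "$PKI"/web01.pub
# 2. Accept the key by copying the public key under the minion's id:
cp "$PKI"/web01.pub "$PKI"/master/minions/web01
# 3./4. Distribute both keys to the minion as minion.pem/minion.pub:
cp "$PKI"/web01.pem "$PKI"/minion/minion.pem
cp "$PKI"/web01.pub "$PKI"/minion/minion.pub
ls "$PKI"/minion
```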

    Salt Bootstrap

The Salt Bootstrap script allows a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script, known as bootstrap-salt.sh, runs through a series of checks to determine the operating system type and version, then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub:  https://github.com/saltstack/salt-bootstrap

    Supported Operating Systems

o Amazon Linux 2012.09
o Arch
o CentOS 5/6
o Debian 6.x/7.x/8 (git installations only)
o Fedora 17/18
o FreeBSD 9.1/9.2/10
o Gentoo
o Linaro
o Linux Mint 13/14
o OpenSUSE 12.x
o Oracle Linux 5/6
o Red Hat 5/6
o Red Hat Enterprise 5/6
o Scientific Linux 5/6
o SmartOS
o SuSE 11 SP1/11 SP2
o Ubuntu 10.x/11.x/12.x/13.04/13.10
o Elementary OS 0.2

NOTE: In the event you do not see your distribution or version available, please review the develop branch on GitHub as it may contain updates that are not present in the stable release:  https://github.com/saltstack/salt-bootstrap/tree/develop

    Example Usage

If you're looking for the one-liner to install Salt, please scroll to the bottom and use the instructions for Installing via an Insecure One-Liner.

NOTE: In every two-step example, you would be well served to download the file first and examine it to ensure that it does what you expect.

Using curl to install latest git:


curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh git develop


Using wget to install your distribution's stable packages:


wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh


Install a specific version from git using wget:


wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh -P git v0.16.4


If you already have Python installed (Python 2.6), then it's as easy as:


python -m urllib "https://bootstrap.saltstack.com" > install_salt.sh
sudo sh install_salt.sh git develop


All python versions should support the following one-liner:


python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' > install_salt.sh
sudo sh install_salt.sh git develop


On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:


fetch -o install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh


If all you want is to install a salt-master using latest git:


curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -M -N git develop


If you want to install a specific release version (based on the git tags):


curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh git v0.16.4


To install a specific branch from a git fork:


curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -g https://github.com/myuser/salt.git git mybranch


    Installing via an Insecure One-Liner

The following examples illustrate how to install Salt via a one-liner.

NOTE: Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy.

    Examples

Installing the latest develop branch of Salt:


curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop


Any of the examples above which use two lines can be made to run as a single line with minor modifications.

    Example Usage

The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself.

For example, using curl to install your distribution\(aqs stable packages:


curl -L https://bootstrap.saltstack.com | sudo sh


Using wget to install your distribution's stable packages:


wget -O - https://bootstrap.saltstack.com | sudo sh


Installing the latest version available from git with curl:


curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop


Install a specific version from git using wget:


wget -O - https://bootstrap.saltstack.com | sh -s -- -P git v0.16.4


If you already have Python installed (Python 2.6), then it's as easy as:


python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop


All python versions should support the following one-liner:


python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' | \
sudo sh -s -- git develop


On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:


fetch -o - https://bootstrap.saltstack.com | sudo sh


If all you want is to install a salt-master using latest git:


curl -L https://bootstrap.saltstack.com | sudo sh -s -- -M -N git develop


If you want to install a specific release version (based on the git tags):


curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v0.16.4


Downloading the develop branch (from here standard command line options may be passed):


wget https://bootstrap.saltstack.com/develop


    Command Line Options

Here's a summary of the command line options:


$ sh bootstrap-salt.sh -h

Usage : bootstrap-salt.sh [options] <install-type> <install-type-args>

Installation types:
  - stable           (default)
  - stable [version] (ubuntu specific)
  - daily            (ubuntu specific)
  - testing          (redhat specific)
  - git

Examples:
  - bootstrap-salt.sh
  - bootstrap-salt.sh stable
  - bootstrap-salt.sh stable 2014.7
  - bootstrap-salt.sh daily
  - bootstrap-salt.sh testing
  - bootstrap-salt.sh git
  - bootstrap-salt.sh git develop
  - bootstrap-salt.sh git v0.17.0
  - bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357

Options:
  -h  Display this message
  -v  Display script version
  -n  No colours.
  -D  Show debug output.
  -c  Temporary configuration directory
  -g  Salt repository URL. (default: git://github.com/saltstack/salt.git)
  -G  Instead of cloning from git://github.com/saltstack/salt.git, clone from
      https://github.com/saltstack/salt.git (Usually necessary on systems which
      have the regular git protocol port blocked, where https usually is not)
  -k  Temporary directory holding the minion keys which will pre-seed the master.
  -s  Sleep time used when waiting for daemons to start, restart and when
      checking for the services running. Default: 3
  -M  Also install salt-master
  -S  Also install salt-syndic
  -N  Do not install salt-minion
  -X  Do not start daemons after installation
  -C  Only run the configuration function. This option automatically bypasses
      any installation.
  -P  Allow pip based installations. On some distributions the required salt
      packages or its dependencies are not available as a package for that
      distribution. Using this flag allows the script to use pip as a last
      resort method. NOTE: This only works for functions which actually
      implement pip based installations.
  -F  Allow copied files to overwrite existing (config, init.d, etc)
  -U  If set, fully upgrade the system prior to bootstrapping salt
  -K  If set, keep the temporary files in the temporary directories specified
      with -c and -k.
  -I  If set, allow insecure connections while downloading any files. For
      example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
  -A  Pass the salt-master DNS name or IP. This will be stored under
      ${_SALT_ETC_DIR}/minion.d/99-master-address.conf
  -i  Pass the salt-minion id. This will be stored under
      ${_SALT_ETC_DIR}/minion_id
  -L  Install the Apache Libcloud package if possible (required for salt-cloud)
  -p  Extra package to install while installing salt dependencies. One package
      per -p flag. You're responsible for providing the proper package name.
  -d  Disable check_service functions. Setting this flag disables the
      'install_<distro>_check_services' checks. You can also do this by
      touching /tmp/disable_salt_checks on the target host. Defaults ${BS_FALSE}
  -H  Use the specified http proxy for the installation
  -Z  Enable external software source for newer ZeroMQ (Only available for
      RHEL/CentOS/Fedora/Ubuntu based distributions)

    Git Fileserver Backend Walkthrough

NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.

The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes.

Branches and tags become Salt fileserver environments.

    Installing Dependencies

Beginning with version 2014.7.0, both  pygit2 and  Dulwich are supported as alternatives to  GitPython. The desired provider can be configured using the gitfs_provider parameter in the master config file.

If gitfs_provider is not configured, then Salt will prefer  pygit2 if a suitable version is available, followed by  GitPython and  Dulwich.

NOTE: It is recommended to always run the most recent version of any of the below dependencies. Certain features of gitfs may not be available without the most recent version of the chosen library.

    pygit2

The minimum supported version of  pygit2 is 0.20.3. Availability for this version of  pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible.

For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install  pygit2:


# yum install python-pygit2


Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it:


# apt-get install python-pygit2


If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some distros (Debian-based), pkg-config is also required to link libgit2 with libssh2.

WARNING:  pygit2 is actively developed and frequently makes non-backwards-compatible API changes, even in minor releases. It is not uncommon for  pygit2 upgrades to result in errors in Salt. Please take care when upgrading  pygit2, and pay close attention to the changelog, keeping an eye out for API changes. Errors can be reported on the SaltStack issue tracker.

    GitPython

 GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:


# yum install GitPython


Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:


# apt-get install python-git


If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either  pip or easy_install (it is recommended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running pip install GitPython (or easy_install GitPython) as root.

WARNING: Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically /tmp/pip-build-root/GitPython) if the cache is not cleared after installation. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip install 'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed.

    Dulwich

Dulwich 0.9.4 or newer is required to use Dulwich as the backend for gitfs.

Dulwich is available in EPEL, and can be easily installed on the master using yum:


# yum install python-dulwich


For APT-based distros such as Ubuntu and Debian:


# apt-get install python-dulwich


IMPORTANT: If switching to Dulwich from GitPython/pygit2, or switching from GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to avoid unpredictable behavior. This is probably a good idea whenever switching to a new gitfs_provider, but it is less important when switching between GitPython and pygit2.

Beginning in version 2015.5.0, the gitfs cache can be easily cleared using the fileserver.clear_cache runner.


salt-run fileserver.clear_cache backend=git


If the Master is running an earlier version, then the cache can be cleared by removing the gitfs and file_lists/gitfs directories (both paths relative to the master cache directory, usually /var/cache/salt/master).


rm -rf /var/cache/salt/master{,/file_lists}/gitfs


    Simple Configuration

To use the gitfs backend, only two configuration changes are required on the master:
1. Include git in the fileserver_backend list in the master config file:


fileserver_backend:
  - git


2. Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:


gitfs_remotes:
  - https://github.com/saltstack-formulas/salt-formula.git


SSH remotes can also be configured using scp-like syntax:


gitfs_remotes:
  - git@github.com:user/repo.git
  - ssh://user@domain.tld/path/to/repo.git


Information on how to authenticate to SSH remotes can be found  here.

NOTE: Dulwich does not recognize ssh:// URLs, git+ssh:// must be used instead. Salt version 2015.5.0 and later will automatically add the git+ to the beginning of these URLs before fetching, but earlier Salt versions will fail to fetch unless the URL is specified using git+ssh://.
3. Restart the master to load the new configuration.

NOTE: In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository.
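Combined, a minimal gitfs master configuration following the two steps above might look like this (using the example formula repository from step 2):

```yaml
fileserver_backend:
  - git

gitfs_remotes:
  - https://github.com/saltstack-formulas/salt-formula.git
```

After adding these lines, restart the master to load the new configuration.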

    Multiple Remotes

The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.

A simple scenario illustrates this cascading lookup behavior:

If the gitfs_remotes option specifies three remotes:


gitfs_remotes:
  - git://github.com/example/first.git
  - https://github.com/example/second.git
  - file:///root/third


And each repository contains some files:


first.git:
    top.sls
    edit/vim.sls
    edit/vimrc
    nginx/init.sls

second.git:
    edit/dev_vimrc
    haproxy/init.sls

third:
    haproxy/haproxy.conf
    edit/dev_vimrc

Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:
o A request for the file salt://haproxy/init.sls will be served from the https://github.com/example/second.git git repo.
o A request for the file salt://haproxy/haproxy.conf will be served from the file:///root/third repo.

NOTE: This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos.

The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.

WARNING: Salt versions prior to 2014.1.0 are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before restarting the salt-master service.

    Per-remote Configuration Parameters

New in version 2014.7.0.

The following master config parameters are global (that is, they apply to all configured gitfs remotes):
o gitfs_base
o gitfs_root
o gitfs_mountpoint (new in 2014.7.0)
o gitfs_user (pygit2 only, new in 2014.7.0)
o gitfs_password (pygit2 only, new in 2014.7.0)
o gitfs_insecure_auth (pygit2 only, new in 2014.7.0)
o gitfs_pubkey (pygit2 only, new in 2014.7.0)
o gitfs_privkey (pygit2 only, new in 2014.7.0)
o gitfs_passphrase (pygit2 only, new in 2014.7.0)

These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage:


gitfs_provider: pygit2
gitfs_base: develop

gitfs_remotes:
  - https://foo.com/foo.git
  - https://foo.com/bar.git:
    - root: salt
    - mountpoint: salt://foo/bar/baz
    - base: salt-base
  - http://foo.com/baz.git:
    - root: salt/states
    - user: joe
    - password: mysupersecretpassword
    - insecure_auth: True

IMPORTANT: There are two important distinctions which should be noted for per-remote configuration:
1. The URL of a remote which has per-remote configuration must be suffixed with a colon.
2. Per-remote configuration parameters are named like the global versions, with the gitfs_ removed from the beginning.

In the example configuration above, the following is true:
1. The first and third gitfs remotes will use the develop branch/tag as the base environment, while the second one will use the salt-base branch/tag as the base environment.
2. The first remote will serve all files in the repository. The second remote will only serve files from the salt directory (and its subdirectories), while the third remote will only serve files from the salt/states directory (and its subdirectories).
3. The files from the second remote will be located under salt://foo/bar/baz, while the files from the first and third remotes will be located under the root of the Salt fileserver namespace (salt://).
4. The third remote overrides the default behavior of not authenticating to insecure (non-HTTPS) remotes.

    Serving from a Subdirectory

The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver.

Assume the below layout:


.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls


The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:


gitfs_remotes:
  - git://mydomain.com/stuff.git

gitfs_root: foo/baz

The root can also be configured on a per-remote basis.

    Mountpoints

New in version 2014.7.0.

The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver.

Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository).

The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.


gitfs_remotes:
  - https://mydomain.com/stuff.git

gitfs_mountpoint: salt://webapps/foo/files

Mountpoints can also be configured on a per-remote basis.
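Taken together, root and mountpoint amount to a simple path translation between the salt:// namespace and the repository. The following is a rough model of that translation (an illustrative sketch only; repo_path is a hypothetical helper, not part of Salt's code):

```python
def repo_path(salt_url, root='', mountpoint=''):
    """Translate a salt:// URL into a path inside the git repository:
    the mountpoint prefix is stripped from the request, and the root
    prefix is prepended. Returns None if this remote does not serve
    the requested path."""
    path = salt_url[len('salt://'):]
    mp = mountpoint[len('salt://'):] if mountpoint.startswith('salt://') else mountpoint
    if mp:
        if path != mp and not path.startswith(mp + '/'):
            return None  # request falls outside this remote's mountpoint
        path = path[len(mp):].lstrip('/')
    return root + '/' + path if root else path

# A file at the root of the repo, served under a mountpoint:
print(repo_path('salt://webapps/foo/files/foo.conf',
                mountpoint='salt://webapps/foo/files'))  # 'foo.conf'
# Serving only the foo/baz subdirectory of a repo:
print(repo_path('salt://top.sls', root='foo/baz'))       # 'foo/baz/top.sls'
```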

    Using gitfs Alongside Other Backends

Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master.

The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:


fileserver_backend:
  - roots
  - git


Then the roots backend (the default backend of files in /usr/local/etc/salt/states) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched.
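The cascading behavior can be sketched as a simple first-match search (an illustrative model only; the dict-based backends below are hypothetical stand-ins, not Salt's internal API):

```python
def find_file(path, backends):
    """Return the first hit from the configured backends, searched in
    order; later backends are only consulted on a miss."""
    for backend in backends:
        result = backend.get(path)  # stand-in lookup; None on miss
        if result is not None:
            return result
    return None

# Stand-in backends modeled as dicts: roots is searched before git.
roots = {'top.sls': '/usr/local/etc/salt/states/top.sls'}
git = {'top.sls': 'git: first remote', 'haproxy/init.sls': 'git: second remote'}

print(find_file('top.sls', [roots, git]))           # served by roots
print(find_file('haproxy/init.sls', [roots, git]))  # falls through to git
```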

    Branches, Environments, and Top Files

When using the gitfs backend, branches and tags will be mapped to environments, using the branch/tag name as an identifier.

There is one exception to this rule: the master branch is implicitly mapped to the base environment.

So, for a typical base, qa, dev setup, the following branches could be used:


master
qa
dev


top.sls files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have a separate repository containing only the top.sls file, with a single master branch.

To map a branch other than master as the base environment, use the gitfs_base parameter.


gitfs_base: salt-base


The base can also be configured on a per-remote basis.
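The branch-to-environment mapping can be expressed concisely (an illustrative sketch; branch_to_env is a hypothetical helper name, not a Salt function):

```python
def branch_to_env(branch, gitfs_base='master'):
    """Map a git branch/tag name to a Salt environment name: the
    branch named by gitfs_base becomes 'base'; every other branch/tag
    maps to an environment of the same name."""
    return 'base' if branch == gitfs_base else branch

print(branch_to_env('master'))                             # 'base'
print(branch_to_env('qa'))                                 # 'qa'
print(branch_to_env('salt-base', gitfs_base='salt-base'))  # 'base'
```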

    Environment Whitelist/Blacklist

New in version 2014.7.0.

The gitfs_env_whitelist and gitfs_env_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.


gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'


NOTE: v1.*, in this example, will match as both a glob and a regular expression (though it will have been matched as a glob, since globs are evaluated before regular expressions).

The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:
o If only gitfs_env_whitelist is used, then only branches/tags which match the whitelist will be available as environments.
o If only gitfs_env_blacklist is used, then the branches/tags which match the blacklist will not be available as environments.
o If both are used, then the branches/tags which match the whitelist, but do not match the blacklist, will be available as environments.
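The evaluation order (exact match, then glob, then full-string regular expression) and the whitelist/blacklist combination rules can be modeled roughly as follows (an illustrative sketch only, not Salt's implementation):

```python
import fnmatch
import re

def env_matches(env, patterns):
    """True if env matches any pattern, checked as an exact match,
    then a glob, then a regex that must match the entire name."""
    for pat in patterns:
        if env == pat:                   # exact match
            return True
        if fnmatch.fnmatch(env, pat):    # glob
            return True
        try:
            if re.fullmatch(pat, env):   # regex, full-string only
                return True
        except re.error:
            pass                         # pattern is not a valid regex
    return False

def available_envs(envs, whitelist=None, blacklist=None):
    """Apply whitelist-then-blacklist filtering to branch/tag names."""
    result = []
    for env in envs:
        if whitelist and not env_matches(env, whitelist):
            continue
        if blacklist and env_matches(env, blacklist):
            continue
        result.append(env)
    return result

print(available_envs(['base', 'v1.0', 'mybranch3', 'dev'],
                     whitelist=['base', 'v1.*', r'mybranch\d+']))
# ['base', 'v1.0', 'mybranch3']
```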

    Authentication

    pygit2

New in version 2014.7.0.

Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs.

NOTE: The examples below make use of per-remote configuration parameters, a feature new to Salt 2014.7.0. More information on these can be found here.

    HTTPS

For HTTPS repositories which require authentication, the username and password can be provided like so:


gitfs_remotes:
  - https://domain.tld/myrepo.git:
    - user: git
    - password: mypassword


If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:


gitfs_remotes:
  - http://domain.tld/insecure_repo.git:
    - user: git
    - password: mypassword
    - insecure_auth: True


    SSH

SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
o ssh://git@github.com/user/repo.git
o git@github.com:user/repo.git

Both gitfs_pubkey and gitfs_privkey (or their  per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured  per-remote). For example:


gitfs_remotes:
  - git@github.com:user/repo.git:
    - pubkey: /root/.ssh/id_rsa.pub
    - privkey: /root/.ssh/id_rsa
    - passphrase: myawesomepassphrase


Finally, the SSH host key must be added to the known_hosts file.

    GitPython

With GitPython, only passphrase-less SSH public key authentication is supported. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.


gitfs_remotes:
  - ssh://git@github.com/example/salt-states.git


Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).

If a different key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found in the manpage for ssh_config. Here's an example entry which can be added to ~/.ssh/config to use an alternate key for gitfs:


Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs


The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.

It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config:


Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
    StrictHostKeyChecking no


However, this is generally regarded as insecure, and is not recommended.

    Adding the SSH Host Key to the known_hosts File

To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:


# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:
    ----------
    new:
        ----------
        enc:
            ssh-rsa
        fingerprint:
            16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
        hostname:
            |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
        key:
            AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
    old:
        None
    status:
        updated


If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to log in to the server via SSH:


$ su
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).


It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.

    Verifying the Fingerprint

To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:


$ nmap github.com --script ssh-hostkey

Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT     STATE SERVICE
22/tcp   open  ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp   open  http
443/tcp  open  https
9418/tcp open  git

Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds

Another way is to check one's own known_hosts file, using this one-liner:


$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan -t rsa github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48


    Refreshing gitfs Upon Push

By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh quicker than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process:
1. On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents:


update_fileserver:
  runner.fileserver.update


2. Add the following reactor configuration to the master config file:


reactor:
  - 'salt/fileserver/gitfs/update':
    - /srv/reactor/update_fileserver.sls


3. On the git server, add a post-receive hook with the following contents:


#!/usr/bin/env sh

salt-call event.fire_master update salt/fileserver/gitfs/update

The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor.

Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent.

    Using Git as an External Pillar Source

The git external pillar (a.k.a. git_pillar) has been rewritten for the 2015.8.0 release. This rewrite brings with it  pygit2 support (allowing for access to authenticated repositories), as well as more granular support for per-remote configuration.

To make use of the new features, changes to the git ext_pillar configuration must be made. The new configuration schema is detailed here.

For Salt releases before 2015.8.0, click here for documentation.

    Why aren't my custom modules/states/etc. syncing to my Minions?

In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.

This issue is worked around in Salt 0.16.4 and newer.

    The MacOS X (Maverick) Developer Step By Step Guide To Salt Installation

This document provides a step-by-step guide to installing a Salt cluster consisting of one master, and one minion running on a local VM hosted on Mac OS X.

NOTE: This guide is aimed at developers who wish to run Salt in a virtual machine. The official (Linux) walkthrough can be found here.

    The 5 Cent Salt Intro

Since you're here you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster:
o Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your business.
o Salt has two types of configuration files:

1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually /usr/local/etc/salt/master, on the master server), and one for minions (default is /usr/local/etc/salt/minion or /etc/salt/minion.conf, on the minion servers). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install.

2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which software should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /usr/local/etc/salt/states folder by default, but their location can be changed using ... /usr/local/etc/salt/master configuration file!

NOTE: This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters.
[1] Salt also works with "masterless" configuration where a minion is autonomous (in which case salt can be seen as a local configuration tool), or in "multiple master" configuration. See the documentation for more on that.

    Before Digging In, The Architecture Of The Salt Cluster

    Salt Master

The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files.

    Salt Minion

We'll only have one "Salt minion" server. It is going to run on a virtual machine on the Mac, using VirtualBox. It will run an Ubuntu distribution.

    Step 1 - Configuring The Salt Master On Your Mac

See the official documentation for more details.

Because Salt has a lot of dependencies that are not built into Mac OS X, we will use Homebrew to install Salt. Homebrew is a package manager for the Mac; it's great, use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you uninstall things easily.

NOTE: Brew is a Ruby program (Ruby is installed by default with your Mac). Brew downloads, compiles, and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It's OK; remove the manually installed version, then refresh the link by typing brew link 'packageName'. Brew has a brew doctor command that can help you troubleshoot. It's a great command, use it often. Brew requires the Xcode command line tools. When you run brew the first time it asks you to install them if they're not already on your system. Brew installs software in /usr/local/bin (system bins are in /usr/bin). In order to use those bins you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed.

TIP: Use the keyboard shortcut cmd + shift + period in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile.

    Install Homebrew

Install Homebrew from http://brew.sh/, or just type:


ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"


Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine):


brew install python
brew install swig
brew install zmq


NOTE: zmq is ZeroMQ. It's a fantastic library used for server-to-server network communication and is at the core of Salt's efficiency.

    Install Salt

You should now have everything ready to launch this command:


pip install salt


NOTE: There should be no need for sudo pip install salt. Brew installed Python for your user, so you should have all the access you need. In case you would like to check, type which python to ensure that it's /usr/local/bin/python, and which pip, which should be /usr/local/bin/pip.

Now type python in a terminal, then import salt. There should be no errors. Exit the Python interpreter using exit().

    Create The Master Configuration

If the default /usr/local/etc/salt/master configuration file was not created, copy-paste it from here:  http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master

NOTE: /usr/local/etc/salt/master is a file, not a folder.

Salt Master configuration changes. The Salt master needs a few customizations to be able to run on Mac OS X:


sudo launchctl limit maxfiles 4096 8192


In the /usr/local/etc/salt/master file, change max_open_files to 8192 (or just add the line max_open_files: 8192, without quotes, if it doesn't already exist).

You should now be able to launch the Salt master:


sudo salt-master --log-level=all


There should be no errors when running the above command.

NOTE: This command is supposed to run as a daemon, but for toying around, we'll keep it running in a terminal to monitor the activity.

Now that the master is set, let's configure a minion on a VM.

    Step 2 - Configuring The Minion VM

The Salt minion is going to run on a virtual machine. There are a lot of software options that let you run virtual machines on a Mac, but for this tutorial we're going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration.

Vagrant lets you build ready-to-use VM images, starting from an OS image and customizing it using "provisioners". In our case, we'll use it to:
o Download the base Ubuntu image
o Install Salt on that Ubuntu image (Salt is going to be the "provisioner" for the VM)
o Launch the VM
o SSH into the VM to debug
o Stop the VM once you're done

    Install VirtualBox

Go get it here: https://www.virtualbox.org/wiki/Downloads (click on VirtualBox for OS X hosts => x86/amd64)

    Install Vagrant

Go get it here:  http://downloads.vagrantup.com/ and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double-click to install it. Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands.

    Create The Minion VM Folder

Create a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory.


cd $home
mkdir minion


    Initialize Vagrant

From the minion folder, type


vagrant init


This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3.

    Import Precise64 Ubuntu Box


vagrant box add precise64 http://files.vagrantup.com/precise64.box


NOTE: This box is added at the global Vagrant level. You only need to do it once as each VM will use this same file.

    Modify the Vagrantfile

Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:


config.vm.box = "precise64"


Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use):


config.vm.network :private_network, ip: "192.168.33.10"


At this point you should have a VM that can run, although there won't be much in it. Let's check that.

    Checking The VM

From the $home/minion folder type:


vagrant up


A log showing the VM booting should be present. Once it's done, you'll be back at the terminal:


ping 192.168.33.10


The VM should respond to your ping request.

Now log into the VM via SSH using Vagrant again:


vagrant ssh


You should see the shell prompt change to something similar to vagrant@precise64:~$, meaning you're inside the VM. From there, enter the following:


ping 10.0.2.2


NOTE: That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the vagrant ssh command. We'll use that IP to tell the minion where the Salt master is. Once you're done, end the ssh session by typing exit.

It's now time to connect the VM to the Salt master.

    Step 3 - Connecting Master and Minion

    Creating The Minion Configuration File

Create the /usr/local/etc/salt/minion file. In that file, put the following lines, giving the ID for this minion, and the IP of the master:


master: 10.0.2.2
id: 'minion1'
file_client: remote


Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and you can accept them on the master later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once.

    Preseed minion keys

From the minion folder on your Mac run:


sudo salt-key --gen-keys=minion1


This should create two files: minion1.pem, and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership:


sudo chown youruser:yourgroup minion1.pem
sudo chown youruser:yourgroup minion1.pub


Then copy the .pub file into the list of accepted minions:


sudo cp minion1.pub /usr/local/etc/salt/pki/master/minions/minion1


    Modify Vagrantfile to Use Salt Provisioner

Let's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties):


# salt-vagrant config
config.vm.provision :salt do |salt|
    salt.run_highstate = true
    salt.minion_config = "/usr/local/etc/salt/minion"
    salt.minion_key = "./minion1.pem"
    salt.minion_pub = "./minion1.pub"
end


Now destroy the VM and recreate it from the minion folder:


vagrant destroy
vagrant up


If everything is fine you should see the following message:


"Bootstrapping Salt... (this may take a while)
Salt successfully configured and installed!"


    Checking Master-Minion Communication

To make sure the master and minion are talking to each other, enter the following:


sudo salt '*' test.ping


You should see your minion answering the ping. It's now time to do some configuration.

    Step 4 - Configure Services to Install On the Minion

In this step we'll use the Salt master to instruct our minion to install Nginx.

    Checking the system's original state

First, make sure that an HTTP server is not installed on our minion. When opening a browser directed at http://192.168.33.10/, you should get an error saying the site cannot be reached.

    Initialize the top.sls file

System configuration is done in the /usr/local/etc/salt/states/top.sls file (and subfiles/folders), and is then applied by running the state.highstate command to have the Salt master give orders so minions will update their instructions and run the associated commands.

First, create an empty file on your Salt master (Mac OS X machine):


touch /usr/local/etc/salt/states/top.sls


When the file is empty, or if no configuration is found for our minion, an error is reported:


sudo salt 'minion1' state.highstate


This should return an error stating: "No Top file or external nodes data matches found".

    Create The Nginx Configuration

Now is finally the time to enter the real meat of our server's configuration. For this tutorial, our minion will be treated as a web server that needs to have Nginx installed.

Insert the following lines into the /usr/local/etc/salt/states/top.sls file (which should currently be empty):


base:
  'minion1':
    - bin.nginx


Now create a /usr/local/etc/salt/states/bin/nginx.sls file containing the following:


nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - reload: True


    Check Minion State

Finally run the state.highstate command again:


sudo salt 'minion1' state.highstate


You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to http://192.168.33.10/; you should see the standard Nginx welcome page.

Congratulations!

    Where To Go From Here

A full description of configuration management within Salt (sls files among other things) is available here:  http://docs.saltstack.com/en/latest/index.html#configuration-management

    Writing Salt Tests

NOTE: THIS TUTORIAL IS A WORK IN PROGRESS

Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. The integration tests are surprisingly easy to write and can be written to be either destructive or non-destructive.

    Getting Set Up For Tests

To walk through adding an integration test, start by getting the latest development code and the test system from GitHub:

NOTE: The develop branch often has failing tests and should always be considered a staging area. For a checkout that tests should be running perfectly on, please check out a specific release tag (such as v2014.1.4).


git clone git@github.com:saltstack/salt.git
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting


Now that a fresh checkout is available, run the test suite.

    Destructive vs Non-destructive

Since Salt is used to change the settings and behavior of systems, often the best approach to running tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.

To write a destructive test, import and use the destructiveTest decorator for the test method:


import integration
from salttesting.helpers import destructiveTest

class PkgTest(integration.ModuleCase):
    @destructiveTest
    def test_pkg_install(self):
        ret = self.run_function('pkg.install', name='finch')
        self.assertSaltTrueReturn(ret)
        ret = self.run_function('pkg.purge', name='finch')
        self.assertSaltTrueReturn(ret)

    Automated Test Runs

SaltStack maintains a Jenkins server which can be viewed at  http://jenkins.saltstack.com. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new clean virtual machine. This allows for the execution of tests across supported platforms.

    HTTP Modules

This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows.

The salt.utils.http Library

This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.

Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.

This library can be imported with:


import salt.utils.http


    Configuring Libraries

This library can make use of either tornado, which is required by Salt, urllib2, which ships with Python, or requests, which can be installed separately. By default, tornado will be used. In order to switch to urllib2, set the following variable:


backend: urllib2


In order to switch to requests, set the following variable:


backend: requests


This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions.

salt.utils.http.query()

This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries. Not all functionality currently available in these libraries has been added, but can be in future iterations.

A basic query can be performed by calling this function with no more than a single URL:


salt.utils.http.query('http://example.com')


By default the query will be performed with a GET method. The method can be overridden with the method argument:


salt.utils.http.query('http://example.com/delete/url', 'DELETE')


When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly, in whatever format is required by the remote server (XML, JSON, plain text, etc.).


salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    data=json.dumps(mydict)
)


Bear in mind that this data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated):


salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/usr/local/etc/salt/states/somefile.xml'
)


To pass through a file that contains jinja + yaml templating (the default):


salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/usr/local/etc/salt/states/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'}
)


To pass through a file that contains mako templating:


salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/usr/local/etc/salt/states/somefile.mako',
    data_render=True,
    data_renderer='mako',
    template_data={'key1': 'value1', 'key2': 'value2'}
)


Because this function uses Salt\(aqs own rendering system, any Salt renderer can be used. Because Salt\(aqs renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary.


salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/usr/local/etc/salt/states/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    opts=__opts__
)

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/usr/local/etc/salt/states/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    node='master'
)

Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as a Python dict.


salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    header_file='/usr/local/etc/salt/states/headers.jinja',
    header_render=True,
    header_renderer='jinja',
    template_data={'key1': 'value1', 'key2': 'value2'}
)
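As a side note, the automatic YAML import of headers can be pictured with a small stand-in. This is an illustration only; the real import is performed by Salt's YAML renderer:

```python
# Hypothetical headers file content; HTTP headers are line-oriented
# "Key: value" pairs, which is why they also parse cleanly as YAML.
raw_headers = "Content-Type: application/json\nX-Custom-Header: some value"

# Crude stand-in for the YAML import Salt performs:
headers = dict(line.split(": ", 1) for line in raw_headers.splitlines())

print(headers["Content-Type"])   # application/json
```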


Because much of the data that would be templated between headers and data may be the same, the template_data is the same for both. Correcting possible variable name collisions is up to the user.

The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively.


salt.utils.http.query(
    'http://example.com',
    username='larry',
    password='5700g3543v4r',
)


Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True.


salt.utils.http.query(
    'http://example.com',
    cookies=True
)


By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argument:


salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt'
)


By default, the format of the cookie jar is LWP (aka, lib-www-perl). This default was chosen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla:


salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt',
    cookie_format='mozilla'
)


Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a normal browser, with session cookies that persist across multiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed.


salt.utils.http.query(
    'http://example.com',
    persist_session=True,
    session_cookie_jar='/path/to/jar.p'
)


The format of this file is msgpack, which is consistent with much of the rest of Salt\(aqs internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable.

    Return Data

By default, query() will attempt to decode the return data. Because it was designed to be used with REST interfaces, it will attempt to decode the data received from the remote server. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded.

JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set:


salt.utils.http.query(
    'http://example.com',
    decode_type='xml'
)


Once translated, the return dict from query() will include a dict called dict.
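The fallback order described above can be sketched with the standard library. This is a rough illustration under stated assumptions, not the actual salt.utils.http implementation, which differs in its details:

```python
import json

def decode_body(content_type, body):
    """Sketch of the decode fallback: check Content-Type for XML first,
    then JSON, then fall back to plain text (returned undecoded)."""
    if "xml" in content_type:
        return "xml", body          # the real code converts XML to a dict
    if "json" in content_type:
        return "json", json.loads(body)
    return "text", body

kind, data = decode_body("application/json", '{"ok": true}')
print(kind, data)   # json {'ok': True}
```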

If the data is not to be translated using one of these methods, decoding may be turned off.


salt.utils.http.query(
    'http://example.com',
    decode=False
)


If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below).

The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on.


salt.utils.http.query(
    'http://example.com',
    status=True,
    headers=True,
    text=True
)


The return from these will be found in the return dict as status, headers and text, respectively.

    Writing Return Data to Files

It is possible to write either the return data or headers to files, as soon as the response is received from the server, by specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this.


salt.utils.http.query(
    'http://example.com',
    text=False,
    headers=False,
    text_out='/path/to/url_download.txt',
    headers_out='/path/to/headers_download.txt',
)


    SSL Verification

By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off.


salt.utils.http.query(
    'https://example.com',
    verify_ssl=False,
)


    CA Bundles

The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable.


salt.utils.http.query(
    'https://example.com',
    ca_bundle='/path/to/ca_bundle.pem',
)


    Updating CA Bundles

The update_ca_bundle() function can be used to update the bundle file at a specified location. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from does not exist, a bundle will be downloaded from the cURL website.

CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be downloaded which is hazardous or does not meet the needs of the user.


salt.utils.http.update_ca_bundle(
    target='/path/to/ca-bundle.crt',
    source='https://example.com/path/to/ca-bundle.crt',
    opts=__opts__,
)


The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively.


ca_bundle: /path/to/ca-bundle.crt
ca_bundle_url: https://example.com/path/to/ca-bundle.crt


If Salt is unable to auto-detect the location of the CA bundle, it will raise an error.

The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file.


salt.utils.http.update_ca_bundle(
    opts=__opts__,
    merge_files=[
        '/etc/ssl/private_cert_1.pem',
        '/etc/ssl/private_cert_2.pem',
        '/etc/ssl/private_cert_3.pem',
    ]
)


    Test Mode

This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent.

Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary.

    Execution Module

The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary.

Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worst, the data_file and header_file arguments are likely to see more use here.

All methods for the library are available in the execution module, as kwargs.


salt myminion http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True


    Runner Module

Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config.

All methods for the library are available in the runner module, as kwargs.


salt-run http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True


    State Module

The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text. By default, this performs a string comparison, looking for the value of match in the return text. In Python terms this looks like:


if match in html_text:
    return True


If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match().
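The two match modes can be sketched as follows. This is an illustrative stand-in for the state module's logic, not its actual source, and the function name is hypothetical:

```python
import re

def matches(html_text, match, match_type="string"):
    # Plain substring check by default; re.search() (not re.match(),
    # despite the option's name) when match_type is set to pcre.
    if match_type == "pcre":
        return re.search(match, html_text) is not None
    return match in html_text

print(matches("request SUCCEEDED", "(?i)succe(ss|ed)", "pcre"))  # True
print(matches("request SUCCEEDED", "SUCCESS"))                   # False
```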

Therefore, the following states are valid:


http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True

http://example.com/restapi:
  http.query:
    - match_type: pcre
    - match: '(?i)succe[ss|ed]'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True

In addition to, or instead of a match pattern, the status code for a URL can be checked. This is done using the status argument:


http://example.com/:
  http.query:
    - status: \(aq200\(aq


If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting.

Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.

    LXC Management with Salt

NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.

    Dependencies

Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:
o RHEL/CentOS 6 and later (via EPEL)
o Fedora (All non-EOL releases)
o Debian 8.0 (Jessie)
o Ubuntu 14.04 LTS and later (LXC templates are packaged separately as lxc-templates; it is recommended to also install this package)
o openSUSE 13.2 and later

    Profiles

Profiles allow for a sort of shorthand for commonly-used configurations to be defined in the minion config file, grains, pillar, or the master config file. The profile is retrieved by Salt using the config.get function, which looks in those locations, in that order. This allows for profiles to be defined centrally in the master config file, with several options for overriding them (if necessary) on groups of minions or individual minions.

There are two types of profiles:
o One for defining the parameters used in container creation/clone.
o One for defining the container's network interface(s) settings.

    Container Profiles

LXC container profiles are defined underneath the lxc.container_profile config option:


lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
  centos_big:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 20G


Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data). Consider the following container profile data:

In the Master config file:


lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G


In the Pillar data


lxc.container_profile:
  centos:
    size: 20G


Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.
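The recurse merge strategy described above can be pictured with a simplified sketch. This is an illustration of the behavior, not Salt's actual merge code (which lives in its dictionary-update utilities):

```python
def recurse_merge(base, override):
    # Override values win, but nested dicts are merged key-by-key
    # instead of being replaced wholesale.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = recurse_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

master = {"centos": {"template": "centos", "backing": "lvm", "size": "10G"}}
pillar = {"centos": {"size": "20G"}}

print(recurse_merge(master, pillar))
# {'centos': {'template': 'centos', 'backing': 'lvm', 'size': '20G'}}
```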

NOTE: In the 2014.7.x release cycle and earlier, container profiles are defined under lxc.profile. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.container_profile, and only in versions 2015.5.0 and later.

Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create. Below is a table describing the parameters which can be configured in container profiles:

Parameter      2015.5.0 and Newer    2014.7.x and Earlier
template [1]   Yes                   Yes
options [1]    Yes                   No
image [1]      Yes                   Yes
backing        Yes                   Yes
snapshot [2]   Yes                   Yes
lvname [1]     Yes                   Yes
fstype [1]     Yes                   Yes
size           Yes                   Yes

[1] Parameter is only supported for container creation, and will be ignored if the profile is used when cloning a container.
[2] Parameter is only supported for container cloning, and will be ignored if the profile is used when not cloning a container.

    Network Profiles

LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.

WARNING: On versions earlier than 2015.5.2, you need to specify the network bridge explicitly.


lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
  ubuntu:
    eth0:
      link: lxcbr0
      type: veth
      flags: up


As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:

In the Master config file:


lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up


In the Pillar data


lxc.network_profile:
  centos:
    eth0:
      link: lxcbr0


Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.

NOTE: In the 2014.7.x release cycle and earlier, network profiles are defined under lxc.nic. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.network_profile, and only in versions 2015.5.0 and later.

The following are parameters which can be configured in network profiles. These will directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).
o type - Corresponds to lxc.network.type
o link - Corresponds to lxc.network.link
o flags - Corresponds to lxc.network.flags
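For illustration, a network profile entry such as the centos example above would correspond roughly to the following lines in the container's LXC configuration file (hand-written here for comparison, not generated output):

```
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
```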

Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create:


salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts=\(aq{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}\(aq


WARNING: The ipv4, ipv6, gateway, and link (bridge) settings in network profiles / nic_opts will only work if the container doesn't redefine the network configuration (for example in /etc/sysconfig/network-scripts/ifcfg-<interface_name> on RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use these with caution. The container images installed using the download template, for instance, typically are configured for eth0 to use DHCP, which will conflict with static IP addresses set at the container level.

NOTE: For LXC < 1.0.7 and DHCP support, set ipv4.gateway: 'auto' in your network profile, i.e.:


lxc.network_profile.nic:
  debian:
    eth0:
      link: lxcbr0
      ipv4.gateway: 'auto'


    Old lxc support (<1.0.7)

With Salt 2015.5.2 and above, this setting is normally selected automatically, but on earlier versions you'll need to configure your network profile to set lxc.network.ipv4.gateway to auto when using a classic IPv4 configuration.

Thus you'll need:


lxc.network_profile.foo:
  eth0:
    link: lxcbr0
    ipv4.gateway: auto


    Tricky network setups Examples

This example covers how to make a container with both an internal IP and a public routable IP, wired on two veth pairs.

The interface that receives the public routable IP directly cannot be the first interface, which we reserve for private inter-LXC networking.


lxc.network_profile.foo:
  eth0: {gateway: null, bridge: lxcbr0}
  eth1:
    # replace that by your main interface
    'link': 'br0'
    'mac': '00:16:5b:01:24:e1'
    'gateway': '2.20.9.14'
    'ipv4': '2.20.9.1'


    Creating a Container on the CLI

    From a Template

LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.

There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container from a different OS is desired.

The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:


salt mycentosminion lxc.create container1 template=centos


For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:
1. dist - the Linux distribution (i.e. ubuntu or centos)
2. release - the release name/version (i.e. trusty or 6)
3. arch - CPU architecture (i.e. amd64 or i386)

The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64.
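The URL format above can be expressed as a small helper. The function name is hypothetical; the host and path layout are those of the linuxcontainers.org image server mentioned in the text:

```python
def image_url(dist, release, arch):
    # Compose the image-listing URL from dist, release, and arch,
    # following the format described above.
    return "http://images.linuxcontainers.org/images/{0}/{1}/{2}".format(
        dist, release, arch)

print(image_url("centos", "6", "amd64"))
# http://images.linuxcontainers.org/images/centos/6/amd64
```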

Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used:


salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'


NOTE: These command-line options can be placed into a container profile, like so:


lxc.container_profile.cent6:
  template: download
  options:
    dist: centos
    release: 6
    arch: amd64


The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command-line.

    Cloning an Existing Container

To clone a container, use the lxc.clone function:


salt myminion lxc.clone container2 orig=container1


    Using a Container Image

While cloning is a good way to create new containers from a common base container, the source container that is being cloned needs to already exist on the minion. This makes deploying a common container across minions difficult. For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root:


tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs


NOTE: Before doing this, it is recommended that the container is stopped.

The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image parameter with lxc.create:


salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz


NOTE: Making images of containers with LVM backing

For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM backing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs:


mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs


WARNING: One caveat of using this method of container creation is that /etc/hosts is left unmodified. This could cause confusion for some distros if salt-minion is later installed on the container, as the functions that determine the hostname take /etc/hosts into account.

Additionally, when creating a rootfs image, be sure to remove /usr/local/etc/salt/minion_id and make sure that id is not defined in /usr/local/etc/salt/minion, as this will cause similar issues.

    Initializing a New Container as a Salt Minion

The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init function can be used. This function will do the following:
1. Create a new container
2. Optionally set password and/or DNS
3. Bootstrap the minion (using either salt-bootstrap or a custom command)

By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.


salt myminion lxc.init test1 profile=centos
salt-key -a test1


For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step:


salt-run lxc.init test1 host=myminion profile=centos


    Running Commands Within a Container

For containers which are not running their own Minion, commands can be run within the container in a manner similar to using cmd.run. The means of doing this have been changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below.

    2015.5.0 and Newer

New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents:

Description                               cmd module        lxc module
Run a command and get all output          cmd.run           lxc.run
Run a command and get just stdout         cmd.run_stdout    lxc.run_stdout
Run a command and get just stderr         cmd.run_stderr    lxc.run_stderr
Run a command and get just the retcode    cmd.retcode       lxc.retcode
Run a command and get all information     cmd.run_all       lxc.run_all

    2014.7.x and Earlier

Earlier Salt releases use a single function (lxc.run_cmd) to run commands within containers. Whether stdout, stderr, etc. are returned depends on how the function is invoked.

To run a command and return the stdout:


salt myminion lxc.run_cmd web1 'tail /var/log/messages'


To run a command and return the stderr:


salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True


To run a command and return the retcode:


salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False


To run a command and return all information:


salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True


    Container Management Using salt-cloud

Under the hood, Salt Cloud uses the LXC runner and execution module to manage containers. Please refer to the Salt Cloud documentation for details.

    Container Management Using States

Several states are being renamed or otherwise modified in version 2015.5.0. The information in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states.

    Ensuring a Container Is Present

To ensure the existence of a named container, use the lxc.present state. Here are some examples:


# Using a template
web1:
  lxc.present:
    - template: download
    - options:
        dist: centos
        release: 6
        arch: amd64

# Cloning
web2:
  lxc.present:
    - clone_from: web-base

# Using a rootfs image
web3:
  lxc.present:
    - image: salt://path/to/cent6.tar.gz

# Using profiles
web4:
  lxc.present:
    - profile: centos_web
    - network_profile: centos

WARNING: The lxc.present state will not modify an existing container (in other words, it will not re-create the container). If an lxc.present state is run on an existing container, there will be no change and the state will return a True result.

The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped. Note that there are standalone lxc.running and lxc.stopped states which can be used for this purpose.

    Ensuring a Container Does Not Exist

To ensure that a named container is not present, use the lxc.absent state. For example:


web1:
  lxc.absent


    Ensuring a Container is Running/Stopped/Frozen

Containers can be in one of three states:
o running - Container is running and active
o frozen - Container is running, but all processes are blocked and the container is essentially non-active until the container is "unfrozen"
o stopped - Container is not running

Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states:


web1:
  lxc.running

# Restart the container if it was already running
web2:
  lxc.running:
    - restart: True

web3:
  lxc.stopped

# Explicitly kill all tasks in container instead of gracefully stopping
web4:
  lxc.stopped:
    - kill: True

web5:
  lxc.frozen

# If container is stopped, do not start it (in which case the state will fail)
web6:
  lxc.frozen:
    - start: False

    Using Salt with Stormpath

Stormpath is a user management and authentication service. This tutorial covers using SaltStack to manage and take advantage of Stormpath's features.

    External Authentication

Stormpath can be used for Salt's external authentication system. In order to do this, the master should be configured with an apiid, apikey, and the ID of the application that is associated with the users to be authenticated:


stormpath:
  apiid: 367DFSF4FRJ8767FSF4G34FGH
  apikey: FEFREF43t3FEFRe/f323fwer4FWF3445gferWRWEer1
  application: 786786FREFrefreg435fr1


NOTE: These values can be found in the Stormpath dashboard (https://api.stormpath.com/ui2/index.html#/).

Users that are to be authenticated should be set up under the stormpath dict under external_auth:


external_auth:
  stormpath:
    larry:
      - .*
      - '@runner'
      - '@wheel'


Keep in mind that while Stormpath defaults the username associated with the account to the email address, it is better to use a username without an @ sign in it.

    Configuring Stormpath Modules

Stormpath accounts can be managed via either an execution or state module. In order to use either, a minion must be configured with an API ID and key.


stormpath:
  apiid: 367DFSF4FRJ8767FSF4G34FGH
  apikey: FEFREF43t3FEFRe/f323fwer4FWF3445gferWRWEer1
  directory: efreg435fr1786786FREFr
  application: 786786FREFrefreg435fr1


Some functions in the stormpath modules can make use of other options. The following options are also available.

    directory

The ID of the directory that is to be used with this minion. Many functions require an ID to be specified to do their work. However, if the ID of a directory is specified, then Salt can often look up the resource in question.

    application

The ID of the application that is to be used with this minion. Many functions require an ID to be specified to do their work. However, if the ID of an application is specified, then Salt can often look up the resource in question.

    Managing Stormpath Accounts

With the stormpath configuration in place, Salt can be used to configure accounts (which may be thought of as users) on the Stormpath service. The following functions are available.

    stormpath.create_account

Create an account on the Stormpath service. This requires a directory_id as the first argument; it will not be retrieved from the minion configuration. An email address, password, first name (givenName) and last name (surname) are also required. For the full list of other parameters that may be specified, see:

 http://docs.stormpath.com/rest/product-guide/#account-resource

When executed with no errors, this function will return the information about the account, from Stormpath.


salt myminion stormpath.create_account <directory_id> shemp@example.com letmein Shemp Howard


    stormpath.list_accounts

Show all accounts on the Stormpath service. This will return all accounts, regardless of directory, application, or group.


salt myminion stormpath.list_accounts


    stormpath.show_account

Show the details for a specific Stormpath account. An account_id is normally required. However, if an email is provided instead, along with either a directory_id, application_id, or group_id, then Salt will search the specified resource to try and locate the account_id.


salt myminion stormpath.show_account <account_id>
salt myminion stormpath.show_account email=<email> directory_id=<directory_id>


    stormpath.update_account

Update one or more items for this account. Specifying an empty value will clear it for that account. This function may be used in one of two ways. In order to update only one key/value pair, specify them in order:


salt myminion stormpath.update_account <account_id> givenName shemp
salt myminion stormpath.update_account <account_id> middleName ''


In order to specify multiple items, they need to be passed in as a dict. From the command line, it is best to do this as a JSON string:


salt myminion stormpath.update_account <account_id> items='{"givenName": "Shemp"}'
salt myminion stormpath.update_account <account_id> items='{"middleName": ""}'


When executed with no errors, this function will return the information about the account, from Stormpath.

    stormpath.delete_account

Delete an account from Stormpath.


salt myminion stormpath.delete_account <account_id>


    stormpath.list_directories

Show all directories associated with this tenant.


salt myminion stormpath.list_directories


    Using Stormpath States

Stormpath resources may be managed using the state system. The following states are available.

    stormpath_account.present

Ensure that an account exists on the Stormpath service. All options that are available with the stormpath.create_account function are available here. If an account needs to be created, then this function will require the same fields that stormpath.create_account requires, including the password. However, if a password changes for an existing account, it will NOT be updated by this state.


curly@example.com:
  stormpath_account.present:
    - directory_id: efreg435fr1786786FREFr
    - password: badpass
    - firstName: Curly
    - surname: Howard
    - nickname: curly


It is advisable to always set a nickname that is not also an email address, so that it can be used by Salt's external authentication module.

    stormpath_account.absent

Ensure that an account does not exist on Stormpath. As with stormpath_account.present, the name supplied to this state is the email address associated with this account. Salt will use this, with or without the directory ID that is configured for the minion. However, lookups will be much faster with a directory ID specified.
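A minimal sketch of this state, assuming the directory ID is already set in the minion configuration as shown earlier (the email address is illustrative):

```yaml
# Remove the Stormpath account tied to this email address
curly@example.com:
  stormpath_account.absent
```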

    Salt Virt

    Salt as a Cloud Controller

In Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt.

The Salt Virt system is included within Salt itself, which means that besides setting up Salt, no additional code needs to be deployed.

The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, and virtual machine migration with and without shared storage.

This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the power of specialized hardware as well.

    Setting up Hypervisors

The first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces.

    Installing Hypervisor Software

Salt Virt is made to be hypervisor agnostic but currently the only fully implemented hypervisor is KVM via libvirt.

The required software for a hypervisor is libvirt and kvm. For advanced features install libguestfs or qemu-nbd.

NOTE: Libguestfs and qemu-nbd allow virtual machine images to be mounted before startup and pre-seeded with configurations and a Salt Minion.

This sls will set up the needed software for a hypervisor, and run the routines to set up the libvirt pki keys.

NOTE: The package names and setup used here are Red Hat specific; different package names will be required on other platforms.


libvirt:
  pkg.installed: []
  file.managed:
    - name: /etc/sysconfig/libvirtd
    - contents: 'LIBVIRTD_ARGS="--listen"'
    - require:
      - pkg: libvirt
  libvirt.keys:
    - require:
      - pkg: libvirt
  service.running:
    - name: libvirtd
    - require:
      - pkg: libvirt
      - network: br0
      - libvirt: libvirt
    - watch:
      - file: libvirt

libvirt-python:
  pkg.installed: []

libguestfs:
  pkg.installed:
    - pkgs:
      - libguestfs
      - libguestfs-tools

    Hypervisor Network Setup

The hypervisors will need to be running a network bridge to serve up network devices for virtual machines. This formula will set up a standard bridge on a hypervisor, connecting the bridge to eth0:


eth0:
  network.managed:
    - enabled: True
    - type: eth
    - bridge: br0

br0:
  network.managed:
    - enabled: True
    - type: bridge
    - proto: dhcp
    - require:
      - network: eth0

    Virtual Machine Network Setup

Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device.

NOTE: To use more advanced networking in Salt Virt, read the Salt Virt Networking document:

Salt Virt Networking
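For reference, the default single-interface model described above corresponds to a network profile in the master configuration roughly like the following (a sketch; the virt.nic profile name and layout here are assumptions, so consult the Salt Virt Networking document for the authoritative format):

```yaml
# Master config: default network profile for new VMs (illustrative)
virt.nic:
  default:
    eth0:
      bridge: br0
```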

    Libvirt State

One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority and uses pillar to distribute them. This is managed via the libvirt state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date:

NOTE: The above formula includes the calls needed to set up libvirt keys.


libvirt_keys:
  libvirt.keys


    Getting Virtual Machine Images Ready

Salt Virt requires that virtual machine images be provided, as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform.

Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors.

Virtual Machine generation applications are available for many platforms:
vm-builder:
   https://wiki.debian.org/VMBuilder

SEE ALSO:  vmbuilder-formula

Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /usr/local/etc/salt/states and it can now be used by Salt Virt.

For purposes of this demo, the file name centos.img will be used.

    Existing Virtual Machine Images

Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK.

    CentOS

These images have been prepared for OpenNebula but should work without issue with Salt Virt; only the raw qcow image file is needed:  http://wiki.centos.org/Cloud/OpenNebula

    Fedora Linux

Images for Fedora Linux can be found here:  http://fedoraproject.org/en/get-fedora#clouds

    Ubuntu Linux

Images for Ubuntu Linux can be found here:  http://cloud-images.ubuntu.com/

    Using Salt Virt

With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands.

Start by running a Salt Virt hypervisor info command:


salt-run virt.hyper_info


This will query the running hypervisor stats and display information for all configured hypervisors. This command will also validate that the hypervisors are properly configured.

Now that hypervisors are available a virtual machine can be provisioned. The virt.init routine will create a new virtual machine:


salt-run virt.init centos1 2 512 salt://centos.img


This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor.

Once the VM image has been copied down, the new virtual machine will be seeded. Seeding the VM involves placing pre-authenticated Salt keys on the new VM and, if needed, installing the Salt Minion on the new VM before it is started.

NOTE: The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment.

Now that the new VM has been prepared, it can be seen via the virt.query command:


salt-run virt.query


This command will return data about all of the hypervisors and respective virtual machines.

Once the new VM has booted it should have contacted the Salt Master; a test.ping will reveal whether the new VM is running.

    Migrating Virtual Machines

Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible.

A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow libvirt and kvm to communicate with each other and execute migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened:


iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT


NOTE: More in-depth information regarding distribution-specific firewall settings can be found in:

Opening the Firewall up for Salt

Salt also needs the virt.tunnel option to be turned on. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without virt.tunnel, libvirt tries to bind to random ports when running migrations. To turn on virt.tunnel, simply add it to the master config file:


virt.tunnel: True


Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change:


salt '*' saltutil.refresh_modules


Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine:


salt-run virt.migrate centos <new hypervisor>


    VNC Consoles

Salt Virt also sets up VNC consoles by default, allowing remote visual consoles to be opened. The information from a virt.query routine will display the VNC console port for specific VMs:


centos
  CPU: 2
  Memory: 524288
  State: running
  Graphics: vnc - hyper6:5900
  Disk - vda:
    Size: 2.0G
    File: /usr/local/etc/salt/states-images/ubuntu2/system.qcow2
    File Format: qcow2
  Nic - ac:de:48:98:08:77:
    Source: br0
    Type: bridge


The line Graphics: vnc - hyper6:5900 holds the key. First, the named port, in this case 5900, will need to be open in the hypervisor's firewall. Once the port is open, the console can be easily opened via vncviewer:


vncviewer hyper6:5900


By default there is no VNC security set up on these ports, so it is advisable to keep them firewalled and mandate that SSH tunnels be used to access these VNC interfaces. Keep in mind that any activity on a VNC interface can be viewed by any other user accessing that same interface, and any other user logging in can also operate alongside the logged-in user on the virtual machine.

    Conclusion

Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt.

    LXC

    ESXi Proxy Minion

    ESXi Proxy Minion

New in version 2015.8.4.

NOTE: This tutorial assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.

This tutorial also assumes a basic understanding of Salt Proxy Minions. If you're unfamiliar with Salt's Proxy Minion system, please read the Salt Proxy Minion documentation and the Salt Proxy Minion End-to-End Example tutorial.

The third assumption that this tutorial makes is that you also have a basic understanding of ESXi hosts. You can learn more about ESXi hosts on VMware's various resources.

Salt's ESXi Proxy Minion allows a VMware ESXi host to be treated as an individual Salt Minion, without installing a Salt Minion on the ESXi host.

Since an ESXi host may not necessarily run on an OS capable of hosting a Python stack, the ESXi host can't run a regular Salt Minion directly. Therefore, Salt's Proxy Minion functionality enables you to designate another machine to host a proxy process that "proxies" communication from the Salt Master to the ESXi host. The master does not know or care that the ESXi target is not a "real" Salt Minion.

More in-depth conceptual reading on Proxy Minions can be found in the Proxy Minion section of Salt's documentation.

Salt's ESXi Proxy Minion was added in the 2015.8.4 release of Salt.

NOTE: Be aware that some functionality of the ESXi Proxy Minion may depend on the type of license attached to the ESXi host(s).

For example, certain services can only have their state or policies manipulated with a VMware vSphere Enterprise or Enterprise Plus license, while others are available with a Standard license. The ntpd service is restricted to an Enterprise Plus license, while ssh is available via the Standard license.

Please see the  vSphere Comparison page for more information.

    Dependencies

Manipulation of the ESXi host via a Proxy Minion requires the machine running the Proxy Minion process to have the ESXCLI package (and all of its dependencies) and the pyVmomi Python library installed.

    ESXi Password

The ESXi Proxy Minion uses VMware's API to perform tasks on the host as if it were a regular Salt Minion. In order to access the API that is already running on the ESXi host, the ESXi host must have a username and password that is used to log into the host. The username is usually root. Before Salt can access the ESXi host via VMware's API, a default password must be set on the host.

    pyVmomi

The pyVmomi Python library must be installed on the machine that is running the proxy process. pyVmomi can be installed via pip:


pip install pyVmomi


NOTE: Version 6.0 of pyVmomi has some problems with SSL error handling on certain versions of Python. If using version 6.0 of pyVmomi, the machine that you are running the proxy minion process from must have either Python 2.6, Python 2.7.9, or newer. This is due to an upstream dependency in pyVmomi 6.0 that is not supported in Python version 2.7 to 2.7.8. If the version of Python running the proxy process is not in the supported range, you will need to install an earlier version of pyVmomi. See  Issue #29537 for more information.

Based on the note above, to install an earlier version of pyVmomi than the version currently listed in PyPi, run the following:


pip install pyVmomi==5.5.0.2014.1.1


The 5.5.0.2014.1.1 is a known stable version that the original ESXi Proxy Minion was developed against.

    ESXCLI

Currently, about a third of the functions used for the ESXi Proxy Minion require the ESXCLI package be installed on the machine running the Proxy Minion process.

The ESXCLI package is also referred to as the VMware vSphere CLI, or vCLI. VMware provides vCLI package installation instructions for  vSphere 5.5 and  vSphere 6.0.

Once all of the required dependencies are in place and the vCLI package is installed, you can check to see if you can connect to your ESXi host by running the following command:


esxcli -s <host-location> -u <username> -p <password> system syslog config get


If the connection was successful, ESXCLI was successfully installed on your system. You should see output related to the ESXi host's syslog configuration.

    Configuration

There are several places where various configuration values need to be set in order for the ESXi Proxy Minion to run and connect properly.

    Proxy Config File

On the machine that will be running the Proxy Minion process(es), a proxy config file must be in place. This file should be located in the /usr/local/etc/salt/ directory and should be named proxy. If the file is not there by default, create it.

This file should contain the location of your Salt Master that the Salt Proxy will connect to.

NOTE: If you're running your ESXi Proxy Minion on a version of Salt that is 2015.8.4 or newer, you also need to set add_proxymodule_to_opts: False in your proxy config file. The need to specify this configuration will be removed with Salt Boron, the next major feature release. See the New in 2015.8.2 section of the Proxy Minion documentation for more information.

Example Proxy Config File:


# /usr/local/etc/salt/proxy

master: <salt-master-location>
add_proxymodule_to_opts: False

    Pillar Profiles

Proxy minions get their configuration from Salt's Pillar. Every proxy must have a stanza in Pillar and a reference in the Pillar top-file that matches the Proxy ID. At a minimum for communication with the ESXi host, the pillar should look like this:


proxy:
  proxytype: esxi
  host: <ip or dns name of esxi host>
  username: <ESXi username>
  passwords:
    - first_password
    - second_password
    - third_password


Some other optional settings are protocol and port. These can be added to the pillar configuration.

    proxytype

The proxytype key and value pair is critical, as it tells Salt which interface to load from the proxy directory in Salt's install hierarchy, or from /usr/local/etc/salt/states/_proxy on the Salt Master (if you have created your own proxy module, for example). To use this ESXi Proxy Module, set this to esxi.

    host

The location, or ip/dns, of the ESXi host. Required.

    username

The username used to login to the ESXi host, such as root. Required.

    passwords

A list of passwords to be used to try and login to the ESXi host. At least one password in this list is required.

The proxy integration will try the passwords listed in order. It is configured this way so you can have a regular password and the password you may be updating for an ESXi host, either via the vsphere.update_host_password execution module function or via the esxi.password_present state function. This way, after the password is changed, you should not need to restart the proxy minion; it should just pick up the new password provided in the list. You can then change pillar at will to move that password to the front and retire the unused ones.

Use-case/reasoning for using a list of passwords: You are setting up an ESXi host for the first time, and the host comes with a default password. You know that you\(aqll be changing this password during your initial setup from the default to a new password. If you only have one password option, and if you have a state changing the password, any remote execution commands or states that run after the password change will not be able to run on the host until the password is updated in Pillar and the Proxy Minion process is restarted.

This allows you to use any number of potential fallback passwords.

NOTE: When a password is changed on the host to one in the list of possible passwords, the further down on the list the password is, the longer individual commands will take to return. This is due to the nature of pyVmomi\(aqs login system. We have to wait for the first attempt to fail before trying the next password on the list.

This scenario is especially true, and even slower, when the proxy minion first starts. If the correct password is not the first password on the list, it may take up to a minute for test.ping to respond with a True result. Once the initial authorization is complete, the responses for commands will be a little faster.

To avoid these longer waiting periods, SaltStack recommends moving the correct password to the top of the list and restarting the proxy minion at your earliest convenience.
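For instance, after a password change made via esxi.password_present has been confirmed, the pillar might be reordered like this (passwords illustrative):

```yaml
proxy:
  proxytype: esxi
  host: esxi-1.example.com
  username: 'root'
  passwords:
    - new-bad-password      # current password, moved to the front
    - old-bad-password      # retired once the rotation is confirmed
```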

    protocol

If the ESXi host is not using the default protocol, set this value to an alternate protocol. Default is https.
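For example, a sketch of the pillar entry for a host served over plain HTTP (value illustrative):

```yaml
proxy:
  protocol: 'http'
```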

    port

If the ESXi host is not using the default port, set this value to an alternate port. Default is 443.

    Example Configuration Files

An example of all of the basic configurations that need to be in place before starting the Proxy Minion processes includes the Proxy Config File, Pillar Top File, and any individual Proxy Minion Pillar files.

In this example, we'll assume there are two ESXi hosts to connect to. Therefore, we'll be creating two Proxy Minion config files, one config for each ESXi host.

Proxy Config File:


# /usr/local/etc/salt/proxy

master: <salt-master-location>
add_proxymodule_to_opts: False

Pillar Top File:


# /usr/local/etc/salt/pillar/top.sls

base:
  'esxi-1':
    - esxi-1
  'esxi-2':
    - esxi-2

Pillar Config File for the first ESXi host, esxi-1:


# /usr/local/etc/salt/pillar/esxi-1.sls

proxy:
  proxytype: esxi
  host: esxi-1.example.com
  username: 'root'
  passwords:
    - bad-password-1
    - backup-bad-password-1

Pillar Config File for the second ESXi host, esxi-2:


# /usr/local/etc/salt/pillar/esxi-2.sls

proxy:
  proxytype: esxi
  host: esxi-2.example.com
  username: 'root'
  passwords:
    - bad-password-2
    - backup-bad-password-2

    Starting the Proxy Minion

Once all of the correct configuration files are in place, it is time to start the proxy processes!
1. First, make sure your Salt Master is running.
2. Start the first Salt Proxy, in debug mode, by giving the Proxy Minion process an ID that matches the config file name created in the Configuration section.


salt-proxy --proxyid='esxi-1' -l debug


3. Accept the esxi-1 Proxy Minion's key on the Salt Master:


# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
esxi-1
Rejected Keys:
#
# salt-key -a esxi-1
The following keys are going to be accepted:
Unaccepted Keys:
esxi-1
Proceed? [n/Y] y
Key for minion esxi-1 accepted.


4. Repeat for the second Salt Proxy; this time, as an example, we'll run the proxy process as a daemon.


salt-proxy --proxyid='esxi-2' -d


5. Accept the esxi-2 Proxy Minion's key on the Salt Master:


# salt-key -L
Accepted Keys:
esxi-1
Denied Keys:
Unaccepted Keys:
esxi-2
Rejected Keys:
#
# salt-key -a esxi-2
The following keys are going to be accepted:
Unaccepted Keys:
esxi-2
Proceed? [n/Y] y
Key for minion esxi-2 accepted.


6. Check to see if your Proxy Minions are responding:


# salt 'esxi-*' test.ping
esxi-1:
    True
esxi-2:
    True


    Executing Commands

Now that you've configured your Proxy Minions and have them responding successfully to a test.ping, we can start executing commands against the ESXi hosts via Salt.

It's important to understand how this particular proxy works, and there are a couple of important pieces to be aware of in order to start running remote execution and state commands against the ESXi host via a Proxy Minion: the vSphere Execution Module, the ESXi Execution Module, and the ESXi State Module.

    vSphere Execution Module

The salt.modules.vsphere module is a standard Salt execution module that does the bulk of the work for the ESXi Proxy Minion. If you pull up the docs for it you'll see that almost every function in the module takes credentials (username and password) and a target host argument. When credentials and a host aren't passed, Salt runs commands through pyVmomi or ESXCLI against the local machine. If you wanted, you could run functions from this module on any machine where an appropriate version of pyVmomi and ESXCLI are installed, and that machine would reach out over the network and communicate with the ESXi host.

You'll notice that most of the functions in the vSphere module require a host, username, and password. These parameters are contained in the Pillar files and passed through to the function via the proxy process that is already running. You don't need to provide these parameters when you execute the commands. See the Running Remote Execution Commands section below for an example.

    ESXi Execution Module

In order for the Pillar information set up in the  Configuration section above to be passed to the function call in the vSphere Execution Module, the salt.modules.esxi execution module acts as a "shim" between the vSphere execution module functions and the proxy process.

The "shim" takes the authentication credentials specified in the Pillar files and passes them through to the host, username, password, and optional protocol and port options required by the vSphere Execution Module functions.

If the function takes more positional or keyword arguments, you can append them to the call. It's this shim that speaks to the ESXi host through the proxy, arranging for the credentials and hostname to be pulled from the Pillar section for the ESXi Proxy Minion.

Because of the presence of the shim, to look up documentation for what functions you can use to interface with the ESXi host, you'll want to look in salt.modules.vsphere instead of salt.modules.esxi.

    Running Remote Execution Commands

To run commands from the Salt Master, via the ESXi Proxy Minion, against the ESXi host, you use the esxi.cmd <vsphere-function-name> syntax to call functions located in the vSphere Execution Module. Both args and kwargs needed for various vSphere execution module functions must be passed through in a kwarg-type manner. For example:


salt 'esxi-*' esxi.cmd system_info
salt 'esxi-*' esxi.cmd get_service_running service_name='ssh'


    ESXi State Module

The ESXi State Module functions similarly to other state modules. The "shim" provided by the ESXi Execution Module passes the necessary host, username, and password credentials through, so those options don't need to be provided in the state. Other than that, state files are written and executed just like any other Salt state. See the salt.states.esxi documentation for ESXi state functions.

The following state file is an example of how to configure various pieces of an ESXi host, including enabling SSH, uploading an SSH key, configuring coredump network settings, syslog, and NTP, enabling VMotion, resetting a host password, and more.


# /usr/local/etc/salt/states/configure-esxi.sls

configure-host-ssh:
  esxi.ssh_configured:
    - service_running: True
    - ssh_key_file: /usr/local/etc/salt/ssh_keys/my_key.pub
    - service_policy: 'automatic'
    - service_restart: True
    - certificate_verify: True

configure-host-coredump:
  esxi.coredump_configured:
    - enabled: True
    - dump_ip: 'my-coredump-ip.example.com'

configure-host-syslog:
  esxi.syslog_configured:
    - syslog_configs:
        loghost: ssl://localhost:5432,tcp://10.1.0.1:1514
        default-timeout: 120
    - firewall: True
    - reset_service: True
    - reset_syslog_config: True
    - reset_configs: loghost,default-timeout

configure-host-ntp:
  esxi.ntp_configured:
    - service_running: True
    - ntp_servers:
      - 192.174.1.100
      - 192.174.1.200
    - service_policy: 'automatic'
    - service_restart: True

configure-vmotion:
  esxi.vmotion_configured:
    - enabled: True

configure-host-vsan:
  esxi.vsan_configured:
    - enabled: True
    - add_disks_to_vsan: True

configure-host-password:
  esxi.password_present:
    - password: 'new-bad-password'

States are called via the ESXi Proxy Minion just as they would on a regular minion. For example:


salt 'esxi-*' state.sls configure-esxi test=true
salt 'esxi-*' state.sls configure-esxi


    Relevant Salt Files and Resources

o ESXi Proxy Minion
o ESXi Execution Module
o ESXi State Module
o Salt Proxy Minion Docs
o Salt Proxy Minion End-to-End Example
o vSphere Execution Module

    Using Salt at scale

    Using Salt at scale

The focus of this tutorial will be building a Salt infrastructure for handling large numbers of minions. This will include tuning, topology, and best practices.

For how to install the Salt Master please go here:  Installing saltstack

NOTE: This tutorial is intended for large installations. Although these same settings won't hurt smaller installations, the added complexity may not be worth it.

When used with minions, the term 'many' refers to at least a thousand, and 'a few' always means 500.

For simplicity reasons, this tutorial will default to the standard ports used by Salt.

    The Master

The most common problems on the Salt Master are:
1. too many minions authing at once
2. too many minions re-authing at once
3. too many minions re-connecting at once
4. too many minions returning at once
5. too few resources (CPU/HDD)

The first three are all "thundering herd" problems. To mitigate these issues we must configure the minions to back-off appropriately when the Master is under heavy load.

The fourth is caused by masters with insufficient hardware resources in combination with a possible bug in ZeroMQ; at least that is what it looks like so far ( Issue 118651,  Issue 5948,  Mail thread)

To fully understand each problem, it is important to understand how Salt works.

Very briefly, the Salt Master offers two services to the minions.
o a job publisher on port 4505
o an open port 4506 to receive the minions' returns

All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle Master, there will only be connections on port 4505.

    Too many minions authing

When the Minion service is first started up, it will connect to its Master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once.

The connection itself usually isn't the culprit; the more likely cause of master-side issues is the authentication that the Minion must do with the Master. If the Master is too heavily loaded to handle the auth request, it will time out. The Minion will then wait acceptance_wait_time before retrying. If acceptance_wait_time_max is set, the Minion will increase its wait time by acceptance_wait_time on each subsequent retry until reaching acceptance_wait_time_max.
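As a sketch, the back-off described above can be shaped in the minion configuration; the values here are illustrative assumptions, not recommendations:

```yaml
# /usr/local/etc/salt/minion (illustrative values)
acceptance_wait_time: 10        # wait 10 seconds after a timed-out auth attempt
acceptance_wait_time_max: 60    # grow the wait by 10s per retry, capped at 60s
```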

    Too many minions re-authing

This is most likely to happen in the testing phase of a Salt deployment, when all Minion keys have already been accepted, but the framework is being tested and parameters are frequently changed in the Salt Master\(aqs configuration file(s).

The Salt Master generates a new AES key to encrypt its publications at certain events such as a Master restart or the removal of a Minion key. If you are encountering this problem of too many minions re-authing against the Master, you will need to recalibrate your setup to reduce the rate of events like a Master restart or Minion key removal (salt-key -d).

When the Master generates a new AES key, the minions aren\(aqt notified of this but will discover it on the next pub job they receive. When the Minion receives such a job it will then re-auth with the Master. Since Salt does minion-side filtering this means that all the minions will re-auth on the next command published on the master, causing another "thundering herd". This can be avoided by setting


random_reauth_delay: 60


in the minion\(aqs configuration file to a higher value, staggering the re-auth attempts. Increasing this value will of course increase the time it takes until all minions are reachable via Salt commands.

    Too many minions re-connecting

By default the zmq socket will re-connect every 100ms which for some larger installations may be too quick. This will control how quickly the TCP session is re-established, but has no bearing on the auth load.

To tune the minion\(aqs socket reconnect attempts, there are a few values in the sample configuration file (default values shown):


recon_default: 100ms
recon_max: 5000
recon_randomize: True


o recon_default: the default value the socket should use, i.e. 100ms
o recon_max: the max value that the socket should use as a delay before trying to reconnect
o recon_randomize: enables randomization between recon_default and recon_max

To tune these values for an existing environment, a few decisions have to be made.
1. How long can one wait, before the minions should be online and reachable via Salt?
2. How many reconnects can the Master handle without a syn flood?

These questions cannot be answered generally. Their answers depend on the hardware and the administrator\(aqs requirements.

Here is an example scenario with the goal of having all minions reconnect within a 60-second time frame after a Salt Master service restart.


recon_default: 1000
recon_max: 59000
recon_randomize: True


Each Minion will have a randomized reconnect value between \(aqrecon_default\(aq and \(aqrecon_default + recon_max\(aq, which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random-value will be doubled after each attempt to reconnect (ZeroMQ default behavior).

Let\(aqs say the generated random value is 11 seconds (or 11000ms).


reconnect 1: wait 11 seconds
reconnect 2: wait 22 seconds
reconnect 3: wait 33 seconds
reconnect 4: wait 44 seconds
reconnect 5: wait 55 seconds
reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
reconnect 7: wait 11 seconds
reconnect 8: wait 22 seconds
reconnect 9: wait 33 seconds
reconnect x: etc.


With a thousand minions this will mean


1000/60 = ~16


roughly 16 connection attempts a second. These values should be adjusted to match your environment. Keep in mind, though, that the environment may grow over time and that more minions might raise the problem again.
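The wrap-around schedule above can be modeled with a short Python sketch (a hypothetical helper, not part of Salt), assuming the wait grows by the initial randomized value on each attempt and wraps around once it would exceed recon_default + recon_max:

```python
def reconnect_waits(initial_ms, recon_default=1000, recon_max=59000, attempts=9):
    """Model the minion's reconnect schedule: the wait grows by the
    initial randomized value on each attempt and starts over once it
    exceeds recon_default + recon_max (60 seconds in this example)."""
    waits = []
    delay = initial_ms
    for _ in range(attempts):
        if delay > recon_default + recon_max:
            delay = initial_ms  # past the cap: start the cycle over
        waits.append(delay)
        delay += initial_ms
    return waits

# A randomized value of 11 seconds reproduces the schedule above:
schedule = reconnect_waits(11000)  # 11s, 22s, 33s, 44s, 55s, then wrap to 11s
```

With 1000 minions spread over such a 60-second window, that works out to about 1000 / 60, i.e. roughly 16 reconnect attempts per second.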

    Too many minions returning at once

This can also happen during the testing phase, if all minions are addressed at once with


$ salt \(aq*\(aq test.ping


it may cause thousands of minions to try to return their data to the Salt Master\(aqs open port 4506 at once, resulting in a syn flood if the Master can\(aqt handle that many returns simultaneously.

This can be easily avoided with Salt\(aqs batch mode:


$ salt \(aq*\(aq test.ping -b 50


This will address only 50 minions at a time while looping through all targeted minions.

    Too few resources

The Master\(aqs resources always have to match the environment. There is no way to give good advice without knowing the environment the Master is supposed to run in, but here are some general tuning tips for different situations:

    The Master is CPU bound

Salt uses RSA key pairs on both the Master\(aqs and the minions\(aq ends. Both generate 4096-bit key pairs on first start. While the key size for the Master is currently not configurable, the minion\(aqs key size can be changed. For example, to use a 2048-bit key:


keysize: 2048


With thousands of decryptions, the amount of time that can be saved on the Master\(aqs end should not be neglected. See  Pull Request 9235 for a reference on how much influence the key size can have.

Downsizing the Salt Master\(aqs key is not that important, because the minions do not encrypt as many messages as the Master does.

    The Master is disk IO bound

By default, the Master saves every Minion\(aqs return for every job in its job cache. The cache can then be used later to look up results for previous jobs. The default directory for this is:


cachedir: /var/cache/salt


and then in the /proc directory beneath it.

Each job return for every Minion is saved in its own file. Over time this directory can grow quite large, depending on the number of published jobs. The number of files and directories will scale with the number of jobs published and the retention time defined by


keep_jobs: 24



250 jobs/day * 2000 minion returns = 500,000 files a day


If no job history is needed, the job cache can be disabled:


job_cache: False


If the job cache is necessary, there are (currently) two options:
o ext_job_cache: this will have the minions store their return data directly into a returner (not sent through the Master)
o master_job_cache (New in 2014.7.0): this will make the Master store the job data using a returner (instead of the local job cache on disk).
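A master configuration sketch for the second option (the redis returner is used for illustration; any supported returner works, and the host shown is made up):


# Store job data via a returner instead of the local disk cache.
master_job_cache: redis
redis.db: \(aq0\(aq
redis.host: salt-master-cache.example.com
redis.port: 6379
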

TARGETING MINIONS

Targeting minions means specifying which minions should run a command or execute a state, by matching against hostnames, system information, defined groups, or combinations thereof.

For example, the command salt web1 apache.signal restart, which restarts the Apache httpd server, specifies the machine web1 as its target, and the command will be run only on that one minion.

Similarly when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls:


base:
  \(aqweb1\(aq:
    - webserver


There are many ways to target individual minions or groups of minions in Salt:

Matching the minion id

Each minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion\(aqs id configuration setting.

TIP: minion id and minion keys

The minion id is used to generate the minion\(aqs public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host.

    Globbing

The default matching that Salt utilizes is  shell-style globbing around the minion id. This also works for states in the top file.

NOTE: You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked.
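Salt\(aqs glob matching uses shell-style wildcard semantics, similar to Python\(aqs fnmatch module; a minimal sketch of how such patterns behave (the minion ids here are made up):

```python
from fnmatch import fnmatch

minions = ["web1.example.net", "web7.example.net", "mail.example.com", "db1"]

# '*' matches every minion id
assert all(fnmatch(m, "*") for m in minions)

# 'web?.example.net' matches a single character in place of '?'
assert fnmatch("web1.example.net", "web?.example.net")
assert not fnmatch("mail.example.com", "web?.example.net")

# 'web[1-5]' matches web1 through web5 only
assert fnmatch("web3", "web[1-5]")
assert not fnmatch("web7", "web[1-5]")
```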

Match all minions:


salt \(aq*\(aq test.ping


Match all minions in the example.net domain or any of the example domains:


salt \(aq*.example.net\(aq test.ping
salt \(aq*.example.*\(aq test.ping


Match all the webN minions in the example.net domain (web1.example.net, web2.example.net, ..., webN.example.net):


salt \(aqweb?.example.net\(aq test.ping


Match the web1 through web5 minions:


salt \(aqweb[1-5]\(aq test.ping


Match the web1 and web3 minions:


salt \(