NAME
salt - Salt Documentation
SALT PROJECT
Salt is the world's fastest, most intelligent and scalable automation engine.
About Salt
Built on Python, Salt is an event-driven automation tool and framework to deploy, configure, and manage complex IT systems. Use Salt to automate common infrastructure administration tasks and ensure that all the components of your infrastructure are operating in a consistent desired state. Salt has many possible uses, including configuration management, which involves:
* Managing operating system deployment and configuration
* Installing and configuring software applications and services
* Managing servers, virtual machines, containers, databases, web servers, network devices, and more
* Ensuring consistent configuration and preventing configuration drift
Salt is ideal for configuration management because it is pluggable, customizable, and plays well with many existing technologies. Salt enables you to deploy and manage applications that use any tech stack running on nearly any operating system, including different types of network devices such as switches and routers from a variety of vendors. In addition to configuration management, Salt can also:
About our sponsors
Salt powers VMware's VMware Aria Automation Config (previously vRealize Automation SaltStack Config / SaltStack Enterprise), and can be found under the hood of products from Juniper, Cisco, Cloudflare, Nutanix, SUSE, and Tieto, to name a few. The original sponsor of our community, SaltStack, was acquired by VMware in 2020. The Salt Project remains an open source ecosystem that VMware supports and contributes to. VMware ensures the code integrity and quality of the Salt modules by acting as the official sponsor and manager of the Salt project. Many of the core Salt Project contributors are also VMware employees. This team carefully reviews and enhances the Salt modules to ensure speed, quality, and security.
Download and install Salt
Salt is tested and packaged to run on CentOS, Debian, RHEL, Ubuntu, macOS, Windows, and more. Download Salt and get started now. See supported operating systems for more information. To download and install Salt, see:
* The Salt install guide
* Salt Project repository
Technical support
Report bugs or problems using Salt by opening an issue: https://github.com/saltstack/salt/issues
To join our community forum where you can exchange ideas, best practices, discuss technical support questions, and talk to project maintainers, join our Slack workspace: Salt Project Community Slack
Salt Project documentation
Installation instructions, tutorials, in-depth API and module documentation:
Security advisories
Keep an eye on the Salt Project Security Announcements landing page. Salt Project recommends subscribing to the Salt Project Security RSS feed to receive notification when new information is available regarding security announcements. Other channels to receive security announcements include the Salt Community mailing list and the Salt Project Community Slack.
Responsibly reporting security vulnerabilities
When reporting security vulnerabilities for Salt or other SaltStack projects, refer to the SECURITY.md file found in this repository.
Join our community
Salt is built by the Salt Project community, which includes more than 3,000 contributors working in roles just like yours. This well-known and trusted community works together to improve the underlying technology and extend Salt by creating a variety of execution and state modules to accomplish the most common tasks or solve the most important problems that people in your role are likely to face. If you want to help extend Salt or solve a problem with Salt, you can join our community and contribute today. Please be sure to review our Code of Conduct. Also, check out some of our community resources including:
There are lots of ways to get involved in our community. Every month, there are around a dozen opportunities to meet with other contributors and the Salt Core team and collaborate in real time. The best way to keep track is by subscribing to the Salt Project Community Events Calendar on the main https://saltproject.io website. If you have additional questions, email us at saltproject@vmware.com or reach out directly to the Community Manager, Jimmy Chunga, via Slack. We'd be glad to have you join our community!
License
Salt is licensed under the Apache 2.0 license. Please see the LICENSE file for the full text of the Apache license, followed by a full summary of the licensing used by external modules. A complete list of attributions and dependencies can be found here: salt/DEPENDENCIES.md
INTRODUCTION TO SALT
We're not just talking about NaCl.
The 30 second summary
Salt is:
* a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running)
* a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria
It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and not just dozens but hundreds and even thousands of individual servers quickly through a simple and manageable interface.
Simplicity
Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.
Parallel execution
The core functions of Salt:
* enable commands to remote systems to be called in parallel rather than serially
* use a secure and encrypted protocol
* use the smallest and fastest network payloads possible
* provide a simple programming interface
Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.
Builds on proven technology
Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.
Python client interface
In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
Fast, flexible, scalable
The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.
Open
Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.
SALT SYSTEM ARCHITECTURE
Overview
This page provides a high-level overview of the Salt system architecture and its different components.
What is Salt?
Salt is a Python-based open-source remote execution framework used for:
* Configuration management
* Automation
* Provisioning
* Orchestration
The Salt system architecture
The following diagram shows the primary components of the basic Salt architecture. The following sections describe some of the core components of the Salt architecture.
Salt Masters and Salt Minions
Salt uses the master-client model in which a master issues commands to a client and the client executes the command. In the Salt ecosystem, the Salt Master is a server that is running the salt-master service. It issues commands to one or more Salt Minions, which are servers that are running the salt-minion service and that are registered with that particular Salt Master. Another way to describe Salt is as a publisher-subscriber model. The master publishes jobs that need to be executed and Salt Minions subscribe to those jobs. When a specific job applies to that minion, it will execute the job. When a minion finishes executing a job, it sends job return data back to the master. Salt has two ports used by default for the minions to communicate with their master(s). These ports work in concert to receive and deliver data to the Message Bus. Salt's message bus is ZeroMQ, which creates an asynchronous network topology to provide the fastest communication possible.
Targets and grains
The master indicates which minions should execute the job by defining a target. A target is the group of minions, across one or many masters, that a job's Salt command applies to.
NOTE: A master can also be managed like a minion and can be a target if it is running the salt-minion service.
The following is an example of one of the many kinds of commands that a master might issue to a minion. This command indicates that all minions should install the Vim application:
salt -v '*' pkg.install vim
In this case the glob '*' is the target, which indicates that all minions should execute this command. Many other targeting options are available, including targeting a specific minion by its ID or targeting minions by their shared traits or characteristics (called grains in Salt). Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents Salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties. You can also create your own custom grain data. Grain data is relatively static. However, grain data is refreshed when system information changes (such as network settings) or when a new value is assigned to a custom grain.
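For instance, grains can be used on the command line to pick out minions by property. A couple of illustrative commands (the grain values shown are examples; your fleet will report its own):
# Target minions by the "os" grain:
salt -G 'os:Ubuntu' test.version
# Inspect all grains a minion reports:
salt '*' grains.items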
Open event system (event bus)
The event system is used for inter-process communication between the Salt Master and Salt Minions. In the event system:
* Events are seen by both the master and minions.
* Events can be monitored and evaluated by both.
The event bus lays the groundwork for orchestration and real-time monitoring. All minions see jobs and results by subscribing to events published on the event system. Salt uses a pluggable event system with two layers:
* ZeroMQ (0MQ) - The current default socket-level library providing a flexible transport layer.
* Tornado - Full TCP-based transport layer event system.
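A quick way to see the event bus in action is to watch events arrive on the master in real time with the state.event runner:
salt-run state.event pretty=True
Each event prints as a tag plus a JSON payload, which is the same data that reactors (described below) match against.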
One of the greatest strengths of Salt is the speed of execution. The event system's communication bus is more efficient than running a higher-level web service (http). The remote execution system is the component that all components are built upon, allowing for decentralized remote execution to spread load across resources.
Salt states
In addition to remote execution, Salt provides another method for configuring minions by declaring which state a minion should be in, otherwise referred to as Salt states. Salt states make configuration management possible. You can use Salt states to deploy and manage infrastructure with simple YAML files. Using states, you can automate recursive and predictable tasks by queueing jobs for Salt to implement without needing user input. You can also add more complex conditional logic to state files with Jinja. To illustrate the subtle differences between remote execution and configuration management, take the command referenced in the previous section about Targets and grains in which Salt installed the application Vim on all minions:
The state file that verifies Vim is installed might look like the following example:
# File: /usr/local/etc/salt/states/vim_install.sls

install_vim_now:
  pkg.installed:
    - pkgs:
      - vim
To apply this state to a minion, you would use the state.apply module, such as in the following example:
salt '*' state.apply vim_install
This command applies the vim_install state to all minions. Formulas are collections of states that work in harmony to configure a minion or application. For example, one state might trigger another state.
The Top file
It is not practical to manually run each state individually targeting specific minions each time. Some environments have hundreds of state files targeting thousands of minions. Salt offers two features to help with this scaling problem:
* The top file - Maps Salt states to their applicable minions.
* Highstate execution - Runs all Salt states outlined in the top file in a single execution.
The top file maps which states should be applied to different minions in certain environments. The following is an example of a simple top file:
# File: /usr/local/etc/salt/states/top.sls

base:
  '*':
    - all_server_setup
  '01webserver':
    - web_server_setup
In this example, base refers to the Salt environment, which is the default. You can specify more than one environment as needed, such as prod, dev, QA, etc. Groups of minions are specified under the environment, and states are listed for each set of minions. This top file indicates that a state called all_server_setup should be applied to all minions '*' and the state called web_server_setup should be applied to the 01webserver minion. To run the Salt command, you would use the state.highstate function:
salt \* state.highstate
This command applies the top file to the targeted minions.
Salt pillar
Salt's pillar feature takes data defined on the Salt Master and distributes it to minions as needed. Pillar is primarily used to store secrets or other highly sensitive data, such as account credentials, cryptographic keys, or passwords. Pillar is also useful for storing non-secret data that you don't want to place directly in your state files, such as configuration data. Salt pillar brings data into the cluster from the opposite direction as grains. While grains are data generated from the minion, the pillar is data generated from the master. Pillars are organized similarly to states in a Pillar state tree, where top.sls acts to coordinate pillar data to environments and minions privy to the data. Information transferred using pillar has a dictionary generated for the targeted minion and encrypted with that minion's key for secure data transfer. Pillar data is encrypted on a per-minion basis, which makes it useful for storing sensitive data specific to a particular minion.
Beacons and reactors
The beacon system is a monitoring tool that can listen for a variety of system processes on Salt Minions. Beacons can trigger reactors which can then help implement a change or troubleshoot an issue. For example, if a service's response times out, the reactor system can restart the service. Beacons are used for a variety of purposes, including:
* Automated reporting
* Error log delivery
* Microservice monitoring
* User shell activity
* Resource monitoring
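As a minimal sketch, a beacon is defined in the minion config. The example below assumes the stock service beacon and an apache2 service; both names are illustrative:
# Minion config: fire an event whenever the apache2 service changes state
beacons:
  service:
    - services:
        apache2:
          onchangeonly: True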
When coupled with reactors, beacons can create automated pre-written responses to infrastructure and application issues. Reactors expand Salt with automated responses using pre-written remediation states. Reactors can be applied in a variety of scenarios:
* Infrastructure scaling
* Notifying administrators
* Restarting failed applications
* Automatic rollback
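Wiring a reactor to an event is a master-side configuration. A minimal sketch (the event tag pattern and the state file path are illustrative):
# Master config: route matching events to a remediation state
reactor:
  - 'salt/beacon/*/service/':
    - /usr/local/etc/salt/reactor/restart_service.sls
The referenced SLS file then holds whatever remediation logic you want Salt to run when the event fires.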
When both beacons and reactors are used together, you can create unique states customized to your specific needs.
Salt runners and orchestration
Salt runners are convenience applications executed with the salt-run command. Salt runners work similarly to Salt execution modules. However, they execute on the Salt Master instead of the Salt Minions. A Salt runner can be a simple client call or a complex application. Salt provides the ability to orchestrate system administrative tasks throughout the enterprise. Orchestration makes it possible to coordinate the activities of multiple machines from a central place. It has the added advantage of being able to control the sequence of when certain configuration events occur. Orchestration states execute on the master using the state runner module.
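A minimal orchestration sketch, assuming the web_server_setup state from the earlier top file example (the file name and target glob are illustrative):
# File: /usr/local/etc/salt/states/orch/webservers.sls
deploy_webservers:
  salt.state:
    - tgt: 'web*'
    - sls:
      - web_server_setup
You would run it from the master with:
salt-run state.orchestrate orch.webservers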
CONTRIBUTING
So you want to contribute to the Salt project? Excellent! You can help in a number of ways:
If you'd like to update docs or fix an issue, you're going to need the Salt repo. The best way to contribute is using Git.
Environment setup
To hack on Salt or the docs you're going to need to set up your development environment. If you already have a workflow that you're comfortable with, you can use that, but otherwise this is an opinionated guide for setting up your dev environment. Follow these steps and you'll end up with a functioning dev environment and be able to submit your first PR. This guide assumes at least a passing familiarity with Git, a common version control tool used across many open source projects, which is necessary for contributing to Salt. For an introduction to Git, watch Salt Docs Clinic - Git For the True Beginner. Because of its widespread use, there are many resources for learning more about Git. One popular resource is the free online book Learn Git in a Month of Lunches.
pyenv, Virtual Environments, and you
We recommend pyenv, since it allows installing multiple different Python versions, which is important for testing Salt across all the versions of Python that we support.
On Linux
Install pyenv:
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
export PATH="$HOME/.pyenv/bin:$PATH"
git clone https://github.com/pyenv/pyenv-virtualenv.git $(pyenv root)/plugins/pyenv-virtualenv
On Mac
Install pyenv using brew:
brew update
brew install pyenv
brew install pyenv-virtualenv
Now add pyenv to your .bashrc:
echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc
For other shells, see the pyenv instructions. Go ahead and restart your shell. Now you should be able to install a new version of Python:
pyenv install 3.9.18
If that fails, don't panic! You're probably just missing some build dependencies. Check out pyenv common build problems. Now that you've got your version of Python installed, you can create a new virtual environment with this command:
pyenv virtualenv 3.9.18 salt
Then activate it:
pyenv activate salt
Sweet! Now you're ready to clone Salt so you can start hacking away! If you get stuck at any point, check out the resources at the beginning of this guide. IRC and Slack are particularly helpful places to go.
Get the source!
Salt uses the fork and clone workflow for Git contributions. See Using the Fork-and-Branch Git Workflow for how to implement it. But if you just want to hurry and get started you can go ahead and follow these steps: Clones are so shallow. Well, this one is anyway:
git clone --depth=1 --origin salt https://github.com/saltstack/salt.git
This creates a shallow clone of Salt, which should be fast. Most of the time that's all you'll need, and you can start building out other commits as you go. If you really want all 108,300+ commits you can just run git fetch --unshallow. Then go make a sandwich because it's gonna be a while. You're also going to want to head over to GitHub and create your own fork of Salt. Once you've got that set up you can add it as a remote:
git remote add yourname <YOUR SALT REMOTE>
If you use your name to refer to your fork, and salt to refer to the official Salt repo you'll never get upstream or origin confused.
NOTE: Each time you start work on a new issue you should fetch the most recent changes from salt/upstream.
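In practice that looks something like the following (the branch name is illustrative):
git fetch salt
git checkout -b my-issue-branch salt/master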
Set up pre-commit and nox
Here at Salt we use pre-commit and nox to make it easier for contributors to get quick feedback, for quality control, and to increase the chance that your merge request will get reviewed and merged. Nox enables us to run multiple different test configurations, as well as other common tasks. You can think of it as Make with superpowers. Pre-commit does what it sounds like: it configures some Git pre-commit hooks to run black for formatting, isort for keeping our imports sorted, and pylint to catch issues like unused imports, among others. You can easily install them in your virtualenv with:
python -m pip install pre-commit nox
pre-commit install
WARNING: Currently there is an issue with the pip-tools-compile pre-commit hook on Windows. The details around this issue are included here: https://github.com/saltstack/salt/issues/56642. Please ensure you export SKIP=pip-tools-compile to skip pip-tools-compile.
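On an affected Windows setup, that skip looks something like this (the commit message is illustrative):
export SKIP=pip-tools-compile
git commit -m "Fix the thing"   # hooks still run, minus pip-tools-compile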
Now before each commit, it will ensure that your code at least looks right before you open a pull request. And with that step, it's time to start hacking on Salt!
Set up imagemagick
One last prerequisite is to have imagemagick installed, as it is required by Sphinx for generating the HTML documentation.
# On Mac, via homebrew
brew install imagemagick
# Example Linux installation: Debian-based
sudo apt install imagemagick
Salt issues
Create your own
Perhaps you've come to this guide because you found a problem in Salt, and you've diagnosed the cause. Maybe you need some help figuring out the problem. In any case, creating quality bug reports is a great way to contribute to Salt even if you lack the skills, time, or inclination to fix it yourself. If that's the case, head on over to Salt's issue tracker on GitHub. Creating a good report can take a little bit of time - but every minute you invest in making it easier for others to reproduce and understand your issue is time well spent. The faster someone can understand your issue, the faster it will be able to get fixed correctly. The thing that every issue needs goes by many names, but one at least as good as any other is MCVE - Minimum Complete Verifiable Example. In a nutshell:
* Minimum: Use as little code as possible that still produces the same problem.
* Complete: Provide all parts needed to reproduce the problem.
* Verifiable: Test the code you're about to provide to make sure it reproduces the problem.
Slow is smooth, and smooth is fast - it may feel like you're taking a long time to create your issue if you're creating a proper MCVE, but a MCVE eliminates back and forth required to reproduce/verify the issue so someone can actually create a fix.
Pick an issue
If you don't already have an issue in mind, you can search for help wanted issues. If you also search for good first issue then you should be able to find some issues that are good for getting started contributing to Salt. Documentation issues are also good starter issues. When you find an issue that catches your eye (or one of your own), it's a good idea to comment on the issue and mention that you're working on it. Good communication is key to collaboration - so if you don't have time to complete work on the issue, just leaving some information about when you expect to pick things up again is a great idea!
Hacking away
Salt, tests, documentation, and you
Before approving code contributions, Salt requires:
Documentation fixes just require correct documentation.
What if I don't write tests or docs?
If you aren't into writing documentation or tests, we still welcome your contributions! But your PR will be labeled Needs Testcase and Help Wanted until someone can get to writing the tests/documentation. Of course, if you have a desire but just lack the skill we are more than happy to collaborate and help out! There's the documentation working group and the testing working group. We also regularly stream our test clinic live on Twitch every Tuesday afternoon and Thursday morning, Central Time. If you'd like specific help with tests, bring them to the clinic. If no community members need help, you can also just watch tests written in real time.
Documentation
Salt uses both docstrings, as well as normal reStructuredText files in the salt/doc folder for documentation. Sphinx is used to generate the documentation, and does require imagemagick. See Set up imagemagick for more information. Before submitting a documentation PR, it helps to first build the Salt docs locally on your machine and preview them. Local previews help you:
To set up your local environment to preview the core Salt and module documentation:
sudo apt-get update sudo apt-get install -y enchant-2 git gcc imagemagick make zlib1g-dev libc-dev libffi-dev g++ libxml2 libxml2-dev libxslt-dev libcurl4-openssl-dev libssl-dev libgnutls28-dev xz-utils inkscape
rm -rf .nox
pyenv install 3.9.18
pyenv virtualenv 3.9.18 salt-docs
echo 'salt-docs' > .python-version
pyenv exec pip install -U pip setuptools wheel
pyenv exec pip install nox
Since we use nox, you can build your docs and view them in your browser with this one-liner:
python -m nox -e 'docs-html(compress=False, clean=False)'; cd doc/_build/html; python -m webbrowser http://localhost:8000/contents.html; python -m http.server
The first time you build the docs, it will take a while because there are a lot of modules. Maybe you should go grab some dessert if you already finished that sandwich. But once nox and Sphinx are done building the docs, python should launch your default browser with the URL http://localhost:8000/contents.html. Now you can navigate to your docs and ensure your changes exist. If you make changes, you can simply run this:
cd -; python -m nox -e 'docs-html(compress=False, clean=False)'; cd doc/_build/html; python -m http.server
And then refresh your browser to get your updated docs. This one should be quite a bit faster since Sphinx won't need to rebuild everything. Alternatively, you could build the docs on your local machine and then preview the build output. To build the docs locally:
pyenv exec nox -e 'docs-html(compress=False, clean=True)'
The output from this command will put the preview files in: doc > _build > html. If your change is a docs-only change, you can go ahead and commit/push your code and open a PR. You can indicate that it's a docs-only change by adding [Documentation] to the title of your PR. Otherwise, you'll want to write some tests and code.
Running development Salt
Note: If you run into any issues in this section, check the Troubleshooting section. If you're going to hack on the Salt codebase you're going to want to be able to run Salt locally. The first thing you need to do is install Salt as an editable pip install:
python -m pip install -e .
This will let you make changes to Salt without having to re-install it. After all of the dependencies and Salt are installed, it's time to set up the config for development. Typically Salt runs as root, but you can specify which user to run as. To configure that, just copy the master and minion configs. We have .gitignore set up to ignore the local/ directory, so we can put all of our personal files there.
mkdir -p local/usr/local/etc/salt/
Create a master config file as local/usr/local/etc/salt/master:
cat <<EOF >local/usr/local/etc/salt/master
user: $(whoami)
root_dir: $PWD/local/
publish_port: 55505
ret_port: 55506
EOF
And a minion config as local/usr/local/etc/salt/minion:
cat <<EOF >local/usr/local/etc/salt/minion
user: $(whoami)
root_dir: $PWD/local/
master: localhost
id: saltdev
master_port: 55506
EOF
Now you can start your Salt master and minion, specifying the config dir.
salt-master --config-dir=local/usr/local/etc/salt/ --log-level=debug --daemon
salt-minion --config-dir=local/usr/local/etc/salt/ --log-level=debug --daemon
Now you should be able to accept the minion key:
salt-key -c local/usr/local/etc/salt -Ay
And check that your master/minion are communicating:
salt -c local/usr/local/etc/salt \* test.version
Rather than running test.version from your master, you can run it from the minion instead:
salt-call -c local/usr/local/etc/salt test.version
Note that you're running salt-call instead of salt, and you're not specifying the minion (\*), but if you're running the dev version then you still will need to pass in the config dir. Now that you've got Salt running, you can hack away on the Salt codebase!
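As a quick sanity check after editing an execution module, you can also run it masterless; test.echo here is just an illustrative function:
salt-call -c local/usr/local/etc/salt --local test.echo 'hello from my checkout'
The --local flag skips the master entirely and runs the function straight from your working tree.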
If you need to restart Salt for some reason, if you've made changes and they don't appear to be reflected, this is one option:
kill -INT $(pgrep salt-master)
kill -INT $(pgrep salt-minion)
If you'd rather not use kill, you can have a couple of terminals open with your salt virtualenv activated and omit the --daemon argument. Salt will run in the foreground, so you can just use ctrl+c to quit.
Test first? Test last? Test meaningfully!
You can write tests first or tests last, as long as your tests are meaningful and complete! Typically the best tests for Salt are going to be unit tests. Testing is a whole topic on its own, but you may also want to write functional or integration tests. You'll find those in the tests/ directory. When you're thinking about tests to write, the most important thing to keep in mind is, "What, exactly, am I testing?" When a test fails, you should know:
If you can't answer those questions then you might need to refactor your tests. When you're running tests locally, you should make sure that if you remove your code changes your tests are failing. If your tests aren't failing when you haven't yet made changes, then it's possible that you're testing the wrong thing. But whether you adhere to TDD/BDD, or you write your code first and your tests last, ensure that your tests are meaningful.
Running tests
As previously mentioned, we use nox, and that's how we run our tests. You should have it installed by this point but if not you can install it with this:
python -m pip install nox
Now you can run your tests:
python -m nox -e "test-3(coverage=False)" -- tests/unit/cli/test_batch.py
It's a good idea to install espeak or use say on Mac if you're running some long-running tests. You can do something like this:
python -m nox -e "test-3(coverage=False)" -- tests/unit/cli/test_batch.py; espeak "Tests done, woohoo!"
That way you don't have to keep monitoring the actual test run. For example, to run only the core tests:
python -m nox -e "test-3(coverage=False)" -- --core-tests
You can enable or disable test groups locally by passing their respective flags:
* --no-fast-tests - Tests that are ~10s or faster.
* --slow-tests - Tests that are ~10s or slower.
* --core-tests - Tests of any speed that test the root parts of salt.
* --flaky-jail - Tests that need to be temporarily skipped.
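A combined local run under those flags might look like this (the test path is illustrative):
python -m nox -e "test-3(coverage=False)" -- --slow-tests tests/pytests/unit/
Check the repository's noxfile and test documentation for the exact flags supported by your checkout, since these options have shifted between releases.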
In your PR, you can enable or disable test groups by setting a label. All fast, slow, and core tests specified in the change file will always run.
Changelog and commit!
When you write your commit message you should use imperative style. Do this:
Add frobnosticate capability
Don't do this:
Added frobnosticate capability
But that advice is backwards for the changelog. We follow the keepachangelog approach for our changelog, and use towncrier to generate it for each release. As a contributor, all that means is that you need to add a file to the salt/changelog directory, using the <issue #>.<type> format. For instance, if you fixed issue 123, you would do:
echo "Made sys.doc inform when no minions return" > changelog/123.fixed
And that's all that would go into your file. When it comes to your commit message, it's usually a good idea to add other information, such as
This will also help you out, because when you go to create the PR it will automatically insert the body of your commit messages. See the changelog docs for more information.
Pull request time!
Once you've done all your dev work and tested locally, you should check out our PR guidelines. After you read that page, it's time to open a new PR. Fill out the PR template - you should have updated or created any necessary docs, and written tests if you're providing a code change. When you submit your PR, we have a suite of tests that will run across different platforms to help ensure that no known bugs were introduced.
Now what?
You've made your changes, added documentation, opened your PR, and have passing tests… now what? When can you expect your code to be merged? When you open your PR, a reviewer will get automatically assigned. If your PR is submitted during the week you should be able to expect some kind of communication within that business day. If your tests are passing and we're not in a code freeze, ideally your code will be merged that week or month. If you haven't heard from your assigned reviewer, ping them on GitHub, IRC, or Community Slack. It's likely that your reviewer will leave some comments that need addressing - it may be a style change, or you forgot a changelog entry, or need to update the docs. Maybe it's something more fundamental - perhaps you encountered the rare case where your PR has a much larger scope than initially assumed. Whatever the case, simply make the requested changes (or discuss why the requests are incorrect), and push up your new commits. If your PR is open for a significant period of time it may be worth rebasing your changes on the most recent changes to Salt. If you need help, the previously linked Git resources will be valuable. But if, for whatever reason, you're not interested in driving your PR to completion then just note that in your PR. Something like, "I'm not interested in writing docs/tests, I just wanted to provide this fix - someone else will need to complete this PR." If you do that then we'll add a "Help Wanted" label and someone will be able to pick up the PR, make the required changes, and it can eventually get merged in. In any case, now that you have a PR open, congrats! You're a Salt developer! You rock!
Troubleshooting
zmq.core.error.ZMQError
Once the minion starts, you may see an error like the following:
zmq.core.error.ZMQError: ipc path
"/path/to/your/virtualenv/var/run/salt/minion/minion_event_7824dcbcfd7a8f6755939af70b96249f_pub.ipc"
is longer than 107 characters (sizeof(sockaddr_un.sun_path)).
This means that the path to the socket the minion is using is too long. This is a system limitation, so the only workaround is to reduce the length of this path. This can be done in a couple different ways:
* Create your virtualenv in a path that is short enough.
* Edit the sock_dir minion config variable and reduce its length. Remember that this path is relative to the value you set with root_dir.
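For the second option, a sketch of the minion config change (the path is illustrative):
# Minion config: use a shorter socket directory
sock_dir: /tmp/salt-dev-sock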
NOTE: The socket path is limited to 107 characters on Solaris and Linux, and 103 characters on BSD-based systems.
No permissions to access ...
If you forget to pass your config path to any of the salt* commands, you might see:
No permissions to access "/var/log/salt/master", are you running as the correct user?
Just pass -c local/usr/local/etc/salt (or whatever you named it).
File descriptor limit
You might need to raise your file descriptor limit. You can check it with:
ulimit -n
If the value is less than 3072, you should increase it with:
ulimit -n 3072
# For c-shell:
limit descriptors 3072
Pygit2 or other dependency install fails
You may see some failure messages when installing requirements. You can directly access your nox environment and possibly install pygit2 (or another dependency) that way. When you run nox, you'll see a message like this:
nox > Re-using existing virtual environment at .nox/pytest-parametrized-3-crypto-none-transport-zeromq-coverage-false.
For this, you would be able to install with:
.nox/pytest-parametrized-3-crypto-none-transport-zeromq-coverage-false/bin/python -m pip install pygit2
SALT PROJECT MAINTENANCE POLICIES
This document explains the current project maintenance policies. The goal of these policies is to reduce the maintenance burden on core maintainers of the Salt Project and to encourage more active engagement from the Salt community.
Issue management
Issues for the Salt Project are critical to Salt community communication and to finding and resolving issues in the Salt Project. As such, the issue tracker needs to be kept clean and current, limited to the currently supported releases of Salt. It also needs to be free of feature requests, arguments, and trolling. We have decided to update our issue policy to be similar to Red Hat community project policies. Community members who repeatedly violate these policies are subject to bans.
Pull request management
The Salt pull request (PR) queue has been a challenge to maintain for the entire life of the project. This is in large part due to the incredibly active and vibrant community around Salt. Unfortunately, it has proven to be too much for the core team and the greater Salt community to manage. As such, we deem it necessary to make fundamental changes to how we manage the PR queue:
Salt Enhancement Proposals (SEP) process
A message from Thomas Hatch, creator of Salt: In 2019, we decided to create a community process to discuss and review Salt Enhancement Proposals (SEPs). Unfortunately, I feel that this process has not proven to be an effective way to solve the core issues around Salt enhancements. Overall, the Salt enhancement process has proven itself to be more of a burden than an accelerant to Salt stability, security, and progress. As such, I feel that the current optimal course of action is to shut the process down. Instead of the Salt Enhancement Proposal process, we will add a time in the Open Hour for people to present ideas and concepts to better understand if they are worth their effort to develop. Extensive documentation around more intrusive or involved enhancements should be included in pull requests (PRs). Conversations about enhancements can also be held in the Discussions tab in GitHub. By migrating the conversation into the PR process, we ensure that we are only reviewing viable proposals instead of being burdened with requests that the core team is expected to fulfill. Effective immediately (January 2024), we are archiving and freezing the SEP repo.
INSTALLATION
See the Salt Install Guide for the current installation instructions.
CONFIGURING SALT
This section explains how to configure user access, view and store job results, secure and troubleshoot, and how to perform many other administrative tasks.
Configuring the Salt Master
The Salt system is amazingly simple and easy to configure; the two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
SEE ALSO: Example master configuration file.
The configuration file for the salt-master is located at /usr/local/etc/salt/master by default. Atomic included configuration files can be placed in /usr/local/etc/salt/master.d/*.conf. Warning: files with suffixes other than .conf will not be included. A notable exception is FreeBSD, where the configuration file is located at /usr/local/etc/salt. The available options are as follows:
Primary Master Configuration
interface
Default: 0.0.0.0 (all interfaces)
The local interface to bind to, must be an IP address.
interface: 192.168.0.1
ipv6
Default: False
Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: interface: '::').
ipv6: True
publish_port
Default: 4505
The network port to set up the publication interface.
publish_port: 4505
master_id
Default: None
The id to be passed in the publish job to minions. This is used for MultiSyndics to return the job to the requesting master.
NOTE: This must be the same string as the syndic is configured with.
master_id: MasterOfMaster
user
Default: root
The user to run the Salt processes
user: root
NOTE: Starting with version 3006.0, Salt's official packages ship with a default configuration which runs the Master as a non-privileged user. The Master's configuration file has the user option set to user: salt. Unless you are absolutely sure you want to run salt as some other user, take care to preserve this setting in your Master configuration file.
enable_ssh_minions
Default: False
Tell the master to also use salt-ssh when running commands against minions.
enable_ssh_minions: True
NOTE: Cross-minion communication is still not possible. The Salt mine and publish.publish do not work between minion types.
ret_port
Default: 4506
The port used by the return server, this is the server used by Salt to receive execution returns and command executions.
ret_port: 4506
pidfile
Default: /var/run/salt-master.pid
Specify the location of the master pidfile.
pidfile: /var/run/salt-master.pid
root_dir
Default: /
The system root directory to operate from, change this to make Salt run from an alternative root.
root_dir: /
NOTE: This directory is prepended to the following options: pki_dir, cachedir, sock_dir, log_file, autosign_file, autoreject_file, pidfile, autosign_grains_dir.
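For example, under that prepending rule (the paths are illustrative):
root_dir: /srv/salt-dev
# pidfile: /var/run/salt-master.pid is then created at
# /srv/salt-dev/var/run/salt-master.pid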
conf_file
Default: /usr/local/etc/salt/master
The path to the master's configuration file.
conf_file: /usr/local/etc/salt/master
pki_dir
Default: <LIB_STATE_DIR>/pki/master
The directory to store the pki authentication keys. <LIB_STATE_DIR> is the pre-configured variable state directory set during installation via --salt-lib-state-dir. It defaults to /usr/local/etc/salt. Systems following the Filesystem Hierarchy Standard (FHS) might set it to /var/lib/salt.
pki_dir: /usr/local/etc/salt/pki/master
extension_modules
Changed in version 2016.3.0: The default location for this directory has been moved. Prior to this version, the location was a directory named extmods in the Salt cachedir (on most platforms, /var/cache/salt/extmods). It has been moved into the master cachedir (on most platforms, /var/cache/salt/master/extmods).
Directory where custom modules are synced to. This directory can contain subdirectories for each of Salt's module types such as runners, output, wheel, modules, states, returners, engines, utils, etc. This path is appended to root_dir. Note, any directories or files not found in the module_dirs location will be removed from the extension_modules path.
extension_modules: /root/salt_extmods
extmod_whitelist/extmod_blacklist
New in version 2017.7.0.
By using this dictionary, the modules that are synced to the master's extmod cache using saltutil.sync_* can be limited. If nothing is set to a specific type, then all modules are accepted. To block all modules of a specific type, whitelist an empty list.
extmod_whitelist:
  modules:
    - custom_module
extmod_blacklist:
  modules:
    - specific_module
module_dirs
Default: []
Like extension_modules, but a list of extra directories to search for Salt modules.
module_dirs:
cachedir
Default: /var/cache/salt/master
The location used to store cache information, particularly the job information for executed salt commands. This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt/master
verify_env
Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
keep_jobs
Default: 24
Set the number of hours to keep old job information. Note that setting this option to 0 disables the cache cleaner.
Deprecated since version 3006: Replaced by keep_jobs_seconds.
keep_jobs: 24
keep_jobs_seconds
Default: 86400
Set the number of seconds to keep old job information. Note that setting this option to 0 disables the cache cleaner.
keep_jobs_seconds: 86400
gather_job_timeout
New in version 2014.7.0.
Default: 10
The number of seconds to wait when the client is requesting information about running jobs.
gather_job_timeout: 10
timeout
Default: 5
Set the default timeout for the salt command and api.
loop_interval
Default: 60
The loop_interval option controls the seconds for the master's Maintenance process check cycle. This process updates file server backends, cleans the job cache and executes the scheduler.
maintenance_interval
New in version 3006.0.
Default: 3600
Defines how often to restart the master's Maintenance process.
maintenance_interval: 9600
output
Default: nested
Set the default outputter used by the salt command.
outputter_dirs
Default: []
A list of additional directories to search for salt outputters in.
outputter_dirs: []
output_file
Default: None
Set the default output file used by the salt command. Default is to output to the CLI and not to a file. Functions the same way as the "--out-file" CLI option, only sets this to a single file for all salt commands.
output_file: /path/output/file
show_timeout
Default: True
Tell the client to show minions that have timed out.
show_timeout: True
show_jid
Default: False
Tell the client to display the jid when a job is published.
show_jid: False
color
Default: True
By default output is colored, to disable colored output set the color value to False.
color: False
color_theme
Default: ""
Specifies a path to the color theme to use for colored command line output.
color_theme: /usr/local/etc/salt/color_theme
cli_summary
Default: False
When set to True, displays a summary of the number of minions targeted, the number of minions returned, and the number of minions that did not return.
cli_summary: False
sock_dir
Default: /var/run/salt/master
Set the location to use for creating Unix sockets for master process communication.
sock_dir: /var/run/salt/master
enable_gpu_grains
Default: False
Enable GPU hardware data for your master. Be aware that the master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master.
enable_gpu_grains: True
skip_grains
Default: False
MasterMinions should omit grains. A MasterMinion is "a minion function object for generic use on the master" that omits pillar. A RunnerClient creates a MasterMinion omitting states and renderer. Setting to True can improve master performance.
skip_grains: True
job_cache
Default: True
The master maintains a temporary job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions).
Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system or a tmpfs is mounted to the jobs dir.
job_cache: True
NOTE: Setting the job_cache to False will not cache minion returns, but the JID directory for each job is still created. The creation of the JID directories is necessary because Salt uses those directories to check for JID collisions. By setting this option to False, the job cache directory, which is /var/cache/salt/master/jobs/ by default, will be smaller, but the JID directories will still be present.
Note that the keep_jobs_seconds option can be set to a lower value, such as 3600, to limit the number of seconds jobs are stored in the job cache. (The default is 86400 seconds.) Please see the Managing the Job Cache documentation for more information.
minion_data_cache
Default: True
The minion data cache is a cache of information about the minions stored on the master, this information is primarily the pillar, grains and mine data. The data is cached via the cache subsystem in the Master cachedir under the name of the minion or in a supported database. The data is used to predetermine what minions are expected to reply from executions.
minion_data_cache: True
cache
Default: localfs
Cache subsystem module to use for minion data cache.
cache: consul
memcache_expire_seconds
Default: 0
Memcache is an additional cache layer that keeps a limited amount of data fetched from the minion data cache for a limited period of time in memory, which makes cache operations faster. It doesn't make much sense for the localfs cache driver but helps for more complex drivers like consul. This option sets the memcache items' expiration time. By default it is set to 0, which disables the memcache.
memcache_expire_seconds: 30
memcache_max_items
Default: 1024
Set the memcache limit in items that are bank-key pairs. E.g. the list of minion_0/data, minion_0/mine, minion_1/data contains 3 items. This value depends on the count of minions usually targeted in your environment. The best value can be found by analyzing the cache log with memcache_debug enabled.
memcache_max_items: 1024
memcache_full_cleanup
Default: False
If cache storage gets full, i.e. the item count exceeds the memcache_max_items value, memcache cleans up its storage. If this option is set to False, memcache removes only the single oldest value from its storage. If it is set to True, memcache removes all the expired items and also removes the oldest one if there are no expired items.
memcache_full_cleanup: True
memcache_debug
Default: False
Enable collecting the memcache stats and log them at debug log level. If enabled, memcache collects information about how many fetch calls have been made and how many of them have been hit by memcache. It also outputs the rate value, which is the result of dividing the first two values. This should help to choose the right values for the expiration time and the cache size.
memcache_debug: True
ext_job_cache
Default: ''
Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.
ext_job_cache: redis
event_return
New in version 2015.5.0.
Default: ''
Specify the returner(s) to use to log events. Each returner may have installation and configuration requirements. Read the returner's documentation.
NOTE: Not all returners support event returns. Verify that a returner has an event_return() function before configuring this option with a returner.
event_return:
  - syslog
  - splunk
event_return_queue
New in version 2015.5.0.
Default: 0
On busy systems, enabling event_returns can cause a considerable load on the storage system for returners. Events can be queued on the master and stored in a batched fashion using a single transaction for multiple events. By default, events are not queued.
event_return_queue: 0
event_return_whitelist
New in version 2015.5.0.
Default: []
Only return events matching tags in a whitelist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_whitelist:
  - salt/master/a_tag
  - salt/run/*/ret
event_return_blacklist
New in version 2015.5.0.
Default: []
Store all event returns _except_ the tags in a blacklist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_blacklist:
  - salt/master/not_this_tag
  - salt/wheel/*/ret
max_event_size
New in version 2014.7.0.
Default: 1048576
Passing very large events can cause the minion to consume large amounts of memory. This value tunes the maximum size of a message allowed onto the master event bus. The value is expressed in bytes.
max_event_size: 1048576
master_job_cache
New in version 2014.7.0.
Default: local_cache
Specify the returner to use for the job cache. The job cache will only be interacted with from the salt master and therefore does not need to be accessible from the minions.
master_job_cache: redis
job_cache_store_endtime
New in version 2015.8.0.
Default: False
Specify whether the Salt Master should store end times for jobs as returns come in.
job_cache_store_endtime: False
enforce_mine_cache
Default: False
By default, disabling the minion_data_cache stops the mine from working, since the mine is based on cached data. Enabling this option explicitly enables caching for the mine system only.
enforce_mine_cache: False
max_minions
Default: 0
The maximum number of minion connections allowed by the master. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of 0 means unlimited connections. Please note that this can slow down the authentication process a bit in large setups.
max_minions: 100
con_cache
Default: False
If max_minions is used in large installations, the master might experience high-load situations because of having to check the number of connected minions for every authentication. This cache provides the minion-ids of all connected minions to all MWorker-processes and greatly improves the performance of max_minions.
con_cache: True
presence_events
Default: False
Causes the master to periodically look for actively connected minions. Presence events are fired on the event bus on a regular interval with a list of connected minions, as well as events with lists of newly connected or disconnected minions. This is a master-only operation that does not send executions to minions.
presence_events: False
detect_remote_minions
Default: False
When checking the minions connected to a master, also include the master's connections to minions on the port specified in the setting remote_minions_port. This is particularly useful when checking if the master is connected to any Heist-Salt minions. If this setting is set to True, the master will check all connections on port 22 by default unless a user also configures a different port with the setting remote_minions_port. Changing this setting will check the remote minions the master is connected to when using presence events, the manage runner, and any other parts of the code that call the connected_ids method to check the status of connected minions.
detect_remote_minions: True
remote_minions_port
Default: 22
The port to use when checking for remote minions when detect_remote_minions is set to True.
remote_minions_port: 2222
ping_on_rotate
New in version 2014.7.0.
Default: False
By default, the master AES key rotates every 24 hours. The next command following a key rotation will trigger a key refresh from the minion which may result in minions which do not respond to the first command after a key refresh. To tell the master to ping all minions immediately after an AES key refresh, set ping_on_rotate to True. This should mitigate the issue where a minion does not appear to initially respond after a key is rotated. Note that enabling this may cause high load on the master immediately after the key rotation event as minions reconnect. Consider this carefully if this salt master is managing a large number of minions. If disabled, it is recommended to handle this event by listening for the aes_key_rotate event with the key tag and acting appropriately.
ping_on_rotate: False
transport
Default: zeromq
Changes the underlying transport layer. ZeroMQ is the recommended transport while additional transport layers are under development. Supported values are zeromq and tcp (experimental). This setting has a significant impact on performance and should not be changed unless you know what you are doing!
transport: zeromq
transport_opts
Default: {}
(experimental) Starts multiple transports and overrides options for each transport with the provided dictionary. This setting has a significant impact on performance and should not be changed unless you know what you are doing! The following example shows how to start a TCP transport alongside a ZMQ transport.
transport_opts:
  tcp:
    publish_port: 4605
    ret_port: 4606
  zeromq: []
master_stats
Default: False
Turning on the master stats enables runtime throughput and statistics events to be fired from the master event bus. These events will report on what functions have been run on the master and how long these runs have, on average, taken over a given period of time.
master_stats_event_iter
Default: 60
The time in seconds to fire master_stats events. This will only fire in conjunction with receiving a request to the master; idle masters will not fire these events.
sock_pool_size
Default: 1
To avoid blocking while writing data to a socket, we support a socket pool for Salt applications. For example, a job with a large number of target hosts can cause long blocking waits. The option is used by the ZMQ and TCP transports; the other transport methods don't need the socket pool by definition. For most Salt tools, including the CLI, a single socket pool is enough. On the other hand, it is highly recommended to set the size of the socket pool larger than 1 for other Salt applications, especially Salt API, which must write data to sockets concurrently.
sock_pool_size: 15
ipc_mode
Default: ipc
The ipc strategy. (i.e., sockets versus tcp, etc.) Windows platforms lack POSIX IPC and must rely on TCP based inter-process communications. ipc_mode is set to tcp by default on Windows.
ipc_mode: ipc
ipc_write_buffer
Default: 0
The maximum size of a message sent via the IPC transport module can be limited dynamically or by sharing an integer value lower than the total memory size. When the value dynamic is set, salt will use 2.5% of the total memory as the ipc_write_buffer value (rounded to an integer). A value of 0 disables this option.
ipc_write_buffer: 10485760
tcp_master_pub_port
Default: 4512
The TCP port on which events for the master should be published if ipc_mode is TCP.
tcp_master_pub_port: 4512
tcp_master_pull_port
Default: 4513
The TCP port on which events for the master should be pulled if ipc_mode is TCP.
tcp_master_pull_port: 4513
tcp_master_publish_pull
Default: 4514
The TCP port on which events for the master should be pulled from and then republished onto the event bus on the master.
tcp_master_publish_pull: 4514
tcp_master_workers
Default: 4515
The TCP port for mworkers to connect to on the master.
tcp_master_workers: 4515
auth_events
New in version 2017.7.3.
Default: True
Determines whether the master will fire authentication events. Authentication events are fired when a minion performs an authentication check with the master.
auth_events: True
minion_data_cache_events
New in version 2017.7.3.
Default: True
Determines whether the master will fire minion data cache events. Minion data cache events are fired when a minion requests a minion data cache refresh.
minion_data_cache_events: True
http_connect_timeout
New in version 2019.2.0.
Default: 20
HTTP connection timeout in seconds. Applied when fetching files using the tornado back-end. Should be greater than the overall download time.
http_connect_timeout: 20
http_request_timeout
New in version 2015.8.0.
Default: 3600
HTTP request timeout in seconds. Applied when fetching files using the tornado back-end. Should be greater than the overall download time.
http_request_timeout: 3600
use_yamlloader_old
New in version 2019.2.1.
Default: False
Use the pre-2019.2 YAML renderer. Uses legacy YAML rendering to support some legacy inline data structures. See the 2019.2.1 release notes for more details.
use_yamlloader_old: False
req_server_niceness
New in version 3001.
Default: None
Process priority level of the ReqServer subprocess of the master. Supported on POSIX platforms only.
req_server_niceness: 9
pub_server_niceness
New in version 3001.
Default: None
Process priority level of the PubServer subprocess of the master. Supported on POSIX platforms only.
pub_server_niceness: 9
fileserver_update_niceness
New in version 3001.
Default: None
Process priority level of the FileServerUpdate subprocess of the master. Supported on POSIX platforms only.
fileserver_update_niceness: 9
maintenance_niceness
New in version 3001.
Default: None
Process priority level of the Maintenance subprocess of the master. Supported on POSIX platforms only.
maintenance_niceness: 9
mworker_niceness
New in version 3001.
Default: None
Process priority level of the MWorker subprocess of the master. Supported on POSIX platforms only.
mworker_niceness: 9
mworker_queue_niceness
New in version 3001.
Default: None
Process priority level of the MWorkerQueue subprocess of the master. Supported on POSIX platforms only.
mworker_queue_niceness: 9
event_return_niceness
New in version 3001.
Default: None
Process priority level of the EventReturn subprocess of the master. Supported on POSIX platforms only.
event_return_niceness: 9
event_publisher_niceness
New in version 3001.
Default: None
Process priority level of the EventPublisher subprocess of the master. Supported on POSIX platforms only.
event_publisher_niceness: 9
reactor_niceness
New in version 3001.
Default: None
Process priority level of the Reactor subprocess of the master. Supported on POSIX platforms only.
reactor_niceness: 9
Salt-SSH Configuration
roster
Default: flat
Define the default salt-ssh roster module to use.
roster: cache
roster_defaults
New in version 2017.7.0.
Default settings which will be inherited by all rosters. roster_defaults: roster_fileDefault: /usr/local/etc/salt/roster Pass in an alternative location for the salt-ssh flat roster file. roster_file: /root/roster rostersDefault: None Define locations for flat roster files so they can be chosen when using Salt API. An administrator can place roster files into these locations. Then, when calling Salt API, the roster_file parameter should contain a relative path to these locations. That is, roster_file=/foo/roster will be resolved as /usr/local/etc/salt/roster.d/foo/roster etc. This feature prevents passing insecure custom rosters through the Salt API. rosters: ssh_passwdDefault: '' The ssh password to log in with. ssh_passwd: '' ssh_priv_passwdDefault: '' Passphrase for the ssh private key file. ssh_priv_passwd: '' ssh_portDefault: 22 The target system's ssh port number. ssh_port: 22 ssh_scan_portsDefault: 22 Comma-separated list of ports to scan. ssh_scan_ports: 22 ssh_scan_timeoutDefault: 0.01 Scanning socket timeout for salt-ssh. ssh_scan_timeout: 0.01 ssh_sudoDefault: False Boolean to run commands via sudo. ssh_sudo: False ssh_timeoutDefault: 60 Number of seconds to wait for a response when establishing an SSH connection. ssh_timeout: 60 ssh_userDefault: root The user to log in as. ssh_user: root ssh_log_fileNew in version 2016.3.5. Default: /var/log/salt/ssh Specify the log file of the salt-ssh command. ssh_log_file: /var/log/salt/ssh ssh_minion_optsDefault: None Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls. The local minion config is not used for salt-ssh. Can be overridden on a per-minion basis in the roster (minion_opts). ssh_minion_opts: ssh_use_home_keyDefault: False Set this to True to default to using ~/.ssh/id_rsa for salt-ssh authentication with minions. ssh_use_home_key: False ssh_identities_onlyDefault: False Set this to True to default salt-ssh to run with -o IdentitiesOnly=yes. This option is intended for situations where the ssh-agent offers many different identities and allows ssh to ignore those identities and use the only one specified in options. ssh_identities_only: False ssh_list_nodegroupsDefault: {} List-only nodegroups for salt-ssh. Each group must be formed as either a comma-separated list, or a YAML list. This option is useful to group minions into easy-to-target groups when using salt-ssh. These groups can then be targeted with the normal -N argument to salt-ssh. ssh_list_nodegroups: ssh_run_pre_flightDefault: False Run the ssh_pre_flight script defined in the salt-ssh roster. By default the script will only run when the thin dir does not exist on the targeted minion. Setting this to True will force the script to run regardless of whether the thin dir already exists. thin_extra_modsDefault: None List of additional modules that need to be included in the Salt Thin. Pass a list of importable Python modules that are typically located in the site-packages Python directory, so they will also always be included in the Salt Thin once it is generated. min_extra_modsDefault: None Identical to thin_extra_mods, but applied to the Salt Minimal. Master Security Settingsopen_modeDefault: False Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication. This will clean up the pki keys received from the minions. Open mode should not be turned on for general use.
Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode, set this value to True. open_mode: False auto_acceptDefault: False Enable auto_accept. This setting will automatically accept all incoming public keys from minions. auto_accept: False keysizeDefault: 2048 The size of key that should be generated when creating new keys. keysize: 2048 autosign_timeoutNew in version 2014.7.0. Default: 120 Time in minutes that an incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method of auto-accepting minions can be safer than an autosign_file because the keyid record can expire and is limited to being an exact name match. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id. autosign_fileDefault: not defined If the autosign_file is specified, incoming keys specified in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full-string regex matching. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id. Changed in version 2018.3.0: For security reasons the file must be read-only except for its owner. If permissive_pki_access is True the owning group can also have write access, but if Salt is running as root it must be a member of that group. A less strict requirement also existed in previous versions. autoreject_fileNew in version 2014.1.0. Default: not defined Works like autosign_file, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the autosign_file and the auto_accept setting. autosign_grains_dirNew in version 2018.3.0. Default: not defined If the autosign_grains_dir is specified, incoming keys from minions with grain values that match those defined in files in the autosign_grains_dir will be accepted automatically. Grain values that should be accepted automatically can be defined by creating a file named like the corresponding grain in the autosign_grains_dir and writing the values into that file, one value per line. Lines starting with a # will be ignored. The minion must be configured to send the corresponding grains on authentication. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion. Please see the Autoaccept Minions from Grains documentation for more information. autosign_grains_dir: /usr/local/etc/salt/autosign_grains permissive_pki_accessDefault: False Enable permissive access to the salt keys. This allows you to run the master or minion as root, but have a non-root group be given access to your pki_dir. To make the access explicit, root must belong to the group you've given access to. This is potentially quite insecure. If an autosign_file is specified, enabling permissive_pki_access will allow group access to that specific file. permissive_pki_access: False publisher_aclDefault: {} Enable user accounts on the master to execute specific modules. These modules can be expressed as regular expressions. publisher_acl: publisher_acl_blacklistDefault: {} Blacklist users or modules. This example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the "cmd" module.
This is completely disabled by default. publisher_acl_blacklist: sudo_aclDefault: False Enforce publisher_acl and publisher_acl_blacklist when users have sudo access to the salt command. sudo_acl: False external_authDefault: {} The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system. external_auth: token_expireDefault: 43200 Time (in seconds) for a newly generated token to live. Default: 12 hours token_expire: 43200 token_expire_user_overrideDefault: False Allow eauth users to specify the expiry time of the tokens they generate. A boolean applies to all users, or a dictionary of whitelisted eauth backends and usernames may be given: token_expire_user_override: keep_acl_in_tokenDefault: False Set to True to enable keeping the calculated user's auth list in the token file. This is disabled by default and the auth list is calculated or requested from the eauth driver each time. Note: keep_acl_in_token will be forced to True when using external authentication for the REST API (rest is present under external_auth). This is because the REST API does not store the password, and can therefore not retroactively fetch the ACL, so the ACL must be stored in the token. keep_acl_in_token: False eauth_acl_moduleDefault: '' Auth subsystem module to use to get the authorized access list for a user. By default it's the same module used for external authentication. eauth_acl_module: django file_recvDefault: False Allow minions to push files to the master. This is disabled by default, for security purposes. file_recv: False file_recv_max_sizeNew in version 2014.7.0. Default: 100 Set a hard limit on the size of the files that can be pushed to the master. It will be interpreted as megabytes. file_recv_max_size: 100 master_sign_pubkeyDefault: False Sign the master auth-replies with a cryptographic signature of the master's public key. Please see the Multimaster-PKI with Failover Tutorial for how to use these settings. master_sign_pubkey: True master_sign_key_nameDefault: master_sign The customizable name of the signing-key-pair without suffix. master_sign_key_name: <filename_without_suffix> master_pubkey_signatureDefault: master_pubkey_signature The name of the file in the master's pki-directory that holds the pre-calculated signature of the master's public-key. master_pubkey_signature: <filename> master_use_pubkey_signatureDefault: False Instead of computing the signature for each auth-reply, use a pre-calculated signature. The master_pubkey_signature must also be set for this. master_use_pubkey_signature: True rotate_aes_keyDefault: True Rotate the salt master's AES key when a minion's public key is deleted with salt-key. This is a very important security setting. Disabling it will enable deleted minions to still listen in on the messages published by the salt-master. Do not disable this unless it is absolutely clear what this does. rotate_aes_key: True publish_sessionDefault: 86400 The number of seconds between AES key rotations on the master. publish_session: 86400 publish_signing_algorithmNew in version 3006.9. Default: PKCS1v15-SHA1 The RSA signing algorithm used by the master when signing payloads sent over the publish channel. Valid values are PKCS1v15-SHA1 and PKCS1v15-SHA224. Minions must be at version 3006.9 or greater if this is changed from the default setting. sslNew in version 2016.11.0. Default: None TLS/SSL connection options. This could be set to a dictionary containing arguments corresponding to the python ssl.wrap_socket method.
For details see the Tornado and Python documentation. Note: to set enum argument values like cert_reqs and ssl_version, use constant names without the ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23. ssl: preserve_minion_cacheDefault: False By default, the master deletes its cache of minion data when the key for that minion is removed. To preserve the cache after key deletion, set preserve_minion_cache to True. WARNING: This may have security implications if compromised minions authenticate with a previously deleted minion ID. preserve_minion_cache: False allow_minion_key_revokeDefault: True Controls whether a minion can request its own key revocation. When True, the master will honor the minion's request and revoke its key. When False, the master will drop the request and the minion's key will remain accepted. allow_minion_key_revoke: False optimization_orderDefault: [0, 1, 2] In cases where Salt is distributed without .py files, this option determines the priority of optimization level(s) Salt's module loader should prefer. NOTE: This option is only supported on Python 3.5+.
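For example, a sketch that prefers the most heavily optimized bytecode first (the values correspond to Python optimization levels):
  optimization_order:
    - 2
    - 1
    - 0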
Master Large Scale Tuning Settingsmax_open_filesDefault: 100000 Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect you might start seeing on the console (and then salt-master crashes): Too many open files (tcp_listener.cpp:335) Aborted (core dumped) max_open_files: 100000 By default this value will be the value of ulimit -Hn, i.e., the hard limit for max open files. To set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on the OS and/or distribution; a good way to find out how is to search the internet for something like: raise max open files hard limit debian worker_threadsDefault: 5 The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value. Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise. Standards for busy environments:
NOTE: When the master daemon starts, it is expected behaviour
to see multiple salt-master processes, even if 'worker_threads' is set to '1'.
At a minimum, a controlling process will start along with a Publisher, an
EventPublisher, and a number of MWorker processes will be started. The number
of MWorker processes is tuneable by the 'worker_threads' configuration value
while the others are not.
worker_threads: 5 pub_hwmDefault: 1000 The zeromq high water mark on the publisher interface. pub_hwm: 1000 zmq_backlogDefault: 1000 The listen queue size of the ZeroMQ backlog. zmq_backlog: 1000 Master Module Managementrunner_dirsDefault: [] Set additional directories to search for runner modules. runner_dirs: utils_dirsNew in version 2018.3.0. Default: [] Set additional directories to search for util modules. utils_dirs: cython_enableDefault: False Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master. cython_enable: False Master State System Settingsstate_topDefault: top.sls The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment. The value of "state_top" is also used for the pillar top file state_top: top.sls state_top_saltenvThis option has no default value. Set it to an environment name to ensure that only the top file from that environment is considered during a highstate. NOTE: Using this value does not change the merging strategy.
For instance, if top_file_merging_strategy is set to merge, and
state_top_saltenv is set to foo, then any sections for
environments other than foo in the top file for the foo
environment will be ignored. With state_top_saltenv set to base,
all states from all environments in the base top file will be applied,
while all other top files are ignored. The only way to set
state_top_saltenv to something other than base and not have the
other environments in the targeted top file ignored, would be to set
top_file_merging_strategy to merge_all.
state_top_saltenv: dev top_file_merging_strategyChanged in version 2016.11.0: A merge_all strategy has been added. Default: merge When no specific fileserver environment (a.k.a. saltenv) has been specified for a highstate, all environments' top files are inspected. This config option determines how the SLS targets in those top files are handled. When set to merge, the base environment's top file is evaluated first, followed by the other environments' top files. The first target expression (e.g. '*') for a given environment is kept, and when the same target expression is used in a different top file evaluated later, it is ignored. Because base is evaluated first, it is authoritative. For example, if there is a target for '*' for the foo environment in both the base and foo environment's top files, the one in the foo environment would be ignored. The environments will be evaluated in no specific order (aside from base coming first). For greater control over the order in which the environments are evaluated, use env_order. Note that, aside from the base environment's top file, any sections in top files that do not match that top file's environment will be ignored. So, for example, a section for the qa environment would be ignored if it appears in the dev environment's top file. To keep use cases like this from being ignored, use the merge_all strategy. When set to same, then for each environment, only that environment's top file is processed, with the others being ignored. For example, only the dev environment's top file will be processed for the dev environment, and any SLS targets defined for dev in the base environment's (or any other environment's) top file will be ignored. If an environment does not have a top file, then the top file from the default_top config parameter will be used as a fallback. When set to merge_all, then all states in all environments in all top files will be applied. The order in which individual SLS files will be executed will depend on the order in which the top files were evaluated, and the environments will be evaluated in no specific order. For greater control over the order in which the environments are evaluated, use env_order. top_file_merging_strategy: same env_orderDefault: [] When top_file_merging_strategy is set to merge, and no environment is specified for a highstate, this config option allows for the order in which top files are evaluated to be explicitly defined. env_order: master_topsDefault: {} The master_tops option replaces the external_nodes option by creating a pluggable system for the generation of external top data. The external_nodes option is deprecated by the master_tops option. To gain the capabilities of the classic external_nodes system, use the following configuration: master_tops: rendererDefault: jinja|yaml The renderer to use on the minions to render the state data. renderer: jinja|json userdata_templateNew in version 2016.11.4. Default: None The renderer to use for templating userdata files in salt-cloud, if the userdata_template is not set in the cloud profile. If no value is set in the cloud profile or master config file, no templating will be performed. userdata_template: jinja jinja_envNew in version 2018.3.0. Default: {} jinja_env overrides the default Jinja environment options for all templates except sls templates. To set the options for sls templates use jinja_sls_env. NOTE: The Jinja2 Environment documentation is the
official source for the default values. Not all the options listed in the
jinja documentation can be overridden using jinja_env or
jinja_sls_env.
The default options are: jinja_env: jinja_sls_envNew in version 2018.3.0. Default: {} jinja_sls_env sets the Jinja environment options for sls templates. The defaults and accepted options are exactly the same as they are for jinja_env. The default options are: jinja_sls_env: Example using line statements and line comments to increase ease of use: If your jinja_sls_env configuration sets line_statement_prefix to '%' and line_comment_prefix to '##', then jinja will interpret anything after a % at the start of a line (ignoring whitespace) as a jinja statement and will interpret anything after a ## as a comment. This allows the following more convenient syntax to be used: ## (this comment will not stay once rendered)
# (this comment remains in the rendered template)
## ensure all the formula services are running
% for service in formula_services:
enable_service_{{ service }}:
  service.running:
    - name: {{ service }}
% endfor
The following less convenient but equivalent syntax would have to be used if you had not set the line_statement and line_comment options: {# (this comment will not stay once rendered) #}
# (this comment remains in the rendered template)
{# ensure all the formula services are running #}
{% for service in formula_services %}
enable_service_{{ service }}:
  service.running:
    - name: {{ service }}
{% endfor %}
jinja_trim_blocksDeprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env New in version 2014.1.0. Default: False If this is set to True, the first newline after a Jinja block is removed (block, not variable tag!). Defaults to False and corresponds to the Jinja environment init variable trim_blocks. jinja_trim_blocks: False jinja_lstrip_blocksDeprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env New in version 2014.1.0. Default: False If this is set to True, leading spaces and tabs are stripped from the start of a line to a block. Defaults to False and corresponds to the Jinja environment init variable lstrip_blocks. jinja_lstrip_blocks: False failhardDefault: False Set the global failhard flag. This informs all states to stop running at the moment a single state fails. failhard: False state_verboseDefault: True Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states that failed or states that have changes. state_verbose: False state_outputDefault: full The state_output setting controls which results will be output as full multi-line output: full and terse output every state in full or terse form respectively, mixed outputs only states with errors in full, and changes outputs states with changes or errors in full;
full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state ID will be used as the name in the output. state_output: full state_output_diffDefault: False The state_output_diff setting changes whether or not the output from successful states is returned. Useful when even the terse output of these states is cluttering the logs. Set it to True to ignore them. state_output_diff: False state_output_profileDefault: True The state_output_profile setting changes whether profile information will be shown for each state run. state_output_profile: True state_output_pctDefault: False The state_output_pct setting changes whether success and failure information as a percent of total actions will be shown for each state run. state_output_pct: False state_compress_idsDefault: False The state_compress_ids setting aggregates information about states which have multiple "names" under the same state ID in the highstate output. state_compress_ids: False state_aggregateDefault: False Automatically aggregate all states that have support for mod_aggregate by setting to True. state_aggregate: True Or pass a list of state module names to automatically aggregate just those types. state_aggregate: state_eventsDefault: False Send progress events as each function in a state run completes execution by setting to True. Progress events are in the format salt/job/<JID>/prog/<MID>/<RUN NUM>. state_events: True yaml_utf8Default: False Enable extra routines for the YAML renderer used in states containing UTF characters. yaml_utf8: False runner_returnsDefault: True If set to False, runner jobs will not be saved to the job cache (defined by master_job_cache). runner_returns: False Master File Server Settingsfileserver_backendDefault: ['roots'] Salt supports a modular fileserver backend system; this system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured and will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend roots, which is configured using the file_roots option. Example: fileserver_backend: NOTE: For masterless Salt, this parameter must be specified in
the minion config file.
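For instance, to search the local roots backend first and fall back to gitfs:
  fileserver_backend:
    - roots
    - gitfs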
fileserver_followsymlinksNew in version 2014.1.0. Default: True By default, the file server follows symlinks when walking the filesystem tree. Currently this only applies to the default roots fileserver_backend. fileserver_followsymlinks: True fileserver_ignoresymlinksNew in version 2014.1.0. Default: False If you do not want symlinks to be treated as the files they are pointing to, set fileserver_ignoresymlinks to True. By default this is set to False. When set to True, any detected symlink while listing files on the Master will not be returned to the Minion. fileserver_ignoresymlinks: False fileserver_list_cache_timeNew in version 2014.1.0. Changed in version 2016.11.0: The default was changed from 30 seconds to 20. Default: 20 Salt caches the list of files/symlinks/directories for each fileserver backend and environment as they are requested, to guard against a performance bottleneck at scale when many minions all ask the fileserver which files are available simultaneously. This configuration parameter allows for the max age of that cache to be altered. Set this value to 0 to disable use of this cache altogether, but keep in mind that this may increase the CPU load on the master when running a highstate on a large number of minions. NOTE: Rather than altering this configuration parameter, it may
be advisable to use the fileserver.clear_file_list_cache runner to
clear these caches.
fileserver_list_cache_time: 5 fileserver_verify_configNew in version 2017.7.0. Default: True By default, as the master starts it performs some sanity checks on the configured fileserver backends. If any of these sanity checks fail (such as when an invalid configuration is used), the master daemon will abort. To skip these sanity checks, set this option to False. fileserver_verify_config: False hash_typeDefault: sha256 The hash_type is the hash to use when discovering the hash of a file on the master server. The default is sha256, but md5, sha1, sha224, sha384, and sha512 are also supported. hash_type: sha256 file_buffer_sizeDefault: 1048576 The buffer size in the file server in bytes. file_buffer_size: 1048576 file_ignore_regexDefault: '' A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don't want all the '.svn' folders and content synced to your minions, you could set this to '/\.svn($|/)'. By default nothing is ignored. file_ignore_regex: file_ignore_globDefault: '' A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored. file_ignore_glob: NOTE: Vim's .swp files are a common cause of Unicode errors in
file.recurse states which use templating. Unless there is a good reason
to distribute them via the fileserver, it is good practice to include
'\*.swp' in the file_ignore_glob.
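For example (the patterns are illustrative):
  file_ignore_glob:
    - '*.pyc'
    - '*/somefolder/*.bak'
    - '*.swp'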
master_rootsDefault: '' A master-only copy of the file_roots dictionary, used by the state compiler. Example: master_roots: roots: Master's Local File Serverfile_rootsChanged in version 3005. Default: base: Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port. The file server works on environments passed to the master. Each environment can have multiple root directories. The subdirectories across the multiple file roots must not match; otherwise the integrity of downloaded files cannot be reliably ensured. A base environment is required to house the top file. As of 2018.3.5 and 2019.2.1, it is possible to have __env__ as a catch-all environment. Example: file_roots: Taking dynamic environments one step further, __env__ can also be used in the file_roots filesystem path as of version 3005. It will be replaced with the actual saltenv and searched for states and data to provide to the minion. Note this substitution ONLY occurs for the __env__ environment. For instance, this configuration: file_roots: is equivalent to this static configuration: file_roots: NOTE: For masterless Salt, this parameter must be specified in
the minion config file.
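A sketch of a multi-environment file_roots layout, using paths consistent with the defaults shown elsewhere in this document:
  file_roots:
    base:
      - /usr/local/etc/salt/states
    dev:
      - /usr/local/etc/salt/states/dev
    prod:
      - /usr/local/etc/salt/states/prod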
roots_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for file_roots. NOTE: Since file_roots consists of files local to the
minion, the update process for this fileserver backend just reaps the cache
for this backend.
roots_update_interval: 120 gitfs: Git Remote File Server Backendgitfs_remotesDefault: [] When using the git fileserver backend, at least one git remote needs to be defined. The user running the salt master will need read access to the repo. The repos will be searched in order to find the file requested by a client, and the first repo to have the file will return it. Branches and tags are translated into salt environments. gitfs_remotes: NOTE: file:// repos will be treated as a remote and
copied into the master's gitfs cache, so only the local refs for those
repos will be exposed as fileserver environments.
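For example (the second entry shows a local file:// remote, which is copied into the gitfs cache as noted above):
  gitfs_remotes:
    - git://github.com/saltstack/salt-states.git
    - file:///var/git/saltmaster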
As of 2014.7.0, it is possible to have per-repo versions of several of the gitfs configuration parameters. For more information, see the GitFS Walkthrough. gitfs_providerNew in version 2014.7.0. Optional parameter used to specify the provider to be used for gitfs. More information can be found in the GitFS Walkthrough. Must be either pygit2 or gitpython. If unset, then each will be tried in that same order, and the first one with a compatible version installed will be the provider that is used. gitfs_provider: gitpython gitfs_ssl_verifyDefault: True Specifies whether or not to ignore SSL certificate errors when fetching from the repositories configured in gitfs_remotes. The False setting is useful if you're using a git repo that uses a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option. gitfs_ssl_verify: False NOTE: pygit2 only supports disabling SSL verification in
versions 0.23.2 and newer.
Changed in version 2015.8.0: This option can now be configured on individual repositories as well. See here for more info. Changed in version 2016.11.0: The default config value changed from False to True. gitfs_mountpointNew in version 2014.7.0. Default: '' Specifies a path on the salt fileserver which will be prepended to all files served by gitfs. This option can be used in conjunction with gitfs_root. It can also be configured for an individual repository, see here for more info. gitfs_mountpoint: salt://foo/bar NOTE: The salt:// protocol designation can be left off
(in other words, foo/bar and salt://foo/bar are equivalent).
Assuming a file baz.sh in the root of a gitfs remote, and the above
example mountpoint, this file would be served up via
salt://foo/bar/baz.sh.
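Per-repo parameters such as root, mountpoint, and base (each described below) can be combined on a single remote; a sketch with a hypothetical repository URL:
  gitfs_remotes:
    - https://example.com/git/repo.git:
      - root: other/salt
      - mountpoint: salt://other/bar
      - base: salt_base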
gitfs_rootDefault: '' Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with gitfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of gitfs) be considered as the root of the repo. gitfs_root: somefolder/otherfolder Changed in version 2014.7.0: This option can now be configured on individual repositories as well. See here for more info. gitfs_baseDefault: master Defines which branch/tag should be used as the base environment. gitfs_base: salt Changed in version 2014.7.0: This option can now be configured on individual repositories as well. See here for more info. gitfs_saltenvNew in version 2016.11.0. Default: [] Global settings for per-saltenv configuration parameters. Though per-saltenv configuration parameters are typically one-off changes specific to a single gitfs remote, and thus more often configured on a per-remote basis, this parameter can be used to specify per-saltenv changes which should apply to all remotes. For example, the below configuration will map the develop branch to the dev saltenv for all gitfs remotes. gitfs_saltenv: gitfs_disable_saltenv_mappingNew in version 2018.3.0. Default: False When set to True, all saltenv mapping logic is disregarded (aside from which branch/tag is mapped to the base saltenv). To use any other environments, they must then be defined using per-saltenv configuration parameters. gitfs_disable_saltenv_mapping: True NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_ref_typesNew in version 2018.3.0. Default: ['branch', 'tag', 'sha'] This option defines what types of refs are mapped to fileserver environments (i.e. saltenvs). It also sets the order of preference when there are ambiguously-named refs (i.e. when a branch and tag both have the same name). The below example disables mapping of both tags and SHAs, so that only branches are mapped as saltenvs: gitfs_ref_types: NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
NOTE: sha is special in that it will not show up when
listing saltenvs (e.g. with the fileserver.envs runner), but works
within states and with cp.cache_file to retrieve a file from a specific
git SHA.
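The branches-only configuration described above would look like:
  gitfs_ref_types:
    - branch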
gitfs_saltenv_whitelistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from gitfs_env_whitelist to gitfs_saltenv_whitelist Default: [] Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough. gitfs_saltenv_whitelist: gitfs_saltenv_blacklistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from gitfs_env_blacklist to gitfs_saltenv_blacklist Default: [] Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough. gitfs_saltenv_blacklist: gitfs_global_lockNew in version 2015.8.9. Default: True When set to False, if there is an update lock for a gitfs remote and the pid written to it is not running on the master, the lock file will be automatically cleared and a new lock will be obtained. When set to True, Salt will simply log a warning when there is an update lock present. On single-master deployments, disabling this option can help automatically deal with instances where the master was shutdown/restarted during the middle of a gitfs update, leaving an update lock in place. However, on multi-master deployments with the gitfs cachedir shared via GlusterFS, nfs, or another network filesystem, it is strongly recommended not to disable this option as doing so will cause lock files to be removed if they were created by a different master. # Disable global lock gitfs_global_lock: False gitfs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the default update interval (in seconds) for gitfs remotes. The update interval can also be set for a single repository via a per-remote config option. gitfs_update_interval: 120 GitFS Authentication OptionsThese parameters only currently apply to the pygit2 gitfs provider. Examples of how to use these can be found in the GitFS Walkthrough. gitfs_userNew in version 2014.7.0. Default: '' Along with gitfs_password, is used to authenticate to HTTPS remotes. gitfs_user: git NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_passwordNew in version 2014.7.0. Default: '' Along with gitfs_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication. gitfs_password: mypassword NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_insecure_authNew in version 2014.7.0. Default: False By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk. gitfs_insecure_auth: True NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_pubkeyNew in version 2014.7.0. Default: '' Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. Required for SSH remotes. gitfs_pubkey: /path/to/key.pub NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_privkeyNew in version 2014.7.0. Default: '' Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. Required for SSH remotes. gitfs_privkey: /path/to/key NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
gitfs_passphraseNew in version 2014.7.0. Default: '' This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase. gitfs_passphrase: mypassphrase NOTE: This is a global configuration option, see here
for examples of configuring it for individual repositories.
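Putting the three SSH authentication options together (the key paths and passphrase shown are placeholders):
  gitfs_pubkey: /path/to/key.pub
  gitfs_privkey: /path/to/key
  gitfs_passphrase: mypassphrase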
gitfs_refspecsNew in version 2017.7.0. Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*'] When fetching from remote repositories, by default Salt will fetch branches and tags. This parameter can be used to override the default and specify alternate refspecs to be fetched. More information on how this feature works can be found in the GitFS Walkthrough. gitfs_refspecs: hgfs: Mercurial Remote File Server Backendhgfs_remotesNew in version 0.17.0. Default: [] When using the hg fileserver backend, at least one mercurial remote needs to be defined. The user running the salt master will need read access to the repo. The repos will be searched in order to find the file requested by a client, and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the hgfs_branch_method parameter. hgfs_remotes: NOTE: As of 2014.7.0, it is possible to have per-repo versions
of the hgfs_root, hgfs_mountpoint, hgfs_base, and
hgfs_branch_method parameters. For example:
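A sketch of such per-remote overrides (the repository URLs are hypothetical):
  hgfs_remotes:
    - https://mydomain.tld/repos/first:
      - base: saltstates
    - https://mydomain.tld/repos/second:
      - root: salt
      - mountpoint: salt://hg/files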
hgfs_branch_methodNew in version 0.17.0. Default: branches Defines the objects that will be used as fileserver environments: branches (branches and tags are translated into environments), bookmarks (bookmarks and tags are translated into environments), or mixed (branches, bookmarks, and tags are all translated into environments).
hgfs_branch_method: mixed NOTE: Starting in version 2014.1.0, the value of the
hgfs_base parameter defines which branch is used as the base
environment, allowing for a base environment to be used with an
hgfs_branch_method of bookmarks.
Prior to this release, the default branch was used as the base environment. hgfs_mountpointNew in version 2014.7.0. Default: '' Specifies a path on the salt fileserver which will be prepended to all files served by hgfs. This option can be used in conjunction with hgfs_root. It can also be configured on a per-remote basis, see here for more info. hgfs_mountpoint: salt://foo/bar NOTE: The salt:// protocol designation can be left off
(in other words, foo/bar and salt://foo/bar are equivalent).
Assuming a file baz.sh in the root of an hgfs remote, this file would
be served up via salt://foo/bar/baz.sh.
hgfs_rootNew in version 0.17.0. Default: '' Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with hgfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of hgfs) be considered as the root of the repo. hgfs_root: somefolder/otherfolder Changed in version 2014.7.0: Ability to specify hgfs roots on a per-remote basis was added. See here for more info. hgfs_baseNew in version 2014.1.0. Default: default Defines which branch should be used as the base environment. Change this if hgfs_branch_method is set to bookmarks to specify which bookmark should be used as the base environment. hgfs_base: salt hgfs_saltenv_whitelistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from hgfs_env_whitelist to hgfs_saltenv_whitelist Default: [] Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire branch/bookmark/tag name. If used, only branches/bookmarks/tags which match one of the specified expressions will be exposed as fileserver environments. If used in conjunction with hgfs_saltenv_blacklist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments. hgfs_saltenv_whitelist: hgfs_saltenv_blacklistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from hgfs_env_blacklist to hgfs_saltenv_blacklist Default: [] Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire branch/bookmark/tag name. If used, branches/bookmarks/tags which match one of the specified expressions will not be exposed as fileserver environments. If used in conjunction with hgfs_saltenv_whitelist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments. hgfs_saltenv_blacklist: hgfs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for hgfs_remotes. hgfs_update_interval: 120 svnfs: Subversion Remote File Server Backendsvnfs_remotesNew in version 0.17.0. Default: [] When using the svn fileserver backend, at least one subversion remote needs to be defined. The user running the salt master will need read access to the repo. The repos will be searched in order to find the file requested by a client, and the first repo to have the file will return it. The trunk, branches, and tags become environments, with the trunk being the base environment. svnfs_remotes: NOTE: As of 2014.7.0, it is possible to have per-repo versions
of the following configuration parameters:
For example: svnfs_remotes: svnfs_mountpointNew in version 2014.7.0. Default: '' Specifies a path on the salt fileserver which will be prepended to all files served by svnfs. This option can be used in conjunction with svnfs_root. It can also be configured on a per-remote basis, see here for more info. svnfs_mountpoint: salt://foo/bar NOTE: The salt:// protocol designation can be left off
(in other words, foo/bar and salt://foo/bar are equivalent).
Assuming a file baz.sh in the root of an svnfs remote, this file would
be served up via salt://foo/bar/baz.sh.
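Per-remote overrides for svnfs use the same form as the other VCS backends; a sketch with hypothetical repository URLs:
  svnfs_remotes:
    - svn://svnserver/repo:
      - root: salt
      - mountpoint: salt://foo/bar
    - svn://svnserver/repo2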
svnfs_rootNew in version 0.17.0. Default: '' Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with svnfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of svnfs) be considered as the root of the repo. svnfs_root: somefolder/otherfolder Changed in version 2014.7.0: Ability to specify svnfs roots on a per-remote basis was added. See here for more info. svnfs_trunkNew in version 2014.7.0. Default: trunk Path relative to the root of the repository where the trunk is located. Can also be configured on a per-remote basis, see here for more info. svnfs_trunk: trunk svnfs_branchesNew in version 2014.7.0. Default: branches Path relative to the root of the repository where the branches are located. Can also be configured on a per-remote basis, see here for more info. svnfs_branches: branches svnfs_tagsNew in version 2014.7.0. Default: tags Path relative to the root of the repository where the tags are located. Can also be configured on a per-remote basis, see here for more info. svnfs_tags: tags svnfs_saltenv_whitelistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from svnfs_env_whitelist to svnfs_saltenv_whitelist Default: [] Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire branch/tag name. If used, only branches/tags which match one of the specified expressions will be exposed as fileserver environments. If used in conjunction with svnfs_saltenv_blacklist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments. svnfs_saltenv_whitelist: svnfs_saltenv_blacklistNew in version 2014.7.0. Changed in version 2018.3.0: Renamed from svnfs_env_blacklist to svnfs_saltenv_blacklist Default: [] Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire branch/tag name. If used, branches/tags which match one of the specified expressions will not be exposed as fileserver environments. If used in conjunction with svnfs_saltenv_whitelist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments. svnfs_saltenv_blacklist: svnfs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for svnfs_remotes. svnfs_update_interval: 120 minionfs: MinionFS Remote File Server Backendminionfs_envNew in version 2014.7.0. Default: base Environment from which MinionFS files are made available. minionfs_env: minionfs minionfs_mountpointNew in version 2014.7.0. Default: '' Specifies a path on the salt fileserver from which minionfs files are served. minionfs_mountpoint: salt://foo/bar NOTE: The salt:// protocol designation can be left off
(in other words, foo/bar and salt://foo/bar are
equivalent).
minionfs_whitelistNew in version 2014.7.0. Default: [] Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID. If used, only the pushed files from minions which match one of the specified expressions will be exposed. If used in conjunction with minionfs_blacklist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed. minionfs_whitelist: minionfs_blacklistNew in version 2014.7.0. Default: [] Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID. If used, the pushed files from minions which match one of the specified expressions will not be exposed. If used in conjunction with minionfs_whitelist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed. minionfs_blacklist: minionfs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for MinionFS. NOTE: Since MinionFS consists of files local to the
master, the update process for this fileserver backend just reaps the cache
for this backend.
minionfs_update_interval: 120 azurefs: Azure File Server BackendNew in version 2015.8.0. See the azurefs documentation for usage examples. azurefs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for azurefs. azurefs_update_interval: 120 s3fs: S3 File Server BackendNew in version 0.16.0. See the s3fs documentation for usage examples. s3fs_update_intervalNew in version 2018.3.0. Default: 60 This option defines the update interval (in seconds) for s3fs. s3fs_update_interval: 120 fileserver_intervalNew in version 3006.0. Default: 3600 Defines how often to restart the master's FileServerUpdate process. fileserver_interval: 9600 Pillar Configurationpillar_rootsChanged in version 3005. Default: base: Set the environments and directories used to hold pillar sls data. This configuration is the same as file_roots: As of 2017.7.5 and 2018.3.1, it is possible to have __env__ as a catch-all environment. Example: pillar_roots: Taking dynamic environments one step further, __env__ can also be used in the pillar_roots filesystem path as of version 3005. It will be replaced with the actual pillarenv and searched for Pillar data to provide to the minion. Note this substitution ONLY occurs for the __env__ environment. For instance, this configuration: pillar_roots: is equivalent to this static configuration: pillar_roots: on_demand_ext_pillarNew in version 2016.3.6,2016.11.3,2017.7.0. Default: ['libvirt', 'virtkey'] The external pillars permitted to be used on-demand using pillar.ext. on_demand_ext_pillar: WARNING: This will allow minions to request specific pillar data
via pillar.ext, and may be considered a security risk. However, pillar
data generated in this way will not affect the in-memory pillar data,
so this risk is limited to instances in which states/modules/etc. (built-in or
custom) rely upon pillar data generated by pillar.ext.
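For example, to additionally permit the git external pillar to be invoked on demand:
  on_demand_ext_pillar:
    - libvirt
    - virtkey
    - git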
decrypt_pillarNew in version 2017.7.0. Default: [] A list of paths to be recursively decrypted during pillar compilation. decrypt_pillar: Entries in this list can be formatted either as a simple string, or as a key/value pair, with the key being the pillar location, and the value being the renderer to use for pillar decryption. If the former is used, the renderer specified by decrypt_pillar_default will be used. decrypt_pillar_delimiterNew in version 2017.7.0. Default: : The delimiter used to distinguish nested data structures in the decrypt_pillar option. decrypt_pillar_delimiter: '|' decrypt_pillar: decrypt_pillar_defaultNew in version 2017.7.0. Default: gpg The default renderer used for decryption, if one is not specified for a given pillar key in decrypt_pillar. decrypt_pillar_default: my_custom_renderer decrypt_pillar_renderersNew in version 2017.7.0. Default: ['gpg'] List of renderers which are permitted to be used for pillar decryption. decrypt_pillar_renderers: gpg_decrypt_must_succeedNew in version 3005. Default: False If this is True and the ciphertext could not be decrypted, then an error is raised. Passing the ciphertext through unmodified is essentially never desired; for example, if a state is setting a database password from pillar and gpg rendering fails, the state would update the password to the ciphertext, which by definition is not encrypted.
compatibility. In the Chlorine release, this option will default to
True.
gpg_decrypt_must_succeed: False pillar_optsDefault: False The pillar_opts option adds the master configuration file data to a dict in the pillar called master. This can be used to set simple configurations in the master config file that can then be used on minions. Note that setting this option to True means the master config file will be included in all minions' pillars. While this makes global configuration of services and systems easy, it may not be desired if sensitive data is stored in the master configuration. pillar_opts: False pillar_safe_render_errorDefault: True The pillar_safe_render_error option prevents the master from passing pillar render errors to the minion. This is set on by default because the error could contain templating data which would give that minion information it shouldn't have, like a password! When set to True, the error message will only show: Rendering SLS 'my.sls' failed. Please see master log for details. pillar_safe_render_error: True ext_pillarThe ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found herein: salt/pillar By default, the ext_pillar interface is not configured to run. Default: [] ext_pillar: There are additional details at Pillars ext_pillar_firstNew in version 2015.5.0. Default: False This option allows for external pillar sources to be evaluated before pillar_roots. External pillar data is evaluated separately from pillar_roots pillar data, and then both sets of pillar data are merged into a single pillar dictionary, so the value of this config option will have an impact on which key "wins" when there is one of the same name in both the external pillar data and pillar_roots pillar data. By setting this option to True, ext_pillar keys will be overridden by pillar_roots, while leaving it as False will allow ext_pillar keys to override those from pillar_roots. NOTE: For a while, this config option did not work as specified
above, because of a bug in Pillar compilation. This bug has been resolved in
version 2016.3.4 and later.
ext_pillar_first: False pillarenv_from_saltenvDefault: False When set to True, the pillarenv value will assume the value of the effective saltenv when running states. This essentially makes salt-run pillar.show_pillar saltenv=dev equivalent to salt-run pillar.show_pillar saltenv=dev pillarenv=dev. If pillarenv is set on the CLI, it will override this option. pillarenv_from_saltenv: True NOTE: For salt remote execution commands this option should be
set in the Minion configuration instead.
pillar_raise_on_missingNew in version 2015.5.0. Default: False Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a named value from pillar fails. When this option is set to False, the failed attempt returns an empty string. Git External Pillar (git_pillar) Configuration Optionsgit_pillar_providerNew in version 2015.8.0. Specify the provider to be used for git_pillar. Must be either pygit2 or gitpython. If unset, then both will be tried in that same order, and the first one with a compatible version installed will be the provider that is used. git_pillar_provider: gitpython git_pillar_baseNew in version 2015.8.0. Default: master If the desired branch matches this value, and the environment is omitted from the git_pillar configuration, then the environment for that git_pillar remote will be base. For example, in the configuration below, the foo branch/tag would be assigned to the base environment, while bar would be mapped to the bar environment. git_pillar_base: foo ext_pillar: git_pillar_branchNew in version 2015.8.0. Default: master If the branch is omitted from a git_pillar remote, then this branch will be used instead. For example, in the configuration below, the first two remotes would use the pillardata branch/tag, while the third would use the foo branch/tag. git_pillar_branch: pillardata ext_pillar: git_pillar_envNew in version 2015.8.0. Default: '' (unset) Environment to use for git_pillar remotes. This is normally derived from the branch/tag (or from a per-remote env parameter), but if set this will override the process of deriving the env from the branch/tag name. For example, in the configuration below the foo branch would be assigned to the base environment, while the bar branch would need to explicitly have bar configured as its environment to keep it from also being mapped to the base environment. git_pillar_env: base ext_pillar: For this reason, this option is recommended to be left unset, unless the use case calls for all (or almost all) of the git_pillar remotes to use the same environment irrespective of the branch/tag being used. git_pillar_rootNew in version 2015.8.0. Default: '' Path relative to the root of the repository where the git_pillar top file and SLS files are located. In the below configuration, the pillar top file and SLS files would be looked for in a subdirectory called pillar. git_pillar_root: pillar ext_pillar: NOTE: This is a global option. If only one or two repos need to
have their files sourced from a subdirectory, then git_pillar_root can
be omitted and the root can be specified on a per-remote basis, like so:
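A sketch with hypothetical repository URLs; the first remote serves pillar data from the repository root, while the second uses a pillar subdirectory:
  ext_pillar:
    - git:
      - master https://mydomain.tld/pillar-shallow.git
      - master https://mydomain.tld/pillar-nested.git:
        - root: pillar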
In this example, for the first remote the top file and SLS files would be looked for in the root of the repository, while in the second remote the pillar data would be retrieved from the pillar subdirectory. git_pillar_ssl_verifyNew in version 2015.8.0. Changed in version 2016.11.0. Default: True Specifies whether or not to ignore SSL certificate errors when contacting the remote repository. The False setting is useful if you're using a git repo that uses a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option. In the 2016.11.0 release, the default config value changed from False to True. git_pillar_ssl_verify: True NOTE: pygit2 only supports disabling SSL verification in
versions 0.23.2 and newer.
git_pillar_global_lockNew in version 2015.8.9. Default: True When set to False, if there is an update/checkout lock for a git_pillar remote and the pid written to it is not running on the master, the lock file will be automatically cleared and a new lock will be obtained. When set to True, Salt will simply log a warning when there is a lock present. On single-master deployments, disabling this option can help automatically deal with instances where the master was shutdown/restarted during the middle of a git_pillar update/checkout, leaving a lock in place. However, on multi-master deployments with the git_pillar cachedir shared via GlusterFS, nfs, or another network filesystem, it is strongly recommended not to disable this option as doing so will cause lock files to be removed if they were created by a different master. # Disable global lock git_pillar_global_lock: False git_pillar_includesNew in version 2017.7.0. Default: True Normally, when processing git_pillar remotes, if more than one repo under the same git section in the ext_pillar configuration refers to the same pillar environment, then each repo in a given environment will have access to the other repos' files to be referenced in their top files. However, it may be desirable to disable this behavior. If so, set this value to False. For a more detailed examination of how includes work, see this explanation from the git_pillar documentation. git_pillar_includes: False git_pillar_update_intervalNew in version 3000. Default: 60 This option defines the default update interval (in seconds) for git_pillar remotes. The update is handled within the global loop, hence git_pillar_update_interval should be a multiple of loop_interval. git_pillar_update_interval: 120 Git External Pillar Authentication OptionsThese parameters only currently apply to the pygit2 git_pillar_provider. Authentication works the same as it does in gitfs, as outlined in the GitFS Walkthrough, though the global configuration options are named differently to reflect that they are for git_pillar instead of gitfs. git_pillar_userNew in version 2015.8.0. Default: '' Along with git_pillar_password, is used to authenticate to HTTPS remotes. git_pillar_user: git git_pillar_passwordNew in version 2015.8.0. Default: '' Along with git_pillar_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication. git_pillar_password: mypassword git_pillar_insecure_authNew in version 2015.8.0. Default: False By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk. git_pillar_insecure_auth: True git_pillar_pubkeyNew in version 2015.8.0. Default: '' Along with git_pillar_privkey (and optionally git_pillar_passphrase), is used to authenticate to SSH remotes. git_pillar_pubkey: /path/to/key.pub git_pillar_privkeyNew in version 2015.8.0. Default: '' Along with git_pillar_pubkey (and optionally git_pillar_passphrase), is used to authenticate to SSH remotes. git_pillar_privkey: /path/to/key git_pillar_passphraseNew in version 2015.8.0. Default: '' This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase. git_pillar_passphrase: mypassphrase git_pillar_refspecsNew in version 2017.7.0. Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*'] When fetching from remote repositories, by default Salt will fetch branches and tags.
This parameter can be used to override the default and specify alternate refspecs to be fetched. This parameter works similarly to its GitFS counterpart, in that it can be configured both globally and for individual remotes. git_pillar_refspecs: git_pillar_verify_configNew in version 2017.7.0. Default: True By default, as the master starts it performs some sanity checks on the configured git_pillar repositories. If any of these sanity checks fail (such as when an invalid configuration is used), the master daemon will abort. To skip these sanity checks, set this option to False. git_pillar_verify_config: False Pillar Merging Optionspillar_source_merging_strategyNew in version 2014.7.0. Default: smart The pillar_source_merging_strategy option allows you to configure merging strategy between different sources. It accepts 5 values:
none
    It will not do any merging at all, and will only parse the pillar data from the passed environment (and base, if no environment was specified).

recurse
    It will recursively merge data. For example, these two sources:

      foo: 42
      bar:
        element1: True

      bar:
        element2: True
      baz: quux

    will be merged as:

      foo: 42
      bar:
        element1: True
        element2: True
      baz: quux

aggregate
    Instructs aggregation of elements between sources that use the #!yamlex renderer. For example, these two documents:

      #!yamlex
      foo: 42
      bar: !aggregate {
        element1: True
      }
      baz: !aggregate quux

      #!yamlex
      bar: !aggregate {
        element2: True
      }
      baz: !aggregate quux2

    will be merged as:

      foo: 42
      bar:
        element1: True
        element2: True
      baz:
        - quux
        - quux2

    NOTE: This requires that the render pipeline defined in
    the renderer master configuration ends in yamlex.

overwrite
    Will use the behaviour of the 2014.1 branch and earlier. Overwrites elements according to the order in which they are processed. First pillar processed:

      A:
        first_key: blah
        second_key: blah

    Second pillar processed:

      A:
        third_key: blah
        fourth_key: blah

    will be merged as:

      A:
        third_key: blah
        fourth_key: blah

smart (default)
    Guesses the best strategy based on the renderer setting.

NOTE: In order for yamlex based features such as
!aggregate to work as expected across documents using the default
smart merge strategy, the renderer config option must be set to
jinja|yamlex or similar.
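For example, to disable merging entirely and parse only the passed environment's pillar sources:

  pillar_source_merging_strategy: none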
pillar_merge_listsNew in version 2015.8.0. Default: False Recursively merge lists by aggregating them instead of replacing them. pillar_merge_lists: False pillar_includes_override_slsNew in version 2017.7.6,2018.3.1. Default: False Prior to version 2017.7.3, keys from pillar includes would be merged on top of the pillar SLS. Since 2017.7.3, the includes are merged together and then the pillar SLS is merged on top of that. Set this option to True to return to the old behavior. pillar_includes_override_sls: True Pillar Cache Optionspillar_cacheNew in version 2015.8.8. Default: False A master can cache pillars locally to bypass the expense of having to render them for each minion on every request. This feature should only be enabled in cases where pillar rendering time is known to be unsatisfactory and any attendant security concerns about storing pillars in a master cache have been addressed. When enabling this feature, be certain to read through the additional pillar_cache_* configuration options to fully understand the tunable parameters and their implications. pillar_cache: False NOTE: Setting pillar_cache: True has no effect on
targeting minions with pillar.
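A combined sketch of the cache options covered in this section (pillar_cache_ttl and pillar_cache_backend are described next):

  pillar_cache: True
  pillar_cache_ttl: 3600
  pillar_cache_backend: disk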
pillar_cache_ttlNew in version 2015.8.8. Default: 3600 If and only if a master has set pillar_cache: True, the cache TTL controls the amount of time, in seconds, before the cache is considered invalid by a master and a fresh pillar is recompiled and stored. The cache TTL does not prevent pillar cache from being refreshed before its TTL expires. pillar_cache_backendNew in version 2015.8.8. Default: disk If and only if a master has set pillar_cache: True, one of several storage providers can be utilized:
pillar_cache_backend: disk Master Reactor SettingsreactorDefault: [] Defines a salt reactor. See the Reactor documentation for more information. reactor: reactor_refresh_intervalDefault: 60 The TTL for the cache of the reactor configuration. reactor_refresh_interval: 60 reactor_worker_threadsDefault: 10 The number of workers for the runner/wheel in the reactor. reactor_worker_threads: 10 reactor_worker_hwmDefault: 10000 The queue size for workers in the reactor. reactor_worker_hwm: 10000 Salt-API Master SettingsThere are some settings for salt-api that can be configured on the Salt Master. api_logfileDefault: /var/log/salt/api The logfile location for salt-api. api_logfile: /var/log/salt/api api_pidfileDefault: /var/run/salt-api.pid If this master will be running salt-api, specify the pidfile of the salt-api daemon. api_pidfile: /var/run/salt-api.pid rest_timeoutDefault: 300 Used by salt-api for the master requests timeout. rest_timeout: 300 netapi_enable_clientsNew in version 3006.0. Default: [] Used by salt-api to enable access to the listed clients. Unless a client is added to this list, requests will be rejected before authentication is attempted or processing of the low state occurs. This can be used to only expose the required functionality via salt-api. Configuration with all possible clients enabled: netapi_enable_clients: NOTE: Enabling all clients is not recommended - only enable the
clients that provide the functionality required.
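For example, a salt-api deployment that only needs synchronous command execution and runner access might enable just those clients. A minimal sketch (the client names here are assumptions based on Salt's netapi client interfaces; consult the salt-api documentation for the full list):

  netapi_enable_clients:
    - local
    - runner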
Syndic Server SettingsA Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the order_masters setting to True. If this is a master that will be running a syndic daemon for passthrough, the syndic_master setting needs to be set to the location of the master server. Note that this means the syndic shares its ID and PKI directory with the local minion. order_mastersDefault: False Extra data needs to be sent with publications if the master is controlling a lower level master via a syndic minion. If this is the case, the order_masters value must be set to True. order_masters: False syndic_masterChanged in version 2016.3.5,2016.11.1: Set default higher level master address. Default: masterofmasters If this master will be running the salt-syndic to connect to a higher level master, specify the higher level master with this configuration value. syndic_master: masterofmasters You can optionally connect a syndic to multiple higher level masters by setting the syndic_master value to a list: syndic_master: Each higher level master must be set up in a multi-master configuration. syndic_master_portDefault: 4506 If this master will be running the salt-syndic to connect to a higher level master, specify the higher level master port with this configuration value. syndic_master_port: 4506 syndic_pidfileDefault: /var/run/salt-syndic.pid If this master will be running the salt-syndic to connect to a higher level master, specify the pidfile of the syndic daemon. syndic_pidfile: /var/run/syndic.pid syndic_log_fileDefault: /var/log/salt/syndic If this master will be running the salt-syndic to connect to a higher level master, specify the log file of the syndic daemon. syndic_log_file: /var/log/salt-syndic.log syndic_failoverNew in version 2016.3.0. Default: random The behaviour of the multi-syndic when the connection to a master of masters fails. Can specify random (default) or ordered. If set to random, masters will be iterated in random order. If ordered is specified, the configured order will be used. syndic_failover: random syndic_waitDefault: 5 The number of seconds for the salt client to wait for additional syndics to check in with their lists of expected minions before giving up. syndic_wait: 5 syndic_forward_all_eventsNew in version 2017.7.0. Default: False On a multi-syndic, or on a single syndic connected to multiple masters, this option enables sending events to all connected masters. syndic_forward_all_events: False Peer Publish SettingsSalt minions can send commands to other minions, but only if the minion is allowed to. By default "Peer Publication" is disabled, and when enabled it is enabled for specific minions and specific commands. This allows secure compartmentalization of commands based on individual minions. peerDefault: {} The configuration uses regular expressions to match minions and then a list of regular expressions to match functions. The following will allow the minion authenticated as foo.example.com to execute functions from the test and pkg modules. peer: This will allow all minions to execute all commands: peer: This is not recommended, since it would allow anyone who gets root on any single minion to instantly have root on all of the minions!
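A minimal sketch of the two configurations just described: the first restricts foo.example.com to the test and pkg modules, the second (discouraged) opens all commands to all minions:

  peer:
    foo.example.com:
      - test.*
      - pkg.*

  peer:
    .*:
      - .*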
By adding an additional layer you can limit the target hosts in addition to the accessible commands: peer: peer_runDefault: {} The peer_run option is used to open up runners on the master to access from the minions. The peer_run configuration matches the format of the peer configuration. The following example would allow foo.example.com to execute the manage.up runner: peer_run: Master Logging Settingslog_fileDefault: /var/log/salt/master The master log can be sent to a regular file, local path name, or network location. See also log_file. Examples: log_file: /var/log/salt/master log_file: file:///dev/log log_file: udp://loghost:10514 log_levelDefault: warning The level of messages to send to the console. See also log_level. log_level: warning Any log level below the info level is INSECURE and may log sensitive data. This currently includes: profile, debug, trace, garbage, and all. log_level_logfileDefault: warning The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by the log_level option. log_level_logfile: warning Any log level below the info level is INSECURE and may log sensitive data. This currently includes: profile, debug, trace, garbage, and all. log_datefmtDefault: %H:%M:%S The date and time format used in console log messages. See also log_datefmt. log_datefmt: '%H:%M:%S' log_datefmt_logfileDefault: %Y-%m-%d %H:%M:%S The date and time format used in log file messages. See also log_datefmt_logfile. log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' log_fmt_consoleDefault: [%(levelname)-8s] %(message)s The format of the console logging messages. See also log_fmt_console. NOTE: Log colors are enabled in log_fmt_console rather
than the color config since the logging system is loaded before the
master config.
Console log colors are specified by these additional formatters: %(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s Since it is desirable to include the surrounding brackets, '[' and ']', in the coloring of the messages, these color formatters also include padding as well. Color LogRecord attributes are only available for console logging. log_fmt_console: '%(colorlevel)s %(colormsg)s' log_fmt_console: '[%(levelname)-8s] %(message)s' log_fmt_logfileDefault: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s The format of the log file logging messages. See also log_fmt_logfile. log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s' log_granular_levelsDefault: {} This can be used to control logging levels more specifically. See also log_granular_levels. log_rotate_max_bytesDefault: 0 The maximum number of bytes a single log file may contain before it is rotated. A value of 0 disables this feature. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. log_rotate_max_bytes log_rotate_backup_countDefault: 0 The number of backup files to keep when rotating log files. Only used if log_rotate_max_bytes is greater than 0. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. log_rotate_backup_count Node GroupsnodegroupsDefault: {} Node groups allow for logical groupings of minion nodes. A group consists of a group name and a compound target. nodegroups: More information on using nodegroups can be found here. Range Cluster Settingsrange_serverDefault: 'range:80' The range server (and optional port) that serves your cluster information https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec range_server: range:80 Include ConfigurationConfiguration can be loaded from multiple files. The order in which this is done is:
1. The master config file itself
2. The files matching the patterns in default_include
3. The files matching the patterns in include (if defined)

Each successive step overrides any values defined in the previous steps. Therefore, any config options defined in one of the default_include files would override the same value in the master config file, and any options defined in include would override both. default_includeDefault: master.d/*.conf The master can include configuration from other files. By default the master will automatically include all config files from master.d/*.conf, where master.d is relative to the directory of the master configuration file. NOTE: Salt creates files in the master.d directory for
its own use. These files are prefixed with an underscore. A common example of
this is the _schedule.conf file.
includeDefault: not defined The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main master configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the master will log a warning message. # Include files from a master.d directory in the same # directory as the master config file include: master.d/* # Include a single extra file into the configuration include: /etc/roles/webserver # Include several files and the master.d directory include: Keepalive Settingstcp_keepaliveDefault: True The TCP keepalive interval to set on TCP ports. This setting can be used to tune Salt connectivity issues in messy network environments with misbehaving firewalls. tcp_keepalive: True tcp_keepalive_cntDefault: -1 Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects. tcp_keepalive_cnt: -1 tcp_keepalive_idleDefault: 300 Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects. tcp_keepalive_idle: 300 tcp_keepalive_intvlDefault: -1 Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects. tcp_keepalive_intvl: -1 Windows Software Repo Settingswinrepo_providerNew in version 2015.8.0. Specify the provider to be used for winrepo. Must be either pygit2 or gitpython. If unset, then both will be tried in that order, and the first one with a compatible version installed will be the provider that is used. winrepo_provider: gitpython winrepo_dirChanged in version 2015.8.0: Renamed from win_repo to winrepo_dir. Default: /usr/local/etc/salt/states/win/repo Location on the master where the winrepo_remotes are checked out for pre-2015.8.0 minions. 2015.8.0 and later minions use winrepo_remotes_ng instead. winrepo_dir: /usr/local/etc/salt/states/win/repo winrepo_dir_ngNew in version 2015.8.0: A new ng repo was added. Default: /usr/local/etc/salt/states/win/repo-ng Location on the master where the winrepo_remotes_ng are checked out for 2015.8.0 and later minions. winrepo_dir_ng: /usr/local/etc/salt/states/win/repo-ng winrepo_cachefileChanged in version 2015.8.0: Renamed from win_repo_mastercachefile to winrepo_cachefile NOTE: 2015.8.0 and later minions do not use this setting since
the cachefile is now generated by the minion.
Default: winrepo.p Path relative to winrepo_dir where the winrepo cache should be created. winrepo_cachefile: winrepo.p winrepo_remotesChanged in version 2015.8.0: Renamed from win_gitrepos to winrepo_remotes. Default: ['https://github.com/saltstack/salt-winrepo.git'] List of git repositories to check out and include in the winrepo for pre-2015.8.0 minions. 2015.8.0 and later minions use winrepo_remotes_ng instead. winrepo_remotes: To specify a specific revision of the repository, prepend a commit ID to the URL of the repository: winrepo_remotes: Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert to a previous version in the event that an error is introduced in the latest revision of the repo. winrepo_remotes_ngNew in version 2015.8.0: A new ng repo was added. Default: ['https://github.com/saltstack/salt-winrepo-ng.git'] List of git repositories to check out and include in the winrepo for 2015.8.0 and later minions. winrepo_remotes_ng: To specify a specific revision of the repository, prepend a commit ID to the URL of the repository: winrepo_remotes_ng: Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert to a previous version in the event that an error is introduced in the latest revision of the repo. winrepo_branchNew in version 2015.8.0. Default: master If the branch is omitted from a winrepo remote, then this branch will be used instead. For example, in the configuration below, the first two remotes would use the winrepo branch/tag, while the third would use the foo branch/tag. winrepo_branch: winrepo winrepo_remotes: winrepo_ssl_verifyNew in version 2015.8.0. Changed in version 2016.11.0. Default: True Specifies whether or not to ignore SSL certificate errors when contacting the remote repository. The False setting is useful if you're using a git repo that uses a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option. In the 2016.11.0 release, the default config value changed from False to True. winrepo_ssl_verify: True Winrepo Authentication OptionsThese parameters currently apply only to the pygit2 winrepo_provider. Authentication works the same as it does in gitfs, as outlined in the GitFS Walkthrough, though the global configuration options are named differently to reflect that they are for winrepo instead of gitfs. winrepo_userNew in version 2015.8.0. Default: '' Along with winrepo_password, is used to authenticate to HTTPS remotes. winrepo_user: git winrepo_passwordNew in version 2015.8.0. Default: '' Along with winrepo_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication. winrepo_password: mypassword winrepo_insecure_authNew in version 2015.8.0. Default: False By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk. winrepo_insecure_auth: True winrepo_pubkeyNew in version 2015.8.0. Default: '' Along with winrepo_privkey (and optionally winrepo_passphrase), is used to authenticate to SSH remotes. winrepo_pubkey: /path/to/key.pub winrepo_privkeyNew in version 2015.8.0. Default: '' Along with winrepo_pubkey (and optionally winrepo_passphrase), is used to authenticate to SSH remotes.
winrepo_privkey: /path/to/key winrepo_passphraseNew in version 2015.8.0. Default: '' This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase. winrepo_passphrase: mypassphrase winrepo_refspecsNew in version 2017.7.0. Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*'] When fetching from remote repositories, by default Salt will fetch branches and tags. This parameter can be used to override the default and specify alternate refspecs to be fetched. This parameter works similarly to its GitFS counterpart, in that it can be configured both globally and for individual remotes. winrepo_refspecs: Configure Master on WindowsThe master on Windows requires no additional configuration. You can modify the master configuration by creating/editing the master config file located at c:\salt\conf\master. The same configuration options available on Linux are available in Windows, as long as they apply. For example, SSH options wouldn't apply in Windows. The main differences are the file paths. If you are familiar with common salt paths, the following table may be useful:
So, for example, the master config file in Linux is /usr/local/etc/salt/master. In Windows the master config file is c:\salt\conf\master. The Linux path /usr/local/etc/salt becomes c:\salt\conf in Windows.

Common File Locations

    conf_file: /usr/local/etc/salt/master   ->  c:\salt\conf\master
    log_file: /var/log/salt/master          ->  c:\salt\var\log\salt\master
    pidfile: /var/run/salt-master.pid       ->  c:\salt\var\run\salt-master.pid

Common Directories

    cachedir: /var/cache/salt/master        ->  c:\salt\var\cache\salt\master
    pki_dir: /usr/local/etc/salt/pki/master ->  c:\salt\conf\pki\master
    root_dir: /                             ->  c:\salt
    sock_dir: /var/run/salt/master          ->  c:\salt\var\run\salt\master

Roots

    file_roots: /usr/local/etc/salt/states  ->  c:\salt\srv\salt
    pillar_roots: /usr/local/etc/salt/pillar ->  c:\salt\srv\pillar

Win Repo Settings

    winrepo_dir: /usr/local/etc/salt/states/win/repo  ->  c:\salt\srv\salt\win\repo
Configuring the Salt MinionThe Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file. SEE ALSO: example minion configuration file
The Salt Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the minion knows where to locate its master. By default, the salt-minion configuration will be in /usr/local/etc/salt/minion (on FreeBSD and other systems using the /usr/local prefix; other platforms may use /etc/salt/minion). Minion Primary ConfigurationmasterDefault: salt The hostname or IP address of the master. See ipv6 for IPv6 connections to the master. master: salt master:port SyntaxNew in version 2015.8.0. The master config option can also be set to the master's IP in conjunction with a port number. master: localhost:1234 For IPv6 formatting with a port, remember to add brackets around the IP address before adding the port and enclose the line in single quotes to make it a string: master: '[2001:db8:85a3:8d3:1319:8a2e:370:7348]:1234' NOTE: If a port is specified in the master as well as
master_port, the master_port setting will be overridden by the
master configuration.
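Putting these together, a minimal sketch of a minion pointed at a single master (the hostname is a placeholder), using the separate master_port option described later in this section instead of the master:port syntax:

  master: salt.example.com
  master_port: 4506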
List of Masters SyntaxThe option can also be set to a list of masters, enabling multi-master mode. master: Changed in version 2014.7.0: The master can be dynamically configured. The master value can be set to a module function which will be executed, and whose return value is assumed to be the IP or hostname of the desired master. If a function is being specified, then the master_type option must be set to func, to tell the minion that the value is a function to be run and not a fully-qualified domain name. master: module.function master_type: func In addition, instead of using multi-master mode, the minion can be configured to use the list of master addresses as a failover list, trying the first address, then the second, etc. until the minion successfully connects. To enable this behavior, set master_type to failover: master: colorDefault: True By default output is colored. To disable colored output, set the color value to False. ipv6Default: None Whether the master should be connected over IPv6. By default the salt minion will try to automatically detect IPv6 connectivity to the master. ipv6: True master_uri_formatNew in version 2015.8.0. Specify the format in which the master address will be evaluated. Valid options are default or ip_only. If ip_only is specified, then the master address will not be split into IP and PORT, so be sure that only an IP (or domain name) is set in the master configuration setting. master_uri_format: ip_only master_tops_firstNew in version 2018.3.0. Default: False SLS targets defined using the Master Tops system are normally executed after any matches defined in the Top File. Set this option to True to have the minion execute the Master Tops states first. master_tops_first: True master_typeNew in version 2014.7.0. Default: str The type of the master variable. Can be str, failover, func or disable. master_type: str If this option is str (default), multiple hot masters are configured. Minions can connect to multiple masters simultaneously (all masters are "hot"). master_type: failover If this option is set to failover, master must be a list of master addresses. The minion will then try each master in the order specified in the list until it successfully connects. master_alive_interval must also be set; this determines how often the minion will verify the presence of the master. master_type: func If the master needs to be dynamically assigned by executing a function instead of reading in the static master value, set this to func. This can be used to manage the minion's master setting from an execution module. Simply change the algorithm in the module to return a new master IP/FQDN, restart the minion, and it will connect to the new master. As of version 2016.11.0 this option can be set to disable and the minion will never attempt to talk to the master. This is useful for running a masterless minion daemon. master_type: disable max_event_sizeNew in version 2014.7.0. Default: 1048576 Passing very large events can cause the minion to consume large amounts of memory. This value tunes the maximum size of a message allowed onto the minion event bus. The value is expressed in bytes. max_event_size: 1048576 enable_legacy_startup_eventsNew in version 2019.2.0. Default: True When a minion starts up it sends a notification on the event bus with a tag that looks like this: salt/minion/<minion_id>/start. For historical reasons the minion also sends a similar event with an event tag like this: minion_start.
This duplication can cause a lot of clutter on the event bus when there are many minions. Set enable_legacy_startup_events: False in the minion config to ensure only the salt/minion/<minion_id>/start events are sent. Beginning with the 3001 Salt release this option will default to False. enable_legacy_startup_events: True master_failbackNew in version 2016.3.0. Default: False If the minion is in multi-master mode and the master_type configuration option is set to failover, this setting can be set to True to force the minion to fail back to the first master in the list if the first master is back online. master_failback: False master_failback_intervalNew in version 2016.3.0. Default: 0 If the minion is in multi-master mode, the master_type configuration is set to failover, and the master_failback option is enabled, the master failback interval can be set to ping the top master with this interval, in seconds. master_failback_interval: 0 master_alive_intervalDefault: 0 Configures how often, in seconds, the minion will verify that the current master is alive and responding. The minion will try to establish a connection to the next master in the list if it finds the existing one is dead. This setting can also be used to detect master DNS record changes when a minion has been disconnected. master_alive_interval: 30 master_shuffleNew in version 2014.7.0. Deprecated since version 2019.2.0. Default: False WARNING: This option has been deprecated in Salt 2019.2.0.
Please use random_master instead.
master_shuffle: True random_masterNew in version 2014.7.0. Changed in version 2019.2.0: The master_failback option can be used in conjunction with random_master to force the minion to fail back to the first master in the list if the first master is back online. Note that master_type must be set to failover in order for the master_failback setting to work. Default: False If master is a list of addresses, shuffle them before trying to connect to distribute the minions over all available masters. This uses Python's random.shuffle method. If multiple masters are specified in the 'master' setting as a list, the default behavior is to always try to connect to them in the order they are listed. If random_master is set to True, the order will be randomized instead upon Minion startup. This can be helpful in distributing the load of many minions executing salt-call requests, for example, from a cron job. If only one master is listed, this setting is ignored and a warning is logged. random_master: True NOTE: When the failover, master_failback, and
random_master options are used together, only the "secondary
masters" will be shuffled. The first master in the list is ignored in the
random.shuffle call. See master_failback for more
information.
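Tying the failover options together, a sketch of a minion configured for failover multi-master with shuffling (the master hostnames are placeholders):

  master:
    - master1.example.com
    - master2.example.com
  master_type: failover
  master_alive_interval: 30
  random_master: True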
retry_dnsDefault: 30 Set the number of seconds to wait before attempting to resolve the master hostname if name resolution fails. Defaults to 30 seconds. Set to zero if the minion should shut down and not retry. retry_dns: 30 retry_dns_countNew in version 2018.3.4. Default: None Set the number of attempts to perform when resolving the master hostname if name resolution fails. By default the minion will retry indefinitely. retry_dns_count: 3 master_portDefault: 4506 The port of the master ret server; this needs to coincide with the ret_port option on the Salt master. master_port: 4506 publish_portDefault: 4505 The port of the master publish server; this needs to coincide with the publish_port option on the Salt master. publish_port: 4505 source_interface_nameNew in version 2018.3.0. The name of the interface to use when establishing the connection to the Master. NOTE: If multiple IP addresses are configured on the named
interface, the first one will be selected. In that case, for a better
selection, consider using the source_address option.
NOTE: To use an IPv6 address from the named interface, make
sure the option ipv6 is enabled, i.e., ipv6: true.
NOTE: If the interface is down, it will avoid using it, and the
Minion will bind to 0.0.0.0 (all interfaces).
WARNING: This option requires a modern version of the underlying
libraries used by the selected transport:
Configuration example: source_interface_name: bond0.1234 source_addressNew in version 2018.3.0. The source IP address or the domain name to be used when connecting the Minion to the Master. See ipv6 for IPv6 connections to the Master. WARNING: This option requires a modern version of the underlying
libraries used by the selected transport:
Configuration example: source_address: if-bond0-1234.sjc.us-west.internal source_ret_portNew in version 2018.3.0. The source port to be used when connecting the Minion to the Master ret server. WARNING: This option requires a modern version of the underlying
libraries used by the selected transport:
Configuration example: source_ret_port: 49017 source_publish_portNew in version 2018.3.0. The source port to be used when connecting the Minion to the Master publish server. WARNING: This option requires a modern version of the underlying
libraries used by the selected transport:
Configuration example: source_publish_port: 49018 userDefault: root The user to run the Salt processes. user: root sudo_userDefault: '' The user to run salt remote execution commands as via sudo. If this option is enabled then sudo will be used to change the active user executing the remote command. If enabled, the user that the salt minion is configured to run as will need to be allowed access via the sudoers file. The most common option would be to use the root user. If this option is set, the user option should also be set to a non-root user. If migrating from a root minion to a non-root minion, the minion cache should be cleared and the ownership of the minion PKI directory changed to the new user. sudo_user: root pidfileDefault: /var/run/salt-minion.pid The location of the daemon's process ID file. pidfile: /var/run/salt-minion.pid root_dirDefault: / This directory is prepended to the following options: pki_dir, cachedir, log_file, sock_dir, and pidfile. root_dir: / conf_fileDefault: /usr/local/etc/salt/minion The path to the minion's configuration file. conf_file: /usr/local/etc/salt/minion pki_dirDefault: <LIB_STATE_DIR>/pki/minion The directory used to store the minion's public and private keys. <LIB_STATE_DIR> is the pre-configured variable state directory set during installation via --salt-lib-state-dir. It defaults to /usr/local/etc/salt. Systems following the Filesystem Hierarchy Standard (FHS) might set it to /var/lib/salt. pki_dir: /usr/local/etc/salt/pki/minion idDefault: the system's hostname SEE ALSO: Salt Walkthrough
The Setting up a Salt Minion section contains detailed information on how the hostname is determined. Explicitly declare the id for this minion to use. Since Salt uses detached ids it is possible to run multiple minions on the same machine but with different ids. id: foo.bar.com minion_id_cachingNew in version 0.17.2. Default: True Caches the minion id to a file when the minion's id is not statically defined in the minion config. This setting prevents potential problems when automatic minion id resolution changes, which can cause the minion to lose connection with the master. To turn off minion id caching, set this config to False. For more information, please see Issue #7558 and Pull Request #8488. minion_id_caching: True append_domainDefault: None Append a domain to a hostname in the event that it does not exist. This is useful for systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris). append_domain: foo.org minion_id_remove_domainNew in version 3000. Default: False Remove a domain when the minion id is generated as a fully qualified domain name (either by the user provided id_function, or by Salt). This is useful when the minions shall be named like hostnames. Can be a single domain (to prevent name clashes), or True, to remove all domains.
For more information, please see issue 49212 and PR 49378. minion_id_remove_domain: foo.org minion_id_lowercaseDefault: False Convert minion id to lowercase when it is being generated. Helpful when some hosts get the minion id in uppercase. Cached ids will remain the same and will not be converted. minion_id_lowercase: True cachedirDefault: /var/cache/salt/minion The location for minion cache data. This directory may contain sensitive data and should be protected accordingly. cachedir: /var/cache/salt/minion color_themeDefault: "" Specifies a path to the color theme to use for colored command line output. color_theme: /usr/local/etc/salt/color_theme append_minionid_config_dirsDefault: [] (the empty list) for regular minions, ['cachedir'] for proxy minions. Append minion_id to these configuration directories. Helps with multiple proxies and minions running on the same machine. Allowed elements in the list: pki_dir, cachedir, extension_modules. Normally not needed unless running several proxies and/or minions on the same machine. append_minionid_config_dirs: verify_envDefault: True Verify and set permissions on configuration directories at startup. verify_env: True NOTE: When set to True the verify_env option requires
WRITE access to the configuration directory (/usr/local/etc/salt/). In certain
situations such as mounting /usr/local/etc/salt/ as read-only for templating
this will create a stack trace when state.apply is called.
cache_jobsDefault: False The minion can locally cache the return data from jobs sent to it, this can be a good way to keep track of the minion side of the jobs the minion has executed. By default this feature is disabled, to enable set cache_jobs to True. cache_jobs: False grainsDefault: (empty) SEE ALSO: Using grains in a state
Statically assigns grains to the minion. grains: grains_blacklistDefault: [] Each grains key will be compared against each of the expressions in this list. Any keys which match will be filtered from the grains. Exact matches, glob matches, and regular expressions are supported. NOTE: Some states and execution modules depend on grains.
Filtering may cause them to be unavailable or run unreliably.
New in version 3000. grains_blacklist: grains_cacheDefault: False The minion can locally cache grain data instead of refreshing the data each time the grain is referenced. By default this feature is disabled; to enable, set grains_cache to True. grains_cache: False grains_cache_expirationDefault: 300 Grains cache expiration, in seconds. If the cache file is older than this number of seconds then the grains cache will be dumped and fully re-populated with fresh data. Defaults to 5 minutes. Will have no effect if grains_cache is not enabled. grains_cache_expiration: 300 grains_deep_mergeNew in version 2016.3.0. Default: False The grains can be merged, instead of overridden, using this option. This allows custom grains to define different subvalues of a dictionary grain. By default this feature is disabled; to enable, set grains_deep_merge to True. grains_deep_merge: False For example, with these custom grains functions:

def custom1_k1():
    return {"custom1": {"k1": "v1"}}

def custom1_k2():
    return {"custom1": {"k2": "v2"}}

Without grains_deep_merge, the result would be:

custom1:
  k2: v2

With grains_deep_merge, the result will be:

custom1:
  k1: v1
  k2: v2

grains_refresh_everyDefault: 0 The grains_refresh_every setting allows for a minion to periodically check its grains to see if they have changed and, if so, to inform the master of the new grains. This operation is moderately expensive, therefore care should be taken not to set this value too low. Note: This value is expressed in minutes. A value of 10 minutes is a reasonable default. grains_refresh_every: 0 grains_refresh_pre_execNew in version 3005. Default: False The grains_refresh_pre_exec setting allows for a minion to check its grains prior to the execution of any operation to see if they have changed and, if so, to inform the master of the new grains. This operation is moderately expensive, therefore care should be taken before enabling this behavior. grains_refresh_pre_exec: True metadata_server_grainsNew in version 2017.7.0. Default: False Set this option to enable gathering of cloud metadata from http://169.254.169.254/latest for use in grains (see here for more information). metadata_server_grains: True fibre_channel_grainsDefault: False The fibre_channel_grains setting will enable the fc_wwn grain for Fibre Channel WWN's on the minion. Since this grain is expensive, it is disabled by default. fibre_channel_grains: True iscsi_grainsDefault: False The iscsi_grains setting will enable the iscsi_iqn grain on the minion. Since this grain is expensive, it is disabled by default. iscsi_grains: True nvme_grainsDefault: False The nvme_grains setting will enable the nvme_nqn grain on the minion. Since this grain is expensive, it is disabled by default. nvme_grains: True mine_enabledNew in version 2015.8.10. Default: True Determines whether or not the salt minion should run scheduled mine updates. If this is set to False then the mine update function will not get added to the scheduler for the minion. mine_enabled: True mine_return_jobNew in version 2015.8.10. Default: False Determines whether or not scheduled mine updates should be accompanied by a job return for the job cache. mine_return_job: False mine_functionsDefault: Empty Designate which functions should be executed at mine_interval intervals on each minion. See this documentation on the Salt Mine for more information. Note these can be defined in the pillar for a minion as well. SEE ALSO: example minion configuration file
mine_functions: mine_intervalDefault: 60 The number of minutes between mine updates. mine_interval: 60 sock_dirDefault: /var/run/salt/minion The directory where Unix sockets will be kept. sock_dir: /var/run/salt/minion enable_fqdns_grainsDefault: True In order to calculate the fqdns grain, all the IP addresses from the minion are processed with underlying calls to socket.gethostbyaddr, which can take 5 seconds to be released (after reaching socket.timeout) when there is no fqdn for that IP. These calls to socket.gethostbyaddr are processed asynchronously; however, it still adds 5 seconds every time grains are generated if an IP does not resolve. In Windows grains are regenerated each time a new process is spawned. Therefore, the default for Windows is False. In many cases this value does not make sense to include for proxy minions, as it will be the FQDN of the host running the proxy minion process, so the default for proxy minions is False. On macOS, FQDN resolution can be very slow, therefore the default for macOS is False as well. All other OSes default to True. enable_fqdns_grains: False enable_gpu_grainsDefault: True Enable GPU hardware data for your master. Be aware that the minion can take a while to start up when lspci and/or dmidecode is used to populate the grains for the minion, so this can be set to False if you do not need these grains. enable_gpu_grains: False outputter_dirsDefault: [] A list of additional directories to search for salt outputters in. outputter_dirs: [] backup_modeDefault: '' Make backups of files replaced by the file.managed and file.recurse state modules under cachedir in the file_backup subdirectory, preserving original paths. Refer to File State Backups documentation for more details. backup_mode: minion acceptance_wait_timeDefault: 10 The number of seconds to wait until attempting to re-authenticate with the master. acceptance_wait_time: 10 acceptance_wait_time_maxDefault: 0 The maximum number of seconds to wait until attempting to re-authenticate with the master. If set, the wait will increase by acceptance_wait_time seconds each iteration. acceptance_wait_time_max: 0 rejected_retryDefault: False If the master denies or rejects the minion's public key, retry instead of exiting. These keys will be handled the same as waiting on acceptance. rejected_retry: False random_reauth_delayDefault: 10 When the master key changes, the minion will try to re-auth itself to receive the new master key. In larger environments this can cause a SYN flood on the master because all minions try to re-auth immediately. To prevent this and have a minion wait for a random amount of time, use this optional parameter. The wait-time will be a random number of seconds between 0 and the defined value. random_reauth_delay: 60 master_triesNew in version 2016.3.0. Default: 1 The number of attempts to connect to a master before giving up. Set this to -1 for unlimited attempts. This allows for a master to have downtime and the minion to reconnect to it later when it comes back up. In 'failover' mode, which is set in the master_type configuration, this value is the number of attempts for each set of masters. In this mode, it will cycle through the list of masters for each attempt. master_tries is different from auth_tries because auth_tries attempts to retry auth attempts with a single master. auth_tries is under the assumption that you can connect to the master but not gain authorization from it.
master_tries will still cycle through all of the masters in a given try, so it is appropriate if you expect occasional downtime from the master(s). master_tries: 1 auth_triesNew in version 2014.7.0. Default: 7 The number of attempts to authenticate to a master before giving up. Or, more technically, the number of consecutive SaltReqTimeoutErrors that are acceptable when trying to authenticate to the master. auth_tries: 7 auth_timeoutNew in version 2014.7.0. Default: 5 When waiting for a master to accept the minion's public key, salt will continuously attempt to reconnect until successful. This is the timeout value, in seconds, for each individual attempt. After this timeout expires, the minion will wait for acceptance_wait_time seconds before trying again. Unless your master is under unusually heavy load, this should be left at the default. NOTE: For high-latency networks, try increasing this value.
auth_timeout: 5 auth_safemodeNew in version 2014.7.0. Default: False If authentication fails due to SaltReqTimeoutError during a ping_interval, this setting, when set to True, will cause a sub-minion process to restart. auth_safemode: False request_channel_timeoutNew in version 3006.2. Default: 30 The default timeout for request channel requests. This setting can be used to tune minions to better handle long-running pillar and file client requests. request_channel_timeout: 30 request_channel_triesNew in version 3006.2. Default: 3 The default number of times the minion will try request channel requests. This setting can be used to tune minions to better handle long-running pillar and file client requests by retrying them after a timeout happens. request_channel_tries: 3 ping_intervalDefault: 0 Instructs the minion to ping its master(s) every n minutes. Used primarily as a mitigation technique against minion disconnects. ping_interval: 0 random_startup_delayDefault: 0 The maximum bound for an interval in which a minion will randomly sleep upon starting up prior to attempting to connect to a master. This can be used to splay connection attempts for cases where many minions starting up at once may place undue load on a master. For example, setting this to 5 will tell a minion to sleep for a value between 0 and 5 seconds. random_startup_delay: 5 recon_defaultDefault: 1000 The interval in milliseconds that the socket should wait before trying to reconnect to the master (1000ms = 1 second). recon_default: 1000 recon_maxDefault: 10000 The maximum time a socket should wait. Each interval the time to wait is calculated by doubling the previous time. If recon_max is reached, it starts again at the recon_default.
recon_max: 10000 recon_randomizeDefault: True Generate a random wait time on minion start. The wait time will be a random value between recon_default and recon_default + recon_max. Having all minions reconnect with the same recon_default and recon_max value largely defeats the purpose of being able to change these settings. If all minions have the same values and the setup is quite large (several thousand minions), they will still flood the master. The desired behavior is to have a time-frame within which all minions try to reconnect. recon_randomize: True loop_intervalDefault: 1 The loop_interval sets how long in seconds the minion will wait between evaluating the scheduler and running cleanup tasks. This defaults to 1 second on the minion scheduler. loop_interval: 1 pub_retDefault: True Some installations choose to start all job returns in a cache or a returner and forgo sending the results back to a master. In this workflow, jobs are most often executed with --async from the Salt CLI and then results are evaluated by examining job caches on the minions or any configured returners. WARNING: Setting this to False will disable returns back to the master. pub_ret: True return_retry_timerDefault: 5 The default timeout for a minion return attempt. return_retry_timer: 5 return_retry_timer_maxDefault: 10 The maximum timeout for a minion return attempt. If non-zero, the minion return retry timeout will be a random integer between return_retry_timer and return_retry_timer_max. return_retry_timer_max: 10 return_retry_triesDefault: 3 The maximum number of retries for a minion return attempt. return_retry_tries: 3 cache_sreqsDefault: True The connection to the master ret_port is kept open. When set to False, the minion creates a new connection for every return to the master. cache_sreqs: True ipc_modeDefault: ipc Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-process communications. ipc_mode is set to tcp on such systems. ipc_mode: ipc ipc_write_bufferDefault: 0 The maximum size of a message sent via the IPC transport module can be limited dynamically or by sharing an integer value lower than the total memory size. When the value dynamic is set, salt will use 2.5% of the total memory as the ipc_write_buffer value (rounded to an integer). A value of 0 disables this option. ipc_write_buffer: 10485760 tcp_pub_portDefault: 4510 Publish port used when ipc_mode is set to tcp. tcp_pub_port: 4510 tcp_pull_portDefault: 4511 Pull port used when ipc_mode is set to tcp. tcp_pull_port: 4511 transportDefault: zeromq Changes the underlying transport layer. ZeroMQ is the recommended transport while additional transport layers are under development. Supported values are zeromq and tcp (experimental). This setting has a significant impact on performance and should not be changed unless you know what you are doing! transport: zeromq syndic_fingerDefault: '' The key fingerprint of the higher-level master for the syndic to verify it is talking to the intended master. syndic_finger: 'ab:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:50:10' http_connect_timeoutNew in version 2019.2.0. Default: 20 HTTP connection timeout in seconds. Applied when fetching files using the tornado back-end. Should be greater than the overall download time. http_connect_timeout: 20 http_request_timeoutNew in version 2015.8.0. Default: 3600 HTTP request timeout in seconds. Applied when fetching files using the tornado back-end. Should be greater than the overall download time.
http_request_timeout: 3600 proxy_hostDefault: '' The hostname used for HTTP proxy access. proxy_host: proxy.my-domain proxy_portDefault: 0 The port number used for HTTP proxy access. proxy_port: 31337 proxy_usernameDefault: '' The username used for HTTP proxy access. proxy_username: charon proxy_passwordDefault: '' The password used for HTTP proxy access. proxy_password: obolus no_proxyNew in version 2019.2.0. Default: [] List of hosts to bypass HTTP proxy NOTE: This key does nothing unless proxy_host etc is
configured; it does not support any kind of wildcards.
no_proxy: [ '127.0.0.1', 'foo.tld' ] use_yamlloader_oldNew in version 2019.2.1. Default: False Use the pre-2019.2 YAML renderer. Uses legacy YAML rendering to support some legacy inline data structures. See the 2019.2.1 release notes for more details. use_yamlloader_old: False Docker Configurationdocker.update_mineNew in version 2017.7.8,2018.3.3. Changed in version 2019.2.0: The default value is now False Default: True If enabled, when containers are added, removed, stopped, started, etc., the mine will be updated with the results of docker.ps verbose=True all=True host=True. This mine data is used by mine.get_docker. Set this option to False to keep Salt from updating the mine with this information. NOTE: This option can also be set in Grains or Pillar data,
with Grains overriding Pillar and the minion config file overriding
Grains.
NOTE: Disabling this will of course keep mine.get_docker
from returning any information for a given minion.
docker.update_mine: False docker.compare_container_networksNew in version 2018.3.0. Default: {'static': ['Aliases', 'Links', 'IPAMConfig'], 'automatic': ['IPAddress', 'Gateway', 'GlobalIPv6Address', 'IPv6Gateway']} Specifies which keys are examined by docker.compare_container_networks. NOTE: This should not need to be modified unless new features
added to Docker result in new keys added to the network configuration which
must be compared to determine if two containers have different network
configs. This config option exists solely as a way to allow users to continue
using Salt to manage their containers after an API change, without waiting for
a new Salt release to catch up to the changes in the Docker API.
docker.compare_container_networks: optimization_orderDefault: [0, 1, 2] In cases where Salt is distributed without .py files, this option determines the priority of optimization level(s) Salt's module loader should prefer. NOTE: This option is only supported on Python 3.5+.
optimization_order: Minion Execution Module Managementdisable_modulesDefault: [] (all execution modules are enabled by default) An administrator may want to prevent a minion from executing certain modules. However, the sys module is built into the minion and cannot be disabled. This setting can also tune the minion. Because all modules are loaded into system memory, disabling modules will lower the minion's memory footprint. Modules should be specified according to their file name on the system and not by their virtual name. For example, to disable cmd, use the string cmdmod which corresponds to salt.modules.cmdmod. disable_modules: disable_returnersDefault: [] (all returners are enabled by default) If certain returners should be disabled, this is the place to list them. disable_returners: whitelist_modulesDefault: [] (Module whitelisting is disabled. Adding anything to the config option will cause only the listed modules to be enabled. Modules not in the list will not be loaded.) This option is the reverse of disable_modules. If enabled, only execution modules in this list will be loaded and executed on the minion. Note that this is a very large hammer and it can be quite difficult to keep the minion working the way you think it should since Salt uses many modules internally itself. At a bare minimum you need the following enabled or else the minion won't start. whitelist_modules: module_dirsDefault: [] A list of extra directories to search for Salt modules. module_dirs: returner_dirsDefault: [] A list of extra directories to search for Salt returners. returner_dirs: states_dirsDefault: [] A list of extra directories to search for Salt states. states_dirs: grains_dirsDefault: [] A list of extra directories to search for Salt grains. grains_dirs: render_dirsDefault: [] A list of extra directories to search for Salt renderers. render_dirs: utils_dirsDefault: [] A list of extra directories to search for Salt utilities. utils_dirs: cython_enableDefault: False Set this value to True to enable auto-loading and compiling of .pyx modules. This setting requires that gcc and cython are installed on the minion. cython_enable: False enable_zip_modulesNew in version 2015.8.0. Default: False Set this value to True to enable loading of zip archives as extension modules. This allows for packing module code with specific dependencies to avoid conflicts and/or having to install specific modules' dependencies in system libraries. enable_zip_modules: False providersDefault: (empty) A module provider can be statically overwritten or extended for the minion via the providers option. This can be done on an individual basis in an SLS file, or globally here in the minion config. providers: modules_max_memoryDefault: -1 Specify a max size (in bytes) for modules on import. This feature is currently only supported on *NIX operating systems and requires psutil. modules_max_memory: -1 extmod_whitelist/extmod_blacklistNew in version 2017.7.0. By using this dictionary, the modules that are synced to the minion's extmod cache using saltutil.sync_* can be limited. If nothing is set to a specific type, then all modules are accepted. To block all modules of a specific type, whitelist an empty list. extmod_whitelist: Valid options:
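The valid options are the loader module types (modules, states, grains, renderers, returners, and so on). Both settings take a dictionary mapping a module type to a list of module names. A minimal sketch, using hypothetical custom module names:

  extmod_whitelist:
    modules:
      - custom_module

  extmod_blacklist:
    modules:
      - specific_module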
Top File SettingsThese parameters only have an effect if running a masterless minion. state_topDefault: top.sls The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment. state_top: top.sls state_top_saltenvThis option has no default value. Set it to an environment name to ensure that only the top file from that environment is considered during a highstate. NOTE: Using this value does not change the merging strategy.
For instance, if top_file_merging_strategy is set to merge, and
state_top_saltenv is set to foo, then any sections for
environments other than foo in the top file for the foo
environment will be ignored. With state_top_saltenv set to base,
all states from all environments in the base top file will be applied,
while all other top files are ignored. The only way to set
state_top_saltenv to something other than base and not have the
other environments in the targeted top file ignored, would be to set
top_file_merging_strategy to merge_all.
state_top_saltenv: dev top_file_merging_strategyChanged in version 2016.11.0: A merge_all strategy has been added. Default: merge When no specific fileserver environment (a.k.a. saltenv) has been specified for a highstate, all environments' top files are inspected. This config option determines how the SLS targets in those top files are handled. When set to merge, the base environment's top file is evaluated first, followed by the other environments' top files. The first target expression (e.g. '*') for a given environment is kept, and when the same target expression is used in a different top file evaluated later, it is ignored. Because base is evaluated first, it is authoritative. For example, if there is a target for '*' for the foo environment in both the base and foo environment's top files, the one in the foo environment would be ignored. The environments will be evaluated in no specific order (aside from base coming first). For greater control over the order in which the environments are evaluated, use env_order. Note that, aside from the base environment's top file, any sections in top files that do not match that top file's environment will be ignored. So, for example, a section for the qa environment would be ignored if it appears in the dev environment's top file. To keep use cases like this from being ignored, use the merge_all strategy. When set to same, then for each environment, only that environment's top file is processed, with the others being ignored. For example, only the dev environment's top file will be processed for the dev environment, and any SLS targets defined for dev in the base environment's (or any other environment's) top file will be ignored. If an environment does not have a top file, then the top file from the default_top config parameter will be used as a fallback. When set to merge_all, then all states in all environments in all top files will be applied. The order in which individual SLS files will be executed will depend on the order in which the top files were evaluated, and the environments will be evaluated in no specific order. For greater control over the order in which the environments are evaluated, use env_order. top_file_merging_strategy: same env_orderDefault: [] When top_file_merging_strategy is set to merge, and no environment is specified for a highstate, this config option allows for the order in which top files are evaluated to be explicitly defined. env_order: default_topDefault: base When top_file_merging_strategy is set to same, and no environment is specified for a highstate (i.e. environment is not set for the minion), this config option specifies a fallback environment in which to look for a top file if an environment lacks one. default_top: dev startup_statesDefault: '' States to run when the minion daemon starts. To enable, set startup_states to:
startup_states: '' sls_listDefault: [] List of states to run when the minion starts up if startup_states is set to sls. sls_list: start_event_grainsDefault: [] List of grains to pass in start event when minion starts up. start_event_grains: top_fileDefault: '' Top file to execute if startup_states is set to top. top_file: '' State Management SettingsrendererDefault: jinja|yaml The default renderer used for local state executions renderer: jinja|json testDefault: False Set all state calls to only test if they are going to actually make changes or just post what changes are going to be made. test: False state_aggregateDefault: False Automatically aggregate all states that have support for mod_aggregate by setting to True. state_aggregate: True Or pass a list of state module names to automatically aggregate just those types. state_aggregate: state_queueDefault: False Instead of failing immediately when another state run is in progress, a value of True will queue the new state run to begin running once the other has finished. This option starts a new thread for each queued state run, so use this option sparingly. state_queue: True Additionally, it can be set to an integer representing the maximum queue size which can be attained before the state runs will fail to be queued. This can prevent runaway conditions where new threads are started until system performance is hampered. state_queue: 2 state_verboseDefault: True Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states that failed or states that have changes. state_verbose: True state_outputDefault: full The state_output setting controls which results will be output full multi line:
- full, terse - each state will be full/terse
- mixed - only states with errors will be full
- changes - states with changes and errors will be full
full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state ID will be used as name in the output. state_output: full state_output_diffDefault: False The state_output_diff setting changes whether or not the output from successful states is returned. Useful when even the terse output of these states is cluttering the logs. Set it to True to ignore them. state_output_diff: False state_output_profileDefault: True The state_output_profile setting changes whether profile information will be shown for each state run. state_output_profile: True state_output_pctDefault: False The state_output_pct setting changes whether success and failure information as a percent of total actions will be shown for each state run. state_output_pct: False state_compress_idsDefault: False The state_compress_ids setting aggregates information about states which have multiple "names" under the same state ID in the highstate output. state_compress_ids: False autoload_dynamic_modulesDefault: True autoload_dynamic_modules turns on automatic loading of modules found in the environments on the master. This is turned on by default. To turn off auto-loading modules when states run, set this value to False. autoload_dynamic_modules: True clean_dynamic_modulesDefault: True clean_dynamic_modules keeps the dynamic modules on the minion in sync with the dynamic modules on the master. This means that if a dynamic module is not on the master it will be deleted from the minion. By default this is enabled and can be disabled by changing this value to False. clean_dynamic_modules: True NOTE: If extmod_whitelist is specified, modules which
are not whitelisted will also be cleaned here.
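As a hedged sketch of that interaction (the module names here are hypothetical), a whitelist such as the following would cause any synced execution or state modules not listed in it to be cleaned as described above:

extmod_whitelist:
  modules:
    - custom_module
  states:
    - custom_state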
saltenvChanged in version 2018.3.0: Renamed from environment to saltenv. If environment is used, saltenv will take its value. If both are used, environment will be ignored and saltenv will be used. The default fileserver environment to use when copying files and applying states. saltenv: dev lock_saltenvNew in version 2018.3.0. Default: False For purposes of running states, this option prevents using the saltenv argument to manually set the environment. This is useful to keep a minion which has the saltenv option set to dev from running states from an environment other than dev. lock_saltenv: True snapper_statesDefault: False The snapper_states value is used to enable taking snapper snapshots before and after salt state runs. This allows for state runs to be rolled back. For snapper states to function properly snapper needs to be installed and enabled. snapper_states: True snapper_states_configDefault: root Snapper can execute based on a snapper configuration. The configuration needs to be set up before snapper can use it. The default configuration is root, this default makes snapper run on SUSE systems using the default configuration set up at install time. snapper_states_config: root global_state_conditionsDefault: None If set, this parameter expects a dictionary of state module names as keys and a list of conditions which must be satisfied in order to run any functions in that state module. global_state_conditions: File Directory Settingsfile_clientDefault: remote The client defaults to looking on the master server for files, but can be directed to look on the minion by setting this parameter to local. file_client: remote use_master_when_localDefault: False When using a local file_client, this parameter is used to allow the client to connect to a master for remote execution. use_master_when_local: False file_rootsDefault: base: When using a local file_client, this parameter is used to setup the fileserver's environments. This parameter operates identically to the master config parameter of the same name. file_roots: fileserver_followsymlinksNew in version 2014.1.0. Default: True By default, the file_server follows symlinks when walking the filesystem tree. Currently this only applies to the default roots fileserver_backend. fileserver_followsymlinks: True fileserver_ignoresymlinksNew in version 2014.1.0. Default: False If you do not want symlinks to be treated as the files they are pointing to, set fileserver_ignoresymlinks to True. By default this is set to False. When set to True, any detected symlink while listing files on the Master will not be returned to the Minion. fileserver_ignoresymlinks: False hash_typeDefault: sha256 The hash_type is the hash to use when discovering the hash of a file on the local fileserver. The default is sha256, but md5, sha1, sha224, sha384, and sha512 are also supported. hash_type: sha256 Pillar Configurationpillar_rootsDefault: base: When using a local file_client, this parameter is used to setup the pillar environments. pillar_roots: on_demand_ext_pillarNew in version 2016.3.6,2016.11.3,2017.7.0. Default: ['libvirt', 'virtkey'] When using a local file_client, this option controls which external pillars are permitted to be used on-demand using pillar.ext. on_demand_ext_pillar: WARNING: This will allow a masterless minion to request specific
pillar data via pillar.ext, and may be considered a security risk.
However, pillar data generated in this way will not affect the in-memory
pillar data, so this risk is limited to instances in which
states/modules/etc. (built-in or custom) rely upon pillar data generated by
pillar.ext.
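For example, to extend the default list so that the git external pillar may also be called on demand (a sketch; the git ext_pillar itself must still be configured with valid repositories):

on_demand_ext_pillar:
  - libvirt
  - virtkey
  - git

A masterless minion could then request that data explicitly through pillar.ext, subject to the security caveat above.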
decrypt_pillarNew in version 2017.7.0. Default: [] A list of paths to be recursively decrypted during pillar compilation.

decrypt_pillar:
  - 'foo:bar': gpg
  - 'lorem:ipsum:dolor'

Entries in this list can be formatted either as a simple string, or as a key/value pair, with the key being the pillar location, and the value being the renderer to use for pillar decryption. If the former is used, the renderer specified by decrypt_pillar_default will be used. decrypt_pillar_delimiterNew in version 2017.7.0. Default: : The delimiter used to distinguish nested data structures in the decrypt_pillar option.

decrypt_pillar_delimiter: '|'
decrypt_pillar:
  - 'foo|bar': gpg
  - 'lorem|ipsum|dolor'

decrypt_pillar_defaultNew in version 2017.7.0. Default: gpg The default renderer used for decryption, if one is not specified for a given pillar key in decrypt_pillar. decrypt_pillar_default: my_custom_renderer decrypt_pillar_renderersNew in version 2017.7.0. Default: ['gpg'] List of renderers which are permitted to be used for pillar decryption.

decrypt_pillar_renderers:
  - gpg
  - my_custom_renderer

gpg_decrypt_must_succeedNew in version 3005. Default: False If this is True and the ciphertext could not be decrypted, then an error is raised. Sending the ciphertext through is basically never desired, for example if a state is setting a database password from pillar and gpg rendering fails, then the state will update the password to the ciphertext, which by definition is not encrypted. WARNING: The value defaults to False for backwards
compatibility. In the Chlorine release, this option will default to
True.
gpg_decrypt_must_succeed: False pillarenvDefault: None Isolates the pillar environment on the minion side. This functions the same as the environment setting, but for pillar instead of states. pillarenv: dev pillarenv_from_saltenvNew in version 2017.7.0. Default: False When set to True, the pillarenv value will assume the value of the effective saltenv when running states. This essentially makes salt '*' state.sls mysls saltenv=dev equivalent to salt '*' state.sls mysls saltenv=dev pillarenv=dev. If pillarenv is set, either in the minion config file or via the CLI, it will override this option. pillarenv_from_saltenv: True pillar_raise_on_missingNew in version 2015.5.0. Default: False Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a named value from pillar fails. When this option is set to False, the failed attempt returns an empty string. minion_pillar_cacheNew in version 2016.3.0. Default: False The minion can locally cache rendered pillar data under cachedir/pillar. This allows a temporarily disconnected minion to access previously cached pillar data by invoking salt-call with the --local option and --pillar_root set to <cachedir>/pillar. Before enabling this setting consider that the rendered pillar may contain security sensitive data. Appropriate access restrictions should be in place. By default the saved pillar data will be readable only by the user account running salt. By default this feature is disabled; to enable it, set minion_pillar_cache to True. minion_pillar_cache: False file_recv_max_sizeNew in version 2014.7.0. Default: 100 Set a hard-limit on the size of the files that can be pushed to the master. It will be interpreted as megabytes. file_recv_max_size: 100 pass_to_ext_pillarsSpecify a list of configuration keys whose values are to be passed to external pillar functions. Suboptions can be specified using the ':' notation (i.e. option:suboption). The values are merged and included in the extra_minion_data optional parameter of the external pillar function. The extra_minion_data parameter is passed only to the external pillar functions that have it explicitly specified in their definition. If the config contains

opt1: value1
opt2:
  subopt1: value2

the extra_minion_data parameter will be {"opt1": "value1", "opt2": {"subopt1": "value2"}}
ssh_merge_pillarNew in version 2018.3.2. Default: True Merges the compiled pillar data with the pillar data already available globally. This is useful when using salt-ssh or salt-call --local and overriding the pillar data in a state file: apply_showpillar: If set to True, the showpillar state will have access to the global pillar data. If set to False, only the overriding pillar data will be available to the showpillar state. Security Settingsopen_modeDefault: False Open mode can be used to clean out the PKI key received from the Salt master, turn on open mode, restart the minion, then turn off open mode and restart the minion to clean the keys. open_mode: False master_fingerDefault: '' Fingerprint of the master public key to validate the identity of your Salt master before the initial key exchange. The master fingerprint can be found as master.pub by running "salt-key -F master" on the Salt master. master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13' keysizeDefault: 2048 The size of key that should be generated when creating new keys. keysize: 2048 permissive_pki_accessDefault: False Enable permissive access to the salt keys. This allows you to run the master or minion as root, but have a non-root group be given access to your pki_dir. To make the access explicit, root must belong to the group you've given access to. This is potentially quite insecure. permissive_pki_access: False verify_master_pubkey_signDefault: False Enables verification of the master-public-signature returned by the master in auth-replies. Please see the tutorial on how to configure this properly Multimaster-PKI with Failover Tutorial New in version 2014.7.0. verify_master_pubkey_sign: True If this is set to True, master_sign_pubkey must be also set to True in the master configuration file. master_sign_key_nameDefault: master_sign The filename without the .pub suffix of the public key that should be used for verifying the signature from the master. The file must be located in the minion's pki directory. New in version 2014.7.0. master_sign_key_name: <filename_without_suffix> autosign_grainsNew in version 2018.3.0. Default: not defined The grains that should be sent to the master on authentication to decide if the minion's key should be accepted automatically. Please see the Autoaccept Minions from Grains documentation for more information. autosign_grains: always_verify_signatureDefault: False If verify_master_pubkey_sign is enabled, the signature is only verified if the public-key of the master changes. If the signature should always be verified, this can be set to True. New in version 2014.7.0. always_verify_signature: True cmd_blacklist_globDefault: [] If cmd_blacklist_glob is enabled then any shell command called over remote execution or via salt-call will be checked against the glob matches found in the cmd_blacklist_glob list and any matched shell command will be blocked. NOTE: This blacklist is only applied to direct executions made
by the salt and salt-call commands. This does NOT blacklist
commands called from states or shell commands executed from other
modules.
New in version 2016.11.0.

cmd_blacklist_glob:
  - 'rm * '
  - 'cat /etc/* '

cmd_whitelist_globDefault: [] If cmd_whitelist_glob is enabled then any shell command called over remote execution or via salt-call will be checked against the glob matches found in the cmd_whitelist_glob list and any shell command NOT found in the list will be blocked. If cmd_whitelist_glob is NOT SET, then all shell commands are permitted. NOTE: This whitelist is only applied to direct executions made
by the salt and salt-call commands. This does NOT restrict
commands called from states or shell commands executed from other
modules.
New in version 2016.11.0.

cmd_whitelist_glob:
  - 'ls * '
  - 'cat /etc/fstab'

sslNew in version 2016.11.0. Default: None TLS/SSL connection options. This could be set to a dictionary containing arguments corresponding to the Python ssl.wrap_socket method. For details see Tornado and Python documentation. Note: to set enum argument values like cert_reqs and ssl_version use constant names without the ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23.

ssl:
  keyfile: <path_to_keyfile>
  certfile: <path_to_certfile>
  ssl_version: PROTOCOL_TLSv1_2

encryption_algorithmNew in version 3006.9. Default: OAEP-SHA1 The RSA encryption algorithm used by this minion when connecting to the master's request channel. Valid values are OAEP-SHA1 and OAEP-SHA224. signing_algorithmNew in version 3006.9. Default: PKCS1v15-SHA1 The RSA signing algorithm used by this minion when connecting to the master's request channel. Valid values are PKCS1v15-SHA1 and PKCS1v15-SHA224. Reactor SettingsreactorDefault: [] Defines a salt reactor. See the Reactor documentation for more information. reactor: [] reactor_refresh_intervalDefault: 60 The TTL for the cache of the reactor configuration. reactor_refresh_interval: 60 reactor_worker_threadsDefault: 10 The number of workers for the runner/wheel in the reactor. reactor_worker_threads: 10 reactor_worker_hwmDefault: 10000 The queue size for workers in the reactor. reactor_worker_hwm: 10000 Thread SettingsmultiprocessingDefault: True If multiprocessing is enabled, then when a minion receives a publication a new process is spawned and the command is executed therein. Conversely, if multiprocessing is disabled the new publication will be executed in a thread. multiprocessing: True process_count_maxNew in version 2018.3.0. Default: -1 Limit the maximum number of processes or threads created by salt-minion. This is useful to avoid resource exhaustion in case the minion receives more publications than it is able to handle, as it limits the number of spawned processes or threads. -1 is the default and disables the limit. process_count_max: -1 Minion Logging Settingslog_fileDefault: /var/log/salt/minion The minion log can be sent to a regular file, local path name, or network location. See also log_file. Examples:

log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514

log_levelDefault: warning The level of messages to send to the console. See also log_level. log_level: warning Any log level below the info level is INSECURE and may log sensitive data. This currently includes: profile, debug, trace, garbage, and all. log_level_logfileDefault: warning The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by the log_level option. log_level_logfile: warning Any log level below the info level is INSECURE and may log sensitive data. This currently includes: profile, debug, trace, garbage, and all. log_datefmtDefault: %H:%M:%S The date and time format used in console log messages. See also log_datefmt. log_datefmt: '%H:%M:%S' log_datefmt_logfileDefault: %Y-%m-%d %H:%M:%S The date and time format used in log file messages. See also log_datefmt_logfile. log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' log_fmt_consoleDefault: [%(levelname)-8s] %(message)s The format of the console logging messages. See also log_fmt_console. NOTE: Log colors are enabled in log_fmt_console rather
than the color config since the logging system is loaded before the
minion config.
Console log colors are specified by these additional formatters: %(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s Since it is desirable to include the surrounding brackets, '[' and ']', in the coloring of the messages, these color formatters also include padding as well. Color LogRecord attributes are only available for console logging. log_fmt_console: '%(colorlevel)s %(colormsg)s' log_fmt_console: '[%(levelname)-8s] %(message)s' log_fmt_logfileDefault: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s The format of the log file logging messages. See also log_fmt_logfile. log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s' log_granular_levelsDefault: {} This can be used to control logging levels more specifically. See also log_granular_levels. log_rotate_max_bytesDefault: 0 The maximum number of bytes a single log file may contain before it is rotated. A value of 0 disables this feature. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. log_rotate_max_bytes log_rotate_backup_countDefault: 0 The number of backup files to keep when rotating log files. Only used if log_rotate_max_bytes is greater than 0. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. log_rotate_backup_count zmq_monitorDefault: False To diagnose issues with minions disconnecting or missing returns, ZeroMQ supports the use of monitor sockets to log connection events. This feature requires ZeroMQ 4.0 or higher. To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a debug level or higher. A sample log event is as follows: [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
'value': 27, 'description': 'EVENT_DISCONNECTED'}
All events logged will include the string ZeroMQ event. A connection event should be logged as the minion starts up and initially connects to the master. If not, check for debug log level and that the necessary version of ZeroMQ is installed. tcp_authentication_retriesDefault: 5 The number of times to retry authenticating with the salt master when it comes back online. Zeromq does a lot to make sure when connections come back online that they reauthenticate. The tcp transport should try to connect with a new connection if the old one times out on reauthenticating. -1 for infinite tries. tcp_reconnect_backoffDefault: 1 The time in seconds to wait before attempting another connection with salt master when the previous connection fails while on TCP transport. failhardDefault: False Set the global failhard flag. This informs all states to stop running states at the moment a single state fails failhard: False Include ConfigurationConfiguration can be loaded from multiple files. The order in which this is done is:
1. The minion config file itself
2. The files matching the glob in default_include
3. The files matching the glob in include (if defined)
Each successive step overrides any values defined in the previous steps. Therefore, any config options defined in one of the default_include files would override the same value in the minion config file, and any options defined in include would override both. default_includeDefault: minion.d/*.conf The minion can include configuration from other files. By default the minion will automatically include all config files from minion.d/*.conf where minion.d is relative to the directory of the minion configuration file. NOTE: Salt creates files in the minion.d directory for
its own use. These files are prefixed with an underscore. A common example of
this is the _schedule.conf file.
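To illustrate the mechanism, a hypothetical drop-in file such as minion.d/logging.conf (the file name is an assumption) could carry a fragment of minion configuration that overrides options in the main config file, for example:

log_level: info
log_level_logfile: warning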
includeDefault: not defined The minion can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the minion will log a warning message.

# Include files from a minion.d directory in the same
# directory as the minion config file
include: minion.d/*.conf

# Include a single extra file into the configuration
include: /etc/roles/webserver

# Include several files and the minion.d directory
include:
  - extra_config
  - minion.d/*
  - /etc/roles/webserver

Keepalive Settingstcp_keepaliveDefault: True The tcp keepalive interval to set on TCP ports. This setting can be used to tune Salt connectivity issues in messy network environments with misbehaving firewalls. tcp_keepalive: True tcp_keepalive_cntDefault: -1 Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects. tcp_keepalive_cnt: -1 tcp_keepalive_idleDefault: 300 Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects. tcp_keepalive_idle: 300 tcp_keepalive_intvlDefault: -1 Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects. tcp_keepalive_intvl: -1 Frozen Build Update SettingsThese options control how salt.modules.saltutil.update() works with esky frozen apps. For more information look at https://github.com/cloudmatrix/esky/. update_urlDefault: False (Update feature is disabled) The url to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups. update_url: 'http://salt.example.com/minion-updates' update_restart_servicesDefault: [] (service restarting on update is disabled) A list of services to restart when the minion software is updated. This would typically just be a list containing the minion's service name, but you may have other services that need to go with it. update_restart_services: ['salt-minion'] Windows Software Repo SettingsThese settings apply to all minions, whether running in masterless or master-minion mode. winrepo_cache_expire_minNew in version 2016.11.0. Default: 1800 If set to a nonzero integer, then passing refresh=True to functions in the windows pkg module will not refresh the windows repo metadata if the age of the metadata is less than this value. The exception to this is pkg.refresh_db, which will always refresh the metadata, regardless of age. winrepo_cache_expire_min: 1800 winrepo_cache_expire_maxNew in version 2016.11.0. Default: 21600 If the windows repo metadata is older than this value, and the metadata is needed by a function in the windows pkg module, the metadata will be refreshed. winrepo_cache_expire_max: 86400 winrepo_source_dirDefault: salt://win/repo-ng/ The source location for the winrepo sls files. winrepo_source_dir: salt://win/repo-ng/ Standalone Minion Windows Software Repo SettingsThe following settings are for configuring the Windows Software Repository (winrepo) on a masterless minion. To run in masterless minion mode, set the file_client to local or run salt-call with the --local option. IMPORTANT: These config options are only valid for minions running
in masterless mode.
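To make that requirement concrete, the relevant minion config line is shown below (alternatively, pass --local to salt-call on each invocation):

file_client: local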
winrepo_dirChanged in version 2015.8.0: Renamed from win_repo to winrepo_dir. This option did not have a default value until this version. Default: C:\salt\srv\salt\win\repo Location on the minion file_roots where winrepo files are kept. This is also where the winrepo_remotes are cloned to by winrepo.update_git_repos. winrepo_dir: 'D:\winrepo' winrepo_dir_ngNew in version 2015.8.0: A new ng repo was added. Default: C:\salt\srv\salt\win\repo-ng Location on the minion file_roots where winrepo files are kept for 2015.8.0 and later minions. This is also where the winrepo_remotes are cloned to by winrepo.update_git_repos. winrepo_dir_ng: /usr/local/etc/salt/states/win/repo-ng winrepo_cachefileChanged in version 2015.8.0: Renamed from win_repo_cachefile to winrepo_cachefile. Also, this option did not have a default value until this version. Default: winrepo.p The name of the winrepo cache file. The file will be created at the root of the directory specified by winrepo_dir_ng. winrepo_cachefile: winrepo.p winrepo_remotesChanged in version 2015.8.0: Renamed from win_gitrepos to winrepo_remotes. Also, this option did not have a default value until this version. New in version 2015.8.0. Default: ['https://github.com/saltstack/salt-winrepo.git'] List of git repositories to checkout and include in the winrepo.

winrepo_remotes:
  - https://github.com/saltstack/salt-winrepo.git

To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:

winrepo_remotes:
  - '<commit_id> https://github.com/saltstack/salt-winrepo.git'

Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo. winrepo_remotes_ngNew in version 2015.8.0: A new ng repo was added. Default: ['https://github.com/saltstack/salt-winrepo-ng.git'] List of git repositories to checkout and include in the winrepo for 2015.8.0 and later minions.

winrepo_remotes_ng:
  - https://github.com/saltstack/salt-winrepo-ng.git

To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:

winrepo_remotes_ng:
  - '<commit_id> https://github.com/saltstack/salt-winrepo-ng.git'

Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo. Configuring the Salt Proxy MinionThe Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-proxy is configured via the proxy configuration file. SEE ALSO: example proxy minion configuration file
The Salt Proxy Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the proxy knows where to locate its master. By default (on FreeBSD), the salt-proxy configuration will be in /usr/local/etc/salt/proxy. With the Salt 3004 release, the ability to configure proxy minions using the delta proxy was introduced. The delta proxy provides the ability for a single control proxy minion to manage multiple proxy minions. SEE ALSO: Installing and Using Deltaproxy
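A minimal proxy configuration sketch along those lines (the address is a placeholder):

# /usr/local/etc/salt/proxy
master: <ip or hostname of the salt master>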
Proxy-specific Configuration Optionsadd_proxymodule_to_optsNew in version 2015.8.2. Changed in version 2016.3.0. Default: False Add the proxymodule LazyLoader object to opts. add_proxymodule_to_opts: True proxy_merge_grains_in_moduleNew in version 2016.3.0. Changed in version 2017.7.0. Default: True If a proxymodule has a function called grains, then call it during regular grains loading and merge the results with the proxy's grains dictionary. Otherwise it is assumed that the module calls the grains function in a custom way and returns the data elsewhere. proxy_merge_grains_in_module: False proxy_keep_aliveNew in version 2017.7.0. Default: True Whether the connection with the remote device should be restarted when dead. The proxy module must implement the alive function, otherwise the connection is considered alive. proxy_keep_alive: False proxy_keep_alive_intervalNew in version 2017.7.0. Default: 1 The frequency of keepalive checks, in minutes. It requires the proxy_keep_alive option to be enabled (and the proxy module to implement the alive function). proxy_keep_alive_interval: 5 proxy_always_aliveNew in version 2017.7.0. Default: True Whether the proxy should maintain the connection with the remote device. Similarly to proxy_keep_alive, this option is very specific to the design of the proxy module. When proxy_always_alive is set to False, the connection with the remote device is not maintained and has to be closed after every command. proxy_always_alive: False proxy_merge_pillar_in_optsNew in version 2017.7.3. Default: False. Whether the pillar data is to be merged into the proxy configuration options. As multiple proxies can run on the same server, we may need different configuration options for each, while there's one single configuration file. The solution is merging the pillar data of each proxy minion into the opts. proxy_merge_pillar_in_opts: True proxy_deep_merge_pillar_in_optsNew in version 2017.7.3. Default: False. Deep merge of pillar data into configuration opts. This option is evaluated only when proxy_merge_pillar_in_opts is enabled. proxy_merge_pillar_in_opts_strategyNew in version 2017.7.3. Default: smart. The strategy used when merging pillar configuration into opts. This option is evaluated only when proxy_merge_pillar_in_opts is enabled. proxy_mines_pillarNew in version 2017.7.3. Default: True. Allow enabling mine details using pillar data. This evaluates the mine configuration under the pillar, for the following regular minion options that are also equally available on the proxy minion: mine_interval and mine_functions. Delta proxy minionsWelcome to the delta proxy minion installation guide. This installation guide explains the process for installing and using the delta proxy minion, which is available beginning in version 3004. This guide is intended for system and network administrators with the general knowledge and experience required in the field. This guide is also intended for users who have ideally already tested and used standard Salt proxy minions in their environment before deciding to move to a delta proxy minion environment. See Salt proxy minions for more information. NOTE: If you have not used standard Salt proxy minions before,
consider testing and deploying standard Salt proxy minions in your environment
first.
Proxy minions vs. delta proxy minionsSalt can target network devices through Salt proxy minions. Proxy minions allow you to control network devices that, for whatever reason, cannot run the standard Salt minion. Examples include:
- Network gear that has an API but runs a proprietary OS
- Devices with limited CPU or memory
- Devices that could run a minion but will not for security reasons
A proxy minion acts as an intermediary between the Salt master and the device it represents. The proxy minion runs on the Salt master and then translates commands from the Salt master to the device as needed. By acting as an intermediary for the actual minion, proxy minions eliminate the need to establish a constant connection from a Salt master to a minion. Proxy minions generally only open a connection to the actual minion when necessary. Proxy minions also reduce the amount of CPU or memory the minion must spend checking for commands from the Salt master. Proxy minions use the Salt master's CPU or memory to check for commands. The actual minion only needs to use CPU or memory to run commands when needed. NOTE: For more information about Salt proxy minions, see:
When delta proxy minions are neededNormally, you would create a separate instance of proxy minion for each device that needs to be managed. However, this doesn't always scale well if you have thousands of devices. Running several thousand proxy minions can require a lot of memory and CPU. A delta proxy minion can solve this problem: it makes it possible to run one minion that acts as the intermediary between the Salt master and the many network devices it can represent. In this scenario, one device (the delta proxy minion on the Salt master) runs several proxies. This configuration boosts performance and improves the overall scalability of the network. Key termsThe following lists some important terminology that is used throughout this guide:
Pre-installationBefore you startBefore installing the delta proxy minion, ensure that:
Install or upgrade SaltEnsure your Salt masters are running at least Salt version 3004. For instructions on installing or upgrading Salt, see repo.saltproject.io. For RedHat systems, see Install or Upgrade Salt. InstallationBefore you begin the delta proxy minion installation process, ensure you have read and completed the Pre-installation steps. Overview of the installation processSimilar to proxy minions, all the delta proxy minion configurations are done on the Salt master rather than on the minions that will be managed. The installation process has the following phases:
Configure the master to use delta proxyIn this step, you'll create a configuration file on the Salt master that defines its proxy settings. This is a general configuration file that tells the Salt master how to handle all proxy minions. To create this configuration:
# Use delta proxy metaproxy
metaproxy: deltaproxy

# Disable the FQDNS grain
enable_fqdns_grains: False

# Enable multiprocessing
multiprocessing: True

NOTE: See the following section about delta proxy
configuration options for a more detailed description of these
configuration options.
Your Salt master is now configured to use delta proxy. Next, you need to Create a pillar file for each managed device. Delta proxy configuration optionsThe following table describes the configuration options used in the delta proxy configuration file:
Create a pillar file for each managed deviceEach device that needs to be managed by delta proxy needs a separate pillar file on the Salt master. To create this file:
proxy: NOTE: The available configuration options vary depending on the
proxy type (in other words, the type of device it is). To read a detailed
explanation of the configuration options, refer to the proxy module
documentation for the type of device you need to manage. See:
my_managed_device_minion_ID:
You've now created the pillar file for the minions that will be managed by the delta proxy minion and you have referenced these files in the top file. Proceed to the next section. Create a control proxy configuration fileOn the Salt master, you'll need to create or edit a control proxy file for each control proxy. The control proxy manages several devices and issues commands to the network devices it represents. The Salt master needs at least one control proxy, but it is possible to have more than one control proxy, each managing a different set of devices. To configure a control proxy, you'll create a file that lists the minion IDs of the minions that it will manage. Then you will reference this control proxy configuration file in the top file. To create a control proxy configuration file:
proxy:
  proxytype: deltaproxy
  ids:
    - my_managed_device_minion_ID
base:
metaproxy: deltaproxy Now that you have created the necessary configurations, proceed to the next section. Start the delta proxy minionAfter you've successfully configured the delta proxy minion, you need to start the proxy minion service for each managed device and validate that it is working correctly. NOTE: This step explains the process for starting a single
instance of a delta proxy minion. Because starting each minion individually
can potentially be very time-consuming, most organizations use a script to
start their delta proxy minions since there are typically many devices being
managed. Consider implementing a similar script for your environment to save
time in deployment.
To start a single instance of a delta proxy minion and test that it is configured correctly:
sudo salt-proxy --proxyid=<control_proxy_id>
salt my_managed_device_minion_ID test.version This command returns an output similar to the following: local: After you've successfully started the delta proxy minions and verified that they are working correctly, you can now use these minions the same as standard proxy minions. Additional resourcesThis reference section includes additional resources for delta proxy minions. For reference, see:
Configuration file examples
Example master configuration file##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master.
# Values that are commented out but have an empty line after the comment are
# defaults that do not need to be set in the config. If there is no blank line
# after the comment then the value is presented as an example and is not the
# default.
# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf
# The address of the interface to bind to:
#interface: 0.0.0.0
# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False
# The tcp port used by the publisher:
#publish_port: 4505
# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root
# Tell the master to also use salt-ssh when running commands against minions.
#enable_ssh_minions: False
# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506
# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid
# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile, autosign_grains_dir:
#root_dir: /
# The path to the master's configuration file.
#conf_file: /usr/local/etc/salt/master
# Directory used to store public key data:
#pki_dir: /usr/local/etc/salt/pki/master
# Key cache. Increases master speed for large numbers of accepted
# keys. Available options: 'sched'. (Updates on a fixed schedule.)
# Note that enabling this feature means that minions will not be
# available to target for up to the length of the maintenance loop
# which by default is 60s.
#key_cache: ''
# Directory to store job and cache data:
# This directory may contain sensitive data and should be protected accordingly.
#
#cachedir: /var/cache/salt/master
# Directory where custom modules sync to. This directory can contain
# subdirectories for each of Salt's module types such as "runners",
# "output", "wheel", "modules", "states", "returners", "engines",
# "utils", etc.
#
# Note, any directories or files not found in the `module_dirs`
# location will be removed from the extension_modules path.
#extension_modules: /var/cache/salt/master/extmods
# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", "engines", "utils", etc.
#module_dirs: []
# Verify and set permissions on configuration directories at startup:
#verify_env: True
# Set the number of hours to keep old job information in the job cache.
# This option is deprecated by the keep_jobs_seconds option.
#keep_jobs: 24
# Set the number of seconds to keep old job information in the job cache:
#keep_jobs_seconds: 86400
# The number of seconds to wait when the client is requesting information
# about running jobs.
#gather_job_timeout: 10
# Set the default timeout for the salt command and api. The default is 5
# seconds.
#timeout: 5
# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60
# Set the default outputter used by the salt command. The default is "nested".
#output: nested
# To set a list of additional directories to search for salt outputters, set the
# outputter_dirs option.
#outputter_dirs: []
# Set the default output file used by the salt command. Default is to output
# to the CLI and not to a file. Functions the same way as the "--out-file"
# CLI option, only sets this to a single file for all salt commands.
#output_file: None
# Return minions that time out when running commands like test.ping
#show_timeout: True
# Tell the client to display the jid when a job is published.
#show_jid: False
# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True
# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False
# To display a summary of the number of minions targeted, the number of
# minions returned, and the number of minions that did not return, set the
# cli_summary value to True. (False by default.)
#
#cli_summary: False
# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master
# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
# enable_gpu_grains: False
# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True
# Cache minion grains, pillar and mine data via the cache subsystem in the
# cachedir or a database.
#minion_data_cache: True
# Cache subsystem module to use for minion data cache.
#cache: localfs
# Enables a fast in-memory cache booster and sets the expiration time.
#memcache_expire_seconds: 0
# Set a memcache limit in items (bank + key) per cache storage (driver + driver_opts).
#memcache_max_items: 1024
# Each time a cache storage gets full, clean up all the expired items, not just the oldest one.
#memcache_full_cleanup: False
# Enable collecting the memcache stats and log it on `debug` log level.
#memcache_debug: False
# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue, and event_return_queue_max_seconds below.)
#
#event_return: mysql
# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0
# In some cases enabling event return queueing can be very helpful, but the bus
# may not be busy enough to flush the queue consistently. Setting this to a reasonable
# value (1-30 seconds) will cause the queue to be flushed when the oldest event is older
# than `event_return_queue_max_seconds` regardless of how many events are in the queue.
#event_return_queue_max_seconds: 0
# Only return events matching tags in a whitelist, supports glob matches.
#event_return_whitelist:
# - salt/master/a_tag
# - salt/run/*/ret
# Store all event returns **except** the tags in a blacklist, supports globs.
#event_return_blacklist:
# - salt/master/not_this_tag
# - salt/wheel/*/ret
# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576
# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems
#ipc_mode: ipc
# Overwrite the default tcp ports used by the minion when ipc_mode is set to 'tcp'
#tcp_master_pub_port: 4510
#tcp_master_pull_port: 4511
# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False
# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previously deleted minion ID.
#preserve_minion_cache: False
# Allow or deny minions from requesting their own key revocation
#allow_minion_key_revoke: True
# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False
# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /usr/local/etc/salt/extra_config
#
# Include config from several files and directories:
# include:
# - /usr/local/etc/salt/extra_config
##### Large-scale tuning settings #####
##########################################
# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
# Too many open files (tcp_listener.cpp:335)
# Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, i.e., the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
# raise max open files hard limit debian
#
#max_open_files: 100000
# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads. This setting cannot be
# set lower than 3.
#worker_threads: 5
# Set the ZeroMQ high water marks
# http://api.zeromq.org/3-2:zmq-setsockopt
# The listen queue size / backlog
#zmq_backlog: 1000
# The publisher interface ZeroMQPubServerChannel
#pub_hwm: 1000
# The master may allocate memory per-event and not
# reclaim it.
# To set a high-water mark for memory allocation, use
# ipc_write_buffer to set a high-water mark for message
# buffering.
# Value: In bytes. Set to 'dynamic' to have Salt select
# a value for you. Default is disabled.
# ipc_write_buffer: 'dynamic'
# These two batch settings, batch_safe_limit and batch_safe_size, are used to
# automatically switch to a batch mode execution. If a command would have been
# sent to more than <batch_safe_limit> minions, then run the command in
# batches of <batch_safe_size>. If no batch_safe_size is specified, a default
# of 8 will be used. If no batch_safe_limit is specified, then no automatic
# batching will occur.
#batch_safe_limit: 100
#batch_safe_size: 8
# Master stats enables stats events to be fired from the master at close
# to the defined interval
#master_stats: False
#master_stats_event_iter: 60
##### Security settings #####
##########################################
# Enable passphrase protection of Master private key. Although a string value
# is acceptable, passwords should be stored in an external vaulting mechanism
# and retrieved via sdb. See https://docs.saltproject.io/en/latest/topics/sdb/.
# Passphrase protection is off by default but an example of an sdb profile and
# query is as follows.
# masterkeyring:
# driver: keyring
# service: system
#
# key_pass: sdb://masterkeyring/key_pass
# Enable passphrase protection of the Master signing_key. This only applies if
# master_sign_pubkey is set to True. This is disabled by default.
# master_sign_pubkey: True
# signing_key_pass: sdb://masterkeyring/signing_pass
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False
# Enable auto_accept, this setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False
# The size of key that should be generated when creating new keys.
#keysize: 2048
# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
# are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
# autosign_timeout: 120
# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure. Regular
# expressions as well as globbing lines are supported. The file must be readonly
# except for the owner. Use permissive_pki_access to allow the group write access.
#autosign_file: /usr/local/etc/salt/autosign.conf
# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /usr/local/etc/salt/autoreject.conf
# If the autosign_grains_dir is specified, incoming keys from minions with grain
# values matching those defined in files in this directory will be accepted
# automatically. This is insecure. Minions need to be configured to send the grains.
#autosign_grains_dir: /usr/local/etc/salt/autosign_grains
# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an autosign_file
# is specified, enabling permissive_pki_access will allow group access to that
# specific file.
#permissive_pki_access: False
# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non-root users. By default this capability is completely
# disabled.
#publisher_acl:
# larry:
# - test.ping
# - network.*
#
# Blacklist any of the following users or modules
#
# This example would blacklist all non-sudo users, including root, from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#
#
# Check the list of configured users in client ACL against users on the
# system and throw errors if they do not exist.
#client_acl_verify: True
#
#publisher_acl_blacklist:
# users:
# - root
# - '^(?!sudo_).*$' # all non sudo users
# modules:
# - cmd
# Enforce publisher_acl & publisher_acl_blacklist when users have sudo
# access to the salt command.
#
#sudo_acl: False
# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
# pam:
# fred:
# - test.*
#
# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200
#
# Allow eauth users to specify the expiry time of the tokens they generate.
# A boolean applies to all users or a dictionary of whitelisted eauth backends
# and usernames may be given.
# token_expire_user_override:
# pam:
# - fred
# - tom
# ldap:
# - gary
#
#token_expire_user_override: False
# Set to True to enable keeping the calculated user's auth list in the token
# file. This is disabled by default and the auth list is calculated or requested
# from the eauth driver each time.
#
# Note: `keep_acl_in_token` will be forced to True when using external authentication
# for REST API (`rest` is present under `external_auth`). This is because the REST API
# does not store the password, and can therefore not retroactively fetch the ACL, so
# the ACL must be stored in the token.
#keep_acl_in_token: False
# Auth subsystem module to use to get authorized access list for a user. By default it's
# the same module used for external authentication.
#eauth_acl_module: django
# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False
# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100
# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged. This behavior went away in Salt 2014.1.0, and these two situations
# now cause the minion to throw an exception and drop the message.
# sign_pub_messages: False
# Signature verification on messages published from minions
# This requires that minions cryptographically sign the messages they
# publish to the master. If minions are not signing, then log this information
# at loglevel 'INFO' and drop the message without acting on it.
# require_minion_sign_messages: False
# The below will drop messages when their signatures do not validate.
# Note that when this option is False but `require_minion_sign_messages` is True
# minions MUST sign their messages but the validity of their signatures
# is ignored.
# These two config options exist so a Salt infrastructure can be moved
# to signing minion messages gradually.
# drop_messages_signature_fail: False
# Use TLS/SSL encrypted connection between master and minion.
# Can be set to a dictionary containing keyword arguments corresponding to Python's
# 'ssl.wrap_socket' method.
# Default is None.
#ssl:
# keyfile: <path_to_keyfile>
# certfile: <path_to_certfile>
# ssl_version: PROTOCOL_TLSv1_2
##### Salt-SSH Configuration #####
##########################################
# Define the default salt-ssh roster module to use
#roster: flat
# Pass in an alternative location for the salt-ssh `flat` roster file
#roster_file: /usr/local/etc/salt/roster
# Define locations for `flat` roster files so they can be chosen when using Salt API.
# An administrator can place roster files into these locations. Then when
# calling Salt API, parameter 'roster_file' should contain a relative path to
# these locations. That is, "roster_file=/foo/roster" will be resolved as
# "/usr/local/etc/salt/roster.d/foo/roster" etc. This feature prevents passing insecure
# custom rosters through the Salt API.
#
#rosters:
# - /usr/local/etc/salt/roster.d
# - /opt/salt/some/more/rosters
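#
# For example (an illustrative sketch, assuming the rosters list above and the
# 'ssh' client enabled in the API), a Salt API lowstate using a relative roster
# path that resolves to /usr/local/etc/salt/roster.d/prod/web:
#   client: ssh
#   tgt: '*'
#   fun: test.ping
#   roster_file: /prod/web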
# The ssh password to log in with.
#ssh_passwd: ''
# The target system's ssh port number.
#ssh_port: 22
# Comma-separated list of ports to scan.
#ssh_scan_ports: 22
# Scanning socket timeout for salt-ssh.
#ssh_scan_timeout: 0.01
# Boolean to run command via sudo.
#ssh_sudo: False
# Boolean to run the ssh_pre_flight script defined in the roster. By default
# the script will only run if the thin_dir does not exist on the targeted
# minion. Setting this to True forces the script to run regardless of whether
# the thin dir exists.
#ssh_run_pre_flight: True
# Number of seconds to wait for a response when establishing an SSH connection.
#ssh_timeout: 60
# The user to log in as.
#ssh_user: root
# The log file of the salt-ssh command:
#ssh_log_file: /var/log/salt/ssh
# Pass in minion option overrides that will be inserted into the SHIM for
# salt-ssh calls. The local minion config is not used for salt-ssh. Can be
# overridden on a per-minion basis in the roster (`minion_opts`)
#ssh_minion_opts:
# gpg_keydir: /root/gpg
# Set this to True to default to using ~/.ssh/id_rsa for salt-ssh
# authentication with minions
#ssh_use_home_key: False
# Set this to True to default salt-ssh to run with ``-o IdentitiesOnly=yes``.
# This option is intended for situations where the ssh-agent offers many
# different identities and allows ssh to ignore those identities and use the
# only one specified in options.
#ssh_identities_only: False
# List-only nodegroups for salt-ssh. Each group must be formed as either a
# comma-separated list, or a YAML list. This option is useful to group minions
# into easy-to-target groups when using salt-ssh. These groups can then be
# targeted with the normal -N argument to salt-ssh.
#ssh_list_nodegroups: {}
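#
# For example (a minimal sketch with placeholder host names), using both the
# comma-separated and YAML list forms; target with e.g. `salt-ssh -N web test.ping`:
#ssh_list_nodegroups:
#  web: web1,web2,web3
#  db:
#    - db1
#    - db2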
# salt-ssh has the ability to update the flat roster file if a minion is not
# found in the roster. Set this to True to enable it.
#ssh_update_roster: False
##### Master Module Management #####
##########################################
# Manage how master side modules are loaded.
# Add any additional locations to look for master runners:
#runner_dirs: []
# Add any additional locations to look for master utils:
#utils_dirs: []
# Enable Cython for master side modules:
#cython_enable: False
##### State System settings #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls
# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
# ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}
# The renderer to use on the minions to render the state data
#renderer: jinja|yaml
# Default Jinja environment options for all templates except sls templates
#jinja_env:
# block_start_string: '{%'
# block_end_string: '%}'
# variable_start_string: '{{'
# variable_end_string: '}}'
# comment_start_string: '{#'
# comment_end_string: '#}'
# line_statement_prefix:
# line_comment_prefix:
# trim_blocks: False
# lstrip_blocks: False
# newline_sequence: '\n'
# keep_trailing_newline: False
# Jinja environment options for sls templates
#jinja_sls_env:
# block_start_string: '{%'
# block_end_string: '%}'
# variable_start_string: '{{'
# variable_end_string: '}}'
# comment_start_string: '{#'
# comment_end_string: '#}'
# line_statement_prefix:
# line_comment_prefix:
# trim_blocks: False
# lstrip_blocks: False
# newline_sequence: '\n'
# keep_trailing_newline: False
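#
# For example (an illustrative sketch), rendering sls templates with alternate
# variable delimiters so that literal '{{' sequences pass through to the
# rendered result untouched:
#jinja_sls_env:
#  variable_start_string: '{!'
#  variable_end_string: '!}'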
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False
# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False; when set to False,
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
# The state_output setting controls how the results are formatted in the output:
# full, terse - each state will be output in full / as a single terse line
# mixed - only states with errors will be output in full
# changes - states with changes and errors will be output in full
# full_id, mixed_id, changes_id and terse_id are also allowed;
# when set, the state ID will be used as the name in the output
#state_output: full
# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False
# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True
# The state_output_pct setting changes whether success and failure information
# as a percent of total actions will be shown for each state run.
#state_output_pct: False
# The state_compress_ids setting aggregates information about states which have
# multiple "names" under the same state ID in the highstate output.
#state_compress_ids: False
# Automatically aggregate all states that have support for mod_aggregate by
# setting to 'True'. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
# - pkg
#
#state_aggregate: False
# Send progress events as each function in a state run completes execution
# by setting to 'True'. Progress events are in the format
# 'salt/job/<JID>/prog/<MID>/<RUN NUM>'.
#state_events: False
##### File Server settings #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.
# The file server works on environments passed to the master. Each environment
# can have multiple root directories, but the subdirectories in the multiple
# file roots must not match, otherwise the downloaded files cannot be reliably
# ensured. A base environment is required to house the top file.
# Example:
# file_roots:
# base:
# - /usr/local/etc/salt/states/
# dev:
# - /usr/local/etc/salt/states/dev/services
# - /usr/local/etc/salt/states/dev/states
# prod:
# - /usr/local/etc/salt/states/prod/services
# - /usr/local/etc/salt/states/prod/states
#
#file_roots:
# base:
# - /usr/local/etc/salt/states
#
# The master_roots setting configures a master-only copy of the file_roots dictionary,
# used by the state compiler.
#master_roots:
# base:
# - /usr/local/etc/salt/states-master
# When using multiple environments, each with their own top file, the
# default behaviour is an unordered merge. To prevent top files from
# being merged together and instead to only use the top file from the
# requested environment, set this value to 'same'.
#top_file_merging_strategy: merge
# To specify the order in which environments are merged, set the ordering
# in the env_order option. Given a conflict, the last matching value will
# win.
#env_order: ['base', 'dev', 'prod']
# If top_file_merging_strategy is set to 'same' and an environment does not
# contain a top file, the top file in the environment specified by default_top
# will be used instead.
#default_top: base
# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is sha256, but md5, sha1, sha224, sha384 and
# sha512 are also supported.
#
# WARNING: While md5 and sha1 are also supported, do not use them due to the
# high chance of collisions and the resulting risk of a security breach.
#
# Prior to changing this value, the master should be stopped and all Salt
# caches should be cleared.
#hash_type: sha256
# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576
# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#file_ignore_regex:
# - '/\.svn($|/)'
# - '/\.git($|/)'
# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
# file_ignore_glob:
# - '*.pyc'
# - '*/somefolder/*.bak'
# - '*.swp'
# File Server Backend
#
# Salt supports a modular fileserver backend system. This system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots", which uses the "file_roots" option.
#fileserver_backend:
# - roots
#
# To use multiple backends list them in the order they are searched:
#fileserver_backend:
# - git
# - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# The fileserver can fire events off every time the fileserver is updated.
# These are disabled by default, but can be easily turned on by setting this
# flag to True
#fileserver_events: False
# Git File Server Backend Configuration
#
# Optional parameter used to specify the provider to be used for gitfs. Must be
# either pygit2 or gitpython. If unset, then both will be tried (in that
# order), and the first one with a compatible version installed will be the
# provider that is used.
#
#gitfs_provider: pygit2
# Along with gitfs_password, is used to authenticate to HTTPS remotes.
# gitfs_user: ''
# Along with gitfs_user, is used to authenticate to HTTPS remotes.
# This parameter is not required if the repository does not use authentication.
#gitfs_password: ''
# By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
# This parameter enables authentication over HTTP. Enable this at your own risk.
#gitfs_insecure_auth: False
# Along with gitfs_privkey (and optionally gitfs_passphrase), is used to
# authenticate to SSH remotes. This parameter (or its per-remote counterpart)
# is required for SSH remotes.
#gitfs_pubkey: ''
# Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to
# authenticate to SSH remotes. This parameter (or its per-remote counterpart)
# is required for SSH remotes.
#gitfs_privkey: ''
# This parameter is optional, required only when the SSH key being used to
# authenticate is protected by a passphrase.
#gitfs_passphrase: ''
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note: file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
# - git://github.com/saltstack/salt-states.git
# - file:///var/git/saltmaster
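#
# Per-remote parameters can override the global gitfs_* settings (a sketch
# with a hypothetical repository; only a subset of the available per-remote
# parameters is shown):
#gitfs_remotes:
#  - https://example.com/myrepo.git:
#    - root: salt/states
#    - base: develop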
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate, but
# keep in mind that setting this flag to anything other than the default of
# True is a security concern; you may want to try using the ssh transport
# instead.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder
#
# The refspecs fetched by gitfs remotes
#gitfs_refspecs:
# - '+refs/heads/*:refs/remotes/origin/*'
# - '+refs/tags/*:refs/tags/*'
#
#
##### Pillar settings #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
# base:
# - /usr/local/etc/salt/pillar
#
#ext_pillar:
# - hiera: /etc/hiera.yaml
# - cmd_yaml: cat /usr/local/etc/salt/yaml
# A list of paths to be recursively decrypted during pillar compilation.
# Entries in this list can be formatted either as a simple string, or as a
# key/value pair, with the key being the pillar location, and the value being
# the renderer to use for pillar decryption. If the former is used, the
# renderer specified by decrypt_pillar_default will be used.
#decrypt_pillar:
# - 'foo:bar': gpg
# - 'lorem:ipsum:dolor'
# The delimiter used to distinguish nested data structures in the
# decrypt_pillar option.
#decrypt_pillar_delimiter: ':'
# The default renderer used for decryption, if one is not specified for a given
# pillar key in decrypt_pillar.
#decrypt_pillar_default: gpg
# List of renderers which are permitted to be used for pillar decryption.
#decrypt_pillar_renderers:
# - gpg
# If this is `True` and the ciphertext could not be decrypted, then an error is
# raised.
#gpg_decrypt_must_succeed: False
# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar from
# ext_pillar.
#ext_pillar_first: False
# The external pillars permitted to be used on-demand using pillar.ext
#on_demand_ext_pillar:
# - libvirt
# - virtkey
# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate, but
# keep in mind that setting this flag to anything other than the default of
# True is a security concern; you may want to try using the ssh transport
# instead.
#pillar_gitfs_ssl_verify: True
# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: False
# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is set on by default because the error could
# contain templating data which would give that minion information it shouldn't
# have, like a password! When set to True, the error message will only show:
# Rendering SLS 'my.sls' failed. Please see master log for details.
#pillar_safe_render_error: True
# The pillar_source_merging_strategy option allows you to configure the merging
# strategy between different sources. It accepts five values: none, recurse,
# aggregate, overwrite, or smart. None will not do any merging at all. Recurse
# will recursively merge mappings of data. Aggregate instructs aggregation of
# elements between sources that use the #!yamlex renderer. Overwrite will
# overwrite elements according to the order in which they are processed; this
# is the behavior of the 2014.1 branch and earlier. Smart guesses the best
# strategy based on the "renderer" setting and is the default value.
#pillar_source_merging_strategy: smart
# Recursively merge lists by aggregating them instead of replacing them.
#pillar_merge_lists: False
# Set this option to True to force the pillarenv to be the same as the effective
# saltenv when running states. If pillarenv is specified this option will be
# ignored.
#pillarenv_from_saltenv: False
# Set this option to 'True' to force a 'KeyError' to be raised whenever an
# attempt to retrieve a named value from pillar fails. When this option is set
# to 'False', the failed attempt returns an empty string. Default is 'False'.
#pillar_raise_on_missing: False
# Git External Pillar (git_pillar) Configuration Options
#
# Specify the provider to be used for git_pillar. Must be either pygit2 or
# gitpython. If unset, then both will be tried in that order, and the
# first one with a compatible version installed will be the provider that
# is used.
#git_pillar_provider: pygit2
# If the desired branch matches this value, and the environment is omitted
# from the git_pillar configuration, then the environment for that git_pillar
# remote will be base.
#git_pillar_base: master
# If the branch is omitted from a git_pillar remote, then this branch will
# be used instead
#git_pillar_branch: master
# Environment to use for git_pillar remotes. This is normally derived from
# the branch/tag (or from a per-remote env parameter), but if set this will
# override the process of deriving the env from the branch/tag name.
#git_pillar_env: ''
# Path relative to the root of the repository where the git_pillar top file
# and SLS files are located.
#git_pillar_root: ''
# Specifies whether or not to ignore SSL certificate errors when contacting
# the remote repository.
#git_pillar_ssl_verify: False
# When set to False, if there is an update/checkout lock for a git_pillar
# remote and the pid written to it is not running on the master, the lock
# file will be automatically cleared and a new lock will be obtained.
#git_pillar_global_lock: True
# Git External Pillar Authentication Options
#
# Along with git_pillar_password, is used to authenticate to HTTPS remotes.
#git_pillar_user: ''
# Along with git_pillar_user, is used to authenticate to HTTPS remotes.
# This parameter is not required if the repository does not use authentication.
#git_pillar_password: ''
# By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
# This parameter enables authentication over HTTP.
#git_pillar_insecure_auth: False
# Along with git_pillar_privkey (and optionally git_pillar_passphrase),
# is used to authenticate to SSH remotes.
#git_pillar_pubkey: ''
# Along with git_pillar_pubkey (and optionally git_pillar_passphrase),
# is used to authenticate to SSH remotes.
#git_pillar_privkey: ''
# This parameter is optional, required only when the SSH key being used
# to authenticate is protected by a passphrase.
#git_pillar_passphrase: ''
# The refspecs fetched by git_pillar remotes
#git_pillar_refspecs:
# - '+refs/heads/*:refs/remotes/origin/*'
# - '+refs/tags/*:refs/tags/*'
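#
# The git_pillar remotes themselves are configured under ext_pillar (a minimal
# sketch with a hypothetical repository; the leading 'master' is the branch,
# which maps to a pillar environment as described above):
#ext_pillar:
#  - git:
#    - master https://gitserver.example.com/git-pillar.git:
#      - root: pillar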
# A master can cache pillars locally to bypass the expense of having to render them
# for each minion on every request. This feature should only be enabled in cases
# where pillar rendering time is known to be unsatisfactory and any attendant security
# concerns about storing pillars in a master cache have been addressed.
#
# When enabling this feature, be certain to read through the additional ``pillar_cache_*``
# configuration options to fully understand the tunable parameters and their implications.
#
# Note: setting ``pillar_cache: True`` has no effect on targeting Minions with Pillars.
# See https://docs.saltproject.io/en/latest/topics/targeting/pillar.html
#pillar_cache: False
# If and only if a master has set ``pillar_cache: True``, the cache TTL controls the amount
# of time, in seconds, before the cache is considered invalid by a master and a fresh
# pillar is recompiled and stored.
# The cache TTL does not prevent pillar cache from being refreshed before its TTL expires.
#pillar_cache_ttl: 3600
# If and only if a master has set `pillar_cache: True`, one of several storage providers
# can be utilized.
#
# `disk`: The default storage backend. This caches rendered pillars to the master cache.
# Rendered pillars are serialized and deserialized as msgpack structures for speed.
# Note that pillars are stored UNENCRYPTED. Ensure that the master cache
# has permissions set appropriately. (Sane defaults are provided.)
#
# memory: [EXPERIMENTAL] An optional backend for pillar caches which uses a pure-Python
# in-memory data structure for maximal performance. There are several caveats,
# however. First, because each master worker contains its own in-memory cache,
# there is no guarantee of cache consistency between minion requests. This
# works best in situations where the pillar rarely if ever changes. Secondly,
# and perhaps more importantly, this means that unencrypted pillars will
# be accessible to any process which can examine the memory of the ``salt-master``!
# This may represent a substantial security risk.
#
#pillar_cache_backend: disk
# A master can also cache GPG data locally to bypass the expense of having to render it
# for each minion on every request. This feature should only be enabled in cases
# where pillar rendering time is known to be unsatisfactory and any attendant security
# concerns about storing decrypted GPG data in a master cache have been addressed.
#
# When enabling this feature, be certain to read through the additional ``gpg_cache_*``
# configuration options to fully understand the tunable parameters and their implications.
#gpg_cache: False
# If and only if a master has set ``gpg_cache: True``, the cache TTL controls the amount
# of time, in seconds, before the cache is considered invalid by a master and a fresh
# pillar is recompiled and stored.
#gpg_cache_ttl: 86400
# If and only if a master has set `gpg_cache: True`, one of several storage providers
# can be utilized. Available options are the same as ``pillar_cache_backend``.
#gpg_cache_backend: disk
###### Reactor Settings #####
###########################################
# Define a salt reactor. See https://docs.saltproject.io/en/latest/topics/reactor/
#reactor: []
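# For example (an illustrative sketch; the sls path is hypothetical), mapping
# minion start events to a reactor sls file:
#reactor:
#  - 'salt/minion/*/start':
#    - salt://reactor/start.sls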
# Set the TTL for the cache of the reactor configuration.
#reactor_refresh_interval: 60
# Configure the number of workers for the runner/wheel in the reactor.
#reactor_worker_threads: 10
# Define the queue size for workers in the reactor.
#reactor_worker_hwm: 10000
##### Syndic settings #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple. If this is a master that will have
# syndic server(s) below it, then set the "order_masters" setting to True.
#
# If this is a master that will be running a syndic daemon for passthrough, then
# the "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.
# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False
# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmasters
# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506
# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid
# The log file of the salt-syndic daemon:
#syndic_log_file: /var/log/salt/syndic
# The behaviour of the multi-syndic when the connection to a master of masters fails.
# Can specify ``random`` (default) or ``ordered``. If set to ``random``, masters
# will be iterated in random order. If ``ordered`` is specified, the configured
# order will be used.
#syndic_failover: random
# The number of seconds for the salt client to wait for additional syndics to
# check in with their lists of expected minions before giving up.
#syndic_wait: 5
##### Peer Publish settings #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.
# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
# foo.example.com:
# - test.*
# - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
# .*:
# - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!
# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
# .*:
# - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
# foo.example.com:
# - manage.up
#
#
##### Mine settings #####
#####################################
# Restrict mine.get access from minions. By default any minion has full access
# to get all mine data from the master cache. In the ACL definition below, only
# pcre matches are allowed.
# mine_get:
# .*:
# - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces' mine
# data only, minions web* to get all network.* and disk.* mine data and all other
# minions won't get any mine data.
# mine_get:
# foo.example.com:
# - network.interfaces
# web.*:
# - network.*
# - disk.*
##### Logging settings #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#
# The following log levels are considered INSECURE and may log sensitive data:
# ['profile', 'garbage', 'trace', 'debug', 'all']
#
#log_level: warning
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
#log_level_logfile: warning
# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding.
# Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '%(colorlevel)s %(colormsg)s'
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#
#log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
# log_granular_levels:
# 'salt': 'warning'
# 'salt.modules': 'debug'
#
#log_granular_levels: {}
##### Node Groups ######
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of
# a group name and a compound target. Nodegroups can reference other nodegroups
# with the 'N@' classifier. Ensure that you do not have circular references.
#
#nodegroups:
# group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
# group2: 'G@os:Debian and foo.domain.com'
# group3: 'G@os:Debian and N@group1'
# group4:
# - 'G@foo:bar'
# - 'or'
# - 'G@foo:baz'
##### Range Cluster settings #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80
##### Windows Software Repo settings #####
###########################################
# Location of the repo on the master:
#winrepo_dir_ng: '/usr/local/etc/salt/states/win/repo-ng'
#
# List of git repositories to include with the local repo:
#winrepo_remotes_ng:
# - 'https://github.com/saltstack/salt-winrepo-ng.git'
##### Windows Software Repo settings - Pre 2015.8 #####
########################################################
# Legacy repo settings for pre-2015.8 Windows minions.
#
# Location of the repo on the master:
#winrepo_dir: '/usr/local/etc/salt/states/win/repo'
#
# Location of the master's repo cache file:
#winrepo_mastercachefile: '/usr/local/etc/salt/states/win/repo/winrepo.p'
#
# List of git repositories to include with the local repo:
#winrepo_remotes:
# - 'https://github.com/saltstack/salt-winrepo.git'
# The refspecs fetched by winrepo remotes
#winrepo_refspecs:
# - '+refs/heads/*:refs/remotes/origin/*'
# - '+refs/tags/*:refs/tags/*'
#
##### Returner settings ######
############################################
# Which returner(s) will be used for the minion's results:
#return: mysql
###### Miscellaneous settings ######
############################################
# Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
#event_match_type: startswith
# Save runner returns to the job cache
#runner_returns: True
# Permanently include any available Python 3rd party modules into thin and minimal Salt
# when they are generated for Salt-SSH or other purposes.
# The modules should be named as they are actually imported inside Python.
# The value of each parameter can be either one module or a comma-separated list of them.
#thin_extra_mods: foo,bar
#min_extra_mods: foo,bar,baz
###### Keepalive settings ######
############################################
# Warning: Failure to set TCP keepalives on the salt-master can result in
# not detecting the loss of a minion when the connection is lost or when
# its host has been terminated without first closing the socket.
# Salt's Presence System depends on this connection status to know if a minion
# is "present".
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.
# Overall state of TCP Keepalives: enable (1 or True), disable (0 or False)
# or leave at the OS default (-1), which on Linux is typically disabled.
# Default: True (enabled).
#tcp_keepalive: True
# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300
# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1
# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1
##### NetAPI settings #####
############################################
# Allow the raw_shell parameter to be used when calling Salt SSH client via API
#netapi_allow_raw_shell: True
# Set a list of clients to enable in the API
#netapi_enable_clients: []
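#
# For example (a sketch enabling a common subset; enable only the clients that
# your API consumers actually need):
#netapi_enable_clients:
#  - local
#  - local_async
#  - runner
#  - ssh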
Example minion configuration file
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.
# By default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
#master: salt
# Set http proxy information for the minion when doing requests
#proxy_host:
#proxy_port:
#proxy_username:
#proxy_password:
# List of hosts to bypass HTTP proxy. This key does nothing unless proxy_host
# etc. is configured; it does not support any kind of wildcards.
#no_proxy: []
# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master
# is set to True, the order will be randomized upon Minion startup instead. This can
# be helpful in distributing the load of many minions executing salt-call requests,
# for example, from a cron job. If only one master is listed, this setting is ignored
# and a warning will be logged.
#random_master: False
# NOTE: Deprecated in Salt 2019.2.0. Use 'random_master' instead.
#master_shuffle: False
# Minions can connect to multiple masters simultaneously (all masters
# are "hot"), or can be configured to failover if a master becomes
# unavailable. Multiple hot masters are configured by setting this
# value to "str". Failover masters can be requested by setting
# to "failover". MAKE SURE TO SET master_alive_interval if you are
# using failover.
# Setting master_type to 'disable' lets you have a running minion (with engines and
# beacons) without a master connection
# master_type: str
# Poll interval in seconds for checking if the master is still there. Only
# respected if master_type above is "failover". To disable the interval entirely,
# set the value to -1. (This may be necessary on machines which have high numbers
# of TCP connections, such as load balancers.)
# master_alive_interval: 30
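#
# For example (a minimal sketch with placeholder hostnames), a failover minion
# that polls its current master every 30 seconds:
#master:
#  - master1.example.com
#  - master2.example.com
#master_type: failover
#master_alive_interval: 30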
# If the minion is in multi-master mode and the master_type configuration option
# is set to "failover", this setting can be set to "True" to force the minion
# to fail back to the first master in the list if the first master is back online.
#master_failback: False
# If the minion is in multi-master mode, the "master_type" configuration is set to
# "failover", and the "master_failback" option is enabled, the master failback
# interval can be set to ping the top master with this interval, in seconds.
#master_failback_interval: 0
# Set whether the minion should connect to the master via IPv6:
#ipv6: False
# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shut down and not retry.
# retry_dns: 30
# Set the number of times to attempt to resolve
# the master hostname if name resolution fails. Defaults to None,
# which will attempt the resolution indefinitely.
# retry_dns_count: 3
# Set the port used by the master reply and authentication server.
#master_port: 4506
# The user to run salt.
#user: root
# The user to run salt remote execution commands as via sudo. If this option is
# enabled then sudo will be used to change the active user executing the remote
# command. If enabled, the user will need to be granted access via the sudoers
# file for the user that the salt minion is configured to run as. The most
# common option would be to use the root user. If this option is set, the user
# option should also be set to a non-root user. If migrating from a root minion
# to a non-root minion, the minion cache should be cleared and the ownership of
# the minion pki directory will need to be changed to the new user.
#sudo_user: root
# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid
# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /
# The path to the minion's configuration file.
#conf_file: /usr/local/etc/salt/minion
# The directory to store the pki information in
#pki_dir: /usr/local/etc/salt/pki/minion
# Explicitly declare the id for this minion to use. If left commented, the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids, it is possible to run multiple minions on the
# same machine with different ids; this can be useful for salt compute
# clusters.
#id:
# Cache the minion id to a file when the minion's id is not statically defined
# in the minion config. Defaults to "True". This setting prevents potential
# problems when automatic minion id resolution changes, which can cause the
# minion to lose connection with the master. To turn off minion id caching,
# set this config to ``False``.
#minion_id_caching: True
# Convert the minion id to lowercase when it is being generated. Helpful when
# some hosts get the minion id in uppercase. Cached ids will remain the same
# and will not be converted. For example, Windows minions often (but not
# always) have uppercase minion ids when they are set up. To turn on, set this
# config to ``True``.
#minion_id_lowercase: False
# Append a domain to a hostname in the event that it does not exist. This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:
# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
# roles:
# - webserver
# - memcache
# deployment: datacenter4
# cabinet: 13
# cab_u: 14-15
#
# Where cache data goes.
# This data may contain sensitive data and should be protected accordingly.
#cachedir: /var/cache/salt/minion
# Append minion_id to these directories. Helps with
# multiple proxies and minions running on the same machine.
# Allowed elements in the list: pki_dir, cachedir, extension_modules
# Normally not needed unless running several proxies and/or minions on the same machine
# Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions
#append_minionid_config_dirs:
# Verify and set permissions on configuration directories at startup.
#verify_env: True
# The minion can locally cache the return data from jobs sent to it. This
# can be a good way to keep track of the jobs the minion has executed
# (on the minion side). By default this feature is disabled; to enable, set
# cache_jobs to True.
#cache_jobs: False
# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion
# In order to calculate the fqdns grain, all the IP addresses from the minion
# are processed with underlying calls to `socket.gethostbyaddr` which can take
# 5 seconds to be released (after reaching `socket.timeout`) when there is no
# fqdn for that IP. These calls to `socket.gethostbyaddr` are processed
# asynchronously, however, it still adds 5 seconds every time grains are
# generated if an IP does not resolve. On Windows, grains are regenerated each
# time a new process is spawned. Therefore, the default for Windows is `False`.
# On macOS, FQDN resolution can be very slow, therefore the default for macOS is
# `False` as well. All other OSes default to `True`
# enable_fqdns_grains: True
# The minion can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the minion. Set this to False if you do not need
# GPU hardware grains for your minion.
# enable_gpu_grains: True
# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested
# To set a list of additional directories to search for salt outputters, set the
# outputter_dirs option.
#outputter_dirs: []
# By default output is colored. To disable colored output, set the color value
# to False.
#color: True
# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False
# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backup relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
# /etc/ssh/sshd_config:
# file.managed:
# - source: salt://ssh/sshd_config
# - backup: minion
#
#backup_mode: minion
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10
# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0
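#
# For example, with acceptance_wait_time: 10 and acceptance_wait_time_max: 60,
# successive waits are 10, 20, 30, 40, 50, and then 60 seconds thereafter.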
# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False
# When the master key changes, the minion will try to re-auth itself to receive
# the new master key. In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60
# To avoid overloading a master when many minions startup at once, a randomized
# delay may be set to tell the minions to wait before connecting to the master.
# This value is the number of seconds to choose from for a random number. For
# example, setting this value to 60 will choose a random number of seconds to delay
# on startup between zero seconds and sixty seconds. Setting to '0' will disable
# this feature.
#random_startup_delay: 0
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60
# Number of consecutive SaltReqTimeoutError that are acceptable when trying to
# authenticate.
#auth_tries: 7
# The number of attempts to connect to a master before giving up.
# Set this to -1 for unlimited attempts. This allows for a master to have
# downtime and the minion to reconnect to it later when it comes back up.
# In 'failover' mode, it is the number of attempts for each set of masters.
# In this mode, it will cycle through the list of masters for each attempt.
#
# This is different from auth_tries because auth_tries retries
# authentication attempts with a single master. auth_tries is under the
# assumption that you can connect to the master but not gain
# authorization from it. master_tries will still cycle through all
# the masters in a given try, so it is appropriate if you expect
# occasional downtime from the master(s).
#master_tries: 1
# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# cause sub minion process to restart.
#auth_safemode: False
# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0
# To auto recover minions if master changes IP address (DDNS)
# master_alive_interval: 10
# master_tries: -1
#
# Minions won't know the master is missing until a ping fails. After the ping
# fails, the minion will attempt authentication, likely fail, and restart.
# When the minion restarts it will resolve the master's IP and attempt to
# reconnect.
# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that binds to the masters publishing interface tries
# to reconnect immediately, if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ-default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
# trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. Each interval, the time to
#            wait is calculated by doubling the previous time. If recon_max is
#            reached, it starts again at recon_default. Short example:
#
# reconnect 1: the socket will wait 'recon_default' milliseconds
# reconnect 2: 'recon_default' * 2
# reconnect 3: ('recon_default' * 2) * 2
# reconnect 4: value from previous interval * 2
# reconnect 5: value from previous interval * 2
# reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
# be a random value between recon_default and recon_default +
# recon_max. Having all minions reconnect with the same recon_default
# and recon_max value kind of defeats the purpose of being able to
# change these settings. If all minions have the same values and your
# setup is quite large (several thousand minions), they will still
# flood the master. The desired behavior is to have a timeframe within
# which all minions try to reconnect.
#
# Example on how to use these settings. The goal: have all minions reconnect within a
# 60 second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will
# be doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to 1
# second on the minion scheduler.
#loop_interval: 1
# Some installations choose to store all job returns in a cache or a returner
# and forgo sending the results back to a master. In this workflow, jobs
# are most often executed with --async from the Salt CLI and then results
# are evaluated by examining job caches on the minions or any configured returners.
# WARNING: Setting this to False will **disable** returns back to the master.
#pub_ret: True
# The grains can be merged, instead of overridden, using this option.
# This allows custom grains to define different subvalues of a dictionary
# grain. By default this feature is disabled; to enable, set grains_deep_merge
# to ``True``.
#grains_deep_merge: False
# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1
# The grains_refresh_pre_exec setting allows for a minion to check its grains
# prior to the execution of any operation to see if they have changed and, if
# so, to inform the master of the new grains. This operation is moderately
# expensive, therefore care should be taken before enabling this behavior.
#grains_refresh_pre_exec: False
# Cache grains on the minion. Default is False.
#grains_cache: False
# Cache rendered pillar data on the minion. Default is False.
# This may cause 'cachedir'/pillar to contain sensitive data that should be
# protected accordingly.
#minion_pillar_cache: False
# Grains cache expiration, in seconds. If the cache file is older than this
# number of seconds then the grains cache will be dumped and fully re-populated
# with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
# is not enabled.
# grains_cache_expiration: 300
# Determines whether or not the salt minion should run scheduled mine updates.
# Defaults to "True". Set to "False" to disable the scheduled mine updates
# (this essentially just does not add the mine update function to the minion's
# scheduler).
#mine_enabled: True
# Determines whether or not scheduled mine updates should be accompanied by a job
# return for the job cache. Defaults to "False". Set to "True" to include job
# returns in the job cache for mine updates.
#mine_return_job: False
# Example functions that can be run via the mine facility
# NO mine functions are established by default.
# Note these can be defined in the minion's pillar as well.
#mine_functions:
# test.ping: []
# network.ip_addrs:
# interface: eth0
# cidr: '10.0.0.0/8'
# The number of minutes between mine updates.
#mine_interval: 60
# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. ipc_mode is set to 'tcp' on such systems.
#ipc_mode: ipc
# Override the default tcp ports used by the minion when ipc_mode is set to 'tcp'
#tcp_pub_port: 4510
#tcp_pull_port: 4511
# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# minion event bus. The value is expressed in bytes.
#max_event_size: 1048576
# When a minion starts up it sends a notification on the event bus with a tag
# that looks like this: `salt/minion/<minion_id>/start`. For historical reasons
# the minion also sends a similar event with an event tag like this:
# `minion_start`. This duplication can cause a lot of clutter on the event bus
# when there are many minions. Set `enable_legacy_startup_events: False` in the
# minion config to ensure only the `salt/minion/<minion_id>/start` events are
# sent. Beginning with the `Sodium` Salt release this option will default to
# `False`
#enable_legacy_startup_events: True
# To detect failed master(s) and fire events on connect/disconnect, set
# master_alive_interval to the number of seconds to poll the masters for
# connection events.
#
#master_alive_interval: 30
# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
# Include a config file from some other path:
# include: /usr/local/etc/salt/extra_config
#
# Include config from several files and directories:
#include:
# - /usr/local/etc/salt/extra_config
# - /etc/roles/webserver
# The syndic minion can verify that it is talking to the correct master via the
# key fingerprint of the higher-level master with the "syndic_finger" config.
#syndic_finger: ''
#
#
#
##### Minion module management #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion. The default here is the empty list;
# below is an example of how this needs to be formatted in the config file.
#disable_modules:
# - cmdmod
# - test
#disable_returners: []
# This is the reverse of disable_modules. The default, like disable_modules, is the empty list,
# but if this option is set to *anything* then *only* those modules will load.
# Note that this is a very large hammer: it can be quite difficult to keep the minion working
# the way you think it should, since Salt uses many modules internally itself. At a bare minimum
# you need the following enabled or else the minion won't start.
#whitelist_modules:
# - cmdmod
# - test
# - config
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#utils_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option; in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#providers:
# pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#
# Specify a max size (in bytes) for modules on import. This feature is currently
# only supported on *nix operating systems and requires psutil.
# modules_max_memory: -1
##### State Management Settings #####
###########################################
# The default renderer to use in SLS files. This is configured as a
# pipe-delimited expression. For example, jinja|yaml will first run jinja
# templating on the SLS file, and then load the result as YAML. This syntax is
# documented in further depth at the following URL:
#
# https://docs.saltproject.io/en/latest/ref/renderers/#composing-renderers
#
# NOTE: The "shebang" prefix (e.g. "#!jinja|yaml") described in the
# documentation linked above is for use in an SLS file to override the default
# renderer, it should not be used when configuring the renderer here.
#
#renderer: jinja|yaml
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
#
# Reload the modules prior to a highstate run.
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master. This means that if a dynamic module is
# not on the master it will be deleted from the minion. By default this is
# enabled and can be disabled by changing this value to False.
#clean_dynamic_modules: True
#
# Renamed from ``environment`` to ``saltenv``. If ``environment`` is used,
# ``saltenv`` will take its value. If both are used, ``environment`` will be
# ignored and ``saltenv`` will be used.
# Normally the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#saltenv: None
#
# Isolates the pillar environment on the minion side. This functions the same
# as the environment setting, but for pillar instead of states.
#pillarenv: None
#
# Set this option to True to force the pillarenv to be the same as the
# effective saltenv when running states. Note that if pillarenv is specified,
# this option will be ignored.
#pillarenv_from_saltenv: False
#
# Set this option to 'True' to force a 'KeyError' to be raised whenever an
# attempt to retrieve a named value from pillar fails. When this option is set
# to 'False', the failed attempt returns an empty string. Default is 'False'.
#pillar_raise_on_missing: False
#
# If using the local file directory, then the state top file name needs to be
# defined, by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master
#startup_states: ''
#
# List of states to run when the minion starts up if startup_states is 'sls':
#sls_list:
# - edit.vim
# - hyper
#
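# For example, pairing the sls_list above with startup_states (a minimal
# sketch; it applies edit.vim and hyper each time the daemon starts):
#startup_states: sls
#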
# List of grains to pass in start event when minion starts up:
#start_event_grains:
# - machine_id
# - uuid
#
# Top file to execute if startup_states is 'top':
#top_file: ''
# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
# - pkg
#
#state_aggregate: False
# Instead of failing immediately when another state run is in progress, a value
# of True will queue the new state run to begin running once the other has
# finished. This option starts a new thread for each queued state run, so use
# this option sparingly. Additionally, it can be set to an integer representing
# the maximum queue size; once that size is reached, new state runs will fail
# to be queued. This can prevent runaway conditions where new threads are
# started until system performance is hampered.
#
#state_queue: False
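#
# For example, to queue up to two pending state runs instead of failing
# immediately (a sketch; any positive integer sets the maximum queue size):
#state_queue: 2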
# Disable requisites during state runs by specifying a single requisite
# or a list of requisites to disable.
#
# disabled_requisites: require_in
#
# disabled_requisites:
# - require
# - require_in
# If set, this parameter expects a dictionary of state module names as keys
# and list of conditions which must be satisfied in order to run any functions
# in that state module.
#
#global_state_conditions:
# "*": ["G@global_noop:false"]
# service: ["not G@virtual_subtype:chroot"]
##### File Directory Settings #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory,
# this allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.
# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to "local". Setting a local file_client runs the
# minion in masterless mode.
#file_client: remote
# The file directory works on environments passed to the minion. Each environment
# can have multiple root directories, but the subdirectories in the multiple file
# roots must not match, otherwise the downloaded files cannot be reliably
# ensured. A base environment is required to house the top file.
# Example:
# file_roots:
# base:
# - /usr/local/etc/salt/states/
# dev:
# - /usr/local/etc/salt/states/dev/services
# - /usr/local/etc/salt/states/dev/states
# prod:
# - /usr/local/etc/salt/states/prod/services
# - /usr/local/etc/salt/states/prod/states
#
#file_roots:
# base:
# - /usr/local/etc/salt/states
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# The hash_type is the hash to use when discovering the hash of a file on
# the local fileserver. The default is sha256, but md5, sha1, sha224, sha384
# and sha512 are also supported.
#
# WARNING: While md5 and sha1 are also supported, do not use them: both are
# vulnerable to collision attacks and therefore pose a security risk.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: sha256
# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
# base:
# - /usr/local/etc/salt/pillar
# If this is `True` and the ciphertext could not be decrypted, then an error is
# raised.
#gpg_decrypt_must_succeed: False
# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100
#
#
###### Security settings #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False
# The size of key that should be generated when creating new keys.
#keysize: 2048
# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False
# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
# The state_output setting controls which results will be output as full multi-line output:
# full, terse - each state will be full/terse
# mixed - only states with errors will be full
# changes - states with changes and errors will be full
# full_id, mixed_id, changes_id and terse_id are also allowed;
# when set, the state ID will be used as name in the output
#state_output: full
# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False
# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True
# The state_output_pct setting changes whether success and failure information
# as a percent of total actions will be shown for each state run.
#state_output_pct: False
# The state_compress_ids setting aggregates information about states which have
# multiple "names" under the same state ID in the highstate output.
#state_compress_ids: False
# Fingerprint of the master public key to validate the identity of your Salt master
# before the initial key exchange. The master fingerprint can be found by running
# "salt-key -f master.pub" on the Salt master.
#master_finger: ''
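#
# For example (the fingerprint value is illustrative, not a real key):
#master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'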
# Use TLS/SSL encrypted connection between master and minion.
# Can be set to a dictionary containing keyword arguments corresponding to Python's
# 'ssl.wrap_socket' method.
# Default is None.
#ssl:
# keyfile: <path_to_keyfile>
# certfile: <path_to_certfile>
# ssl_version: PROTOCOL_TLSv1_2
# Grains to be sent to the master on authentication to check if the minion's key
# will be accepted automatically. Needs to be configured on the master.
#autosign_grains:
# - uuid
# - server_id
###### Reactor Settings #####
###########################################
# Define a salt reactor. See https://docs.saltproject.io/en/latest/topics/reactor/
#reactor: []
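#
# For example, to run a local reactor SLS file when a matching event
# arrives (a sketch; the tag and path are illustrative):
#reactor:
#  - 'custom/event/tag':
#    - /usr/local/etc/salt/reactor/handle.sls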
#Set the TTL for the cache of the reactor configuration.
#reactor_refresh_interval: 60
#Configure the number of workers for the runner/wheel in the reactor.
#reactor_worker_threads: 10
#Define the queue size for workers in the reactor.
#reactor_worker_hwm: 10000
###### Thread settings #####
###########################################
# Disable multiprocessing support. By default, when a minion receives a
# publication, a new process is spawned and the command is executed therein.
#
# WARNING: Disabling multiprocessing may result in substantial slowdowns
# when processing large pillars. See https://github.com/saltstack/salt/issues/38758
# for a full explanation.
#multiprocessing: True
# Limit the maximum amount of processes or threads created by salt-minion.
# This is useful to avoid resource exhaustion in case the minion receives more
# publications than it is able to handle, as it limits the number of spawned
# processes or threads. -1 is the default and disables the limit.
#process_count_max: -1
##### Logging settings #####
##########################################
# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#
# The following log levels are considered INSECURE and may log sensitive data:
# ['profile', 'garbage', 'trace', 'debug', 'all']
#
# Default: 'warning'
#log_level: warning
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:
# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '%(colorlevel)s %(colormsg)s'
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#
#log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
# log_granular_levels:
# 'salt': 'warning'
# 'salt.modules': 'debug'
#
#log_granular_levels: {}
# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False
# Number of times to try to authenticate with the salt master when reconnecting
# to the master
#tcp_authentication_retries: 5
###### Module configuration #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data, any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct, some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
#
#
###### Update settings ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []
###### Keepalive settings ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.
# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False)
# or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True
# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300
# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1
# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1
###### Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'
###### Returner settings ######
############################################
# Default Minion returners. Can be a comma delimited string or a list:
#
#return: mysql
#
#return: mysql,slack,redis
#
#return:
# - mysql
# - hipchat
# - slack
###### Miscellaneous settings ######
############################################
# Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
#event_match_type: startswith
Example proxy minion configuration file##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of all Salt Proxy
# Minions on this host.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.
# Per default the proxy minion will automatically include all config files
# from proxy.d/*.conf (proxy.d is a directory in the same directory
# as the main minion config file).
#default_include: proxy.d/*.conf
# Backwards compatibility option for proxymodules created before 2015.8.2
# This setting will default to 'False' in the 2016.3.0 release
# Setting this to True adds proxymodules to the __opts__ dictionary.
# This breaks several Salt features (basically anything that serializes
# __opts__ over the wire) but retains backwards compatibility.
#add_proxymodule_to_opts: True
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
#master: salt
# If a proxymodule has a function called 'grains', then call it during
# regular grains loading and merge the results with the proxy's grains
# dictionary. Otherwise it is assumed that the module calls the grains
# function in a custom way and returns the data elsewhere
#
# Default to False for 2016.3 and 2016.11. Switch to True for 2017.7.0.
# proxy_merge_grains_in_module: True
# If a proxymodule has a function called 'alive' returning a boolean
# flag reflecting the state of the connection with the remote device,
# and this option is set to True, a scheduled job on the proxy will
# try to restart the connection. The polling frequency depends on the
# next option, 'proxy_keep_alive_interval'. Added in 2017.7.0.
# proxy_keep_alive: True
# The polling interval (in minutes) to check if the underlying connection
# with the remote device is still alive. This option requires
# 'proxy_keep_alive' to be configured as True and the proxymodule to
# implement the 'alive' function. Added in 2017.7.0.
# proxy_keep_alive_interval: 1
# By default, any proxy opens the connection with the remote device when
# initialized. Some proxymodules allow through this option to open/close
# the session per command. This requires the proxymodule to have this
# capability. Please consult the documentation to see if the proxy type
# used can be that flexible. Added in 2017.7.0.
# proxy_always_alive: True
# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions executing salt-call requests, for example, from a cron job.
# If only one master is listed, this setting is ignored and a warning will be logged.
#random_master: False
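# For example, to randomize the connection order across two masters
# (hostnames are illustrative):
#master:
#  - master1.example.com
#  - master2.example.com
#random_master: True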
# Minions can connect to multiple masters simultaneously (all masters
# are "hot"), or can be configured to failover if a master becomes
# unavailable. Multiple hot masters are configured by setting this
# value to "str". Failover masters can be requested by setting
# to "failover". MAKE SURE TO SET master_alive_interval if you are
# using failover.
# master_type: str
# Poll interval in seconds for checking if the master is still there. Only
# respected if master_type above is "failover".
# master_alive_interval: 30
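#
# A failover sketch combining the options above (hostnames are illustrative):
#master:
#  - master1.example.com
#  - master2.example.com
#master_type: failover
#master_alive_interval: 30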
# Set whether the minion should connect to the master via IPv6:
#ipv6: False
# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30
# Set the port used by the master reply and authentication server.
#master_port: 4506
# The user to run salt.
#user: root
# Setting sudo_user will cause salt to run all execution modules via sudo
# as the user given in sudo_user. The user under which the salt minion process
# itself runs will still be that provided in the user config above, but all
# execution modules run by the minion will be rerouted through sudo.
#sudo_user: saltdev
# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid
# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /
# The directory to store the pki information in
#pki_dir: /usr/local/etc/salt/pki/minion
# Where cache data goes.
# This data may contain sensitive data and should be protected accordingly.
#cachedir: /var/cache/salt/minion
# Append minion_id to these directories. Helps with
# multiple proxies and minions running on the same machine.
# Allowed elements in the list: pki_dir, cachedir, extension_modules
# Normally not needed unless running several proxies and/or minions on the same machine
# Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions
# append_minionid_config_dirs:
# - cachedir
# Verify and set permissions on configuration directories at startup.
#verify_env: True
# The minion can locally cache the return data from jobs sent to it, this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled, to enable, set
# cache_jobs to True.
#cache_jobs: False
# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion
# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested
#
# By default output is colored. To disable colored output, set the color value
# to False.
#color: True
# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False
# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backup relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
# /etc/ssh/sshd_config:
# file.managed:
# - source: salt://ssh/sshd_config
# - backup: minion
#
#backup_mode: minion
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10
# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0
# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False
# When the master key changes, the minion will try to re-auth itself to receive
# the new master key. In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60
# Number of consecutive SaltReqTimeoutError that are acceptable when trying to
# authenticate.
#auth_tries: 7
# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# cause the sub-minion process to restart.
#auth_safemode: False
# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0
# To auto recover minions if master changes IP address (DDNS)
# auth_tries: 10
# auth_safemode: False
# ping_interval: 90
#
# Minions won't know the master is missing until a ping fails. After the ping
# fails, the minion will attempt to authenticate, likely fail, and then restart.
# When the minion restarts it will resolve the master's IP and attempt to reconnect.
# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that binds to the master's publishing interface tries
# to reconnect immediately, if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ-default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
# trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. Each interval, the time to wait
#            is calculated by doubling the previous time. If recon_max is reached,
#            it starts again at recon_default. Short example:
#
# reconnect 1: the socket will wait 'recon_default' milliseconds
# reconnect 2: 'recon_default' * 2
# reconnect 3: ('recon_default' * 2) * 2
# reconnect 4: value from previous interval * 2
# reconnect 5: value from previous interval * 2
# reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
# be a random value between recon_default and recon_default +
# recon_max. Having all minions reconnect with the same recon_default
# and recon_max value kind of defeats the purpose of being able to
# change these settings. If all minions have the same values and your
# setup is quite large (several thousand minions), they will still
#                  flood the master. The desired behavior is to have a timeframe
#                  within which all minions try to reconnect.
#
# Example on how to use these settings. The goal: have all minions reconnect within a
# 60 second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will be
# doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to a
# sane 60 seconds, but if the minion scheduler needs to be evaluated more
# often, lower this value.
#loop_interval: 60
# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1
# Cache grains on the minion. Default is False.
#grains_cache: False
# Grains cache expiration, in seconds. If the cache file is older than this
# number of seconds then the grains cache will be dumped and fully re-populated
# with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
# is not enabled.
# grains_cache_expiration: 300
# Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-
# process communications. Set ipc_mode to 'tcp' on such systems.
#ipc_mode: ipc
# Overwrite the default tcp ports used by the minion when in tcp mode
#tcp_pub_port: 4510
#tcp_pull_port: 4511
# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# minion event bus. The value is expressed in bytes.
#max_event_size: 1048576
# To detect failed master(s) and fire events on connect/disconnect, set
# master_alive_interval to the number of seconds to poll the masters for
# connection events.
#
#master_alive_interval: 30
# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
# Include a config file from some other path:
# include: /usr/local/etc/salt/extra_config
#
# Include config from several files and directories:
#include:
# - /usr/local/etc/salt/extra_config
# - /etc/roles/webserver
#
#
#
##### Minion module management #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion.
#disable_modules: [cmd,test]
#disable_returners: []
#
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#utils_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option, in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#providers:
# pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#
# Specify a max size (in bytes) for modules on import. This feature is currently
# only supported on *nix operating systems and requires psutil.
# modules_max_memory: -1
##### State Management Settings #####
###########################################
# The default renderer to use in SLS files. This is configured as a
# pipe-delimited expression. For example, jinja|yaml will first run jinja
# templating on the SLS file, and then load the result as YAML. This syntax is
# documented in further depth at the following URL:
#
# https://docs.saltproject.io/en/latest/ref/renderers/#composing-renderers
#
# NOTE: The "shebang" prefix (e.g. "#!jinja|yaml") described in the
# documentation linked above is for use in an SLS file to override the default
# renderer, it should not be used when configuring the renderer here.
#
#renderer: jinja|yaml
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
#
# Reload the modules prior to a highstate run.
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master, this means that if a dynamic module is
# not on the master it will be deleted from the minion. By default, this is
# enabled and can be disabled by changing this value to False.
#clean_dynamic_modules: True
#
# Normally, the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#environment: None
#
# If using the local file directory, then the state top file name needs to be
# defined, by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master
#startup_states: ''
#
# List of states to run when the minion starts up if startup_states is 'sls':
#sls_list:
# - edit.vim
# - hyper
#
# Top file to execute if startup_states is 'top':
#top_file: ''
# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
# - pkg
#
#state_aggregate: False
##### File Directory Settings #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory,
# this allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.
# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to "local". Setting a local file_client runs the
# minion in masterless mode.
#file_client: remote
# The file directory works on environments passed to the minion. Each environment
# can have multiple root directories, but the subdirectories in the multiple file
# roots must not match, otherwise the downloaded files cannot be reliably
# ensured. A base environment is required to house the top file.
# Example:
# file_roots:
# base:
# - /usr/local/etc/salt/states/
# dev:
# - /usr/local/etc/salt/states/dev/services
# - /usr/local/etc/salt/states/dev/states
# prod:
# - /usr/local/etc/salt/states/prod/services
# - /usr/local/etc/salt/states/prod/states
#
#file_roots:
# base:
# - /usr/local/etc/salt/states
# The hash_type is the hash to use when discovering the hash of a file in
# the local fileserver. The default is sha256, but md5, sha1, sha224, sha384
# and sha512 are also supported.
#
# WARNING: While md5 and sha1 are also supported, do not use them: both are
# vulnerable to collision attacks and therefore pose a security risk.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: sha256
# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
# base:
# - /usr/local/etc/salt/pillar
#
#
###### Security settings #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False
# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False
# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
# The state_output setting controls which results will be output as full multi-line output:
# full, terse - each state will be full/terse
# mixed - only states with errors will be full
# changes - states with changes and errors will be full
# full_id, mixed_id, changes_id and terse_id are also allowed;
# when set, the state ID will be used as name in the output
#state_output: full
# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False
# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True
# The state_output_pct setting changes whether success and failure information
# as a percent of total actions will be shown for each state run.
#state_output_pct: False
# The state_compress_ids setting aggregates information about states which have
# multiple "names" under the same state ID in the highstate output.
#state_compress_ids: False
# Fingerprint of the master public key to validate the identity of your Salt master
# before the initial key exchange. The master fingerprint can be found by running
# "salt-key -F master" on the Salt master.
#master_finger: ''
###### Thread settings #####
###########################################
# Disable multiprocessing support. By default, when a minion receives a
# publication, a new process is spawned and the command is executed therein.
#multiprocessing: True
##### Logging settings #####
##########################################
# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#
# The following log levels are considered INSECURE and may log sensitive data:
# ['profile', 'garbage', 'trace', 'debug', 'all']
#
# Default: 'warning'
#log_level: warning
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:
# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '%(colorlevel)s %(colormsg)s'
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#
#log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
# log_granular_levels:
# 'salt': 'warning'
# 'salt.modules': 'debug'
#
#log_granular_levels: {}
# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False
###### Module configuration #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data, any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct, some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
#
#
###### Update settings ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []
###### Keepalive settings ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.
# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False)
# or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True
# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300
# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1
# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1
###### Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'
###### Returner settings ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql
Minion Blackout Configuration

New in version 2016.3.0.

Salt supports minion blackouts. When a minion is in blackout mode, all remote execution commands are disabled. This allows production minions to be put "on hold", eliminating the risk of an untimely configuration change. Minion blackouts are configured via a special pillar key, minion_blackout. If this key is set to True, then the minion will reject all incoming commands, except for saltutil.refresh_pillar. (The exception is important, so minions can be brought out of blackout mode.) Salt also supports an explicit whitelist of additional functions that will be allowed during blackout. This is configured with the special pillar key minion_blackout_whitelist, which is formed as a list:

minion_blackout_whitelist:
  - saltutil.refresh_pillar
  - pillar.get

Access Control System

New in version 0.10.4.

Salt maintains a standard system used to grant granular control to non-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non-administrative control interfaces in Salt. These interfaces include the peer system, the external auth system, and the publisher ACL system. The access control system mandates a standard configuration syntax used in all three of the aforementioned systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration. Now specific functions can be opened up to specific minions from specific users in the case of external auth and publisher ACLs, and to specific minions in the case of the peer system.

Publisher ACL system

The salt publisher ACL system is a means to allow system users other than root to have access to execute select salt commands on minions from the master.

NOTE: publisher_acl is useful for allowing local system
users to run Salt commands without giving them root access. If you can log
into the Salt master directly, then publisher_acl allows you to use
Salt without root privileges. If the local system is configured to
authenticate against a remote system, like LDAP or Active Directory, then
publisher_acl will interact with the remote system transparently.
external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag), but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service. For more information and examples, see this Access Control System section.

The publisher ACL system is configured in the master configuration file via the publisher_acl configuration option. Under the publisher_acl configuration option, the users allowed to send commands are specified, followed by a list of the minion functions that will be made available to each specified user. Both users and functions can be specified by exact match, shell glob, or regular expression. This configuration is much like the external_auth configuration:

publisher_acl:
  # Allow thatch to execute anything.
  thatch:
    - .*
  # Allow fred to use test and pkg, but only on "web*" minions.
  fred:
    - web*:
      - test.*
      - pkg.*

Permission Issues

Directories required for publisher_acl must be modified to be readable by the users specified:

chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master

NOTE: In addition to the changes above you will also need to
modify the permissions of /var/log/salt and the existing log file to be
writable by the user(s) which will be running the commands. If you do not wish
to do this then you must disable logging or Salt will generate errors as it
cannot write to the logs as the system users.
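One way to satisfy this (a sketch; the "salt" group name is illustrative and assumes the users running commands belong to that group):

chgrp -R salt /var/log/salt
chmod -R g+w /var/log/salt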
If you are upgrading from earlier versions of salt you must also remove any existing user keys and re-start the Salt master:

rm /var/cache/salt/.*key
service salt-master restart

Whitelist and Blacklist

Salt's authentication systems can be configured by specifying what is allowed using a whitelist, or by specifying what is disallowed using a blacklist. If you specify a whitelist, only specified operations are allowed. If you specify a blacklist, all operations are allowed except those that are blacklisted. See publisher_acl and publisher_acl_blacklist.

External Authentication System

Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP.

NOTE: eAuth using the PAM external auth system requires
salt-master to be run as root as this system needs root access to check
authentication.
External Authentication System Configuration

The external authentication system allows specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the access control system:

external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
    steve|admin.*:
      - .*

The above configuration allows the user thatch to execute functions in the test and network modules on the minions that match the web* target. User steve and the users whose logins start with admin are granted unrestricted access to minion commands. Salt respects the current PAM configuration in place, and uses the 'login' service to authenticate.

NOTE: The PAM module does not allow authenticating as root.
NOTE: state.sls and state.highstate will return "Failed to
authenticate!" if the request timeout is reached. Use the -t flag to
increase the timeout.
To allow access to wheel modules or runner modules the following @ syntax must be used:

external_auth:
  pam:
    thatch:
      - '@wheel'   # to allow access to all wheel modules
      - '@runner'  # to allow access to all runner modules
      - '@jobs'    # to allow access to the jobs runner and/or wheel module

NOTE: The runner/wheel markup is different, since there are no
minions to scope the acl to.
NOTE: Globs will not match wheel or runners! They must be
explicitly allowed with @wheel or @runner.
WARNING: All users that have external authentication privileges
are allowed to run saltutil.findjob. Be aware that this could
inadvertently expose some data such as minion IDs.
Matching syntax

The structure of the external_auth dictionary can take the following shapes. User and function matches are exact matches, shell glob patterns or regular expressions; minion matches are compound targets.

By user:

external_auth:
  <eauth backend>:
    <user or group%>:
      - <regex to match function>

By user, by minion:

external_auth:
  <eauth backend>:
    <user or group%>:
      <minion compound target>:
        - <regex to match function>

By user, by runner/wheel:

external_auth:
  <eauth backend>:
    <user or group%>:
      <@runner or @wheel>:
        - <regex to match function>

By user, by runner+wheel module:

external_auth:
  <eauth backend>:
    <user or group%>:
      <@runner or @wheel>:
        <module>:
          - <regex to match function>

Groups

To apply permissions to a group of users in an external authentication system, append a % to the ID:

external_auth:
  pam:
    admins%:
      - '*':
        - 'pkg.*'

Limiting by function arguments

Positional arguments or keyword arguments to functions can also be whitelisted.

New in version 2016.3.0.

external_auth:
  pam:
    my_user:
      - '*':
        - 'my_mod.*':
            args:
              - 'a.*'
              - 'b.*'
            kwargs:
              'kwa': 'kwa.*'
              'kwb': 'kwb'

The rules: the argument values are matched as regular expressions.

Usage

The external authentication system can then be used from the command-line by any user on the same system as the master with the -a option:

$ salt -a pam web\* test.version

The system will ask the user for the credentials required by the authentication system and then publish the command.

Tokens

With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens. Tokens are short term authorizations and can be easily created by just adding a -T option when authenticating:

$ salt -T -a pam web\* test.version

Now a token will be created that has an expiration of 12 hours (by default). This token is stored in a file named salt_token in the active user's home directory. Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires. Token expiration time can be set in the Salt master config file.

LDAP and Active Directory

NOTE: LDAP usage requires that you have installed
python-ldap.
Salt supports both user and group authentication for LDAP (and Active Directory accessed via its LDAP interface).

OpenLDAP and similar systems

LDAP configuration happens in the Salt master configuration file. Server configuration values and their defaults:

# Server to auth against
auth.ldap.server: localhost

# Port to connect via
auth.ldap.port: 389

# Use TLS when connecting
auth.ldap.tls: False

# Use STARTTLS when connecting
auth.ldap.starttls: False

# LDAP scope level, almost always 2
auth.ldap.scope: 2

# Server specified in URI format
auth.ldap.uri: ''    # Overrides .ldap.server, .ldap.port, .ldap.tls above

# Verify server's TLS certificate
auth.ldap.no_verify: False

# Bind to LDAP anonymously to determine group membership
# Active Directory does not allow anonymous binds without special configuration
# In addition, if auth.ldap.anonymous is True, empty bind passwords are not permitted.
auth.ldap.anonymous: False

# FOR TESTING ONLY, this is a VERY insecure setting.
# If this is True, the LDAP bind password will be ignored and
# access will be determined by group membership alone with
# the group memberships being retrieved via anonymous bind
auth.ldap.auth_by_group_membership_only: False

# Require authenticating user to be part of this Organizational Unit
# This can be blank if your LDAP schema does not use this kind of OU
auth.ldap.groupou: 'Groups'

# Object Class for groups. An LDAP search will be done to find all groups of this
# class to which the authenticating user belongs.
auth.ldap.groupclass: 'posixGroup'

# Unique ID attribute name for the user
auth.ldap.accountattributename: 'memberUid'

# These are only for Active Directory
auth.ldap.activedirectory: False
auth.ldap.persontype: 'person'

auth.ldap.minion_stripdomains: []

# Redhat Identity Policy Audit
auth.ldap.freeipa: False

Authenticating to the LDAP Server

There are two phases to LDAP authentication. First, Salt authenticates to search for a user's Distinguished Name and group membership. The user it authenticates as in this phase is often a special LDAP system user with read-only access to the LDAP directory. After Salt searches the directory to determine the actual user's DN and groups, it re-authenticates as the user running the Salt commands.

If you are already aware of the structure of your DNs and permissions in your LDAP store are set such that users can look up their own group memberships, then the first and second users can be the same. To tell Salt this is the case, omit the auth.ldap.bindpw parameter. Note this is not the same thing as using an anonymous bind. Most LDAP servers will not permit anonymous bind, and as mentioned above, if auth.ldap.anonymous is False you cannot use an empty password.

You can template the binddn like this:

auth.ldap.basedn: dc=saltstack,dc=com
auth.ldap.binddn: uid={{ username }},cn=users,cn=accounts,dc=saltstack,dc=com
Salt will use the password entered on the salt command line in place of the bindpw.

To use two separate users, specify the LDAP lookup user in the binddn directive, and include a bindpw like so:

auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=saltstack,dc=com
auth.ldap.bindpw: mypassword

As mentioned before, Salt uses a filter to find the DN associated with a user. Salt substitutes the {{ username }} value for the username when querying LDAP:

auth.ldap.filter: uid={{ username }}
Determining Group Memberships (OpenLDAP / non-Active Directory)

For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. Then the results are filtered against auth.ldap.groupclass, default posixGroup, and the account's 'name' attribute, memberUid by default.

auth.ldap.groupou: Groups

Note that as of 2017.7, auth.ldap.groupclass can refer to either a groupclass or an objectClass. For some LDAP servers (notably OpenLDAP without the memberOf overlay enabled), to determine group membership we need to know both the objectClass and the memberUid attributes. Usually for these servers you will want an auth.ldap.groupclass of posixGroup and an auth.ldap.groupattribute of memberUid. LDAP servers with the memberOf overlay will have entries similar to auth.ldap.groupclass: person and auth.ldap.groupattribute: memberOf.

When using the ldap('DC=domain,DC=com') eauth operator, sometimes the records returned from LDAP or Active Directory have fully-qualified domain names attached, while minion IDs instead are simple hostnames. The parameter below allows the administrator to strip off a certain set of domain names so the hostnames looked up in the directory service can match the minion IDs.

auth.ldap.minion_stripdomains: ['.external.bigcorp.com', '.internal.bigcorp.com']

Determining Group Memberships (Active Directory)

Active Directory handles group membership differently, and does not utilize the groupou configuration variable. AD needs the following options in the master config:

auth.ldap.activedirectory: True
auth.ldap.filter: sAMAccountName={{username}}
auth.ldap.accountattributename: sAMAccountName
auth.ldap.groupclass: group
auth.ldap.persontype: person
To determine group membership in AD, the username and password entered when LDAP is requested as the eAuth mechanism on the command line are used to bind to AD's LDAP interface. If this bind fails, the user is denied access regardless of group membership. Next, the distinguishedName of the user is looked up with the following LDAP search:
(&(<value of auth.ldap.accountattributename>={{username}})(objectClass=<value of auth.ldap.persontype>))
This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed:
(&(member=<distinguishedName from search above>)(objectClass=<value of auth.ldap.groupclass>))
external_auth:
To configure an LDAP group, append a % to the ID:
external_auth:
In addition, if there is a set of computers in the directory service that should be part of the eAuth definition, they can be specified like this:
external_auth:
The string inside ldap() above is any valid LDAP/AD tree limiter. OU= in particular is permitted as long as it would return a list of computer objects.
Peer Communication
Salt 0.9.0 introduced the capability for Salt minions to publish commands. The intent of this feature is not for Salt minions to act as independent brokers with one another, but to allow Salt minions to pass commands to each other. In Salt 0.10.0 the ability to execute runners from the master was added. This allows for the master to return collective data from runners back to the minions via the peer interface. The peer interface is configured through two options in the master configuration file. For minions to send commands through the master, the peer configuration is used. To allow minions to execute runners from the master, the peer_run configuration is used. Since this presents a viable security risk by allowing minions access to the master publisher, the capability is turned off by default. The minions can be allowed access to the master publisher on a per-minion basis based on regular expressions. Minions with specific IDs can be allowed access to certain Salt modules and functions.
Peer Configuration
The configuration is done under the peer setting in the Salt master configuration file; here are a number of configuration possibilities. The simplest approach is to enable all communication for all minions; this is only recommended for very secure environments.
peer:
This configuration will allow minions with IDs ending in example.com access to the test, ps, and pkg module functions.
peer:
The configuration logic is simple: a regular expression is passed for matching minion IDs, and then a list of expressions matching minion functions is associated with the named minion. For instance, this configuration will also allow minions ending with foo.org access to the publisher.
peer:
NOTE: Functions are matched using regular expressions.
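The YAML bodies of the external_auth and peer examples above were lost in this rendering. The following hedged sketch shows the general shape of an LDAP eauth mapping (note the % suffix marking a group) and of a peer configuration; the user, group, and minion-ID patterns are illustrative:

external_auth:
  ldap:
    test_ldap_user:
      - '*':
        - test.ping
    test_ldap_group%:
      - '*':
        - test.echo

peer:
  .*example.com:
    - test.*
    - ps.*
    - pkg.*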
Peer Runner Communication
Configuration to allow minions to execute runners from the master is done via the peer_run option on the master. The peer_run configuration follows the same logic as the peer option. The only difference is that access is granted to runner modules. To open up access to all runners for all minions:
peer_run:
This configuration will allow minions with IDs ending in example.com access to the manage and jobs runner functions.
peer_run:
NOTE: Functions are matched using regular expressions.
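Again, the example bodies above were flattened away; a plausible sketch, with illustrative patterns, would be:

peer_run:
  .*:
    - .*

peer_run:
  .*example.com:
    - manage.*
    - jobs.*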
Using Peer CommunicationThe publish module was created to manage peer communication. The publish module comes with a number of functions to execute peer communication in different ways. Currently there are three functions in the publish module. These examples will show how to test the peer system via the salt-call command. To execute test.version on all minions: # salt-call publish.publish \* test.version To execute the manage.up runner: # salt-call publish.runner manage.up To match minions using other matchers, use tgt_type: # salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.version tgt_type='compound' NOTE: In pre-2017.7.0 releases, use expr_form instead of
tgt_type.
When to Use Each Authentication System
publisher_acl is useful for allowing local system users to run Salt commands without giving them root access. If you can log into the Salt master directly, then publisher_acl allows you to use Salt without root privileges. If the local system is configured to authenticate against a remote system, like LDAP or Active Directory, then publisher_acl will interact with the remote system transparently. external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag) but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service.
Examples
The access controls are manifested using matchers in these configurations:
publisher_acl:
In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets (all matchers are supported except the compound one).
external_auth:
The above allows dave to run test.version on all minions, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.
NOTE: Functions are matched using regular expressions.
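Because the YAML bodies above were lost, here is a hedged sketch of what such publisher_acl and external_auth entries typically look like; the names and targets are illustrative:

publisher_acl:
  fred:
    - web\*:
      - pkg.*

external_auth:
  pam:
    dave:
      - test.version
      - web\*:
        - network.*
    steve:
      - .*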
Job Management
New in version 0.9.7. Since Salt executes jobs running on many systems, Salt needs to be able to manage jobs running on many systems.
The Minion proc System
Salt Minions maintain a proc directory in the Salt cachedir. The proc directory maintains files named after the executed job ID. These files contain the information about the current running jobs on the minion and allow for jobs to be looked up. This is located in the proc directory under the cachedir; with a default configuration it is under /var/cache/salt/{master|minion}/proc.
Functions in the saltutil Module
Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:
running - Returns the data of all running jobs that are found in the proc directory.
find_job - Returns specific data about a certain job based on job id.
signal_job - Allows for a given jid to be sent a signal.
term_job - Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
kill_job - Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
These functions make up the core of the back end used to manage jobs at the minion level.
The jobs Runner
A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner, including active, lookup_jid, and list_jobs.
active
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.
# salt-run jobs.active
lookup_jid
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 86400 seconds, but this can be configured via the keep_jobs_seconds option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
# salt-run jobs.lookup_jid <job id number>
list_jobs
Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already or partially returned.
# salt-run jobs.list_jobs
Scheduling Jobs
Salt's scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master. Scheduling can be enabled by multiple methods:
NOTE: The scheduler executes different functions on the master
and minions. When running on the master the functions reference runner
functions, when running on the minion the functions specify execution
functions.
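For orientation, a minimal minion schedule entry matching the first example described below might look like this sketch (the job name is arbitrary):

schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True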
A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion-logging-settings. States are executed on the minion, as all states are. You can pass positional arguments and provide a YAML dict of named arguments.
schedule:
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour).
schedule:
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds.
schedule:
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds.
Schedule by Date and Time
New in version 2014.7.0. Frequency of jobs can also be specified using date strings supported by the Python dateutil library. This requires the Python dateutil library to be installed.
schedule:
This will schedule the command: state.sls httpd test=True at 5:00 PM minion localtime.
schedule:
This will schedule the command: state.sls httpd test=True at 5:00 PM on Monday, Wednesday and Friday, and 3:00 PM on Tuesday and Thursday.
schedule:
whens:
The Salt scheduler also allows custom phrases to be used for the when parameter. These whens can be stored as either pillar values or grain values.
schedule:
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8:00 AM and 5:00 PM. The range parameter must be a dictionary with the date strings using the dateutil format.
schedule:
Using the invert option for range, this will schedule the command state.sls httpd test=True every 3600 seconds (every hour) until the current time is between the hours of 8:00 AM and 5:00 PM. The range parameter must be a dictionary with the date strings using the dateutil format.
schedule:
This will schedule the function pkg.install to be executed once at the specified time. The schedule entry job1 will not be removed after the job completes; therefore, use schedule.delete to manually remove it afterwards. The default date format is ISO 8601, but it can be overridden by also specifying the once_fmt option, like this:
schedule:
Maximum Parallel Jobs Running
New in version 2014.7.0. The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of infrastructure outage. The default for maxrunning is 1.
schedule:
Cron-like Schedule
New in version 2014.7.0.
schedule:
The scheduler also supports scheduling jobs using a cron-like format. This requires the Python croniter library.
Job Data Return
New in version 2015.5.0. By default, data about job runs from the Salt scheduler is returned to the master. Setting the return_job parameter to False will prevent the data from being sent back to the Salt master.
schedule:
Job Metadata
New in version 2015.5.0. It can be useful to include specific data to differentiate a job from other jobs. Using the metadata parameter, special values can be associated with a scheduled job. These values are not used in the execution of the job, but can be used to search for specific jobs later if combined with the return_job parameter. The metadata parameter must be specified as a dictionary, otherwise it will be ignored.
schedule:
Run on Start
New in version 2015.5.0. By default, any job scheduled based on the startup time of the minion will run the scheduled job when the minion starts up. Sometimes this is not the desired situation.
Using the run_on_start parameter set to False will cause the scheduler to skip this first run and wait until the next scheduled run:
schedule:
Until and After
New in version 2015.8.0.
schedule:
Using the until argument, the Salt scheduler allows you to specify an end time for a scheduled job. If this argument is specified, jobs will not run once the specified time has passed. Time should be specified in a format supported by the dateutil library. This requires the Python dateutil library to be installed. New in version 2015.8.0.
schedule:
Using the after argument, the Salt scheduler allows you to specify a start time for a scheduled job. If this argument is specified, jobs will not run until the specified time has passed. Time should be specified in a format supported by the dateutil library. This requires the Python dateutil library to be installed.
Scheduling States
schedule:
Scheduling Highstates
To set up a highstate to run on a minion every 60 minutes, set this in the minion config or pillar:
schedule:
Time intervals can be specified as seconds, minutes, hours, or days.
Scheduling Runners
Runner executions can also be specified on the master within the master configuration file:
schedule:
The above configuration is analogous to running salt-run state.orch orchestration.my_orch every 6 hours.
Scheduler With Returner
The scheduler is also useful for tasks like gathering monitoring data about a minion; this schedule option will gather status data and send it to a MySQL returner database:
schedule:
Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling.
Managing the Job Cache
The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. This job cache is called the Default Job Cache.
Default Job Cache
A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management. The default job cache is a temporary cache and jobs will be stored for 86400 seconds. If the default cache needs to store jobs for a different period, the time can be easily adjusted by changing the keep_jobs_seconds parameter in the Salt Master configuration file. The value passed in is measured in seconds:
keep_jobs_seconds: 86400
Reducing the Size of the Default Job Cache
The Default Job Cache can sometimes be a burden on larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system or a tmpfs is mounted to the jobs dir. However, you can disable the job_cache by setting it to False in the Salt Master configuration file. Setting this value to False means that the Salt Master will no longer cache minion returns, but a JID directory and jid file for each job will still be created. This JID directory is necessary for checking for and preventing JID collisions. The default location for the job cache is in the /var/cache/salt/master/jobs/ directory.
Setting the job_cache to False in addition to setting the keep_jobs_seconds option to a smaller value, such as 3600, in the Salt Master configuration file will reduce the size of the Default Job Cache, and thus the burden on the Salt Master. NOTE: Changing the keep_jobs_seconds option sets the
number of seconds to keep old job information and defaults to 86400
seconds. Do not set this value to 0 when trying to make the cache
cleaner run more frequently, as this means the cache cleaner will never
run.
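A hedged sketch of the corresponding Salt Master configuration described above; the retention value is illustrative:

job_cache: False
keep_jobs_seconds: 3600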
Additional Job Cache OptionsMany deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this, the master job cache and the external job cache. See Storing Job Results in an External System. Storing Job Results in an External SystemAfter a job executes, job results are returned to the Salt Master by each Salt Minion. These results are stored in the Default Job Cache. In addition to the Default Job Cache, Salt provides two additional mechanisms to send job results to other systems (databases, local syslog, and others):
The major difference between these two mechanisms is where results are returned from (the Salt Master or the Salt Minion). Configuring either of these options will also make the Jobs Runner functions automatically query the remote stores for information.
External Job Cache - Minion-Side Returner
When an External Job Cache is configured, data is returned to the Default Job Cache on the Salt Master like usual, and then results are also sent to an External Job Cache using a Salt returner module running on the Salt Minion. [image]
Master Job Cache - Master-Side Returner
New in version 2014.7.0. Instead of configuring an External Job Cache on each Salt Minion, you can configure the Master Job Cache to send job results from the Salt Master. In this configuration, Salt Minions send data to the Default Job Cache as usual, and then the Salt Master sends the data to the external system using a Salt returner module running on the Salt Master. [image]
Configure an External or Master Job CacheStep 1: Understand Salt ReturnersBefore you configure a job cache, it is essential to understand Salt returner modules ("returners"). Returners are pluggable Salt Modules that take the data returned by jobs, and then perform any necessary steps to send the data to an external system. For example, a returner might establish a connection, authenticate, and then format and transfer data. The Salt Returner system provides the core functionality used by the External and Master Job Cache systems, and the same returners are used by both systems. Salt currently provides many different returners that let you connect to a wide variety of systems. A complete list is available at all Salt returners. Each returner is configured differently, so make sure you read and follow the instructions linked from that page. For example, the MySQL returner requires:
A simpler returner, such as Slack or HipChat, requires:
Step 2: Configure the ReturnerAfter you understand the configuration and have the external system ready, the configuration requirements must be declared. External Job CacheThe returner configuration settings can be declared in the Salt Minion configuration file, the Minion's pillar data, or the Minion's grains. If external_job_cache configuration settings are specified in more than one place, the options are retrieved in the following order. The first configuration location that is found is the one that will be used.
Master Job CacheThe returner configuration settings for the Master Job Cache should be declared in the Salt Master's configuration file. Configuration File ExamplesMySQL requires: mysql.host: 'salt' mysql.user: 'salt' mysql.pass: 'salt' mysql.db: 'salt' mysql.port: 3306 Slack requires: slack.channel: 'channel' slack.api_key: 'key' slack.from_name: 'name' After you have configured the returner and added settings to the configuration file, you can enable the External or Master Job Cache. Step 3: Enable the External or Master Job CacheConfiguration is a single line that specifies an already-configured returner to use to send all job data to an external system. External Job CacheTo enable a returner as the External Job Cache (Minion-side), add the following line to the Salt Master configuration file: ext_job_cache: <returner> For example: ext_job_cache: mysql NOTE: When configuring an External Job Cache (Minion-side), the
returner settings are added to the Minion configuration file, but the External
Job Cache setting is configured in the Master configuration file.
Master Job Cache
To enable a returner as a Master Job Cache (Master-side), add the following line to the Salt Master configuration file: master_job_cache: <returner> For example: master_job_cache: mysql Verify that the returner configuration settings are in the Master configuration file, and be sure to restart the salt-master service after you make configuration changes (service salt-master restart).
Logging
The Salt Project tries to make logging work for you and help us solve any issues you might find along the way. If you want more information on the nitty-gritty of salt's logging system, please head over to the logging development document. If all you're after is salt's logging configuration, please continue reading.
Log Levels
The log levels are ordered numerically such that setting the log level to a specific level will record all log statements at that level and higher. For example, setting log_level: error will log statements at error, critical, and quiet levels, although nothing should be logged at quiet level. Most of the logging levels are defined by default in Python's logging library and can be found in the official Python documentation. Salt uses some more levels in addition to the standard levels. All levels available in salt are shown in the table below.
Level     Numeric value  Description
quiet     1000           Nothing should be logged at this level
critical  50             Critical errors
error     40             Errors
warning   30             Warnings
info      20             Normal log information
profile   15             Profiling information on salt performance
debug     10             Information useful for debugging both salt implementations and salt code
trace     5              More detailed code debugging information
garbage   1              Even more debugging information
all       0              Everything
NOTE: Python dependencies used by salt may define and use
additional logging levels. For example, the Python 2 version of the
multiprocessing standard Python library uses the levels
subwarning (25) and subdebug (5).
Any log level below the info level is INSECURE and may log sensitive data. This currently includes: profile, debug, trace, garbage, and all.
Available Configuration Settings
log_file
The log records can be sent to a regular file, local path name, or network location. Remote logging works best when configured to use rsyslogd(8) (e.g.: file:///dev/log), with rsyslogd(8) configured for network logging. The format for remote addresses is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility> Where log-facility is the symbolic name of a syslog facility as defined in the SysLogHandler documentation. It defaults to LOG_USER. Default: Dependent on the binary being executed, for example, for salt-master, /var/log/salt/master. Examples:
log_file: /var/log/salt/master
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: file:///dev/log/LOG_DAEMON
log_file: udp://loghost:10514
log_level
Default: warning The level of log record messages to send to the console. One of all, garbage, trace, debug, profile, info, warning, error, critical, quiet.
log_level: warning
NOTE: Add log_level: quiet in the salt configuration file to
completely disable logging. In case of running salt in command line use
--log-level=quiet instead.
log_level_logfileDefault: info The level of messages to send to the log file. One of all, garbage, trace, debug, profile, info, warning, error, critical, quiet. log_level_logfile: warning log_datefmtDefault: %H:%M:%S The date and time format used in console log messages. Allowed date/time formatting matches those used in time.strftime(). log_datefmt: '%H:%M:%S' log_datefmt_logfileDefault: %Y-%m-%d %H:%M:%S The date and time format used in log file messages. Allowed date/time formatting matches those used in time.strftime(). log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' log_fmt_consoleDefault: [%(levelname)-8s] %(message)s The format of the console logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes to colorize console log output: "%(colorlevel)s" # log level name colorized by level "%(colorname)s" # colorized module name "%(colorprocess)s" # colorized process number "%(colormsg)s" # log message colorized by level NOTE: The %(colorlevel)s, %(colorname)s, and
%(colorprocess)s LogRecord attributes also include padding and enclosing
brackets, [ and ] to match the default values of their
collateral non-colorized LogRecord attributes.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s The format of the log file logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes that include padding and enclosing brackets [ and ]:
"%(bracketlevel)s"   # equivalent to [%(levelname)-8s]
"%(bracketname)s"    # equivalent to [%(name)-17s]
"%(bracketprocess)s" # equivalent to [%(process)5s]
log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
Default: {} This can be used to control logging levels more specifically, based on log call name. The example sets the main salt library at the 'warning' level, sets salt.modules to log at the debug level, and sets a custom module to the all level:
log_granular_levels:
  'salt': 'warning'
  'salt.modules': 'debug'
  'salt.loader.saltmaster.ext.module.custom_module': 'all'
You can determine what log call name to use here by adding %(module)s to the log format. Typically, it is the path of the file which generates the log without the trailing .py and with path separators replaced with .
log_fmt_jid
Default: [JID: %(jid)s] The format of the JID when added to logging messages.
log_fmt_jid: '[JID: %(jid)s]'
External Logging Handlers
Besides the internal logging handlers used by salt, there are some external ones which can be used; see the external logging handlers document.
External Logging Handlers
salt.log_handlers.fluent_modFluent Logging HandlerNew in version 2015.8.0. This module provides some fluentd logging handlers. Fluent Logging HandlerIn the fluent configuration file: <source> Then, to send logs via fluent in Logstash format, add the following to the salt (master and/or minion) configuration file: fluent_handler: To send logs via fluent in the Graylog raw json format, add the following to the salt (master and/or minion) configuration file: fluent_handler: The above also illustrates the tags option, which allows one to set descriptive (or useful) tags on records being sent. If not provided, this defaults to the single tag: 'salt'. Also note that, via Graylog "magic", the 'facility' of the logged message is set to 'SALT' (the portion of the tag after the first period), while the tag itself will be set to simply 'salt_master'. This is a feature, not a bug :) Note: There is a third emitter, for the GELF format, but it is largely untested, and I don't currently have a setup supporting this config, so while it runs cleanly and outputs what LOOKS to be valid GELF, any real-world feedback on its usefulness, and correctness, will be appreciated. Log LevelThe fluent_handler configuration section accepts an additional setting log_level. If not set, the logging level used will be the one defined for log_level in the global configuration file section.
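The fluent_handler bodies above were lost in rendering. A hedged sketch of the Logstash-format variant, assuming a fluentd agent listening locally on the default forward port:

fluent_handler:
  host: localhost
  port: 24224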
salt.log_handlers.log4mongo_mod
Log4Mongo Logging Handler
This module provides a logging handler for sending salt logs to MongoDB.
Configuration
In the salt configuration file (e.g. /usr/local/etc/salt/{master,minion}):
log4mongo_handler:
Log Level
If not set, the log_level will be set to the level defined in the global configuration file setting.
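A hedged sketch of the truncated log4mongo_handler block above; the connection values are illustrative:

log4mongo_handler:
  host: mongodb_host
  port: 27017
  database_name: logs
  collection: salt_logs
  username: logging
  password: mypassword
  write_concern: 0
  log_level: warning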
salt.log_handlers.logstash_modLogstash Logging HandlerNew in version 0.17.0. This module provides some Logstash logging handlers. UDP Logging HandlerFor versions of Logstash before 1.2.0: In the salt configuration file: logstash_udp_handler: In the Logstash configuration file: input {
For version 1.2.0 of Logstash and newer: In the salt configuration file: logstash_udp_handler: In the Logstash configuration file: input {
Please read the UDP input configuration page for additional information. ZeroMQ Logging HandlerFor versions of Logstash before 1.2.0: In the salt configuration file: logstash_zmq_handler: In the Logstash configuration file: input {
For version 1.2.0 of Logstash and newer: In the salt configuration file: logstash_zmq_handler: In the Logstash configuration file: input {
Please read the ZeroMQ input configuration page for additional information.
Log Level
Both the logstash_udp_handler and the logstash_zmq_handler configuration sections accept an additional setting log_level. If not set, the logging level used will be the one defined for log_level in the global configuration file section.
HWM
The high water mark setting for the ZMQ socket. Only applicable to the logstash_zmq_handler.
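Since the logstash handler bodies above were flattened away, here is a hedged sketch of a UDP handler configuration for Logstash 1.2.0 and newer; host and port are illustrative:

logstash_udp_handler:
  host: 127.0.0.1
  port: 9999
  version: 1
  msg_type: logstash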
salt.log_handlers.sentry_modSentry Logging HandlerNew in version 0.17.0. This module provides a Sentry logging handler. Sentry is an open source error tracking platform that provides deep context about exceptions that happen in production. Details about stack traces along with the context variables available at the time of the exception are easily browsable and filterable from the online interface. For more details please see Sentry.
Configuring the python Sentry client, Raven, should be done under the sentry_handler configuration key. Additional context may be provided for corresponding grain item(s). At the bare minimum, you need to define the DSN. As an example: sentry_handler: More complex configurations can be achieved, for example: sentry_handler:
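A hedged sketch of the bare-minimum configuration referenced above; the DSN is an illustrative placeholder:

sentry_handler:
  dsn: https://pub-key:secret-key@app.getsentry.com/app-id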
All the client configuration keys are supported; please see the Raven client documentation. The default logging level for the sentry handler is ERROR. If you wish to define a different one, define log_level under the sentry_handler configuration key: sentry_handler: The available log levels are those also available for the salt cli tools and configuration; salt --help should give you the required information.
Threaded Transports
Raven's documents rightly suggest using its threaded transport for critical applications. However, if you start having trouble with Salt after enabling the threaded transport, please try switching to a non-threaded transport to see if that fixes your problem.
Salt File Server
Salt comes with a simple file server suitable for distributing files to the Salt minions. The file server is a stateless ZeroMQ server that is built into the Salt master. The main intent of the Salt file server is to present files for use in the Salt state system. With this said, the Salt file server can be used for any general file transfer from the master to the minions.
File Server Backends
In Salt 0.12.0, the modular fileserver was introduced. This feature added the ability for the Salt Master to integrate different file server backends. File server backends allow the Salt file server to act as a transparent bridge to external resources. A good example of this is the git backend, which allows Salt to serve files sourced from one or more git repositories, but there are several others as well. See the full list of Salt's fileserver backends.
Enabling a Fileserver Backend
Fileserver backends can be enabled with the fileserver_backend option.
fileserver_backend:
See the documentation for each backend to find the correct value to add to fileserver_backend in order to enable them.
Using Multiple Backends
If fileserver_backend is not defined in the Master config file, Salt will use the roots backend, but the fileserver_backend option supports multiple backends. When more than one backend is in use, the files from the enabled backends are merged into a single virtual filesystem. When a file is requested, the backends will be searched in order for that file, and the first backend to match will be the one which returns the file.
fileserver_backend:
With this configuration, the environments and files defined in the file_roots parameter will be searched first, and if the file is not found then the git repositories defined in gitfs_remotes will be searched.
Defining Environments
Just as the order of the values in fileserver_backend matters, so too does the order in which different sources are defined within a fileserver environment. For example, given the below file_roots configuration, if both /usr/local/etc/salt/states/dev/foo.txt and /usr/local/etc/salt/states/prod/foo.txt exist on the Master, then salt://foo.txt would point to /usr/local/etc/salt/states/dev/foo.txt in the dev environment, but it would point to /usr/local/etc/salt/states/prod/foo.txt in the base environment.
file_roots:
Similarly, when using the git backend, if both repositories defined below have a hotfix23 branch/tag, and both of them also contain the file bar.txt in the root of the repository at that branch/tag, then salt://bar.txt in the hotfix23 environment would be served from the first repository.
gitfs_remotes:
NOTE: Environments map differently based on the fileserver
backend. For instance, the mappings are explicitly defined in roots
backend, while in the VCS backends (git, hg, svn) the
environments are created from branches/tags/bookmarks/etc. For the
minion backend, the files are all in a single environment, which is
specified by the minionfs_env option.
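A hedged sketch of the two truncated examples above (the backend order and environment directories are illustrative, using the install paths shown in this documentation):

fileserver_backend:
  - roots
  - gitfs

file_roots:
  dev:
    - /usr/local/etc/salt/states/dev
  base:
    - /usr/local/etc/salt/states/prod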
See the documentation for each backend for a more detailed explanation of how environments are mapped.
Requesting Files from Specific Environments
The Salt fileserver supports multiple environments, allowing for SLS files and other files to be isolated for better organization. For the default backend (called roots), environments are defined using the roots option. Other backends (such as gitfs) define environments in their own ways. For a list of available fileserver backends, see here.
Querystring Syntax
Any salt:// file URL can specify its fileserver environment using a querystring syntax, like so: salt://path/to/file?saltenv=foo In Reactor configurations, this method must be used to pull files from an environment other than base.
In States
Minions can be instructed which environment to use both globally and for a single state, and multiple methods for each are available:
Globally
A minion can be pinned to an environment using the environment option in the minion config file. Additionally, the environment can be set for a single call to the following functions:
NOTE: When the saltenv parameter is used to trigger a
highstate using either state.apply or state.highstate,
only states from that environment will be applied.
On a Per-State BasisWithin an individual state, there are two ways of specifying the environment. The first is to add a saltenv argument to the state. This example will pull the file from the config environment: /etc/foo/bar.conf: Another way of doing the same thing is to use the querystring syntax described above: /etc/foo/bar.conf: NOTE: Specifying the environment using either of the above
methods is only necessary in cases where a state from one environment needs to
access files from another environment. If the SLS file containing this state
was in the config environment, then it would look in that environment
by default.
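The two truncated state examples above might look like the following hedged sketch (the source path, ownership, and mode are illustrative):

/etc/foo/bar.conf:
  file.managed:
    - source: salt://foo/bar.conf
    - user: foo
    - mode: 600
    - saltenv: config

/etc/foo/bar.conf:
  file.managed:
    - source: salt://foo/bar.conf?saltenv=config
    - user: foo
    - mode: 600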
File Server Configuration
The Salt file server is a high performance file server written in ZeroMQ. It manages large files quickly and with little overhead, and has been optimized to handle small files in an extremely efficient manner. The Salt file server is an environment aware file server. This means that files can be allocated within many root directories and accessed by specifying both the file path and the environment to search. The individual environments can span across multiple directory roots to create overlays and to allow for files to be organized in many flexible ways.
Periodic Restarts
The file server will restart periodically. The reason for this is to prevent any fileserver backends which may not properly handle resources from endlessly consuming memory. A notable example of this is using a git backend with the pygit2 library. How often the file server restarts can be controlled with the fileserver_interval in your master's config file.
Environments
The Salt file server defaults to the mandatory base environment. This environment MUST be defined and is used to download files when no environment is specified. Environments allow for files and sls data to be logically separated, but environments are not isolated from each other. This allows for logical isolation of environments by the engineer using Salt, but also allows for information to be used in multiple environments.
Directory Overlay
The environment setting is a list of directories to publish files from. These directories are searched in order to find the specified file and the first file found is returned. This means that directory data is prioritized based on the order in which they are listed. In the case of this file_roots configuration:
file_roots:
If a file's URI is salt://httpd/httpd.conf, it will first search for the file at /usr/local/etc/salt/states/base/httpd/httpd.conf. If the file is found there it will be returned. If the file is not found there, then /usr/local/etc/salt/states/failover/httpd/httpd.conf will be used for the source. This allows for directories to be overlaid and prioritized based on the order they are defined in the configuration. It is also possible to have file_roots which supports multiple environments:
file_roots:
This example ensures that each environment will check the associated environment directory for files first. If a file is not found in the appropriate directory, the system will default to using the base directory.
Local File Server
New in version 0.9.8. The file server can be rerouted to run from the minion. This is primarily to enable running Salt states without a Salt master. To use the local file server interface, copy the file server data to the minion and set the file_roots option on the minion to point to the directories copied from the master. Once the minion file_roots option has been set, change the file_client option to local to make sure that the local file server interface is used.
The cp Module
The cp module is the home of minion side file server operations. The cp module is used by the Salt state system, salt-cp, and can be used to distribute files presented by the Salt file server.
Escaping Special Characters
The salt:// url format can potentially contain a query string, for example salt://dir/file.txt?saltenv=base. You can prevent the fileclient/fileserver from interpreting ? as the initial token of a query string by referencing the file with salt://| rather than salt://.
/etc/marathon/conf/?checkpoint:
Environments
Since the file server is made to work with the Salt state system, it supports environments. The environments are defined in the master config file and when referencing an environment the file specified will be based on the root directory of the environment.
get_file
The cp.get_file function can be used on the minion to download a file from the master; the syntax looks like this:
salt '*' cp.get_file salt://vimrc /etc/vimrc
This will instruct all Salt minions to download the vimrc file and copy it to /etc/vimrc. Template rendering can be enabled on both the source and destination file names, like so:
salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja
This example would instruct all Salt minions to download the vimrc from a directory with the same name as their OS grain and copy it to /etc/vimrc. For larger files, the cp.get_file module also supports gzip compression. Because gzip is CPU-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty-printed JSON or YAML files). To use compression, use the gzip named argument. Valid values are integers from 1 to 9, where 1 is the lightest compression and 9 the heaviest. In other words, 1 uses the least CPU on the master (and minion), while 9 uses the most.
salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5
Finally, note that by default cp.get_file does not create new destination directories if they do not exist. To change this, use the makedirs argument:
salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True
In this example, /etc/vim/ would be created if it didn't already exist.
get_dir
The cp.get_dir function can be used on the minion to download an entire directory from the master. The syntax is very similar to get_file:
salt '*' cp.get_dir salt://etc/apache2 /etc
cp.get_dir supports template rendering and gzip compression arguments just like get_file:
salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja
File Server Client Instance
A client instance is available which allows for modules and applications to be written which make use of the Salt file server. The file server uses the same authentication and encryption used by the rest of the Salt system for network communication.
fileclient Module
The salt/fileclient.py module is used to set up the communication from the minion to the master. When creating a client instance using the fileclient module, the minion configuration needs to be passed in. When using the fileclient module from within a minion module, the built-in __opts__ data can be passed (the function bodies below are minimal completions of the truncated examples):

import salt.minion
import salt.fileclient


def get_file(path, dest, saltenv="base"):
    # Create a file client backed by the minion's own __opts__,
    # then fetch the requested file from the master
    client = salt.fileclient.get_file_client(__opts__)
    return client.get_file(path, dest, False, saltenv)

When creating a fileclient instance outside of a minion module, where the __opts__ data is not available, it needs to be generated:

import salt.fileclient
import salt.config


def get_file(path, dest, saltenv="base"):
    # Generate the minion configuration from the config file,
    # then create the client and fetch the requested file
    opts = salt.config.minion_config("/usr/local/etc/salt/minion")
    client = salt.fileclient.get_file_client(opts)
    return client.get_file(path, dest, False, saltenv)

Git Fileserver Backend Walkthrough
NOTE: This walkthrough assumes basic knowledge of Salt. To get
up to speed, check out the Salt Walkthrough.
The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes. Branches and tags become Salt fileserver environments. NOTE: Branching and tagging can result in a lot of
potentially-conflicting top files; for this reason it may be useful to
set top_file_merging_strategy to same in the minions' config
files if the top files are being managed in a GitFS repo.
Installing DependenciesBoth pygit2 and GitPython are supported Python interfaces to git. If compatible versions of both are installed, pygit2 will be preferred. In these cases, GitPython can be forced using the gitfs_provider parameter in the master config file. NOTE: It is recommended to always run the most recent version
of any of the below dependencies. Certain features of GitFS may not be available
without the most recent version of the chosen library.
pygit2The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible. For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install pygit2: # yum install python-pygit2 Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it: # apt-get install python-pygit2 If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some Debian-based distros pkg-config is also required to link libgit2 with libssh2. NOTE: If you are receiving the error "Unsupported URL
Protocol" in the Salt Master log when making a connection using SSH,
review the libssh2 details listed above.
Additionally, version 0.21.0 of pygit2 introduced a dependency on python-cffi, which in turn depends on newer releases of libffi. Upgrading libffi is not advisable as several other applications depend on it, so on older LTS Linux releases pygit2 0.20.3 and libgit2 0.20.0 are the recommended combination. WARNING: pygit2 is actively developed and frequently
makes non-backwards-compatible API changes, even in minor releases.
It is not uncommon for pygit2 upgrades to result in errors in Salt.
Please take care when upgrading pygit2, and pay close attention to the
changelog, keeping an eye out for API changes. Errors can be reported
on the SaltStack issue tracker.
RedHat Pygit2 Issues
The release of RedHat/CentOS 7.3 upgraded both python-cffi and http-parser, both of which are dependencies for pygit2/libgit2. Both the pygit2 and libgit2 packages (which are from the EPEL repository) should be upgraded to the most recent versions, at least to 0.24.2. The below errors will show up in the master log if an incompatible python-pygit2 package is installed:
2017-02-10 09:07:34,892 [salt.utils.gitfs ][ERROR ][11211] Import pygit2 failed: CompileError: command 'gcc' failed with exit status 1
2017-02-10 09:07:34,907 [salt.utils.gitfs ][ERROR ][11211] gitfs is configured but could not be loaded, are pygit2 and libgit2 installed?
2017-02-10 09:07:34,907 [salt.utils.gitfs ][CRITICAL][11211] No suitable gitfs provider module is installed.
2017-02-10 09:07:34,912 [salt.master ][CRITICAL][11211] Master failed pre flight checks, exiting
The below errors will show up in the master log if an incompatible libgit2 package is installed:
2017-02-15 18:04:45,211 [salt.utils.gitfs ][ERROR ][6211] Error occurred fetching gitfs remote 'https://foo.com/bar.git': No Content-Type header in response
A restart of the salt-master daemon and gitfs cache directory cleanup may be required to allow http(s) repositories to continue to be fetched.
Debian Pygit2 Issues
The Debian repos currently have older versions of pygit2 (package python3-pygit2). These older versions may have issues using newer SSH keys (see https://github.com/saltstack/salt/issues/61790). Instead, pygit2 can be installed from PyPI, but you will need a version that matches the libgit2 version from Debian. This is version 1.6.1.
# apt-get install libgit2
# salt-pip install pygit2==1.6.1 --no-deps
Note that the above instructions assume a onedir installation. The need for --no-deps is to prevent the CFFI package from mismatching with Salt.
GitPython
GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:
# yum install GitPython
Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:
# apt-get install python-git
GitPython requires the git CLI utility to work. If installed from a system package, then git should already be installed, but if installed via pip then it may still be necessary to install git separately. For MacOS users, GitPython comes bundled in with the Salt installer, but git must still be installed for it to work properly. Git can be installed in several ways, including by installing XCode. WARNING: GitPython advises against the use of its library for
long-running processes (such as a salt-master or salt-minion). Please see
their warning on potential leaks of system resources:
https://github.com/gitpython-developers/GitPython#leakage-of-system-resources.
WARNING: Keep in mind that if GitPython has been previously
installed on the master using pip (even if it was subsequently uninstalled),
then it may still exist in the build cache (typically
/tmp/pip-build-root/GitPython) if the cache is not cleared after
installation. The package in the build cache will override any requirement
specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip
install 'GitPython==0.3.2.RC1' then it will ignore this and simply install
the version from the cache directory. Therefore, it may be necessary to delete
the GitPython directory from the build cache in order to ensure that the
specified version is installed.
WARNING: GitPython 2.0.9 and newer is not compatible with
Python 2.6. If installing GitPython using pip on a machine running
Python 2.6, make sure that a version earlier than 2.0.9 is installed. This can
be done on the CLI by running pip install 'GitPython<2.0.9', or in a
pip.installed state using the following SLS:
GitPython:
  pip.installed:
    - name: 'GitPython < 2.0.9'
Simple Configuration
To use the gitfs backend, only two configuration changes are required on the master:
fileserver_backend:
  - gitfs
NOTE: git also works here. Prior to the 2018.3.0
release, only git would work.
gitfs_remotes: SSH remotes can also be configured using scp-like syntax: gitfs_remotes: Information on how to authenticate to SSH remotes can be found here.
NOTE: In a master/minion setup, files from a gitfs remote are
cached once by the master, so minions do not need direct access to the git
repository.
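The gitfs_remotes examples above were truncated in this rendering; a hedged sketch with illustrative repository URLs (including the scp-like form) follows:

gitfs_remotes:
  - https://github.com/example/first.git
  - ssh://git@example.com/second.git
  - git@example.com:user/third.git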
Multiple Remotes
The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files. A simple scenario illustrates this cascading lookup behavior: If the gitfs_remotes option specifies three remotes:
gitfs_remotes:
And each repository contains some files:
first.git:
Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:
NOTE: This example is purposefully contrived to illustrate the
behavior of the gitfs backend. This example should not be read as a
recommended way to lay out files and git repos.
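Under that caveat, the flattened scenario above might be sketched like this; the URLs and file lists are illustrative (the third remote uses the file:// form discussed next):

gitfs_remotes:
  - git://github.com/example/first.git
  - https://github.com/example/second.git
  - file:///root/third

first.git:
  top.sls
  edit/vim.sls
  edit/vimrc
  nginx/init.sls

With this layout, salt://edit/vim.sls would be served from first.git; a file present only in the second or third remote would be served from the first remote that contains it.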
The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo. WARNING: Salt versions prior to 2014.1.0 are not tolerant of
changing the order of remotes or modifying the URI of existing remotes. In
those versions, when modifying remotes it is a good idea to remove the gitfs
cache directory (/var/cache/salt/master/gitfs) before restarting the
salt-master service.
Per-remote Configuration ParametersNew in version 2014.7.0. The following master config parameters are global (that is, they apply to all configured gitfs remotes):
NOTE: pygit2 only supports disabling SSL verification in
versions 0.23.2 and newer.
These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage: gitfs_provider: pygit2 gitfs_base: develop gitfs_remotes: IMPORTANT: There are two important distinctions which should be
noted for per-remote configuration:
The all_saltenvs parameter is new in the 2018.3.0 release. In the example configuration above, the following is true:
The use of http:// (instead of https://) is
permitted here only because authentication is not being used.
Otherwise, the insecure_auth parameter must be used (as in the fourth
remote) to force Salt to authenticate to an http:// remote.
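A hedged, abbreviated sketch of the per-remote override example discussed above; the URLs and credentials are illustrative:

gitfs_provider: pygit2
gitfs_base: develop
gitfs_remotes:
  - https://foo.com/foo.git
  - https://foo.com/bar.git:
    - root: salt
    - mountpoint: salt://bar
    - base: salt-base
  - http://foo.com/baz.git:
    - root: salt/states
    - user: joe
    - password: mysupersecretpassword
    - insecure_auth: True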
Per-Saltenv Configuration ParametersNew in version 2016.11.0. For more granular control, Salt allows the following three things to be overridden for individual saltenvs within a given repo:
Here is an example: gitfs_root: salt gitfs_saltenv: Given the above configuration, the following is true:
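The flattened example above might look roughly like this sketch; the saltenv names and overrides are illustrative:

gitfs_root: salt
gitfs_saltenv:
  - dev:
    - root: salt/dev
  - qa:
    - ref: qa-branch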
Custom RefspecsNew in version 2017.7.0. GitFS will by default fetch remote branches and tags. However, sometimes it can be useful to fetch custom refs (such as those created for GitHub pull requests). To change the refspecs GitFS fetches, use the gitfs_refspecs config option: gitfs_refspecs: In the above example, in addition to fetching remote branches and tags, GitHub's custom refs for pull requests and merged pull requests will also be fetched. These special head refs represent the head of the branch which is requesting to be merged, and the merge refs represent the result of the base branch after the merge. IMPORTANT: When using custom refspecs, the destination of the
fetched refs must be under refs/remotes/origin/, preferably in a
subdirectory like in the example above. These custom refspecs will map as
environment names using their relative path underneath
refs/remotes/origin/. For example, assuming the configuration above,
the head branch for pull request 12345 would map to fileserver environment
pr/12345 (slash included).
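The gitfs_refspecs example that the preceding paragraphs refer to was lost in rendering; it is conventionally written like this sketch (the pull-request refs follow GitHub's naming):

gitfs_refspecs:
  - '+refs/heads/*:refs/remotes/origin/*'
  - '+refs/tags/*:refs/tags/*'
  - '+refs/pull/*/head:refs/remotes/origin/pr/*'
  - '+refs/pull/*/merge:refs/remotes/origin/merge/*'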
Refspecs can be configured on a per-remote basis. For example, the below configuration would only alter the default refspecs for the second GitFS remote. The first remote would only fetch branches and tags (the default).
gitfs_remotes:
Global Remotes
New in version 2018.3.0 for all_saltenvs; 3001 for fallback. The all_saltenvs per-remote configuration parameter overrides the logic Salt uses to map branches/tags to fileserver environments (i.e. saltenvs). This allows a single branch/tag to appear in all GitFS saltenvs. NOTE: all_saltenvs only works within GitFS. That
is, files in a branch configured using all_saltenvs will not
show up in a fileserver environment defined via some other fileserver backend
(e.g. file_roots).
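A hedged sketch of the two variants discussed here, using an illustrative formula repository (the fallback form is explained below):

gitfs_remotes:
  - https://github.com/example/myformula.git:
    - all_saltenvs: master

gitfs_remotes:
  - https://github.com/example/myformula.git:
    - fallback: master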
The fallback global or per-remote configuration can also be used. This is very useful in particular when working with salt formulas. Prior to the addition of this feature, it was necessary to push a branch/tag to the remote repo for each saltenv in which that formula was to be used. If the formula needed to be updated, this update would need to be reflected in all of the other branches/tags. This is both inconvenient and not scalable. With all_saltenvs, it is now possible to define your formula once, in a single branch. gitfs_remotes: If you want to also test working branches of the formula repository, use fallback: gitfs_remotes: Update IntervalsPrior to the 2018.3.0 release, GitFS would update its fileserver backends as part of a dedicated "maintenance" process, in which various routine maintenance tasks were performed. This tied the update interval to the loop_interval config option, and also forced all fileservers to update at the same interval. Now it is possible to make GitFS update at its own interval, using gitfs_update_interval: gitfs_update_interval: 180 gitfs_remotes: Using the above configuration, the first remote would update every three minutes, while the second remote would update every two minutes. Configuration Order of PrecedenceThe order of precedence for GitFS configuration is as follows (each level overrides all levels below it):
gitfs_remotes:
gitfs_saltenv:
gitfs_remotes:
gitfs_mountpoint: salt://bar NOTE: The one exception to the above is when
all_saltenvs is used. This value overrides all logic for mapping
branches/tags to fileserver environments. So, even if gitfs_saltenv is
used to globally override the mapping for a given saltenv, all_saltenvs
would take precedence for any remote which uses it.
It's important to note, however, that any root and mountpoint values configured in gitfs_saltenv (or per-saltenv configuration) would be unaffected by this.
Serving from a Subdirectory
The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver. Assume the below layout:
.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls
The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:
gitfs_remotes:
The root can also be configured on a per-remote basis.
Mountpoints
New in version 2014.7.0. The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver. Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository). The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.
gitfs_remotes:
Mountpoints can also be configured on a per-remote basis.
Using gitfs in Masterless Mode
Since 2014.7.0, gitfs can be used in masterless mode. To do so, simply add the gitfs configuration parameters (and set fileserver_backend) in the minion config file instead of the master config file.
Using gitfs Alongside Other Backends
Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master. The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:
fileserver_backend:
Then the roots backend (the default backend of files in /usr/local/etc/salt/states) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched. NOTE: This can be used together with file_roots
accepting __env__ as a catch-all environment, since 2018.3.5 and
2019.2.1:
file_roots:
Branches, Environments, and Top Files
When using the GitFS backend, branches and tags will be mapped to environments using the branch/tag name as an identifier. There is one exception to this rule: the master branch is implicitly mapped to the base environment. So, for a typical base, qa, dev setup, the following branches could be used:
master
qa
dev
To map a branch other than master as the base environment, use the gitfs_base parameter.
gitfs_base: salt-base
The base can also be configured on a per-remote basis.
Use Case: Code Promotion (dev -> qa -> base)
When running a highstate, the top.sls files from all of the different branches and tags will be merged into one. This does not work well with the use case where changes are tested in development branches before being merged upstream towards production, because if the same SLS file from multiple environments is part of the highstate, it can result in non-unique state IDs, which will cause an error in the state compiler and not allow the highstate to proceed. To accomplish this use case, you should do three things:
Consider the following example top file and SLS file: top.sls {{ saltenv }}:
mystuff.sls manage_mystuff: Imagine for a moment that you need to change your mystuff.conf. So, you go to your dev branch, edit mystuff/files/mystuff.conf, and commit and push. If you have only done the first two steps recommended above, and you run your highstate, you will end up with conflicting IDs: myminion: This is because, in the absence of an explicit saltenv, all environments' top files are considered. Each environment looks at only its own top.sls, but because the mystuff.sls exists in each branch, they all get pulled into the highstate, resulting in these conflicting IDs. This is why explicitly setting your saltenv is important for this use case. There are two ways of explicitly defining the saltenv:
salt myminion state.apply saltenv=dev

A couple of notes about setting the saltenv at runtime:
If you branched qa off of master, and dev off of qa, you can merge changes from dev into qa, and then merge qa into master, to promote your changes from dev to qa to prod.

Environment Whitelist/BlacklistNew in version 2014.7.0. The gitfs_saltenv_whitelist and gitfs_saltenv_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.

gitfs_saltenv_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'

NOTE: v1.*, in this example, will match as both a glob
and a regular expression (though it will have been matched as a glob, since
globs are evaluated before regular expressions).
The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:
Authenticationpygit2New in version 2014.7.0. Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs. NOTE: The examples below make use of per-remote configuration
parameters, a feature new to Salt 2014.7.0. More information on these can be
found here.
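As a general illustration of the per-remote syntax used in the examples that follow, per-remote parameters hang off the remote's URL as a list of single-key dictionaries; the URL and values in this sketch are illustrative:

gitfs_remotes:
  - https://domain.tld/myrepo.git:
    - root: somefolder
    - base: salt-base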
HTTPSFor HTTPS repositories which require authentication, the username and password can be provided like so:

gitfs_remotes:
  - https://domain.tld/myrepo.git:    # repository URL is illustrative
    - user: git
    - password: mypassword

If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:

gitfs_remotes:
  - http://domain.tld/insecure_repo.git:
    - user: git
    - password: mypassword
    - insecure_auth: True

SSHSSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
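# URLs are illustrative
gitfs_remotes:
  - ssh://git@domain.tld/path/to/repo.git

gitfs_remotes:
  - git@domain.tld:path/to/repo.git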
Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:

gitfs_remotes:
  - ssh://git@domain.tld/path/to/repo.git:    # URL and key paths are illustrative
    - pubkey: /root/.ssh/id_rsa.pub
    - privkey: /root/.ssh/id_rsa
    - passphrase: mypassphrase

Finally, the SSH host key must be added to the known_hosts file.

NOTE: There is a known issue with public-key SSH authentication
to Microsoft Visual Studio Team Services (VSTS) with pygit2. This is due to a bug or lack of
support for VSTS in older libssh2 releases. Known working releases include
libssh2 1.7.0 and later, and known incompatible releases include 1.5.0 and
older. At the time of this writing, 1.6.0 has not been tested.
Since upgrading libssh2 would require rebuilding many other packages (curl, etc.), followed by a rebuild of libgit2 and a reinstall of pygit2, an easier workaround for systems with older libssh2 is to use GitPython with a passphraseless key for authentication.

GitPythonHTTPSFor HTTPS repositories which require authentication, the username and password can be configured in one of two ways. The first way is to include them in the URL using the format https://<user>:<password>@<url>, like so:

gitfs_remotes:
  - https://user:mypassword@domain.tld/myrepo.git    # URL is illustrative

The other way would be to configure the authentication in /var/lib/salt/.netrc:

machine domain.tld
login git
password mypassword

If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:

gitfs_remotes:
  - http://user:mypassword@domain.tld/insecure_repo.git:    # URL is illustrative
    - insecure_auth: True

SSHOnly passphrase-less SSH public key authentication is supported using GitPython. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.

gitfs_remotes:
  - ssh://git@domain.tld/path/to/repo.git    # URL is illustrative

Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).

If a key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to the ~/.ssh/config to use an alternate key for gitfs:

Host github.com
    IdentityFile /root/.ssh/github_key    # key path is illustrative

The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository. It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config:

Host github.com
    StrictHostKeyChecking no

However, this is generally regarded as insecure, and is not recommended.

Adding the SSH Host Key to the known_hosts FileTo use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:

# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:

If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to log in to the server via SSH:

$ su -
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).

It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.

Verifying the FingerprintTo verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:

$ nmap -p 22 github.com --script ssh-hostkey
Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT     STATE SERVICE
22/tcp   open  ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp   open  http
443/tcp  open  https
9418/tcp open  git

Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds

Another way is to check one's own known_hosts file, using this one-liner:

$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
WARNING: AWS tracks usage of nmap and may flag it as abuse. On AWS
hosts, the ssh-keygen method is recommended for host key
verification.
NOTE: As of OpenSSH 6.8 the SSH fingerprint is now shown
as a base64-encoded SHA256 checksum of the host key. So, instead of the
fingerprint looking like
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48, it would look like
SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
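On such versions of OpenSSH, the known_hosts one-liner shown above would therefore print the new-style fingerprint:

SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8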
Refreshing gitfs Upon PushBy default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh quicker than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process:
1. Create a reactor SLS file on the master that updates the fileserver.
2. Tie the salt/fileserver/gitfs/update event tag to that reactor SLS in the master configuration.
3. Add a post-receive hook to the repository on the git server which fires the event using salt-call (via sudo if needed).
update_fileserver:
  runner.fileserver.update
reactor:
  - 'salt/fileserver/gitfs/update':
    - /srv/reactor/update_fileserver.sls    # path to the reactor SLS above is illustrative
#!/usr/bin/env sh
salt-call event.fire_master update salt/fileserver/gitfs/update
#!/usr/bin/env sh
sudo -u root salt-call event.fire_master update salt/fileserver/gitfs/update
Cmnd_Alias SALT_GIT_HOOK = /bin/salt-call event.fire_master update salt/fileserver/gitfs/update
Defaults!SALT_GIT_HOOK !requiretty
ALL ALL=(root) NOPASSWD: SALT_GIT_HOOK

The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor. Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent. The root user name in the hook script and sudo policy should be changed to match the user under which the minion is running.

Using Git as an External Pillar SourceThe git external pillar (a.k.a. git_pillar) has been rewritten for the 2015.8.0 release. This rewrite brings with it pygit2 support (allowing for access to authenticated repositories), as well as more granular support for per-remote configuration. This configuration schema is detailed here.

Why aren't my custom modules/states/etc. syncing to my Minions?In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again. This issue is worked around in Salt 0.16.4 and newer.

MinionFS Backend WalkthroughNew in version 2014.1.0.

NOTE: This walkthrough assumes basic knowledge of Salt and
cp.push. To get up to speed, check out the Salt
Walkthrough.
Sometimes it is desirable to deploy a file located on one minion to one or more other minions. This is supported in Salt, and can be accomplished in two parts:
1. Pushing the file from the minion to the master, using the cp.push function (which requires file_recv to be enabled on the master).
2. Serving the pushed file out of the master's fileserver, using the minionfs fileserver backend.
This walkthrough will show how to use both of these features.

Enabling File PushTo set the master to accept files pushed from minions, the file_recv option in the master config file must be set to True (the default is False).

file_recv: True

NOTE: This change requires a restart of the salt-master
service.
Pushing FilesOnce this has been done, files can be pushed to the master using the cp.push function:

salt 'minion-id' cp.push /path/to/the/file

This command will store the file in a subdirectory named minions under the master's cachedir. On most masters, this path will be /var/cache/salt/master/minions. Within this directory will be one directory for each minion which has pushed a file to the master, and underneath that the full path to the file on the minion. So, for example, if a minion with an ID of dev1 pushed a file /var/log/myapp.log to the master, it would be saved to /var/cache/salt/master/minions/dev1/var/log/myapp.log.

Serving Pushed Files Using MinionFSWhile it is certainly possible to add /var/cache/salt/master/minions to the master's file_roots and serve these files, it may only be desirable to expose files pushed from certain minions. Adding /var/cache/salt/master/minions/<minion-id> for each minion that needs to be exposed can be cumbersome and prone to errors. Enter minionfs. This fileserver backend will make files pushed using cp.push available to the Salt fileserver, and provides an easy mechanism to restrict which minions' pushed files are made available.

Simple ConfigurationTo use the minionfs backend, add minionfs to the list of backends in the fileserver_backend configuration option on the master:

file_recv: True
fileserver_backend:

NOTE: minion also works here. Prior to the 2018.3.0
release, only minion would work.
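For reference, a minimal sketch of the stanza described above (the roots backend is shown alongside minionfs for comparison):

file_recv: True

fileserver_backend:
  - roots
  - minionfs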
Also, as described earlier, file_recv: True is needed to enable the master to receive files pushed from minions. As always, changes to the master configuration require a restart of the salt-master service.

Files made available via minionfs are by default located at salt://<minion-id>/path/to/file. Think back to the earlier example, in which dev1 pushed a file /var/log/myapp.log to the master. With minionfs enabled, this file would be addressable in Salt at salt://dev1/var/log/myapp.log.

If many minions have pushed to the master, this will result in many directories in the root of the Salt fileserver. For this reason, it is recommended to use the minionfs_mountpoint config option to organize these files underneath a subdirectory:

minionfs_mountpoint: salt://minionfs

Using the above mountpoint, the file in the example would be located at salt://minionfs/dev1/var/log/myapp.log.

Restricting Certain Minions' Files from Being Available Via MinionFSA whitelist and blacklist can be used to restrict the minions whose pushed files are available via minionfs. These lists can be managed using the minionfs_whitelist and minionfs_blacklist config options. See the documentation for both of these options for a detailed explanation of how to use them. A more complex configuration example, which uses both a whitelist and blacklist, can be found below:

file_recv: True

fileserver_backend:
  - roots
  - minionfs

minionfs_mountpoint: salt://minionfs

minionfs_whitelist:
  - host04          # minion IDs and patterns are illustrative
  - web*
minionfs_blacklist:
  - web21

Potential Concerns
Salt Package ManagerThe Salt Package Manager, or SPM, enables Salt formulas to be packaged to simplify distribution to Salt masters. The design of SPM was influenced by other existing packaging systems including RPM, Yum, and Pacman. [image] NOTE: The previous diagram shows each SPM component as a
different system, but this is not required. You can build packages and host
the SPM repo on a single Salt master if you'd like.
Packaging System The packaging system is used to package the state, pillar, file templates, and other files used by your formula into a single file. After a formula package is created, it is copied to the Repository System where it is made available to Salt masters. See Building SPM Packages

Repo System The Repo system stores the SPM package and metadata files and makes them available to Salt masters via http(s), ftp, or file URLs. SPM repositories can be hosted on a Salt Master, a Salt Minion, or on another system. See Distributing SPM Packages

Salt Master SPM provides Salt master settings that let you configure the URL of one or more SPM repos. You can then quickly install packages that contain entire formulas to your Salt masters using SPM. See Installing SPM Packages

Building SPM PackagesThe first step when using Salt Package Manager is to build packages for each of the formulas that you want to distribute. Packages can be built on any system where you can install Salt.

Package Build OverviewTo build a package, all state, pillar, jinja, and file templates used by your formula are assembled into a folder on the build system. These files can be cloned from a Git repository, such as those found at the saltstack-formulas organization on GitHub, or copied directly to the folder. The following diagram demonstrates a typical formula layout on the build system: [image]

In this example, all formula files are placed in a myapp-formula folder. This is the folder that is targeted by the spm build command when this package is built. Within this folder, pillar data is placed in a pillar.example file at the root, and all state, jinja, and template files are placed within a subfolder that is named after the application being packaged. State files are typically contained within a subfolder, similar to how state files are organized in the state tree. Any non-pillar files in your package that are not contained in a subfolder are placed at the root of the spm state tree. Additionally, a FORMULA file is created and placed in the root of the folder. This file contains package metadata that is used by SPM.

Package Installation OverviewWhen building packages, it is useful to know where files are installed on the Salt master. During installation, all files except pillar.example and FORMULA are copied directly to the spm state tree on the Salt master (located at /srv/spm/salt). If a pillar.example file is present in the root, it is renamed to <formula name>.sls.orig and placed in the pillar_path. [image]

NOTE: Even though the pillar data file is copied to the pillar
root, you still need to manually assign this pillar data to systems using the
pillar top file. This file can also be duplicated and renamed so the
.orig version is left intact in case you need to restore it
later.
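To recap the layout described above, the build folder for this example might look like the following (file names below the formula subfolder are illustrative):

myapp-formula/
    FORMULA
    pillar.example
    myapp/
        init.sls
        files/
            myapp.conf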
Building an SPM Formula Package
spm build /path/to/salt-packages-source/myapp-formula
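The resulting package is written to the build directory (by default /srv/spm_build; see spm_build_dir below) and is named after the name, version, and release fields in the FORMULA file, along the lines of this illustrative path:

/srv/spm_build/myapp-201506-1.spm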
Types of PackagesSPM supports different types of packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -conf contain configuration which is to be placed in the /usr/local/etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.

formulaBy default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).

reactorBy default, files from this type of package live in the /srv/spm/reactor/ directory.

confThe files in this type of package are configuration files for Salt, which normally live in the /usr/local/etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).

Technical InformationPackages are built using BZ2-compressed tarballs. By default, the package database is stored using the sqlite3 driver (see Loader Modules below). Support for these is built into Python, and so no external dependencies are needed. All other files belonging to SPM use YAML, for portability and ease of use and maintainability.

SPM-Specific Loader ModulesSPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.

Package DatabaseBy default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself. Please see the SPM Development Guide for information on creating new modules for package database management.

Package FilesBy default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on. Please see the SPM Development Guide for information on creating new modules for package file management.

Distributing SPM PackagesSPM packages can be distributed to Salt masters over HTTP(S), FTP, or through the file system. The SPM repo can be hosted on any system where you can install Salt. Salt is installed so that you can run the spm create_repo command when you update or add a package to the repo. SPM repos do not require the salt-master, salt-minion, or any other process running on the system.

NOTE: If you are hosting the SPM repo on a system where you can
not or do not want to install Salt, you can run the spm create_repo
command on the build system and then copy the packages and the generated
SPM-METADATA file to the repo. You can also install SPM files
directly on a Salt master, bypassing the repository
completely.
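Conversely, when the build host doubles as the repository host, the whole publishing flow reduces to the two commands already covered (the paths repeat the build example above):

spm build /path/to/salt-packages-source/myapp-formula
spm create_repo /srv/spm_build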
Setting up a Package RepositoryAfter packages are built, the generated SPM files are placed in the /srv/spm_build folder. Where you place the built SPM files on your repository server depends on how you plan to make them available to your Salt masters. You can share the /srv/spm_build folder on the network, or copy the files to your FTP or Web server.

Adding a Package to the RepositoryNew packages are added by simply copying the SPM file to the repo folder, and then generating repo metadata.

Generate Repo MetadataEach time you update or add an SPM package to your repository, issue an spm create_repo command:

spm create_repo /srv/spm_build

SPM generates the repository metadata for all of the packages in that directory and places it in an SPM-METADATA file at the folder root. This command is used even if repository metadata already exists in that directory.

Installing SPM PackagesSPM packages are installed to your Salt master, where they are available to Salt minions using all of Salt's package management functions.

Configuring Remote RepositoriesBefore SPM can use a repository, two things need to happen. First, the Salt master needs to know where the repository is through a configuration process. Then it needs to pull down the repository metadata.

Repository Configuration FilesRepositories are configured by adding each of them to the /usr/local/etc/salt/spm.repos.d/spm.repo file on each Salt master. This file contains the name of the repository, and the link to the repository:

my_repo:
  url: https://spm.domain.tld    # URL is illustrative

For HTTP/HTTPS Basic authorization you can define credentials:

my_repo:
  url: https://spm.domain.tld
  username: user
  password: mypassword

Beware of unauthorized access to this file; please set at least 0640 permissions for this configuration file. The URL can use http, https, ftp, or file:

my_repo:
  url: file:///srv/spm_build

Updating Local Repository MetadataAfter the repository is configured on the Salt master, repository metadata is downloaded using the spm update_repo command:

spm update_repo

NOTE: A file for each repo is placed in
/var/cache/salt/spm on the Salt master after you run the
update_repo command. If you add a repository and it does not seem to be
showing up, check this path to verify that the repository was found.
Update File RootsSPM packages are installed to the /srv/spm/salt folder on your Salt master. This path needs to be added to the file roots on your Salt master manually:

file_roots:
  base:
    - /usr/local/etc/salt/states    # existing file roots; paths illustrative
    - /srv/spm/salt

Restart the salt-master service after updating the file_roots setting.

Installing PackagesTo install a package, use the spm install command:

spm install apache

WARNING: Currently, SPM does not check to see if files are already
in place before installing them. That means that existing files will be
overwritten without warning.
Installing directly from an SPM fileYou can also install SPM packages from a local SPM file using the spm local install command:

spm local install /srv/spm/apache-201506-1.spm

An SPM repository is not required when using spm local install.

PillarsIf an installed package includes Pillar data, be sure to target the installed pillar to the necessary systems using the pillar Top file.

Removing PackagesPackages may be removed after they are installed using the spm remove command.

spm remove apache

If files have been modified, they will not be removed. Empty directories will also be removed.

SPM ConfigurationThere are a number of options that are specific to SPM. They may be configured in the master configuration file, or in SPM's own spm configuration file (normally located at /usr/local/etc/salt/spm). If configured in both places, the spm file takes precedence. In general, these values will not need to be changed from the defaults.

spm_logfileDefault: /var/log/salt/spm Where SPM logs messages.

spm_repos_configDefault: /usr/local/etc/salt/spm.repos SPM repositories are configured with this file. There is also a directory which corresponds to it, which ends in .d. For instance, if the filename is /usr/local/etc/salt/spm.repos, the directory will be /usr/local/etc/salt/spm.repos.d/.

spm_cache_dirDefault: /var/cache/salt/spm When SPM updates package repository metadata and downloads packages, they will be placed in this directory. The package database, normally called packages.db, also lives in this directory.

spm_dbDefault: /var/cache/salt/spm/packages.db The location and name of the package database. This database stores the names of all of the SPM packages installed on the system, the files that belong to them, and the metadata for those files.

spm_build_dirDefault: /srv/spm_build When packages are built, they will be placed in this directory.

spm_build_excludeDefault: ['.git'] When SPM builds a package, it normally adds all files in the formula directory to the package. Files listed here will be excluded from that package. This option requires a list to be specified.

spm_build_exclude:

Types of PackagesSPM supports different types of formula packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -conf contain configuration which is to be placed in the /usr/local/etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.

formulaBy default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).

reactorBy default, files from this type of package live in the /srv/spm/reactor/ directory.

confThe files in this type of package are configuration files for Salt, which normally live in the /usr/local/etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).

FORMULA FileIn addition to the formula itself, a FORMULA file must exist which describes the package.
An example of this file is:

name: apache
os: RedHat, Debian, Ubuntu, SUSE, FreeBSD
os_family: RedHat, Debian, Suse, FreeBSD
version: 201506
release: 2
summary: Formula for installing Apache
description: Formula for installing Apache

Required FieldsThis file must contain at least the following fields:

nameThe name of the package, as it will appear in the package filename, in the repository metadata, and in the package database. Even if the source formula has -formula in its name, this name should probably not include that. For instance, when packaging the apache-formula, the name should be set to apache.

osThe value of the os grain that this formula supports. This is used to help users know which operating systems can support this package.

os_familyThe value of the os_family grain that this formula supports. This is used to help users know which operating system families can support this package.

versionThe version of the package. While it is up to the organization that manages this package, it is suggested that this version is specified in a YYYYMM format. For instance, if this version was released in June 2015, the package version should be 201506. If multiple releases are made in a month, the release field should be used.

minimum_versionMinimum recommended version of Salt to use this formula. Not currently enforced.

releaseThis field refers primarily to a release of a version, but also to multiple versions within a month. In general, if a version has been made public, and immediate updates need to be made to it, this field should also be updated.

summaryA one-line description of the package.

descriptionA more detailed description of the package which can contain more than one line.

Optional FieldsThe following fields may also be present.

top_level_dirThis field is optional, but highly recommended. If it is not specified, the package name will be used. Formula repositories typically do not store .sls files in the root of the repository; instead they are stored in a subdirectory. For instance, an apache-formula repository would contain a directory called apache, which would contain an init.sls, plus a number of other related files. In this instance, the top_level_dir should be set to apache. Files outside the top_level_dir, such as README.rst, FORMULA, and LICENSE will not be installed. The exceptions to this rule are files that are already treated specially, such as pillar.example and _modules/.

dependenciesA comma-separated list of packages that must be installed along with this package. When this package is installed, SPM will attempt to discover and install these packages as well. If it is unable to, then it will refuse to install this package. This is useful for creating packages which tie together other packages. For instance, a package called wordpress-mariadb-apache would depend upon wordpress, mariadb, and apache.

optionalA comma-separated list of packages which are related to this package, but are neither required nor necessarily recommended. This list is displayed in an informational message when the package is installed to SPM.

recommendedA comma-separated list of optional packages that are recommended to be installed with the package. This list is displayed in an informational message when the package is installed to SPM.

filesA files section can be added, to specify a list of files to add to the SPM. Such a section might look like:

files:
  - _pillar
  - FORMULA
  - _runners
  - d|mymodule/index.rst
  - r|README.rst

When files are specified, then only those files will be added to the SPM, regardless of what other files exist in the directory.
They will also be added in the order specified, which is useful if you have a need to lay down files in a specific order. As can be seen in the example above, you may also tag files as being a specific type. This is done by prepending a filename with its type, followed by a pipe (|) character. The above example contains a document file and a readme. The available file types are:

c: config file
d: documentation file
g: ghost file (i.e. the file contents are not included in the package payload)
l: license file
r: readme file
s: SLS file
m: Salt module
The first five of these types (c, d, g, l, r) will be placed in /usr/share/salt/spm/ by default. This can be changed by setting an spm_share_dir value in your /usr/local/etc/salt/spm configuration file. The last two types (s and m) are currently ignored, but they are reserved for future use.

Pre and Post StatesIt is possible to run Salt states before and after installing a package by using pre and post states. The following sections may be declared in a FORMULA:

pre_local_state
pre_tgt_state
post_local_state
post_tgt_state
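For orientation, a complete section of this kind might look like the following sketch (the state content is illustrative; the > marker is explained next):

pre_local_state: >
  echo test > /tmp/spmtest:
    cmd:
      - run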
Sections with pre in their name are evaluated before a package is installed and sections with post are evaluated after a package is installed. local states are evaluated before tgt states. Each of these sections needs to be evaluated as text, rather than as YAML. Consider the following block: pre_local_state: > Note that this declaration uses > after pre_local_state. This is a YAML marker that marks the next multi-line block as text, including newlines. It is important to use this marker whenever declaring pre or post states, so that the text following it can be evaluated properly. local Stateslocal states are evaluated locally; this is analogous to issuing a state run using a salt-call --local command. These commands will be issued on the local machine running the spm command, whether that machine is a master or a minion. local states do not require any special arguments, but they must still use the > marker to denote that the state is evaluated as text, not a data structure. pre_local_state: > tgt Statestgt states are issued against a remote target. This is analogous to issuing a state using the salt command. As such it requires that the machine that the spm command is running on is a master. Because tgt states require that a target be specified, their code blocks are a little different. Consider the following state: pre_tgt_state: With tgt states, the state data is placed under a data section, inside the *_tgt_state code block. The target is of course specified as a tgt and you may also optionally specify a tgt_type (the default is glob). You still need to use the > marker, but this time it follows the data line, rather than the *_tgt_state line. Templating StatesThe reason that state data must be evaluated as text rather than a data structure is because that state data is first processed through the rendering engine, as it would be with a standard state run. This means that you can use Jinja or any other supported renderer inside of Salt. All formula variables are available to the renderer, so you can reference FORMULA data inside your state if you need to: pre_tgt_state: You may also declare your own variables inside the FORMULA. If SPM doesn't recognize them then it will ignore them, so there are no restrictions on variable names, outside of avoiding reserved words. By default the renderer is set to jinja|yaml. You may change this by changing the renderer setting in the FORMULA itself. Building a PackageOnce a FORMULA file has been created, it is placed into the root of the formula that is to be turned into a package. The spm build command is used to turn that formula into a package: spm build /path/to/saltstack-formulas/apache-formula The resulting file will be placed in the build directory. By default this directory is located at /srv/spm/. Loader ModulesWhen an execution module is placed in <file_roots>/_modules/ on the master, it will automatically be synced to minions, the next time a sync operation takes place. Other modules are also propagated this way: state modules can be placed in _states/, and so on. When SPM detects a file in a package which resides in one of these directories, that directory will be placed in <file_roots> instead of in the formula directory with the rest of the files. Removing PackagesPackages may be removed once they are installed using the spm remove command. spm remove apache If files have been modified, they will not be removed. Empty directories will also be removed. Technical InformationPackages are built using BZ2-compressed tarballs. 
By default, the package database is stored using the sqlite3 driver (see Loader Modules below). Support for these is built into Python, and so no external dependencies are needed. All other files belonging to SPM use YAML, for portability and ease of use and maintainability.

SPM-Specific Loader ModulesSPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.

Package DatabaseBy default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself. Please see the SPM Development Guide for information on creating new modules for package database management.

Package FilesBy default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on. Please see the SPM Development Guide for information on creating new modules for package file management.

Types of PackagesSPM supports different types of formula packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -conf contain configuration which is to be placed in the /usr/local/etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.

formulaBy default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).

reactorBy default, files from this type of package live in the /srv/spm/reactor/ directory.

confThe files in this type of package are configuration files for Salt, which normally live in the /usr/local/etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).

SPM Development GuideThis document discusses developing additional code for SPM.

SPM-Specific Loader ModulesSPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules. Each function that accepts arguments has a set of required and optional arguments. Take note that SPM will pass all arguments in, and therefore each function must accept each of those arguments. However, arguments that are marked as required are crucial to SPM's core functionality, while arguments that are marked as optional are provided as a benefit to the module, if it needs to use them.

Package DatabaseBy default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself. Modules for managing the package database are stored in the salt/spm/pkgdb/ directory. A number of functions must exist to support database management.

init()Get a database connection, and initialize the package database if necessary. This function accepts no arguments. If a database is used which supports a connection object, then that connection object is returned.
For instance, the sqlite3 module returns a connect() object from the sqlite3 library:

def myfunc():
    conn = init()
    # ... use the connection object here ...

SPM itself will not use this connection object; it will be passed in as-is to the other functions in the module. Therefore, when you set up this object, make sure to do so in a way that is easily usable throughout the module.

info()Return information for a package. This generally consists of the information that is stored in the FORMULA file in the package. The arguments that are passed in, in order, are package (required) and conn (optional). package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().

list_files()Return a list of files for an installed package. Only the filename should be returned, and no other information. The arguments that are passed in, in order, are package (required) and conn (optional). package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().

register_pkg()Register a package in the package database. Nothing is expected to be returned from this function. The arguments that are passed in, in order, are name (required), formula_def (required), and conn (optional). name is the name of the package, as specified in the FORMULA. formula_def is the contents of the FORMULA file, as a dict. conn is the connection object returned from init().

register_file()Register a file in the package database. Nothing is expected to be returned from this function. The arguments that are passed in are name (required), member (required), path (required), digest (optional), and conn (optional). name is the name of the package. member is a tarfile object for the package file. It is included, because it contains most of the information for the file. path is the location of the file on the local filesystem. digest is the SHA1 checksum of the file. conn is the connection object returned from init().

unregister_pkg()Unregister a package from the package database. This usually only involves removing the package's record from the database. Nothing is expected to be returned from this function. The arguments that are passed in, in order, are name (required) and conn (optional). name is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().

unregister_file()Unregister a file from the package database. This usually only involves removing the file's record from the database. Nothing is expected to be returned from this function. The arguments that are passed in, in order, are name (required), pkg (optional) and conn (optional). name is the path of the file, as it was installed on the filesystem. pkg is the name of the package that the file belongs to. conn is the connection object returned from init().

db_exists()Check to see whether the package database already exists. This function will return True or False. The only argument that is expected is db_, which is the path to the package database file.

Package FilesBy default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on. Modules for managing package files are stored in the salt/spm/pkgfiles/ directory. A number of functions must exist to support file management.

init()Initialize the installation location for the package files. Normally these will be directory paths, but other external destinations such as databases can be used.
For this reason, this function will return a connection object, which can be a database object. However, in the default local module, this object is a dict containing the paths. This object will be passed into all other functions.

Three directories are used for the destinations: formula_path, pillar_path, and reactor_path.

formula_path is the location of most of the files that will be installed. The default is specific to the operating system, but is normally /usr/local/etc/salt/states/.

pillar_path is the location that the pillar.example file will be installed to. The default is specific to the operating system, but is normally /usr/local/etc/salt/pillar/.

reactor_path is the location that reactor files will be installed to. The default is specific to the operating system, but is normally /srv/reactor/.

check_existing()Check the filesystem for existing files. All files for the package will be checked, and if any already exist, then this function will normally state that SPM will refuse to install the package. This function returns a list of the files that exist on the system. The arguments that are passed into this function are, in order: package (required), pkg_files (required), formula_def (required), and conn (optional). package is the name of the package that is to be installed. pkg_files is a list of the files to be checked. formula_def is a copy of the information that is stored in the FORMULA file. conn is the file connection object.

install_file()Install a single file to the destination (normally on the filesystem). This function returns the final location that the file was installed to. The arguments that are passed into this function are, in order, package (required), formula_tar (required), member (required), formula_def (required), and conn (optional). package is the name of the package that is to be installed. formula_tar is the tarfile object for the package. This is passed in so that the function can call formula_tar.extract() for the file. member is the tarfile object which represents the individual file. This may be modified as necessary, before being passed into formula_tar.extract(). formula_def is a copy of the information from the FORMULA file. conn is the file connection object.

remove_file()Remove a single file from the file system. Normally this will be little more than an os.remove(). Nothing is expected to be returned from this function. The arguments that are passed into this function are, in order, path (required) and conn (optional). path is the absolute path to the file to be removed. conn is the file connection object.

hash_file()Returns the hexdigest hash value of a file. The arguments that are passed into this function are, in order, path (required), hashobj (required), and conn (optional). path is the absolute path to the file. hashobj is a reference to hashlib.sha1(), which is used to pull the hexdigest() for the file. conn is the file connection object. This function will not generally be more complex than:

def hash_file(path, hashobj, conn=None):
    with open(path, "rb") as f:
        hashobj.update(f.read())
    return hashobj.hexdigest()

path_exists()Check to see whether the file already exists on the filesystem. Returns True or False. This function expects a path argument, which is the absolute path to the file to be checked.

path_isdir()Check to see whether the path specified is a directory. Returns True or False. This function expects a path argument, which is the absolute path to be checked.
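Putting the pieces together, here is a minimal sketch of an init() for a pkgdb module, modeled on the default sqlite3 driver described above; it assumes the loader injects __opts__, as it does for other Salt modules:

import sqlite3


def init():
    """
    Return a connection object that the other pkgdb functions
    will receive as their conn argument.
    """
    # spm_db is the SPM configuration option described earlier.
    conn = sqlite3.connect(__opts__["spm_db"], isolation_level=None)
    # ... create any missing tables here before returning ...
    return conn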
Storing Data in Other DatabasesThe SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes. SDB was added to Salt in version 2014.7.0.

SDB ConfigurationIn order to use the SDB interface, a configuration profile must be set up. To be available for master commands, such as runners, it needs to be configured in the master configuration. For modules executed on a minion, it can be set either in the minion configuration file, or as a pillar. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module, would look like:

mykeyring:
  driver: keyring
  service: system

It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.

SDB URIsSDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:

sdb://<profile>/<args>

The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.

For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:

kevinopenstack:
  driver: keyring
  service: salt.cloud.openstack.kevin

And the URI used to reference the password might look like:

sdb://kevinopenstack/password

Getting, Setting and Deleting SDB ValuesOnce an SDB driver is configured, you can use the sdb execution module to get, set and delete values from it. There are three functions that may appear in most SDB modules: get, set and delete. Getting a value requires only the SDB URI to be specified. To retrieve a value from the kevinopenstack profile above, you would use:

salt-call sdb.get sdb://kevinopenstack/password

WARNING: The vault driver previously only supported
splitting the path and key with a question mark. This has since been
deprecated in favor of using the standard / to split the path and key. The use
of the questions mark will still be supported to ensure backwards
compatibility, but please use the preferred method using /. The deprecated
approach required the full path to where the key is stored, followed by a
question mark, followed by the key to be retrieved. If you were using a
profile called myvault, you would use a URI that looks like:
salt-call sdb.get 'sdb://myvault/secret/salt?saltstack'

Instead of the above, please use the preferred URI using / instead:

salt-call sdb.get 'sdb://myvault/secret/salt/saltstack'

Setting a value uses the same URI as would be used to retrieve it, followed by the value as another argument.

salt-call sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'

Deleting values (if supported by the driver) is done pretty much the same way as getting them. Provided that you have a profile called mykvstore that uses a driver allowing you to delete values, you would delete a value as shown below:

salt-call sdb.delete 'sdb://mykvstore/foobar'

The sdb.get, sdb.set and sdb.delete functions are also available in the runner system:

salt-run sdb.get 'sdb://myvault/secret/salt/saltstack'
salt-run sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'
salt-run sdb.delete 'sdb://mykvstore/foobar'

Using SDB URIs in FilesSDB URIs can be used in both configuration files and files that are processed by the renderer system (jinja, mako, etc.). In a configuration file (such as /usr/local/etc/salt/master, /usr/local/etc/salt/minion, /usr/local/etc/salt/cloud, etc.), make an entry as usual, and set the value to the SDB URI. For instance:

mykey: sdb://myetcd/mykey

To retrieve this value using a module, the module in question must use the config.get function to retrieve configuration values. This would look something like:

mykey = __salt__["config.get"]("mykey")
Templating renderers use a similar construct. To get the mykey value from above in Jinja, you would use: {{ salt['config.get']('mykey') }}
When retrieving data from configuration files using config.get, the SDB URI need only appear in the configuration file itself. If you would like to retrieve a key directly from SDB, you would call the sdb.get function directly, using the SDB URI. For instance, in Jinja: {{ salt['sdb.get']('sdb://myetcd/mykey') }}
When writing Salt modules, it is not recommended to call sdb.get directly, as it requires the user to provide values in SDB, using a specific URI. Use config.get instead.

Writing SDB ModulesThere is currently one function that MUST exist in any SDB module (get()), one that SHOULD exist (set_()) and one that MAY exist (delete()). If using a set_() function, a __func_alias__ dictionary MUST be declared in the module as well:

__func_alias__ = {
    "set_": "set",
}
This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use. The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function. The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks). The delete() function may be provided as well, but is not required, as many sources may be read-only or restrict such operations. A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.

Running the Salt Master/Minion as an Unprivileged UserWhile the default setup runs the master and minion as the root user, some may consider it an extra measure of security to run the master as a non-root user. Keep in mind that doing so does not change the master's capability to access minions as the user they are running as. Due to this, many feel that running the master as a non-root user does not grant any real security advantage, which is why the master has remained as root by default.

NOTE: Some of Salt's operations cannot execute correctly when
the master is not running as root, specifically the pam external auth system,
as this system needs root access to check authentication.
As of Salt 0.9.10 it is possible to run Salt as a non-root user. This can be done by setting the user parameter in the master configuration file, and restarting the salt-master service. The minion has its own user parameter as well, but running the minion as an unprivileged user will keep it from making changes to things like users, installed packages, etc. unless access controls (sudo, etc.) are set up on the minion to permit the non-root user to make the needed changes. In order to allow Salt to successfully run as a non-root user, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):

/usr/local/etc/salt
/var/cache/salt
/var/log/salt
/var/run/salt
Ownership can be easily changed with chown, like so: # chown -R user /usr/local/etc/salt /var/cache/salt /var/log/salt /var/run/salt WARNING: Running either the master or minion with the
root_dir parameter specified will affect these paths, as will setting
options like pki_dir, cachedir, log_file, and other
options that normally live in the above directories.
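For reference, the configuration change described above is a single setting in the master config file (the user name here is illustrative):

user: salt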
Using cron with SaltThe Salt Minion can initiate its own highstate using the salt-call command.

$ salt-call state.apply

This will cause the minion to check in with the master and ensure it is in the correct "state".

Use cron to initiate a highstateIf you would like the Salt Minion to regularly check in with the master, you can use cron to run the salt-call command:

0 0 * * * salt-call state.apply

The above cron entry will run a highstate every day at midnight.

NOTE: When executing Salt using cron, keep in mind that the
default PATH for cron may not include the path for any scripts or commands
used by Salt, and it may be necessary to set the PATH accordingly in the
crontab:
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin
0 0 * * * salt-call state.apply

Hardening SaltThis topic contains tips you can use to secure and harden your Salt environment. How you best secure and harden your Salt environment depends heavily on how you use Salt, where you use Salt, how your team is structured, where you get data from, and what kinds of access (internal and external) you require.

IMPORTANT: The guidance here should be taken in combination with
Salt Best Practices.
IMPORTANT: Refer to the Receiving security announcements
documentation in order to stay updated and secure.
WARNING: For historical reasons, Salt requires PyCrypto as a
"lowest common denominator". However, PyCrypto is
unmaintained and best practice is to manually upgrade to use a more
maintained library such as PyCryptodome. See Issue #52674 and
Issue #54115 for more info.
General hardening tips
Salt hardening tipsWARNING: Grains can be set by users that have access to the minion
configuration files on the local system, making them less secure than other
identifiers in Salt. Avoid storing sensitive data, such as passwords or keys,
on minions. Instead, make use of Storing Static Data in the Pillar
and/or Storing Data in Other Databases.
IMPORTANT: Jinja supports a secure, sandboxed template
execution environment that Salt takes advantage of. Other text
Renderers do not support this functionality, so Salt highly recommends
usage of jinja / jinja|yaml.
Rotating keysThere are several reasons to rotate keys. One example is the exposure or compromise of a key. An easy way to rotate a key is to remove the existing keys and let the salt-master or salt-minion process generate new keys on restart.

Rotate a minion keyRun the following on the Salt minion:

salt-call saltutil.regen_keys
systemctl stop salt-minion

Run the following on the Salt master:

salt-key -d <minion-id>

Run the following on the Salt minion:

systemctl start salt-minion

Run the following on the Salt master:

salt-key -a <minion-id>

Rotate a master keyRun the following on the Salt master:

systemctl stop salt-master
rm <pki_dir>/master.{pem,pub}
systemctl start salt-master
Run the following on the Salt minion:

systemctl stop salt-minion
rm <pki_dir>/minion_master.pub
systemctl start salt-minion

Hardening of syndic setupsSyndics must be run as the same user as their syndic master process. The master of masters will include publisher ACL information in jobs sent to downstream masters via syndics. This means that any minions connected directly to a master of masters will also receive ACL information in jobs being published. For the most secure setup, only connect syndics directly to master of masters.

Security disclosure policy
gpg public key: -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBGZpxDsBEACz8yoRBXaJiifaWz3wd4FLSO18mgH7H/+0iNTbV1ZwhgGEtWTF Z31HfrsbxVgICoMgFYt8WKnc4MHZLIgDfTuCFQpf7PV/VqRBAknZwQKEAjHfrYNz Q1vy3CeKC1qcKQISEQr7VFf58sOC8GJ54jLLc2rCsg9cXI6yvUFtGwL9Qv7g/NZn rtLjc4NZIKdIvSt+/PtooQtsz0jfLMdMpMFa41keH3MknIbydBUnGj7eC8ANN/iD Re2QHAW2KfQh3Ocuh/DpJ0/dwbzXmXfMWHk30E+s31TfdLiFt1Iz5kZDF8iHrDMq x39/GGmF10y5rfq43V1Ucxm+1tl5Km0JcX6GpPUtgRpfUYAxwxfGfezt4PjYRYH2 mNxXXPLsnVTvdWPTvS0msSrcTHmnU5His38I6goXI7dLZm0saqoWi3sqEQ8TPS6/ DkLtYjpb/+dql+KrXD7erd3j8KKflIXn7AEsv+luNk6czGOKgdG9agkklzOHfEPc xOGmaFfe/1mu8HxgaCuhNAQWlk79ZC+GAm0sBZIQAQRtABgag5vWr16hVix7BPMG Fp8+caOVv6qfQ7gBmJ3/aso6OzyOxsluVxQRt94EjPTm0xuwb1aYNJOhEj9cPkjQ XBjo3KN0rwcAViR/fdUzrIV1sn2hms0v5WZ+TDtz1w0OpLZOwe23BDE1+QARAQAB tEJTYWx0IFByb2plY3QgU2VjdXJpdHkgVGVhbSA8c2FsdHByb2plY3Qtc2VjdXJp dHkucGRsQGJyb2FkY29tLmNvbT6JAlcEEwEKAEEWIQSZ7ybyZGktJJc6cAfov3an N2VKBgUCZmnEOwIbAwUJB4TOAAULCQgHAgIiAgYVCgkICwIEFgIDAQIeBwIXgAAK CRDov3anN2VKBk7rD/9QdcYdNGfk96W906HlVpb3JCwT0t9T7ElP97Ot0YN6LqMj vVQpxWYi7riUSyt1FtlCAM+hmghImzILF9LKDRCZ1H5UStI/u9T53cZpUZtVW/8R bUNBCl495UcgioIZG5DsfZ/GdBOgY+hQfdgh7HC8a8A/owCt2hHbnth970NQ+LHb /0ERLfOHRxozgPBhze8Vqf939KlteM5ljgTw/IkJJIsxJi4C6pQntSHvB3/Bq/Nw Kf3vk3XYFtVibeQODSVvc6useo+SNGV/wsK/6kvh/vfP9Trv/GMOn/89Bj2aL1PR M382E6sDB9d22p4ehVgbcOpkwHtr9DGerK9xzfG4aUjLu9qVD5Ep3gqKSsCe+P8z bpADdVCnk+Vdp3Bi+KI7buSkqfbZ0m9vCY3ei1fMiDiTTjvNliL5QCO6PvYNYiDw +LLImrQThv55ZRQsRRT7J6A94kwDoI6zcBEalv/aPws0nQHJtgWRUpmy5RcbVu9Z QBXlUpCzCB+gGaGRE1u0hCfuvkbcG1pXFFBdSUuAK4o4ktiRALVUndELic/PU1nR jwo/+j0SGw/jTwqVChUfLDZbiAQ2JICoVpZ+e1zQfsxa/yDu2e4D543SvNFHDsxh bsBeCsopzJSA0n2HAdYvPxOPoWVvZv+U8ZV3EEVOUgsO5//cRJddCgLU89Q4DrkC DQRmacQ7ARAAsz8jnpfw3DCRxdCVGiqWAtgj8r2gx5n1wJsKsgvyGQdKUtPwlX04 7w13lIDT2DwoXFozquYsTn9XkIoWbVckqo0NN/V7/QxIZIYTqRcFXouHTbXDJm5C tsvfDlnTsaplyRawPU2mhYg39/lzIt8zIjvy5zo/pElkRP5m03nG+ItrsHN6CCvf ZiRxme6EQdn+aoHh2GtICL8+c3HvQzTHYKxFn84Ibt3uNxwt+Mu6YhG9tkYMQQk5 SkYA4CYAaw2Lc/g0ee36iqw/5d79M8YcQtHhy5zzqgdEvExjFPdowV1hhFIEkNkM uqIAknXVesqLLw2hPeYmyhYQqeBKIrWmBhBKX9c0vMYkDDH3T/sSylVhH0QAXP6E WmLja3E1ov6pt6j7j/wWzC9LSMFDJI2yWCeOE1oea5D89tH6XvsGRTiog62zF/9a 77197iIa0+o91chp4iLkzDvuK8pVujPx8bNsK8jlJ+OW73NmliCVg+hecoFLNsri /TsBngFNVcu79Q1XfyvoDdR2C09ItCBEZGt6LOlq/+ATUw1aBz6L1hvLBtiR3Hfu X31YlbxdvVPjlzg6O6GXSfnokNTWv2mVXWTRIrP0RrKvMyiNPXVW7EunUuXI0Axk Xg3E5kAjKXkBXzoCTCVz/sXPLjvjI0x3Z7obgPpcTi9h5DIX6PFyK/kAEQEAAYkC PAQYAQoAJhYhBJnvJvJkaS0klzpwB+i/dqc3ZUoGBQJmacQ7AhsMBQkHhM4AAAoJ EOi/dqc3ZUoGDeAQAKbyiHA1sl0fnvcZxoZ3mWA/Qesddp7Nv2aEW8I3hAJoTVml ZvMxk8leZgsQJtSsVDNnxeyW+WCIUkhxmd95UlkTTj5mpyci1YrxAltPJ2TWioLe F2doP8Y+4iGnaV+ApzWG33sLr95z37RKVdMuGk/O5nLMeWnSPA7HHWJCxECMm0SH uI8aby8w2aBZ1kOMFB/ToEEzLBu9fk+zCzG3uH8QhdciMENVhsyBSULIrmwKglyI VQwj2dXHyekQh7QEHV+CdKMfs3ZOANwm52OwjaK0dVb3IMFGvlUf4UXXfcXwLAkj vW+Ju4kLGxVQpOlh1EBain9WOaHZGh6EGuTpjJO32PyRq8iSMNb8coeonoPFWrE/ A5dy3z5x5CZhJ6kyNwYs/9951r30Ct9qNZo9WZwp8AGQVs+J9XEYnZIWXnO1hdKs dRStPvY7VqS500t8eWqWRfCLgofZAb9Fv7SwTPQ2G7bOuTXmQKAIEkU9vzo5XACu AtR/9bC9ghNnlNuH4xiViBclrq2dif/I2ZwItpQHjuCDeMKz9kdADRI0tuNPpRHe QP1YpURW+I+PYZzNgbnwzl6Bxo7jCHFgG6BQ0ih5sVwEDhlXjSejd8CNMYEy3ElL xJLUpltwXLZSrJEXYjtJtnh0om71NXes0OyWE1cL4+U6WA9Hho6xedjk2bai =pPmt -----END PGP PUBLIC KEY BLOCK----- The SaltStack Security Team is available at saltproject-security.pdl@broadcom.com for security-related bug reports or questions. We request the disclosure of any security-related bugs or issues be reported non-publicly until such time as the issue can be resolved and a security-fix release can be prepared. 
At that time we will release the fix and make a public announcement with upgrade instructions and download locations. Security response procedureSaltStack takes security and the trust of our customers and users very seriously. Our disclosure policy is intended to resolve security issues as quickly and safely as possible.
Receiving security announcementsThe following mailing lists, per the tasks identified in our response procedure above, will receive security-relevant notifications:
In addition to the mailing lists, SaltStack also provides the following resources:
Salt ChannelsOne of the fundamental features of Salt is remote execution. Salt has two basic "channels" for communicating with minions. Each channel requires a client (minion) and a server (master) implementation to work within Salt. These pairs of channels work together to implement the specific message passing required by the channel interface. Channels use Transports for sending and receiving messages. Pub ChannelThe pub (or publish) channel is how a master sends a job (payload) to a minion. This is a basic pub/sub paradigm, which has specific targeting semantics. All data which goes across the publish system should be encrypted such that only members of the Salt cluster can decrypt the published payloads. Req ChannelThe req channel is how the minions send data to the master. This interface is primarily used for fetching files and returning job returns. The req channels have two basic interfaces when talking to the master. send is the basic method; it guarantees only that the message is encrypted so that minions attached to the same master can read it, with no guarantee of minion-master confidentiality. The crypted_transfer_decode_dictentry method, by contrast, does guarantee minion-master confidentiality. The req channel is also used by the salt CLI to publish jobs to the master. Salt TransportTransports in Salt are used by Channels to send messages between Masters, Minions, and the Salt CLI. Transports can be brokerless or brokered. There are two types of server / client implementations needed to implement a channel. Publish ServerThe publish server implements a publish / subscribe paradigm and is used by Minions to receive jobs from Masters. Publish ClientThe publish client subscribes to, and receives messages from, a Publish Server. Request ServerThe request server implements a request / reply paradigm. Every request sent by the client must receive exactly one reply. Request ClientThe request client sends requests to a Request Server and receives a reply message. ZeroMQ TransportNOTE: ZeroMQ is the current default transport within Salt
ZeroMQ is a messaging library with bindings into many languages. ZeroMQ implements a socket interface for message passing, with specific semantics for each socket type. Publish Server and ClientThe publish server and client are implemented using ZeroMQ's pub/sub sockets. By default we don't use ZeroMQ's filtering, which means that all publish jobs are sent to all minions and filtered minion side. ZeroMQ does have publisher-side filtering, which can be enabled in Salt using zmq_filtering. Request Server and ClientThe request server and client are implemented using ZeroMQ's req/rep sockets. These sockets enforce a send/recv pattern, which forces Salt to serialize messages through these socket pairs. This means that although the interface is asynchronous on the minion, we cannot send a second message until we have received the reply to the first message. TCP TransportThe tcp transport is an implementation of Salt's transport using raw tcp sockets. Since this isn't using a pre-defined messaging library, we will describe the wire protocol, message semantics, etc. in this document. The tcp transport is enabled by changing the transport setting to tcp on each Salt minion and Salt master. transport: tcp WARNING: We currently recommend that when using Syndics, all Masters and Minions use the same transport. We're investigating a report of an error when using mixed transport types at very heavy loads.
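To make the req/rep constraint described above concrete, here is a minimal standalone pyzmq sketch (plain ZeroMQ, not Salt's internal channel code); attempting a second send before receiving the reply would raise an error:

import zmq

ctx = zmq.Context()

# Server side: a REP socket gives exactly one reply per request.
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5555")

# Client side: a REQ socket enforces strict send/recv alternation.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")

req.send(b"first")
# A second req.send() here would raise zmq.error.ZMQError (EFSM):
# the REQ socket must receive a reply before it may send again.
print(rep.recv())  # b'first'
rep.send(b"ack")
print(req.recv())  # b'ack'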
Wire ProtocolThis implementation over TCP focuses on flexibility over absolute efficiency, which means we accept spending a few extra bytes of wire space in exchange for future flexibility. That said, the wire framing is quite efficient and looks like: msgpack({'head': SOMEHEADER, 'body': SOMEBODY})
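As a quick hedged illustration (using the msgpack library directly, not Salt's own code), two frames can be written back-to-back and recovered on the receiving side without any explicit length prefix:

import msgpack

# Frame two messages exactly as described: a head and a body each.
wire = msgpack.packb({"head": {"mid": 1}, "body": "first"})
wire += msgpack.packb({"head": {"mid": 2}, "body": "second"})

# A streaming Unpacker can be fed raw bytes as they arrive off the
# socket and yields each complete message as soon as it is parseable.
unpacker = msgpack.Unpacker(raw=False)
unpacker.feed(wire)
for msg in unpacker:
    print(msg["head"]["mid"], msg["body"])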
Since msgpack is an iterably parsed serialization, we can simply write the serialized payload to the wire. Within that payload we have two items, "head" and "body". Head contains header information (such as "message id"). The body contains the actual message that we are sending. With this flexible wire protocol we can implement any message semantics that we'd like, including multiplexed message passing on a single socket. TLS SupportNew in version 2016.11.1. The TCP transport allows the master/minion communication to be optionally wrapped in a TLS connection. Enabling this is simple: the master and minion need to be using the tcp transport, and the ssl option must be set. The ssl option is passed as a dict and corresponds to the options passed to the Python ssl.wrap_socket function. A simple setup looks like this. On the Salt master, add the ssl option to the master configuration file: ssl: The minimal ssl option in the minion configuration file looks like this: ssl: True
# Versions below 2016.11.4:
ssl: {}
Specific options can also be set on the minion, as defined in the Python ssl.wrap_socket function. NOTE: While setting the ssl_version is not required, we recommend it. Some older versions of Python do not support the latest TLS protocol; if this is the case for your version of Python, we strongly recommend upgrading. The ciphers specification may be omitted, but setting it is strongly recommended, as otherwise all available ciphers will be enabled.
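Putting the note above into practice, a fuller master-side ssl block might look like the following sketch; the keys mirror Python's ssl.wrap_socket parameters, and the paths and cipher string are placeholders:

ssl:
  keyfile: /etc/pki/tls/private/master.key
  certfile: /etc/pki/tls/certs/master.crt
  ssl_version: PROTOCOL_TLSv1_2
  ciphers: ECDHE-RSA-AES256-GCM-SHA384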
CryptoThe current implementation uses the same crypto as the zeromq transport. Publish Server and ClientFor the publish server and client we send messages without "message ids", which the remote end interprets as a one-way send. NOTE: As of Salt 2016.3.0, publishes using list targeting are sent only to relevant minions and not broadcast.
As of Salt 3005, publishes using pcre and glob targeting are also sent only to relevant minions and not broadcast. Other targeting types are always sent to all minions and rely on minion-side filtering. NOTE: The Salt CLI defaults to the glob targeting type, so on masters before 3005 you need to use the -L option, such as salt -L my.minion test.ping, to target specific minions without broadcasting.
Request Server and ClientFor the request server and client we send messages with a "message id". This "message id" allows us to multiplex messages across the socket. Master Tops SystemIn 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master. The old external_nodes option has been removed. The master tops system provides a pluggable and extendable replacement for it, allowing for multiple different subsystems to provide top file data. Using the master_tops option is simple: add a master_tops block to the master configuration file naming the desired subsystem (for example Cobbler, Reclass, or Varstack) along with any subsystem-specific settings. It's also possible to create custom master_tops modules. Simply place them into salt://_tops in the Salt fileserver and use the saltutil.sync_tops runner to sync them. If this runner function is not available, they can manually be placed into extmods/tops, relative to the master cachedir (in most cases the full path will be /var/cache/salt/master/extmods/tops). Custom tops modules are written like any other execution module; see the source of the existing master_tops modules for examples of fully functional ones. Below is a bare-bones example: /usr/local/etc/salt/master:

master_tops:
  customtop: True

customtop.py: (custom master_tops module)

import logging
import sys

# Define the module's virtual name
__virtualname__ = "customtop"

log = logging.getLogger(__name__)


def __virtual__():
    return __virtualname__


def top(**kwargs):
    # A master_tops module must expose a top() function; it receives the
    # minion's opts and grains in kwargs and returns top file data.
    log.debug("Calling top in customtop")
    return {"base": ["test"]}

salt minion state.show_top should then display something like:

$ salt minion state.show_top

minion
    ----------
    base:
        - test

NOTE: If a master_tops module returns top file data for
a given minion, it will be added to the states configured in the top file. It
will not replace it altogether. The 2018.3.0 release adds additional
functionality allowing a minion to treat master_tops as the single source of
truth, irrespective of the top file.
ReturnersBy default the return values of the commands sent to the Salt minions are returned to the Salt master; however, anything at all can be done with the results data. By using a Salt returner, results data can be redirected to external data stores for analysis and archival. Returners pull their configuration values from the Salt minions. Returners are only configured once, which is generally at load time. The returner interface allows the return data to be sent to any system that can receive data: a Redis server, a MongoDB server, a MySQL server, or anything else. SEE ALSO: Full list of builtin returners
Using ReturnersAll Salt commands will return the command data back to the master. Specifying returners will ensure that the data is _also_ sent to the specified returner interfaces. Specifying what returners to use is done when the command is invoked: salt '*' test.version --return redis_return This command will ensure that the redis_return returner is used. It is also possible to specify multiple returners: salt '*' test.version --return mongo_return,redis_return,cassandra_return In this scenario all three returners will be called and the data from the test.version command will be sent out to the three named returners. Writing a ReturnerReturners are Salt modules that allow the redirection of results data to targets other than the Salt Master. Returners Are Easy To Write!Writing a Salt returner is straightforward. A returner is a Python module containing at minimum a returner function. Other optional functions can be included to add support for master_job_cache, Storing Job Results in an External System, and Event Returners.
The returner function accepts a single argument, ret, the return data structure from the job; to see what this structure looks like, run: salt-call --local --metadata test.version --out=pprint

import redis

import salt.utils.json


def returner(ret):
    # Send the return data to a Redis server, serialized as JSON.
    # The host and port below are illustrative placeholders.
    serv = redis.Redis(host="redis-serv.example.com", port=6379, db="0")
    serv.sadd("%(id)s:jobs" % ret, ret["jid"])
    serv.set("%(jid)s:%(id)s" % ret, salt.utils.json.dumps(ret["return"]))
    serv.sadd("jobs", ret["jid"])
    serv.sadd(ret["jid"], ret["id"])

The above example of a returner set to send the data to a Redis server serializes the data as JSON and sets it in redis. Using Custom Returner ModulesPlace custom returners in a _returners/ directory within the file_roots specified by the master config file. Custom returners are distributed when any of the following are called:
Any custom returner which has been synced to a minion and is named the same as one of Salt's default set of returners will take the place of the default returner with the same name. Naming the ReturnerNote that a returner's default name is its filename (i.e. foo.py becomes returner foo), but that its name can be overridden by using a __virtual__ function. A good example of this can be found in the redis returner, which is named redis_return.py but is loaded as simply redis:

try:
    import redis

    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

__virtualname__ = "redis"


def __virtual__():
    # Load this module under the short name "redis", but only when the
    # redis client library is importable.
    if not HAS_REDIS:
        return False
    return __virtualname__

Master Job Cache SupportSalt's master_job_cache allows returners to be used as a pluggable replacement for the Default Job Cache. In order to do so, a returner must implement the following functions: NOTE: The code samples contained in this section were taken
from the cassandra_cql returner.
prep_jid(nocache, passed_jid=None) - return the job id to use for the run, honoring passed_jid when the master supplies one
save_load(jid, load, minions=None) - save the job's load (the job metadata) under the given jid
get_load(jid) - return the load data previously saved for the given jid
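The full implementations live in the cassandra_cql module's source. For orientation only, here is a minimal in-memory sketch of the same interface (illustrative, not the cassandra_cql code):

import datetime

_LOADS = {}


def prep_jid(nocache=False, passed_jid=None):
    # Honor the jid the master passes in; otherwise derive a fresh
    # timestamp-style jid (the usual YYYYMMDDhhmmssuuuuuu form).
    # nocache is accepted for interface compatibility and unused here.
    return passed_jid or datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")


def save_load(jid, load, minions=None):
    # Persist the job's load (its metadata) keyed by jid.
    _LOADS[jid] = {"load": load, "minions": minions or []}


def get_load(jid):
    # Return the load previously saved for this jid, or an empty dict.
    return _LOADS.get(jid, {}).get("load", {})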
External Job Cache SupportSalt's Storing Job Results in an External System extends the master_job_cache. External Job Cache support requires the following functions in addition to what is required for Master Job Cache support:
get_jid(jid) - return the per-minion return data for a specified job id
get_fun(fun) - return the most recent job results for each minion that ran the named function
get_jids() - return a list of all job ids
get_minions() - return a list of minions that have returned data
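Continuing the in-memory sketch above (again illustrative only; the return-data layout is an assumption of this sketch), the four functions could look like:

_RETURNS = {}  # jid -> {minion_id: full ret dict}


def returner(ret):
    # Record each job return, keyed by jid and then minion id.
    _RETURNS.setdefault(ret["jid"], {})[ret["id"]] = ret


def get_jid(jid):
    # Per-minion return data for one job.
    return {mid: {"return": r["return"]} for mid, r in _RETURNS.get(jid, {}).items()}


def get_fun(fun):
    # Most recent return from each minion for the named function.
    out = {}
    for jid in sorted(_RETURNS):
        for mid, r in _RETURNS[jid].items():
            if r.get("fun") == fun:
                out[mid] = r["return"]
    return out


def get_jids():
    # Every job id this returner has seen.
    return list(_RETURNS)


def get_minions():
    # Every minion id that has returned data.
    return sorted({mid for job in _RETURNS.values() for mid in job})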
Please refer to one or more of the existing returners (e.g. mysql, cassandra_cql) if you need further clarification. Event SupportAn event_return function must be added to the returner module to allow events to be logged from a master via the returner. A list of events is passed to the function by the master. The following example is modeled on the MySQL returner, in which each event is inserted into the salt_events table keyed on the event tag; the tag contains the jid and therefore is guaranteed to be unique. (_get_serv is that module's internal connection helper; the body shown here is a reconstruction of the documented example.)

import salt.utils.json


def event_return(events):
    # Insert each event into the salt_events table, keyed on the event tag.
    with _get_serv(events, commit=True) as cur:
        for event in events:
            tag = event.get("tag", "")
            data = event.get("data", "")
            sql = "INSERT INTO salt_events (tag, data, master_id) VALUES (%s, %s, %s)"
            cur.execute(sql, (tag, salt.utils.json.dumps(data), __opts__["id"]))

Testing the ReturnerThe returner, prep_jid, save_load, get_load, and event_return functions can be tested by configuring the master_job_cache and Event Returners in the master config file and submitting a job to run test.version on each minion from the master. Once you have successfully exercised the Master Job Cache functions, test the External Job Cache functions using the ret execution module.

salt-call ret.get_jids cassandra_cql --output=json
salt-call ret.get_fun cassandra_cql test.version --output=json
salt-call ret.get_minions cassandra_cql --output=json
salt-call ret.get_jid cassandra_cql 20150330121011408195 --output=json

Event ReturnersFor maximum visibility into the history of events across a Salt infrastructure, all events seen by a salt master may be logged to one or more returners. To enable event logging, set the event_return configuration option in the master config to the returner(s) which should be designated as the handler for event returns. NOTE: Not all returners support event returns. Verify a
returner has an event_return() function before using.
NOTE: On larger installations, many hundreds of events may be
generated on a busy master every second. Be certain to closely monitor the
storage of a given returner as Salt can easily overwhelm an underpowered
server with thousands of returns.
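For example, a master configuration that designates two returners as event handlers might look like this sketch (both returners must themselves be configured; event_return also accepts a single returner name):

event_return:
  - mysql
  - rawfile_json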
Full List of Returnersreturner modules
salt.returners.appoptics_returnSalt returner to return highstate stats to AppOptics Metrics To enable this returner the minion will need the AppOptics Metrics client importable on the Python path and the following values configured in the minion or master config. The AppOptics python client can be found at: https://github.com/appoptics/python-appoptics-metrics appoptics.api_token: abc12345def An example configuration that returns the total number of successes and failures for your salt highstate runs (the default) would look like this: return: appoptics appoptics.api_token: <token string here> The returner publishes the following metrics to AppOptics:
You can add a tags section to specify which tags should be attached to all metrics created by the returner. appoptics.tags: If no tags are explicitly configured, then the tag key host_hostname_alias will be set, with the minion's id grain being the value. In addition to the requested tags, for a highstate run each of these will be tagged with the key:value of state_type: highstate. In order to return metrics for state.sls runs (distinct from highstates), you can specify a list of state names to the key appoptics.sls_states like so: appoptics.sls_states: This will report success and failure counts on runs of the role_salt_master.netapi, role_redis.config, and role_smarty.dummy states in addition to highstates. This will report the same metrics as above, but for these runs the metrics will be tagged with state_type: sls and state_name set to the name of the state that was invoked, e.g. role_salt_master.netapi.
salt.returners.carbon_returnTake data from salt and "return" it into a carbon receiver Add the following configuration to the minion configuration file: carbon.host: <server ip address> carbon.port: 2003 Errors when trying to convert data to numbers may be ignored by setting carbon.skip_on_error to True: carbon.skip_on_error: True By default, data will be sent to carbon using the plaintext protocol. To use the pickle protocol, set carbon.mode to pickle: carbon.mode: pickle
Carbon settings may also be configured as: carbon: Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.carbon: To use the carbon returner, append '--return carbon' to the salt command. salt '*' test.ping --return carbon To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return carbon --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return carbon --return_kwargs '{"skip_on_error": False}'
Carbon metric names take the form: [module].[function].[minion_id].[metric path [...]].[metric name] salt.returners.cassandra_cql_returnReturn data to a cassandra server New in version 2015.5.0.
cassandra: Use the following cassandra database schema: CREATE KEYSPACE IF NOT EXISTS salt Required python modules: cassandra-driver To use the cassandra returner, append '--return cassandra_cql' to the salt command. ex: salt '*' test.ping --return cassandra_cql Note: if your Cassandra instance has not been tuned much you may benefit from altering some timeouts in cassandra.yaml like so:

# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 20000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 20000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 10000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 5000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 20000

As always, your mileage may vary and your Cassandra cluster may have different needs. SaltStack has seen situations where these timeouts can resolve some stacktraces that appear to come from the Datastax Python driver.
salt.returners.couchbase_returnSimple returner for Couchbase. Optional configuration settings are listed below, along with sane defaults. couchbase.host: 'salt' couchbase.port: 8091 couchbase.bucket: 'salt' couchbase.ttl: 86400 couchbase.password: 'password' couchbase.skip_verify_views: False To use the couchbase returner, append '--return couchbase' to the salt command. ex: salt '*' test.ping --return couchbase To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return couchbase --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return couchbase --return_kwargs '{"bucket": "another-salt"}'
All of the return data will be stored in documents as follows:

JID:
    load: load obj
    tgt_minions: list of minions targeted
    nocache: should we not cache the return data

JID/MINION_ID:
    return: return_data
    full_ret: full load of job return
salt.returners.couchdb_returnSimple returner for CouchDB. Optional configuration settings are listed below, along with sane defaults: couchdb.db: 'salt' couchdb.url: 'http://salt:5984/' Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.couchdb.db: 'salt' alternative.couchdb.url: 'http://salt:5984/' To use the couchdb returner, append --return couchdb to the salt command. Example: salt '*' test.ping --return couchdb To use the alternative configuration, append --return_config alternative to the salt command. New in version 2015.5.0. salt '*' test.ping --return couchdb --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return couchdb --return_kwargs '{"db": "another-salt"}'
On concurrent database accessAs this returner creates a couchdb document with the salt job id as document id, and as only one document with a given id can exist in a given couchdb database, it is advised for most setups that every minion be configured to write to its own database (the value of couchdb.db may be suffixed with the minion id); otherwise multi-minion targeting can lead to lost output.
salt.returners.elasticsearch_returnReturn data to an elasticsearch server for indexing.
To enable this returner the elasticsearch python client must be installed on the desired minions (all or some subset). Please see the documentation of the elasticsearch execution module for a valid connection configuration. WARNING: The index in which you wish to store documents will be created by Elasticsearch automatically if it doesn't exist yet. It is highly recommended to create predefined index templates with appropriate mapping(s) that will be used by Elasticsearch upon index creation. Otherwise you will have problems as described in #20826.
To use the returner per salt call: salt '*' test.ping --return elasticsearch In order to have the returner apply to all minions: ext_job_cache: elasticsearch
NOTE: The following options are valid for 'state.apply', 'state.sls' and 'state.highstate' functions only.
elasticsearch:
salt.returners.etcd_returnReturn data to an etcd server or cluster In order to return to an etcd server, a profile should be created in the master configuration file: my_etcd_config: It is technically possible to configure etcd without using a profile, but this is not considered to be a best practice, especially when multiple etcd servers or clusters are available. etcd.host: 127.0.0.1 etcd.port: 2379 In order to choose whether to use etcd API v2 or v3, you can put the following configuration option in the same place as your etcd configuration. This option defaults to true, meaning you will use v2 unless you specify otherwise. etcd.require_v2: True When using API v3, there are some specific options available to be configured within your etcd profile. They default to the following: etcd.encode_keys: False etcd.encode_values: True etcd.raw_keys: False etcd.raw_values: False etcd.unicode_errors: "surrogateescape" etcd.encode_keys indicates whether you want to pre-encode keys using msgpack before adding them to etcd. NOTE: If you set etcd.encode_keys to True, all
recursive functionality will no longer work. This includes tree and
ls and all other methods if you set recurse/recursive to
True. This is due to the fact that when encoding with msgpack, keys
like /salt and /salt/stack will have differing byte prefixes,
and etcd v3 searches recursively using prefixes.
etcd.encode_values indicates whether you want to pre-encode values using msgpack before adding them to etcd. This defaults to True to avoid data loss on non-string values wherever possible. etcd.raw_keys determines whether you want the raw key or a string returned. etcd.raw_values determines whether you want the raw value or a string returned. etcd.unicode_errors determines what policy to follow when there are encoding/decoding errors. Additionally, two more options must be specified in the top-level configuration in order to use the etcd returner: etcd.returner: my_etcd_config etcd.returner_root: /salt/return The etcd.returner option specifies which configuration profile to use. The etcd.returner_root option specifies the path inside etcd to use as the root of the returner system. Once the etcd options are configured, the returner may be used: CLI Example: salt '*' test.ping --return etcd
A username and password can be set: etcd.username: larry # Optional; requires etcd.password to be set etcd.password: 123pass # Optional; requires etcd.username to be set You can also set a TTL (time to live) value for the returner: etcd.ttl: 5 Authentication with username and password, and ttl, currently requires the master branch of python-etcd. You may also specify different roles for read and write operations. First, create the profiles as specified above. Then add: etcd.returner_read_profile: my_etcd_read etcd.returner_write_profile: my_etcd_write
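Pulling the options above together, a master configuration using the etcd returner might look like this sketch (the host and port are placeholders):

my_etcd_config:
  etcd.host: 127.0.0.1
  etcd.port: 2379

etcd.returner: my_etcd_config
etcd.returner_root: /salt/return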
salt.returners.highstate_returnReturn the results of a highstate (or any other state function that returns data in a compatible format) via an HTML email or HTML file. New in version 2017.7.0. Similar results can be achieved by using the smtp returner with a custom template, except that an attempt at writing such a template for the complex data structure returned by the highstate function has proven to be a challenge, not to mention that the smtp module doesn't support sending HTML mail at the moment. The main goal of this returner was to produce an easy-to-read email similar to the output of the highstate outputter used by the CLI. This returner could be very useful during scheduled executions, but could also be useful for communicating the results of a manual execution. Returner configuration is controlled in a standard fashion, either via the highstate group or an alternatively named group. salt '*' state.highstate --return highstate To use the alternative configuration, append '--return_config config-name': salt '*' state.highstate --return highstate --return_config simple An example of what the configuration might look like is sketched after this section. The report_failures, report_changes, and report_everything flags provide filtering of the results. If you want an email to be sent every time, then report_everything is your choice. If you want to be notified only when changes were successfully made, use report_changes. And report_failures will generate an email if there were failures. The configuration allows you to run a salt module function in case of success (success_function) or failure (failure_function). Any salt function, including ones defined in the _module folder of your salt repo, could be used here, and its output will be displayed under the 'extra' heading of the email. Supported values for report_format are html, json, and yaml. The latter two are typically used for debugging purposes, but could be used for applying a template at some later stage. The values for report_delivery are smtp or file. In case of file delivery the only other applicable option is file_output. In case of smtp delivery, the smtp_* options demonstrated in the sketch could be used to customize the email. As you might have noticed, the success and failure subjects contain {id} and {host} values. Any other grain name could be used. As opposed to using {{grains['id']}}, which will be rendered by the master and contain the master's values at the time of pillar generation, these will contain minion values at the time of execution.
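As an illustration of the options discussed above, a simple.highstate group might look like the following sketch. All values are placeholders, and the exact smtp_* option names shown here (smtp_server, smtp_sender, smtp_recipients, and the two subject options) are assumptions for illustration:

simple.highstate:
  report_everything: False
  report_changes: True
  report_failures: True
  failure_function: pillar.items
  success_function: pillar.items
  report_format: html
  report_delivery: smtp
  smtp_success_subject: 'success: {id} on {host}'
  smtp_failure_subject: 'failure: {id} on {host}'
  smtp_server: smtp.example.com
  smtp_sender: me@example.net
  smtp_recipients: you@example.com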
salt.returners.influxdb_returnReturn data to an influxdb server. New in version 2015.8.0. To enable this returner the minion will need the python client for influxdb installed and the following values configured in the minion or master config; these are the defaults: influxdb.db: 'salt' influxdb.user: 'salt' influxdb.password: 'salt' influxdb.host: 'localhost' influxdb.port: 8086 Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.influxdb.db: 'salt' alternative.influxdb.user: 'salt' alternative.influxdb.password: 'salt' alternative.influxdb.host: 'localhost' alternative.influxdb.port: 8086 To use the influxdb returner, append '--return influxdb' to the salt command. salt '*' test.ping --return influxdb To use the alternative configuration, append '--return_config alternative' to the salt command. salt '*' test.ping --return influxdb --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return influxdb --return_kwargs '{"db": "another-salt"}'
salt.returners.kafka_returnReturn data to a Kafka topic
To enable this returner install confluent-kafka and enable the following settings in the minion config: returner.kafka.topic: 'topic' To use the kafka returner, append --return kafka to the Salt command, e.g.: salt '*' test.ping --return kafka
salt.returners.librato_returnSalt returner to return highstate stats to Librato To enable this returner the minion will need the Librato client importable on the Python path and the following values configured in the minion or master config. The Librato python client can be found at: https://github.com/librato/python-librato librato.email: example@librato.com librato.api_token: abc12345def This returner supports multi-dimension metrics for Librato. To enable support for more metrics, the tags JSON object can be modified to include other tags. Adding EC2 Tags example: if ec2_tags:region were desired within the tags for multi-dimension metrics, the tags could be modified to include the EC2 tags. Multiple dimensions are added simply by adding more tags to the submission.

pillar_data = __salt__['pillar.raw']()
q.add(metric.name, value, tags={'Name': ret['id'], 'Region': pillar_data['ec2_tags']['Region']})
salt.returners.localThe local returner is used to test the returner interface; it just prints the return data to the console to verify that it is being passed properly. To use the local returner, append '--return local' to the salt command. ex: salt '*' test.ping --return local
salt.returners.local_cacheReturn data to local job cache
salt.returners.mattermost_returnerReturn salt data via mattermost New in version 2017.7.0. The following fields can be set in the minion conf file: mattermost.hook (required) mattermost.username (optional) mattermost.channel (optional) Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: mattermost.channel mattermost.hook mattermost.username mattermost settings may also be configured as: mattermost: To use the mattermost returner, append '--return mattermost' to the salt command. salt '*' test.ping --return mattermost To override individual configuration items, append --return_kwargs '{'key:': 'value'}' to the salt command. salt '*' test.ping --return mattermost --return_kwargs '{'channel': '#random'}'
salt.returners.memcache_returnReturn data to a memcache server To enable this returner the minion will need the python client for memcache installed and the following values configured in the minion or master config, these are the defaults. memcache.host: 'localhost' memcache.port: '11211' Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location. alternative.memcache.host: 'localhost' alternative.memcache.port: '11211' python2-memcache uses 'localhost' and '11211' as syntax on connection. To use the memcache returner, append '--return memcache' to the salt command. salt '*' test.ping --return memcache To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return memcache --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return memcache --return_kwargs '{"host": "hostname.domain.com"}'
salt.returners.mongo_future_returnReturn data to a mongodb server Required python modules: pymongo This returner will send data from the minions to a MongoDB server. MongoDB server can be configured by using host, port, db, user and password settings or by connection string URI (for pymongo > 2.3). To configure the settings for your MongoDB server, add the following lines to the minion config files: mongo.db: <database name> mongo.host: <server ip address> mongo.user: <MongoDB username> mongo.password: <MongoDB user password> mongo.port: 27017 Or single URI: mongo.uri: URI where uri is in the format: mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]] Example: mongodb://db1.example.net:27017/mydatabase mongodb://db1.example.net:27017,db2.example.net:2500/?replicaSet=test mongodb://db1.example.net:27017,db2.example.net:2500/?replicaSet=test&connectTimeoutMS=300000 More information on the URI format can be found at https://docs.mongodb.com/manual/reference/connection-string/ You can also ask for index creation on the most commonly used fields, which should greatly improve performance. Indexes are not created by default. mongo.indexes: true Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.mongo.db: <database name> alternative.mongo.host: <server ip address> alternative.mongo.user: <MongoDB username> alternative.mongo.password: <MongoDB user password> alternative.mongo.port: 27017 Or single URI: alternative.mongo.uri: URI This mongo returner is being developed to replace the default mongodb returner in the future and should not be considered API stable yet. To use the mongo returner, append '--return mongo' to the salt command. salt '*' test.ping --return mongo To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return mongo --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return mongo --return_kwargs '{"db": "another-salt"}'
salt.returners.mongo_returnReturn data to a mongodb server Required python modules: pymongo This returner will send data from the minions to a MongoDB server. To configure the settings for your MongoDB server, add the following lines to the minion config files. mongo.db: <database name> mongo.host: <server ip address> mongo.user: <MongoDB username> mongo.password: <MongoDB user password> mongo.port: 27017 Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location. alternative.mongo.db: <database name> alternative.mongo.host: <server ip address> alternative.mongo.user: <MongoDB username> alternative.mongo.password: <MongoDB user password> alternative.mongo.port: 27017 To use the mongo returner, append '--return mongo' to the salt command. salt '*' test.ping --return mongo_return To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return mongo_return --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return mongo --return_kwargs '{"db": "another-salt"}'
salt.returners.multi_returnerRead/Write multiple returners
salt.returners.mysqlReturn data to a mysql server
To enable this returner, the minion will need the python client for mysql installed and the following values configured in the minion or master config. These are the defaults: mysql.host: 'salt' mysql.user: 'salt' mysql.pass: 'salt' mysql.db: 'salt' mysql.port: 3306 SSL is optional. The defaults are set to None. If you do not want to use SSL, either exclude these options or set them to None. mysql.ssl_ca: None mysql.ssl_cert: None mysql.ssl_key: None Alternative configuration values can be used by prefacing the configuration with alternative.. Any values not found in the alternative configuration will be pulled from the default location. As stated above, SSL configuration is optional. The following ssl options are simply for illustration purposes: alternative.mysql.host: 'salt' alternative.mysql.user: 'salt' alternative.mysql.pass: 'salt' alternative.mysql.db: 'salt' alternative.mysql.port: 3306 alternative.mysql.ssl_ca: '/etc/pki/mysql/certs/localhost.pem' alternative.mysql.ssl_cert: '/etc/pki/mysql/certs/localhost.crt' alternative.mysql.ssl_key: '/etc/pki/mysql/certs/localhost.key' Should you wish the returner data to be cleaned out every so often, set keep_jobs_seconds to the number of seconds for the jobs to live in the tables. Setting it to 0 will cause the data to stay in the tables. The default for keep_jobs_seconds is 86400 (24 hours). Should you wish to archive jobs in a different table for later processing, set archive_jobs to True. Salt will create 3 archive tables
and move the contents of jids, salt_returns, and salt_events that are more than keep_jobs_seconds seconds old to these tables. Use the following mysql database schema: CREATE DATABASE `salt` Required python modules: MySQLdb To use the mysql returner, append '--return mysql' to the salt command. salt '*' test.ping --return mysql To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return mysql --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return mysql --return_kwargs '{"db": "another-salt"}'
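A sketch of the cleanup options described above, as they might appear alongside the basic mysql settings in the master config:

mysql.host: 'salt'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
keep_jobs_seconds: 86400
archive_jobs: True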
salt.returners.nagios_nrdp_returnReturn salt data to Nagios The following fields can be set in the minion conf file: nagios.url (required) nagios.token (required) nagios.service (optional) nagios.check_type (optional) Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: nagios.url nagios.token nagios.service Nagios settings may also be configured as:
To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return nagios --return_kwargs '{"service": "service-name"}'
salt.returners.odbcReturn data to an ODBC-compliant server. This driver was developed with Microsoft SQL Server in mind, but theoretically could be used to return data to any compliant ODBC database as long as there is a working ODBC driver for it on your minion platform. To enable this returner the minion will need:

On Linux:
    unixodbc (http://www.unixodbc.org)
    pyodbc (pip install pyodbc)
    The FreeTDS ODBC driver for SQL Server (http://www.freetds.org), or another compatible ODBC driver

On Windows:
    TBD
unixODBC and FreeTDS need to be configured via /etc/odbcinst.ini and /etc/odbc.ini. /etc/odbcinst.ini: [TDS] Description=TDS Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so (Note the above Driver line needs to point to the location of the FreeTDS shared library. This example is for Ubuntu 14.04.) /etc/odbc.ini: [TS] Description = "Salt Returner" Driver=TDS Server = <your server ip or fqdn> Port = 1433 Database = salt Trace = No Also you need the following values configured in the minion or master config. Configure as you see fit: returner.odbc.dsn: 'TS' returner.odbc.user: 'salt' returner.odbc.passwd: 'salt' Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.returner.odbc.dsn: 'TS' alternative.returner.odbc.user: 'salt' alternative.returner.odbc.passwd: 'salt' Running the following commands against Microsoft SQL Server in the desired database as the appropriate user should create the database tables correctly. Replace with equivalent SQL for other ODBC-compliant servers
To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return odbc --return_kwargs '{"dsn": "dsn-name"}'
salt.returners.pgjsonbReturn data to a PostgreSQL server with json data stored in Pg's jsonb data type
NOTE: There are three PostgreSQL returners. Any can function as an external master job cache, but each has different features. SaltStack recommends returners.pgjsonb if you are working with a version of PostgreSQL that has the appropriate native binary JSON types. Otherwise, review returners.postgres and returners.postgres_local_cache to see which module best suits your particular needs.
To enable this returner, the minion will need the python client for PostgreSQL installed and the following values configured in the minion or master config. These are the defaults: returner.pgjsonb.host: 'salt' returner.pgjsonb.user: 'salt' returner.pgjsonb.pass: 'salt' returner.pgjsonb.db: 'salt' returner.pgjsonb.port: 5432 SSL is optional. The defaults are set to None. If you do not want to use SSL, either exclude these options or set them to None. returner.pgjsonb.sslmode: None returner.pgjsonb.sslcert: None returner.pgjsonb.sslkey: None returner.pgjsonb.sslrootcert: None returner.pgjsonb.sslcrl: None New in version 2017.5.0. Alternative configuration values can be used by prefacing the configuration with alternative.. Any values not found in the alternative configuration will be pulled from the default location. As stated above, SSL configuration is optional. The following ssl options are simply for illustration purposes: alternative.pgjsonb.host: 'salt' alternative.pgjsonb.user: 'salt' alternative.pgjsonb.pass: 'salt' alternative.pgjsonb.db: 'salt' alternative.pgjsonb.port: 5432 alternative.pgjsonb.ssl_ca: '/etc/pki/mysql/certs/localhost.pem' alternative.pgjsonb.ssl_cert: '/etc/pki/mysql/certs/localhost.crt' alternative.pgjsonb.ssl_key: '/etc/pki/mysql/certs/localhost.key' Should you wish the returner data to be cleaned out every so often, set keep_jobs_seconds to the number of seconds for the jobs to live in the tables. Setting it to 0 or leaving it unset will cause the data to stay in the tables. Should you wish to archive jobs in a different table for later processing, set archive_jobs to True. Salt will create 3 archive tables;
and move the contents of jids, salt_returns, and salt_events that are more than keep_jobs_seconds seconds old to these tables. New in version 2019.2.0. Use the following Pg database schema: CREATE DATABASE salt Required python modules: Psycopg2 To use this returner, append '--return pgjsonb' to the salt command. salt '*' test.ping --return pgjsonb To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return pgjsonb --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return pgjsonb --return_kwargs '{"db": "another-salt"}'
salt.returners.postgresReturn data to a postgresql server NOTE: There are three PostgreSQL returners. Any can function as an external master job cache, but each has different features. SaltStack recommends returners.pgjsonb if you are working with a version of PostgreSQL that has the appropriate native binary JSON types. Otherwise, review returners.postgres and returners.postgres_local_cache to see which module best suits your particular needs.
To enable this returner the minion will need psycopg2 installed and the following values configured in the minion or master config: returner.postgres.host: 'salt' returner.postgres.user: 'salt' returner.postgres.passwd: 'salt' returner.postgres.db: 'salt' returner.postgres.port: 5432 Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.returner.postgres.host: 'salt' alternative.returner.postgres.user: 'salt' alternative.returner.postgres.passwd: 'salt' alternative.returner.postgres.db: 'salt' alternative.returner.postgres.port: 5432 Running the following commands as the postgres user should create the database correctly:

psql << EOF
CREATE ROLE salt WITH PASSWORD 'salt';
CREATE DATABASE salt WITH OWNER salt;
EOF

psql -h localhost -U salt << EOF
--
-- Table structure for table 'jids'
--
DROP TABLE IF EXISTS jids;
CREATE TABLE jids (

Required python modules: psycopg2 To use the postgres returner, append '--return postgres' to the salt command. salt '*' test.ping --return postgres To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return postgres --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return postgres --return_kwargs '{"db": "another-salt"}'
salt.returners.postgres_local_cacheUse a postgresql server for the master job cache. This helps the job cache to cope with scale. NOTE: There are three PostgreSQL returners. Any can function as an external master job cache, but each has different features. SaltStack recommends returners.pgjsonb if you are working with a version of PostgreSQL that has the appropriate native binary JSON types. Otherwise, review returners.postgres and returners.postgres_local_cache to see which module best suits your particular needs.
To enable this returner the minion will need psycopg2 installed and the following values configured in the master config: master_job_cache: postgres_local_cache master_job_cache.postgres.host: 'salt' master_job_cache.postgres.user: 'salt' master_job_cache.postgres.passwd: 'salt' master_job_cache.postgres.db: 'salt' master_job_cache.postgres.port: 5432 Running the following command as the postgres user should create the database correctly:

psql << EOF
CREATE ROLE salt WITH PASSWORD 'salt';
CREATE DATABASE salt WITH OWNER salt;
EOF

If the postgres database is on a remote host, you'll also need this command: ALTER ROLE salt WITH LOGIN; and then:

psql -h localhost -U salt << EOF
--
-- Table structure for table 'jids'
--
DROP TABLE IF EXISTS jids;
CREATE TABLE jids (

Required python modules: psycopg2
salt.returners.pushover_returnerReturn salt data via pushover (http://www.pushover.net) New in version 2016.3.0. The following fields can be set in the minion conf file: pushover.user (required) pushover.token (required) pushover.title (optional) pushover.device (optional) pushover.priority (optional) pushover.expire (optional) pushover.retry (optional) pushover.profile (optional) NOTE: The user here is your user key, not
the email address you use to log in to pushover.net.
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.pushover.user alternative.pushover.token alternative.pushover.title alternative.pushover.device alternative.pushover.priority alternative.pushover.expire alternative.pushover.retry PushOver settings may also be configured as:
To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. salt '*' test.ping --return pushover --return_kwargs '{"title": "Salt is awesome!"}'
salt.returners.rawfile_jsonTake data from salt and "return" it into a raw file containing the json, with one line per event. Add the following to the minion or master configuration file. rawfile_json.filename: <path_to_output_file> Default is /var/log/salt/events. Common use is to log all events on the master. This can generate a lot of noise, so you may wish to configure batch processing and/or configure the event_return_whitelist or event_return_blacklist to restrict the events that are written.
salt.returners.redis_returnReturn data to a redis server To enable this returner the minion will need the python client for redis installed and the following values configured in the minion or master config, these are the defaults: redis.db: '0' redis.host: 'salt' redis.port: 6379 New in version 2018.3.1: Alternatively a UNIX socket can be specified by unix_socket_path: redis.db: '0' redis.unix_socket_path: /var/run/redis/redis.sock Cluster Mode Example: redis.db: '0' redis.cluster_mode: true redis.cluster.skip_full_coverage_check: true redis.cluster.startup_nodes: Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.redis.db: '0' alternative.redis.host: 'salt' alternative.redis.port: 6379 To use the redis returner, append '--return redis' to the salt command. salt '*' test.ping --return redis To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return redis --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return redis --return_kwargs '{"db": "another-salt"}'
Redis Cluster Mode Options:
redis.cluster_mode: Whether cluster mode is enabled or not.
redis.cluster.startup_nodes: A list of host and port dictionaries pointing to cluster members; at least one is required.
redis.cluster.skip_full_coverage_check: Some cluster providers restrict certain redis commands (such as CONFIG) for enhanced security; set this option to true to skip checks that require advanced privileges. NOTE: Most cloud hosted redis clusters will require this to be set to True.
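Since startup_nodes takes a list of cluster members, a filled-in cluster configuration might look like this sketch (the hostname is a placeholder):

redis.db: '0'
redis.cluster_mode: true
redis.cluster.skip_full_coverage_check: true
redis.cluster.startup_nodes:
  - host: redis-member-1
    port: 6379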
salt.returners.sentry_returnSalt returner that reports execution results back to sentry. The returner will inspect the payload to identify errors and flag them as such. Pillar needs something like: raven: or using a dsn: raven: The raven client (https://pypi.python.org/pypi/raven) must be installed. The pillar can be hidden on sentry return by setting hide_pillar: true. The tags list (optional) specifies grains items that will be used as sentry tags, allowing tagging of events in the Sentry UI. To report only errors to sentry, set report_errors_only: true.
salt.returners.slack_returnerReturn salt data via slack New in version 2015.5.0. The following fields can be set in the minion conf file: slack.channel (required) slack.api_key (required) slack.username (required) slack.as_user (required to see the profile picture of your bot) slack.profile (optional) slack.changes (optional, only show changes and failed states) slack.only_show_failed (optional, only show failed states) slack.yaml_format (optional, format the json in yaml format) Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: slack.channel slack.api_key slack.username slack.as_user Slack settings may also be configured as: slack: To use the Slack returner, append '--return slack' to the salt command. salt '*' test.ping --return slack To use the alternative configuration, append '--return_config alternative' to the salt command. salt '*' test.ping --return slack --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return slack --return_kwargs '{"channel": "#random"}'
salt.returners.slack_webhook_returnReturn salt data via Slack using Incoming Webhooks
The following fields can be set in the minion conf file: slack_webhook.webhook (required, the webhook id. Just the part after: 'https://hooks.slack.com/services/')
slack_webhook.success_title (optional, short title for succeeded states. By default: '{id} | Succeeded')
slack_webhook.failure_title (optional, short title for failed states. By default: '{id} | Failed')
slack_webhook.author_icon (optional, a URL to a small 16x16px image. Must be of type: GIF, JPEG, PNG, or BMP)
slack_webhook.show_tasks (optional, show identifiers for changed and failed tasks. By default: False)
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: slack_webhook.webhook slack_webhook.success_title slack_webhook.failure_title slack_webhook.author_icon slack_webhook.show_tasks Slack settings may also be configured as: slack_webhook: To use the Slack returner, append '--return slack_webhook' to the salt command. salt '*' test.ping --return slack_webhook To use the alternative configuration, append '--return_config alternative' to the salt command. salt '*' test.ping --return slack_webhook --return_config alternative
salt.returners.sms_returnReturn data by SMS. New in version 2015.5.0.
To enable this returner the minion will need the python twilio library installed and the following values configured in the minion or master config: twilio.sid: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' twilio.token: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' twilio.to: '+1415XXXXXXX' twilio.from: '+1650XXXXXXX' To use the sms returner, append '--return sms' to the salt command. salt '*' test.ping --return sms
salt.returners.smtp_returnReturn salt data via email The following fields can be set in the minion conf file. Fields are optional unless noted otherwise.
Below is an example of the above settings in a Salt Minion configuration file: smtp.from: me@example.net smtp.to: you@example.com smtp.host: localhost smtp.port: 1025 Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location. For example: alternative.smtp.username: saltdev alternative.smtp.password: saltdev alternative.smtp.tls: True To use the SMTP returner, append '--return smtp' to the salt command. salt '*' test.ping --return smtp To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return smtp --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return smtp --return_kwargs '{"to": "user@domain.com"}'
An easy way to test the SMTP returner is to use the development SMTP server built into Python. The command below will start a single-threaded SMTP server that prints any email it receives to the console. python -m smtpd -n -c DebuggingServer localhost:1025 New in version 2016.11.0. It is possible to send emails with selected Salt events by configuring the event_return option for the Salt Master. For example: event_return: smtp event_return_whitelist: You also need to create an additional file /usr/local/etc/salt/states/templates/email.j2 with the email body template: act: {{act}}
id: {{id}}
result: {{result}}
This configuration enables the Salt Master to send an email when accepting or rejecting minion keys.
salt.returners.splunkSend json response data to Splunk via the HTTP Event Collector Requires the following config values to be specified in config or pillar: splunk_http_forwarder: Run a test by using salt-call test.ping --return splunk Written by Scott Pack (github.com/scottjpack)
salt.returners.sqlite3Insert minion return data into a sqlite3 database
Sqlite3 is a serverless database that lives in a single file. In order to use this returner the database file must exist, have the appropriate schema defined, and be accessible to the user that the minion process is running as. This returner requires the following values configured in the master or minion config: sqlite3.database: /usr/lib/salt/salt.db sqlite3.timeout: 5.0 Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location: alternative.sqlite3.database: /usr/lib/salt/salt.db alternative.sqlite3.timeout: 5.0 Use the following commands to create the sqlite3 database and tables:

sqlite3 /usr/lib/salt/salt.db << EOF
--
-- Table structure for table 'jids'
--
CREATE TABLE jids (

To use the sqlite returner, append '--return sqlite3' to the salt command. salt '*' test.ping --return sqlite3 To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0. salt '*' test.ping --return sqlite3 --return_config alternative To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command. New in version 2016.3.0. salt '*' test.ping --return sqlite3 --return_kwargs '{"db": "/var/lib/salt/another-salt.db"}'
salt.returners.syslog_returnReturn data to the host operating system's syslog facility. To use the syslog returner, append '--return syslog' to the salt command.

salt '*' test.ping --return syslog

The following fields can be set in the minion conf file:

syslog.level (optional, Default: LOG_INFO)
syslog.facility (optional, Default: LOG_USER)
syslog.tag (optional, Default: salt-minion)
syslog.options (list, optional, Default: [])

Available levels, facilities, and options can be found in the syslog docs for your Python version. NOTE: The default tag comes from sys.argv[0], which is usually "salt-minion" but could be different based on the specific environment.

Configuration example:

syslog.level: 'LOG_ERR'
syslog.facility: 'LOG_DAEMON'
syslog.tag: 'mysalt'
syslog.options:
  - LOG_PID

Of course you can also nest the options:

syslog:
  level: 'LOG_ERR'
  facility: 'LOG_DAEMON'
  tag: 'mysalt'
  options:
    - LOG_PID

Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:

alternative.syslog.level: 'LOG_WARN'
alternative.syslog.facility: 'LOG_NEWS'

To use the alternative configuration, append --return_config alternative to the salt command. New in version 2015.5.0.

salt '*' test.ping --return syslog --return_config alternative

To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command. New in version 2016.3.0.

salt '*' test.ping --return syslog --return_kwargs '{"level": "LOG_DEBUG"}'
NOTE: Syslog server implementations may have limits on the maximum record size received by the client. This may lead to job return data being truncated in the syslog server's logs. For example, for rsyslog on RHEL-based systems, the default maximum record size is approximately 2KB (which return data can easily exceed). This is configurable in rsyslog.conf via the $MaxMessageSize config parameter. Please consult your syslog implementation's documentation to determine how to adjust this limit.
salt.returners.telegram_returnReturn salt data via Telegram. The following fields can be set in the minion conf file:

telegram.chat_id (required)
telegram.token (required)

Telegram settings may also be configured as:

telegram:
  chat_id: 000000000
  token: 000000000:xxxxxxxxxxxxxxxxxxxxxxxx

To use the Telegram return, append '--return telegram' to the salt command.

salt '*' test.ping --return telegram
salt.returners.xmpp_returnReturn salt data via XMPP.
The following fields can be set in the minion conf file:

xmpp.jid (required)
xmpp.password (required)
xmpp.recipient (required)
xmpp.profile (optional)

Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:

alternative.xmpp.jid
alternative.xmpp.password
alternative.xmpp.recipient
alternative.xmpp.profile

XMPP settings may also be configured as:

xmpp:
  jid: user@xmpp.example.com/resource
  password: password
  recipient: user@xmpp.example.com

To use the XMPP returner, append '--return xmpp' to the salt command.

salt '*' test.ping --return xmpp

To use the alternative configuration, append '--return_config alternative' to the salt command. New in version 2015.5.0.

salt '*' test.ping --return xmpp --return_config alternative

To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command. New in version 2016.3.0.

salt '*' test.ping --return xmpp --return_kwargs '{"recipient": "someone-else@xmpp.example.com"}'
salt.returners.zabbix_returnReturn salt data to Zabbix. The following "Zabbix trapper" items, with "Type of information" set to Text, are required:

Key: salt.trap.info
Key: salt.trap.warning
Key: salt.trap.high

To use the Zabbix returner, append '--return zabbix' to the salt command. For example:

salt '*' test.ping --return zabbix
RenderersThe Salt state system operates by gathering information from common data types such as lists, dictionaries, and strings that would be familiar to any developer. Salt Renderers translate input from the format in which it is written into Python data structures. The default renderer is set in the master/minion configuration file using the renderer config option, which defaults to jinja|yaml. Two Kinds of RenderersRenderers fall into one of two categories, based on what they output: text or data. Some exceptions to this would be the pure python and gpg renderers which could be used in either capacity. Text RenderersIMPORTANT: Jinja supports a secure, sandboxed template
execution environment that Salt takes advantage of. Other text
Renderers do not support this functionality, so Salt highly recommends
usage of jinja / jinja|yaml.
A text renderer returns text. These include templating engines such as jinja, mako, and genshi, as well as the gpg renderer. The following are all text renderers:
Data RenderersA data renderer returns a Python data structure (typically a dictionary). The following are all data renderers:
Overriding the Default RendererIt can sometimes be beneficial to write an SLS file using a renderer other than the default one. This can be done by using a "shebang"-like syntax on the first line of the SLS file. Here is an example of using the pure python renderer to install a package:

#!py

def run():

This would be equivalent to the following:

include:

Composing Renderers (a.k.a. The "Render Pipeline")A render pipeline can be composed from other renderers by connecting them in a series of "pipes" (i.e. |). The renderers will be evaluated from left to right, with each renderer receiving the result of the previous renderer's execution. Take for example the default renderer (jinja|yaml). The file is evaluated first as a jinja template, and the result of that template is evaluated as a YAML document. Other render pipeline combinations include:
The following is a contrived example SLS file using the jinja|mako|yaml render pipeline: #!jinja|mako|yaml An_Example: IMPORTANT: Keep in mind that not all renderers can be used alone or
with any other renderers. For example, text renderers shouldn't be used alone
as their outputs are just strings, which still need to be parsed by another
renderer to turn them into Python data structures.
For example, it would not make sense to use yaml|jinja because the output of the yaml renderer is a Python data structure, and the jinja renderer only accepts text as input. Therefore, when combining renderers, you should know what each renderer accepts as input and what it returns as output. One way of thinking about it is that you can chain together multiple text renderers, but the pipeline must end in a data renderer. Similarly, since the text renderers in Salt don't accept data structures as input, a text renderer should usually not come after a data renderer. It's technically possible to write a renderer that takes a data structure as input and returns a string, but no such renderer is distributed with Salt.

Writing RenderersA custom renderer must be a Python module which implements a render function. This function must accept three positional arguments: data, saltenv, and sls. The first is the important one, and the second and third must be included since Salt needs to pass this info to each renderer, even though it is only used by template renderers. Renderers should be written so that the data argument can accept either strings or file-like objects as input. For example:

import mycoolmodule
from salt.ext import six


def render(data, saltenv="base", sls="", **kwargs):
    if not isinstance(data, six.string_types):
        # read file object
        data = data.read()

    return mycoolmodule.do_something(data)

Custom renderers should be placed within salt://_renderers/, so that they can be synced to minions. They are synced when any of the following are run:
Any custom renderers which have been synced to a minion, that are named the same as one of Salt's default set of renderers, will take the place of the default renderer with the same name. NOTE: Renderers can also be synced from
salt://_renderers/ to the Master using either the
saltutil.sync_renderers or saltutil.sync_all runner
function.
ExamplesThe best place to find examples of renderers is in the Salt source code. Documentation for renderers included with Salt can be found here: salt/renderers

Here is a simple YAML renderer example:

import salt.utils.yaml
from salt.utils.yamlloader import SaltYamlSafeLoader
from salt.ext import six


def render(yaml_data, saltenv="", sls="", **kws):
    if not isinstance(yaml_data, six.string_types):
        yaml_data = yaml_data.read()
    data = salt.utils.yaml.safe_load(yaml_data)
    return data if data else {}

Full List of Renderersrenderer modulesIMPORTANT: Jinja supports a secure, sandboxed template
execution environment that Salt takes advantage of. Other text
Renderers do not support this functionality, so Salt highly recommends
usage of jinja / jinja|yaml.
salt.renderers.aws_kmsRenderer that will decrypt ciphers encrypted using AWS KMS Envelope Encryption. Any key in the data to be rendered can be a urlsafe_b64encoded string, and this renderer will attempt to decrypt it before passing it off to Salt. This allows you to safely store secrets in source control, in such a way that only your Salt master can decrypt them and distribute them only to the minions that need them. The typical use-case would be to use ciphers in your pillar data, and keep the encrypted data key on your master. This way developers with appropriate AWS IAM privileges can add new secrets quickly and easily. This renderer requires the boto3 Python library.

SetupFirst, set up your AWS client. For complete instructions on configuring the AWS client, please read the boto3 configuration documentation. By default, this renderer will use the default AWS profile. You can override the profile name in salt configuration. For example, if you have a profile in your aws client configuration named "salt", you can add the following salt configuration:

aws_kms:
  profile_name: salt

The rest of these instructions assume that you will use the default profile for key generation and setup. If not, export AWS_PROFILE and set it to the desired value. Once the aws client is configured, generate a KMS customer master key and use that to generate a local data key.

# data_key=$(aws kms generate-data-key --key-id your-key-id --key-spec AES_256

To apply the renderer on a file-by-file basis add the following line to the top of any pillar with aws_kms data in it:

#!yaml|aws_kms

Now with your renderer configured, you can include your ciphers in your pillar data like so:

#!yaml|aws_kms
a-secret: gAAAAABaj5uzShPI3PEz6nL5Vhk2eEHxGXSZj8g71B84CZsVjAAtDFY1mfjNRl-1Su9YVvkUzNjI4lHCJJfXqdcTvwczBYtKy0Pa7Ri02s10Wn1tF0tbRwk=
salt.renderers.cheetahCheetah Renderer for Salt
salt.renderers.dsonDSON Renderer for Salt This renderer is intended for demonstration purposes. Information on the DSON spec can be found here. This renderer requires Dogeon (installable via pip)
salt.renderers.genshiGenshi Renderer for Salt
salt.renderers.gpgRenderer that will decrypt GPG ciphers. Any value in the SLS file can be a GPG cipher, and this renderer will decrypt it before passing it off to Salt. This allows you to safely store secrets in source control, in such a way that only your Salt master can decrypt them and distribute them only to the minions that need them. The typical use-case would be to use ciphers in your pillar data, and keep a secret key on your master. You can put the public key in source control so that developers can add new secrets quickly and easily. This renderer requires the gpg binary. No python libraries are required as of the 2015.8.0 release.

GPG HomedirThe default GPG homedir is ~/.gnupg and needs to be set using gpg --homedir. Be very careful to not forget this option. It is also important to run gpg commands as the user that owns the keys directory. If the salt-master runs as user salt, then use su - salt before running any gpg commands. In some cases, it's preferable to have gpg keys stored on removable media or in other non-standard locations. This can be done using the gpg_keydir option on the salt master. This will also require using a different path to --homedir. The --homedir argument can be configured for the current user using echo 'homedir /usr/local/etc/salt/gpgkeys' >> ~/.gnupg/gpg.conf, but this should be used with caution to avoid potential confusion.

gpg_keydir: <path/to/homedir>

GPG KeysGPG key pairs include both a public and private key. The private key is akin to a password and should be kept secure by the owner. A public key is used to encrypt data being sent to the owner of the private key. This means that the public key will be freely distributed so that others can encrypt pillar data without access to the secret key.

New Key PairTo create a new GPG key pair for encrypting data, log in to the master as root and run the following:

# mkdir -p /usr/local/etc/salt/gpgkeys
# chmod 0700 /usr/local/etc/salt/gpgkeys
# gpg --homedir /usr/local/etc/salt/gpgkeys --gen-key

Do not supply a password for the keypair and use a name that makes sense for your application. NOTE: In some situations, gpg may be starved of entropy and will take an incredibly long time to finish. Two common tools to generate (less secure) pseudo-random data are rng-tools and haveged.
The new keys can be seen and verified using --list-secret-keys:

# gpg --homedir /usr/local/etc/salt/gpgkeys --list-secret-keys
/usr/local/etc/salt/gpgkeys/pubring.kbx
---------------------------------------
sec   rsa4096 2002-05-12 [SC] [expires: 2012-05-10]
      2DC47B416EE8C3484450B450A4D44406274AF44E

In the example above, our KEY-ID is 2DC47B416EE8C3484450B450A4D44406274AF44E.

Export Public KeyTo export a public key suitable for public distribution:

# gpg --homedir /usr/local/etc/salt/gpgkeys --armor --export <KEY-ID> > exported_pubkey.asc

Import Public KeyUsers wishing to import the public key into their local keychain may run:

$ gpg --import exported_pubkey.asc

Export (Save) Private KeyThis key protects all gpg-encrypted pillar data and should be backed up to a safe and secure location. This command will generate a backup of secret keys in the /usr/local/etc/salt/gpgkeys directory to the gpgkeys.secret file:

# gpg --homedir /usr/local/etc/salt/gpgkeys --export-secret-keys --export-options export-backup -o gpgkeys.secret

Salt does not support password-protected private keys, which means this file is essentially a clear-text password (just add --armor). Fortunately, it is trivial to pass this export back to gpg to be encrypted with a symmetric key:

# gpg --homedir /usr/local/etc/salt/gpgkeys --export-secret-keys --export-options export-backup | gpg --symmetric -o gpgkeys.gpg

NOTE: In some cases, particularly when using su/sudo, gpg gets
confused and needs to be told which TTY to use; this can be done with:
export GPG_TTY=$(tty).
Import (Restore) Private KeyTo import/restore a private key, create a directory with the correct permissions and import using gpg.

# mkdir -p /usr/local/etc/salt/gpgkeys
# chmod 0700 /usr/local/etc/salt/gpgkeys
# gpg --homedir /usr/local/etc/salt/gpgkeys --import gpgkeys.secret

If the export was encrypted using a symmetric key, then decrypt first with:

# gpg --decrypt gpgkeys.gpg | gpg --homedir /usr/local/etc/salt/gpgkeys --import

Adjust trust level of imported keysIn some cases, importing existing keys may not be enough and the trust level of the key needs to be adjusted. This can be done by editing the key. The KEY-ID and the actual trust level of the key can be seen by listing the already imported keys. If the trust level is not ultimate, it needs to be changed by running

gpg --homedir /usr/local/etc/salt/gpgkeys --edit-key <KEY-ID>

This will open an interactive shell for the management of the GPG encryption key. Type trust to be able to set the trust level for the key and then select 5 (I trust ultimately). Then quit the shell by typing save.

Encrypting DataIn order to encrypt data to a recipient (salt), the public key must be imported into the local keyring. Importing the public key is described above in the Import Public Key section. To generate a cipher from a secret:

$ echo -n 'supersecret' | gpg --trust-model always -ear <KEY-ID>

To apply the renderer on a file-by-file basis add the following line to the top of any pillar with gpg data in it:

#!yaml|gpg

Now with your renderer configured, you can include your ciphers in your pillar data like so:

#!yaml|gpg
a-secret: |

Encrypted CLI Pillar DataNew in version 2016.3.0. Functions like state.highstate and state.sls allow for pillar data to be passed on the CLI.

salt myminion state.highstate pillar="{'mypillar': 'foo'}"
Starting with the 2016.3.0 release of Salt, it is now possible for this pillar data to be GPG-encrypted, and to use the GPG renderer to decrypt it.

Replacing NewlinesTo pass encrypted pillar data on the CLI, the ciphertext must have its newlines replaced with a literal backslash-n (\n), as newlines are not supported within Salt CLI arguments. There are a number of ways to do this. With awk or Perl:

# awk
ciphertext=`echo -n "supersecret" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | awk '{printf "%s\\n",$0} END {print ""}'`

# Perl
ciphertext=`echo -n "supersecret" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | perl -pe 's/\n/\\n/g'`
With Python:

import subprocess

secret, stderr = subprocess.Popen(
    ['gpg', '--armor', '--batch', '--trust-model', 'always', '--encrypt',
     '-r', 'user@domain.com'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True).communicate(input='supersecret')

if secret:
    print(secret.replace('\n', '\\n'))
else:
    raise ValueError('No ciphertext found: {0}'.format(stderr))

ciphertext=`python /path/to/script.py`

The ciphertext can be included in the CLI pillar data like so:

salt myminion state.sls secretstuff pillar_enc=gpg pillar="{secret_pillar: '$ciphertext'}"
The pillar_enc=gpg argument tells Salt that there is GPG-encrypted pillar data, so that the CLI pillar data is passed through the GPG renderer, which will iterate recursively through the CLI pillar dictionary to decrypt any encrypted values.

Encrypting the Entire CLI Pillar DictionaryIf several values need to be encrypted, it may be more convenient to encrypt the entire CLI pillar dictionary. Again, this can be done in several ways. With awk or Perl:

# awk
ciphertext=`echo -n "{'secret_a': 'CorrectHorseBatteryStaple', 'secret_b': 'GPG is fun!'}" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | awk '{printf "%s\\n",$0} END {print ""}'`
# Perl
ciphertext=`echo -n "{'secret_a': 'CorrectHorseBatteryStaple', 'secret_b': 'GPG is fun!'}" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | perl -pe 's/\n/\\n/g'`
With Python:

import subprocess

pillar_data = {'secret_a': 'CorrectHorseBatteryStaple',
               'secret_b': 'GPG is fun!'}

secret, stderr = subprocess.Popen(
    ['gpg', '--armor', '--batch', '--trust-model', 'always', '--encrypt',
     '-r', 'user@domain.com'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True).communicate(input=repr(pillar_data))

if secret:
    print(secret.replace('\n', '\\n'))
else:
    raise ValueError('No ciphertext found: {0}'.format(stderr))

ciphertext=`python /path/to/script.py`

With the entire pillar dictionary now encrypted, it can be included in the CLI pillar data like so:

salt myminion state.sls secretstuff pillar_enc=gpg pillar="$ciphertext"

ConfigurationThe default behaviour of this renderer is to log a warning if a block could not be decrypted; in other words, it just returns the ciphertext rather than the decrypted secret. This behaviour can be changed via the gpg_decrypt_must_succeed configuration option. If set to True, any gpg block that cannot be decrypted raises a SaltRenderError exception, which registers an error in _errors during rendering. In the Chlorine release, the default behavior will be reversed and an error message will be added to _errors by default.
salt.renderers.hjsonhjson renderer for Salt See the hjson documentation for more information
salt.renderers.jinjaJinja loading utils to enable a more powerful backend for jinja templates IMPORTANT: Jinja supports a secure, sandboxed template
execution environment that Salt takes advantage of. Other text
Renderers do not support this functionality, so Salt highly recommends
usage of jinja / jinja|yaml.
data = {
    'foo': True,
    'bar': 42,
    'baz': [1, 2, 3],
    'qux': 2.0,
}
yaml = {{ data|yaml }}
json = {{ data|json }}
python = {{ data|python }}
xml = {{ {'root_node': data}|xml }}
will be rendered as: yaml = {bar: 42, baz: [1, 2, 3], foo: true, qux: 2.0}
json = {"baz": [1, 2, 3], "foo": true, "bar": 42, "qux": 2.0}
python = {'bar': 42, 'baz': [1, 2, 3], 'foo': True, 'qux': 2.0}
xml = <?xml version="1.0" ?>
The yaml filter takes an optional flow_style parameter to control the default-flow-style parameter of the YAML dumper.

{{ data|yaml(False) }}

will be rendered as:

bar: 42
baz:
- 1
- 2
- 3
foo: true
qux: 2.0

Load filters

Strings and variables can be deserialized with load_yaml and load_json tags and filters. It allows one to manipulate data directly in templates, easily:

{%- set yaml_src = "{foo: it works}"|load_yaml %}
{%- set json_src = '{"bar": "for real"}'|load_json %}
Dude, {{ yaml_src.foo }} {{ json_src.bar }}!
will be rendered as:

Dude, it works for real!

Load tags

Salt implements load_yaml and load_json tags. They work like the import tag, except that the document is also deserialized. Syntaxes are {% load_yaml as [VARIABLE] %}[YOUR DATA]{% endload %} and {% load_json as [VARIABLE] %}[YOUR DATA]{% endload %} For example:

{% load_yaml as yaml_src %}
    foo: it works
{% endload %}
{% load_json as json_src %}
    {"bar": "for real"}
{% endload %}
Dude, {{ yaml_src.foo }} {{ json_src.bar }}!
will be rendered as: Dude, it works for real! Import tags External files can be imported and made available as a Jinja variable. {% import_yaml "myfile.yml" as myfile %}
{% import_json "defaults.json" as defaults %}
{% import_text "completeworksofshakespeare.txt" as poems %}
Catalog

import_* and load_* tags will automatically expose their target variable to import. This feature makes it easy to create a catalog of data to handle. For example:

# doc1.sls
{% load_yaml as var1 %}
    foo: it works
{% endload %}
{% load_yaml as var2 %}
    bar: for real
{% endload %}
# doc2.sls
{% from "doc1.sls" import var1, var2 as local2 %}
{{ var1.foo }} {{ local2.bar }}
** Escape Filters ** New in version 2017.7.0. Allows escaping of strings so they can be interpreted literally by another function. For example: regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}
will be rendered as: regex_escape = https\:\/\/example\.com\?foo\=bar\%20baz ** Set Theory Filters ** New in version 2017.7.0. Performs set math using Jinja filters. For example: unique = {{ ['foo', 'foo', 'bar'] | unique }}
will be rendered as: unique = ['foo', 'bar'] ** Salt State Parameter Format Filters ** New in version 3005. Renders a formatted multi-line YAML string from a Python dictionary. Each key/value pair in the dictionary will be added as a single-key dictionary to a list that will then be sent to the YAML formatter. For example: {% set thing_params = {
will be rendered as:

salt.renderers.jsonJSON Renderer for Salt
salt.renderers.json5JSON5 Renderer for Salt New in version 2016.3.0. JSON5 is an unofficial extension to JSON. See http://json5.org/ for more information. This renderer requires the json5 python bindings, installable via pip.
salt.renderers.makoMako Renderer for Salt. This renderer requires the Mako library. To install Mako, do the following:

pip install mako
salt.renderers.msgpack
salt.renderers.naclRenderer that will decrypt NACL ciphers Any key in the SLS file can be an NACL cipher, and this renderer will decrypt it before passing it off to Salt. This allows you to safely store secrets in source control, in such a way that only your Salt master can decrypt them and distribute them only to the minions that need them. The typical use-case would be to use ciphers in your pillar data, and keep a secret key on your master. You can put the public key in source control so that developers can add new secrets quickly and easily. This renderer requires the libsodium library binary and PyNacl >= 1.0 SetupTo set things up, first generate a keypair. On the master, run the following: # salt-call --local nacl.keygen sk_file=/root/.nacl Using encrypted pillarTo encrypt secrets, copy the public key to your local machine and run: $ salt-call --local nacl.enc datatoenc pk_file=/root/.nacl.pub To apply the renderer on a file-by-file basis add the following line to the top of any pillar with nacl encrypted data in it: #!yaml|nacl Now with your renderer configured, you can include your ciphers in your pillar data like so: #!yaml|nacl a-secret: "NACL[MRN3cc+fmdxyQbz6WMF+jq1hKdU5X5BBI7OjK+atvHo1ll+w1gZ7XyWtZVfq9gK9rQaMfkDxmidJKwE0Mw==]"
salt.renderers.passPass Renderer for Salt. pass is an encrypted on-disk password store. New in version 2017.7.0.

SetupNote: <user> needs to be replaced with the user salt-master will be running as.

1. Have the private gpg key loaded into <user>'s gpg keyring:

load_private_gpg_key:

2. Said private key's public key should have been used when encrypting pass entries that are of interest for pillar data.

3. Fetch and keep the local pass git repo up-to-date:

update_pass:

4. Install the pass binary:

pass:

Salt master configuration options:

# If the prefix is *not* set (default behavior), all template variables are
# considered for fetching secrets from Pass. Those that cannot be resolved
# to a secret are passed through.
#
# If the prefix is set, only the template variables with matching prefix are
# considered for fetching the secrets, other variables are passed through.
#
# For ease of use it is recommended to set the following options as well:
#   renderer: 'jinja|yaml|pass'
#   pass_strict_fetch: true
pass_variable_prefix: 'pass:'

# If set to 'true', error out when unable to fetch a secret for a template variable.
pass_strict_fetch: true

# Set GNUPGHOME env for Pass.
# Defaults to: ~/.gnupg
pass_gnupghome: <path>

# Set PASSWORD_STORE_DIR env for Pass.
# Defaults to: ~/.password-store
pass_dir: <path>
salt.renderers.pyPure python state rendererTo use this renderer, the SLS file should contain a function called run which returns highstate data. The highstate data is a dictionary containing identifiers as keys, and execution dictionaries as values. For example the following state declaration in YAML:

common_packages:
  pkg.installed:
    - pkgs:
      - curl
      - vim

translates to:

{'common_packages': {'pkg.installed': [{'pkgs': ['curl', 'vim']}]}}
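Putting this together, a complete pure-Python SLS equivalent to the YAML above would look like the following (a minimal sketch built directly from the dictionary shown):

#!py

def run():
    # return the same highstate dictionary the YAML form compiles to
    return {'common_packages': {'pkg.installed': [{'pkgs': ['curl', 'vim']}]}}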
In this module, a few objects are defined for you, giving access to Salt's execution functions, grains, pillar, etc. They are:
When used in a scenario where additional user-provided context data is supplied (such as with file.managed), the additional data will typically be injected into the script as one or more global variables:

/etc/http/conf/http.conf:

When writing a reactor SLS file the global context data (same as context {{ data }} for states written with Jinja + YAML) is available. The following YAML + Jinja state declaration:

{% if data['id'] == 'mysql1' %}
highstate_run:

translates to:

if data['id'] == 'mysql1':

Full Example
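As a stand-in for the full example, here is a minimal sketch of a pure-Python reactor SLS in that style (the target minion ID and state structure are illustrative assumptions, not the elided original):

#!py

def run():
    # data is injected into reactor SLS files as a global variable
    if data['id'] == 'mysql1':
        return {'highstate_run': {'local.state.apply': [{'tgt': 'mysql1'}]}}
    return {}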
salt.renderers.pydslA Python-based DSL
The pydsl renderer allows one to author salt formulas (.sls files) in pure Python using a DSL that's easy to write and easy to read. Here's an example: #!pydsl
apache = state('apache')
apache.pkg.installed()
apache.service.running()
state('/var/www/index.html') \
Notice that any Python code is allowed in the file as it's really a Python module, so you have the full power of Python at your disposal. In this module, a few objects are defined for you, including the usual (with __ added) __salt__ dictionary, __grains__, __pillar__, __opts__, __env__, and __sls__, plus a few more:

__file__
    local file system path to the sls module.
__pydsl__
    Salt PyDSL object, useful for configuring DSL behavior per sls rendering.
include
    Salt PyDSL function for creating Include declarations.
extend
    Salt PyDSL function for creating Extend declarations.
state
    Salt PyDSL function for creating ID declarations.
A state ID declaration is created with a state(id) function call. Subsequent state(id) calls with the same id return the same object. This singleton access pattern applies to all declaration objects created with the DSL.

state('example')
assert state('example') is state('example')
assert state('example').cmd is state('example').cmd
assert state('example').cmd.running is state('example').cmd.running
The id argument is optional. If omitted, a UUID will be generated and used as the id. state(id) returns an object under which you can create a State declaration object by accessing an attribute named after any state module available in Salt.

state('example').cmd
state('example').file
state('example').pkg
...
Then, a Function declaration object can be created from a State declaration object in one of the following two ways:
state('example').file.managed(...)
state('example').file('managed', ...)
With either way of creating a Function declaration object, any Function arg declarations can be passed as keyword arguments to the call. Subsequent calls of a Function declaration will update the arg declarations.

state('example').file('managed', source='salt://webserver/index.html')
state('example').file.managed(source='salt://webserver/index.html')
As a shortcut, the special name argument can also be passed as the first or second positional argument depending on the first or second way of calling the State declaration object. In the following two examples ls -la is the name argument. state('example').cmd.run('ls -la', cwd='/')
state('example').cmd('run', 'ls -la', cwd='/')
Finally, a Requisite declaration object with its Requisite references can be created by invoking one of the requisite methods (see State Requisites) on either a Function declaration object or a State declaration object. The return value of a requisite call is also a Function declaration object, so you can chain several requisite calls together. Arguments to a requisite call can be a list of State declaration objects and/or a set of keyword arguments whose names are state modules and values are IDs of ID declarations or names of Name declarations.

apache2 = state('apache2')
apache2.pkg.installed()
state('libapache2-mod-wsgi').pkg.installed()
# you can call requisites on function declaration
apache2.service.running() \
Include declaration objects can be created with the include function, while Extend declaration objects can be created with the extend function, whose arguments are just Function declaration objects. include('edit.vim', 'http.server')
extend(state('apache2').service.watch(file='/etc/httpd/httpd.conf'))
The include function, by default, causes the included sls file to be rendered as soon as the include function is called. It returns a list of rendered module objects; sls files not rendered with the pydsl renderer return None. This behavior creates no Include declarations in the resulting high state data structure.

import types
# including multiple sls returns a list.
_, mod = include('a-non-pydsl-sls', 'a-pydsl-sls')
assert _ is None
assert isinstance(mod, types.ModuleType)
# including a single sls returns a single object
mod = include('a-pydsl-sls')
# myfunc is a function that calls state(...) to create more states.
mod.myfunc(1, 2, "three")
Notice how you can define a reusable function in your pydsl sls module and then call it via the module returned by include. It's still possible to do late includes by passing the delayed=True keyword argument to include. include('edit.vim', 'http.server', delayed=True)
Above will just create an Include declaration in the rendered result, and such a call always returns None.

Special integration with the cmd stateTaking advantage of rendering a Python module, PyDSL allows you to declare a state that calls a pre-defined Python function when the state is executed.

greeting = "hello world"

def helper(something, *args, **kws):
    return greeting

The cmd.call state function takes care of calling our helper function with the arguments we specified in the states, and translates the return value of our function into a structure expected by the state system. See salt.states.cmd.call() for more information.

Implicit ordering of statesSalt states are explicitly ordered via Requisite declarations. However, with pydsl it's possible to let the renderer track the order of creation for Function declaration objects, and implicitly add require requisites for your states to enforce the ordering. This feature is enabled by setting the ordered option on __pydsl__. NOTE: this feature is only available if your minions are using Python >= 2.7.
include('some.sls.file')
A = state('A').cmd.run(cwd='/var/tmp')
extend(A)
__pydsl__.set(ordered=True)
for i in range(10):
Notice that the ordered option needs to be set after any extend calls. This is to prevent pydsl from tracking the creation of a state function that's passed to an extend call. The above example should create states from 0 to 9 that will output 0, one, two, 3, ... 9, in that order. It's important to know that pydsl tracks the creation of Function declaration objects, and automatically adds a require requisite to a Function declaration object that requires the last Function declaration object created before it in the sls file. This means later calls (perhaps to update the function's Function arg declaration) to a previously created function declaration will not change the order.
s = state() # save for later invocation
# configure it
s.cmd.run('echo at render time', cwd='/')
s.file.managed('target.txt', source='salt://source.txt')
s() # execute the two states now
Once an ID declaration is called at render time it is detached from the sls module as if it was never defined. NOTE: If implicit ordering is enabled (i.e., via __pydsl__.set(ordered=True)), then the first invocation of an ID declaration object must be done before a new Function declaration is created.
Integration with the stateconf rendererThe salt.renderers.stateconf renderer offers a few interesting features that can be leveraged by the pydsl renderer. In particular, when used with the pydsl renderer, we are interested in stateconf's sls namespacing feature (via dot-prefixed id declarations), as well as the automatic start and goal states generation. Now you can use pydsl with stateconf like this:

#!pydsl|stateconf -ps
include('xxx', 'yyy')
# ensure that states in xxx run BEFORE states in this file.
extend(state('.start').stateconf.require(stateconf='xxx::goal'))
# ensure that states in yyy run AFTER states in this file.
extend(state('.goal').stateconf.require_in(stateconf='yyy::start'))
__pydsl__.set(ordered=True)
...
-s enables the generation of a stateconf start state, and -p lets us pipe high state data rendered by pydsl to stateconf. This example shows that by require-ing or require_in-ing the included sls' start or goal states, it's possible to ensure that the included sls files can be made to execute before or after a state in the including sls file. Importing custom Python modulesTo use a custom Python module inside a PyDSL state, place the module somewhere that it can be loaded by the Salt loader, such as _modules in the /usr/local/etc/salt/states directory. Then, copy it to any minions as necessary by using saltutil.sync_modules. To import into a PyDSL SLS, one must bypass the Python importer and insert it manually by getting a reference from Python's sys.modules dictionary. For example: #!pydsl|stateconf -ps def main():
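A minimal sketch of that pattern, assuming a custom module named my_mod that has been synced with saltutil.sync_modules (the salt.loaded.ext.module namespace shown is the loader's conventional naming and should be verified for your Salt version; the setup_states helper is hypothetical):

#!pydsl|stateconf -ps

import sys

def main():
    # synced custom modules are importable from the Salt loader's namespace
    my_mod = sys.modules['salt.loaded.ext.module.my_mod']
    # use the module's helpers to build states here
    my_mod.setup_states()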
salt.renderers.pyobjectsPython renderer that includes a Pythonic Object based interface
Let's take a look at how you use pyobjects in a state file. Here's a quick example that ensures the /tmp directory is in the correct state.
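A sketch of that example (the mode and ownership values are illustrative):

#!pyobjects

File.directory('/tmp', user='root', group='root', mode='1777')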
Nice and Pythonic! By using the "shebang" syntax to switch to the pyobjects renderer we can now write our state data using an object based interface that should feel at home to python developers. You can import any module and do anything that you'd like (with caution, importing sqlalchemy, django or other large frameworks has not been tested yet). Using the pyobjects renderer is exactly the same as using the built-in Python renderer with the exception that pyobjects provides you with an object based interface for generating state data. Creating state dataPyobjects takes care of creating an object for each of the available states on the minion. Each state is represented by an object that is the CamelCase version of its name (i.e. File, Service, User, etc), and these objects expose all of their available state functions (i.e. File.managed, Service.running, etc). The name of the state is split based upon underscores (_), then each part is capitalized and finally the parts are joined back together. Some examples:
Context Managers and requisitesHow about something a little more complex. Here we're going to get into the core of how to use pyobjects to write states.
The objects that are returned from each of the magic method calls are set up to be used as Python context managers (with), and when you use them as such, all declarations made within the scope will automatically use the enclosing state as a requisite! The above could also have been written using direct requisite statements.
You can use the direct requisite statement for referencing states that are generated outside of the current file.
The last thing that direct requisites provide is the ability to select which of the SaltStack requisites you want to use (require, require_in, watch, watch_in, use & use_in) when using the requisite as a context manager.
The above example would cause all declarations inside the scope of the context manager to automatically have their watch_in set to Service("my-service"). Including and ExtendingTo include other states use the include() function. It takes one name per state to include. To extend another state use the extend() function on the name when creating a state.
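A compact sketch of the patterns described above, using hypothetical state and file names:

#!pyobjects

# context manager form: declarations in the block require the enclosing state
with Pkg.installed("nginx"):
    Service.running("nginx", enable=True)

# equivalent direct requisite form
Pkg.installed("nginx")
Service.running("nginx", enable=True, require=Pkg("nginx"))

# selecting the requisite flavor when using a context manager
with Service("my-service", "watch_in"):
    File.managed("/etc/my-service.conf", source="salt://my-service.conf")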
Importing from other state filesLike any Python project that grows you will likely reach a point where you want to create reusability in your state tree and share objects between state files, Map Data (described below) is a perfect example of this. To facilitate this Python's import statement has been augmented to allow for a special case when working with a Salt state tree. If you specify a Salt url (salt://...) as the target for importing from then the pyobjects renderer will take care of fetching the file for you, parsing it with all of the pyobjects features available and then place the requested objects in the global scope of the template being rendered. This works for all types of import statements; import X, from X import Y, and from X import Y as Z.
See the Map Data section for a more practical use. Caveats:
Salt objectIn the spirit of the object interface for creating state data pyobjects also provides a simple object interface to the __salt__ object. A function named salt exists in scope for your sls files and will dispatch its attributes to the __salt__ dictionary. The following lines are functionally equivalent:
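For example, the following two lines are equivalent ways to call a remote-execution function (cmd.run shown for illustration):

ret = salt.cmd.run('ls -la /tmp')
ret = __salt__['cmd.run']('ls -la /tmp')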
Pillar, grain, mine & config dataPyobjects provides shortcut functions for calling pillar.get, grains.get, mine.get & config.get on the __salt__ object. This helps maintain the readability of your state files. Each type of data can be accessed by a function of the same name: pillar(), grains(), mine() and config(). The following pairs of lines are functionally equivalent:
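For example (the key names are hypothetical), each pair below is equivalent:

value = pillar('foo:bar:baz', 'qux')
value = __salt__['pillar.get']('foo:bar:baz', 'qux')

value = grains('os_family')
value = __salt__['grains.get']('os_family')

value = config('my:config:key', 'default')
value = __salt__['config.get']('my:config:key', 'default')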
Opts dictionary and SLS namePyobjects provides variable access to the minion options dictionary and the SLS name that the code resides in. These variables are the same as the opts and sls variables available in the Jinja renderer. The following lines show how to access that information.
Map DataWhen building complex states or formulas you often need a way of building up a map of data based on grain data. The most common use of this is tracking the package and service name differences between distributions. To build map data using pyobjects we provide a class named Map that you use to build your own classes with inner classes for each set of values for the different grain matches.
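A sketch of such a Map class, tracking Samba package and service names across distributions (the attribute values are illustrative; the notes below on matching and priority refer to this shape):

class Samba(Map):
    merge = 'samba:lookup'
    priority = ('os_family', 'os')

    class Debian:
        server = 'samba'
        client = 'smbclient'
        service = 'samba'

    class Ubuntu:
        __grain__ = 'os'
        service = 'smbd'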
NOTE: By default, the os_family grain will be used as
the target for matching. This can be overridden by specifying a
__grain__ attribute.
If a __match__ attribute is defined for a given class, then that value will be matched against the targeted grain, otherwise the class name's value will be matched. Given the above example, the following is true:
That said, sometimes a minion may match more than one class. For instance, in the above example, Ubuntu minions will match both the Debian and Ubuntu classes, since Ubuntu has an os_family grain of Debian and an os grain of Ubuntu. As of the 2017.7.0 release, the order is dictated by the order of declaration, with classes defined later overriding earlier ones. Additionally, 2017.7.0 adds support for explicitly defining the ordering using an optional attribute called priority. Given the above example, os_family matches will be processed first, with os matches processed after. This would have the effect of assigning smbd as the service attribute on Ubuntu minions. If the priority item was not defined, or if the order of the items in the priority tuple were reversed, Ubuntu minions would have a service attribute of samba, since os_family matches would have been processed second. To use this new data you can import it into your state file and then access your attributes. To access the data in the map you simply access the attribute name on the base class that is extending Map. Assuming the above Map was in the file samba/map.sls, you could do the following.
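A sketch of that usage (state names illustrative):

#!pyobjects

from salt://samba/map.sls import Samba

with Pkg.installed("samba", names=[Samba.server, Samba.client]):
    Service.running("samba", name=Samba.service)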
salt.renderers.stateconf
This module provides a custom renderer that processes a salt file with a specified templating engine (e.g. Jinja) and a chosen data renderer (e.g. YAML), extracts arguments for any stateconf.set state, and provides the extracted arguments (including Salt-specific args, such as require, etc) as template context. The goal is to make writing reusable/configurable/parameterized salt files easier and cleaner. To use this renderer, either set it as the default renderer via the renderer option in master/minion's config, or use the shebang line in each individual sls file, like so: #!stateconf. Note, due to the way this renderer works, it must be specified as the first renderer in a render pipeline. That is, you cannot specify #!mako|yaml|stateconf, for example. Instead, you specify them as renderer arguments: #!stateconf mako . yaml. Here's a list of features enabled by this renderer.
#!stateconf yaml . jinja

.vim:

Above will be translated into:

some.file::vim:

Notice that if a state under a dot-prefixed state id has no name argument, one will be added automatically by using the state id with the leading dot stripped off. The leading dot trick can be used with extending state ids as well, so you can include relatively and extend relatively. For example, when extending a state in salt://some/other_file.sls, e.g.:

#!stateconf yaml . jinja

include:

Above will be pre-processed into:

include:
#!stateconf yaml . jinja

.sls_params:

This even works with include + extend so that you can override the default configured arguments by including the salt file and then extending the stateconf.set states that come from the included salt file. (IMPORTANT: Both the included and the extending sls files must use the stateconf renderer for this ``extend`` to work!) Notice that the end of configuration marker (# --- end of state config --) is needed to separate the use of 'stateconf.set' from the rest of your salt file. The regex that matches such a marker can be configured via the stateconf_end_marker option in your master or minion config file. Sometimes, it is desirable to set a default argument value that's based on earlier arguments in the same stateconf.set. For example, it may be tempting to do something like this:

#!stateconf yaml . jinja

.apache:

However, this won't work. It can however be worked around like so:

#!stateconf yaml . jinja

.apache:
#!stateconf yaml . jinja

include:

If the above is written in a salt file at salt://some/where.sls then it will include salt://some/apache.sls, salt://some/db/mysql.sls and salt://app/django.sls, and exclude salt://some/users.sls. Actually, it does that by rewriting the above include and exclude into:

include:
When writing sls files with this renderer, one should avoid using what can be defined in a name argument of a state as the state's id. That is, avoid writing states like this:

/path/to/some/file:

Instead, define the state id and the name argument separately for each state. Also, the ID should be something meaningful and easy to reference within a requisite (which is a good habit anyway, and such extra indirection also makes the sls file easier to modify later). Thus, the above states should be written like this:

add-some-file:

Moreover, when referencing a state from a requisite, you should reference the state's id plus the state name rather than the state name plus its name argument. (Yes, in the above example, you can actually require the file: /path/to/some/file, instead of the file: add-some-file). The reason is that this renderer will rewrite or rename state ids and their references for state ids prefixed with '.'. So, if you reference name then there's no way to reliably rewrite such a reference.

salt.renderers.toml
salt.renderers.wempy
salt.renderers.yamlUnderstanding YAMLThe default renderer for SLS files is the YAML renderer. YAML is a markup language with many powerful features. However, Salt uses a small subset of YAML that maps over very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt. Though YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files.

Rule One: IndentationYAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs.

Rule Two: ColonsPython dictionaries are, of course, simply key-value pairs. Users from other languages may recognize this data type as hashes or associative arrays. Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values are represented by a string following the colon, separated by a space:

my_key: my_value

In Python, the above maps to:

{"my_key": "my_value"}
Dictionaries can be nested:

first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict

And in Python:

{"first_level_dict_key": {"second_level_dict_key": "value_in_second_level_dict"}}
Rule Three: DashesTo represent lists of items, a single dash followed by a space is used. Multiple items are a part of the same list as a function of their having the same level of indentation.

- list_value_one
- list_value_two
- list_value_three

Lists can be the value of a key-value pair. This is quite common in Salt:

my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three

ReferenceYAML Renderer for Salt. For YAML usage information see Understanding YAML.
salt.renderers.yamlexThe YAMLEX renderer is a replacement for the YAML renderer. It's 100% YAML with a pinch of Salt magic:
Instructed aggregation within the !aggregate and the !reset tags:

#!yamlex
foo: !aggregate first
foo: !aggregate second
bar: !aggregate {first: foo}
bar: !aggregate {second: bar}
baz: !aggregate 42
qux: !aggregate default
!reset qux: !aggregate my custom data
is roughly equivalent to foo: [first, second]
bar: {first: foo, second: bar}
baz: [42]
qux: [my custom data]
Reference
USING SALTThis section describes the fundamental components and concepts that you need to understand to use Salt. GrainsSalt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties. The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems. Grain data is relatively static, though if system information changes (for example, if network settings are changed), or if a new value is assigned to a custom grain, grain data is refreshed. NOTE: Grains resolve to lowercase letters. For example,
FOO and foo target the same grain.
Listing GrainsAvailable grains can be listed by using the 'grains.ls' module:

salt '*' grains.ls

Grains data can be listed by using the 'grains.items' module:

salt '*' grains.items

Using grains in a stateTo use a grain in a state you can access it via {{ grains['key'] }}.

Grains in the Minion ConfigGrains can also be statically assigned within the minion configuration file. Just add the option grains and pass options to it:

grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15

Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes it possible to target based on specific data about your deployment, as in the example above.

Grains in /usr/local/etc/salt/grainsIf you do not want to place your custom static grains in the minion config file, you can also put them in /usr/local/etc/salt/grains on the minion. They are configured in the same way as in the above example, only without a top-level grains: key:

roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
if you specify the same grains in the minion config.
NOTE: Grains are static, and since they are not often changed,
they will need a grains refresh when they are updated. You can do this by
calling: salt minion saltutil.refresh_modules
NOTE: You can equally configure static grains for Proxy
Minions. As multiple Proxy Minion processes can run on the same machine, you
need to index the files using the Minion ID, under
/usr/local/etc/salt/proxy.d/<minion ID>/grains. For example, the
grains for the Proxy Minion router1 can be defined under
/usr/local/etc/salt/proxy.d/router1/grains, while the grains for the
Proxy Minion switch7 can be put in
/usr/local/etc/salt/proxy.d/switch7/grains.
Matching Grains in the Top FileWith correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration:

'roles:webserver':
  - match: grain
  - state0

For this example to work, you would need to have defined the grain role for the minions you wish to match.

Writing GrainsWARNING: Grains can be set by users that have access to the minion configuration files on the local system, making them less secure than other identifiers in Salt. Avoid storing sensitive data, such as passwords or keys, on minions. Instead, make use of Storing Static Data in the Pillar and/or Storing Data in Other Databases.
The grains are derived by executing all of the "public" functions (i.e. those which do not begin with an underscore) found in the modules located in Salt's core grains code, followed by those in any custom grains modules. The functions in a grains module must return a Python dictionary, where the dictionary keys are the names of grains, and each key's value is that value for that grain. Custom grains modules should be placed in a subdirectory named _grains located under the file_roots specified by the master config file. The default path would be /usr/local/etc/salt/states/_grains. Custom grains modules will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions. Grains modules are easy to write, and (as noted above) only need to return a dictionary. For example:

def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # some logic that sets grains, like
    grains['yourcustomgrain'] = True
    grains['anothergrain'] = 'somevalue'
    return grains

The name of the function does not matter and will not factor into the grains data at all; only the keys/values returned become part of the grains.

When to Use a Custom GrainBefore adding new grains, consider what the data is and remember that grains should (for the most part) be static data. If the data is something that is likely to change, consider using Pillar or an execution module instead. If it's a simple set of key/value pairs, pillar is a good match. If compiling the information requires that system commands be run, then putting this information in an execution module is likely a better idea. Good candidates for grains are data that is useful for targeting minions in the top file or the Salt CLI. The name and data structure of the grain should be designed to support many platforms, operating systems or applications. Also, keep in mind that Jinja templating in Salt supports referencing pillar data as well as invoking functions from execution modules, so there's no need to place information in grains to make it available to Jinja templates. For example:

...
...
{{ salt['module.function_name']('argument_1', 'argument_2') }}
{{ pillar['my_pillar_key'] }}
...
...
WARNING: Custom grains will not be available in the top file until
after the first highstate. To make custom grains available on a
minion's first highstate, it is recommended to use this example to
ensure that the custom grains are synced when the minion starts.
Loading Custom GrainsIf you have multiple functions specifying grains that are called from a main function, be sure to prepend grain function names with an underscore. This prevents Salt from including the loaded grains from the grain functions in the final grain data structure. For example, consider this custom grain file:

#!/usr/bin/env python
def _my_custom_grain():
    custom_grain_data = ["custom1", "custom2", "custom3"]
    return custom_grain_data


def main():
    # initialize a grains dictionary
    grains = {}
    grains["my_grains"] = _my_custom_grain()
    return grains

The output of this example renders like so:

# salt-call --local grains.items
local:

However, if you don't prepend the my_custom_grain function with an underscore, the function will be rendered twice by Salt in the items output: once for the my_custom_grain call itself, and again when it is called in the main function:

# salt-call --local grains.items
local:
----------

PrecedenceCore grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows:
Each successive evaluation overrides the previous ones, so any grains defined by custom grains modules synced to minions that have the same name as a core grain will override that core grain. Similarly, grains from /usr/local/etc/salt/minion override both core grains and custom grain modules, and grains in _grains will override any grains of the same name. For custom grains, if the function takes an argument grains, then the previously rendered grains will be passed in. Because the rest of the grains could be rendered in any order, the only grains that can be relied upon to be passed in are core grains. This was added in the 2019.2.0 release. Examples of GrainsThe core module in the grains package is where the main grains are loaded by the Salt minion and provides the principal example of how to write grains: salt/grains/core.py Syncing GrainsSyncing grains can be done a number of ways. They are automatically synced when state.highstate is called, or (as noted above) the grains can be manually synced and reloaded by calling the saltutil.sync_grains or saltutil.sync_all functions. NOTE: When the grains_cache is set to False, the grains
dictionary is built and stored in memory on the minion. Every time the minion
restarts or saltutil.refresh_grains is run, the grain dictionary is
rebuilt from scratch.
Storing Static Data in the PillarPillar is an interface for Salt designed to offer global values that can be distributed to minions. Pillar data is managed in a similar way to the Salt State Tree. Pillar was added to Salt in version 0.9.8. NOTE: Storing sensitive data
Pillar data is compiled on the master. Additionally, pillar data for a given minion is only accessible by the minion for which it is targeted in the pillar configuration. This makes pillar useful for storing sensitive data specific to a particular minion.

Declaring the Master PillarThe Salt Master server maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server. Like file_roots, the pillar_roots option maps environments to directories. The pillar data is then mapped to minions based on matchers in a top file which is laid out in the same way as the state top file. Salt pillars can use the same matcher types as the standard top file. conf_master:pillar_roots is configured just like file_roots. For example:

pillar_roots:
  base:
    - /usr/local/etc/salt/pillar

This example configuration declares that the base environment will be located in the /usr/local/etc/salt/pillar directory. It must not be in a subdirectory of the state tree. The top file used matches the name of the top file used for States, and has the same structure:

/usr/local/etc/salt/pillar/top.sls

base:
  '*':
    - packages

In the above top file, it is declared that in the base environment, the glob matching all minions will have the pillar data found in the packages pillar available to it. Assuming the pillar_roots value of /usr/local/etc/salt/pillar taken from above, the packages pillar would be located at /usr/local/etc/salt/pillar/packages.sls. Any number of matchers can be added to the base environment. For example, here is an expanded version of the Pillar top file stated above:

/usr/local/etc/salt/pillar/top.sls:

base:
  '*':
    - packages
  'web*':
    - vim

In this expanded top file, minions that match web* will have access to the /usr/local/etc/salt/pillar/packages.sls file, as well as the /usr/local/etc/salt/pillar/vim.sls file. Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties. Here is an example using the grains matcher to target pillars to minions by their os grain:

dev:
  'os:Debian':
    - match: grain
    - servers

Pillar definitions can also take a keyword argument ignore_missing. When the value of ignore_missing is True, all errors for missing pillar files are ignored. The default value for ignore_missing is False. Here is an example using the ignore_missing keyword parameter to ignore errors for missing pillar files:

base:
  '*':
    - servers
    - systems:
        ignore_missing: True

Assuming that the pillar servers exists in the fileserver backend and the pillar systems doesn't, all pillar data from the servers pillar is delivered to minions and no error for the missing pillar systems is noted under the key _errors in the pillar data delivered to minions. Should the ignore_missing keyword parameter have the value False, an error for the missing pillar systems would produce the value Specified SLS 'systems' in environment 'base' is not available on the salt master under the key _errors in the pillar data delivered to minions.

/usr/local/etc/salt/pillar/packages.sls

{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
company: Foo Industries
IMPORTANT: See Is Targeting using Grain Data Secure? for
important security information.
The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to the value of git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeting to them via a top file will have the key of company with a value of Foo Industries. Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dictionary: apache: git: Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary. Note that you cannot just list key/value-information in top.sls. Instead, target a minion to a pillar file and then list the keys and values in the pillar. Here is an example top file that illustrates this point: base: And the actual pillar file at '/usr/local/etc/salt/pillar/common_pillar.sls': foo: bar boo: baz NOTE: When working with multiple pillar environments, assuming
that each pillar environment has its own top file, the jinja placeholder {{
saltenv }} can be used in place of the environment name:
{{ saltenv }}:
  '*':
    - common_pillar
Yes, this is {{ saltenv }}, and not {{ pillarenv }}. The reason is that the Pillar top files are parsed using some of the same code which parses top files when running states, so the pillar environment takes the place of {{ saltenv }} in the jinja context. Dynamic Pillar EnvironmentsIf environment __env__ is specified in pillar_roots, all environments that are not explicitly specified in pillar_roots will map to the directories from __env__. This allows one to use dynamic git branch based environments for state/pillar files with the same file-based pillar applying to all environments. For example: pillar_roots: New in version 2017.7.5, 2018.3.1. Taking it one step further, __env__ can also be used in the pillar_roots filesystem path. It will be replaced with the actual pillarenv and searched for Pillar data to provide to the minion. Note this substitution ONLY occurs for the __env__ environment. For instance, this configuration: pillar_roots: is equivalent to this static configuration: pillar_roots: New in version 3005. Pillar Namespace FlatteningThe separate pillar SLS files all merge down into a single dictionary of key-value pairs. When the same key is defined in multiple SLS files, this can result in unexpected behavior if care is not taken as to how the pillar SLS files are laid out. For example, given a top.sls containing the following: base: with packages.sls containing: bind: bind9 and services.sls containing: bind: named Then a request for the bind pillar key will only return named. The bind9 value will be lost, because services.sls was evaluated later. NOTE: Pillar files are applied in the order they are listed in
the top file. Therefore conflicting keys will be overwritten in a 'last one
wins' manner! For example, in the above scenario conflicting key values in
services will overwrite those in packages because it's at the
bottom of the list.
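For reference, a minimal sketch of the layout this flattening example describes, with the snippets above expanded into their indented form, would be:
/usr/local/etc/salt/pillar/top.sls:
base:
  '*':
    - packages
    - services
packages.sls:
bind: bind9
services.sls:
bind: named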
It can be better to structure your pillar files with more hierarchy. For example the package.sls file could be configured like so: packages: This would make the packages pillar key a nested dictionary containing a bind key. Pillar Dictionary MergingIf the same pillar key is defined in multiple pillar SLS files, and the keys in both files refer to nested dictionaries, then the content from these dictionaries will be recursively merged. For example, keeping the top.sls the same, assume the following modifications to the pillar SLS files: packages.sls: bind: services.sls: bind: The resulting pillar dictionary will be: $ salt-call pillar.get bind local: Since both pillar SLS files contained a bind key which contained a nested dictionary, the pillar dictionary's bind key contains the combined contents of both SLS files' bind keys. Including Other PillarsNew in version 0.16.0. Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form simply includes the additional pillar as if it were part of the same file: include: The full include form allows two additional options -- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar: include: With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls. In-Memory Pillar Data vs. On-Demand Pillar DataSince compiling pillar data is computationally expensive, the minion will maintain a copy of the pillar data in memory to avoid needing to ask the master to recompile and send it a copy of the pillar data each time pillar data is requested. This in-memory pillar data is what is returned by the pillar.item, pillar.get, and pillar.raw functions. Also, for those writing custom execution modules, or contributing to Salt's existing execution modules, the in-memory pillar data is available as the __pillar__ dunder dictionary. The in-memory pillar data is generated on minion start, and can be refreshed using the saltutil.refresh_pillar function: salt '*' saltutil.refresh_pillar This function triggers the minion to asynchronously refresh the in-memory pillar data and will always return None. In contrast to in-memory pillar data, certain actions trigger pillar data to be compiled to ensure that the most up-to-date pillar data is available. These actions include:
Performing these actions will not refresh the in-memory pillar data. So, if pillar data is modified, and then states are run, the states will see the updated pillar data, but pillar.item, pillar.get, and pillar.raw will not see this data unless refreshed using saltutil.refresh_pillar. If you are using the Pillar Cache and have set pillar_cache to True, the pillar cache can be updated either when you run saltutil.refresh_pillar, or using the pillar runner function pillar.clear_pillar_cache: salt-run pillar.clear_pillar_cache 'minion' The pillar cache will not be updated when running pillar.items or a state, for example. If you are using a Salt version before 3003, you would need to manually delete the cache file, located in Salt's master cache. For example, on Linux the file would be in this directory: /var/cache/salt/master/pillar_cache/ How Pillar Environments Are HandledWhen multiple pillar environments are used, the default behavior is for the pillar data from all environments to be merged together. The pillar dictionary will therefore contain keys from all configured environments. The pillarenv minion config option can be used to force the minion to only consider pillar configuration from a single environment. This can be useful in cases where one needs to run states with alternate pillar data, either in a testing/QA environment or to test changes to the pillar data before pushing them live. For example, assume that the following is set in the minion config file: pillarenv: base This would cause that minion to ignore all other pillar environments besides base when compiling the in-memory pillar data. Then, when running states, the pillarenv CLI argument can be used to override the minion's pillarenv config value: salt '*' state.apply mystates pillarenv=testing The above command will run the states with pillar data sourced exclusively from the testing environment, without modifying the in-memory pillar data. NOTE: When running states, the pillarenv CLI option does
not require a pillarenv option to be set in the minion config file.
When pillarenv is left unset then, as mentioned above, all configured
environments will be combined. Running states with pillarenv=testing in
this case would still restrict the states' pillar data to just that of the
testing pillar environment.
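For instance, a master might define the base and testing environments side by side, with the minion option selecting which one feeds the in-memory pillar. The following is a minimal sketch (pillar_roots belongs in the master config, pillarenv in the minion config; directory names are illustrative):
pillar_roots:
  base:
    - /usr/local/etc/salt/pillar
  testing:
    - /usr/local/etc/salt/pillar-testing
pillarenv: base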
Starting in the 2017.7.0 release, it is possible to pin the pillarenv to the effective saltenv, using the pillarenv_from_saltenv minion config option. When this is set to True, if a specific saltenv is specified when running states, the pillarenv will be the same. This essentially makes the following two commands equivalent: salt '*' state.apply mystates saltenv=dev salt '*' state.apply mystates saltenv=dev pillarenv=dev However, if a pillarenv is specified, it will override this behavior. So, the following command will use the qa pillar environment but source the SLS files from the dev saltenv: salt '*' state.apply mystates saltenv=dev pillarenv=qa So, if a pillarenv is set in the minion config file, pillarenv_from_saltenv will be ignored, and passing a pillarenv on the CLI will temporarily override pillarenv_from_saltenv. Viewing Pillar DataTo view pillar data, use the pillar execution module. This module includes several functions, each with its own use; these include pillar.item, pillar.get, and pillar.raw, described below.
The pillar.get FunctionNew in version 0.14.0. The pillar.get function works much in the same way as the get method in a Python dict, but with an enhancement: nested dictionaries can be traversed using a colon as a delimiter. If a structure like this is in pillar:
foo:
  bar:
    baz: qux
Extracting it from the raw pillar in an sls formula or file template is done this way: {{ pillar['foo']['bar']['baz'] }}
Now, with the new pillar.get function the data can be safely gathered and a default can be set, allowing the template to fall back if the value is not available: {{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier. NOTE: pillar.get() vs salt['pillar.get']()
It should be noted that within templating, the pillar variable is just a dictionary. This means that calling pillar.get() inside of a template will just use the default dictionary .get() function which does not include the extra : delimiter functionality. It must be called using the above syntax (salt['pillar.get']('foo:bar:baz', 'qux')) to get the salt function, instead of the default dictionary behavior. Setting Pillar Data at the Command LinePillar data can be set at the command line as in the following example: salt '*' state.apply pillar='{"cheese": "spam"}'
This will add a pillar key of cheese with its value set to spam. NOTE: Be aware that when sending sensitive data via pillar on
the command-line that the publication containing that data will be received by
all minions and will not be restricted to the targeted minions. This may
represent a security concern in some cases.
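The same colon-delimited traversal also works from the command line via the pillar execution module, which is a handy way to check what a minion actually sees; reusing the foo:bar:baz structure from above:
salt '*' pillar.get foo:bar:baz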
Pillar EncryptionSalt's renderer system can be used to decrypt pillar data. This allows for pillar items to be stored in an encrypted state, and decrypted during pillar compilation. Encrypted Pillar SLSNew in version 2017.7.0. Consider the following pillar SLS file: secrets: When the pillar data is compiled, the results will be decrypted: # salt myminion pillar.items myminion: Salt must be told what portions of the pillar data to decrypt. This is done using the decrypt_pillar config option: decrypt_pillar: The notation used to specify the pillar item(s) to be decrypted is the same as the one used in the pillar.get function. If a different delimiter is needed, it can be specified using the decrypt_pillar_delimiter config option: decrypt_pillar: The name of the renderer used to decrypt a given pillar item can be omitted, and if so it will fall back to the value specified by the decrypt_pillar_default config option, which defaults to gpg. So, the first example above could be rewritten as: decrypt_pillar: Encrypted Pillar Data on the CLINew in version 2016.3.0. The following functions support passing pillar data on the CLI via the pillar argument:
Triggering decryption of this CLI pillar data can be done in one of two ways:
# salt myminion pillar.items pillar_enc=gpg pillar='{foo: "-----BEGIN PGP MESSAGE-----\n\nhQEMAw2B674HRhwSAQf+OvPqEdDoA2fk15I5dYUTDoj1yf/pVolAma6iU4v8Zixn\nRDgWsaAnFz99FEiFACsAGDEFdZaVOxG80T0Lj+PnW4pVy0OXmXHnY2KjV9zx8FLS\nQxfvmhRR4t23WSFybozfMm0lsN8r1vfBBjbK+A72l0oxN78d1rybJ6PWNZiXi+aC\nmqIeunIbAKQ21w/OvZHhxH7cnIiGQIHc7N9nQH7ibyoKQzQMSZeilSMGr2abAHun\nmLzscr4wKMb+81Z0/fdBfP6g3bLWMJga3hSzSldU9ovu7KR8rDJI1qOlENj3Wm8C\nwTpDOB33kWIKMqiAjY3JFtb5MCHrafyggwQL7cX1+tI+AbSO6kZpbcDfzetb77LZ\nxc5NWnnGK4pGoqq4MAmZshw98RpecSHKMosto2gtiuWCuo9Zn5cV/FbjZ9CTWrQ=\n=0hO/\n-----END PGP MESSAGE-----"}'
The newlines in this example are specified using a literal \n. Newlines can be replaced with a literal \n using sed: $ echo -n bar | gpg --armor --trust-model always --encrypt -r user@domain.tld | sed ':a;N;$!ba;s/\n/\\n/g' NOTE: Using pillar_enc will perform the decryption
minion-side, so for this to work it will be necessary to set up the keyring in
/usr/local/etc/salt/gpgkeys on the minion just as one would typically
do on the master. The easiest way to do this is to first export the keys from
the master:
# gpg --homedir /usr/local/etc/salt/gpgkeys --export-secret-key -a user@domain.tld >/tmp/keypair.gpg Then, copy the file to the minion, set up the keyring, and import: # mkdir -p /usr/local/etc/salt/gpgkeys # chmod 0700 /usr/local/etc/salt/gpgkeys # gpg --homedir /usr/local/etc/salt/gpgkeys --list-keys # gpg --homedir /usr/local/etc/salt/gpgkeys --import --allow-secret-key-import keypair.gpg The --list-keys command is run to create a keyring in the newly-created directory. Pillar data which is decrypted minion-side will still be securely transferred to the master, since the data sent between minion and master is encrypted with the master's public key.
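Tying the options in this section together, a master config enabling decryption might look like the following minimal sketch, where secrets:vault is an illustrative pillar path:
decrypt_pillar:
  - 'secrets:vault': gpg
decrypt_pillar_default: gpg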
Adding New Renderers for DecryptionThose looking to add new renderers for decryption should look at the gpg renderer for an example of how to do so. The function that performs the decryption should be recursive and be able to traverse a mutable type such as a dictionary, and modify the values in-place. Once the renderer has been written, decrypt_pillar_renderers should be modified so that Salt allows it to be used for decryption. If the renderer is being submitted upstream to the Salt project, the renderer should be added in salt/renderers/. Additionally, the following should be done:
Binary Data in the PillarSalt has partial support for binary pillar data. NOTE: There are some situations (such as salt-ssh) where only
text (ASCII or Unicode) is allowed.
The simplest way to embed binary data in your pillar is to make use of YAML's built-in binary data type, which requires base64 encoded data. salt_pic: !!binary Then you can use it as a contents_pillar in a state: /tmp/salt.png: It is also possible to add ASCII-armored encrypted data to pillars, as mentioned in the Pillar Encryption section. Master Config in PillarFor convenience the data stored in the master configuration file can be made available in all minions' pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration. This option is disabled by default. To enable the master config to be added to the pillar, set pillar_opts to True in the minion config file: pillar_opts: True Minion Config in PillarMinion configuration options can be set on pillars. Any option that you want to modify should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by the MySQL Salt execution module, set the following pillar variable: mysql.pass: hardtoguesspassword Master Provided Pillar ErrorBy default if there is an error rendering a pillar, the detailed error is hidden and replaced with: Rendering SLS 'my.sls' failed. Please see master log for details. The error is protected because it may contain templating data which would give that minion information it shouldn't know, like a password! To have the master provide the detailed error that could potentially carry protected data, set pillar_safe_render_error to False: pillar_safe_render_error: False Pillar WalkthroughNOTE: This walkthrough assumes that the reader has already
completed the initial Salt walkthrough.
Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion. NOTE: Grains and Pillar are sometimes confused; just remember
that Grains are data about a minion which is stored or generated from the
minion. This is why information like the OS and CPU type are found in Grains.
Pillar is information about a minion or many minions stored or generated on
the Salt Master.
Pillar data is useful for:
Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available. Setting Up PillarThe pillar is already running in Salt by default. To see the minion's pillar data: salt '*' pillar.items NOTE: Prior to version 0.16.2, this function was named
pillar.data. This function name is still supported for backwards
compatibility.
By default, the contents of the master configuration file are not loaded into pillar for all minions. This default is stored in the pillar_opts setting, which defaults to False. The contents of the master configuration file can be made available to minion pillar files. This makes global configuration of services and systems very easy, but note that this may not be desired or appropriate if sensitive data is stored in the master's configuration file. To enable the master configuration file to be available to minions as pillar, set pillar_opts: True in the master configuration file, and then for appropriate minions also set pillar_opts: True in the minion(s) configuration file. Similar to the state tree, the pillar is composed of sls files and has a top file. The default location for the pillar is in /usr/local/etc/salt/pillar. NOTE: The pillar location can be configured via the
pillar_roots option inside the master configuration file. It must not
be in a subdirectory of the state tree or file_roots. If the pillar is under
file_roots, any pillar targeting can be bypassed by minions.
To start setting up the pillar, the /usr/local/etc/salt/pillar directory needs to be present: mkdir /usr/local/etc/salt/pillar Now create a simple top file, following the same format as the top file used for states: /usr/local/etc/salt/pillar/top.sls: base: This top file associates the data.sls file to all minions. Now the /usr/local/etc/salt/pillar/data.sls file needs to be populated: /usr/local/etc/salt/pillar/data.sls: info: some data To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master: salt '*' saltutil.refresh_pillar Now that the minions have the new pillar, it can be retrieved: salt '*' pillar.items The key info should now appear in the returned pillar data. More Complex DataUnlike states, pillar files do not need to define formulas. This example sets up user data with a UID: /usr/local/etc/salt/pillar/users/init.sls: users: NOTE: The same directory lookups that exist in states exist in
pillar, so the file users/init.sls can be referenced with users
in the top file.
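The users pillar stub above lost its indented body in this rendering; a minimal version consistent with the walkthrough, with illustrative usernames and UIDs, could be:
users:
  thatch: 1000
  shouse: 1001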
The top file will need to be updated to include this sls file: /usr/local/etc/salt/pillar/top.sls: base: Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja: /usr/local/etc/salt/states/users/init.sls {% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}
This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file. Parameterizing States With PillarPillar data can be accessed in state files to customize behavior for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply. A simple example is to set up a mapping of package names in pillar for separate Linux distributions: /usr/local/etc/salt/pillar/pkg/init.sls: pkgs: The new pkg sls needs to be added to the top file: /usr/local/etc/salt/pillar/top.sls: base: Now the minions will auto map values based on respective operating systems inside of the pillar, so sls files can be safely parameterized: /usr/local/etc/salt/states/apache/init.sls: apache: Or, if no pillar is available, a default can be set as well: NOTE: The function pillar.get used in this example was
added to Salt in version 0.14.0
/usr/local/etc/salt/states/apache/init.sls: apache: In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the minion's pillar, then the default of httpd will be used. NOTE: Under the hood, pillar is just a Python dict, so Python
dict methods such as get and items can be used.
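The pkgs pillar and the apache state referenced in this example were flattened in this rendering; reconstructed along the lines the prose describes, they would look roughly like this:
/usr/local/etc/salt/pillar/pkg/init.sls:
pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  {% endif %}
/usr/local/etc/salt/states/apache/init.sls, using the pillar.get default:
apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}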
Pillar Makes Simple States Grow EasilyOne of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states. A simple formula: /usr/local/etc/salt/states/edit/vim.sls: vim: Can be easily transformed into a powerful, parameterized formula: /usr/local/etc/salt/states/edit/vim.sls: vim: Where the vimrc source location can now be changed via pillar: /usr/local/etc/salt/pillar/edit/vim.sls: {% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}
This ensures that the right vimrc is sent out to the correct minions. The pillar top file must include a reference to the new sls pillar file: /usr/local/etc/salt/pillar/top.sls: base: Setting Pillar Data on the Command LinePillar data can be set on the command line when running state.apply like so: salt '*' state.apply pillar='{"foo": "bar"}'
salt '*' state.apply my_sls_file pillar='{"hello": "world"}'
Nested pillar values can also be set via the command line: salt '*' state.sls my_sls_file pillar='{"foo": {"bar": "baz"}}'
Lists can be passed via command line pillar data as follows: salt '*' state.sls my_sls_file pillar='{"some_list": ["foo", "bar", "baz"]}'
NOTE: If a key is passed on the command line that already
exists on the minion, the key that is passed in will overwrite the entire
value of that key, rather than merging only the specified value set via the
command line.
The example below will swap the value for vim with telnet in the previously specified list; notice the nested pillar dict: salt '*' state.apply edit.vim pillar='{"pkgs": {"vim": "telnet"}}'
This will attempt to install telnet on your minions; feel free to uninstall the package or replace the telnet value with anything else. NOTE: Be aware that when sending sensitive data via pillar on
the command-line that the publication containing that data will be received by
all minions and will not be restricted to the targeted minions. This may
represent a security concern in some cases.
More On PillarPillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location. Reference information on pillar and the external pillar interface can be found in the Salt documentation: Pillar Minion Config in PillarMinion configuration options can be set on pillars. Any option that you want to modify should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by the MySQL Salt execution module: mysql.pass: hardtoguesspassword This is very convenient when you need some dynamic configuration change that you want to be applied on the fly. For example, there is a chicken-and-egg problem if you do this: mysql-admin-passwd: The second state will fail, because you changed the root password and the minion didn't notice it. Setting mysql.pass in the pillar will help to sort out the issue. But always change the root admin password in the first place. This is very helpful for any module that needs credentials to apply state changes: mysql, keystone, etc. Targeting MinionsTargeting minions is specifying which minions should run a command or execute a state by matching against hostnames, or system information, or defined groups, or even combinations thereof. For example the command salt web1 apache.signal restart to restart the Apache httpd server specifies the machine web1 as the target and the command will only be run on that one minion. Similarly when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls: base: The simple target specifications, glob, regex, and list will cover many use cases, and for some will cover all use cases, but more powerful options exist. Targeting with GrainsThe Grains interface was built into Salt to allow minions to be targeted by system properties. So minions running on a particular operating system, or with a specific kernel, can be called to execute a function. Calling via a grain is done by passing the -G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*". salt -G 'os:Fedora' test.version Will return True from all of the minions running Fedora. To discover what grains are available and what the values are, execute the grains.items salt function: salt '*' grains.items More info on using targeting with grains can be found here. Compound TargetingNew in version 0.9.5. Multiple target interfaces can be used in conjunction to determine the command targets. These targets can then be combined using and or or statements. This is well defined with an example: salt -C 'G@os:Debian and webser* or E@db.*' test.version In this example any minion whose id starts with webser and is running Debian, or any minion whose id starts with db, will be matched. The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the @ symbol. In the above example a grain is used with G@ as well as a regular expression with E@. The webser* target does not need to be prefaced with a target type specifier because it is a glob. More info on using compound targeting can be found here. Node Group TargetingNew in version 0.9.5. 
For certain cases, it can be convenient to have a predefined group of minions on which to execute commands. This can be accomplished using what are called nodegroups. Nodegroups allow for predefined compound targets to be declared in the master configuration file, as a sort of shorthand for having to type out complicated compound expressions. nodegroups: Advanced Targeting MethodsThere are many ways to target individual minions or groups of minions in Salt: Matching the minion idEach minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion's id configuration setting. TIP: minion id and minion keys
The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host. GlobbingThe default matching that Salt utilizes is shell-style globbing around the minion id. This also works for states in the top file. NOTE: You must wrap salt calls that use globbing in
single-quotes to prevent the shell from expanding the globs before Salt is
invoked.
Match all minions: salt '*' test.version Match all minions in the example.net domain or any of the example domains: salt '*.example.net' test.version salt '*.example.*' test.version Match all the webN minions in the example.net domain (web1.example.net, web2.example.net … webN.example.net): salt 'web?.example.net' test.version Match the web1 through web5 minions: salt 'web[1-5]' test.version Match the web1 and web3 minions: salt 'web[1,3]' test.version Match the web-x, web-y, and web-z minions: salt 'web-[x-z]' test.version NOTE: For additional targeting methods please review the
compound matchers documentation.
Regular ExpressionsMinions can be matched using Perl-compatible regular expressions (which is globbing on steroids and a ton of caffeine). Match both web1-prod and web1-devel minions: salt -E 'web1-(prod|devel)' test.version When using regular expressions in a State's top file, you must specify the matcher as the first option. The following example executes the contents of webserver.sls on the above-mentioned minions. base: ListsAt the most basic level, you can specify a flat list of minion IDs: salt -L 'web1,web2,web3' test.version Targeting using GrainsGrain data can be used when targeting minions. For example, the following matches all CentOS minions: salt -G 'os:CentOS' test.version Match all minions with 64-bit CPUs, and return number of CPU cores for each matching minion: salt -G 'cpuarch:x86_64' grains.item num_cpus Additionally, globs can be used in grain matches, and grains that are nested in a dictionary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dictionary with a key named environment, which has a value that contains the word production: salt -G 'ec2_tags:environment:*production*' IMPORTANT: See Is Targeting using Grain Data Secure? for
important security information.
Targeting using PillarPillar data can be used when targeting minions. This allows for ultimate control and flexibility when targeting minions. NOTE: To start using Pillar targeting it is required to make a
Pillar data cache on the Salt Master for each Minion via the following commands:
salt '*' saltutil.refresh_pillar or salt '*' saltutil.sync_all.
The Pillar data cache will also be populated during a highstate run. Once
Pillar data changes, you must refresh the cache by running the above commands for
this targeting method to work correctly.
Example: salt -I 'somekey:specialvalue' test.version Like with Grains, it is possible to use globbing as well as match nested values in Pillar, by adding colons for each level that is being traversed. The below example would match minions with a pillar named foo, which is a dict containing a key bar, with a value beginning with baz: salt -I 'foo:bar:baz*' test.version Subnet/IP Address MatchingMinions can easily be matched based on IP address, or by subnet (using CIDR notation). salt -S 192.168.40.20 test.version salt -S 2001:db8::/64 test.version Ipcidr matching can also be used in compound matches: salt -C 'S@10.0.0.0/24 and G@os:Debian' test.version Subnet matching is also possible in both pillar and state top files, for example with '172.16.0.0/12': Compound matchersCompound matchers allow very granular minion targeting using any of Salt's matchers. The default matcher is a glob match, just as with CLI and top file matching. To match using anything other than a glob, prefix the match string with the appropriate letter (for example G for a grain, E for a minion-id regular expression, L for a list, I for pillar, S for a subnet or IP address, or N for a nodegroup), followed by an @ sign.
Matchers can be joined using boolean and, or, and not operators. For example, the following string matches all Debian minions with a hostname that begins with webserv, as well as any minions that have a hostname which matches the regular expression web-dc1-srv.*: salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.version That same example expressed in a top file looks like the following: base: New in version 2015.8.0. Excluding a minion based on its ID is also possible: salt -C 'not web-dc1-srv' test.version Versions prior to 2015.8.0 a leading not was not supported in compound matches. Instead, something like the following was required: salt -C '* and not G@kernel:Darwin' test.version Excluding a minion based on its ID was also possible: salt -C '* and not web-dc1-srv' test.version Precedence MatchingMatchers can be grouped together with parentheses to explicitly declare precedence amongst groups. salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.version NOTE: Be certain to note that spaces are required between the
parentheses and targets. Failing to obey this rule may result in incorrect
targeting!
Alternate DelimitersNew in version 2015.8.0. Matchers that target based on a key value pair use a colon (:) as a delimiter. Matchers with a Yes in the Alt Delimiters column in the previous table support specifying an alternate delimiter character. This is done by specifying an alternate delimiter character between the leading matcher character and the @ pattern separator character. This avoids incorrect interpretation of the pattern in the case that : is part of the grain or pillar data structure traversal. salt -C 'J|@foo|bar|^foo:bar$ or J!@gitrepo!https://github.com:example/project.git' test.ping Node groupsNodegroups are declared using a compound target specification. The compound target documentation can be found here. The nodegroups master config file parameter is used to define nodegroups. Here's an example nodegroup configuration within /usr/local/etc/salt/master: nodegroups: NOTE: The L within group1 is matching a list of minions,
while the G in group2 is matching specific grains. See the compound
matchers documentation for more details.
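The nodegroup definition this note refers to was flattened in this rendering; a reconstruction consistent with the note, using illustrative domain names, is:
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'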
As of the 2017.7.0 release of Salt, group names can also be prepended with a dash. This brings the usage in line with many other areas of Salt. For example: nodegroups: New in version 2015.8.0. NOTE: Nodegroups can reference other nodegroups as seen in
group3. Ensure that you do not have circular references. Circular
references will be detected and cause partial expansion with a logged error
message.
New in version 2015.8.0. Compound nodegroups can be either string values or lists of string values. When the nodegroup is a string value, it will be tokenized by splitting on whitespace. This may be a problem if whitespace is necessary as part of a pattern. When a nodegroup is a list of strings, tokenization will happen for each list element as a whole. To match a nodegroup on the CLI, use the -N command-line option: salt -N group1 test.version New in version 2019.2.0. NOTE: The N@ classifier historically could not be used
in compound matches within the CLI or top file, it was only recognized
in the nodegroups master config file parameter. As of the 2019.2.0
release, this limitation no longer exists.
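For example, a nodegroup can now be referenced inside a compound match, combining group1 from the sketch above with a grain:
salt -C 'N@group1 and G@os:Debian' test.version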
To match a nodegroup in your top file, make sure to put - match: nodegroup on the line directly following the nodegroup name. base: NOTE: When adding or modifying nodegroups to a master
configuration file, the master must be restarted for those changes to be fully
recognized.
A limited amount of functionality, such as targeting with -N from the command line, may be available without a restart. Defining Nodegroups as Lists of Minion IDsA simple list of minion IDs would traditionally be defined like this: nodegroups: They can now also be defined as a YAML list, like this: nodegroups: New in version 2016.11.0. Batch SizeThe -b (or --batch-size) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported. salt '*' -b 10 test.version salt -G 'os:RedHat' --batch-size 25% apache.signal restart This will only run test.version on 10 of the targeted minions at a time and then restart apache on 25% of the minions matching os:RedHat at a time, working through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer or doing maintenance on BSD firewalls using carp much easier with Salt. The batch system maintains a window of running minions, so, if there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions; when one minion returns, the command is sent to one additional minion, so that the job is constantly running on 10 minions. New in version 2016.3. The --batch-wait argument can be used to specify a number of seconds to wait after a minion returns, before sending the command to a new minion. SECO RangeSECO range is a cluster-based metadata store developed and maintained by Yahoo! The Range project is hosted here: https://github.com/ytoolshed/range Learn more about range here: https://github.com/ytoolshed/range/wiki/ PrerequisitesTo utilize range support in Salt, a range server is required. Setting up a range server is outside the scope of this document. Apache modules are included in the range distribution. With a working range server, cluster files must be defined. These files are written in YAML and define hosts contained inside a cluster. Full documentation on writing YAML range files is here: https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec Additionally, the Python seco range libraries must be installed on the salt master. One can verify that they have been installed correctly via the following command: python -c 'import seco.range' If no errors are returned, range is installed successfully on the salt master. Preparing SaltRange support must be enabled on the salt master by setting the hostname and port of the range server inside the master configuration file: range_server: my.range.server.com:80 Following this, the master must be restarted for the change to have an effect. Targeting with RangeOnce a cluster has been defined, it can be targeted with a salt command by using the -R or --range flags. For example, given the following range YAML file being served from a range server: $ cat /etc/range/test.yaml CLUSTER: host1..100.test.com APPS: One might target host1 through host100 in the test.com domain with Salt as follows: salt --range %test:CLUSTER test.version The following salt command would target three hosts: frontend, backend, and mysql: salt --range %test:APPS test.version Loadable MatchersNew in version 2019.2.0. Internally targeting is implemented with chunks of code called Matchers. As of the 2019.2.0 release, matchers can be loaded dynamically. Currently new matchers cannot be created, but existing matchers can have their functionality altered or extended. For more information on Matchers see MatchersNew in version 3000. 
Matchers are modules that provide Salt's targeting abilities. As of the 3000 release, matchers can be dynamically loaded. Currently new matchers cannot be created because the required plumbing for the CLI does not exist yet. Existing matchers may have their functionality altered or extended. For details of targeting methods, see the Targeting topic. A matcher module must have a function called match(). This function ends up becoming a method on the Matcher class. All matcher functions require at least two arguments, self (because the function will be turned into a method), and tgt, which is the actual target string. The grains and pillar matchers also take a delimiter argument and should default to DEFAULT_TARGET_DELIM. Like other Salt loadable modules, modules that override built-in functionality can be placed in file_roots in a special directory and then copied to the minion through the normal sync process. saltutil.sync_all will transfer all loadable modules, and the 3000 release introduces saltutil.sync_matchers. For matchers, the directory is /usr/local/etc/salt/states/_matchers (assuming your file_roots is set to the default /usr/local/etc/salt/states). As an example, let's modify the list matcher to have the separator be a '/' instead of the default ','. A minimal implementation along those lines might look like this:
from __future__ import absolute_import, print_function, unicode_literals
from salt.ext import six  # pylint: disable=3rd-party-module-not-gated

def match(self, tgt):
    '''
    Determines if this host is on the list.
    '''
    if isinstance(tgt, six.string_types):
        # Split on '/' instead of the default ','.
        tgt = tgt.split('/')
    return bool(tgt and (self.opts['id'] in tgt))
Place this code in a file called list_match.py in a _matchers directory in your file_roots. Sync this down to your minions with saltutil.sync_matchers. Then attempt to match with the following, replacing minionX with three of your minions. salt -L 'minion1/minion2/minion3' test.ping Three of your minions should respond. Each supported matcher lives in its own module file named after the matcher, such as list_match.py for the list matcher.
The Salt MineThe Salt Mine is used to collect arbitrary data from Minions and store it on the Master. This data is then made available to all Minions via the salt.modules.mine module. Mine data is gathered on the Minion and sent back to the Master where only the most recent data is maintained (if long term data is required use returners or the external job cache). Mine vs GrainsMine data is designed to be much more up-to-date than grain data. Grains are refreshed on a very limited basis and are largely static data. Mines are designed to replace slow peer publishing calls when Minions need data from other Minions. Rather than having a Minion reach out to all the other Minions for a piece of data, the Salt Mine, running on the Master, can collect it from all the Minions every Mine Interval, resulting in almost fresh data at any given time, with much less overhead. Mine FunctionsTo enable the Salt Mine the mine_functions option needs to be applied to a Minion. This option can be applied via the Minion's configuration file, or the Minion's Pillar. The mine_functions option dictates what functions are being executed and allows for arguments to be passed in. The list of available functions is the same as for Salt execution modules; see the salt.modules documentation. If no arguments are passed, an empty list must be added like in the test.ping function in the example below: mine_functions: In the example above, salt.modules.network.ip_addrs has additional filters to help narrow down the results. In the above example, IP addresses are only returned if they are on an eth0 interface and in the 10.0.0.0/8 IP range. Changed in version 3000. The format to define mine_functions has been changed to allow the same format as used for module.run. The old format (above) will still be supported. mine_functions: Minion-side Access ControlNew in version 3000. Mine functions can be targeted to only be available to specific minions. This uses the same targeting parameters as Targeting Minions but with keywords allow_tgt and allow_tgt_type. When a minion requests a function from the salt mine that is not allowed to be requested by that minion (i.e. when looking up the combination of allow_tgt and allow_tgt_type and the requesting minion is not in the list) it will get no data, just as if the requested function is not present in the salt mine. mine_functions: Mine Functions AliasesFunction aliases can be used to provide friendly names, usage intentions, or to allow multiple calls of the same function with different arguments. There is a different syntax for passing positional and key-value arguments. Mixing positional and key-value arguments is not supported. New in version 2014.7.0. mine_functions: Changed in version 3000. With the addition of the module.run-like format for defining mine_functions, the method of adding aliases remains similar. Just add a mine_function kwarg with the name of the real function to call, making the key below mine_functions the alias: mine_functions: Mine IntervalThe Salt Mine functions are executed when the Minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the Minion via the mine_interval option in the minion config: mine_interval: 60 Mine in Salt-SSHAs of the 2015.5.0 release of salt, salt-ssh supports mine.get. Because the Minions cannot provide their own mine_functions configuration, we retrieve the args for specified mine functions in one of three places, searched in the following order:
The mine_functions are formatted exactly the same as in normal salt, just stored in a different location. Here is an example of a flat roster containing mine_functions: test: NOTE: Because of the differences in the architecture of
salt-ssh, mine.get calls are somewhat inefficient. Salt must make a new
salt-ssh call to each of the Minions in question to retrieve the requested
data, much like a publish call. However, unlike publish, it must run the
requested function as a wrapper function, so we can retrieve the function args
from the pillar of the Minion in question. This results in a non-trivial delay
in retrieving the requested data.
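The flat roster stub above lost its indented body in this rendering; a minimal version might look like the following sketch, where the host, user, and interface argument are illustrative:
test:
  host: 192.0.2.10
  user: root
  mine_functions:
    network.ip_addrs: [eth0]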
Minions Targeting with MineThe mine.get function supports various methods of Minions targeting to fetch Mine data from particular hosts, such as glob or regular expression matching on Minion id (name), grains, pillars and compound matches. See the salt.modules.mine module documentation for the reference. NOTE: Pillar data needs to be cached on Master for pillar
targeting to work with Mine. Read the note in the relevant section.
ExampleOne way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all Minions with the "web" grain to add them to the pool of load balanced servers. /usr/local/etc/salt/pillar/top.sls: base: /usr/local/etc/salt/pillar/web.sls: mine_functions: Then trigger the minions to refresh their pillar data by running: salt '*' saltutil.refresh_pillar Verify that the results are showing up in the pillar on the minions by executing the following and checking for network.ip_addrs in the output: salt '*' pillar.items Which should show that the function is present on the minion, but not include the output: minion1.example.com: Mine data is typically only updated on the master every 60 minutes; this can be modified by setting: /usr/local/etc/salt/minion.d/mine.conf: mine_interval: 5 To force the mine data to update immediately run: salt '*' mine.update Set up the salt.states.file.managed state in /usr/local/etc/salt/states/haproxy.sls: haproxy_config: Create the Jinja template in /usr/local/etc/salt/states/haproxy_config: <...file contents snipped...>
{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', tgt_type='grain') | dictsort() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}
<...file contents snipped...>
In the above example, server will be expanded to the minion_id. NOTE: The expr_form argument will be renamed to tgt_type
in the 2017.7.0 release of Salt.
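The same mine data used by the template above can also be queried ad hoc from the CLI with the mine.get execution function, using the grain target from the example:
salt '*' mine.get 'roles:web' network.ip_addrs tgt_type=grain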
RunnersSalt runners are convenience applications executed with the salt-run command. Salt runners work similarly to Salt execution modules; however, they execute on the Salt master itself instead of remote Salt minions. A Salt runner can be a simple client call or a complex application. SEE ALSO: The full list of runners
Writing Salt RunnersA Salt runner is written in a similar manner to a Salt execution module. Both are Python modules which contain functions and each public function is a runner which may be executed via the salt-run command. For example, if a Python module named test.py is created in the runners directory and contains a function called foo, the test runner could be invoked with the following command: # salt-run test.foo Runners have several options for controlling output. Any print statement in a runner is automatically also fired onto the master event bus. For example: def a_runner(outputter=None, display_progress=False): The above would result in an event fired as follows: Event fired at Tue Jan 13 15:26:45 2015
*************************
Tag: salt/run/20150113152644070246/print
Data:
{'_stamp': '2015-01-13T15:26:45.078707',
A runner may also send a progress event, which is displayed to the user during runner execution and is also passed across the event bus if the display_progress argument to a runner is set to True. A custom runner may send its own progress event by using the __jid_event__.fire_event() method as shown here:
if display_progress:
    __jid_event__.fire_event({'message': 'A progress message'}, 'progress')
The above would produce output on the console reading: A progress message as well as an event on the event bus similar to: Event fired at Tue Jan 13 15:21:20 2015
*************************
Tag: salt/run/20150113152118341421/progress
Data:
{'_stamp': '2015-01-13T15:21:20.390053',
A runner could use the same approach to send an event with a customized tag onto the event bus by replacing the second argument (progress) with whatever tag is desired. However, this will not be shown on the command-line and will only be fired onto the event bus. Synchronous vs. AsynchronousA runner may be fired asynchronously which will immediately return control. In this case, no output will be display to the user if salt-run is being used from the command-line. If used programmatically, no results will be returned. If results are desired, they must be gathered either by firing events on the bus from the runner and then watching for them or by some other means. NOTE: When running a runner in asynchronous mode, the
--progress flag will not deliver output to the salt-run CLI. However,
progress events will still be fired on the bus.
In synchronous mode, which is the default, control will not be returned until the runner has finished executing. To add custom runners, put them in a directory and add it to runner_dirs in the master configuration file. ExamplesExamples of runners can be found in the Salt distribution: salt/runners A simple runner that returns a well-formatted list of the minions that are responding to Salt calls could look like this minimal implementation:
# Import salt modules
import salt.client

def up():
    '''
    Print a list of all of the minions that are up
    '''
    client = salt.client.LocalClient(__opts__['conf_file'])
    minions = client.cmd('*', 'test.version', timeout=1)
    for minion in sorted(minions):
        print(minion)
Salt EnginesNew in version 2015.8.0. Salt Engines are long-running, external system processes that leverage Salt.
Salt engines enhance and replace the external processes functionality. ConfigurationSalt engines are configured under an engines top-level section in your Salt master or Salt minion configuration. Provide a list of engines and parameters under this section. engines: New in version 3000. Multiple copies of a particular Salt engine can be configured by including the engine_module parameter in the engine configuration. engines: Salt engines must be in the Salt path, or you can add the engines_dirs option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines. This option should be formatted as a list of directories to search, such as: engines_dirs: Writing an EngineAn example Salt engine, salt/engines/test.py, is available in the Salt source. To develop an engine, the only requirement is that your module implement the start() function. What is YAML and How To Use ItThe default renderer for SLS files is the YAML renderer. What is YAMLWhat does YAML stand for? It's an acronym for YAML Ain't Markup Language. The Official YAML Website defines YAML as: ...a human friendly data serialization standard
for all programming languages.
However, Salt uses a small subset of YAML that maps over very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt. Defining YAMLThough YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files. Rule One: IndentationYAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs. Rule Two: ColonsPython dictionaries are, of course, simply key-value pairs. Users from other languages may recognize this data type as hashes or associative arrays. Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values are represented by either a string following the colon, separated by a space: my_key: my_value In Python, the above maps to: {"my_key": "my_value"}
Alternatively, a value can be associated with a key through indentation.
my_key:
  my_value
NOTE: The above syntax is valid YAML but is uncommon in SLS
files because most often, the value for a key is not singular but instead is a
list of values.
In Python, the above maps to: {"my_key": "my_value"}
Dictionaries can be nested:
first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict
And in Python: {"first_level_dict_key": {"second_level_dict_key": "value_in_second_level_dict"}}
Rule Three: DashesTo represent lists of items, a single dash followed by a space is used. Multiple items are a part of the same list as a function of their having the same level of indentation.
- list_value_one
- list_value_two
- list_value_three
Lists can be the value of a key-value pair. This is quite common in Salt:
my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three
In Python, the above maps to: {"my_dictionary": ["list_value_one", "list_value_two", "list_value_three"]}
Learning more about YAMLOne easy way to learn more about how YAML gets rendered into Python data structures is to use an online YAML parser to see the Python output. Here are some excellent links for experimenting with and referencing YAML:
TemplatingJinja statements and expressions are allowed by default in SLS files. See Understanding Jinja. Understanding JinjaJinja is the default templating language in SLS files. IMPORTANT: Jinja supports a secure, sandboxed template
execution environment that Salt takes advantage of. Other text
Renderers do not support this functionality, so Salt highly recommends
usage of jinja / jinja|yaml.
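The renderer for a given SLS file can be selected with a shebang-style first line; the jinja|yaml pipeline recommended above is written as:
#!jinja|yaml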
Jinja in StatesJinja is evaluated before YAML, which means it is evaluated before the States are run. The most basic usage of Jinja in state files is using control structures to wrap conditional or redundant state elements: {% if grains['os'] != 'FreeBSD' %}
tcsh:
  pkg:
    - installed
{% endif %}

motd:
  file.managed:
    {% if grains['os'] == 'FreeBSD' %}
    - name: /etc/motd
    {% elif grains['os'] == 'Debian' %}
    - name: /etc/motd.tail
    {% endif %}
    - source: salt://motd
In this example, the first if block will only be evaluated on minions that aren't running FreeBSD, and the second block changes the file name based on the os grain. Writing if-else blocks can lead to very redundant state files however. In this case, using pillars, or using a previously defined variable might be easier: {% set motd = ['/etc/motd'] %}
{% if grains['os'] == 'Debian' %}
  {% set motd = ['/etc/motd.tail', '/var/run/motd'] %}
{% endif %}

{% for motdfile in motd %}
{{ motdfile }}:
  file.managed:
    - source: salt://motd
{% endfor %}
Using a variable set by the template, the for loop will iterate over the list of MOTD files to update, adding a state block for each file. The filter_by function can also be used to set variables based on grains: {% set auditd = salt['grains.filter_by']({
'RedHat': { 'package': 'audit' },
'Debian': { 'package': 'auditd' },
}) %}
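The dictionary returned by filter_by can then be used like any other template variable; for example, a state keyed off it (a minimal sketch):
auditd:
  pkg.installed:
    - name: {{ auditd.package }}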
Include and ImportIncludes and imports can be used to share common, reusable state configuration between state files and between files. {% from 'lib.sls' import test %}
This would import the test template variable or macro, not the test state element, from the file lib.sls. In the case that the included file performs checks against grains, or something else that requires context, passing the context into the included file is required: {% from 'lib.sls' import test with context %}
Includes must use full paths, like so: spam/eggs.jinja
Including Context During Include/ImportBy adding with context to the include/import directive, the current context can be passed to an included/imported template. {% import 'openssl/vars.sls' as ssl with context %}
MacrosMacros are helpful for eliminating redundant code. Macros are most useful as mini-templates to repeat blocks of strings with a few parameterized variables. Be aware that stripping whitespace from the template block, as well as contained blocks, may be necessary to emulate a variable return from the macro. # init.sls
{% from 'lib.sls' import pythonpkg with context %}
python-virtualenv:
  pkg.installed:
    - name: {{ pythonpkg('virtualenv') }}
# lib.sls
{% macro pythonpkg(pkg) -%}
  {%- if grains['os'] == 'FreeBSD' -%}
    py27-{{ pkg }}
  {%- elif grains['os'] == 'Debian' -%}
    python-{{ pkg }}
  {%- endif -%}
{%- endmacro %}
This would define a macro that would return a string of the full package name, depending on the packaging system's naming convention. The whitespace of the macro was eliminated, so that the macro would return a string without line breaks, using whitespace control. Template InheritanceTemplate inheritance works fine from state files and files. The search path starts at the root of the state tree or pillar. ErrorsSaltstack allows raising custom errors using the raise jinja function. {{ raise('Custom Error') }}
When rendering the template containing the above statement, a TemplateError exception is raised, causing the rendering to fail with the following message: TemplateError: Custom Error FiltersSaltstack extends builtin filters with these custom filters: strftimeConverts any time related object into a time based string. It requires valid strftime directives. An exhaustive list can be found here in the Python documentation. {% set curtime = None | strftime() %}
Fuzzy dates require that the timelib Python module is installed. {{ "2002/12/25"|strftime("%y") }}
{{ "1040814000"|strftime("%Y-%m-%d") }}
{{ datetime|strftime("%u") }}
{{ "tomorrow"|strftime }}
sequenceEnsure that parsed data is a sequence. yaml_encodeSerializes a single object into a YAML scalar with any necessary handling for escaping special characters. This will work for any scalar YAML data type: ints, floats, timestamps, booleans, strings, unicode. It will not work for multi-objects such as sequences or maps. {%- set bar = 7 %}
{%- set baz = none %}
{%- set zip = true %}
{%- set zap = 'The word of the day is "salty"' %}
{%- load_yaml as foo %}
bar: {{ bar|yaml_encode }}
baz: {{ baz|yaml_encode }}
zip: {{ zip|yaml_encode }}
zap: {{ zap|yaml_encode }}
{%- endload %}
In the above case {{ bar }} and {{ foo.bar }} should be identical and {{ baz }} and {{ foo.baz }} should be identical. yaml_dquoteSerializes a string into a properly-escaped YAML double-quoted string. This is useful when the contents of a string are unknown and may contain quotes or unicode that needs to be preserved. The resulting string will be emitted with opening and closing double quotes. {%- set bar = '"The quick brown fox . . ."' %}
{%- set baz = 'The word of the day is "salty".' %}
{%- load_yaml as foo %}
bar: {{ bar|yaml_dquote }}
baz: {{ baz|yaml_dquote }}
{%- endload %}
In the above case {{ bar }} and {{ foo.bar }} should be identical and {{ baz }} and {{ foo.baz }} should be identical. If variable contents are not guaranteed to be a string then it is better to use yaml_encode which handles all YAML scalar types. yaml_squoteSimilar to the yaml_dquote filter but with single quotes. Note that YAML only allows special escapes inside double quotes so yaml_squote is not nearly as useful (viz. you likely want to use yaml_encode or yaml_dquote). dict_to_sls_yaml_paramsNew in version 3005. Renders a formatted multi-line YAML string from a Python dictionary. Each key/value pair in the dictionary will be added as a single-key dictionary to a list that will then be sent to the YAML formatter. Example: {% set thing_params = {
Returns: thing: to_boolNew in version 2017.7.0. Returns the logical value of an element. Example: {{ 'yes' | to_bool }}
{{ 'true' | to_bool }}
{{ 1 | to_bool }}
{{ 'no' | to_bool }}
Will be rendered as: True True True False exactly_n_trueNew in version 2017.7.0. Tests that exactly N items in an iterable are "truthy" (neither None, False, nor 0). Example: {{ ['yes', 0, False, 'True'] | exactly_n_true(2) }}
Returns: True exactly_one_trueNew in version 2017.7.0. Tests that exactly one item in an iterable is "truthy" (neither None, False, nor 0). Example: {{ ['yes', False, 0, None] | exactly_one_true }}
Returns: True quoteNew in version 2017.7.0. This text will be wrapped in quotes. regex_searchNew in version 2017.7.0. Scan through string looking for a location where this regular expression produces a match. Returns None in case there were no matches found Example: {{ 'abcdefabcdef' | regex_search('BC(.*)', ignorecase=True) }}
Returns: ("defabcdef",)
regex_matchNew in version 2017.7.0. If zero or more characters at the beginning of string match this regular expression, otherwise returns None. Example: {{ 'abcdefabcdef' | regex_match('BC(.*)', ignorecase=True) }}
Returns: None regex_replaceNew in version 2017.7.0. Searches for a pattern and replaces with a sequence of characters. Example: {% set my_text = 'yes, this is a TEST' %}
{{ my_text | regex_replace(' ([a-z])', '__\\1', ignorecase=True) }}
Returns: yes,__this__is__a__TEST uuidNew in version 2017.7.0. Return a UUID. Example: {{ 'random' | uuid }}
Returns: 3652b285-26ad-588e-a5dc-c2ee65edc804 is_listNew in version 2017.7.0. Return if an object is list. Example: {{ [1, 2, 3] | is_list }}
Returns: True is_iterNew in version 2017.7.0. Return if an object is iterable. Example: {{ [1, 2, 3] | is_iter }}
Returns: True minNew in version 2017.7.0. Return the minimum value from a list. Example: {{ [1, 2, 3] | min }}
Returns: 1 maxNew in version 2017.7.0. Returns the maximum value from a list. Example: {{ [1, 2, 3] | max }}
Returns: 3 avgNew in version 2017.7.0. Returns the average value of the elements of a list Example: {{ [1, 2, 3] | avg }}
Returns: 2 unionNew in version 2017.7.0. Return the union of two lists. Example: {{ [1, 2, 3] | union([2, 3, 4]) | join(', ') }}
Returns: 1, 2, 3, 4 intersectNew in version 2017.7.0. Return the intersection of two lists. Example: {{ [1, 2, 3] | intersect([2, 3, 4]) | join(', ') }}
Returns: 2, 3 differenceNew in version 2017.7.0. Return the difference of two lists. Example: {{ [1, 2, 3] | difference([2, 3, 4]) | join(', ') }}
Returns: 1 symmetric_differenceNew in version 2017.7.0. Return the symmetric difference of two lists. Example: {{ [1, 2, 3] | symmetric_difference([2, 3, 4]) | join(', ') }}
Returns: 1, 4 flattenNew in version 3005. Flatten a list. {{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
Flatten only the first level of a list: {{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
Preserve nulls in a list, by default flatten removes them. {{ [3, None, [4, [2]] ] | flatten(levels=1, preserve_nulls=True) }}
# => [3, None, 4, [2]]
combinationsNew in version 3005. Invokes the combinations function from the itertools library. See the itertools documentation for more information. {% for one, two in "ABCD" | combinations(2) %}{{ one~two }} {% endfor %}
# => AB AC AD BC BD CD
combinations_with_replacementNew in version 3005. Invokes the combinations_with_replacement function from the itertools library. See the itertools documentation for more information. {% for one, two in "ABC" | combinations_with_replacement(2) %}{{ one~two }} {% endfor %}
# => AA AB AC BB BC CC
compressNew in version 3005. Invokes the compress function from the itertools library. See the itertools documentation for more information. {% for val in "ABCDEF" | compress([1,0,1,0,1,1]) %}{{ val }} {% endfor %}
# => A C E F
permutationsNew in version 3005. Invokes the permutations function from the itertools library. See the itertools documentation for more information. {% for one, two in "ABCD" | permutations(2) %}{{ one~two }} {% endfor %}
# => AB AC AD BA BC BD CA CB CD DA DB DC
productNew in version 3005. Invokes the product function from the itertools library. See the itertools documentation for more information. {% for one, two in "ABCD" | product("xy") %}{{ one~two }} {% endfor %}
# => Ax Ay Bx By Cx Cy Dx Dy
zipNew in version 3005. Invokes the native Python zip function. The zip function returns a zip object, which is an iterator of tuples where the first item in each passed iterator is paired together, and then the second item in each passed iterator are paired together etc. If the passed iterators have different lengths, the iterator with the least items decides the length of the new iterator. {% for one, two in "ABCD" | zip("xy") %}{{ one~two }} {% endfor %}
# => Ax By
zip_longestNew in version 3005. Invokes the zip_longest function from the itertools library. See the itertools documentation for more information. {% for one, two in "ABCD" | zip_longest("xy", fillvalue="-") %}{{ one~two }} {% endfor %}
# => Ax By C- D-
method_callNew in version 3001. Returns a result of object's method call. Example #1: {{ [1, 2, 1, 3, 4] | method_call('index', 1, 1, 3) }}
Returns: 2 This filter can be used with the map filter to apply object methods without using loop constructs or temporary variables. Example #2: {% set host_list = ['web01.example.com', 'db01.example.com'] %}
{% set host_list_split = [] %}
{% for item in host_list %}
Example #3: {{ host_list|map('method_call', 'split', '.', 1)|list }}
Return of examples #2 and #3: [[web01, example.com], [db01, example.com]] is_sortedNew in version 2017.7.0. Return True if an iterable object is already sorted. Example: {{ [1, 2, 3] | is_sorted }}
Returns: True compare_listsNew in version 2017.7.0. Compare two lists and return a dictionary with the changes. Example: {{ [1, 2, 3] | compare_lists([1, 2, 4]) }}
Returns: {"new": [4], "old": [3]}
compare_dictsNew in version 2017.7.0. Compare two dictionaries and return a dictionary with the changes. Example: {{ {'a': 'b'} | compare_dicts({'a': 'c'}) }}
Returns: {"a": {"new": "c", "old": "b"}}
is_hexNew in version 2017.7.0. Return True if the value is hexadecimal. Example: {{ '0xabcd' | is_hex }}
{{ 'xyzt' | is_hex }}
Returns: True False contains_whitespaceNew in version 2017.7.0. Return True if a text contains whitespaces. Example: {{ 'abcd' | contains_whitespace }}
{{ 'ab cd' | contains_whitespace }}
Returns: False True substring_in_listNew in version 2017.7.0. Return True if a substring is found in a list of string values. Example: {{ 'abcd' | substring_in_list(['this', 'is', 'an abcd example']) }}
Returns: True check_whitelist_blacklistNew in version 2017.7.0. Check a whitelist and/or blacklist to see if the value matches it. This filter can be used with either a whitelist or a blacklist individually, or a whitelist and a blacklist can be passed simultaneously. If whitelist is used alone, value membership is checked against the whitelist only. If the value is found, the function returns True. Otherwise, it returns False. If blacklist is used alone, value membership is checked against the blacklist only. If the value is found, the function returns False. Otherwise, it returns True. If both a whitelist and a blacklist are provided, value membership in the blacklist will be examined first. If the value is not found in the blacklist, then the whitelist is checked. If the value isn't found in the whitelist, the function returns False. Whitelist Example: {{ 5 | check_whitelist_blacklist(whitelist=[5, 6, 7]) }}
Returns: True Blacklist Example: {{ 5 | check_whitelist_blacklist(blacklist=[5, 6, 7]) }}
False date_formatNew in version 2017.7.0. Converts unix timestamp into human-readable string. Example: {{ 1457456400 | date_format }}
{{ 1457456400 | date_format('%d.%m.%Y %H:%M') }}
Returns: 2017-03-08 08.03.2017 17:00 to_numNew in version 2017.7.0. New in version 2018.3.0: Renamed from str_to_num to to_num. Converts a string to its numerical value. Example: {{ '5' | to_num }}
Returns: 5 to_bytesNew in version 2017.7.0. Converts string-type object to bytes. Example: {{ 'wall of text' | to_bytes }}
NOTE: This option may have adverse effects when using the
default renderer, jinja|yaml. This is due to the fact that YAML
requires proper handling in regard to special characters. Please see the
section on YAML ASCII support in the YAML Idiosyncrasies
documentation for more information.
json_encode_listNew in version 2017.7.0. New in version 2018.3.0: Renamed from json_decode_list to json_encode_list. When you encode something you get bytes, and when you decode, you get your locale's encoding (usually a unicode type). This filter was incorrectly-named when it was added. json_decode_list will be supported until the 3003 release. Deprecated since version 2018.3.3,2019.2.0: The tojson filter accomplishes what this filter was designed to do, making this filter redundant. Recursively encodes all string elements of the list to bytes. Example: {{ [1, 2, 3] | json_encode_list }}
Returns: [1, 2, 3] json_encode_dictNew in version 2017.7.0. New in version 2018.3.0: Renamed from json_decode_dict to json_encode_dict. When you encode something you get bytes, and when you decode, you get your locale's encoding (usually a unicode type). This filter was incorrectly-named when it was added. json_decode_dict will be supported until the 3003 release. Deprecated since version 2018.3.3,2019.2.0: The tojson filter accomplishes what this filter was designed to do, making this filter redundant. Recursively encodes all string items in the dictionary to bytes. Example: Assuming that pillar['foo'] contains {u'a': u'\u0414'}, and your locale is en_US.UTF-8: {{ pillar['foo'] | json_encode_dict }}
Returns: {"a": "\xd0\x94"}
tojsonNew in version 2018.3.3,2019.2.0. Dumps a data structure to JSON. This filter was added to provide this functionality to hosts which have a Jinja release older than version 2.9 installed. If Jinja 2.9 or newer is installed, then the upstream version of the filter will be used. See the upstream docs for more information. random_hashNew in version 2017.7.0. New in version 2018.3.0: Renamed from rand_str to random_hash to more accurately describe what the filter does. rand_str will be supported to ensure backwards compatibility but please use the preferred random_hash. Generates a random number between 1 and the number passed to the filter, and then hashes it. The default hash type is the one specified by the minion's hash_type config option, but an alternate hash type can be passed to the filter as an argument. Example: {% set num_range = 99999999 %}
{{ num_range | random_hash }}
{{ num_range | random_hash('sha512') }}
Returns: 43ec517d68b6edd3015b3edc9a11367b d94a45acd81f8e3107d237dbc0d5d195f6a52a0d188bc0284c0763ece1eac9f9496fb6a531a296074c87b3540398dace1222b42e150e67c9301383fde3d66ae5 random_sampleNew in version 3005. Returns a given sample size from a list. The seed parameter can be used to return a predictable outcome. Example: {% set my_list = ["one", "two", "three", "four"] %}
{{ my_list | random_sample(2) }}
Returns: ["four", "one"] random_shuffleNew in version 3005. Returns a shuffled copy of an input list. The seed parameter can be used to return a predictable outcome. Example: {% set my_list = ["one", "two", "three", "four"] %}
{{ my_list | random_shuffle }}
Returns: ["four", "three", "one", "two"] set_dict_key_valueNew in version 3000. Allows you to set a value in a nested dictionary without having to worry if all the nested keys actually exist. Missing keys will be automatically created if they do not exist. The default delimiter for the keys is ':', however, with the delimiter-parameter, a different delimiter can be specified. Examples:
Returns:
append_dict_key_valueNew in version 3000. Allows you to append to a list nested (deep) in a dictionary without having to worry if all the nested keys (or the list itself) actually exist. Missing keys will automatically be created if they do not exist. The default delimiter for the keys is ':', however, with the delimiter-parameter, a different delimiter can be specified. Examples:
Returns:
extend_dict_key_valueNew in version 3000. Allows you to extend a list nested (deep) in a dictionary without having to worry if all the nested keys (or the list itself) actually exist. Missing keys will automatically be created if they do not exist. The default delimiter for the keys is ':', however, with the delimiter-parameter, a different delimiter can be specified. Examples:
Returns:
update_dict_key_valueNew in version 3000. Allows you to update a dictionary nested (deep) in another dictionary without having to worry if all the nested keys actually exist. Missing keys will automatically be created if they do not exist. The default delimiter for the keys is ':', however, with the delimiter-parameter, a different delimiter can be specified. Examples:
md5New in version 2017.7.0. Return the md5 digest of a string. Example: {{ 'random' | md5 }}
Returns: 7ddf32e17a6ac5ce04a8ecbf782ca509 sha256New in version 2017.7.0. Return the sha256 digest of a string. Example: {{ 'random' | sha256 }}
Returns: a441b15fe9a3cf56661190a0b93b9dec7d04127288cc87250967cf3b52894d11 sha512New in version 2017.7.0. Return the sha512 digest of a string. Example: {{ 'random' | sha512 }}
Returns: 811a90e1c8e86c7b4c0eef5b2c0bf0ec1b19c4b1b5a242e6455be93787cb473cb7bc9b0fdeb960d00d5c6881c2094dd63c5c900ce9057255e2a4e271fc25fef1 base64_encodeNew in version 2017.7.0. Encode a string as base64. Example: {{ 'random' | base64_encode }}
Returns: cmFuZG9t base64_decodeNew in version 2017.7.0. Decode a base64-encoded string. {{ 'Z2V0IHNhbHRlZA==' | base64_decode }}
Returns: get salted hmacNew in version 2017.7.0. Verify a challenging hmac signature against a string / shared-secret. Returns a boolean value. Example: {{ 'get salted' | hmac('shared secret', 'eBWf9bstXg+NiP5AOwppB5HMvZiYMPzEM9W5YMm/AmQ=') }}
Returns: True http_queryNew in version 2017.7.0. Return the HTTP reply object from a URL. Example: {{ 'http://jsonplaceholder.typicode.com/posts/1' | http_query }}
Returns: {
traverseNew in version 2018.3.3. Traverse a dict or list using a colon-delimited target string. The target 'foo:bar:0' will return data['foo']['bar'][0] if this value exists, and will otherwise return the provided default value. Example: {{ {'a1': {'b1': {'c1': 'foo'}}, 'a2': 'bar'} | traverse('a1:b1', 'default') }}
Returns: {"c1": "foo"}
{{ {'a1': {'b1': {'c1': 'foo'}}, 'a2': 'bar'} | traverse('a2:b2', 'default') }}
Returns: "default" json_queryNew in version 3000. A port of Ansible json_query Jinja filter to make queries against JSON data using JMESPath language. Could be used to filter pillar data, yaml maps, and together with http_query. Depends on the jmespath Python module. Examples: Example 1: {{ [1, 2, 3, 4, [5, 6]] | json_query('[]') }}
Example 2: {{
{"machines": [
Returns: Example 1: [1, 2, 3, 4, 5, 6] Example 2: ['a', 'c'] Example 3: [80, 25, 22] to_snake_caseNew in version 3000. Converts a string from camelCase (or CamelCase) to snake_case. Example: {{ camelsWillLoveThis | to_snake_case }}
Returns: Example: camels_will_love_this to_camelcaseNew in version 3000. Converts a string from snake_case to camelCase (or UpperCamelCase if so indicated). Example 1: {{ snake_case_for_the_win | to_camelcase }}
Example 2: {{ snake_case_for_the_win | to_camelcase(uppercamel=True) }}
Returns: Example 1: snakeCaseForTheWin Example 2: SnakeCaseForTheWin human_to_bytesNew in version 3005. Given a human-readable byte string (e.g. 2G, 30MB, 64KiB), return the number of bytes. Will return 0 if the argument has unexpected form. Example 1: {{ "32GB" | human_to_bytes }}
Example 2: {{ "32GB" | human_to_bytes(handle_metric=True) }}
Example 3: {{ "32" | human_to_bytes(default_unit="GiB") }}
Returns: Example 1: 34359738368 Example 2: 32000000000 Example 3: 34359738368 Networking FiltersThe following networking-related filters are supported: is_ipNew in version 2017.7.0. Return if a string is a valid IP Address. {{ '192.168.0.1' | is_ip }}
Additionally accepts the following options:
Example - test if a string is a valid loopback IP address. {{ '192.168.0.1' | is_ip(options='loopback') }}
is_ipv4New in version 2017.7.0. Returns if a string is a valid IPv4 address. Supports the same options as is_ip. {{ '192.168.0.1' | is_ipv4 }}
is_ipv6New in version 2017.7.0. Returns if a string is a valid IPv6 address. Supports the same options as is_ip. {{ 'fe80::' | is_ipv6 }}
ipaddrNew in version 2017.7.0. From a list, returns only valid IP entries. Supports the same options as is_ip. The list can contains also IP interfaces/networks. Example: {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipaddr }}
Returns: ["192.168.0.1", "fe80::"] ipv4New in version 2017.7.0. From a list, returns only valid IPv4 entries. Supports the same options as is_ip. The list can contains also IP interfaces/networks. Example: {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipv4 }}
Returns: ["192.168.0.1"] ipv6New in version 2017.7.0. From a list, returns only valid IPv6 entries. Supports the same options as is_ip. The list can contains also IP interfaces/networks. Example: {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipv6 }}
Returns: ["fe80::"] ipwrapNew in version 3006.0. From a string, list, or tuple, returns any IPv6 addresses wrapped in square brackets([]) Example: {{ ['192.0.2.1', 'foo', 'bar', 'fe80::', '2001:db8::1/64'] | ipwrap }}
Returns: ["192.0.2.1", "foo", "bar", "[fe80::]", "[2001:db8::1]/64"] network_hostsNew in version 2017.7.0. Return the list of hosts within a networks. This utility works for both IPv4 and IPv6. NOTE: When running this command with a large IPv6 network, the
command will take a long time to gather all of the hosts.
Example: {{ '192.168.0.1/30' | network_hosts }}
Returns: ["192.168.0.1", "192.168.0.2"] network_sizeNew in version 2017.7.0. Return the size of the network. This utility works for both IPv4 and IPv6. Example: {{ '192.168.0.1/8' | network_size }}
Returns: 16777216 gen_macNew in version 2017.7.0. Generates a MAC address with the defined OUI prefix. Common prefixes:
Example: {{ '00:50' | gen_mac }}
Returns: 00:50:71:52:1C mac_str_to_bytesNew in version 2017.7.0. Converts a string representing a valid MAC address to bytes. Example: {{ '00:11:22:33:44:55' | mac_str_to_bytes }}
NOTE: This option may have adverse effects when using the
default renderer, jinja|yaml. This is due to the fact that YAML
requires proper handling in regard to special characters. Please see the
section on YAML ASCII support in the YAML Idiosyncrasies
documentation for more information.
dns_checkNew in version 2017.7.0. Return the ip resolved by dns, but do not exit on failure, only raise an exception. Obeys system preference for IPv4/6 address resolution. Example: {{ 'www.google.com' | dns_check(port=443) }}
Returns: '172.217.3.196' File filtersis_text_fileNew in version 2017.7.0. Return if a file is text. Uses heuristics to guess whether the given file is text or binary, by reading a single block of bytes from the file. If more than 30% of the chars in the block are non-text, or there are NUL ('x00') bytes in the block, assume this is a binary file. Example: {{ '/usr/local/etc/salt/master' | is_text_file }}
Returns: True is_binary_fileNew in version 2017.7.0. Return if a file is binary. Detects if the file is a binary, returns bool. Returns True if the file is a bin, False if the file is not and None if the file is not available. Example: {{ '/usr/local/etc/salt/master' | is_binary_file }}
Returns: False is_empty_fileNew in version 2017.7.0. Return if a file is empty. Example: {{ '/usr/local/etc/salt/master' | is_empty_file }}
Returns: False file_hashsumNew in version 2017.7.0. Return the hashsum of a file. Example: {{ '/usr/local/etc/salt/master' | file_hashsum }}
Returns: 02d4ef135514934759634f10079653252c7ad594ea97bd385480c532bca0fdda list_filesNew in version 2017.7.0. Return a recursive list of files under a specific path. Example: {{ '/usr/local/etc/salt/' | list_files | join('\n') }}
Returns: /usr/local/etc/salt/master /usr/local/etc/salt/proxy /usr/local/etc/salt/minion /usr/local/etc/salt/pillar/top.sls /usr/local/etc/salt/pillar/device1.sls path_joinNew in version 2017.7.0. Joins absolute paths. Example: {{ '/usr/local/etc/salt/' | path_join('pillar', 'device1.sls') }}
Returns: /usr/local/etc/salt/pillar/device1.sls whichNew in version 2017.7.0. Python clone of /usr/bin/which. Example: {{ 'salt-master' | which }}
Returns: /usr/local/salt/virtualenv/bin/salt-master TestsSaltstack extends builtin tests with these custom tests: equaltoTests the equality between two values. Can be used in an if statement directly: {% if 1 is equalto(1) %}
If clause evaluates to True or with the selectattr filter: {{ [{'value': 1}, {'value': 2} , {'value': 3}] | selectattr('value', 'equalto', 3) | list }}
Returns: [{"value": 3}]
matchTests that a string matches the regex passed as an argument. Can be used in a if statement directly: {% if 'a' is match('[a-b]') %}
If clause evaluates to True or with the selectattr filter: {{ [{'value': 'a'}, {'value': 'b'}, {'value': 'c'}] | selectattr('value', 'match', '[b-e]') | list }}
Returns: [{"value": "b"}, {"value": "c"}]
Test supports additional optional arguments: ignorecase, multiline Escape filtersregex_escapeNew in version 2017.7.0. Allows escaping of strings so they can be interpreted literally by another function. Example: regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}
will be rendered as: regex_escape = https\:\/\/example\.com\?foo\=bar\%20baz Set Theory FiltersuniqueNew in version 2017.7.0. Performs set math using Jinja filters. Example: unique = {{ ['foo', 'foo', 'bar'] | unique }}
will be rendered as: unique = ['foo', 'bar'] Global FunctionsSalt Project extends builtin global functions with these custom global functions: ifelseEvaluate each pair of arguments up to the last one as a (matcher, value) tuple, returning value if matched. If none match, returns the last argument. The ifelse function is like a multi-level if-else statement. It was inspired by CFEngine's ifelse function which in turn was inspired by Oracle's DECODE function. It must have an odd number of arguments (from 1 to N). The last argument is the default value, like the else clause in standard programming languages. Every pair of arguments before the last one are evaluated as a pair. If the first one evaluates true then the second one is returned, as if you had used the first one in a compound match expression. Boolean values can also be used as the first item in a pair, as it will be translated to a match that will always match ("*") or never match ("SALT_IFELSE_MATCH_NOTHING") a target system. This is essentially another way to express the match.filter_by functionality in way that's familiar to CFEngine or Oracle users. Consider using match.filter_by unless this function fits your workflow. {{ ifelse('foo*', 'fooval', 'bar*', 'barval', 'defaultval', minion_id='bar03') }}
Jinja in FilesJinja can be used in the same way in managed files: # redis.sls /etc/redis/redis.conf: # lib.sls
{% set port = 6379 %}
# redis.conf
{% from 'lib.sls' import port with context %}
port {{ port }}
bind {{ bind }}
As an example, configuration was pulled from the file context and from an external template file. NOTE: Macros and variables can be shared across templates. They
should not start with one or more underscores, and should be managed by one of
the following tags: macro, set, load_yaml,
load_json, import_yaml and import_json.
Escaping JinjaOccasionally, it may be necessary to escape Jinja syntax. There are two ways to do this in Jinja. One is escaping individual variables or strings and the other is to escape entire blocks. To escape a string commonly used in Jinja syntax such as {{, you can use the following syntax: {{ '{{' }}
For larger blocks that contain Jinja syntax that needs to be escaped, you can use raw blocks: {% raw %}
See the Escaping section of Jinja's documentation to learn more. A real-word example of needing to use raw tags to escape a larger block of code is when using file.managed with the contents_pillar option to manage files that contain something like consul-template, which shares a syntax subset with Jinja. Raw blocks are necessary here because the Jinja in the pillar would be rendered before the file.managed is ever called, so the Jinja syntax must be escaped: {% raw %}
- contents_pillar: |
Calling Salt FunctionsThe Jinja renderer provides a shorthand lookup syntax for the salt dictionary of execution function. New in version 2014.7.0. # The following two function calls are equivalent.
{{ salt['cmd.run']('whoami') }}
{{ salt.cmd.run('whoami') }}
DebuggingThe show_full_context function can be used to output all variables present in the current Jinja context. New in version 2014.7.0. Context is: {{ show_full_context()|yaml(False) }}
LogsNew in version 2017.7.0. Yes, in Salt, one is able to debug a complex Jinja template using the logs. For example, making the call: {%- do salt.log.error('testing jinja logging') -%}
Will insert the following message in the minion logs: 2017-02-01 01:24:40,728 [salt.module.logmod][ERROR ][3779] testing jinja logging ProfilingNew in version 3002. When working with a very large codebase, it becomes increasingly imperative to trace inefficiencies with state and pillar render times. The profile jinja block enables the user to get finely detailed information on the most expensive areas in the codebase. Profiling blocksAny block of jinja code can be wrapped in a profile block. The syntax for a profile block is {% profile as '<name>' %}<jinja code>{% endprofile %}, where <name> can be any string. The <name> token will appear in the log at the profile level along with the render time of the block. # /usr/local/etc/salt/states/example.sls
{%- profile as 'local data' %}
The profile block in the example.sls state will emit the following log statement: # salt-call --local -l profile state.apply example [...] [PROFILE ] Time (in seconds) to render profile block 'local data': 0.9385035037994385 [...] Profiling importsUsing the same logic as the profile block, the import_yaml, import_json, and import_text blocks will emit similar statements at the profile log level. # /usr/local/etc/salt/states/data.sls
{%- set values = {'counter': 0} %}
{%- for i in range(524288) %}
# /usr/local/etc/salt/states/example.sls
{%- import_yaml 'data.sls' as imported %}
test:
For import_* blocks, the profile log statement has the following form: # salt-call --local -l profile state.apply example [...] [PROFILE ] Time (in seconds) to render import_yaml 'data.sls': 1.5500736236572266 [...] Python MethodsA powerful feature of jinja that is only hinted at in the official jinja documentation is that you can use the native python methods of the variable type. Here is the python documentation for string methods. {% set hostname,domain = grains.id.partition('.')[::2] %}{{ hostname }}
{% set strings = grains.id.split('-') %}{{ strings[0] }}
Custom Execution ModulesCustom execution modules can be used to supplement or replace complex Jinja. Many tasks that require complex looping and logic are trivial when using Python in a Salt execution module. Salt execution modules are easy to write and distribute to Salt minions. Functions in custom execution modules are available in the Salt execution module dictionary just like the built-in execution modules: {{ salt['my_custom_module.my_custom_function']() }}
Custom Jinja filtersGiven that all execution modules are available in the Jinja template, one can easily define a custom module as in the previous paragraph and use it as a Jinja filter. However, please note that it will not be accessible through the pipe. For example, instead of: {{ my_variable | my_jinja_filter }}
The user will need to define my_jinja_filter function under an extension module, say my_filters and use as: {{ salt.my_filters.my_jinja_filter(my_variable) }}
The greatest benefit is that you are able to access thousands of existing functions, e.g.:
{{ salt.dnsutil.AAAA('www.google.com') }}
{{ salt.redis.hget('foo_hash', 'bar_field') }}
{{ salt.route.show('0.0.0.0/0') }}
Tutorials IndexAutoaccept minions from GrainsNew in version 2018.3.0. To automatically accept minions based on certain characteristics, e.g. the uuid you can specify certain grain values on the salt master. Minions with matching grains will have their keys automatically accepted.
autosign_grains_dir: /usr/local/etc/salt/autosign_grains
Place a file named like the grain in the autosign_grains_dir and write the values that should be accepted automatically inside that file. For example to automatically accept minions based on their uuid create a file named /usr/local/etc/salt/autosign_grains/uuid: 8f7d68e2-30c5-40c6-b84a-df7e978a03ee 1d3c5473-1fbc-479e-b0c7-877705a0730f If already running, the master must be restarted for these config changes to take effect. The master is now setup to accept minions with either of the two specified uuids. Multiple values must always be written into separate lines. Lines starting with a # are ignored.
autosign_grains: Now you should be able to start salt-minion and run salt-call state.apply or any other salt commands that require master authentication. Salt as a Cloud ControllerIn Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt. The Salt Virt system already exists and is installed within Salt itself. This means that besides setting up Salt, no additional salt code needs to be deployed. NOTE: The libvirt python module and the certtool
binary are required.
The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, and virtual machine migration with and without shared storage. This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the power of specialized hardware as well. Setting up HypervisorsThe first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces. Installing Hypervisor SoftwareSalt Virt is made to be hypervisor agnostic but currently, the only fully implemented hypervisor is KVM via libvirt. The required software for a hypervisor is libvirt and kvm. For advanced features, install libguestfs or qemu-nbd. NOTE: Libguestfs and qemu-nbd allow for virtual machine images
to be mounted before startup and get pre-seeded with configurations and a salt
minion.
This sls will set up the needed software for a hypervisor, and run the routines to set up the libvirt pki keys. NOTE: Package names and setup used is Red Hat specific.
Different package names will be required for different platforms.
libvirt: Hypervisor Network SetupThe hypervisors will need to be running a network bridge to serve up network devices for virtual machines. This formula will set up a standard bridge on a hypervisor connecting the bridge to eth0: eth0: Virtual Machine Network SetupSalt Virt comes with a system to model the network interfaces used by the deployed virtual machines. By default, a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device. NOTE: To use more advanced networking in Salt Virt, read the
Salt Virt Networking document:
Salt Virt Networking Libvirt StateOne of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority, and uses pillar to distribute them. This is managed via the libvirt state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date: NOTE: The above formula includes the calls needed to set up
libvirt keys.
libvirt_keys: Getting Virtual Machine Images ReadySalt Virt requires that virtual machine images be provided as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform. Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors. Virtual Machine generation applications are available for many platforms:
url vmbuilder-formula
Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /usr/local/etc/salt/states and it can now be used by Salt Virt. For purposes of this demo, the file name centos.img will be used. Existing Virtual Machine ImagesMany existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK. CentOSThese images have been prepared for OpenNebula but should work without issue with Salt Virt, only the raw qcow image file is needed: https://wiki.centos.org/Cloud/OpenNebula Fedora LinuxImages for Fedora Linux can be found here: https://alt.fedoraproject.org/cloud openSUSEhttps://download.opensuse.org/distribution/leap/15.1/jeos/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-kvm-and-xen-Current.qcow2.meta4 SUSEhttps://www.suse.com/products/server/jeos Ubuntu LinuxImages for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/ Using Salt VirtWith hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands using the virt runner. Start by running a Salt Virt hypervisor info command: salt-run virt.host_info This will query the running hypervisor(s) for stats and display useful information such as the number of CPUs and amount of memory. You can also list all VMs and their current states on all hypervisor nodes: salt-run virt.list Now that hypervisors are available a virtual machine can be provisioned, the virt.init routine will create a new virtual machine: salt-run virt.init centos1 2 512 salt://centos.img The Salt Virt runner will now automatically select a hypervisor to deploy the new virtual machine on. Using salt:// assumes that the CentOS virtual machine image is located in the root of the Salt File Server on the master. When images are cloned (i.e. copied locally after retrieval from the file server), the destination directory on the hypervisor minion is determined by the virt:images config option; by default this is /usr/local/etc/salt/states-images/. When a VM is initialized using virt.init, the image is copied to the hypervisor using cp.cache_file and will be mounted and seeded with a minion. Seeding includes setting pre-authenticated keys on the new machine. A minion will only be installed if one can not be found on the image using the default arguments to seed.apply. NOTE: The biggest bottleneck in starting VMs is when the Salt
Minion needs to be installed. Making sure that the source VM images already
have Salt installed will GREATLY speed up virtual machine deployment.
You can also deploy an image on a particular minion by directly calling the virt execution module with an absolute image path. This can be quite handy for testing: salt 'hypervisor*' virt.init centos1 2 512 image=/var/lib/libvirt/images/centos.img Now that the new VM has been prepared, it can be seen via the virt.query command: salt-run virt.query This command will return data about all of the hypervisors and respective virtual machines. Now that the new VM is booted, it should have contacted the Salt Master. A test.ping will reveal if the new VM is running. QEMU Copy on Write SupportFor fast image cloning, you can use the qcow disk image format. Pass the enable_qcow flag and a .qcow2 image path to virt.init: salt 'hypervisor*' virt.init centos1 2 512 image=/var/lib/libvirt/images/centos.qcow2 enable_qcow=True start=False NOTE: Beware that attempting to boot a qcow image too quickly
after cloning can result in a race condition where libvirt may try to boot the
machine before image seeding has completed. For that reason, it is recommended
to also pass start=False to virt.init.
Also know that you must not modify the original base image without first making a copy and then rebasing all overlay images onto it. See the qemu-img rebase usage docs. Migrating Virtual MachinesSalt Virt comes with full support for virtual machine migration. Using the libvirt state in the above formula makes migration possible. A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow for libvirt and kvm to cross communicate and execution migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened on hypervisors: iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT NOTE: More in-depth information regarding distribution specific
firewall settings can be found in:
Opening the Firewall up for Salt Salt also needs the virt:tunnel option to be turned on. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without virt:tunnel, libvirt tries to bind to random ports when running migrations. To turn on virt:tunnel, simply apply it to the master config file: virt: Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change: salt \* saltutil.refresh_modules Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine: salt-run virt.migrate centos <new hypervisor> VNC ConsolesAlthough not enabled by default, Salt Virt can also set up VNC consoles allowing for remote visual consoles to be opened up. When creating a new VM using virt.init, pass the enable_vnc=True parameter to have a console configured for the new VM. The information from a virt.query routine will display the VNC console port for the specific VMs: centos The line Graphics: vnc - hyper6:5900 holds the key. First the port named, in this case 5900, will need to be available in the hypervisor's firewall. Once the port is open, then the console can be easily opened via vncviewer: vncviewer hyper6:5900 By default there is no VNC security set up on these ports, which suggests that keeping them firewalled and mandating that SSH tunnels be used to access these VNC interfaces. Keep in mind that activity on a VNC interface that is accessed can be viewed by any other user that accesses that same VNC interface, and any other user logging in can also operate with the logged in user on the virtual machine. ConclusionNow with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt. Running Salt States and Commands in Docker ContainersThe 2016.11.0 release of Salt introduces the ability to execute Salt States and Salt remote execution commands directly inside of Docker containers. This addition makes it possible to not only deploy fresh containers using Salt States. This also allows for running containers to be audited and modified using Salt, but without running a Salt Minion inside the container. Some of the applications include security audits of running containers as well as gathering operating data from containers. This new feature is simple and straightforward, and can be used via a running Salt Minion, the Salt Call command, or via Salt SSH. For this tutorial we will use the salt-call command, but like all salt commands these calls are directly translatable to salt and salt-ssh. Step 1 - Install DockerSince setting up Docker is well covered in the Docker documentation we will make no such effort to describe it here. Please see the Docker Installation Documentation for installing and setting up Docker: https://docs.docker.com/engine/installation/ The Docker integration also requires that the docker-py library is installed. This can easily be done using pip or via your system package manager: pip install docker-py Step 2 - Install SaltFor this tutorial we will be using Salt Call, which is available in the salt-minion package, please follow the Salt install guide. 
Step 3 - Create With Salt StatesNext some Salt States are needed, for this example a very basic state which installs vim is used, but anything Salt States can do can be done here, please see the Salt States Introduction Tutorial to learn more about Salt States: https://docs.saltproject.io/en/stage/getstarted/config/ For this tutorial, simply create a small state file in /usr/local/etc/salt/states/vim.sls: vim: NOTE: The base image you choose will need to have python 2.6 or
2.7 installed. We are hoping to resolve this constraint in a future release.
If base is omitted the default image used is a minimal openSUSE image with Python support, maintained by SUSE Next run the docker.sls_build command: salt-call --local dockerng.sls_build test base=my_base_image mods=vim Now we have a fresh image called test to work with and vim has been installed. Step 4 - Running Commands Inside the ContainerSalt can now run remote execution functions inside the container with another simple salt-call command: salt-call --local dockerng.call test test.version salt-call --local dockerng.call test network.interfaces salt-call --local dockerng.call test disk.usage salt-call --local dockerng.call test pkg.list_pkgs salt-call --local dockerng.call test service.running httpd salt-call --local dockerng.call test cmd.run 'ls -l /etc' Automatic Updates / Frozen DeploymentsNew in version 0.10.3.d. Salt has support for the Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies - including shared objects / DLLs. Getting StartedTo build frozen applications, suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A. This process does work on Windows. Directions are available at https://github.com/saltstack/salt-windows-install for details on installing Salt in Windows. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows. Install bbfreeze, and then esky from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies. Building and FreezingOnce you have your tools installed and the environment configured, use setup.py to prepare the distribution files. python setup.py sdist python setup.py bdist Once the distribution files are in place, Esky can be used traverse the module tree and pack all the scripts up into a redistributable. python setup.py bdist_esky There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly. WindowsC:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up. Using the Frozen BuildUnpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the Esky documentation for more information) To support updating your minions in the wild, put the builds on a web server that the minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version. TroubleshootingA Windows minion isn't respondingThe process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return. Windows and the Visual Studio RedistThe Visual C++ 2008 32-bit redistributable will need to be installed on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed. Mixed Linux environments and YumThe Yum Python module doesn't appear to be available on any of the standard Python package mirrors. 
If RHEL/CentOS systems need to be supported, the frozen build should created on that platform to support all the Linux nodes. Remember to build the virtualenv with --system-site-packages so that the yum module is included. Automatic (Python) module discoveryAutomatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included. ESXi Proxy MinionNew in version 2015.8.4. NOTE: This tutorial assumes basic knowledge of Salt. To get up
to speed, check out the Salt Walkthrough.
This tutorial also assumes a basic understanding of Salt Proxy Minions. If you're unfamiliar with Salt's Proxy Minion system, please read the Salt Proxy Minion documentation and the Salt Proxy Minion End-to-End Example tutorial. The third assumption that this tutorial makes is that you also have a basic understanding of ESXi hosts. You can learn more about ESXi hosts on VMware's various resources. Salt's ESXi Proxy Minion allows a VMware ESXi host to be treated as an individual Salt Minion, without installing a Salt Minion on the ESXi host. Since an ESXi host may not necessarily run on an OS capable of hosting a Python stack, the ESXi host can't run a regular Salt Minion directly. Therefore, Salt's Proxy Minion functionality enables you to designate another machine to host a proxy process that "proxies" communication from the Salt Master to the ESXi host. The master does not know or care that the ESXi target is not a "real" Salt Minion. More in-depth conceptual reading on Proxy Minions can be found in the Proxy Minion section of Salt's documentation. Salt's ESXi Proxy Minion was added in the 2015.8.4 release of Salt. NOTE: Be aware that some functionality for the ESXi Proxy
Minion may depend on the type of license attached the ESXi host(s).
For example, certain services are only available to manipulate service state or policies with a VMware vSphere Enterprise or Enterprise Plus license, while others are available with a Standard license. The ntpd service is restricted to an Enterprise Plus license, while ssh is available via the Standard license. Please see the vSphere Comparison page for more information. DependenciesManipulation of the ESXi host via a Proxy Minion requires the machine running the Proxy Minion process to have the ESXCLI package (and all of its dependencies) and the pyVmomi Python Library to be installed. ESXi PasswordThe ESXi Proxy Minion uses VMware's API to perform tasks on the host as if it was a regular Salt Minion. In order to access the API that is already running on the ESXi host, the ESXi host must have a username and password that is used to log into the host. The username is usually root. Before Salt can access the ESXi host via VMware's API, a default password must be set on the host. pyVmomiThe pyVmomi Python library must be installed on the machine that is running the proxy process. pyVmomi can be installed via pip: pip install pyVmomi NOTE: Version 6.0 of pyVmomi has some problems with SSL error
handling on certain versions of Python. If using version 6.0 of pyVmomi, the
machine that you are running the proxy minion process from must have either
Python 2.6, Python 2.7.9, or newer. This is due to an upstream dependency in
pyVmomi 6.0 that is not supported in Python version 2.7 to 2.7.8. If the
version of Python running the proxy process is not in the supported range, you
will need to install an earlier version of pyVmomi. See Issue #29537
for more information.
Based on the note above, to install an earlier version of pyVmomi than the version currently listed in PyPi, run the following: pip install pyVmomi==5.5.0.2014.1.1 The 5.5.0.2014.1.1 is a known stable version that the original ESXi Proxy Minion was developed against. ESXCLICurrently, about a third of the functions used for the ESXi Proxy Minion require the ESXCLI package be installed on the machine running the Proxy Minion process. The ESXCLI package is also referred to as the VMware vSphere CLI, or vCLI. VMware provides vCLI package installation instructions for vSphere 5.5 and vSphere 6.0. Once all of the required dependencies are in place and the vCLI package is installed, you can check to see if you can connect to your ESXi host by running the following command: esxcli -s <host-location> -u <username> -p <password> system syslog config get If the connection was successful, ESXCLI was successfully installed on your system. You should see output related to the ESXi host's syslog configuration. ConfigurationThere are several places where various configuration values need to be set in order for the ESXi Proxy Minion to run and connect properly. Proxy Config FileOn the machine that will be running the Proxy Minion process(es), a proxy config file must be in place. This file should be located in the /usr/local/etc/salt/ directory and should be named proxy. If the file is not there by default, create it. This file should contain the location of your Salt Master that the Salt Proxy will connect to. Example Proxy Config File: # /usr/local/etc/salt/proxy master: <salt-master-location> Pillar ProfilesProxy minions get their configuration from Salt's Pillar. Every proxy must have a stanza in Pillar and a reference in the Pillar top-file that matches the Proxy ID. At a minimum for communication with the ESXi host, the pillar should look like this: proxy: Some other optional settings are protocol and port. These can be added to the pillar configuration. proxytypeThe proxytype key and value pair is critical, as it tells Salt which interface to load from the proxy directory in Salt's install hierarchy, or from /usr/local/etc/salt/states/_proxy on the Salt Master (if you have created your own proxy module, for example). To use this ESXi Proxy Module, set this to esxi. hostThe location, or ip/dns, of the ESXi host. Required. usernameThe username used to login to the ESXi host, such as root. Required. passwordsA list of passwords to be used to try and login to the ESXi host. At least one password in this list is required. The proxy integration will try the passwords listed in order. It is configured this way so you can have a regular password and the password you may be updating for an ESXi host either via the vsphere.update_host_password execution module function or via the esxi.password_present state function. This way, after the password is changed, you should not need to restart the proxy minion--it should just pick up the new password provided in the list. You can then change pillar at will to move that password to the front and retire the unused ones. Use-case/reasoning for using a list of passwords: You are setting up an ESXi host for the first time, and the host comes with a default password. You know that you'll be changing this password during your initial setup from the default to a new password. 
If you only have one password option, and if you have a state changing the password, any remote execution commands or states that run after the password change will not be able to run on the host until the password is updated in Pillar and the Proxy Minion process is restarted. This allows you to use any number of potential fallback passwords. NOTE: When a password is changed on the host to one in the list
of possible passwords, the further down on the list the password is, the
longer individual commands will take to return. This is due to the nature of
pyVmomi's login system. We have to wait for the first attempt to fail before
trying the next password on the list.
This scenario is especially true, and even slower, when the proxy minion first starts. If the correct password is not the first password on the list, it may take up to a minute for test.version to respond with salt's version installed (Example: 2018.3.4. Once the initial authorization is complete, the responses for commands will be a little faster. To avoid these longer waiting periods, SaltStack recommends moving the correct password to the top of the list and restarting the proxy minion at your earliest convenience. protocolIf the ESXi host is not using the default protocol, set this value to an alternate protocol. Default is https. For example: portIf the ESXi host is not using the default port, set this value to an alternate port. Default is 443. Example Configuration FilesAn example of all of the basic configurations that need to be in place before starting the Proxy Minion processes includes the Proxy Config File, Pillar Top File, and any individual Proxy Minion Pillar files. In this example, we'll assuming there are two ESXi hosts to connect to. Therefore, we'll be creating two Proxy Minion config files, one config for each ESXi host. Proxy Config File: # /usr/local/etc/salt/proxy master: <salt-master-location> Pillar Top File: # /usr/local/etc/salt/pillar/top.sls base: Pillar Config File for the first ESXi host, esxi-1: # /usr/local/etc/salt/pillar/esxi-1.sls proxy: Pillar Config File for the second ESXi host, esxi-2: # /usr/local/etc/salt/pillar/esxi-2.sls proxy: Starting the Proxy MinionOnce all of the correct configuration files are in place, it is time to start the proxy processes!
salt-proxy --proxyid='esxi-1' -l debug
# salt-key -L Accepted Keys: Denied Keys: Unaccepted Keys: esxi-1 Rejected Keys: # # salt-key -a esxi-1 The following keys are going to be accepted: Unaccepted Keys: esxi-1 Proceed? [n/Y] y Key for minion esxi-1 accepted.
salt-proxy --proxyid='esxi-2' -d
# salt-key -L Accepted Keys: esxi-1 Denied Keys: Unaccepted Keys: esxi-2 Rejected Keys: # # salt-key -a esxi-1 The following keys are going to be accepted: Unaccepted Keys: esxi-2 Proceed? [n/Y] y Key for minion esxi-1 accepted.
# salt 'esxi-*' test.version esxi-1: Executing CommandsNow that you've configured your Proxy Minions and have them responding successfully to a test.version, we can start executing commands against the ESXi hosts via Salt. It's important to understand how this particular proxy works, and there are a couple of important pieces to be aware of in order to start running remote execution and state commands against the ESXi host via a Proxy Minion: the vSphere Execution Module, the ESXi Execution Module, and the ESXi State Module. vSphere Execution ModuleThe Salt.modules.vsphere is a standard Salt execution module that does the bulk of the work for the ESXi Proxy Minion. If you pull up the docs for it you'll see that almost every function in the module takes credentials (username and password) and a target host argument. When credentials and a host aren't passed, Salt runs commands through pyVmomi or ESXCLI against the local machine. If you wanted, you could run functions from this module on any machine where an appropriate version of pyVmomi and ESXCLI are installed, and that machine would reach out over the network and communicate with the ESXi host. You'll notice that most of the functions in the vSphere module require a host, username, and password. These parameters are contained in the Pillar files and passed through to the function via the proxy process that is already running. You don't need to provide these parameters when you execute the commands. See the Running Remote Execution Commands section below for an example. ESXi Execution ModuleIn order for the Pillar information set up in the Configuration section above to be passed to the function call in the vSphere Execution Module, the salt.modules.esxi execution module acts as a "shim" between the vSphere execution module functions and the proxy process. The "shim" takes the authentication credentials specified in the Pillar files and passes them through to the host, username, password, and optional protocol and port options required by the vSphere Execution Module functions. If the function takes more positional, or keyword, arguments you can append them to the call. It's this shim that speaks to the ESXi host through the proxy, arranging for the credentials and hostname to be pulled from the Pillar section for the ESXi Proxy Minion. Because of the presence of the shim, to lookup documentation for what functions you can use to interface with the ESXi host, you'll want to look in salt.modules.vsphere instead of salt.modules.esxi. Running Remote Execution CommandsTo run commands from the Salt Master to execute, via the ESXi Proxy Minion, against the ESXi host, you use the esxi.cmd <vsphere-function-name> syntax to call functions located in the vSphere Execution Module. Both args and kwargs needed for various vsphere execution module functions must be passed through in a kwarg- type manor. For example: salt 'esxi-*' esxi.cmd system_info salt 'exsi-*' esxi.cmd get_service_running service_name='ssh' ESXi State ModuleThe ESXi State Module functions similarly to other state modules. The "shim" provided by the ESXi Execution Module passes the necessary host, username, and password credentials through, so those options don't need to be provided in the state. Other than that, state files are written and executed just like any other Salt state. See the salt.modules.esxi state for ESXi state functions. 
The following state file is an example of how to configure various pieces of an ESXi host, including enabling SSH, uploading an SSH key, configuring a coredump network config, syslog, NTP, enabling VMotion, resetting a host password, and more.

# /usr/local/etc/salt/states/configure-esxi.sls
configure-host-ssh:

States are called via the ESXi Proxy Minion just as they would be on a regular minion. For example:

salt 'esxi-*' state.sls configure-esxi test=true
salt 'esxi-*' state.sls configure-esxi

Relevant Salt Files and Resources
Opening the Firewall up for SaltThe Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master. NOTE: No firewall configuration needs to be done on Salt
minions. These changes refer to the master only.
Fedora 18 and beyond / RHEL 7 / CentOS 7
Starting with Fedora 18, FirewallD is the tool used to dynamically manage firewall rules on a host. It supports IPv4/IPv6 settings and the separation of runtime and permanent configurations. To interact with FirewallD, use the command line client firewall-cmd.

firewall-cmd example:

firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp

A network zone defines the security level of trust for the network. The user should choose an appropriate zone value for their setup. Possible values include: drop, block, public, external, dmz, work, home, internal, trusted.

Don't forget to reload after you have made your changes.

firewall-cmd --reload

RHEL 6 / CentOS 6
The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful not to lock yourself out of the server by neglecting to open the ssh port.

lokkit example:

lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp

The system-config-firewall-tui command provides a text-based interface for modifying the firewall.

system-config-firewall-tui:

system-config-firewall-tui

openSUSE
Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with:

SuSEfirewall2 open
SuSEfirewall2 start

If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the command line.

SuSEfirewall2 example:

SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506

The firewall module in YaST2 provides a text-based interface for modifying the firewall.

YaST2:

yast2 firewall

Windows
Windows Firewall is the default component of Microsoft Windows that provides firewalling and packet filtering. There are many third-party firewalls available for Windows, some of which use rules from the Windows Firewall. If you are experiencing problems, see the vendor's specific documentation for opening the required ports. The Windows Firewall can be configured using the Windows Interface or from the command line.

Windows Firewall (interface):
Windows Firewall (command line):
The Windows Firewall rule can be created by issuing a single command. Run the following command from the command line or a run prompt:

netsh advfirewall firewall add rule name="Salt" dir=in action=allow protocol=TCP localport=4505-4506

iptables
Different Linux distributions store their iptables (also known as netfilter) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary.

Fedora / RHEL / CentOS:
/etc/sysconfig/iptables

Arch Linux:
/etc/iptables/iptables.rules

Debian:
Follow these instructions: https://wiki.debian.org/iptables

Once you've found your firewall rules, you'll need to add the line below to allow traffic on tcp/4505 and tcp/4506:

-A INPUT -m state --state new -m tcp -p tcp --dport 4505:4506 -j ACCEPT

Ubuntu:
Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:

ufw allow salt

pf.conf
The BSD family of operating systems uses packet filter (pf). The following example describes the addition to pf.conf needed to access the Salt master.

pass in on $int_if proto tcp from any to $int_if port 4505:4506

Once this addition has been made to pf.conf, the rules will need to be reloaded. This can be done using the pfctl command.

pfctl -vf /etc/pf.conf

Whitelist communication to Master
There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is preventing unwanted traffic to your Master out of security concerns, but another scenario is handling Minion upgrades when there are backwards-incompatible changes between the installed Salt versions in your environment. Here is an example Linux iptables ruleset to be set on the Master:

# Allow Minions from these networks
-I INPUT -s 10.1.2.0/24 -p tcp --dports 4505:4506 -j ACCEPT
-I INPUT -s 10.1.3.0/24 -p tcp --dports 4505:4506 -j ACCEPT
# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp --dports 4505:4506 -j ACCEPT
# Reject everything else
-A INPUT -p tcp --dports 4505:4506 -j REJECT

NOTE: The important thing to note here is that the salt
command needs to communicate with the listening network socket of
salt-master on the loopback interface. Without this you will see
no outgoing Salt traffic from the master, even for a simple salt '*'
test.version, because the salt client never reached the
salt-master to tell it to carry out the execution.
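Where the master's firewall is managed by FirewallD rather than raw iptables, a similar whitelist can be sketched with rich rules. This is a hedged example, not from the original document; the zone and source networks are placeholders to adapt to your environment:

firewall-cmd --permanent --zone=public \
  --add-rich-rule='rule family="ipv4" source address="10.1.2.0/24" port port="4505-4506" protocol="tcp" accept'
firewall-cmd --reload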
HTTP Modules
This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows.

The salt.utils.http Library
This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.

Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.

This library can be imported with:

import salt.utils.http

Configuring Libraries
This library can make use of either tornado, which is required by Salt, urllib2, which ships with Python, or requests, which can be installed separately. By default, tornado will be used. In order to switch to urllib2, set the following variable:

backend: urllib2

In order to switch to requests, set the following variable:

backend: requests

This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions.

salt.utils.http.query()
This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries. Not all functionality currently available in these libraries has been added, but can be in future iterations.

HTTPS Request Methods
A basic query can be performed by calling this function with no more than a single URL:

salt.utils.http.query("http://example.com")
By default the query will be performed with a GET method. The method can be overridden with the method argument: salt.utils.http.query("http://example.com/delete/url", "DELETE")
When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly (it will be URL-encoded when necessary), or in whatever format is required by the remote server (XML, JSON, plain text, etc.).

salt.utils.http.query(

Data Formatting and Templating
Bear in mind that the data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated):

salt.utils.http.query(

To pass through a file that contains jinja + yaml templating (the default):

salt.utils.http.query(

To pass through a file that contains mako templating:

salt.utils.http.query(

Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary.

salt.utils.http.query(

Headers
Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as a Python dict.

salt.utils.http.query(

Because much of the data that would be templated between headers and data may be the same, the template_dict is the same for both. Correcting possible variable name collisions is up to the user.

Authentication
The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively.

salt.utils.http.query("http://example.com", username="larry", password="5700g3543v4r")
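As a hedged sketch of complete calls using the arguments described above (data, data_file, data_render, template_dict); URLs, paths, and values are placeholders:

import json

import salt.utils.http

# POST pre-formatted JSON data directly.
salt.utils.http.query(
    "http://example.com/post/url",
    method="POST",
    data=json.dumps({"key1": "value1"}),
)

# POST a local file, rendered through the default jinja + yaml renderer first.
salt.utils.http.query(
    "http://example.com/post/url",
    method="POST",
    data_file="/usr/local/etc/salt/states/somefile.jinja",
    data_render=True,
    template_dict={"spam": "eggs"},
)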
Cookies and Sessions
Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True.

salt.utils.http.query("http://example.com", cookies=True)
By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argument:

salt.utils.http.query(

By default, the format of the cookie jar is LWP (aka lib-www-perl). This default was chosen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla:

salt.utils.http.query(

Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a normal browser, with session cookies that persist across multiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed.

salt.utils.http.query(

The format of this file is msgpack, which is consistent with much of the rest of Salt's internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable.

Proxy
If the tornado backend is used (tornado is the default), proxy information configured in proxy_host, proxy_port, proxy_username, proxy_password and no_proxy from the __opts__ dictionary will be used. Normally these are set in the minion configuration file.

proxy_host: proxy.my-domain
proxy_port: 31337
proxy_username: charon
proxy_password: obolus
no_proxy: ['127.0.0.1', 'localhost']

salt.utils.http.query("http://example.com", opts=__opts__, backend="tornado")
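Likewise, a hedged sketch of complete calls using the cookie_jar, cookie_format, persist_session, and session_cookie_jar arguments described above (paths are placeholders):

import salt.utils.http

# Store cookies in a custom jar, using the Mozilla format.
salt.utils.http.query(
    "http://example.com",
    cookies=True,
    cookie_jar="/path/to/cookie_jar.txt",
    cookie_format="mozilla",
)

# Persist session cookies across invocations in a separate jar.
salt.utils.http.query(
    "http://example.com",
    persist_session=True,
    session_cookie_jar="/path/to/session_cookie_jar.p",
)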
Return Data
NOTE: Return data encoding
If decode is set to True, query() will attempt to decode the return data. decode_type defaults to auto. Set it to a specific encoding, xml, for example, to override autodetection. Because Salt's http library was designed to be used with REST interfaces, query() will attempt to decode the data received from the remote server when decode is set to True. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded. JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set: salt.utils.http.query("http://example.com", decode_type="xml")
Once translated, the return dict from query() will include a dict called dict. If the data is not to be translated using one of these methods, decoding may be turned off. salt.utils.http.query("http://example.com", decode=False)
If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below). The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on. salt.utils.http.query("http://example.com", status=True, headers=True, text=True)
The return from these will be found in the return dict as status, headers and text, respectively.

Writing Return Data to Files
It is possible to write either the return data or headers to files, as soon as the response is received from the server, by specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this.

salt.utils.http.query(

SSL Verification
By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off.

salt.utils.http.query("https://example.com", verify_ssl=False)
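As a hedged sketch of a complete call using text_out and headers_out (paths are placeholders):

import salt.utils.http

# Write the response body and headers to files as they are received.
salt.utils.http.query(
    "http://example.com",
    text_out="/path/to/url_download.txt",
    headers_out="/path/to/headers_download.txt",
)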
CA Bundles
The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable.

salt.utils.http.query("https://example.com", ca_bundle="/path/to/ca_bundle.pem")
Updating CA Bundles
The update_ca_bundle() function can be used to update the bundle file at a specified location. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from does not exist, a bundle will be downloaded from the cURL website.

CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be downloaded which is hazardous or does not meet the needs of the user.

salt.utils.http.update_ca_bundle(

The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively.

ca_bundle: /path/to/ca-bundle.crt
ca_bundle_url: https://example.com/path/to/ca-bundle.crt

If Salt is unable to auto-detect the location of the CA bundle, it will raise an error.

The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file.

salt.utils.http.update_ca_bundle(

Test Mode
This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent.

Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary.

Execution Module
The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary. Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worst, the data_file and header_file are likely to see more use here.

All methods for the library are available in the execution module, as kwargs.

salt myminion http.query http://example.com/restapi method=POST \

Runner Module
Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config.

All methods for the library are available in the runner module, as kwargs.

salt-run http.query http://example.com/restapi method=POST \

State Module
The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text.
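As a hedged sketch, such a state might look like the following (the match value and credentials are placeholders):

http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - username: 'larry'
    - password: '5700g3543v4r'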
By default, this will perform a string comparison, looking for the value of match in the return text. In Python terms this looks like:

def myfunc():

If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match(). Therefore, the following states are valid:

http://example.com/restapi:

In addition to, or instead of, a match pattern, the status code for a URL can be checked. This is done using the status argument:

http://example.com/:

If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting.

Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.

Using Salt at scale
The focus of this tutorial will be building a Salt infrastructure for handling large numbers of minions. This will include tuning, topology, and best practices. For how to install the Salt Master, see the Salt install guide.

NOTE: This tutorial is intended for large installations;
these same settings won't hurt smaller installations, but they may not be worth the added complexity.
When used with minions, the term 'many' refers to at least a thousand and 'a few' always means 500. For simplicity reasons, this tutorial will default to the standard ports used by Salt.

The Master
The most common problems on the Salt Master are:

1. too many minions authing at once
2. too many minions re-authing at once
3. too many minions re-connecting at once
4. too many minion returns at once
5. too few resources (CPU, HDD)
The first three are all "thundering herd" problems. To mitigate these issues we must configure the minions to back off appropriately when the Master is under heavy load. The fourth is caused by masters with too few hardware resources in combination with a possible bug in ZeroMQ. At least that's what it looks like as of today (Issue 118651, Issue 5948, Mail thread).

To fully understand each problem, it is important to understand how Salt works. Very briefly, the Salt Master offers two services to the minions:

- a job publisher on port 4505
- an open port 4506 to receive the minions' returns
All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle Master, there will only be connections on port 4505.

Too many minions authing
When the Minion service is first started up, it will connect to its Master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once.

The connection itself usually isn't the culprit; the more likely cause of master-side issues is the authentication that the Minion must do with the Master. If the Master is too heavily loaded to handle the auth request, it will time it out. The Minion will then wait acceptance_wait_time to retry. If acceptance_wait_time_max is set, then the Minion will increase its wait time by acceptance_wait_time each subsequent retry until reaching acceptance_wait_time_max.

Too many minions re-authing
This is most likely to happen in the testing phase of a Salt deployment, when all Minion keys have already been accepted, but the framework is being tested and parameters are frequently changed in the Salt Master's configuration file(s).

The Salt Master generates a new AES key to encrypt its publications at certain events, such as a Master restart or the removal of a Minion key. If you are encountering this problem of too many minions re-authing against the Master, you will need to recalibrate your setup to reduce the rate of events like a Master restart or Minion key removal (salt-key -d).

When the Master generates a new AES key, the minions aren't notified of this but will discover it on the next pub job they receive. When the Minion receives such a job it will then re-auth with the Master. Since Salt does minion-side filtering, this means that all the minions will re-auth on the next command published on the master, causing another "thundering herd". This can be avoided by setting

random_reauth_delay: 60

in the minion's configuration file to a higher value, staggering the re-auth attempts. Increasing this value will of course increase the time it takes until all minions are reachable via Salt commands.

Too many minions re-connecting
By default the zmq socket will re-connect every 100ms, which for some larger installations may be too quick. This setting controls how quickly the TCP session is re-established, but has no bearing on the auth load.

To tune the minions' socket reconnect attempts, there are a few values in the sample configuration file (default values):

recon_default: 1000
recon_max: 5000
recon_randomize: True
To tune these values to an existing environment, a few decisions have to be made:

1. How long can one wait before the minions should be online and reachable?
2. How many reconnect attempts can the Master handle without a SYN flood?
These questions cannot be answered generally. Their answers depend on the hardware and the administrator's requirements.

Here is an example scenario with the goal of having all minions reconnect within a 60-second time-frame on a Salt Master service restart.

recon_default: 1000
recon_max: 59000
recon_randomize: True

Each Minion will have a randomized reconnect value between 'recon_default' and 'recon_default + recon_max', which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random value will be doubled after each attempt to reconnect (ZeroMQ default behavior).

Let's say the generated random value is 11 seconds (or 11000ms).

reconnect 1: wait 11 seconds
reconnect 2: wait 22 seconds
reconnect 3: wait 33 seconds
reconnect 4: wait 44 seconds
reconnect 5: wait 55 seconds
reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
reconnect 7: wait 11 seconds
reconnect 8: wait 22 seconds
reconnect 9: wait 33 seconds
reconnect x: etc.

With a thousand minions this will mean 1000/60 = ~16, or roughly 16 connection attempts per second. These values should be altered to values that match your environment. Keep in mind, though, that it may grow over time and that more minions might raise the problem again.

Too many minions returning at once
This can also happen during the testing phase. If all minions are addressed at once with

$ salt * disk.usage

it may cause thousands of minions to try to return their data to the Salt Master's open port 4506 at once, flooding the Master with SYN packets if it can't handle that many returns at the same time.

This can easily be avoided with Salt's batch mode:

$ salt * disk.usage -b 50

This will only address 50 minions at once while looping through all addressed minions.

Too few resources
The Master's resources always have to match the environment. There is no way to give good advice without knowing the environment the Master is supposed to run in. But here are some general tuning tips for different situations:

The Master is CPU bound
In installations with large or complex pillar files, it is possible for the master to exhibit poor performance as a result of having to render many pillar files at once. This can show up in a number of ways, both as high load on the master and as minions blocking while waiting for their pillar to be delivered to them.

To reduce pillar rendering times, it is possible to cache pillars on the master. To do this, see the set of master configuration options which are prefixed with pillar_cache.

If many pillars are encrypted using the gpg renderer, it is possible to cache GPG data. To do this, see the set of master configuration options which are prefixed with gpg_cache.

NOTE: Caching pillars or GPG data on the master may introduce
security considerations. Be certain to read caveats outlined in the master
configuration file to understand how pillar caching may affect a master's
ability to protect sensitive data!
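With those caveats in mind, a hedged sketch of enabling the pillar cache in the master configuration file (the TTL is a placeholder; pillar_cache, pillar_cache_ttl, and pillar_cache_backend are among the pillar_cache-prefixed options referred to above):

# /usr/local/etc/salt/master
pillar_cache: True
pillar_cache_ttl: 3600      # seconds to keep a rendered pillar before re-rendering
pillar_cache_backend: disk  # 'disk' or 'memory'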
The Master is disk IO bound
By default, the Master saves every Minion's return for every job in its job cache. The cache can then be used later to look up results for previous jobs. The default directory for this is:

cachedir: /var/cache/salt

and then in the proc directory.

Each job return for every Minion is saved in a single file. Over time this directory can grow quite large, depending on the number of published jobs. The number of files and directories will scale with the number of jobs published and the retention time defined by

keep_jobs_seconds: 86400

250 jobs/day * 2000 minion returns = 500,000 files a day

Use an External Job Cache
An external job cache allows for job storage to be placed on an external system, such as a database.
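As a hedged sketch, an external job cache backed by redis might be configured on the master like this (hostname, port, and db are placeholders; master_job_cache and the redis.* returner options should be verified against the returner documentation for your chosen backend):

# /usr/local/etc/salt/master
master_job_cache: redis
redis.db: '0'
redis.host: jobcache.example.com
redis.port: 6379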
If a master has many accepted keys, it may take a long time to publish a job because the master must first determine the matching minions and deliver that information back to the waiting client before the job can be published.

To mitigate this, a key cache may be enabled. This will reduce the load on the master to a single file open instead of thousands or tens of thousands.

This cache is updated by the maintenance process, however, which means that minions with newly accepted keys may not be targeted by the master for up to sixty seconds by default.

To enable the master key cache, set key_cache: 'sched' in the master configuration file.

Disable The Job Cache
The job cache is a central component of the Salt Master and many aspects of the Salt Master will not function correctly without a running job cache. Disabling the job cache is STRONGLY DISCOURAGED and should not be done unless the master is being used to execute routines that require no history or reliable feedback!

The job cache can be disabled:

job_cache: False

How to Convert Jinja Logic to an Execution Module
NOTE: This tutorial assumes a basic knowledge of Salt states
and specifically experience using the maps.jinja idiom.
This tutorial was written by a salt user who was told "if your maps.jinja is too complicated, write an execution module!". If you are experiencing over-complicated jinja, read on.

The Problem: Jinja Gone Wild
It is often said in the Salt community that "Jinja is not a Programming Language". There's an even older saying known as Maslow's hammer. It goes something like "if all you have is a hammer, everything looks like a nail". Jinja is a reliable hammer, and so is the maps.jinja idiom. Unfortunately, it can lead to code that looks like the following.

# storage/maps.yaml
{% import_yaml 'storage/defaults.yaml' as default_settings %}
{% set storage = default_settings.storage %}
{% do storage.update(salt['grains.filter_by']({
This is an example from the author's salt formulae demonstrating misuse of jinja. Aside from being difficult to read and maintain, accessing the logic it contains from a non-jinja renderer, while probably possible, is a significant barrier!

Refactor
The first step is to reduce the maps.jinja file to something reasonable. This gives us an idea of what the module we are writing needs to do. There is a lot of logic around selecting a storage server IP. Let's move that to an execution module.

# storage/maps.yaml
{% import_yaml 'storage/defaults.yaml' as default_settings %}
{% set storage = default_settings.storage %}
{% do storage.update(salt['grains.filter_by']({
And then, write the module. Note how the module encapsulates all of the logic around finding the storage server IP.

# _modules/storage.py
#!python
"""
Functions related to storage servers.
"""
import re


def ips():

Conclusion
That was... surprisingly straight-forward. Now the logic is available in every renderer, instead of just Jinja. Best of all, it can be maintained in Python, which is a whole lot easier than Jinja.

Using Apache Libcloud for declarative and procedural multi-cloud orchestration
New in version 2018.3.0.

NOTE: This walkthrough assumes basic knowledge of Salt and Salt
States. To get up to speed, check out the Salt Walkthrough.
Apache Libcloud is a Python library which hides differences between different cloud provider APIs and allows you to manage different cloud resources through a unified and easy to use API. Apache Libcloud supports over 60 cloud platforms, including Amazon, Microsoft Azure, DigitalOcean, Google Cloud Platform and OpenStack.
These modules are designed as a way of running a multi-cloud deployment and abstracting simple differences between platforms in order to design a high-availability architecture. The Apache Libcloud functionality is available through both execution modules and Salt states.

Configuring Drivers
Drivers can be configured in the Salt Configuration/Minion settings. All libcloud modules expect a list of "profiles" to be configured with authentication details for each driver. Each driver has a string identifier; these can be found in the libcloud.<api>.types.Provider class for each API: https://libcloud.readthedocs.io/en/latest/supported_providers.html

Some drivers require additional parameters, which are documented in the Apache Libcloud documentation. For example, GoDaddy DNS expects "shopper_id", which is the customer ID. These additional parameters can be added to the profile settings and will be passed directly to the driver instantiation method.

libcloud_dns:

You can have multiple profiles for a single driver; for example, if you wanted 2 DNS profiles for Amazon Route53, naming them "route53_prod" and "route53_test" would help your administrators distinguish their purpose.

libcloud_dns:

Using the execution modules
Amongst the over 60 clouds that Apache Libcloud supports, you can add profiles to your Salt configuration to access and control these clouds. Each of the libcloud execution modules exposes the common API methods for controlling Compute, DNS, Load Balancers and Object Storage. To see which functions are supported across specific clouds, see the Libcloud supported methods documentation. The module documentation explains each of the API methods and how to leverage them.
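As a hedged sketch of the profile configuration described above, a GoDaddy DNS profile might look like this (all credential values are placeholders):

libcloud_dns:
  godaddy_profile:
    driver: godaddy
    shopper_id: '12345'
    key: 2orgk34kgk34g
    secret: fjgoidhjgoim

With a profile like this in place, the execution modules below can reference it by name.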
For example, listing buckets in the Google Storage platform:

$ salt-call libcloud_storage.list_containers google

The Apache Libcloud storage module can be used to synchronize files between multiple storage clouds, such as Google Storage, S3 and OpenStack Swift:

salt '*' libcloud_storage.download_object DeploymentTools test.sh /tmp/test.sh google_storage

Using the state modules
For each configured profile, the assets available in the API (e.g. storage objects, containers, DNS records and load balancers) can be deployed via Salt's state system. The state module documentation explains the specific states that each module supports.
For DNS, the state modules can be used to provide DNS resilience across multiple nameservers, for example:

libcloud_dns:

And then in a state file:

webserver:

This could be combined with a multi-cloud load balancer deployment:

webserver:

Extended parameters can be passed to the specific cloud; for example, you can specify the region with the Google Cloud API, because create_balancer can accept an ex_region argument. Adding this argument to the state will pass the additional command to the driver.

lb_test:

Accessing custom arguments in execution modules
Some cloud providers have additional functionality that can be accessed on top of the base API, for example the Google Cloud Engine load balancer service offers the ability to provision load balancers into a specific region. Looking at the API documentation, we can see that it expects an ex_region in the create_balancer method, so when we execute the salt command, we can add this additional parameter like this:

$ salt myminion libcloud_loadbalancer.create_balancer my_balancer 80 http profile1 ex_region=us-east1
$ salt myminion libcloud_storage.list_container_objects my_bucket profile1 ex_prefix=me

Accessing custom methods in Libcloud drivers
Some cloud APIs have additional methods that are prefixed with ex_ in Apache Libcloud; these methods are part of the non-standard API but can still be accessed from the Salt modules for libcloud_storage, libcloud_loadbalancer and libcloud_dns. The extra methods are available via the extra command, which expects the name of the method as the first argument, the profile as the second, and then accepts a list of keyword arguments to pass onto the driver method. For example, accessing permissions in Google Storage objects:

$ salt myminion libcloud_storage.extra ex_get_permissions google container_name=my_container object_name=me.jpg --out=yaml

Example profiles
Google Cloud
Using Service Accounts with GCE, you can provide a path to the JSON file and the project name in the parameters.

google:

LXC Management with Salt
NOTE: This walkthrough assumes basic knowledge of Salt. To get
up to speed, check out the Salt Walkthrough.
Dependencies
Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:
Profiles
Profiles allow a sort of shorthand for commonly-used configurations to be defined in the minion config file, grains, pillar, or the master config file. The profile is retrieved by Salt using the config.get function, which looks in those locations, in that order. This allows for profiles to be defined centrally in the master config file, with several options for overriding them (if necessary) on groups of minions or individual minions. There are two types of profiles:

- Container profiles
- Network profiles
Container Profiles
LXC container profiles are defined underneath the lxc.container_profile config option:

lxc.container_profile:

Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data). Consider the following container profile data:

In the Master config file:

lxc.container_profile:

In the Pillar data:

lxc.container_profile:

Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.

NOTE: In the 2014.7.x release cycle and earlier, container
profiles are defined under lxc.profile. This parameter will still work
in version 2015.5.0, but is deprecated and will be removed in a future
release. Please note however that the profile merging feature described above
will only work with profiles defined under lxc.container_profile, and
only in versions 2015.5.0 and later.
Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create. Below is a table describing the parameters which can be configured in container profiles:
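As a hedged illustration of such parameters in use, a container profile might look like the following (values are placeholders; template, backing, vgname, and size are among the configurable parameters):

lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    size: 10G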
Network Profiles
LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.

WARNING: On versions older than 2015.5.2, you need to specify the network bridge explicitly.
lxc.network_profile:

As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:

In the Master config file:

lxc.network_profile:

In the Pillar data:

lxc.network_profile:

Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.

NOTE: In the 2014.7.x release cycle and earlier, network
profiles are defined under lxc.nic. This parameter will still work in
version 2015.5.0, but is deprecated and will be removed in a future release.
Please note however that the profile merging feature described above will only
work with profiles defined under lxc.network_profile, and only in
versions 2015.5.0 and later.
The following are parameters which can be configured in network profiles. These will directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).
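As a hedged sketch, a simple network profile setting some of these parameters (link, type, flags; the bridge name is a placeholder) might look like:

lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up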
Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create: salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'
WARNING: The ipv4, ipv6, gateway, and
link (bridge) settings in network profiles / nic_opts will only work if
the container doesn't redefine the network configuration (for example in
/etc/sysconfig/network-scripts/ifcfg-<interface_name> on
RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use
these with caution. The container images installed using the download
template, for instance, typically are configured for eth0 to use DHCP, which
will conflict with static IP addresses set at the container level.
NOTE: For LXC < 1.0.7 and DHCP support, set ipv4.gateway:
'auto' in your network profile, i.e.:
lxc.network_profile.nic:

Old lxc support (< 1.0.7)
With saltstack 2015.5.2 and above, this setting is normally autoselected, but on earlier versions you'll need to teach your network profile to set lxc.network.ipv4.gateway to auto when using a classic ipv4 configuration. Thus you'll need:

lxc.network_profile.foo:

Tricky network setups
Examples
This example covers how to make a container with both an internal IP and a public routable IP, wired on two veth pairs. The second interface, which directly receives the public routable IP, can't be the first interface, which we reserve for private inter-LXC networking.

lxc.network_profile.foo:

Creating a Container on the CLI
From a Template
LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.

There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container from a different OS is desired.

The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:

salt mycentosminion lxc.create container1 template=centos

For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:

- dist - the Linux distribution (e.g. ubuntu or centos)
- release - the release name/version (e.g. trusty or 6)
- arch - the CPU architecture (e.g. amd64 or i386)
The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64. Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used: salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'
NOTE: These command-line options can be placed into a
container profile, like so:
lxc.container_profile.cent6:

The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command-line.

Cloning an Existing Container
To clone a container, use the lxc.clone function:

salt myminion lxc.clone container2 orig=container1

Using a Container Image
While cloning is a good way to create new containers from a common base container, the source container that is being cloned needs to already exist on the minion. This makes deploying a common container across minions difficult. For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root:

tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs

NOTE: Before doing this, it is recommended that the container
is stopped.
The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image parameter with lxc.create:

salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz

NOTE: Making images of containers with LVM backing
For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM backing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs:

mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs

WARNING: One caveat of using this method of container creation is
that /etc/hosts is left unmodified. This could cause confusion for some
distros if salt-minion is later installed on the container, as the functions
that determine the hostname take /etc/hosts into account.
Additionally, when creating a rootfs image, be sure to remove /usr/local/etc/salt/minion_id and make sure that id is not defined in /usr/local/etc/salt/minion, as this will cause similar issues.

Initializing a New Container as a Salt Minion
The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init function can be used. This function will do the following:
By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.

salt myminion lxc.init test1 profile=centos
salt-key -a test1

For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step:

salt-run lxc.init test1 host=myminion profile=centos

Running Commands Within a Container
For containers which are not running their own Minion, commands can be run within the container in a manner similar to using cmd.run. The means of doing this have been changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below.

2015.5.0 and Newer
New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents:

cmd.run         lxc.run
cmd.run_stdout  lxc.run_stdout
cmd.run_stderr  lxc.run_stderr
cmd.retcode     lxc.retcode
cmd.run_all     lxc.run_all
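As a hedged usage sketch of the new-style functions from the table above (mirroring the run_cmd examples that follow):

salt myminion lxc.run web1 'tail /var/log/messages'
salt myminion lxc.run_stderr web1 'tail /var/log/messages'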
2014.7.x and Earlier
Earlier Salt releases use a single function (lxc.run_cmd) to run commands within containers. Whether stdout, stderr, etc. are returned depends on how the function is invoked.

To run a command and return the stdout:

salt myminion lxc.run_cmd web1 'tail /var/log/messages'

To run a command and return the stderr:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True

To run a command and return the retcode:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False

To run a command and return all information:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True

Container Management Using salt-cloud
Under the hood, Salt Cloud uses the Salt LXC runner and execution module to manage containers; please see the relevant chapter of the Salt Cloud documentation.

Container Management Using States
Several states were renamed or otherwise modified in version 2015.5.0. The information in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states.

Ensuring a Container Is Present
To ensure the existence of a named container, use the lxc.present state. Here are some examples:

# Using a template
web1:

WARNING: The lxc.present state will not modify an existing
container (in other words, it will not re-create the container). If an
lxc.present state is run on an existing container, there will be no
change and the state will return a True result.
The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped. Note that there are standalone lxc.running and lxc.stopped states which can be used for this purpose.

Ensuring a Container Does Not Exist
To ensure that a named container is not present, use the lxc.absent state. For example:

web1:

Ensuring a Container is Running/Stopped/Frozen
Containers can be in one of three states:

- running - the container is active and its processes are running
- frozen - the container is running, but its processes are frozen
- stopped - the container is not running
Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states:

web1:

Remote execution tutorial
Before continuing, make sure you have a working Salt installation by following the instructions in the Salt install guide.
Order your minions around
Now that you have a master and at least one minion communicating with each other, you can perform commands on the minion via the salt command. Salt calls are comprised of three main components:

salt '<target>' <function> [arguments]

SEE ALSO: salt manpage
target
The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example:

salt '*' test.version
salt '*.example.org' test.version

Targets can be based on minion system information using the Grains system:

salt -G 'os:Ubuntu' test.version

SEE ALSO: Grains system
Targets can be filtered by regular expression:

salt -E 'virtmach[0-9]' test.version

Targets can be explicitly specified in a list:

salt -L 'foo,bar,baz,quo' test.version

Or multiple target types can be combined in one command:

salt -C 'G@os:Ubuntu and webser* or E@database.*' test.version

function
A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions:

salt '*' sys.doc

Here are some examples:

Show all currently available minions:

salt '*' test.version

Run an arbitrary shell command:

salt '*' cmd.run 'uname -a'

SEE ALSO: the full list of modules
arguments
Space-delimited arguments to the function:

salt '*' cmd.exec_code python 'import sys; print sys.version'

Optional keyword arguments are also supported:

salt '*' pip.install salt timeout=5 upgrade=True

They are always in the form of kwarg=argument.

Multi Master Tutorial
As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.

NOTE: If you need failover capabilities with multiple masters,
there is also a MultiMaster-PKI setup available, that uses a different
topology MultiMaster-PKI with Failover Tutorial
In 0.16.0, the masters do not share any information: keys need to be accepted on both masters, and shared files need to be synchronized manually or with tools like the git fileserver backend to ensure that the file_roots are kept consistent.

Beginning with Salt 2016.11.0, the Pluggable Minion Data Cache was introduced. The minion data cache contains the Salt Mine data, minion grains, and minion pillar information cached on the Salt Master. By default, Salt uses the localfs cache module, but other external data stores can be used instead. Using a pluggable minion cache module allows the data stored on a Salt Master about Salt Minions to be replicated on the other Salt Masters the Minion is connected to. Please see the Minion Data Cache documentation for more information and configuration examples.

Summary of Steps

1. Create a redundant master server
2. Copy the primary master's key to the redundant master
3. Start the redundant master
4. Configure minions to connect to the redundant master
5. Restart the minions
6. Accept the minion keys on the redundant master
Prepping a Redundant MasterThe first task is to prepare the redundant master. If the redundant master is already running, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default location of the master's key pair is /usr/local/etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it. NOTE: There is no logical limit to the number of redundant
masters that can be used.
Once the new key is in place, the redundant master can be safely started.

Configure Minions
Since minions need to be master-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters:

master:

Now the minion can be safely restarted.

NOTE: If the ipc_mode for the minion is set to TCP (default in
Windows), then each minion in the multi-minion setup (one per master) needs
its own tcp_pub_port and tcp_pull_port.
If these settings are left as the default 4510/4511, each minion object will receive a port 2 higher than the previous. Thus the first minion will get 4510/4511, the second will get 4512/4513, and so on. If these port decisions are unacceptable, you must configure tcp_pub_port and tcp_pull_port with lists of ports for each master. The length of these lists should match the number of masters, and there should not be overlap in the lists. Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions. NOTE: Minions can automatically detect failed masters and
attempt to reconnect to them quickly. To enable this functionality, set
master_alive_interval in the minion config and specify a number of
seconds to poll the masters for connection status.
If this option is not set, minions will still reconnect to failed masters, but the first command sent after a master comes back up may be lost while the minion authenticates.

Sharing Files Between Masters
Salt does not automatically share files between multiple masters. A number of files should be shared, or at the very least, sharing them should be strongly considered.

Minion Keys
Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /usr/local/etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters.

NOTE: While sharing the /usr/local/etc/salt/pki/master
directory will work, it is strongly discouraged, since allowing access to the
master.pem key outside of Salt creates a SERIOUS security
risk.
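For reference, a hedged sketch of the multi-master minion configuration described in the Configure Minions section above (addresses are placeholders):

# /usr/local/etc/salt/minion
master:
  - 192.168.0.10
  - 192.168.0.11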
File_Roots
The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions, since instructions managed by one master will not agree with other masters. The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage.

IMPORTANT: If using gitfs/git_pillar with the cachedir shared
between masters using GlusterFS, nfs, or another network filesystem,
and the masters are running Salt 2015.5.9 or later, it is strongly recommended
not to turn off gitfs_global_lock/git_pillar_global_lock as
doing so will cause lock files to be removed if they were created by a
different master.
Pillar_Roots
Pillar roots should be given the same considerations as file_roots.

Master Configurations
While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent. These access control options include but are not limited to:
Multi-Master-PKI Tutorial With Failover
This tutorial will explain how to run a salt-environment where a single minion can have multiple masters and fail over between them if its current master fails. The individual steps are:
Please note that it is advised to have good knowledge of the Salt authentication and communication process to understand this tutorial. All of the settings described here go on top of the default authentication/communication process.
Motivation
The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically, this means that there can only be a single master at any given time. Would it not be much nicer if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?

NOTE: There is also a MultiMaster-Tutorial with a different
approach and topology than this one, which might also suit your needs or might even be better suited: see the Multi-Master Tutorial.
It is also desirable to add some sort of authenticity-check to the very first public key a minion receives from a master. Currently, a minion takes the first master's public key for granted.

The Goal
Set up the master to sign the public key it sends to the minions and enable the minions to verify this signature for authenticity.

Prepping the master to sign its public key
For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.

The easiest way to have the master sign its public key is to set

master_sign_pubkey: True

After restarting the salt-master service, the master will automatically generate a new key pair:

master_sign.pem
master_sign.pub

A custom name can be set for the signing key pair by setting

master_sign_key_name: <name_without_suffix>

The master will then generate that key pair upon restart and use it for creating the public key's signature attached to the auth-reply.

The computation is done for every auth-request of a minion. If many minions auth very often, it is advised to use the master_pubkey_signature and master_use_pubkey_signature settings described below.

If multiple masters are in use and should sign their auth-replies, the signing key pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify the master's public key when connecting to a different master than it did initially. That is because the public key's signature was created with a different signing key pair.

Prepping the minion to verify received public keys
The minion must have the public key (and only that one!) available to be able to verify a signature it receives. That public key (defaults to master_sign.pub) must be copied from the master to the minion's pki directory.

/usr/local/etc/salt/pki/minion/master_sign.pub

IMPORTANT: DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE
MASTER AND ONLY THERE!
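To recap, a minimal master-side configuration sketch for signing might look like this (the custom key name line is optional, and the name shown is illustrative):

master_sign_pubkey: True
# optional custom name for the signing key pair (name without suffix):
master_sign_key_name: custom_sign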
When that is done, enable the signature checking in the minion's configuration verify_master_pubkey_sign: True and restart the minion. For the first try, the minion should be run in manual debug mode. salt-minion -l debug Upon connecting to the master, the following lines should appear on the output: [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub [INFO ] Received signed and verified master pubkey from master 172.16.0.10 [DEBUG ] Decrypting the current master AES key If the signature verification fails, something went wrong and it will look like this: [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Failed to verify signature of public key [CRITICAL] The Salt Master server's public key did not authenticate! In a case like this, check that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master. Once the verification is successful, the minion can be started in daemon mode again. For the paranoid among us, it's also possible to verify the publication whenever it is received from the master. That is, for every single auth-attempt, which can be quite frequent. For example, just the start of the minion will force the signature to be checked 6 times for various things like auth, mine, highstate, etc. If that is desired, enable the setting always_verify_signature: True Multiple Masters For A MinionConfiguring multiple masters on a minion is done by specifying two settings:
master:
  - 172.16.0.10
  - 172.16.0.11
master_type: failover

This tells the minion that all the masters above are available for it to connect to. When started with this configuration, it will try the masters in the order they are defined. To randomize that order, set random_master: True The master list will then be shuffled before the first connection attempt. The first master that accepts the minion is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master. For the minion to be able to detect if it is still connected to its current master, enable the check for it: master_alive_interval: <seconds> If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled). Testing the setupAt least two running masters are needed to test the failover setup. Both masters should be running and the minion should be running on the command line in debug mode salt-minion -l debug The minion will connect to the first master from its master list [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub [INFO ] Received signed and verified master pubkey from master 172.16.0.10 [DEBUG ] Decrypting the current master AES key To test connectivity, run test.version from the master the minion is currently connected to. If successful, turn that master off. A firewall rule denying the minion's packets will also do the trick. Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile. [INFO ] Connection to master 172.16.0.10 lost [INFO ] Trying to tune in to next master from master-list The minion will then remove the current master from the list and try connecting to the next master [INFO ] Removing possibly failed master 172.16.0.10 from list of masters [WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11 [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.11 If everything is configured correctly, the new master's public key will be verified successfully [DEBUG ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub the authentication with the new master is successful [INFO ] Received signed and verified master pubkey from master 172.16.0.11 [DEBUG ] Decrypting the current master AES key [DEBUG ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem [INFO ] Authentication with master successful! and the minion can be pinged again from its new master. Performance TuningWith the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU power. To avoid that, the master can use a pre-created signature of its public key. The signature is saved as a base64-encoded string which the master reads once when starting and attaches only that string to auth-replies.
Enabling this also gives paranoid users the possibility of keeping the signing key pair on a different system than the actual salt-master and creating the public key's signature there, for example on a system with more restrictive firewall rules, without internet access, fewer users, etc. That signature can be created with salt-key --gen-signature This will create a default signature file in the master pki directory /usr/local/etc/salt/pki/master/master_pubkey_signature It is a simple text file with the binary signature converted to base64. If no signing pair is present yet, this will auto-create the signing pair and the signature file in one call salt-key --gen-signature --auto-create Telling the master to use the pre-created signature is done with master_use_pubkey_signature: True That requires the file 'master_pubkey_signature' to be present in the master's pki directory with the correct signature. If the signature file is named differently, its name can be set with master_pubkey_signature: <filename> With many masters and many public keys (default and signing), it is advised to use the salt-master's hostname for the signature file's name. Signatures can be easily confused because they do not provide any information about the key the signature was created from. Verifying that everything works is done the same way as above. How the signing and verification worksThe default key pair of the salt-master is /usr/local/etc/salt/pki/master/master.pem /usr/local/etc/salt/pki/master/master.pub To be able to create a signature of a message (in this case a public key), another key pair has to be added to the setup. Its default name is: master_sign.pem master_sign.pub The combination of the master.* and master_sign.* key pairs gives the possibility of generating signatures. The signature of a given message is unique and can be verified if the public key of the signing key pair is available to the recipient (the minion). The signature of the master's public key in master.pub is computed with master_sign.pem master.pub M2Crypto.EVP.sign_update() This results in a binary signature which is converted to base64 and attached to the auth-reply sent to the minion. With the signing pair's public key available to the minion, the attached signature can be verified with master_sign.pub master.pub M2Crypto's EVP.verify_update(). When running multiple masters, either the signing key pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public keys). DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE.
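Putting those settings together, a hedged sketch of the pre-created signature workflow (the hostname suffix in the file name merely follows the naming advice above):

# on the system holding the signing key pair
salt-key --gen-signature --auto-create

# master configuration
master_use_pubkey_signature: True
master_pubkey_signature: master_pubkey_signature_master01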
Packaging External Modules for SaltExternal Modules Setuptools Entry-Points SupportThe salt loader was enhanced to discover external modules via the salt.loader entry point: https://setuptools.readthedocs.io/en/latest/pkg_resources.html#entry-points
pkg_resources should be installed, which is normally included in setuptools. https://setuptools.readthedocs.io/en/latest/pkg_resources.html
The package which has custom engines, minion modules, outputters, etc., should require setuptools and should define the following entry points in its setup function. A minimal sketch (the package metadata is a placeholder):

from setuptools import setup, find_packages

setup(
    name="<package>",
    version="<version>",
    packages=find_packages(),
    entry_points="""
    [salt.loader]
    engines_dirs = <package>.<loader-module>:engines_dirs
    """,
)

The above setup script example mentions a loader module. Here's an example of how <package>/<loader-module>.py should look:

# -*- coding: utf-8 -*-

# Import python libs
import os

PKG_DIR = os.path.abspath(os.path.dirname(__file__))


def engines_dirs():
    """
    Yield one path per parent directory where engines can be found.
    """
    yield os.path.join(PKG_DIR, "engines")

Preseed Minion with Accepted KeyIn some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly. SEE ALSO: Many ways to preseed minion keys
Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below. salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to). salt-api exposes an HTTP call to Salt's REST API to generate and download the new minion keys as a tarball. There is a general four-step process to do this:
root@saltmaster# salt-key --gen-keys=[key_name] Pick a name for the key, such as the minion's id.
root@saltmaster# cp key_name.pub /usr/local/etc/salt/pki/master/minions/[minion_id] It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file.
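As a quick illustration of the first two steps, assuming a minion id of minion1 (the id is hypothetical):

root@saltmaster# salt-key --gen-keys=minion1
root@saltmaster# cp minion1.pub /usr/local/etc/salt/pki/master/minions/minion1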
There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/ )
You will want to place the minion keys before starting the salt-minion daemon: /usr/local/etc/salt/pki/minion/minion.pem /usr/local/etc/salt/pki/minion/minion.pub Once in place, you should be able to start salt-minion and run salt-call state.apply or any other salt commands that require master authentication. Salt Masterless QuickstartRunning a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine. Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
It is also useful for testing out state trees before deploying to a production setup. Bootstrap Salt MinionThe salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell: curl -L https://bootstrap.saltstack.com -o bootstrap_salt.sh sudo sh bootstrap_salt.sh Before running the script, it is good practice to verify the checksum of the downloaded file. You can verify the SHA256 checksum by running this command: test $(sha256sum bootstrap_salt.sh | awk '{print $1}') \
NOTE: The previous example is the preferred method because downloading the script first lets you investigate its contents or reuse it later. Alternatively, if you want to download the bash script and run it immediately, use:
curl -L https://bootstrap.saltproject.io | sudo sh -s -- See the salt-bootstrap documentation for other one-liners. When using Vagrant to test out Salt, the Vagrant salt provisioner will provision the VM for you. Telling Salt to Run MasterlessTo instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local the minion is configured to not gather this data from the master. file_client: local Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources. Configuration which resides in the master configuration (e.g. /usr/local/etc/salt/master) should be moved to the minion configuration since the minion does not read the master configuration. NOTE: When running Salt in masterless mode, do not run the
salt-minion daemon. Otherwise, it will attempt to connect to a master and
fail. The salt-call command stands on its own and does not need the
salt-minion daemon.
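As a minimal sketch, a masterless run therefore needs only this single line in /usr/local/etc/salt/minion:

file_client: local

after which states can be applied locally with:

salt-call state.apply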
Create State TreeFollowing the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored. The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed. NOTE: For a complete explanation on Salt States, see the
tutorial.
/usr/local/etc/salt/states/top.sls:

base:
  '*':
    - webserver
/usr/local/etc/salt/states/webserver.sls:

apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration

NOTE: The apache package has different names on different platforms; for instance, on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd, and on Arch it is apache
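One hedged way of coping with the differing package names, using the Jinja-plus-grains technique introduced later in this document, might be:

apache:
  pkg:
    - installed
    {% if grains['os_family'] == 'Debian' %}
    - name: apache2
    {% endif %}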
The only thing left is to provision our minion using salt-call. Salt-callThe salt-call command is used to run remote execution functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data: salt-call --local state.apply The --local flag tells the salt-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions. To provide verbose output, use -l debug: salt-call --local state.apply -l debug The minion first examines the top.sls file and determines that it is a part of the group matched by the * glob and that the webserver SLS should be applied. It then examines the webserver.sls file and finds the apache state, which installs the Apache package. The minion should now have Apache installed, and the next step is to begin learning how to write more complex states. Running Salt as Normal User TutorialBefore continuing make sure you have a working Salt installation by following the instructions in the Salt install guide.
Running Salt functions as non-root userIf you don't want to run salt-cloud as root, or even install it, you can configure it to have a virtual root in your working directory. The salt system uses the salt._syspaths module to find these variables. If you run the salt build, it will be generated in: ./build/lib.linux-x86_64-2.7/salt/_syspaths.py To generate it, run the command: python setup.py build Copy the generated module into your salt directory cp ./build/lib.linux-x86_64-2.7/salt/_syspaths.py salt/_syspaths.py Edit it to include the needed variables and your new paths # you need to edit this _your_current_dir_ = ... ROOT_DIR = _your_current_dir_ + "/salt/root" # you need to edit this _location_of_source_code_ = ... INSTALL_DIR = _location_of_source_code_ CONFIG_DIR = ROOT_DIR + "/usr/local/etc/salt" CACHE_DIR = ROOT_DIR + "/var/cache/salt" SOCK_DIR = ROOT_DIR + "/var/run/salt" SRV_ROOT_DIR = ROOT_DIR + "/srv" BASE_FILE_ROOTS_DIR = ROOT_DIR + "/usr/local/etc/salt/states" BASE_PILLAR_ROOTS_DIR = ROOT_DIR + "/usr/local/etc/salt/pillar" BASE_MASTER_ROOTS_DIR = ROOT_DIR + "/usr/local/etc/salt/states-master" LOGS_DIR = ROOT_DIR + "/var/log/salt" PIDFILE_DIR = ROOT_DIR + "/var/run" CLOUD_DIR = INSTALL_DIR + "/cloud" BOOTSTRAP = CLOUD_DIR + "/deploy/bootstrap-salt.sh" Create the directory structure mkdir -p root/usr/local/etc/salt root/var/cache/run root/run/salt root/srv root/usr/local/etc/salt/states root/usr/local/etc/salt/pillar root/srv/salt-master root/var/log/salt root/var/run Populate the configuration files: cp -r conf/* root/usr/local/etc/salt/ Edit your root/usr/local/etc/salt/master configuration that is used by salt-cloud: user: *your user name* Run like this: PYTHONPATH=`pwd` scripts/salt-cloud Salt BootstrapThe Salt Bootstrap Script allows a user to install the Salt Minion or Master on a variety of system distributions and versions. The Salt Bootstrap Script is a shell script known as bootstrap-salt.sh. It runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap Script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The Salt Bootstrap Script is maintained in a separate repo from Salt, complete with its own issues, pull requests, contributing guidelines, release protocol, etc. To learn more, please see the Salt Bootstrap repo links:
NOTE: The Salt Bootstrap script can be found in the Salt repo
under the salt/cloud/deploy/bootstrap-salt.sh path. Any changes to this
file will be overwritten! Bug fixes and feature additions must be submitted
via the Salt Bootstrap repo. Please see the Salt Bootstrap Script's
Release Process for more information.
Standalone MinionSince the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
NOTE: When running Salt in masterless mode, it is not required
to run the salt-minion daemon. By default the salt-minion daemon will attempt
to connect to a master and fail. The salt-call command stands on its own and
does not need the salt-minion daemon.
As of version 2016.11.0 you can have a running minion (with engines and beacons) without a master connection. If you wish to run the salt-minion daemon you will need to set the master_type configuration setting to 'disable': master_type: disable Minion ConfigurationThroughout this document there are several references to setting different options to configure a masterless Minion. Salt Minions are easy to configure via a configuration file that is located, by default, in /usr/local/etc/salt/minion. Note, however, that on FreeBSD systems, the minion configuration file is located in /usr/local/usr/local/etc/salt/minion. You can learn more about minion configuration options in the Configuring the Salt Minion docs. Telling Salt Call to Run MasterlessThe salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt-call, the file_client configuration option needs to be set. By default the file_client is set to remote so that the minion knows that file server and pillar data are to be gathered from the master. When setting the file_client option to local the minion is configured to not gather this data from the master. file_client: local Now the salt-call command will not look for a master and will assume that the local system has all of the file and pillar resources. Running States MasterlessThe state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /usr/local/etc/salt/states for the base environment just like on the master:

file_roots:
  base:
    - /usr/local/etc/salt/states

Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. Now, with the file_client option set to local and an available state tree, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master. Remember that when creating a state tree on a minion there are no syntax or path changes needed; SLS modules written to be used from a master do not need to be modified in any way to work with a minion. This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows. The declared state can now be executed with: salt-call state.apply Or the salt-call command can be executed with the --local flag, which makes it unnecessary to change the configuration file: salt-call state.apply --local External PillarsExternal pillars are supported when running in masterless mode. How Do I Use Salt States?Simplicity, Simplicity, Simplicity Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple) The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state a system should be in, and is set up to contain this data in a simple format. This is often called configuration management. NOTE: This is just the beginning of using states, make sure to
read up on pillar Pillar next.
It is All Just DataBefore delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is. SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer. The Top FileThe example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here. Default Data - YAMLBy default Salt represents the SLS data in what is one of the simplest serialization formats available - YAML. A typical SLS file will often look like this in YAML: NOTE: These demos use some generic service and package names,
as different distributions often use different names for packages and services.
For instance apache should be replaced with httpd on a Red Hat
system. Salt uses the name of the init script, systemd unit, upstart name, etc., based on the underlying service management system for the platform. To get a list of the available service names on a platform, execute the service.get_all salt function.
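For example, to list the service names Salt knows about on a particular minion (the target name is hypothetical):

salt 'web1' service.get_all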
Information on how to make states work with multiple distributions is later in the tutorial.

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way. The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated. The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running. Finally, on line four, is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package. Adding Configs and UsersWhen setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.

apache:
  pkg.installed: []
  service.running:
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This SLS data greatly extends the first example, and includes a config file, a user, a group and a new requisite statement: watch. Adding more states is easy; since the new user and group states are under the Apache ID, the user and group will be the Apache user and group. The require statements will make sure that the user will only be made after the group, and that the group will be made only after the Apache package is installed. Next, the require statement under service was changed to watch, and is now watching 3 states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component. The watch statement will run the state's watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user uid modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case, a change in the config file will also trigger a restart of the respective service. Moving Beyond a Single SLSWhen setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well. The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files. The Apache example would be laid out in the root of the Salt file server like this: apache/init.sls apache/httpd.conf So the httpd.conf is just a file in the apache directory, and is referenced directly.
But when using more than a single SLS file, more components can be added to the toolkit. Consider this SSH example: ssh/init.sls: openssh-client: ssh/server.sls: include: NOTE: Notice that we use two similar ways of denoting that a
file is managed by Salt. In the /etc/ssh/sshd_config state section
above, we use the file.managed state declaration whereas with the
/etc/ssh/banner state section, we use the file state declaration
and add a managed attribute to that state declaration. Both ways
produce an identical result; the first way -- using file.managed -- is
merely a shortcut.
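As a small sketch of the two equivalent forms the note describes (the source paths follow the example's layout):

/etc/ssh/sshd_config:
  file.managed:
    - source: salt://ssh/sshd_config

/etc/ssh/banner:
  file:
    - managed
    - source: salt://ssh/banner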
Now our State Tree looks like this: apache/init.sls apache/httpd.conf ssh/init.sls ssh/server.sls ssh/banner ssh/ssh_config ssh/sshd_config This example now introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched or, as will soon be demonstrated, extended. The include statement allows for states to be cross-linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files. Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial. Extending Included SLS DataSometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed. In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python. ssh/custom-server.sls:

include:
  - ssh.server

extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner

python/mod_python.sls:

include:
  - apache

extend:
  apache:
    service:
      - watch:
        - pkg: mod_python

mod_python:
  pkg.installed

The custom-server.sls file uses the extend statement to overwrite where the banner is being downloaded from, and therefore changing what file is being used to configure the banner. In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package.
Understanding the Render SystemSince SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided. The default rendering system is the jinja|yaml renderer. The jinja|yaml renderer will first pass the template through the Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files. Other renderers available are yaml_mako and yaml_wempy which each use the Mako or Wempy templating system respectively rather than the jinja templating system, and more notably, the pure Python or py, pydsl & pyobjects renderers. The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; while the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a "Pythonic" interface to building state data. NOTE: The templating engines described above aren't just
available in SLS files. They can also be used in file.managed states,
making file management much more dynamic and flexible. Some examples for using
templates in managed files can be found in the documentation for the file
state, as well as the MooseFS example below.
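For instance, a managed file can be rendered through Jinja before delivery by setting the template argument (the file name and source here are hypothetical):

/etc/motd:
  file.managed:
    - source: salt://motd.tmpl
    - template: jinja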
Getting to Know the Default - jinja|yamlThe default renderer, jinja|yaml, allows for use of the jinja templating system. A guide to the Jinja templating system can be found here: https://jinja.palletsprojects.com/en/2.11.x/ When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available: salt, grains, and pillar. The salt object allows for any Salt function to be called from within the template, and grains allows for the Grains to be accessed from within the template. A few examples: apache/init.sls:

apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
  service.running:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}

This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd. A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver: moosefs/chunk.sls: include: This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data. Introducing the Python, PyDSL, and the Pyobjects RenderersSometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree. This example shows a very basic Python SLS file: python/django.sls:

#!py

def run():
    """
    Install the python-django package
    """
    return {"include": ["python"], "django": {"pkg": ["installed"]}}

This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the py renderer. Then the run function is defined; the return value from the run function must be a Salt-friendly data structure, better known as a Salt HighState data structure. Alternatively, using the pydsl renderer, the above example can be written more succinctly as: #!pydsl
include("python", delayed=True)
state("django").pkg.installed()
The pyobjects renderer provides a "Pythonic" object-based approach for building the state data. The above example could be written as: #!pyobjects
include("python")
Pkg.installed("django")
These Python examples would look like this if they were written in YAML:

include:
  - python

django:
  pkg.installed: []

This example clearly illustrates two things: one, using the YAML renderer by default is a wise decision, and two, unbridled power can be obtained where needed by using a pure Python SLS. Running and Debugging Salt StatesOnce the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.apply on the command line. If you get back only hostnames with a : after, but no return, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command to examine the output for errors: salt-call state.apply -l debug This should help troubleshoot the issue. The minion can also be started in the foreground in debug mode by running salt-minion -l debug. Next ReadingWith an understanding of states, the next recommendation is to become familiar with Salt's pillar interface: Pillar Walkthrough
States tutorial, part 1 - Basic UsageThe purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference. This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running. Before continuing make sure you have a working Salt installation by following the instructions in the Salt install guide.
Setting up the Salt State TreeStates are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files make up the State Tree. To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines:

file_roots:
  base:
    - /usr/local/etc/salt/states

NOTE: If you are deploying on FreeBSD via ports, the
file_roots path defaults to
/usr/local/usr/local/etc/salt/states.
Restart the Salt master in order to pick up this change: pkill salt-master salt-master -d Preparing the Top FileOn the master, in the directory uncommented in the previous step (/usr/local/etc/salt/states by default), create a new file called top.sls and add the following:

base:
  '*':
    - webserver

The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*).
base:
  '*':
    - webserver

Create an sls fileIn the same directory as the top file, create a file named webserver.sls, containing the following:

apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration

The first line, called the ID declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed. NOTE: The package name for the Apache httpd web server may
differ depending on OS or distro — for example, on Fedora it is
httpd but on Debian/Ubuntu it is apache2.
The second line, called the State declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed. The third line, called the Function declaration, defines which function in the pkg state module to call.
Install the packageNext, let's run the state we created. Open a terminal on the master and run: salt '*' state.apply Our master is instructing all targeted minions to run state.apply. When this function is executed without any SLS targets, a minion will download the top file and attempt to match the expressions within it. When the minion does match an expression the modules listed for it will be downloaded, compiled, and executed. NOTE: This action is referred to as a "highstate",
and can be run using the state.highstate function. However, to make the
usage easier to understand ("highstate" is not necessarily an
intuitive name), a state.apply function was added in version 2015.5.0,
which when invoked without any SLS names will trigger a highstate.
state.highstate still exists and can be used, but the documentation (as
can be seen above) has been updated to reference state.apply, so keep
the following in mind as you read the documentation:
Once completed, the minion will report back with a summary of all actions taken and all changes made. WARNING: If you have created custom grain modules, they
will not be available in the top file until after the first highstate.
To make custom grains available on a minion's first highstate, it is
recommended to use this example to ensure that the custom grains
are synced when the minion starts.
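One common remedy, though not necessarily the example the warning refers to, is to sync custom modules to the minions explicitly before applying states:

salt '*' saltutil.sync_grains
# or sync all custom module types at once
salt '*' saltutil.sync_all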
Troubleshooting SaltIf the expected output isn't seen, the following tips can help to narrow down the problem. Turn up logging; Salt can be quite chatty when you change the logging setting to debug: salt-minion -l debug Run the minion in the foreground; by not starting the minion in daemon mode (-d), you can view any output from the minion as it works: salt-minion Increase the default timeout value when running salt. For example, to change the default timeout to 60 seconds: salt -t 60 For best results, combine all three: salt-minion -l debug # On the minion salt '*' state.apply -t 60 # On the master Next stepsThis tutorial focused on getting a simple Salt States configuration working. Part 2 will build on this example to cover more advanced sls syntax and will explore more of the states that ship with Salt. States tutorial, part 2 - More Complex States, RequisitesNOTE: This tutorial builds on topics covered in part 1.
It is recommended that you begin there.
In the last part of the Salt States tutorial we covered the basics of installing a package. We will now modify our webserver.sls file to have requirements, and use even more Salt States. Call multiple StatesYou can specify multiple State declarations under an ID declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running:

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

Try stopping Apache before running state.apply once again and observe the output. NOTE: For those running Red Hat OS derivatives (CentOS, AWS), you
will want to specify the service name to be httpd. More on the service state can be found here: service state. With the example above, just add "- name: httpd" above the require line, with the same spacing.
Require other statesWe now have a working installation of Apache so let's add an HTML file to customize our website. It isn't exactly useful to have a website without a webserver so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

/var/www/index.html:                        # ID declaration
  file:                                     # state declaration
    - managed                               # function
    - source: salt://webserver/index.html   # function arg
    - require:                              # requisite declaration
      - pkg: apache                         # requisite reference

Line 7 is the ID declaration. In this example it is the location we want to install our custom HTML file. (Note: the default location that Apache serves may differ from the above on your OS or distro. /srv/www could also be a likely place to look.) Line 8 is the State declaration. This example uses the Salt file state. Line 9 is the Function declaration. The managed function will download a file from the master and install it in the location specified. Line 10 is a Function arg declaration which, in this example, passes the source argument to the managed function. Line 11 is a Requisite declaration. Line 12 is a Requisite reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed. Next, create the index.html file and save it in the webserver directory: <!DOCTYPE html> <html> Last, call state.apply again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server: salt '*' state.apply Verify that Apache is now serving your custom HTML.
If the pkg and service names differ on your OS or distro of choice you can specify each one separately using a Name declaration, which is explained in Part 3. Next stepsIn part 3 we will discuss how to use includes, extends, and templating to make a more complete State Tree configuration. States tutorial, part 3 - Templating, Includes, ExtendsNOTE: This tutorial builds on topics covered in part 1
and part 2. It is recommended that you begin there.
This part of the tutorial will cover more advanced templating and configuration techniques for sls files. Templating SLS modulesSLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is Jinja2 and may be configured by changing the renderer value in the master config. All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this: {% for usr in ['moe','larry','curly'] %}
{{ usr }}:
  user.present
{% endfor %}

This templated sls file once generated will look like this:

moe:
  user.present
larry:
  user.present
curly:
  user.present

Here's a more complex example: # Comments in yaml start with a hash symbol.
# Since jinja rendering occurs before yaml parsing, if you want to include jinja
# in the comments you may need to escape them using 'jinja' comments to prevent
# jinja from trying to render something which is not well-defined jinja.
# e.g.
# {# iterate over the Three Stooges using a {% for %}..{% endfor %} loop
# with the iterator variable {{ usr }} becoming the state ID. #}
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
  group:
    - present
  user:
    - present
    - gid_from_name: True
    - require:
      - group: {{ usr }}
{% endfor %}
Using Grains in SLS modulesOftentimes a state will need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:

apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}

Using Environment Variables in SLS modulesYou can use salt['environ.get']('VARNAME') to use an environment variable in a Salt state. MYENVVAR="world" salt-call state.template test.sls Create a file with contents from an environment variable: Error checking: {% set myenvvar = salt['environ.get']('MYENVVAR') %}
{% if myenvvar %}

Create a file with contents from an environment variable:
  cmd.run:
    - name: 'echo {{ salt["environ.get"]("MYENVVAR") }} > /tmp/salt_environ.txt'

{% else %}

Fail - no environment passed in:
  test.fail_without_changes

{% endif %}
Calling Salt modules from templatesAll of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules. The Salt module functions are also made available in the template context as salt: The following example illustrates calling the group_to_gid function in the file execution module with a single positional argument called some_group_that_exists.

moe:
  user.present:
    - gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}

One way to think about this might be that the gid key is being assigned a value equivalent to the following python pseudo-code: import salt.modules.file
file.group_to_gid("some_group_that_exists")
Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine. Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0: salt["network.hw_addr"]("eth0")
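As a hypothetical sketch, the returned MAC address could, for example, be written into a managed file from within a template:

/etc/mac_address.txt:
  file.managed:
    - contents: "eth0 MAC is {{ salt['network.hw_addr']('eth0') }}"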
To examine the possible arguments to each execution module function, one can examine the module reference documentation: Advanced SLS module syntaxLastly, we will cover some incredibly useful techniques for more complex State trees. Include declarationA previous example showed how to spread a Salt tree across several files. Similarly, Requisites and Other Global State Arguments span multiple files by using an Include declaration. For example: python/python-libs.sls:

python-dateutil:
  pkg.installed

python/django.sls:

include:
  - python.python-libs

django:
  pkg.installed:
    - require:
      - pkg: python-dateutil

Extend declarationYou can modify previous declarations by using an Extend declaration. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed: apache/apache.sls:

apache:
  service.running

apache/mywebsite.sls:

include:
  - apache.apache

extend:
  apache:
    service:
      - watch:
        - file: /etc/httpd/extra/httpd-vhosts.conf

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://apache/httpd-vhosts.conf
Name declarationYou can override the ID declaration by using a Name declaration. For example, the previous example is a bit more maintainable if rewritten as follows: apache/mywebsite.sls:

include:
  - apache.apache

extend:
  apache:
    service:
      - watch:
        - file: mywebsite

mywebsite:
  file.managed:
    - name: /etc/httpd/extra/httpd-vhosts.conf
    - source: salt://apache/httpd-vhosts.conf

Names declarationEven more powerful is using a Names declaration to override the ID declaration for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop:

stooges:
  user.present:
    - names:
      - moe
      - larry
      - curly

Next stepsIn part 4 we will discuss how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production. States tutorial, part 4NOTE: This tutorial builds on topics covered in part 1,
part 2, and part 3. It is recommended that you begin
there.
This part of the tutorial will show how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production. Salt fileserver path inheritanceSalt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS:

# In the master config file (/usr/local/etc/salt/master)
file_roots:
  base:
    - /usr/local/etc/salt/states
    - /mnt/salt-nfs/base

Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /usr/local/etc/salt/states/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /usr/local/etc/salt/states/foo.txt. NOTE: When using multiple fileserver backends, the order in
which they are listed in the fileserver_backend parameter also matters.
If both roots and git backends contain a file with the same
relative path, and roots appears before git in the
fileserver_backend list, then the file in roots will
"win", and the file in gitfs will be ignored.
A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this. Environment configurationConfigure a multiple-environment setup like so:

file_roots:
  base:
    - /usr/local/etc/salt/states/prod
  qa:
    - /usr/local/etc/salt/states/qa
    - /usr/local/etc/salt/states/prod
  dev:
    - /usr/local/etc/salt/states/dev
    - /usr/local/etc/salt/states/qa
    - /usr/local/etc/salt/states/prod

Given the path inheritance described above, files within /usr/local/etc/salt/states/prod would be available in all environments. Files within /usr/local/etc/salt/states/qa would be available in both qa and dev. Finally, the files within /usr/local/etc/salt/states/dev would only be available within the dev environment. Based on the order in which the roots are defined, new files/states can be placed within /usr/local/etc/salt/states/dev, and pushed out to the dev hosts for testing. Those files/states can then be moved to the same relative path within /usr/local/etc/salt/states/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested. Finally, if moved to the same relative path within /usr/local/etc/salt/states/prod, the files are now available in all three environments. Requesting files from specific fileserver environmentsSee here for documentation on how to request files from specific environments. Practical ExampleAs an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website: /usr/local/etc/salt/states/prod/top.sls: base: Using pillar, roles can be assigned to the hosts: /usr/local/etc/salt/pillar/top.sls: base: /usr/local/etc/salt/pillar/webserver/prod.sls: webserver_role: prod /usr/local/etc/salt/pillar/webserver/qa.sls: webserver_role: qa /usr/local/etc/salt/pillar/webserver/dev.sls: webserver_role: dev And finally, the SLS to deploy the website: /usr/local/etc/salt/states/prod/webserver/foobarcom.sls: {% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
  file.recurse:
    - source: salt://webserver/src/foobarcom
    - include_empty: True
    - user: www
    - group: www
    - dir_mode: 755
    - file_mode: 644
{% endif %}
Given the above SLS, the source for the website should initially be placed in /usr/local/etc/salt/states/dev/webserver/src/foobarcom. First, let's deploy to dev. Given the configuration in the top file, this can be done using state.apply: salt --pillar 'webserver_role:dev' state.apply However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, by invoking state.apply with the desired SLS target as an argument: salt --pillar 'webserver_role:dev' state.apply webserver.foobarcom Once the site has been tested in dev, then the files can be moved from /usr/local/etc/salt/states/dev/webserver/src/foobarcom to /usr/local/etc/salt/states/qa/webserver/src/foobarcom, and deployed using the following: salt --pillar 'webserver_role:qa' state.apply webserver.foobarcom Finally, once the site has been tested in qa, then the files can be moved from /usr/local/etc/salt/states/qa/webserver/src/foobarcom to /usr/local/etc/salt/states/prod/webserver/src/foobarcom, and deployed using the following: salt --pillar 'webserver_role:prod' state.apply webserver.foobarcom Thanks to Salt's fileserver inheritance, even though the files have been moved to within /usr/local/etc/salt/states/prod, they are still available from the same salt:// URI in both the qa and dev environments. Continue LearningThe best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories. If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you. One of the best places to talk to the community is on the Salt Project Slack workspace. In addition, by continuing to the Orchestrate Runner docs, you can learn about the powerful orchestration of which Salt is capable. States Tutorial, Part 5 - Orchestration with SaltThis was moved to Orchestrate Runner. Syslog-ng usageOverviewSyslog_ng state module is for generating syslog-ng configurations. You can do the following things:
There is also an execution module, which can check the syntax of the configuration and get the version and other information about syslog-ng. ConfigurationUsers can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure. A statement can be declared in the following forms (both are equivalent): source.s_localhost: s_localhost: The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing and there are more examples at the end of this document. Quotation
Full exampleThe following example shows what a complete syslog-ng configuration looks like: # Set the location of the configuration file set_location: The syslog_ng.reloaded function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (log statement) its purpose is like a mandatory comment. After executing this example, the syslog_ng state will generate this file: #Generated by Salt on 2014-08-18 00:11:11
@version: 3.6
options {
Users can include arbitrary text in the generated configuration by using the config statement (see the example above). Syslog_ng module functionsYou can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file. ExamplesSimple sourcesource s_tail {
s_tail: OR s_tail: OR source.s_tail: Complex sourcesource s_gsoc2014 {
s_gsoc2014: Filterfilter f_json {
f_json: Templatetemplate t_demo_filetemplate {
t_demo_filetemplate: Rewriterewrite r_set_message_to_MESSAGE {
r_set_message_to_MESSAGE: Global optionsoptions {
global_options: Loglog {
l_gsoc2014: Salt in 10 MinutesNOTE: Welcome to SaltStack! I am excited that you are
interested in Salt and starting down the path to better infrastructure
management. I developed (and am continuing to develop) Salt with the goal of
making the best software available to manage computers of almost any kind. I
hope you enjoy working with Salt and that the software can solve your real
world needs!
Getting StartedWhat is Salt?Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure. The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States. Installing SaltSalt has been made to be very easy to install and get started with. The Salt install guide provides instructions for all supported platforms. Starting SaltSalt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master. Setting Up the Salt MasterTurning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager: On systemd-based platforms (newer Debian, openSUSE, Fedora): systemctl start salt-master On Upstart-based systems (Ubuntu, older Fedora/RHEL): service salt-master start On SysV Init systems (Gentoo, older Debian etc.): /etc/init.d/salt-master start Alternatively, the Master can be started directly on the command line: salt-master -d The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output: salt-master -l debug The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in-depth information on firewalling these ports, the firewall tutorial is available here. Finding the Salt MasterWhen a minion starts, by default it searches for a system that resolves to the salt hostname on the network. If found, the minion initiates the handshake and key authentication process with the Salt master. This means that the easiest configuration approach is to set internal DNS to resolve the name salt back to the Salt Master IP. Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master: NOTE: The default location of the configuration files is
/usr/local/etc/salt. Most platforms adhere to this convention, but
platforms such as FreeBSD and Microsoft Windows place this file in different
locations.
/usr/local/etc/salt/minion: master: saltmaster.example.com Setting up a Salt MinionNOTE: The Salt Minion can operate with or without a Salt
Master. This walk-through assumes that the minion will be connected to the
master, for information on how to run a master-less minion please see the
master-less quick-start guide:
Masterless Minion Quickstart Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly: As a daemon: salt-minion -d In the foreground in debug mode: salt-minion -l debug When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached (in /usr/local/etc/salt/minion_id by default). This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order to try to find a value that is not localhost:
If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used. If all else fails, then localhost is used as a fallback. NOTE: Overriding the id
The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id. Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key. Using salt-keySalt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master. The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master: salt-key -L The keys that have been rejected, accepted, and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys: salt-key -A NOTE: Keys should be verified! Print the master key fingerprint
by running salt-key -F master on the Salt master. Copy the
master.pub fingerprint from the Local Keys section, and then set this
value as the master_finger in the minion configuration file. Restart
the Salt minion.
On the master, run salt-key -f minion-id to print the fingerprint of the minion's public key that was received by the master. On the minion, run salt-call key.finger --local to print the fingerprint of the minion key. On the master: # salt-key -f foo.domain.com Unaccepted Keys: foo.domain.com: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 On the minion: # salt-call key.finger --local local: If they match, approve the key with salt-key -a foo.domain.com. Sending the First CommandsNow that the minion is connected to the master and authenticated, the master can start to command the minion. Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution. The salt command is composed of command options, target specification, the function to execute, and arguments to the function. A simple command to start with looks like this: salt '*' test.version The * is the target, which specifies all minions. test.version tells the minion to run the test.version function. In the case of test.version, test refers to an execution module. version refers to the version function contained in the aforementioned test module. NOTE: Execution modules are the workhorses of Salt. They do the
work on the system to perform various tasks, such as manipulating files and
restarting services.
The result of running this command will be the master instructing all of the minions to execute test.version in parallel and return the result. Using test.version is a good way of confirming that a minion is connected and of confirming which Salt version(s) are installed on the minions. NOTE: Each minion registers itself with a unique minion ID.
This ID defaults to the minion's hostname, but can be explicitly defined in
the minion config as well by using the id parameter.
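For instance, a minion config that pins both the master and the minion ID might look like this (id and master are the documented option names; the values here are illustrative):

    master: saltmaster.example.com
    id: web01.example.com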
Of course, there are hundreds of other modules that can be called just as test.version can. For example, the following would return disk usage on all targeted minions: salt '*' disk.usage Getting to Know the FunctionsSalt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions, execute the sys.doc function: salt '*' sys.doc This will display a very large list of available functions and documentation on them. NOTE: Module documentation is also available on the
web.
These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone of Salt configuration management and many other aspects of Salt. NOTE: Salt comes with many plugin systems. The functions that
are available via the salt command are called Execution
Modules.
Helpful Functions to KnowThe cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all: salt '*' cmd.run 'ls -l /etc' The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.: salt '*' pkg.install vim NOTE: Some custom Linux spins and derivatives of other
distributions are not properly detected by Salt. If the above command returns
an error message saying that pkg.install is not available, then you may
need to override the pkg provider. This process is explained
here.
The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc: salt '*' network.interfaces Changing the Output FormatThe default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module: root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint
{'myminion': {'pythonpath': ['/usr/lib64/python2.7',
The full list of Salt outputters, as well as example output, can be found here. salt-callThe examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to log in to the minion directly and use salt-call. Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here. GrainsSalt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users. Grains can also be statically set; this makes it easy to assign values to minions for grouping and managing. A common practice is to assign grains to minions to specify the role or roles of a minion. These static grains can be set in the minion configuration file or via the grains.setval function. TargetingSalt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses globular expressions to match minions, hence if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1. Many targeting systems other than globs can be used; these include:
The concepts of targets are used on the command line with Salt, but also function in many other areas, including the state system and the systems used for ACLs and user permissions. Passing in ArgumentsMany of the functions available accept arguments which can be passed in on the command line: salt '*' pkg.install vim This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line: salt '*' test.echo 'foo: bar' In this case Salt translates the string 'foo: bar' into the dictionary "{'foo': 'bar'}" NOTE: Any line that contains a newline will not be parsed by
YAML.
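Tying the targeting and argument concepts together, a few illustrative commands (the -E, -G, and -L flags select the PCRE, grain, and list matchers; the minion names and grain values are hypothetical):

    salt -E 'larry.*' test.version        # regular expression match
    salt -G 'os:Ubuntu' test.version      # match on the os grain
    salt -L 'larry1,curly1' test.version  # explicit list of minions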
Salt StatesNow that the basics are covered the time has come to evaluate States. Salt States, or the State System is the component of Salt made for configuration management. The state system is already available with a basic Salt setup, no additional configuration is required. States can be set up immediately. NOTE: Before diving into the state system, a brief overview of
how states are constructed will make many of the concepts clearer. Salt states
are based on data modeling and build on a low level data structure that is
used to execute each state function. Then more logical layers are built on top
of each other.
The high layers of the state system which this tutorial will cover consist of everything that needs to be known to use states; the two high layers covered here are the sls layer and the highest layer, highstate. Understanding the layers of data management in the State System will help with understanding states, but they never need to be used. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset. The First SLS FormulaThe state system is built on SLS (SaLt State) formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula, open up a file under /usr/local/etc/salt/states named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied. /usr/local/etc/salt/states/vim.sls: vim: Now install vim on the minions by calling the SLS directly: salt '*' state.apply vim This command will invoke the state system and run the vim SLS. Now, to beef up the vim SLS formula, a vimrc can be added: /usr/local/etc/salt/states/vim.sls: vim: Now the desired vimrc needs to be copied into the Salt file server to /usr/local/etc/salt/states/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for. The vimrc file is placed right next to the vim.sls file. The same command as above can be executed again, and the vim SLS formula will now also manage the vimrc file (a sketch of the completed formula appears after the note below). NOTE: Salt does not need to be restarted/reloaded or have the
master manipulated in any way when changing SLS formulas. They are instantly
available.
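For reference, the completed vim.sls formula typically looks like this (a sketch; the vimrc destination, mode, and ownership values are illustrative assumptions):

    /usr/local/etc/salt/states/vim.sls:

    vim:
      pkg.installed: []

    /etc/vimrc:
      file.managed:
        - source: salt://vimrc
        - mode: 644
        - user: root
        - group: root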
Adding Some DepthObviously, maintaining SLS formulas right in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. To start building the nginx formula a better way, make an nginx subdirectory and add an init.sls file: /usr/local/etc/salt/states/nginx/init.sls: nginx: A few concepts are introduced in this SLS formula. First is the service statement which ensures that the nginx service is running. Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two. The require statement makes sure that the required component is executed before and that it results in success.
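A sketch of the init.sls being described (the package and service are both named nginx, and the service requires the package):

    /usr/local/etc/salt/states/nginx/init.sls:

    nginx:
      pkg.installed: []
      service.running:
        - require:
          - pkg: nginx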
called requisites. Requisites are a powerful component of Salt States,
for more information on how requisites work and what is available see:
Requisites
Evaluation ordering is also available in Salt: Ordering States This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command: salt '*' state.apply nginx NOTE: state.apply is just another remote execution
function, just like test.version or disk.usage. It simply takes
the name of an SLS file as an argument.
Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change: /usr/local/etc/salt/states/edit/vim.sls: vim: Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed. Next ReadingTwo walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar.
An understanding of Pillar is extremely helpful in using States. Getting Deeper Into StatesTwo more in-depth States tutorials exist, which delve much more deeply into States functionality.
These tutorials include much more in-depth information, such as templating SLS formulas. So Much More!This concludes the initial Salt walk-through, but there are many more things still to learn! These documents will cover important core aspects of Salt:
A few more tutorials are also available:
This still is only scratching the surface; many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents. The macOS (Maverick) Developer Step By Step Guide To Salt InstallationThis document provides a step-by-step guide to installing a Salt cluster consisting of one master and one minion running on a local VM hosted on macOS. NOTE: This guide is aimed at developers who wish to run Salt in
a virtual machine. The official (Linux) walkthrough can be found
here.
The 5 Cent Salt IntroSince you're here you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster:
NOTE: This tutorial contains a third important configuration
file, not to be confused with the previous two: the virtual machine
provisioning configuration file. This in itself is not specifically tied to
Salt, but it also contains some Salt configuration. More on that in step 3.
Also note that all configuration files are YAML files. So indentation
matters.
NOTE: Salt also works with "masterless" configuration
where a minion is autonomous (in which case salt can be seen as a local
configuration tool), or in "multiple master" configuration. See the
documentation for more on that.
Before Digging In, The Architecture Of The Salt ClusterSalt MasterThe "Salt master" server is going to be the macOS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files. Salt MinionWe'll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution. Step 1 - Configuring The Salt Master On Your MacSee the Salt install guide for macOS installation instructions. Because Salt has a lot of dependencies that are not built into macOS, we will use Homebrew to install Salt. Homebrew is a package manager for the Mac; it's great, use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you uninstall things easily. NOTE: Brew is a Ruby program (Ruby is installed by default with
your Mac). Brew downloads, compiles, and links software. The linking phase is
when compiled software is deployed on your machine. It may conflict with
manually installed software, especially in the /usr/local directory. It's ok,
remove the manually installed version then refresh the link by typing brew
link 'packageName'. Brew has a brew doctor command that can help
you troubleshoot. It's a great command, use it often. Brew requires Xcode
command line tools. When you run brew the first time it asks you to install
them if they're not already on your system. Brew installs software in
/usr/local/bin (system bins are in /usr/bin). In order to use those bins you
need your $PATH to search there first. Brew tells you if your $PATH needs to
be fixed.
TIP: Use the keyboard shortcut cmd + shift + period in
the "open" macOS dialog box to display hidden files and folders,
such as .profile.
Install HomebrewInstall Homebrew here https://brew.sh/ Or just type ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine): brew install python brew install swig brew install zmq NOTE: zmq is ZeroMQ. It's a fantastic library used for server
to server network communication and is at the core of Salt efficiency.
Install SaltYou should now have everything ready to launch this command: pip install salt NOTE: There should be no need for sudo pip install salt.
Brew installed Python for your user, so you should have all the access. In
case you would like to check, type which python to ensure that it's
/usr/local/bin/python, and which pip, which should be
/usr/local/bin/pip.
Now type python in a terminal, then import salt. There should be no errors. Now exit the Python terminal using exit(). Create The Master ConfigurationIf the default /usr/local/etc/salt/master configuration file was not created, copy-paste it from here: https://docs.saltproject.io/en/latest/ref/configuration/examples.html#configuration-examples-master NOTE: /usr/local/etc/salt/master is a file, not a
folder.
Salt Master configuration changes. The Salt master needs a few customizations to be able to run on macOS: sudo launchctl limit maxfiles 4096 8192 In the /usr/local/etc/salt/master file, change max_open_files to 8192 (or just add the line max_open_files: 8192 (no quotes) if it doesn't already exist). You should now be able to launch the Salt master: sudo salt-master --log-level=all There should be no errors when running the above command. NOTE: This command is supposed to run as a daemon, but for toying
around, we'll keep it running on a terminal to monitor the activity.
Now that the master is set, let's configure a minion on a VM. Step 2 - Configuring The Minion VMThe Salt minion is going to run on a Virtual Machine. There are a lot of software options that let you run virtual machines on a Mac, but for this tutorial we're going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration. Vagrant lets you build ready-to-use VM images, starting from an OS image and customizing it using "provisioners". In our case, we'll use it to:
Install VirtualBoxGo get it here: https://www.virtualbox.org/wiki/Downloads (click on VirtualBox for macOS hosts => x86/amd64) Install VagrantGo get it here: https://www.vagrantup.com/downloads.html and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double-click to install it. Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands. Create The Minion VM FolderCreate a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory. cd $home mkdir minion Initialize VagrantFrom the minion folder, type vagrant init This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3. Import Precise64 Ubuntu Boxvagrant box add precise64 http://files.vagrantup.com/precise64.box NOTE: This box is added at the global Vagrant level. You only
need to do it once as each VM will use this same file.
Modify the VagrantfileModify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to: config.vm.box = "precise64" Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use): config.vm.network :private_network, ip: "192.168.33.10" At this point you should have a VM that can run, although there won't be much in it. Let's check that. Checking The VMFrom the $home/minion folder type: vagrant up A log showing the VM booting should be present. Once it's done you'll be back to the terminal: ping 192.168.33.10 The VM should respond to your ping request. Now log into the VM via ssh using Vagrant again: vagrant ssh You should see the shell prompt change to something similar to vagrant@precise64:~$ meaning you're inside the VM. From there, enter the following: ping 10.0.2.2
number is a VirtualBox default and is displayed in the log after the Vagrant
ssh command. We'll use that IP to tell the minion where the Salt master is.
Once you're done, end the ssh session by typing exit.
It's now time to connect the VM to the Salt master. Step 3 - Connecting Master and MinionCreating The Minion Configuration FileCreate the /usr/local/etc/salt/minion file. In that file, put the following lines, giving the ID for this minion and the IP of the master: master: 10.0.2.2 id: 'minion1' file_client: remote Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and you can accept them on the master later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once. Preseed minion keysFrom the minion folder on your Mac run: sudo salt-key --gen-keys=minion1 This should create two files: minion1.pem and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership: sudo chown youruser:yourgroup minion1.pem sudo chown youruser:yourgroup minion1.pub Then copy the .pub file into the list of accepted minions: sudo cp minion1.pub /usr/local/etc/salt/pki/master/minions/minion1 Modify Vagrantfile to Use Salt ProvisionerLet's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties): # salt-vagrant config config.vm.provision :salt do |salt| Now destroy the VM and recreate it from the /minion folder: vagrant destroy vagrant up If everything is fine you should see the following message: "Bootstrapping Salt... (this may take a while) Salt successfully configured and installed!" Checking Master-Minion CommunicationTo make sure the master and minion are talking to each other, enter the following: sudo salt '*' test.version You should see your minion answering with its salt version. It's now time to do some configuration. Step 4 - Configure Services to Install On the MinionIn this step we'll use the Salt master to instruct our minion to install Nginx. Checking the system's original stateFirst, make sure that an HTTP server is not installed on our minion. When opening a browser directed at http://192.168.33.10/, you should get an error saying the site cannot be reached. Initialize the top.sls fileSystem configuration is done in /usr/local/etc/salt/states/top.sls (and subfiles/folders), and then applied by running the state.apply function to have the Salt master order its minions to update their instructions and run the associated commands. First, create an empty file on your Salt master (macOS machine): touch /usr/local/etc/salt/states/top.sls When the file is empty, or if no configuration is found for our minion, an error is reported: sudo salt 'minion1' state.apply This should return an error stating: No Top file or external nodes data matches found. Create The Nginx ConfigurationNow is finally the time to enter the real meat of our server's configuration. For this tutorial our minion will be treated as a web server that needs to have Nginx installed. Insert the following lines into /usr/local/etc/salt/states/top.sls (which should currently be empty). base: Now create /usr/local/etc/salt/states/bin/nginx.sls containing the following: nginx: Check Minion StateFinally, run the state.apply function again: sudo salt 'minion1' state.apply You should see a log showing that the Nginx package has been installed and the service configured.
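For reference, here are rough sketches of the provisioner block, top.sls, and bin/nginx.sls described above. The salt-vagrant option names (run_highstate, minion_config, minion_key, minion_pub) come from the Vagrant Salt provisioner; treat the paths and values as illustrative assumptions. The Vagrantfile provisioner block from step 3:

    # salt-vagrant config
    config.vm.provision :salt do |salt|
      salt.run_highstate = true
      salt.minion_config = "/usr/local/etc/salt/minion"
      salt.minion_key = "./minion1.pem"
      salt.minion_pub = "./minion1.pub"
    end

The step 4 top.sls, assigning the bin.nginx state to minion1:

    base:
      'minion1':
        - bin.nginx

And bin/nginx.sls, installing the package and keeping the service running:

    nginx:
      pkg.installed: []
      service.running:
        - require:
          - pkg: nginx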
To prove it, open your browser and navigate to http://192.168.33.10/; you should see the standard Nginx welcome page. Congratulations! Where To Go From HereA full description of configuration management within Salt (sls files among other things) is available here: https://docs.saltproject.io/en/latest/index.html#configuration-management Salt's Test Suite: An IntroductionNOTE: This tutorial makes a couple of assumptions. The first
assumption is that you have a basic knowledge of Salt. To get up to speed,
check out the Salt Walkthrough.
The second assumption is that your Salt development environment is already configured and that you have a basic understanding of contributing to the Salt codebase. If you're unfamiliar with either of these topics, please refer to the Installing Salt for Development and the Contributing pages, respectively. Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. Salt's test suite is located under the tests directory in the root of Salt's code base and is divided into two main types of tests: unit tests and integration tests. The unit and integration sub-test-suites are located in the tests directory, which is where the majority of Salt's test cases are housed. Getting Set Up For TestsFirst of all, you will need to ensure you install nox: pip install nox Test Directory StructureAs noted in the introduction to this tutorial, Salt's test suite is located in the tests directory in the root of Salt's code base. From there, the tests are divided into two groups: integration and unit. Within each of these directories, the directory structure roughly mirrors the directory structure of Salt's own codebase. For example, the files inside tests/integration/modules contain tests for the files located within salt/modules. NOTE: tests/integration and tests/unit are the
only directories discussed in this tutorial. With the exception of the
tests/runtests.py file, which is used below in the Running the Test
Suite section, the other directories and files located in tests are
outside the scope of this tutorial.
Integration vs. UnitGiven that Salt's test suite contains two powerful, though very different, testing approaches, when should you write integration tests and when should you write unit tests? Integration tests use Salt masters, minions, and a syndic to test salt functionality directly and focus on testing the interaction of these components. Salt's integration test runner includes functionality to run Salt execution modules, runners, states, shell commands, salt-ssh commands, salt-api commands, and more. This provides a tremendous ability to use Salt to test itself and makes writing such tests a breeze. Integration tests are the preferred method of testing Salt functionality when possible. Unit tests do not spin up any Salt daemons, but instead find their value in testing singular implementations of individual functions. Instead of testing against specific interactions, unit tests should be used to test a function's logic. Unit tests should be used to test a function's exit point(s) such as any return or raises statements. Unit tests are also useful in cases where writing an integration test might not be possible. While the integration test suite is extremely powerful, unfortunately at this time, it does not cover all functional areas of Salt's ecosystem. For example, at the time of this writing, there is not a way to write integration tests for Proxy Minions. Since the test runner will need to be adjusted to account for Proxy Minion processes, unit tests can still provide some testing support in the interim by testing the logic contained inside Proxy Minion functions. Running the Test SuiteOnce all of the requirements are installed, the nox command is used to instantiate Salt's test suite: nox -e 'test-3(coverage=False)' The command above, if executed without any options, will run the entire suite of integration and unit tests. Some tests require certain flags to run, such as destructive tests. If these flags are not included, then the test suite will only perform the tests that don't require special attention. At the end of the test run, you will see a summary output of the tests that passed, failed, or were skipped. You can pass any pytest options after the nox command like so: nox -e 'test-3(coverage=False)' -- tests/unit/modules/test_ps.py The above command will run the test_ps.py test with the zeromq transport, python3, and pytest. Pass any pytest options after --. Running Integration TestsSalt's set of integration tests uses Salt to test itself. The integration portion of the test suite includes some built-in Salt daemons that will spin up in preparation for the test run. This list of Salt daemon processes includes:
These various daemons are used to execute Salt commands and functionality within the test suite, allowing you to write tests to assert against expected or unexpected behaviors. A simple example of a test utilizing a typical master/minion execution module command is the test for the test_ping function in the tests/integration/modules/test_test.py file: def test_ping(self): The test above is a very simple example where the test.ping function is executed by Salt's test suite runner, asserting that the minion returned a True response. Test Selection OptionsIf you want to run only a subset of tests, this is easily done with pytest. You only need to point the test runner to the directory. For example, if you want to run all integration module tests: nox -e 'test-3(coverage=False)' -- tests/integration/modules/ Running Unit TestsIf you want to run only the unit tests, you can just pass the unit test directory as an option to the test runner. The unit tests do not spin up any Salt testing daemons as the integration tests do and execute very quickly compared to the integration tests. nox -e 'test-3(coverage=False)' -- tests/unit/ Running Specific TestsThere are times when a specific test file, test class, or even a single, individual test needs to be executed, such as when writing new tests. In these situations, you should use the pytest syntax to select the specific tests. For running a single test file, such as the pillar module test file in the integration test directory, you must provide the file path. nox -e 'test-3(coverage=False)' -- tests/pytests/integration/modules/test_pillar.py Some test files contain only one test class while other test files contain multiple test classes. To run a specific test class within the file, append the name of the test class to the end of the file path: nox -e 'test-3(coverage=False)' -- tests/pytests/integration/modules/test_pillar.py::PillarModuleTest To run a single test within a file, append both the name of the test class that the individual test belongs to and the name of the test itself: nox -e 'test-3(coverage=False)' -- tests/pytests/integration/modules/test_pillar.py::PillarModuleTest::test_data The following command is an example of how to execute a single test found in the tests/unit/modules/test_cp.py file: nox -e 'test-3(coverage=False)' -- tests/pytests/unit/modules/test_cp.py::CpTestCase::test_get_file_not_found Writing Tests for SaltOnce you're comfortable running tests, you can now start writing them! Be sure to review the Integration vs. Unit section of this tutorial to determine what type of test makes the most sense for the code you're testing.
specifications required for Salt test files. We will not be covering all of
the these specifics in this tutorial. Please refer to the testing
documentation links listed below in the Additional Testing
Documentation section to learn more about these requirements.
In the following sections, the test examples assume the "new" test is added to a test file that is already present and regularly running in the test suite and is written with the correct requirements. Writing Integration TestsSince integration tests validate against a running environment, as explained in the Running Integration Tests section of this tutorial, integration tests are very easy to write and are generally the preferred method of writing Salt tests. The following integration test is an example taken from the test.py file in the tests/integration/modules directory. This test uses the run_function method to test the functionality of a traditional execution module command. The run_function method uses the integration test daemons to execute a module.function command as you would with Salt. The minion runs the function and returns. The test also uses Python's Assert Functions to test that the minion's return is expected. def test_ping(self): Args can be passed in to the run_function method as well: def test_echo(self): The next example is taken from the tests/integration/modules/test_aliases.py file and demonstrates how to pass kwargs to the run_function call. Also note that this test uses another salt function to ensure the correct data is present (via the aliases.set_target call) before attempting to assert what the aliases.get_target call should return. def test_set_target(self): Using multiple Salt commands in this manner provides two useful benefits. The first is that it provides some additional coverage for the aliases.set_target function. The second benefit is the call to aliases.get_target is not dependent on the presence of any aliases set outside of this test. Tests should not be dependent on the previous execution, success, or failure of other tests. They should be isolated from other tests as much as possible. While it might be tempting to build out a test file where tests depend on one another before running, this should be avoided. SaltStack recommends that each test should test a single functionality and not rely on other tests. Therefore, when possible, individual tests should also be broken up into singular pieces. These are not hard-and-fast rules, but serve more as recommendations to keep the test suite simple. This helps with debugging code and related tests when failures occur and problems are exposed. There may be instances where large tests use many asserts to set up a use case that protects against potential regressions. NOTE: The examples above all use the run_function option
to test execution module functions in a traditional master/minion environment.
To see examples of how to test other common Salt components such as runners,
salt-api, and more, please refer to the Integration Test Class Examples
documentation.
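As a concrete sketch of the run_function pattern described above (assuming the classic ModuleCase-style test classes; the echo text is arbitrary):

    def test_ping(self):
        # Run test.ping on the test minion and expect a True return.
        self.assertTrue(self.run_function("test.ping"))

    def test_echo(self):
        # Arguments are passed as a list; test.echo returns its input.
        self.assertEqual(self.run_function("test.echo", ["text"]), "text")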
Destructive vs Non-destructive TestsSince Salt is used to change the settings and behavior of systems, often the best approach to run tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation. To write a destructive test, decorate the test function with the destructive_test marker: @pytest.mark.destructive_test def test_pkg_install(salt_cli): Writing Unit TestsAs explained in the Integration vs. Unit section above, unit tests should be written to test the logic of a function. This includes focusing on testing return and raises statements. Substantial effort should be made to mock external resources that are used in the code being tested. External resources that should be mocked include, but are not limited to, APIs, function calls, external data either globally available or passed in through function arguments, file data, etc. This practice helps to isolate unit tests to test Salt logic. One handy way to think about writing unit tests is to "block all of the exits". More information about how to properly mock external resources can be found in Salt's Unit Test documentation. Salt's unit tests utilize Python's mock class as well as MagicMock. The @patch decorator is also heavily used when "blocking all the exits". A simple example of a unit test currently in use in Salt is the test_get_file_not_found test in the tests/pytests/unit/modules/test_cp.py file. This test uses the @patch decorator and MagicMock to mock the return of the call to Salt's cp.hash_file execution module function. This ensures that we're testing the cp.get_file function directly, instead of inadvertently testing the call to cp.hash_file, which is used in cp.get_file. def test_get_file_not_found(self): Note that Salt's cp module is imported at the top of the file, along with all of the other necessary testing imports. The get_file function is then called directly in the testing function, instead of using the run_function method as the integration test examples do above. The call to cp.get_file returns an empty string when a hash_file isn't found. Therefore, the example above is a good illustration of a unit test "blocking the exits" via the @patch decorator, as well as testing logic via asserting against the return statement in the if clause. In this example we used the python assert to verify the return from cp.get_file. Pytest allows you to use these asserts when writing your tests and, in fact, plain asserts are the preferred way to assert anything in your tests. As Salt dives deeper into Pytest, the use of unittest.TestCase will be replaced by plain test functions, or test functions grouped in a class which does not subclass unittest.TestCase, which, of course, doesn't work with unittest assert functions. There are more examples of writing unit tests of varying complexities available in the following docs:
NOTE: Considerable care should be taken to ensure that you're
testing something useful in your test functions. It is very easy to fall into
a situation where you have mocked so much of the original function that the
test results in only asserting against the data you have provided. This
results in a poor and fragile unit test.
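To make the mocking discussion concrete, here is a sketch along the lines of the test_get_file_not_found example described above (the loader setup that real Salt unit tests perform via setup_loader_modules is omitted for brevity; the path and destination strings are illustrative):

    from unittest.mock import MagicMock, patch

    import salt.modules.cp as cp

    def test_get_file_not_found():
        # Mock cp.hash_file so that only cp.get_file's own logic is exercised.
        with patch("salt.modules.cp.hash_file", MagicMock(return_value=False)):
            path = "salt://saltines"
            dest = "/srv/salt/cheese"
            # cp.get_file returns an empty string when the hash is not found.
            assert cp.get_file(path, dest) == ""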
Add a python module dependency to the test runThe test dependencies for python modules are managed under the requirements/static/ci directory. You will need to add your module to the appropriate file under requirements/static/ci. When pre-commit is run, it will create all of the needed requirement files under requirements/static/ci/py3{6,7,8,9}. Nox will then use these files to install the requirements for the tests. Add a system dependency to the test runIf you need to add a system dependency for the test run, this will need to be added in the salt-ci-images repo. This repo uses salt states to install system dependencies. You need to update the state-tree/golden-images-provision.sls file with your dependency to ensure it is installed. Once your PR is merged the core team will need to promote the new images with your new dependency installed. Checking for Log MessagesTo test whether a given log message has been emitted, the following pattern can be used: def test_issue_58763_a(tmp_path, modules, state_tree, caplog): Test GroupsSalt has four groups of tests:
Pytest Decorators
@pytest.mark.core_test def test_ping(self): You can also mark all the tests in a file: pytestmark = [
In your PR you can enable or disable test groups by setting a label. All thought the fast, slow and core tests specified in the change file will always run.
Additional Testing DocumentationIn addition to this tutorial, there are some other helpful resources and documentation that go into more depth on Salt's test runner, writing tests for Salt code, and general Python testing documentation. Please see the follow references for more information:
TroubleshootingThe intent of the troubleshooting section is to introduce solutions to a number of common issues encountered by users and the tools that are available to aid in developing States and Salt code. Troubleshooting the Salt MasterIf your Salt master is having issues such as minions not returning data, slow execution times, or a variety of other issues, the following links contain details on troubleshooting the most common issues encountered: Troubleshooting the Salt MasterRunning in the ForegroundA great deal of information is available via the debug logging system, if you are having issues with minions connecting or not starting run the master in the foreground: # salt-master -l debug Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground. What Ports does the Master Need Open?For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms. If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt. Too many open filesThe salt-master needs at least 2 sockets per host that connects to it, one for the Publisher and one for response port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter: 12:45:29,289 [salt.master ][INFO ] Starting Salt worker process 38 Too many open files sock != -1 (tcp_listener.cpp:335) The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default): [root@salt-master ~]# ulimit -n 1024 If this value is not equal to at least twice the number of minions, then it will need to be raised. For example, in an environment with 1800 minions, the nofile limit should be set to no less than 3600. This can be done by creating the file /etc/security/limits.d/99-salt.conf, with the following contents: root hard nofile 4096 root soft nofile 4096 Replace root with the user under which the master runs, if different. If your master does not have an /etc/security/limits.d directory, the lines can simply be appended to /etc/security/limits.conf. As with any change to resource limits, it is best to stay logged into your current shell and open another shell to run ulimit -n again and verify that the changes were applied correctly. Additionally, if your master is running upstart, it may be necessary to specify the nofile limit in /etc/default/salt-master if upstart isn't respecting your resource limits: limit nofile 4096 4096 NOTE: The above is simply an example of how to set these
values, and you may wish to increase them even further if your Salt master is
doing more than just running Salt.
Salt Master Stops RespondingThere are known bugs with ZeroMQ versions less than 2.1.11 which can cause the Salt master to not respond properly. If you're running a ZeroMQ version greater than or equal to 2.1.9, you can work around the bug by setting the sysctls net.core.rmem_max and net.core.wmem_max to 16777216. Next, set the third field in net.ipv4.tcp_rmem and net.ipv4.tcp_wmem to at least 16777216. You can do it manually with something like: # echo 16777216 > /proc/sys/net/core/rmem_max # echo 16777216 > /proc/sys/net/core/wmem_max # echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem # echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem Or with the following Salt state: net.core.rmem_max: Live Python Debug OutputIf the master seems to be unresponsive, a SIGUSR1 can be passed to the salt-master threads to display what piece of code is executing. This debug information can be invaluable in tracking down bugs. To pass a SIGUSR1 to the master, first make sure the master is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so: # salt-master -l debug Then pass the signal to the master when it seems to be unresponsive: # killall -SIGUSR1 salt-master When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible. Live Salt-Master ProfilingWhen faced with performance problems one can turn on master process profiling by sending it SIGUSR2. # killall -SIGUSR2 salt-master This will activate the yappi profiler inside the salt-master code; after some time, send SIGUSR2 again to stop profiling and save the results to a file. If run in the foreground, salt-master will report the filename for the results, which are usually located under /tmp on Unix-based OSes and c:\temp on Windows. Make sure you have yappi installed. Results can then be analyzed with kcachegrind or a similar tool. On Windows, in the absence of kcachegrind, a simple file-based workflow to create profiling graphs could use gprof2dot, graphviz and this batch file: :: :: Converts callgrind* profiler output to *.pdf, via *.dot :: @echo off del *.dot.pdf for /r %%f in (callgrind*) do ( echo "%%f" Commands Time Out or Do Not Return OutputDepending on your OS (this is most common on Ubuntu due to apt-get), you may sometimes find that a state.apply or other long-running command does not return output. By default the timeout is set to 5 seconds. The timeout value can easily be increased by modifying the timeout line within your /usr/local/etc/salt/master configuration file. Having keys accepted for Salt minions that no longer exist or are not reachable also increases the possibility of timeouts, since the Salt master waits for those systems to return command results. Passing the -c Option to Salt Returns a Permissions ErrorUsing the -c option with the Salt command modifies the configuration directory. When the configuration file is read it will still base data off of the root_dir setting. This can result in unintended behavior if you are expecting files such as /usr/local/etc/salt/pki to be pulled from the location specified with -c. Modify the root_dir setting to address this behavior. Salt Master Doesn't Return Anything While Running JobsWhen a command being run via Salt takes a very long time to return (package installations, certain scripts, etc.) the master may drop you back to the shell.
In most situations the job is still running but Salt has exceeded the set timeout before returning. Querying the job queue will provide the data of the job but is inconvenient. This can be resolved by either manually using the -t option to set a longer timeout when running commands (by default it is 5 seconds) or by modifying the master configuration file: /usr/local/etc/salt/master and setting the timeout value to change the default timeout for all commands, and then restarting the salt-master service. If a state.apply run takes too long, you can find a bottleneck by adding the --out=profile option. Salt Master Auth FloodingIn large installations, care must be taken not to overwhelm the master with authentication requests. Several options can be set on the master which mitigate the chances of an authentication flood from causing an interruption in service. NOTE: recon_default:
The average number of seconds to wait between reconnection attempts.
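For example, reconnection timing can be spread out in the minion configuration (recon_default, recon_max, and recon_randomize are the documented minion options; the values are illustrative):

    recon_default: 1000
    recon_max: 59000
    recon_randomize: True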
Running states locallyTo debug the states, you can use salt-call locally. salt-call -l trace --local state.highstate The top.sls file is used to map what SLS modules get loaded onto what minions via the state system. It is located in the directory defined in the file_roots variable of the salt master configuration file, which is found in CONFIG_DIR/master, normally /usr/local/etc/salt/master The default configuration for the file_roots is: file_roots: So the top file is defaulted to the location /usr/local/etc/salt/states/top.sls Salt Master UmaskThe salt master uses a cache to track jobs as they are published and returns come back. The recommended umask for a salt-master is 022, which is the default for most users on a system. Incorrect umasks can result in permission-denied errors when the master tries to access files in its cache. Troubleshooting the Salt MinionIn the event that your Salt minion is having issues, a variety of solutions and suggestions are available. Please refer to the following links for more information: Troubleshooting the Salt MinionRunning in the ForegroundA great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion in the foreground: # salt-minion -l debug Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground. What Ports does the Minion Need Open?No ports need to be opened on the minion, as it makes outbound connections to the master. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms. If you have netcat installed, you can check port connectivity from the minion with the nc command: $ nc -v -z salt.master.ip.addr 4505 Connection to salt.master.ip.addr 4505 port [tcp/unknown] succeeded! $ nc -v -z salt.master.ip.addr 4506 Connection to salt.master.ip.addr 4506 port [tcp/unknown] succeeded! The Nmap utility can also be used to check if these ports are open: # nmap -sS -q -p 4505-4506 salt.master.ip.addr Starting Nmap 6.40 ( http://nmap.org ) at 2013-12-29 19:44 CST Nmap scan report for salt.master.ip.addr (10.0.0.10) Host is up (0.0026s latency). PORT STATE SERVICE 4505/tcp open unknown 4506/tcp open unknown MAC Address: 00:11:22:AA:BB:CC (Intel) Nmap done: 1 IP address (1 host up) scanned in 1.64 seconds If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt. Tools like tcptraceroute can also be used to determine if an intermediate device or firewall is blocking the needed TCP ports. Using salt-callThe salt-call command was originally developed for aiding in the development of new Salt modules. Since then, many applications have been developed for running any Salt module locally on a minion. These range from the original intent of salt-call (development assistance), to gathering more verbose output from calls like state.apply. When initially creating your state tree, it is generally recommended to invoke highstates by running state.apply directly from the minion with salt-call, rather than remotely from the master. This displays far more information about the execution than calling it remotely.
For even more verbosity, increase the loglevel using the -l argument: # salt-call -l debug state.apply The main difference between using salt and using salt-call is that salt-call is run from the minion, and it only runs the selected function on that minion. By contrast, salt is run from the master, and requires you to specify the minions on which to run the command using salt's targeting system. Live Python Debug OutputIf the minion seems to be unresponsive, a SIGUSR1 can be passed to the process to display what piece of code is executing. This debug information can be invaluable in tracking down bugs. To pass a SIGUSR1 to the minion, first make sure the minion is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so: # salt-minion -l debug Then pass the signal to the minion when it seems to be unresponsive: # killall -SIGUSR1 salt-minion When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible. Multiprocessing in Execution ModulesAs is outlined in github issue #6300, Salt cannot use python's multiprocessing pipes and queues from execution modules. Multiprocessing from the execution modules is perfectly viable; it is just necessary to use Salt's event system to communicate back with the process. The reason for this difficulty is that python attempts to pickle all objects in memory when communicating, and it cannot pickle function objects. Since the Salt loader system creates and manages function objects, this causes the pickle operation to fail. Salt Minion Doesn't Return Anything While Running Jobs LocallyWhen a command being run via Salt takes a very long time to return (package installations, certain scripts, etc.) the minion may drop you back to the shell. In most situations the job is still running but Salt has exceeded the set timeout before returning. Querying the job queue will provide the data of the job but is inconvenient. This can be resolved by either manually using the -t option to set a longer timeout when running commands (by default it is 5 seconds) or by modifying the minion configuration file: /usr/local/etc/salt/minion and setting the timeout value to change the default timeout for all commands, and then restarting the salt-minion service. NOTE: Modifying the minion timeout value is not required when
running commands from a Salt Master. It is only required when running commands
locally on the minion.
If a state.apply run takes too long, you can find a bottleneck by adding the --out=profile option. Running in the ForegroundA great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion and/or master in the foreground: salt-master -l debug salt-minion -l debug Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground. What Ports do the Master and Minion Need Open?No ports need to be opened up on each minion. For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall. You can check port connectivity from the minion with the nc command: nc -v -z salt.master.ip 4505 nc -v -z salt.master.ip 4506 There is also a firewall configuration document that might help as well. If you've enabled the right TCP ports on your operating system or Linux distribution's firewall and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt. Using salt-callThe salt-call command was originally developed for aiding in the development of new Salt modules. Since then, many applications have been developed for running any Salt module locally on a minion. These range from the original intent of salt-call, development assistance, to gathering more verbose output from calls like state.apply. When initially creating your state tree, it is generally recommended to invoke state.apply directly from the minion with salt-call, rather than remotely from the master. This displays far more information about the execution than calling it remotely. For even more verbosity, increase the loglevel using the -l argument: salt-call -l debug state.apply The main difference between using salt and using salt-call is that salt-call is run from the minion, and it only runs the selected function on that minion. By contrast, salt is run from the master, and requires you to specify the minions on which to run the command using salt's targeting system. Too many open filesThe salt-master needs at least 2 sockets per host that connects to it, one for the publisher and one for the response port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter: 12:45:29,289 [salt.master ][INFO ] Starting Salt worker process 38 Too many open files sock != -1 (tcp_listener.cpp:335) The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default): [root@salt-master ~]# ulimit -n 1024 And modify that value to be at least equal to the number of minions x 2. This setting can be changed in limits.conf as the nofile value(s), and activated upon a new login of the specified user. So, an environment with 1800 minions would need 1800 x 2 = 3600 as a minimum. Salt Master Stops RespondingThere are known bugs with ZeroMQ versions less than 2.1.11 which can cause the Salt master to not respond properly. If you're running a ZeroMQ version greater than or equal to 2.1.9, you can work around the bug by setting the sysctls net.core.rmem_max and net.core.wmem_max to 16777216. Next, set the third field in net.ipv4.tcp_rmem and net.ipv4.tcp_wmem to at least 16777216.
You can do it manually with something like: # echo 16777216 > /proc/sys/net/core/rmem_max # echo 16777216 > /proc/sys/net/core/wmem_max # echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem # echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem Or with the following Salt state: net.core.rmem_max: Salt and SELinuxCurrently there are no SELinux policies for Salt. For the most part Salt runs without issue when SELinux is running in Enforcing mode. This is because when the minion executes as a daemon the type context is changed to initrc_t. The problem with SELinux arises when using salt-call or running the minion in the foreground, since the type context stays unconfined_t. This problem is generally manifest in the rpm install scripts when using the pkg module. Until a full SELinux Policy is available for Salt the solution to this issue is to set the execution context of salt-call and salt-minion to rpm_exec_t: # CentOS 5 and RHEL 5: chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-minion chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-call # CentOS 6 and RHEL 6: chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-minion chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-call This works well, because the rpm_exec_t context has very broad control over other types. Red Hat Enterprise Linux 5Salt requires Python 2.6 or 2.7. Red Hat Enterprise Linux 5 and its variants come with Python 2.4 installed by default. When installing on RHEL 5 from the EPEL repository this is handled for you. But, if you run Salt from git, be advised that its dependencies need to be installed from EPEL and that Salt needs to be run with the python26 executable. Common YAML GotchasAn extensive list of YAML idiosyncrasies has been compiled: YAML IdiosyncrasiesOne of Salt's strengths, the use of existing serialization systems for representing SLS data, can also backfire. YAML is a general purpose system and there are a number of things that would seem to make sense in an sls file that cause YAML issues. It is wise to be aware of these issues. While reports of running into them are generally rare, they can still crop up at unexpected times. Spaces vs TabsYAML uses spaces, period. Do not use tabs in your SLS files! If strange errors are coming up in rendering SLS files, make sure to check that no tabs have crept in! In Vim, after enabling search highlighting with :set hlsearch, you can check with the following key sequence in normal mode (you can hit ESC twice to be sure): /, Ctrl-v, Tab, then hit Enter. Also, you can convert tabs to 2 spaces by these commands in Vim: :set tabstop=2 expandtab and then :retab. IndentationThe suggested syntax for YAML files is to use 2 spaces for indentation, but YAML will follow whatever indentation system the individual file uses. Indentation of two spaces works very well for SLS files given the fact that the data is uniform and not deeply nested. Nested DictionariesWhen dictionaries are nested within other data structures (particularly lists), the indentation logic sometimes changes. Examples of where this might happen include context and default options from the file.managed state: /etc/http/conf/http.conf: Notice that while the indentation is two spaces per level, for the values under the context and defaults options there is a four-space indent. If only two spaces are used to indent, then those keys will be considered part of the same dictionary that contains the context key, and so the data will not be loaded correctly.
If using a double indent is not desirable, then a deeply-nested dict can be declared with curly braces: /etc/http/conf/http.conf: Here is a more concrete example of how YAML actually handles these indentations, using the Python interpreter on the command line: >>> import yaml
>>> yaml.safe_load(
...     """mystate:
...   file.managed:
...     - context:
...         some: var"""
... )
{'mystate': {'file.managed': [{'context': {'some': 'var'}}]}}
>>> yaml.safe_load(
...     """mystate:
...   file.managed:
...     - context:
...       some: var"""
... )
{'mystate': {'file.managed': [{'some': 'var', 'context': None}]}}
Note that in the second example, some is added as another key in the same dictionary, whereas in the first example, it's the start of a new dictionary. That's the distinction. context is a common example because it is a keyword arg for many functions, and should contain a dictionary.

Multi-line Strings
Similarly, when a multi-line string is nested within a list item (such as when using the contents argument for a file.managed state), the indentation must be doubled. A state which indents the string's content only one additional level is invalid YAML, and will result in a rather cryptic error when you try to run the state. The correct, doubled indentation is shown in the sketch following the quoting note below.

True/False, Yes/No, On/Off
PyYAML will load these values as boolean True or False. Un-capitalized versions will also be loaded as booleans (true, false, yes, no, on, and off). This can be especially problematic when constructing Pillar data. Make sure that your Pillars which need to use the string versions of these values are enclosed in quotes. Pillars will be parsed twice by salt, so you'll need to wrap your values in multiple quotes, including double quotation marks (" ") and single quotation marks (' '). Note that spaces are included in the quotation type examples for clarity. Multiple quoting examples look like this:

- '"false"'
- "'True'"
- "'YES'"
- '"No"'

NOTE: When using multiple quotes in this manner, they must be
different. Using "" "" or '' '' won't work
in this case (spaces are included in examples for clarity).
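Returning to the multi-line string case above, a minimal sketch of the correct, doubled indentation under the contents argument (the path and content are illustrative):

/tmp/foo.txt:
  file.managed:
    - contents: |
        foo
        bar
        baz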
The '%' Sign
The % symbol has a special meaning in YAML; it needs to be passed as a string literal:

cheese:

Time Expressions
PyYAML will load a time expression as an integer, assuming an HH:MM format. So, for example, 12:00 is loaded by PyYAML as 720 (YAML reads colon-separated digit groups as a base-60 "sexagesimal" number: 12 x 60 + 0 = 720). An excellent explanation for why can be found here. To keep time expressions like this from being loaded as integers, always quote them.

NOTE: When using a jinja load_yaml map, items must be
quoted twice. For example:
{% load_yaml as wsus_schedule %}
FRI_10:
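  # A hedged completion of the sketch above: the schedule keys and times are
  # illustrative. Each time value is quoted twice so that it survives both the
  # Jinja render and the YAML load as a string rather than a sexagesimal integer.
  time: '"23:00"'
SAT_10:
  time: '"06:00"'
{% endload %}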
YAML does not like "Double Short Decs"If I can find a way to make YAML accept "Double Short Decs" then I will, since I think that double short decs would be awesome. So what is a "Double Short Dec"? It is when you declare a multiple short decs in one ID. Here is a standard short dec, it works great: vim: The short dec means that there are no arguments to pass, so it is not required to add any arguments, and it can save space. YAML though, gets upset when declaring multiple short decs, for the record... THIS DOES NOT WORK: vim: Similarly declaring a short dec in the same ID dec as a standard dec does not work either... ALSO DOES NOT WORK: fred: The correct way is to define them like this: vim: Alternatively, they can be defined the "old way", or with multiple "full decs": vim: YAML supports only plain ASCIIAccording to YAML specification, only ASCII characters can be used. Within double-quotes, special characters may be represented with C-style escape sequences starting with a backslash ( \ ). Examples: - micro: "\u00b5" - copyright: "\u00A9" - A: "\x41" - alpha: "\u0251" - Alef: "\u05d0" List of usable Unicode characters will help you to identify correct numbers. Python can also be used to discover the Unicode number for a character: repr("Text with wrong characters i need to figure out")
This shell command can find wrong characters in your SLS files: find . -name '*.sls' -exec grep --color='auto' -P -n '[^\x00-\x7F]' \{} \;
Alternatively you can toggle the yaml_utf8 setting in your master configuration file. This is still an experimental setting, but it should manage the proper encoding conversion in Salt after YAML state compilation.

Underscores stripped in Integer Definitions
If a definition only includes numbers and underscores, it is parsed by YAML as an integer and all underscores are stripped. To ensure the object becomes a string, it should be surrounded by quotes. More information here. Here's an example:

>>> import yaml
>>> yaml.safe_load("2013_05_10")
20130510
>>> yaml.safe_load('"2013_05_10"')
'2013_05_10'
Automatic datetime conversion
If there is a value in a YAML file formatted 2014-01-20 14:23:23 or similar, YAML will automatically convert this to a Python datetime object. These objects are not msgpack serializable, and so may break core Salt functionality. If values such as these are needed in a Salt YAML file (specifically a configuration file), they should be wrapped in quotes to force YAML to serialize them as strings:

>>> import yaml
>>> yaml.safe_load("2014-01-20 14:23:23")
datetime.datetime(2014, 1, 20, 14, 23, 23)
>>> yaml.safe_load('"2014-01-20 14:23:23"')
'2014-01-20 14:23:23'
Additionally, numbers formatted like XXXX-XX-XX will also be converted (or YAML will attempt to convert them, and error out if it doesn't think the date is a real one). Thus, for example, if a minion were to have an ID of 4017-16-20 the minion would not start, because YAML would complain that the date was out of range. The workaround is the same: surround the offending string with quotes:

>>> import yaml
>>> yaml.safe_load("4017-16-20")
Traceback (most recent call last):
Keys Limited to 1024 Characters
Simple keys are limited by the YAML spec to a single line, and cannot be longer than 1024 characters. PyYAML enforces these limitations (see here), and therefore anything parsed as YAML in Salt is subject to them.

Live Python Debug Output
If the minion or master seems to be unresponsive, a SIGUSR1 can be passed to the processes to display where in the code they are running. If encountering a situation like this, this debug information can be invaluable. First make sure the master or minion is running in the foreground:

salt-master -l debug
salt-minion -l debug

Then pass the signal to the master or minion when it seems to be unresponsive:

killall -SIGUSR1 salt-master
killall -SIGUSR1 salt-minion

Under BSD and macOS, in addition to the SIGUSR1 signal, a debug subroutine is set up for SIGINFO, which has the advantage of being sent via the Ctrl+T shortcut. When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, this information can be invaluable.

Salt 0.16.x minions cannot communicate with a 0.17.x master
As of release 0.17.1 you can no longer run different versions of Salt on your Master and Minion servers. This is due to a protocol change for security purposes. The Salt team will continue to attempt to ensure versions are as backwards compatible as possible.

Debugging the Master and Minion
A list of common master and minion troubleshooting steps provides a starting point for resolving issues you may encounter.

Frequently Asked Questions
FAQ
Is Salt open-core?
No. Salt is 100% committed to being open-source, including all of our APIs. It is developed under the Apache 2.0 license, allowing it to be used in both open and proprietary projects. To expand on this a little: there is much argument over the actual definition of "open core". From our standpoint, Salt is open source.
SaltStack the company does make proprietary products which use Salt and its libraries, as any company is free to do, but we do so via the APIs, NOT by forking Salt and creating a different, closed-source version of it for paying customers.

I think I found a bug! What should I do?
The salt-users mailing list as well as the salt IRC channel can both be helpful resources to confirm if others are seeing the issue and to assist with immediate debugging. To report a bug to the Salt project, please follow the instructions in reporting a bug.

What ports should I open on my firewall?
Minions need to be able to connect to the Master on TCP ports 4505 and 4506. Minions do not need any inbound ports open. More detailed information on firewall settings can be found here.

I'm seeing weird behavior (including but not limited to packages not installing their users properly)
This is often caused by SELinux. Try disabling SELinux or putting it in permissive mode and see if the weird behavior goes away.

My script runs every time I run a state.apply. Why?
You are probably using cmd.run rather than cmd.wait. A cmd.wait state will only run when there has been a change in a state that it is watching. A cmd.run state will run the corresponding command every time (unless it is prevented from running by the unless or onlyif arguments). More details can be found in the documentation for the cmd states.

When I run test.ping, why don't the Minions that aren't responding return anything? Returning False would be helpful.
When you run test.ping the Master tells Minions to run commands/functions, and listens for the return data, printing it to the screen when it is received. If it doesn't receive anything back, it doesn't have anything to display for that Minion. There are a couple of options for getting information on Minions that are not responding. One is to use the verbose (-v) option when you run salt commands, as it will display "Minion did not return" for any Minions which time out.

salt -v '*' pkg.install zsh

Another option is to use the manage.down runner:

salt-run manage.down

Also, if the Master is under heavy load, it is possible that the CLI will exit without displaying return data for all targeted Minions. However, this doesn't mean that the Minions did not return; this only means that the Salt CLI timed out waiting for a response. Minions will still send their return data back to the Master once the job completes. If any expected Minions are missing from the CLI output, the jobs.list_jobs runner can be used to show the job IDs of the jobs that have been run, and the jobs.lookup_jid runner can be used to get the return data for that job.

salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20130916125524463507

If you find that you are often missing Minion return data on the CLI, only to find it with the jobs runners, then this may be a sign that the worker_threads value may need to be increased in the master config file. Additionally, running your Salt CLI commands with the -t option will make Salt wait longer for the return data before the CLI command exits. For instance, the below command will wait up to 60 seconds for the Minions to return:

salt -t 60 '*' test.ping

How does Salt determine the Minion's id?
If the Minion id is not configured explicitly (using the id parameter), Salt will determine the id based on the hostname. Exactly how this is determined varies a little between operating systems and is described in detail here.
I'm trying to manage packages/services but I get an error saying that the state is not available. Why?Salt detects the Minion's operating system and assigns the correct package or service management module based on what is detected. However, for certain custom spins and OS derivatives this detection fails. In cases like this, an issue should be opened on our tracker, with the following information:
salt <minion_id> grains.items | grep os
Why aren't my custom modules/states/etc. available on my Minions?
Custom modules are synced to Minions when saltutil.sync_modules or saltutil.sync_all is run. Similarly, custom states are synced to Minions when saltutil.sync_states or saltutil.sync_all is run. They are both also synced when a highstate is triggered. As of the 2019.2.0 release, as well as 2017.7.7 and 2018.3.2 in their respective release cycles, the sync argument to state.apply/state.sls can be used to sync custom types when running individual SLS files. Other custom types (renderers, outputters, etc.) have similar behavior; see the documentation for the saltutil module for more information. This reactor example can be used to automatically sync custom types when the minion connects to the master, to help with this chicken-and-egg issue.

Module X isn't available, even though the shell command it uses is installed. Why?
This is most likely a PATH issue. Did you custom-compile the software which the module requires? RHEL/CentOS/etc. in particular override the root user's path in /etc/init.d/functions, setting it to /sbin:/usr/sbin:/bin:/usr/bin, making software installed into /usr/local/bin unavailable to Salt when the Minion is started using the initscript. In version 2014.1.0, Salt will have a better solution for these sorts of PATH-related issues, but recompiling the software to install it into a location within the PATH should resolve the issue in the meantime. Alternatively, you can create a symbolic link within the PATH using a file.symlink state:

/usr/bin/foo:

Can I run different versions of Salt on my Master and Minion?
This depends on the versions. In general, it is recommended that Master and Minion versions match. When upgrading Salt, the master(s) should always be upgraded first. Backwards compatibility for minions running newer versions of salt than their masters is not guaranteed. Whenever possible, backwards compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability. Recent examples of backwards compatibility breakage include the 0.17.1 release (where all backwards compatibility was broken due to a security fix), and the 2014.1.0 release (which retained compatibility between 2014.1.0 masters and 0.17 minions, but broke compatibility for 2014.1.0 minions and older masters).

Does Salt support backing up managed files?
Yes. Salt provides an easy-to-use addition to your file.managed states that allows you to back up files via backup_mode. backup_mode can be configured on a per-state basis, or in the minion config (note that if set in the minion config this would simply be the default method to use; you still need to specify that the file should be backed up!).

Is it possible to deploy a file to a specific minion, without other minions having access to it?
The Salt fileserver does not yet support access control, but it is still possible to do this. As of Salt 2015.5.0, the file_tree external pillar is available, and allows the contents of a file to be loaded as Pillar data. This external pillar is capable of assigning Pillar values both to individual minions and to nodegroups. See the documentation for details on how to set this up. Once the external pillar has been set up, the data can be pushed to a minion via a file.managed state, using the contents_pillar argument (a sketch follows after the warning below). In this example, the source file would be located in a directory called secret_files underneath the file_tree path for the minion.
The syntax for specifying the pillar variable is the same one used for pillar.get, with a colon representing a nested dictionary. WARNING: Deploying binary contents using the file.managed
state is only supported in Salt 2015.8.4 and newer.
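A minimal sketch of such a state, following the file_tree example above (the ownership and mode values are illustrative):

/etc/my_super_secret_file:
  file.managed:
    - user: secret
    - group: secret
    - mode: '0600'
    - contents_pillar: secret_files:my_super_secret_file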
What is the best way to restart a Salt Minion daemon using Salt after upgrade?
Updating the salt-minion package requires a restart of the salt-minion service. But restarting the service while in the middle of a state run interrupts the process of the Minion running states and sending results back to the Master. A common way to work around that is to schedule restarting the Minion service in the background by issuing a salt-call command calling the service.restart function. This prevents the Minion from being disconnected from the Master immediately. Otherwise you would get a Minion did not return. [Not connected] message as the result of a state run.

Upgrade without automatic restart
Doing the Minion upgrade seems to be the simplest state in your SLS file at first. But operating systems such as Debian GNU/Linux, Ubuntu, and their derivatives start the service after the package installation by default. To prevent this, we need to create a policy layer which will prevent the Minion service from restarting right after the upgrade:

{%- if grains['os_family'] == 'Debian' %}
Disable starting services:
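  # A hedged completion of this sketch: on Debian-family systems, a
  # policy-rc.d script exiting with code 101 tells maintainer scripts not to
  # start services after package installation. Path and mode are illustrative.
  file.managed:
    - name: /usr/sbin/policy-rc.d
    - user: root
    - group: root
    - mode: '0755'
    - replace: False
    - contents: |
        #!/bin/sh
        exit 101
{%- endif %}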
Restart using states
Now we can apply the workaround to restart the Minion in a reliable way. The following example works on UNIX-like operating systems:

{%- if grains['os'] != 'Windows' %}
Restart Salt Minion:
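  # A hedged completion of this sketch: run the restart through salt-call in
  # the background so the current state run can finish reporting to the Master.
  # The onchanges requisite and the package state ID are illustrative.
  cmd.run:
    - name: 'salt-call service.restart salt-minion'
    - bg: True
    - onchanges:
      - pkg: Upgrade Salt Minion
{%- endif %}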
Note that restarting the salt-minion service on Windows operating systems is not always necessary when performing an upgrade. The installer stops the salt-minion service, removes it, deletes the contents of the \salt\bin directory, installs the new code, re-creates the salt-minion service, and starts it (by default). The restart step would be necessary during the upgrade process, however, if the minion config was edited after the upgrade or installation. If a minion restart is necessary, the state above can be edited as follows:

Restart Salt Minion:

However, it requires more advanced tricks to upgrade from a legacy version of Salt (before 2016.3.0) on UNIX-like operating systems, where executing commands in the background is not supported. You also may need to schedule restarting the Minion service using masterless mode after all other states have been applied for Salt versions earlier than 2016.11.0. This allows the Minion to keep the connection to the Master alive for being able to report the final results back to the Master, while the service is restarting in the background. This state should run last or watch for the pkg state changes:

Restart Salt Minion:

Restart using remote executions
Restart the Minion from the command line:

salt -G kernel:Windows cmd.run_bg 'C:\salt\salt-call.bat service.restart salt-minion'
salt -C 'not G@kernel:Windows' cmd.run_bg 'salt-call service.restart salt-minion'

Waiting for minions to come back online
A common issue in performing automated restarts of a salt minion, for example during an orchestration run, is that it will break the orchestration, since the next statement is likely to be attempted before the minion is back online. This can be remedied by inserting a blocking waiting state that only returns when the selected minions are back up (note: this will only work in orchestration states, since manage.up needs to run on the master):

Wait for salt minion:

This will, after an initial delay of 3 seconds, execute the manage.up runner targeted specifically at my_minion. It will do this every period seconds until the expected data is returned. The default timeout is 60s, but it can be configured as well.

Salting the Salt Master
In order to configure a master server via states, the Salt master can also be "salted" in order to enforce state on the Salt master as well as the Salt minions. Salting the Salt master requires a Salt minion to be installed on the same machine as the Salt master. Once the Salt minion is installed, the minion configuration file must be pointed to the local Salt master:

master: 127.0.0.1

Once the Salt master has been "salted" with a Salt minion, it can be targeted just like any other minion. If the minion on the salted master is running, the minion can be targeted via any usual salt command. Additionally, the salt-call command can execute operations to enforce state on the salted master without requiring the minion to be running. More information about salting the Salt master can be found in the salt-formula for salt itself: https://github.com/saltstack-formulas/salt-formula

Restarting the salt-master service using an execution module or application of state can be done the same way as for the Salt minion described above.

Is Targeting using Grain Data Secure?
WARNING: Grains can be set by users that have access to the minion
configuration files on the local system, making them less secure than other
identifiers in Salt. Avoid storing sensitive data, such as passwords or keys,
on minions. Instead, make use of Storing Static Data in the Pillar
and/or Storing Data in Other Databases.
Because grains can be set by users that have access to the minion configuration files on the local system, grains are considered less secure than other identifiers in Salt. Use caution when targeting sensitive operations or setting pillar values based on grain data. The only grain which can be safely used is grains['id'] which contains the Minion ID. When possible, you should target sensitive operations and data using the Minion ID. If the Minion ID of a system changes, the Salt Minion's public key must be re-accepted by an administrator on the Salt Master, making it less vulnerable to impersonation attacks. Why Did the Value for a Grain Change on Its Own?This is usually the result of an upstream change in an OS distribution that replaces or removes something that Salt was using to detect the grain. Fortunately, when this occurs, you can use Salt to fix it with a command similar to the following: salt -G 'grain:ChangedValue' grains.setvals "{'grain': 'OldValue'}"
(Replacing grain, ChangedValue, and OldValue with the grain and values that you want to change / set.) You should also file an issue describing the change so it can be fixed in Salt. Salt Best PracticesSalt's extreme flexibility leads to many questions concerning the structure of configuration files. This document exists to clarify these points through examples and code. IMPORTANT: The guidance here should be taken in combination with
Hardening Salt.
General rules
Structuring States and Formulas
When structuring Salt States and Formulas it is important to begin with the directory structure. A proper directory structure clearly defines the functionality of each state to the user via visual inspection of the state's name. Reviewing the MySQL Salt Formula, it is easy to see the benefits to the end-user when reviewing a sample of the available states:

/usr/local/etc/salt/states/mysql/files/
/usr/local/etc/salt/states/mysql/client.sls
/usr/local/etc/salt/states/mysql/map.jinja
/usr/local/etc/salt/states/mysql/python.sls
/usr/local/etc/salt/states/mysql/server.sls

This directory structure would lead to these states being referenced in a top file in the following way:

base:

This clear definition ensures that the user is properly informed of what each state will do. Another example comes from the vim-formula:

/usr/local/etc/salt/states/vim/files/
/usr/local/etc/salt/states/vim/absent.sls
/usr/local/etc/salt/states/vim/init.sls
/usr/local/etc/salt/states/vim/map.jinja
/usr/local/etc/salt/states/vim/nerdtree.sls
/usr/local/etc/salt/states/vim/pyflakes.sls
/usr/local/etc/salt/states/vim/salt.sls

Once again viewing how this would look in a top file (a sketch follows after the note below):

/usr/local/etc/salt/states/top.sls:

base:

The usage of a clear top-level directory as well as properly named states reduces the overall complexity and leads a user to both understand what will be included at a glance and where it is located. In addition, Formulas should be used as often as possible.

NOTE: Formulas repositories on the saltstack-formulas GitHub
organization should not be pointed to directly from systems that automatically
fetch new updates such as GitFS or similar tooling. Instead formulas
repositories should be forked on GitHub or cloned locally, where unintended,
automatic changes will not take place.
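A minimal top file sketch for formula layouts like those above (the targets are hypothetical):

base:
  'db*':
    - mysql.server
    - mysql.client
  'web*':
    - vim
    - vim.nerdtree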
Structuring Pillar Files
Pillars are used to store secure and insecure data pertaining to minions. When designing the structure of the /usr/local/etc/salt/pillar directory, the pillars contained within should once again be focused on clear and concise data which users can easily review, modify, and understand. The /usr/local/etc/salt/pillar/ directory is primarily controlled by top.sls. It should be noted that the pillar top.sls is not used as a location to declare variables and their values. The top.sls is used as a way to include other pillar files and organize the way they are matched based on environments or grains. An example top.sls may be as simple as the following:

/usr/local/etc/salt/pillar/top.sls:

base:

Any number of matchers can be added to the base environment. For example, here is an expanded version of the Pillar top file stated above:

/usr/local/etc/salt/pillar/top.sls:

base:

Or an even more complicated example, using a variety of matchers in numerous environments:

/usr/local/etc/salt/pillar/top.sls:

base:

These examples show how the top file provides users with power, but when used incorrectly it can lead to confusing configurations. This is why it is important to understand that the top file for pillar is not used for variable definitions. Each SLS file within the /usr/local/etc/salt/pillar/ directory should correspond to the states which it matches. This would mean that the apache pillar file should contain data relevant to Apache. Structuring files in this way once again ensures modularity, and creates a consistent understanding throughout our Salt environment. Users can expect that pillar variables found in an Apache state will live inside of an Apache pillar:

/usr/local/etc/salt/pillar/apache.sls:

apache:

While this pillar file is simple, it shows how a pillar file explicitly relates to the state it is associated with.

Variable Flexibility
Salt allows users to define variables in SLS files. When creating a state, variables should provide users with as much flexibility as possible. This means that variables should be clearly defined and easy to manipulate, and that sane defaults should exist in the event a variable is not properly defined. Looking at several examples shows how these different items can lead to extensive flexibility. Although it is possible to set variables locally, this is generally not preferred:

/usr/local/etc/salt/states/apache/conf.sls:

{% set name = 'httpd' %}
{% set tmpl = 'salt://apache/files/httpd.conf' %}
include:
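  # A hedged completion of this sketch: include the main apache state and use
  # the locally-set variables; the state ID, paths, and requisites are illustrative.
  - apache

{{ name }}_conf:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: {{ tmpl }}
    - template: jinja
    - user: root
    - watch_in:
      - service: apache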
Once this information has been generated, it can easily be transitioned to the pillar, where data can be overwritten, modified, and applied to multiple states, or locations within a single state:

/usr/local/etc/salt/pillar/apache.sls:

apache:

/usr/local/etc/salt/states/apache/conf.sls:

{% from "apache/map.jinja" import apache with context %}
include:
This flexibility provides users with a centralized location to modify variables, which is extremely important as an environment grows.

Modularity Within States
Ensuring that states are modular is one of the key concepts to understand within Salt. When creating a state, a user must consider how many times the state could be re-used and what it relies on to operate. Below are several examples which will iteratively explain how a user can go from a state which is not very modular to one that is:

/usr/local/etc/salt/states/apache/init.sls:

httpd:

The example above is probably the worst-case scenario when writing a state. There is a clear lack of focus by naming both the pkg/service and managed file directly as the state ID. This would lead to changing multiple requires within this state, as well as others that may depend upon the state. Imagine if a require was used for the httpd package in another state, and then suddenly it's a custom package. Now changes need to be made in multiple locations, which increases the complexity and leads to a more error-prone configuration. There is also the issue of having the configuration file located in the init, as a user would be unable to simply install the service and use the default conf file. Our second revision begins to address the referencing by using - name, as opposed to direct ID references:

/usr/local/etc/salt/states/apache/init.sls:

apache:

The above init file is better than our original, yet it has several issues which lead to a lack of modularity. The first of these problems is the usage of static values for items such as the name of the service, the name of the managed file, and the source of the managed file. When these items are hard-coded they become difficult to modify and the opportunity to make mistakes arises. It also leads to multiple edits that need to occur when changing these items (imagine if there were dozens of these occurrences throughout the state!). There is also still the concern of the configuration file data living in the same state as the service and package. In the next example, steps will be taken to begin addressing these issues, starting with the addition of a map.jinja file (as noted in the Formula documentation, and sketched below) and modification of static values:

/usr/local/etc/salt/states/apache/map.jinja:

{% set apache = salt['grains.filter_by']({
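    'Debian': {
        'server': 'apache2',
        'service': 'apache2',
        'conf': '/etc/apache2/apache2.conf',
    },
    'RedHat': {
        'server': 'httpd',
        'service': 'httpd',
        'conf': '/etc/httpd/httpd.conf',
    },
}, merge=salt['pillar.get']('apache:lookup')) %}
{# A hedged completion of the map opened above: the per-OS-family package,
   service, and path values are illustrative, and the pillar merge key is an
   assumption following common formula conventions. #}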
/usr/local/etc/salt/pillar/apache.sls: apache: /usr/local/etc/salt/states/apache/init.sls: {% from "apache/map.jinja" import apache with context %}
apache:
The changes to this state now allow us to easily identify the location of the variables, as well as ensuring they are flexible and easy to modify. While this takes another step in the right direction, it is not yet complete. Suppose the user did not want to use the provided conf file, or even their own configuration file, but the default apache conf. With the current state setup this is not possible. To attain this level of modularity this state will need to be broken into two states. /usr/local/etc/salt/states/apache/map.jinja: {% set apache = salt['grains.filter_by']({
/usr/local/etc/salt/pillar/apache.sls: apache: /usr/local/etc/salt/states/apache/init.sls: {% from "apache/map.jinja" import apache with context %}
apache:
/usr/local/etc/salt/states/apache/conf.sls: {% from "apache/map.jinja" import apache with context %}
include:
This new structure now allows users to choose whether they only wish to install the default Apache, or if they wish, overwrite the default package, service, configuration file location, or the configuration file itself. In addition to this, the data has been broken between multiple files, allowing for users to identify where they need to change the associated data.

Storing Secure Data
Secure data refers to any information that you would not wish to share with anyone accessing a server. This could include data such as passwords, keys, or other information. As all data within a state is accessible by EVERY server that is connected, it is important to store secure data within pillar. This will ensure that only those servers which require this secure data have access to it. In this example, a user can go from an insecure configuration to one which is only accessible by the appropriate hosts:

/usr/local/etc/salt/states/mysql/testerdb.sls:

testdb:

/usr/local/etc/salt/states/mysql/user.sls:

include:

Many users would review this state and see that the password is there in plain text, which is quite problematic. It results in several issues which may not be immediately visible. The first of these issues is clear to most users -- the password being visible in this state. This means that any minion will have a copy of this, and therefore the password, which is a major security concern, as minions may not be locked down as tightly as the master server. The other issue that can be encountered is access by users on the master. If everyone has access to the states (or their repository), then they are able to review this password. Keeping your password data accessible by only a few users is critical for both security and peace of mind. There is also the issue of portability. When a state is configured this way it results in multiple changes needing to be made. This was discussed in the sections above, but it is a critical idea to drive home. If states are not portable it may result in more work later! Fixing this issue is relatively simple: the content just needs to be moved to the associated pillar:

/usr/local/etc/salt/pillar/mysql.sls:

mysql:

/usr/local/etc/salt/states/mysql/testerdb.sls:

testdb:

/usr/local/etc/salt/states/mysql/user.sls:

include:

Now that the database details have been moved to the associated pillar file, only machines which are targeted via pillar will have access to these details. Access to users who should not be able to review these details can also be prevented, while ensuring that they are still able to write states which take advantage of this information.

REMOTE EXECUTION
Running pre-defined or arbitrary commands on remote hosts, also known as remote execution, is the core function of Salt. The following links explore modules and returners, which are two key elements of remote execution.

Salt Execution Modules
Salt execution modules are called by the remote execution system to perform a wide variety of tasks. These modules provide functionality such as installing packages, restarting a service, running a remote command, transferring files, and so on.
Running Commands on Salt Minions
Salt can be controlled by a command line client run by the root user on the Salt master. The Salt command line client uses the Salt client API to communicate with the Salt master server. The Salt client is straightforward and simple to use. Using the Salt client, commands can be easily sent to the minions. Each of these commands accepts an explicit --config option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist, then Salt falls back to using the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.

SEE ALSO: Configuration
Using the Salt Command
The Salt command needs a few components to send information to the Salt minions. The target minions need to be defined, along with the function to call and any arguments the function requires.

Defining the Target Minions
The first argument passed to salt defines the target minions; the target minions are accessed via their hostname. The default target type is a bash glob:

salt '*foo.com' sys.doc

Salt can also define the target minions with regular expressions:

salt -E '.*' cmd.run 'ls -l | grep foo'

Or to explicitly list hosts, salt can take a list:

salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'

More Powerful Targets
See Targeting.

Calling the Function
The function to call on the specified target is placed after the target specification. New in version 0.9.8. Functions may also accept arguments, space-delimited:

salt '*' cmd.exec_code python 'import sys; print sys.version'

Optional keyword arguments are also supported:

salt '*' pip.install salt timeout=5 upgrade=True

They are always in the form of kwarg=argument. Arguments are formatted as YAML:

salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Note: dictionaries must have curly braces around them (like the env keyword argument above). This was changed in 0.15.1: in the above example, the first argument used to be parsed as the dictionary {'echo "Hello': '$FIRST_NAME"'}. This was generally not the expected behavior. If you want to test what parameters are actually passed to a module, use the test.arg_repr command: salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Finding available minion functions
The Salt functions are self-documenting; all of the function documentation can be retrieved from the minions via the sys.doc() function:

salt '*' sys.doc

Compound Command Execution
If a series of commands needs to be sent to a single target specification then the commands can be sent in a single publish. This can make gathering groups of information faster, and lowers the stress on the network for repeated commands. Compound command execution works by sending a list of functions and arguments instead of sending a single function and argument. The functions are executed on the minion in the order they are defined on the command line, and then the data from all of the commands are returned in a dictionary. This means that the set of commands are called in a predictable way, and the returned data can be easily interpreted. Executing compound commands is done by passing a comma-delimited list of functions, followed by a comma-delimited list of arguments:

salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo

The trick to look out for here is that if a function is being passed no arguments, then there needs to be a placeholder for the absent arguments. This is why in the above example, there are two commas right next to each other. test.ping takes no arguments, so we need to add another comma, otherwise Salt would attempt to pass "foo" to test.ping. If you need to pass arguments that include commas, then make sure you add spaces around the commas that separate arguments. For example:

salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo

You may change the arguments separator using the --args-separator option:

salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo

CLI Completion
Shell completion scripts for the Salt CLI are available in the pkg Salt source directory.

Writing Execution Modules
Salt execution modules are the functions called by the salt command.

Modules Are Easy to Write!
Writing Salt execution modules is straightforward. A Salt execution module is a Python or Cython module placed in a directory called _modules/ at the root of the Salt fileserver. When using the default fileserver backend (i.e. roots), unless environments are otherwise defined in the file_roots config option, the _modules/ directory would be located in /usr/local/etc/salt/states/_modules on most systems. Modules placed in _modules/ will be synced to the minions when any of the following Salt functions are called:
Modules placed in _modules/ will be synced to masters when any of the following Salt runners are called:
Note that a module's default name is its filename (i.e. foo.py becomes module foo), but its name can be overridden by using a __virtual__ function. If a Salt module has errors and cannot be imported, the Salt minion will continue to load without issue and the module with errors will simply be omitted. If adding a Cython module the file must be named <modulename>.pyx so that the loader knows that the module needs to be imported as a Cython module. The compilation of the Cython module is automatic and happens when the minion starts, so only the *.pyx file is required.

Zip Archives as Modules
Python 2.3 and higher allows developers to directly import zip archives containing Python code. By setting enable_zip_modules to True in the minion config, the Salt loader will be able to import .zip files in this fashion. This allows Salt module developers to package dependencies with their modules for ease of deployment, isolation, etc. For a user, Zip Archive modules behave just like other modules. When executing a function from a module provided as the file my_module.zip, a user would call a function within that module as my_module.<function>.

Creating a Zip Archive Module
A Zip Archive module is structured similarly to a simple Python package. The .zip file contains a single directory with the same name as the module. The module code traditionally in <module_name>.py goes in <module_name>/__init__.py. The dependency packages are subdirectories of <module_name>/. Here is an example directory structure for the lumberjack module, which has two library dependencies (sleep and work) to be included.

modules $ ls -R lumberjack
__init__.py sleep work
lumberjack/sleep:
__init__.py
lumberjack/work:
__init__.py

The contents of lumberjack/__init__.py show how to import and use these included libraries.

# Libraries included in lumberjack.zip
from lumberjack import sleep, work

def is_ok(person):

Then, create the zip:

modules $ zip -r lumberjack lumberjack

Once placed in file_roots, Salt users can distribute and use lumberjack.zip like any other module.

$ sudo salt minion1 saltutil.sync_modules
minion1:

Cross Calling Execution Modules
All of the Salt execution modules are available to each other, and modules can call functions available in other execution modules. The variable __salt__ is packed into the modules after they are loaded into the Salt minion. The __salt__ variable is a Python dictionary containing all of the Salt functions. Dictionary keys are strings representing the names of the modules and the values are the functions themselves. Salt modules can be cross-called by accessing the value in the __salt__ dict:

def foo(bar):

This code will call the run function in the cmd module and pass the argument bar to it.

Calling Execution Modules on the Salt Master
New in version 2016.11.0. Execution modules can now also be called via the salt-run command using the salt runner.

Preloaded Execution Module Data
When interacting with execution modules often it is nice to be able to read information dynamically about the minion or to load in configuration parameters for a module. Salt allows for different types of data to be loaded into the modules by the minion.

Grains Data
The values detected by the Salt Grains on the minion are available in a Python dictionary named __grains__ and can be accessed from within callable objects in the Python modules.
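A short sketch of how the cross-calling and grains access described above might look inside a custom execution module (the function names are hypothetical):

def foo(bar):
    # cross-call cmd.run via the __salt__ dictionary packed into the module
    return __salt__["cmd.run"](bar)

def server_kernel():
    # read a detected grain value from the __grains__ dictionary
    return __grains__["kernel"]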
To see the contents of the grains dictionary for a given system in your deployment, run the grains.items() function:

salt 'hostname' grains.items --output=pprint

Any value in a grains dictionary can be accessed as any other Python dictionary. For example, the grain representing the minion ID is stored in the id key, and from an execution module the value would be stored in __grains__['id'].

Module Configuration
Since parameters for configuring a module may be desired, Salt allows for configuration information from the minion configuration file to be passed to execution modules. Since the minion configuration file is a YAML document, arbitrary configuration data can be passed in the minion config that is read by the modules. It is therefore strongly recommended that the values passed in the configuration file match the module name. A value intended for the test execution module should be named test.<value>. The test execution module contains usage of the module configuration, and the default configuration file for the minion contains the information and format used to pass data to the modules. salt.modules.test, conf/minion.

__init__ Function
If you want your module to have different execution modes based on minion configuration, you can use the __init__(opts) function to perform initial module setup. The parameter opts is the complete minion configuration, as also available in the __opts__ dict.

"""
Cheese module initialization example
"""

def __init__(opts):

Strings and Unicode
An execution module author should always assume that strings fed to the module have already been decoded into Unicode. In Python 2, these will be of type 'Unicode' and in Python 3 they will be of type str. Calling from a state to other Salt sub-systems should pass Unicode (or bytes if passing binary data). In the rare event that a state needs to write directly to disk, Unicode should be encoded to a string immediately before writing to disk. An author may use __salt_system_encoding__ to learn what the encoding type of the system is. For example, 'my_string'.encode(__salt_system_encoding__).

Outputter Configuration
Since execution module functions can return different data, and the way the data is printed can greatly change the presentation, Salt allows for a specific outputter to be set on a function-by-function basis. This is done by declaring an __outputter__ dictionary in the global scope of the module. The __outputter__ dictionary contains a mapping of function names to Salt outputters.

__outputter__ = {"run": "txt"}
This will ensure that the txt outputter is used to display output from the run function. Virtual ModulesVirtual modules let you override the name of a module in order to use the same name to refer to one of several similar modules. The specific module that is loaded for a virtual name is selected based on the current platform or environment. For example, packages are managed across platforms using the pkg module. pkg is a virtual module name that is an alias for the specific package manager module that is loaded on a specific system (for example, yumpkg on RHEL/CentOS systems , and aptpkg on Ubuntu). Virtual module names are set using the __virtual__ function and the virtual name. __virtual__ FunctionThe __virtual__ function returns either a string, True, False, or False with an error string. If a string is returned then the module is loaded using the name of the string as the virtual name. If True is returned the module is loaded using the current module name. If False is returned the module is not loaded. False lets the module perform system checks and prevent loading if dependencies are not met. Since __virtual__ is called before the module is loaded, __salt__ will be unreliable as not all modules will be available at this point in time. The __pillar__ and __grains__ "dunder" dictionaries are available however. NOTE: Modules which return a string from __virtual__
that is already used by a module that ships with Salt will _override_ the
stock module.
Returning Error Information from __virtual__
Optionally, Salt plugin modules, such as execution, state, returner, and beacon modules, may additionally return a string containing the reason that a module could not be loaded. For example, an execution module called cheese and a corresponding state module also called cheese, both depending on a utility called enzymes, should have __virtual__ functions that handle the case when the dependency is unavailable.

"""
Cheese execution (or returner/beacon/etc.) module
"""

try:

"""
Cheese state module. Note that this works in state modules because it is guaranteed that execution modules are loaded first
"""

def __virtual__():

Examples
The package manager modules are among the best examples of using the __virtual__ function. A table of all the virtual pkg modules can be found here.

Overriding Virtual Module Providers
Salt often uses OS grains (os, osrelease, os_family, etc.) to determine which module should be loaded as the virtual module for pkg, service, etc. Sometimes this OS detection is incomplete, with new distros popping up, existing distros changing init systems, etc. The virtual modules likely to be affected by this are in the list below (click each item for more information):
If Salt is using the wrong module for one of these, first of all, please report it on the issue tracker, so that this issue can be resolved for a future release. To make it easier to troubleshoot, please also provide the grains.items output, taking care to redact any sensitive information. Then, while waiting for the SaltStack development team to fix the issue, Salt can be made to use the correct module using the providers option in the minion config file:

providers:

The above example will force the minion to use the systemd module to provide service management, and the aptpkg module to provide package management. For per-state provider overrides, see documentation on state providers.

Logging Restrictions
As a rule, logging should not be done anywhere in a Salt module before it is loaded. This rule applies to all code that would run before the __virtual__() function, as well as the code within the __virtual__() function itself. If logging statements are made before the virtual function determines if the module should be loaded, then those logging statements will be called repeatedly. This clutters up log files unnecessarily. Exceptions may be considered for logging statements made at the trace level. However, it is better to provide the necessary information by another means. One method is to return error information in the __virtual__() function.

__virtualname__
__virtualname__ is a variable that is used by the documentation build system to know the virtual name of a module without calling the __virtual__ function. Modules that return a string from the __virtual__ function must also set the __virtualname__ variable. To avoid setting the virtual name string twice, you can implement __virtual__ to return the value set for __virtualname__ using a pattern similar to the following:

# Define the module's virtual name
__virtualname__ = "pkg"

def __virtual__():

The __virtual__() function can return a True or False boolean, a tuple, or a string. If it returns a True value, this __virtualname__ module-level attribute can be set as seen in the above example. This is the string that the module should be referred to as. When __virtual__() returns a tuple, the first item should be a boolean and the second should be a string. This is typically done when the module should not load. The first value of the tuple is False and the second is the error message to display for why the module did not load. For example:

def __virtual__():

Documentation
Salt execution modules are documented. The sys.doc() function will return the documentation for all available modules:

salt '*' sys.doc

The sys.doc function simply prints out the docstrings found in the modules; when writing Salt execution modules, please follow the formatting conventions for docstrings as they appear in the other modules.

Adding Documentation to Salt Modules
It is strongly suggested that all Salt modules have documentation added. To add documentation add a Python docstring to the function.

def spam(eggs):

Now when the sys.doc call is executed the docstring will be cleanly returned to the calling terminal. Documentation added to execution modules in docstrings will automatically be added to the online web-based documentation.
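A hedged sketch pulling together the __virtualname__ and __virtual__ patterns described above (the grain check and messages are illustrative):

# Define the module's virtual name
__virtualname__ = "pkg"

def __virtual__():
    """
    Confine this module to Debian-family systems.
    """
    if __grains__.get("os_family") == "Debian":
        return __virtualname__
    # tuple return: (False, reason the module did not load)
    return (False, "The pkg module could not be loaded: not a Debian-family system.")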
Add Execution Module Metadata
When writing a Python docstring for an execution module, add information about the module using the following field lists:

:maintainer: Thomas Hatch <thatch@saltstack.com>, Seth House <shouse@saltstack.com>
:maturity: new
:depends: python-mysqldb
:platform: all

The maintainer field is a comma-delimited list of developers who help maintain this module. The maturity field indicates the level of quality and testing for this module. Standard labels will be determined. The depends field is a comma-delimited list of modules that this module depends on. The platform field is a comma-delimited list of platforms that this module is known to run on.

Log Output
You can call the logger from custom modules to write messages to the minion logs. The following code snippet demonstrates writing log messages:

import logging
log = logging.getLogger(__name__)
log.info("Here is Some Information")
log.warning("You Should Not Do That")
log.error("It Is Busted")
Aliasing FunctionsSometimes one wishes to use a function name that would shadow a python built-in. A common example would be set(). To support this, append an underscore to the function definition, def set_():, and use the __func_alias__ feature to provide an alias to the function. __func_alias__ is a dictionary where each key is the name of a function in the module, and each value is a string representing the alias for that function. When calling an aliased function from a different execution module, state module, or from the cli, the alias name should be used. __func_alias__ = {
Private Functions
In Salt, Python callable objects contained within an execution module are made available to the Salt minion for use. The only exception to this rule is a callable object with a name starting with an underscore _.

Objects Loaded Into the Salt Minion

def foo(bar):

Objects NOT Loaded into the Salt Minion

def _foobar(baz): # Preceded with an _

Useful Decorators for Modules

Depends Decorator
When writing execution modules there are many times where some of the module will work on all hosts, but some functions have an external dependency, such as a service that needs to be installed or a binary that needs to be present on the system. Instead of trying to wrap much of the code in large try/except blocks, a decorator can be used. If the dependencies passed to the decorator don't exist, then the salt minion will remove those functions from the module on that host. If a fallback_function is defined, it will replace the function instead of removing it.

import logging
from salt.utils.decorators import depends
log = logging.getLogger(__name__)
try:

In addition to global dependencies, the depends decorator also supports raw booleans.

from salt.utils.decorators import depends
HAS_DEP = False
try:

Executors
Executors are used by the minion to execute module functions. Executors can be used to modify the function's behavior, perform pre-execution steps, or execute in a specific way, like the sudo executor. Executors can be passed as a list, and they will be used one by one, in order. If an executor returns None, the next one will be called. If an executor returns non-None, the execution sequence is terminated and the returned value is used as the result. This is how an executor can control module execution, working as a filter. Note that an executor may not actually execute the function, but instead do something else and return None, as the splay executor does. In that case some other executor has to be used as a final executor that will actually execute the function. See examples below. The executors list can be passed in the minion config file in the following way:

module_executors:

The same can be done on the command line:
And the same command called via netapi will look like this: curl -sSk https://localhost:8000 \ SEE ALSO: The full list of executors
Writing Salt ExecutorsA Salt executor is written in a similar manner to a Salt execution module. Executor is a python module placed into the executors folder and containing the execute function with the following signature: def execute(opts, data, func, args, kwargs): ... Where the args are:
Specific options could be passed to the executor via minion config or via executor_opts argument. For instance to access splaytime option set by minion config executor should access opts.get('splaytime'). To access the option set by commandline or API data.get('executor_opts', {}).get('splaytime') should be used. So if an option is safe and must be accessible by user executor should check it in both places, but if an option is unsafe it should be read from the only config ignoring the passed request data. There is also a function named all_missing_func which the name of the func is passed, which can be used to verify if the command should still be run, even if it is not loaded in minion_mods. CONFIGURATION MANAGEMENTSalt contains a robust and flexible configuration management framework, which is built on the remote execution core. This framework executes on the minions, allowing effortless, simultaneous configuration of tens of thousands of hosts, by rendering language specific state files. The following links provide resources to learn more about state and renderers.
NOTE: Salt execution modules are different from state modules
and cannot be called as a state in an SLS file. In other words, this will not
work:
moe: You must use the module states to call execution modules directly. Here's an example: rename_moe:
State System ReferenceSalt offers an interface to manage the configuration or "state" of the Salt minions. This interface is a fully capable mechanism used to enforce the state of systems from a central manager. Mod Aggregate State Runtime ModificationsNew in version 2014.7.0. The mod_aggregate system was added in the 2014.7.0 release of Salt and allows for runtime modification of the executing state data. Simply put, it allows for the data used by Salt's state system to be changed on the fly at runtime, kind of like a configuration management JIT compiler or a runtime import system. All in all, it makes Salt much more dynamic. How it WorksThe best example is the pkg state. One of the major requests in Salt has long been adding the ability to install all packages defined at the same time. The mod_aggregate system makes this a reality. While executing Salt's state system, when a pkg state is reached the mod_aggregate function in the state module is called. For pkg this function scans all of the other states that are slated to run, and picks up the references to name and pkgs, then adds them to pkgs in the first state. The result is a single call to yum, apt-get, pacman, etc as part of the first package install. How to Use itNOTE: Since this option changes the basic behavior of the state
runtime, after it is enabled states should be executed using test=True
to ensure that the desired behavior is preserved.
In config filesThe first way to enable aggregation is with a configuration option in either the master or minion configuration files. Salt will invoke mod_aggregate the first time it encounters a state module that has aggregate support. If this option is set in the master config it will apply to all state runs on all minions, if set in the minion config it will only apply to said minion. Enable for all states: state_aggregate: True Enable for only specific state modules: state_aggregate: In statesThe second way to enable aggregation is with the state-level aggregate keyword. In this configuration, Salt will invoke the mod_aggregate function the first time it encounters this keyword. Any additional occurrences of the keyword will be ignored as the aggregation has already taken place. The following example will trigger mod_aggregate when the lamp_stack state is processed resulting in a single call to the underlying package manager. lamp_stack: Adding mod_aggregate to a State ModuleAdding a mod_aggregate routine to an existing state module only requires adding an additional function to the state module called mod_aggregate. The mod_aggregate function just needs to accept three parameters and return the low data to use. Since mod_aggregate is working on the state runtime level it does need to manipulate low data. The three parameters are low, chunks, and running. The low option is the low data for the state execution which is about to be called. The chunks is the list of all of the low data dictionaries which are being executed by the runtime and the running dictionary is the return data from all of the state executions which have already be executed. This example, simplified from the pkg state, shows how to create mod_aggregate functions: def mod_aggregate(low, chunks, running): Altering StatesNOTE: This documentation has been moved here.
File State Backups

In 0.10.2 a new feature was added for backing up files that are replaced by the file.managed and file.recurse states. The new feature is called the backup mode. Setting the backup mode is straightforward; it can be set in a number of places. The backup_mode can be set in the minion config file: backup_mode: minion Or it can be set for each file: /etc/ssh/sshd_config: The backup_mode can be set to any of the following options:
Backed-up Files

The files will be saved in the minion cachedir under the directory named file_backup. The files will be in a location relative to where they were under the root filesystem, and each will have a timestamp appended to its name. This should make them easy to browse.

Interacting with Backups

Starting with version 0.17.0, it is possible to list, restore, and delete previously-created backups.

Listing

The backups for a given file can be listed using file.list_backups: # salt foo.bar.com file.list_backups /tmp/foo.txt foo.bar.com:

Restoring

Restoring is easy using file.restore_backup; just pass the path and the numeric id found with file.list_backups: # salt foo.bar.com file.restore_backup /tmp/foo.txt 1 foo.bar.com: The existing file will be backed up, just in case, as can be seen if file.list_backups is run again: # salt foo.bar.com file.list_backups /tmp/foo.txt foo.bar.com: NOTE: Since no state is being run, restoring a file will not
trigger any watches for the file. So, if you are restoring a config file for a
service, it will likely still be necessary to run a
service.restart.
Deleting

Deleting backups can be done using file.delete_backup: # salt foo.bar.com file.delete_backup /tmp/foo.txt 0 foo.bar.com:

Understanding State Compiler Ordering

NOTE: This tutorial is an intermediate-level tutorial. Some
basic understanding of the state system and writing Salt Formulas is
assumed.
Salt's state system is built to deliver all of the power of configuration management systems without sacrificing simplicity. This tutorial is made to help users understand in detail just how the order is defined for state executions in Salt. This tutorial is written to represent the behavior of Salt as of version 0.17.0.

Compiler Basics

To understand ordering in depth, some very basic knowledge about the state compiler is very helpful. No need to worry though, this is very high level!

High Data and Low Data

When defining Salt Formulas in YAML, the data that is being represented is referred to by the compiler as High Data. When the data is initially loaded into the compiler it is a single large Python dictionary. This dictionary can be viewed raw by running: salt '*' state.show_highstate This "High Data" structure is then compiled down to "Low Data". The Low Data is what is matched up to create individual executions in Salt's configuration management system; it is an ordered list of single state calls to execute. Once the low data is compiled, the evaluation order can be seen. The low data can be viewed by running: salt '*' state.show_lowstate NOTE: The state execution module contains MANY functions for
evaluating the state system and is well worth a read! These routines can be
very useful when debugging states or to help deepen one's understanding of
Salt's state system.
As an example, a state written as follows: apache: will have High Data which looks like this, represented in json: {
The subsequent Low Data will look like this: [ This tutorial discusses the Low Data evaluation and the state runtime.

Ordering Layers

Salt defines two ordering interfaces which are evaluated in the state runtime, and defines these orders in a number of passes.

Definition Order

NOTE: The Definition Order system can be disabled by turning
the option state_auto_order to False in the master configuration
file.
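Referring back to the apache example above, the High Data and Low Data take roughly this shape. This is a sketch; values such as __sls__, __env__, and order are illustrative:

    # High Data (as json)
    {
        "apache": {
            "pkg": [
                {"name": "httpd"},
                "installed",
                {"order": 10000}
            ],
            "__env__": "base",
            "__sls__": "example"
        }
    }

    # Low Data (as json)
    [
        {
            "__env__": "base",
            "__id__": "apache",
            "__sls__": "example",
            "fun": "installed",
            "name": "httpd",
            "order": 10000,
            "state": "pkg"
        }
    ]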
The top level of ordering is the Definition Order. The Definition Order is the order in which states are defined in salt formulas. This is very straightforward on basic states which do not contain include statements or a top file, as the states are just ordered from the top of the file; but the include system brings in some simple rules for how the Definition Order is defined. Looking back at the "Low Data" and "High Data" shown above, the order key has been transparently added to the data to enable the Definition Order.

The Include Statement

Basically, if there is an include statement in a formula, then the formulas which are included will be run BEFORE the contents of the formula which is including them. Also, the include statement is a list, so they will be loaded in the order in which they are included. In the following case: foo.sls include: bar.sls include: baz.sls include: In the above case, if state.apply foo were called, then the formulas will be loaded in the following order, as illustrated by the hypothetical sketch below:
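Assuming a layout where foo.sls includes bar, bar.sls includes baz, and baz.sls includes quo (file contents hypothetical):

    # foo.sls
    include:
      - bar

    # bar.sls
    include:
      - baz

    # baz.sls
    include:
      - quo

With that layout, state.apply foo loads quo first, then baz, then bar, and finally the contents of foo itself, since included formulas always run before the formula that includes them.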
The order Flag

The Definition Order happens transparently in the background, but the ordering can be explicitly overridden using the order flag in states: apache: This order flag will override the Definition Order. This makes it very simple to create states that are always executed first, last, or in specific stages; a great example is defining a number of package repositories that need to be set up before anything else, or final checks that need to be run at the end of a state run, by using order: last or order: -1. When the order flag is explicitly set, the Definition Order system will omit setting an order for that state and will directly use the order flag defined.

Lexicographical Fall-back

Salt states were written to ALWAYS execute in the same order. Before the introduction of Definition Order in version 0.17.0, everything was ordered lexicographically according to the name of the state, then the function, then the id. This is how Salt has always ensured that states run in the same order regardless of where they are deployed; the addition of the Definition Order method merely makes this finite ordering easier to follow. The lexicographical ordering is still applied, but it only has an effect when two order statements collide. This means that if multiple states are assigned the same order number, they will fall back to lexicographical ordering to ensure that every execution still happens in a finite order. NOTE: If running with state_auto_order: False the
order key is not set automatically, since the Lexicographical order can
be derived from other keys.
Requisite Ordering

Salt states are fully declarative, in that they are written to declare the state in which a system should be. This means that components can require that other components have been set up successfully. Unlike the other ordering systems, the Requisite system in Salt is evaluated at runtime. The requisite system is also built to ensure that the ordering of execution never changes, but is always the same for a given set of states. This is accomplished by using a runtime that processes states in a completely predictable order, instead of using an event-loop-based system like other declarative configuration management systems.

Runtime Requisite Evaluation

The requisite system is evaluated as the components are found, and the requisites are always evaluated in the same order. An example follows this explanation, as the raw description of the linear dependency evaluation sequence may be a little dizzying at first. The "Low Data" is an ordered list of dictionaries; the state runtime evaluates each dictionary in the order in which they are arranged in the list. When evaluating a single dictionary it is checked for requisites, and requisites are evaluated in order: require, then watch, then prereq. NOTE: If using requisite in statements like require_in and
watch_in these will be compiled down to require and watch statements before
runtime evaluation.
Each requisite contains an ordered list of requisites; these requisites are looked up in the list of dictionaries and then executed. Once all requisites have been evaluated and executed, the requiring state can safely be run (or not run, if requisites have not been met). This means that the requisites are always evaluated in the same order, again ensuring that a core design principle of Salt's state system, that execution is always finite, remains intact.

Simple Runtime Evaluation Example

Given the above "Low Data", the states will be evaluated in the following order:
Best Practice

The best practice in Salt is to choose a method and stick with it. Official states are written using requisites for all associations, since requisites create clean, traceable dependency trails and make for the most portable formulas. To accomplish something similar to how classical imperative systems function, all requisites can be omitted and the failhard option set to True in the master configuration; this will stop all state runs at the first instance of a failure. In the end, using requisites creates very tight and fine-grained states; not using requisites produces full sequence runs that, while slightly easier to write, give much less control over the executions.

Extending External SLS Data

Sometimes a state defined in one SLS file will need to be modified from a separate SLS file. A good example of this is when an argument needs to be overwritten or when a service needs to watch an additional state.

The Extend Declaration

The standard way to extend is via the extend declaration. The extend declaration is a top level declaration like include and encapsulates ID declaration data included from other SLS files. A standard extend looks like this: include: A few critical things happen here: first, the SLS files that are going to be extended are included; then the extend declaration is defined. Under the extend declaration two IDs are extended; the apache ID's file state is overwritten with a new name and source, and the ssh server is extended to watch the banner file in addition to anything it is already watching.

Extend is a Top Level Declaration

This means that extend can only be called once in an sls; if it is used twice, only one of the extend blocks will be read. So this is WRONG: include:

The Requisite "in" Statement

Since one of the most common things to do when extending another SLS is to add states for a service to watch, or anything for a watcher to watch, the requisite in statement was added in 0.9.8 to make extending the watch and require lists easier. The ssh-server extend statement above could be more cleanly defined like so: include:

Rules to Extend By

There are a few rules to remember when extending states:
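Returning to the extend declaration above, a sketch of its standard usage (file names, sources, and IDs are illustrative):

    include:
      - http
      - ssh

    extend:
      apache:
        file:
          - name: /etc/httpd/conf/httpd.conf
          - source: salt://http/httpd2.conf
      ssh-server:
        service:
          - watch:
            - file: /etc/ssh/banner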
Failhard Global Option

Normally, when a state fails, Salt continues to execute the remainder of the defined states and will only refuse to execute states that require the failed state. But a situation may exist where you would want all state execution to stop if a single state execution fails. The capability to do this is called failing hard.

State Level Failhard

A single state can have a failhard set; this means that if this individual state fails, all state execution will immediately stop. This is a great thing to do if there is a state that sets up a critical config file, and setting a require for each state that reads the config would be cumbersome. A good example of this would be setting up a package manager early on: /etc/yum.repos.d/company.repo: In this situation, the yum repo is going to be configured before other states, and if it fails to lay down the config file, then no other states will be executed. It is possible to override a Global Failhard (see below) by explicitly setting it to False in the state.

Global Failhard

It may be desired to have failhard applied to every state that is executed. If this is the case, then failhard can be set in the master configuration file. Setting failhard in the master configuration file will result in failing hard when any minion gathering states from the master has a state fail. This is NOT the default behavior; normally Salt will only fail states that require a failed state. Using the global failhard is generally not recommended, since it can result in states not being executed or even checked. It can also be confusing to see states failhard if an admin is not actively aware that the failhard has been set. To use the global failhard, set failhard to True in the master configuration file.

Global State Arguments

NOTE: This documentation has been moved here.
Highstate data structure definitions

The Salt State Tree

A state tree is a collection of SLS files and directories that live under the directory specified in file_roots. NOTE: Directory names or filenames in the state tree cannot
contain a period, with the exception of the period in the .sls file
suffix.
Top file

The main state file that instructs minions what environment and modules to use during state execution. Configurable via state_top. SEE ALSO: A detailed description of the top file
Include declaration

Defines a list of Module reference strings to include in this SLS. Occurs only in the top level of the SLS data structure. Example: include:

Module reference

The name of an SLS module defined by a separate SLS file and residing on the Salt Master. A module named edit.vim is a reference to the SLS file salt://edit/vim.sls.

ID declaration

Defines an individual highstate component. Always references a value of a dictionary containing keys referencing State declaration and Requisite declaration. Can be overridden by a Name declaration or a Names declaration. Occurs on the top level or under the Extend declaration. Must be unique across the entire state tree. If the same ID declaration is used twice, only the first one matched will be used. All subsequent ID declarations with the same name will be ignored. NOTE: Naming gotchas
In Salt versions earlier than 0.9.7, ID declarations containing dots would result in unpredictable output.

Extend declaration

Extends a Name declaration from an included SLS module. The keys of the extend declaration always refer to existing ID declarations which have been defined in included SLS modules. Occurs only in the top level and defines a dictionary. States cannot be extended more than once in a single state run. Extend declarations are useful for adding to or overriding parts of a State declaration that is defined in another SLS file. In the following contrived example, the shown mywebsite.sls file is including and extending the apache.sls module in order to add a watch declaration that will restart Apache whenever the Apache configuration file, mywebsite, changes. include: SEE ALSO: watch_in and require_in
Sometimes it is more convenient to use the watch_in or require_in syntax instead of extending another SLS file. State Requisites

State declaration

A list which contains one string defining the Function declaration and any number of Function arg declaration dictionaries. Can, optionally, contain a number of additional components like the name override components (name and names). Can also contain requisite declarations. Occurs under an ID declaration.

Requisite declaration

A list containing requisite references. Used to build the action dependency tree. While Salt states are made to execute in a deterministic order, this order is managed by requiring and watching other Salt states. Occurs as a list component under a State declaration or as a key under an ID declaration.

Requisite reference

A single key dictionary. The key is the name of the referenced State declaration and the value is the ID of the referenced ID declaration. Occurs as a single index in a Requisite declaration list.

Function declaration

The name of the function to call within the state. A state declaration can contain only a single function declaration. For example, the following state declaration calls the installed function in the pkg state module: httpd: The function can be declared inline with the state as a shortcut. The actual data structure is compiled to this form: httpd: Where the function is a string in the body of the state declaration. Technically, when the function is declared in dot notation, the compiler converts it to be a string in the state declaration list. Note that using the first example more than once in an ID declaration is invalid YAML. INVALID: httpd: When passing a function without arguments and another state declaration within a single ID declaration, the long or "standard" format needs to be used, since otherwise it does not represent a valid data structure. VALID: httpd: Occurs as the only index in the State declaration list.

Function arg declaration

A single key dictionary referencing a Python type which is to be passed to the named Function declaration as a parameter. The type must be the data type expected by the function. Occurs under a Function declaration. For example, in the following state declaration user, group, and mode are passed as arguments to the managed function in the file state module: /etc/http/conf/http.conf:

Name declaration

Overrides the name argument of a State declaration. If name is not specified, the ID declaration satisfies the name argument. The name is always a single key dictionary referencing a string. Overriding name is useful for a variety of scenarios. For example, avoiding clashing ID declarations. The following two state declarations cannot both have /etc/motd as the ID declaration: motd_perms: Another common reason to override name is if the ID declaration is long and needs to be referenced in multiple places. In the example below it is much easier to specify mywebsite than to specify /etc/apache2/sites-available/mywebsite.com multiple times: mywebsite:

Names declaration

Expands the contents of the containing State declaration into multiple state declarations, each with its own name. For example, given the following state declaration: python-pkgs: Once converted into the lowstate data structure, the above state declaration will be expanded into the following three state declarations: python-django: Other values can be overridden during the expansion by providing an additional dictionary level. New in version 2014.7.0.
ius:

Large example

Here is the layout in YAML using the names of the highdata structure components. <Include Declaration>:

Include and Exclude

Salt SLS files can include other SLS files and exclude SLS files that have been otherwise included. This allows for an SLS file to easily extend or manipulate other SLS files.

Include

When other SLS files are included, everything defined in the included SLS file will be added to the state run. When including, define a list of SLS formulas to include: include: The include statement will include SLS formulas from the same environment that the including SLS formula is in. But the environment can be explicitly defined in the configuration to override the running environment; therefore, if an SLS formula needs to be included from an external environment named "dev", the following syntax is used: include: NOTE: include does not simply inject the states where you place it in the SLS file. If you need to guarantee order of execution, consider using requisites.
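A sketch of both include forms (formula names illustrative):

    include:
      - http            # same environment as the including formula
      - dev: http       # pull the http formula from the "dev" environment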
Relative Include

In Salt 0.16.0, the capability to include SLS formulas which are relative to the running SLS formula was added. Simply precede the formula name with a .: include: In Salt 2015.8, the ability to include SLS formulas which are relative to the parents of the running SLS formula was added. In order to achieve this, precede the formula name with more than one . (dot). Much like Python's relative import abilities, two or more leading dots represent a relative include of the parent or parents of the current package, with each . representing one level after the first. The following SLS configuration, if placed within example.dev.virtual, would result in example.http and base being included respectively: include:

Exclude

The exclude statement, added in Salt 0.10.3, allows an SLS to hard exclude another SLS file or a specific id. The component is excluded after the high data has been compiled, so nothing should be able to override an exclude. Since the exclude can remove an id or an sls, the type of component to exclude needs to be defined. An exclude statement that verifies that the running highstate does not contain the http sls and the /etc/vimrc id would look like this: exclude: NOTE: The current state processing flow checks for duplicate
IDs before processing excludes. An error occurs if duplicate IDs are present
even if one of the IDs is targeted by an exclude.
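Sketches of the relative include and the exclude statement described above (formula names illustrative):

    # Inside example/dev/virtual.sls: relative includes
    include:
      - ..http     # resolves to example.http
      - ...base    # resolves to base

    # Hard-excluding an sls and an id
    exclude:
      - sls: http
      - id: /etc/vimrc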
State System Layers

The Salt state system is composed of multiple layers. While using Salt does not require an understanding of the state layers, a deeper understanding of how Salt compiles and manages states can be very beneficial.

Function Call

The lowest layer of functionality in the state system is the direct state function call. State executions are, at the core, executions of single state functions. These individual functions are defined in state modules and can be called directly via the state.single command. salt '*' state.single pkg.installed name='vim'

Low Chunk

The low chunk is the bottom of the Salt state compiler. This is a data representation of a single function call. The low chunk is sent to the state caller and used to execute a single state function. A single low chunk can be executed manually via the state.low command. salt '*' state.low '{name: vim, state: pkg, fun: installed}'
The passed data reflects what the state execution system gets after compiling the data down from sls formulas.

Low State

The Low State layer is the list of low chunks "evaluated" in order. To see what the low state looks like for a highstate, run: salt '*' state.show_lowstate This will display the raw lowstate in the order in which each low chunk will be evaluated. The order of evaluation is not necessarily the order of execution, since requisites are evaluated at runtime. Requisite execution and evaluation is finite; this means that the order of execution can be ascertained with 100% certainty based on the order of the low state.

High Data

High data is the data structure represented in YAML via SLS files. The High data structure is created by merging the data components rendered inside sls files (or other render systems). The High data can be easily viewed by executing the state.show_highstate or state.show_sls functions. Since this data is a somewhat complex data structure, it may be easier to read using the json, yaml, or pprint outputters: salt '*' state.show_highstate --out yaml salt '*' state.show_sls edit.vim --out pprint

SLS

Above "High Data", the logical layers are no longer technically required to be executed, or to be executed in a hierarchy. This means that how the High data is generated is optional and very flexible. The SLS layer allows for many mechanisms to be used to render sls data from files or to use the fileserver backend to generate sls and file data from external systems. The SLS layer can be called directly to execute individual sls formulas. NOTE: SLS Formulas have historically been called "SLS
files". This is because a single SLS was only constituted in a single
file. Now the term "SLS Formula" better expresses how a
compartmentalized SLS can be expressed in a much more dynamic way by combining
pillar and other sources, and the SLS can be dynamically generated.
To call a single SLS formula named edit.vim, execute state.apply and pass edit.vim as an argument: salt '*' state.apply edit.vim

HighState

Calling SLS directly logically assigns what states should be executed from the context of the calling minion. The Highstate layer is used to allow for full contextual assignment of what is executed where to be tied to groups of, or individual, minions entirely from the master. This means that the environment of a minion, and all associated execution data pertinent to said minion, can be assigned from the master without needing to execute or configure anything on the target minion. This also means that the minion can independently retrieve information about its complete configuration from the master. To execute the highstate use state.apply: salt '*' state.apply

Orchestrate

The orchestrate layer expresses the highest functional layer of Salt's automated logic systems. The Overstate allows for stateful and functional orchestration of routines from the master. The orchestrate defines in data execution stages which minions should execute states, or functions, and in what order using requisite logic.

The Orchestrate Runner

NOTE: This documentation has been moved here.
Ordering States

The way in which configuration management systems are executed is a hotly debated topic in the configuration management world. Two major philosophies exist on the subject: either execute in an imperative fashion, where things are executed in the order in which they are defined, or in a declarative fashion, where dependencies need to be mapped between objects. Imperative ordering is finite and generally considered easier to write; declarative ordering is much more powerful and flexible, but generally considered more difficult to create. Salt has been created to get the best of both worlds. States are evaluated in a finite order, which guarantees that states are always executed in the same order, and the states runtime is declarative, making Salt fully aware of dependencies via the requisite system.

State Auto Ordering

Salt always executes states in a finite manner, meaning that they will always execute in the same order regardless of the system that is executing them. This evaluation order makes it easy to know what order the states will be executed in, but it is important to note that the requisite system will override the ordering defined in the files, and the order option, described below, will also override the order in which states are executed. This ordering system can be disabled in preference of lexicographic (classic) ordering by setting the state_auto_order option to False in the master configuration file. Otherwise, state_auto_order defaults to True. How compiler ordering is managed is described further in Understanding State Compiler Ordering.

Requisite Statements

NOTE: The behavior of requisites changed in version 0.9.7 of
Salt. This documentation applies to requisites in version 0.9.7 and
later.
Often when setting up states, any single action will require or depend on another action. Salt allows for the building of relationships between states with requisite statements. A requisite statement ensures that the named state is evaluated before the state requiring it. There are three types of requisite statements in Salt: require, watch, and prereq. These requisite statements are applied to a specific state declaration: httpd: In this example, the require requisite is used to declare that the file /etc/httpd/conf/httpd.conf should only be set up if the pkg state executes successfully. The requisite system works by finding the states that are required and executing them before the state that requires them. Then the required states can be evaluated to see if they have executed correctly. Require statements can refer to any state defined in Salt. The basic examples are pkg, service, and file, but any state can be referenced. In addition to state declarations such as pkg, file, etc., sls type requisites are also recognized, and essentially allow 'chaining' of states. This provides a mechanism to ensure the proper sequence for complex state formulas, especially when the discrete states are split or grouped into separate sls files: include: In this example, the httpd service running state will not be applied (i.e., the httpd service will not be started) unless both the httpd package is installed AND the network state is satisfied. NOTE: Requisite matching
Requisites match on both the ID Declaration and the name parameter. Therefore, if using the pkgs or sources argument to install a list of packages in a pkg state, it's important to note that it is impossible to match an individual package in the list, since all packages are installed as a single state.

Multiple Requisites

The requisite statement is passed as a list, allowing for the easy addition of more requisites. Both requisite types can also be separately declared: httpd: In this example, the httpd service is only going to be started if the package, user, group, and file are executed successfully.

Requisite Documentation

For detailed information on each of the individual requisites, please look here.

The Order Option

Before using the order option, remember that the majority of state ordering should be done with a Requisite declaration, and that a requisite declaration will override an order option; a state with an order option should therefore not require, or be required by, other states. The order option is used by adding an order number to a state declaration with the option order: vim: Setting the order option to 1 ensures that the vim package will be installed in tandem with any other state declaration set to order 1. Any state declared without an order option will be executed after all states with order options are executed. But this construct can only handle ordering states from the beginning. Certain circumstances will present a situation where it is desirable to send a state to the end of the line. To do this, set the order to last: vim:

Running States in Parallel

Introduced in Salt version 2017.7.0, it is now possible to run select states in parallel. This is accomplished very easily by adding the parallel: True option to your state declaration: nginx: Now nginx will be started in a separate process from the normal state run and will therefore not block additional states.

Parallel States and Requisites

Parallel states still honor requisites. If a given state requires another state that has been run in parallel, then the state runtime will wait for the required state to finish. Given this example: sleep 10: The sleep 10 will be started first, then the state system will block on starting nginx until the sleep 10 completes. Once nginx has been ensured to be running, the sleep 5 will start. This means that the order of evaluation of Salt states and requisites is still honored, and given that, in the above case, parallel: True does not actually speed things up. To run the above state much faster, make sure that the sleep 5 is evaluated before the nginx state: sleep 10: Now both of the sleep calls will be started in parallel and nginx will still wait for the state it requires, but while it waits the sleep 5 state will also complete.

Things to be Careful of

Parallel states do not prevent you from creating parallel conflicts on your system. This means that if you start multiple package installs using Salt, then the package manager will block or fail. If you attempt to manage the same file with multiple states in parallel, then the result can produce an unexpected file. Make sure that the states you choose to run in parallel do not conflict, or else, like in any parallel programming environment, the outcome may not be what you expect. Doing things like just making all states run in parallel will almost certainly result in unexpected behavior.
With that said, running states in parallel should be safe the vast majority of the time, and the most likely culprit for unexpected behavior is running multiple package installs in parallel.

State Providers

New in version 0.9.8. Salt predetermines what modules should be mapped to what uses based on the properties of a system. These determinations are generally made for modules that provide things like package and service management. Sometimes in states, it may be necessary to use an alternative module to provide the needed functionality. For instance, a very old Arch Linux system may not be running systemd, so instead of using the systemd service module, you can revert to the default service module: httpd: In this instance, the basic service module (which manages sysvinit-based services) will replace the systemd module which is used by default on Arch Linux. This change only affects this one state, though. If it is necessary to make this override for most or every service, it is better to just override the provider in the minion config file, as described here. Also, keep in mind that this only works for states with an identically-named virtual module (pkg, service, etc.).

Arbitrary Module Redirects

The provider statement can also be used for more powerful means: instead of overwriting or extending the module used for the named service, an arbitrary module can be used to provide certain functionality. emacs: In this example, the state is being instructed to use a custom module to invoke commands. Arbitrary module redirects can be used to dramatically change the behavior of a given state.

Requisites and Other Global State Arguments

Requisites

The Salt requisite system is used to create relationships between states. This provides a method to easily define inter-dependencies between states. These dependencies are expressed by declaring the relationships using state names and IDs or names. The generalized form of a requisite target is <state name>: <ID or name>. The specific form is defined as a Requisite Reference. A common use case for requisites is ensuring a package has been installed before trying to ensure the service is running. In the following example, Salt will ensure nginx has been installed before trying to manage the service. If the package could not be installed, Salt will not try to manage the service. nginx: Without the requisite defined, salt would attempt to install the package and then attempt to manage the service even if the installation failed. These requisites always form dependencies in a predictable single direction. Each requisite has an alternate <requisite>_in form that can be used to establish a "reverse" dependency, which is useful in for loops. In the end, a single dependency map is created and everything is executed in a finite and predictable order.

Requisite matching

Requisites typically need two pieces of information for matching: the name of the state module involved (for example pkg or file) and the identifier of the state (either its ID declaration or its name parameter), as in the following example:
nginx:

Identifier matching

Requisites match on both the ID Declaration and the name parameter. This means that, in the "Deploy server package" example above, a require requisite would match with Deploy server package or /usr/local/share/myapp.tar.xz, so either of the following versions for "Extract server package" is correct: # (Archive arguments omitted for simplicity) # Match by ID declaration Extract server package:

Wildcard matching in requisites

New in version 0.9.8. Wildcard matching is supported for state identifiers.
Note that this does not follow glob rules: dots and slashes are not special, and matching is against state identifiers, not file paths. In the example below, a change in any state managing an apache config file will reload/restart the service: apache2: A leading or bare * must be quoted to avoid confusion with YAML references: /etc/letsencrypt/renewal-hooks/deploy/install.sh:

Omitting state module

New in version 2016.3.0. In version 2016.3.0, the state module name was made optional. If the state module is omitted, all states matching the identifier will be required, regardless of which module they are using. - require:
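      # Sketch continuation: with the state module omitted, this matches
      # every state whose ID or name is "vim", regardless of its module.
      - vim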
Several requisite types have a corresponding requisite_any form:
There is no combined form of _any and _in requisites, such as require_any_in! Lastly, onfail has one special onfail_all form to account for when AND logic is desired instead of the default OR logic of onfail/onfail_any (which are equivalent). All requisites define specific relationships and always work with the dependency logic defined above. requireThe use of require builds a dependency that prevents a state from executing until all required states execute successfully. If any required state fails, then the state will fail due to requisites. In the following example, the service state will not be checked unless both file states execute without failure. nginx: Require SLS FileAs of Salt 0.16.0, it is possible to require an entire sls file. Do this by first including the sls file and then setting a state to require the included sls file: include: This will add a require to all of the state declarations found in the given sls file. This means that bar will require every state within foo. This makes it very easy to batch large groups of states easily in any requisite statement. onchangesNew in version 2014.7.0. The onchanges requisite makes a state only apply if the required states generate changes, and if the watched state's "result" is True (does not fail). This can be a useful way to execute a post hook after changing aspects of a system. If a state has multiple onchanges requisites then the state will trigger if any of the watched states changes. myservice: In the example above, cmd.run will run only if there are changes in the file.managed state. An easy mistake to make is using onchanges_in when onchanges is the correct choice, as seen in this next example. myservice: This will set up a requisite relationship in which the cmd.run state always executes, and the file.managed state only executes if the cmd.run state has changes (which it always will, since the cmd.run state includes the command results as changes). It may semantically seem like the cmd.run state should only run when there are changes in the file state, but remember that requisite relationships involve one state watching another state, and a requisite_in does the opposite: it forces the specified state to watch the state with the requisite_in. NOTE: An onchanges requisite has no effect on SLS
requisites (monitoring for changes in an included SLS). Only the individual
state IDs from an included SLS can be monitored.
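A sketch of the onchanges pattern described in this section (paths and command are illustrative):

    myservice-config:
      file.managed:
        - name: /etc/myservice/myservice.conf
        - source: salt://myservice/files/myservice.conf

    restart-myservice:
      cmd.run:
        - name: systemctl restart myservice
        - onchanges:
          - file: myservice-config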
watch

A watch requisite is used to add additional behavior when there are changes in other states. This is done using the mod_watch function available from the execution module and will execute any time a watched state changes. NOTE: If a state should only execute when another state has
changes, and otherwise do nothing, the onchanges requisite should be
used instead of watch. watch is designed to add
additional behavior when there are changes, but otherwise the state
executes normally.
NOTE: A watch requisite has no effect on SLS requisites
(watching for changes in an included SLS). Only the individual state IDs from
an included SLS can be watched.
A good example of using watch is with a service.running state. When a service watches a state, then the service is reloaded/restarted when the watched state changes, in addition to Salt ensuring that the service is running. ntpd: Another useful example of watch is using salt to ensure a configuration file is present and in a correct state, ensure the service is running, and trigger service nginx reload instead of service nginx restart in order to avoid dropping any connections. nginx: NOTE: Not all state modules contain mod_watch. If
mod_watch is absent from the watching state module, the watch
requisite behaves exactly like a require requisite.
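For reference, the service-watching pattern described above takes roughly this shape (service name and paths illustrative):

    ntpd:
      service.running:
        - watch:
          - file: /etc/ntp.conf

    /etc/ntp.conf:
      file.managed:
        - source: salt://ntp/files/ntp.conf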
The state containing the watch requisite is defined as the watching state. The state specified in the watch statement is defined as the watched state. When the watched state executes, it will return a dictionary containing a key named "changes". Here are two examples of state return dictionaries, shown in json for clarity:
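    # Sketch reconstructions; field values are illustrative.
    # 1) A watched state's return WITH changes:
    {
        "changes": {"diff": "New file"},
        "comment": "File /tmp/foo.conf updated",
        "name": "/tmp/foo.conf",
        "result": true
    }

    # 2) A watched state's return WITHOUT changes:
    {
        "changes": {},
        "comment": "File /tmp/foo.conf is in the correct state",
        "name": "/tmp/foo.conf",
        "result": true
    }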
If the "result" of the watched state is True, the watching state will execute normally, and if it is False, the watching state will never run. This part of watch mirrors the functionality of the require requisite. If the "result" of the watched state is True and the "changes" key contains a populated dictionary (changes occurred in the watched state), then the watch requisite can add additional behavior. This additional behavior is defined by the mod_watch function within the watching state module. If the mod_watch function exists in the watching state module, it will be called in addition to the normal watching state. The return data from the mod_watch function is what will be returned to the master in this case; the return data from the main watching function is discarded. If the "changes" key contains an empty dictionary, the watch requisite acts exactly like the require requisite (the watching state will execute if "result" is True, and fail if "result" is False in the watched state). NOTE: If the watching state changes key contains values,
then mod_watch will not be called. If you're using watch or
watch_in then it's a good idea to have a state that only enforces one
attribute - such as splitting out service.running into its own state
and have service.enabled in another.
One common source of confusion is expecting mod_watch to be called for every necessary change. You might be tempted to write something like this: httpd: If your service is already running but not enabled, you might expect that Salt will be able to tell that, since the config file changed, your service needs to be restarted. This is not the case. Because the service needs to be enabled, that change will be made and mod_watch will never be triggered. In this case, changes to your apache.conf will fail to be loaded. If you want to ensure that your service always reloads, the correct way to handle this is either to ensure that the service is not running before applying your state, or simply to make sure that service.running is in a state on its own: enable-httpd: Now that service.running is its own state, changes to service.enabled will no longer prevent mod_watch from getting triggered, so your httpd service will get restarted like you want.

listen

New in version 2014.7.0. A listen requisite is used to trigger the mod_watch function of a state module. Rather than modifying execution order, the mod_watch state created by listen will execute at the end of the state run. restart-apache2: This example will cause apache2 to restart when the apache2.conf file is changed, but the apache2 restart will happen at the end of the state run. restart-apache2: This example does the same as the above example, but puts the state argument on the file resource, rather than the service resource.

prereq

New in version 0.16.0. The prereq requisite works similarly to onchanges, except that it uses the result from test=True on the observed state to determine whether it should run prior to the observed state being run. The best way to define how prereq operates is displayed in the following practical example: when a service should be shut down because underlying code is going to change, the service should be off-line while the update occurs. In this example, graceful-down is the pre-requiring state and site-code is the pre-required state. graceful-down: In this case, the apache server will only be shut down if the site-code state expects to deploy fresh code via the file.recurse call. The site-code deployment will only be executed if the graceful-down run completes successfully. When a prereq requisite is evaluated, the pre-required state reports whether it expects to have any changes. It does this by running the pre-required single state as a test run by enabling test=True. This test run will return a dictionary containing a key named "changes". (See the watch section above for examples of "changes" dictionaries.) If the "changes" key contains a populated dictionary, it means that the pre-required state expects changes to occur when the state is actually executed, as opposed to the test run. The pre-requiring state will now run. If the pre-requiring state executes successfully, the pre-required state will then execute. If the pre-requiring state fails, the pre-required state will not execute. If the "changes" key contains an empty dictionary, this means that changes are not expected by the pre-required state. Neither the pre-required state nor the pre-requiring state will run.

onfail

New in version 2014.7.0. The onfail requisite allows for reactions to happen strictly as a response to the failure of another state. This can be used in a number of ways, such as sending a notification or attempting an alternate task or thread of tasks when an important state fails.
The onfail requisite is applied in the same way as require and watch: primary_mount: build_site: The default behavior of onfail when multiple requisites are listed is the opposite of other requisites in the salt state engine: it acts by default like any() instead of all(). This means that when you list multiple onfail requisites on a state, if any of them fail, the requisite will be satisfied. If you instead need all logic to be applied, you can use the onfail_all form: test_site_a: In this contrived example, notify_site_down will run when both 10.0.0.1 and 10.0.0.2 fail to respond to ping. NOTE: Setting failhard (globally or in the failing
state) to True will cause onfail, onfail_in and
onfail_any requisites to be ignored. If you want to combine a global
failhard set to True with onfail, onfail_in or
onfail_any, you will have to explicitly set failhard to False
(overriding the global setting) in the state that could fail.
NOTE: Beginning in the 2016.11.0 release of Salt,
onfail uses OR logic for multiple listed onfail requisites.
Prior to the 2016.11.0 release, onfail used AND logic. See
Issue #22370 for more information. Beginning in the Neon release
of Salt, a new onfail_all requisite form is available if AND logic is
desired.
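A sketch of the onfail pattern referenced above (devices and paths illustrative):

    primary_mount:
      mount.mounted:
        - name: /mnt/share
        - device: 10.0.0.45:/share
        - fstype: nfs

    backup_mount:
      mount.mounted:
        - name: /mnt/share
        - device: 192.168.40.34:/share
        - fstype: nfs
        - onfail:
          - mount: primary_mount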
use

The use requisite is used to inherit the arguments passed in another id declaration. This is useful when many files need to have the same defaults. /etc/foo.conf: The use statement was developed primarily for the networking states, but can be used on any states in Salt. This makes sense for the networking state because it can define a long list of options that need to be applied to multiple network interfaces. The use statement does not inherit the requisite arguments of the targeted state. This also means that a chain of use requisites will not inherit inherited options.

The _in version of requisites

Direct requisites form a dependency in a single direction. This makes it possible for Salt to detect cyclical dependencies and helps prevent faulty logic. In some cases, often in loops, it is desirable to establish a dependency in the opposite direction. All direct requisites have an _in counterpart that behaves the same but forms the dependency in the opposite direction. The following sls examples will produce the exact same dependency mapping. httpd: httpd: In the following example, Salt will not try to manage the nginx service or any configuration files unless the nginx package is installed, because of the pkg: nginx requisite. nginx: php.sls include: mod_python.sls include: Now the httpd server will only start if both php and mod_python are first verified to be installed, thus allowing for a requisite to be defined "after the fact". {% for cfile in salt.pillar.get('nginx:config_files') %}
/etc/nginx/conf.d/{{ cfile }}:
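  # Sketch continuation of the loop above; the source path and the
  # nginx service state (assumed to be defined elsewhere) are illustrative.
  file.managed:
    - source: salt://nginx/configs/{{ cfile }}
    - listen_in:
      - service: nginx
{% endfor %}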
In this scenario, listen_in is a better choice than require_in, because the listen requisite will trigger mod_watch behavior, which will wait until the end of state execution and then reload the service.

The _any version of requisites

New in version 2018.3.0. Some requisites have an _any counterpart that changes the requisite behavior from all() to any(). A: In this example, A will run because at least one of the requirements specified, B or C, will succeed. myservice: In this example, cmd.run would be run only if either of the file.managed states generated changes and at least one of the watched states' "result" is True.

Altering States

The state altering system is used to make sure that states are evaluated exactly as the user expects. It can be used to double check that a state performed exactly how it was expected to, or to make 100% sure that a state only runs under certain conditions. The use of the unless or onlyif options helps make states even more stateful. The check_cmd option helps ensure that the result of a state is evaluated correctly.

reload

reload_modules is a boolean option that forces salt to reload its modules after a state finishes. reload_pillar and reload_grains can also be set. See Reloading Modules. grains_refresh:

unless

New in version 2014.7.0. The unless requisite specifies that a state should only run when any of the specified commands return False. The unless requisite operates as NAND and is useful in giving more granular control over when a state should execute. NOTE: Under the hood, unless calls cmd.retcode with python_shell=True. This means the commands referenced by unless will be parsed by a shell, so beware of side effects, as this shell will be run with the same privileges as the salt-minion. Also be aware that the boolean value is determined by the shell's concept of True and False, rather than Python's concept of True and False. vim: In the example above, the state will only run if either the vim-enhanced package is not installed (returns False) or if /usr/bin/vim does not exist (returns False). The state will run if both commands return False. However, the state will not run if both commands return True. Unless checks are resolved for each name to which they are associated. For example: deploy_app: In the above case, some_check will be run prior to each name: once for first_deploy_cmd and a second time for second_deploy_cmd. Changed in version 3000: The unless requisite can take a module as a dictionary field in unless. The dictionary must contain an argument fun which is the module that is being run, and everything else must be passed in under the args key or will be passed as individual kwargs to the module function. install apache on debian based distros: set mysql root password: Changed in version sodium: For modules which return a deeper data structure, the get_return key can be used to access results. test: Changed in version 3006.0: Since the unless requisite utilizes cmd.retcode, certain parameters included in the state are passed along to cmd.retcode. On occasion this can cause issues, particularly if the shell option in a user.present is set to /sbin/nologin and this shell is passed along to cmd.retcode. This would cause cmd.retcode to run the command using that shell, which would fail regardless of the result of the command. By including shell in cmd_opts_exclude, that parameter would not be passed along to the call to cmd.retcode. jim_nologin:

onlyif

New in version 2014.7.0.
The onlyif requisite specifies that if each command listed in onlyif returns True, then the state is run. If any of the specified commands return False, the state will not run. NOTE: Under the hood, onlyif calls cmd.retcode with python_shell=True. This means the commands referenced by onlyif will be parsed by a shell, so beware of side effects, as this shell will be run with the same privileges as the salt-minion. Also be aware that the boolean value is determined by the shell's concept of True and False, rather than Python's concept of True and False. stop-volume: The above example ensures that the stop_volume and delete modules only run if the gluster commands return a 0 ret value. Changed in version 3000: The onlyif requisite can take a module as a dictionary field in onlyif. The dictionary must contain an argument fun which is the module that is being run, and everything else must be passed in under the args key or will be passed as individual kwargs to the module function. install apache on redhat based distros: arbitrary file example: Changed in version sodium: For modules which return a deeper data structure, the get_return key can be used to access results. test: Changed in version 3006.0: Since the onlyif requisite utilizes cmd.retcode, certain parameters included in the state are passed along to cmd.retcode. On occasion this can cause issues, particularly if the shell option in a user.present is set to /sbin/nologin and this shell is passed along to cmd.retcode. This would cause cmd.retcode to run the command using that shell, which would fail regardless of the result of the command. By including shell in cmd_opts_exclude, that parameter would not be passed along to the call to cmd.retcode. jim_nologin:

creates

New in version 3001. The creates requisite specifies that a state should only run when any of the specified files do not already exist. Like unless, the creates requisite operates as NAND and is useful in giving more granular control over when a state should execute. This was previously used by the cmd and docker_container states. contrived creates example: creates also accepts a list of files, in which case this state will run if any of the files do not exist: creates list:

runas

New in version 2017.7.0. The runas global option is used to set the user which will be used to run the command in the cmd.run module. django: In the above state, the pip command run by cmd.run will be run by the daniel user.

runas_password

New in version 2017.7.2. The runas_password global option is used to set the password used by the runas global option. This is required by cmd.run on Windows when runas is specified. It will be set when runas_password is defined in the state. run_script: In the above state, the PowerShell script run by cmd.run will be run by the frank user with the password supersecret.

check_cmd

New in version 2014.7.0. Check Command is used for determining that a state did or did not run as expected. NOTE: Under the hood, check_cmd calls cmd.retcode with python_shell=True. This means the command will be parsed by a shell, so beware of side effects, as this shell will be run with the same privileges as the salt-minion. comment-repo: This will attempt to do a replace on all enabled=0 in the .repo file, and replace them with enabled=1. The check_cmd is just a bash command. It will do a grep for enabled=0 in the file, and if it finds any, it will return a 0, which will be inverted by the leading !, causing check_cmd to set the state as failed.
If it returns a 1, meaning it didn't find any enabled=0, it will be inverted by the leading !, returning a 0 and declaring the function succeeded. NOTE: This requisite check_cmd functions differently than the check_cmd of the file.managed state.

Overriding Checks

There are two commands used for the above checks. mod_run_check is used to check for onlyif and unless. If the goal is to override the global check for these two variables, include a mod_run_check in the salt/states/ file. mod_run_check_cmd is used to check for the check_cmd options. To override this one, include a mod_run_check_cmd in the states file for the state.

Fire Event Notifications

New in version 2015.8.0. The fire_event option in a state will cause the minion to send an event to the Salt Master upon completion of that individual state. The following example will cause the minion to send an event to the Salt Master with a tag of salt/state_result/20150505121517276431/dasalt/nano, and the result of the state will be the data field of the event. Notice that the name of the state gets added to the tag. nano_stuff: In the following example, instead of setting fire_event to True, fire_event is set to an arbitrary string, which will cause the event to be sent with this tag: salt/state_result/20150505121725642845/dasalt/custom/tag/nano/finished nano_stuff:

Retrying States

New in version 2017.7.0. The retry option in a state allows it to be executed multiple times until a desired result is obtained or the maximum number of attempts have been made. The retry option can be configured by the attempts, until, interval, and splay parameters. The attempts parameter controls the maximum number of times the state will be run. If not specified, or if an invalid value is specified, attempts will default to 2. The until parameter defines the result that is required to stop retrying the state. If not specified, or if an invalid value is specified, until will default to True. The interval parameter defines the amount of time, in seconds, that the system will wait between attempts. If not specified, or if an invalid value is specified, interval will default to 30. The splay parameter allows the interval to be additionally spread out. If not specified, or if an invalid value is specified, splay defaults to 0 (i.e. no splaying will occur). The following example will run the pkg.installed state until it returns True or it has been run 5 times. Each attempt will be 60 seconds apart and the interval will be splayed up to an additional 10 seconds: my_retried_state: The following example will run the pkg.installed state with all the defaults for retry. The state will run up to 2 times, each attempt being 30 seconds apart, or until it returns True. install_nano: The following example will run the file.exists state every 30 seconds up to 15 times or until the file exists (i.e. the state returns True). wait_for_file:

Return data from a retried state

When a state is retried, the returned output is as follows: The result return value is the result from the final run. For example, imagine a state set to retry up to three times or until True. If the state returns False on the first run and then True on the second, the result of the state will be True. The started return value is the started value from the first run. The duration return value is the total duration of all attempts plus the retry intervals. The comment return value will include the result and comment from all previous attempts. For example: wait_for_file: Would return similar to the following.
The state result in this case is False (file.exists was run 10 times with a 2-second interval, but the specified file did not exist on any run).
Run State With a Different UmaskNew in version 3002: NOTE: not available on Windows The umask state argument can be used to run a state with a different umask. Prior to version 3002 this was available to cmd states, but it is now a global state argument that can be applied to any state. cleanup_script: Startup StatesSometimes it may be desirable for the Salt minion to execute a state run when it is started. This alleviates the need for the master to initiate a state run on a new minion and can make provisioning much easier. As of Salt 0.10.3 the minion config reads options that allow for states to be executed at startup. The options are startup_states, sls_list, and top_file. The startup_states option can be passed one of a number of arguments to define how to execute states. The available options are highstate (run state.apply), sls (read the sls_list option and run the named SLS files), and top (read the top_file option and run states based on that top file). Examples: Execute state.apply to run the highstate when starting the minion:

startup_states: highstate

Execute the sls files edit.vim and hyper:

startup_states: sls
sls_list:
  - edit.vim
  - hyper

State TestingExecuting a Salt state run can potentially change many aspects of a system, and it may be desirable to first see what a state run is going to change before applying the run. Salt has a test interface to report on exactly what will be changed; this interface can be invoked on any of the major state run functions: salt '*' state.apply test=True salt '*' state.apply mysls test=True salt '*' state.single test=True A test run is requested by adding the test=True option to the states. The return information will show states that will be applied in yellow and the result is reported as None. Default TestIf the value test is set to True in the minion configuration file then states will default to being executed in test mode. If this value is set then states can still be run normally by passing test=False: salt '*' state.apply test=False salt '*' state.apply mysls test=False salt '*' state.single test=False The Top FileIntroductionMost infrastructures are made up of groups of machines, each machine in the group performing a role similar to others. Those groups of machines work in concert with each other to create an application stack. To effectively manage those groups of machines, an administrator needs to be able to create roles for those groups. For example, a group of machines that serve front-end web traffic might have roles which indicate that those machines should all have the Apache webserver package installed and that the Apache service should always be running. In Salt, the file which contains a mapping between groups of machines on a network and the configuration roles that should be applied to them is called a top file. Top files are named top.sls by default and they are so-named because they always exist in the "top" of a directory hierarchy that contains state files. That directory hierarchy is called a state tree. A Basic ExampleTop files have three components: the environment (a state tree directory containing a set of state files), the target (a grouping of machines which will have a set of states applied to them), and the state files (a list of state files to apply to a target).
The relationship between these three components is nested as follows: environments contain targets, and targets contain states.
Putting these concepts together, we can describe a scenario in which all minions with an ID that begins with web have an apache state applied to them:

base: # Apply SLS files from the directory root for the 'base' environment
  'web*': # All minions with a minion_id that begins with 'web'
    - apache # Apply the state file named 'apache.sls'

EnvironmentsEnvironments are directory hierarchies which contain a top file and a set of state files. Environments can be used in many ways; however, there is no requirement that they be used at all. In fact, the most common way to deploy Salt is with a single environment, called base. It is recommended that users only create multiple environments if they have a use case which specifically calls for multiple versions of state trees. Getting Started with Top FilesEach environment is defined inside a salt master configuration variable called file_roots. In the most common single-environment setup, only the base environment is defined in file_roots along with only one directory path for the state tree.

file_roots:
  base:
    - /usr/local/etc/salt/states

In the above example, the top file will only have a single environment to pull from. Next is a simple single-environment top file placed in /usr/local/etc/salt/states/top.sls, illustrating that for the environment called base, all minions will have the state files named core.sls and edit.sls applied to them.

base:
  '*':
    - core
    - edit

Assuming the file_roots configuration from above, Salt will look in the /usr/local/etc/salt/states directory for core.sls and edit.sls. Multiple EnvironmentsIn some cases, teams may wish to create versioned state trees which can be used to test Salt configurations in isolated sets of systems such as a staging environment before deploying states into production. For this case, multiple environments can be used to accomplish this task. To create multiple environments, the file_roots option can be expanded:

file_roots:
  dev:
    - /usr/local/etc/salt/states/dev
  qa:
    - /usr/local/etc/salt/states/qa
  prod:
    - /usr/local/etc/salt/states/prod

In the above, we declare three environments: dev, qa and prod. Each environment has a single directory assigned to it. Our top file references the environments:

dev:
  'webserver*':
    - webserver
qa:
  'webserver*':
    - webserver
prod:
  'webserver*':
    - webserver

As seen above, the top file now declares the three environments and for each, target expressions are defined to map minions to state files. For example, all minions which have an ID beginning with the string webserver will have the webserver state from the requested environment assigned to them. In this manner, a proposed change to a state could first be made in a state file in /usr/local/etc/salt/states/dev and then be applied to development webservers before moving the state into QA by copying the state file into /usr/local/etc/salt/states/qa. Choosing an Environment to TargetThe top file is used to assign a minion to an environment unless overridden using the methods described below. The environment in the top file must match a valid fileserver environment (a.k.a. saltenv) in order for any states to be applied to that minion. When using the default fileserver backend, environments are defined in file_roots. The states that will be applied to a minion in a given environment can be viewed using the state.show_top function. Minions may be pinned to a particular environment by setting the environment value in the minion configuration file. In doing so, a minion will only request files from the environment to which it is assigned. The environment may also be dynamically selected at runtime by passing it to the salt, salt-call or salt-ssh command. This is most commonly done with functions in the state module by using the saltenv argument.
For example, to run a highstate on all minions, using only the top file and SLS files in the prod environment, run: salt '*' state.highstate saltenv=prod. NOTE: Not all functions accept saltenv as an argument,
see the documentation for each individual function to verify.
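For example, to view which states the top file assigns to a particular minion (the minion ID web01 is illustrative):

salt 'web01' state.show_top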
ShorthandIf you assign only one SLS to a system, as in this example, a shorthand is also available: base: Advanced Minion TargetingIn the examples above, notice that all of the target expressions are globs. The default match type in top files (since version 2014.7.0) is actually the compound matcher, not the glob matcher as in the CLI. A single glob, when passed through the compound matcher, acts the same way as matching by glob, so in most cases the two are indistinguishable. However, there is an edge case in which a minion ID contains whitespace. While it is not recommended to include spaces in a minion ID, Salt will not stop you from doing so. Since compound expressions are parsed word-by-word, a minion ID containing spaces will fail to match. In this edge case, it will be necessary to explicitly use the glob matcher:

base:
  'minion 1':
    - match: glob
    - foo

The available match types which can be set for a target expression in the top file are glob, pcre, grain, grain_pcre, list, pillar, pillar_pcre, pillar_exact, ipcidr, data, range, compound, and nodegroup.
Below is a slightly more complex top file example, showing some of the above match types:

# All files will be taken from the file path specified in the base
# environment in the ``file_roots`` configuration value.
base:

How Top Files Are CompiledWhen a highstate is executed and an environment is specified (either using the environment config option or by passing the saltenv when executing the highstate), then that environment's top file is the only top file used to assign states to minions, and only states from the specified environment will be run. The remainder of this section applies to cases in which a highstate is executed without an environment specified. With no environment specified, the minion will look for a top file in each environment, and each top file will be processed to determine the SLS files to run on the minions. By default, the top files from each environment will be merged together. In configurations with many environments, such as with GitFS where each branch and tag is treated as a distinct environment, this may cause unexpected results as SLS files from older tags cause defunct SLS files to be included in the highstate. In cases like this, it can be helpful to set top_file_merging_strategy to same to force each environment to use its own top file. top_file_merging_strategy: same Another option would be to set state_top_saltenv to a specific environment, to ensure that any top files in other environments are disregarded: state_top_saltenv: base With GitFS, it can also be helpful to simply manage each environment's top file separately, and/or manually specify the environment when executing the highstate to avoid any complicated merging scenarios. gitfs_saltenv_whitelist and gitfs_saltenv_blacklist can also be used to hide unneeded branches and tags from GitFS to reduce the number of top files in play. When using multiple environments, it is not necessary to create a top file for each environment. The easiest-to-maintain approach is to use a single top file placed in the base environment. This is often infeasible with GitFS though, since branching/tagging can easily result in extra top files. However, when only the default (roots) fileserver backend is used, a single top file in the base environment is the most common way of configuring a highstate. The following four minion configuration options affect how top files are compiled when no environment is specified; it is recommended to read their documentation to learn more about how they work: state_top_saltenv, top_file_merging_strategy, default_top, and env_order.
Top File Compilation ExamplesFor the scenarios below, assume the configuration sketched after this note: a /usr/local/etc/salt/master config defining file_roots, plus the top files /usr/local/etc/salt/states/base/top.sls and /usr/local/etc/salt/states/dev/top.sls. NOTE: For the purposes of these examples, there is no top file
in the qa environment.
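The configuration can be sketched as follows, consistent with the scenarios below (the SLS names base1, base2, dev1, dev2, qa1, and qa2, and the minion IDs minion1 and minion2, are the ones those scenarios reference):

/usr/local/etc/salt/master:

file_roots:
  base:
    - /usr/local/etc/salt/states/base
  dev:
    - /usr/local/etc/salt/states/dev
  qa:
    - /usr/local/etc/salt/states/qa

/usr/local/etc/salt/states/base/top.sls:

base:
  '*':
    - base1
dev:
  '*':
    - dev1
qa:
  '*':
    - qa1

/usr/local/etc/salt/states/dev/top.sls:

base:
  'minion1':
    - base2
dev:
  'minion2':
    - dev2
qa:
  '*':
    - qa2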
Scenario 1 - dev Environment SpecifiedIn this scenario, the highstate was either invoked with saltenv=dev or the minion has environment: dev set in the minion config file. The result will be that only the dev2 SLS from the dev environment will be part of the highstate, and it will be applied to minion2, while minion1 will have no states applied to it. If the base environment were specified, the result would be that only the base1 SLS from the base environment would be part of the highstate, and it would be applied to all minions. If the qa environment were specified, the highstate would exit with an error. Scenario 2 - No Environment Specified, top_file_merging_strategy is "merge"In this scenario, assuming that the base environment's top file was evaluated first, the base1, dev1, and qa1 states would be applied to all minions. If, for instance, the qa environment is not defined in /usr/local/etc/salt/states/base/top.sls, then because there is no top file for the qa environment, no states from the qa environment would be applied. Scenario 3 - No Environment Specified, top_file_merging_strategy is "same"Changed in version 2016.11.0: In prior versions, "same" did not quite work as described below (see here). This has now been corrected. It was decided that changing something like top file handling in a point release had the potential to unexpectedly impact users' top files too much, and it would be better to make this correction in a feature release. In this scenario, base1 from the base environment is applied to all minions. Additionally, dev2 from the dev environment is applied to minion2. If default_top is unset (or set to base, which happens to be the default), then qa1 from the qa environment will be applied to all minions. If default_top were set to dev, then both qa1 and qa2 from the qa environment would be applied to all minions. Scenario 4 - No Environment Specified, top_file_merging_strategy is "merge_all"New in version 2016.11.0. In this scenario, all configured states in all top files are applied. From the base environment, base1 would be applied to all minions, with base2 being applied only to minion1. From the dev environment, dev1 would be applied to all minions, with dev2 being applied only to minion2. Finally, from the qa environment, both the qa1 and qa2 states will be applied to all minions. Note that the qa1 states would not be applied twice, even though qa1 appears twice. SLS Template Variable ReferenceWARNING: In the 3005 release sls_path, tplfile, and
tpldir have had some significant improvements which have the potential
to break states that rely on old and broken functionality.
The template engines available to sls files and file templates come loaded with a number of context variables. These variables contain information and functions to assist in the generation of templates. See each variable below for its availability -- not all variables are available in all templating contexts. SaltThe salt variable is available to abstract the salt library functions. This variable is a python dictionary containing all of the functions available to the running salt minion. It is available in all salt templates. {% for file in salt['cmd.run']('ls -1 /opt/to_remove').splitlines() %}
/opt/to_remove/{{ file }}:
  file.absent
{% endfor %}
OptsThe opts variable abstracts the contents of the minion's configuration file directly to the template. The opts variable is a dictionary. It is available in all templates. {{ opts['cachedir'] }}
The config.get function also searches for values in the opts dictionary. PillarThe pillar dictionary can be referenced directly, and is available in all templates: {{ pillar['key'] }}
Using the pillar.get function via the salt variable is generally recommended since a default can be safely set in the event that the value is not available in pillar and dictionaries can be traversed directly: {{ salt['pillar.get']('key', 'failover_value') }}
{{ salt['pillar.get']('stuff:more:deeper') }}
GrainsThe grains dictionary makes the minion's grains directly available, and is available in all templates: {{ grains['os'] }}
The grains.get function can be used to traverse deeper grains and set defaults: {{ salt['grains.get']('os') }}
saltenvThe saltenv variable is available only in sls files when gathering the sls from an environment. {{ saltenv }}
SLS Only VariablesThe following are only available when processing sls files. If you need these in other templates, you can usually pass them in as template context. slsThe sls variable contains the sls reference value, and is only available in the actual SLS file (not in any files referenced in that SLS). The sls reference value is the value used to include the sls in top files or via the include option. {{ sls }}
slspathThe slspath variable contains the path to the directory of the current sls file. The value of slspath in files referenced in the current sls depends on the reference method. For Jinja includes, slspath is the path to the current directory of the file. For salt includes, slspath is the path to the directory of the included file. If the current sls file is in the root of the file roots, this will return an empty string ("") {{ slspath }}
sls_pathA version of slspath with underscores as path separators instead of slashes. So, if slspath is path/to/state then sls_path is path_to_state {{ sls_path }}
slsdotpathA version of slspath with dots as path separators instead of slashes. So, if slspath is path/to/state then slsdotpath is path.to.state. This is the same as sls if sls points to a directory instead of a file. {{ slsdotpath }}
slscolonpathA version of slspath with colons (:) as path separators instead of slashes. So, if slspath is path/to/state then slscolonpath is path:to:state. {{ slscolonpath }}
tplpathFull path to the sls template file being processed, on the local disk. This usually points to a copy of the sls file in a cache directory. It will be in OS-specific format (Windows vs POSIX). (It is probably best not to use this.) {{ tplpath }}
tplfileRelative path to exact sls template file being processed relative to file roots. {{ tplfile }}
tpldirDirectory, relative to file roots, of the current sls file. If the current sls file is in the root of the file roots, this will return ".". This is usually identical to slspath except in the case of a root-level sls, where this will return a ".". A common use case for this variable is to generate relative salt urls like:

my-file:
  file.managed:
    - source: salt://{{ tpldir }}/files/my-template

tpldotA version of tpldir with dots as path separators instead of slashes. So, if tpldir is path/to/state then tpldot is path.to.state. NOTE: if tpldir is ., this will be set to "" {{ tpldot }}
State ModulesState Modules are the components that map to actual enforcement and management of Salt states. States are Easy to Write!State Modules should be easy to write and straightforward. The information passed to the SLS data structures will map directly to the state modules. Mapping the information from the SLS data is simple, as this example illustrates:

/usr/local/etc/salt/master: # maps to "name", unless a "name" argument is specified below
  file.managed: # maps to the managed function in the salt.states.file module
    - user: root # these are keyword arguments passed to the managed function
    - group: root
    - mode: 644

Therefore this SLS data can be directly linked to a module, function, and arguments passed to that function. This does impose the burden that function names, state names, and function arguments be very human-readable inside state modules, since they directly define the user interface.
Best PracticesA well-written state function will follow these steps: NOTE: This is an extremely simplified example. Feel free to browse the source code for Salt's state modules to see other examples.

1. Set up the return dictionary and perform any necessary input validation (type checking, looking for use of mutually-exclusive arguments, etc.).

def myfunc():
    ret = {"name": name, "changes": {}, "result": False, "comment": ""}

2. Check if changes need to be made. This is best done with an information-gathering function in an accompanying execution module. The state should be able to use the return from this function to tell whether or not the minion is already in the desired state.

def myfunc():
    result = __salt__["modname.check"](name)

3. If step 2 found that the minion is already in the desired state, then exit immediately with a True result and without making any changes.

def myfunc():
    if result:
        ret["result"] = True
        ret["comment"] = "{0} is already installed".format(name)
        return ret

4. If changes do need to be made, then check to see if the state was being run in test mode (i.e. with test=True). If so, then exit with a None result, a relevant comment, and (if possible) a changes entry describing what changes would be made.

def myfunc():
    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "{0} would be installed".format(name)
        ret["changes"] = result
        return ret

5. Make the desired changes. This should again be done using a function from an accompanying execution module. If the result of that function is sufficient to tell whether or not an error occurred, you may exit with a False result and a relevant comment to explain what happened.

def myfunc():
    result = __salt__["modname.install"](name)

6. Perform the same check from step 2 again to confirm whether or not the minion is now in the desired state.

def myfunc():
    ret["changes"] = __salt__["modname.check"](name)

As you can see here, we are setting the changes key in the return dictionary to the result of the modname.check function (just as we did in step 4). The assumption here is that the information-gathering function will return a dictionary explaining what changes need to be made. This may or may not fit your use case.
7. Set the return data and return!

def myfunc():
    if ret["changes"]:
        ret["comment"] = "{0} failed to install".format(name)
    else:
        ret["result"] = True
        ret["comment"] = "{0} was installed".format(name)
    return ret

Using Custom State ModulesBefore the state module can be used, it must be distributed to minions. This can be done by placing them into salt://_states/. They can then be distributed manually to minions by running saltutil.sync_states or saltutil.sync_all. Alternatively, when running a highstate custom types will automatically be synced. NOTE: Writing state modules with hyphens in the filename will cause issues with !pyobjects routines. Best practice is to stick to underscores. Any custom states which have been synced to a minion, that are named the same as one of Salt's default set of states, will take the place of the default state with the same name. Note that a state module's name defaults to one based on its filename (i.e. foo.py becomes state module foo), but that its name can be overridden by using a __virtual__ function. Cross Calling Execution Modules from StatesAs with Execution Modules, State Modules can also make use of the __salt__ and __grains__ data. See cross calling execution modules. It is important to note that the real work of state management should not be done in the state module unless it is needed. A good example is the pkg state module. This module does not do any package management work; it just calls the pkg execution module. This makes the pkg state module completely generic, which is why there is only one pkg state module and many backend pkg execution modules. On the other hand, some modules will require that the logic be placed in the state module; a good example of this is the file module. But in the vast majority of cases this is not the best approach, and writing specific execution modules to do the backend work will be the optimal solution. Cross Calling State ModulesAll of the Salt state modules are available to each other and state modules can call functions available in other state modules. The variable __states__ is packed into the modules after they are loaded into the Salt minion. The __states__ variable is a Python dictionary containing all of the state modules. Dictionary keys are strings representing the names of the modules and the values are the functions themselves. Salt state modules can be cross-called by accessing the value in the __states__ dict: ret = __states__["file.managed"](name="/tmp/myfile", source="salt://myfile") This code will call the managed function in the file state module and pass the arguments name and source to it. Return DataA State Module must return a dict containing the following keys/values: name (the same value passed into the state as the name argument), result (True if the action was successful, False if it failed, or None for pending changes in test mode), changes (a dict describing the changes made, empty if no changes were made), and comment (a string summarizing the result).
ret["changes"].update({"my_pkg_name": {"old": "", "new": "my_pkg_name-1.0"}})
Test mode does not predict if the changes will be
successful or not, and hence the result for pending changes is usually
None.
However, if a state is going to fail and this can be determined in test mode without applying the change, False can be returned.
NOTE: States should not return data which cannot be serialized
such as frozensets.
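Putting those keys together, a sketch of a typical successful return (the package name and version are illustrative):

ret = {
    "name": "my_pkg_name",
    "result": True,
    "changes": {"my_pkg_name": {"old": "", "new": "my_pkg_name-1.0"}},
    "comment": "my_pkg_name was installed",
}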
Sub State RunsSome states can return multiple state runs from an external engine. State modules that extend tools like Puppet, Chef, Ansible, and idem can run multiple external states and then return their results individually in the "sub_state_run" portion of their return, as long as their individual state runs are formatted like salt states with low and high data. For example, the idem state module can execute multiple idem states via its runtime and report the status of all those runs by attaching them to "sub_state_run" in its state return. These sub_state_runs will be formatted and printed alongside other salt states. Example:
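A sketch of such a return (the names, timings, and low data are illustrative; each entry in sub_state_run mirrors a normal state return, plus a low key describing the external state):

state_return = {
    "name": "external_states",
    "result": True,
    "changes": {},
    "comment": "",
    "sub_state_run": [
        {
            "name": "external_state_name",
            "result": True,
            "changes": {},
            "comment": "",
            "duration": 10.0,
            "start_time": "08:32:45.594366",
            "low": {
                "name": "external_state_name",
                "state": "external_state",
                "__id__": "external_state_name",
                "fun": "run",
            },
        }
    ],
}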
Test StateAll states should check for and support test being passed in the options. This will return data about what changes would occur if the state were actually run. An example of such a check is sketched after the following note. Make sure to test and return before performing any real actions on the minion. NOTE: Be sure to refer to the result table listed above
and displaying any possible changes when writing support for test.
Looking for changes in a state is essential to test=true functionality.
If a state is predicted to have no changes when test=true (or test:
true in a config file) is used, then the result of the final state
should not be None.
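A minimal sketch of such a test check (the comment text is illustrative):

def myfunc():
    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "The state of {0} would be changed".format(name)
        return ret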
Watcher FunctionIf the state being written should support the watch requisite then a watcher function needs to be declared. The watcher function is called whenever the watch requisite is invoked and should be generic to the behavior of the state itself. The watcher function should accept all of the options that the normal state functions accept (as they will be passed into the watcher function). A watcher function is typically used to execute state-specific reactive behavior; for instance, the watcher for the service module restarts the named service, making it possible for the service to react to changes in the environment. The watcher function also needs to return the same data that a normal state function returns. Mod_init InterfaceSome states need to execute something only once to ensure that an environment has been set up, or certain conditions global to the state behavior can be predefined. This is the realm of the mod_init interface. A state module can have a function called mod_init which executes when the first state of this type is called. This interface was created primarily to improve the pkg state. When packages are installed the package metadata needs to be refreshed, but refreshing the package metadata every time a package is installed is wasteful. The mod_init function for the pkg state sets a flag down so that the first, and only the first, package installation attempt will refresh the package database (the package database can of course be manually refreshed via the refresh option in the pkg state). The mod_init function must accept the Low State Data for the given executing state as an argument. The low state data is a dict and can be seen by executing the state.show_lowstate function. Then the mod_init function must return a bool. If the return value is True, then the mod_init function will not be executed again, meaning that the needed behavior has been set up. Otherwise, if the mod_init function returns False, then the function will be called again the next time. A good example of the mod_init function is found in the pkg state module: def mod_init(low): The mod_init function in the pkg state accepts the low state data as low and then checks to see if the function being called is going to install packages; if the function is not going to install packages then there is no need to refresh the package database. Therefore, if the package database is prepared to refresh, then return True and the mod_init will not be called the next time a pkg state is evaluated, otherwise return False and the mod_init will be called next time a pkg state is evaluated. Log OutputYou can call the logger from custom modules to write messages to the minion logs. The following code snippet demonstrates writing log messages: import logging
log = logging.getLogger(__name__)
log.info("Here is Some Information")
log.warning("You Should Not Do That")
log.error("It Is Busted")
Strings and UnicodeA state module author should always assume that strings fed to the module have already been decoded into Unicode. In Python 2, these will be of type unicode and in Python 3 they will be of type str. Calling from a state to other Salt sub-systems, such as execution modules, should pass Unicode (or bytes if passing binary data). In the rare event that a state needs to write directly to disk, Unicode should be encoded to a string immediately before writing to disk. An author may use __salt_system_encoding__ to learn what the encoding type of the system is. For example, 'my_string'.encode(__salt_system_encoding__). Full State Module ExampleThe following is a simplistic example of a full state module and function. Remember to call out to execution modules to perform all the real work. The state module should only perform "before" and "after" checks.
salt '*' saltutil.sync_states
human_friendly_state_id: # An arbitrary state ID declaration. Example state module:

import salt.exceptions


def enforce_custom_thing(name, foo, bar=True):
    ret = {"name": name, "changes": {}, "result": False, "comment": ""}
    if not isinstance(bar, bool):
        raise salt.exceptions.SaltInvocationError("Argument 'bar' must be a boolean")
    # Gather information, honor __opts__["test"], and apply any needed
    # changes by calling out to execution modules via __salt__.
    return ret

State ManagementState management, also frequently called Software Configuration Management (SCM), is a program that puts and keeps a system in a predetermined state. It installs software packages, starts or restarts services, or puts configuration files in place and watches them for changes. Having a state management system in place allows one to easily and reliably configure and manage a few servers or a few thousand servers. It allows configurations to be kept under version control. Salt States are an extension of the Salt Modules that we discussed in the previous remote execution tutorial. Instead of calling one-off executions, the state of a system can be easily defined and then enforced. Understanding the Salt State System ComponentsThe Salt state system is comprised of a number of components. As a user, an understanding of the SLS and renderer systems is needed. But as a developer, an understanding of Salt states and how to write the states is needed as well. NOTE: States are compiled and executed only on minions that
have been targeted. To execute functions directly on masters, see
runners.
Salt SLS SystemThe primary system used by the Salt state system is the SLS system. SLS stands for SaLt State. The Salt States are files which contain the information about how to configure Salt minions. The states are laid out in a directory tree and can be written in many different formats. The contents of the files and the way they are laid out are intended to be as simple as possible while allowing for maximum flexibility. The files are laid out in a state tree and contain information about how the minion needs to be configured. SLS File LayoutSLS files are laid out in the Salt file server. A simple layout can look like this: top.sls ssh.sls sshd_config users/init.sls users/admin.sls salt/master.sls web/init.sls The top.sls file is a key component. The top.sls file is used to determine which SLS files should be applied to which minions. The rest of the files with the .sls extension in the above example are state files. Files without a .sls extension are seen by the Salt master as files that can be downloaded to a Salt minion. States are translated into dot notation. For example, the ssh.sls file is seen as the ssh state and the users/admin.sls file is seen as the users.admin state. Files named init.sls are translated to be the state name of the parent directory, so the web/init.sls file translates to the web state. In Salt, everything is a file; there is no "magic translation" of files and file types. This means that a state file can be distributed to minions just like a plain text or binary file. SLS FilesThe Salt state files are simple sets of data. Since SLS files are just data they can be represented in a number of different ways. The default format is YAML generated from a Jinja template. This allows for the state files to have all the language constructs of Python and the simplicity of YAML. State files can then be complicated Jinja templates that translate down to YAML, or just plain and simple YAML files. The State files are simply common data structures such as dictionaries and lists, constructed using a templating language such as YAML. Here is an example of a Salt State:

vim:
  pkg.installed: []

salt:
  pkg.latest:
    - name: salt
  service.running:
    - names:
      - salt-master
      - salt-minion
    - require:
      - pkg: salt
    - watch:
      - file: /usr/local/etc/salt/minion

/usr/local/etc/salt/minion:
  file.managed:
    - source: salt://salt/minion
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: salt

This short stanza will ensure that vim is installed, Salt is installed and up to date, the salt-master and salt-minion daemons are running and the Salt minion configuration file is in place. It will also ensure everything is deployed in the right order and that the Salt services are restarted when the watched file is updated. The Top FileThe top file controls the mapping between minions and the states which should be applied to them. The top file specifies which minions should have which SLS files applied and which environments they should draw those SLS files from. The top file works by specifying environments on the top-level. Each environment contains target expressions to match minions. Finally, each target expression contains a list of Salt states to apply to matching minions:

base:
  '*':
    - salt
    - users
    - users.sudo
  'saltmaster.*':
    - match: pcre
    - salt.master

The above example uses the base environment which is built into the default Salt setup. The base environment has two target expressions. The first one matches all minions, and the SLS files below it apply to all minions. The second expression is a regular expression that will match all minions with an ID matching saltmaster.* and specifies that for those minions, the salt.master state should be applied. IMPORTANT: Since version 2014.7.0, the default matcher (when one is
not explicitly defined as in the second expression in the above example) is
the compound matcher. Since this matcher parses individual words in the
expression, minion IDs containing spaces will not match properly using this
matcher. Therefore, if your target expression is designed to match a minion ID
containing spaces, it will be necessary to specify a different match type
(such as glob). For example:
base:
  'minion 1':
    - match: glob
    - foo

A full table of match types available in the top file can be found here. Reloading ModulesSome Salt states require that specific packages be installed in order for the module to load. As an example the pip state module requires the pip package for proper name and version parsing. In most of the common cases, Salt is clever enough to transparently reload the modules. For example, if you install a package, Salt reloads modules because some other module or state might require just that package which was installed. In some edge cases, Salt might need to be told to reload the modules. Consider the following state file which we'll call pep8.sls:

python-pip:
  cmd.run:
    - name: /usr/bin/easy_install --script-dir=/usr/bin -U pip

pep8:
  pip.installed:
    - require:
      - cmd: python-pip

The above example installs pip using easy_install from setuptools and installs pep8 using pip, which, as noted earlier, requires pip to be installed system-wide. Let's execute this state: salt-call state.apply pep8 The execution output would be something like: ---------- If we executed the state again the output would be: ---------- Since we installed pip using cmd, Salt has no way to know that a system-wide package was installed. On the second execution, since the required pip package was installed, the state executed correctly. NOTE: Salt does not reload modules on every state run because
doing so would greatly slow down state execution.
So how do we solve this edge case? With reload_modules. reload_modules is a boolean option recognized by Salt on all available states which forces Salt to reload its modules once a given state finishes. The modified state file would now be:

python-pip:
  cmd.run:
    - name: /usr/bin/easy_install --script-dir=/usr/bin -U pip
    - reload_modules: true

pep8:
  pip.installed:
    - require:
      - cmd: python-pip

Let's run it once: salt-call state.apply pep8 The output is: ---------- RETURN CODESWhen the salt or salt-call CLI commands result in an error, the command will exit with a return code of 1. Error cases consist of the following:
Retcode PassthroughIn addition to the cases listed above, if a state or remote-execution function sets a nonzero value in the retcode key of the __context__ dictionary, the command will exit with a return code of 1. For those developing custom states and execution modules, using __context__['retcode'] can be a useful way of signaling that an error has occurred:

if something_went_wrong:
    __context__["retcode"] = 42

This is actually how states signal that they have failed. Different cases result in different codes being set in the __context__ dictionary:
When the --retcode-passthrough flag is used with salt-call, then salt-call will exit with whichever retcode was set in the __context__ dictionary, rather than the default behavior which simply exits with 1 for any error condition. UTILITY MODULES - CODE REUSE IN CUSTOM MODULESNew in version 2015.5.0. Changed in version 2016.11.0: These can now be synced to the Master for use in custom Runners, and in custom execution modules called within Pillar SLS files. When extending Salt by writing custom state modules, execution modules, etc., sometimes there is a need for a function to be available to more than just one kind of custom module. For these cases, Salt supports what are called "utility modules". These modules are like normal execution modules, but instead of being invoked in Salt code using __salt__, the __utils__ prefix is used instead. For example, assuming the following simple utility module, saved to salt://_utils/foo.py

# -*- coding: utf-8 -*-
"""
My utils module
---------------

This module contains common functions for use in my other custom types.
"""


def bar():
    return "baz"

Once synced to a minion, this function would be available to other custom Salt types like so:

# -*- coding: utf-8 -*-
"""
My awesome execution module
---------------------------
"""


def observe_the_awesomeness():
    return __utils__["foo.bar"]()

Utility modules, like any other kind of Salt extension, support using a __virtual__ function to conditionally load them, or load them under a different namespace. For instance, if the utility module above were named salt://_utils/mymodule.py it could be made to be loaded as the foo utility module with a __virtual__ function.

# -*- coding: utf-8 -*-
"""
My utils module
---------------

This module contains common functions for use in my other custom types.
"""


def __virtual__():
    return "foo"

New in version 2018.3.0: Instantiating objects from classes declared in util modules works with Master side modules, such as Runners, Outputters, etc. You can even write your utility modules in an object-oriented fashion:

# -*- coding: utf-8 -*-
"""
My OOP-style utils module
-------------------------

This module contains common functions for use in my other custom types.
"""


class Foo(object):
    def __init__(self):
        pass

    def get_foo(self):
        return "baz"

And import them into other custom modules:

# -*- coding: utf-8 -*-
"""
My awesome execution module
---------------------------
"""

import mymodule


def observe_the_awesomeness():
    return mymodule.Foo().get_foo()

These are, of course, contrived examples, but they should serve to show some of the possibilities opened up by writing utility modules. Keep in mind though that states still have access to all of the execution modules, so it is not necessary to write a utility module to make a function available to both a state and an execution module. One good use case for utility modules is one where it is necessary to invoke the same function from a custom outputter/returner, as well as an execution module. Utility modules placed in salt://_utils/ will be synced to the minions when a highstate is run, as well as when any of the following Salt functions are called: saltutil.sync_utils and saltutil.sync_all.
As of the 2019.2.0 release, as well as 2017.7.7 and 2018.3.2 in their respective release cycles, the sync argument to state.apply/state.sls can be used to sync custom types when running individual SLS files. To sync to the Master, use either of the following: salt-run saltutil.sync_utils or salt-run saltutil.sync_all.
EVENTS & REACTOREvent SystemThe Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt. The event system uses a publish-subscribe pattern, otherwise known as pub/sub. Event BusThe event system is made up of two primary components, which make up the concept of an Event Bus: the event sockets, which publish events, and the event library, which can listen to events and send events into the salt system.
Events are published onto the event bus and event bus subscribers listen for the published events. The event bus is used for both inter-process communication as well as network transport in Salt. Inter-process communication is provided through UNIX domain sockets (UDS). The Salt Master and each Salt Minion have their own event bus. Event typesSalt Master EventsThese events are fired on the Salt Master event bus. This list is not comprehensive. Authentication events
NOTE: Minions fire auth events on a fairly regular basis for a
number of reasons. Writing reactors to respond to events through the auth
cycle can lead to infinite reactor event loops (minion tries to auth, reactor
responds by doing something that generates another auth event, minion sends
auth event, etc.). Consider reacting to salt/key or
salt/minion/<MID>/start or firing a custom event tag
instead.
Start events
Key events
WARNING: If a master is in auto_accept mode,
salt/key events will not be fired when the keys are accepted. In
addition, pre-seeding keys (as happens through Salt-Cloud) will not
cause firing of these events.
Job events
Runner Events
Presence Events
Cloud EventsUnlike other Master events, salt-cloud events are not fired on behalf of a Salt Minion. Instead, salt-cloud events are fired on behalf of a VM. This is because the minion-to-be may not yet exist, or may have been destroyed. This behavior is reflected by the name variable in the event data for salt-cloud events as compared to the id variable for Salt Minion-triggered events.
Listening for EventsSalt's event system is used heavily within Salt and it is also written to integrate heavily with existing tooling and scripts. There are a variety of ways to consume it. From the CLIThe quickest way to watch the event bus is by calling the state.event runner: salt-run state.event pretty=True That runner is designed to interact with the event bus from external tools and shell scripts. See the documentation for more examples. Remotely via the REST APISalt's event bus can be consumed via salt.netapi.rest_cherrypy.app.Events as an HTTP stream from external tools or services. curl -SsNk https://salt-api.example.com:8000/events?token=05A3 From PythonThe event system is accessed via the event library, and the event bus can only be accessed by the same system user that Salt is running as. To listen to events, a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location where the Salt Unix sockets are kept. In the configuration this is the sock_dir option. The sock_dir option defaults to "/var/run/salt/master" on most systems. The following code will check for a single event: import salt.config
import salt.utils.event
opts = salt.config.client_config("/usr/local/etc/salt/master")
event = salt.utils.event.get_event("master", sock_dir=opts["sock_dir"], opts=opts)
data = event.get_event()
Events will also use a "tag". Tags allow for events to be filtered by prefix. By default all events will be returned. If only authentication events are desired, then pass the tag "salt/auth". The get_event method has a default poll time of 5 seconds. To change this time, set the "wait" option. The following example will only listen for auth events and will wait for 10 seconds instead of the default 5. data = event.get_event(wait=10, tag="salt/auth") To retrieve the tag as well as the event data, pass full=True: evdata = event.get_event(wait=10, tag="salt/job", full=True) tag, data = evdata["tag"], evdata["data"] Instead of looking for a single event, the iter_events method can be used to make a generator which will continually yield salt events. The iter_events method also accepts a tag but not a wait time:

for data in event.iter_events(tag="salt/auth"):
    print(data)

And finally event tags can be globbed, just as they can be in the Reactor, using the fnmatch library.

import fnmatch
import salt.config
import salt.utils.event
opts = salt.config.client_config("/usr/local/etc/salt/master")
sevent = salt.utils.event.get_event("master", sock_dir=opts["sock_dir"], opts=opts)
while True:
    ret = sevent.get_event(full=True)
    if ret is None:
        continue
    if fnmatch.fnmatch(ret["tag"], "salt/job/*/ret/*"):
        # do_something_with_job_return is a placeholder for your own handler
        do_something_with_job_return(ret["data"])
Firing EventsIt is possible to fire events on either the minion's local bus or to fire events intended for the master. To fire a local event from the minion on the command line call the event.fire execution function: salt-call event.fire '{"data": "message to be sent in the event"}' 'tag'
To fire an event to be sent up to the master from the minion call the event.send execution function. Remember YAML can be used at the CLI in function arguments: salt-call event.send 'myco/mytag/success' '{success: True, message: "It works!"}'
If a process is listening on the minion, it may be useful for a user on the master to fire an event to it. An example of listening for local events on a minion on a non-Windows system: # Job on minion
import salt.config
import salt.utils.event
opts = salt.config.minion_config("/usr/local/etc/salt/minion")
event = salt.utils.event.MinionEvent(opts)
for evdata in event.iter_events(match_type="regex", tag="custom/.*"):
    # do your processing here...
    ...
And an example of listening for local events on a Windows system:

# Job on minion
import salt.config
import salt.minion
import salt.utils.event

opts = salt.config.minion_config(salt.minion.DEFAULT_MINION_OPTS)
event = salt.utils.event.MinionEvent(opts)

for evdata in event.iter_events(match_type="regex", tag="custom/.*"):
    # do your processing here...
    ...

A user on the master can then fire an event to the listening minion:

salt minionname event.fire '{"data": "message for the minion"}' 'customtag/african/unladen'
Firing Events from PythonFrom Salt execution modulesEvents can be very useful when writing execution modules, in order to inform various processes on the master when a certain task has taken place. This is easily done using the normal cross-calling syntax:

# /usr/local/etc/salt/states/_modules/my_custom_module.py
def do_something():
    # do something!
    __salt__["event.send"](
        "myco/my_custom_module/finished",
        {"finished": True, "message": "The something is finished!"},
    )

From Custom Python ScriptsFiring events from custom Python code is quite simple and mirrors how it is done at the CLI:

import salt.client

caller = salt.client.Caller()
ret = caller.cmd(
    "event.send", "myco/event/success", {"success": True, "message": "It works!"}
)

BeaconsBeacons let you use the Salt event system to monitor non-Salt processes. The beacon system allows the minion to hook into a variety of system processes and continually monitor these processes. When monitored activity occurs in a system process, an event is sent on the Salt event bus that can be used to trigger a reactor. Salt beacons can currently monitor and send Salt events for many system activities, including file system changes, system load, service status, shell activity (such as user login), and network and disk usage.
See beacon modules for a current list. NOTE: Salt beacons are an event generation mechanism. Beacons
leverage the Salt reactor system to make changes when beacon events
occur.
Configuring BeaconsSalt beacons do not require any changes to the system components that are being monitored; everything is configured using Salt. Beacons are typically enabled by placing a beacons: top-level block in /usr/local/etc/salt/minion or any file in /usr/local/etc/salt/minion.d/, such as /usr/local/etc/salt/minion.d/beacons.conf, or by adding it to the pillar for that minion:

beacons:
  inotify:
    - files:
        /etc/important_file: {}
        /opt: {}

The beacon system, like many others in Salt, can also be configured via the minion pillar, grains, or local config file. NOTE: The inotify beacon only works on OSes that have
inotify kernel support. Currently this excludes FreeBSD, macOS, and
Windows.
All beacon configuration is done using list-based configuration. New in version Neon. Multiple copies of a particular Salt beacon can be configured by including the beacon_module parameter in the beacon configuration:

beacons:
  watch_important_file:
    - files:
        /etc/important_file: {}
    - beacon_module: inotify

Beacon Monitoring IntervalBeacons monitor on a 1-second interval by default. To set a different interval, provide an interval argument to a beacon. The following beacons run on 5- and 10-second intervals:

beacons:
  inotify:
    - files:
        /etc/important_file: {}
    - interval: 5
  load:
    - averages:
        1m:
          - 0.0
          - 2.0
    - interval: 10

Avoiding Event LoopsIt is important to carefully consider the possibility of creating a loop between a reactor and a beacon. For example, one might set up a beacon which monitors whether a file is read which in turn fires a reactor to run a state which in turn reads the file and re-fires the beacon. To avoid these types of scenarios, the disable_during_state_run argument may be set (see the sketch after the following note). If a state run is in progress, the beacon will not be run on its regular interval until the minion detects that the state run has completed, at which point the normal beacon interval will resume. NOTE: For beacon writers: If you need extra stuff to happen,
like closing file handles for the disable_during_state_run to actually
work, you can add a close() function to the beacon to run those extra
things. See the inotify beacon.
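A sketch of the disable_during_state_run configuration described above (the watched file path is illustrative):

beacons:
  inotify:
    - files:
        /etc/important_file: {}
    - disable_during_state_run: True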
Beacon ExampleThis example demonstrates configuring the inotify beacon to monitor a file for changes, and then restores the file to its original contents if a change was made. NOTE: The inotify beacon requires Pyinotify on the minion,
install it using salt myminion pkg.install python-inotify.
Create Watched FileCreate the file named /etc/important_file and add some simple content:

important_config: True

Add Beacon Configs to MinionOn the Salt minion, add the following configuration to /usr/local/etc/salt/minion.d/beacons.conf:

beacons:
  inotify:
    - files:
        /etc/important_file:
          mask:
            - modify
    - disable_during_state_run: True

Save the configuration file and restart the minion service. The beacon is now set up to notify salt upon modifications made to the file. NOTE: The disable_during_state_run: True parameter
prevents the inotify beacon from generating reactor events due to salt
itself modifying the file.
View Events on the MasterOn your Salt master, start the event runner using the following command: salt-run state.event pretty=true This runner displays events as they are received by the master on the Salt event bus. To test the beacon you set up in the previous section, make and save a modification to /etc/important_file. You'll see an event similar to the following on the event bus:
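A sketch of such an event (the timestamp, minion ID, and inotify change type are illustrative):

{
 "_stamp": "2015-09-09T15:59:37.972753",
 "data": {
     "change": "IN_IGNORED",
     "id": "minion1",
     "path": "/etc/important_file"
 },
 "tag": "salt/beacon/minion1/inotify//etc/important_file"
}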
This indicates that the event is being captured and sent correctly. Now you can create a reactor to take action when this event occurs. Create a ReactorThis reactor reverts the file named /etc/important_file to the contents provided by salt each time it is modified. Reactor SLSOn your Salt master, create a file named /srv/reactor/revert.sls. NOTE: If the /srv/reactor directory doesn't exist,
create it.
mkdir -p /srv/reactor

Add the following to /srv/reactor/revert.sls:

revert-file:
  local.state.apply:
    - tgt: {{ data['data']['id'] }}
    - arg:
      - maintain_important_file

NOTE: In addition to setting
disable_during_state_run: True for an inotify beacon whose reaction is
to modify the watched file, it is important to ensure the state applied is
also idempotent.
State SLSCreate the state sls file referenced by the reactor sls file. This state file will be located at /usr/local/etc/salt/states/maintain_important_file.sls.

important_file:
  file.managed:
    - name: /etc/important_file
    - contents: |
        important_config: True

Master ConfigConfigure the master to map the inotify beacon event to the revert reaction in /usr/local/etc/salt/master.d/reactor.conf:

reactor:
  - salt/beacon/*/inotify//etc/important_file:
    - /srv/reactor/revert.sls

NOTE: You can have only one top level reactor section,
so if one already exists, add this code to the existing section. See
here to learn more about reactor SLS syntax.
Start the Salt Master in Debug ModeTo help with troubleshooting, start the Salt master in debug mode: service salt-master stop salt-master -l debug When debug logging is enabled, event and reactor data are displayed so you can discover syntax and other issues. Trigger the ReactorOn your minion, make and save another change to /etc/important_file. On the Salt master, you'll see debug messages that indicate the event was received and the state.apply job was sent. When you inspect the file on the minion, you'll see that the file contents have been restored to important_config: True. All beacons are configured using a similar process of enabling the beacon, writing a reactor SLS (and state SLS if needed), and mapping a beacon event to the reactor SLS. Writing Beacon PluginsBeacon plugins use the standard Salt loader system, meaning that many of the constructs from other plugin systems hold true, such as the __virtual__ function. The important function in the Beacon Plugin is the beacon function. When the beacon is configured to run, this function will be executed repeatedly by the minion. The beacon function therefore cannot block and should be as lightweight as possible. The beacon also must return a list of dicts; each dict in the list will be translated into an event on the master. Beacons may also choose to implement a validate function which takes the beacon configuration as an argument and ensures that it is valid prior to continuing. This function is called automatically by the Salt loader when a beacon is loaded. Please see the inotify beacon as an example. The beacon FunctionThe beacons system will look for a function named beacon in the module. If this function is not present then the beacon will not be fired. This function is called on a regular basis and defaults to being called on every iteration of the minion, which can be tens to hundreds of times a second. This means that the beacon function cannot block and should not be CPU or IO intensive. The beacon function will be passed the configuration for the executed beacon. This makes it easy to establish a flexible configuration for each called beacon. This is also the preferred way to ingest the beacon's configuration as it allows for the configuration to be dynamically updated while the minion is running by configuring the beacon in the minion's pillar. The Beacon ReturnThe information returned from the beacon is expected to follow a predefined structure. The returned value needs to be a list of dictionaries (standard python dictionaries are preferred, no ordered dicts are needed). The dictionaries represent individual events to be fired on the minion and master event buses. Each dict is a single event. The dict can contain any arbitrary keys but the 'tag' key will be extracted and added to the tag of the fired event. The return data structure would look something like this:
Calling Execution ModulesExecution modules are still the preferred location for all work and system interaction to happen in Salt. For this reason the __salt__ variable is available inside the beacon. Please be careful when calling functions in __salt__, while this is the preferred means of executing complicated routines in Salt not all of the execution modules have been written with beacons in mind. Watch out for execution modules that may be CPU intense or IO bound. Please feel free to add new execution modules and functions to back specific beacons. Distributing Custom BeaconsCustom beacons can be distributed to minions via the standard methods, see Modular Systems. Reactor SystemSalt's Reactor system gives Salt the ability to trigger actions in response to an event. It is a simple interface to watching Salt's event bus for event tags that match a given pattern and then running one or more commands in response. This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed. Event SystemA basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations. The event system fires events with a very specific criteria. Every event has a tag. Event tags allow for fast top-level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dictionary, which contains information about the event. Mapping Events to Reactor SLS FilesReactor SLS files and event tags are associated in the master config file. By default this is /usr/local/etc/salt/master, or /etc/salt/master.d/reactor.conf. New in version 2014.7.0: Added Reactor support for salt:// file paths. In the master config section 'reactor:' is a list of event tags to be matched and each event tag has a list of reactor SLS files to be run. reactor: # Master config section "reactor" NOTE: In the above example, salt://reactor/mycustom.sls
refers to the base environment. To pull this file from a different
environment, use the querystring syntax (e.g.
salt://reactor/mycustom.sls?saltenv=reactor).
Reactor SLS files are similar to State and Pillar SLS files. They are by default YAML + Jinja templates and are passed familiar context variables. Click here for more detailed information on the variables available in Jinja templating. Here is the SLS for a simple reaction: {% if data['id'] == 'mysql1' %}
highstate_run:
  local.state.apply:
    - tgt: mysql1
{% endif %}
This simple reactor file uses Jinja to further refine the reaction to be made. If the id in the event data is mysql1 (in other words, if the name of the minion is mysql1) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system. The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the mysql1 minion with a function of state.apply, which performs a highstate. Similarly, a runner can be called: {% if data['data']['custom_var'] == 'runit' %}
call_runit_orch:
  runner.state.orchestrate:
    - args:
      - mods: orchestrate.runit
{% endif %}
This example will execute the state.orchestrate runner and initiate an execution of the runit orchestrator located at /usr/local/etc/salt/states/orchestrate/runit.sls. Types of ReactionsThe available reaction types are local (runs a remote-execution function on targeted minions), runner (executes a runner function on the master), wheel (executes a wheel function on the master), and caller (runs a remote-execution function on a masterless minion); each is described in more detail below.
NOTE: The local and caller reaction types will
likely be renamed in a future release. These reaction types were named after
Salt's internal client interfaces, and are not intuitively named. Both
local and caller will continue to work in Reactor SLS files,
however.
Where to Put Reactor SLS FilesReactor SLS files can come both from files local to the master, and from any of the backends enabled via the fileserver_backend config option. Files placed in the Salt fileserver can be referenced using a salt:// URL, just like they can in State SLS files. It is recommended to place reactor and orchestrator SLS files in their own uniquely-named subdirectories such as orch/, orchestrate/, react/, reactor/, etc., to keep them organized. Writing Reactor SLSThe different reaction types were developed separately and have historically had different methods for passing arguments. For the 2017.7.2 release a new, unified configuration schema has been introduced, which applies to all reaction types. The old config schema will continue to be supported, and there is no plan to deprecate it at this time. Local ReactionsA local reaction runs a remote-execution function on the targeted minions. The old config schema required the positional and keyword arguments to be manually separated by the user under arg and kwarg parameters. However, this is not very user-friendly, as it forces the user to distinguish which type of argument is which, and make sure that positional arguments are ordered properly. Therefore, the new config schema is recommended if the master is running a supported release. The below two examples are equivalent:
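A sketch of the two equivalent forms, assuming a reaction that runs state.single to install the zsh package (this matches the Salt command shown below; the state ID install_zsh is illustrative):

# New config schema
install_zsh:
  local.state.single:
    - tgt: 'kernel:Linux'
    - tgt_type: grain
    - args:
      - fun: pkg.installed
      - name: zsh
      - fromrepo: updates

# Old config schema
install_zsh:
  local.state.single:
    - tgt: 'kernel:Linux'
    - tgt_type: grain
    - arg:
      - pkg.installed
      - zsh
    - kwarg:
        fromrepo: updates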
This reaction would be equivalent to running the following Salt command: salt -G 'kernel:Linux' state.single pkg.installed name=zsh fromrepo=updates NOTE: Any other parameters in the
LocalClient().cmd_async() method can be passed at the same indentation
level as tgt.
NOTE: tgt_type is only required when the target
expression defined in tgt uses a target type other than a minion
ID glob.
The tgt_type argument was named expr_form in releases prior to 2017.7.0.

Runner Reactions

Runner reactions execute runner functions locally on the master. The old config schema called for passing arguments to the reaction directly under the name of the runner function. However, this can cause unpredictable interactions with the Reactor system's internal arguments. It is also possible to pass positional and keyword arguments under arg and kwarg as in local reactions, but as noted above this is not very user-friendly. Therefore, the new config schema is recommended if the master is running a supported release.

NOTE: State ids of reactors for runners and wheels should all
be unique. They can overwrite each other when added to the async queue causing
lost reactions.
The below two examples are equivalent:
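A sketch of such a runner reaction under the new schema (the ID deploy_app and the orchestration file are illustrative), forwarding the event tag and data as Pillar:

   deploy_app:
     runner.state.orchestrate:
       - args:
         - mods: orchestrate.deploy_app
         - pillar:
             event_tag: {{ tag }}
             event_data: {{ data['data']|json }}

Under the old schema, the same arguments would be passed directly under the runner function name rather than inside args.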
Assuming that the event tag is foo, and the data passed to the event is {'bar': 'baz'}, then this reaction is equivalent to running the following Salt command: salt-run state.orchestrate mods=orchestrate.deploy_app pillar='{"event_tag": "foo", "event_data": {"bar": "baz"}}'
Wheel Reactions

Wheel reactions run wheel functions locally on the master. Like runner reactions, the old config schema called for wheel reactions to have arguments passed directly under the name of the wheel function (or in arg or kwarg parameters).

NOTE: State ids of reactors for runners and wheels should all
be unique. They can overwrite each other when added to the async queue causing
lost reactions.
The below two examples are equivalent:
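For instance, a wheel reaction which deletes a minion's key might look like this under the new schema (a sketch; the ID remove_key is illustrative):

   remove_key:
     wheel.key.delete:
       - args:
         - match: {{ data['id'] }}

Under the old schema the match argument would sit directly under the function name:

   remove_key:
     wheel.key.delete:
       - match: {{ data['id'] }}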
Caller Reactions

Caller reactions run remote-execution functions on a minion daemon's Reactor system. To run a Reactor on the minion, it is necessary to configure the Reactor Engine in the minion config file, and then set up your watched events in a reactor section in the minion config file as well.

NOTE: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.

Both the old and new config schemas involve passing arguments under an args parameter. However, the old config schema only supports positional arguments. Therefore, the new config schema is recommended if the masterless minion is running a supported release. The below two examples are equivalent:
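A sketch of the two variants (the ID touch_file is illustrative; both produce the command shown below). New schema:

   touch_file:
     caller.file.touch:
       - args:
         - name: /tmp/foo

Old schema (positional arguments only):

   touch_file:
     caller.file.touch:
       - args:
         - /tmp/foo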
This reaction is equivalent to running the following Salt command: salt-call file.touch name=/tmp/foo

Best Practices for Writing Reactor SLS Files

The Reactor works as follows: it watches Salt's event bus for new events, matches each event's tag against the list of event tags configured in the reactor section of the master config, renders the SLS files for any matches into a data structure representing one or more function calls, and hands that data structure to a pool of worker threads for execution.
Matching and rendering Reactor SLS files is done sequentially in a single process. For that reason, reactor SLS files should contain few individual reactions (one, if at all possible). Also, keep in mind that reactions are fired asynchronously (with the exception of caller) and do not support requisites. Complex Jinja templating that calls out to slow remote-execution or runner functions slows down the rendering and causes other reactions to pile up behind the current one. The worker pool is designed to handle complex and long-running processes such as orchestration jobs. Therefore, when complex tasks are in order, orchestration is a natural fit. Orchestration SLS files can be more complex and use requisites. Performing a complex task using orchestration lets the Reactor system fire off the orchestration job and proceed with processing other reactions.

Jinja Context

Reactor SLS files only have access to a minimal Jinja context. grains and pillar are not available. The salt object is available for calling remote-execution or runner functions, but it should be used sparingly and only for quick tasks for the reasons mentioned above. In addition to the salt object, the following variables are available in the Jinja context: tag (the tag of the event which triggered the reaction) and data (the event's data dictionary).
The data dict will contain an id key containing the minion ID, if the event was fired from a minion, and a data key containing the data passed to the event.

Advanced State System Capabilities

Reactor SLS files, by design, do not support requisites, ordering, onlyif/unless conditionals, and most other powerful constructs from Salt's State system. Complex master-side operations are best performed by Salt's Orchestrate system, so using the Reactor to kick off an Orchestrate run is a very common pairing. For example:

   # /usr/local/etc/salt/master.d/reactor.conf
   # A custom event containing: {"foo": "Foo!", "bar": "bar*", "baz": "Baz!"}
   reactor:
     - my/custom/event:
       - /srv/reactor/some_event.sls

   # /srv/reactor/some_event.sls
   invoke_orchestrate_file:
     runner.state.orchestrate:
       - args:
         - mods: orchestrate.do_complex_thing
         - pillar:
             event_tag: {{ tag }}
             event_data: {{ data|json }}

   # /usr/local/etc/salt/states/orchestrate/do_complex_thing.sls
   {% set tag = salt.pillar.get('event_tag') %}
   {% set data = salt.pillar.get('event_data') %}

   # Pass data from the event to a custom runner function.
   # The function expects a 'foo' argument.
   do_first_thing:
     salt.runner:
       - name: custom_runner.custom_function
       - foo: {{ data.foo }}
Beacons and Reactors

An event initiated by a beacon, when it arrives at the master, will be wrapped inside a second event, such that the data object containing the beacon information will be data['data'], rather than data. For example, to access the id field of the beacon event in a reactor file, you will need to reference {{ data['data']['id'] }} rather than {{ data['id'] }} as for events initiated directly on the event bus. Similarly, the data dictionary attached to the event would be located in {{ data['data']['data'] }} instead of {{ data['data'] }}. See the beacon documentation for examples.

Manually Firing an Event

From the Master

Use the event.send runner: salt-run event.send foo '{orchestrate: refresh}'

From the Minion

To fire an event to the master from a minion, call event.send: salt-call event.send foo '{orchestrate: refresh}'
To fire an event to the minion's local event bus, call event.fire: salt-call event.fire '{orchestrate: refresh}' foo
Referencing Data Passed in Events

Assuming any of the above examples, any reactor SLS files triggered by watching the event tag foo will execute with {{ data['data']['orchestrate'] }} equal to 'refresh'.

Getting Information About Events

The best way to see exactly what events have been fired and what data is available in each event is to use the state.event runner.

SEE ALSO: Common Salt Events

Example usage: salt-run state.event pretty=True

Example output:

   salt/job/20150213001905721678/new { ... }
Debugging the Reactor

The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and the rendered SLS file (or any errors generated while rendering the SLS file).
salt-master -l debug
   [DEBUG ] Gathering reactors for tag foo/bar
   [DEBUG ] Compiling reactions for tag foo/bar
   [DEBUG ] Rendered data from file: /path/to/the/reactor_file.sls:
   <... Rendered output appears here. ...>

The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it.

Passing Event Data to Minions or Orchestration as Pillar

An interesting trick to pass data from the Reactor SLS file to state.apply is to pass it as inline Pillar data since both functions take a keyword argument named pillar. The following example uses Salt's Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using salt-key. /usr/local/etc/salt/master.d/reactor.conf:

   reactor:
     - 'salt/key':
       - /usr/local/etc/salt/states/haproxy/react_new_minion.sls

The Reactor then fires a state.apply command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar. /usr/local/etc/salt/states/haproxy/react_new_minion.sls:

   {% if data['act'] == 'accept' and data['id'].startswith('web') %}
   add_new_minion_to_pool:
     local.state.apply:
       - tgt: 'haproxy*'
       - args:
         - mods: haproxy.refresh_pool
         - pillar:
             new_minion: {{ data['id'] }}
   {% endif %}

The above reaction is equivalent to the following command at the CLI: salt 'haproxy*' state.apply haproxy.refresh_pool pillar='{new_minion: minionid}'

This works with Orchestrate files as well:

   call_some_orchestrate_file:
     runner.state.orchestrate:
       - args:
         - mods: orchestrate.some_orchestrate_file
         - pillar:
             stuff: things

Which is equivalent to the following command at the CLI: salt-run state.orchestrate orchestrate.some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example grabs web server names and IP addresses from Salt Mine. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool, but with the disabled flag so that HAProxy won't yet direct traffic to it. /usr/local/etc/salt/states/haproxy/refresh_pool.sls:

   {% set new_minion = salt['pillar.get']('new_minion') %}

   listen web *:80
       balance source
       {% for server, ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() %}
       {% if server == new_minion %}
       server {{ server }} {{ ip }}:80 disabled
       {% else %}
       server {{ server }} {{ ip }}:80 check
       {% endif %}
       {% endfor %}
A Complete Example

In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.apply executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted.

Our master configuration will be rather simple. All minions that attempt to authenticate will match the tag of salt/auth. When it comes to the minion key being accepted, we get a more refined tag that includes the minion id, which we can use for matching. /usr/local/etc/salt/master.d/reactor.conf:

   reactor:
     - 'salt/auth':
       - /srv/reactor/auth-pending.sls
     - 'salt/minion/ink*/start':
       - /srv/reactor/auth-complete.sls

In this SLS file, we say that if the key was rejected we will delete the key on the master, and then also tell the master to SSH in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected. We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default. /srv/reactor/auth-pending.sls:

   {# Ink server failed to authenticate -- remove accepted key #}
   {% if not data['result'] and data['id'].startswith('ink') %}
   minion_remove:
     wheel.key.delete:
       - args:
         - match: {{ data['id'] }}
   minion_rejoin:
     local.cmd.run:
       - tgt: salt-master.domain.tld
       - args:
         - cmd: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
   {% endif %}

   {# Ink server is sending a new key -- accept this key #}
   {% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
   minion_add:
     wheel.key.accept:
       - args:
         - match: {{ data['id'] }}
   {% endif %}

No if statements are needed here because we already limited this action to just Ink servers in the master configuration. /srv/reactor/auth-complete.sls:

   {# When an Ink server connects, run state.apply. #}
   highstate_run:
     local.state.apply:
       - tgt: {{ data['id'] }}
       - ret: smtp
The above will also return the highstate result data using the smtp_return returner (specify the returner's virtual name, just as when using --return on the command line). The returner needs to be configured on the minion for this to work. See the salt.returners.smtp_return documentation for that.

Syncing Custom Types on Minion Start

Salt will sync all custom types (by running a saltutil.sync_all) on every highstate. However, there is a chicken-and-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for salt/minion/*/start events, which each minion fires when it first starts up and connects to the master. On the master, create /srv/reactor/sync_grains.sls with the following contents:

   sync_grains:
     local.saltutil.sync_grains:
       - tgt: {{ data['id'] }}

And in the master config file, add the following reactor configuration:

   reactor:
     - 'salt/minion/*/start':
       - /srv/reactor/sync_grains.sls

This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed. Other types can be synced by replacing local.saltutil.sync_grains with local.saltutil.sync_modules, local.saltutil.sync_all, or whatever else suits the intended use case. Also, if it is not desirable that every minion syncs on startup, the * can be replaced with a different glob to narrow down the set of minions which will match that reactor (e.g. salt/minion/appsrv*/start, which would only match minion IDs beginning with appsrv).

Reactor Tuning for Large-Scale Installations

The reactor uses a thread pool implementation contained inside salt.utils.process.ThreadPool. It uses Python's stdlib Queue to enqueue jobs which are picked up by standard Python threads. If the queue is full, False is simply returned by the firing method on the thread pool. As such, there are a few things to say about the selection of proper values for the reactor. For situations where it is expected that many long-running jobs might be executed by the reactor, reactor_worker_hwm should be increased or even set to 0 to bound it only by available memory. If set to zero, a close eye should be kept on memory consumption. If many long-running jobs are expected and execution concurrency and performance are a concern, you may also increase the value for reactor_worker_threads. This controls the number of concurrent threads which pull jobs from the queue and execute them. Obviously, this bears a relationship to the speed at which the queue itself will fill up. The price to pay for this value is that each thread will contain a copy of the Salt code needed to perform the requested action.

ORCHESTRATION

Orchestrate Runner

Executing states or a highstate on a minion is perfect when you want to ensure that the minion is configured and running the way you want. Sometimes, however, you want to configure a set of minions all at once. For example, if you want to set up a load balancer in front of a cluster of web servers you can ensure the load balancer is set up first, and then the same matching configuration is applied consistently across the whole cluster. Orchestration is the way to do this.

The Orchestrate Runner

New in version 0.17.0.

NOTE: Orchestrate Deprecates OverState
The Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with some advantages: all requisites and other global state arguments available in states can be used, and the states/functions will also work on salt-ssh minions.
The Orchestrate Runner replaced the OverState system in Salt 2015.8.0. The orchestrate runner generalizes the Salt state system to a Salt master context. Whereas the state.sls, state.highstate, et al. functions are concurrently and independently executed on each Salt minion, the state.orchestrate runner is executed on the master, giving it a master-level view and control over requisites, such as state ordering and conditionals. This allows for inter-minion requisites, like ordering the application of states on different minions that must not happen simultaneously, or for halting the state run on all minions if a minion fails one of its states. The state.sls, state.highstate, et al. functions allow you to statefully manage each minion, and the state.orchestrate runner allows you to statefully manage your entire infrastructure.

Writing SLS Files

Orchestrate SLS files are stored in the same location as State SLS files. This means that both file_roots and gitfs_remotes impact what SLS files are available to the reactor and orchestrator. It is recommended to keep reactor and orchestrator SLS files in their own uniquely named subdirectories such as _orch/, orch/, _orchestrate/, react/, _reactor/, etc. This will avoid duplicate naming and will help prevent confusion.

Executing the Orchestrate Runner

The Orchestrate Runner command format is the same as for the state.sls function, except that since it is a runner, it is executed with salt-run rather than salt. Assuming you have an SLS file called /usr/local/etc/salt/states/orch/webserver.sls, the following command, run on the master, will apply the states defined in that file: salt-run state.orchestrate orch.webserver

NOTE: state.orch is a synonym for
state.orchestrate
Changed in version 2014.1.1: The runner function was renamed to state.orchestrate to avoid confusion with the state.sls execution function. In versions 0.17.0 through 2014.1.0, state.sls must be used.

Masterless Orchestration

New in version 2016.11.0.

To support salt orchestration on masterless minions, the Orchestrate Runner is available as an execution module. The syntax for masterless orchestration is exactly the same, but it uses the salt-call command and the minion configuration must contain the file_client: local option. Alternatively, use salt-call --local on the command line. salt-call --local state.orchestrate orch.webserver

NOTE: Masterless orchestration supports only the
salt.state command in an sls file; it does not (currently) support the
salt.function command.
Examples

Function

To execute a function, use salt.function:

   # /usr/local/etc/salt/states/orch/cleanfoo.sls
   cmd.run:
     salt.function:
       - tgt: '*'
       - arg:
         - rm -rf /tmp/foo

   salt-run state.orchestrate orch.cleanfoo

If you omit the "name" argument, the ID of the state will be the default name, or, in the case of salt.function, the execution module function to run. You can specify the "name" argument to avoid conflicting IDs:

   copy_some_file:
     salt.function:
       - name: file.copy
       - tgt: '*'
       - arg:
         - /path/to/file
         - /tmp/copy_of_file

Fail Functions

When running a remote execution function in orchestration, certain return values for those functions may indicate failure, while the function itself doesn't set a return code. For those circumstances, using a "fail function" allows for a more flexible means of assessing success or failure. A fail function can be written as part of a custom execution module. The function should accept one argument, and return a boolean result. For example:

   def check_func_result(retval):
       # A sketch: evaluate the function's return data and report
       # a boolean result. The exact test depends on the function
       # being run; the key and value below are illustrative.
       return retval.get("some_key") == "some expected value"

The function can then be referenced in orchestration SLS like so:

   do_stuff:
     salt.function:
       - name: modname.funcname
       - tgt: '*'
       - fail_function: mymod.check_func_result

IMPORTANT: Fail functions run on the master, so they must be
synced using salt-run saltutil.sync_modules.
State

To execute a state, use salt.state:

   # /usr/local/etc/salt/states/orch/webserver.sls
   install_nginx:
     salt.state:
       - tgt: 'web*'
       - sls:
         - nginx

   salt-run state.orchestrate orch.webserver

Highstate

To run a highstate, set highstate: True in your state config:

   # /usr/local/etc/salt/states/orch/web_setup.sls
   webserver_setup:
     salt.state:
       - tgt: 'web*'
       - highstate: True

   salt-run state.orchestrate orch.web_setup

Runner

To execute another runner, use salt.runner. For example, to use the cloud.profile runner in your orchestration state with additional options to replace values in the configured profile, use this:

   # /usr/local/etc/salt/states/orch/deploy.sls
   create_instance:
     salt.runner:
       - name: cloud.profile
       - prof: cloud-centos
       - provider: cloud
       - instances:
         - server1
       - opts:
           minion:
             master: master1

To get a more dynamic state, use jinja variables together with inline pillar data. Using the same example but passing in pillar data, the state would look like this:

   # /usr/local/etc/salt/states/orch/deploy.sls
{% set servers = salt['pillar.get']('servers', 'test') %}
{% set master = salt['pillar.get']('master', 'salt') %}
   create_instance:
     salt.runner:
       - name: cloud.profile
       - prof: cloud-centos
       - provider: cloud
       - instances:
         - {{ servers }}
       - opts:
           minion:
             master: {{ master }}
To execute with pillar data: salt-run state.orch orch.deploy pillar='{"servers": "newsystem1",
"master": "mymaster"}'
Return Codes in Runner/Wheel Jobs

New in version 2018.3.0.

State (salt.state) jobs are able to report failure via the state return dictionary. Remote execution (salt.function) jobs are able to report failure by setting a retcode key in the __context__ dictionary. However, runner (salt.runner) and wheel (salt.wheel) jobs would only report a False result when the runner/wheel function raised an exception. As of the 2018.3.0 release, it is now possible to set a retcode in runner and wheel functions just as you can do in remote execution functions. Here is some example pseudocode:

   def myrunner():
       ...
       # Do stuff; if an error is encountered, set the retcode so
       # that requisites can tell the job failed.
       if some_error_occurred:
           __context__["retcode"] = 1
       ...

This allows a custom runner/wheel function to report its failure so that requisites can accurately tell that a job has failed.

More Complex Orchestration

Many states/functions can be configured in a single file, which, when combined with the full suite of Requisites and Other Global State Arguments, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any Requisites and Other Global State Arguments, as has been the default in SLS files since 0.17.0. A sketch of such a multi-stage file appears after the following note; given such a setup, the orchestration is carried out in definition order, except where requisites reorder or gate execution.
NOTE: Remember, salt-run is always executed on the
master.
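For illustration, a multi-stage orchestration along these lines (targets, SLS names, and the bootstrap script are illustrative) might bootstrap a set of servers with a function call, apply a highstate to the web servers, and set up storage only once the web servers are done:

   bootstrap_servers:
     salt.function:
       - name: cmd.run
       - tgt: 10.0.0.0/24
       - tgt_type: ipcidr
       - arg:
         - /usr/local/sbin/bootstrap.sh

   webserver_setup:
     salt.state:
       - tgt: 'web*'
       - highstate: True

   storage_setup:
     salt.state:
       - tgt: 'role:storage'
       - tgt_type: grain
       - sls: ceph
       - require:
         - salt: webserver_setup

Here bootstrap_servers runs first (definition order), webserver_setup runs second, and storage_setup runs last because its require requisite waits for webserver_setup.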
Parsing Results Programmatically

Orchestration jobs return output in a specific data structure. That data structure is represented differently depending on the outputter used. With the default outputter for orchestration, you get a nice human-readable output. Assume the following orchestration SLS:

   good_state:
     salt.state:
       - tgt: myminion
       - sls:
         - succeed_with_changes

Running this using the default outputter would produce output which looks like this:

   fa5944a73aa8_master:
   ----------
   ...

However, using the json outputter, you can get the output in an easily loadable and parsable format:

   salt-run state.orchestrate test --out=json
   { ... }
The 2018.3.0 release includes a couple of fixes to make parsing this data easier and more accurate. The first is the ability to set a return code in a custom runner or wheel function, as noted above. The second is a change to how failures are included in the return data. Prior to the 2018.3.0 release, minions that failed a salt.state orchestration job would show up in the comment field of the return data, in a human-readable string that was not easily parsed. They are now included in the changes dictionary alongside the minions that succeeded. In addition, salt.function jobs which failed because the fail function returned False used to handle their failures in the same way salt.state jobs did, and this has likewise been corrected.

Running States on the Master without a Minion

The orchestrate runner can be used to execute states on the master without using a minion. For example, assume that salt://foo.sls contains the following SLS:

   /etc/foo.conf:
     file.managed:
       - source: salt://foo.conf
       - mode: 600

In this case, running salt-run state.orchestrate foo would be the equivalent of running state.sls foo, but it would execute on the master only, and would not require a minion daemon to be running on the master. This is not technically orchestration, but it can be useful in certain use cases.

Limitations

Only one SLS target can be run at a time using this method, while using state.sls allows for multiple SLS files to be passed in a comma-separated list.

SOLARIS

This section contains details on Solaris-specific quirks and workarounds.

NOTE: Solaris refers to both Solaris 10 compatible platforms
like Solaris 10, illumos, SmartOS, OmniOS, OpenIndiana,... and Oracle Solaris
11 platforms.
Solaris-specific Behaviour

Salt is capable of managing Solaris systems; however, due to various differences between the operating systems, there are some things you need to keep in mind. This document covers quirks that apply across Salt, as well as limitations in some modules.

FQDN/UQDN

On Solaris platforms the FQDN will not always be properly detected. If an IPv6 address is configured, Python's socket.getfqdn() fails to return a FQDN and returns the nodename instead. For a full breakdown see the following issue on github: #37027

Grains

Not all grains are available, and some have empty or 0 as their value. This mostly affects grains that depend on hardware discovery, such as: - num_gpus - gpus Also some resolver-related grains, such as: - domain - dns:options - dns:sortlist

SALT SSH

Getting Started

Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect to and run salt-ssh commands in a similar way as standard salt commands.
Salt SSH Roster

The roster system in Salt allows for remote minions to be easily defined.

NOTE: See the SSH roster docs for more details.

Simply create the roster file; the default location is /usr/local/etc/salt/roster:

   web1: 192.168.42.1

This is a very basic roster file where a Salt ID is being assigned to an IP address. A more elaborate roster can be created:

   web1:
     host: 192.168.42.1    # The IP address or DNS name of the remote host
     user: fred            # Remote executions will be executed as user fred
     passwd: foobarbaz     # The password to use for login, if not using keys
     sudo: True            # Whether to sudo to root, not enabled by default

NOTE: sudo works only if NOPASSWD is set for user in
/etc/sudoers: fred ALL=(ALL) NOPASSWD: ALL
Deploy ssh key for salt-ssh

By default, salt-ssh will generate key pairs for ssh; the default path will be /usr/local/etc/salt/pki/master/ssh/salt-ssh.rsa. The key generation happens when you run salt-ssh for the first time. You can use ssh-copy-id (the OpenSSH key deployment tool) to deploy keys to your servers:

   ssh-copy-id -i /usr/local/etc/salt/pki/master/ssh/salt-ssh.rsa.pub user@server.demo.com

One could also create a simple shell script, named salt-ssh-copy-id.sh, as follows:

   #!/bin/bash
   if [ -z $1 ]; then
      echo $0 USER@HOSTNAME
      exit 0
   fi
   ssh-copy-id -i /usr/local/etc/salt/pki/master/ssh/salt-ssh.rsa.pub $1

NOTE: Be certain to chmod +x salt-ssh-copy-id.sh.
   ./salt-ssh-copy-id.sh user@server1.host.com
   ./salt-ssh-copy-id.sh user@server2.host.com

Once keys are successfully deployed, salt-ssh can be used to control them. Alternatively, ssh agent forwarding can be used by setting priv to agent-forwarding.

Calling Salt SSH

NOTE: salt-ssh on target hosts without Python 3
The salt-ssh command requires at least Python 3, which is not installed by default on some target hosts. An easy workaround in this situation is to use the -r option to run a raw shell command that installs Python (this example installs Python 2.6 on an older CentOS host): salt-ssh centos-5-minion -r 'yum -y install epel-release ; yum -y install python26'

NOTE: salt-ssh on systems with Python 3.x
Salt, before the 2017.7.0 release, does not support Python 3.x, which is the default on, for example, the popular 16.04 LTS release of Ubuntu. An easy workaround for this scenario is to use the -r option as in the example above: salt-ssh ubuntu-1604-minion -r 'apt update ; apt install -y python-minimal'

The salt-ssh command can be easily executed in the same way as a salt command: salt-ssh '*' test.version

Commands with salt-ssh follow the same syntax as the salt command. The standard salt functions are available! The output is the same as salt and many of the same flags are available. Please see the Salt SSH reference for all of the available options.

Raw Shell Calls

By default salt-ssh runs Salt execution modules on the remote system, but salt-ssh can also execute raw shell commands: salt-ssh '*' -r 'ifconfig'

States Via Salt SSH

The Salt State system can also be used with salt-ssh. The state system abstracts the same interface to the user in salt-ssh as it does when using standard salt. The intent is that Salt Formulas defined for standard salt will work seamlessly with salt-ssh and vice-versa.  The standard Salt States walkthroughs function by simply replacing salt commands with salt-ssh.

Targeting with Salt SSH

Due to the fact that the targeting approach differs in salt-ssh, only glob and regex targets are supported as of this writing; the remaining target systems still need to be implemented.

NOTE: By default, Grains are settable through salt-ssh.
By default, these grains will not be persisted across reboots.
See the "thin_dir" setting in Roster documentation for more details. Configuring Salt SSHSalt SSH takes its configuration from a master configuration file. Normally, this file is in /usr/local/etc/salt/master. If one wishes to use a customized configuration file, the -c option to Salt SSH facilitates passing in a directory to look inside for a configuration file named master. Minion ConfigNew in version 2015.5.1. Minion config options can be defined globally using the master configuration option ssh_minion_opts. It can also be defined on a per-minion basis with the minion_opts entry in the roster. Running Salt SSH as non-root userBy default, Salt read all the configuration from /usr/local/etc/salt/. If you are running Salt SSH with a regular user you have to modify some paths or you will get "Permission denied" messages. You have to modify two parameters: pki_dir and cachedir. Those should point to a full path writable for the user. It's recommended not to modify /usr/local/etc/salt for this purpose. Create a private copy of /usr/local/etc/salt for the user and run the command with -c /new/config/path. Define CLI Options with SaltfileIf you are commonly passing in CLI options to salt-ssh, you can create a Saltfile to automatically use these options. This is common if you're managing several different salt projects on the same server. So you can cd into a directory that has a Saltfile with the following YAML contents: salt-ssh: Instead of having to call salt-ssh --config-dir=path/to/config/dir --max-procs=30 --wipe \* test.version you can call salt-ssh \* test.version. Boolean-style options should be specified in their YAML representation. NOTE: The option keys specified must match the destination
attributes for the options specified in the parser
salt.utils.parsers.SaltSSHOptionParser. For example, in the case of the
--wipe command line option, its dest is configured to be
ssh_wipe and thus this is what should be configured in the
Saltfile. Using the names of flags for this option, being wipe:
True or w: True, will not work.
NOTE: For the Saltfile to be automatically detected it
needs to be named Saltfile with a capital S and be readable by
the user running salt-ssh.
Lastly, you can create ~/.salt/Saltfile and salt-ssh will automatically load it by default.

Advanced options with salt-ssh

Salt's ability to allow users to have custom grains and custom modules is also applicable to salt-ssh. This is done by first packing the custom grains into the thin tarball before it is deployed on the system. For this to happen, the config file must be explicit enough to indicate where the custom grains are located on the machine, like so:

   # Use the local file system (instead of a master) and list the
   # file roots that contain the _grains directory (path illustrative).
   file_client: local
   file_roots:
     base:
       - /home/user/salt/states

It's better to be explicit rather than implicit in this situation. This will allow all urls under salt:// to be resolved, such as salt://_grains/custom_grain.py. One can confirm this action by executing a properly set up salt-ssh minion with salt-ssh minion grains.items. During this process, a saltutil.sync_all is run to discover the thin tarball, which is then consumed. Output similar to this indicates a successful sync with custom grains:

   local:
     ...

This is especially important when using a custom file_roots that differs from /usr/local/etc/salt/.

NOTE: Please see
https://docs.saltproject.io/en/latest/topics/grains/ for more
information on grains and custom grains.
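For reference, a custom grain module is simply a Python file under the _grains/ directory that returns a dictionary of grain names to values (a minimal sketch; the file and grain names are illustrative):

   # salt://_grains/custom_grain.py
   def custom():
       # Any function in this module that returns a dict contributes
       # its keys and values as grains.
       return {"custom_grain": "example_value"}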
Debugging salt-ssh

One common approach for debugging salt-ssh is to simply use the tarball that salt ships to the remote machine and call salt-call directly. To determine the location of salt-call, simply run salt-ssh with the -ltrace flag and look for a line containing the string SALT_ARGV. This contains the salt-call command that salt-ssh attempted to execute. It is recommended that one modify this command a bit by removing -l quiet, --metadata and --output json to get a better idea of what's going on on the target system.

Salt Rosters

Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the salt-ssh system. The roster system was created because salt-ssh needs a means to identify which systems need to be targeted for execution.

SEE ALSO: roster modules
NOTE: The Roster System is not needed or used in standard Salt
because the master does not need to be initially aware of target systems,
since the Salt Minion checks itself into the master.
Since the roster system is pluggable, it can be easily augmented to attach to any existing systems to gather information about what servers are presently available and should be attached to by salt-ssh. By default the roster file is located at /usr/local/etc/salt/roster.

How Rosters Work

The roster system compiles a data structure internally referred to as targets. The targets structure is a list of target systems and attributes describing how to connect to said systems. The only requirement for a roster module in Salt is to return the targets data structure.

Targets Data

The information which can be stored in a roster target is the following:

   <Salt ID>:    # The id to reference the target system with

ssh_pre_flight

A Salt-SSH roster option ssh_pre_flight was added in the 3001 release. This enables you to run a script before Salt-SSH tries to run any commands. You can set this option in the roster for a specific minion or use roster_defaults to set it for all minions. This script will only run if the thin dir is not currently on the minion. This means it will only run on the first run of salt-ssh or if you have recently wiped out your thin dir. If you want to intentionally run the script again you have a couple of options:

ssh_pre_flight_args

Additional arguments can be passed to the script running on the minion with ssh_pre_flight by specifying either a list of arguments or a single string. If a single string is used, it is split on spaces and the resulting distinct arguments are passed to the script.

Target Defaults

The roster_defaults dictionary in the master config is used to set the default login variables for minions in the roster, so that the same arguments do not need to be passed on the command line.

   roster_defaults:

thin_dir

Salt needs to upload a standalone environment to the target system, and this defaults to /tmp/salt-<hash>. This directory will be cleaned up per normal systems operation. If you need a persistent Salt environment, for instance to set persistent grains, this value will need to be changed.

SSH Ext Alternatives

In the 2019.2.0 release the ssh_ext_alternatives feature was added. This allows salt-ssh to work across different supported python versions. You will need to ensure you have the following:
To enable this feature you will need to edit the master configuration along the lines of the sketch after the following warning.

WARNING: When using Salt versions >= 3001 and Python 2 is your
py-version you need to use an older version of Salt that supports
Python 2. For example, if using Salt-SSH version 3001 and you do not want to
install Python 3 on your target host you can use ssh_ext_alternatives's
path option. This option needs to point to a 2019.2.3 Salt installation
directory on your Salt-SSH host, which still supports Python 2.
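A configuration along these lines (the version namespace and all paths are illustrative):

   ssh_ext_alternatives:
     2019.2:                     # namespace, can be anything
       py-version: [2, 7]        # the python version on the target host
       path: /opt/2019.2/salt    # path to a Salt installation for that version
       dependencies:             # paths to the required dependencies
         jinja2: /opt/jinja2
         yaml: /opt/yaml
         msgpack: /opt/msgpack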
auto_detect

In the 3001 release the auto_detect feature was added for ssh_ext_alternatives. This allows salt-ssh to automatically detect the paths to all of your dependencies and does not require you to define them under dependencies.

   ssh_ext_alternatives:
     ...

If py_bin is not set alongside auto_detect, it will attempt to auto-detect the dependencies using the major version set in py-version. For example, if you have [2, 7] set as your py-version, it will attempt to use the binary python2. You can also use auto_detect and dependencies together.

   ssh_ext_alternatives:
     ...

If a dependency is defined in the dependencies list, ssh_ext_alternatives will use this dependency instead of the path that auto_detect finds. For example, if you define /opt/jinja2 under your dependencies for jinja2, it will not try to auto-detect the file path to the jinja2 module, and will favor /opt/jinja2.

Different Python Versions

The 3001 release removed Python 2 support in Salt. Even though this Python 2 support is being dropped, we have provided multiple ways to work around this with Salt-SSH. You can use the following options:
THORIUM COMPLEX REACTOR

The original Salt Reactor is based on the idea of listening for a specific event and then reacting to it. This model comes with many logical limitations; for instance, it is very difficult (and hacky) to fire a reaction based on aggregate data or based on multiple events. The Thorium reactor is intended to alleviate this problem in a very elegant way. Instead of using extensive jinja routines or complex python sls files, the aggregation of data and the determination of what should run becomes isolated to the sls data logic, making the definitions much cleaner.

Starting the Thorium Engine

To enable the thorium engine, add the following configuration to the engines section of your Salt Master or Minion configuration file and restart the daemon:

   engines:
     - thorium: {}

Thorium Modules

Because of its specialized nature, Thorium uses its own set of modules. However, many of these modules are designed to wrap the more commonly-used Salt subsystems. These modules are:
There are other modules that ship with Thorium as well. Some of these will be highlighted later in this document.

Writing Thorium Formulas

Like some other Salt subsystems, Thorium uses its own directory structure. The default location for this structure is /srv/thorium/, but it can be changed using the thorium_roots setting in the master configuration file. This would explicitly set the roots to the default:

   thorium_roots:
     base:
       - /srv/thorium

Example thorium_roots configuration:

   thorium_roots:
     base:
       - /etc/salt/thorium

It is also possible to use gitfs with Thorium, using the thoriumenv or thorium_top settings. Example using thorium_top:

   thorium_top: salt://thorium/top.sls
   gitfs_provider: pygit2
   gitfs_remotes:
     ...

NOTE: When using this method don't forget to prepend the
mountpoint to files served by this repo, for example top.sls:
   base:
     ...

Example using thoriumenv:

   thoriumenv: thorium
   gitfs_provider: pygit2
   gitfs_remotes:
     ...

NOTE: When using this method all state will run under the
defined environment, for example top.sls:
   thorium:
     '*':
       - key_clean

The Thorium top.sls File

Thorium uses its own top.sls file, which follows the same convention as is found in /usr/local/etc/salt/states/:

   <srv>:
     <target>:
       - <formula 1>
       - <formula 2>

For instance, a top.sls using a standard base environment and a single Thorium formula called key_clean would look like:

   base:
     '*':
       - key_clean

Take note that the target in a Thorium top.sls is not used; it only exists to follow the same convention as other top.sls files. Leave this set to '*' in your own Thorium top.sls.

Thorium Formula Files

Thorium SLS files are processed by the same state compiler that processes Salt state files. This means that features like requisites, templates, and so on are available. Let's take a look at an example, and then discuss each component of it. This formula uses Thorium to detect when a minion has disappeared and then deletes the key from the master when the minion has been gone for 60 seconds:

   statreg:
     status.reg

   keydel:
     key.timeout:
       - delete: 60
       - require:
         - status: statreg

There are two stanzas in this formula, whose IDs are statreg and keydel. The first stanza, statreg, tells Thorium to keep track of minion status beacons in its register. We'll talk more about the register in a moment. The second stanza, keydel, is the one that does the real work. It uses the key module to apply an expiration (using the timeout function) to a minion. Because delete is set to 60, this is a 60 second expiration. If a minion does not check in at least once every 60 seconds, its key will be deleted from the master. This particular function also allows you to use reject instead of delete, allowing for a minion to be rejected instead of deleted if it does not check in within the specified time period. There is also a require requisite in this stanza. It states that the key.timeout function will not be called unless the status.reg function in the statreg codeblock has been successfully called first.

Thorium Links to Beacons

The above example was added in the 2016.11.0 release of Salt and makes use of the status beacon also added in the 2016.11.0 release. For the above Thorium state to function properly you will also need to enable the status beacon in the minion configuration file:

   beacons:
     status:
       - interval: 10

This will cause the minion to use the status beacon to check in with the master every 10 seconds.

The Thorium Register

In order to keep track of information, Thorium uses an in-memory register (or rather, collection of registers) on the master. These registers are only populated when told to by a formula, and they normally will be erased when the master is restarted. It is possible to persist the registers to disk, but we'll get to that in a moment. The example above uses status.reg to populate a register for you, which is automatically used by the key.timeout function. However, you can set your own register values as well, using the reg module. Because Thorium watches the event bus, the reg module is designed to look for user-specified tags, and then extract data from the payload of events that match those tags. For instance, the following stanza will look for an event with a tag of my/custom/event:

   foo:
     reg.list:
       - add: bar
       - match: my/custom/event

When such an event is found, the data found in the payload dictionary key of bar will be stored in a register called foo. This register will store that data in a list. You may also use reg.set to add data to a set() instead. If you would like to see a copy of the register as it is stored in memory, you can use the file.save function:

   myreg:
     file.save

In this case, each time the register is updated, a copy will be saved in JSON format at /var/cache/salt/master/thorium/saves/myreg.
If you would like to see when particular events are added to a list-type register, you may add a stamp option to reg.list (but not reg.set). With the above two stanzas put together, this would look like:

   foo:
     reg.list:
       - add: bar
       - match: my/custom/event
       - stamp: True

If you would like to only keep a certain number of the most recent register entries, you may also add a prune option to reg.list (but not reg.set):

   foo:
     reg.list:
       - add: bar
       - match: my/custom/event
       - stamp: True
       - prune: 50

This example will only keep the 50 most recent entries in the foo register.

Using Register Data

Putting data in a register is useless if you don't do anything with it. The check module is designed to examine register data and determine whether it matches the given parameters. For instance, the check.contains function will return True if the given value is contained in the specified register:

   foo:
     check.contains:
       - value: somedata

Used with a require requisite, we can call one of the wrapper modules and perform an operation. For example:

   shell_test:
     local.cmd:
       - tgt: '*'
       - func: cmd.run
       - arg:
         - echo 'thorium success' > /tmp/thorium.txt
       - require:
         - check: foo

This stanza will only run if the check.contains function under the foo ID returns true (meaning the match was found). There are a number of other functions in the check module which use different means of comparing values:
There is also a function called check.event which does not examine the register. Instead, it looks directly at an event as it is coming in on the event bus, and returns True if that event's tag matches. For example:

   salt/foo/*/bar:
     check.event

   run_remote_ex:
     local.cmd:
       - tgt: '*'
       - func: test.version
       - require:
         - check: salt/foo/*/bar

This formula will look for an event whose tag is salt/foo/<anything>/bar and, if it comes in, issue a test.version to all minions.

Register Persistence

It is possible to persist the register data to disk when a master is stopped gracefully, and reload it from disk when the master starts up again. This functionality is provided by the returner subsystem, and is enabled whenever any returner containing a load_reg and a save_reg function is used.

SALT CLOUD

Configuration

Salt Cloud provides a powerful interface to interact with cloud hosts. This interface is tightly integrated with Salt, and new virtual machines are automatically connected to your Salt master after creation. Since Salt Cloud is designed to be an automated system, most configuration is done using the following YAML configuration files: /usr/local/etc/salt/cloud (the main config file), /usr/local/etc/salt/cloud.providers (provider-specific connection details), and /usr/local/etc/salt/cloud.profiles (VM profiles).
Configuration Inheritance

Configuration settings are inherited in order from the cloud config => providers => profile. For example, if you wanted to use the same image for all virtual machines for a specific provider, the image name could be placed in the provider file. This value is inherited by all profiles that use that provider, but is overridden if an image name is defined in the profile. Most configuration settings can be defined in any file, the main difference being how that setting is inherited.

QuickStart

The Salt Cloud Quickstart walks you through defining a provider and a VM profile, and shows you how to create virtual machines using Salt Cloud. Note that if you installed Salt via Salt Bootstrap, it may not have automatically installed salt-cloud for you. Use your distribution's package manager to install the salt-cloud package from the same repo that you used to install Salt. These repos will automatically be set up by Salt Bootstrap. Alternatively, the -L option can be passed to the Salt Bootstrap script when installing Salt. The -L option will install salt-cloud and the required libcloud package.

Using Salt Cloud

salt-cloud

Provision virtual machines in the cloud with Salt

Synopsis

   salt-cloud -m /usr/local/etc/salt/cloud.map
   salt-cloud -m /usr/local/etc/salt/cloud.map NAME
   salt-cloud -m /usr/local/etc/salt/cloud.map NAME1 NAME2
   salt-cloud -p PROFILE NAME
   salt-cloud -p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6

Description

Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system.

Options
Execution Options
Query Options
Cloud Providers Listings
Cloud Credentials
Output Options
highstate, json, key,
overstatestage, pprint, raw, txt, yaml, and
many others.
Some outputters are formatted only for data returned from specific functions. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
Examples

To create 4 VMs named web1, web2, db1, and db2 from specified profiles: salt-cloud -p fedora_rackspace web1 web2 db1 db2

To read in a map file and create all VMs specified therein: salt-cloud -m /path/to/cloud.map

To read in a map file and create all VMs specified therein in parallel: salt-cloud -m /path/to/cloud.map -P

To delete any VMs specified in the map file: salt-cloud -m /path/to/cloud.map -d

To delete any VMs NOT specified in the map file: salt-cloud -m /path/to/cloud.map -H

To display the status of all VMs specified in the map file: salt-cloud -m /path/to/cloud.map -Q

See also: salt-cloud(7) salt(7) salt-master(1) salt-minion(1)

Salt Cloud basic usage

Salt Cloud needs, at least, one configured Provider and Profile to be functional.

Creating a VM

To create a VM with salt cloud, use the command: salt-cloud -p <profile> name_of_vm

Assuming there is a profile configured as follows:

   fedora_rackspace:
     ...

Then, the command to create a new VM named fedora_http_01 is: salt-cloud -p fedora_rackspace fedora_http_01

Destroying a VM

To destroy a created-by-salt-cloud VM, use the command: salt-cloud -d name_of_vm

For example, to delete the VM created in the above example, use: salt-cloud -d fedora_http_01

VM Profiles

Salt cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to /usr/local/etc/salt/cloud.profiles and is a yaml configuration. The syntax for declaring profiles is simple:

   fedora_rackspace:
     ...

It should be noted that the script option defaults to bootstrap-salt, and does not normally need to be specified. Further examples in this document will not show the script option. A few key pieces of information need to be declared and can change based on the cloud provider. A number of additional parameters can also be inserted:

   centos_rackspace:
     ...

The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes use the following commands:

   salt-cloud --list-images openstack
   salt-cloud --list-sizes openstack

Some parameters can be specified in the main Salt cloud configuration file and then are applied to all cloud profiles. For instance, if only a single cloud provider is being used then the provider option can be declared in the Salt cloud configuration file.

Multiple Configuration Files

In addition to /usr/local/etc/salt/cloud.profiles, profiles can also be specified in any file matching cloud.profiles.d/*.conf, which is a sub-directory relative to the profiles configuration file (with the above configuration file as an example, /usr/local/etc/salt/cloud.profiles.d/*.conf). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems.

Larger Example

   rhel_ec2:
     ...

Cloud Map File

A number of options exist when creating virtual machines. They can be managed directly from profiles and the command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles. The map file is designed to be run once to create these more complex scenarios using salt-cloud. Map files have a simple format: specify a profile and then a list of virtual machines to make from said profile, as in the sketch below. Such a map file can then be called to roll out all of these virtual machines.
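A sketch of such a map file (profile and VM names illustrative):

   fedora_small:
     - web1
     - web2
     - web3
   fedora_high:
     - redis1
     - redis2
     - redis3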
Map files are called from the salt-cloud command with the -m option: $ salt-cloud -m /path/to/mapfile Remember, that as with direct profile provisioning the -P option can be passed to create the virtual machines in parallel: $ salt-cloud -m /path/to/mapfile -P NOTE: Due to limitations in the GoGrid API, instances cannot be
provisioned in parallel with the GoGrid driver. Map files will work with
GoGrid, but the -P argument should not be used on maps referencing
GoGrid instances.
A map file can also be enforced to represent the total state of a cloud deployment by using the --hard option. When using the hard option, any VMs that exist but are not specified in the map file will be destroyed:

   $ salt-cloud -m /path/to/mapfile -P -H

Be careful with this argument; it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file:

   enable_hard_maps: True

A map file can include grains and minion configuration options:

   fedora_small:
     ...

Any top level data element from your profile may be overridden in the map file:

   fedora_small:
     ...

As of Salt 2017.7.0, nested elements are merged, and can be specified individually without having to repeat the complete definition for each top level data element. In this example a separate MAC is assigned to each VMware instance while inheriting device parameters for disk and network configuration:

   nyc-vm:
     ...

A map file may also be used with the various query options:

   $ salt-cloud -m /path/to/mapfile -Q
   {'ec2': {'web1': {'id': 'i-e6aqfegb',
            ...}}}
...or with the delete option: $ salt-cloud -m /path/to/mapfile -d The following virtual machines are set to be destroyed: WARNING: Specifying Nodes with Maps on the Command Line Specifying
the name of a node or nodes with the maps options on the command line is
not supported. This is especially important to remember when using
--destroy with maps; salt-cloud will ignore any arguments passed
in which are not directly relevant to the map file. When using
--destroy with a map, every node in the map file will be deleted!
Maps don't provide any useful information for destroying individual nodes, and
should not be used to destroy a subset of a map.
Requiring Other Instances

The requires directive can be used in map files to ensure that one instance is created and available before another is created:

   fedora_high:
     ...

This requisite is passed to the instance definition dictionary in a map file and accepts a list of instance names as defined in the map.

Setting up New Salt Masters

Bootstrapping a new master in the map is as simple as:

   fedora_small:
     ...

Notice that ALL bootstrapped minions from the map will answer to the newly created salt-master. To make any of the bootstrapped minions answer to the bootstrapping salt-master as opposed to the newly created salt-master, as an example:

   fedora_small:
     ...

The above says the minion running on the newly created salt-master responds to the local master, i.e., the master used to bootstrap these VMs. Another example:

   fedora_small:
     ...

The above example makes the web3 minion answer to the local master, not the newly created master.

Using Direct Map Data

When using modules that access the CloudClient directly (notably, the cloud execution and runner modules), it is possible to pass in the contents of a map file, rather than a path to the location of the map file. Normally when using these modules, the path to the map file is passed in using: salt-run cloud.map_run /path/to/cloud.map

To pass in the actual map data, use the map_data argument: salt-run cloud.map_run map_data='{"centos7": [{"saltmaster": {"minion": \
Cloud Actions

Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on:

   $ salt-cloud -a reboot vm_name
   $ salt-cloud -a reboot vm1 vm2 vm3

Or you may specify a map which includes all VMs to perform the action on:

   $ salt-cloud -a reboot -m /path/to/mapfile

The following is an example list of actions currently supported by salt-cloud:

   all providers:
     ...

Another useful reference for viewing more salt-cloud actions is the Salt Cloud Feature Matrix.

Cloud Functions

Cloud functions work much the same way as cloud actions, except that they don't perform an operation on a specific instance, and so do not need a machine name to be specified. However, since they perform an operation on a specific cloud provider, that provider must be specified:

   $ salt-cloud -f show_image ec2 image=ami-fd20ad94

There are three universal salt-cloud functions that are extremely useful for gathering information about instances on a provider basis: list_nodes (returns some general information about the instances for the given provider), list_nodes_full (returns all information about the instances for the given provider), and list_nodes_select (returns select information about the instances for the given provider).
   $ salt-cloud -f list_nodes linode
   $ salt-cloud -f list_nodes_full linode
   $ salt-cloud -f list_nodes_select linode

Another useful reference for viewing salt-cloud functions is the Salt Cloud Feature Matrix.

Core Configuration

Install Salt Cloud

Salt Cloud is now part of Salt proper. It was merged in as of Salt version 2014.1.0. On Ubuntu, install Salt Cloud by using the following commands:

   sudo add-apt-repository ppa:saltstack/salt
   sudo apt-get update
   sudo apt-get install salt-cloud

If using Salt Cloud on macOS, curl-ca-bundle must be installed. Presently, this package is not available via brew, but it is available using MacPorts:

   sudo port install curl-ca-bundle

Salt Cloud depends on apache-libcloud. Libcloud can be installed via pip with pip install apache-libcloud.

Installing Salt Cloud for development

Installing Salt for development enables Salt Cloud development as well; just make sure apache-libcloud is installed as per the above paragraph. See these instructions: Installing Salt for development.

Core Configuration

A number of core configuration options and some options that are global to the VM profiles can be set in the cloud configuration file. By default this file is located at /usr/local/etc/salt/cloud.

Thread Pool Size

When salt cloud is operating in parallel mode via the -P argument, you can control the thread pool size by specifying the pool_size parameter with a positive integer value. By default, the thread pool size will be set to the number of VMs that salt cloud is operating on.

   pool_size: 10

Minion Configuration

The default minion configuration is set up in this file. Minions created by salt-cloud derive their configuration from this file. Almost all parameters found in Configuring the Salt Minion can be used here:

   minion:
     master: saltmaster.example.com

In particular, this is the place to specify the location of the salt master and its listening port, if the port is not set to the default. Similar to most other settings, minion configuration settings are inherited across configuration files. For example, the master setting might be contained in the main cloud configuration file as demonstrated above, but additional settings can be placed in the provider, profile or map configuration files:

   ec2-web:
     ...

When salt cloud creates a new minion, it can automatically add grain information to the minion configuration file identifying the sources originally used to define it. The generated grain information will appear similar to:

   grains:
     ...

The generation of the salt-cloud grain can be suppressed by the option enable_cloud_grains: 'False' in the cloud configuration file.

Cloud Configuration Syntax

The data specific to interacting with public clouds is set up here. Cloud provider configuration settings can live in several places. The first is in /usr/local/etc/salt/cloud:

   # /usr/local/etc/salt/cloud
   providers:
     ...

Cloud provider configuration data can also be housed in /usr/local/etc/salt/cloud.providers or any file matching /usr/local/etc/salt/cloud.providers.d/*.conf. All files in any of these locations will be parsed for cloud provider data. Using the example configuration above:

   # /usr/local/etc/salt/cloud.providers
   # or could be /usr/local/etc/salt/cloud.providers.d/*.conf
   my-aws-config:
     ...

NOTE: Salt Cloud provider configurations within
/usr/local/etc/salt/cloud.providers.d/ should not specify the providers starting
key.
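A sketch of such a provider block (credentials and key paths are placeholders):

   my-aws-config:
     driver: ec2
     id: 'AKIA...'             # your AWS access key id
     key: 'abc123...'          # your AWS secret access key
     keyname: test
     securitygroup: quick-start
     private_key: /root/test.pem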
It is also possible to have multiple cloud configuration blocks within the same alias block. For example:

   production-config:
     ...

However, using this configuration method requires a change to profile configuration blocks. The provider alias needs to have the provider key value appended, as in the following example:

   rhel_aws_dev:
     ...

Notice that because of the multiple entries, one has to be explicit about the provider alias and name, from the above example, production-config: ec2. This data interacts with the salt-cloud binary regarding its --list-locations, --list-images, and --list-sizes options, which need a cloud provider as an argument. The argument used should be the configured cloud provider alias. If the provider alias has multiple entries, <provider-alias>: <provider-name> should be used. To allow for a more extensible configuration, --providers-config, which defaults to /usr/local/etc/salt/cloud.providers, was added to the cli parser. It allows for the providers' configuration to be added on a per-file basis.

Pillar Configuration

It is possible to configure cloud providers using pillars. This is only used when inside the cloud module. You can set up a variable called cloud that contains your profile, provider, and map to pass that information to the cloud servers instead of having to copy the full configuration to every minion. In your pillar file, you would use something like this:

   cloud:
     ...

Cloud Configurations

Scaleway

To use Salt Cloud with Scaleway, you need to get an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name and click on the "My Credentials" link. If you do not have an API token you can create one by clicking the "Create New Token" button in the right corner.

   my-scaleway-config:
     ...

NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-scaleway-config.
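Since the example block above is truncated in this rendering, here is a minimal sketch with placeholder credentials (access_key and token are the values retrieved from the control panel as described above):

my-scaleway-config:
  driver: scaleway
  access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f  # placeholder
  token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d       # placeholder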
RackspaceRackspace cloud requires two configuration options: a user and an apikey: my-rackspace-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-rackspace-config.
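A minimal sketch with placeholder values (user and apikey are the two options named above):

my-rackspace-config:
  driver: rackspace
  user: example_user        # placeholder
  apikey: 123984bjjas87034  # placeholder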
Amazon AWSA number of configuration options are required for Amazon AWS including id, key, keyname, securitygroup, and private_key: my-aws-quick-start: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
either provider: my-aws-quick-start or provider:
my-aws-default.
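A minimal sketch using the options listed above (all values are placeholders):

my-aws-quick-start:
  driver: ec2
  id: 'HJGRYCILJLKJYG'                        # placeholder access key ID
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'  # placeholder secret key
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem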
LinodeLinode requires a single API key, but the default root password also needs to be set: my-linode-config: The password needs to be 8 characters and contain lowercase, uppercase, and numbers. NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-linode-config.
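A minimal sketch with placeholder values (apikey and the default root password, as described above):

my-linode-config:
  driver: linode
  apikey: asldkgfakl;sdfjsjaslfjaklsdjf  # placeholder
  password: F00barbaz                    # placeholder default root password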
Joyent CloudThe Joyent cloud requires three configuration parameters: The username and password that are used to log into the Joyent system, as well as the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine. my-joyent-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-joyent-config.
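A minimal sketch with placeholder values (the three parameters described above, plus the name of the key in the Joyent account):

my-joyent-config:
  driver: joyent
  user: fred                     # placeholder
  password: saltybacon           # placeholder
  private_key: /root/joyent.pem  # location of the private SSH key
  keyname: joyent                # assumed name of the key in the Joyent account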
GoGridTo use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab. The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid: my-gogrid-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-gogrid-config.
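A minimal sketch with placeholder values (apikey and sharedsecret as described above):

my-gogrid-config:
  driver: gogrid
  apikey: asdff7896asdh789  # placeholder
  sharedsecret: saltybacon  # placeholder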
OpenStackUsing Salt for OpenStack uses the shade <https://docs.openstack.org/shade/latest/> driver managed by the openstack-infra team. This driver can be configured using the /etc/openstack/clouds.yml file with os-client-config <https://docs.openstack.org/os-client-config/latest/> myopenstack: Or by just configuring the same auth block directly in the cloud provider config. myopenstack: Both of these methods support using the vendor <https://docs.openstack.org/os-client-config/latest/user/vendor-support.html> options. For more information, look at Openstack Cloud Driver Docs DigitalOceanUsing Salt for DigitalOcean requires a client_key and an api_key. These can be found in the DigitalOcean web interface, in the "My Settings" section, under the API Access tab. my-digitalocean-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-digitalocean-config.
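A minimal sketch with placeholder values (client_key and api_key as described above; note that the newer DigitalOcean section later in this document uses a personal_access_token instead):

my-digitalocean-config:
  driver: digitalocean
  client_key: wFGEwgregeqw3435gDger               # placeholder
  api_key: GDE43t43REGTrkilg43934t34qT43t4dgeger  # placeholder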
ParallelsUsing Salt with Parallels requires a user, password and URL. These can be obtained from your cloud provider. my-parallels-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-parallels-config.
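A minimal sketch with placeholder values (user, password, and url as described above):

my-parallels-config:
  driver: parallels
  user: myuser                                        # placeholder
  password: badpass                                   # placeholder
  url: https://api.cloud.example.com:4465/paci/v1.0/  # placeholder endpoint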
ProxmoxUsing Salt with Proxmox requires a user, password, and URL. These can be obtained from your cloud host. Both PAM and PVE users can be used. my-proxmox-config: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: my-proxmox-config.
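A minimal sketch with placeholder values (a PAM user is shown, but a PVE user such as saltcloud@pve works as well):

my-proxmox-config:
  driver: proxmox
  user: saltcloud@pam     # placeholder
  password: xyzzy         # placeholder
  url: your.proxmox.host  # placeholder hostname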
LXCThe lxc driver uses saltify to install salt and attach the lxc container as a new lxc minion. As with bare metal machines, management is performed over SSH. You can also destroy those containers via this driver. devhost10-lxc: And in the map file: devhost10-lxc: NOTE: In the cloud profile that uses this provider
configuration, the syntax for the provider required field would be
provider: devhost10-lxc.
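Since both blocks above are truncated in this rendering, here is a minimal sketch (hypothetical host and container names; the provider's target is the machine that hosts the containers):

# provider configuration
devhost10-lxc:
  target: devhost10
  driver: lxc

# map file
devhost10-lxc:
  - devhost10-lxc-1
  - devhost10-lxc-2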
SaltifyThe Saltify driver is a new, experimental driver designed to install Salt on a remote machine, virtual or bare metal, using SSH. This driver is useful for provisioning machines which are already installed, but not Salted. For more information about using this driver and for configuration examples, please see the Getting Started with Saltify documentation. VagrantThe Vagrant driver is a new, experimental driver for controlling a VagrantBox virtual machine, and installing Salt on it. The target host machine must be a working salt minion, which is controlled via the salt master using salt-api. For more information, see Getting Started With Vagrant. Extending Profiles and Cloud Providers ConfigurationAs of 0.8.7, the option to extend both the profiles and cloud providers configuration and avoid duplication was added. The extends feature works on the current profiles configuration, but, regarding the cloud providers configuration, only works in the new syntax and respective configuration files, i.e. /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/*.conf. NOTE: Extending cloud profiles and providers is not recursive.
For example, a profile that is extended by a second profile is possible, but
the second profile cannot be extended by a third profile.
Also, if a profile (or provider) is extending another profile and each contains a list of values, the lists from the extending profile will override the list from the original profile. The lists are not merged together. Extending ProfilesSome example usage of extends with profiles. Consider /usr/local/etc/salt/cloud.profiles containing: development-instances: The above configuration, once parsed, would generate the following profiles data: [ Pretty cool right? Extending ProvidersSome example usage of extends within the cloud providers configuration. Consider /usr/local/etc/salt/cloud.providers containing: my-develop-envs: The above configuration, once parsed, would generate the following providers data: {
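Because the examples above are truncated in this rendering, here is a minimal sketch of how extends composes profile data (hypothetical profile names and AMI ID):

development-instances:
  provider: my-ec2-config
  size: t1.micro
  ssh_username: dev_user
  script: bootstrap-salt

Amazon-Linux-AMI-2012.09-64bit:
  image: ami-54cf5c3d  # placeholder AMI ID
  extends: development-instances

After parsing, the second profile contains its own image key plus every key inherited from development-instances.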
Windows ConfigurationSpinning up Windows MinionsIt is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images. RequirementsNOTE: Support for winexe and impacket has been
deprecated and will be removed in version 3001. These dependencies are replaced
by pypsexec and smbprotocol, respectively. These are pure Python
alternatives that are compatible with all supported Python versions.
Salt Cloud makes use of impacket and winexe to set up the Windows Salt Minion installer. impacket is usually available as either the impacket or the python-impacket package, depending on the distribution. More information on impacket can be found at the project home:
winexe is less commonly available in distribution-specific repositories. However, it is currently being built for various distributions in 3rd party channels:
Optionally, WinRM can be used instead of winexe if the Python module pywinrm is available and WinRM is supported on the target Windows version. Information on pywinrm can be found at the project home:
Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com:
Self Signed Certificates with WinRMSalt-Cloud can use versions of pywinrm<=0.1.1 or pywinrm>=0.2.1. For versions greater than 0.2.1, winrm_verify_ssl needs to be set to False if the certificate is self-signed and not verifiable. Firewall SettingsBecause Salt Cloud makes use of smbclient and winexe, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or the Windows firewall is disabled. If supported by the cloud provider, a PowerShell script may be used to open up this port automatically, using the cloud provider's userdata. The following script would open up port 445, and apply the changes: <powershell> New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445 Set-Item (dir wsman:\localhost\Listener\*\Port -Recurse).pspath 445 -Force Restart-Service winrm </powershell> For EC2, this script may be saved as a file, and specified in the provider or profile configuration as userdata_file. For instance: my-ec2-config: NOTE: In versions 2016.11.0 and 2016.11.3, this file was
passed through the master's renderer to template it. However, this
caused issues with non-YAML data, so templating is no longer performed by
default. To template the userdata_file, add a userdata_template option
to the cloud profile:
my-ec2-config: If no userdata_template is set in the cloud profile, then the master configuration will be checked for a userdata_template value. If this is not set, then no templating will be performed on the userdata_file. To disable templating in a cloud profile when a userdata_template has been set in the master configuration file, simply set userdata_template to False in the cloud profile: my-ec2-config: If you are using WinRM on EC2, the HTTPS port for the WinRM service must also be enabled in your userdata. By default, EC2 Windows images only have insecure HTTP enabled. To enable HTTPS and the basic authentication required by pywinrm, consider the following userdata example: <powershell>
New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
New-NetFirewallRule -Name "WINRM5986" -DisplayName "WINRM5986" -Protocol TCP -LocalPort 5986
winrm quickconfig -q
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}'
winrm set winrm/config '@{MaxTimeoutms="1800000"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
$SourceStoreScope = 'LocalMachine'
$SourceStorename = 'Remote Desktop'
$SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope
$SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)
$cert = $SourceStore.Certificates | Where-Object -FilterScript {
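# NOTE: the original example is truncated at this point; the remainder below
# is a reconstructed sketch, not the verbatim original. It selects the RDP
# certificate (adjust the filter for your image), copies it into the
# LocalMachine\My store, and binds it to an HTTPS WinRM listener.
    $_.Subject -like '*'
} | Select-Object -First 1
$DestStoreScope = 'LocalMachine'
$DestStoreName = 'My'
$DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope
$DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$DestStore.Add($cert)
$SourceStore.Close()
$DestStore.Close()
winrm create winrm/config/listener?Address=*+Transport=HTTPS "@{CertificateThumbprint=`"$($cert.Thumbprint)`"}"
Restart-Service winrm
</powershell>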
No certificate store is available by default on EC2 images and creating one does not seem possible without an MMC (cannot be automated). To use the default EC2 Windows images, the above script copies the RDP store. ConfigurationConfiguration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). For example: Setting the installer in /usr/local/etc/salt/cloud.providers: my-softlayer: The default Windows user is Administrator, and the default Windows password is blank. If WinRM is to be used, use_winrm needs to be set to True. winrm_port can be used to specify a custom port (must be HTTPS listener). And winrm_verify_ssl can be set to False to use a self-signed certificate. Auto-Generated Passwords on EC2On EC2, when the win_password is set to auto, Salt Cloud will query EC2 for an auto-generated password. This password is expected to take at least 4 minutes to generate, adding additional time to the deploy process. When the EC2 API is queried for the auto-generated password, it will be returned in a message encrypted with the specified keyname. This requires that the appropriate private_key file is also specified. Such a profile configuration might look like: windows-server-2012: Cloud Provider SpecificsGetting Started With Aliyun ECSThe Aliyun ECS (Elastic Compute Service) is one of the most popular public cloud hosts in China. This cloud host can be used to manage Aliyun instances using salt-cloud. http://www.aliyun.com/ DependenciesThis driver requires the Python requests library to be installed. ConfigurationUsing Salt for Aliyun ECS requires an Aliyun access key ID and key secret. These can be found in the Aliyun web interface, in the "User Center" section, under the "My Service" tab. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-aliyun-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory: aliyun_centos: Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-aliyun-config my-aliyun-config: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-aliyun-config my-aliyun-config: Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations my-aliyun-config my-aliyun-config: Security groups can be obtained using the -f list_securitygroup option for the salt-cloud command: # salt-cloud --location=cn-qingdao -f list_securitygroup my-aliyun-config my-aliyun-config: NOTE: Aliyun ECS REST API documentation is available from
Aliyun ECS API.
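A minimal provider sketch with placeholder credentials (id and key correspond to the access key ID and key secret described above):

my-aliyun-config:
  driver: aliyun
  id: 'wFGEwgregeqw3435gDger'       # placeholder access key ID
  key: 'GDE43t43REGTrkilg43934t34'  # placeholder key secret
  location: cn-qingdao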
Getting Started With AzureNew in version 2014.1.0. WARNING: This cloud provider will be removed from Salt in version
3007 due to the deprecation of the "Classic" API for Azure. Please
migrate to Azure Resource Manager by March 1, 2023.
Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed. More information about Azure is located at http://www.windowsazure.com/. Dependencies
ConfigurationSet up the provider config at /usr/local/etc/salt/cloud.providers.d/azure.conf: # Note: This example is for /usr/local/etc/salt/cloud.providers.d/azure.conf my-azure-config: The certificate used must be generated by the user. OpenSSL can be used to create the management certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally. To create the .pem file, execute the following command: openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /usr/local/etc/salt/azure.pem -out /usr/local/etc/salt/azure.pem To create the .cer file, execute the following command: openssl x509 -inform pem -in /usr/local/etc/salt/azure.pem -outform der -out /usr/local/etc/salt/azure.cer After creating these files, the .cer file will need to be uploaded to Azure via the "Upload a Management Certificate" action of the "Management Certificates" tab within the "Settings" section of the management portal. Optionally, a management_host may be configured, if necessary for the region. NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Cloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles: azure-ubuntu: These options are described in more detail below. Once configured, the profile can be realized with a salt command: salt-cloud -p azure-ubuntu newinstance This will create a salt minion instance named newinstance in Azure. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: salt newinstance test.version Profile OptionsThe following options are currently available for Azure. providerThe name of the provider as configured in /usr/local/etc/salt/cloud.providers.d/azure.conf. imageThe name of the image to use to create a VM. Available images can be viewed using the following command: salt-cloud --list-images my-azure-config sizeThe name of the size to use to create a VM. Available sizes can be viewed using the following command: salt-cloud --list-sizes my-azure-config locationThe name of the location to create a VM in. Available locations can be viewed using the following command: salt-cloud --list-locations my-azure-config affinity_groupThe name of the affinity group to create a VM in. Either a location or an affinity_group may be specified, but not both. See Affinity Groups below. ssh_usernameThe user to use to log into the newly-created VM to install Salt. ssh_passwordThe password to use to log into the newly-created VM to install Salt. slotThe environment to which the hosted service is deployed. Valid values are staging or production. When set to production, the resulting URL of the new VM will be <vm_name>.cloudapp.net. When set to staging, the resulting URL will contain a generated hash instead. media_linkThis is the URL of the container that will store the disk that this VM uses. Currently, this container must already exist. If a VM has previously been created in the associated account, a container should already exist. In the web interface, go into the Storage area and click one of the available storage selections. Click the Containers link, and then copy the URL from the container that will be used. It generally looks like: http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds service_nameThe name of the service in which to create the VM. If this is not specified, then a service will be created with the same name as the VM. virtual_network_nameOptional. The name of the virtual network for the VM to join. If this is not specified, then no virtual network will be joined. subnet_nameOptional. The name of the subnet in the virtual network for the VM to join. Requires that a virtual_network_name is specified. Show InstanceThis action is a thin wrapper around --full-query, which displays details on a single instance only.
salt-cloud -a show_instance myinstance Destroying VMsThere are certain options which can be specified in the global cloud configuration file (usually /usr/local/etc/salt/cloud) which affect Salt Cloud's behavior when a VM is destroyed. cleanup_disksNew in version 2015.8.0. Default is False. When set to True, Salt Cloud will wait for the VM to be destroyed, then attempt to destroy the main disk that is associated with the VM. cleanup_vhdsNew in version 2015.8.0. Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will ask Azure to delete the VHD associated with the disk that is also destroyed. cleanup_servicesNew in version 2015.8.0. Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will wait for the disk to be destroyed, then attempt to remove the service that is associated with the VM. Because the disk belongs to the service, the disk must be destroyed before the service can be. Managing Hosted ServicesNew in version 2015.8.0. An account can have one or more hosted services. A hosted service is required in order to create a VM. However, as mentioned above, if a hosted service is not specified when a VM is created, then one will automatically be created with the same name as the VM. The following functions are also available. create_serviceCreate a hosted service. The following options are available. nameRequired. The name of the hosted service to create. labelRequired. A label to apply to the hosted service. descriptionOptional. A longer description of the hosted service. locationRequired, if affinity_group is not set. The location in which to create the hosted service. Either the location or the affinity_group must be set, but not both. affinity_groupRequired, if location is not set. The affinity group in which to create the hosted service. Either the location or the affinity_group must be set, but not both. extended_propertiesOptional. Dictionary containing name/value pairs of hosted service properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters. CLI ExampleThe following example illustrates creating a hosted service. salt-cloud -f create_service my-azure name=my-service label=my-service location='West US' show_serviceReturn details about a specific hosted service. Can also be called with get_service. salt-cloud -f show_service my-azure name=my-service list_servicesList all hosted services associated with the subscription. salt-cloud -f list_services my-azure-config delete_serviceDelete a specific hosted service. salt-cloud -f delete_service my-azure name=my-service Managing Storage AccountsNew in version 2015.8.0. Salt Cloud can manage storage accounts associated with the account. The following functions are available. Functions marked as deprecated are marked as such per the SDK documentation, but are still included for completeness with the SDK. create_storageCreate a storage account. The following options are supported. nameRequired. The name of the storage account to create. labelRequired. A label to apply to the storage account. descriptionOptional. A longer description of the storage account. locationRequired, if affinity_group is not set. The location in which to create the storage account. Either the location or the affinity_group must be set, but not both.
affinity_groupRequired, if location is not set. The affinity group in which to create the storage account. Either the location or the affinity_group must be set, but not both. extended_propertiesOptional. Dictionary containing name/value pairs of storage account properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters. geo_replication_enabledDeprecated. Replaced by the account_type parameter. account_typeSpecifies whether the account supports locally-redundant storage, geo-redundant storage, zone-redundant storage, or read access geo-redundant storage. Possible values are:
CLI ExampleThe following example illustrates creating a storage account. salt-cloud -f create_storage my-azure name=my-storage label=my-storage location='West US' list_storageList all storage accounts associated with the subscription. salt-cloud -f list_storage my-azure-config show_storageReturn details about a specific storage account. Can also be called with get_storage. salt-cloud -f show_storage my-azure name=my-storage update_storageUpdate details concerning a storage account. Any of the options available in create_storage can be used, but the name cannot be changed. salt-cloud -f update_storage my-azure name=my-storage label=my-storage delete_storageDelete a specific storage account. salt-cloud -f delete_storage my-azure name=my-storage show_storage_keysReturns the primary and secondary access keys for the specified storage account. salt-cloud -f show_storage_keys my-azure name=my-storage regenerate_storage_keysRegenerate storage account keys. Requires a key_type ("primary" or "secondary") to be specified. salt-cloud -f regenerate_storage_keys my-azure name=my-storage key_type=primary Managing DisksNew in version 2015.8.0. When a VM is created, a disk will also be created for it. The following functions are available for managing disks. Functions marked as deprecated are marked as such per the SDK documentation, but are still included for completeness with the SDK. show_diskReturn details about a specific disk. Can also be called with get_disk. salt-cloud -f show_disk my-azure name=my-disk list_disksList all disks associated with the account. salt-cloud -f list_disks my-azure update_diskUpdate details for a disk. The following options are available. nameRequired. The name of the disk to update. has_operating_systemDeprecated. labelRequired. The label for the disk. media_linkDeprecated. The location of the disk in the account, including the storage container that it is in. This should not need to be changed. new_nameDeprecated. If renaming the disk, the new name. osDeprecated. CLI ExampleThe following example illustrates updating a disk. salt-cloud -f update_disk my-azure name=my-disk label=my-disk delete_diskDelete a specific disk. salt-cloud -f delete_disk my-azure name=my-disk Managing Service CertificatesNew in version 2015.8.0. Stored at the cloud service level, these certificates are used by your deployed services. For more information on service certificates, see the following link:
The following functions are available. list_service_certificatesList service certificates associated with the account. salt-cloud -f list_service_certificates my-azure show_service_certificateShow the data for a specific service certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates. Can also be called with get_service_certificate. salt-cloud -f show_service_certificate my-azure name=my_service_certificate \ add_service_certificateAdd a service certificate to the account. This requires that a certificate already exists, which is then added to the account. For more information on creating the certificate itself, see:
The following options are available. nameRequired. The name of the hosted service that the certificate will belong to. dataRequired. The base-64 encoded form of the pfx file. certificate_formatRequired. The service certificate format. The only supported value is pfx. passwordThe certificate password. salt-cloud -f add_service_certificate my-azure name=my-cert \ delete_service_certificateDelete a service certificate from the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates. salt-cloud -f delete_service_certificate my-azure \ Managing Management CertificatesNew in version 2015.8.0. An Azure management certificate is an X.509 v3 certificate used to authenticate an agent, such as Visual Studio Tools for Windows Azure or a client application that uses the Service Management API, acting on behalf of the subscription owner to manage subscription resources. Azure management certificates are uploaded to Azure and stored at the subscription level. The management certificate store can hold up to 100 certificates per subscription. These certificates are used to authenticate your Windows Azure deployment. For more information on management certificates, see the following link.
The following functions are available. list_management_certificatesList management certificates associated with the account salt-cloud -f list_management_certificates my-azure show_management_certificateShow the data for a specific management certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_management_certificates. Can also be called with get_management_certificate. salt-cloud -f show_management_certificate my-azure name=my_management_certificate \ add_management_certificateManagement certificates must have a key length of at least 2048 bits and should reside in the Personal certificate store. When the certificate is installed on the client, it should contain the private key of the certificate. To upload the certificate to the Microsoft Azure Management Portal, you must export it as a .cer format file that does not contain the private key. For more information on creating management certificates, see the following link:
The following options are available. public_keyA base64 representation of the management certificate public key. thumbprintThe thumbprint that uniquely identifies the management certificate. dataThe certificate's raw data in base-64 encoded .cer format. salt-cloud -f add_management_certificate my-azure public_key='...PUBKEY...' \ delete_management_certificateDelete a management certificate from the account. The thumbprint can be obtained from list_management_certificates. salt-cloud -f delete_management_certificate my-azure thumbprint=0123456789ABCDEF Virtual Network ManagementNew in version 2015.8.0. The following are functions for managing virtual networks. list_virtual_networksList virtual networks associated with the deployment. salt-cloud -f list_virtual_networks my-azure service=myservice deployment=mydeployment Managing Input EndpointsNew in version 2015.8.0. Input endpoints are used to manage port access for roles. Because endpoints cannot be managed by the Azure Python SDK, Salt Cloud uses the API directly. With versions of Python before 2.7.9, the requests-python package needs to be installed in order for this to work. Additionally, the following needs to be set in the master's configuration file: backend: requests The following functions are available. list_input_endpointsList input endpoints associated with the deployment salt-cloud -f list_input_endpoints my-azure service=myservice deployment=mydeployment show_input_endpointShow an input endpoint associated with the deployment salt-cloud -f show_input_endpoint my-azure service=myservice \ add_input_endpointAdd an input endpoint to the deployment. Please note that there may be a delay before the changes show up. The following options are available. serviceRequired. The name of the hosted service which the VM belongs to. deploymentRequired. The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name. roleRequired. The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name. nameRequired. The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH. portRequired. The public (Internet-facing) port that is used for the endpoint. local_portOptional. The private port on the VM itself that will be matched with the port. This is typically the same as the port. If this value is not specified, it will be copied from port. protocolRequired. Either tcp or udp. enable_direct_server_returnOptional. If an internal load balancer exists in the account, it can be used with a direct server return. The default value is False. Please see the following article for an explanation of this option.
timeout_for_tcp_idle_connectionOptional. The default value is 4. Please see the following article for an explanation of this option.
CLI ExampleThe following example illustrates adding an input endpoint. salt-cloud -f add_input_endpoint my-azure service=myservice \ update_input_endpointUpdates the details for a specific input endpoint. All options from add_input_endpoint are supported. salt-cloud -f update_input_endpoint my-azure service=myservice \ delete_input_endpointDelete an input endpoint from the deployment. Please note that there may be a delay before the changes show up. The following items are required. CLI ExampleThe following example illustrates deleting an input endpoint. serviceThe name of the hosted service which the VM belongs to. deploymentThe name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name. roleThe name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name. nameThe name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH. salt-cloud -f delete_input_endpoint my-azure service=myservice \ Managing Affinity GroupsNew in version 2015.8.0. Affinity groups allow you to group your Azure services to optimize performance. All services and VMs within an affinity group will be located in the same region. For more information on Affinity groups, see the following link:
The following functions are available. list_affinity_groupsList affinity groups associated with the account salt-cloud -f list_affinity_groups my-azure show_affinity_groupShow an affinity group associated with the account salt-cloud -f show_affinity_group my-azure service=myservice \ create_affinity_groupCreate a new affinity group. The following options are supported. nameRequired. The name of the new affinity group. locationRequired. The region in which the affinity group lives. labelRequired. A label describing the new affinity group. descriptionOptional. A longer description of the affinity group. salt-cloud -f create_affinity_group my-azure name=my_affinity_group \ update_affinity_groupUpdate an affinity group's properties salt-cloud -f update_affinity_group my-azure name=my_group label=my_group delete_affinity_groupDelete a specific affinity group associated with the account salt-cloud -f delete_affinity_group my-azure name=my_affinity_group Managing Blob StorageNew in version 2015.8.0. Azure storage containers and their contents can be managed with Salt Cloud. This is not as elegant as using one of the other available clients in Windows, but it benefits Linux and Unix users, as there are fewer options available on those platforms. Blob Storage ConfigurationBlob storage must be configured differently than the standard Azure configuration. Both a storage_account and a storage_key must be specified either through the Azure provider configuration (in addition to the other Azure configuration) or via the command line. storage_account: mystorage storage_key: ffhj334fDSGFEGDFGFDewr34fwfsFSDFwe== storage_accountThis is one of the storage accounts that is available via the list_storage function. storage_keyBoth a primary and a secondary storage_key can be obtained by running the show_storage_keys function. Either key may be used. Blob FunctionsThe following functions are made available through Salt Cloud for managing blob storage. make_blob_urlCreates the URL to access a blob salt-cloud -f make_blob_url my-azure container=mycontainer blob=myblob containerName of the container. blobName of the blob. accountName of the storage account. If not specified, derives the host base from the provider configuration. protocolProtocol to use: 'http' or 'https'. If not specified, derives the host base from the provider configuration. host_baseLive host base URL. If not specified, derives the host base from the provider configuration. list_storage_containersList containers associated with the storage account salt-cloud -f list_storage_containers my-azure create_storage_containerCreate a storage container salt-cloud -f create_storage_container my-azure name=mycontainer nameName of container to create. meta_name_valuesOptional. A dict with name_value pairs to associate with the container as metadata. Example: {'Category':'test'} blob_public_accessOptional. Possible values include: container, blob fail_on_existSpecify whether to throw an exception when the container exists. show_storage_containerShow a container associated with the storage account salt-cloud -f show_storage_container my-azure name=myservice nameName of container to show. show_storage_container_metadataShow a storage container's metadata salt-cloud -f show_storage_container_metadata my-azure name=myservice nameName of container to show. lease_idIf specified, show_storage_container_metadata only succeeds if the container's lease is active and matches this ID.
set_storage_container_metadataSet a storage container's metadata salt-cloud -f set_storage_container my-azure name=mycontainer \ nameName of existing container. meta_name_valuesA dict containing name, value for metadata. Example: {'category':'test'} lease_idIf specified, set_storage_container_metadata only succeeds if the container's lease is active and matches this ID. show_storage_container_aclShow a storage container's acl salt-cloud -f show_storage_container_acl my-azure name=myservice nameName of existing container. lease_idIf specified, show_storage_container_acl only succeeds if the container's lease is active and matches this ID. set_storage_container_aclSet a storage container's acl salt-cloud -f set_storage_container my-azure name=mycontainer nameName of existing container. signed_identifiersSignedIdentifiers instance blob_public_accessOptional. Possible values include: container, blob lease_idIf specified, set_storage_container_acl only succeeds if the container's lease is active and matches this ID. delete_storage_containerDelete a container associated with the storage account salt-cloud -f delete_storage_container my-azure name=mycontainer nameName of container to delete. fail_not_existSpecify whether to throw an exception when the container does not exist. lease_idIf specified, delete_storage_container only succeeds if the container's lease is active and matches this ID. lease_storage_containerLease a container associated with the storage account salt-cloud -f lease_storage_container my-azure name=mycontainer nameName of container to lease. lease_actionRequired. Possible values: acquire|renew|release|break|change lease_idRequired if the container has an active lease. lease_durationSpecifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. For backwards compatibility, the default is 60, and the value is only used on an acquire operation. lease_break_periodOptional. For a break operation, this is the proposed duration of seconds that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately. proposed_lease_idOptional for acquire, required for change. Proposed lease ID, in a GUID string format. list_blobsList blobs associated with the container salt-cloud -f list_blobs my-azure container=mycontainer containerThe name of the storage container prefixOptional. Filters the results to return only blobs whose names begin with the specified prefix. markerOptional. A string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items. The marker value is opaque to the client. maxresultsOptional. Specifies the maximum number of blobs to return, including all BlobPrefix elements.
If the request does not specify maxresults or specifies a value greater than 5,000, the server will return up to 5,000 items. Setting maxresults to a value less than or equal to zero results in error response code 400 (Bad Request). includeOptional. Specifies one or more datasets to include in the response. To specify more than one of these options on the URI, you must separate each option with a comma. Valid values are: snapshots: delimiterOptional. When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string. show_blob_service_propertiesShow a blob's service properties salt-cloud -f show_blob_service_properties my-azure set_blob_service_propertiesSets the properties of a storage account's Blob service, including Windows Azure Storage Analytics. You can also use this operation to set the default request version for all incoming requests that do not have a version specified. salt-cloud -f set_blob_service_properties my-azure propertiesA StorageServiceProperties object. timeoutOptional. The timeout parameter is expressed in seconds. show_blob_propertiesReturns all user-defined metadata, standard HTTP properties, and system properties for the blob. salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob containerName of existing container. blobName of existing blob. lease_idRequired if the blob has an active lease. set_blob_propertiesSet a blob's properties salt-cloud -f set_blob_properties my-azure containerName of existing container. blobName of existing blob. blob_cache_controlOptional. Modifies the cache control string for the blob. blob_content_typeOptional. Sets the blob's content type. blob_content_md5Optional. Sets the blob's MD5 hash. blob_content_encodingOptional. Sets the blob's content encoding. blob_content_languageOptional. Sets the blob's content language. lease_idRequired if the blob has an active lease. blob_content_dispositionOptional. Sets the blob's Content-Disposition header. The Content-Disposition response header field conveys additional information about how to process the response payload, and also can be used to attach additional metadata. For example, if set to attachment, it indicates that the user-agent should not display the response, but instead show a Save As dialog with a filename other than the blob name specified. put_blobUpload a blob salt-cloud -f put_blob my-azure container=base name=top.sls blob_path=/usr/local/etc/salt/states/top.sls salt-cloud -f put_blob my-azure container=base name=content.txt blob_content='Some content' containerName of existing container. nameName of existing blob. blob_pathThe path on the local machine of the file to upload as a blob. Either this or blob_content must be specified. blob_contentThe actual content to be uploaded as a blob. Either this or blob_path must be specified. cache_controlOptional. The Blob service stores this value but does not use or modify it. content_languageOptional. Specifies the natural languages used by this resource. content_md5Optional. An MD5 hash of the blob content. This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes do not match, the operation will fail with error code 400 (Bad Request).
blob_content_typeOptional. Set the blob's content type. blob_content_encodingOptional. Set the blob's content encoding. blob_content_languageOptional. Set the blob's content language. blob_content_md5Optional. Set the blob's MD5 hash. blob_cache_controlOptional. Sets the blob's cache control. meta_name_valuesA dict containing name, value for metadata. lease_idRequired if the blob has an active lease. get_blobDownload a blob salt-cloud -f get_blob my-azure container=base name=top.sls local_path=/usr/local/etc/salt/states/top.sls salt-cloud -f get_blob my-azure container=base name=content.txt return_content=True containerName of existing container. nameName of existing blob. local_pathThe path on the local machine to download the blob to. Either this or return_content must be specified. return_contentWhether or not to return the content directly from the blob. If specified, must be True or False. Either this or the local_path must be specified. snapshotOptional. The snapshot parameter is an opaque DateTime value that, when present, specifies the blob snapshot to retrieve. lease_idRequired if the blob has an active lease. progress_callbackcallback for progress with signature function(current, total) where current is the number of bytes transferred so far, and total is the size of the blob. max_connectionsMaximum number of parallel connections to use when the blob size exceeds 64MB. Set to 1 to download the blob chunks sequentially. Set to 2 or more to download the blob chunks in parallel. This uses more system resources but will download faster. max_retriesNumber of times to retry download of blob chunk if an error occurs. retry_waitSleep time in secs between retries. Getting Started With Azure ARMNew in version 2016.11.0. WARNING: This cloud provider will be removed from Salt in version
3007 in favor of the saltext.azurerm Salt Extension.
Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. Azure ARM (aka the Azure Resource Manager) is a next-generation version of the Azure portal and API. This document describes how to use Salt Cloud to create a virtual machine on Azure ARM, with Salt installed. More information about Azure is located at http://www.windowsazure.com/. Dependencies
Installation TipsBecause the azure library requires the cryptography library, which is compiled on-the-fly by pip, you may need to install the development tools for your operating system. Before you install azure with pip, you should make sure that the required libraries are installed. DebianFor Debian and Ubuntu, the following command will ensure that the required dependencies are installed: sudo apt-get install build-essential libssl-dev libffi-dev python-dev Red HatFor Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed: sudo yum install gcc libffi-devel python-devel openssl-devel ConfigurationSet up the provider config at /usr/local/etc/salt/cloud.providers.d/azurearm.conf: # Note: This example is for /usr/local/etc/salt/cloud.providers.d/azurearm.conf my-azurearm-config: Cloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles: azure-ubuntu-pass: These options are described in more detail below. Once configured, the profile can be realized with a salt command: salt-cloud -p azure-ubuntu-pass newinstance This will create a salt minion instance named newinstance in Azure. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: salt newinstance test.version Profile OptionsThe following options are currently available for Azure ARM. providerThe name of the provider as configured in /usr/local/etc/salt/cloud.providers.d/azurearm.conf. imageRequired. The name of the image to use to create a VM. Available images can be viewed using the following command: salt-cloud --list-images my-azure-config As you will see in --list-images, image names are composed of the following fields, separated by the pipe (|) character: publisher: For example, Canonical or MicrosoftWindowsServer offer: For example, UbuntuServer or WindowsServer sku: Such as 14.04.5-LTS or 2012-R2-Datacenter version: Such as 14.04.201612050 or latest It is possible to specify the URL or resource ID path of a custom image that you have access to, such as: https://<mystorage>.blob.core.windows.net/system/Microsoft.Compute/Images/<mystorage>/template-osDisk.01234567-890a-bcdef0123-4567890abcde.vhd or: /subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/myRG/providers/Microsoft.Compute/images/myImage sizeRequired. The name of the size to use to create a VM. Available sizes can be viewed using the following command: salt-cloud --list-sizes my-azure-config locationRequired. The name of the location to create a VM in. Available locations can be viewed using the following command: salt-cloud --list-locations my-azure-config ssh_usernameRequired for Linux. The admin user to add on the instance. It is also used to log into the newly-created VM to install Salt. ssh_keyfileRequired if using SSH key authentication. The path on the Salt master to the SSH private key used during the minion bootstrap process. ssh_publickeyfileUse either ssh_publickeyfile or ssh_password. The path on the Salt master to the SSH public key which will be pushed to the Linux VM. ssh_passwordUse either ssh_publickeyfile or ssh_password. The password for the admin user on the newly-created Linux virtual machine. win_usernameRequired for Windows. The user to use to log into the newly-created Windows VM to install Salt. win_passwordRequired for Windows. The password to use to log into the newly-created Windows VM to install Salt.
win_installerRequired for Windows. The path to the Salt installer to be uploaded. resource_groupRequired. The resource group that all VM resources (VM, network interfaces, etc) will be created in. network_resource_groupOptional. If specified, then the VM will be connected to the virtual network in this resource group, rather than the parent resource group of the instance. The VM interfaces and IPs will remain in the configured resource_group with the VM. networkRequired. The virtual network that the VM will be spun up in. subnetOptional. The subnet inside the virtual network that the VM will be spun up in. Default is default. allocate_public_ipOptional. Default is False. If set to True, a public IP will be created and assigned to the VM. load_balancerOptional. The load-balancer for the VM's network interface to join. If specified, the backend_pool option needs to be set. backend_poolOptional. Required if the load_balancer option is set. The load-balancer's Backend Pool the VM's network interface will join. iface_nameOptional. The name to apply to the VM's network interface. If not supplied, the value will be set to <VM name>-iface0. dns_serversOptional. A list of the DNS servers to configure for the network interface (will be set on the VM by the DHCP of the VNET). my-azurearm-profile: availability_setOptional. If set, the VM will be added to the specified availability set. volumesOptional. A list of dictionaries describing data disks to attach to the instance can be specified using this setting. The data disk dictionaries are passed entirely to the Azure DataDisk object, so ad-hoc options can be handled as long as they are valid properties of the object. volumes: - disk_size_gb: 50 cleanup_disksOptional. Default is False. If set to True, disks will be cleaned up when the VM that they belong to is deleted. cleanup_vhdsOptional. Default is False. If set to True, VHDs will be cleaned up when the VM and disk that they belong to are deleted. Requires cleanup_disks to be set to True. cleanup_data_disksOptional. Default is False. If set to True, data disks (non-root volumes) will be cleaned up when the VM that they are attached to is deleted. Requires cleanup_disks to be set to True. cleanup_interfacesOptional. Default is False. Normally when a VM is deleted, its associated interfaces and IPs are retained. This is useful if you expect the deleted VM to be recreated with the same name and network settings. If you would like interfaces and IPs to be deleted when their associated VM is deleted, set this to True. userdataOptional. Any custom cloud data that needs to be specified. How this data is used depends on the operating system and image that is used. For instance, Linux images that use cloud-init will import this data for use with that program. Some Windows images will create a file with a copy of this data, and others will ignore it. If a Windows image creates a file, then the location will depend upon the version of Windows. This will be ignored if the userdata_file is specified. userdata_fileOptional. The path to a file to be read and submitted to Azure as user data. How this is used depends on the operating system that is being deployed. If used, any userdata setting will be ignored. userdata_sendkeysOptional. Set to True in order to generate salt minion keys and provide them as variables to the userdata script when running it through the template renderer. The keys can be referenced as {{opts['priv_key']}} and {{opts['pub_key']}}. userdata_templateOptional.
Enter the renderer, such as jinja, to be used for the userdata script template. wait_for_ip_timeoutOptional. Default is 600. When waiting for a VM to be created, Salt Cloud will attempt to connect to the VM's IP address until it starts responding. This setting specifies the maximum time to wait for a response. wait_for_ip_intervalOptional. Default is 10. How long to wait between attempts to connect to the VM's IP. wait_for_ip_interval_multiplierOptional. Default is 1. Increase the interval by this multiplier after each request; helps with throttling. expire_publisher_cacheOptional. Default is 604800. When fetching image data using --list-images, a number of web calls need to be made to the Azure ARM API. This is normally very fast when performed using a VM that exists inside Azure itself, but can be very slow when made from an external connection. By default, the publisher data will be cached, and only updated every 604800 seconds (7 days). If you need the publisher cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the publisher cache. expire_offer_cacheOptional. Default is 518400. See expire_publisher_cache for details on why this exists. By default, the offer data will be cached, and only updated every 518400 seconds (6 days). If you need the offer cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the offer cache. expire_sku_cacheOptional. Default is 432000. See expire_publisher_cache for details on why this exists. By default, the sku data will be cached, and only updated every 432000 seconds (5 days). If you need the sku cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the sku cache. expire_version_cacheOptional. Default is 345600. See expire_publisher_cache for details on why this exists. By default, the version data will be cached, and only updated every 345600 seconds (4 days). If you need the version cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the version cache. expire_group_cacheOptional. Default is 14400. See expire_publisher_cache for details on why this exists. By default, the resource group data will be cached, and only updated every 14400 seconds (4 hours). If you need the resource group cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the resource group cache. expire_interface_cacheOptional. Default is 3600. See expire_publisher_cache for details on why this exists. By default, the interface data will be cached, and only updated every 3600 seconds (1 hour). If you need the interface cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the interface cache. expire_network_cacheOptional. Default is 3600. See expire_publisher_cache for details on why this exists. By default, the network data will be cached, and only updated every 3600 seconds (1 hour). If you need the network cache to be updated at a different frequency, change this setting. Setting it to 0 will turn off the network cache. Other OptionsOther options relevant to Azure ARM. storage_accountRequired for actions involving an Azure storage account. storage_keyRequired for actions involving an Azure storage account. Show InstanceThis action is a thin wrapper around --full-query, which displays details on a single instance only.
In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. salt-cloud -a show_instance myinstance Getting Started with CloudStackCloudStack is one of the most popular cloud projects. It's an open source project to build public and/or private clouds. You can use Salt Cloud to launch CloudStack instances. Dependencies
ConfigurationUsing Salt for CloudStack requires an API key and a secret key, along with the API address endpoint information. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. exoscale: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory: exoscale-ubuntu: Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations exoscale-config exoscale: Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes exoscale exoscale: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images exoscale exoscale: CloudStack specific settingssecuritygroupNew in version 2017.7.0. You can specify a list of security groups (by name or id) that should be assigned to the VM: exoscale: Getting Started With DigitalOceanDigitalOcean is a public cloud host that specializes in Linux instances. ConfigurationUsing Salt for DigitalOcean requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More ssh_key_names can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-digitalocean-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory: digitalocean-ubuntu: Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations my-digitalocean-config my-digitalocean-config: Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-digitalocean-config my-digitalocean-config: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-digitalocean-config my-digitalocean-config: Profile Specifics:ssh_usernameIf using a FreeBSD image from DigitalOcean, you'll need to set the ssh_username setting to freebsd in your profile configuration. digitalocean-freebsd: userdata_fileNew in version 2016.11.6. Use userdata_file to specify the userdata file to upload for use with cloud-init if available. my-openstack-config: my-do-config: If no userdata_template is set in the cloud profile, then the master configuration will be checked for a userdata_template value. If this is not set, then no templating will be performed on the userdata_file. To disable templating in a cloud profile when a userdata_template has been set in the master configuration file, simply set userdata_template to False in the cloud profile: my-do-config: Miscellaneous InformationNOTE: DigitalOcean's concept of Applications is nothing
more than a pre-configured instance (same as a normal Droplet). You will find
examples such as Docker 0.7 Ubuntu 13.04 x64 and Wordpress on Ubuntu
12.10 when using the --list-images option. These names can be used
just like the rest of the standard instances when specifying an image in the
cloud profile configuration.
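For example, a profile sketch that selects one of those Application images by name (the profile name is illustrative and the size value is a placeholder; the provider name matches the example above):

    digitalocean-docker:
      provider: my-digitalocean-config
      image: Docker 0.7 Ubuntu 13.04 x64
      size: 512MB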
NOTE: If your domain's DNS is managed with DigitalOcean, and
your minion name matches your DigitalOcean managed DNS domain, you can
automatically create A and AAAA records for newly created droplets. Use
create_dns_record: True in your config to enable this. Adding
delete_dns_record: True to also delete records when a droplet is
destroyed is optional. Due to limitations in salt-cloud design, the destroy
code does not have access to the VM config data. WHETHER YOU ADD
create_dns_record: True OR NOT, salt-cloud WILL attempt to delete your
DNS records if the minion name matches. This will prevent advertising any
recycled IP addresses for destroyed minions.
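A sketch of those two settings in a profile (names are illustrative; all other settings are elided):

    digitalocean-ubuntu:
      provider: my-digitalocean-config
      create_dns_record: True
      delete_dns_record: True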
NOTE: If you need to perform the bootstrap using the local interface for droplets, this can be done by setting ssh_interface: private in your config. By default, salt-cloud connects over the public interface; if a firewall is preventing the connection to the Droplet over the public interface, you may need to set this option to connect via the private interface. Also, to use this feature, private_networking: True must be set in the config.
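A sketch combining the two settings described in the note above (the profile name is illustrative):

    digitalocean-private:
      provider: my-digitalocean-config
      private_networking: True
      ssh_interface: private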
NOTE: Additional documentation is available from
DigitalOcean.
Getting Started With Dimension Data CloudDimension Data is a global IT services company and part of the NTT Group. Dimension Data provides IT-as-a-Service to customers around the globe on its cloud platform (Compute as a Service). The CaaS service is available either on one of the public cloud instances or as a private instance on premises. http://cloud.dimensiondata.com/ CaaS has its own non-standard API; SaltStack provides a wrapper on top of this API with methods common to other IaaS solutions and public cloud providers. Therefore, you can use the Dimension Data module to communicate with both the public and private clouds. DependenciesThis driver requires the Python apache-libcloud and netaddr libraries to be installed. ConfigurationWhen you instantiate a driver you need to pass the following arguments to the driver constructor:
Possible regions:
# Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-dimensiondata-config: NOTE: In version 2015.8.0, the provider parameter in
cloud provider definitions was renamed to driver. This change was made
to avoid confusion with the provider parameter that is used in cloud
profile definitions. Cloud provider definitions now use driver to refer
to the Salt cloud module that provides the underlying functionality to connect
to a cloud host, while cloud profiles continue to use provider to refer
to provider configurations that you define.
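A minimal provider sketch with the constructor arguments described above (credential values are placeholders and the key names are assumptions based on the upstream driver; pick a region from the list above):

    my-dimensiondata-config:
      user_id: my_username
      key: my_password
      region: dd-na
      driver: dimensiondata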
ProfilesCloud ProfilesDimension Data images have an inbuilt size configuration; there is no list of sizes (although, if the command --list-sizes is run, a default will be returned). Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-dimensiondata-config my-dimensiondata-config: ---------- dimensiondata: Locations can be obtained using the --list-locations option for the salt-cloud command: my-dimensiondata-config: NOTE: Dimension Data Cloud REST API documentation is available
from Dimension Data MCP 2.
Getting Started With AWS EC2Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support. Previously, the suggested driver for AWS EC2 was the aws driver. This has been deprecated in favor of the ec2 driver. Configuration using the old aws driver will still function, but that driver is no longer in active development. DependenciesThis driver requires the Python requests library to be installed. ConfigurationThe following example illustrates some of the options that can be set. These parameters are discussed in more detail below. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-ec2-southeast-public-ips: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access CredentialsThe id and key settings may be found in the Security Credentials area of the AWS Account page: https://portal.aws.amazon.com/gp/aws/securityCredentials Both are located in the Access Credentials area of the page, under the Access Keys tab. The id setting is labeled Access Key ID, and the key setting is labeled Secret Access Key. Note: if either id or key is set to 'use-instance-role-credentials' it is assumed that Salt is running on an AWS instance, and the instance role credentials will be retrieved and used. Since both the id and key are required parameters for the AWS ec2 provider, it is recommended to set both to 'use-instance-role-credentials' for this functionality. A "static" and "permanent" Access Key ID and Secret Key can be specified, but this is not recommended. Instance role keys are rotated on a regular basis, and are the recommended method of specifying AWS credentials. Windows Deploy TimeoutsFor Windows instances, it may take longer than normal for the instance to be ready. In these circumstances, the provider configuration can be configured with a win_deploy_auth_retries and/or a win_deploy_auth_retry_delay setting, which default to 10 retries and a one second delay between retries. These retries and timeouts relate to validating the Administrator password once AWS provides the credentials via the AWS API. Key PairsIn order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us-east-1 region can be configured at: https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs Keys in the us-west-1 region can be configured at https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs ...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600. Security GroupsAn instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us-east-1 region can be configured at: https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups ...and so on. A security group defines firewall rules which an instance will adhere to. If the salt-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt. IAM ProfileAmazon EC2 instances support the concept of an instance profile, which is a logical container for the IAM role. At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role. Scaffolding the profile is a 2-step configuration process:
> aws iam create-instance-profile --instance-profile-name PROFILE_NAME > aws iam add-role-to-instance-profile --instance-profile-name PROFILE_NAME --role-name ROLE_NAME Once the profile is created, you can use the PROFILE_NAME to configure your cloud profiles. Cloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles: base_ec2_private: The profile can now be realized with a salt command: # salt-cloud -p base_ec2 ami.example.com # salt-cloud -p base_ec2_public ami.example.com # salt-cloud -p base_ec2_private ami.example.com This will create an instance named ami.example.com in EC2. The minion that is installed on this instance will have an id of ami.example.com. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt 'ami.example.com' test.version Required SettingsThe following settings are always required for EC2: # Set the EC2 login data my-ec2-config: Optional SettingsEC2 allows a userdata file to be passed to the instance to be created. This functionality was added to Salt in the 2015.5.0 release. my-ec2-config: NOTE: From versions 2016.11.0 and 2016.11.3, this file was
passed through the master's renderer to template it. However, this
caused issues with non-YAML data, so templating is no longer performed by
default. To template the userdata_file, add a userdata_template option
to the cloud profile:
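For example (the userdata file path is illustrative):

    my-ec2-config:
      # ... other profile settings elided ...
      userdata_file: /usr/local/etc/salt/my-userdata-file
      userdata_template: jinja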
my-ec2-config: If no userdata_template is set in the cloud profile, then the master configuration will be checked for a userdata_template value. If this is not set, then no templating will be performed on the userdata_file. To disable templating in a cloud profile when a userdata_template has been set in the master configuration file, simply set userdata_template to False in the cloud profile: my-ec2-config: EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity. my-ec2-config: EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt-cloud command is run from another EC2 instance, the private IP should be used. my-ec2-config: Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami). my-ec2-config: Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file: my-ec2-config: Multiple security groups can also be specified in the same fashion: my-ec2-config: EC2 instances can be added to an AWS Placement Group by specifying the placementgroup option: my-ec2-config: Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Console. my-ec2-config: You can optionally specify tags to apply to the EC2 spot instance request. A spot instance request itself is an object in AWS. The following example will set two tags on the spot instance request. my-ec2-config: By default, the spot instance type is set to 'one-time', meaning it will be launched and, if it's ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by you or AWS), set the type to 'persistent'. NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your maximum bid. The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching:
If you find that you're being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file, which will double the polling interval after each request to AWS. wait_for_spot_interval: 1 wait_for_spot_interval_multiplier: 2 See the AWS Spot Instances documentation for more information. Block device mappings enable you to specify additional EBS volumes or instance store volumes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the AWS documentation for a listing of the available instance stores and device names. my-ec2-config: You can also use block device mappings to change the size of the root device at provisioning time. For example, assuming the root device is '/dev/sda', you can set its size to 100G by using the following configuration. my-ec2-config: Tagging of block devices can be set on a per-device basis. For example, you may have multiple devices defined in your block_device_mappings structure. You have the option to set tags on any one device or all of them as shown in the following configuration. my-ec2-config: You can configure any valid AWS tag name as shown in the above example, including 'Name'. If you do not configure the tag 'Name', it will be automatically created with a value set to the virtual machine name. If you configure the tag 'Name', the value you configure will be used rather than defaulting to the virtual machine name as shown in the following configuration. my-ec2-config: Existing EBS volumes may also be attached (not created) to your instances, or you can create new EBS volumes based on EBS snapshots. To simply attach an existing volume, use the volume_id parameter. device: /dev/xvdj volume_id: vol-12345abcd Or, to create a volume from an EBS snapshot, use the snapshot parameter. device: /dev/xvdj snapshot: snap-abcd12345 Note that volume_id will take precedence over the snapshot parameter. Tags can be set once an instance has been launched. my-ec2-config: Setting up a Master inside EC2Salt Cloud can configure Salt Masters as well as Minions. Use the make_master setting to use this functionality. my-ec2-config: When creating a Salt Master inside EC2 with make_master: True, or when the Salt Master is already located and configured inside EC2, by default, minions connect to the master's public IP address during Salt Cloud's provisioning process. Depending on how your security groups are defined, the minions may or may not be able to communicate with the master. In order to use the master's private IP in EC2 instead of the public IP, set the salt_interface to private_ips. my-ec2-config: Modify EC2 TagsOne of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt-cloud are actually just stored as a tag called Name.
Salt Cloud has the ability to manage these tags: salt-cloud -a get_tags mymachine salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff' salt-cloud -a del_tags mymachine tag1,tag2,tag3 It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances: salt-cloud -f get_tags my_ec2 resource_id=af5467ba salt-cloud -f set_tags my_ec2 resource_id=af5467ba tag1=somestuff salt-cloud -f del_tags my_ec2 resource_id=af5467ba tags=tag1,tag2,tag3 Rename EC2 InstancesAs mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance and the salt keys. salt-cloud -a rename mymachine newname=yourmachine Rename on DestroyWhen instances on EC2 are destroyed, there will be a lag between the time that the action is sent and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed. The new name will look something like: myinstance-DEL20f5b8ad4eb64ed88f2c428df80a1a0c In order to enable this, add a rename_on_destroy line to the main configuration file: my-ec2-config: Listing ImagesNormally, images can be queried on a cloud provider by passing the --list-images argument to Salt Cloud. This still holds true for EC2: salt-cloud --list-images my-ec2-config However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified by adding an owner argument to the provider configuration: owner: aws-marketplace The possible values for this setting are amazon, aws-marketplace, self, <AWS account ID> or all. The default setting is amazon. Take note that all and aws-marketplace may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data. It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the avail_images function directly: salt-cloud -f avail_images my-ec2-config owner=aws-marketplace EC2 ImagesThe following are lists of available AMI images, generally sorted by OS. These lists are on 3rd-party websites and are not managed by SaltStack in any way. They are provided here as a reference for those who are interested, and contain no warranty (express or implied) from anyone affiliated with SaltStack. Most of them have never been used, much less tested, by the SaltStack team.
NOTE: If the image of a profile does not start with ami-, the latest image with that name will be used. For example, to create a CentOS 7 profile, instead of using the AMI like image: ami-1caef165, we can use its name like image: 'CentOS Linux 7 x86_64 HVM EBS ENA 1803_01'. We can also use a pattern like below to get the latest CentOS 7: profile-id: show_imageThis is a function that describes an AMI on EC2. This will give insight as to the defaults that will be applied to an instance using a particular AMI. $ salt-cloud -f show_image ec2 image=ami-fd20ad94 show_instanceThis action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. $ salt-cloud -a show_instance myinstance ebs_optimizedThis argument enables switching of the EbsOptimized setting, which defaults to 'false'. Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance. This setting can be added to the profile or map file for an instance. If set to True, this setting will enable an instance to be EbsOptimized. ebs_optimized: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: del_root_vol_on_destroyThis argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. If set, this setting will apply to the root EBS volume. del_root_vol_on_destroy: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: del_all_vols_on_destroyThis argument overrides the default DeleteOnTermination setting in the AMI for the non-root EBS volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. If set, this setting will apply to any (non-root) volumes that were created by salt-cloud using the 'volumes' setting.
The volumes will not be deleted under the following conditions: * If a volume is detached before terminating the instance * If a volume is created without this setting and attached to the instance. del_all_vols_on_destroy: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: The setting for this may be changed on all volumes of an existing instance using one of the following commands: salt-cloud -a delvol_on_destroy myinstance salt-cloud -a keepvol_on_destroy myinstance salt-cloud -a show_delvol_on_destroy myinstance The setting for this may be changed on a volume on an existing instance using one of the following commands: salt-cloud -a delvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a delvol_on_destroy myinstance volume_id=vol-1a2b3c4d salt-cloud -a keepvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a keepvol_on_destroy myinstance volume_id=vol-1a2b3c4d salt-cloud -a show_delvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a show_delvol_on_destroy myinstance volume_id=vol-1a2b3c4d EC2 Termination ProtectionEC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality. salt-cloud -a show_term_protect mymachine salt-cloud -a enable_term_protect mymachine salt-cloud -a disable_term_protect mymachine Alternate EndpointNormally, EC2 endpoints are built using the region and the service_url. The resulting endpoint would follow this pattern: ec2.<region>.<service_url> This results in an endpoint that looks like: ec2.us-east-1.amazonaws.com There are other projects that support an EC2 compatibility layer, which this scheme does not account for. This can be overridden by specifying the endpoint directly in the main cloud configuration file: my-ec2-config: Volume ManagementThe EC2 driver has several functions and actions for management of EBS volumes. Creating VolumesA volume may be created independent of an instance. A zone must be specified. A size or a snapshot may be specified (in GiB). If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used. The following parameters may also be set (when providing a snapshot OR size):
salt-cloud -f create_volume ec2 zone=us-east-1b salt-cloud -f create_volume ec2 zone=us-east-1b size=10 salt-cloud -f create_volume ec2 zone=us-east-1b snapshot=snap12345678 salt-cloud -f create_volume ec2 size=10 type=standard salt-cloud -f create_volume ec2 size=10 type=gp2 salt-cloud -f create_volume ec2 size=10 type=io1 iops=1000 Attaching VolumesUnattached volumes may be attached to an instance. The following values are required: name or instance_id, volume_id, and device. salt-cloud -a attach_volume myinstance volume_id=vol-12345 device=/dev/sdb1 Show a VolumeThe details about an existing volume may be retrieved. salt-cloud -a show_volume myinstance volume_id=vol-12345 salt-cloud -f show_volume ec2 volume_id=vol-12345 Detaching VolumesAn existing volume may be detached from an instance. salt-cloud -a detach_volume myinstance volume_id=vol-12345 Deleting VolumesA volume that is not attached to an instance may be deleted. salt-cloud -f delete_volume ec2 volume_id=vol-12345 Managing Key PairsThe EC2 driver has the ability to manage key pairs. Creating a Key PairA key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immediately. salt-cloud -f create_keypair ec2 keyname=mykeypair Importing a Key Pairsalt-cloud -f import_keypair ec2 keyname=mykeypair file=/path/to/id_rsa.pub Show a Key PairThis function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon). salt-cloud -f show_keypair ec2 keyname=mykeypair Delete a Key PairThis function removes the key pair from Amazon. salt-cloud -f delete_keypair ec2 keyname=mykeypair Launching instances into a VPCSimple launching into a VPCIn the Amazon web interface, identify the id or the name of the subnet into which your image should be created. Then, edit your cloud.profiles file like so: profile-id: Note that 'subnetid' takes precedence over 'subnetname', but 'securitygroupid' and 'securitygroupname' are merged together to generate a single list for SecurityGroups of instances. Specifying interface propertiesNew in version 2014.7.0. Launching into a VPC allows you to specify more complex configurations for the network interfaces of your virtual machines, for example: profile-id: Note that it is an error to assign a 'subnetid', 'subnetname', 'securitygroupid' or 'securitygroupname' to a profile where the interfaces are manually configured like this. These are both really properties of each network interface, not of the machine itself. Getting Started With GoGridGoGrid is a public cloud host that supports Linux and Windows. ConfigurationTo use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab. The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid: # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-gogrid-config: NOTE: A Note about using Map files with GoGrid:
Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances. NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory: gogrid_512: Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-gogrid-config my-gogrid-config: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-gogrid-config my-gogrid-config: Assigning IPsNew in version 2015.8.0. The GoGrid API allows IP addresses to be manually assigned. Salt Cloud supports this functionality by allowing an IP address to be specified using the assign_public_ip argument. This likely makes the most sense inside a map file, but it may also be used inside a profile. gogrid_512: Getting Started With Google Compute EngineGoogle Compute Engine (GCE) is Google infrastructure-as-a-service that lets you run your large-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google's infrastructure. You can find out more about GCE and other Google Cloud Platform services at https://cloud.google.com. Dependencies
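The central dependency is Apache libcloud: the GCE driver is built on top of libcloud, so the Python apache-libcloud library must be installed on the machine running salt-cloud (libcloud >= 0.17.0 also simplifies authentication, as the setup notes below explain).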
Changed in version 2017.7.0.
Google Compute Engine Setup
If you are using libcloud >= 0.17.0, it is
recommended that you use the JSON format file you downloaded
above and skip to the Provider Configuration section below, using the
JSON file in place of 'NEW.pem' in the documentation.
If you are using an older version of libcloud or are unsure of the version you have, please follow the instructions below to generate and format a new P12 key. In the new Service Account section, click Generate new P12 key, which will automatically download a .p12 private key file. The .p12 private key needs to be converted to a format compatible with libcloud. This new Google-generated private key was encrypted using notasecret as a passphrase. Use the following command to convert the key, and record the location of the converted private key for use in the service_account_private_key setting of the /usr/local/etc/salt/cloud file: openssl pkcs12 -in ORIG.p12 -passin pass:notasecret \ -nodes -nocerts | openssl rsa -out NEW.pem Provider ConfigurationSet up the provider cloud config at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/*.conf: gce-config: NOTE: Empty strings as values for
service_account_private_key and service_account_email_address
can be used on GCE instances. This will result in the service account assigned
to the GCE instance being used.
NOTE: The value provided for project must not contain
underscores or spaces and is labeled as "Project ID" on the Google
Developers Console.
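Pulling the preceding notes together, a provider sketch might look like the following (the project ID, service account email, and key path are placeholders):

    gce-config:
      project: my-project-id
      service_account_email_address: 1234567890@developer.gserviceaccount.com
      service_account_private_key: /usr/local/etc/salt/NEW.pem
      driver: gce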
NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profile ConfigurationSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/*.conf: my-gce-profile: The profile can be realized now with a salt command: salt-cloud -p my-gce-profile gce-instance This will create a salt minion instance named gce-instance in GCE. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with a salt-minion installed, connectivity to it can be verified with Salt: salt gce-instance test.version GCE Specific SettingsConsult the sample profile below for more information about GCE specific settings. Some of them are mandatory and are properly labeled below but typically also include a hard-coded default. Initial ProfileSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/gce.conf: my-gce-profile: imageImage is used to define what Operating System image should be used for the instance. Examples are Debian 7 (wheezy) and CentOS 6. Required. sizeA 'size', in GCE terms, refers to the instance's 'machine type'. See the on-line documentation for a complete list of GCE machine types. Required. locationA 'location', in GCE terms, refers to the instance's 'zone'. GCE has the notion of both Regions (e.g. us-central1, europe-west1, etc) and Zones (e.g. us-central1-a, us-central1-b, etc). Required. networkUse this setting to define the network resource for the instance. All GCE projects contain a network named 'default' but it's possible to use this setting to create instances belonging to a different network resource. subnetworkUse this setting to define the subnetwork an instance will be created in. This requires that the network your instance is created under has a mode of 'custom' or 'auto'. Additionally, the subnetwork your instance is created under is associated with the location you provide. New in version 2017.7.0. labelsThis setting allows you to set labels on your GCE instances. It should be a dictionary and must be parse-able by the python ast.literal_eval() function to convert it to a python dictionary. New in version 3006. tagsGCE supports instance/network tags and this setting allows you to set custom tags. It should be a list of strings and must be parse-able by the python ast.literal_eval() function to convert it to a python list. metadataGCE supports instance metadata and this setting allows you to set custom metadata. It should be a hash of key/value strings and parse-able by the python ast.literal_eval() function to convert it to a python dictionary. use_persistent_diskUse this setting to ensure that when new instances are created, they will use a persistent disk to preserve data between instance terminations and re-creations. delete_boot_pdIn the event that you wish the boot persistent disk to be permanently deleted when you destroy an instance, set delete_boot_pd to True. ssh_interfaceNew in version 2015.5.0. Specify whether to use public or private IP for deploy script. Valid options are:
external_ipPer instance setting: Use a named fixed IP address for this host. Valid options are:
Optionally, pass the name of a GCE address to use a fixed IP address. If the address does not already exist, it will be created. ex_disk_typeGCE supports two different disk types, pd-standard and pd-ssd. The default disk type setting is pd-standard. To specify using an SSD disk, set pd-ssd as the value. New in version 2014.7.0. ip_forwardingGCE instances can be enabled to use IP Forwarding. When set to True, this option allows the instance to send/receive non-matching src/dst packets. Default is False. New in version 2015.8.1. Profile with scopesScopes can be specified by setting the optional ex_service_accounts key in your cloud profile. The following example enables the bigquery scope. my-gce-profile: Email can also be specified as an (optional) parameter. my-gce-profile: ...snip There can be multiple entries for scopes since ex_service_accounts accepts a list of dictionaries. For more information refer to the libcloud documentation on specifying service account scopes. SSH Remote AccessGCE instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Append something like this to /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/*.conf: my-gce-profile: If you have not already used this SSH key to log in to instances in this GCE project, you will also need to add the public key to your project's metadata at https://cloud.google.com/console. You could also add it via the metadata setting: my-gce-profile: Single instance detailsThis action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. salt-cloud -a show_instance myinstance Destroy, persistent disks, and metadataAs noted in the provider configuration, it's possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When destroy is called, if the instance contains a salt-cloud-profile key, its value is used to reference the matching profile to determine if delete_boot_pd is set to True. Be aware that any GCE instances created with salt cloud will contain this custom salt-cloud-profile metadata entry. List various resourcesIt's also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images. salt-cloud --list-locations gce salt-cloud --list-sizes gce salt-cloud --list-images gce Persistent DiskThe Compute Engine provider provides functions via salt-cloud to manage your Persistent Disks. You can create and destroy disks as well as attach and detach them from running instances. CreateWhen creating a disk, you can create an empty disk and specify its size (in GB), or specify either an 'image' or 'snapshot'. salt-cloud -f create_disk gce disk_name=pd location=us-central1-b size=200 DeleteDeleting a disk only requires the name of the disk to delete. salt-cloud -f delete_disk gce disk_name=old-backup AttachAttaching a disk to an existing instance is really an 'action' and requires both an instance name and disk name. It's possible to use this action to create bootable persistent disks if necessary.
Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then it cannot be attached in READ_WRITE mode to any instance). salt-cloud -a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes DetachDetaching a disk is also an action against an instance and only requires the name of the disk. Note that this does not safely sync and unmount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance. salt-cloud -a detach_disk myinstance disk_name=pd Show diskIt's also possible to look up the details for an existing disk with either a function or an action. salt-cloud -a show_disk myinstance disk_name=pd salt-cloud -f show_disk gce disk_name=pd Create snapshotYou can take a snapshot of an existing disk's content. The snapshot can then be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk. salt-cloud -f create_snapshot gce name=backup-20140226 disk_name=pd Delete snapshotYou can delete a snapshot when it's no longer needed by specifying the name of the snapshot. salt-cloud -f delete_snapshot gce name=backup-20140226 Show snapshotUse this function to look up information about the snapshot. salt-cloud -f show_snapshot gce name=backup-20140226 NetworkingCompute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other by an internal DNS service that resolves instance names. Instances within a private network can also communicate with each other directly, without needing special routing or firewall rules, even if they span different regions/zones. Networks also support custom firewall rules. By default, traffic between instances on the same private network is open to all ports and protocols. Inbound SSH traffic (port 22) is also allowed but all other inbound traffic is blocked. Create networkNew networks require a name and CIDR range if they don't have a 'mode'. Optionally, 'mode' can be provided. Supported modes are 'auto', 'custom', 'legacy'. Optionally, 'description' can be provided to add an extra note to your network. New instances can be created and added to this network by setting the network name during create. It is not possible to add existing instances to, or remove them from, a network. salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24 salt-cloud -f create_network gce name=mynet mode=auto description=some optional info. Changed in version 2017.7.0. Destroy networkDestroy a network by specifying the name. If a resource is currently using the target network an exception will be raised. salt-cloud -f delete_network gce name=mynet Show networkSpecify the network name to view information about the network. salt-cloud -f show_network gce name=mynet Create subnetworkNew subnetworks require a name, region, and CIDR range. Optionally, 'description' can be provided to add an extra note to your subnetwork. New instances can be created and added to this subnetwork by setting the subnetwork name during create. It is not possible to add existing instances to, or remove them from, a subnetwork. salt-cloud -f create_subnetwork gce name=mynet network=mynet region=us-central1 cidr=10.0.10.0/24 salt-cloud -f create_subnetwork gce name=mynet network=mynet region=us-central1 cidr=10.10.10.0/24 description=some info about my subnet. New in version 2017.7.0.
Destroy subnetworkDestroy a subnetwork by specifying the name and region. If a resource is currently using the target subnetwork an exception will be raised. salt-cloud -f delete_subnetwork gce name=mynet region=us-central1 New in version 2017.7.0. Show subnetworkSpecify the subnetwork name to view information about the subnetwork. salt-cloud -f show_subnetwork gce name=mynet New in version 2017.7.0. Create addressCreate a new named static IP address in a region. salt-cloud -f create_address gce name=my-fixed-ip region=us-central1 Delete addressDelete an existing named fixed IP address. salt-cloud -f delete_address gce name=my-fixed-ip region=us-central1 Show addressView details on a named address. salt-cloud -f show_address gce name=my-fixed-ip region=us-central1 Create firewallYou'll need to create custom firewall rules if you want to allow traffic other than what is described above. For instance, if you run a web service on your instances, you'll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name and it will use the 'default' network unless otherwise specified with a 'network' attribute. Firewalls also support instance tags for source/destination. salt-cloud -f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp Delete firewallDeleting a firewall rule will prevent any previously allowed traffic for the named firewall rule. salt-cloud -f delete_fwrule gce name=web Show firewallUse this function to review an existing firewall rule's information. salt-cloud -f show_fwrule gce name=web Load BalancerCompute Engine possesses a load-balancer feature for splitting traffic across multiple instances. Please reference the documentation for a more complete description. The load-balancer functionality is slightly different from that described in Google's documentation. The concepts of TargetPool and ForwardingRule are consolidated in salt-cloud/libcloud. HTTP Health Checks are optional. HTTP Health CheckHTTP Health Checks can be used as a means to toggle load-balancing across instance members, or to detect if an HTTP site is functioning. A common use-case is to set up a health check URL; if you want to toggle traffic on/off to an instance, you can temporarily have it return a non-200 response. A non-200 response to the load-balancer's health check will keep the LB from sending any new traffic to the "down" instance. Once the instance's health check URL begins returning 200 responses, the LB will again start to send traffic to it. Review Compute Engine's documentation for allowable parameters. You can use the following salt-cloud functions to manage your HTTP health checks. salt-cloud -f create_hc gce name=myhc path=/ port=80 salt-cloud -f delete_hc gce name=myhc salt-cloud -f show_hc gce name=myhc Load-balancerWhen creating a new load-balancer, it requires a name, region, port range, and list of members. There are other optional parameters for the protocol and a list of health checks. Deleting or showing details about the LB only requires the name. salt-cloud -f create_lb gce name=lb region=... ports=80 members=w1,w2,w3 salt-cloud -f delete_lb gce name=lb salt-cloud -f show_lb gce name=lb You can also create a load balancer using a named fixed IP address by specifying the name of the address. If the address does not exist yet it will be created. salt-cloud -f create_lb gce name=my-lb region=us-central1 ports=234 members=s1,s2,s3 address=my-lb-ip Attach and Detach LBIt is possible to attach or detach an instance from an existing load-balancer.
Both the instance and load-balancer must exist before using these functions. salt-cloud -f attach_lb gce name=lb member=w4 salt-cloud -f detach_lb gce name=lb member=oops Getting Started With HP CloudHP Cloud is a major public cloud platform and uses the libcloud openstack driver. The current version of OpenStack that HP Cloud uses is Havana. When an instance is booted, it must have a floating IP added to it in order to connect to it; an example further below adds context to this statement. Set up a cloud provider configuration fileTo use the openstack driver for HP Cloud, set up the cloud provider configuration file as in the example shown below: /usr/local/etc/salt/cloud.providers.d/hpcloud.conf: hpcloud-config: The example that follows uses the openstack driver. NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Compute RegionOriginally, HP Cloud, in its OpenStack Essex version (1.0), had 3 availability zones in one region, US West (region-a.geo-1), each of which behaved as a region. This has since changed, and the current OpenStack Havana version of HP Cloud (1.1) has simplified this and now has two regions to choose from: region-a.geo-1 -> US West region-b.geo-1 -> US East AuthenticationThe user is the same user that is used to log into the HP Cloud management UI. The tenant can be found in the upper left under "Project/Region/Scope". It is often named the same as the user, albeit with -project1 appended. The password is of course what you created your account with. The management UI also has other information such as being able to select US East or US West. Set up a cloud profile config fileThe profile shown below is a known working profile for an Ubuntu instance. The profile configuration file is stored in the following location: /usr/local/etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf: hp_ae1_ubuntu: Some important things about the example above:
# salt-cloud --list-images hp_ae1
Launch an instanceTo instantiate a machine based on this profile (example): # salt-cloud -p hp_ae1_ubuntu ubuntu_instance_1 After several minutes, this will create an instance named ubuntu_instance_1 running in HP Cloud in the US East region, set up the minion, and then return information about the instance once completed. Manage the instanceOnce the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt ubuntu_instance_1 ping SSH to the instanceAdditionally, the instance can be accessed via SSH using the floating IP assigned to it: # ssh ubuntu@<floating ip> Using a private IPAlternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt-cloud is running within the cloud on an instance that is on the same network with all the other instances (minions). The example below is a modified version of the previous example. Note the use of ssh_interface: hp_ae1_ubuntu: With this setup, salt-cloud will use the private IP address to ssh into the instance and set up the salt-minion. Getting Started With JoyentJoyent is a public cloud host that supports SmartOS, Linux, FreeBSD, and Windows. DependenciesThis driver requires the Python requests library to be installed. ConfigurationThe Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-joyent-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory: joyent_512: Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-joyent-config my-joyent-config: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-joyent-config my-joyent-config: SmartDataCenterThis driver can also be used with the Joyent SmartDataCenter project. More details can be found at: Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included: api_host_suffix: .api.myhostname.com Miscellaneous ConfigurationThe following configuration items can be set in either provider or profile configuration files. use_sslWhen set to True (the default), attach https:// to any URL that does not already have http:// or https:// included at the beginning. The best practice is to leave the protocol out of the URL, and use this setting to manage it. verify_sslWhen set to True (the default), the underlying web library will verify the SSL certificate. This should only be set to False for debugging. Getting Started With LibvirtLibvirt is a toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes). This Salt Cloud driver is currently geared towards libvirt with qemu-kvm. https://libvirt.org/ Host Dependencies
Salt-Cloud Dependencies
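At minimum, the machine running salt-cloud needs the libvirt Python binding (commonly packaged as libvirt-python) so the driver can talk to the libvirt daemon; treat the exact package name as an assumption for your distribution.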
Provider ConfigurationFor every KVM host a provider needs to be set up. The provider currently maps to one libvirt daemon (e.g. one KVM host). Set up the provider cloud configuration file at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/*.conf. # Set up a provider with qemu+ssh protocol kvm-via-ssh: Cloud ProfilesVirtual machines get cloned from so-called Cloud Profiles. Profiles can be set up at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/*.conf:
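A profile sketch using the settings discussed in the following sections (the base_domain value and credentials are placeholders, and base_domain as a key name is an assumption if your driver version differs; clone_strategy and ip_source are explained under Optional Settings below):

    centos7:
      provider: kvm-via-ssh
      base_domain: base-centos7-64
      ssh_username: root
      password: my-secret-password
      clone_strategy: quick
      ip_source: ip-learning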
centos7: The profile can be realized now with a salt command: salt-cloud -p centos7 my-centos7-clone This will create an instance named my-centos7-clone on the cloud host. Also the minion id will be set to my-centos7-clone. If the command was executed on the salt-master, its Salt key will automatically be accepted on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: salt my-centos7-clone test.version Required SettingsThe following settings are always required for libvirt: centos7: SSH Key AuthenticationInstead of specifying a password, an authorized key can be used for the minion setup. Ensure that the ssh user of your base image has the public key you want to use in ~/.ssh/authorized_keys. If you want to use a non-root user, you will likely want to configure salt-cloud to use sudo. An example using root: centos7: An example using a non-root user: centos7: Optional Settingscentos7: The clone_strategy controls how the clone is done. In the case of full, the disks are copied, creating a standalone clone. If quick is used, the disks of the base domain are used as backing disks for the clone. This results in nearly instantaneous clones at the expense of slower write performance. The quick strategy has a number of requirements:
The ip_source setting controls how the IP address of the cloned instance is determined. When using ip-learning, the IP is requested from libvirt. This needs a recent libvirt version and may only work for NAT/routed networks where libvirt runs the dhcp server. Another option is to use qemu-agent; this requires that the qemu-agent is installed and configured to run at startup in the base domain. The validate_xml setting is available to disable XML validation by libvirt when cloning. See also salt.cloud.clouds.libvirt Getting Started With LinodeLinode is a public cloud host with a focus on Linux instances. DependenciesThis driver requires the Python requests library to be installed. Provider ConfigurationConfiguration Options
Example ConfigurationSet up the provider cloud configuration file at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/*.conf. my-linode-provider: For use with APIv3 (deprecated): my-linode-provider-v3: Profile ConfigurationConfiguration Options
Example ConfigurationSet up a profile configuration in /usr/local/etc/salt/cloud.profiles.d/: my-linode-profile: The my-linode-profile can be realized now with a salt command: salt-cloud -p my-linode-profile my-linode-instance This will create a salt minion instance named my-linode-instance in Linode. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with a salt-minion installed, connectivity to it can be verified with Salt: salt my-linode-instance test.version A more advanced configuration utilizing all of the configuration options might look like: my-linode-profile-advanced: A legacy configuration for use with APIv3 might look like: my-linode-profile-v3: Migrating to APIv4Linode APIv3 has been deprecated and will be shut down in the coming months. You can opt in to using APIv4 by setting the api_version provider configuration option to v4. When switching to APIv4, you will also need to generate a new token. See here for more information. Notable ChangesMove from label references to ID references. The profile configuration parameters location, size, and image have moved from accepting label-based references to IDs. See the profile configuration section for more details. The ``disk_size`` profile configuration parameter has been deprecated. The parameter will not be taken into account when creating new VMs while targeting APIv4. See the disk_size description under the profile configuration section for more details. The ``boot`` function no longer requires a ``config_id``. A config can be inferred by the API instead when booting. The ``clone`` function has renamed parameters to match convention. The old version of these parameters will not be supported when targeting APIv4. * datacenter_id has been deprecated in favor of location. * plan_id has been deprecated in favor of size. The ``get_plan_id`` function has been deprecated and will not be supported by APIv4. IDs are now the only way of referring to a "plan" (or type/size). Query UtilitiesListing SizesAvailable sizes can be obtained by running one of: salt-cloud --list-sizes my-linode-provider salt-cloud -f avail_sizes my-linode-provider This will list all Linode sizes/types which can be referenced in VM profiles. my-linode-config: Listing ImagesAvailable images can be obtained by running one of: salt-cloud --list-images my-linode-provider salt-cloud -f avail_images my-linode-provider This will list all Linode images which can be referenced in VM profiles. Official images are available under the linode namespace. my-linode-config: Listing LocationsAvailable locations can be obtained by running one of: salt-cloud --list-locations my-linode-provider salt-cloud -f avail_locations my-linode-provider This will list all Linode regions which can be referenced in VM profiles. my-linode-config: CloningTo clone a Linode, add a profile with a clonefrom key, and a script_args: -C. clonefrom should be the name of the Linode that is the source for the clone. script_args: -C passes a -C to the salt-bootstrap script, which only configures the minion and doesn't try to install a new copy of salt-minion. This way the minion gets new keys and the keys get pre-seeded on the master, and the /usr/local/etc/salt/minion file has the right minion 'id:' declaration. Cloning requires a post 2015-02-01 salt-bootstrap. It is safest to clone a stopped machine.
To stop a machine run salt-cloud -a stop machine_to_clone To create a new machine based on another machine, add an entry to your linode cloud profile that looks like this: li-clone: Then run salt-cloud as normal, specifying -p li-clone. The profile name can be anything; it doesn't have to be li-clone. clonefrom: is the name of an existing machine in Linode from which to clone. script_args: -C -F is necessary to avoid re-deploying Salt via salt-bootstrap. -C will just re-deploy keys so the new minion will not have a duplicate key or minion_id on the Master, and -F will force a rewrite of the Minion config file on the new Minion. If -F isn't provided, the new Minion will have the machine_to_clone's Minion ID, instead of its own Minion ID, which can cause problems. NOTE: Pull Request #733 to the salt-bootstrap repo makes
the -F argument unnecessary. Once that change is released into a
stable version of the Bootstrap Script, the -C argument will be
sufficient for the script_args setting.
If the machine_to_clone does not have Salt installed on it, refrain from using the script_args: -C -F altogether, because the new machine will need to have Salt installed. Getting Started With LXCThe LXC module is designed to install Salt in an LXC container on a controlled and possibly remote minion. In other words, Salt will connect to a minion, then from that minion:
Limitations
OperationSalt's LXC support uses lxc.init via the lxc.cloud_init_interface and seeds the minion via seed.mkconfig. You can provide a profile and a network profile to those LXC VMs, just as if you were using the minion module directly. Order of operations:
Provider configurationHere is a simple provider configuration: # Note: This example goes in /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. devhost10-lxc: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profile configurationPlease read LXC Management with Salt before anything else, especially Profiles. Here are the options to configure your containers:
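As a rough sketch of how those container options fit together in a profile (values here are illustrative placeholders, not defaults):
devhost10-lxc:
  provider: devhost10-lxc
  from_container: ubuntu
  backing: lvm
  sudo: True
  size: 3g
  ip: 10.0.3.9
  minion:
    master: 10.5.0.1
    master_port: 4506
  lxc_conf:
    - lxc.utsname: superlxc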
Using profiles: # Note: This example would go in /usr/local/etc/salt/cloud.profiles or any file in the # /usr/local/etc/salt/cloud.profiles.d/ directory. devhost10-lxc: Using inline profiles (e.g. to override the network bridge): devhost11-lxc: Using an LXC template instead of a clone: devhost11-lxc: Static IP: # Note: This example would go in /usr/local/etc/salt/cloud.profiles or any file in the # /usr/local/etc/salt/cloud.profiles.d/ directory. devhost10-lxc: DHCP: # Note: This example would go in /usr/local/etc/salt/cloud.profiles or any file in the # /usr/local/etc/salt/cloud.profiles.d/ directory. devhost10-lxc: Driver Support
Getting Started With 1and11&1 is one of the world’s leading Web hosting providers. 1&1 currently offers a wide range of Web hosting products, including email solutions and high-end servers in 10 different countries including Germany, Spain, Great Britain and the United States. From domains to 1&1 MyWebsite to eBusiness solutions like Cloud Hosting and Web servers for complex tasks, 1&1 is well placed to deliver a high-quality service to its customers. All 1&1 products are hosted in 1&1’s high-performance, green data centers in the USA and Europe. Dependencies
Configuration
my-oneandone-config: AuthenticationThe api_key is used for API authorization. This token can be obtained from the CloudPanel in the Management section under Users. ProfilesHere is an example of a profile: oneandone_fixed_size: The following list explains some of the important properties.
salt-cloud --list-sizes oneandone
salt-cloud --list-images oneandone
salt-cloud --list-locations oneandone
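Putting those together, a fixed-size 1&1 profile might look like this sketch (the appliance_id is a placeholder; list real values with the commands above):
oneandone_fixed_size:
  provider: my-oneandone-config
  description: Small instance size server
  fixed_instance_size: S
  appliance_id: <appliance-id-from-avail_images>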
Functions
sudo salt-cloud -f create_ssh_key my-oneandone-config name='SaltTest' description='SaltTestDescription'
sudo salt-cloud -f create_block_storage my-oneandone-config name='SaltTest2' description='SaltTestDescription' size=50 datacenter_id='5091F6D8CBFEF9C26ACE957C652D5D49' For more information concerning cloud profiles, see here. Getting Started with OpenNebulaOpenNebula is an open-source solution for the comprehensive management of virtualized data centers to enable the mixed use of private, public, and hybrid IaaS clouds. DependenciesThe driver requires Python's lxml library to be installed. It also requires an OpenNebula installation running version 4.12 or greater. ConfigurationThe following example illustrates some of the options that can be set. These parameters are discussed in more detail below. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-opennebula-provider: Access CredentialsThe Salt Cloud driver for OpenNebula was written using OpenNebula's native XML RPC API. Every interaction with OpenNebula's API requires a username and password to make the connection from the machine running Salt Cloud to API running on the OpenNebula instance. Based on the access credentials passed in, OpenNebula filters the commands that the user can perform or the information for which the user can query. For example, the images that a user can view with a --list-images command are the images that the connected user and the connected user's groups can access. Key PairsSalt Cloud needs to be able to access a virtual machine in order to install the Salt Minion by using a public/private key pair. The virtual machine will need to be seeded with the public key, which is laid down by the OpenNebula template. Salt Cloud then uses the corresponding private key, provided by the private_key setting in the cloud provider file, to SSH into the new virtual machine. To seed the virtual machine with the public key, the public key must be added to the OpenNebula template. If using the OpenNebula web interface, navigate to the template, then click Update. Click the Context tab. Under the Network & SSH section, click Add SSH Contextualization and paste the public key in the Public Key box. Don't forget to save your changes by clicking the green Update button. NOTE: The key pair must not have a pass-phrase.
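As a reference point, an OpenNebula provider definition matching the description above might look like this sketch (endpoint, credentials, and key path are placeholders):
my-opennebula-provider:
  driver: opennebula
  xml_rpc: http://localhost:2633/RPC2
  user: oneadmin
  password: <password>
  private_key: /path/to/private/key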
Cloud ProfilesSet up an initial profile at either /usr/local/etc/salt/cloud.profiles or the /etc/salt/cloud.profiles.d/ directory. my-opennebula-profile: The profile can now be realized with a salt command: salt-cloud -p my-opennebula-profile my-new-vm This will create a new instance named my-new-vm in OpenNebula. The minion that is installed on this instance will have a minion id of my-new-vm. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: salt my-new-vm test.version OpenNebula uses an image --> template --> virtual machine paradigm where the template draws on the image, or disk, and virtual machines are created from templates. Because of this, there is no need to define a size in the cloud profile. The size of the virtual machine is defined in the template. Change Disk SizeYou can now change the size of a VM on creation by cloning an image and expanding the size. You can accomplish this with the cloud profile settings below. my-opennebula-profile: There are currently two different disk_types a user can use: volatile and clone. clone, which is required when specifying devices, will clone an image in OpenNebula and expand it to the size specified in the profile settings. By default this will clone the image attached to the template specified in the profile, but a user can add the image argument under the disk definition. For example, the profile below will not use Ubuntu-14.04 for the cloned disk image. It will use the centos7-base-image image: my-opennebula-profile: If you want to use the image attached to the template set in the profile you can simply remove the image argument as shown below. The profile below will clone the image Ubuntu-14.04 and expand the disk to 8GB: my-opennebula-profile: A user can also currently specify swap or fs disks. Below is an example of this profile setting: my-opennebula-profile: The example above will attach both a swap disk and an ext3 filesystem with a size of 4GB. Note that if you define other disks, you have to define the image disk to clone, because the template will write over the entire 'DISK=[]' template definition on creation. Required SettingsThe following settings are always required for OpenNebula: my-opennebula-config: Required Settings for VM DeploymentThe settings defined in the Required Settings section are required for all interactions with OpenNebula. However, when deploying a virtual machine via Salt Cloud, an additional setting, private_key, is also required: my-opennebula-config: Listing ImagesImages can be queried on OpenNebula by passing the --list-images argument to Salt Cloud: salt-cloud --list-images opennebula Listing LocationsIn OpenNebula, locations are defined as hosts. Locations, or "hosts", can be queried on OpenNebula by passing the --list-locations argument to Salt Cloud: salt-cloud --list-locations opennebula Listing SizesSizes are defined by templates in OpenNebula. As such, the --list-sizes call returns an empty dictionary since there are no sizes to return. Additional OpenNebula API FunctionalityThe Salt Cloud driver for OpenNebula was written using OpenNebula's native XML RPC API. As such, many --function and --action calls were added to the OpenNebula driver to enhance support for an OpenNebula infrastructure with additional control from Salt Cloud. See the OpenNebula function definitions for more information. 
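Tying together the disk-cloning options described above, a profile might look like this sketch (image names and the size are illustrative):
my-opennebula-profile:
  provider: my-opennebula-provider
  image: Ubuntu-14.04
  disk:
    disk0:
      disk_type: clone
      size: 8096
      image: centos7-base-image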
Access via DNS entry instead of IPSome OpenNebula installations do not assign IP addresses to new VMs; instead, they establish the new VM's hostname based on OpenNebula's name of the VM, and then allocate an IP out of DHCP with dynamic DNS attaching the hostname. This driver supports this behavior by adding the entry fqdn_base to the driver configuration or the OpenNebula profile with a value matching the base fully-qualified domain. For example: # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-opennebula-provider: Getting Started with OpenstackSee salt.cloud.clouds.openstack Getting Started With ParallelsParallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted using PCS. Further information can be found at: http://www.parallels.com/products/pcs/
# Set up the location of the salt master # minion:
my-parallels-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access CredentialsThe user, password, and url will be provided to you by your cloud host. These are all required in order for the PARALLELS driver to work. Cloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/parallels.conf: parallels-ubuntu: The profile can be realized now with a salt command: # salt-cloud -p parallels-ubuntu myubuntu This will create an instance named myubuntu on the cloud host. The minion that is installed on this instance will have an id of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt myubuntu test.version Required SettingsThe following settings are always required for PARALLELS:
PARALLELS.user: myuser
PARALLELS.password: badpass
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
my-parallels-config: Optional SettingsUnlike other cloud providers in Salt Cloud, Parallels does not utilize a size setting. This is because Parallels allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud hosts. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance. Defaults to the instance name.
desc: <instance_name>
# How many CPU cores, and how fast they are (in MHz)
cpu_number: 1
cpu_power: 1000
# How many megabytes of RAM
ram: 256
# Bandwidth available, in kbps
bandwidth: 100
# How many public IPs will be assigned to this instance
ip_num: 1
# Size of the instance disk (in GiB)
disk_size: 10
# Username and password
ssh_username: root
password: <value from PARALLELS.password>
# The name of the image, from ``salt-cloud --list-images parallels``
image: ubuntu-12.04-x86_64
Getting Started With ProfitBricksProfitBricks provides an enterprise-grade Infrastructure as a Service (IaaS) solution that can be managed through a browser-based "Data Center Designer" (DCD) tool or via an easy-to-use API. A unique feature of the ProfitBricks platform is that it allows you to define your own settings for cores, memory, and disk size without being tied to a particular server size. Dependencies
Configuration
my-profitbricks-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Virtual Data CenterProfitBricks uses the concept of Virtual Data Centers. These are logically separated from one another and allow you to have a self-contained environment for all servers, volumes, networking, snapshots, and so forth. A list of existing virtual data centers can be retrieved with the following command: salt-cloud -f list_datacenters my-profitbricks-config A new data center can be created with the following command: salt-cloud -f create_datacenter my-profitbricks-config name=example location=us/las description="my description" AuthenticationThe username and password are the same as those used to log into the ProfitBricks "Data Center Designer". ProfilesHere is an example of a profile: profitbricks_staging: Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations my-profitbricks-config Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-profitbricks-config Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-profitbricks-config Changed in version 2019.2.0: One or more public IP addresses can be reserved with the following command: # salt-cloud -f reserve_ipblock my-profitbricks-config location='us/ewr' size=1 Profile Specifics:The following list explains some of the important properties.
salt-cloud --list-sizes my-profitbricks-config
salt-cloud --list-images my-profitbricks-config
salt-cloud -f list_images my-profitbricks-config
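For orientation, a ProfitBricks profile using those properties might look like this sketch (the image UUID is a placeholder; list real values with the commands above):
profitbricks_staging:
  provider: my-profitbricks-config
  size: Micro Instance
  image: <image-uuid-from---list-images>
  cores: 2
  ram: 4096
  public_lan: 1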
For more information concerning cloud profiles, see here. Getting Started With ProxmoxProxmox Virtual Environment is a complete server virtualization management solution, based on OpenVZ(in Proxmox up to 3.4)/LXC(from Proxmox 4.0 and up) and full virtualization with KVM. Further information can be found at: https://www.proxmox.com Dependencies
Please note: This module allows you to create OpenVZ/LXC containers and KVM VMs, but Salt will only be installed on containers, not on KVM virtual machines.
my-proxmox-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access CredentialsThe user, password, and url will be provided to you by your cloud host. These are all required in order for the PROXMOX driver to work. Cloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/proxmox.conf:
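For example, an LXC container profile might look like the following sketch (host, IP, and password are placeholders):
proxmox-ubuntu:
  provider: my-proxmox-config
  image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
  technology: lxc
  # host needs to be set to the configured name of the proxmox host,
  # not the ip address or FQDN of the server
  host: myvmhost
  ip_address: 192.168.100.155
  password: topsecret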
proxmox-ubuntu: The profile can be realized now with a salt command: # salt-cloud -p proxmox-ubuntu myubuntu This will create an instance named myubuntu on the cloud host. The minion that is installed on this instance will have a hostname of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt myubuntu test.version Required SettingsThe following settings are always required for PROXMOX:
my-proxmox-config: Optional SettingsUnlike other cloud providers in Salt Cloud, Proxmox does not utilize a size setting. This is because Proxmox allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance.
desc: <instance_name>
# How many CPU cores, and how fast they are (in MHz)
cpus: 1
cpuunits: 1000
# How many megabytes of RAM
memory: 256
# How much swap space in MB
swap: 256
# Whether to auto boot the vm after the host reboots
onboot: 1
# Size of the instance disk (in GiB)
disk: 10
# Host to create this vm on
host: myvmhost
# Nameservers. Defaults to host
nameserver: 8.8.8.8 8.8.4.4
# Username and password
ssh_username: root
password: <value from PROXMOX.password>
# The name of the image, from ``salt-cloud --list-images proxmox``
image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
# Whether or not to verify the SSL cert on the Proxmox host
verify_ssl: False
# Network interfaces, netX
net0: name=eth0,bridge=vmbr0,ip=dhcp
# Public key to add to /root/.ssh/authorized_keys.
pubkey: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABA...'
QEMUSome functionality works differently if you use 'qemu' as the technology. In order to create a new VM with qemu, you need to specify some more information. You can also clone a qemu template which is already on your Proxmox server. QEMU profile file (for a new VM): proxmox-win7: More information about these parameters can be found on the Proxmox API (http://pve.proxmox.com/pve2-api-doc/) under the 'POST' method of nodes/{node}/qemu QEMU profile file (for a clone): proxmox-win7: More information can be found on the Proxmox API under the 'POST' method of /nodes/{node}/qemu/{vmid}/clone NOTE: The Proxmox API offers a lot more options and parameters,
which are not yet supported by this salt-cloud 'overlay'. Feel free to add
your contribution by forking the github repository and modifying the following
file: salt/cloud/clouds/proxmox.py
An easy way to support more parameters for VM creation would be to add the names of the optional parameters in the 'create_nodes(vm_)' function, under the 'qemu' technology, but it requires you to dig into the code. Getting Started With ScalewayScaleway is the first IaaS host worldwide to offer an ARM-based cloud. It’s the ideal platform for horizontal scaling with BareMetal SSD servers. The solution provides on demand resources: it comes with on-demand SSD storage, movable IPs, images, security groups and an Object Storage solution. https://scaleway.com ConfigurationUsing Salt for Scaleway requires an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name and click the "My Credentials" link. If you do not have an API token you can create one by clicking the "Create New Token" button in the right corner. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-scaleway-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory: scaleway-ubuntu: Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-scaleway-config my-scaleway-config: Execute a query and return all information about the nodes running on configured cloud providers using the -F option for the salt-cloud command: # salt-cloud -F [INFO ] salt-cloud starting [INFO ] Starting new HTTPS connection (1): api.scaleway.com my-scaleway-config: NOTE: Additional documentation about Scaleway can be found at
https://www.scaleway.com/docs.
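To recap the profile section above, a minimal Scaleway profile might look like this sketch (the image label is illustrative; list real images with --list-images):
scaleway-ubuntu:
  provider: my-scaleway-config
  image: Ubuntu Trusty (14.04 LTS)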
Getting Started With SaltifyThe Saltify driver is a driver for installing Salt on existing machines (virtual or bare metal). DependenciesThe Saltify driver has no external dependencies. ConfigurationBecause the Saltify driver does not use an actual cloud provider host, it can have a simple provider configuration. The only thing that is required to be set is the driver name, and any other potentially useful information, like the location of the salt-master: # Note: This example is for /usr/local/etc/salt/cloud.providers file or any file in # the /usr/local/etc/salt/cloud.providers.d/ directory. my-saltify-config: However, if you wish to use the more advanced capabilities of salt-cloud, such as rebooting, listing, and disconnecting machines, then the salt master must fill the role usually performed by a vendor's cloud management system. The salt master must be running on the salt-cloud machine, and created nodes must be connected to the master. Additional information about which configuration options apply to which actions can be studied in the Saltify Module documentation and the Miscellaneous Salt Cloud Options document. ProfilesSaltify requires a separate profile to be configured for each machine that needs Salt installed [1]. The initial profile can be set up at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory. Each profile requires both an ssh_host and an ssh_username key parameter as well as either a key_filename or a password. For example:
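A minimal Saltify profile along those lines might look like this sketch (host, user, and key path are placeholders):
salt-this-machine:
  ssh_host: 12.34.56.78
  ssh_username: root
  key_filename: '/etc/salt/mysshkey.pem'
  provider: my-saltify-config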
# /usr/local/etc/salt/cloud.profiles.d/saltify.conf salt-this-machine: The machine can now be "Salted" with the following command: salt-cloud -p salt-this-machine my-machine This will install salt on the machine specified by the cloud profile, salt-this-machine, and will give the machine the minion id of my-machine. If the command was executed on the salt-master, its Salt key will automatically be accepted by the master. Once a salt-minion has been successfully installed on the instance, connectivity to it can be verified with Salt: salt my-machine test.version Destroy OptionsNew in version 2018.3.0. For obvious reasons, the destroy action does not actually vaporize hardware. If the salt master is connected, it can tear down parts of the client machines. It will remove the client's key from the salt master, and can execute the following options: - remove_config_on_destroy: true Wake On LANNew in version 2018.3.0. In addition to connecting a hardware machine to a Salt master, you have the option of sending a wake-on-LAN magic packet to start that machine running. The "magic packet" must be sent by an existing salt minion which is on the same network segment as the target machine. (Or your router must be set up especially to route WoL packets.) Your target machine must be set up to listen for WoL and to respond appropriately. You must provide the Salt node id of the machine which will send the WoL packet (parameter wol_sender_node), and the hardware MAC address of the machine you intend to wake (parameter wake_on_lan_mac). If both parameters are defined, the WoL will be sent. The cloud master will then sleep a while (parameter wol_boot_wait) to give the target machine time to boot up before we start probing its SSH port to begin deploying Salt to it. The default sleep time is 30 seconds. # /usr/local/etc/salt/cloud.profiles.d/saltify.conf salt-this-machine: Using Map FilesThe settings explained in the section above may also be set in a map file. An example of how to use the Saltify driver with a map file follows: # /usr/local/etc/salt/saltify-map make_salty: In this example, the names my-instance-0 and my-instance-1 will be the identifiers of the deployed minions. Note: The ssh_host directive is also used for Windows hosts, even though they do not typically run the SSH service. It indicates the IP address or host name of the target system. Note: When using a cloud map with the Saltify driver, the name of the profile to use, in this case make_salty, must be defined in a profile config. For example: # /usr/local/etc/salt/cloud.profiles.d/saltify.conf make_salty: The machines listed in the map file can now be "Salted" by applying the following salt map command: salt-cloud -m /usr/local/etc/salt/saltify-map This command will install salt on the machines specified in the map and will give each machine its minion id of my-instance-0 and my-instance-1, respectively. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Connectivity to the new "Salted" instances can now be verified with Salt: salt 'my-instance-*' test.version Bulk DeploymentsWhen deploying large numbers of Salt Minions using Saltify, it may be preferable to organize the configuration in a way that duplicates data as little as possible. For example, if a group of target systems share the same credentials, they can be specified in the profile, rather than in a map file. 
# /usr/local/etc/salt/cloud.profiles.d/saltify.conf make_salty: # /usr/local/etc/salt/saltify-map make_salty: If ssh_host is not provided, its default value will be the Minion identifier (my-instance-0 and my-instance-1, in the example above). For deployments with working DNS resolution, this can save a lot of redundant data in the map. Here is an example map file using DNS names instead of IP addresses: # /usr/local/etc/salt/saltify-map make_salty: Credential VerificationBecause the Saltify driver does not actually create VMs, unlike other salt-cloud drivers, it has special behavior when the deploy option is set to False. When the cloud configuration specifies deploy: False, the Saltify driver will attempt to authenticate to the target node(s) and return True for each one that succeeds. This can be useful to verify that ports, protocols, services, and credentials are correctly configured before a live deployment.
Getting Started With SoftLayerSoftLayer is a public cloud host and bare-metal hardware hosting service. DependenciesThe SoftLayer driver for Salt Cloud requires the softlayer package, which is available at PyPI: https://pypi.org/project/SoftLayer/ This package can be installed using pip or easy_install: # pip install softlayer # easy_install softlayer ConfigurationSet up the cloud config at /usr/local/etc/salt/cloud.providers: # Note: These examples are for /usr/local/etc/salt/cloud.providers NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access CredentialsThe user setting is the same user as is used to log into the SoftLayer Administration area. The apikey setting is found inside the Admin area after logging in:
ProfilesCloud ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles: base_softlayer_ubuntu: Most of the above items are required; optional items are specified below. imageImages to build an instance can be found using the --list-images option: # salt-cloud --list-images my-softlayer The setting used will be labeled as template. cpu_numberThis is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core): Note that the template (meaning, the image option) for both of these is the same, but the names suggest how many CPU cores are supported. ramThis is the amount of memory, in megabytes, that will be allocated to this instance. disk_sizeThe amount of disk space that will be allocated to this image, in gigabytes. base_softlayer_ubuntu: Using Multiple DisksNew in version 2015.8.1. SoftLayer allows up to 5 disks to be specified for a virtual machine upon creation. Multiple disks can be specified either as a list or a comma-delimited string. The first disk_size specified in the string or list will be the first disk size assigned to the VM. List Example:
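# a sketch reconstructed from the description above; disk sizes are placeholders
base_softlayer_ubuntu:
  disk_size: ['100', '25', '25']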
String Example:
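# the same disks expressed as a comma-delimited string (sketch)
base_softlayer_ubuntu:
  disk_size: '100, 25, 25'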
local_diskWhen true the disks for the computing instance will be provisioned on the host which it runs, otherwise SAN disks will be provisioned. hourly_billingWhen true the computing instance will be billed on hourly usage, otherwise it will be billed on a monthly basis. domainThe domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN. use_fqdnIf set to True, the Minion will be identified by the FQDN (Fully Qualified Domain Name) which is a result of combining the domain configuration value and the Minion name specified either via the CLI or a map file rather than only using the short host name, or Minion ID. Default is False. New in version 2016.3.0. For example, if the value of domain is example.com and a new VM was created via the CLI with salt-cloud -p base_softlayer_ubuntu my-vm, the resulting Minion ID would be my-vm.example.com. NOTE: When enabling the use_fqdn setting, the Minion ID
will be the FQDN and will interact with salt commands with the FQDN instead of
the short hostname. However, due to the way the SoftLayer API is constructed,
some Salt Cloud functions such as listing nodes or destroying VMs will only
list the short hostname of the VM instead of the FQDN.
Example output displaying the SoftLayer hostname quirk mentioned in the note above (note the Minion ID is my-vm.example.com, but the VM to be destroyed is listed with its short hostname, my-vm): # salt-key -L Accepted Keys: my-vm.example.com Denied Keys: Unaccepted Keys: Rejected Keys: # # # salt my-vm.example.com test.version my-vm.example.com: locationLocations available to build an instance can be found using the --list-locations option: # salt-cloud --list-locations my-softlayer max_net_speedSpecifies the connection speed for the instance's network components. This setting is optional. By default, this is set to 10. post_uriSpecifies the URI location of the script to be downloaded and run after the instance is provisioned. New in version 2015.8.1. Example:
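# a sketch; the URL is a placeholder for your own script location
post_uri: 'https://<your-server>/myscript.sh'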
public_vlanIf it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. This ID can be queried using the list_vlans function, as described below. This setting is optional. If this setting is set to None, salt-cloud will connect to the private ip of the server. NOTE: If this setting is not provided and the server is not
built with a public vlan, private_ssh or private_wds will need
to be set to make sure that salt-cloud attempts to connect to the private
ip.
private_vlanIf it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. This ID can be queried using the list_vlans function, as described below. This setting is optional. private_networkIf a server is to only be used internally, meaning it does not have a public VLAN associated with it, this value would be set to True. This setting is optional. The default is False. private_ssh or private_wdsWhether to run the deploy script on the server using the public IP address or the private IP address. If set to True, Salt Cloud will attempt to SSH or WinRM into the new server using the private IP address. The default is False. This setting is optional. global_identifierWhen creating an instance using a custom template, this option is set to the corresponding value obtained using the list_custom_images function. This option will not be used if an image is set, and if an image is not set, it is required. The profile can be realized now with a salt command: # salt-cloud -p base_softlayer_ubuntu myserver Using the above configuration, this will create myserver.example.com. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt 'myserver.example.com' test.version Dedicated HostSoftLayer allows the creation of new VMs in a dedicated host. This means that you can order and pay a fixed amount for a bare metal dedicated host and use it to provision as many VMs as you can fit in there. If you want your VMs to be launched in a dedicated host, instead of SoftLayer's cloud, set the dedicated_host_id parameter in your profile. dedicated_host_idThe id of the dedicated host where the VMs should be created. If not set, VMs will be created in SoftLayer's cloud instead. Bare metal ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles: base_softlayer_hw_centos: Most of the above items are required; optional items are specified below. imageImages to build an instance can be found using the --list-images option: # salt-cloud --list-images my-softlayer-hw A list of ids and names will be provided. The name will describe the operating system and architecture. The id will be the setting to be used in the profile. sizeSizes to build an instance can be found using the --list-sizes option: # salt-cloud --list-sizes my-softlayer-hw A list of ids and names will be provided. The name will describe the speed and quantity of CPU cores, and the amount of memory that the hardware will contain. The id will be the setting to be used in the profile. hddThere is currently only one size of hard disk drive (HDD) that is available for hardware instances on SoftLayer: 1267: 500GB SATA II The hdd setting in the profile should be 1267. Other sizes may be added in the future. locationLocations to build an instance can be found using the --list-locations option: # salt-cloud --list-locations my-softlayer-hw A list of IDs and names will be provided. The location will describe the location in human terms. The id will be the setting to be used in the profile. domainThe domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN. vlanIf it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. 
This ID can be queried using the list_vlans function, as described below. port_speedSpecifies the speed for the instance's network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273, or 100 Mbps Public & Private Networks. The following settings are available:
bandwidthSpecifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248, or 5000 GB Bandwidth. The following settings are available:
ActionsThe following actions are currently supported by the SoftLayer Salt Cloud driver. show_instanceThis action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. $ salt-cloud -a show_instance myinstance FunctionsThe following functions are currently supported by the SoftLayer Salt Cloud driver. list_vlansThis function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs. $ salt-cloud -f list_vlans my-softlayer $ salt-cloud -f list_vlans my-softlayer-hw The id returned in this list is necessary for the vlan option when creating an instance. list_custom_imagesThis function lists any custom templates associated with the account, that can be used to create a new instance. $ salt-cloud -f list_custom_images my-softlayer The globalIdentifier returned in this list is necessary for the global_identifier option when creating an image using a custom template. Optional Products for SoftLayer HWThe softlayer_hw driver supports the ability to add optional products, which are supported by SoftLayer's API. These products each have an ID associated with them, that can be passed into Salt Cloud with the optional_products option: softlayer_hw_test: These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface. For convenience, many of these values are listed here: Public Secondary IP Addresses
Primary IPv6 Addresses
Public Static IPv6 Addresses
OS-Specific Addon
Control Panel Software
Database Software
Anti-Virus & Spyware Protection
Insurance
Monitoring
Notification
Advanced Monitoring
Response
Intrusion Detection & Protection
Hardware & Software Firewalls
Getting Started With Tencent CloudTencent Cloud is a secure, reliable and high-performance cloud compute service provided by Tencent. It is the 2nd largest Cloud Provider in China. DependenciesThe Tencent Cloud driver for Salt Cloud requires the tencentcloud-sdk-python package, which is available at PyPI: https://pypi.org/project/tencentcloud-sdk-python/ This package can be installed using pip or easy_install: # pip install tencentcloud-sdk-python # easy_install tencentcloud-sdk-python Provider Configuration
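A provider definition for Tencent Cloud might look like the following sketch (the id and key values are placeholders):
my-tencentcloud-config:
  driver: tencentcloud
  # Tencent Cloud secret id
  id: <your-secret-id>
  # Tencent Cloud secret key
  key: <your-secret-key>
  location: ap-guangzhou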
my-tencentcloud-config: Configuration ParametersdriverRequired. tencentcloud to use this module. idRequired. Your Tencent Cloud secret id. keyRequired. Your Tencent Cloud secret key. locationOptional. If this value is not specified, the default is ap-guangzhou. Available locations can be found using the --list-locations option: # salt-cloud --list-locations my-tencentcloud-config Profile ConfigurationTencent Cloud profiles require a provider, availability_zone, image and size. Set up an initial profile at /usr/local/etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/*.conf: tencentcloud-guangzhou-s1sm1: Configuration ParametersproviderRequired. Name of entry in salt/cloud.providers.d/??? file. availability_zoneRequired. The availability zone that the instance is located in. Available zones can be found using the list_availability_zones function: # salt-cloud -f list_availability_zones my-tencentcloud-config imageRequired. The image id to use for the instance. Available images can be found using the --list-images option: # salt-cloud --list-images my-tencentcloud-config sizeRequired. The instance type for the instance can be found using the --list-sizes option: # salt-cloud --list-sizes my-tencentcloud-config securitygroupsOptional. A list of security group ids to associate with. Available security group ids can be found using the list_securitygroups function: # salt-cloud -f list_securitygroups my-tencentcloud-config Multiple security groups are supported: tencentcloud-guangzhou-s1sm1: hostnameOptional. The hostname of the instance. instance_charge_typeOptional. The charge type of the instance. Valid values are PREPAID, POSTPAID_BY_HOUR and SPOTPAID. The default is POSTPAID_BY_HOUR. instance_charge_type_prepaid_renew_flagOptional. When enabled, the instance will be renewed automatically when it reaches the end of the prepaid tenancy. Valid values are NOTIFY_AND_AUTO_RENEW, NOTIFY_AND_MANUAL_RENEW and DISABLE_NOTIFY_AND_MANUAL_RENEW. NOTE: This value is only used when instance_charge_type
is set to PREPAID.
instance_charge_type_prepaid_periodOptional. The tenancy time in months of the prepaid instance. Valid values are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 24, 36. NOTE: This value is only used when instance_charge_type
is set to PREPAID.
allocate_public_ipOptional. Associate a public ip address with an instance in a VPC or Classic. Boolean value, default is false. internet_max_bandwidth_outOptional. Maximum outgoing bandwidth to the public network, measured in Mbps (Mega bits per second). Value range: [0, 100]. If this value is not specified, the default is 0 Mbps. internet_charge_typeOptional. Internet charge type of the instance. Valid values are BANDWIDTH_PREPAID, TRAFFIC_POSTPAID_BY_HOUR, BANDWIDTH_POSTPAID_BY_HOUR and BANDWIDTH_PACKAGE. The default is TRAFFIC_POSTPAID_BY_HOUR. key_nameOptional. The key pair to use for the instance, for example skey-16jig7tx. passwordOptional. Login password for the instance. private_ipOptional. The private ip to be assigned to this instance, must be in the provided subnet and available. project_idOptional. The project this instance belongs to, defaults to 0. vpc_idOptional. The id of a VPC network. If you want to create instances in a VPC network, this parameter must be set. subnet_idOptional. The id of a VPC subnet. If you want to create instances in VPC network, this parameter must be set. system_disk_sizeOptional. Size of the system disk. Value range: [50, 1000], and unit is GB. Default is 50 GB. system_disk_typeOptional. Type of the system disk. Valid values are CLOUD_BASIC, CLOUD_SSD and CLOUD_PREMIUM, default value is CLOUD_BASIC. ActionsThe following actions are supported by the Tencent Cloud Salt Cloud driver. show_instanceThis action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. $ salt-cloud -a show_instance myinstance show_diskReturn disk details about a specific instance. $ salt-cloud -a show_disk myinstance destroyDestroy a Tencent Cloud instance. $ salt-cloud -a destroy myinstance startStart a Tencent Cloud instance. $ salt-cloud -a start myinstance stopStop a Tencent Cloud instance. $ salt-cloud -a stop myinstance rebootReboot a Tencent Cloud instance. $ salt-cloud -a reboot myinstance FunctionsThe following functions are currently supported by the Tencent Cloud Salt Cloud driver. list_securitygroupsLists all Tencent Cloud security groups in current region. $ salt-cloud -f list_securitygroups my-tencentcloud-config list_availability_zonesLists all Tencent Cloud availability zones in current region. $ salt-cloud -f list_availability_zones my-tencentcloud-config list_custom_imagesLists any custom images associated with the account. These images can be used to create a new instance. $ salt-cloud -f list_custom_images my-tencentcloud-config show_imageReturn details about a specific image. This image can be used to create a new instance. $ salt-cloud -f show_image tencentcloud image=img-31tjrtph Getting Started With VagrantThe Vagrant driver is a new, experimental driver for spinning up a VagrantBox virtual machine, and installing Salt on it. DependenciesThe Vagrant driver itself has no external dependencies. The machine which will host the VagrantBox must be an already existing minion of the cloud server's Salt master. It must have Vagrant installed, and a Vagrant-compatible virtual machine engine, such as VirtualBox. (Note: The Vagrant driver does not depend on the salt-cloud VirtualBox driver in any way.) [Caution: The version of Vagrant packaged for apt install in Ubuntu 16.04 will not connect a bridged network adapter correctly. Use a version downloaded directly from the web site.] 
Include the Vagrant guest editions plugin: vagrant plugin install vagrant-vbguest. ConfigurationConfiguration of the client virtual machine (using VirtualBox, VMware, etc.) will be done by Vagrant as specified in the Vagrantfile on the host machine. Salt-cloud will push the commands to install and provision a salt minion on the virtual machine, so you need not (perhaps should not) provision salt in your Vagrantfile, in most cases. If, however, your cloud master cannot open an SSH connection to the child VM, you may need to let Vagrant provision the VM with Salt, and use some other method (such as passing a pillar dictionary to the VM) to pass the master's IP address to the VM. The VM can then attempt to reach the salt master in the usual way for non-cloud minions. Specify the profile configuration argument as deploy: False to prevent the cloud master from trying. # Note: This example is for /usr/local/etc/salt/cloud.providers file or any file in # the /usr/local/etc/salt/cloud.providers.d/ directory. my-vagrant-config: Because the Vagrant driver needs a place to store the mapping between the node name you use for Salt commands and the Vagrantfile which controls the VM, you must configure your salt minion as a Salt sdb server. (See host provisioning example below.) ProfilesVagrant requires a profile to be configured for each machine that needs Salt installed. The initial profile can be set up at /usr/local/etc/salt/cloud.profiles or in the /usr/local/etc/salt/cloud.profiles.d/ directory. Each profile requires a vagrantfile parameter. If the Vagrantfile has definitions for multiple machines, then you need a machine parameter. Salt-cloud uses SSH to provision the minion. There must be a routable path from the cloud master to the VM. Usually, you will want to use a bridged network adapter for SSH. The address may not be known until DHCP assigns it. If ssh_host is not defined, and target_network is defined, the driver will attempt to read the address from the output of an ifconfig command. Lacking either setting, the driver will try to use the value Vagrant returns as its ssh_host, which will work only if the cloud master is running somewhere on the same host. The target_network setting should be used to identify the IP network your bridged adapter is expected to appear on. Use CIDR notation, like target_network: '2001:DB8::/32' or target_network: '192.0.2.0/24'. Profile configuration example: # /usr/local/etc/salt/cloud.profiles.d/vagrant.conf vagrant-machine: The machine can now be created and configured with the following command: salt-cloud -p vagrant-machine my-id This will create the machine specified by the cloud profile vagrant-machine, and will give the machine the minion id of my-id. If the cloud master is also the salt-master, its Salt key will automatically be accepted on the master. Once a salt-minion has been successfully installed on the instance, connectivity to it can be verified with Salt: salt my-id test.version Provisioning a Vagrant cloud host (example)In order to query or control minions it created, each host minion needs to track the Salt node names associated with any guest virtual machines on it. It does that using a Salt sdb database. The Salt sdb is not configured by default. The following example shows a simple installation. This example assumes:
# file /usr/local/etc/salt/minion.d/vagrant_sdb.conf on host computer "my_laptop"
# -- this sdb database is required by the Vagrant module --
vagrant_sdb_data:  # The sdb database must have this name.
Remember to re-start your minion after changing its configuration files... sudo systemctl restart salt-minion
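A fuller version of that sdb configuration might look like this sketch (assuming the sqlite3 sdb driver and a conventional cache path):
vagrant_sdb_data:  # The sdb database must have this name.
  driver: sqlite3  # use SQLite to store the data ...
  database: /var/cache/salt/vagrant.sqlite  # ... in this file ...
  table: sdb  # ... using this table name.
  create_table: True  # if not present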
# -*- mode: ruby -*- # file /home/my_username/Vagrantfile on host computer "my_laptop" BEVY = "bevy1" DOMAIN = BEVY + ".test" # .test is an ICANN reserved non-public TLD # must supply a list of names to avoid Vagrant asking for interactive input def get_good_ifc() # try to find a working Ubuntu network adapter name # file /usr/local/etc/salt/cloud.profiles.d/my_vagrant_profiles.conf on bevymaster q1: # file /usr/local/etc/salt/cloud.providers.d/vagrant_provider.conf on bevymaster my_vagrant_provider: Create and use your new Salt minion
sudo salt-cloud -p q1 v1
sudo salt v1 network.ip_addrs
[NOTE:] if you are using MacOS, you need to type
ssh-add -K after each boot, unless you use one of the methods in
this gist.
ssh -A vagrant@< the bridged network address >
password: vagrant Getting Started with VEXXHOSTVEXXHOST is a cloud computing host which provides Canadian cloud computing services which are based in Montreal and use the libcloud OpenStack driver. VEXXHOST currently runs the Havana release of OpenStack. When provisioning new instances, they automatically get a public IP and private IP address. Therefore, you do not need to assign a floating IP to access your instance after it's booted. Cloud Provider ConfigurationTo use the openstack driver for the VEXXHOST public cloud, you will need to set up the cloud provider configuration file as in the example below, which uses the OpenStack driver: /usr/local/etc/salt/cloud.providers.d/vexxhost.conf: my-vexxhost-config: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. AuthenticationAll of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you've logged in, you will need to click on "CloudConsole" and then click on "API Credentials". Cloud Profile ConfigurationIn order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following commands, respectively: # salt-cloud --list-images=vexxhost-config # salt-cloud --list-sizes=vexxhost-config Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS nb.2G instance. /usr/local/etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf: vh_ubuntu1204_2G: Provision an instanceTo create an instance based on the sample profile that we created above, you can run the following salt-cloud command: # salt-cloud -p vh_ubuntu1204_2G vh_instance1 Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up as a minion and then return all the instance information once it's complete. Once the instance has been set up, you can test connectivity to it by running the following command: # salt vh_instance1 test.version You can now continue to provision new instances and they will all automatically be set up as minions of the master you've defined in the configuration file. Getting Started With VirtualboxThe Virtualbox cloud module allows you to manage a local Virtualbox hypervisor. Remote hypervisors may come later on. DependenciesThe virtualbox module for Salt Cloud requires the Virtualbox SDK, which is contained in a virtualbox installation from https://www.virtualbox.org/wiki/Downloads ConfigurationThe Virtualbox cloud module just needs to use the virtualbox driver for now. Virtualbox will be run as the running user. /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/virtualbox.conf: virtualbox-config: ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/virtualbox.conf: virtualbox-test:
So far, machines can only be cloned and automatically provisioned by Salt Cloud. ProvisioningIn order to provision when creating a new machine, power_on and deploy have to be True. Furthermore, to connect to the VM, ssh_username and password will have to be set. sudo and sudo_password are the credentials for getting root access in order to deploy salt. Actions
Functions
$ salt-cloud -f show_image virtualbox image=my_vm_name Getting Started With VMwareNew in version 2015.5.4. Author: Nitin Madhok <nmadhok@g.clemson.edu> The VMware cloud module allows you to manage VMware ESX, ESXi, and vCenter. DependenciesThe vmware module for Salt Cloud requires the pyVmomi package, which is available at PyPI: https://pypi.org/project/pyvmomi/ This package can be installed using pip or easy_install: pip install pyvmomi easy_install pyvmomi NOTE: Version 6.0 of pyVmomi has some problems with SSL error
handling on certain versions of Python. If using version 6.0 of pyVmomi, the
machine that you are running the proxy minion process from must have either
Python 2.7.9 or newer. This is due to an upstream dependency in pyVmomi 6.0
that is not supported in Python version 2.6 to 2.7.8. If the version of Python
running the salt-cloud command is not in the supported range, you will need to
install an earlier version of pyVmomi. See Issue #29537 for more
information.
NOTE: pyVmomi doesn't expose the ability to specify the locale
when connecting to VMware. This causes parsing issues when connecting to an
instance of VMware running under a non-English locale. Until this feature is
added upstream Issue #38402 contains a workaround.
ConfigurationThe VMware cloud module needs the vCenter or ESX/ESXi URL, username and password to be set up in the cloud configuration at /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vmware.conf: my-vmware-config: NOTE: Optionally, protocol and port can be
specified if the vCenter server is not using the defaults. Default is
protocol: https and port: 443.
NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider configuration was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile configuration. Cloud provider configuration now uses driver to refer to the salt-cloud driver that provides the underlying functionality to connect to a cloud provider, while cloud profile configuration continues to use provider to refer to the cloud provider configuration that you define. ProfilesSet up an initial profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/vmware.conf: vmware-centos6.5:
Cores per socket should be less than or equal to the
total number of vCPUs assigned to the VM/template.
New in version 2016.11.0.
For a clone operation, this argument is ignored.
The Windows template should have an "administrator" account.
During network configuration (if a network is specified), it is used to specify the new administrator password for the machine.
https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.customization.UserData.html
Cloning a VMCloning VMs/templates is the easiest and the preferred way to work with VMs using the VMware driver. NOTE: Cloning operations are unsupported on standalone ESXi
hosts, a vCenter server will be required.
Example of a minimal profile: my-minimal-clone: When cloning a VM, all the profile configuration parameters are optional and the configuration gets inherited from the clone. Example to add/resize a disk: my-disk-example: Depending on the configuration of the VM that is getting cloned, the disk in the resulting clone will differ. NOTE:
Example to reconfigure the memory and number of vCPUs: my-disk-example. Instant Cloning a VMInstant cloning a powered-on VM is the easiest and the preferred way to work with VMs from a controlled point in time using the VMware driver. NOTE: Instant cloning operations are unsupported on standalone ESXi hosts; a vCenter server is required.
Example of a minimal profile when skipping optional parameters: my-minimal-clone: When instant cloning a VM, all the profile configuration parameters are optional and the configuration gets inherited from the clone. Example to specify optional parameters: my-minimal-clone: Cloning a TemplateCloning a template works similarly to cloning a VM, except that a resource pool or cluster must additionally be specified in the profile. Example of a minimal profile: my-template-clone: Cloning from a SnapshotNew in version 2016.3.5. Cloning from a snapshot requires that one of the supported options be set in the cloud profile. Supported options are createNewChildDiskBacking, moveChildMostDiskBacking, moveAllDiskBackingsAndAllowSharing, and moveAllDiskBackingsAndDisallowSharing. Example of a minimal profile: my-template-clone: Creating a VMNew in version 2016.3.0. Creating a VM from scratch means that more configuration has to be specified in the profile because there is no place to inherit configuration from. NOTE: Unlike most cloud drivers that use prepared images,
creating VMs using the VMware cloud driver needs an installation method that
requires no human interaction, for example a preseeded ISO, a kickstart URL, or
network PXE boot.
Example of a minimal profile: my-minimal-profile (sketched below). NOTE: This minimal example contains only the configuration required to create a VM from scratch. The resulting VM will only have 1 vCPU and 32 MB of RAM, and will not have any storage or networking.
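A sketch of that minimal profile, assuming the provider block defined earlier:
my-minimal-profile:
  provider: my-vmware-config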
Example of a complete profile: my-complete-example (sketched below). NOTE: Depending on the VMware ESX/ESXi version, an exact match for image might not be available. In such cases, the closest match to another image should be used. In the example below, a Debian 8 VM is created using the image debian7_64Guest, which is for a Debian 7 guest.
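A sketch of such a complete profile (the datacenter, device, and network values are illustrative, and the exact set of options you need will vary):
my-complete-example:
  provider: my-vmware-config
  datacenter: Datacenter01
  cluster: Cluster01
  num_cpus: 2
  memory: 2GB
  image: debian7_64Guest
  devices:
    scsi:
      SCSI controller 0:
        type: lsilogic
    disk:
      Hard disk 1:
        controller: 'SCSI controller 0'
        size: 20
    network:
      Network adapter 1:
        name: 'VM Network'
        adapter_type: vmxnet3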
Specifying disk backing modeNew in version 2016.3.5. Disk backing mode can now be specified when cloning a VM. This option can be set in the cloud profile as shown in the example below: my-vm: Getting Started With XenThe Xen cloud driver works with Citrix XenServer. It can be used with a single XenServer or a XenServer resource pool. Setup DependenciesThis driver requires a copy of the freely available XenAPI.py Python module. Information about the Xen API Python module in the XenServer SDK can be found at https://pypi.org/project/XenAPI/ Place a copy of this module on your system. For example, it can be placed in the site-packages location on your system. The location of site-packages can be determined by running: python -m site --user-site Provider ConfigurationXen requires login credentials to a XenServer. Set up the provider cloud configuration file at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/*.conf. # /usr/local/etc/salt/cloud.providers.d/myxen.conf myxen: NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profile ConfigurationXen profiles require a provider and image.
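Before moving on to profiles, here is a sketch of the myxen provider definition described above (the URL and credentials are illustrative):
myxen:
  driver: xen
  url: https://10.0.0.120
  user: root
  password: p@ssw0rd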
Set up an initial profile at /usr/local/etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory: # file: /usr/local/etc/salt/cloud.profiles.d/xenprofiles.conf sles: The first example will create a clone of the sles12sp2-template in the same storage repository without deploying the Salt minion. The second example will make a copy of the image and deploy a new suse VM with the Salt minion installed. The third example will create a clone of the Windows 2012 template and deploy the Salt minion. The profile can be used with a salt command: salt-cloud -p suse xenvm02 This will create a Salt minion instance named xenvm02 in Xen. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with a salt-minion installed, connectivity to it can be verified with Salt: salt xenvm02 test.version Listing SizesSizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes myxen NOTE: Since size information is built into a template, this command is not implemented.
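A sketch of the first (sles) profile described above (the template and storage repository names are illustrative, and the exact options depend on your pool):
sles:
  provider: myxen
  user: root
  password: p@ssw0rd
  image: sles12sp2-template
  storage_repo: 'Local storage'
  clone: True
  deploy: False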
Listing ImagesImages can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images myxen This command will return a list of templates with details. Listing LocationsLocations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations myxen Returns a list of resource pools. Miscellaneous OptionsMiscellaneous Salt Cloud OptionsThis page describes various miscellaneous options available in Salt Cloud. Deploy Script ArgumentsCustom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script: ec2-amazon: This has also been tested to work with pipes, if needed: script_args: '| head' Selecting the File TransportBy default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead. file_transport: sftp file_transport: scp Sync After InstallSalt allows users to create custom plugins such as execution, grains, and state modules which can be synchronised to minions to extend Salt with further functionality. This option will inform Salt Cloud to synchronise your custom modules to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file: sync_after_install: all The available options for this setting are: all beacons clouds engines executors grains log matchers modules output pillar proxymodules renderers returners sdb serializers states thorium utils A present and non-falsy value that doesn't match one of these list items will assume all, so sync_after_install: True and sync_after_install: all are equivalent (though the former will produce a warning). Setting Up New Salt MastersIt has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file. make_master: True This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package. The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map: master: Setting Up a Salt Syndic with Salt CloudIn addition to setting up new Salt Masters, syndics can also be provisioned using Salt Cloud. In order to set up a Salt Syndic via Salt Cloud, a Salt Master needs to be installed on the new machine and a master configuration file needs to be set up using the make_master setting. This setting can be defined either in a profile config file or in a map file: make_master: True To install the Salt Syndic, the only other specification that needs to be configured is the syndic_master key to specify the location of the master that the syndic will be reporting to.
This modification needs to be placed in the master setting, which can be configured either in the profile, provider, or /usr/local/etc/salt/cloud config file: master: Many other Salt Syndic configuration settings and specifications can be passed through to the new syndic machine via the master configuration setting. See the Salt Syndic documentation for more information. SSH PortBy default, the SSH port is set to 22. If you want to use a custom port in provider, profile, or map blocks, use the ssh_port option. New in version 2015.5.0. ssh_port: 2222 Delete SSH KeysWhen Salt Cloud deploys an instance, the SSH pub key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is deployed, a cloud host generally recycles the IP address for the instance. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict. In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file: delete_sshkeys: True Keeping /tmp/ FilesWhen Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added: salt-cloud -p myprofile mymachine --keep-tmp For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable). Hide Output From Minion InstallBy default Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output: display_ssh_output: False Connection TimeoutThere are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, etc. If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.
You can tweak these settings globally, per cloud provider, or even per profile definition. wait_for_ip_timeoutThe amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud host. Default: varies by cloud provider (between 5 and 25 minutes) wait_for_ip_intervalThe amount of time Salt Cloud should sleep while querying for the VM's IP. Default: varies by cloud provider (between 0.5 and 10 seconds) ssh_connect_timeoutThe amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: varies by cloud provider (between 5 and 15 minutes) wait_for_passwd_timeoutThe amount of time to wait until an SSH connection can be established via password or SSH key. Default: varies by cloud provider (mostly 15 seconds) wait_for_passwd_maxtriesThe number of attempts to connect to the VM before giving up. Default: 15 attempts wait_for_fun_timeoutSome cloud drivers (namely SoftLayer and SoftLayer-HW) check for an available IP or a successful SSH connection using a function. This is the amount of time Salt Cloud should retry such functions before failing. Default: 15 minutes. wait_for_spot_timeoutThe amount of time Salt Cloud should wait for an EC2 Spot instance to become available. This setting is only available for the EC2 cloud driver. Default: 10 minutes Salt Cloud CacheSalt Cloud can maintain a cache of node data, for supported providers. The following options manage this functionality. update_cachedirOn supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False. diff_cache_eventsWhen the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud host and the data in the cache, fire events which describe the changes. This setting can be True or False. Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return. cache_event_strip_fields: The following are events that can be fired based on this data. salt/cloud/minionid/cache_node_newA new node was found on the cloud host which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event. salt/cloud/minionid/cache_node_missingA node that was previously listed in the cloud cachedir is no longer available on the cloud host. salt/cloud/minionid/cache_node_diffOne or more pieces of data in the cloud cachedir have changed on the cloud host. A dict containing both the old and the new data will be contained in the event. SSH Known HostsNormally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped. If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking.
This can be set up in the main cloud configuration file (normally /usr/local/etc/salt/cloud) or in the provider-specific configuration file: known_hosts_file: /path/to/.ssh/known_hosts If this is not set, it will default to /dev/null, and strict host key checking will be turned off. It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality. SSH AgentNew in version 2015.5.0. If the ssh key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate. ssh_agent: True File Map UploadNew in version 2014.7.0. The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires that a provider use salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack and nova drivers. The file_map can be configured globally in /usr/local/etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like: ubuntu14: Running Pre-Flight CommandsNew in version 2018.3.0. To execute specified preflight shell commands on a VM before the deploy script is run, use the preflight_cmds option. These must be defined as a list in a cloud configuration file. For example: my-cloud-profile: These commands will run in sequence before the bootstrap script is executed. Force Minion ConfigNew in version 2018.3.0. The force_minion_config option requests the bootstrap process to overwrite an existing minion configuration file and public/private key files. Default: False This might be important for drivers (such as saltify) which are expected to take over a connection from a former salt master. my_saltify_provider: Troubleshooting StepsTroubleshooting Salt CloudThis page describes various steps for troubleshooting problems that may arise while using Salt Cloud. Virtual Machines Are Created, But Do Not RespondAre TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found here. Generic Troubleshooting StepsThis section describes a set of instructions that are useful to a large number of situations, and are likely to solve most issues that arise. Debug ModeFrequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious: salt-cloud -p myprofile myinstance -l debug Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured. Salt BootstrapBy default, Salt Cloud uses the Salt Bootstrap script to provision instances. This script is packaged with Salt Cloud, but may be updated without updating the Salt package: salt-cloud -u The Bootstrap LogIf the default deploy script was used, there should be a file in the /tmp/ directory called bootstrap-salt.log. This file contains the full output from the deployment, including any errors that may have occurred.
Keeping Temp FilesSalt Cloud uploads minion-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The --keep-tmp option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems: salt-cloud -p myprofile myinstance --keep-tmp By default, Salt Cloud will create a directory on the target instance called /tmp/.saltcloud/. This directory should be owned by the user that is to execute the deploy script, and should have permissions of 0700. Most cloud hosts are configured to use root as the default initial user for deployment, and as such, this directory and all files in it should be owned by the root user. The /tmp/.saltcloud/ directory should contain the deploy script, along with the minion configuration and key files.
Unprivileged Primary UsersSome cloud hosts, most notably EC2, are configured with a different primary user. Some common examples are ec2-user, ubuntu, fedora, and bitnami. In these cases, the /tmp/.saltcloud/ directory and all files in it should be owned by this user. Some cloud hosts, such as EC2, are configured to not require these users to provide a password when using the sudo command. Because it is more secure to require sudo users to provide a password, other hosts are configured that way. If the instance requires a sudo password, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration: sudo_password: mypassword /tmp/ is Mounted as noexecIt is more secure to mount the /tmp/ directory with a noexec option. This is uncommon on most cloud hosts, but very common in private environments. To see if the /tmp/ directory is mounted this way, run the following command: mount | grep tmp If the output of this command includes a line that looks like this, then the /tmp/ directory is mounted as noexec: tmpfs on /tmp type tmpfs (rw,noexec) If this is the case, then the deploy_command will need to be changed in order to run the deploy script through the sh command, rather than trying to execute it directly. This may be specified in either the provider or the profile config: deploy_command: sh /tmp/.saltcloud/deploy.sh Please note that by default, Salt Cloud will place its files in a directory called /tmp/.saltcloud/. This may also be changed in the provider or profile configuration: tmp_dir: /tmp/.saltcloud/ If this directory is changed, then the deploy_command needs to be changed to reflect the tmp_dir configuration. Executing the Deploy Script ManuallyIf all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues: cd /tmp/.saltcloud/ ./deploy.sh Extending Salt CloudWriting Cloud Driver ModulesSalt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source. There are two basic types of cloud modules. If a cloud host is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at: http://libcloud.apache.org/ Not every cloud host is supported by libcloud. Additionally, not every feature in a supported cloud host is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud. All Driver ModulesThe following functions are required by all driver modules, whether or not they are based on libcloud. The __virtual__() FunctionThis function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True.
An example of this may be seen in the Azure module: https://github.com/saltstack/salt/tree/master/salt/cloud/clouds/msazure.py The get_configured_provider() FunctionThis function uses config.is_provider_configured() to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma. Libcloud Based ModulesWriting a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project. The create() FunctionThe most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud host, wait for it to become available, and then (optionally) log in and install Salt on it. A good example to follow for writing a cloud driver module based on libcloud is the module provided for Linode: https://github.com/saltstack/salt/tree/master/salt/cloud/clouds/linode.py The basic flow of a create() function is described below.
At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate. When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables. The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile, and provider. A set of kwargs is then usually created, to describe the parameters required by the cloud host to request the virtual machine. An event is then fired to state that a virtual machine is about to be requested. It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud host. Any private information (such as passwords) should not be sent in the event. After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud host does not currently support Windows. This will save time in the future if the host does eventually decide to support Windows. An event is then fired to state that the deploy process is about to begin. This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys) should be stripped from the deploy kwargs before the event is fired. If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and the salt.utils.cloud.deploy_script() will be called. Both of these functions will wait for the target machine to become available, then for the necessary port to accept logins, and then for a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module. The salt.utils.cloud.validate_windows_cred() function has been extended to take the number of retries and retry_delay parameters in case a specific cloud host has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or as a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end-user the ability to customize the number of tries and delay between tries for their particular host. After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created.
The payload contains the names of the VM, profile, and provider. Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud host. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the host). The libcloudfuncs FunctionsA number of other functions are required for all cloud hosts. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports: from salt.cloud.libcloudfuncs import * # pylint: disable=W0614,W0401 import salt.utils.functools And then a series of declarations will make the necessary functions available within the cloud module. get_size = salt.utils.functools.namespaced_function(get_size, globals()) get_image = salt.utils.functools.namespaced_function(get_image, globals()) avail_locations = salt.utils.functools.namespaced_function(avail_locations, globals()) avail_images = salt.utils.functools.namespaced_function(avail_images, globals()) avail_sizes = salt.utils.functools.namespaced_function(avail_sizes, globals()) script = salt.utils.functools.namespaced_function(script, globals()) destroy = salt.utils.functools.namespaced_function(destroy, globals()) list_nodes = salt.utils.functools.namespaced_function(list_nodes, globals()) list_nodes_full = salt.utils.functools.namespaced_function(list_nodes_full, globals()) list_nodes_select = salt.utils.functools.namespaced_function(list_nodes_select, globals()) If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal. These functions are required for all cloud modules, and are described in detail in the next section. Non-Libcloud Based ModulesIn some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module. A good example of a non-libcloud driver is the DigitalOcean driver: https://github.com/saltstack/salt/tree/master/salt/cloud/clouds/digitalocean.py The create() FunctionThe create() function must be created as described in the libcloud-based module documentation. The get_size() FunctionThis function is only necessary for libcloud-based modules, and does not need to exist otherwise. The get_image() FunctionThis function is only necessary for libcloud-based modules, and does not need to exist otherwise. The avail_locations() FunctionThis function returns a list of locations available, if the cloud host uses multiple data centers. It is not necessary if the cloud host uses only one data center. It is normally called using the --list-locations option. salt-cloud --list-locations my-cloud-provider The avail_images() FunctionThis function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option.
salt-cloud --list-images my-cloud-provider The avail_sizes() FunctionThis function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option. salt-cloud --list-sizes my-cloud-provider The script() FunctionThis function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver. The destroy() FunctionThis function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying. Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed. This function is normally called with the -d option: salt-cloud -d myinstance The list_nodes() FunctionThis function returns a list of nodes available on this cloud provider, using the following fields: id, image, size, state, private_ips, and public_ips.
No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option: salt-cloud -Q The list_nodes_full() FunctionAll information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions, both within Salt and in third-party code, will break if an expected field is not present. This function is normally called with the -F option: salt-cloud -F The list_nodes_select() FunctionThis function returns only the fields specified in the query.selection option in /usr/local/etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library. A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is:
def list_nodes_select(call=None):
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full('function'), __opts__['query.selection'], call,
    )
However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately: def list_nodes_select(conn=None, call=None): This function is normally called with the -S option: salt-cloud -S The show_instance() FunctionThis function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action: salt-cloud -a show_instance myinstance Actions and FunctionsExtra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider. ActionsActions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option: salt-cloud -a show_instance myinstance Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None. Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like: def show_instance(name, call=None): Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section. FunctionsFunctions are calls that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option: salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit' A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None. Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user.
A basic function looks like: def show_image(kwargs, call=None): Take note that generic kwargs are passed through to functions as kwargs and not **kwargs. Cloud deployment scriptsSalt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the script. In older versions, this was the os argument. This was changed in 0.8.2. A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at: https://github.com/saltstack/salt-bootstrap Note that, somewhat counter-intuitively, this script is referenced as bootstrap-salt in the configuration. You can specify a deploy script in the cloud configuration file (/usr/local/etc/salt/cloud by default): script: bootstrap-salt Or in a provider: my-provider: Or in a profile: my-profile: If you do not specify a script argument in your cloud configuration file, provider configuration or profile configuration, the "bootstrap-salt" script will be used by default. Other Generic Deploy ScriptsIf you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree: curl-bootstrap curl-bootstrap-git python-bootstrap wget-bootstrap wget-bootstrap-git These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt-bootstrap script, such as updating to specific git tags. Custom Deploy ScriptsIf the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in shell and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete salt setup, such as installing the Salt minion and placing the minion's configuration and keys on the target.
A good, well commented example of this process is the Fedora deployment script: https://github.com/saltstack/salt/blob/master/salt/cloud/deploy/Fedora.sh A number of legacy deploy scripts are included with the release tarball. None of them is as functional or complete as Salt Bootstrap; they are still included for academic purposes. Custom deploy scripts are picked up from /usr/local/etc/salt/cloud.deploy.d by default, but you can change the location of deploy scripts with the cloud configuration deploy_scripts_search_path. Additionally, if your deploy script has the extension .sh, you can leave out the extension in your configuration. For example, if your custom deploy script is located in /usr/local/etc/salt/cloud.deploy.d/my_deploy.sh, you could specify it in a cloud profile like this: my-profile: You're also free to use the full path to the script if you like. Using full paths, your script doesn't have to live inside /usr/local/etc/salt/cloud.deploy.d or whatever you've configured with deploy_scripts_search_path. Post-Deploy CommandsOnce a minion has been deployed, it has the option to run a salt command. Normally, this would be state.apply, which would finish provisioning the VM. Another common option (for testing) is to use test.version. This is configured in the main cloud config file: start_action: state.apply This is currently considered to be experimental functionality, and may not work well with all cloud hosts. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead. Skipping the Deploy ScriptFor whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line: salt-cloud --no-deploy -p micro_aws my_instance Or it can be set from the main cloud config file: deploy: False Or it can be set from the provider's configuration: RACKSPACE.user: example_user RACKSPACE.apikey: 123984bjjas87034 RACKSPACE.deploy: False Or even on the VM's profile settings: ubuntu_aws: The default for deploy is True. In the profile, you may also set the script option to None: script: None This is the slowest option, since it still uploads the None deploy script and executes it. Updating Salt BootstrapSalt Bootstrap can be updated automatically with salt-cloud: salt-cloud -u salt-cloud --update-bootstrap Bear in mind that this updates to the latest stable version from: https://bootstrap.saltproject.io/stable/bootstrap-salt.sh To update the Salt Bootstrap script to the develop version, run the following command on the Salt minion host with salt-cloud installed: salt-call config.gather_bootstrap_script 'https://bootstrap.saltproject.io/develop/bootstrap-salt.sh' Or just download the file manually: curl -L 'https://bootstrap.saltproject.io/develop' > /usr/local/etc/salt/cloud.deploy.d/bootstrap-salt.sh Keeping /tmp/ FilesWhen Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added: salt-cloud -p myprofile mymachine --keep-tmp For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Deploy Script ArgumentsCustom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script: aws-amazon: This has also been tested to work with pipes, if needed: script_args: '| head' Using Salt Cloud from SaltUsing the Salt Modules for CloudIn addition to the salt-cloud command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner. Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt's own file.managed state function. For example, the following directories allow this configuration to be managed easily: /usr/local/etc/salt/cloud.providers.d/ /usr/local/etc/salt/cloud.profiles.d/ Minion KeysKeep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will not attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided. Execution ModuleThe cloud module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available: list_imagesThis command is designed to show images that are available to be used to create an instance using Salt Cloud. In general, they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified: salt myminion cloud.list_images my-cloud-provider list_sizesThis command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general, they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing sizes requires a provider to be configured, and specified: salt myminion cloud.list_sizes my-cloud-provider list_locationsThis command is designed to show locations that are available to be used to create an instance using Salt Cloud. In general, they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing locations requires a provider to be configured, and specified: salt myminion cloud.list_locations my-cloud-provider queryThis command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields: id, image, size, state, private_ips, and public_ips.
This command may also be used to perform a full query or a select query, as described below. The following usages are available: salt myminion cloud.query salt myminion cloud.query list_nodes salt myminion cloud.query list_nodes_full full_queryThis command behaves like the query command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the query command. salt myminion cloud.full_query select_queryThis command behaves like the query command, but only returns select fields as defined in the /usr/local/etc/salt/cloud configuration file. A sample configuration for this section of the file might look like: query.selection: This configuration would only return the id and key_name fields, for those cloud providers that support those two fields. This would be called using the following command: salt myminion cloud.select_query profileThis command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it. salt myminion cloud.profile ec2-centos64-x64 my-new-instance Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation. createThis command is similar to the profile command, in that it is used to create a new instance. However, it does not require a profile to be pre-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance: salt myminion cloud.create my-ec2-config my-new-instance \ Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation. destroyThis command is used to destroy an instance or instances. This command will search all configured providers and remove any instance(s) which matches the name(s) passed in here. The results of this command are non-reversible and should be used with caution. salt myminion cloud.destroy myinstance salt myminion cloud.destroy myinstance1,myinstance2 actionThis command implements both the action and the function commands used in the standard salt-cloud command. If one of the standard action commands is used, an instance name must be provided. If one of the standard function commands is used, a provider configuration must be named. salt myminion cloud.action start instance=myinstance salt myminion cloud.action show_image provider=my-ec2-config \ The actions available are largely dependent upon the module for the specific cloud provider; some actions, however, are available for all cloud providers.
State ModuleA subset of the execution module is available through the cloud state module. Not all functions are currently included, because there is currently insufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time. cloud.presentThis state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the cloud.create execution module and function may be declared here, but only the actual presence of the instance will be managed statefully. my-instance-name: cloud.profileThis state will ensure that an instance is present inside a particular cloud provider. This function calls the cloud.profile execution module and function, but as with cloud.present, only the actual presence of the instance will be managed statefully. my-instance-name: cloud.absentThis state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is non-reversible and may be considered especially destructive when issued as a cloud state. my-instance-name: Runner ModuleThe cloud runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master. Using the functions in the runner module is no different than using those in the execution module, outside of the behavior described in the above paragraph. A number of functions are available inside the runner, mirroring those in the execution module.
Outside of the standard usage of salt-run itself, commands are executed as usual: salt-run cloud.profile ec2-centos64-x86_64 my-instance-name CloudClientThe execution, state, and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it:
import salt.cloud
import pprint

# Create a client object pointed at the master cloud configuration file
client = salt.cloud.CloudClient("/usr/local/etc/salt/cloud")

# Query all configured providers for their instances
nodes = client.query()
pprint.pprint(nodes)
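The other CloudClient methods follow the same pattern. For instance, creating an instance from a configured profile might look like the following sketch (the profile and instance names are illustrative):
# Create an instance from the 'ec2-centos' profile defined on this system
client.profile('ec2-centos', names=['web1'])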
ReactorExamples of using the reactor with Salt Cloud are available in the ec2-autoscale-reactor and salt-cloud-reactor formulas. Feature ComparisonFeature MatrixA number of features are available in most cloud hosts, but not all are available everywhere. This may be because the feature isn't supported by the cloud host itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance). This matrix shows which features are available in which cloud hosts, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud hosts, and should not be used to make business decisions concerning choosing a cloud host. In most cases, adding support for a feature to Salt Cloud requires only a little effort. Legacy DriversBoth AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those hosts. The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace. Note for DevelopersWhen adding new features to a particular cloud host, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix them are appreciated. Standard FeaturesThese are features that are available for almost every cloud host.
ActionsThese are features that are performed on a specific instance, and require an instance name to be passed in. For example: # salt-cloud -a attach_volume ami.example.com
FunctionsThese are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example: # salt-cloud -f list_images my_digitalocean
TutorialsSalt Cloud QuickstartSalt Cloud is built into Salt, and the easiest way to run Salt Cloud is directly from your Salt Master. Note that if you installed Salt via Salt Bootstrap, it may not have automatically installed salt-cloud for you. Use your distribution's package manager to install the salt-cloud package from the same repo that you used to install Salt. These repos will automatically be set up by Salt Bootstrap. Alternatively, the -L option can be passed to the Salt Bootstrap script when installing Salt. The -L option will install salt-cloud and the required libcloud package. This quickstart walks you through the basic steps of setting up a cloud host and defining some virtual machines to create. NOTE: Salt Cloud has its own process and does not rely on the
Salt Master, so it can be installed on a standalone minion instead of your
Salt Master.
Define a ProviderThe first step is to add the credentials for your cloud host. Credentials and other settings provided by the cloud host are stored in provider configuration files. Provider configurations contain the details needed to connect to a cloud host such as EC2, GCE, Rackspace, etc., and any global options that you want set on your cloud minions (such as the location of your Salt Master). On your Salt Master, browse to /usr/local/etc/salt/cloud.providers.d/ and create a file called <provider>.conf, replacing <provider> with ec2, softlayer, and so on. The name helps you identify the contents, and is not important as long as the file ends in .conf. Next, browse to the Provider specifics and add any required settings for your cloud host to this file. Here is an example for Amazon EC2: my-ec2: The required configuration varies between cloud hosts so make sure you read the provider specifics. List Cloud Provider OptionsYou can now query the cloud provider you configured for available locations, images, and sizes. This information is used when you set up VM profiles. salt-cloud --list-locations <provider_name> # my-ec2 in the previous example salt-cloud --list-images <provider_name> salt-cloud --list-sizes <provider_name> Replace <provider_name> with the name of the provider configuration you defined. Create VM ProfilesOn your Salt Master, browse to /usr/local/etc/salt/cloud.profiles.d/ and create a file called <profile>.conf, replacing <profile> with ec2, softlayer, and so on. The file must end in .conf. You can now add any custom profiles you'd like to define to this file. Here are a few examples: micro_ec2: Notice that the provider in our profile matches the provider name that we defined? That is how Salt Cloud knows how to connect to a cloud host to create a VM with these attributes. Create VMsVMs are created by calling salt-cloud with the following options: salt-cloud -p <profile> <name1> <name2> ... For example: salt-cloud -p micro_ec2 minion1 minion2 Destroy VMsAdd a -d and the minion name you provided to destroy: salt-cloud -d minion1 minion2 Query VMsYou can view details about the VMs you've created using --query: salt-cloud --query Cloud MapNow that you know how to create and destroy individual VMs, next you should learn how to use a cloud map to create a number of VMs at once. Cloud maps let you define a map of your infrastructure and quickly provision any number of VMs. On subsequent runs, any VMs that do not exist are created, and VMs that are already configured are left unmodified. See Cloud Map File. Using Salt Cloud with the Event ReactorOne of the most powerful features of the Salt framework is the Event Reactor. As the Reactor was in development, Salt Cloud was regularly updated to take advantage of the Reactor upon completion. As such, various aspects of both the creation and destruction of instances with Salt Cloud fire events to the Salt Master, which can be used by the Event Reactor. Event StructureAs of this writing, all events in Salt Cloud have a tag, which includes the ID of the instance being managed, and a payload which describes the task that is currently being handled. A Salt Cloud tag looks like: salt/cloud/<minion_id>/<task> For instance, the first event fired when creating an instance named web1 would look like: salt/cloud/web1/creating Assuming this instance is using the ec2-centos profile, which is in turn using the ec2-config provider, the payload for this tag would look like: {"name": "web1", "profile": "ec2-centos", "provider": "ec2-config:ec2"}
Available EventsWhen an instance is created in Salt Cloud, whether by map, profile, or directly through an API, a minimum of five events are normally fired. More may be available, depending upon the cloud provider being used. Some of the common events are described below. salt/cloud/<minion_id>/creatingThis event states simply that the process to create an instance has begun. At this point in time, no actual work has begun. The payload for this event includes: name profile provider salt/cloud/<minion_id>/requestingSalt Cloud is about to make a request to the cloud provider to create an instance. At this point, all of the variables required to make the request have been gathered, and the payload of the event will reflect those variables which do not normally pose a security risk. What is returned here is dependent upon the cloud provider. Some common variables are: name image size location salt/cloud/<minion_id>/queryingThe instance has been successfully requested, but the necessary information to log into the instance (such as IP address) is not yet available. This event marks the beginning of the process to wait for this information. The payload for this event normally only includes the instance_id. salt/cloud/<minion_id>/waiting_for_sshThe information required to log into the instance has been retrieved, but the instance is not necessarily ready to be accessed. Following this event, Salt Cloud will wait for the IP address to respond to a ping, then wait for the specified port (usually 22) to respond to a connection, and on Linux systems, for SSH to become available. Salt Cloud will attempt to issue the date command on the remote system, as a means to check for availability. If no ssh_username has been specified, a list of usernames (starting with root) will be attempted. If one or more usernames was configured for ssh_username, they will be added to the beginning of the list, in order. The payload for this event normally only includes the ip_address. salt/cloud/<minion_id>/deployingThe necessary port has been detected as available, and now Salt Cloud can log into the instance, upload any files used for deployment, and run the deploy script. Once the script has completed, Salt Cloud will log back into the instance and remove any remaining files. A number of variables are used to deploy instances, and the majority of these will be available in the payload. Any keys, passwords or other sensitive data will be scraped from the payload. Most of the variables returned will be related to the profile or provider config, and any default values that could have been changed in the profile or provider, but weren't. salt/cloud/<minion_id>/createdThe deploy sequence has completed, and the instance is now available, Salted, and ready for use. This event is the final task for Salt Cloud, before returning instance information to the user and exiting. The payload for this event contains little more than the initial creating event. This event is required in all cloud providers. Filtering EventsWhen creating a VM, it is possible with certain tags to filter how much information is sent to the event bus. The tags that can be filtered on any provider are salt/cloud/<minion_id>/creating, salt/cloud/<minion_id>/requesting, and salt/cloud/<minion_id>/created.
Other providers may allow other tags to be filtered; when that is the case, the documentation for that provider will contain more details. To filter information, create a section in your /usr/local/etc/salt/cloud file called filter_events. Create a section for each tag that you want to filter, using the last segment of the tag. For instance, use creating to represent salt/cloud/<minion_id>/creating:

filter_events:
  creating:
    keys:
      - name
      - profile
      - provider

Any keys listed here will be added to the default keys that are already set to be displayed for that provider. If you wish to start with a clean slate and only show the keys specified, add another option called use_defaults and set it to False:

filter_events:
  creating:
    keys:
      - name
      - profile
      - provider
    use_defaults: False

Configuring the Event ReactorThe Event Reactor is built into the Salt Master process, and as such is configured via the master configuration file. Normally this will be a YAML file located at /usr/local/etc/salt/master. Additionally, master configuration items can be stored, in YAML format, inside the /usr/local/etc/salt/master.d/ directory. These configuration items may be stored in either location; however, they may only be stored in one location. For organizational and security purposes, it may be best to create a single configuration file, which contains only Event Reactor configuration, at /usr/local/etc/salt/master.d/reactor. The Event Reactor uses a top-level configuration item called reactor. This block contains a list of tags to be watched for, each of which also includes a list of sls files. For instance:

reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/minion_start.sls
  - 'salt/cloud/*/created':
    - /srv/reactor/cloud_created.sls
  - 'salt/cloud/*/destroyed':
    - /srv/reactor/cloud_destroyed.sls

The above configuration configures reactors for three different tags: one which is fired when a minion process has started and is available to receive commands, one which is fired when a cloud instance has been created, and one which is fired when a cloud instance is destroyed. Note that each tag contains a wildcard (*) in it. For each of these tags, this will normally refer to a minion_id. This is not required of event tags, but is very common. Reactor SLS FilesReactor sls files should be placed in the /srv/reactor/ directory for consistency between environments, but this is not currently enforced by Salt. Reactor sls files follow a similar format to other sls files in Salt. By default they are written in YAML and can be templated using Jinja, but since they are processed through Salt's rendering system, any available renderer (JSON, Mako, Cheetah, etc.) can be used. As with other sls files, each stanza will start with a declaration ID, followed by the function to run, and then any arguments for that function. For example:

# /srv/reactor/cloud-alert.sls
new_instance_alert:
  cmd.pagerduty.create_event:
    - tgt: alertserver
    - kwarg:
        description: "New instance: {{ data['name'] }}"
        details: "New cloud instance created on {{ data['provider'] }}"
        service_key: <pagerduty-service-key>
        profile: my-pagerduty-account

When the Event Reactor receives an event notifying it that a new instance has been created, this sls will create a new incident in PagerDuty, using the configured PagerDuty account. The declaration ID in this example is new_instance_alert. The function called is cmd.pagerduty.create_event. The cmd portion of this function specifies that an execution module and function will be called, in this case the pagerduty.create_event function. Because an execution module is specified, a target (tgt) must be specified on which to call the function. In this case, a minion called alertserver has been used. Any arguments passed through to the function are declared in the kwarg block. Reactor-Based HighstateWhen Salt Cloud creates an instance, by default it will install the Salt Minion onto the instance, along with any specified minion configuration, and automatically accept that minion's keys on the master.
One of the configuration options that can be specified is startup_states, which is commonly set to highstate. This will tell the minion to immediately apply a highstate, as soon as it is able to do so. This can present a problem with some system images on some cloud hosts. For instance, Salt Cloud can be configured to log in as either the root user, or a user with sudo access. While some hosts commonly use images that lock out remote root access and require a user with sudo privileges to log in (notably EC2, with their ec2-user login), most cloud hosts fall back to root as the default login on all images, including for operating systems (such as Ubuntu) which normally disallow remote root login. For users of these operating systems, it is understandable that a highstate would include configuration to block remote root logins again. However, Salt Cloud may not have finished cleaning up its deployment files by the time the minion process has started and kicked off a highstate run. Users have reported errors caused by Salt Cloud getting locked out while trying to clean up after itself. The goal of a startup state may instead be achieved using the Event Reactor. Because a minion fires an event when it is able to receive commands, this event can effectively be used inside the reactor system. The following will point the reactor system to the right sls file:

reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/startup_highstate.sls

And the following sls file will start a highstate run on the target minion:

# /srv/reactor/startup_highstate.sls
reactor_highstate:
  cmd.state.highstate:
    - tgt: {{ data['id'] }}

Because this event will not be fired until Salt Cloud has cleaned up after itself, the highstate run will not step on salt-cloud's toes. And because every file on the minion is configurable, including /usr/local/etc/salt/minion, the startup_states can still be configured for future minion restarts, if desired. SALT PROXY MINIONProxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion but, for security reasons, will not. There are some proxy modules available, but if your device interface is not currently supported you will most likely have to write the interface yourself, because there are an infinite number of controllable devices. Fortunately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis. Salt proxy-minions provide the 'plumbing' that allows device enumeration and discovery, control, status, remote execution, and state management. See the Proxy Minion Walkthrough for an end-to-end demonstration of a working REST-based proxy minion. See the Proxy Minion SSH Walkthrough for an end-to-end demonstration of a working SSH proxy minion. See Proxyminion States to configure and run salt-proxy on a remote minion. Specify all your master side proxy (pillar) configuration and use this state to remotely configure proxies on one or more minions. See Proxyminion Beacon to help with easy configuration and management of salt-proxy processes. New in 2017.7.0The proxy_merge_grains_in_module configuration variable, introduced in 2016.3, now defaults to True.
The connection with the remote device is kept alive by default, when the module implements the alive function and proxy_keep_alive is set to True. The polling interval is set using the proxy_keep_alive_interval option, which defaults to 1 minute. Developers can also use proxy_always_alive when designing a proxy module flexible enough to open the connection with the remote device only when required. New in 2016.11.0Proxy minions now support configuration files with names ending in '*.conf' and placed in /usr/local/etc/salt/proxy.d. Proxy minions can now be configured in /usr/local/etc/salt/proxy or /usr/local/etc/salt/proxy.d instead of just pillar. The configuration format is the same as it would be in pillar. New in 2016.3The deprecated config option enumerate_proxy_minions has been removed. As mentioned in earlier documentation, the add_proxymodule_to_opts configuration variable defaults to False in this release. This means if you have proxymodules or other code looking in __opts__['proxymodule'] you will need to set this variable in your /usr/local/etc/salt/proxy file, or modify your code to use the __proxy__ injected variable. The __proxyenabled__ directive now only applies to grains and proxy modules themselves. Standard execution modules and state modules are not prevented from loading for proxy minions. Enhancements in grains processing have made the __proxyenabled__ directive somewhat redundant in dynamic grains code. It is still required, but best practices for the __virtual__ function in grains files have changed. It is now recommended that the __virtual__ functions check to make sure they are being loaded for the correct proxytype, as in the example below:

def __virtual__():
    try:
        if salt.utils.platform.is_proxy() and __opts__["proxy"]["proxytype"] == "rest_sample":
            return __virtualname__
    except KeyError:
        pass
    return False

The try/except block above exists because grains are processed very early in the proxy minion startup process, sometimes before the proxy key in the __opts__ dictionary is populated. Grains are loaded so early in startup that no dunder dictionaries are present, so __proxy__, __salt__, etc. are not available. Custom grains located in /usr/local/etc/salt/states/_grains and in the salt install grains directory can now take a single argument, proxy, that is identical to __proxy__. This enables patterns like:

def get_ip(proxy):
    '''
    Ask the proxied device what IP it has
    '''
    return {'ip': proxy['proxymodulename.get_ip']()}

Then the grain ip will contain the result of calling the get_ip() function in the proxymodule called proxymodulename. Proxy modules now benefit from including a function called initialized(). This function should return True if the proxy's init() function has been successfully called. This is needed to make grains processing easier. Finally, if there is a function called grains in the proxymodule, it will be executed on proxy-minion startup and its contents will be merged with the rest of the proxy's grains. Since older proxy-minions might have used other methods to call such a function and add its results to grains, this is config-gated by a new proxy configuration option called proxy_merge_grains_in_module. This defaults to True in the 2017.7.0 release. New in 2015.8.2BREAKING CHANGE: Adding the proxymodule variable to __opts__ is deprecated. The proxymodule variable has been moved to a new globally-injected variable called __proxy__. A related configuration option called add_proxymodule_to_opts has been added and defaults to True. In the next major release, 2016.3.0, this variable will default to False. In the meantime, proxies that functioned under 2015.8.0 and .1 should continue to work under 2015.8.2. You should rework your proxy code to use __proxy__ as soon as possible.
The rest_sample example proxy minion has been updated to use __proxy__. This change was made because proxymodules are a LazyLoader object, but LazyLoaders cannot be serialized. __opts__ gets serialized, and so things like saltutil.sync_all and state.highstate would throw exceptions. Support has been added to Salt's loader allowing custom proxymodules to be placed in salt://_proxy. Proxy minions that need these modules will need to be restarted to pick up any changes. A corresponding utility function, saltutil.sync_proxymodules, has been added to sync these modules to minions. In addition, a salt.utils helper function called is_proxy() was added to make it easier to tell when the running minion is a proxy minion. NOTE: This function was renamed to salt.utils.platform.is_proxy() for the 2018.3.0 release. New in 2015.8Starting with the 2015.8 release of Salt, proxy processes are no longer forked off from a controlling minion. Instead, they have their own script salt-proxy which takes mostly the same arguments that the standard Salt minion does, with the addition of --proxyid. This is the id that the salt-proxy will use to identify itself to the master. Proxy configurations are still best kept in Pillar and their format has not changed. This change allows for better process control and logging. Proxy processes can now be listed with standard process management utilities (ps from the command line). Also, a full Salt minion is no longer required (though it is still strongly recommended) on machines hosting proxies. Getting StartedThe following diagram may be helpful in understanding the structure of a Salt installation that includes proxy-minions: [image] The key thing to remember is the left-most section of the diagram. Salt's nature is to have a minion connect to a master, and the master then controls the minion. However, for proxy minions, the target device cannot run a minion. After the proxy minion is started and initiates its connection to the device, it connects back to the salt-master and for all intents and purposes looks like just another minion to the Salt master. To create support for a proxied device one needs to create four things:
Configuration parametersProxy minions require no configuration parameters in /usr/local/etc/salt/master. Salt's Pillar system is ideally suited for configuring proxy-minions (though they can be configured in /usr/local/etc/salt/proxy as well). Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples, which are based on the diagram above:

/usr/local/etc/salt/pillar/top.sls
base:

/usr/local/etc/salt/pillar/net-device1.sls
proxy:

/usr/local/etc/salt/pillar/net-device2.sls
proxy:

/usr/local/etc/salt/pillar/net-device3.sls
proxy:

/usr/local/etc/salt/pillar/i2c-device4.sls
proxy:

/usr/local/etc/salt/pillar/i2c-device5.sls
proxy:

/usr/local/etc/salt/pillar/433wireless-device6.sls
proxy:

/usr/local/etc/salt/pillar/smsgate-device7.sls
proxy:

Note the contents of each minioncontroller key may differ widely based on the type of device that the proxy-minion is managing. In the above example, each pillar file configures a different kind of proxied device.
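As a concrete sketch of this pattern (assuming the rest_sample proxymodule shown later in this section; adapt the proxytype and keys to your own proxymodule), the top file and one device's pillar file might look like this:

# /usr/local/etc/salt/pillar/top.sls
base:
  net-device1:
    - net-device1

# /usr/local/etc/salt/pillar/net-device1.sls
proxy:
  proxytype: rest_sample
  url: http://<ip or dns name of the device>:8000

The proxytype key selects the proxymodule that will be loaded for that minion; the remaining keys are whatever that proxymodule needs to reach the device.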
Because of the way pillar works, each of the salt-proxy processes that fork off the proxy minions will only see the keys specific to the proxies it will be handling. Proxies can be configured in /usr/local/etc/salt/proxy or with files in /usr/local/etc/salt/proxy.d as of Salt's 2016.11.0 release. Also, in general, proxy-minions are lightweight, so the machines that run them could conceivably control a large number of devices. To run more than one proxy from a single machine, simply start an additional proxy process with --proxyid set to the id to which you want the proxy to bind. It is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices. ProxymodulesA proxy module encapsulates all the code necessary to interface with a device. Proxymodules are located inside the salt.proxy module, or can be placed in the _proxy directory in your file_roots (default is /usr/local/etc/salt/states/_proxy). At a minimum a proxymodule object must implement the following functions: __virtual__(): This function performs the same duty that it does for other types of Salt modules. Logic goes here to determine if the module can be loaded, checking for the presence of Python modules on which the proxy depends. Returning False will prevent the module from loading. init(opts): Perform any initialization that the device needs. This is a good place to bring up a persistent connection to a device, or authenticate to create a persistent authorization token. initialized(): Returns True if init() was successfully called. shutdown(): Code to cleanly shut down or close a connection to a controlled device goes here. This function must exist, but can contain only the keyword pass if there is no shutdown logic required. ping(): While not required, it is highly recommended that this function also be defined in the proxymodule. The code for ping should contact the controlled device and make sure it is really available. alive(opts): Another optional function, it is used together with the proxy_keep_alive option (default: True). This function should return a boolean value corresponding to the state of the connection. If the connection is down, Salt will try to restart it (shutdown followed by init). The polling frequency is controlled using the proxy_keep_alive_interval option, in minutes. grains(): Rather than including grains in /usr/local/etc/salt/states/_grains or in the standard install directories for grains, grains can be computed and returned by this function. This function will be called automatically if proxy_merge_grains_in_module is set to True in /usr/local/etc/salt/proxy. This variable defaults to True in the release code-named 2017.7.0. Prior to 2015.8, the proxymodule also had to implement an id() function; 2015.8 and later releases do not use it because the proxy's id is passed on the command line. Here is an example proxymodule used to interface to a very simple REST server. Code for the server is in the salt-contrib GitHub repository. This proxymodule enables "service" enumeration, starting, stopping, restarting, and status; "package" installation; and a ping. # -*- coding: utf-8 -*-
"""
This is a simple proxy-minion designed to connect to and communicate with
the bottle-based web service contained in https://github.com/saltstack/salt-contrib/tree/master/proxyminion_rest_example
"""
from __future__ import absolute_import
# Import python libs
import logging
import salt.utils.http
HAS_REST_EXAMPLE = True
# This must be present or the Salt loader won't load this module
__proxyenabled__ = ["rest_sample"]
# Variables are scoped to this module so we can have persistent data
# across calls to fns in here.
GRAINS_CACHE = {}
DETAILS = {}
# Want logging!
log = logging.getLogger(__file__)
# This does nothing, it's here just as an example and to provide a log
# entry when the module is loaded.
def __virtual__():
    # A real proxymodule would check for required libraries here. This
    # example only logs that it was loaded and reports itself available.
    log.debug("rest_sample proxy __virtual__() called...")
    return True
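Before moving on to grains, here is a sketch of the grains-function pattern described in the next paragraph: a function that accepts a single proxy argument and cross-calls a function in the proxymodule. The rest_sample.fns name follows the example module above; treat it as a placeholder for your own proxymodule's functions.

def proxy_functions(proxy):
    '''
    The loader will execute a grains function like this one with a single
    argument holding a reference to the proxymodule, enabling cross-calls
    into the module that talks to the controlled device.
    '''
    if proxy:
        return {'proxy_functions': proxy['rest_sample.fns']()}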
Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. By default, a proxy minion will have several grains taken from the host. Salt core code requires values for kernel, os, and os_family--all of these are forced to be proxy for proxy-minions. To add others to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in. The proxy_functions sketch above demonstrates how a grains function can take a single argument, which will be set to the value of __proxy__. Dunder variables are not yet injected into Salt processes at the time grains are loaded, so this enables us to get a handle to the proxymodule so we can cross-call the functions therein used to communicate with the controlled device. Note that as of 2016.3, grains values can also be calculated in a function called grains() in the proxymodule itself. This might be useful if a proxymodule author wants to keep all the code for the proxy interface in the same place instead of splitting it between the proxy and grains directories. This function will only be called automatically if the configuration variable proxy_merge_grains_in_module is set to True in the proxy configuration file (default /usr/local/etc/salt/proxy). This variable defaults to True in the release code-named 2017.7.0. The __proxyenabled__ directiveIn previous versions of Salt the __proxyenabled__ directive controlled loading of all Salt modules for proxies (e.g. grains, execution modules, state modules). From 2016.3 on, the only modules that respect __proxyenabled__ are grains and proxy modules. These modules need to be told which proxy they work with. __proxyenabled__ is a list, and can contain a single '*' to indicate a grains module works with all proxies. Example from salt/grains/rest_sample.py:

# -*- coding: utf-8 -*-
"""
Generate baseline proxy minion grains
"""
from __future__ import absolute_import
import salt.utils.platform

__proxyenabled__ = ["rest_sample"]
__virtualname__ = "rest_sample"


def __virtual__():
    # Load only when running as the rest_sample proxy. Grains are processed
    # too early for the dunder dictionaries, hence the defensive KeyError catch.
    try:
        if salt.utils.platform.is_proxy() and __opts__["proxy"]["proxytype"] == "rest_sample":
            return __virtualname__
    except KeyError:
        pass
    return False

Salt Proxy Minion End-to-End ExampleThe following is a walkthrough that documents how to run a sample REST service and configure one or more proxy minions to talk to and control it.
pip install bottle==0.12.8
[image] Now, configure your salt-proxy.
master: localhost
base:
  p8000:
    - p8000

This says that Salt's pillar should load some values for the proxy p8000 from the file /usr/local/etc/salt/pillar/p8000.sls (if you have not changed your default pillar_roots).
proxy:
  proxytype: rest_sample
  url: http://<ip or dns name of host with rest example>:8000

In other words, if your REST service is listening on port 8000 on 127.0.0.1, the 'url' key above should say url: http://127.0.0.1:8000
salt-proxy --proxyid=p8000 -l debug
salt-key -y -a p8000
The following keys are going to be accepted:
Unaccepted Keys:
p8000
Key for minion p8000 accepted.
salt p8000 test.version
SSH ProxymodulesSee above for a general introduction to writing proxy modules. All of the guidelines that apply to REST are the same for SSH. This section specifically talks about the SSH proxy module and explains the workings of the example proxy module ssh_sample. Here is a simple example proxymodule used to interface to a device over SSH. Code for the SSH shell is in the salt-contrib GitHub repository. This proxymodule enables "package" installation. # -*- coding: utf-8 -*-
"""
This is a simple proxy-minion designed to connect to and communicate with
a server that exposes functionality via SSH.
This can be used as an option when the device does not provide
an api over HTTP and doesn't have the python stack to run a minion.
"""
from __future__ import absolute_import
# Import python libs
import salt.utils.json
import logging
# Import Salt's libs
from salt.utils.vt_helper import SSHConnection
from salt.utils.vt import TerminalException
# This must be present or the Salt loader won't load this module
__proxyenabled__ = ["ssh_sample"]
DETAILS = {}
# Want logging!
log = logging.getLogger(__file__)
# This does nothing, it's here just as an example and to provide a log
# entry when the module is loaded.
def __virtual__():
    # Nothing needs to be checked for this simple example; log that the
    # module loaded and report it as available.
    log.debug("ssh_sample proxy __virtual__() called...")
    return True
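As a sketch of the connection setup and teardown described next, assuming the pillar supplies host, username, and password keys (as in the walkthrough below) and using the DETAILS dictionary and SSHConnection import from the module above. The exact SSHConnection keyword arguments may vary between Salt releases:

def init(opts):
    '''
    Open the SSH connection to the proxied device using Salt VT.
    '''
    DETAILS["server"] = SSHConnection(
        host=opts["proxy"]["host"],
        username=opts["proxy"]["username"],
        password=opts["proxy"]["password"],
    )


def shutdown(opts):
    '''
    Tear down the SSH connection when the proxy minion stops.
    '''
    DETAILS["server"].close_connection()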
Connection SetupThe init() method is responsible for connection setup. It uses the host, username and password config variables defined in the pillar data. The prompt kwarg can be passed to SSHConnection if your SSH server's prompt differs from the example's prompt (Cmd). Instantiating the SSHConnection class establishes an SSH connection to the ssh server (using Salt VT). Command executionThe package_* methods use the SSH connection (established in init()) to send commands out to the SSH server. The sendline() method of the SSHConnection class can be used to send commands out to the server. In the above example we send commands like pkg_list or pkg_install. You can send any SSH command via this utility. Output parsingOutput returned by sendline() is a tuple of strings representing the stdout and the stderr respectively. In the toy example shown we simply scrape the output and convert it to a python dictionary, as shown in the parse method. You can tailor this method to match your parsing logic. Connection teardownThe shutdown method is responsible for calling the close_connection() method of the SSHConnection class. This ends the SSH connection to the server. For more information please refer to the class SSHConnection. Salt Proxy Minion SSH End-to-End ExampleThe following is a walkthrough that documents how to run a sample SSH service and configure one or more proxy minions to talk to and control it.
Now, configure your salt-proxy.
master: localhost
multiprocessing: False
base:
  p8000:
    - p8000

This says that Salt's pillar should load some values for the proxy p8000 from the file /usr/local/etc/salt/pillar/p8000.sls (if you have not changed your default pillar_roots).
proxy:
  proxytype: ssh_sample
  host: <ip or dns name of your ssh server>
  username: <ssh username>
  password: <ssh password>
salt-proxy --proxyid=p8000 -l debug
salt-key -y -a p8000
The following keys are going to be accepted:
Unaccepted Keys:
p8000
Key for minion p8000 accepted.
salt p8000 pkg.list_pkgs
New in version 2015.8.3. Proxy Minion BeaconThe salt proxy beacon is meant to facilitate configuring multiple proxies on one or many minions. This should simplify configuring and managing multiple salt-proxy processes.
base:
  p8000:
    - p8000

This says that Salt's pillar should load some values for the proxy p8000 from the file /usr/local/etc/salt/pillar/p8000.sls (if you have not changed your default pillar_roots).
proxy:
  proxytype: rest_sample
  url: http://<ip or dns name of host with rest example>:8000

This should complete the proxy setup for p8000.
beacons:
  salt_proxy:
    - proxies:
        p8000: {}

Once this beacon is configured it will automatically start the salt-proxy process. If the salt-proxy process is terminated the beacon will re-start it.
salt-key -y -a p8000
The following keys are going to be accepted:
Unaccepted Keys:
p8000
Key for minion p8000 accepted.
salt p8000 pkg.list_pkgs

New in version 2015.8.2. Proxy Minion StatesSalt proxy state can be used to deploy, configure and run a salt-proxy instance on your minion. Configure proxy settings on the master side and the state configures and runs salt-proxy on the remote end.
base:
  p8000:
    - p8000

This says that Salt's pillar should load some values for the proxy p8000 from the file /usr/local/etc/salt/pillar/p8000.sls (if you have not changed your default pillar_roots).
proxy:
  proxytype: rest_sample
  url: http://<ip or dns name of host with rest example>:8000
salt-proxy-configure:
  salt_proxy.configure_proxy:
    - proxyname: p8000
    - start: True
Example using state.sls to configure and run salt-proxy:

# salt device_minion state.sls salt_proxy

This starts salt-proxy on device_minion.
salt-key -y -a p8000
The following keys are going to be accepted:
Unaccepted Keys:
p8000
Key for minion p8000 accepted.
salt p8000 pkg.list_pkgs

NETWORK AUTOMATIONNetwork automation is a continuous process of automating the configuration, management, and operations of a computer network. Although the abstraction could be compared with operations on the server side, there are many particular challenges, the most important being that a network device is traditionally closed hardware able to run proprietary software only. In other words, the user is not able to install the salt-minion package directly on a traditional network device. For these reasons, most network devices can be controlled only remotely via proxy minions or using Salt SSH. However, there are also vendors producing whitebox equipment (e.g. Arista, Cumulus) or others that have moved the operating system into a container (e.g. Cisco NX-OS, Cisco IOS-XR), allowing the salt-minion to be installed directly on the platform. New in Carbon (2016.11)The methodologies for network automation were introduced in 2016.11.0. Network automation support is based on proxy minions.
NAPALMNAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open source Python library that implements a set of functions to interact with different router vendor devices using a unified API. Being vendor-agnostic simplifies operations, as the configuration and interaction with the network device does not rely on a particular vendor. [image] Beginning with 2017.7.0, the NAPALM modules have been transformed so they can run in both proxy and regular minions. That means, if the operating system allows, the salt-minion package can be installed directly on the network gear. The interface between the network operating system and Salt in that case would be the corresponding NAPALM sub-package. For example, if the user installs the salt-minion on an Arista switch, the only requirement is napalm-eos. The following modules are available in 2017.7.0:
Getting startedInstall NAPALM - follow the notes and check the platform-specific dependencies. Salt's Pillar system is ideally suited for configuring proxy-minions (though they can be configured in /usr/local/etc/salt/proxy as well). Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples:

/usr/local/etc/salt/pillar/top.sls
base:

/usr/local/etc/salt/pillar/router1.sls
proxy:

/usr/local/etc/salt/pillar/router2.sls
proxy:

/usr/local/etc/salt/pillar/switch1.sls
proxy:

/usr/local/etc/salt/pillar/switch2.sls
proxy:

/usr/local/etc/salt/pillar/cpe1.sls
proxy:

CLI examplesDisplay the complete running configuration on router1:

$ sudo salt 'router1' net.config source='running'

Retrieve the NTP servers configured on all devices:

$ sudo salt '*' ntp.servers
router1:

Display the ARP tables on all Cisco devices running IOS-XR 5.3.3:

$ sudo salt -G 'os:iosxr and version:5.3.3' net.arp

Return operational details for interfaces from Arista switches:

$ sudo salt -C 'sw* and os:eos' net.interfaces

Execute traceroute from the edge of the network:

$ sudo salt 'router*' net.traceroute 8.8.8.8 vrf='CUSTOMER1-VRF'

Verbatim display from the CLI of Juniper routers:

$ sudo salt -C 'router* and G@os:junos' net.cli 'show version and haiku'

Retrieve the results of the RPM probes configured on Juniper MX960 routers:

$ sudo salt -C 'router* and G@os:junos and G@model:MX960' probes.results

Return the list of configured users on the CPEs:

$ sudo salt 'cpe*' users.config

Using the BGP finder, return the list of BGP neighbors that are down:

$ sudo salt-run bgp.neighbors up=False

Using the NET finder, determine the devices containing the pattern "PX-1234-LHR" in their interface description:

$ sudo salt-run net.find PX-1234-LHR

Cross-platform configuration management example: NTPAssuming that the user adds the following two lines under file_roots:

file_roots:
  base:
    - /usr/local/etc/salt/states
    - /usr/local/etc/salt/templates

Define the list of NTP peers and servers wanted:

/usr/local/etc/salt/pillar/ntp.sls
ntp.peers:
  - <ntp peer 1>
ntp.servers:
  - <ntp server 1>
  - <ntp server 2>

Include the new file: for example, if we want to have the same NTP servers on all network devices, we can add the following inside the top.sls file:

/usr/local/etc/salt/pillar/top.sls
base:
  '*':
    - ntp

Or include only where needed:

/usr/local/etc/salt/pillar/top.sls
base:
  router1:
    - ntp

Define the cross-vendor template:

/usr/local/etc/salt/templates/ntp.jinja
{%- if grains.vendor|lower == 'cisco' %}
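{# Sketch: one plausible continuation of this cross-vendor template.
   The branch structure and CLI syntax below are illustrative only;
   consult each vendor's configuration syntax for the real statements. #}
no ntp
{%- for server in servers %}
ntp server {{ server }}
{%- endfor %}
{%- elif grains.vendor|lower == 'juniper' %}
system {
  replace:
  ntp {
  {%- for server in servers %}
    server {{ server }};
  {%- endfor %}
  }
}
{%- endif %}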
Define the SLS state file, making use of the Netconfig state module:

/usr/local/etc/salt/states/router/ntp.sls
ntp_config_example:
  netconfig.managed:
    - template_name: salt://ntp.jinja
    - peers: {{ pillar.get('ntp.peers', []) }}
    - servers: {{ pillar.get('ntp.servers', []) }}

Run the state and ensure NTP configuration consistency across your multi-vendor network:

$ sudo salt 'router*' state.sls router.ntp

Besides the CLI, the state can be scheduled or executed when triggered by a certain event. JUNOSJuniper has developed a Junos-specific proxy infrastructure which allows remote execution and configuration management of Junos devices without having to install SaltStack on the device. The infrastructure includes:
The execution and state modules are implemented using junos-eznc (PyEZ). Junos PyEZ is a microframework for Python that enables you to remotely manage and automate devices running the Junos operating system. Getting startedInstall PyEZ on the system which will run the Junos proxy minion. It is required to run Junos-specific modules.

pip install junos-eznc

Next, set the master of the proxy minions.

/usr/local/etc/salt/proxy
master: <master_ip>

Add the details of the Junos device. Device details are usually stored in salt pillars. If you do not wish to store credentials in the pillar, you can set up passwordless ssh.

/usr/local/etc/salt/pillar/vmx_details.sls
proxy:
  proxytype: junos
  host: <ip or dns name of the junos device>
  username: <username>
  password: <password>

Map the pillar file to the proxy minion. This is done in the top file.

/usr/local/etc/salt/pillar/top.sls
base:
  vmx:
    - vmx_details

NOTE: Before starting the Junos proxy make sure that netconf is
enabled on the Junos device. This can be done by adding the following
configuration on the Junos device.
set system services netconf ssh

Start the salt master.

salt-master -l debug

Then start the salt proxy.

salt-proxy --proxyid=vmx -l debug

Once the master and junos proxy minion have started, we can run execution and state modules on the proxy minion. Below are a few examples. CLI examplesFor detailed documentation of all the junos execution modules, refer to the Junos execution module documentation. Display device facts:

$ sudo salt 'vmx' junos.facts

Refresh the Junos facts. This function will also refresh the facts which are stored in salt grains. (The Junos proxy stores Junos facts in the salt grains.)

$ sudo salt 'vmx' junos.facts_refresh

Call an RPC:

$ sudo salt 'vmx' junos.rpc 'get-interface-information' '/var/log/interface-info.txt' terse=True

Install config on the device:

$ sudo salt 'vmx' junos.install_config 'salt://my_config.set'

Shut down the junos device:

$ sudo salt 'vmx' junos.shutdown shutdown=True in_min=10

State file examplesFor detailed documentation of all the junos state modules, refer to the Junos state module documentation. Executing an RPC on the Junos device and storing the output in a file:

/usr/local/etc/salt/states/rpc.sls
get-interface-information:

Lock the junos device, load the configuration, commit it and unlock the device:

/usr/local/etc/salt/states/load.sls
lock the config:

According to the device personality, install the appropriate image on the device:

/usr/local/etc/salt/states/image_install.sls
{% if grains['junos_facts']['personality'] == 'MX' %}
salt://images/mx_junos_image.tgz:
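  # (sketch) A plausible completion of this state, assuming the junos
  # state module's install_os function; the arguments are illustrative.
  junos.install_os:
    - timeout: 100
    - reboot: True
{% endif %}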
Junos Syslog EngineJunos Syslog Engine is a Salt engine which receives data from various Junos devices, extracts event information and forwards it on the master/minion event bus. To start the engine on the salt master, add the following configuration in the master config file. The engine can also run on the salt minion.

/usr/local/etc/salt/master
engines:
  - junos_syslog:
      port: xxx

For the junos_syslog engine to receive events, syslog must be set on the Junos device. This can be done via the following configuration:

set system syslog host <ip-of-the-salt-device> port xxx any any

SALT VIRTThe Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as an alpha technology. The initial Salt Virt system supports core cloud operations:
Many features are currently under development to enhance the capabilities of the Salt Virt systems. NOTE: Salt was originally developed with the intent of using the Salt communication system as the backbone to a cloud controller. This means that the Salt Virt system is not an afterthought, but simply a system that took a back seat to other development. The original attempt to develop the cloud control aspects of Salt was a project called butter. This project never took off, but it was functional and proved the early viability of Salt as a cloud controller.
WARNING: Salt Virt does not work with KVM that is running in a VM.
KVM must be running on the base hardware.
Salt Virt TutorialA tutorial about how to get Salt Virt up and running has been added to the tutorial section: Cloud Controller Tutorial The Salt Virt RunnerThe point of interaction with the cloud controller is the virt runner. The virt runner comes with routines to execute specific virtual machine tasks. Reference documentation for the virt runner is available with the runner module documentation: Virt Runner Reference Based on Live State DataThe Salt Virt system is based on using Salt to query live data about hypervisors and then using the data gathered to make decisions about cloud operations. This means that no external resources are required to run Salt Virt, and that the information gathered about the cloud is live and accurate. Deploy from Network or DiskVirtual Machine Disk ProfilesSalt Virt allows for the disks created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar. This configuration option is called virt.disk. The default virt.disk data structure looks like this: virt.disk: NOTE: The format and model do not need to be defined; Salt will default to the optimal format used by the underlying hypervisor, which in the case of kvm is qcow2 and virtio.
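A sketch of that default structure (the values shown are assumed typical defaults; size is in MB):

virt.disk:
  default:
    - system:
        size: 8192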
This configuration sets up a disk profile called default. The default profile creates a single system disk on the virtual machine. Define More ProfilesMany environments will require more complex disk profiles and may require more than one profile; this can be easily accomplished:

virt.disk:
  default:
    - system:
        size: 8192
  database:
    - system:
        size: 8192
    - data:
        size: 30720
  web:
    - system:
        size: 4096
    - logs:
        size: 8192

This configuration allows for one of three profiles to be selected, allowing virtual machines to be created to match the storage needs of the deployed VM. Virtual Machine Network ProfilesSalt Virt allows for the network devices created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar. This configuration option is called virt:nic. By default the virt:nic option is not set, and Salt falls back to a data structure which looks like this: virt: NOTE: The model does not need to be defined; Salt will default to the optimal model used by the underlying hypervisor, which in the case of kvm is virtio.
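A sketch of that fallback structure (assumed; it matches the description that follows):

virt:
  nic:
    default:
      eth0:
        bridge: br0
        model: virtio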
This configuration sets up a network profile called default. The default profile creates a single Ethernet device on the virtual machine that is bridged to the hypervisor's br0 interface. This default setup does not require setting up the virt:nic configuration, and is the reason why a default install only requires setting up the br0 bridge device on the hypervisor. Define More ProfilesMany environments will require more complex network profiles and may require more than one profile; this can be easily accomplished: virt: This configuration allows for one of six profiles to be selected, allowing virtual machines to be created which attach to different networks depending on the needs of the deployed VM. ONEDIR PACKAGINGRelenv onedir packagingStarting in 3006, only onedir packaging will be available. The 3006 onedir packages are built with the relenv tool. Docker ContainersThe Salt Project uses docker containers to build our deb and rpm packages. If you are building your own packages you can use the same containers we build with in the GitHub pipelines. These containers are documented here. How to build onedir only
pip install relenv
relenv toolchain fetch
relenv fetch --python=<python-version>
relenv create --python=<python-version> <relenv-package-path>
<relenv-package-path>/bin/pip install /path/to/salt

How to build rpm packages
cd <path-to-salt-repo>
yum -y install python3 python3-pip openssl git rpmdevtools rpmlint systemd-units libxcrypt-compat gnupg2 jq createrepo rpm-sign rustc cargo epel-release
yum -y install patchelf
pip install awscli
pip install -r requirements/static/ci/py{python_version}/tools.txt
pip install -r requirements/static/ci/py{python_version}/changelog.txt
tools changelog update-rpm <salt-version>
Only the arch argument is required, the rest are
optional.
tools pkg build rpm --relenv-version <relenv-version> --python-version <python-version> --arch <arch>

How to build deb packages
cd <path-to-salt-repo>
apt install -y apt-utils gnupg jq awscli python3 python3-venv python3-pip build-essential devscripts debhelper bash-completion git patchelf rustc
pip install -r requirements/static/ci/py{python_version}/tools.txt
pip install -r requirements/static/ci/py{python_version}/changelog.txt
tools changelog update-deb <salt-version>
Only the arch argument is required, the rest are
optional.
tools pkg build deb --relenv-version <relenv-version> --python-version <python-version> --arch <arch>

How to build MacOS packages
cd <path-to-salt-repo>
pip install -r requirements/static/ci/py{python_version}/tools.txt
Only the salt-version argument is required, the rest are
optional. Do note that you will not be able to sign the packages when building
them.
tools pkg build macos --salt-version <salt-version>

How to build Windows packages
cd <path-to-salt-repo>
pip install -r requirements/static/ci/py{python_version}/tools.txt
Only the arch and salt-version arguments are required,
the rest are optional. Do note that you will not be able to sign the packages
when building them.
tools pkg build windows --salt-version <salt-version> --arch <arch>

How to access python binaryThe python library is available in the install directory of the onedir package. For example, on Linux the default location would be /opt/saltstack/salt/bin/python3. Testing the packagesIf you want to test your built packages, or any other collection of salt packages post 3006.0, follow this guide. Testing packagesThe package test suiteThe salt repo provides a test suite for testing basic functionality of our packages at <repo-root>/pkg/tests/. You can run the install, upgrade, and downgrade tests. These tests run automatically on most PRs that are submitted against Salt. WARNING: These tests make destructive changes to your system
because they install the built packages onto the system. They may also install
older versions in the case of upgrades or downgrades. To prevent destructive
changes, run the tests in an isolated system, preferably a virtual
machine.
SetupIn order to run the package tests, the relenv onedir and built packages need to be placed in the correct locations.
The following are a few ways this can be accomplished easily:
Using toolsSalt has preliminary support for setting up the package test suite in the tools command suite that is located under <repo-root>/tools/testsuite/. This method requires the GitHub CLI tool gh (https://cli.github.com/) to be properly configured for interaction with the salt repo.
pip install -r requirements/static/ci/py{python_version}/tools.txt
tools ts setup --platform {linux|darwin|windows} --slug <operating-system-slug> --pr <pr-number> --pkg
The most common use case is to test the packages built on a CI/CD run for a given PR. To see the possible options for each argument, and other ways to utilize this command, use the following: tools ts setup -h WARNING: You can only download artifacts from finished workflow
runs. This is something imposed by the GitHub API. To download artifacts from
a running workflow run, you either have to wait for the finish or cancel
it.
Downloading individuallyIf the tools ts setup command doesn't work, you can download, unzip, and place the artifacts in the correct locations manually. Typically, you want to test packages built on a CI/CD run for a given PR. This guide explains how to set up for running the package tests using those artifacts. An analogous process can be performed for artifacts from nightly builds.
Under the summary page for the most recent actions run
for that PR, there is a list of available artifacts from that run that can be
downloaded. Download the package artifacts by finding
salt-<major>.<minor>+<number>.<sha>-<arch>-<pkg-type>.
For example, the amd64 deb packages might look like:
salt-3006.2+123.01234567890-x86_64-deb.
The onedir artifact will look like salt-<major>.<minor>+<number>.<sha>-onedir-<platform>-<arch>.tar.xz. For instance, the macos x86_64 onedir may have the name salt-3006.2+123.01234567890-onedir-darwin-x86_64.tar.xz. NOTE: Windows onedir artifacts have .zip extensions instead of .tar.xz.
While it is optional, it is recommended to download the nox session artifact as well. This will have the form of nox-<os-name>-test-pkgs-onedir-<arch>. The amd64 Ubuntu 20.04 nox artifact may look like nox-ubuntu-20.04-test-pkgs-onedir-x86_64.
Unzip the packages and place them in
<repo-root>/artifacts/pkg/.
You must unzip and untar the onedir packages and place them in <repo-root>/artifacts/. Windows onedir requires an additional unzip action. If you set it up correctly, the <repo-root>/artifacts/salt directory then contains the uncompressed onedir files. Additionally, decompress the nox artifact and place it under <repo-root>/.nox/. Running the testsYou can run the test suite if all the artifacts are in the correct location. NOTE: You need root access to run the test artifacts. Run all
nox commands at the root of the salt repo and as the root user.
pip install nox
nox -e test-pkgs-onedir -- install
nox -e test-pkgs-onedir -- upgrade --prev-version <previous-version> You can run the downgrade tests in the same way, replacing upgrade with downgrade. NOTE: If you are testing upgrades or downgrades and classic
packages are available for your system, replace upgrade or
downgrade with upgrade-classic or downgrade-classic
respectively to test against those versions.
COMMAND LINE REFERENCEsalt-apisalt-apiStart interfaces used to remotely connect to the salt master Synopsissalt-api DescriptionThe Salt API system manages network api connectors for the Salt Master Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt-api(7) salt(7) salt-master(1) salt-callsalt-callSynopsissalt-call [options] DescriptionThe salt-call command is used to run module functions locally on a minion instead of executing them from the master. Salt-call is used to run a Standalone Minion, and was originally created for troubleshooting. The Salt Master is contacted to retrieve state files and other resources during execution unless the --local option is specified. NOTE: salt-call commands execute from the current user's
shell context, while salt commands execute from the system's default
context.
Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Output Options
highstate, json, key,
overstatestage, pprint, raw, txt, yaml, and
many others.
Some outputters are formatted only for data returned from specific functions. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
See alsosalt(1) salt-master(1) salt-minion(1) saltsaltSynopsissalt '*' [ options ] sys.doc
salt -E '.*' [ options ] sys.doc cmd
salt -G 'os:Arch.*' [ options ] test.version
salt -C 'G@os:Arch.* and webserv* or G@kernel:FreeBSD' [ options ] test.version

DescriptionSalt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Target SelectionThe default matching that Salt utilizes is shell-style globbing around the minion id. See https://docs.python.org/3/library/fnmatch.html#module-fnmatch.
Output Options
highstate, json, key,
overstatestage, pprint, raw, txt, yaml, and
many others.
Some outputters are formatted only for data returned from specific functions. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
NOTE: If using --out=json, you will probably want
--static as well. Without the static option, you will get a separate
JSON string per minion which makes JSON output invalid as a whole. This is due
to using an iterative outputter. So if you want to feed it to a JSON parser,
use --static as well.
See alsosalt(7) salt-master(1) salt-minion(1) salt-cloudsalt-cpsalt-cpCopy a file or files to one or more minions Synopsissalt-cp '*' [ options ] SOURCE [SOURCE2 SOURCE3 ...] DEST salt-cp -E '.*' [ options ] SOURCE [SOURCE2 SOURCE3 ...] DEST salt-cp -G 'os:Arch.*' [ options ] SOURCE [SOURCE2 SOURCE3 ...] DEST Descriptionsalt-cp copies files from the master to all of the Salt minions matched by the specified target expression. NOTE: salt-cp uses Salt's publishing mechanism. This means the
privacy of the contents of the file on the wire is completely dependent upon
the transport in use. In addition, if the master or minion is running with
debug logging, the contents of the file will be logged to disk.
In addition, this tool is less efficient than the Salt fileserver when copying larger files. It is recommended to instead use cp.get_file to copy larger files to minions. However, this requires the file to be located within one of the fileserver directories. Changed in version 2016.3.7,2016.11.6,2017.7.0: Compression support added, disable with -n. Also, if the destination path ends in a path separator (i.e. /, or \ on Windows), the destination will be assumed to be a directory. Finally, recursion is now supported, allowing for entire directories to be copied. Changed in version 2016.11.7,2017.7.2: Reverted back to the old copy mode to preserve backward compatibility. The new functionality added in 2016.6.6 and 2017.7.0 is now available using the -C or --chunked CLI arguments. Note that compression, recursive copying, and support for copying large files are only available in chunked mode. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Target SelectionThe default matching that Salt utilizes is shell-style globbing around the minion id. See https://docs.python.org/3/library/fnmatch.html#module-fnmatch.
See alsosalt(1) salt-master(1) salt-minion(1) salt-extendsalt-extendA utility to generate extensions to the Salt source code. This is used for:
Synopsissalt-extend --help Descriptionsalt-extend is a templating tool for extending SaltStack. If you're looking to add a module to SaltStack, then the salt-extend utility can guide you through the process. You can use Salt Extend to quickly create templated modules for adding new behaviours to some of the module subsystems within Salt. Salt Extend takes a template directory and merges it into a SaltStack source code directory. See also: Salt Extend. Options
See alsosalt-api(1) salt-call(1) salt-cloud(1) salt-cp(1) salt-key(1) salt-main(1) salt-master(1) salt-minion(1) salt-run(1) salt-ssh(1) salt-syndic(1) salt-keysalt-keySynopsissalt-key [ options ] DescriptionSalt-key executes simple management of Salt server public keys used for authentication. On initial connection, a Salt minion sends its public key to the Salt master. This key must be accepted using the salt-key command on the Salt master. Salt minion keys can be in one of the following states:
To change the state of a minion key, use -d to delete the key and then accept or reject the key. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Output Options
highstate, json, key,
overstatestage, pprint, raw, txt, yaml, and
many others.
Some outputters are formatted only for data returned from specific functions. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
Actions
Key Generation Options
See alsosalt(7) salt-master(1) salt-minion(1) salt-mastersalt-masterThe Salt master daemon, used to control the Salt minions Synopsissalt-master [ options ] DescriptionThe master daemon controls the Salt minions Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt(1) salt(7) salt-minion(1) salt-minionsalt-minionThe Salt minion daemon, receives commands from a remote Salt master. Synopsissalt-minion [ options ] DescriptionThe Salt minion receives commands from the central Salt master and replies with the results of said commands. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt(1) salt(7) salt-master(1) salt-proxysalt-proxyReceives commands from a Salt master and proxies these commands to devices that are unable to run a full minion. Synopsissalt-proxy [ options ] DescriptionThe Salt proxy minion receives commands from a Salt master, transmits appropriate commands to devices that are unable to run a minion, and replies with the results of said commands. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt(1) salt(7) salt-master(1) salt-minion(1) salt-runsalt-runExecute a Salt runner Synopsissalt-run RUNNER Descriptionsalt-run is the frontend command for executing Salt Runners. Salt runners are simple modules used to execute convenience functions on the master Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt(1) salt-master(1) salt-minion(1) salt-sshsalt-sshSynopsissalt-ssh '*' [ options ] sys.doc salt-ssh -E '.*' [ options ] sys.doc cmd DescriptionSalt SSH allows for salt routines to be executed using only SSH for transport Options
Authentication Options
Scan Roster Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Target SelectionThe default matching that Salt utilizes is shell-style globbing around the minion id. See https://docs.python.org/3/library/fnmatch.html#module-fnmatch.
Output Options
highstate, json, key,
overstatestage, pprint, raw, txt, yaml, and
many others.
Some outputters are formatted only for data returned from specific functions. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
NOTE: If using --out=json, you will probably want
--static as well. Without the static option, you will get a separate
JSON string per minion which makes JSON output invalid as a whole. This is due
to using an iterative outputter. So if you want to feed it to a JSON parser,
use --static as well.
See alsosalt(7) salt-master(1) salt-minion(1) salt-syndicsalt-syndicThe Salt syndic daemon, a special minion that passes through commands from a higher master Synopsissalt-syndic [ options ] DescriptionThe Salt syndic daemon, a special minion that passes through commands from a higher master. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
See alsosalt(1) salt-master(1) salt-minion(1) spmspmSalt Package Manager Synopsisspm <command> [<argument>] Descriptionspm is the frontend command for managing Salt packages. Packages normally only include formulas, meaning a group of SLS files that install into the file_roots on the Salt Master, but Salt modules can also be installed. Options
Logging OptionsLogging options which override any settings defined on the configuration files.
Commands
See alsosalt(1) salt-master(1) salt-minion(1) PILLARSSalt includes a number of built-in external pillars, listed at pillar modules. The below links contain documentation for the configuration options
Note that some of the same configuration options from the master are present in the minion configuration file; these are used in masterless mode. The source for the built-in Salt pillars can be found here: salt/pillar MASTER TOPSSalt includes a number of built-in subsystems to generate top file data; they are listed at master tops modules. The source for the built-in Salt master tops can be found here: salt/tops SALT MODULE REFERENCEThis section contains a list of the Python modules that are used to extend the various subsystems within Salt. auth modules
salt.auth.autoAn "Always Approved" eauth interface to test against, not intended for production use
salt.auth.djangoProvide authentication using Django Web Framework Django authentication depends on the presence of the django framework in the PYTHONPATH, the Django project's settings.py file being in the PYTHONPATH and accessible via the DJANGO_SETTINGS_MODULE environment variable. Django auth can be defined like any other eauth module: external_auth: This will authenticate Fred via Django and allow him to run any execution module and all runners. The authorization details can optionally be located inside the Django database. The relevant entry in the models.py file would look like this: class SaltExternalAuthModel(models.Model): The external_auth clause in the master config would then look like this: external_auth: When a user attempts to authenticate via Django, Salt will import the package indicated via the keyword ^model. That model must have the fields indicated above, though the model DOES NOT have to be named 'SaltExternalAuthModel'.
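For reference, a sketch of what such a model might look like (the field names follow the pattern described above and are illustrative; as noted, the model does not have to be named SaltExternalAuthModel):

from django.contrib.auth.models import User
from django.db import models


class SaltExternalAuthModel(models.Model):
    # The Django user this ACL entry applies to.
    user_fk = models.ForeignKey(User, on_delete=models.CASCADE)
    # Minion glob or function matcher, e.g. '*' or 'web*'.
    minion_or_fn_matcher = models.CharField(max_length=255)
    # Function matcher, e.g. 'test.*'.
    minion_fn = models.CharField(max_length=255)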
Database records such as:
Should result in an eauth config such as: fred:
salt.auth.fileProvide authentication using local files New in version 2018.3.0. The file auth module allows simple authentication via local files. Different filetypes are supported, including:
NOTE: The python-passlib library is required when using
a ^filetype of htpasswd or htdigest.
The simplest example is a plaintext file with usernames and passwords: external_auth: In this example the /etc/insecure-user-list.txt file would be formatted as follows: dean:goneFishing gene:OceanMan ^filename is the only required parameter. Any parameter that begins with a ^ is passed directly to the underlying file authentication function via kwargs, with the leading ^ being stripped. The text file option is configurable to work with legacy formats: external_auth: This would authenticate users against a file of the following format: 46|trey|16a0034f90b06bf3c5982ed8ac41aab4 555|mike|b6e02a4d2cb2a6ef0669e79be6fd02e4 2001|page|14fce21db306a43d3b680da1a527847a 8888|jon|c4e94ba906578ccf494d71f45795c6cb NOTE: The hashutil.digest execution function is used for
comparing hashed passwords, so any algorithm supported by that function will
work.
There is also support for Apache-style htpasswd and htdigest files: external_auth: When using htdigest the ^realm must be set: external_auth:
NOTE: The following parameters are only used with the
text filetype.
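Putting the plaintext example above together, a minimal sketch of the master configuration might look like the following (the module glob for the user dean is an assumption, not a prescribed value):

external_auth:
  file:
    ^filename: /etc/insecure-user-list.txt
    dean:
      - test.*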
salt.auth.keystoneProvide authentication using OpenStack Keystone
salt.auth.ldapProvide authentication using simple LDAP binds
salt.auth.mysqlProvide authentication using MySQL. When using MySQL as an authentication backend, you will need to create or use an existing table that has a username and a password column. To get started, create a simple table that holds just a username and a password. The password field will hold a SHA256 checksum. CREATE TABLE `users` ( To create a user within MySQL, execute the following statement. INSERT INTO users VALUES (NULL, 'diana', SHA2('secret', 256))
mysql_auth: The auth_sql contains the SQL that will validate a user to ensure they are correctly authenticated. This is where you can specify other SQL queries to authenticate users. Enable MySQL authentication. external_auth:
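A minimal sketch completing the truncated statements above (column sizes and all connection values are assumptions, not defaults):

CREATE TABLE `users` (
  `id` INT NOT NULL AUTO_INCREMENT,  -- matches the NULL placeholder in the INSERT above
  `username` VARCHAR(25),
  `password` VARCHAR(70),            -- stores the SHA256 checksum
  PRIMARY KEY (`id`)
);

The corresponding master configuration might then be:

mysql_auth:
  hostname: localhost
  database: SaltStack
  username: root
  password: letmein
  auth_sql: 'SELECT username FROM users WHERE username = "{0}" AND password = SHA2("{1}", 256)'

external_auth:
  mysql:
    diana:
      - test.*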
salt.auth.pamAuthenticate against PAM Provides an authenticate function that will allow the caller to authenticate a user against the Pluggable Authentication Modules (PAM) on the system. Implemented using ctypes, so no compilation is necessary. There is one extra configuration option for pam: the PAM service that is authenticated against. This defaults to login: auth.pam.service: login NOTE: Solaris-like (SmartOS, OmniOS, ...) systems may need auth.pam.service set to other.
NOTE: PAM authentication will not work for the root user. The Python interface to PAM does not support authenticating as root.
NOTE: This module executes itself in a subprocess in order to use the system python and pam libraries. We do this to avoid openssl version conflicts when running under a salt onedir build.
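As an illustration, a minimal PAM eauth setup in the master config might look like this sketch (the user fred and the test.* glob are assumptions):

auth.pam.service: login

external_auth:
  pam:
    fred:
      - test.*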
salt.auth.pkiAuthenticate via a PKI certificate. NOTE: This module is Experimental and should be used with
caution
Provides an authenticate function that will allow the caller to authenticate a user via their public cert against a pre-defined Certificate Authority. TODO: Add a 'ca_dir' option to configure a directory of CA files, a la Apache.
Configure the CA cert in the master config file: external_auth: salt.auth.restProvide authentication using a REST call REST auth can be defined like any other eauth module: external_auth: If there are entries underneath the ^url entry then they are merged with any responses from the REST call. In the above example, assuming the REST call does not return any additional ACLs, this will authenticate Fred via a REST call and allow him to run any execution module and all runners. The REST call should return a JSON array that maps to a regular eauth YAML structure of a user as above.
salt.auth.sharedsecretProvide authentication using configured shared secret external_auth: The shared secret should be added to the master configuration, for example in /usr/local/etc/salt/master.d/sharedsecret.conf (make sure that file is only readable by the user running the master): sharedsecret: OIUHF_CHANGE_THIS_12h88 This auth module should be used with caution. It was initially designed to work with a frontend that takes care of authentication (for example kerberos) and places the shared secret in the HTTP headers to the salt-api call. This salt-api call should really be done on localhost to avoid someone eavesdropping on the shared secret. See the documentation for cherrypy to set up the headers in your frontend. New in version 2015.8.0.
salt.auth.yubicoProvide authentication using YubiKey. New in version 2015.5.0.
To get your YubiKey API key you will need to visit the website below. https://upgrade.yubico.com/getapikey/ The resulting page will show the generated Client ID (aka AuthID or API ID) and the generated API key (Secret Key). Make a note of both and use these two values in your /usr/local/etc/salt/master configuration. /usr/local/etc/salt/master
yubico_users: external_auth: Please wait five to ten minutes after generating the key before testing so that the API key will be updated on all the YubiCloud servers.
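A minimal sketch tying the Client ID and API key to a user (all values shown are placeholders):

yubico_users:
  damian:
    id: '12345'
    key: ABCDEFGHIJKLMNOPQRSTUVWXYZ

external_auth:
  yubico:
    damian:
      - test.*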
beacon modules
salt.beacons.adbBeacon to emit adb device state changes for Android devices New in version 2016.3.0.
beacons:
salt.beacons.aix_accountBeacon to fire an event when we notice an AIX user is locked due to too many failed login attempts. New in version 2018.3.0.
beacons:
salt.beacons.avahi_announceBeacon to announce via avahi (zeroconf) New in version 2016.11.0. Dependencies
The following are optional configuration settings:
Example Config beacons:
salt.beacons.bonjour_announceBeacon to announce via Bonjour (zeroconf)
The following are optional configuration settings:
Example Config beacons:
salt.beacons.btmpBeacon to fire events at failed login of users New in version 2015.5.0. Example Configuration# Fire events on all failed logins beacons: Use Case: Posting Failed Login Events to SlackThis can be done using the following reactor SLS: report-wtmp: Match the event like so in the master config file: reactor: NOTE: This approach uses the slack execution module
directly on the master, and therefore requires that the master has a slack API
key in its configuration:
slack: See the slack execution module documentation for more information. While you can use an individual user's API key to post to Slack, a bot user is likely better suited for this. The slack engine documentation has information on how to set up a bot user.
salt.beacons.cert_infoBeacon to monitor certificate expiration dates from files on the filesystem. New in version 3000.
beacons:
salt.beacons.diskusageBeacon to monitor disk usage. New in version 2015.5.0.
beacons: Windows drives must be quoted to avoid yaml syntax errors beacons: Regular expressions can be used as mount points. beacons: The first one will match all mounted disks beginning with "/", except /home. The second one will match disks from A: to Z: on a Windows system. Note that regular expressions are evaluated after static mount points, which means that if a regular expression matches another defined mount point, it will override the previously defined threshold.
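Sketches of the variants just described (mount points and thresholds are illustrative):

beacons:
  diskusage:
    - /: 63%
    - '/mnt/nfs': 50%

# Windows drives must be quoted to avoid yaml syntax errors
beacons:
  diskusage:
    - 'c:\': 90%

# Regular expressions as mount points
beacons:
  diskusage:
    - '^\/(?!home).*$': 90%     # all mounts beginning with "/", except /home
    - '^[a-zA-Z]:\\$': 50%      # drives A: through Z: on Windows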
salt.beacons.glxinfoBeacon to emit when a display is available to a Linux machine. New in version 2016.3.0.
beacons:
salt.beacons.haproxyWatch current connections of haproxy server backends. Fire an event when over a specified threshold. New in version 2016.11.0.
beacons:
salt.beacons.inotifyWatch files and translate the changes into salt events
beacons: The mask list can contain the following events (the default mask is create, delete, and modify):
The mask can also contain the following options:
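As an illustration, a minimal inotify beacon configuration might look like this sketch (the paths and mask entries are assumptions):

beacons:
  inotify:
    - files:
        /etc/important_file:
          mask:
            - modify
        /etc/httpd/conf.d:
          mask:
            - create
            - delete
          recurse: True
          auto_add: True
    - disable_during_state_run: True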
salt.beacons.journaldA simple beacon to watch journald for specific entries
beacons:
salt.beacons.junos_rre_keysJunos redundant routing engine beacon. NOTE: This beacon only works on the Juniper native minion.
Copies salt-minion keys to the backup RE when present. Configure with beacon: The interval above is in seconds; 43200 (every 12 hours) is recommended. salt.beacons.loadBeacon to emit system load averages
beacons:
salt.beacons.log_beaconBeacon to fire events at specific log messages. New in version 2017.7.0.
beacons: NOTE: regex matching is based on the re module
The defined tag is added to the beacon event tag. This is not the tag in the log. beacons:
salt.beacons.memusageBeacon to monitor memory usage. New in version 2016.3.0.
beacons:
salt.beacons.napalm_beaconWatch NAPALM functions and fire events on specific triggersNew in version 2018.3.0. NOTE: The NAPALM beacon works only when running under a regular Minion or a Proxy Minion, managed via NAPALM. Check the documentation for the NAPALM proxy module.
The configuration accepts a list of Salt functions to be invoked, and the corresponding output hierarchy that should be matched against. To invoke a function with certain arguments, they can be specified using the _args key, or _kwargs for more specific key-value arguments. The match structure follows the output hierarchy of the NAPALM functions, under the out key. For example, the following is the normal structure returned by the ntp.stats execution function: {
In order to fire events when the synchronization is lost with one of the NTP peers, e.g., 172.17.17.2, we can match it explicitly as: ntp.stats: There is a single nesting level, as the output of ntp.stats is just a list of dictionaries, and this beacon will compare each dictionary from the list with the structure exemplified above. NOTE: When we want to match on any element at a certain level,
we can configure * to match anything.
Considering a more complex structure consisting of multiple nested levels, e.g., the output of the bgp.neighbors execution function, to check when any neighbor from the global routing table is down, the match structure would have the format: bgp.neighbors: The match structure above will match any BGP neighbor, with any network (* matches any AS number), under the global VRF. In other words, this beacon will push an event on the Salt bus when there's a BGP neighbor down. The right operand can also accept mathematical operations (i.e., <, <=, !=, >, >= etc.) when comparing numerical values. Configuration Example: beacons: Event structure example: {
The event exemplified above has been fired when the device identified by the Minion id edge01.bjm01 has been synchronized with an NTP server at a stratum level greater than 5.
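A sketch of the two match styles discussed above (the interval and the thresholds are illustrative):

beacons:
  napalm:
    - ntp.stats:
        synchronized: false
    - bgp.neighbors:
        global:
          '*':
            up: false
    - interval: 120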
salt.beacons.network_infoBeacon to monitor statistics from ethernet adapters New in version 2015.5.0.
beacons: Emit beacon when any values are greater than configured values. beacons:
salt.beacons.network_settingsBeacon to monitor network adapter setting changes on Linux New in version 2016.3.0.
beacons: The config above will check for value changes on eth0 ipaddr and eth1 linkmode. It will also emit if the promiscuity value changes to 1. Beacon items can use the * wildcard to make a definition apply to several interfaces. For example, eth* would apply to all ethernet interfaces. Setting the argument coalesce = True will combine all the beacon results into a single event. The example below shows how to trigger coalesced results: beacons:
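A sketch of both configurations just described (interface names are assumptions):

beacons:
  network_settings:
    - interfaces:
        eth0:
          ipaddr:
          promiscuity:
            onvalue: 1
        eth1:
          linkmode:

# coalesced results on a single event
beacons:
  network_settings:
    - coalesce: True
    - interfaces:
        eth0:
          ipaddr:
          promiscuity: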
salt.beacons.pkgWatch for pkgs that have upgrades, then fire an event. New in version 2016.3.0.
beacons:
salt.beacons.proxy_exampleExample beacon to use with salt-proxy beacons:
beacons:
salt.beacons.psSend events covering process status
beacons: The config above sets up beacons to check that processes are running or stopped.
salt.beacons.salt_monitorA beacon to execute salt execution module functions. This beacon will fire only if the return data is "truthy". The function return, function name and args and/or kwargs, will be passed as data in the event. The configuration can accept a list of salt functions to execute every interval. Make sure to allot enough time via 'interval' key to allow all salt functions to execute. The salt functions will be executed sequentially. The elements in list of functions can be either a simple string (with no arguments) or a dictionary with a single key being the salt execution module and sub keys indicating args and / or kwargs. See example config below. beacons: salt.beacons.salt_proxyBeacon to manage and report the status of one or more salt proxy processes New in version 2015.8.3.
beacons:
salt.beacons.sensehat moduleMonitor temperature, humidity and pressure using the SenseHat of a Raspberry PiNew in version 2017.7.0.
beacons:
salt.beacons.serviceSend events covering service status
beacons: The config above sets up beacons to check for the salt-master and mysql services. The config also supports the following parameters for each service:
onchangeonly: when onchangeonly is True the beacon will fire events only when the service status changes. Otherwise, it will fire an event at each beacon interval. The default is False.
delay: when delay is greater than 0 the beacon will fire events only after the service status changes and the delay (in seconds) has passed. Applicable only when onchangeonly is True. The default is 0.
emitatstartup: when emitatstartup is False the beacon will not fire an event when the minion is reloaded. Applicable only when onchangeonly is True. The default is True.
uncleanshutdown: if uncleanshutdown is present it should point to the location of a pid file for the service. Most services will not clean up this pid file if they are shut down uncleanly (e.g. via kill -9) or if they are terminated through a crash such as a segmentation fault. If the file is present, then the beacon will add uncleanshutdown: True to the event. If not present, the field will be False. The field is only added when the service is NOT running. Omitting the configuration variable altogether will turn this feature off. Please note that some init systems can remove the pid file if the service registers as crashed. One such example is nginx on CentOS 7, where the service unit removes the pid file when the service shuts down (i.e. the pid file is observed as removed when kill -9 is sent to the nginx master process). The uncleanshutdown option might not be of much use there, unless the unit file is modified. Here is an example that will fire an event 30 seconds after the state of nginx changes and report an uncleanshutdown; this example is for Arch, which places nginx's pid file in /run (see the sketch below). beacons:
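A sketch of the configurations described above, including the nginx uncleanshutdown example (service names and the pid file path are illustrative):

beacons:
  service:
    - services:
        salt-master: {}
        mysql: {}

beacons:
  service:
    - services:
        nginx:
          onchangeonly: True
          delay: 30
          uncleanshutdown: /run/nginx.pid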
salt.beacons.shWatch the shell commands being executed actively. This beacon requires strace.
beacons:
salt.beacons.smartos_imgadmBeacon that fires events on image import/delete.
## minimal
# - check for new images every 1 second (salt default)
# - does not send events at startup
beacons:
salt.beacons.smartos_vmadmBeacon that fires events on vm state changes
## minimal
# - check for vm changes every 1 second (salt default)
# - does not send events at startup
beacons:
salt.beacons.statusThe status beacon is intended to send a basic health check event up to the master; this allows event-driven routines based on presence to be set up. The intention of this beacon is to add config options for monitoring stats to the health beacon, making it a one-stop shop for gathering system health and status data. New in version 2016.11.0. To configure this beacon to use the defaults, set up an empty dict for it in the minion config: beacons: By default, all of the information from the following execution module functions will be returned:
You can also configure your own set of functions to be returned: beacons: You may also configure only certain fields from each function to be returned. For instance, the loadavg function returns the following fields:
If you wanted to return only the 1-min and 5-min fields for loadavg then you would configure: beacons: Other functions only return a single value instead of a dictionary. With these, you may specify all or 0. The following are both valid: beacons: If a status function returns a list, you may return the index marker or markers for specific list items: beacons: WARNING: Not all status functions are supported for every
operating system. Be certain to check the minion log for errors after
configuring this beacon.
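Sketches of the variants described above (which functions and fields you select are up to you; the entries shown are assumptions):

# all defaults
beacons:
  status: {}

# only the 1-min and 5-min loadavg fields
beacons:
  status:
    - loadavg:
        - 1-min
        - 5-min

# single-value functions take all or 0; list returns take index markers
beacons:
  status:
    - time:
        - all
    - w:
        - 0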
salt.beacons.swapusageBeacon to monitor swap usage. New in version 3003.
beacons:
salt.beacons.telegram_bot_msgBeacon to emit Telegram messages Requires the python-telegram-bot library
beacons:
salt.beacons.twilio_txt_msgBeacon to emit Twilio text messages
beacons:
salt.beacons.watchdogWatch files and translate the changes into salt events. New in version 2019.2.0.
beacons: The mask list can contain the following events (the default mask is create, modify, delete, and move):
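As an illustration, a minimal watchdog beacon configuration might look like this sketch (the directory and mask entries are assumptions):

beacons:
  watchdog:
    - directories:
        /path/to/dir:
          mask:
            - create
            - modify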
salt.beacons.wtmpBeacon to fire events at login of users as registered in the wtmp file New in version 2015.5.0. Example Configuration# Fire events on all logins beacons: How to Tell What An Event MeansIn the events that this beacon fires, a type of 7 denotes a login, while a type of 8 denotes a logout. These values correspond to the ut_type value from a wtmp/utmp event (see the wtmp manpage for more information). In the extremely unlikely case that your platform uses different values, they can be overridden using a ut_type key in the beacon configuration: beacons: This beacon's events include an action key which will be either login or logout depending on the event type. Changed in version 2019.2.0: action key added to beacon event, and ut_type config parameter added. Use Case: Posting Login/Logout Events to SlackThis can be done using the following reactor SLS: report-wtmp: Match the event like so in the master config file: reactor: NOTE: This approach uses the slack execution module
directly on the master, and therefore requires that the master has a slack API
key in its configuration:
slack: See the slack execution module documentation for more information. While you can use an individual user's API key to post to Slack, a bot user is likely better suited for this. The slack engine documentation has information on how to set up a bot user.
cache modulesFor understanding and usage of the cache modules see the Minion Data Cache topic.
salt.cache.consulMinion data cache plugin for Consul key/value data store. New in version 2016.11.2. Changed in version 3005: Timestamp/cache updated support added.
It is up to the system administrator to set up and configure the Consul infrastructure. All that is needed for this plugin is a working Consul agent with read-write access to the key-value store. The related documentation can be found in the Consul documentation. To enable this cache plugin, the master will need the python client for Consul installed. This can be easily installed with pip: pip install python-consul Optionally, depending on the Consul agent configuration, the following values could be set in the master config. These are the defaults: consul.host: 127.0.0.1 consul.port: 8500 consul.token: None consul.scheme: http consul.consistency: default consul.dc: dc1 consul.verify: True consul.timestamp_suffix: .tstamp # Added in 3005.0 In order to bring the cache APIs into conformity, in 3005.0 timestamp information gets stored as a separate {key}.tstamp key/value. If your existing functionality depends on being able to store normal keys with the .tstamp suffix, override the consul.timestamp_suffix default config. Related docs can be found in the python-consul documentation. To use Consul as a minion data cache backend, set the master cache config value to consul: cache: consul
salt.cache.etcd_cacheMinion data cache plugin for Etcd key/value data store. New in version 2018.3.0. Changed in version 3005. It is up to the system administrator to set up and configure the Etcd infrastructure. All that is needed for this plugin is a working Etcd agent with read-write access to the key-value store. The related documentation can be found in the Etcd documentation. To enable this cache plugin, the master will need the python client for Etcd installed. This can be easily installed with pip: pip install python-etcd NOTE: While etcd API v3 has been implemented in other places
within salt, etcd_cache does not support it at this time due to fundamental
differences in how the versions are designed and v3 not being compatible with
the cache API.
Optionally, depending on the Etcd agent configuration, the following values could be set in the master config. These are the defaults: etcd.host: 127.0.0.1 etcd.port: 2379 etcd.protocol: http etcd.allow_reconnect: True etcd.allow_redirect: False etcd.srv_domain: None etcd.read_timeout: 60 etcd.username: None etcd.password: None etcd.cert: None etcd.ca_cert: None Related docs can be found in the python-etcd documentation. To use etcd as a minion data cache backend, set the master cache config value to etcd: cache: etcd In Phosphorus, ls/list was changed to always return the final name in the path. This should only make a difference if you were directly using ls on paths that were more or less nested than, for example: 1/2/3/4.
salt.cache.localfsCache data in filesystem. New in version 2016.11.0. The localfs Minion cache module is the default cache module and does not require any configuration. Expiration values can be set in the relevant config file (/usr/local/etc/salt/master for the master, /usr/local/etc/salt/cloud for Salt Cloud, etc).
salt.cache.mysql_cacheMinion data cache plugin for MySQL database. New in version 2018.3.0. It is up to the system administrator to set up and configure the MySQL infrastructure. All that is needed for this plugin is a working MySQL server. WARNING: The mysql.database and mysql.table_name will be directly
added into certain queries. Salt treats these as trusted input.
The module requires the database (default salt_cache) to exist but creates its own table if needed. The keys are indexed using the bank and etcd_key columns. To enable this cache plugin, the master will need the python client for MySQL installed. This can be easily installed with pip: pip install pymysql Optionally, depending on the MySQL server configuration, the following values could be set in the master config. These are the defaults: mysql.host: 127.0.0.1 mysql.port: 2379 mysql.user: None mysql.password: None mysql.database: salt_cache mysql.table_name: cache Related docs can be found in the python-mysql documentation. To use MySQL as a minion data cache backend, set the master cache config value to mysql: cache: mysql
salt.cache.redis_cacheRedisRedis plugin for the Salt caching subsystem. New in version 2017.7.0. Changed in version 3005. To enable this cache plugin, the master will need the python client for redis installed. This can be easily installed with pip: salt \* pip.install redis As Redis provides a simple mechanism for a very fast key-value store, in order to provide the necessary features for the Salt caching subsystem, the following conventions are used:
For example, to store the key my-key under the bank root-bank/sub-bank/leaf-bank, the following hierarchy will be built: 127.0.0.1:6379> SMEMBERS $BANK_root-bank 1) "sub-bank" 127.0.0.1:6379> SMEMBERS $BANK_root-bank/sub-bank 1) "leaf-bank" 127.0.0.1:6379> SMEMBERS $BANKEYS_root-bank/sub-bank/leaf-bank 1) "my-key" 127.0.0.1:6379> GET $KEY_root-bank/sub-bank/leaf-bank/my-key "my-value" There are four types of keys stored:
These prefixes and the separator can be adjusted using the configuration options:
The connection details can be specified using:
cache.redis.cluster.startup_nodes
Most cloud hosted redis clusters will require this to be
set to True
The database index must be specified as a string, not as an integer value!
unix_socket_path: New in version 2018.3.1.
Path to a UNIX socket for access. Overrides host / port. Configuration Example: cache.redis.host: localhost cache.redis.port: 6379 cache.redis.db: '0' cache.redis.password: my pass cache.redis.bank_prefix: #BANK cache.redis.bank_keys_prefix: #BANKEYS cache.redis.key_prefix: #KEY cache.redis.timestamp_prefix: #TICKS cache.redis.separator: '@' Cluster Configuration Example: cache.redis.cluster_mode: true cache.redis.cluster.skip_full_coverage_check: true cache.redis.cluster.startup_nodes:
This is not quite optimal: if we need to flush a bank having a very long list of sub-banks, the number of requests to build the sub-tree may grow quite big. An improvement for this would be loading a custom Lua script in the user's Redis instance (using the register_script feature) and calling it whenever we flush. This script would only need to build the sub-tree causing problems. It can be added later, and the behaviour should not change, as the user needs to explicitly allow Salt to inject scripts into their Redis instance.
cloud modules
salt.cloud.clouds.aliyunAliYun ECS Cloud ModuleNew in version 2014.7.0. The Aliyun cloud module is used to control access to the aliyun ECS. http://www.aliyun.com/ Use of this module requires the id and key parameters to be set. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/aliyun.conf: my-aliyun-config:
salt-cloud -a destroy myinstance salt-cloud -d myinstance
salt-cloud -f list_monitor_data aliyun salt-cloud -f list_monitor_data aliyun name=AY14051311071990225bd
salt-cloud -a reboot myinstance
salt-cloud -a show_disk aliyun myinstance
salt-cloud -a start myinstance
salt-cloud -a stop myinstance salt-cloud -a stop myinstance force=True salt.cloud.clouds.azurearmAzure ARM Cloud ModuleNew in version 2016.11.0. Changed in version 2019.2.0. The Azure ARM cloud module is used to control access to Microsoft Azure Resource Manager WARNING: This cloud provider will be removed from Salt in version
3007 in favor of the saltext.azurerm Salt Extension
Optional provider parameters:
Example /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/azure.conf configuration: my-azure-config with username and password:
extension_name: myvmextension
virtual_machine_name: myvm
settings: {"commandToExecute": "hostname"}
resource_group: < inferred from cloud configs > location: < inferred from cloud configs > publisher: < default: Microsoft.Azure.Extensions > virtual_machine_extension_type: < default: CustomScript > type_handler_version: < default: 2.0 > auto_upgrade_minor_version: < default: True > protected_settings: < default: None >
salt-cloud -d myminion salt-cloud -a destroy myminion service_name=myservice
salt-cloud -a start myminion
salt-cloud -a stop myminion salt.cloud.clouds.clcCenturyLink Cloud ModuleNew in version 2018.3.0. The CLC cloud module allows you to manage CLC via the CLC SDK.
Dependencies
CLC SDKclc-sdk can be installed via pip: pip install clc-sdk NOTE: For sdk reference see:
https://github.com/CenturyLinkCloud/clc-python-sdk
Flaskflask can be installed via pip: pip install flask ConfigurationTo use this module: set up the clc-sdk, user, password, key in the cloud configuration at /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/clc.conf: my-clc-config: NOTE: The provider parameter in cloud provider
configuration was renamed to driver. This change was made to avoid
confusion with the provider parameter that is used in cloud profile
configuration. Cloud provider configuration now uses driver to refer to
the salt-cloud driver that provides the underlying functionality to connect to
a cloud provider, while cloud profile configuration continues to use
provider to refer to the cloud provider configuration that you
define.
salt.cloud.clouds.cloudstackCloudStack Cloud ModuleThe CloudStack cloud module is used to control access to a CloudStack based Public Cloud.
Use of this module requires the apikey, secretkey, host and path parameters. my-cloudstack-cloud-config:
[{'DeviceName': '/dev/sdb', 'VirtualName': 'ephemeral0'},
salt.cloud.clouds.digitaloceanDigitalOcean Cloud ModuleThe DigitalOcean cloud module is used to control access to the DigitalOcean VPS system. Use of this module requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More ssh_key_names can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section. # Note: This example is for /usr/local/etc/salt/cloud.providers or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-digital-ocean-config:
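A sketch of a complete provider block (the token, key path, and names are placeholders):

my-digital-ocean-config:
  driver: digitalocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name,my-key-name-2
  location: New York 1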
salt-cloud -f assign_floating_ip my-digitalocean-config droplet_id=1234567 floating_ip='45.55.96.47'
salt-cloud -f create_floating_ip my-digitalocean-config region='NYC2' salt-cloud -f create_floating_ip my-digitalocean-config droplet_id='1234567'
salt-cloud -f delete_floating_ip my-digitalocean-config floating_ip='45.55.96.47'
salt-cloud --destroy mymachine
salt-cloud -f list_floating_ips my-digitalocean-config
CLI Example: salt-cloud -a reboot droplet_name
salt-cloud -f show_floating_ip my-digitalocean-config floating_ip='45.55.96.47'
salt-cloud -f show_pricing my-digitalocean-config profile=my-profile
CLI Example: salt-cloud -a start droplet_name
CLI Example: salt-cloud -a stop droplet_name
salt-cloud -f unassign_floating_ip my-digitalocean-config floating_ip='45.55.96.47' salt.cloud.clouds.dimensiondataDimension Data Cloud ModuleThis is a cloud module for the Dimension Data Cloud, using the existing Libcloud driver for Dimension Data. # Note: This example is for /usr/local/etc/salt/cloud.providers # or any file in the # /usr/local/etc/salt/cloud.providers.d/ directory. my-dimensiondata-config:
salt-cloud -f create_lb dimensiondata \
CLI Example: salt-cloud -a stop vm_name
CLI Example: salt-cloud -a stop vm_name salt.cloud.clouds.ec2The EC2 Cloud ModuleThe EC2 cloud module is used to interact with the Amazon Elastic Compute Cloud.
my-ec2-config:
[{'DeviceName': '/dev/sdb', 'VirtualName': 'ephemeral0'},
CLI Example: salt-cloud -f create_snapshot my-ec2-config volume_id=vol-351d8826 salt-cloud -f create_snapshot my-ec2-config volume_id=vol-351d8826 \
CLI Examples: salt-cloud -f create_volume my-ec2-config zone=us-east-1b
salt-cloud -f create_volume my-ec2-config zone=us-east-1b tags='{"tag1": "val1", "tag2": "val2"}'
salt-cloud -a del_tags mymachine tags=mytag, salt-cloud -a del_tags mymachine tags=tag1,tag2,tag3 salt-cloud -a del_tags resource_id=vol-3267ab32 tags=tag1,tag2,tag3
salt-cloud -a delvol_on_destroy mymachine
TODO: Add all of the filters.
TODO: Add all of the filters.
salt-cloud --destroy mymachine
salt-cloud -a disable_term_protect mymachine
salt-cloud -a enable_term_protect mymachine
salt-cloud -a get_password_data mymachine salt-cloud -a get_password_data mymachine key_file=/root/ec2key.pem Note: PKCS1_v1_5 was added in PyCrypto 2.5
salt-cloud -a get_tags mymachine salt-cloud -a get_tags resource_id=vol-3267ab32
salt-cloud -a keepvol_on_destroy mymachine
salt-cloud -a reboot mymachine
salt-cloud -f register_image my-ec2-config ami_name=my_ami description="my description"
salt-cloud -a rename mymachine newname=yourmachine
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff' salt-cloud -a set_tags resource_id=vol-3267ab32 tag=somestuff
salt-cloud -a show_delvol_on_destroy mymachine
salt-cloud -a show_instance myinstance ...or as a function (which requires either a name or instance_id): salt-cloud -f show_instance my-ec2 name=myinstance salt-cloud -f show_instance my-ec2 instance_id=i-d34db33f
salt-cloud -f show_pricing my-ec2-config profile=my-profile If pricing sources have not been cached, they will be downloaded. Once they have been cached, they will not be updated automatically. To manually update all prices, use the following command: salt-cloud -f update_pricing <provider> New in version 2015.8.0.
salt-cloud -a ssm_create_association ec2-instance-name ssm_document=ssm-document-name
salt-cloud -a ssm_describe_association ec2-instance-name ssm_document=ssm-document-name
salt-cloud -f update_pricing my-ec2-config salt-cloud -f update_pricing my-ec2-config type=linux New in version 2015.8.0.
salt.cloud.clouds.gceCopyright 2013 Google Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Google Compute Engine ModuleThe Google Compute Engine module. This module interfaces with Google Compute Engine (GCE). To authenticate to GCE, you will need to create a Service Account. To set up Service Account Authentication, follow the Google Compute Engine Setup instructions. Example Provider Configurationmy-gce-config:
salt-cloud -a attach_disk myinstance disk_name=mydisk mode=READ_WRITE
salt-cloud -f attach_lb gce name=lb member=myinstance
salt-cloud -f create_address gce name=my-ip region=us-central1 address=IP
Volumes are attached in the order in which they are given, thus on a new node the first volume will be /dev/sdb, the second /dev/sdc, and so on.
salt-cloud -f create_disk gce disk_name=pd size=300 location=us-central1-b
salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
salt-cloud -f create_hc gce name=hc path=/healthy port=80
salt-cloud -f create_lb gce name=lb region=us-central1 ports=80
salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24 mode=legacy description=optional salt-cloud -f create_network gce name=mynet description=optional
salt-cloud -f create_snapshot gce name=snap1 disk_name=pd
salt-cloud -f create_subnetwork gce name=mysubnet network=mynet1 region=us-west1 cidr=10.0.0.0/24 description=optional
salt-cloud -f delete_address gce name=my-ip
salt-cloud -f delete_disk gce disk_name=pd
salt-cloud -f delete_fwrule gce name=allow-http
salt-cloud -f delete_hc gce name=hc
salt-cloud -f delete_lb gce name=lb
salt-cloud -f delete_network gce name=mynet
salt-cloud -f delete_snapshot gce name=disk-snap-1
salt-cloud -f delete_subnetwork gce name=mysubnet network=mynet1 region=us-west1
salt-cloud -a destroy myinstance1 myinstance2 ... salt-cloud -d myinstance1 myinstance2 ...
salt-cloud -a detach_disk myinstance disk_name=mydisk
salt-cloud -f detach_lb gce name=lb member=myinstance
salt-cloud -a reboot myinstance
salt-cloud -f show_address gce name=mysnapshot region=us-central1
salt-cloud -a show_disk myinstance disk_name=mydisk salt-cloud -f show_disk gce disk_name=mydisk
salt-cloud -f show_fwrule gce name=allow-http
salt-cloud -f show_hc gce name=hc
salt-cloud -f show_lb gce name=lb
salt-cloud -f show_network gce name=mynet
salt-cloud -f show_pricing my-gce-config profile=my-profile
salt-cloud -f show_snapshot gce name=mysnapshot
salt-cloud -f show_subnetwork gce name=mysubnet region=us-west1
salt-cloud -a start myinstance
salt-cloud -a stop myinstance
salt-cloud -f update_pricing my-gce-config New in version 2015.8.0. salt.cloud.clouds.gogridGoGrid Cloud ModuleThe GoGrid cloud module. This module interfaces with the gogrid public cloud service. To use Salt Cloud with GoGrid log into the GoGrid web interface and create an api key. Do this by clicking on "My Account" and then going to the API Keys tab. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/gogrid.conf: my-gogrid-config: NOTE: A Note about using Map files with GoGrid:
Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances.
salt-cloud -d vm_name
salt-cloud -Q
salt-cloud -F
salt-cloud -S
salt-cloud -f list_public_ips <provider> To list unavailable (assigned) IPs, use: CLI Example: salt-cloud -f list_public_ips <provider> state=assigned New in version 2015.8.0.
salt-cloud -a reboot vm_name New in version 2015.8.0.
salt-cloud -a show_instance vm_name New in version 2015.8.0.
salt-cloud -a start vm_name New in version 2015.8.0.
salt-cloud -a stop vm_name New in version 2015.8.0. salt.cloud.clouds.hetznerHetzner Cloud ModuleThe Hetzner cloud module is used to control access to the hetzner cloud. https://docs.hetzner.cloud/
Use of this module requires the key parameter to be set. my-hetzner-cloud-config:
salt-cloud --destroy mymachine
salt-cloud -a reboot mymachine
salt-cloud -a resize mymachine size=...
salt-cloud -a start mymachine
salt-cloud -a stop mymachine
salt.cloud.clouds.joyentJoyent Cloud ModuleThe Joyent Cloud module is used to interact with the Joyent cloud system. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/joyent.conf: my-joyent-config: When creating your profiles for the joyent cloud, add the location attribute to the profile; this will automatically get picked up when performing tasks associated with that vm. An example profile might look like: joyent_512: This driver can also be used with the Joyent SmartDataCenter project. More details can be found at: Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included: api_host_suffix: .api.myhostname.com
salt-cloud --list-images Can use a custom URL for images. Default is: image_url: images.joyent.com/images
salt-cloud --list-sizes
salt-cloud -p profile_name vm_name
salt-cloud -f delete_key joyent keyname=mykey
CLI Example: salt-cloud -d vm_name
salt-cloud -f import_key joyent keyname=mykey keyfile=/tmp/mykey.pub
salt-cloud -Q
salt-cloud -F
salt-cloud -a reboot vm_name
salt-cloud -a show_instance vm_name
salt-cloud -a start vm_name
salt-cloud -a stop vm_name
salt.cloud.clouds.libvirtLibvirt Cloud ModuleExample provider: # A provider maps to a libvirt instance my-libvirt-config: Example profile: base-itest: Tested on:
- Fedora 26 (libvirt 3.2.1, qemu 2.9.1)
- Fedora 25 (libvirt 1.3.3.2, qemu 2.6.1)
- Fedora 23 (libvirt 1.2.18, qemu 2.4.1)
- Centos 7 (libvirt 1.2.17, qemu 1.5.3)
@param name:
@type name: str
@param call:
@type call:
@return: True if all went well, otherwise an error message
@rtype: bool|str
New in version 2017.7.3.
salt.cloud.clouds.linodeThe Linode Cloud ModuleThe Linode cloud module is used to interact with the Linode Cloud. You can target a specific version of the Linode API with the api_version parameter. The default is v3. ProviderThe following provider parameters are supported:
NOTE: APIv3 usage is deprecated and will be removed in a future
release in favor of APIv4. To move to APIv4 now, set the api_version
parameter in your provider configuration to v4. See the full migration
guide here
https://docs.saltproject.io/en/latest/topics/cloud/linode.html#migrating-to-apiv4.
Set up the provider configuration at /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/linode.conf: my-linode-provider: For use with APIv3 (deprecated): my-linode-provider-v3: ProfileThe following profile parameters are supported:
Set up a profile configuration in /usr/local/etc/salt/cloud.profiles.d/: my-linode-profile: Migrating to APIv4In order to target APIv4, ensure your provider configuration has api_version set to v4. You will also need to generate a new token for your account. See https://www.linode.com/docs/platform/api/getting-started-with-the-linode-api/#create-an-api-token There are a few changes to note:
- There has been a general move from label references to ID references. The profile configuration parameters location, size, and image have moved from being label based references to IDs. See the profile section for more information. In addition to these inputs being changed, avail_sizes, avail_locations, and avail_images now output options sorted by ID instead of label.
- The disk_size profile configuration parameter has been deprecated and will not be taken into account when creating new VMs while targeting APIv4.
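A sketch of an APIv4 provider block (the token value is a placeholder):

my-linode-provider:
  driver: linode
  api_version: v4
  apikey: NOTASECRETAPIKEYVALUE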
salt-cloud --list-images my-linode-config salt-cloud -f avail_images my-linode-config
salt-cloud --list-locations my-linode-config salt-cloud -f avail_locations my-linode-config
salt-cloud --list-sizes my-linode-config salt-cloud -f avail_sizes my-linode-config
Can be called as an action (which requires a name): salt-cloud -a boot my-instance config_id=10 ...or as a function (which requires either a name or linode_id): salt-cloud -f boot my-linode-config name=my-instance config_id=10 salt-cloud -f boot my-linode-config linode_id=1225876 config_id=10
CLI Example: salt-cloud -f clone my-linode-config linode_id=1234567 datacenter_id=2 plan_id=5
New in version 2016.3.0.
CLI Example: salt-cloud -d vm_name
CLI Example: salt-cloud -f get_config_id my-linode-config name=my-linode salt-cloud -f get_config_id my-linode-config linode_id=1234567
CLI Example: salt-cloud -f get_linode my-linode-config name=my-instance salt-cloud -f get_linode my-linode-config linode_id=1234567
CLI Example: salt-cloud -f get_plan_id linode label="Nanode 1GB" salt-cloud -f get_plan_id linode label="Linode 2GB"
salt-cloud -Q salt-cloud --query salt-cloud -f list_nodes my-linode-config NOTE: The image label only displays information about
the VM's distribution vendor, such as "Debian" or "RHEL"
and does not display the actual image name. This is due to a limitation of the
Linode API.
salt-cloud -F salt-cloud --full-query salt-cloud -f list_nodes_full my-linode-config NOTE: The image label only displays information about
the VM's distribution vendor, such as "Debian" or "RHEL"
and does not display the actual image name. This is due to a limitation of the
Linode API.
salt-cloud -f list_nodes_min my-linode-config salt-cloud --function list_nodes_min my-linode-config
CLI Example: salt-cloud -a reboot vm_name
CLI Example: salt-cloud -a show_instance vm_name NOTE: The image label only displays information about
the VM's distribution vendor, such as "Debian" or "RHEL"
and does not display the actual image name. This is due to a limitation of the
Linode API.
salt-cloud -f show_pricing my-linode-config profile=my-linode-profile
CLI Example: salt-cloud -a stop vm_name
CLI Example: salt-cloud -a stop vm_name salt.cloud.clouds.lxcInstall Salt on an LXC ContainerNew in version 2014.7.0. Please read core config documentation.
salt.cloud.clouds.msazureAzure Cloud ModuleThe Azure cloud module is used to control access to Microsoft Azure WARNING: This cloud provider will be removed from Salt in version
3007 due to the deprecation of the "Classic" API for Azure. Please
migrate to Azure Resource Manager by March 1, 2023
A Management Certificate (.pem and .crt files) must be created and the .pem file placed on the same machine that salt-cloud is run from. Information on creating the pem file to use, and uploading the associated cer file can be found at: http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/ For users with Python < 2.7.9, backend must currently be set to requests. Example /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/azure.conf configuration: my-azure-config:
salt-cloud -f add_input_endpoint my-azure service=myservice \
salt-cloud -f add_management_certificate my-azure public_key='...PUBKEY...' \
salt-cloud -f add_service_certificate my-azure name=my_service_certificate \
salt-cloud -f cleanup_unattached_disks my-azure name=my_disk salt-cloud -f cleanup_unattached_disks my-azure name=my_disk delete_vhd=True
salt-cloud -f create_affinity_group my-azure name=my_affinity_group
salt-cloud -f create_service my-azure name=my_service label=my_service location='West US'
salt-cloud -f create_storage my-azure name=my_storage label=my_storage location='West US'
salt-cloud -f create_storage_container my-azure name=mycontainer
salt-cloud -f delete_affinity_group my-azure name=my_affinity_group
salt-cloud -f delete_disk my-azure name=my_disk salt-cloud -f delete_disk my-azure name=my_disk delete_vhd=True
salt-cloud -f delete_input_endpoint my-azure service=myservice \
salt-cloud -f delete_management_certificate my-azure name=my_management_certificate \
salt-cloud -f delete_service my-azure name=my_service
salt-cloud -f delete_service_certificate my-azure name=my_service_certificate \
salt-cloud -f delete_storage my-azure name=my_storage
salt-cloud -f delete_storage_container my-azure name=mycontainer
salt-cloud -d myminion salt-cloud -a destroy myminion service_name=myservice
salt-cloud -f show_affinity_group my-azure service=myservice \
salt-cloud -f get_blob my-azure container=base name=top.sls local_path=/usr/local/etc/salt/states/top.sls salt-cloud -f get_blob my-azure container=base name=content.txt return_content=True
salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob
salt-cloud -f show_blob_service_properties my-azure
salt-cloud -f show_deployment my-azure name=my_deployment
salt-cloud -f show_disk my-azure name=my_disk
salt-cloud -f show_input_endpoint my-azure service=myservice \
salt-cloud -f get_management_certificate my-azure name=my_management_certificate \
salt-cloud -f get_operation_status my-azure id=0123456789abcdef0123456789abcdef
salt-cloud -f show_service_certificate my-azure name=my_service_certificate \
salt-cloud -f show_storage my-azure name=my_storage
salt-cloud -f show_storage_container my-azure name=myservice
salt-cloud -f show_storage_container_acl my-azure name=myservice
salt-cloud -f show_storage_container_metadata my-azure name=myservice
salt-cloud -f show_storage_keys my-azure name=my_storage
salt-cloud -f lease_storage_container my-azure name=mycontainer
salt-cloud -f list_affinity_groups my-azure
salt-cloud -f list_blobs my-azure container=mycontainer
salt-cloud -f list_disks my-azure
salt-cloud -f list_input_endpoints my-azure service=myservice deployment=mydeployment
salt-cloud -f list_management_certificates my-azure name=my_management
salt-cloud -f list_service_certificates my-azure name=my_service
salt-cloud -f list_services my-azure
salt-cloud -f list_storage my-azure
salt-cloud -f list_storage_containers my-azure
salt-cloud -f list_virtual_networks my-azure service=myservice deployment=mydeployment
salt-cloud -f make_blob_url my-azure container=mycontainer blob=myblob
salt-cloud -f put_blob my-azure container=base name=top.sls blob_path=/usr/local/etc/salt/states/top.sls salt-cloud -f put_blob my-azure container=base name=content.txt blob_content='Some content'
salt-cloud -f regenerate_storage_keys my-azure name=my_storage key_type=primary
salt-cloud -f set_blob_properties my-azure
salt-cloud -f set_blob_service_properties my-azure
salt-cloud -f set_storage_container my-azure name=mycontainer
salt-cloud -f set_storage_container my-azure name=mycontainer \
salt-cloud -f show_affinity_group my-azure service=myservice \
salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob
salt-cloud -f show_blob_service_properties my-azure
salt-cloud -f show_deployment my-azure name=my_deployment
salt-cloud -f show_disk my-azure name=my_disk
salt-cloud -f show_input_endpoint my-azure service=myservice \
salt-cloud -f get_management_certificate my-azure name=my_management_certificate \
salt-cloud -f show_service my-azure name=my_service
salt-cloud -f show_service_certificate my-azure name=my_service_certificate \
salt-cloud -f show_storage my-azure name=my_storage
salt-cloud -f show_storage_container my-azure name=myservice
salt-cloud -f show_storage_container_acl my-azure name=myservice
salt-cloud -f show_storage_container_metadata my-azure name=myservice
salt-cloud -f show_storage_keys my-azure name=my_storage
salt-cloud -f update_affinity_group my-azure name=my_group label=my_group
salt-cloud -f update_disk my-azure name=my_disk label=my_disk salt-cloud -f update_disk my-azure name=my_disk new_name=another_disk
salt-cloud -f update_input_endpoint my-azure service=myservice \
salt-cloud -f update_storage my-azure name=my_storage label=my_storage salt.cloud.clouds.oneandone1&1 Cloud Server ModuleThe 1&1 SaltStack cloud module allows a 1&1 server to be automatically deployed and bootstrapped with Salt. It also has functions to create block storages and ssh keys.
The module requires the 1&1 api_token to be provided. The server should also be assigned a public LAN, a private LAN, or both along with SSH key pairs. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/oneandone.conf: my-oneandone-config: my-oneandone-profile: Set deploy to False if Salt should not be installed on the node. my-oneandone-profile: Create an SSH key sudo salt-cloud -f create_ssh_key my-oneandone-config name='SaltTest' description='SaltTestDescription' Create a block storage sudo salt-cloud -f create_block_storage my-oneandone-config name='SaltTest2' description='SaltTestDescription' size=50 datacenter_id='5091F6D8CBFEF9C26ACE957C652D5D49'
CLI Example: salt-cloud -d vm_name
salt-cloud -a reboot vm_name
salt-cloud -a start vm_name
salt-cloud -a stop vm_name salt.cloud.clouds.opennebulaOpenNebula Cloud ModuleThe OpenNebula cloud module is used to control access to an OpenNebula cloud. New in version 2014.7.0. Use of this module requires the xml_rpc, user, and password parameters to be set. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/opennebula.conf: my-opennebula-config: This driver supports accessing new VM instances via DNS entry instead of IP address. To enable this feature, in the provider or profile file add fqdn_base with a value matching the base of your fully-qualified domain name. Example: my-opennebula-config: The driver will prepend the hostname to the fqdn_base and do a DNS lookup to find the IP of the new VM. salt-cloud -f image_allocate opennebula datastore_name=default \
salt-cloud --list-images opennebula salt-cloud --function avail_images opennebula salt-cloud -f avail_images opennebula
salt-cloud --list-locations opennebula salt-cloud --function avail_locations opennebula salt-cloud -f avail_locations opennebula
Optional vm_ dict options for overwriting template:
Optional - Amount of vCPUs to allocate
CLI Example:
CLI Example: salt-cloud --destroy vm_name salt-cloud -d vm_name salt-cloud --action destroy vm_name salt-cloud -a destroy vm_name
salt-cloud -f get_cluster_id opennebula name=my-cluster-name
salt-cloud -f get_datastore_id opennebula name=my-datastore-name
salt-cloud -f get_host_id opennebula name=my-host-name
salt-cloud -f get_image_id opennebula name=my-image-name
salt-cloud -f get_one_version one_provider_name
salt-cloud -f get_secgroup_id opennebula name=my-secgroup-name
salt-cloud -f get_template_id opennebula name=my-template-name
salt-cloud -f get_template_image opennebula name=my-template-name
salt-cloud -f get_vm_id opennebula name=my-vm
salt-cloud -f get_vn_id opennebula name=my-vn-name
CLI Example: salt-cloud -f image_allocate opennebula path=/path/to/image_file.txt datastore_id=1 salt-cloud -f image_allocate opennebula datastore_name=default \
CLI Example: salt-cloud -f image_clone opennebula name=my-new-image image_id=10 salt-cloud -f image_clone opennebula name=my-new-image image_name=my-image-to-clone
CLI Example: salt-cloud -f image_delete opennebula name=my-image salt-cloud --function image_delete opennebula image_id=100
CLI Example: salt-cloud -f image_info opennebula name=my-image salt-cloud --function image_info opennebula image_id=5
CLI Example: salt-cloud -f image_persistent opennebula name=my-image persist=True salt-cloud --function image_persistent opennebula image_id=5 persist=False
CLI Example: salt-cloud -f image_snapshot_delete vm_id=106 snapshot_id=45 salt-cloud -f image_snapshot_delete vm_name=my-vm snapshot_id=111
CLI Example: salt-cloud -f image_snapshot_flatten vm_id=106 snapshot_id=45 salt-cloud -f image_snapshot_flatten vm_name=my-vm snapshot_id=45
CLI Example: salt-cloud -f image_snapshot_revert vm_id=106 snapshot_id=45 salt-cloud -f image_snapshot_revert vm_name=my-vm snapshot_id=120
CLI Example: salt-cloud -f image_update opennebula image_id=0 file=/path/to/image_update_file.txt update_type=replace salt-cloud -f image_update opennebula image_name="Ubuntu 14.04" update_type=merge \
salt-cloud -f list_clusters opennebula
salt-cloud -f list_datastores opennebula
salt-cloud -f list_hosts opennebula
salt-cloud -Q salt-cloud --query salt-cloud --function list_nodes opennebula salt-cloud -f list_nodes opennebula
salt-cloud -F salt-cloud --full-query salt-cloud --function list_nodes_full opennebula salt-cloud -f list_nodes_full opennebula
salt-cloud -f list_security_groups opennebula
salt-cloud -f list_templates opennebula
salt-cloud -f list_vns opennebula
CLI Example: salt-cloud -a reboot my-vm
CLI Example: salt-cloud -f secgroup_allocate opennebula path=/path/to/secgroup_file.txt salt-cloud -f secgroup_allocate opennebula \
CLI Example: salt-cloud -f secgroup_clone opennebula name=my-cloned-secgroup secgroup_id=0 salt-cloud -f secgroup_clone opennebula name=my-cloned-secgroup secgroup_name=my-secgroup
CLI Example: salt-cloud -f secgroup_delete opennebula name=my-secgroup salt-cloud --function secgroup_delete opennebula secgroup_id=100
CLI Example: salt-cloud -f secgroup_info opennebula name=my-secgroup salt-cloud --function secgroup_info opennebula secgroup_id=5
CLI Example: salt-cloud --function secgroup_update opennebula secgroup_id=100 \
CLI Example: salt-cloud --action show_instance vm_name salt-cloud -a show_instance vm_name
CLI Example: salt-cloud -a start my-vm
CLI Example: salt-cloud -a stop my-vm
CLI Example: salt-cloud -f template_allocate opennebula path=/path/to/template_file.txt salt-cloud -f template_allocate opennebula \
CLI Example: salt-cloud -f template_clone opennebula name=my-new-template template_id=0 salt-cloud -f template_clone opennebula name=my-new-template template_name=my-template
CLI Example: salt-cloud -f template_delete opennebula name=my-template salt-cloud --function template_delete opennebula template_id=5
template_instantiate creates a VM on OpenNebula
from a template, but it does not install Salt on the new VM. Use the
create function for that functionality: salt-cloud -p
opennebula-profile vm-name.
CLI Example: salt-cloud -f template_instantiate opennebula vm_name=my-new-vm template_id=0
CLI Example: salt-cloud --function template_update opennebula template_id=1 update_type=replace \
CLI Example: salt-cloud -a vm_action my-vm action='release'
CLI Example: salt-cloud -f vm_allocate path=/path/to/vm_template.txt salt-cloud --function vm_allocate path=/path/to/vm_template.txt hold=True
CLI Example: salt-cloud -a vm_attach my-vm path=/path/to/disk_file.txt salt-cloud -a vm_attach my-vm data="DISK=[DISK_ID=1]"
CLI Example: salt-cloud -a vm_attach_nic my-vm path=/path/to/nic_file.txt salt-cloud -a vm_attach_nic my-vm data="NIC=[NETWORK_ID=1]"
CLI Example: salt-cloud -a vm_deploy my-vm host_id=0 salt-cloud -a vm_deploy my-vm host_id=1 capacity_maintained=False salt-cloud -a vm_deploy my-vm host_name=host01 datastore_id=1 salt-cloud -a vm_deploy my-vm host_name=host01 datastore_name=default
CLI Example: salt-cloud -a vm_detach my-vm disk_id=1
CLI Example: salt-cloud -a vm_detach_nic my-vm nic_id=1
CLI Example: salt-cloud -a vm_disk_save my-vm disk_id=1 image_name=my-new-image salt-cloud -a vm_disk_save my-vm disk_id=1 image_name=my-new-image image_type=CONTEXT snapshot_id=10
CLI Example: salt-cloud -a vm_disk_snapshot_create my-vm disk_id=0 description="My Snapshot Description"
CLI Example: salt-cloud -a vm_disk_snapshot_delete my-vm disk_id=0 snapshot_id=6
CLI Example: salt-cloud -a vm_disk_snapshot_revert my-vm disk_id=0 snapshot_id=6
CLI Example: salt-cloud -a vm_info my-vm
CLI Example: salt-cloud -a vm_migrate my-vm host_id=0 datastore_id=1 salt-cloud -a vm_migrate my-vm host_id=0 datastore_id=1 live_migration=True salt-cloud -a vm_migrate my-vm host_name=host01 datastore_name=default
CLI Example: salt-cloud -a vm_monitoring my-vm
CLI Example: salt-cloud -a vm_resize my-vm path=/path/to/capacity_template.txt salt-cloud -a vm_resize my-vm path=/path/to/capacity_template.txt capacity_maintained=False salt-cloud -a vm_resize my-vm data="CPU=1 VCPU=1 MEMORY=1024"
CLI Example: salt-cloud -a vm_snapshot_create my-vm snapshot_name=my-new-snapshot
CLI Example: salt-cloud -a vm_snapshot_delete my-vm snapshot_id=8
CLI Example: salt-cloud -a vm_snapshot_revert my-vm snapshot_id=42
CLI Example: salt-cloud -a vm_update my-vm path=/path/to/user_template_file.txt update_type='replace'
CLI Example: salt-cloud -f vn_add_ar opennebula vn_id=3 path=/path/to/address_range.txt salt-cloud -f vn_add_ar opennebula vn_name=my-vn \
CLI Example: salt-cloud -f vn_allocate opennebula path=/path/to/vn_file.txt
CLI Example: salt-cloud -f vn_delete opennebula name=my-virtual-network salt-cloud --function vn_delete opennebula vn_id=3
CLI Example: salt-cloud -f vn_free_ar opennebula vn_id=3 ar_id=1 salt-cloud -f vn_free_ar opennebula vn_name=my-vn ar_id=1
CLI Example: salt-cloud -f vn_hold opennebula vn_id=3 path=/path/to/vn_hold_file.txt salt-cloud -f vn_hold opennebula vn_name=my-vn data="LEASES=[IP=192.168.0.5]"
CLI Example: salt-cloud -f vn_info opennebula vn_id=3 salt-cloud --function vn_info opennebula name=public
CLI Example: salt-cloud -f vn_release opennebula vn_id=3 path=/path/to/vn_release_file.txt salt-cloud -f vn_release opennebula vn_name=my-vn data="LEASES=[IP=192.168.0.5]"
CLI Example: salt-cloud -f vn_reserve opennebula vn_id=3 path=/path/to/vn_reserve_file.txt salt-cloud -f vn_reserve opennebula vn_name=my-vn data="SIZE=10 AR_ID=8 NETWORK_ID=1" salt.cloud.clouds.openstackOpenstack Cloud Driver
OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it. This OpenStack driver uses the shade Python module, which is managed by the OpenStack Infra team. The shade module is written to handle all the different versions of the various OpenStack tools, so most commands are simply passed through to shade. ProviderThere are two ways to configure providers for this driver. The first is to let shade handle everything and configure it using os-client-config by setting up /etc/openstack/clouds.yml; the cloud defined there (democloud in the sketch below) can then be referenced from the salt provider configuration. This allows a single configuration to serve both salt-cloud and any other OpenStack tools that read /etc/openstack/clouds.yml. The other method is to specify everything in the provider config instead of using the extra configuration file; this allows salt-cloud configs to be passed to minions purely through pillars, without having to write a clouds.yml file on each minion. If you need a profile to set up some extra options, it can be passed as profile to pull in any of the vendor config options; for example, a rackspace profile will set up the correct auth_url and the different API versions for services. ProfileMost of the options for building servers are just passed on to the create_server function from shade. The salt-specific ones are:
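For reference, a minimal /etc/openstack/clouds.yml and a provider entry referencing it could look like the following sketch (the cloud name, credentials, and auth_url are placeholders):

clouds:
  democloud:
    region_name: RegionOne
    auth:
      username: demo
      password: secret
      project_name: demo
      auth_url: http://openstack.example.com/identity

myopenstack:
  driver: openstack
  cloud: democloud
  region_name: RegionOne

For the self-contained variant, the same auth keys move directly under the provider entry in place of the cloud reference.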
A profile such as the centos sketch after this note is the minimum setup required. If metadata is used to indicate that the host has finished setting up, wait_for_metadata can be set in the profile. If your OpenStack instances only have private IP addresses and a CIDR range of private addresses is not reachable from the salt-master, you may set your preference to have Salt ignore it with ignore_cidr in the provider config. Anything else from the create_server docs can be passed through here.
NOTE: Anything else that is not in this list can be added to an
extras dictionary for the profile, and that will be passed
to the create_server function.
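A minimal profile sketch, assuming the myopenstack provider above and placeholder image and flavor names:

centos:
  provider: myopenstack
  size: m1.small
  image: CentOS 7
  ssh_key_file: /root/.ssh/id_rsa
  ssh_key_name: mykey

To ignore unreachable private addresses, a provider-level setting along these lines should work (the CIDR is a placeholder):

my-openstack-config:
  ignore_cidr: 192.168.0.0/16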
salt-cloud -f avail_images myopenstack salt-cloud --list-images myopenstack
salt-cloud -f avail_sizes myopenstack salt-cloud --list-sizes myopenstack
function to call from shade.openstackcloud library
CLI Example: salt-cloud -f call myopenstack func=list_images salt-cloud -f call myopenstack func=create_network name=mysubnet
salt-cloud -f list_networks myopenstack
salt-cloud -f list_nodes myopenstack
salt-cloud -f list_nodes_full myopenstack
salt-cloud -f list_nodes_min myopenstack
salt-cloud -f list_nodes_full myopenstack
salt-cloud -f list_subnets myopenstack network=salt-net
name of the instance
CLI Example: salt-cloud -a show_instance myserver
salt.cloud.clouds.packetPacket Cloud Module Using Packet's Python API ClientThe Packet cloud module is used to control access to the Packet VPS system. Use of this module only requires the token parameter. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/packet.conf: The Packet profile requires size, image, location, and project_id. Optional profile parameters:
This driver requires Packet's client library: https://pypi.python.org/pypi/packet-python
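A provider and profile sketch (the token, master address, project ID, and size/image/location values are placeholders):

packet-provider:
  minion:
    master: 203.0.113.60
  driver: packet
  token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  private_key: /root/.ssh/id_rsa

packet-profile:
  provider: packet-provider
  size: baremetal_0
  image: ubuntu_16_04_image
  location: ewr1
  project_id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee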
salt-cloud --list-images packet-provider salt-cloud -f avail_images packet-provider
salt-cloud --list-locations packet-provider salt-cloud -f avail_locations packet-provider
salt-cloud -f avail_projects packet-provider
salt-cloud --list-sizes packet-provider salt-cloud -f avail_sizes packet-provider
CLI Example: salt-cloud -d name
salt-cloud -Q salt-cloud --query salt-cloud -f list_nodes packet-provider
salt-cloud -F salt-cloud --full-query salt-cloud -f list_nodes_full packet-provider
salt-cloud -f list_nodes_min packet-provider salt-cloud --function list_nodes_min packet-provider
salt.cloud.clouds.parallelsParallels Cloud ModuleThe Parallels cloud module is used to control access to cloud providers using the Parallels VPS system.
my-parallels-config:
salt-cloud --destroy mymachine
salt-cloud -a start mymachine
salt-cloud -a stop mymachine
salt.cloud.clouds.profitbricksProfitBricks Cloud ModuleThe ProfitBricks SaltStack cloud module allows a ProfitBricks server to be automatically deployed and bootstrapped with Salt.
The module requires ProfitBricks credentials to be supplied along with an existing virtual datacenter UUID where the server resources will reside. The server should also be assigned a public LAN, a private LAN, or both, along with SSH key pairs. ... Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/profitbricks.conf, together with a matching cloud profile; a sketch of both follows. To use a private IP for connecting and bootstrapping the node, set ssh_interface: private_lan in the profile. Set deploy to False if Salt should not be installed on the node.
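A minimal provider and profile sketch (credentials, the datacenter UUID, and the sizing values are placeholders):

my-profitbricks-config:
  driver: profitbricks
  username: user@example.com
  password: verysecret
  datacenter_id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

my-profitbricks-profile:
  provider: my-profitbricks-config
  size: Small Instance
  image_alias: 'ubuntu:latest'
  cores: 2
  ram: 4096
  public_lan: 1
  ssh_public_key: /root/.ssh/id_rsa.pub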
salt-cloud -f create_datacenter profitbricks name=mydatacenter location=us/las description="my description"
salt-cloud -f create_loadbalancer profitbricks name=mylb
CLI Example: salt-cloud -d vm_name
salt-cloud -f list_datacenters my-profitbricks-config
salt-cloud -f list_images my-profitbricks-config location=us/las
salt-cloud -a reboot vm_name
salt-cloud -a start vm_name
salt-cloud -a stop vm_name
salt.cloud.clouds.proxmoxProxmox Cloud ModuleNew in version 2014.7.0. The Proxmox cloud module is used to control access to cloud providers using the Proxmox system (KVM / OpenVZ / LXC).
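A minimal provider sketch (the user realm, password, and host are placeholders):

my-proxmox-config:
  driver: proxmox
  user: myuser@pam
  password: badpass
  url: proxmox.example.com
  port: 8006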
salt-cloud --list-images my-proxmox-config
salt-cloud --list-locations my-proxmox-config
salt-cloud -p proxmox-ubuntu vmhostname
salt-cloud --destroy mymachine
salt-cloud -f get_resources_nodes my-proxmox-config
salt-cloud -f get_resources_vms my-proxmox-config
salt-cloud -Q my-proxmox-config
salt-cloud -F my-proxmox-config
salt-cloud -S my-proxmox-config
salt-cloud -a shutdown mymachine
salt-cloud -a start mymachine
salt-cloud -a stop mymachine
salt.cloud.clouds.pyraxPyrax Cloud ModulePLEASE NOTE: This module is currently in early development, and considered to be experimental and unstable. It is not recommended for production use. Unless you are actively developing code in this module, you should use the OpenStack module instead.
salt.cloud.clouds.qingcloudQingCloud Cloud ModuleNew in version 2015.8.0. The QingCloud cloud module is used to control access to the QingCloud. http://www.qingcloud.com/ Use of this module requires the access_key_id, secret_access_key, zone and key_filename parameters to be set. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/qingcloud.conf:
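For example (all credential values are placeholders):

my-qingcloud:
  driver: qingcloud
  access_key_id: AAAAAAAAAAAAAAAAAAAA
  secret_access_key: ssssssssssssssssssssssssssssss
  zone: pek2
  key_filename: /path/to/your.pem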
salt-cloud --list-images my-qingcloud salt-cloud -f avail_images my-qingcloud zone=gd1
salt-cloud --list-locations my-qingcloud
salt-cloud --list-sizes my-qingcloud salt-cloud -f avail_sizes my-qingcloud zone=pek2
salt-cloud -p qingcloud-ubuntu-c1m1 hostname1 salt-cloud -m /path/to/mymap.sls -P
salt-cloud -a destroy i-2f733r5n salt-cloud -d i-2f733r5n
salt-cloud -Q my-qingcloud
salt-cloud -F my-qingcloud
salt-cloud -f list_nodes_min my-qingcloud
salt-cloud -S my-qingcloud
salt-cloud -a reboot i-2f733r5n
salt-cloud -f show_image my-qingcloud image=trustysrvx64c salt-cloud -f show_image my-qingcloud image=trustysrvx64c,coreos4 salt-cloud -f show_image my-qingcloud image=trustysrvx64c zone=ap1
salt-cloud -a show_instance i-2f733r5n
salt-cloud -a start i-2f733r5n
salt-cloud -a stop i-2f733r5n salt-cloud -a stop i-2f733r5n force=True salt.cloud.clouds.saltifySaltify ModuleThe Saltify module is designed to install Salt on a remote machine, virtual or bare metal, using SSH. This module is useful for provisioning machines which are already installed, but not Salted. Changed in version 2018.3.0: The wake_on_lan capability and the destroy, reboot, and query actions were added. Use of this module requires some configuration in cloud profile and provider files as described in the Getting Started with Saltify documentation.
salt-cloud --list-images saltify returns a list of available profiles. New in version 2018.3.0.
salt-cloud --list-locations my-cloud-provider [ saltify will always return an empty dictionary ]
salt-cloud --list-sizes saltify [ saltify always returns an empty dictionary ]
Provision a single machine, adding its keys to the salt master; otherwise, test SSH connections to the machine.
Configuration parameters:
See also Miscellaneous Salt Cloud Options and Getting Started with Saltify CLI Example: salt-cloud -p mymachine my_new_id
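For reference, a minimal profile sketch matching the example above (the host, user, key path, and provider name are placeholders):

mymachine:
  provider: my-saltify-config
  ssh_host: 203.0.113.10
  ssh_username: root
  key_filename: /path/to/id_rsa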
CLI Example: salt-cloud --destroy mymachine
salt-cloud -Q returns a list of dictionaries of defined standard fields. New in version 2018.3.0.
salt-cloud -F returns a list of dictionaries; for 'saltify' minions, it returns a dict of grains (enhanced). New in version 2018.3.0.
CLI Example: salt-cloud -a reboot vm_name
salt.cloud.clouds.scalewayScaleway Cloud ModuleNew in version 2015.8.0. The Scaleway cloud module is used to interact with your Scaleway BareMetal Servers. Use of this module only requires the api_key parameter to be set. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/scaleway.conf:
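For example (the access key and token are placeholders):

scaleway-config:
  driver: scaleway
  access_key: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
  token: 11111111-2222-3333-4444-555555555555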
salt-cloud --destroy mymachine
salt.cloud.clouds.softlayerSoftLayer Cloud ModuleThe SoftLayer cloud module is used to control access to the SoftLayer VPS system. Use of this module only requires the apikey parameter. Set up the cloud configuration at: /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/softlayer.conf: my-softlayer-config: The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: https://pypi.python.org/pypi/SoftLayer
salt-cloud --destroy mymachine
salt.cloud.clouds.softlayer_hwSoftLayer HW Cloud ModuleThe SoftLayer HW cloud module is used to control access to the SoftLayer hardware cloud system Use of this module only requires the apikey parameter. Set up the cloud configuration at: /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/softlayer.conf: my-softlayer-config: The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: https://pypi.python.org/pypi/SoftLayer
salt-cloud --destroy mymachine
salt-cloud -f show_pricing my-softlayerhw-config profile=my-profile If pricing sources have not been cached, they will be downloaded. Once they have been cached, they will not be updated automatically. To manually update all prices, use the following command: salt-cloud -f update_pricing <provider> New in version 2015.8.0. salt.cloud.clouds.tencentcloudTencent Cloud Cloud ModuleNew in version 3000. The Tencent Cloud Cloud Module is used to control access to the Tencent Cloud instance. https://intl.cloud.tencent.com/
my-tencentcloud-config:
salt-cloud --list-images my-tencentcloud-config salt-cloud -f avail_images my-tencentcloud-config
salt-cloud --list-locations my-tencentcloud-config salt-cloud -f avail_locations my-tencentcloud-config
salt-cloud --list-sizes my-tencentcloud-config salt-cloud -f avail_sizes my-tencentcloud-config
tencentcloud-guangzhou-s1sm1: CLI Examples: salt-cloud -p tencentcloud-guangzhou-s1 myinstance
salt-cloud -a destroy myinstance salt-cloud -d myinstance
salt-cloud -f list_availability_zones my-tencentcloud-config
salt-cloud -f list_custom_images my-tencentcloud-config
salt-cloud -Q
salt-cloud -F
salt-cloud -f list_nodes_min my-tencentcloud-config
salt-cloud -S
salt-cloud -f list_securitygroups my-tencentcloud-config
salt-cloud -a reboot myinstance
salt-cloud -a show_disk myinstance
salt-cloud -f show_image tencentcloud image=img-31tjrtph
salt-cloud -a show_instance myinstance
salt-cloud -a start myinstance
salt-cloud -a stop myinstance salt-cloud -a stop myinstance force=True salt.cloud.clouds.vagrantVagrant Cloud DriverThe Vagrant cloud is designed to "vagrant up" a virtual machine as a Salt minion. Use of this module requires some configuration in cloud profile and provider files as described in the Getting Started with Vagrant documentation. New in version 2018.3.0.
salt-cloud --list-locations my-cloud-provider # [ vagrant will always return an empty dictionary ]
salt-cloud --list-sizes my-cloud-provider # [ vagrant always returns an empty dictionary ]
salt-cloud -p my_profile new_node_1
salt-cloud --destroy mymachine
salt-cloud -Q
salt-cloud -F
CLI Example: salt-cloud -a reboot vm_name
salt.cloud.clouds.virtualboxA salt cloud provider that lets you use virtualbox on your machine and act as a cloud.
For now this driver will only clone existing VMs; it's best to create a template from which to clone. This driver was created following https://docs.saltproject.io/en/latest/topics/cloud/cloud.html#non-libcloud-based-modules.
Takes vm_info (dict) and returns a dict of the resulting VM; note that passwords can and should be included in the result.
Takes name (str) and call; returns True if all went well, otherwise an error message (bool|str).
salt-cloud -Q
salt-cloud -F
salt.cloud.clouds.vmwareVMware Cloud ModuleNew in version 2015.5.4. The VMware cloud module allows you to manage VMware ESX, ESXi, and vCenter. See Getting started with VMware to get started.
Dependencies
pyVmomiPyVmomi can be installed via pip: pip install pyVmomi NOTE: Version 6.0 of pyVmomi has some problems with SSL error
handling on certain versions of Python. If using version 6.0 of pyVmomi,
Python 2.6, Python 2.7.9, or newer must be present. This is due to an upstream
dependency in pyVmomi 6.0 that is not supported in Python versions 2.7 to
2.7.8. If the version of Python is not in the supported range, you will need
to install an earlier version of pyVmomi. See Issue #29537 for more
information.
Based on the note above, to install an earlier version of pyVmomi than the version currently listed in PyPi, run the following: pip install pyVmomi==5.5.0.2014.1.1 Version 5.5.0.2014.1.1 is a known stable version that this original VMware cloud driver was developed against. NOTE: Ensure the pyVmomi Python module is installed by running the
following one-liner check. The output should be 0.
python -c "import pyVmomi" ; echo $? ConfigurationTo use this module, set up the vCenter or ESX/ESXi URL, username and password in the cloud configuration at /usr/local/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vmware.conf: my-vmware-config: NOTE: Optionally, protocol and port can be
specified if the vCenter server is not using the defaults. Default is
protocol: https and port: 443.
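A minimal provider sketch (the user, password, and URL are placeholders):

my-vmware-config:
  driver: vmware
  user: 'DOMAIN\user'
  password: 'verybadpass'
  url: 'vcenter01.example.com'
  protocol: 'https'
  port: 443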
NOTE: Changed in version 2015.8.0.
The provider parameter in cloud provider configuration was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile configuration. Cloud provider configuration now uses driver to refer to the salt-cloud driver that provides the underlying functionality to connect to a cloud provider, while cloud profile configuration continues to use provider to refer to the cloud provider configuration that you define. To test the connection for my-vmware-config specified in the cloud configuration, run test_vcenter_connection()
To use this function, you need to specify
esxi_host_user and esxi_host_password under your provider
configuration set up at /usr/local/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/vmware.conf:
vcenter01: The SSL thumbprint of the host system can be optionally specified by setting esxi_host_ssl_thumbprint under your provider configuration. To get the SSL thumbprint of the host system, execute the following command from a remote server: echo -n | openssl s_client -connect <YOUR-HOSTSYSTEM-DNS/IP>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1 CLI Example: salt-cloud -f add_host my-vmware-config host="myHostSystemName" cluster="myClusterName" salt-cloud -f add_host my-vmware-config host="myHostSystemName" datacenter="myDatacenterName"
salt-cloud --list-images my-vmware-config
salt-cloud --list-locations my-vmware-config
salt-cloud --list-sizes my-vmware-config NOTE: Since sizes are built into templates, this function will
return an empty dictionary.
salt-cloud -f connect_host my-vmware-config host="myHostSystemName"
salt-cloud -a convert_to_template vmname
salt-cloud -p vmware-centos6.5 vmname
salt-cloud -f create_cluster my-vmware-config name="myNewCluster" datacenter="datacenterName"
salt-cloud -f create_datacenter my-vmware-config name="MyNewDatacenter"
salt-cloud -f create_datastore_cluster my-vmware-config name="datastoreClusterName" datacenter="datacenterName"
To create a Host and Cluster Folder under a Datacenter,
specify path="/yourDatacenterName/host/yourFolderName"
To create a Network Folder under a Datacenter, specify path="/yourDatacenterName/network/yourFolderName" To create a Storage Folder under a Datacenter, specify path="/yourDatacenterName/datastore/yourFolderName" To create a VM and Template Folder under a Datacenter, specify path="/yourDatacenterName/vm/yourFolderName" CLI Example: salt-cloud -f create_folder my-vmware-config path="/Local/a/b/c" salt-cloud -f create_folder my-vmware-config path="/MyDatacenter/vm/MyVMFolder" salt-cloud -f create_folder my-vmware-config path="/MyDatacenter/host/MyHostFolder" salt-cloud -f create_folder my-vmware-config path="/MyDatacenter/network/MyNetworkFolder" salt-cloud -f create_folder my-vmware-config path="/MyDatacenter/storage/MyStorageFolder"
If the VM is powered on, the internal state of the VM
(memory dump) is included in the snapshot by default which will also set the
power state of the snapshot to "powered on". You can set
memdump=False to override this. This field is ignored if the virtual
machine is powered off or if the VM does not support snapshots with memory
dumps. Default is memdump=True
NOTE: If the VM is powered on when the snapshot is taken,
VMware Tools can be used to quiesce the file system in the virtual machine by
setting quiesce=True. This field is ignored if the virtual machine is
powered off, if VMware Tools are not available, or if memdump=True.
Default is quiesce=False
CLI Example: salt-cloud -a create_snapshot vmname snapshot_name="mySnapshot" salt-cloud -a create_snapshot vmname snapshot_name="mySnapshot" [description="My snapshot"] [memdump=False] [quiesce=True]
salt-cloud -d vmname salt-cloud --destroy vmname salt-cloud -a destroy vmname
salt-cloud -f disconnect_host my-vmware-config host="myHostSystemName"
salt-cloud -f enter_maintenance_mode my-vmware-config host="myHostSystemName"
salt-cloud -f exit_maintenance_mode my-vmware-config host="myHostSystemName"
salt-cloud -f get_vcenter_version my-vmware-config
salt-cloud -f list_clusters my-vmware-config
salt-cloud -f list_clusters_by_datacenter my-vmware-config To list clusters for a specified datacenter: CLI Example: salt-cloud -f list_clusters_by_datacenter my-vmware-config datacenter="datacenterName"
salt-cloud -f list_datacenters my-vmware-config
salt-cloud -f list_datastore_clusters my-vmware-config
salt-cloud -f list_datastores my-vmware-config
salt-cloud -f list_dvs my-vmware-config
salt-cloud -f list_folders my-vmware-config
You can specify type as either parallel,
iscsi, block or fibre.
To list all HBAs for each host system: CLI Example: salt-cloud -f list_hbas my-vmware-config To list all HBAs for a specified host system: CLI Example: salt-cloud -f list_hbas my-vmware-config host="hostSystemName" To list HBAs of specified type for each host system: CLI Example: salt-cloud -f list_hbas my-vmware-config type="HBAType" To list HBAs of specified type for a specified host system: CLI Example: salt-cloud -f list_hbas my-vmware-config host="hostSystemName" type="HBAtype"
salt-cloud -f list_hosts my-vmware-config
salt-cloud -f list_hosts_by_cluster my-vmware-config To list hosts for a specified cluster: CLI Example: salt-cloud -f list_hosts_by_cluster my-vmware-config cluster="clusterName"
salt-cloud -f list_hosts_by_datacenter my-vmware-config To list hosts for a specified datacenter: CLI Example: salt-cloud -f list_hosts_by_datacenter my-vmware-config datacenter="datacenterName"
salt-cloud -f list_networks my-vmware-config
salt-cloud -f list_nodes my-vmware-config To return a list of all VMs and templates present on ALL configured providers, with basic fields: CLI Example: salt-cloud -Q
salt-cloud -f list_nodes_full my-vmware-config To return a list of all VMs and templates present on ALL configured providers, with full details: CLI Example: salt-cloud -F
salt-cloud -f list_nodes_min my-vmware-config
salt-cloud -f list_nodes_select my-vmware-config To return a list of all VMs and templates present on ALL configured providers, with fields specified under query.selection in /usr/local/etc/salt/cloud: CLI Example: salt-cloud -S
salt-cloud -f list_portgroups my-vmware-config
salt-cloud -f list_resourcepools my-vmware-config
salt-cloud -f list_snapshots my-vmware-config To list snapshots for a specific VM/template: CLI Example: salt-cloud -f list_snapshots my-vmware-config name="vmname"
salt-cloud -f list_templates my-vmware-config
salt-cloud -f list_vapps my-vmware-config
If the host system is not in maintenance mode, it will
not be rebooted. If you want to reboot the host system regardless of whether
it is in maintenance mode, set force=True. Default is
force=False.
CLI Example: salt-cloud -f reboot_host my-vmware-config host="myHostSystemName" [force=True]
All the snapshots higher up in the hierarchy of the
current snapshot tree are consolidated and their virtual disks are merged. To
override this behavior and only remove all snapshots, set
merge_snapshots=False. Default is merge_snapshots=True
CLI Example: salt-cloud -a remove_all_snapshots vmname [merge_snapshots=False]
salt-cloud -f remove_host my-vmware-config host="myHostSystemName"
salt-cloud -a remove_snapshot vmname snapshot_name="mySnapshot" salt-cloud -a remove_snapshot vmname snapshot_name="mySnapshot" [remove_children="True"]
salt-cloud -f rescan_hba my-vmware-config host="hostSystemName" salt-cloud -f rescan_hba my-vmware-config hba="hbaDeviceName" host="hostSystemName"
If soft=True, this issues a command to the guest
operating system asking it to perform a reboot. Otherwise the hypervisor will
terminate the VM and start it again. Default is soft=False
For soft=True, VMware Tools should be installed on the guest system. CLI Example: salt-cloud -a reset vmname salt-cloud -a reset vmname soft=True
The virtual machine will be powered on if the power state
of the snapshot when it was created was set to "Powered On". Set
power_off=True so that the virtual machine stays powered off regardless
of the power state of the snapshot when it was created. Default is
power_off=False.
If the power state of the snapshot when it was created was "Powered On" and if power_off=True, the VM will be put in suspended state after it has been reverted to the snapshot. CLI Example: salt-cloud -a revert_to_snapshot vmname [power_off=True] salt-cloud -a revert_to_snapshot vmname snapshot_name="selectedSnapshot" [power_off=True]
salt-cloud -a show_instance vmname
If the host system is not in maintenance mode, it will
not be shut down. If you want to shut down the host system regardless of
whether it is in maintenance mode, set force=True. Default is
force=False.
CLI Example: salt-cloud -f shutdown_host my-vmware-config host="myHostSystemName" [force=True]
salt-cloud -a start vmname
If soft=True, this issues a command to the guest
operating system asking it to perform a clean shutdown of all services.
Default is soft=False
For soft=True, VMware Tools should be installed on the guest system. CLI Example: salt-cloud -a stop vmname salt-cloud -a stop vmname soft=True
salt-cloud -a suspend vmname
salt-cloud -a terminate vmname
salt-cloud -f test_vcenter_connection my-vmware-config
If the virtual machine is running Windows OS, use
reboot=True to reboot the virtual machine after VMware tools upgrade.
Default is reboot=False
CLI Example: salt-cloud -a upgrade_tools vmname salt-cloud -a upgrade_tools vmname reboot=True
If the virtual machine is running Windows OS, this
function will attempt to suppress the automatic reboot caused by a VMware
Tools upgrade.
CLI Example: salt-cloud -f upgrade_tools_all my-vmware-config salt.cloud.clouds.vultrpyVultr Cloud Module using python-vultr bindingsNew in version 2016.3.0. The Vultr cloud module is used to control access to the Vultr VPS system. Use of this module only requires the api_key parameter. Set up the cloud configuration at /usr/local/etc/salt/cloud.providers or /usr/local/etc/salt/cloud.providers.d/vultr.conf, and the cloud profile at /usr/local/etc/salt/cloud.profiles or /usr/local/etc/salt/cloud.profiles.d/vultr.conf (a sketch of both follows). This driver also supports Vultr's startup script feature. You can list startup scripts in your account with salt-cloud -f list_scripts <name of vultr provider> That list will include the IDs of the scripts in your account. Thus, if you have a script called 'setup-networking' with an ID of 493234, you can specify that startup script in a profile with startup_script_id. Similarly, you can specify a firewall group ID using the option firewall_group_id. You can list firewall groups with salt-cloud -f list_firewall_groups <name of vultr provider> To specify SSH keys to be preinstalled on the server, use the ssh_key_names setting. You can list SSH keys available on your account using salt-cloud -f list_keypairs <name of vultr provider>
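A provider and profile sketch (the API key, firewall group ID, key names, and the location/image/size IDs are placeholders; the startup script ID reuses the 493234 example above):

my-vultr-config:
  driver: vultr
  api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

nyc-2gb-1cpu-ubuntu-17-04:
  provider: my-vultr-config
  location: 1
  image: 223
  size: 13
  startup_script_id: 493234
  firewall_group_id: aaaaaaaa
  ssh_key_names: mykey1,mykey2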
salt.cloud.clouds.xenXenServer Cloud DriverThe XenServer driver is designed to work with a Citrix XenServer. Requires the XenServer SDK (can be downloaded from https://www.citrix.com/downloads/xenserver/product-software/). Place a copy of XenAPI.py in the Python site-packages folder.
Example provider configuration (/usr/local/etc/salt/cloud.providers.d/myxen.conf) and profile configuration (/usr/local/etc/salt/cloud.profiles.d/myxen.conf) are sketched below.
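A sketch of both files (the URL, credentials, and template/storage names are placeholders):

# /usr/local/etc/salt/cloud.providers.d/myxen.conf
myxen:
  driver: xen
  url: https://10.0.0.120
  user: root
  password: p@ssw0rd

# /usr/local/etc/salt/cloud.profiles.d/myxen.conf
suse:
  provider: myxen
  image: opensuseleap42_2-template
  storage_repo: 'Local storage'
  clone: True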
salt-cloud --list-images myxen
salt-cloud --list-locations myxen
salt-cloud --list-sizes myxen
salt-cloud -p some_profile xenvm01
salt-cloud -d xenvm01
salt-cloud -f destroy_template myxen name=testvm2
salt-cloud -a destroy_vm_vdis xenvm01
salt-cloud -a get_pv_args xenvm01
salt-cloud -a get_vm_ip xenvm01 NOTE: Requires xen guest tools to be installed in VM
salt-cloud -S
salt-cloud -f pool_list myxen
salt-cloud -a reboot xenvm01
salt-cloud -a resume xenvm01
salt-cloud -a set_pv_args xenvm01 pv_args="utf-8 graphical"
salt-cloud -a show_instance xenvm01 NOTE: memory is memory_dynamic_max
salt-cloud -a shutdown xenvm01
salt-cloud -f sr_list myxen
salt-cloud -a suspend xenvm01
salt-cloud -f template_list myxen salt-cloud -a unpause xenvm01
salt-cloud -a vbd_list xenvm01
salt-cloud -f vdi_list myxen terse=True
salt-cloud -a vif_list xenvm01 Configuring SaltSalt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file. The configuration files will be installed to /usr/local/etc/salt and are named after the respective components, /usr/local/etc/salt/master, and /usr/local/etc/salt/minion. Master ConfigurationBy default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /usr/local/etc/salt/master, as follows: - #interface: 0.0.0.0 + interface: 10.0.0.1 After updating the configuration file, restart the Salt master. See the master configuration reference for more details about other configurable options. Minion ConfigurationAlthough there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed. If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically /usr/local/etc/salt/minion, as follows: - #master: salt + master: 10.0.0.1 After updating the configuration file, restart the Salt minion. See the minion configuration reference for more details about other configurable options. Proxy Minion ConfigurationA proxy minion emulates the behaviour of a regular minion and inherits its options. Similarly, the configuration file is /usr/local/etc/salt/proxy and the proxy tries to connect to the DNS name "salt". In addition to the regular minion options, there are several proxy-specific options - see the proxy minion configuration reference. Running Salt
salt-master
salt-minion
salt-master --log-level=debug For information on salt's logging system please see the logging document.
More information about running salt as a non-privileged user can be found here. There is also a full troubleshooting guide available. Key IdentitySalt provides commands to validate the identity of your Salt master and Salt minions before the initial key exchange. Validating key identity helps avoid inadvertently connecting to the wrong Salt master, and helps prevent a potential MiTM attack when establishing the initial connection. Master Key FingerprintPrint the master key fingerprint by running the following command on the Salt master: salt-key -F master Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Save the configuration file and then restart the Salt minion. Minion Key FingerprintRun the following command on each Salt minion to view the minion key fingerprint: salt-call --local key.finger Compare this value to the value that is displayed when you run the salt-key --finger <MINION_ID> command on the Salt master. Key ManagementSalt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys. Before commands can be sent to a Minion, its key must be accepted on the Master. Run the salt-key command to list the keys known to the Salt Master: [root@master ~]# salt-key -L Unaccepted Keys: alpha bravo charlie delta Accepted Keys: This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the salt-key command: [root@master ~]# salt-key -A [root@master ~]# salt-key -L Unaccepted Keys: Accepted Keys: alpha bravo charlie delta The salt-key command allows for signing keys individually or in bulk. The example above, using -A bulk-accepts all pending keys. To accept keys individually use the lowercase of the same option, -a keyname. SEE ALSO: salt-key manpage
Sending CommandsCommunication between the Master and a Minion may be verified by running the test.version command: [root@master ~]# salt alpha test.version alpha: Communication between the Master and all Minions may be tested in a similar way: [root@master ~]# salt '*' test.version alpha: Each of the Minions should send a 2018.3.4 response as shown above, or any other salt version installed. What's Next?Understanding targeting is important. From there, depending on the way you wish to use Salt, you should also proceed to learn about Remote Execution and Configuration Management. engine modules
salt.engines.docker_eventsSend events from Docker events :Depends: Docker API >= 1.22
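A configuration along these lines listens for container events (the socket URL and the event filters shown are illustrative):

engines:
  - docker_events:
      docker_url: unix://var/run/docker.sock
      filters:
        event:
          - start
          - stop
          - die
          - oom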
The config above sets up the engine to listen for events from the Docker daemon and publish them to the Salt event bus. For filter reference, see https://docs.docker.com/engine/reference/commandline/events/ salt.engines.fluentAn engine that reads messages from the salt event bus and pushes them onto a fluent endpoint. New in version 3000. All arguments are optional. Example configuration of default settings
engines: Example fluentd configuration <source>
salt.engines.http_logstashHTTP Logstash engineAn engine that reads messages from the salt event bus and pushes them onto a logstash endpoint via HTTP requests. Changed in version 2018.3.0. NOTE: By default, this engine takes everything from the Salt bus
and exports it into Logstash. For a better selection of the events that you want
to publish, you can use the tags and funs options.
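Example configuration, a minimal sketch (the URL, tags, and funs values are placeholders):

engines:
  - http_logstash:
      url: http://logstash.example.com:8080
      tags:
        - salt/job/*/new
        - salt/job/*/ret/*
      funs:
        - probes.results
        - bgp.config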
salt.engines.ircbotIRC Bot engine New in version 2017.7.0. Example Configuration engines: Available commands on irc are:
Example of usage 08:33:57 @gtmanfred > !ping 08:33:57 gtmanbot > gtmanfred: pong 08:34:02 @gtmanfred > !echo ping 08:34:02 gtmanbot > ping 08:34:17 @gtmanfred > !event test/tag/ircbot irc is useful 08:34:17 gtmanbot > gtmanfred: TaDa! [DEBUG ] Sending event: tag = salt/engines/ircbot/test/tag/ircbot; data = {'_stamp': '2016-11-28T14:34:16.633623', 'data': ['irc', 'is', 'useful']}
This will allow the bot user to be fully authenticated
before joining any channels
WARNING: Unauthenticated Access to event stream
This engine sends event calls to the event stream without authenticating them in salt. Authentication will need to be configured and enforced on the irc server or enforced in the irc channel. The engine only accepts commands from channels, so non-authenticated users could be banned or quieted in the channel. /mode +q $~a # quiet all users who are not authenticated /mode +r # do not allow unauthenticated users into the channel It would also be possible to add a password to the irc channel, or only allow invited users to join. salt.engines.junos_syslogJunos Syslog EngineNew in version 2017.7.0.
An engine that listens to syslog messages from Junos devices, extracts event information, and generates messages on the SaltStack bus. The event topic sent to salt is dynamically generated according to the topic title specified by the user. The incoming event data (from the junos device) consists of the following fields:
The topic title can consist of any combination of the above fields, but the topic has to start with 'jnpr/syslog'. So, we can have different combinations:
The corresponding dynamic topic sent on salt event bus would look something like:
The default topic title is 'jnpr/syslog/hostname/event'. The user can choose the type of data they want on the event bus. For example, if one wants only events pertaining to a particular daemon, they can specify that in the configuration file: daemon: mgd One can even have a list of daemons like: daemon: Example configuration (to be written in master config file) engines: For the junos_syslog engine to receive events, syslog must be set on the junos device. This can be done via the following configuration: set system syslog host <ip-of-the-salt-device> port 516 any any Below is a sample syslog event which is received from the junos device: '<30>May 29 05:18:12 bng-ui-vm-9 mspd[1492]: No chassis configuration found' The source for parsing the syslog messages is taken from: https://gist.github.com/leandrosilva/3651640#file-xlog-py salt.engines.libvirt_eventsAn engine that listens for libvirt events and resends them to the salt event bus. The minimal configuration is the following and will listen to all events on the local hypervisor and send them with a tag starting with salt/engines/libvirt_events: engines: Note that the automatically-picked libvirt connection will depend on the value of uri_default in /etc/libvirt/libvirt.conf. To force using another connection like the local LXC libvirt driver, set the uri property as in the following example configuration. engines: Filters is a list of event types to relay to the event bus. Items in this list can be either one of the main types (domain, network, pool, nodedev, secret), all or a more precise filter. These can be done with values like <main_type>/<subtype>. The possible values are in the CALLBACK_DEFS constant. If the filters list contains all, all events will be relayed. Be aware that the list of events increases with libvirt versions, for example network events have been added in libvirt 1.2.1 and storage events in 2.0.0. Running the engine on non-rootRunning this engine as non-root requires special attention, which is surely the case for the master running as user salt. The engine is likely to fail to connect to libvirt with an error like this one: [ERROR ] authentication unavailable: no polkit agent
available to authenticate action 'org.libvirt.unix.monitor'
To fix this, the user running the engine, for example the salt-master, needs to have the rights to connect to libvirt in the machine polkit config. A polkit rule like the following one will allow the salt user to connect to libvirt: polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt") == 0 &&
        subject.user == "salt") {
        return polkit.Result.YES;
    }
});
New in version 2019.2.0.
salt.engines.logentriesAn engine that sends events to the Logentries logging service.
New in version 2016.3.0. To enable this engine the master and/or minion will need the following Python libraries: ssl, certifi
If you are running a new enough version of python then the ssl library will be present already. You will also need the following values configured in the minion or master config.
The 'token' can be obtained from the Logentries service. To test this engine, run for example: salt '*' test.ping or salt '*' cmd.run uptime
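The values above can be set with a configuration along these lines (the endpoint, port, and token are placeholders):

engines:
  - logentries:
      endpoint: data.logentries.com
      port: 10000
      token: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee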
salt.engines.logstash_engineAn engine that reads messages from the salt event bus and pushes them onto a logstash endpoint. New in version 2015.8.0.
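Example configuration, a minimal sketch (the host, port, and protocol are placeholders):

engines:
  - logstash:
      host: log.example.com
      port: 5959
      proto: tcp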
salt.engines.napalm_syslogNAPALM syslog engineNew in version 2017.7.0. An engine that takes syslog messages structured in OpenConfig or IETF format and fires Salt events. As there can be many messages pushed into the event bus, the user is able to filter based on the object structure. Requirements
This engine transfers objects from the napalm-logs library into the event bus. The top dictionary has the following keys:
napalm-logs transfers the messages via widely used transport mechanisms such as ZeroMQ (default) or Kafka. The user can select the right transport using the transport option in the configuration.
engines:
engines: Event example: {
To consume the events and eventually react and deploy configuration changes on the device(s) firing the event, one is able to identify the minion ID using one of the following alternatives, but not limited to:
A master configuration to match the event and react is sketched below. It matches events having the error code BGP_PREFIX_THRESH_EXCEEDED from any network operating system and any host, and reacts by executing the increase_prefix_limit_on_thresh_exceeded.sls reactor, found under one of the file_roots paths. The reactor in the example increases the BGP prefix limit when triggered by an event as above. The minion is matched using the host field from the data (which is the body of the event), compared to the hostname grain field. When the event occurs, the reactor will execute the net.load_template function, sending as arguments the template salt://increase_prefix_limit.jinja defined by the user in their environment and the complete OpenConfig object under the variable name openconfig_structure. Inside the Jinja template, the user can process the object from openconfig_structure and define the business logic as required.
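A sketch of the matching reactor configuration and the reactor SLS, following the names used above (the grain-based target in the SLS is an assumption based on the hostname-grain matching described):

reactor:
  - 'napalm/syslog/*/BGP_PREFIX_THRESH_EXCEEDED/*':
      - salt://increase_prefix_limit_on_thresh_exceeded.sls

increase_prefix_limit_on_thresh_exceeded:
  local.net.load_template:
    - tgt: "hostname:{{ data['host'] }}"
    - tgt_type: grain
    - kwarg:
        template_name: salt://increase_prefix_limit.jinja
        openconfig_structure: {{ data['open_config'] }}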
Currently zmq is the only valid option.
salt.engines.reactorSetup Reactor Example Config in Master or Minion config engines: salt.engines.redis_sentinelAn engine that reads messages from the redis sentinel pubsub and sends reactor events based on the channels they are subscribed to. New in version 2016.3.0.
engines:
salt.engines.scriptSend events based on a script's stdout. An example config is sketched after the sample output below. Script engine configs:
{ "tag" : "lots/of/tacos",
"data" : { "toppings" : "cilantro" }
}
This will fire the event 'lots/of/tacos' on the event bus with the data obj as is.
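An example engine config, a minimal sketch (the command, output format, and interval are placeholders):

engines:
  - script:
      cmd: /some/script.py -a 1 -b 2
      output: json
      interval: 5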
salt.engines.slackAn engine that reads messages from Slack and can act on them New in version 2016.3.0.
IMPORTANT: This engine requires a bot user. To create a bot user,
first go to the Custom Integrations page in your Slack Workspace. Copy
and paste the following URL, and replace myworkspace with the proper
value for your workspace:
https://myworkspace.slack.com/apps/manage/custom-integrations Next, click on the Bots integration and request installation. Once approved by an admin, you will be able to proceed with adding the bot user. Once the bot user has been added, you can configure it by adding an avatar, setting the display name, etc. You will also at this time have access to your API token, which will be needed to configure this engine. Finally, add this bot user to a channel by switching to the channel and using /invite @mybotuser. Keep in mind that this engine will process messages from each channel in which the bot is a member, so it is recommended to narrowly define the commands which can be executed, and the Slack users which are allowed to run commands. This engine has two boolean configuration parameters that toggle specific features (both default to False):
Here are a few examples: !test.ping target=* !state.apply foo target=os:CentOS tgt_type=grain !pkg.version mypkg target=role:database tgt_type=pillar
The groups_pillar_name config option can be used to pull group configuration from the specified pillar key. NOTE: In order to use groups_pillar_name, the engine
must be running as a minion running on the master, so that the Caller
client can be used to retrieve that minion's pillar data, because the master
process does not have pillar data.
Configuration ExamplesChanged in version 2017.7.0: Access control group support added The sketch below uses a single group called default; additional groups can also be loaded from pillar data. The group names do not have any significance; it is the users and commands defined within them that determine whether a Slack user has permission to run the desired command. Multiple groups may apply to different users, for example with all users having access to run test.ping. Keep in mind that when using *, the value must be quoted, or else PyYAML will fail to load the configuration.
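A single-group sketch (the token and targets are placeholders):

engines:
  - slack:
      token: 'xoxb-xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx'
      control: True
      fire_all: False
      groups_pillar_name: 'slack_engine:groups_pillar'
      groups:
        default:
          users:
            - '*'
          commands:
            - test.ping
            - cmd.run
          default_target:
            target: saltmaster
            tgt_type: glob
          targets:
            cmd.run:
              target: saltmaster
              tgt_type: list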
else yields {'message_data': m_data} and the caller can handle that When encountering an error (e.g. invalid message), yields {}, the caller can proceed to the next message When the websocket being read from has given up all its messages, yields {'done': True} to indicate that the caller has read all of the relevant data for now, and should continue its own processing and check back for more data later. This relies on the caller sleeping between checks, otherwise this could flood
h = {'aliases': {}, 'commands': {'cmd.run', 'pillar.get'},
Run each of them through get_configured_target(('foo', f), 'pillar.get') and confirm a valid target
salt.engines.slack_bolt_engineAn engine that reads messages from Slack and can act on them New in version 3006.0.
IMPORTANT: This engine requires a Slack app and a Slack Bot user. To
create a bot user, first go to the Custom Integrations page in your
Slack Workspace. Copy and paste the following URL, and log in with account
credentials with administrative privileges:
https://api.slack.com/apps/new Next, click on the From scratch option from the Create an app popup. Give your new app a unique name, e.g. SaltSlackEngine, select the workspace where your app will be running, and click Create App. Next, click on Socket Mode and then click on the toggle button for Enable Socket Mode. In the dialog give your Socket Mode Token a unique name and then copy and save the app level token. This will be used as the app_token parameter in the Slack engine configuration. Next, click on Event Subscriptions and ensure that Enable Events is in the on position. Then add the following bot events, message.channels and message.im, to the Subscribe to bot events list. Next, click on OAuth & Permissions and then under Bot Token Scope, click on Add an OAuth Scope. Ensure the following scopes are included:
Once all the scopes have been added, click the Install to Workspace button under OAuth Tokens for Your Workspace, then click Allow. Copy and save the Bot User OAuth Token, this will be used as the bot_token parameter in the Slack engine configuration. Finally, add this bot user to a channel by switching to the channel and using /invite @mybotuser. Keep in mind that this engine will process messages from each channel in which the bot is a member, so it is recommended to narrowly define the commands which can be executed, and the Slack users which are allowed to run commands. This engine has two boolean configuration parameters that toggle specific features (both default to False):
Here are a few examples: !test.ping target=* !state.apply foo target=os:CentOS tgt_type=grain !pkg.version mypkg target=role:database tgt_type=pillar
The groups_pillar_name config option can be used to pull group configuration from the specified pillar key. NOTE: In order to use groups_pillar_name, the engine
must be running as a minion running on the master, so that the Caller
client can be used to retrieve that minion's pillar data, because the master
process does not have pillar data.
Configuration ExamplesChanged in version 2017.7.0: Access control group support added Changed in version 3006.0: Updated to use the slack_bolt Python library. The sketch below uses a single group called default; additional groups can also be loaded from pillar data. The users and commands defined within these groups determine whether a Slack user has permission to run the desired command. Multiple groups may apply to different users, for example with all users having access to run test.ping. Keep in mind that when using *, the value must be quoted, or else PyYAML will fail to load the configuration.
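A single-group sketch using the two tokens described above (the token values and targets are placeholders):

engines:
  - slack_bolt:
      app_token: 'xapp-xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx'
      bot_token: 'xoxb-xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx'
      control: True
      fire_all: False
      groups:
        default:
          users:
            - '*'
          commands:
            - test.ping
          default_target:
            target: saltmaster
            tgt_type: glob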
else yields {'message_data': m_data} and the caller can handle that When encountering an error (e.g. invalid message), yields {}, the caller can proceed to the next message When the websocket being read from has given up all its messages, yields {'done': True} to indicate that the caller has read all of the relevant data for now, and should continue its own processing and check back for more data later. This relies on the caller sleeping between checks, otherwise this could flood
returns a dictionary of job id: result
h = {'aliases': {}, 'commands': {'cmd.run', 'pillar.get'},
Run each of them through get_configured_target(('foo', f), 'pillar.get') and confirm a valid target
returns tuple of: args (list), kwargs (dict)
salt.engines.sqs_eventsAn engine that continuously reads messages from SQS and fires them as events. Note that long polling is utilized to avoid excessive CPU usage. New in version 2015.8.0.
ConfigurationThis engine can be run on the master or on a minion. Example Config: sqs.keyid: GKTADJGHEIQSXMKKRBJ08H sqs.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs sqs.message_format: json Explicit sqs credentials are accepted but this engine can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from AWS API and no further configuration is necessary. More information available at: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used (or for boto versions < 2.5.1), you need to specify them either in a pillar or in the config file of the master or minion, as appropriate: To deserialize the message from json: sqs.message_format: json It's also possible to specify key, keyid and region via a profile: sqs.keyid: GKTADJGHEIQSXMKKRBJ08H sqs.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs A region may also be specified in the configuration: sqs.region: us-east-1 If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile: myprofile: Additionally you can define cross-account SQS, for example:
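(The queue name, profile, and account ID below are placeholders.)

engines:
  - sqs_events:
      queue: myqueue
      profile: myprofile
      owner_acct_id: 111111111111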
salt.engines.stalekeyAn engine that uses presence detection to keep track of which minions have been recently connected and remove their keys if they have not been connected for a certain period of time. Requires that the minion_data_cache option be enabled. New in version 2017.7.0.
salt.engines.testA simple test engine, not intended for real use but as an example
salt.engines.thoriumManage the Thorium complex event reaction system
salt.engines.webhookSend events from webhook api
Unauthenticated endpoint
This engine sends webhook calls to the event stream. If the engine is running on a minion with file_client: local the event is sent to the minion event stream. Otherwise it is sent to the master event stream. Example Config engines: engines: execution modulessalt.modules.groupgroup is a virtual module that is fulfilled by one of the following modules:
salt.modules.kernelpkgkernelpkg is a virtual module that is fulfilled by one of the following modules:
salt.modules.pkgpkg is a virtual module that is fulfilled by one of the following modules:
salt.modules.serviceservice is a virtual module that is fulfilled by one of the following modules:
salt.modules.shadowshadow is a virtual module that is fulfilled by one of the following modules:
salt.modules.sysctlsysctl is a virtual module that is fulfilled by one of the following modules:
salt.modules.useruser is a virtual module that is fulfilled by one of the following modules:
salt.modules.acmeACME / Let's Encrypt moduleNew in version 2016.3.0. This module currently looks for the certbot script in the $PATH as certbot, letsencrypt, certbot-auto, or letsencrypt-auto, eventually falling back to /opt/letsencrypt/letsencrypt-auto. NOTE: Installation & configuration of the Let's Encrypt
client can for example be done using
https://github.com/saltstack-formulas/letsencrypt-formula
WARNING: Be sure to set at least accept-tos = True in
cli.ini!
Most parameters will fall back to cli.ini defaults if None is given. DNS pluginsThis module currently supports the CloudFlare certbot DNS plugin. The DNS plugin credentials file needs to be passed in using the dns_plugin_credentials argument. Make sure the appropriate certbot plugin for the wanted DNS provider is installed before using this module.
CLI Example: salt 'gitlab.example.com' acme.cert dev.example.com "[gitlab.example.com]" test_cert=True renew=14 webroot=/opt/gitlab/embedded/service/gitlab-rails/public
salt 'vhost.example.com' acme.certs
CLI Example: salt 'gitlab.example.com' acme.expires dev.example.com
Code example: if __salt__['acme.has']('dev.example.com'): log.info('That is one nice certificate you have there!')
CLI Example: salt 'gitlab.example.com' acme.info dev.example.com
Code example: if __salt__['acme.needs_renewal']('dev.example.com'): __salt__['acme.cert']('dev.example.com') else: log.info('Your certificate is still good')
salt.modules.aix_groupManage groups on AIX IMPORTANT: If you feel that Salt should be using this module to
manage groups on a minion, and it is using a different module (or gives an
error similar to 'group.info' is not available), see here.
salt '*' group.add foo 3456
salt '*' group.adduser foo bar Verifies if a valid username 'bar' is a member of an existing group 'foo'; if not, then adds it.
salt '*' group.chgid foo 4376
salt '*' group.deluser foo bar Removes a member user 'bar' from a group 'foo'. If group is not present then returns True.
salt '*' group.getent
salt '*' group.info foo
salt '*' group.members foo 'user1,user2,user3,...'
salt.modules.aix_shadowManage account locks on AIX systems New in version 2018.3.0.
salt <minion_id> shadow.locked ALL
salt <minion_id> shadow.login_failures ALL
salt <minion_id> shadow.unlock user salt.modules.aixpkgPackage support for AIX IMPORTANT: If you feel that Salt should be using this module to
manage filesets or rpm packages on a minion, and it is using a different
module (or gives an error similar to 'pkg.install' is not available),
see here.
Return the latest available version of the named
fileset/rpm package available for upgrade or installation. If more than one
fileset/rpm package name is specified, a dict of name/version pairs is
returned.
If the latest version of a given fileset/rpm package is already installed, an empty string will be returned for that package. Changed in version 3005. CLI Example: salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package1> <package2> <package3> ...
This function will always return an empty string for an unfound fileset/rpm package.
Returns a dict containing the new fileset(s)/rpm package(s) names and versions:
CLI Example: salt '*' pkg.install /stage/middleware/AIX/bash-4.2-3.aix6.1.ppc.rpm salt '*' pkg.install /stage/middleware/AIX/bash-4.2-3.aix6.1.ppc.rpm refresh=True salt '*' pkg.install /stage/middleware/AIX/VIOS2211_update/tpc_4.1.1.85.bff salt '*' pkg.install /cecc/repos/aix72/TL3/BASE/installp/ppc/bos.rte.printers_7.2.2.0.bff salt '*' pkg.install /stage/middleware/AIX/Xlc/usr/sys/inst.images/xlC.rte salt '*' pkg.install /stage/middleware/AIX/Firefox/ppc-AIX53/Firefox.base salt '*' pkg.install /cecc/repos/aix72/TL3/BASE/installp/ppc/bos.net salt '*' pkg.install pkgs='["foo", "bar"]' salt '*' pkg.install libxml2
salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package1> <package2> <package3> ...
This function will always return an empty string for an unfound fileset/rpm package.
{'<package_name>': '<version>'}
CLI Example: salt '*' pkg.list_pkgs
Changed in version 3005:
Returns a list containing the removed packages. CLI Example: salt '*' pkg.remove <fileset/rpm package name> salt '*' pkg.remove tcsh salt '*' pkg.remove xlC.rte salt '*' pkg.remove Firefox.base.adt salt '*' pkg.remove pkgs='["foo", "bar"]'
salt '*' pkg.upgrade_available <package name>
salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package1> <package2> <package3> ... salt.modules.aliasesManage the information in the aliases file
salt '*' aliases.get_target alias
salt '*' aliases.has_target alias target
{'alias': 'target'}
CLI Example: salt '*' aliases.list_aliases
salt '*' aliases.rm_alias alias
salt '*' aliases.set_target alias target salt.modules.alternativesSupport for Alternatives system
salt '*' alternatives.auto name
salt '*' alternatives.check_exists name path
salt '*' alternatives.check_installed name path
salt '*' alternatives.display editor
salt '*' alternatives.install editor /usr/bin/editor /usr/bin/emacs23 50
salt '*' alternatives.remove name path
salt '*' alternatives.set name path
salt '*' alternatives.show_current editor
salt '*' alternatives.show_link editor salt.modules.ansiblegateAnsible SupportThis module can have an optional minion-level configuration in /usr/local/etc/salt/minion.d/ as follows: ansible_timeout: 1200
The timeout is how many seconds Salt should wait for any Ansible module to respond.
CLI Example: salt * ansible.call ping data=foobar
CLI Example: salt 'ansiblehost' ansible.discover_playbooks path=/srv/playbooks/ salt 'ansiblehost' ansible.discover_playbooks locations='["/srv/playbooks/", "/srv/foobar"]'
CLI Example: salt * ansible.help ping
salt * ansible.list salt * ansible.list '*win*' # To get all modules matching 'win' in their name
CLI Example: salt 'ansiblehost' ansible.playbooks playbook=/srv/playbooks/play.yml
CLI Example: salt 'ansiblehost' ansible.targets salt 'ansiblehost' ansible.targets inventory=my_custom_inventory salt.modules.apacheSupport for Apache NOTE: The functions in here are generic functions designed to
work with all implementations of Apache. Debian-specific functions have been
moved into deb_apache.py, but will still load under the apache
namespace when a Debian-based system is detected.
NOTE: This function is not meant to be used from the command
line. Config is meant to be an ordered dict of all of the apache
configs.
CLI Example: salt '*' apache.config /etc/httpd/conf.d/ports.conf config="[{'Listen': '22'}]"
salt '*' apache.directives
salt '*' apache.fullversion
salt '*' apache.modules
The server-status handler is disabled by default. In
order for this function to work it needs to be enabled. See
http://httpd.apache.org/docs/2.2/mod/mod_status.html
The following configuration needs to exist in pillar/grains. Each entry nested under the apache.server-status key is a profile of a vhost/server; this gives support for multiple Apache servers/vhosts (a pillar sketch follows the CLI examples). CLI Examples: salt '*' apache.server_status salt '*' apache.server_status other-profile
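A pillar sketch of one such profile; url is the required key, while user, pass, realm, and timeout (assumptions here) cover a basic-auth-protected handler:

    apache.server-status:
      default:
        url: http://localhost/server-status
        user: fred
        pass: password
        realm: 'authentication realm'
        timeout: 5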
salt '*' apache.servermods
salt '*' apache.signal restart
n  Don't update file; display results on stdout.
m  Force MD5 hashing of the password (default).
d  Force CRYPT(3) hashing of the password.
p  Do not hash the password (plaintext).
s  Force SHA1 hashing of the password.
CLI Examples: salt '*' apache.useradd /etc/httpd/htpasswd larry badpassword salt '*' apache.useradd /etc/httpd/htpasswd larry badpass opts=ns
salt '*' apache.userdel /etc/httpd/htpasswd larry
salt '*' apache.version
salt -t 10 '*' apache.vhosts
salt.modules.apcups
Module for apcupsd
salt '*' apcups.status_battery
salt '*' apcups.status_charge
salt.modules.apf
Support for Advanced Policy Firewall (APF)
salt '*' apf.allow 127.0.0.1
salt '*' apf.deny 1.2.3.4
salt '*' apf.refresh
salt '*' apf.remove 1.2.3.4
salt.modules.apkpkg
Support for apk
IMPORTANT: If you feel that Salt should be using this module to
manage packages on a minion, and it is using a different module (or gives an
error similar to 'pkg.install' is not available), see
here.
New in version 2017.7.0.
salt '*' pkg.file_dict httpd salt '*' pkg.file_dict httpd postfix salt '*' pkg.file_dict
salt '*' pkg.file_list httpd salt '*' pkg.file_list httpd postfix salt '*' pkg.file_list
salt '*' pkg.install <package name>
Multiple Package Installation Options:
salt '*' pkg.install pkgs='["foo", "bar"]'
salt '*' pkg.install sources='[{"foo": "salt://foo.deb"},{"bar": "salt://bar.deb"}]'
Returns a dict containing the new package names and versions: {'<package>': {'old': '<old-version>', 'new': '<new-version>'}}
salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package1> <package2> <package3> ...
{'<package_name>': '<version>'}
CLI Example: salt '*' pkg.list_pkgs salt '*' pkg.list_pkgs versions_as_list=True
salt '*' pkg.list_upgrades
salt '*' pkg.owns /usr/bin/apachectl salt '*' pkg.owns /usr/bin/apachectl /usr/bin/basename
CLI Example: salt '*' pkg.refresh_db
Multiple Package Options:
Returns a dict containing the changes. CLI Example: salt '*' pkg.remove <package name> salt '*' pkg.remove <package1>,<package2>,<package3> salt '*' pkg.remove pkgs='["foo", "bar"]'
CLI Example: salt '*' pkg.upgrade
salt '*' pkg.version <package name> salt '*' pkg.version <package1> <package2> <package3> ...
salt.modules.aptly
Aptly Debian repository manager. New in version 2018.3.0.
CLI Example: salt '*' aptly.cleanup_db
CLI Example: salt '*' aptly.delete_repo name="test-repo"
CLI Example: salt '*' aptly.get_config
CLI Example: salt '*' aptly.get_repo name="test-repo"
CLI Example: salt '*' aptly.list_mirrors
CLI Example: salt '*' aptly.list_published
CLI Example: salt '*' aptly.list_repos
CLI Example: salt '*' aptly.list_snapshots
CLI Example: salt '*' aptly.new_repo name="test-repo" comment="Test main repo" component="main" distribution="trusty"
CLI Example: salt '*' aptly.set_repo name="test-repo" comment="Test universe repo" component="universe" distribution="xenial"
salt.modules.aptpkg
Support for APT (Advanced Packaging Tool)
IMPORTANT: If you feel that Salt should be using this module to
manage packages on a minion, and it is using a different module (or gives an
error similar to 'pkg.install' is not available), see here.
For repository management, the python-apt package must be installed.
WARNING: The apt-key binary is deprecated and will last be
available in Debian 11 and Ubuntu 22.04. It is recommended to use aptkey=False
when using this module.
CLI Examples: salt '*' pkg.add_repo_key 'salt://apt/sources/test.key' salt '*' pkg.add_repo_key text="'$KEY1'" salt '*' pkg.add_repo_key keyserver='keyserver.example' keyid='0000AAAA'
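Given the deprecation warning above, the same key can be added without the deprecated apt-key binary, assuming this release's add_repo_key accepts the aptkey flag (introduced alongside the deprecation):

    salt '*' pkg.add_repo_key 'salt://apt/sources/test.key' aptkey=False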
CLI Example: salt '*' pkg.autoremove salt '*' pkg.autoremove list_only=True salt '*' pkg.autoremove purge=True
Return the latest version of the named package available
for upgrade or installation. If more than one package name is specified, a
dict of name/version pairs is returned.
If the latest version of a given package is already installed, an empty string will be returned for that package. A specific repo can be requested using the fromrepo keyword argument.
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
CLI Example: salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package name> fromrepo=unstable salt '*' pkg.latest_version <package1> <package2> <package3> ...
salt '*' pkg.del_repo "myrepo definition"
Setting this option to True requires that the
name param also be passed.
WARNING: The apt-key binary is deprecated and will last be
available in Debian 11 and Ubuntu 22.04. It is recommended to use aptkey=False
when using this module.
CLI Examples: salt '*' pkg.del_repo_key keyid=0123ABCD salt '*' pkg.del_repo_key name='ppa:foo/bar' keyid_ppa=True
salt '*' pkg.file_dict httpd salt '*' pkg.file_dict httpd postfix salt '*' pkg.file_dict
salt '*' pkg.file_list httpd salt '*' pkg.file_list httpd postfix salt '*' pkg.file_list
salt '*' pkg.get_repo "myrepo definition"
CLI Examples: salt '*' pkg.get_repo_keys
{'<host>': {'<state>': ['pkg1', 'pkg2', ...]}}
CLI Example: salt '*' pkg.get_selections salt '*' pkg.get_selections 'python-*' salt '*' pkg.get_selections state=hold salt '*' pkg.get_selections 'openssh*' state=hold
salt '*' pkg.hold <package name>
salt '*' pkg.hold pkgs='["foo", "bar"]'
CLI Example: salt '*' pkg.info_installed <package1> salt '*' pkg.info_installed <package1> <package2> <package3> ... salt '*' pkg.info_installed <package1> failhard=false
salt '*' pkg.install <package name>
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
Multiple Package Installation Options:
salt '*' pkg.install pkgs='["foo", "bar"]'
salt '*' pkg.install pkgs='["foo", {"bar": "1.2.3-0ubuntu0"}]'
salt '*' pkg.install sources='[{"foo": "salt://foo.deb"},{"bar": "salt://bar.deb"}]'
Returns a dict containing the new package names and versions: {'<package>': {'old': '<old-version>', 'new': '<new-version>'}}
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
CLI Example: salt '*' pkg.latest_version <package name> salt '*' pkg.latest_version <package name> fromrepo=unstable salt '*' pkg.latest_version <package1> <package2> <package3> ...
CLI Example: salt '*' pkg.list_downloaded
{'<package_name>': '<version>'}
CLI Example: salt '*' pkg.list_pkgs salt '*' pkg.list_pkgs versions_as_list=True
{
CLI Examples: salt '*' pkg.list_repo_pkgs salt '*' pkg.list_repo_pkgs foo bar baz
salt '*' pkg.list_repos salt '*' pkg.list_repos disabled=True
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
CLI Example: salt '*' pkg.list_upgrades
NOTE: Due to the way keys are stored for APT, there is a known
issue where the key won't be updated unless another change is made at the same
time. Keys should be properly added on initial configuration.
CLI Examples: salt '*' pkg.mod_repo 'myrepo definition' uri=http://new/uri salt '*' pkg.mod_repo 'myrepo definition' comps=main,universe
salt '*' pkg.normalize_name zsh:amd64
salt '*' pkg.owner /usr/bin/apachectl salt '*' pkg.owner /usr/bin/apachectl /usr/bin/basename
salt '*' pkg.parse_arch zsh:amd64
Multiple Package Options:
New in version 0.16.0. Returns a dict containing the changes. CLI Example: salt '*' pkg.purge <package name> salt '*' pkg.purge <package1>,<package2>,<package3> salt '*' pkg.purge pkgs='["foo", "bar"]'
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
failhard: If False, return results of Err lines as False for the package database that encountered the error. If True, raise an error with a list of the package databases that encountered errors.
CLI Example: salt '*' pkg.refresh_db
Multiple Package Options:
New in version 0.16.0. Returns a dict containing the changes. CLI Example: salt '*' pkg.remove <package name> salt '*' pkg.remove <package1>,<package2>,<package3> salt '*' pkg.remove pkgs='["foo", "bar"]'
salt '*' pkg.services_need_restart
This command is commonly used to mark specific packages to be held from being upgraded, that is, to be kept at a certain version. When a state is changed to anything but being held, it is typically followed by apt-get -u dselect-upgrade. Note: Be careful with the clear argument, since it will start by setting all packages to the deinstall state. Returns a dict of dicts containing the package names, and the new and old selections: {'<host>': {'<package>': {'new': '<new-selection>', 'old': '<old-selection>'}}}
CLI Example: salt '*' pkg.set_selections selection='{"install": ["netcat"]}'
salt '*' pkg.set_selections selection='{"hold": ["openssh-server", "openssh-client"]}'
salt '*' pkg.set_selections salt://path/to/file
salt '*' pkg.set_selections salt://path/to/file clear=True
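When a salt:// file is passed as above, its contents are parsed like the selection argument; a hypothetical YAML sketch of such a file (the exact file format is an assumption based on the selection examples above):

    install:
      - netcat
    hold:
      - openssh-server
      - openssh-client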
CLI Examples: salt myminion pkg.show gawk salt myminion pkg.show 'nginx-*' salt myminion pkg.show 'nginx-*' filter=description,provides
salt '*' pkg.unhold <package name>
salt '*' pkg.unhold pkgs='["foo", "bar"]'
{'<package>': {'old': '<old-version>', 'new': '<new-version>'}}
cache_valid_time (New in version 2016.11.0): Skip refreshing the package database if refresh has already occurred within <value> seconds.
CLI Example: salt '*' pkg.upgrade
salt '*' pkg.upgrade_available <package name>
salt '*' pkg.version <package name> salt '*' pkg.version <package1> <package2> <package3> ...
CLI Example: salt '*' pkg.version_cmp '0.2.4-0ubuntu1' '0.2.4.1-0ubuntu1'
salt.modules.archive
A module to wrap (non-Windows) archive calls
New in version 2014.1.0.
salt '*' archive.cmd_unzip template=jinja /tmp/zipfile.zip '/tmp/{{grains.id}}' excludes=file_1,file_2
This is not considered secure. It is recommended to
instead use archive.unzip for password-protected ZIP files. If a
password is used here, then the unzip command run to extract the ZIP file will
not show up in the minion log like most shell commands Salt runs do. However,
the password will still be present in the events logged to the minion log at
the debug log level. If the minion is logging at debug (or more
verbose), then be advised that the password will appear in the log.
New in version 2016.11.0. CLI Example: salt '*' archive.cmd_unzip /tmp/zipfile.zip /home/strongbad/ excludes=file_1,file_2
salt '*' archive.cmd_zip template=jinja /tmp/zipfile.zip /tmp/sourcefile1,/tmp/{{grains.id}}.txt
salt '*' archive.cmd_zip /tmp/baz.zip baz.txt cwd=/foo/bar New in version 2014.7.1.
CLI Example: salt '*' archive.cmd_zip /tmp/zipfile.zip /tmp/sourcefile1,/tmp/sourcefile2 # Globbing for sources (2017.7.0 and later) salt '*' archive.cmd_zip /tmp/zipfile.zip '/tmp/sourcefile*'
salt '*' archive.gunzip template=jinja /tmp/{{grains.id}}.txt.gz
CLI Example: # Create /tmp/sourcefile.txt salt '*' archive.gunzip /tmp/sourcefile.txt.gz salt '*' archive.gunzip /tmp/sourcefile.txt options='--verbose'
salt '*' archive.gzip template=jinja /tmp/{{grains.id}}.txt
CLI Example: # Create /tmp/sourcefile.txt.gz salt '*' archive.gzip /tmp/sourcefile.txt salt '*' archive.gzip /tmp/sourcefile.txt options='-9 --verbose'
If there is an error listing the archive's contents, the
cached file will not be removed, to allow for troubleshooting.
CLI Examples: salt '*' archive.is_encrypted /path/to/myfile.zip salt '*' archive.is_encrypted salt://foo.zip salt '*' archive.is_encrypted salt://foo.zip saltenv=dev salt '*' archive.is_encrypted https://domain.tld/myfile.zip clean=True salt '*' archive.is_encrypted https://domain.tld/myfile.zip source_hash=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15 salt '*' archive.is_encrypted ftp://10.1.2.3/foo.zip
This function will only provide results for XZ-compressed
archives if the xz CLI command is available, as Python does not at this
time natively support XZ compression in its tarfile module. Keep in
mind however that most Linux distros ship with xz already installed.
To check if a given minion has xz, the following Salt command can be run: salt minion_id cmd.which xz If None is returned, then xz is not present and must be installed. It is widely available and should be packaged as either xz or xz-utils.
salt minion_id archive.list /path/to/foo.tar.gz options='gzip --decompress --stdout' NOTE: It is not necessary to manually specify options for
gzip'ed archives, as gzip compression is natively supported by
tarfile.
If there is an error listing the archive's contents, the
cached file will not be removed, to allow for troubleshooting.
CLI Examples: salt '*' archive.list /path/to/myfile.tar.gz salt '*' archive.list /path/to/myfile.tar.gz strip_components=1 salt '*' archive.list salt://foo.tar.gz salt '*' archive.list https://domain.tld/myfile.zip salt '*' archive.list https://domain.tld/myfile.zip source_hash=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15 salt '*' archive.list ftp://10.1.2.3/foo.rar
salt '*' archive.rar template=jinja /tmp/rarfile.rar '/tmp/sourcefile1,/tmp/{{grains.id}}.txt'
CLI Example: salt '*' archive.rar /tmp/rarfile.rar /tmp/sourcefile1,/tmp/sourcefile2 # Globbing for sources (2017.7.0 and later) salt '*' archive.rar /tmp/rarfile.rar '/tmp/sourcefile*'
This function changed in version 0.17.0. In prior versions, the cwd and template arguments had to be specified, with the source directories/files coming as a space-separated list at the end of the command. Beginning with 0.17.0, sources must be a comma-separated list, and the cwd and template arguments are optional.
Uses the tar command to pack, unpack, etc. tar files
salt '*' archive.tar cjvf /tmp/salt.tar.bz2 {{grains.saltpath}} template=jinja
CLI Examples: # Create a tarfile salt '*' archive.tar cjvf /tmp/tarfile.tar.bz2 /tmp/file_1,/tmp/file_2 # Create a tarfile using globbing (2017.7.0 and later) salt '*' archive.tar cjvf /tmp/tarfile.tar.bz2 '/tmp/file_*' # Unpack a tarfile salt '*' archive.tar xf foo.tar dest=/target/directory
salt '*' archive.unrar template=jinja /tmp/rarfile.rar /tmp/{{grains.id}}/ excludes=file_1,file_2
CLI Example: salt '*' archive.unrar /tmp/rarfile.rar /home/strongbad/ excludes=file_1,file_2
salt '*' archive.unzip template=jinja /tmp/zipfile.zip /tmp/{{grains.id}}/ excludes=file_1,file_2
CLI Example: salt '*' archive.unzip /tmp/zipfile.zip /home/strongbad/ excludes=file_1,file_2
The password will be present in the events logged to the
minion log file at the debug log level. If the minion is logging at
debug (or more verbose), then be advised that the password will appear
in the log.
New in version 2016.3.0.
CLI Example: salt '*' archive.unzip /tmp/zipfile.zip /home/strongbad/ password='BadPassword'
salt '*' archive.zip template=jinja /tmp/zipfile.zip /tmp/sourcefile1,/tmp/{{grains.id}}.txt
salt '*' archive.zip /tmp/baz.zip baz.txt cwd=/foo/bar
CLI Example: salt '*' archive.zip /tmp/zipfile.zip /tmp/sourcefile1,/tmp/sourcefile2 # Globbing for sources (2017.7.0 and later) salt '*' archive.zip /tmp/zipfile.zip '/tmp/sourcefile*'
salt.modules.arista_pyeapi
Arista pyeapi
New in version 2019.2.0. Execution module to interface the connection with Arista switches, connecting to the remote network device using the pyeapi library. It is flexible enough to execute the commands both when running under an Arista Proxy Minion, as well as running under a Regular Minion by specifying the connection arguments, i.e., device_type, host, username, password etc.
NOTE: To understand how to correctly enable the eAPI on your switch, please check https://eos.arista.com/arista-eapi-101/.
Dependencies
The pyeapi Execution module requires the Python Client for eAPI (pyeapi) to be installed: pip install pyeapi.
Usage
This module can equally be used via the pyeapi Proxy module or directly from an arbitrary (Proxy) Minion that is running on a machine having access to the network device API, and the pyeapi library is installed. When running outside of the pyeapi Proxy (i.e., from another Proxy Minion type, or regular Minion), the pyeapi connection arguments can be either specified from the CLI when executing the command, or in a configuration block under the pyeapi key in the configuration opts (i.e., (Proxy) Minion configuration file), or Pillar. The module supports these simultaneously. These fields are the exact same supported by the pyeapi Proxy Module:
Example (when not running in a pyeapi Proxy Minion): a pyeapi configuration block like the one sketched below. In case the username and password are the same on all devices you are targeting, such a block (besides other parameters specific to your environment you might need) should suffice to be able to execute commands from outside a pyeapi Proxy, e.g.: salt '*' pyeapi.run_commands 'show version' 'show interfaces' salt '*' pyeapi.config 'ntp server 1.2.3.4'
NOTE: Remember that the above applies only when not running in a pyeapi Proxy Minion. If you want to use the pyeapi Proxy, please follow the documentation notes for a proper setup.
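A minimal sketch of such a pyeapi block for the Minion configuration or Pillar (values are placeholders; the transport key is optional and https is pyeapi's default):

    pyeapi:
      username: example
      password: example
      transport: https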
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
NOTE: This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
CLI Example: salt '*' pyeapi.call run_commands "['show version']"
This argument is ignored when config_file is
specified.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
NOTE: This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
CLI Example: salt '*' pyeapi.config commands="['ntp server 1.2.3.4', 'ntp server 5.6.7.8']"
salt '*' pyeapi.config config_file=salt://config.txt
salt '*' pyeapi.config config_file=https://bit.ly/2LGLcDy context="{'servers': ['1.2.3.4']}"
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
NOTE: This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
CLI Example: salt '*' pyeapi.get_config salt '*' pyeapi.get_config params='section snmp-server' salt '*' pyeapi.get_config config='startup-config'
This function returns an unserializable object, hence it
is not meant to be used on the CLI. This should mainly be used when invoked
from other modules for the low level connection with the network device.
USAGE Example:

    conn = __salt__['pyeapi.get_connection'](host='router1.example.com',
                                             username='example',
                                             password='example')
    show_ver = conn.run_commands(['show version', 'show interfaces'])
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
NOTE: This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
CLI Example: salt '*' pyeapi.run_commands 'show version' salt '*' pyeapi.run_commands 'show version' encoding=text salt '*' pyeapi.run_commands 'show version' encoding=text host=cr1.thn.lon username=example password=weak Output example: veos1:
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
NOTE: This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
This argument does not need to be specified when running
in a pyeapi Proxy Minion.
CLI Example: salt '*'
salt.modules.artifactory
Module for fetching artifacts from Artifactory
salt.modules.at
Wrapper module for at(1). Also, a 'tag' feature has been added to more easily tag jobs.
Changed in version 2017.7.0.
salt '*' at.at <timespec> <cmd> [tag=<tag>] [runas=<user>] salt '*' at.at 12:05am '/sbin/reboot' tag=reboot salt '*' at.at '3:05am +3 days' 'bin/myscript' tag=nightly runas=jim salt '*' at.at '"22:02"' 'bin/myscript' tag=nightly runas=jim
salt '*' at.atc <jobid>
salt '*' at.atq salt '*' at.atq [tag] salt '*' at.atq [job number]
salt '*' at.atrm <jobid> <jobid> .. <jobid> salt '*' at.atrm all salt '*' at.atrm all [tag]
salt '*' at.jobcheck runas=jam day=13 salt '*' at.jobcheck day=13 month=12 year=13 tag=rose
salt.modules.at_solaris
Wrapper for at(1) on Solaris-like systems
NOTE: We try to mirror the generic at module where possible.
New in version 2017.7.0.
salt '*' at.at <timespec> <cmd> [tag=<tag>] [runas=<user>] salt '*' at.at 12:05am '/sbin/reboot' tag=reboot salt '*' at.at '3:05am +3 days' 'bin/myscript' tag=nightly runas=jim
salt '*' at.atc <jobid>
salt '*' at.atq salt '*' at.atq [tag] salt '*' at.atq [job number]
salt '*' at.atrm <jobid> <jobid> .. <jobid> salt '*' at.atrm all salt '*' at.atrm all [tag]
salt '*' at.jobcheck runas=jam day=13 salt '*' at.jobcheck day=13 month=12 year=13 tag=rose
salt.modules.augeas_cfg
Manages configuration files via augeas
This module requires the augeas Python module.
WARNING: Minimal installations of Debian and Ubuntu have been seen
to have packaging bugs with python-augeas, causing the augeas module to fail
to import. If the minion has the augeas module installed, but the functions in
this execution module fail to run due to being unavailable, first restart the
salt-minion service. If the problem persists past that, the following command
can be run from the master to determine what is causing the import to fail:
salt minion-id cmd.run 'python -c "from augeas import Augeas"' For affected Debian/Ubuntu hosts, installing libpython2.7 has been known to resolve the issue.
salt '*' augeas.execute /files/etc/redis/redis.conf \ commands='["set bind 0.0.0.0", "set maxmemory 1G"]' New in version 2016.3.0.
salt '*' augeas.get /files/etc/hosts/1/ ipaddr New in version 2016.3.0.
salt '*' augeas.ls /files/etc/passwd
New in version 2016.3.0.
salt '*' augeas.match /files/etc/services/service-name ssh New in version 2016.3.0.
salt '*' augeas.remove \ /files/etc/sysctl.conf/net.ipv4.conf.all.log_martians
New in version 2016.3.0.
salt '*' augeas.setvalue /files/etc/hosts/1/canonical localhost This will set the first entry in /etc/hosts to localhost. CLI Example: salt '*' augeas.setvalue /files/etc/hosts/01/ipaddr 192.168.1.1 \ This adds a new host to /etc/hosts with the IP address 192.168.1.1 and hostname test. CLI Example: salt '*' augeas.setvalue prefix=/files/etc/sudoers/ \ This ensures that the following line is present in /etc/sudoers: %wheel ALL = PASSWD : ALL , NOPASSWD : /usr/bin/apt-get , /usr/bin/aptitude
salt '*' augeas.tree /files/etc/
New in version 2016.3.0.
salt.modules.aws_sqs
Support for the Amazon Simple Queue Service.
CLI Example: salt '*' aws_sqs.create_queue <sqs queue> <region>
CLI Example: salt '*' aws_sqs.delete_message <sqs queue> <region> receipthandle='<sqs ReceiptHandle>' New in version 2014.7.0.
CLI Example: salt '*' aws_sqs.delete_queue <sqs queue> <region>
CLI Example: salt '*' aws_sqs.list_queues <region>
CLI Example: salt '*' aws_sqs.queue_exists <sqs queue> <region>
CLI Example: salt '*' aws_sqs.receive_message <sqs queue> <region> salt '*' aws_sqs.receive_message <sqs queue> <region> num=10 New in version 2014.7.0.
salt.modules.azurearm_compute
Azure (ARM) Compute Execution Module
New in version 2019.2.0.
WARNING: This cloud provider will be removed from Salt in version 3007 in favor of the saltext.azurerm Salt Extension.
Optional provider parameters:
CLI Example: salt-call azurearm_compute.availability_set_create_or_update testset testgroup
CLI Example: salt-call azurearm_compute.availability_set_delete testset testgroup
CLI Example: salt-call azurearm_compute.availability_set_get testset testgroup
CLI Example: salt-call azurearm_compute.availability_sets_list testgroup
CLI Example: salt-call azurearm_compute.availability_sets_list_available_sizes testset testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_capture testvm testcontainer testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_convert_to_managed_disks testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_deallocate testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_generalize testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_get testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_power_off testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_redeploy testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_restart testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machine_start testvm testgroup
CLI Example: salt-call azurearm_compute.virtual_machines_list testgroup
salt-call azurearm_compute.virtual_machines_list_all
CLI Example: salt-call azurearm_compute.virtual_machines_list_available_sizes testvm testgroup
salt.modules.azurearm_dns
Azure (ARM) DNS Execution Module
New in version 3000.
WARNING: This cloud provider will be removed from Salt in version 3007 in favor of the saltext.azurerm Salt Extension.
Required provider parameters:
if using username and password: subscription_id, username, password
if using a service principal: subscription_id, tenant, client_id, secret
Optional provider parameters:
cloud_environment: Used to point the cloud driver to different API endpoints, such as Azure GovCloud. Possible values: AZURE_PUBLIC_CLOUD (default), AZURE_CHINA_CLOUD, AZURE_US_GOV_CLOUD, AZURE_GERMAN_CLOUD
CLI Example: salt-call azurearm_dns.record_set_create_or_update myhost myzone testgroup A
CLI Example: salt-call azurearm_dns.record_set_delete myhost myzone testgroup A
CLI Example: salt-call azurearm_dns.record_set_get '@' myzone testgroup SOA
CLI Example: salt-call azurearm_dns.record_sets_list_by_dns_zone myzone testgroup
CLI Example: salt-call azurearm_dns.record_sets_list_by_type myzone testgroup SOA
CLI Example: salt-call azurearm_dns.zone_create_or_update myzone testgroup
CLI Example: salt-call azurearm_dns.zone_delete myzone testgroup
CLI Example: salt-call azurearm_dns.zone_get myzone testgroup
CLI Example: salt-call azurearm_dns.zones_list
CLI Example: salt-call azurearm_dns.zones_list_by_resource_group testgroup
salt.modules.azurearm_network
Azure (ARM) Network Execution Module
New in version 2019.2.0.
WARNING: This cloud provider will be removed from Salt in version 3007 in favor of the saltext.azurerm Salt Extension.
Optional provider parameters:
CLI Example: salt-call azurearm_network.check_dns_name_availability testdnsname westus
CLI Example: salt-call azurearm_network.check_ip_address_availability 10.0.0.4 testnet testgroup
CLI Example: salt-call azurearm_network.default_security_rule_get DenyAllOutBound testnsg testgroup
CLI Example: salt-call azurearm_network.default_security_rules_list testnsg testgroup
CLI Example: salt-call azurearm_network.get_virtual_machine_scale_set_network_interface test-iface0 testset testvm testgroup
CLI Example: salt-call azurearm_network.list_virtual_machine_scale_set_vm_network_interfaces testset testgroup
CLI Example: salt-call azurearm_network.list_virtual_machine_scale_set_vm_network_interfaces testset testvm testgroup
CLI Example: salt-call azurearm_network.load_balancer_create_or_update testlb testgroup
CLI Example: salt-call azurearm_network.load_balancer_delete testlb testgroup
CLI Example: salt-call azurearm_network.load_balancer_get testlb testgroup
CLI Example: salt-call azurearm_network.load_balancers_list testgroup
salt-call azurearm_network.load_balancers_list_all
CLI Example: salt-call azurearm_network.network_interface_create_or_update test-iface0 [{'name': 'testipconfig1'}] testsubnet testnet testgroup
CLI Example: salt-call azurearm_network.network_interface_delete test-iface0 testgroup
CLI Example: salt-call azurearm_network.network_interface_get test-iface0 testgroup
CLI Example: salt-call azurearm_network.network_interface_get_effective_route_table test-iface0 testgroup
CLI Example: salt-call azurearm_network.network_interface_list_effective_network_security_groups test-iface0 testgroup
CLI Example: salt-call azurearm_network.network_interfaces_list testgroup
salt-call azurearm_network.network_interfaces_list_all
CLI Example: salt-call azurearm_network.network_security_group_create_or_update testnsg testgroup
CLI Example: salt-call azurearm_network.network_security_group_delete testnsg testgroup
CLI Example: salt-call azurearm_network.network_security_group_get testnsg testgroup
CLI Example: salt-call azurearm_network.network_security_groups_list testgroup
salt-call azurearm_network.network_security_groups_list_all
CLI Example: salt-call azurearm_network.public_ip_address_create_or_update test-ip-0 testgroup
CLI Example: salt-call azurearm_network.public_ip_address_delete test-pub-ip testgroup
CLI Example: salt-call azurearm_network.public_ip_address_get test-pub-ip testgroup
CLI Example: salt-call azurearm_network.public_ip_addresses_list testgroup
salt-call azurearm_network.public_ip_addresses_list_all
CLI Example: salt-call azurearm_network.route_create_or_update test-rt '10.0.0.0/8' test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_delete test-rt test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_filter_create_or_update test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_delete test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_get test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_rule_create_or_update test-rule allow "['12076:51006']" test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_rule_delete test-rule test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_rule_get test-rule test-filter testgroup
CLI Example: salt-call azurearm_network.route_filter_rules_list test-filter testgroup
CLI Example: salt-call azurearm_network.route_filters_list testgroup
salt-call azurearm_network.route_filters_list_all
CLI Example: salt-call azurearm_network.route_get test-rt test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_table_create_or_update test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_table_delete test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_table_get test-rt-table testgroup
CLI Example: salt-call azurearm_network.route_tables_list testgroup
salt-call azurearm_network.route_tables_list_all
CLI Example: salt-call azurearm_network.routes_list test-rt-table testgroup
CLI Example: salt-call azurearm_network.security_rule_create_or_update testrule1 allow outbound 101 tcp testnsg testgroup source_address_prefix='*' destination_address_prefix=internet source_port_range='*' destination_port_range='1-1024'
CLI Example: salt-call azurearm_network.security_rule_delete testrule1 testnsg testgroup
CLI Example: salt-call azurearm_network.security_rule_get testrule1 testnsg testgroup
CLI Example: salt-call azurearm_network.security_rules_list testnsg testgroup
CLI Example: salt-call azurearm_network.subnet_create_or_update testsubnet '10.0.0.0/24' testnet testgroup
CLI Example: salt-call azurearm_network.subnet_delete testsubnet testnet testgroup
CLI Example: salt-call azurearm_network.subnet_get testsubnet testnet testgroup
CLI Example: salt-call azurearm_network.subnets_list testnet testgroup
CLI Example: salt-call azurearm_network.usages_list westus
CLI Example: salt-call azurearm_network.virtual_network_create_or_update testnet ['10.0.0.0/16'] testgroup
CLI Example: salt-call azurearm_network.virtual_network_delete testnet testgroup
CLI Example: salt-call azurearm_network.virtual_network_get testnet testgroup
CLI Example: salt-call azurearm_network.virtual_networks_list testgroup
salt-call azurearm_network.virtual_networks_list_all
salt.modules.azurearm_resource
Azure (ARM) Resource Execution Module
New in version 2019.2.0.
WARNING: This cloud provider will be removed from Salt in version 3007 in favor of the saltext.azurerm Salt Extension.
Optional provider parameters:
CLI Example: salt-call azurearm_resource.deployment_cancel testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_check_existence testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_create_or_update testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_delete testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_export_template testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_get testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_operation_get XXXXX testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_operations_list testdeploy testgroup
CLI Example: salt-call azurearm_resource.deployment_validate testdeploy testgroup
salt-call azurearm_resource.deployments_list testgroup
CLI Example: salt-call azurearm_resource.policy_assignment_create testassign /subscriptions/bc75htn-a0fhsi-349b-56gh-4fghti-f84852 testpolicy
CLI Example: salt-call azurearm_resource.policy_assignment_delete testassign /subscriptions/bc75htn-a0fhsi-349b-56gh-4fghti-f84852
CLI Example: salt-call azurearm_resource.policy_assignment_get testassign /subscriptions/bc75htn-a0fhsi-349b-56gh-4fghti-f84852
salt-call azurearm_resource.policy_assignments_list
CLI Example: salt-call azurearm_resource.policy_assignments_list_for_resource_group testgroup
CLI Example: salt-call azurearm_resource.policy_definition_create_or_update testpolicy '{...rule definition..}'
CLI Example: salt-call azurearm_resource.policy_definition_delete testpolicy
CLI Example: salt-call azurearm_resource.policy_definition_get testpolicy
CLI Example: salt-call azurearm_resource.policy_definitions_list
CLI Example: salt-call azurearm_resource.resource_group_check_existence testgroup
CLI Example: salt-call azurearm_resource.resource_group_create_or_update testgroup westus
CLI Example: salt-call azurearm_resource.resource_group_delete testgroup
CLI Example: salt-call azurearm_resource.resource_group_get testgroup
salt-call azurearm_resource.resource_groups_list
CLI Example: salt-call azurearm_resource.subscription_get XXXXXXXX
salt-call azurearm_resource.subscriptions_list
CLI Example: salt-call azurearm_resource.subscriptions_list_locations XXXXXXXX
salt-call azurearm_resource.tenants_list
salt.modules.bamboohr
Support for BambooHR
New in version 2015.8.0. Requires a subdomain and an apikey in /usr/local/etc/salt/minion, in a bamboohr: block like the sketch below.
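A sketch of that bamboohr block with placeholder values (substitute your API key and company subdomain):

    bamboohr:
      apikey: 012345678901234567890
      subdomain: mycompany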
salt myminion bamboohr.list_employees By default, the return data will be keyed by ID. However, it can be ordered by any other field. Keep in mind that if the chosen field contains duplicate values (for instance, location for a company which has only one location), then each duplicate value will be overwritten by the previous. Therefore, it is advisable to only sort by fields that are guaranteed to be unique. CLI Examples: salt myminion bamboohr.list_employees order_by=id salt myminion bamboohr.list_employees order_by=displayName salt myminion bamboohr.list_employees order_by=workEmail
salt myminion bamboohr.list_meta_fields
salt myminion bamboohr.list_users By default, the return data will be keyed by ID. However, it can be ordered by any other field. Keep in mind that if the chosen field contains duplicate values (for instance, location for a company which has only one location), then each duplicate value will be overwritten by the previous. Therefore, it is advisable to only sort by fields that are guaranteed to be unique. CLI Examples: salt myminion bamboohr.list_users order_by=id salt myminion bamboohr.list_users order_by=email
salt myminion bamboohr.show_employee 1138 By default, the fields normally returned from bamboohr.list_employees are returned. These fields are:
If needed, a different set of fields may be specified, separated by commas: CLI Example: salt myminion bamboohr.show_employee 1138 displayName,dateOfBirth A list of available fields can be found at http://www.bamboohr.com/api/documentation/employees.php
salt myminion bamboohr.update_employee 1138 nickname Curly
salt myminion bamboohr.update_employee 1138 nickname ''
salt myminion bamboohr.update_employee 1138 items='{"nickname": "Curly"}
salt myminion bamboohr.update_employee 1138 items='{"nickname": ""}
salt.modules.baredoc
Baredoc walks the installed module and state directories and generates dictionaries and lists of the function names and their arguments. New in version 3001.
CLI Example: salt myminion baredoc.list_modules myminion:
CLI Example: (example truncated for brevity) salt myminion baredoc.list_states myminion:
CLI Example: salt myminion baredoc.module_docs
CLI Example: salt myminion baredoc.state_docs at
salt.modules.bcache
Module for managing BCache sets
BCache is a block-level caching mechanism similar to ZFS L2ARC/ZIL, dm-cache and fscache. It works by formatting one block device as a cache set, then adding backend devices (which need to be formatted as such) to the set and activating them. It has been available in the mainline Linux kernel since 3.10: https://www.kernel.org/doc/Documentation/bcache.txt This module needs the bcache userspace tools to function. New in version 2016.3.0.
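A possible end-to-end sketch using the functions documented below (the device names sdb and sdc are illustrative):

    salt '*' bcache.cache_make sdb             # format sdb as the cache set
    salt '*' bcache.back_make sdc attach=True  # format sdc as a backing device and attach it
    salt '*' bcache.status                     # verify the resulting set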
salt '*' bcache.attach sdc salt '*' bcache.attach /dev/bcache1
salt '*' bcache.back_make sdc cache_mode=writeback attach=True
salt '*' bcache.cache_make sdb reserved=10% block_size=4096
This increases the amount of reserved space available to SSD garbage collectors, potentially (vastly) increasing performance.
salt '*' bcache.config salt '*' bcache.config bcache1 salt '*' bcache.config errors=panic journal_delay_ms=150 salt '*' bcache.config bcache1 cache_mode=writeback writeback_percent=15
salt '*' bcache.detach sdc salt '*' bcache.detach bcache1
salt '*' bcache.device bcache0 salt '*' bcache.device /dev/sdc stats=True
salt '*' bcache.start
salt '*' bcache.status salt '*' bcache.status stats=True salt '*' bcache.status internals=True alldevs=True
'Stop' on an individual backing device means hard-stop;
no attempt at flushing will be done and the bcache device will seemingly
'disappear' from the device lists
CLI Example: salt '*' bcache.stop
salt '*' bcache.device bcache0 salt '*' bcache.device /dev/sdc
salt '*' bcache.uuid salt '*' bcache.uuid /dev/sda salt '*' bcache.uuid bcache0
salt.modules.beacons
Module for managing the Salt beacons on a minion
New in version 2015.8.0.
CLI Example: salt '*' beacons.add ps "[{'processes': {'salt-master': 'stopped', 'apache2': 'stopped'}}]"
CLI Example: salt '*' beacons.delete ps salt '*' beacons.delete load
CLI Example: salt '*' beacons.disable
CLI Example: salt '*' beacons.disable_beacon ps
CLI Example: salt '*' beacons.enable
CLI Example: salt '*' beacons.enable_beacon ps
CLI Example: salt '*' beacons.list
CLI Example: salt '*' beacons.list_available
CLI Example: salt '*' beacons.modify ps "[{'salt-master': 'stopped'}, {'apache2': 'stopped'}]"
salt '*' beacons.reset
CLI Example: salt '*' beacons.save
salt.modules.bigip
CLI Example: salt '*' bigip.add_pool_members bigip admin admin my-pool 10.2.2.1:80
CLI Example: salt '*' bigip.commit_transaction bigip admin admin my_transaction
CLI Example: salt '*' bigip.create_monitor bigip admin admin http my-http-monitor timeout=10 interval=5
CLI Example: salt '*' bigip.create_node bigip admin admin 10.1.1.2
CLI Example: salt '*' bigip.create_pool bigip admin admin my-pool 10.1.1.1:80,10.1.1.2:80,10.1.1.3:80 monitor=http
CLI Example: salt '*' bigip.create_profile bigip admin admin http my-http-profile defaultsFrom='/Common/http' salt '*' bigip.create_profile bigip admin admin http my-http-profile defaultsFrom='/Common/http' \
CLI Example: salt '*' bigip.create_virtual bigip admin admin my-virtual-3 26.2.2.5:80 \
CLI Example: salt '*' bigip.delete_monitor bigip admin admin http my-http-monitor
CLI Example: salt '*' bigip.delete_node bigip admin admin my-node
CLI Example: salt '*' bigip.delete_pool bigip admin admin my-pool
CLI Example: salt '*' bigip.delete_pool_member bigip admin admin my-pool 10.2.2.2:80
CLI Example: salt '*' bigip.delete_profile bigip admin admin http my-http-profile
CLI Example: salt '*' bigip.delete_transaction bigip admin admin my_transaction
CLI Example: salt '*' bigip.delete_virtual bigip admin admin my-virtual
CLI Example: salt '*' bigip.list_monitor bigip admin admin http my-http-monitor
CLI Example: salt '*' bigip.list_node bigip admin admin my-node
CLI Example: salt '*' bigip.list_pool bigip admin admin my-pool
CLI Example: salt '*' bigip.list_profile bigip admin admin http my-http-profile
CLI Example: salt '*' bigip.list_transaction bigip admin admin my_transaction
CLI Example: salt '*' bigip.list_virtual bigip admin admin my-virtual
CLI Example: salt '*' bigip.modify_monitor bigip admin admin http my-http-monitor timeout=16 interval=6
CLI Example: salt '*' bigip.modify_node bigip admin admin 10.1.1.2 ratio=2 logging=enabled
CLI Example: salt '*' bigip.modify_pool bigip admin admin my-pool 10.1.1.1:80,10.1.1.2:80,10.1.1.3:80 min_active_members=1
CLI Example: salt '*' bigip.modify_pool_member bigip admin admin my-pool 10.2.2.1:80 state=use-down session=user-disabled
Creating Complex Args: Profiles can get pretty complicated in terms of the number of possible config options. Use the shorthand sketched below to create complex arguments such as lists, dictionaries, and lists of dictionaries. An option is also provided to pass raw JSON.
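A sketch of that shorthand as recalled from the upstream bigip module docs (treat the exact separators as assumptions): commas build lists, key:value pairs build dictionaries, a pipe separates the members of a list of dictionaries, and a j prefix passes raw JSON through unchanged:

    param='item1,item2,item3'                  # list
    param='key1:val1,key2:val2'                # dictionary
    param='key1:val1,key2:val2|key1:val1'      # list of dictionaries
    param='j{"key1": "val1", "key2": "val2"}'  # raw JSON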
CLI Example: salt '*' bigip.modify_profile bigip admin admin http my-http-profile defaultsFrom='/Common/http' salt '*' bigip.modify_profile bigip admin admin http my-http-profile defaultsFrom='/Common/http' \
CLI Example: salt '*' bigip.modify_virtual bigip admin admin my-virtual source_address_translation=none salt '*' bigip.modify_virtual bigip admin admin my-virtual rules=my-rule,my-other-rule
CLI Example: salt '*' bigip.replace_pool_members bigip admin admin my-pool 10.2.2.1:80,10.2.2.2:80,10.2.2.3:80
CLI Example: salt '*' bigip.start_transaction bigip admin admin my_transaction
salt.modules.bluez_bluetooth
Support for Bluetooth (using BlueZ in Linux).
The following packages are required for this module: bluez >= 5.7, bluez-libs >= 5.7, bluez-utils >= 5.7, pybluez >= 0.18
salt '*' bluetooth.address
salt '*' bluetooth.block DE:AD:BE:EF:CA:FE
salt '*' bluetooth.discoverable hci0
salt '*' bluetooth.noscan hci0
salt '*' bluetooth.pair DE:AD:BE:EF:CA:FE 1234 Where DE:AD:BE:EF:CA:FE is the address of the device to pair with, and 1234 is the passphrase. TODO: This function is currently broken, as the bluez-simple-agent program no longer ships with BlueZ >= 5.0. It needs to be refactored.
salt '*' bluetooth.power hci0 on salt '*' bluetooth.power hci0 off
salt '*' bluetooth.scan
salt '*' bluetooth.start
salt '*' bluetooth.stop
salt '*' bluetooth.unblock DE:AD:BE:EF:CA:FE
salt '*' bluetooth.unpair DE:AD:BE:EF:CA:FE Where DE:AD:BE:EF:CA:FE is the address of the device to unpair. TODO: This function is currently broken, as the bluez-simple-agent program no longer ships with BlueZ >= 5.0. It needs to be refactored.
salt '*' bluetoothd.version
salt.modules.boto3_elasticache
Execution module for Amazon Elasticache using boto3
New in version 2017.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:
elasticache.keyid: GKTADJGHEIQSXMKKRBJ08H
elasticache.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
A region may also be specified in the configuration:
elasticache.region: us-east-1
If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: myprofile:
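A sketch of such a profile block, using the same placeholder credentials as above:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1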
salt myminion boto3_elasticache.add_tags_to_resource name='arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot' Tags="[{'Key': 'TeamOwner', 'Value': 'infrastructure'}]"
salt myminion boto3_elasticache.authorize_cache_security_group_ingress mycachesecgrp EC2SecurityGroupName=someEC2sg EC2SecurityGroupOwnerId=SOMEOWNERID
salt myminion boto3_elasticache.cache_cluster_exists myelasticache
salt myminion boto3_elasticache.cache_security_group_exists mysecuritygroup
salt myminion boto3_elasticache.cache_subnet_group_exists my-subnet-group
salt myminion boto3_elasticache.copy_snapshot name=mySnapshot TargetSnapshotName=copyOfMySnapshot
salt myminion boto3_elasticache.create_cache_cluster name=myCacheCluster Engine=redis CacheNodeType=cache.t2.micro NumCacheNodes=1 SecurityGroupIds='[sg-11223344]' CacheSubnetGroupName=myCacheSubnetGroup
salt myminion boto3_elasticache.create_cache_parameter_group name=myParamGroup CacheParameterGroupFamily=redis2.8 Description="My Parameter Group"
salt myminion boto3_elasticache.create_cache_security_group mycachesecgrp Description='My Cache Security Group'
salt myminion boto3_elasticache.create_cache_subnet_group name=my-subnet-group CacheSubnetGroupDescription="description" subnets='[myVPCSubnet1,myVPCSubnet2]'
salt myminion boto3_elasticache.create_replication_group name=myelasticache ReplicationGroupDescription=description
salt myminion boto3_elasticache.delete myelasticache
salt myminion boto3_elasticache.delete_cache_parameter_group myParamGroup
salt myminion boto3_elasticache.delete_cache_security_group myelasticachesg
salt myminion boto3_elasticache.delete_subnet_group my-subnet-group region=us-east-1
salt myminion boto3_elasticache.delete_replication_group my-replication-group
salt myminion boto3_elasticache.describe_cache_clusters salt myminion boto3_elasticache.describe_cache_clusters myelasticache
salt myminion boto3_elasticache.describe_cache_parameter_groups salt myminion boto3_elasticache.describe_cache_parameter_groups myParameterGroup
salt myminion boto3_elasticache.describe_cache_security_groups salt myminion boto3_elasticache.describe_cache_security_groups mycachesecgrp
salt myminion boto3_elasticache.describe_cache_subnet_groups region=us-east-1
salt myminion boto3_elasticache.describe_replication_groups salt myminion boto3_elasticache.describe_replication_groups myelasticache
salt myminion boto3_elasticache.list_cache_subnet_groups region=us-east-1
salt myminion boto3_elasticache.list_tags_for_resource name='arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot'
Example: salt myminion boto3_elasticache.modify_cache_cluster name=myCacheCluster NotificationTopicStatus=inactive
salt myminion boto3_elasticache.modify_cache_subnet_group name=my-subnet-group subnets='[myVPCSubnet3]'
salt myminion boto3_elasticache.modify_replication_group name=myelasticache ReplicationGroupDescription=newDescription
salt myminion boto3_elasticache.remove_tags_from_resource name='arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot' TagKeys="['TeamOwner']"
salt myminion boto3_elasticache.replication_group_exists myelasticache
salt myminion boto3_elasticache.revoke_cache_security_group_ingress mycachesecgrp EC2SecurityGroupName=someEC2sg EC2SecurityGroupOwnerId=SOMEOWNERID
salt.modules.boto3_elasticsearch
Connection module for Amazon Elasticsearch Service
New in version 3001.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:
es.keyid: GKTADJGHEIQSXMKKRBJ08H
es.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
A region may also be specified in the configuration:
es.region: us-east-1
If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: myprofile:
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.add_tags domain_name=mydomain tags='{"foo": "bar", "baz": "qux"}'
New in version 3001.
Behind the scenes, this does 3 things:
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.check_upgrade_eligibility mydomain '6.7'
The value assigned to each key is a dict with the following case sensitive keys:
Note: Not all instance types allow enabling encryption at rest. See https://docs.aws.amazon.com /elasticsearch-service/latest/developerguide/aes-supported-instance-types.html
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.create_elasticsearch_domain mydomain \
elasticsearch_cluster_config='{ \
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.describe_elasticsearch_domains '["domain_a", "domain_b"]'
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.describe_elasticsearch_instance_type_limits \
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001.
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.remove_tags '["foo", "bar"]' domain_name=my_domain
New in version 3001.
INDEX_SLOW_LOGS, SEARCH_SLOW_LOGS,
ES_APPLICATION_LOGS.
The value assigned to each key is a dict with the following case sensitive keys:
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.update_elasticsearch_domain_config mydomain \
New in version 3001. CLI Example: salt myminion boto3_elasticsearch.upgrade_elasticsearch_domain mydomain \ target_version='6.7' \ perform_check_only=True
New in version 3001.
salt.modules.boto3_route53
Execution module for Amazon Route53 written against Boto 3
New in version 2017.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:
route53.keyid: GKTADJGHEIQSXMKKRBJ08H
route53.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
A region may also be specified in the configuration:
route53.region: us-east-1
It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: myprofile:
Note that Route53 essentially ignores all (valid) settings for 'region', since there is only one Endpoint (in us-east-1 if you care) and any (valid) region setting will just send you there. It is entirely safe to set it to None as well.
CLI Example: salt myminion boto3_route53.associate_vpc_with_hosted_zone Name=example.org. VPCName=myVPC VPCRegion=us-east-1 Comment="Whoo-hoo! I added another VPC."
{
CLI Example: foo='{
CLI Example: salt myminion boto3_route53.create_hosted_zone example.org.
salt myminion boto3_route53.delete_hosted_zone Z1234567890
salt myminion boto3_route53.delete_hosted_zone_by_domain example.org.
CLI Example: salt myminion boto3_route53.disassociate_vpc_from_hosted_zone Name=example.org. VPCName=myVPC VPCRegion=us-east-1 Comment="Whoops! Don't wanna talk to this-here zone no more."
CLI Example: salt myminion boto3_route53.find_hosted_zone Name=salt.org. profile='{"region": "us-east-1", "keyid": "A12345678AB", "key": "xblahblahblah"}'
CLI Example: salt myminion boto3_route53.get_hosted_zone Z1234567690 profile='{"region": "us-east-1", "keyid": "A12345678AB", "key": "xblahblahblah"}'
CLI Example: salt myminion boto3_route53.get_hosted_zones_by_domain salt.org. profile='{"region": "us-east-1", "keyid": "A12345678AB", "key": "xblahblahblah"}'
salt myminion boto3_route53.get_records test.example.org example.org A
CLI Example: salt myminion boto3_route53.describe_hosted_zones profile='{"region": "us-east-1", "keyid": "A12345678AB", "key": "xblahblahblah"}'
CLI Example: salt myminion boto3_route53.update_hosted_zone_comment Name=example.org. Comment="This is an example comment for an example zone"
salt.modules.boto3_sns
Connection module for Amazon SNS
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:
sns.keyid: GKTADJGHEIQSXMKKRBJ08H
sns.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
A region may also be specified in the configuration:
sns.region: us-east-1
If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: myprofile:
salt myminion boto3_sns.create_topic mytopic region=us-east-1
salt myminion boto3_sns.delete_topic mytopic region=us-east-1
salt my_favorite_client boto3_sns.describe_topic a_sns_topic_of_my_choice
salt myminion boto3_sns.get_subscription_attributes somesubscription region=us-west-1
salt myminion boto3_sns.get_topic_attributes someTopic region=us-west-1
salt myminion boto3_sns.list_subscriptions region=us-east-1
salt myminion boto3_sns.list_subscriptions_by_topic mytopic region=us-east-1
salt myminion boto3_sns.list_topics
salt myminion boto3_sns.set_subscription_attributes someSubscription RawMessageDelivery jsonStringValue
salt myminion boto3_sns.set_topic_attributes someTopic DisplayName myDisplayNameValue
salt myminion boto3_sns.subscribe mytopic https https://www.example.com/sns-endpoint
salt myminion boto3_sns.topic_exists mytopic region=us-east-1
salt myminion boto3_sns.unsubscribe my_subscription_arn region=us-east-1
salt.modules.boto_apigateway
Connection module for Amazon APIGateway
New in version 2016.11.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:
apigateway.keyid: GKTADJGHEIQSXMKKRBJ08H
apigateway.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
A region may also be specified in the configuration:
apigateway.region: us-west-2
If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: myprofile:
Changed in version 2015.8.0: All methods now return a dictionary. Create and delete methods return created: true on success, or created: false along with an error key describing the failure. Request methods (e.g., describe_apigateway) return the requested data under an apigateway key, or an error key on failure.
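A sketch of the two create/delete return shapes described above (the message text is illustrative):

    created: true

    created: false
    error:
      message: error message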
salt myminion boto_apigateway.activate_api_deployment restApiId stagename deploymentId
salt myminion boto_apigateway.exists myapi_name
salt myminion boto_apigateway.api_model_exists restApiId modelName
salt myminion boto_apigateway.associate_stagekeys_api_key \
CLI Example: salt myminion boto_apigateway.attach_usage_plan_to_apis plan_id='usage plan id' apis='[{"apiId": "some id 1", "stage": "some stage 1"}]'
salt myminion boto_apigateway.create_api myapi_name api_description
salt myminion boto_apigateway.create_api_deployment restApiId stagename stageDescription='' \
description='' cacheClusterEnabled=True|False cacheClusterSize=0.5 variables='{"name": "value"}'
salt myminion boto_apigateway.create_api_integration restApiId resourcePath httpMethod \
salt myminion boto_apigateway.create_api_integration_response restApiId resourcePath httpMethod \
salt myminion boto_apigateway.create_api_key name description salt myminion boto_apigateway.create_api_key name description enabled=False salt myminion boto_apigateway.create_api_key name description \
salt myminion boto_apigateway.create_api_method restApiId resourcePath, httpMethod, authorizationType, \
salt myminion boto_apigateway.create_api_method_response restApiId resourcePath httpMethod \
salt myminion boto_apigateway.create_api_model restApiId modelName modelDescription '<schema>' 'content-type'
salt myminion boto_apigateway.create_api_resources myapi_id resource_path
salt myminion boto_apigateway.create_api_stage restApiId stagename deploymentId \
CLI Example: salt myminion boto_apigateway.create_usage_plan name='usage plan name' throttle='{"rateLimit": 10.0, "burstLimit": 10}'
salt myminion boto_apigateway.delete_api myapi_name
salt myminion boto_apigateway.delete_api myapi_name description='api description'
salt myminion boto_apigateway.delete_api_deployment restApiId deploymentId
salt myminion boto_apigateway.delete_api_integration restApiId resourcePath httpMethod
salt myminion boto_apigateway.delete_api_integration_response restApiId resourcePath httpMethod statusCode
salt myminion boto_apigateway.delete_api_key apikeystring
salt myminion boto_apigateway.delete_api_method restApiId resourcePath httpMethod
salt myminion boto_apigateway.delete_api_method_response restApiId resourcePath httpMethod statusCode
salt myminion boto_apigateway.delete_api_model restApiId modelName
salt myminion boto_apigateway.delete_api_resources myapi_id resource_path
salt myminion boto_apigateway.delete_api_stage restApiId stageName
salt myminion boto_apigateway.delete_usage_plan plan_id='usage plan id'
salt myminion boto_apigateway.describe_api_deployment restApiId deploymentId
salt myminion boto_apigateway.describe_api_deployments restApiId
salt myminion boto_apigateway.describe_api_integration restApiId resourcePath httpMethod
salt myminion boto_apigateway.describe_api_integration_response restApiId resourcePath httpMethod statusCode
salt myminion boto_apigateway.describe_api_key apigw_api_key
salt myminion boto_apigateway.describe_api_keys
salt myminion boto_apigateway.describe_api_method restApiId resourcePath httpMethod
salt myminion boto_apigateway.describe_api_method_response restApiId resourcePath httpMethod statusCode
salt myminion boto_apigateway.describe_api_model restApiId modelName [True]
salt myminion boto_apigateway.describe_api_models restApiId
salt myminion boto_apigateway.describe_api_resource myapi_id resource_path
salt myminion boto_apigateway.describe_api_resource_method myapi_id resource_path httpmethod
salt myminion boto_apigateway.describe_api_resources myapi_id
salt myminion boto_apigateway.describe_api_stage restApiId stageName
salt myminion boto_apigateway.describe_api_stages restApiId deploymentId
salt myminion boto_apigateway.describe_apis
salt myminion boto_apigateway.describe_apis name='api name'
salt myminion boto_apigateway.describe_apis name='api name' description='desc str'
salt myminion boto_apigateway.describe_usage_plans
salt myminion boto_apigateway.describe_usage_plans name='usage plan name'
salt myminion boto_apigateway.describe_usage_plans plan_id='usage plan id'
CLI Example: salt myminion boto_apigateway.detach_usage_plan_from_apis plan_id='usage plan id' apis='[{"apiId": "some id 1", "stage": "some stage 1"}]'
salt myminion boto_apigateway.disable_api_key api_key
salt myminion boto_apigateway.disassociate_stagekeys_api_key \
salt myminion boto_apigateway.enable_api_key api_key
salt myminion boto_apigateway.flush_api_stage_cache restApiId stageName
salt myminion boto_apigateway.overwrite_api_stage_variables restApiId stageName variables='{"name": "value"}'
salt myminion boto_apigateway.update_api_key_description api_key description
salt myminion boto_apigateway.update_api_model_schema restApiId modelName schema
CLI Example: salt myminion boto_apigateway.update_usage_plan plan_id='usage plan id' throttle='{"rateLimit": 10.0, "burstLimit": 10}'
salt.modules.boto_asg
Connection module for Amazon Autoscale Groups
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    asg.keyid: GKTADJGHEIQSXMKKRBJ08H
    asg.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    asg.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1

salt myminion boto_asg.create myasg mylc '["us-east-1a", "us-east-1e"]' 1 10 load_balancers='["myelb", "myelb2"]' tags='[{"key": "Name", "value": "myasg", "propagate_at_launch": True}]'
salt myminion boto_asg.create_launch_configuration mylc image_id=ami-0b9c9f62 key_name='mykey' security_groups='["mygroup"]' instance_type='c3.2xlarge'
salt myminion boto_asg.delete myasg region=us-east-1
salt myminion boto_asg.delete_launch_configuration mylc
salt myminion boto_asg.describe_launch_configuration mylc
salt-call boto_asg.enter_standby my_autoscale_group_name '["i-xxxxxx"]'
salt myminion boto_asg.exists myasg region=us-east-1
salt-call boto_asg.exit_standby my_autoscale_group_name '["i-xxxxxx"]'
salt-call boto_asg.get_all_groups region=us-east-1 --output yaml
salt myminion boto_asg.get_all_launch_configurations
salt myminion boto_asg.get_cloud_init_mime <cloud init>
salt myminion boto_asg.get_config myasg region=us-east-1
salt-call boto_asg.get_instances my_autoscale_group_name
salt '*' boto_asg.get_scaling_policy_arn mygroup mypolicy
salt myminion boto_asg.launch_configuration_exists mylc
salt-call boto_asg.list_groups region=us-east-1
salt myminion boto_asg.list_launch_configurations
salt myminion boto_asg.update myasg mylc '["us-east-1a", "us-east-1e"]' 1 10 load_balancers='["myelb", "myelb2"]' tags='[{"key": "Name", "value": "myasg", "propagate_at_launch": True}]'
salt.modules.boto_cfn
Connection module for Amazon CloudFormation
New in version 2015.5.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cfn.keyid: GKTADJGHEIQSXMKKRBJ08H
    cfn.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cfn.region: us-east-1
salt myminion boto_cfn.create mystack template_url='https://s3.amazonaws.com/bucket/template.cft' region=us-east-1
salt myminion boto_cfn.delete mystack region=us-east-1
salt myminion boto_cfn.describe mystack region=us-east-1
salt myminion boto_cfn.exists mystack region=us-east-1
salt myminion boto_cfn.get_template mystack
salt myminion boto_cfn.update_stack mystack template_url='https://s3.amazonaws.com/bucket/template.cft' region=us-east-1
salt myminion boto_cfn.validate_template mystack-template

salt.modules.boto_cloudfront
Connection module for Amazon CloudFront
New in version 2018.3.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cloudfront.keyid: GKTADJGHEIQSXMKKRBJ08H
    cloudfront.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cloudfront.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
CLI Example: salt myminion boto_cloudfront.create_distribution name=mydistribution profile=awsprofile config='{"Comment":"partial configuration","Enabled":true}'
salt-call boto_cloudfront.export_distributions --out=txt | sed "s/local: //" > cloudfront_distributions.sls
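The export command above writes the rendered distributions out as an SLS file. As a follow-up sketch (assuming you move the generated cloudfront_distributions.sls into your file roots, e.g. /srv/salt), the exported state could then be applied with the standard state runner:

    salt myminion state.apply cloudfront_distributions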
CLI Example: salt myminion boto_cloudfront.get_distribution name=mydistribution profile=awsprofile
CLI Example: salt myminion boto_cloudfront.update_distribution name=mydistribution profile=awsprofile config='{"Comment":"partial configuration","Enabled":true}'
salt.modules.boto_cloudtrail
Connection module for Amazon CloudTrail
New in version 2016.3.0.
The dependencies listed above can be installed via package or pip.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cloudtrail.keyid: GKTADJGHEIQSXMKKRBJ08H
    cloudtrail.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cloudtrail.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_cloudtrail.add_tags my_trail tag_a=tag_value tag_b=tag_value
salt myminion boto_cloudtrail.create my_trail my_bucket
salt myminion boto_cloudtrail.delete mytrail
salt myminion boto_cloudtrail.describe mytrail
salt myminion boto_cloudtrail.exists mytrail
policies:
CLI Example: salt myminion boto_cloudtrail.list_tags my_trail
salt myminion boto_cloudtrail.remove_tags my_trail tag_a=tag_value tag_b=tag_value
salt myminion boto_cloudtrail.start_logging my_trail
salt myminion boto_cloudtrail.status mytrail
salt myminion boto_cloudtrail.stop_logging my_trail
salt myminion boto_cloudtrail.update my_trail my_bucket

salt.modules.boto_cloudwatch
Connection module for Amazon CloudWatch
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cloudwatch.keyid: GKTADJGHEIQSXMKKRBJ08H
    cloudwatch.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cloudwatch.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt '*' boto_cloudwatch.convert_to_arn 'scaling_policy:'
Dimensions must be a dict. If the value of Dimensions is a string, it will be JSON-decoded to produce a dict. alarm_actions, insufficient_data_actions, and ok_actions must be lists of strings. If the passed-in value is a string, it will be split on "," to produce a list. The strings themselves for alarm_actions, insufficient_data_actions, and ok_actions must be Amazon Resource Names (ARNs); however, this method also supports an ARN lookup notation, as follows:

    arn:aws:....
        An ARN as per http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
    scaling_policy:<as_name>:<scaling_policy_name>
        The named autoscale group scaling policy, for the named group (e.g. scaling_policy:my-asg:ScaleDown)

This is convenient for setting up autoscaling as follows: first specify a boto_asg.present state for an ASG with scaling_policies, and then set up boto_cloudwatch_alarm.present states which have alarm_actions that reference the scaling_policy (see the state sketch below). CLI Example: salt myminion boto_cloudwatch.create_alarm name=myalarm ... region=us-east-1
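A minimal SLS sketch of that wiring, assuming the boto_asg.present and boto_cloudwatch_alarm.present state interfaces; all names, thresholds, and policy settings here are illustrative, not taken from the module docs:

    my-asg:
      boto_asg.present:
        - launch_config_name: mylc
        - availability_zones:
          - us-east-1a
          - us-east-1e
        - min_size: 1
        - max_size: 10
        - scaling_policies:
          - name: ScaleDown
            adjustment_type: ChangeInCapacity
            scaling_adjustment: -1

    cpu-idle-alarm:
      boto_cloudwatch_alarm.present:
        - name: my-asg-cpu-idle
        - attributes:
            metric: CPUUtilization
            namespace: AWS/EC2
            statistic: Average
            comparison: '<='
            threshold: 10.0
            period: 300
            evaluation_periods: 4
            description: scale down when CPU is mostly idle
            dimensions:
              AutoScalingGroupName:
                - my-asg
            alarm_actions:
              - 'scaling_policy:my-asg:ScaleDown'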
salt myminion boto_cloudwatch.delete_alarm myalarm region=us-east-1
salt myminion boto_cloudwatch.get_alarm myalarm region=us-east-1
CLI Example: salt myminion boto_cloudwatch.get_all_alarms region=us-east-1 --out=txt salt.modules.boto_cloudwatch_eventConnection module for Amazon CloudWatch Events New in version 2016.11.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cloudwatch_event.keyid: GKTADJGHEIQSXMKKRBJ08H
    cloudwatch_event.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cloudwatch_event.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_cloudwatch_event.create_or_update my_rule
salt myminion boto_cloudwatch_event.delete myrule
salt myminion boto_cloudwatch_event.describe myrule
salt myminion boto_cloudwatch_event.exists myevent region=us-east-1
salt myminion boto_cloudwatch_event.list_rules region=us-east-1
salt myminion boto_cloudwatch_event.list_targets myrule
salt myminion boto_cloudwatch_event.put_targets myrule [{'Id': 'target1', 'Arn': 'arn:***'}]
salt myminion boto_cloudwatch_event.remove_targets myrule ['Target1']

salt.modules.boto_cognitoidentity
Connection module for Amazon CognitoIdentity
New in version 2016.11.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    cognitoidentity.keyid: GKTADJGHEIQSXMKKRBJ08H
    cognitoidentity.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    cognitoidentity.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1

Changed in version 2015.8.0: All methods now return a dictionary. Create, delete, set, and update methods return:

    created: true

or

    created: false
    error:

Request methods (e.g., describe_identity_pools) return:

    identity_pools:

or

    error:
salt myminion boto_cognitoidentity.create_identity_pool my_id_pool_name DeveloperProviderName=custom_developer_provider
salt myminion boto_cognitoidentity.delete_identity_pools my_id_pool_name
salt myminion boto_cognitoidentity.delete_identity_pools '' IdentityPoolId=my_id_pool_id
salt myminion boto_cognitoidentity.describe_identity_pools my_id_pool_name
salt myminion boto_cognitoidentity.describe_identity_pools '' IdentityPoolId=my_id_pool_id
salt myminion boto_cognitoidentity.get_identity_pool_roles my_id_pool_name
salt myminion boto_cognitoidentity.get_identity_pool_roles '' IdentityPoolId=my_id_pool_id
salt myminion boto_cognitoidentity.set_identity_pool_roles my_id_pool_id  # this clears the roles
salt myminion boto_cognitoidentity.set_identity_pool_roles my_id_pool_id AuthenticatedRole=my_auth_role UnauthenticatedRole=my_unauth_role  # this sets both roles
salt myminion boto_cognitoidentity.set_identity_pool_roles my_id_pool_id AuthenticatedRole=my_auth_role  # this will set the auth role and clear the unauth role
salt myminion boto_cognitoidentity.set_identity_pool_roles my_id_pool_id UnauthenticatedRole=my_unauth_role  # this will set the unauth role and clear the auth role
salt myminion boto_cognitoidentity.update_identity_pool my_id_pool_id my_id_pool_name DeveloperProviderName=custom_developer_provider

salt.modules.boto_datapipeline
Connection module for Amazon Data Pipeline
New in version 2016.3.0.
salt myminion boto_datapipeline.activate_pipeline my_pipeline_id
salt myminion boto_datapipeline.create_pipeline my_name my_unique_id
salt myminion boto_datapipeline.delete_pipeline my_pipeline_id
salt myminion boto_datapipeline.describe_pipelines ['my_pipeline_id']
salt myminion boto_datapipeline.get_pipeline_definition my_pipeline_id
salt myminion boto_datapipeline.list_pipelines profile=myprofile
salt myminion boto_datapipeline.pipeline_id_from_name my_pipeline_name
salt myminion boto_datapipeline.put_pipeline_definition my_pipeline_id my_pipeline_objects

salt.modules.boto_dynamodb
Connection module for Amazon DynamoDB
New in version 2015.5.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    keyid: GKTADJGHEIQSXMKKRBJ08H
    key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_dynamodb.create_global_secondary_index table_name index_name
salt myminion boto_dynamodb.create_table table_name \
        region=us-east-1 hash_key=id hash_key_data_type=N \
        range_key=created_at range_key_data_type=N \
        read_capacity_units=1 write_capacity_units=1
salt myminion boto_dynamodb.delete table_name region=us-east-1
salt myminion boto_dynamodb.describe table_name region=us-east-1
salt myminion boto_dynamodb.exists table_name region=us-east-1
salt myminion boto_dynamodb.extract_index index
salt myminion boto_dynamodb.list_tags_of_resource resource_arn=arn:aws:dynamodb:us-east-1:012345678901:table/my-table

New in version 3006.0.
salt myminion boto_dynamodb.tag_resource resource_arn=arn:aws:dynamodb:us-east-1:012345678901:table/my-table tags='{Name: my-table, Owner: Ops}'
New in version 3006.0.
salt myminion boto_dynamodb.untag_resource resource_arn=arn:aws:dynamodb:us-east-1:012345678901:table/my-table tag_keys='[Name, Owner]'

New in version 3006.0.
salt myminion boto_dynamodb.update table_name region=us-east-1
salt myminion boto_dynamodb.update_global_secondary_index table_name indexes

salt.modules.boto_ec2
Connection module for Amazon EC2
New in version 2015.8.0.
If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    ec2.keyid: GKTADJGHEIQSXMKKRBJ08H
    ec2.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    ec2.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid, and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
CLI Example: salt-call boto_ec2.allocate_eip_address domain=vpc

New in version 2016.3.0.
CLI Example:
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni secondary_private_ip_address_count=2

New in version 2017.7.0.
CLI Example: salt myminion boto_ec2.associate_eip_address instance_name=bubba.ho.tep allocation_id=eipalloc-ef382c8a

New in version 2016.3.0.
salt myminion boto_ec2.attach_network_interface my_eni instance_name=salt-master device_index=0
CLI Example: salt-call boto_ec2.attach_volume vol-12345678 i-87654321 /dev/sdh
salt myminion boto_ec2.create_image ami_name instance_name=myinstance
salt myminion boto_ec2.create_image another_ami_name tags='{"mytag": "value"}' description='this is my ami'
salt myminion boto_ec2.create_key mykey /root/
salt myminion boto_ec2.create_network_interface my_eni subnet-12345 description=my_eni groups=['my_group']
CLI Example: salt-call boto_ec2.create_tags vol-12345678 '{"Name": "myVolume01"}'
CLI Example:
salt-call boto_ec2.create_volume us-east-1a size=10
salt-call boto_ec2.create_volume us-east-1a snapshot_id=snap-0123abcd
salt myminion boto_ec2.delete_key mykey
salt myminion boto_ec2.delete_network_interface name=my_eni
CLI Example: salt-call boto_ec2.delete_tags vol-12345678 '{"Name": "myVolume01"}'
salt-call boto_ec2.delete_tags vol-12345678 '["Name","MountPoint"]'
CLI Example: salt-call boto_ec2.delete_volume vol-12345678
salt myminion boto_ec2.detach_network_interface my_eni
CLI Example: salt-call boto_ec2.detach_volume vol-12345678 i-87654321
CLI Example: salt myminion boto_ec2.disassociate_eip_address association_id=eipassoc-e3ba2d16

New in version 2016.3.0.
salt myminion boto_ec2.exists myinstance
salt myminion boto_ec2.find_images tags='{"mytag": "value"}'
salt myminion boto_ec2.find_instances # Lists all instances
salt myminion boto_ec2.find_instances name=myinstance
salt myminion boto_ec2.find_instances tags='{"mytag": "value"}'
salt myminion boto_ec2.find_instances filters='{"vpc-id": "vpc-12345678"}'
CLI Example: salt-call boto_ec2.get_all_eip_addresses

New in version 2016.3.0.
CLI Example: salt-call boto_ec2.get_all_tags '{"tag:Name": myInstanceNameTag, resource-type: instance}'
CLI Example: salt-call boto_ec2.get_all_volumes filters='{"tag:Name": "myVolume01"}'
salt myminion boto_ec2.get_attribute sourceDestCheck instance_name=my_instance
CLI Example: salt-call boto_ec2.get_eip_address_info addresses=52.4.2.15

New in version 2016.3.0.
salt myminion boto_ec2.get_id myinstance
salt myminion boto_ec2.get_key mykey
salt myminion boto_ec2.get_keys
salt myminion boto_ec2.get_network_interface name=my_eni
salt myminion boto_ec2.get_network_interface_id name=my_eni
CLI Example: salt myminion boto_ec2.get_tags instance_id
CLI Example: salt-call boto_ec2.get_unassociated_eip_address

New in version 2016.3.0.
salt myminion boto_ec2.get_zones
salt myminion boto_ec2.import mykey publickey
salt myminion boto_ec2.modify_network_interface_attribute my_eni attr=description value='example description'
CLI Example: salt myminion boto_ec2.release_eip_address allocation_id=eipalloc-ef382c8a

New in version 2016.3.0.
salt myminion boto_ec2.run ami-b80c2b87 name=myinstance
device-maps:
salt myminion boto_ec2.set_attribute sourceDestCheck False instance_name=my_instance
YAML example fragment: - filters:
salt myminion boto_ec2.terminate name=myinstance
salt myminion boto_ec2.terminate instance_id=i-a46b9f
CLI Example: salt myminion boto_ec2.unassign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip

New in version 2017.7.0.

salt.modules.boto_efs
Connection module for Amazon EFS
New in version 2017.7.0.
http://docs.aws.amazon.com/efs/latest/ug/

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    efs.keyid: GKTADJGHEIQSXMKKRBJ08H
    efs.key: askd+ghsdfjkghWupU/asdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    efs.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid, and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askd+ghsdfjkghWupU/asdflkdfklgjsdfjajkghs
      region: us-east-1
CLI Example: salt 'my-minion' boto_efs.create_file_system efs-name generalPurpose
CLI Example: salt 'my-minion' boto_efs.create_mount_target filesystemid subnetid
CLI Example: salt 'my-minion' boto_efs.create_tags
CLI Example: salt 'my-minion' boto_efs.delete_file_system filesystemid
CLI Example: salt 'my-minion' boto_efs.delete_mount_target mounttargetid
CLI Example: salt 'my-minion' boto_efs.delete_tags
CLI Example: salt 'my-minion' boto_efs.get_file_systems efs-id
CLI Example: salt 'my-minion' boto_efs.get_mount_targets
CLI Example: salt 'my-minion' boto_efs.get_tags efs-id
CLI Example: salt 'my-minion' boto_efs.set_security_groups my-mount-target-id my-sec-group

salt.modules.boto_elasticache
Connection module for Amazon Elasticache
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    elasticache.keyid: GKTADJGHEIQSXMKKRBJ08H
    elasticache.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    elasticache.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_elasticache.authorize_cache_security_group_ingress myelasticachesg myec2sg 879879
salt myminion boto_elasticache.create myelasticache 1 redis cache.t1.micro cache_security_group_names='["myelasticachesg"]'
salt myminion boto_elasticache.create_cache_security_group myelasticachesg 'My Cache Security Group'
salt myminion boto_elasticache.create_replication_group myelasticache myprimarycluster description
salt myminion boto_elasticache.create_subnet_group my-subnet-group "group description" subnet_ids='[subnet-12345678, subnet-87654321]' region=us-east-1
salt myminion boto_elasticache.delete myelasticache
salt myminion boto_elasticache.delete_cache_security_group myelasticachesg 'My Cache Security Group'
salt myminion boto_elasticache.delete_replication_group my-replication-group region=us-east-1
salt myminion boto_elasticache.delete_subnet_group my-subnet-group region=us-east-1
salt myminion boto_elasticache.describe_replication_group mygroup
salt myminion boto_elasticache.exists myelasticache
salt myminion boto_elasticache.get_all_subnet_groups region=us-east-1
salt myminion boto_elasticache.get_cache_subnet_group mycache_subnet_group
salt myminion boto_elasticache.get_config myelasticache
salt myminion boto_elasticache.get_group_host myelasticachegroup
salt myminion boto_elasticache.get_node_host myelasticache
salt myminion boto_elasticache.group_exists myelasticache
salt myminion boto_elasticache.list_subnet_groups region=us-east-1
salt myminion boto_elasticache.revoke_cache_security_group_ingress myelasticachesg myec2sg 879879
salt myminion boto_elasticache.subnet_group_exists my-param-group region=us-east-1

salt.modules.boto_elasticsearch_domain
Connection module for Amazon Elasticsearch Service
New in version 2016.11.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    lambda.keyid: GKTADJGHEIQSXMKKRBJ08H
    lambda.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    lambda.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1

Create and delete methods return:

    created: true

or

    created: false
    error:

Request methods (e.g., describe) return:

    domain:

or

    error:
salt myminion boto_elasticsearch_domain.add_tags mydomain tag_a=tag_value tag_b=tag_value
salt myminion boto_elasticsearch_domain.create mydomain \
salt myminion boto_elasticsearch_domain.delete mydomain
salt myminion boto_elasticsearch_domain.describe mydomain
salt myminion boto_elasticsearch_domain.exists mydomain
CLI Example: salt myminion boto_elasticsearch_domain.list_tags mydomain
salt myminion boto_elasticsearch_domain.remove_tags mydomain tag_a=tag_value tag_b=tag_value
salt myminion boto_elasticsearch_domain.status mydomain
salt myminion boto_elasticsearch_domain.update mydomain \

salt.modules.boto_elb
Connection module for Amazon ELB
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    elb.keyid: GKTADJGHEIQSXMKKRBJ08H
    elb.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    elb.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_elb.apply_security_groups myelb '["mysecgroup1"]'
salt myminion boto_elb.attach_subnets myelb '["mysubnet"]'
salt myminion boto_elb.create myelb '["us-east-1a", "us-east-1e"]' '{"elb_port": 443, "elb_protocol": "HTTPS", ...}' region=us-east-1
salt myminion boto_elb.create_listeners myelb '[["HTTPS", "HTTP", 443, 80, "arn:aws:iam::1111111:server-certificate/mycert"]]'
salt myminion boto_elb.create_policy myelb mypolicy LBCookieStickinessPolicyType '{"CookieExpirationPeriod": 3600}'
salt myminion boto_elb.delete myelb region=us-east-1
salt myminion boto_elb.delete_listeners myelb '[80,443]'
salt myminion boto_elb.delete_policy myelb mypolicy
CLI Example: salt myminion boto_elb.delete_tags my-elb-name ['TagToRemove1', 'TagToRemove2']
CLI Example:
salt myminion boto_elb.deregister_instances myelb instance_id
salt myminion boto_elb.deregister_instances myelb "[instance_id, instance_id]"
salt myminion boto_elb.detach_subnets myelb '["mysubnet"]'
salt myminion boto_elb.disable_availability_zones myelb '["us-east-1a"]'
salt myminion boto_elb.enable_availability_zones myelb '["us-east-1a"]'
salt myminion boto_elb.exists myelb region=us-east-1
salt myminion boto_elb.get_all_elbs region=us-east-1
salt myminion boto_elb.get_attributes myelb
salt myminion boto_elb.get_elb_config myelb region=us-east-1
salt myminion boto_elb.get_health_check myelb
salt myminion boto_elb.get_instance_health myelb
salt myminion boto_elb.get_instance_health myelb region=us-east-1 instances="[instance_id,instance_id]"
salt myminion boto_elb.list_elbs region=us-east-1
salt myminion boto_elb.listener_dict_to_tuple '{"elb_port":80,"instance_port":80,"elb_protocol":"HTTP"}'
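For reference, and as an assumption about the shape rather than a quote from the module docs: boto represents an ELB listener as a tuple of the form (elb_port, instance_port, elb_protocol[, instance_protocol[, certificate ARN]]), so the call above would yield roughly:

    salt myminion boto_elb.listener_dict_to_tuple '{"elb_port":80,"instance_port":80,"elb_protocol":"HTTP"}'
    # illustrative result: (80, 80, 'HTTP')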
CLI Example:
salt myminion boto_elb.register_instances myelb instance_id
salt myminion boto_elb.register_instances myelb "[instance_id,instance_id]"
CLI example to set attributes on an ELB: salt myminion boto_elb.set_attributes myelb '{"access_log": {"enabled": "true", "s3_bucket_name": "mybucket", "s3_bucket_prefix": "mylogs/", "emit_interval": "5"}}' region=us-east-1
salt myminion boto_elb.set_backend_policy myelb 443 "[policy1,policy2]"
salt myminion boto_elb.set_health_check myelb '{"target": "HTTP:80/"}'
salt myminion boto_elb.set_instances myelb region=us-east-1 instances="[instance_id,instance_id]"
salt myminion boto_elb.set_listener_policy myelb 443 "[policy1,policy2]"
CLI Example: salt myminion boto_elb.set_tags my-elb-name "{'Tag1': 'Value', 'Tag2': 'Another Value'}"
salt.modules.boto_elbv2
Connection module for Amazon ALB
New in version 2017.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    elbv2.keyid: GKTADJGHEIQSXMKKRBJ08H
    elbv2.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
    elbv2.region: us-west-2

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-west-2
CLI Example: salt myminion boto_elbv2.create_target_group learn1give1 protocol=HTTP port=54006 vpc_id=vpc-deadbeef
CLI Example: salt myminion boto_elbv2.delete_target_group arn:aws:elasticloadbalancing:us-west-2:644138682826:targetgroup/learn1give1-api/414788a16b5cf163
CLI Example:
salt myminion boto_elbv2.deregister_targets myelb instance_id
salt myminion boto_elbv2.deregister_targets myelb "[instance_id,instance_id]"
salt myminion boto_elbv2.describe_target_health arn:aws:elasticloadbalancing:us-west-2:644138682826:targetgroup/learn1give1-api/414788a16b5cf163 targets=["i-isdf23ifjf"]
CLI Example:
salt myminion boto_elbv2.register_targets myelb instance_id
salt myminion boto_elbv2.register_targets myelb "[instance_id,instance_id]"
salt myminion boto_elbv2.target_group_exists arn:aws:elasticloadbalancing:us-west-2:644138682826:targetgroup/learn1give1-api/414788a16b5cf163

salt.modules.boto_iam
Connection module for Amazon IAM
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    iam.keyid: GKTADJGHEIQSXMKKRBJ08H
    iam.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
    iam.region: us-east-1

It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_iam.add_user_to_group myuser mygroup
salt myminion boto_iam.associate_profile_to_role myirole myiprofile
salt myminion boto_iam.attach_group_policy mypolicy mygroup
salt myminion boto_iam.attach_role_policy mypolicy myrole
salt myminion boto_iam.attach_user_policy mypolicy myuser
salt myminion boto_iam.build_policy
salt myminion boto_iam.create_access_key myuser
salt myminion boto_iam.create_group group
salt myminion boto_iam.create_instance_profile myiprofile
salt myminion boto_iam.create_login_profile user_name password
salt myminion boto_iam.create_policy mypolicy '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": ["arn:aws:s3:::my-bucket/shared/*"]}]}'
salt myminion boto_iam.create_policy_version mypolicy '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": ["arn:aws:s3:::my-bucket/shared/*"]}]}'
salt myminion boto_iam.create_role myrole
salt myminion boto_iam.create_role_policy myirole mypolicy '{"Statement": [{"Action": ["sqs:*"], "Effect": "Allow", "Resource": ["arn:aws:sqs:*:*:*"], "Sid": "MyPolicySqs1"}]}'
salt myminion boto_iam.create_saml_provider my_saml_provider_name saml_metadata_document
salt myminion boto_iam.create_user myuser
salt myminion boto_iam.deactivate_mfa_device user_name serial_num
salt myminion boto_iam.delete_access_key myuser
salt myminion boto_iam.delete_group mygroup
salt myminion boto_iam.delete_group_policy mygroup mypolicy
salt myminion boto_iam.delete_instance_profile myiprofile
salt myminion boto_iam.delete_login_profile user_name
salt myminion boto_iam.delete_policy mypolicy
salt myminion boto_iam.delete_policy_version mypolicy v1
salt myminion boto_iam.delete_role myirole
salt myminion boto_iam.delete_role_policy myirole mypolicy
salt myminion boto_iam.delete_saml_provider my_saml_provider_name
salt myminion boto_iam.delete_server_cert mycert_name
salt myminion boto_iam.delete_user myuser
salt myminion boto_iam.delete_user_policy myuser mypolicy
salt myminion boto_iam.delete_virtual_mfa_device serial_num
salt myminion boto_iam.describe_role myirole
salt myminion boto_iam.detach_group_policy mypolicy mygroup
salt myminion boto_iam.detach_role_policy mypolicy myrole
salt myminion boto_iam.detach_user_policy mypolicy myuser
salt myminion boto_iam.disassociate_profile_from_role myirole myiprofile
salt-call boto_iam.export_roles --out=txt | sed "s/local: //" > iam_roles.sls
salt-call boto_iam.export_users --out=txt | sed "s/local: //" > iam_users.sls
salt myminion boto_iam.get_account_id
salt myminion boto_iam.get_account_policy
salt myminion boto_iam.get_all_access_keys myuser
salt myminion boto_iam.get_all_group_policies mygroup
salt-call boto_iam.get_all_groups
salt-call boto_iam.get_all_instance_profiles
salt myminion boto_iam.get_all_mfa_devices user_name
salt-call boto_iam.get_all_roles
salt myminion boto_iam.get_all_user_policies myuser
salt-call boto_iam.get_all_users
salt myminion boto_iam.get_group mygroup
salt myminion boto_iam.get_group_policy mygroup policyname
salt myminion boto_iam.get_role_policy myirole mypolicy
salt myminion boto_iam.get_saml_provider arn
salt myminion boto_iam.get_saml_provider_arn my_saml_provider_name
salt myminion boto_iam.get_server_certificate mycert_name
salt myminion boto_iam.get_user myuser
salt myminion boto_iam.get_user_policy myuser mypolicyname
salt myminion boto_iam.instance_profile_exists myiprofile
salt myminion boto_iam.list_entities_for_policy mypolicy
salt-call boto_iam.list_instance_profiles
salt myminion boto_iam.list_policies
salt myminion boto_iam.list_policy_versions mypolicy
salt myminion boto_iam.list_role_policies myirole
salt myminion boto_iam.list_saml_providers
salt myminion boto_iam.profile_associated myirole myiprofile
salt myminion boto_iam.put_group_policy mygroup policyname policyrules
salt myminion boto_iam.put_user_policy myuser policyname policyrules
salt myminion boto_iam.remove_user_from_group mygroup myuser
salt myminion boto_iam.role_exists myirole
salt myminion boto_iam.set_default_policy_version mypolicy v1
salt myminion boto_iam.update_account_password_policy True
salt myminion boto_iam.update_assume_role_policy myrole '{"Statement":"..."}'
salt myminion boto_iam.update_saml_provider my_saml_provider_name saml_metadata_document
salt myminion boto_iam.upload_server_cert mycert_name crt priv_key
salt myminion boto_iam.user_exists_in_group myuser mygroup

salt.modules.boto_iot
Connection module for Amazon IoT
New in version 2016.3.0.
The dependencies listed above can be installed via package or pip.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    iot.keyid: GKTADJGHEIQSXMKKRBJ08H
    iot.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    iot.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_iot.attach_principal_policy mypolicy mycognitoID
salt myminion boto_iot.create_policy my_policy \
salt myminion boto_iot.create_policy_version my_policy \
salt myminion boto_iot.create_thing_type mythingtype \
salt myminion boto_iot.create_topic_rule my_rule "SELECT * FROM 'some/thing'" \
salt myminion boto_iot.delete_policy mypolicy
salt myminion boto_iot.delete_policy_version mypolicy version
salt myminion boto_iot.delete_thing_type mythingtype
salt myminion boto_iot.delete_topic_rule myrule
salt myminion boto_iot.deprecate_thing_type mythingtype
salt myminion boto_iot.describe_policy mypolicy
salt myminion boto_iot.describe_policy_version mypolicy version
salt myminion boto_iot.describe_thing_type mythingtype
salt myminion boto_iot.describe_topic_rule myrule
salt myminion boto_iot.detach_principal_policy mypolicy mycognitoID
salt myminion boto_iot.list_policies
Example Return:
    policies:
salt myminion boto_iot.list_policy_versions mypolicy
Example Return:
    policyVersions:
salt myminion boto_iot.list_principal_policies myprincipal
Example Return:
    policies:
salt myminion boto_iot.list_topic_rules
Example Return:
    rules:
salt myminion boto_iot.policy_exists mypolicy
salt myminion boto_iot.policy_version_exists mypolicy versionid
salt myminion boto_iot.replace_topic_rule my_rule 'SELECT * FROM some.thing' \
salt myminion boto_iot.set_default_policy_version mypolicy versionid
salt myminion boto_iot.thing_type_exists mythingtype
salt myminion boto_iot.topic_rule_exists myrule

salt.modules.boto_kinesis
Connection module for Amazon Kinesis
New in version 2017.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    kinesis.keyid: GKTADJGHEIQSXMKKRBJ08H
    kinesis.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    kinesis.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_kinesis.create_stream my_stream N region=us-east-1
salt myminion boto_kinesis.decrease_stream_retention_period my_stream N region=us-east-1
salt myminion boto_kinesis.delete_stream my_stream region=us-east-1
salt myminion boto_kinesis.disable_enhanced_monitoring my_stream ["metrics", "to", "disable"] region=us-east-1
salt myminion boto_kinesis.enable_enhanced_monitoring my_stream ["metrics", "to", "enable"] region=us-east-1
salt myminion boto_kinesis.exists my_stream region=us-east-1
salt myminion boto_kinesis.get_info_for_reshard existing_stream_details
salt myminion boto_kinesis.get_stream_when_active my_stream region=us-east-1
salt myminion boto_kinesis.increase_stream_retention_period my_stream N region=us-east-1
salt myminion boto_kinesis.list_streams
salt myminion boto_kinesis.long_int some_MD5_hash_as_string
salt myminion boto_kinesis.reshard my_stream N True region=us-east-1
salt.modules.boto_kms
Connection module for Amazon KMS
New in version 2015.8.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    kms.keyid: GKTADJGHEIQSXMKKRBJ08H
    kms.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    kms.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_kms.create_alias 'alias/mykey' key_id
salt myminion boto_kms.create_grant 'alias/mykey' 'arn:aws:iam::1111111:/role/myrole' operations='["Encrypt","Decrypt"]'
salt myminion boto_kms.create_key '{"Statement":...}' "My master key"
salt myminion boto_kms.decrypt encrypted_ciphertext
salt myminion boto_kms.describe_key 'alias/mykey'
salt myminion boto_kms.disable_key 'alias/mykey'
salt myminion boto_kms.disable_key_rotation 'alias/mykey'
salt myminion boto_kms.enable_key 'alias/mykey'
salt myminion boto_kms.enable_key_rotation 'alias/mykey'
salt myminion boto_kms.encrypt 'alias/mykey' 'myplaindata' '{"aws:username":"myuser"}'
salt myminion boto_kms.generate_data_key 'alias/mykey' number_of_bytes=1024 key_spec=AES_128
salt myminion boto_kms.generate_data_key_without_plaintext 'alias/mykey' number_of_bytes=1024 key_spec=AES_128
salt myminion boto_kms.generate_random number_of_bytes=1024
salt myminion boto_kms.get_key_policy 'alias/mykey' mypolicy
salt myminion boto_kms.get_key_rotation_status 'alias/mykey'
salt myminion boto_kms.key_exists 'alias/mykey'
salt myminion boto_kms.list_grants 'alias/mykey'
salt myminion boto_kms.list_key_policies 'alias/mykey'
salt myminion boto_kms.put_key_policy 'alias/mykey' default '{"Statement":...}'
salt myminion boto_kms.re_encrypt 'encrypted_data' 'alias/mynewkey' default '{"Statement":...}'
salt myminion boto_kms.revoke_grant 'alias/mykey' 8u89hf-j09j...
salt myminion boto_kms.update_key_description 'alias/mykey' 'My key'

salt.modules.boto_lambda
Connection module for Amazon Lambda
New in version 2016.3.0.
The dependencies listed above can be installed via package or pip.
If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    lambda.keyid: GKTADJGHEIQSXMKKRBJ08H
    lambda.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    lambda.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1

Changed in version 2015.8.0: All methods now return a dictionary. Create and delete methods return:

    created: true

or

    created: false
    error:

Request methods (e.g., describe_function) return:

    function:

or

    error:
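Rendered by the CLI, those two shapes look roughly like this (output is illustrative, not captured from a real run; the nested message field is an assumption about the error payload):

    myminion:
        ----------
        created:
            True

    myminion:
        ----------
        created:
            False
        error:
            ----------
            message:
                <reason reported by the API>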
salt myminion boto_lambda.add_permission my_function my_id "lambda:*" \
salt myminion boto_lambda.alias_exists myfunction myalias
salt myminion boto_lambda.create_alias my_function my_alias $LATEST "An alias"
salt myminion boto_lambda.create_event_source_mapping arn::::eventsource myfunction LATEST
Returns {'created': True} if the function was created and {'created': False} if the function was not created.

CLI Example:
salt myminion boto_lambda.create_function my_function python2.7 my_role my_file.my_function my_function.zip
salt myminion boto_lambda.create_function my_function python2.7 my_role my_file.my_function salt://files/my_function.zip
salt myminion boto_lambda.delete_alias myfunction myalias
salt myminion boto_lambda.delete_event_source_mapping 260c423d-e8b5-4443-8d6a-5e91b9ecd0fa
salt myminion boto_lambda.delete_function myfunction
salt myminion boto_lambda.describe_alias myalias
salt myminion boto_lambda.describe_event_source_mapping uuid
salt myminion boto_lambda.describe_function myfunction
salt myminion boto_lambda.alias_exists myfunction myalias
salt myminion boto_lambda.function_exists myfunction
salt myminion boto_lambda.get_event_source_mapping_ids arn:::: myfunction
salt myminion boto_lambda.get_permissions my_function
permissions: {...}
versions:
salt myminion boto_lambda.list_functions
salt myminion boto_lambda.remove_permission my_function my_id
salt myminion boto_lambda.update_alias my_lambda my_alias $LATEST
salt myminion boto_lambda.update_event_source_mapping uuid FunctionName=new_function
salt myminion boto_lambda.update_function_code my_function ZipFile=function.zip
Returns {'updated': True} if the function was updated, and {'updated': False} if the function was not updated.

CLI Example: salt myminion boto_lambda.update_function_config my_function my_role my_file.my_function "my lambda function"

salt.modules.boto_rds
Connection module for Amazon RDS
New in version 2015.8.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    rds.keyid: GKTADJGHEIQSXMKKRBJ08H
    rds.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    rds.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_rds.create myrds 10 db.t2.micro MySQL sqlusr sqlpassw
salt myminion boto_rds.create_option_group my-opt-group mysql 5.6 "group description"
salt myminion boto_rds.create_parameter_group my-param-group mysql5.6 "group description"
salt myminion boto_rds.create_read_replica replicaname source_name
salt myminion boto_rds.create_subnet_group my-subnet-group "group description" '[subnet-12345678, subnet-87654321]' region=us-east-1
salt myminion boto_rds.delete myrds skip_final_snapshot=True region=us-east-1
salt myminion boto_rds.delete_option_group my-opt-group region=us-east-1
salt myminion boto_rds.delete_parameter_group my-param-group region=us-east-1
salt myminion boto_rds.delete_subnet_group my-subnet-group region=us-east-1
salt myminion boto_rds.describe myrds
salt myminion boto_rds.describe_db_instances jmespath='DBInstances[*].DBInstanceIdentifier'
salt myminion boto_rds.describe_db_subnet_groups
salt myminion boto_rds.describe_parameter_group parametergroupname region=us-east-1
salt myminion boto_rds.describe_parameters parametergroupname region=us-east-1
salt myminion boto_rds.exists myrds region=us-east-1
salt myminion boto_rds.get_endpoint myrds
salt myminion boto_rds.modify_db_instance db_instance_identifier region=us-east-1
salt myminion boto_rds.option_group_exists myoptiongr region=us-east-1
salt myminion boto_rds.parameter_group_exists myparametergroup region=us-east-1
salt myminion boto_rds.subnet_group_exists my-param-group region=us-east-1
salt myminion boto_rds.update_parameter_group my-param-group parameters='{"back_log":1, "binlog_cache_size":4096}' region=us-east-1
salt.modules.boto_route53
Connection module for Amazon Route53
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    route53.keyid: GKTADJGHEIQSXMKKRBJ08H
    route53.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    route53.region: us-east-1

If a region is not specified, the default is 'universal', which is what the boto_route53 library expects, rather than None. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_route53.add_record test.example.org 1.1.1.1 example.org A
ip_addr
    IP address to check. ip_addr or fqdn is required.
fqdn
    Domain name of the endpoint to check. ip_addr or fqdn is required.
port
    Port to check.
hc_type
    Health check type. HTTP | HTTPS | HTTP_STR_MATCH | HTTPS_STR_MATCH | TCP
resource_path
    Path to check.
string_match
    If hc_type is HTTP_STR_MATCH or HTTPS_STR_MATCH, the string to search for in the response body from the specified resource.
request_interval
    The number of seconds between the time that Amazon Route 53 gets a response from your endpoint and the time that it sends the next health-check request.
failure_threshold
    The number of consecutive health checks that an endpoint must pass or fail for Amazon Route 53 to change the current status of the endpoint from unhealthy to healthy or vice versa.
region
    Region endpoint to connect to.
key
    AWS key.
keyid
    AWS keyid.
profile
    AWS pillar profile.
CLI Example:
salt myminion boto_route53.create_healthcheck 192.168.0.1
salt myminion boto_route53.create_healthcheck 192.168.0.1 port=443 hc_type=HTTPS resource_path=/ fqdn=blog.saltstack.furniture
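Building on the parameters listed above, a string-match variant might look like this (the endpoint, path, and match string are illustrative values, not from the module docs):

    salt myminion boto_route53.create_healthcheck 192.168.0.1 port=443 hc_type=HTTPS_STR_MATCH resource_path=/health string_match=OK fqdn=blog.saltstack.furniture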
CLI Example: salt myminion boto_route53.create_hosted_zone example.org
CLI Example: salt myminion boto_route53.create_zone example.org
salt myminion boto_route53.delete_record test.example.org example.org A
salt myminion boto_route53.delete_zone example.org
CLI Example: salt myminion boto_route53.describe_hosted_zones domain_name=foo.bar.com. profile='{"region": "us-east-1", "keyid": "A12345678AB", "key": "xblahblahblah"}'
salt myminion boto_route53.get_record test.example.org example.org A
CLI Example: salt myminion boto_route53.list_all_zones_by_id
CLI Example: salt myminion boto_route53.list_all_zones_by_name
salt myminion boto_route53.modify_record test.example.org 1.1.1.1 example.org A
salt myminion boto_route53.zone_exists example.org
salt.modules.boto_s3
Connection module for Amazon S3 using boto3
New in version 2018.3.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    s3.keyid: GKTADJGHEIQSXMKKRBJ08H
    s3.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    s3.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_s3.get_object_metadata \
salt myminion boto_s3.upload_file \

salt.modules.boto_s3_bucket
Connection module for Amazon S3 Buckets
New in version 2016.3.0.
The dependencies listed above can be installed via package or pip.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    s3.keyid: GKTADJGHEIQSXMKKRBJ08H
    s3.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    s3.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_s3_bucket.create my_bucket \
salt myminion boto_s3_bucket.delete mybucket
salt myminion boto_s3_bucket.delete_cors my_bucket
salt myminion boto_s3_bucket.delete_lifecycle_configuration my_bucket
salt myminion boto_s3_bucket.delete_objects mybucket '{Objects: [Key: myobject]}'
salt myminion boto_s3_bucket.delete_policy my_bucket
salt myminion boto_s3_bucket.delete_replication my_bucket
salt myminion boto_s3_bucket.delete_tagging my_bucket
salt myminion boto_s3_bucket.delete_website my_bucket
salt myminion boto_s3_bucket.describe mybucket
salt myminion boto_s3_bucket.empty mybucket
salt myminion boto_s3_bucket.exists mybucket
Owner: {...}
Buckets:
salt myminion boto_s3_bucket.list_object_versions mybucket
salt myminion boto_s3_bucket.list_objects mybucket
salt myminion boto_s3_bucket.put_acl my_bucket 'public' \
salt myminion boto_s3_bucket.put_cors my_bucket '[{\
salt myminion boto_s3_bucket.put_lifecycle_configuration my_bucket '[{\
salt myminion boto_s3_bucket.put_logging my_bucket log_bucket '[{...}]' prefix
salt myminion boto_s3_bucket.put_notification_configuration my_bucket
salt myminion boto_s3_bucket.put_policy my_bucket {...}
salt myminion boto_s3_bucket.put_replication my_bucket my_role [...]
salt myminion boto_s3_bucket.put_request_payment my_bucket Requester
salt myminion boto_s3_bucket.put_tagging my_bucket my_role [...]
salt myminion boto_s3_bucket.put_versioning my_bucket Enabled
salt myminion boto_s3_bucket.put_website my_bucket IndexDocument='{"Suffix":"index.html"}'
salt.modules.boto_secgroup
Connection module for Amazon Security Groups
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    secgroup.keyid: GKTADJGHEIQSXMKKRBJ08H
    secgroup.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    secgroup.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_secgroup.authorize mysecgroup ip_protocol=tcp from_port=80 to_port=80 cidr_ip='["10.0.0.0/8", "192.168.0.0/24"]'
salt myminion boto_secgroup.convert_to_group_ids mysecgroup vpc-89yhh7h
salt myminion boto_secgroup.create mysecgroup 'My Security Group'
salt myminion boto_secgroup.delete mysecgroup
CLI Example: salt myminion boto_secgroup.delete_tags ['TAG_TO_DELETE1','TAG_TO_DELETE2'] security_group_name vpc_id=vpc-13435 profile=my_aws_profile
salt myminion boto_secgroup.exists mysecgroup
salt myminion boto_secgroup.get_all_security_groups filters='{group-name: mygroup}'
salt myminion boto_secgroup.get_config mysecgroup
salt myminion boto_secgroup.get_group_id mysecgroup
salt myminion boto_secgroup.revoke mysecgroup ip_protocol=tcp from_port=80 to_port=80 cidr_ip='10.0.0.0/8'
CLI Example: salt myminion boto_secgroup.set_tags "{'TAG1': 'Value1', 'TAG2': 'Value2'}" security_group_name vpc_id=vpc-13435 profile=my_aws_profile
salt.modules.boto_sns
Connection module for Amazon SNS
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    sns.keyid: GKTADJGHEIQSXMKKRBJ08H
    sns.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    sns.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_sns.create mytopic region=us-east-1
salt myminion boto_sns.delete mytopic region=us-east-1
salt myminion boto_sns.exists mytopic region=us-east-1
salt myminion boto_sns.get_all_subscriptions_by_topic mytopic region=us-east-1
salt myminion boto_sns.get_all_topics
salt myminion boto_sns.get_arn mytopic
salt myminion boto_sns.subscribe mytopic https https://www.example.com/sns-endpoint region=us-east-1
salt myminion boto_sns.unsubscribe my_topic my_subscription_arn region=us-east-1

New in version 2016.11.0.

salt.modules.boto_sqs
Connection module for Amazon SQS
New in version 2014.7.0.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    sqs.keyid: GKTADJGHEIQSXMKKRBJ08H
    sqs.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    sqs.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1
salt myminion boto_sqs.create myqueue region=us-east-1
salt myminion boto_sqs.delete myqueue region=us-east-1
salt myminion boto_sqs.exists myqueue region=us-east-1
salt myminion boto_sqs.get_attributes myqueue
salt myminion boto_sqs.list region=us-east-1
salt myminion boto_sqs.set_attributes myqueue '{ReceiveMessageWaitTimeSeconds: 20}' region=us-east-1
salt.modules.boto_ssm
Connection module for Amazon SSM
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
salt-call boto_ssm.delete_parameter test-param
salt-call boto_ssm.get_parameter test-param withdecryption=True
salt-call boto_ssm.put_parameter test-param test_value Type=SecureString KeyId=alias/aws/ssm Description='test encrypted key'

salt.modules.boto_vpc
Connection module for Amazon VPC
New in version 2014.7.0.
If IAM roles are not used you need to specify them either in a pillar or in the minion's config file:

    vpc.keyid: GKTADJGHEIQSXMKKRBJ08H
    vpc.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

A region may also be specified in the configuration:

    vpc.region: us-east-1

If a region is not specified, the default is us-east-1. It's also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config:

    myprofile:
      keyid: GKTADJGHEIQSXMKKRBJ08H
      key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
      region: us-east-1

Changed in version 2015.8.0: All methods now return a dictionary. Create and delete methods return:

    created: true

or

    created: false
    error:

Request methods (e.g., describe_vpc) return:

    vpcs:

or

    error:

New in version 2016.11.0: Functions to request, accept, delete, and describe VPC peering connections. Named VPC peering connections can be requested using these modules. VPC owner accounts can accept VPC peering connections (named or otherwise).

Examples showing creation of a VPC peering connection:

    # Create a named VPC peering connection
    salt myminion boto_vpc.request_vpc_peering_connection vpc-4a3e622e vpc-be82e9da name=my_vpc_connection
    # Without a name
    salt myminion boto_vpc.request_vpc_peering_connection vpc-4a3e622e vpc-be82e9da
    # Specify a region
    salt myminion boto_vpc.request_vpc_peering_connection vpc-4a3e622e vpc-be82e9da region=us-west-2

Check to see if a VPC peering connection is pending:

    salt myminion boto_vpc.is_peering_connection_pending name=salt-vpc
    # Specify a region
    salt myminion boto_vpc.is_peering_connection_pending name=salt-vpc region=us-west-2
    # Specify an id
    salt myminion boto_vpc.is_peering_connection_pending conn_id=pcx-8a8939e3

Accept a VPC peering connection:

    salt myminion boto_vpc.accept_vpc_peering_connection name=salt-vpc
    # Specify a region
    salt myminion boto_vpc.accept_vpc_peering_connection name=salt-vpc region=us-west-2
    # Specify an id
    salt myminion boto_vpc.accept_vpc_peering_connection conn_id=pcx-8a8939e3

Deleting a VPC peering connection via this module:

    # Delete a named VPC peering connection
    salt myminion boto_vpc.delete_vpc_peering_connection name=salt-vpc
    # Specify a region
    salt myminion boto_vpc.delete_vpc_peering_connection name=salt-vpc region=us-west-2
    # Specify an id
    salt myminion boto_vpc.delete_vpc_peering_connection conn_id=pcx-8a8939e3
Warning: Please specify either the vpc_peering_connection_id or name, but not both. Specifying both will result in an error!

CLI Example:
salt myminion boto_vpc.accept_vpc_peering_connection name=salt-vpc
# Specify a region
salt myminion boto_vpc.accept_vpc_peering_connection name=salt-vpc region=us-west-2
# Specify an id
salt myminion boto_vpc.accept_vpc_peering_connection conn_id=pcx-8a8939e3
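Putting the commands above together, a typical lifecycle for a named peering connection might look like this (the IDs and names are the illustrative values from the examples above):

    # Requester account: request the peering connection and give it a name
    salt myminion boto_vpc.request_vpc_peering_connection vpc-4a3e622e vpc-be82e9da name=salt-vpc
    # Either side: check whether the request shows up as pending
    salt myminion boto_vpc.is_peering_connection_pending name=salt-vpc
    # Owner account: accept it
    salt myminion boto_vpc.accept_vpc_peering_connection name=salt-vpc
    # Later, tear it down
    salt myminion boto_vpc.delete_vpc_peering_connection name=salt-vpc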
salt myminion boto_vpc.associate_dhcp_options_to_vpc 'dhcp-a0bl34pp' 'vpc-6b1fe402'
salt myminion boto_vpc.associate_network_acl_to_subnet \
salt myminion boto_vpc.associate_network_acl_to_subnet \
salt myminion boto_vpc.associate_route_table 'rtb-1f382e7d' 'subnet-6a1fe403'
salt myminion boto_vpc.associate_route_table route_table_name='myrtb' \
salt myminion boto_vpc.check_vpc vpc_name=myvpc profile=awsprofile
salt myminion boto_vpc.create '10.0.0.0/24'
salt myminion boto_vpc.create_customer_gateway 'ipsec.1' '12.1.2.3' 65534
salt myminion boto_vpc.create_dhcp_options domain_name='example.com' \
salt myminion boto_vpc.create_internet_gateway \