Puppet Enterprise 3.3 User's Guide
(Generated on July 15, 2014, from git revision 7f5d71e92f649cdac1af24dc4f3bc95b0a76c0)
Puppet 3.6.2
PuppetDB 1.6.2
Facter 1.7.5
MCollective 2.5.1
Hiera 1.3.3
Dashboard 2.1.6
The What Gets Installed Where page includes a list of all the major packages that comprise PE 3.3.
About Puppet
Puppet is the leading open source configuration management tool. It allows system configuration
manifests to be written in a high-level DSL and can compose modular chunks of configuration to
create a machine's unique configuration. By default, Puppet Enterprise uses a client/server Puppet
deployment, where agent nodes fetch configurations from a central puppet master.
About Orchestration
Puppet Enterprise includes distributed task orchestration features. Nodes managed by PE will listen
for commands over a message bus and independently take action when they hear an authorized
request. This lets you investigate and command your infrastructure in real time without relying on a
central inventory.
Licensing
PE can be evaluated with a complimentary ten-node license; beyond that, a commercial per-node
license is required for use. A license key file will have been emailed to you after your purchase, and
the puppet master will look for this key at /etc/puppetlabs/license.key. Puppet will log warnings
if the license is expired or exceeded, and you can view the status of your license by running puppet
license at the command line on the puppet master.
To purchase a license, please see the Puppet Enterprise pricing page, or contact Puppet Labs at
sales@puppetlabs.com or (877) 575-9775. For more information on licensing terms, please see the
licensing FAQ. If you have misplaced or never received your license key, please contact
sales@puppetlabs.com.
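As an informal check, not an official PE procedure, you can confirm the key file is in place at the path named above before running puppet license:

```shell
# Check for the license key at the path the puppet master reads.
# This is an illustrative check, not part of the PE tooling.
if test -f /etc/puppetlabs/license.key; then
  echo "license key present"
else
  echo "license key missing"
fi
```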
Next: New Features
New Features
Puppet Enterprise 3.3 introduces the following new features and improvements.
Puppet Enterprise Installer Improvements
This release introduces a web-based interface meant to simplify and provide better clarity into
the PE installation experience. You now have a few paths to choose from when installing PE.
Perform a guided installation using the web-based interface. Think of this as an installation
interview in which we ask you exactly how you want to install PE. If you're able to provide a few
SSH credentials, this method will get you up and running fairly quickly. Refer to the installation
overview for more information.
Use the web-based interface to create an answer file that you can then add as an argument to
the installer script to perform an installation (e.g., sudo ./puppet-enterprise-installer -a
~/my_answers.txt). Refer to Automated Installation with an Answer File, which provides an
overview on installing PE with an answer file.
Write your own answer file or use the answer file(s) provided in the PE installation tarball. Check
the Answer File Reference Overview to get started.
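The question variables below are illustrative assumptions, not an authoritative list; consult the Answer File Reference for the exact variables your PE version expects. A minimal fragment for a monolithic install might look like:

```shell
# my_answers.txt -- illustrative fragment only; the real question
# variables for your PE version are listed in the Answer File Reference.
q_install=y
q_puppetmaster_install=y
q_puppetdb_install=y
q_puppet_enterpriseconsole_install=y
q_puppetagent_install=y
```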
Manifest Ordering
Puppet Enterprise is now using a new ordering setting in the Puppet core that allows you to
configure how unrelated resources should be ordered when applying a catalog. By default,
ordering will be set to manifest in PE.
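The setting lives in puppet.conf. A minimal sketch, assuming the stock PE config path; manifest, title-hash, and random are the accepted values:

```ini
# /etc/puppetlabs/puppet/puppet.conf
[main]
# "manifest" (the PE 3.3 default) applies unrelated resources in the
# order they appear in the manifest; "title-hash" and "random" are the
# other accepted values.
ordering = manifest
```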
Using the modulepath, manifest, or config_version settings will raise a deprecation warning
similar to the following:

    # puppet.conf
    [main]
    modulepath = /tmp/foo
    manifest = /tmp/foodir
    config_version = /usr/bin/false

    # puppet config print confdir
    Warning: Setting manifest is deprecated in puppet.conf. See
    http://links.puppetlabs.com/env-settings-deprecations
       (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')
    Warning: Setting modulepath is deprecated in puppet.conf. See
    http://links.puppetlabs.com/env-settings-deprecations
       (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')
    Warning: Setting config_version is deprecated in puppet.conf. See
    http://links.puppetlabs.com/env-settings-deprecations
       (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')
       (at /usr/lib/ruby/site_ruby/1.8/puppet/settings/config_file.rb:77:in
    `collect')
    /etc/puppet
Note: Executing puppet commands will raise the modulepath deprecation warning.
Note: Razor is included in Puppet Enterprise 3.3 as a tech preview. Puppet Labs tech previews
provide early access to new technology still under development. As such, you should only
use them for evaluation purposes and not in production environments. You can find more
information on tech previews on the tech preview support scope page.
Security Fixes
CVE-2014-0224 OpenSSL vulnerability in secure communications
Assessed Risk Level: medium
Affected Platforms:
Puppet Enterprise 2.8 (Solaris, Windows)
Puppet Enterprise 3.2 (Solaris, Windows, AIX)
Due to a vulnerability in OpenSSL versions 1.0.1 and later, an attacker could intercept and decrypt
secure communications. This vulnerability requires that both the client and server be running an
unpatched version of OpenSSL. Unlike Heartbleed, this attack vector occurs after the initial
handshake, which means encryption keys are not compromised. However, Puppet Enterprise
encrypts catalogs for transmission to agents, so PE manifests containing sensitive information
could have been intercepted. We advise all users to avoid including sensitive information in
catalogs.
Puppet Enterprise 3.3.0 includes a patched version of OpenSSL.
CVSS v2 score: 2.4 with Vector: AV:N/AC:H/Au:M/C:P/I:P/A:N/E:U/RL:OF/RC:C
CVE-2014-0198 OpenSSL vulnerability could allow denial of service attack
Assessed Risk Level: low
Affected Platforms: Puppet Enterprise 3.2 (Solaris, Windows, AIX)
Due to a vulnerability in OpenSSL versions 1.0.0 and 1.0.1, if SSL_MODE_RELEASE_BUFFERS is
enabled, an attacker could cause a denial of service.
CVSS v2 score: 1.9 with Vector: AV:N/AC:H/Au:N/C:N/I:N/A:P/E:U/RL:OF/RC:C
CVE-2014-3251 MCollective aes_security plugin did not correctly validate new server certs
Assessed Risk Level: low
Affected Platforms:
MCollective (all)
Puppet Enterprise 3.2
The MCollective aes_security public key plugin did not correctly validate new server certs against
the CA certificate. By exploiting this vulnerability within a specific race condition window, an
attacker with local access could initiate an unauthorized MCollective client connection with a server.
Note that this vulnerability requires that a collective be configured to use the aes_security plugin.
Puppet Enterprise and open source MCollective are not configured to use the plugin and are not
vulnerable by default.
CVSS v2 score: 3.4 with Vector: AV:L/AC:H/Au:M/C:P/I:N/A:C/E:POC/RL:OF/RC:C
Bug Fixes
The following is a basic overview of some of the bug fixes in this release:
Installation - fixes improve installation so that the installer checks for config files and not just
/etc/puppetlabs/, stops pe-puppet-dashboard-workers during upgrade, warns the user if there
is not enough PostgreSQL disk space, and more.
UI updates - fixes make the appearance and behavior more consistent across all areas of the
console.
Known Issues
As we discover them, this page will be updated with known issues in Puppet Enterprise 3.3 and
earlier. Fixed issues will be removed from this list and noted above in the release notes. If you find
new problems yourself, please file bugs in Puppet here and bugs specific to Puppet Enterprise here.
To find out which of these issues may affect you, run /opt/puppet/bin/puppet --version, the
output of which will look something like 3.6.1 (Puppet Enterprise 3.3.0). To upgrade to a
newer version of Puppet Enterprise, see the chapter on upgrading.
The following issues affect the currently shipped version of PE and all prior releases through the
3.x.x series, unless otherwise stated.
Puppet Enterprise Cannot Locate Samba init Script for Ubuntu 14.04
If you attempt to install and start Samba using PE resource management, you may encounter
the following errors:
Error: /Service[smb]: Could not evaluate: Could not find init script or upstart
conf file for 'smb'
Error: Could not run: Could not find init script or upstart conf file for
'smb'
To work around this issue, install and start Samba with the following commands:
puppet resource package samba ensure=present
PostgreSQL Buffer Memory Issue Can Cause PE Install to Fail on Machines with Large
Amounts of RAM
In some cases, when installing PE on machines with large amounts of RAM, the PostgreSQL
database will use more shared buer memory than is available and will not be able to start. This will
prevent PE from installing correctly. For more information and a suggested workaround, refer to
Troubleshooting the Console and Database.
Upgrades to PE 3.x from 2.8.3 Can Fail if PostgreSQL is Already Installed
There are two scenarios in which your upgrade can fail:
1. If PostgreSQL is already running on port 5432 on the server assigned the database support role,
pe-postgresql won't be able to start.
2. Another version of PostgreSQL is not running, but which psql resolves to something other than
/opt/puppet/bin/psql, which is the instance used by PE.
In this second scenario, you'll see the following failure output:
## Performing migration of the console database. This may take a while...
DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins!
Support for these plugins will be removed in Rails 4.0. Move them out and
bundle them in your Gemfile, or fold them in to your app as lib/myplugin/*
and config/initializers/myplugin.rb. See the release notes for more on this:
http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released.
(called from <top (required)> at /opt/puppet/share/puppet-dashboard/Rakefile:16)
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Database transfer failed.
To work around these issues, ensure the PostgreSQL service is stopped before installing PE. To
determine if PostgreSQL is running, run service status postgresql. If an equivalent of stopped
or no such service is returned, the service is not running. If the service is running, stop it (e.g.,
service postgresql stop) and disable it (chkconfig postgresql off).
To resolve the issue, make sure that which psql resolves to /opt/puppet/bin/psql.
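Which psql wins is purely a matter of PATH precedence. The following sketch demonstrates the mechanism with throwaway stub scripts instead of real PostgreSQL installs, so it is safe to run anywhere:

```shell
# Two scratch directories stand in for /usr/bin and /opt/puppet/bin.
system_bin=$(mktemp -d)
pe_bin=$(mktemp -d)
printf '#!/bin/sh\necho system psql\n' > "$system_bin/psql"
printf '#!/bin/sh\necho PE psql\n'     > "$pe_bin/psql"
chmod +x "$system_bin/psql" "$pe_bin/psql"

# With the system directory first, psql resolves to the wrong binary...
PATH="$system_bin:$pe_bin" command -v psql
# ...and putting the PE directory first fixes the resolution.
PATH="$pe_bin:$system_bin" command -v psql
```

On a real master, the same idea means ensuring /opt/puppet/bin precedes any system PostgreSQL directory in the PATH of the user performing the upgrade.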
Upgrades from 3.2.0 Can Cause Issues with Multi-Platform Agent Packages
Users upgrading from PE 3.2.0 to a later version of 3.x (including 3.2.3) will see errors when
attempting to download agent packages for platforms other than the master. After adding pe_repo
classes to the master for desired agent packages, errors will be seen on the subsequent puppet run
as PE attempts to access the requisite packages. For a simple workaround to this issue, see the
installer troubleshooting page.
Live Management Cannot Uninstall Packages on Windows Nodes
An issue with MCollective prevents correct uninstallation of packages on nodes running Windows.
You can uninstall packages on Windows nodes using Puppet, for example:

    package { 'Google Chrome':
      ensure => absent,
    }
The issue is being tracked on this support ticket.
A NOTE ABOUT SYMLINKS
The answer file no longer gives the option of whether to install symlinks. These are now
automatically installed by packages. To allow the creation of symlinks, you need to ensure that
/usr/local is writable.
Upgrades to PE 3.2.x or Later Remove Commented Authentication Sections from rubycas-server/config.yml
If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain the
commented sections for the third-party services. We've provided the commented sections on the
console config page, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
pe_mcollective Module Integer Parameter Issue
The pe_mcollective module includes a parameter for the ActiveMQ heap size (activemq_heap_mb).
A bug prevents this parameter from correctly accepting an integer when one is entered in the
console. The problem can be avoided by placing the integer inside quote marks (e.g., "10"). This
will cause Puppet to correctly validate the value when it is passed from the console.
Safari Certificate Handling May Prevent Console Access
Due to Apache bug 53193 and the way Safari handles certificates, Puppet Labs recommends that PE
3.3 users avoid using Safari to access the PE console.
If you need to use Safari, you may encounter the following dialog box the first time you attempt to
access the console after installing/upgrading PE 3.3:
If this happens, click Cancel to access the console. (In some cases, you may need to click Cancel
several times.)
This issue will be fixed in a future release.
puppet module list --tree Shows Incorrect Dependencies After Uninstalling Modules
If you uninstall a module with puppet module uninstall <module name> and then run puppet
module list --tree, you will get a tree that does not accurately reflect module dependencies.
Passenger Global Queue Error on Upgrade
When upgrading a PE 2.8.3 master to PE 3.3.0, restarting pe-httpd produces a warning: The
'PassengerUseGlobalQueue' option is obsolete: global queueing is now always turned
on. Please remove this option from your configuration file. This error will not affect
anything in PE, but if you wish, you can turn it off by removing the line in question from
/etc/puppetlabs/httpd/conf.d/passenger-extra.conf.
puppet resource Fails if puppet.conf is Modified to Make puppet apply Work with PuppetDB
In an effort to make puppet apply work with PuppetDB in masterless puppet scenarios, users may
edit puppet.conf to make storeconfigs point to PuppetDB. This breaks puppet resource, causing it
to fail with a Ruby error. For more information, see the console & database troubleshooting page,
and for a workaround see the PuppetDB documentation on connecting puppet apply.
Puppet Agent on Windows Requires --onetime
On Windows systems, puppet agent runs started locally from the command line require either the
--onetime or --test option to be set. This is due to Puppet bug PUP-1275.
BEAST Attack Mitigation
A known weakness in Apache HTTPD leaves it vulnerable to a man-in-the-middle attack known as
the BEAST (Browser Exploit Against SSL/TLS) attack. The vulnerability exists because Apache HTTPD
uses a FIPS-compliant cipher suite that can be cracked via a brute force attack that can discover the
decryption key. If FIPS compliance is not required for your infrastructure, we recommend you
mitigate vulnerability to the BEAST attack by using a cipher suite that includes stronger ciphers.
This can be done as follows:
In /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf, edit the SSLCipherSuite and
SSLProtocol options to:
SSLCipherSuite ALL:!ADH:+RC4+RSA:+HIGH:+AES+256:+CBC3:-LOW:-SSLv2:-EXP
SSLProtocol ALL -SSLv2
Note that unless your system contains OpenSSL v1.0.1d (the version that correctly supports TLS
1.1 and 1.2), prioritizing RC4 may leave you vulnerable to other types of attacks.
Readline Version Issues on AIX Agents
As with PE 2.8.2, on AIX 5.3, puppet agents depend on readline-4-3.2 being installed. You can
check the installed version of readline by running rpm -q readline. If you need to install it, you
can download it from IBM. Install it before installing the puppet agent.
On AIX 6.1 and 7.1, the default version of readline, 4-3.2, is insufficient. You need to replace it
before upgrading or installing by running:
rpm -e --nodeps readline
rpm -Uvh readline-6.1-1.aix6.1.ppc.rpm
If you see an error message after running this, you can disregard it. Readline-6 should be
successfully installed, and you can proceed with the installation or upgrade (you can verify the
installation with rpm -q readline).
Debian/Ubuntu Local Hostname Issue
On some versions of Debian/Ubuntu, the default /etc/hosts file contains an entry for the
machine's hostname with a local IP address of 127.0.1.1. This can cause issues for PuppetDB and
PostgreSQL, because binding a service to the hostname will cause it to resolve to the local-only IP
address rather than its public IP. As a result, nodes (including the console) will fail to connect to
PuppetDB and PostgreSQL.
To fix this, add an entry to /etc/hosts that resolves the machine's FQDN to its public IP address.
This should be done prior to installing PE. However, if PE has already been installed, restarting the
pe-puppetdb and pe-postgresql services after adding the entry to the hosts file should fix things.
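The fix can be sketched as follows; this runs against a scratch copy of the hosts file, and both the hostname and the address 192.0.2.10 are placeholders for your node's real FQDN and public IP:

```shell
# Scratch copy standing in for /etc/hosts.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1   localhost
127.0.1.1   puppet.example.com puppet
EOF

# Replace the local-only 127.0.1.1 entry with the machine's public IP
# (192.0.2.10 is a documentation placeholder address).
sed -i 's/^127\.0\.1\.1[[:space:]].*/192.0.2.10  puppet.example.com puppet/' "$hosts"
cat "$hosts"
```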
console_auth Fails After PostgreSQL Restart
RubyCAS server, the component which provides console log-in services, will not automatically
reconnect if it loses connection to its database, which can result in a 500 Internal Server Error
when attempting to log in or out. You can resolve the issue by restarting Apache on the console's
node with sudo /etc/init.d/pe-httpd restart.
Inconsistent Counts When Comparing Service Resources in Live Management
In the Browse Resources tab, comparing a service across a mixture of RedHat-based and Debian-based nodes will give different numbers in the list view and the detail view.
Augeas File Access Issue
On AIX agents, the Augeas lens is unable to access or modify /etc/services. There is no known
workaround.
After Upgrading, Nodes Report a "Not a PE Agent" Error
When doing the first puppet run after upgrading using the upgrader script included in PE tarballs,
agents are reporting an error: <node.name> is not a Puppet Enterprise agent. This was caused by
a bug in the upgrader that has since been fixed. If you downloaded a tarball prior to November 28,
2012, simply download the tarball again to get the fixed upgrader. If you prefer, you can download
the latest upgrader module from the Forge. Alternatively, you can fix it by changing
/etc/puppetlabs/facter/facts.d/is_pe.txt to contain: is_pe=true.
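The workaround is a one-line external fact. A sketch, writing to a scratch file here rather than the real /etc/puppetlabs/facter/facts.d/is_pe.txt so it is safe to run anywhere:

```shell
# Scratch file standing in for /etc/puppetlabs/facter/facts.d/is_pe.txt.
facts_file=$(mktemp)
echo 'is_pe=true' > "$facts_file"
cat "$facts_file"
```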
Answer File Required for Some SMTP Servers
Any SMTP server that requires authentication, TLS, or runs over any port other than 25 needs to be
explicitly added to an answers file. See the advanced configuration page for details.
pe-httpd Must Be Restarted After Revoking Certificates
(Issue #8421)
Due to an upstream bug in Apache, the pe-httpd service on the puppet master must be restarted
after revoking any node's certificate.
After using puppet cert revoke or puppet cert clean to revoke a certificate, restart the service
by running:
$ sudo /etc/init.d/pe-httpd restart
This error occurs because there is no CA cert bundle on Solaris 10 to trust the Puppet Labs Forge
certificate.
Razor Known Issues
Please see the page Razor Setup Recommendations and Known Issues.
Puppet Terminology
For help with Puppet-specific terms and language, visit the glossary.
For a complete guide to the Puppet language, visit the reference manual.
Next: Compliance: Alternate Workflow
Support Lifecycle
Puppet Enterprise 3.x will receive feature updates through June 25, 2014 (or the release of Puppet
Enterprise 4, whichever is longer), and will receive security updates through June 25, 2015 (or 1
year from the release of Puppet Enterprise 4, whichever is longer). See the support lifecycle page
for more details.
After Puppet Enterprise 3.x reaches end-of-life, customers can still contact Puppet Labs support for
best-effort help, although we will recommend upgrading as soon as you are able.
When seeking support, you may be asked to run an information-gathering support script named
puppet-enterprise-support. The script is located in the root of the unzipped Puppet Enterprise
installer tarball; it is also installed on any master, PuppetDB, or console node and can be run via
/opt/puppet/bin/puppet-enterprise-support.
This script will collect a large amount of system information, compress it, and print the location of
the zipped tarball when it finishes running; an uncompressed directory (named support)
containing the same data will be left in the same directory as the compressed copy. We recommend
that you examine the collected data before forwarding it to Puppet Labs, as it may contain sensitive
information that you will wish to redact.
http://groups.google.com/a/puppetlabs.com/group/pe-users
Click on Sign in and apply for membership.
Click on Enter your email address to access the document.
Enter your email address.
Your request to join will be sent to Puppet Labs for authorization and you will receive an email
when you've been added to the user group.
Note: The installation instructions describe how to install a single agent. If you want to install
more than one agent, just repeat the steps in the Install the Puppet Enterprise Agent
section.
Examine and control nodes in real time with live management
For part two, you'll build on your knowledge of PE and learn about module development. You can
choose from either the Linux track or the Windows track.
In part two, you'll learn about:
Basic module structure
Editing manifests and templates
Writing your own modules
Creating a site module that builds other modules into a complete machine role
Applying classes to groups with the console
Following this walkthrough will take approximately 30-60 minutes for each part.
Creating a Deployment
A typical Puppet Enterprise deployment consists of:
A number of agent nodes, which are computers (physical or virtual) managed by Puppet.
At least one puppet master server, which serves configurations to agent nodes.
At least one console server, which analyzes agent reports and presents a GUI for managing your
site. (This may or may not be the same server as the master.)
At least one database support server, which runs PuppetDB and databases that support the
console. (This may or may not be the same server as the console server.)
For this walk-through, you will create a simple deployment where the puppet master, the console,
and database support components will run on one machine (a.k.a., a monolithic master). This
machine will manage one or two agent nodes. In a production environment you have total flexibility
in how you deploy and distribute your master, console, and database support components, but for
the purposes of this guide we're keeping things simple.
Are you installing using root with an ssh key? The installer will ask you to provide the
username, private key path, and key passphrase (as needed) for each node on which
you're installing a PE component. Remote root ssh login must be enabled on each node,
including the node from which you're running the installer. And the public root ssh key
must be added to authorized_keys on each node on which you're installing a PE
component.
Please ensure that port 3000 is reachable, as the web-based installer uses this port. You
can close the port when the installation is complete.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain, or else
comment out, those lines.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
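You can check for the unsupported sudo defaults before launching the installer. This sketch greps a scratch file; on a real node you would point it at /etc/sudoers instead (likely via sudo, since that file is not world-readable):

```shell
# Scratch file standing in for /etc/sudoers.
sudoers=$(mktemp)
cat > "$sudoers" <<'EOF'
Defaults    env_reset
Defaults    targetpw
EOF

# Flag the Defaults lines the web-based installer cannot handle.
if grep -Eq '^[[:space:]]*Defaults[[:space:]]+(targetpw|rootpw)' "$sudoers"; then
  echo "unsupported sudo Defaults found; comment them out before installing"
else
  echo "sudoers looks ok for the web-based installer"
fi
```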
Tip: Be sure to download the full PE tarball, not the agent-only tarball. The agent-only
tarball is used for package management-based agent installation which is not covered by
this guide.
2. Unpack the tarball. (Run tar -xf <tarball>.)
3. From the PE installer directory, run sudo ./puppet-enterprise-installer.
4. When prompted, choose Yes to install the setup packages. (If you choose No, the installer will
exit.)
At this point, the PE installer will start a web server and provide a web address:
https://<install platform hostname>:3000. Please ensure that port 3000 is reachable. If
necessary, you can close port 3000 when the installation is complete. Also be sure to use https.
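A quick reachability probe using only bash's built-in /dev/tcp redirection; master.example.com is a placeholder for the host where you ran the installer:

```shell
# Probe TCP port 3000 on the installer host (placeholder hostname).
host=master.example.com   # substitute your installer host
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/3000" 2>/dev/null; then
  echo "port 3000 on $host is reachable"
else
  echo "port 3000 on $host is not reachable"
fi
```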
Warning: Leave your terminal connection open until the installation is complete; otherwise
the installation will fail.
5. Copy the address into your browser.
6. When prompted, accept the security request in your browser.
The web-based installation uses a default SSL certificate; you'll have to add a security exception
in order to access the web-based installer. This is safe to do.
You have now installed the puppet master node. As indicated by the installer, the puppet
master node is also an agent node, and can configure itself the same way it configures the
other nodes in a deployment. Stay logged in as root for further exercises.
LOG IN TO THE CONSOLE
To log in to the console, you can select the Start Using Puppet Enterprise Now button that appears
at the end of the web-based installer or follow the steps below.
1. On your control workstation, open a web browser and point it to the address supplied by the installer.
Tip: If you don't have internet connectivity, refer to the note about installing without internet
connectivity to choose a method that is suitable for your needs.
The puppet master that you've installed hosts a package repository for agents of the same OS
and architecture as the puppet master. When you run the installation script on your agent (for
example, curl -k https://<master.example.com>:8140/packages/current/install.bash |
sudo bash), the script will detect the OS on which it is running, set up an apt (or yum, or zypper)
repo that refers back to the master, and then pull down and install the PE agent packages.
Note that if install.bash can't find agent packages corresponding to the agent's platform, it will fail
with an error message telling you which pe_repo class you need to add to the master.
If your agent is the same OS and architecture as the puppet master, run the script above to set up
2. Search for the pe_repo::platform::debian_6_amd64 class in the list of classes, and click its
checkbox to select it. Click the Add selected classes button at the bottom of the page.
3. Navigate to the master.example.com node page, click the Edit button, and begin typing
pe_repo::platform::debian_6_amd64 in the Classes field; you can select the
pe_repo::platform::debian_6_amd64 class from the list of autocomplete suggestions.
4. Click the Update button after you have selected it.
5. Note that the pe_repo::platform::debian_6_amd64 class now appears in the list of classes for
the master.example.com node.
6. Navigate to the live management page, and select the Control Puppet tab. Use the runonce
action to trigger a puppet run.
The new repo will be created in /opt/puppet/packages/public. It will be called puppet-enterprise-3.3.0-debian-6-amd64-agent.
7. SSH into the Debian node where you want to install the agent, and run curl -k
https://<master.example.com>:8140/packages/current/install.bash | sudo bash.
The installer will then install and configure the Puppet Enterprise agent.
You have now installed the puppet agent node. Stay logged in as root for further exercises.
2. Click the Accept All button to approve all the requests and add the nodes.
The puppet agents can now retrieve configurations from the master the next time puppet
runs.
interval is configurable with the runinterval setting in puppet.conf.) However, you can also trigger
a puppet run manually from the command line.
1. On the agent node, log in as root and run puppet agent --test on the command line. This will
trigger a single puppet run on the agent with verbose logging.
Note: You may receive a -bash: puppet: command not found error; this is due to the fact
that PE installs its binaries in /opt/puppet/bin and /opt/puppet/sbin, which aren't
included in your default $PATH. To include these binaries in your default $PATH, manually
add them to your profile or run PATH=/opt/puppet/bin:$PATH;export PATH.
2. Note the long string of log messages, which should end with notice: Finished catalog run
in [...] seconds.
You are now fully managing the agent node. It has checked in with the puppet master for the
first time and received its configuration info. It will continue to check in and fetch new
configurations every 30 minutes. The node will also appear in the console, where you can
make changes to its configuration by assigning classes and modifying the values of class
parameters.
2. Explore the console. Note that if you click on a node to view its details, you can see its recent
history, the Puppet classes it receives, and a very large list of inventory information about it. See
here for more information about navigating the console.
You now know how to nd detailed information about any node PE is managing, including
its status, inventory details, and the results of its last puppet run.
2. Check the list of nodes at the bottom of the page for agent1.example.com; depending on your
timing, it may already be present. If so, skip ahead to the puppet run on each agent node below.
3. If agent1 is not a member of the group already, click the Edit button:
4. In the nodes field, begin typing agent1.example.com's name. You can then select it from the list
of autocompletion guesses. Click the Update button after you have selected it.
5. On each agent node, run puppet agent --test again, as described above. Note the long string
of log messages related to the pe_mcollective class.
In a normal environment, you would usually skip these steps and allow orchestration to come online when Puppet runs automatically.
The agent node can now respond to orchestration messages and its resources can be viewed
live in the console.
2. Note that the master and the agent nodes are all listed in the sidebar.
Discovering Resources
1. Note that you are currently in the Browse Resources tab.
2. Choose user resources from the list of resource types, then click the Find Resources button:
3. Examine the complete list of user accounts found on all of the nodes currently selected in the
sidebar node list. (In this case, both the master and the agent node are selected.) Most of the
users will be identical, as these machines are very close to a default OS install, but some users
related to the puppet master's functionality are only on one node:
4. Click on any user to view details about its properties and where it is present.
The other resource types work in a similar manner. Choose the node(s) whose resources you wish
to browse, select a resource type, and click Find Resources to discover the resources on the selected
nodes; click on one of the resulting found resources to see details about it.
Triggering Puppet Runs
Rather than using the command line to kick off puppet runs with puppet agent -t one at a time,
you can use live management to run Puppet on several selected nodes.
1. On the console, in the live management page, click the Control Puppet tab.
2. Make sure one or more nodes are selected with the node selector on the left.
3. Click the runonce action to reveal the red Run button and additional options, and then click the
Run button to run Puppet on the selected nodes.
Note: You can't always use the runonce action's additional options. With *nix nodes, you
must stop the pe-puppet service before you can use options like noop. See this note in the
orchestration section of the manual for more details.
You have just triggered a puppet run on several agents at once; in this case, the master and the
agent node. The runonce action will trigger a puppet run on every node currently selected in the
sidebar.
When using this action in production deployments, select target nodes carefully, as running it on
dozens or hundreds of nodes at once can strain the Puppet master server. If you need to do an
immediate Puppet run on many nodes, you should use the orchestration command line to do a
controlled run series.
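For example, a controlled run series can be started from the puppet master with the MCollective command line. This is a sketch only; the exact invocation depends on your PE version, and it is typically run as the peadmin user on the puppet master:

```shell
# Run Puppet on every node, but on no more than five nodes at a time,
# so the puppet master is never compiling too many catalogs at once.
mco puppet runall 5

# Or trigger a one-off run on a filtered subset of nodes:
mco puppet runonce --with-fact osfamily=RedHat
```

Because runall limits concurrency, the run series can take a while on large deployments, but it avoids the load spike that a simultaneous run on every node would cause.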
Installing Modules
Puppet configures nodes by applying classes to them. Classes are chunks of Puppet code that
configure a specific aspect or feature of a machine.
Puppet classes are distributed in the form of modules. You can save time by using pre-existing
modules. Pre-existing modules are distributed on the Puppet Forge, and can be installed with the
puppet module subcommand. Any module installed on the Puppet master can be used to configure
agent nodes.
Installing a Forge Module
We will install a Puppet Enterprise supported module: puppetlabs-ntp. While you can use any
module available on the Forge, PE customers can take advantage of supported modules, which are
tested and maintained by Puppet Labs.
1. On your control workstation, point your browser to
http://forge.puppetlabs.com/puppetlabs/ntp. This is the Forge listing for a module that installs,
configures, and manages the NTP service.
2. On the puppet master, run puppet module search ntp. This searches for modules from the
Puppet Forge with ntp in their names or descriptions and results in something like:
@warriornew ntp
We want puppetlabs-ntp, which is the PE supported NTP module. You can view detailed info
about the module in the Read Me on the Forge page you just visited:
http://forge.puppetlabs.com/puppetlabs/ntp.
3. Install the module by running puppet module install puppetlabs-ntp:
You have just installed a Puppet module. All of the classes in it are now available to be added
to the console and assigned to nodes.
There are many more modules, including PE supported modules, on the Forge. In part two of this
guide you'll learn more about modules, including customizing and writing your own modules on
either Windows or *nix platforms.
Using Modules in the PE Console
Every module contains one or more classes. Classes are named chunks of Puppet code and are the
primary means by which Puppet configures nodes. The module you just installed contains a class
called ntp. To use any class, you must first tell the console about it and then assign it to one or
more nodes.
1. On the console, click the Add classes button in the sidebar:
2. Locate the ntp class in the list of classes, and click its checkbox to select it. Click the Add
selected classes button at the bottom of the page.
3. Navigate to the default group page (by clicking the link in the Groups menu in the sidebar), click
the Edit button, and begin typing ntp in the Classes field; you can select the ntp class from the
list of autocomplete suggestions. Click the Update button after you have selected it.
4. Note that the ntp class now appears in the list of classes for the default group. Also note that the
default group contains your master and agent.
5. Navigate to the live management page, and select the Control Puppet tab. Use the runonce
action to trigger a puppet run on both the master and the agent. This will configure the nodes
using the newly-assigned classes. Wait one or two minutes.
6. On the agent, stop the NTP service.
Note: The NTP service name may vary depending on your operating system; for example, on
Debian nodes, the service name is ntp.
7. Run ntpdate us.pool.ntp.org. The result should resemble the following:
28 Jan 17:12:40 ntpdate[27833]: adjust time server 50.18.44.19 offset 0.057045 sec
8. Finally, restart the NTP service.
Puppet is now managing NTP on the nodes in the default group. So, for example, if you
forget to restart the NTP service on one of those nodes after running ntpdate, PE will
automatically restart it on the next puppet run.
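Behind the scenes, the ntp class follows Puppet's common package/config/service pattern. The following is a simplified sketch of that pattern, not the actual puppetlabs-ntp code (which is more flexible and cross-platform); the file path and service name shown are typical Red Hat values:

```puppet
class ntp {
  # Install the NTP package...
  package { 'ntp':
    ensure => installed,
  }

  # ...manage its configuration file...
  file { '/etc/ntp.conf':
    ensure  => file,
    content => template('ntp/ntp.conf.erb'),
    require => Package['ntp'],
  }

  # ...and keep the service running, restarting it whenever the config changes.
  service { 'ntpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}
```

This is why PE restarts the service for you: the service resource declares ensure => running, so every scheduled puppet run re-checks the service's state and corrects any drift.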
SETTING CLASS PARAMETERS
You can use the console to set values for class parameters on individual nodes by selecting a node
and then clicking Edit parameters in the list of classes. For example, you may want to specify an
NTP server for a given node.
1. Click a node in the node list.
2. On the node view page, click the Edit button.
3. Find NTP in the class list, and click Edit Parameters.
4. Enter a value for the parameter you wish to set. To set a specific server, enter ntp1.example.com
in the box next to the servers parameter.
The grey text that appears as values for some parameters is the default value, which can be either a
literal value or a Puppet variable. You can restore this value with the Reset value control that
appears next to the value after you have entered a custom value.
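Setting a parameter in the console is equivalent to declaring the class with that parameter in Puppet code. As a sketch, the console setting above corresponds to a declaration like this (ntp1.example.com is just the placeholder server used above):

```puppet
# Declare the ntp class with an explicit server list instead of its defaults.
class { 'ntp':
  servers => ['ntp1.example.com'],
}
```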
For more information, see the page on classifying nodes with the console.
Viewing Changes with Event Inspector
The event inspector lets you view and research changes and other events. Click the Events tab in the
main navigation bar. The event inspector window is displayed, showing the default view: classes
with failures. Note that in the summary pane on the left, one event, a successful change, has been
recorded for Nodes. However, there are two changes for Classes and Resources. This is because the
ntp class loaded from the puppetlabs-ntp module contains additional classes: a class that handles
the configuration of NTP (Ntp::Config) and a class that handles the NTP service (Ntp::Service).
You can click on events in the summary pane to inspect them in detail. For example, if you click
With Changes in the Classes With Events summary view, the main pane will show you that the
Ntp::Config and Ntp::Service classes were successfully added when you triggered the last
puppet run.
You can keep clicking to drill down and see more detail. You can click the previous arrow (left of the
summary pane), the bread-crumb trail at the top of the page, or bookmark a page for later
reference (but note that after subsequent puppet runs, the bookmarks may be different when you
revisit them). Eventually, you will end up at a run summary that shows you the details of the event.
For example, you can see exactly which piece of puppet code was responsible for generating the
event; in this case, it was line 15 of the service.pp manifest and line 21 of the config.pp manifest.
If there had been a problem applying this class, this information would tell you exactly what piece
of code you need to fix. In this case, event inspector lets you confirm that PE is now managing NTP.
In the upper right corner of the detail pane is a link to a run report which contains information
about the puppet run that made the change, including metrics about the run, logs, and more
information. Visit the reports page for more information.
Summary
You have now experienced the core features and workows of Puppet Enterprise. In summary, a
Puppet Enterprise user will:
Install the PE agent on nodes they wish to manage (*nix and Windows instructions), and add the
nodes by approving their certificate requests.
Use pre-built, PE supported modules from the Puppet Forge to save time and effort.
Assign classes from modules to nodes in the console.
Use the console to set values for class parameters.
Allow nodes to be managed by regularly scheduled Puppet runs.
Use live management to inspect and compare nodes, and to trigger on-demand puppet agent
runs when necessary.
Use event inspector to learn more about events that occurred during puppet runs, such as what
was changed or why something failed.
Next Steps
Beyond what this brief walkthrough has covered, most users will go on to:
Edit Forge modules to customize them to your infrastructure's needs.
Create new modules from scratch by writing classes that manage resources.
Use a site module to compose other modules into machine roles, allowing console users to
control policy instead of implementation.
Configure multiple nodes at once by adding classes to groups in the console instead of
individual nodes.
To learn about these workows, continue to part two of this quick start guide. Choose from either
the Windows or the Linux tracks.
OTHER RESOURCES
Puppet Labs offers many opportunities for learning and training, from formal certification courses
to guided online lessons. We've noted a few below; head over to the Learning Puppet page to
discover more.
Learning Puppet is a series of exercises on various core topics on deploying and using PE. It
includes the Learning Puppet VM, which provides PE pre-installed and configured on VMware and
VirtualBox virtualization platforms.
The Puppet Labs workshop contains a series of self-paced, online lessons that cover a variety of
topics on Puppet basics. You can sign up at the learning page.
To explore the rest of the PE user's manual, use the sidebar at the top of this page, or return to
the index.
Next: Quick Start: Writing Modules (Windows) or Quick Start Writing Modules (Linux)
Before starting this walkthrough, you should have completed the introductory quick start
guide. You should still be logged in as root or administrator on your nodes.
Getting Started
First, you'll need to install the puppet agent on a node running a supported version of Windows.
Once the agent is installed, sign its certificate to add it to the console just as you did for the first
agent node in part one of this guide.
Next, install the Puppet Labs Registry module on your puppet master. The process is identical to
how you installed the NTP module in part one. Once the module has been installed, add its class as
you did with NTP.
Puppet Enterprise 3.3 User's Guide Module Writing Basics for Windows
2. Run ls to view the currently installed modules; note that registry is present.
3. Open registry/manifests/service_example.pp, using the text editor of your choice (vi, nano,
etc.). Avoid using Notepad since it can introduce errors.
service_example.pp contains the following:
class registry::service_example {
  # Define a new service named "Puppet Test" that is disabled.
  registry::service { 'PuppetExample1':
    display_name => "Puppet Example 1",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample1.bat',
    start        => 'disabled',
  }

  registry::service { 'PuppetExample2':
    display_name => "Puppet Example 2",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample2.bat',
    start        => 'disabled',
  }
}
4. Remove the PuppetExample2 registry::service resource, and add the following file
resource:
class registry::service_example {
  # Define a new service named "Puppet Test" that is disabled.
  registry::service { 'PuppetExample1':
    display_name => "Puppet Example 1",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample1.bat',
    start        => 'disabled',
  }

  file { 'C:\PuppetExample1.bat':
    ensure  => file,
    content => ":loop\r\nTIMEOUT /T 300\r\ngoto loop\r\n",
    notify  => Registry::Service['PuppetExample1'],
  }
}
Puppet has also set a number of registry keys to define the PuppetExample1 Windows service. You
can use event inspector to view the specific changes.
To see PuppetExample1 in the list of services that are running, you'll first need to reboot your
Windows agent node, and then navigate to Services via the Administrative Tools.
You have written a new module containing a single class. Puppet now knows about this class,
and it can be added to the console and assigned to your Windows nodes, just as you did in
part one of this guide.
Note the following about this new class:
The registry::value defined resource type allows you to use Puppet to manage the
parent key for a particular value automatically.
The key parameter specifies the path of the key that the value(s) must be in.
The value parameter lists the name of the registry value(s) to manage. This is copied
from the resource title if not specified.
The type parameter determines the type of the registry value(s). Defaults to string. Valid
values are string, array, dword, qword, binary, or expand.
The data parameter lists the data inside the registry value.
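Putting those parameters together, a class like the critical_policy class used in the next section might manage the legal notice values with registry::value. This is a hypothetical sketch: the registry path shown is the standard Windows location for these values, but the data strings are illustrative and the actual class you wrote may differ:

```puppet
class critical_policy {
  # Manage the caption and text of the pre-logon legal notice.
  registry::value { 'legalnoticecaption':
    key  => 'HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System',
    type => 'string',
    data => 'Attention!',
  }

  registry::value { 'legalnoticetext':
    key  => 'HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System',
    type => 'string',
    data => 'This system is for authorized use only.',
  }
}
```

Note that the value parameter is omitted, so each resource manages the registry value named by its title.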
For more information about writing classes, refer to the following documentation:
To learn how to write resource declarations, conditionals, and classes in a guided tour format,
start at the beginning of Learning Puppet.
For a complete but succinct guide to the Puppet language's syntax, see the Puppet 3 language
reference.
For complete documentation of the available resource types, see the type reference.
For short, printable references, see the modules cheat sheet and the core types cheat sheet.
Using Your Custom Module in the Console
1. On the console, use the Add classes button to choose the critical_policy class from the list,
and then click the Add selected classes button to make it available, just as in the previous
example. You may need to wait a moment or two for the class to show up in the list.
2. Add the critical_policy class to your Windows agent node.
3. On the Windows agent node, manually set the data values of legalnoticecaption and
legalnoticetext to some other values. For example, set legalnoticecaption to Larry's
Computer and set legalnoticetext to This is Larry's computer.
4. Use live management to run the runonce action on your Windows agent node.
5. On the Windows agent node, refresh the registry and note that the values of
legalnoticecaption and legalnoticetext have been returned to the values specied in your
critical_policy manifest.
If you reboot your Windows machine, you will see the legal caption and text before you log in again.
You have created a new class from scratch and used it to manage registry settings on your
Windows server.
This class declares other classes with the include function. Note the if conditional that sets
different classes for different OSes using the $osfamily fact. In this example, if an agent node is not
a Windows agent, puppet will apply the motd and core_permissions classes. For more information
about declaring classes, see the modules and classes chapters of Learning Puppet.
1. On the console, remove all of the previous example classes from your nodes and groups, using
the Edit button in each node or group page. Be sure to leave the pe_* classes in place.
2. Add the site::basic class to the console with the Add classes button in the sidebar as before.
3. Assign the site::basic class to the default group.
Your nodes are now receiving the same congurations as before, but with a simplied
interface in the console. Instead of deciding which classes a new node should receive, you
can decide what type of node it is and take advantage of decisions you made earlier.
Summary
You have now performed the core workows of an intermediate Puppet user. In the course of their
normal work, intermediate users:
Download and modify Forge modules to fit their deployment's needs.
Create new modules and write new classes to manage many types of resources, including files,
services, packages, user accounts, and more.
Build and curate a site module to safely empower junior admins and simplify the decisions
involved in deploying new machines.
Monitor and troubleshoot events that aect their infrastructure.
Next: System Requirements
Before starting this walkthrough, you should have completed the introductory quick start
guide. You should still be logged in as root or administrator on your nodes.
Getting Started
Since you'll be using the same master and agent nodes you configured in part one, all you need to
install for the following exercises is the Puppet Labs supported Apache module. The process is
identical to how you installed the NTP module in part one, but just be sure to install the module on
your master. Once the module has been installed, use the console to add its class and then classify
the master as you did with NTP.
Although some Forge modules are exact solutions that fit your site, many are almost but not quite
what you need. Sometimes you will need to edit some of your Forge modules.
Module Basics
By default, modules are stored in /etc/puppetlabs/puppet/modules. If need be, you can configure
this path with the modulepath setting in puppet.conf.
Modules are directory trees. For these exercises you'll use the following files:

apache/ (the module name)
  manifests/
    init.pp (contains the apache class)
    php.pp (contains the php class to install PHP for Apache)
    vhosts.pp (contains the Apache virtual hosts class)
  templates/
    vhost.conf.erb (contains the vhost template, managed by PE)
Every manifest (.pp) file contains a single class. File names map to class names in a predictable way:
init.pp contains a class with the same name as the module; <NAME>.pp contains a class called
<MODULE NAME>::<NAME>; and <NAME>/<OTHER NAME>.pp contains
<MODULE NAME>::<NAME>::<OTHER NAME>.
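For the apache module, that mapping works out as in the following sketch (the class bodies are elided; apache::mod::php appears later in this guide and shows the nested form):

```puppet
# apache/manifests/init.pp:
class apache {
  # (resources that configure Apache itself)
}

# apache/manifests/php.pp:
class apache::php {
  # (resources that install PHP for Apache)
}

# apache/manifests/mod/php.pp -- the <NAME>/<OTHER NAME>.pp form:
class apache::mod::php {
  # (resources for Apache's PHP module)
}
```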
Many modules, including Apache, contain directories other than manifests and templates; for
simplicity's sake, we do not cover them in this introductory guide.
For more on how modules work, see Module Fundamentals in the Puppet documentation.
For more on best practices, methods, and approaches to writing modules, see the Beginner's
Guide to Modules.
For a more detailed guided tour, also see the module chapters of Learning Puppet.
Editing a Manifest
This simplified exercise modifies a template from the Puppet Labs Apache module, specifically
vhost.conf.erb. You'll edit the template to include some simple variables that will be populated
by facts (using PE's implementation of Facter) about your node.
1. On the puppet master, navigate to the modules directory by running cd
/etc/puppetlabs/puppet/modules.
2. Run ls to view the currently installed modules; note that apache is present.
3. Open apache/templates/vhost.conf.erb, using the text editor of your choice (vi, nano, etc.).
Avoid using Notepad since it can introduce errors. vhost.conf.erb contains the following
header:
Puppet Enterprise 3.3 User's Guide Module Writing Basics for Linux
# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
# ************************************
4. Collect the following facts about your agent node:
run facter osfamily (this returns your agent node's OS family)
run facter id (this returns the ID of the currently logged-in user)
5. Edit the header of vhost.conf.erb so that it contains the following variables for Facter
lookups:
# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
#
# This file is authorized for deployment by <%= scope.lookupvar('::id') %>.
#
# This file is authorized for deployment ONLY on <%= scope.lookupvar('::osfamily') %> <%= scope.lookupvar('::operatingsystemmajrelease') %>.
#
# Deployment by any other user or on any other system is strictly prohibited.
# ************************************
6. On the console, add apache to the available classes, and then add that class to your agent node.
Refer to the introductory section of this guide if you need help adding classes in the console.
7. Use live management to kick off a puppet run.
At this point, puppet configures Apache and starts the httpd service. When this happens, a default
Apache vhost is created based on the contents of vhost.conf.erb.
1. On the agent node, navigate to one of the following locations based on your operating system:
Red Hat-based: /etc/httpd/conf.d
Debian-based: /etc/apache2/sites-available
2. View 15-default.conf; depending on the node's OS, the header will show some variation of the
following contents:
# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
#
# This file is authorized for deployment by root.
#
As you can see, PE has used Facter to retrieve some key facts about your node, and then used those
facts to populate the header of your vhost template.
But now, let's see what happens when you write your own Puppet code.
You have written a new module containing a new class that includes two other classes.
Puppet now knows about your new class, and it can be added to the console and
assigned to your node, just as you did in part one of this guide.
Note the following about your new class:
The class apache has been modified to include the mpm_module attribute; this attribute
determines which multi-processing module is configured and loaded for the Apache (HTTPD)
process. In this case, the value is set to prefork.
include apache::mod::php indicates that your new class relies on those classes to
function correctly. However, PE understands that your node needs to be classified with
these classes and will take care of that work automatically when you classify your node
with the pe_quickstart_app class; in other words, you don't need to worry about
classifying your nodes with Apache and Apache PHP.
The priority attribute of 10 ensures that your app has a higher priority on port 80 than
the default Apache vhost app.
The file /var/www/pe_quickstart_app/index.php contains whatever is specified by the
content attribute. This is the content you will see when you launch your app. PE uses the
ensure attribute to create that file the first time the class is applied.
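Putting those pieces together, the pe_quickstart_app class described above might look roughly like the following. This is a sketch based on the bullet points, using the puppetlabs-apache module's apache::vhost defined type; the class you actually wrote may differ in details:

```puppet
class pe_quickstart_app {
  # Configure Apache with the prefork multi-processing module.
  class { 'apache':
    mpm_module => 'prefork',
  }

  # Pull in PHP support for Apache.
  include apache::mod::php

  # Give this vhost a higher priority on port 80 than the default vhost.
  apache::vhost { 'pe_quickstart_app':
    port     => '80',
    priority => '10',
    docroot  => '/var/www/pe_quickstart_app',
  }

  # Create the app's index page the first time the class is applied,
  # and keep its content under Puppet's control afterwards.
  file { '/var/www/pe_quickstart_app/index.php':
    ensure  => file,
    content => "<?php phpinfo() ?>\n",
    require => Class['apache'],
  }
}
```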
For more information about writing classes, refer to the following documentation:
To learn how to write resource declarations, conditionals, and classes in a guided tour format,
start at the beginning of Learning Puppet.
For a complete but succinct guide to the Puppet language's syntax, see the Puppet 3 language
reference.
For complete documentation of the available resource types, see the type reference.
For short, printable references, see the modules cheat sheet and the core types cheat sheet.
Using Your Custom Module in the Console
1. On the console, click the Add classes button, choose the pe_quickstart_app class from the list,
and then click the Add selected classes button to make it available, just as in the previous
example. You may need to wait a moment or two for the class to show up in the list.
2. Navigate to the node view page for your agent node, and use the Edit button to add the
pe_quickstart_app class to your agent node, and remove the apache class you previously
added.
Note: Since the pe_quickstart_app class includes the apache class, you need to remove the
first apache class you added to the master node, as puppet will only allow you to declare a
class once.
3. Use live management to run the runonce action on your agent node.
When the puppet run is complete, you will see in the node's log that a vhost for the app has been
created and the Apache service (httpd) has been started.
4. Use a browser to navigate to port 80 of the IP address for your node; e.g.,
http://<yournodeip>:80.
You have created a new class from scratch and used it to launch an Apache PHP-based web app.
Needless to say, in the real world, your apps will do a lot more than display PHP info pages. But for
the purposes of this exercise, let's take a closer look at how PE is managing your app.
Using PE to Manage Your App
1. On the agent node, open /var/www/pe_quickstart_app/index.php, and change the content to
something like, THIS APP IS MANAGED BY PUPPET!
2. Refresh your browser, and notice that the PHP info page has been replaced with your new
message.
3. On the console, use live management to run the runonce action on your node.
4. Refresh your browser, and notice that puppet has reset your web app to display the PHP info
page. (You can also see that the contents of /var/www/pe_quickstart_app/index.php have been
reset.)
This class declares other classes with the include function. Note the if conditional that sets
different classes for different kernels using the $kernel fact. In this example, if an agent node is a
Linux machine, puppet will apply your pe_quickstart_app class; if it is a Windows machine, puppet
will apply the registry::compliance_example class. For more information about declaring classes,
see the modules and classes chapters of Learning Puppet.
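The conditional described above can be sketched as follows; the class names come from this guide, but treat the manifest as illustrative rather than the exact site module:

```puppet
class site::basic {
  # Apply the web app class on Linux nodes and the registry
  # compliance class on Windows nodes.
  if $kernel == 'Linux' {
    include pe_quickstart_app
  } elsif $kernel == 'windows' {
    include registry::compliance_example
  }
}
```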
1. On the console, remove all of the previous example classes from your nodes and groups, using
the Edit button in each node or group page. Be sure to leave the pe_* classes in place.
2. Add the site::basic class to the console with the Add classes button in the sidebar as before.
3. Assign the site::basic class to the default group.
Your nodes are now receiving the same congurations as before, but with a simplied
interface in the console. Instead of deciding which classes a new node should receive, you
can decide what type of node it is and take advantage of decisions you made earlier.
Summary
You have now performed the core workows of an intermediate Puppet user. In the course of their
normal work, intermediate users:
Download and modify Forge modules to fit their deployment's needs.
Create new modules and write new classes to manage many types of resources, including files,
services, and more.
Build and curate a site module to safely empower junior admins and simplify the decisions
involved in deploying new machines.
Next: System Requirements
Operating System
Puppet Enterprise 3.3 supports the following systems:
Operating system            Version(s)          Arch            Component(s)
Red Hat Enterprise Linux    4, 5, 6, & 7        x86 & x86_64    all
CentOS                      4, 5, & 6           x86 & x86_64    all
Ubuntu LTS                                      i386 & amd64    all
Debian                                          i386 & amd64    all
Oracle Linux                4, 5 & 6            x86 & x86_64    all
Scientific Linux            4, 5 & 6            x86 & x86_64    all
SLES                                            x86 & x86_64    all
Solaris                                         SPARC & i386    agent
Microsoft Windows                               x86 & x86_64    agent
AIX                                             Power           agent
Mac OS X                    Mavericks (10.9)    x86_64          agent
Note: Some operating systems require an active subscription with the vendor's package
management system (such as the Red Hat Network) to install dependencies.
Note: In addition, upgrading your OS while PE is installed can cause problems with PE. To
perform an OS upgrade, you'll need to uninstall PE, perform the OS upgrade, and then
reinstall PE as follows:
1. Back up your databases and other PE files.
2. Perform a complete uninstall (including the -p -d uninstaller option).
3. Upgrade your OS.
4. Install PE.
5. Restore your backup.
Hardware Requirements
Puppet Enterprise's hardware requirements depend on which components a machine runs.
For the puppet master, PE console, PuppetDB and database support, and any agent nodes, we
recommend that your hardware meets the following requirements.
At least four processor cores per node
At least 4 GB RAM per node
Very accurate timekeeping
For /var/, at least 1 GB of free space for each PE component on a given node
For PE-installed PostgreSQL, /opt/ requires at least 100 GB of free space for data gathering
If you are not using PE-installed PostgreSQL, /opt/ needs at least 1 GB of disk space available
Supported Browsers
The following browsers are supported for use with the console:
Chrome: Current version, as of release
Firefox: Current version, as of release
Internet Explorer: 9, 10, and 11
Safari: 7
System Configuration
Before installing Puppet Enterprise at your site, you should make sure that your nodes and network
are properly configured.
Timekeeping
We recommend using NTP or an equivalent service to ensure that time is in sync between your
puppet master and any puppet agent nodes. If time drifts out of sync in your PE infrastructure, you
may encounter issues such as nodes disappearing from live management in the console. A service
like NTP (available as a Puppet Labs supported module) will ensure accurate timekeeping.
Name Resolution
Decide on a preferred name or set of names agent nodes can use to contact the puppet master
server.
Ensure that the puppet master server can be reached via domain name lookup by all of the
future puppet agent nodes at the site.
You can also simplify configuration of agent nodes by using a CNAME record to make the puppet
master reachable at the hostname puppet. (This is the default puppet master hostname that is
automatically suggested when installing an agent node.)
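In a BIND zone file, for example, such a CNAME record might look like the following (hostnames are illustrative):

```
; Agents that contact "puppet.example.com" are pointed at the real master.
puppet  IN  CNAME  master-1.example.com.
```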
Firewall Configuration
Configure your firewalls to accommodate Puppet Enterprise's network traffic. In brief: you should
open up ports 8140, 8081, 61613, and 443. The more detailed version is:
If you are installing PE using the web-based installer, ensure port 3000 is open. You can close
this port when the installation is complete.
All agent nodes must be able to send requests to the puppet master on ports 8140 (for Puppet)
and 61613 (for orchestration).
The puppet master must be able to accept inbound trac from agents on ports 8140 (for
Puppet) and 61613 (for orchestration).
Any hosts you will use to access the console must be able to reach the console server on port
443, or whichever port you specify during installation. (Users who cannot run the console on
port 443 will often run it on port 3000.)
If you will be invoking orchestration commands from machines other than the puppet master,
they will need to be able to reach the master on port 61613. (Note: enabling other machines to
invoke orchestration actions is possible but not supported in this version of Puppet Enterprise.)
If you will be running the console and puppet master on separate servers, the console server
must be able to accept traffic from the puppet master (and the master must be able to send
requests) on ports 443 and 8140. The console server must also be able to send requests to the
puppet master on port 8140, both for retrieving its own catalog and for viewing archived file
contents.
PuppetDB needs to accept connections on port 8081, and the puppet master and PE console
need to be able to send outbound traffic on port 8081.
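If you manage firewall rules with Puppet itself, the inbound rules for a combined master/console/PuppetDB node could be sketched with the puppetlabs-firewall module like this. The rule names and the use of that module here are illustrative; they are not part of the PE installer:

```puppet
# Allow agent traffic: catalog requests (8140) and orchestration (61613).
firewall { '100 allow puppet agent traffic':
  proto  => 'tcp',
  port   => [8140, 61613],
  action => 'accept',
}

# Allow console (443) and PuppetDB (8081) traffic.
firewall { '101 allow console and puppetdb traffic':
  proto  => 'tcp',
  port   => [443, 8081],
  action => 'accept',
}
```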
Dependencies and OS-Specific Details
This section details the packages that are installed from the various OS repos. Unless you do not
have internet access, you shouldn't need to worry about installing these manually; they will be set
up during PE installation.
POSTGRESQL REQUIREMENT
If you will be using your own instance of PostgreSQL (as opposed to the instance PE can install) for
the console and PuppetDB, it must be version 9.1 or higher.
OPENSSL REQUIREMENT
OpenSSL is a dependency required for PE. For Solaris 10 and all versions of RHEL, Debian, Ubuntu,
Windows, and AIX nodes, OpenSSL is included with PE; for all other platforms it is installed directly
from the system repositories.
CentOS
All Nodes: pciutils, system-logos, which, dmidecode, net-tools, virt-what
Master Nodes: apr, apr-util, curl, mailcap, libjpeg, libtool-ltdl, unixODBC, libxml2
Console Nodes: apr, apr-util, curl, mailcap, libtool-ltdl, unixODBC
Console/Console DB Nodes: libjpeg, libxslt, libxml2
RHEL
All Nodes: pciutils, system-logos, which, dmidecode, net-tools, cronie (RHEL 6), vixie-cron (RHEL 4, 5), virt-what
Master Nodes: apr, apr-util, apr-util-ldap (RHEL 6), curl, mailcap, libjpeg, libtool-ltdl (RHEL 7), unixODBC (RHEL 7), libxml2
Console Nodes: apr, apr-util, apr-util-ldap (RHEL 6), curl, mailcap, libtool-ltdl (RHEL 7), unixODBC (RHEL 7)
Console/Console DB Nodes: libjpeg, libxslt, libxml2
SLES
All Nodes: pciutils, pmtools, cron, net-tools
Master Nodes: libapr1, libapr-util1, curl, libjpeg, libxslt, db43, unixODBC, libxml2
Console Nodes: libapr1, libapr-util1, curl, db43, unixODBC
Console/Console DB Nodes: libjpeg, libxslt, libxml2
Debian
All Nodes: pciutils, dmidecode, cron, libxml2, hostname, libldap-2.4-2, libreadline5, virt-what
Master Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libjpeg62, libcurl3 (Debian 7), libxml2-dev (Debian 7)
Console Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libcurl3 (Debian 7), libxml2-dev (Debian 7)
Console/Console DB Nodes: libjpeg62, libxslt1.1, libxml2, libxml2-dev (Debian 7), locales-all (Debian 7)
Ubuntu

All Nodes: pciutils, dmidecode, cron, hostname, libldap-2.4-2, libreadline5, virt-what
Master Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libjpeg62, libxml2
Console Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap
Console/Console DB Nodes: libjpeg62, libxslt1.1, libxml2
AIX
To run the puppet agent on AIX systems, ensure the following are installed before
attempting to install the puppet agent:
bash
zlib
readline
All AIX toolbox packages are available from IBM.
To install the packages on your selected node directly, you can run rpm -Uvh with the following
URLs (note that the RPM package provider on AIX must be run as root):
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/bash/bash-3.2-1.aix5.2.ppc.rpm
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/zlib/zlib-1.2.3-4.aix5.2.ppc.rpm
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/readline/readline-6.1-1.aix6.1.ppc.rpm (AIX 6.1 and 7.1 only)
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/readline/readline-4.3-2.aix5.1.ppc.rpm (AIX 5.3 only)
Note: if you are behind a firewall or running an HTTP proxy, the above commands may not work.
Instead, use the link above to find the packages you need.
Note: GPG verification will not work on AIX; the RPM version used by AIX (even 7.1) is too old. The
AIX package provider doesn't support package downgrades (installing an older package over a
newer package). Avoid using leading zeros when specifying a version number for the AIX provider
(i.e., use 2.3.4, not 02.03.04).
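The leading-zero rule can be enforced mechanically. The normalizer below is just a sketch (not part of PE); it strips leading zeros from each dot-separated segment of a version string before the string is handed to the AIX provider.

```shell
# Strip leading zeros from every dot-separated version segment,
# e.g. 02.03.04 -> 2.3.4 (the form the AIX provider expects).
strip_leading_zeros() {
  printf '%s\n' "$1" | awk -F. '{ for (i = 1; i <= NF; i++) $i = $i + 0 } 1' OFS=.
}

strip_leading_zeros 02.03.04   # prints 2.3.4
```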
The PE AIX implementation supports the NIM, BFF, and RPM package providers. Check the Type
Reference for technical details on these providers.
Solaris
Solaris support is agent only.
For Solaris 10, the following packages are required:
SUNWgccruntime
SUNWzlib
In some instances, bash may not be present on Solaris systems. It needs to be installed before
running the PE installer. Install it via the media used to install the OS, or via CSW if that is present
on your system. (Either CSWbash or SUNWbash is suitable.)
For Solaris 11 the following packages are required:
system/readline
system/library/gcc-45-runtime
library/security/openssl
These packages are available in the Oracle Solaris release repository (enabled by default on Solaris
11). The PE installer will automatically install them; however, if the release repository is not enabled,
the packages will need to be installed manually.
Next Steps
To install Puppet Enterprise on *nix nodes, continue to Installing Puppet Enterprise.
To install Puppet Enterprise on Windows nodes, continue to Installing Windows Agents.
Note: Before getting started, we recommend you read about the Puppet Enterprise
components to familiarize yourself with the parts that make up a PE installation.
Will install on: Debian, Solaris, Ubuntu LTS, AIX, SLES
Note: Bindings for SELinux are available on RHEL 5 and 6. They are not installed by default but are
included in the installation tarball.
Verifying the Installer
To verify the PE installer, you can import the Puppet Labs public key and run a cryptographic
verication of the tarball you downloaded. The Puppet Labs public key is certied by Puppet and is
available from public keyservers, such as pgp.mit.edu. Youll need to have GnuPG installed and the
GPG signature (.asc le) that you downloaded with the PE tarball.
To import the Puppet Labs public key, run:
$ gpg --keyserver=pgp.mit.edu --recv-key 4BD6EC30
Note: When you verify the signature but do not have a trusted path to one of the signatures on the
release key, you will see a warning similar to:

Could not find a valid trust path to the key.
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.

This warning is generated because you have not created a trust path to certify who signed the
release key; it can be ignored.
Note: By default, the puppet master will check for the availability of updates whenever the
pe-httpd service restarts. In order to retrieve the correct update information, the master will
pass some basic, anonymous information to Puppet Labs servers. This behavior can be
disabled. You can find the details on what is collected and how to disable upgrade checking
in the answer file reference. If an update is available, a message will alert you.
For a split installation, you install the console on its own dedicated server; in a monolithic
installation, you install it on the same server as all of the other PE components.
The console server can:
serve the console web interface, which enables administrators to directly edit resources on
nodes, trigger immediate Puppet runs, group and assign classes to nodes, view reports and
graphs, view inventory information, and invoke orchestration actions.
collect reports from and serve node information to the puppet master.
The Console Databases
As indicated in the Database Support section above, the console and console_auth databases rely
on data provided by a PostgreSQL database. You will either have PE install this database or
congure one manually on your own. You only need to create the database instancesthe console
will populate them.
IMPORTANT: If you are using an existing PostgreSQL instance, you will need the host name
and port of the node you intend to use to provide database support, and you will also need
the user passwords for accessing the databases.
When performing split installations using the automated installation method, install the
database support component before you install the console, so that you have access to the
database users' passwords during installation of the console.
license key file, please email sales@puppetlabs.com and we'll re-send it.
Note that you can download and install Puppet Enterprise on up to ten nodes at no charge. No
license key is needed to run PE on up to ten nodes.
Setting Puppet in Your Default Path
PE installs its binaries in /opt/puppet/bin and /opt/puppet/sbin, which aren't included in your
default $PATH. To include these binaries in your default $PATH, manually add them to your profile
or run PATH=/opt/puppet/bin:$PATH;export PATH.
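For the current session, that amounts to prepending the PE directories, for example:

```shell
# Prepend the PE binary directories for this shell session; add the same
# line to your shell profile (e.g. ~/.profile) to make it permanent.
export PATH=/opt/puppet/bin:/opt/puppet/sbin:$PATH

# The first entries on the PATH are now the PE directories:
echo "$PATH" | tr ':' '\n' | head -n 2
# prints /opt/puppet/bin then /opt/puppet/sbin
```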
Installing Agents
Agent installation instructions can be found at Installing PE Agents.
Note: The answer file generated by the procedure on this page can be used to perform an
automated installation. You can find the installer answer file at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.
The machine you run the installer from must have the same OS/architecture as your PE
deployment.
Please ensure that port 3000 is reachable, as the web-based installer uses this port. You
can close this port when the installation is complete.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain, or else
comment out, those lines.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
A Note about Passwords: In some cases, during the installation process, you'll be asked to
supply passwords. The ' (single quote) character is forbidden in all passwords.
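A pre-flight check for that restriction might look like the following sketch (valid_pe_password is illustrative, not an installer function):

```shell
# Reject any candidate password containing the forbidden single quote.
valid_pe_password() {
  case $1 in
    *"'"*) return 1 ;;   # contains a single quote: forbidden by PE
    *)     return 0 ;;
  esac
}

valid_pe_password "s3cretPassw0rd" && echo "password accepted"
valid_pe_password "it's-bad" || echo "password rejected"
```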
Warning: Leave your terminal connection open until the installation is complete; otherwise,
the installation will fail.
b. DNS aliases: provide a comma-separated list of static, valid DNS names (default is puppet),
so agents can trust the master when they contact it. Make sure that this static list
contains the DNS name or alias you'll be configuring your agents to contact.
c. SSH username: provide the username to use when connecting to the puppet master. This field
defaults to root.
d. SSH password: (optional) provide the sudo password for the SSH username provided.
e. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
f. SSH key passphrase: (optional) provide if your SSH key is protected with a passphrase.
5. Provide the following information about database support (PuppetDB, the console, and the
console_auth databases):
a. Install PostgreSQL for me: (default) PE will install a PostgreSQL instance for the databases. This
will use PE-generated default names and usernames for the databases. The passwords can be
retrieved from /etc/puppetlabs/installer/database_info.install when the installation is
complete.
b. Use an existing PostgreSQL instance: if you already have a PostgreSQL instance you'd like to
use, you'll need to provide the following information:
the PostgreSQL server DNS name
the port number used by the PostgreSQL server (default is 5432)
the PuppetDB database username (default is pe-puppetdb)
the PuppetDB database password
the console database name (default is pe-console)
the console database user name (default is pe-console)
the console database password
the console authentication database name (default is console_auth)
the console authentication database user name (default is console_auth)
the console authentication database password
Note: You will also need to make sure the databases and users you've entered actually
exist. The SQL commands you need will resemble the following:
CREATE TABLESPACE "pe-console" LOCATION
'/opt/puppet/var/lib/pgsql/9.2/console';
CREATE USER "console" PASSWORD 'password';
CREATE DATABASE "console" OWNER "console" TABLESPACE "pe-console"
ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template
template0;
When the installation is complete, the installer script that was running in the terminal will close
itself.
Finally, click Start using Puppet Enterprise to log into the console or continue on to Installing
Agents.
Next: Installing PE Agents
Note: The answer file generated by the procedure on this page can be used to perform an
automated installation. You can find the installer answer file at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain, or else
comment out, those lines.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
A Note about Passwords: In some cases, during the installation process, you'll be asked to
supply passwords. The ' (single quote) character is forbidden in all passwords.
provide the username, private key path, and key passphrase (as needed) for each node on
which you're installing a PE component.
Prerequisite: The non-root user's SSH key must be added to authorized_keys on each
node on which you're installing a PE component, and the non-root user must be
granted sudo access on each box.
Warning: Leave your terminal connection open until the installation is complete; otherwise,
the installation will fail.
d. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
e. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
f. SSH key passphrase: (optional) provide if your SSH key is protected with a passphrase.
5. Provide the following information about the PuppetDB server:
a. PuppetDB hostname: provide the fully qualified domain name of the server you're installing
PuppetDB on.
b. SSH username: provide the username to use when connecting to PuppetDB. This user must
either be root or have sudo access.
c. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
d. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
e. SSH key passphrase: (optional) provide if your SSH key is protected with a passphrase.
6. Provide the following information about the console server:
a. Console hostname: provide the fully qualified domain name of the server you're installing the
PE console on.
b. SSH username: provide the username to use when connecting to the console. This user must
either be root or have sudo access.
c. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
d. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
e. SSH key passphrase: (optional) provide if your SSH key is protected with a passphrase.
7. Provide the following information about database support (PuppetDB, the console, and the
console_auth databases):
a. Install PostgreSQL for me: (default) PE will install a PostgreSQL instance for the databases on
the same node as PuppetDB. This will use PE-generated default names and usernames for the
databases. The passwords can be retrieved from
/etc/puppetlabs/installer/database_info.install when the installation is complete.
b. Use an existing PostgreSQL instance: if you already have a PostgreSQL instance you'd like to
use, you'll need to provide the following information:
Note: You will also need to make sure the databases and users you've entered actually
exist. The SQL commands you need will resemble the following:
CREATE TABLESPACE "pe-console" LOCATION
'/opt/puppet/var/lib/pgsql/9.2/console';
CREATE USER "console" PASSWORD 'password';
CREATE DATABASE "console" OWNER "console" TABLESPACE "pe-console"
ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template
template0;
CREATE USER "console_auth" PASSWORD 'password';
CREATE DATABASE "console_auth" OWNER "console_auth" TABLESPACE "pe-console" ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8'
template template0;
CREATE TABLESPACE "pe-puppetdb" LOCATION
'/opt/puppet/var/lib/pgsql/9.2/puppetdb';
CREATE USER "pe-puppetdb" PASSWORD 'password';
CREATE DATABASE "pe-puppetdb" OWNER "pe-puppetdb" TABLESPACE "pe-puppetdb" ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8'
template template0;
As an example, if your master is on a node running EL6 and you want to add an agent node
running Debian 6 on AMD64 hardware:
1. Use the console to add the pe_repo::platform::debian_6_amd64 class.
If needed, refer to instructions on classing the master.
2. To create a new repo containing the agent packages, use live management to kick off a puppet
run.
The new repo is created in /opt/puppet/packages/public. It will be called puppet-enterprise-3.3.0-debian-6-amd64-agent.
3. SSH into the node where you want to install the agent, and run curl -k https://<master
hostname>:8140/packages/current/install.bash | sudo bash.
Note: The -k flag is needed in order to get curl to trust the master, which it wouldn't
otherwise do, since Puppet and its SSL infrastructure have not yet been set up on the node.
In some cases, you may be using wget instead of curl. Please use the appropriate flags as
needed.
The install.bash script actually uses a secondary script to retrieve and install an agent
package repo once it has detected the platform on which it is running. You can use this
secondary script if you want to manually specify the platform of the agent packages. You can
also use this script as an example or as the basis for your own custom scripts. The script can
be found at https://<master hostname>:8140/packages/current/<platform>.bash,
where <platform> uses the form el-6-x86_64. Platform names are the same as those used
for the PE tarballs:
el-{5, 6}-{i386, x86_64}
debian-{6, 7}-{i386, amd64}
ubuntu-{10.04, 12.04}-{i386, amd64}
sles-11-{i386, x86_64}
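For example, assembling the platform-specific script URL might look like the sketch below (the hostname is a placeholder, and agent_script_url is an illustrative helper, not a PE command):

```shell
# Build the per-platform agent install script URL described above.
agent_script_url() {
  local master=$1 platform=$2
  echo "https://${master}:8140/packages/current/${platform}.bash"
}

agent_script_url master.example.com el-6-x86_64
# prints https://master.example.com:8140/packages/current/el-6-x86_64.bash

# To install, run on the agent node (not the master), e.g.:
# curl -k "$(agent_script_url master.example.com el-6-x86_64)" | sudo bash
```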
Warning for Mac OS X users: When performing a command line install of an agent on an OS X
system, you must run puppet config set server and puppet config set certname for
the agent to function correctly.
this certificate.
Node requests can be approved or rejected using the console's certificate management capability.
Pending node requests are indicated in the main navigation bar. Click this indicator to go to a
page where you can see current requests, and then approve or reject them as needed.
Alternatively, you can use the command line interface (CLI), but note that certificate signing with the
CLI is done on the puppet master node. To view the list of pending certificate requests, run:
$ sudo puppet cert list
After signing a new node's certificate, it may take up to 30 minutes before that node appears in the
console and begins retrieving configurations. You can use live management or the CLI to trigger a
puppet run manually on the node if you want to see it right away.
If you need to remove certificates (e.g., during reinstallation of a node), you can use the puppet
cert clean <node name> command.
By default, the master node hosts a repo that contains packages used for agent installation. When
you download the tarball for the master, the master also downloads the agent tarball for the same
platform and unpacks it in this repo.
When installing agents on a platform that is different from the master platform, the install script
attempts to connect to the internet to download the appropriate agent tarball when you classify the
puppet master. If you will not have internet access at the time of installation, you need to download
the appropriate agent tarball in advance and use the option below that corresponds with your
particular deployment.
Option 1
If you would like to use the PE-provided repo, you can copy the agent tarball into the
/opt/staging/pe_repo directory on your master.
If you upgrade your server, you will need to perform this task again for the new version.
Option 2
If you already have a package management/distribution system, you can use it to install agents
by adding the agent packages to your repo. In this case, you can disable the PE-hosted repo
feature altogether by removing the pe_repo class from your master, along with any class that
starts with pe_repo::.
If you upgrade your server, you will need to perform this task again for the new version.
Option 3
If your deployment has multiple masters and you don't wish to copy the agent tarball to each
one, you can specify a path to the agent tarball. This can be done with an answer file, by setting
q_tarball_server to an accessible server containing the tarball, or by using the console to set
the base_path parameter of the pe_repo class to an accessible server containing the tarball.
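In answer-file form, Option 3 is a single setting. Answer files are evaluated as shell, so the fragment below is valid shell; the server name is an example, not a default.

```shell
# Answer-file fragment: fetch agent tarballs from a shared server
# instead of each master (hostname is an example; use your own).
q_tarball_server=tarball-server.example.com
```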
Next: Upgrading
Installing Puppet
To install Puppet Enterprise on a Windows node, simply download and run the installer, which is a
standard Windows .msi package and will run as a graphical wizard. Alternatively, you can run the
installer unattended; see Automated Installation below.
The installer must be run with elevated privileges. Installing Puppet does not require a system
reboot.
The only information you need to specify during installation is the hostname of your puppet master
server:
After Installation
Once the installer finishes:
Puppet agent will be running as a Windows service and will fetch and apply configurations every
30 minutes (by default). You can now assign classes to the node as normal; see Puppet:
Assigning Configurations to Nodes for more details. After the first puppet run, the MCollective
service will also be running, and the node can now be controlled with live management and
orchestration. The puppet agent service and the MCollective service can be started and stopped
independently using either the service control manager GUI or the command line sc.exe utility;
see Running Puppet on Windows for more details.
The Start Menu will contain a Puppet folder, with shortcuts for running puppet agent manually,
running Facter, and opening a command prompt for use with the Puppet tools. See Running
Puppet on Windows for more details.
Puppet is automatically added to the machine's PATH environment variable. This means you can
open any command line and call puppet, facter, and the few other batch files that are in the bin
directory of the Puppet installation. This also adds the necessary items for the Puppet
environment to the shell, but only for the duration of each command's execution.
Automated Installation
For automated deployments, Puppet can be installed unattended on the command line as follows:
msiexec /qn /i puppet.msi
You can also specify /l*v install.txt to log the progress of the installation to a file.
The following public MSI properties can also be specified:

MSI Property                   Puppet Setting   Default Value
INSTALLDIR                     n/a
PUPPET_MASTER_SERVER           server           puppet
PUPPET_CA_SERVER               ca_server        Value of PUPPET_MASTER_SERVER
PUPPET_AGENT_CERTNAME          certname
PUPPET_AGENT_ENVIRONMENT       environment      production
PUPPET_AGENT_STARTUP_MODE      n/a
PUPPET_AGENT_ACCOUNT_USER      n/a
PUPPET_AGENT_ACCOUNT_PASSWORD  n/a
PUPPET_AGENT_ACCOUNT_DOMAIN    n/a
For example:
msiexec /qn /i puppet.msi PUPPET_MASTER_SERVER=puppet.acme.com
Note: If a value for the corresponding setting already exists in puppet.conf, specifying it during
installation will NOT override that value.
Upgrading
Puppet can be upgraded by installing a new version of the MSI package. No extra steps are
required, and the installer will handle stopping and re-starting the puppet agent service.
When upgrading, the installer will not replace any settings in the main puppet.conf configuration
file, but it can add previously unspecified settings if they are provided on the command line.
Uninstalling
Puppet can be uninstalled through the Windows standard Add or Remove Programs interface or
from the command line.
To uninstall from the command line, you must have the original MSI file or know the ProductCode
of the installed MSI:
msiexec /qn /x [puppet.msi|product-code]
Uninstalling will remove Puppet's program directory, the puppet agent service, and all related
registry keys. It will leave the data directory intact, including any SSL keys. To completely remove
Puppet from the system, delete the data directory manually.
Installation Details
What Gets Installed
In order to provide a self-contained installation, the Puppet installer includes all of Puppet's
dependencies, including Ruby, Gems, and Facter. (Puppet redistributes the 32-bit Ruby application
from rubyinstaller.org.) MCollective is also installed.
These prerequisites are used only for Puppet Enterprise components and do not interfere with
other local copies of Ruby.
Program Directory
Unless overridden during installation, Puppet and its dependencies are installed into the standard
Program Files directory on 32-bit versions of Windows and into the Program Files (x86) directory
on 64-bit versions.
Puppet Enterprise's default installation path is:

32-bit Windows: C:\Program Files\Puppet Labs\Puppet Enterprise
64-bit Windows: C:\Program Files (x86)\Puppet Labs\Puppet Enterprise
The Program Files directory can be located using the PROGRAMFILES environment variable on 32-bit
versions of Windows or the PROGRAMFILES(X86) variable on 64-bit versions.
Puppet's program directory contains the following subdirectories:

Directory            Description
bin                  scripts for running Puppet and Facter
facter               Facter source
hiera                Hiera source
mcollective          MCollective source
mcollective_plugins  plugins used by MCollective
misc                 resources
puppet               Puppet source
service              code to run puppet agent as a service
sys                  Ruby and other tools
Data directory path and default location, by Windows version:

2003: %ALLUSERSPROFILE%\Application Data\PuppetLabs\puppet
7, 2008, 2012: %PROGRAMDATA%\PuppetLabs\ (by default, C:\ProgramData\PuppetLabs\)
Warning: In PE, agent certnames need to be lowercase. For Mac OS X agents, the certname is
derived from the name of the machine (e.g., My-Example-Mac). To prevent installation
issues, make sure the name of your machine uses lowercase letters. You
can make this change in System Preferences > Sharing > Computer Name > Edit.
To make this change from the command line, run the following commands:
1. sudo scutil --set ComputerName <newname>
2. sudo scutil --set LocalHostName <newname>
3. sudo scutil --set HostName <newname>
If you don't want to change your computer's name, you can also enter the agent certname in
all lowercase letters when prompted by the installer.
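The lowercase rule is easy to apply when scripting agent installs; the helper below is a sketch, not part of PE:

```shell
# Derive a PE-safe, all-lowercase certname from a machine name.
to_certname() {
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]'
}

to_certname "My-Example-Mac"   # prints my-example-mac
```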
Tip: If you want to use the answer file created from the web-based installer, you can find it at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.
You must hand edit any pre-made answer file before using it, as new nodes will need, at a
minimum, a unique agent certname. For example, you can derive one with command substitution:
q_puppetagent_certname=$(hostname -f)
or backticks:
q_puppetagent_certname=`uuidgen`
Answer files can also contain arbitrary shell code and control logic, but you will probably be able to
get by with a few simple name-discovery commands.
See the answer file reference for a complete list of variables and the conditions under which they're
needed, or simply start editing one of the example files in answers/.
To set the certname to the node's fully qualified domain name:
q_puppetagent_certname=$(hostname -f)
To set it to a UUID:
q_puppetagent_certname=$(uuidgen)
Uninstaller Answers
q_pe_uninstall
Y or N Whether to uninstall. Answer files must set this to Y.
q_pe_purge
Y or N Whether to purge additional files when uninstalling, including all configuration files,
modules, manifests, certificates, and the home directories of any users created by the PE
installer.
q_pe_remove_db
Y or N Whether to remove any PE-specific databases when uninstalling.
Next: What gets installed where?
q_all_in_one_install=y
Y or N Whether or not the installation is an all-in-one installation (i.e., whether PuppetDB and
the console are also being installed on this node).
q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
ADDITIONAL COMPONENT ANSWERS
q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).
See the Answer File Overview and the section on automated installation for more details.
Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
ADDITIONAL GLOBAL ANSWERS
q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
Additional Component Answers
These answers are optional.
q_puppetagent_install
Y or N Whether to install the puppet agent component.
Puppet Agent Answers
These answers are always needed.
q_puppetagent_certname=pe-console.<your local domain>
String An identifying string for this agent node. This per-node ID must be unique across
your entire site. Fully qualified domain names are often used as agent certnames.
q_puppetagent_server=pe-master.<your local domain>
String The hostname of the puppet master server. For the agent to trust the master's
certificate, this must be one of the valid DNS names you chose when installing the puppet
master.
q_fail_on_unsuccessful_master_lookup=y
Y or N Whether to quit the install if the puppet master cannot be reached.
q_skip_master_verification=n
Y or N This is a silent install option; the default is N. When set to Y, the installer will skip
master verification, which allows the user to deploy agents when they know the master won't
be available.
Puppet Master Answers
These answers are generally needed if you are installing the puppet master component.
q_disable_live_manangement=n
Y or N Whether to disable or enable live management in the console. Note that you need to
manually add this question to your answer file before an installation or upgrade.
q_pe_database=y
Y or N Whether to have the PostgreSQL server for the console managed by PE or to manage
it yourself. Set to Y if you're using PE-managed PostgreSQL.
q_puppet_enterpriseconsole_auth_user_email=<your email>
String The email address the console's admin user will use to log in.
q_puppet_enterpriseconsole_auth_password=<your password>
String The password for the console's admin user. Must be longer than eight characters.
q_puppet_enterpriseconsole_smtp_host=smtp.<your local domain>
String The SMTP server used to email account activation codes to new console users.
q_puppet_enterpriseconsole_smtp_port=25
Integer The port to use when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_use_tls=n
Y or N Whether to use TLS when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_user_auth=n
Y or N Whether to authenticate to the SMTP server with a username and password.
q_puppet_enterpriseconsole_smtp_username=
String The username to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_puppet_enterpriseconsole_smtp_password=
String The password to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_puppet_enterpriseconsole_database_name=console
String The database the console will use. Note that if you are not installing the database
support component, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_database_user=console
String The PostgreSQL user the console will use. Note that if you are not installing the
database support component, this user must already exist on the PostgreSQL server and
must be able to edit the console's database.
q_puppet_enterpriseconsole_database_password=<your password>
String The password for the console's PostgreSQL user.
q_puppet_enterpriseconsole_auth_database_name=console_auth
String The database the console authentication will use. Note that if you are not installing
the database support component, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_auth_database_user=console_auth
String The PostgreSQL user the console authentication will use. Note that if you are not
installing the database support component, this user must already exist on the PostgreSQL
server and must be able to edit the console authentication database.
q_puppet_enterpriseconsole_master_hostname
String The hostname of the server running the master component. Only needed in a split
install.
Database Support Answers
These answers are only needed if you are installing the database support component.
q_database_host=pe-puppetdb.localdomain
String The hostname of the server running the PostgreSQL server that supports the
console.
q_database_port=5432
Integer The port where the PostgreSQL server that supports the console can be reached.
q_puppetdb_database_name=pe-puppetdb
String The database PuppetDB will use.
q_puppetdb_database_password=strongpassword1748
String The password for PuppetDB's root user.
q_puppetdb_database_user=pe-puppetdb
String PuppetDB's root user name.
q_puppetdb_hostname=pe-puppetdb.localdomain
String The hostname of the server running PuppetDB.
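Taken together, the answers above can be collected into a single answer-file fragment for the database support component. The hostnames and password below are the illustrative values used in this reference, not required settings:

```shell
# Illustrative answer-file fragment for the database support component.
# All values are placeholders from the examples above -- substitute your own.
q_database_host=pe-puppetdb.localdomain
q_database_port=5432
q_puppetdb_database_name=pe-puppetdb
q_puppetdb_database_user=pe-puppetdb
q_puppetdb_database_password=strongpassword1748
q_puppetdb_hostname=pe-puppetdb.localdomain
```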
ADDITIONAL DATABASE SUPPORT ANSWERS
q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).
Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
ADDITIONAL GLOBAL ANSWERS
These answers are generally needed if you are installing the puppet master component.
q_all_in_one_install=n
Y or N Whether or not the installation is an all-in-one installation (i.e., whether PuppetDB and
the console are also being installed on this node).
q_puppetmaster_certname=pe-master.<your local domain>
String An identifying string for the puppet master. This ID must be unique across your
entire site. The server's fully qualified domain name is often used as the puppet master's
certname.
q_puppetmaster_dnsaltnames=pe-master,pe-master.<your local domain>
String Valid DNS names at which the puppet master can be reached. Must be a comma-separated list. In a normal installation, the default list includes the node's own hostname plus puppet and puppet.<your local domain>.
q_puppetmaster_enterpriseconsole_hostname=pe-console.<your local domain>
String The hostname of the server running the console component. Only needed if you are
not installing the console component on the puppet master server.
q_puppetmaster_enterpriseconsole_port=443
Integer The port on which to contact the console server. Only needed if you are not
installing the console component on the puppet master server.
q_pe_check_for_updates=n
y or n; MUST BE LOWERCASE Whether to check for updates whenever the pe-httpd service
restarts. To get the correct update info, the server will pass some basic, anonymous info to
Puppet Labs servers. Specifically, it will transmit:
the IP address of the client
the type and version of the client's OS
the installed version of PE
the number of nodes licensed and the number of nodes used
If you wish to disable update checks (e.g., if your company policy forbids transmitting this
information), you will need to set this to n. You can also disable checking after installation by
editing the /etc/puppetlabs/installer/answers.install file.
q_public_hostname=
String A publicly accessible hostname where the console can be accessed if the host name
resolves to a private interface (e.g., Amazon EC2). This is set automatically by the installer on
EC2 nodes, but can be set manually in environments with multiple hostnames.
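As a sketch, a minimal puppet master answer-file fragment for a split install, built from the answers above, might look like this (all hostnames are placeholders for your own domain):

```shell
# Hypothetical split-install fragment; replace example.com with your local domain.
q_all_in_one_install=n
q_puppetmaster_certname=pe-master.example.com
q_puppetmaster_dnsaltnames=pe-master,pe-master.example.com
q_puppetmaster_enterpriseconsole_hostname=pe-console.example.com
q_puppetmaster_enterpriseconsole_port=443
q_pe_check_for_updates=n
```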
ADDITIONAL PUPPET MASTER ANSWERS
PuppetDB Answers
q_puppetdb_hostname=pe-puppetdb.<your local domain>
String The hostname of the server running PuppetDB.
ADDITIONAL PUPPETDB ANSWERS
Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
q_skip_master_verification=n
Y or N This is a silent install option; the default is N. When set to Y, the installer will skip
master verification, which allows the user to deploy agents when they know the master won't
be available.
Puppet Master Answers
These answers are generally needed if you are installing the puppet master role.
q_puppetmaster_certname=${q_puppetagent_server}
String An identifying string for the puppet master. This ID must be unique across your
entire site. The server's fully qualified domain name is often used as the puppet master's
certname.
q_puppet_enterpriseconsole_database_name=console
String The database the console will use. Note that if you are not installing the database
support role, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_database_user=console
String The PostgreSQL user the console will use. Note that if you are not installing the
database support role, this user must already exist on the PostgreSQL server and must be
able to edit the console's database.
q_puppet_enterpriseconsole_database_password=<your password>
String The password for the console's PostgreSQL user.
q_puppet_enterpriseconsole_auth_database_name=console_auth
String The database the console authentication will use. Note that if you are not installing
the database support role, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_auth_database_user=console_auth
String The PostgreSQL user the console authentication will use. Note that if you are not
installing the database support role, this user must already exist on the PostgreSQL server
and must be able to edit the auth database.
q_puppet_enterpriseconsole_auth_database_password=<your password>
String The password for the auth database's PostgreSQL user.
ADDITIONAL PUPPET MASTER ANSWERS
q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).
If you have a monolithic installation (with the master, console, and database components all on the
same node), the installer will upgrade each component in the correct order, automatically.
Upgrading a Split Installation
If you have a split installation (with the master, console, and database components on different
nodes), the process involves the following steps, which must be performed in the following order:
1. Upgrade Master
2. Upgrade PuppetDB
3. Upgrade Console
4. Upgrade Agents
To upgrade Windows agents, simply download and run the new MSI package as
described in Installing Windows Agents. However, be sure to upgrade your master, console,
and database nodes rst.
Upgrades from 3.2.0 Can Cause Issues with Multi-Platform Agent Packages
Users upgrading from PE 3.2.0 to a later version of 3.x (including 3.2.3) will see errors when
attempting to download agent packages for platforms other than the master. After adding pe_repo
classes to the master for desired agent packages, errors will be seen on the subsequent puppet run
as PE attempts to access the requisite packages. For a simple workaround to this issue, see the
installer troubleshooting page.
Upgrades to PE 3.x from 2.8.3 Can Fail if PostgreSQL is Already Installed
This issue has been documented in the known issues section of the release notes.
A Note about Changes to puppet.conf that Can Cause Issues During Upgrades
If you manage puppet.conf with Puppet or a third-party tool like Git or r10k, you may encounter
errors after upgrading based on the following changes. Please assess these changes before
upgrading.
node_terminus Changes
In PE versions earlier than 3.2, node classification was configured with node_terminus=exec,
located in /etc/puppetlabs/puppet/puppet.conf. This caused the puppet master to execute a
custom shell script (/etc/puppetlabs/puppet-dashboard/external_node), which ran a curl
command to retrieve data from the console.
PE 3.2 changes node classification in puppet.conf. The new configuration is
node_terminus=console. The external_node script is no longer available; thus,
node_terminus=exec no longer works.
With this change, we have improved security, as the puppet master can now verify the console.
The console certificate name is pe-internal-dashboard. The puppet master now finds the
console by reading the contents of /etc/puppetlabs/puppet/console.conf, which provides the
following:
[main]
server=<console hostname>
port=<console port>
certificate_name=pe-internal-dashboard
This file tells the puppet master where to locate the console and what name it should expect the
console to have. If you want to change the location of the console, you can edit console.conf,
but DO NOT change the certificate_name setting.
The rules for certificate-based authorization to the console are found in
/etc/puppetlabs/console-auth/certificate_authorization.yml on the console node. By
default, this file allows the puppet master read-write access to the console (based on its
certificate name) to request node data and submit report data.
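As an illustration only (check the certificate_authorization.yml file PE ships for the authoritative syntax), entries in this file map a certificate name to a role, so a master's read-write grant might look roughly like:

```yaml
# Hypothetical entry -- the certname and exact layout are assumptions,
# not values taken from this guide.
pe-master.example.com:
  role: read-write
```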
Reports Changes
Reports are no longer submitted to the console using reports=https. PE 3.2 changed the
setting in puppet.conf to reports=console. This change works in the same way as the
node_terminus changes described above.
Upgrading Split Console and Custom PostgreSQL Databases
When upgrading from 3.1 to 3.3, the console database tables are upgraded from 32-bit integers to
64-bit. This helps to avoid ID overflows in large databases. In order to migrate the database, the
upgrader will temporarily require disk space equivalent to 20% more than the largest table in the
console's database (by default, located at /opt/puppet/var/lib/pgsql/9.2/console). If the
database is in this default location, on the same node as the console, the upgrader can successfully
determine the amount of disk space needed and provide warnings if needed. However, there are
certain circumstances in which the upgrader cannot make this determination automatically.
Specifically, the installer cannot determine the disk space requirement if:
1. The console database is installed on a different node than the console.
2. The console database is a custom instance, not the database installed by PE.
In case 1, the installer can determine how much space is needed, but it will be up to the user to
determine whether sufficient free space exists. In case 2, the installer is unable to obtain any
information about the size or state of the database.
Running a 3.x Master with 2.8.x Agents is not Supported
3.x versions of PE contain changes to the MCollective module that are not compatible with 2.8.x
agents. When running a 3.x master with a 2.8.x agent, it is possible that puppet will still continue to
run and check into the console, but this means puppet is running in a degraded state that is not
supported.
Upgrades to PE 3.2.x or Later Remove Commented Authentication Sections from rubycas-server/config.yml
If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain the
commented sections for the third-party services. We've provided the commented sections on the
console config page, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
Upgrading puppetlabs-inifile to Version 1.1.0 or Later Is Required
If you have the puppetlabs-inifile module installed, you must upgrade to version 1.1.0 or higher of
the module before you upgrade to PE 3.3.
Downloading PE
If you haven't done so already, you will need a Puppet Enterprise tarball appropriate for your
system(s). See the Installing PE section of this guide for more information on accessing Puppet
Enterprise tarballs, or go directly to the download page.
Once downloaded, copy the appropriate tarball to each node you'll be upgrading.
Note: PE3 has moved from the MySQL implementation used in PE 2.x to PostgreSQL for all
database support. PE3 also now includes PuppetDB, which requires PostgreSQL. When
upgrading from 2.x to 3.x, the installer will automatically pipe your existing data from
MySQL to PostgreSQL.
You will need to have a node available and ready to receive an installation of PuppetDB and
PostgreSQL. This can be the same node as the one running the master and console (if you
have a monolithic, all-in-one implementation), or it can be a separate node (if you are
running a split component implementation). In a split component implementation, the
database node must be up and running and reachable at a known hostname before starting
the upgrade process on the console node.
The upgrader can install a pre-configured version of PostgreSQL (must be version 9.1 or
higher) along with PuppetDB on the node you select. If you prefer to use a node with an
existing instance of PostgreSQL, that instance needs to be manually configured with the
correct users and access. This also needs to be done BEFORE starting the upgrade.
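For example, on an existing PostgreSQL instance you would create the PuppetDB database and user ahead of time. This is a sketch only: the user name, database name, and password below are illustrative, not values this guide prescribes:

```shell
# Hypothetical preparation of a pre-existing PostgreSQL (9.1+) instance.
# Run as a user that can administer PostgreSQL; names and passwords are examples.
sudo -u postgres psql <<'SQL'
CREATE USER "pe-puppetdb" WITH PASSWORD 'strongpassword1748';
CREATE DATABASE "pe-puppetdb" OWNER "pe-puppetdb"
  ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8'
  TEMPLATE template0;
SQL
```

Note the en_US.UTF-8 locale, which matches the requirement described under "Potential Database Transfer Issues" below.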
Upgrade Master
Start the upgrade by running the puppet-enterprise-installer script on the master node. The
script will detect any previous versions of PE components and stop any PE services that are currently
running. The script will then step through the install script, providing default answers based on the
components it has detected on the node (e.g., if the script detects only an agent on a given node, it
will provide No as the default answer to installing the master component). The upgrader should
be able to answer all of the questions based on your current installation except for the hostname
and port of the PuppetDB node you prepped before starting the upgrade.
As with installation, the script will also check for any missing dependent vendor packages and offer
to install them automatically.
Lastly, the script will summarize the upgrade plan and ask you to go ahead and perform the
upgrade. Your answers to the script will be saved as usual in
/etc/puppetlabs/installer/answers.install.
The upgrade script will run and provide detailed information as to what it installs, what it updates,
and what it replaces. It will preserve existing certificates and puppet.conf files.
Upgrade PuppetDB
On the node you provisioned for PuppetDB before starting the upgrade, unpack the PE 3.3 tarball
and run the puppet-enterprise-installer script. If you are upgrading from a 2.8 deployment,
you will need to provide some answers to the upgrader, as follows:
?? Install puppet master? [y/N] Answer N. This will not be your master. The master was
upgraded in the previous step.
?? Puppet master hostname to connect to? [Default: puppet] Enter the FQDN of the
master node you upgraded in the previous step.
?? Install PuppetDB? [y/N] Answer Y. This is the reason we are performing this installation
on this node.
?? Install the cloud provisioner? [y/N] Choose whether or not you would like to install
the cloud provisioner component on this node.
?? Install a PostgreSQL server locally? [Y/n] If you want the installer to create a
PostgreSQL server instance for PuppetDB data, answer Y. If you are using an existing
PostgreSQL instance located elsewhere, answer N and be prepared to answer questions about
its hostname, port, database name, database user, and password.
?? Certname for this node? [Default: my_puppetdb_node.example.com] Enter the FQDN
for this node.
?? Certname for the master? [Default: hostname.entered.earlier] You only need to
change the default if the hostname and certname of your master are different.
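If you prefer not to answer these questions interactively, the installer can save and replay answers. A sketch (the answer-file name is arbitrary, and the flags follow the installer's documented options):

```shell
# Save answers from an interactive run without installing, then replay them.
sudo ./puppet-enterprise-installer -s puppetdb.answers   # save answers to a file
sudo ./puppet-enterprise-installer -a puppetdb.answers   # install from saved answers
```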
The installer will save auto-generated users and passwords in
/etc/puppetlabs/installer/database_info.install. Do not delete this file; you will need its
information in the next step.
POTENTIAL DATABASE TRANSFER ISSUES
The node running PostgreSQL must have access to the en_US.UTF8 locale. Otherwise, certain
non-ASCII characters will not translate correctly and may cause issues and unpredictability.
If you have manually re-ordered the columns in your old MySQL database, the transfer may fail
or may import values into inappropriate columns, leading to incorrect data and unpredictable
behavior.
If some string values (e.g., for group name) are literals written exactly as NULL, they will be
transferred as undefined values or, if the target PostgreSQL column has a not-null constraint,
the import may fail altogether.
Upgrade the Console
On the node serving the console component, unpack the PE 3.3 tarball and run the
puppet-enterprise-installer script. The installer will detect the version from which you are upgrading
and answer as many installer questions as possible based on your existing deployment.
Note: When upgrading a node running the console component, the upgrader will pipe the
current MySQL databases into the new PostgreSQL databases. If your databases contain a lot
of data, this transfer may take some time to complete.
Pruning the MySQL data before starting the upgrade will make things go faster. While not
absolutely necessary, to make the transfer go faster we recommend deleting all but two to
four weeks' worth of reports.
If you are running the console on a VM, you may also wish to temporarily increase the
amount of RAM available.
Note that your old database will NOT be deleted after the upgrade completes. After you are
sure the upgrade was successful, you will need to delete the database les yourself to
reclaim disk space.
The installer will also ask for the following information:
The hostname and port number for the PuppetDB node you created in the previous step.
Database credentials; specifically, the database names, user names, and passwords for the
console, console_auth, and pe-puppetdb databases. These can be found in
/etc/puppetlabs/installer/database_info.install on the PuppetDB node.
Note: If you will be using your own instance of PostgreSQL (as opposed to the instance PE can
install) for the console and PuppetDB, it must be version 9.1 or higher.
DISABLING/ENABLING LIVE MANAGEMENT DURING AN UPGRADE
The status of live management is not managed during an upgrade of PE unless you specifically
indicate a change is needed in an answer file. In other words, if your previous version of PE had live
management enabled (the PE default), it will remain enabled after you upgrade unless you add or
change q_disable_live_management={y|n} in your answer file.
Depending on your answer, the disable_live_management setting in
/etc/puppetlabs/puppet-dashboard/settings.yml on the puppet master (or console node in a
split install) will be set to either true or false after the upgrade is complete.
(Note that you can enable/disable Live Management at any time during normal operations by
editing the aforementioned settings.yml and then running sudo /etc/init.d/pe-httpd
restart.)
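For instance, disabling live management after the fact might look like the following sketch, assuming the setting appears as a `disable_live_management:` line in settings.yml:

```shell
# Hedged example: flip the setting and restart pe-httpd (paths from this guide).
sudo sed -i 's/^disable_live_management:.*/disable_live_management: true/' \
  /etc/puppetlabs/puppet-dashboard/settings.yml
sudo /etc/init.d/pe-httpd restart
```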
Upgrade Agents and Complete the Upgrade
The simplest way to upgrade agents is to upgrade the pe-agent package in the repo your package
manager (e.g., Satellite) is using. Similarly, if you are using the PE package repo hosted on the
master, it will get upgraded when you upgrade the master. You can then use the agent install script
as usual to upgrade your agent.
For nodes running an OS that doesn't support remote package repos (e.g., RHEL 4, AIX), you'll need
to use the installer script in the PE tarball as you did for the master, etc. On each node with a
puppet agent, unpack the PE 3.3 tarball and run the puppet-enterprise-installer script. The
installer will detect the version from which you are upgrading and answer as many installer
questions as possible based on your existing deployment. Note that the agents on your puppet
master, PE console, and PuppetDB nodes will have been updated already when you upgraded those
nodes. Nodes running 2.x agents will not be available for live management until they have been
upgraded.
PE services should restart automatically after the upgrade. But if you want to check that everything
is working correctly, you can run puppet agent -t on your agents to ensure that everything is
behaving as it was before upgrading. Generally speaking, it's a good idea to run puppet right away
after an upgrade to make sure everything is hooked up and has the latest configuration.
Regardless of the path you use, the uninstaller will ask you to confirm that you want to uninstall.
By default, the uninstaller will remove the Puppet Enterprise software, users, logs, cron jobs, and
caches, but it will leave your modules, manifests, certificates, databases, and configuration files in
place, as well as the home directories of any users it removes.
You can use the following command-line flags to change the uninstaller's behavior:
Uninstaller Options
-p
Purge additional files. With this flag, the uninstaller will also remove all configuration files,
modules, manifests, certificates, and the home directories of any users created by the PE
installer. This will also remove the Puppet Labs public GPG key used for package verification.
-d
Also remove any databases created during installation.
-h
Display a help message.
-n
Run in noop mode; show commands that would have been run during uninstallation without
actually running them.
Note that if you plan to reinstall any PE component on a node you've run an uninstall on, you may
need to run puppet cert clean <node name> on the master in order to remove any orphaned
certificates from the node.
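Putting the flags above together, a full teardown and certificate cleanup might look like this sketch (the agent hostname is a placeholder):

```shell
# Preview what would be removed, then purge software, configs, and databases.
sudo ./puppet-enterprise-uninstaller -n        # noop: show planned actions only
sudo ./puppet-enterprise-uninstaller -p -d     # purge files and remove databases
# Later, on the master, clear the node's orphaned certificate before reinstalling:
puppet cert clean agent1.example.com
```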
Next: Automated Installation
Software
What
All functional components of PE, excluding configuration files. You are not likely to need to change
these components. The following software components are installed:
Puppet
PuppetDB
Facter
MCollective
Hiera
Puppet Dashboard
Where
On *nix nodes, all PE software (excluding config files and generated data) is installed under
/opt/puppet.
On Windows nodes, all PE software is installed in the Puppet Enterprise subdirectory of the
standard 32-bit applications directory.
Executable binaries on *nix are in /opt/puppet/bin and /opt/puppet/sbin.
The Puppet modules included with PE are installed on the puppet master server in
/opt/puppet/share/puppet/modules. Don't modify anything in this directory or add modules of
your own. Instead, install them in /etc/puppetlabs/puppet/modules.
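For example, modules from the Puppet Forge can be installed with the module tool, which places them in the user module directory rather than the PE-managed one; puppetlabs-stdlib here is just an illustration:

```shell
# Installs into /etc/puppetlabs/puppet/modules by default in PE,
# leaving the PE-managed /opt/puppet/share/puppet/modules untouched.
puppet module install puppetlabs-stdlib
```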
Orchestration plugins are installed in /opt/puppet/libexec/mcollective/mcollective on *nix
and in <COMMON_APPDATA>\PuppetLabs\mcollective\etc\plugins\mcollective on Windows. If
you are adding new plugins to your PE agent nodes, you should distribute them via Puppet as
described in the Adding Actions page of this manual.
Dependencies
For information about PostgreSQL and OpenSSL requirements, refer to the system requirements.
Configuration Files
What
Files used to configure Puppet and its subsidiary components. These are the files you will likely
change to accommodate the needs of your environment.
Where
On *nix nodes, Puppet Enterprise's configuration files all live under /etc/puppetlabs.
On Windows nodes, Puppet Enterprise's configuration files all live under
<COMMON_APPDATA>\PuppetLabs. The location of this folder varies by Windows version; in 2008 and
2012, its default location is C:\ProgramData\PuppetLabs\.
PEs various components all have subdirectories inside this main data directory:
Puppet's confdir is in the puppet subdirectory. This directory contains the puppet.conf file, the
site manifest (manifests/site.pp), and the modules directory.
The orchestration engine's config files are in the mcollective subdirectory on all agent nodes,
as well as the activemq subdirectory and the /var/lib/peadmin directories on the puppet
master. The default files in these directories are managed by Puppet Enterprise, but you can add
plugin config files to the mcollective/plugin.d directory.
The console's config files are in the puppet-dashboard, rubycas-server, and console-auth
subdirectories.
PuppetDB's config files are in the puppetdb subdirectory.
Log Files
What
The software distributed with Puppet Enterprise generates the following log files, which can be
found as follows.
Where
Puppet Master Logs
/var/log/pe-httpd/access.log
/var/log/pe-httpd/puppetmaster.error.log
/var/log/pe-httpd/puppetmaster.access.log contains all the endpoints that have been
accessed with the puppet master REST API.
Puppet Agent Logs
The puppet agent service logs its activity to the syslog service. Your syslog configuration dictates
where these messages will be saved, but the default location is /var/log/messages on Linux and
/var/adm/messages on Solaris.
ActiveMQ Logs
/var/log/pe-activemq/wrapper.log
/var/log/pe-activemq/activemq.log
/var/opt/puppet/activemq/data/kahadb/db-1.log
/var/opt/puppet/activemq/data/audit.log
Orchestration Service Log
/var/log/pe-mcollective/mcollective.log maintained by the orchestration service, which is
installed on all nodes.
/var/log/pe-mcollective/mcollective-audit.log exists on all nodes that have mcollective
installed; logs any mcollective actions run on the node, including information about the client
that called the node.
Console Logs
/var/log/pe-console-auth/auth.log
/var/log/pe-console-auth/cas_client.log
/var/log/pe-console-auth/cas.log
/var/log/pe-httpd/error.log contains errors related to Passenger. Console errors that don't
get logged anywhere else can be found in this log. If you have problems with the console or
Puppet, this log may be useful.
/var/log/pe-httpd/puppetdashboard.access.log contains all the endpoints that have been
accessed in the console.
/var/log/pe-httpd/puppetdashboard.error.log
/var/log/pe-puppet-dashboard/certificate_manager.log
/var/log/pe-puppet-dashboard/delayed_job.log
/var/log/pe-puppet-dashboard/event-inspector.log
/var/log/pe-puppet-dashboard/failed_reports/ contains a collection of any reports that fail to
upload to the dashboard.
/var/log/pe-puppet-dashboard/live-management.log
/var/log/pe-puppet-dashboard/mcollective_client.log
/var/log/pe-puppet-dashboard/production.log
Installer Logs
/var/log/pe-installer/http.log contains the web requests sent to the installer; present only
on the machine from which the web-based install was performed.
/var/log/pe-installer/installer-<timestamp>.log contains the operations performed and
any errors that occurred during installation.
Database Log
/var/log/pe-puppetdb/pe-puppetdb.log
/var/log/pe-postgresql/pgstartup.log
Miscellaneous Logs
These les may or may not be present.
/var/log/pe-httpd/other_vhosts_access.log
/var/log/pe-puppet/masterhttp.log
/var/log/pe-puppet/rails.log
classifying new nodes, etc. See the Cloud Provisioning section for more information.
Orchestration Tools Tools used to orchestrate simultaneous actions across a number of
nodes. These tools are built on the MCollective framework and are accessed either via the mco
command or via the Live Management page of the PE console. See the Orchestration section for
more information.
Module Tools The Module tool is used to access and create Puppet Modules, which are
reusable chunks of Puppet code users have written to automate configuration and deployment
tasks. For more information, and to access modules, visit the Puppet Forge.
Console The console is Puppet Enterprise's GUI web interface. The console provides tools to
view and edit resources on your nodes, view reports and activity graphs, trigger Puppet runs, etc.
See the Console section of the Puppet Manual for more information.
For more details, you can also refer to the man page for a given command or subcommand.
Services
PE uses the following services:
pe-activemq The ActiveMQ message server, which passes messages to the MCollective servers
on agent nodes. Runs on servers with the puppet master component.
pe-httpd Apache 2, which manages and serves puppet master and the console on servers
with those components. (Note that PE uses Passenger to run puppet master, instead of running it
as a standalone daemon.)
pe-mcollective The orchestration (MCollective) daemon, which listens for orchestration
messages and invokes actions. Runs on every agent node.
pe-memcached The puppet memcached daemon. Runs on the same node as the PE console.
pe-puppet (on EL and Debian-based platforms) The puppet agent daemon. Runs on every
agent node.
pe-puppet-dashboard-workers A supervisor that manages the console's background
processes. Runs on servers with the console component.
pe-puppetdb and pe-postgresql Daemons that manage and serve the database components.
Note that pe-postgresql is only created if we install and manage PostgreSQL for you.
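A quick way to check these services on a *nix node is a status loop; this is an illustrative sketch, and the list should be trimmed to the components actually installed on the node:

```shell
# Illustrative health check; not every service exists on every node.
for svc in pe-httpd pe-activemq pe-mcollective pe-puppet \
           pe-puppet-dashboard-workers pe-puppetdb pe-postgresql pe-memcached; do
  sudo service "$svc" status || echo "$svc is not running (or not installed here)"
done
```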
User Accounts
PE creates the following users:
peadmin An administrative account which can invoke orchestration actions. This is the only PE
user account intended for use in a login shell. See the Invoking Orchestration Actions page of
this manual for more about this user. This user exists on servers with the puppet master
component.
pe-puppet A system user which runs the puppet master processes spawned by Passenger.
pe-apache A system user which runs Apache (pe-httpd).
pe-activemq A system user which runs the ActiveMQ message bus used by MCollective.
Puppet Enterprise 3.3 User's Guide PE 3.3 Installing What Gets Installed Where?
121/404
puppet-dashboard A system user which runs the console processes spawned by Passenger.
pe-puppetdb A system user with root access to the database.
pe-auth The PE console auth user.
pe-memcached The PE memcached daemon user.
pe-postgres A system user with access to the pe-postgresql instance. Note that this user is
only created if we install and manage PostgreSQL for you.
Certificates
During install, PE generates the following certificates (which can be found at
/etc/puppetlabs/puppet/ssl/certs):
pe-internal-dashboard The certificate for the puppet dashboard.
<user-entered console certname> The certificate for the PE console. Only generated if the
user has chosen to install the console in a split component configuration.
<user-entered PuppetDB certname> The certificate for the database component. Only
generated if the user has chosen to install the database in a split component configuration.
<user-entered master certname> This certificate is either generated at install, if the puppet
master and console are on the same machine, or signed by the master, if the console is on a
separate machine.
pe-internal-mcollective-servers A shared certificate generated on the puppet master and
shared to all agent nodes.
pe-internal-peadmin-mcollective-client The orchestration certificate for the peadmin
account on the puppet master.
pe-internal-puppet-console-mcollective-client The orchestration certificate for the PE
console/live management.
pe-internal-broker The certificate generated for the ActiveMQ instance running over SSL on
the puppet master. Added to /etc/puppetlabs/activemq/broker.ks.
A fresh PE install should thus give the following list of certificates:
root@master:~# puppet cert list --all
+ "master"
(40:D5:40:FA:E2:94:36:4D:C4:8C:CE:68:FB:77:73:AB) (alt names: "DNS:master",
"DNS:puppet", "DNS:puppet.soupkitchen.internal")
+ "pe-internal-broker"
(D3:E1:A8:B1:3A:88:6B:73:76:D1:E3:DA:49:EF:D0:4D) (alt names: "DNS:master",
"DNS:master.soupkitchen.internal", "DNS:pe-internal-broker", "DNS:stomp")
+ "pe-internal-dashboard"
(F9:10:E7:7F:97:C8:1B:2F:CC:D9:F1:EA:B2:FE:1E:79)
+ "pe-internal-mcollective-servers"
(96:4F:AA:75:B5:7E:12:46:C2:CE:1B:7B:49:FF:05:49)
+ "pe-internal-peadmin-mcollective-client"
(3C:4D:8E:15:07:41:18:E2:21:57:19:01:2E:DB:AB:07)
+ "pe-internal-puppet-console-mcollective-client"
(97:10:76:B5:3E:8D:02:D2:3D:A6:43:F4:89:F4:8B:94)
Documentation
Man pages for the Puppet subcommands are generated on the fly. To view them, run puppet man <SUBCOMMAND>.
The pe-man command from previous versions of Puppet Enterprise is no longer functional. Use the
above method instead.
Next: Accessing the Console
Browser Requirements
For the browser requirements, see system requirements.
Note the https protocol handler; you cannot reach the console over plain http.
If this happens, click Cancel to access the console. (In some cases, you may need to click
Cancel several times.)
Logging In
For security, accessing the console requires a user name and password. PE allows three different
levels of user access: read, read-write, and admin. If you are an admin setting up the console or
accessing it for the first time, use the user and password you chose when you installed the console.
Otherwise, you will need to get credentials from your site's administrator. See the User
Management page for more information on managing console user accounts.
Since the console is the main point of control for your infrastructure, you will probably want to
decline your browser's offer to remember its password.
Next: Navigating the Console
The following navigation items all lead to their respective sections of the console:
Nodes
Groups
Classes
Reports
Inventory Search
Live Management
Node requests
The navigation item containing your username (admin in the screenshot above) is a menu which
provides access to your account information and (for admin users) the user management tools.
The Resources menu leads to the Puppet Enterprise documentation and also provides links to the
Puppet Forge, Geppetto IDE documentation, and Puppet Labs Support and Feedback portals.
The licenses menu shows you the number of nodes that are currently active and the number of
nodes still available on your current license. See below for more information on working with
licenses.
Note: For users limited to read-only access, some elements of the console shown here will
not be visible.
The Sidebar
Within the node/group/class/report pages of the console, you can also use the sidebar as a
shortcut to many sections and subsections.
Many pages in the console, including class and group detail pages, contain a node list view. A
list will show the name of each node that is relevant to the current view (members of a group, for
example), a graph of their recent aggregate activity, and a few details about each node's most
recent run. Node names will have icons next to them representing their most recent state.
Certain node lists (the main node list and the per-state lists) include a search field. This field
accepts partial node names, and narrows the list to show only nodes whose names match the
search.
Clicking the name of a node will take you to that node's node detail page, where you can see
in-depth information and assign configurations directly to the node. See the Grouping and Classifying
Nodes and Viewing Reports and Inventory Data pages for information about node detail pages.
REPORTS AND REPORT LISTS
Node detail pages contain a report list. If you click a report in this list, or a timestamp in the Latest
report column of a node list view, you can navigate to a report detail page. See the Viewing Reports
and Inventory Data page for information about report detail pages.
GROUPS
Groups can contain any number of nodes, and nodes can belong to more than one group. Each
group detail page contains a node list view.
You can use a group page to view aggregate information about its members, or to assign
configurations to every member at once. See the Grouping and Classifying Nodes page for
information about assigning configurations to groups.
CLASSES
Classes are the main unit of Puppet configuration. You must deliberately add classes to the
console with the Add classes button before you can assign them to nodes or groups. See the
Grouping and Classifying Nodes page for information about adding classes and assigning them to
nodes or groups. If you click the name of a class to see its class detail page, you can view a node list
of every node assigned that class.
Working with Licenses
The licenses menu shows you the number of nodes that are currently active and the number of
nodes still available on your current license. If the number of available licenses is exceeded, a
warning will be displayed. The number of licenses used is determined by the number of active
nodes known to PuppetDB. This is a change from previous behavior, which used the number of
unrevoked certs known by the CA to determine used licenses. The menu item provides convenient
links to purchase and pricing information.
Unused nodes will be deactivated automatically after seven days with no activity (no new facts,
catalogs, or reports), or you can use puppet node deactivate for immediate results. The console
will cache license information for some time, so if you have made changes to your license file (e.g.
adding or renewing licenses), the changes may not show for up to 24 hours. You can restart the
pe-memcached service in order to update the license display sooner.
Next: Navigating the Live Management Page
Notes: To invoke orchestration actions, you must be logged in as a read-write or admin level
user. Read-only users can browse resources, but cannot invoke actions.
Since the live management page queries information directly from your nodes rather than
using the console's cached reports, it responds more slowly than other parts of the console.
Nodes are listed by the same Puppet certificate names used in the rest of the console interface.
As long as you stay within the live management page, your selection and filtering in the node list
will persist across all three tabs. The node list gets reset once you navigate to a different area of the
console.
Selecting Nodes
Clicking a node selects it or deselects it. Use the select all and select none controls to select and
deselect all nodes that match the current filter.
Only visible nodes (i.e., nodes that match the current filter) can be selected. Note that an empty
filter shows all nodes. You don't have to worry about accidentally commanding invisibly selected
nodes.
Filtering by Name
Use the node filter field to filter your nodes by name.
You can use the following wildcards in the node filter field:
? matches one character
* matches many (or zero) characters
Use the filter button or the enter key to confirm your search, then wait for the node list to be
updated.
Hint: Use the Wildcards allowed link for a quick pop-over reminder.
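These wildcards follow the same semantics as shell glob patterns, so their behavior can be sketched with a shell case statement (the node names below are hypothetical examples, not part of any real deployment):

```shell
# Sketch of the node filter's wildcard matching using shell globs, which
# share the ? (exactly one character) and * (any run, including empty)
# rules. Node names are hypothetical.
matches() {
  case "$1" in
    $2) echo "match" ;;
    *)  echo "no match" ;;
  esac
}
matches "web01.example.com" "web0?.example.com"   # ? matches the single digit
matches "db01.example.com"  "web*"                # prefix does not match
matches "web01.example.com" "*.example.com"       # * spans the hostname
```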
Advanced Search
You can also filter by Puppet class or by the value of any fact on your nodes. Click the advanced
search link to reveal these fields.
Hint: Use the common fact names link for a pop-over list of the most useful facts. Click a fact
name to copy it to the filter field.
You can browse the inventory data in the console's node views to find fact values to search with;
this can help when looking for nodes similar to a specific node. You can also check the list of core
facts for valid fact names.
Filtering by Puppet class can be the most powerful filtering tool on this page, but it requires you to
have already assigned classes to your nodes. See the chapter on grouping and classifying nodes for
more details.
Tabs
The live management page is split into three tabs.
The Browse Resources tab lets you browse, search, inspect, and compare resources on any
subset of your nodes.
The Control Puppet tab lets you invoke Puppet-related actions on your nodes. These include
telling any node to immediately fetch and apply its configuration, temporarily disabling puppet
agent on some nodes, and more.
The Advanced Tasks tab lets you invoke orchestration actions on your nodes. It can invoke both
the built-in actions and any custom actions you've installed.
The Browse Resources Tab
The interface of the Browse Resources tab is covered in the Orchestration: Browsing Resources
chapter of this manual.
The Control Puppet Tab
The Control Puppet tab consists of a single action list (see below) with several Puppet-related
actions. Detailed instructions for these actions are available in the Orchestration: Control Puppet
page of this manual.
ACTION LISTS
Action lists contain groups of related actions; for example, the service list has actions for starting,
stopping, restarting, and checking the status of services:
These groups of actions come from the MCollective agent plugins you have installed, and each
action list corresponds to one plugin. Both default and custom plugins are included on the
Advanced Tasks page.
Invoking Actions
You can invoke actions from the Control Puppet and Advanced Tasks tabs.
To invoke an action, you must be viewing an action list.
1. Click the name of the action you want. It will reveal a red Run button and any available argument
fields (see below). Some actions do not have arguments.
2. Enter any arguments you wish to use.
3. Press the Run button; Puppet Enterprise will show that the action is running, then display any
results from the action.
If several nodes have similar results, they'll be collapsed to save space; you can click any result
group to see which nodes have that result.
Invoking an action with an argument:
An action in progress:
Results:
Argument Fields
Some arguments are mandatory, and some are optional. Mandatory arguments will be denoted with
a red asterisk (*).
Although all arguments are presented as text fields, some arguments have specific format
requirements:
The format of each argument should be clear from its description; otherwise, check the
documentation for the action. Documentation for PE's built-in actions is available at the list of
built-in actions.
Arguments that are boolean in nature (on/off-type arguments) must have a value of true or
false; no other values are allowed.
Next: Managing Node Requests
Node request management allows sysadmins to view and respond to node requests graphically,
from within the console. This means nodes can be approved for addition to the deployment without
needing access to the puppet master or using the CLI. For further security, node request
management supports the console's user management system: only users with read/write
privileges can take action on node requests.
Once the console has been properly configured to point at the appropriate Certificate Authority
(CA), it will display all of the nodes that have generated Certificate Signing Requests (CSRs). You
can then approve or deny the requests, individually or in a batch.
For each node making a request, you can also see its name and associated CSR fingerprint.
Viewing Node Requests
You can view the number of pending node requests from anywhere in the console by checking the
indicator in the top right of the main menu bar.
Click on the pending nodes indicator to view and manage the current requests.
You will see a view containing a list of all the pending node requests. Each item on the list shows
the node's name and its corresponding CSR's fingerprint. (Click on the truncated fingerprint to view
the whole thing in a pop-up.)
If there are no pending node requests, you will see some instructions for adding new nodes. If this
is not what you expect to see, the location of your Certificate Authority (CA) may not be configured
correctly.
Rejecting and Approving Nodes
The ability to respond to node requests is linked to your user privileges. You must be logged in to
the console as a user with read/write privileges before you can respond to requests.
Use the buttons to accept or reject nodes, singly or all at once. Note that once a node request is
approved, the node will not show up in the console until the next puppet run takes place. This
could be as long as 30 minutes, depending on how you have set up your puppet master. Depending
on how many nodes you have in your site total, and on the number of pending requests, it can also
take up to two seconds per request for Reject All or Accept All to finish processing.
In some cases, DNS altnames may be set up for agent nodes. In such cases, you cannot use the
console to approve/reject node requests. The CSR for those nodes must be accepted or rejected
using puppet cert on the CA. For more information, see the DNS altnames entry in the
configuration reference.
In some cases, attempting to accept or reject a node request will result in an error. This is typically
because the request has been modified somehow, usually by being accepted or rejected elsewhere
(e.g. by another user or from the CLI) since the request was first generated.
Accepted/rejected nodes will remain displayed in the console for 24 hours after the action is taken.
This interval cannot be modified. However, you can use the Clear accepted/rejected requests button
to clean up the display at any time.
WORKING WITH REQUESTS FROM THE CLI
You can still view, approve, and reject node requests using the command line interface.
You can view pending node requests in the CLI by running
$ sudo puppet cert list
For more information on working with certificates from the CLI, see the Puppet tools guide or view
the man page for puppet cert.
Configuration Details
By default, the location of the CA is set to the location of PE's puppet master. If the CA is in a
custom location (as in cases where there are multiple puppet masters), you will have to set the
ca_server and ca_port settings in the /opt/puppet/share/puppet-dashboard/config/settings.yml file.
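For example, the relevant settings.yml entries might look like the following (the hostname here is an illustrative assumption, not a default; 8140 is the standard puppet master port):

```yaml
# Illustrative values only: point the console at a CA on another host.
ca_server: 'ca.example.com'   # hypothetical CA hostname
ca_port: 8140                 # standard puppet master/CA port
```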
When upgrading PE from a version before 2.7.0, the upgrader will convert the currently installed
auth.conf file to one that is fully managed by Puppet and which includes a new rule for request
management. However, if auth.conf has been manually modified prior to the upgrade, the
upgrader will NOT convert the file. Consequently, to get request management working, you will need to add the new
rule manually by adding the code below into /etc/puppetlabs/puppet/auth.conf:
path /certificate_status
method find, search, save, destroy
auth yes
allow pe-internal-dashboard
| parameter | description | value types | default value | required |
|---|---|---|---|---|
| path | | string | $title | no |
| acl_method | | string, array | | no |
| auth | | string | yes | no |
| allow | | array | [] | no |
| order | order in auth.conf file | string | 99 | no |
| regex | | bool | false | no |
| environment | environments to allow | string | | no |
Note: To use the console to assign node configurations, you must be logged in as a read-write or admin level user. Read-only users can view node configuration data, but cannot
modify it.
Classes
The classes the console knows about are a subset of the classes available to the puppet master. You
must explicitly add classes to the console before you can assign them to any nodes or groups.
Adding New Classes
To add a new class to the console, navigate to the Add classes page by clicking one of the
following:
The Add classes button in the console's sidebar
The Add new classes link in the upper right corner of the class list page
The Add classes page allows you to easily add classes that are detected on the puppet master
server, as well as manually add classes that can't be autodetected.
The Add classes page displays a list of classes from the puppet master server. The list only includes
classes from the default production environment; classes that only exist in other environments
(test, dev, etc.) will not be in the list and must be added manually (see below).
To select one or more classes from the list, click the checkbox next to each class you wish to add.
To browse more easily, you can use the text field above the list, which filters the list as you type.
Filtering is not limited to the start of a class name; you can type substrings from anywhere within
the class name.
Once you have selected the classes you want, click the Add selected classes button at the bottom of
the page to finalize your choices. The classes you added can now be assigned to nodes and groups.
Note that you must click Add selected classes to finish; otherwise your classes will not be added
to the console.
VIEWING DOCUMENTATION FOR DETECTED CLASSES
The list of detected classes includes short descriptions, which are extracted from comments in the
Puppet code where the class is defined.
To view the full documentation from these comments, you can click the show more link next to a
description. This will display the docs for that class, formatted using RDoc markup.
You may need to manually add certain classes to the console. This can be necessary if you are
running multiple environments, some of which contain classes that cannot be found in the
production environment.
To manually add a class, use the text fields under the Don't see a class? header near the bottom of
the page.
1. Type the complete, fully qualified name of the class in the class name field.
2. Optionally, type a description for the class in the description field.
3. Click the green plus (+) button to the right of the text fields, which becomes enabled after you
have entered a name.
After you click the plus (+) button, the class will appear in a new list below, with its checkbox
already selected. You may now click the Add selected classes button at the bottom of the page to
finish adding the class, or you can select additional classes, either manually or from the list of
detected classes. You must click Add selected classes to finish; otherwise, your classes will not
be added to the console.
For classes added from the autodetected list, the description on the class detail page will be
automatically filled in with documentation extracted from that class's Puppet code. However, this
documentation will be displayed raw instead of formatted as RDoc markup.
Nodes
Node Detail Pages
Each node in a Puppet Enterprise deployment has its own node detail page in the PE console. You
can reach a node detail page by clicking that node's name in any node list view.
From a node detail page, you can:
View the node's current variables, groups, and classes
Click the Edit button to navigate to the node edit page
Hide the node, causing it to stop appearing in node list views
Delete the node, removing all reports and information about that node from the console (it will
reappear as a new node if it submits a new Puppet run report)
View the node's recent activity and run status (see Viewing Reports & Inventory Data)
View the node's inventory data (see Viewing Reports & Inventory Data)
Note: You can only assign classes that are already known to the console. See Adding New
Classes on this page for details.
To remove a class from a node, click the Remove class link next to the class's name. Note that
classes inherited from a group can't be modified from the node edit page; you must either edit them
from the group page, or remove the node from that group.
To edit class parameters for a class, click the Edit parameters link next to its name. See the next
section of this page for details.
After making edits, always click the Update button to save your changes.
When a class is inherited from a group, its parameters can't be modified from the node edit page; you must edit them from
the group page, or else explicitly add the class to the node.
To set class parameters, click the Edit parameters link next to a class name on a node edit page.
This will bring up a class parameters dialog.
The class parameters dialog allows you to easily add values for any parameters that can be detected
from the puppet master server. It also lets you manually add parameters that can't be autodetected.
Note: Class parameters can be strings, booleans, numbers, hashes, or arrays. The PE
console will automatically convert the strings true and false to real boolean values. Hashes
and arrays should be expressed using Ruby-style syntax.
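For instance, values typed into the console's parameter fields might look like the following (the parameter names here belong to a hypothetical ntp class, used purely for illustration):

```
servers:    ["0.pool.ntp.org", "1.pool.ntp.org"]   # array (Ruby-style syntax)
options:    {"tinker" => "panic"}                  # hash (Ruby-style syntax)
autoupdate: true                                   # the string true becomes a real boolean
version:    latest                                 # anything else is treated as a string
```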
ADDING VALUES FOR DETECTED PARAMETERS
The class parameters dialog displays a list of parameters from the puppet master server. The list
only includes the parameters this class has in the default production environment. If a version of
this class in another environment has extra parameters, or if the class doesn't exist in production,
those parameters won't appear and must be added manually.
The main (autodetected) parameter list includes the names of the known parameters under the Key
heading, and their current values.
Parameters that are using their default values will have that value shown in grey text. This value
may be a literal value, or it may be a Puppet variable. (This is generally the case for modules that
use the params class pattern, or for classes whose parameters default to fact values.) You can
enter a new value if you choose.
Parameters that have had values set by a user are displayed with black text and a blue
background. They also have a Reset to default control next to the value.
Parameters with no user-set value and no default value are displayed with a white background
and no text. These parameters generally must be assigned a value before the class will work.
To add or change a value for a detected parameter, type a new value in the Value field. Alternately,
you can use the Reset to default control next to the value to restore the default value. Default values
can be viewed in a tooltip by hovering your cursor over the Value field for the parameter.
Remember to click the Done button to exit the dialog, and click the Update button on the
node edit page to save your changes.
MANUALLY ADDING PARAMETERS
You may need to manually add certain parameters for a class. This can be necessary if you are
running multiple environments and some of them contain newer versions of certain classes that
include parameters that can't be found in the production versions.
To manually add a parameter, use the text fields under the Other parameters header.
Type the name of the class parameter in the Add a parameter field, then type a value in the Value
field. Click the green plus (+) button to the right of the text fields, which becomes enabled after you
have entered a name.
Instead of a Reset to default control, the list of manually-added parameters includes Delete links for
each parameter, which will remove the parameter and its value.
Remember to click the Done button to exit the dialog, and then click the Update button on
the node edit page to save your changes.
SUPPORTED DATA TYPES
Any data type not recognized as a boolean, number, hash, or array will be treated as a string.
Hashes and arrays are expressed using Ruby-style syntax.
Editing Groups on Nodes
Assigning a node to a group will cause that node to inherit all of the classes, class parameters, and
variables assigned to that group. It will also inherit the configuration data from any group that
group is a member of.
Nodes can override the configuration data they inherit from their group(s); the main limitation on
this is that you must explicitly add a class to a node before assigning class parameters that differ
from those inherited from a group.
To add a node to a group, start typing the group's name into the Add a group text field on the
node edit page. As you type, an auto-completion list of the most likely choices appears; the list
continues to narrow as you type more. To finish selecting a group, click a choice from the list or use
the arrow keys to select one and press enter.
To remove a node from a group, click the Remove node from group link next to the group's name.
Note that groups inherited from another group can't be removed via the node edit page; you
must either remove the group from the other group's page, or remove the node from the other group.
Note that you can also edit group membership from a group edit page.
Note: Variables can only be strings. The PE console does not support setting arrays, hashes,
or booleans as variables.
Groups
Groups let you assign classes and variables to many nodes at once. This saves you time and makes
the structure of your site more visible.
Nodes can belong to many groups, and will inherit classes and variables from all of them. Groups
can also be members of other groups, and will inherit configuration information from their parent
group the same way nodes do.
Special Groups
Puppet Enterprise automatically creates and maintains several special groups in the console:
THE DEFAULT GROUP
The console automatically adds every node to a group called default. You can use this group for
any classes you need assigned to every single node.
Nodes are added to the default group by a periodic background task, so it may take a few minutes
after a node first checks in before it joins the group.
THE MCOLLECTIVE AND NO MCOLLECTIVE GROUPS
These groups are created when initially setting up a Puppet Enterprise deployment, but nodes are not
automatically added to them.
From a group detail page, you can view the currently assigned configuration data for that group, or
use the Edit button to assign new configuration data. You can also delete the group, which will
cause any members to lose membership in the group.
Group detail pages also show any groups of which that group is a member (under the Groups
header) and any groups that are members of that group (under the Derived groups header).
Editing Nodes on Groups
You can change the membership of a group from both node edit pages and group edit pages.
To add a node to a group from a group edit page, start typing into the Add a node text field. As you
type, an auto-completion list of the most likely choices appears; the list continues to narrow as you
type more. To finish selecting a node, click a choice from the list or use the arrow keys to select one
and press enter.
If you choose to go ahead and create a conflict, any affected nodes will receive reduced
configurations from the puppet master; the console will decline to provide any configuration data
for those nodes until you resolve the conflict. Note that this will not necessarily appear as a run failure;
the node will simply not attempt to manage resources that would have been managed by classes
from the PE console. To restore the nodes to full management, you must fix the conflict.
When viewing a node page, conflicts are shown as red warning (!) icons next to the affected
variables or classes. You can click the icon to bring up a summary of the conflict, showing the
sources of the conflicting values.
The event inspector page displays two panes of data. Clicking an item will show its details (and any
sub-items) in the detail pane on the right. The context pane on the left always shows the list of
items from which the one in the right pane was chosen, to let you easily view similar items and
compare their states.
To backtrack out of the current list of items, you can use the breadcrumb navigation or the previous
button (appearing left of the left pane after you've drilled in at least one level). The back and
forward buttons in your browser will behave normally, returning you to the previously loaded URL.
You can also bookmark pages as you investigate events on classes, nodes, and resources, allowing
you to return to a previous set of events. However, after subsequent Puppet runs, the contents of
the bookmarked pages may be different when you revisit them. Also, if there are no changes for a
selected time period, the bookmarks may show default text indicating there were no events on that
class, node, or resource.
You can export data in the right pane to a CSV le using the Export table as CSV link at the top
right of the pane.
Events
An event is PE's attempt to modify an individual property of a given resource. During a Puppet
run, Puppet compares the current state of each property on each resource to the desired state for
that property. If Puppet successfully compares them and the property is already in sync (the current
state is the desired state), Puppet moves on to the next property without noting anything. Otherwise, it will
attempt some action and record an event, which will appear in the report it sends to the puppet
master at the end of the run. These reports provide the data event inspector presents.
There are four kinds of events, all of which are shown in event inspector:
Change: a property was out of sync, and Puppet had to make changes to reach the desired state.
Failure: a property was out of sync; Puppet tried to make changes, but was unsuccessful.
No-op: a property was out of sync, but Puppet was previously instructed to not make changes on
this resource (via either the --noop command-line option, the noop setting, or the noop =>
true metaparameter). Instead of making changes, Puppet will log a no-op event and report the
changes it would have made.
Skip: a prerequisite for this resource was not met, so Puppet did not compare its current state to
the desired state. (This prerequisite is either a failure in one of the resource's dependencies or a
timing limitation set with the schedule metaparameter.) The resource may be in sync or out of
sync; Puppet doesn't know yet.
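As a sketch of how a no-op event arises, a resource can carry the noop metaparameter (the file path and content below are hypothetical, chosen only for illustration):

```puppet
# Hypothetical resource: with noop => true, Puppet logs the change it
# would have made as a no-op event instead of applying it.
file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
  noop    => true,
}
```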
Perspectives
Event inspector can use three perspectives to correlate and contextualize information about events:
Classes
Nodes
Resources
For example, if you were concerned about a failed service, say Apache or MongoDB, you could start
by looking into failed resources or classes. On the other hand, if you were experiencing a
geographic outage, you might start by drilling into failed node events.
Switching between perspectives can help you nd the common threads among a group of failures,
and follow them to a root cause. One way to think about this is to see the node as where an event
takes place while a class shows what was changed, and a resource shows how that change came
about.
Each sub-list shows the number of events for that perspective, both as per-event-type
counts and as bar graphs which measure against the total event count from that perspective. (For
example, if four classes have events, and two of those classes have events that are failures, the
Classes with events bar graph will be at 50%.)
You can click any item in the sub-lists (classes with failures, nodes with events, etc.) to load more
specific info into the detail pane and begin looking for the causes of notable events. Until an item is
selected, the right pane defaults to showing classes with failures.
After you click Testweb, you can select the Nodes with failures tab or the Resources with failures
tab, depending on how you want to investigate the failure on the class.
You click the Resources with failures tab, which loads a detail view showing failed resources. In this
case, you can see in the detail pane that there is an issue with a file resource, specifically
/var/www/first/.htaccess.
Next, you drill down further by clicking on the failed resource in the detail pane. Note that the left
pane now displays the failed resource info that was in the detail pane previously. This helps you
stay aware of the context you're searching in. You can use the previous button next to the left
pane, the breadcrumb trail at the top, or the back button in your browser to step back through the
process, if you wish.
After clicking the failed resource, the detail pane now shows the node it failed on.
You bookmark this page and email the link to your team so they can see the specifics of the failure.
You click on the failure, and the detail pane loads the specifics of the failure, including the config
version associated with the run and the specific line of code and manifest where the error occurs.
You see from the error message that the error was caused by the manifest trying to set the owner
of the file resource to a non-existent user (Message: Could not find user www-data) on the
intended platform.
You now know the cause of the failure and which line of which manifest you need to edit to resolve
the issue. If you need help figuring out the issue with your code, you might wish to try Geppetto, an
IDE that can help diagnose puppet code issues. You'll probably also be having a word with your
colleagues regarding the importance of remembering the target OS when working on a module!
If a given puppet run restarts PuppetDB, puppet will not be able to submit a run report from that
run to PuppetDB since, obviously, PuppetDB is not available. Because event inspector relies on data
from PuppetDB, and PuppetDB reports are not queued, event inspector will not display any events
from that run. Note that in such cases, a run report will be available via the console's Reports tab.
Having a puppet run restart PuppetDB is an unlikely scenario, but one that could arise in cases
where some change to, say, a parameter in the puppetdb class causes the pe-puppetdb service to
restart. This is a known issue that will be fixed in a future release.
RUNS WITHOUT EVENTS NOT DISPLAYED
If a run encounters a catastrophic failure, where an error prevents a catalog from compiling, event
inspector will not display any failures, because no events actually occurred. It's important to
remember that event inspector is primarily concerned with events, not runs.
TIME SYNC IS IMPORTANT
Keeping time synchronized across your deployment will help event inspector produce accurate
information and keep it running smoothly. Consider running NTP or similar across your
deployment. As a bonus, NTP is easily managed with PE and doing so is an excellent way to learn
puppet and PE if you are new to them. The PE Deployment Guide can walk you through one simple
method of NTP automation.
SCHEDULED RESOURCES LOG SKIPS
If the schedule metaparameter is set for a given resource, and the scheduled time has not yet
arrived, that resource will log a skip event in event inspector. Note that this is only true for user-defined schedules and does not apply to built-in scheduled tasks that happen weekly, daily, etc.
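For example, a user-defined schedule restricts when Puppet may apply a resource; outside its window, the resource logs a skip event. The schedule name, window, and resource below are illustrative, not taken from this guide:

```puppet
# Hypothetical maintenance window: this resource may only be
# applied between 2:00 and 4:00 each day.
schedule { 'maintenance':
  range  => '2:00 - 4:00',
  period => daily,
  repeat => 1,
}

file { '/etc/motd':
  ensure   => file,
  content  => "Managed by Puppet\n",
  schedule => 'maintenance',
}
```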
SIMPLIFIED DISPLAY FOR SOME RESOURCE TYPES
For resource types that take the ensure property (e.g. the user or file resource types), event
inspector will only display a single event when the resource is first created. This is because Puppet has
only changed one property (ensure), which sets all the baseline properties of that resource at once.
For example, all of the properties of a given user are created when the user is added, just as they
would be if the user was added manually. If a PE run changes properties of that user resource later,
each individual property change will be shown as a separate event.
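As a sketch, creating the following (hypothetical) user for the first time would appear in event inspector as one ensure event, even though several properties are being set at once:

```puppet
user { 'deploy':
  ensure     => present,     # first run: a single 'ensure' event
  uid        => '1200',      # later changes to uid, shell, etc. would
  shell      => '/bin/bash', # each appear as a separate event
  managehome => true,
}
```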
Next: Viewing Reports and Inventory Data
Node States
Depending on how its last Puppet run went, every node is in one of six states. Each state is
indicated by a specific color in graphs and the node state summary, and by an icon beside the
report or the node name in a report list or node list view.
Unresponsive: The node hasn't reported to the puppet master recently; something may be wrong.
The cutoff for considering a node unresponsive defaults to one hour, and can be configured in
settings.yml with the no_longer_reporting_cutoff setting. Represented by dark grey text.
This state has no icon; the node retains whatever icon the last report used.
Failed: During its last Puppet run, this node encountered some error from which it couldn't
recover. Something is probably wrong, and investigation is recommended. Represented by red
text or the failed icon.
No-op: During its last Puppet run, this node would have made changes, but since it was either
running in no-op mode or found a discrepancy in a resource whose noop metaparameter was
set to true, it simulated the changes instead of enforcing them. See the node's last report for
more details. Represented by orange text or the pending icon.
Changed: This node's last Puppet run was successful, and changes were made to bring the node
into compliance. Represented by blue text or the changed icon.
Unchanged: This node's last Puppet run was successful, and it was fully compliant; no changes
were necessary. Represented by green text or the unchanged icon.
Unreported: Although Dashboard is aware of this node's existence, it has never submitted a
Puppet report. It may be a newly-commissioned node, it may have never come online, or its copy
of Puppet may not be configured correctly. Represented by light grey text or the error icon.
Reading Reports
Graphs
Each node detail page has a pair of graphs: a histogram showing the number of runs per day and
the results of those runs, and a line chart tracking how long each run took. (Run status histograms
also appear on class detail pages, group detail pages, and last-run-status pages.)
Puppet Enterprise 3.3 User's Guide Viewing Reports and Inventory Data
The daily run status histogram is broken down with the same colors that indicate run status in the
console's sidebar: red for failed runs, orange for pending runs (where a change would have been
made, but the resource to be changed was marked as no-op), blue for successful runs where
changes were made, and green for successful runs that did nothing.
The run-time chart graphs how long each of the last 30 Puppet runs took to complete. A longer run
usually means changes were made, but could also indicate heavy server load or some other
circumstance.
Reports
Each node page has a short list of recent reports, with a More button at the bottom for viewing
older reports:
Each report represents a single Puppet run. Clicking a report will take you to a tabbed view that
splits the report up into metrics, log, and events.
Metrics is a rough summary of what happened during the run, with resource totals and the time
spent retrieving the configuration and acting on each resource type.
Events is a list of the resources the run managed, sorted by whether any changes were made. You
can click on a changed resource to see which attributes were modied.
Facts include things like the operating system (operatingsystem), the amount of memory
(memorytotal), and the primary IP address (ipaddress). You can also add arbitrary custom facts to
your Puppet modules, and they too will show up in the inventory.
The facts you see in the inventory can be useful when filtering nodes in the live management page.
Exporting Data
You can export the inventory and report tables to a CSV file using the Export as CSV link at the top
right of the tables.
Next: Managing Users
Logging In
You will encounter the login screen whenever you try to access a protected part of the console. The
screen will ask for your email address and password. After successfully authenticating, you will be
taken to the part of the console you were trying to access.
When you're done working in the console, choose Logout from the user account menu. Note that
you will be logged out automatically after 20 minutes.
Note: User authentication services rely on a PostgreSQL database. If this database is restarted for
any reason, you may get an error message when trying to log in or out. See known issues for more
information.
Viewing Your User Account
To view your user information, access the user account menu by clicking on your username (the
first part of your email address) at the top right of the navigation bar.
Choose My account to open a page where you can see your username/email and your user access
level (admin, read-write, or read-only), as well as text boxes for changing your password.
Selecting Admin Tools will open a screen showing a list of users by email address, their access role,
and their status. Note that users who have not yet activated their accounts by responding to the
validation email are shown with a pending status.
Puppet Enterprise 3.3 User's Guide Managing Console Users
Click on a user's row to open a pop-up pane with information about that user. The pop-up will
show the user's name/email, their current role, their status, and other information. If the user has
not yet validated their account, you will also see the link that was generated and included in the
validation email. Note that if there is an SMTP issue and the email fails to send, you can manually
send this link to the user.
To modify the settings for a given user, click on the user's row to open the pop-up pane. In this
pane, you can change their role and their email address or reset their password. Don't forget to
click the Save changes button after making your edits.
Note that resetting a password or changing an email address will change that user's status back to
Pending, which will send them another validation email and require them to complete the validation
and password setting process again.
For users who have completed the validation process, you can also enable or disable a user's
account. Disabling the account will prevent that user from accessing the console, but will not
remove them from the users database.
ADDING/DELETING USERS
To add a new user, open the user admin screen by choosing Admin Tools in the user menu. Enter
the user's email address and their desired role, then click the Add user button. The user will be
added to the list with a pending status, and an activation email will be automatically sent to them.
To delete an existing user (including pending users), click on the user's name in the list and then
click the Delete account button. Note that deleting a user cannot be undone, so be sure this is what
you want to do before proceeding.
Working with Users From the Command Line
Several actions related to console users can be done from the command line using rake tasks. This
can be useful for things like automating user creation/deletion or importing large numbers of
users from an external source all at once. All of these tasks should be run on the console server
node.
Note that console_auth rake tasks that list, add, or remove users must be run using the bundle
exec command. For example:
cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:list
The console_auth rake tasks will add their actions to the console_auth log, located by default at
/var/log/pe-console-auth/auth.log.
ADDING OR MODIFYING USERS
The db:create_user rake task is used to add users. The command is issued as follows:
cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:create_user USERNAME="<email address>" PASSWORD="<password>" ROLE="< Admin | Read-Only | Read-Write >"
If you specify a user that already exists, the same command can be used to change attributes for
that user, e.g. to reset a password or elevate/demote privileges.
DELETING USERS
The db:users:remove task is used to delete users. The command is issued as follows:
cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:remove[<email address>]
VIEWING USERS
To print a list of existing users to the screen use the db:users:list task as follows:
cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:list
LOCKED USERS
Users will get locked out of their accounts after ten failed authentication attempts. Once locked out,
users will not be able to access the console and will see a message on the login screen letting them
know their account has been locked. A similar message will appear on the command line if users
are attempting access that way. Admin users will see a warning sign next to a locked user in the
admin screen, and a warning message will be added to a locked user's detail view. Their account
status will also be set to disabled. An admin can restore a user's access by either resetting the
user's password or changing the user's status back to enabled.
Note: To use a third-party authentication system, you must configure two files on the
console server. See the Configuring Third-Party Authentication Services section of the
console config page for details.
Third-party services are only used for authenticating users; the console's RBAC still manages each
user's privileges. If a user has never logged in before, they are assigned a default role. (This role
can be configured. See the cas_client_config.yml section of the config instructions for details.)
External users' access privileges are managed in the same manner as internal users, via the
console's user administration interface.
The account interface for an externally authenticated user differs slightly from internal users in that
external users do not have UI for changing their passwords or deleting accounts.
The user administration page will also indicate the authentication service (Account Type) being
used for a given user and provide a link to a legend that lists the external authentication services
and the default access privileges given to users of a given service.
Lastly, note that while built-in auth accounts use the email address provided, AD/LDAP accounts are
generally accessed using just the username (e.g. a.user), although this may vary in your
organization's specific implementation.
Next: Console Inventory Search
This field allows you to enter a fact name, a value, and a comparison operator. After you have
searched for one fact, you may narrow down the search by adding additional facts.
The search results page will show a list of nodes, as well as a summary of their recent Puppet runs.
You can click nodes in the list to browse to their detail pages.
To choose facts to search for, you should view the inventory data for a node that resembles the
nodes you are searching for.
Next: Configuring & Tuning the Console
Puppet Enterprise 3.3 User's Guide Rake API for Querying and Modifying Console Data
The <TASK AND ARGUMENTS> placeholder is the only part that will differ between the various
tasks; the rest is boilerplate that should be used with every task.
There are two ways to specify arguments for a task. PE 3.0.1 and later can use both styles; PE 3.0.0
(and the PE 2.x series) can only use the environment variable style.
Task Arguments as Parameters (task[argument,argument,...])
This invocation style is available in PE 3.0.1 and later. It allows invoking multiple tasks at once,
which was not possible with the environment variable style.
Use the following syntax to specify arguments as parameters:
node:addgroup["switch07.example.com","no mcollective"]
Note: The PE console's rake tasks can all be invoked multiple times in the same run. This
differs from rake's default behavior, which will suppress additional invocations of the same
command. If you need tasks to run only once per command for some reason, you can add
allow_repeating_tasks=false to the command line.
ESCAPING
If the value of any argument contains a comma, the comma must be escaped with one or more
backslashes. The number of escape characters depends on how the string is quoted.
With single quotes, use one backslash.
With double quotes, use two backslashes.
The examples below would both set a value of no mcollective,network devices for the second
argument:
node:add['switch07.example.com','no mcollective\,network devices']
node:add["switch07.example.com","no mcollective\\,network devices"]
In two tasks (node:variables and nodegroup:variables), the value of an argument might consist
of a comma-separated list whose terms, themselves, contain commas. In these cases, the interior
commas should be escaped with three backslashes for single-quoted strings, and six backslashes
for double-quoted strings. The examples below would both set the value of the
haproxy_application_servers variable to
web04.example.com,web05.example.com,web06.example.com:
nodegroup:variables['load balancers','haproxy_application_port=3000\,haproxy_application_servers=web04.example.com\\\,web05.example.com\\\,web06.example.com']
nodegroup:variables["load balancers","haproxy_application_port=3000\\,haproxy_application_servers=web04.example.com\\\\\\,web05.example.com\\\\\\,web06.example.com"]
Deprecation note: Invoking tasks like this will cause deprecation warnings, but it will
continue to work for the duration of the Puppet Enterprise 3.x series, with removal
tentatively planned for Puppet Enterprise 4.0.
Use the following syntax to specify arguments as environment variables:
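As a sketch (using the parameter names listed for the node:addgroup task below, inside the usual rake boilerplate), an environment-variable invocation looks like this; exact quoting may vary:

```
node:addgroup name="switch07.example.com" group="no mcollective"
```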
Parameters:
name - node name
node:variables[name]
List variables for a node.
Parameters:
name - node name
Parameters:
name - node name
groups - groups to assign to the node
node:addclass[name,class]
Add a class to a node.
Parameters:
name - node name
class - classes to add to the node
node:addclassparam[name,class,param,value]
Add a class param to a node. If the parameter already exists, its value is overwritten.
Parameters:
name - node name
class - class (already assigned to the node)
param - parameter name
value - parameter value
node:addgroup[name,group]
Add a group to a node.
Parameters:
name - node name
group - group to add to the node
node:delclassparam[name,class,param]
Remove a class param from a node.
Parameters:
name - node name
class - class name
param - parameter name
node:variables[name,variables]
Add (or edit, if they exist) variables for a node. Variables must be specified as a comma-separated
list of variable=value pairs; the list must be quoted and the commas must be escaped.
Parameters:
name - node name
variables - variables specified as <VARIABLE>=<VALUE>,<VARIABLE>=<VALUE>,...
Deprecation note: Invoking tasks like this will cause deprecation warnings, but it will
continue to work for the duration of the Puppet Enterprise 3.x series, with removal
tentatively planned for Puppet Enterprise 4.0.
nodegroup:listgroups name=<NAME>
List child groups that belong to a node group.
nodegroup:variables name=<NAME>
List variables for a node group.
Add (or edit, if they exist) variables for a node group. Variables must be specified as a comma-separated list of variable=value pairs; the list must be quoted.
If you want to set a variable's value to a string containing commas, you must escape those commas.
Use a single backslash for single-quoted strings, and two backslashes for double-quoted strings.
smtp:
address: mail.example.com
port: 25
use_tls: false
## Uncomment to enable SMTP authentication
#username: smtp_username
#password: smtp_password
Note: if you are using two-factor authentication with Google accounts, you must first create
an application-specific password in order to successfully log into the console.
Configuring cas_client_config.yml
The /etc/puppetlabs/console-auth/cas_client_config.yml file contains several commented-out lines under the authorization: key. Un-comment the lines that correspond to the RubyCAS
authenticators you wish to use, and set a new default_role if desired.
Each entry consists of the following:
A common identifier (e.g. local, or ldap, etc.), which is used in the console_auth database and
corresponds to the classname of the RubyCAS authenticator.
default_role, which defines the role to assign to users by default; allowed values are read-only, read-write, or admin.
description, which is simply a human-readable description of the service.
Puppet Enterprise 3.3 User's Guide Configuring & Tuning the Console & Databases
The order in which authentication services are listed in the cas_client_config.yml file is the order
in which the services will be checked for valid accounts. In other words, the first service that returns
an account matching the entered user credential is the service that will perform authentication and
log-in.
This example shows how to edit the file if you want to use AD and the built-in (local) auth services
while leaving Google and LDAP disabled:
## This configuration file contains information required by any web
## service that makes use of the CAS server for authentication.
authentication:
## Use this configuration option if the CAS server is on a host different
## from the console-auth server.
# cas_host: master:443
## The port CAS is listening on. This is ignored if cas_host is set.
# cas_port: 443
## The session secret is randomly generated during installation of Puppet
## Enterprise and will be regenerated any time console-auth is enabled or
## disabled.
session_key: 'puppet_enterprise_console'
session_secret: [REDACTED]
## Set this to true to allow anonymous users read-only access to all of
## Puppet Enterprise Console.
global_unauthenticated_access: false
authorization:
local:
default_role: read-only
description: Local
# ldap:
# default_role: read-only
# description: LDAP
activedirectoryldap:
default_role: read-only
description: Active Directory
# google:
# default_role: read-only
# description: Google
Note: If your console server ever ran PE 2.5, the commented-out sections may not be present
in this file. To find example config text that can be copied and pasted into place, look for a
cas_client_config.yml.rpmnew or cas_client_config.yml.dpkg-new file in the same
directory.
Configuring rubycas-server/config.yml
The /etc/puppetlabs/rubycas-server/config.yml file is used to configure RubyCAS to use
external authentication services. As before, you will need to un-comment the section for the third-party service you wish to enable and configure it as necessary.
Note: If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain
the commented sections for the third-party services. We've provided the commented
sections below, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
The values for the listed keys are LDAP and ActiveDirectory standards. If you are not the
administrator of those databases, you should check with that administrator for the correct values.
GOOGLE AUTHENTICATION
# password:
# host: localhost
# user_table: user
# username_column: username
# password_column: password
#
#
ACTIVEDIRECTORY AUTHENTICATION
# - class: CASServer::Authenticators::SQL
# database:
# adapter: postgresql
# database: some_database_with_users_table
# username: root
# password:
# host: localhost
# user_table: user
# username_column: username
# password_column: password
#
# During authentication, the user credentials will be checked against the first
# authenticator and on failure fall through to the second authenticator.
Note: The commented-out examples in the config file may or may not have a line break
after the hyphen; both are valid YAML.
# OK
- class: CASServer::Authenticators::SQLEncrypted
# Also OK
class: CASServer::Authenticators::SQLEncrypted
As the above examples show, it's generally best to specify just dc= attributes in the base key. The
criteria for the Organizational Unit (OU) and Common Name (CN) should be specified in the filter
key. The value of the filter: key is where authorized users should be located in the AD
organizational structure. Generally speaking, the filter: key is where you would specify an OU or
an AD Group. In order to authenticate, users will need to be in the specified OU or Group.
Also note that the value for the filter: key must be the full name for the leftmost cn=; you cannot
use the user ID or logon name. In addition, the auth_user: key requires the full Distinguished
Name (DN), including any CNs associated with the user and all of the dc= attributes used in the DN.
[main]
To change the location of the console, you'll need to specify the console hostname, port, and
certificate name.
q_pe_check_for_updates=n
An Overview of Puppet
Note: This page gives a broad overview of how Puppet configures systems, and provides
links to deeper information. If you prefer to learn by doing, you can follow the Puppet
Enterprise quick start guides:
Quick Start: Using PE
Quick Start: Writing Modules
Summary of Puppet
Puppet Enterprise (PE) uses Puppet as the core of its configuration management features. Puppet
models desired system states, enforces those states, and reports any variances so you can track
what Puppet is doing.
To model system states, Puppet uses a declarative, resource-based language: this means a user
describes a desired final state (e.g. "this package must be installed" or "this service must be
running") rather than describing a series of steps to execute.
Puppet breaks configuration management out into four major areas of activity:
1. The user describes re-usable pieces of configuration by creating or downloading Puppet
modules.
2. The user assigns (and configures) classes to each machine in the PE deployment.
3. Each node fetches and applies its complete configuration from the puppet master server, either
on a recurring schedule or on demand. This configuration includes all of the classes that have
been assigned to that node. Applying a configuration enforces the desired state that was defined
by the user, and submits a report about any changes that had to be made.
4. The user may view aggregate and individual reports to monitor what resources have been
changed by Puppet.
Continue reading this page for an overview of the first three activities and links to deeper info. See
the Viewing Reports and Inventory Data page to learn how to monitor Puppet's activity from the PE
console.
You can change the priority of Puppet processes (puppet agent, puppet apply) using the priority
setting. This can be helpful if you want to manage resource-intensive loads on busy nodes. Note
that the process must be running as a privileged user if it is going to raise its priority.
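A puppet.conf sketch (the value shown is illustrative; on *nix the value is a niceness level, so higher numbers mean lower priority):

```ini
[agent]
# Run the agent at reduced priority on busy nodes
priority = 10
```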
DIFFERENT RUN INTERVAL
You can change the run interval by setting a new value for the runinterval setting in each agent
node's puppet.conf file.
This file is located at /etc/puppetlabs/puppet/puppet.conf on *nix nodes, and
<DATADIR>\puppet.conf on Windows.
Puppet Enterprise 3.3 User's Guide An Overview of Puppet
Make sure you put this setting in the [agent] or [main] block of puppet.conf.
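For example, to have agents check in every two hours (the interval value is illustrative):

```ini
[agent]
runinterval = 2h
```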
Since you will be managing this le on many systems at once, you may wish to manage
puppet.conf with a Puppet template.
RUN FROM CRON
On *nix nodes, the pe-puppet daemon process can sometimes use more memory than is desired.
This was a common problem in PE 2.x, which is largely solved in PE 3, but some users may still wish
to disable it.
You can turn off the daemon and still get scheduled runs by creating a cron task for puppet agent
on your *nix nodes. An example snippet of Puppet code, which would create this task on non-Windows nodes:
# Place in /etc/puppetlabs/puppet/manifests/site.pp on the puppet master
# node, outside any node statement.
# Run puppet agent hourly (with splay) on non-Windows nodes:
if $osfamily != 'windows' {
  cron { 'puppet_agent':
    ensure  => 'present',
    command => '/opt/puppet/bin/puppet agent --onetime --no-daemonize --splay --splaylimit 1h --logdest syslog',
    user    => 'root',
    minute  => 0,
  }
}
Remember, after creating this task you should turn off the pe-puppet service on *nix nodes.
Windows note: This is unnecessary on Windows, since it doesn't use the same version of the
pe-puppet service; the Windows service was implemented long after the *nix service, and
was designed from the start to limit memory usage. Additionally, it's more difficult on
Windows to make a scheduled task run multiple times a day.
ON-DEMAND ONLY
You can stop all scheduled runs by stopping the pe-puppet service on all nodes. This will cause
nodes to only fetch configurations when you explicitly trigger runs with the orchestration engine.
If you are only doing on-demand runs, you're likely to be running large numbers of nodes at once.
For best performance, you should take advantage of the orchestration engine's ability to run many
nodes in a controlled series.
Next: Puppet Modules and Manifests
Other References
This page consists mostly of small examples and links to detailed information. If you want
more complete context, you should read some of the following documents instead:
Learning the Puppet Language
If you are new to Puppet, start here. For a complete introduction to the Puppet language,
read and follow along with the Learning Puppet series, which will introduce you to the basic
concepts and then teach advanced class writing and module construction.
Learning Puppet
Quick Start
For those who learn by doing, the PE users guide includes a pair of interactive quick start
guides, which walk you through installing, using, hacking, and creating Puppet modules.
Quick Start: Using PE
Quick Start: Writing Modules
Modules in Context
The Puppet Enterprise Deployment Guide includes detailed walkthroughs of how to choose
modules and compose them into complete configurations.
Deployment Guide ch. 3: Automating Your Infrastructure
Geppetto IDE
Geppetto is an integrated development environment (IDE) for Puppet. It provides a toolset for
developing puppet modules and manifests that includes syntax highlighting, content
assistance, error tracing/debugging, and code completion features. Geppetto also provides
Puppet Enterprise 3.3 User's Guide Puppet Modules and Manifests
integration with git, enabling side-by-side comparison of code from a given repo complete
with highlighting, code validation, syntax error parsing, and expression troubleshooting.
In addition, Geppetto provides tools that integrate with Puppet products. It includes an
interface to the Puppet Forge, which allows you to create modules from existing modules on
the Forge as well as easily upload your custom modules. Geppetto also provides PE
integration by parsing PuppetDB error reporting. This allows you to quickly find the
problems with your puppet code that are causing configuration failures. For complete
information, visit the Geppetto documentation.
Printable References
These two cheat sheets are useful when writing your own modules or hacking existing
modules.
Module Layout Cheat Sheet
Core Resource Type Cheat Sheet
Manifests
Manifests are files containing Puppet code. They are standard text files saved with the .pp
extension. Most manifests should be arranged into modules.
Resources
The core of the Puppet language is declaring resources. A resource declaration looks like this:
# A resource declaration:
file { '/etc/passwd':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
}
When a resource depends on another resource, you should explicitly state the relationship to make
sure the resources are applied in the right order.
See the Resources page of the Puppet language reference for details about resource
declarations.
See the Relationships and Ordering page for details about relationships.
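As a sketch of what an explicit relationship looks like (the file and service names below are illustrative, not from this guide):

```puppet
file { '/etc/ssh/sshd_config':
  ensure => file,
  source => 'puppet:///modules/ssh/sshd_config',  # hypothetical module file
}
service { 'sshd':
  ensure    => running,
  # subscribe makes the file apply first, and restarts the service on change.
  subscribe => File['/etc/ssh/sshd_config'],
}
```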
By default, the ordering setting is configured for manifest ordering, but you will not see
this displayed in puppet.conf (located at /etc/puppetlabs/puppet/puppet.conf on the
puppet master).
To toggle the setting to random or title-hash, you will need to add it to the agent section;
for example:
[agent]
ordering = title-hash
environment = production
...
Classes are named blocks of Puppet code that can be assigned to nodes. They should be stored in
modules so that the puppet master can locate them by name.
Defined resources (i.e., defined resource types) extend the capability of classes and are stored in
the module structure. They cannot be assigned directly to nodes but can enable you to build much
more sophisticated classes.
See the Classes page of the Puppet language reference for details about defining and declaring
classes.
See the Defined Types page for details about defined resource types.
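For illustration, a minimal sketch of a defined type (the name, parameters, and file contents are hypothetical, not from this guide):

```puppet
# Stored in my_module/manifests/vhost.pp, so its full name is my_module::vhost.
define my_module::vhost ($docroot, $port = 80) {
  # $title is the unique name given each time the type is declared.
  file { "/etc/httpd/conf.d/${title}.conf":
    ensure  => file,
    content => "<VirtualHost *:${port}>\n  DocumentRoot ${docroot}\n</VirtualHost>\n",
  }
}
```

Unlike a class, a defined type can then be declared any number of times with different titles, e.g. `my_module::vhost { 'example.com': docroot => '/var/www/example' }`.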
Puppet Modules
Modules are a convention for arranging Puppet manifests so that they can be automatically located
and loaded by the puppet master. They can also contain plugins, static files for nodes to download,
and templates.
Modules can contain many Puppet classes. Generally, the classes in a given module are all
somewhat related. (For example, an apache module might have a class that installs and enables
Apache, a class that enables PHP with Apache, a class that turns on mod_rewrite, etc.)
A module is:
A directory
with a specific internal layout
which is located in one of the puppet master's modulepath directories.
In Puppet Enterprise, the main modulepath directory for users is located at
/etc/puppetlabs/puppet/modules on the puppet master server.
Module Structure
This example module, named my_module, shows the standard module layout:
my_module/: This outermost directory's name matches the name of the module.
  manifests/: Contains all of the manifests in the module.
    init.pp: Contains one class named my_module. This class's name must match the module's name.
    other_class.pp: Contains one class named my_module::other_class.
    my_defined_type.pp: Contains one defined type named my_module::my_defined_type.
    implementation/: This directory's name affects the class names beneath it.
      foo.pp: Contains a class named my_module::implementation::foo.
      bar.pp: Contains a class named my_module::implementation::bar.
Puppet Tools
Puppet is built on a large number of services and command-line tools. Understanding which to
reach for and when is crucial to using Puppet effectively.
You can read more about any of these tools by running puppet man <SUBCOMMAND> at the command
line.
Services
Puppet agent and puppet master are the heart of Puppet's architecture.
The puppet agent service runs on every managed Puppet Enterprise node. It fetches and applies
configurations from a puppet master server.
In Puppet Enterprise, the puppet agent runs without user interaction as the pe-puppet service;
by default, it performs a run every 30 minutes. You can also use the orchestration engine to
manually trigger Puppet runs on any nodes. (If you are logged into an agent node as an
administrator, you can also run sudo puppet agent --test from the command line.)
The puppet agent reads its settings from the [main] and [agent] blocks of
/etc/puppetlabs/puppet/puppet.conf.
The puppet master service compiles and serves configurations to agent nodes.
In Puppet Enterprise, the puppet master is managed by Apache and Passenger, under the
umbrella of the pe-httpd service. Apache handles HTTPS requests from agents, and it spawns
and kills puppet master processes as needed.
The puppet master creates agent configurations by consulting its Puppet modules and the
instructions it receives from the console.
The puppet master reads its settings from the [main] and [master] blocks of
/etc/puppetlabs/puppet/puppet.conf. It can also be configured conditionally by using
environments.
The PuppetDB service collects information from the puppet master, and makes it available to
other services.
The puppet master itself consumes PuppetDB's data in the form of exported resources. You can
also install a set of additional functions to do deeper queries from your Puppet manifests.
External services can easily integrate with PuppetDB's data via its query API. See the PuppetDB
manual's API pages for more details.
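As a sketch of such an integration (assuming the v3 query API shipped with PuppetDB 1.6 and its default plain-HTTP query port of 8080, which is typically bound to localhost on the PuppetDB server), an external tool could list every node known to PuppetDB with a single request:

```
$ curl 'http://localhost:8080/v3/nodes'
```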
Everyday Tools
The node requests page of the PE console is used to add nodes to your Puppet Enterprise
deployment.
After a new agent node has been installed, it requests a certificate from the master, which will
allow it to fetch configurations; the agent node can't be managed by PE until its certificate
request has been approved. See the documentation for the node requests page for more info.
When you decommission a node and remove it from your infrastructure, you should destroy its
certificate information by logging into the puppet master server as an admin user and running
puppet cert clean <NODE NAME>.
The puppet apply subcommand can compile and apply Puppet manifests without the need for a
puppet master. It's ideal for testing new modules (puppet apply -e 'include <CLASS NAME>'),
but can also be used to manage an entire Puppet deployment in a masterless arrangement.
The puppet resource subcommand provides an interactive shell for manipulating Puppet's
underlying resource framework. It works well for one-off administration tasks and ad-hoc
management, and offers an abstraction layer between various OSes' implementations of core
functionality.
$ sudo puppet resource package nano ensure=latest
notice: /Package[nano]/ensure: created
package { 'nano':
  ensure => '1.3.12-1.1',
}
Advanced Tools
See the cloud provisioning chapter of this guide for more about the cloud provisioning tools.
See the orchestration chapter of this guide for more about the command-line orchestration
tools.
Next: Puppet Data Library
PuppetDB
PuppetDB is a built-in part of PE 3.0 and later.
PuppetDB stores up-to-date copies of every node's facts, resource catalogs, and run reports as part
of each Puppet run. External tools can easily query and search all of this data over a stable,
versioned HTTP query API. This is a more full-featured replacement for Puppet's older Inventory
Service interface, and it enables entirely new functionality like class, resource, and event searches.
See the documentation for PuppetDB's query API here.
Since PuppetDB receives all facts for all nodes, you can extend its data with custom facts on your
puppet master server.
EXAMPLE: Using the old Puppet Inventory Service, a customer automated the validation and
reporting of their servers' warranty status. Their automation regularly retrieved the serial
numbers of all servers in the data center, then checked them against the hardware vendor's
warranty database using the vendor's public API to determine the warranty status for each.
Using PuppetDB's improvements over the inventory API, it would also be possible to correlate
serial number data with what the machines were actually being used for, by getting lists of
the Puppet classes being applied to each machine.
The Puppet Run Report Service provides push access to the reports that every node submits after
each Puppet run. By writing a custom report processor, you can divert these reports to any custom
service, which can use them to determine whether a Puppet run was successful, or dig deeply into
the specic changes for each and every resource under management for every node.
You can also write out-of-band report processors that consume the YAML files written to disk by
the puppet master's default report handler.
Learn more about the Puppet Run Report Service here.
EXAMPLE: Using the Puppet Resource Dependency Graph and Gephi, a visualization tool, a
customer identified unknown dependencies within a complicated set of configuration
modules. They used this knowledge to rewrite parts of the modules to get better
performance.
Learn more about the Puppet Resource Dependency Graph here.
Next: Puppet References
Puppet References
Puppet has a lot of moving parts and a lot of information to remember. The following resources will
help you keep the info you need at your fingertips and use Puppet effectively.
Resource Types
Resource types are the atomic unit of Puppet configurations, and there are a lot of them to
remember.
The Core Types Cheat Sheet is a fast, printable two-page guide to the most useful resource
types.
The Type Reference is the complete dictionary of Puppets built-in resource types. No other page
will be more useful to you on a daily basis.
Puppet Syntax
The Puppet Language Reference covers every part of the Puppet language as of Puppet 3.x.
References
For an exhaustive description of Puppet's configuration settings and auxiliary configuration
files, refer to the Configuring Puppet Guide.
For details, syntax, and options for the available configuration settings, visit the configuration
reference.
For details on how to configure access to Puppet's pseudo-RESTful HTTP API, refer to the Access
Control Guide.
Note: If you haven't modified the auth.conf file, it may occasionally be modified when
upgrading between Puppet Enterprise versions. However, if you HAVE modified it, the
upgrader will not automatically overwrite your changes, and you may need to manually
update auth.conf to accommodate new Puppet Enterprise features. Be sure to read the
upgrade notes when upgrading your puppet master to new versions of PE.
Configuring Hiera
Puppet in PE includes full Hiera support, including automatic class parameter lookup.
The hiera.yaml file is located at /etc/puppetlabs/puppet/hiera.yaml on the puppet master
server.
See the Hiera documentation for details about the hiera.yaml config file format.
To use Hiera with Puppet Enterprise, you must, at minimum, edit hiera.yaml to set a :datadir
for the :yaml backend, ensure that the hierarchy is a good fit for your deployment, and create
data source files in the data directory.
To learn more about using Hiera, see the Hiera documentation.
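A minimal hiera.yaml sketch along those lines (the datadir path and hierarchy levels are illustrative choices, not requirements):

```yaml
---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppetlabs/puppet/hieradata   # illustrative location for data source files
:hierarchy:
  - "%{clientcert}"   # per-node overrides, from <certname>.yaml
  - common            # site-wide defaults, from common.yaml
```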
Quick Links
Special orchestration tasks:
Controlling Puppet
Browsing and Searching Resources
General orchestration tasks:
Orchestration Fundamentals
Actions and Plugins
Orchestration isn't quite like SSH, PowerShell, or other tools meant for running arbitrary shell code
in an ad-hoc way.
PE's orchestration is built around the idea of predefined actions: it is essentially a highly parallel
remote procedure call (RPC) system.
Actions are distributed in MCollective agent plugins, which are bundles of several related actions.
Many plugins are available by default; see Built-In Orchestration Actions.
You can extend the orchestration engine by downloading or writing new plugins and adding
them to the engine with Puppet.
Invoking Actions and Filtering Nodes
The core concept of PE's orchestration is invoking actions, in parallel, on a select group of nodes.
Typically you choose some nodes to operate on (usually with a filter that describes the desired fact
values or Puppet classes), and specify an action and its arguments. The orchestration engine then
runs that action on the chosen nodes, and displays any data collected during the run.
Puppet Enterprise can invoke orchestration actions in two places:
In the PE console (on the live management page)
On the command line
You can also allow your site's custom applications to invoke orchestration actions.
Special Interfaces: Puppet Runs and Resources
In addition to the main action invocation interfaces, Puppet Enterprise provides special interfaces
for two of the most useful orchestration tasks:
Remotely controlling the puppet agent and triggering Puppet runs
Browsing and comparing resources across your nodes
Orchestration Internals
Components
The orchestration engine consists of the following parts:
The pe-activemq service (which runs on the puppet master server) routes all orchestration-related messages.
The pe-mcollective service (which runs on every agent node) listens for authorized commands
and invokes actions in response. It relies on the available agent plugins for its set of possible
actions.
The mco command (available to the peadmin user account on the puppet master server) and the
live management page of the PE console can issue authorized orchestration commands to any
number of nodes.
Conguration
See the Configuring Orchestration page.
Security
The orchestration engine in Puppet Enterprise 3.0 uses the same security model as the
recommended standard MCollective deployment. See the security model section on the
MCollective standard deployment page for a more detailed rundown of these security measures.
In short, all commands and replies are encrypted in transit, and only a few authorized clients are
permitted to send commands. By default, PE allows orchestration commands to be sent by:
Read/write and admin users of the PE console
Users able to log in to the puppet master server with full administrator sudo privileges
If you extend orchestration by integrating external applications, you can limit the actions each
application has access to by distributing policy files; see the Configuring Orchestration page for
more details.
You can also allow additional users to log in as the peadmin user on the puppet master, usually by
distributing standard SSH public keys.
Network Traffic
Every node (including all agent nodes, the puppet master server, and the console) needs the ability
to initiate connections to the puppet master server over TCP port 61613. See the notes on firewall
configuration in the System Requirements chapter of this guide for more details about PE's
network traffic.
Next: Invoking Actions
Note: Although you will be running these commands on the Linux command line, they can
invoke orchestration actions on both *nix and Windows machines.
MCollective Documentation
Puppet Enterprise's orchestration engine, MCollective, has its own section of the documentation
site, which includes more complete details and examples for command line orchestration usage.
This page covers basic CLI usage and all PE-specific information; for more details, see the following
pages from the MCollective docs:
MCollective Command Line Usage
Filtering
Logging In as peadmin
To run orchestration commands, you must log in to the puppet master server as the special
peadmin user account, which is created during installation.
Note: Puppet Enterprise 3.0 does not support adding more orchestration user accounts.
This means that, while it is possible (albeit complex) to allow other accounts on other
machines to invoke orchestration actions, upgrading to a future version of PE may disable
access for these extra accounts, requiring you to re-enable them manually. We do not
provide instructions for enabling extra orchestration accounts.
By default, the peadmin account cannot log in with a password. We recommend two ways to log in:
Using Sudo
Anyone able to log into the puppet master server as an admin user with full root sudo privileges
can become the peadmin user by running:
$ sudo -i -u peadmin
This is the default way to log in as the peadmin user. It means that orchestration commands can
only be issued by the group of users who can fully control the puppet master.
Adding SSH Keys
If you wish to allow other users to run orchestration commands without giving them full control
over the puppet master, you can add their public SSH keys to peadmin's authorized keys file.
You can use Puppet's ssh_authorized_key resource type to do this, or add keys manually to the
/var/lib/peadmin/.ssh/authorized_keys file.
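A sketch using the ssh_authorized_key resource type (the resource title and key material are placeholders):

```puppet
ssh_authorized_key { 'operator@workstation':   # hypothetical key identifier
  ensure => present,
  user   => 'peadmin',
  type   => 'ssh-rsa',
  key    => 'AAAAB3NzaC1yc2E...',  # the public key body only, without type or comment
}
```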
Subcommands
The mco command has several subcommands, and it's possible to add more; run mco help for a
list of all available subcommands. The default subcommands in Puppet Enterprise 3.0 are:
Main subcommand:
rpc
This is the general purpose orchestration client, which can invoke actions from any MCollective
agent plugin.
Special-purpose subcommands:
These subcommands only invoke certain kinds of actions, but have some extra UI enhancements to
make them easier to use than the equivalent mco rpc command.
puppet
package
service
Help and support subcommands:
These subcommands can display information about the available agent plugins and subcommands.
help: displays help for subcommands.
plugin: the mco plugin doc command can display help for agent plugins.
completion: a helper for shell completion systems.
Inventory and reporting subcommands:
These subcommands can retrieve and summarize information from Puppet Enterprise agent nodes.
ping: pings all matching nodes and reports on response times.
facts: displays a summary of values for a single fact across all systems.
inventory: a general reporting tool for nodes, collectives, and subcollectives.
find: like ping, but doesn't report response times.
List of Plugins
To get a list of the available plugins, which includes MCollective agent plugins, data query plugins,
discovery methods, and validator plugins, run mco plugin doc.
Agent Plugin Help
Related orchestration actions are bundled together in MCollective agent plugins. (Puppet-related
If there is also a data plugin with the same name, you may need to prepend agent/ to the plugin
name to disambiguate:
$ mco plugin doc agent/<PLUGIN>
Invoking Actions
Orchestration actions are invoked with either the general purpose rpc subcommand or one of the
special-purpose subcommands. Note that unless you specify a filter, orchestration commands will
be run on every server in your Puppet Enterprise deployment; make sure you know what will
happen before confirming any potentially disruptive commands. For more info on filters, see
Filtering Actions below.
The rpc Subcommand
The most useful subcommand is mco rpc. This is the general purpose orchestration client, which
can invoke actions from any MCollective agent plugin. See List of Built-In Actions for more
information about agent plugins.
Example:
$ mco rpc service restart service=httpd
For a list of available agent plugins, actions, and their required inputs, see List of Built-In Actions
or the Getting Help header above.
Special-Purpose Subcommands
Although mco rpc can invoke any action, sometimes a special-purpose application can provide a
more convenient interface.
Example:
The puppet subcommand's special runall action is able to run many nodes without
exceeding a certain load of concurrent runs. It does this by repeatedly invoking the puppet
agent's status action, and only sending a runonce action to the next node if there's enough
room in the concurrency limit.
This uses the same actions that the mco rpc command can invoke, but since rpc doesn't
know that the output of the status action is relevant to the timing of the runonce action, it
can't provide that improved UI.
Each special-purpose subcommand ( puppet, service, and package) has its own CLI syntax. For
example, mco service puts the name of the service before the action, to mimic the format of the
more common platform-specic service commands:
$ mco service httpd status
Run mco help <SUBCOMMAND> to get specic help for each subcommand.
Filtering Actions
By default, orchestration actions affect all PE nodes. You can limit any action to a smaller set of
nodes by specifying a filter.
$ mco service pe-httpd status --with-fact fact_is_puppetconsole=true
Note: For more details about filters, see the following pages from the MCollective docs:
MCollective CLI Usage: Filters
Filtering
All command line orchestration actions can accept the same filter options, which are listed under
the Host Filters section of any mco help <SUBCOMMAND> text:
Host Filters
  -W, --with FILTER                 Combined classes and facts filter
  -S, --select FILTER               Compound filter combining facts and classes
  -F, --wf, --with-fact fact=val    Match hosts with a certain fact
  -C, --wc, --with-class CLASS      Match hosts with a certain config management class
  -A, --wa, --with-agent AGENT      Match hosts with a certain agent
  -I, --wi, --with-identity IDENT   Match hosts with a certain configured identity
Each type of filter lets you specify a type of metadata and a desired value. The orchestration action
will only run on nodes where that data has the desired value.
Any number of fact, class, and agent filters can also be combined in a single command; nodes
must then match every filter to run the action.
Matching Strings and Regular Expressions
Filter values are usually simple strings. These must match exactly and are case-sensitive.
Most filters can also accept regular expressions as their values; these are surrounded by forward
slashes, and are interpreted as standard Ruby regular expressions. (You can even turn on various
options for a subpattern, such as case insensitivity: -F "osfamily=/(?i:redhat)/".) Unlike plain
strings, they accept partial matches.
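Since these are ordinary Ruby regular expressions, you can sanity-check a filter pattern in irb before using it. A sketch of how the subpattern above behaves:

```ruby
# How a /(?i:redhat)/ filter value behaves as a Ruby regular expression.
pattern = Regexp.new('(?i:redhat)')

puts pattern.match?('RedHat')    # true: the (?i:...) subpattern ignores case
puts pattern.match?('redhat-6')  # true: partial matches are accepted
puts pattern.match?('Debian')    # false
```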
Filtering by Identity
A node's identity is the same as its Puppet certname, as specified during installation. Identities will
almost always be unique per node.
$ mco puppet runonce -I web3balancer.example.com
You can use the -I or --with-identity option multiple times to create a filter that matches
multiple specific nodes.
You cannot combine the identity filter with other filter types.
The identity filter accepts regular expressions.
Filtering by Fact, Class, and Agent
Facts are the standard Puppet Enterprise facts, which are available in your Puppet manifests and
can be viewed as inventory information in the PE console. A list of the core facts is available here.
Use the -F or --with-fact option with a fact=value pair to filter on facts.
Classes are the Puppet classes that are assigned to a node. This includes classes assigned in the
console, assigned via Hiera, declared in site.pp, or declared indirectly by another class. Use the
-C or --with-class option with a class name to filter on classes.
Agents are MCollective agent plugins. Puppet Enterprise's default plugins are available on every
node, so filtering by agent makes more sense if you are distributing custom plugins to only a
subset of your nodes. For example, if you made an emergency change to a custom plugin that
you distribute with Puppet, you could filter by agent to trigger an immediate Puppet run on all
affected systems. (mco puppet runall 5 -A my_agent) Use the -A or --with-agent option to
filter on agents.
Since mixing classes and facts is so common, you can also use the -W or --with option to supply a
mixture of class names and fact=value pairs.
Compound Select Filters
The -S or --select option accepts arbitrarily complex filters. Like -W, it can accept a mixture of
class names and fact=value pairs, but it has two extra tricks:
BOOLEAN LOGIC
The -W filter always combines facts and classes with "and" logic: nodes must match all of the
criteria to match the filter.
The -S filter lets you combine values with nested Boolean and/or/not logic:
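A sketch of such a compound filter (the fact values and class names are purely illustrative):

```
$ mco puppet runonce -S "environment=production and (/apache/ or /nginx/) and not customer=acme"
```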
In addition, the -S filter lets you use data plugin queries as an additional kind of metadata.
Data plugins can be tricky, but are very powerful. To use them effectively, you must:
1. Check the list of data plugins with mco plugin doc.
2. Read the help for the data plugin you want to use, with mco plugin doc data/<NAME>. Note any
required input and the available outputs.
3. Use the rpcutil plugin's get_data action on a single node to check the format of the output
you're interested in. This action requires source (the plugin name) and query (the input)
arguments:
$ mco rpc rpcutil get_data source="fstat" query="/etc/hosts" -I web01
This will show all of the outputs for that plugin and input on that node.
4. Construct a query fragment of the format <PLUGIN>('<INPUT>').<OUTPUT>=<VALUE>; note the
parentheses, the fact that the input must be in quotes, the .output notation, and the equals
sign. Make sure the value you're searching for matches the expected format, which you saw
when you did your test query.
5. Use that fragment as part of a -S filter:
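Continuing the fstat example from step 3, a fragment in that format might be used like this (the size value is illustrative; substitute the output and value you actually verified):

```
$ mco rpc rpcutil ping -S "fstat('/etc/hosts').size=561"
```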
You can specify multiple data plugin query fragments per -S filter.
The MCollective documentation includes a page on writing custom data plugins. Installing
custom data plugins is similar to installing custom agent plugins; see Adding New Actions
for details.
By default, puppet agent idles in the background and performs a run every 30 minutes, but the
orchestration engine gives you complete control over this behavior. See the table of contents above
for an overview of the available features.
Note: The orchestration engine cannot trigger a node's very first puppet agent run. A node's
first run will happen automatically within 30 minutes after you sign its certificate.
Basics
Invoking Actions
The orchestration engine can control Puppet from the PE console and from the puppet master
server's Linux command line. These interfaces don't have identical capabilities, so this page will call
out any differences when applicable.
See the following pages for basic instructions on invoking actions, including how to log in:
Invoking Actions on the Command Line
Navigating Live Management
In the console, most of these tasks use the Control Puppet tab of the live management page, which
behaves much like the Advanced Tasks tab. On the command line, most of these tasks use the mco
puppet subcommand.
and resilient.
This difference only affects *nix nodes; Windows nodes always behave like a stopped *nix
node. The difference will be addressed in a future version of PE; for now, you may wish to
stop the pe-puppet service before trying to do noop or tags runs.
In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the runonce action. Enter any arguments, and click the red Run button.
ARGUMENTS
If the agent service is stopped (on affected *nix nodes; see above), you can change the way Puppet
runs by specifying optional arguments:
Force (true/false): Ignore the default splay and run all nodes immediately.
Server: Contact a different puppet master than normal. Useful for testing new manifests (or a
new version of PE) on a subset of nodes.
Tags (comma-separated list of tags): Apply only resources with these tags. Tags can be class
names, and this is a fast way to test changes to a single class without performing an entire
Puppet run.
Noop (true/false): Only simulate changes, and submit a report describing what would have
changed in a real run. Useful for safely testing new manifests. If you have configured puppet
agent to always run in no-op mode (via /etc/puppetlabs/puppet/puppet.conf), you can set
this to false to do an enforcing Puppet run.
Splay (true/false): Defaults to true. Whether to stagger runs over a period of time.
Splaylimit (in seconds): The period of time over which to randomly stagger runs. The more
nodes you are running at once, the longer this should be.
Environment: The Puppet environment in which to run. Useful for testing new manifests on a
subset of nodes.
On the Command Line
While logged in to the puppet master server as peadmin, run the mco puppet runonce command.
Be sure to specify a filter to limit the number of nodes; you should generally invoke this action on
fewer than 10 nodes at a time, especially if the agent service is running and you cannot specify
extra options (see above).
EXTRA OPTIONS
If the agent service is stopped (on affected *nix nodes; see above), you can change the way Puppet
runs with command line options. You can see a list of these by running mco puppet --help.
--tags TAGS, which takes a comma-separated list of tags and applies only resources with those
tags. Tags can be class names, and this is a fast way to test changes to a single class without
performing an entire Puppet run.
--server SERVER, which causes puppet agent to contact a different puppet master than normal.
Also useful for testing new manifests (or a new version of PE) on a subset of nodes.
This action requires an argument, which must be the number of nodes allowed to run at once. It
invokes a run on that many nodes, then only starts the next node when one has nished. This
prevents your puppet master from being overwhelmed by the herd and will delay only as long as is
necessary. The ideal concurrency will vary from site to site, depending on how powerful your
puppet master server is and how complex your configurations are.
The runall action can take extra options like --noop as described for the runonce action; however,
note that restrictions still apply for *nix nodes where the pe-puppet service is running.
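For example, a rolling no-op run with a concurrency cap of 5 (the cap is an arbitrary illustrative value; tune it to your site):

```
$ mco puppet runall 5 --noop
```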
After a node has been disabled for an hour, it will appear as unresponsive in the console's node
views, and will stay that way until it is re-enabled.
In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the enable or disable action. Enter a reason (if disabling), and click the red
Run button.
On the Command Line
While logged in to the puppet master server as peadmin, run mco puppet disable or mco puppet
enable with or without a filter.
Example: You noticed Puppet runs failing on a load balancer and expect they'll start failing on the
other ones too:
$ mco puppet disable "Investigating a problem with the haproxy module. -NF" -C /haproxy/
Note that on disabled nodes, the reason for disabling is shown in the disable_message field.
On the Command Line
AGGREGATE STATUS
While logged in to the puppet master server as peadmin, run mco puppet status with or without a
filter. This returns an abbreviated status for each node and a summarized breakdown of how many
nodes are in which conditions.
$ mco puppet status
VIEWING DISABLE MESSAGES
The one thing mco puppet status doesn't show is the reason why puppet agent was disabled. If
you're checking up on disabled nodes, you can get a more raw view of the status by running mco
rpc puppet status instead. This will display the reason in the Lock Message field.
Example: Get the detailed status for every disabled node, using the puppet data plugin:
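One possible form of that query, assuming the puppet data plugin provides an enabled output (verify with mco plugin doc data/puppet on your own deployment):

```
$ mco rpc puppet status -S "puppet().enabled=false"
```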
You can get sparkline graphs for the last run statistics across all your nodes with the mco puppet
summary command. This shows the distribution of your nodes, so you can see whether a significant
group is taking notably longer or seeing more changes.
$ mco puppet summary

Summary statistics for 10 nodes:

                   Total resources: min: 93.0    max: 155.0
             Out Of Sync resources: min: 0.0     max: 0.0
                  Failed resources: min: 0.0     max: 0.0
                 Changed resources: min: 0.0     max: 0.0
  Config Retrieval time (seconds): min: 1.9     max: 5.8
         Total run-time (seconds): min: 2.2     max: 6.7
   Time since last run (seconds): min: 314.0   max: 23.4k
DETAILED STATISTICS
While logged in to the puppet master server as peadmin, run mco rpc puppet last_run_summary
with or without a filter. This returns detailed run statistics for each node. (Note that this uses the
rpc subcommand instead of the puppet subcommand.)
Next: Browsing Resources
Note: Resource browsing and comparison are only available in the PE console; there is no command-line interface for these features.
Puppet Enterprise 3.3 User's Guide Orchestration: Browsing and Comparing Resources
If you need to do simple resource inspections on the command line, you can investigate the puppetral plugin's find and search actions. These give output similar to what you can get from running puppet resource <type> [<name>] locally.
Resource Types
The Browse Resources tab can inspect the following resource types:
group
host
package
service
user
For an introduction to resources and types, please see the Resources chapter of Learning Puppet.
After clicking Inspect All, the Browse Resources tab will use the lists of resources it got to pre-populate the corresponding lists in each resource type page. This can save you a few clicks on the Find Resources buttons (see below).
Resource Type Pages
Resource type pages contain a search field, a Find Resources button, and (if the Find Resources button has been used) a list of resources labeled with their nodes and number of variants.
If you have previously clicked the Inspect All button, the resource type page will be pre-populated;
if it is empty, you must click the Find Resources button.
The resource type page will display a list of all resources of that type on the selected nodes, plus a
summary of how similar the resources are. An Update button is available for re-scanning your
nodes. In general, a set of nodes that perform similar tasks should have very similar resources.
The resource list shows the name of each resource, the number of nodes it was found on, and how
many variants of it were found. You can sort the list by any of these properties by clicking the
headers.
To inspect a resource, click its name.
To search, enter a resource name in the search field and confirm with the enter key or the search button.
Once located, you will be taken directly to the inspect view for that resource. This is the same as the
inspect view available when browsing (see below).
When you inspect a resource, you can see the values of all its properties. If there is more than one variant, you can see all of them, and the properties that differ across nodes will be highlighted.
To see which nodes have each variant, click the on N nodes labels to expand the node lists.
Related Topics
For an overview of orchestration topics, see the Orchestration Overview page.
To invoke actions in the PE console, see Navigating Live Management.
To invoke actions on the command line, see Invoking Actions.
To add your own actions, see Adding Orchestration Actions.
apt_update
Update the apt cache
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the apt-get command
output
(Appears as Output on CLI)
Output from apt-get
checkupdates
Check for updates
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the apt-get command
outdated_packages
(Appears as Outdated Packages on CLI)
Outdated packages
output
(Appears as Output on CLI)
Output from apt-get
Puppet Enterprise 3.3 User's Guide List of Built-In Orchestration Actions
install
Install a package
Input:
package (required)
Package to install
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
purge
Purge a package
Input:
package (required)
Package to purge
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
status
Get the status of a package
Input:
package (required)
Package to retrieve the status of
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
uninstall
Uninstall a package
Input:
package (required)
Package to uninstall
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
update
Update a package
Input:
package (required)
Package to update
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
yum_checkupdates
Check for YUM updates
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the yum command
outdated_packages
(Appears as Outdated Packages on CLI)
Outdated packages
output
(Appears as Output on CLI)
Output from YUM
yum_clean
Clean the YUM cache
Input:
mode
One of the various supported clean modes
Type: list
Valid Values: all, headers, packages, metadata, dbcache, plugins, expire-cache
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the yum command
output
(Appears as Output on CLI)
Output from YUM
enable
Enable the Puppet agent
(no inputs)
Outputs:
enabled
(Appears as Enabled on CLI)
Is the agent currently locked
status
(Appears as Status on CLI)
Status
last_run_summary
Get the summary of the last Puppet run
(no inputs)
Outputs:
changed_resources
(Appears as Changed Resources on CLI)
Resources that were changed
config_retrieval_time
(Appears as Config Retrieval Time on CLI)
Time taken to retrieve the catalog from the master
config_version
(Appears as Config Version on CLI)
Puppet config version for the previously applied catalog
failed_resources
(Appears as Failed Resources on CLI)
Resources that failed to apply
lastrun
(Appears as Last Run on CLI)
When the Agent last applied a catalog in local time
out_of_sync_resources
(Appears as Out of Sync Resources on CLI)
Resources that were not in desired state
since_lastrun
(Appears as Since Last Run on CLI)
How long ago did the Agent last apply a catalog in local time
summary
(Appears as Summary on CLI)
Summary data as provided by Puppet
total_resources
(Appears as Total Resources on CLI)
Total resources managed on a node
total_time
(Appears as Total Time on CLI)
Total time taken to retrieve and process the catalog
type_distribution
(Appears as Type Distribution on CLI)
Resource counts per type managed by Puppet
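These fields mirror the data the agent records in its last_run_summary.yaml state file. As a quick orientation, here is a sketch of pulling the same numbers out with Ruby's standard library; the YAML below is a hypothetical excerpt for illustration, not a complete state file:

```ruby
require 'yaml'

# Hypothetical excerpt of an agent's last_run_summary.yaml state file;
# real files live in the agent's state directory and contain more keys.
sample = <<YAML
resources:
  changed: 2
  failed: 0
  out_of_sync: 2
  total: 120
time:
  config_retrieval: 3.1
  total: 7.4
YAML

summary = YAML.load(sample)
changed = summary['resources']['changed']
total   = summary['resources']['total']
puts "changed=#{changed} total=#{total}"  # prints changed=2 total=120
```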
resource
Evaluate Puppet RAL resources
Inputs:
name (required)
Resource Name
Type: string
Format/Validation: ^.+$
Length: 150
type (required)
Resource Type
Type: string
Format/Validation: ^.+$
Length: 50
Outputs:
changed
(Appears as Changed on CLI)
Was a change applied based on the resource
result
(Appears as Result on CLI)
The result from the Puppet resource
runonce
Invoke a single Puppet run
Inputs:
environment
Which Puppet environment to run
Type: string
Format/Validation: puppet_variable
Length: 50
force
Forces a run immediately; otherwise the run is subject to the default splay time
Type: boolean
noop
Do a Puppet dry run
Type: boolean
server
Address and port of the Puppet Master in server:port format
Type: string
Format/Validation: puppet_server_address
Length: 50
splay
Sleep for a period before initiating the run
Type: boolean
splaylimit
Maximum amount of time to sleep before run
Type: number
tags
Restrict the Puppet run to a comma-separated list of tags
Type: string
Format/Validation: puppet_tags
Length: 120
Output:
summary
(Appears as Summary on CLI)
Summary of command run
status
Get the current status of the Puppet agent
(no inputs)
Outputs:
applying
(Appears as Applying on CLI)
Is a catalog being applied
daemon_present
(Appears as Daemon Running on CLI)
Is the Puppet agent daemon running on this system
disable_message
(Appears as Lock Message on CLI)
Message supplied when agent was disabled
enabled
(Appears as Enabled on CLI)
Is the agent currently locked
idling
(Appears as Idling on CLI)
Is the Puppet agent daemon running but not doing any work
lastrun
(Appears as Last Run on CLI)
When the Agent last applied a catalog in local time
since_lastrun
(Appears as Since Last Run on CLI)
How long ago did the Agent last apply a catalog in local time
search
Get detailed info for all resources of a given type
Input:
type (required)
Type of resource to check
Type: string
Format/Validation: .
Length: 90
Output:
result
(Appears as Result on CLI)
The values of the inspected resources
collective_info
Info about the main and sub collectives
(no inputs)
Outputs:
collectives
(Appears as All Collectives on CLI)
All Collectives
main_collective
(Appears as Main Collective on CLI)
The main Collective
daemon_stats
Get statistics from the running daemon
(no inputs)
Outputs:
agents
(Appears as Agents on CLI)
List of agents loaded
configfile
(Appears as Config File on CLI)
Config file used to start the daemon
filtered
(Appears as Failed Filter on CLI)
Didn't pass filter checks
passed
(Appears as Passed Filter on CLI)
Passed filter checks
pid
(Appears as PID on CLI)
Process ID of the daemon
replies
(Appears as Replies on CLI)
Replies sent back to clients
starttime
(Appears as Start Time on CLI)
Time the server started
threads
(Appears as Threads on CLI)
List of threads active in the daemon
times
get_config_item
Get the active value of a specific config property
Input:
item (required)
The item to retrieve from the server
Type: string
Format/Validation: ^.+$
Length: 50
Outputs:
item
(Appears as Property on CLI)
The config property being retrieved
value
(Appears as Value on CLI)
The value that is in use
Back to top
get_data
Get data from a data plugin
Inputs:
query
The query argument to supply to the data plugin
Type: string
Format/Validation: ^.+$
Length: 50
source (required)
The data plugin to retrieve information from
Type: string
Format/Validation: ^\w+$
Length: 50
Outputs: (vary depending on the data plugin queried)
get_fact
Retrieve a single fact from the fact store
Input:
fact (required)
The fact to retrieve
Type: string
Format/Validation: ^[\w\-\.]+$
Length: 40
Outputs:
fact
(Appears as Fact on CLI)
The name of the fact being returned
value
(Appears as Value on CLI)
The value of the fact
inventory
System Inventory
(no inputs)
Outputs:
agents
(Appears as Agents on CLI)
List of agent names
classes
(Appears as Classes on CLI)
List of classes on the system
collectives
(Appears as All Collectives on CLI)
All Collectives
data_plugins
(Appears as Data Plugins on CLI)
List of data plugin names
facts
ping
Responds to requests for PING with PONG
(no inputs)
Output:
pong
(Appears as Timestamp on CLI)
The local timestamp
restart
Restart a service
Input:
service (required)
The service to restart
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after restarting
start
Start a service
Input:
service (required)
The service to start
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after starting
status
Gets the status of a service
Input:
service (required)
The service to get the status for
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service
stop
Stop a service
Input:
service (required)
The service to stop
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after stopping
Next: Adding New Orchestration Actions
Related Topics
For an overview of orchestration topics, see the Orchestration Overview page.
To invoke actions in the PE console, see Navigating Live Management.
To invoke actions on the command line, see Invoking Actions.
For a list of built-in actions, see List of Built-In Orchestration Actions.
Note: Additionally, some MCollective agent plugins may be part of a bundle of related
plugins, which may include new subcommands, data plugins, and more.
A full list of plugin types and the nodes they should be installed on is available here. Note that in MCollective terminology, "servers" refers to Puppet Enterprise agent nodes and "clients" refers to the puppet master and console nodes.
DISTRIBUTION
Not every agent node needs to use every plugin; the orchestration engine is built to gracefully handle an inconsistent mix of plugins across nodes.
This means you can distribute special-purpose plugins to only the nodes that need them, without worrying about securing them on irrelevant nodes. Nodes that don't have a given plugin will ignore its actions, and you can also filter orchestration commands by the list of installed plugins.
If you use Nagios, the NRPE plugin (from Puppet Labs) is a good first plugin to install.
Searching GitHub for "mcollective agent" will turn up many plugins, including ones for vmware_tools, libvirt, junk filters in iptables, and more.
Writing MCollective Agent Plugins
Most people who use orchestration heavily will want custom actions tailored to the needs of their
own infrastructure. You can get these by writing new MCollective agent plugins in Ruby.
The MCollective documentation has instructions for writing agent plugins:
Writing agent plugins
Writing DDL files
Aggregating replies for better command line interfaces
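The linked pages above are the authoritative reference; as a rough sketch only (the plugin name, field values, and validation rules here are invented placeholders, not a shipped plugin), a DDL file pairs agent metadata with per-action input and output declarations:

```ruby
metadata :name        => "echo",
         :description => "Example agent that echoes a message back",
         :author      => "Example Author",
         :license     => "Apache-2.0",
         :version     => "1.0",
         :url         => "https://example.com",
         :timeout     => 10

action "echo", :description => "Echo a message back to the caller" do
  input :msg,
        :prompt      => "Message",
        :description => "The message to echo",
        :type        => :string,
        :validation  => '^.+$',
        :optional    => false,
        :maxlength   => 90

  output :msg,
         :description => "The message that was received",
         :display_as  => "Message"
end
```

The input and output declarations are what drive the "Appears as ... on CLI" labels and the validation rules shown in the built-in actions list earlier in this guide.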
Additionally, you can learn a lot by reading the code of Puppet Enterprise's built-in plugins. These are located in the /opt/puppet/libexec/mcollective/mcollective/ directory on any *nix PE node.
If any of these les change, restart the pe-mcollective service, which is managed by the
pe_mcollective module.
To accomplish these steps, you will need to write some limited interaction with the pe_mcollective module, which is part of Puppet Enterprise's implementation. We have kept these interactions as minimal as possible; if any of them change in a future version of Puppet Enterprise, we will provide a warning in the upgrade notes for that version's documentation.
Step 1: Create a Module for Your Plugin(s)
You have several options for laying this out:
One class for all of your custom plugins. This works fine if you have a limited number of plugins and will be installing them on every agent node.
One module with several classes for individual plugins or groups of plugins. This is good for installing certain plugins on only some of your agent nodes: you can split specialized plugins into a pair of mcollective_plugins::<name>::agent and mcollective_plugins::<name>::client classes, and assign the former to the affected agent nodes and the latter to the console and puppet master nodes.
A new module for each plugin. This is maximally flexible, but can sometimes get cluttered.
Once the module is created, put the plugin les into its files/ directory.
Step 2: Create Relationships and Set Variables
For any class that will be installing plugins on agent nodes, you should put the following four lines near the top of the class definition:
Class['pe_mcollective::server::plugins'] -> Class[$title] ~> Service['pe-mcollective']
include pe_mcollective
$plugin_basedir = $pe_mcollective::server::plugins::plugin_basedir
$mco_etc = $pe_mcollective::params::mco_etc
Puppet Enterprise 3.3 User's Guide Adding New Orchestration Actions to Puppet Enterprise
Note: The Class[$title] notation seen above is a resource reference to the class that
contains this statement; it uses the $title variable, which always contains the name of the
surrounding container.
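To make the note above concrete, here is a small sketch (the class name mco_plugins::filemgr is invented for illustration) of what Class[$title] resolves to:

```puppet
class mco_plugins::filemgr {
  # Inside this class, $title is 'mco_plugins::filemgr', so the line below
  # is equivalent to:
  #   Class['pe_mcollective::server::plugins'] -> Class['mco_plugins::filemgr'] ~> Service['pe-mcollective']
  Class['pe_mcollective::server::plugins'] -> Class[$title] ~> Service['pe-mcollective']
}
```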
Next, put all relevant plugin les into place, using the $plugin_basedir variable we set above:
file {"${plugin_basedir}/agent/nrpe.ddl":
  ensure => file,
  source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.ddl',
}
file {"${plugin_basedir}/agent/nrpe.rb":
  ensure => file,
  source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.rb',
}
A setting that would appear in the main server config file as:
plugin.nrpe.conf_dir = /etc/nagios/nrpe
would appear in ${mco_etc}/plugin.d/nrpe.cfg as:
conf_dir = /etc/nagios/nrpe
You can use a normal file resource to create these config files with the appropriate values. For simple configs, you can set the content directly in the manifest; for complex ones, you can use a template.
file {"${mco_etc}/plugin.d/nrpe.cfg":
  ensure  => file,
  content => "conf_dir = /etc/nagios/nrpe\n",
}
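For the template case mentioned above, the same resource might look like this (the module and template path are hypothetical):

```puppet
file {"${mco_etc}/plugin.d/nrpe.cfg":
  ensure  => file,
  content => template('mco_plugins/nrpe.cfg.erb'),
}
```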
POLICY FILES
You can also distribute policy files for the ActionPolicy authorization plugin. This can be a useful way to completely disable certain unused actions, limit actions so they can only be used on a subset of your agent nodes, or allow certain actions from the command line but not from the live management page.
These files should be named for the agent plugin they apply to, and should go in ${mco_etc}/policies/<plugin name>.policy. Policy files should be distributed to every agent node that runs the plugin you are configuring.
Note: The policies directory doesn't exist by default; you will need to use a file resource with ensure => directory to initialize it.
The policy file format is documented here. When configuring caller IDs in policy files, note that PE uses the following two IDs by default:
cert=peadmin-public: the command-line orchestration client, as used by the peadmin user on the puppet master server.
cert=puppet-dashboard-public: the live management page in the PE console.
Example: This code would completely disable the package plugin's update action, to force users to do package upgrades through your centralized Puppet code:
file {"${mco_etc}/policies":
  ensure => directory,
}
file {"${mco_etc}/policies/package.policy":
  ensure  => file,
  content => "policy default allow
deny * update * *
",
}
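Similarly, here is a sketch (unverified against any particular deployment; adjust the plugin and action names to match yours) of the earlier case of allowing an action from the command line but not from the live management page, using the two default caller IDs:

```puppet
file {"${mco_etc}/policies/service.policy":
  ensure  => file,
  content => "policy default allow
deny cert=puppet-dashboard-public restart * *
",
}
```

Because the default is allow, the peadmin command-line client can still restart services, while the console's live management caller is denied that action.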
For plugins you are distributing to all agent nodes, assign their class to the console's special mcollective group. (This group contains all PE nodes which have not been added to the special no mcollective group.)
For plugins you are only distributing to some agent nodes, you must do the following:
Create two Puppet classes for the plugin: a main class that installs everything, and a client class that only installs the .ddl file and the supporting plugins.
Assign the main class to any agent nodes that should be running the plugin.
Assign the client class to the puppet_console and puppet_master groups in the console.
(These special groups contain all of the console and puppet master nodes in your deployment,
respectively.)
Step 6: Run Puppet
You can either wait for the next scheduled Puppet run, or trigger an on-demand run using
MCollective.
Step 7: Confirm the Plugin is Installed
Follow the instructions in the MCollective documentation to verify that your new plugins are
properly installed.
Example
This is an example of a Puppet class that installs the Puppet Labs nrpe plugin. The files directory of the module would simply contain a complete copy of the nrpe plugin's Git repo. In this example, we are not creating separate agent and client classes.
# /etc/puppetlabs/puppet/modules/mco_plugins/manifests/nrpe.pp
class mco_plugins::nrpe {
  Class['pe_mcollective::server::plugins'] -> Class[$title] ~> Service['pe-mcollective']
  include pe_mcollective
  $plugin_basedir = $pe_mcollective::server::plugins::plugin_basedir
  $mco_etc        = $pe_mcollective::params::mco_etc
  File {
    owner => $pe_mcollective::params::root_owner,
    group => $pe_mcollective::params::root_group,
    mode  => $pe_mcollective::params::root_mode,
  }
  file {"${plugin_basedir}/agent/nrpe.ddl":
Configuring Orchestration
The Puppet Enterprise (PE) orchestration engine can be configured to enable new actions, modify existing actions, restrict actions, and prevent run failures on non-PE nodes.
Adding Actions
See the Adding Actions page of this manual.
Unsupported Features
Puppet Enterprise 3.3 User's Guide Configuring Orchestration
Configuring Performance
ActiveMQ Heap Usage (Puppet Master Server Only)
The puppet master node runs an ActiveMQ server to route orchestration commands. By default, its
process uses a Java heap size of 512 MB; this is the best value for mid-sized deployments, but can
be a problem when building small proof-of-concept deployments on memory-starved VMs.
You can set a new heap size by doing the following:
1. In the PE console, navigate to the special puppet_master group.
2. On the puppet_master group page, click Edit.
3. Under Variables, in the key field, add activemq_heap_mb, and in the value field add a new heap size to use (in MB).
4. Click Update.
You can later delete the variable to revert to the default setting.
Registration Interval
By default, all agent nodes will send dummy registration messages over the orchestration
middleware every ten minutes. We use these as a heartbeat to work around weaknesses in the
underlying Stomp network protocol.
Most users shouldn't need to change this behavior, but you can adjust the frequency of the heartbeat messages as follows:
1. In the PE console, navigate to the special mcollective group.
2. On the mcollective group page, click Edit.
3. Under Variables, in the key field, add mcollective_registerinterval, and in the value field add a new interval (in seconds).
4. Click Update.
You can later delete the variable to revert to the default setting.
Orchestration SSL
By default, the orchestration engine uses SSL to encrypt all orchestration messages. You can disable
this in order to investigate problems, but should never disable it in a production deployment where
business-critical orchestration commands are being run.
To disable SSL:
1. In the PE console, navigate to the mcollective group.
2. On the mcollective group page, click Edit.
3. Under Variables, in the key field, add mcollective_enable_stomp_ssl, and in the value field add false.
4. Click Update.
You can later delete the variable to revert to the default setting.
Next: Cloud Provisioning: Overview
Note: Non-root users are not able to use PE's orchestration capabilities to manage your nodes, and MCollective must be disabled on all nodes.
1. As a root user, install and configure a monolithic PE master. Use the standard installation method, or use an answer file to automate your installation.
2. Disable live management (MCollective).
This can be done by adding q_disable_live_management=y to your answer file if you're performing an automated installation. Otherwise, you can edit /etc/puppetlabs/puppet-dashboard/settings.yml and set the disable_live_management setting to true.
3. After the installation is complete, log into the console and verify that the Live Management tab is
NOT present in the main, top nav bar.
Puppet Enterprise 3.3 User's Guide Running PE Agents without Root Privileges
4. Make sure no new agents can get added to the MCollective group.
a. Click the Groups tab, select the default group, and click Edit.
b. Add the no mcollective group and click Update.
1. On each agent node, install a PE agent while logged in as a root user. Refer to the instructions
for installing agents.
2. Log in to an agent node as a root user, and add the non-root user with puppet resource user
<unique non-root username> ensure=present managehome=true.
Note: Each and every non-root user must have a unique name.
3. As a root user, still on the agent node, set the non-root users password. For example, on most
*nix systems you would run passwd <username>.
4. By default, the pe-puppet service runs automatically as a root user, so it needs to be disabled. As a root user on the agent node, stop the service by running puppet resource service pe-puppet ensure=stopped enable=false.
Tip: If you wish to use su - nonrootuser to switch between accounts, make sure to use the - (-l in some unix variants) argument so that full login privileges are correctly granted. Otherwise you may see permission denied errors when trying to apply a catalog.
5. As the non-root user, generate and submit the cert for the agent node. Log into the agent node
and execute the following command:
puppet agent -t --certname "<unique non-root username.hostname>" --server "<master
hostname>"
This puppet run will submit a cert request to the master and will create a ~/.puppet directory structure in the non-root user's home directory.
6. As the non-root user, create a Puppet configuration file (~/.puppet/puppet.conf) to specify the agent certname and the hostname of the master:
[main]
certname = <unique non-root username.hostname>
server = <master hostname>
7. Log into the console, navigate to the pending node requests, and accept the requests from non-root user agents.
Note: It is possible to also sign the root user certificate in order to allow that user to also manage the node. However, you should do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as non-admin could lead to unwanted node definitions getting generated using alt hostnames, which is a potential security issue. In general, if you deploy this scenario, you should ensure that the root and non-root users never try to manage the same resources, ensure that they have clear-cut node definitions, and ensure that classes scope correctly.
8. You can now connect the non-root agent node to the master and get PE to configure it. Log into the agent node as the non-root user and run puppet agent -t.
PE should now run and apply the configuration specified in the catalog. Keep an eye on the output from the run; if you see Facter facts being created in the non-root user's home directory, you know that you have successfully created a functional non-root agent.
Check the following to make sure the agent is properly configured and functioning as desired:
The non-root agent node should be able to request certificates and be able to download and apply the catalog from the master without issue when a non-privileged user executes puppet agent -t.
The puppet agent service should not be running. Check it with service pe-puppet status.
The non-root agent node should not receive the pe-mcollective class. You can check the
console to ensure that nonrootuser is part of the no mcollective group.
Non-privileged users should be able to collect existing facts by running facter on agents, and they should be able to define new, external Facter facts.
INSTALL AND CONFIGURE WINDOWS AGENTS AND THEIR CERTIFICATES
If you need to run agents without admin privileges on nodes running a Windows OS, take the
following steps:
1. Connect to the agent node as an admin user and install the Windows agent.
2. As an admin user, add the non-admin user with the following command: puppet resource user
<unique non-admin username> ensure=present managehome=true password="puppet"
groups="Users".
Note: Each and every non-admin user must have a unique name. If the non-privileged user
needs remote desktop access, edit the user resource to include the Remote Desktop Users
group.
3. While still connected as an admin user, disable the pe-puppet service with puppet resource
service pe-puppet ensure=stopped enable=false.
4. Log out of the Windows agent machine and log back in as the non-admin user, and then run the
following command:
puppet agent -t --certname "<unique non-privileged username>" --server "<master
hostname>"
This puppet run will submit a cert request to the master and will create a ~/.puppet directory structure in the non-admin user's home directory.
5. As the non-admin user, create a Puppet configuration file (%USERPROFILE%/.puppet/puppet.conf) to specify the agent certname and the hostname of the master:
[main]
certname = <unique non-privileged username.hostname>
server = <master hostname>
6. While still connected as the non-admin user, send a cert request to the master by running
puppet with puppet agent -t.
7. On the master node, as an admin user, sign the non-root certificate request using the console or by running puppet cert sign nonrootuser.
Note: It is possible to also sign the root user certificate in order to allow that user to also manage the node. However, you should do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as non-admin could lead to unwanted node definitions getting generated using alt hostnames, a potential security issue. In general, then, if you deploy this scenario you should be careful to ensure the root and non-root users never try to manage the same resources, have clear-cut node definitions, ensure that classes scope correctly, and so forth.
8. On the agent node, verify that the agent is connected and working by again starting a puppet
run while logged in as the non-admin user. Running puppet agent -t should download and
process the catalog from the master without issue.
Usage
Non-root users can only use a subset of PE's functionality. Basically, any operation that requires root privileges (e.g., installing system packages) cannot be managed by a non-root puppet agent.
On *nix systems, as a non-root agent you should be able to enforce the following resource types:
cron (only non-root cron jobs can be viewed or set)
exec (cannot run as another user or group)
file (only if the non-root user has read/write privileges)
notify
schedule
ssh_key
ssh_authorized_key
service
augeas
You should also be able to inspect the following resource types (use puppet resource <resource
type>):
host
mount
package
On Windows systems, as a non-admin user you should be able to enforce the following resource types:
exec
file
You should also be able to inspect the following resource types (use puppet resource <resource
type>):
host
package
user
group
service
ISSUES & WARNINGS
When running a cron job as a non-root user, using the -u flag to set a user with root privileges
will cause the job to fail, resulting in the following error message:
Notice: /Stage[main]/Main/Node[nonrootuser]/Cron[illegal_action]/ensure: created
must be privileged to use -u
Next: Beginner's Guide to Modules
Tip: You will need to run pe-httpd restart on any load-balanced masters in your system.
5. Delete the node from the console. Navigate to the node detail page for the deactivated node, and
click the Delete button.
Alternatively, you can also run /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production node:del[node name].
This action does NOT disable MCollective/live management on the node.
Note: If you delete a node from the node view without first deactivating it, the node will
be absent from the node list in the console, but the license count will not decrement, and on the
next puppet run, the node will be listed in the console again.
6. To disable MCollective/live management on the node, uninstall the puppet agent, stop the
pe-mcollective service (on the agent, run service pe-mcollective stop), or destroy the agent
node altogether.
7. You should also manually remove the node's certificates in
/etc/puppetlabs/mcollective/ssl/clients.
At this point, the node should be fully deactivated.
Important: Ensure you are on the puppet agent node when you do this. Backing up the ssl
directory, as opposed to deleting it, will enable you to easily recover in the event of a
problem. DO NOT perform step 3 on the puppet master.
Per-node certificates for the puppet master (and any agent nodes)
pe-internal-broker
pe-internal-dashboard
pe-internal-mcollective-servers
pe-internal-peadmin-mcollective-client
pe-internal-puppet-console-mcollective-client
Each of these will need to be replaced with new certificates signed by your external CA. The steps
below explain how to find and replace these credentials.
Locating the PE Agent Certificate and Security Credentials
Every system under PE management (including the puppet master, console, and PuppetDB) runs the
puppet agent service. To determine the proper locations for the certificate and security credential
files used by the puppet agent, run the following commands:
Certificate: puppet agent --configprint hostcert
Private key: puppet agent --configprint hostprivkey
Public key: puppet agent --configprint hostpubkey
Certificate Revocation List: puppet agent --configprint hostcrl
Local copy of the CA's certificate: puppet agent --configprint localcacert
Tip: You will also need to create a cert and security credentials for any agent nodes, using the
same CA as you used for the puppet master. We've included instructions at the end of the
doc.
When you use a custom CA to create a certificate for the console, the console still needs to trust
requests from other elements of your PE infrastructure that have been authenticated with
certificates signed by PE's built-in CA; and when making requests to the puppet master, the console
still needs to present a certificate signed by PE's built-in CA.
Also, when the puppet master is acting as a client, it needs to trust the certificates signed by both
the custom CA and PE's built-in CA.
Here are the main things you will need to do:
1. Set up the custom certificates and security credentials (private and public keys).
2. Generate a complete CA bundle for the puppet master.
Step 1: Set up Custom Certs and Security Credentials
1. Retrieve the custom certificate's public and private keys and the custom CA's public key, and,
for ease of use, name them as follows:
public-dashboard.cert.pem
public-dashboard.private_key.pem
public-dashboard.ca_cert.pem
2. Add those files to /opt/puppet/share/puppet-dashboard/certs/.
3. Edit /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf so that it contains the new
certificate and keys. The complete SSL list in puppetdashboard.conf should appear as follows:
SSLCertificateFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.cert.pem
SSLCertificateKeyFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.private_key.pem
SSLCertificateChainFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.ca_cert.pem
SSLCACertificateFile /opt/puppet/share/puppet-dashboard/certs/pe-internal-dashboard.ca_cert.pem
SSLCARevocationFile /opt/puppet/share/puppet-dashboard/certs/pe-internal-dashboard.ca_crl.pem
Important: Make sure you do not duplicate any of the above parameters in
/etc/puppetlabs/httpd/conf.d/puppetdashboard.conf.
The first three entries in the list are your custom certificate's public and private keys and your
custom CA's public key. The fourth and fifth entries are PE's built-in CA's public key and certificate
revocation list (CRL). They should not be edited in any way. This configuration causes the
console to present the certificate signed by your custom CA to clients while still using PE's built-in CA to authenticate requests from the puppet master.
Step 2: Generate the Complete CA Bundle for the Puppet Master
1. On the puppet master, create ca_auth.pem by running cat
/etc/puppetlabs/puppet/ssl/certs/ca.pem /opt/puppet/share/puppet-dashboard/certs/public-dashboard.ca_cert.pem >
/etc/puppetlabs/puppet/ssl/ca_auth.pem.
Note: The second path in the above command is the full path to the public key of the custom
CA, which you put in /opt/puppet/share/puppet-dashboard/certs/ in step 1.2.
2. Change the permissions of the file you just created by running chmod 644
/etc/puppetlabs/puppet/ssl/ca_auth.pem.
3. Edit /etc/puppetlabs/puppet/puppet.conf and, in the [master] stanza, add
ssl_client_ca_auth = /etc/puppetlabs/puppet/ssl/ca_auth.pem.
4. Edit /etc/puppetlabs/puppet/console.conf and change the value of certificate_name to
the DNS FQDN of the console server. Note that the DNS FQDN must match the name of the new
console certificate.
5. Restart the pe-httpd service on both the master and console servers by running sudo
/etc/init.d/pe-httpd restart. (If it is an all-in-one install, you only need to restart the pe-httpd service once.)
6. Kick off a puppet run.
You should now be able to navigate to your console and see the custom certificate in your browser.
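The bundle created in steps 1 and 2 is simply the two CA certificates concatenated into one file with world-readable permissions. The following sketch shows the idea using dummy files; the /tmp paths and certificate contents are stand-ins, not the real PE paths:

```shell
# Stand-ins for PE's built-in CA cert and the custom CA's public cert:
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' > /tmp/ca.pem
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > /tmp/public-dashboard.ca_cert.pem

# Step 1: concatenate both CA certs into one bundle.
cat /tmp/ca.pem /tmp/public-dashboard.ca_cert.pem > /tmp/ca_auth.pem
# Step 2: make the bundle world-readable.
chmod 644 /tmp/ca_auth.pem

# The bundle now holds two certificates:
grep -c 'BEGIN CERTIFICATE' /tmp/ca_auth.pem
```

Order matters to some TLS clients, so keep PE's built-in CA certificate first, as in the cat command from step 1.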
This is a Tech Preview release of Razor. This means you are getting early access to Razor
technology so you can test the functionality and provide feedback. However, this Tech Preview
version of Razor is not intended for production use, because Puppet Labs cannot guarantee Razor's
stability. As Razor is further developed, functionality might be added, removed, or changed in a way
that is not backward compatible with this Tech Preview version.
For details about Tech Preview software from Puppet Labs, visit Tech Preview Features Support
Scope.
When a new node appears, Razor discovers its characteristics by booting it with the Razor
microkernel and inventorying its facts.
The node is tagged
The node is tagged based on its characteristics. Tags contain a match condition: a Boolean
expression that has access to the node's facts and determines whether the tag should be applied to
the node or not.
The node tags match a Razor policy
Node tags are compared to tags in the policy table. The first policy with tags that match the node's
tags is applied to the node.
Policies pull together all the provisioning elements
Install Overview
Below are the essential steps to create a virtual test environment. Each of these steps is described in
more detail in the following sections.
1. Install PE in your virtual environment.
2. Install and configure a DHCP/DNS/TFTP service. We've chosen dnsmasq for this example setup.
3. Configure SELinux to enable PXE boot. Note: you'll download iPXE software in the steps for
installing and setting up Razor.
4. Optional: If you installed dnsmasq, configure dnsmasq for PXE booting and TFTP.
When you finish this section, go on to Install and Set Up Razor.
Install PE in Your Virtual Environment
In your virtual testing environment, set up a puppet master running a standard install of Puppet
Enterprise 3.3. For more information, see Installing Puppet Enterprise.
Note: We're finding that VirtualBox 4.3.6 gets to the point of downloading the microkernel from the
Razor server and hangs at 0% indefinitely. We don't have this problem with VirtualBox 4.2.22.
Install and Configure dnsmasq DHCP/TFTP Service
The installation that's described here, particularly these prerequisites, is one way to configure
your Razor test environment. We're providing explicit instructions for this setup because it's been
tested and is relatively straightforward.
As stated in the Warning above, to avoid breaking your company network or inadvertently
overwriting machines or servers on your network, you should work in a completely isolated
test environment.
1. Use YUM to install dnsmasq:
yum install dnsmasq
2. If it doesn't already exist, create the directory /var/lib/tftpboot.
3. Change the permissions for /var/lib/tftpboot:
Configure SELinux to Enable PXE Boot
1. Edit /etc/selinux/config and set:
SELINUX=disabled
Note: Disabling SELinux is highly insecure and should only be done for testing purposes.
Another option is to craft an enforcement rule for SELinux that enables PXE boot without
completely disabling SELinux.
2. Restart the computer and log in again.
Edit the dnsmasq Configuration File to Enable PXE Boot
1. Edit the file /etc/dnsmasq.conf by adding the following line at the bottom of the file:
conf-dir=/etc/dnsmasq.d
2. Write and exit the file.
3. Create the file /etc/dnsmasq.d/razor and add the following configuration information:
Hint: With the export command, you can avoid having to repeatedly replace placeholder
text. The steps for installing assume you have declared a server name and the port to use for
Razor with these commands:
export RAZOR_HOSTNAME=<server name>
export RAZOR_PORT=8080
For example:
export RAZOR_HOSTNAME=centos6.4
export RAZOR_PORT=8080
The steps below therefore use $RAZOR_HOSTNAME and $RAZOR_PORT for brevity.
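With those variables exported, the API URL used throughout the later steps expands as follows (the hostname and port are the example values from above):

```shell
export RAZOR_HOSTNAME=centos6.4
export RAZOR_PORT=8080
# The API endpoint that the later wget and razor commands will hit:
echo "http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api"
```

This prints http://centos6.4:8080/api; substitute your own server name when you set RAZOR_HOSTNAME.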
If you don't have access to the internet or would like to pull the PE tarball from your own location,
you can use the class parameter pe_tarball_base_url and stipulate your own URL. Note that the
code assumes that the tarball still has the same name format as on our server.
1. Manually add the pe-razor class in the PE console. To do so, on the console sidebar, click the
Add classes button. Then, in Add classes, under "Don't see a class?", type in pe-razor and click the
green plus (+) button. For information about adding a class and classifying the Razor server
using the PE console, see the Adding New Classes and Editing Classes on Nodes sections of this
guide.
Note: You can also add the following to site.pp:
node <AGENT_CERT>{
include pe_razor
}
2. On the Razor server, run puppet with puppet agent -t (otherwise you have to wait for the
scheduled agent run).
Load iPXE Software
You must set your machines to PXE boot. Without PXE booting, Razor has no way to interact with a
system. This is OK if the node has already been enrolled with Razor and is installed, but it will
prevent any changes on the server (for example, an attempt to reinstall the system) from having any
effect on the node. Razor relies on seeing when a machine boots, and starts all its interactions
with a node when that node boots.
Razor provides a specific iPXE boot image to ensure you're using a compatible version.
1. Download the iPXE boot image undionly-20140116.kpxe.
2. Copy the image to /var/lib/tftpboot: cp undionly-20140116.kpxe /var/lib/tftpboot.
3. Download the iPXE bootstrap script from the Razor server to the /var/lib/tftpboot directory:
wget "http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api/microkernel/bootstrap?nic_max=1" -O /var/lib/tftpboot/bootstrap.ipxe
Note: Make sure you don't use localhost as the name of the Razor host. The bootstrap script
chain-loads the next iPXE script from the Razor server. This means it has to contain the correct
hostname for clients to fetch that script from, or it isn't going to work.
Verify the Razor Server
Test the new Razor configuration: wget http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api -O
test.out.
The command should execute successfully, and the output JSON file test.out should contain a list
of available Razor commands.
2. You can verify that the Razor client is installed by printing Razor help:
razor -u http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api
3. You'll likely get this warning message about JSON: MultiJson is using the default adapter
(ok_json). We recommend loading a different JSON library to improve performance. This
message is harmless, but you can disable it with this command:
gem install json_pure
Note: There is also a razor-client gem that contains the open source Razor client.
We strongly recommend that you not install the two clients simultaneously, and that you only use
pe-razor-client with the Razor shipped as part of Puppet Enterprise. If you already have razor-client installed, or are not sure whether you do, run gem uninstall razor-client prior to step 1
above.
Uninstall Razor
To uninstall the Razor Server:
1. Run yum erase pe-razor.
2. Drop the PostgreSQL database that the server used.
3. Change DHCP/TFTP so that the machines that have been installed will continue to boot outside
the scope of Razor.
To uninstall the Razor client:
Run gem uninstall pe-razor-client.
Include Repos
A repo contains all of the actual bits used when installing a node with Razor. The repo is identified
by a unique name, such as centos-6.4. The instructions for an installation are contained in tasks,
which are described below.
To load a repo onto the server, use razor create-repo --name=<repo name> --iso-url
<URL>.
For example: razor create-repo --name=centos-6.4 --iso-url
http://mirrors.usc.edu/pub/linux/distributions/centos/6.4/isos/x86_64/CentOS-6.4-x86_64-minimal.iso.
Note: Creating the repo can take five or so minutes, plus however long it takes to download the ISO
and unpack the contents. Currently, the best way to find out the status is to check the log file.
Include Brokers
Brokers are responsible for handing a node off to a configuration management system like Puppet
Enterprise. Brokers consist of two parts: a broker type and information that is specific to the broker
type.
The broker type is closely tied to the configuration management system that the node is being
handed off to. Generally, it consists of a shell script template and a description of what additional
information must be specified to create a broker from that broker type.
For the Puppet Enterprise broker type, this information consists of the node's server and the
version of PE that a node should use. The PE version defaults to latest unless you stipulate a
different version.
You create brokers with the create-broker command. For example, the following sets up a simple
no-op broker that does nothing: razor create-broker --name=noop --broker-type=noop.
This command sets up the PE broker, which requires the server parameter:
razor create-broker --name foo --broker-type puppet-pe --configuration '{
"server": "puppet.example.com" }'
Include Tasks
Tasks describe a process or collection of actions that should be performed while provisioning
machines. They can be used to designate an operating system or other software that should be
installed, where to get it, and the configuration details for the installation.
Tasks are structurally simple. They consist of a YAML metadata file and any number of ERB
templates. You include the tasks you want to run in your policies (policies are described in the next
section).
Razor provides a handful of existing tasks, or you can create your own. To learn more about tasks,
see Writing Tasks and Templates.
Create Policies
Policies orchestrate repos, brokers, and tasks to tell Razor what bits to install, where to get the bits,
how they should be configured, and how to communicate between a node and PE.
Note: Tags are named rule-sets that identify which nodes should be attached to a given policy.
Because policies contain a good deal of information, it's handy to save them in a JSON file that you
reference when you create the policy. Here's an example of a policy called centos-for-small. This policy
stipulates that it should be applied to the first 20 nodes that have no more than two processors that
boot.
{
"name": "centos-for-small",
"repo": { "name": "centos-6.4" },
"task": { "name": "centos" },
"broker": { "name": "noop" },
"enabled": true,
"hostname": "host${id}.example.com",
"root_password": "secret",
"max_count": "20",
"tags": [{ "name": "small",
"rule": ["<=", ["num", ["fact", "processorcount"]], 2]}]
}
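As a quick illustration of what the small tag's rule encodes, the comparison it performs can be checked with ordinary shell arithmetic. This is only a sketch of the rule's logic, not Razor code, and the fact value below is a made-up example:

```shell
# The rule ["<=", ["num", ["fact", "processorcount"]], 2] matches nodes
# whose processorcount fact, read as a number, is at most 2.
processorcount=2   # example fact value; real values come from the node's facts
if [ "$processorcount" -le 2 ]; then
  echo "tag 'small' matches"
else
  echo "tag 'small' does not match"
fi
```

A node reporting processorcount=4 would fail the same test and not receive the tag.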
Policy Tables
You might create multiple policies and then retrieve the policies collection. The
policies are listed in order in a policy table. You can influence the order of policies as follows:
When you create a policy, you can include a before or after parameter in the request to
indicate where the new policy should appear in the policy table.
Using the move-policy command with before and after parameters, you can put an existing
policy before or after another one.
See Razor Command Reference for more information.
CREATE A POLICY
1. Create a file called policy.json and copy the following template text into it:
{
"name": "test_<NODE_ID>",
"repo": { "name": "<OS>" },
"task": { "name": "<INSTALLER>" },
"broker": { "name": "pe" },
"enabled": true,
"hostname": "node${id}.vm",
"root_password": "puppet",
"max_count": "20",
"tags": [{ "name": "<TAG_NAME>", "rule": ["in",["fact",
"macaddress"],"<NODE_MAC_ADDRESS>"]}]
}
2. Edit the options in the policy.json template with information specific to your environment.
3. Apply the policy by executing: razor create-policy --json policy.json.
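The three steps above can be sketched as follows. The values filled into the template are placeholders (the node name, tag name, and MAC address are invented for illustration), and the final razor command is commented out because it only works on a machine with the Razor client installed:

```shell
# Step 1-2: write the policy file with your values substituted in.
cat > /tmp/policy.json <<'EOF'
{
  "name": "test_node1",
  "repo": { "name": "centos-6.4" },
  "task": { "name": "centos" },
  "broker": { "name": "pe" },
  "enabled": true,
  "hostname": "node${id}.vm",
  "root_password": "puppet",
  "max_count": "20",
  "tags": [{ "name": "example_tag",
             "rule": ["in", ["fact", "macaddress"], "de:ad:be:ef:00:01"] }]
}
EOF

# Sanity-check the file before applying it:
python3 -m json.tool < /tmp/policy.json > /dev/null && echo "policy.json is valid JSON"

# Step 3: apply the policy (run on the Razor server).
# razor create-policy --json /tmp/policy.json
```

Validating the JSON first catches the stray-comma mistakes that otherwise surface as opaque server errors.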
You can also inspect the registered nodes by appending the node name to the command as follows.
The name of the node is generated by the server and follows the pattern nodeNNN, where NNN is an
integer. This command provides information such as the log path, hardware information,
associated policies, and facts.
razor nodes <NODE_NAME>
The following command opens a specific node's log: razor nodes <node name> log.
Next: Razor Command Reference
/svc URLs
The /svc namespace is an internal namespace, used for communication with the iPXE client, the
microkernel, and other internal components of Razor.
This namespace is not enumerated under /api.
Commands
The list of commands that the Razor server supports is returned as part of a request to GET /api in
the commands array. Clients can identify commands using the rel attribute of each entry in the
array, and should make their POST requests to the URL given in the url attribute.
Commands are generally asynchronous and return a status code of 202 Accepted on success. The
url property of the response generally refers to an entity that is affected by the command and can
be queried to determine when the command has finished.
Create new repo
There are two flavors of repositories: ones where Razor unpacks ISOs for you and serves their
contents, and ones that are somewhere else, for example, on a mirror you maintain. The first form
is created by creating a repo with the iso-url property; the server will download and unpack the
ISO image into its file system:
{
"name": "fedora19",
"iso-url": "file:///tmp/Fedora-19-x86_64-DVD.iso"
}
The second form is created by providing a url property when you create the repository; this form is
merely a pointer to a resource somewhere, and nothing will be downloaded onto the Razor server:
{
"name": "fedora19",
"url": "http://mirrors.n-ix.net/fedora/linux/releases/19/Fedora/x86_64/os/"
}
Delete a repo
The delete-repo command accepts a single repo name:
{
"name": "fedora16"
}
Create task
Razor supports both tasks stored in the filesystem and tasks stored in the database; for
development, it is highly recommended that you store your tasks in the filesystem. Details about
that can be found on the Wiki.
For production setups, it is usually better to store your tasks in the database. To create a task,
clients post the following to the /spec/create_task URL:
{
"name": "redhat6",
"os": "Red Hat Enterprise Linux",
"os_version": "6",
"description": "A basic installer for RHEL6",
"boot_seq": {
"1": "boot_install",
"default": "boot_local"
},
"templates": {
"boot_install": " ... ERB template for an ipxe boot file ...",
"installer": " ... another ERB template ..."
}
}
The body includes the fields os, os_version, description (a human-readable description),
boot_seq, and templates.
Create broker
To create a broker, clients post the following to the create-broker URL:
{
"name": "puppet",
"configuration": {
"server": "puppet.example.org",
"environment": "production"
},
"broker-type": "puppet"
}
The broker-type must correspond to a broker that is present on the broker_path set in
config.yaml.
The permissible settings for the configuration hash depend on the broker type and are declared
in the broker type's configuration.yaml.
Delete broker
A broker can be deleted by posting its name to the /spec/delete_broker command:
{
"name": "small"
}
If the broker is used by a policy, the attempt to delete the broker will fail.
Create tag
To create a tag, clients post the following to the /spec/create_tag command:
{
"name": "small",
"rule": ["=", ["fact", "processorcount"], "2"]
}
The name of the tag must be unique; the rule is a match expression.
Delete tag
A tag can be deleted by posting its name to the /spec/delete_tag command:
{
"name": "small",
"force": true
}
If the tag is used by a policy, the attempt to delete the tag will fail unless the optional parameter
force is set to true; in that case the tag will be removed from all policies that use it and then
deleted.
Update tag
The rule for a tag can be changed by posting the following to the /spec/update_tag_rule
command:
{
"name": "small",
"rule": "new-match-expression"
}
This will change the rule of the given tag to the new rule. The tag will be reevaluated against all
nodes, and each node's tag attribute will be updated to reflect whether the tag now matches or not;
i.e., the tag will be added to or removed from each node's tags as appropriate.
If the tag is used by any policies, the update will only be performed if the optional parameter force
is set to true. Otherwise, the command will return with status code 400.
Create policy
To create a policy, clients post the following to the create-policy URL:
{
"name": "a policy",
"repo": { "name": "some_repo" },
"task": { "name": "redhat6" },
"broker": { "name": "puppet" },
"hostname": "host${id}.example.com",
"root_password": "secret",
"max_count": "20",
"before"|"after": { "name": "other policy" },
"node_metadata": { "key1": "value1", "key2": "value2" },
"tags": [{ "name": "existing_tag"},
{ "name": "new_tag", "rule": ["=", "dollar", "dollar"]}]
}
The overall list of policies is ordered, and policies are considered in that order. When a new policy is
created, the entry before or after can be used to put the new policy into the table before or after
another policy. If neither before nor after is specified, the policy is appended to the policy table.
Tags, brokers, tasks, and repos are referenced by their name. Tags can also be created by providing
a rule; if a tag with that name already exists, the rule must be equal to the rule of the existing tag.
Hostname is a pattern for the host names of the nodes bound to the policy; eventually you'll be able
to use facts and other fun stuff there. For now, you get to say ${id} and get the node's DB id.
The max_count determines, at most, how many nodes can be bound to this policy at any given
point. This can either be set to nil, indicating that an unbounded number of nodes can be bound
to this policy, or to a positive integer to set an upper bound.
The node_metadata entry allows a policy to apply metadata to a node when it binds. It is NON-AUTHORITATIVE, in that it will not replace existing metadata on the node with the same keys; it will
only add keys that are missing.
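The non-authoritative merge can be pictured like this. This is a bash sketch of the semantics only, not Razor code, and the keys and values are invented:

```shell
#!/bin/bash
# Existing metadata already on the node:
declare -A meta=( [owner]="alice" )
# Metadata the policy would apply when it binds:
declare -A policy_meta=( [owner]="bob" [rack]="r7" )

for key in "${!policy_meta[@]}"; do
  if [ -z "${meta[$key]+set}" ]; then   # only add keys that are missing
    meta[$key]="${policy_meta[$key]}"
  fi
done

echo "owner=${meta[owner]} rack=${meta[rack]}"
```

The existing owner=alice survives while the missing rack key is filled in, so the line printed is owner=alice rack=r7.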
Move policy
This command makes it possible to change the order in which policies are considered when
matching against nodes. To put an existing policy into a different place in the policy table, use the
move-policy command with a body like:
{
"name": "a policy",
"before"|"after": { "name": "other policy" }
}
This will change the policy table so that a policy will appear before or after the policy other
policy.
Enable/disable policy
Policies can be enabled or disabled. Only enabled policies are used when matching nodes against
policies. There are two commands to toggle a policy's enabled flag: enable-policy and disable-policy, which both accept the same body, consisting of the name of the policy in question:
{
"name": "a policy"
}
For the modify-policy-max-count command, the new-count can be an integer, which must be larger than the number of nodes that are
currently bound to the policy, or null to make the policy unbounded.
Add/remove tags to/from Policy
You can add or remove tags from policies with add-policy-tag and remove-policy-tag,
respectively. In both cases, supply the name of a policy and the name of the tag. When adding a tag,
you can specify an existing tag or create a new one by supplying a name and rule for the new tag:
{
"name": "a-policy-name",
"tag" : "a-tag-name",
"rule": "new-match-expression" #Only for `add-policy-tag`
}
Delete policy
Policies can be deleted with the delete-policy command. It accepts the name of a single policy:
{
"name": "my-policy"
}
Note that this does not affect the installed status of a node, and therefore won't, by itself, cause a
node to be bound to another policy upon reboot.
Delete node
A single node can be removed from the database with the delete-node command. It accepts the
name of a single node:
{
"name": "node17"
}
Of course, if that node boots again at some point, it will be automatically recreated.
Reinstall node
This command removes a node's association with any policy and clears its installed flag; once the
node reboots, it will boot back into the microkernel and go through discovery, tag matching, and
possibly be bound to another policy. This command does not change the node's metadata or facts. Specify
which node to unbind by sending the node's name in the body of the request:
{
"name": "node17"
}
Set node IPMI credentials
{
"name": "node17",
"ipmi-hostname": "bmc17.example.com",
"ipmi-username": null,
"ipmi-password": "sekretskwirrl"
}
The various IPMI fields can be null (representing no value, or the NULL username/password as
defined by IPMI), and if omitted are implicitly set to the NULL value.
You must provide an IPMI hostname if you provide either a username or password, since we only
support remote, not local, communication with the IPMI target.
Reboot node
Razor can request a node reboot through IPMI, if the node has IPMI credentials associated. This
only supports hard power cycle reboots.
This is applied in the background, and will run as soon as execution slots are available for
the task; IPMI communication has some generous internal rate limits to prevent it from overwhelming
the network or host server.
This background process is persistent: if you restart the Razor server before the command is
executed, it will remain in the queue and the operation will take place after the server restarts.
There is no time limit on this at this time.
Multiple commands can be queued, and they will be processed sequentially, with no limitation on
how frequently a node can be rebooted.
If the IPMI request fails (that is, ipmitool reports it is unable to communicate with the node), the
request will be retried. No detection of actual results is included, though, so you may not know if
the command is delivered but fails to reboot the system.
This is not integrated with the IPMI power state monitoring, and you may not see power transitions
in the record, or through the node object if polling.
The format of the command is:
{
"name": "node1"
}
Set node desired power state
This command sets the desired power state for a node; if the node is observed to be in a different power state,
an IPMI command will be issued to change to the desired state.
The format of the command is:
{
"name": "node1234",
"to": "on"|"off"|null
}
The name field identifies the node to change the setting on.
The to field contains the desired power state to set. Valid values are on, off, or null (the JSON
NULL/nil value), which reflect "power on", "power off", and "do not enforce power state",
respectively.
Power state is enforced every time it is observed; by default this happens on a scheduled basis in
the background every few minutes.
Modify node metadata
Node metadata is similar to a node's facts, except metadata is what the administrators tell Razor
about the node rather than what the node tells Razor about itself.
Metadata is a collection of key => value pairs (like facts). Use the modify-node-metadata command
to add/update, remove, or clear a node's metadata. The request should look like:
{
"node": "node1",
"update": { # Add or update these keys
"key1": "value1",
"key2": "value2",
...
},
"remove": [ "key3", "key4", ... ], # Remove these keys
"no_replace": true # Do not replace keys on
# update. Only add new keys
}
or
{
"node": "node1",
"clear": true # Clear all metadata
}
As above, multiple updates and/or removes can be done in one command; however, clear can
only be done on its own (it doesn't make sense to update some details and then clear everything).
An error will also be returned if an attempt is made to update and remove the same key.
Update node metadata
The update-node-metadata command is a shortcut to modify-node-metadata that allows for
updating single keys on the command line or with a GET request with a simple data structure that
looks like.
{
"node" : "node1",
"key" : "my_key",
"value" : "my_val",
"no_replace": true #Optional. Will not replace existing keys
}
or
{
"node" : "node1",
"all" : true # Removes all keys
}
Collections
Along with the list of supported commands, a GET /api request returns a list of supported
collections in the collections array. Each entry contains at minimum url, spec, and name keys,
which correspond respectively to the endpoint through which the collection can be retrieved (via
GET), the type of collection, and a human-readable name for the collection.
A GET request to a collection endpoint will yield a list of JSON objects, each of which has at
minimum the following fields:
id
spec
name
Different types of objects may specify other properties by defining additional key-value pairs. For
example, here is a sample tag listing:
[
{
"spec": "http://localhost:8080/spec/object/tag",
"id": "http://localhost:8080/api/collections/objects/14",
"name": "virtual",
"rule": [ "=", [ "fact", "is_virtual" ], true ]
},
{
"spec": "http://localhost:8080/spec/object/tag",
"id": "http://localhost:8080/api/collections/objects/27",
"name": "group 4",
"rule": [
"in", [ "fact", "dhcp_mac" ],
"79-A8-C3-39-E4-BA",
"6C-35-FE-B7-BD-2D",
"F9-92-DF-E0-26-5D"
]
}
]
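Since every entry is guaranteed to carry id, spec, and name, a client can process a listing generically. This Python sketch parses a listing like the tag example above (trimmed to the guaranteed fields plus one type-specific key) and pulls out the names:

```python
import json

# A collection listing mirroring the sample above; extra per-type keys
# (such as "rule") are simply carried along with the guaranteed fields.
listing = json.loads("""
[
  {"spec": "http://localhost:8080/spec/object/tag",
   "id": "http://localhost:8080/api/collections/objects/14",
   "name": "virtual",
   "rule": ["=", ["fact", "is_virtual"], true]},
  {"spec": "http://localhost:8080/spec/object/tag",
   "id": "http://localhost:8080/api/collections/objects/27",
   "name": "group 4"}
]
""")

# Every entry has at least id, spec, and name.
names = [entry["name"] for entry in listing]
print(names)  # ['virtual', 'group 4']
```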
In addition, references to other resources are represented as JSON objects, in an array for a one- or many-to-many relationship or singly for a one-to-one relationship. Each reference object has the following fields:
url
obj_id
name
If the reference object is in an array, the obj_id field serves as a unique identifier within the array.
Other things
The default bootstrap iPXE file
A GET request to /api/microkernel/bootstrap will return an iPXE script that can be used to bootstrap nodes that have just PXE booted (it culminates in chain loading from the Razor server). The URL accepts the parameter nic_max, which should be set to the maximum number of network interfaces that respond to DHCP on any given machine. It defaults to 4.
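Fetching the script with a custom nic_max comes down to a GET against a URL like the one built here (the server address is a placeholder, not a real Razor host):

```python
from urllib.parse import urlencode, urljoin

def bootstrap_url(server, nic_max=4):
    """Build the microkernel bootstrap URL with the nic_max query
    parameter (documented default: 4)."""
    return urljoin(server, "/api/microkernel/bootstrap") + "?" + urlencode({"nic_max": nic_max})

print(bootstrap_url("http://razor.example.com:8080"))
# http://razor.example.com:8080/api/microkernel/bootstrap?nic_max=4
```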
The syntax for rule expressions is defined in lib/razor/matcher.rb. Expressions are of the form [op arg1 arg2 .. argn] where op is one of the operators below, and arg1 through argn are the arguments for the operator. If they are expressions themselves, they will be evaluated before op is evaluated.
The expression language currently supports the following operators:

["=", arg1, arg2]: whether arg1 equals arg2 (alias "eq")
["!=", arg1, arg2]: whether arg1 does not equal arg2 (alias "neq")
["not", arg]: logical negation of arg, where any value other than false and nil is considered true
["tag", arg]: the result (a boolean) of evaluating the tag with name arg against the current node
["fact", arg1 (, arg2)]: the value of the fact arg1 for the current node *
["metadata", arg1 (, arg2)]: the value of the metadata entry arg1 for the current node *
["in", arg1, arg2, ..., argn]: whether arg1 equals one of arg2 through argn
["num", arg1]: arg1 as a number
[">", arg1, arg2]: whether arg1 is greater than arg2 (alias "gt")
["<", arg1, arg2]: whether arg1 is less than arg2 (alias "lt")
[">=", arg1, arg2]: whether arg1 is greater than or equal to arg2 (alias "gte")
["<=", arg1, arg2]: whether arg1 is less than or equal to arg2 (alias "lte")
* Note: The fact and metadata operators take an optional second argument. If arg2 is
passed, it is returned if the fact/metadata entry arg1 is not found. If the fact/metadata entry
arg1 is not found and no second argument is given, a RuleEvaluationError is raised.
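To make the evaluation order concrete, here is a minimal Python sketch of an evaluator for this expression form. It covers only a handful of operators, and the node is reduced to a plain dict of facts; the authoritative implementation is lib/razor/matcher.rb:

```python
class RuleEvaluationError(Exception):
    pass

def evaluate(expr, facts):
    """Recursively evaluate a rule expression [op, arg1, ..., argn]
    against a node's facts. Non-list values are literals."""
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    if op == "fact":
        name = evaluate(args[0], facts)
        if name in facts:
            return facts[name]
        if len(args) > 1:          # optional default, per the note above
            return evaluate(args[1], facts)
        raise RuleEvaluationError("unknown fact: %s" % name)
    # All other operators evaluate their arguments first.
    vals = [evaluate(a, facts) for a in args]
    if op in ("=", "eq"):
        return vals[0] == vals[1]
    if op in ("!=", "neq"):
        return vals[0] != vals[1]
    if op == "not":
        # Anything other than false/nil counts as true.
        return vals[0] in (False, None)
    if op == "in":
        return vals[0] in vals[1:]
    raise RuleEvaluationError("unknown operator: %s" % op)

facts = {"is_virtual": True, "dhcp_mac": "79-A8-C3-39-E4-BA"}
print(evaluate(["=", ["fact", "is_virtual"], True], facts))        # True
print(evaluate(["in", ["fact", "dhcp_mac"],
                "79-A8-C3-39-E4-BA", "6C-35-FE-B7-BD-2D"], facts)) # True
print(evaluate(["fact", "no_such_fact", "fallback"], facts))       # fallback
```

Note how the two sample tag rules shown earlier (the "virtual" and "group 4" tags) are exactly this shape.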
1. Create a directory on the broker_path that is set in your config.yaml file. You can call it something like sample.broker. By default, the brokers directory in Razor.root is on that path.
2. Write a template for your broker install script. For example, create a file called broker.json and add the following:
{
"name": "pe",
"configuration": {
"server": "<PUPPET_MASTER_HOST>"
},
"broker-type": "puppet-pe"
}
The install.erb template must return a valid shell script, since tasks generally perform the handoff to the broker by running a command like curl -s <%= broker_install_url %> | /bin/bash. The server makes sure that the GET request to broker_install_url returns the broker's install script after interpolating the template.
In the install.erb template, you have access to two objects: node and broker. The node object gives you access to things like the node's facts (via node.facts["foo"]) and the node's tags (via node.tags).
The broker object gives you access to the configuration settings. For example, if your configuration.yaml specifies that a setting version must be provided when creating a broker from this broker type, you can access the value of version for the current broker as broker.version.
For each parameter, you can provide a human-readable description and indicate whether this
parameter is required. Parameters that are not explicitly required are optional.
Next: Razor Tasks
Once you've automated the install for your operating system (for example, via kickstart or preseed), turning that into a task is a matter of writing a bit of metadata and templating some of the things that your task does. For examples, check out the stock tasks that ship with Razor.
Tasks are stored in the file system. The configuration setting task_path determines where in the file system Razor looks for tasks and can be a colon-separated list of paths. Relative paths in that list are taken to be relative to the top-level Razor directory. For example, setting task_path to /opt/puppet/share/razor-server/tasks:/home/me/task:tasks will make Razor search these three directories in that order for tasks.
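The path resolution can be sketched as follows, with razor_root standing in for the top-level Razor directory (the function name is illustrative, not part of Razor):

```python
import os

def task_search_paths(task_path, razor_root):
    """Split a colon-separated task_path; relative entries are taken
    relative to the top-level Razor directory."""
    paths = []
    for entry in task_path.split(":"):
        if not os.path.isabs(entry):
            entry = os.path.join(razor_root, entry)
        paths.append(entry)
    return paths

print(task_search_paths(
    "/opt/puppet/share/razor-server/tasks:/home/me/task:tasks",
    "/opt/razor"))
# ['/opt/puppet/share/razor-server/tasks', '/home/me/task', '/opt/razor/tasks']
```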
Task Metadata
Tasks can include the following metadata in the task's YAML file. This file is called NAME.yaml, where NAME is the task name.
Only os_version and boot_sequence are required. The base key allows you to derive one task from another by reusing some of the base metadata and templates. If the derived task has metadata that's different from the metadata in base, the derived metadata overrides the base task's metadata.
The boot_sequence hash indicates which templates to use when a node using this task boots. In the example above, a node will first boot using boot_templ1, then using boot_templ2. For every subsequent boot, the node will use boot_local.
Writing Templates
Task templates are ERB templates and are searched in all the directories given in the task_path configuration setting. Templates are searched in the subdirectories in this order:
1. name/os_version
2. name
3. common
If the task has a base task, the base task's template directories are searched just before the common directory.
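Assuming the subdirectory layout above, the resulting search order for a task (and, optionally, its base task) might be computed like this; the task and base names are illustrative, and the exact layout of the base task's directories is an assumption:

```python
def template_search_order(name, os_version, base=None):
    """Subdirectories searched for a task's templates, in order:
    name/os_version, then name, then (for derived tasks) the base
    task's directories, then common."""
    dirs = ["%s/%s" % (name, os_version), name]
    if base:
        dirs.extend(["%s/%s" % (base, os_version), base])
    dirs.append("common")
    return dirs

print(template_search_order("redhat7", "7"))
# ['redhat7/7', 'redhat7', 'common']
print(template_search_order("my_task", "7", base="redhat7"))
# ['my_task/7', 'my_task', 'redhat7/7', 'redhat7', 'common']
```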
TEMPLATE HELPERS
Templates can use the following helpers to generate URLs that point back to the server; all of the
URLs respond to a GET request, even the ones that make changes on the server:
file_url(TEMPLATE): the URL that will retrieve TEMPLATE.erb (after evaluation) from the current node's task.
repo_url(PATH): the URL to the file at PATH in the current repo.
log_url(MESSAGE, SEVERITY): the URL that will log MESSAGE in the current node's log.
node_url: the URL for the current node.
store_url(VARS): the URL that will store the values in the hash VARS in the node. Currently only changing the node's IP address is supported. Use store_url("ip" => "192.168.0.1") for that.
stage_done_url: the URL that tells the server that this stage of the boot sequence is finished, and that the next boot sequence should begin upon reboot.
broker_install_url: a URL from which the install script for the node's broker can be retrieved.
You can see an example in the script, os_complete.erb, which is used by most tasks.
Each boot (except for the default boot) must culminate in something akin to curl <%=
stage_done_url %> before the node reboots. Omitting this will cause the node to reboot with the
same boot template over and over again.
The task must indicate to the Razor server that it has successfully completed by performing a GET request against stage_done_url("finished"), for example using curl or wget. This will mark the node as installed in the Razor database.
You use these helpers by causing your script to perform an HTTP GET against the generated URL.
This might mean that you pass an argument like ks=<%= file_url("kickstart")%> when booting
a kernel, or that you put curl <%= log_url("Things work great") %> in a shell script.
Next: Razor Configuration & Known Issues
To successfully use a machine with Razor and install an operating system on it, the machine must:
Be supported by the operating system to be installed on it.
Be able to successfully boot into the microkernel, which is based on Fedora 19.
Be able to successfully boot the iPXE firmware.
USING RAZOR
The repo contains the actual bits that are used when installing a node; the installation instructions are contained in tasks. Razor comes with a few predefined tasks to get you started. They can be found in the tasks/ directory in the razor-server repo, and they can all be used by simply mentioning their name in a policy. This includes the vmware_esxi installer.
Known Issues
Razor doesn't handle local time jumps
The Razor server is sensitive to large jumps in the local time, like the one experienced by a VM after it has been suspended for some time and then resumed. In that case, the server will stop processing background tasks, such as the creation of repos. To remedy this, restart the server with service pe-razor-server restart.
JSON warning
When you run Razor commands, you might get this warning: MultiJson is using the default adapter (ok_json). We recommend loading a different JSON library to improve performance.
You can disregard the warning since this situation is completely harmless. However, if you're using Ruby 1.8.7, you can install a separate JSON library, such as json_pure, to prevent the warning from appearing.
Razor hangs in VirtualBox 4.3.6
We're finding that VirtualBox 4.3.6 gets to the point of downloading the microkernel from the Razor server and hangs at 0% indefinitely. We don't have this problem with VirtualBox 4.2.22.
Using Razor on Windows
Windows support is ALPHA quality. The purpose of the current Windows installer is to get real world
experience with Windows installation automation, and to discover the missing features required to
fully support Windows infrastructure.
Temp files aren't removed in a timely manner
This is due to Ruby code working as designed, and while it takes longer to remove temporary files than you might expect, the files are eventually removed when the object is finalized.
The no_replace parameter is ignored for the update-node-metadata command
This parameter is not currently working.
The error is meant to indicate that you cannot supply both those attributes at the same time on a
single repo instance.
Updates might be required for VMware ESXi 5.5 igb files
You might have to update your VMware ESXi 5.5 ISO with updated igb drivers before you can install ESXi with Razor. See this driver package download page on the VMware site for the updated igb drivers you need.
Next: Cloud Provisioning Overview
Note for Puppet users: Most of the information in these sections applies to Puppet as well as PE. However, provisioning on VMware is only supported by Puppet Enterprise.
Tools
PE's provisioning tools are built on the node, node_vmware, node_aws, and node_gce subcommands. Each of these subcommands has a selection of available actions (such as list and start) that are used to complete specific provisioning tasks. You can get detailed information about a subcommand and its actions by running puppet help and puppet man.
The VMware, AWS, and GCE subcommands are only used for cloud provisioning tasks. Node, on the
other hand, is a general purpose Puppet subcommand that includes several provisioning-specic
actions. These are:
classify
init
install
The clean action may also be useful when decommissioning nodes.
The cloud provisioning tools, except for GCE, are powered by Fog, the Ruby cloud services library. Fog is automatically installed on any machine receiving the cloud provisioner component.
Next: Installing and Configuring Cloud Provisioner
Prerequisites
Services
The following services and credentials are required:
VMware requires: VMware vSphere 4.0 (or later) and VMware vCenter
Amazon Web Services requires: An existing Amazon account with support for EC2
Google Compute Engine requires: An existing Google account and billing information.
Installing
Cloud provisioning tools are installed automatically as part of the web-based PE install. If you don't want to install the cloud provisioning tools, then use an answer file with your Puppet Enterprise installation, and set the q_puppet_cloud_install option to N.
If you install PE without installing the cloud provisioning tools, and then decide you want to install
them, you can do so using the package manager of your choice (Yum, APT, etc.). The packages you
need are: pe-cloud-provisioner and pe-cloud-provisioner-libs. They can be found in the packages
directory of the installer tarball.
Conguring
To create new virtual machines with Puppet Enterprise, you'll need to first configure the services you'll be using.
Start by creating a file called .fog in the home directory of the user who will be provisioning new nodes.
$ touch ~/.fog
This will be the configuration file for Fog, the cloud abstraction library that powers PE's provisioning tools. Once it is filled out, it will consist of a YAML hash indicating the locations of your cloud services and the credentials necessary to control them. For example:
:default:
:vsphere_server: vc01.example.com
:vsphere_username: cloudprovisioner
:vsphere_password: abc123
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
:aws_access_key_id: AKIAIISJV5TZ3FPWU3TA
:aws_secret_access_key: ABCDEFGHIJKLMNOP1234556/s
:vsphere_server: vc01.prod.example.com
:vsphere_username: cloudprovisioner
:vsphere_password: abc123
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
:aws_access_key_id: AKIAIISJV5TZ3FPWU3TA
:aws_secret_access_key: ABCDEFGHIJKLMNOP1234556/s
You can access these configurations by prepending cloud provisioner commands with a special environment variable, FOG_CREDENTIAL:
This will result in an error message containing the server's public key hash:
notice: Connecting ...
err: The remote system presented a public key with hash
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8 but
we're expecting a hash of <unset>. If you are sure the remote system is
authentic set vsphere_expected_pubkey_hash: <the hash printed in this
message> in ~/.fog
err: Try 'puppet help node_vmware list' for usage
Select the Security Credentials menu and choose Access Credentials. Click on the Access Keys tab to
view your Access Keys.
You need to record two pieces of information: the Access Key ID and the Secret Key ID. To see your
Secret Access Key, click the Show link under Secret Access Key.
Put both keys in your ~/.fog file as described above. You will also need to generate an SSH private key using Horizon, or simply import a selected public key.
Additional AWS Conguration
For Puppet to provision nodes in Amazon Web Services, you will need an EC2 account with the following:
At least one Amazon-managed SSH key pair.
A security group that allows outbound traffic on ports 8140 and 61613, and inbound SSH traffic on port 22 from the machine being used for provisioning.
You'll need to provide the names of these resources as arguments when running the provisioning commands.
KEY PAIRS
To nd or create your Amazon SSH key pair, browse to the Amazon Web Service EC2 console.
Select the Key Pairs menu item from the dashboard. If you don't have any existing key pairs, you can create one with the Create Key Pairs button. Specify a new name for the key pair to create it; the private key file will be automatically downloaded to your host.
Make a note of the name of your key pair, since you will need to know it when creating new
instances.
SECURITY GROUP
To add or edit a security group, select the Security Groups menu item from the dashboard. You
should see a list of the available security groups. If no groups exist, you can create a new one by
clicking the Create Security Groups button. Otherwise, you can edit an existing group.
To add the required rules, select the Inbound tab and add an SSH rule. Make sure that inbound SSH traffic is using port 22. You can also lock access down to an appropriate source IP or network. Click Add Rule to add the rule, then click Apply Rule Changes to save.
You should also ensure that your security group allows outbound traffic on ports 8140 and 61613. These are the ports PE uses to request configurations and listen for orchestration messages.
Demonstration
The following video demonstrates the setup process and some basic functions:
options for working with your project are displayed in the left navigation bar.
In the left-hand navigation bar, click APIs and auth and then click Registered Apps.
Click the REGISTER APP button. Give your app a name (it can be whatever you like) and select Native as the platform.
Click Register. Your app's page opens, and a CLIENT ID and CLIENT SECRET are provided. Note: You'll need the ID and secret, so capture these for future reference.
Now, in PE, run puppet node_gce register <client ID> <client secret> and follow the online instructions. You'll get a URL to visit in your browser. There, you'll log into your Google account and grant permission for your node to access GCE.
Once permission is granted, you'll get a token of about 64 characters. Copy this token as requested into your node_gce run to complete the registration.
Next: Provisioning with VMware
If you haven't yet confirmed your vSphere server's public key hash in your ~/.fog file, you'll receive an error message containing said hash:
$ puppet node_vmware list
notice: Connecting ...
err: The remote system presented a public key with hash
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8 but
we're expecting a hash of <unset>. If you are sure the remote system is
authentic set vsphere_expected_pubkey_hash: <the hash printed in this
message> in ~/.fog
err: Try 'puppet help node_vmware list' for usage
Confirm that you are communicating with the correct, trusted vSphere server by checking the hostname in your ~/.fog file, then add the hash to the .fog file as follows:
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
Now you should be able to run the puppet node_vmware list command and see a list of existing
virtual machines:
$ puppet node_vmware list
notice: Connecting ...
notice: Connected to vc01.example.com as cloudprovisioner (API version 4.1)
notice: Finding all Virtual Machines ... (Started at 12:16:01 PM)
notice: Control will be returned to you in 10 minutes at 12:26 PM if locating
is unfinished.
Locating: 100% |ooooooooooooooooooooooooooooooooooooooooooooooooooo|
Time: 00:00:34
notice: Complete
/Datacenters/Solutions/vm/master_template
powerstate: poweredOff
name: master_template
hostname: puppetmaster.example.com
instanceid: 5032415e-f460-596b-c55d-6ca1d2799311
ipaddress: ---.---.---.---
template: true
/Datacenters/Solutions2/vm/puppetagent
powerstate: poweredOn
name: puppetagent
hostname: agent.example.com
instanceid: 5032da5d-68fd-a550-803b-aa6f52fbf854
ipaddress: 192.168.100.218
template: false
This shows that you're connected to your vSphere server, and lists an available VMware template (master_template) and one virtual machine (agent.example.com). VMware templates contain the information needed to build new virtual machines, such as the operating system, hardware configuration, and other details.
Specifically, list will return all of the following information:
The location of the template or machine
The status of the machine (for example, poweredOff or poweredOn)
The name of the template or machine on the vSphere server
The host name of the machine
The instanceid of the machine
The IP address of the machine (note that templates don't have IP addresses)
The type of entry, either a VMware template or a virtual machine
Puppet Enterprise can create and manage virtual machines from VMware templates using the
node_vmware create action.
Here node_vmware create has built a new virtual machine named newpuppetmaster with a
template of /Datacenters/Solutions/vm/master_template. (This is the template seen earlier with
the list action.) The virtual machine will be powered on, which may take several minutes to
complete.
Important: All ENC connections to cloud nodes now require SSL support.
The following video demonstrates the above and some other basic functions:
You can see we've specified the path to the virtual machine we wish to start, in this case /Datacenters/Solutions/vm/newpuppetmaster.
To stop a virtual machine, use:
$ puppet node_vmware stop /Datacenters/Solutions/vm/newpuppetmaster
This will stop the running virtual machine (which may take a few minutes).
Lastly, we can terminate a VMware instance. Be aware this will:
Force-shutdown the virtual machine
Delete the virtual machine AND its hard disk images
This is a destructive and permanent action that should only be taken when you wish to delete the
virtual machine and its data!
The following video demonstrates the termination process and some other related functions:
The puppet node_vmware command has extensive in-line help and a man page.
To see the available actions and command line options, run:
$ puppet help node_vmware
USAGE: puppet node_vmware <action>
This subcommand provides a command line interface to work with VMware vSphere
Virtual Machine instances. The goal of these actions is to easily create
new virtual machines, install Puppet onto them, and clean up when they're
no longer required.
OPTIONS:
--render-as FORMAT - The rendering format to use.
--verbose - Whether to log verbosely.
--debug - Whether to log debug information.
ACTIONS:
create Create a new VM from a template
find Find a VMware Virtual Machine
list List VMware Virtual Machines
start Start a Virtual Machine
stop Stop a running Virtual Machine
terminate Terminate (destroy) a VM
See 'puppet man node_vmware' or 'man puppet-node_vmware' for full help.
For example:
$ puppet help node_vmware start
The output gives you a list of instances running in each geographical zone (this example only shows two of the available zones). You can see that there is one registered instance on GCE. The information that's provided for the instance includes the SSH key used to establish the connection, the type of project (in this case, n1-standard-1), which was set during registration, and the image that the instance contains. Here, the image is a Debian Wheezy OS.
Note: If you have no instances running, each zone that's listed will give the message, no instances in zone.
Once run, you'll get the message, Creating the VM is pending. When it's complete, you will see the new instance listed in your Google Cloud Console.
Using bootstrap
The node_gce bootstrap subcommand creates and installs a puppet agent.
It includes the following options:
project lists the project
node name (for example cloud-provisioner-testing-1)
$ puppet node_gce --trace bootstrap --project cloud-provisioner-testing-1 \
  peagent n1-standard-1 \
  --image debian-7-wheezy-v20130816 --login myname \
  --install-script puppet-enterprise-http \
  --installer-answers agent_no_cloud.answer.sample \
  --installer-payload 'http://commondatastorage.googleapis.com/peinstall%2Fpuppet-enterprise-3.3.0-rc2-8-g629db7a-debian-7-amd64.tar.gz'
In the above example, the installation tarball was uploaded to Google Cloud Storage (shown below) to make the process faster. (Note: By selecting the Shared Publicly check box, you can avoid having to sign in while this process runs. Don't forget to clear the check box when you're done.)
When you run the bootstrap subcommand, you'll get status messages for each stage, such as: Waiting for SSH response and Installing Puppet.
If you don't have certificate autosigning turned on, you'll get a message that certificate signing failed. In this case, you can go to your Puppet Enterprise console and check the node requests.
Just click the Accept button. Once the certificate request has been accepted, the new agent is displayed in the PE console, where you can configure and manage it.
After you run this command, wait a few moments, and then you'll get the message, Deleting the VM is done. You can confirm that the instance was deleted by checking your Google Cloud Console.
The following video demonstrates using many node_gce subcommands.
For example,
state: running
i-01a33662:
created_at: Sat Nov 12 04:32:25 UTC 2011
dns_name: ec2-107-22-79-148.compute-1.amazonaws.com
id: i-01a33662
state: running
This shows three running EC2 instances. For each instance, the following characteristics are shown:
The instance name
The date the instance was created
The DNS host name of the instance
The ID of the instance
The state of the instance, for example: running or terminated
If you have no instances running, nothing will be returned.
1.compute.amazonaws.com
ec2-50-18-93-82.us-east-1.compute.amazonaws.com
You've created a new instance using an AMI of ami-edae6384, a key named cloudprovisioner, and of the machine type m1.small. If you've forgotten the available key names on your account, you can get a list with the node_aws list_keynames action:
You can also specify a variety of other options, including the region in which to start the instance.
You can see a full list of these options by running puppet help node_aws create.
After the instance has been created, the public DNS name of the instance will be returned. In this
case: ec2-50-18-93-82.us-east-1.compute.amazonaws.com.
Using bootstrap
The bootstrap action is a wrapper that combines several actions, allowing you to create, classify, install Puppet on, and sign the certificate of EC2 machine instances. Classification is done via the console.
In addition to the three options required by create (see above), bootstrap also requires the
following:
The name of the user Puppet should use when logging in to the new node (--login or --username).
The path to a local private key that allows SSH access to the node (--keyfile). Typically, this is the path to the private key that gets downloaded from the Amazon EC2 site.
The example below will bootstrap a node using the ami-0530e66c image, located in the US East region and running as a t1.micro machine type.
puppet node_aws bootstrap
--region us-east-1
--image ami-0530e66c
--login root --keyfile ~/.ec2/ccaum_rsa.pem
--keyname ccaum_rsa
--type t1.micro
Demo
The following video demonstrates the EC2 instance creation process in more detail:
$ cp mykey.pem ~/.ssh/mykey.pem
Ensure the .ssh directory and the key have appropriate permissions.
You can now use this key to connect to your new instance:
$ ssh -i ~/.ssh/mykey.pem root@ec2-50-18-93-82.us-east-1.compute.amazonaws.com
...
notice: Destroying i-df7ee898 (ec2-50-18-93-82.us-east-1.compute.amazonaws.com)
... Done
The following video demonstrates the EC2 instance termination process in more detail:
For more detailed help, you can also view the man page:
$ puppet man node_aws
For example,
$ puppet help node_aws list
Classifying nodes
Once you have created instances for your cloud infrastructure, you need to start configuring them and adding the files, settings, and/or services needed for their intended purposes. The fastest and easiest way to do this is to add them to your existing console groups. You can do this by assigning groups to nodes or nodes to groups with the console's web interface. However, you can also work right from the command line, which can be more convenient if you're already at your terminal and have the node's name ready at hand.
To classify nodes and add them to a console group, run puppet node classify as follows.
Note - With classify and init, you need to specify the --insecure option because the PE console uses the internal certificate name, pe-internal-dashboard, which fails verification because it doesn't match the host name of the host where the console is running.
$ puppet node classify \
--insecure \
--node-group=appserver_pool \
--enc-server=localhost \
--enc-port=443 \
--enc-auth-user=console \
--enc-auth-passwd=password \
ec2-50-19-149-87.compute-1.amazonaws.com
notice: Contacting https://localhost:443/ to classify
ec2-50-19-149-87.compute-1.amazonaws.com
complete
The above example adds an AWS EC2 instance to the console. Note that you use the name of the node you are classifying as the command's argument and the --node-group option to specify the group you want to add your new node to. The other options contain the connection and authentication data needed to properly connect to the node.
Important: All ENC connections to cloud nodes now require SSL support.
Note that until the first puppet run is performed on this node, Puppet itself will not yet be installed. (Unless one of the wrapper commands has been used. See below.)
To see additional help for node classification, run puppet help node classify. For more about how the console groups and classifies nodes, see the section on grouping and classifying. You may also wish to review the basics of Puppet classes and configuration to help you understand how groups and classes interact.
The process of adding a node to the console is demonstrated in the following video:
Installing Puppet
Use the puppet node install command to install PE components onto the new instances.
details.
In addition to these default configuration options, you can specify a number of additional options to control how and what we install on the host. You can control the version of Facter to install, the specific answers file to use to configure Puppet Enterprise, the certificate name of the agent to be installed, and a variety of other options. To see a full list of the available options, use the puppet help node install command.
The process of installing Puppet on a node is demonstrated in detail in the following video:
For example:
$ puppet node init \
--insecure \
--node-group=appserver_pool \
--enc-server=localhost \
--enc-port=443 \
--enc-auth-user=console \
--enc-auth-passwd=password \
--install-script=puppet-enterprise \
--keyfile=~/.ssh/mykey.pem \
--login=root \
ec2-50-19-207-181.compute-1.amazonaws.com
The invocation above will connect to the console, classify the node in the appserver_pool group,
and then install Puppet Enterprise on this node.
Using autosign.conf
Alternatively, if your CA puppet master has the autosign setting configured, it can sign certificates automatically. While this can greatly simplify the process, there are some security issues associated with going this route, so be sure you are comfortable with the process and know the risks.
Next: Sample Cloud Provisioning Workow
with puppet node_vmware create. This gives him a new node with the following characteristics:
a complete OS already installed
whatever is contained in the VMware template he specified as an option of the create action
no Puppet installed on it yet
not yet configured to function as a CloudWidget application server
When Tom first configured Puppet, he set up his workstation with the ability to remotely sign certificates. He did this by creating a certificate/key pair and then modifying the CA's auth.conf to allow that certificate to perform authentication tasks. (To find out more about how to do this, see the auth.conf documentation and the HTTP API guide.)
This allows Tom to use puppet node init to complete the process of getting the new node up and running. Puppet node init is a wrapper command that will install Puppet, classify the node, and sign the certificate (puppet certificate sign or puppet cert sign). Classifying the node tells Puppet which configuration groups and classes should be applied to the node. In this case, applying the cloudwidget_appserv class configures the node with all the settings, files, and database hooks needed to create a fully configured, ready-to-run app server tailored to the CloudWidget environment.
Note: If Tom had not done the prep work needed for remote signing of certificates, he could run the puppet node install, puppet node classify, and puppet cert sign commands separately.
Now Tom needs to run Puppet on the new node in order to apply the configuration. He could wait
30 minutes for Puppet to run automatically, but instead he SSHs into the machine and runs Puppet
interactively with puppet agent --test.
At this point, Tom has:
A new virtual machine node with Puppet installed.
A node with a signed certificate that is an authorized member of the CloudWidget deployment.
A node fully configured by Puppet with all of the bits and pieces needed to go live and start
doing real work as a fully functioning CloudWidget application server.
The CloudWidget infrastructure is now scaled and running at acceptable loads. Tom leans back and
takes a sip of his coffee. It's still hot.
Next: The pe_accounts::user Type
Usage Example
# /etc/puppetlabs/puppet/modules/site/manifests/users.pp
class site::users {
  # Declaring a dependency: we require several shared groups from the
  # site::groups class (see below).
  Class[site::groups] -> Class[site::users]

  # Setting resource defaults for user accounts:
  Pe_accounts::User {
    shell => '/bin/zsh',
  }

  # Declaring our pe_accounts::user resources:
  pe_accounts::user {'puppet':
    locked  => true,
    comment => 'Puppet Service Account',
    home    => '/var/lib/puppet',
    uid     => '52',
    gid     => '52',
  }
  pe_accounts::user {'sysop':
    locked  => false,
    comment => 'System Operator',
    uid     => '700',
  }
}
Parameters
Many of the type's parameters echo those of the standard user type.
name
The user's name. While limitations differ by operating system, it is generally a good idea to restrict
user names to 8 characters, beginning with a letter. Defaults to the resource's title.
ensure
Specifies whether the user and its primary group should exist. Valid values are present and
absent. Defaults to present. Note that when a user is created, a group with the same name as the
user is also created.
shell
The user's login shell. The shell must exist and be executable. Defaults to /bin/bash.
comment
A description of the user. Generally a user's full name. Defaults to the user's name.
home
The home directory of the user. Defaults to /home/<user's name>.
uid
The user's uid number. Must be specified numerically; defaults to being automatically determined
(undef).
gid
The gid of the primary group with the same name as the user. The pe_accounts::user type will
create and manage this group. Must be specified numerically; defaults to being automatically
determined (undef).
groups
An array of groups the user belongs to. The primary group should not be listed. Defaults to an
empty array.
membership
Whether specified groups should be considered the complete list (inclusive) or the minimum list
(minimum) of groups to which the user belongs. Valid values are inclusive and minimum; defaults
to minimum.
password
The user's password, in whatever encrypted format the local machine requires. Be sure to enclose
any value that includes a dollar sign ($) in single quotes ('). Defaults to '!!', which prevents the
user from logging in with a password.
locked
Whether the user should be prevented from logging in. Set this to true for system users and users
whose login privileges have been revoked. Valid values are true and false; defaults to false.
sshkeys
An array of SSH public keys associated with the user. Unlike with the ssh_authorized_key type,
these should be complete public key strings that include the type and name of the key, exactly as
the key would appear in its id_rsa.pub or id_dsa.pub file. Defaults to an empty array.
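In other words, each entry carries all three space-separated fields of a .pub line. A hypothetical Python snippet illustrating that shape (the key material is a truncated placeholder):

```python
# Illustrative only: a complete public key string has three fields:
# the key type, the base64 key data, and the comment/name.
def pubkey_fields(key_string):
    key_type, key_data, comment = key_string.split(None, 2)
    return key_type, key_data, comment

fields = pubkey_fields("ssh-rsa AAAAB3Nza...snip... sysop@example.com")
print(fields[0], fields[2])  # → ssh-rsa sysop@example.com
```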
managehome
A boolean parameter that dictates whether or not a user's home directory should be managed by
the account type. If ensure is set to absent and managehome is true, the user's home directory will
be recursively deleted.
Next: The pe_accounts Class
Note: This class is assigned to the console's default group with no parameters, which will
prevent it from being redeclared with any configuration. To use the class, you must:
Unassign it from the default group in the console
Create a wrapper module that declares this class with the necessary parameters
Re-assign the wrapper class to whichever nodes need it
Usage Example
To use YAML files as a data store:
class {'pe_accounts':
  data_store => yaml,
}
class {'pe_accounts':
  data_store => namespace,
}
class {'pe_accounts':
  manage_users   => false,
  manage_groups  => false,
  manage_sudoers => true,
}
Data Stores
Account data can come from one of two sources: a Puppet class that declares three variables, or a
set of three YAML les stored in /etc/puppetlabs/puppet/data.
Using a Puppet Class as a Data Store
This option is most useful if you are able to generate or import your user data with a custom
function, which may be querying from an LDAP directory or some other data source.
The Puppet class containing the data must have a name ending in ::data. (We recommend
site::pe_accounts::data.) This class must declare the following variables:
$users_hash should be a hash in which each key is the title of a pe_accounts::user resource
and each value is a hash containing that resource's attributes and values.
$groups_hash should be a hash in which each key is the title of a group and each value is a hash
containing that resource's attributes and values.
See below for examples of the data formats used in these variables.
When declaring the pe_accounts class to use data in a Puppet class, use the following attributes:
pe_accounts_users_hash.yaml, which should contain an anonymous hash in which each key is
the title of a pe_accounts::user resource and each value is a hash containing that resource's
attributes and values.
pe_accounts_groups_hash.yaml, which should contain an anonymous hash in which each key is
the title of a group and each value is a hash containing that resource's attributes and values.
See below for examples of the data formats used in these files.
When declaring the pe_accounts class to use data in YAML files, use the following attribute:
Data Formats
This class uses three hashes of data to construct the pe_accounts::user and group resources it
manages.
THE USERS HASH
The users hash represents a set of pe_accounts::user resources. Each key should be the title of a
pe_accounts::user resource, and each value should be another hash containing that resources
attributes and values.
PUPPET EXAMPLE
$users_hash = {
  sysop => {
    locked  => false,
    comment => 'System Operator',
    uid     => '700',
    gid     => '700',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe sysop+moduledevkey@puppetlabs.com'],
  },
  villain => {
    locked  => true,
    comment => 'Test Locked Account',
    uid     => '701',
    gid     => '701',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe villain+moduledevkey@puppetlabs.com'],
  },
}
YAML EXAMPLE
sysop:
  locked: false
  comment: System Operator
  uid: '700'
  gid: '700'
  groups:
    - admin
    - sudonopw
  sshkeys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe sysop+moduledevkey@puppetlabs.com
villain:
  locked: true
  comment: Test Locked Account
  uid: '701'
  gid: '701'
  groups:
    - admin
    - sudonopw
  sshkeys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe villain+moduledevkey@puppetlabs.com
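Because a malformed hash only surfaces at catalog compile time, it can be worth sanity-checking the structure up front. A hypothetical Python sketch (the dict mirrors the YAML above; not part of PE):

```python
# Illustrative only: each key is a pe_accounts::user title, each value a
# hash of attributes; note that uid/gid are quoted strings in the examples.
users_hash = {
    "sysop": {"locked": False, "comment": "System Operator",
              "uid": "700", "gid": "700",
              "groups": ["admin", "sudonopw"]},
}

def check_users_hash(users):
    for title, attrs in users.items():
        assert isinstance(title, str) and isinstance(attrs, dict), title
        for key in ("uid", "gid"):
            assert isinstance(attrs.get(key), str), (title, key)
    return True

print(check_users_hash(users_hash))  # → True
```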
THE GROUPS HASH
The groups hash represents a set of shared group resources. Each key should be the title of a
group resource, and each value should be another hash containing that resource's attributes and
values.
PUPPET EXAMPLE
$groups_hash = {
  developer => {
    gid    => 3003,
    ensure => present,
  },
  sudonopw => {
    gid    => 3002,
    ensure => present,
  },
  sudo => {
    gid    => 3001,
    ensure => present,
  },
  admin => {
    gid    => 3000,
    ensure => present,
  },
}
YAML EXAMPLE
---
developer:
  gid: "3003"
  ensure: "present"
sudonopw:
  gid: "3002"
  ensure: "present"
sudo:
  gid: "3001"
  ensure: "present"
admin:
  gid: "3000"
  ensure: "present"
Parameters
manage_groups
Specifies whether or not to manage a set of shared groups, which can be used by all
pe_accounts::user resources. If true, your data store must define these groups in the
$groups_hash variable or the pe_accounts_groups_hash.yaml file. Allowed values are true and
false; defaults to true.
manage_users
Specifies whether or not to manage a set of pe_accounts::user resources. If true, your data store
must define these users in the $users_hash variable or the pe_accounts_users_hash.yaml file.
Allowed values are true and false; defaults to true.
manage_sudoers
Specifies whether or not to add sudo rules to the node's sudoers file. If true, the class will add
%sudo and %sudonopw groups to the sudoers file and give them full sudo and passwordless sudo
privileges respectively. You will need to make sure that the sudo and sudonopw groups exist in the
groups hash, and that your chosen users have those groups in their groups arrays. Managing
sudoers is not supported on Solaris.
Allowed values are true and false; defaults to false.
data_store
Specifies the data store to use for accounts and groups.
When set to namespace, data will be read from the Puppet class specified in the data_namespace
parameter. When set to yaml, data will be read from specially named YAML files in the
/etc/puppetlabs/puppet/data directory. (If you have changed your $confdir, it will look in
$confdir/data.) Example YAML files are provided in the ext/data/ directory of this module.
Allowed values are yaml and namespace; defaults to namespace.
data_namespace
Specifies the Puppet namespace from which to read data. This must be the name of a Puppet class,
and must end with ::data (we recommend using site::pe_accounts::data); the class will
automatically be declared by the pe_accounts class. The class cannot have any parameters, and
must declare variables named:
$users_hash
$groups_hash
See the pe_accounts::data class included in this module (in manifests/data.pp) for an example;
see the data formats section for information on each hash's data structure.
Defaults to pe_accounts::data.
sudoers_path
Specifies the path to the sudoers file on this system. Defaults to /etc/sudoers.
Next: Maintenance: Maintaining the Console & Databases
If the number of pending tasks appears to be growing linearly, the background task processes may
have died and left invalid PID files. To restart the worker tasks, run:
$ sudo /etc/init.d/pe-puppet-dashboard-workers restart
The number of pending tasks shown in the console should start decreasing rapidly after restarting
the workers.
reindex+vacuum will run both of the above commands on the console database.
To run the task, change your working directory to /opt/puppet/share/puppet-dashboard and
make sure your PATH variable contains /opt/puppet/bin (or use the full path to the rake binary).
Then run the task rake db:raw:optimize[mode]. You can disregard any error messages about
insufficient privileges to vacuum certain system objects because these objects should not require
vacuuming. If you believe they do, you can do so manually by logging in to psql (or your tool of
choice) as a database superuser.
Please note that before attempting a full vacuum, you should have at least as much free space
available as is currently in use on the partition where your PostgreSQL data is stored. If you are
using the PE-vendored PostgreSQL, the data is kept in /opt/puppet/var/lib/pgsql/.
The PostgreSQL docs contain more detailed information about vacuuming and reindexing.
Although this task should be run regularly as a cron job, the actual frequency at which you set it to
run will depend on your site's policies.
If you run the reports:prune task without any arguments, it will display further usage instructions.
The available units of time are yr, mon, wk, day, hr, and min.
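As a rough illustration of what those units mean, a hypothetical Python sketch that converts a retention spec into a cutoff timestamp (the real rake task does its own date arithmetic; the month and year lengths here are approximations):

```python
from datetime import datetime, timedelta

# Approximate seconds per unit, matching the unit names the task accepts.
UNIT_SECONDS = {"min": 60, "hr": 3600, "day": 86400,
                "wk": 7 * 86400, "mon": 30 * 86400, "yr": 365 * 86400}

def prune_cutoff(amount, unit, now):
    """Reports older than the returned timestamp would be pruned."""
    return now - timedelta(seconds=amount * UNIT_SECONDS[unit])

print(prune_cutoff(1, "wk", datetime(2014, 7, 15)))  # → 2014-07-08 00:00:00
```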
Database Backups
You can back up and restore your PE databases by using the standard PostgreSQL tool, pg_dump.
Best practices recommend hourly local backups and nightly backups to a remote system for the
console, console_auth, and puppetdb databases, or as dictated by your company policy.
Providing comprehensive documentation about backing up and restoring PostgreSQL databases is
beyond the scope of this guide, but the following commands should provide enough guidance
to perform backups and restorations of your PE databases.
To back up the databases, run:
su - pe-postgres -s /bin/bash
pg_dump pe-puppetdb -f /tmp/pe-puppetdb.backup --create
pg_dump console -f /tmp/console.backup --create
pg_dump console_auth -f /tmp/console_auth.backup --create
2. On the database server (which may or may not be the same as the console, depending on your
deployment's architecture), use the PostgreSQL administration tool of your choice to change the
user's password. With the standard psql client, you can do this with:
You will use the same procedure to change the console_auth database user's password, except you
will need to edit both the /opt/puppet/share/console-auth/db/database.yml and
/opt/puppet/share/rubycas-server/config.yml files.
The same procedure is also used for the PuppetDB user's password, except you'll edit
Warning: This procedure will enable insecure access to the PuppetDB instance on your
server.
If you are unfamiliar with editing class parameters in the console, refer to Editing Class Parameters
on Nodes.
Next: Troubleshooting the Installer
Note: If you have any custom Simple RPC agents, you will want to back these up. These are
located in the libdir configured in /etc/puppetlabs/mcollective/server.cfg.
On a monolithic (all-in-one) install, the databases and PE files will all be located on the same node
as the puppet master.
On a split install (master, console, PuppetDB/PostgreSQL each on a separate node), they will be
located across the various servers assigned to these PE components.
/etc/puppetlabs/: different versions of this directory can be found on the server assigned to
the puppet master component, the server assigned to the console component, and the server
assigned to the PuppetDB/PostgreSQL component. You should back up each version.
/opt/puppet/share/puppet-dashboard/certs: located on the server assigned to the console
component.
The console and console_auth databases: located on the server assigned to the
PuppetDB/PostgreSQL component.
The PuppetDB database: located on the server assigned to the PuppetDB/PostgreSQL component.
Purge the Puppet Enterprise Installation (Optional)
If you're planning on restoring your databases and PE files to the same server(s), you'll want to first
fully purge your existing Puppet Enterprise installation.
PE contains an uninstaller script located at /opt/puppet/bin/puppet-enterprise-uninstaller.
You can also run it from the same directory as the installer script in the PE tarball you originally
downloaded. To do so, run sudo ./puppet-enterprise-uninstaller -p -d. The -p and -d flags
are to purge all configuration data and local databases.
Important: If you have a split install, you will need to run the uninstaller on each server that
has been assigned a component.
After running the uninstaller, ensure that /opt/puppet/ and /etc/puppetlabs/ are no longer
present on the system.
For more information about using the PE uninstaller, refer to Uninstalling Puppet Enterprise.
Restore Your Database and Puppet Enterprise Files
1. Using the standard install process (run the puppet-enterprise-installer script), reinstall the
same version of Puppet Enterprise that was installed for the files you backed up.
If you have your original answer file, use it during the installation process; otherwise, be sure to
set the same database passwords you used during initial installation.
If you need to review the PE installation process, check out Installing Puppet Enterprise.
2. Run the following commands, in the order specied:
a. service pe-httpd stop
b. service pe-puppet stop
c. service pe-mcollective stop
d. service pe-puppet-dashboard-workers stop
e. service pe-activemq stop
f. service pe-puppetdb stop
3. Purge any locks remaining on the database from the services that were running earlier with
service pe-postgresql restart.
4. Run the following commands, in the order specied:
a. su - pe-postgres -s /bin/bash -c "psql"
b. drop database console;
c. drop database console_auth;
d. drop database "pe-puppetdb";
e. \q
Note: During this process, you may encounter an error message similar to ERROR: role
"console" already exists. This error is safe to ignore.
5. Restore from your /etc/puppetlabs/ backup the following directories and files:
For a monolithic install, these les should all be replaced on the puppet master:
/etc/puppetlabs/puppet/puppet.conf
/etc/puppetlabs/puppet/ssl (fully replace with backup, do not leave existing ssl data)
/opt/puppet/share/puppet-dashboard/certs
The PuppetDB, console, and console_auth databases
The modulepath, if you've configured it to be something other than the PE default.
For a split install, these les and databases should be replaced on the various servers assigned
to these PE components.
/etc/puppetlabs/: as noted earlier, there is a different version of this directory for the
puppet master component, the console component, and the database support component
(i.e., PuppetDB and PostgreSQL). You should replace each version.
/opt/puppet/share/puppet-dashboard/certs: located on the server assigned to the console
component.
The console and console_auth databases: located on the server assigned to the database
support component.
The PuppetDB database: located on the server assigned to the database support component.
The modulepath: located on the server assigned to the puppet master component.
Note: If you backed up any Simple RPC agents, you will need to restore these on the same
server assigned to the puppet master component.
6. Run chown -R pe-puppet:pe-puppet /etc/puppetlabs/puppet/ssl/.
7. Run chown -R puppet-dashboard /opt/puppet/share/puppet-dashboard/certs/.
8. Restore modules, manifests, hieradata, etc., if necessary. These are typically located in the
/etc/puppetlabs/ directory, but you may have configured them in another location.
9. Run /opt/puppet/sbin/puppetdb-ssl-setup -f. This script generates SSL certificates and
configuration based on the agent cert on your PuppetDB node.
10. Start all PE services you stopped in step 2. (For example, run service pe-httpd start.)
Note: During this process, you may get a message indicating that starting the dashboard
workers failed, but they have in fact started. You can verify this by running service
pe-puppet-dashboard-workers status.
as PE attempts to access the requisite packages. The issue is caused by an incorrectly set parameter
of the pe_repo class. It can be fixed as follows:
1. In the console, navigate to the node page for each master node where you wish to add agent
packages.
2. On the master's node page, click Edit and then, for the pe_repo parameter, click Edit parameters.
3. Next to the base_path parameter, click Reset value.
4. Save the parameter change and update the node.
Once this has been done, you should now be able to add new agent platforms without issue.
A Note about Changes to puppet.conf that Can Cause Issues During Upgrades
If you manage puppet.conf with Puppet or a third-party tool like Git or r10k, you may encounter
errors after upgrading based on the following changes. Please assess these changes before
upgrading.
node_terminus Changes
In PE versions earlier than 3.2, node classification was configured with node_terminus=exec,
located in /etc/puppetlabs/puppet/puppet.conf. This caused the puppet master to execute a
custom shell script (/etc/puppetlabs/puppet-dashboard/external_node) which ran a curl
command to retrieve data from the console.
PE 3.2 changes node classification in puppet.conf; the new configuration is
node_terminus=console. The external_node script is no longer available; thus,
node_terminus=exec no longer works.
With this change, we have improved security, as the puppet master can now verify the console.
The console certificate name is pe-internal-dashboard. The puppet master now finds the
console by reading the contents of /etc/puppetlabs/puppet/console.conf, which provides the
following:
[main]
server=<console hostname>
port=<console port>
certificate_name=pe-internal-dashboard
This file tells the puppet master where to locate the console and what name it should expect the
console to have. If you want to change the location of the console, you can edit console.conf,
but DO NOT change the certificate_name setting.
The rules for certificate-based authorization to the console are found in
/etc/puppetlabs/console-auth/certificate_authorization.yml on the console node. By
default, this file allows the puppet master read-write access to the console (based on its
certificate name) to request node data and submit report data.
Reports Changes
Report submission to the console no longer happens using reports=https. PE 3.2 changed the
setting in puppet.conf to reports=console. This change works in the same way as the
node_terminus changes described above.
Installing Without Internet Connectivity
By default, the master node hosts a repo that contains packages used for agent installation. When
you download the tarball for the master, the master also downloads the agent tarball for the same
platform and unpacks it in this repo.
When installing agents on a platform that is different from the master platform, the install script
attempts to connect to the internet to download the appropriate agent tarball. If you will not have
internet access at the time of installation, you need to download the appropriate agent tarball in
advance and use the option below that corresponds with your particular deployment.
Option 1
If you would like to use the PE-provided repo, you can copy the agent tarball into the
/opt/staging/pe_repo directory on your master.
If you upgrade your server, you will need to perform this task again for the new version.
Option 2
If you already have a package management/distribution system, you can use it to install agents
by adding the agent packages to your repo. In this case, you can disable the PE-hosted repo
feature altogether by removing the pe_repo class from your master, along with any class that
starts with pe_repo::.
Option 3
If you would like to avoid copying the agent tarball to each master in a deployment with multiple
masters, you can specify a path to the agent tarball. This can be done with an answer file, by setting
q_tarball_server to an accessible server containing the tarball, or by using the console to set
the base_path parameter of the pe_repo class to an accessible server containing the tarball.
Is DNS Wrong?
If name resolution at your site isn't quite behaving right, PE's installer can go haywire.
Puppet agent has to be able to reach the puppet master server at one of its valid DNS names.
(Specifically, the name you identified as the master's hostname during the installer interview.)
The puppet master also has to be able to reach itself at the puppet master hostname you chose
during installation.
If you've split the master and console components onto different servers, they have to be able to
talk to each other as well.
Are the Security Settings Wrong?
The installer fails in a similar way when the system's firewall or security group is restricting the
ports Puppet uses.
Puppet communicates on ports 8140, 61613, and 443. If you are installing the puppet master
and the console on the same server, it must accept inbound traffic on all three ports. If you've
split the two components, the master must accept inbound traffic on 8140 and 61613 and the
console must accept inbound traffic on 8140 and 443.
If your puppet master has multiple network interfaces, make sure it is allowing traffic via the IP
address that its valid DNS names resolve to, not just via an internal interface.
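One quick way to check that those ports are reachable is a plain TCP connect from the agent toward the master or console. A hypothetical Python sketch (the hostname is a placeholder):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The three ports PE communicates on:
for port in (8140, 61613, 443):
    print(port, port_open("master.example.com", port))
```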
Did You Try to Install the Console Before the Puppet Master?
If you are installing the console and the puppet master on separate servers and tried to install the
console first, the installer may fail.
How Do I Recover From a Failed Install?
First, fix any configuration problem that may have caused the install to fail. See above for a list of
the most common errors.
Next, run the uninstaller script. See the uninstallation instructions in this guide for full details.
After you have run the uninstaller, you can safely run the installer again.
Problems with PE when upgrading your OS
Upgrading your OS while PE is installed can cause problems with PE. To perform an OS upgrade,
you'll need to uninstall PE, perform the OS upgrade, and then reinstall PE as follows:
1. Back up your databases and other PE les.
2. Perform a complete uninstall (including the -pd uninstaller option).
3. Upgrade your OS.
4. Install PE.
5. Restore your backup.
Next: Troubleshooting Connections & Communications
Below are some common issues that can prevent the different parts of Puppet Enterprise from
communicating with each other.
If the puppet master is alive and reachable, you'll get something like:
Trying 172.16.158.132...
Connected to screech.example.com.
Escape character is '^]'.
If you see this, it means the agent has submitted a certificate signing request which hasn't yet been
signed. Run puppet cert list on the puppet master to see a list of pending requests, then run
puppet cert sign <NODE NAME> to sign a given node's certificate. The node should successfully
retrieve and apply its configuration the next time it runs.
Do Agents Trust the Master's Certificate?
Check the puppet agent logs on your nodes and look for something like the following:
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0
state=SSLv3 read server certificate B: certificate verify failed. This is often
because the time is out of sync on the server or client
When you installed the puppet master role, you approved a list of valid DNS names to be included in
the master's certificate. Agents will ONLY trust the master if they contact it at one of THESE
hostnames.
To see the hostname agents are using to contact the master, run puppet agent --configprint
server. If this does not return one of the valid DNS names you chose during installation of the
master, edit the server setting in the agents' /etc/puppetlabs/puppet/puppet.conf files to point
to a valid DNS name.
If you need to reset your puppet master's valid DNS names, run the following:
$ /etc/init.d/pe-httpd stop
$ puppet cert clean <puppet master's certname>
$ puppet cert generate <puppet master's certname> --dns_alt_names=<comma-separated list of DNS names>
$ /etc/init.d/pe-httpd start
IS TIME IN SYNC ON YOUR NODES?
If a node re-uses an old node's certname and the master retains the previous node's certificate, the
new node will be unable to request a new certificate.
Run the following on the master:
$ puppet cert clean <NODE NAME>
This usually happens when puppet master is installed with a certname that isn't its hostname. To fix
Changing this on the puppet master will fix the error on all agent nodes.
Next: Troubleshooting the Console & Database Support
FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=34427584512, 03600).
A suggested workaround is to tweak the machine's shmmax and shmall kernel settings before
installing PE. The shmmax setting should be set to approximately 50% of the total RAM; the shmall
setting can be calculated by dividing the new shmmax setting by the PAGE_SIZE. (PAGE_SIZE can be
confirmed by running getconf PAGE_SIZE.)
Use the following commands to set the new kernel settings:
sysctl -w kernel.shmmax=<your shmmax calculation>
sysctl -w kernel.shmall=<your shmall calculation>
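The arithmetic behind those two settings can be sketched as follows (a hypothetical helper; the 8 GB RAM and 4096-byte page size are example values only):

```python
def shm_settings(total_ram_bytes, page_size):
    """Suggested shmmax (~50% of RAM, in bytes) and shmall (in pages)."""
    shmmax = total_ram_bytes // 2
    shmall = shmmax // page_size  # shmall is counted in PAGE_SIZE units
    return shmmax, shmall

shmmax, shmall = shm_settings(8 * 1024**3, 4096)
print(shmmax, shmall)  # → 4294967296 1048576
```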
Alternatively, you can also report the issue to the Puppet Labs customer support portal.
$ cd /opt/puppet/share/puppet-dashboard
$ sudo /opt/puppet/bin/bundle exec /opt/puppet/bin/rake -s -f
/opt/puppet/share/console-auth/Rakefile db:create_user
USERNAME=<adminuser@example.com> PASSWORD=<password> ROLE="Admin"
RAILS_ENV=production
You can now log in to the console as the user you just created, and use the normal admin tools to
reset other users' passwords.
This can happen if the console's authentication layer thinks it lives on a hostname that isn't
accessible to the rest of the world. The authentication system's hostname is automatically detected
during installation, and the installer can sometimes choose an internal-only hostname.
To fix this:
1. Open the /etc/puppetlabs/console-auth/cas_client_config.yml file for editing. Locate the
cas_host line, which is likely commented out:
authentication:
## Use this configuration option if the CAS server is on a host different
## from the console-auth server.
# cas_host: console.example.com:443
Change its value to contain the public hostname of the console server, including the correct port.
2. Open the /etc/puppetlabs/console-auth/config.yml file for editing. Locate the
console_hostname line:
authentication:
console_hostname: console.example.com
Change its value if necessary. If you are serving the console on a port other than 443, be sure to
add the port. (For example: console.example.com:3000)
After you remove the broken group names, you can create new groups with valid names and re-add
your nodes as needed.
it using live management. In such cases, you can often force it to reconnect by waiting a minute or
two and then running puppet agent -t until you see output indicating the mcollective server has
picked up the node. The output should look similar to:
Notice:
/Stage[main]/Pe_mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content:
--- /etc/puppetlabs/mcollective/server.cfg 2013-06-14 15:53:41.251544110 -0700
+++ /tmp/puppet-file20130624-42806-157zyeq 2013-06-24 14:45:09.865182380 -0700
@@ -7,7 +7,7 @@
loglevel = info
daemonize = 1
-identity = crm02
+identity = agent2.example.com
# Plugins
securityprovider = ssl
plugin.ssl_server_private = /etc/puppetlabs/mcollective/ssl/mcollective-private.pem
Tip: You should also run NTP to verify that time is in sync across your deployment.
Add the appropriate file or missing credentials to the existing file to resolve this issue.
Note that versions of fog newer than 0.7.2 may not be fully compatible with Cloud Provisioner. This
issue is currently being investigated.
Certificate Signing Issues
ACCESSING PUPPET MASTER ENDPOINT
For automatic signing to work, the computer running Cloud Provisioner (i.e., the CP control node)
needs to be able to access the puppet master's certificate_status REST endpoint. This can be
done in the master's auth.conf file as follows:
path /certificate_status
method save
auth yes
allow {certname}
Note that if the CP control node is on a machine other than the puppet master, it must be able to
reach the puppet master over port 8140.
GENERATING PER-USER CERTIFICATES
The CP control node needs to have a certificate that is signed by the puppet master's CA. While it's
possible to use an existing certificate (if, say, the control node was or is an agent node), it's
preferable to generate a per-user certificate for a clearer, more explicit security policy.
Start by running the following on the control node: puppet certificate generate {certname} -Puppet Enterprise 3.3 User's Guide Finding Common Problems
384/404
ca-location remote Then sign the certicate as usual on the master ( puppet cert sign
{certname}). Lastly, back on the control node again, run:
Tips
Process Explorer
We recommend installing Process Explorer and configuring it to replace Task Manager. This will
make debugging significantly easier.
Logging
As of Puppet 2.7.x, messages from the puppetd log file are available via the Windows Event Viewer
(choose Windows Logs > Application). To enable debugging, stop the puppet service and restart it
as:
c:\>sc stop puppet && sc start puppet --debug --trace
Puppet's Windows service component also writes to the windows.log within the same log directory
and can be used to debug issues with the service.
Common Issues
Installation
The Puppet MSI package will not overwrite an existing entry in the puppet.conf file. As a result, if
you uninstall the package, then reinstall the package using a different puppet master hostname,
Puppet won't actually apply the new value if the previous value still exists in <data directory>\etc\puppet.conf.
Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows
In general, we've taken the approach of preserving configuration data on the system when doing an
upgrade, uninstall or reinstall.
To fully clean out a system, make sure to delete the <data directory>.
Similarly, the MSI will not overwrite the custom facts written to the PuppetLabs\facter\facts.d
directory.
Unattended installation
Puppet may fail to install when trying to perform an unattended install from the command line, e.g.
msiexec /qn /i puppet.msi
To get troubleshooting data, specify an installation log, e.g. /l*v install.txt. Look in the log for
entries like the following:
MSI (s) (7C:D0) [17:24:15:870]: Rejecting product '{D07C45E2-A53E-4D7B-844F-F8F608AFF7C8}': Non-assigned apps are disabled for non-admin users.
MSI (s) (7C:D0) [17:24:15:870]: Note: 1: 1708
MSI (s) (7C:D0) [17:24:15:870]: Product: Puppet -- Installation failed.
MSI (s) (7C:D0) [17:24:15:870]: Windows Installer installed the product.
Product Name: Puppet. Product Version: 2.7.12. Product Language: 1033.
Manufacturer: Puppet Labs. Installation success or error status: 1625.
MSI (s) (7C:D0) [17:24:15:870]: MainEngineThread is returning 1625
MSI (s) (7C:08) [17:24:15:870]: No System Restore sequence number for this
installation.
Info 1625.This installation is forbidden by system policy. Contact your system
administrator.
If you see entries like this, you know you don't have sufficient privileges to install puppet. Make sure
to launch cmd.exe with the Run as Administrator option selected, and try again.
File Paths
Path Separator
Make sure to use a semi-colon (;) as the path separator on Windows, e.g.,
modulepath=path1;path2
File Separator
In most resource attributes, the Puppet language accepts either forward- or backslashes as the file
separator. However, some attributes absolutely require forward slashes, and some attributes
absolutely require backslashes. See the relevant section of Writing Manifests for Windows for more
information.
Backslashes
When backslashes are double-quoted ("), they must be escaped. When single-quoted ('), they may
be escaped. For example, these are valid file resources:
file { 'c:\path\to\file.txt': }
file { 'c:\\path\\to\\file.txt': }
file { "c:\\path\\to\\file.txt": }
But this is an invalid path, because \p, \t, \f will be interpreted as escape sequences:
file { "c:\path\to\file.txt": }
UNC Paths
UNC paths are not currently supported. However, the path can be mapped as a network drive and
accessed that way.
Case-insensitivity
Several resources are case-insensitive on Windows (file, user, group). When establishing
dependencies among resources, make sure to specify the case consistently. Otherwise, puppet may
not be able to resolve dependencies correctly. For example, applying the following manifest will
fail, because puppet does not recognize that FOOBAR and foobar are the same user:
file { 'c:\foo\bar':
ensure => directory,
owner => 'FOOBAR'
}
user { 'foobar':
ensure => present
}
...
err: /Stage[main]//File[c:\foo\bar]: Could not evaluate: Could not find user
FOOBAR
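Declaring the owner with the same case as the user resource's title resolves the dependency; a minimal sketch (the corrected manifest is illustrative, not from the original text):

```puppet
# Hypothetical fix: the owner attribute and the user title use identical
# case ('foobar'), so Puppet can match the file's owner to the user resource.
user { 'foobar':
  ensure => present,
}

file { 'c:\foo\bar':
  ensure => directory,
  owner  => 'foobar',
}
```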
Diffs
Puppet does not show diffs on Windows (e.g., puppet agent --show_diff) unless a third-party diff
utility has been installed (e.g., msys, gnudiff, cygwin, etc.) and the diff property has been set
appropriately.
If the owner and/or group are specified in a file resource on Windows, the mode must also be
specified. So this is okay:
file { 'c:/path/to/file.bat':
ensure => present,
owner => 'Administrator',
group => 'Administrators',
mode => 0770
}
But this is not:
file { 'c:/path/to/file.bat':
ensure => present,
owner => 'Administrator',
group => 'Administrators',
}
The latter case will remove any permissions the Administrators group previously had to the file,
resulting in the effective permissions of 0700. And since puppet runs as a service under the
SYSTEM account, not Administrator, Puppet itself will not be able to manage the file the next
time it runs!
To get out of this state, have Puppet execute the following (with an exec resource) to reset the file
permissions:
takeown /f c:/path/to/file.bat && icacls c:/path/to/file.bat /reset
Exec
When declaring a Windows exec resource, the path to the resource typically depends on the
%WINDIR% environment variable. Since this may vary from system to system, you can use the path
fact in the exec resource:
exec { 'cmd.exe /c echo hello world':
path => $::path
}
Shell Builtins
Puppet does not currently support a shell provider on Windows, so executing shell builtins directly
will fail:
exec { 'echo foo':
path => 'c:\windows\system32;c:\windows'
}
...
err: /Stage[main]//Exec[echo foo]/returns: change from notrun to 0 failed:
Could not find command 'echo'
Powershell
By default, powershell enforces a restricted execution policy which prevents the execution of
scripts. Consequently, make sure to specify the appropriate execution policy in the powershell
command:
exec { 'test':
command => 'powershell.exe -executionpolicy remotesigned -file C:\test.ps1',
path => $::path
}
Package
The source of an MSI package must be a file on either a local filesystem or on a network mapped
drive. It does not support URI-based sources, though you can achieve a similar result by defining a
file whose source is the puppet master and then defining a package whose source is the local file.
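The workaround described above might be sketched like this (the module name "foo" and all paths are hypothetical):

```puppet
# Stage the MSI on the node via a file resource sourced from the master,
# then install the package from the local copy.
file { 'c:/staging/foo.msi':
  ensure => file,
  source => 'puppet:///modules/foo/foo.msi',
}

package { 'Foo':
  ensure  => installed,
  source  => 'c:/staging/foo.msi',
  require => File['c:/staging/foo.msi'],
}
```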
Service
Windows services support a short name and a display name. Make sure to use the short name in
puppet manifests. For example use wuauserv, not Automatic Updates. You can use sc query to
get a list of services and their various names.
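For example, managing Automatic Updates by its short name might look like this (a sketch, not from the original text):

```puppet
# Use the short name 'wuauserv', not the display name 'Automatic Updates'.
service { 'wuauserv':
  ensure => running,
  enable => true,
}
```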
Error Messages
Error: Could not connect via HTTPS to https://forge.puppetlabs.com / Unable to
verify the SSL certificate / The certificate may not be signed by a valid CA / The
CA bundle included with OpenSSL may not be valid or up to date
This can occur when you run the puppet module subcommand on newly provisioned Windows
nodes.
The Puppet Forge uses an SSL certificate signed by the GeoTrust Global CA certificate. Newly
provisioned Windows nodes may not have that CA in their root CA store yet.
To resolve this and enable the puppet module subcommand on Windows nodes, do one of the
following:
Run Windows Update and fetch all available updates, then visit https://forge.puppetlabs.com
in your web browser. The web browser will notice that the GeoTrust CA is whitelisted for
automatic download, and will add it to the root CA store.
Download the GeoTrust Global CA certificate from GeoTrust's list of root certificates and
manually install it by running certutil -addstore Root GeoTrust_Global_CA.pem.
Service 'Puppet Agent' (puppet) failed to start. Verify that you have sufficient
privileges to start system services.
This can occur when installing puppet on a UAC system from a non-elevated account. Although
the installer displays the UAC prompt to install puppet, it does not elevate when trying to start
the service. Make sure to run from an elevated cmd.exe process when installing the MSI.
Cannot run on Microsoft Windows without the sys-admin, win32-process, win32-dir,
win32-service and win32-taskscheduler gems.
Puppet requires the indicated Windows-specic gems, which can be installed using gem install
<gem>
err: /Stage[main]//Scheduled_task[task_system]: Could not evaluate: The operation
completed successfully.
This error can occur when using version < 0.2.1 of the win32-taskscheduler gem. Run gem
update win32-taskscheduler
err: /Stage[main]//Exec[C:/tmp/foo.exe]/returns: change from notrun to 0 failed:
CreateProcess() failed: Access is denied.
This error can occur when requesting an executable from a remote puppet master that cannot
be executed. For a file to be executable on Windows, set the user/group executable bits
accordingly on the puppet master (or alternatively, specify the mode of the file as it should exist
on the Windows host):
file { "C:/tmp/foo.exe":
source => "puppet:///modules/foo/foo.exe",
}
exec { 'C:/tmp/foo.exe':
logoutput => true
}
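The alternative mentioned above, declaring the mode explicitly on the file resource, might be sketched as (the 0755 value is illustrative):

```puppet
file { 'C:/tmp/foo.exe':
  source => 'puppet:///modules/foo/foo.exe',
  mode   => 0755,  # sets the user/group executable bits
}
```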
err: You cannot service a running 64-bit operating system with a 32-bit version of
DISM. Please use the version of DISM that corresponds to your computer's
architecture.
As described in the Installation Guide, 64-bit versions of Windows will redirect all file system
access from %windir%\system32 to %windir%\SysWOW64 instead. When attempting to configure
Windows roles and features using dism.exe, make sure to use the 64-bit version. This can be
done by executing c:\windows\sysnative\dism.exe, which will prevent file system redirection.
See https://projects.puppetlabs.com/issues/12980
Error: Could not parse for environment production: Syntax error at =; expected }
This error will usually occur if puppet apply -e is used from the command line and the supplied
command is surrounded with single quotes ('), which will cause cmd.exe to interpret any => in
the command as a redirect. To solve this, surround the command with double quotes (") instead.
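For instance (a sketch; the notify resource is illustrative):

```
rem Fails: cmd.exe treats the > in => as output redirection
puppet apply -e 'notify { "hello": }'

rem Works: double quotes keep cmd.exe from interpreting =>
puppet apply -e "notify { 'hello': }"
```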
See https://projects.puppetlabs.com/issues/20528.
Puppet Enterprise 3.3 User's Guide Regenerating Certs and Security Credentials in Split Puppet Enterprise Deployments
Note: This page explains how to regenerate all certificates in a split PE deployment, that is,
where the puppet master, PuppetDB, and PE console components are all installed on
separate servers. See this page for instructions on regenerating certificates in a monolithic
PE deployment.
Overview
In some cases, you may find that you need to regenerate the SSL certificates and security credentials
(private and public keys) that are generated by PE's built-in certificate authority (CA). For example,
you may have a puppet master you need to move to a different network in your infrastructure, or
you may find you need to regenerate all the certificates and security credentials in your
infrastructure due to an unforeseen security vulnerability.
Regardless of your situation, regenerating your certs involves the following four steps (complete
procedures follow below):
1. On your master, you'll clear the certs and security credentials, regenerate the CA, and then
regenerate the certs and security credentials.
2. Next, you'll clear and regenerate certs and security credentials for PuppetDB.
3. Then, you'll clear and regenerate certs and security credentials for the PE console.
4. Lastly, you'll clear and regenerate certs and security credentials for all agent nodes.
Note that this process destroys the certificate authority and all other certificates. It is meant for use
in the event of a total compromise of your site, or some other unusual circumstance. If you just
need to replace a few agent certificates, you can use the puppet cert clean command on your
puppet master and then follow step four for any agents that need to be replaced.
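For that limited case, the sequence might look like this (agent1.example.com is a hypothetical certname):

```
# On the puppet master: revoke and remove the agent's old certificate
sudo puppet cert clean agent1.example.com

# Then, on that agent, follow step four: stop the agent, clear its SSL
# data, and restart so it requests a fresh certificate
sudo puppet resource service pe-puppet ensure=stopped
sudo rm -rf /etc/puppetlabs/puppet/ssl/*
sudo puppet resource service pe-puppet ensure=running
```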
At this point:
You have a brand new CA certificate and key.
Your puppet master has a certificate from the new CA, and it can once again field new
certificate requests.
The puppet master will reject any requests for configuration catalogs from nodes that
haven't replaced their certificates (which, at this point, will be all of them except the
master).
The puppet master can't serve catalogs even to agents that do have new certificates, since
it can't communicate with the console and PuppetDB.
Orchestration and live management are down.
At this point:
The PuppetDB server is now completely taken care of.
The puppet master can talk to PuppetDB again.
The puppet master can't serve catalogs to agents yet, since it still won't trust the console
server.
Orchestration and live management are still down.
If the master doesn't autosign the certificate in this step, you may have changed its autosign
configuration. You'll need to manually sign the certificate (see below).
7. Navigate to the console certs directory with sudo cd /opt/puppet/share/puppet-dashboard/certs. Stay in this directory for the following steps.
8. Remove all the credentials in this directory with sudo rm -rf /opt/puppet/share/puppet-dashboard/certs/*.
9. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:create_key_pair.
10. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:request. The puppet master
will autosign the request, and the script will fetch the certificate.
If the master doesn't autosign the certificate in this step, you may have changed its autosign
configuration. You'll need to manually sign the certificate (see below).
11. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:retrieve.
12. Ensure the console can access the new credentials with sudo chown -R puppet-dashboard:puppet-dashboard /opt/puppet/share/puppet-dashboard/certs.
13. Re-start the console service with sudo puppet resource service pe-httpd ensure=running.
14. Re-start the puppet agent service with sudo puppet resource service pe-puppet ensure=running.
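A condensed sketch of steps 7 through 14 on the console node:

```
cd /opt/puppet/share/puppet-dashboard/certs
sudo rm -rf /opt/puppet/share/puppet-dashboard/certs/*
sudo /opt/puppet/bin/rake RAILS_ENV=production cert:create_key_pair
sudo /opt/puppet/bin/rake RAILS_ENV=production cert:request    # master autosigns the request
sudo /opt/puppet/bin/rake RAILS_ENV=production cert:retrieve
sudo chown -R puppet-dashboard:puppet-dashboard /opt/puppet/share/puppet-dashboard/certs
sudo puppet resource service pe-httpd ensure=running
sudo puppet resource service pe-puppet ensure=running
```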
At this point:
The console server is now completely taken care of.
The puppet master can talk to the console again, and vice versa.
The puppet master can now serve catalogs to agents.
However, it will only trust agents that have replaced their certificates. The only agents that
have replaced their certificates at this point are the puppet master node, the PuppetDB
node, and the console node.
The console is usable, but because its SSL certificate has been replaced, your web browser
may notice the change, assume it results from a malicious attack, and refuse to allow you
access. If this happens, you may need to go into your browser's collection of cached
certificates and delete the old cert. Details of this process are beyond the scope of this
guide and will vary by browser and platform. (You can delay having to figure this out by
temporarily using a different browser.)
Orchestration and live management may not immediately work, but they will start working
again within about 30 minutes, as soon as both the puppet master server and the console
node complete a puppet agent run. (The certificates used by MCollective and the ActiveMQ
service are completely managed by Puppet, and don't have to be manually regenerated.)
On any of the nodes that are completely taken care of, you can start a successful agent run
with sudo puppet agent -t. Try it on your console and PuppetDB nodes to ensure it works
as expected.
Once you have regenerated all agents' certificates, everything should now be back to normal
and fully functional under the new CA.
manual steps and will require you to replace certificates on every agent node managed by
your puppet master.
Note: This page explains how to regenerate all certificates in a monolithic PE deployment,
that is, where the puppet master, PuppetDB, and PE console components are all installed on
the same server. See this page for instructions on regenerating certificates in a split PE
deployment.
Overview
In some cases, you may find that you need to regenerate the certificates and security credentials
(private and public keys) generated by PE's built-in certificate authority (CA). For example, you may
have a puppet master that you need to move to a different network in your infrastructure, or you
may find that you need to regenerate all the certificates and security credentials in your
infrastructure due to an unforeseen security vulnerability.
Regardless of your situation, regenerating your certificates involves the following four steps
(complete procedures follow below):
1. On your master, you'll clear the certs and security credentials, regenerate the CA, and then
regenerate the certs and security credentials.
2. Next, you'll clear and regenerate certs and security credentials for PuppetDB.
3. Then, you'll clear and regenerate certs and security credentials for the PE console.
4. Lastly, you'll clear and regenerate certs and security credentials for all agent nodes.
Note that this process destroys the certificate authority and all other certificates. It is meant for use
in the event of a total compromise of your site, or some other unusual circumstance. If you just
need to replace a few agent certificates, you can use the puppet cert clean command on your
puppet master and then follow step four for any agent certs that need to be replaced.
4. Stop the puppet master service with sudo puppet resource service pe-httpd
ensure=stopped.
5. Clear all certs from your master with sudo rm -rf /etc/puppetlabs/puppet/ssl/*.
6. Regenerate the CA by running sudo puppet cert list -a. You should see this message:
Notice: Signed certificate request for ca.
7. Generate the puppet master's new certs with sudo puppet master --no-daemonize --verbose.
8. When you see Notice: Starting Puppet master <your Puppet and PE versions>, type CTRL
+ C.
9. Start the puppet master service with sudo puppet resource service pe-httpd
ensure=running.
10. Start the puppet agent service with sudo puppet resource service pe-puppet
ensure=running.
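Steps 4 through 10 on the master, condensed into one sequence:

```
sudo puppet resource service pe-httpd ensure=stopped
sudo rm -rf /etc/puppetlabs/puppet/ssl/*
sudo puppet cert list -a        # regenerates the CA; expect "Signed certificate request for ca."
sudo puppet master --no-daemonize --verbose
# ...type CTRL+C once "Notice: Starting Puppet master" appears...
sudo puppet resource service pe-httpd ensure=running
sudo puppet resource service pe-puppet ensure=running
```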
At this point:
You have a brand new CA certificate and key.
Your puppet master has a certificate from the new CA, and it can once again field new
certificate requests.
The puppet master will reject any requests for configuration catalogs from nodes that
haven't replaced their certificates (which, at this point, will be all of them except the
master).
The puppet master can't serve catalogs even to agents that do have new certificates, since
it can't communicate with the console and PuppetDB.
Orchestration and live management are down.
At this point:
The puppet master can talk to PuppetDB again.
The puppet master can't serve catalogs to agents yet, since it still won't trust the console
service.
Puppet Enterprise 3.3 User's Guide Regenerating Certs and Security Credentials in Monolithic Puppet Enterprise Deployments
At this point:
The puppet master can talk to the console again, and vice versa.
The puppet master can now serve catalogs to agents.
However, it will only trust agents that have replaced their certificates. The only agent that
has replaced its certificate at this point is the monolithic puppet master.
The console is usable, but because its SSL certificate has been replaced, your web browser
may notice the change, assume it results from a malicious attack, and refuse to allow you
access. If this happens, you may need to delete the old cert from your browser's collection
of cached certificates. Details of this process are beyond the scope of this guide and will
vary by browser and platform. (You can delay having to figure this out by temporarily
using a different browser.)
Orchestration and live management may not immediately work, but they will start working
again as soon as both the puppet master server and the console node complete a puppet
agent run. (The certificates used by MCollective and the ActiveMQ service are completely
managed by Puppet and don't have to be manually regenerated.)
On the monolithic puppet master, you can now start a successful agent run with sudo
puppet agent -t.
To replace the certs on agents, you'll need to log into each agent node and do the following:
1. Stop the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet
ensure=stopped. On Windows nodes, run the same command (minus sudo) with Administrator
privileges.
2. Stop the orchestration service. On *nix nodes, run sudo puppet resource service pe-mcollective ensure=stopped. On Windows nodes, run the same command (minus sudo) with
Administrator privileges.
3. Delete the agent's SSL directory. On *nix nodes, run sudo rm -rf
/etc/puppetlabs/puppet/ssl/*. On Windows nodes, delete the $confdir\ssl directory, using
the Administrator confdir. See here for more information on locating the confdir.
4. Re-start the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet ensure=running. On Windows nodes, run the same command (minus sudo) with
Administrator privileges.
Once puppet agent starts, it will automatically generate keys and request a new certificate from
the CA puppet master.
5. If you are not using autosigning, you will need to sign each agent node's certificate request. You
can do this with the PE console's request manager, or by logging into the CA puppet master
server, running sudo puppet cert list to see pending requests, and running sudo puppet
cert sign <NAME> to sign requests.
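On a *nix agent, the whole sequence above might be sketched as:

```
sudo puppet resource service pe-puppet ensure=stopped
sudo puppet resource service pe-mcollective ensure=stopped
sudo rm -rf /etc/puppetlabs/puppet/ssl/*
sudo puppet resource service pe-puppet ensure=running

# If you are not using autosigning, then on the CA puppet master:
sudo puppet cert list             # shows pending requests
sudo puppet cert sign <NAME>
```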
Once an agent node's new certificate is signed, it will fetch it automatically within a few minutes and
begin a Puppet run. After a node has fetched its new certificate and completed a full Puppet run, it
will once again appear in orchestration and live management. If, after waiting for a short time, you
don't see the agent node in live management, use NTP to make sure time is in sync across your PE
deployment. On Windows nodes, you may need to log into the node and check the status of the
Marionette Collective service, as it can sometimes hang while attempting to stop or restart.
Once you have regenerated all agents' certificates, everything should now be back to normal
and fully functional under the new CA.
Instead of writing audit manifests: Write manifests that describe the desired baseline state(s).
Puppet Enterprise 3.3 User's Guide Alternate Workflow to Replace Compliance Tool
This is identical to writing Puppet manifests to manage systems: you use the resource
declaration syntax to describe the desired state of each significant resource.
Instead of running puppet agent in its default mode: Make it sync the significant resources in
no-op mode, which can be done for the entire Puppet run, or per-resource. (See below.) This
causes Puppet to detect changes and simulate changes, without automatically enforcing the
desired state.
In the console: Look for pending events and node status. Pending is how the console
represents detected differences and simulated changes.
CONTROLLING YOUR MANIFESTS
As part of a solid change control process, you should be maintaining your Puppet manifests in a
version control system like Git. A well-designed branch structure in version control will allow
changes to your manifests to be tracked, controlled, and audited.
NO-OP FEATURES
Puppet resources or catalogs can be marked as no-op before they are applied by the agent
nodes. This means that the user describes a desired state for the resource, and Puppet will detect
and report any divergence from this desired state. Puppet will report what should change to bring
the resource into the desired state, but it will not make those changes automatically.
To set an individual resource as no-op, set the noop metaparameter to true.
file {'/etc/sudoers':
owner => root,
group => root,
mode => 0600,
noop => true,
}
This allows you to mix enforced resources and no-op resources in the same Puppet run.
To do an entire Puppet run in no-op, set the noop setting to true. This can be done in the
[agent] block of puppet.conf, or as a --noop command-line ag. If you are running puppet
agent in the default daemon mode, you would set no-op in puppet.conf.
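In puppet.conf, that setting looks like this (the file path is the usual PE location):

```
# /etc/puppetlabs/puppet/puppet.conf
[agent]
    noop = true
```

Alternatively, for a single foreground run, pass the flag on the command line: puppet agent --test --noop.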
IN THE CONSOLE
In the console, you can locate the changes Puppet has detected by looking for pending nodes,
reports, and events. A pending status means Puppet has detected a change and simulated a fix,
but has not automatically managed the resource.
You can nd a pending status in the following places:
The node summary, which lists the number of nodes on which changes were detected.
The list of recent reports, which uses an orange asterisk to show reports in which changes were
detected.
The log and events tabs of any report containing pending events. These tabs will show you what
changes were detected, and how they differ from the desired system state described in your
manifests.
AFTER DETECTION
When a Puppet node reports no-op events, this means someone has made changes to a no-op
resource that has a desired state described. Generally, this either means an unauthorized change
has been made, or an authorized change was made but the manifests have not yet been updated to
contain the change. You will need to either:
Revert the system to the desired state (possibly by running puppet agent with --no-noop).
Edit your manifests to contain the new desired state, and check the changed manifests into
version control.
BEFORE DETECTION
However, your admins should generally be changing the manifests before making authorized
changes. This serves as documentation of the change's approval.
SUMMARY
In this alternate workflow, you are essentially still maintaining baselines of your systems' desired
states. However, instead of maintaining an abstract baseline by approving changes in the console,
you are maintaining concrete baselines in readable Puppet code, which can be audited via version
control records.
© 2010 Puppet Labs info@puppetlabs.com 411 NW Park Street / Portland, OR 97209 1-877-575-9775