
Puppet Enterprise 3.3 User's Guide

(Generated on July 15, 2014, from git revision 7f5d71e92f649cdac1af24dc4f3bc95b0a76c0)

About Puppet Enterprise


Thank you for choosing Puppet Enterprise (PE), IT automation software that allows system
administrators to programmatically provision, configure, and manage servers, network devices, and
storage, in the data center or in the cloud.
This user's guide will help you start using Puppet Enterprise 3.3, and will serve as a reference as
you gain more experience. It covers PE-specific features and offers brief introductions to Puppet
and the orchestration engine. Use the navigation at left to move between the guide's sections and
chapters.

For New Users


If you've never used Puppet before and want to evaluate Puppet Enterprise, follow the Puppet
Enterprise quick start guide. This walkthrough guides you through creating a small
proof-of-concept deployment while demonstrating the core features and workflows of
Puppet Enterprise.

For Returning Users


See the what's new page for the new features in this release of Puppet Enterprise. You can
find detailed release notes for updates within the 3.3.x series in the release notes.

About Puppet Enterprise


Puppet Enterprise is a comprehensive tool for enterprise systems configuration management.
Specifically, PE offers:
Configuration management tools that let sysadmins define a desired state for their infrastructure
and then automatically enforce that state.
A web-based console UI for analyzing events, managing your Puppet systems and users, and
editing resources on the fly.
Powerful orchestration capabilities.
Cloud provisioning tools for creating and configuring new VM instances.
Puppet Enterprise consists of a complete stack of Puppet Labs technologies, which are
automatically installed and connected. Specifically, PE 3.3 includes all of the following Puppet Labs
software:

Puppet 3.6.2
PuppetDB 1.6.2
Facter 1.7.5
MCollective 2.5.1
Hiera 1.3.3
Dashboard 2.1.6
The What Gets Installed Where page includes a list of all the major packages that comprise PE 3.3.

About Puppet
Puppet is the leading open source configuration management tool. It allows system configuration
manifests to be written in a high-level DSL and can compose modular chunks of configuration to
create a machine's unique configuration. By default, Puppet Enterprise uses a client/server Puppet
deployment, where agent nodes fetch configurations from a central puppet master.

About Orchestration
Puppet Enterprise includes distributed task orchestration features. Nodes managed by PE will listen
for commands over a message bus and independently take action when they hear an authorized
request. This lets you investigate and command your infrastructure in real time without relying on a
central inventory.

About the Puppet Enterprise Console


PE's console is the web front-end for managing your systems. The console can:
Trigger immediate puppet runs on an arbitrary subset of your nodes
Browse and compare resources on your nodes in real time
Analyze events and reports to help you visualize your infrastructure over time
Browse inventory data and backed-up file contents from your nodes
Group similar nodes and control the Puppet classes they receive in their catalogs
Run advanced orchestration tasks

About the Cloud Provisioning Tools


PE includes command line tools for building new nodes, which can create new VMware, Google
Compute Engine, OpenStack, and Amazon EC2 instances; install PE on any virtual or physical
machine; and classify newly provisioned nodes within your Puppet infrastructure.

Licensing
PE can be evaluated with a complimentary ten-node license; beyond that, a commercial per-node
license is required for use. A license key file will have been emailed to you after your purchase, and
the puppet master will look for this key at /etc/puppetlabs/license.key. Puppet will log warnings
if the license is expired or exceeded, and you can view the status of your license by running puppet
license at the command line on the puppet master.
To purchase a license, please see the Puppet Enterprise pricing page, or contact Puppet Labs at
sales@puppetlabs.com or (877) 575-9775. For more information on licensing terms, please see the
licensing FAQ. If you have misplaced or never received your license key, please contact
sales@puppetlabs.com.
Next: New Features

Puppet Enterprise 3.3.0 Release Notes


This page contains information about the Puppet Enterprise (PE) 3.3.0 release, including new
features, known issues, bug fixes, and more.

New Features
Puppet Enterprise 3.3 introduces the following new features and improvements.
Puppet Enterprise Installer Improvements
This release introduces a web-based interface meant to simplify, and provide better clarity into,
the PE installation experience. You now have a few paths to choose from when installing PE.
Perform a guided installation using the web-based interface. Think of this as an installation
interview in which we ask you exactly how you want to install PE. If you're able to provide a few
SSH credentials, this method will get you up and running fairly quickly. Refer to the installation
overview for more information.
Use the web-based interface to create an answer file that you can then add as an argument to
the installer script to perform an installation (e.g., sudo ./puppet-enterprise-installer -a
~/my_answers.txt). Refer to Automated Installation with an Answer File, which provides an
overview of installing PE with an answer file.
Write your own answer file or use the answer file(s) provided in the PE installation tarball. Check
the Answer File Reference Overview to get started.
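As an illustration, an answer file is just a series of shell-style variable assignments. The key names below are a sketch for the sake of example and may not match your PE version exactly; consult the Answer File Reference Overview for the authoritative list:

```shell
# Hypothetical answer file fragment (key names illustrative; see the
# Answer File Reference Overview for the real keys and required values)
q_install=y
q_puppetmaster_install=y
q_puppet_enterpriseconsole_install=n
```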
Manifest Ordering
Puppet Enterprise now uses a new ordering setting in the Puppet core that allows you to
configure how unrelated resources should be ordered when applying a catalog. By default,
ordering is set to manifest in PE.


The following values are allowed for the ordering setting:

manifest: (default) uses the order in which the resources were declared in their manifest files.
title-hash: orders resources randomly, but will use the same order across runs and across
nodes; this is the default in previous versions of Puppet.
random: orders resources randomly and changes their order with each run. This can work like a
fuzzer for shaking out undeclared dependencies.
Regardless of this setting's value, Puppet will always obey explicit dependencies set with the
before/require/notify/subscribe metaparameters and the ->/~> chaining arrows; this setting
only affects the relative ordering of unrelated resources.
For more information, and instructions on changing the ordering setting, refer to the Puppet
Modules and Manifests page.
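For example, to restore the pre-3.6 behavior described above, the setting would be placed in puppet.conf; this is a sketch, with the path assuming the default PE layout:

```ini
# /etc/puppetlabs/puppet/puppet.conf
[master]
    # revert to the random-but-stable ordering used by previous Puppet versions
    ordering = title-hash
```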
Directory Environments and Deprecation Warnings
The latest version of the Puppet core (Puppet 3.6) deprecates the classic config-file environments in
favor of the new and improved directory environments. Over time, both open source Puppet and
Puppet Enterprise will make more extensive use of this pattern.
Environments are isolated groups of puppet agent nodes. This frees you to use different versions of
the same modules for different populations of nodes, which is useful for testing changes to your
Puppet code before implementing them on production machines. (You could also do this by
running a separate puppet master for testing, but using environments is often easier.)
In this release of PE, please note that if you define environment blocks or use any of the
modulepath, manifest, and config_version settings in puppet.conf, you will see deprecation
warnings intended to prepare you for these changes. Configuring PE to use no environments will
also produce deprecation warnings.
Once PE has fully moved to directory environments, the default production environment will take
the place of the global manifest/modulepath/config_version settings.
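As a sketch of the pattern, a directory environment is simply a named directory containing its own manifests and modules directories. The example below uses a throwaway path; a real deployment would use the PE confdir (/etc/puppetlabs/puppet) instead:

```shell
# Sketch: the on-disk shape of a directory environment named "testing".
# The path is a demo location, not the live PE confdir.
confdir=/tmp/pe-env-demo
mkdir -p "$confdir/environments/testing/manifests"
mkdir -p "$confdir/environments/testing/modules"
ls "$confdir/environments"
```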
PE 3.3 User Impact
If you use an environment config section in puppet.conf, you will see a deprecation warning
similar to the following:
# puppet.conf
[legacy]

# puppet config print confdir
Warning: Sections other than main, master, agent, user are deprecated in
puppet.conf. Please use the directory environments feature to specify
environments. (See
http://docs.puppetlabs.com/puppet/latest/reference/environments.html)
(at /usr/lib/ruby/site_ruby/1.8/puppet/settings/config_file.rb:77:in
`collect')
/etc/puppet

Using the modulepath, manifest, or config_version settings will raise a deprecation warning
similar to the following:
# puppet.conf
[main]
modulepath = /tmp/foo
manifest = /tmp/foodir
config_version = /usr/bin/false
# puppet config print confdir
Warning: Setting manifest is deprecated in puppet.conf. See
http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')
Warning: Setting modulepath is deprecated in puppet.conf. See
http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')
Warning: Setting config_version is deprecated in puppet.conf. See
http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1065:in `each')

Note: Executing puppet commands will raise the modulepath deprecation warning.

About Disabling Deprecation Warnings


You can disable deprecation warnings by adding disable_warnings = deprecations to the
[main] section of puppet.conf. However, please note that this will disable ALL deprecation
warnings. We recommend that you re-enable deprecation warnings when upgrading so that
you don't potentially miss new warnings.
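In puppet.conf, that setting looks like the following fragment, shown here only to illustrate the placement described above:

```ini
# /etc/puppetlabs/puppet/puppet.conf
[main]
    # silences ALL deprecation warnings, not just the environment ones
    disable_warnings = deprecations
```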
The Puppet 3.6 documentation has a comprehensive overview on working with directory
environments, but please note that this feature may have variations in functionality once fully
integrated in Puppet Enterprise.
New Puppet Enterprise Supported Modules
This release adds new modules to the list of Puppet Enterprise supported modules: ACL (for
Windows), vcsrepo, and Windows PowerShell. Visit the supported modules page to learn more, or
check out the ReadMes for ACL, vcsrepo, and PowerShell.
Puppet Module Tool (PMT) Improvements
The PMT has been updated to deprecate the Modulefile in favor of metadata.json. To help ease the
transition, when you run puppet module generate, the module tool will kick off an interview and
generate metadata.json based on your responses.


If you have already built a module and are still using a Modulefile, you will receive a deprecation
warning when you build your module with puppet module build. You will need to perform
migration steps before you publish your module. For complete instructions on working with
metadata.json, see Publishing Modules.
Please see Known Issues for information about a bug impacting modules that were built with the
new PMT without the migration steps having been performed.
Console Data Export
Every node list view in the console now includes a link to export the table data in CSV format, so
that you can include the data in a spreadsheet or other tool.
Support for Red Hat Enterprise Linux 7
This release provides full support for RHEL 7 for all applicable PE features and capabilities. For
more information, see the system requirements.
Support for Ubuntu 14.04 LTS
This release provides full support for Ubuntu 14.04 LTS for all applicable PE features and
capabilities. For more information, see the system requirements.
Support for Mac OS X (Agent Only)
The puppet agent can now be installed on nodes running Mac OS X Mavericks (10.9). Other
components (e.g., master) are not supported. For more information, see the system requirements
and the Mac OS X installation instructions.
Support for Windows 2012 R2 (Agent Only)
This release provides agent only support for nodes running Windows 2012 R2. For more
information, see the system requirements and Installing Windows Agents.
Additional OS Support for Agent Install via Package Management Tools
This release increases the number of PE supported operating systems that can install agents via
package management tools, making the agent installation process faster and simpler. For details,
visit Installing Puppet Enterprise Agents.
Support for stdlib 4
This version of PE is fully compatible with version 4.x of stdlib.
Razor Provisioning Tech Preview Usability Enhancements and Bug Fixes
Razor is included in PE 3.3 as a tech preview. This version of Razor includes usability enhancements
and bug fixes. For more information, refer to the Razor documentation.

Note: Razor is included in Puppet Enterprise 3.3 as a tech preview. Puppet Labs tech previews
provide early access to new technology still under development. As such, you should only
use them for evaluation purposes and not in production environments. You can find more
information on tech previews on the tech preview support scope page.

Security Fixes
CVE-2014-0224 OpenSSL vulnerability in secure communications
Assessed Risk Level: medium
Affected Platforms:
Puppet Enterprise 2.8 (Solaris, Windows)
Puppet Enterprise 3.2 (Solaris, Windows, AIX)
Due to a vulnerability in OpenSSL versions 1.0.1 and later, an attacker could intercept and decrypt
secure communications. This vulnerability requires that both the client and server be running an
unpatched version of OpenSSL. Unlike Heartbleed, this attack vector occurs after the initial
handshake, which means encryption keys are not compromised. However, Puppet Enterprise
encrypts catalogs for transmission to agents, so PE manifests containing sensitive information
could have been intercepted. We advise all users to avoid including sensitive information in
catalogs.
Puppet Enterprise 3.3.0 includes a patched version of OpenSSL.
CVSS v2 score: 2.4 with Vector: AV:N/AC:H/Au:M/C:P/I:P/A:N/E:U/RL:OF/RC:C
CVE-2014-0198 OpenSSL vulnerability could allow denial of service attack
Assessed Risk Level: low
Affected Platforms: Puppet Enterprise 3.2 (Solaris, Windows, AIX)
Due to a vulnerability in OpenSSL versions 1.0.0 and 1.0.1, if SSL_MODE_RELEASE_BUFFERS is
enabled, an attacker could cause a denial of service.
CVSS v2 score: 1.9 with Vector: AV:N/AC:H/Au:N/C:N/I:N/A:P/E:U/RL:OF/RC:C
CVE-2014-3251 MCollective aes_security plugin did not correctly validate new server certs
Assessed Risk Level: low
Affected Platforms:
MCollective (all)
Puppet Enterprise 3.2

The MCollective aes_security public key plugin did not correctly validate new server certs against
the CA certificate. By exploiting this vulnerability within a specific race condition window, an
attacker with local access could initiate an unauthorized MCollective client connection with a server.
Note that this vulnerability requires that a collective be configured to use the aes_security plugin.
Puppet Enterprise and open source MCollective are not configured to use the plugin and are not
vulnerable by default.
CVSS v2 score: 3.4 with Vector: AV:L/AC:H/Au:M/C:P/I:N/A:C/E:POC/RL:OF/RC:C

Bug Fixes
The following is a basic overview of some of the bug fixes in this release:
Installation - fixes improve installation so that the installer checks for config files and not just
/etc/puppetlabs/, stops pe-puppet-dashboard-workers during upgrade, warns the user if there
is not enough PostgreSQL disk space, and more.
UI updates - fixes make the appearance and behavior more consistent across all areas of the
console.

Known Issues
As we discover them, this page will be updated with known issues in Puppet Enterprise 3.3 and
earlier. Fixed issues will be removed from this list and noted above in the release notes. If you find
new problems yourself, please file bugs in Puppet here and bugs specific to Puppet Enterprise here.
To find out which of these issues may affect you, run /opt/puppet/bin/puppet --version; the
output will look something like 3.6.1 (Puppet Enterprise 3.3.0). To upgrade to a
newer version of Puppet Enterprise, see the chapter on upgrading.
The following issues affect the currently shipped version of PE and all prior releases through the
3.x.x series, unless otherwise stated.
Puppet Enterprise Cannot Locate Samba init Script for Ubuntu 14.04
If you attempt to install and start Samba using PE resource management, you may encounter
the following errors:
Error: /Service[smb]: Could not evaluate: Could not find init script or upstart
conf file for 'smb'
Error: Could not run: Could not find init script or upstart conf file for
'smb'

To work around this issue, install and start Samba with the following commands:
puppet resource package samba ensure=present

puppet resource service smbd provider=init enable=true ensure=running


puppet resource service nmbd provider=init enable=true ensure=running

PostgreSQL Buffer Memory Issue Can Cause PE Install to Fail on Machines with Large Amounts of RAM
In some cases, when installing PE on machines with large amounts of RAM, the PostgreSQL
database will use more shared buffer memory than is available and will not be able to start. This will
prevent PE from installing correctly. For more information and a suggested workaround, refer to
Troubleshooting the Console and Database.
Upgrades to PE 3.x from 2.8.3 Can Fail if PostgreSQL is Already Installed
There are two scenarios in which your upgrade can fail:
1. If PostgreSQL is already running on port 5432 on the server assigned the database support role,
pe-postgresql won't be able to start.
2. Another version of PostgreSQL is not running, but which psql resolves to something other than
/opt/puppet/bin/psql, which is the instance used by PE.
In this second scenario, you'll see the following failure output:
## Performing migration of the console database. This may take a while...
DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins!
Support for these plugins will be removed in Rails 4.0. Move them out and
bundle them in your Gemfile, or fold them in to your app as lib/myplugin/*
and config/initializers/myplugin.rb. See the release notes for more on this:
http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released.
(called from <top (required)> at /opt/puppet/share/puppet-dashboard/Rakefile:16)
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Database transfer failed.

To work around these issues, ensure the PostgreSQL service is stopped before installing PE. To
determine if PostgreSQL is running, run service postgresql status. If an equivalent of stopped
or no such service is returned, the service is not running. If the service is running, stop it (e.g.,
service postgresql stop) and disable it (chkconfig postgresql off).
To resolve the issue, make sure that which psql resolves to /opt/puppet/bin/psql.
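A quick way to perform that check is a small shell test. The expected path comes straight from the text above; the warning message itself is illustrative:

```shell
# Sketch: confirm that "psql" resolves to PE's bundled binary before upgrading.
resolved=$(command -v psql || echo "not found")
expected=/opt/puppet/bin/psql
if [ "$resolved" != "$expected" ]; then
  echo "psql resolves to '$resolved', not '$expected'; adjust your PATH before upgrading"
fi
```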
Upgrades from 3.2.0 Can Cause Issues with Multi-Platform Agent Packages
Users upgrading from PE 3.2.0 to a later version of 3.x (including 3.2.3) will see errors when
attempting to download agent packages for platforms other than the master's. After adding pe_repo
classes to the master for desired agent packages, errors will be seen on the subsequent puppet run
as PE attempts to access the requisite packages. For a simple workaround to this issue, see the
installer troubleshooting page.
Live Management Cannot Uninstall Packages on Windows Nodes
An issue with MCollective prevents correct uninstallation of packages on nodes running Windows.
You can uninstall packages on Windows nodes using Puppet, for example: package { 'Google
Chrome': ensure => absent, }
The issue is being tracked on this support ticket.
A NOTE ABOUT SYMLINKS

The answer file no longer gives the option of whether to install symlinks. These are now
automatically installed by packages. To allow the creation of symlinks, you need to ensure that
/usr/local is writable.
Upgrades to PE 3.2.x or Later Remove Commented Authentication Sections from rubycas-server/config.yml
If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain the
commented sections for the third-party services. We've provided the commented sections on the
console config page, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
pe_mcollective Module Integer Parameter Issue
The pe_mcollective module includes a parameter for the ActiveMQ heap size (activemq_heap_mb).
A bug prevents this parameter from correctly accepting an integer when one is entered in the
console. The problem can be avoided by placing the integer inside quote marks (e.g., "10"). This
will cause Puppet to correctly validate the value when it is passed from the console.
Safari Certificate Handling May Prevent Console Access
Due to Apache bug 53193 and the way Safari handles certificates, Puppet Labs recommends that PE
3.3 users avoid using Safari to access the PE console.
If you need to use Safari, you may encounter the following dialog box the first time you attempt to
access the console after installing/upgrading PE 3.3:


If this happens, click Cancel to access the console. (In some cases, you may need to click Cancel
several times.)
This issue will be fixed in a future release.
puppet module list --tree Shows Incorrect Dependencies After Uninstalling Modules
If you uninstall a module with puppet module uninstall <module name> and then run puppet
module list --tree, you will get a tree that does not accurately reflect module dependencies.
Passenger Global Queue Error on Upgrade
When upgrading a PE 2.8.3 master to PE 3.3.0, restarting pe-httpd produces a warning: The
'PassengerUseGlobalQueue' option is obsolete: global queueing is now always turned
on. Please remove this option from your configuration file. This warning will not affect
anything in PE, but if you wish, you can silence it by removing the line in question from
/etc/puppetlabs/httpd/conf.d/passenger-extra.conf.
puppet resource Fails if puppet.conf is Modified to Make puppet apply Work with PuppetDB
In an effort to make puppet apply work with PuppetDB in masterless puppet scenarios, users may
edit puppet.conf to make storeconfigs point to PuppetDB. This breaks puppet resource, causing it
to fail with a Ruby error. For more information, see the console & database troubleshooting page,
and for a workaround see the PuppetDB documentation on connecting puppet apply.
Puppet Agent on Windows Requires --onetime
On Windows systems, puppet agent runs started locally from the command line require either the --onetime or --test option to be set. This is due to Puppet bug PUP-1275.
BEAST Attack Mitigation
A known weakness in Apache HTTPD leaves it vulnerable to a man-in-the-middle attack known as
the BEAST (Browser Exploit Against SSL/TLS) attack. The vulnerability exists because Apache HTTPD
uses a FIPS-compliant cipher suite that can be cracked via a brute force attack that can discover the
decryption key. If FIPS compliance is not required for your infrastructure, we recommend you
mitigate vulnerability to the BEAST attack by using a cipher suite that includes stronger ciphers.
This can be done as follows:
In /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf, edit the SSLCipherSuite and
SSLProtocol options to:


SSLCipherSuite ALL:!ADH:+RC4+RSA:+HIGH:+AES+256:+CBC3:-LOW:-SSLv2:-EXP
SSLProtocol ALL -SSLv2

This will set the order of ciphers to:


KRB5-RC4-MD5 SSLv3 Kx=KRB5 Au=KRB5 Enc=RC4(128) Mac=MD5
KRB5-RC4-SHA SSLv3 Kx=KRB5 Au=KRB5 Enc=RC4(128) Mac=SHA1
RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1
RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5
DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1
DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1
DHE-DSS-AES128-SHA SSLv3 Kx=DH Au=DSS Enc=AES(128) Mac=SHA1
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1
KRB5-DES-CBC3-MD5 SSLv3 Kx=KRB5 Au=KRB5 Enc=3DES(168) Mac=MD5
KRB5-DES-CBC3-SHA SSLv3 Kx=KRB5 Au=KRB5 Enc=3DES(168) Mac=SHA1
EDH-RSA-DES-CBC3-SHA SSLv3 Kx=DH Au=RSA Enc=3DES(168) Mac=SHA1
EDH-DSS-DES-CBC3-SHA SSLv3 Kx=DH Au=DSS Enc=3DES(168) Mac=SHA1
DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1

Note that unless your system contains OpenSSL v1.0.1d (the version that correctly supports TLS 1.1
and 1.2), prioritizing RC4 may leave you vulnerable to other types of attacks.
Readline Version Issues on AIX Agents
As with PE 2.8.2, on AIX 5.3, puppet agents depend on readline-4-3.2 being installed. You can
check the installed version of readline by running rpm -q readline. If you need to install it, you
can download it from IBM. Install it before installing the puppet agent.
On AIX 6.1 and 7.1, the default version of readline, 4-3.2, is insufficient. You need to replace it
before upgrading or installing by running:
rpm -e --nodeps readline
rpm -Uvh readline-6.1-1.aix6.1.ppc.rpm

If you see an error message after running this, you can disregard it. Readline-6 should be
successfully installed, and you can proceed with the installation or upgrade (you can verify the
installation with rpm -q readline).
Debian/Ubuntu Local Hostname Issue
On some versions of Debian/Ubuntu, the default /etc/hosts file contains an entry for the
machine's hostname with a local IP address of 127.0.1.1. This can cause issues for PuppetDB and
PostgreSQL, because binding a service to the hostname will cause it to resolve to the local-only IP
address rather than its public IP. As a result, nodes (including the console) will fail to connect to
PuppetDB and PostgreSQL.

To fix this, add an entry to /etc/hosts that resolves the machine's FQDN to its public IP address.
This should be done prior to installing PE. However, if PE has already been installed, restarting the
pe-puppetdb and pe-postgresql services after adding the entry to the hosts file should fix things.
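For example, the corrected entry might look like the following; the hostname and address here are placeholders, not values from your system:

```
# /etc/hosts - resolve the FQDN to the public IP, not 127.0.1.1
192.0.2.10    master.example.com    master
127.0.0.1     localhost
```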
console_auth Fails After PostgreSQL Restart
RubyCAS server, the component that provides console log-in services, will not automatically
reconnect if it loses its connection to its database, which can result in a 500 Internal Server Error
when attempting to log in or out. You can resolve the issue by restarting Apache on the console's
node with sudo /etc/init.d/pe-httpd restart.
Inconsistent Counts When Comparing Service Resources in Live Management
In the Browse Resources tab, comparing a service across a mixture of RedHat-based and Debian-based nodes will give different numbers in the list view and the detail view.
Augeas File Access Issue
On AIX agents, the Augeas lens is unable to access or modify /etc/services. There is no known
workaround.
After Upgrading, Nodes Report a Not a PE Agent Error
When doing the first puppet run after upgrading using the upgrader script included in PE tarballs,
agents report an error: <node.name> is not a Puppet Enterprise agent. This was caused by
a bug in the upgrader that has since been fixed. If you downloaded a tarball prior to November 28,
2012, simply download the tarball again to get the fixed upgrader. If you prefer, you can download
the latest upgrader module from the Forge. Alternatively, you can fix it by changing
/etc/puppetlabs/facter/facts.d/is_pe.txt to contain: is_pe=true.
Answer File Required for Some SMTP Servers
Any SMTP server that requires authentication or TLS, or that runs over any port other than 25, needs
to be explicitly added to an answers file. See the advanced configuration page for details.
pe-httpd Must Be Restarted After Revoking Certificates
(Issue #8421)
Due to an upstream bug in Apache, the pe-httpd service on the puppet master must be restarted
after revoking any node's certificate.
After using puppet cert revoke or puppet cert clean to revoke a certificate, restart the service
by running:
$ sudo /etc/init.d/pe-httpd restart


Dynamic Man Pages Are Incorrectly Formatted


Man pages generated with the puppet man subcommand are not formatted as proper man pages
and are instead displayed as Markdown source text. This is a purely cosmetic issue, and the pages
are still fully readable.
To improve the display of Puppet man pages, you can use your system gem command to install the
ronn gem:

$ sudo gem install ronn

Deleted Nodes Can Reappear in the Console


Because the console will create a node listing for any node found via the inventory
search function, nodes deleted from the console can sometimes reappear. See the console bug
report describing the issue.
The nodes will reappear after deletion if PuppetDB data for that node has not yet expired and you
perform an inventory search in the console that returns information for that node.
You can avoid the reappearance of nodes by removing them with the following procedure:
1. puppet node clean <node_certname>
2. puppet node deactivate <node_certname>
3. sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile
RAILS_ENV=production node:del[<node_certname>]
These steps will remove the node's certificate, purge information about the node from PuppetDB,
and delete the node from the console. The last command is equivalent to logging into the console
and deleting the node via the UI.
For instructions on completely deactivating an agent node, refer to Deactivating a PE Agent Node.
Errors Related to Stopping the pe-postgresql Service
If for any reason the pe-postgresql service is stopped, agents will receive several different error
messages, for example:
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: (<unknown>): mapping values are not allowed in
this context at line 7 column 28

or, when attempting to request a catalog:


Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
(<unknown>): mapping values are not allowed in this context at line 7 column 28

Warning: Not using cache on failed catalog


Error: Could not retrieve catalog; skipping run

If you encounter these errors, simply restart the pe-postgresql service.


Modules Must Perform Migration Steps Before Being Published with the New Puppet Module
Tool
The PMT has a known issue wherein modules that were published to the Puppet Forge using the
new PMT, without the migration steps having been performed before publishing, will have erroneous
checksum information in their metadata.json. These checksums will cause errors that prevent you
from upgrading or uninstalling the module.
To determine if a module you're using has this issue, run puppet module changes username-modulename. If your module has this checksum issue, you will see that metadata.json has been
modified. If you try to upgrade or uninstall a module with this issue, you will receive warnings and
your action will fail.
To work around this issue:
1. Navigate to the current version of the module.
2. If the checksums.json file is present, open it in your editor and delete the line: metadata.json: [some checksum here]
3. If there is no checksums.json, open the metadata.json file in your editor and delete the entire checksums field.
The Puppet Module Tool (PMT) Does Not Support Solaris 10
When attempting to use the PMT on Solaris 10, you'll get an error like the following:
Error: Could not connect via HTTPS to https://forgeapi.puppetlabs.com
Unable to verify the SSL certificate
The certificate may not be signed by a valid CA
The CA bundle included with OpenSSL may not be valid or up to date

This error occurs because there is no CA-cert bundle on Solaris 10 with which to trust the Puppet
Labs Forge certificate.
Razor Known Issues
Please see the page Razor Setup Recommendations and Known Issues.

Puppet Terminology
For help with Puppet-specific terms and language, visit the glossary.
For a complete guide to the Puppet language, visit the reference manual.
Next: Compliance: Alternate Workflow


Getting Support for Puppet Enterprise


Getting support for Puppet Enterprise is easy; it is available both from Puppet Labs and the
community of Puppet Enterprise users. We provide responsive, dependable, quality support to
resolve any issues regarding the installation, operation, and use of Puppet.
There are three primary ways to get support for Puppet Enterprise:
Reporting issues to the Puppet Labs customer support portal.
Joining the Puppet Enterprise user group.
Seeking help from the Puppet open source community.

Support Lifecycle
Puppet Enterprise 3.x will receive feature updates through June 25, 2014 (or the release of Puppet
Enterprise 4, whichever is longer), and will receive security updates through June 25, 2015 (or 1
year from the release of Puppet Enterprise 4, whichever is longer). See the support lifecycle page
for more details.
After Puppet Enterprise 3.x reaches end-of-life, customers can still contact Puppet Labs support for
best-effort help, although we will recommend upgrading as soon as you are able.

Reporting Issues to the Customer Support Portal


Paid Support
Puppet Labs provides two levels of commercial support offerings for Puppet Enterprise: Standard
and Premium. Both offerings allow you to report your support issues to our confidential customer
support portal. You will receive an account and log-on for this portal when you purchase Puppet
Enterprise.
Customer support portal: https://support.puppetlabs.com
THE PE SUPPORT SCRIPT

When seeking support, you may be asked to run an information-gathering support script named
puppet-enterprise-support. The script is located in the root of the unzipped Puppet Enterprise
installer tarball; it is also installed on any master, PuppetDB, or console node and can be run via
/opt/puppet/bin/puppet-enterprise-support.
This script will collect a large amount of system information, compress it, and print the location of
the zipped tarball when it finishes running; an uncompressed directory (named support)
containing the same data will be left in the same directory as the compressed copy. We recommend
that you examine the collected data before forwarding it to Puppet Labs, as it may contain sensitive
information that you will wish to redact.
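The review step might look like the following sketch. The directory name support matches the text above, but the file inside it is a stand-in, since the real script decides what to collect:

```shell
# Stand-in for the "support" directory the support script leaves behind
# (the real data is produced by /opt/puppet/bin/puppet-enterprise-support).
mkdir -p support
echo "hostname: master.example.com" > support/hostname.txt
# Compress it as the script does, then list the archive's contents so you
# can spot sensitive data to redact before forwarding it to Puppet Labs:
tar -czf support.tar.gz support
tar -tzf support.tar.gz
```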

The information collected by the support script includes:


iptables info (is it loaded? what are the inbound and outbound rules?) (both ipv4 and ipv6)
a full run of Facter (if installed)
SELinux status
the amount of free disk and memory on the system
hostname info ( /etc/hosts and the output of hostname --fqdn)
the umask of the system
NTP configuration (what servers are available, the offset from them)
a listing (no content) of the files in /opt/puppet, /var/opt/lib/pe-puppet and
/var/opt/lib/pe-puppetmaster
the OS and kernel
a list of installed packages
the current process list
a listing of puppet certs
a listing of all services (except on Debian, which lacks the equivalent command)
current environment variables
whether the puppet master is reachable
the output of mco ping and mco inventory
a list of all modules on the system
the output of puppet module changes (shows if any modules installed by PE have been
modified)
the output of /nodes.csv from the console (includes a list of known nodes and metadata about
their most recent puppet runs)
It also copies the following les:
system logs
the contents of /etc/puppetlabs
the contents of /var/log/pe-*
Free Support
If you are evaluating Puppet Enterprise, we also offer support during your evaluation period. During
this period you can report issues with Puppet Enterprise to our public support portal. Please be
aware that all issues filed here are viewable by all other users.
Public support portal: https://tickets.puppetlabs.com/browse/ENTERPRISE

Join the Puppet Enterprise User Group



1. Go to http://groups.google.com/a/puppetlabs.com/group/pe-users.
2. Click Sign in and apply for membership.
3. Click Enter your email address to access the document.
4. Enter your email address.
Your request to join will be sent to Puppet Labs for authorization, and you will receive an email
when you've been added to the user group.

Getting Support From the Existing Puppet Community


As a Puppet Enterprise customer you are more than welcome to participate in our large and helpful
open source community as well as report issues against the open source project.
Puppet open source user group:
http://groups.google.com/group/puppet-users
Puppet Developers group:
http://groups.google.com/group/puppet-dev
Report issues with the open source Puppet project:
https://tickets.puppetlabs.com/browse/PUP
Next: Quick Start

Quick Start: Using PE 3.3


Welcome to the Puppet Enterprise 3.3 quick start guide. This document is a short walkthrough to
help you evaluate Puppet Enterprise (PE) and become familiar with its features. There are two parts
to this guide: an introductory guide (below) that demonstrates basic use and concepts, and a
follow-up guide where you can build on the concepts you learned in the introduction while
learning some basics about developing Puppet modules for either Windows or *nix platforms.
QUICK START PART ONE: INTRODUCTION

In this rst part, follow along to learn how to:


Create a small proof-of-concept deployment

Note: The installation instructions describe how to install a single agent. If you want to install
more than one agent, just repeat the steps in the Install the Puppet Enterprise Agent
section.
Examine and control nodes in real time with live management


Install a PE-supported Puppet module
Apply Puppet classes to nodes using the console
Set the parameters of classes using the console
Use the console to inspect and analyze the results of configuration events
QUICK START PART TWO: DEVELOPING MODULES

For part two, you'll build on your knowledge of PE and learn about module development. You can
choose from either the Linux track or the Windows track.
In part two, youll learn about:
Basic module structure
Editing manifests and templates
Writing your own modules
Creating a site module that builds other modules into a complete machine role
Applying classes to groups with the console

Following this walkthrough will take approximately 30-60 minutes for each part.

Creating a Deployment
A typical Puppet Enterprise deployment consists of:
A number of agent nodes, which are computers (physical or virtual) managed by Puppet.
At least one puppet master server, which serves configurations to agent nodes.
At least one console server, which analyzes agent reports and presents a GUI for managing your
site. (This may or may not be the same server as the master.)
At least one database support server, which runs PuppetDB and the databases that support the
console. (This may or may not be the same server as the console server.)
For this walkthrough, you will create a simple deployment where the puppet master, the console,
and database support components will run on one machine (a.k.a. a monolithic master). This
machine will manage one or two agent nodes. In a production environment you have total flexibility
in how you deploy and distribute your master, console, and database support components, but for
the purposes of this guide we're keeping things simple.

Preparing Your Proof-of-Concept Systems


To create this small deployment, you will need the following:
At least two computers (nodes) running a *nix operating system supported by Puppet
Enterprise.

These can be virtual machines or physical servers.


One of these nodes (the puppet master server) should have at least 1 GB of RAM. Note:
For actual production use, a puppet master node should have at least 4 GB of RAM.
For part two, if you choose to follow the Windows track you'll need a computer running a
version of Microsoft Windows supported by Puppet Enterprise.
Puppet Enterprise installer tarballs suitable for the OS and architecture your nodes are
using.
A network: all of your nodes should be able to reach each other.
All of the nodes you intend to use should have their system clocks set to within a minute
of each other.
An internet connection or a local mirror of your operating system's package repositories,
for downloading additional software that Puppet Enterprise may require.
Properly configured firewalls.
For demonstration purposes, all nodes should allow all traffic on ports 8140, 61613,
and 443. (Production deployments can and should partially restrict this traffic.)
Properly configured name resolution.
Each node needs a unique hostname, and they should be on a shared domain. For the
rest of this walkthrough, we will refer to the puppet master as master.example.com
and the agent node as agent1.example.com. You can use any hostnames and any
domain; simply substitute the names as needed throughout this document.
All nodes must know their own hostnames. This can be done by properly configuring
reverse DNS on your local DNS server, or by setting the hostname explicitly. Setting the
hostname usually involves the hostname command and one or more configuration files;
the exact method varies by platform.
All nodes must be able to reach each other by name. This can be done with a local DNS
server, or by editing the /etc/hosts file on each node to point to the proper IP
addresses. Test this by running ping master.example.com and ping
agent1.example.com on every node.
Optionally, to simplify configuration later, all nodes should also be able to reach the
puppet master node at the hostname puppet. This can be done with DNS or with hosts
files. Test this by running ping puppet on every node.
The control workstation from which you are carrying out these instructions must be
able to reach every node in the deployment by name.
Properly configured SSH.
If you have a properly configured SSH agent with agent forwarding enabled, you don't
need to perform any additional SSH configuration. Your SSH agent will be used by the
installer.
Are you installing using root with a password? The installer will ask you to provide the
username and password for the node on which you're installing PE. Remote root SSH
login must be enabled, including on the node from which you're running the installer.

Are you installing using root with an SSH key? The installer will ask you to provide the
username, private key path, and key passphrase (as needed) for each node on which
you're installing a PE component. Remote root SSH login must be enabled on each node,
including the node from which you're running the installer, and the public root SSH key
must be added to authorized_keys on each node on which you're installing a PE
component.
Please ensure that port 3000 is reachable, as the web-based installer uses this port. You
can close the port when the installation is complete.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain those lines, or
else comment them out.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
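The requirements above can be spot-checked with a small preflight script. Everything here is a sketch: the hostnames are this walkthrough's examples, and the commented lines assume ping and nc are available on your nodes:

```shell
# Minimal preflight sketch: run on each node before installing PE.
set -e
test -n "$(hostname)"   # each node must know its own hostname
date -u                 # compare output across nodes: clocks within 1 minute
# Reachability and firewall checks (uncomment and adjust for your network):
# ping -c 1 master.example.com
# ping -c 1 agent1.example.com
# ping -c 1 puppet                 # optional alias for the master
# nc -z master.example.com 8140    # puppet master
# nc -z master.example.com 61613   # orchestration
# nc -z master.example.com 443     # console
# nc -z master.example.com 3000    # web-based installer
```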

Installing the Puppet Master


1. Download and verify the appropriate PE tarball.

Tip: Be sure to download the full PE tarball, not the agent-only tarball. The agent-only
tarball is used for package management-based agent installation, which is not covered by
this guide.
2. Unpack the tarball. (Run tar -xf <tarball>.)
3. From the PE installer directory, run sudo ./puppet-enterprise-installer.
4. When prompted, choose Yes to install the setup packages. (If you choose No, the installer will
exit.)
At this point, the PE installer will start a web server and provide a web address:
https://<install platform hostname>:3000. Please ensure that port 3000 is reachable. If
necessary, you can close port 3000 when the installation is complete. Also be sure to use https.

Warning: Leave your terminal connection open until the installation is complete; otherwise
the installation will fail.
5. Copy the address into your browser.
6. When prompted, accept the security request in your browser.
The web-based installation uses a default SSL certificate; you'll have to add a security exception
in order to access the web-based installer. This is safe to do.

You'll be taken to the installer start page.


7. On the start page, click Let's get started.
8. Next, you'll be asked to choose your deployment type. Select Monolithic.
9. Provide the following information about the puppet master server:
a. Puppet master FQDN: provide the fully qualified domain name of the server you're installing PE
on; for example, master.example.com.
b. DNS aliases: provide a comma-separated list of aliases agent nodes can use to reach the
master; for example, master.
c. SSH Username: provide the SSH username for the user connecting to the puppet master; in this
case, root.
10. When prompted about database support, choose the default option Install PostgreSQL for me.
11. Provide the following information about the PE console administrator user:
a. Console superuser email address: provide the address you'll use to log in to the console as the
administrator.
b. Console superuser passphrase: create a password for the console login; as indicated, the
password must be at least eight characters.
12. For SMTP Hostname use localhost.
13. Click Submit.
14. On the confirm plan page, review the information you provided, and, if it looks correct, click
Continue.
If you need to make any changes, click Go Back and make whatever changes are required.
15. On the validation page, the installer will verify various configuration elements (e.g., whether SSH
credentials are correct, whether there is enough disk space, and whether the OS is the same for the
various components). If there aren't any outstanding issues, click Deploy now.
The installer will then install and configure Puppet Enterprise. It may also need to install additional
packages from your OS's repository. This process may take up to 10-15 minutes. When the
installation is complete, the installer script that was running in the terminal will close itself.

You have now installed the puppet master node. As indicated by the installer, the puppet
master node is also an agent node, and can configure itself the same way it configures the
other nodes in a deployment. Stay logged in as root for further exercises.
LOG IN TO THE CONSOLE

To log in to the console, you can select the Start Using Puppet Enterprise Now button that appears
at the end of the web-based installer or follow the steps below.
1. On your control workstation, open a web browser and point it to the address supplied by the

installer; for example, https://master.example.com. You will receive a warning about an
untrusted certificate. This is because you were the signing authority for the console's certificate,
and your Puppet Enterprise deployment is not known to the major browser vendors as a valid
signing authority. Ignore the warning and accept the certificate. The steps to do this vary by
browser.
2. On the login page for the console, log in with the email address and password you provided
when installing the puppet master.

The console GUI loads in your browser.


Installing the Puppet Enterprise Agent
Note: This procedure references RHEL and Debian, but it can be used for all supported
platforms except Windows. For instructions on installing agents on Windows, refer to the
Windows agent installation instructions.

Tip: If you don't have internet connectivity, refer to the note about installing without internet
connectivity to choose a method that is suitable for your needs.
The puppet master that you've installed hosts a package repository for agents of the same OS
and architecture as the puppet master. When you run the installation script on your agent (for
example, curl -k https://<master.example.com>:8140/packages/current/install.bash |
sudo bash), the script will detect the OS on which it is running, set up an apt (or yum, or zypper)
repo that refers back to the master, and pull down and install the pe-agent packages.
Note that if install.bash can't find agent packages corresponding to the agent's platform, it will fail
with an error message telling you which pe_repo class you need to add to the master.
If your agent is the same OS and architecture as the puppet master, run the script above to set up

the agent, and then continue on to Connecting Agents to the Master.


If your master's OS and architecture differ from the agent's (for example, the master is on a node
running RHEL 6 and you want to add an agent node running Debian 6 on AMD64 hardware), follow
this example:
1. On the console, click the Add classes button in the sidebar:

2. Search for the pe_repo::platform::debian_6_amd64 class in the list of classes, and click its
checkbox to select it. Click the Add selected classes button at the bottom of the page.
3. Navigate to the master.example.com node page, click the Edit button, and begin typing
pe_repo::platform::debian_6_amd64 in the Classes field; you can select the
pe_repo::platform::debian_6_amd64 class from the list of autocomplete suggestions.
4. Click the Update button after you have selected it.
5. Note that the pe_repo::platform::debian_6_amd64 class now appears in the list of classes for
the master.example.com node.
6. Navigate to the live management page, and select the Control Puppet tab. Use the runonce
action to trigger a puppet run.
The new repo will be created in /opt/puppet/packages/public. It will be called puppet-enterprise-3.3.0-debian-6-amd64-agent.
7. SSH into the Debian node where you want to install the agent, and run curl -k
https://<master.example.com>:8140/packages/current/install.bash | sudo bash.

The installer will then install and configure the Puppet Enterprise agent.

You have now installed the puppet agent node. Stay logged in as root for further exercises.

Connecting Agents to the Master


After installing, the agent nodes are not yet allowed to fetch configurations from the puppet
master; they must be explicitly approved and granted a certificate.
Approving the Certificate Request
During installation, the agent node contacted the puppet master and requested a certificate. To add
the node to the console and to start managing its configuration, you'll need to approve its request
on the puppet master. This is most easily done via the console.
1. From the console, note the pending node requests indicator in the upper right corner. Click it to
load a list of currently pending node requests.

2. Click the Accept All button to approve all the requests and add the nodes.

The puppet agents can now retrieve configurations from the master the next time puppet
runs.

Testing the Agent Nodes


During this walkthrough, we will be running the puppet agent interactively. By default, the agent
runs in the background and fetches configurations from the puppet master every 30 minutes. (This

interval is configurable with the runinterval setting in puppet.conf.) However, you can also trigger
a puppet run manually from the command line.
1. On the agent node, log in as root and run puppet agent --test on the command line. This will
trigger a single puppet run on the agent with verbose logging.

Note: You may receive a -bash: puppet: command not found error; this is because
PE installs its binaries in /opt/puppet/bin and /opt/puppet/sbin, which aren't
included in your default $PATH. To include these binaries in your default $PATH, manually
add them to your profile or run PATH=/opt/puppet/bin:$PATH;export PATH.
2. Note the long string of log messages, which should end with notice: Finished catalog run
in [...] seconds.
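The PATH adjustment from the note in step 1 can be applied as follows; including /opt/puppet/sbin alongside /opt/puppet/bin is our addition beyond the note's one-liner:

```shell
# Prepend PE's binary directories to the search path for this session;
# add the same lines to your shell profile to make the change permanent.
PATH=/opt/puppet/bin:/opt/puppet/sbin:$PATH
export PATH
echo "$PATH"
```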

You are now fully managing the agent node. It has checked in with the puppet master for the
first time and received its configuration info. It will continue to check in and fetch new
configurations every 30 minutes. The node will also appear in the console, where you can
make changes to its configuration by assigning classes and modifying the values of class
parameters.

Viewing the Agent Node in the Console


1. Click Nodes in the primary navigation bar. You'll see various UI elements, which show a summary
of recent puppet runs and their status. Notice that the master and any agent nodes appear in the
list of nodes:


2. Explore the console. Note that if you click on a node to view its details, you can see its recent
history, the Puppet classes it receives, and a very large list of inventory information about it. See
here for more information about navigating the console.

You now know how to find detailed information about any node PE is managing, including
its status, inventory details, and the results of its last puppet run.

Avoiding the Wait


Although the puppet agent is now fully functional on the agent node, some other Puppet Enterprise
software is not; specifically, the daemon that listens for orchestration messages is not yet
configured. This is because Puppet Enterprise uses Puppet to configure itself.
Puppet Enterprise does this automatically within 30 minutes of a node's first check-in. To speed up
the process and avoid the wait, do the following:
1. On the console, use the sidebar to navigate to the mcollective group:

2. Check the list of nodes at the bottom of the page for agent1.example.com; depending on your
timing, it may already be present. If so, skip to the "on each agent node" step below.
3. If agent1 is not a member of the group already, click the Edit button:


4. In the nodes field, begin typing agent1.example.com's name. You can then select it from the list
of autocompletion guesses. Click the Update button after you have selected it.


5. On each agent node, run puppet agent --test again, as described above. Note the long string
of log messages related to the pe_mcollective class.
In a normal environment, you would usually skip these steps and allow orchestration to come online when Puppet runs automatically.

The agent node can now respond to orchestration messages and its resources can be viewed
live in the console.

Using Live Management to Control Agent Nodes


Live management uses Puppet Enterprise's orchestration features to view and edit resources in real
time. It can also trigger puppet runs and perform other orchestration tasks.
1. On the console, click the Live Management tab in the top navigation.

2. Note that the master and the agent nodes are all listed in the sidebar.
Discovering Resources
1. Note that you are currently in the Browse Resources tab.
2. Choose user resources from the list of resource types, then click the Find Resources button:


3. Examine the complete list of user accounts found on all of the nodes currently selected in the
sidebar node list. (In this case, both the master and the agent node are selected.) Most of the
users will be identical, as these machines are very close to a default OS install, but some users
related to the puppet master's functionality are only on one node:

4. Click on any user to view details about its properties and where it is present.

The other resource types work in a similar manner: choose the node(s) whose resources you wish
to browse, select a resource type, click Find Resources to discover the resource on the selected
nodes, and then click one of the resulting found resources to see details about it.
Triggering Puppet Runs
Rather than using the command line to kick off puppet runs with puppet agent -t one at a time,
you can use live management to run Puppet on several selected nodes.
1. On the console, in the live management page, click the Control Puppet tab.
2. Make sure one or more nodes are selected with the node selector on the left.
3. Click the runonce action to reveal the red Run button and additional options, and then click the
Run button to run Puppet on the selected nodes.

Note: You can't always use the runonce action's additional options with *nix nodes; you
must stop the pe-puppet service before you can use options like noop. See this note in the
orchestration section of the manual for more details.


You have just triggered a puppet run on several agents at once; in this case, the master and the
agent node. The runonce action will trigger a puppet run on every node currently selected in the
sidebar.
When using this action in production deployments, select target nodes carefully, as running it on
dozens or hundreds of nodes at once can strain the Puppet master server. If you need to do an
immediate Puppet run on many nodes, you should use the orchestration command line to do a
controlled run series.

Installing Modules
Puppet configures nodes by applying classes to them. Classes are chunks of Puppet code that
configure a specific aspect or feature of a machine.
Puppet classes are distributed in the form of modules. You can save time by using pre-existing
modules. Pre-existing modules are distributed on the Puppet Forge, and can be installed with the
puppet module subcommand. Any module installed on the Puppet master can be used to configure
agent nodes.
Installing a Forge Module
We will install a Puppet Enterprise supported module: puppetlabs-ntp. While you can use any
module available on the Forge, PE customers can take advantage of supported modules, which are
tested and maintained by Puppet Labs.
1. On your control workstation, point your browser to
http://forge.puppetlabs.com/puppetlabs/ntp. This is the Forge listing for a module that installs,
configures, and manages the NTP service.
2. On the puppet master, run puppet module search ntp. This searches for modules from the
Puppet Forge with ntp in their names or descriptions and results in something like:

Searching http://forgeapi.puppetlabs.com ...

NAME             DESCRIPTION               AUTHOR       KEYWORDS
puppetlabs-ntp   NTP Module                @puppetlabs  ntp aix
saz-ntp          UNKNOWN                   @saz         ntp OEL
thias-ntp        Network Time Protocol...  @thias       ntp ntpd
warriornew-ntp   ntp setup                 @warriornew  ntp

We want puppetlabs-ntp, which is the PE supported NTP module. You can view detailed info
about the module in the Read Me on the Forge page you just visited:
http://forge.puppetlabs.com/puppetlabs/ntp.
3. Install the module by running puppet module install puppetlabs-ntp:

Preparing to install into /etc/puppetlabs/puppet/modules ...


Notice: Downloading from http://forgeapi.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└── puppetlabs-ntp (v3.0.1)

You have just installed a Puppet module. All of the classes in it are now available to be added
to the console and assigned to nodes.
There are many more modules, including PE supported modules, on the Forge. In part two of this
guide you'll learn more about modules, including customizing and writing your own modules on
either Windows or *nix platforms.
Using Modules in the PE Console
Every module contains one or more classes. Classes are named chunks of Puppet code and are the
primary means by which Puppet configures nodes. The module you just installed contains a class
called ntp. To use any class, you must first tell the console about it and then assign it to one or
more nodes.
1. On the console, click the Add classes button in the sidebar:


2. Locate the ntp class in the list of classes, and click its checkbox to select it. Click the Add
selected classes button at the bottom of the page.

3. Navigate to the default group page (by clicking the link in the Groups menu in the sidebar), click
the Edit button, and begin typing ntp in the Classes field; you can select the ntp class from the
list of autocomplete suggestions. Click the Update button after you have selected it.


4. Note that the ntp class now appears in the list of classes for the default group. Also note that the
default group contains your master and agent.
5. Navigate to the live management page, and select the Control Puppet tab. Use the runonce
action to trigger a puppet run on both the master and the agent. This will configure the nodes
using the newly assigned classes. Wait one or two minutes.
6. On the agent, stop the NTP service.
Note: the NTP service name may vary depending on your operating system; for example, on
Debian nodes, the service name is ntp.
7. Run ntpdate us.pool.ntp.org. The result should resemble the following:
28 Jan 17:12:40 ntpdate[27833]: adjust time server 50.18.44.19 offset 0.057045 sec
8. Finally, restart the NTP service.

Puppet is now managing NTP on the nodes in the default group. So, for example, if you
forget to restart the NTP service on one of those nodes after running ntpdate, PE will
automatically restart it on the next puppet run.
SETTING CLASS PARAMETERS

You can use the console to set the values of class parameters for nodes by selecting a node and
then clicking Edit parameters in the list of classes. For example, suppose you want to specify an
NTP server for a given node.
1. Click a node in the node list.
2. On the node view page, click the Edit button.
3. Find NTP in the class list, and click Edit Parameters.


4. Enter a value for the parameter you wish to set. To set a specic server, enter ntp1.example.com
in the box next to the servers parameter.
The grey text that appears as values for some parameters is the default value, which can be either a
literal value or a Puppet variable. You can restore this value with the Reset value control that
appears next to the value after you have entered a custom value.
For more information, see the page on classifying nodes with the console.
Viewing Changes with Event Inspector
The event inspector lets you view and research changes and other events. Click the Events tab in the
main navigation bar. The event inspector window is displayed, showing the default view: classes
with failures. Note that in the summary pane on the left, one event, a successful change, has been
recorded for Nodes. However, there are two changes for Classes and Resources. This is because the
ntp class loaded from the puppetlabs-ntp module contains additional classes: a class that handles
the configuration of NTP (Ntp::Config) and a class that handles the NTP service (Ntp::Service).


You can click on events in the summary pane to inspect them in detail. For example, if you click
With Changes in the Classes With Events summary view, the main pane will show you that the
Ntp::Config and Ntp::Service classes were successfully added when you triggered the last
puppet run.

You can keep clicking to drill down and see more detail. You can click the previous arrow (left of the
summary pane), use the bread-crumb trail at the top of the page, or bookmark a page for later
reference (but note that after subsequent puppet runs, the bookmarks may be different when you
revisit them). Eventually, you will end up at a run summary that shows you the details of the event.
For example, you can see exactly which piece of puppet code was responsible for generating the
event; in this case, it was line 15 of the service.pp manifest and line 21 of the config.pp manifest.


If there had been a problem applying this class, this information would tell you exactly which piece
of code you need to fix. In this case, event inspector lets you confirm that PE is now managing NTP.
In the upper right corner of the detail pane is a link to a run report, which contains information
about the puppet run that made the change, including metrics about the run, logs, and more.
Visit the reports page for more information.

Summary
You have now experienced the core features and workows of Puppet Enterprise. In summary, a
Puppet Enterprise user will:
Install the PE agent on nodes they wish to manage (*nix and Windows instructions), and add the
nodes by approving their certificate requests.
Use pre-built, PE-supported modules from the Puppet Forge to save time and effort.
Assign classes from modules to nodes in the console.
Use the console to set values for class parameters.
Allow nodes to be managed by regularly scheduled Puppet runs.
Use live management to inspect and compare nodes, and to trigger on-demand puppet agent
runs when necessary.
Use event inspector to learn more about events that occurred during puppet runs, such as what
was changed or why something failed.
Next Steps
Beyond what this brief walkthrough has covered, most users will go on to:
Edit Forge modules to customize them to your infrastructure's needs.
Create new modules from scratch by writing classes that manage resources.
Use a site module to compose other modules into machine roles, allowing console users to
control policy instead of implementation.
Configure multiple nodes at once by adding classes to groups in the console instead of to
individual nodes.
To learn about these workows, continue to part two of this quick start guide. Choose from either
the Windows or the Linux tracks.

OTHER RESOURCES

Puppet Labs offers many opportunities for learning and training, from formal certification courses
to guided online lessons. We've noted a few below; head over to the Learning Puppet page to
discover more.
Learning Puppet is a series of exercises on various core topics related to deploying and using PE. It
includes the Learning Puppet VM, which provides PE pre-installed and configured on VMware and
VirtualBox virtualization platforms.
The Puppet Labs workshop contains a series of self-paced, online lessons that cover a variety of
topics on Puppet basics. You can sign up at the learning page.
To explore the rest of the PE user's manual, use the sidebar at the top of this page, or return to
the index.
Next: Quick Start: Writing Modules (Windows) or Quick Start: Writing Modules (Linux)

Module Writing Basics for Windows


Welcome to part two of the PE 3.3 quick start guide: the Windows track. This document is a
continuation of the introductory quick start guide, and is a short walkthrough to help you become
more familiar with Puppet modules, module development, and additional PE features for your
Windows agent nodes. Follow along to learn how to:
Modify a module obtained from the Forge
Write your own Puppet module
Create a site module that composes other modules into machine roles
Apply Puppet classes to groups with the console

Before starting this walkthrough, you should have completed the introductory quick start
guide. You should still be logged in as root or administrator on your nodes.

Getting Started
First, you'll need to install the puppet agent on a node running a supported version of Windows.
Once the agent is installed, sign its certificate to add it to the console, just as you did for the first
agent node in part one of this guide.
Next, install the Puppet Labs Registry module on your puppet master. The process is identical to
how you installed the NTP module in part one. Once the module has been installed, add its class as
you did with NTP.


Editing a Forge Module


Although many Forge modules are exact solutions that fit your site, many more are almost but not
quite what you need. Typically, you will edit many of your Forge modules.
Module Basics
By default, modules are stored in /etc/puppetlabs/puppet/modules. (You can configure this path
with the modulepath setting in puppet.conf.)
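For example, a puppet.conf sketch that adds a second, hypothetical module directory to the search path (directories are separated by the system path separator, a colon on *nix):

```ini
# /etc/puppetlabs/puppet/puppet.conf (second directory is hypothetical)
[main]
modulepath = /etc/puppetlabs/puppet/modules:/opt/local-modules
```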
Modules are directory trees. The manifests directory of the Puppet Labs Registry module contains the
following files:
registry/ (the module name)
  manifests/
    init.pp (contains the registry class)
    service_example.pp (contains the registry::service_example class used in an example below)
    compliance_example.pp (provides an example registry::compliance_example class)
    purge_example.pp (provides an example registry::purge_example class)
    service.pp (defines registry::service)
    value.pp (defines registry::value)
Every manifest (.pp) file contains a single class. File names map to class names in a predictable way:
init.pp contains a class with the same name as the module; <NAME>.pp contains a class called
<MODULE NAME>::<NAME>; and <NAME>/<OTHER NAME>.pp contains <MODULE
NAME>::<NAME>::<OTHER NAME>.
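As an illustration, a hypothetical module named mymod would lay out its classes like this (file paths shown as comments):

```puppet
# mymod/manifests/init.pp
class mymod { }

# mymod/manifests/server.pp
class mymod::server { }

# mymod/manifests/server/config.pp
class mymod::server::config { }
```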
Many modules contain directories other than manifests; for simplicity's sake, we will not cover
them in this introductory guide.
For more on how modules work, see Module Fundamentals in the Puppet documentation.
For more on best practices, methods, and approaches to writing modules, see the Beginner's
Guide to Modules.
For a more detailed guided tour, also see the module chapters of Learning Puppet.
Editing a Manifest
This simplified exercise will modify an example manifest from the Puppet Labs Registry module,
specifically service_example.pp. The registry::service defined resource type makes it easy to
control your registry: with just a bit of puppet code, you avoid having to declare separate
registry_key and registry_value resources.
1. On the puppet master, navigate to the modules directory by running cd
/etc/puppetlabs/puppet/modules.

2. Run ls to view the currently installed modules; note that registry is present.
3. Open registry/manifests/service_example.pp, using the text editor of your choice (vi, nano,
etc.). Avoid using Notepad since it can introduce errors.
service_example.pp contains the following:

class registry::service_example {
  # Define a new service named "Puppet Test" that is disabled.
  registry::service { 'PuppetExample1':
    display_name => "Puppet Example 1",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample1.bat',
    start        => 'disabled',
  }

  registry::service { 'PuppetExample2':
    display_name => "Puppet Example 2",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample2.bat',
    start        => 'disabled',
  }
}

4. Remove the PuppetExample2 registry::service resource, and add the following file
resource:
class registry::service_example {
  # Define a new service named "Puppet Test" that is disabled.
  registry::service { 'PuppetExample1':
    display_name => "Puppet Example 1",
    description  => "This is a simple example managing the registry entries for a Windows Service",
    command      => 'C:\PuppetExample1.bat',
    start        => 'disabled',
  }

  file { 'C:\PuppetExample1.bat':
    ensure  => file,
    content => ":loop\r\nTIMEOUT /T 300\r\ngoto loop\r\n",
    notify  => Registry::Service['PuppetExample1'],
  }
}

The registry::service_example class is now managing C:\PuppetExample1.bat, and the contents
of that file are being set with the content attribute. For more on resource declarations,
see the manifests chapter of Learning Puppet or the resources page of the language reference.
For more about how file paths with backslashes work in manifests for Windows, see the page on
writing manifests for Windows.
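In brief, backslashes are literal inside single-quoted strings but must be doubled inside double-quoted strings, so the following two variables (paths hypothetical) hold the same value:

```puppet
$path_single = 'C:\Temp\example.txt'    # backslashes taken literally
$path_double = "C:\\Temp\\example.txt"  # backslashes escaped; same path
```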

5. Save and close the file.


6. On the console, add registry::service_example to the available classes, and then add that
class to the Windows agent node. Refer to the introductory section of this guide if you need help
adding classes in the console.
7. Kick off a puppet run.
On the Windows agent node, navigate to your C:\ directory. Puppet has created the file resource
PuppetExample1.bat, which is one of the resources that Puppet manages when it applies the
registry::service_example class.

Puppet has also set a number of registry keys to define the PuppetExample1 Windows service. You
can use event inspector to view the specific changes.


To see PuppetExample1 in the list of services that are running, you'll first need to reboot your
Windows agent node, and then navigate to Services via the Administrative Tools.

Writing a Puppet Module


Puppet Labs modules save time, but at some point most users will also need to write their own
modules.
Writing a Class in a Module
During this exercise, you will create a class called critical_policy that will manage a collection of
important settings and options in your Windows registry, most notably the legal caption and text
users will see before the login screen.
1. On the puppet master, make sure you're still in the modules directory (cd
/etc/puppetlabs/puppet/modules), and then run mkdir -p critical_policy/manifests to
create the new module directory and its manifests directory.
2. Use your text editor to create and open the critical_policy/manifests/init.pp file.
3. Edit the init.pp file so it contains the following puppet code, and then save it and exit the editor:
class critical_policy {

  registry::value { 'Legal notice caption':
    key   => 'HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System',
    value => 'legalnoticecaption',
    data  => 'Legal Notice',
  }

  registry::value { 'Legal notice text':
    key   => 'HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System',
    value => 'legalnoticetext',
    data  => 'Login constitutes acceptance of the End User Agreement',
  }

  registry::value { 'Allow Windows Update to Forcibly reboot':
    key   => 'HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU',
    value => 'NoAutoRebootWithLoggedOnUsers',
    type  => 'dword',
    data  => '0',
  }
}

You have written a new module containing a single class. Puppet now knows about this class,
and it can be added to the console and assigned to your Windows nodes, just as you did in
part one of this guide.
Note the following about this new class:
The registry::value defined resource type allows you to use Puppet to automatically manage the
parent key for a particular value.
The key parameter specifies the path to the key that the value(s) must be in.
The value parameter lists the name of the registry value(s) to manage. This is copied
from the resource title if not specified.
The type parameter determines the type of the registry value(s). It defaults to string; valid
values are string, array, dword, qword, binary, or expand.
The data parameter lists the data inside the registry value.
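For instance, a hedged sketch of a multi-string (array) value, relying on the resource title to supply the value name since value is omitted (key path and data are hypothetical):

```puppet
registry::value { 'SearchPaths':
  key  => 'HKLM\Software\ExampleApp',
  type => 'array',    # a REG_MULTI_SZ value
  data => ['C:\ExampleApp\bin', 'C:\ExampleApp\lib'],
}
```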

For more information about writing classes, refer to the following documentation:
To learn how to write resource declarations, conditionals, and classes in a guided tour format,
start at the beginning of Learning Puppet.
For a complete but succinct guide to the Puppet language's syntax, see the Puppet 3 language
reference.
For complete documentation of the available resource types, see the type reference.
For short, printable references, see the modules cheat sheet and the core types cheat sheet.
Using Your Custom Module in the Console
1. On the console, use the Add classes button to choose the critical_policy class from the list,
and then click the Add selected classes button to make it available, just as in the previous
example. You may need to wait a moment or two for the class to show up in the list.
2. Add the critical_policy class to your Windows agent node.
3. On the Windows agent node, manually set the data values of legalnoticecaption and
legalnoticetext to some other values. For example, set legalnoticecaption to Larry's
Computer and set legalnoticetext to This is Larry's computer.


4. Use live management to run the runonce action on your Windows agent node.
5. On the Windows agent node, refresh the registry and note that the values of
legalnoticecaption and legalnoticetext have been returned to the values specified in your
critical_policy manifest.

If you reboot your Windows machine, you will see the legal caption and text before you log in again.

You have created a new class from scratch and used it to manage registry settings on your
Windows server.

Using a Site Module


Many users create a site module. Instead of describing smaller units of a configuration, the
classes in a site module describe a complete configuration for a given type of machine. For
example, a site module might contain:
A site::basic class, for nodes that require security management but haven't been given a
specialized role yet.
A site::webserver class for nodes that serve web content.
A site::dbserver class for nodes that provide a database server to other applications.
Site modules hide complexity so you can more easily divide labor at your site. System architects can
create the site classes, and junior admins can create new machines and assign a single role class
to them in the console. In this workflow, the console controls policy, not fine-grained
implementation.
On the puppet master, create the /etc/puppetlabs/puppet/modules/site/manifests/basic.pp
file, and edit it to contain the following:
class site::basic {
  if $osfamily == 'windows' {
    include critical_policy
  }
  else {
    include motd
    include core_permissions
  }
}

This class declares other classes with the include function. Note the if conditional that sets
different classes for different OSs using the $osfamily fact. In this example, if an agent node is not
a Windows agent, puppet will apply the motd and core_permissions classes. For more information
about declaring classes, see the modules and classes chapters of Learning Puppet.
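Additional role classes follow the same pattern. As a sketch, a site/manifests/webserver.pp file might compose (any class name other than critical_policy below is a hypothetical placeholder, not a real module):

```puppet
class site::webserver {
  include critical_policy   # reuse the policy class from this guide
  include site_webapp       # hypothetical module for the web application
}
```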
1. On the console, remove all of the previous example classes from your nodes and groups, using
the Edit button in each node or group page. Be sure to leave the pe_* classes in place.
2. Add the site::basic class to the console with the Add classes button in the sidebar as before.
3. Assign the site::basic class to the default group.

Your nodes are now receiving the same configurations as before, but with a simplified
interface in the console. Instead of deciding which classes a new node should receive, you
can decide what type of node it is and take advantage of decisions you made earlier.

Summary
You have now performed the core workows of an intermediate Puppet user. In the course of their
normal work, intermediate users:
Download and modify Forge modules to fit their deployment's needs.
Create new modules and write new classes to manage many types of resources, including les,
services, packages, user accounts, and more.
Build and curate a site module to safely empower junior admins and simplify the decisions
involved in deploying new machines.
Monitor and troubleshoot events that affect their infrastructure.
Next: System Requirements

Module Writing Basics for Linux


Welcome to part two of the PE 3.3 quick start guide: the Linux track. This document is a
continuation of the introductory quick start guide, and is a short walkthrough to help you become
more familiar with Puppet modules, module development, and additional PE features. Follow along
to learn how to:
Modify a module obtained from the Forge
Write your own Puppet module
Create a site module that composes other modules into machine roles
Apply Puppet classes to groups with the console

Before starting this walkthrough, you should have completed the introductory quick start
guide. You should still be logged in as root or administrator on your nodes.

Getting Started
Since you'll be using the same master and agent nodes you configured in part one, all you need to
install for the following exercises is the Puppet Labs supported Apache module. The process is
identical to how you installed the NTP module in part one; just be sure to install the module on
your master. Once the module has been installed, use the console to add its class and then classify
the master as you did with NTP.

Editing a Forge Module



Although many Forge modules are exact solutions that fit your site, many are almost but not quite
what you need. Sometimes you will need to edit some of your Forge modules.
Module Basics
By default, modules are stored in /etc/puppetlabs/puppet/modules. (If need be, you can configure
this path with the modulepath setting in puppet.conf.)
Modules are directory trees. For these exercises you'll use the following files:
apache/ (the module name)
  manifests/
    init.pp (contains the apache class)
    php.pp (contains the php class to install PHP for Apache)
    vhosts.pp (contains the Apache virtual hosts class)
  templates/
    vhost.conf.erb (contains the vhost template, managed by PE)
Every manifest (.pp) file contains a single class. File names map to class names in a predictable way:
init.pp contains a class with the same name as the module; <NAME>.pp contains a class called
<MODULE NAME>::<NAME>; and <NAME>/<OTHER NAME>.pp contains <MODULE
NAME>::<NAME>::<OTHER NAME>.
Many modules, including Apache, contain directories other than manifests and templates; for
simplicity's sake, we do not cover them in this introductory guide.
For more on how modules work, see Module Fundamentals in the Puppet documentation.
For more on best practices, methods, and approaches to writing modules, see the Beginner's
Guide to Modules.
For a more detailed guided tour, also see the module chapters of Learning Puppet.
Editing a Manifest
This simplified exercise modifies a template from the Puppet Labs Apache module, specifically
vhost.conf.erb. You'll edit the template to include some simple variables that will be populated
by facts (using PE's implementation of Facter) about your node.
1. On the puppet master, navigate to the modules directory by running cd
/etc/puppetlabs/puppet/modules.
2. Run ls to view the currently installed modules; note that apache is present.
3. Open apache/templates/vhost.conf.erb, using the text editor of your choice (vi, nano, etc.).
Avoid using Notepad since it can introduce errors. vhost.conf.erb contains the following
header:

# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
# ************************************
4. Collect the following facts about your agent node:
Run facter osfamily (this returns your agent node's OS family).
Run facter id (this returns the ID of the currently logged-in user).
5. Edit the header of vhost.conf.erb so that it contains the following variables for Facter
lookups:
# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
#
# This file is authorized for deployment by <%= scope.lookupvar('::id') %>.
#
# This file is authorized for deployment ONLY on <%= scope.lookupvar('::osfamily') %> <%= scope.lookupvar('::operatingsystemmajrelease') %>.
#
# Deployment by any other user or on any other system is strictly prohibited.
# ************************************
6. On the console, add apache to the available classes, and then add that class to your agent node.
Refer to the introductory section of this guide if you need help adding classes in the console.
7. Use live management to kick off a puppet run.
At this point, puppet configures Apache and starts the httpd service. When this happens, a default
Apache vhost is created based on the contents of vhost.conf.erb.
1. On the agent node, navigate to one of the following locations based on your operating system:
Redhat-based: /etc/httpd/conf.d
Debian-based: /etc/apache2/sites-available
2. View 15-default.conf; depending on the node's OS, the header will show some variation of the
following contents:
# ************************************
# Vhost template in module puppetlabs-apache
# Managed by Puppet
#
# This file is authorized for deployment by root.
#
# This file is authorized for deployment ONLY on Redhat 6.
#
# Deployment by any other user or on any other system is strictly prohibited.
# ************************************

As you can see, PE has used Facter to retrieve some key facts about your node, and then used those
facts to populate the header of your vhost template.
But now, let's see what happens when you write your own Puppet code.

Writing a Puppet Module


Puppet Labs modules save time, but at some point you may find that you need to write your own
modules.
Writing a Class in a Module
During this exercise, you will create a class called pe_quickstart_app that will manage a PHP-based web app running on an Apache virtual host.
1. On the puppet master, make sure you're still in the modules directory (cd
/etc/puppetlabs/puppet/modules), and then run mkdir -p pe_quickstart_app/manifests to
create the new module directory and its manifests directory.
2. Use your text editor to create and open the pe_quickstart_app/manifests/init.pp file.
3. Edit the init.pp file so it contains the following puppet code, and then save it and exit the
editor:
class pe_quickstart_app {

  class { 'apache':
    mpm_module => 'prefork',
  }

  include apache::mod::php

  apache::vhost { 'pe_quickstart_app':
    port     => '80',
    docroot  => '/var/www/pe_quickstart_app',
    priority => '10',
  }

  file { '/var/www/pe_quickstart_app/index.php':
    ensure  => file,
    content => "<?php phpinfo() ?>\n",
    mode    => '0644',
  }

}


You have written a new module containing a new class that includes two other classes.
Puppet now knows about your new class, and it can be added to the console and
assigned to your node, just as you did in part one of this guide.
Note the following about your new class:
The apache class has been declared with the mpm_module attribute; this attribute
determines which multi-processing module is configured and loaded for the Apache (httpd)
process. In this case, the value is set to prefork.
include apache::mod::php indicates that your new class relies on those classes to
function correctly. PE understands that your node needs to be classified with
these classes and will take care of that work automatically when you classify your node
with the pe_quickstart_app class; in other words, you don't need to worry about
classifying your nodes with Apache and Apache PHP.
The priority attribute of 10 ensures that your app has a higher priority on port 80 than
the default Apache vhost app.
The file /var/www/pe_quickstart_app/index.php contains whatever is specified by the
content attribute; this is the content you will see when you launch your app. PE uses the
ensure attribute to create that file the first time the class is applied.

For more information about writing classes, refer to the following documentation:
To learn how to write resource declarations, conditionals, and classes in a guided tour format,
start at the beginning of Learning Puppet.
For a complete but succinct guide to the Puppet language's syntax, see the Puppet 3 language
reference.
For complete documentation of the available resource types, see the type reference.
For short, printable references, see the modules cheat sheet and the core types cheat sheet.
Using Your Custom Module in the Console
1. On the console, click the Add classes button, choose the pe_quickstart_app class from the list,
and then click the Add selected classes button to make it available, just as in the previous
example. You may need to wait a moment or two for the class to show up in the list.
2. Navigate to the node view page for your agent node, and use the Edit button to add the
pe_quickstart_app class to your agent node, and remove the apache class you previously
added.

Note: Since the pe_quickstart_app class includes the apache class, you need to remove the
apache class you previously added to the master node, as puppet will only allow you to declare a
class once.

3. Use live management to run the runonce action on your agent node.
When the puppet run is complete, you will see in the nodes log that a vhost for the app has been
created and the Apache service (httpd) has been started.
4. Use a browser to navigate to port 80 of the IP address for your node; e.g.,
http://<yournodeip>:80.

Tip: Be sure to use http instead of https.

You have created a new class from scratch and used it to launch an Apache PHP-based web app.
Needless to say, in the real world your apps will do a lot more than display PHP info pages. But for
the purposes of this exercise, let's take a closer look at how PE is managing your app.
Using PE to Manage Your App
1. On the agent node, open /var/www/pe_quickstart_app/index.php, and change the content to
something like THIS APP IS MANAGED BY PUPPET!
2. Refresh your browser, and notice that the PHP info page has been replaced with your new
message.
3. On the console, use live management to run the runonce action on your node.
4. Refresh your browser, and notice that puppet has reset your web app to display the PHP info
page. (You can also see that the contents of /var/www/pe_quickstart_app/index.php have been
reset to what was specified in your manifest.)

Using a Site Module


Many users create a site module. Instead of describing smaller units of a configuration, the
classes in a site module describe a complete configuration for a given type of machine. For
example, a site module might contain:
A site::basic class, for nodes that require security management but haven't been given a
specialized role yet.
A site::webserver class for nodes that serve web content.
A site::dbserver class for nodes that provide a database server to other applications.
Site modules hide complexity so you can more easily divide labor at your site. System architects can
create the site classes, and junior admins can create new machines and assign a single role class
to them in the console. In this workflow, the console controls policy, not fine-grained
implementation.
On the puppet master, create /etc/puppetlabs/puppet/modules/site/manifests/basic.pp,
and edit the file to contain the following:
class site::basic {
  if $kernel == 'Linux' {
    include pe_quickstart_app
  }
  elsif $kernel == 'windows' {
    include registry::compliance_example
  }
}

This class declares other classes with the include function. Note the if conditional that sets
different classes for different kernels using the $kernel fact. In this example, if an agent node is a
Linux machine, puppet will apply your pe_quickstart_app class; if it is a Windows machine, puppet
will apply the registry::compliance_example class. For more information about declaring classes,
see the modules and classes chapters of Learning Puppet.
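The same classification can also be written with a case statement, which some find easier to extend as more kernels are added. A sketch equivalent to the class above:

```puppet
class site::basic {
  case $kernel {
    'Linux':   { include pe_quickstart_app }
    'windows': { include registry::compliance_example }
    default:   { }  # no extra classes for other kernels
  }
}
```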
1. On the console, remove all of the previous example classes from your nodes and groups, using
the Edit button in each node or group page. Be sure to leave the pe_* classes in place.
2. Add the site::basic class to the console with the Add classes button in the sidebar as before.
3. Assign the site::basic class to the default group.

Your nodes are now receiving the same configurations as before, but with a simplified
interface in the console. Instead of deciding which classes a new node should receive, you
can decide what type of node it is and take advantage of decisions you made earlier.

Summary
You have now performed the core workows of an intermediate Puppet user. In the course of their
normal work, intermediate users:
Download and modify Forge modules to fit their deployment's needs.
Create new modules and write new classes to manage many types of resources, including files,
services, and more.
Build and curate a site module to safely empower junior admins and simplify the decisions
involved in deploying new machines.
Next: System Requirements

System Requirements and Pre-Installation


Before installing Puppet Enterprise:
Ensure that your nodes are running a supported operating system.
Ensure that your puppet master and console servers are sufficiently powerful (see the hardware
section below).
Ensure that your network, firewalls, and name resolution are configured correctly and all target
servers are communicating.
Plan to install the puppet master server before the console server, and the console server before
any agent nodes. If you are separating components, install them in this order:
1. Puppet Master
2. PuppetDB and PostgreSQL
3. Console
4. Agents

Operating System
Puppet Enterprise 3.3 supports the following systems:
Operating system             | Version(s)                                       | Arch         | Component(s)
-----------------------------|--------------------------------------------------|--------------|---------------------------------------------
Red Hat Enterprise Linux     | 4, 5, 6, & 7                                     | x86 & x86_64 | all (RHEL 4 supports agent only)
CentOS                       | 4, 5, & 6                                        | x86 & x86_64 | all (CentOS 4 supports agent only)
Ubuntu LTS                   | 10.04, 12.04, & 14.04                            | i386 & amd64 | all
Debian                       | Squeeze (6) & Wheezy (7)                         | i386 & amd64 | all
Oracle Linux                 | 4, 5, & 6                                        | x86 & x86_64 | all (Oracle Linux 4 supports agent only)
Scientific Linux             | 4, 5, & 6                                        | x86 & x86_64 | all (Scientific Linux 4 supports agent only)
SUSE Linux Enterprise Server | 11 (SP1 and later)                               | x86 & x86_64 | all
Solaris                      | 10 (Update 9 or later) & 11                      | SPARC & i386 | agent
Microsoft Windows            | 2003, 2003R2, 2008, 2008R2, 7, 8, 2012, & 2012R2 | x86 & x86_64 | agent
AIX                          | 5.3, 6.1, & 7.1                                  | Power        | agent
Mac OS X                     | Mavericks (10.9)                                 | x86_64       | agent
Note: Some operating systems require an active subscription with the vendor's package
management system (such as the Red Hat Network) to install dependencies.

Note: In addition, upgrading your OS while PE is installed can cause problems with PE. To
perform an OS upgrade, you'll need to uninstall PE, perform the OS upgrade, and then
reinstall PE as follows:
1. Back up your databases and other PE files.
2. Perform a complete uninstall (including the -p -d uninstaller option).
3. Upgrade your OS.
4. Install PE.
5. Restore your backup.

Hardware Requirements
Puppet Enterprise's hardware requirements depend on the components a machine runs.
For the puppet master, PE console, PuppetDB and database support, and any agent nodes, we
recommend that your hardware meets the following requirements:
At least four processor cores per node
At least 4 GB RAM per node
Very accurate timekeeping
At least 1 GB of free space in /var/ for each PE component on a given node
For PE-installed PostgreSQL, at least 100 GB of free space in /opt/ for data gathering
If you are not using PE-installed PostgreSQL, at least 1 GB of free space in /opt/
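As a quick sanity check against the free-space figures above, you can report available space on the filesystems backing /var and /opt before installing (a sketch; assumes GNU coreutils df, which provides the --output flag):

```shell
# Show free space, in 1 GB blocks, on the filesystems backing /var and /opt;
# compare the Avail column against the recommendations above.
df -BG --output=target,avail /var /opt
```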


Supported Browsers
The following browsers are supported for use with the console:
Chrome: Current version, as of release
Firefox: Current version, as of release
Internet Explorer: 9, 10, and 11
Safari: 7

System Configuration
Before installing Puppet Enterprise at your site, you should make sure that your nodes and network
are properly configured.
Timekeeping
We recommend using NTP or an equivalent service to ensure that time is in sync between your
puppet master and any puppet agent nodes. If time drifts out of sync in your PE infrastructure, you
may encounter issues such as nodes disappearing from live management in the console. A service
like NTP (available as a Puppet Labs supported module) will ensure accurate timekeeping.
Name Resolution
Decide on a preferred name or set of names agent nodes can use to contact the puppet master
server.
Ensure that the puppet master server can be reached via domain name lookup by all of the
future puppet agent nodes at the site.
You can also simplify configuration of agent nodes by using a CNAME record to make the puppet
master reachable at the hostname puppet. (This is the default puppet master hostname that is
automatically suggested when installing an agent node.)
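Before installing agents, a check along these lines can confirm that the chosen name actually resolves from a prospective agent node (a sketch; `check_resolves` is a hypothetical helper, and `puppet` is the default alias discussed above):

```shell
#!/bin/sh
# Report whether a name resolves on this node. getent consults /etc/hosts
# as well as DNS, matching how the agent will look the name up.
check_resolves() {
  if getent hosts "$1" > /dev/null 2>&1; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve"
  fi
}

check_resolves localhost   # sanity check: should always resolve
check_resolves puppet      # the default puppet master alias
```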
Firewall Configuration
Configure your firewalls to accommodate Puppet Enterprise's network traffic. In brief: you should
open up ports 8140, 8081, 61613, and 443. The more detailed version is:
If you are installing PE using the web-based installer, ensure port 3000 is open. You can close
this port when the installation is complete.
All agent nodes must be able to send requests to the puppet master on ports 8140 (for Puppet)
and 61613 (for orchestration).
The puppet master must be able to accept inbound traffic from agents on ports 8140 (for
Puppet) and 61613 (for orchestration).
Any hosts you will use to access the console must be able to reach the console server on port
443, or whichever port you specify during installation. (Users who cannot run the console on
port 443 will often run it on port 3000.)
If you will be invoking orchestration commands from machines other than the puppet master,

they will need to be able to reach the master on port 61613. (Note: enabling other machines to
invoke orchestration actions is possible but not supported in this version of Puppet Enterprise.)
If you will be running the console and puppet master on separate servers, the console server
must be able to accept traffic from the puppet master (and the master must be able to send
requests) on ports 443 and 8140. The console server must also be able to send requests to the
puppet master on port 8140, both for retrieving its own catalog and for viewing archived file
contents.
PuppetDB needs to accept connections on port 8081, and the puppet master and PE console
need to be able to send outbound traffic on 8081.
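Once the firewall rules are in place, reachability of each port can be spot-checked from the relevant hosts. The sketch below uses bash's built-in /dev/tcp redirection; `master.example.com` is a placeholder for your puppet master's name:

```shell
#!/bin/bash
# Succeed if a TCP connection to $1:$2 can be opened within 2 seconds.
check_port() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# Check the ports agents and the console need against the master.
for port in 8140 61613 443 8081; do
  if check_port master.example.com "$port"; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```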
Dependencies and OS Specific Details
This section details the packages that are installed from the various OS repos. Unless you lack
internet access, you shouldn't need to install these manually; they will be set up during PE
installation.
POSTGRESQL REQUIREMENT

If you will be using your own instance of PostgreSQL (as opposed to the instance PE can install) for
the console and PuppetDB, it must be version 9.1 or higher.
OPENSSL REQUIREMENT

OpenSSL is a required dependency for PE. For Solaris 10 and all versions of RHEL, Debian, Ubuntu,
Windows, and AIX nodes, OpenSSL is included with PE; for all other platforms it is installed directly
from the system repositories.
CentOS
All Nodes: pciutils, system-logos, which, libxml2, dmidecode, net-tools, virt-what
Master Nodes: apr, apr-util, curl, mailcap, libjpeg, libtool-ltdl, unixODBC
Console Nodes: apr, apr-util, curl, mailcap, libtool-ltdl, unixODBC
Console/Console DB Nodes: libjpeg, libxml2
Cloud Provisioner Nodes: libxslt

RHEL
All Nodes: pciutils, system-logos, which, libxml2, dmidecode, net-tools, cronie (RHEL 6), vixie-cron (RHEL 4, 5), virt-what
Master Nodes: apr, apr-util, apr-util-ldap (RHEL 6), curl, mailcap, libjpeg, libtool-ltdl (RHEL 7), unixODBC (RHEL 7)
Console Nodes: apr, apr-util, curl, mailcap, apr-util-ldap (RHEL 6), libtool-ltdl (RHEL 7), unixODBC (RHEL 7)
Console/Console DB Nodes: libjpeg, libxml2
Cloud Provisioner Nodes: libxslt

SLES
All Nodes: pciutils, pmtools, cron, libxml2, net-tools
Master Nodes: libapr1, libapr-util1, libxslt, curl, libjpeg, db43, unixODBC
Console Nodes: libapr1, libapr-util1, curl, libxslt, db43, unixODBC
Console/Console DB Nodes: libjpeg
Cloud Provisioner Nodes: libxml2

Debian
All Nodes: pciutils, dmidecode, cron, libxml2, hostname, libldap-2.4-2, libreadline5, virt-what
Master Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libjpeg62, libcurl3 (Debian 7), libxml2-dev (Debian 7)
Console Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libcurl3 (Debian 7), libxml2-dev (Debian 7)
Console/Console DB Nodes: libjpeg62, libxml2-dev (Debian 7), locales-all (Debian 7)
Cloud Provisioner Nodes: libxslt1.1, libxml2

Ubuntu
All Nodes: pciutils, dmidecode, cron, libxml2, hostname, libldap-2.4-2, libreadline5, virt-what
Master Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap, libjpeg62
Console Nodes: file, libmagic1, libpcre3, curl, perl, mime-support, libapr1, libcap2, libaprutil1, libaprutil1-dbd-sqlite3, libaprutil1-ldap
Console/Console DB Nodes: libjpeg62, libxml2
Cloud Provisioner Nodes: libxslt1.1

AIX
In order to run the puppet agent on AIX systems, you'll need to ensure the following are installed
before attempting to install the puppet agent:
bash
zlib
readline
All AIX toolbox packages are available from IBM.
To install the packages on your selected node directly, you can run rpm -Uvh with the following
URLs (note that the RPM package provider on AIX must be run as root):
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/bash/bash-3.2-1.aix5.2.ppc.rpm
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/zlib/zlib-1.2.3-4.aix5.2.ppc.rpm
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/readline/readline-6.1-1.aix6.1.ppc.rpm (AIX 6.1 and 7.1 only)
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/readline/readline-4.3-2.aix5.1.ppc.rpm (AIX 5.3 only)
Note: If you are behind a firewall or running an HTTP proxy, the above commands may not work.
Instead, use the link above to find the packages you need.
Note: GPG verification will not work on AIX; the RPM version used by AIX (even on 7.1) is too old. The
AIX package provider doesn't support package downgrades (installing an older package over a
newer package). Avoid using leading zeros when specifying a version number for the AIX provider
(i.e., use 2.3.4, not 02.03.04).
The PE AIX implementation supports the NIM, BFF, and RPM package providers. Check the Type
Reference for technical details on these providers.
Solaris
Solaris support is agent only.
For Solaris 10, the following packages are required:
SUNWgccruntime
SUNWzlib
In some instances, bash may not be present on Solaris systems. It needs to be installed before
running the PE installer. Install it via the media used to install the OS or via CSW if that is present
on your system. (CSWbash or SUNWbash are both suitable.)
For Solaris 11 the following packages are required:
system/readline
system/library/gcc-45-runtime
library/security/openssl
These packages are available in the Oracle Solaris release repository (enabled by default on Solaris
11). The PE installer will automatically install them; however, if the release repository is not enabled,
the packages will need to be installed manually.
Next Steps
To install Puppet Enterprise on *nix nodes, continue to Installing Puppet Enterprise.
To install Puppet Enterprise on Windows nodes, continue to Installing Windows Agents.


Installing Puppet Enterprise Overview


This section covers *nix operating systems. To install PE on Windows, see Installing
Windows Agents.

Installing Puppet Enterprise


Your PE installation will go more smoothly if you know a few things in advance. Puppet Enterprise's
functions are spread across several different components which get installed and configured when
you run the installer. You can choose to install multiple components on a single node (a
"monolithic" install) or spread the components across multiple nodes (a "split" install), but you
should note that the agent component gets installed on every node.
You should decide on your deployment needs before starting the install process. For each node
where you will be installing a PE component, you should know the fully qualified domain name
where that node can be reached, and you should ensure that firewall rules are set up to allow access
to the required ports.
With that knowledge in hand, the installation process will proceed in three stages:
1. You will choose an installation method.
2. You will install the main components of PE: the puppet master, PuppetDB, database support,
and the PE console. (Note that the Cloud Provisioner is installed by default when you run the
web-based installer. If you plan on performing an automated installation with an answer file, you
can disable the Cloud Provisioner installation.)
3. You will install the PE agent on all the nodes you wish to manage with PE. Refer to the agent
installation instructions.
Choose an Installation Method
Before you begin, choose an installation method. We've provided a few paths to choose from.
Perform a guided installation using the web-based interface. Think of this as an installation
interview in which we ask you exactly how you want to install PE. If you're able to provide a few
SSH credentials, this method will get you up and running fairly quickly. Choose from one of the
following installation types:
Monolithic installation (for up to 500 nodes)
Split installation (for 500-1500 nodes)
Use the web-based interface to create an answer file that you can then add as an argument to
the installer script to perform an installation (e.g., sudo ./puppet-enterprise-installer -a
~/my_answers.txt). Refer to Automated Installation with an Answer File, which provides an
overview on installing PE with an answer file.
Write your own answer file or use the answer file(s) provided in the PE installation tarball. Check
the Answer File Reference Overview to get started.
See the system requirements for any hardware-related specifications.

Note: Before getting started, we recommend you read about the Puppet Enterprise
components to familiarize yourself with the parts that make up a PE installation.

Downloading Puppet Enterprise


Start by downloading the tarball for the current version of Puppet Enterprise, along with the GPG
signature (.asc), from the Puppet Labs website.
Choosing an Installer Tarball
Puppet Enterprise is distributed in tarballs specific to your OS version and architecture.
AVAILABLE *NIX TARBALLS

| Filename ends with | Will install on |
| --- | --- |
| -debian-<version and arch>.tar.gz | Debian |
| -el-<version and arch>.tar.gz | RHEL, CentOS, Scientific Linux, or Oracle Linux |
| -solaris-<version and arch>.tar.gz | Solaris |
| -ubuntu-<version and arch>.tar.gz | Ubuntu LTS |
| -aix-<version and arch>.tar.gz | AIX |
| -sles-<version and arch>.tar.gz | SLES |
Note: Bindings for SELinux are available on RHEL 5 and 6. They are not installed by default but are
included in the installation tarball.
Verifying the Installer
To verify the PE installer, you can import the Puppet Labs public key and run a cryptographic
verification of the tarball you downloaded. The Puppet Labs public key is certified by Puppet and is
available from public keyservers, such as pgp.mit.edu. You'll need to have GnuPG installed and the
GPG signature (.asc file) that you downloaded with the PE tarball.
To import the Puppet Labs public key, run:
$ gpg --keyserver=pgp.mit.edu --recv-key 4BD6EC30


The result should be similar to


gpg: requesting key 4BD6EC30 from hkp server pgp.mit.edu
gpg: key 4BD6EC30: public key "Puppet Labs Release Key" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)

Next, verify the release signature on the tarball by running:


$ gpg --verify puppet-enterprise-<version>-<platform>.tar.gz.asc

The result should be similar to


gpg: Signature made Tue 18 Jun 2013 10:05:25 AM PDT using RSA key ID 4BD6EC30
gpg: Good signature from "Puppet Labs Release Key (Puppet Labs Release Key)"

Note: When you verify the signature but do not have a trusted path to one of the signatures on the
release key, you will see a warning similar to
Could not find a valid trust path to the key.
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the
owner.

This warning is generated because you have not created a trust path to certify who signed the
release key; it can be ignored.

About Puppet Enterprise Components


Before beginning installation, you should familiarize yourself with the following PE components.
The Puppet Agent
The puppet agent is most easily installed using a package manager (see installing agents). On
platforms (Windows) that do not support remote package repos, you can use the installer script.
This component should be installed on every node in your deployment. When you install the puppet
master, PuppetDB, or console components, the puppet agent component will be installed
automatically on the machines assigned to those components.
Nodes with the puppet agent component can:
run the puppet agent daemon, which receives and applies configurations from the puppet
master.

listen for orchestration messages and invoke orchestration actions.


send data to the master for use by PuppetDB.
The Puppet Master
In most deployments, you should install this component on one node (installing multiple puppet
masters requires additional configuration that is beyond the scope of this guide). The puppet
master must be a robust, dedicated server; see the system requirements for details.
The puppet master server can:
compile and serve configuration catalogs to puppet agent nodes.
route orchestration messages through its ActiveMQ server.
issue valid orchestration commands (from an administrator logged in as the peadmin user).

Note: By default, the puppet master will check for the availability of updates whenever the
pe-httpd service restarts. In order to retrieve the correct update information, the master will
pass some basic, anonymous information to Puppet Labs servers. This behavior can be
disabled. You can find the details on what is collected and how to disable upgrade checking
in the answer file reference. If an update is available, a message will alert you.

PuppetDB and Database Support

The PuppetDB component uses an instance of PostgreSQL that is either installed by PE or manually
configured by you. In a monolithic installation, PuppetDB is installed on the same node as the
console and puppet master components. In a split install, PuppetDB is installed on its own server.
During installation, you will be asked if you want this PostgreSQL instance to be installed by PE or if
you want to use one you've already configured.
Database support for the console (the console and console_auth databases) runs on the same
instance of PostgreSQL as PuppetDB.
PuppetDB is the fast, scalable, and reliable data warehouse for PE. It caches data generated by PE,
and gives you advanced features at awesome speed with a powerful API.
PuppetDB stores:
the most recent facts from every node.
the most recent catalog for every node.
fourteen days (configurable) of event reports for every node (an optional, configurable setting).
If you want to set up a PuppetDB database manually, the PuppetDB configuration documentation
has more information.
The Console

For a split installation, you install the console on its own dedicated server, but if you have a
monolithic installation, you install it on the same server as all of the other PE components.
The console server can:
serve the console web interface, which enables administrators to directly edit resources on
nodes, trigger immediate Puppet runs, group and assign classes to nodes, view reports and
graphs, view inventory information, and invoke orchestration actions.
collect reports from and serve node information to the puppet master.
The Console Databases
As indicated in the Database Support section above, the console and console_auth databases rely
on data provided by a PostgreSQL database. You will either have PE install this database or
configure one manually on your own. You only need to create the database instances; the console
will populate them.

IMPORTANT: If you are using an existing PostgreSQL instance, you will need the host name
and port of the node you intend to use to provide database support, and you will also need
the user passwords for accessing the databases.
When performing split installations using the automated installation method, install the
database support component before you install the console, so that you have access to the
database users' passwords during installation of the console.

The Cloud Provisioner

This component is automatically installed when you install PE using the web-based installation
method. You can opt out of the cloud provisioning tools by performing an automated installation
with an answers file. If you wish to use cloud provisioning, you should install PE on a system where
administrators have shell access. Since it requires confidential information about your cloud
accounts to function, it should be installed on a secure system.
Administrators can use the cloud provisioner tools to:
create new VMware and Amazon EC2 virtual machine instances.
install Puppet Enterprise on any virtual or physical system.
add newly provisioned nodes to a group in the console.

Notes, Warnings, and Tips


Verifying Your License
When you purchased Puppet Enterprise, you should have been sent a license.key file that lists
how many nodes you can deploy. For PE to run without logging license warnings, you should copy
this file to the puppet master node as /etc/puppetlabs/license.key. If you don't have your
license key file, please email sales@puppetlabs.com and we'll re-send it.
Note that you can download and install Puppet Enterprise on up to ten nodes at no charge. No
license key is needed to run PE on up to ten nodes.
Setting Puppet in Your Default Path
PE installs its binaries in /opt/puppet/bin and /opt/puppet/sbin, which aren't included in your
default $PATH. To include these binaries in your default $PATH, manually add them to your profile
or run PATH=/opt/puppet/bin:$PATH;export PATH.
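For a Bourne-style shell, that amounts to the following (add the same lines to your shell's startup file, e.g. ~/.profile, to make the change permanent):

```shell
# Prepend PE's binary directories to the search path for this session.
PATH=/opt/puppet/bin:/opt/puppet/sbin:$PATH
export PATH

echo "$PATH"   # the PE directories now come first
```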

Installing Agents
Agent installation instructions can be found at Installing PE Agents.

Installing Puppet Enterprise: Monolithic


The following instructions are for installing a monolithic installation of PE. When you perform a
monolithic installation of PE, the master, console, and PuppetDB components are all installed on the
same machine. This type of installation is recommended for deployments up to 500 nodes.
See the installation overview for instructions on downloading Puppet Enterprise.

Note: The answer file generated by the procedure on this page can be used to perform an
automated installation. You can find the installer answer file at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.

General Prerequisites and Notes


Make sure that DNS is properly configured on the machines you're installing PE on. All
nodes must know their own hostnames. This can be done by properly configuring reverse
DNS on your local DNS server, or by setting the hostname explicitly. Setting the hostname
usually involves the hostname command and one or more configuration files, while the
exact method varies by platform. In addition, all nodes must be able to reach each other
by name. This can be done with a local DNS server, or by editing the /etc/hosts file on
each node to point to the proper IP addresses.
You can run the installer from a machine that is part of your PE deployment or from a
machine that is outside your deployment. If you want to run the installer from a machine
that is part of your deployment, we recommend you run it from the same node assigned
the console component (in a split install).

The machine you run the installer from must have the same OS/architecture as your PE
deployment.
Please ensure that port 3000 is reachable, as the web-based installer uses this port. You
can close this port when the installation is complete.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain, or else
comment out, those lines.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
A Note about Passwords: In some cases, during the installation process, you'll be asked to
supply passwords. The ' (single quote) is forbidden in all passwords.
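The first prerequisite above, that every node knows its own hostname, can be spot-checked on each machine before you start (a sketch; how you set a missing name varies by platform):

```shell
#!/bin/sh
# Print this node's short and fully qualified names. A failing or empty
# "hostname -f" usually means DNS or /etc/hosts needs attention first.
hostname
hostname -f 2>/dev/null || echo "warning: no fully qualified name configured"
```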

SSH Prerequisites and Notes


If you have a properly configured SSH agent with agent forwarding enabled, you don't need
to perform any additional SSH configurations. Your SSH agent will be used by the installer.
If you're using SSH keys to authenticate across the nodes of your PE installation, the public
key for the user account performing the installation must be included in the
authorized_keys file for that user account on each node that you're installing a PE
component on, including the machine from which you're running the installer. This applies
to root or non-root users.
The web-based installer will prompt for the user account name, the SSH private key location,
and the SSH passphrase for each node on which you're installing a PE component.
Please review the following authentication options:
Are you installing using root with a password? The installer will ask you to provide the
username and password for each node on which you're installing a PE component.
Prerequisite: Remote root SSH login must be enabled on each node, including the node
from which you're running the installer.
Are you installing using a non-root user with a password? The installer will ask you to
provide the username and password for each node on which you're installing a PE
component.
Prerequisite: Sudo must be enabled for the non-root user on which you're installing a
PE component.
Are you installing using root with an SSH key? The installer will ask you to provide the
username, private key path, and key passphrase (as needed) for each node on which
you're installing a PE component.
Prerequisite: Remote root SSH login must be enabled on each node, including the node
from which you're running the installer. And the public root SSH key must be added to
authorized_keys on each node on which you're installing a PE component.
Are you installing using a non-root user with an SSH key? The installer will ask you to
provide the username, private key path, and key passphrase (as needed) for each node on
which you're installing a PE component.
Prerequisite: The non-root user SSH key must be added to authorized_keys on each
node on which you're installing a PE component. And the non-root user must be
granted sudo access on each box.

Monolithic Installation: Part 1


1. Download and verify the appropriate PE tarball.
2. Unpack the tarball. (Run tar -xf <tarball>.)
3. From the PE installer directory, run sudo ./puppet-enterprise-installer.
4. When prompted, choose Yes to install the setup packages. (If you choose No, the installer will
exit.)
At this point, the PE installer will start a web server and provide a web address:
https://<install platform hostname>:3000. Please ensure that port 3000 is reachable. If
necessary, you can close port 3000 when the installation is complete. Also be sure to use https.
5. Copy the address into your browser and continue on to Monolithic Installation: Part 2.

Warning: Leave your terminal connection open until the installation is complete; otherwise,
the installation will fail.

Monolithic Installation: Part 2


1. When prompted, accept the security request in your browser.
The web-based installation uses a default SSL certificate; you'll have to add a security exception
in order to access the web-based installer. This is safe to do.
You'll be taken to the installer start page.
2. On the start page, click Let's get started.
3. Next, you'll be asked to choose your deployment type. Select Monolithic.
4. Provide the following information about the puppet master server:
a. Puppet master FQDN: provide the fully qualified domain name of the server you're installing PE
on. It will be the name of the puppet master certificate. This FQDN must be resolvable from the
machine on which you're running the installer.

b. DNS aliases: provide a comma-separated list of static, valid DNS names (default is puppet),
so agents can trust the master if they contact it. You should make sure that this static list
contains the DNS name or alias you'll be configuring your agents to contact.
c. SSH username: provide the username to use when connecting to the puppet master. This field
defaults to root.
d. SSH password: (optional) provide the sudo password for the SSH username provided.
e. SSH key file path: (optional) provide the absolute path to the SSH key on the machine you are
performing the installation from.
f. SSH key passphrase: (optional) provide if your SSH key is protected with a passphrase.
5. Provide the following information about database support (PuppetDB, the console, and the
console_auth databases):
a. Install PostgreSQL for me: (default) PE will install a PostgreSQL instance for the databases. This
will use PE-generated default names and usernames for the databases. The passwords can be
retrieved from /etc/puppetlabs/installer/database_info.install when the installation is
complete.
b. Use an Existing PostgreSQL instance: if you already have a PostgreSQL instance you'd like to
use, you'll need to provide the following information:
the PostgreSQL server DNS name
the port number used by the PostgreSQL server (default is 5432)
the PuppetDB database username (default is pe-puppetdb)
the PuppetDB database password
the console database name (default is pe-console)
the console database user name (default is pe-console)
the console database password
the console authentication database name (default is console_auth)
the console authentication database user name (default is console_auth)
the console authentication database password

Note: You will also need to make sure the databases and users you've entered actually
exist. The SQL commands you need will resemble the following:

CREATE TABLESPACE "pe-console" LOCATION '/opt/puppet/var/lib/pgsql/9.2/console';
CREATE USER "console" PASSWORD 'password';
CREATE DATABASE "console" OWNER "console" TABLESPACE "pe-console"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;
CREATE USER "console_auth" PASSWORD 'password';
CREATE DATABASE "console_auth" OWNER "console_auth" TABLESPACE "pe-console"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;
CREATE TABLESPACE "pe-puppetdb" LOCATION '/opt/puppet/var/lib/pgsql/9.2/puppetdb';
CREATE USER "pe-puppetdb" PASSWORD 'password';
CREATE DATABASE "pe-puppetdb" OWNER "pe-puppetdb" TABLESPACE "pe-puppetdb"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;

Consult the PostgreSQL documentation for more info.


6. Provide the following information about the PE console administrator user:
a. Console superuser email address: provide the address you'll use to log in to the console as the
administrator.
b. Console superuser password: create a password for the console login; the password must be
at least eight characters.
7. Provide the following information about the PE console mail server:
SMTP hostname: the console requires access to an SMTP server in order to email account
information to users. If necessary, this can be changed after installation.
To add more information about the SMTP host, select Advanced SMTP options. Here you can
configure advanced SMTP options for setting the port, username, password, and whether or not
to use TLS.
8. Click Submit.
9. On the confirm plan page, review the information you provided, and, if it looks correct, click
Continue.
If you need to make any changes, click Go Back and make whatever changes are required.
10. On the validation page, the installer will verify various configuration elements (e.g., whether SSH
credentials are correct, whether there is enough disk space, and whether the OS is the same for the
various components). If there aren't any outstanding issues, click Deploy now.
At this point, PE will begin installing your deployment, and you can monitor the installation as it
runs by toggling Log View and Summary View (top-right corner of the page). If you notice any errors
during the installation, check /var/log/pe-installer/installer.log on the machine from which
you are running the installer.
You can find the installer answer file at /opt/puppet/share/installer/answers on the machine
from which you're running the installer, but note that these answers are overwritten each time you
run the installer.

When the installation is complete, the installer script that was running in the terminal will close
itself.
Finally, click Start using Puppet Enterprise to log into the console or continue on to Installing
Agents.
Next: Installing PE Agents

Installing Puppet Enterprise: Split


The following instructions are for installing a split installation of PE. When you perform a split
installation of PE, the master, console, and PuppetDB components are all installed on separate
machines. This type of installation is recommended for deployments of 500-1500 nodes.
See the installation overview for instructions on downloading Puppet Enterprise.

Note: The answer file generated by the procedure on this page can be used to perform an
automated installation. You can find the installer answer file at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.

General Prerequisites and Notes


Make sure that DNS is properly configured on the machines you're installing PE on. All
nodes must know their own hostnames. This can be done by properly configuring reverse
DNS on your local DNS server or by setting the hostname explicitly. Setting the hostname
usually involves the hostname command and one or more configuration files, but the
exact method varies by platform. In addition, all nodes must be able to reach each other
by name. This can be done with a local DNS server or by editing the /etc/hosts file on
each node to point to the proper IP addresses.
You can run the installer from a machine that is part of your PE deployment or from a
machine that is outside your deployment. If you want to run the installer from a machine
that is part of your deployment, we recommend you run it from the same node assigned
the console component (in a split install).
The machine you run the installer from must have the same OS/architecture as your PE
deployment.
Please ensure that port 3000 is reachable, as the web-based installer uses this port. You
can close this port when the installation is complete.
The web-based installer does not support sudo configurations with Defaults targetpw
or Defaults rootpw. Make sure your /etc/sudoers file does not contain those lines, or
else comment them out.
For Debian Users: If you gave the root account a password during the installation of
Debian, sudo may not have been installed. In this case, you will need to either install PE as
root, or install sudo on any node(s) on which you want to install PE.
A Note about Passwords: In some cases during the installation process, you'll be asked to
supply passwords. The ' (single quote) character is forbidden in all passwords.
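
To illustrate the DNS requirement in the first bullet above, here is a minimal sketch of the /etc/hosts approach. All IP addresses and hostnames are placeholders for your own deployment; the commands below write to a scratch file, whereas on a real node you would append the same entries to /etc/hosts itself as root.

```shell
# Sketch: hosts entries that let a three-node split install resolve by name.
# (Addresses and names are examples only.)
HOSTS_SCRATCH="$(mktemp)"
cat >> "$HOSTS_SCRATCH" <<'EOF'
192.0.2.10 master.example.com master
192.0.2.11 puppetdb.example.com puppetdb
192.0.2.12 console.example.com console
EOF

# On each real node you would also confirm that the node knows its own name:
#   hostname       -> short name
#   hostname -f    -> fully qualified name

# Each component node appears exactly once:
grep -c 'example.com' "$HOSTS_SCRATCH"
rm -f "$HOSTS_SCRATCH"
```

The same three names must resolve identically from every node in the deployment, whether via DNS or via identical /etc/hosts entries.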

SSH Prerequisites and Notes


If you have a properly configured SSH agent with agent forwarding enabled, you don't need
to perform any additional SSH configuration. Your SSH agent will be used by the installer.
If you're using SSH keys to authenticate across the nodes of your PE installation, the public
key for the user account performing the installation must be included in the
authorized_keys file for that user account on each node that you're installing a PE
component on, including the machine from which you're running the installer. This applies
to both root and non-root users.
The web-based installer will prompt for the user account name, the SSH private key location,
and the SSH passphrase for each node on which you're installing a PE component.
Please review the following authentication options:
Are you installing using root with a password? The installer will ask you to provide the
username and password for each node on which you're installing a PE component.
Prerequisite: Remote root SSH login must be enabled on each node, including the node
from which you're running the installer.
Are you installing using a non-root user with a password? The installer will ask you to
provide the username and password for each node on which you're installing a PE
component.
Prerequisite: Sudo must be enabled for the non-root user on each node on which you're
installing a PE component.
Are you installing using root with an SSH key? The installer will ask you to provide the
username, private key path, and key passphrase (as needed) for each node on which
you're installing a PE component.
Prerequisite: Remote root SSH login must be enabled on each node, including the node
from which you're running the installer, and the public root SSH key must be added to
authorized_keys on each node on which you're installing a PE component.
Are you installing using a non-root user with an SSH key? The installer will ask you to
provide the username, private key path, and key passphrase (as needed) for each node on
which you're installing a PE component.
Prerequisite: The non-root user's SSH key must be added to authorized_keys on each
node on which you're installing a PE component, and the non-root user must be
granted sudo access on each machine.
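
As a sketch of the key-based options, the following generates a deployment key pair and demonstrates the authorized_keys mechanics locally. The key path and file names are illustrative; on a real deployment you would copy the public key into ~/.ssh/authorized_keys for the install user on every node (ssh-copy-id is one convenient way to do that).

```shell
# Generate an illustrative deployment key in a scratch directory.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -f "$KEYDIR/pe_deploy" -N ''

# The public key must be present in the install user's authorized_keys
# on each node, including the machine running the installer:
cat "$KEYDIR/pe_deploy.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"   # sshd rejects loosely permissioned files

grep -c 'ssh-rsa' "$KEYDIR/authorized_keys"   # one entry per authorized key
rm -rf "$KEYDIR"
```

On the real nodes, the equivalent distribution step would look like `ssh-copy-id -i <keyfile>.pub <user>@<node>` for each node in the deployment.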

Split Installation: Part 1


1. Download and verify the appropriate PE tarball.
2. Unpack the tarball. (Run tar -xf <tarball>.)
3. From the PE installer directory, run sudo ./puppet-enterprise-installer.
4. When prompted, choose Yes to install the setup packages. (If you choose No, the installer will
exit.)
At this point, the PE installer will start a web server and provide a web address:
https://<install platform hostname>:3000. Please ensure that port 3000 is reachable. If
necessary, you can close port 3000 when the installation is complete. Also be sure to use https.
5. Copy the address into your browser and continue on to Split Install: Part 2.

Warning: Leave your terminal connection open until the installation is complete; otherwise,
the installation will fail.

Split Installation: Part 2


1. When prompted, accept the security request in your browser.
The web-based installation uses a default SSL certificate; you'll have to add a security exception
in order to access the web-based installer. This is safe to do.
You'll be taken to the installer start page.
2. On the start page, click Let's get started.
3. Next, you'll be asked to choose your deployment type. Select Split.
4. Provide the following information about the puppet master server:
a. Puppet master FQDN: provide the fully qualified domain name of the server you're installing
the puppet master on. This will be the name of the puppet master certificate.
b. DNS aliases: provide a comma-separated list of static, valid DNS names (default is puppet),
so agents can trust the master when they contact it. Make sure that this static list
contains the DNS name or alias you'll be configuring your agents to contact.
c. SSH username: provide the username to use when connecting to the puppet master. This field
defaults to root.
Puppet Enterprise 3.3 User's Guide Installing Puppet Enterprise: Split

74/404

d. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
e. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
f. SSH key passphrase: (optional) provide this if your SSH key is protected with a passphrase.
5. Provide the following information about the PuppetDB server:
a. PuppetDB hostname: provide the fully qualified domain name of the server you're installing
PuppetDB on.
b. SSH username: provide the username to use when connecting to the PuppetDB node. This user must
either be root or have sudo access.
c. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
d. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
e. SSH key passphrase: (optional) provide this if your SSH key is protected with a passphrase.
6. Provide the following information about the console server:
a. Console hostname: provide the fully qualified domain name of the server you're installing the
PE console on.
b. SSH username: provide the username to use when connecting to the console node. This user must
either be root or have sudo access.
c. SSH password: (optional) if necessary, provide the sudo password for the SSH username
provided.
d. SSH key file path: (optional) provide the absolute path to the SSH key on the machine from
which you are performing the installation.
e. SSH key passphrase: (optional) provide this if your SSH key is protected with a passphrase.
7. Provide the following information about database support (the PuppetDB, console, and
console_auth databases):
a. Install PostgreSQL for me: (default) PE will install a PostgreSQL instance for the databases on
the same node as PuppetDB. This will use PE-generated default names and usernames for the
databases. The passwords can be retrieved from
/etc/puppetlabs/installer/database_info.install when the installation is complete.
b. Use an Existing PostgreSQL instance: if you already have a PostgreSQL instance you'd like to
use, you'll need to provide the following information:
the PostgreSQL server DNS name
the port number used by the PostgreSQL server (default is 5432)
the PuppetDB database username (default is pe-puppetdb)
the PuppetDB database password
the console database name (default is pe-console)
the console database username (default is pe-console)
the console database password
the console authentication database name (default is console_auth)
the console authentication database username (default is console_auth)
the console authentication database password

Note: You will also need to make sure the databases and users you've entered actually
exist. The SQL commands you need will resemble the following:
CREATE TABLESPACE "pe-console" LOCATION '/opt/puppet/var/lib/pgsql/9.2/console';
CREATE USER "console" PASSWORD 'password';
CREATE DATABASE "console" OWNER "console" TABLESPACE "pe-console"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;
CREATE USER "console_auth" PASSWORD 'password';
CREATE DATABASE "console_auth" OWNER "console_auth" TABLESPACE "pe-console"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;
CREATE TABLESPACE "pe-puppetdb" LOCATION '/opt/puppet/var/lib/pgsql/9.2/puppetdb';
CREATE USER "pe-puppetdb" PASSWORD 'password';
CREATE DATABASE "pe-puppetdb" OWNER "pe-puppetdb" TABLESPACE "pe-puppetdb"
  ENCODING 'utf8' LC_CTYPE 'en_US.utf8' LC_COLLATE 'en_US.utf8' template template0;

Consult the PostgreSQL documentation for more info.


8. Provide the following information about the PE console administrator user:
a. Console superuser email address: provide the address you'll use to log in to the console as the
administrator.
b. Console superuser password: create a password for the console login; the password must be
at least eight characters long.
9. Provide the following information about the PE console mail server:
SMTP hostname: the console requires access to an SMTP server in order to email account
information to users. If necessary, this can be configured after installation.
To add more information about the SMTP host, select Advanced SMTP options. Here you can add
the SMTP port, username, and password.
10. Click Submit.
11. On the confirm plan page, review the information you provided, and, if it looks correct, click
Continue.
If you need to make any changes, click Go Back and make whatever changes are required.
12. On the validation page, the installer will verify various configuration elements (e.g., whether SSH
credentials are correct, whether there is enough disk space, and whether the OS is the same across the various
components). If there aren't any outstanding issues, click Deploy now.
At this point, PE will begin installing your deployment, and you can monitor the installation as it
runs by toggling Log View and Summary View (top-right corner of page). If you notice any errors
during the installation, check /var/log/pe-installer/installer.log on the machine from which
you are running the installer.
You can find the installer answer file at /opt/puppet/share/installer/answers on the machine
from which you're running the installer, but note that these answers are overwritten each time you
run the installer.
When the installation is complete, the installer script that was running in the terminal will close
itself.
Finally, click Start using Puppet Enterprise to log into the console or continue on to Installing
Agents.
Next: Installing PE Agents

Installing Puppet Enterprise Agents


Installing Agents
If you have a supported OS that is capable of using remote package repos, the simplest way to
install the PE agent is with standard *nix package management tools.

About Windows Agent Installation


The Windows agent cannot be installed using the package management instructions outlined below.
To install the agent on a node running the Windows OS, refer to the installing Windows
agent instructions.
Installing Agents Using PE Package Management


If your infrastructure does not currently host a package repository, PE hosts a package repo on the
master that corresponds to the OS and architecture of the master node. The repo is created during
installation of the master. The repo serves packages over HTTPS using the same port as the puppet
master (8140). This means agents won't require any new ports to be open other than the one they
already need to communicate with the master.
You can also add repos for any PE-supported OS and architecture by creating a new repository for
that platform. This is done by adding a new class, pe_repo::platform::<platform>, to your master
for each platform on which you'll be running an agent. Classify the master with the desired
platform, and on the next puppet run the new repo will be created and populated with the
appropriate agent packages for that platform.
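
As a sketch, the same classification expressed in a manifest would look like the following (the node name and platform class are examples; in PE this classification is normally done through the console):

```puppet
# Hypothetical example: declare an agent-package repo for Debian 6 amd64 agents.
node 'master.example.com' {
  include pe_repo::platform::debian_6_amd64
}
```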
Once you have added the packages to the PE repo, you can use the agent installation script, hosted
on the master, to install agent packages on your selected nodes. The script can be found at
https://<master hostname>:8140/packages/current/install.bash.
When you run the installation script on your agent (for example, with curl -k https://<master
hostname>:8140/packages/current/install.bash | sudo bash), the script will detect the OS on
which it is running, set up an apt (or yum, or zypper) repo that refers back to the master, pull down
and install the pe-agent packages, and create a simple puppet.conf file. The certname for an
agent node installed this way will be the value of facter fqdn.
Note that if install.bash can't find agent packages corresponding to the agent's platform, it will fail
with an error message telling you which pe_repo class you need to add to the master.
After you've installed the agent on the target node, you can configure it using puppet config set.
See Configuring Agents below.
USING THE PE AGENT PACKAGE INSTALLATION SCRIPT

As an example, if your master is on a node running EL6 and you want to add an agent node
running Debian 6 on AMD64 hardware:
1. Use the console to add the pe_repo::platform::debian_6_amd64 class.
If needed, refer to instructions on classing the master.
2. To create a new repo containing the agent packages, use live management to kick off a puppet
run.
The new repo is created in /opt/puppet/packages/public. It will be called puppet-enterprise-3.3.0-debian-6-amd64-agent.
3. SSH into the node where you want to install the agent, and run curl -k https://<master
hostname>:8140/packages/current/install.bash | sudo bash.


The script will install the PE agent packages, create a basic puppet.conf, and kick off a puppet run.

Note: The -k flag is needed in order to get curl to trust the master, which it wouldn't
otherwise do, since Puppet and its SSL infrastructure have not yet been set up on the node.
In some cases, you may be using wget instead of curl. Use the appropriate flags as
needed.

PLATFORM-SPECIFIC INSTALL SCRIPT

The install.bash script actually uses a secondary script to retrieve and install an agent
package repo once it has detected the platform on which it is running. You can use this
secondary script if you want to manually specify the platform of the agent packages. You can
also use this script as an example or as the basis for your own custom scripts. The script can
be found at https://<master hostname>:8140/packages/current/<platform>.bash,
where <platform> uses the form el-6-x86_64. Platform names are the same as those used
for the PE tarballs:
el-{5, 6}-{i386, x86_64}
debian-{6, 7}-{i386, amd64}
ubuntu-{10.04, 12.04}-{i386, amd64}
sles-11-{i386, x86_64}

Installing Agents Using Your Package Management Tools


If you are currently using native package management, you will need to perform the following
steps:
1. Add the agent packages to the appropriate repo.
2. Configure your package manager (yum, apt) to point to that repo.
3. Install the packages as you would any other packages.
Agent packages can be found on the puppet master, in /opt/puppet/packages/public. This
directory contains agent packages that correspond to the puppet master's OS/architecture. For
example, if your puppet master is running on Debian 7, in /opt/puppet/packages/public you will
find the directory puppet-enterprise-3.3.0-debian-7-amd64-agent/debian-7-amd64, which
contains a directory with all the packages needed to install an agent. You will also find a JSON file
that lists the versions of those packages. (All agent package repos follow the naming convention
<installed PE version & OS platform>-agent/agent_packages.)


If your nodes are running an OS and/or architecture that is different from the master's, download the
appropriate agent tarball, extract the agent packages into the appropriate repo, and then install the
agents on your nodes just as you would any other package (e.g., yum install pe-agent).
Alternatively, if your master node has internet access, you can follow the instructions below
and use the console to classify the master with one of the built-in
pe_repo::platform::<platform> classes. Once the master is classified and a puppet run has
occurred, the appropriate agent packages will be generated and stored in
/opt/puppet/packages/public/<platform version>. If your master does not have internet access,
you will need to download the agents manually.
After you've installed the agent on the target node, you can configure it using puppet config set.
See Configuring Agents below.
Configuring Agents
After you follow the installation steps above, your agent should be ready for management with
Puppet Enterprise once you sign its certificate. However, if you need to perform additional
configuration (e.g., for a Mac OS X agent installed from the command line), you can configure the agent
(point it at the correct master, assign a certname, etc.) by editing /etc/puppetlabs/puppet/puppet.conf
directly or by using the puppet config set subcommand, which edits puppet.conf
automatically.
For example, to point the agent at a master called master.example.com, run puppet config set
server master.example.com. This will add the setting server = master.example.com to
the [main] section of puppet.conf. To set the certname for the agent, run puppet config set
certname agent.example.com. For more details, see the documentation for puppet config set.
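
For reference, after running those two commands the relevant puppet.conf entries would look something like this (the hostnames are the examples used above):

```ini
[main]
    server = master.example.com
    certname = agent.example.com
```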

Warning for Mac OS X users: When performing a command line install of an agent on an OS X
system, you must run puppet config set server and puppet config set certname for
the agent to function correctly.

Signing Agent Certificates


Before nodes with the puppet agent component can fetch configurations or appear in the console,
an administrator needs to sign their certificate requests. This helps prevent unauthorized nodes
from intercepting sensitive configuration data.
After the first puppet run, which the installer should trigger at the end of installation (or which can be
triggered manually with puppet agent -t), the agent will automatically submit a certificate request
to the puppet master. Before the agent can retrieve any configurations, a user will have to approve
this certificate.
Node requests can be approved or rejected using the console's certificate management capability.
Pending node requests are indicated in the main navigation bar. Click this indicator to go to a
page where you can see current requests, and then approve or reject them as needed.

Alternatively, you can use the command line interface (CLI), but note that certificate signing with the
CLI is done on the puppet master node. To view the list of pending certificate requests, run:
$ sudo puppet cert list

To sign one of the pending requests, run:


$ sudo puppet cert sign <name>

After signing a new node's certificate, it may take up to 30 minutes before that node appears in the
console and begins retrieving configurations. You can use live management or the CLI to trigger a
puppet run manually on the node if you want to see it right away.
If you need to remove certificates (e.g., during reinstallation of a node), you can use the puppet
cert clean <node name> command.

Important Notes and Warnings


Installing Without Internet Connectivity
By default, the master node hosts a repo that contains packages used for agent installation. When
you download the tarball for the master, the master also downloads the agent tarball for the same
platform and unpacks it in this repo.
When installing agents on a platform that is different from the master's platform, the install script
attempts to connect to the internet to download the appropriate agent tarball when you classify the
puppet master. If you will not have internet access at the time of installation, you need to download
the appropriate agent tarball in advance and use the option below that corresponds to your
particular deployment.
Option 1
If you would like to use the PE-provided repo, you can copy the agent tarball into the
/opt/staging/pe_repo directory on your master.
If you upgrade your server, you will need to perform this task again for the new version.
Option 2
If you already have a package management/distribution system, you can use it to install agents
by adding the agent packages to your repo. In this case, you can disable the PE-hosted repo
feature altogether by removing the pe_repo class from your master, along with any class that
starts with pe_repo::.
If you upgrade your server, you will need to perform this task again for the new version.
Option 3
If your deployment has multiple masters and you don't wish to copy the agent tarball to each
one, you can specify a path to the agent tarball. This can be done with an answer file, by setting
q_tarball_server to an accessible server containing the tarball, or by using the console to set
the base_path parameter of the pe_repo class to an accessible server containing the tarball.
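
As a sketch, the answer-file approach might contain a line like the following; the parameter name comes from the text above, but the server name is a placeholder for whatever host serves your tarballs:

```
q_tarball_server=fileserver.example.com
```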

Next: Upgrading

Installing Windows Agents


This chapter refers to Windows functionality. To install PE on *nix nodes, see Installing
Puppet Enterprise.
For supported versions of Windows, see the System Requirements page.
Windows nodes in Puppet Enterprise:
Can fetch configurations from a puppet master and apply manifests locally
Can respond to live management or orchestration commands


Cannot serve as a puppet master, console, or database support server
See the main Puppet on Windows documentation for details on running Puppet on Windows and
writing manifests for Windows.
In particular, note that puppet must be run with elevated privileges (a.k.a., Run as administrator),
as explained in this section on Windows Security Context.

Installing Puppet
To install Puppet Enterprise on a Windows node, simply download and run the installer, which is a
standard Windows .msi package that runs as a graphical wizard. Alternatively, you can run the
installer unattended; see Automated Installation below.
The installer must be run with elevated privileges. Installing Puppet does not require a system
reboot.
The only information you need to specify during installation is the hostname of your puppet master
server:

After Installation
Once the installer finishes:
Puppet agent will be running as a Windows service and will fetch and apply configurations every
30 minutes (by default). You can now assign classes to the node as normal; see Puppet:
Assigning Configurations to Nodes for more details. After the first puppet run, the MCollective
service will also be running, and the node can now be controlled with live management and
orchestration. The puppet agent service and the MCollective service can be started and stopped
independently using either the service control manager GUI or the command line sc.exe utility;
see Running Puppet on Windows for more details.
The Start Menu will contain a Puppet folder, with shortcuts for running puppet agent manually,
running Facter, and opening a command prompt for use with the Puppet tools. See Running
Puppet on Windows for more details.

Puppet is automatically added to the machine's PATH environment variable. This means you can
open any command line and call puppet, facter, and the other batch files in the bin
directory of the Puppet installation. This also adds the necessary items for the Puppet
environment to the shell, but only for the duration of each particular command's execution.

Automated Installation
For automated deployments, Puppet can be installed unattended on the command line as follows:
msiexec /qn /i puppet.msi

You can also specify /l*v install.txt to log the progress of the installation to a file.
The following public MSI properties can also be specified:
MSI Property                     Puppet Setting    Default Value
INSTALLDIR                       n/a               Version-dependent; see below
PUPPET_MASTER_SERVER             server            puppet
PUPPET_CA_SERVER                 ca_server         Value of PUPPET_MASTER_SERVER
PUPPET_AGENT_CERTNAME            certname          Value of facter fqdn (must be lowercase)
PUPPET_AGENT_ENVIRONMENT         environment       production
PUPPET_AGENT_STARTUP_MODE        n/a               Automatic; see startup mode
PUPPET_AGENT_ACCOUNT_USER        n/a               LocalSystem; see agent account
PUPPET_AGENT_ACCOUNT_PASSWORD    n/a               No value; see agent account
PUPPET_AGENT_ACCOUNT_DOMAIN      n/a               .; see agent account

For example:
msiexec /qn /i puppet.msi PUPPET_MASTER_SERVER=puppet.acme.com

Note: If a value for the environment variable already exists in puppet.conf, specifying it during
installation will NOT override that value.
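
Putting the pieces together, a fully unattended install that logs its progress and sets several of the properties above might look like this (the hostnames and file names are placeholders):

```
msiexec /qn /l*v install.txt /i puppet.msi PUPPET_MASTER_SERVER=puppet.example.com PUPPET_AGENT_CERTNAME=web01.example.com PUPPET_AGENT_ENVIRONMENT=production
```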

Upgrading
Puppet can be upgraded by installing a new version of the MSI package. No extra steps are
required, and the installer will handle stopping and restarting the puppet agent service.
When upgrading, the installer will not replace any settings in the main puppet.conf configuration
file, but it can add previously unspecified settings if they are provided on the command line.

Uninstalling
Puppet can be uninstalled through the Windows standard Add or Remove Programs interface or
from the command line.
To uninstall from the command line, you must have the original MSI file or know the ProductCode
of the installed MSI:
msiexec /qn /x [puppet.msi|product-code]

Uninstalling will remove Puppet's program directory, the puppet agent service, and all related
registry keys. It will leave the data directory intact, including any SSL keys. To completely remove
Puppet from the system, delete the data directory manually.

Installation Details
What Gets Installed
In order to provide a self-contained installation, the Puppet installer includes all of Puppet's
dependencies, including Ruby, Gems, and Facter. (Puppet redistributes the 32-bit Ruby application
from rubyinstaller.org.) MCollective is also installed.
These prerequisites are used only for Puppet Enterprise components and do not interfere with
other local copies of Ruby.
Program Directory
Unless overridden during installation, Puppet and its dependencies are installed into the standard
32-bit applications directory: Program Files on 32-bit versions of Windows and Program Files (x86)
on 64-bit versions.
Puppet Enterprise's default installation path is:
OS type    Default Install Path
32-bit     C:\Program Files\Puppet Labs\Puppet Enterprise
64-bit     C:\Program Files (x86)\Puppet Labs\Puppet Enterprise

The Program Files directory can be located using the PROGRAMFILES environment variable on 32-bit
versions of Windows or the PROGRAMFILES(X86) variable on 64-bit versions.
Puppet's program directory contains the following subdirectories:
Directory              Description
bin                    scripts for running Puppet and Facter
facter                 Facter source
hiera                  Hiera source
mcollective            MCollective source
mcollective_plugins    plugins used by MCollective
misc                   resources
puppet                 Puppet source
service                code to run puppet agent as a service
sys                    Ruby and other tools

Agent Startup Mode


The agent is set to Automatic startup by default, but you can also pass Manual or Disabled.
Automatic means that the puppet agent service will start with Windows and run all the time in
the background. This is what you would choose when you want to run Puppet with a master.
Manual means that the agent will start up only when it is started in the services console or
through net start on the command line. This is typically used in advanced Puppet deployments.
Disabled means that the agent will be installed but disabled and will not be able to start in the
services console (unless you change the startup type in the services console first). This is
desirable when you want to install puppet but only want to invoke it as you specify, not
use it with a master.
Agent Account
By default, Puppet installs the agent with the built-in SYSTEM account. Because this account does not have
access to the network, we suggest specifying another account that does. The account
must already exist; in the case of a domain user, the account does
not need to have previously accessed the machine. If the account specified as
part of the install is not a local administrator, it will be added to the Administrators group on that particular node. The
account will also be granted "Logon as Service" rights as part of the installation process. For example,
to set the agent account to a domain user AbcCorp\bob, you would call the installer
from the command line, appending the following items: PUPPET_AGENT_ACCOUNT_DOMAIN=AbcCorp
PUPPET_AGENT_ACCOUNT_USER=bob PUPPET_AGENT_ACCOUNT_PASSWORD=password.
Data Directory
Puppet Enterprise and its components store settings (puppet.conf), manifests, and generated data
(like logs and catalogs) in the data directory. Puppet's data directory contains two subdirectories for
the various components (Facter, MCollective, etc.):
etc (the $confdir) contains configuration files, manifests, certificates, and other important files
var (the $vardir) contains generated data and logs
When run with elevated privileges (Puppet's intended state), the data directory is located in the
COMMON_APPDATA folder. This folder's location varies by Windows version:
OS Version       Path                                                    Default
2003             %ALLUSERSPROFILE%\Application Data\PuppetLabs\puppet    C:\Documents and Settings\All Users\Application Data\PuppetLabs\
7, 2008, 2012    %PROGRAMDATA%\PuppetLabs\                               C:\ProgramData\PuppetLabs\

Since the CommonAppData directory is a system folder, it is hidden by default. See
http://support.microsoft.com/kb/812003 for steps to show system and hidden files and folders.
If Puppet is run without elevated privileges, it will use a .puppet directory in the current user's
home folder as its data directory. This may result in Puppet having unexpected settings.
More
For more details about using Puppet on Windows, see:
Running Puppet on Windows
Writing Manifests for Windows
Next: Upgrading

Installing Mac OS X Agents


Note: Mac OS X is an agent-only platform. Version 10.9 is required.
Mac OS X agents provide consistent automated management of Apple laptops and desktops from
the same Puppet Enterprise infrastructure that manages your servers. Capabilities include PE core
functionality, plus OS X-specific capabilities:
Package installation via DMG and PKG
Service management via LaunchD
Directory Services integration for local user/group management
Inventory facts via System Profiler

Warning: In PE, agent certnames need to be lowercase. For Mac OS X agents, the certname is
derived from the name of the machine (e.g., My-Example-Mac). To prevent installation
issues, you will want to make sure the name of your machine uses lowercase letters. You
can make this change in System Preferences > Sharing > Computer Name > Edit.
To make this change from the command line, run the following commands:
1. sudo scutil --set ComputerName <newname>
2. sudo scutil --set LocalHostName <newname>
3. sudo scutil --set HostName <newname>
If you don't want to change your computer's name, you can also enter the agent certname in
all lowercase letters when prompted by the installer.


Install with Puppet Enterprise Package Management


To install the agent on a node running Mac OS X using PE package management tools, refer to
Installing Agents Using PE Package Management.
Install from Finder
To install the agent on a node running Mac OS X using Finder:
1. Download the OS X PE agent package.
2. Open the PE .dmg and click the installer .pkg.
3. Follow the instructions in the installer dialog. You will need to include the puppet master's
hostname and the agent's certname.
The installer automatically generates a certificate and contacts the master to request that the
certificate be signed.
Install from Command Line
To install the agent on a node running Mac OS X using the command line:
1. SSH into your OS X node as root or a sudo user. Note that you will be in /var/root.
2. Download the OS X PE agent package.
3. Run sudo hdiutil mount <DMGFILE>.
You will see a line that ends in /Volumes/puppet-enterprise-VERSION. This is the mount point for
the virtual volume created from the disk image.
4. Run cd /Volumes/puppet-enterprise-VERSION.
5. Run sudo installer -pkg puppet-enterprise-installer-<version>.pkg -target /.
6. To verify the install, run /opt/puppet/bin/puppet --version.
Tip: Run PATH=/opt/puppet/bin:$PATH;export PATH to add the PE binaries to your path.
7. Using the instructions in Configuring Agents, point the OS X agent at the correct puppet master
and set the agent's certname.
8. Kick off a puppet run using sudo puppet agent -t. This will create a certificate signing request
that you will need to sign.
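Assembled into a single session, the command-line steps above might look like the following sketch (the version string in the file and volume names is illustrative; substitute the package you downloaded):

```shell
# Illustrative OS X session; run on the target node.
sudo hdiutil mount puppet-enterprise-3.3.0-osx-10.9-x86_64.dmg
cd /Volumes/puppet-enterprise-3.3.0
sudo installer -pkg puppet-enterprise-installer-3.3.0.pkg -target /
/opt/puppet/bin/puppet --version       # verify the install
sudo /opt/puppet/bin/puppet agent -t   # first run; creates the CSR to sign
```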

Automated Installation with an Answer File


You can run the Puppet Enterprise installer while logged into the target server in an automated
mode that requires minimal user interaction. The installer will read pre-selected answers to the
install configuration questions from an answer file. There are two steps to the process:
1. Create an answer file or obtain the answer file created by the web-based installer. You can find
the latter at /opt/puppet/share/installer/answers on the machine from which you ran the
installer.

2. Run the installer with the -a or -A flag pointed at the answer file.
The flag causes the installer to read your choices from the answer file and act on them
immediately instead of interviewing a user to customize the installation.
Automated installation can greatly speed up large deployments and is crucial when installing PE
with the cloud provisioning tools.
However, note that an automated installation requires you to run the installer with an
answer file on each node on which you are installing a PE component. In other words, a monolithic
installation will require you to run the installer with an answer file on one node, but a split
installation will require you to run the installer with an answer file on three nodes.

Warning: If you're performing a split installation of PE using the automated installation
process, install the components in the following order:
1. Puppet master
2. PuppetDB and database support (which includes the console database)
3. The PE console

Obtaining an Answer File


Answer files are simply shell scripts that declare variables used by the installer, such as:
q_install=y
q_puppet_cloud_install=n
q_puppet_enterpriseconsole_install=n
q_puppet_symlinks_install=y
q_puppetagent_certname=webmirror1.example.com
q_puppetagent_install=y
q_puppetagent_server=puppet
q_puppetmaster_install=n
q_vendor_packages_install=y

(A full list of these variables is available in the answer file reference.)
To obtain an answer file, you can:
Use one of the example files provided in the installer's answers directory.
Retrieve the answers.lastrun file from a node on which you've already installed PE.
Write one by hand.

Tip: If you want to use the answer file created by the web-based installer, you can find it at
/opt/puppet/share/installer/answers on the machine from which you're running the
installer, but note that these answers are overwritten each time you run the installer.
You must hand edit any pre-made answer file before using it, as new nodes will need, at a
minimum, a unique agent certname.

Editing Answer Files


Although you can use literal strings in an answer file for one-off installations, you should fill certain
variables dynamically with bash subshells if you want your answer files to be reusable.
To run a subshell that will return the output of its command, use either the $() notation

q_puppetagent_certname=$(hostname -f)

or backticks:
q_puppetagent_certname=`uuidgen`

Answer files can also contain arbitrary shell code and control logic, but you will probably be able to
get by with a few simple name-discovery commands.
See the answer file reference for a complete list of variables and the conditions where they're
needed, or simply start editing one of the example files in answers/.
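Put together, a reusable agent-only answer file built this way might look like the following sketch (the master hostname puppet and the choice of components are illustrative assumptions, not required values):

```shell
# Sketch of a reusable agent-only answer file. Literal values are
# placeholders; the certname is filled dynamically on each node.
q_install=y
q_puppetagent_install=y
q_puppetmaster_install=n
q_puppet_cloud_install=n
q_vendor_packages_install=y
q_puppetagent_server=puppet
q_puppetagent_certname=$(hostname -f)
```

Because the certname is computed at install time, the same file can be shipped unchanged to every agent node.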

Running the Installer in Automated Mode


Once you have your answer le, simply run the installer with the -a or -A option, providing your
answer le as an argument:
$ sudo ./puppet-enterprise-installer -a ~/my_answers.txt
Installing with the -a option will fail if any required question variables are not set.
Installing with the -A option will prompt the user for any missing answers to question variables.

Answer File Reference Overview


Answer les are used for automated installations of PE. See the section on automated installation
for more details.

Answer File Syntax



Answer files consist of normal shell script variable assignments:


q_database_port=3306

Boolean answers should use Y or N (case-insensitive) rather than true, false, 1, or 0.


A variable can be omitted if a prior answer ensures that it won't be used (i.e.,
q_puppetmaster_certname can be left blank if q_puppetmaster_install=n).
Answer files can include arbitrary bash control logic and can assign variables with commands in
subshells ($(command)). For example, to set an agent node's certname to its FQDN:

q_puppetagent_certname=$(hostname -f)

To set it to a UUID:
q_puppetagent_certname=$(uuidgen)
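Control logic can combine these approaches. As a sketch (the fallback logic is illustrative, not part of the shipped examples), the following prefers the node's FQDN and falls back to a generated ID:

```shell
# Use the fully qualified hostname when one is configured;
# otherwise fall back to a unique generated identifier.
if [ -n "$(hostname -f 2>/dev/null)" ]; then
  q_puppetagent_certname=$(hostname -f)
else
  q_puppetagent_certname=$(uuidgen)
fi
```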

Sample Answer Files


PE includes a collection of sample answer files in the answers directory of your distribution tarball.
Answer file references are available for monolithic (all-in-one) and split installations. For split
installations, the answer file references are broken out across the various components you will
install; there is an answer file for the puppet master, the console, and the PuppetDB components.
Choose from the following:
Monolithic (all-in-one) installation
Split installation (puppet master node)
Split installation (console node)
Split installation (PuppetDB node)

Uninstaller Answers
q_pe_uninstall
Y or N Whether to uninstall. Answer files must set this to Y.
q_pe_purge
Y or N Whether to purge additional files when uninstalling, including all configuration files,
modules, manifests, certificates, and the home directories of any users created by the PE
installer.


q_pe_remove_db
Y or N Whether to remove any PE-specific databases when uninstalling.
Next: What gets installed where?

Monolithic Puppet Enterprise Install Answer File Reference
The following answers can be used to perform an automated monolithic (all-in-one) installation of
PE.
A .txt file version can be found in the answers directory of the PE installer tarball.
See the Answer File Overview and the section on automated installation for more details.
Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
ADDITIONAL GLOBAL ANSWERS

These answers are optional.


q_run_updtvpkg
Y or N Only used on AIX. Whether to run the updtvpkg command to add info about native
libraries to the RPM database. The answer should usually be Y, unless you have special
needs around the RPM database.
Components
These answers are always needed.
q_puppetmaster_install=y
Y or N Whether to install the puppet master component.
q_all_in_one_install=y
Y or N Whether or not the installation is an all-in-one installation (i.e., are PuppetDB and
the console also being installed on this node).
q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
ADDITIONAL COMPONENT ANSWERS

These answers are optional.


q_puppetagent_install
Y or N Whether to install the puppet agent component.
Puppet Agent Answers
These answers are always needed.
q_puppetagent_certname=pe-puppet.<your local domain>
String An identifying string for this agent node. This per-node ID must be unique across
your entire site. Fully qualified domain names are often used as agent certnames.
Puppet Master Answers
These answers are generally needed if you are installing the puppet master component.
q_puppetmaster_certname=pe-puppet.<your local domain>
String An identifying string for the puppet master. This ID must be unique across your
entire site. The server's fully qualified domain name is often used as the puppet master's
certname.
q_puppetmaster_dnsaltnames=pe-puppet,pe-puppet.<your local domain>
String Valid DNS names at which the puppet master can be reached. Must be a comma-separated list. In a normal installation, defaults to
<hostname>,<hostname.domain>,puppet,puppet.<domain>.
q_pe_check_for_updates=n
y or n; MUST BE LOWERCASE Whether to check for updates whenever the pe-httpd service
restarts. To get the correct update info, the server will pass some basic, anonymous info to
Puppet Labs servers. Specifically, it will transmit:
the IP address of the client
the type and version of the client's OS
the installed version of PE
the number of nodes licensed and the number of nodes used
If you wish to disable update checks (e.g., if your company policy forbids transmitting this
information), you will need to set this to n. You can also disable checking after installation by

editing the /etc/puppetlabs/installer/answers.install file.


q_disable_live_manangement=n
Y or N Whether to disable or enable live management in the console. Note that you need to
manually add this question to your answer file before an installation or upgrade.
q_puppet_enterpriseconsole_httpd_port=443
Integer The port on which to serve the console. The default is port 443, which will allow
access to the console from a web browser without manually specifying a port. If port 443 is
not available, the installer will try port 3000, 3001, 3002, 3003, 3004, and 3005.
q_puppet_enterpriseconsole_auth_user_email=<your email>
String The email address the console's admin user will use to log in.
q_puppet_enterpriseconsole_auth_password=<your password>
String The password for the console's admin user. Must be longer than eight characters.
q_puppet_enterpriseconsole_smtp_host=smtp.<your local domain>
String The SMTP server used to email account activation codes to new console users.
q_puppet_enterpriseconsole_smtp_port=25
Integer The port to use when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_use_tls=n
Y or N Whether to use TLS when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_user_auth=n
Y or N Whether to authenticate to the SMTP server with a username and password.
q_puppet_enterpriseconsole_smtp_username=
String The username to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_puppet_enterpriseconsole_smtp_password=
String The password to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_public_hostname=
String A publicly accessible hostname where the console can be accessed if the host name
resolves to a private interface (e.g., Amazon EC2). This is set automatically by the installer on
EC2 nodes, but can be set manually in environments with multiple hostnames.
ADDITIONAL PUPPET MASTER ANSWERS


These answers are optional.


q_tarball_server
String The location from which PE agent tarballs will be downloaded before installation.
Note that agent tarballs are only available for certain operating systems. For details, see the
PE agent installation instructions.
Database Support Answers
These answers are only needed if you are installing the database support component.
q_database_install=y
Y or N Whether or not to install the PostgreSQL server that supports the console.
q_puppetdb_database_name=pe-puppetdb
String The database PuppetDB will use.
q_puppetdb_database_password=<your password>
String The password for PuppetDB's root user.
q_puppetdb_database_user=pe-puppetdb
String PuppetDB's root user name.
ADDITIONAL DATABASE SUPPORT ANSWERS

q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).

Split Puppet Enterprise Install, Console Answer File Reference
The following answers can be used to perform an automated split installation of PE on the node
assigned to the console component.
A .txt file version can be found in the answers directory of the PE installer tarball.

See the Answer File Overview and the section on automated installation for more details.

Warning: If you're performing a split installation of PE using the automated installation
process, install the components in the following order:
1. Puppet master
2. PuppetDB and database support (which includes the console database)
3. The PE console

Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
ADDITIONAL GLOBAL ANSWERS

These answers are optional.


q_run_updtvpkg
Y or N Only used on AIX. Whether to run the updtvpkg command to add info about native
libraries to the RPM database. The answer should usually be Y, unless you have special
needs around the RPM database.
Component Answers
These answers are always needed.
q_puppetmaster_install=n
Y or N Whether to install the puppet master component.
q_puppetdb_install=n
Y or N Whether to install the database support (the console PostgreSQL server and
PuppetDB) component.
q_puppet_enterpriseconsole_install=y
Y or N Whether to install the console component.

q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
Additional Component Answers
These answers are optional.
q_puppetagent_install
Y or N Whether to install the puppet agent component.
Puppet Agent Answers
These answers are always needed.
q_puppetagent_certname=pe-console.<your local domain>
String An identifying string for this agent node. This per-node ID must be unique across
your entire site. Fully qualified domain names are often used as agent certnames.
q_puppetagent_server=pe-master.<your local domain>
String The hostname of the puppet master server. For the agent to trust the master's
certificate, this must be one of the valid DNS names you chose when installing the puppet
master.
q_fail_on_unsuccessful_master_lookup=y
Y or N Whether to quit the install if the puppet master cannot be reached.
q_skip_master_verification=n
Y or N This is a silent install option; the default is N. When set to Y, the installer will skip
master verification, which allows the user to deploy agents when they know the master won't
be available.
Puppet Master Answers
These answers are generally needed if you are installing the puppet master component.
q_disable_live_manangement=n
Y or N Whether to disable or enable live management in the console. Note that you need to
manually add this question to your answer file before an installation or upgrade.
q_pe_database=y
Y or N Whether to have the PostgreSQL server for the console managed by PE or to manage
it yourself. Set to Y if you're using PE-managed PostgreSQL.
q_puppet_enterpriseconsole_auth_user_email=<your email>
String The email address the console's admin user will use to log in.

q_puppet_enterpriseconsole_auth_password=<your password>
String The password for the console's admin user. Must be longer than eight characters.
q_puppet_enterpriseconsole_smtp_host=smtp.<your local domain>
String The SMTP server used to email account activation codes to new console users.
q_puppet_enterpriseconsole_smtp_port=25
Integer The port to use when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_use_tls=n
Y or N Whether to use TLS when contacting the SMTP server.
q_puppet_enterpriseconsole_smtp_user_auth=n
Y or N Whether to authenticate to the SMTP server with a username and password.
q_puppet_enterpriseconsole_smtp_username=
String The username to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_puppet_enterpriseconsole_smtp_password=
String The password to use when contacting the SMTP server. Only used when
q_puppet_enterpriseconsole_smtp_user_auth is Y.
q_puppet_enterpriseconsole_database_name=console
String The database the console will use. Note that if you are not installing the database
support component, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_database_user=console
String The PostgreSQL user the console will use. Note that if you are not installing the
database support component, this user must already exist on the PostgreSQL server and
must be able to edit the console's database.
q_puppet_enterpriseconsole_database_password=<your password>
String The password for the console's PostgreSQL user.
q_puppet_enterpriseconsole_auth_database_name=console_auth
String The database the console authentication will use. Note that if you are not installing
the database support component, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_auth_database_user=console_auth
String The PostgreSQL user the console authentication will use. Note that if you are not
installing the database support component, this user must already exist on the PostgreSQL
server and must be able to edit the auth database.


q_puppet_enterpriseconsole_auth_database_password=<your password>
String The password for the auth database's PostgreSQL user.
q_public_hostname=
String A publicly accessible hostname where the console can be accessed if the host name
resolves to a private interface (e.g., Amazon EC2). This is set automatically by the installer on
EC2 nodes, but can be set manually in environments with multiple hostnames.
ADDITIONAL PUPPET MASTER ANSWERS

These answers are optional.


q_tarball_server
String The location from which PE agent tarballs will be downloaded before installation.
Note that agent tarballs are only available for certain operating systems. For details, see the
PE agent installation instructions.
ADDITIONAL CONSOLE ANSWERS

q_puppet_enterpriseconsole_master_hostname
String The hostname of the server running the master component. Only needed in a split
install.
Database Support Answers
These answers are only needed if you are installing the database support component.
q_database_host=pe-puppetdb.localdomain
String The hostname of the server running the PostgreSQL server that supports the
console.
q_database_port=5432
Integer The port where the PostgreSQL server that supports the console can be reached.
q_puppetdb_database_name=pe-puppetdb
String The database PuppetDB will use.
q_puppetdb_database_password=strongpassword1748
String The password for PuppetDB's root user.
q_puppetdb_database_user=pe-puppetdb
String PuppetDB's root user name.


q_puppetdb_hostname=pe-puppetdb.localdomain
String The hostname of the server running PuppetDB.
ADDITIONAL DATABASE SUPPORT ANSWERS

q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).

Split Puppet Enterprise Install, Puppet Master Answer File Reference
The following answers can be used as a baseline to perform an automated split installation of PE on
the node assigned to the puppet master component.
A .txt file version can be found in the answers directory of the PE installer tarball.
See the Answer File Overview and the section on automated installation for more details.

Warning: If you're performing a split installation of PE using the automated installation
process, install the components in the following order:
1. Puppet master
2. PuppetDB and database support (which includes the console database)
3. The PE console

Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.
ADDITIONAL GLOBAL ANSWERS

These answers are optional.


q_run_updtvpkg
Y or N Only used on AIX. Whether to run the updtvpkg command to add info about native
libraries to the RPM database. The answer should usually be Y, unless you have special
needs around the RPM database.
Component Answers
These answers are always needed.
q_puppetmaster_install=y
Y or N Whether to install the puppet master component.
q_puppetdb_install=n
Y or N Whether to install the database support (the console Postgres server and PuppetDB)
component.
q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
Additional Component Answers
These answers are optional.
q_puppetagent_install
Y or N Whether to install the puppet agent component.
Puppet Agent Answers
These answers are always needed.
q_fail_on_unsuccessful_master_lookup
Y or N Whether to quit the install if the puppet master cannot be reached.
q_skip_master_verification=n
Y or N This is a silent install option; the default is N. When set to Y, the installer will skip
master verification, which allows the user to deploy agents when they know the master won't
be available.
Puppet Master Answers


These answers are generally needed if you are installing the puppet master component.
q_all_in_one_install=n
Y or N Whether or not the installation is an all-in-one installation (i.e., are PuppetDB and
the console also being installed on this node).
q_puppetmaster_certname=pe-master.<your local domain>
String An identifying string for the puppet master. This ID must be unique across your
entire site. The server's fully qualified domain name is often used as the puppet master's
certname.
q_puppetmaster_dnsaltnames=pe-master,pe-master.<your local domain>
String Valid DNS names at which the puppet master can be reached. Must be a comma-separated list. In a normal installation, defaults to
<hostname>,<hostname.domain>,puppet,puppet.<domain>.
q_puppetmaster_enterpriseconsole_hostname=pe-console.<your local domain>
String The hostname of the server running the console component. Only needed if you are
not installing the console component on the puppet master server.
q_puppetmaster_enterpriseconsole_port=443
Integer The port on which to contact the console server. Only needed if you are not
installing the console component on the puppet master server.
q_pe_check_for_updates=n
y or n; MUST BE LOWERCASE Whether to check for updates whenever the pe-httpd service
restarts. To get the correct update info, the server will pass some basic, anonymous info to
Puppet Labs servers. Specifically, it will transmit:
the IP address of the client
the type and version of the client's OS
the installed version of PE
the number of nodes licensed and the number of nodes used
If you wish to disable update checks (e.g., if your company policy forbids transmitting this
information), you will need to set this to n. You can also disable checking after installation by
editing the /etc/puppetlabs/installer/answers.install file.
q_public_hostname=
String A publicly accessible hostname where the console can be accessed if the host name
resolves to a private interface (e.g., Amazon EC2). This is set automatically by the installer on
EC2 nodes, but can be set manually in environments with multiple hostnames.
ADDITIONAL PUPPET MASTER ANSWERS

These answers are optional.


q_tarball_server
String The location from which PE agent tarballs will be downloaded before installation.
Note that agent tarballs are only available for certain operating systems. For details, see the
PE agent installation instructions.

PuppetDB Answers
q_puppetdb_hostname=pe-puppetdb.<your local domain>
String The hostname of the server running PuppetDB.
ADDITIONAL PUPPETDB ANSWERS

These answers are optional.


q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).

Split Puppet Enterprise Install, PuppetDB Answer File Reference
The following answers can be used as a baseline to perform an automated split installation of PE on
the node assigned to the PuppetDB component.
A .txt file version can be found in the answers directory of the PE installer tarball.
See the Answer File Overview and the section on automated installation for more details.

Warning: If you're performing a split installation of PE using the automated installation
process, install the components in the following order:
1. Puppet master
2. PuppetDB and database support (which includes the console database)
3. The PE console

Global Answers
These answers are always needed.
q_install=y
Y or N Whether to install. Answer files must set this to Y.
q_vendor_packages_install=y
Y or N Whether the installer has permission to install additional packages from the OS's
repositories. If this is set to N, the installation will fail if the installer detects missing
dependencies.


ADDITIONAL GLOBAL ANSWERS

These answers are optional.


q_run_updtvpkg
Y or N Only used on AIX. Whether to run the updtvpkg command to add info about native
libraries to the RPM database. The answer should usually be Y, unless you have special
needs around the RPM database.
Component Answers
These answers are always needed.
q_puppetmaster_install=n
Y or N Whether to install the puppet master component.
q_puppetdb_install=y
Y or N Whether to install the database support (the console Postgres server and PuppetDB)
component.
q_puppet_cloud_install=n
Y or N Whether to install the cloud provisioner component.
Additional Component Answers
These answers are optional.
q_puppetagent_install
Y or N Whether to install the puppet agent component.
Puppet Agent Answers
These answers are always needed.
q_puppetagent_certname=pe-puppetdb.<your local domain>
String An identifying string for this agent node. This per-node ID must be unique across
your entire site. Fully qualified domain names are often used as agent certnames.
q_puppetagent_server=pe-master.<your local domain>
String The hostname of the puppet master server. For the agent to trust the master's
certificate, this must be one of the valid DNS names you chose when installing the puppet
master.
q_fail_on_unsuccessful_master_lookup=y
Y or N Whether to quit the install if the puppet master cannot be reached.


q_skip_master_verification=n
Y or N This is a silent install option; the default is N. When set to Y, the installer will skip
master verification, which allows the user to deploy agents when they know the master won't
be available.
Puppet Master Answers
These answers are generally needed if you are installing the puppet master role.
q_puppetmaster_certname=${q_puppetagent_server}
String An identifying string for the puppet master. This ID must be unique across your
entire site. The server's fully qualified domain name is often used as the puppet master's
certname.
q_puppet_enterpriseconsole_database_name=console
String The database the console will use. Note that if you are not installing the database
support role, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_database_user=console
String The PostgreSQL user the console will use. Note that if you are not installing the
database support role, this user must already exist on the PostgreSQL server and must be
able to edit the console's database.
q_puppet_enterpriseconsole_database_password=<your password>
String The password for the console's PostgreSQL user.
q_puppet_enterpriseconsole_auth_database_name=console_auth
String The database the console authentication will use. Note that if you are not installing
the database support role, this database must already exist on the PostgreSQL server.
q_puppet_enterpriseconsole_auth_database_user=console_auth
String The PostgreSQL user the console authentication will use. Note that if you are not
installing the database support role, this user must already exist on the PostgreSQL server
and must be able to edit the auth database.
q_puppet_enterpriseconsole_auth_database_password=<your password>
String The password for the auth database's PostgreSQL user.
ADDITIONAL PUPPET MASTER ANSWERS

These answers are optional.


q_tarball_server
String The location from which PE agent tarballs will be downloaded before installation.
Note that agent tarballs are only available for certain operating systems. For details, see the
PE agent installation instructions.


Database Support Answers
These answers are only needed if you are installing the database support role.
q_database_install=y
Y or N Whether or not to install the PostgreSQL server that supports the console.
q_puppetdb_database_name=pe-puppetdb
String The database PuppetDB will use.
q_puppetdb_database_password=<your password>
String The password for PuppetDB's root user.
q_puppetdb_database_user=pe-puppetdb
String PuppetDB's root user name.
ADDITIONAL DATABASE SUPPORT ANSWERS

q_database_root_password
String The password for the console's PostgreSQL user.
q_database_root_user
String The console's PostgreSQL root user name.
q_puppetdb_plaintext_port
Integer The port on which PuppetDB accepts plain-text HTTP connections (default port is
8080).
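As a sketch, the database support answers above might be combined in an answer file like this (the password is a placeholder; the port shown is the default):

```shell
# Database support role answers (example values only)
q_database_install=y
q_puppetdb_database_name=pe-puppetdb
q_puppetdb_database_user=pe-puppetdb
q_puppetdb_database_password=puppetdb_db_password
q_puppetdb_plaintext_port=8080
```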

Upgrading Puppet Enterprise


Upgrading Overview
The Puppet Installer script is used to perform both installations and upgrades. The script will check
for a prior version and run as upgrader or installer as needed. You start by downloading and
unpacking a tarball with the appropriate version of the PE packages for your system. Then, when
you run the puppet-enterprise-installer script, the script will check for a prior installation of PE
and, if it detects one, will ask if you want to proceed with the upgrade. The installer will then
upgrade all the PE components (master, agent, etc.) it finds on the node to version 3.3.
Upgrading a Monolithic Installation
If you have a monolithic installation (with the master, console, and database components all on the
same node), the installer will upgrade each component in the correct order, automatically.
Upgrading a Split Installation
If you have a split installation (with the master, console, and database components on different
nodes), the upgrade involves the following steps, which must be performed in this order:
1. Upgrade Master
2. Upgrade PuppetDB
3. Upgrade Console
4. Upgrade Agents

To upgrade Windows agents, simply download and run the new MSI package as
described in Installing Windows Agents. However, be sure to upgrade your master, console,
and database nodes first.

Important Notes and Warnings


Before Upgrading, Back Up Your Databases and Other PE Files
We recommend that you back up the following databases and PE files.
On a monolithic (all-in-one) install, the databases and PE files will all be located on the same node
as the puppet master.
/etc/puppetlabs/
/opt/puppet/share/puppet-dashboard/certs
The console and console_auth databases
The PuppetDB database
On a split install, the databases and PE files will be located across the various components assigned
to your servers.
/etc/puppetlabs/: different versions of this directory can be found on the server assigned to
the puppet master component, the server assigned to the console component, and the server
assigned to the database support component (i.e., PuppetDB and PostgreSQL). You should back
up each version.
/opt/puppet/share/puppet-dashboard/certs: located on the server assigned to the console
component.
The console and console_auth databases: located on the server assigned to the database
support component.
The PuppetDB database: located on the server assigned to the database support component.
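One way to sketch such a backup is shown below. The archive destination, the pg_dump path, and the pe-postgres user are assumptions about a typical PE install; adjust them for your environment, and run the database dumps on the node that holds the databases.

```shell
#!/bin/sh
# Sketch: back up PE config directories and databases before an upgrade.
# Run as root; BACKUP_DIR is an arbitrary destination of your choosing.
BACKUP_DIR="${BACKUP_DIR:-/var/tmp/pe_backup}"
mkdir -p "$BACKUP_DIR"

# Archive the PE configuration directory and the console certificates,
# skipping any path not present on this node.
for dir in /etc/puppetlabs /opt/puppet/share/puppet-dashboard/certs; do
  if [ -d "$dir" ]; then
    tar -czf "$BACKUP_DIR/$(echo "$dir" | tr '/' '_').tar.gz" "$dir"
  else
    echo "skipping $dir (not present on this node)"
  fi
done

# Dump the console, console_auth, and PuppetDB databases with PE's
# bundled pg_dump (the path and the pe-postgres user are assumptions).
for db in console console_auth pe-puppetdb; do
  if [ -x /opt/puppet/bin/pg_dump ]; then
    su - pe-postgres -s /bin/sh \
      -c "/opt/puppet/bin/pg_dump -Fc $db" > "$BACKUP_DIR/$db.dump"
  else
    echo "skipping database $db (PE PostgreSQL not found on this node)"
  fi
done
```

Verify the archives and dumps are readable before you start the upgrade; a backup you cannot restore is no backup.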


Upgrades from 3.2.0 Can Cause Issues with Multi-Platform Agent Packages
Users upgrading from PE 3.2.0 to a later version of 3.x (including 3.2.3) will see errors when
attempting to download agent packages for platforms other than the master. After adding pe_repo
classes to the master for desired agent packages, errors will be seen on the subsequent puppet run
as PE attempts to access the requisite packages. For a simple workaround to this issue, see the
installer troubleshooting page.
Upgrades to PE 3.x from 2.8.3 Can Fail if PostgreSQL is Already Installed
This issue has been documented in the known issues section of the release notes.
A Note about Changes to puppet.conf that Can Cause Issues During Upgrades
If you manage puppet.conf with Puppet or a third-party tool like Git or r10k, you may encounter
errors after upgrading based on the following changes. Please assess these changes before
upgrading.
node_terminus Changes
In PE versions earlier than 3.2, node classification was configured with node_terminus=exec,
located in /etc/puppetlabs/puppet/puppet.conf. This caused the puppet master to execute a
custom shell script (/etc/puppetlabs/puppet-dashboard/external_node) which ran a curl
command to retrieve data from the console.
PE 3.2 changes node classification in puppet.conf. The new configuration is
node_terminus=console. The external_node script is no longer available; thus,
node_terminus=exec no longer works.
With this change, we have improved security, as the puppet master can now verify the console.
The console certificate name is pe-internal-dashboard. The puppet master now finds the
console by reading the contents of /etc/puppetlabs/puppet/console.conf, which provides the
following:
[main]
server=<console hostname>
port=<console port>
certificate_name=pe-internal-dashboard

This file tells the puppet master where to locate the console and what name it should expect the
console to have. If you want to change the location of the console, you can edit console.conf,
but DO NOT change the certificate_name setting.
The rules for certificate-based authorization to the console are found in
/etc/puppetlabs/console-auth/certificate_authorization.yml on the console node. By
default, this file allows the puppet master read-write access to the console (based on its
certificate name) to request node data and submit report data.
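For illustration, an entry in certificate_authorization.yml uses a certname as the key and assigns it a role; the snippet below is a sketch (the certname and the exact default contents are assumptions, not your actual file):

```yaml
# Hypothetical entry: grant this certname read-write access to the console
master.example.com:
  role: read-write
```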
Reports Changes
Reports are no longer submitted to the console using reports=https. PE 3.2 changed the
setting in puppet.conf to reports=console. This change works in the same way as the
node_terminus changes described above.
Upgrading Split Console and Custom PostgreSQL Databases
When upgrading from 3.1 to 3.3, the console database tables are upgraded from 32-bit integers to
64-bit. This helps to avoid ID overflows in large databases. In order to migrate the database, the
upgrader will temporarily require disk space equivalent to 20% more than the largest table in the
console's database (by default, located in /opt/puppet/var/lib/pgsql/9.2/console). If the
database is in this default location, on the same node as the console, the upgrader can successfully
determine the amount of disk space needed and provide warnings if needed. However, there are
certain circumstances in which the upgrader cannot make this determination automatically.
Specifically, the installer cannot determine the disk space requirement if:
1. The console database is installed on a different node than the console.
2. The console database is a custom instance, not the database installed by PE.
In case 1, the installer can determine how much space is needed, but it will be up to the user to
determine whether sufficient free space exists. In case 2, the installer is unable to obtain any information
about the size or state of the database.
Running a 3.x Master with 2.8.x Agents is not Supported
3.x versions of PE contain changes to the MCollective module that are not compatible with 2.8.x
agents. When running a 3.x master with a 2.8.x agent, puppet may still continue to
run and check in to the console, but it is running in a degraded state that is not
supported.
Upgrades to PE 3.2.x or Later Remove Commented Authentication Sections from rubycas-server/config.yml
If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain the
commented sections for the third-party services. We've provided the commented sections on the
console config page, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
Upgrading puppetlabs-inifile to Version 1.1.0 or Later Is Required
If you have the puppetlabs-inifile module installed, you must upgrade to version 1.1.0 or higher of
the module before you upgrade to PE 3.3.


Downloading PE
If you haven't done so already, you will need a Puppet Enterprise tarball appropriate for your
system(s). See the Installing PE section of this guide for more information on accessing Puppet
Enterprise tarballs, or go directly to the download page.
Once downloaded, copy the appropriate tarball to each node you'll be upgrading.

Running the Upgrade


Before starting the upgrade, all of the components (agents, master, console, etc.) in your current
deployment should be correctly configured and communicating with each other, and live
management should be up and running with all nodes connected.

Important: All installer commands should be run as root.

Note: PE3 has moved from the MySQL implementation used in PE 2.x to PostgreSQL for all
database support. PE3 also now includes PuppetDB, which requires PostgreSQL. When
upgrading from 2.x to 3.x, the installer will automatically pipe your existing data from
MySQL to PostgreSQL.
You will need to have a node available and ready to receive an installation of PuppetDB and
PostgreSQL. This can be the same node as the one running the master and console (if you
have a monolithic, all-in-one implementation), or it can be a separate node (if you are
running a split component implementation). In a split component implementation, the
database node must be up and running and reachable at a known hostname before starting
the upgrade process on the console node.
The upgrader can install a pre-configured version of PostgreSQL (must be version 9.1 or
higher) along with PuppetDB on the node you select. If you prefer to use a node with an
existing instance of PostgreSQL, that instance needs to be manually configured with the
correct users and access. This also needs to be done BEFORE starting the upgrade.

Upgrade Master
Start the upgrade by running the puppet-enterprise-installer script on the master node. The
script will detect any previous versions of PE components and stop any PE services that are currently
running. The script will then step through the install script, providing default answers based on the
components it has detected on the node (e.g., if the script detects only an agent on a given node, it
will provide "No" as the default answer to installing the master component). The upgrader should
be able to answer all of the questions based on your current installation except for the hostname
and port of the PuppetDB node you prepped before starting the upgrade.
As with installation, the script will also check for any missing dependent vendor packages and offer
to install them automatically.
Lastly, the script will summarize the upgrade plan and ask whether you want to proceed with the
upgrade. Your answers to the script will be saved as usual in
/etc/puppetlabs/installer/answers.install.
The upgrade script will run and provide detailed information as to what it installs, what it updates,
and what it replaces. It will preserve existing certificates and puppet.conf files.
Upgrade PuppetDB
On the node you provisioned for PuppetDB before starting the upgrade, unpack the PE 3.3 tarball
and run the puppet-enterprise-installer script. If you are upgrading from a 2.8 deployment,
you will need to provide some answers to the upgrader, as follows:
?? Install puppet master? [y/N] Answer N. This will not be your master. The master was
upgraded in the previous step.
?? Puppet master hostname to connect to? [Default: puppet] Enter the FQDN of the
master node you upgraded in the previous step.
?? Install PuppetDB? [y/N] Answer Y. This is the reason we are performing this installation
on this node.
?? Install the cloud provisioner? [y/N] Choose whether or not you would like to install
the cloud provisioner component on this node.
?? Install a PostgreSQL server locally? [Y/n] If you want the installer to create a
PostgreSQL server instance for PuppetDB data, answer Y. If you are using an existing
PostgreSQL instance located elsewhere, answer N and be prepared to answer questions about
its hostname, port, database name, database user, and password.
?? Certname for this node? [Default: my_puppetdb_node.example.com] Enter the FQDN
for this node.
?? Certname for the master? [Default: hostname.entered.earlier] You only need to
change the default if the hostname and certname of your master are different.
The installer will save auto-generated users and passwords in
/etc/puppetlabs/installer/database_info.install. Do not delete this file; you will need its
information in the next step.
POTENTIAL DATABASE TRANSFER ISSUES

The node running PostgreSQL must have access to the en_US.UTF8 locale. Otherwise, certain
non-ASCII characters will not translate correctly and may cause issues and unpredictability.
If you have manually re-ordered the columns in your old MySQL database, the transfer may fail
or may import values into inappropriate columns, leading to incorrect data and unpredictable
behavior.
If some string values (e.g. for group name) are literals written exactly as NULL, they will be
transferred as undefined values or, if the target PostgreSQL column has a not-null constraint,
the import may fail altogether.
Upgrade the Console
On the node serving the console component, unpack the PE 3.3 tarball and run the
puppet-enterprise-installer script. The installer will detect the version from which you are upgrading
and answer as many installer questions as possible based on your existing deployment.

Note: When upgrading a node running the console component, the upgrader will pipe the
current MySQL databases into the new PostgreSQL databases. If your databases contain a lot
of data, this transfer may take some time to complete.
Pruning the MySQL data before starting the upgrade will speed up the transfer. While not
absolutely necessary, we recommend deleting all but two to four weeks' worth of reports.
If you are running the console on a VM, you may also wish to temporarily increase the
amount of RAM available.
Note that your old database will NOT be deleted after the upgrade completes. After you are
sure the upgrade was successful, you will need to delete the database files yourself to
reclaim disk space.
The installer will also ask for the following information:
The hostname and port number for the PuppetDB node you created in the previous step.
Database credentials; specifically, the database names, user names, and passwords for the
console, console_auth, and pe-puppetdb databases. These can be found in
/etc/puppetlabs/installer/database_info.install on the PuppetDB node.
Note: If you will be using your own instance of PostgreSQL (as opposed to the instance PE can
install) for the console and PuppetDB, it must be version 9.1 or higher.
DISABLING/ENABLING LIVE MANAGEMENT DURING AN UPGRADE

The status of live management is not managed during an upgrade of PE unless you specifically
indicate a change is needed in an answer file. In other words, if your previous version of PE had live
management enabled (the PE default), it will remain enabled after you upgrade unless you add or
change q_disable_live_management={y|n} in your answer file.
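For example, to make sure live management stays disabled after the upgrade, the answer file would contain:

```shell
# y disables live management; n enables it
q_disable_live_management=y
```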
Depending on your answer, the disable_live_management setting in
/etc/puppetlabs/puppet-dashboard/settings.yml on the puppet master (or console node in a
split install) will be set to either true or false after the upgrade is complete.
(Note that you can enable/disable live management at any time during normal operations by
editing the aforementioned settings.yml and then running sudo /etc/init.d/pe-httpd
restart.)
Upgrade Agents and Complete the Upgrade
The simplest way to upgrade agents is to upgrade the pe-agent package in the repo your package
manager (e.g., Satellite) is using. Similarly, if you are using the PE package repo hosted on the
master, it will get upgraded when you upgrade the master. You can then use the agent install script
as usual to upgrade your agent.
For nodes running an OS that doesn't support remote package repos (e.g., RHEL 4, AIX), you'll need
to use the installer script in the PE tarball as you did for the master, etc. On each node with a
puppet agent, unpack the PE 3.3 tarball and run the puppet-enterprise-installer script. The
installer will detect the version from which you are upgrading and answer as many installer
questions as possible based on your existing deployment. Note that the agents on your puppet
master, PE console, and PuppetDB nodes will have been updated already when you upgraded those
nodes. Nodes running 2.x agents will not be available for live management until they have been
upgraded.
PE services should restart automatically after the upgrade. If you want to check that everything
is working correctly, you can run puppet agent -t on your agents to ensure that everything is
behaving as it was before upgrading. Generally speaking, it's a good idea to run puppet right away
after an upgrade to make sure everything is hooked up correctly and has the latest configuration.

Checking For Updates


Check here to find out the latest maintenance release of Puppet Enterprise. To see the version of PE
you are currently using, you can run puppet --version on the command line.
Note: By default, the puppet master will check for updates whenever the pe-httpd service restarts.
As part of the check, it passes some basic, anonymous information to Puppet Labs servers. This
behavior can be disabled if need be. The details on what is collected and how to disable checking
can be found in one of the answer file references.
Next: Uninstalling

Uninstalling Puppet Enterprise


Use the puppet-enterprise-uninstaller script to uninstall PE components on a given node. This
script can remove a working PE installation, or undo a partially failed installation to prepare for a
re-install.

About Mac OS X and Windows Agent Uninstallation



For instructions on uninstalling PE on a node running Windows, refer to the uninstalling
section of the Windows agent installation instructions.
To uninstall PE on a node running Mac OS X, simply access the uninstaller .pkg in the original
OS X PE agent package you downloaded, and follow the instructions in the uninstall dialog.
Warning: When you use the OS X uninstaller, you will completely remove all aspects of the PE
agent from that node.

Using the Uninstaller


The puppet-enterprise-uninstaller script is installed on the master, PuppetDB, and
console nodes. In order to uninstall PE, you must run the uninstaller on each of these nodes;
you can invoke it directly as /opt/puppet/bin/puppet-enterprise-uninstaller.
If you installed PE using the automated install process, you can also find the uninstaller in the
same directory as the installer script. Run it with root privileges
from the command line:
$ sudo ./puppet-enterprise-uninstaller

Regardless of the path you use, the uninstaller will ask you to confirm that you want to uninstall.
By default, the uninstaller will remove the Puppet Enterprise software, users, logs, cron jobs, and
caches, but it will leave your modules, manifests, certificates, databases, and configuration files in
place, as well as the home directories of any users it removes.
You can use the following command-line flags to change the uninstaller's behavior:
Uninstaller Options
-p
Purge additional files. With this flag, the uninstaller will also remove all configuration files,
modules, manifests, certificates, and the home directories of any users created by the PE
installer. This will also remove the Puppet Labs public GPG key used for package verification.
-d
Also remove any databases created during installation.
-h
Display a help message.
-n
Run in noop mode; show commands that would have been run during uninstallation without
actually running them.


-y
Don't ask to confirm uninstallation, assuming an answer of yes.
-s
Save an answer file and quit without uninstalling.
-a
Read answers from file and fail if an answer is missing. See the uninstaller answers section of
the answer file reference for a list of available answers.
-A
Read answers from file and prompt for input if an answer is missing. See the uninstaller
answers section of the answer file reference for a list of available answers.
Thus, to remove every trace of PE from a system, you would run:
$ sudo ./puppet-enterprise-uninstaller -d -p

Note that if you plan to reinstall any PE component on a node you've run an uninstall on, you may
need to run puppet cert clean <node name> on the master in order to remove any orphaned
certificates from the node.
Next: Automated Installation

PE 3.3 Installing: What Gets Installed Where?
License File
Your PE license file (which was emailed to you when you purchased Puppet Enterprise) should be
placed in /etc/puppetlabs/license.key.
Puppet Enterprise can be evaluated with a complimentary ten-node license; beyond that, a
commercial per-node license is required for use. A license key file will have been emailed to you
after your purchase, and the puppet master will look for this key at /etc/puppetlabs/license.key.
Puppet will log warnings if the license is expired or exceeded, and you can view the status of your
license by running puppet license at the command line on the puppet master.
To purchase a license, please see the Puppet Enterprise pricing page, or contact Puppet Labs at
sales@puppetlabs.com or (877) 575-9775. For more information on licensing terms, please see the
licensing FAQ. If you have misplaced or never received your license key, please contact
sales@puppetlabs.com.

Software
What
All functional components of PE, excluding configuration files. You are not likely to need to change
these components. The following software components are installed:
Puppet
PuppetDB
Facter
MCollective
Hiera
Puppet Dashboard

Where
On *nix nodes, all PE software (excluding config files and generated data) is installed under
/opt/puppet.
On Windows nodes, all PE software is installed in the Puppet Enterprise subdirectory of the
standard 32-bit applications directory.
Executable binaries on *nix are in /opt/puppet/bin and /opt/puppet/sbin.
The Puppet modules included with PE are installed on the puppet master server in
/opt/puppet/share/puppet/modules. Don't modify anything in this directory or add modules of
your own. Instead, install them in /etc/puppetlabs/puppet/modules.
Orchestration plugins are installed in /opt/puppet/libexec/mcollective/mcollective on *nix
and in <COMMON_APPDATA>\PuppetLabs\mcollective\etc\plugins\mcollective on Windows. If
you are adding new plugins to your PE agent nodes, you should distribute them via Puppet as
described in the Adding Actions page of this manual.

Dependencies
For information about PostgreSQL and OpenSSL requirements, refer to the system requirements.

Configuration Files
What
Files used to configure Puppet and its subsidiary components. These are the files you will likely
change to accommodate the needs of your environment.

Where
On *nix nodes, Puppet Enterprise's configuration files all live under /etc/puppetlabs.
On Windows nodes, Puppet Enterprise's configuration files all live under
<COMMON_APPDATA>\PuppetLabs. The location of this folder varies by Windows version; in 2008 and
2012, its default location is C:\ProgramData\PuppetLabs\.
PE's various components all have subdirectories inside this main data directory:
Puppet's confdir is in the puppet subdirectory. This directory contains the puppet.conf file, the
site manifest (manifests/site.pp), and the modules directory.
The orchestration engine's config files are in the mcollective subdirectory on all agent nodes,
as well as the activemq subdirectory and the /var/lib/peadmin directories on the puppet
master. The default files in these directories are managed by Puppet Enterprise, but you can add
plugin config files to the mcollective/plugin.d directory.
The console's config files are in the puppet-dashboard, rubycas-server, and console-auth
subdirectories.
PuppetDB's config files are in the puppetdb subdirectory.

Log Files
What
The software distributed with Puppet Enterprise generates the following log files.

Where
Puppet Master Logs
/var/log/pe-httpd/access.log
/var/log/pe-httpd/puppetmaster.error.log
/var/log/pe-httpd/puppetmaster.access.log contains all the endpoints that have been
accessed with the puppet master REST API.
Puppet Agent Logs
The puppet agent service logs its activity to the syslog service. Your syslog configuration dictates
where these messages will be saved, but the default location is /var/log/messages on Linux and
/var/adm/messages on Solaris.
ActiveMQ Logs
/var/log/pe-activemq/wrapper.log


/var/log/pe-activemq/activemq.log
/var/opt/puppet/activemq/data/kahadb/db-1.log
/var/opt/puppet/activemq/data/audit.log
Orchestration Service Logs
/var/log/pe-mcollective/mcollective.log maintained by the orchestration service, which is
installed on all nodes.
/var/log/pe-mcollective/mcollective-audit.log exists on all nodes that have mcollective
installed; logs any mcollective actions run on the node, including information about the client
that called the node.
Console Logs
/var/log/pe-console-auth/auth.log
/var/log/pe-console-auth/cas_client.log
/var/log/pe-console-auth/cas.log
/var/log/pe-httpd/error.log contains errors related to Passenger. Console errors that don't
get logged anywhere else can be found in this log. If you have problems with the console or
Puppet, this log may be useful.
/var/log/pe-httpd/puppetdashboard.access.log contains all the endpoints that have been
accessed in the console.
/var/log/pe-httpd/puppetdashboard.error.log
/var/log/pe-puppet-dashboard/certificate_manager.log
/var/log/pe-puppet-dashboard/delayed_job.log
/var/log/pe-puppet-dashboard/event-inspector.log
/var/log/pe-puppet-dashboard/failed_reports/ contains a collection of any reports that fail to
upload to the dashboard.
/var/log/pe-puppet-dashboard/live-management.log
/var/log/pe-puppet-dashboard/mcollective_client.log
/var/log/pe-puppet-dashboard/production.log
Installer Logs
/var/log/pe-installer/http.log contains the web requests sent to the installer; present only
on the machine from which the web-based install was performed.
/var/log/pe-installer/installer-<timestamp>.log contains the operations performed and
any errors that occurred during installation.
Database Log
/var/log/pe-puppetdb/pe-puppetdb.log

/var/log/pe-postgresql/pgstartup.log
Miscellaneous Logs
These les may or may not be present.
/var/log/pe-httpd/other_vhosts_access.log
/var/log/pe-puppet/masterhttp.log
/var/log/pe-puppet/rails.log

Puppet Enterprise Software Components


PE 3.3 includes the following major software components:
Puppet 3.6.2
PuppetDB 1.6.2
Facter 1.7.5
MCollective 2.5.1
ActiveMQ 5.9.0
Live Management 1.3.1
Cloud Provisioner 1.1.6
Hiera 1.3.3
Dashboard 2.1.6
PostgreSQL 9.2.7
Ruby 1.9.3
Augeas 1.1.0
Passenger 4.0.37
Java 1.7.0
OpenSSL 1.0.0m

Additional Puppet Enterprise Components


PE installs the following additional components.
Tools for Working with Puppet Enterprise
PE installs several suites of tools to help you work with the major components of the software.
These include:
Puppet Tools: Tools that control basic functions of Puppet, such as puppet master, puppet
apply, and puppet cert. See the Tools section of the Puppet Manual for more information.
Cloud Provisioning Tools: Tools used to provision new nodes. Mostly based around the node
subcommand, these tools are used for tasks such as creating or destroying virtual machines,
classifying new nodes, etc. See the Cloud Provisioning section for more information.
Orchestration Tools: Tools used to orchestrate simultaneous actions across a number of
nodes. These tools are built on the MCollective framework and are accessed either via the mco
command or via the Live Management page of the PE console. See the Orchestration section for
more information.
Module Tools: The module tool is used to access and create Puppet modules, which are
reusable chunks of Puppet code users have written to automate configuration and deployment
tasks. For more information, and to access modules, visit the Puppet Forge.
Console: The console is Puppet Enterprise's GUI web interface. The console provides tools to
view and edit resources on your nodes, view reports and activity graphs, trigger Puppet runs, etc.
See the Console section of the Puppet Manual for more information.
For more details, you can also refer to the man page for a given command or subcommand.
Services
PE uses the following services:
pe-activemq: The ActiveMQ message server, which passes messages to the MCollective servers
on agent nodes. Runs on servers with the puppet master component.
pe-httpd: Apache 2, which manages and serves puppet master and the console on servers
with those components. (Note that PE uses Passenger to run puppet master, instead of running it
as a standalone daemon.)
pe-mcollective: The orchestration (MCollective) daemon, which listens for orchestration
messages and invokes actions. Runs on every agent node.
pe-memcached: The puppet memcached daemon. Runs on the same node as the PE console.
pe-puppet (on EL and Debian-based platforms): The puppet agent daemon. Runs on every
agent node.
pe-puppet-dashboard-workers: A supervisor that manages the console's background
processes. Runs on servers with the console component.
pe-puppetdb and pe-postgresql: Daemons that manage and serve the database components.
Note that pe-postgresql is only created if we install and manage PostgreSQL for you.
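As a rough sketch, you can check which of these services exist and are running on a given node like this (it assumes SysV-style init scripts, as PE uses on EL and Debian-based platforms):

```shell
#!/bin/sh
# Report the status of any PE services installed on this node.
checked=0
for svc in pe-activemq pe-httpd pe-mcollective pe-memcached \
           pe-puppet pe-puppet-dashboard-workers pe-puppetdb pe-postgresql; do
  checked=$((checked + 1))
  if [ -x "/etc/init.d/$svc" ]; then
    "/etc/init.d/$svc" status || echo "$svc is not running"
  fi
done
echo "checked $checked PE services"
```

Not every service runs on every node, so on an agent-only node most entries will simply be absent.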
User Accounts
PE creates the following users:
peadmin An administrative account which can invoke orchestration actions. This is the only PE
user account intended for use in a login shell. See the Invoking Orchestration Actions page of
this manual for more about this user. This user exists on servers with the puppet master
component.
pe-puppet A system user which runs the puppet master processes spawned by Passenger.
pe-apache A system user which runs Apache ( pe-httpd).
pe-activemq A system user which runs the ActiveMQ message bus used by MCollective.
puppet-dashboard: A system user which runs the console processes spawned by Passenger.
pe-puppetdb: A system user with root access to the database.
pe-auth: The PE console auth user.
pe-memcached: The PE memcached daemon user.
pe-postgres: A system user with access to the pe-postgresql instance. Note that this user is
only created if we install and manage PostgreSQL for you.
Certificates
During install, PE generates the following certificates (which can be found at
/etc/puppetlabs/puppet/ssl/certs):
pe-internal-dashboard: The certificate for the puppet dashboard.
<user-entered console certname>: The certificate for the PE console. Only generated if the
user has chosen to install the console in a split component configuration.
<user-entered PuppetDB certname>: The certificate for the database component. Only
generated if the user has chosen to install the database in a split component configuration.
<user-entered master certname>: This certificate is either generated at install if the puppet
master and console are the same machine, or is signed by the master if the console is on a
separate machine.
pe-internal-mcollective-servers: A shared certificate generated on the puppet master and
shared to all agent nodes.
pe-internal-peadmin-mcollective-client: The orchestration certificate for the peadmin
account on the puppet master.
pe-internal-puppet-console-mcollective-client: The orchestration certificate for the PE
console/live management.
pe-internal-broker: The certificate generated for the ActiveMQ instance running over SSL on
the puppet master. Added to /etc/puppetlabs/activemq/broker.ks.
A fresh PE install should thus give the following list of certificates:
root@master:~# puppet cert list --all
+ "master"
(40:D5:40:FA:E2:94:36:4D:C4:8C:CE:68:FB:77:73:AB) (alt names: "DNS:master",
"DNS:puppet", "DNS:puppet.soupkitchen.internal")
+ "pe-internal-broker"
(D3:E1:A8:B1:3A:88:6B:73:76:D1:E3:DA:49:EF:D0:4D) (alt names: "DNS:master",
"DNS:master.soupkitchen.internal", "DNS:pe-internal-broker", "DNS:stomp")
+ "pe-internal-dashboard"
(F9:10:E7:7F:97:C8:1B:2F:CC:D9:F1:EA:B2:FE:1E:79)
+ "pe-internal-mcollective-servers"
(96:4F:AA:75:B5:7E:12:46:C2:CE:1B:7B:49:FF:05:49)
+ "pe-internal-peadmin-mcollective-client"
(3C:4D:8E:15:07:41:18:E2:21:57:19:01:2E:DB:AB:07)
+ "pe-internal-puppet-console-mcollective-client"
(97:10:76:B5:3E:8D:02:D2:3D:A6:43:F4:89:F4:8B:94)

Documentation
Man pages for the Puppet subcommands are generated on the fly. To view them, run puppet man
<SUBCOMMAND>.
The pe-man command from previous versions of Puppet Enterprise is no longer functional. Use the
above method instead.
Next: Accessing the Console

Accessing the Console


The console is Puppet Enterprise's web GUI. Use it to:
Manage node requests to join the puppet deployment
Assign Puppet classes to nodes and groups
View reports and activity graphs
Trigger Puppet runs on demand
Browse and compare resources on your nodes
View inventory data
Invoke orchestration actions on your nodes
Manage console users and their access privileges

Browser Requirements
For the browser requirements, see system requirements.

Reaching the Console


The console will be served as a website over SSL, on whichever port you chose when installing the
console component.
Let's say your console server is console.domain.com. If you chose to use the default port of 443,
you can omit the port from the URL and can reach the console by navigating to:
https://console.domain.com
If you chose to use port 3000, you would reach the console at:
https://console.domain.com:3000

Note the https protocol handler: you cannot reach the console over plain http.

Accepting the Console's Certificate


The console uses an SSL certificate created by your own local Puppet certificate authority. Since this
authority is specific to your site, web browsers won't know it or trust it, and you'll have to add a
security exception in order to access the console.
This is safe to do. Your web browser will warn you that the console's identity hasn't been verified by
one of the external authorities it knows of, but that doesn't mean it's untrustworthy. Since you or
another administrator at your site is in full control of which certificates the Puppet certificate
authority signs, the authority verifying the site is you.
Accepting the Certicate in Google Chrome or Chromium
Use the Proceed anyway button on Chrome's warning page.

Accepting the Certicate in Mozilla Firefox


Click I Understand the Risks to reveal more of the page, then click the Add Exception button. On
the dialog this raises, click the Confirm Security Exception button.
Step 1:

Step 2:

Accepting the Certicate in Apple Safari


Click the Continue button on the warning dialog.

Note: Safari certificate handling may prevent console access.


Due to Apache bug 53193 and the way Safari handles certificates, Puppet Labs recommends
that PE 3.3 users avoid using Safari to access the PE console.
If you need to use Safari, you may encounter the following dialog box the first time you
attempt to access the console after installing/upgrading PE 3.3:

If this happens, click Cancel to access the console. (In some cases, you may need to click
Cancel several times.)

Accepting the Certicate in Microsoft Internet Explorer


Click the Continue to this website (not recommended) link on the warning page.

Logging In
For security, accessing the console requires a user name and password. PE allows three different
levels of user access: read, read-write, and admin. If you are an admin setting up the console or
accessing it for the first time, use the user and password you chose when you installed the console.
Otherwise, you will need to get credentials from your site's administrator. See the User
Management page for more information on managing console user accounts.

Since the console is the main point of control for your infrastructure, you will probably want to
decline your browser's offer to remember its password.
Next: Navigating the Console

Navigating the Console


Getting Around
The Main Navigation
Navigate between sections of the console using the main navigation at the top.

The following navigation items all lead to their respective sections of the console:
Nodes
Groups
Classes
Reports
Inventory Search
Live Management
Node requests
The navigation item containing your username (admin in the screenshot above) is a menu which
provides access to your account information and (for admin users) the user management tools.
The Resources menu leads to the Puppet Enterprise documentation and also provides links to the
Puppet Forge, Geppetto IDE documentation, and Puppet Labs Support and Feedback portals.
The licenses menu shows you the number of nodes that are currently active and the number of
nodes still available on your current license. See below for more information on working with
licenses.
Note: For users limited to read-only access, some elements of the console shown here will
not be visible.

The Sidebar
Within the node/group/class/report pages of the console, you can also use the sidebar as a
shortcut to many sections and subsections.

The sidebar contains the following elements:


The background tasks indicator. The console handles Puppet run reports asynchronously using
several background worker processes. This element lets you monitor the health of those
workers. The number of tasks increases as new reports come in, and decreases as the workers
finish processing them. If the number of tasks increases rapidly and won't go down, something
is wrong with the worker processes and you may need to use the advanced tasks tab to restart
the pe-puppet-dashboard-workers service on the console node. A green check-mark with the
text All systems go means the worker processes have caught up with all available reports.
The node state summary. Depending on how its last Puppet run went, every node is in one of six
states. A description of those states is available here. The state summary shows how many nodes
are in each state, and you can click any of the states for a view of all nodes in that state. You can
also click the Radiator view link for a high-visibility dashboard (see below for a screenshot) and
the Add node button to add a node before it has submitted any reports. (Nodes are automatically
added to the console after they have submitted their first report, so this button is only useful in
certain circumstances.)
The group summary, which lists the node groups in use and shows how many nodes are
members of each. You can click each group name to view and edit that group's detail page. You
can also use the Add group button to create a new group.
The class summary, which lists the classes in use and shows how many nodes have been directly
assigned each class. (The summary doesn't count nodes that receive a class due to their group
membership.) You can click each class name to view and edit that class's detail page. You can
also use the Add classes button to add a new class to the console.
A screenshot of the radiator view:

What's in the Console?


Node Requests
Whenever you install Puppet Enterprise on a new node, it will ask to be added to the deployment.
You must use the request manager to approve the new node before you can begin managing its
configuration.
Orchestration
The live management section allows you to invoke orchestration actions and browse and compare
resources on your nodes.
Nodes, Groups, Classes, and Reports
The nodes, groups, classes, and reports sections of the console are closely intertwined, and contain
tools for inspecting the status of your systems and assigning configurations to them.
See the Grouping and Classifying Nodes page for details about assigning configurations to
nodes.
See the Viewing Reports and Inventory Data page for details about inspecting the status of your
nodes.
You can export node lists, reports, and inventory tables to a CSV file using the Export as CSV link at
the top right of the table.
NODES AND NODE LISTS

Many pages in the console (including class and group detail pages) contain a node list view. A
list will show the name of each node that is relevant to the current view (members of a group, for
example), a graph of their recent aggregate activity, and a few details about each node's most
recent run. Node names will have icons next to them representing their most recent state.
Certain node lists (the main node list and the per-state lists) include a search field. This field
accepts partial node names, and narrows the list to show only nodes whose names match the
search.

Clicking the name of a node will take you to that node's node detail page, where you can see
in-depth information and assign configurations directly to the node. See the Grouping and Classifying
Nodes and Viewing Reports and Inventory Data pages for information about node detail pages.
REPORTS AND REPORT LISTS

Node detail pages contain a report list. If you click a report in this list, or a timestamp in the Latest
report column of a node list view, you can navigate to a report detail page. See the Viewing Reports
and Inventory Data page for information about report detail pages.
GROUPS

Groups can contain any number of nodes, and nodes can belong to more than one group. Each
group detail page contains a node list view.

You can use a group page to view aggregate information about its members, or to assign
configurations to every member at once. See the Grouping and Classifying Nodes page for
information about assigning configurations to groups.
CLASSES

Classes are the main unit of Puppet configuration. You must deliberately add classes to the
console with the Add classes button before you can assign them to nodes or groups. See the
Grouping and Classifying Nodes page for information about adding classes and assigning them to
nodes or groups. If you click the name of a class to see its class detail page, you can view a node list
of every node assigned that class.
Working with Licenses
The licenses menu shows you the number of nodes that are currently active and the number of
nodes still available on your current license. If the number of available licenses is exceeded, a
warning will be displayed. The number of licenses used is determined by the number of active
nodes known to PuppetDB. This is a change from previous behavior, which used the number of
unrevoked certs known by the CA to determine used licenses. The menu item provides convenient
links to purchase and pricing information.
Unused nodes will be deactivated automatically after seven days with no activity (no new facts,
catalog, or reports), or you can use puppet node deactivate for immediate results. The console
will cache license information for some time, so if you have made changes to your license file (e.g.
adding or renewing licenses), the changes may not show for up to 24 hours. You can restart the
pe-memcached service in order to update the license display sooner.
Next: Navigating the Live Management Page

Navigating Live Management


What is Live Management?
The Puppet Enterprise (PE) console's live management page is an interface to PE's orchestration
engine. It can be used to browse resources on your nodes and invoke orchestration actions.
Related pages:
See the Orchestration: Overview page for background information about the orchestration
engine.
See the Orchestration: Invoking Actions page to invoke the same orchestration actions on the
command line.

Notes: To invoke orchestration actions, you must be logged in as a read-write or admin level
user. Read-only users can browse resources, but cannot invoke actions.
Since the live management page queries information directly from your nodes rather than
using the console's cached reports, it responds more slowly than other parts of the console.

Disabling/Enabling Live Management


In some cases, after you install PE, you may find that your workflow requires live management to be
disabled. You can disable/enable live management at any time by editing the
disable_live_management setting in /etc/puppetlabs/puppet-dashboard/settings.yml on the
puppet master. Note that after making your change, you must run sudo /etc/init.d/pe-httpd
restart to complete the process.
By default, disable_live_management is set to false, but you can also configure your automated
installations or upgrades to disable/enable live management as needed during installation or
upgrade.
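For automated setups, the edit itself is a one-line substitution. The sketch below edits a temporary copy of the file so it is safe to run anywhere; on a real puppet master you would point it at the settings.yml path above and restart pe-httpd afterwards, as described.

```shell
# Sketch only: flip disable_live_management to true.
# On a real puppet master, target /etc/puppetlabs/puppet-dashboard/settings.yml
# and then run: sudo /etc/init.d/pe-httpd restart
settings=$(mktemp)                                    # stand-in for settings.yml
printf 'disable_live_management: false\n' > "$settings"
sed -i 's/^disable_live_management:.*/disable_live_management: true/' "$settings"
cat "$settings"
```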

The Node List


Every task in live management inspects or modifies a selection of nodes. Use the filterable node list
in the live management sidebar to select the nodes for your next action. (This list will only contain
nodes that have completed at least one Puppet run, which may take up to 30 minutes after you've
signed their certificates.)

Nodes are listed by the same Puppet certificate names used in the rest of the console interface.
As long as you stay within the live management page, your selection and filtering in the node list
will persist across all three tabs. The node list gets reset once you navigate to a different area of the
console.
Selecting Nodes
Clicking a node selects it or deselects it. Use the select all and select none controls to select and
deselect all nodes that match the current filter.
Only visible nodes (i.e. nodes that match the current filter) can be selected. (Note that an empty
filter shows all nodes.) You don't have to worry about accidentally commanding invisibly selected
nodes.
Filtering by Name
Use the node filter field to filter your nodes by name.

You can use the following wildcards in the node filter field:
? matches one character
* matches many (or zero) characters
Use the filter button or the enter key to confirm your search, then wait for the node list to be
updated.

Hint: Use the Wildcards allowed link for a quick pop-over reminder.
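These wildcards behave like standard shell glob patterns. As an illustration only (this is not PE code, and the node names are hypothetical), the following shell sketch shows how such patterns match:

```shell
# Illustration: ? matches exactly one character, * matches any run of
# characters (including none), just like shell glob patterns.
matches() { case "$1" in $2) echo "match";; *) echo "no match";; esac; }

matches "web01.example.com" "web??.example.com"   # each ? matches one character
matches "db.example.com"    "*.example.com"       # * matches many characters
matches "web01.example.com" "web?.example.com"    # ? cannot cover two characters
```

The first two print "match"; the last prints "no match", since a single ? cannot cover the two characters in "01".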

Advanced Search
You can also filter by Puppet class or by the value of any fact on your nodes. Click the advanced
search link to reveal these fields.

Hint: Use the common fact names link for a pop-over list of the most useful facts. Click a fact
name to copy it to the filter field.

You can browse the inventory data in the console's node views to find fact values to search with;
this can help when looking for nodes similar to a specific node. You can also check the list of core
facts for valid fact names.
Filtering by Puppet class can be the most powerful filtering tool on this page, but it requires you to
have already assigned classes to your nodes. See the chapter on grouping and classifying nodes for
more details.

Tabs
The live management page is split into three tabs.

The Browse Resources tab lets you browse, search, inspect, and compare resources on any
subset of your nodes.
The Control Puppet tab lets you invoke Puppet-related actions on your nodes. These include
telling any node to immediately fetch and apply its configuration, temporarily disabling puppet
agent on some nodes, and more.
The Advanced Tasks tab lets you invoke orchestration actions on your nodes. It can invoke both
the built-in actions and any custom actions you've installed.
The Browse Resources Tab
The interface of the Browse Resources tab is covered in the Orchestration: Browsing Resources
chapter of this manual.
The Control Puppet Tab
The Control Puppet tab consists of a single action list (see below) with several Puppet-related
actions. Detailed instructions for these actions are available in the Orchestration: Control Puppet
page of this manual.

The Advanced Tasks Tab


The Advanced Tasks tab contains a column of task navigation links in the left pane, which are used
to switch the right pane between several action lists (and a summary list, which briefly describes
each action list).

ACTION LISTS

Action lists contain groups of related actions. For example, the service list has actions for starting,
stopping, restarting, and checking the status of services:

These groups of actions come from the MCollective agent plugins you have installed, and each
action list corresponds to one plugin. Both default and custom plugins are included on the
Advanced Tasks page.

For more information on these plugins, see:


The actions and plugins section of the orchestration overview page
The list of built-in orchestration actions
The Orchestration: Adding Actions page
Note that you can also trigger all of these actions from the command line:
Invoking orchestration actions

Invoking Actions
You can invoke actions from the Control Puppet and Advanced Tasks tabs.
To invoke an action, you must be viewing an action list.
1. Click the name of the action you want. It will reveal a red Run button and any available argument
fields (see below). Some actions do not have arguments.
2. Enter any arguments you wish to use.
3. Press the Run button; Puppet Enterprise will show that the action is running, then display any
results from the action.
If several nodes have similar results, they'll be collapsed to save space; you can click any result
group to see which nodes have that result.
Invoking an action with an argument:

An action in progress:

Results:

Argument Fields
Some arguments are mandatory, and some are optional. Mandatory arguments will be denoted with
a red asterisk (*).
Although all arguments are presented as text fields, some arguments have specific format
requirements:
The format of each argument should be clear from its description; otherwise, check the
documentation for the action. Documentation for PE's built-in actions is available at the list of
built-in actions.
Arguments that are boolean in nature (on/off-type arguments) must have a value of true or
false; no other values are allowed.
Next: Managing Node Requests

Working with Node Requests


Intro/Overview
Node request management allows sysadmins to view and respond to node requests graphically,
from within the console. This means nodes can be approved for addition to the deployment without
needing access to the puppet master or using the CLI. For further security, node request
management supports the console's user management system: only users with read/write
privileges can take action on node requests.
Once the console has been properly configured to point at the appropriate Certificate Authority
(CA), it will display all of the nodes that have generated Certificate Signing Requests (CSRs). You
can then approve or deny the requests, individually or in a batch.
For each node making a request, you can also see its name and associated CSR fingerprint.
Viewing Node Requests
You can view the number of pending node requests from anywhere in the console by checking the
indicator in the top right of the main menu bar.

Click on the pending nodes indicator to view and manage the current requests.
You will see a view containing a list of all the pending node requests. Each item on the list shows
the node's name and its corresponding CSR's fingerprint. (Click on the truncated fingerprint to view
the whole thing in a pop-up.)
If there are no pending node requests, you will see some instructions for adding new nodes. If this
is not what you expect to see, the location of your Certificate Authority (CA) may not be configured
correctly.
Rejecting and Approving Nodes
The ability to respond to node requests is linked to your user privileges. You must be logged in to
the console as a user with read/write privileges before you can respond to requests.
Use the buttons to accept or reject nodes, singly or all at once. Note that once a node request is
approved, the node will not show up in the console until the next puppet run takes place. This
could be as long as 30 minutes, depending on how you have set up your puppet master. Depending
on how many nodes you have in your site total, and on the number of pending requests, it can also
take up to two seconds per request for Reject All or Accept All to finish processing.

In some cases, DNS altnames may be set up for agent nodes. In such cases, you cannot use the
console to approve/reject node requests. The CSR for those nodes must be accepted or rejected
using puppet cert on the CA. For more information, see the DNS altnames entry in the
configuration reference.
In some cases, attempting to accept or reject a node request will result in an error. This is typically
because the request has been modified somehow, usually by being accepted or rejected elsewhere
(e.g. by another user or from the CLI) since the request was first generated.
Accepted/rejected nodes will remain displayed in the console for 24 hours after the action is taken.
This interval cannot be modified. However, you can use the Clear accepted/rejected requests button
to clean up the display at any time.
WORKING WITH REQUESTS FROM THE CLI

You can still view, approve, and reject node requests using the command line interface.
You can view pending node requests in the CLI by running
$ sudo puppet cert list
To sign one of the pending requests, run:


$ sudo puppet cert sign <name>

For more information on working with certicates from the CLI, see the Puppet tools guide or view
the man page for puppet cert.
Configuration Details
By default, the location of the CA is set to the location of PE's puppet master. If the CA is in a
custom location (as in cases where there are multiple puppet masters), you will have to set the
ca_server and ca_port settings in the /opt/puppet/share/puppet-dashboard/config/settings.yml file.
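As a sketch, the relevant settings.yml entries might look like the following; ca.example.com and 8140 are placeholder values for your own CA's hostname and port:

```yaml
# Excerpt from /opt/puppet/share/puppet-dashboard/config/settings.yml
# (hostname and port below are placeholders, not defaults)
ca_server: 'ca.example.com'
ca_port: 8140
```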
When upgrading PE from a version before 2.7.0, the upgrader will convert the currently installed
auth.conf file to one that is fully managed by Puppet and which includes a new rule for request
management. However, if auth.conf has been manually modified prior to the upgrade, the
upgrader will NOT convert the file. Consequently, to get it working, you will need to add the new
rule manually by adding the code below into /etc/puppetlabs/puppet/auth.conf:

path /certificate_status
method find, search, save, destroy
auth yes
allow pe-internal-dashboard

Request Management Modules


PE installs three modules needed for node request management: puppetlabs-request_manager,
puppetlabs-auth_conf, and puppetlabs-concat. These are installed in the
/opt/puppet/share/puppet/modules/ directory, and usually shouldn't be modified.
The puppetlabs-auth_conf module contains a new defined type: auth_conf::acl. The type takes
the following parameters:
parameter     description                  value types    default value  required
path          URL path of ACL              string         $title         no
acl_method    find, search, save, delete   string, array                 no
auth          yes, no, any                 string         yes            no
allow         certnames to access path     array          []             no
order         order in auth.conf file      string         99             no
regex         is the path a regex?         bool           false          no
environment   environments to allow        string                        no
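As an untested sketch, the request-management rule shown earlier for auth.conf might alternatively be declared with this defined type; the resource title and parameter values here simply mirror that rule, and are illustrative rather than a PE-documented example:

```puppet
# Hypothetical illustration: the /certificate_status rule declared via
# the auth_conf::acl defined type instead of hand-editing auth.conf.
auth_conf::acl { '/certificate_status':
  acl_method => ['find', 'search', 'save', 'destroy'],
  auth       => 'yes',
  allow      => ['pe-internal-dashboard'],
}
```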

Next: Grouping and Classifying Nodes

Grouping and Classifying Nodes


This page describes how to use the Puppet Enterprise (PE) console to assign configurations to
nodes. (For help with inspecting status and activity among your nodes, see the Viewing Reports and
Inventory Data page.)

Note: To use the console to assign node configurations, you must be logged in as a read-write
or admin level user. Read-only users can view node configuration data, but cannot
modify it.

Overview: Assigning Configurations With the PE Console


Note: As described in the Puppet section of this manual, node configurations are compiled
from a variety of sources, including the PE console.
For a complete description of Puppet Enterprise's configuration data sources, see the
Assigning Configurations to Nodes page of the Puppet section of this manual.
Puppet classes are the primary unit of node configuration in PE. Classes are named blocks of
Puppet code that can be either declared by other Puppet code or directly assigned to nodes or
groups of nodes.
The console allows you to assign classes and configure their behavior.

Creating Puppet Classes


Before you can assign classes in the console, the classes need to be available to the puppet
master server. This means they must be located in an installed module. There are several
ways to get modules:
Download modules from the Puppet Forge: in addition to the many public modules
available for free, the Puppet Forge provides PE supported modules. PE supported
modules are rigorously tested with PE and are supported by Puppet Labs via the usual
support channels.
Write your own classes, and put them in a module.
If you are new to Puppet and have not written Puppet code before, follow the Learning
Puppet tutorial, which walks you through the basics of Puppet code, classes, and modules.
Navigating the Console


See the Navigating the Console page for details on navigating the PE console.
See the Viewing Reports and Inventory Data page for details on inspecting the status and recent
activity of your nodes.

Classes
The classes the console knows about are a subset of the classes available to the puppet master. You
must explicitly add classes to the console before you can assign them to any nodes or groups.
Adding New Classes
To add a new class to the console, navigate to the Add classes page by clicking one of the
following:
The Add classes button in the console's sidebar
The Add new classes link in the upper right corner of the class list page

The Add classes page allows you to easily add classes that are detected on the puppet master
server, as well as manually add classes that can't be autodetected.

ADDING DETECTED CLASSES

The Add classes page displays a list of classes from the puppet master server. The list only includes
classes from the default production environment; classes that only exist in other environments
(test, dev, etc.) will not be in the list and must be added manually (see below).
To select one or more classes from the list, click the checkbox next to each class you wish to add.
To browse more easily, you can use the text field above the list, which filters the list as you type.
Filtering is not limited to the start of a class name; you can type substrings from anywhere within
the class name.

Once you have selected the classes you want, click the Add selected classes button at the bottom of
the page to finalize your choices. The classes you added can now be assigned to nodes and groups.
Note that you must click Add selected classes to finish; otherwise your classes will not be added
to the console.
VIEWING DOCUMENTATION FOR DETECTED CLASSES

The list of detected classes includes short descriptions, which are extracted from comments in the
Puppet code where the class is defined.
To view the full documentation from these comments, you can click the show more link next to a
description. This will display the docs for that class, formatted using RDoc markup.

MANUALLY ADDING CLASSES

You may need to manually add certain classes to the console. This can be necessary if you are
running multiple environments, some of which contain classes that cannot be found in the
production environment.
To manually add a class, use the text fields under the Don't see a class? header near the bottom of
the page.
1. Type the complete, fully qualified name of the class in the class name field.
2. Optionally, type a description for the class in the description field.
3. Click the green plus (+) button to the right of the text fields, which becomes enabled after you
have entered a name.

After you click the plus (+) button, the class will appear in a new list below, with its checkbox
already selected. You may now click the Add selected classes button at the bottom of the page to
finish adding the class, or you can select additional classes, either manually or from the list of
detected classes. You must click Add selected classes to finish; otherwise, your classes will not
be added to the console.


Once you have finished adding a class, you can assign it to nodes and groups.
If you change your mind about adding a class you entered manually, you can click the remove link
next to it in the list. You can then continue selecting more classes.
Viewing the Known Classes
There are two lists of classes in the console: one in the console's sidebar, and one reached by
clicking the Classes item in the main navigation.
The sidebar list also includes counts of nodes with the class assigned, but these numbers are not
complete: they only include nodes that have the class directly assigned, excluding nodes that
receive the class from a group.
In the class list page, reached by clicking the Classes navigation item, classes that were manually
added are marked with an asterisk (*) to show that they are not available in the puppet master's
production environment.
Class Detail Pages
You can view an individual class detail page by clicking the name of that class in one of the
following places:
The sidebar's class list
The class list page
A node or group detail page
Class detail pages contain a description of the class, a recent run summary, and a list of all nodes to
which the class is assigned. The node list includes a source column that shows, for each node,
whether the class was assigned directly or via a group. (When assigned via a group, the group name
is a link to the group detail page.)
The upper right corner of a class detail page has an Edit button that you can use to change the
name and description of the class. There is also a Delete button for removing a class.

For classes added from the autodetected list, the description on the class detail page will be
automatically filled in with documentation extracted from that class's Puppet code. However, this
documentation will be displayed raw instead of formatted as RDoc markup.

Nodes
Node Detail Pages
Each node in a Puppet Enterprise deployment has its own node detail page in the PE console. You can reach a node detail page by clicking that node's name in any node list view.
From a node detail page, you can:
View the node's current variables, groups, and classes
Click the Edit button to navigate to the node edit page
Hide the node, causing it to stop appearing in node list views
Delete the node, removing all reports and information about that node from the console (it will reappear as a new node if it submits a new Puppet run report)
View the node's recent activity and run status (see Viewing Reports & Inventory Data)
View the node's inventory data (see Viewing Reports & Inventory Data)

Viewing Current Configuration Data


Each node detail page has three tables near the top that display the current variables, groups, and classes assigned to that node. Each of these tables has a source column.
If the source of an item is the node's own name, it was assigned directly to that node. You can change it by editing the node.
If the source of an item is the name of a group, the item was assigned to that group and the node inherited it. The group name is a link to the group detail page; if you need to change the item, you can navigate to the group's page.
In PE 3.1, class parameters are not shown on the node detail page; to see them, you must go to the node edit page or the group edit page, if the class is inherited from a group.
Node Edit Pages
Clicking the Edit button on a node detail page navigates to the node edit page, which allows you to edit the node's classes, groups, and variables. You can also add an optional description for the node.
The main functions of node edit pages are described below.

Editing Classes on Nodes


Assigning a class to a node will cause that node to manage the resources declared by that class. Some classes may need to be configured by setting either variables or class parameters. See Puppet: Assigning Configurations to Nodes for more background information.
To assign a class, start typing the class's name into the Add a class text field on the node edit page. As you type, an auto-completion list of the most likely choices appears; the list continues to narrow as you type more. To finish selecting a class, click a choice from the list or use the arrow keys to select one and press enter.

Note: You can only assign classes that are already known to the console. See Adding New
Classes on this page for details.
To remove a class from a node, click the Remove class link next to the class's name. Note that classes inherited from a group can't be modified from the node edit page; you must either edit the class from the group page, or remove the node from that group.
To edit class parameters for a class, click the Edit parameters link next to its name. See the next
section of this page for details.
After making edits, always click the __Update__ button to save your changes.

Editing Class Parameters on Nodes


After you have assigned a class to a node, you can set class parameters to configure it. (See Puppet: Assigning Configurations to Nodes for more details.) Note that if the class was inherited from a group, its parameters can't be modified from the node edit page; you must edit them from the group page, or else explicitly add the class to the node.
To set class parameters, click the Edit parameters link next to a class name on a node edit page.
This will bring up a class parameters dialog.

The class parameters dialog allows you to easily add values for any parameters that can be detected from the puppet master server. It also lets you manually add parameters that can't be autodetected.
Note: Class parameters can be strings, booleans, numbers, hashes, or arrays. The PE console will automatically convert the strings "true" and "false" to real boolean values. Hashes and arrays should be expressed using Ruby-style syntax.
ADDING VALUES FOR DETECTED PARAMETERS

The class parameters dialog displays a list of parameters from the puppet master server. The list only includes the parameters this class has in the default production environment. If a version of this class in another environment has extra parameters, or if the class doesn't exist in production, those parameters won't appear and must be added manually.
The main (autodetected) parameter list includes the names of the known parameters under the Key
heading, and their current values.
Parameters that are using their default values will have that value shown in grey text. This value
may be a literal value, or it may be a Puppet variable. (This is generally the case for modules that
use the params class pattern, or for classes whose parameters default to fact values.) You can
enter a new value if you choose.
Parameters that have had values set by a user are displayed with black text and a blue
background. They also have a Reset to default control next to the value.
Parameters with no user-set value and no default value are displayed with a white background
and no text. These parameters generally must be assigned a value before the class will work.
To add or change a value for a detected parameter, type a new value in the Value field. Alternately, you can use the Reset to default control next to the value to restore the default value. Default values can be viewed in a tooltip by hovering your cursor over the Value field for the parameter.
Remember to click the __Done__ button to exit the dialog, and click the __Update__ button on the node edit page to save your changes.
MANUALLY ADDING PARAMETERS

You may need to manually add certain parameters for a class. This can be necessary if you are running multiple environments and some of them contain newer versions of certain classes that include parameters that can't be found in the production versions.
To manually add a parameter, use the text fields under the Other parameters header.

Type the name of the class parameter in the Add a parameter field, then type a value in the Value field. Click the green plus (+) button to the right of the text fields, which becomes enabled after you have entered a name.

Instead of a Reset to default control, the list of manually-added parameters includes Delete links for
each parameter, which will remove the parameter and its value.
Remember to click the __Done__ button to exit the dialog, and then click the __Update__ button on
the node edit page to save your changes.
SUPPORTED DATA TYPES

Class parameters support the following data types:

* Strings (e.g., `"centos"` or `'centos'`)
* Booleans (e.g., `true` or `false`)
* Numbers (e.g., `123`)
* Hashes (e.g., `{'a'=>1}`)
* Arrays (e.g., `[1,2,3]`)

Any data type not recognized as a boolean, number, hash, or array will be treated as a string.
Hashes and arrays are expressed using Ruby-style syntax.
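These conversion rules can be pictured with a short Ruby sketch. This is purely illustrative and not PE's actual code; the `coerce` helper (and its use of `eval` for Ruby-style literals) is an assumption for demonstration only.

```ruby
# Illustrative sketch of the coercion rules above; not PE's actual code.
def coerce(value)
  case value.strip
  when 'true'  then true                      # the strings true/false become real booleans
  when 'false' then false
  when /\A-?\d+\z/      then Integer(value)   # whole numbers
  when /\A-?\d+\.\d+\z/ then Float(value)     # decimal numbers
  when /\A[\[{].*[\]}]\z/m
    eval(value) # Ruby-style hash/array literals, e.g. {'a'=>1} or [1,2,3]
  else
    value       # anything unrecognized stays a string
  end
end

coerce('true')      # => true
coerce("{'a'=>1}")  # => {"a"=>1}
coerce('centos')    # => "centos"
```

A real implementation would validate input before evaluating it; `eval` is used here only to keep the sketch short.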
Editing Groups on Nodes
Assigning a node to a group will cause that node to inherit all of the classes, class parameters, and variables assigned to that group. It will also inherit the configuration data from any group that group is a member of.
Nodes can override the configuration data they inherit from their group(s); the main limitation on this is that you must explicitly add a class to a node before assigning class parameters that differ from those inherited from a group.
To add a node to a group, start typing the group's name into the Add a group text field on the node edit page. As you type, an auto-completion list of the most likely choices appears; the list continues to narrow as you type more. To finish selecting a group, click a choice from the list or use the arrow keys to select one and press enter.
To remove a node from a group, click the Remove node from group link next to the group's name. Note that groups inherited from another group can't be removed via the node edit page; you must either remove the group from the other group's page, or remove the node from the other group.
Note that you can also edit group membership from a group edit page.

Editing Variables on Nodes


You can also set variables from a node's edit page. Variables set in the console become top-scope variables available to all Puppet manifests.
To add a variable, look under the Variables heading. You should put the name of the variable in the Key field and the value in the Value field.
There will always be at least one empty pair of variable fields on a node's edit page. You can use the Add variable button to add more empty fields, in order to add multiple variables at once. You can also edit existing variables, or use the grey delete (x) button to delete a variable entirely.

Note: Variables can only be strings. The PE console does not support setting arrays, hashes,
or booleans as variables.
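A variable set in the console is then usable at top scope in any manifest. As a quick illustration (the `datacenter` key and `eu-west` value here are hypothetical examples, not defaults):

```puppet
# Suppose the console sets the variable "datacenter" to the string "eu-west"
# for this node; manifests can then read it as a top-scope variable.
notify { "This node lives in the ${::datacenter} datacenter": }
```

Because console variables are always strings, manifests that need booleans or numbers must convert them explicitly.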

Groups
Groups let you assign classes and variables to many nodes at once. This saves you time and makes
the structure of your site more visible.
Nodes can belong to many groups, and will inherit classes and variables from all of them. Groups can also be members of other groups, and will inherit configuration information from their parent group the same way nodes do.
Special Groups
Puppet Enterprise automatically creates and maintains several special groups in the console:
THE DEFAULT GROUP

The console automatically adds every node to a group called default. You can use this group for
any classes you need assigned to every single node.
Nodes are added to the default group by a periodic background task, so it may take a few minutes after a node first checks in before it joins the group.
THE MCOLLECTIVE AND NO MCOLLECTIVE GROUPS

These groups are used to manage Puppet Enterprise's orchestration engine.


The no mcollective group is manually managed by the admin user. You can add any node that
should not have orchestration features enabled to this group. This is generally used for non-PE
nodes like network devices, which cannot support orchestration.
The mcollective group is automatically managed by a periodic background task; it contains
every node that is not a member of the no mcollective group. Admin users can add classes to
this group if they have any third-party classes that should be assigned to every node that has
orchestration enabled. However, you should not remove the pe_mcollective class from this
group.
THE MASTER, CONSOLE, AND PUPPETDB GROUPS

These groups are created when initially setting up a Puppet Enterprise deployment, but nodes are not automatically added to them.
puppet_master: this group contains the original puppet master node.
puppet_console: this group contains the original console node.
puppet_puppetdb: this group contains the original database support node.
Adding a New Group
Use the Add group button in the console's sidebar or the Add group link in the main groups page, then enter the group's name and any classes, groups, variables, and nodes you want to assign to the new group.

Group Detail Pages


You can see a list of groups in the Groups section of the sidebar, or by clicking the Groups item in
the main navigation.
Clicking the name of a group in a group list or the node detail page of one of that group's members will take you to its group detail page.

From a group detail page, you can view the currently assigned configuration data for that group, or use the Edit button to assign new configuration data. You can also delete the group, which will cause any members to lose membership in the group.
Group detail pages also show any groups of which that group is a member (under the Groups header) and any groups that are members of that group (under the Derived groups header).
Editing Nodes on Groups
You can change the membership of a group from both node edit pages and group edit pages.
To add a node to a group from a group edit page, start typing into the Add a node text field. As you type, an auto-completion list of the most likely choices appears; the list continues to narrow as you type more. To finish selecting a node, click a choice from the list or use the arrow keys to select one and press enter.

Editing Classes, Class Parameters, and Variables on Groups


Editing classes, class parameters, and variables for a group works much the same way as editing
them for a single node. See the following sections above for details:
Assigning classes
Setting class parameters
Setting variables
The one major difference involves variable and class parameter conflicts. Since nodes can belong to multiple groups, and since groups are not necessarily arranged in a strict hierarchy, it's possible for two equal groups to contribute conflicting values for variables and for class parameters.
If you attempt to set values that would cause a conflict, the PE console will warn you and give you a chance to back out. The warning will show where the conflict is arising, and which nodes are affected.
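The underlying rule is simple: a conflict arises when two groups supply different values for the same key. A minimal Ruby sketch of that check (illustrative only; the data layout is an assumption, not PE's schema):

```ruby
# group_vars maps group name => { variable name => value }.
def conflicts(group_vars)
  merged = Hash.new { |h, k| h[k] = {} }
  group_vars.each do |group, vars|
    vars.each { |key, val| merged[key][group] = val }
  end
  # A key conflicts when two or more groups supply different values for it.
  merged.select { |_key, sources| sources.values.uniq.size > 1 }
end

conflicts('web' => { 'port' => 80 }, 'ssl' => { 'port' => 443 })
# => {"port"=>{"web"=>80, "ssl"=>443}}
```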

If you choose to go ahead and create a conflict, any affected nodes will receive reduced configurations from the puppet master: the console will decline to provide any configuration data for those nodes until you resolve the conflict. Note that this will not necessarily appear as a run failure; the node will simply not attempt to manage resources that would have been managed by classes from the PE console. To restore the nodes to full management, you must fix the conflict.
When viewing a node page, conflicts are shown as red warning (!) icons next to the affected variables or classes. You can click the icon to bring up a summary of the conflict, showing the sources of the conflicting values.

Editing Groups on Groups


Groups can also be members of other groups. Nodes that belong to a group will also inherit configuration data from any groups that group belongs to.
Adding group membership from a group works the same way as adding group membership from a node.

Automating Class and Group Edits


The console provides rake tasks that can add classes, nodes, and groups, and edit the configuration data assigned to nodes and groups. You can use these tasks as a minimal API to automate workflows, import or export data, or bypass the console's GUI when performing large tasks.
For information about these tasks, see the Rake API page.
Next: Using Event Inspector

Using Event Inspector


Puppet Enterprise (PE) event inspector is a reporting tool that provides data for investigating the current state of your infrastructure. Its focus is on correlating information and presenting it from multiple perspectives, in order to reveal common causes behind related events. Event inspector provides insight into how Puppet is managing configurations, and what is happening where when events occur.
Event inspector lets you accomplish two important tasks: monitoring a summary of your infrastructure's activity and analyzing the details of important changes and failures. Event inspector lets you analyze events from several different perspectives, so you can reject noise and choose the context that best allows you to understand events that concern you.

Structure and Terminology


Navigating Event Inspector
Event inspector can be reached by clicking Events in the console's main navigation bar.

The event inspector page displays two panes of data. Clicking an item will show its details (and any
sub-items) in the detail pane on the right. The context pane on the left always shows the list of
items from which the one in the right pane was chosen, to let you easily view similar items and
compare their states.
To backtrack out of the current list of items, you can use the breadcrumb navigation or the previous button (appearing left of the left pane after you've drilled in at least one level). The back and forward buttons in your browser will behave normally, returning you to the previously loaded URL.
You can also bookmark pages as you investigate events on classes, nodes, and resources, allowing you to return to a previous set of events. However, after subsequent Puppet runs, the contents of the bookmarked pages may be different when you revisit them. Also, if there are no changes for a selected time period, the bookmarks may show default text indicating there were no events on that class, node, or resource.

Note: Refreshing and Time Periods


The event inspector page does not refresh automatically; it fetches data once when loading, and uses this same batch of data until the page is closed or reloaded. This ensures that shifting data won't accidentally disrupt an investigation.
You can see how old the current data is by checking the timestamp at the top of the page.
Reload the page in your browser to update the data to the most recent events.
You can also restrict the time period over which event inspector is reporting by using the
drop-down time period restriction menu. Event inspector does not display events that
happened more than 24 hours in the past.

You can export data in the right pane to a CSV file using the Export table as CSV link at the top right of the pane.
Events
An event is PE's attempt to modify an individual property of a given resource. During a Puppet run, Puppet compares the current state of each property on each resource to the desired state for that property. If Puppet successfully compares them and the property is already in sync (the current state is the desired state), Puppet moves on to the next property without noting anything. Otherwise, it will attempt some action and record an event, which will appear in the report it sends to the puppet master at the end of the run. These reports provide the data event inspector presents.
There are four kinds of events, all of which are shown in event inspector:
Change: a property was out of sync, and Puppet had to make changes to reach the desired state.
Failure: a property was out of sync; Puppet tried to make changes, but was unsuccessful.
No-op: a property was out of sync, but Puppet was previously instructed to not make changes on
this resource (via either the --noop command-line option, the noop setting, or the noop =>
true metaparameter). Instead of making changes, Puppet will log a no-op event and report the
changes it would have made.
Skip: a prerequisite for this resource was not met, so Puppet did not compare its current state to the desired state. (This prerequisite is either a failure in one of the resource's dependencies or a timing limitation set with the schedule metaparameter.) The resource may be in sync or out of sync; Puppet doesn't know yet.
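The decision process behind these four kinds can be summarized in a short Ruby sketch (a simplification for illustration; the property hash layout is an assumption, not PE's internal representation):

```ruby
# Classify what kind of event (if any) a single property check produces.
def event_for(property)
  # Skipped: a dependency failed or the schedule window hasn't arrived.
  return :skip if property[:dependency_failed] || property[:not_scheduled]
  # In sync: no event is recorded at all.
  return nil if property[:current] == property[:desired]
  # Out of sync under --noop / noop => true: simulate instead of enforcing.
  return :noop if property[:noop]
  # Otherwise Puppet acts, and either succeeds (change) or fails (failure).
  property[:change_succeeded] ? :change : :failure
end
```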
Perspectives
Event inspector can use three perspectives to correlate and contextualize information about events:
Classes
Nodes
Resources
For example, if you were concerned about a failed service, say Apache or MongoDB, you could start
by looking into failed resources or classes. On the other hand, if you were experiencing a
geographic outage, you might start by drilling into failed node events.
Switching between perspectives can help you find the common threads among a group of failures, and follow them to a root cause. One way to think about this is to see the node as where an event takes place, while a class shows what was changed, and a resource shows how that change came about.

Summary View: Monitoring Infrastructure


When event inspector rst loads, the left pane contains the summary view. This list is an overview of
recent Puppet activity across your whole infrastructure, and can help you rapidly assess the
magnitude of any issues.
The summary view is split into three sub-lists, with one for each perspective (classes, nodes, and resources). Each sub-list shows the number of events for that perspective, both as per-event-type
counts and as bar graphs which measure against the total event count from that perspective. (For
example, if four classes have events, and two of those classes have events that are failures, the
Classes with events bar graph will be at 50%.)
You can click any item in the sub-lists (classes with failures, nodes with events, etc.) to load more specific info into the detail pane and begin looking for the causes of notable events. Until an item is selected, the right pane defaults to showing classes with failures.

Analyzing Changes and Failures


Once the summary view has brought a group of events to your attention, you can use event inspector to analyze their root causes. Event inspector groups events into types based on their role in Puppet's configuration code. Instead of taking a node-centric perspective on a deployment, event inspector takes a more holistic approach by adding the class and resource views. One way to think about this is to see the node as where an event takes place, while a class shows what was changed, and a resource shows how that change came about. To see how this works in a practical sense, let's work through an example.


Assume you are a sysadmin and Puppet developer for a large web commerce enterprise. While you were in a meeting, your team started rolling out a new deployment of web servers. In the summary pane's default initial classes view, you note that a failure has been logged for the Testweb class that you use for test configurations on new web server instances.

After you click Testweb, you can select the Nodes with failures tab or the Resources with failures
tab, depending on how you want to investigate the failure on the class.
You click the Resources with failures tab, which loads a detail view showing failed resources. In this case, you can see in the detail pane that there is an issue with a file resource, specifically /var/www/first/.htaccess.

Next, you drill down further by clicking on the failed resource in the detail pane. Note that the left pane now displays the failed resource info that was in the detail pane previously. This helps you stay aware of the context you're searching in. You can use the previous button next to the left pane, the breadcrumb trail at the top, or the back button in your browser to step back through the process, if you wish.
After clicking the failed resource, the detail pane now shows the node it failed on.

You bookmark this page and email the link to your team so they can see the specifics of the failure.
You click on the failure, and the detail pane loads the specifics of the failure, including the config version associated with the run and the specific line of code and manifest where the error occurs. You see from the error message that the error was caused by the manifest trying to set the owner of the file resource to a non-existent user (Message: Could not find user www-data) on the intended platform.

You now know the cause of the failure and which line of which manifest you need to edit to resolve the issue. If you need help figuring out the issue with your code, you might wish to try Geppetto, an IDE that can help diagnose puppet code issues. You'll probably also be having a word with your colleagues regarding the importance of remembering the target OS when working on a module!

Tips & Issues


RUNS THAT RESTART PUPPETDB NOT DISPLAYED

If a given puppet run restarts PuppetDB, puppet will not be able to submit a run report from that run to PuppetDB since, obviously, PuppetDB is not available. Because event inspector relies on data from PuppetDB, and PuppetDB reports are not queued, event inspector will not display any events from that run. Note that in such cases, a run report will be available via the console's Reports tab.
Having a puppet run restart PuppetDB is an unlikely scenario, but one that could arise in cases where some change to, say, a parameter in the puppetdb class causes the pe-puppetdb service to restart. This is a known issue that will be fixed in a future release.
RUNS WITHOUT EVENTS NOT DISPLAYED

If a run encounters a catastrophic failure where an error prevents a catalog from compiling, event inspector will not display any failures. This is because no events actually occurred. It's important to remember that event inspector is primarily concerned with events, not runs.
TIME SYNC IS IMPORTANT

Keeping time synchronized across your deployment will help event inspector produce accurate
information and keep it running smoothly. Consider running NTP or similar across your
deployment. As a bonus, NTP is easily managed with PE and doing so is an excellent way to learn
puppet and PE if you are new to them. The PE Deployment Guide can walk you through one simple
method of NTP automation.
SCHEDULED RESOURCES LOG SKIPS

If the schedule metaparameter is set for a given resource, and the scheduled time has not yet arrived, that resource will log a skip event in event inspector. Note that this is only true for user-defined schedules and does not apply to built-in scheduled tasks that happen weekly, daily, etc.
SIMPLIFIED DISPLAY FOR SOME RESOURCE TYPES

For resource types that take the ensure property (e.g., user or file resource types), when the resource is first created, event inspector will only display a single event. This is because puppet has only changed one property (ensure), which sets all the baseline properties of that resource at once. For example, all of the properties of a given user are created when the user is added, just as they would be if the user was added manually. If a PE run changes properties of that user resource later, each individual property change will be shown as a separate event.
Next: Viewing Reports and Inventory Data

Viewing Reports and Inventory Data


When nodes fetch their configurations from the puppet master, they send back inventory data and a report of their run. These end up in the console, where you can view them in that node's detail page.

Node States
Depending on how its last Puppet run went, every node is in one of six states. Each state is indicated by a specific color in graphs and the node state summary, and by an icon beside the report or the node name in a report list or node list view.
Unresponsive: The node hasn't reported to the puppet master recently; something may be wrong. The cutoff for considering a node unresponsive defaults to one hour, and can be configured in settings.yml with the no_longer_reporting_cutoff setting. Represented by dark grey text. This state has no icon; the node retains whatever icon the last report used.
Failed: During its last Puppet run, this node encountered some error from which it couldn't recover. Something is probably wrong, and investigation is recommended. Represented by red text or the failed icon.
No-op: During its last Puppet run, this node would have made changes, but since it was either running in no-op mode or found a discrepancy in a resource whose noop metaparameter was set to true, it simulated the changes instead of enforcing them. See the node's last report for more details. Represented by orange text or the pending icon.
Changed: This node's last Puppet run was successful, and changes were made to bring the node into compliance. Represented by blue text or the changed icon.
Unchanged: This node's last Puppet run was successful, and it was fully compliant; no changes were necessary. Represented by green text or the unchanged icon.
Unreported: Although Dashboard is aware of this node's existence, it has never submitted a Puppet report. It may be a newly-commissioned node, it may have never come online, or its copy of Puppet may not be configured correctly. Represented by light grey text or the error icon.
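The unresponsive cutoff mentioned above is a console setting. A sketch of the relevant settings.yml fragment, assuming the value is expressed in seconds (check your own settings.yml for the exact location and form):

```yaml
# settings.yml fragment: consider nodes unresponsive after 2 hours
# instead of the 1-hour default (value assumed to be in seconds).
no_longer_reporting_cutoff: 7200
```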

Reading Reports
Graphs
Each node detail page has a pair of graphs: a histogram showing the number of runs per day and
the results of those runs, and a line chart tracking how long each run took. (Run status histograms
also appear on class detail pages, group detail pages, and last-run-status pages.)

The daily run status histogram is broken down with the same colors that indicate run status in the console's sidebar: red for failed runs, orange for pending runs (where a change would have been made, but the resource to be changed was marked as no-op), blue for successful runs where changes were made, and green for successful runs that did nothing.
The run-time chart graphs how long each of the last 30 Puppet runs took to complete. A longer run
usually means changes were made, but could also indicate heavy server load or some other
circumstance.
Reports
Each node page has a short list of recent reports, with a More button at the bottom for viewing
older reports:

Each report represents a single Puppet run. Clicking a report will take you to a tabbed view that
splits the report up into metrics, log, and events.
Metrics is a rough summary of what happened during the run, with resource totals and the time spent retrieving the configuration and acting on each resource type.

Log is a table of all the messages logged during the run.

Events is a list of the resources the run managed, sorted by whether any changes were made. You
can click on a changed resource to see which attributes were modied.

Viewing Inventory Data


Each node's page has a section called inventory. This section contains all of the fact values reported by the node on its most recent run.


Facts include things like the operating system (operatingsystem), the amount of memory (memorytotal), and the primary IP address (ipaddress). You can also add arbitrary custom facts to your Puppet modules, and they too will show up in the inventory.
The facts you see in the inventory can be useful when filtering nodes in the live management page.

Exporting Data
You can export the inventory and report tables to a CSV file using the Export as CSV link at the top right of the tables.
Next: Managing Users

Managing Console Users


The Puppet Enterprise console supports individual user management, access, and authentication. Instead of a single, shared username and password authenticated over HTTP with SSL, the console allows secure individual user accounts with different access privileges. Specifically, user accounts allow the assignment of one of three access levels: read-only, read-write, or admin.
Console users can also be managed using external, third-party authentication services such as
LDAP, Active Directory or Google Accounts.
Following standard security practices, user passwords are hashed with a salt and then stored in a
database separated from other console data. Authentication is built on CAS, an industry standard,
single sign-on protocol. Security is further enhanced by an account lockout mechanism that locks
user accounts after ten failed login attempts. This diminishes the likelihood of a successful brute
force attack.
Note: By default, CAS authentication for the console runs over port 443. If your console needs to
access CAS on a dierent host/port, you can congure that in /etc/puppetlabs/consoleauth/cas_client_config_yml.

User Access and Privileges


Depending on the access privileges assigned to them, users will be able to see and access different
parts of the console:
Read-Only Users can only view information on the console, but cannot perform any actions. In
particular, read-only users are restricted from:
accessing the Control Puppet tab in live management
accessing the Advanced Tasks tab in live management
adding, editing, or removing nodes, groups, or classes
Read-Write Users have access to all parts of the console EXCEPT the user-management interface.
Read-write users can interact with the console and use it to perform node management tasks.
Admin Users have unrestricted access to all parts of the console, including the user-management
interface. Through this interface, admin users can:
add a new user
delete a user
change a user's role
re-enable a disabled user
disable an enabled user
edit a user's email
prompt a change to the user's password
There is one exception to this: admin users cannot disable, delete, or change the privileges of their
own accounts. Only another admin user can make these changes.
Anonymous Users In addition to authenticated, per-user access, the console can also be configured
to allow anonymous, read-only access. When so configured, the console can be viewed by anyone
with a web browser who can access the site URL. For instructions on how to do this, visit the
console configuration page.

Managing Accounts and Users Internally


Signing Up
In order to sign up as a console user at any access level, an account must be created for you by an
admin. Upon account creation, you will receive an email containing an activation link. You must
follow this link in order to set your password and activate your account. The link will take you to a
screen where you can enter and confirm your password, thereby completing account activation.
Once you have completed activation, you will be taken to the login screen where you can enter your
new credentials.

Logging In
You will encounter the login screen whenever you try to access a protected part of the console. The
screen will ask for your email address and password. After successfully authenticating, you will be
taken to the part of the console you were trying to access.
When you're done working in the console, choose Logout from the user account menu. Note that
you will be logged out automatically after 20 minutes.

Note: User authentication services rely on a PostgreSQL database. If this database is restarted for
any reason, you may get an error message when trying to log in or out. See known issues for more
information.
Viewing Your User Account
To view your user information, access the user account menu by clicking on your username (the
rst part of your email address) at the top right of the navigation bar.

Choose My account to open a page where you can see your username/email and your user access
level (admin, read-write, or read-only), as well as text boxes for changing your password.

User Administration Tools


Users with admin-level access can view information about users and manage their access, including
adding and deleting users as needed. Admin-level users will see an additional menu choice in the
user account menu: Admin Tools. Users with read-write or read-only accounts will NOT see the
Admin Tools menu item.

VIEWING USERS AND SETTINGS

Selecting Admin Tools will open a screen showing a list of users by email address, their access role
and status. Note that users who have not yet activated their accounts by responding to the
activation email and setting a password will show a status of pending.

Click on a user's row to open a pop-up pane with information about that user. The pop-up will
show the user's name/email, their current role, their status, and other information. If the user has
not yet validated their account, you will also see the link that was generated and included in the
validation email. Note that if there is an SMTP issue and the email fails to send, you can manually
send this link to the user.

MODIFYING USER SETTINGS

To modify the settings for a given user, click on the user's row to open the pop-up pane. In this
pane, you can change their role and their email address or reset their password. Don't forget to
click the Save changes button after making your edits.
Note that resetting a password or changing an email address will change that user's status back to
Pending, which will send them another validation email and require them to complete the validation
and password-setting process again.
For users who have completed the validation process, you can also enable or disable a user's
account. Disabling the account will prevent that user from accessing the console, but will not
remove them from the users database.

ADDING/DELETING USERS

To add a new user, open the user admin screen by choosing Admin Tools in the user menu. Enter
the user's email address and their desired role, then click the Add user button. The user will be
added to the list with a pending status, and an activation email will be automatically sent to them.
To delete an existing user (including pending users), click on the user's name in the list and then
click the Delete account button. Note that deleting a user cannot be undone, so be sure this is what
you want to do before proceeding.
Working with Users From the Command Line
Several actions related to console users can be performed from the command line using rake tasks. This
can be useful for things like automating user creation/deletion or importing large numbers of
users from an external source all at once. All of these tasks should be run on the console server
node.
Note that console_auth rake tasks that list, add, or remove users must be run using the bundle
exec command. For example:

cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:list

The console_auth rake tasks will add their actions to the console_auth log, located by default at
/var/log/pe-console-auth/auth.log.
ADDING OR MODIFYING USERS

The db:create_user rake task is used to add users. The command is issued as follows:

cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:create_user USERNAME="<email address>" PASSWORD="<password>" ROLE="< Admin | Read-Only | Read-Write >"

If you specify a user that already exists, the same command can be used to change attributes for
that user, e.g. to reset a password or elevate/demote privileges.
DELETING USERS

The db:users:remove task is used to delete users. The command is issued as follows:

cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:remove[<email address>]
VIEWING USERS

To print a list of existing users to the screen use the db:users:list task as follows:

cd /opt/puppet/share/puppet-dashboard
sudo /opt/puppet/bin/bundle exec rake -f /opt/puppet/share/console-auth/Rakefile db:users:list
LOCKED USERS

Users will get locked out of their accounts after ten failed authentication attempts. Once locked out,
users will not be able to access the console and will see a message on the login screen letting them
know their account has been locked. A similar message will appear on the command line if users
are attempting access that way. Admin users will see a warning sign next to a locked user in the
admin screen, and a warning message will be added to a locked user's detail view. The account
status will also be set to disabled. An admin can restore a user's access by either resetting the
user's password or changing the user's status back to enabled.

Using Third-Party Authentication Services


User access can be managed with external, third-party authentication services. The following
external services are supported:
LDAP
Active Directory (AD)
Google accounts

Note: To use a third-party authentication system, you must configure two files on the
console server. See the Configuring Third-Party Authentication Services section of the
console config page for details.
Third-party services are only used for authenticating users; the console's RBAC still manages each
user's privileges. If a user has never logged in before, they are assigned a default role. (This role
can be configured; see the cas_client_config.yml section of the config instructions for details.)
External users' access privileges are managed in the same manner as internal users', via the
console's user administration interface.
The account interface for an externally authenticated user differs slightly from internal users' in that
external users do not have UI for changing their passwords or deleting accounts.

The user administration page will also indicate the authentication service (Account Type) being
used for a given user and provide a link to a legend that lists the external authentication services
and the default access privileges given to users of a given service.

Lastly, note that while built-in auth accounts use the email address provided, AD/LDAP accounts are
generally accessed using just the username (e.g. a.user), although this may vary in your
organization's specific implementation.
Next: Console Inventory Search

Searching for Nodes by Fact


The Inventory Search section of the Puppet Enterprise console lets you search Puppet's inventory of
node data. This search utility uses Puppet Enterprise's central data storage layer, PuppetDB.

Using the Inventory Search


Use the console's main navigation to reach the Inventory Search section.

This field allows you to enter a fact name, a value, and a comparison operator. After you have
searched for one fact, you may narrow down the search by adding additional facts.

The search results page will show a list of nodes, as well as a summary of their recent Puppet runs.
You can click nodes in the list to browse to their detail pages.
To choose facts to search for, you should view the inventory data for a node that resembles the
nodes you are searching for.
Next: Configuring & Tuning the Console

Rake API for Querying and Modifying Console Data

The Puppet Enterprise console provides rake tasks that can add classes, nodes, and groups, and
edit the configuration data assigned to nodes and groups. You can use these tasks as a minimal API
to automate workflows, import or export data, or bypass the console's GUI when performing large
tasks.

Invoking Console Rake Tasks


Console rake tasks must be invoked from the command line on the console server.
They should be invoked by a user account with sufficient sudo privileges to modify items owned
by the puppet-dashboard user.
Every rake command should begin as follows:

$ sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production <TASK AND ARGUMENTS>

The <TASK AND ARGUMENTS> placeholder is the only part that will differ between the various
tasks; the rest is boilerplate that should be used with every task.
There are two ways to specify arguments for a task. PE 3.0.1 and later can use both styles; PE 3.0.0
(and the PE 2.x series) can only use the environment variable style.
Task Arguments as Parameters (task[argument,argument,...])
This invocation style is available in PE 3.0.1 and later. It allows invoking multiple tasks at once,
which was not possible with the environment variable style.
Use the following syntax to specify arguments as parameters:
node:addgroup["switch07.example.com","no mcollective"]

Specifically, you should provide:

The name of the task
An opening square bracket ([)
A comma-separated list of argument values
No spaces are allowed before or after commas. Spaces will cause the task to fail.
Each task requires its arguments in a specific order; see the list of tasks below.
Some arguments are optional. To skip an optional argument but provide a later optional
argument, provide an empty string. (For example, node:add['web06.example.com',,'linux::base'].)
Nearly all values should be quoted for safety, although values that consist of only
alphanumeric characters with no spaces may be left unquoted.
Both single and double quotes are okay, but see Escaping below.
A closing square bracket (])
To run multiple tasks, simply put multiple tasks and their arguments in the same command line, in
the order they should run. For example:
$ sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile
RAILS_ENV=production node:addgroup["switch07.example.com","no mcollective"]
node:addgroup["switch07.example.com","network devices"]

Note: The PE console's rake tasks can all be invoked multiple times in the same run. This
differs from rake's default behavior, which will suppress additional invocations of the same
command. If you need tasks to run only once per command for some reason, you can add
allow_repeating_tasks=false to the command line.
ESCAPING

If the value of any argument contains a comma, the comma must be escaped with one or more
backslashes. The number of escape characters depends on how the string is quoted.
With single quotes, use one backslash.
With double quotes, use two backslashes.
The examples below would both set a value of no mcollective,network devices for the second
argument:
node:add['switch07.example.com','no mcollective\,network devices']
node:add["switch07.example.com","no mcollective\\,network devices"]
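One layer of this escaping is consumed by the shell itself, so it can help to check what the shell actually delivers to rake. This sketch uses printf to show that both quoting styles above hand rake the same string, with a single literal backslash before the comma (reusing the example value from above):

```shell
# Single quotes pass the backslash through literally.
printf '%s\n' 'no mcollective\,network devices'
# Inside double quotes the shell collapses \\ to \, yielding the same string.
printf '%s\n' "no mcollective\\,network devices"
```

Both commands print no mcollective\,network devices, which rake's bracket-style argument parsing then splits on unescaped commas only.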

In two tasks (node:variables and nodegroup:variables), the value of an argument might consist
of a comma-separated list whose terms, themselves, contain commas. In these cases, the interior
commas should be escaped with three backslashes for single-quoted strings, and six backslashes
for double-quoted strings. The examples below would both set the value of the
haproxy_application_servers variable to
web04.example.com,web05.example.com,web06.example.com:

nodegroup:variables['load balancers','haproxy_application_port=3000\,haproxy_application_servers=web04.example.com\\\,web05.example.com\\\,web06.example.com']

nodegroup:variables["load balancers","haproxy_application_port=3000\\,haproxy_application_servers=web04.example.com\\\\\\,web05.example.com\\\\\\,web06.example.com"]
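The three- and six-backslash rules follow from the same shell-quoting arithmetic: inside double quotes the shell halves each run of backslashes, and single quotes pass them through verbatim. A quick sketch with shortened, hypothetical values shows both quoting styles delivering identical strings to rake:

```shell
# Single quotes: all backslashes survive verbatim (one for the pair
# separator, three for each comma inside a value).
printf '%s\n' 'port=3000\,servers=a.example.com\\\,b.example.com'
# Double quotes: the shell halves each run of backslashes (2 -> 1, 6 -> 3),
# producing the same string as above.
printf '%s\n' "port=3000\\,servers=a.example.com\\\\\\,b.example.com"
```

Both commands print port=3000\,servers=a.example.com\\\,b.example.com: one backslash protecting the pair separator, three protecting each interior comma.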

Task Arguments as Environment Variables (task argument=value argument=value)


This invocation style is available in all PE 3.x and 2.x releases. It does not allow invoking multiple
tasks at once; this can cause performance problems when running many tasks, as the preparation
time for each rake command can be quite long.

Deprecation note: Invoking tasks like this will cause deprecation warnings, but it will
continue to work for the duration of the Puppet Enterprise 3.x series, with removal
tentatively planned for Puppet Enterprise 4.0.
Use the following syntax to specify arguments as environment variables:
node:addgroup name="switch07.example.com" group="no mcollective"

Specifically, you should provide:

The name of the task
For each argument:
A space
The name of the argument
An equals sign (=)
The value of the argument, as a quoted or unquoted string
Each task has specific names that must be used for its arguments; the arguments may be specified
in any order. For the names of each task's arguments, see the list of rake API tasks with
environment variable argument names, which is maintained on a different page.

Node Tasks: Getting Info


node:list[(match)]
List nodes. Can optionally match nodes by regex.
Parameters:
match (optional) regular expression to match (if omitted all nodes are listed)
node:listclasses[name]
List classes for a node.
Parameters:
name node name
node:listclassparams[name,class]
List classparams for a node/class pair.
Parameters:
name node name
class class name
node:listgroups[name]
List groups for a node.

Parameters:
name node name
node:variables[name]
List variables for a node.
Parameters:
name node name

Node Tasks: Modifying Info


node:add[name,(groups),(classes),(onexists)]
Add a new node. Classes and groups can be specified as comma-separated lists.
Parameters:
name node name
groups (optional) groups to assign to the newly added node
classes (optional) classes to assign to the newly added node
onexists (optional) skip (do not add the node if it exists) or fail (exit with failure if the node
exists); the default value is fail
node:del[name]
Delete a node.
Parameters:
name node name
node:classes[name,classes]
Replace the list of classes assigned to a node. This task will destroy existing data. Classes must be
specified as a comma-separated list.
Parameters:
name node name
classes classes to assign to the node
node:groups[name,groups]
Replace the list of groups a node belongs to. This task will destroy existing data. Groups must be
specified as a comma-separated list.
Parameters:
name node name
groups groups to assign to the node
node:addclass[name,class]
Add a class to a node.
Parameters:
name node name
class classes to add to the node
node:addclassparam[name,class,param,value]
Add a classparam to a node. If the parameter already exists, its value is overwritten.
Parameters:
name node name
class class (already assigned to the node)
param parameter name
value parameter value
node:addgroup[name,group]
Add a group to a node.
Parameters:
name node name
group group to add to the node
node:delclassparam[name,class,param]
Remove a class param from a node.
Parameters:
name node name
class class name
param parameter name
node:variables[name,variables]
Add (or edit, if they exist) variables for a node. Variables must be specified as a comma-separated
list of variable=value pairs; the list must be quoted and the commas must be escaped.
Parameters:
name node name
variables variables specified as "<VARIABLE>=<VALUE>,<VARIABLE>=<VALUE>,..."

Class Tasks: Getting Info


nodeclass:list[(match)]
List node classes. Can optionally match classes by regex.
Parameters:
match (optional) regular expression to match (if omitted all classes are listed)

Class Tasks: Modifying Info


nodeclass:add[name,onexists]
Add a new class. This must be a class available to the Puppet autoloader via a module.
Parameters:
name class name
onexists (optional) skip (do not add the class if it exists) or fail (exit with failure if the class
exists); the default value is fail
nodeclass:del[name]
Delete a node class.
Parameters:
name class name

Group Tasks: Getting Info


nodegroup:list[(match)]
List node groups. Can optionally match groups by regex.
Parameters:
match (optional) regular expression to match (if omitted all groups are listed)
nodegroup:listclasses[name]

List classes that belong to a node group.


Parameters:
name group name
nodegroup:listclassparams[name,class]
List classparams for a nodegroup/class pair.
Parameters:
name group name
class class name
nodegroup:listgroups[name]
List child groups that belong to a node group.
Parameters:
name group name
nodegroup:variables[name]
List variables for a node group.
Parameters:
name group name

Group Tasks: Modifying Info


nodegroup:add[name,(classes),(onexists)]
Create a new node group. Classes can be specified as a comma-separated list.
Parameters:
name group name
classes (optional) classes to assign to the newly added group
onexists (optional) skip (do not add the group if it exists) or fail (exit with failure if the group
exists); the default value is fail
nodegroup:del[name]
Delete a node group.
Parameters:

name group name


nodegroup:add_all_nodes[name]
Add every known node to a group.
Parameters:
name group name
nodegroup:addclass[name,class]
Assign a class to a group without overwriting its existing classes.
Parameters:
name group name
class class name
nodegroup:edit[name,classes]
Replace the classes assigned to a node group. This task will destroy existing data. Classes must be
specified as a comma-separated list.
Parameters:
name group name
classes classes to assign to the group
nodegroup:addclassparam[name,class,param,value]
Add a classparam to a nodegroup. If the parameter already exists, its value is overwritten.
Parameters:
name group name
class class (already assigned to the node)
param parameter name
value parameter value
nodegroup:addgroup[name,group]
Add a child group to a nodegroup.
Parameters:
name parent group name
group name of the group to add as a child group
nodegroup:delclass[name,class]
Remove a class from a nodegroup.


Parameters:
name group name
class class name
nodegroup:delclassparam[name,class,param]
Remove a class param from a node group.
Parameters:
name group name
class class name
param parameter name
nodegroup:delgroup[name,group]
Remove a child group from a nodegroup.
Parameters:
name parent group name
group child group name
nodegroup:variables[name,variables]
Add (or edit, if they exist) variables for a node group. Variables must be specified as a comma-separated list of variable=value pairs; the list must be quoted and the commas must be escaped.
Parameters:
name group name
variables variables specied as "<VARIABLE>=<VALUE>,<VARIABLE>=<VALUE>,..."

List of Rake API Tasks with Environment Variable Argument Names

This page contains a complete list of console Rake tasks in Puppet Enterprise 3.0, using the older
invocation style with named arguments expressed as environment variables.
For more complete information on the Puppet Enterprise console's Rake API, see the main Rake
API page.
For a list of tasks with the newer style of parameters, see the task list section of the main Rake
API page.

Deprecation note: Invoking tasks like this will cause deprecation warnings, but it will
continue to work for the duration of the Puppet Enterprise 3.x series, with removal
tentatively planned for Puppet Enterprise 4.0.

Node Tasks: Getting Info


node:list [match=<REGULAR EXPRESSION>]
List nodes. Can optionally match nodes by regex.
node:listclasses name=<NAME>
List classes for a node.
node:listclassparams name=<NAME> class=<CLASS>
List classparams for a node/class pair.
node:listgroups name=<NAME>
List groups for a node.
node:variables name=<NAME>
List variables for a node.

Node Tasks: Modifying Info


node:add name=<NAME> [groups=<GROUPS>] [classes=<CLASSES>]
Add a new node. Classes and groups can be specified as comma-separated lists.
node:del name=<NAME>
Delete a node.
node:classes name=<NAME> classes=<CLASSES>
Replace the list of classes assigned to a node. Classes must be specified as a comma-separated list.
node:groups name=<NAME> groups=<GROUPS>
Replace the list of groups a node belongs to. Groups must be specified as a comma-separated list.
node:addclass name=<NAME> class=<CLASS>
Add a class to a node.
node:addclassparam name=<NAME> class=<CLASS> param=<PARAM> value=<VALUE>


Add a classparam to a node.
node:addgroup name=<NAME> group=<GROUP>
Add a group to a node.
node:delclassparam name=<NAME> class=<CLASS> param=<PARAM>
Remove a class param from a node.
node:variables name=<NAME> variables="<VARIABLE>=<VALUE>,<VARIABLE>=<VALUE>,..."
Add (or edit, if they exist) variables for a node. Variables must be specified as a comma-separated
list of variable=value pairs; the list must be quoted.
If you want to set a variable's value to a string containing commas, you must escape those commas.
Use a single backslash for single-quoted strings, and two backslashes for double-quoted strings.

Class Tasks: Getting Info


nodeclass:list [match=<REGULAR EXPRESSION>]
List node classes. Can optionally match classes by regex.

Class Tasks: Modifying Info


nodeclass:add name=<NAME>
Add a new class. This must be a class available to the Puppet autoloader via a module.
nodeclass:del name=<NAME>
Delete a node class.

Group Tasks: Getting Info


nodegroup:list [match=<REGULAR EXPRESSION>]
List node groups. Can optionally match groups by regex.
nodegroup:listclasses name=<NAME>
List classes that belong to a node group.
nodegroup:listclassparams name=<NAME> class=<CLASS>
List classparams for a nodegroup/class.
nodegroup:listgroups name=<NAME>
List child groups that belong to a node group.
nodegroup:variables name=<NAME>
List variables for a node group.

Group Tasks: Modifying Info


nodegroup:add name=<NAME> [classes=<CLASSES>]
Create a new node group. Classes can be specified as a comma-separated list.
nodegroup:del name=<NAME>
Delete a node group.
nodegroup:add_all_nodes name=<NAME>
Add every known node to a group.
nodegroup:addclass name=<NAME> class=<CLASS>
Assign a class to a group without overwriting its existing classes.
nodegroup:edit name=<NAME> classes=<CLASSES>
Replace the classes assigned to a node group. Classes must be specified as a comma-separated list.
nodegroup:addclassparam name=<NAME> class=<CLASS> param=<PARAM> value=<VALUE>
Add classparam to a nodegroup.
nodegroup:addgroup name=<NAME> group=<GROUP>
Add a child group to a nodegroup.
nodegroup:delclass name=<NAME> class=<CLASS>
Remove a class from a nodegroup.
nodegroup:delclassparam name=<NAME> class=<CLASS> param=<PARAM>
Remove a class param from a node group.
nodegroup:delgroup name=<NAME> group=<GROUP>
Remove a child group from a nodegroup.
nodegroup:variables name=<NAME> variables="<VARIABLE>=<VALUE>,<VARIABLE>=<VALUE>,..."
Add (or edit, if they exist) variables for a node group. Variables must be specified as a comma-separated list of variable=value pairs; the list must be quoted.
If you want to set a variable's value to a string containing commas, you must escape those commas.
Use a single backslash for single-quoted strings, and two backslashes for double-quoted strings.

Configuring & Tuning the Console & Databases

Configuring Console Authentication
Configuring the SMTP Server
The console's account system sends verification emails to new users, and requires an SMTP server
to do so. If your site's SMTP server requires a user and password, TLS, or a non-default port, you
can configure these by editing the /etc/puppetlabs/console-auth/config.yml file:

smtp:
  address: mail.example.com
  port: 25
  use_tls: false
  ## Uncomment to enable SMTP authentication
  #username: smtp_username
  #password: smtp_password
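As a sketch, a filled-in version for a site whose SMTP relay requires TLS and authentication might look like this; the hostname, port, and credentials are placeholders, not PE defaults:

```yaml
smtp:
  address: smtp.example.com   # hypothetical relay hostname
  port: 587                   # a common TLS submission port; not a PE default
  use_tls: true
  username: console-mailer    # placeholder credentials
  password: example-password
```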

Allowing Global Unauthenticated Access


Important: Do not enable global unauthenticated access alongside third-party authentication
services.
To allow anonymous, read-only access to the console, do the following:
Edit the /etc/puppetlabs/console-auth/cas_client_config.yml file and change the
global_unauthenticated_access setting to true.
In the same file, under authorization, comment out all the other authentication choices.
Restart Apache by running sudo /etc/init.d/pe-httpd restart.
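The first step amounts to a one-line change in cas_client_config.yml. Roughly, with the file's other keys omitted:

```yaml
authentication:
  # ...other authentication settings unchanged...
  global_unauthenticated_access: true
```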
Changing Session Duration
If you wish to change the duration of a user's session before they have to re-authenticate, you need
to edit two settings. In /etc/puppetlabs/rubycas-server/config.yml, change the
maximum_session_lifetime setting by specifying, in seconds, how long the session should last.
The default is 1200 (20 minutes).
Next, in /etc/puppetlabs/console-auth/cas_client_config.yml, edit the session_timeout
setting so it is the same as maximum_session_lifetime. Again, the default is 1200 seconds.
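For example, to extend sessions to one hour, both settings would be set to 3600. The value is hypothetical; any number of seconds works, as long as the two files agree:

```yaml
# /etc/puppetlabs/rubycas-server/config.yml
maximum_session_lifetime: 3600

# /etc/puppetlabs/console-auth/cas_client_config.yml
session_timeout: 3600
```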

Configuring the Console to Use a Custom SSL Certificate


Full instructions are available here.

Configuring Third-Party Authentication Services


User access can be managed with external, third-party authentication services, as described on the
user management and authorization page. The following external services are supported:
LDAP
Active Directory (AD)
Google accounts
To use external authentication, the following two files must be correctly configured:
/etc/puppetlabs/console-auth/cas_client_config.yml (see below)
/etc/puppetlabs/rubycas-server/config.yml (see below)
(Note that YAML requires whitespace and tabs to match up exactly. Type carefully.)
After editing these files, you must restart the pe-httpd and pe-puppet-dashboard-workers services
via their init.d scripts.

Note: If you are using two-factor authentication with Google accounts, you must first create
an application-specific password in order to successfully log into the console.

Configuring cas_client_config.yml
The /etc/puppetlabs/console-auth/cas_client_config.yml file contains several commented-out lines under the authorization: key. Un-comment the lines that correspond to the RubyCAS
authenticators you wish to use, and set a new default_role if desired.
Each entry consists of the following:
A common identifier (e.g. local, or ldap, etc.), which is used in the console_auth database and
corresponds to the classname of the RubyCAS authenticator.
default_role, which defines the role to assign to users by default; allowed values are read-only, read-write, or admin.
description, which is simply a human-readable description of the service.
The order in which authentication services are listed in the cas_client_config.yml file is the order
in which the services will be checked for valid accounts. In other words, the first service that returns
an account matching the entered user credential is the service that will perform authentication and
log-in.
This example shows how to edit the file if you want to use AD and the built-in (local) auth services
while leaving Google and LDAP disabled:
## This configuration file contains information required by any web
## service that makes use of the CAS server for authentication.

authentication:
  ## Use this configuration option if the CAS server is on a host different
  ## from the console-auth server.
  # cas_host: master:443

  ## The port CAS is listening on. This is ignored if cas_host is set.
  # cas_port: 443

  ## The session secret is randomly generated during installation of Puppet
  ## Enterprise and will be regenerated any time console-auth is enabled or disabled.
  session_key: 'puppet_enterprise_console'
  session_secret: [REDACTED]

  ## Set this to true to allow anonymous users read-only access to all of
  ## Puppet Enterprise Console.
  global_unauthenticated_access: false

authorization:
  local:
    default_role: read-only
    description: Local

  # ldap:
  #   default_role: read-only
  #   description: LDAP

  activedirectoryldap:
    default_role: read-only
    description: Active Directory

  # google:
  #   default_role: read-only
  #   description: Google

Note: If your console server ever ran PE 2.5, the commented-out sections may not be present
in this file. To find example config text that can be copied and pasted into place, look for a
cas_client_config.yml.rpmnew or cas_client_config.yml.dpkg-new file in the same
directory.


Configuring rubycas-server/config.yml
The /etc/puppetlabs/rubycas-server/config.yml file is used to configure RubyCAS to use external authentication services. As before, you will need to un-comment the section for the third-party service you wish to enable and configure it as necessary.

Note: If you are upgrading to PE 3.2.x or later, rubycas-server/config.yml will not contain
the commented sections for the third-party services. We've provided the commented
sections below, which you can copy and paste into rubycas-server/config.yml after you
upgrade.
The values for the listed keys are LDAP and ActiveDirectory standards. If you are not the
administrator of those databases, you should check with that administrator for the correct values.
GOOGLE AUTHENTICATION

# === Google Authentication ====================================================
#
# The Google authenticator allows users to log in to your CAS server using
# their Google account credentials (i.e. the same email and password they
# would use to log in to Google services like Gmail). This authenticator
# requires no special configuration -- just specify its class name:
#
# authenticator:
#   - class: CASServer::Authenticators::Google
#
# If you are behind an http proxy, you can try specifying proxy settings as
# follows:
#
# authenticator:
#   - class: CASServer::Authenticators::Google
#     proxy:
#       host: your-proxy-server
#       port: 8080
#       username: nil
#       password: nil
#
# Note that as with all authenticators, it is possible to use the Google
# authenticator alongside other authenticators. For example, CAS can first
# attempt to validate the account with Google, and if that fails, fall back
# to some other local authentication mechanism.
#
# For example:
#
# authenticator:
#   - class: CASServer::Authenticators::Google
#   - class: CASServer::Authenticators::SQL
#     database:
#       adapter: postgresql
#       database: some_database_with_users_table
#       username: root
#       password:
#       host: localhost
#     user_table: user
#     username_column: username
#     password_column: password
#
#
ACTIVEDIRECTORY AUTHENTICATION

# === ActiveDirectory Authentication ===========================================
#
# This method authenticates against Microsoft's Active Directory using LDAP.
# You must configure the ActiveDirectory server, and base DN. The port number
# and LDAP filter are optional. You must also enter a CN and password
# for a special "authenticator" user. This account is used to log in to
# the ActiveDirectory server and search LDAP. This does not have to be an
# administrative account -- it only has to be able to search for other
# users.
#
# Note that the auth_user parameter must be the user's CN (Common Name).
# In Active Directory, the CN is generally the user's full name, which is
# usually NOT the same as their username (sAMAccountName).
#
# For example:
#
# authenticator:
#   - class: CASServer::Authenticators::ActiveDirectoryLDAP
#     ldap:
#       host: ad.example.net
#       port: 389
#       base: dc=example,dc=net
#       filter: (objectClass=person)
#       auth_user: authenticator
#       auth_password: itsasecret
#
# A more complicated example, where the authenticator will use TLS encryption,
# will ignore users with disabled accounts, and will pass on the 'cn' and
# 'mail' attributes to CAS clients:
#
# authenticator:
#   - class: CASServer::Authenticators::ActiveDirectoryLDAP
#     ldap:
#       host: ad.example.net
#       port: 636
#       base: dc=example,dc=net
#       filter: (objectClass=person) & !(msExchHideFromAddressLists=TRUE)
#       auth_user: authenticator
#       auth_password: itsasecret
#       encryption: simple_tls
#     extra_attributes: cn, mail
#
# It is possible to authenticate against Active Directory without the
# authenticator user, but this requires that users type in their CN as
# the username rather than typing in their sAMAccountName. In other words,
# users will likely have to authenticate by typing their full name,
# rather than their username. If you prefer to do this, then just
# omit the auth_user and auth_password values in the above example.
#
#
LDAP AUTHENTICATION

# === LDAP Authentication ======================================================
#
# This is a more general version of the ActiveDirectory authenticator.
# The configuration is similar, except you don't need an authenticator
# username or password. The following example has been reported to work
# for a basic OpenLDAP setup.
#
# authenticator:
#   - class: CASServer::Authenticators::LDAP
#     ldap:
#       host: ldap.example.net
#       port: 389
#       base: dc=example,dc=net
#       username_attribute: uid
#       filter: (objectClass=person)
#
# If you need more secure connections via TLS, specify the 'encryption'
# option and change the port. This example also forces the authenticator
# to connect using a special "authenticator" user with the given
# username and password (see the ActiveDirectoryLDAP authenticator
# explanation above):
#
# authenticator:
#   - class: CASServer::Authenticators::LDAP
#     ldap:
#       host: ldap.example.net
#       port: 636
#       base: dc=example,dc=net
#       filter: (objectClass=person)
#       encryption: simple_tls
#       auth_user: cn=admin,dc=example,dc=net
#       auth_password: secret
#
# If you need additional data about the user passed to the client (for example,
# their 'cn' and 'mail' attributes), you can specify the list of attributes
# under the extra_attributes config option:
#
# authenticator:
#   - class: CASServer::Authenticators::LDAP
#     ldap:
#       host: ldap.example.net
#       port: 389
#       base: dc=example,dc=net
#       filter: (objectClass=person)
#     extra_attributes: cn, mail
#
# Note that the above functionality is somewhat limited by client
# compatibility. See the SQL authenticator notes above for more info.
CUSTOM AUTHENTICATION

# === Custom Authentication ====================================================
#
# It should be relatively easy to write your own Authenticator class. Have a
# look at the built-in authenticators in the casserver/authenticators
# directory. Your authenticator should extend the
# CASServer::Authenticators::Base class and must implement a validate()
# method that takes a single hash argument. When the user submits the login
# form, the username and password they entered is passed to validate() as a
# hash under :username and :password keys. In the future, this hash might
# also contain other data such as the domain that the user is logging in to.
#
# To use your custom authenticator, specify its class name and path to the
# source file in the authenticator section of the config. Any other parameters
# you specify in the authenticator configuration will be passed on to the
# authenticator and made available in the validate() method as an @options
# hash.
#
# Example:
#
# authenticator:
#   - class: FooModule::MyCustomAuthenticator
#     source: /path/to/source.rb
#     option_a: foo
#     another_option: yeeha
#
MULTIPLE AUTHENTICATORS

# === Multiple Authenticators ==================================================
#
# If you need to have more than one source for authentication, such as an LDAP
# directory and a database, you can use multiple authenticators by making
# :authenticator an array of authenticators.
#
# authenticator:
#   - class: CASServer::Authenticators::ActiveDirectoryLDAP
#     ldap:
#       host: ad.example.net
#       port: 389
#       base: dc=example,dc=net
#       filter: (objectClass=person)
#   - class: CASServer::Authenticators::SQL
#     database:
#       adapter: postgresql
#       database: some_database_with_users_table
#       username: root
#       password:
#       host: localhost
#     user_table: user
#     username_column: username
#     password_column: password
#
# During authentication, the user credentials will be checked against the first
# authenticator and on failure fall through to the second authenticator.

Note: The commented-out examples in the config file may or may not have a line break
after the hyphen; both forms are valid YAML.

# OK
- class: CASServer::Authenticators::SQLEncrypted

# Also OK
-
  class: CASServer::Authenticators::SQLEncrypted

As the above examples show, it's generally best to specify just dc= attributes in the base key. The criteria for the Organizational Unit (OU) and Common Name (CN) should be specified in the filter key. The value of the filter: key is where authorized users should be located in the AD organizational structure. Generally speaking, the filter: key is where you would specify an OU or an AD Group. In order to authenticate, users will need to be in the specified OU or Group.
Also note that the value for the filter: key must be the full name for the leftmost cn=; you cannot use the user ID or logon name. In addition, the auth_user: key requires the full Distinguished Name (DN), including any CNs associated with the user and all of the dc= attributes used in the DN.
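For instance, an AD configuration restricting console login to members of a single group might look like the following sketch. All hostnames, DNs, group names, and passwords below are hypothetical placeholders, not values from this guide:

```yaml
# Hypothetical example; substitute your own AD structure.
authenticator:
  - class: CASServer::Authenticators::ActiveDirectoryLDAP
    ldap:
      host: ad.example.net
      port: 389
      base: dc=example,dc=net
      # Restrict authentication to members of one AD group; note the
      # leftmost cn= uses the group's full name:
      filter: (memberOf=cn=Console Users,ou=groups,dc=example,dc=net)
      # The lookup account's full Distinguished Name, including its CN
      # and all dc= attributes:
      auth_user: cn=Console Authenticator,ou=service accounts,dc=example,dc=net
      auth_password: itsasecret
```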

Tuning the PostgreSQL Buffer Pool Size


If you are experiencing performance issues or instability with the console, you may need to adjust the buffer memory settings for PostgreSQL. The most important PostgreSQL memory settings for PE are shared_buffers and work_mem. Generally speaking, you should allocate about 25% of your hardware's RAM to shared_buffers. If you have a large and/or complex deployment, you will probably need to increase work_mem from the default of 1MB. For more detail, see the PostgreSQL documentation.
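As a rough sketch, the relevant lines in postgresql.conf on a console server with 8 GB of RAM might read as follows. The exact values and the file's location depend on your hardware and PE layout; these numbers are illustrative only:

```ini
# postgresql.conf (excerpt) -- illustrative values for an 8 GB machine
shared_buffers = 2GB   ; about 25% of physical RAM
work_mem = 8MB         ; raised from the 1MB default for larger deployments
```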
After changing any of these settings, you should restart the PostgreSQL server:

$ sudo /etc/init.d/pe-postgresql restart

Fine-tuning the delayed_job Queue


The console uses a delayed_job queue to asynchronously process resource-intensive tasks such as report generation. Although the console won't lose any data sent by puppet masters if these jobs don't run, you'll need to be running at least one delayed job worker (and preferably one per CPU core) to get the full benefit of the console's UI.
Changing the Number of delayed_job Worker Processes
You can increase the number of workers by changing the following setting:
CPUS in /etc/sysconfig/pe-puppet-dashboard-workers on Red Hat-based systems
NUM_DELAYED_JOB_WORKERS in /etc/default/pe-puppet-dashboard-workers on Ubuntu and
Debian
In most configurations, you should run exactly as many workers as the machine has CPU cores.
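For example, on a 4-core Red Hat-based console server, the workers file might contain (the value shown is illustrative):

```ini
# /etc/sysconfig/pe-puppet-dashboard-workers
CPUS=4
```

After changing this value, restart the dashboard workers service (pe-puppet-dashboard-workers) so the new worker count takes effect.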

Changing the Console's Port


By default, a new installation of PE will serve the console on port 443. However, previous versions of PE served the console's predecessor on port 3000. If you upgraded and want to change to the more convenient new default, or if you need port 443 for something else and want to shift the console somewhere else, perform the following steps:
1. Stop the pe-httpd service: sudo /etc/init.d/pe-httpd stop.
2. Edit /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf on the console server, and change
the port number in the Listen 443 and <VirtualHost *:443> directives. (These directives will
contain the current port, which is not necessarily 443.)
3. Edit /etc/puppetlabs/console-auth/config.yml on the puppet master server, and change the
cas_url to use your preferred port.
4. Edit /etc/puppetlabs/rubycas-server/config.yml on the puppet master server, and change
the console_base_url to use your preferred port.
5. Make sure to allow access to the new port in your system's firewall rules.
6. Start the pe-httpd service: sudo /etc/init.d/pe-httpd start.
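As an illustration, after moving the console to port 8443 (a hypothetical choice), the relevant directives in puppetdashboard.conf would look like:

```apache
# /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf (excerpt)
Listen 8443
<VirtualHost *:8443>
  # ... existing virtual host configuration is unchanged ...
</VirtualHost>
```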

Configuring the Console Location


By default, the puppet master finds the console by reading the contents of /etc/puppetlabs/puppet/console.conf, which contains the following:

[main]
server = <console hostname>
port = <console port>
certificate_name = pe-internal-dashboard

To change the location of the console, you'll need to specify the console hostname, port, and certificate name.
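A filled-in console.conf might look like the following sketch (the hostname is a hypothetical example):

```ini
[main]
server = console.example.com
port = 443
certificate_name = pe-internal-dashboard
```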

Disabling Update Checking


When the console's web server (pe-httpd) starts or restarts, it checks for updates. To get the correct update info, the server will pass some basic, anonymous info to Puppet Labs' servers. Specifically, it will transmit:
the IP address of the client
the type and version of the client's OS
the installed version of PE
If you wish to disable update checks (e.g. if your company policy forbids transmitting this information), you will need to add the following line to the /etc/puppetlabs/installer/answers.install file:

q_pe_check_for_updates=n

Keep in mind that if you delete the /etc/puppetlabs/installer/answers.install file, update checking will resume.

Fine Tuning Live Management Node Discovery


If you're running Live Management on a network that's slow, or has intermittent connectivity issues, you may need to tweak the timeouts for node discovery.
On your console node (the master, if this is a monolithic installation), the file /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf contains the setting #SetEnv LM_DISCOVERY_TIMEOUT 4, commented out.
The number represents seconds allowed for node discovery. You can uncomment this line and increase the number to allow more time for node discovery.
After tweaking this setting, you'll want to restart the pe-httpd and pe-memcached services to force-refresh node discovery.
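For example, uncommented and raised to 10 seconds (an illustrative value, not a recommendation), the line would read:

```apache
# /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf (excerpt)
SetEnv LM_DISCOVERY_TIMEOUT 10
```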
Next: Puppet Core Overview


An Overview of Puppet
Note: This page gives a broad overview of how Puppet configures systems, and provides
links to deeper information. If you prefer to learn by doing, you can follow the Puppet
Enterprise quick start guide:
Quick Start: Using PE
Quick Start: Writing Modules

Summary of Puppet
Puppet Enterprise (PE) uses Puppet as the core of its configuration management features. Puppet models desired system states, enforces those states, and reports any variances so you can track what Puppet is doing.
To model system states, Puppet uses a declarative resource-based language; this means a user describes a desired final state (e.g. "this package must be installed" or "this service must be running") rather than describing a series of steps to execute.
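For instance, "this service must be running" is expressed as a resource declaration like this brief sketch (the ntpd service name is just an example):

```puppet
service { 'ntpd':
  ensure => running,
  enable => true,
}
```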
Puppet breaks configuration management out into four major areas of activity:
1. The user describes re-usable pieces of configuration by creating or downloading Puppet modules.
2. The user assigns (and configures) classes to each machine in the PE deployment.
3. Each node fetches and applies its complete configuration from the puppet master server, either on a recurring schedule or on demand. This configuration includes all of the classes that have been assigned to that node. Applying a configuration enforces the desired state that was defined by the user, and submits a report about any changes that had to be made.
4. The user may view aggregate and individual reports to monitor what resources have been changed by Puppet.
Continue reading this page for an overview of the first three activities and links to deeper info. See the Viewing Reports and Inventory Data page to learn how to monitor Puppet's activity from the PE console.

Modules and Manifests


Puppet uses its own domain-specific language (DSL) to describe re-usable pieces of configuration. Puppet code is saved in files called manifests, which are in turn stored in structured directories called modules. Pre-built Puppet modules can be downloaded from the Puppet Forge, and most users will write at least some of their own modules.
See the Modules and Manifests page of this manual for information on how Puppet code is written and arranged.

Assigning and Configuring Classes


Classes are re-usable pieces of configuration stored in modules. Some classes can be configured to behave differently to suit different needs. (This is most common with general-purpose classes written to solve many problems at once.)
To compose a complete configuration for a node, you will generally assign a combination of several classes to it. (For example, a node that serves as a load balancer might have an HAProxy class, but it would also have classes to keep time synchronized, manage important file permissions, and manage login security.)
PE includes several ways to assign and configure classes; some require you to specifically identify each node, others can operate automatically on metadata, and most users will use a combination of a few methods.
See the Assigning Configurations to Nodes page of this manual for information on how to compose classes into complete configurations.

Managing and Triggering Configuration Runs


Puppet Enterprise has a default schedule and behavior for each node's configuration runs, but you can reconfigure this arrangement.
Default Run Behavior
In a default PE deployment:
Each agent node runs the puppet agent service (pe-puppet) as a daemon. This service idles in the background and does a configuration run at regular intervals.
The default run interval is every 30 minutes, as configured by the runinterval setting in the node's puppet.conf file.
Additional on-demand runs can be triggered when necessary; see the Controlling Puppet page in the orchestration section for details.
Alternate Run Behaviors
PRIORITIZING PROCESSES

You can change the priority of Puppet processes (puppet agent, puppet apply) using the priority setting. This can be helpful if you want to manage resource-intensive loads on busy nodes. Note that the process must be running as a privileged user if it is going to raise its priority.
DIFFERENT RUN INTERVAL

You can change the run interval by setting a new value for the runinterval setting in each agent node's puppet.conf file.
This file is located at /etc/puppetlabs/puppet/puppet.conf on *nix nodes, and <DATADIR>\puppet.conf on Windows.

Make sure you put this setting in the [agent] or [main] block of puppet.conf.
Since you will be managing this file on many systems at once, you may wish to manage puppet.conf with a Puppet template.
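For instance, to double the interval to an hour, the agent's puppet.conf might contain (the value shown is illustrative):

```ini
# /etc/puppetlabs/puppet/puppet.conf on an agent node
[agent]
runinterval = 60m
```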
RUN FROM CRON

On *nix nodes, the pe-puppet daemon process can sometimes use more memory than is desired. This was a common problem in PE 2.x which is largely solved in PE 3, but some users may still wish to disable it.
You can turn off the daemon and still get scheduled runs by creating a cron task for puppet agent on your *nix nodes. An example snippet of Puppet code, which would create this task on non-Windows nodes:
# Place in /etc/puppetlabs/puppet/manifests/site.pp on the puppet master
# node, outside any node statement.
# Run puppet agent hourly (with splay) on non-Windows nodes:
if $osfamily != 'windows' {
  cron { 'puppet_agent':
    ensure  => 'present',
    command => '/opt/puppet/bin/puppet agent --onetime --no-daemonize --splay --splaylimit 1h --logdest syslog',
    user    => 'root',
    minute  => 0,
  }
}

Remember, after creating this task you should turn off the pe-puppet service on *nix nodes.
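One way to do that with Puppet itself is a sketch like the following, added to the same site.pp:

```puppet
# Stop and disable the pe-puppet daemon on non-Windows nodes,
# since scheduled runs are now handled by cron:
if $osfamily != 'windows' {
  service { 'pe-puppet':
    ensure => stopped,
    enable => false,
  }
}
```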

Windows note: This is unnecessary on Windows, since it doesn't use the same version of the
pe-puppet service; the Windows service was implemented long after the *nix service, and
was designed from the start to limit memory usage. Additionally, it's more difficult on
Windows to make a scheduled task run multiple times a day.
ON-DEMAND ONLY

You can stop all scheduled runs by stopping the pe-puppet service on all nodes. This will cause nodes to only fetch configurations when you explicitly trigger runs with the orchestration engine.
If you are only doing on-demand runs, you're likely to be running large numbers of nodes at once. For best performance, you should take advantage of the orchestration engine's ability to run many nodes in a controlled series.
Next: Puppet Modules and Manifests


Puppet Modules and Manifests


Summary
Puppet uses its own domain-specific language (DSL) to describe machine configurations. Code in this language is saved in files called manifests.
Puppet works best when you isolate re-usable chunks of code into their own modules, then compose those chunks into more complete configurations.
This page covers the first part of that process: writing manifests and modules. For information on composing modules into complete configurations, see the Assigning Configurations to Nodes page of this manual.

Other References
This page consists mostly of small examples and links to detailed information. If you want
more complete context, you should read some of the following documents instead:
Learning the Puppet Language
If you are new to Puppet, start here. For a complete introduction to the Puppet language,
read and follow along with the Learning Puppet series, which will introduce you to the basic
concepts and then teach advanced class writing and module construction.
Learning Puppet
Quick Start
For those who learn by doing, the PE user's guide includes a pair of interactive quick start guides, which walk you through installing, using, hacking, and creating Puppet modules.
Quick Start: Using PE
Quick Start: Writing Modules
Modules in Context
The Puppet Enterprise Deployment Guide includes detailed walkthroughs of how to choose modules and compose them into complete configurations.
Deployment Guide ch. 3: Automating Your Infrastructure
Geppetto IDE
Geppetto is an integrated development environment (IDE) for Puppet. It provides a toolset for
developing puppet modules and manifests that includes syntax highlighting, content
assistance, error tracing/debugging, and code completion features. Geppetto also provides
integration with git, enabling side-by-side comparison of code from a given repo complete
with highlighting, code validation, syntax error parsing, and expression troubleshooting.
In addition, Geppetto provides tools that integrate with Puppet products. It includes an interface to the Puppet Forge, which allows you to create modules from existing modules on the Forge as well as easily upload your custom modules. Geppetto also provides PE integration by parsing PuppetDB error reporting. This allows you to quickly find the problems with your puppet code that are causing configuration failures. For complete information, visit the Geppetto documentation.
Printable References
These two cheat sheets are useful when writing your own modules or hacking existing
modules.
Module Layout Cheat Sheet
Core Resource Type Cheat Sheet

The Puppet Language


Puppet configurations are written in the Puppet language, a DSL built to declaratively model resources.
For complete information about the Puppet language, see the Puppet 3 Language Reference.
To identify unfamiliar syntax, see the visual index to the Puppet language.

Manifests
Manifests are files containing Puppet code. They are standard text files saved with the .pp extension. Most manifests should be arranged into modules.
Resources
The core of the Puppet language is declaring resources. A resource declaration looks like this:
# A resource declaration:
file { '/etc/passwd':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
}


When a resource depends on another resource, you should explicitly state the relationship to make
sure they happen in the right order.
See the Resources page of the Puppet language reference for details about resource
declarations.
See the Relationships and Ordering page for details about relationships.
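As a brief sketch (the package and file names are hypothetical), an explicit relationship looks like this:

```puppet
package { 'openssh-server':
  ensure => installed,
}

file { '/etc/ssh/sshd_config':
  ensure  => file,
  mode    => '0600',
  # Make sure the package is installed before this file is managed:
  require => Package['openssh-server'],
}
```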

About Manifest Ordering


Puppet Enterprise is now using a new ordering setting in the Puppet core that allows you to configure how unrelated resources should be ordered when applying a catalog. By default, ordering will be set to manifest in PE.
You most likely expect that resources will be executed in the order you wrote them in your manifest files if there were no dependencies specified. If you're an experienced user and have been using this kind of explicit ordering in your codebase, you'll be able to use manifest ordering without any problems.
We know that for new PE users learning the puppet language, one of the first stumbling blocks is figuring out how to order resources so they're evaluated correctly when puppet runs. We anticipate that manifest ordering will help mitigate your struggles and help get you writing more effective puppet code. And as you're learning, we definitely recommend you study up on relationships and ordering in Puppet.
The following values are allowed for the ordering setting:
manifest: (default) uses the order in which the resources were declared in their manifest files.
title-hash: orders resources randomly, but will use the same order across runs and across nodes.
random: orders resources randomly and changes their order with each run. This can work like a fuzzer for shaking out undeclared dependencies.
Regardless of this setting's value, Puppet will always obey explicit dependencies set with the before/require/notify/subscribe metaparameters and the ->/~> chaining arrows; this setting only affects the relative ordering of unrelated resources.
CHANGING THE RESOURCE ORDERING SETTING

By default, the ordering setting is configured for manifest ordering, but you will not see this displayed in puppet.conf (located at /etc/puppetlabs/puppet/puppet.conf on the puppet master).
To toggle the setting to random or title-hash, you will need to add it to the agent section; for example:

[agent]
ordering = title-hash
environment = production
...

Conditional Logic, Variables, and Facts


Puppet manifests can dynamically adjust their behavior based on variables. Puppet includes a set of useful pre-set variables called facts that contain system profiling data.
# Set the name of the Apache package based on OS
case $operatingsystem {
  centos, redhat: { $apache = 'httpd' }
  debian, ubuntu: { $apache = 'apache2' }
  default:        { fail('Unrecognized operating system for webserver') }
}

package { $apache:
  ensure => installed,
}
See the Variables page (and the Facts subsection) of the Puppet language reference for
information on variables.
See the Conditional Statements page for information on if, case, and selector statements.
Classes and Defined Types
Groups of resource declarations and conditional statements can be wrapped up into a class:

class ntp {
  package { 'ntp':
    ensure => installed,
  }
  file { 'ntp.conf':
    path    => '/etc/ntp.conf',
    ensure  => file,
    require => Package['ntp'],
    source  => 'puppet:///modules/ntp/ntp.conf',
  }
  service { 'ntp':
    name      => 'ntpd',
    ensure    => running,
    enable    => true,
    subscribe => File['ntp.conf'],
  }
}


Classes are named blocks of Puppet code that can be assigned to nodes. They should be stored in modules so that the puppet master can locate them by name.
Defined resources (i.e., defined resource types) extend the capability of classes and are stored in the module structure. They cannot be assigned directly to nodes but can enable you to build much more sophisticated classes.
See the Classes page of the Puppet language reference for details about defining and declaring classes.
See the Defined Types page for details about defined resource types.
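As a minimal sketch (the module, type, and template names are hypothetical), a defined type and its declaration look like this:

```puppet
# In a module named 'apache', this defined type creates one vhost
# config file per declaration:
define apache::vhost ($port, $docroot) {
  file { "/etc/httpd/conf.d/${title}.conf":
    ensure  => file,
    content => template('apache/vhost.erb'),
  }
}

# Declared like a resource, once per unique title:
apache::vhost { 'example.com':
  port    => 80,
  docroot => '/var/www/example',
}
```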

Puppet Modules
Modules are a convention for arranging Puppet manifests so that they can be automatically located and loaded by the puppet master. They can also contain plugins, static files for nodes to download, and templates.
Modules can contain many Puppet classes. Generally, the classes in a given module are all somewhat related. (For example, an apache module might have a class that installs and enables Apache, a class that enables PHP with Apache, a class that turns on mod_rewrite, etc.)
A module is:
A directory
with a specific internal layout
which is located in one of the puppet master's modulepath directories.
In Puppet Enterprise, the main modulepath directory for users is located at /etc/puppetlabs/puppet/modules on the puppet master server.
Module Structure
This example module, named my_module, shows the standard module layout:
my_module/ - This outermost directory's name matches the name of the module.
  manifests/ - Contains all of the manifests in the module.
    init.pp - Contains one class named my_module. This class's name must match the module's name.
    other_class.pp - Contains one class named my_module::other_class.
    my_defined_type.pp - Contains one defined type named my_module::my_defined_type.
    implementation/ - This directory's name affects the class names beneath it.
      foo.pp - Contains a class named my_module::implementation::foo.
      bar.pp - Contains a class named my_module::implementation::bar.
  files/ - Contains static files, which managed nodes can download.
    service.conf - This file's URL would be puppet:///modules/my_module/service.conf.
  lib/ - Contains plugins, like custom facts and custom resource types.
  templates/ - Contains templates, which the module's manifests can use.
    component.erb - A manifest can render this template with template('my_module/component.erb').
  tests/ - Contains examples showing how to declare the module's classes and defined types.
    init.pp
    other_class.pp - Each class or type should have an example in the tests directory.
  spec/ - Contains spec tests for any plugins in the lib directory.
See the Module Fundamentals page of the Puppet 3 reference manual for details about module
layout and location.
Downloading Modules
You can search for pre-built modules on the Puppet Forge and use them in your own infrastructure.
Use the puppet module search command to locate modules, or browse the Puppet Forge's web interface.
Along with the standard modules you can find on the Forge, Puppet Labs also provides Puppet Enterprise supported modules; these supported modules are rigorously tested with PE, supported via the usual support channels, maintained for a long-term lifecycle, and are compatible with multiple platforms and architectures.
On your puppet master server, use the puppet module install command to install modules from the Forge.
See the Installing Modules page for details about installing pre-built modules.
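For example (the module is chosen purely for illustration; substitute whatever module you need):

```shell
# On the puppet master, search the Forge and install a module:
puppet module search ntp
puppet module install puppetlabs-ntp
```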

Catalogs and Compilation


In standard master/agent Puppet, agents never see the manifests and modules that comprise their
conguration. Instead, the puppet master compiles the manifests down into a document called a
catalog, and serves the catalog to the agent node.
As mentioned above, manifests can contain conditional logic, as well as things like templates and
functions, all of which can use variables to change what the manifest manages on a system. A
catalog has none of these things; it contains only resources and relationships.
Only sending the catalog to agents allows Puppet to do several things:
Separate privileges: Each individual node has little to no knowledge about other nodes. It only
Puppet Enterprise 3.3 User's Guide Puppet Modules and Manifests

216/404

receives its own resources.


Simulate changes: Since the agent has a declarative document describing its configuration, with
no contingent logic, it has the option of simulating the changes necessary to apply the
configuration. If you do a Puppet run in noop mode, the agent will check against its current state
and report on what would have changed without actually making any changes.
Record and query configurations: Each node's most recent catalog is stored in PuppetDB, and
you can query the database service for information about managed resources.
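The simulation behavior described above can be tried directly; for example, run as an administrator on an agent node:

```shell
# Perform a no-op run: compare the catalog against the current state and
# report what *would* change, without changing anything.
sudo puppet agent --test --noop
```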
Next: Assigning Configurations to Nodes

Puppet: Assigning Configurations to Nodes


Due to technical difficulties, this section is not included in this PDF. Please visit
http://docs.puppetlabs.com/pe/3.3/puppet_assign_configurations.html to read this content.

Puppet Tools
Puppet is built on a large number of services and command-line tools. Understanding which to
reach for and when is crucial to using Puppet effectively.
You can read more about any of these tools by running puppet man <SUBCOMMAND> at the command
line.

Services
Puppet agent and puppet master are the heart of Puppet's architecture.
The puppet agent service runs on every managed Puppet Enterprise node. It fetches and applies
configurations from a puppet master server.
In Puppet Enterprise, the puppet agent runs without user interaction as the pe-puppet service;
by default, it performs a run every 30 minutes. You can also use the orchestration engine to
manually trigger Puppet runs on any nodes. (If you are logged into an agent node as an
administrator, you can also run sudo puppet agent --test from the command line.)
The puppet agent reads its settings from the [main] and [agent] blocks of
/etc/puppetlabs/puppet/puppet.conf.
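As an illustration, the agent-relevant portion of that file might contain settings like the following (the server name and values here are placeholders, not required defaults):

```ini
# /etc/puppetlabs/puppet/puppet.conf (excerpt; values are illustrative)
[main]
    server = master.example.com
[agent]
    # How often the background service performs a run
    runinterval = 30m
```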
The puppet master service compiles and serves congurations to agent nodes.
In Puppet Enterprise, the puppet master is managed by Apache and Passenger, under the
umbrella of the pe-httpd service. Apache handles HTTPS requests from agents, and it spawns
and kills puppet master processes as needed.

The puppet master creates agent configurations by consulting its Puppet modules and the
instructions it receives from the console.
The puppet master reads its settings from the [main] and [master] blocks of
/etc/puppetlabs/puppet/puppet.conf. It can also be configured conditionally by using
environments.
The PuppetDB service collects information from the puppet master, and makes it available to
other services.
The puppet master itself consumes PuppetDB's data in the form of exported resources. You can
also install a set of additional functions to do deeper queries from your Puppet manifests.
External services can easily integrate with PuppetDB's data via its query API. See the PuppetDB
manual's API pages for more details.

Everyday Tools
The node requests page of the PE console is used to add nodes to your Puppet Enterprise
deployment.
After a new agent node has been installed, it requests a certificate from the master, which will
allow it to fetch configurations; the agent node can't be managed by PE until its certificate
request has been approved. See the documentation for the node requests page for more info.
When you decommission a node and remove it from your infrastructure, you should destroy its
certificate information by logging into the puppet master server as an admin user and running
puppet cert clean <NODE NAME>.
The puppet apply subcommand can compile and apply Puppet manifests without the need for a
puppet master. It's ideal for testing new modules (puppet apply -e 'include <CLASS NAME>'),
but can also be used to manage an entire Puppet deployment in a masterless arrangement.
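For instance, a quick masterless test might look like this (the resource and file path are only examples):

```shell
# Apply a one-line manifest with no puppet master involved:
sudo puppet apply -e 'file { "/tmp/puppet-apply-test": ensure => present }'

# Or apply a saved manifest file:
sudo puppet apply /root/test.pp
```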
The puppet resource subcommand provides an interactive shell for manipulating Puppet's
underlying resource framework. It works well for one-off administration tasks and ad hoc
management, and offers an abstraction layer between various OSes' implementations of core
functionality.
$ sudo puppet resource package nano ensure=latest
notice: /Package[nano]/ensure: created
package { 'nano':
ensure => '1.3.12-1.1',
}

Advanced Tools

See the cloud provisioning chapter of this guide for more about the cloud provisioning tools.
See the orchestration chapter of this guide for more about the command-line orchestration
tools.
Next: Puppet Data Library

The Puppet Data Library


The Puppet Data Library (PDL) consists of two elements:
The large amount of data Puppet automatically collects about your infrastructure.
The formats and APIs Puppet uses to expose that data.
Sysadmins can access information from the PDL with their choice of tools, including familiar
scripting languages like Ruby, Perl, and Python. This data can be used to build custom reports, add
to existing data sets, or automate repetitive tasks.
Right now, the Puppet Data Library consists of three different data services:

PuppetDB
PuppetDB is a built-in part of PE 3.0 and later.
PuppetDB stores up-to-date copies of every nodes facts, resource catalogs, and run reports as part
of each Puppet run. External tools can easily query and search all of this data over a stable,
versioned HTTP query API. This is a more full-featured replacement for Puppet's older Inventory
Service interface, and it enables entirely new functionality like class, resource, and event searches.
See the documentation for PuppetDB's query API here.
Since PuppetDB receives all facts for all nodes, you can extend its data with custom facts on your
puppet master server.
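As a sketch of the query API, the following assumes you are on the PuppetDB host and that the plain-HTTP port (8080, typically bound to localhost in PE 3.x) is available; the node name is a placeholder:

```shell
# List every node PuppetDB knows about:
curl -s http://localhost:8080/v3/nodes

# Retrieve the stored facts for a single node:
curl -s http://localhost:8080/v3/nodes/web01.example.com/facts
```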

EXAMPLE: Using the old Puppet Inventory Service, a customer automated the validation and
reporting of their servers warranty status. Their automation regularly retrieved the serial
numbers of all servers in the data center, then checked them against the hardware vendor's
warranty database using the vendor's public API to determine the warranty status for each.
Using PuppetDB's improvements over the inventory API, it would also be possible to correlate
serial number data with what the machines were actually being used for, by getting lists of
the Puppet classes being applied to each machine.

Puppet Run Report Service



The Puppet Run Report Service provides push access to the reports that every node submits after
each Puppet run. By writing a custom report processor, you can divert these reports to any custom
service, which can use them to determine whether a Puppet run was successful, or dig deeply into
the specific changes for each and every resource under management for every node.
You can also write out-of-band report processors that consume the YAML files written to disk by
the puppet master's default report handler.
Learn more about the Puppet Run Report Service here.

Puppet Resource Dependency Graph


The Puppet Resource Dependency Graph provides a complete, mathematical graph of the
dependencies between resources under management by Puppet. These graphs, which are stored in
.dot format, can be used with any commercial or open source visualization tool to uncover hidden
linkages and help understand how your resources interconnect to provide working services.

EXAMPLE: Using the Puppet Resource Dependency Graph and Gephi, a visualization tool, a
customer identified unknown dependencies within a complicated set of configuration
modules. They used this knowledge to rewrite parts of the modules to get better
performance.
Learn more about the Puppet Resource Dependency Graph here.
Next: Puppet References

Puppet References
Puppet has a lot of moving parts and a lot of information to remember. The following resources will
help you keep the info you need at your fingertips and use Puppet effectively.

Terms and Concepts


The Puppet Glossary defines the common and obscure terms for pieces of the Puppet ecosystem.

Resource Types
Resource types are the atomic units of Puppet configurations, and there are a lot of them to
remember.
The Core Types Cheat Sheet is a fast, printable two-page guide to the most useful resource
types.
The Type Reference is the complete dictionary of Puppet's built-in resource types. No other page
will be more useful to you on a daily basis.

Puppet Syntax
The Puppet Language Reference covers every part of the Puppet language as of Puppet 3.x.

Configuration and Settings


Configuring Puppet describes Puppet's configuration files and covers the ten or so most useful
settings.
The Configuration Reference lists every single setting available to Puppet.
Next: Configuring Puppet Core

Configuring Puppet Core


Configuration Files
All of Puppet's configuration files can be found in /etc/puppetlabs/puppet/ on *nix systems. On
Windows, you can find them in Puppet's data directory.

References
For an exhaustive description of Puppet's configuration settings and auxiliary configuration
files, refer to the Configuring Puppet guide.
For details, syntax, and options for the available configuration settings, visit the configuration
reference.
For details on how to configure access to Puppet's pseudo-RESTful HTTP API, refer to the Access
Control Guide.

Note: If you haven't modified the auth.conf file, it may occasionally be modified when
upgrading between Puppet Enterprise versions. However, if you HAVE modified it, the
upgrader will not automatically overwrite your changes, and you may need to manually
update auth.conf to accommodate new Puppet Enterprise features. Be sure to read the
upgrade notes when upgrading your puppet master to new versions of PE.

Configuring Hiera
Puppet in PE includes full Hiera support, including automatic class parameter lookup.
The hiera.yaml file is located at /etc/puppetlabs/puppet/hiera.yaml on the puppet master
server.


See the Hiera documentation for details about the hiera.yaml config file format.
To use Hiera with Puppet Enterprise, you must, at minimum, edit hiera.yaml to set a :datadir
for the :yaml backend, ensure that the hierarchy is a good fit for your deployment, and create
data source files in the data directory.
To learn more about using Hiera, see the Hiera documentation.
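A minimal hiera.yaml along those lines might look like this (the hierarchy and datadir shown are illustrative choices, not required values):

```yaml
---
:backends:
  - yaml
:hierarchy:
  - "%{clientcert}"
  - common
:yaml:
  :datadir: /etc/puppetlabs/puppet/hieradata
```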

Disabling Update Checking


When the puppet master's web server (pe-httpd) starts or restarts, it checks for updates. To get the
correct update info, the server will pass some basic, anonymous info to Puppet Labs servers.
Specically, it will transmit:
the IP address of the client
the type and version of the client's OS
the installed version of PE
If you wish to disable update checks (e.g. if your company policy forbids transmitting this
information), you will need to add the following line to the
/etc/puppetlabs/installer/answers.install file:

q_pe_check_for_updates=n

Keep in mind that if you delete the /etc/puppetlabs/installer/answers.install file, update
checking will resume.
Next: Troubleshooting Puppet

Overview of Orchestration Topics


Puppet Enterprise includes an orchestration engine (MCollective), which can invoke many kinds of
action in parallel across any number of nodes. Several useful actions are available by default, and
you can easily add and use new actions.

Quick Links
Special orchestration tasks:
Controlling Puppet
Browsing and Searching Resources
General orchestration tasks:

Invoking Actions (In the PE Console)


Invoking Actions (Command Line)
List of Built-In Actions
Extending the orchestration engine:
Adding New Actions
Configuring the orchestration engine:
Configuring Orchestration

Note: Sometimes, newly added nodes won't respond immediately to orchestration


commands. These nodes will begin responding to orchestration commands about 30
minutes after Puppet Enterprise is installed. You can accelerate this by logging into the node
and running puppet agent --test as an admin user.

Orchestration Fundamentals
Actions and Plugins
Orchestration isn't quite like SSH, PowerShell, or other tools meant for running arbitrary shell code
in an ad-hoc way.
PE's orchestration is built around the idea of predefined actions; it is essentially a highly parallel
remote procedure call (RPC) system.
Actions are distributed in MCollective agent plugins, which are bundles of several related actions.
Many plugins are available by default; see Built-In Orchestration Actions.
You can extend the orchestration engine by downloading or writing new plugins and adding
them to the engine with Puppet.
Invoking Actions and Filtering Nodes
The core concept of PE's orchestration is invoking actions, in parallel, on a select group of nodes.
Typically, you choose some nodes to operate on (usually with a filter that describes the desired fact
values or Puppet classes), and specify an action and its arguments. The orchestration engine then
runs that action on the chosen nodes, and displays any data collected during the run.
Puppet Enterprise can invoke orchestration actions in two places:
In the PE console (on the live management page)
On the command line


You can also allow your site's custom applications to invoke orchestration actions.
Special Interfaces: Puppet Runs and Resources
In addition to the main action invocation interfaces, Puppet Enterprise provides special interfaces
for two of the most useful orchestration tasks:
Remotely controlling the puppet agent and triggering Puppet runs
Browsing and comparing resources across your nodes

Orchestration Internals
Components
The orchestration engine consists of the following parts:
The pe-activemq service (which runs on the puppet master server) routes all orchestration-related messages.
The pe-mcollective service (which runs on every agent node) listens for authorized commands
and invokes actions in response. It relies on the available agent plugins for its set of possible
actions.
The mco command (available to the peadmin user account on the puppet master server) and the
live management page of the PE console can issue authorized orchestration commands to any
number of nodes.
Configuration
See the Configuring Orchestration page.
Security
The orchestration engine in Puppet Enterprise 3.0 uses the same security model as the
recommended standard MCollective deployment. See the security model section on the
MCollective standard deployment page for a more detailed rundown of these security measures.
In short, all commands and replies are encrypted in transit, and only a few authorized clients are
permitted to send commands. By default, PE allows orchestration commands to be sent by:
Read/write and admin users of the PE console
Users able to log in to the puppet master server with full administrator sudo privileges
If you extend orchestration by integrating external applications, you can limit the actions each
application has access to by distributing policy files; see the Configuring Orchestration page for
more details.
You can also allow additional users to log in as the peadmin user on the puppet master, usually by
distributing standard SSH public keys.

Network Traffic
Every node (including all agent nodes, the puppet master server, and the console) needs the ability
to initiate connections to the puppet master server over TCP port 61613. See the notes on firewall
configuration in the System Requirements chapter of this guide for more details about PE's
network traffic.
Next: Invoking Actions

Invoking Orchestration Actions


About This Page
Puppet Enterprise (PE) has two ways to invoke orchestration actions:
The live management page of the PE console
The Linux command line on the puppet master server
This page covers only the command line. See the Navigating Live Management page of this manual
for instructions on using live management to invoke actions.

Note: Although you will be running these commands on the Linux command line, they can
invoke orchestration actions on both *nix and Windows machines.

MCollective Documentation
Puppet Enterprise's orchestration engine, MCollective, has its own section of the documentation
site, which includes more complete details and examples for command line orchestration usage.
This page covers basic CLI usage and all PE-specific information; for more details, see the following
pages from the MCollective docs:
MCollective Command Line Usage
Filtering

Logging In as peadmin
To run orchestration commands, you must log in to the puppet master server as the special
peadmin user account, which is created during installation.

Note: Puppet Enterprise 3.0 does not support adding more orchestration user accounts.
This means that, while it is possible (albeit complex) to allow other accounts on other
machines to invoke orchestration actions, upgrading to a future version of PE may disable
access for these extra accounts, requiring you to re-enable them manually. We do not
provide instructions for enabling extra orchestration accounts.
By default, the peadmin account cannot log in with a password. We recommend two ways to log in:
Using Sudo
Anyone able to log into the puppet master server as an admin user with full root sudo privileges
can become the peadmin user by running:

$ sudo -i -u peadmin

This is the default way to log in as the peadmin user. It means that orchestration commands can
only be issued by the group of users who can fully control the puppet master.
Adding SSH Keys
If you wish to allow other users to run orchestration commands without giving them full control
over the puppet master, you can add their public SSH keys to peadmin's authorized keys file.
You can use Puppet's ssh_authorized_key resource type to do this, or add keys manually to the
/var/lib/peadmin/.ssh/authorized_keys le.
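A sketch of managing such a key with the resource type just mentioned (the key name, type, and key body here are placeholders):

```puppet
ssh_authorized_key { 'jane@workstation':
  ensure => present,
  user   => 'peadmin',
  type   => 'ssh-rsa',
  key    => 'AAAAB3Nza...',  # the public key body only, without type or comment
}
```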

The mco Command


All orchestration actions are invoked with the mco executable. The mco command always requires a
subcommand to invoke actions.

Note: For security, the mco command relies on a config file
(/var/lib/peadmin/.mcollective) which is only readable by the peadmin user. PE
automatically configures this file; it usually shouldn't be modified by users.

Subcommands
The mco command has several subcommands, and it's possible to add more; run mco help for a
list of all available subcommands. The default subcommands in Puppet Enterprise 3.0 are:
Main subcommand:
rpc
This is the general purpose orchestration client, which can invoke actions from any MCollective
agent plugin.

Special-purpose subcommands:
These subcommands only invoke certain kinds of actions, but have some extra UI enhancements to
make them easier to use than the equivalent mco rpc command.
puppet
package
service
Help and support subcommands:
These subcommands can display information about the available agent plugins and subcommands.
help: displays help for subcommands.
plugin: the mco plugin doc command can display help for agent plugins.
completion: a helper for shell completion systems.
Inventory and reporting subcommands:
These subcommands can retrieve and summarize information from Puppet Enterprise agent nodes.
ping: pings all matching nodes and reports on response times
facts: displays a summary of values for a single fact across all systems
inventory: general reporting tool for nodes, collectives, and subcollectives
find: like ping, but doesn't report response times

Getting Help on the Command Line


You can get information about subcommands, actions, and other plugins on the command line.
Subcommand Help
Use one of the following commands to get help for a specic subcommand:
$ mco help <SUBCOMMAND>
$ mco <SUBCOMMAND> --help

List of Plugins
To get a list of the available plugins, which includes MCollective agent plugins, data query plugins,
discovery methods, and validator plugins, run mco plugin doc.
Agent Plugin Help
Related orchestration actions are bundled together in MCollective agent plugins. (Puppet-related
actions are all in the puppet plugin, etc.)


To get detailed info on a given plugins actions and their required inputs, run:
$ mco plugin doc <PLUGIN>

If there is also a data plugin with the same name, you may need to prepend agent/ to the plugin
name to disambiguate:
$ mco plugin doc agent/<PLUGIN>

Invoking Actions
Orchestration actions are invoked with either the general purpose rpc subcommand or one of the
special-purpose subcommands. Note that unless you specify a filter, orchestration commands will
be run on every server in your Puppet Enterprise deployment; make sure you know what will
happen before confirming any potentially disruptive commands. For more info on filters, see
Filtering Actions below.
The rpc Subcommand
The most useful subcommand is mco rpc. This is the general purpose orchestration client, which
can invoke actions from any MCollective agent plugin. See List of Built-In Actions for more
information about agent plugins.
Example:
$ mco rpc service restart service=httpd

The general form of an mco rpc command is:

$ mco rpc <AGENT PLUGIN> <ACTION> <INPUT>=<VALUE>

For a list of available agent plugins, actions, and their required inputs, see List of Built-In Actions
or the Getting Help header above.
Special-Purpose Subcommands
Although mco rpc can invoke any action, sometimes a special-purpose application can provide a
more convenient interface.

Example:

$ mco puppet runall 5

The puppet subcommand's special runall action is able to run Puppet on many nodes without
exceeding a certain load of concurrent runs. It does this by repeatedly invoking the puppet
agent's status action, and only sending a runonce action to the next node if there's enough
room in the concurrency limit.
This uses the same actions that the mco rpc command can invoke, but since rpc doesn't
know that the output of the status action is relevant to the timing of the runonce action, it
can't provide that improved UI.
Each special-purpose subcommand (puppet, service, and package) has its own CLI syntax. For
example, mco service puts the name of the service before the action, to mimic the format of the
more common platform-specific service commands:
$ mco service httpd status

Run mco help <SUBCOMMAND> to get specic help for each subcommand.

Filtering Actions
By default, orchestration actions affect all PE nodes. You can limit any action to a smaller set of
nodes by specifying a filter.
$ mco service pe-httpd status --with-fact fact_is_puppetconsole=true

Note: For more details about filters, see the following pages from the MCollective docs:
MCollective CLI Usage: Filters
Filtering

All command line orchestration actions can accept the same filter options, which are listed under
the Host Filters section of any mco help <SUBCOMMAND> text:

Host Filters
-W, --with FILTER Combined classes and facts filter
-S, --select FILTER Compound filter combining facts and
classes
-F, --wf, --with-fact fact=val Match hosts with a certain fact
-C, --wc, --with-class CLASS Match hosts with a certain config
management class
-A, --wa, --with-agent AGENT Match hosts with a certain agent
-I, --wi, --with-identity IDENT Match hosts with a certain configured
identity

Each type of filter lets you specify a type of metadata and a desired value. The orchestration action
will only run on nodes where that data has that desired value.
Any number of fact, class, and agent filters can also be combined in a single command; nodes
must then match every filter to run the action.
Matching Strings and Regular Expressions
Filter values are usually simple strings. These must match exactly and are case-sensitive.
Most filters can also accept regular expressions as their values; these are surrounded by forward
slashes, and are interpreted as standard Ruby regular expressions. (You can even turn on various
options for a subpattern, such as case insensitivity: -F "osfamily=/(?i:redhat)/".) Unlike plain
strings, they accept partial matches.
Filtering by Identity
A node's identity is the same as its Puppet certname, as specified during installation. Identities will
almost always be unique per node.
$ mco puppet runonce -I web3balancer.example.com
You can use the -I or --with-identity option multiple times to create a filter that matches
multiple specic nodes.
You cannot combine the identity filter with other filter types.
The identity filter accepts regular expressions.
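For example (the certnames below are placeholders):

```shell
# Trigger a run on two specific nodes by certname:
mco puppet runonce -I web3balancer.example.com -I web4balancer.example.com

# Or match a family of certnames with a regular expression:
mco find -I /^web\d+/
```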
Filtering by Fact, Class, and Agent
Facts are the standard Puppet Enterprise facts, which are available in your Puppet manifests and
can be viewed as inventory information in the PE console. A list of the core facts is available here.
Use the -F or --with-fact option with a fact=value pair to filter on facts.
Classes are the Puppet classes that are assigned to a node. This includes classes assigned in the
console, assigned via Hiera, declared in site.pp, or declared indirectly by another class. Use the
-C or --with-class option with a class name to filter on classes.
Agents are MCollective agent plugins. Puppet Enterprises default plugins are available on every
node, so filtering by agent makes more sense if you are distributing custom plugins to only a
subset of your nodes. For example, if you made an emergency change to a custom plugin that
you distribute with Puppet, you could lter by agent to trigger an immediate Puppet run on all
affected systems (mco puppet runall 5 -A my_agent). Use the -A or --with-agent option to
filter on agents.

Since mixing classes and facts is so common, you can also use the -W or --with option to supply a
mixture of class names and fact=value pairs.
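For instance, a combined -W filter might look like this (the class and fact values are illustrative):

```shell
# Run Puppet only on RedHat-family nodes that have the apache class:
mco puppet runonce -W "apache osfamily=RedHat"
```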
Compound Select Filters
The -S or --select option accepts arbitrarily complex filters. Like -W, it can accept a mixture of
class names and fact=value pairs, but it has two extra tricks:
BOOLEAN LOGIC

The -W filter always combines facts and classes with "and" logic; nodes must match all of the
criteria to match the filter.
The -S filter lets you combine values with nested Boolean and/or/not logic:

$ mco service httpd restart -S "((customer=acme and osfamily=RedHat) or
domain=acme.com) and /apache/"
DATA PLUGINS

In addition, the -S filter lets you use data plugin queries as an additional kind of metadata.
Data plugins can be tricky, but are very powerful. To use them effectively, you must:
1. Check the list of data plugins with mco plugin doc.
2. Read the help for the data plugin you want to use, with mco plugin doc data/<NAME>. Note any
required input and the available outputs.
3. Use the rpcutil plugin's get_data action on a single node to check the format of the output
you're interested in. This action requires source (the plugin name) and query (the input)
arguments:
$ mco rpc rpcutil get_data source="fstat" query="/etc/hosts" -I web01

This will show all of the outputs for that plugin and input on that node.
4. Construct a query fragment of the format <PLUGIN>('<INPUT>').<OUTPUT>=<VALUE>; note the
parentheses, the fact that the input must be in quotes, the .output notation, and the equals
sign. Make sure the value you're searching for matches the expected format, which you saw
when you did your test query.
5. Use that fragment as part of a -S lter:

$ mco find -S "fstat('/etc/hosts').md5=/baa3772104/ and osfamily=RedHat"

You can specify multiple data plugin query fragments per -S lter.


The MCollective documentation includes a page on writing custom data plugins. Installing
custom data plugins is similar to installing custom agent plugins; see Adding New Actions
for details.

Testing Filters With mco find


Before invoking any potentially disruptive action, like a service restart, you should test the filter with
mco find or mco ping, to make sure your command will act on the nodes you expect.
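For example, before restarting a service on nodes with a given class (the class name is illustrative):

```shell
# First, list the nodes the filter would match:
mco find -C apache

# Only then run the disruptive action with the same filter:
mco service httpd restart -C apache
```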

Batching and Limiting Actions


By default, orchestration actions run simultaneously on all of the targeted nodes. This is fast and
powerful, but is sometimes not what you want:
Sometimes you want the option to cancel out of an action with control-C before all nodes have
run it.
Sometimes, like when retrieving inventory data, you want to run a command on just a sample of
nodes and don't need to see the results from everything that matches the filter.
Certain actions may consume limited capacity on a shared resource (such as the puppet master
server), and invoking them on a thundering herd of nodes can disrupt that resource.
In these cases, you can batch actions, to run all of the matching nodes in a controlled series, or limit
them, to run only a subset of the matching nodes.
Batching
Use the --batch <SIZE> option to invoke an orchestration action on only <SIZE> nodes at once.
PE will invoke it on the first <SIZE> nodes, wait briefly, invoke it on the next batch, and so on.
Use the --batch-sleep <SECONDS> option to control how long PE should sleep between batches.
Limiting
Use the --limit <COUNT> option to invoke an action on only <COUNT> matching nodes. <COUNT>
can be an absolute number or a percentage. The nodes will be chosen randomly.
Use the -1 or --one option to invoke an action on just one matching node, chosen randomly.
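For example (the service name and numbers are illustrative):

```shell
# Restart httpd in batches of 10 nodes, sleeping 30 seconds between batches:
mco service httpd restart --batch 10 --batch-sleep 30

# Collect a fact from a random 20% sample of matching nodes:
mco facts osfamily --limit 20%
```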
Next: Controlling Puppet

Orchestration: Controlling Puppet


Puppet Enterprise's (PE's) configuration management features rely on the puppet agent service,
which runs on every node and fetches configurations from the puppet master server. (See the
Puppet section of this manual for more details.)

By default, puppet agent idles in the background and performs a run every 30 minutes, but the
orchestration engine can give complete control over this behavior. See the table of contents above
for an overview of the available features.

Note: The orchestration engine cannot trigger a node's very first puppet agent run. A node's
first run will happen automatically within 30 minutes after you sign its certificate.

Basics
Invoking Actions
The orchestration engine can control Puppet from the PE console and from the puppet master
server's Linux command line. These interfaces don't have identical capabilities, so this page will call
out any differences when applicable.

See the following pages for basic instructions on invoking actions, including how to log in:
Invoking Actions on the Command Line
Navigating Live Management

In the console, most of these tasks use the Control Puppet tab of the live management page, which
behaves much like the Advanced Tasks tab. On the command line, most of these tasks use the mco
puppet subcommand.


The Puppet Agent Service


In PE 3.0, puppet agent runs in the background as a system service.
On *nix nodes, this service is named pe-puppet.
On Windows nodes, this service's display name is Puppet Agent and its short name is pe-puppet.
Agent Status: Enabled, Disabled, etc.
Puppet agent can be in many possible states, which are represented by three attributes:
Running or stopped: whether the agent service (pe-puppet) is running in the background.
Even if it's running, the service may or may not be doing anything at the moment. If the service is
stopped, no scheduled runs will occur but you can still trigger on-demand runs.
Applying, idling, or neither: whether puppet agent is in the process of applying a
configuration. Idling is only applicable if the service is running, but Puppet may be applying an
on-demand configuration even if the service is stopped.
Enabled or disabled: whether there's a lockfile preventing puppet agent from performing any
configuration runs. If puppet agent is disabled, the service can idle in the background but no
configurations can be applied; even on-demand runs will be rejected until the agent is re-enabled.
The orchestration engine can trigger on-demand Puppet runs unless the agent is applying or
disabled. Scheduled runs will only take place if the agent is both running and enabled.

Run Puppet on Demand


Use the runonce action to trigger an immediate Puppet run on a few nodes. If you need to run
Puppet on many nodes (more than ten), see the Run Puppet on Many Nodes in a Controlled Series section below.

Behavior Dierences: Running vs. Stopped


You can trigger on-demand Puppet runs whether the pe-puppet service is running or
stopped, but on *nix nodes these cases will behave slightly differently:
When the service is running, all of the selected nodes will begin a run immediately, and
you cannot specify any special options like noop or tags; they will be ignored. This
behavior is usually fine but sometimes undesirable.
When the service is stopped, the selected nodes will randomly stagger the start of their
runs (splay) over a default interval of two minutes. If you wish, you can specify special
options, including a longer interval (splaylimit). You can also set the force option to
true if you want the selected nodes to start immediately. This behavior is more flexible
and resilient.
This difference only affects *nix nodes; Windows nodes always behave like a stopped *nix
node. The difference will be addressed in a future version of PE; for now, you may wish to
stop the pe-puppet service before trying to do noop or tags runs.

In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the runonce action. Enter any arguments, and click the red Run button.

ARGUMENTS

If the agent service is stopped (on affected *nix nodes; see above), you can change the way Puppet
runs by specifying optional arguments:
Force (true/false): Ignore the default splay and run all nodes immediately.
Server: Contact a different puppet master than normal. Useful for testing new manifests (or a
new version of PE) on a subset of nodes.
Tags (comma-separated list of tags): Apply only resources with these tags. Tags can be class
names, and this is a fast way to test changes to a single class without performing an entire
Puppet run.
Noop (true/false): Only simulate changes, and submit a report describing what would have
changed in a real run. Useful for safely testing new manifests. If you have configured puppet
agent to always run in no-op mode (via /etc/puppetlabs/puppet/puppet.conf), you can set
this to false to do an enforcing Puppet run.
Splay (true/false): Defaults to true. Whether to stagger runs over a period of time.
Splaylimit (in seconds): The period of time over which to randomly stagger runs. The more
nodes you are running at once, the longer this should be.
Environment: The Puppet environment in which to run. Useful for testing new manifests on a
subset of nodes.
On the Command Line
While logged in to the puppet master server as peadmin, run the mco puppet runonce command.

$ mco puppet runonce -I web01.example.com -I web02.example.com


$ mco puppet runonce -F kernelversion=2.6.32

Be sure to specify a filter to limit the number of nodes; you should generally invoke this action on
fewer than 10 nodes at a time, especially if the agent service is running and you cannot specify
extra options (see above).
EXTRA OPTIONS

If the agent service is stopped (on affected *nix nodes; see above), you can change the way Puppet
runs with command-line options. You can see a list of these by running mco puppet --help.

--force                    Bypass splay options when running
--server SERVER            Connect to a specific server or port
--tags, --tag TAG          Restrict the run to specific tags
--noop                     Do a no-op run
--no-noop                  Do a run with no-op disabled
--environment ENVIRONMENT  Place the node in a specific environment for this run
--splay                    Splay the run by up to splaylimit seconds
--no-splay                 Do a run with splay disabled
--splaylimit SECONDS       Maximum splay time for this run if splay is set
--ignoreschedules          Disable schedule processing

The most useful options are:


--noop, which causes puppet agent to only simulate changes, and submit a report describing
what would have changed in a real run. Useful for safely testing new manifests. If you have
configured puppet agent to always run in no-op mode (via
/etc/puppetlabs/puppet/puppet.conf), you can use --no-noop to do an enforcing Puppet run.
--environment ENVIRONMENT, which causes puppet agent to run in the specified environment.
Also useful for testing new manifests on a subset of nodes.
--tags TAGS, which takes a comma-separated list of tags and applies only resources with those
tags. Tags can be class names, and this is a fast way to test changes to a single class without
performing an entire Puppet run.
--server SERVER, which causes puppet agent to contact a different puppet master than normal.
Also useful for testing new manifests (or a new version of PE) on a subset of nodes.
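These options can be combined when testing changes. As a hedged sketch (the tag, environment name, and fact filter below are hypothetical examples, not values from this guide):

```shell
# Simulate (no-op) a run of only the 'apache' class, against the 'staging'
# environment, on nodes whose 'role' fact is 'web'. Remember that these
# options only take effect on *nix nodes where pe-puppet is stopped.
$ mco puppet runonce --noop --tags apache --environment staging -F role=web
```
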
Back to top

Run Puppet on Many Nodes in a Controlled Series


Note: In PE 3.0, this feature is only available on the command line; you cannot do a
controlled run series in the console.
If you want to trigger a run on a large number of nodes (more than ten), the runonce action isn't
always the best tool. You can splay or batch the runs, but this requires you to guess how long each
run is going to take, and a wrong guess can either waste time or temporarily overwhelm the puppet
master server.
Instead, use the special runall action of the mco puppet subcommand.

$ mco puppet runall 5 -F operatingsystem=CentOS -F operatingsystemrelease=6.4

This action requires an argument, which must be the number of nodes allowed to run at once. It
invokes a run on that many nodes, then only starts the next node when one has finished. This
prevents your puppet master from being overwhelmed by the herd and will delay only as long as is
necessary. The ideal concurrency will vary from site to site, depending on how powerful your
puppet master server is and how complex your configurations are.
The runall action can take extra options like --noop as described for the runonce action; however,
note that restrictions still apply for *nix nodes where the pe-puppet service is running.
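As a sketch of combining runall with an extra option (the class filter here is a hypothetical example):

```shell
# Run at most 10 nodes at a time, in no-op mode, on nodes that have the
# 'apache' class. runall accepts the same extra options as runonce.
$ mco puppet runall 10 --noop -C /apache/
```
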
Back to top

Enable and Disable Puppet Agent


Disabling Puppet will block all Puppet runs, including both scheduled and on-demand runs. This is
usually used while you investigate some kind of problem. Use the enable and disable actions of
the puppet plugin.
The disable action accepts an optional reason for the lockdown; take advantage of this to keep
your colleagues informed. The reason will be shown when checking Puppet's status on those
nodes.
After a node has been disabled for an hour, it will appear as unresponsive in the console's node
views, and will stay that way until it is re-enabled.
In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the enable or disable action. Enter a reason (if disabling), and click the red
Run button.
On the Command Line
While logged in to the puppet master server as peadmin, run mco puppet disable or mco puppet
enable with or without a filter.
Example: You noticed Puppet runs failing on a load balancer and expect they'll start failing on the
other ones too:
$ mco puppet disable "Investigating a problem with the haproxy module. -NF" -C /haproxy/

Back to top

Start and Stop the Puppet Agent Service


You can start or stop the pe-puppet service with the start and stop actions of the service plugin.
This can be useful if you need to do no-op runs, or if you wish to stop all scheduled runs and only
run puppet agent on demand.
In the Console
While logged in as a read/write or admin user, navigate to the Advanced Tasks tab, filter and select
your nodes, choose the Service action list, and click the start or stop action. Click the red Run
button.
On the Command Line
While logged in to the puppet master server as peadmin, run mco service pe-puppet stop or mco
service pe-puppet start with or without a lter.
Example: To prepare all web servers for a manifest update and no-op, run:
$ mco service pe-puppet stop -C /apache/

Back to top
View Puppet Agent's Status


Note: Although you can view status on both the console and the command line, the
command line currently gives much better summaries when checking large numbers of
nodes.
As mentioned above, puppet agent can be in various states. The orchestration engine lets you
check the current status on any number of nodes.
In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the status action. Click the red Run button.

Note that on disabled nodes, the reason for disabling is shown in the disable_message field.
On the Command Line
AGGREGATE STATUS

While logged in to the puppet master server as peadmin, run mco puppet status with or without a
filter. This returns an abbreviated status for each node and a summarized breakdown of how many
nodes are in which conditions.
$ mco puppet status
VIEWING DISABLE MESSAGES

The one thing mco puppet status doesn't show is the reason why puppet agent was disabled. If
you're checking up on disabled nodes, you can get a more raw view of the status by running mco
rpc puppet status instead. This will display the reason in the Lock Message field.
Example: Get the detailed status for every disabled node, using the puppet data plugin:

$ mco rpc puppet status -S "puppet().enabled=false"

Back to top

View Statistics About Recent Runs


Note: Detailed statistics are available on both the console and the command line, but the
population summary graphs are only available on the command line.
Puppet keeps records of the last run, including the amount of time spent per resource type, the
number of changes, number of simulated changes, time since last run, etc. You can retrieve and
summarize these statistics with the orchestration engine.
In the Console
While logged in as a read/write or admin user, navigate to the Control Puppet tab, filter and select
your nodes, and click the last_run_summary action. Click the red Run button.
Usually, you should use the graphs and reports on the console's node views to investigate previous
Puppet runs; they are more detailed, and provide more historical context.

On the Command Line


POPULATION SUMMARY GRAPHS

You can get sparkline graphs for the last run statistics across all your nodes with the mco puppet
summary command. This shows the distribution of your nodes, so you can see whether a significant
group is taking notably longer or seeing more changes.
$ mco puppet summary

Summary statistics for 10 nodes:

                  Total resources: min: 93.0   max: 155.0
            Out Of Sync resources: min: 0.0    max: 0.0
                 Failed resources: min: 0.0    max: 0.0
                Changed resources: min: 0.0    max: 0.0
 Config Retrieval time (seconds): min: 1.9    max: 5.8
          Total run-time (seconds): min: 2.2    max: 6.7
   Time since last run (seconds): min: 314.0  max: 23.4k
DETAILED STATISTICS

While logged in to the puppet master server as peadmin, run mco rpc puppet last_run_summary
with or without a filter. This returns detailed run statistics for each node. (Note that this uses the
rpc subcommand instead of the puppet subcommand.)
Next: Browsing Resources

Orchestration: Browsing and Comparing Resources

Use the live management page's Browse Resources tab to browse the resources on your nodes and
inspect their current state.

Note: Resource browsing and comparison are only available in the PE console; there is not a
command line interface for these features.
If you need to do simple resource inspections on the command line, you can investigate the
puppetral plugin's find and search actions. These give output similar to what you can get
from running puppet resource <type> [<name>] locally.
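As a hedged sketch of those puppetral actions (assuming the standard peadmin login on the puppet master; the resource titles and node name are hypothetical examples):

```shell
# Show the 'root' user resource on one node, similar to running
# 'puppet resource user root' locally on that node
$ mco rpc puppetral find type=user title=root -I web01.example.com

# Get detailed info for every service resource on the filtered nodes
$ mco rpc puppetral search type=service -F role=web
```
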

Live Management Basics


Browsing resources requires you to select a node or group of nodes to inspect.
To learn how to navigate the live management page and select/lter nodes, see the Navigating Live
Management page of this manual.

The Browse Resources Tab


The Browse Resources tab contains a resource type navigation list in its left pane. This is used to
switch the right pane between several resource type pages (and a summary page, which includes an
Inspect All button for pre-caching resource data).

Resource Types
The Browse Resources tab can inspect the following resource types:
group
host
package
service
user
For an introduction to resources and types, please see the Resources chapter of Learning Puppet.
The Inspect All Button


The summary view's Inspect All button scans all resources of all types and reports on their
similarity. This is mostly useful when you think you've selected a group of very similar nodes but
want to make sure.

After clicking Inspect All, the Browse Resources tab will use the lists of resources it got to prepopulate the corresponding lists in each resource type page. This can save you a few clicks on the
Find Resources buttons (see below).
Resource Type Pages
Resource type pages contain a search field, a Find Resources button, and (if the Find Resources
button has been used) a list of resources labeled with their nodes and number of variants.

Browsing All Resources of a Type


To browse resources, you must first select a resource type. You must also have one or more nodes
selected.
If you have previously clicked the Inspect All button, the resource type page will be pre-populated;
if it is empty, you must click the Find Resources button.

The resource type page will display a list of all resources of that type on the selected nodes, plus a
summary of how similar the resources are. An Update button is available for re-scanning your
nodes. In general, a set of nodes that perform similar tasks should have very similar resources.
The resource list shows the name of each resource, the number of nodes it was found on, and how
many variants of it were found. You can sort the list by any of these properties by clicking the
headers.
To inspect a resource, click its name.

Finding Resources by Name


To find resources by name, you must first select a resource type. You must also have one or more
nodes selected.
The search field on a resource type page is not a standard search field; it only works with the exact
name of a resource. Wildcards are not allowed. If you are unsure of the name of the resource you're
looking for, you should browse instead.

To search, enter a resource name in the search field and confirm with the enter key or the search
button.

Once located, you will be taken directly to the inspect view for that resource. This is the same as the
inspect view available when browsing (see below).

Inspecting and Comparing Resources

When you inspect a resource, you can see the values of all its properties. If there is more than one
variant, you can see all of them, and the properties that differ across nodes will be highlighted.
To see which nodes have each variant, click the "on N nodes" labels to expand the node lists.

Next: List of Built-In Orchestration Actions

List of Built-In Orchestration Actions


About This Page
This page is a comprehensive list of Puppet Enterprise (PE)'s built-in orchestration actions. These
actions can be invoked on the command line or in the PE console.

Related Topics
For an overview of orchestration topics, see the Orchestration Overview page.
To invoke actions in the PE console, see Navigating Live Management.
To invoke actions on the command line, see Invoking Actions.
To add your own actions, see Adding Orchestration Actions.

Actions and Plugins


Sets of related actions are bundled together as MCollective agent plugins. Every action is part of a
plugin.
A default Puppet Enterprise install includes the package, puppet, puppetral, rpcutil, and service
plugins. See the table of contents above for an outline of each plugin's actions; click an action for
details about its inputs, effects, and outputs.
You can easily add new orchestration actions by distributing custom MCollective agent plugins to
your nodes. See Adding Orchestration Actions for details.
Back to top

The package Plugin


Install and uninstall software packages
Actions: apt_checkupdates, apt_update, checkupdates, install, purge, status, uninstall,
update, yum_checkupdates, yum_clean
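These actions can be invoked directly with the rpc subcommand. A hedged sketch (the package and node names are hypothetical examples):

```shell
# Check the installed version of openssl on one node
$ mco rpc package status package=openssl -I web01.example.com

# Check every node for pending package updates
$ mco rpc package checkupdates
```
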
apt_checkupdates
Check for APT updates
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)


The exitcode from the apt command
outdated_packages
(Appears as Outdated Packages on CLI)
Outdated packages
output
(Appears as Output on CLI)
Output from APT
Back to top

apt_update
Update the apt cache
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the apt-get command
output
(Appears as Output on CLI)
Output from apt-get
Back to top

checkupdates
Check for updates
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the package manager command


outdated_packages
(Appears as Outdated Packages on CLI)
Outdated packages
output
(Appears as Output on CLI)
Output from Package Manager
package_manager
(Appears as Package Manager on CLI)
The detected package manager
Back to top

install
Install a package
Input:
package (required)
Package to install
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch

(Appears as Epoch on CLI)


Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
Back to top

purge
Purge a package
Input:
package (required)
Package to purge
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:

arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
Back to top

status
Get the status of a package
Input:
package (required)
Package to retrieve the status of
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
Back to top

uninstall
Uninstall a package
Input:
package (required)
Package to uninstall
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number
name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
Back to top

update
Update a package
Input:
package (required)
Package to update
Type: string
Format/Validation: shellsafe
Length: 90
Outputs:
arch
(Appears as Arch on CLI)
Package architecture
ensure
(Appears as Ensure on CLI)
Full package version
epoch
(Appears as Epoch on CLI)
Package epoch number


name
(Appears as Name on CLI)
Package name
output
(Appears as Output on CLI)
Output from the package manager
provider
(Appears as Provider on CLI)
Provider used to retrieve information
release
(Appears as Release on CLI)
Package release number
version
(Appears as Version on CLI)
Version number
Back to top

yum_checkupdates
Check for YUM updates
(no inputs)
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the yum command
outdated_packages
(Appears as Outdated Packages on CLI)
Outdated packages
output
(Appears as Output on CLI)
Output from YUM
Back to top

yum_clean
Clean the YUM cache
Input:
mode
One of the various supported clean modes
Type: list
Valid Values: all, headers, packages, metadata, dbcache, plugins, expire-cache
Outputs:
exitcode
(Appears as Exit Code on CLI)
The exitcode from the yum command
output
(Appears as Output on CLI)
Output from YUM
Back to top

The puppet Plugin


Run Puppet agent, get its status, and enable/disable it
Actions: disable, enable, last_run_summary, resource, runonce, status
disable
Disable the Puppet agent
Input:
message
Supply a reason for disabling the Puppet agent


Type: string
Format/Validation: shellsafe
Length: 120
Outputs:
enabled
(Appears as Enabled on CLI)
Is the agent currently locked
status
(Appears as Status on CLI)
Status
Back to top

enable
Enable the Puppet agent
(no inputs)
Outputs:
enabled
(Appears as Enabled on CLI)
Is the agent currently locked
status
(Appears as Status on CLI)
Status
Back to top

last_run_summary
Get the summary of the last Puppet run
(no inputs)
Outputs:
changed_resources
(Appears as Changed Resources on CLI)
Resources that were changed
config_retrieval_time
(Appears as Config Retrieval Time on CLI)
Time taken to retrieve the catalog from the master
config_version
(Appears as Config Version on CLI)
Puppet config version for the previously applied catalog
failed_resources
(Appears as Failed Resources on CLI)
Resources that failed to apply
lastrun
(Appears as Last Run on CLI)
When the Agent last applied a catalog in local time
out_of_sync_resources
(Appears as Out of Sync Resources on CLI)
Resources that were not in desired state
since_lastrun
(Appears as Since Last Run on CLI)
How long ago did the Agent last apply a catalog in local time
summary
(Appears as Summary on CLI)
Summary data as provided by Puppet
total_resources
(Appears as Total Resources on CLI)
Total resources managed on a node
total_time
(Appears as Total Time on CLI)
Total time taken to retrieve and process the catalog
type_distribution
(Appears as Type Distribution on CLI)
Resource counts per type managed by Puppet
Back to top

resource
Evaluate Puppet RAL resources
Inputs:
name (required)
Resource Name
Type: string
Format/Validation: ^.+$
Length: 150
type (required)
Resource Type
Type: string
Format/Validation: ^.+$
Length: 50
Outputs:
changed
(Appears as Changed on CLI)
Was a change applied based on the resource
result
(Appears as Result on CLI)
The result from the Puppet resource
Back to top
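As a hedged sketch of invoking this action (the resource type and name are hypothetical examples; note that evaluating a resource through the RAL can change system state, not just read it, so use it with care):

```shell
# Evaluate the 'ntp' service resource through Puppet's resource
# abstraction layer on the filtered nodes
$ mco rpc puppet resource type=service name=ntp -F role=web
```
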
runonce
Invoke a single Puppet run
Inputs:
environment
Which Puppet environment to run
Type: string
Format/Validation: puppet_variable
Length: 50
force
Will force a run immediately else is subject to default splay time
Type: boolean
noop
Do a Puppet dry run
Type: boolean
server
Address and port of the Puppet Master in server:port format
Type: string
Format/Validation: puppet_server_address
Length: 50
splay
Sleep for a period before initiating the run
Type: boolean
splaylimit
Maximum amount of time to sleep before run
Type: number
tags
Restrict the Puppet run to a comma list of tags
Type: string
Format/Validation: puppet_tags
Length: 120
Output:

summary
(Appears as Summary on CLI)
Summary of command run
Back to top

status
Get the current status of the Puppet agent
(no inputs)
Outputs:
applying
(Appears as Applying on CLI)
Is a catalog being applied
daemon_present
(Appears as Daemon Running on CLI)
Is the Puppet agent daemon running on this system
disable_message
(Appears as Lock Message on CLI)
Message supplied when agent was disabled
enabled
(Appears as Enabled on CLI)
Is the agent currently locked
idling
(Appears as Idling on CLI)
Is the Puppet agent daemon running but not doing any work
lastrun
(Appears as Last Run on CLI)
When the Agent last applied a catalog in local time
since_lastrun
(Appears as Since Last Run on CLI)


How long ago did the Agent last apply a catalog in local time
status
(Appears as Status on CLI)
Current status of the Puppet agent
Back to top

The puppetral Plugin


View resources with Puppet's resource abstraction layer
Actions: find, search
find
Get the attributes and status of a resource
Inputs:
title (required)
Name of resource to check
Type: string
Format/Validation: .
Length: 90
type (required)
Type of resource to check
Type: string
Format/Validation: .
Length: 90
Outputs:
exported
(Appears as Exported on CLI)
Boolean flag indicating export status
managed
(Appears as Managed on CLI)
Flag indicating managed status


parameters
(Appears as Parameters on CLI)
Parameters of the inspected resource
tags
(Appears as Tags on CLI)
Tags of the inspected resource
title
(Appears as Title on CLI)
Title of the inspected resource
type
(Appears as Type on CLI)
Type of the inspected resource
Back to top

search
Get detailed info for all resources of a given type
Input:
type (required)
Type of resource to check
Type: string
Format/Validation: .
Length: 90
Output:
result
(Appears as Result on CLI)
The values of the inspected resources
Back to top
The rpcutil Plugin


General helpful actions that expose stats and internals to SimpleRPC clients
Actions: agent_inventory, collective_info, daemon_stats, get_config_item, get_data,
get_fact, inventory, ping
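A hedged sketch of two of these actions (assuming the standard peadmin login; the fact and node names are hypothetical examples):

```shell
# Retrieve a single fact from every node
$ mco rpc rpcutil get_fact fact=operatingsystem

# Get a full inventory (agents, classes, facts) from one node
$ mco rpc rpcutil inventory -I web01.example.com
```
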
agent_inventory
Inventory of all agents on the server
(no inputs)
Output:
agents
(Appears as Agents on CLI)
List of agents on the server
Back to top

collective_info
Info about the main and sub collectives
(no inputs)
Outputs:
collectives
(Appears as All Collectives on CLI)
All Collectives
main_collective
(Appears as Main Collective on CLI)
The main Collective
Back to top

daemon_stats
Get statistics from the running daemon

(no inputs)
Outputs:
agents
(Appears as Agents on CLI)
List of agents loaded
configfile
(Appears as Config File on CLI)
Config file used to start the daemon
filtered
(Appears as Failed Filter on CLI)
Didn't pass filter checks
passed
(Appears as Passed Filter on CLI)
Passed filter checks
pid
(Appears as PID on CLI)
Process ID of the daemon
replies
(Appears as Replies on CLI)
Replies sent back to clients
starttime
(Appears as Start Time on CLI)
Time the server started
threads
(Appears as Threads on CLI)
List of threads active in the daemon
times

(Appears as Times on CLI)


Processor time consumed by the daemon
total
(Appears as Total Messages on CLI)
Total messages received
ttlexpired
(Appears as TTL Expired on CLI)
Messages that did not pass TTL checks
unvalidated
(Appears as Failed Security on CLI)
Messages that failed security validation
validated
(Appears as Security Validated on CLI)
Messages that passed security validation
version
(Appears as Version on CLI)
MCollective Version
Back to top

get_config_item
Get the active value of a specific config property
Input:
item (required)
The item to retrieve from the server
Type: string
Format/Validation: ^.+$
Length: 50
Outputs:

item
(Appears as Property on CLI)
The config property being retrieved
value
(Appears as Value on CLI)
The value that is in use
Back to top

get_data
Get data from a data plugin
Inputs:
query
The query argument to supply to the data plugin
Type: string
Format/Validation: ^.+$
Length: 50
source (required)
The data plugin to retrieve information from
Type: string
Format/Validation: ^\w+$
Length: 50
Outputs:
Back to top

get_fact
Retrieve a single fact from the fact store
Input:
fact (required)
The fact to retrieve
Type: string
Format/Validation: ^[\w\-\.]+$
Length: 40
Outputs:
fact
(Appears as Fact on CLI)
The name of the fact being returned
value
(Appears as Value on CLI)
The value of the fact
Back to top

inventory
System Inventory
(no inputs)
Outputs:
agents
(Appears as Agents on CLI)
List of agent names
classes
(Appears as Classes on CLI)
List of classes on the system
collectives
(Appears as All Collectives on CLI)
All Collectives
data_plugins
(Appears as Data Plugins on CLI)
List of data plugin names
facts

(Appears as Facts on CLI)


List of facts and values
main_collective
(Appears as Main Collective on CLI)
The main Collective
version
(Appears as Version on CLI)
MCollective Version
Back to top

ping
Responds to requests for PING with PONG
(no inputs)
Output:
pong
(Appears as Timestamp on CLI)
The local timestamp
Back to top

The service Plugin


Start and stop system services
Actions: restart, start, status, stop
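As with the pe-puppet examples earlier in this chapter, these actions can be invoked either through the mco service subcommand or directly via rpc. A hedged sketch (the service name is a hypothetical example):

```shell
# Two equivalent ways to check a service's status
$ mco service ntp status
$ mco rpc service status service=ntp
```
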
restart
Restart a service
Input:
service (required)
The service to restart
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after restarting
Back to top

start
Start a service
Input:
service (required)
The service to start
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after starting
Back to top

status
Gets the status of a service
Input:
service (required)
The service to get the status for
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service
Back to top

stop
Stop a service
Input:
service (required)
The service to stop
Type: string
Format/Validation: service_name
Length: 90
Output:
status
(Appears as Service Status on CLI)
The status of the service after stopping
Back to top
Next: Adding New Orchestration Actions

Adding New Orchestration Actions to Puppet Enterprise
Actions and Plugins
You can extend Puppet Enterprise (PE)'s orchestration engine by adding new actions. Actions are distributed in MCollective agent plugins, which are bundles of several related actions. You can write your own agent plugins (or download ones created by other people), and use Puppet Enterprise to install and configure them on your nodes.

Related Topics
For an overview of orchestration topics, see the Orchestration Overview page.
To invoke actions in the PE console, see Navigating Live Management.
To invoke actions on the command line, see Invoking Actions.
For a list of built-in actions, see List of Built-In Orchestration Actions.

About MCollective Agent Plugins


COMPONENTS

MCollective agent plugins consist of two parts:


A .rb file containing the MCollective agent code
A .ddl file containing a description of the plugin's actions, inputs, and outputs
Every agent node that will be using this plugin needs both files. The puppet master node and console node each need the .ddl file.

Note: Additionally, some MCollective agent plugins may be part of a bundle of related
plugins, which may include new subcommands, data plugins, and more.
A full list of plugin types and the nodes they should be installed on is available here. Note that in MCollective terminology, “servers” refers to Puppet Enterprise agent nodes and “clients” refers to the puppet master and console nodes.
DISTRIBUTION

Not every agent node needs to use every plugin; the orchestration engine is built to gracefully handle an inconsistent mix of plugins across nodes.
This means you can distribute special-purpose plugins to only the nodes that need them, without worrying about securing them on irrelevant nodes. Nodes that don't have a given plugin will ignore its actions, and you can also filter orchestration commands by the list of installed plugins.

Getting New Plugins


You can write your own orchestration plugins, or download ones written by other people.
Downloading MCollective Agent Plugins
There isn't a central repository of MCollective agent plugins, but there are several good places to start looking:
A list of the plugins released by Puppet Labs is available here.

If you use Nagios, the NRPE plugin (from Puppet Labs) is a good first plugin to install.
Searching GitHub for “mcollective agent” will turn up many plugins, including ones for vmware_tools, libvirt, junk filters in iptables, and more.
Writing MCollective Agent Plugins
Most people who use orchestration heavily will want custom actions tailored to the needs of their
own infrastructure. You can get these by writing new MCollective agent plugins in Ruby.
The MCollective documentation has instructions for writing agent plugins:
Writing agent plugins
Writing DDL files
Aggregating replies for better command line interfaces
Additionally, you can learn a lot by reading the code of Puppet Enterprise's built-in plugins. These are located in the /opt/puppet/libexec/mcollective/mcollective/ directory on any *nix PE node.

Installing Plugins on Puppet Enterprise Nodes


Since orchestration actions need to be installed on many nodes, and since installing or upgrading
an agent should always restart the pe-mcollective service, you should use Puppet to install
MCollective agent plugins.
This page assumes that you are familiar with the Puppet language and have written modules
previously.

In the MCollective Documentation


The MCollective documentation includes a guide to installing plugins. Puppet Enterprise users must use the “copy into libdir” installation method. The remainder of this page goes into more detail about using this method with Puppet Enterprise.

Overview of Plugin Installation Process


To install a new agent plugin, you must write a Puppet module that does the following things:
On agent nodes: copy the plugin's .rb and .ddl files into the mcollective/agent subdirectory of MCollective's libdir. This directory's location varies between *nix and Windows nodes.
On the console and puppet master nodes: if you will not be installing this plugin on every agent node, copy the plugin's .ddl file into the mcollective/agent subdirectory of MCollective's libdir.
If there are any other associated plugins included (such as data or validator plugins), copy them into the proper libdir subdirectories on agent nodes, the console node, and the puppet master node.
If any of these files change, restart the pe-mcollective service, which is managed by the pe_mcollective module.
To accomplish this, your module will need some limited interaction with the pe_mcollective module, which is part of Puppet Enterprise's implementation. We have kept these interactions as minimal as possible; if any of them change in a future version of Puppet Enterprise, we will provide a warning in the upgrade notes for that version's documentation.
Step 1: Create a Module for Your Plugin(s)
You have several options for laying this out:
One class for all of your custom plugins. This works fine if you have a limited number of plugins and will be installing them on every agent node.
One module with several classes for individual plugins or groups of plugins. This is good for installing certain plugins on only some of your agent nodes: you can split specialized plugins into a pair of mcollective_plugins::<name>::agent and mcollective_plugins::<name>::client classes, and assign the former to the affected agent nodes and the latter to the console and puppet master nodes.
A new module for each plugin. This is maximally flexible, but can sometimes get cluttered.
Once the module is created, put the plugin files into its files/ directory.
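For instance, a module named mco_plugins holding the nrpe plugin files might be laid out as follows (the layout is illustrative; the tree is abbreviated):

```
mco_plugins/
├── manifests/
│   └── nrpe.pp
└── files/
    └── mcollective-nrpe-agent/
        ├── agent/
        │   ├── nrpe.ddl
        │   └── nrpe.rb
        └── ...
```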
Step 2: Create Relationships and Set Variables
For any class that will be installing plugins on agent nodes, you should put the following four lines near the top of the class definition:
Class['pe_mcollective::server::plugins'] -> Class[$title] ~> Service['pe-mcollective']
include pe_mcollective
$plugin_basedir = $pe_mcollective::server::plugins::plugin_basedir
$mco_etc = $pe_mcollective::params::mco_etc

This will do the following:


Ensure that the necessary plugin directories already exist before we try to put files into them. (In certain cases, these directories are managed by resources in the pe_mcollective::server::plugins class.)
Restart the pe-mcollective service whenever new plugins are installed or upgraded. (This service resource is declared in the pe_mcollective::server class.)
Set variables that will correctly refer to the plugins directory and configuration directory on both *nix and Windows nodes.


Note: The Class[$title] notation seen above is a resource reference to the class that
contains this statement; it uses the $title variable, which always contains the name of the
surrounding container.

Step 3: Put Files in Place


First, set file defaults: all of these files should be owned by root and only writable by root (or the Administrators user, on Windows). The pe_mcollective module has helpful variables for setting these:
File {
  owner => $pe_mcollective::params::root_owner,
  group => $pe_mcollective::params::root_group,
  mode  => $pe_mcollective::params::root_mode,
}

Next, put all relevant plugin files into place, using the $plugin_basedir variable we set above:

file { "${plugin_basedir}/agent/nrpe.ddl":
  ensure => file,
  source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.ddl',
}

file { "${plugin_basedir}/agent/nrpe.rb":
  ensure => file,
  source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.rb',
}

Step 4: Configure the Plugin (Optional)


Some agent plugins require extra configuration to work properly. Any such settings must be present on every agent node that will be using the plugin.
The main server.cfg file is managed by the pe_mcollective module. Although editing it is possible, it is not supported. Instead, you should take advantage of the MCollective daemon's plugin config directory, which is located at "${mco_etc}/plugin.d".
File names in this directory should be of the format <agent name>.cfg.
Setting names in plugin config files are slightly different:
In server.cfg:
plugin.nrpe.conf_dir = /etc/nagios/nrpe

In ${mco_etc}/plugin.d/nrpe.cfg:
conf_dir = /etc/nagios/nrpe
You can use a normal file resource to create these config files with the appropriate values. For simple configs, you can set the content directly in the manifest; for complex ones, you can use a template.
file { "${mco_etc}/plugin.d/nrpe.cfg":
  ensure  => file,
  content => "conf_dir = /etc/nagios/nrpe\n",
}
POLICY FILES

You can also distribute policy files for the ActionPolicy authorization plugin. This can be a useful way to completely disable certain unused actions, limit actions so they can only be used on a subset of your agent nodes, or allow certain actions from the command line but not from the live management page.
These files should be named for the agent plugin they apply to, and should go in ${mco_etc}/policies/<plugin name>.policy. Policy files should be distributed to every agent node that runs the plugin you are configuring.

Note: The policies directory doesn't exist by default; you will need to use a file resource with ensure => directory to initialize it.
The policy file format is documented here. When configuring caller IDs in policy files, note that PE uses the following two IDs by default:
cert=peadmin-public: the command line orchestration client, as used by the peadmin user on the puppet master server.
cert=puppet-dashboard-public: the live management page in the PE console.
Example: This code would completely disable the package plugin's update action, to force users to do package upgrades through your centralized Puppet code:
file { "${mco_etc}/policies":
  ensure => directory,
}

file { "${mco_etc}/policies/package.policy":
  ensure  => file,
  content => "policy default allow
deny * update * *
",
}

Step 5: Assign the Class to Nodes


For plugins you are distributing to all agent nodes, you can use the PE console to assign your class to the special mcollective group. (This group is automatically maintained by the console, and contains all PE nodes which have not been added to the special no mcollective group.)
For plugins you are only distributing to some agent nodes, you must do the following:
Create two Puppet classes for the plugin: a main class that installs everything, and a client class that only installs the .ddl file and the supporting plugins.
Assign the main class to any agent nodes that should be running the plugin.
Assign the client class to the puppet_console and puppet_master groups in the console.
(These special groups contain all of the console and puppet master nodes in your deployment,
respectively.)
Step 6: Run Puppet
You can either wait for the next scheduled Puppet run, or trigger an on-demand run using
MCollective.
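For example, an on-demand run can be triggered from the peadmin account on the puppet master; the node name in the identity filter below is a hypothetical example:

```shell
# Trigger a single puppet run on one node via MCollective
mco puppet runonce -I agent1.example.com
```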
Step 7: Confirm the Plugin is Installed
Follow the instructions in the MCollective documentation to verify that your new plugins are
properly installed.

Other Kinds of Plugins


In addition to installing MCollective agent plugins, you may occasionally need to install other kinds of plugins, such as data plugins. This process is effectively identical to installing agent plugins, although the concerns about restricting distribution of certain plugins to special nodes are generally not relevant.

Example
This is an example of a Puppet class that installs the Puppet Labs nrpe plugin. The files directory of the module would simply contain a complete copy of the nrpe plugin's Git repo. In this example, we are not creating separate agent and client classes.
# /etc/puppetlabs/puppet/modules/mco_plugins/manifests/nrpe.pp
class mco_plugins::nrpe {
  Class['pe_mcollective::server::plugins'] -> Class[$title] ~> Service['pe-mcollective']
  include pe_mcollective
  $plugin_basedir = $pe_mcollective::server::plugins::plugin_basedir
  $mco_etc        = $pe_mcollective::params::mco_etc

  File {
    owner => $pe_mcollective::params::root_owner,
    group => $pe_mcollective::params::root_group,
    mode  => $pe_mcollective::params::root_mode,
  }

  file { "${plugin_basedir}/agent/nrpe.ddl":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.ddl',
  }
  file { "${plugin_basedir}/agent/nrpe.rb":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/agent/nrpe.rb',
  }
  file { "${plugin_basedir}/aggregate/nagios_states.rb":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/aggregate/nagios_states.rb',
  }
  file { "${plugin_basedir}/application/nrpe.rb":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/application/nrpe.rb',
  }
  file { "${plugin_basedir}/data/nrpe_data.ddl":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/data/nrpe_data.ddl',
  }
  file { "${plugin_basedir}/data/nrpe_data.rb":
    ensure => file,
    source => 'puppet:///modules/mco_plugins/mcollective-nrpe-agent/data/nrpe_data.rb',
  }

  # Set config: If this setting were in the usual server.cfg file, its name
  # would be plugin.nrpe.conf_dir
  file { "${mco_etc}/plugin.d/nrpe.cfg":
    ensure  => file,
    content => "conf_dir = /etc/nagios/nrpe\n",
  }
}

Next: Configuring Orchestration

Configuring Orchestration
The Puppet Enterprise (PE) orchestration engine can be configured to enable new actions, modify existing actions, restrict actions, and prevent run failures on non-PE nodes.


Disabling Orchestration on Some Nodes


By default, Puppet Enterprise enables and configures orchestration on all agent nodes. This is generally desirable, but the Puppet code that manages this will not work on non-PE agent nodes, and will cause Puppet run failures on them.
Since the puppet master server supports managing non-PE agent nodes (including things like network devices), you should disable orchestration when adding non-PE nodes.
To disable orchestration for a node, add that node to the special no mcollective group in the PE
console. This will prevent PE from attempting to enable orchestration on that node. See here for
instructions on adding nodes to groups in the console.
(The corresponding mcollective group is automatically managed; it contains all nodes that have not been added to the no mcollective group.)

Adding Actions
See the Adding Actions page of this manual.

Changing the Port Used by MCollective/ActiveMQ


You can change the port that MCollective/ActiveMQ uses with a simple variable change in the
console.
1. In the sidebar, select the mcollective group.
2. On the mcollective group page, click Edit.
3. Under Variables, in the key field, add fact_stomp_port, and in the value field, add the port number you want to use.
4. Click Update.

Configuring Orchestration Plugins


Some MCollective agent plugins, including the default set included with Puppet Enterprise, have settings that can be configured.
Since the main orchestration configuration file is managed by Puppet Enterprise, you must put these settings in separate plugin config files, as described in the Adding Actions page of this manual.

Restricting Orchestration Actions


See the Policy Files heading in the Adding Actions page of this manual.

Unsupported Features

Adding New Orchestration Users and Integrating Applications


Adding new orchestration users is not supported in Puppet Enterprise 3.3. Future versions of PE may change the orchestration engine's authentication backend, which would block additional orchestration users from working until they are updated to use the new backend. We plan to include an easy method to add new orchestration users in a future version of PE.
In the meantime, if you need to add a new orchestration user in order to integrate an application
with Puppet Enterprise, you can:
Obtain client credentials and a config file as described in the standard MCollective deployment guide.
Write a Puppet module to distribute the new client's public key into the ${pe_mcollective::params::mco_etc}/ssl/clients/ directory. This class must use include pe_mcollective to ensure that the directory can be located.
Assign that Puppet class to the mcollective group in the PE console.
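A minimal sketch of such a module, with hypothetical names (the orchestration_client module and app_user credential are invented for illustration):

```puppet
# Unsupported workflow: distribute an extra orchestration client's public key.
# "orchestration_client" and "app_user" are hypothetical names.
class orchestration_client {
  include pe_mcollective

  file { "${pe_mcollective::params::mco_etc}/ssl/clients/app_user-public.pem":
    ensure => file,
    source => 'puppet:///modules/orchestration_client/app_user-public.pem',
  }
}
```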
Again, this process is unsupported and may require additional work during a future upgrade.
Configuring Subcollectives
Using multiple orchestration subcollectives with Puppet Enterprise is not currently supported, and requires modifying PE's internal modules. If you enable this feature, your changes will be reverted by future PE upgrades, and you will need to re-apply your changes after upgrading.
If you choose to enable this unsupported feature, you will need to modify, at minimum, the /opt/puppet/share/puppet/modules/pe_mcollective/templates/server.cfg.erb and /opt/puppet/share/puppet/modules/pe_mcollective/templates/activemq.xml.erb files on your puppet master server(s). Any such modifications will be reverted during a future PE upgrade.

Configuring Performance
ActiveMQ Heap Usage (Puppet Master Server Only)
The puppet master node runs an ActiveMQ server to route orchestration commands. By default, its
process uses a Java heap size of 512 MB; this is the best value for mid-sized deployments, but can
be a problem when building small proof-of-concept deployments on memory-starved VMs.
You can set a new heap size by doing the following:
1. In the PE console, navigate to the special puppet_master group.
2. On the puppet_master group page, click Edit.
3. Under Variables, in the key field, add activemq_heap_mb, and in the value field add a new heap size to use (in MB).
4. Click Update.
You can later delete the variable to revert to the default setting.

Registration Interval
By default, all agent nodes will send dummy registration messages over the orchestration
middleware every ten minutes. We use these as a heartbeat to work around weaknesses in the
underlying Stomp network protocol.
Most users shouldn't need to change this behavior, but you can adjust the frequency of the heartbeat messages as follows:
1. In the PE console, navigate to the special mcollective group.
2. On the mcollective group page, click Edit.
3. Under Variables, in the key field, add mcollective_registerinterval, and in the value field add a new interval (in seconds).
4. Click Update.
You can later delete the variable to revert to the default setting.
Orchestration SSL
By default, the orchestration engine uses SSL to encrypt all orchestration messages. You can disable
this in order to investigate problems, but should never disable it in a production deployment where
business-critical orchestration commands are being run.
To disable SSL:
1. In the PE console, navigate to the mcollective group.
2. On the mcollective group page, click Edit.
3. Under Variables, in the key field, add mcollective_enable_stomp_ssl, and in the value field add false.
4. Click Update.
You can later delete the variable to revert to the default setting.
Next: Cloud Provisioning: Overview

Running PE Agents without Root Privileges


IMPORTANT: these procedures assume some degree of experience with Puppet Enterprise (PE). If
you are new to PE, we strongly recommend you work through the Quick Start Guide and some of
our other educational resources before attempting to implement non-root agent capability.

Configuring Non-root Agent Access: Overview


In some circumstances, users without root access privileges may need to run the Puppet agent. For example, consider the following use case:


For security or organizational reasons, your infrastructure's platform is maintained by one team with root privileges while your infrastructure's applications are managed by a separate team (or teams) with diminished privileges. The applications team would like to be able to manage its part of the infrastructure using Puppet Enterprise, but the platform team cannot give them root privileges. So, the applications team needs a way to run Puppet without root privileges. In this scenario, PE is only used for application management, which is performed by a single (applications) team. The platform team does not use PE to manage any of the application team's nodes.
PE is installed with root privileges, so you will need a root user to set up and provide non-root access to a monolithic PE master. The root user who performs this installation will set up the non-root user(s) on the master and any nodes running a puppet agent.
When you or another user are set up as a non-root user, you will have a reduced set of configuration management tasks that you can perform. As a non-root user, you will be able to configure puppet settings (i.e., edit ~/.puppet/puppet.conf), configure Facter external facts, run puppet agent --test, and run puppet via non-privileged cron jobs (or a similar scheduling service). You can classify your nodes by writing or editing manifests in the directories where you have write privileges.
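For example, a non-privileged crontab entry for periodic runs might look like this (the schedule is arbitrary; /opt/puppet/bin is PE's installation prefix for the puppet binary):

```
# Non-root user's crontab: run the puppet agent every 30 minutes
*/30 * * * * /opt/puppet/bin/puppet agent --onetime --no-daemonize
```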

Note: Non-root users are not able to use PE's orchestration capabilities to manage your nodes, and MCollective must be disabled on all nodes.

Installation & Configuration


To properly configure non-root agent access, you will need to:
Install a monolithic PE master and modify the default group to exclude live management (MCollective)
Install and configure PE agents, disable the pe-puppet service on all nodes, and create non-root users
Verify the non-root configuration
INSTALL AND CONFIGURE A MONOLITHIC MASTER

1. As a root user, install and configure a monolithic PE master. Use the standard installation method, or use an answer file to automate your installation.
2. Disable live management (MCollective).
This can be done by adding q_disable_live_management=y to your answer file if you're performing an automated installation. Otherwise you can edit /etc/puppetlabs/puppet-dashboard/settings.yml and set the disable_live_management setting to true.
3. After the installation is complete, log into the console and verify that the Live Management tab is
NOT present in the main, top nav bar.

4. Make sure no new agents can get added to the MCollective group.
a. Click the Groups tab, select the default group, and click Edit.
b. Add the no mcollective group and click Update.

INSTALL AND CONFIGURE PE AGENTS AND CREATE NON-ROOT USERS

1. On each agent node, install a PE agent while logged in as a root user. Refer to the instructions
for installing agents.
2. Log in to an agent node as a root user, and add the non-root user with puppet resource user
<unique non-root username> ensure=present managehome=true.

Note: Each and every non-root user must have a unique name.
3. As a root user, still on the agent node, set the non-root user's password. For example, on most *nix systems you would run passwd <username>.


4. By default, the pe-puppet service runs automatically as a root user, so it needs to be disabled. As a root user on the agent node, stop the service by running puppet resource service pe-puppet ensure=stopped enable=false.

Tip: If you wish to use su - nonrootuser to switch between accounts, make sure to use the - (-l in some Unix variants) argument so that full login privileges are correctly granted. Otherwise you may see “permission denied” errors when trying to apply a catalog.
5. As the non-root user, generate and submit the cert for the agent node. Log into the agent node
and execute the following command:
puppet agent -t --certname "<unique non-root username.hostname>" --server "<master hostname>"
This puppet run will submit a cert request to the master and will create a ~/.puppet directory structure in the non-root user's home directory.
6. As the non-root user, create a Puppet configuration file (~/.puppet/puppet.conf) to specify the agent certname and the hostname of the master:
[main]
certname = <unique non-root username.hostname>
server = <master hostname>

7. Log into the console, navigate to the pending node requests, and accept the requests from non-root user agents.
Note: It is possible to also sign the root user certificate in order to allow that user to also manage the node. However, you should do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as non-admin could lead to unwanted node definitions getting generated using alt hostnames, which is a potential security issue. In general, if you deploy this scenario, you should ensure that the root and non-root users never try to manage the same resources, ensure that they have clear-cut node definitions, and ensure that classes scope correctly.
8. You can now connect the non-root agent node to the master and get PE to configure it. Log into the agent node as the non-root user and run puppet agent -t.
PE should now run and apply the configuration specified in the catalog. Keep an eye on the output from the run: if you see Facter facts being created in the non-root user's home directory, you know that you have successfully created a functional non-root agent.


VERIFY THE NON-ROOT CONFIGURATION

Check the following to make sure the agent is properly configured and functioning as desired:
The non-root agent node should be able to request certificates and be able to download and apply the catalog from the master without issue when a non-privileged user executes puppet agent -t.
The puppet agent service should not be running. Check it with service pe-puppet status.
The non-root agent node should not receive the pe-mcollective class. You can check the console to ensure that nonrootuser is part of the no mcollective group.


Non-privileged users should be able to collect existing facts by running facter on agents, and they should be able to define new, external Facter facts.
INSTALL AND CONFIGURE WINDOWS AGENTS AND THEIR CERTIFICATES

If you need to run agents without admin privileges on nodes running a Windows OS, take the
following steps:
1. Connect to the agent node as an admin user and install the Windows agent.
2. As an admin user, add the non-admin user with the following command: puppet resource user
<unique non-admin username> ensure=present managehome=true password="puppet"
groups="Users".
Note: Each and every non-admin user must have a unique name. If the non-privileged user
needs remote desktop access, edit the user resource to include the Remote Desktop Users
group.
3. While still connected as an admin user, disable the pe-puppet service with puppet resource
service pe-puppet ensure=stopped enable=false.
4. Log out of the Windows agent machine and log back in as the non-admin user, and then run the
following command:
puppet agent -t --certname "<unique non-privileged username>" --server "<master hostname>"
This puppet run will submit a cert request to the master and will create a ~/.puppet directory structure in the non-root user's home directory.
5. As the non-admin user, create a Puppet configuration file (%USERPROFILE%/.puppet/puppet.conf) to specify the agent certname and the hostname of the master:
[main]
certname = <unique non-privileged username.hostname>
server = <master hostname>

6. While still connected as the non-admin user, send a cert request to the master by running
puppet with puppet agent -t.
7. On the master node, as an admin user, sign the non-root certificate request using the console or by running puppet cert sign nonrootuser.
Note: It is possible to also sign the root user certificate in order to allow that user to also manage the node. However, you should do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as non-admin could lead to unwanted node definitions getting generated using alt hostnames, a potential security issue. In general, then, if you deploy this scenario you should be careful to ensure the root and non-root users never try to manage the same resources, have clear-cut node definitions, ensure that classes scope correctly, and so forth.
8. On the agent node, verify that the agent is connected and working by again starting a puppet
run while logged in as the non-admin user. Running puppet agent -t should download and
process the catalog from the master without issue.
Usage
Non-root users can only use a subset of PE's functionality. Basically, any operation that requires root privileges (e.g., installing system packages) cannot be managed by a non-root puppet agent.
On *nix systems, as a non-root agent you should be able to enforce the following resource types:
cron (only non-root cron jobs can be viewed or set)
exec (cannot run as another user or group)
file (only if the non-root user has read/write privileges)
notify
schedule
ssh_key
ssh_authorized_key
service
augeas
You should also be able to inspect the following resource types (use puppet resource <resource
type>):
host
mount
package
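For illustration, a manifest applied by a non-root agent on *nix might manage only resources the user owns (all paths and names below are hypothetical, and assume the non-root user has write access):

```puppet
# Sketch: resources a non-root agent can typically enforce without root
cron { 'nightly-cleanup':
  ensure  => present,
  command => '/home/appuser/bin/cleanup.sh',
  hour    => 2,
  minute  => 0,
}

file { '/home/appuser/app.conf':
  ensure  => file,
  content => "log_level = info\n",
}

notify { 'applied without root privileges': }
```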
On Windows systems, as a non-admin user you should be able to enforce the following resource types:
exec
file
You should also be able to inspect the following resource types (use puppet resource <resource
type>):

host
package
user
group
service
ISSUES & WARNINGS

When running a cron job as a non-root user, using the -u flag to set a user with root privileges will cause the job to fail, resulting in the following error message:
Notice: /Stage[main]/Main/Node[nonrootuser]/Cron[illegal_action]/ensure: created
must be privileged to use -u
Next: Beginner's Guide to Modules

Deactivating a PE Agent Node
From time to time, you may need to completely deactivate an agent node in your PE deployment. For example, you recently spun up a handful of virtual machines that were only needed for a short time, and now those nodes need to be deactivated. Deactivating a node is not the same as just using the console or terminal to delete a node. The following procedure outlines how to properly deactivate an agent node, which includes revoking the node's certificate, removing the node (and its associated reports) from PuppetDB, deleting the node from the PE console, and stopping MCollective/live management on the node.
To deactivate a PE agent node:
1. Stop the agent service on the node you want to deactivate.
2. On the master, deactivate the node; run puppet node deactivate <node name>.
This deactivates the agent node in PuppetDB and decrements the PE license count.
3. On the master, revoke the agent certificate; run puppet cert clean <node name>.
4. Complete the agent's certificate revocation. On the master, run service pe-httpd restart.
The certificate is only revoked after running pe-httpd restart. In addition, the Apache process won't re-read the certificate revocation list until the service is restarted. If you don't run pe-httpd restart, the node will check in again on the next puppet run and re-register with PuppetDB, which will increment the license count again.

Tip: You will need to run service pe-httpd restart on any load-balanced masters in your system.
5. Delete the node from the console. Navigate to the node detail page for the deactivated node, and
click the Delete button.
Alternatively, you can also run /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production node:del[node name].
This action does NOT disable MCollective/live management on the node.
Note: If you delete a node from the node view without first deactivating the node, the node will
be absent from the node list in the console, but the license count will not decrement, and on the
next puppet run, the node will be listed in the console.
6. To disable MCollective/live management on the node, uninstall the puppet agent, stop the
pe-mcollective service (on the agent, run service pe-mcollective stop), or destroy the agent
node altogether.
7. You should also manually remove the node's certificates in
/etc/puppetlabs/mcollective/ssl/clients.
At this point, the node should be fully deactivated.
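The master-side portion of the procedure above can be rehearsed as a short shell sequence. In the sketch below, stub functions stand in for the real puppet and service binaries so the block runs anywhere; on an actual master you would run the same three commands directly, and agent.example.com is a placeholder certname.

```shell
# Stubs that echo instead of acting; remove these on a real PE master.
puppet()  { echo "puppet $*"; }
service() { echo "service $*"; }

NODE="agent.example.com"   # placeholder certname

OUT=$(
  puppet node deactivate "$NODE"   # step 2: remove from PuppetDB, decrement license count
  puppet cert clean "$NODE"        # step 3: revoke the agent certificate
  service pe-httpd restart         # step 4: make Apache re-read the revocation list
)
echo "$OUT"
```

The order matters: deactivating before cleaning the certificate ensures the license count is decremented, and the restart is what makes the revocation take effect.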

Regenerating a Puppet Agent Certificate


From time to time, you may encounter a situation in which you need to regenerate a certificate for a
puppet agent node. Perhaps there is a security vulnerability in your infrastructure that you can
remediate with a certificate regeneration, or maybe you're receiving strange SSL errors on your
puppet agent node that are preventing you from performing normal operations.
The following steps explain how to regenerate a certificate for a puppet agent node using PE's
built-in certificate authority (CA).
1. On the puppet master, run puppet cert clean <node name>.
2. On the puppet master, run sudo /etc/init.d/pe-httpd restart.
Restarting pe-httpd will prevent the old cert from being used.
3. On the puppet agent node, move /etc/puppetlabs/puppet/ssl to a backup directory, such as
/etc/puppetlabs/puppet/ssl_bak.

Important: Ensure you are on the puppet agent node when you do this. Backing up the ssl
directory, as opposed to deleting it, will enable you to easily recover in the event of a
problem. DO NOT perform step 3 on the puppet master.

4. On the puppet agent node, run puppet agent -t.
When you run this command, puppet will generate a new SSL key for the agent node and request
a new certificate from the puppet master's built-in CA.
5. Finally, you will need to accept the puppet agent node's certificate request with the PE console's
request manager, or from the command line.
Once the puppet agent node's certificate is signed, you can either manually kick off a puppet run
from the console or command line, or wait for the agent to run based on the runinterval, the
default of which is 30 minutes. At this point, the agent will perform a full catalog run and can
resume its role in your PE deployment.
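The backup in step 3 amounts to a single mv. The block below rehearses it in a scratch directory standing in for /etc/puppetlabs/puppet/ssl, so it is safe to run anywhere; on a real agent only the paths differ.

```shell
WORK=$(mktemp -d)                      # scratch area standing in for /etc/puppetlabs/puppet
mkdir -p "$WORK/ssl/certs"
echo "old-cert-material" > "$WORK/ssl/certs/agent.pem"

# Back up rather than delete, so recovery is a single mv back.
mv "$WORK/ssl" "$WORK/ssl_bak"

# The next 'puppet agent -t' regenerates a fresh ssl directory;
# here we just confirm the old material survived the move.
test -f "$WORK/ssl_bak/certs/agent.pem" && echo "backup ok"
```

If anything goes wrong with the regeneration, moving ssl_bak back to ssl restores the old credentials.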

Using an External Certificate Authority with Puppet Enterprise
The different parts of Puppet Enterprise (PE) use SSL certificates to communicate securely with each
other. PE uses its own certificate authority (CA) to generate and verify these credentials.
However, you may already have your own CA in place and wish to use it instead of PE's integrated
CA. This page will familiarize you with the certificates and security credentials signed by the PE CA,
then detail the procedures for replacing them.

Before You Begin


Setting up an external certificate authority (CA) to use with PE is beyond the scope of this
document; in fact, this writing assumes that you already have some knowledge of CA and
security credential creation and have the ability to set up your own external CA. This
document will lead you through the certs and security credentials you'll need to replace in
PE. However, before beginning, we recommend you familiarize yourself with the following
docs:
SSL Configuration: External CA Support provides guidance on establishing an external CA
that will play nicely with Puppet (and therefore PE).
ActiveMQ TLS explains MCollective's security layer.

Locating Certificate Files


After installing PE, you can run puppet cert list --all on your puppet master server to inspect
the inventory of certificates signed using PE's built-in CA. It will include the following:

Per-node certificates for the puppet master (and any agent nodes)
pe-internal-broker
pe-internal-dashboard
pe-internal-mcollective-servers
pe-internal-peadmin-mcollective-client
pe-internal-puppet-console-mcollective-client
Each of these will need to be replaced with new certificates signed by your external CA. The steps
below will explain how to find and replace these credentials.
Locating the PE Agent Certificate and Security Credentials
Every system under PE management (including the puppet master, console, and PuppetDB) runs the
puppet agent service. To determine the proper locations for the certificate and security credential
files used by the puppet agent, run the following commands:
Certificate: puppet agent --configprint hostcert
Private key: puppet agent --configprint hostprivkey
Public key: puppet agent --configprint hostpubkey
Certificate Revocation List: puppet agent --configprint hostcrl
Local copy of the CA's certificate: puppet agent --configprint localcacert
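Those five lookups can be scripted in one loop. In the sketch below, a stub function replaces the real puppet binary and echoes placeholder paths so the block runs anywhere; delete the stub on a real agent to query the live configuration (the actual paths reported there will differ).

```shell
# Stub echoing placeholder paths; remove on a real node with Puppet installed.
puppet() { echo "/etc/puppetlabs/puppet/ssl/$3"; }

# Query each SSL-related setting in turn.
for setting in hostcert hostprivkey hostpubkey hostcrl localcacert; do
    printf '%s -> %s\n' "$setting" "$(puppet agent --configprint "$setting")"
done
```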

Important: Shared Certificate and Security Credentials

In Puppet Enterprise, the puppet master and the puppet agent services share the same
certificate, so replacing the shared certificate will suffice for both services. In other words, if
you replace the puppet master certificate, you don't need to separately replace the agent
certificate.

Locating the PE Master Certificate and Security Credentials

To determine the proper locations for the CA and security credential files, run the following
commands with puppet master:
Certificate: puppet master --configprint hostcert
Private key: puppet master --configprint hostprivkey
Public key: puppet master --configprint hostpubkey
Certificate Revocation List: puppet master --configprint hostcrl
Local copy of the CA's certificate: puppet master --configprint localcacert


Important: Shared Certificate and Security Credentials

In Puppet Enterprise, the puppet master and the puppet agent services share the same
certificate, so replacing the shared certificate will suffice for both services. In other words, if
you replace the puppet agent certificate, you don't need to separately replace the master
certificate.

Tip: You will also need to create a cert and security credentials for any agent nodes using the
same CA as you used for the puppet master. We've included instructions at the end of the
doc.

Locating the PE Console Certificate and Security Credentials

The PE console certificates are stored at /opt/puppet/share/puppet-dashboard/certs/. This
directory is located on the puppet master, or on the console server in a split install.
The following files in this directory need to be replaced:
pe-internal-dashboard.ca_cert.pem (replace with your CA cert)
pe-internal-dashboard.private_key.pem
pe-internal-dashboard.ca_crl.pem (replace with your CA CRL)
pe-internal-dashboard.public_key.pem
pe-internal-dashboard.cert.pem
Locating the PuppetDB Certificate and Security Credentials
The following files, located on the puppet master, or on the PuppetDB server in a split install, need
to be replaced:
/etc/puppetlabs/puppetdb/ssl/ca.pem (replace with your CA cert)
/etc/puppetlabs/puppetdb/ssl/private.pem (replace with a copy of the PuppetDB server's
private key)
/etc/puppetlabs/puppetdb/ssl/public.pem (replace with a copy of the PuppetDB server's
certificate)

Important: Shared Certicate and Security Credentials


In Puppet Enterprise, the PuppetDB service uses a copy of the puppet agent's private key and
certificate. If you have a split install, you will first replace the puppet agent's private key and
certificate on the PuppetDB server
( /etc/puppetlabs/puppet/ssl/private_keys/<certname>.pem and
/etc/puppetlabs/puppet/ssl/certs/<certname>.pem) and copy them over to the PuppetDB
SSL directories listed above.

Locating PE MCollective Certificates and Security Credentials

The orchestration credentials, located on the puppet master, need to be replaced.
For each of the file names below, you'll need to replace three files: a cert in
/etc/puppetlabs/puppet/ssl/certs, a private key in
/etc/puppetlabs/puppet/ssl/private_keys, and a public key in
/etc/puppetlabs/puppet/ssl/public_keys. Look for the following files:
pe-internal-broker.pem (controls the ActiveMQ server)
pe-internal-mcollective-servers.pem
pe-internal-peadmin-mcollective-client.pem
pe-internal-puppet-console-mcollective-client.pem
These certs and security credentials are generated by the puppetlabs-pe_mcollective module as
part of the PE installation process.

Replacing the PE Certificate Authority and Security Credentials
Important: For ease of use, we recommend naming ALL of your certificate and security
credential files exactly the same way they are named by PE and replacing them as such on the
puppet master; for example, use the cp command to overwrite the file contents of the certs
generated by the PE CA. This will ensure that PE will recognize the file names and not
overwrite any files when you perform Puppet runs. In addition, this will prevent you from
needing to touch various config files, and thus limit the chances of problems arising.
The remainder of this doc assumes you will be using identical file names.
We recommend that once you've set up your external CA and security credentials, you first replace
the files for PE master/agent nodes and the PE console, then replace the files for PuppetDB, and
then replace the PE MCollective files. Remember, naming the new certs and security credentials
exactly as they're named by PE will ensure the easiest path to success.
Here is a list of the things you'll do:
1. Install PE.
2. Choose a certificate authority option.
3. Use your external CA to generate new certificates and security credentials to replace all existing
certificates and security credentials.
4. Replace the PE master and PE console certs and security credentials.

5. Replace the PE PuppetDB certs and security credentials.
6. Replace the PE MCollective certs and security credentials.
Replace the PE Master and PE Console Certificates and Security Credentials
1. Refer to Locating the PE Master Certificate and Security Credentials and copy your new certs and
security credentials to the relevant locations.
2. Refer to Locating the PE Console Certificate and Security Credentials and copy your new
certs and security credentials to the relevant locations.
3. On the puppet master, navigate to /etc/puppetlabs/puppet/puppet.conf, and in the [master]
stanza, add ca=false.
4. Run service pe-httpd restart.
Continue to the next step, where you'll replace the PuppetDB certs.
Replace the PuppetDB Certificates and Security Credentials
1. (Optional; for split installs only) Refer to Locating the PE Agent Certificate and Security
Credentials and replace the puppet agent service files. These files will be copied to the PuppetDB
SSL directory in step 2.
2. Refer to Locating the PuppetDB Certificate and Security Credentials and replace the files.
3. Run service pe-puppetdb restart.
4. Run puppet.
After running Puppet, you should be able to access the console and view your new certificate in
your browser. However, live management will not work; you can access that part of the console, but
it won't be able to find the master node.
Now you will need to replace the MCollective certicates and security credentials.
Replace the MCollective Certificates and Security Credentials
1. On the puppet master, ensure that you replaced the CA cert ( ca.pem) at
/etc/puppetlabs/puppet/ssl/certs/. (Tip: If you didn't, the above procedures wouldn't have
worked.)
2. Refer to Locating PE MCollective Certificates and Security Credentials. Generate new credentials
for each name, then replace the cert, private key, and public key for each of them.
3. On the puppet master, navigate to /etc/puppetlabs/activemq/.
4. Remove the following two files: broker.ts and broker.ks.
5. Run puppet agent --test to force a Puppet run in the foreground.
During this run, Puppet will copy the credentials you replaced into their final locations, regenerate
the ActiveMQ truststore and keystore, and restart the pe-activemq and pe-mcollective services.
You should now see the master node in live management and be able to perform Puppet runs and
other live management functions using the console.

Adding Agent Nodes Using Your External CA

1. Install Puppet Enterprise on the node, if it isn't already installed.
2. Using the same external CA you used for the puppet master, create a cert and private key for
your agent node.
3. Locate the files you will need to replace on the agent. Refer to Locating the PE Agent Certificate
and Security Credentials to find them, but you should use puppet agent --configprint instead
of puppet master --configprint.
4. Copy the agent's certificate, private key, and public key into place. Do the same with the external
CA's CRL and CA certificate.
5. Restart the pe-puppet service.
Your node should now be able to do puppet agent runs, and its reports will appear in the console.
If it is a new node, it may not appear in live management for up to 30 minutes. (You can accelerate
this by letting Puppet run once, waiting a few minutes for the node to be added to the MCollective
group in the console, and then running puppet agent -t.)
If you still don't see your agent node in live management, use NTP to verify that time is in sync
across your PE deployment. (You should always do this anyway.)

Configuring the Puppet Enterprise Console to Use a Custom SSL Certificate
The PE console uses a certificate signed by PE's built-in certificate authority (CA). Since this CA is
specific to PE, web browsers don't know it or trust it, and you have to add a security exception in
order to access the console. You may find that this is not an acceptable scenario and want to use a
custom CA to create the console's certificate. However, because several elements of the PE
infrastructure are authenticated with certificates signed by PE's built-in CA, you must bundle the
custom CA with the built-in CA.
ABOUT THE CA BUNDLE

When you use a custom CA to create a certificate for the console, the console still needs to trust
requests from other elements of your PE infrastructure that have been authenticated with
certificates signed by PE's built-in CA; and when making requests to the puppet master, the console
still needs to present a certificate signed by PE's built-in CA.
Also, when the puppet master is acting as a client, it needs to trust the certificates signed by both
the custom CA and PE's built-in CA.
Here are the main things you will need to do:
1. Set up the custom certificates and security credentials (private and public keys).
2. Generate a complete CA bundle for the puppet master.
Step 1: Set up Custom Certs and Security Credentials

1. Retrieve the custom certificate's public and private keys and the custom CA's public key, and,
for ease of use, name them as follows:
public-dashboard.cert.pem
public-dashboard.private_key.pem
public-dashboard.ca_cert.pem
2. Add those files to /opt/puppet/share/puppet-dashboard/certs/.
3. Edit /etc/puppetlabs/httpd/conf.d/puppetdashboard.conf so that it contains the new
certificate and keys. The complete SSL list in puppetdashboard.conf should appear as follows:

SSLCertificateFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.cert.pem
SSLCertificateKeyFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.private_key.pem
SSLCertificateChainFile /opt/puppet/share/puppet-dashboard/certs/public-dashboard.ca_cert.pem
SSLCACertificateFile /opt/puppet/share/puppet-dashboard/certs/pe-internal-dashboard.ca_cert.pem
SSLCARevocationFile /opt/puppet/share/puppet-dashboard/certs/pe-internal-dashboard.ca_crl.pem

Important: Make sure you do not duplicate any of the above parameters in
/etc/puppetlabs/httpd/conf.d/puppetdashboard.conf.
The first three certificates in the list are your custom certificate's public and private keys and your
custom CA's public key. The fourth and fifth entries are PE's built-in CA's public key and certificate
revocation list (CRL). They should not be edited in any way. This configuration will cause the
console to present the signed certificate from your custom CA to clients while still using PE's
built-in CA to authenticate requests from the puppet master.
Step 2: Generate the Complete CA Bundle for the Puppet Master
1. On the puppet master, create ca_auth.pem by running cat
/etc/puppetlabs/puppet/ssl/certs/ca.pem /opt/puppet/share/puppet-dashboard/certs/public-dashboard.ca_cert.pem >
/etc/puppetlabs/puppet/ssl/ca_auth.pem.

Note: The second path in the above command is the full path to the public key of the custom
CA, which you put in /opt/puppet/share/puppet-dashboard/certs/ in step 1.2.
2. Change the permissions of the file you just created by running chmod 644
/etc/puppetlabs/puppet/ssl/ca_auth.pem.
3. Edit /etc/puppetlabs/puppet/puppet.conf and, in the [master] stanza, add
ssl_client_ca_auth = /etc/puppetlabs/puppet/ssl/ca_auth.pem.
4. Edit /etc/puppetlabs/puppet/console.conf and, for certificate_name, change the value to
the DNS FQDN of the console server. Note that the DNS FQDN must match the name of the new
console certificate.
5. Restart the pe-httpd service on both the master and console servers by running sudo
/etc/init.d/pe-httpd restart. (If it is an all-in-one install, you only need to restart the
pe-httpd service once.)
6. Kick off a puppet run.
You should now be able to navigate to your console and see the custom certificate in your browser.
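The bundle built in step 2 is nothing more than two PEM files concatenated, PE's CA first. The runnable sketch below uses dummy stand-ins for ca.pem and public-dashboard.ca_cert.pem to show the mechanics without touching a real master:

```shell
WORK=$(mktemp -d)

# Dummy stand-ins for PE's built-in CA cert and the custom CA's public cert.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'PE-BUILTIN-CA' '-----END CERTIFICATE-----' \
    > "$WORK/ca.pem"
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'CUSTOM-CA' '-----END CERTIFICATE-----' \
    > "$WORK/public-dashboard.ca_cert.pem"

# Concatenate in the same order as step 2, then set world-readable permissions.
cat "$WORK/ca.pem" "$WORK/public-dashboard.ca_cert.pem" > "$WORK/ca_auth.pem"
chmod 644 "$WORK/ca_auth.pem"

grep -c 'BEGIN CERTIFICATE' "$WORK/ca_auth.pem"   # prints 2: both CAs are in the bundle
```

Because ca_auth.pem contains both CAs, the master can authenticate clients holding certificates from either one.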

Bare-Metal Provisioning with Razor


Introducing Razor
Razor is an advanced provisioning application that can deploy both bare-metal and virtual systems.
Razor makes it easy to provision a node with no previously installed operating system and bring it
under the management of Puppet Enterprise (PE).
Razor's policy-based bare-metal provisioning lets you inventory and manage the lifecycle of your
physical machines. With Razor, you can automatically discover bare-metal hardware, dynamically
configure operating systems and/or hypervisors, and hand nodes off to PE for workload
configuration.
Razor policies use discovered characteristics of the underlying hardware and user-provided data to
make provisioning decisions.
RAZOR AS TECH PREVIEW

This is a Tech Preview release of Razor. This means you are getting early access to Razor
technology so you can test the functionality and provide feedback. However, this Tech Preview
version of Razor is not intended for production use because Puppet Labs cannot guarantee Razor's
stability. As Razor is further developed, functionality might be added, removed, or changed in a way
that is not backward compatible with this Tech Preview version.
For details about Tech Preview software from Puppet Labs, visit Tech Preview Features Support
Scope.

How Razor Works


The following steps provide a high-level view of the process for provisioning a node with Razor.
Razor identifies a new node


When a new node appears, Razor discovers its characteristics by booting it with the Razor
microkernel and inventorying its facts.
The node is tagged

The node is tagged based on its characteristics. Tags contain a match condition: a Boolean
expression that has access to the node's facts and determines whether the tag should be applied to
the node or not.
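As a purely illustrative example, a match condition is written as a small JSON expression over node facts; a hypothetical tag matching any node that reports two processors could be created like this (the tag name and fact value here are made up, and the create-tag command is covered in the Razor command reference):

```
razor create-tag --name small --rule '["=", ["fact", "processorcount"], "2"]'
```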
The node tags match a Razor policy


Node tags are compared to tags in the policy table. The first policy with tags that match the node's
tags is applied to the node.
Policies pull together all the provisioning elements

The node is provisioned with the designated OS and managed with PE


The node is now installed and managed by Puppet Enterprise.


Getting Started With Razor
Provisioning with Razor generally entails these steps:
Set up a virtual environment for Razor
Install and set up a Razor server and Razor client
Create Razor objects and provision machines
See Setup Information and Known Issues for specific information about this release.
In addition to the above processes, you can learn more about:
Razor broker types
Razor tasks
Razor tags
Razor command reference
Next: Set Up a Virtual Environment for Razor

Install and Set Up a Virtual Environment for Testing Razor
Razor is a powerful tool created to automatically discover bare-metal hardware and dynamically
configure operating systems and/or hypervisors. With this power comes the responsibility to test
Razor carefully. Razor is also currently a Tech Preview release. For these reasons, we highly
recommend that you install and test Razor in a completely isolated test environment.
The following sections provide the steps for a basic setup that you can use to evaluate Razor. The
setup steps below use dnsmasq; however, you can use any DHCP and TFTP service with Razor.

Warning: Proceed with caution. We recommend testing in a completely isolated test
environment because running a second DHCP server on your company's network could
bring down the network. In addition, running a second DHCP server that will boot nodes into
the Razor microkernel and register them with the server carries a bigger risk: if someone
has established a policy that a node matches, a simple reboot could cause Razor to replace a
server with a fresh OS install.

Before You Begin


Things you should know before you set up provisioning:
Razor has been specifically validated on RHEL/CentOS 6.4, but it should work on all 6.x versions.
See the CentOS site for options.
The Razor microkernel is 64-bit only. Razor can only provision 64-bit machines.

Install Overview
Below are the essential steps to create a virtual test environment. Each of these steps is described in
more detail in the following sections.
1. Install PE in your virtual environment.
2. Install and configure a DHCP/DNS/TFTP service. We've chosen dnsmasq for this example setup.
3. Configure SELinux to enable PXE boot. Note: you'll download iPXE software in the steps for
installing and setting up Razor.
4. Optional: If you installed dnsmasq, configure dnsmasq for PXE booting and TFTP.
When you finish this section, go on to Install and Set Up Razor.
Install PE in Your Virtual Environment
In your virtual testing environment, set up a puppet master running a standard install of Puppet
Enterprise 3.3. For more information, see Installing Puppet Enterprise.
Note: We're finding that VirtualBox 4.3.6 gets to the point of downloading the microkernel from the
Razor server and hangs at 0% indefinitely. We don't have this problem with VirtualBox 4.2.22.
Install and Configure the dnsmasq DHCP/TFTP Service
The installation that's described here, particularly these prerequisites, is one way to configure
your Razor test environment. We're providing explicit instructions for this setup because it's been
tested and is relatively straightforward.
As stated in the warning above, to avoid breaking your company network or inadvertently
overwriting machines or servers on your network, you should be working in a completely isolated
test environment.
1. Use YUM to install dnsmasq:
yum install dnsmasq
2. If it doesn't already exist, create the directory /var/lib/tftpboot.
3. Change the permissions for /var/lib/tftpboot:

Puppet Enterprise 3.3 User's Guide Install and Set Up a Virtual Environment for Testing Razor

301/404

chmod 655 /var/lib/tftpboot

Temporarily Disable SELinux to Enable PXE Boot


1. Disable SELinux by changing the following setting in the file /etc/sysconfig/selinux:

SELINUX=disabled

Note: Disabling SELinux is highly insecure and should only be done for testing purposes.
Another option is to craft an enforcement rule for SELinux that will enable PXE boot but will not
completely disable SELinux.
2. Restart the computer and log in again.
Edit the dnsmasq Configuration File to Enable PXE Boot
1. Edit the file /etc/dnsmasq.conf by adding the following line at the bottom of the file:

conf-dir=/etc/dnsmasq.d
2. Write and exit the file.
3. Create the file /etc/dnsmasq.d/razor and add the following configuration information:

# This works for dnsmasq 2.45


# iPXE sets option 175, mark it for network IPXEBOOT
dhcp-match=IPXEBOOT,175
dhcp-boot=net:IPXEBOOT,bootstrap.ipxe
dhcp-boot=undionly.kpxe
# TFTP setup
enable-tftp
tftp-root=/var/lib/tftpboot

4. Enable dnsmasq on boot:


chkconfig dnsmasq on

5. Start the dnsmasq service:


service dnsmasq start

Next: Install and Set Up Razor


Install and Set Up Razor


A Razor module is included with Puppet Enterprise 3.3. To install and configure a Razor server, you
must set up your Razor test environment and then classify the pe_razor node. When PE runs and
applies this Razor classification, the Razor server and a PostgreSQL database will be installed and
configured.
In addition to the Razor server, the Razor client can be installed as a Ruby gem on any machine you
want to use for interacting with Razor.
Important: Because Razor is a Tech Preview, we highly recommend that you set it up in a completely
isolated test environment. This environment must have access to the internet. See Set Up a Virtual
Environment for Razor for details.
Before You Begin
Things you should know before you set up provisioning:
Do not install Razor on the puppet master.
The default port for Razor is 8080. This is also the default port for PuppetDB, so you cannot have
PuppetDB and Razor installed on the same node.
Razor has been specifically validated on RHEL/CentOS 6.4, but it should work on all 6.x versions.
See the CentOS site for options.

Hint: With the export command, you can avoid having to repeatedly replace placeholder
text. The steps for installing assume you have declared a server name and the port to use for
Razor with this command:
export RAZOR_HOSTNAME=<server name>
export RAZOR_PORT=8080

For example:
export RAZOR_HOSTNAME=centos6.4
export RAZOR_PORT=8080

The steps below therefore use $RAZOR_HOSTNAME and $RAZOR_PORT for brevity.
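With the two variables exported, the API endpoint and the microkernel bootstrap URL used later in this guide can be assembled once and reused; centos6.4 below is just the sample hostname from the example above.

```shell
export RAZOR_HOSTNAME=centos6.4
export RAZOR_PORT=8080

# Base API endpoint and the bootstrap-script URL derived from it.
RAZOR_API="http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api"
BOOTSTRAP_URL="${RAZOR_API}/microkernel/bootstrap?nic_max=1"

echo "$RAZOR_API"       # prints http://centos6.4:8080/api
echo "$BOOTSTRAP_URL"   # prints http://centos6.4:8080/api/microkernel/bootstrap?nic_max=1
```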

Install the Razor Server


The actual Razor software is stored in an external online location, so you need an internet
connection. When you classify a node with the pe_razor module, the software is downloaded. This
process can take several minutes.

If you don't have access to the internet or would like to pull the PE tarball from your own location,
you can use the class parameter pe_tarball_base_url and stipulate your own URL. Note that the
code assumes that the tarball still has the same name format as on our server.
1. Manually add the pe-razor class in the PE console. To do so, on the console sidebar, click the
Add classes button. Then, in Add classes, under "Don't see a class?", type in pe-razor and click the
green plus (+) button. For information about adding a class and classifying the Razor server
using the PE console, see the Adding New Classes and Editing Classes on Nodes sections of this
guide.
Note: You can also add the following to site.pp:
node <AGENT_CERT> {
  include pe_razor
}

2. On the Razor server, run puppet with puppet agent -t (otherwise, you have to wait for the
scheduled agent run).
Load iPXE Software
You must set your machines to PXE boot. Without PXE booting, Razor has no way to interact with a
system. This is OK if the node has already been enrolled with Razor and is installed, but it will
prevent any changes on the server (for example, an attempt to reinstall the system) from having any
effect on the node. Razor relies on seeing when a machine boots, and it starts all its interactions
with a node when that node boots.
Razor provides a specific iPXE boot image to ensure you're using a compatible version.
1. Download the iPXE boot image undionly-20140116.kpxe.
2. Copy the image to /var/lib/tftpboot: cp undionly-20140116.kpxe /var/lib/tftpboot.
3. Download the iPXE bootstrap script from the Razor server to the /var/lib/tftpboot directory:

wget "http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api/microkernel/bootstrap?nic_max=1" -O /var/lib/tftpboot/bootstrap.ipxe

Note: Make sure you don't use localhost as the name of the Razor host. The bootstrap script
chain-loads the next iPXE script from the Razor server. This means that it has to contain the correct
hostname for clients to try to fetch that script from, or it isn't going to work.
Verify the Razor Server
Test the new Razor configuration: wget "http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api" -O test.out

The command should execute successfully, and the output JSON file test.out should contain a list
of available Razor commands.

Install and Set Up the Razor Client


The Razor client is installed as a Ruby gem.
1. Install the client:
gem install pe-razor-client --version 0.15.0

2. You can verify that the Razor client is installed by printing Razor help:
razor -u http://${RAZOR_HOSTNAME}:${RAZOR_PORT}/api

3. You'll likely get this warning message about JSON: MultiJson is using the default adapter
(ok_json). We recommend loading a different JSON library to improve performance. This
message is harmless, but you can disable it with this command:
gem install json_pure

Note: There is also a razor-client gem that contains the open source Razor client.
We strongly recommend that you not install the two clients simultaneously, and that you only use
pe-razor-client with the Razor shipped as part of Puppet Enterprise. If you already have
razor-client installed, or are not sure if you do, run gem uninstall razor-client prior to step 1
above.

Uninstall Razor
To uninstall the Razor Server:
1. Run yum erase pe-razor.
2. Drop the PostgreSQL database that the server used.
3. Change DHCP/TFTP so that the machines that have been installed will continue to boot outside
the scope of Razor.
To uninstall the Razor client:
Run gem uninstall pe-razor-client.

Next: Razor Provisioning Setup



Set Up Razor Provisioning


This page describes the provisioning setup process. You must first create some initial objects:
Repo: the container for the objects you install with Razor, such as operating systems.
Broker: the connector between a node and a configuration management system.
Tasks: the installation and configuration instructions.
Policy: the instructions that tell Razor which repos, brokers, and tasks to use for provisioning.
After creating these objects, you register a node on the Razor server. To work through these steps,
you must already have a Razor server up and running, as described in Install and Set Up Razor.

Include Repos
A repo contains all of the actual bits used when installing a node with Razor. The repo is identified
by a unique name, such as centos-6.4. The instructions for an installation are contained in tasks,
which are described below.
To load a repo onto the server, you use: razor create-repo --name=<repo name> --iso-url
<URL>.
For example: razor create-repo --name=centos-6.4 --iso-url
http://mirrors.usc.edu/pub/linux/distributions/centos/6.4/isos/x86_64/CentOS-6.4-x86_64-minimal.iso.
Note: Creating the repo can take five or so minutes, plus however long it takes to download the ISO
and unpack the contents. Currently, the best way to find out the status is to check the log file.

Include Brokers
Brokers are responsible for handing a node off to a configuration management system like Puppet
Enterprise. Brokers consist of two parts: a broker type and information that is specific to the broker
type.
The broker type is closely tied to the configuration management system that the node is being
handed off to. Generally, it consists of a shell script template and a description of what additional
information must be specified to create a broker from that broker type.
For the Puppet Enterprise broker type, this information consists of the node's server and the
version of PE that a node should use. The PE version defaults to latest unless you stipulate a
different version.
You create brokers with the create-broker command. For example, the following sets up a simple
no-op broker that does nothing: razor create-broker --name=noop --broker-type=noop.

This command sets up the PE broker, which requires the server parameter.
razor create-broker --name foo --broker-type puppet-pe --configuration '{
"server": "puppet.example.com" }'

Stock Broker Types


Razor ships with some stock broker types for your use: puppet-pe, noop, and puppet.
Note: The puppet-pe broker type depends on the package-based simplified agent installation
method. For details, see Installing Agents.

Include Tasks
Tasks describe a process or collection of actions that should be performed while provisioning
machines. They can be used to designate an operating system or other software that should be
installed, where to get it, and the configuration details for the installation.
Tasks are structurally simple. They consist of a YAML metadata file and any number of ERB
templates. You include the tasks you want to run in your policies (policies are described in the next
section).
Razor provides a handful of existing tasks, or you can create your own. To learn more about tasks,
see Writing Tasks and Templates.

Create Policies
Policies orchestrate repos, brokers, and tasks to tell Razor what bits to install, where to get the bits,
how they should be configured, and how to communicate between a node and PE.
Note: Tags are named rule-sets that identify which nodes should be attached to a given policy.
Because policies contain a good deal of information, it's handy to save them in a JSON file that you
run when you create the policy. Here's an example of a policy called centos-for-small. This policy
stipulates that it should be applied to the first 20 nodes that boot and have no more than two
processors.
{
"name": "centos-for-small",
"repo": { "name": "centos-6.4" },
"task": { "name": "centos" },
"broker": { "name": "noop" },
"enabled": true,
"hostname": "host${id}.example.com",
"root_password": "secret",
"max_count": "20",
"tags": [{ "name": "small", "rule": ["<=", ["num", ["fact",
Puppet Enterprise 3.3 User's Guide Set Up Razor Provisioning

307/404

"processorcount"]], 2]}]
}

Policy Tables
You might create multiple policies, and then retrieve the policies collection. The
policies are listed in order in a policy table. You can influence the order of policies as follows:
When you create a policy, you can include a before or after parameter in the request to
indicate where the new policy should appear in the policy table.
Using the move-policy command with before and after parameters, you can put an existing
policy before or after another one.
See Razor Command Reference for more information.
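The ordering rules above can be sketched in a few lines of Ruby. This is an illustration of the semantics only, not Razor server code; the policy table here is modeled as a plain array of hashes.

```ruby
# Illustrative sketch only (not Razor source code): how a "before" or
# "after" parameter positions a new policy in the ordered policy table.
def insert_policy(table, policy, before: nil, after: nil)
  # Without before/after, the new policy is appended to the table.
  return table + [policy] if before.nil? && after.nil?
  anchor = before || after
  idx = table.index { |p| p[:name] == anchor }
  raise ArgumentError, "no such policy: #{anchor}" unless idx
  # Insert before the anchor's index, or just after it.
  table.dup.insert(before ? idx : idx + 1, policy)
end
```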
CREATE A POLICY

1. Create a file called policy.json and copy the following template text into it:

{
"name": "test_<NODE_ID>",
"repo": { "name": "<OS>" },
"task": { "name": "<INSTALLER>" },
"broker": { "name": "pe" },
"enabled": true,
"hostname": "node${id}.vm",
"root_password": "puppet",
"max_count": "20",
"tags": [{ "name": "<TAG_NAME>", "rule": ["in",["fact",
"macaddress"],"<NODE_MAC_ADDRESS>"]}]
}
2. Edit the options in the policy.json template with information specific to your environment.
3. Apply the policy by executing: razor create-policy --json policy.json.

Identify and Register Nodes


Next, verify that your machine can PXE boot from the Razor server and register itself as a node.
1. PXE boot a node machine in the Razor environment you have constructed for testing.
2. Find out what nodes are registered by executing razor nodes.
Identify a node in the list of registered nodes. The format should look like this:
id: "http://localhost:8080/api/collections/nodes/node1"
name: "node1"
spec: "/razor/v1/collections/nodes/member"

You can also inspect the registered nodes by appending the node name to the command as follows.

The name of the node is generated by the server and follows the pattern nodeNNN where NNN is an
integer. This command provides information such as the log path, hardware information,
associated policies, and facts.
razor nodes <NODE_NAME>

The following command opens a specific node's log: razor nodes <node name> log.
Next: Razor Command Reference

Razor API Reference


The Razor API is REST-based. For best results, use the following as the base URL for your calls:
http://razor:8080/api.
Note: The following sections contain some example URLs that might be structured differently from
the URLs your server uses.
Common Attributes
Two attributes are commonly used to identify objects:
id can be used as a GUID for an object. A GET request against a URL with an id attribute will
produce a representation of the object.
name is used for a short, human-readable reference to an object, generally only unique amongst
objects of the same type on the same server.
/api reference
The base URL http://razor:8080/api fetches the top-level entry point for navigating through the
Razor command and query facilities. This is a JSON object with the following keys:
collections: read-only queries available on this server.
commands: the commands available on this server.
Each of those keys contains a JSON array, with a sequence of JSON objects that have the following
keys:
name: a human-readable label.
rel: a spec URL that indicates the type of contained data. Use this to discover the endpoint that
you want to follow, rather than the name.
id: the URL to follow to get at this content.
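The advice to discover endpoints via rel rather than name can be illustrated with a small, hypothetical helper that picks an entry out of a parsed commands (or collections) array; the sample URLs in the usage below are made up.

```ruby
# Hypothetical helper (not part of the Razor client): find an entry's URL
# by its "rel" spec rather than its human-readable name, as recommended
# for navigating the /api entry point.
def find_by_rel(entries, rel)
  entry = entries.find { |e| e["rel"] == rel }
  entry && entry["id"]  # nil when no entry carries that rel
end
```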


/svc URLs
The /svc namespace is an internal namespace, used for communication with the iPXE client, the
microkernel, and other internal components of Razor.
This namespace is not enumerated under /api.

Commands
The list of commands that the Razor server supports is returned as part of a request to GET /api in
the commands array. Clients can identify commands using the rel attribute of each entry in the
array, and should make their POST requests to the URL given in the url attribute.
Commands are generally asynchronous and return a status code of 202 Accepted on success. The
url property of the response generally refers to an entity that is affected by the command and can
be queried to determine when the command has nished.
Create new repo
There are two flavors of repositories: ones where Razor unpacks ISOs for you and serves their
contents, and ones that are somewhere else, for example, on a mirror you maintain. The first form
is created by creating a repo with the iso-url property; the server will download and unpack the
ISO image into its file system:
{
"name": "fedora19",
"iso-url": "file:///tmp/Fedora-19-x86_64-DVD.iso"
}

The second form is created by providing a url property when you create the repository; this form is
merely a pointer to a resource somewhere, and nothing will be downloaded onto the Razor server:
{
"name": "fedora19",
"url": "http://mirrors.n-ix.net/fedora/linux/releases/19/Fedora/x86_64/os/"
}
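As a sketch of the distinction between the two flavors, a hypothetical validator (illustrative only, not part of Razor) might classify a create-repo request like this:

```ruby
# Illustrative sketch: a create-repo request carries exactly one of
# "iso-url" (server downloads and unpacks the ISO) or "url" (a pointer
# to an external mirror; nothing is downloaded).
def repo_kind(request)
  has_iso = request.key?("iso-url")
  has_url = request.key?("url")
  raise ArgumentError, "provide exactly one of iso-url or url" unless has_iso ^ has_url
  has_iso ? :unpack_iso : :external_mirror
end
```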

Delete a repo
The delete-repo command accepts a single repo name:

{
"name": "fedora16"
}


Create task
Razor supports both tasks stored in the filesystem and tasks stored in the database; for
development, it is highly recommended that you store your tasks in the filesystem. Details about
that can be found on the wiki.
For production setups, it is usually better to store your tasks in the database. To create a task,
clients post the following to the /spec/create_task URL:

{
"name": "redhat6",
"os": "Red Hat Enterprise Linux",
"os_version": "6",
"description": "A basic installer for RHEL6",
"boot_seq": {
"1": "boot_install",
"default": "boot_local"
},
"templates": {
"boot_install": " ... ERB template for an ipxe boot file ...",
"installer": " ... another ERB template ..."
}
}

The possible properties in the request are:
name: the name of the task; must be unique.
os: the name of the OS; mandatory.
os_version: the version of the operating system.
description: a human-readable description.
boot_seq: a hash mapping the boot counter or default to a template.
templates: a hash mapping template names to the actual ERB template text.

Create broker
To create a broker, clients post the following to the create-broker URL:

{
"name": "puppet",
"configuration": {
"server": "puppet.example.org",
"environment": "production"
},
"broker-type": "puppet"
}

The broker-type must correspond to a broker that is present on the broker_path set in
config.yaml.
The permissible settings for the configuration hash depend on the broker type and are declared
in the broker type's configuration.yaml.
Delete broker
A broker can be deleted by posting its name to the /spec/delete_broker command:

{
"name": "small",
}

If the broker is used by a policy, the attempt to delete the broker will fail.
Create tag
To create a tag, clients post the following to the /spec/create_tag command:

{
"name": "small",
"rule": ["=", ["fact", "processorcount"], "2"]
}

The name of the tag must be unique; the rule is a match expression.
Delete tag
A tag can be deleted by posting its name to the /spec/delete_tag command:

{
"name": "small",
"force": true
}

If the tag is used by a policy, the attempt to delete the tag will fail unless the optional parameter
force is set to true; in that case the tag will be removed from all policies that use it and then
deleted.
Update tag
The rule for a tag can be changed by posting the following to the /spec/update_tag_rule
command:
{
"name": "small",
Puppet Enterprise 3.3 User's Guide Razor API Reference

312/404

"rule": ["<=", ["fact", "processorcount"], "2"],


"force": true
}

This will change the rule of the given tag to the new rule. The tag will be reevaluated against all
nodes, and each node's tag attribute will be updated to reflect whether the tag now matches or not;
that is, the tag will be added to or removed from each node's tags as appropriate.
If the tag is used by any policies, the update will only be performed if the optional parameter force
is set to true. Otherwise, the command will return with status code 400.
Create policy
{
"name": "a policy",
"repo": { "name": "some_repo" },
"task": { "name": "redhat6" },
"broker": { "name": "puppet" },
"hostname": "host${id}.example.com",
"root_password": "secret",
"max_count": "20",
"before"|"after": { "name": "other policy" },
"node_metadata": { "key1": "value1", "key2": "value2" },
"tags": [{ "name": "existing_tag"},
{ "name": "new_tag", "rule": ["=", "dollar", "dollar"]}]
}

The overall list of policies is ordered, and policies are considered in that order. When a new policy is
created, the entry before or after can be used to put the new policy into the table before or after
another policy. If neither before nor after is specified, the policy is appended to the policy table.
Tags, brokers, tasks and repos are referenced by their name. Tags can also be created by providing
a rule; if a tag with that name already exists, the rule must be equal to the rule of the existing tag.
Hostname is a pattern for the host names of the nodes bound to the policy; eventually you'll be able
to use facts and other fun stuff there. For now, you get to say ${id} and get the node's DB id.
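As a minimal illustration of the ${id} substitution described above (illustrative only; the server performs its own interpolation):

```ruby
# Illustrative only: expand the ${id} placeholder in a policy's hostname
# pattern with the node's database id.
def expand_hostname(pattern, node_id)
  pattern.gsub("${id}", node_id.to_s)
end
```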
The max_count setting determines the maximum number of nodes that can be bound to this policy
at any given time. This can either be set to nil, indicating that an unbounded number of nodes can
be bound to this policy, or a positive integer to set an upper bound.
The node_metadata setting allows a policy to apply metadata to a node when it binds. This is
non-authoritative in that it will not replace existing metadata on the node with the same keys; it will
only add keys that are missing.
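The non-authoritative merge can be sketched as follows; this is an illustration of the rule, not the server's implementation. Existing node keys win, and only missing keys are added:

```ruby
# Illustrative sketch of the non-authoritative merge: keys already present
# on the node take precedence over the policy's node_metadata.
def apply_policy_metadata(node_metadata, policy_metadata)
  policy_metadata.merge(node_metadata)
end
```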
Move policy
This command makes it possible to change the order in which policies are considered when

matching against nodes. To put an existing policy into a different place in the policy table, use the
move-policy command with a body like:

{
"name": "a policy",
"before"|"after": { "name": "other policy" }
}

This will change the policy table so that the policy will appear before or after the policy named
"other policy".
Enable/disable policy
Policies can be enabled or disabled. Only enabled policies are used when matching nodes against
policies. There are two commands to toggle a policy's enabled flag: enable-policy and
disable-policy, which both accept the same body, consisting of the name of the policy in question:

{
"name": "a policy"
}

Modify the max-count for a policy


The command modify-policy-max-count makes it possible to manipulate the maximum number of
nodes that can be bound to a specific policy. The body of the request should be of the form:
{
"name": "a policy"
"max-count": new-count
}

The new-count can be an integer, which must be larger than the number of nodes that are
currently bound to the policy, or null to make the policy unbounded.
Add/remove tags to/from Policy
You can add or remove tags from policies with add-policy-tag and remove-policy-tag
respectively. In both cases supply the name of a policy and the name of the tag. When adding a tag,
you can specify an existing tag, or create a new one by supplying a name and rule for the new tag:
{
"name": "a-policy-name",
"tag" : "a-tag-name",
"rule": "new-match-expression" #Only for `add-policy-tag`
}

Delete policy
Policies can be deleted with the delete-policy command. It accepts the name of a single policy:

{
"name": "my-policy"
}

Note that this does not affect the installed status of a node, and therefore won't, by itself, cause a
node to be bound to another policy upon reboot.
Delete node
A single node can be removed from the database with the delete-node command. It accepts the
name of a single node:
{
"name": "node17"
}

Of course, if that node boots again at some point, it will be automatically recreated.
Reinstall node
This command removes a node's association with any policy and clears its installed flag; once the
node reboots, it will boot back into the microkernel and go through discovery and tag matching, and
possibly be bound to another policy. This command does not change the node's metadata or facts.
Specify which node to unbind by sending the node's name in the body of the request:
{
"name": "node17"
}

Set node IPMI credentials


Razor can store IPMI credentials on a per-node basis. These are the hostname (or IP address), the
username, and the password to use when contacting the BMC/LOM/IPMI lan or lanplus service to
check or update power state and other node data.
This is an atomic operation: all three data items are set or reset in a single operation. Partial
updates must be handled client-side. This eliminates conflicting update and partial update
combination surprises for users.
The structure of a request is:


{
"name": "node17",
"ipmi-hostname": "bmc17.example.com",
"ipmi-username": null,
"ipmi-password": "sekretskwirrl"
}

The various IPMI fields can be null (representing no value, or the NULL username/password as
defined by IPMI), and if omitted are implicitly set to the NULL value.
You must provide an IPMI hostname if you provide either a username or password, since we only
support remote, not local, communication with the IPMI target.
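The atomic set-or-reset behavior can be sketched like this (an illustration of the rule above, not server code): every field is rewritten on each request, and omitted fields fall back to NULL (nil):

```ruby
# Illustrative sketch: all three IPMI fields are replaced in one operation;
# any field omitted from the request becomes the NULL (nil) value.
def set_ipmi_credentials(request)
  {
    "ipmi-hostname" => request.fetch("ipmi-hostname", nil),
    "ipmi-username" => request.fetch("ipmi-username", nil),
    "ipmi-password" => request.fetch("ipmi-password", nil),
  }
end
```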
Reboot node
Razor can request a node reboot through IPMI, if the node has IPMI credentials associated. This
only supports hard power cycle reboots.
This is applied in the background, and will run as soon as execution slots are available for
the task. IPMI communication has some generous internal rate limits to prevent it from
overwhelming the network or host server.
This background process is persistent: if you restart the Razor server before the command is
executed, it will remain in the queue and the operation will take place after the server restarts.
There is no time limit on this at this time.
Multiple commands can be queued, and they will be processed sequentially, with no limitation on
how frequently a node can be rebooted.
If the IPMI request fails (that is: ipmitool reports it is unable to communicate with the node) the
request will be retried. No detection of actual results is included, though, so you may not know if
the command is delivered and fails to reboot the system.
This is not integrated with the IPMI power state monitoring, and you may not see power transitions
in the record, or through the node object if polling.
The format of the command is:
{
"name": "node1",
}

The name field is the name of the node to operate on.


The RBAC pattern for this command is: reboot-node:${node}
Set node desired power state
In addition to monitoring power, Razor can enforce node power state. This command allows a

desired power state to be set for a node, and if the node is observed to be in a different power state,
an IPMI command will be issued to change to the desired state.
The format of the command is:
{
"name": "node1234",
"to": "on"|"off"|null
}

The name field identifies the node to change the setting on.
The to field contains the desired power state to set. Valid values are on, off, or null (the JSON
NULL/nil value), which reflect "power on", "power off", and "do not enforce power state",
respectively.
Power state is enforced every time it is observed; by default this happens on a scheduled basis in
the background every few minutes.
Modify node metadata
Node metadata is similar to a node's facts, except metadata is what the administrators tell Razor
about the node rather than what the node tells Razor about itself.
Metadata is a collection of key => value pairs (like facts). Use the modify-node-metadata command
to add/update, remove, or clear a node's metadata. The request should look like:
{
"node": "node1",
"update": { # Add or update these keys
"key1": "value1",
"key2": "value2",
...
},
"remove": [ "key3", "key4", ... ], # Remove these keys
"no_replace": true # Do not replace keys on
# update. Only add new keys
}

or
{
"node": "node1",
"clear": true # Clear all metadata
}

As above, multiple updates and/or removes can be done in one command; however, clear can
only be done on its own (it doesn't make sense to update some details and then clear everything).
An error will also be returned if an attempt is made to update and remove the same key.
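A sketch of these semantics (illustrative only, not the server's code) might look like:

```ruby
# Illustrative sketch of modify-node-metadata semantics: apply updates and
# removals, honour no_replace (only add new keys), allow clear only on its
# own, and reject requests that update and remove the same key.
def modify_metadata(metadata, update: {}, remove: [], clear: false, no_replace: false)
  return {} if clear
  conflict = update.keys & remove
  raise ArgumentError, "cannot update and remove: #{conflict.join(', ')}" unless conflict.empty?
  result = metadata.dup
  update.each do |k, v|
    next if no_replace && result.key?(k)  # no_replace: keep existing keys
    result[k] = v
  end
  remove.each { |k| result.delete(k) }
  result
end
```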
Update node metadata
The update-node-metadata command is a shortcut to modify-node-metadata that allows for
updating single keys on the command line or with a GET request, with a simple data structure that
looks like:
{
"node" : "mode1",
"key" : "my_key",
"value" : "my_val",
"no_replace": true #Optional. Will not replace existing keys
}

Remove Node Metadata


The remove-node-metadata command is a shortcut to modify-node-metadata that allows for
removing a single key OR all keys, only on the command line or with a GET request, with a simple
data structure that looks like:
{
"node" : "node1",
"key" : "my_key",
}

or
{
"node" : "node1",
"all" : true, # Removes all keys
}

Collections
Along with the list of supported commands, a GET /api request returns a list of supported
collections in the collections array. Each entry contains at minimum url, spec, and name keys,
which correspond respectively to the endpoint through which the collection can be retrieved (via
GET), the type of collection, and a human-readable name for the collection.
A GET request to a collection endpoint will yield a list of JSON objects, each of which has at
minimum the following fields:
id: a URL that uniquely identifies the object.
spec: a URL that identifies the type of the object.
name: a human-readable name for the object.

Different types of objects may specify other properties by defining additional key-value pairs. For
example, here is a sample tag listing:
[
{
"spec": "http://localhost:8080/spec/object/tag",
"id": "http://localhost:8080/api/collections/objects/14",
"name": "virtual",
"rule": [ "=", [ "fact", "is_virtual" ], true ]
},
{
"spec": "http://localhost:8080/spec/object/tag",
"id": "http://localhost:8080/api/collections/objects/27",
"name": "group 4",
"rule": [
"in", [ "fact", "dhcp_mac" ],
"79-A8-C3-39-E4-BA",
"6C-35-FE-B7-BD-2D",
"F9-92-DF-E0-26-5D"
]
}
]

In addition, references to other resources are represented as JSON objects with the following
fields, either as an array (in the case of a one- or many-to-many relationship) or as a single object
(for a one-to-one relationship):
url: a URL that uniquely identifies the object.
obj_id: a short numeric identifier.
name: a human-readable name for the object.

If the reference object is in an array, the obj_id field serves as a unique identifier within the array.

Other things
The default bootstrap iPXE file
A GET request to /api/microkernel/bootstrap will return an iPXE script that can be used to
bootstrap nodes that have just PXE booted (it culminates in chain loading from the Razor server).
The URL accepts the parameter nic_max, which should be set to the maximum number of network
interfaces that respond to DHCP on any given machine. It defaults to 4.


Using Razor Tags


A tag consists of a unique name and a rule. The tag matches a node if evaluating its rule against the
node's facts results in true. Note that tag matching is case sensitive.
For example, here is a tag rule:
["or",
["=", ["fact", "macaddress"], "de:ea:db:ee:f0:00"]
["=", ["fact", "macaddress"], "de:ea:db:ee:f0:01"]]

The tag could also be written like this:


["in", ["fact", "macaddress"], "de:ea:db:ee:f0:00", "de:ea:db:ee:f0:01"]

The syntax for rule expressions is defined in lib/razor/matcher.rb. Expressions are of the form
[op arg1 arg2 .. argn], where op is one of the operators below, and arg1 through argn are the
arguments for the operator. If they are expressions themselves, they will be evaluated before op is
evaluated.
The expression language currently supports the following operators:
["=", arg1, arg2]: true if arg1 and arg2 are equal. Alias: "eq".
["!=", arg1, arg2]: true if arg1 and arg2 are not equal. Alias: "neq".
["and", arg1, ..., argn]: true if all arguments are true.
["or", arg1, ..., argn]: true if any argument is true.
["not", arg]: logical negation of arg, where any value other than false and nil is considered true.
["fact", arg1 (, arg2)]: the fact named arg1 for the current node.*
["metadata", arg1 (, arg2)]: the metadata entry arg1 for the current node.*
["tag", arg]: the result (a boolean) of evaluating the tag with name arg against the current node.
["in", arg1, arg2, ..., argn]: true if arg1 equals one of arg2 .. argn.
["num", arg1]: arg1 as a numeric value, or raises an error.
[">", arg1, arg2]: true if arg1 is strictly greater than arg2. Alias: "gt".
["<", arg1, arg2]: true if arg1 is strictly less than arg2. Alias: "lt".
[">=", arg1, arg2]: true if arg1 is greater than or equal to arg2. Alias: "gte".
["<=", arg1, arg2]: true if arg1 is less than or equal to arg2. Alias: "lte".

* Note: The fact and metadata operators take an optional second argument. If arg2 is
passed, it is returned if the fact/metadata entry arg1 is not found. If the fact/metadata entry
arg1 is not found and no second argument is given, a RuleEvaluationError is raised.
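To make the semantics concrete, here is a minimal, hypothetical evaluator for a subset of the operators above; the authoritative implementation is the matcher in lib/razor/matcher.rb, and this sketch simplifies it considerably (facts are passed in as a plain hash, and only some operators are handled):

```ruby
# Illustrative evaluator for part of the rule-expression language; not the
# real matcher from lib/razor/matcher.rb.
def eval_rule(expr, facts)
  return expr unless expr.is_a?(Array)  # literals evaluate to themselves
  op, *args = expr
  ev = lambda { |a| eval_rule(a, facts) }
  case op
  when "=", "eq"   then ev.(args[0]) == ev.(args[1])
  when "!=", "neq" then ev.(args[0]) != ev.(args[1])
  when "and"       then args.all? { |a| ev.(a) }
  when "or"        then args.any? { |a| ev.(a) }
  when "not"       then !ev.(args[0])
  when "in"        then args.drop(1).map { |a| ev.(a) }.include?(ev.(args[0]))
  when "fact"      then facts.fetch(ev.(args[0])) do
                          raise "fact not found" if args.length < 2
                          ev.(args[1])  # optional default value
                        end
  when "num"       then Float(ev.(args[0]))
  when "<=", "lte" then ev.(args[0]) <= ev.(args[1])
  when ">", "gt"   then ev.(args[0]) > ev.(args[1])
  else raise "unknown operator: #{op}"
  end
end
```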

Writing Broker Types


Brokers are responsible for handing a node off to a configuration management system, and consist
of two parts: a broker type and some information that is specific to each broker type. For the
Puppet broker type, this information consists of the node's certname, the address of the server, and
the Puppet environment that a node should use. You create brokers with the create-broker
command.
The broker type is closely tied to the configuration management system that the node should be
handed off to. Generally, it consists of two things: a (templated) shell script that performs the
handoff and a description of the additional information that must be specified to create a broker
from that broker type.
CREATE A PE BROKER

1. Create a directory on the broker_path that is set in your config.yaml file. You can call it
something like sample.broker. By default, the brokers directory in Razor.root is on that path.
2. Write a template for your broker install script. For example, create a file called broker.json and
add the following:
{
"name": "pe",
"configuration": {
"server": "<PUPPET_MASTER_HOST>"
},
"broker-type": "puppet-pe"
}

3. Save broker.json to install.erb in the sample.broker directory.


4. If your broker type requires additional configuration data, add a configuration.yaml file to
your sample.broker directory.
To see examples of brokers, have a look at the stock brokers (pun intended) that ship with Razor.

Writing the broker install script


The broker install script is generated from the install.erb template of your broker. It should
return a valid shell script, since tasks generally perform the handoff to the broker by running a
command like curl -s <%= broker_install_url %> | /bin/bash. The server makes sure that
the GET request to broker_install_url returns the broker's install script after interpolating the
template.
In the install.erb template, you have access to two objects: node and broker. The node object
gives you access to things like the node's facts (via node.facts["foo"]), the node's tags (via
node.tags), etc.
The broker object gives you access to the configuration settings. For example, if your
configuration.yaml specifies that a setting version must be provided when creating a broker
from this broker type, you can access the value of version for the current broker as
broker.version.
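As a toy illustration of this interpolation (the Node and Broker structs below are simplified assumptions, not Razor's real classes, and this is not one of the stock brokers):

```ruby
require "erb"

# Toy stand-in objects; in a real install.erb, Razor supplies node and
# broker itself.
Node = Struct.new(:facts, :tags)
Broker = Struct.new(:server)

# A minimal install.erb-style template that reads a node fact and a broker
# configuration setting.
template = ERB.new(<<~SCRIPT)
  #!/bin/bash
  echo "Configuring <%= node.facts["hostname"] %> against <%= broker.server %>"
SCRIPT

node = Node.new({ "hostname" => "node17" }, ["small"])
broker = Broker.new("puppet.example.com")
script = template.result(binding)
```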

The broker configuration file


The configuration.yaml file declares what parameters the user must specify when creating a
broker. For the Puppet broker type, it looks something like:
---
certname:
description: "The locally unique name for this node."
required: true
server:
description: "The puppet master server to request configurations from."
required: true
environment:
description: "On agent nodes, the environment to request configuration in."

For each parameter, you can provide a human-readable description and indicate whether this
parameter is required. Parameters that are not explicitly required are optional.
Next: Razor Tasks

Writing Tasks and Templates to Automate Processes
Tasks describe a process or collection of actions that should be performed while provisioning
machines. They can be used to designate an operating system or other software that should be
installed, where to get it, and the configuration details for the installation.
Tasks are structurally basic: they consist of a YAML metadata file and any number of templates.

Once you've automated the install for your operating system (for example, via kickstart or preseed),
turning that into a task is a matter of writing a bit of metadata and templating some of the things
that your task does. For examples, check out the stock tasks that ship with Razor.
Tasks are stored in the file system. The configuration setting task_path determines where in the
file system Razor looks for tasks and can be a colon-separated list of paths. Relative paths in that
list are taken to be relative to the top-level Razor directory. For example, setting task_path to
/opt/puppet/share/razor-server/tasks:/home/me/task:tasks will make Razor search these
three directories in that order for tasks.

Task Metadata
Tasks can include the following metadata in the task's YAML file. This file is called NAME.yaml, where
NAME is the task name.

---
description: HUMAN READABLE DESCRIPTION


os: OS NAME
os_version: OS_VERSION_NUMBER
base: TASK_NAME
boot_sequence:
1: boot_templ1
2: boot_templ2
default: boot_local

Only os_version and boot_sequence are required. The base key allows you to derive one task
from another by reusing some of the base metadata and templates. If the derived task has
metadata that's different from the metadata in base, the derived metadata overrides the base task's
metadata.
The boot_sequence hash indicates which templates to use when a node using this task boots. In
the example above, a node will rst boot using boot_templ1, then using boot_templ2. For every
subsequent boot, the node will use boot_local.
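The selection rule can be sketched as follows; this is an assumed reading of the behavior described above, not server code:

```ruby
# Illustrative sketch: pick the boot template for a node's Nth boot from a
# boot_sequence hash, falling back to the "default" entry.
def boot_template(boot_sequence, boot_count)
  boot_sequence.fetch(boot_count, boot_sequence["default"])
end
```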

Writing Templates
Task templates are ERB templates and are searched for in all the directories given in the task_path
configuration setting. Templates are searched for in the subdirectories in this order:
1. name/os_version
2. name
3. common
If the task has a base task, the base task's template directories are searched just before the common
directory.
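The lookup order can be sketched like this; the ordering across multiple task_path entries is an assumption (each base directory searched with all three subdirectories before moving on), and the base-task step is omitted for brevity:

```ruby
# Illustrative sketch of template lookup: for each task_path entry, try
# name/os_version, then name, then common.
def template_search_dirs(task_paths, name, os_version)
  subdirs = ["#{name}/#{os_version}", name, "common"]
  task_paths.flat_map { |base| subdirs.map { |s| File.join(base, s) } }
end
```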
TEMPLATE HELPERS

Templates can use the following helpers to generate URLs that point back to the server; all of the
URLs respond to a GET request, even the ones that make changes on the server:
file_url(TEMPLATE): the URL that will retrieve TEMPLATE.erb (after evaluation) from the current
node's task.
repo_url(PATH): the URL to the file at PATH in the current repo.
log_url(MESSAGE, SEVERITY): the URL that will log MESSAGE in the current node's log.
node_url: the URL for the current node.
store_url(VARS): the URL that will store the values in the hash VARS in the node. Currently only
changing the node's IP address is supported. Use store_url("ip" => "192.168.0.1") for that.
stage_done_url: the URL that tells the server that this stage of the boot sequence is finished,
and that the next boot sequence should begin upon reboot.
broker_install_url: a URL from which the install script for the node's broker can be retrieved.
You can see an example in the script os_complete.erb, which is used by most tasks.
Each boot (except for the default boot) must culminate in something akin to curl <%=
stage_done_url %> before the node reboots. Omitting this will cause the node to reboot with the
same boot template over and over again.
The task must indicate to the Razor server that it has successfully completed by doing a GET request
against stage_done_url("finished"), for example using curl or wget. This will mark the node
installed in the Razor database.
You use these helpers by causing your script to perform an HTTP GET against the generated URL. This might mean that you pass an argument like ks=<%= file_url("kickstart") %> when booting a kernel, or that you put curl <%= log_url("Things work great") %> in a shell script.
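As a concrete illustration, the following Ruby sketch renders an ERB fragment like the tail of an install script, using stand-in versions of two helpers. The helper names (log_url, stage_done_url) come from the list above, but the URL shapes they return here are invented placeholders; on a real Razor server the helpers are provided for you and generate real server URLs:

```ruby
require 'erb'

# Stand-in helpers so the template can be rendered outside Razor.
# The URL formats below are placeholders, not Razor's real URL scheme.
def log_url(message, severity = 'info')
  "http://razor.example.com/svc/log?msg=#{message}&severity=#{severity}"
end

def stage_done_url(stage = nil)
  path = stage ? "/#{stage}" : ''
  "http://razor.example.com/svc/stage-done#{path}"
end

# The tail of a hypothetical install-script template: log a message,
# then tell the server this stage of the boot sequence is finished.
template = <<'ERB'
curl -s <%= log_url("install_done") %>
curl -s <%= stage_done_url("finished") %>
ERB

rendered = ERB.new(template).result(binding)
puts rendered
```

Rendering replaces each <%= ... %> with the generated URL, so the node ends up running plain curl commands against the server.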
Next: Razor Configuration & Known Issues

Setup Information and Known Issues


Important Setup Information
Razor has been specifically tested in the following setups/environments: RHEL/CentOS 6.4.
The Razor microkernel is 64-bit only. Razor can only provision 64-bit machines.
Razor has a minimum RAM requirement of 512MB.


To successfully use a machine with Razor and install an operating system on it, the machine must:
Be supported by the operating system to be installed on it.
Be able to successfully boot into the microkernel, which is based on Fedora 19.
Be able to successfully boot the iPXE firmware.
USING RAZOR

The repo contains the actual bits that are used when installing a node; the installation instructions are contained in tasks. Razor comes with a few predefined tasks to get you started. They can be found in the tasks/ directory in the razor-server repo, and they can all be used by simply mentioning their name in a policy. This includes the vmware_esxi installer.

Known Issues
Razor doesn't handle local time jumps
The Razor server is sensitive to large jumps in the local time, such as the jump a VM experiences after it has been suspended for some time and then resumed. In that case, the server stops processing background tasks, such as the creation of repos. To recover, restart the server with service pe-razor-server restart.
JSON warning
When you run Razor commands, you might get this warning: MultiJson is using the default adapter (ok_json). We recommend loading a different JSON library to improve performance.
You can disregard the warning; this situation is completely harmless. However, if you're using Ruby 1.8.7, you can install a separate JSON library, such as json_pure, to prevent the warning from appearing.
Razor hangs in VirtualBox 4.3.6
We're finding that VirtualBox 4.3.6 gets to the point of downloading the microkernel from the Razor server and hangs at 0% indefinitely. We don't have this problem with VirtualBox 4.2.22.
Using Razor on Windows
Windows support is ALPHA quality. The purpose of the current Windows installer is to get real world
experience with Windows installation automation, and to discover the missing features required to
fully support Windows infrastructure.
Temp files aren't removed in a timely manner
This is due to Ruby code working as designed, and while it takes longer to remove temporary files than you might expect, the files are eventually removed when the object is finalized.
The no_replace parameter is ignored for the update-node-metadata command
This parameter is not currently working.


Confusing POST error message


If you provide an empty string to the --iso-url parameter of the create-repo command, the
Razor client returns a confusing error message:
Error from doing POST http://rgrazor.delivery.puppetlabs.net:8080/api/commands/create-repo
400 Bad Request
urls only one of url and iso_url can be used

The error is meant to indicate that you cannot supply both those attributes at the same time on a
single repo instance.
Updates might be required for VMware ESXi 5.5 igb files
You might have to update your VMware ESXi 5.5 ISO with updated igb drivers before you can install ESXi with Razor. See this driver package download page on the VMware site for the updated igb drivers you need.
Next: Cloud Provisioning Overview

A High Level Look at Puppet's Cloud Provisioning Tools
Puppet Enterprise includes a suite of command-line tools you can use for provisioning new virtual
nodes when building or maintaining cloud computing infrastructures based on VMware vSphere,
Amazon EC2 and Google Compute Engine. You can use these tools to:
Create and destroy virtual machine instances
Classify new nodes (virtual or physical) in the PE console
Automatically install and congure PE on new nodes (virtual or physical)
When used together, these tools provide quick and efficient workflows for adding and maintaining fully configured, ready-to-run virtual nodes in your Puppet Enterprise-managed cloud environment.
See the sections on VMware, AWS, and GCE provisioning for details about creating and destroying
virtual machines in these environments. Beyond that, the section on classifying nodes and installing
PE covers actions that work on any new machine, virtual or physical, in a cloud environment. To get
an idea of a typical workow in a cloud provisioning environment, see the workow section.
The cloud provisioning tools can be added during an installation of Puppet Enterprise. If you have
already installed PE and you want to install the cloud provisioning tools, simply run the upgrader
again.

Note for Puppet users: Most of the information in these sections applies to Puppet as well as PE. However, provisioning on VMware is only supported by Puppet Enterprise.

Tools
PE's provisioning tools are built on the node, node_vmware, node_aws, and node_gce subcommands. Each of these subcommands has a selection of available actions (such as list and start) that are used to complete specific provisioning tasks. You can get detailed information about a subcommand and its actions by running puppet help and puppet man.
The VMware, AWS, and GCE subcommands are only used for cloud provisioning tasks. The node subcommand, on the other hand, is a general-purpose Puppet subcommand that includes several provisioning-specific actions. These are:
classify
init
install
The clean action may also be useful when decommissioning nodes.
The cloud provisioning tools, except for GCE, are powered by Fog, the Ruby cloud services library. Fog is automatically installed on any machine receiving the cloud provisioner component.
Next: Installing and Configuring Cloud Provisioner

Installing and Configuring Cloud Provisioning


There are many options and actions associated with the main cloud provisioning subcommands: node, node_vmware, node_aws, and node_gce. This page provides an overview, but check the man pages for all the details (puppet man node_aws, etc.).

Prerequisites
Services
The following services and credentials are required:
VMware requires: VMware vSphere 4.0 (or later) and VMware vCenter
Amazon Web Services requires: An existing Amazon account with support for EC2
Google Compute Engine requires: An existing Google account and billing information.


Installing
Cloud provisioning tools are installed automatically as part of the web-based PE install. If you don't want to install the cloud provisioning tools, use an answer file with your Puppet Enterprise installation, and set the q_puppet_cloud_install option to N.
If you install PE without installing the cloud provisioning tools, and then decide you want to install
them, you can do so using the package manager of your choice (Yum, APT, etc.). The packages you
need are: pe-cloud-provisioner and pe-cloud-provisioner-libs. They can be found in the packages
directory of the installer tarball.

Configuring
To create new virtual machines with Puppet Enterprise, you'll need to first configure the services you'll be using.
Start by creating a file called .fog in the home directory of the user who will be provisioning new nodes.
$ touch ~/.fog

This will be the configuration file for Fog, the cloud abstraction library that powers PE's provisioning tools. Once it is filled out, it will consist of a YAML hash indicating the locations of your cloud services and the credentials necessary to control them. For example:
:default:
:vsphere_server: vc01.example.com
:vsphere_username: cloudprovisioner
:vsphere_password: abc123
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
:aws_access_key_id: AKIAIISJV5TZ3FPWU3TA
:aws_secret_access_key: ABCDEFGHIJKLMNOP1234556/s

See below to learn how to find these credentials.


You can also specify multiple sets of configurations by creating additional mappings, as follows:
:default:
:vsphere_server: vc01.example.com
:vsphere_username: cloudprovisioner
:vsphere_password: abc123
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
:aws_access_key_id: AKIAIISJV5TZ3FPWU3TA
:aws_secret_access_key: ABCDEFGHIJKLMNOP1234556/s
:production:

:vsphere_server: vc01.prod.example.com
:vsphere_username: cloudprovisioner
:vsphere_password: abc123
:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8
:aws_access_key_id: AKIAIISJV5TZ3FPWU3TA
:aws_secret_access_key: ABCDEFGHIJKLMNOP1234556/s

You can access these configurations by prepending cloud provisioner commands with a special environment variable, FOG_CREDENTIAL:

FOG_CREDENTIAL=default puppet node_vmware <somecommands>


FOG_CREDENTIAL=production puppet node_vmware <somecommands>
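Under the hood, Fog reads the section of ~/.fog named by FOG_CREDENTIAL, defaulting to :default. The following Ruby sketch mimics that lookup so you can sanity-check a credentials file; the file contents below are illustrative values, and the safe_load call assumes a reasonably recent Psych (Ruby 2.6 or later):

```ruby
require 'yaml'

# Illustrative ~/.fog contents; in practice you would read
# File.expand_path('~/.fog') instead of this string.
fog_yaml = <<'YAML'
:default:
  :vsphere_server: vc01.example.com
:production:
  :vsphere_server: vc01.prod.example.com
YAML

# The leading colons make the keys Ruby symbols, so Symbol must be
# explicitly permitted when parsing with safe_load.
fog = YAML.safe_load(fog_yaml, permitted_classes: [Symbol])

# Pick the section the same way Fog does with FOG_CREDENTIAL.
section = (ENV['FOG_CREDENTIAL'] || 'default').to_sym
puts fog.fetch(section)[:vsphere_server]
```

If fetch raises a KeyError, the FOG_CREDENTIAL value doesn't match any section in the file.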

Adding VMware Credentials


To connect to a VMware vSphere server, you must put the following information in your ~/.fog file:
:vsphere_server
The name of your vCenter host (for example: vc1.example.com). You should already know
the value for this setting.
:vsphere_username
Your vCenter username. You should already know the value for this setting.
:vsphere_password
Your vCenter password. You should already know the value for this setting.
:vsphere_expected_pubkey_hash
A public key hash for your vSphere server. The value for this setting can be obtained by
entering the other three settings and then running the following command:
$ puppet node_vmware list

This will result in an error message containing the server's public key hash:
notice: Connecting ...
err: The remote system presented a public key with hash
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8 but
we're expecting a hash of <unset>. If you are sure the remote system is
authentic set vsphere_expected_pubkey_hash: <the hash printed in this
message> in ~/.fog
err: Try 'puppet help node_vmware list' for usage

which can then be entered as the value of this setting.



Adding Amazon Web Services


To connect to Amazon Web Services, you must put the following information in your ~/.fog file:
:aws_access_key_id
Your AWS Access Key ID. See below for how to find this.
:aws_secret_access_key
Your AWS Secret Key ID. See below for how to find this.
For AWS installations, you can find your Amazon Web Services credentials online in your Amazon account. To view them, go to Amazon AWS and click on the Account tab.

Select the Security Credentials menu and choose Access Credentials. Click on the Access Keys tab to
view your Access Keys.
You need to record two pieces of information: the Access Key ID and the Secret Key ID. To see your
Secret Access Key, click the Show link under Secret Access Key.
Put both keys in your ~/.fog le as described above. You will also need to generate an SSH private
key using Horizon, or simply import a selected public key.
Additional AWS Configuration
For Puppet to provision nodes in Amazon Web Services, you will need an EC2 account with the following:
At least one Amazon-managed SSH key pair.
A security group that allows outbound traffic on ports 8140 and 61613, and inbound SSH traffic on port 22 from the machine being used for provisioning.

You'll need to provide the names of these resources as arguments when running the provisioning commands.
KEY PAIRS

To find or create your Amazon SSH key pair, browse to the Amazon Web Services EC2 console.

Select the Key Pairs menu item from the dashboard. If you don't have any existing key pairs, you can create one with the Create Key Pairs button. Specify a new name for the key pair to create it; the private key file will be automatically downloaded to your host.
Make a note of the name of your key pair, since you will need to know it when creating new instances.
SECURITY GROUP

To add or edit a security group, select the Security Groups menu item from the dashboard. You
should see a list of the available security groups. If no groups exist, you can create a new one by
clicking the Create Security Groups button. Otherwise, you can edit an existing group.


To add the required rules, select the Inbound tab and add an SSH rule. Make sure that inbound SSH traffic is using port 22. You can also indicate a specific source to lock access down to an appropriate source IP or network. Click Add Rule to add the rule, then click Apply Rule Changes to save.
You should also ensure that your security group allows outbound traffic on ports 8140 and 61613. These are the ports PE uses to request configurations and listen for orchestration messages.
Demonstration
The following video demonstrates the setup process and some basic functions:

Adding Google Compute Engine Credentials


The following steps describe how to create a Google Compute Engine account, obtain a client ID and secret, and register node_gce with your GCE account. Note: These steps don't cover setting up a billing method for your GCE account. To set up billing, click Billing in the Google Cloud Console, and follow the instructions there.
Go to https://cloud.google.com and sign in with your Google credentials. Click the Create Project button, and give your project a name. This creates your project in the Google Cloud Console. Some options for working with your project are displayed in the left navigation bar.
In the left-hand navigation bar, click APIs and auth and then click Registered Apps.
Click the REGISTER APP button. Give your app a name (it can be whatever you like) and click Native as the platform.
Click Register. Your app's page opens, and a CLIENT ID and CLIENT SECRET are provided. Note: You'll need the ID and secret, so capture these for future reference.
Now, in PE, run puppet node_gce register <client ID> <client secret> and follow the online instructions. You'll get a URL to visit in your browser. There, you'll log into your Google account and grant permission for your node to access GCE.
Once permission is granted, you'll get a token of about 64 characters. Copy this token as requested into your node_gce run to complete the registration.
Next: Provisioning with VMware

Provisioning With VMware


Puppet Enterprise provides support for working with VMware virtual machine instances using vSphere and vCenter. Using actions of the puppet node_vmware subcommand, you can create new machines, view information about existing machines, classify and configure machines, and tear machines down when they're no longer needed.
The main actions used for vSphere cloud provisioning include:
puppet node_vmware list for viewing existing instances
puppet node_vmware create for creating new instances
puppet node_vmware terminate for destroying no longer needed instances.
Note: The command puppet node_vmware assumes that data centers are located at the very top level of the inventory hierarchy. Any data centers deeper down in the hierarchy (and in effect all objects hosted by these data centers) are ignored by the command.
Here's a fix:
1. Move the data centers hosting the involved VMs/templates to the top level of the inventory hierarchy. This can be a temporary move.
2. Perform the desired node_vmware actions. Both puppet node_vmware list and puppet node_vmware create should see the VMs/templates hosted on the moved data centers.
3. Move the data centers back, if desired.
If you're new to VMware vSphere, you should start by looking at the vSphere documentation.

Permissions Required for Provisioning with VMware


The following are the permissions needed to provision with VMware, listed according to subcommand. In addition, you should have full admin access to your vSphere pool.
list Lists any VM with read-only permissions or better.
find Requires read-only permissions or better on the target data center, data store, network,
or computer, as well as the full VM folder path that contains the VM in question.
start Requires find permissions + VirtualMachine.Interact.PowerOn on the VM in
question.
stop Requires find permissions + VirtualMachine.Interact.PowerOff on the VM in
question.
terminate Requires find permissions + VirtualMachine.Inventory.Remove on the VM in
question and its parent folder.
create Requires find permissions + VirtualMachine.Inventory.CreateFromExisting and
VirtualMachine.Provisioning.DeployTemplate on the template in question, as well as
Datastore.AllocateSpace on the target data store, and Resource.AssignVMToPool on the
target resource pool (the target cluster in non-DRS enabled vCenters).

Listing VMware vSphere Instances


Let's get started by listing the machines currently on our vSphere server. You do this by running the puppet node_vmware list command:

$ puppet node_vmware list

If you haven't yet confirmed your vSphere server's public key hash in your ~/.fog file, you'll receive an error message containing that hash:
$ puppet node_vmware list
notice: Connecting ...
err: The remote system presented a public key with hash
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8 but
we're expecting a hash of <unset>. If you are sure the remote system is
authentic set vsphere_expected_pubkey_hash: <the hash printed in this
message> in ~/.fog
err: Try 'puppet help node_vmware list' for usage

Confirm that you are communicating with the correct, trusted vSphere server by checking the hostname in your ~/.fog file, then add the hash to the .fog file as follows:


:vsphere_expected_pubkey_hash:
431dd5d0412aab11b14178290d9fcc5acb041d37f90f36f888de0cebfffff0a8

Now you should be able to run the puppet node_vmware list command and see a list of existing
virtual machines:
$ puppet node_vmware list
notice: Connecting ...
notice: Connected to vc01.example.com as cloudprovisioner (API version 4.1)
notice: Finding all Virtual Machines ... (Started at 12:16:01 PM)
notice: Control will be returned to you in 10 minutes at 12:26 PM if locating
is unfinished.
Locating: 100% |ooooooooooooooooooooooooooooooooooooooooooooooooooo|
Time: 00:00:34
notice: Complete
/Datacenters/Solutions/vm/master_template
powerstate: poweredOff
name: master_template
hostname: puppetmaster.example.com
instanceid: 5032415e-f460-596b-c55d-6ca1d2799311
ipaddress: ---.---.---.---
template: true
/Datacenters/Solutions2/vm/puppetagent
powerstate: poweredOn
name: puppetagent
hostname: agent.example.com
instanceid: 5032da5d-68fd-a550-803b-aa6f52fbf854
ipaddress: 192.168.100.218
template: false

This shows that you're connected to your vSphere server, and lists an available VMware template (at master_template) and one virtual machine (agent.example.com). VMware templates contain the information needed to build new virtual machines, such as the operating system, hardware configuration, and other details.
Specifically, list will return all of the following information:
The location of the template or machine
The status of the machine (for example, poweredOff or poweredOn)
The name of the template or machine on the vSphere server
The host name of the machine
The instanceid of the machine
The IP address of the machine (note that templates don't have IP addresses)
The type of entry - either a VMware template or a virtual machine

Creating a New VMware Virtual Machine



Puppet Enterprise can create and manage virtual machines from VMware templates using the
node_vmware create action.

$ puppet node_vmware create --name=newpuppetmaster --template="/Datacenters/Solutions/vm/master_template"


notice: Connecting ...
notice: Connected to vc01.example.com as cloudprovisioner (API version 4.1)
notice: Locating VM at /Datacenters/Solutions/vm/master_template (Started at
12:38:58 PM)
notice: Control will be returned to you in 10 minutes at 12:48 PM if locating
(1/2) is unfinished.
Locating (1/2): 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| Time: 00:00:16
notice: Starting the clone process (Started at 12:39:15 PM)
notice: Control will be returned to you in 10 minutes at 12:49 PM if starting
(2/2) is unfinished.
Starting (2/2): 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| Time: 00:00:03
--name: newpuppetmaster
power_state: poweredOff
...
status: success

Here node_vmware create has built a new virtual machine named newpuppetmaster with a
template of /Datacenters/Solutions/vm/master_template. (This is the template seen earlier with
the list action.) The virtual machine will be powered on, which may take several minutes to
complete.
Important: All ENC connections to cloud nodes now require SSL support.
The following video demonstrates the above and some other basic functions:


Starting, Stopping and Terminating VMware Virtual Machines
You can start, stop, and terminate virtual machines with the start, stop, and terminate actions.
To start a virtual machine:
$ puppet node_vmware start /Datacenters/Solutions/vm/newpuppetmaster

You can see we've specified the path to the virtual machine we wish to start, in this case /Datacenters/Solutions/vm/newpuppetmaster.
To stop a virtual machine, use:
$ puppet node_vmware stop /Datacenters/Solutions/vm/newpuppetmaster

This will stop the running virtual machine (which may take a few minutes).
Lastly, we can terminate a VMware instance. Be aware this will:
Force-shutdown the virtual machine
Delete the virtual machine AND its hard disk images
This is a destructive and permanent action that should only be taken when you wish to delete the
virtual machine and its data!
The following video demonstrates the termination process and some other related functions:

Getting more help



The puppet node_vmware command has extensive in-line help and a man page.
To see the available actions and command line options, run:
$ puppet help node_vmware
USAGE: puppet node_vmware <action>
This subcommand provides a command line interface to work with VMware vSphere
Virtual Machine instances. The goal of these actions is to easily create
new virtual machines, install Puppet onto them, and clean up when they're
no longer required.
OPTIONS:
--render-as FORMAT - The rendering format to use.
--verbose - Whether to log verbosely.
--debug - Whether to log debug information.
ACTIONS:
create Create a new VM from a template
find Find a VMware Virtual Machine
list List VMware Virtual Machines
start Start a Virtual Machine
stop Stop a running Virtual Machine
terminate Terminate (destroy) a VM
See 'puppet man node_vmware' or 'man puppet-node_vmware' for full help.

You can get help on individual actions by running:


$ puppet help node_vmware <ACTION>

For example:
$ puppet help node_vmware start

Next: Provisioning with GCE

Provisioning With Google Compute Engine


Puppet Enterprise provides support for working with Google Compute Engine, a service built on the Google infrastructure that provides Linux virtual machines for large-scale computing. Using the puppet node_gce command, you can create new machines, view information about existing machines, classify and configure machines, and tear machines down when they're no longer needed.


The main actions for GCE cloud provisioning include:


puppet node_gce list for viewing existing instances
puppet node_gce create for creating new instances
puppet node_gce delete for destroying no longer needed instances
puppet node_gce bootstrap for creating a new GCE VM, then installing PE via SSH
puppet node_gce register for registering your cloud provisioner GCE client with Google Cloud
puppet node_gce ssh to SSH to a GCE VM
puppet node_gce user for managing user login accounts and SSH keys on an instance
If you're new to Google Compute Engine, we recommend reading their Getting Started documentation.
Below, we take a quick look at these actions and their associated options. For comprehensive
information, see Getting More Help below.

Viewing existing GCE instances


Let's start by finding out about currently running GCE instances. Run the puppet node_gce list command with the --project argument and the project name. For example, for a project named cloud-provisioner-testing-1, the command would look like:
$ puppet node_gce list --project cloud-provisioner-testing-1

And the output would look like:


#### zone: zones/europe-west1-a
<no instances in zone>
#### zone: zones/us-central1-a
name: gce-test-project
status: running
metadata: sshKeys: myname:ssh-rsa AABB3NrpC2xAEEEEEIOu...
type: https://www.googleapis.com/compute/v1beta15/projects/cloud-provisioner-testing-1/zones/us-central1-a/machineTypes/n1-standard-1
kernel: https://www.googleapis.com/compute/v1beta15/projects/google/global/kernels/gce-v20130813
image: https://www.googleapis.com/compute/v1beta15/projects/debian-cloud/global/images/debian-7-wheezy-v20130816
router: false
networks: nic0: 10.240.229.40
disks: : scratch read-write

The output gives you a list of instances running in each geographical zone (this example only shows two of the available zones). You can see that there is one registered instance on GCE. The information that's provided for the instance includes the SSH key used to establish the connection, the machine type (in this case, n1-standard-1) that was set during registration, and the image that the instance contains. Here, the image is a Debian Wheezy OS.
Note: If you have no instances running, each zone that's listed will give the message, no instances in zone.

Creating a new GCE instance


New instances are created using the node_gce create or the node_gce bootstrap actions. The create action simply builds a new GCE machine instance, whereas bootstrap is a wrapper action that creates, classifies, and then initializes the node.
Using create
The node_gce create subcommand is used to build a new GCE instance based on a selected image.
It has these required arguments:
--project: the project you're working with
--image: the image you're using for the instance
You also supply the name for the new instance and the kind of compute engine you want.
For example, if the project where the instance will be created is cloud-provisioner-testing-1, the image is a specific version of Debian Wheezy supported by GCE (see the list of available images at https://developers.google.com/compute/docs/images#availableimages), the instance name is myname-test-name, and the compute engine is n1-standard-1-d, then your complete command would look like:
$ puppet node_gce create --project cloud-provisioner-testing-1 --image debian-7-wheezy-v20130816 myname-test-name n1-standard-1-d

Once run, you'll get the message, Creating the VM is pending. When it's complete, you will see the new instance listed in your Google Cloud Console.
Using bootstrap
The node_gce bootstrap subcommand creates a new instance and installs a Puppet agent on it.
It includes the following options:
project: the project the node is created in (for example, cloud-provisioner-testing-1)
the node name and standard compute size (for example, n1-standard-1)
image: describes the image (for example, Debian Wheezy)
login: transfers the SSH key for the designated login
install-script: references a local install script for the instance
installer-answers: points to the location of the local file that provides the answers to installation questions
installer-payload: indicates the location of the installer tar.gz
With all of these options, the bootstrap subcommand looks like this:

$ puppet node_gce --trace bootstrap --project cloud-provisioner-testing-1 pe-agent n1-standard-1 --image debian-7-wheezy-v20130816 --login myname --install-script puppet-enterprise-http --installer-answers agent_no_cloud.answer.sample --installer-payload 'http://commondatastorage.googleapis.com/pe-install%2Fpuppet-enterprise-3.3.0-rc2-8-g629db7a-debian-7-amd64.tar.gz'

In the above example, the installation tarball was uploaded to Google Cloud Storage (shown below) to make the process faster. (Note: By selecting the Shared Publicly check box, you can avoid having to sign in while this process runs. Don't forget to clear the check box when you're done.)

When you run the bootstrap subcommand, you'll get status messages for each stage, such as Waiting for SSH response and Installing Puppet.
If you don't have certificate autosigning turned on, you'll get a message that signing the certificate failed. In this case, go to your Puppet Enterprise console and check the node requests.
Just click the Accept button. Once the certificate request has been accepted, the new agent is displayed in the PE console, where you can configure and manage it.

Deleting a GCE instance


Once you've finished with a GCE instance, you can easily delete it. Deleting an instance destroys the instance entirely and is a destructive, permanent action that should only be performed when you're confident the instance and its data are no longer needed.
To delete an instance, use the node_gce delete action. Provide both the project and the instance name.
$ puppet node_gce delete --project cloud-provisioner-testing-1 myname-test-name

After you run this command, wait a few moments, and then you'll get the message, Deleting the VM is done. You can confirm that the instance was deleted by checking your Google Cloud Console.
The following video demonstrates using many node_gce subcommands.

Getting more help


The puppet node_gce command has a man page, which you can see with this command:

$ puppet man node_gce

You can get help on individual actions by running:


$ puppet help node_gce <ACTION>

For example,

$ puppet help node_gce list

You can also get general help:


$ puppet help node_gce

Next: Provisioning with AWS

Provisioning With Amazon Web Services


Puppet Enterprise provides support for working with Elastic Compute Cloud (EC2) virtual machine instances using Amazon Web Services. Using the puppet node_aws subcommand, you can create new machines, view information about existing machines, classify and configure machines, and tear machines down when they're no longer needed.
The main actions used for AWS cloud provisioning include:
puppet node_aws list for viewing existing instances
puppet node_aws create for creating new instances
puppet node_aws terminate for destroying no longer needed instances
If you are new to Amazon Web Services, we recommend reading their Getting Started
documentation.
Below, we take a quick look at these actions and their associated options. For comprehensive
information, see Getting More Help below.

Viewing Existing EC2 Instances


Let's start by finding out about the currently running EC2 instances. You do this by running the puppet node_aws list command.

$ puppet node_aws list


i-013eb462:
created_at: Sat Nov 12 02:10:06 UTC 2011
dns_name: ec2-107-22-110-102.compute-1.amazonaws.com
id: i-013eb462
state: running
i-019f0a62:
created_at: Sat Nov 12 03:48:50 UTC 2011
dns_name: ec2-50-16-145-167.compute-1.amazonaws.com
id: i-019f0a62
state: running
i-01a33662:
created_at: Sat Nov 12 04:32:25 UTC 2011
dns_name: ec2-107-22-79-148.compute-1.amazonaws.com
id: i-01a33662
state: running

This shows three running EC2 instances. For each instance, the following characteristics are shown:
The instance name
The date the instance was created
The DNS host name of the instance
The ID of the instance
The state of the instance, for example: running or terminated
If you have no instances running, nothing will be returned.

Creating a new EC2 instance


New instances are created using the node_aws create or the node_aws bootstrap actions. The
create action simply builds a new EC2 machine instance. The bootstrap wrapper action creates,
classifies, and then initializes the node all in one command.
Using create
The node_aws create subcommand is used to build a new EC2 instance based on a selected AMI
image.
The subcommand has three required options:
The AMI image we'd like to use (--image)
The name of the SSH key pair to start the image with (--keyname). See here for more about
creating Amazon-managed key pairs.
The type of machine instance we wish to create (--type). You can see a list of types here.
Provide this information and run the command:
$ puppet node_aws create --image ami-edae6384 --keyname cloudprovisioner --type m1.small
notice: Creating new instance ...
notice: Creating new instance ... Done
notice: Creating tags for instance ...
notice: Creating tags for instance ... Done
notice: Launching server i-df7ee898 ...
##################
notice: Server i-df7ee898 is now launched
notice: Server i-df7ee898 public dns name: ec2-50-18-93-82.us-east-1.compute.amazonaws.com
ec2-50-18-93-82.us-east-1.compute.amazonaws.com

You've created a new instance using an AMI of ami-edae6384, a key named cloudprovisioner, and
of the machine type m1.small. If you've forgotten the available key names on your account, you can
get a list with the node_aws list_keynames action:

$ puppet node_aws list_keynames


cloudprovisioner (ad:d4:04:9f:b0:8d:e5:4e:4c:46:00:bf:88:4f:b6:c2:a1:b4:af:56)

You can also specify a variety of other options, including the region in which to start the instance.
You can see a full list of these options by running puppet help node_aws create.
After the instance has been created, the public DNS name of the instance will be returned. In this
case: ec2-50-18-93-82.us-east-1.compute.amazonaws.com.
Using bootstrap
The bootstrap action is a wrapper that combines several actions, allowing you to create, classify,
install Puppet on, and sign the certificate of EC2 machine instances. Classification is done via the
console.
In addition to the three options required by create (see above), bootstrap also requires the
following:
The name of the user Puppet should use when logging in to the new node (--login or --username)
The path to a local private key that allows SSH access to the node (--keyfile). Typically, this is
the path to the private key that gets downloaded from the Amazon EC2 site.
The example below will bootstrap a node using the ami-0530e66c image, located in the US East
region and running as a t1.micro machine type.
$ puppet node_aws bootstrap \
--region us-east-1 \
--image ami-0530e66c \
--login root \
--keyfile ~/.ec2/ccaum_rsa.pem \
--keyname ccaum_rsa \
--type t1.micro

Demo
The following video demonstrates the EC2 instance creation process in more detail:


Connecting to an EC2 instance


You connect to EC2 instances via SSH. To do this you will need the private key downloaded earlier
from the Amazon Web Services console. Add this key to your local SSH configuration, usually in the
.ssh directory.

$ cp mykey.pem ~/.ssh/mykey.pem

Ensure the .ssh directory and the key have appropriate permissions.

$ chmod 0700 ~/.ssh


$ chmod 0600 ~/.ssh/mykey.pem
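Putting those steps together, with a quick verification that the modes are what SSH expects (GNU stat shown; the flag syntax differs on BSD/macOS):

```shell
# lock down the directory and the key, then print the permission bits
mkdir -p ~/.ssh
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/mykey.pem
stat -c '%a' ~/.ssh/mykey.pem    # prints 600
```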

You can now use this key to connect to your new instance.
$ ssh -i ~/.ssh/mykey.pem root@ec2-50-18-93-82.us-east-1.compute.amazonaws.com
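If you connect often, you can avoid retyping the key path by adding a host entry to ~/.ssh/config (a sketch; the alias name here is arbitrary, and the host name is the one from the example above):

```
# ~/.ssh/config
Host my-ec2-instance
    HostName ec2-50-18-93-82.us-east-1.compute.amazonaws.com
    User root
    IdentityFile ~/.ssh/mykey.pem
```

After that, ssh my-ec2-instance is equivalent to the full command above.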

Terminating an EC2 instance


Once you've finished with an EC2 instance, you can easily terminate it. Terminating an instance
destroys the instance entirely and is a destructive, permanent action that should only be performed
when you are confident the instance, and its data, are no longer needed.
To terminate an instance, use the node_aws terminate action.

$ puppet node_aws terminate ec2-50-18-93-82.us-east-1.compute.amazonaws.com


notice: Destroying i-df7ee898 (ec2-50-18-93-82.us-east-1.compute.amazonaws.com) ...
notice: Destroying i-df7ee898 (ec2-50-18-93-82.us-east-1.compute.amazonaws.com) ... Done

The following video demonstrates the EC2 instance termination process in more detail:

Getting more help


The puppet node_aws command has extensive in-line help documentation, as well as a man page.
To see the available actions and command line options, run:
$ puppet help node_aws
USAGE: puppet node_aws <action>
This subcommand provides a command line interface to work with Amazon EC2
machine instances. The goal of these actions are to easily create new
machines, install Puppet onto them, and tear them down when they're no longer
required.
OPTIONS:
--render-as FORMAT - The rendering format to use.
--verbose - Whether to log verbosely.
--debug - Whether to log debug information.
ACTIONS:
bootstrap Create and initialize an EC2 instance using Puppet.
create Create a new EC2 machine instance.
fingerprint Make a best effort to securely obtain the SSH host key
fingerprint.
list List AWS EC2 machine instances.
list_keynames List available AWS EC2 key names.
terminate Terminate an EC2 machine instance.


See 'puppet man node_aws' or 'man puppet-node_aws' for full help.

For more detailed help you can also view the man page.
$ puppet man node_aws

You can get help on individual actions by running:


$ puppet help node_aws <ACTION>

For example,
$ puppet help node_aws list

Next: Classifying Cloud Nodes and Remotely Installing Puppet

Classifying New Nodes and Remotely Installing Puppet
Nodes in a cloud infrastructure can be classified and managed as easily as any other machine in a
Puppet Enterprise deployment. You can install a puppet agent (or other component) on them, add
new nodes to pre-existing console groups, further classify and configure those nodes, and
manipulate them with live management.
Many of these tasks are accomplished using the puppet node subcommand. While puppet node
can be applied to physical or virtual machines, several actions have been created specifically for
working with virtual machine instances in the cloud. For complete details, view the puppet node
man page.

Classifying nodes
Once you have created instances for your cloud infrastructure, you need to start configuring them
and adding the files, settings, and/or services needed for their intended purposes. The fastest and
easiest way to do this is to add them to your existing console groups. You can do this by assigning
groups to nodes or nodes to groups with the console's web interface. However, you can also work
right from the command line, which can be more convenient if you're already at your terminal and
have the node's name ready at hand.
To classify nodes and add them to a console group, run puppet node classify as follows.


Note - With classify and init, you need to specify the --insecure option because the PE console
uses the internal certificate name, pe-internal-dashboard, which fails verification because it
doesn't match the host name of the host where the console is running.
$ puppet node classify \
--insecure \
--node-group=appserver_pool \
--enc-server=localhost \
--enc-port=443 \
--enc-auth-user=console \
--enc-auth-passwd=password \
ec2-50-19-149-87.compute-1.amazonaws.com
notice: Contacting https://localhost:443/ to classify
ec2-50-19-149-87.compute-1.amazonaws.com
complete

The above example adds an AWS EC2 instance to the console. Note that you use the name of the
node you are classifying as the command's argument and the --node-group option to specify the
group you want to add your new node to. The other options contain the connection and
authentication data needed to properly connect to the node.
Important: All ENC connections to cloud nodes now require SSL support.
Note that until the first puppet run is performed on this node, Puppet itself will not yet be installed.
(Unless one of the wrapper commands has been used. See below.)
To see additional help for node classification, run puppet help node classify. For more about
how the console groups and classifies nodes, see the section on grouping and classifying.
You may also wish to review the basics of Puppet classes and configuration to help you understand
how groups and classes interact.
The process of adding a node to the console is demonstrated in the following video:


Installing Puppet
Use the puppet node install command to install PE components onto the new instances.

$ puppet node install --install-script=puppet-enterprise --keyfile=~/.ssh/mykey.pem --login=root ec2-50-19-207-181.compute-1.amazonaws.com


notice: Waiting for SSH response ...
notice: Waiting for SSH response ... Done
notice: Installing Puppet ...
puppetagent_certname: ec2-50-19-207-181.compute-1.amazonaws.com-ee049648-3647-0f93-782b-9f30e387f644
status: success

This command's options specify:

The PE installer script should be used.
The path to a private SSH key that can be used to log in to the VM, specified with the --keyfile
option. The install action uses SSH to connect to the host and so needs access to an SSH key.
For Amazon EC2 or GCE, point to the private key from the key pair you used to create the
instance. In most cases, the private key is in the ~/.ssh directory. (Note that for VMware, the
public key should have been loaded onto the template you used to create your virtual machine.)
The local user account used to log in, specified with the --login option.
For the command's argument, specify the name of the node on which you're installing Puppet
Enterprise.
For the default installation, the install action uses the installation packages provided by Puppet
Labs and stored in Amazon S3 storage. You can also specify packages located on a local host or on
a share in your local network. Use puppet help node install or puppet man node to see more
details.
In addition to these default configuration options, you can specify a number of additional options
to control how and what we install on the host. You can control the version of Facter to install, the
specific answers file to use to configure Puppet Enterprise, the certificate name of the agent to be
installed, and a variety of other options. To see a full list of the available options, use the puppet
help node install command.
The process of installing Puppet on a node is demonstrated in detail in the following video:

Classifying and Installing Puppet in One Command


Using node init
Rather than using multiple commands to classify and install Puppet on a node, there are a couple of
other options that combine actions into a wrapper command. Note that you will need access to
the PE installer, which is typically specified with the --installer-payload argument.
If a node has been prepared to remotely sign certificates, you can use the init action, which will
install Puppet, classify the node, and sign the certificate in one step.
Note - With classify and init, you need to specify the --insecure option because the PE console
uses the internal certificate name, pe-internal-dashboard, which fails verification because it
doesn't match the host name of the host where the console is running.

For example:
$ puppet node init \
--insecure \
--node-group=appserver_pool \
--enc-server=localhost \
--enc-port=443 \
--enc-auth-user=console \
--enc-auth-passwd=password \
--install-script=puppet-enterprise \
--keyfile=~/.ssh/mykey.pem \
--login=root \
ec2-50-19-207-181.compute-1.amazonaws.com

The invocation above will connect to the console, classify the node in the appserver_pool group,
and then install Puppet Enterprise on this node.
Using autosign.conf
Alternatively, if your CA puppet master has the autosign setting configured, it can sign certificates
automatically. While this can greatly simplify the process, there are some security issues associated
with going this route, so be sure you are comfortable with the process and know the risks.
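As a sketch of what that setup can look like (the paths are PE defaults; treat the whitelist pattern as an example, and note that broad globs carry the security risks mentioned above):

```
# /etc/puppetlabs/puppet/puppet.conf, on the CA puppet master
[master]
    autosign = /etc/puppetlabs/puppet/autosign.conf

# /etc/puppetlabs/puppet/autosign.conf: one certname or glob per line
*.compute-1.amazonaws.com
```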
Next: Sample Cloud Provisioning Workow

A Day in the Life of a Puppet-Powered Cloud Sysadmin
Tom is a sysadmin for CloudWidget.com, a company that provides web-based application services.
They use a three-tier application architecture with the following types of nodes:
1. A web front-end load balancer
2. A pool of application servers behind the load balancer
3. A database server that serves the application servers
All of these nodes are virtual machines running on a VMware ESX server. The nodes are all currently
being managed with Puppet Enterprise. Using PE, the application servers have all been assigned to
a group which applies a class cloudwidget_appserv.
CloudWidget is growing rapidly, so Tom is not surprised when he checks his inbox and finds
several messages from users complaining about sluggish performance. He checks his monitoring
tool and, sure enough, the load is too high on his application servers and performance is suffering.
It's time to add a new node to the application server pool to help better distribute the load.
Tom grabs a cup of coffee and fires up his terminal. He starts by creating a new virtualized node
with puppet node_vmware create. This gives him a new node with the following characteristics:
a complete OS already installed
whatever is contained in the VMware template he specified as an option of the create action
does not have Puppet installed on it yet
not yet configured to function as a CloudWidget application server
When Tom first configured Puppet, he set up his workstation with the ability to remotely sign
certificates. He did this by creating a certificate/key pair and then modifying the CA's auth.conf to
allow that certificate to perform authentication tasks. (To find out more about how to do this, see
the auth.conf documentation and the HTTP API guide.)
This allows Tom to use puppet node init to complete the process of getting the new node up and
running. Puppet node init is a wrapper command that will install Puppet, classify the node,
and sign the certificate (puppet certificate sign or puppet cert sign). Classifying the node
tells Puppet which configuration groups and classes should be applied to the node. In this case,
applying the cloudwidget_appserv class configures the node with all the settings, files, and
database hooks needed to create a fully configured, ready-to-run app server tailored to the
CloudWidget environment.
Note: if Tom had not done the prep work needed for remote signing of certificates, he could run the
puppet node install, puppet node classify, and puppet cert sign commands separately.
Now Tom needs to run Puppet on the new node in order to apply the configuration. He could wait
30 minutes for Puppet to run automatically, but instead he SSHs into the machine and runs Puppet
interactively with puppet agent --test.
At this point Tom now has:
A new virtual machine node with Puppet installed.
A node with a signed certificate that is an authorized member of the CloudWidget deployment.
Puppet has fully configured the node with all of the bits and pieces needed to go live and start
doing real work as a fully functioning CloudWidget application server.
The CloudWidget infrastructure is now scaled and running at acceptable loads. Tom leans back and
takes a sip of his coffee. It's still hot.
Next: The pe_accounts::user Type

The pe_accounts::user Type


This defined type is part of pe_accounts, a pre-built Puppet module that ships with Puppet
Enterprise for use in your own manifests.

NOTE: The pe_accounts module is not yet supported on Windows nodes.


The pe_accounts::user type declares a user account. It offers several benefits over Puppet's core
user type:
It can create and manage the user's home directory as a Puppet resource.
It creates and manages a primary group with the same name as the user, even on platforms
where this is not the default.
It can manage a set of SSH public keys for the user.
It can easily lock the user's account, preventing all logins.
Puppet Enterprise uses this type internally to manage some of its own system users, but also
exposes it as a public interface.
The pe_accounts::user type can be used on all of the platforms supported by Puppet Enterprise
(except Windows).
Note: In Puppet Enterprise 1.2, this type was called accounts::user. It was renamed in PE 2 to
avoid namespace conflicts. If you are upgrading and wish to continue using the older name, the
upgrader can install a wrapper module to enable it. See the chapter on upgrading for more details.

Usage Example
# /etc/puppetlabs/puppet/modules/site/manifests/users.pp
class site::users {
  # Declaring a dependency: we require several shared groups from the
  # site::groups class (see below).
  Class[site::groups] -> Class[site::users]

  # Setting resource defaults for user accounts:
  Pe_accounts::User {
    shell => '/bin/zsh',
  }

  # Declaring our pe_accounts::user resources:
  pe_accounts::user {'puppet':
    locked  => true,
    comment => 'Puppet Service Account',
    home    => '/var/lib/puppet',
    uid     => '52',
    gid     => '52',
  }
  pe_accounts::user {'sysop':
    locked  => false,
    comment => 'System Operator',
    uid     => '700',
    gid     => '700',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe sysop+moduledevkey@puppetlabs.com'],
  }
  pe_accounts::user {'villain':
    locked  => true,
    comment => 'Test Locked Account',
    uid     => '701',
    gid     => '701',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe villain+moduledevkey@puppetlabs.com'],
  }
  pe_accounts::user {'jeff':
    comment => 'Jeff McCune',
    groups  => ['admin', 'sudonopw'],
    uid     => '1112',
    gid     => '1112',
    sshkeys => [
      'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe jeff+moduledevkey@puppetlabs.com',
      'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe jeff+moduledevkey2@puppetlabs.com',
      'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe jeff+moduledevkey3@puppetlabs.com',
      'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe jeff+moduledevkey4@puppetlabs.com',
    ],
  }
  pe_accounts::user {'dan':
    comment => 'Dan Bode',
    uid     => '1109',
    gid     => '1109',
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe dan+moduledevkey@puppetlabs.com'],
  }
  pe_accounts::user {'nigel':
    comment => 'Nigel Kersten',
    uid     => '2001',
    gid     => '2001',
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe nigel+moduledevkey@puppetlabs.com'],
  }
}

# /etc/puppetlabs/puppet/modules/site/manifests/groups.pp
class site::groups {
  # Shared groups:
  Group { ensure => present, }

  group {'developer':
    gid => '3003',
  }
  group {'sudonopw':
    gid => '3002',
  }
  group {'sudo':
    gid => '3001',
  }
  group {'admin':
    gid => '3000',
  }
}

Parameters
Many of the type's parameters echo those of the standard user type.
name
The user's name. While limitations differ by operating system, it is generally a good idea to restrict
user names to 8 characters, beginning with a letter. Defaults to the resource's title.
ensure
Specifies whether the user and its primary group should exist. Valid values are present and
absent. Defaults to present. Note that when a user is created, a group with the same name as the
user is also created.
shell
The user's login shell. The shell must exist and be executable. Defaults to /bin/bash.
comment
A description of the user. Generally the user's full name. Defaults to the user's name.
home
The home directory of the user. Defaults to /home/<user's name>.
uid
The user's uid number. Must be specified numerically; defaults to being automatically determined
(undef).
gid
The gid of the primary group with the same name as the user. The pe_accounts::user type will
create and manage this group. Must be specified numerically; defaults to being automatically
determined (undef).
groups
An array of groups the user belongs to. The primary group should not be listed. Defaults to an
empty array.
membership
Whether specified groups should be considered the complete list (inclusive) or the minimum list
(minimum) of groups to which the user belongs. Valid values are inclusive and minimum; defaults
to minimum.
password
The user's password, in whatever encrypted format the local machine requires. Be sure to enclose
any value that includes a dollar sign ($) in single quotes. Defaults to '!!', which prevents the
user from logging in with a password.
locked
Whether the user should be prevented from logging in. Set this to true for system users and users
whose login privileges have been revoked. Valid values are true and false; defaults to false.
sshkeys
An array of SSH public keys associated with the user. Unlike with the ssh_authorized_key type,
these should be complete public key strings that include the type and name of the key, exactly as
the key would appear in its id_rsa.pub or id_dsa.pub file. Defaults to an empty array.
managehome
A boolean parameter that dictates whether or not a user's home directory should be managed by
the account type. If ensure is set to absent and managehome is true, the user's home directory will
be recursively deleted.
Next: The pe_accounts Class

The pe_accounts Class


This class is part of pe_accounts, a pre-built Puppet module included with Puppet Enterprise.


NOTE: pe_accounts is not yet supported on Windows nodes.


The pe_accounts class can do any or all of the following:
Create and manage a set of pe_accounts::user resources
Create and manage a set of shared group resources
Maintain a pair of rules in the sudoers file to grant privileges to the sudo and sudonopw groups
This class is designed for cases where your account data is maintained separately from your Puppet
manifests. This usually means one of the following is true:
The data is being read from a non-Puppet directory service or CMDB, probably with a custom
function.
The data is being maintained manually by a user who does not write Puppet code.
The data is being generated by an out-of-band process.
If your site's account data will be maintained manually by a sysadmin able to write Puppet code, it
will make more sense to maintain it as a normal set of pe_accounts::user and group resources,
although you may still wish to use the pe_accounts class to maintain sudoers rules.
To manage users and groups with the pe_accounts class, you must prepare a data store and
configure the class for the data store when you declare it.

Note: This class is assigned to the console's default group with no parameters, which will
prevent it from being redeclared with any configuration. To use the class, you must:
Unassign it from the default group in the console
Create a wrapper module that declares this class with the necessary parameters
Re-assign the wrapper class to whichever nodes need it

Usage Example
To use YAML files as a data store:

class {'pe_accounts':
  data_store => yaml,
}

To use a Puppet class as a data store (and manage sudoers rules):

class {'pe_accounts':
  data_store     => namespace,
  data_namespace => 'site::pe_accounts::data',
  manage_sudoers => true,
}

To manage sudoers rules without managing any users or groups:

class {'pe_accounts':
  manage_users   => false,
  manage_groups  => false,
  manage_sudoers => true,
}

Data Stores
Account data can come from one of two sources: a Puppet class that declares three variables, or a
set of three YAML files stored in /etc/puppetlabs/puppet/data.
Using a Puppet Class as a Data Store
This option is most useful if you are able to generate or import your user data with a custom
function, which may be querying from an LDAP directory or some other data source.
The Puppet class containing the data must have a name ending in ::data. (We recommend
site::pe_accounts::data.) This class must declare the following variables:
$users_hash should be a hash in which each key is the title of a pe_accounts::user resource
and each value is a hash containing that resource's attributes and values.
$groups_hash should be a hash in which each key is the title of a group and each value is a hash
containing that resource's attributes and values.
See below for examples of the data formats used in these variables.
When declaring the pe_accounts class to use data in a Puppet class, use the following attributes:

data_store     => namespace,
data_namespace => {name of class},

Using YAML Files as a Data Store

This option is most useful if your user data is being generated by an out-of-band process or is
being maintained by a user who does not write Puppet manifests.
When storing data in YAML, the following valid YAML files must exist in
/etc/puppetlabs/puppet/data:
pe_accounts_users_hash.yaml, which should contain an anonymous hash in which each key is
the title of a pe_accounts::user resource and each value is a hash containing that resource's
attributes and values.
pe_accounts_groups_hash.yaml, which should contain an anonymous hash in which each key is
the title of a group and each value is a hash containing that resource's attributes and values.
See below for examples of the data formats used in these variables.
When declaring the pe_accounts class to use data in YAML files, use the following attribute:

data_store => yaml,

Data Formats
This class uses three hashes of data to construct the pe_accounts::user and group resources it
manages.
THE USERS HASH

The users hash represents a set of pe_accounts::user resources. Each key should be the title of a
pe_accounts::user resource, and each value should be another hash containing that resource's
attributes and values.
PUPPET EXAMPLE

$users_hash = {
  sysop => {
    locked  => false,
    comment => 'System Operator',
    uid     => '700',
    gid     => '700',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe sysop+moduledevkey@puppetlabs.com'],
  },
  villain => {
    locked  => true,
    comment => 'Test Locked Account',
    uid     => '701',
    gid     => '701',
    groups  => ['admin', 'sudonopw'],
    sshkeys => ['ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe villain+moduledevkey@puppetlabs.com'],
  },
}

YAML EXAMPLE

---
sysop:
  locked: false
  comment: System Operator
  uid: '700'
  gid: '700'
  groups:
    - admin
    - sudonopw
  sshkeys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe sysop+moduledevkey@puppetlabs.com
villain:
  locked: true
  comment: Test Locked Account
  uid: '701'
  gid: '701'
  groups:
    - admin
    - sudonopw
  sshkeys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwLBhQefRiXHSbVNZYKu2o8VWJjZJ/B4LqICXuxhiiNSCmL8j+5zE/VLPIMe villain+moduledevkey@puppetlabs.com

THE GROUPS HASH

The groups hash represents a set of shared group resources. Each key should be the title of a
group resource, and each value should be another hash containing that resource's attributes and
values.
PUPPET EXAMPLE

$groups_hash = {
  developer => {
    gid    => 3003,
    ensure => present,
  },
  sudonopw => {
    gid    => 3002,
    ensure => present,
  },
  sudo => {
    gid    => 3001,
    ensure => present,
  },
  admin => {
    gid    => 3000,
    ensure => present,
  },
}
YAML EXAMPLE

---
developer:
  gid: "3003"
  ensure: "present"
sudonopw:
  gid: "3002"
  ensure: "present"
sudo:
  gid: "3001"
  ensure: "present"
admin:
  gid: "3000"
  ensure: "present"

Parameters
manage_groups
Specifies whether or not to manage a set of shared groups, which can be used by all
pe_accounts::user resources. If true, your data store must define these groups in the
$groups_hash variable or the pe_accounts_groups_hash.yaml file. Allowed values are true and
false; defaults to true.
manage_users
Specifies whether or not to manage a set of pe_accounts::user resources. If true, your data store
must define these users in the $users_hash variable or the pe_accounts_users_hash.yaml file.
Allowed values are true and false; defaults to true.
manage_sudoers
Specifies whether or not to add sudo rules to the node's sudoers file. If true, the class will add
%sudo and %sudonopw groups to the sudoers file and give them full sudo and passwordless sudo
privileges, respectively. You will need to make sure that the sudo and sudonopw groups exist in the
groups hash, and that your chosen users have those groups in their groups arrays. Managing
sudoers is not supported on Solaris.
Allowed values are true and false; defaults to false.
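Conceptually, the rules this manages are equivalent to the following sudoers entries (shown for illustration only; the exact lines the module writes may differ):

```
%sudo     ALL=(ALL) ALL
%sudonopw ALL=(ALL) NOPASSWD: ALL
```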
data_store
Specifies the data store to use for accounts and groups.
When set to namespace, data will be read from the Puppet class specified in the data_namespace
parameter. When set to yaml, data will be read from specially-named YAML files in the
/etc/puppetlabs/puppet/data directory. (If you have changed your $confdir, it will look in
$confdir/data.) Example YAML files are provided in the ext/data/ directory of this module.
Allowed values are yaml and namespace; defaults to namespace.
data_namespace
Specifies the Puppet namespace from which to read data. This must be the name of a Puppet class,
and must end with ::data (we recommend using site::pe_accounts::data); the class will
automatically be declared by the pe_accounts class. The class cannot have any parameters, and
must declare variables named:
$users_hash
$groups_hash
See the pe_accounts::data class included in this module (in manifests/data.pp) for an example;
see the data formats section for information on each hash's data structure.
Defaults to pe_accounts::data.
sudoers_path
Specifies the path to the sudoers file on this system. Defaults to /etc/sudoers.
Next: Maintenance: Maintaining the Console & Databases

Maintaining the Console & Databases


If PE's console becomes sluggish or begins taking up too much disk space, there are several
maintenance tasks that can improve its performance.

Pruning the Console Database with a Cron Job


For new PE installs (3.3 and later), a cron job, managed by a class in the
puppetlabs-pe_console_prune module, is installed that will prevent bloating in the console database
by deleting old data (mainly uploaded puppet run reports) after a set number of days. You can tweak
the parameters of this class as needed, primarily the prune_upto parameter, which sets the time to
keep records in the database. This parameter is set to 30 days by default.
However, to prevent users from deleting data without notice, the cron job is not installed on
upgrades from versions earlier than 3.3.
To prevent bloating in the console database, we recommend adding the pe_console_prune class to
the puppet_console group after upgrading to PE 3.3.
To access the prune_upto parameter:
1. In the PE console, navigate to the Groups page.


2. Select the puppet_console group.
3. From the puppet_console group page, click the Edit button.
4. From the class list, select pe_console_prune.
5. From the pe_console_prune parameters dialog, edit the parameters as needed. The
prune_upto parameter is at the bottom of the list.
6. Click the Done button when finished.

Restarting the Background Tasks


The console uses several worker services to process reports in the background, and it displays a
running count of pending tasks in the upper left corner of the interface:

If the number of pending tasks appears to be growing linearly, the background task processes may
have died and left invalid PID files. To restart the worker tasks, run:
$ sudo /etc/init.d/pe-puppet-dashboard-workers restart

The number of pending tasks shown in the console should start decreasing rapidly after restarting
the workers.

Optimizing the Database


PostgreSQL should have autovacuum=on set by default. If you're having issues with the database
growing too large and unwieldy, make sure this setting did not get turned off. In most cases, this
should suffice. In some cases, more heavyweight maintenance measures may be needed (e.g., in
cases of data corruption from hardware failures). To help with this, PE provides a rake task that
performs advanced database maintenance.
This task, rake db:raw:optimize[mode], runs in three modes:
reindex mode will run the REINDEX DATABASE command on the console database. This is also
the default mode if no mode is specified.
vacuum mode will run the VACUUM FULL command on the console database.


reindex+vacuum will run both of the above commands on the console database.
To run the task, change your working directory to /opt/puppet/share/puppet-dashboard and
make sure your PATH variable contains /opt/puppet/bin (or use the full path to the rake binary).
Then run the task rake db:raw:optimize[mode]. You can disregard any error messages about
insufficient privileges to vacuum certain system objects, because these objects should not require
vacuuming. If you believe they do, you can vacuum them manually by logging in to psql (or your
tool of choice) as a database superuser.
Please note that you should have at least as much free space available as is currently in use, on the
partition where your PostgreSQL data is stored, prior to attempting a full vacuum. If you are using
the PE-vendored PostgreSQL, the data is kept in /opt/puppet/var/lib/pgsql/.
The PostgreSQL docs contain more detailed information about vacuuming and reindexing.
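Putting the steps above together, a full maintenance run might look like the following. This is a
sketch that assumes the PE default paths described above; adjust them if your installation differs.

```shell
# Sketch of a full console-database maintenance run (PE default paths assumed).
cd /opt/puppet/share/puppet-dashboard
export PATH=/opt/puppet/bin:$PATH
# Reindex and fully vacuum the console database; expect the console to be
# unavailable while this runs.
sudo /opt/puppet/bin/rake db:raw:optimize[reindex+vacuum]
```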

Cleaning Old Reports


Agent node reports will build up over time in the console's database. If you wish to delete the
oldest reports for performance, storage, or policy reasons, you can use the reports:prune rake
task.
For example, to delete reports more than one month old:
$ sudo /opt/puppet/bin/rake \
-f /opt/puppet/share/puppet-dashboard/Rakefile \
RAILS_ENV=production \
reports:prune upto=1 unit=mon

Although this task should be run regularly as a cron job, the actual frequency at which you set it to
run will depend on your site's policies.
If you run the reports:prune task without any arguments, it will display further usage instructions.
The available units of time are yr, mon, wk, day, hr, and min.
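For example, a system cron entry like the following would prune month-old reports every night.
The schedule and retention here are hypothetical; adjust both to match your site's policy.

```shell
# /etc/cron.d/pe-prune-reports -- hypothetical example.
# Every night at 02:10, delete console reports more than one month old.
10 2 * * * root /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production reports:prune upto=1 unit=mon
```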

Database Backups
You can back up and restore your PE databases by using the standard PostgreSQL tool, pg_dump.
Best practices recommend hourly local backups and nightly backups to a remote system for the
console, console_auth, and puppetdb databases, or as dictated by your company policy.
Providing comprehensive documentation about backing up and restoring PostgreSQL databases is
beyond the scope of this guide, but the following commands should provide enough guidance
to perform backups and restorations of your PE databases.
To back up the databases, run:

su - pe-postgres -s /bin/bash

pg_dump pe-puppetdb -f /tmp/pe-puppetdb.backup --create
pg_dump console -f /tmp/console.backup --create
pg_dump console_auth -f /tmp/console_auth.backup --create

To restore the databases, run:


su - pe-postgres -s /bin/bash

psql -f /tmp/pe-puppetdb.backup
psql -f /tmp/console.backup
psql -f /tmp/console_auth.backup

Changing the Console's Database User/Password


The console uses a database user account to access its PostgreSQL database. If this user's password
is compromised, or if it needs to be changed periodically, do the following:
1. Stop the pe-httpd service on the console server:

$ sudo /etc/init.d/pe-httpd stop

2. On the database server (which may or may not be the same as the console, depending on your
deployment's architecture) use the PostgreSQL administration tool of your choice to change the
user's password. With the standard psql client, you can do this with:

ALTER USER console PASSWORD '<new password>';


3. Edit /etc/puppetlabs/puppet-dashboard/database.yml on the console server and change the
password: line under common (or under production, depending on your configuration) to
contain the new password.
4. Start the pe-httpd service on the console server:

$ sudo /etc/init.d/pe-httpd start

You will use the same procedure to change the console_auth database user's password, except you
will need to edit both the /opt/puppet/share/console-auth/db/database.yml and
/opt/puppet/share/rubycas-server/config.yml files.
The same procedure is also used for the PuppetDB user's password, except you'll edit
/etc/puppetlabs/puppetdb/conf.d/database.ini and will restart the pe-puppetdb service.
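As a sketch, the PuppetDB variant of this procedure (after running the ALTER USER statement for
the PuppetDB role) might look like the following. The sed pattern assumes the password = <value>
line format used in PE's database.ini, and NEWPASSWORD is a placeholder; edit the file by hand if
your file's format differs.

```shell
# Sketch only: record the new password in PuppetDB's config and restart the
# service. NEWPASSWORD is a placeholder for the password set via ALTER USER.
sudo sed -i 's/^password = .*/password = NEWPASSWORD/' /etc/puppetlabs/puppetdb/conf.d/database.ini
sudo /etc/init.d/pe-puppetdb restart
```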

Changing PuppetDB's Parameters


PuppetDB parameters are set in the jetty.ini file, which is contained in the pe-puppetdb module.
Jetty.ini is managed by PE, so if you change any PuppetDB parameters directly in the file, those
changes will be overwritten on the next puppet run.
Instead, you should use the console to make changes to the parameters of the pe-puppetdb class.
For example, the PuppetDB performance dashboard requires the listen_address parameter to be
set to 0.0.0.0. So, in the console, you would edit the pe_puppetdb class so that the value of the
listen_address parameter is set to 0.0.0.0.

Warning: This procedure will enable insecure access to the PuppetDB instance on your
server.
If you are unfamiliar with editing class parameters in the console, refer to Editing Class Parameters
on Nodes.
Next: Troubleshooting the Installer

Back Up and Restore a Puppet Enterprise Installation

Once you have PE installed, we recommend that you keep regular backups of your PE infrastructure.
Regular backups allow you to recover from failed upgrades between versions, to troubleshoot
those upgrades, and to quickly recover in the case of system failures. The instructions in this doc
can also help you migrate your PE infrastructure from one set of nodes to another.
To perform a full backup and restore, you will:
1. Back Up Your Database and Puppet Enterprise Files
2. Purge the Puppet Enterprise Installation (Optional)
3. Restore Your Database and Puppet Enterprise Files
Back Up Your Database and Puppet Enterprise Files
To properly back up your PE installation, the following databases and PE files should be backed up.
/etc/puppetlabs/
/opt/puppet/share/puppet-dashboard/certs
The PuppetDB, console, and console_auth databases
The modulepath, if you've configured it to be outside the PE default of modulepath =
/etc/puppetlabs/puppet/modules:/opt/puppet/share/puppet/modules in puppet.conf.

Note: If you have any custom Simple RPC agents, you will want to back these up. These are
located in the libdir configured in /etc/puppetlabs/mcollective/server.cfg.
On a monolithic (all-in-one) install, the databases and PE les will all be located on the same node
as the puppet master.
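On a monolithic install, for example, the whole set could be captured with something like the
following sketch. The paths assume PE defaults, the archive location is a placeholder, and the
pg_dump commands mirror the ones shown in the Database Backups section; run as root.

```shell
# Hypothetical backup sketch for a monolithic install (run as root).
# Archive the PE configuration and the console certs...
tar -czf /tmp/pe-files-backup.tar.gz /etc/puppetlabs /opt/puppet/share/puppet-dashboard/certs
# ...then dump the three PE databases as the pe-postgres user.
su - pe-postgres -s /bin/bash -c '
  pg_dump pe-puppetdb -f /tmp/pe-puppetdb.backup --create
  pg_dump console -f /tmp/console.backup --create
  pg_dump console_auth -f /tmp/console_auth.backup --create'
```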
On a split install (master, console, PuppetDB/PostgreSQL each on a separate node), they will be
located across the various servers assigned to these PE components.
/etc/puppetlabs/: different versions of this directory can be found on the server assigned to
the puppet master component, the server assigned to the console component, and the server
assigned to the PuppetDB/PostgreSQL component. You should back up each version.
/opt/puppet/share/puppet-dashboard/certs: located on the server assigned to the console
component.
The console and console_auth databases: located on the server assigned to the
PuppetDB/PostgreSQL component.
The PuppetDB database: located on the server assigned to the PuppetDB/PostgreSQL component.
Purge the Puppet Enterprise Installation (Optional)
If you're planning on restoring your databases and PE files to the same server(s), you'll want to first
fully purge your existing Puppet Enterprise installation.
PE contains an uninstaller script located at /opt/puppet/bin/puppet-enterprise-uninstaller.
You can also run it from the same directory as the installer script in the PE tarball you originally
downloaded. To do so, run sudo ./puppet-enterprise-uninstaller -p -d. The -p and -d flags
purge all configuration data and local databases.

Important: If you have a split install, you will need to run the uninstaller on each server that
has been assigned a component.
After running the uninstaller, ensure that /opt/puppet/ and /etc/puppetlabs/ are no longer
present on the system.
For more information about using the PE uninstaller, refer to Uninstalling Puppet Enterprise.
Restore Your Database and Puppet Enterprise Files
1. Using the standard install process (run the puppet-enterprise-installer script), reinstall the
same version of Puppet Enterprise that was installed for the files you backed up.
If you have your original answer file, use it during the installation process; otherwise, be sure to
set the same database passwords you used during initial installation.
If you need to review the PE installation process, check out Installing Puppet Enterprise.
2. Run the following commands, in the order specied:
a. service pe-httpd stop
b. service pe-puppet stop
c. service pe-mcollective stop
d. service pe-puppet-dashboard-workers stop
e. service pe-activemq stop
f. service pe-puppetdb stop
3. Purge any locks remaining on the database from the services that were running earlier with
service pe-postgresql restart.
4. Run the following commands, in the order specied:
a. su - pe-postgres -s /bin/bash -c "psql"
b. drop database console;
c. drop database console_auth;
d. drop database "pe-puppetdb";
e. \q

Note: During this process, you may encounter an error message similar to ERROR: role
"console" already exists. This error is safe to ignore.
5. Restore from your /etc/puppetlabs/ backup the following directories and files:
For a monolithic install, these files should all be replaced on the puppet master:
/etc/puppetlabs/puppet/puppet.conf
/etc/puppetlabs/puppet/ssl (fully replace with backup, do not leave existing ssl data)
/opt/puppet/share/puppet-dashboard/certs
The PuppetDB, console, and console_auth databases
The modulepath, if you've configured it to be something other than the PE default.


For a split install, these files and databases should be replaced on the various servers assigned
to these PE components.
/etc/puppetlabs/: as noted earlier, there is a different version of this directory for the
puppet master component, the console component, and the database support component
(i.e., PuppetDB and PostgreSQL). You should replace each version.
/opt/puppet/share/puppet-dashboard/certs: located on the server assigned to the console
component.
The console and console_auth databases: located on the server assigned to the database
support component.
The PuppetDB database: located on the server assigned to the database support component.
The modulepath: located on the server assigned to the puppet master
component.

Note: If you backed up any Simple RPC agents, you will need to restore these on the same
server assigned to the puppet master component.
6. Run chown -R pe-puppet:pe-puppet /etc/puppetlabs/puppet/ssl/.
7. Run chown -R puppet-dashboard /opt/puppet/share/puppet-dashboard/certs/.
8. Restore modules, manifests, hieradata, etc., if necessary. These are typically located in the
/etc/puppetlabs/ directory, but you may have configured them in another location.
9. Run /opt/puppet/sbin/puppetdb-ssl-setup -f. This script generates SSL certificates and
configuration based on the agent cert on your PuppetDB node.
10. Start all PE services you stopped in step 2. (For example, run service pe-httpd start.)

Note: During this process, you may get a message indicating that starting the dashboard
workers failed, but they have in fact started. You can verify this by running service
pe-puppet-dashboard-workers status.

Troubleshooting Installer Issues


Common Installer Problems
Here are some common problems that can cause an install to go awry.
Upgrades from 3.2.0 Can Cause Issues with Multi-Platform Agent Packages
Users upgrading from PE 3.2.0 to a later version of 3.x (including 3.2.3) will see errors when
attempting to download agent packages for platforms other than the master. After adding pe_repo
classes to the master for desired agent packages, errors will be seen on the subsequent puppet run
as PE attempts to access the requisite packages. The issue is caused by an incorrectly set parameter
of the pe_repo class. It can be fixed as follows:
1. In the console, navigate to the node page for each master node where you wish to add agent
packages.
2. On the master's node page, click Edit and then, for the pe_repo parameter, click Edit parameters.
3. Next to the base_path parameter, click Reset value.
4. Save the parameter change and update the node.
Once this has been done, you should now be able to add new agent platforms without issue.
A Note about Changes to puppet.conf that Can Cause Issues During Upgrades
If you manage puppet.conf with Puppet or a third-party tool like Git or r10k, you may encounter
errors after upgrading based on the following changes. Please assess these changes before
upgrading.
node_terminus Changes
In PE versions earlier than 3.2, node classification was configured with node_terminus=exec,
located in /etc/puppetlabs/puppet/puppet.conf. This caused the puppet master to execute a
custom shell script ( /etc/puppetlabs/puppet-dashboard/external_node) which ran a curl
command to retrieve data from the console.
PE 3.2 changes node classification in puppet.conf; the new configuration is
node_terminus=console. The external_node script is no longer available; thus,
node_terminus=exec no longer works.
With this change, we have improved security, as the puppet master can now verify the console.
The console certificate name is pe-internal-dashboard. The puppet master now finds the
console by reading the contents of /etc/puppetlabs/puppet/console.conf, which provides the
following:
[main]
server=<console hostname>
port=<console port>
certificate_name=pe-internal-dashboard

This file tells the puppet master where to locate the console and what name it should expect the
console to have. If you want to change the location of the console, you can edit console.conf,
but DO NOT change the certificate_name setting.
The rules for certificate-based authorization to the console are found in
/etc/puppetlabs/console-auth/certificate_authorization.yml on the console node. By
default, this file allows the puppet master read-write access to the console (based on its
certificate name) to request node data and submit report data.
Reports Changes
Report submission to the console no longer happens using reports=https. PE 3.2 changed the
setting in puppet.conf to reports=console. This change works in the same way as the
node_terminus changes described above.
Installing Without Internet Connectivity
By default, the master node hosts a repo that contains packages used for agent installation. When
you download the tarball for the master, the master also downloads the agent tarball for the same
platform and unpacks it in this repo.
When installing agents on a platform that is different from the master's platform, the install script
attempts to connect to the internet to download the appropriate agent tarball. If you will not have
internet access at the time of installation, you need to download the appropriate agent tarball in
advance and use the option below that corresponds with your particular deployment.
Option 1
If you would like to use the PE-provided repo, you can copy the agent tarball into the
/opt/staging/pe_repo directory on your master.
If you upgrade your server, you will need to perform this task again for the new version.
Option 2
If you already have a package management/distribution system, you can use it to install agents
by adding the agent packages to your repo. In this case, you can disable the PE-hosted repo
feature altogether by removing the pe_repo class from your master, along with any class that
starts with pe_repo::.
Option 3
If your deployment has multiple masters and you don't wish to copy the agent tarball to each
one, you can specify a path to the agent tarball. This can be done with an answer file, by setting
q_tarball_server to an accessible server containing the tarball, or by using the console to set
the base_path parameter of the pe_repo class to an accessible server containing the tarball.
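For example, the answer-file route might contain a line like the following; the hostname is a
placeholder for a server you control that hosts the tarballs.

```shell
# Hypothetical answer-file fragment: fetch agent tarballs from an internal
# server instead of the internet. The hostname is a placeholder.
q_tarball_server=packages.internal.example.com
```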
Is DNS Wrong?
If name resolution at your site isn't quite behaving right, PE's installer can go haywire.
Puppet agent has to be able to reach the puppet master server at one of its valid DNS names.
(Specifically, the name you identified as the master's hostname during the installer interview.)
The puppet master also has to be able to reach itself at the puppet master hostname you chose
during installation.
If you've split the master and console components onto different servers, they have to be able to
talk to each other as well.
Are the Security Settings Wrong?
The installer fails in a similar way when the system's firewall or security group is restricting the
ports Puppet uses.
Puppet communicates on ports 8140, 61613, and 443. If you are installing the puppet master
and the console on the same server, it must accept inbound traffic on all three ports. If you've
split the two components, the master must accept inbound traffic on 8140 and 61613 and the
console must accept inbound traffic on 8140 and 443.
If your puppet master has multiple network interfaces, make sure it is allowing traffic via the IP
address that its valid DNS names resolve to, not just via an internal interface.
Did You Try to Install the Console Before the Puppet Master?
If you are installing the console and the puppet master on separate servers and tried to install the
console first, the installer may fail.
How Do I Recover From a Failed Install?
First, fix any configuration problem that may have caused the install to fail. See above for a list of
the most common errors.
Next, run the uninstaller script. See the uninstallation instructions in this guide for full details.
After you have run the uninstaller, you can safely run the installer again.
Problems with PE when upgrading your OS
Upgrading your OS while PE is installed can cause problems with PE. To perform an OS upgrade,
you'll need to uninstall PE, perform the OS upgrade, and then reinstall PE as follows:
1. Back up your databases and other PE les.
2. Perform a complete uninstall (including the -pd uninstaller option).
3. Upgrade your OS.
4. Install PE.
5. Restore your backup.
Next: Troubleshooting Connections & Communications

Troubleshooting Connections Between Components

Below are some common issues that can prevent the different parts of Puppet Enterprise from
communicating with each other.

Agent Nodes Can't Retrieve Their Configurations


Is the Puppet Master Reachable From the Agents?
Although this would probably have caused a problem during installation, it's worth checking first.
You can check whether the master is reachable and active by trying:
$ telnet <puppet master's hostname> 8140

If the puppet master is alive and reachable, you'll get something like:
Trying 172.16.158.132...
Connected to screech.example.com.
Escape character is '^]'.

Otherwise, it will return something like name or service not known.


To fix this, make sure the puppet master server is reachable at the DNS name your agents know it
by and make sure that the pe-httpd service is running.
Can the Puppet Master Reach the Console?
The puppet master depends on the console for the names of the classes an agent node should get.
If it can't reach the console, it can't compile configurations for nodes.
Check the puppet agent logs on your nodes, or run puppet agent --test on one of them; if you
see something like err: Could not retrieve catalog from remote server: Error 400 on
SERVER: Could not find node 'agent01.example.com'; cannot compile, the master may be
failing to find the console.
To fix this, make sure that the console is alive by navigating to its web interface. If it can't be
reached, make sure DNS is set up correctly for the console server and ensure that the pe-httpd
service on it is running.
If the console is alive and reachable from the master but the master can't retrieve node info from it,
the master may be configured with the wrong console hostname. You'll need to:
Edit the reporturl setting in the master's /etc/puppetlabs/puppet/puppet.conf file to point
to the correct host.
Edit the ENC_BASE_URL variable in the master's
/etc/puppetlabs/puppet-dashboard/external_node file to point to the correct host.

Do Your Agents Have Signed Certificates?


Check the puppet agent logs on your nodes and look for something like the following:
warning: peer certificate won't be verified in this SSL session

If you see this, it means the agent has submitted a certificate signing request which hasn't yet been
signed. Run puppet cert list on the puppet master to see a list of pending requests, then run
puppet cert sign <NODE NAME> to sign a given node's certificate. The node should successfully
retrieve and apply its configuration the next time it runs.
Do Agents Trust the Master's Certificate?
Check the puppet agent logs on your nodes and look for something like the following:
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0
state=SSLv3 read server certificate B: certificate verify failed. This is often
because the time is out of sync on the server or client

This could be one of several things.


ARE AGENTS CONTACTING THE MASTER AT A VALID DNS NAME?

When you installed the puppet master role, you approved a list of valid DNS names to be included in
the master's certificate. Agents will ONLY trust the master if they contact it at one of THESE
hostnames.
To see the hostname agents are using to contact the master, run puppet agent --configprint
server. If this does not return one of the valid DNS names you chose during installation of the
master, edit the server setting in the agents' /etc/puppetlabs/puppet/puppet.conf files to point
to a valid DNS name.
If you need to reset your puppet master's valid DNS names, run the following:
$ /etc/init.d/pe-httpd stop
$ puppet cert clean <puppet master's certname>
$ puppet cert generate <puppet master's certname> --dns_alt_names=<comma-separated list of DNS names>
$ /etc/init.d/pe-httpd start
IS TIME IN SYNC ON YOUR NODES?

and was time in sync when your certificates were created?


Compare the output of date on your nodes. Then, run the following command on the puppet
master to check the validity dates of a given certificate:

$ openssl x509 -text -noout -in $(puppet master --configprint ssldir)/certs/<NODE NAME>.pem
If time is out of sync, get it in sync. Keep in mind that NTP can behave unreliably on virtual
machines.
If you have any certificates that aren't valid until the future:
Delete the certificate on the puppet master with puppet cert clean <NODE NAME>.
Delete the SSL directory on the offending agent with rm -rf $(puppet agent --configprint
ssldir).
Run puppet agent --test on that agent to generate a new certicate request, then sign that
request on the master with puppet cert sign <NODE NAME>.
DID YOU PREVIOUSLY HAVE AN UNRELATED NODE WITH THE SAME CERTNAME?

If a node re-uses an old node's certname and the master retains the previous node's certificate, the
new node will be unable to request a new certificate.
Run the following on the master:
$ puppet cert clean <NODE NAME>

Then, run the following on the agent node:


$ rm -rf $(puppet agent --configprint ssldir)
$ puppet agent --test

This should properly generate a new signing request.


Can Agents Reach the Filebucket Server?
Agents attempt to back up files to the filebucket on the puppet master, but they get the filebucket
hostname from the site manifest instead of their configuration file. If puppet agent is logging
could not back up errors, your nodes are probably trying to back up files to the wrong hostname.
These errors look like this:
err:
/Stage[main]/Pe_mcollective/File[/etc/puppetlabs/mcollective/server.cfg]/content:
change from {md5}778087871f76ce08be02a672b1c48bdc to
{md5}e33a27e4b9a87bb17a2bdff115c4b080 failed: Could not back up
/etc/puppetlabs/mcollective/server.cfg: getaddrinfo: Name or service not known

This usually happens when puppet master is installed with a certname that isn't its hostname. To fix
these errors, edit /etc/puppetlabs/puppet/manifests/site.pp on the puppet master so that the
following resource's server attribute points to the correct hostname:

# Define filebucket 'main':
filebucket { 'main':
  server => '<PUPPET MASTER'S DNS NAME>',
  path   => false,
}

Changing this on the puppet master will fix the error on all agent nodes.
Next: Troubleshooting the Console & Database Support

Finding Common Problems


Below are some common issues that can cause trouble with the databases that support the console.
Note: If you will be using your own instance of PostgreSQL (as opposed to the instance PE can
install) for the console and PuppetDB, it must be version 9.1 or higher.

Disabling/Enabling Live Management


Live management is enabled in the console by default when you install PE, but you can configure
your installation to disable it. In addition, live management can be disabled or enabled during
upgrades or normal operations.

PostgreSQL is Taking Up Too Much Space


PostgreSQL should have autovacuum=on set by default. If you're having memory issues from the
database growing too large and unwieldy, make sure this setting did not get turned off. PE also
includes a rake task for keeping the databases in good shape. The console maintenance page has
the details.

PostgreSQL Buffer Memory Causes PE Install to Fail


In some cases, when installing PE on machines with large amounts of RAM, the PostgreSQL
database will use more shared buffer memory than is available and will not be able to start. This will
prevent PE from installing correctly. The following error will be present in
/var/log/pe-postgresql/pgstartup.log:

FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=34427584512, 03600).


A suggested workaround is to tweak the machine's shmmax and shmall kernel settings before
installing PE. The shmmax setting should be set to approximately 50% of the total RAM; the shmall
setting can be calculated by dividing the new shmmax setting by the PAGE_SIZE. ( PAGE_SIZE can be
confirmed by running getconf PAGE_SIZE.)
Use the following commands to set the new kernel settings:
sysctl -w kernel.shmmax=<your shmmax calculation>
sysctl -w kernel.shmall=<your shmall calculation>
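The calculation described above can be scripted. The following sketch (assuming a Linux system,
where total RAM can be read from /proc/meminfo) derives both values and prints the
corresponding sysctl commands:

```shell
# Derive suggested shmmax/shmall values: shmmax is about half of total RAM
# in bytes; shmall is shmmax divided by the kernel page size.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)  # total RAM in kB
page_size=$(getconf PAGE_SIZE)
shmmax=$(( total_kb * 1024 / 2 ))
shmall=$(( shmmax / page_size ))
echo "sysctl -w kernel.shmmax=$shmmax"
echo "sysctl -w kernel.shmall=$shmall"
```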

Alternatively, you can also report the issue to the Puppet Labs customer support portal.

PuppetDB's Default Port Conflicts with Another Service


By default, PuppetDB communicates over port 8081. In some cases, this may conflict with existing
services (e.g., McAfee's ePO). You can work around this issue by installing with an answer file that
specifies a different port with q_puppetdb_port. For more information on using answer files, take a
look at the documentation for automated installs.

New Script to curl the PE Console ENC


In PE versions earlier than 3.2, you could run the external node script
( /etc/puppetlabs/puppet-dashboard/external_node) to reach the console ENC. PE 3.2 introduced
changes in console authentication and the external node script was removed. You can now curl the
console ENC using the following script (but be sure to replace <NODE NAME> with an actual node
name from your deployment):
CERT=$(puppet master --configprint hostcert)
CACERT=$(puppet master --configprint localcacert)
PRVKEY=$(puppet master --configprint hostprivkey)
CERT_OPTIONS="--cert ${CERT} --cacert ${CACERT} --key ${PRVKEY}"
CONSOLE=$(awk '/server =/{print $NF}' /etc/puppetlabs/puppet/console.conf)
MASTER="https://${CONSOLE}:443"
curl -k -X GET -H "Accept: text/yaml" ${CERT_OPTIONS} "${MASTER}/nodes/<NODE
NAME>"

Recovering from a Lost Console Admin Password


If you have forgotten the password of the console's initial admin user, you can create a new admin
user and use it to reset the original admin user's password.
On the console server, run the following commands:


$ cd /opt/puppet/share/puppet-dashboard
$ sudo /opt/puppet/bin/bundle exec /opt/puppet/bin/rake -s -f
/opt/puppet/share/console-auth/Rakefile db:create_user
USERNAME=<adminuser@example.com> PASSWORD=<password> ROLE="Admin"
RAILS_ENV=production

You can now log in to the console as the user you just created, and use the normal admin tools to
reset other users' passwords.

Puppet resource Generates Ruby Errors After Connecting


puppet apply to PuppetDB
Users who wish to use puppet apply (typically in deployments running masterless puppet) need to get it working with PuppetDB. If they do so by modifying puppet.conf to add the parameters storeconfigs_backend = puppetdb and storeconfigs = true (in both the [main] and [master] sections), then puppet resource will cease to function and will display a Ruby error. To avoid this, the correct way to get puppet apply connected to PuppetDB is to modify /etc/puppetlabs/puppet/routes.yaml to correctly define the behavior of puppet apply without affecting other functions. The PuppetDB manual has complete information and code samples.
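As a sketch, the routes.yaml for standalone puppet apply looks roughly like the following; this follows the standalone example in the PuppetDB 1.x documentation, so confirm the terminus names against the manual for your PuppetDB version:

```yaml
# /etc/puppetlabs/puppet/routes.yaml (sketch; verify against the PuppetDB manual)
apply:
  catalog:
    terminus: compiler
    cache: puppetdb
  resource:
    terminus: ral
    cache: puppetdb
  facts:
    terminus: facter
    cache: puppetdb_apply
```

Because the routes only apply to the apply application, puppet resource keeps its normal behavior.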

The Console Has Too Many Pending Tasks


The console either does not have enough worker processes, or the worker processes have died and
need to be restarted.
See here to restart the worker processes
See here to tune the number of worker processes

Old Pending Tasks Never Expire


In earlier versions of PE 3.x, failed delayed jobs did not get properly deleted. If a report for a job failed to upload (due to a problem with the report itself), a pending task would be displayed in the console in perpetuity. This has been fixed in PE 3.1. The Background Tasks pane in the console (upper left corner) now displays a red alert icon when a report fails to upload. Clicking the icon displays a view with information about the failure and a backtrace. You can stop the reports from showing the alert by marking them as read with the "Mark all as read" button.
Note, however, that this will not remove old failed/delayed jobs. You can clean these out by running /opt/puppet/bin/bundle exec rails runner 'Delayed::Job.delete_all("attempts >= 3")' on the console node. This command should be run from /opt/puppet/share/puppet-dashboard.

Console Account Confirmation Emails Have Incorrect Links


This can happen if the console's authentication layer thinks it lives on a hostname that isn't accessible to the rest of the world. The authentication system's hostname is automatically detected during installation, and the installer can sometimes choose an internal-only hostname.
To fix this:
1. Open the /etc/puppetlabs/console-auth/cas_client_config.yml file for editing. Locate the cas_host line, which is likely commented out:

authentication:
## Use this configuration option if the CAS server is on a host different
## from the console-auth server.
# cas_host: console.example.com:443

Change its value to contain the public hostname of the console server, including the correct port.
2. Open the /etc/puppetlabs/console-auth/config.yml file for editing. Locate the console_hostname line:

authentication:
console_hostname: console.example.com

Change its value if necessary. If you are serving the console on a port other than 443, be sure to
add the port. (For example: console.example.com:3000)

Correcting Broken URLs in the Console


In PE 3.0 and later, group names with periods in them (e.g., group.name) will generate a "page doesn't exist" error. To remove broken groups, you can use the following nodegroup:del rake task:
$ sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production nodegroup:del name={bad.group.name.here}

After you remove the broken group names, you can create new groups with valid names and re-add your nodes as needed.

Running a 3.x Master with 2.8.x Agents is not Supported


3.x versions of PE contain changes to the MCollective module that are not compatible with 2.8.x
agents. When running a 3.x master with a 2.8.x agent, it is possible that puppet will still continue to
run and check into the console, but this means puppet is running in a degraded state that is not
supported.

Next: Troubleshooting Orchestration

Tips & Solutions for Working with Puppet


Troubleshooting Puppet Core
Improving Profiling and Debugging of Slow Catalog Compilations
You can get the puppet master to log additional debug-level messages about how much time each step of its catalog compilation takes by setting profile to true in the agent's puppet.conf file (or by specifying --profile on the CLI).
If you're trying to profile, be sure to check the --logdest and --debug options on the master: debug must be on, and messages will go to the log destination, which defaults to syslog. If you're running via Passenger or another Rack server, these options will be set in the config.ru file.
To find the messages, look for the string PROFILE in the master's logs; each catalog request gets a unique ID, so you can tell which messages are for which request.
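The agent-side setting can be sketched as a puppet.conf fragment (remove it once you are done profiling):

```ini
# Agent's puppet.conf: log timing data for each catalog compilation
[agent]
profile = true
```

On the master, enable --debug and pick a --logdest when launching it directly; under Passenger, the equivalent options live in config.ru as described above.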
Increase PassengerMaxPoolSize to Decrease Response Times on Node Requests
In some cases, if you perform frequent puppet runs or manage a large number of nodes, Passenger may get backed up with requests. If this happens, you may see some agents reporting a "Could not retrieve catalog from remote server: execution expired" error. To determine if this is indeed a Passenger issue, run /opt/puppet/bin/passenger-status, and check "Requests in top-level queue". If this number is significantly higher than the number of workers you have, you may need to increase the PassengerMaxPoolSize.
To increase the PassengerMaxPoolSize, edit /etc/puppetlabs/httpd/conf.d/passenger-extra.conf, and increase that setting as needed. You must then restart the pe-httpd service by running sudo /etc/init.d/pe-httpd restart.
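The setting itself is a single Passenger directive in that file; the value below is illustrative, not a recommendation, and should be sized to your master's CPU and memory:

```apache
# /etc/puppetlabs/httpd/conf.d/passenger-extra.conf
# Raise the worker pool so queued catalog requests drain faster (value is illustrative)
PassengerMaxPoolSize 12
```

After restarting pe-httpd, re-run /opt/puppet/bin/passenger-status to confirm the top-level queue stays near zero.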
Next: Orchestration Overview

Troubleshooting the Orchestration Engine


Agents Not Appearing in Live Management
If you alter an agent's name in puppet.conf or make other changes that affect how an agent is represented on the network, you may find that while the console shows the agent certificate request and, subsequently, shows it in node views, you still cannot perform orchestration tasks on it using live management. In such cases, you can often force it to reconnect by waiting a minute or two and then running puppet agent -t until you see output indicating the MCollective server has picked up the node. The output should look similar to:
Notice: /Stage[main]/Pe_mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content:
--- /etc/puppetlabs/mcollective/server.cfg 2013-06-14 15:53:41.251544110 -0700
+++ /tmp/puppet-file20130624-42806-157zyeq 2013-06-24 14:45:09.865182380 -0700
@@ -7,7 +7,7 @@
loglevel = info
daemonize = 1
-identity = crm02
+identity = agent2.example.com
# Plugins
securityprovider = ssl
plugin.ssl_server_private = /etc/puppetlabs/mcollective/ssl/mcollective-private.pem

Tip: You should also run NTP to verify that time is in sync across your deployment.

Accessing the ActiveMQ Console


In some cases, you may need to access the ActiveMQ console to troubleshoot orchestration
messages, which are handled by the pe-activemq service. To do this, you will need to enable the
ActiveMQ console from within the PE console by editing the activemq_enable_web_console
parameter of the pe_mcollective::role::master class. The ActiveMQ node can be reached from
whichever node has the pe_mcollective::role::master class.
To activate the ActiveMQ console:
1. In the PE console, navigate to the Groups page.
2. Select the puppet_master group.
3. From the puppet_master group page, click the Edit button.
4. From the class list, select pe_mcollective::role::master.
5. From the pe_mcollective::role::master parameters dialog, set the
activemq_enable_web_console parameter to true.
6. Click the Done button when nished.
You can access the ActiveMQ console on port 8161.


AIX Agents Not Registering with Live Management After 3.0 Upgrade

In some cases, the MCollective service on AIX agents may be stuck in the "stopping" state. In such cases, the agents will not come back up in live management after the upgrade. You can restore their connection by forcing the pe-mcollective process to die, using the following commands on the agent:
lssrc -s pe-mcollective # note returned pid
kill -9 <pid-of-pe-mcollective>

Running a 3.x Master with 2.8.x Agents is not Supported


3.x versions of PE contain changes to the MCollective module that are not compatible with 2.8.x
agents. When running a 3.x master with a 2.8.x agent, it is possible that puppet will still continue to
run and check into the console, but this means puppet is running in a degraded state that is not
supported.
Next: Troubleshooting: Cloud Provisioner

Finding Common Problems


Below are some common issues with the Cloud Provisioner.
I'm Using Puppet Enterprise 3 and Some node Options Don't Work Anymore
Several command options were changed in PE 3. Specifically:
--pe-version has been removed from all node_<provider> commands. Users should manually select the desired source for packages and the installation script they wish to use.
--name for the node_vmware command has been changed to --vname.
--tags for the node_aws command has been changed to --instance_tags.
--group for the node_aws command has been changed to --security_group.
ENC Can't Communicate with Nodes
As of Puppet Enterprise 3.0, SSL is required for all communication between nodes and the ENC. The --enc-ssl option has been removed.
node_vmware and node_aws Aren't Working
If the cloud provisioning actions are failing with an "err: Missing required arguments" message, you need to create a ~/.fog file and populate it with the appropriate credentials.

Missing .fog File or Credentials


If you attempt to provision without creating a .fog file or without populating the file with appropriate credentials, you'll see the following error:
On VMware:
$ puppet node_vmware list
notice: Connecting ...
err: Missing required arguments: vsphere_username, vsphere_password,
vsphere_server
err: Try 'puppet help node_vmware list' for usage

On Amazon Web Services:


$ puppet node_aws list
err: Missing required arguments: aws_access_key_id,
aws_secret_access_key
err: Try 'puppet help node_aws list' for usage

Add the appropriate file or missing credentials to the existing file to resolve this issue.
Note that versions of fog newer than 0.7.2 may not be fully compatible with Cloud Provisioner. This issue is currently being investigated.
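A minimal ~/.fog file is YAML keyed by Ruby symbols under a :default block; every value below is a placeholder, and you only need the keys for the provider you use:

```yaml
# ~/.fog (sketch; all values are placeholders)
:default:
  # VMware vSphere credentials
  :vsphere_server: vc01.example.com
  :vsphere_username: cloudprovisioner
  :vsphere_password: password
  # Amazon Web Services credentials
  :aws_access_key_id: AKIAIOSFODNN7EXAMPLE
  :aws_secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Restrict the file's permissions (e.g., chmod 600 ~/.fog), since it holds credentials in plain text.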
Certificate Signing Issues
ACCESSING PUPPET MASTER ENDPOINT

For automatic signing to work, the computer running Cloud Provisioner (i.e., the CP control node) needs to be able to access the puppet master's certificate_status REST endpoint. This can be done in the master's auth.conf file as follows:
path /certificate_status
method save
auth yes
allow {certname}

Note that if the CP control node is on a machine other than the puppet master, it must be able to
reach the puppet master over port 8140.
GENERATING PER-USER CERTIFICATES

The CP control node needs to have a certificate that is signed by the puppet master's CA. While it's possible to use an existing certificate (if, say, the control node was or is an agent node), it's preferable to generate a per-user certificate for a clearer, more explicit security policy.
Start by running the following on the control node:

puppet certificate generate {certname} --ca-location remote

Then sign the certificate as usual on the master (puppet cert sign {certname}). Lastly, back on the control node again, run:

puppet certificate find ca --ca-location remote
puppet certificate find {certname} --ca-location remote

This should let you operate under the new certname when you run puppet commands with the --certname {certname} option.

Next: Troubleshooting Windows

Troubleshooting Puppet on Windows


Puppet Enterprise supports Windows agents, for both the core Puppet configuration management features and the orchestration features.
Windows agents can have different problems and symptoms than *nix agents. This page outlines some of the more common issues and their solutions.

Tips
Process Explorer
We recommend installing Process Explorer and configuring it to replace Task Manager. This will make debugging significantly easier.
Logging
As of Puppet 2.7.x, messages from the puppetd log file are available via the Windows Event Viewer (choose Windows Logs > Application). To enable debugging, stop the puppet service and restart it as:
c:\>sc stop puppet && sc start puppet --debug --trace

Puppet's Windows service component also writes to windows.log within the same log directory and can be used to debug issues with the service.

Common Issues
Installation
The Puppet MSI package will not overwrite an existing entry in the puppet.conf file. As a result, if you uninstall the package, then reinstall the package using a different puppet master hostname, Puppet won't actually apply the new value if the previous value still exists in <data directory>\etc\puppet.conf.
In general, we've taken the approach of preserving configuration data on the system when doing an upgrade, uninstall or reinstall.
To fully clean out a system, make sure to delete the <data directory>.
Similarly, the MSI will not overwrite the custom facts written to the PuppetLabs\facter\facts.d directory.
Unattended installation
Puppet may fail to install when trying to perform an unattended install from the command line, e.g.
msiexec /qn /i puppet.msi

To get troubleshooting data, specify an installation log, e.g. /l*v install.txt. Look in the log for
entries like the following:
MSI (s) (7C:D0) [17:24:15:870]: Rejecting product '{D07C45E2-A53E-4D7B-844F-F8F608AFF7C8}': Non-assigned apps are disabled for non-admin users.
MSI (s) (7C:D0) [17:24:15:870]: Note: 1: 1708
MSI (s) (7C:D0) [17:24:15:870]: Product: Puppet -- Installation failed.
MSI (s) (7C:D0) [17:24:15:870]: Windows Installer installed the product.
Product Name: Puppet. Product Version: 2.7.12. Product Language: 1033.
Manufacturer: Puppet Labs. Installation success or error status: 1625.
MSI (s) (7C:D0) [17:24:15:870]: MainEngineThread is returning 1625
MSI (s) (7C:08) [17:24:15:870]: No System Restore sequence number for this
installation.
Info 1625.This installation is forbidden by system policy. Contact your system
administrator.

If you see entries like this, you know you don't have sufficient privileges to install puppet. Make sure to launch cmd.exe with the "Run as Administrator" option selected, and try again.

File Paths
Path Separator
Make sure to use a semi-colon (;) as the path separator on Windows, e.g.,
modulepath=path1;path2
File Separator
In most resource attributes, the Puppet language accepts either forward- or backslashes as the file separator. However, some attributes absolutely require forward slashes, and some attributes absolutely require backslashes. See the relevant section of Writing Manifests for Windows for more information.
Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows

386/404

Backslashes
When backslashes appear in double-quoted strings ("), they must be escaped. When they appear in single-quoted strings ('), they may be escaped. For example, these are valid file resources:
file { 'c:\path\to\file.txt': }
file { 'c:\\path\\to\\file.txt': }
file { "c:\\path\\to\\file.txt": }

But this is an invalid path, because \p, \t, \f will be interpreted as escape sequences:
file { "c:\path\to\file.txt": }

UNC Paths
UNC paths are not currently supported. However, the path can be mapped as a network drive and
accessed that way.
Case-insensitivity
Several resources are case-insensitive on Windows (file, user, group). When establishing dependencies among resources, make sure to specify the case consistently. Otherwise, puppet may not be able to resolve dependencies correctly. For example, applying the following manifest will fail, because puppet does not recognize that FOOBAR and foobar are the same user:
file { 'c:\foo\bar':
ensure => directory,
owner => 'FOOBAR'
}
user { 'foobar':
ensure => present
}
...
err: /Stage[main]//File[c:\foo\bar]: Could not evaluate: Could not find user
FOOBAR
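A version that applies cleanly keeps the case consistent between the two resources (a sketch; the user and path are the same illustrative names as above):

```puppet
# Consistent case lets Puppet resolve the file's dependency on the user
user { 'foobar':
  ensure => present,
}

file { 'c:\foo\bar':
  ensure => directory,
  owner  => 'foobar',   # matches the user resource's title exactly
}
```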

Diffs
Puppet does not show diffs on Windows (e.g., puppet agent --show_diff) unless a third-party diff utility has been installed (e.g., msys, GNU diff, cygwin, etc.) and the diff property has been set appropriately.

Resource Errors and Quirks


File
Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows

387/404

If the owner and/or group are specified in a file resource on Windows, the mode must also be specified. So this is okay:
file { 'c:/path/to/file.bat':
ensure => present,
owner => 'Administrator',
group => 'Administrators',
mode => 0770
}

But this is not:


file { 'c:/path/to/file.bat':
ensure => present,
owner => 'Administrator',
group => 'Administrators',
}

The latter case will remove any permissions the Administrators group previously had to the file, resulting in effective permissions of 0700. And since puppet runs as a service under the SYSTEM account, not Administrator, Puppet itself will not be able to manage the file the next time it runs!
To get out of this state, have Puppet execute the following (with an exec resource) to reset the file permissions:
takeown /f c:/path/to/file.bat && icacls c:/path/to/file.bat /reset
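Wrapped in an exec resource, that recovery command might look like this (the file path is the same illustrative one as above):

```puppet
# Sketch: reset ownership and ACLs so Puppet, running as SYSTEM, can manage the file again
exec { 'reset-file-acls':
  command => 'cmd.exe /c takeown /f c:\path\to\file.bat && icacls c:\path\to\file.bat /reset',
  path    => $::path,
}
```

Remove the exec once the file is recovered, and re-declare the file resource with an explicit mode.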

Exec
When declaring a Windows exec resource, the path to the resource typically depends on the
%WINDIR% environment variable. Since this may vary from system to system, you can use the path
fact in the exec resource:
exec { 'cmd.exe /c echo hello world':
path => $::path
}

Shell Builtins
Puppet does not currently support a shell provider on Windows, so executing shell builtins directly
will fail:
exec { 'echo foo':
path => 'c:\windows\system32;c:\windows'
Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows

388/404

}
...
err: /Stage[main]//Exec[echo foo]/returns: change from notrun to 0 failed:
Could not find command 'echo'

Instead, wrap the builtin in cmd.exe:

exec { 'cmd.exe /c echo foo':


path => 'c:\windows\system32;c:\windows'
}

Or, better still, use the tip from above:


exec { 'cmd.exe /c echo foo':
path => $::path
}

Powershell
By default, powershell enforces a restricted execution policy which prevents the execution of
scripts. Consequently, make sure to specify the appropriate execution policy in the powershell
command:
exec { 'test':
command => 'powershell.exe -executionpolicy remotesigned -file C:\test.ps1',
path => $::path
}

Package
The source of an MSI package must be a file on either a local filesystem or on a network mapped drive. It does not support URI-based sources, though you can achieve a similar result by defining a file whose source is the puppet master and then defining a package whose source is the local file.
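That two-resource pattern might look like the following sketch; the module path, file names, and package title are all illustrative, and the package title must match the MSI's product name:

```puppet
# Stage the MSI from the master, then install it from the local copy
file { 'c:/temp/7z920.msi':
  ensure => present,
  source => 'puppet:///modules/sevenzip/7z920.msi',
}

package { '7-Zip 9.20':
  ensure  => installed,
  source  => 'c:/temp/7z920.msi',
  require => File['c:/temp/7z920.msi'],
}
```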
Service
Windows services support a short name and a display name. Make sure to use the short name in puppet manifests. For example, use wuauserv, not "Automatic Updates". You can use sc query to get a list of services and their various names.
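For example, managing Automatic Updates by its short name looks like this:

```puppet
# Use the service's short name; 'Automatic Updates' is only the display name
service { 'wuauserv':
  ensure => running,
}
```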

Error Messages
Error: Could not connect via HTTPS to https://forge.puppetlabs.com / Unable to
verify the SSL certificate / The certificate may not be signed by a valid CA / The
CA bundle included with OpenSSL may not be valid or up to date
Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows

389/404

This can occur when you run the puppet module subcommand on newly provisioned Windows nodes.
The Puppet Forge uses an SSL certificate signed by the GeoTrust Global CA certificate. Newly provisioned Windows nodes may not have that CA in their root CA store yet.
To resolve this and enable the puppet module subcommand on Windows nodes, do one of the following:
Run Windows Update and fetch all available updates, then visit https://forge.puppetlabs.com in your web browser. The web browser will notice that the GeoTrust CA is whitelisted for automatic download, and will add it to the root CA store.
Download the GeoTrust Global CA certificate from GeoTrust's list of root certificates and manually install it by running certutil -addstore Root GeoTrust_Global_CA.pem.
Service 'Puppet Agent' (puppet) failed to start. Verify that you have sufficient
privileges to start system services.
This can occur when installing puppet on a UAC system from a non-elevated account. Although
the installer displays the UAC prompt to install puppet, it does not elevate when trying to start
the service. Make sure to run from an elevated cmd.exe process when installing the MSI.
Cannot run on Microsoft Windows without the sys-admin, win32-process, win32-dir,
win32-service and win32-taskscheduler gems.
Puppet requires the indicated Windows-specific gems, which can be installed using gem install <gem>.
err: /Stage[main]//Scheduled_task[task_system]: Could not evaluate: The operation
completed successfully.
This error can occur when using a version of the win32-taskscheduler gem older than 0.2.1. Run gem update win32-taskscheduler.
err: /Stage[main]//Exec[C:/tmp/foo.exe]/returns: change from notrun to 0 failed:
CreateProcess() failed: Access is denied.
This error can occur when requesting an executable from a remote puppet master that cannot be executed. For a file to be executable on Windows, set the user/group executable bits accordingly on the puppet master (or alternatively, specify the mode of the file as it should exist on the Windows host):
file { "C:/tmp/foo.exe":
source => "puppet:///modules/foo/foo.exe",
}

Puppet Enterprise 3.3 User's Guide Troubleshooting Puppet on Windows

390/404

exec { 'C:/tmp/foo.exe':
logoutput => true
}

err: getaddrinfo: The storage control blocks were destroyed.


This error can occur when the agent cannot resolve a DNS name into an IP address (for example the server, ca_server, etc. properties). To verify that there is a DNS issue, check that you can run nslookup <dns>. If this fails, there is a problem with the DNS settings on the Windows agent (for example, the primary dns suffix is not set). See http://technet.microsoft.com/en-us/library/cc959322.aspx
err: /Stage[main]//Group[mygroup]/members: change from to Administrators failed:
Add OLE error code:8007056B in <Unknown> <No Description> HRESULT error
code:0x80020009 Exception occurred.
This error will occur when attempting to add a group as a member of another local group, i.e.
nesting groups. Although Active Directory supports nested groups for certain types of domain
group accounts, Windows does not support nesting of local group accounts. As a result, you
must only specify user accounts as members of a group.
err: /Stage[main]//Package[7zip]/ensure: change from absent to present failed: Execution of 'msiexec.exe /qn /norestart /i "c:\\7z920.exe"' returned 1620: This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package.
This error can occur when attempting to install a non-MSI package. Puppet only supports MSI packages. To install non-MSI packages, use an exec resource with an onlyif parameter.
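A hedged sketch of that workaround, where the installer path, its silent flag, and the file checked by onlyif are all illustrative:

```puppet
# Run a non-MSI installer only when its target executable is absent.
# The onlyif command exits 0 (run the exec) when the file does not exist,
# and exits 1 (skip the exec) when it does.
exec { 'install-7zip':
  command => 'cmd.exe /c c:\temp\7z920.exe /S',
  path    => $::path,
  onlyif  => 'cmd.exe /c if exist "c:\Program Files\7-Zip\7z.exe" (exit 1) else (exit 0)',
}
```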
err: Could not request certificate: The certificate retrieved from the master does
not match the agent's private key.
This error is usually a sign that the master has already issued a certificate to the agent. This can occur if the agent's SSL directory is deleted after it has retrieved a certificate from the master, or when running the agent in two different security contexts. For example, running puppet agent as a service and then trying to run puppet agent from the command line with non-elevated security. Specifically, this would happen if you've selected "Start Command Prompt with Puppet" but did not elevate privileges using "Run as Administrator".
err: Could not evaluate: Could not retrieve information from environment
production source(s) puppet://puppet.domain.com/plugins.
This error will be generated when a Windows agent does a pluginsync from the Puppet master server, when the latter does not contain any plugins. Note that pluginsync is enabled by default on Windows. This is a known bug in 2.7.x; see https://projects.puppetlabs.com/issues/2244.


err: Could not send report: SSL_connect returned=1 errno=0 state=SSLv3 read server
certificate B: certificate verify failed. This is often because the time is out of
sync on the server or client.
Windows agents that are part of an Active Directory domain should automatically have their time
synchronized with AD. For agents that are not part of an AD domain, you may need to enable
and add the Windows time service manually:
w32tm /register
net start w32time
w32tm /config /manualpeerlist:<ntpserver> /syncfromflags:manual /update
w32tm /resync

err: You cannot service a running 64-bit operating system with a 32-bit version of
DISM. Please use the version of DISM that corresponds to your computer's
architecture.
As described in the Installation Guide, 64-bit versions of Windows will redirect all file system access from %windir%\system32 to %windir%\SysWOW64 instead. When attempting to configure Windows roles and features using dism.exe, make sure to use the 64-bit version. This can be done by executing c:\windows\sysnative\dism.exe, which will prevent file system redirection. See https://projects.puppetlabs.com/issues/12980
Error: Could not parse for environment production: Syntax error at =; expected }
This error will usually occur if puppet apply -e is used from the command line and the supplied command is surrounded with single quotes ('), which will cause cmd.exe to interpret any => in the command as a redirect. To solve this, surround the command with double quotes (") instead. See https://projects.puppetlabs.com/issues/20528.

Regenerating Certs and Security Credentials in Split Puppet Enterprise Deployments

Note: If you're visiting this page to remediate your Puppet Enterprise deployment due to CVE-2014-0160, a.k.a. Heartbleed, please see this announcement for additional information and links to more resources before using this guide. Before applying these instructions, please bear in mind that this is a non-trivial operation that contains some manual steps and will require you to replace certificates on every agent node managed by your puppet master.


Note: This page explains how to regenerate all certificates in a split PE deployment, that is, one where the puppet master, PuppetDB, and PE console components are all installed on separate servers. See this page for instructions on regenerating certificates in a monolithic PE deployment.

Overview
In some cases, you may find that you need to regenerate the SSL certificates and security credentials (private and public keys) that are generated by PE's built-in certificate authority (CA). For example, you may have a puppet master you need to move to a different network in your infrastructure, or you may find you need to regenerate all the certificates and security credentials in your infrastructure due to an unforeseen security vulnerability.
Regardless of your situation, regenerating your certs involves the following four steps (complete procedures follow below):
1. On your master, you'll clear the certs and security credentials, regenerate the CA, and then regenerate the certs and security credentials.
2. Next, you'll clear and regenerate certs and security credentials for PuppetDB.
3. Then, you'll clear and regenerate certs and security credentials for the PE console.
4. Lastly, you'll clear and regenerate certs and security credentials for all agent nodes.
Note that this process destroys the certificate authority and all other certificates. It is meant for use in the event of a total compromise of your site, or some other unusual circumstance. If you just need to replace a few agent certificates, you can use the puppet cert clean command on your puppet master and then follow step four for any agents that need to be replaced.

Step 1: Clear and Regenerate Certs on Your Puppet Master


On your puppet master:
1. Back up the /etc/puppetlabs/puppet/ssl/ directory. If something goes wrong, you may need to restore this directory so your deployment can stay functional. However, if you needed to regenerate your certs for security reasons and couldn't, you should contact Puppet Labs support as soon as you restore service, so we can help you secure your site.
2. Stop the puppet agent service with sudo puppet resource service pe-puppet ensure=stopped.
3. Stop the orchestration service with sudo puppet resource service pe-mcollective ensure=stopped.
4. Stop the puppet master service with sudo puppet resource service pe-httpd ensure=stopped.
5. Clear all certs from your master with sudo rm -rf /etc/puppetlabs/puppet/ssl/*.
6. Regenerate the CA by running sudo puppet cert list -a. You should see this message:

Notice: Signed certificate request for ca.

7. Generate the puppet master's new certs with sudo puppet master --no-daemonize --verbose.
8. When you see "Notice: Starting Puppet master <your Puppet and PE versions>", type CTRL + C.
9. Start the puppet master service with sudo puppet resource service pe-httpd ensure=running.
10. Start the puppet agent service with sudo puppet resource service pe-puppet ensure=running.

At this point:
You have a brand new CA certificate and key.
Your puppet master has a certificate from the new CA, and it can once again field new certificate requests.
The puppet master will reject any requests for configuration catalogs from nodes that haven't replaced their certificates (which, at this point, will be all of them except the master).
The puppet master can't serve catalogs even to agents that do have new certificates, since it can't communicate with the console and PuppetDB.
Orchestration and live management are down.

Step 2: Clear and Regenerate Certs for PuppetDB


On your PuppetDB server:
1. Back up the /etc/puppetlabs/puppet/ssl/ and /etc/puppetlabs/puppetdb/ssl/ directories. If something goes wrong, you may need to reinstate these directories so your deployment can stay functional. However, if you needed to regenerate your certs for security reasons and couldn't, you should contact Puppet Labs support as soon as you restore service, so we can help you secure your site.
2. Stop the puppet agent service with sudo puppet resource service pe-puppet ensure=stopped.
3. Stop the orchestration service with sudo puppet resource service pe-mcollective ensure=stopped.
4. Stop the PuppetDB service with sudo puppet resource service pe-puppetdb ensure=stopped.
5. Delete puppet agent's SSL credentials with sudo rm -rf /etc/puppetlabs/puppet/ssl/*.
6. Delete the SSL credentials from the PuppetDB SSL directory with sudo rm -rf /etc/puppetlabs/puppetdb/ssl/*.
7. Request an agent certificate from the CA puppet master with sudo puppet agent -t. The puppet master will autosign the request, and the puppet agent will fetch the certificate.
The puppet agent will also try to do a full Puppet run, which will fail. This is as expected, so don't worry about it.
If the master doesn't autosign the certificate in this step, you may have changed its autosign configuration. You'll need to manually sign the certificate (see below).
8. Regenerate the certs and security credentials for PuppetDB with sudo /opt/puppet/sbin/puppetdb-ssl-setup -f.
9. Start the PuppetDB service with sudo puppet resource service pe-puppetdb ensure=running.
10. Re-start the puppet agent service with sudo puppet resource service pe-puppet ensure=running.
Note: If you are using your own PostgreSQL database on a different server and have encrypted communications with it using SSL, or if you have encrypted communication between database instances for replication, you should consider regenerating your certificates and keys. For more information on doing this, see the PostgreSQL documentation.

At this point:
The PuppetDB server is now completely taken care of.
The puppet master can talk to PuppetDB again.
The puppet master can't serve catalogs to agents yet, since it still won't trust the console server.
Orchestration and live management are still down.

Step 3: Clear and Regenerate Certs for the PE Console


On your console server:
1. Back up the /etc/puppetlabs/puppet/ssl/ and /opt/puppet/share/puppet-dashboard/certs
directories. If something goes wrong, you may need to restore these directories so your
deployment can stay functional. However, if you needed to regenerate your certs for security
reasons and couldn't, you should contact Puppet Labs support as soon as you restore service, so
we can help you secure your site.
2. Stop the puppet agent service with sudo puppet resource service pe-puppet
ensure=stopped.
3. Stop the orchestration service with sudo puppet resource service pe-mcollective
ensure=stopped.
4. Stop the console service with sudo puppet resource service pe-httpd ensure=stopped.
5. Delete the puppet agent's SSL credentials with sudo rm -rf /etc/puppetlabs/puppet/ssl/*.
6. Request an agent certificate from the CA puppet master with sudo puppet agent -t. The puppet master will autosign the request, and the puppet agent will fetch the certificate.
The puppet agent will also try to do a full Puppet run, which will fail. This is expected, so don't worry about it.


If the master doesn't autosign the certificate in this step, you may have changed its autosign configuration. You'll need to manually sign the certificate (see below).
7. Navigate to the console certs directory with sudo cd /opt/puppet/share/puppet-dashboard/certs. Stay in this directory for the following steps.
8. Remove all the credentials in this directory with sudo rm -rf /opt/puppet/share/puppet-dashboard/certs/*.
9. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:create_key_pair.
10. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:request. The puppet master will autosign the request, and the script will fetch the certificate.
If the master doesn't autosign the certificate in this step, you may have changed its autosign configuration. You'll need to manually sign the certificate (see below).
11. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:retrieve.
12. Ensure the console can access the new credentials with sudo chown -R puppet-dashboard:puppet-dashboard /opt/puppet/share/puppet-dashboard/certs.
13. Restart the console service with sudo puppet resource service pe-httpd ensure=running.
14. Restart the puppet agent service with sudo puppet resource service pe-puppet ensure=running.
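The console-side commands in steps 8 through 14 can be sketched as a single dry-run script. This is only a sketch: the run helper prints each command instead of executing it (swap its body for "$@", run as root, and use a real unescaped glob to execute), and it assumes the default PE 3.x paths shown in the steps above.

```shell
#!/bin/sh
# Dry-run sketch of the console cert regeneration sequence.
# run() only prints each command; replace 'echo "+ $*"' with "$@"
# (running as root) to execute for real.
run() { echo "+ $*"; }

CERTS=/opt/puppet/share/puppet-dashboard/certs
RAKE=/opt/puppet/bin/rake

# Glob is escaped so the sketch prints it literally; a real script
# would use an unescaped * so the shell expands it.
run rm -rf "$CERTS"/\*
run "$RAKE" RAILS_ENV=production cert:create_key_pair
run "$RAKE" RAILS_ENV=production cert:request
run "$RAKE" RAILS_ENV=production cert:retrieve
run chown -R puppet-dashboard:puppet-dashboard "$CERTS"
run puppet resource service pe-httpd ensure=running
run puppet resource service pe-puppet ensure=running
```

Printing the commands first lets you review the sequence before touching a production console server.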

At this point:
The console server is now completely taken care of.
The puppet master can talk to the console again, and vice versa.
The puppet master can now serve catalogs to agents.
However, it will only trust agents that have replaced their certificates. The only agents that have replaced their certificates at this point are the puppet master node, the PuppetDB node, and the console node.
The console is usable, but because its SSL certificate has been replaced, your web browser may notice the change, assume it results from a malicious attack, and refuse to allow you access. If this happens, you may need to go into your browser's collection of cached certificates and delete the old cert. Details of this process are beyond the scope of this guide and will vary by browser and platform. (You can delay having to figure this out by temporarily using a different browser.)
Orchestration and live management may not immediately work, but they will start working again within about 30 minutes, as soon as both the puppet master server and the console node complete a puppet agent run. (The certificates used by MCollective and the ActiveMQ service are completely managed by Puppet, and don't have to be manually regenerated.)
On any of the nodes that are completely taken care of, you can start a successful agent run
with sudo puppet agent -t. Try it on your console and PuppetDB nodes to ensure it works
as expected.


Step 4: Clear and Regenerate Certs for PE Agents


To replace the certs on agents, you'll need to log into each agent node and do the following:
1. Stop the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet ensure=stopped. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
2. Stop the orchestration service. On *nix nodes, run sudo puppet resource service pe-mcollective ensure=stopped. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
3. Delete the agent's SSL directory. On *nix nodes, run sudo rm -rf /etc/puppetlabs/puppet/ssl/*. On Windows nodes, delete the $confdir\ssl directory, using the Administrator confdir. See here for more information on locating the confdir.
4. Restart the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet ensure=running. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
Once the puppet agent starts, it will automatically generate keys and request a new certificate from the CA puppet master.
5. If you are not using autosigning, you will need to sign each agent node's certificate request. You can do this with the PE console's request manager, or by logging into the CA puppet master server, running sudo puppet cert list to see pending requests, and running sudo puppet cert sign <NAME> to sign requests.
Once an agent node's new certificate is signed, it will fetch it automatically within a few minutes and begin a Puppet run. After a node has fetched its new certificate and completed a full Puppet run, it will once again appear in orchestration and live management. If, after waiting for a short time, you don't see the agent node in live management, use NTP to make sure time is in sync across your PE deployment. On Windows nodes, you may need to log into the node and check the status of the Marionette Collective service; sometimes it can hang while attempting to stop or restart.

Once you have regenerated all agents' certificates, everything should now be back to normal and fully functional under the new CA.

Regenerating Certs and Security Credentials in Monolithic Puppet Enterprise Deployments

Note: If you're visiting this page to remediate your Puppet Enterprise deployment due to CVE-2014-0160, a.k.a. Heartbleed, please see this announcement for additional information and links to more resources before using this guide. Before applying these instructions, please bear in mind that this is a non-trivial operation that contains some manual steps and will require you to replace certificates on every agent node managed by your puppet master.

Note: This page explains how to regenerate all certificates in a monolithic PE deployment, that is, one where the puppet master, PuppetDB, and PE console components are all installed on the same server. See this page for instructions on regenerating certificates in a split PE deployment.

Overview
In some cases, you may find that you need to regenerate the certificates and security credentials (private and public keys) generated by PE's built-in certificate authority (CA). For example, you may have a puppet master that you need to move to a different network in your infrastructure, or you may find that you need to regenerate all the certificates and security credentials in your infrastructure due to an unforeseen security vulnerability.
Regardless of your situation, regenerating your certificates involves the following four steps (complete procedures follow below):
1. On your master, you'll clear the certs and security credentials, regenerate the CA, and then regenerate the certs and security credentials.
2. Next, you'll clear and regenerate certs and security credentials for PuppetDB.
3. Then, you'll clear and regenerate certs and security credentials for the PE console.
4. Lastly, you'll clear and regenerate certs and security credentials for all agent nodes.
Note that this process destroys the certificate authority and all other certificates. It is meant for use in the event of a total compromise of your site, or some other unusual circumstance. If you just need to replace a few agent certificates, you can use the puppet cert clean command on your puppet master and then follow step four for any agent certs that need to be replaced.

Step 1: Clear and Regenerate Certificates on Your Puppet Master

On your monolithic puppet master:
1. Back up the /etc/puppetlabs/puppet/ssl/, /etc/puppetlabs/puppetdb/ssl/, and
/opt/puppet/share/puppet-dashboard/certs directories. If something goes wrong, you may
need to restore these directories so your deployment can stay functional. However, if you needed
to regenerate your certs for security reasons and couldn't, you should contact Puppet Labs
support as soon as you restore service so we can help you secure your site.
2. Stop the puppet agent service with sudo puppet resource service pe-puppet
ensure=stopped.
3. Stop the orchestration service with sudo puppet resource service pe-mcollective
ensure=stopped.

4. Stop the puppet master service with sudo puppet resource service pe-httpd
ensure=stopped.
5. Clear all certs from your master with sudo rm -rf /etc/puppetlabs/puppet/ssl/*.
6. Regenerate the CA by running sudo puppet cert list -a. You should see this message:
Notice: Signed certificate request for ca.
7. Generate the puppet master's new certs with sudo puppet master --no-daemonize --verbose.
8. When you see Notice: Starting Puppet master <your Puppet and PE versions>, type CTRL
+ C.
9. Start the puppet master service with sudo puppet resource service pe-httpd
ensure=running.
10. Start the puppet agent service with sudo puppet resource service pe-puppet
ensure=running.

At this point:
You have a brand new CA certificate and key.
Your puppet master has a certificate from the new CA, and it can once again field new certificate requests.
The puppet master will reject any requests for configuration catalogs from nodes that haven't replaced their certificates (which, at this point, will be all of them except the master).
The puppet master can't serve catalogs even to agents that do have new certificates, since it can't communicate with the console and PuppetDB.
Orchestration and live management are down.
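After Step 1, you can sanity-check the new CA by inspecting its subject and validity dates with openssl; a freshly regenerated CA's notBefore date should be close to the time you ran puppet cert list -a. The demo below generates a throwaway self-signed cert so it can run anywhere; on a real master, point -in at the actual CA cert instead (by default /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem in PE 3.x — an assumption about your layout, so verify the path first).

```shell
# Demo: create a throwaway self-signed cert, then inspect it the same
# way you would inspect the regenerated PE CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" \
  -keyout /tmp/demo_key.pem -out /tmp/demo_crt.pem 2>/dev/null

# On a real master:
#   sudo openssl x509 -noout -subject -dates \
#     -in /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem
openssl x509 -noout -subject -dates -in /tmp/demo_crt.pem
```

The output shows the certificate subject plus notBefore/notAfter lines; exact formatting varies slightly between openssl versions.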

Step 2: Clear and Regenerate Certs for PuppetDB


On your monolithic puppet master:
1. Stop the PuppetDB service with sudo puppet resource service pe-puppetdb ensure=stopped.
2. Clear the certs and security credentials from the PuppetDB SSL directory with sudo rm -rf
/etc/puppetlabs/puppetdb/ssl/*.
3. Regenerate the certs and security credentials for PuppetDB with sudo
/opt/puppet/sbin/puppetdb-ssl-setup -f.
4. Start the PuppetDB service with sudo puppet resource service pe-puppetdb ensure=running.

At this point:
The puppet master can talk to PuppetDB again.
The puppet master can't serve catalogs to agents yet, since it still won't trust the console service.

Orchestration and live management are still down.

Step 3: Clear and Regenerate Certs for the PE Console


On your monolithic puppet master:
1. Navigate to the console certs directory with sudo cd /opt/puppet/share/puppet-dashboard/certs. Stay in this directory for the following steps.
2. Remove all the credentials in this directory with sudo rm -rf /opt/puppet/share/puppet-dashboard/certs/*.
3. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:create_key_pair.
4. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:request. The cert will be generated, and a CSR submitted.
5. Use puppet cert sign pe-internal-dashboard to sign the console certificate request.
6. Run sudo /opt/puppet/bin/rake RAILS_ENV=production cert:retrieve.
7. Ensure the console can access the new credentials with sudo chown -R puppet-dashboard:puppet-dashboard /opt/puppet/share/puppet-dashboard/certs.
8. Restart the console service with sudo service pe-httpd restart.

At this point:
The puppet master can talk to the console again, and vice versa.
The puppet master can now serve catalogs to agents.
However, it will only trust agents that have replaced their certificates. The only agent that has replaced its certificate at this point is the monolithic puppet master.
The console is usable, but because its SSL certificate has been replaced, your web browser may notice the change, assume it results from a malicious attack, and refuse to allow you access. If this happens, you may need to delete the old cert from your browser's collection of cached certificates. Details of this process are beyond the scope of this guide and will vary by browser and platform. (You can delay having to figure this out by temporarily using a different browser.)
Orchestration and live management may not immediately work, but they will start working again as soon as both the puppet master server and the console node complete a puppet agent run. (The certificates used by MCollective and the ActiveMQ service are completely managed by Puppet and don't have to be manually regenerated.)
On the monolithic puppet master, you can now start a successful agent run with sudo
puppet agent -t.

Step 4: Clear and Regenerate Certs for PE Agents



To replace the certs on agents, you'll need to log into each agent node and do the following:
1. Stop the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet ensure=stopped. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
2. Stop the orchestration service. On *nix nodes, run sudo puppet resource service pe-mcollective ensure=stopped. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
3. Delete the agent's SSL directory. On *nix nodes, run sudo rm -rf /etc/puppetlabs/puppet/ssl/*. On Windows nodes, delete the $confdir\ssl directory, using the Administrator confdir. See here for more information on locating the confdir.
4. Restart the puppet agent service. On *nix nodes, run sudo puppet resource service pe-puppet ensure=running. On Windows nodes, run the same command (minus sudo) with Administrator privileges.
Once the puppet agent starts, it will automatically generate keys and request a new certificate from the CA puppet master.
5. If you are not using autosigning, you will need to sign each agent node's certificate request. You can do this with the PE console's request manager, or by logging into the CA puppet master server, running sudo puppet cert list to see pending requests, and running sudo puppet cert sign <NAME> to sign requests.
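If many agents are waiting, the signing step can be scripted. The following is a sketch only: it parses sample puppet cert list output (pending requests appear as quoted names) and merely prints the corresponding sign commands. The output format can vary between Puppet versions, so verify against your own puppet cert list output, and confirm each request is legitimate before signing.

```shell
# Sample of what `sudo puppet cert list` prints for pending requests
# (quoted hostname, then a fingerprint):
cat > /tmp/pending.txt <<'EOF'
  "agent1.example.com" (SHA256) AA:BB:CC:DD
  "agent2.example.com" (SHA256) EE:FF:00:11
EOF

# Pull out each quoted node name and print the corresponding sign
# command. Drop the echo (and run as root on the CA master) to sign
# for real.
sed -n 's/^ *"\([^"]*\)".*/\1/p' /tmp/pending.txt |
while read -r name; do
  echo "sudo puppet cert sign $name"
done
# prints: sudo puppet cert sign agent1.example.com
#         sudo puppet cert sign agent2.example.com
```

Reviewing the printed commands before executing them keeps a human check in the loop, which matters here since signing a rogue request would admit an attacker to your deployment.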
Once an agent node's new certificate is signed, it will fetch it automatically within a few minutes and begin a Puppet run. After a node has fetched its new certificate and completed a full Puppet run, it will once again appear in orchestration and live management. If, after waiting for a short time, you don't see the agent node in live management, use NTP to make sure time is in sync across your PE deployment. On Windows nodes, you may need to log into the node and check the status of the Marionette Collective service; sometimes it can hang while attempting to stop or restart.

Once you have regenerated all agents' certificates, everything should now be back to normal and fully functional under the new CA.

Alternate Workflow to Replace Compliance Tool

This page describes an alternate workflow which will allow you to maintain baseline states and audit changes in your puppet-controlled infrastructure.

Compliance Alternate Workflow


WORKFLOW IN BRIEF

Instead of writing audit manifests: Write manifests that describe the desired baseline state(s).

This is identical to writing Puppet manifests to manage systems: you use the resource declaration syntax to describe the desired state of each significant resource.
Instead of running puppet agent in its default mode: Make it sync the significant resources in no-op mode, which can be done for the entire Puppet run, or per-resource. (See below.) This causes Puppet to detect and simulate changes, without automatically enforcing the desired state.
In the console: Look for "pending" events and node status. "Pending" is how the console represents detected differences and simulated changes.
CONTROLLING YOUR MANIFESTS

As part of a solid change control process, you should be maintaining your Puppet manifests in a
version control system like Git. A well-designed branch structure in version control will allow
changes to your manifests to be tracked, controlled, and audited.
NO-OP FEATURES

Puppet resources or catalogs can be marked as no-op before they are applied by the agent
nodes. This means that the user describes a desired state for the resource, and Puppet will detect
and report any divergence from this desired state. Puppet will report what should change to bring
the resource into the desired state, but it will not make those changes automatically.
To set an individual resource as no-op, set the noop metaparameter to true.

file { '/etc/sudoers':
  owner => 'root',
  group => 'root',
  mode  => '0600',
  noop  => true,
}

This allows you to mix enforced resources and no-op resources in the same Puppet run.
To do an entire Puppet run in no-op, set the noop setting to true. This can be done in the [agent] block of puppet.conf, or as a --noop command-line flag. If you are running puppet agent in the default daemon mode, you would set no-op in puppet.conf.
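As a minimal sketch, the puppet.conf form looks like this (assuming the default PE agent config path, /etc/puppetlabs/puppet/puppet.conf):

```ini
# /etc/puppetlabs/puppet/puppet.conf
[agent]
    noop = true
```

With this in place, every scheduled agent run reports pending changes instead of enforcing them, until you remove the setting or override it with --no-noop.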
IN THE CONSOLE

In the console, you can locate the changes Puppet has detected by looking for "pending" nodes, reports, and events. A pending status means Puppet has detected a change and simulated a fix, but has not automatically managed the resource.
You can find a pending status in the following places:
The node summary, which lists the number of nodes on which changes were detected.


The list of recent reports, which uses an orange asterisk to show reports in which changes were
detected.

The log and events tabs of any report containing pending events. These tabs will show you what
changes were detected, and how they differ from the desired system state described in your
manifests.


AFTER DETECTION

When a Puppet node reports no-op events, this means someone has made changes to a no-op resource whose desired state is described in your manifests. Generally, this either means an unauthorized change has been made, or an authorized change was made but the manifests have not yet been updated to contain the change. You will need to either:
Revert the system to the desired state (possibly by running puppet agent with --no-noop).
Edit your manifests to contain the new desired state, and check the changed manifests into
version control.
BEFORE DETECTION

However, your admins should generally be changing the manifests before making authorized changes. This serves as documentation of the change's approval.
SUMMARY

In this alternate workflow, you are essentially still maintaining baselines of your systems' desired states. However, instead of maintaining an abstract baseline by approving changes in the console, you are maintaining concrete baselines in readable Puppet code, which can be audited via version control records.
© 2010 Puppet Labs | info@puppetlabs.com | 411 NW Park Street / Portland, OR 97209 | 1-877-575-9775

