
Martin Alfke for betadots


Scaling Puppet Infrastructure

In large environments with many nodes one must take care of Puppet server scaling.
Whether and when scaling is needed depends on the number of nodes and on the complexity and size of your Puppet code.
This article describes the different ways of tuning Puppet server infrastructure.

Table of contents:

  1. Puppet Server tuning
    1. Performance tuning a single node
    2. Scaling horizontally
  2. PuppetDB tuning
  3. Analyzing the performance gain
  4. Distributed Puppet agent runs
  5. Summary

Scaling the hard way:

  1. Scaling beyond JRuby limits
  2. Multiple Puppet Server instances

Puppet Server tuning

Performance tuning a single node

We sometimes see horizontal scaling in use even when there is no need for it.
As scaling horizontally requires additional infrastructure components like a load balancer and Puppet Server compilers, we usually recommend performance tuning the single node instance first.

A properly configured and optimized single Puppet server should be able to handle up to 2500 nodes - using the default runinterval of 30 minutes.

There are several tuning options, which sometimes depend on each other:

Java Version

The Puppet Server package has a dependency on Java OpenJDK (headless).
On most Linux distributions this will install Java 1.8.

Please ensure you upgrade to Java 17 and set the Java alternative accordingly:

e.g. (on AlmaLinux 8):

alternatives --set java /usr/lib/jvm/java-17-openjdk-17.0.9.0.9-2.el8.x86_64/bin/java
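You can verify which Java version is now active:

java -version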

Number of JRuby instances

With its default configuration a Puppet Server spins up 2 JRuby instances.
Each instance can handle a single Puppet agent request at a time.
When there are more requests, they get queued.

Each JRuby instance needs 512 MB of Java Heap RAM.
Just to be sure: it is not possible to run more than 32 JRuby instances on a single node!

Adjusting the number of JRuby instances takes place in /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf within the jruby-puppet section by setting the max-active-instances parameter:

jruby-puppet: {
    ...
    max-active-instances: 8   # <------- JRuby instances
    ...
}

Please note that increasing the number of JRuby instances causes the Java process to need more Java HEAP RAM.

Puppet Server Java Heap Size

The Java engine should have enough RAM to be able to spin up and maintain the JRuby instances.
If you forget to increase Java Heap size, you will see Java out-of-memory error messages in the Puppet server logfile.

Increasing the Java heap size takes place in /etc/default/puppetserver (Debian based) or /etc/sysconfig/puppetserver (RedHat based).

The Java Heap size is provided as a Java argument:

# /etc/[sysconfig|default]/puppetserver
JAVA_ARGS="-Xms18g -Xmx18g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

In this example we have set the Java Heap size (upper and lower limit) to 18 GB RAM.

# Lower limit
-Xms18g
# Upper limit
-Xmx18g

It is considered best practice to set upper and lower limit to the same value, so Java reserves the RAM upon start up.

Please note that the maximum possible value is limited by the amount of the system RAM.
If you increase the Java heap size beyond system RAM, you will find Kernel out-of-memory errors in the journal.

Besides this, one must be aware that Java switches its memory handling when using more than 32 GB of heap (compressed object pointers are disabled), which reduces the number of objects that fit into the heap. Also see the blog posts from codecentric or Atlassian.

Reserved Code Cache

When the Puppet server receives a request from a node, the Puppet code is loaded into the code cache in memory.
The default value is 512 MB RAM.

A larger Puppet code base or complex code might need a larger Code cache setting.
Code cache is configured as Java argument in /etc/[sysconfig|default]/puppetserver:

# /etc/[sysconfig|default]/puppetserver
JAVA_ARGS="-Xms18g -Xmx18g -XX:ReservedCodeCacheSize=1024m -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

In this example we have set the code cache size to 1024 MB RAM.

-XX:ReservedCodeCacheSize=1024m

Please note that the maximum possible value is 2048m.

Please ensure that the Puppet server process was restarted after setting all the required configuration options.

A short mathematical sizing rule of thumb:

Java Heap size (M) = ( No of JRuby instances * 512M ) + 512M
Reserved Code Cache Size (M) = No of JRuby instances * 128M
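For example, with the 8 JRuby instances configured above, this rule of thumb gives:

Java Heap size           = ( 8 * 512M ) + 512M = 4608M (roughly 4.5 GB)
Reserved Code Cache Size = 8 * 128M            = 1024M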

Scaling horizontally

If a single Puppet server is not able to handle the number of requests - e.g. in large infrastructures exceeding 2000 nodes - one has the option to set up additional compilers.

Please note that each Puppet server infrastructure component should receive the performance tuning settings!

Infrastructure setup

A high performance Puppet infrastructure consists of the following components:

  • CA Server
  • Compiler(s)
  • Load balancer

A Puppet agent sends the request to the Load balancer.
The load balancer passes the request to a free compiler.

From a Puppet agent point of view, the request is sent to the Load balancer and the response is received from the compiler.
This has a special meaning when it comes to SSL certificates and strict SSL validation.

We will discuss this when we come to the compiler setup.

Puppet CA Server

The Puppet CA server is a standalone single instance, which is used to spin up and maintain additional Puppet compilers.
From the CA server's point of view, a compiler is just a node.

All new CSRs must be signed on the CA server.
When you want to scale your Puppet infrastructure the CA Server will be your existing Puppet server.

When we want to sign the compilers' certificates including dns_alt_names, we must configure the CA instance to allow this by modifying the /etc/puppetlabs/puppetserver/conf.d/ca.conf file:

We must allow the subject alt names setting:

# /etc/puppetlabs/puppetserver/conf.d/ca.conf
certificate-authority: {
    # allow CA to sign certificate requests that have subject alternative names.
    allow-subject-alt-names: true  # <----- enable SAN cert signing

    # allow CA to sign certificate requests that have authorization extensions.
    # allow-authorization-extensions: false

    # enable the separate CRL for Puppet infrastructure nodes
    # enable-infra-crl: false
}

Please ensure that the Puppet server process is restarted after making all the required changes.

Adding compilers

Compilers should not act as a CA server!
The CA functionality is managed in /etc/puppetlabs/puppetserver/services.d/ca.cfg.

Now we need to change the settings so that a compiler does not act as a CA server, but passes all CA related requests to the CA server:

# To enable the CA service, leave the following line uncommented
# puppetlabs.services.ca.certificate-authority-service/certificate-authority-service
# To disable the CA service, comment out the above line and uncomment the line below
puppetlabs.services.ca.certificate-authority-disabled-service/certificate-authority-disabled-service
puppetlabs.trapperkeeper.services.watcher.filesystem-watch-service/filesystem-watch-service

Please note that all compilers (and the CA server) need access to the Puppet code. In most environments the compilers and the CA server share an NFS mount: the code is deployed onto it on the CA server and used by all compilers.

The NFS share is beyond the scope of this document.

The Puppet agent will not connect to a compiler but to the load balancer.

If we keep the default setup, the Puppet agent will refuse to connect to the load balancer if the load balancer's DNS name is missing from the compiler's certificate as a DNS alt name.

The compiler MUST have the load balancer DNS name configured prior to generating the CSR.

This can be achieved by adding the dns_alt_names configuration setting to the agent section of /etc/puppetlabs/puppet/puppet.conf:

[agent]
dns_alt_names = loadbalancer.domain.tld,compiler-fqdn.domain.tld
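With dns_alt_names in place, the compiler can generate its CSR and the CA server can sign it. A minimal sketch of this workflow, assuming the CA server is reachable as puppetca.domain.tld (an example hostname) and the compiler certname matches the entry above:

# On the compiler: create the key pair and submit the CSR to the CA server
puppet ssl submit_request --server puppetca.domain.tld

# On the CA server: review and sign the request (SAN signing was enabled in ca.conf above)
puppetserver ca list
puppetserver ca sign --certname compiler-fqdn.domain.tld

# Back on the compiler: fetch the signed certificate
puppet ssl download_cert --server puppetca.domain.tld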

Adding a load balancer

Each request to a Puppet server starts a compilation process, and compile times vary. One should therefore not configure the roundrobin distribution algorithm on the load balancer.

Instead we want to distribute the connections to the Puppet compiler with the least work to do - which means the one with the fewest connections. In HAProxy this setting is called leastconn.

Besides this, you do not want to rely on Layer 3 IP connections, but on Layer 7 functionality. Within HAProxy one should add the SSL directive to each backend.
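A minimal HAProxy sketch of such a setup; the leastconn balancing is the important part, while the hostnames, ports and exact health-check options are examples that may differ in your environment:

# /etc/haproxy/haproxy.cfg (sketch)
frontend puppet
    bind *:8140
    mode tcp
    default_backend compilers

backend compilers
    mode tcp
    balance leastconn                  # send new connections to the compiler with the fewest open ones
    option ssl-hello-chk               # SSL-aware health check instead of a plain TCP probe
    server compiler1 compiler1.domain.tld:8140 check
    server compiler2 compiler2.domain.tld:8140 check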

PuppetDB tuning

In most environments PuppetDB is used to store facts, reports, catalogs and exported resources.
The more nodes you have in your infrastructure, the higher the load and the memory requirement on the PuppetDB process.

Tuning PuppetDB

PuppetDB is an HTTP(S) REST API in front of a PostgreSQL database.
One can say that PuppetDB is also a kind of web service.

Java Heap size

The most important setting is Java heap size.
By default PuppetDB is configured to use a 512 MB heap size.

Configuration takes place in /etc/sysconfig/puppetdb (RedHat/SLES) or /etc/default/puppetdb (Debian).

The Java Heap size is provided as a Java argument:

# /etc/[sysconfig|default]/puppetdb
JAVA_ARGS="-Xms1g -Xmx1g ...

In this example we have set the Java Heap size (upper and lower limit) to 1 GB RAM.

# Lower limit
-Xms1g
# Upper limit
-Xmx1g

It is considered best practice to set upper and lower limit to the same value, so Java reserves the RAM upon start up.

We usually get good results with a 1 GB or 2 GB heap size (depending on the number of nodes).

Database connection pool

Within the PuppetDB configuration file, which is located at /etc/puppetlabs/puppetdb/conf.d/database.conf, one can set the maximum number of idle and active PostgreSQL connections within the database section:

maximum-pool-size = 25 # default

Please note that PuppetDB uses two connection pools:

  • read pool
  • write pool

The read pool can be set individually by setting maximum-pool-size in the read-database section. The default value is 10.

Keep in mind that the total number of database connections PuppetDB may open is the sum of both pool sizes.
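A sketch showing both pools side by side, assuming both sections are kept in database.conf (the values are the defaults mentioned above):

# /etc/puppetlabs/puppetdb/conf.d/database.conf
[database]
# write pool
maximum-pool-size = 25

[read-database]
# read pool
maximum-pool-size = 10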

Please check your PostgreSQL configuration (especially the max_connections setting) prior to increasing the connection pools.

Block facts

Another possibility is to lower the number of facts stored.

This can be achieved by setting facts-blocklist.

facts-blocklist = ["fact1", "fact2", "fact3"]

Command processing threads

Besides this, PuppetDB lets you configure the number of command processing threads by setting threads within the command-processing section in /etc/puppetlabs/puppetdb/conf.d/config.ini.
By default PuppetDB will use half the number of cores of the system.
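A sketch of this setting; the value of 8 threads is only an example:

# /etc/puppetlabs/puppetdb/conf.d/config.ini
[command-processing]
threads = 8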

Web server threads

Last but not least, one can set max-threads in /etc/puppetlabs/puppetdb/conf.d/jetty.ini.
This specifies the number of possible parallel HTTP and HTTPS connections.
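A sketch of this setting; the value is an example and the existing entries in jetty.ini stay untouched:

# /etc/puppetlabs/puppetdb/conf.d/jetty.ini
[jetty]
...
max-threads = 150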

Scaling PuppetDB

In former times it was best practice to have a single PuppetDB instance on the Puppet server.
If you need to scale Puppet servers, one should consider running a PuppetDB on each Puppet infrastructure node, all connecting to a centralized PostgreSQL database.

Analyzing the performance gain

Every time one does performance tuning, one also wants proof that the new settings have improved performance.
The simplest check is to extract the compile times from the Puppet Server logfile:

awk '/Compiled catalog/ {print $12" "$14}' /var/log/puppetlabs/puppetserver/puppetserver.log

production 0.07
production 0.05
production 0.05
production 0.04
production 0.04
production 0.03
production 0.03

You can also get the highest and lowest compile time:

awk '/Compiled catalog/ {print $12" "$14}' /var/log/puppetlabs/puppetserver/puppetserver.log | sort | awk 'NR==1; END{print}'

production 0.03
production 0.07

If you need an average compile time, the following command will be helpful:

awk '/Compiled catalog/ {print $12" "$14}' /var/log/puppetlabs/puppetserver/puppetserver.log | sort | awk '{total += $2; count++ } END { print total/count }'

0.0442857

Another option is provided by the Puppet Operational Dashboards.

This will set up a Grafana frontend which reads data from an InfluxDB, which gets its content from a Telegraf agent that connects via HTTPS to the Puppet server metrics endpoints.

In Puppet Enterprise the PostgreSQL database also uses SSL certificates, which allows fetching PostgreSQL metrics as well.
When running open source Puppet, one needs to add an additional Telegraf configuration which queries the PostgreSQL database using a user and password and pushes the data into InfluxDB.
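A minimal sketch of such an additional Telegraf input; the file path, connection string, user and password are examples and must match a (read-only) PostgreSQL user created for this purpose:

# /etc/telegraf/telegraf.d/puppetdb-postgres.conf (sketch)
[[inputs.postgresql]]
  address   = "host=127.0.0.1 user=telegraf password=secret dbname=puppetdb sslmode=disable"
  databases = ["puppetdb"]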

Distributed Puppet agent runs

In the introduction we mentioned that a single server can handle up to 2500 nodes.
This number of nodes requires that Puppet agent runs are evenly distributed over time.

The default way to run a Puppet agent is as a daemon, which schedules a Puppet agent run upon start and triggers the next run every 30 minutes (runinterval config option).

Because systems might start in parallel, one can still see an overloaded Puppet server infrastructure.
As a result, Puppet agent runs take a very long time. If the agent does not receive a reply from the Puppet server within 5 minutes (http_read_timeout config option), it stops the current request and tries again in 30 minutes.

These config options can be modified in the agent section of the puppet.conf config file.
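A sketch of the two settings in the agent section, using the values discussed above:

# /etc/puppetlabs/puppet/puppet.conf
[agent]
runinterval = 30m
http_read_timeout = 5m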

Please reconsider whether Puppet should run every 30 minutes.
In some infrastructures Puppet runs in noop mode, as changes may only be deployed during a maintenance window.
In this case one still wants the nodes to check their configuration regularly, but maybe only 4 times a day, so people can see in the reporting that the systems are still in the desired state.
Doubling the runinterval option halves the load on the Puppet server, so it can handle twice the number of nodes.

But one will still see many large load spikes on the Puppet server.
It can still happen that too many nodes request their configuration in parallel.

We usually recommend that customers run the agent via cron.
One run is done at reboot and the other runs take place as recurring cron entries.
To distribute the agent runs over time one can make use of the fqdn_rand function inside Puppet, which generates a deterministic pseudo-random number based on a hostname or FQDN.
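A minimal sketch of such a cron entry managed with Puppet, using fqdn_rand to pick a stable per-node minute; the resource name and the 30-minute interval are examples:

# Each node gets a deterministic minute between 0 and 29, then runs twice per hour
$minute = fqdn_rand(30)

cron { 'puppet-agent-run':
  command => '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize',
  user    => 'root',
  minute  => [$minute, $minute + 30],
}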

One can use the reidmv puppet_run_scheduler module to configure Puppet accordingly.

Please note that within Puppet Enterprise environments the Puppet agent on the Puppet servers must be running as agent daemon service!

Summary

Tuning and scaling a Puppet server is quite simple, as it is only a single Java process providing a web service.

Tuning mostly relates to the Java process and its memory handling, and is usually limited by the amount of RAM and the number of CPUs available.

Scaling can be achieved by using simple, already established solutions to distribute web traffic amongst several servers.

Distributing the Puppet agent runs ensures evenly used Puppet servers which allows more nodes to get managed.

Scaling beyond JRuby limits

We already mentioned that you cannot spin up more than 32 JRuby instances.

We tried it. Believe us: it does not work at all.

But what to do if you have high end (e.g. HPC) hardware?

  • 72 cores
  • 248 GB RAM

Usually you want to slice up such big hardware, e.g. using KVM or Docker, and run multiple VMs or Puppet server containers, for example by using Voxpupuli CRAFTY.

Normally we only see such hardware in virtualization or container based environments or within an HPC platform.

In our specific use case the usage of KVM, Docker, Podman or Kubernetes was not possible. Don't ask.

We talked about this in the Puppet community Slack channel and people told us that the only option is to spin up multiple Puppet server instances by "some specific symlinking and copying of config files".

Just to be sure: this is not something you can configure out of the box; it needs lots of changes, also to Puppet internal scripts.

Multiple Puppet Server instances

You want it the hard way. Here we go.

Please note that these are just first findings, not automated and of course unsupported!

Requirements:

  • big hardware
  • multiple IP addresses on the node
    • one for the main Puppet CA server
    • one for the HAproxy loadbalancer
    • and one for each compiler
  • DNS configured for each IP
  • Puppet CA server configured and running

Assumptions:

  • the main installation is used for the Puppet CA server
  • compilers will use a copy of this installation
  • main CA server FQDN: puppetserver.domain.tld
  • compiler FQDN: puppetcompiler1.domain.tld
  • load balancer FQDN: puppetbalancer.domain.tld

First we need to stop the running Puppet Server; then we can copy several directories and make modifications to configuration files and scripts.
Afterwards we can start the Puppet server processes, and finally install and configure HAProxy.

systemctl stop puppetserver

Files and directories needed

Each Puppet server instance needs several files and directories copied from the main installation:

# Service config
cp -pr /etc/sysconfig/puppetserver /etc/sysconfig/puppetservercompiler1
# Puppet agent config
cp -pr /etc/puppetlabs/puppet /etc/puppetlabs/puppetcompiler1
# Puppet server config
cp -pr /etc/puppetlabs/puppetserver /etc/puppetlabs/puppetservercompiler1
# Puppet server application
cp -pr /opt/puppetlabs/server /opt/puppetlabs/servercompiler1
# Puppet server logging
cp -pr /var/log/puppetlabs/puppetserver /var/log/puppetlabs/puppetservercompiler1
# Puppet server service
cp -pr /usr/lib/systemd/system/puppetserver.service /usr/lib/systemd/system/puppetservercompiler1.service
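Since a new unit file was copied into place, systemd has to re-read its unit definitions before the new service can be managed:

systemctl daemon-reload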

Config files

/etc/puppetlabs/puppet/puppet.conf

The main Puppet configuration on the main Puppet CA server should only receive an entry for the server setting:

puppet config set --section main server puppetserver.domain.tld

/etc/sysconfig/puppetservercompiler1

Change the paths:

...
INSTALL_DIR="/opt/puppetlabs/servercompiler1/apps/puppetserver"
CONFIG="/etc/puppetlabs/puppetservercompiler1/conf.d"
# Bootstrap path
BOOTSTRAP_CONFIG="/etc/puppetlabs/puppetservercompiler1/services.d/,/opt/puppetlabs/servercompiler1/apps/puppetserver/config/services.d/"
...

/etc/puppetlabs/puppetservercompiler1/services.d/ca.cfg

Disable the CA:

# To enable the CA service, leave the following line uncommented
#puppetlabs.services.ca.certificate-authority-service/certificate-authority-service
# To disable the CA service, comment out the above line and uncomment the line below
puppetlabs.services.ca.certificate-authority-disabled-service/certificate-authority-disabled-service
puppetlabs.trapperkeeper.services.watcher.filesystem-watch-service/filesystem-watch-service

/etc/puppetlabs/puppetservercompiler1/conf.d/global.conf

Adjust the path:

global: {
    # Path to logback logging configuration file; for more
    # info, see http://logback.qos.ch/manual/configuration.html
    logging-config: /etc/puppetlabs/puppetservercompiler1/logback.xml
}

/etc/puppetlabs/puppetservercompiler1/conf.d/metrics.conf

Adjust the server id:

metrics: {
    # a server id that will be used as part of the namespace for metrics produced
    # by this server
    server-id: puppetservercompiler1
    registries: {
        ...

/etc/puppetlabs/puppetservercompiler1/conf.d/puppetserver.conf

Adjust the paths:

jruby-puppet: {
    ruby-load-path: [/opt/puppetlabs/puppet/lib/ruby/vendor_ruby]

    # Adjust path
    gem-home: /opt/puppetlabs/servercompiler1/data/puppetserver/jruby-gems

    # Adjust paths
    gem-path: [${jruby-puppet.gem-home}, "/opt/puppetlabs/servercompiler1/data/puppetserver/vendored-jruby-gems", "/opt/puppetlabs/puppet/lib/ruby/vendor_gems"]

    # Adjust path
    server-conf-dir: /etc/puppetlabs/puppetcompiler1

    server-code-dir: /etc/puppetlabs/code

    # Adjust path
    server-var-dir: /opt/puppetlabs/servercompiler1/data/puppetserver

    # Adjust path
    server-run-dir: /var/run/puppetlabs/puppetservercompiler1

    # Adjust path
    server-log-dir: /var/log/puppetlabs/puppetservercompiler1

    ...
}
...

/etc/puppetlabs/puppetservercompiler1/conf.d/webserver.conf

Adjust the path and the IP and/or port:

webserver: {
    access-log-config: /etc/puppetlabs/puppetservercompiler1/request-logging.xml
    client-auth: want
    ssl-host: 10.110.10.102 # <---- add compiler1 IP and/or
    ssl-port: 8141          # <---- use another port
}

/etc/puppetlabs/puppetservercompiler1/logback.xml

Adjust the paths:

<configuration scan="true" scanPeriod="60 seconds">
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%t] [%c{2}] %m%n</pattern>
        </encoder>
    </appender>

    <appender name="F1" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- TODO: this path should not be hard-coded -->
        <file>/var/log/puppetlabs/puppetservercompiler1/puppetserver.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- rollover daily -->
            <fileNamePattern>/var/log/puppetlabs/puppetservercompiler1/puppetserver-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- each file should be at most 200MB, keep 90 days worth of history, but at most 1GB total-->
            <maxFileSize>200MB</maxFileSize>
            <maxHistory>90</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%t] [%c{2}] %m%n</pattern>
        </encoder>
    </appender>

    <appender name="STATUS" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/puppetlabs/puppetservercompiler1/puppetserver-status.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- rollover daily -->
            <fileNamePattern>/var/log/puppetlabs/puppetservercompiler1/puppetserver-status-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- each file should be at most 200MB, keep 90 days worth of history, but at most 1GB total-->
            <maxFileSize>200MB</maxFileSize>
            <maxHistory>90</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <!-- note that this will only log the JSON message (%m) and a newline (%n)-->
            <pattern>%m%n</pattern>
        </encoder>
    </appender>

    <!-- without additivity="false", the status log messages will be sent to every other appender as well-->
    <logger name="puppetlabs.trapperkeeper.services.status.status-debug-logging" level="debug" additivity="false">
        <appender-ref ref="STATUS"/>
    </logger>

    <logger name="org.eclipse.jetty" level="INFO"/>
    <logger name="org.apache.http" level="INFO"/>
    <logger name="jruby" level="info"/>

    <root level="info">
        <!--<appender-ref ref="STDOUT"/>-->
        <!-- ${logappender} logs to console when running the foreground command -->
        <appender-ref ref="${logappender}"/>
        <appender-ref ref="F1"/>
    </root>
</configuration>

/usr/lib/systemd/system/puppetservercompiler1.service

#
# Local settings can be configured without being overwritten by package upgrades, for example
# if you want to increase puppetserver open-files-limit to 10000,
# you need to increase systemd's LimitNOFILE setting, so create a file named
# "/etc/systemd/system/puppetservercompiler1.service.d/limits.conf" containing:
#   [Service]
#   LimitNOFILE=10000
# You can confirm it worked by running systemctl daemon-reload
# then running systemctl show puppetserver | grep LimitNOFILE
#
[Unit]
Description=puppetserver compiler1 Service
After=syslog.target network.target nss-lookup.target

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/puppetservercompiler1
User=puppet
TimeoutStartSec=300
TimeoutStopSec=60
Restart=on-failure
StartLimitBurst=5
PIDFile=/run/puppetlabs/puppetservercompiler1/puppetserver.pid

# https://tickets.puppetlabs.com/browse/EZ-129
# Prior to systemd v228, TasksMax was unset by default, and unlimited. Starting in 228 a default of '512'
# was implemented. This is low enough to cause problems for certain applications. In systemd 231, the
# default was changed to be 15% of the default kernel limit. This explicitly sets TasksMax to 4915,
# which should match the default in systemd 231 and later.
# See https://github.com/systemd/systemd/issues/3211#issuecomment-233676333
TasksMax=4915

#set default privileges to -rw-r-----
UMask=027


ExecReload=/opt/puppetlabs/servercompiler1/apps/puppetserver/bin/puppetserver reload
ExecStart=/opt/puppetlabs/servercompiler1/apps/puppetserver/bin/puppetserver start
ExecStop=/opt/puppetlabs/servercompiler1/apps/puppetserver/bin/puppetserver stop

KillMode=process

SuccessExitStatus=143

StandardOutput=syslog

[Install]
WantedBy=multi-user.target

Scripts to adjust

Unfortunately, several scripts must be modified, too.

Mostly due to hardcoded paths.

/opt/puppetlabs/servercompiler1/apps/puppetserver/bin/puppetserver

#!/bin/bash

#set default privileges to -rw-r-----
umask 027

set -a
if [ -r "/etc/default/puppetservercompiler1" ] ; then
    . /etc/default/puppetservercompiler1
elif [ -r "/etc/sysconfig/puppetservercompiler1" ] ; then
    . /etc/sysconfig/puppetservercompiler1
elif [ `uname` == "OpenBSD" ] ; then
    JAVA_BIN=$(javaPathHelper -c puppetserver)
    JAVA_ARGS="-Xms2g -Xmx2g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"
    TK_ARGS=""
    USER="_puppet"
    INSTALL_DIR="/opt/puppetlabs/servercompiler1/apps/puppetserver"
    CONFIG="/etc/puppetlabs/puppetservercompiler1/conf.d"
else
    echo "You seem to be missing some important configuration files; could not find /etc/default/puppetservercompiler1 or /etc/sysconfig/puppetservercompiler1" >&2
    exit 1
fi
...

/opt/puppetlabs/servercompiler1/apps/puppetserver/cli/apps/foreground

#!/usr/bin/env bash

restartfile="/opt/puppetlabs/servercompiler1/data/puppetserver/restartcounter"
cli_defaults=${INSTALL_DIR}/cli/cli-defaults.sh
...

/opt/puppetlabs/servercompiler1/apps/puppetserver/cli/apps/reload

#!/usr/bin/env bash
set +e

restartfile="/opt/puppetlabs/servercompiler1/data/puppetserver/restartcounter"
reload_timeout="${RELOAD_TIMEOUT:-120}"
timeout="$reload_timeout"
realname="puppetservercompiler1"

...

initial="$(head -n 1 "$restartfile")"
pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetservercompiler1/conf.d")"
kill -HUP $pid >/dev/null 2>&1

...

/opt/puppetlabs/servercompiler1/apps/puppetserver/cli/apps/start

#!/usr/bin/env bash
set +e
env

pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetservercompiler1/conf.d")"

restartfile="/opt/puppetlabs/servercompiler1/data/puppetserver/restartcounter"
start_timeout="${START_TIMEOUT:-300}"

real_name="puppetservercompiler1"

...

/opt/puppetlabs/servercompiler1/apps/puppetserver/cli/apps/stop

#!/usr/bin/env bash
set +e

pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetservercompiler1/conf.d")"
realname="puppetservercompiler1"

...

/opt/puppetlabs/servercompiler1/apps/puppetserver/cli/cli-defaults.sh

INSTALL_DIR="/opt/puppetlabs/servercompiler1/apps/puppetserver"

if [ -n "$JRUBY_JAR" ]; then
  echo "Warning: the JRUBY_JAR setting is no longer needed and will be ignored." 1>&2
fi

CLASSPATH="${CLASSPATH}:/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/facter.jar:/opt/puppetlabs/servercompiler1/data/puppetserver/jars/*"

Adjustments to scripts from the main installation

As we have changed the way the correct PID is determined, we must also make this adjustment to the main Puppet Server CLI commands.

/opt/puppetlabs/server/apps/puppetserver/cli/apps/reload

...

pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d")"

...

/opt/puppetlabs/server/apps/puppetserver/cli/apps/start

...

pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d")"

...

/opt/puppetlabs/server/apps/puppetserver/cli/apps/stop

...

pid="$(pgrep -f "puppet-server-release.jar.* -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d")"

...

Start the stack

systemctl start puppetserver
systemctl start puppetservercompiler1
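To verify that both instances answer, one can query the status endpoint of each instance; the second one listens on the port configured in webserver.conf above:

curl -k https://puppetserver.domain.tld:8140/status/v1/simple
curl -k https://puppetcompiler1.domain.tld:8141/status/v1/simple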

Happy hacking and success on performance tuning your Puppet server infrastructure.

Martin Alfke
ma@betadots.de
