Chapter 16 Automating Operations with Ansible
Chapter Overview​
Ansible is a widely admired open‑source automation tool that has become a star in the operations world. It dramatically improves my efficiency while reducing human error—truly a “force multiplier.” Ansible ships with thousands of powerful, practical modules and rich built‑in help, so even newcomers can get productive quickly.
In this chapter I first introduce Ansible’s origins, core terminology, and how to configure an inventory. Then I walk you through more than a dozen frequently used modules—ping, yum, yum_repository, firewalld, service, template, setup, lvol, lvg, copy, file, debug, and more—to satisfy everyday operational needs. Next, with hands‑on exercises, I show how to load roles from the system, fetch roles from external sources, and create your own roles so you can orchestrate and control production workflows with confidence. I also craft playbooks to create logical volumes, render files per host, and manage file attributes. Finally, I close the chapter by using Ansible Vault to encrypt variables and playbooks.
By the end, you’ll have a comprehensive, connected understanding of Ansible—and, I hope, that deeply satisfying “I’ve really got this” feeling.
16.1 Ansible: Introduction and Installation​
Ansible is the simplest automation tool I’ve used for day‑to‑day operations. It helps me manage resources efficiently and automate application deployment, so the entire IT stack can be run hands‑off. With Ansible I can handle server initialization, security baselines, updates, and patching. Compared with Chef, Puppet, or SaltStack—classic client/server tools—Ansible isn’t always the fastest, but because it uses SSH and doesn’t require any agent on managed nodes, I can control machines directly as long as I know their credentials. That ease of use is a huge advantage.
In February 2012, Michael DeHaan released Ansible’s first version. Drawing on his deep background in configuration management and architecture—he also created Cobbler while at Red Hat—he set out to build a tool that united the best ideas from the field while fixing the pain points he saw. The result took off. On GitHub, Ansible’s stars and forks long surpassed SaltStack, a sign of its popularity. In 2015 Red Hat acquired Ansible (see Figure 16‑1), giving it even more runway.
Figure 16‑1 The Ansible logo
Using an automation framework visibly boosts efficiency and reduces mistakes. Ansible itself is a framework; the modules do the real work. When I install Ansible, it brings along a huge library of modules that I invoke to perform specific tasks. The collection covers nearly everything, and a single command can affect thousands of hosts. If I ever need more advanced capabilities, I can extend Ansible with Python.
Today, giants like Amazon, Google, Microsoft, Cisco, HP, VMware, and Twitter (now X) rely on Ansible at scale. Red Hat backs it heavily too: as of August 1, 2020, the RHCE exam switched to focus on Ansible. If you want that certification, this chapter is worth your full attention.
Before diving in, let’s align on some Ansible terms (Table 16‑1).
Table 16‑1 Key Ansible terminology
Term | Meaning |
---|---|
control node | The host where I install Ansible; I run tasks, call modules, and manage other machines from here. |
managed node | A host managed by Ansible; it’s the target that executes module code. |
inventory | The list of managed nodes (IPs, hostnames, or FQDNs). |
module | Unit of functionality. Ansible ships with thousands; I can also install more via Ansible Galaxy. |
task | An action performed on a managed node. |
playbook | A YAML file containing repeatable task lists I can run over and over. |
role | A structured way to organize playbooks so I can compose and reuse functionality. |
Because managed nodes do not require an agent and SSH is standard on Linux, I can control them remotely with nothing but SSH. There’s no daemon to start on the control node either; I simply run the ansible command to call modules.
RHEL 10 images don’t include Ansible by default; I install it from the Extra Packages for Enterprise Linux (EPEL) repository. EPEL is a high‑quality repository maintained by the Fedora Project that supplements the BaseOS/AppStream content for RHEL, CentOS, Oracle Linux, and related distributions.
Now let me deploy Ansible.
Step 1. In the VM Settings, set the Network Adapter to Bridged and configure the OS NIC for Automatic (DHCP) (Figures 16‑2 and 16‑3).
Figure 16‑2 Set “Network connection” to “Bridged”
Figure 16‑3 Make the NIC Automatic (DHCP)
In most cases that’s enough to reach the internet. If you want to be sure, test with ping:
root@linuxprobe:~# nmcli connection up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
root@linuxprobe:~# ping -c 4 www.linuxprobe.com
PING www.linuxprobe.com.w.kunlunno.com (124.95.157.160) 56(84) bytes of data.
64 bytes from www.linuxprobe.com (124.95.157.160): icmp_seq=1 ttl=53 time=17.1 ms
64 bytes from www.linuxprobe.com (124.95.157.160): icmp_seq=2 ttl=53 time=15.6 ms
64 bytes from www.linuxprobe.com (124.95.157.160): icmp_seq=3 ttl=53 time=16.8 ms
64 bytes from www.linuxprobe.com (124.95.157.160): icmp_seq=4 ttl=53 time=17.5 ms
--- www.linuxprobe.com.w.kunlunno.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 15.598/16.732/17.452/0.708 ms
Step 2. Add the EPEL repo under the existing BaseOS/AppStream entries:
root@linuxprobe:~# vim /etc/yum.repos.d/rhel10.repo
[BaseOS]
name=BaseOS
baseurl=file:///media/cdrom/BaseOS
enabled=1
gpgcheck=0
[AppStream]
name=AppStream
baseurl=file:///media/cdrom/AppStream
enabled=1
gpgcheck=0
[EPEL]
name=EPEL
baseurl=https://dl.fedoraproject.org/pub/epel/10/Everything/x86_64/
enabled=1
gpgcheck=0
Step 3. Install! ansible-core is the package for Ansible itself. I also install sshpass to help with password‑based SSH during labs.
root@linuxprobe:~# dnf install ansible-core sshpass
Updating Subscription Management repositories.
BaseOS 2.7 MB/s | 2.7 kB 00:00
AppStream 2.7 MB/s | 2.8 kB 00:00
EPEL 1.3 MB/s | 3.8 MB 00:03
Dependencies resolved.
========================================================================================================
Package Architecture Version Repository Size
========================================================================================================
Installing:
ansible-core noarch 1:2.16.3-3.el10 AppStream 3.9 M
sshpass x86_64 1.09-10.el10_1 EPEL 27 k
Installing dependencies:
python3-argcomplete noarch 3.2.2-3.el10 AppStream 90 k
python3-cffi x86_64 1.16.0-5.el10 BaseOS 312 k
python3-cryptography x86_64 43.0.0-2.el10 BaseOS 1.4 M
python3-jinja2 noarch 3.1.4-2.el10 AppStream 330 k
python3-markupsafe x86_64 2.1.3-5.el10 AppStream 36 k
python3-ply noarch 3.11-24.el10 BaseOS 139 k
python3-pycparser noarch 2.20-15.el10 BaseOS 162 k
python3-resolvelib noarch 1.0.1-5.el10 AppStream 49 k
Installed:
ansible-core-1:2.16.3-3.el10.noarch python3-argcomplete-3.2.2-3.el10.noarch
python3-cffi-1.16.0-5.el10.x86_64 python3-cryptography-43.0.0-2.el10.x86_64
python3-jinja2-3.1.4-2.el10.noarch python3-markupsafe-2.1.3-5.el10.x86_64
python3-ply-3.11-24.el10.noarch python3-pycparser-2.20-15.el10.noarch
python3-resolvelib-1.0.1-5.el10.noarch sshpass-1.09-10.el10_1.x86_64
Complete!
Because ansible-core only ships core modules, I add the community.general collection to access modules such as lvg, parted, lvol, and mount:
root@linuxprobe:~# ansible-galaxy collection install community.general
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/community-general-10.5.0.tar.gz to /root/.ansible/tmp/ansible-local-3064xaqti5li/tmpt0je0dr4/community-general-10.5.0-z14ni88o
Installing 'community.general:10.5.0' to '/root/.ansible/collections/ansible_collections/community/general'
community.general:10.5.0 was installed successfully
After installation Ansible is ready to use. Check the version and key paths:
root@linuxprobe:~# ansible --version
ansible [core 2.16.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.5 (main, Aug 23 2024, 00:00:00) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] (/usr/bin/python3)
jinja version = 3.1.4
libyaml = True
16.2 Configure the Inventory
When you’re new to Ansible you might see settings “not take effect.” Usually that’s due to configuration precedence. The file /etc/ansible/ansible.cfg has the lowest priority; if there’s an ansible.cfg in the current directory or a .ansible.cfg in my home directory, those win. See Table 16‑2.
Table 16‑2 Ansible configuration precedence
Priority | Location |
---|---|
High | ./ansible.cfg |
Medium | ~/.ansible.cfg |
Low | /etc/ansible/ansible.cfg |
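To make the precedence concrete, here is a minimal project-local ./ansible.cfg sketch. It uses only settings that appear later in this chapter; the lab-local inventory path is my own example:

```ini
# ./ansible.cfg -- wins over ~/.ansible.cfg and /etc/ansible/ansible.cfg
# for any ansible command run from this directory
[defaults]
# use a lab-local inventory instead of /etc/ansible/hosts (example path)
inventory = ./hosts
# default login user on managed nodes
remote_user = root
# skip SSH host-key prompts during labs
host_key_checking = False
```

Dropping a file like this into a project directory lets each lab carry its own settings without touching the system-wide configuration.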
Because I manage many hosts (dozens, hundreds, or more), an inventory is essential. I pre‑populate /etc/ansible/hosts with my targets so every ansible or ansible-playbook run automatically includes them.
Assume five hosts as in Table 16‑3.
Table 16‑3 Managed hosts
OS | IP address | Purpose |
---|---|---|
RHEL10 | 192.168.10.20 | dev |
RHEL10 | 192.168.10.21 | test |
RHEL10 | 192.168.10.22 | prod |
RHEL10 | 192.168.10.23 | prod |
RHEL10 | 192.168.10.24 | balancers |
I recommend removing the default comments in /etc/ansible/hosts and replacing them with your entries:
root@linuxprobe:~# vim /etc/ansible/hosts
192.168.10.20
192.168.10.21
192.168.10.22
192.168.10.23
192.168.10.24
Grouping hosts pays off in production:
root@linuxprobe:~# vim /etc/ansible/hosts
[dev]
192.168.10.20
[test]
192.168.10.21
[prod]
192.168.10.22
192.168.10.23
[balancers]
192.168.10.24
Inventory changes take effect immediately. I like to visualize the topology with:
root@linuxprobe:~# ansible-inventory --graph
@all:
|--@ungrouped:
|--@dev:
| |--192.168.10.20
|--@test:
| |--192.168.10.21
|--@prod:
| |--192.168.10.22
| |--192.168.10.23
|--@balancers:
| |--192.168.10.24
Before we go further, remember: Ansible talks over SSH. The first time you SSH to a host, OpenSSH asks you to accept its host key, then prompts for a password:
root@linuxprobe:~# ssh 192.168.10.10
The authenticity of host '192.168.10.10 (192.168.10.10)' can't be established.
ED25519 key fingerprint is SHA256:0R7Kuk/yCTlJ+E4G9y9iX/A/hAklHkALm5ZUgnJ01cc.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.10.10' (ED25519) to the list of known hosts.
root@192.168.10.10's password: Enter the administrator password and press Enter to confirm.
Last login: Mon Mar 31 13:20:15 2025
root@linuxprobe:~#
Typing passwords constantly defeats the purpose of automation. Fortunately Ansible provides connection variables (Table 16‑4).
Table 16‑4 Useful Ansible connection variables
Variable | Purpose |
---|---|
ansible_ssh_host | Target hostname |
ansible_ssh_port | SSH port |
ansible_ssh_user | Default username |
ansible_password | Default password |
ansible_shell_type | Shell type |
I can define these under a special all:vars group:
root@linuxprobe:~# vim /etc/ansible/hosts
[dev]
192.168.10.20
[test]
192.168.10.21
[prod]
192.168.10.22
192.168.10.23
[balancers]
192.168.10.24
[all:vars]
ansible_user=root
ansible_password=redhat
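Connection variables can also be set per host on the inventory line itself, which is handy when one box uses different credentials. A sketch (the port 2222 here is a made‑up value for illustration):

```ini
[dev]
# host-level settings override group-level ones such as [all:vars]
192.168.10.20 ansible_user=root ansible_password=redhat ansible_port=2222
```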
As a final touch, I prefer to generate a local ansible.cfg with sensible defaults: set remote_user=root and disable strict host‑key checking during labs:
root@linuxprobe:~# cd /etc/ansible/
root@linuxprobe:/etc/ansible# ansible-config init --disabled > ansible.cfg
root@linuxprobe:/etc/ansible# vim /etc/ansible/ansible.cfg
219 # (string) Sets the login user for the target machines
220 # When blank it uses the connection plugin's default, normally the user currently executing Ansible.
221 remote_user=root
319 # (boolean) Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host
320 host_key_checking=False
No service restart is needed. After finishing these steps, switch the VM back to Host‑Only networking (Figure 16‑4), assign 192.168.10.10/24 to the control node, restart the NIC, and ensure hosts can ping one another; that network reachability underpins the rest of the labs.
root@linuxprobe:/etc/ansible# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.10 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::20c:29ff:fee5:e733 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:e5:e7:33 txqueuelen 1000 (Ethernet)
RX packets 1132 bytes 107455 (104.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 152 bytes 18695 (18.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Figure 16‑4 Switch the VM NIC back to Host‑Only
16.3 Run Ad‑Hoc Commands
One command can control thousands of nodes—that’s Ansible’s superpower, and ansible is my workhorse for ad‑hoc (one‑off) tasks. Remember: the framework is Ansible; the work is done by modules. Table 16‑5 lists common modules I’ll use throughout this chapter.
Table 16‑5 Common Ansible modules
Module | Purpose |
---|---|
ping | Check host reachability |
yum | Install, update, and remove RPM packages |
yum_repository | Manage YUM repo definitions |
template | Copy a Jinja2 template |
copy | Create/modify/copy files |
user | Create/modify/delete users |
group | Create/modify/delete groups |
service | Start/stop/status of services |
get_url | Download a file over HTTP/HTTPS |
file | Set permissions and create symlinks |
cron | Manage scheduled tasks |
command | Run a command (no shell) |
shell | Run a command (with shell) |
debug | Print debugging information |
mount | Mount filesystems |
filesystem | Create filesystems |
lineinfile | Edit files with regex‑based line operations |
setup | Gather host facts |
firewalld | Add/modify/remove firewall rules |
lvg | Manage volume groups |
lvol | Manage logical volumes |
If I don’t recognize a module name or forget details, I use ansible-doc. I can list everything with ansible-doc -l or view a single module and its parameters and examples:
root@linuxprobe:/etc/ansible# ansible-doc -l
ansible.builtin.add_host Add a host (and alternatively a grou...
ansible.builtin.apt Manages apt-packages
ansible.builtin.apt_key Add or remove an apt key
ansible.builtin.apt_repository Add and remove APT repositories
ansible.builtin.assemble Assemble configuration files from fr...
ansible.builtin.assert Asserts given expressions are true
ansible.builtin.async_status Obtain status of asynchronous task
root@linuxprobe:/etc/ansible# ansible-doc add_host
> ANSIBLE.BUILTIN.ADD_HOST (/usr/lib/python3.12/site-packages/ansible/module>
Use variables to create new hosts and groups in inventory for
use in later plays of the same playbook. Takes variables so
you can define the new hosts more fully. This module is also
supported for Windows targets.
To verify connectivity to my inventory, I use the ping module with no parameters:
root@linuxprobe:/etc/ansible# ansible all -m ping
192.168.10.20 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.10.21 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.10.22 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.10.23 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.10.24 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
Tips:
To keep the outputs readable from here on, I show detailed results only for 192.168.10.20; other hosts behave the same.
When I need to pass parameters to modules, I add -a 'key=value ...'. For example, let me add a software repository to all hosts using yum_repository. First I skim its docs:
root@linuxprobe:/etc/ansible# ansible-doc yum_repository
> ANSIBLE.BUILTIN.YUM_REPOSITORY (/usr/lib/python3.12/site-packages/ansible/>
Add or remove YUM repositories in RPM-based Linux
distributions. If you wish to update an existing repository
definition use [community.general.ini_file] instead.
EXAMPLES:
- name: Add repository
ansible.builtin.yum_repository:
name: epel
description: EPEL YUM repo
baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/
- name: Add multiple repositories into the same file (1/2)
ansible.builtin.yum_repository:
name: epel
description: EPEL YUM repo
file: external_repos
baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/
gpgcheck: no
- name: Add multiple repositories into the same file (2/2)
ansible.builtin.yum_repository:
name: rpmforge
description: RPMforge YUM repo
file: external_repos
baseurl: http://apt.sw.be/redhat/el7/en/$basearch/rpmforge
mirrorlist: http://mirrorlist.repoforge.org/el7/mirrors-rpmforge
enabled: no
Suppose I need to create the repo in Table 16‑7:
Table 16‑7 New repo definition
Field | Value |
---|---|
Name | EX294_BASE |
Description | EX294 base software |
Base URL | file:///media/cdrom/BaseOS |
GPG check | Enabled |
GPG key | file:///media/cdrom/RPM-GPG-KEY-redhat-release |
I pass all of those parameters in a single quoted string. A CHANGED result means it worked:
root@linuxprobe:/etc/ansible# ansible all -m yum_repository -a 'name="EX294_BASE" description="EX294 base software" baseurl="file:///media/cdrom/BaseOS" gpgcheck=yes enabled=1 gpgkey="file:///media/cdrom/RPM-GPG-KEY-redhat-release"'
192.168.10.20 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": true,
"repo": "EX294_BASE",
"state": "present"
}
I can confirm on any managed node:
root@linuxprobe:~# cat /etc/yum.repos.d/EX294_BASE.repo
[EX294_BASE]
baseurl = file:///media/cdrom/BaseOS
enabled = 1
gpgcheck = 1
gpgkey = file:///media/cdrom/RPM-GPG-KEY-redhat-release
name = EX294 base software
16.4 Playbook Fundamentals
Often a single ad‑hoc command isn’t enough. Ansible lets me write playbooks—automation in YAML—that I can execute repeatedly. Pay close attention to indentation and alignment: a YAML playbook starts with three dashes (---), and nesting must be consistent. File extensions are typically .yml.
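As a minimal sketch that strings together modules from Table 16‑5 (the play name and target group are my own choices), a playbook is a list of plays, each naming its hosts and tasks:

```yaml
---
- name: Ensure httpd is installed and running    # one "play"
  hosts: dev                                     # inventory group from 16.2
  tasks:                                         # run top to bottom
    - name: Install httpd
      yum:
        name: httpd
        state: latest
    - name: Start and enable httpd
      service:
        name: httpd
        state: started
        enabled: yes
```

Save it as, say, httpd.yml and run it with ansible-playbook httpd.yml. Because modules are idempotent, rerunning it reports ok instead of changed once the desired state is already satisfied.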
16.5 Organize and Reuse with Roles​
As playbooks grow, they can become long and unwieldy, and it’s hard to reuse parts elsewhere. Since Ansible 1.2, roles provide a hierarchical structure: variables, files, tasks, handlers, templates, and more live under well‑defined directories. Think of roles as encapsulation in programming—hide implementation details, expose intent.
I benefit in two big ways: I can focus on designing the automation while roles keep things tidy and reusable, and I can call roles across multiple playbooks.
There are three ways to get roles: load system roles, fetch roles from outside sources, or create my own.
16.5.1 Load system roles​
RHEL includes a bundle of system roles. I don’t need internet access if I have local repos configured. I install them and then call them from playbooks:
root@linuxprobe:/etc/ansible# dnf install rhel-system-roles
Updating Subscription Management repositories.
Last metadata expiration check: 0:27:07 ago on Tue 01 Apr 2025 06:27:45 PM CST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
rhel-system-roles noarch 1.88.9-0.1.el10 AppStream 3.9 M
Installed:
rhel-system-roles-1.88.9-0.1.el10.noarch
Complete!
List what’s available:
root@linuxprobe:/etc/ansible# ansible-galaxy list
# /usr/share/ansible/roles
- rhel-system-roles.network, (unknown version)
- rhel-system-roles.podman, (unknown version)
- rhel-system-roles.postfix, (unknown version)
- rhel-system-roles.postgresql, (unknown version)
- rhel-system-roles.rhc, (unknown version)
- rhel-system-roles.selinux, (unknown version)
- rhel-system-roles.snapshot, (unknown version)
- rhel-system-roles.ssh, (unknown version)
- rhel-system-roles.sshd, (unknown version)
- rhel-system-roles.storage, (unknown version)
- rhel-system-roles.sudo, (unknown version)
- rhel-system-roles.systemd, (unknown version)
- rhel-system-roles.timesync, (unknown version)
16.5.2 Fetch roles from outside sources​
Ansible Galaxy is the community hub for roles and collections. Search and install (for example nginxinc.nginx):
ansible-galaxy role install nginxinc.nginx
If a role is hosted elsewhere or Galaxy is slow, reference a tarball from a YAML requirements file and install with -r:
vim nginx.yml
---
- src: https://www.linuxprobe.com/Software/ansible-role-nginx-0.25.0.tar.gz
name: nginx-core
ansible-galaxy role install -r nginx.yml
ansible-galaxy list
16.5.3 Create a new role​
Make sure your roles_path includes /etc/ansible/roles (and optionally /usr/share/ansible/roles):
roles_path=/usr/share/ansible/roles:/etc/ansible/roles
Create a role skeleton named apache:
cd /etc/ansible/roles
ansible-galaxy init apache
ls apache
# defaults files handlers meta README.md tasks templates tests vars
Table 16‑10 Role directory meanings
Directory | Meaning |
---|---|
defaults | Low‑priority default variables |
files | Static files used by the role |
handlers | Handler definitions |
meta | Metadata such as author, license, dependencies |
tasks | Task list executed by the role |
templates | Jinja2 templates |
tests | Playbooks for testing the role |
vars | High‑priority variables |
Define tasks in tasks/main.yml:
vim /etc/ansible/roles/apache/tasks/main.yml
---
- name: Install httpd
yum:
name: httpd
state: latest
- name: Start and enable httpd
service:
name: httpd
state: started
enabled: yes
Open HTTP in the firewall. The firewalld module lives in ansible.posix; install the collection and add the task:
ansible-galaxy collection install ansible.posix
vim /etc/ansible/roles/apache/tasks/main.yml
- name: Allow HTTP in firewalld
ansible.posix.firewalld:
service: http
permanent: yes
state: enabled
immediate: yes
Generate a unique homepage per host with Jinja2. First, learn the facts you need:
ansible all -m setup -a 'filter="*fqdn*"'
ansible all -m setup -a 'filter="*ip*"'
Create the template and deploy it:
vim /etc/ansible/roles/apache/templates/index.html.j2
Welcome to {{ ansible_fqdn }} on {{ ansible_all_ipv4_addresses }}
vim /etc/ansible/roles/apache/tasks/main.yml
- name: Deploy custom index.html
template:
src: index.html.j2
dest: /var/www/html/index.html
Call the role from a playbook:
vim /etc/ansible/roles.yml
---
- name: Deploy website with custom homepage
hosts: all
roles:
- apache
ansible-playbook /etc/ansible/roles.yml
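The handlers directory from Table 16‑10 is worth wiring in too. A handler runs only when a task that notifies it reports changed. A sketch for the apache role above, restarting httpd whenever the homepage template actually changes (the handler name is my own):

```yaml
# /etc/ansible/roles/apache/tasks/main.yml (excerpt)
- name: Deploy custom index.html
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
  notify: Restart httpd          # fires only if this task changed the file

# /etc/ansible/roles/apache/handlers/main.yml
---
- name: Restart httpd            # must match the notify string exactly
  service:
    name: httpd
    state: restarted
```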
16.6 Playbook Labs: LVM, templating, and file attributes​
This section walks through three practical playbooks you can adapt to production.
16.6.1 Create a logical volume and mount it​
Use lvg, lvol, filesystem, and mount to build a data volume on /dev/sdb and mount it at /web.
vim lvm.yml
---
- name: Create an LVM volume and mount it
hosts: prod
tasks:
- name: Create volume group
community.general.lvg:
vg: vgdata
pvs: /dev/sdb
- name: Create logical volume
community.general.lvol:
vg: vgdata
lv: lvweb
size: 2g
shrink: no
- name: Create filesystem
community.general.filesystem:
fstype: xfs
dev: /dev/vgdata/lvweb
- name: Create mount point
file:
path: /web
state: directory
mode: '0755'
- name: Mount the filesystem and persist in fstab
mount:
path: /web
src: /dev/vgdata/lvweb
fstype: xfs
opts: defaults
state: mounted
Run it:
ansible-playbook lvm.yml
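Because every module here is idempotent, rerunning lvm.yml is safe. To grow the volume later, I could raise size and let lvol resize the filesystem in the same step; a sketch (the 4g target is my own, resizefs is a documented lvol option):

```yaml
- name: Grow logical volume to 4 GiB and resize the filesystem
  community.general.lvol:
    vg: vgdata
    lv: lvweb
    size: 4g
    resizefs: yes     # also grows the xfs filesystem on the LV
```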
16.6.2 Customize files per host with templates​
Render /etc/motd using host facts so each server shows its own identity at login.
vim motd.yml
---
- name: Deploy a dynamic MOTD
hosts: all
gather_facts: yes
tasks:
- name: Render /etc/motd
template:
src: motd.j2
dest: /etc/motd
mode: '0644'
Template:
vim motd.j2
Welcome to {{ ansible_fqdn }}
Primary IPv4: {{ ansible_all_ipv4_addresses | first }}
Uptime: {{ ansible_uptime_seconds }} seconds
Managed by Ansible on {{ ansible_date_time.date }} {{ ansible_date_time.time }}
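Templates can do more than substitute facts; Jinja2 also supports conditionals and loops. A sketch extending motd.j2 (the warning text is my own; inventory_hostname and groups are built‑in Ansible variables):

```jinja
{# motd.j2 excerpt: show a warning only on hosts in the prod group #}
{% if inventory_hostname in groups['prod'] %}
*** PRODUCTION host: changes require a ticket. ***
{% endif %}
{% for ip in ansible_all_ipv4_addresses %}
Interface address: {{ ip }}
{% endfor %}
```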
16.6.3 Manage file attributes and ownership​
Use the file module to create directories, set permissions, and manage symlinks.
vim files.yml
---
- name: Enforce file ownership and modes
hosts: balancers
tasks:
- name: Ensure app directories exist
file:
path: "{{ item.path }}"
state: directory
owner: "{{ item.owner }}"
group: "{{ item.group }}"
mode: "{{ item.mode }}"
loop:
- { path: /opt/app, owner: root, group: root, mode: '0755' }
- { path: /opt/app/logs, owner: root, group: root, mode: '0750' }
- name: Touch an environment file
file:
path: /opt/app/.env
state: touch
owner: root
group: root
mode: '0640'
- name: Ensure symlink exists
file:
src: /opt/app/current
dest: /var/www/html/app
state: link
A further variant of this playbook creates a /linuxprobe directory and a /linuxcool symlink. When I run it, only matching hosts change; others are skipped. Verifying on a dev host:
root@linuxprobe:~# ls -ld /linuxprobe
drwxrwsr-x. 2 root root 6 Apr 3 20:35 /linuxprobe
root@linuxprobe:~# ls -ld /linuxcool
lrwxrwxrwx. 1 root root 11 Apr 3 20:35 /linuxcool -> /linuxprobe
16.9 Manage Vault‑Encrypted Files
Since Ansible 1.5, Vault has offered a native way to encrypt sensitive data—passwords, variables, even entire playbooks. Vault can encrypt both variable names and values, preventing casual viewing. I use the ansible-vault command to create, encrypt, decrypt, rekey, view, and edit content.
Let’s walk through it.
Step 1. Create a variable file:
root@linuxprobe:/etc/ansible# vim locker.yml
---
pw_developer: Imadev
pw_manager: Imamgr
Step 2. Because typing a password each time is tedious, I create a file to hold the Vault password and restrict its permissions:
root@linuxprobe:/etc/ansible# vim /root/secret.txt
whenyouwishuponastar
root@linuxprobe:/etc/ansible# chmod 600 /root/secret.txt
In ansible.cfg, set the vault_password_file option so Ansible can pick it up automatically:
root@linuxprobe:/etc/ansible# vim /etc/ansible/ansible.cfg
137
138 # If set, configures the path to the Vault password file as an alternative to
139 # specifying --vault-password-file on the command line.
140 vault_password_file = /root/secret.txt
141
Step 3. With the password file configured, Ansible loads it automatically. Now I can encrypt without an interactive prompt:
root@linuxprobe:/etc/ansible# ansible-vault encrypt locker.yml
Encryption successful
Vault uses AES‑256 encryption (a keyspace of 2^256). The file now looks like:
root@linuxprobe:/etc/ansible# cat locker.yml
$ANSIBLE_VAULT;1.1;AES256
38666539353062343930633135306633613736386366633665353464343964613838356135616137
3538616139303438316636613564313266353833336337640a313963316664303431383132386362
30326430643234653336363130393238633266386636333666633932613937326135373766656539
3637386562646539610a333761333039323565353761353636623762623435616163333031376333
36316334386562326337363730353463323462316131383064333234626561366338356261383333
3235626134393832323436386232646562396133666435386361
If I want to change the Vault password, I can rekey, combining it with --ask-vault-pass to supply the old password:
root@linuxprobe:/etc/ansible# ansible-vault rekey --ask-vault-pass locker.yml
Vault password: Enter your old password
New Vault password: Enter your new password
Confirm New Vault password: Enter your new password again
Rekey successful
Step 4. To edit an encrypted file, use edit; to view, use view. By default, editing opens Vim—remember to write and quit.
root@linuxprobe:/etc/ansible# ansible-vault edit locker.yml
---
pw_developer: Imadev
pw_manager: Imamgr
pw_production: Imaprod
Then:
root@linuxprobe:/etc/ansible# ansible-vault view locker.yml
Vault password: Enter the password and press Enter to confirm.
---
pw_developer: Imadev
pw_manager: Imamgr
pw_production: Imaprod
Best practices
- Keep the Vault password file outside your repository.
- Use separate vaults per environment (dev/test/prod).
- Prefer vars_files over inline secrets.
- Restrict access with filesystem permissions and group membership.
Review Questions
1. I have a local repo configured, but dnf still won’t install Ansible. Why?
Answer: RHEL 10’s BaseOS/AppStream repos don’t include Ansible; you need to add the EPEL repository.
2. If /etc/ansible/ansible.cfg and ~/.ansible.cfg both exist, which one wins?
Answer: The one in my home directory has higher priority.
3. Which module starts a service, and which module mounts a device?
Answer: service starts services; mount mounts device filesystems.
4. How can I learn what a module does?
Answer: Use ansible-doc (list everything with -l, or view details for a specific module).
5. What are the three ways to obtain roles?
Answer: Load system roles, fetch roles from external sources, and create my own roles.
6. During a playbook run, I see changed in yellow—what does it mean?
Answer: The task succeeded and made changes.
7. What’s the difference between Jinja2 templates and the copy module?
Answer: copy is a 1:1 file transfer; template renders variables into a file via Jinja2.
8. How do I run a playbook that uses a Vault‑encrypted var file?
Answer: Run ansible-playbook <playbook> with --ask-vault-pass or --vault-password-file <path>.