
The issue you're encountering is most likely due to the MBR (Master Boot Record) partitioning scheme. MBR cannot address partitions larger than 2TB (with 512-byte sectors). If you have disks larger than 2TB and you want to use the entire space in a single partition, you should use the GPT (GUID Partition Table) partitioning scheme instead of MBR.

To use the full capacity of a 3TB (or larger) disk on RHEL6, you'd want to use the GPT format. Here's how you can create a GPT partition:

  1. Backup any data: Before making any changes to the disk, ensure you've backed up any data on it.

  2. Install gdisk: The gdisk utility is a version of fdisk for GPT. It might not be installed by default:

    sudo yum install gdisk
  3. Start gdisk:

    sudo gdisk /dev/sdx

    Replace /dev/sdx with the appropriate disk name.

  4. Create a new partition:

    • Press o to create a new empty partition table.
    • Press n to create a new partition.
    • Follow the prompts to set the start and end of the partition.
    • Press w to write changes to disk.
  5. Format the partition: Once you've created the partition, you'll need to format it with a filesystem, for example:

    sudo mkfs.ext4 /dev/sdx1

    Replace /dev/sdx1 with the appropriate partition name.

  6. Mount & use: Now you can mount this partition and start using it.
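
For example, a minimal sketch assuming the new partition is /dev/sdx1 and a mount point of /mnt/data (both placeholders to adjust):

    sudo mkdir -p /mnt/data
    sudo mount /dev/sdx1 /mnt/data

    # Find the partition's UUID so the mount can persist across reboots
    sudo blkid /dev/sdx1

    # Then add a line like this to /etc/fstab, using the UUID reported above:
    # UUID=<uuid-from-blkid>  /mnt/data  ext4  defaults  0 2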

By moving to GPT, you can utilize the full size of disks larger than 2TB. Also, keep in mind that while GPT is widely supported, ensure any other systems that might access this disk also support GPT. RHEL6 and most modern systems support it without any issues.

The message you're seeing is typical when you start gdisk on a disk that currently has an MBR partition table. gdisk is letting you know that you can convert the MBR partition table to GPT if you wish. However, converting from MBR to GPT is not a simple switch, and any data on the disk would be at risk.

To answer your question:

  1. If you are initializing a new disk: If the 3TB disk is new and doesn't have data on it, then you can safely create a GPT partition table on it without concerns. Once you've done that, you can move data to it from other disks without any issues.

  2. If you're trying to convert an existing disk with data: It's riskier. Before attempting to convert from MBR to GPT, you should make a complete backup of any data on the disk. Conversion tools are designed to change the partition table without data loss, but things can go wrong, and a backup is crucial.

  3. Interoperability: As for moving data onto the new GPT-partitioned disk, RHEL6 will have no trouble reading and writing to GPT partitions. GPT is well-supported in modern Linux distributions. The primary consideration is if you ever expect this disk to be used on older systems or certain other devices which might not support GPT. But for a standard RHEL6 (and newer) environment, you should be good to go.

Lastly, always remember to backup any important data before making major changes to disk structures.
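
If you want to confirm the disk is now using GPT and that the full capacity is visible, a quick check (using /dev/sdx as a stand-in for your disk):

    sudo parted /dev/sdx print
    # "Partition Table: gpt" confirms the new scheme, and the reported
    # disk size should show the disk's full capacity.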

# Retrieve the newest certificate from the local machine's personal store
$newestCert = Get-ChildItem -Path Cert:\LocalMachine\My | Sort-Object NotAfter -Descending | Select-Object -First 1

if ($newestCert) {
    Write-Output "Newest Certificate Thumbprint: $($newestCert.Thumbprint)"
    
    # Update the existing WinRM HTTPS listener with the newest certificate's thumbprint
    Set-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{ Address = "*"; Transport = "HTTPS" } -ValueSet @{ CertificateThumbprint = $newestCert.Thumbprint }

    Write-Output "HTTPS listener updated with newest certificate."
}
else {
    Write-Error "No certificates found in the local machine's personal store."
}

https://aws.amazon.com/marketplace/pp/prodview-fxjjedym32gky
https://repost.aws/questions/QU7bw453xcRUyYfO5BBnJnqg/oracle-linux-8-uek-availability

When you want to convert a Linux server in VMware into a template for cloning, you need to ensure that all system-specific information and configurations are cleaned up to prevent conflicts or unintended configurations on the cloned systems. Here's a checklist you can follow before converting a VM to a template:

  1. Hostname: Reset the hostname to a generic name.

    echo "localhost" > /etc/hostname
  2. Network Configuration: Remove or clear the network configuration files.

    • For systems using ifcfg scripts (like RHEL/CentOS):

      rm -f /etc/sysconfig/network-scripts/ifcfg-ens*
    • For systems using Netplan (like newer versions of Ubuntu):

      rm -f /etc/netplan/*.yaml
  3. SSH Keys: Delete SSH server keys. New keys will be generated on the first boot of the cloned system.

    rm -f /etc/ssh/ssh_host_*
  4. Log Files: Clear system logs to start fresh on the cloned VMs.

    find /var/log -type f -exec truncate -s 0 {} \;
  5. Command History: Clear the command history of all users, especially root.

    rm -f /root/.bash_history
    rm -f /home/*/.bash_history
  6. Temporary Files: Delete any temporary files.

    rm -rf /tmp/*
    rm -rf /var/tmp/*
  7. Machine ID: Clear the machine ID. It will be regenerated on the next boot. Truncate the file rather than echoing into it, since echo leaves a newline that can prevent regeneration on some systems:

    truncate -s 0 /etc/machine-id
  8. UUIDs & MAC Address Config: Ensure that any system-specific UUIDs or MAC addresses are not hardcoded in configuration files.

  9. Packages & Software: Consider removing or generalizing software to fit the intended use of the template.

  10. Users & Passwords: Ensure you remove or reset passwords, especially if you’ve set custom passwords for applications or services.

  11. Custom Services: If there are any custom services or applications, ensure they're configured to start fresh for new clones.

  12. Unmount Drives/Devices: Ensure you've unmounted any temporary devices or drives.

  13. Package Database: You might want to clean the package manager cache.

  • For yum (RHEL/CentOS):

    yum clean all
  • For apt (Debian/Ubuntu):

    apt clean
  14. VMware-specific Configurations:
  • Uninstall open-vm-tools or VMware Tools if you've installed them. You can install them again once the VM is deployed from the template.
  • Remove any cron jobs or scheduled tasks that are specific to the current VM.
  15. Shutdown the VM:

    shutdown -h now

Once the VM is powered off, you can convert it to a template in the vSphere/VMware client. When deploying new VMs from the template, ensure to customize settings, such as CPU, memory, and disk size, according to the requirements of the specific VM.
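
As a convenience, the checklist above can be rolled into a single cleanup script to run just before shutdown. This is only a sketch of the steps already listed (RHEL/CentOS paths assumed), not an official sysprep tool:

    #!/bin/bash
    # Pre-template cleanup: generalise the VM before converting to a template
    echo "localhost" > /etc/hostname
    rm -f /etc/sysconfig/network-scripts/ifcfg-ens*
    rm -f /etc/ssh/ssh_host_*
    find /var/log -type f -exec truncate -s 0 {} \;
    rm -f /root/.bash_history /home/*/.bash_history
    rm -rf /tmp/* /var/tmp/*
    truncate -s 0 /etc/machine-id
    yum clean all
    shutdown -h now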

Ansible: adding an SSH key for an AD user

---
- name: Setup SSH key for AD user
  hosts: your_target_hosts
  become: yes  # Become root to manage user and home directory
  vars:
    ad_username: "AD_user"
    ssh_public_key: "ssh-rsa AAAAB3N..."

  tasks:
    - name: Ensure AD user's home directory exists
      ansible.builtin.user:
        name: "{{ ad_username }}"
        create_home: yes
        home: "/home/{{ ad_username }}"

    - name: Create .ssh directory for AD user
      ansible.builtin.file:
        path: "/home/{{ ad_username }}/.ssh"
        state: directory
        owner: "{{ ad_username }}"
        group: "{{ ad_username }}"
        mode: '0700'

    - name: Add SSH key to authorized_keys
      ansible.builtin.lineinfile:
        path: "/home/{{ ad_username }}/.ssh/authorized_keys"
        line: "{{ ssh_public_key }}"
        create: yes
        owner: "{{ ad_username }}"
        group: "{{ ad_username }}"
        mode: '0600'
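
With the playbook saved (the filename and inventory path below are placeholders), run it as usual; ad_username and ssh_public_key can also be overridden with --extra-vars:

    ansible-playbook -i inventory ad_user_ssh_key.yml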

Packer setup

Step 1: Download Oracle Linux 7 ISO

Download the OEL7 ISO:

  • Visit the Oracle Linux download page.
  • Download the ISO for Oracle Linux 7. This file will be used by Packer to install the OS in the VM.

Step 2: Create a Packer Template

Packer Template:

Create a file named oel7.json (or any other name you prefer) for your Packer template. Here's a basic template to get you started:

{
  "builders": [{
    "type": "virtualbox-iso",
    "guest_os_type": "Oracle_64",
    "iso_url": "path_to_your_downloaded_oel7_iso",
    "iso_checksum": "checksum_of_iso",
    "iso_checksum_type": "md5",
    "headless": true,
    "ssh_username": "your_username",
    "ssh_password": "your_password",
    "vm_name": "OEL7_VM",
    "shutdown_command": "shutdown -P now",
    "boot_wait": "2m",
    "disk_size": 20480,
    "memory": 2048,
    "cpus": 2,
    "format": "ova",
    "output_directory": "output-virtualbox-ova",
    "boot_command": [
      "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
    ],
    "http_directory": ".",
    "http_port_min": 8000,
    "http_port_max": 9000
  }]
}

  • Replace "path_to_your_downloaded_oel7_iso" with the actual path to the ISO file.
  • Replace "checksum_of_iso" with the actual checksum of the ISO (you can usually find this on the download page or calculate it using a tool like md5sum).
  • Update ssh_username and ssh_password with credentials you want to set for your VM.
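
For example, to calculate the checksum of the downloaded ISO (the filename here is illustrative):

    md5sum OracleLinux-R7-U9-Server-x86_64-dvd.iso
    # Paste the resulting hash into the "iso_checksum" field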

Step 3: Run Packer

Build the VM:

  • Open a terminal on your RHEL8 server.
  • Navigate to the directory where your oel7.json file is located.
  • Run the following command:
packer build oel7.json
  • Packer will create a VM based on this template.
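
It's worth validating the template before building; Packer has a validate subcommand for exactly this:

    packer validate oel7.json
    packer build oel7.json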

Step 4: Export the VM as an OVA File

Manual Export Using VMware Workstation Player:

  • After Packer completes the build, open VMware Workstation Player.
  • Find the VM that Packer created. It should be listed in VMware Workstation Player's library.
  • Right-click on the VM and choose Export to OVF or a similar option.
  • Follow the prompts to export the VM as an OVA file.

Additional Notes

  • Network Configuration: This basic template does not include network configuration or provisioning scripts. You might need to modify it to suit your specific requirements.

  • Manual Steps: The export to OVA is a manual step in VMware Workstation Player.

  • VMware Compatibility: Ensure that the guest_os_type in the Packer template is compatible with VMware Workstation Player.

By following these steps, you should be able to create a basic OVA file for OEL7 using Packer and VMware Workstation Player. Remember, this is a basic template and might need further customization based on your specific requirements.

A sample kickstart file (ks.cfg) for the build:

#version=RHEL7
# System authorization information
auth --enableshadow --passalgo=sha512

# Use CDROM installation media
cdrom

# Run the Setup Agent on first boot
firstboot --enable

# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'

# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on --ipv6=auto --no-activate
network  --hostname=localhost.localdomain

# Root password
rootpw --plaintext your_root_password
# Create a user (password here is plain text; for production, generate a hash and use --iscrypted instead)
user --name=your_username --password=your_password --plaintext --gecos="User"

# System services
services --enabled="chronyd"

# System timezone
timezone America/New_York --isUtc

# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda

# Clear the Master Boot Record
zerombr

# Partition clearing information
clearpart --all --initlabel

# Disk partitioning information
autopart --type=lvm

# Installation repository (kickstart allows only one installation source;
# drop the "cdrom" directive above if you install from a URL instead)
url --url="http://mirror.centos.org/centos/7/os/x86_64/"
repo --name="updates" --baseurl="http://mirror.centos.org/centos/7/updates/x86_64/"

%packages
@base
%end

%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
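
Before using the kickstart file, it can be syntax-checked with ksvalidator from the pykickstart package:

    sudo yum install pykickstart
    ksvalidator ks.cfg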

Ansible for python2

Option 1: Set in the Inventory File

Edit Your Inventory File: Open your Ansible inventory file (typically hosts or inventory).

Specify Python Interpreter:

For each host or group of hosts that need to run Python 2 scripts, add the ansible_python_interpreter variable and set it to the path of the Python 2 interpreter. Example:

[python2_hosts]
host1 ansible_python_interpreter=/usr/bin/python2
host2 ansible_python_interpreter=/usr/bin/python2
Run Ansible Playbook: Execute your Ansible playbook as usual. Ansible will use Python 2 for the specified hosts.

Option 2: Set in Playbooks

Define Variable in Playbook: In your Ansible playbook, you can set the ansible_python_interpreter variable at the play level or for specific tasks.

Example Playbook:

- hosts: python2_hosts
  vars:
    ansible_python_interpreter: "/usr/bin/python2"
  tasks:
    - name: Run a Python 2 script
      script: path_to_your_python2_script.py

Run Ansible Playbook: When you run this playbook, Ansible will use Python 2 for the hosts under python2_hosts.

Option 3: Set in Role Variables

Create or Edit Role: If you're using roles in Ansible, you can define the ansible_python_interpreter variable within the role's variable file (vars/main.yml).

Define Variable:

ansible_python_interpreter: "/usr/bin/python2"

Use Role in Playbook: Include this role in your playbook. Ansible will apply the specified Python interpreter when executing tasks from this role.

Additional Notes

  • Path to Python Interpreter: Ensure that the path to the Python 2 interpreter (/usr/bin/python2 in the examples) matches the actual path on the target hosts.
  • Python 2 End of Life: Be aware that Python 2 has reached its end of life. It's advisable to plan for migrating to Python 3 where possible, as Python 2 no longer receives updates or security fixes.
  • Testing: Test your configuration in a non-production environment first to ensure it works as expected.

By configuring the ansible_python_interpreter variable, you can control which Python interpreter Ansible uses for specific hosts, plays, or roles, allowing for compatibility with scripts that require Python 2.
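
A quick way to confirm the configuration is an ad-hoc ping against the group (the inventory file name is an example); if the interpreter path is wrong, this fails immediately:

    ansible python2_hosts -i hosts -m ping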


Kickstart stuff

#version=RHEL8

# Use graphical install
graphical

# Use CDROM installation media
cdrom

# Run the Setup Agent on first boot
firstboot --enable

# Keyboard layouts
keyboard --xlayouts='us'

# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on --activate
network  --hostname=localhost.localdomain

# Root password (use a strong password here)
rootpw --plaintext your_root_password

# System timezone
timezone America/New_York --isUtc

# System bootloader configuration
bootloader --location=mbr --boot-drive=sda

# Clear the Master Boot Record
zerombr

# Disk partitioning information
autopart --type=lvm
# For manual partitioning, use something like:
# part /boot --fstype=xfs --size=1024
# part pv.100 --size=1 --grow
# volgroup vg_system pv.100
# logvol / --fstype=xfs --name=lv_root --vgname=vg_system --size=10240
# logvol swap --name=lv_swap --vgname=vg_system --size=2048

# Enable firewall and disable SELinux
firewall --enabled
selinux --disabled

# System services
services --enabled="chronyd"

# Do not configure the X Window System
skipx

# Package selection (minimal installation)
%packages
@^minimal-environment
%end

%post
# Post-installation script
# You can put your custom post-installation scripts here
%end

HCL:

source "virtualbox-iso" "ubuntu-example" {
  vm_name         = "packer-ubuntu-vm"
  iso_url         = "<iso-url>"
  iso_checksum    = "sha256:<iso-checksum>"
  guest_os_type   = "Ubuntu_64"
  ssh_username    = "ubuntu"
  ssh_password    = "password"
  boot_wait       = "10s"

  disk_size       = 10240 // Size of the primary disk in MB

  // VirtualBox-specific settings
  vboxmanage = [
    ["modifyvm", "{{.Name}}", "--memory", "4096"],
    ["modifyvm", "{{.Name}}", "--cpus", "2"]
  ]
}

build {
  sources = ["source.virtualbox-iso.ubuntu-example"]

  // Shell-local provisioner (runs on the Packer host) to add a second disk;
  // note the provisioner name is "shell-local", not "local-shell"
  provisioner "shell-local" {
    inline = [
      "VBoxManage createhd --filename output-{{build_name}}/additional_disk.vdi --size 10240", // Size of the additional disk in MB (10 GB)
      "VBoxManage storageattach '{{build_name}}' --storagectl 'SATA Controller' --port 1 --type hdd --medium 'output-{{build_name}}/additional_disk.vdi'"
    ]
  }

  // ... other provisioners if any ...
}

- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Example task
      <task-module>: <task-arguments>
[target-hosts]
your_host_or_group ansible_python_interpreter=/usr/bin/python3

- name: Get hostname
  ansible.builtin.command: hostname
  register: result_hostname

- name: Assert that hostname has 3 parts
  assert:
    that:
      - result_hostname.stdout.split('.') | length == 3
    fail_msg: "Hostname does not have 3 parts"
    success_msg: "Hostname has 3 parts"

Testing different hostname formats

---
- name: Test server name format logic
  hosts: localhost
  gather_facts: no
  vars:
    test_server_names:
      - "ukdc1-9k-abc01"
      - "ab-t1-b-9k-abc01"
      - "another-format-server"

  tasks:
    # set_fact inside a loop overwrites scalar facts on every pass, so the
    # per-server values are collected into a list instead
    - name: Determine format and collect facts for each test server name
      set_fact:
        server_facts: "{{ server_facts | default([]) + [ {
            'name': item,
            'use_format': ('first' if first_format | bool else 'new'),
            'env': (item[6] if first_format | bool else item[8]),
            'lhp': (item[9:12] if first_format | bool else item[11:14]),
            'zone': (item[7] if first_format | bool else item[4]) } ] }}"
      vars:
        first_format: "{{ item | length > 10 and item[3] == '-' and item[5] == '-' }}"
      loop: "{{ test_server_names }}"

    - name: Display server name and extracted facts
      debug:
        msg: "Server: {{ item.name }}, Format: {{ item.use_format }}, Env: {{ item.env }}, LHP: {{ item.lhp }}, Zone: {{ item.zone }}"
      loop: "{{ server_facts }}"

Change a password:

Changing the root password across multiple Linux systems is a common use case for Ansible, which can automate this process efficiently and securely. The best practice is to use the `user` module with a password hash generated by a tool like `openssl` or `mkpasswd`. This way only the hash, never the plain-text password, appears in your Ansible playbook or logs.

### Step 1: Generate an Encrypted Password

First, generate an encrypted password. You can use `openssl` for this:

```bash
openssl passwd -6 -salt xyz yourpassword
```

Replace yourpassword with the desired new password. The -6 flag selects the SHA-512 hashing algorithm, and -salt xyz adds a salt to the hash. Remember to replace xyz with a random salt value.

Alternatively, use mkpasswd (you might need to install the whois package to get this command):

mkpasswd --method=sha-512
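
If neither tool is available, Python can generate the same SHA-512 hash (note the crypt module was removed in Python 3.13, so this works only on older interpreters):

    python3 -c 'import crypt; print(crypt.crypt("yourpassword", crypt.mksalt(crypt.METHOD_SHA512)))'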

Step 2: Create an Ansible Playbook

Next, create an Ansible playbook to change the root password. Here's a simple playbook example:

---
- name: Change root password
  hosts: all
  become: yes

  tasks:
    - name: Update root password
      user:
        name: root
        password: "{{ encrypted_password }}"

Replace {{ encrypted_password }} with the encrypted password string you generated earlier. For better security practices, you should use Ansible Vault to encrypt the password variable or the entire file containing the password.

Using Ansible Vault

To avoid storing the encrypted password directly in the playbook:

  1. Create a file with the encrypted password variable:
ansible-vault create secret.yml

When prompted, enter the password for the vault and add the following content:

encrypted_password: 'encrypted-password-here'

Replace encrypted-password-here with your encrypted password.

  2. Include the vault file in your playbook using vars_files:
---
- name: Change root password
  hosts: all
  become: yes
  vars_files:
    - secret.yml

  tasks:
    - name: Update root password
      user:
        name: root
        password: "{{ encrypted_password }}"
  3. Run the playbook, providing the vault password:
ansible-playbook playbook.yml --ask-vault-pass

This method keeps the encrypted password secured and avoids exposing sensitive information directly in your playbook or source control.

Remember, changing the root password is a critical operation that can affect system access. Ensure you have proper backups and recovery processes in place before making such changes across your infrastructure.

One line password checks...

echo "<password>" | su - root -c 'echo "Success"' 2>/dev/null && echo "Yes, the password works." || echo "No, the password doesn't work."
- name: Check Root Access
  hosts: all
  become: yes
  become_method: su
  tasks:
    - name: Attempt to read a file only root can access
      ansible.builtin.command: cat /root/.ssh/authorized_keys
      register: result
      ignore_errors: yes

    - name: Check if access was successful
      ansible.builtin.debug:
        msg: "Root access verified."
      when: result.rc == 0

    - name: Report failure to access root
      ansible.builtin.debug:
        msg: "Root access denied."
      when: result.rc != 0
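
Because the play escalates with become_method: su, run it with a prompt for the root password (the playbook name is a placeholder):

    ansible-playbook check_root.yml --ask-become-pass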

Possible steps to install OpenJDK 17 on RHEL7

To install OpenJDK 17 on RHEL 7, you can follow these more detailed steps. This guide assumes you have access to the terminal and appropriate permissions to execute commands (typically as the root user or via sudo).

Step 1: Download OpenJDK 17

First, you'll need to download the OpenJDK 17 binaries. While RHEL's default repositories may not provide the latest JDK version, you can download it from the official OpenJDK website or AdoptOpenJDK, which is now part of the Eclipse Foundation (Eclipse Temurin™).

  • Visit Adoptium and choose the appropriate OpenJDK 17 version for Linux/x64.

Step 2: Extract the JDK Archive

Once downloaded, upload the tar.gz file to your RHEL 7 system, if you downloaded it from another machine. Use scp or similar tools if necessary. Then, extract the JDK archive to an appropriate directory, such as /usr/lib/jvm. This is a common directory for Java installations but may not exist by default.

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm
sudo tar -xzf /path/to/downloaded/openjdk-17_linux-x64_bin.tar.gz

Replace /path/to/downloaded/ with the actual path where your OpenJDK tar.gz file is located.

Step 3: Set Up Environment Variables

For the system and users to recognize the newly installed Java version as the default, set up environment variables. Edit or create the /etc/profile.d/jdk.sh file:

sudo vi /etc/profile.d/jdk.sh

Add the following lines, replacing /usr/lib/jvm/jdk-17 with the actual path to your JDK if different:

export JAVA_HOME=/usr/lib/jvm/jdk-17
export PATH=$JAVA_HOME/bin:$PATH

Save and exit the editor. Make the script executable:

sudo chmod +x /etc/profile.d/jdk.sh

Apply the changes:

source /etc/profile.d/jdk.sh

Step 4: Verify the Installation

To ensure OpenJDK 17 is correctly installed and set as the default Java version, use:

java -version

You should see the version of OpenJDK being displayed, indicating that OpenJDK 17 is now the default Java version.

Step 5: Update Alternatives

For systems with multiple Java installations, use the update-alternatives command to manage the default version:

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-17/bin/java 1
sudo update-alternatives --config java

Follow the prompts to select OpenJDK 17 if it's not already the default.
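
To double-check which binary the java command now resolves to:

    update-alternatives --display java
    readlink -f "$(which java)"
    # Both should point at /usr/lib/jvm/jdk-17/bin/java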

Additional Notes

  • Be aware of the RHEL 7 lifecycle and consider updating to a newer RHEL version for better compatibility and security in the long term.
  • Always perform such installations and configurations in a testing environment before applying them to production systems.
  • Keep track of any custom repository or installation paths you use for easier maintenance and updates.

lineinfile update:

- name: Replace line in /etc/fstab
  ansible.builtin.lineinfile:
    path: /etc/fstab
    # Single-quoted YAML keeps backslashes literal, so \s (not \\s) is correct here
    regexp: '^/dev/shm\s+/dev/shm\s+tmpfs\s+defaults\s+0\s+0$'
    line: 'tmpfs  /dev/shm  tmpfs  defaults 0 0'
    # backrefs stops lineinfile from appending the line when the pattern doesn't match
    backrefs: yes
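
After the task runs, you can verify the change and apply it without a reboot (remounting /dev/shm briefly affects anything using it, so use with care):

    grep '/dev/shm' /etc/fstab
    sudo mount -o remount /dev/shm
    mount | grep '/dev/shm'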

check with extra-vars:

- name: Test patch_id variable
  hosts: localhost
  tasks:
    - name: Display patch_id if provided
      debug:
        msg: "The provided patch_id is {{ patch_id }}"
      when: patch_id is defined and patch_id != ''

    - name: Display a default message if patch_id is not provided
      debug:
        msg: "No patch_id was provided."
      when: patch_id is not defined or patch_id == ''
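
Run the playbook with and without the variable to exercise both branches (the playbook name and patch_id value are just examples):

    ansible-playbook test_patch_id.yml --extra-vars "patch_id=PATCH-1234"
    ansible-playbook test_patch_id.yml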

Some kernel parameters

Here's a brief description of each specified kernel.sched_* parameter:

kernel.sched_wakeup_granularity_ns: Determines how much longer the current task will run before another task is woken up. A lower value can improve system responsiveness by allowing newly awakened tasks to preempt the current task more quickly.

kernel.sched_tunable_scaling: Controls how the scheduler's tunable values are scaled across CPUs. It adjusts the scheduler's behavior based on the system's workload and CPU characteristics, aiming for a balance between performance and power efficiency.

kernel.sched_schedstats: Enables or disables the collection of detailed scheduling statistics. Useful for debugging or optimizing system performance but may incur overhead.

kernel.sched_rt_runtime_us: Specifies the time slice, in microseconds, allocated to real-time tasks within each sched_rt_period_us. It controls how much CPU time is guaranteed to real-time tasks.

kernel.sched_rt_period_us: Defines the period, in microseconds, over which real-time tasks are allowed to run. It sets the time frame for real-time scheduling.

kernel.sched_rr_timeslice_ms: Sets the time slice, in milliseconds, allocated to each task in a round-robin scheduling scheme. It determines how long a task will run before the scheduler switches to the next task in the round-robin queue.

kernel.sched_nr_migrate: Controls the maximum number of active tasks that can be migrated from one CPU to another during load balancing. It affects how tasks are distributed across CPUs.

kernel.sched_min_granularity_ns: Sets the minimum granularity of the scheduler, in nanoseconds. This value helps prevent too frequent preemptions, ensuring that tasks have a minimum execution time before being rescheduled.

kernel.sched_migration_cost: Represents the typical cost, in nanoseconds, of migrating a task from one CPU to another. A higher value suggests that task migrations are more costly, potentially influencing the scheduler's decisions on moving tasks.

kernel.sched_latency_ns: The total period over which the scheduler aims to run all runnable tasks at least once, in nanoseconds. It's a key parameter in defining the scheduler's behavior, influencing how long tasks may wait before getting CPU time.

kernel.sched_energy_aware, kernel.sched_domain: sched_energy_aware toggles energy-aware scheduling (the name sometimes appears garbled as "sched_energy_wave"); the sched_domain entries expose per-scheduling-domain tuning. Both are less commonly documented and can be specific to certain kernel versions or configurations.

kernel.sched_deadline_period_min_us and kernel.sched_deadline_period_max_us: Define the minimum and maximum allowable period, in microseconds, for tasks scheduled under the SCHED_DEADLINE policy, which is used for tasks with specific timing requirements.

kernel.sched_child_runs_first: Determines whether a child process runs first after being created with fork() before the parent process. This can influence the performance of certain applications.

kernel.sched_cfs_bandwidth_slice_us: Specifies the time slice, in microseconds, for bandwidth control under the Completely Fair Scheduler (CFS), impacting how bandwidth is allocated among competing tasks.

kernel.sched_autogroup_enabled: Enables automatic grouping of tasks, which can improve the responsiveness of interactive tasks by effectively grouping and scheduling related tasks together.

Please note, the exact behavior and availability of some parameters may vary depending on the kernel version and configuration
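
These are all regular sysctl knobs, so they can be inspected and set like any other (the parameter and value below are examples; availability varies by kernel, as noted):

    # Read the current value
    sysctl kernel.sched_min_granularity_ns

    # Set it for the running kernel only
    sudo sysctl -w kernel.sched_min_granularity_ns=3000000

    # Persist it across reboots
    echo 'kernel.sched_min_granularity_ns = 3000000' | sudo tee /etc/sysctl.d/90-sched.conf
    sudo sysctl --system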


---
- name: Append a line to all .bashrc files in /home and list them
  hosts: all
  become: yes  # Use with caution, elevates permissions to root
  tasks:
    - name: Find all .bashrc files in /home
      ansible.builtin.find:
        paths: "/home"
        patterns: ".bashrc"
        hidden: yes  # .bashrc is a hidden file; find excludes hidden files by default
        recurse: yes
        file_type: file
      register: bashrc_files

    - name: Debug found .bashrc files
      ansible.builtin.debug:
        msg: "Found .bashrc files: {{ bashrc_files.files | map(attribute='path') | list }}"
      when: bashrc_files.files | length > 0

    - name: Warn if no .bashrc files found
      ansible.builtin.debug:
        msg: "No .bashrc files found in /home."
      when: bashrc_files.files | length == 0
      
    - name: Append line to each .bashrc file in /home
      ansible.builtin.lineinfile:
        path: "{{ item.path }}"
        line: "export ANSIBLE='ansible line'"
        create: no
      loop: "{{ bashrc_files.files }}"
      when: bashrc_files.files | length > 0


---
- name: Append a line to all .bashrc files in /home using command
  hosts: all
  become: yes  # Necessary for access to all home directories
  tasks:
    - name: Find all .bashrc files in /home using shell command
      ansible.builtin.command: "find /home -type f -name .bashrc"
      register: find_result
      changed_when: false
      ignore_errors: yes

    - name: Debug found .bashrc files
      ansible.builtin.debug:
        msg: "{{ find_result.stdout_lines }}"
      when: find_result.stdout_lines is defined and find_result.stdout_lines | length > 0

    - name: Warn if no .bashrc files found
      ansible.builtin.debug:
        msg: "No .bashrc files found in /home."
      when: find_result.stdout_lines is undefined or find_result.stdout_lines | length == 0

    - name: Append line to each .bashrc file found
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        line: "export ANSIBLE='ansible line'"
        create: no
      loop: "{{ find_result.stdout_lines }}"
      when: find_result.stdout_lines is defined and find_result.stdout_lines | length > 0

Ansible to delete files per server based on a list

if the data looks like this:

server1,/path/to/files/1
server1,/path/to/files/2
server2,/path/to/files/1
server2,/path/to/files/2
server3,/path/to/files/1
server4,/path/to/files/2
server4,/path/to/files/1
server5,/path/to/files/2

run this sed command to create the file_paths.yml file:

sed -e '1i file_paths:' -e 's/\([^,]*\),\(.*\)/  - { host: '\''\1'\'', path: '\''\2'\'' }/' server_paths.txt > file_paths.yml

or use awk (this variant emits only the list items, so add the "file_paths:" header yourself):

awk -F, '{print "  - { host: '\''"$1"'\'', path: '\''"$2"'\'' }"}' input.txt > output.yml

Once you have the list, create the playbook:

- hosts: all
  gather_facts: no

  tasks:
    - name: Include variable file with file paths
      include_vars:
        file: file_paths.yml

    - name: Delete specified files on each server
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ file_paths }}"
      when: inventory_hostname == item.host
      become: yes  # Use sudo to delete the files

run the playbook: ansible-playbook -i hosts.ini delete_files.yml


Collect information about remote files

The point of this playbook is to collect the hostname, owner, group, and permissions of a file or list of files, and to generate a variable file that can be used to restore the data to its original state should we need to.

The ansible code.

It will need some tweaking. This version only does one listed file. When you do this type of work, the point is to start slow and get each task working as expected BEFORE you create the whole thing and try to work out why it doesn't work. You also get to understand what you're doing and see real-time output of how it's going and whether it's going in the right direction.

---
- name: Collect file information and generate variable file
  hosts: all
  gather_facts: false
  vars_files:
    - files_info.yml
  tasks:
    - name: Gather file information
      stat:
        path: "{{ item.path }}"
      register: file_stat
      loop: "{{ files_info }}"
      when: inventory_hostname == item.host

    - name: Append to list of collected file info
      set_fact:
        file_info:
          # file_stat was registered in a loop, so per-item data lives in file_stat.results
          - { hosts: '{{ item.item.host }},', path: '{{ item.item.path }}', user: '{{ item.stat.pw_name }}', group: '{{ item.stat.gr_name }}', perms: '{{ item.stat.mode }}' }
      loop: "{{ file_stat.results }}"
      when: item.stat is defined
      register: file_info_results

    - name: Combine file info results
      set_fact:
        all_file_info: "{{ file_info_results.results | selectattr('ansible_facts', 'defined') | map(attribute='ansible_facts.file_info') | list | flatten }}"

    - name: Initialize variable file if not present
      delegate_to: localhost
      copy:
        content: "file_info:\n"
        dest: "{{ playbook_dir }}/variable_file.yml"
      when: not file_info | default([])

    - name: Append file info to variable file
      delegate_to: localhost
      lineinfile:
        path: "{{ playbook_dir }}/variable_file.yml"
        line: "  - { hosts: '{{ item.host }},', path: '{{ item.path }}', user: '{{ item.user }}', group: '{{ item.group }}', perms: '{{ item.perms }}' }"
        insertafter: "file_info:"
      loop: "{{ all_file_info }}"


---
- name: Collect file information via shell and generate variable file
  hosts: all
  gather_facts: false
  vars_files:
    - files_info.yml
  tasks:
    - name: Gather file information via shell
      shell: |
        file_path="{{ item.path }}"
        stat_output=$(stat -c '%U %G %a' "$file_path")
        user=$(echo $stat_output | cut -d' ' -f1)
        group=$(echo $stat_output | cut -d' ' -f2)
        perms=$(echo $stat_output | cut -d' ' -f3)
        echo "{{ inventory_hostname }}:$file_path:$user:$group:$perms"
      register: file_info_shell
      loop: "{{ files_info }}"
      when: inventory_hostname == item.host

    - name: Append results to file
      lineinfile:
        path: "/tmp/collected_file_info.txt"
        create: yes
        line: "{{ item.stdout }}"
      loop: "{{ file_info_shell.results }}"
      when: item.stdout is defined  # skipped hosts have no stdout

    - name: Convert shell results to structured data
      set_fact:
        # Jinja2 has no list comprehensions, so build the list one entry per loop iteration
        structured_file_info: "{{ structured_file_info | default([]) + [ {
            'hosts': parts[0] ~ ',',
            'path': parts[1],
            'user': parts[2],
            'group': parts[3],
            'perms': parts[4] } ] }}"
      vars:
        parts: "{{ item.split(':') }}"
      loop: "{{ lookup('file', '/tmp/collected_file_info.txt').splitlines() }}"

    - name: Create final YAML output
      template:
        src: "file_info_template.j2"
        dest: "/tmp/variable_file.yml"
      delegate_to: localhost

The file_info_template.j2 template referenced above:

file_info:
{% for item in structured_file_info %}
  - { hosts: '{{ item.hosts }}', path: '{{ item.path }}', user: '{{ item.user }}', group: '{{ item.group }}', perms: '{{ item.perms }}' }
{% endfor %}

Alternative tasks that build the variable file with lineinfile instead of the template:

    - name: Initialize variable file with header
      delegate_to: localhost
      copy:
        content: "file_info:\n"
        dest: "/tmp/variable_file.yml"
      when: not lookup('file', '/tmp/variable_file.yml', errors='ignore')
      
    - name: Append results to variable file
      delegate_to: localhost
      lineinfile:
        path: "/tmp/variable_file.yml"
        line: "  - { hosts: '{{ item.stdout.split(':')[0] }}', path: '{{ item.stdout.split(':')[1] }}', user: '{{ item.stdout.split(':')[2] }}', group: '{{ item.stdout.split(':')[3] }}', perms: '{{ item.stdout.split(':')[4] }}' }"
        insertafter: "file_info:"
      loop: "{{ file_info_shell.results }}"
      when: "'stdout' in item and item.stdout != ''"
      

---
- name: Convert file paths to structured YAML
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Initialize YAML file with header
      # lineinfile cannot write multi-line content, so use copy for the two header lines
      copy:
        dest: /tmp/structured_file_list.yml
        content: "---\nfile_list:\n"
        force: no

    - name: Read file paths from local file
      slurp:
        src: file_paths.txt
      register: file_content

    - name: Append file paths to structured YAML
      lineinfile:
        path: /tmp/structured_file_list.yml
        line: "  - { host: '{{ item.split()[0] }}', path: '{{ item.split()[1] }}' }"
      loop: "{{ file_content.content | b64decode | splitlines() }}"
      when: "item"

---
- name: Check if files exist on remote servers
  hosts: all
  vars:
    file_list:
      - { host: 'server1', path: '/path/to/file1' }
      - { host: 'server1', path: '/path/to/file12' }
      - { host: 'server2', path: '/path/to/another/file1' }

  tasks:
    - name: Check file existence using shell
      shell: test -f "{{ item.path }}" && echo "exists" || echo "does not exist"
      register: shell_output
      loop: "{{ file_list }}"
      when: inventory_hostname == item.host
      ignore_errors: true  # Prevents the task from failing and allows the playbook to continue

    - name: Display file existence results
      debug:
        msg: "File {{ item.item.path }} {{ item.stdout | trim }}"
      loop: "{{ shell_output.results }}"
      when: item.stdout is defined  # skipped hosts have no stdout

Command to check if a file exists, then pull out more info

ssh $server 'if [ ! -f /etc/opt/quest/qpm4u/pm.settings ]; then echo "$HOSTNAME Not Under QPM Management"; else case $(grep masters /etc/opt/quest/qpm4u/pm.settings) in *"ukserver01"*) echo "$HOSTNAME Managed by Old QPM Master";; *"ukserver001"*) echo "$HOSTNAME Managed by New QPM Master";; *) echo "$HOSTNAME Not Under QPM Management";; esac; fi'
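
To run the same check across a list of machines, wrap it in a loop (servers.txt is a hypothetical file with one hostname per line):

    while read -r server; do
      ssh -o BatchMode=yes "$server" 'if [ ! -f /etc/opt/quest/qpm4u/pm.settings ]; then echo "$HOSTNAME Not Under QPM Management"; else case $(grep masters /etc/opt/quest/qpm4u/pm.settings) in *"ukserver01"*) echo "$HOSTNAME Managed by Old QPM Master";; *"ukserver001"*) echo "$HOSTNAME Managed by New QPM Master";; *) echo "$HOSTNAME Not Under QPM Management";; esac; fi'
    done < servers.txt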

---
- name: Set up Python alternatives
  hosts: all
  become: yes

  tasks:
    - name: Ensure Python 2.7 is installed
      yum:
        name: python2
        state: present

    - name: Ensure Python 3.6 is installed
      yum:
        name: python36
        state: present

    - name: Install alternatives for python
      alternatives:
        name: python
        link: /usr/bin/python
        path: /usr/bin/python3.6
        priority: 50

    - name: Install alternatives for python2
      alternatives:
        name: python2
        link: /usr/bin/python2
        path: /usr/bin/python2.7
        priority: 30

    - name: Install alternatives for python3
      alternatives:
        name: python3
        link: /usr/bin/python3
        path: /usr/bin/python3.6
        priority: 40

    - name: Set default Python version
      alternatives:
        name: python
        path: /usr/bin/python3.6
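
Afterwards, the symlink chain can be confirmed on a target host:

    alternatives --display python
    python --version   # should report Python 3.6.x given the priorities above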


---
- name: Ensure Apache is configured correctly
  hosts: webservers
  become: yes

  vars:
    config_file_path: /etc/httpd/conf/httpd.conf

  tasks:
    - name: Copy Apache configuration file
      copy:
        src: files/httpd.conf
        dest: "{{ config_file_path }}"
      notify: Restart Apache

  handlers:
    - name: Restart Apache
      service:
        name: httpd
        state: restarted

grep -oP '^\w+.*?(\(.*?\))' input.txt | sed 's/.*(\(.*\))/\1/' | awk '{print $1, $NF}'

---
- name: Test new user login
  hosts: target_host
  gather_facts: no
  tasks:
    - name: Ensure the expect module is present
      package:
        name: expect
        state: present
      become: yes

    - name: Test SSH login
      expect:
        command: ssh -o StrictHostKeyChecking=no testuser@localhost whoami
        responses:
          password: "testpassword"
      register: result
      failed_when: "'testuser' not in result.stdout"

    - name: Debug login test result
      debug:
        msg: "SSH login test passed: {{ result.stdout }}"

    - name: Test sudo privileges
      expect:
        command: ssh -o StrictHostKeyChecking=no testuser@localhost 'sudo whoami'
        responses:
          password: "testpassword"
      register: sudo_result
      failed_when: "'root' not in sudo_result.stdout"

    - name: Debug sudo test result
      debug:
        msg: "Sudo test passed: {{ sudo_result.stdout }}"

check a file exists

---
- name: Check if files exist on specified hosts
  hosts: all
  vars_files:
    - file_list.yml
  tasks:
    - name: Check if file exists
      stat:
        path: "{{ item.path }}"
      register: file_status
      loop: "{{ file_list }}"
      when: inventory_hostname == item.host

    - name: Output file path if it exists
      debug:
        msg: "File {{ item.item.path }} exists on {{ item.item.host }}"
      # file_status was registered in a loop, so iterate over its results
      loop: "{{ file_status.results }}"
      loop_control:
        label: "{{ item.item.path }}"
      when: item.stat is defined and item.stat.exists

In the UK, employment law generally mandates that employees must be treated fairly and without discrimination, regardless of whether they work remotely or onsite. Key principles to consider include:

  1. Equality and Non-Discrimination: Equality Act 2010: The Equality Act protects workers from discrimination based on protected characteristics (e.g., age, disability, gender reassignment, race, religion or belief, sex, sexual orientation). Discrimination against remote workers purely based on their working arrangements could potentially be seen as indirect discrimination if it disproportionately affects certain groups.

Fair Treatment: Employers must ensure that their policies do not unfairly disadvantage remote workers compared to onsite workers, unless there is a legitimate and justifiable reason for doing so.

  2. Pay and Benefits: Equal Pay for Equal Work: The principle of equal pay for equal work applies, meaning that remote workers should receive the same pay and benefits as their onsite counterparts if they are performing the same role under similar conditions.

Performance-Related Pay: If bonuses or pay rises are linked to performance, employers should ensure that the criteria used are fair and do not disadvantage remote workers. For example, remote workers should not be penalized simply because they do not work onsite.

  3. Justification for Different Treatment: Business Justifications: There may be situations where different treatment could be justified, such as if onsite workers have additional responsibilities or face higher costs associated with commuting. However, this must be clearly justified and proportionate.

Travel and Attendance: Requiring employees to attend the office or travel as part of their role could justify some differences in treatment if it is a key aspect of their job. However, holding back pay rises or bonuses solely because an employee works remotely, without a valid business reason, could be seen as unfair treatment.

  4. Contractual Obligations: Employment Contracts: Employers must adhere to the terms of the employment contract, which may include provisions regarding pay, bonuses, and working conditions. If remote workers have different terms explicitly stated in their contracts, these must be followed, but any changes must be agreed upon by both parties.

Company Policies: Any differentiation in treatment must align with the company’s policies and procedures, and these must be applied consistently to avoid claims of unfair treatment.

  5. Best Practices: Transparency: Employers should be transparent about the criteria used for pay rises and bonuses and ensure that these are applied fairly to all employees, regardless of their work location.

Communication: Regular communication with remote workers to discuss performance, expectations, and any potential impacts on their pay or bonuses can help avoid misunderstandings and potential grievances.

Conclusion: In summary, while UK law does not prohibit different treatment of remote and onsite workers, such treatment must be justified, fair, and non-discriminatory. Employers should be cautious in withholding pay rises or bonuses from remote workers purely based on their work location, as this could lead to claims of unfair treatment or discrimination.

If there are legitimate business reasons for the differentiation, these should be clearly communicated, and the criteria for pay and bonuses should be applied consistently to avoid legal issues.


---
- name: Check if remote server can ping another server
  hosts: first_server  # This is the server where you'll initiate the ping from
  become: yes  # Optional, depending on if you need to become a specific user
  tasks:
    - name: Check if the remote server can ping another server
      shell: ping -c 3 {{ target_server }}
      register: ping_result
      ignore_errors: yes  # In case ping fails, don't stop the playbook

    - name: Display a success message if ping works
      debug:
        msg: "Ping to {{ target_server }} was successful from {{ inventory_hostname }}"
      when: ping_result.rc == 0  # rc 0 means success

    - name: Display a failure message if ping fails
      debug:
        msg: "Ping to {{ target_server }} failed from {{ inventory_hostname }}"
      when: ping_result.rc != 0  # Non-zero return code means failure
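
target_server is not defined inside the play, so pass it at run time (the playbook name and IP are examples):

    ansible-playbook ping_check.yml -e target_server=192.0.2.10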
      

---
- name: Check NICs and retrieve IP addresses
  hosts: all
  gather_facts: no
  tasks:
    - name: Get IP addresses of all NICs
      shell: ip -o -4 addr show | awk '{print $2, $4}'
      register: nic_info

    - name: Count the number of NICs
      shell: echo "{{ nic_info.stdout_lines | length }}"
      register: nic_count

    - name: Check if the server has two NICs or not
      set_fact:
        nic_output: "{{ nic_info.stdout_lines | join(' ') if nic_count.stdout|int >= 2 else nic_info.stdout_lines[0] + ' NA' }}"
      
    - name: Display the NIC information
      debug:
        msg: "{{ inventory_hostname }}: {{ nic_output }}"


When preparing to discuss the use of the Ansible Automation Platform for deploying VMs in VMware/ESXi, there are several key sections and requirements to consider for a successful deployment strategy. Here's a breakdown of the topics you'll likely want to address:

1. Infrastructure Overview

  • Environment: Describe your current VMware/ESXi environment:
    • Version of vCenter and ESXi.
    • Number of clusters, data centers, and the size of the infrastructure.
    • Network setup and storage configurations (datastores, vSAN, etc.).
  • Connectivity: Ensure that Ansible Automation Platform can reach the vCenter and ESXi hosts. This includes:
    • Firewall rules and network access for the Ansible control nodes.
    • Required ports (e.g., 443 for API communication with vCenter).

2. Ansible Automation Platform Setup

  • Ansible Control Node Requirements: Determine the setup for Ansible Automation Platform itself:
    • Required resources for the Ansible controller and any execution environments.
    • Ansible version and required Python libraries (pyvmomi for VMware).
  • Integrating with vCenter:
    • Ansible requires API access to vCenter to create and manage VMs.
    • Make sure you have a vCenter service account with sufficient permissions (e.g., Datastore, Network, VM creation).

3. Authentication and Permissions

  • Service Account for Automation:
    • Create a dedicated service account in vCenter with the appropriate permissions (e.g., Datastore.Browse, Resource.AssignVMToPool, VirtualMachine.Provision).
  • Credential Management:
    • Determine how credentials (vCenter username/password) will be stored and secured within Ansible (e.g., using Ansible Vault for encrypting sensitive data).
    • Plan for secret management and integration with tools like HashiCorp Vault if needed.

4. Playbooks and Roles for VM Deployment

  • VM Deployment Playbooks: Design the Ansible playbooks that will handle VM creation:

    • Defining VM Specs:
      • CPU, memory, storage, network configurations, and OS.
      • OS templates or ISO locations for building new VMs.
    • Provisioning Workflow:
      • Create and configure a VM from a template or ISO.
      • Attach it to the right network and datastore.
      • Define VM customization parameters (e.g., hostname, IP address).
    • Custom Scripts:
      • Incorporate any scripts that should be executed post-deployment (e.g., configuring SSH, installing software).
  • Reusable Roles:

    • Plan for roles that can be reused across different environments (e.g., creating networks, managing datastores).
    • Separate the logic for different environments (development, staging, production).

5. VM Templates and Images

  • Template Strategy:
    • Decide if you will use existing VM templates or build new ones.
    • Which OS images and versions will be supported?
  • Customization:
    • Use vmware_guest module in Ansible to customize network settings (IP, hostname, DNS) after deployment.
    • Use cloud-init or other tools for Linux VMs to handle post-boot configurations.

6. Network Configuration

  • Virtual Network Configuration:
    • Plan for connecting VMs to the right virtual networks.
    • Which VLANs, port groups, or virtual distributed switches will be used?
  • IP Management:
    • Integration with IPAM systems (if you need to automate IP address assignments).
    • Static vs. DHCP configurations.

7. Storage Configuration

  • Datastore Selection:
    • Define which datastores will host the new VMs.
    • Plan for storage allocation (e.g., thin vs. thick provisioning).
  • Customization for Storage:
    • Any specific disk configurations (e.g., adding additional data disks to VMs).
    • Ensuring VM backups through integration with existing backup solutions.

8. Validation and Testing

  • Deployment Testing:
    • Develop a strategy for testing deployments to ensure that VMs come up correctly.
    • Validate that VMs have the right configurations (networking, hostname, installed software).
  • Integration Testing:
    • Plan for how Ansible can validate VM health (e.g., using Ansible's wait_for modules to ensure services start correctly).

9. Logging and Monitoring

  • Ansible Job Results:
    • Capture logs and results of Ansible job runs for auditing and troubleshooting.
  • vCenter Logs:
    • Determine if there are any additional logging requirements in vCenter for tracking automated actions.

10. Error Handling and Rollback

  • Error Handling:
    • Define steps for handling failures in deployment (e.g., reverting changes if a playbook fails).
  • Rollback Strategy:
    • Plan for removing or reverting a VM if an issue is detected post-deployment.
  • Timeouts and Retries:
    • Specify timeouts and retries in Ansible playbooks for network or API call issues.

Example Playbook Structure

Here is an example of how a simple Ansible playbook might look for deploying a VM:

---
- name: Deploy a VM on VMware
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Create a VM from template
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ datacenter_name }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_name }}"
        template: "{{ vm_template }}"
        cluster: "{{ cluster_name }}"
        networks:
          - name: "{{ network_name }}"
            ip: "{{ vm_ip }}"
            netmask: "{{ vm_netmask }}"
            gateway: "{{ vm_gateway }}"
        hardware:
          memory_mb: 2048
          num_cpus: 2
        wait_for_ip_address: yes
      register: deploy_result

    - name: Output the result of the deployment
      debug:
        var: deploy_result
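
Note that vmware_guest comes from the community.vmware collection and needs the pyvmomi library on the control node; a typical setup looks like:

    ansible-galaxy collection install community.vmware
    pip install pyvmomi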

Summary of Requirements for Your Meeting:

  1. Overview of the existing infrastructure and connectivity needs.
  2. Setup details of Ansible Automation Platform, including integration with vCenter.
  3. Service account requirements and credential management.
  4. Detailed playbook structure for VM creation and customization.
  5. Networking and storage considerations for VMs.
  6. Testing, validation, and monitoring of automated deployments.
  7. Error handling and rollback strategies.

Prepare to discuss how these elements will fit into your specific environment and any questions or concerns you may have about automating VM deployments in VMware/ESXi using Ansible. This will help ensure that Red Hat can provide the right guidance and support for your deployment needs.


When you increase the swap space on a RHEL8 system (e.g., by enlarging the swap disk), there are several configurations and steps you need to follow to ensure the system recognizes the updated swap space. Here's what you need to do:

Steps After Modifying Swap Space:

1. Resize the Swap Partition (if applicable)

If you're increasing the size of the swap space by enlarging the swap partition on a disk, you'll need to adjust the partition using tools like parted or fdisk. Ensure the partition reflects the new size, and format it again as swap:

mkswap /dev/sdXn

Replace /dev/sdXn with the correct partition (e.g., /dev/sdc1).
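
If the swap space lives on an LVM logical volume instead of a raw partition (common on RHEL8), the resize flow looks roughly like this (the volume group and LV names are examples):

    swapoff /dev/rhel/swap
    lvextend -L +2G /dev/rhel/swap
    mkswap /dev/rhel/swap
    swapon /dev/rhel/swap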

2. Update /etc/fstab (If Necessary)

Check and update the /etc/fstab file to ensure it points to the correct swap device or partition. This file ensures that the swap space is activated automatically at boot.

  1. Find the UUID of the new swap device:

    Use blkid to get the UUID of the updated swap partition:

    blkid /dev/sdXn
  2. Edit /etc/fstab:

    Update or ensure that the swap entry in /etc/fstab points to the correct device or UUID. Example of an entry:

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap defaults 0 0

    If the device name is used instead of the UUID, ensure the correct device (e.g., /dev/sdc1) is listed:

    /dev/sdc1 none swap defaults 0 0

3. Update Initramfs (If Necessary)

If the swap device is referenced in the initramfs (for systems using hibernation or where the swap device was embedded in the initramfs), you should rebuild the initramfs to include the updated swap information.

  1. Rebuild the initramfs:

    dracut --force

    This command rebuilds the initial RAM disk, which is used during boot. It ensures that any references to the old swap size or configuration are updated.

4. Activate the New Swap Space

After resizing and configuring the swap partition, activate the new swap space:

  1. Turn off the current swap:

    If the swap space is currently active, deactivate it before making changes:

    swapoff /dev/sdXn
  2. Turn on the new swap:

    Reactivate the updated swap space:

    swapon /dev/sdXn
  3. Verify the swap space:

    Check if the new swap size is correctly configured by running:

    swapon --show

    This command will show you the active swap devices and their sizes. You can also use free -h to confirm the total available swap.

    free -h

5. Update GRUB (If Using Hibernation)

If your system uses hibernation, the resume= parameter in GRUB must point to the updated swap partition.

  1. Edit /etc/default/grub:

    Open the GRUB configuration file:

    vi /etc/default/grub
  2. Update the resume= parameter:

    Find the resume= parameter in GRUB_CMDLINE_LINUX and ensure it points to the correct swap device or UUID. For example:

    GRUB_CMDLINE_LINUX="resume=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    Update it with the correct UUID obtained from blkid.

  3. Rebuild the GRUB configuration:

    After modifying the GRUB file, rebuild the GRUB configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg

6. Reboot (if Necessary)

Although it's not always required, rebooting the system will ensure that all the changes (such as in GRUB and initramfs) are applied cleanly.

reboot

Summary of Steps:

  1. Resize the swap partition using parted or fdisk (if applicable).
  2. Update /etc/fstab to ensure the new swap partition or UUID is referenced.
  3. Rebuild initramfs if your system relies on it (dracut --force).
  4. Activate the new swap with swapon /dev/sdXn.
  5. Update GRUB (if using hibernation) to reference the correct swap device.
  6. Optionally, reboot the system.

These steps will ensure that your RHEL8 system properly recognizes and uses the newly resized swap space. Let me know if you need more help!

