Thursday, October 7, 2021


How to Rename a Network Interface in Linux and Configure Network Bonding or NIC Teaming

Renaming a Network Interface and Configuring Network Bonding or NIC Teaming on CentOS 7 / RHEL 7



Older Linux releases name interfaces eth0, eth1, and so on, while CentOS 7 and later use predictable names such as enp2s0f0. If you would like to switch back to the eth-style names, follow the steps below.

1. Edit file /etc/default/grub and add following

# GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap net.ifnames=0 biosdevname=0 rhgb quiet"

2. Regenerate a GRUB configuration file

# grub2-mkconfig -o /boot/grub2/grub.cfg

3. Rename the ifcfg files under /etc/sysconfig/network-scripts to match the new interface names

# mv ifcfg-enp2s0f0 ifcfg-eth0 
# mv ifcfg-enp2s0f1 ifcfg-eth1
# mv ifcfg-ens1f0 ifcfg-eth2
# mv ifcfg-ens1f1 ifcfg-eth3
# mv ifcfg-ens2f0 ifcfg-eth4
# mv ifcfg-ens2f1 ifcfg-eth5

4. Edit the NAME and DEVICE parameters in each ifcfg file

# vi ifcfg-eth0     (change NAME and DEVICE to eth0; repeat for every ifcfg-eth* file)
# systemctl disable NetworkManager

5. Then reboot the server to apply the changes.

# shutdown -r now
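
After the reboot, it is worth confirming that the rename took effect. These two checks are an addition to the original steps:

# cat /proc/cmdline     (net.ifnames=0 biosdevname=0 should appear on the kernel command line)
# ip -o link show       (interfaces should now be listed as eth0, eth1, ...)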

Configuring Network Bonding or Teaming:

1. Edit and update each physical NIC with a configuration like the following examples.

	     [] ifcfg-eth0 - kickstart interface

	TYPE=Ethernet
	PROXY_METHOD=none
	BROWSER_ONLY=no
	BOOTPROTO=dhcp
	DEFROUTE=yes
	IPV4_FAILURE_FATAL=no
	IPV6INIT=yes
	IPV6_AUTOCONF=yes
	IPV6_DEFROUTE=yes
	IPV6_FAILURE_FATAL=no
	IPV6_ADDR_GEN_MODE=stable-privacy
	NAME=eth0
	UUID=1c126857-adca-436d-8ffe-5cf1a6497937
	DEVICE=eth0
	ONBOOT=no
	
        
  
	     [] ifcfg-bond0 - Bonding interface [Master]
	
	DEVICE=bond0
	BONDING_OPTS="mode=4 miimon=100 updelay=6000 lacp_rate=1"
	BONDING_MASTER=yes
	BOOTPROTO=none
	NM_CONTROLLED=no
	IPV6INIT=no
	NAME=bond0
	ONBOOT=yes
	   
	    [] ifcfg-bond0.351
	
	DEVICE=bond0.351
	BOOTPROTO=none
	IPADDR=192.168.46.131
	NETMASK=255.255.255.0
	DOMAIN="acg.com corp.acg.com"
	GATEWAY=192.168.46.1
	DNS1=192.168.24.30
	DNS2=192.168.25.30
	ONBOOT=yes
	NM_CONTROLLED=no
	IPV6INIT=no
	NAME=bond0.351
	VLAN=yes
	
	     [] ifcfg-eth1 - Slave interface
	
	TYPE=Ethernet
	IPV6INIT=no
	NAME=eth1
	UUID=1a008bd9-f21d-42f9-905e-c5dc09f87745
	DEVICE=eth1
	ONBOOT=yes
	MASTER=bond0
	SLAVE=yes
	NM_CONTROLLED=no
	
	    [] ifcfg-eth2 - Slave interface
	
	TYPE=Ethernet
	IPV6INIT=no
	NAME=eth2
	UUID=9233e629-70a0-49cb-9f1d-b7deb39b1571
	DEVICE=eth2
	ONBOOT=yes
	MASTER=bond0
	SLAVE=yes
	NM_CONTROLLED=no
	
	    [] ifcfg-eth3 - iSCSI interface [Physical]
	
	TYPE=Ethernet
	BOOTPROTO=none
	IPV6INIT=no
	NAME=eth3
	UUID=8c233834-8d55-4071-9890-839989c15f85
	DEVICE=eth3
	ONBOOT=yes
	NM_CONTROLLED=no
	
	    [] ifcfg-eth3.705 - iSCSI interface
	
	DEVICE=eth3.705
	BOOTPROTO=none
	ONBOOT=yes
	IPADDR=172.16.5.131
	PREFIX=24
	NETWORK=172.16.5.0
	VLAN=yes
	
	
	    [] ifcfg-eth4 - iSCSI interface [Physical]
	
	TYPE=Ethernet
	BOOTPROTO=none
	IPV6INIT=no
	NAME=eth4
	UUID=9f5656af-ef5d-4073-a85f-856a3bcd1c17
	DEVICE=eth4
	ONBOOT=yes
	NM_CONTROLLED=no
	
	    [] ifcfg-eth4.706 - iSCSI interface
	
	DEVICE=eth4.706
	BOOTPROTO=none
	ONBOOT=yes
	IPADDR=172.16.6.131
	PREFIX=24
	NETWORK=172.16.6.0
	VLAN=yes
	
	    [] ifcfg-eth5
	
	TYPE=Ethernet
	PROXY_METHOD=none
	BROWSER_ONLY=no
	BOOTPROTO=dhcp
	DEFROUTE=yes
	IPV4_FAILURE_FATAL=no
	IPV6INIT=yes
	IPV6_AUTOCONF=yes
	IPV6_DEFROUTE=yes
	IPV6_FAILURE_FATAL=no
	IPV6_ADDR_GEN_MODE=stable-privacy
	NAME=eth5
	UUID=7aafaebe-dad8-4bfd-a778-f1a8e8f143b0
	DEVICE=eth5
	ONBOOT=no

2. Restart the server or network service

# systemctl restart network
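
Once the network is back up, the bond itself can be checked; these commands are an addition to the original write-up:

# cat /proc/net/bonding/bond0     (shows the 802.3ad/LACP status and the slave interfaces eth1 and eth2)
# ip -d link show bond0.351       (confirms the tagged VLAN interface is up on top of the bond)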



How to Increase or Extend iSCSI Volume

Increasing or Extending iSCSI Volume on CentOS 7/RedHat 7 server. 




1. Increase the LUN size on the storage array. For example, on the Dell EMC Unity, navigate to Storage -> Block -> LUNs and increase the required LUN's size.

2. On the Linux server, identify the multipath device in use and its underlying paths (sdb, sdc, and so on).

3. The following command lists the multipath map in use and its underlying paths:

# multipath -l
dev-vm1_vws1 (3600601600ed04500bfc4f95ab84860ae) dm-3 DGC     ,VRAID           
	size=1.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
	`-+- policy='round-robin 0' prio=0 status=active
	  |- 1:0:0:0 sdb 8:16  active undef running
	  |- 2:0:0:0 sdc 8:32  active undef running
	  |- 3:0:0:0 sdd 8:48  active undef running
	  |- 4:0:0:0 sde 8:64  active undef running
	  |- 5:0:0:0 sdf 8:80  active undef running
	  |- 6:0:0:0 sdg 8:96  active undef running
	  |- 7:0:0:0 sdh 8:112 active undef running
	  `- 8:0:0:0 sdi 8:128 active undef running

4. Rescan each underlying device. In the example above these are sdb, sdc, sdd, sde, sdf, sdg, sdh and sdi.

# echo 1 > /sys/block/sdb/device/rescan 
# echo 1 > /sys/block/sdc/device/rescan 
# echo 1 > /sys/block/sdd/device/rescan 
# echo 1 > /sys/block/sde/device/rescan 
# echo 1 > /sys/block/sdf/device/rescan 
# echo 1 > /sys/block/sdg/device/rescan 
# echo 1 > /sys/block/sdh/device/rescan 
# echo 1 > /sys/block/sdi/device/rescan
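
If the LUN has many paths, the same rescan can be done in a single loop (device names taken from the example above):

# for d in sdb sdc sdd sde sdf sdg sdh sdi; do echo 1 > /sys/block/$d/device/rescan; done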

5. Resize the multipath map

# multipathd resize map dm-3

6. Resize the file system using xfs_growfs (or resize2fs for ext4)

# xfs_growfs /dev/mapper/dev-vm1_vws1 (use resize2fs for ext4 file system)

7. Finally, run df -h against the mount point of the extended file system to confirm the new size.


How to Configure an NIS Client on Ubuntu 20 and Ubuntu 18

Configuring an NIS client on Ubuntu 20 and Ubuntu 18




1. Use the following nisclient_ubuntu20.sh script to configure the NIS client on an Ubuntu 20 system.

# nisclient_ubuntu20.sh
#!/bin/bash

## Install NIS and RPCbind packages
## During this step you need to enter the domain for NIS

apt-get install -y rpcbind nis

## The NIS domain can be corrected in /etc/defaultdomain if you entered the wrong one during the install

## Add the NIS servers to /etc/yp.conf

NISSERVER1="<NIS Server1 FQDN>"
NISSERVER2="<NIS Server2 FQDN>"
DOMAIN="<NIS domain configured in your network>"
echo "domain $DOMAIN server $NISSERVER1" >> /etc/yp.conf
echo "domain $DOMAIN server $NISSERVER2" >> /etc/yp.conf

## Edit /etc/nsswitch.conf (Ubuntu 20.04)
sed -i 's/systemd$/systemd nis/g;s/files$/files nis/g;s/dns$/dns nis/g' /etc/nsswitch.conf

## Update the NIS domain
echo "$DOMAIN" > /etc/defaultdomain

## Restart the NIS services
systemctl restart rpcbind
systemctl restart nis

2. Use the following nisclient_ubuntu18.sh script to configure the NIS client on an Ubuntu 18 system.

# nisclient_ubuntu18.sh
#!/bin/bash

## Install NIS and RPCbind packages
## During this step you need to enter the NIS domain 

apt-get install -y rpcbind nis

## domain can be updated under /etc/defaultdomain

## Add the following line to /etc/yp.conf

NISSERVER1="<NIS Server1 FQDN>"
NISSERVER2="<NIS Server2 FQDN>"
DOMAIN="<NIS Domain>"
echo "domain $DOMAIN server $NISSERVER1" >> /etc/yp.conf
echo "domain $DOMAIN server $NISSERVER2" >> /etc/yp.conf

## Edit /etc/nsswitch.conf (Ubuntu 18.04)
sed -i 's/systemd$/systemd nis/g;s/compat$/compat nis/g;s/dns$/dns nis/g' /etc/nsswitch.conf

## Update the NIS domain
echo "$DOMAIN" > /etc/defaultdomain

## Restart the NIS services
systemctl restart rpcbind
systemctl restart nis
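
After either script runs, a quick way to confirm the client is bound to a server (these checks are an addition to the original post):

# ypwhich                  (should print one of the NIS servers from /etc/yp.conf)
# ypcat passwd | head -3   (confirms the passwd map is being served)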



How to check or restart nagios service

Checking Nagios service status and/or restarting the nagios service. 




1. Check the status of the Nagios service (an OMD site in this example) with the following command:

# omd status
Doing 'status' on site nagios:
mkeventd:       running
rrdcached:      running
npcd:           running
nagios:         running
apache:         running
stunnel:        running
xinetd:         running
crontab:        running
-----------------------
Overall state:  running

2. Restart the Nagios service:

# omd restart
Doing 'restart' on site nagios:
OK
Removing Crontab...OK
Stopping xinetd...OK
Stopping stunnel...waiting for termination...OK
Stopping apache...killing 54173.................OK
Stopping nagios...not running...OK
Stopping npcd...OK
Stopping rrdcached.../omd/sites/nagios/etc/rc.d/20-rrdcached: line 77: kill: (54117) - No such process
Failed
Stopping mkeventd...killing 54107....OK
Starting mkeventd...OK
Starting rrdcached...removing stale pid file...
OK
Starting npcd...OK
Starting nagios...OK
Starting apache...OK
Starting stunnel...OK
Starting xinetd...OK
Initializing Crontab...OK
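
Depending on the OMD version, a single service inside the site can also be restarted by naming it instead of bouncing the whole site; check omd help on your installation to confirm the exact syntax:

# omd restart nagios rrdcached    (restart only rrdcached in the site named nagios)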



Wednesday, October 6, 2021


SaltStack commands for installing packages

SaltStack commands and state files for installing a package on a remote server (a salt-minion client)



1. The following state file installs, activates and starts the qualys-cloud-agent on RedHat and CentOS systems:

# vi qualys-cloud-agent-redhat.sls 
---

{% if grains['os_family'] == "RedHat" -%}

install qualys-cloud-agent:
  pkg.installed:
    - sources:
      - qualys-cloud-agent: salt://_files//qualys-cloud-agent.x86_64.rpm

activate qualys-cloud-agent:
  module.run:
    - name: cmd.run
    - cmd: "/usr/local/qualys/cloud-agent/bin/qualys-cloud-agent.sh ActivationId=c4171172-7e24-47d8-a995-4364923c3b54 CustomerId=aac8d569-c20c-3a9f-e040-10ac130471e6"

enable qualys-cloud-agent service:
  service.running:
    - name: qualys-cloud-agent
    - enable: True
    - reload: True

{% endif %}

2. The following state file installs, activates and starts the qualys-cloud-agent on Debian and Ubuntu systems:

# vi qualys-cloud-agent-debian.sls 
---

{% if grains['os_family'] == "Debian" -%}

install qualys-cloud-agent:
  pkg.installed:
    - sources:
      - qualys-cloud-agent: salt://_files//qualys-cloud-agent.x86_64.deb

activate qualys-cloud-agent:
  module.run:
    - name: cmd.run
    - cmd: "/usr/local/qualys/cloud-agent/bin/qualys-cloud-agent.sh ActivationId=c4171172-7e24-47d8-a995-4364923c3b54 CustomerId=aac8d569-c20c-3a9f-e040-10ac130471e6"

enable qualys-cloud-agent service:
  service.running:
    - name: qualys-cloud-agent
    - enable: True
    - reload: True

{% endif %}

3. To push / install the package to a single host, use the following command; for a group of systems, target the nodegroup name instead (see the example after the command).

# salt dev-vm1.acg.com state.apply qualys-cloud-agent-debian
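
The same state can be pushed to a whole nodegroup; here devsrv is a hypothetical nodegroup, defined the same way as in the server-side SaltStack post below:

# salt -N devsrv state.apply qualys-cloud-agent-redhat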

4. Use the following state file to create the sshusers group. The group is pushed to /etc/group on the client, and only members of sshusers will be allowed to log in over SSH; everyone else is blocked (see the sshd-server state below).

# vi sshusers.sls
---
sshusers admin access:
  group.present:
    - name: sshusers
    - members:
      - root
      - dev_user1
      - dev_user2
     

5. Use the following state file to grant sudo access to the users listed in the server_admins file, managed at the path shown below.

# vi sudoers.sls
setup server_admins sudoers access:
  file.managed:
    - name: /etc/sudoers.d/server_admins
    - source: salt://_files/sudoers/server_admins
    - user: root
    - group: root
    - mode: 440

6. Use the following state file to manage /etc/auto.master and the automount configuration on the specified servers.

# vi automount.sls
---
# RHEL5 & RHEL6
{% if salt['my_helpers.occurrences']('ldap', '/etc/auto.master') > 0 %}
Remove all brocade data (ldap):
  file.absent:
    - name: /etc/auto.master
{% endif %}

## RHEL7
{% if salt['my_helpers.occurrences']('sss', '/etc/auto.master') > 0 %}
Remove all brocade data (sss):
  file.absent:
    - name: /etc/auto.master
{% endif %}

copy /etc/auto.master:
  file.managed:
    - name: /etc/auto.master
    - source: salt://_files/asic/_etc_auto.master

reload autofs daemon:
  service.running:
    - name: autofs
    - enable: True
    - reload: True
    - watch:
      - file: /etc/auto.master

7. To restrict SSH access to anyone outside the sshusers group, use the following state file.

# vi sshd-server.sls
---

include:
  - nisclient

{% if grains['os_family'] == "RedHat" -%}
{% if grains['osmajorrelease'] == 5  %}
{% if grains['osarch'] == "x86_64"  %}

/usr/local/sbin/sshd:
  file.managed:
  - source: salt://_files/el5/_usr_local_sbin_sshd
  - user: root
  - group: root
  - mode: 755

/etc/sysconfig/sshd:
  file.managed:
  - source: salt://_files/el5/_etc_sysconfig_sshd
  - user: root
  - group: root
  - mode: 644

{% if salt['file.file_exists']('/usr/local/sbin/sshd') -%}

/etc/init.d/sshd:
  file.replace:
    - name: /etc/init.d/sshd
    - pattern: SSHD=.*
    - repl: SSHD=/usr/local/sbin/sshd
    - append_if_not_found: True
    - backup: master
  service.running:
    - name: sshd
    - watch:
      - file: /etc/init.d/sshd

{% endif %}
{% endif %}

{% endif %}
{% endif %}

Disable GSSAPIAuthentication:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: 'GSSAPIAuthentication yes'
    - mode: delete

Disable GSSAPICleanupCredentials:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: 'GSSAPICleanupCredentials yes'
    - mode: delete

Uncomment and disable UseDNS:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: ^#UseDNS .*
    - repl: UseDNS no
    - append_if_not_found: True
    - backup: master

Set UseDNS to "no":
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: UseDNS .*
    - repl: UseDNS no
    - append_if_not_found: True
    - backup: master
  service.running:
    - name: sshd
    - watch:
      - file: /etc/ssh/sshd_config

sshusers group access:
  group.present:
    - name: sshusers
    - gid: 1000
    - system: True
    - addusers:
      - root

Copy nologin script:
  file.managed:
    - name: /opt/script/nologin
    - source: salt://_files/_nologin
    - mode: 755
    - makedirs: True

/etc/ssh/sshd_config:
  file.append:
    - name: /etc/ssh/sshd_config
    - text: |

        # Allow access to sshusers group
        Match Group *,!sshusers
            ForceCommand /opt/script/nologin
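
After this state is applied, it is worth validating the sshd configuration on the minion before closing your own session, since a bad Match block can lock everyone out. This check is an addition to the post:

# sshd -t     (or /usr/local/sbin/sshd -t on the EL5 hosts handled above)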

8. The following script is the ForceCommand target used above; it denies login to anyone who is not in the sshusers Match group.

# nologin
#!/bin/sh
echo -ne "\e[31m\e[1m"
cat << EOF
####################################################
#                                                  #
# You are not authorized to log on to this machine #
#                                                  #
####################################################
EOF
echo -ne "\e[0m"

9. The server_admins sudoers file referenced by the sudoers state above:

# vi ~/_files/sudoers/server_admins
#
# server_admins sudoers file
#
User_Alias DEV=dev_user1,dev_user2,dev_user3
DEV        ALL=(ALL) NOPASSWD: ALL
qa_user1   ALL = (root) ALL
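
Before pushing the file out, it can be syntax-checked locally with visudo; this is an extra step, not part of the original post (path as above):

# visudo -cf ~/_files/sudoers/server_admins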

10. To get full inventory of a host

# salt nc-efabuild-01.extremenetworks.com grains.items
# salt nc-efabuild-01.extremenetworks.com grains.item os_family
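
Grains can also be used for targeting; for example, to ping every RedHat-family minion (an extra illustration, not from the original post):

# salt -G 'os_family:RedHat' test.ping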



SaltStack server-side commands

SaltStack configuration management commands run from the salt-master



1. Switch to the saltadmin user and perform the Salt configuration management changes.

2. The state files contain all the configuration files that are pushed to the salt-minions. Refer to the following to see where they live:

#  pwd
/srv/salt
# ls
etc  _files  _modules  _pillar  RCS  _states  var

3. To create a node group, use the following:

# cd /srv/salt/etc/master.d
# vi devservers.conf
---
#
# List of Dev Servers

nodegroups:
  devsrv:
    - dev-vm1.acg.com
    - dev-vm2.acg.com
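
Nodegroups are part of the master configuration, so the salt-master service typically needs a restart to pick up the new file (an extra step, not mentioned in the original post):

# systemctl restart salt-master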

4. To push a state file to the node group, use the following commands. The sshusers state file must exist under the _states folder for it to be pushed to the dev servers.

# salt -N <node-group-name> state.apply sshusers
# salt -N devsrv state.apply sshusers

5. To check reachability of all the servers in the node group you can use the following command

# salt -N devsrv test.ping

6. To check the ping for single host you can use the following command

# salt dev-vm1.acg.com test.ping
# salt dev-vm1* state.highstate
# salt dev-vm1* state.apply devservers.sshusers     (here devservers is a folder under _states and sshusers.sls lives inside it)

7. top.sls is the file all the minions consult and apply their configuration from, so make sure you update it whenever you add or change states.

# cat /srv/salt/_states/top.sls
---

base:
  # All minions get the following three state files applied
  '*':
    - dnsclient
    - saltstack-patch
    - motd
    # - ntpd
    # - smtp
    - salt-agent
    # - check_mk
    # (not required) - services
    # - post-grains

  devsrv:
    - match: nodegroup
    - nisclient
    - pam
    - root_crontab
    - sshd-server
    - devsrv-sshusers
    - devsrv-automount
    - salt-minion
  # similarly, add entries for other nodegroups
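
Before applying a highstate across a nodegroup, a dry run shows what would change without touching anything (an extra illustration):

# salt -N devsrv state.highstate test=True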

8. If you have two salt-masters (a master and a slave), sync any changes to states and files to the other salt-master with the following commands:

# sync-states
# sync-files




How to Start and Stop ClearCase Services

Stopping and starting ClearCase services



1. Use the following commands to start and stop the ClearCase service during server maintenance.

# /etc/rc.d/init.d/clearcase start 
# /etc/rc.d/init.d/clearcase stop

2. Use the following command to verify that the service (the albd daemon) is running:

# ps -ef | grep albd

3. To verify whether ClearCase is installed, check for the following path:

# /opt/ibm/RationalSDLC/clearcase/linux_x86
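
If the install is present, cleartool should also report its version. The exact path below is an assumption based on the install directory above, so adjust it for your installation:

# /opt/ibm/RationalSDLC/clearcase/bin/cleartool -version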

4. To verify whether a build is running on that ClearCase build server:

# ps -ef | grep -E "make|emake|emake_wrapper|build_harness"



Using rsync to migrate data from one storage to another storage

Migrating data from one server to another server || Migrating data from old storage to new storage



To migrate or copy data from one server to another we can use rsync or scp. I suggest using rsync, since it preserves permissions, symlinks and the rest of the metadata when the data is copied to the new location.

In this example, I am migrating data from an older VNX array to a new Unity 680F array. First, I created a new file system on the new storage, sized to accommodate the old storage's data, and enabled SMB on it so the same share can be accessed from the Windows side.

For how to create a file system on the Unity storage, refer to the separate post on that topic.

I created the new file system on the Unity storage, then mounted both the new file system and the old one on the acg-vm1 CentOS 7 server; from there I run rsync to copy the data to the new file system while preserving file permissions.

1. Use the following rsync command to migrate the data. The '.ckpt*' exclude keeps the snapshot directories out of the copy, the trailing & runs the job in the background, and 2>&1 sends both standard output and standard error to the log file. (A dry-run sketch follows the script.)

# vi home-data-sync.sh
#!/bin/bash

/usr/bin/rsync -v -v -a -P -x -H -p -E --exclude '.ckpt*' --delete-excluded --ignore-errors --delete /mnt/oldhome/* /mnt/new-home1/ > /root/oldhome/oldhome.log 2>&1 &
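
Before kicking off the full copy, a dry run (-n) of the same transfer is a cheap sanity check of what rsync would do; this is an extra step, not part of the original script:

# /usr/bin/rsync -n -a -v -x -H --exclude '.ckpt*' /mnt/oldhome/ /mnt/new-home1/ | head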

2. Once the rsync job completes, check the log file for any reported errors, or run echo $? right after the job finishes: an exit code of 0 means no errors.


# echo $?
0

3. Verify that the sizes of the old and new locations match, to make sure all data was copied over (see the example below).
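
A quick comparison using the paths from the script above (an extra illustration):

# du -sh /mnt/oldhome /mnt/new-home1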

4. You can use the following command to find the usage per directory. If you do not need the sizes sorted, remove the 'sort -rh'. This reports the size of every top-level folder under /mnt/oldhome/:

# cd /mnt/oldhome/
# du -h --max-depth=1 *   --exclude=terminated   | sort -rh > /root/oldhomedir_usage-10062021.txt



How to schedule a cron job in linux

Scheduling cron job on Linux 

First, create the bash or shell script for the job you want to perform. Then schedule it with crontab so it recurs hourly, daily, weekly or monthly as required.



1. Sample script for deleting files older than 5 days, so that only the last 5 days of backups are kept:

# vi deletebackup.sh
#!/bin/bash
find /opt/jirabackup/export/20* -type f -mtime +5 -exec rm -rf {} \;

2. Edit the crontab to schedule this delete-backup job to run every day at 3 AM:

# crontab -e
00 03 * * * /root/deletebackup.sh >> /tmp/deletebackup.sh.log 2>&1

3. To edit the crontab for another user, use the following command:

# crontab -u acg_user1 -e

4. The cron time format for reference; build your cron jobs for regular tasks according to this format:

# tail /etc/crontab 

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name  command to be executed
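
A couple of illustrative entries built from this format (the script names are hypothetical):

# run at 2:00 AM every Sunday
00 02 * * 0 /root/weeklyreport.sh >> /tmp/weeklyreport.log 2>&1
# run every 4 hours
0 */4 * * * /root/diskcheck.sh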

5. To list the scheduled cron job use the following command

# crontab -l



How to re-register salt-key on salt-master

Re-registering the salt-key on the salt-master when the key fails to register or registers with the wrong hostname



Perform the following steps on the salt-minion side to fix the hostname issue on the salt-master.

1. Update the /etc/hosts file with the correct hostname that the minion should report.

2. Then on salt-minion stop salt-minion service

# systemctl stop salt-minion
OR
# service salt-minion stop

3. Delete the salt-minion keys. Find the salt-minion key location and then remove the minion.pem and minion.pub keys.

# rm -rf /opt/salt/etc/pki/minion/minion.p*
/opt/salt/etc/pki/minion
    - minion.pem
    - minion.pub

Perform the following steps on the salt-master side to fix the hostname issue

1. Delete the salt-key for the respective server

# salt-key -d acg-vm1.acg.com

2. After that restart the salt-minion service on the client side.

# systemctl restart salt-minion

3. Then verify on the salt-master that the key is registered, using the following command (see the note after it if the key shows as unaccepted).

# salt-key -L | grep acg-vm1
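
If the key shows up under Unaccepted Keys rather than Accepted Keys (for example, when auto-accept is disabled on the master), accept it manually:

# salt-key -a acg-vm1.acg.com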

4. Once the key shows up on the salt-master, you should be able to push configuration changes to the client (the salt-minion).

5. To verify the salt-minion service on the client side use the following command

# systemctl status salt-minion
# service salt-minion status
Redirecting to /bin/systemctl status salt-minion.service
● salt-minion.service - The Salt Minion
   Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-08-06 23:35:55 EDT; 1 months 29 days ago
     Docs: man:salt-minion(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltstack.com/en/latest/contents.html
 Main PID: 1008 (python2.7)
   CGroup: /system.slice/salt-minion.service
           ├─1008 /opt/salt/bin/python2.7 /opt/salt/bin/salt-minion --config-dir=/opt/salt/etc
           ├─1769 /opt/salt/bin/python2.7 /opt/salt/bin/salt-minion --config-dir=/opt/salt/etc
           └─1776 /opt/salt/bin/python2.7 /opt/salt/bin/salt-minion --config-dir=/opt/salt/etc

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

