Nagios

Nagios is a tool used to monitor IT infrastructure. On the server:

apt-get install -y nagios3 nagios-nrpe-plugin

The site will be available at: http://localhost/nagios3.

cd /etc/nagios3/conf.d

On the client side (install NRPE):

apt-get install -y nagios-plugins nagios-nrpe-server

This next step is where you specify any manual commands that the monitoring server can send via NRPE to these client hosts.

Edit /etc/nagios/nrpe.cfg and make sure to change allowed_hosts to your own value:

allowed_hosts=<Nagios Server IP>

Then restart the NRPE service:

service nagios-nrpe-server restart
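Once NRPE is restarted, you can check from the monitoring server that the client answers. A sketch (the IP is the example client address used later in this doc, and the plugin path assumes the Ubuntu package layout):

```shell
# Run on the Nagios server; 10.45.1.140 is the example client host.
# A healthy daemon replies with its version string, e.g. "NRPE v2.15".
/usr/lib/nagios/plugins/check_nrpe -H 10.45.1.140
```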

Add Server Configurations on Monitoring Server

Path: /etc/nagios3/conf.d

define host {
use generic-host
host_name lp
alias lp
address 10.45.1.140
}

define service {
use generic-service
host_name lp
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}

define service {
use generic-service
host_name lp
service_description SSH
check_command check_ssh
notifications_enabled 0
}

define service {
use generic-service
host_name lp
service_description Current Load
check_command check_load!5.0!4.0!3.0!10.0!6.0!4.0
}

define service{
use generic-service ; Name of service template to use
host_name lp
service_description Disk Space
check_command check_all_disks!20%!10%
}

define service{
use generic-service ; Name of service template to use
host_name lp
service_description Check SNMP
check_command snmp_procname
#check_command check_snmp -H 10.45.1.183 -c public -o .1.3.6.1.4.1.2021.3000.21
}

service nagios3 restart

start monitoring the client.

For easy deployment, you can also use a Nagios jumpbox image, which is available on AWS.

Basic Troubleshooting:
Enabling external commands in Nagios on Ubuntu (if you hit the error: Error: Could not stat() command file '/var/lib/nagios3/rw/nagios.cmd'!):
service nagios3 stop
dpkg-statoverride --update --add nagios www-data 2710 /var/lib/nagios3/rw
dpkg-statoverride --update --add nagios nagios 751 /var/lib/nagios3
service nagios3 start

Custom Command and Service for hostname.cfg:

define command{
command_name check_snmp
command_line $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o $ARG1$
}

define service{
use generic-service
host_name lp2
service_description Memory
check_command check_snmp!.1.3.6.1.4.1.2021.9.1.7.1
}

The host address and community string are already in the command definition; we only need to pass the OID as the last argument, appended after an exclamation mark (!).
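To see how that argument passing works, here is a sketch of how Nagios expands the check_snmp command above: $USER1$ is the plugins path and $ARG1$ is the OID supplied after the "!" in check_command (the host address is an example value from this doc):

```shell
# Sketch of Nagios macro expansion for: check_snmp!.1.3.6.1.4.1.2021.9.1.7.1
USER1=/usr/lib/nagios/plugins       # $USER1$: plugin directory
HOSTADDRESS=10.45.1.183             # $HOSTADDRESS$: example client address
ARG1=.1.3.6.1.4.1.2021.9.1.7.1      # $ARG1$: the OID after the "!"
echo "$USER1/check_snmp -H $HOSTADDRESS -C public -o $ARG1"
```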

NRPE – Nagios Remote Plugin Executor
It is used to remotely execute Nagios plugins on other Linux/Unix machines. This allows you to monitor remote machine metrics (disk usage, CPU load, etc.). NRPE can also communicate with some of the Windows agent addons, so you can execute scripts and check metrics on remote Windows machines as well.
Example of NRPE:
Things to consider:
configuration file: /etc/nagios/nrpe.cfg:
allowed_hosts=127.0.0.1,54.85.162.178,10.94.0.122
dont_blame_nrpe=1 (allows passing optional command arguments to the Nagios plugins)
Uncomment the following if you want to monitor these:

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

(Note: the name in command[check_disk] on the client and the name the server requests via check_nrpe must match on both sides.)
Example of setting up the service on the server:
/etc/nagios3/conf.d/
vi lp2.cfg

define command {
command_name check_nrpe_disk
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_disk
}

define service {
use generic-service
# hostgroup_name nrpe-services
host_name lp2
service_description My Disk space
check_command check_nrpe_disk
}

Ref: https://www.digitalocean.com/community/tutorials/how-to-install-nagios-on-ubuntu-12-10
http://www.techrepublic.com/blog/linux-and-open-source/nagios-monitoring-with-nrpe-allows-better-tracking-of-remote-systems/
http://assets.nagios.com/downloads/nagiosxi/docs/NRPE-Troubleshooting-and-Common-Solutions.pdf

GIT Tutorial Part 1:

Git is a distributed version control system, unlike SVN, which is a client-server system. On Ubuntu and Debian, install Git with apt-get install git.

1.    Initialise the Git repository:

$ mkdir gitrepo

$ cd gitrepo

$ git init

This creates a directory named .git, where Git stores all the indexes it needs to track the files in the repository. It contains various files; usually the only one you may need to edit is config.

2.    Performing your First Commit:

$ git add <file name>, or to add all the files in the repository, git add .

$ git commit -m "Message for the commit"

Always remember the 3 cycles:

Basic Cycle:

  • Make changes
  • Add the changes
  • Commit the changes to the repository with a message.
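The three-step cycle above can be run end to end in a throwaway repository (a sketch; file and directory names are examples, and the user.name/user.email values are placeholders so the commit works without global config):

```shell
# Basic cycle: change, add, commit.
cd "$(mktemp -d)"
git init -q cyclerepo && cd cyclerepo
echo "hello" > notes.txt                  # 1. make changes
git add notes.txt                         # 2. add (stage) the changes
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add notes file"         # 3. commit with a message
git log --oneline
```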

3.    Commit Messages:

Present tense, not past tense

Short, single line

 4.    Viewing the Commit Log

$ git log

Each log entry shows the commit ID, author, and date.

Each commit has a unique ID called a SHA1.

To view help: git help log

git log -n 1

git log --since=2012-06-15

git log --author="daya"

git log --grep="Init"

5.    Referring to Commits

  • Git refers to each commit by a unique number called a SHA1; i.e., a change set is represented by a checksum.
  • When we submit each change to the Git repository, Git generates a checksum for that change set.
  • Applying the checksum algorithm to the data produces a simple number.
  • The same data always has the same checksum value.
  • In Git, data integrity is fundamental: changing the data would change the checksum.
  • Git uses the SHA1 algorithm to create checksums (40-character hex strings).
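The "same data, same checksum" property can be seen directly with the sha1sum tool (a sketch; file names are examples):

```shell
# Identical content always hashes to the identical 40-character SHA1 digest,
# which is the property Git's object IDs rely on.
cd "$(mktemp -d)"
printf 'same data\n' > a.txt
printf 'same data\n' > b.txt
sha1sum a.txt b.txt    # both lines carry the same 40-character hex digest
```

The Git equivalent is git hash-object <file>, which prints the object ID Git would assign to that content.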

6.    Making changes to file:

git status shows the differences between the working directory, the staging index, and the repository.

Macs-MacBook-Pro:gitrepo daya$ git status

# On branch master

# Untracked files:

#   (use "git add <file>..." to include in what will be committed)

#

#                                       loop.py

#                                       repo.sh

nothing added to commit but untracked files present (use "git add" to track)

Macs-MacBook-Pro:gitrepo daya$ git add repo.sh

Macs-MacBook-Pro:gitrepo daya$ git status

# On branch master

# Changes to be committed:

#   (use "git reset HEAD <file>..." to unstage)

#

#                                       new file:   repo.sh

#

# Untracked files:

#   (use "git add <file>..." to include in what will be committed)

#

#                                       loop.py

(Here one file is in the staging index and one file is in the working directory.)

Macs-MacBook-Pro:gitrepo daya$ git commit -m "Added the repo file"

[master 2969cc4] Added the repo file

1 file changed, 2 insertions(+)

create mode 100644 repo.sh

git commit -m "message"

will commit the files in the staging index.

7.    Editing Files:

Macs-MacBook-Pro:gitrepo daya$ git status

# On branch master

# Changes not staged for commit:

#   (use "git add <file>..." to update what will be committed)

#   (use "git checkout -- <file>..." to discard changes in working directory)

#

#                                       modified:   repo.sh

#

git add repo.sh (adds the file to the staging index)

git commit -m "Modified the repo with apt-get"

8.    Viewing changes with diff:

git diff or git diff filename
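A minimal diff demo in a throwaway repository (a sketch; file names and the user.name/user.email values are placeholders):

```shell
# git diff shows unstaged edits against the last staged/committed state.
cd "$(mktemp -d)" && git init -q .
printf 'line one\n' > f.txt
git add f.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add f"
printf 'line one\nline two\n' > f.txt
git diff f.txt          # shows the unstaged "+line two" addition
```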

Tracking User History in Linux

On a Linux system, users and their work are tracked through the shell history. A user can easily delete the history and its file, .bash_history. To preserve the history file and prevent users from deleting it, the following tweaks to bash can be set. They redirect every command typed at the command line to syslog via the logger command, and set the history file and history size to limits that cannot be cleared by the user.

Paste these in your /etc/profile file:

#Redirect the history to syslog.
export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "$USER[$$] $SSH_CONNECTION")'

#Prevent unset of the history file.
export HISTFILE=~/.bash_history
export HISTSIZE=10000
export HISTFILESIZE=999999
export HISTTIMEFORMAT="%F %T:"
# Don't let users enter commands that are ignored in the history file.
HISTIGNORE=""
HISTCONTROL=""
readonly HISTFILE
readonly PROMPT_COMMAND
readonly HISTSIZE
readonly HISTFILESIZE
readonly HISTIGNORE
readonly HISTCONTROL
readonly HISTTIMEFORMAT

# Override the default history settings in the user's ~/.bashrc.
sed -i 's/HISTFILESIZE=2000//g' ~/.bashrc
sed -i 's/HISTSIZE=1000//g' ~/.bashrc
sed -i 's/HISTCONTROL=ignoreboth//g' ~/.bashrc

(Note: you can also use other Linux auditing tools like snoopy, acct, etc.)
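A quick way to see why the readonly marks matter: once HISTFILE is marked readonly, the user cannot unset it for the rest of the session. A sketch, run in a separate bash process so the readonly mark does not leak into your shell:

```shell
# After "readonly HISTFILE", attempts to unset it fail with a nonzero status.
bash -c '
  export HISTFILE=~/.bash_history
  readonly HISTFILE
  unset HISTFILE 2>/dev/null && echo "unset succeeded" || echo "unset refused"
'
```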

Partitioning with Preseed

Debian/Ubuntu preseeding provides a set of answers to the installer so that you do not have to enter them manually during the installation process. Most of the questions asked by the Debian installer can be preseeded away. It can fully automate installation and even provide features that are not available during a normal installation. To create a manual partitioning scheme we use the expert_recipe feature of preseeding.

<owner> <question name> <question type> <value>

<owner>: "d-i", which stands for Debian Installer.

<question name>: partman-auto, partman-auto-raid, and partman-auto-lvm are the packages that handle automatic partitioning of various types, and some of the questions (prompts) issued by them are described here.

<question type>: says what sort of value to expect (e.g. string, boolean, select (for a menu), ...).

<value>: this is where you put the answer that you would otherwise enter interactively.

Expert Recipe for manual partitioning


# configuration to create:
#  * 15G + 50% RAM /
#  * 8G swap
#  * the rest formatted with LVM on /opt

d-i     partman-auto/method     string  lvm
d-i     partman-auto/disk       string  /dev/sda
# the installer makes sure we want to wipe the LVM
d-i     partman-lvm/device_remove_lvm   boolean true
d-i     partman-lvm/confirm     boolean true
d-i     partman-lvm/confirm_nooverwrite boolean true
d-i     partman-auto/confirm    boolean true

# The first stanza (32 32 32 free, method biosgrub) is for a GPT partition table.
d-i     partman-auto/expert_recipe      string  es ::   \
        32 32 32 free                           \
                $iflabel{ gpt }                 \
                method{ biosgrub }              \
        .                                       \
        15000+50% 15000 15000+50% ext4          \
                $primary{ }                     \
                $bootable{ }                    \
                method{ format }                \
                format{ }                       \
                use_filesystem{ }               \
                filesystem{ ext4 }              \
                mountpoint{ / }                 \
        .                                       \
        8000 8000 8000 linux-swap               \
                method{ swap }                  \
                format{ }                       \
        .                                       \
        64 1000 10000000 ext4                   \
                method{ format }                \
                format{ }                       \
                use_filesystem{ }               \
                filesystem{ ext4 }              \
                lv_name{ data }                 \
                $defaultignore{ }               \
                $lvmok{ }                       \
                mountpoint{ /opt }              \
        .

d-i     partman-auto-lvm/guided_size    string  100%
d-i     partman/choose_partition        select  Finish partitioning and write changes to disk
d-i     partman-auto/choose_recipe      es

References:

https://wikitech.wikimedia.org/wiki/PartMan

http://ftp.dc.volia.com/pub/debian/preseed/partman-auto-recipe.txt

SSH

Password-based authentication is considered vulnerable in the networked world. You are highly recommended to use SSH key-based authentication. SSH uses public-key cryptography, which uses a public key and a private key.

On the client Machine:

$ ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/Users/mac/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/mac/.ssh/id_rsa.
Your public key has been saved in /Users/mac/.ssh/id_rsa.pub.
The key fingerprint is:
14:87:b3:26:cc:eb:79:05:09:97:16:35:3e:f3:1e:9b mac@Macs-MacBook-Pro.local
The key's randomart image is:
+--[ RSA 2048]----+
|        o=+      |
|      . *+ .     |
|     o +.++      |
|      +.=  +     |
|       +S.  o    |
|      .   .. +   |
|     . . .  E    |
|      o .        |
|       .         |
+-----------------+
Macs-MacBook-Pro:~ mac$

Server Machine:

  1. On the server, copy the public key (id_rsa.pub) into ~user/.ssh/ as authorized_keys.
  2. Edit /etc/ssh/sshd_config to set PasswordAuthentication no.
  3. From the client you can now connect with ssh -i id_rsa user@server.
  4. To protect the private key, you can use an SSH passphrase, which acts as an extra security feature.

Note: the private key stays on the client you use to access the system; the public key remains on the server, as authorized_keys.
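Step 1 can also be done with ssh-copy-id, which appends the key to authorized_keys and fixes permissions for you. A sketch (user@server is a placeholder, and password authentication must still be enabled for this first copy):

```shell
# Copy the public key to the server's ~/.ssh/authorized_keys in one step.
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server
# Afterwards this should log in without a password prompt:
ssh -i ~/.ssh/id_rsa user@server
```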

Resume scp:

Sometimes we may have to resume an scp transfer when the connection is terminated accidentally. This can easily be done with rsync: rsync --partial --progress --rsh=ssh user@host:/Remote_File Local_File

Rsync:

rsync is a file transfer program on UNIX-based systems that synchronizes files and directories from one system to another. It uses delta encoding when possible to minimize the amount of data transferred.

  1. rsync -alovzrP --delete -e ssh user@host:Remote_File Local_File
  2. rsync -alovzrP Local_File user@host:/Remote_File

rsync Push Operation:

Pushes directory from local system to remote system.

rsync -a ~/dir1 username@remote_host:destination_directory

rsync Pull Operation:

Pulls a directory from the remote system to the local system.

rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine

Important Note:

rsync -a dir1/ dir2

The trailing slash is necessary to mean "the contents of dir1", rather than the directory itself.

OSPF Core Configuration

1. Configure OSPF for the above network diagram. R1 will act as an ASBR by redistributing a series of static routes into the OSPF network. These routes should NOT increment their metric as they pass through the network and should have an initial OSPF cost of 200. All routers should have a router-id reflecting their hostname; you should be able to ping this router-id throughout the entire OSPF network.

2. After completing the initial step of the lab, one of the routers in Area0 will become the DR and one will become the BDR, for the Ethernet segment. Which router will become the DR and BDR? Write DR and BDR next to the respective router below.

  • R1
  • R2
  • R3

3. Ensure R1 becomes the DR on the Ethernet segment in Area 0. R2 and R3 should not become DR or BDR for the Ethernet segment in Area 0. After this change is made, what type of neighbor relationship should exist between R1 and R2? What about R2 and R3?

4. Implement summarization at the ABRs in the network to make the routing tables throughout the network as efficient as possible.

5. Implement summarization at the ASBR. The summary route should have the same attributes as the original, individual routes redistributed into the network.

6. The organization plans to upgrade to Gigabit Ethernet in the coming months. OSPF should accurately calculate its metric assuming Gigabit Ethernet will be the fastest link in the network.

Solutions:

To configure the static route:
R1(config)# ip route 172.16.0.0 255.255.255.0 null0
R1(config)# ip route 172.16.1.0 255.255.255.0 null0
R1(config)# ip route 172.16.2.0 255.255.255.0 null0
R1(config)# ip route 172.16.3.0 255.255.255.0 null0
 
To redistribute:
R1(config)# router ospf 1
R1(config-router)# redistribute static subnets metric 200 metric-type 2
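Hedged sketches for some of the later questions, in IOS syntax; the router-id value and the DR-priority approach are assumptions based on the lab text, not a verified answer key:

```
! Question 1: give each router a router-id reflecting its hostname (assumed scheme).
R1(config)# router ospf 1
R1(config-router)# router-id 1.1.1.1

! Question 3: keep R2 (and likewise R3) out of the DR/BDR election so R1 wins.
R2(config-if)# ip ospf priority 0

! Question 6: scale OSPF costs so Gigabit Ethernet (1000 Mbps) is the reference;
! this must be set consistently on every router in the domain.
R1(config-router)# auto-cost reference-bandwidth 1000
```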


Operating System Case Studies Notes

Hello guys, please find the solutions for some of the questions from the case studies. I have tried to complete some of the questions raised by Mr. Balaram Sharma. If you still have confusion, please let me know. For details, please go through Modern Operating Systems by Tanenbaum.

Good luck on your exam.

:),

os-case-studeis