
Installing multiple Node.js/npm versions in Linux

Written on May 6, 2019 at 6:35 AM

To install multiple Node.js (and hence npm) versions and switch between them, use the Node Version Manager (nvm).

nvm installs per user; run the below command as the user who will use it:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
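After the script runs, nvm is loaded through the user's shell profile. The installer appends lines like the below to ~/.bashrc (or ~/.zshrc / ~/.profile); if the nvm command is not found right after installation, open a new shell or source the profile. (Snippet as added by the v0.34.0 installer, quoted from memory:)

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # this loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # bash completion
```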

After the installation completes, install a specific Node.js version with the below command:
nvm install 8.9.4

Finally, check the npm version using:
npm -v

To check the Node.js version:
node -v
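Switching between installed versions is then a matter of nvm use. A typical session might look like this (version numbers below are only examples; output elided):

```
$ nvm install 10.16.0         # install a second version alongside 8.9.4
$ nvm ls                      # list installed versions
$ nvm use 8.9.4               # switch the current shell to 8.9.4
$ nvm alias default 10.16.0   # set the version new shells start with
```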

Simple command to generate a random password in the Linux terminal

Written on April 28, 2019 at 12:51 AM

We can use the below command to generate a strong random alphanumeric password from a Linux terminal (bash/sh):
< /dev/urandom tr -dc A-Za-z0-9 | head -c14; echo


-c  --> specifies the length of the generated random string (14 characters here).
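The one-liner can be wrapped in a tiny function so the length becomes a parameter (the function name gen_pw is my own, not from the post):

```shell
# gen_pw LENGTH -- print a random alphanumeric password (default 14 chars)
gen_pw() {
  < /dev/urandom tr -dc 'A-Za-z0-9' | head -c "${1:-14}"
  echo   # trailing newline, same as the original one-liner
}

gen_pw        # 14 characters
gen_pw 32     # 32 characters
```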

OpenSSL commands to extract private key and cert from pfx/p12 file

Written on February 26, 2018 at 8:55 AM

Export the private key file from the pfx file:
# openssl pkcs12 -in filename.pfx -nocerts -out key.pem

Remove the passphrase from the private key:
# openssl rsa -in key.pem -out server.key

Export the certificate file from the pfx file:
# openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.pem
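Once the key and certificate are extracted, it is worth confirming they actually belong together. A minimal sketch (the throwaway key/cert generated here merely stand in for your extracted server.key and cert.pem):

```shell
# Generate a stand-in key and self-signed cert (in practice these are the
# files extracted from the pfx above).
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -new -x509 -key server.key -out cert.pem -days 1 -subj "/CN=example.test"

# An RSA key and its certificate share the same modulus; compare digests.
key_mod=$(openssl rsa  -noout -modulus -in server.key | openssl md5)
crt_mod=$(openssl x509 -noout -modulus -in cert.pem  | openssl md5)
[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```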

Adding/Editing SVN external urls to a directory from Shell

Written on August 18, 2017 at 6:50 AM

Sometimes it is useful to create a working copy that is made out of a number of different checkouts. For example, you may want different files or subdirectories to come from different locations in a repository, or perhaps from different repositories altogether. If you want every user to have the same layout, you can define the svn:externals properties to pull in the specified resource at the locations where they are needed.

 

Command to get the externals already set for a folder:

svn propget svn:externals <folder_name>

To edit an already assigned external or add a new external to a folder use the command:

  1. First, set an appropriate editor to edit the externals:

              SVN_EDITOR=/bin/vi
              export SVN_EDITOR

2.  Then use the below command:

svn propedit svn:externals <path_to_folder>

eg: svn propedit svn:externals /home/kevin/modules/configuration/test

The above command will open the editor “vi”, where we can provide the necessary externals URLs and save the file.

 

eg: $ svn propget svn:externals test
/svn/myrepo/application/branches/new1.4.1 suite

 

Note that “suite” is a subfolder inside “test” directory to which the external svn path has to be fetched.

That means the svn branch “/svn/myrepo/application/branches/new1.4.1” will get synced to the folder /home/kevin/modules/configuration/test/suite whenever the working copy is updated.
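For reference, the property body uses the Subversion 1.5+ format of "[-r REV] URL local_dir", and it can also be set non-interactively with svn propset svn:externals -F externals.txt <folder_name>. The entries below are illustrative only (URLs and names are made up):

```
^/application/branches/new1.4.1 suite
https://svn.example.com/repos/lib/tags/2.0 lib
-r 1234 ^/tools/build build
```

(“^/” is shorthand for the repository root.)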

 

Placing a custom Nagios NRPE script to monitor an NFS client

Written on March 24, 2017 at 1:39 AM

Changes on the Nagios server:

 

First of all, enable the NRPE plugin for the client host on the Nagios server:

 

  1. Make sure the check_nrpe command is defined inside the commands.cfg file. If not, add it (assuming the NRPE plugin is installed along with Nagios):

 

define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

 

NRPE enables some default monitoring for the host, like CPU load, current users, total processes, etc.

 

  2. The custom NRPE script that we are planning to place for NFS monitoring can be downloaded from here. The script has to be placed on the client server, not on the Nagios server, but we have to make the script definition in the commands.cfg file on the Nagios server itself. So add the below lines to define the NFS check in the commands.cfg file:
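A sketch of what these definitions commonly look like (the command name check_nfs, the host name, and the script path are illustrative assumptions, not from the original post):

```
# On the Nagios server: the service passes the remote command name
# through the check_nrpe command already defined above ($ARG1$):
define service{
        use                     generic-service
        host_name               client-host
        service_description     NFS Client
        check_command           check_nrpe!check_nfs
}

# On the client, nrpe.cfg maps that name to the custom script:
command[check_nfs]=/usr/local/nagios/libexec/check_nfs_client.sh
```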


Enabling session persistence (stickiness) for nginx (open source)

Written on March 17, 2017 at 12:03 AM

For those who are not willing to spend some bucks on purchasing NGINX Plus, but whose manager insists upon enabling session persistence in nginx, the best option would be to check the nginx-approved set of modules here --> Nginx 3rd party modules

There is a 3rd party module in that list by the name: Sticky upstream

Download here

 

Obviously, nginx has to be recompiled to enable this 3rd party module (not a dynamic module).

Download the desired version of nginx source code from  here.

If you already have an nginx version running on your server and want to replace it with the new one, check the compile options used to install the old version using the command:

# nginx -V

Remember to add the options “--with-http_gunzip_module --add-module=<path_to_module_location>” during compilation.
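Once nginx is rebuilt with the module, enabling persistence is a matter of adding the module's directive to the upstream block. A sketch (server names are made up, and the bare "sticky;" line with its cookie defaults is an assumption based on the sticky module's documentation):

```
# nginx.conf -- cookie-based stickiness via the 3rd-party sticky module
upstream tomcat_backend {
    sticky;                          # issues a cookie tying a client to one backend
    server tomcat1.example.com:8080;
    server tomcat2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_backend;
    }
}
```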


Keepalived setup for application high availability in CentOS 7

Written on March 14, 2017 at 8:08 AM

The requirement was to set up an HA application environment. We had two Tomcat servers as backend nodes (application hosting servers). An nginx server was put in front of these two servers to give two functionalities: load balancing and reverse proxying.

Two nginx servers were set up; one acts as a backup node if the primary server fails. This failover mechanism was achieved using the keepalived tool.

Keepalived can be installed using yum in CentOS 7. The version currently provided through CentOS 7 is v1.2.13.

Environment diagram:

VIP: 192.168.7.22
Master node: 192.168.7.47
Backup node: 192.168.7.44

 

In keepalived we have a master server and one or more backup servers. Backup servers act as failover points depending on the priority set for them. Keepalived uses a protocol called VRRP (Virtual Router Redundancy Protocol) to communicate between the master node and the backup nodes, so it is important to make sure VRRP traffic is allowed in both directions between the servers in firewalld. The master server multicasts packets to the network at an interval of 1 second (the default value); backup nodes on the same network identify these packets using a keepalived.conf parameter called “virtual_router_id”. It is just a unique number (between 0 and 255) that identifies the packets on the network, so make sure this value is kept the same on the master and backup nodes.
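The essentials above map onto keepalived.conf like this (a sketch using the addresses from the diagram; the interface name and the auth password are assumptions):

```
# /etc/keepalived/keepalived.conf on the MASTER (192.168.7.47);
# the BACKUP (192.168.7.44) uses "state BACKUP" and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0               # adjust to the real NIC name
    virtual_router_id 51         # must be identical on master and backup
    priority 150                 # e.g. 100 on the backup node
    advert_int 1                 # advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass secret42
    }
    virtual_ipaddress {
        192.168.7.22             # the VIP
    }
}
```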

We will need to set up a virtual IP address (VIP) for keepalived to fail over to the backup node if the master fails. This is not something we need to get from the network admin; we just mention a free IP in the keepalived conf and keepalived will start using it as the VIP. Note that we do not need to configure this IP as a new interface on the server, as Linux systems can add multiple IPs to the same ethernet card virtually. The VIP gets assigned to the active node automatically whenever a failover happens.


Watch realtime HTTP requests per second

Written on February 27, 2017 at 5:03 AM

watch -n 1 'a="$(date -d "1 second ago" +%d/%b/%Y:%H:%M:%S)"; grep -c "$a" access.log'


Reference website: http://dgtool.treitos.com
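The command formats the timestamp of the previous second in common log format and counts the access.log lines carrying it. The counting half can be sketched against a synthetic log (the IPs, timestamps, and log lines below are made up):

```shell
# A timestamp in the %d/%b/%Y:%H:%M:%S form used by the command above:
ts='27/Feb/2017:05:03:14'

# Three requests logged in that second, one in the next second:
printf '1.2.3.4 - - [%s +0000] "GET / HTTP/1.1" 200 612\n' "$ts" "$ts" "$ts" > access.log
printf '5.6.7.8 - - [27/Feb/2017:05:03:15 +0000] "GET / HTTP/1.1" 200 612\n' >> access.log

grep -c "$ts" access.log   # counts only the lines from that second
```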

Learn Docker (basic reference commands)

Written on September 29, 2016 at 1:53 AM

Install Docker:

curl -fsSL https://get.docker.com/ | sh

Test docker :

docker run hello-world

– The docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root, and other users can only access it with sudo. For this reason, the docker daemon always runs as the root user.

** Command to get docker info:
docker info

** Search for a package:
docker search ubuntu

** Download an image to local:
docker pull ubuntu

** list all available images in your system:
docker images

** To remove a docker image:
docker rmi ubuntu

When you execute a command against an image you basically obtain a container. After the command that is executing in the container ends, the container stops (you get a non-running or exited container). If you run another command against the same image, a new container is created, and so on.

** To get container ID:
docker ps -l

Once the container ID has been obtained, you can start the container again with the command that was used to create it, by issuing the following command:

docker start c629b7d70666

*** A more elegant alternative, so you don’t have to remember the container ID, would be to allocate a unique name for every container you create by using the --name option on the command line, as in the following example:

# docker run --name myname ubuntu cat /etc/debian_version

*** In order to interactively connect into a container shell session, and run commands as you do on any other Linux session, issue the following command:

# docker run -it ubuntu bash

-i => interactive
-t => gives a tty for input and output

*** To detach from a container bash shell, hit Ctrl+p then Ctrl+q
*** To attach back again use:
# docker attach <container id>

To stop a running container from the host session issue the following command:

# docker kill <container id>

*** Installing nginx inside a docker ubuntu container:

– start the container with the nginx package installed

docker run --name kevin-nginx ubuntu bash -c "apt-get -y install nginx"

**** Commit changes to a container: Committing changes to a container will create a new docker image (visible via `docker images`)

docker commit <container ID> <new image tag name to be given>

*** tag a docker image:

docker tag <image_id> <repo_name:tag_name>

*** Running an interactive terminal on a docker image that was created:
docker run -it kevin:nginx bash

docker run -it <repo_name:tag_name> bash

*** Run a command inside an image without entering the image:
docker run kevin:nginx which nginx

docker run <repo_name:tag_name> <command_to_execute>

*** Execute a command with an image, giving the container thus formed a custom name:
# docker run --name <custom_name> <repo_name:tag_name> <command_to_execute>

eg: docker run --name test kevin:nginx /etc/init.d/nginx stop

*** We need to map the nginx port running inside a docker container to the host to make it available for access. For that, start the container by mapping the nginx port to an arbitrary unused port of the host:
# docker run -it -p <host_port>:<container_port> <repo_name:tag_name> /bin/bash

eg: docker run -it -p 81:80 kevin:nginx bash   (nginx will be available from port 81 of your host IP)
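The run-and-commit workflow above can also be captured declaratively in a Dockerfile, so the image is rebuilt with `docker build` instead of being committed by hand (a sketch; the EXPOSE/CMD lines are assumptions about how the nginx image would be used):

```
# Dockerfile -- equivalent of installing nginx in a container and committing it
FROM ubuntu
RUN apt-get update && apt-get -y install nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Build and run it the same way as the committed image: docker build -t kevin:nginx . followed by docker run -d -p 81:80 kevin:nginx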

 

 

 

References:

Install Docker and Learn Basic Container Manipulation in CentOS and RHEL 7/6 – Part 1

https://docs.docker.com

 

Give network access to a folder in windows

Written on March 8, 2016 at 4:47 AM

To give the NETWORK SERVICE account access to a folder, use the below command:

icacls C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys /grant "NETWORK SERVICE":(R)