SFTP runs over the SSH protocol, so first we have to install the Windows version of the OpenSSH server.
Luckily a precompiled version is available, so we just have to unzip the contents to a folder. Please note that this installation should be used only in a non-production environment.
Download link : https://github.com/PowerShell/Win32-OpenSSH/releases/download/3_19_2016/OpenSSH-Win64-1.1.zip (I find that the latest versions give a 1067 error while starting)
Inside a PowerShell prompt, execute the below command to bypass execution restrictions:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
Once the server is up, edit c:\OpenSSH-Win\sshd_config
Modify the below line in the config file:
subsystem sftp c:\OpenSSH-Win\sftp-server.exe -d C:/Users/ftp-testing/Work
-d --> default working directory for SFTP logins
Open cmd with elevated rights and run the bundled install script (install-sshd.ps1). It will show that the installation was successful.
Open services.msc and go to sshd
Make sure sshd starts “Automatically”
Generate SSH host keys for the server (they are necessary to start sshd); from the OpenSSH folder run: .\ssh-keygen.exe -A
Start SSHD service
Connect over SFTP using a client tool such as WinSCP.
Common screen commands that we can use are:
Starting Named Session:
screen -S session_name
Detach from a screen session without killing that session: press Ctrl+a, then d
Reattaching a screen session:
screen -r (works if only one session is present)
List all existing screen sessions:
screen -ls
If there is more than one screen session, we should mention the screen session id:
screen -r <session_id>
To install multiple Node.js versions and switch between them, you should use Node Version Manager (nvm).
nvm can be installed with the below command; it is installed only for the particular user that runs it:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
After successful completion, if you want to install a specific Node.js version use the below command:
nvm install 8.9.4
Finally, check the npm version using:
npm -v
To check the Node.js version:
node -v
We can use the below command to generate a strong random alphanumeric password from a Linux terminal (bash/sh):
< /dev/urandom tr -dc A-Za-z0-9 | head -c14; echo
tr -dc keeps only the characters in the given set, and head -c specifies the length of the random string generated (14 here).
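Building on the same idea, a small sketch that parameterizes the length and the character set (the function and variable names here are our own, not from any standard tool):

```shell
#!/bin/sh
# genpw LENGTH CHARSET - print a random password built from /dev/urandom.
# LC_ALL=C makes tr treat the input as plain bytes (avoids multibyte issues).
genpw() {
    len="${1:-14}"               # desired length, default 14
    charset="${2:-A-Za-z0-9}"    # allowed characters, default alphanumeric
    LC_ALL=C tr -dc "$charset" < /dev/urandom | head -c "$len"
    echo
}

genpw                          # 14-character alphanumeric password
genpw 20 'A-Za-z0-9@#%+='      # 20 characters including some punctuation
```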
Export the private key file from the pfx file:
#openssl pkcs12 -in filename.pfx -nocerts -out key.pem
Remove the passphrase from the private key:
#openssl rsa -in key.pem -out server.key
Export the certificate file from the pfx file:
#openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.pem
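The three commands above can be exercised end to end; a sketch that first creates a throwaway PFX (the file names and the password "secret" are ours for illustration) and then runs the same extraction steps:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Create a throwaway self-signed key + certificate and bundle them into a PFX
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
    -subj "/CN=example.test" -keyout key0.pem -out cert0.pem
openssl pkcs12 -export -inkey key0.pem -in cert0.pem \
    -passout pass:secret -out filename.pfx

# 1. Export the (still passphrase-protected) private key from the pfx file
openssl pkcs12 -in filename.pfx -nocerts -out key.pem \
    -passin pass:secret -passout pass:secret
# 2. Remove the passphrase from the private key
openssl rsa -in key.pem -passin pass:secret -out server.key
# 3. Export the certificate file from the pfx file
openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.pem \
    -passin pass:secret

# server.key should now load without any passphrase
openssl rsa -in server.key -check -noout
```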
Sometimes it is useful to create a working copy that is made out of a number of different checkouts. For example, you may want different files or subdirectories to come from different locations in a repository, or perhaps from different repositories altogether. If you want every user to have the same layout, you can define the
svn:externals property to pull in the specified resources at the locations where they are needed.
Command to get the externals already set for a folder:
svn propget svn:externals <folder_name>
To edit an already assigned external or add a new external to a folder:
1. First set an appropriate editor to edit the externals:
export SVN_EDITOR=vi
2. Then use the below command:
svn propedit svn:externals <folder_name> <absolute_path_to_the_folder>
eg: svn propedit svn:externals test /home/kevin/modules/configuration/test
The above command will open the editor “vi”, where we can provide the necessary externals URL and save the file.
eg: $ svn propget svn:externals test
Note that “suite” is a subfolder inside the “test” directory into which the external svn path is fetched.
That means the svn branch “/svn/myrepo/application/branches/new1.4.1” will get synced to the folder /home/kevin/modules/configuration/test/suite whenever the working copy is checked out or updated.
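For reference, each line of the svn:externals property value is a “<subdir> <URL>” pair in the pre-1.5 format (Subversion 1.5+ also accepts a reversed “[-r REV] URL subdir” form); the repository host below is a made-up example:

```
suite http://svn.example.com/svn/myrepo/application/branches/new1.4.1
```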
Changes in the Nagios server:
First of all, enable the NRPE plugin for the client host on the Nagios server:
- Make sure the check_nrpe command is defined inside the commands.cfg file. If not, add it (assuming the NRPE plugin is installed along with Nagios):
define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }
NRPE enables some default monitoring for the host, like CPU load, current users, total processes, etc.
- The custom NRPE script that we are planning to use for NFS monitoring can be downloaded from here. The script has to be placed on the client server, not on the Nagios server, but the check definition has to be made in the commands.cfg file on the Nagios server itself. So add the below lines to define the NFS check in the commands.cfg file:
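The exact definition depends on the names chosen; a sketch, assuming the script is exposed to NRPE on the client under the name check_nfs (the host name, command name, and script path below are assumptions, not from the original post):

```
# On the client, expose the script through nrpe.cfg:
#   command[check_nfs]=/usr/local/nagios/libexec/check_nfs.sh
# On the Nagios server, a service that calls it through check_nrpe:
define service{
        use                     generic-service
        host_name               client-host          ; hypothetical host name
        service_description     NFS Status
        check_command           check_nrpe!check_nfs
        }
```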
For those who are not willing to spend some bucks on purchasing NGINX Plus, but whose manager insists upon enabling session persistence in nginx, the best option would be to check the nginx-approved set of modules here --> Nginx 3rd party modules
There is a 3rd party module in that list by the name : Sticky upstream
Obviously, nginx has to be recompiled to enable this 3rd party module (not a dynamic module).
Download the desired version of nginx source code from here.
If you already have an nginx version running on your server and want to replace it with the new one, check the compile options used to install the old version using the command:
nginx -V
Remember to add the option "--with-http_gunzip_module --add-module=<path_to_module_location>" during compilation.
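Once nginx is rebuilt with the module, session persistence is a single directive in the upstream block; a minimal sketch (the backend addresses and ports are examples, and by default the module tracks sessions with a cookie):

```
upstream backend {
    sticky;                       # cookie-based session persistence (from the module)
    server 192.168.7.47:8080;     # backend nodes - example addresses
    server 192.168.7.44:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```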
The requirement was to set up an HA application environment. We had two tomcat servers as backend nodes (application hosting servers). An nginx server was put in front of these two servers to provide two functionalities: load balancing and reverse proxying.
Two nginx servers were set up; one acts as a backup node if the primary server fails. This failover mechanism was achieved using the keepalived tool.
Keepalived can be installed using yum on CentOS 7. The version of keepalived currently provided through CentOS 7 is v1.2.13.
VIP is 192.168.7.22
Master node is 192.168.7.47
Backup node is 192.168.7.44
In keepalived we have a master server and backup servers. Backup servers act as failover points depending on the priority set for them. keepalived uses a protocol called VRRP (Virtual Router Redundancy Protocol) to communicate between the master node and the backup nodes, so it is important to make sure VRRP traffic is allowed between the servers in firewalld (communication must be allowed in both directions).

The master server will, at an interval of 1 second (the default value), multicast packets to the network, which are identified by the backup nodes in the same network using a parameter in keepalived.conf called “virtual_router_id”. It is just a unique number (between 0 and 255) that identifies the packets on the network, so make sure this value is kept the same on the master and the backup nodes.
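A minimal keepalived.conf sketch for the master node using the addresses above (the interface name eth0 and the shared secret are assumptions; the backup node uses state BACKUP and a lower priority):

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0              # assumed interface name
    virtual_router_id 51        # any value 0-255, identical on master and backup
    priority 100                # backup node uses a lower value, e.g. 90
    advert_int 1                # multicast interval in seconds (the default)
    authentication {
        auth_type PASS
        auth_pass mypass        # hypothetical shared secret
    }
    virtual_ipaddress {
        192.168.7.22            # the VIP
    }
}
```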
We will need to set up a Virtual IP address (VIP) for keepalived to fail over to the backup node if the master fails. This is not something we need to get from the network admin; we just need to mention a free IP in the keepalived conf and keepalived will start using it as the VIP. Note that we do not need to configure this IP as a new interface on the server, as Linux systems can add multiple IPs to the same ethernet card virtually. You can view the VIP getting assigned to the active node automatically whenever a failover happens using the command:
ip addr show

As a bonus, the below one-liner watches the number of requests per second hitting the active node by counting occurrences of the previous second's timestamp in the nginx access log:
watch -n 1 'a="$(date -d "1 second ago" +%d/%b/%Y:%H:%M:%S)"; grep -c "$a" access.log'
reference website : http://dgtool.treitos.com