Edit the Nginx config file for the domain (/etc/nginx/sites-available/domain.conf). Edit the existing configuration, or add a new location block for subfolders, like:

location /examples {
    proxy_pass http://localhost:3000/examples/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}

Then restart nginx:

sudo systemctl restart nginx

http://localhost/examples -- proxies to --> http://localhost:3000/examples

Alternative: an upstream block defines a cluster of backend servers that you can proxy requests to. The location goes inside the server block; the upstream block sits at the http level alongside it.

location / {
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto http;
    proxy_set_header X-Nginx-Proxy true;
    proxy_redirect off;
}

upstream backend {
    server localhost:3000;
}
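For orientation, here is a minimal sketch of a complete site file wrapping such a location block — the server_name and ports are placeholders, not values from the original notes:

```nginx
server {
    listen 80;
    server_name domain.com;   # placeholder domain

    location /examples {
        # forward /examples to the app listening on port 3000
        proxy_pass http://localhost:3000/examples/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
```

Symlink the file into /etc/nginx/sites-enabled/ and run nginx -t before restarting, so syntax errors are caught before they take the site down.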
Nginx on Linux

Shell script to install nginx (Amazon Linux):

#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install nginx1 -y
sudo systemctl enable nginx
sudo systemctl start nginx

Configure the firewall (Ubuntu, using ufw):

sudo ufw allow OpenSSH
sudo ufw app list
sudo ufw allow 'Nginx HTTP'
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status

CentOS (e.g. on a DigitalOcean droplet; note these are yum commands, not Ubuntu ones):

yum install epel-release -y
yum install nginx -y
export HOSTNAME=$(curl -s http://169.254.169.254/metadata/v1/hostname)
export PUBLIC_IPV4=$(curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)
echo "Droplet: $HOSTNAME, IP Address: $PUBLIC_IPV4" > /usr/share/nginx/html/index.html
systemctl enable nginx
systemctl start nginx
chkconfig nginx on
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
service iptables save
service iptables restart

Change the document root:

mkdir /sites
chcon -Rt httpd_sys_content_t /sites/   # set the SELinux context
vim /sites/index.html

Add the default index: "This is new documentroot"

vi /etc/nginx/conf.d/default.conf

Add this: root /sites

service nginx restart

Install MySQL (Ubuntu):

sudo apt install mysql-server
sudo mysql_secure_installation

This will ask if you want to configure the VALIDATE PASSWORD PLUGIN. Answer Y for yes, or anything else to continue without enabling it. If you answer "yes", you'll be asked to select a level of password validation. The server will next ask you to select and confirm a password for the MySQL root user. Even though the default authentication method for the MySQL root user dispenses with the use of a password, even when one is set, you should define a strong password here as an additional safety measure. For the rest of the questions (e.g. "Remove anonymous users?"), press Y and hit ENTER at each prompt.

Install PHP:

sudo add-apt-repository universe
sudo apt update && sudo apt install php-fpm php-mysql

or:

apt-get install php7.4 -y

or:

apt-get install php7.4-fpm php7.4-cli php7.4-mysql php7.4-curl php7.4-json -y

Configure Nginx for PHP

We now need to make some changes to our Nginx server block.
The location of the server block may vary depending on your setup. By default, it is located in /etc/nginx/sites-available/default. Edit the file in nano:

sudo nano /etc/nginx/sites-available/default

7.1. Prioritize index.php
Press CTRL + W and search for index.html. Now add index.php before index.html:

index index.php index.html index.htm index.nginx-debian.html;

7.2. Server Name
Press CTRL + W and search for the line server_name. Enter your server's IP here, or your domain name if you have one:

server_name YOUR_DOMAIN_OR_IP_HERE;

7.3. PHP Socket
Press CTRL + W and search for the line location ~ \.php. You will need to uncomment some lines here by removing the # signs before the four lines marked below.…
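The four lines themselves are cut off above; in a stock Ubuntu default server block they look roughly like the following sketch (the php7.4-fpm socket path is an assumption — match it to the PHP version you installed):

```nginx
location ~ \.php$ {
    # hand .php requests to the PHP-FPM unix socket
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}
```

After saving, check the config with sudo nginx -t and reload with sudo systemctl reload nginx.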
https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html
https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare

Create 2 S3 buckets: domain.com and www.domain.com.
On the first bucket, enable Static website hosting and note the "Endpoint".
On the second bucket, enable Static website hosting and set "Redirect requests for an object" -> Target bucket: domain.com.
Buckets -> Properties -> Server access logging: Enable.
Permissions -> "Block all public access": untick all.
Add a bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::Bucket-Name/*"]
        }
    ]
}

Add DNS: point domain.com to the endpoint, and www.domain.com to domain.com.
CNAME flattening: with CNAME flattening, Cloudflare finds the IP address that a CNAME points to.
# netstat -r
Kernel IP routing table
Destination     Gateway        Genmask         Flags  MSS Window  irtt Iface
66.193.212.0    *              255.255.255.0   U        0 0          0 eth0
192.168.8.0     *              255.255.255.0   U        0 0          0 eth0
link-local      *              255.255.0.0     U        0 0          0 eth0
default         66.193.212.1   0.0.0.0         UG       0 0          0 eth0

This prints the IP routing table. The Flags column indicates the state of each route:
U - Interface is up.
G - Not a direct entry (route goes via a gateway).
O - No ARP at this interface.

The script below gives the count of concurrent connections to the server per IP address:

netstat -plan | grep tcp | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

Sample output:

      6 74.67.169.174
      6 89.32.176.108
      7 174.6.50.67 91.194.84.106
     12 73.85.58.214
   1424 0.0.0.0

After finding an IP address of interest, run the command below to check the state of its connections:

netstat -plan | grep 91.194.84.106 | awk '{print $6}'

The command below counts, per IP, the established connections to the server over TCP or UDP:

netstat -ntu | grep ESTAB | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr

Windows netstat commands:

Find all active established TCP connections:
netstat -an | findstr TCP | findstr EST

Show established connections on port 80:
netstat -an | findstr :80 | findstr EST

Page through the TCP connections on the server:
netstat -an | findstr TCP | more

Count the number of TCP connections:
netstat -an | find /C "TCP"

Count the number of established connections:
netstat -an | find /C "EST"
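The counting pipeline can be tried on canned input; the netstat-style lines below are made up purely to show what each stage of the awk/cut/sort/uniq chain does:

```shell
# Fake `netstat -plan`-style lines (field 5 is the foreign address:port)
printf '%s\n' \
  'tcp 0 0 10.0.0.5:80 91.194.84.106:51000 ESTABLISHED 123/nginx' \
  'tcp 0 0 10.0.0.5:80 91.194.84.106:51001 ESTABLISHED 123/nginx' \
  'tcp 0 0 10.0.0.5:80 74.67.169.174:42010 ESTABLISHED 123/nginx' |
  awk '{print $5}' |   # take the foreign address:port column
  cut -d: -f1 |        # strip the :port suffix
  sort | uniq -c |     # count occurrences of each IP
  sort -n              # ascending by count
```

This prints one line per remote IP with its connection count, ascending: 1 for 74.67.169.174, then 2 for 91.194.84.106.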
Step#1 Hold the Windows + R keys to open the Run dialog box.
Step#2 Type "Services.msc" in the search box and press Enter.
Step#3 Find the "Windows Update" service by navigating to the very bottom of the services list.
Step#4 Right-click "Windows Update" and select Stop. Windows Update will stop.
Step#5 Now press Windows + E to open Explorer.
Step#6 Navigate to the following directory: "C:\Windows\SoftwareDistribution."
Step#7 Copy the address and paste it into the address bar of Windows Explorer to open the window.
Step#8 Select all files by pressing CTRL + A, and press the DELETE key.
Step#9 Restart your computer.
Step#10 Open the "Services" window once more and navigate to "Windows Update."
Step#11 Right-click on "Windows Update" and click Start.
Step#12 Check its "Status" column to see if it reads "Running."
Step#13 Check for updates.
COPY and ADD are both Dockerfile instructions that serve similar purposes: they let you copy files from a specific location into a Docker image.

Purpose: copy files from source to destination. COPY and ADD both have two forms:

ADD [--chown=<user>:<group>] <src>... <dest>
ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]

The latter form is required for paths containing whitespace. And COPY:

COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]

Again, the latter form is required for paths containing whitespace.

Create two sample files:

$ mkdir copy_add
$ cd copy_add
$ echo "this is sample file" > sample.txt
$ echo "this is example file" > example.txt

Write a Dockerfile using COPY and ADD:

FROM busybox
COPY sample.txt /tmp
ADD example.txt /tmp
CMD ["sh"]

Build the docker image:

$ docker build -t copyaddtest .

Run it as a container:

$ docker run -it --name copyaddtest copyaddtest sh

Test it:

# ls -ltr /tmp

As we have seen, COPY and ADD can serve the same purpose, but there are a couple of differences. Let's discuss them.

Difference

COPY
The COPY instruction copies files or directories into the Docker image. It takes a source and a destination as arguments. The source can be absolute, relative to the current WORKDIR, or a wildcard pattern. The destination path can be absolute or relative to the current WORKDIR. For example:

COPY ./requirements.txt /app/requirements.txt
COPY package.json package-lock.json /app
COPY package*.json /app
COPY . /app

ADD
The ADD instruction copies files, directories, remote files, or tar archives into the Docker image. It takes a source and a destination as arguments. The source can be files and directories. The source can be a URL: ADD will download the file from the URL and save it to the destination, so we don't need curl or wget to download a file. The source can also be a local tar/zip archive: ADD will automatically extract it to the destination, so we don't need to run unarchive commands manually.
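ADD's auto-extraction can be pictured as a tar -x into the destination directory at build time. The sketch below reproduces that behavior locally with plain tar (no Docker needed), using a throwaway temp directory:

```shell
# Local sketch of what `ADD example.tar.gz /tmp/` effectively does at build time
work=$(mktemp -d)
echo "this is example file" > "$work/example.txt"
tar -czf "$work/example.tar.gz" -C "$work" example.txt   # the archive ADD would receive
mkdir "$work/dest"
tar -xzf "$work/example.tar.gz" -C "$work/dest"          # the extraction ADD performs
cat "$work/dest/example.txt"                             # → this is example file
```

COPY, by contrast, would place example.tar.gz in the destination as-is, still compressed.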
Use ADD when you want to download a file from a URL or extract a local archive file. For example:

ADD ./example.tar.gz /tmp/
ADD https://bootstrap.pypa.io/get-pip.py /get-pip.py
ADD example.txt /tmp/
ADD https://mirrors.estointernet.in/apache/tomcat/tomcat-8/v8.5.58/bin/apache-tomcat-8.5.58.tar.gz /tmp/

Create a local archive:

$ tar -cvzf example.tar.gz ./

Use ADD:

FROM busybox
ADD example.tar.gz /tmp/
CMD ["sh"]

Build the image:

$ docker build -t copyaddtest:v1 .

Run the container:

$ docker run -it --name copyaddtest1 copyaddtest:v1 sh

Test it.

Use COPY — write a sample Dockerfile:

FROM busybox
COPY example.tar.gz /tmp/
CMD ["sh"]

Let's build…
Prune unused Docker objects

Docker takes a conservative approach to cleaning up unused objects (often referred to as "garbage collection"), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space. For each type of object, Docker provides a prune command. In addition, you can use docker system prune to clean up multiple types of objects at once. This topic shows how to use these prune commands.

Prune images

The docker image prune command allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container. To remove dangling images:

$ docker image prune

To remove all images which are not used by existing containers, use the -a flag:

$ docker image prune -a

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag. You can limit which images are pruned using filtering expressions with the --filter flag. For example, to only consider images created more than 24 hours ago:

$ docker image prune -a --filter "until=24h"

For more options: https://docs.docker.com/engine/reference/commandline/image_prune/

Prune containers

When you stop a container, it is not automatically removed unless you started it with the --rm flag. To see all containers on the Docker host, including stopped containers, use docker ps -a. You may be surprised how many containers exist, especially on a development system! A stopped container's writable layers still take up disk space. To clean this up, you can use the docker container prune command:

$ docker container prune

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag. By default, all stopped containers are removed.
You can limit the scope using the --filter flag. For instance, the following command only removes stopped containers older than 24 hours:

$ docker container prune --filter "until=24h"

For more options: https://docs.docker.com/engine/reference/commandline/container_prune/

Prune volumes

Volumes can be used by one or more containers, and take up space on the Docker host. Volumes are never removed automatically, because doing so could destroy data.

$ docker volume prune

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag. By default, all unused volumes are removed. You can limit the scope using the --filter flag. For instance, the following command only removes volumes which are not labelled with the keep label (ref: https://docs.docker.com/engine/reference/commandline/volume_ls/):

$ docker volume prune --filter "label!=keep"

For more options: https://docs.docker.com/engine/reference/commandline/volume_prune/

Prune networks

Docker networks…
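The per-type prune commands above can be collected into one cleanup script. Since pruning is destructive, the sketch below only prints each command through a small run wrapper instead of executing it — drop the echo inside run to run them for real:

```shell
#!/bin/sh
# Print each prune command instead of executing it (pruning is destructive).
run() { echo "+ $*"; }

run docker container prune -f --filter "until=24h"  # stopped containers >24h old
run docker image prune -a -f --filter "until=24h"   # unused images >24h old
run docker volume prune -f                          # all unused volumes
run docker network prune -f                         # all unused networks
run docker system prune -a -f                       # or: everything in one shot
```

The -f flag suppresses the confirmation prompt on each command, which is what makes the script usable non-interactively (e.g. from cron).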
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Using Compose is basically a three-step process:

1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
3. Run docker-compose up and Compose starts and runs your entire app.

Install Docker Compose

Prerequisites: Docker Compose relies on Docker Engine for any meaningful work, so make sure you have Docker Engine installed, either locally or remote, depending on your setup.

1. Run this command to download the current stable release of Docker Compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m)" | sudo tee /usr/local/bin/docker-compose > /dev/null

2. Apply executable permissions to the binary:

sudo chmod +x /usr/local/bin/docker-compose

3. Test the installation:

docker-compose --version

Docker Compose example

A simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis. While the sample uses Python, the concepts demonstrated here should be understandable even if you're not familiar with it. Make sure you have already installed both Docker Engine and Docker Compose. You don't need to install Python or Redis, as both are provided by Docker images.

Step 1: Setup. Create a directory for the project:

$ mkdir composetest
$ cd composetest
Create a file called app.py in your project directory and paste this in:

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)

In this example, redis is the hostname of the redis container on the application's network. We use the default port for Redis, 6379.

Note the way the get_hit_count function is written. This basic retry loop lets us attempt our request multiple times if the redis service is not available. This is useful at startup while the application comes online, but also makes our application more resilient if the Redis service needs to be restarted anytime during the app's lifetime. In a cluster, this also helps handle momentary connection drops between nodes.

The Dockerfile for the app:

FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

Create another file called requirements.txt in your project directory and paste this in:

flask
redis
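The walkthrough refers to a docker-compose.yml but the YAML itself was lost from the text. A minimal sketch matching the Flask + Redis setup above — the service names web and redis and the 5000 port mapping are assumptions taken from the standard Compose getting-started example, not recovered from the original:

```yaml
version: "3"
services:
  web:
    build: .           # builds the Dockerfile above
    ports:
      - "5000:5000"    # Flask's default port
  redis:
    image: "redis:alpine"
```

With this file in place, docker-compose up builds the web image and starts both containers; the web service reaches Redis at the hostname redis, matching redis.Redis(host='redis') in app.py.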
Dockerfile

A Dockerfile is analogous to a recipe, specifying the required ingredients for an application. Let's take a simple example of a nodejs application whose Dockerfile looks as follows:

FROM node:boron
# Create app directory
WORKDIR /home/code
# Install app dependencies
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]

Notice EXPOSE 8080. You may have seen this statement in most of the Dockerfiles across Docker Hub. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host.

A simpler explanation: the EXPOSE instruction exposes the specified port and makes it available only for inter-container communication. Let's understand this with the help of an example. Say we have two containers, a nodejs application and a redis server. Our node app needs to communicate with the redis server for several reasons. For the node app to be able to talk to the redis server, the redis container needs to expose the port. Have a look at the Dockerfile of the official redis image and you will see a line saying EXPOSE 6379. This is what helps the two containers communicate with each other. So when your nodejs app container tries to connect to port 6379 of the redis container, the EXPOSE instruction is what makes this possible.

Note: for the node app server to be able to communicate with the redis container, it's important that both containers are running on the same docker network.

Binding the container port with the host

So EXPOSE helps in inter-container communication. What if there's a need to bind a port of the container to a port of the host machine on which the container is running? Pass the -p (lowercase p) option to docker run as follows:

docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME

Find out more about this in the official documentation.
For Docker containers to communicate with each other and the outside world via the host machine, there has to be a layer of networking involved. Docker supports different types of networks, each fit for certain use cases.

When Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network, unless a custom network is specified. Besides docker0, two other networks get created automatically by Docker: host (no isolation between host and containers on this network; to the outside world they are on the same network) and none (attached containers run on a container-specific network stack).

Network drivers

Docker's networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

bridge: The default network driver. If you don't specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.

host: For standalone containers, remove network isolation between the container and the Docker host, and use the host's networking directly. host is only available for swarm services on Docker 17.06 and higher.

overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.

macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses.
Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host's network stack.

none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.

Check the default network list on your machine:

# docker network ls

The docker0 interface is a virtual Ethernet bridge that connects our containers and the local host network. If we look further at the other interfaces on our Docker host, we'll find a series of interfaces starting with veth. Every time Docker creates a container, it creates a pair of peer interfaces that are like opposite ends of a pipe (i.e., a packet sent on one will be received on the other). It gives one of the peers to the container to become its eth0 interface and keeps…