Ansible Playbooks and Roles

Install git on Amazon Linux 2

git_install.yml:

```yaml
---
- hosts: 172.31.16.196
  become: yes
  gather_facts: False
  tasks:
    - name: Installing git
      yum:
        name: git
        state: present
```

Install maven on Amazon Linux 2

[root@ip-172-31-18-141 ansible]# cat maven_install.yml

```yaml
---
- hosts: 172.31.16.196
  become: yes
  tasks:
    - name: download maven and install
      shell: |
        wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
        sed -i 's/$releasever/6/g' /etc/yum.repos.d/epel-apache-maven.repo
        yum install -y apache-maven
```

Ansible Galaxy

Galaxy is a hub for finding and sharing Ansible content. Use Galaxy to jump-start your automation project with great content from the Ansible community. Galaxy provides pre-packaged units of work known to Ansible as roles. Roles can be dropped into Ansible playbooks and immediately put to work. You'll find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day.

Creating Roles

What is an Ansible role? A role enables the sharing and reuse of Ansible tasks. It contains Ansible playbook tasks, plus all the supporting files, variables, templates, and handlers needed to run the tasks. A role is a complete unit of automation that can be reused and shared. In practical terms, a role is a directory structure containing all the files, variables, handlers, Jinja templates, and tasks needed to automate a workflow.

When a role is created, the default directory structure contains the following:

- tasks – contains the main list of tasks to be executed by the role.
- handlers – contains handlers, which may be used by this role or even anywhere outside this role.
- defaults – default variables for the role.
- vars – other variables for the role.
- files – contains files which can be deployed via this role.
- templates – contains templates which can be deployed via this role.
- meta – defines some metadata for this role.

Refer to the link below for more info:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html

How to create a role using ansible-galaxy?

Using the ansible-galaxy command-line tool that comes bundled with Ansible, you can create a role with the init command. For example, the following will create a role directory structure called test-role-1 in the current working directory:

```shell
$ ansible-galaxy init test-role-1
```

Refer to https://galaxy.ansible.com/docs/contributing/creating_role.html for more.

Using Roles

The classic (original) way to use roles is via the roles: option for a given play:

```yaml
---
- hosts: webservers
  roles:
    - common
    - webservers
```

This designates the following behaviors, for each role 'x':

- If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play.
- If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play.
- If roles/x/vars/main.yml exists, variables listed therein will be added to the play.
- If roles/x/defaults/main.yml exists, variables listed…
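To make the role behaviors above concrete, here is a minimal sketch of a hypothetical role (the role name and file contents are illustrative, mirroring the git playbook earlier in this page) and a play that applies it:

```yaml
# roles/git/tasks/main.yml -- the role's task list (hypothetical example role)
---
- name: Installing git
  yum:
    name: git
    state: present
```

```yaml
# site.yml -- a play that pulls in the role via the roles: option
---
- hosts: 172.31.16.196
  become: yes
  roles:
    - git
```

Because roles/git/tasks/main.yml exists, its tasks are added to the play automatically when the role is listed under roles:.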

Amazon Aurora Serverless

Aurora Serverless is an on-demand, auto-scaling configuration for Aurora (MySQL-compatible edition) in which the database will automatically start up, shut down, and scale capacity up or down based on your application's needs. Aurora Serverless enables you to run your database in the cloud without managing any database instances. It's a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads, because it automatically starts up, scales capacity to match your application's usage, and shuts down when not in use.

Manually managing database capacity can take up valuable time and can lead to inefficient use of database resources. With Aurora Serverless, you simply create a database endpoint, optionally specify the desired database capacity range, and connect your applications. You pay on a per-second basis for the database capacity you use when the database is active, and you can migrate between standard and serverless configurations with a few clicks in the Amazon RDS Management Console.

Simple – Aurora Serverless removes the complexity of managing database instances and capacity.

Scalable – Aurora Serverless seamlessly scales compute and memory capacity as needed, with no disruption to client connections.

Cost-effective – Pay only for the database resources you consume, on a per-second basis.

Highly available – Built on distributed, fault-tolerant, self-healing Aurora storage with 6-way replication to protect against data loss.

Use Cases

Infrequently-Used Applications – You have an application that is only used for a few minutes several times per day or week, such as a low-volume blog site, and you want a cost-effective database that only requires you to pay when it is active. With Aurora Serverless, you only pay for the database resources you consume.

New Applications – You are deploying a new application and are unsure of what instance size you need. With Aurora Serverless, you simply create an endpoint and let the database auto-scale to the capacity requirements of your application.

Variable Workloads – You are running a lightly used application, with peaks of 30 minutes to several hours a few times each day or several times per year, such as in HR, budgeting, operational reporting, etc. With Aurora Serverless, you no longer have to provision for either peak or average capacity.

Unpredictable Workloads – You are running workloads where there is database usage throughout the day, but also peaks of activity that are hard to predict. For example, a traffic site where you might see a surge of activity when it starts…
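As a hedged sketch of the "create a database endpoint and specify a capacity range" step: an Aurora Serverless (v1) cluster can be created from the AWS CLI. The cluster identifier, username, password, and capacity values below are placeholders, and exact flag support varies by CLI version.

```shell
# Create an Aurora (MySQL-compatible) cluster in serverless mode.
# Identifier and credentials here are placeholder values.
aws rds create-db-cluster \
  --db-cluster-identifier my-serverless-cluster \
  --engine aurora \
  --engine-mode serverless \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --scaling-configuration MinCapacity=1,MaxCapacity=4,AutoPause=true,SecondsUntilAutoPause=300
```

The scaling configuration is where the capacity range and auto-pause behavior described above are expressed: the cluster scales between the minimum and maximum capacity units, and pauses after the configured idle period.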

Docker Expose

Dockerfile

A Dockerfile is analogous to a recipe, specifying the required ingredients for an application. Let's take a simple example of a Node.js application whose Dockerfile looks as follows:

```dockerfile
FROM node:boron

# Create app directory
WORKDIR /home/code

# Copy the app manifest so npm install has something to work with
COPY package*.json ./

# Install app dependencies
RUN npm install

EXPOSE 8080

CMD [ "npm", "start" ]
```

Notice EXPOSE 8080. You may have seen this statement in most Dockerfiles across Docker Hub. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host.

A simpler explanation

The EXPOSE instruction exposes the specified port and makes it available only for inter-container communication. Let's understand this with the help of an example. Say we have two containers: a Node.js application and a Redis server. Our node app needs to communicate with the Redis server for several reasons. For the node app to be able to talk to the Redis server, the Redis container needs to expose the port. Have a look at the Dockerfile of the official Redis image and you will see a line saying EXPOSE 6379. This is what helps the two containers communicate with each other. So when your node app container tries to connect to port 6379 of the Redis container, the EXPOSE instruction is what makes this possible.

Note: For the node app server to be able to communicate with the Redis container, it's important that both containers are running in the same Docker network.

Binding the container port with the host

So EXPOSE helps in inter-container communication. What if there's a need to bind a port of the container to a port of the host machine on which the container is running? Pass the -p (lowercase p) option to docker run as follows:

docker run -p <host_port>:<container_port> IMAGE_NAME

Find out more about this in the official documentation.
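The two ideas above can be sketched as a short command sequence. The network and container names (app-net, my-node-app) are made-up examples, not anything from the official images:

```shell
# Create a user-defined network so containers can reach each other by name
docker network create app-net

# The official redis image EXPOSEs 6379; containers on app-net reach it as redis:6379
docker run -d --name redis --network app-net redis

# Publish container port 8080 on host port 3000 (host_port:container_port)
docker run -d --name node-app --network app-net -p 3000:8080 my-node-app
```

Inside the node app you would connect to redis:6379 (inter-container, no -p needed); from the host machine, the app itself is reachable at localhost:3000 because of the -p binding.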
