Launching a web server on AWS using Terraform
Task 2
1. Create a security group that allows port 80.
2. Launch an EC2 instance.
3. For this EC2 instance, use the key and security group created in step 1.
4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume onto /var/www/html.
5. The developer has uploaded the code to a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
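All of the snippets below assume an AWS provider has already been configured; a minimal sketch (the region and profile values here are my own placeholders, not part of the original task):

provider "aws" {
  region  = "ap-south-1"   # assumption: replace with your own region
  profile = "default"      # assumption: an AWS CLI profile with valid credentials
}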
Step 1: Creating a key and a security group that allows port 80 and the NFS port (2049)
We create a tls_private_key resource "webserver_private_key", which generates a secure private key and encodes it as PEM.
Then we create an aws_key_pair resource "webserver_key", which provides an EC2 key pair used to control login access to EC2 instances.
We create a security group using aws_security_group that allows port 80 (HTTP), port 22 (SSH), and port 2049 (NFS).
AWS EFS provides scalable file storage for use with EC2. You don't have to worry about capacity forecasting, since it can scale up or down on demand.
resource "tls_private_key" "webserver_private_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "local_file" "private_key" {
content = tls_private_key.webserver_private_key.private_key_pem
filename = "webserver_key.pem"
file_permission = 0400
}resource "aws_key_pair" "webserver_key" {
key_name = "webserver_key"
public_key = tls_private_key.webserver_private_key.public_key_openssh
}
resource "aws_security_group" "SG1" {
name = "http-ssh"
description = "Allow http and ssh"ingress {
description = "http"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}tags = {
Name = "allow_http_ssh"
}
}
Step 2: Launch the EC2 instance
While launching our EC2 instance we use the key and security group created in step 1.
We use the Amazon Linux 2 AMI (HVM), SSD Volume Type image from AWS (AMI ID: ami-0447a12f28fddb066).
We include a remote-exec provisioner, which lets us run commands on the remote system through Terraform.
We install the httpd server, git, and the EFS utilities using the remote-exec provisioner.
resource "aws_instance" "webserver" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = aws_key_pair.webserver_key.key_name
security_groups=[aws_security_group.SG1.name]tags = {
Name = "webserver_task1"
}
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.webserver.public_ip
port = 22
private_key = tls_private_key.webserver_private_key.private_key_pem
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
"sudo yum install -y amazon-efs-utils",
]
}
}
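After apply, the instance's public IP is needed both for SSH and for Step 9; a small output block makes it visible on the command line (the output name instance_public_ip is my own):

output "instance_public_ip" {
  value = aws_instance.webserver.public_ip
}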
Step 3: Create the EFS file system and a mount target using aws_efs_mount_target
resource "aws_efs_file_system" "efs-task2" {
creation_token = "my-task2"tags = {
Name = "MyTask2"
}
}resource "aws_efs_mount_target" "alpha" {
file_system_id = "${aws_efs_file_system.efs-task2.id}"
subnet_id = aws_instance.webserver.subnet_id
security_groups = [aws_security_group.SG1.id]
}
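If the mount in Step 7 ever fails, it helps to know the file system's DNS name (of the form <file-system-id>.efs.<region>.amazonaws.com); a small helper output (the name efs_dns is my own):

output "efs_dns" {
  value = aws_efs_file_system.efs-task2.dns_name
}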
Step 4: Create the S3 bucket
We create an S3 bucket using aws_s3_bucket with public-read access. Note that S3 bucket names must be globally unique.
resource "aws_s3_bucket" "task2_s3" {
bucket = "task2-s3"
acl = "public-read"
tags = {
Name = "task2-s3"
Environment = "Dev"
}
}
Step 5: Add an object to the S3 bucket
We upload the image that our website will use later (dnld14.jpg) to the bucket.
resource "aws_s3_bucket_object" "sea_image" {
bucket = aws_s3_bucket.task2_s3.bucket
key = "dnld14.jpg"
source = "C:\\Users\\lenovo\\Pictures\\dnld14.jpg"
acl= "public-read"
}
Step 6: Create the CloudFront distribution
We create a CloudFront distribution in front of our S3 bucket.
resource "aws_cloudfront_distribution" "s3_distribution" {
origin {
domain_name = aws_s3_bucket.task2_s3.bucket_regional_domain_name
origin_id = aws_s3_bucket.task2_s3.id
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = aws_s3_bucket.task2_s3.idforwarded_values {
query_string = falsecookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
}
price_class = "PriceClass_200"restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "CA", "IN"]
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
depends_on = [aws_s3_bucket.task2_s3]
}
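Step 8 needs the distribution's domain name; exporting it as an output (the name cloudfront_domain is my own) saves a trip to the console:

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}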
Step 7: Let's deploy our website now
We write a remote-exec provisioner that mounts the EFS volume onto /var/www/html. (EFS is a network file system, so unlike an EBS volume it does not need to be partitioned or formatted first.) Then we clone our GitHub code into /var/www/html using git clone.
resource "null_resource" "nullremote" {depends_on = [
aws_efs_mount_target.alpha
]
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.webserver.public_ip
port = 22
private_key = tls_private_key.webserver_private_key.private_key_pem
}
provisioner "remote-exec" {
inline = [
"sudo mount -t ${aws_efs_file_system.efs-task2.id}:/ /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/gauri-repose/multicloud1.git /var/www/html/",
"sudo systemctl restart httpd"]
}}
Step 8: Updating the code with the CloudFront URL
We edit our website code so that it loads the image through our CloudFront distribution, as sketched below.
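One way to do this from Terraform is another null_resource that appends an img tag pointing at CloudFront. This is a sketch under my own assumptions: the resource name update_image_url is hypothetical, and it assumes the repo's landing page is index.html.

resource "null_resource" "update_image_url" {
  depends_on = [
    null_resource.nullremote,
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.webserver.public_ip
    port        = 22
    private_key = tls_private_key.webserver_private_key.private_key_pem
  }

  provisioner "remote-exec" {
    inline = [
      # append an <img> tag that loads dnld14.jpg through the CloudFront domain
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/dnld14.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}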
Step 9: Browsing to the public address of our instance now gives us the desired output.
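As a finishing touch, a local-exec provisioner can open the site automatically once everything is up. This is a sketch: it assumes Chrome on the same Windows workstation the image paths above point to, and it chains off the hypothetical update_image_url resource from Step 8.

resource "null_resource" "open_website" {
  depends_on = [null_resource.update_image_url]

  provisioner "local-exec" {
    # on Windows, local-exec runs via cmd, where "start chrome <url>" opens a browser tab
    command = "start chrome http://${aws_instance.webserver.public_ip}/"
  }
}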