
Infrastructure as code homework #1

21 Sep 2020 🔖 linux devops
💬 EN


I’d like to get involved in a project at work that involves taking a tool made up of many databases and web servers (running Java in Apache Tomcat) and moving it into “the cloud” using best-practice architectures for installation, upgrade, maintenance, patching, backup, & recovery.

After reading this helpful background about Tomcat by Secure Any Cloud, I think I’ve just come up with my first “hello world” homework assignment.

The plan

  • Rent a machine from a cloud service with which I have some free credits.
    • Always use Terraform to control working with the cloud service; avoid cloud-service-specific tools & GUIs as much as possible.
  • Install & run Linux, Java, & Tomcat hosting Apache’s sample.war file on a non-8080 port.
    • Always use Git & continuous integration/deployment tools (Ansible? Jenkins?) & container images (Kubernetes? Docker?) rather than SSHing / SFTPing into the rented machine’s CLI or clicking around in GUI configuration panels when possible.
  • Make sure sample has a public IP address – maybe even a public domain name to be snazzy.
    • Continue to pay attention to tooling & automation / infrastructure-as-code.
  • Visit http://...:xxxx/sample/ from my home computer to confirm it’s alive and on the web.
  • Kill the server & deallocate any IP / domain names gracefully.
    • Always use Terraform to control working with the cloud service; avoid cloud-service-specific tools & GUIs as much as possible.
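If the plan works out, the whole lifecycle should boil down to a handful of commands. A rough sketch (the URL, port, and filenames are placeholders, and this assumes Terraform and a cloud CLI are already installed and authenticated – nothing here is final):

```shell
terraform init                         # download the cloud provider's plugin
terraform plan -out planout.txt        # preview & save the provisioning plan
terraform apply planout.txt            # rent the machine & wire up networking
# ...CI/CD & container tooling deploys Tomcat + sample.war here...
curl http://<public-ip>:<port>/sample/ # confirm it's alive from outside
terraform destroy                      # tear everything back down gracefully
```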

The accomplishment

According to one of my colleagues, the jargon I can say I’m studying is:

  1. Provisioning a server in the cloud
  2. Deploying an application to that server

It’s nice to have goals & a plan.

Misc notes


Some notes I took from Secure Any Cloud’s article that led to me making this plan:

  • A lot of what a 3rd-party vendor will be providing my workplace is WAR files for services running Apache Tomcat.
  • Tomcat’s been around since 1998.
  • Tomcat operates only on the HTTP protocol.
  • Tomcat processes (provides a runtime environment for the Java code within) Java servlets, encapsulating code & business logic to define how requests & responses should be handled by a server.
    • “Servlet” is an API from the Java Platform, Enterprise Edition (Java EE) designed to work with web servers.
    • Monitoring a server for incoming client requests is the web server’s job, not the servlet’s.
    • “web.xml” maps servlet classes to incoming client requests.
    • Servlets are responsible for providing responses back to Tomcat as JSP pages.
  • Tomcat processes JSP pages.
    • JSP is a server-side view-rendering technology (Salesforce Visualforce pages are based on it, I believe).
    • Tomcat returns responses back to a client by rendering the JSP pages that servlets hand off to it.
  • As developers, our 3rd-party vendor’s team would’ve written the servlets or JSPs and let Tomcat handle routing.
  • Tomcat provides a port 8080 connector (Coyote engine?) for serving stuff over HTTP; can’t tell if that’s related to all this.
  • On Windows, you’d install a JDK and then GUI-install Tomcat, pointing it at your JRE & detailing some things like port 8080 for HTTP during install.
    • It would likely install to C:\Tomcat8, which is likely what CATALINA_HOME and CATALINA_BASE refer to in documentation.
  • Look in server.xml to be sure what the “application base directory” (e.g. CATALINA_HOME/webapps) is – likely C:\Tomcat8\webapps.
    • That’s where you’ll drop .war files (with the server stopped).
  • There’s a file in CATALINA_BASE/conf/ (on Windows, likely C:\Tomcat8\conf\) called tomcat-users.xml that it seems you can add data to in order to make web sites present an authentication wall (again, edit with the server stopped).
  • As with most web servers, the typical way to see if they’re running well is to visit http://localhost:XXXX/whatever/ in a web browser from the same box.
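To make the web.xml note above concrete, here’s a minimal servlet mapping. The class name and URL pattern are hypothetical (my own illustration, not from the article), but the structure is what Tomcat reads to route incoming client requests to servlet classes:

```xml
<!-- Hypothetical web.xml fragment: names are illustrative only -->
<web-app>
  <servlet>
    <servlet-name>hello</servlet-name>
    <servlet-class>com.example.HelloServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>hello</servlet-name>
    <url-pattern>/hello</url-pattern>
  </servlet-mapping>
</web-app>
```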

All this sounds like I’ll have to manage the installation/upgrade/configuration of:

  1. An operating system
  2. Java
  3. Apache Tomcat

(Some sort of driver for talking to a database, like JDBC, would probably be #4 in a more real-world-like setup. More to explore later.)

Infrastructure as code / containerization with Docker/Kubernetes can help. Or so I’d think.

Note: per some GitHub repos I was looking at, containers can be configured themselves to be direct-connected to or to be behind a load balancer. More to explore for another day.
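For what it’s worth, the containerized version of my “hello world” might be as small as this. A sketch only – it assumes the official tomcat image on Docker Hub and a local copy of sample.war, and I haven’t run it yet:

```dockerfile
# Sketch: official tomcat base image assumed; sample.war assumed local
FROM tomcat:9
COPY sample.war /usr/local/tomcat/webapps/
EXPOSE 8080
```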

Terraform / Azure

I stumbled into some Azure credits I have to use within the next some-odd months, so I’ll be using Azure for this.

Terraform Azure getting started

Terraform recommends logging into Azure with Azure’s CLI for the purposes of playing around from my own PC.

Azure’s CLI installer prompts for admin rights to the computer; the az command was subsequently recognized by my Windows cmd prompt but not by my Git Bash prompt. That’s annoying. Separately, I never bothered to put the terraform.exe I downloaded into my PATH, so I have to execute everything as "c:\downloaded_here\terraform.exe" -v instead of terraform -v, but whatever.

Terraform Azure build

I set up a folder called c:\example\tfhello and put 1 file into it:

# Configure the Azure provider
provider "azurerm" {
  features {}
  subscription_id = "ABC123"
  tenant_id       = "ZYX987"
}

I tried running terraform plan with the filename as an argument from that folder’s prompt, but Terraform yelled at me. Turns out Terraform doesn’t want the filename; it will infer it, thank you very much.

Just terraform plan.

But then it was unhappy I hadn’t yet run terraform init, so I did.

A .terraform folder showed up with a plugins subfolder. plugins contained selections.json and a subfolder, neither of which seemed to change as I worked.

I ran terraform plan but nothing changed on my filesystem, as I didn’t have anything in for it to plan.

I moved on to adding a resource to the file. Just for good measure, I also added a terraform block to the top of the file, in keeping with examples from the documentation.

# Configure the Azure provider

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = "ABC123"
  tenant_id       = "ZYX987"
}

resource "azurerm_resource_group" "tfrg" {
  name     = "myTFResourceGroup"
  location = "westus2"
}
Now when I ran terraform plan, it created some files while it thought, then deleted them, and told me I’d create 1 new Azure “resource group” (whatever that is – don’t look at me like I know the cloud yet) if I ran apply. But it asked: was I sure I didn’t want to actually write the plan to disk with plan’s --out option?

Terraform Azure resource group documentation

Okay, sure: terraform plan --out planout.txt

That created a new file on my hard drive called planout.txt that is not terribly human-eyes-friendly.

Next I ran terraform apply planout.txt. This created terraform.tfstate on my hard drive. And, with a bit of a lag after Terraform told me it was done, indeed a new Resource Group showed up in my web panel for my Azure account.

Great – time to tear it back down.

First I tried terraform destroy -target azurerm_resource_group.tfrg.

Terraform destroy notes

That deleted it from Azure, but Terraform kept giving me nastygrams saying this wasn’t really the right way to delete stuff.

So I did terraform apply planout.txt again, and once I could see a resource group again, I ran plain-old terraform destroy. It had the same effect since whatever input destroy was using only had one “resource” to tear down, but I got fewer nastygrams.

Terraform destroy documentation

I suppose I need to study exactly where apply and destroy get their info, especially seeing as destroy doesn’t seem to make use of planout.txt the way I surgically asked apply to.
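My current understanding (worth verifying against the docs): destroy reads the state file, not a saved plan – planout.txt is only ever consumed by the one apply you hand it to. So inspecting state seems like the way to predict what destroy will remove:

```shell
terraform state list   # everything Terraform is tracking = everything destroy targets
terraform destroy      # builds & applies a deletion plan from terraform.tfstate
```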

Terraform / AWS

Got my AWS credits working, too – just have to be careful to get my CLI logged in w/ the right credentials and to use the only region that seems to come w/ my trial.

Also, have to be careful w/ what C:\users\MY_USERNAME\.aws\credentials is on my computer, since I jump around AWS accounts using that file.

# Configure AWS provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

# Configure AWS resources
# (note:  "ami-830c94e3" is the "AMI ID" of a West-2-specific chipset, disk storage type, OS version, etc.)
# ("ami-XYZ123" is the "AMI ID" of a CentOS 7 image.  Not sure it's still the right one.)
# LESSON LEARNED:  This creates an EC2 "volume" as a side effect, but doing "destroy" doesn't delete that volume.
resource "aws_instance" "tfinst" {
  ami           = "ami-XYZ123"
  instance_type = "t2.micro"
}

A big lesson learned the hard way about this code is that terraform destroy didn’t destroy the AWS EC2 “volume” that terraform apply planout.txt created as a side effect.

I will have to figure out how people manage intricacies like that.
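One knob I’ve since spotted in the AWS provider docs that might address the orphaned volume – an untested assumption on my part, since the default behavior depends on the AMI – is root_block_device’s delete_on_termination flag:

```hcl
# Untested sketch: ask AWS to delete the root volume when the instance dies
resource "aws_instance" "tfinst" {
  ami           = "ami-XYZ123"
  instance_type = "t2.micro"

  root_block_device {
    delete_on_termination = true
  }
}
```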

Where next

I’m still a little lost where to go from here to implement this project.

Ansible and HashiCorp: Better Together was interesting. This looks like it might be an implementation thereof but is a bit much for me right now. Xero elaborates on architecture with Terraform & Jenkins (both of these last 2 links found w/ Google search terraform jenkins ansible docker "hello world"). This blog, found the same way, might also add context.

I suppose … I suppose my next step should be working towards the “fuzzy middle” of this problem from the opposite (web app) end.

Forget Ansible & Jenkins. If necessary, forget Terraform for now.

Figure out what it takes to get sample.war up and running with the fewest IAC tools possible (but without cheating with other people’s docker files and such – although OK to use one as a learning tool).

Manually provisioning & deploying, if necessary, should give me a lot more context about what I will want to automate.
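As a tiny first bit of that manual configuration, here’s a hedged sketch of flipping Tomcat off port 8080 (the change the plan calls for). The helper name and the /opt/tomcat install path are my own inventions; the real edit is just a text substitution in conf/server.xml with the server stopped:

```shell
# change_port FILE OLD NEW: rewrite a Connector port in server.xml (server stopped!)
change_port() {
  sed -i "s/port=\"$2\"/port=\"$3\"/" "$1"
}

# Typical use – /opt/tomcat is an assumed install location:
# change_port /opt/tomcat/conf/server.xml 8080 8081
```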


I’m a little mystified whether I’ll eventually need to learn to build a Docker base image from scratch, rather than trusting public repositories.

I asked a colleague what they think we’ll end up doing.

If they say “hand-build,” that could be tough for me to learn, as I’m not sure I have a computer with enough space on it to install a VM and put an OS onto it, and I don’t want to burn through my free cloud credits too quickly.

They said that for many applications, another option besides “Docker Hub” or “DIY” is to trust your major cloud provider’s distributions of operating systems, JDK, etc. If the cloud provider offers an option to spin up a machine already running a given OS and JDK, you can often let them worry about keeping the base images and such clean.
