I recently returned from a very successful HashiConf 2019, where lots of new features were announced for the HashiCorp products. Here are some of the major announcements.
- Terraform Cloud (TFC)
- Rebranding of Terraform Enterprise SaaS to Terraform Cloud.
- TFC is all about collaboration. When more than one person starts working on a Terraform project, it requires backend management of the state file, and you should start orchestrating Terraform runs using a deployment pipeline. This is all now provided by Terraform Cloud!
- Free tier (up to 5 users)
- User interface
- Remote state management for storing, viewing, and locking of state files.
- VCS connection management
- Collaboration on runs
- Remote runs and applies
- Private module registry
- Paid tiers (more than 5 users)
- Both the paid tiers are available for free until 01.01.2020!
- TFC: Teams
- Create multiple teams
- Control permissions of users on those teams
- TFC: Teams & governance
- This tier is also available for free until 01.01.2020
- Use Sentinel and Cost Estimation
- More information and pricing on offerings available here
- More information here.
- Terraform clustering
- This is only available with Terraform Enterprise (TFE) and is currently in beta
- More information here.
- Terraform Cost Estimation
- This is available for both TFE and TFC
- Is executed between the plan and apply phases of a TF run.
- Can also use Sentinel to control costs with defined policies
- More information here.
- It definitely felt like Consul was the new shiny toy at this year's conference, and the related sessions were packed.
- HashiCorp Consul Service (HCS) on Azure
- Native provisioning of a Consul cluster into any region through the Azure marketplace.
- Although the Pong game live demo did not go as planned, I do see the value and potential for this product!
- Currently only available in private beta
- More information here
- Consul Enterprise now supports VMware NSX Service Mesh Federation
- Support for the Service Mesh Federation Specification.
- More information here.
Now back to what we are here for…Terraform Cloud!
I am not a developer, and I have been looking for a reason to use WSL for a while; I finally found a good use case: running Terraform with VS Code on Linux.
In my opinion Hashicorp’s Terraform is the de facto choice in the infrastructure as code space just like Kubernetes is for container orchestration. It provides the ability to version your infrastructure and automate the provisioning of your resources across different cloud vendors as well as on-premise.
Getting this working requires a couple of steps, which I will provide here. At the time of writing, I am running Windows 10 Pro, Version 10.0.18362 Build 18362.
- Open PowerShell as Administrator and run the following command to enable the WSL feature
- “Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux”
- Open the Microsoft Store and download your favorite distribution; I selected Ubuntu.
- Don't close the store just yet; wait for the installation to complete.
- You can also launch it from a command prompt by running “Ubuntu.exe” from the distro installation folder, or by selecting Ubuntu from the app menu.
- Create a UNIX username
- Create a UNIX password
- Now let's update our distro to the latest packages
- Run “sudo apt-get update”
- Run “sudo apt-get upgrade”
Install Terraform on the Linux distro:
- Run the following command to install unzip
- “sudo apt-get install unzip”
- Copy the link address to latest Linux 64-bit download from this page here
- Run the following commands to download and install Terraform
- “wget https://releases.hashicorp.com/terraform/0.12.7/terraform_0.12.7_linux_amd64.zip”
- “unzip terraform_0.12.7_linux_amd64.zip”
- “sudo mv terraform /usr/local/bin”
- Run the following command to verify it has been installed successfully
- “terraform version”
- Should show “Terraform v0.12.7” (based on the version I downloaded)
Install the Azure and AWS CLI on the Linux distro
This is not necessary, but super useful if you are deploying to these cloud vendors.
- Azure CLI installation steps
- Run the following command to verify its working
- “az --version”
- AWS CLI installation steps
- Run the following command to install
- “sudo apt-get install awscli”
- Run the following command to verify its working
- “aws --version”
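To confirm both CLIs in one pass, a quick check like this works (assuming the installs above completed):

```shell
#!/usr/bin/env bash
# Report whether the az and aws CLIs are on the PATH, plus their versions.
for cli in az aws; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli found: $("$cli" --version 2>&1 | head -n1)"
  else
    echo "$cli NOT found - check the installation steps above"
  fi
done
```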
Very happy to be acknowledged as a vExpert for another year! We have such a great community, and congrats to all the other 2019 vExperts. Keep up the good work, it's worth it.
Being a VMware Partner has its perks, and EMPOWER is definitely one of them! In previous years we had PEX, which was scheduled alongside VMworld, but attending the technical sessions the weekend before was tough and made for a very long week in Vegas. Last year this returned to its original standalone schedule with a great location and provided a lot more content to technical partners.
2019 is no different, and the event has definitely improved with the addition of Lisbon (Europe) and Singapore (Asia) as conference locations. VMware has also done a great job of listening to partner feedback to make this a standout event, with recently added sessions like:
- VMware’s Storage and Availability Vision and Strategy
- VMware’s Hybrid Cloud Vision and Strategy
- Partner VCDX Session: Customer Win with Hyper-Converged Infrastructure
- Partner VCDX Session: Customer Win – Architectural Considerations for SDDC Customer Wins
- Partner VCDX Session: Customer Win – Hybrid Cloud Use Cases
Here are some of the highlights that I think are worth mentioning and that you can provide to your manager as reasons for attending, other than this awesome letter of course.
- Exclusive access to the same content that is enabling the VMware field teams. All marketing fluff has been removed 😊
- Hands-on labs are available with first time access to brand new labs for VMC on AWS and PKS.
- Expert led Livefire workshops with technical content and training.
- Free VMware certification tests for all technical tracks, which can help your company achieve its competencies. Register for an exam here.
- Separation between technical and sales roles, with a dedicated sales conference on the last two days of the event
- Interaction with experts showcasing the latest VMware products in the Demo Zone
- Networking with other like-minded partners
- Team building! In many cases your teams are spread out across the country, and this is a great way to get together for some team building activities.
If these are not reasons enough already, I have $100-off registration codes available for technical passes to the conference. Please DM me via Twitter or LinkedIn, or email me via my web site.
Schedule Builder will be opened publicly to all registrants on Thursday, March 7th at 11AM PST.
It has been a while since my last post (I just had too much going on), but I have been putting it off for way too long, and I finally upgraded my vRA lab to 7.5. Here are my notes.
My distributed enterprise vRA 7.4 environment consists of the following components:
- vRA VIP
- vRA IaaS Manager VIP
- 2 x Windows vRA IaaS Manager Service servers
- vRA IaaS Web VIP
- 2 x Windows vRA IaaS Web servers
- 2 x Windows vRA DEM + Agent servers
- vRO VIP
- 2 x external vRO appliances
- External SQL database for vRA and vRO
- Running SovLabs extensibility software
There are two options to get to the desired state: an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA and migrating your data over (VMware calls this a side-by-side upgrade).
If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints you have to migrate!
Before upgrading, always make sure you have successful backups of all your nodes, and while you're at it, take snapshots of all the servers and back up your vRA and vRO databases! You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here. For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.
- Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5 and also consult the VMware Product Interoperability Matrix about compatibility with other VMware products.
- I also have the SovLabs plugins installed, so make sure to upgrade SovLabs to a vRA 7.5-compatible version. At the time of this post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.
vRealize Suite Lifecycle Manager (vRSLM) has now been around for a while, and if you are a vRealize or vCloud Suite license holder, this is definitely a product that should be part of your VMware portfolio. I am doing this a bit backward, because in my last post I showed how to upgrade your vRA environment using vRSLM, and only now will I show how to actually install vRA. This comes out of necessity, because one of my colleagues accidentally deleted all my lab servers 🙂
For this post, I am using the latest vRSLM 1.3 and will be deploying a distributed vRA 7.4.
- jvra01 – vRA appliance with embedded vRO (recommended design to use embedded instead of external vRO since 7.3)
- jvra02 – vRA appliance with embedded vRO
- jvraweb01 – vRA IaaS Web
- jvraweb02 – vRA IaaS Web
- jvramgr01 – vRA IaaS Manager
- jvramgr02- vRA IaaS Manager
Since vRSLM automates and simplifies the deployment of your VMware SDDC stack, most of your time will be spent on prerequisites, so let's start with that.
- Manually deploy 4 x vRA IaaS Windows Servers in your vCenter Server environment.
- Make sure they are added to the domain and that DNS and NTP are working.
- Disable UAC on all Windows servers. Make sure to reboot after disabling it.
- Make sure that IPv6 is disabled on all Windows servers
- Add the Windows service account under User Rights Assignment in Local Security Policies for “Log on as a service” and “Log on as a batch job” on all Windows machines.
- Verify the minimum resource requirements are met on all Windows servers. Set memory to at least 8 GB.
- SQL Database
- Make sure the SQL server is added to the domain
- Make sure the domain user is added to the SQL DB Logins list with the sysadmin privilege
- Load Balancer
- Make sure that the second member of each pool in the vRealize Automation load balancer is disabled.
There are also some scripts available for download that verify the prerequisites when you run the precheck during the creation of the vRA environment, so this can be done later as well.
- Ensure that the vRSLM appliance has the correct FQDN configured
- The command for correcting the hostname is “/opt/vmware/share/vami/vami_set_hostname <hostname>”
- After setting the correct hostname, verify it by running “hostname -f”; from version 1.3 of vRSLM you can also verify it from the settings page.
- Under vRSLM settings:
- Register with My VMware to access licenses, download Product Binaries, and consume Marketplace content.
- Download the vRealize Automation 7.4.0 product
- If you already have the OVA downloaded, you can import it under the Product Binaries tab.
- Verify that you have vRealize Automation binaries status as completed.
- If you are using self-signed certificates in your environment (not recommended), then create a self-signed wildcard certificate for vRealize Suite product deployments.
- It is best to generate a single SAN certificate with all the product or management virtual host names, or a wildcard certificate, and provide this certificate when you create the environment for the first time. This ensures support for post-provisioning actions such as Add Products and Scale Out.
- Configure NTP Servers for deploying products in environments
- Under Data Centers
- Create a Data Center with an associated location.
- Add the vCenter Server where the vRA environment will be deployed to.
- Make sure the data collection is successful.
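The FQDN fix from the checklist above boils down to two commands run on the vRSLM appliance. The hostname below is a placeholder; substitute your own FQDN.

```shell
# Run on the vRSLM appliance; "lcm01.lab.local" is a placeholder FQDN.
/opt/vmware/share/vami/vami_set_hostname lcm01.lab.local

# Verify the change took effect
hostname -f
```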
As with most of my other blog posts, I am just providing a step-by-step guide for quick reference. Please refer to the documentation here for detailed information, and please read the known issues section of the vRealize Automation 7.4 Release Notes, which is updated regularly and helps you better prepare for the upgrade.
My environment consists of a distributed vRealize Automation running version 7.2 with an external clustered vRealize Orchestrator, which I am upgrading (not migrating) to 7.4 Build 8182598. The process will be similar if you have vRA 7.1 or greater. If you have an older version, refer to VMware's documentation here.
The in-place upgrade process for the distributed vRA environment happens in three stages, in the following order:
- vRealize Automation appliances
- IaaS Web server
- vRealize Orchestrator
Prerequisites before we start:
- Make sure all VMware products are compatible with vRA’s current and new release by consulting the Product Interoperability Matrix.
- Verify enough storage space on servers
- At least 5 GB on IaaS, SQL and Model Manager
- At least 5 GB on the root partition of the vRA appliance
- 5 GB on the /storage/db partition for the master vRA appliance
- 5 GB on the root partition for each replica virtual appliance
- Verify that MSDTC is enabled on all vRA and associated SQL servers.
- Check that the service “Distributed Transaction Coordinator” is running.
- The primary IaaS Website node (where the Model Manager data is installed) must have Java SE Runtime Environment 8 (64-bit), update 161 or later installed; also verify that the JAVA_HOME environment variable is set correctly after the upgrade.
- If using embedded Postgres DB in a distributed vRA environment
- On master vRA node, navigate to /var/vmware/vpostgres/current/pgdata/
- Close any open files in the pgdata directory and remove any files with a .swp suffix
- Verify that all files in this directory have the correct ownership: postgres:users
- In a distributed vRA environment, change Postgres synchronous replication to async.
- Log in to the vRA appliance management console and click vRA Settings > Database.
- Click Async Mode and wait until the action completes.
- Verify that all nodes in the Sync State column display Async status
- I have only a master and a replica, so I am already async, but just FYI
- In vRA tenants verify the following
- Make sure that no custom properties have spaces in the names.
- All saved and in-progress requests have finished successfully
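To make the disk-space and Postgres prerequisites above repeatable, here's a small check script you could run on the master vRA appliance. The 5 GB threshold and the pgdata path come straight from the list above; treat it as a starting point, not an official VMware check.

```shell
#!/usr/bin/env bash
# Pre-upgrade sanity checks for a vRA appliance (sketch).

# 1) At least 5 GB free on / and /storage/db (per the prerequisites above)
for fs in / /storage/db; do
  avail=$(df -BG --output=avail "$fs" 2>/dev/null | tail -n1 | tr -dc '0-9')
  if [ -n "$avail" ] && [ "$avail" -ge 5 ]; then
    echo "OK: ${fs} has ${avail}G free"
  else
    echo "WARNING: ${fs} is missing or has less than 5 GB free"
  fi
done

# 2) Remove leftover .swp files from the pgdata directory and flag any
#    files not owned by postgres:users
PGDATA=/var/vmware/vpostgres/current/pgdata
if [ -d "$PGDATA" ]; then
  rm -f "${PGDATA}"/*.swp
  find "$PGDATA" -mindepth 1 -maxdepth 1 \
    \( ! -user postgres -o ! -group users \) -print
fi
```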
Additional requirements before we start: