ISV Deploys Repeatable Process for Migrating On-Prem Customers to New Version of Application
Client Overview
The client provides a low-code workflow automation platform that helps companies solve workflow problems. Customers can set up the platform and start streamlining workflows immediately as a SaaS solution, or host it on-prem, depending on their needs.
The Objective: Design a Repeatable Process for Migrating On-Prem Customers to the New Platform Version
The client upgraded its platform from the current version (built on Windows) to a new version (built on Kubernetes). The new version takes advantage of microservices for greater scalability and resilience while providing more features to users.
From a scalability perspective, it was easier for the team to operate the new version as a SaaS by deploying it in Amazon EKS.
Challenge: Struggles with Kubernetes
The client had numerous customers who were not using the SaaS version and were opting to host on-prem instead. These customers' teams had expertise in Windows, not Kubernetes or Linux.
Without that expertise, setting up a Kubernetes cluster, deploying the client’s platform on top of it, and linking it to their database would have been extremely difficult.
The client needed a way to package what they deployed in the cloud for their customers' corporate clouds (Azure, AWS, and Google Cloud) and on-prem environments (VMware and bare metal). They also needed help with implementation, managed support, and/or training for these environments.
The Solution: Build a System for Repeatable Kubernetes Deployment
Shadow-Soft provided professional services to automate the Kubernetes deployment of the client's platform for their on-prem customers.
We built the infrastructure to automate the on-prem deployment of the new platform version. Then, we worked with the client's customers, helping them migrate to the new Kubernetes-based version while keeping their existing platform settings and features.
Our Process for Setting Up the ISV Infrastructure
We started by interviewing the client's team and working with them to map out the deployment model for their customers who were not using the SaaS-based platform.
Our team identified the target environments: AWS, Azure, VMware, and bare metal. Setting up the infrastructure was straightforward: we built automation for EKS and AKS, then built automation for virtual machines in VMware.
We deployed the Kubernetes distribution on that infrastructure and stood up everything the team required to run the SaaS platform in an on-prem environment.
For non-cloud-based environments, we recommended RKE2 as the Kubernetes distribution because it is easy to set up, lightweight, flexible, and offers hardened default security configuration and strong support for this kind of infrastructure. We then automated RKE2 cluster creation across these environments.
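To give a sense of what that automation might look like, below is a minimal Ansible sketch for installing an RKE2 server node. The host group name, version pin, and single-node scope are illustrative assumptions, not the client's actual code.

```yaml
# Illustrative only: install and start an RKE2 server node.
# The host group, version pin, and single-node scope are assumptions.
- name: Install RKE2 server
  hosts: rke2_servers
  become: true
  vars:
    rke2_version: "v1.28.9+rke2r1"   # example version pin, not the client's
  tasks:
    - name: Download the RKE2 install script
      ansible.builtin.get_url:
        url: https://get.rke2.io
        dest: /tmp/rke2-install.sh
        mode: "0755"

    - name: Run the installer for the server role
      ansible.builtin.command: /tmp/rke2-install.sh
      environment:
        INSTALL_RKE2_VERSION: "{{ rke2_version }}"
        INSTALL_RKE2_TYPE: server

    - name: Enable and start the rke2-server service
      ansible.builtin.systemd:
        name: rke2-server
        enabled: true
        state: started
```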
After we set up the infrastructure, we worked with the client to identify the subsystems required to support their platform and built out a reference framework for those needs. We automated those components for their customers as part of the installation process. We also set up validation testing to ensure the client's platform was running properly.
With all those pieces in place, we automated each required component modularly. For example, the system allows users to select a storage provider (on-prem or in the cloud) depending on their needs. It also allows on-prem customers to expose the client's platform outside the cluster, similar to a cloud deployment.
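One way to express that kind of modularity is an umbrella Helm chart whose subsystem dependencies are toggled per environment. The sketch below is illustrative only; chart names, versions, and value keys are assumptions rather than the client's actual charts.

```yaml
# Illustrative umbrella chart (Chart.yaml): subsystems are enabled or disabled
# per environment via conditions. Chart names and versions are assumptions.
apiVersion: v2
name: platform-bundle
version: 0.1.0
dependencies:
  - name: longhorn              # on-prem storage provider; off when a cloud storage class is used
    version: "1.6.x"
    repository: https://charts.longhorn.io
    condition: longhorn.enabled
  - name: ingress-nginx         # exposes the platform outside the cluster, as in a cloud deployment
    version: "4.x.x"
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
  - name: redis
    version: "18.x.x"
    repository: https://charts.bitnami.com/bitnami
  - name: mongodb
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
```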
Once we built out the above, we documented it: system requirements, external changes needed (network, DNS, etc.), how all the different automation components work, the options available, and when to use them.
Throughout, we kept the environments as similar as possible, resulting in a simple, easy-to-maintain infrastructure for the client's team.
ISV Deployment Process:
- Review and Assess
- Review options for automating upgrades, maintenance, and backup of the client's future Kubernetes environments.
- Determine automation solutions for deploying infrastructure components of the customer’s Kubernetes environment: Terraform, Ansible, and RKE2.
- Determine the cost of installation for customers.
- Determine whether there is a migration path for customers from the old system to the new system and how that impacts future deployments.
- Automate Kubernetes Deployment
- Develop standardized code to deploy infrastructure for RKE2 for each environment type (VMware, AWS, and Azure).
- Deployment code includes Terraform for each IaaS and Ansible for RKE2 setup.
- Create or leverage existing Helm charts for the following:
- Redis, MongoDB, MinIO, Longhorn, and Microsoft SQL Server
- Integrate new Helm charts with the client's existing Helm charts
- Build an operator to maintain the state of all Helm charts during deployment
- Documentation, Testing, and Backup
- Create validation testing for future deployments to ensure the client's platform is operational.
- Create environment backup and recovery procedures using Velero for persistent workloads (see the sketch below).
- Document deployment, backup/recovery strategy, validation testing procedure, and upgrade methodology.
This process took 10 weeks.
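As a concrete illustration of the Velero backup step above, a recurring backup of persistent workloads can be declared as a Schedule resource. The namespace, timing, and retention values below are assumptions for illustration, not the client's actual configuration.

```yaml
# Illustrative Velero schedule for persistent workloads. The namespace,
# timing, and retention are assumptions, not the client's configuration.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: platform-nightly
  namespace: velero
spec:
  schedule: "0 2 * * *"            # nightly at 02:00
  template:
    includedNamespaces:
      - platform                   # assumed namespace running the platform
    snapshotVolumes: true          # capture persistent volumes (databases, object storage)
    ttl: 720h0m0s                  # keep backups for 30 days
```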
Our Process for Setting Up the On-Prem Deployment for Customers
With this system in place, we worked with the client’s customers to deploy the new platform version in the on-prem environment using the system we created.
We embedded one of our engineers in the customer's team to lead the full deployment, from setting up the VMs to deploying RKE2 and all the prerequisites for the client's platform.
We also added features outside the predetermined deployment based on individual customer needs. For example, for one customer we added a health check system and a log-monitoring stack that integrates with their existing tooling.
This custom approach ensured new customers not only moved over to the new on-prem version of the platform smoothly but also maintained the custom elements they required to get the most value from the platform.
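A health check system like the one mentioned above can be as simple as a scheduled in-cluster probe. The sketch below is purely illustrative; the service name, path, and schedule are assumptions, not the customer's actual setup.

```yaml
# Illustrative health check: a CronJob that probes an assumed /healthz
# endpoint on the platform's API service every five minutes.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: platform-health-check
  namespace: platform
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: probe
              image: curlimages/curl:8.8.0
              args: ["-sf", "http://platform-api.platform.svc/healthz"]   # assumed service and path
```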
Customer Deployment Process:
- Deploy a Kubernetes cluster with the client's application to the customer's virtualized environment.
- Validate that the environment is operational (see the validation sketch below).
- Build new features as needed.
- Ensure new features run in the new environment.
This process takes a few hours to one week, depending on the complexity of the customer's use of the platform.
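For the validation step, post-deployment checks can be expressed as Ansible tasks against the new cluster. This is a minimal sketch; the namespace, label selector, and the assumption that Deployment availability is a sufficient check are ours, not the client's.

```yaml
# Illustrative post-deployment validation: confirm the platform's Deployments
# report all replicas available. Namespace and labels are assumptions.
- name: Validate the platform deployment
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Gather the platform's Deployments
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: platform
        label_selectors:
          - app.kubernetes.io/part-of=platform
      register: deployments

    - name: Fail if any Deployment is not fully available
      ansible.builtin.assert:
        that:
          - item.status.availableReplicas | default(0) == item.spec.replicas
        fail_msg: "{{ item.metadata.name }} is not fully available"
      loop: "{{ deployments.resources }}"
```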
Key Features
When the customer is ready to move to the new version of the platform, they tell us where they want it deployed. We then collect limited-access credentials from them using an automated process, and our automated code base quickly spins up the infrastructure to match their needs.
After a short requirements-gathering session, we can set up the new version in under an hour using the code base, or we can give the customer access and guide them through the process.
For new customers, this concludes the process.
If they are an existing customer migrating to the system, we work with the client and customer to ensure the environment fits their existing systems.
Features:
- Controlled access, keeping existing systems secure.
- Rapid deployment (under one hour).
- Custom deployment fit to existing systems.
- Assistance migrating the customer's database into the new system.
- Support and knowledge transfer for the migration.
- Collaboration with the client to validate system parameters.
Roadblocks
The process of setting up the infrastructure and automating deployment is very quick. However, navigating internal blockers and unique setups slowed the process down for some customers. To keep things moving forward, we communicated frequently with the client.
At the start of the engagement, there was no VMware environment in which to build and locally deploy the code base. Our team had to install the hardware and get VMware running on-prem so we could build out the solution for the client.
Navigating these roadblocks added more time to the project.
Tools
We built the resources (VMs and cloud-based infrastructure) with Terraform. We built all the components (installing RKE2, configuring the storage provider, exposing the platform via ingress, etc.) using Red Hat Ansible.
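One common way to connect the two tools is to have Terraform emit an Ansible inventory for the machines it creates. The inventory below is a hypothetical sketch with placeholder host names, addresses, and user; it is not the client's actual hand-off format.

```yaml
# Illustrative inventory generated from Terraform outputs and consumed by the
# Ansible playbooks. Host names, addresses, and the SSH user are placeholders.
all:
  children:
    rke2_servers:
      hosts:
        server-0:
          ansible_host: 10.0.10.11
    rke2_agents:
      hosts:
        agent-0:
          ansible_host: 10.0.10.21
        agent-1:
          ansible_host: 10.0.10.22
  vars:
    ansible_user: ubuntu           # assumed VM image default user
```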
Results: “Easy Button” for Deployment
We worked as “the easy button” for the client’s customers, rapidly deploying the on-prem infrastructure needed to run the new version. As a result, they could work with minimal disruption or frustration as we set up the new infrastructure.
The repeatable process we built allows the client to offer a predefined implementation service to customers, improving customer satisfaction without excessive cost.
We continue to work alongside the client with their customers. While we're seen as a separate entity from the client, we work hard to strengthen their customer relationships through seamless delivery and by building out additional infrastructure features as needed.
Key results for the client:
- They can offer the new version of the platform to all customers, regardless of the infrastructure.
- They can use more complex technology to build a more robust platform with new features without worrying about infrastructure complexities.
- They do not need to manage customer infrastructure migration, allowing their team to focus on platform development instead of learning and mastering Kubernetes.
- Their customers get an affordable and accessible migration to the new version.
- Their customers get access to the newest features on the latest platform.
- Their customers can migrate to the new version without learning Kubernetes.
Next Steps: Continued Support for ISV’s Customers
As we continue to deploy the new version of the platform for the client's customers, we'll keep adding the new features customers request alongside each deployment.
These efforts enhance the client’s offering, strengthening relationships with current and new customers.