Historically, TEAM IM has been an Oracle-centric shop, with the majority of clients running intranet applications on-prem. Over the past 20+ years, we have built and hosted countless enterprise applications on Oracle’s WebLogic architecture, and we are quite familiar with what to expect regarding topics such as security and performance.
As new technologies have emerged, TEAM IM has transitioned to building stateless web applications with RESTful interfaces using the Angular 2+ and Spring Boot frameworks. We have traveled a long and arduous journey to figure out how best to host our web applications with the following factors in mind: security, performance, cost, and scalability.
Amazon EC2
Our initial transition from WebLogic included hosting both the UI and API layers on a Tomcat instance on Amazon EC2. We were familiar with administering our own servers, and this approach gave us total control of the environment.
This configuration allowed us to host 4-5 of our smaller applications per Tomcat instance and have dedicated instances for our more heavily used applications. This worked for the first year, but managing our own servers felt archaic, and we craved a more sophisticated solution.
Azure Web App Services
The next transition involved hosting the UI layer in Azure Storage Accounts (blob storage) and the API layer in Azure Web App Services (Tomcat). We proxied the services with Azure Front Door, which was quick and painless to set up.
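For readers curious what the static-site half of this setup looks like, the gist can be sketched with a couple of Azure CLI commands. The account and path names below (mystorageacct, dist/my-app) are placeholders, not our actual configuration:

```shell
# Enable static website hosting on the storage account
# (placeholder account name; adjust to your own).
az storage blob service-properties update \
  --account-name mystorageacct \
  --static-website \
  --index-document index.html \
  --404-document index.html

# Upload the compiled Angular bundle to the special $web container.
az storage blob upload-batch \
  --account-name mystorageacct \
  --source dist/my-app \
  --destination '$web'
```

Front Door then sits in front of the storage endpoint and the Web App Services, routing UI and API traffic under a single hostname.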
With this configuration, we hosted Dev, Test, and Prod environments in separate Tomcat containers that shared processing power. This worked fine for a single-node deployment but became too complex when we attempted to scale the larger sites. Front Door also ended up being prohibitively expensive when running so many environments.
Azure Kubernetes Service
The next evolution included Dockerizing our UI and API layers and hosting them with Azure Kubernetes Service (AKS). With Kubernetes, all resources are shared in a cluster. Each client has multiple pods (applications) running in the cluster for high availability. When a heavy load is introduced, the cluster automatically adds or removes pods based on current usage, ensuring we always have enough resources available.
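The deploy-and-autoscale flow described above can be sketched with a few commands. The image and deployment names (myregistry.azurecr.io/my-app) are illustrative placeholders, and the thresholds are examples rather than our production values:

```shell
# Build the Dockerized application and push it to a container registry
# (placeholder registry and image names).
docker build -t myregistry.azurecr.io/my-app:1.0 .
docker push myregistry.azurecr.io/my-app:1.0

# Run two replicas in the cluster for high availability.
kubectl create deployment my-app \
  --image=myregistry.azurecr.io/my-app:1.0 \
  --replicas=2

# Let Kubernetes add or remove pods (between 2 and 10)
# based on CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```

In practice the same shape is usually captured declaratively in Deployment and HorizontalPodAutoscaler manifests, but the commands show the moving parts.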
We are very pleased with the current architecture, and we have no plans to change it in the near future.