What's the deal with IBM WebSphere Application Server?
05 Jun 2025
When I make a web site, the programming language I choose is typically Node.js (a flavor of JavaScript that runs on a server operating system’s (“OS”) command-line interface (“CLI”), rather than needing to run inside of a web browser). I’ve played around a bit with Python, C# (via .NET), PHP (via WordPress), and Ruby (via Jekyll), but that’s about it. Despite years of Java-like programming experience via Salesforce’s Apex, I’ve never built a web site from scratch using Java.
Since I’ve been working a lot on the Azure cloud, if I wanted to write a web server using Java, I’d probably look at programming in its Java SE, Apache Tomcat, or WildFly / JBoss variants, since those are the variants of Java that Azure App Service can run, and I have no interest in managing virtual machines and their operating systems.
However, just as server-side JavaScript comes not only in the widespread Node.js flavor but also in Deno and Bun variants, as well as various proprietary “serverless function” variants from major cloud providers, there are a lot more variants out there for writing Java that’ll listen for HTTP requests and return an HTTP response.
One of them is IBM’s WebSphere Application Server (“WAS”).
Like other Java-based web server frameworks such as Tomcat, once IBM WAS takes note of an incoming HTTP request, it delegates the work of figuring out what to do about it to a Java Web Application Archive (“WAR”) file written by a software developer.
But it’s a little more complex than others such as Tomcat. Let’s dive into some interesting pieces of trivia about WebSphere / WAS.
WAS cannot run on fully managed cloud platforms
Apparently, WAS can’t run on the Azure App Service (“AAS”) platform as a service (“PaaS”) because WAS has a tendency to leverage aspects of its host OS (and/or aspects of the other software also installed onto that host OS, like the Java Virtual Machine (“JVM”)) to “statefully” store and retrieve information about the way it’s currently configured.
AAS doesn’t expose that level of OS access to code running on it, so WAS can’t run on AAS.
I presume the same applies to AWS Elastic Beanstalk and to Google Cloud Run.
WAS also needs host OS access because unlike, say, Tomcat, it manages “not just HTTP traffic, but also application deployment, transaction processing, security, clustering, session management, and integration with databases and other enterprise systems. These capabilities require deep integration with the operating system for process management, file system access, networking, and resource allocation.”
In fact, WAS apparently doesn’t even always serve web content directly, but instead often delegates that work to yet another web server installed on the host OS (e.g. “Apache HTTP Server, Netscape Enterprise Server, Microsoft Internet Information Services (IIS), IBM HTTP Server for i5/OS, IBM HTTP Server for z/OS, and IBM HTTP Server for AIX/Linux/Microsoft Windows/Solaris”).
Container / VM configuration
To run WAS correctly, the virtual machine (“VM”) or container computing environment on which you’re running WAS needs to be set up in a particular way.
Note:
- If your host OS will be a container, you do all of this as part of “building” the “container image” before publishing it as a “version” of that container image into a “container registry.” You’ll always be just spinning down containers running your old container image, and spinning up containers running your new container image, no matter whether you’re:
- Just spinning up a new workload for the first time, or:
- Releasing new fixes / features / configuration settings.
- If your host OS will be a VM, you do all of this as part of either:
- Spinning up the VM for the first time, or:
- Each time you release a fix / feature for your application or change the VM’s configuration settings.
Install a Java runtime
You probably want to use some sort of infrastructure as code tool to automate getting a Java runtime (JVM) installed onto the VM/container.
Install IBM Installation Manager
Apparently, WAS is typically installed onto machines using IBM Installation Manager, so you probably need to install that onto the VM/container first.
You probably want to use some sort of infrastructure as code tool to automate downloading, unpacking, and running this installer.
Install IBM WAS
Using IBM Installation Manager, install WAS onto the VM/container.
You probably want to use some sort of infrastructure as code tool to automate downloading, unpacking, and running this installer.
Expose shared files
You’ve probably got your WAS license file on some sort of shared network drive, so you’ll want to mount that drive into the VM/container’s filesystem and make sure that WAS has appropriate read access to the license file. Otherwise, WAS might refuse to start up, on account of believing it’s an unlicensed copy.
Similarly, if you’re using WAS to control a whole cluster of VM/container servers, there will probably be some shared files that you’ll store on a shared network drive (e.g. your *.war files). So you’ll also want to mount that drive and grant appropriate access to its files and folders.
You’ll also probably need to create some “service users” in your VM/container’s OS, configure some “system services” of that OS, modify some “system files” in that OS, and set some “environment variables” in that OS.
You probably want to use some sort of infrastructure as code tool to automate these VM/container user configuration, filesystem configuration, and access control steps.
Application deployment
WAR file deployment
Once IBM WAS is installed and running on your VM/container, having a copy of your application’s .war file visible within its filesystem might not suffice for WAS to actually run it.
Apparently, deployment is often done by executing WebSphere’s CLI tool/script called wsadmin.sh.
You probably want to use some sort of infrastructure as code tool to automate executing that script.
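As a rough illustration of what that automation might look like, here’s a hedged sketch in Java of a pipeline step that assembles a wsadmin.sh invocation. The install path, WAR path, and app name are hypothetical placeholders, and the Jython one-liner passed via -c (using wsadmin’s AdminApp and AdminConfig objects) is just one common pattern, not the only way to deploy:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: build the command line for a wsadmin.sh "install a WAR"
// call so an automation tool (or Java-based pipeline step) can execute it.
// Paths and names below are illustrative, not real.
public class WsadminDeploy {

    // Assemble the argument list; the -c Jython one-liner installs the WAR
    // under the given app name and saves the WebSphere configuration.
    public static List<String> buildCommand(String wasHome, String warPath, String appName) {
        List<String> cmd = new ArrayList<>();
        cmd.add(wasHome + "/bin/wsadmin.sh");
        cmd.add("-lang");
        cmd.add("jython");
        cmd.add("-c");
        cmd.add("AdminApp.install('" + warPath + "', ['-appname', '" + appName + "']); AdminConfig.save()");
        return cmd;
    }

    public static void main(String[] args) {
        List<String> cmd = buildCommand("/opt/IBM/WebSphere/AppServer", "/mnt/shared/myapp.war", "myapp");
        System.out.println(String.join(" ", cmd));
        // In a real pipeline you'd actually run it, e.g.:
        // new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```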
Host OS virtualization options
Note: as long as you’re already pondering the “containers vs. VMs?” decision, you might want to also ask yourself whether the web application in question even really needs to run on IBM WAS, or whether a simpler, more PaaS-host-friendly Java framework might suit your needs better.
(Why bother installing, configuring, and maintaining WAS and its host OS if you don’t have to?)
Containers
You might want to consider running WAS on containers rather than VMs.
Apparently IBM has been trying to improve container and cloud support since introducing the “Liberty” versions of WAS.
Virtual machines
Here are some reasons you might still want to use classic long-lived virtual machines instead of immutable throwaway-able containers as a host OS for WAS:
Container skills
Your team doesn’t have the skill to securely and reliably manage container-based servers.
OS kernel isolation
Nitpicky security or compliance rules at your organization mean you’re not allowed to let different containers share the same host OS kernel and need better resource isolation than containers offer.
OS miscellany
You’ve got a really quirky way you’re using WAS that requires host OS configurations that just can’t be done easily in containers, but can be done easily in VMs.
Or you’ve got a really quirky way you’re using WAS that absolutely positively requires making ad-hoc OS-level modifications, rather than modifying the settings by simply replacing containers configured the old way with containers configured the new way. (Perhaps even with load balancing and a “blue-green deployment” pattern to avoid the application running on WAS experiencing any downtime.)
Distributed computing orchestration
You’ve got a way of using WAS in a distributed fashion that would make it extremely toilsome to change over from traditional WebSphere Network Deployment (“ND”) clustering/orchestration to a container-native distributed computing orchestration architecture.
IBM WAS strong suits
For a “greenfield” (brand new) Java application today, you might not even want to use WAS. You might want to choose a lighter-weight Java application server / framework, and/or choose to develop for a cloud-native platform.
However, here are some great reasons to consider WAS for a project:
Legacy code and WebSphere expertise
If you’ve already got legacy code running on WAS, and in-house WebSphere expertise to match, it’d be too darned much toil to refactor it.
Licensing
Apparently a WebSphere license comes with a license to run the Java Enterprise Edition (“Java EE”) and support from IBM for it?
Similarly, if your compliance rules force you to have certain kinds of commercial support, long-term maintenance, or certifications that only IBM can provide, that might be a great reason to develop on WAS.
Distributed computing orchestration
Being even older than Kubernetes, WAS already thought a long time ago about clustering, failover, scalability features, etc., and does it “batteries included.”
You might not want to have to reinvent the wheel and learn load balancing, container orchestration, etc. when WAS can just … do all of that for you.
(On the other hand, it’s 2025, so you might not need to learn container orchestration if your workload is safe to run in the cloud. There’s probably a cloud-provided PaaS that will take care of that for you. In theory, all you’d need to do would be to build your container image and publish it into a container registry – the cloud PaaS should be able to take it from there.)
IBM integrations
If you’ve got an IBM mainframe, or IBM DB2 databases, there’s a lot of IBM-authored middleware (e.g. connection pooling and advanced JDBC features) that WAS integrates with nicely right out of the box.
For applications that need to integrate with such products, WebSphere’s probably gonna be way more fun to configure and write Java code for than, say, Tomcat.
Replatforming tips
Here’s a fun conversation I had with an LLM:
“Let’s talk refactoring.
“Given a legacy app that doesn’t integrate with IBM products but needs these kinds of orchestration advantages, what’s involved in moving to some sort of option like you mentioned?
“I presume that for starters, although WAR files are portable, you’re gonna have to edit your source code that becomes the WAR file because it probably made assumptions about its runtime.
“What else?
“Let’s talk mostly about Azure App Service or about containers on Azure PaaS’es.”
Moving a legacy Java app (not tied to IBM products) to Azure App Service or containers on Azure PaaS involves several layers of work—some technical, some organizational. While Java WAR files are technically portable, most legacy apps make assumptions about their runtime environment that require code and configuration changes for a successful migration. Here’s what’s typically involved:
1. Assessment and Planning
- Inventory and Analyze Dependencies: Identify all external dependencies, integrations, and OS-level assumptions (e.g., file paths, environment variables, local storage, custom scripts) [1][2][3][4].
- Choose Migration Path: Decide between “lift-and-shift,” partial refactoring, or full re-architecture. For Azure App Service or containers, you’re usually looking at refactoring or re-platforming, not a pure lift-and-shift [2][3][5][4].
2. Code and Configuration Refactoring
- Externalize Configuration: Legacy apps often use hardcoded file paths or expect config files on local disk. Move these to environment variables or Azure App Service/Container configuration settings [2][3][5].
- File I/O Adjustments: Azure App Service and containers have ephemeral local storage. If your app writes files locally (logs, temp files, uploads), you’ll need to redirect these to Azure Blob Storage or another persistent service.
- Session Management: If your app stores session state on the local filesystem or in-memory, switch to a distributed cache (e.g., Azure Cache for Redis) to support scaling and failover.
- Database Connectivity: Update JDBC URLs and drivers to connect to managed Azure databases (e.g., Azure SQL, Azure Database for PostgreSQL/MySQL, or Azure-managed DB2 if available).
- Security and Secrets: Move sensitive data (passwords, API keys) to Azure Key Vault or App Service secrets management.
3. Runtime and Platform Assumptions
- OS-Level Access: Remove or refactor any code that expects direct OS access (e.g., shell scripts, native binaries, cron jobs). Azure App Service restricts OS access; containers offer more flexibility, but you still need to package everything needed inside the image [2][3][6].
- Startup Scripts: App Service and containers have specific entry points. You may need to adapt your app’s startup scripts or use a custom Dockerfile for containers.
4. Packaging and Deployment
- WAR to JAR or Docker: For Azure App Service, you can deploy a WAR file directly. For containers, you’ll need a Dockerfile that builds an image with your app and its dependencies [7][6].
- CI/CD Integration: Set up automated builds and deployments using Azure DevOps, GitHub Actions, or another pipeline tool.
5. Testing and Optimization
- Cloud Readiness Testing: Use Azure’s testing tools (Azure Monitor, Application Insights) to validate that the app works as expected in the new environment [2][3].
- Performance Tuning: Adjust scaling, memory, and CPU settings for App Service or container instances. Monitor and tune for cloud-specific performance issues.
6. Operational Changes
- Logging and Monitoring: Integrate with Azure Monitor, Log Analytics, and Application Insights for centralized logging and alerting.
- Backup and Disaster Recovery: Implement Azure-native backup and DR solutions.
- Scaling and Orchestration: For App Service, scaling is mostly automatic. For containers, configure scaling rules and health checks in Azure Kubernetes Service (AKS) or Azure Container Apps [7][6].
7. Team Training and Process Updates
- Cloud Skills: Upskill your team on Azure services, PaaS/container management, and cloud security best practices [1].
Key Takeaways
- Yes, you’ll almost always need to refactor code—especially for configuration, file storage, and OS-level assumptions.
- Session state, file I/O, and external dependencies are the most common pain points.
- Azure App Service is best for traditional web apps with minimal OS dependencies; containers (AKS, Azure Container Apps) are better for apps needing more control or with complex dependencies [7][6].
- Testing and optimization in the cloud are crucial before going live [1][2][3].
Migrating to Azure PaaS or containers is very achievable, but expect to invest in refactoring and modernization to fully leverage orchestration, scalability, and managed services.
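The “externalize configuration” advice above is one of the cheapest changes to sketch in code. Here’s a minimal, hedged Java example, assuming settings arrive as environment variables (which both Azure App Service application settings and container env vars surface to the process); the setting names are illustrative, not defined by WAS or Azure:

```java
// Hypothetical sketch of externalized configuration: read settings from
// environment variables with an explicit fallback, instead of hardcoding
// paths or bundling environment-specific property files into the WAR.
public class AppConfig {

    // Look up a setting by name; fall back to a default when it isn't set.
    public static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // The names below are illustrative only.
        String jdbcUrl = get("JDBC_URL", "jdbc:h2:mem:dev");
        String uploadDir = get("UPLOAD_DIR", "/tmp/uploads");
        System.out.println("jdbcUrl=" + jdbcUrl + " uploadDir=" + uploadDir);
    }
}
```

The same lookup works unchanged on a VM, in a container, or on App Service; only where the variable gets set differs.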
Pre-replatforming tips
Sound like too much work? I thought so, too. Here’s the followup question I wrote the LLM:
“This is great, thanks.
“That sounds like too much toil for a lot of teams.
“But adding more infrastructure-configuration and more app feature deployment via source control + CI/CD could probably easily be slipped into a lot of software development lifecycles for legacy WebSphere apps.
“What architectural mini-refactors, while a team is in the systems doing work anyway, might be quick and easy now?
“What might help reduce later toil if a team ever also wanted to later refactor/replatform legacy WebSphere apps that don’t have tight IBM coupling off to some sort of cloud PaaS?”
Here are some architectural mini-refactors you can make now—while improving infrastructure configuration and CI/CD for your legacy WebSphere apps—that will make any future move to Azure App Service or Azure container PaaS much smoother, even if you’re not planning a full migration yet:
1. Externalize Configuration
- Move all environment-specific settings (URLs, credentials, feature flags) out of static files and into environment variables or external configuration sources.
- Avoid hardcoding file paths, hostnames, or credentials in code or bundled config files.
- Prepare to use Azure App Service’s Application Settings or container environment variables for config injection later [8].
2. Centralize and Externalize Secrets
- Store sensitive data (passwords, API keys) outside of your WARs and property files.
- Use a secrets management tool now (even if not Azure Key Vault yet) so you can swap in Azure Key Vault references easily in the future [8].
3. Reduce Server-Level Dependencies
- If your app relies on server-level classpath libraries, shared JNDI resources, or global data sources, start moving these into your application package or externalizing them.
- Aim for a model where each WAR is self-contained, reducing reliance on shared server resources [8].
4. Abstract File Storage
- Refactor any code that reads/writes to local disk (uploads, logs, temp files) to use a storage abstraction.
- Prepare to redirect file storage to Azure Blob Storage or another cloud storage service in the future, since local disk is ephemeral in PaaS/container environments.
5. Session Management
- If your app stores session state in memory or on local disk, consider moving to a distributed session store (like Redis) or at least abstract session management so you can switch easily later [9].
6. Modularize Scheduled Jobs
- Separate scheduled/batch jobs from the main application logic. Move them into standalone modules or services.
- This will make it easier to migrate them to Azure Functions, Logic Apps, or separate containers later [8].
7. Standardize Logging
- Use a logging framework that supports external sinks (e.g., Log4j, SLF4J with appenders for cloud logging).
- Avoid writing logs to local disk only; prepare to send logs to Azure Monitor or another centralized logging service [10].
8. Database Connectivity
- Parameterize JDBC URLs, credentials, and connection pool settings.
- Prepare for cloud-managed databases by not assuming static IPs or local hostnames; use DNS and credentials that can be swapped for Azure SQL or other managed services [9].
9. Container-Ready Practices
- Avoid OS-specific dependencies, custom shell scripts, or hardcoded assumptions about the runtime environment.
- Use standard Java startup scripts and document any required OS packages or environment setup in a way that would translate easily to a Dockerfile [8].
Why These Steps Help
- They reduce coupling to the WebSphere runtime and on-premises infrastructure, making your app more portable.
- They align with best practices for cloud migration and containerization, minimizing future rework.
- They are often quick wins when you’re already modernizing deployment and configuration management.
Summary:
Focus on externalizing config and secrets, reducing server-level dependencies, abstracting file/session storage, modularizing jobs, and standardizing logging and DB connectivity. These mini-refactors are low-hanging fruit during CI/CD and infra improvements, and will significantly ease any future move to Azure App Service or container-based PaaS—even if you’re not planning that move yet [8][9][10].
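Mini-refactor #4 (abstract file storage) is also easy to sketch. Here’s a hedged Java example, assuming a tiny interface of my own invention (the `FileStore` name and methods are hypothetical, not from any library): application code depends only on the interface, so the local-disk implementation used under WAS today could later be swapped for an Azure Blob Storage-backed one without touching callers:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical storage abstraction: callers never touch the filesystem
// directly, so the backing store can change without code changes elsewhere.
interface FileStore {
    void put(String name, byte[] data) throws IOException;
    byte[] get(String name) throws IOException;
}

// Local-disk implementation for the current on-prem/WAS deployment.
// A blob-backed implementation could implement the same interface later.
class LocalFileStore implements FileStore {
    private final Path root;

    LocalFileStore(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    public void put(String name, byte[] data) throws IOException {
        Files.write(root.resolve(name), data);
    }

    public byte[] get(String name) throws IOException {
        return Files.readAllBytes(root.resolve(name));
    }
}

public class FileStoreDemo {
    public static void main(String[] args) throws IOException {
        FileStore store = new LocalFileStore(Files.createTempDirectory("demo"));
        store.put("hello.txt", "hello".getBytes());
        System.out.println(new String(store.get("hello.txt"))); // prints "hello"
    }
}
```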
LLM answer sources
1. https://www.linkedin.com/pulse/how-migrate-legacy-applications-azure-cloud-sherdil-cloud-omiaf
2. https://www.scnsoft.com/application/migration/azure
3. https://summ-it.eu/legacy-applications-azure-migrations/
4. https://www.skytap.com/terms-glossary/legacy-application-migration-to-azure/
5. https://medhacloud.com/blog/can-i-migrate-legacy-applications-to-azure/
6. https://www.davidyardy.com/blog/azure-appservices-containersapps
7. https://learn.microsoft.com/en-us/azure/developer/java/migration/migrate-websphere-to-jboss-eap-on-azure-app-service
8. https://learn.microsoft.com/en-us/azure/developer/java/migration/migrate-websphere-to-vms-with-ha-dr
9. https://github.com/MicrosoftDocs/azure-dev-docs/blob/main/articles/java/migration/migrate-websphere-to-virtual-machines.md