Deploying applications to WebLogic with weblogic-maven-plugin

WebLogic has made the deployment of artifacts and resources significantly easier for Maven projects through the new plugin capabilities with weblogic-maven-plugin (WLS 11) and now the wls-maven-plugin (WLS 12). The newer version of the plugin has many additional features, but also has quite a few bugs and therefore I will use the capabilities of both plugins. The general usage of the plugins is to deploy our application resources (such as JDBC and JMS configurations), our optional packages, our web applications, and also to run wlst scripts against those deployments to change their deployment order.

The plugin is not something that can be downloaded directly; it is included within the installs for WebLogic Server 11 and WebLogic Server 12. So in order to get access to the plugin, I first install both servers locally. Once I do this, I can access the plugins:

1) With WebLogic Server 11, I can build the weblogic-maven-plugin using the wljarbuilder.jar

2) With WebLogic Server 12, I can find the plugin already available via our install directory at: $MIDDLEWARE_PATH_12C_ZIP/wlserver/server/lib/wls-maven-plugin.jar

Once I have access to these plugin jars, I will load them into our Maven repository manager (which in our case is Nexus, but could also be Artifactory, for example). This way our plugin jars will be available to all our Maven projects, and I will configure them through our plugin poms.
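As a sketch, loading the WLS 12 plugin jar into the repository manager can be done with the standard deploy:deploy-file goal. The coordinates, repositoryId, and URL below are placeholders for illustration, not official values:

```shell
# Publish the plugin jar to an internal repository manager.
# groupId/artifactId/version, repositoryId, and url are placeholders -- use your own.
mvn deploy:deploy-file \
  -Dfile=$MIDDLEWARE_PATH_12C_ZIP/wlserver/server/lib/wls-maven-plugin.jar \
  -DgroupId=com.oracle.weblogic \
  -DartifactId=wls-maven-plugin \
  -Dversion=12.1.1 \
  -Dpackaging=jar \
  -DrepositoryId=internal \
  -Durl=https://nexus.example.com/repository/thirdparty/
```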

When using the plugin for deployments, I will need to gather some information, such as which environments I will be deploying the resources to, the server names, the cluster names, the ports, the type of resource, the name I want the deployment to have, and so on. Since the deployments will change per environment, one beneficial strategy is to design a property file per environment. Once this is complete, I can use Maven property substitution to derive our deployment information dynamically at plugin runtime from the environment specific properties.

For more details, see the Properties Maven Plugin documentation. There is one nuance or issue with this plugin that you should be aware of: it does not support reading properties from within a jar file. If you follow this thread, you will see a working plugin that uses the concepts of the Properties Maven Plugin but allows the reading of properties from an external jar file: properties-ext-maven-plugin. This way I can run the external properties plugin prior to the weblogic-maven-plugin to read in our environment specific properties (one jar for each environment, containing the specific properties for that environment).
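As a hedged sketch, the per-environment properties could be wired in ahead of the deploy goal roughly like this, using the standard properties-maven-plugin (the file path, the ${env.name} property, and the phase binding are assumptions for illustration; with properties-ext-maven-plugin the values would instead come from the environment jar):

```
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>initialize</phase>
            <goals>
                <goal>read-project-properties</goal>
            </goals>
            <configuration>
                <files>
                    <!-- hypothetical per-environment file -->
                    <file>env/${env.name}.properties</file>
                </files>
            </configuration>
        </execution>
    </executions>
</plugin>
```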

Now I can configure our weblogic-maven-plugin within our pom file:

    <plugin>
        <groupId></groupId>
        <artifactId>weblogic-maven-plugin</artifactId>
        <version>${weblogic.version}</version>
        <executions>
            <execution>
                <id>deploy</id>
                <phase>compile</phase>
                <goals>
                    <goal>deploy</goal>
                </goals>
                <configuration>
                    <adminurl>t3://${deploy.hostname}:${deploy.port}</adminurl>
                    <user>${deploy.userId}</user>
                    <password>${deploy.password}</password>
                    <upload>true</upload>
                    <action>deploy</action>
                    <targets>${target.names}</targets>
                    <remote>true</remote>
                    <verbose>true</verbose>
                    <source>${}/lib/${}.war</source>
                    <name>${}</name>
                </configuration>
            </execution>
        </executions>
    </plugin>
  • deploy.hostname/deploy.port = properties that are read in per environment to point to different server targets
  • deploy.userId = the userid used to login to the weblogic server
  • deploy.password = the password used to login to the weblogic server
  • target.names = the target is usually the cluster name for the environment
  • = the name of the deployment as it appears in the console, which I may want to name differently than the war itself, since the war might contain Maven version/classifiers within the name.
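For illustration, a per-environment property file backing the configuration above might look like this (all values are made up):

```
# (hypothetical values)
deploy.hostname=dev-admin.example.com
deploy.port=7001
deploy.userId=weblogic
deploy.password=********
target.names=DevCluster
```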

Once I deploy this war, I may have the need to undeploy (which can be done via the console or through our plugin again):

    <plugin>
        <groupId></groupId>
        <artifactId>weblogic-maven-plugin</artifactId>
        <version>${weblogic.version}</version>
        <executions>
            <execution>
                <id>undeploy</id>
                <phase>clean</phase>
                <goals>
                    <goal>undeploy</goal>
                </goals>
                <configuration>
                    <adminurl>t3://${deploy.hostname}:${deploy.port}</adminurl>
                    <user>${deploy.userId}</user>
                    <password>${deploy.password}</password>
                    <upload>true</upload>
                    <action>undeploy</action>
                    <targets>${target.names}</targets>
                    <remote>true</remote>
                    <verbose>true</verbose>
                    <name>${}</name>
                </configuration>
            </execution>
        </executions>
    </plugin>

This configuration isn’t really much different: I just don’t need to supply the source, and I need to change the action to undeploy.

Another thing I may want to do, which can become tedious based on the size of our deployments, is to change the deployment order. This is not possible through the WebLogic Server 11 plugin, so in this case I will utilize the WebLogic Server 12 plugin. The main difference is that the WebLogic Server 12 plugin has the capabilities to create domains, run wlst scripting, and other features that require an installation of WebLogic Server 12. This does not necessarily mean you have to install WebLogic Server 12 in your environments; it just means that if you are using this plugin locally or via a CI tool such as Hudson, you will need to install WebLogic Server 12 into that environment so that the plugin can utilize the wlst shell and other scripts. This could be an issue for developers running the scripts locally, so I utilize profiles to separate the plugin configurations that use the WebLogic Server 12 plugin; that way, developers can do deployments via the WebLogic Server 11 plugin (which has no additional dependencies) and advanced features via the WebLogic Server 12 plugin.

    <plugin>
        <groupId></groupId>
        <artifactId>wls-maven-plugin</artifactId>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>wlst</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            from java.util import *
            from import *
            import

            print 'Updating deployment order .... '
            connect(System.getProperty("deploy.user"), System.getProperty("deploy.password"), System.getProperty("deploy.admin"))
            edit()
            startEdit()
            cd(System.getProperty("deploy.mbean"))
            cd(System.getProperty(""))
            set('DeploymentOrder', System.getProperty("deploy.order"))
            save()
            activate()
            disconnect()
        </configuration>
    </plugin>

  • deploy.user = the userid used to login to the weblogic server
  • deploy.password = the password use to login to the weblogic server
  • deploy.admin = the admin url for logging into the weblogic domain
  • deploy.mbean = AppDeployments or Libraries based on whether this is a war or something like an Optional Package
  • = the exact name as it appears through the JMX tree or console; for Optional Packages, this includes the version in a format such as deployName#deploySpecVersion@deployImplVersion

The spacing and indentation are important when including the script inline within the pom, as opposed to having the wlst script as an external file. Another thing to mention is that opening the wlst shell, changing the deployment order, and closing the wlst shell appears to consume some resources, so if I need to do this for multiple resources, I should do it all in one connection. Otherwise, I have run into issues where running this wlst plugin per resource (10+ resources) will crash our Hudson instance.
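A sketch of batching the order changes into one connection follows. The comma-separated deploy.names property and the /AppDeployments path are assumptions for illustration, not the post's actual properties:

```
connect(System.getProperty("deploy.user"), System.getProperty("deploy.password"), System.getProperty("deploy.admin"))
edit()
startEdit()
# hypothetical property: comma-separated list of deployment names
for appName in System.getProperty("deploy.names").split(','):
    cd('/AppDeployments/' + appName)
    set('DeploymentOrder', int(System.getProperty("deploy.order")))
    cd('/')
save()
activate()
disconnect()
```

This keeps a single edit session open for all the resources, so the wlst shell is only started and stopped once.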

This plugin (and all the WebLogic Server 12 plugin goals) has to be run with a command line parameter to supply the installation directory for the WebLogic Server 12 install:
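For example (the middlewareHome parameter name is the one documented for the WLS 12c plugin; the install path here is a placeholder):

```shell
mvn package -DmiddlewareHome=/opt/oracle/middleware12c
```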


In another post, I will explain how I utilize this plugin to deploy our Optional Packages as well as deploy our JMS/JDBC resources.




WebLogic Encryption and Domains

When building out WebLogic domains, some of the biggest tasks to accomplish are scripting the creation of a domain, scripting WebLogic resources, and utilizing our build architecture to provide jobs for pushing out resources to our WebLogic domain. The reason is that I should never be manually configuring a domain, because that increases the room for error when promoting domain changes between environments and complicates the ability to rebuild a new domain based on a previous one. By scripting out the domain creation, along with the ability to promote resources via a build environment, I greatly decrease the time required to build a brand new environment. The Hudson CI build server can be used to produce SCP jobs that copy necessary logging files or classpath jars, and I can utilize the weblogic-maven-plugin to deploy JMS and JDBC resources; however, the plugin has bugs in deploying optional packages.

When creating WebLogic domains, I can configure the domain in either Development or Production Mode. In order to do development against clustered environments that mirror production, the WebLogic Server will be configured in “Production Mode” in all environments. The one issue this can cause is that resources (such as JDBC configurations) require the database password to be encrypted. In order to do this, a user must encrypt the password utilizing the domain’s salt file (SerializedSystemIni.dat). This can be done via the command line in the WebLogic domain, by configuring the maven exec plugin, or by using WebLogic custom APIs as in this post, which has a class named SecurityHelper.

In order to encrypt passwords via a maven project, all I need to do is take advantage of the class and utilize a maven plugin that allows me to execute it. One way to do this is to utilize the exec-maven-plugin. As the documentation describes, it helps execute system or Java programs. So in order to invoke, I will need to understand what parameters are necessary. The main piece of configuration it needs is the domain root directory, but it does not need this information to access any executables. It only needs this path because it is going to append a relative path to it (security/SerializedSystemIni.dat) to determine where the salt is located. So I can utilize this parameter to use different salt files for different environments (thereby enabling me to encrypt passwords for any environment I store a salt file for).

So in the following layout, I have 3 environments (sandbox, development, production) and I have the salt files from each server checked into this project. I now only have to flip the env folder (like sandbox) as the WebLogic Root Domain parameter when I want to build a password for that env.
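The checked-in project layout looks roughly like this (the project and folder names are illustrative; each env folder only needs the security/SerializedSystemIni.dat that the Encrypt class resolves relative to the root directory):

```
wls-env-encrypt/
├── sandbox/security/SerializedSystemIni.dat
├── development/security/SerializedSystemIni.dat
├── production/security/SerializedSystemIni.dat
└── pom.xml
```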

The following is a sample pom file which uses the mvn exec:exec plugin/goal to execute the Encrypt class; the plugin coordinates and argument layout follow standard exec-maven-plugin usage:

    <project xmlns="" xmlns:xsi="">
        <name>WLS ENV Encrypt</name>
        <description>Encrypts WLS JDBC Passwords - mvn exec:exec</description>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>exec-maven-plugin</artifactId>
                    <configuration>
                        <executable>java</executable>
                        <arguments>
                            <argument>-Dweblogic.RootDirectory=${domain.root}</argument>
                            <argument>-classpath</argument>
                            <!-- automatically creates the classpath using all project dependencies,
                                 also adding the project build directory -->
                            <classpath />
                            <argument></argument>
                            <!-- Password can be supplied as an argument or entered when
                                 the maven build is run -->
                        </arguments>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>
So I can invoke mvn exec:exec -Ddomain.root=dev, and then the dev/security/SerializedSystemIni.dat file will be used to encrypt the password. This greatly simplifies the creation of data sources, as an administrator doesn’t have to log into the remote server and run a shell script that invokes the Encrypt class.

But what if you have multiple domains? It would seem to complicate the issue to have all the SerializedSystemIni.dat files in SVN. This does have some benefits, though, in case the file/domain gets corrupted, which can happen. So to resolve this issue, I can share a salt file between multiple domains (for example, all non-prod servers use the same salt and all prod servers use the same salt). When WebLogic Server builds out a domain, it creates the salt file and then uses it to encrypt passwords within the config.xml and also the weblogic username/password within the boot identity file. The interesting thing is that you can override a pre-existing domain’s configuration with another domain’s configuration in order to share the salt files.

  1. Install our first non-prod domain
  2. Backup the SerializedSystemIni.dat, the config.xml, and the boot identity file
  3. Create our second non-prod domain
  4. First backup the files listed in step #2 on this second domain
  5. Copy the SerializedSystemIni.dat from our first domain to the security folder in our second domain
  6. Either do a straight copy or use a compare tool to copy over passwords from our domain one config.xml to our domain two’s config.xml
  7. Do the same thing for the boot identity file
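
The copy steps above can be simulated as follows; throwaway directories under /tmp stand in for the real domain homes (all paths and file contents here are hypothetical):

```shell
# Simulate two domain homes (hypothetical paths).
DOMAIN1=/tmp/salt-demo/domain1
DOMAIN2=/tmp/salt-demo/domain2
mkdir -p "$DOMAIN1/security" "$DOMAIN2/security" "$DOMAIN2/backup"
echo "domain1-salt" > "$DOMAIN1/security/SerializedSystemIni.dat"
echo "domain2-salt" > "$DOMAIN2/security/SerializedSystemIni.dat"
# step 4: back up the second domain's original files first
cp "$DOMAIN2/security/SerializedSystemIni.dat" "$DOMAIN2/backup/"
# step 5: copy the first domain's salt into the second domain
cp "$DOMAIN1/security/SerializedSystemIni.dat" "$DOMAIN2/security/"
# both domains now share domain1's salt
cat "$DOMAIN2/security/SerializedSystemIni.dat"   # prints "domain1-salt"
```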

It really was surprisingly that simple. Once the domain was brought up, I was able to use my maven project to create an encrypted password that I placed into a JDBC resource config file and deployed to the container, and it was testable through the WebLogic console. As I stepped through this process, the domain would not start correctly if I used the wrong encrypted username/password in the boot identity file. I was able to resolve these issues easily, primarily by making sure that the config.xml and the boot identity file used the correct new salt for encrypting any of the passwords.

As I stated in the first paragraph, the reason for doing all these steps is really that I don’t want to be creating JDBC or JMS resources through the console. So in order to promote these types of resources via a Hudson job, I had to figure out how I would encrypt the JDBC password for the different domains (each of which had its own salt file). By utilizing the same salt in specific environments, having the salt files in SVN was more manageable and helped our ability to back up these files as well.

I have had some issues where the Managed Server fails to log in with the WebLogic credentials, so the following steps are a cleaner way of backing up the domain salt and config.xml and using them to configure another domain with the same salt.

  1. Take a 64 bit (or 32 bit) domain install and backup the SerializedSystemIni.dat and the config.xml
  2. Go to another server and stop Admin/Mgd Server that is being modified.
  3. Delete data/cache under the Mgd Server
  4. SCP the backed up SerializedSystemIni.dat to the server.
  5. Replace $/security/SerializedSystemIni.dat with the one that was SCP’d to the server.
  6. Vi the config.xml ($/config/config.xml) and replace the three passwords in it with the ones from the config.xml that I backed up in step 1 (these passwords are encrypted according to the salt).
  7. Go to $/servers/sandboxAdminServer/security/ and change the user/pwd in the boot identity file to what you want (e.g. weblogic/admin)
  8. Start the Admin Server Only.
  9. Copy $/servers/sandboxAdminServer/security folder to $/servers/sandboxMgdServer1/security
  10. Start the Mgd Server.


Simplifying Build Management – Build Once, Deploy Anywhere

With the many languages and approaches to development in use, it becomes apparent that there are also many approaches to build management of artifacts/resources, regardless of language. The diagram below illustrates how deployment artifacts become tightly coupled to the environments in which they run, and how build management can be improved by moving the deployment artifacts towards a view that is agnostic of the environment to which they are deployed. Many times, what happens is that applications either hard code environment specific properties or build environmental properties into the final deployment artifact.

The following diagram explains the three levels of a “Dependency Maturity Model” that is used to determine how to move current applications from Level 1 up to Level 3 (which provides the strategy to “Build Once, Deploy Anywhere”).

Dependency Maturity Model

Level 1 – Env Embedded

This is the first level of an application and its reliance on dependencies such as environment properties. In this case, I want to track where the environment specific properties exist (from the source to the build artifact). The problem with this level is that the applications and components have hard-coded properties such as FTP locations, email addresses, and other properties that must change per the environment they exist in. And depending on the development shop you work in, you could have multiple environments (such as Development, Integration, Quality Assurance, User Acceptance Testing, Staging, and Production). When the applications/components are littered with environment specific properties/dependencies, it becomes necessary for build management to rebuild the source into the build/deployment artifact for each environment. So not only does the code have to be built for each specific environment, the code also has to be modified in order to build it for a specific environment. This requires environment specific coding changes, builds, deployments, and potential testing issues based on the changes that are made for that environment. The build artifact is therefore embedded with environment properties and is only deployable to the environment for which it was built. This would be the equivalent of having APIs in Java (JARs) that have environment properties littered through the classes and then having those JAR files pulled into a WAR file that has some of the same redundant properties within it. So I need to fix the issue of changing the code for each environment by removing the environmental properties from the code and putting them into an external resource (Level 2).

Level 2 – Env Abstract

This is the second level of an application, in which I decide to remove the redundant environment properties from the applications/components and put them into an external resource (such as a property file) that all the applications will import and use. This greatly helps development because now I don’t have the potential for redundant and incorrect properties littering the code (maintaining a single source of information for the properties). This also helps build management somewhat, because the code doesn’t have to be modified before each build is done. This would essentially be like removing the environment properties from the APIs in Java (JARs) and the WAR file and putting those properties in a single file (and JAR) that gets pulled into the final WAR. This definitely is a progression from Level 1; the only problem is that I still have not met the “Build Once, Deploy Anywhere” strategy. The reason is that I have successfully pulled the environment specific properties into a single source of truth (property file/JAR), but when I do a build, I am still building an environment specific artifact. So when I combine the agnostic code with the abstract environment JAR (for Development), I essentially get a Development-only WAR. The same can be said when I take that agnostic code with the QA environment JAR: I will get a QA-only WAR from the build. This requires the deployments to only deploy the environment specific artifact (WAR) to that environment, because it cannot work in any other environment. So the next thing I need to do is remove the environment specific resource from the build artifact (Level 3).

Level 3 – Env Agnostic

This is the third level of an application, in which I now take the final step to realize “Build Once, Deploy Anywhere”. In order to do this, I simply do not include the environment specific resource in the final application build artifact. This is really as simple as it sounds: when the application/component is brought together into the final artifact (WAR), the environment specific resource (JAR) is not included. This creates a final build artifact that is agnostic to the environment it is deployed to, and therefore it only needs to be built once and can be deployed to any environment without issue. This is a huge advantage because, through the development architecture, the Continuous Integration tool can build the application and deploy it to the Maven repository, and then the same Continuous Integration tool can pull that artifact from the Maven repository and deploy it to any of the environments. This helps ensure that the artifact (WAR) that QA is testing against is the same exact artifact (WAR) that is being deployed into Staging.

Now you may be thinking that this sounds good and makes sense, but what about the environment properties in Level 3: where did they go? This is where I can utilize capabilities such as OSGi bundles or application server specific capabilities for shared libraries. In WebLogic, this concept can be accomplished through what is called Optional Packages. Essentially, through the MANIFEST I define an entry for the environment specific artifact (JAR) that gives an identification and version for that artifact. Then, when the agnostic artifact (WAR) is deployed, it also has an entry in its MANIFEST declaring a dependency on the environment specific artifact (JAR) by listing that identification information and version. This way the environment specific artifact (JAR) can be deployed to the container separately from the WAR and at any time (a new deploy only requires the dependent WARs to be restarted). For local testing, I utilize the capabilities of Maven to define the environment specific artifact as a test-scoped dependency so that it can be used for local testing (since it is excluded from the final build artifact). In an additional blog post, I will go deeper into what I had to do to create these Optional Packages and then finally how to create a custom plugin to deploy these artifacts, because the weblogic-maven-plugin does not support the deployment of these types of optional jars (ones that are not configured with WebLogic specific config files).
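A sketch of the MANIFEST wiring follows; the extension name and versions are made up, but the attributes are the standard J2EE optional package ones that WebLogic honors. The environment jar (the optional package) identifies itself in its MANIFEST.MF:

```
Extension-Name: env-properties
Specification-Version: 1.0
Implementation-Version: 1.0.0
```

The agnostic WAR then declares the dependency in its own MANIFEST.MF:

```
Extension-List: env
env-Extension-Name: env-properties
env-Specification-Version: 1.0
env-Implementation-Version: 1.0.0
```

The container matches the WAR’s declared extension name and versions against the separately deployed jar, so the environment jar can be swapped per environment without rebuilding the WAR.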
