IBM Watson – An Automated Supply Chain Scenario

This article by James Miller, the author of IBM Watson Projects, describes a use-case project focused on analyzing how effective a supply chain is for a retail department store. This automated supply chain scenario aims to provide insights into an organization’s supply chain data and processes, in an attempt to isolate the cause of poor delivery performance.

Problem definition

For this article’s use case, let’s pretend there is an organization named Folly Surf located in South Carolina in the US, which distributes surfboards. Their supply chain group is responsible for the following:

  • Procurement of the fundamental components of the product (the various types of surfboards)

  • Assembly (which includes a process known as shaping)

  • Delivery to the customers, who in this case are various independent surf shops who have placed orders for the boards

In the years since its inception, born in some surfer’s garage, the company’s product has grown in popularity, driven by both the surfer’s reputation and a high level of satisfaction (the board performs as advertised). This has increased demand beyond the company’s ability to provide the product and is threatening not only short-term profitability but the company’s future plans to expand its store locations.

Various efforts to improve operational efficiencies have had undesirable results.

For example, when deliveries are all on time, overall product quality has suffered, resulting in unhappy consumers and returns. When it is ensured that quality levels are met or exceeded, deliveries have been late, again resulting in unhappy customers and lost sales. Finally, when assembly teams are expanded, ensuring quality as well as the ability to deliver on time, the assembly team runs out of materials and parts.

Before things get too far out of control or beyond a repairable situation, the Folly Surf group is interested in seeing what insights can be identified with their data and Watson Analytics.

Getting started

Surfboard construction starts with a dense foam core and stringer, which is then covered with several layers of an epoxy resin, fiberglass, paint, and then finished with a high-gloss protective layer. Assembly also includes attaching one or more fins.

So, the following list of materials defines our supply chain:

  • Foam core
  • Stringer
  • Resin
  • Fiberglass
  • Paint
  • Protective treatment
  • Fins and fin assembly

The materials listed here are shipped by a number of suppliers to one of the two assembly facilities where the boards are fashioned and then sent to a warehouse where they are inventoried until ordered. Once an order is placed by the surf shop, the boards are picked, packed, and shipped, and voila! – We have a complete supply chain to manage:


Although supply chain management has multiple objectives, this article focuses on one of the most fundamental objectives: achieving efficient fulfillment. Efficient fulfillment is (perhaps loosely) described as making inventory readily available to the customer to fulfill demand.

However, readily available must also be accompanied by the most efficient use of cross-chain resources, maintaining minimal inventory levels, ensuring little or no waste, and permitting the lowest costs overall.

The next section explores the data from our imaginary surfboard scenario.

Gathering and reviewing data

Supply chain data does not come from a single source; it is composed of a variety of informational data points and collections, such as accounts payable, accounts receivable, manufacturing data, cost of goods sold, various vendor records, and so on.

Let’s say that this data has been compiled using numerous methods and means, and has been provided in the form of a file of supply chain data for analysis.

However, you don’t have a field-by-field description of the data; you only have a file of supply chain data, and you haven’t been offered any further details as to what specifically is in the data file. What to do?

Watson Analytics offers a really good exploration feature, so just go ahead and perform the steps to load the data.

Building the Watson project

Now, let’s see how to build this project. 

Loading your data

  1. In practice, the most common file exchange format is either an Excel file or a comma-separated (CSV) text file, and Watson Analytics is happy with both formats, as long as you understand that Watson wants lists of data, not formatted report files (files containing nested rows or column headings, or total and subtotal rows).
  2. In addition, there are different file size limits for each edition of Watson Analytics.
  3. Additionally, when you upload (or add) a dataset in Excel worksheet form, Watson Analytics may ask you to provide additional details about your data file. For example, if a Microsoft Excel file contains several worksheets, you are prompted to select one worksheet to add; you might also be prompted to select a row to use as the column headings.

Reviewing the data

Datasets loaded into Watson Analytics are sometimes referred to as assets. The supply chain asset (a CSV file named SuperSupplyChain.csv) is now loaded and ready for review:


As you can see, Watson Analytics has already created the SuperSupplyChain panel. It starts the review by giving a data quality rating of (only) 67 percent. In Watson Analytics, you can start with an Explore.

Explore creates powerful visualizations on your data to help you discover patterns and relationships that may impact your business, find new insights, or help you identify new questions that you can ask of your data.

Click on the Explore image in the upper left of the Welcome page (as shown in the following screenshot):


After clicking Explore, you can locate and select your SuperSupplyChain file:


At this point, Watson Analytics explores your data and presents its findings as a page of entry points or prompts:


For example, Watson prompts What are the values of FullfillmentTime for each Month(CustomerDeliveryDate)?:


To see the answer to that question (or the results of running a query to retrieve these values), you can just click on the question; Watson runs the query for you and presents the results in an awesome visualization:


It appears that it takes longer on average to fulfill an order placed in December or November, which may make sense if you consider the fact that these months are known for holiday gifting.

You can clearly see the Watson Analytics added value here. Watson automates the process of having to do the following:

  1. Think of a question (query)
  2. Formulate a query based upon the question
  3. Execute the query
  4. Review resultant data
  5. Think of an appropriate visualization type
  6. Create the visualization using the query’s result
  7. Draw a conclusion
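To appreciate the time Watson saves, here is a rough sketch of those manual steps in plain Java. The rows are made-up sample data, not the actual contents of SuperSupplyChain.csv; only the column names (a month and a fulfillment time) mirror the prompts shown above:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Manual version of the query Watson automates: average FullfillmentTime
// per order month. The orders below are hypothetical sample data.
public class FulfillmentByMonth {

    record Order(String month, int fulfillmentDays) {}

    static Map<String, Double> averageByMonth(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(
                        Order::month,
                        Collectors.averagingInt(Order::fulfillmentDays)));
    }

    public static void main(String[] args) {
        List<Order> orders = Arrays.asList(
                new Order("Nov", 12), new Order("Nov", 14),
                new Order("Dec", 15), new Order("Jun", 7));
        System.out.println(averageByMonth(orders));
    }
}
```

Grouping and averaging by hand like this for every new question quickly becomes tedious, which is exactly the work that Watson's generated prompts and visualizations automate.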

Although it is helpful for Watson to provide this insight, the conclusion that holiday months have longer fulfillment times seems like common sense. How about some more exploring? Consider the theory that the cause of the bottleneck in fulfillment is the assembly plants. To confirm this using Watson, you can create a new page by clicking on New:


You can enter a new question for Watson Analytics in several ways:

  • On the Welcome page, click a dataset and enter a question
  • On the Welcome page, click Explore, select a dataset, and enter a question
  • On the Welcome page, click Add, tap Exploration, select a dataset, and enter a question
  • In Explore, click New and enter a question

Next, back at the prompt page, you can enter your own question (or answer to the question: What do you want to explore next?) into the search bar:

does assemblyid impact fullfillmenttime?

Notice that as you type, Watson auto fills the column names from the file and quickly generates a new list of exploration prompts related to the question (Watson Analytics matches the words you type in your question to the column headings in your dataset):


Right in the top left, there’s a Very relevant prompt: How do the values of FullfillmentTime compare by AssemblyID? Since FullfillmentTime is the performance statistic we’d want to improve on, and we have a notion that one or the other assembly plant may be a problem, this prompt does seem relevant. Again, you can drill into the topic by clicking on it:


From this visualization, it appears that the assembly plants both impact fulfillment times pretty equally (or at least I don’t see a material difference between the two).

For the sake of brevity, I will tell you that exploring suppliers, and likewise products, yielded a similar conclusion. So, if the different suppliers, assemblers, or products do not uniquely impact fulfillment times, what does?

Well, it’s not too much of a stretch to consider that instead of comparing the performance of the different suppliers or the performance of the different assemblers, perhaps we should see if there is any disparity between the time it takes for materials to arrive from suppliers (to the assembly plants) and the time it takes for the assembled product to arrive from the assemblers (to the warehouse):


So, we might think of this question as follows: is there a difference between the time it takes for a supplier to ship materials to an assembly plant and the time it takes for an assembly plant to ship the assembled product to the warehouse? Thinking in Watson terms, since we have columns of data that contain these totals, we might type our query as follows:

how does daysfromsuppliertoassembly compare to daysfromassemblytowarehouse

The following screenshot shows the entered question:


From there, Watson Analytics gives us the following (Very relevant) prompt:


And if you drill into this prompt, Watson Analytics provides the following visualization:


After reviewing this visualization, we might conclude that the time required to ship materials from any supplier to either assembly plant can be (perhaps materially) longer than the time required to ship the assembled product to the warehouse. But is this the case? Do all suppliers perform equally? Perhaps Watson can answer this.
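The comparison behind this visualization is simple to state. A minimal sketch with hypothetical durations (in days, not taken from the real dataset) shows the kind of summary the chart is built on:

```java
import java.util.List;

// Rough manual check of the comparison Watson visualizes: the mean number
// of days from supplier to assembly versus from assembly to warehouse.
// The values below are made up for illustration only.
public class LegDurations {

    static double mean(List<Integer> days) {
        return days.stream().mapToInt(Integer::intValue).average().orElse(0.0);
    }

    public static void main(String[] args) {
        List<Integer> supplierToAssembly = List.of(9, 11, 14, 10);
        List<Integer> assemblyToWarehouse = List.of(4, 5, 3, 4);
        System.out.printf("supplier -> assembly:  %.1f days%n", mean(supplierToAssembly));
        System.out.printf("assembly -> warehouse: %.1f days%n", mean(assemblyToWarehouse));
    }
}
```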

Suppose we start by posing the question What is the value of DaysFromSupplierToAssembly? Watson Analytics produces the following (not very interesting) visualization:


Again, not a very interesting graphic, but if we click on + Add a column (under Columns Bars, shown here), we can improve our visualization:


At this point, Watson Analytics provides a list of the columns in our data file. We are concerned at this point with suppliers, and that column is not listed, so I can type the column name and search for it:


Once the column name SupplierID comes up, you can click on it to select it and have Watson Analytics add that column name to the visualization. Now, the graphic is much more interesting:


From this visualization, we may understand that supplier number 1 is typically slower in fulfilling its orders than supplier 2 and supplier 3. So, now we have become aware that the number of days it takes to fulfill a customer’s order (or the number of days the customer has to wait for the surfboard they ordered) is often affected by waiting for materials to be sent by suppliers to the assembly plants, not the time it takes to assemble the product, and there is a particular supplier that appears to have the most delays. Now, we have an actionable insight into improving supply chain performance.

As you can see, while you are performing data explorations, Watson Analytics helps you uncover not only answers in your data (which lead to making better decisions) but perhaps even more questions. Watson’s ability to quickly provide powerful visualizations is the key to recognizing patterns within the data you are exploring. Watson allows you to refine each of the visualizations in different ways and as you do so, Watson Analytics updates the graphic to relate to the new context that you are examining.

Once you become comfortable that you have gained an important insight through a particular visualization, you’ll probably want to share it later. You can set aside interesting or important visualizations created from Explore. You can then add the visualizations to the dashboards and stories that you create in Assemble (which we will cover in the Sharing section later in this chapter).

Visualizations saved from Explore remain interactive when you add them to a dashboard or story. You can also change the data that’s displayed in the visualization in Assemble in the same ways that you edit it in Explore.

To save a visualization, simply click on the Collection icon (as shown here) to add the visualization to the collection:


Creating a prediction

Obtaining analytical insights from data with Watson Analytics is accomplished with the Predict feature. The steps for creating a prediction are simple and are referred to as a Prediction Workflow. This workflow is outlined in the Watson documentation and is worth reviewing here (at a high level):

  1. Add data.

  2. Click on Prediction.

  3. Select (up to five) target fields that you want to predict. A target is a variable from your dataset that you want to understand. The target field’s outcomes are influenced by other fields in the data.

  4. Click Create. Watson Analytics then automatically analyzes the data as it creates the prediction.

  5. When the analysis is completed, view the results. On the Top Predictors page, you can select a predictor that is interesting and open its visualization.

  6. On the Main Insight page (for the predictor that you chose in step 5), you can examine the top insights that were derived from the analysis.

  7. Go to the Details page to drill into the details for the individual fields and interactions.

Supply chain prediction

Now that we have explored our supply chain data and identified an insight that we think is worthwhile, let’s go ahead and use the data to create a Watson Analytics prediction, like so:

  1. From the Welcome page, click on Predict:


  2. Choose the file:


  3. On the Create a new analysis page (shown next), provide a name for our prediction, select the column named FullfillmentTime as our target, and then click Create:


Watson Analytics creates our prediction for us:



It’s important to first settle on a definition of a predictor. Generally, the following is accepted:

A predictor variable is a variable that can be used to predict the value of another variable (as in statistical regression).
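As a rough illustration of this definition, the strength of a numeric predictor can be gauged with the Pearson correlation between the candidate predictor and the target. This is only a sketch with made-up values, not Watson's actual key-driver algorithm:

```java
// Illustration of predictor strength for numeric fields: the Pearson
// correlation between a candidate predictor and the target. The sample
// values below are invented; a correlation near 1 or -1 suggests a
// strong linear relationship.
public class PredictorStrength {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;   // n * covariance
        double vx = sxx - sx * sx / n;    // n * variance of x
        double vy = syy - sy * sy / n;    // n * variance of y
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        double[] supplierDays = {9, 11, 14, 10, 12};
        double[] fulfillment = {15, 17, 21, 16, 19};
        System.out.printf("correlation: %.2f%n", pearson(supplierDays, fulfillment));
    }
}
```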

When you open a prediction, the Top Predictors page appears. The spiral visualization you see shows you the top key drivers or predictors (in color, with other predictors in gray). The closer the predictor is to the center of the spiral, the stronger that predictor is.

There is a visualization generated for each key predictor, giving you information about what drives each behavior and outcome. If you click on one of the predictors (or hover over it), you can see some details about it. Each predictor has a corresponding snapshot visualization that contains information about the predictor and how it affects the target. The color of the circle in the spiral visualization is also found in the corresponding detailed visualization.

In our prediction, the blue circle in the spiral visualization for the DaysFromSupplierToAssembly predictor is included in the corresponding detailed visualization for DaysFromSupplierToAssembly (shown here) and if you click on the visualization, you can see it in more detail on the Main Insight page:


Once Watson has created a prediction, you are not locked in to what you specified on the Create a New Analysis page. You can dive deeper using the prediction scenario selector (shown here) to specify how many fields you want to view that act as predictors for your target. In our prediction, what might we combine with DaysFromSupplierToAssembly to be a more exact predictor of fulfillment time?


If you select Two Fields, you see a new set of visualizations and see how those two variables influence the target. If you select Combination, the visualizations provide a deeper and more predictive analysis, displaying how a combination of the variables influences the target.

Main insights

The results of predictions in Watson Analytics are presented as a combination of both visual and text insights. Text insights describe the results of the Watson Analytics analysis. Visual insights are visualizations that support the text insights. All of the insights are prearranged into insight sets to make them easier to digest.

In our supply chain prediction, we started with a presupposition that the cause of increasingly longer order delivery times was due to a problem at the assembly plants. After creating an exploration, we saw first that delivery times increased during November and December, but that was expected, due to higher order volumes. Next, we compared the performance of each assembly plant and found that they performed pretty much the same. From there, we checked for different performance levels of each supplier and also explored products, to see whether a specific product required additional lead time.

Finally, we found that there is a difference between the time it takes to ship materials from suppliers to the assembly plants and the time required to ship assembled product from the assembly plants to the warehouse.

With this awareness in mind, we then created a Watson Analytics prediction using the FullfillmentTime column as the target. In the following section, we will examine how to save and share the results in more detail.

If you enjoyed reading this article and want to learn more about IBM Watson, you can explore IBM Watson Projects. Featuring a unique, learn-as-you-do approach with eight exciting projects that put AI into practice for optimal business performance, the book is a must-read for data scientists, AI, NLP and ML engineers, and data analysts.

Understanding MicroProfile

To understand MicroProfile, we first need to understand Java EE/Jakarta EE and its challenges and goals.

In the real world, companies need to respond to new challenges faster and faster. The enterprise world therefore has challenges different from the non-enterprise world: companies need to be able to change quickly in order to innovate and provide ever-better solutions. Furthermore, companies need to be scalable so they can provide their services to a growing number of users. As a result, IT began to split problems up and provide solutions as multiple applications that communicate with each other and share data. A new set of problems then began to appear in the enterprise world, such as problems concerning integration and the relationships between components, and a new set of solution patterns had to be created in response to these challenges.

Java EE, and now Jakarta EE, is an umbrella project that provides pattern-based solutions to common problems in the enterprise world. It is a set of specifications (such as CDI, JPA, JSF, EJB, and others) that solve several enterprise problems and that multiple vendors are permitted to implement. Developers can therefore write an enterprise application with Java EE/Jakarta EE using any vendor that provides an implementation of these specs.

However, with the emergence of the cloud and the Cloud Native Application concept, new challenges emerged too and created a gap in Java EE: it lacked solutions for the microservice architecture and for implementing the 12 factors.

MicroProfile is an umbrella project that aims to be an extension of Java EE/Jakarta EE for the microservice architecture. It contains a set of specifications, formed from some Java EE specs plus its own specs, that solve problems of the microservice architecture and help implement the 12 factors. Together with Jakarta EE, the MicroProfile initiative promises to bring enterprise Java closer to the Cloud Native Application, promoting common solutions for exploiting the cloud.
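For a taste of these specs, the sketch below uses MicroProfile Config to inject a value resolved from the environment (a 12-factor practice). The property name service.url is illustrative, and the class needs a MicroProfile runtime to actually run:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;

// Sketch only: @ConfigProperty resolves the value from environment
// variables, system properties, or microprofile-config.properties.
// "service.url" is an illustrative property name.
@ApplicationScoped
public class ServiceClient {

    @Inject
    @ConfigProperty(name = "service.url", defaultValue = "http://localhost:8080")
    private String serviceUrl;

    public String serviceUrl() {
        return serviceUrl;
    }
}
```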

MicroProfile 1.4 is based on Java EE 7 and supports some of the Java EE 7 specs, while MicroProfile 2.0 is based on Java EE 8 and supports some of the Java EE 8 specs. The images below illustrate this.

MicroProfile 1.4


MicroProfile 2.0


Note that MicroProfile does not support all of the Java EE/Jakarta EE specs. The aim is that, over time, other Java EE/Jakarta EE specs will be added to MicroProfile.

If you want to know more about MicroProfile, visit the MicroProfile website.


Creating New Java EE 8 Project

In this post I’ll show you how to create a new Java EE 8 project using an archetype I created. This archetype is on Maven Central and creates a WAR project with the following dependencies and example classes.


  • Java EE 8.0
  • JUnit 3.8.1
  • log4j-core 2.11.1


  • JobBusiness: An example of a Business Object containing the business logic for Job.
  • Dao: An example of an abstract DAO class.
  • JobDao: An example DAO for Job, using JPA.
  • Entity: An example of an abstract entity.
  • Job: An example entity mapping Job with JPA.
  • JobResource: An example of a JAX-RS resource.

To use this archetype, run the following Maven command:

mvn archetype:generate -DarchetypeGroupId=net.rhuanrocha -DarchetypeArtifactId=javaee8-war-archetype -DarchetypeVersion=1.0.2 -DgroupId=<new project Group Id> -DartifactId=<new project artifact Id>

After running this command, the project will be created in your current folder.

Creating a Custom Maven Archetype

In this post I will cover how to create a custom Maven archetype. In our example, we’ll create an archetype for a WAR project. This post assumes that you have already installed Java and Maven on your machine.

Creating a custom Maven archetype is good practice because it helps to:

  • Promote an architectural pattern across projects
  • Make the task of creating new projects easier

In an enterprise environment, we may want to promote an architectural pattern across projects. A Maven archetype helps with this because it lets us create a project skeleton with all dependencies, packages, and configurations defined; developers can then easily use this skeleton as the base for their own projects.

To make things easier, we’ll create a new Maven project and then modify this project to create our archetype. To create a new Maven project, run the following command:

mvn -B archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=net.rhuanrocha -DartifactId=example-archetype

After running this command, a new project will be created with the following pom.xml.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

Now, we need to create a file called archetype.xml. This file describes the structure of the project generated by this archetype. It needs to be created in the following folder:


Below we have an example of this file.

<archetype xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype/1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">




Now, I’ll create the following folder:


This folder contains the skeleton used when a new project is created from this archetype. Inside this folder, we’ll have the classes, packages, and configurations that will be generated by the archetype; that is, we’ll put the pom.xml and the src folder, with all the classes and configuration files, inside it. Below, we have an example of the pom.xml inside archetype-resources.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <name>Example Archetype</name>



Note that the pom.xml uses the parameters ${groupId}, ${artifactId}, and ${version}, which are supplied when the command to create a project from this archetype is run. The command to generate a new project using this archetype will be covered later.

Now, I’ll create the JobResource class. This class will be placed in a package whose name begins with the value passed in the ${groupId} parameter. Below, we have the example:

package ${groupId}.resource;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("jobs")
public class JobResource {

    @Inject
    private JobBusiness jobBusiness;

    @GET
    public Response sayHelloWorld(){

        return Response
                .ok( "Hello World")
                .build();
    }
}

Note that the parameter ${groupId} is used to define the package name.

Now I’ll create the JAXRSConfiguration class. Below we have an example:

package ${groupId};

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {
}


Now, to install this archetype in your local repository, run the following command:

mvn clean install

Now, to create a new project using the custom maven archetype created, run the following command:

mvn archetype:generate -DarchetypeGroupId=net.rhuanrocha -DarchetypeArtifactId=example-archetype -DarchetypeVersion=1.0-SNAPSHOT -DgroupId=<group id> -DartifactId=<name of your artifact>


Implementing External Configuration Store Pattern with Jakarta EE

In my other posts, I explained ways to make your Jakarta EE application decoupled. In this post, I will cover how to decouple the application configuration from the application itself.

Software delivery has many steps, and one of them is configuring the application: configuration of the file system and directories, paths to other services or resources, database settings, and other environment-related configuration.

Generally, we work with development, testing, staging, and production environments. The application needs to be configured for each of these environments, and each environment has its own configuration values (the environments share the same properties, but the values of those properties can differ) that allow the application to work. Many developers configure the application using a configuration file (.properties, .xml, or another format) inside the application, but this couples the application package to the environment, and the developer needs to generate one package per environment. Each package then has to know the details of the environment it will run in. This is bad practice and increases the complexity of delivering the application, both for applications with a monolithic architecture and for applications with a microservice architecture.

The external configuration store pattern is an operational pattern (some literature defines it as an architecture pattern or a cloud design pattern) that decouples the configuration details from the application. With it, the application doesn’t know the values of the configuration properties; it only knows which properties it needs to read from the configuration store. Below we have a figure showing the difference between an application using the external configuration store pattern and an application not using it.
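The core idea can be sketched in plain Java: the packaged application knows only the property keys it needs, while the store’s location comes from outside the package (here via a system property whose name, config.store, is purely illustrative):

```java
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Minimal sketch of an external configuration store: the application only
// knows the keys it needs; the store's location is supplied from outside
// (the "config.store" system property is an illustrative name), so the
// same package runs unchanged in every environment.
public class ExternalConfig {

    static Properties load(String path) {
        Properties props = new Properties();
        try (Reader reader = new FileReader(path)) {
            props.load(reader);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        String storePath = System.getProperty("config.store", "app.properties");
        if (new java.io.File(storePath).exists()) {
            Properties config = load(storePath);
            System.out.println(config.getProperty("message.welcome"));
        } else {
            System.out.println("no configuration store at " + storePath);
        }
    }
}
```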


Benefits of using external configuration store pattern

Using the external configuration store pattern has several benefits; I will highlight a few. The main benefit is that we can update any configuration value without having to rebuild our application. Once the package has been generated, it will run in any environment, unless a problem unrelated to configuration occurs or the environment itself is misconfigured. Furthermore, we allow another team (the infrastructure or middleware team) to manage the configuration without a developer’s help, because the application package doesn’t need to be updated.

Another benefit of the external configuration store pattern is that all configuration can be centralized, with many applications reading configuration properties from the same location.

Implementing external configuration store pattern using Jakarta EE

The external configuration store pattern can be implemented in the following ways:

  • Using the application server as a configuration server via system properties
  • Using an external file or a set of external files
  • Using a data source (a relational database, a NoSQL store, or another)
  • Using a custom configuration server

In this post, I’ll show you how to implement the external configuration store pattern using the application server as the configuration server (via system properties) and using an external file or a set of external files.

In our scenario, we’ll have three JAX-RS resources: one to return a welcome message, one to upload files, and one to download files. The resource that returns the welcome message will have one method that gets the message from the application server’s system properties and another that gets it from an external file. To implement this, we will use a CDI producer to read the properties both from the application server and from the external file. Furthermore, we’ll create the @Property qualifier to be used by the producer at the injection point.

Creating the configuration store locator

This file is used when the application reads from an external file. It is the only configuration kept inside the application, and it exists so that the application knows where the configuration store is.


Implementing Qualifier

The code below has the implementation of the qualifier used to configure the injection point and allow the producer to produce the injected value.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface Property {

    @Nonbinding String key() default "";

    @Nonbinding boolean container() default true;

    @Nonbinding String fileName() default "";
}


Note that the qualifier has three attributes: key, container, and fileName. The key attribute is used to pass the key of the property. The container attribute defines whether the property will be read from the Jakarta EE container; if container is true, the application will search for the property on the application server. The fileName attribute is used to pass the file path when we are using an external file. When fileName is passed and container is true, the application searches the Jakarta EE container first and, if the property is not found there, looks in the external file.

Implementing Producer

The code below has implementation of PropertyProducer, the producer used to inject the properties.

public class PropertyProducer {

    @Produces
    @Property
    public String readProperty(InjectionPoint point){

        Property property = point.getAnnotated().getAnnotation( Property.class );
        String key = property.key();

        if( property.container() ){

            String value = System.getProperty(key);

            if( Objects.nonNull(value) ){
                return value;
            }
        }

        return readFromPath( property.fileName(), key );
    }

    private String readFromPath(String fileName, String key){

        try(InputStream in = new FileInputStream( readPathConfigurationStore() + fileName)){

            Properties properties = new Properties();
            properties.load( in );

            return properties.getProperty( key );

        } catch ( Exception e ) {
            throw new PropertyException("Error to read property.");
        }
    }

    private String readPathConfigurationStore(){

        Properties configStore = new Properties();

        try( InputStream stream = PropertyProducer.class
                .getResourceAsStream("/") ) {

            configStore.load( stream );

        } catch ( Exception e ) {
            throw new PropertyException("Error to read property.");
        }

        return configStore.getProperty("path");
    }
}


Implementing the Config

This class is the core of this post because it contains the application’s configuration, read from both the application server and the external file. This class is a singleton, and all injected configuration properties are centralized in it.

@Singleton
public class Config {

    @Inject
    @Property(key="message.welcome")
    public String WELCOME;

    @Inject
    @Property(key="message.welcome", container = false, fileName = "")
    public String WELCOME_EXTERNAL_FILE;

    @Inject
    @Property(key="path.download") // illustrative key
    public String PATH_DOWNLOAD;

    @Inject
    @Property(key="path.upload") // illustrative key
    public String PATH_UPLOAD;
}


Implementing the WelcomeResource

The code below has the implementation of a JAX-RS resource with two methods: one returns a welcome message defined in the application server’s system properties, and the other returns a welcome message defined in an external file.

@Path("/welcome")  // resource path assumed
public class WelcomeResource {

    @Inject
    private Config config;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response message(){

        Map<String, String> map = new HashMap<>();
        map.put("message", config.WELCOME);

        return Response
                .status( Response.Status.OK )
                .entity( map )
                .build();
    }

    @GET
    @Path("/external")  // sub-path assumed
    @Produces(MediaType.APPLICATION_JSON)
    public Response messageExternalFile(){

        Map<String, String> map = new HashMap<>();
        map.put("message", config.WELCOME_EXTERNAL_FILE);

        return Response
                .status( Response.Status.OK )
                .entity( map )
                .build();
    }
}


Implementing FileDao

The code below shows the FileDao implementation, the class that reads and writes files. FileDao is used by UploadResource and DownloadResource.

public class FileDao {

    @Inject
    private Config config;

    public boolean save( File file ){

        File fileToSave = new File(config.PATH_UPLOAD + "/" + file.getName());

        try (InputStream input = new FileInputStream( file )) {

            Files.copy( input, fileToSave.toPath() );

        } catch (Exception e) {
            return false;
        }

        return true;
    }

    public File find( String fileName ){

        File file = new File(config.PATH_DOWNLOAD + "/" + fileName);

        if( file.exists() && file.isFile() ) {
            return file;
        }

        return null;
    }
}


Implementing UploadResource

The code below shows the implementation of a JAX-RS resource that processes a file upload. Note that it uses the FileDao class to read and write the file.

@Path("/upload")  // resource path assumed
public class UploadResource {

    @Inject
    private FileDao fileDao;

    @POST
    public Response upload(@NotNull File file){

        if( fileDao.save( file ) ){
            return Response
                    .created(URI.create("/download?fileName="+ file.getName()))
                    .build();
        }

        return Response.serverError().build();
    }
}


Implementing DownloadResource

The code below shows the implementation of a JAX-RS resource that processes a file download. Note that it also uses the FileDao class to read and write the file.

@Path("/download")  // resource path assumed
public class DownloadResource {

    @Inject
    private FileDao fileDao;

    @GET
    public Response download(@NotNull @QueryParam("fileName") String fileName){

        File file = fileDao.find( fileName );

        if( Objects.isNull( file ) ){
            return Response.status(Response.Status.NOT_FOUND).build();
        }

        return Response.ok(file)
                .header("Content-Disposition",
                        "attachment; filename=\"" + fileName + "\"")
                .build();
    }
}



Eclipse MicroProfile Config

Jakarta EE is a new project based on Java EE 8, and many people talk about a possible merge between MicroProfile and Jakarta EE. In my view, these projects will grow increasingly close. The MicroProfile project has a solution called Eclipse MicroProfile Config, currently at version 1.3, that lets us implement the external configuration store pattern. If you want to know more about Eclipse MicroProfile Config, see:


Using the external configuration store pattern, we decouple the configuration from the application. Thus, we can update configurations without rebuilding the application. Furthermore, other teams can manage the configuration without developer intervention, and we can share the same set of configurations across applications.

The use of this pattern is good practice, especially when the application follows a microservice architecture, because it promotes better delivery and easier maintenance.
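As a minimal plain-Java sketch of the pattern (class and file names here are hypothetical, using only java.util.Properties): the configuration lives in a file outside the deployed artifact, so its values can change without rebuilding the application.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ExternalConfig {

    // Loads configuration from a file that lives outside the deployed artifact,
    // so operators can change values without rebuilding the application.
    public static Properties load(Path externalFile) {
        Properties properties = new Properties();
        try (InputStream in = Files.newInputStream(externalFile)) {
            properties.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return properties;
    }

    // Small demo: writes an external config file, then reads a key back from it.
    public static String demo() {
        try {
            Path file = Files.createTempFile("config", ".properties");
            Files.writeString(file, "message.welcome=Hello\n");
            return load(file).getProperty("message.welcome");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Eclipse MicroProfile Config offers this same idea as a standard API, with typed, injectable properties resolved from ordered configuration sources.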

If you want to see the full code of this example, access the GitHub repository at this link:



Understanding the Benefits of Developing Jakarta EE Applications Based on the Spec

Jakarta EE is an umbrella project based on Java EE 8 that contains a set of specs. These specs are the same as those of Java EE 8, but migrated from Oracle to the Eclipse Foundation. With this, the Jakarta EE Platform provides the same features as Java EE 8 and lets us develop applications without worrying about vendors.

The Java ecosystem has been providing more and more tools to develop decoupled applications. The term "decoupled application" can seem vague, so to make it clearer, we'll explain what a decoupled application means to us.

Decoupled Applications

The level of decoupling between elements (components, applications, services) is directly related to the level of abstraction of their details. If two elements are decoupled, neither needs to know the details of the other. For instance, if the "register" service is decoupled from the "payment" service, then "register" does not know the implementation details of "payment", and "payment" does not know the implementation details of "register". Each only knows that calling the other triggers a specific action, not how that action is performed. This lets us change one service without affecting the other, making the architecture more pluggable.
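The register/payment idea can be sketched in plain Java (all names hypothetical): RegisterService depends only on the PaymentService contract, so the payment implementation can be replaced without touching it.

```java
// The contract: "register" only knows that calling charge() triggers an action,
// not how that action is performed.
interface PaymentService {
    boolean charge(String customer, double amount);
}

class RegisterService {

    private final PaymentService payment; // only the contract is known here

    RegisterService(PaymentService payment) {
        this.payment = payment;
    }

    String register(String customer, double amount) {
        return payment.charge(customer, amount) ? "registered" : "rejected";
    }
}

// One possible implementation; swapping it for another does not affect RegisterService.
class SimplePaymentService implements PaymentService {
    public boolean charge(String customer, double amount) {
        return amount > 0;
    }
}
```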

In application development, we can achieve several kinds of decoupling, such as decoupling of configuration, of code, and of infrastructure; with Jakarta EE, we can also promote decoupling from vendors. When we develop an application decoupled from vendors, we are free to use any vendor without changing the code.

Jakarta EE Application Based on Spec

When writing an application using Jakarta EE, I can use the specs alone or also use vendor-specific features. When I use some vendor's features, my application becomes coupled to that vendor, and the vendor becomes a dependency of the application. Then, if you want to switch from one vendor to another, you will need to update the code.

A Jakarta EE application based on spec is an application written using only the specs, without features of any specific vendor. Such an application does not know which vendor will be used and has no coupling with, or dependency on, any vendor. This concept is very similar to the interface and class concepts of object-oriented programming: the specs are like interfaces, and a vendor's implementations are like classes.
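The interface/class analogy can be sketched in plain Java (hypothetical names): the application is written against the "spec" interface only, and either "vendor" class can be plugged in without changing the application code.

```java
// Plays the role of a spec: only the contract is defined here.
interface GreetingSpec {
    String greet(String name);
}

// Plays the role of vendor A's implementation of the spec.
class VendorAGreeting implements GreetingSpec {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Plays the role of vendor B's implementation of the spec.
class VendorBGreeting implements GreetingSpec {
    public String greet(String name) {
        return "Hi, " + name;
    }
}

// The application depends on the spec alone; switching vendors needs no change here.
class GreetingApp {

    private final GreetingSpec spec;

    GreetingApp(GreetingSpec spec) {
        this.spec = spec;
    }

    String welcome(String name) {
        return spec.greet(name) + "!";
    }
}
```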

The benefit of writing a Jakarta EE application based on spec is the decoupling between application and vendor. With this, you can run your application on any vendor. For instance, if you create a Jakarta EE application that uses the JPA spec, you can run it on WildFly, which uses Hibernate as its JPA implementation, on GlassFish, which uses EclipseLink, or on another application server with a different JPA implementation. Thus, your application becomes portable.

Writing Code Using a Jakarta EE Spec

To show an example of writing code based on a spec, we'll work through a scenario using the JPA spec. We are going to map a table called person in a relational database and write a DAO to read and write data in this table. Below is the figure showing the table model.

[Figure: model of the person table]

First of all, we'll create the persistence.xml to configure the persistence unit. Below we have the persistence.xml.

    <persistence-unit name="javaee8">
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="create" />
        </properties>
    </persistence-unit>


Note that we configured the property javax.persistence.schema-generation.database.action, a JPA property that creates the schema in the database automatically. This property works with any vendor.
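For reference, the JPA spec also defines other vendor-neutral values for this property, for example:

```xml
<!-- drop and recreate the schema on each deployment -->
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
<!-- leave the schema untouched (the default) -->
<property name="javax.persistence.schema-generation.database.action" value="none" />
```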

Creating Entity

Now, we will create the entity called Person, the class that maps the person table in the database.

@Entity(name = "person")
public class Person implements Serializable{

    @Id
    @GeneratedValue // generation strategy assumed; the original annotations were lost
    private Long id;

    private String firstName;

    private String lastName;

    //Getters and Setters

    //equals and hashCode
}


Note that in the code above, we used only JPA spec features and no features of any specific vendor.

Creating DAO

Now, we'll create the PersonDao class, a DAO responsible for reading and writing data in the table. PersonDao is an EJB stateless session bean.

@Stateless
public class PersonDao {

    @PersistenceContext
    protected EntityManager em;

    public Optional<Person> findById(Long id){
        return Optional.ofNullable(em.find(Person.class, id));
    }

    public List<Person> findAll(){
        return em.createQuery("select p from "+Person.class.getName()+" p ", Person.class)
                .getResultList();
    }

    public Person save(Person person){
        return em.merge(person);
    }
}

Note that in the code above we use only JPA spec features.


To decide whether to develop an application based on the spec or using vendor features, we need to understand the application's needs at development time. If the application must remain vendor-independent, develop it based on the spec. However, if a specific vendor's features bring real gains, you can develop the application using that vendor. My advice: prefer developing Jakarta EE applications based on the spec, because the application becomes portable and closer to a cloud-native application.

If you want to see all the code of our example, access:


How to Use CDI Events in Java EE 8

Nowadays, companies need to respond to the market faster and faster, and their success depends on it. This need reflects directly on software, which must respond faster too. Software needs to be delivered faster, scale further, and reduce the risk of impacts from updates. With that in mind, Java EE has been providing tools for reactive programming and asynchronous processing, giving the end user a faster response and scaling the capacity to handle requests.

In this post I will show how we can use CDI events to process tasks in a reactive style. In our example I'll demonstrate how to use CDI events both synchronously and asynchronously.

In our scenario, a service receives a request to send an e-mail and then fires an event to an observer, which sends the e-mail using JavaMail.

Tools used:

  • GlassFish 5.0
  • Java EE 8
  • IntelliJ IDEA 2017.3.4 (Ultimate Edition)
  • (To use email test)
  • Postman (To test the service)

Creating the Qualifiers

First of all, we'll create a qualifier to distinguish our events. For the send-e-mail event, I will create a qualifier called EmailSender. Below we have the qualifier.

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER, ElementType.FIELD})
public @interface EmailSender {
}


Creating the Event

In this step we'll create the event for sending an e-mail. The event has an attribute for the destination e-mail and an attribute for the message. Below we have the event example, called EmailEvent.

public class EmailEvent {

    private String emailTo;

    private String message;

    public String getEmailTo() {
        return emailTo;
    }

    public void setEmailTo(String emailTo) {
        this.emailTo = emailTo;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public static EmailEvent build(String emailTo, String message ){

        EmailEvent emailEvent = new EmailEvent();
        emailEvent.setEmailTo(emailTo);
        emailEvent.setMessage(message);

        return emailEvent;
    }
}


Creating the Observer

In this step, we'll create the observer, which has two methods: one sends the e-mail synchronously and the other asynchronously. Below we have the observer example.

public class EmailSender {

    @Resource(lookup = "mail/MyMailSession")
    private Session session;

    private Logger logger = LogManager.getLogger(this.getClass());

    // Method used to send email synchronously
    public void send(@Observes @net.rhuanrocha.eventcdiexample.qualifier.EmailSender EmailEvent emailEvent) throws NamingException {

        runSend(emailEvent);
        logger.info("Email sent!");
    }

    // Method used to send email asynchronously
    public void sendAsyn(@ObservesAsync @net.rhuanrocha.eventcdiexample.qualifier.EmailSender EmailEvent emailEvent) throws NamingException {

        runSend(emailEvent);
        logger.info("Email sent Async!");
    }

    private void runSend(EmailEvent emailEvent) throws NamingException {

        try {

            MimeMessage message = new MimeMessage(session);
            message.setRecipients(Message.RecipientType.TO, emailEvent.getEmailTo());
            message.setSubject("Sending email with JavaMail");
            message.setText(emailEvent.getMessage());

            // send the created message
            Transport.send(message);

        } catch (MessagingException e) {
            throw new RuntimeException(e);
        }
    }
}

Note that in the runSend(EmailEvent emailEvent) method, we send an e-mail using the JavaMail spec, and the JavaMail Session is obtained from the context via lookup. You therefore need to configure a JavaMail Session with the JNDI name "mail/MyMailSession" on the application server. For this example to work, configure the following properties on the JavaMail Session:

  • mail.address = ${email_from}
  • mail.smtp.port=${port}
  • mail.smtp.pass=${password}
  • mail.smtp.user=${user}
  • mail.smtp.auth=${auth}

Writing the Resource

In this step we'll create the JAX-RS resource that receives a request to send an e-mail and fires the event to the observer. This resource has two methods: one sends the e-mail synchronously and the other asynchronously. Below we have the resource, called EmailResource.

@Path("/email")  // resource path assumed
public class EmailResource {

    @Inject
    @net.rhuanrocha.eventcdiexample.qualifier.EmailSender
    private Event<EmailEvent> emailEvent;

    private Logger logger = LogManager.getLogger(this.getClass());

    @POST
    public Response sendEmail(@Email @NotBlank @FormParam("email") String email,
                              @NotBlank @FormParam("message") String message ){

        logger.info("email:" + email + " message:" + message);

        emailEvent.fire(EmailEvent.build(email, message));

        return Response.ok().build();
    }

    @POST
    @Path("/async")  // sub-path assumed
    public Response sendEmailAsync(@Email @NotBlank @FormParam("email") String email,
                              @NotBlank @FormParam("message") String message ){

        logger.info("email:" + email + " message:" + message);

        emailEvent.fireAsync(EmailEvent.build(email, message));

        return Response.ok().build();
    }
}



To test this code, I used Postman and got the following results.

With synchronous way:

[Screenshot: Postman response for the synchronous request]

With asynchronous way:

[Screenshot: Postman response for the asynchronous request]

Note that the synchronous call responded in 2320 ms, while the asynchronous call responded in 79 ms. The asynchronous approach replied much faster than the synchronous one.
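The gap comes from the dispatch model: the asynchronous event is handed off to another thread and the caller returns immediately. The same effect can be sketched in plain Java with CompletableFuture (not CDI; the 200 ms delay stands in for the e-mail send):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {

    // Simulates slow work (like sending an e-mail) taking roughly 200 ms.
    static void slowWork() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Fire-and-forget: the caller measures only the dispatch time,
    // while the slow work continues on another thread.
    public static long asyncDispatchMillis() {
        long start = System.nanoTime();
        CompletableFuture<Void> task = CompletableFuture.runAsync(AsyncSketch::slowWork);
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        task.join(); // wait here only so the sketch finishes cleanly
        return elapsed;
    }

    // Synchronous: the caller pays the full cost of the slow work.
    public static long syncDispatchMillis() {
        long start = System.nanoTime();
        slowWork();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```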


The use of CDI events lets us apply reactive programming in our applications. With this, we can decouple logic and components, making maintenance easier. Furthermore, with CDI events we can provide a faster response to the end user, because we can work with non-blocking processing.

If you want the complete code of this example, you can access GitHub and get it.


My book about Java EE 8 design patterns and best practices

In my book, I explain Java EE 8 design patterns and best practices. Below is the book description.

Book Description

Patterns are essential design tools for Java developers. Java EE Design Patterns and Best Practices helps developers attain better code quality and progress to higher levels of architectural creativity by examining the purpose of each available pattern and demonstrating its implementation with various code examples. This book will take you through a number of patterns and their Java EE-specific implementations.

In the beginning, you will learn the foundation for, and importance of, design patterns in Java EE, and then will move on to implement various patterns on the presentation tier, business tier, and integration tier. Further, you will explore the patterns involved in Aspect-Oriented Programming (AOP) and take a closer look at reactive patterns. Moving on, you will be introduced to modern architectural patterns involved in composing microservices and cloud-native applications. You will get acquainted with security patterns and operational patterns involved in scaling and monitoring, along with some patterns involved in deployment.

By the end of the book, you will be able to efficiently address common problems faced when developing applications and will be comfortable working on scalable and maintainable projects of any size.

Buy now!