Jakarta EE: Multitenancy with JPA on WildFly, Part 1

In this two-part series, I demonstrate two approaches to multitenancy with the Jakarta Persistence API (JPA) running on WildFly. In the first half of this series, you will learn how to implement multitenancy using a database. In the second half, I will introduce you to multitenancy using a schema. I based both examples on JPA and Hibernate.

Because I have focused on implementation examples, I won’t go deeply into the details of multitenancy, though I will start with a brief overview. Note, too, that I assume you are familiar with Java persistence using JPA and Hibernate.

Multitenancy architecture

Multitenancy is an architecture in which a single application serves multiple tenants, also known as clients. Although tenants in a multitenancy architecture access the same application, they are securely isolated from each other, and each tenant has access only to its own resources. Multitenancy is a common architectural approach for software-as-a-service (SaaS) and cloud computing applications.

A multitenant architecture must isolate the data available to each tenant. If there is a problem with one tenant’s data set, it won’t impact the other tenants. In a relational database, we use a database or a schema to isolate each tenant’s data. One way to separate data is to give each tenant access to its own database or schema. Another option, which is available if you are using a relational database with JPA and Hibernate, is to partition a single database for multiple tenants. In this article, I focus on the standalone database and schema options. I won’t demonstrate how to set up a partition.

In a server-based application like one running on WildFly, multitenancy differs from the conventional approach. Here, the server application works directly with the data source, initiating the connection and preparing the database to be used. The client application does not spend time opening the connection, which improves performance. On the other hand, using Enterprise JavaBeans (EJBs) with container-managed transactions can lead to problems: for example, the container could encounter an error while committing or rolling back a transaction, outside the application's own code.

Implementation code

Two interfaces are crucial to implementing multitenancy in JPA and Hibernate:

  • MultiTenantConnectionProvider is responsible for connecting tenants to their respective databases and services. We will use this interface and a tenant identifier to switch between databases for different tenants.
  • CurrentTenantIdentifierResolver is responsible for identifying the tenant. We will use this interface to define what is considered a tenant (more about this later). We will also use this interface to provide the correct tenant identifier to MultiTenantConnectionProvider.

In JPA, we configure these interfaces using the persistence.xml file. In the next sections, I’ll show you how to use these two interfaces to create the first three classes we need for our multitenancy architecture: DatabaseMultiTenantProvider, MultiTenantResolver, and DatabaseTenantResolver.

DatabaseMultiTenantProvider

DatabaseMultiTenantProvider is an implementation of the MultiTenantConnectionProvider interface. This class contains logic to switch to the database that matches the given tenant identifier. In WildFly, this means switching to different data sources. The DatabaseMultiTenantProvider class also implements the ServiceRegistryAwareService, which allows us to inject a service during the configuration phase.

Here’s the code for the DatabaseMultiTenantProvider class:

public class DatabaseMultiTenantProvider implements MultiTenantConnectionProvider, ServiceRegistryAwareService{
    private static final long serialVersionUID = 1L;
    private static final String TENANT_SUPPORTED = "DATABASE";
    private DataSource dataSource;
    private String typeTenancy;

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }
    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {

        typeTenancy = (String) ((ConfigurationService)serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.multiTenancy");

        dataSource = (DataSource) ((ConfigurationService)serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.connection.datasource");


    }
    @SuppressWarnings("rawtypes")
    @Override
    public boolean isUnwrappableAs(Class clazz) {
        return false;
    }
    @Override
    public <T> T unwrap(Class<T> clazz) {
        return null;
    }
    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }
    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {

        final Context init;
        // Only use multitenancy when hibernate.multiTenancy == DATABASE
        if(TENANT_SUPPORTED.equals(typeTenancy)) {
            try {
                init = new InitialContext();
                dataSource = (DataSource) init.lookup("java:/jdbc/" + tenantIdentifier);
            } catch (NamingException e) {
                throw new HibernateException("Error trying to get datasource ['java:/jdbc/" + tenantIdentifier + "']", e);
            }
        }

        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }
    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        releaseAnyConnection(connection);
    }
}

As you can see, the injectServices method populates the dataSource and typeTenancy attributes. We use the dataSource attribute to get a connection from the data source, and we use the typeTenancy attribute to check whether the DATABASE multitenancy type is configured. The getConnection method obtains a data source connection; it uses the tenant identifier to locate the correct data source and switch to it.

MultiTenantResolver

MultiTenantResolver is an abstract class that implements the CurrentTenantIdentifierResolver interface. This class aims to provide a setTenantIdentifier method to all CurrentTenantIdentifierResolver implementations:

public abstract class MultiTenantResolver implements CurrentTenantIdentifierResolver {

    protected String tenantIdentifier;

    public void setTenantIdentifier(String tenantIdentifier) {
        this.tenantIdentifier = tenantIdentifier;
    }
}

This abstract class is simple. We only use it to provide the setTenantIdentifier method.

DatabaseTenantResolver

DatabaseTenantResolver also implements the CurrentTenantIdentifierResolver interface; it is a concrete subclass of MultiTenantResolver:

public class DatabaseTenantResolver extends MultiTenantResolver {

    private Map<String, String> regionDatasourceMap;

    public DatabaseTenantResolver(){
        regionDatasourceMap = new HashMap<>();
        regionDatasourceMap.put("default", "MyDataSource");
        regionDatasourceMap.put("america", "AmericaDB");
        regionDatasourceMap.put("europa", "EuropaDB");
        regionDatasourceMap.put("asia", "AsiaDB");
    }

    @Override
    public String resolveCurrentTenantIdentifier() {

        if(this.tenantIdentifier != null
                && regionDatasourceMap.containsKey(this.tenantIdentifier)){
            return regionDatasourceMap.get(this.tenantIdentifier);
        }

        return regionDatasourceMap.get("default");
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }

}

Notice that DatabaseTenantResolver uses a Map to define the correct data source for a given tenant. The tenant, in this case, is a region. Note, too, that this example assumes we have the data sources java:/jdbc/MyDataSource, java:/jdbc/AmericaDB, java:/jdbc/EuropaDB, and java:/jdbc/AsiaDB configured in WildFly.
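Those data sources must be registered in WildFly before deployment. As a hedged sketch (the connection URL, driver name, and credentials below are assumptions, not taken from the original setup), one of them could be defined in the datasources subsystem of standalone.xml like this:

```xml
<!-- Sketch only: URL, driver name, and credentials are assumptions. -->
<datasource jndi-name="java:/jdbc/AmericaDB" pool-name="AmericaDB">
    <connection-url>jdbc:postgresql://localhost:5432/america</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>postgres</user-name>
        <password>postgres</password>
    </security>
</datasource>
```

The other data sources (MyDataSource, EuropaDB, AsiaDB) would be registered the same way, each pointing at its own database.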

Configure and define the tenant

Now we need to use the persistence.xml file to configure the tenant:

<persistence>
    <persistence-unit name="jakartaee8">

        <jta-data-source>jdbc/MyDataSource</jta-data-source>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="none" />
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgresPlusDialect"/>
            <property name="hibernate.multiTenancy" value="DATABASE"/>
            <property name="hibernate.tenant_identifier_resolver" value="net.rhuanrocha.dao.multitenancy.DatabaseTenantResolver"/>
            <property name="hibernate.multi_tenant_connection_provider" value="net.rhuanrocha.dao.multitenancy.DatabaseMultiTenantProvider"/>
        </properties>

    </persistence-unit>
</persistence>

Next, we define the tenant in the EntityManagerFactory:

@PersistenceUnit
protected EntityManagerFactory emf;


protected EntityManager getEntityManager(String multitenancyIdentifier){

    final MultiTenantResolver tenantResolver = (MultiTenantResolver) ((SessionFactoryImplementor) emf).getCurrentTenantIdentifierResolver();
    tenantResolver.setTenantIdentifier(multitenancyIdentifier);

    return emf.createEntityManager();
}

Note that we call the setTenantIdentifier before creating a new instance of EntityManager.
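To illustrate how this method might be consumed, here is a hypothetical caller. The Product entity, the query, and the method name are assumptions for this sketch, not part of the original application; in practice the tenant identifier would typically come from the request context, such as a header or path parameter.

```java
// Hypothetical usage of getEntityManager (Product is an assumed entity).
public List<Product> findProducts(String region) {
    EntityManager em = getEntityManager(region); // e.g. "america"
    try {
        return em.createQuery("SELECT p FROM Product p", Product.class)
                 .getResultList();
    } finally {
        em.close(); // always release the tenant-bound EntityManager
    }
}
```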

Conclusion

I have presented a simple example of multitenancy in a database using JPA with Hibernate and WildFly. There are many ways to use a database for multitenancy. My main point has been to show you how to implement the CurrentTenantIdentifierResolver and MultiTenantConnectionProvider interfaces. I’ve shown you how to use JPA’s persistence.xml file to configure the required classes based on these interfaces.

Keep in mind that for this example, I have assumed that WildFly manages the data source and connection pool and that EJB handles the container-managed transactions. In the second half of this series, I will provide a similar introduction to multitenancy, but using a schema rather than a database. If you want to go deeper with this example, you can find the complete application code and further instructions on my GitHub repository.

Jakarta EE: Creating an Enterprise JavaBeans Timer

Enterprise JavaBeans (EJB) has many interesting and useful features, some of which I will be highlighting in this and upcoming articles. In this article, I’ll show you how to create an EJB timer programmatically and with annotations. Let’s go!

The EJB timer feature allows us to schedule tasks to be executed according to a calendar configuration. It is very useful because we can execute scheduled tasks using the power of the Jakarta context. When we run timer-based tasks, we need to answer some questions about concurrency: which node the task was scheduled on (in the case of a clustered application), what happens if the task does not execute, and others. When we use the EJB timer, we can delegate many of these concerns to the Jakarta context and focus more on the business logic. Interesting, isn’t it?
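To make this concrete, here is a hedged sketch showing both styles in one singleton bean. The class name, method names, and schedule values are assumptions for illustration, not from a specific application:

```java
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
public class ReportScheduler {

    @Resource
    private TimerService timerService;

    // Declarative style: the container creates this timer automatically
    // and runs the method every day at 02:00.
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void nightlyReport() {
        // business logic here
    }

    // Programmatic style: create a one-shot timer on demand.
    public void scheduleIn(long millis) {
        timerService.createSingleActionTimer(millis, new TimerConfig());
    }

    @Timeout
    public void onTimeout() {
        // invoked by the container when the programmatic timer fires
    }
}
```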

Jakarta EE 8 Released: The New Era of Java EE

Java EE is a fantastic project, but it was created in 1999 under the J2EE name; it is 20 years old, and its process for evolving is no longer suited to the modern enterprise scenario. So Java EE needed to change, too.

Java EE has a new home and a new brand, and it is being released today, September 10th. Java EE was migrated from Oracle to the Eclipse Foundation and is now Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project. Today the Eclipse Foundation is releasing Jakarta EE 8, and in this post we’ll see what that means.

Java EE was a very strong project, highly used in many kinds of enterprise Java applications and by big frameworks like Spring and Struts. Some developers have questioned its features and its process for evolving, but given its heavy usage and time in the market, its success is unquestionable. The enterprise world doesn’t stop, though, and new challenges emerge all the time. The speed of change keeps growing, because companies must be ever more prepared to answer market challenges. Thus, technologies should follow these changes in the enterprise world and adapt themselves to provide better solutions.

With that in mind, the IT world has promoted many changes and solutions to provide a better answer to the enterprise world. One of these solutions is cloud computing. Summarizing the concept in a few words, cloud computing provides computing resources as a service (IaaS, PaaS, SaaS). This allows you to use only the resources you need and to scale up and down when needed.

Jakarta EE Goals

Jakarta EE 8 has the same set of specifications as Java EE 8, with no changes to their features. The only change is the new process for evolving these specifications.

The Java ecosystem has a new focus: putting its power at the service of cloud computing, and Jakarta EE is key to that.

Jakarta EE’s goal is to accelerate business application development for cloud computing (cloud-native applications), based on specifications developed by many vendors. The project starts from Java EE 8, whose specifications, TCKs, and Reference Implementations (RIs) were migrated from Oracle to the Eclipse Foundation. But to evolve these specifications for cloud computing, we cannot keep the process used in the Java EE project, which is too slow for current enterprise challenges. Thus, the Eclipse Foundation’s first action was to change the process by which Jakarta EE evolves.

This makes Jakarta EE 8 a milestone in Java enterprise history, because it places these specifications under a new process designed to drive them toward the cloud-native application approach.

Jakarta EE Specification Process

The Jakarta EE Specification Process (JESP) is the new process that the Jakarta EE Working Group will use to evolve Jakarta EE. JESP replaces the JCP process previously used for Java EE.

JESP is based on the Eclipse Foundation Specification Process (EFSP) with some changes, which are listed at https://jakarta.ee/about/jesp/. The changes are:

  • Any modification to or revision of this Jakarta EE Specification Process, including the adoption of a new version of the EFSP, must be approved by a Super-majority of the Specification Committee, including a Super-majority of the Strategic Members of the Jakarta EE Working Group, in addition to any other ballot requirements set forth in the EFSP.
  • All specification committee approval ballot periods will have the minimum duration as outlined below (notwithstanding the exception process defined by the EFSP, these periods may not be shortened)
    • Creation Review: 7 calendar days;
    • Plan Review: 7 calendar days;
    • Progress Review: 14 calendar days;
    • Release Review: 14 calendar days;
    • Service Release Review: 14 calendar days; and
    • JESP Update: 7 calendar days.
  • A ballot will be declared invalid and concluded immediately in the event that the Specification Team withdraws from the corresponding review.
  • Specification Projects must engage in at least one Progress or Release Review per year while in active development.

The goal of JESP is to be as lightweight a process as possible, with a design closer to open source development and with code-first development in mind. The process thus promotes a culture of experimentation, evolving the specifications based on the experience gained from that experimentation.

Jakarta EE 9

Jakarta EE 8 focuses on updating the process for evolving the platform; the first feature updates will come in Jakarta EE 9. The main update expected in Jakarta EE 9 is the birth of the Jakarta NoSQL specification.

Jakarta NoSQL is a specification to promote easy integration between Java applications and NoSQL databases, providing a standard, high-level abstraction for connecting to them. It is fantastic, and a big step toward bringing the Java platform closer to the cloud-native approach, because NoSQL databases are widely used in cloud environments and their adoption is expected to grow. Jakarta NoSQL is based on JNoSQL, which will be its reference implementation.

Another expected update in Jakarta EE concerns the namespace. Oracle gave the Java EE project to the Eclipse Foundation, but the trademark still belongs to Oracle. This means the Eclipse Foundation cannot use java or javax in project names or namespaces for new features coming to Jakarta EE. Thus, the community is discussing the transition from the old namespace to the jakarta.* namespace. You can see this thread here.

Conclusion

Jakarta EE is opening a new era in the Java ecosystem, taking Java EE, which was and still is a very important project, to work under a very good open source process. Although this Jakarta EE version comes without feature updates, it opens the gate to the new features coming to Jakarta EE in the future. So we’ll soon see many specification-based solutions for working in the cloud in the next versions of Jakarta EE.

Understanding the Current Java Moment

The Java platform is one of the most used platforms of recent years and has the largest ecosystem in the technology world. Java permits us to develop applications for several platforms, such as Windows, Linux, embedded systems, and mobile. However, Java has received many complaints: Java is fat, Java takes a lot of memory, Java is verbose. In fact, Java was created to solve big problems, not small ones, although it can be used for small problems too. You can solve a small problem with Java, but you see the real benefit of Java when you have a big problem, mainly one involving enterprise environments. When you create a hello-world application in Java and compare it to a hello-world application written in another language, you may see greater memory use and more lines of code. But when you create a big application that integrates with other applications and resources, that is where you see the real benefit of the Java platform.

Java is great for the enterprise environment because of its power to solve complex problems and its multi-platform characteristics, but also because it gives businesses more security, providing backward compatibility and solutions based on specifications. The business has more assurance that a new Java update won’t break its systems, and it gets solutions decoupled from vendors, permitting it to change vendors when needed.

Java has a big ecosystem, with emphasis on Java EE (now Jakarta EE), which provides several specifications that solve common problems in the enterprise environment. Some of these specifications are EJB, JPA, JMS, JAX-RS, and JAX-WS. Furthermore, we have Spring, which boosted the Java ecosystem; although it is not based on specifications, it uses some specifications from Java EE.

Cloud Computing and Microservices

Cloud computing is a concept that has grown year after year and has changed how developers architect, write, and think about applications. Cloud computing is a set of principles and approaches whose aim is to provide computing resources as a service (PaaS, IaaS, SaaS). With this, we can use only the resources needed to run our applications and scale when necessary, optimizing resource usage and, consequently, cost for the business. This is fantastic, but to benefit from cloud computing, applications must fit this approach. The microservice architecture thus emerged as a good way to architect and think about applications for cloud computing (cloud-native applications).

Microservice architecture is an approach that breaks a big application (a monolith) into many micro-applications or microservices, generally split by business domain. With this, we can scale only the business domains that really need it rather than all of them; we gain fault tolerance, because if one business domain fails, the others do not fail with it; and we gain resilience, because a failed microservice can be restored. Microservice architecture therefore lets us exploit the benefits of cloud computing and optimize the use of computing resources.

Java and Cloud Computing

As said above, “In fact Java was created to solve big problems not small problems, although it could be used to solve small problems.” But the cloud-native application approach breaks a big, complex application into many small and simpler applications (such as microservices). Furthermore, an application’s life cycle is much shorter in the microservice architecture than in a monolith. Besides that, in the cloud-native approach the complexity is not in the applications themselves but in the communication between them (their integrations), their management, and their monitoring. In other words, the complexity lies in how these applications (microservices) interact with each other and how quickly we can identify a problem in any of them. With this, the Java platform and its ecosystem had several gaps to close, described below.

Fat JVM: Many Java applications started with libraries that were never used, and the JVM loaded several things the application didn’t need. That is acceptable for a big application solving complex problems, but for small applications (like microservices) it is not so good.

JVM JIT Optimization: The JVM’s JIT optimizes an application as it runs; in other words, a longer application life cycle means more optimization. The JVM is therefore better suited to running an application for a long time than for a short time. In cloud computing, applications are born and die all the time, and their life cycles are shorter.

Java Applications Have a Long Boot Time: Many Java applications boot slowly compared to applications written in other languages, because they commonly do a lot of work at boot time.

Java Generates Fat Packages (WAR, EAR, JAR): Many Java applications have a large package size, mainly when libraries are bundled inside them (in the lib folder). This can increase delivery time, degrading the delivery process.

Java EE Had No Standard Solutions for Microservices: Java EE has many important specs for enterprise problems, but it had no specs for the problems that come with microservice architecture and cloud computing.

Updates to Java and Java EE Were Slow: Java and Java EE had a slow process for updating their features and creating new ones. That is bad, because the enterprise environment changes continuously and faces new challenges all the time.

With this, the Java ecosystem underwent several changes and initiatives to close each gap created by cloud computing and put Java on top again.

Java On Top Again

The Java platform is robust and offers solutions for many things, but to me that is not the best of Java. To me, the best of the Java world is the community, which is very strong and hard-working. In a short time, the Java community promoted many actions and initiatives that pushed the Java platform toward cloud computing, with solutions that bring Java ever closer to the cloud-native application approach. Many people call this Cloud Native Java. The principal actions and initiatives in the Java ecosystem are Jakarta EE, MicroProfile, the new Java release cycle, improvements to the Java language, improvements to the JVM, and Quarkus. I’ll explain how each of these actions and initiatives has impacted the Java ecosystem.

Jakarta EE: Java EE was one of the most important projects in the Java ecosystem, promoting many standard solutions to enterprise problems. The project was migrated from Oracle to the Eclipse Foundation, went through many changes in its working structure, and is now called Jakarta EE.

Jakarta EE is an umbrella project that promotes standard solutions (specifications) for the enterprise world, with a new process for approving new features and evolving existing ones. With this, Jakarta EE can evolve quickly and keep improving its enterprise solutions. That is important, because nowadays the enterprise changes very fast and faces new challenges all the time; since technology is a tool for innovation, it needs to be able to change quickly when required.

MicroProfile: Java EE, and now Jakarta EE, has many good solutions for the enterprise world, but it does not have standard solutions for many problems of the microservice architecture. That does not mean you cannot implement such solutions, but you would need to implement them yourself, and those solutions would be entirely in your hands.

MicroProfile is an umbrella project that promotes standard solutions (specifications) for microservice architecture problems. MicroProfile is compatible with Java EE and makes it easier for developers to build applications using the microservice architecture. Some of its specifications are MicroProfile Config, MicroProfile OpenTracing, MicroProfile Rest Client, MicroProfile Fault Tolerance, and others.

Java Release Cycle: The Java release cycle changed, and Java releases now ship every six months. It’s an excellent change, because it permits the Java platform to respond quickly to new challenges. Besides that, it promotes faster evolution of the platform.

Improvements to the Java Language: Java has had several changes that improved its features, such as its functional capabilities. Besides that, the Jigsaw project introduced modularity into Java. With this, we can create thinner Java applications that can be scaled more easily.
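As a small illustration of what Jigsaw modularity looks like, here is a minimal module descriptor (the module and package names are assumptions invented for this sketch):

```java
// module-info.java: the module declares exactly what it requires and exports.
module net.example.orders {
    requires java.sql;               // pull in only the platform modules used
    exports net.example.orders.api;  // expose only the public API package
}
```

Everything not exported stays hidden from other modules, which is what makes the resulting applications thinner and easier to reason about.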

Improvements to the JVM: The JVM had some issues when used in containers, mainly around measuring memory and CPU. That was bad, because containers are very important to cloud computing: with containers we deliver not only the application but the whole environment with its dependencies.

Since Java 9, the JVM has had many updates that improved how it behaves inside containers, bringing it closer to cloud computing’s needs.

Quarkus: Quarkus is the latest news in the Java ecosystem and has been a top topic of talks. Quarkus is a project tailored for GraalVM and OpenJDK HotSpot that provides a Kubernetes-native Java application stack, letting developers write cloud applications using best-of-breed Java libraries and standards. With Quarkus we can write applications with a very fast boot time, incredibly low RSS memory usage, and an amazing set of tools that make developers’ lives easier.

Quarkus is really an amazing project that points to a new future for the Java platform. It works with a container-first concept and uses compile-time boot techniques to boost Java applications. If you want to know more about Quarkus, click here.

All of these projects and initiatives in the Java ecosystem bring Java back into focus and start a new era for the Java platform. With them, Java enters cloud computing with its own way of working, based on specifications, promoting standardized solutions for the cloud. That is great for Java and for cloud computing, because from these standardized solutions many enterprise solutions will emerge with the support of many companies, making their adoption safer.


Creating Logger with AOP using CDI Interceptor

AOP is a programming paradigm that allows us to separate business logic from some technical code that crosscuts all application

Logging is a very important element of any application, because it permits developers to analyze failures and the behavior of the application. A good log promotes faster analysis and faster solutions to failures and bad behavior.

When a developer builds an application using object-oriented programming (OOP), they think about how to separate and distribute logic and responsibility across classes, and how to separate the business logic from everything else. When we think about logging, however, achieving this separation is not an easy task, because logging is a concern that crosscuts the whole application. Aspect-Oriented Programming (AOP) is a good tool to resolve this and promote that separation. In my book I said that “AOP is a programming paradigm that allows us to separate business logic from some technical code that crosscuts all application.” In this post I will not explain AOP itself, but I’ll show you how to use AOP to separate the logging logic from the business logic using a CDI interceptor. If you want to know more, see my book.

First of all, we need to define which exceptions should be logged as errors. For example, if the application throws a business exception, you probably don’t need to log it as an error: doing so would inflate the log, and the exception is not an application failure or bad behavior but a user error. A business exception should instead be logged at debug level. Let’s go to the code…

First, we need to create an interceptor binding. In this example we created one called Logger.

@Inherited
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface Logger {

}

Now, we’ll create the CDI interceptor that will intercept the calls and handle the logging.

import org.apache.logging.log4j.LogManager;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;
import java.io.Serializable;

@Interceptor
@Logger
public class LoggerInterceptor implements Serializable {

    @AroundInvoke
    public Object processMethod(InvocationContext context) throws Exception {

        org.apache.logging.log4j.Logger logger =
                LogManager.getLogger(context.getTarget().getClass());

        try {
            logger.debug("Called: " + context.getMethod());
            return context.proceed();
        }
        catch(Exception e){
            treatException(e, context);
            throw e;

        }

    }

    private void treatException(Exception e, InvocationContext context){

        org.apache.logging.log4j.Logger logger =
                LogManager.getLogger(context.getTarget().getClass());

        if( !(e instanceof BusinessException)) {
            logger.error(e);
        }
        else{
            logger.debug("Business Logic", e);
        }

    }
}

Note that in the treatException method we log the BusinessException at debug level. Furthermore, we log the exception and then rethrow it. Logging and rethrowing is often considered a bad practice, but it is not one in our example, because the logging logic is separated from the business logic; the business logic needs to receive the exception to apply its own treatment.

Now, we’ll configure our business logic class to be intercepted by the LoggerInterceptor class. We do that using the Logger binding:

@Stateless
@Logger
public class MyBusiness {
   ...
}

We used the interceptor on an EJB class, but it works with any class managed by the Java EE context.

As the last step, you need to register the interceptor in the beans.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    <interceptors>

        <class>net.rhuanrocha.logger.LoggerInterceptor</class>

    </interceptors>
</beans>
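Under the covers, CDI delivers this kind of interception through proxies around the managed bean. As a rough plain-JDK analogue (not the CDI machinery itself; the Greeter interface and LoggingDemo class are hypothetical), java.lang.reflect.Proxy can wrap calls the same way our interceptor does:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

interface Greeter {
    String greet(String name);
}

class LoggingDemo {

    // Collected log lines, standing in for the log4j calls of the real interceptor.
    static final List<String> LOG = new ArrayList<>();

    // Wraps the target so every call is logged before it proceeds,
    // mirroring the @AroundInvoke method of LoggerInterceptor.
    static Greeter withLogging(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            LOG.add("Called: " + method.getName());   // mirrors logger.debug(...)
            try {
                return method.invoke(target, args);
            } catch (Exception e) {
                LOG.add("Error: " + e.getCause());    // mirrors treatException(...)
                throw e;
            }
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }
}
```

The real CDI container does considerably more (scopes, interceptor chains, priorities), but the call-wrapping idea is the same.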

If you want to know more about AOP, take a look at my book:

https://www.packtpub.com/application-development/java-ee-8-design-patterns-and-best-practices

Understanding Eclipse Microprofile Config

When we develop an application, we need to define some configurations for it, such as environment settings, integration settings, and others. For a long time, many developers put these configurations inside the application. That is a problem: whenever a configuration changes, the application must be rebuilt, and the same package may work in one environment but fail in another, because it was prepared for a specific environment. This is a bad practice, and the problem only grows when we work with the cloud and microservices.

Eclipse MicroProfile Config is a solution that lets us externalize configuration from the application (more specifically, from a microservice). It works with ConfigSources, each of which provides configuration properties and is consulted in an order defined by an ordinal. The solution ships with three default ConfigSources, and we can also create a custom ConfigSource to read configuration properties from any source. The default ConfigSources are:

  1. System.getProperties() (ordinal=400)
  2. System.getenv() (ordinal=300)
  3. All META-INF/microprofile-config.properties files on the ClassPath (default ordinal=100, separately configurable via a config_ordinal property inside each file)
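The ordinal rule above (highest ordinal wins) can be illustrated with plain Java. This is only a sketch of the lookup logic; the class below is hypothetical and is not the MicroProfile API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Each source pairs an ordinal with its properties; when several sources
// define the same key, the source with the highest ordinal wins.
class ConfigSourceSketch {
    final int ordinal;
    final Map<String, String> properties;

    ConfigSourceSketch(int ordinal, Map<String, String> properties) {
        this.ordinal = ordinal;
        this.properties = properties;
    }

    static Optional<String> lookup(List<ConfigSourceSketch> sources, String key) {
        return sources.stream()
                .sorted(Comparator.comparingInt((ConfigSourceSketch s) -> s.ordinal).reversed())
                .filter(s -> s.properties.containsKey(key))
                .map(s -> s.properties.get(key))
                .findFirst();
    }
}
```

With the default ordinals, a value from System.getProperties() (400) shadows the same key in microprofile-config.properties (100).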

With this solution, you can read configuration from an external source using CDI injection. Below is an example.

@Inject
@ConfigProperty(name ="bd.subject.path",defaultValue = "http://172.18.0.5:8080/")
private String bdPath;

In the code above, we read the database path via CDI injection; if the property is not found by any ConfigSource, the default value (defaultValue = "http://172.18.0.5:8080/") is used instead.

If you want to see more of this example, check the repository on GitHub: https://github.com/rhuan080/microprofile-example

 

Creating New Java EE 8 Project

In this post, I’ll show you how to create a new Java EE 8 project using an archetype I created. The archetype is available on Maven Central and generates a WAR project with the following classes and dependencies.

Dependencies:

  • Java EE 8.0
  • JUnit 3.8.1
  • log4j-core 2.11.1

Classes:

  • JobBusiness: an example business object containing the business logic for Job.
  • Dao: an example abstract DAO class.
  • JobDao: an example DAO for Job using JPA.
  • Entity: an example abstract entity.
  • Job: an example entity mapping Job with JPA.
  • JobResource: an example JAX-RS resource.

To use this archetype, run the following Maven command:

mvn archetype:generate -DarchetypeGroupId=net.rhuanrocha -DarchetypeArtifactId=javaee8-war-archetype -DarchetypeVersion=1.0.2 -DgroupId=<new project Group Id> -DartifactId=<new project artifact Id>

After running this command, the project will be created in your current folder.

Creating a Custom Maven Archetype

In this post, I will show you how to create a custom Maven archetype. In our example, we’ll create an archetype for a WAR project. This post assumes that you already have Java and Maven installed on your machine.

Creating a custom Maven archetype is a good practice because it:

  • Promotes an architectural pattern across projects.
  • Makes it easy to create new projects.

In an enterprise environment, we may want to establish an architectural pattern for our projects. A Maven archetype helps with this, because it lets us create a project skeleton with all dependencies, packages, and configurations already defined, which developers can then easily use as the base for their projects.

To make things easier, we’ll create a new Maven project and then modify it into our archetype. To create a new Maven project, run the following command:


mvn -B archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=net.rhuanrocha -DartifactId=example-archetype

After running this command, a new project will be created with the following pom.xml.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>net.rhuanrocha</groupId>
  <artifactId>example-archetype</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>example-archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Now, we need to create a file called archetype.xml, which describes the structure of the project generated by this archetype. It must be created in the following folder:

src/main/resources/META-INF/maven

Below we have an example of this file.

<archetype xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-archetype-plugin/archetype/1.0.0 http://maven.apache.org/xsd/archetype-1.0.0.xsd">
    <id>customArchetype</id>
    <sources>

        <source>src/main/java/resource/HelloWorldResource.java</source>

    </sources>
    <resources>
        <resource>src/main/webapp/WEB-INF/beans.xml</resource>
    </resources>

</archetype>

Now, I’ll create the following folder:

src/main/resources/archetype-resources

This folder contains the skeleton used when a new project is created from this archetype: the classes, packages, and configuration files the archetype will generate. Inside it, we’ll put the pom.xml and the src folder with all classes and configuration files. Below is an example of the pom.xml inside archetype-resources.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>${groupId}</groupId>
    <artifactId>${artifactId}</artifactId>
    <version>${version}</version>
    <packaging>war</packaging>
    <name>Example Archetype</name>
    <url>http://rhuanrocha.net/</url>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <jakartaee>8.0</jakartaee>
        <junit>3.8.1</junit>

    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>${jakartaee}</version>
            <scope>provided</scope>
        </dependency>

    </dependencies>
</project>

Note that the pom.xml uses the parameters ${groupId}, ${artifactId}, and ${version}, which are passed on the command line when a project is generated from this archetype. The command to generate a new project from this archetype is covered further below.
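In spirit, what the archetype plugin does with these parameters is template substitution. A plain-Java sketch of the idea (the class below is hypothetical, not the plugin’s actual code):

```java
import java.util.Map;

class ArchetypeTemplateSketch {

    // Replaces ${name} placeholders with the values supplied on the command
    // line, mimicking what archetype:generate does with the pom.xml template.
    static String render(String template, Map<String, String> params) {
        String result = template;
        for (Map.Entry<String, String> entry : params.entrySet()) {
            result = result.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }
}
```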

Now, I’ll create the HelloWorldResource class. It will be placed in a package whose name begins with the value of the ${groupId} parameter. Here is the example:

package ${groupId}.resource;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("helloWorld")
public class HelloWorldResource {

    @GET
    public Response sayHelloWorld(){

        return Response
                .ok( "Hello World")
                .build();
    }

}

Note that the parameter ${groupId} is used to define the package name.

Now I’ll create the JAXRSConfiguration class. Here is an example:

package ${groupId};

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {

}

Now, to install this archetype in your local repository, run the following command:


mvn clean install

Now, to create a new project using the custom Maven archetype, run the following command:


mvn archetype:generate -DarchetypeGroupId=net.rhuanrocha -DarchetypeArtifactId=example-archetype -DarchetypeVersion=1.0-SNAPSHOT -DgroupId=<group id> -DartifactId=<name of your artifact>

 

Implementing External Configuration Store Pattern with Jakarta EE

In my other posts, I explained ways to make your Jakarta EE application more decoupled. In this post, I will cover how to decouple the configuration from the application.

Software delivery involves many steps, and one of them is configuring the application: file systems and directories, paths to other services or resources, database settings, and other environment-specific configuration.

Generally, we work with development, testing, staging, and production environments. The application must be configured for each of them; each environment has the same properties, but the values can differ. Many developers configure the application with a configuration file (.properties, .xml, or another format) packaged inside the application, but this couples the package to the environment and forces the developer to generate one package per environment, each of which has to know the details of the environment it will run in. This is a bad practice, and it increases the complexity of delivering the application in both monolithic and microservice architectures.

The external configuration store pattern is an operational pattern (some literature classifies it as an architecture pattern or a cloud design pattern) that decouples the configuration details from the application. With it, the application doesn’t know the configuration values; it only knows which properties it needs to read from the configuration store. The figure below shows the difference between an application that uses the external configuration store pattern and one that doesn’t.

[Figure: an application with vs. without an external configuration store]

Benefits of using external configuration store pattern

Using the external configuration store pattern has several benefits; I will highlight a few. The main one is that we can update any configuration value without rebuilding the application. Once the package has been generated, it will run in any environment, unless a problem unrelated to configuration occurs or the environment itself is misconfigured. Furthermore, another team (infrastructure or middleware) can manage the configuration without a developer’s help, because the application package doesn’t need to be updated.

Another benefit of the external configuration store pattern is that configuration can be centralized, with many applications reading their configuration properties from the same location.

Implementing external configuration store pattern using Jakarta EE

The external configuration store pattern can be implemented in the following ways:

  • Using the application server as the configuration server, via system properties.
  • Using an external file or a set of external files.
  • Using a data source (relational database, NoSQL, or other).
  • Using a custom configuration server.

In this post, I’ll show you how to implement the external configuration store pattern using the application server as the configuration server (via system properties) and using an external file or a set of external files.

In our scenario, we’ll have three JAX-RS resources: one to return a welcome message, one to upload files, and one to download files. The welcome resource will have one method that gets the message from the application server’s system properties and another that gets it from an external file. To implement this, we will use a CDI producer to read the properties both from the application server and from the external file. We’ll also create the @Property qualifier to be used by the producer at injection time.

Creating the configurationStore.properties

This file is used when the application reads from an external file. It is the only configuration kept inside the application, and it tells the application where the configuration store is.


path=${path_to_configuration_store}

Implementing Qualifier

The code below shows the qualifier used to configure the injection points and let the producer supply the injected value.

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface Property {

    @Nonbinding String key() default "";

    @Nonbinding boolean container() default true;

    @Nonbinding String fileName() default "";

}

Note that the qualifier has three attributes: key, container, and fileName. The key attribute passes the property key. The container attribute defines whether the property should be read from the Jakarta EE container; if it is true, the application first searches for the property on the application server. The fileName attribute passes the file path when an external file is used. When fileName is set and container is true, the application searches the Jakarta EE container first and falls back to the external file if the property is not found there.

Implementing Producer

The code below shows PropertyProducer, the producer used to inject the properties.

public class PropertyProducer {

    @Property
    @Produces
    public String readProperty(InjectionPoint point){

        String key = point
                .getAnnotated()
                .getAnnotation(Property.class)
                .key();

        if( point
                .getAnnotated()
                .getAnnotation(Property.class)
                .container() ){

           String value = System.getProperty(key);

           if( Objects.nonNull(value) ){
               return value;
           }

        }

        return readFromPath(point
                .getAnnotated()
                .getAnnotation(Property.class)
                .fileName(), key);

    }

    private String readFromPath(String fileName, String key){

        try(InputStream in = new FileInputStream( readPathConfigurationStore() + fileName)){

            Properties properties = new Properties();
            properties.load( in );

            return properties.getProperty( key );

        } catch ( Exception e ) {
            e.printStackTrace();
            throw new PropertyException("Error to read property.");
        }

    }

    private String readPathConfigurationStore(){

        Properties configStore = new Properties();

        try( InputStream stream = PropertyProducer.class
                .getResourceAsStream("/configurationStore.properties") ) {

            configStore.load(stream);
        }
        catch ( Exception e ) {
            e.printStackTrace();
            throw new PropertyException("Error to read property.");
        }

        return configStore.getProperty("path");
    }

}
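The precedence the producer implements (container/system property first, external file as fallback) can be exercised with plain JDK classes. This sketch reproduces only the resolution order, not the CDI wiring; the class name is hypothetical:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

class PropertyLookupSketch {

    // Resolution order of the producer above: a container/system property
    // wins; otherwise the value comes from the external properties file.
    static String resolve(String key, Properties externalFile) {
        String fromSystem = System.getProperty(key);
        return fromSystem != null ? fromSystem : externalFile.getProperty(key);
    }

    // Loads a properties file from an in-memory string, standing in for
    // the FileInputStream used by readFromPath.
    static Properties load(String content) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(content));
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return p;
    }
}
```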

Implementing the Config

This class is the core of this post, because it contains the application’s configuration read from both the application server and the external file. It is a singleton, and all injected configuration properties are centralized in it.

@Singleton
public class Config {

    @Inject
    @Property(key="message.welcome")
    public String WELCOME;

    @Inject
    @Property(key="message.welcome", container = false, fileName = "config.properties")
    public String WELCOME_EXTERNAL_FILE;

    @Inject
    @Property(key="path.download")
    public String PATH_DOWNLOAD;

    @Inject
    @Property(key="path.upload")
    public String PATH_UPLOAD;

}

Implementing the WelcomeResource

The code below shows a JAX-RS resource with two methods: one returns a welcome message defined in the application server’s system properties, and the other returns a welcome message defined in an external file.

@Path("/welcome")
public class WelcomeResource {

    @Inject
    private Config config;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response message(){

        Map<String, String> map = new HashMap<>();
        map.put("message", config.WELCOME);

        return Response
                .status( Response.Status.OK )
                .entity( map )
                .build();

    }

    @GET
    @Path("/external")
    @Produces(MediaType.APPLICATION_JSON)
    public Response messageExternalFile(){

        Map<String, String> map = new HashMap<>();
        map.put("message", config.WELCOME_EXTERNAL_FILE);

        return Response
                .status( Response.Status.OK )
                .entity( map )
                .build();

    }
}

Implementing FileDao

The code below shows the FileDao implementation, the class that reads and writes files. FileDao is used by UploadResource and DownloadResource.

@Stateless
public class FileDao {

    @Inject
    private Config config;

    public boolean save( File file ){

        File fileToSave = new File(config.PATH_UPLOAD + "/" + file.getName());

        try (InputStream input = new FileInputStream( file )) {

            Files.copy( input, fileToSave.toPath() );

        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }

        return true;
    }

    public File find( String fileName ){

        File file = new File(config.PATH_DOWNLOAD + "/" + fileName);

        if( file.exists() && file.isFile() ) {

            return file;
        }

        return null;
    }

}
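The Files.copy call at the heart of FileDao.save can be tried in isolation with temporary files. This is a standalone sketch (the class name is hypothetical) of the same copy-and-report-boolean contract:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

class FileCopySketch {

    // Copies the given file into the target directory, as FileDao.save does
    // with config.PATH_UPLOAD; returns false instead of propagating errors.
    static boolean copyInto(Path source, Path targetDir) {
        Path target = targetDir.resolve(source.getFileName());
        try (InputStream input = Files.newInputStream(source)) {
            Files.copy(input, target);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Self-check: copies a small temp file and verifies the content round-trips.
    static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("upload");
            Path src = Files.createTempFile("doc", ".txt");
            Files.write(src, "hello".getBytes());
            return copyInto(src, dir)
                    && "hello".equals(new String(
                            Files.readAllBytes(dir.resolve(src.getFileName()))));
        } catch (IOException e) {
            return false;
        }
    }
}
```

Note that, like the original, the copy fails if the target file already exists; a production version might pass StandardCopyOption.REPLACE_EXISTING.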

Implementing UploadResource

The code below shows a JAX-RS resource that processes a file upload. Note that it uses the FileDao class to write the file.

@Path("/upload")
public class UploadResource {

    @Inject
    private FileDao fileDao;

    @POST
    public Response upload(@NotNull File file){

        if( fileDao.save( file ) ){
            return Response
                    .created(URI.create("/download?fileName="+ file.getName()))
                    .build();
        }

        return Response.serverError().build();

    }
}

Implementing DownloadResource

The code below shows a JAX-RS resource that processes a file download. Note that it uses the FileDao class to read the file.

@Path("/download")
public class DownloadResource {

    @Inject
    private FileDao fileDao;

    @GET
    public Response download(@NotNull @QueryParam("fileName") String fileName){

        File file = fileDao.find( fileName );

        if( Objects.isNull( file ) ){

            return Response.status(Response.Status.NOT_FOUND).build();

        }

        return Response.ok(file)
                .header("Content-Disposition",
                "attachment; filename=\"" + fileName + "\"")
                .build();
    }

}

 

Eclipse MicroProfile Config

Jakarta EE is a new project based on Java EE 8, and many people talk about a possible merge between MicroProfile and Jakarta EE. In my view, these projects will grow increasingly closer. The MicroProfile project has a solution called Eclipse MicroProfile Config, currently at version 1.3, that lets us implement the external configuration store pattern. If you want to know more about Eclipse MicroProfile Config, see: https://microprofile.io/project/eclipse/microprofile-config

Conclusion

Using the external configuration store pattern, we decouple configuration from the application. Thus, we can update configuration without rebuilding the application. Furthermore, other teams can manage the configuration without developer intervention, and we can share the same set of configurations across applications.

Using this pattern is a good practice, especially for applications built with a microservice architecture, because it promotes better delivery and easier maintenance.

If you want to see the full code of this example, visit GitHub: https://github.com/rhuan080/jakartaee-example-external-conf

 

 

Understanding the Benefits of Developing Jakarta EE Applications Based on the Spec

Jakarta EE is an umbrella project based on Java EE 8 that contains a set of specs. These are the same specs as Java EE 8, migrated from Oracle to the Eclipse Foundation. The Jakarta EE platform therefore provides the same features as Java EE 8 and lets us develop applications without caring about vendors.

The Java ecosystem has been providing more and more tools for developing decoupled applications. The term "decoupled application" can sound vague, so to make it clearer, let’s settle on a definition.

Decoupled Applications

The level of decoupling between elements (components, applications, services) is directly related to the level of abstraction of their details. If two elements are decoupled, neither needs to know the details of the other. For instance, if the "register" service is decoupled from the "payment" service, then "register" doesn’t know the implementation details of "payment", and "payment" doesn’t know the implementation details of "register". Each only knows that calling the other triggers a specific action, not how that action is performed. This lets us change one service without affecting the other, making the architecture more pluggable.

In application development, we can achieve several kinds of decoupling, such as decoupling of configuration, of code, and of infrastructure; with Jakarta EE, we can also decouple from vendors. When an application is decoupled from vendors, we are free to use any vendor without changing the code.

Jakarta EE Application Based on Spec

When I write an application using Jakarta EE, I can either stick to the specs or use vendor-specific features. If I use features of a particular vendor, my application becomes coupled to that vendor, which becomes a dependency of the application. If you then want to switch from one vendor to another, you will need to update the code.

A spec-based Jakarta EE application is one written using only the specs, without features of any specific vendor. Such an application doesn’t know which vendor will be used and has no coupling or dependency on any vendor. The concept is very similar to interfaces and classes in object-oriented programming: the specs are like interfaces, and a vendor’s implementation is like a class.

The benefit of writing a spec-based Jakarta EE application is the decoupling between application and vendor: you can run your application on any vendor. For instance, if you create a Jakarta EE application that uses the JPA spec, you can run it on WildFly, which provides Hibernate as its JPA implementation; on GlassFish, which provides EclipseLink; or on another application server with yet another JPA implementation. Your application becomes portable.

Writing Code Using Jakarta EE Spec

To show an example of writing spec-based code, let’s work through a scenario using the JPA spec. We will map a table called person in a relational database and write a DAO to read and write data in this table. The figure below shows the table model.

[Figure: model of the person table]

First of all, we’ll create the persistence.xml to configure the persistence unit. Here it is:

<persistence>
    <persistence-unit name="javaee8">

        <jta-data-source>jdbc/MyDataSource</jta-data-source>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="create" />
        </properties>

    </persistence-unit>
</persistence>

Note that we set the javax.persistence.schema-generation.database.action property, a standard JPA property that makes the schema be created in the database automatically. Because it is part of the spec, it works with any vendor.

Creating Entity

Now, we will create the Person entity, the class that maps the person table in the database.

@Entity(name = "person")
public class Person implements Serializable{

    @Id
    @GeneratedValue
    @Column
    private Long id;

    @Column
    @NotBlank
    private String firstName;

    @Column
    @NotBlank
    private String lastName;

    //Getters and Setters

    //equals and hashCode

}

Note that the code above uses only features of the JPA spec, with no vendor-specific features.

Creating DAO

Now, we’ll create the PersonDao class, a DAO responsible for reading and writing data in the table. PersonDao is an EJB stateless session bean.

@Stateless
public class PersonDao {

    @PersistenceContext
    protected EntityManager em;

    public Optional<Person> findById(Long id){

        return Optional.ofNullable(em.find(Person.class, id));
    }

    public List<Person> findAll(){

        return em.createQuery("select p from "+Person.class.getName()+" p ", Person.class)
                .getResultList();

    }

    public Person save(Person person){
        return em.merge(person);
    }
}

Note that the code above uses only features of the JPA spec.
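The Optional-based contract of findById can be seen in plain Java terms: em.find returns null when no row matches, and Optional.ofNullable turns that into an explicit empty result. A standalone sketch (the map stands in for the persistence context; the class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class FindByIdSketch {

    // Stands in for the persistence context; get() returns null when the
    // id is absent, just as em.find does for a missing row.
    static final Map<Long, String> STORE = new HashMap<>();

    static Optional<String> findById(Long id) {
        return Optional.ofNullable(STORE.get(id));
    }
}
```

Callers can then handle the missing case explicitly, e.g. findById(id).orElseThrow(...), instead of risking a NullPointerException.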

Conclusion

To decide whether to develop an application based on the spec or using vendor features, we need to understand the application’s needs at development time. If the application must be vendor-independent, develop it based on the spec. If you gain something from a specific vendor’s features, you can develop it using that vendor. My advice: prefer spec-based Jakarta EE applications, because your application becomes portable and closer to a cloud-native application.

If you want to see all the code of this example, visit: https://github.com/rhuan080/jakartaee-example-jpa