Jakarta EE: Multitenancy with JPA on WildFly, Part 1

In this two-part series, I demonstrate two approaches to multitenancy with the Jakarta Persistence API (JPA) running on WildFly. In the first half of this series, you will learn how to implement multitenancy using a database. In the second half, I will introduce you to multitenancy using a schema. I based both examples on JPA and Hibernate.

Because I have focused on implementation examples, I won’t go deeply into the details of multitenancy, though I will start with a brief overview. Note, too, that I assume you are familiar with Java persistence using JPA and Hibernate.

Multitenancy architecture

Multitenancy is an architecture that permits a single application to serve multiple tenants, also known as clients. Although tenants in a multitenancy architecture access the same application, they are securely isolated from each other, and each tenant only has access to its own resources. Multitenancy is a common architectural approach for software-as-a-service (SaaS) and cloud computing applications.

A multitenant architecture must isolate the data available to each tenant. If there is a problem with one tenant’s data set, it won’t impact the other tenants. In a relational database, we use a database or a schema to isolate each tenant’s data. One way to separate data is to give each tenant access to its own database or schema. Another option, which is available if you are using a relational database with JPA and Hibernate, is to partition a single database for multiple tenants. In this article, I focus on the standalone database and schema options. I won’t demonstrate how to set up a partition.

In a server-based application like WildFly, multitenancy differs from the conventional approach. Here, the server application works directly with the data source, initiating the connection and preparing the database to be used. The client application does not spend time opening the connection, which improves performance. On the other hand, using Enterprise JavaBeans (EJBs) for container-managed transactions can lead to problems, such as the server application encountering an error when committing or rolling back a transaction.

Implementation code

Two interfaces are crucial to implementing multitenancy in JPA and Hibernate:

  • MultiTenantConnectionProvider is responsible for connecting tenants to their respective databases and services. We will use this interface and a tenant identifier to switch between databases for different tenants.
  • CurrentTenantIdentifierResolver is responsible for identifying the tenant. We will use this interface to define what is considered a tenant (more about this later). We will also use this interface to provide the correct tenant identifier to MultiTenantConnectionProvider.

In JPA, we configure these interfaces using the persistence.xml file. In the next sections, I’ll show you how to use these two interfaces to create the first three classes we need for our multitenancy architecture: DatabaseMultiTenantProvider, MultiTenantResolver, and DatabaseTenantResolver.

DatabaseMultiTenantProvider

DatabaseMultiTenantProvider is an implementation of the MultiTenantConnectionProvider interface. This class contains logic to switch to the database that matches the given tenant identifier. In WildFly, this means switching to different data sources. The DatabaseMultiTenantProvider class also implements the ServiceRegistryAwareService, which allows us to inject a service during the configuration phase.

Here’s the code for the DatabaseMultiTenantProvider class:

public class DatabaseMultiTenantProvider implements MultiTenantConnectionProvider, ServiceRegistryAwareService{
    private static final long serialVersionUID = 1L;
    private static final String TENANT_SUPPORTED = "DATABASE";
    private DataSource dataSource;
    private String typeTenancy;

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }
    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {

        typeTenancy = (String) ((ConfigurationService)serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.multiTenancy");

        dataSource = (DataSource) ((ConfigurationService)serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.connection.datasource");


    }
    @SuppressWarnings("rawtypes")
    @Override
    public boolean isUnwrappableAs(Class clazz) {
        return false;
    }
    @Override
    public <T> T unwrap(Class<T> clazz) {
        return null;
    }
    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }
    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {

        final Context init;
        //Only use multitenancy when hibernate.multiTenancy == DATABASE
        if(TENANT_SUPPORTED.equals(typeTenancy)) {
            try {
                init = new InitialContext();
                dataSource = (DataSource) init.lookup("java:/jdbc/" + tenantIdentifier);
            } catch (NamingException e) {
                throw new HibernateException("Error trying to get datasource ['java:/jdbc/" + tenantIdentifier + "']", e);
            }
        }

        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }
    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        releaseAnyConnection(connection);
    }
}

As you can see, we call the injectServices method to populate the datasource and typeTenancy attributes. We use the datasource attribute to get a connection from the data source, and we use the typeTenancy attribute to find out if the class supports the multiTenancy type. We call the getConnection method to get a data source connection. This method uses the tenant identifier to locate and switch to the correct data source.

MultiTenantResolver

MultiTenantResolver is an abstract class that implements the CurrentTenantIdentifierResolver interface. This class aims to provide a setTenantIdentifier method to all CurrentTenantIdentifierResolver implementations:

public abstract class MultiTenantResolver implements CurrentTenantIdentifierResolver {

    protected String tenantIdentifier;

    public void setTenantIdentifier(String tenantIdentifier) {
        this.tenantIdentifier = tenantIdentifier;
    }
}

This abstract class is simple. We only use it to provide the setTenantIdentifier method.

DatabaseTenantResolver

DatabaseTenantResolver also implements the CurrentTenantIdentifierResolver interface; it is a concrete subclass of MultiTenantResolver:

public class DatabaseTenantResolver extends MultiTenantResolver {

    private Map<String, String> regionDatasourceMap;

    public DatabaseTenantResolver(){
        regionDatasourceMap = new HashMap<>();
        regionDatasourceMap.put("default", "MyDataSource");
        regionDatasourceMap.put("america", "AmericaDB");
        regionDatasourceMap.put("europa", "EuropaDB");
        regionDatasourceMap.put("asia", "AsiaDB");
    }

    @Override
    public String resolveCurrentTenantIdentifier() {


        if(this.tenantIdentifier != null
                && regionDatasourceMap.containsKey(this.tenantIdentifier)){
            return regionDatasourceMap.get(this.tenantIdentifier);
        }

        return regionDatasourceMap.get("default");

    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }

}

Notice that DatabaseTenantResolver uses a Map to define the correct data source for a given tenant. The tenant, in this case, is a region. Note, too, that this example assumes we have the data sources java:/jdbc/MyDataSource, java:/jdbc/AmericaDB, java:/jdbc/EuropaDB, and java:/jdbc/AsiaDB configured in WildFly.
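The fallback behavior of resolveCurrentTenantIdentifier can be exercised in isolation with plain Java. The sketch below only mirrors the map logic of the class above; the class name TenantLookupDemo is illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class TenantLookupDemo {

    // Same lookup rule as resolveCurrentTenantIdentifier: known tenants map
    // to their own data source, everything else falls back to "default".
    static String resolve(Map<String, String> map, String tenantIdentifier) {
        if (tenantIdentifier != null && map.containsKey(tenantIdentifier)) {
            return map.get(tenantIdentifier);
        }
        return map.get("default");
    }

    public static void main(String[] args) {
        Map<String, String> regionDatasourceMap = new HashMap<>();
        regionDatasourceMap.put("default", "MyDataSource");
        regionDatasourceMap.put("america", "AmericaDB");
        regionDatasourceMap.put("europa", "EuropaDB");
        regionDatasourceMap.put("asia", "AsiaDB");

        // Known tenant: resolves to its own data source.
        System.out.println(resolve(regionDatasourceMap, "america")); // AmericaDB
        // Unknown tenant: falls back to the default data source.
        System.out.println(resolve(regionDatasourceMap, "africa"));  // MyDataSource
    }
}
```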

Configure and define the tenant

Now we need to use the persistence.xml file to configure the tenant:

<persistence>
    <persistence-unit name="jakartaee8">

        <jta-data-source>jdbc/MyDataSource</jta-data-source>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="none" />
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgresPlusDialect"/>
            <property name="hibernate.multiTenancy" value="DATABASE"/>
            <property name="hibernate.tenant_identifier_resolver" value="net.rhuanrocha.dao.multitenancy.DatabaseTenantResolver"/>
            <property name="hibernate.multi_tenant_connection_provider" value="net.rhuanrocha.dao.multitenancy.DatabaseMultiTenantProvider"/>
        </properties>

    </persistence-unit>
</persistence>

Next, we define the tenant in the EntityManagerFactory:

@PersistenceUnit
protected EntityManagerFactory emf;


protected EntityManager getEntityManager(String multitenancyIdentifier){

    final MultiTenantResolver tenantResolver = (MultiTenantResolver) ((SessionFactoryImplementor) emf).getCurrentTenantIdentifierResolver();
    tenantResolver.setTenantIdentifier(multitenancyIdentifier);

    return emf.createEntityManager();
}

Note that we call the setTenantIdentifier before creating a new instance of EntityManager.
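Putting it together, a DAO method could call getEntityManager to run a query against a given tenant's database. This is only a sketch: the Customer entity and the findCustomers name are hypothetical, and the code assumes it runs inside the container that injects the EntityManagerFactory shown above.

```java
// Hypothetical usage; Customer is an illustrative entity, not from the article.
public List<Customer> findCustomers(String tenantIdentifier) {
    // Switch the resolver to this tenant before creating the EntityManager.
    EntityManager em = getEntityManager(tenantIdentifier);
    try {
        // The query runs against the tenant's own database
        // (e.g. AmericaDB when the identifier is "america").
        return em.createQuery("SELECT c FROM Customer c", Customer.class)
                .getResultList();
    } finally {
        em.close();
    }
}
```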

Conclusion

I have presented a simple example of multitenancy in a database using JPA with Hibernate and WildFly. There are many ways to use a database for multitenancy. My main point has been to show you how to implement the CurrentTenantIdentifierResolver and MultiTenantConnectionProvider interfaces. I’ve shown you how to use JPA’s persistence.xml file to configure the required classes based on these interfaces.

Keep in mind that for this example, I have assumed that WildFly manages the data source and connection pool and that EJB handles the container-managed transactions. In the second half of this series, I will provide a similar introduction to multitenancy, but using a schema rather than a database. If you want to go deeper with this example, you can find the complete application code and further instructions on my GitHub repository.

Jakarta EE: Creating an Enterprise JavaBeans Timer

Enterprise JavaBeans (EJB) has many interesting and useful features, some of which I will be highlighting in this and upcoming articles. In this article, I’ll show you how to create an EJB timer programmatically and with annotation. Let’s go!

The EJB timer feature allows us to schedule tasks to be executed according to a calendar configuration. It is very useful because we can execute scheduled tasks using the power of the Jakarta context. When we run timer-based tasks, we need to answer questions about concurrency, about which node the task was scheduled on (in the case of a clustered application), about what to do if the task fails to execute, and others. When we use the EJB timer, we can delegate many of these concerns to the Jakarta context and focus more on the business logic. Interesting, isn't it?
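To give a taste of both styles, here is a minimal sketch of a singleton bean with an annotation-based timer and a programmatically created one. The class and method names are illustrative, and the code assumes a Jakarta EE container providing the javax.ejb APIs:

```java
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
public class ReportScheduler {

    @Resource
    private TimerService timerService;

    // Annotation-based timer: the container runs this every day at 2:00 AM.
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void nightlyReport() {
        // business logic here
    }

    // Programmatic timer: created at runtime with an arbitrary delay.
    public void scheduleOneShot(long delayMillis) {
        timerService.createSingleActionTimer(delayMillis, new TimerConfig());
    }

    // Callback invoked when a programmatically created timer fires.
    @Timeout
    public void onTimeout() {
        // business logic here
    }
}
```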

Jakarta EE 8 Released: The New Era of Java EE

Java EE is a fantastic project, but it was created in 1999 under the name J2EE, and after 20 years its process for evolving is no longer suited to the modern enterprise scenario. So Java EE needed to change too.

Java EE has a new home and a new brand, and is being released today, September 10th. Java EE was migrated from Oracle to the Eclipse Foundation and is now Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project. Today the Eclipse Foundation is releasing Jakarta EE 8, and in this post we'll see what that means.

Java EE was a very strong project, heavily used in many kinds of enterprise Java applications and by many big frameworks like Spring and Struts. Some developers have questioned its features and its process for evolving, but given its wide adoption and time in the market, its success is unquestionable. The enterprise world doesn't stop, however, and new challenges emerge all the time. The pace of change keeps accelerating, because companies must be ever more prepared to answer market challenges. Thus, technologies should follow these changes in the enterprise world and adapt to provide better solutions.

With that in mind, the IT world produced many changes and solutions to better answer the enterprise's needs. One of these solutions is Cloud Computing. Summarizing the concept in a few words, Cloud Computing is a solution for providing computing resources as a service (IaaS, PaaS, SaaS). This allows you to use only the resources you need, and to scale up and down when needed.


Jakarta EE Goals

Jakarta EE 8 has the same set of specifications as Java EE 8, with no changes to its features. The only change is the new process for evolving these specifications.

The Java ecosystem has a new focus: putting its power at the service of cloud computing, and Jakarta EE is key to that.

Jakarta EE's goal is to accelerate business application development for Cloud Computing (cloud-native applications), working on the basis of specifications developed by many vendors. The project starts from Java EE 8, whose specifications, TCKs, and Reference Implementations (RIs) were migrated from Oracle to the Eclipse Foundation. But to evolve these specifications toward Cloud Computing, we cannot work with the same process used in the Java EE project, because it is too slow for current enterprise challenges. Thus, the first action of the Eclipse Foundation is to change the process for evolving Jakarta EE.

By making this change, Jakarta EE 8 marks a milestone in Java enterprise history: it places these specifications under a new process designed to push them toward the cloud-native application approach.

Jakarta EE Specification Process

The Jakarta EE Specification Process (JESP) is the new process that the Jakarta EE Working Group will use to evolve Jakarta EE. JESP replaces the JCP process previously used for Java EE.

JESP is based on the Eclipse Foundation Specification Process (EFSP) with some changes, which are listed at https://jakarta.ee/about/jesp/. The changes are:

  • Any modification to or revision of this Jakarta EE Specification Process, including the adoption of a new version of the EFSP, must be approved by a Super-majority of the Specification Committee, including a Super-majority of the Strategic Members of the Jakarta EE Working Group, in addition to any other ballot requirements set forth in the EFSP.
  • All specification committee approval ballot periods will have the minimum durations outlined below (notwithstanding the exception process defined by the EFSP, these periods may not be shortened):
    • Creation Review: 7 calendar days;
    • Plan Review: 7 calendar days;
    • Progress Review: 14 calendar days;
    • Release Review: 14 calendar days;
    • Service Release Review: 14 calendar days; and
    • JESP Update: 7 calendar days.
  • A ballot will be declared invalid and concluded immediately in the event that the Specification Team withdraws from the corresponding review.
  • Specification Projects must engage in at least one Progress or Release Review per year while in active development.

The goal of JESP is to be as lightweight as possible, with a design closer to open source development and with code-first development in mind. This process promotes a new culture focused on experimentation, evolving the specifications based on the experience gained from those experiments.

Jakarta EE 9

Jakarta EE 8 focuses on updating the process by which the platform evolves; the first feature updates will come in Jakarta EE 9. The main update expected in Jakarta EE 9 is the birth of the Jakarta NoSQL specification.

Jakarta NoSQL is a specification to promote easy integration between Java applications and NoSQL databases, offering a standard, high-level abstraction for connecting the two. It is a big step toward bringing the Java platform closer to the cloud-native approach, because NoSQL databases are widely used in cloud environments and their adoption is expected to grow. Jakarta NoSQL is based on JNoSQL, which will be its reference implementation.

Another expected update in Jakarta EE concerns the namespace. Oracle gave the Java EE project to the Eclipse Foundation, but Oracle retains the trademark. This means the Eclipse Foundation cannot use java or javax in project names or namespaces for new features coming to Jakarta EE. Thus, the community is discussing the transition from the old namespace to jakarta.*. You can see this thread here.

Conclusion

Jakarta EE opens a new era in the Java ecosystem: it takes Java EE, which was and is a very important project, and puts it under a very good open source process. Although this Jakarta EE version comes without feature updates, it opens the gate to the new features coming to Jakarta EE in the future. So we'll soon see many specification-based solutions for working in the cloud, in the next versions of Jakarta EE.

Understanding the Current Java Moment

The Java platform is one of the most used platforms of recent years and has the largest ecosystem in the world of technology. It lets us develop applications for several platforms, such as Windows, Linux, embedded systems, and mobile. However, Java has received many complaints: that it is fat, that it takes a lot of memory, that it is verbose. In fact, Java was created to solve big problems, not small ones, although it can be used for small problems too. You can solve small problems with Java, but you see the real benefit of Java when you have a big problem, especially in enterprise environments. So if you wrote a hello-world application in Java and compared it to a hello-world application in another language, you would see greater memory use and more lines of code; but when you build a big application that integrates with other applications and resources, that is where you see the real benefit of the Java platform.

Java is great for the enterprise environment because of its power to solve complex problems and its multi-platform nature, but also because it brings more security to the business by promoting backward compatibility and solutions based on specifications. The business thus has a stronger guarantee that a Java update won't break its systems, and has solutions decoupled from vendors, allowing it to change vendors when needed.

Java has a big ecosystem, with emphasis on Java EE (now Jakarta EE), which promotes several specifications to solve common problems in the enterprise environment. Some of these specifications are EJB, JPA, JMS, JAX-RS, and JAX-WS. Furthermore, we have Spring, which energized the Java ecosystem; although it is not based on specifications, it uses some specifications from Java EE.

Cloud Computing and Microservices

Cloud Computing is a concept that has grown over the years and has changed how developers architect, write, and think about applications. It is a set of principles and approaches that aims to provide computing resources as a service (PaaS, IaaS, SaaS). With this, we can use only the resources needed to run applications, and scale when needed; we thus optimize the use of computing resources and consequently optimize costs for the business. This is fantastic, but to benefit from Cloud Computing, applications must follow this approach. Microservice architecture emerged as a good approach for architecting and thinking about applications for Cloud Computing (cloud-native applications).

Microservice architecture is an approach that breaks a big application (a monolith) into many micro-applications or microservices, generally split by business domain. With this, we can scale only the business domains that really need it, without scaling all of them; we get fault tolerance, because if one business domain fails, the others do not fail with it; and we get resilience, because a failed microservice can be restored. Microservice architecture therefore lets us exploit the benefits of Cloud Computing and optimize the use of computing resources.

Java and Cloud Computing

As said above, "Java was created to solve big problems, not small ones, although it can be used for small problems too." But the cloud-native approach breaks a big, complex application into many small, simpler applications (such as microservices). Furthermore, an application's life cycle is much shorter in a microservice architecture than in a monolith. Besides that, in the cloud-native approach the complexity lies not in the applications themselves but in the communication between them (their integrations), and in managing and monitoring them. In other words, the complexity is in how these applications (microservices) interact with each other and in how quickly we can identify a problem in one of them. Given this, the Java platform and its ecosystem had several gaps to close, described below.

Fat JVM: Many Java applications started with libraries that were never used, and the JVM loaded several things the application didn't need. That is fine for a big application solving complex problems, but for small applications (like microservices) it is not so good.

JVM JIT Optimization: The JVM's JIT compiler optimizes the application as it runs over time; in other words, a longer application life cycle means more optimization. So the JVM is better suited to applications that run for a long time than to applications that run briefly. In cloud computing, applications are born and die all the time, and their life cycles are shorter.

Java Applications Have a Longer Boot Time: Many Java applications boot slowly compared to applications written in other languages, because they commonly do a lot of work at boot time.

Java Generates Fat Packages (WAR, EAR, JAR): Many Java applications have a large package size, mainly when they bundle libraries (in the lib folder). This can increase delivery time and degrade the delivery process.

Java EE Had No Standard Solutions for Microservices: Java EE has many important specs for solving enterprise problems, but it had no specs for the problems that come with microservice architecture and cloud computing.

Updates to Java and Java EE Were Slow: Java and Java EE had a slow process for updating existing features and creating new ones. That is bad, because the enterprise environment changes continuously and faces new challenges all the time.

In response, the Java ecosystem produced several changes and initiatives to close each gap created by cloud computing, and to put Java on top again.

Java On Top Again

The Java platform is robust and offers solutions for almost anything, but to me that is not the best of Java. To me, the best of the Java world is the community, which is very strong and hardworking. In a short time, the Java community promoted many actions and initiatives that pushed the Java platform toward cloud computing, bringing Java ever closer to the cloud-native approach. Many people call this Cloud Native Java. The principal actions and initiatives in the Java ecosystem are Jakarta EE, MicroProfile, the new Java release cycle, improvements to the Java language, improvements to the JVM, and Quarkus. I'll explain how each of these has impacted the Java ecosystem.

Jakarta EE: Java EE was one of the most important projects in the Java ecosystem, promoting standard solutions to enterprise problems. The project was migrated from Oracle to the Eclipse Foundation, underwent many changes to its working structure, and is now called Jakarta EE.

Jakarta EE is an umbrella project that promotes standard solutions (specifications) for the enterprise world and has a new process for approving new features and evolving existing ones. With this, Jakarta EE can evolve quickly and keep improving its enterprise solutions. That matters because nowadays the enterprise changes very fast and faces new challenges all the time; as technology is a tool for innovation, it needs to be able to change quickly when needed.

MicroProfile: Java EE, and now Jakarta EE, has many good solutions for the enterprise world, but it does not have standard solutions for many of the problems of microservice architecture. That does not mean you cannot implement such solutions, but you would need to implement them yourself, and they would be entirely in your hands.

MicroProfile is an umbrella project that promotes standard solutions (specifications) to the problems of microservice architecture. MicroProfile is compatible with Java EE and lets developers build applications with a microservice architecture more easily. Some of its specifications are MicroProfile Config, MicroProfile OpenTracing, MicroProfile Rest Client, and MicroProfile Fault Tolerance.
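As a small taste of these specifications, the sketch below combines MicroProfile Config and MicroProfile Fault Tolerance. The class name, property name, and method names are illustrative, and the code assumes a MicroProfile runtime on the classpath:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;

@ApplicationScoped
public class GreetingService {

    // MicroProfile Config: injected from a config source
    // (e.g. microprofile-config.properties or an environment variable).
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    private String message;

    // MicroProfile Fault Tolerance: retry up to 3 times, then fall back.
    @Retry(maxRetries = 3)
    @Fallback(fallbackMethod = "defaultGreeting")
    public String greet(String name) {
        return message + ", " + name;
    }

    public String defaultGreeting(String name) {
        return "Hello, " + name;
    }
}
```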

Java Release Cycle: The Java release cycle changed, and Java releases now ship every six months. It's an excellent change because it lets the Java platform respond quickly to new challenges. Besides that, it promotes a faster evolution of the platform.

Improvements to the Java Language: Java has had several changes that improved its features, such as the functional programming features. Besides that, the Jigsaw project introduced modularity to Java. With this, we can create thinner Java applications that can be scaled more easily.
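For example, with Jigsaw a module declares its dependencies and exported packages in a module-info.java descriptor. This is only a sketch; the module and package names are illustrative:

```java
// module-info.java: the module only pulls in what it actually uses,
// and only the exported package is visible to consumers.
module com.example.orders {
    requires java.sql;               // explicit dependency on the JDBC module
    exports com.example.orders.api;  // public API of this module
}
```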

Improvements to the JVM: The JVM had some issues when used in containers, mainly around measuring memory and CPU. That was bad because containers are very important to cloud computing: with containers we deliver not only the application but the whole environment with its dependencies.

Since Java 9, the JVM has had many updates that improved how it behaves inside containers. With this, the JVM is closer to the needs of cloud computing.

Quarkus: Quarkus is the latest news in the Java ecosystem and has been at the top of the talks. Quarkus is a stack tailored to GraalVM and OpenJDK HotSpot that provides a Kubernetes-native Java application stack, letting developers write cloud applications using best-of-breed Java libraries and standards. With Quarkus we can write applications with very fast boot times, incredibly low RSS memory, and an amazing set of tools that make developers' lives easier.

Quarkus is really an amazing project that points to a new future for the Java platform. It works with a container-first concept and uses compile-time boot techniques to boost Java applications. If you want to know more about Quarkus, click here.

All of these projects and initiatives bring Java back into focus and start a new era for the platform. With them, Java enters cloud computing with its own way of working, based on specifications, promoting standardized solutions for the cloud. That is great for Java and for cloud computing, because from these standardized solutions many enterprise solutions will emerge, with the support of many companies, making their adoption safer.


How to: Microprofile Opentracing with Jaeger

In this post I will explain how to use MicroProfile OpenTracing with Jaeger. This post is practical and will not cover the MicroProfile OpenTracing concept in depth. If you want to know more about the concept, you can read my post here.

To use OpenTracing in our application, we need two steps:

  1. Prepare the application to expose data in compliance with OpenTracing.
  2. Configure a monitor to collect that data and expose information about the application (in this post we'll use Jaeger).

Preparing the Application

First of all, to make the application able to communicate with Jaeger, we need to configure the application's environment by setting some environment variables:

$ export JAEGER_AGENT_HOST=jaeger
$ export JAEGER_SERVICE_NAME=speaker
$ export JAEGER_REPORTER_LOG_SPANS=true
$ export JAEGER_SAMPLER_TYPE=const
$ export JAEGER_SAMPLER_PARAM=1

In the example above, we configure the service called speaker to connect to Jaeger.

Now we need to configure our JAX-RS resource and JAX-RS client (or Rest Client, starting with MicroProfile OpenTracing 1.3) to propagate trace information across services. To do that we use an annotation called @Traced on the JAX-RS resource (and on the Rest Client when using MicroProfile OpenTracing 1.3). Below is an example.

Configuring the JAX-RS endpoint to be traced

@Path("speakers")
@Traced
public class SpeakerEndpoint {

...

}

Configuring the JAX-RS Client

When we call another service, the tracing information needs to be propagated to the called service. For that, we configure the JAX-RS Client to propagate the tracing information by registering it with OpenTracing:

public Speaker findById( String speakerId ){
    Client client = ClientTracingRegistrar
                      .configure(ClientBuilder.newBuilder())
                      .build();
    Speaker response = null;
    try {
       response = client.target(path)
               .path("/speakers/"+speakerId)
               .request()
               .get(Speaker.class);

    }
    finally {
       client.close();
    }
    return response;
}

Configuring the Rest Client (OpenTracing 1.3)

Since version 1.3, MicroProfile OpenTracing integrates with Rest Client. With this, you can use @Traced to mark your Rest Client interfaces for tracing, so the tracing information is propagated to the called service.

@Path("/speakers")
@RegisterRestClient
@Traced
public interface SpeakerClientService {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Speaker findById( String speakerId );
}

Configuring Jaeger

Now we need to configure Jaeger. Once Jaeger is configured and started, the applications send tracing information to it, and Jaeger presents that information in a UI. In our example we'll use Docker to start Jaeger:

$ docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.11

Then, access http://localhost:16686 to see the Jaeger UI. Below is a screenshot of the Jaeger UI used in our example.

[Screenshot: the Jaeger UI]

When you start Jaeger, it begins receiving the services' data and showing it in its UI. Each access to your services will be shown there.

Conclusion

Using MicroProfile OpenTracing, we can add distributed tracing with very little code, and connect to Jaeger to visualize the traces in a good UI. If you want to know more about this MicroProfile spec, click here.

To see the complete example used here, access: https://github.com/rhuan080/microprofile-example

 
Introduction: Observability with Microprofile

Microservice architecture is an approach that fits the cloud environment very well, permitting us to create cloud-native applications. With microservice architecture we promote resilience, fault tolerance, and scalability. However, this approach brings different challenges than monolithic applications.

In a monolithic application, all code and business rules live inside one application, so all transactions occur within it. Microservice architecture, however, breaks the monolith into services that are packaged independently and have independent life cycles. As a result, several cross-cutting concerns span multiple services: processes, concerns, or logic that act in a distributed way. One of these concerns is monitoring and logging, which brings us to the concept of observability.

What is Observability?

Observability is a concept that comes from control theory, a branch of mathematics:

Formally, a system is said to be observable if, for any possible sequence of state and control vectors (the latter being variables whose values one can choose), the current state (the values of the underlying dynamically evolving variables) can be determined in finite time using only the outputs.

Wikipedia

Some developers define observability in a microservice architecture as the set of metrics, logging, and tracing tools, but I see observability as a more generic concept. Metrics, logging, and tracing are just one way to provide observability; in the future we may use others.

To me, observability is the capacity of a system to expose precise information about its state in an easy and fast way. Unlike monitoring, observability is about the system itself. With monitoring, the focal point is the tools used to monitor the system, yet the system itself can be easy or hard to monitor. With observability, the focal point is the system, which needs to expose its information easily and quickly. Monitoring an observable system is always easy, because the system exposes its information in a way that facilitates monitoring.

Observability with Microprofile

To promote easy support for observability, MicroProfile has several specs that allow developers to implement observability in their microservices. The three main specs are: MicroProfile OpenTracing, MicroProfile Metrics, and MicroProfile Health Check.

[Figure: the MicroProfile observability specs]

Microprofile OpenTracing

MicroProfile OpenTracing is a spec that permits us to do distributed tracing using the OpenTracing API, tracing the flow of a request across services. The spec is compatible with both Zipkin and Jaeger, which we can use to display the distributed tracing information. Below is an example of how to use MicroProfile OpenTracing.

@Path("subjects")
@Traced
public class SubjectEndpoint {

}


Microprofile Metrics

MicroProfile Metrics is a spec that permits us to expose metrics about our applications, making precise metrics available for easy and fast consumption. Below is an example of how to use MicroProfile Metrics.

public class CounterBean {

    @Counted
    public CounterBean() {
    }
}
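Under the hood, @Counted amounts to incrementing a named counter every time the annotated member is invoked. A plain-Java sketch of that bookkeeping (a simplified registry, not the actual MicroProfile Metrics implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CounterRegistry {

    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    // What @Counted effectively does around each call: bump the metric for that member.
    public long count(String metricName) {
        return counters.computeIfAbsent(metricName, k -> new AtomicLong())
                       .incrementAndGet();
    }

    public long value(String metricName) {
        AtomicLong c = counters.get(metricName);
        return c == null ? 0L : c.get();
    }
}
```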

Microprofile HealthCheck

MicroProfile Health Check is a spec that permits us to expose whether the application is up or down in our environment. It works as a boolean response (yes or no) to the question "Is my application still running OK?". Below is an example of how to use MicroProfile Health Check.

@Health
@ApplicationScoped
public class ApplicationHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .named("application-check").up()
                .withData("CPUAvailable", Runtime.getRuntime().availableProcessors())
                .withData( "MemoryFree", Runtime.getRuntime().freeMemory())
                .withData("TotalMemory", Runtime.getRuntime().totalMemory())
                .build();
    }
}
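The shape of a health response — a name, an up/down state, and optional key/value data — can be sketched in plain Java (a simplified stand-in, not the actual HealthCheckResponse builder):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleHealth {

    final String name;
    final boolean up;
    final Map<String, Object> data = new LinkedHashMap<>();

    SimpleHealth(String name, boolean up) {
        this.name = name;
        this.up = up;
    }

    // Attach diagnostic key/value data, like withData(...) in the real API.
    SimpleHealth withData(String key, Object value) {
        data.put(key, value);
        return this;
    }

    // The boolean answer to "is my application still running OK?".
    String state() {
        return up ? "UP" : "DOWN";
    }
}
```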

Conclusion

MicroProfile has been promoting several solutions to microservice challenges, and among them are the specs that promote observability in our microservices. With them, we can combine MicroProfile with Jaeger, Zipkin, Prometheus, and others for better observability and monitoring. This article was only an introduction; I will post more details about these specs in upcoming posts.

If you want to know more about MicroProfile, access microprofile.io.

Creating Logger with AOP using CDI Interceptor

AOP is a programming paradigm that allows us to separate business logic from technical code that crosscuts the whole application.

Logging is a very important element of any application, because it permits developers to analyze failures and behaviors of the application. Good logging promotes faster analysis and faster solutions to failures and bad behaviors.

When developers build an application using object-oriented programming (OOP), they think about how to separate and distribute logic and responsibility across classes, and how to separate the business logic from everything else. When we think about logs, however, achieving this separation is not an easy task, because logging is a concern that crosscuts the whole application. Aspect-Oriented Programming (AOP) is a good tool to solve that and promote this separation. In my book I said that "AOP is a programming paradigm that allows us to separate business logic from some technical code that crosscuts all application". In this post I will not explain AOP itself, but I'll show you how to use it to separate the logging logic from the business logic using a CDI interceptor. If you want to know more, take a look at my book.

First of all, we need to define which exceptions should be logged as errors. For example, if the application throws a business exception, you probably don't need to log it as an error: doing so would inflate the logs, and a business exception is not an application failure or bad behavior but a user error. A business exception should instead be logged as debug. Let's go to the code.

First we need to create an interceptor binding annotation. In this example we create one called Logger.

import javax.interceptor.InterceptorBinding;

import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Inherited
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface Logger {

}

Now we'll create the CDI interceptor that will intercept the calls and handle the logging.

import org.apache.logging.log4j.LogManager;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;
import java.io.Serializable;

@Interceptor
@Logger
public class LoggerInterceptor implements Serializable {

    @AroundInvoke
    public Object processMethod(InvocationContext context) throws Exception {

        org.apache.logging.log4j.Logger logger =
                LogManager.getLogger(context.getTarget().getClass());

        try {
            logger.debug("Called: " + context.getMethod());
            return context.proceed();
        }
        catch(Exception e){
            treatException(e, context);
            throw e;

        }

    }

    private void treatException(Exception e, InvocationContext context){

        org.apache.logging.log4j.Logger logger =
                LogManager.getLogger(context.getTarget().getClass());

        if( !(e instanceof BusinessException)) {
            logger.error(e);
        }
        else{
            logger.debug("Business Logic", e);
        }

    }
}

Note that in the treatException method we log the BusinessException as debug. Furthermore, we log the exception and then rethrow it. Logging and rethrowing is generally considered a bad practice, but it is acceptable here because the logging logic is separated from the business logic, and the business logic still needs to receive the exception to apply its own treatment.
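The level-selection rule in treatException is independent of CDI and can be isolated as plain Java (the BusinessException class here is a hypothetical stand-in for the application's own type, and the method simply returns the chosen level):

```java
public class LogLevelRule {

    // Stand-in for the application's business exception type.
    static class BusinessException extends Exception {
        BusinessException(String msg) { super(msg); }
    }

    // A user-level business error is logged as DEBUG; anything else is a real failure.
    static String levelFor(Exception e) {
        return (e instanceof BusinessException) ? "DEBUG" : "ERROR";
    }
}
```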

Now we'll configure our business logic class to be intercepted by the LoggerInterceptor class. We do that using the Logger binding annotation.

@Stateless
@Logger
public class MyBusiness {
   ...
}

We used the interceptor on an EJB class, but it can be used with any class managed by the Java EE context.

As the last step, you need to configure the interceptor in the beans.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    <interceptors>
        <class>net.rhuanrocha.logger.LoggerInterceptor</class>
    </interceptors>
</beans>

If you want to know more about AOP, get my book at this link:

https://www.packtpub.com/application-development/java-ee-8-design-patterns-and-best-practices

Microprofile Config: Creating a Custom ConfigSource

In this post I'll show you how to create a custom ConfigSource to read properties from an external file. I won't explain what Eclipse MicroProfile Config is here; if you want to know, read my previous post, Understanding Eclipse MicroProfile Config.

Eclipse MicroProfile Config provides support for reading configuration properties through ConfigSource classes. If you need custom support for some configuration source (such as a database, file system, or service), you create a custom ConfigSource class to provide it.

In this example I create a ConfigSource that reads configuration properties from an external file (outside the package); the package itself contains only one property, which holds the path to the configuration file. Below is the FileSystemConfigSource class implementing ConfigSource.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.Set;

import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;
import org.eclipse.microprofile.config.spi.ConfigSource;

public class FileSystemConfigSource implements ConfigSource {

    private static final String FILE_CONFIG_PROPERTY = "net.rhuanrocha.mp-speaker.config.file.path";
    private static final String CONFIG_SOURCE_NAME = "FileSystemConfigSource";
    private static final int ORDINAL = 300;

    private String fileConfig;

    @Override
    public Map<String, String> getProperties() {

        try (InputStream in = new FileInputStream(readPath())) {

            Properties properties = new Properties();
            properties.load(in);

            Map<String, String> map = new HashMap<>();
            properties.stringPropertyNames()
                    .forEach(key -> map.put(key, properties.getProperty(key)));

            return map;

        } catch (IOException e) {
            e.printStackTrace();
        }

        return Collections.emptyMap();
    }

    @Override
    public Set<String> getPropertyNames() {

        try (InputStream in = new FileInputStream(readPath())) {

            Properties properties = new Properties();
            properties.load(in);

            return properties.stringPropertyNames();

        } catch (IOException e) {
            e.printStackTrace();
        }

        return Collections.emptySet();
    }

    @Override
    public int getOrdinal() {
        return ORDINAL;
    }

    @Override
    public String getValue(String s) {

        // Guard: the path property itself must come from another source,
        // otherwise readPath() would query this source again recursively.
        if (FILE_CONFIG_PROPERTY.equals(s)) {
            return null;
        }

        try (InputStream in = new FileInputStream(readPath())) {

            Properties properties = new Properties();
            properties.load(in);

            return properties.getProperty(s);

        } catch (IOException e) {
            e.printStackTrace();
        }

        return null;
    }

    @Override
    public String getName() {
        return CONFIG_SOURCE_NAME;
    }

    private String readPath() {

        if (Objects.nonNull(fileConfig)) {
            return fileConfig;
        }

        final Config cfg = ConfigProvider.getConfig();
        return fileConfig = cfg.getValue(FILE_CONFIG_PROPERTY, String.class);
    }
}
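The file parsing that FileSystemConfigSource relies on is plain java.util.Properties. A standalone sketch that writes and reads a temporary file the same way (the path is generated by the JVM here, not the /etc/... path used in the real configuration):

```java
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class PropertiesFileDemo {

    // Load key=value pairs exactly as the ConfigSource does.
    static Properties load(String path) throws IOException {
        try (InputStream in = new FileInputStream(path)) {
            Properties properties = new Properties();
            properties.load(in);
            return properties;
        }
    }

    // Write a small properties file to a temporary location for the demo.
    static String writeTempConfig() throws IOException {
        Path file = Files.createTempFile("mp-config", ".properties");
        try (Writer w = new FileWriter(file.toFile())) {
            w.write("bd.subject.path=http://localhost:8080/\n");
        }
        return file.toString();
    }
}
```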

Furthermore, we need to create a property in META-INF/microprofile-config.properties holding the path to the external file, and set the config_ordinal property in META-INF/microprofile-config.properties to a value greater than the ordinal of FileSystemConfigSource, because FileSystemConfigSource reads the path property from META-INF/microprofile-config.properties. Below is an example.

config_ordinal = 400
net.rhuanrocha.mp-speaker.config.file.path=/etc/mp-speaker/config/configuration.properties

Note: my suggestion is to keep only two entries in META-INF/microprofile-config.properties: the path to the external file and the config_ordinal property.

If you want to see more of this example, access the repository on GitHub: https://github.com/rhuan080/microprofile-example

 
Understanding Eclipse Microprofile Config

When we develop an application, we need to define some configurations for it, such as configurations about the environment, integrations, and so on. For a long time, many developers put these configurations inside the application. But this is a problem: when we need to change the configuration, we have to rebuild the application, and the same package may work in one environment but fail in another, because the package is prepared for a specific environment. This is a bad practice in general, and when we work with the cloud and microservices the problem gets worse.

Eclipse MicroProfile Config is a solution that permits us to externalize configuration from the application (more specifically, the microservice). It works with ConfigSources, which provide configuration properties and are consulted in an order defined by an ordinal. The solution ships with three default ConfigSources, and we can create custom ConfigSources to read configuration properties from any source. The default ConfigSources are:

  1. System.getProperties() (ordinal=400)
  2. System.getenv() (ordinal=300)
  3. All META-INF/microprofile-config.properties files on the ClassPath (default ordinal=100, separately configurable via a config_ordinal property inside each file)
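Resolution across these sources can be sketched as "the highest-ordinal source that defines the key wins" (map-backed stand-ins for the real ConfigSource implementations, not the actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class OrdinalLookup {

    // Simplified stand-in for a ConfigSource: an ordinal plus its properties.
    static class Source {
        final int ordinal;
        final Map<String, String> props;

        Source(int ordinal, Map<String, String> props) {
            this.ordinal = ordinal;
            this.props = props;
        }
    }

    // Query all sources and keep the value from the highest-ordinal one that has the key.
    static Optional<String> getValue(List<Source> sources, String key) {
        return sources.stream()
                .filter(s -> s.props.containsKey(key))
                .max((a, b) -> Integer.compare(a.ordinal, b.ordinal))
                .map(s -> s.props.get(key));
    }
}
```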

With this solution you can read configuration from an external source using CDI injection. Below is an example.

@Inject
@ConfigProperty(name ="bd.subject.path",defaultValue = "http://172.18.0.5:8080/")
private String bdPath;

In the code above we get the path to a database via CDI injection; if the property is not found by any ConfigSource, the default value (defaultValue = "http://172.18.0.5:8080/") is used.

If you want to see more of this example, access the repository on GitHub: https://github.com/rhuan080/microprofile-example

 

Creating New Project Using Microprofile 2.0

In this post I'll show you how to create a new MicroProfile 2.0 (based on Java EE 8) project using an archetype I created. The archetype is on Maven Central; it creates a MicroProfile project using Thorntail with the following classes and dependencies.

Dependencies:

  • Microprofile 2.0
  • Thorntail 2.2.0

Classes:

  • MyEndpoint: An example of an endpoint.
  • ApplicationHealthCheck: An example of a health check.
  • ExceptionMapper: A class to create an exception mapper.
  • MicroProfileConfiguration: A class to configure MicroProfile.

 
To use this archetype, run the following Maven command:

mvn archetype:generate -DarchetypeGroupId=net.rhuanrocha -DarchetypeArtifactId=microprofile2.0-archetype -DarchetypeVersion=1.0.1 -DgroupId=<new project group id> -DartifactId=<new project artifact id>

After running this command, the project will be created on your current folder.