
Netty Websocket SSL

This is a small guide on how to create a Netty websocket client/server application communicating over SSL (wss). It showcases how to use JKS keystores/truststores, as they are the most common way of storing private keys and certificates in the Java world.

This guide will show:

  • How to create a private key along with a self-signed certificate using the Java keytool
  • How to create a truststore containing the self-signed certificate. This certificate will be used by the websocket client for 'trusting' the websocket server upon SSL connection
  • A simple Netty websocket server example, exposing an SSL connection, using the private key generated in the step above
  • A simple Netty websocket client example, establishing an SSL connection to the server, using the JKS truststore created in the step above

Creating Our Keystore/Truststore

Java keytool is a nice and easy to use utility, shipped with the JDK, for performing various cryptographic tasks (e.g. generating keys, generating and manipulating certificates, etc.). The official documentation is pretty easy to follow.

For our example, we need to generate a public/private key pair along with a self-signed certificate. This can be done with the command below, the output of which is a JKS keystore containing our private key and the self-signed certificate.

keytool -genkeypair -alias TestKey -keyalg RSA -keysize 2048 -keystore TestKeystore.jks -storetype JKS

The above JKS keystore will be used by our Netty websocket server to perform the SSL handshake.

Once we have the keystore, we can extract the self-signed certificate and import it into a JKS truststore. This truststore will be used by our websocket client to determine which certificates to trust. If the client does not trust the certificate presented by the server, the SSL handshake will not be successful.

The command to extract the certificate into a .cert file is:

keytool -exportcert -rfc -alias TestKey -keystore TestKeystore.jks -storepass changeit -storetype JKS -file TestCert.cert

And the command to import that exported certificate into a JKS truststore is:

keytool -importcert -file TestCert.cert -keystore TestTruststore.jks -storepass changeit -storetype JKS

Example Netty Application

Now that we have both the keystore (to be used by the Server) and the truststore (to be used by the client) we can create our demo Netty client/server applications.

Effectively, all we need to do is add an SslHandler to the ChannelPipeline. This SslHandler is built from an SslContext backed by the respective JKS keystore/truststore.

Server

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;

public class NettyWSServer {

    public void start() {

        final NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        final NioEventLoopGroup worker = new NioEventLoopGroup(1);
        final ServerBootstrap wsServer = new ServerBootstrap()
            .group(bossGroup, worker)
            .channel(NioServerSocketChannel.class)
            .handler(new LoggingHandler(LogLevel.INFO))
            .childHandler(new ChannelInitializer<Channel>() {
                @Override
                protected void initChannel(final Channel channel) throws Exception {
                    ChannelPipeline pipeline = channel.pipeline();

                    // TLS first: everything on this channel is encrypted using the keystore's private key and certificate
                    pipeline.addLast(createSSLContext().newHandler(channel.alloc()));

                    // HTTP codec and aggregator are required for the websocket handshake, followed by the websocket protocol handler
                    pipeline.addLast(new HttpServerCodec());
                    pipeline.addLast(new HttpObjectAggregator(64_000));
                    pipeline.addLast(new WebSocketServerProtocolHandler("/"));

                    pipeline.addLast(new SimpleChannelInboundHandler<TextWebSocketFrame>() {

                        @Override
                        protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame msg) throws Exception {
                            System.out.println("Message=" + msg.text());
                            ctx.writeAndFlush(new TextWebSocketFrame(msg.text() + " back"));
                        }
                    });
                }
            });

        System.out.println("WS Server started");
        wsServer.bind(10_000)
            .channel().closeFuture().syncUninterruptibly();
    }

    private SslContext createSSLContext() throws Exception {
        // Load the JKS keystore holding our private key and self-signed certificate
        KeyStore keystore = KeyStore.getInstance("JKS");
        keystore.load(NettyWSServer.class.getResourceAsStream("/TestKeystore.jks"), "changeit".toCharArray());

        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        keyManagerFactory.init(keystore, "changeit".toCharArray());

        // Netty's SslContext is built directly from the KeyManagerFactory
        return SslContextBuilder.forServer(keyManagerFactory).build();
    }

    public static void main(String[] args) {
        new NettyWSServer().start();
    }
}

Client

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpClientCodec;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13;
import io.netty.handler.codec.http.websocketx.WebSocketClientProtocolHandler;
import io.netty.handler.codec.http.websocketx.WebSocketVersion;
import io.netty.handler.ssl.SslContextBuilder;
import java.net.URI;
import java.security.KeyStore;
import java.util.Objects;
import javax.net.ssl.TrustManagerFactory;

public class NettyWSClient {

    public void start() {

        final EventLoopGroup bossLoop = new NioEventLoopGroup(1);
        Bootstrap client = new Bootstrap()
            .group(bossLoop)
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel channel) throws Exception {
                    ChannelPipeline pipeline = channel.pipeline();

                    // Build an SslContext that trusts only the certificates contained in our JKS truststore
                    KeyStore truststore = KeyStore.getInstance("JKS");
                    truststore.load(NettyWSClient.class.getResourceAsStream("/TestTruststore.jks"), "changeit".toCharArray());
                    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
                    trustManagerFactory.init(truststore);

                    pipeline.addLast(SslContextBuilder.forClient().trustManager(trustManagerFactory).build().newHandler(channel.alloc()));

                    pipeline.addLast(new HttpClientCodec(512, 512, 512));
                    pipeline.addLast(new HttpObjectAggregator(16_384));
                    final String url = "wss://localhost:10000";
                    final WebSocketClientHandshaker13 wsHandshaker = new WebSocketClientHandshaker13(new URI(url),
                        WebSocketVersion.V13, "", false, new DefaultHttpHeaders(false), 64_000);
                    pipeline.addLast(new WebSocketClientProtocolHandler(wsHandshaker));

                    pipeline.addLast(new SimpleChannelInboundHandler<TextWebSocketFrame>() {

                        @Override
                        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                            if (evt instanceof WebSocketClientProtocolHandler.ClientHandshakeStateEvent) {
                                WebSocketClientProtocolHandler.ClientHandshakeStateEvent handshakeStateEvent = (WebSocketClientProtocolHandler.ClientHandshakeStateEvent) evt;
                                switch (handshakeStateEvent) {
                                    case HANDSHAKE_COMPLETE:
                                        System.out.println("Handshake completed. Sending Hello World");
                                        ctx.writeAndFlush(new TextWebSocketFrame("Hello World"));
                                        break;
                                }
                            }
                        }

                        @Override
                        protected void channelRead0(final ChannelHandlerContext ctx, TextWebSocketFrame msg) throws Exception {
                            System.out.println("Message=" + msg.text());
                        }
                    });
                }
            });
        client.connect("localhost", 10_000).channel().closeFuture().syncUninterruptibly();
    }

    public static void main(String[] args) {
        new NettyWSClient().start();
    }
}

Java 9 Process API

In a previous blog post I wrote about one of my favourite features of Java 9, the JShell. In this post, I will write about another feature I am excited about: the new Java 9 Process API. I will also present some code showing how powerful and intuitive it is.

The new API adds greater flexibility to spawning, identifying and managing processes. As an example, before Java 9 someone would need to do the following in order to retrieve the PID of a running process:
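
A minimal reconstruction of that pre-Java 9 idiom, parsing the PID out of the RuntimeMXBean name, could look like this (the class name is made up):

import java.lang.management.ManagementFactory;

public class PidBeforeJava9 {

    public static void main(String[] args) {
        // The runtime MXBean name is typically of the form "<pid>@<hostname>",
        // so the PID has to be parsed out of it manually
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        long pid = Long.parseLong(jvmName.split("@")[0]);
        System.out.println("PID=" + pid);
    }
}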

The above is not intuitive and feels like a hack. It seems that a Java process should at least be able to expose its own PID easily.

Moreover, quite a few times I have needed to spawn new child processes from inside a Java process and manage them. Doing so is very cumbersome: a reference to the child process has to be kept throughout the program's execution if the developer wishes to destroy that process later, not to mention that getting the PIDs of the child processes is also a pain.

Fortunately, Java 9 comes to fix those issues and provides a clean API for interacting with processes. More specifically, two new interfaces have been added to the JDK:

1. java.lang.ProcessHandle
2. java.lang.ProcessHandle.Info

The two new interfaces add quite a few methods. The first one provides methods for retrieving a process' PID, for listing all the processes running in the system, and for navigating relationships between processes. The second one mainly provides meta-information about a process.

As one would expect, most of the methods have native, platform-specific implementations. The OpenJDK's implementation of ProcessHandle can be found here, and the Unix-specific implementation can be seen here.

I have created a very simple program which makes use of most of the features of this new Process API. The program does the below:

  • Can retrieve the running process' PID
  • Can start a long running process
  • Can start a short running process, which terminates roughly five seconds after starting
  • Can list all child processes that were spawned by the parent one
  • Can kill all child processes that were spawned by the parent one
  • Attaches a callback when a child process exits. This is done using the onExit() method of the ProcessHandle

The sample class is provided below. For the entire example please see here:
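
A rough sketch of such a class (the class name and the use of the OS 'sleep' command are my assumptions here; the full example is in the linked repository) could look like this:

import java.util.concurrent.TimeUnit;

public class ProcessApiSketch {

    public static void main(String[] args) throws Exception {
        // The running process' own PID, no more RuntimeMXBean parsing
        System.out.println("My PID=" + ProcessHandle.current().pid());

        // Spawn a long running and a short running child process
        // ('sleep' is assumed to be available on the host OS)
        Process longRunning = new ProcessBuilder("sleep", "600").start();
        Process shortRunning = new ProcessBuilder("sleep", "5").start();

        // Attach a callback that fires when the short running child exits
        shortRunning.toHandle().onExit()
            .thenAccept(handle -> System.out.println("Child " + handle.pid() + " exited"));

        // List all children spawned by this process, along with some meta information
        ProcessHandle.current().children()
            .forEach(child -> System.out.println(
                "Child PID=" + child.pid() + ", command=" + child.info().command().orElse("n/a")));

        // Kill the long running child
        longRunning.toHandle().destroy();

        // Wait a bit so the onExit callback has a chance to run
        TimeUnit.SECONDS.sleep(10);
    }
}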

Log4j2 vs Log4j

Log4j2 is the evolution of not only Log4j but also Logback, as it takes Logback's features one step forward. The main selling point is the improved performance, both message throughput and latency, which is apparently a huge leap forward compared to Log4j and also Logback.

Other interesting Log4j2 features are:

  • Automatic reloading of logging configurations
  • Property Support: Log4j2 loads the system's properties and they can be evaluated even at the configuration level
  • Java 8 lambdas and lazy evaluation: it provides an API for wrapping a log message inside a lambda, which only gets evaluated if truly needed (see the sketch after this list)
  • Garbage free: An interesting architectural feature, as Log4j2 has no or very little (in case of web apps) garbage. You can read more about that here.
  • Async loggers using the LMAX Disruptor: the Disruptor is a very interesting technology, and it is always interesting to examine use cases where it is put under strain
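
To make the lambda point from the list above concrete, a minimal sketch of the Supplier-based API could look like this (the logger setup and the expensive call are made-up placeholders):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LazyLoggingExample {

    private static final Logger LOGGER = LogManager.getLogger(LazyLoggingExample.class);

    public static void main(String[] args) {
        // The lambda is only evaluated if DEBUG is enabled for this logger,
        // so the expensive computation is skipped entirely otherwise
        LOGGER.debug("Expensive value={}", () -> expensiveComputation());
    }

    private static String expensiveComputation() {
        // Hypothetical placeholder for a costly operation
        return "result";
    }
}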

I played around with Log4j2 and in general I was very happy with its API, its implementation (it actually separates the API from the implementation, even though that means the developer needs to add two Maven dependencies), its configuration simplicity and, finally, its performance.

Even though measuring a logger's performance with JMH is not advisable, I tried to compare its performance (using async and sync loggers) against the old Log4j. The performance (average time and throughput) was indeed better, and in the edge cases up to 15K ops/ms faster. Having said that, you should take that with a pinch of salt because, as mentioned earlier, JMH is not the right tool to measure and compare the performance of the two logging implementations.

For reference, the simple Java program used to perform the various tests can be found on GitHub.
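
A minimal sketch of what such a JMH benchmark could look like (simplified here; the actual program is in the linked repository) is:

import java.util.concurrent.TimeUnit;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class Log4JBenchmarking {

    private static final Logger LOGGER = LogManager.getLogger(Log4JBenchmarking.class);

    @Benchmark
    public void logMessage() {
        // The work being measured is just the cost of the logging call itself
        LOGGER.info("Benchmarking the logger");
    }
}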

Some indicative results, performing 3 runs for each logger can be seen below.

Log4j2 Async Logger:

#1 Benchmark                     Mode   Cnt   Score    Error   Units
   Log4JBenchmarking.logMessage  thrpt   20  84.875 ±  6.383  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.015 ±  0.001  ms/op
#2 Log4JBenchmarking.logMessage  thrpt   20  87.430 ±  9.362  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.001  ms/op
#3 Log4JBenchmarking.logMessage  thrpt   20  79.753 ± 13.381  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.013 ±  0.001  ms/op
-----------------------------------------------------------------
Log4j2 Logger:

#1 Log4JBenchmarking.logMessage  thrpt   20  75.881 ± 10.960  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.002  ms/op
#2 Log4JBenchmarking.logMessage  thrpt   20  79.698 ± 12.290  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.002  ms/op
#3 Log4JBenchmarking.logMessage  thrpt   20  87.428 ±  6.678  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.001  ms/op
-----------------------------------------------------------------
Log4j Logger:

#1 Log4JBenchmarking.logMessage  thrpt   20  72.490 ±  8.350  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.014 ±  0.002  ms/op
#2 Log4JBenchmarking.logMessage  thrpt   20  84.169 ±  9.227  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.001  ms/op
#3 Log4JBenchmarking.logMessage  thrpt   20  72.599 ± 10.801  ops/ms
   Log4JBenchmarking.logMessage  avgt    20   0.012 ±  0.001  ms/op

Hibernate Tools - JPA Entity generation

Recently I was reviewing and trying some examples using the Hibernate Tools. More specifically, I was trying their latest version (5.0.0.CR1) in order to generate some JPA entity POJOs out of a database schema.

Hibernate Tools can either be used programmatically, through its Java API, or via its pre-defined ANT tasks. The examples below demonstrate the programmatic way and a Mavenized way, invoking ANT from within Maven.

I used an in-memory HSQL database with two very simple tables: a Users table with an ID and a name, and an Address table with an ID, some fields and a foreign key to the Users table, mimicking a many-to-one relationship.

The code that starts up the HSQL server and creates the tables can be found in GitHub.

As mentioned above, the Hibernate Tools can be invoked programmatically. Initially I found it a bit tricky, as I hadn't realized I needed to invoke the JDBC configuration step before invoking the POJO generation step. Probably this is needed in order for the tool to read the Hibernate configuration file and identify the database and its schema. The configuration that is needed is actually rather trivial:

  • Set the destination folder
  • Point the tool to the Hibernate configuration file, in order to pick up the database details
  • Invoke the JDBCConfigurationTask in order to identify the database schema
  • Invoke the Hbm2JavaGenerationTask in order to generate the JPA entities out of the above database schema

Sample code that does the above is shown below:
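
A rough sketch of that programmatic approach, assuming Hibernate Tools' JDBCMetaDataConfiguration and POJOExporter classes are used (the actual code is in the linked repository), could look like this:

import java.io.File;
import org.hibernate.cfg.JDBCMetaDataConfiguration;
import org.hibernate.tool.hbm2x.POJOExporter;

public class EntityGenerator {

    public static void main(String[] args) {
        // Read the database details from hibernate.cfg.xml and reverse engineer the schema
        JDBCMetaDataConfiguration cfg = new JDBCMetaDataConfiguration();
        cfg.configure("hibernate.cfg.xml");
        cfg.readFromJDBC();

        // Generate annotated JPA entities into the destination folder
        File destinationDir = new File("target/generated-sources");
        POJOExporter exporter = new POJOExporter(cfg, destinationDir);
        exporter.getProperties().setProperty("ejb3", "true"); // JPA annotations
        exporter.getProperties().setProperty("jdk5", "true"); // generics, enums etc.
        exporter.start();
    }
}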

The Java code that is generated for the two database tables is shown below:
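
Roughly, the Users entity comes out looking like the sketch below (field and column names are assumptions based on the schema described above); the Address entity is similar, with a @ManyToOne back to Users:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "USERS")
public class Users implements java.io.Serializable {

    private Integer id;
    private String name;

    @Id
    public Integer getId() {
        return this.id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getName() {
        return this.name;
    }

    public void setName(String name) {
        this.name = name;
    }
}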

The whole process can be made part of the Maven build. This is done using the provided ANT tasks. The relevant section of the pom.xml file is shown below. Additionally, using the Maven build helper plugin, the generated classes can automatically be added to the project's classpath, bulletproofing the application against future changes to the database schema (and automating the tedious task of re-generating the entities).
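
A sketch of that section, assuming the maven-antrun-plugin drives the hibernatetool ANT task (the working pom.xml is in the linked repository), could look like this:

[xml]
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <!-- The Hibernate Tools ANT task, with the tools jar on the plugin's classpath -->
                    <taskdef name="hibernatetool"
                             classname="org.hibernate.tool.ant.HibernateToolTask"
                             classpathref="maven.compile.classpath"/>
                    <hibernatetool destdir="${project.build.directory}/generated-sources">
                        <!-- Read the schema via the hibernate configuration file -->
                        <jdbcconfiguration configurationfile="src/main/resources/hibernate.cfg.xml"/>
                        <!-- Generate annotated JPA entities -->
                        <hbm2java jdk5="true" ejb3="true"/>
                    </hibernatetool>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>
[/xml]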

The complete example can be found in GitHub.

Java Enum as a class

Recently I was asked a fairly simple question: "Can you extend an enum?". My reaction to that was "Why would you want to do that?". But, on second thought, I realized that I didn't really know the answer. Of course I knew that in Java enums are treated as classes, but I had no clue what they look like inside the JVM, or whether they are made final or not. I could of course try to extend an enum in IntelliJ and see whether the IDE would give me an error or not.

However, the correct way is to inspect what the class looks like after it is reconstructed from its bytecode. This can be done using the javap utility, which comes along with the JDK. For example, imagine we have the following enum:
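
A minimal Weekdays enum (the constants are illustrative) could be:

public enum Weekdays {
    MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY
}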

Using the javap utility we can disassemble the .class file, which will not give us back the source above, but rather the class as the JVM sees it.

[bash] javap Weekdays.class [/bash]

The class that the JVM knows is:
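
For an enum like the above, the javap output looks roughly as follows:

Compiled from "Weekdays.java"
public final class Weekdays extends java.lang.Enum<Weekdays> {
  public static final Weekdays MONDAY;
  public static final Weekdays TUESDAY;
  public static final Weekdays WEDNESDAY;
  public static final Weekdays THURSDAY;
  public static final Weekdays FRIDAY;
  public static final Weekdays SATURDAY;
  public static final Weekdays SUNDAY;
  public static Weekdays[] values();
  public static Weekdays valueOf(java.lang.String);
}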

Finally, we have our answer. Enums are indeed represented as classes inside the JVM, and those classes are final, hence we cannot extend them.

Deadlock

This article will present a deadlock and some tools to examine and identify it.

A deadlock situation happens when two or more threads are waiting to acquire the object monitor of one or more objects that are already locked by one of the competing threads. Hence, the threads will wait forever if there is no detection or prevention strategy.

The following little code snippet simulates the occurrence of a deadlock, between two competing threads.
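
A sketch of that scenario (the thread and object names mirror the description below; the rest is illustrative, with the full snippet in the linked repository) could look like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Deadlocked {

    private final Object left = new Object();
    private final Object right = new Object();

    public void start() {
        ExecutorService leftExecutor = Executors.newSingleThreadExecutor(r -> new Thread(r, "Left-1"));
        ExecutorService rightExecutor = Executors.newSingleThreadExecutor(r -> new Thread(r, "Right-1"));

        leftExecutor.execute(() -> {
            synchronized (left) {
                sleep(2);
                // 'Right-1' already holds the 'right' monitor, so this blocks forever
                synchronized (right) {
                    System.out.println("Left-1 acquired both monitors");
                }
            }
        });

        rightExecutor.execute(() -> {
            synchronized (right) {
                sleep(2);
                // 'Left-1' already holds the 'left' monitor, so this blocks forever
                synchronized (left) {
                    System.out.println("Right-1 acquired both monitors");
                }
            }
        });
    }

    private static void sleep(int seconds) {
        try {
            TimeUnit.SECONDS.sleep(seconds);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        new Deadlocked().start();
    }
}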

In the above situation, the thread named 'Left-1' acquires the monitor of the object named 'left'. Then it sleeps for a couple of seconds and tries to acquire the monitor of the object named 'right', but the 'Right-1' thread has already done so (and is itself waiting for 'left'). The two threads have no back-out logic, hence the program execution will freeze forever.

Detecting a deadlock

Although in the above example the program is trivial and we can immediately see where and why the deadlock happens, in a real-world application that might be a bit trickier. The easiest way is to get a thread dump and analyze it.

  • Using an IDE

In case you are running the application locally from your IDE, chances are that your IDE already has the ability to do so. I mainly use IntelliJ; you can find that functionality in the 'Run' window as shown below.

[Screenshot: IntelliJ's 'Dump Threads' button in the Run window]

That will dump to your standard output all the threads, along with their stacks and the state they are in.

[bash highlight="4,5,14,15"]
"Right-1" #13 prio=5 os_prio=31 tid=0x00007f8444219800 nid=0x5503 waiting for monitor entry [0x000070000134f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at com.nikoskatsanos.deadlock.Deadlocked.lambda$start$1(Deadlocked.java:44)
    - waiting to lock <0x00000007970c0328> (a java.lang.Object)
    - locked <0x00000007970c05c0> (a java.lang.Object)
    at com.nikoskatsanos.deadlock.Deadlocked$$Lambda$2/1241276575.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

"Left-1" #12 daemon prio=5 os_prio=31 tid=0x00007f8443944800 nid=0x530f waiting for monitor entry [0x000070000124c000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at com.nikoskatsanos.deadlock.Deadlocked.lambda$start$0(Deadlocked.java:33)
    - waiting to lock <0x00000007970c05c0> (a java.lang.Object)
    - locked <0x00000007970c0328> (a java.lang.Object)
    at com.nikoskatsanos.deadlock.Deadlocked$$Lambda$1/1022308509.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[/bash]

The above is part of the thread dump created by IntelliJ, which includes the stacks of our two deadlocked threads. Both threads are in the BLOCKED state, and both of them are waiting to lock an object. If we look closer, the object that each thread is trying to lock is the one already locked by the other thread. This indication, together with a look at our source code (which confirms that those locks will never be released), is enough to reach our conclusion.

  • Using a tool

Another way to analyze and detect a deadlock is to use a more sophisticated tool. There are plenty out there, some of them commercial and some of them shipped with the JDK. Three of the most popular are JConsole, JVisualVM and Java Mission Control.

Those tools are very easy to use and all of them are quite similar; JConsole is probably the simplest. Using JConsole requires launching it and connecting to the process running the application you want to analyze. Once connected, the user can find a tab named 'Threads'. That screen gives the user everything he/she needs to examine the existing threads. The information is actually the same as the one produced by IntelliJ above, and we will see the reason further below. Most importantly, there is a 'Detect Deadlock' button at the bottom, which makes it extremely easy to find out whether a deadlock is present in the application. It will look like the screenshot below, which indicates that the two threads on the left-hand side are in a deadlock.

[Screenshot: JConsole's Threads tab after pressing 'Detect Deadlock', showing the two deadlocked threads]

  • Using jstack

Finally, in many cases the application might be running on a server and the only way to interact with it is a shell. In such cases the user needs to use the command line utilities provided by the JDK itself, more specifically jstack. jstack is what is actually used under the covers by the two approaches above.

In order to do that, the user needs to find the process' PID. That can be done either by using OS-level commands or simply by using the jps command, which also comes with the JDK. Once the user has the PID, he/she can invoke the jstack command to get output similar to that of the tools above.

[bash] jstack -l ${PID} [/bash]

The full source code for the example can be found in GitHub.

A ThreadFactory

After Java 1.5, writing multithreaded code became much easier compared to prior versions. Lots of logic was encapsulated behind classes baked into the JDK. Additionally, the way developers create their threads changed radically.

Making use of Executors and ExecutorServices took away the boilerplate code that was needed in order to create and manage the lifecycle of threads.

But in order to make monitoring and debugging easier, threads should have descriptive names. Most of the executors above make use of the DefaultThreadFactory, which gives threads a not-so-descriptive name (e.g. pool-1-thread-1).

Fortunately, the programmer can pass in his/her own implementation of a ThreadFactory.

A sample implementation, which gives the thread a descriptive name and a counter, can be the following:
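
A minimal sketch of such a ThreadFactory (the class name is made up; the actual implementation, along with its unit tests, is in the linked repository) could be:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {

    private final String namePrefix;
    private final AtomicInteger counter = new AtomicInteger(0);

    public NamedThreadFactory(final String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(final Runnable runnable) {
        // Produces names like "MarketData-0", "MarketData-1", ...
        return new Thread(runnable, this.namePrefix + "-" + this.counter.getAndIncrement());
    }
}

It can then be passed to any executor, e.g. Executors.newFixedThreadPool(2, new NamedThreadFactory("MarketData")).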

The class, along with some unit tests can be found on GitHub.

The 'Now' Service

Quite often in our applications we need to make use of the current time in milliseconds. Most of us follow the easy way and rely on Java's System.currentTimeMillis().

The problem arises when unit tests need to be written that rely on that functionality. Additionally, a few times the developer needs to assert a specific time in his/her unit tests. To avoid such situations, that logic can be hidden behind a simple interface whose sole purpose is to return a long value indicating the current number of milliseconds. The user can then have one or multiple implementations of it, according to his/her needs. The main advantage is that the implementation becomes irrelevant: the user can use mocks when testing, or a dependency injection framework to supply the preferred implementation, without having to touch the source code.

A sample implementation could be the following really simple functional interface.
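
A minimal sketch of such an interface (the name Now is illustrative) could be:

@FunctionalInterface
public interface Now {

    /**
     * @return the current time in milliseconds
     */
    long now();
}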

An implementation which uses the beloved System.currentTimeMillis() could be the following:
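
And a sketch of the corresponding implementation (again, the class name is illustrative) could be:

public class SystemNow implements Now {

    @Override
    public long now() {
        // Simply delegates to the system clock
        return System.currentTimeMillis();
    }
}

In unit tests, a fixed-value implementation or a mock of Now can be supplied instead.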

The source code can be found on GitHub, along with some unit tests.

The Apache Commons CLI, Command Line Parsing

Quite a few times when writing a Java application there is a need to pass command line arguments to the program. Usually, the application has to validate those arguments, for example making sure that the user passed in a numeric value or a boolean one, or that all the mandatory parameters have been provided. This process is usually very tedious.

Fortunately enough, there are a few libraries that can do that for us. My preferred one is Apache Commons CLI. It's a great library with an extremely straightforward API, and it provides multiple parser styles. To name a few:

  • GNU like options (e.g. --key ~/.ssh/key.pem)
  • POSIX like options (e.g. -xvfz)
  • Short and long options (e.g. -k ~/.ssh/key.pem or -key ~/.ssh/key.pem)
  • Java like property options (e.g. -Dkeystore=~/.ssh/key.pem)

The library can automatically parse the arguments into their correct types (e.g. Integer, String, Boolean), throwing appropriate exceptions when the user has passed a wrong argument type. Additionally, it validates those arguments and ensures that all the mandatory arguments have been passed to the application. The programmer only has to retrieve the values that the user passed, without having to worry about anything else.

The Maven dependency is:

[xml]
<dependency>
    <groupId>commons-cli</groupId>
    <artifactId>commons-cli</artifactId>
    <version>1.2</version>
</dependency>
[/xml]

As I mentioned above, its API is simple, self-explanatory and straightforward. At the very least there are two things a developer needs to create: the command line Options and a CommandLineParser.

The Options can be created as shown below:
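
A sketch using the 1.2 API (the option names are made up) could be:

import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class CliOptionsFactory {

    public static Options createOptions() {
        Options options = new Options();

        // A simple boolean flag (no argument)
        options.addOption("v", "verbose", false, "Enable verbose output");

        // A mandatory option that takes an argument
        Option port = new Option("p", "port", true, "Port the server listens on");
        port.setRequired(true);
        options.addOption(port);

        return options;
    }
}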

An example of creating the command line parser is below.
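
And a sketch of parsing and retrieving the values, reusing the hypothetical options factory from above, could be:

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class CliParserExample {

    public static void main(String[] args) {
        Options options = CliOptionsFactory.createOptions();
        CommandLineParser parser = new GnuParser();

        try {
            CommandLine cmd = parser.parse(options, args);

            // Typed and validated access to whatever the user passed in
            int port = Integer.parseInt(cmd.getOptionValue("port"));
            boolean verbose = cmd.hasOption("verbose");
            System.out.println("port=" + port + ", verbose=" + verbose);
        } catch (ParseException e) {
            // Missing mandatory options or malformed arguments end up here
            new HelpFormatter().printHelp("my-app", options);
        }
    }
}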

The entire example, along with some unit tests, can be found on Github.