Creating an executable JAR with Maven

I have a very simple mavenized Java project that results in a JAR, both in its "slim" and "fat" flavors. Here I want to make the result executable and, along the way, get rid of the "slim" one. The expected result is a JAR file that I could run like this
java -jar myApp.jar
To do that, I have to provide a manifest in the JAR, pointing to the class that should be run. This is a task that is easily done with Maven. It is enough to add an "archive" element to the maven-assembly-plugin configuration in the build section of the project POM.
<archive>
    <manifest>
        <mainClass>simple.Main</mainClass>
    </manifest>
</archive>
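For context, here is a sketch of where the element could end up, inside the configuration of the maven-assembly-plugin already in the POM (same plugin version and descriptorRef used in the previous post; simple.Main is the main class of this sample project):
<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.3.0</version>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>simple.Main</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>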
Let's say that I have no use anymore for the "slim" JAR, and I don't want Maven to generate it anymore. I can get this effect by disabling the default execution of the relevant plugin, maven-jar-plugin, setting its phase to nothing.
<plugin>
	<artifactId>maven-jar-plugin</artifactId>
	<version>3.2.0</version>
	<executions>
		<execution>
			<id>default-jar</id>
			<phase />
		</execution>
	</executions>
</plugin>
Now, running the usual Maven command, I should get the expected behavior.
mvn clean package
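Assuming the artifact id and version of this sample project, the fat JAR produced by the assembly plugin should now be directly runnable, something like
java -jar target/simple-0.0.1-SNAPSHOT-jar-with-dependencies.jar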
Full code is on GitHub, here is the POM file.


Test on build with Maven

Let's add some testing to the tiny mavenized Java app seen in the previous couple of posts.

Actually, the app is so tiny that there's nothing to test. This is easily solved by adding a method to the Main class
public static String getLoggerClassName() {
    return LOG.getClass().getName();
}
I plan to use JUnit 5 (aka Jupiter) as the testing framework, asserting via Hamcrest, so I add both these dependencies to the POM.
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.6.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest</artifactId>
    <version>2.2</version>
    <scope>test</scope>
</dependency>
As I said, I plan to use JUnit and Hamcrest for testing only. I don't want to bother the user with them. That's the reason why I signaled to Maven that their scope is test.

Now I write a test for the newly added method
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import org.junit.jupiter.api.Test;

@Test
void getLoggerClassName() {
    String actual = Main.getLoggerClassName();
    assertThat(actual, is("ch.qos.logback.classic.Logger"));
}
I can run it from Eclipse in the usual way, everything works fine.

I build the app with Maven, clean package, and I get both my jars, slim and fat, as before. In both cases without the test dependencies, and that's good. Still, something is missing. Tests are not executed in this phase. Bad. I'd like to check that everything works fine when I build my jar.

The issue is in the Maven Surefire Plugin. By default, an ancient version is executed that does not work nicely with JUnit 5. So, I force Maven to use a newer one. Its name is a bit scary, having an M in its version number. M as in Milestone. However, it looks like we have to live with it. So I use it.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.0.0-M4</version>
</plugin>

Actually, there's no need to specify the group id in this case, since org.apache.maven.plugins is the default. Your choice.

That's it. Now the Maven package build also triggers the execution of the tests. Nice.

I have pushed the full code on GitHub. So, please, check there for details.


Jar with dependencies through Maven

I'm creating a Java desktop app, a very simple one, see the previous post if you don't believe me, using Maven as the build manager. The second step here is adding a dependency, and letting Maven build a "fat" jar including it, to ease its deployment.

Say that I just want to use Logback Classic in my application. I search for it on MvnRepository; I stay away from the alpha versions, so I go for 1.2.3.

Given that, modifying my pom.xml is mostly a matter of copy and paste. Just remember that any dependency belongs in the dependencies element.
<project ...>
    <!-- ... -->
    <dependencies>
     <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.2.3</version>
     </dependency>
    </dependencies>
</project>
Now I can use Logback, and I'm going to do it through SLF4J.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    private static final Logger LOG = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        // ...
        LOG.info("Hello");
    }
}
I run my application in Eclipse, and everything goes as expected: I get the required log as console output
14:32:12.986 [main] INFO simple.Main - Hello
Now I run a maven build, as done in the previous post, setting the goals to clean package, and generate a jar.

I run it from the shell in the usual way
java -cp simple-0.0.1-SNAPSHOT.jar simple.Main
And I get a nasty feedback
Exception in thread "main" java.lang.NoClassDefFoundError:
    org/slf4j/LoggerFactory at simple.Main.<clinit>(Main.java:7)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
    at java.base/jdk.internal.loader....
    ...
    ... 1 more
However, this is expected behavior. By default Maven generates a "slim" jar, avoiding including the dependencies in it. The user has to specify in the classpath where Java can find them.

When we want to create a "fat" jar, we should explicitly tell Maven to do it. We do that with the Maven Assembly Plugin.

Maven, please, when you build this project, use the maven-assembly-plugin, version 3.3.0, when packaging. And remember to also build a "fat" jar with all the dependencies!
<build>
 <plugins>
  <plugin>
   <artifactId>maven-assembly-plugin</artifactId>
   <version>3.3.0</version>
   <executions>
    <execution>
     <phase>
      package
     </phase>
     <goals>
      <goal>single</goal>
     </goals>
    </execution>
   </executions>
   <configuration>
    <descriptorRefs>
     <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
   </configuration>
  </plugin>
 </plugins>
</build>
Now, when I run the Maven build step, I get both the slim and the fat jar; here is how I run the latter
java -cp simple-0.0.1-SNAPSHOT-jar-with-dependencies.jar simple.Main
It works fine, so I push the changes on GitHub.


Simple Maven Eclipse Project

What I'm about to do here is creating a simple desktop app using Apache Maven as the build automation tool.

To follow me, ensure you have on your machine:
  1. Java JDK - I use version 11, currently the latest available LTS, but the app is so simple that virtually any JDK will do
  2. Eclipse IDE - Actually, on my machine I have Spring Tool Suite (STS) 4.6, but again, the job is so basic, that almost any Eclipse-based IDE would be alright

From Eclipse, I select the "Maven Project" wizard from "File | New | Project ...", there I go for creating a simple project, skipping archetype selection. Then I have to specify a group and artifact id for my stuff. I enter "dd" as group id and "simple" as artifact id.

The other fields have good defaults - so I keep them as suggested and just push the "Finish" button.

Since I used a very minimal approach, I'm not surprised by the warning in my project. I go straight to my pom.xml to complete it.

I want to set the character set used in my source code, using UTF-8 (not strictly a necessity, still a good idea), and, most importantly, I want to set both the compiler source and target to JDK 11.
I do that specifying these properties inside the project element:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>
Here you can see the full pom.xml.
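Just as a rough sketch, assuming the defaults proposed by the Eclipse wizard, it should look more or less like this:
<project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>dd</groupId>
    <artifactId>simple</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
</project>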

Theoretically, Eclipse should trigger a project rebuild as soon as you save a change in the Maven configuration file. Unfortunately, this often does not happen. However, you can force the job by pressing Alt-F5, or by right-clicking on the project name in the Project Explorer, then selecting Maven | Update Project...

I add a plain vanilla class, simple.Main, with a main function to perform the usual hello job and I have the first version of my application.
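Just to fix the idea, the class could be as minimal as this (the greeting text is a placeholder of mine, not necessarily what is in the repository):
package simple;

public class Main {
    public static void main(String[] args) {
        // the usual hello job
        System.out.println("Hello, Maven!");
    }
}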

Now I'd like to use Maven to generate my app jar. To do that, I have to specify which goals my Maven build target should achieve. I right-click on the project folder in the Eclipse Package Explorer, then in the Run As menu I select the "Maven build ..." item. As goals I specify "clean package", meaning that I want to clean everything up, so as to recompile all source files and then package them in a jar file.

Now I can run this target. In the console window I see the feedback from Maven until I get the final result: a jar, named in my case simple-0.0.1-SNAPSHOT.jar, placed in the project target directory.

To see if it works as I expect, I open a shell, go to the target directory, and there I execute the jar:
java -cp simple-0.0.1-SNAPSHOT.jar simple.Main
I'm happy to get back the expected hello message, so I put the project on GitHub.


At least one JAR was scanned for TLDs yet contained no TLDs

As soon as I added a jar to my Tomcat-based web application, I got this new info log message.
INFO [main] org.apache.jasper.servlet.TldScanner.scanJars
At least one JAR was scanned for TLDs yet contained no TLDs
It makes sense, since it was a JDBC driver, and I would have been very surprised if it contained a TLD.

Let's see how to get rid of it.

Firstly, I read the rest of the logging message
Enable debug logging for this logger for a complete list
of JARs that were scanned but no TLDs were found in them.
Skipping unneeded JARs during scanning can
improve startup time and JSP compilation time.
Actually, I already knew the name of the culprit JAR, since I had just added it, so I could have skipped directly to the second step. However, I followed the instructions and ...

Tomcat logging properties

I (temporarily) changed the logging.properties for my Tomcat instance, adding this line
org.apache.jasper.servlet.TldScanner.level = FINE

After that, my log file got more interesting, and I focused my attention on this line.
FINE [main] org.apache.jasper.servlet.TldScanner$TldScannerCallback.scan
No TLD files were found in [... x.jar] Consider adding the
JAR to the tomcat.util.scan.StandardJarScanFilter.jarsToSkip
property in CATALINA_BASE/conf/catalina.properties file.
Very clear indeed.

Tomcat Catalina properties

I easily found the tomcat.util.scan.StandardJarScanFilter.jarsToSkip property in the catalina.properties file from my Tomcat configuration folder. It is a longish comma-separated string containing a list of JARs that are known not to contain TLDs.

I just added the name of my JAR to it, and the case was closed.
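For instance, assuming the added JAR was named my-jdbc-driver.jar (a made-up name, just for illustration), the change boils down to appending it at the end of the existing list, which spans several lines thanks to the backslash continuations:
tomcat.util.scan.StandardJarScanFilter.jarsToSkip=\
    ... (the long list already there) ...,\
    my-jdbc-driver.jar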


Testing Spring Boot with JUnit5

I decided that it is time to move the tests in my Spring app from JUnit 4 to Jupiter. It is not too complicated, given that Spring already supports JUnit 5, even though the default is still the previous version. However, it requires being careful in a couple of steps.

Firstly, I have changed the POM in this way:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <!-- get rid of JUnit 4 -->
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- using JUnit 5 instead -->
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <!-- using the version set by Spring -->
    <scope>test</scope>
</dependency>
I have excluded JUnit 4 from the spring-boot-starter-test dependency, and then I have inserted a new dependency for Jupiter, both of them obviously in the test scope. Notice that I didn't set the version for JUnit 5, accepting instead the one proposed by Spring. Just to stay on the safe side.

The changes in "normal" testing are quite simple, usually limited in using the right imports for the new library. Something a bit more engaging is to be done for Spring Application test. What I had for JUnit 4 was something like this
@RunWith(SpringRunner.class)
@SpringBootTest
public class MyApplicationTests {
    // ...
In Jupiter, however, the annotation RunWith has been replaced with the more powerful ExtendWith, so the code has to be changed to
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class MyApplicationTests {
    // ...
Not too bad, is it?


Another interesting MOOC for Java developer

I've just completed the EdX MOOC Data Structures and Software Design by the University of Pennsylvania. I would suggest it to someone with high-beginner or lower-intermediate experience as a Java software developer. I think that the lessons would sound reasonably interesting even if your level is higher, however you could get a bit bored by the exercises - kind of too simple. This is why I didn't push anything to my GitHub account. To have some fun, I didn't use any IDE, just solved the problems the old way, using a dumb editor, compiling and running them from the command line :D

Homework 11 is, in my opinion, the most interesting one in the course. We have to refactor a poorly written, albeit working, piece of code to make it fast enough to match the requirements.


A nice Java testing MOOC

I've just finished the first week of the MOOC "Automated Software Testing: Practical Skills for Java Developers" provided by TU Delft on EdX. It looks to me like they did a good job. Maybe, if you are a native English speaker, the speakers' accent could sometimes sound a bit weird. However, as a continental European, I kind of enjoy it (and I know that my Italian accent sounds even funnier).

The course is focused on Java, JUnit 5, using IntelliJ as IDE.

It looks like a brand new production, so be ready to stumble into typos and minor mistakes here and there. The most annoying of them is the one in Question 11 of the quizzes.

We have to write tests to detect where the bug is in a function named mirrorEnds() that sort of detects how palindromic a string is. Unfortunately, a wrong solution has to be selected to get the point for it!

If you are taking the course, I suggest you have a look at the discussion forum to get help.

And, if you want to compare your code with mine, please find my forked repository on GitHub.


Autowired, Resource, Inject

In chapter three, Introducing IoC and DI in Spring, of Pro Spring 5, I have read the example about Using Setter Injection in the annotation flavor, where the Autowired annotation is used on a setter of the "renderer" service to inject the "provider" bean into its parameter.
A note hints that other annotations, Resource and Inject, could be used, but no example is provided. I thought it was better to practice a bit on them too.
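To follow the examples below it should be enough to keep in mind the shape of the interface and of its default implementation; here is a sketch of them from memory (the actual class names in the book may differ):
public interface MessageProvider {
    String getMessage();
}

@Component("provider")
class DefaultMessageProvider implements MessageProvider {

    @Override
    public String getMessage() {
        return "Hello from the default Message Provider!";
    }
}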

Autowired

The provided example is actually what we usually see in Spring code:
@Override
@Autowired
public void setMessageProvider(MessageProvider provider) {
    this.messageProvider = provider;
}
We should only be aware that Autowired is pure Spring, as its fully qualified name documents.

Inject

We could get the same result using the JSR standard annotation Inject:
@Override
@Inject
public void setMessageProvider(MessageProvider provider) {
    this.messageProvider = provider;
}
I had to change the provided build.gradle file for the chapter three setter-injection project, since it didn't have the compile dependency for injection, named misc.inject and defined in the main build.gradle file as
inject         : "javax.inject:javax.inject:1"
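If I read the book's Gradle conventions correctly, referring to it from the setter-injection subproject should then be a one-liner in its dependencies block - a sketch, not the book's exact file:
dependencies {
    compile misc.inject
}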
Autowired and Qualifier

Say that we have more than one Component implementing the MessageProvider interface. The default one is named "provider", and so it is chosen by Spring in the above examples. But what if I want to inject a different one, for instance "pro2":
@Component("pro2")
class MessageProvider2 implements MessageProvider {

    @Override
    public String getMessage() {
        return "Hello from Message Provider 2!";
    }
}
The commonly used Spring approach is the Qualifier annotation.
@Override
@Autowired
@Qualifier("pro2")
public void setMessageProvider(MessageProvider provider) {
    this.messageProvider = provider;
}
Both Autowired and Qualifier are Spring specific annotations.

Resource

A JSR standard alternative to the couple Autowired + Qualifier is given by the Resource annotation.
@Override
@Resource(name = "pro3")
public void setMessageProvider(MessageProvider provider) {
    this.messageProvider = provider;
}
Here above, the Component named "pro3" is going to be injected in provider.

Inject and Named

Another JSR standard alternative to Autowired + Qualifier is provided by the couple Inject + Named.
@Override
@Inject
@Named("pro4")
public void setMessageProvider(MessageProvider provider) {
    this.messageProvider = provider;
}
I have pushed to GitHub my changes. You could see there the build.gradle file, my new Components, and the alternative setters.


HackerRank Array Manipulation

We have an array of integers of a given size, which could be in the order of tens of millions. It is initialized with all zero elements.
We change it a few times, up to tens of thousands, adding a positive number to a given subinterval each time.
At the end, we want to know which is the highest value stored in the array.

This is a Data Structures HackerRank problem. Here below I show you a naive solution, and a smarter one, both of them using Python as implementation language.

An example

First thing, I wrote a test following the provided example. As a welcome side effect, it helps me clarify the problem to myself.
Given an array sized 5, let's modify it three times, adding 100 to the first and second elements, then 100 to the elements from the second up to the fifth, then again 100 to the third and fourth.

The array should go through these states:
  0   0   0   0   0
100 100   0   0   0
100 200 100 100 100
100 200 200 200 100
At the end, 200 should be the highest value in it.

My test code is:
def test_provided_naive_0(self):
    manipulator = NaiveManipulator(5)
    manipulator.set(1, 2, 100)
    manipulator.set(2, 5, 100)
    manipulator.set(3, 4, 100)
    self.assertEqual(200, manipulator.solution())
As I'm sure you have guessed, I have implemented my naive solution as a class, NaiveManipulator, that is initialized passing the size of the underlying array, and that has a couple of methods, set() to perform a transformation, and solution() to get the requested value at the end.

Let's see its code.
class NaiveManipulator:
    def __init__(self, sz):
        self.data = [0] * sz  # 1
    
    def set(self, first, last, value):
        for i in range(first-1, last):  # 2
            self.data[i] += value  # 3

    def solution(self):  # 4
        return max(self.data)
1. The array, initialized with the passed size and with all zero elements, is kept in the member named "data".
2. The indices are given as 1-based, so I convert them in 0-based before using them.
3. Each element in the specified interval is increased by the passed value.
4. Just a call to the built-in function max()

This implementation is really naive. It works fine, but only for limited input data.

A more challenging example

What happens if I have thousands of transformations on a moderately large array, where the subinterval sizes are in the order of the array size?

Let's write a test to see it.
def test_naive(self):
    manipulator = NaiveManipulator(1_000)
    for _ in range(2_000):
        manipulator.set(10, 800, 1)
    self.assertEqual(2_000, manipulator.solution())
It works fine. However, we start seeing how it is getting time consuming. The fact is that in test_naive() we have a for-loop, and inside it we call the manipulator's set(), where there is another for-loop. This algorithm has a O(N*M) time complexity, where N is the number of transformations and M the (average) size of the subintervals. It is enough to have both N and M in the order of thousands to get puny performance out of this algorithm.

Be lazy to be faster

Do we really need to increase all the values in the subinterval each time? Why don't we just mark where all the increases would start and end, and then perform the increase just once? That could save us a lot of time.

I have refactored my NaiveManipulator into a smarter ArrayManipulator, keeping the same interface, so as to nullify the impact on the user code.
import itertools

class ArrayManipulator:
    def __init__(self, sz):
        self.data = [0] * (sz + 2)  # 1

    def set(self, first, last, value):  # 2
        self.data[first] += value
        self.data[last+1] -= value

    def solution(self):  # 3
        return max(itertools.accumulate(self.data))
1. The data array is going to change its meaning. It is not storing the actual value for each element, but the difference between the previous element and the current one. This explains why I need to increase its size by two, since I add a couple of sentinel elements, one at the beginning, the other at the end.
2. Instead of changing all the elements in the subinterval, now I change just the first, to signal an increment sized "value", and the one after the last one, to signal that we are getting back to the original value.
3. A large part of the job is now done here. I call accumulate() from the standard itertools library to convert the values stored in the data array to the actual values, then I pass its result to max(), to select the biggest one.
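To double-check myself, I traced on paper the provided example from above (array sized 5, three set() calls) in this new representation:
after set(1, 2, 100)   [0, 100,   0, -100,    0,    0,    0]
after set(2, 5, 100)   [0, 100, 100, -100,    0,    0, -100]
after set(3, 4, 100)   [0, 100, 100,    0,    0, -100, -100]
accumulated            [0, 100, 200,  200,  200,  100,    0]
The maximum of the accumulated values is 200, as expected.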

On my machine, this algorithm is about 200 times faster than the previous one on test_naive. Enough to pass the HackerRank scrutiny.

Full Python code and test case pushed to GitHub.


How to close a Spring ApplicationContext

I've just finished reading chapter 2, Getting Started, of Pro Spring 5. A nice and smooth introduction to Spring, and to the motivation behind Dependency Injection.

Going through the provided examples, I had just a minor issue with the provided code. In a couple of cases, the created Spring context is stored in a plain ApplicationContext reference. This is normally considered a good way of writing code. We don't want to bring around a fat interface when a slimmer one would be enough. However, ApplicationContext has no close() method declared, leading to an annoying warning "Resource leak: 'ctx' is never closed".

Here is one place where you could see the issue, in class HelloWorldSpringDI:
ApplicationContext ctx =
    new ClassPathXmlApplicationContext("spring/app-context.xml");

MessageRenderer mr = ctx.getBean("renderer", MessageRenderer.class);
mr.render();
An obvious solution would be explicitly casting ctx to ClassPathXmlApplicationContext and calling close() on it. However, we could do better than that. The close() of the Spring application contexts comes from the Closeable interface. This means that we could refactor our code, wrapping the ApplicationContext usage in a try-with-resources block. We let Java take care of calling close() when the context goes out of scope.

The resulting code is something like that:
try (ClassPathXmlApplicationContext ctx =
        new ClassPathXmlApplicationContext("spring/app-context.xml")) {
    MessageRenderer mr = ctx.getBean("renderer", MessageRenderer.class);
    mr.render();
}
I pushed on GitHub my patched code for both HelloWorldSpringDI and HelloWorldSpringAnnotated, that is, the other class suffering from the same lack of application context closing.


The Gradle nature of Pro Spring 5

I've started reading Pro Spring 5: An In-Depth Guide to the Spring Framework and Its Tools. It looks fun and interesting. The first issue I had was getting its code from GitHub to work in my STS IDE. Here are a few notes on what I did to solve it, in case someone else stumbles in the same place.

First step was easy. After navigating through the File menu in this way ...
File
    Import...
        Git
            Project from Git
                GitHub
                    Clone URI
I entered the repository address: https://github.com/Apress/pro-spring-5.git

The second step should have been even easier: add Gradle Nature to the project. It's just a matter of right-clicking on the project, and from the menu selecting
Configure
    Add Gradle Nature
Unfortunately, there are a few version numbers in the main "build.gradle" file for the project that are not currently working, leading to a number of annoying exceptions like:
A problem occurred evaluating project ':chapter02:hello-world'.
Could not resolve all files for configuration ':chapter02:hello-world:compile'.
Could not resolve org.springframework:spring-context:5.0.4.BUILD-SNAPSHOT.
Required by:
    project :chapter02:hello-world
Could not resolve org.springframework:spring-context:5.0.4.BUILD-SNAPSHOT.
Could not get resource ...
Could not HEAD ...
Read timed out
org.gradle.tooling.BuildException: Could not run build action ...
I tried to minimize the impact of my changes on the original configuration file.

Basically, when I had an issue, I moved to a newer version, preferring a stable RELEASE to BUILD-SNAPSHOTs or release candidates.

I also had a problem for
io.projectreactor.ipc:reactor-netty:0.7.0.M1
That I moved to
io.projectreactor.ipc:reactor-netty:0.7.9.RELEASE
More complicated was the problem for chapter five. It looks like I have something wrong for AspectJ in my configuration. I didn't want to spend too much time on that now, assuming that it comes down to some detail in that specific area. So I opened the aspectj-aspects Gradle build file and commented out the dependency on gradle-aspectj and the plugin application for AspectJ itself.

Now my build.gradle for chapter five - aspectj-aspects has these lines in:
}

    dependencies {
//      classpath "nl.eveoh:gradle-aspectj:2.0"
    }
}
// apply plugin: 'aspectj'
Sure enough, it won't compile. Still, I finally added the Gradle nature to the project and I can go on reading the book. I'll take care of that issue when I reach chapter five.

I've forked the original Apress repository, and there I pushed my variants of the main build.gradle file, and the one for chapter 5, aspectj-aspects. See there for details.
