Testing Spring Boot applications with TestContainers and Selenium WebDriver – Part Three

This is the third of a short series of posts showing how the TestContainers project can be leveraged to help test a Spring Boot application in a variety of ways.

In the first post, we concentrated on using the TestContainers database support and the second post
used TestContainers to run a Spring Boot test that ran all our dependencies using Docker containers.

This third post looks at the final layer – UI. We will cover using the WebDriver Container support to spin up the UI, run our UI test and capture the whole session in a video.

testcontainers-demo

We will continue to use the testcontainers-demo application as the System under test (SUT). The application routes notification messages from a JMS Queue to a RabbitMQ exchange, storing each notification in a Postgres database. This application also provides a web interface to see a list of all the messages that are routed by the application.

UI tests

The TestContainers project contains support for WebDriver Containers, which are pre-packaged Docker images based on the Selenium Docker images. A JUnit test case can spin up one of these containers, grab the RemoteWebDriver and start testing.

For our test example we are going to spin up a Chrome browser, navigate to the homepage and assert that the correct page is being shown.

In order for a WebDriver Container to connect to our UI, we need to ensure that the application is exposed on a local port. Spring Boot’s testing support helps us here with the @SpringBootTest annotation’s webEnvironment value. We are going to run this test on a random port. Note that the browser itself runs inside a Docker container, so it cannot reach the test application on localhost; instead the test uses the host.docker.internal hostname, which Docker Desktop resolves to the host machine.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
public class UITest {
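Spring Boot will then inject the chosen random port into the test via the @LocalServerPort annotation; this is the port field used later to build the application URL.

@LocalServerPort
private int port;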

We continue by adding a JUnit rule to load a WebDriver Container.

@Rule
public BrowserWebDriverContainer chrome = new BrowserWebDriverContainer()
		.withRecordingMode(VncRecordingMode.RECORD_FAILING, new File("./target/"))
		.withCapabilities(new ChromeOptions());

This configures the container with a default set of Chrome options and sets it to save recordings of any failed tests to our “target” directory.
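If you want a video of every session rather than just the failures, the rule can be configured with the RECORD_ALL recording mode instead:

@Rule
public BrowserWebDriverContainer chrome = new BrowserWebDriverContainer()
		.withRecordingMode(VncRecordingMode.RECORD_ALL, new File("./target/"))
		.withCapabilities(new ChromeOptions());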

Each test can now grab a RemoteWebDriver instance to drive the Chrome browser.

@Test
public void shouldSuccessfullyPassThisTestUsingTheRemoteDriver() throws InterruptedException {

	RemoteWebDriver driver = chrome.getWebDriver();

	System.out.println("Selenium remote URL is: " + chrome.getSeleniumAddress());
	System.out.println("VNC URL is: " + chrome.getVncAddress());

	String url = "http://host.docker.internal:" + port + "/";
	System.out.println("Spring Boot URL is: " + url);
	driver.get(url);

If this test were to fail, then a *.flv file would be created in the target directory for playback. This is a great feature which provides a valuable feedback mechanism for debugging broken tests. I’ve downloaded VLC to play back the captured format.

Because the @SpringBootTest annotation loads the entire application, our JMS and RabbitMQ auto-configuration is also enabled, so for this test we also need to ensure that our other containers are running.

The full JUnit test is below:

package com.robintegg.testcontainersdemo;

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.io.File;
import java.util.List;

import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase.Replace;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.SpringBootTest.WebEnvironment;
import org.springframework.boot.web.server.LocalServerPort;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.BrowserWebDriverContainer;
import org.testcontainers.containers.BrowserWebDriverContainer.VncRecordingMode;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.PostgreSQLContainer;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureTestDatabase(replace = Replace.NONE)
@ContextConfiguration(initializers = { UITest.Initializer.class }, classes = RabbitMqTestConfiguration.class)
public class UITest {

	@LocalServerPort
	private int port;

	// @formatter:off
	@Rule
	public BrowserWebDriverContainer chrome = new BrowserWebDriverContainer()
			.withRecordingMode(VncRecordingMode.RECORD_FAILING, new File("./target/"))
			.withCapabilities(new ChromeOptions());
	// @formatter:on

	@ClassRule
	public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");

	@ClassRule
	public static GenericContainer<?> activeMQContainer = new GenericContainer<>("rmohr/activemq:latest")
			.withExposedPorts(61616);

	@ClassRule
	public static GenericContainer<?> rabbitMQContainer = new GenericContainer<>("rabbitmq:management")
			.withExposedPorts(5672);

	@Test
	public void shouldSuccessfullyPassThisTestUsingTheRemoteDriver() throws InterruptedException {

		RemoteWebDriver driver = chrome.getWebDriver();

		System.out.println("Selenium remote URL is: " + chrome.getSeleniumAddress());
		System.out.println("VNC URL is: " + chrome.getVncAddress());

		String url = "http://host.docker.internal:" + port + "/";
		System.out.println("Spring Boot URL is: " + url);
		driver.get(url);

		List<WebElement> results = new WebDriverWait(driver, 15)
				.until(ExpectedConditions.visibilityOfAllElementsLocatedBy(By.tagName("h1")));

		assertThat(results.size(), is(1));
		assertThat(results.get(0).getText(), containsString("Notifications"));

	}

	@Test
	public void shouldFailThisTestUsingTheRemoteDriverAndGenerateAVideoRecording() throws InterruptedException {

		RemoteWebDriver driver = chrome.getWebDriver();

		System.out.println("Selenium remote URL is: " + chrome.getSeleniumAddress());
		System.out.println("VNC URL is: " + chrome.getVncAddress());

		String url = "http://host.docker.internal:" + port + "/";
		System.out.println("Spring Boot URL is: " + url);
		driver.get(url);

		// added for effect when viewing the video
		Thread.sleep(1000);

		List<WebElement> results = new WebDriverWait(driver, 15)
				.until(ExpectedConditions.visibilityOfAllElementsLocatedBy(By.tagName("h1")));

		assertThat(results.size(), is(2));

	}

	static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

		@Override
		public void initialize(ConfigurableApplicationContext configurableApplicationContext) {

			DemoApplicationTestPropertyValues.using(postgreSQLContainer, activeMQContainer, rabbitMQContainer)
					.applyTo(configurableApplicationContext.getEnvironment());

		}

	}

}

Why wait? Try TestContainers today 🙂

So we’ve seen in this mini-series a set of configurations and uses for the TestContainers project.

There are plenty of other uses for the TestContainers project which we will have a look at in the future, including getting TestContainers up and running with one of my favourite tools – Serenity BDD.

Testing Spring Boot applications with TestContainers – Part Two

This is the second of a short series of posts showing how the TestContainers project can be leveraged to help test a Spring Boot application in a variety of ways.

In the first post, we concentrated on using the TestContainers database support to ensure that our Flyway scripts and Spring Data JPA configuration were integrated correctly.

For this second part, we will move up a gear and look to use TestContainers to run a Spring Boot test that will run all our dependencies using Docker containers.

testcontainers-demo

We will continue to use the testcontainers-demo application as the System under test (SUT). The application routes notification messages from a JMS Queue to a RabbitMQ exchange, storing each notification in a Postgres database. This application also provides a web interface to see a list of all the messages that are routed by the application.

Application integration tests

In order to run a test that reads from the JMS queue and publishes to the RabbitMQ exchange, we will need JMS and RabbitMQ brokers. Using TestContainers, we can spin up these brokers in Docker and configure our tests to use the transient brokers.

TestContainers does not have any advanced support for these services so we will need to use the GenericContainer support. To do this we specify an image we want to run.

@ClassRule
public static GenericContainer<?> activeMQContainer = new GenericContainer<>("rmohr/activemq:latest")
		.withExposedPorts(61616);

@ClassRule
public static GenericContainer<?> rabbitMQContainer = new GenericContainer<>("rabbitmq:management")
		.withExposedPorts(5672);

In the above code, using the available fluent methods, we also declare which ports we want TestContainers to expose to our unit test.
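At runtime, TestContainers maps each exposed container port to a random free port on the host. As a quick sketch of how the mapping can be read back from a container instance (the variable names here are illustrative):

// the fixed broker port inside the container (61616) is mapped
// to a random free port on the host when the container starts
Integer mappedPort = activeMQContainer.getMappedPort(61616);
String brokerUrl = "tcp://" + activeMQContainer.getContainerIpAddress() + ":" + mappedPort;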

In the first post we configured the JUnit test to override our application’s Spring Boot properties with the TestContainers configuration values. Now that we have three containers, we can move those overrides into a single static method for convenience. Also note that for the ActiveMQ and RabbitMQ port properties we have to specify which port mapping we want to retrieve.

public class DemoApplicationTestPropertyValues {

	public static TestPropertyValues using(PostgreSQLContainer<?> postgreSQLContainer,
			GenericContainer<?> activeMQContainer, GenericContainer<?> rabbitMQContainer) {
		List<String> pairs = new ArrayList<>();

		// postgres
		pairs.add("spring.datasource.url=" + postgreSQLContainer.getJdbcUrl());
		pairs.add("spring.datasource.username=" + postgreSQLContainer.getUsername());
		pairs.add("spring.datasource.password=" + postgreSQLContainer.getPassword());
		// activemq
		pairs.add("spring.activemq.broker-url=tcp://localhost:" + activeMQContainer.getMappedPort(61616));
		// rabbitmq
		pairs.add("spring.rabbitmq.port=" + rabbitMQContainer.getMappedPort(5672));

		return TestPropertyValues.of(pairs);
	}
}

This code will now be called from the ApplicationContextInitializer in our test:

static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

	@Override
	public void initialize(ConfigurableApplicationContext configurableApplicationContext) {

		DemoApplicationTestPropertyValues.using(postgreSQLContainer, activeMQContainer, rabbitMQContainer)
				.applyTo(configurableApplicationContext.getEnvironment());

	}

}

Now that we have all three containers configured, we are ready to write a test that drives the application through actual services rather than in-memory versions.

The full JUnit test is below:

package com.robintegg.testcontainersdemo.routing;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.notNullValue;
import static org.junit.Assert.assertThat;

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase.Replace;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.PostgreSQLContainer;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.robintegg.testcontainersdemo.inbound.JMSNotification;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
@ContextConfiguration(initializers = { RoutingTest.Initializer.class }, classes = RabbitMqTestConfiguration.class)
public class RoutingTest {

	@ClassRule
	public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");

	@ClassRule
	public static GenericContainer<?> activeMQContainer = new GenericContainer<>("rmohr/activemq:latest")
			.withExposedPorts(61616);

	@ClassRule
	public static GenericContainer<?> rabbitMQContainer = new GenericContainer<>("rabbitmq:management")
			.withExposedPorts(5672);

	@Autowired
	private JmsTemplate jmsTemplate;

	@Autowired
	private RabbitTemplate rabbitTemplate;

	@Autowired
	private ObjectMapper objectMapper;

	@Autowired
	private NotificationRepository notificationRepository;

	@Test
	public void shouldStoreANotificationFromTheJmsQueueAndForwardToTheRabbitMQExchange() throws Exception {

		// given
		String message = "TestContainers are great";
		JMSNotification jmsNotification = new JMSNotification(message);

		// when
		sendNotificationToJmsQueue(jmsNotification);

		// then
		assertThatNotificationIsForwardedToRabbitMq(message);
		assertThatNotificationIsStoredInTheDatabase(message);

	}

	private void assertThatNotificationIsStoredInTheDatabase(String message) {
		Notification notification = notificationRepository.findAll().get(0);
		assertThat(notification.getMessage(), is(message));
		assertThat(notification.getSource(), is("JMS"));
		assertThat(notification.getId(), notNullValue());
	}

	private void assertThatNotificationIsForwardedToRabbitMq(String message) {
		Notification notification = readNotificationFromRabbitMqQueue();
		assertThat(notification.getMessage(), is(message));
		assertThat(notification.getSource(), is("JMS"));
		assertThat(notification.getId(), notNullValue());
	}

	private void sendNotificationToJmsQueue(JMSNotification jmsNotification) throws Exception {
		jmsTemplate.convertAndSend("jms.events", objectMapper.writeValueAsString(jmsNotification));
	}

	private Notification readNotificationFromRabbitMqQueue() {
		ParameterizedTypeReference<Notification> notificationTypeRef = new ParameterizedTypeReference<Notification>() {
		};

		Notification notification = rabbitTemplate.receiveAndConvert("testcontainers.test.queue", 1000,
				notificationTypeRef);
		return notification;
	}

	static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

		@Override
		public void initialize(ConfigurableApplicationContext configurableApplicationContext) {

			DemoApplicationTestPropertyValues.using(postgreSQLContainer, activeMQContainer, rabbitMQContainer)
					.applyTo(configurableApplicationContext.getEnvironment());

		}

	}

}

Now that we’ve got a template TestContainers JUnit test, you can start to explore further scenarios that might be more relevant to your own projects.

One extension might be to use databases loaded with production levels of data to test the performance of your application. This can be managed by attaching volumes to your database containers, as sketched below.
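A minimal sketch of the idea, assuming a host directory ./data/production that already holds a Postgres data directory (the path and bind mode here are illustrative, not part of the demo project):

@ClassRule
public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest")
		// mount a pre-populated data directory into the container
		// (requires the org.testcontainers.containers.BindMode import)
		.withFileSystemBind("./data/production", "/var/lib/postgresql/data", BindMode.READ_WRITE);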

Furthermore, externally managed HTTP services could be replaced with WireMock stubs running in containers; see the sketch below.
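For example, a GenericContainer could pull a WireMock image and expose the stub’s port to the test. The image name and the property key here are assumptions for illustration, not part of the demo application:

@ClassRule
public static GenericContainer<?> wireMockContainer = new GenericContainer<>("rodolpheche/wiremock:latest")
		.withExposedPorts(8080);

// then point the application at the stub, e.g. in the context initializer:
// pairs.add("external.service.url=http://localhost:" + wireMockContainer.getMappedPort(8080));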

TestContainers also contains some pretty useful support for containers running WebDriver. This gives you excellent support for further UI test automation and regression testing. This is something we will elaborate on in the next post.

Testing Spring Boot applications with TestContainers

This is the first of a short series of posts showing how the TestContainers project can be leveraged to help test a Spring Boot application in a variety of ways.

In this first part, we are going to concentrate on using the TestContainers database support to ensure that our Flyway scripts and Spring Data JPA configuration are integrated correctly.

testcontainers-demo

We will be using the testcontainers-demo application as the System under test (SUT). The application routes notification messages from a JMS Queue to a RabbitMQ exchange, storing each notification in a Postgres database. This application also provides a web interface to see a list of all the messages that are routed by the application.

What is TestContainers?

The TestContainers site perfectly describes its goals:

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.

And the areas of testing that it can help with:

Testcontainers make the following kinds of tests easier

  • Data access layer integration tests: use a containerized instance of a MySQL, PostgreSQL or Oracle database to test your data access layer code for complete compatibility, but without requiring complex setup on developers’ machines and safe in the knowledge that your tests will always start with a known DB state. Any other database type that can be containerized can also be used.
  • Application integration tests: for running your application in a short-lived test mode with dependencies, such as databases, message queues or web servers.
  • UI/Acceptance tests: use containerized web browsers, compatible with Selenium, for conducting automated UI tests. Each test can get a fresh instance of the browser, with no browser state, plugin variations or automated browser upgrades to worry about. And you get a video recording of each test session, or just each session where tests failed.

Data access layer integration tests

As we are only looking at the data layer in this post, we can make use of the Spring Boot auto-configured tests feature. Our application uses the Spring Data JPA framework to store and retrieve Notifications in a Postgres database. Testing this approach is supported by the auto-configured Data JPA tests. By default, the support will wire up an in-memory database and use the JPA “create-drop” functionality to apply a schema to the database for testing purposes.

Our application uses Flyway to manage its database schema. This is normally applied when the application starts up, as part of the Spring Boot support for Flyway scripts. By using only an in-memory database, we are neither validating the Flyway script nor checking that the Flyway script and the JPA-annotated entities are kept in sync.

The Flyway script for our project is shown below:

CREATE TABLE notification (
    id BIGINT GENERATED BY DEFAULT AS IDENTITY,
    message varchar(255) not null,
    source varchar(255) not null
);

create sequence notification_id_sequence start with 1 increment by 1;

We have a corresponding Repository interface and Entity in the code as shown below:

public interface NotificationRepository extends JpaRepository<Notification, Long> {}

/**
 * Notification representing an event in the ecosystem
 */
@Entity
@Table(name = "notification")
public class Notification {
    @Id
    @SequenceGenerator(name = "notification_id_generator", sequenceName = "notification_id_sequence", allocationSize = 1)
    @GeneratedValue(generator = "notification_id_generator")
    private Long id;
    private String message;
    private String source;

We want to use TestContainers to start up a Postgres database, allow Spring Boot to apply the Flyway script, then test our NotificationRepository is configured correctly and can talk to a running instance of Postgres using JUnit tests.

We start with a plain Auto-configured Data JPA unit test.

@RunWith(SpringRunner.class)
@DataJpaTest
public class NotificationRepositoryTest {
    @Autowired
    private NotificationRepository repository;

At this point the test will fail because an embedded database cannot be found on the classpath. The next step is to use the @AutoConfigureTestDatabase annotation to configure the JUnit test to not replace the application database configuration.

@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
public class NotificationRepositoryTest {
    @Autowired
    private NotificationRepository repository;

The tests will now pick up your application database configuration, which is likely pointing at your local development environment. So the next step is to introduce the PostgreSQLContainer from the TestContainers project.

@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
public class NotificationRepositoryTest {
    @ClassRule
    public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");
    @Autowired
    private NotificationRepository repository;

This configuration now loads up a Postgres container in Docker at the start of the test class. The container can also be configured per test if required, as shown below.
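Swapping @ClassRule for @Rule gives each test method its own fresh database, at the cost of slower tests (a sketch, not how the demo project is configured):

// one container per test method instead of one per test class
@Rule
public PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");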

However, Spring Boot has not been configured to point to this running database yet. This requires a little more configuration of the JUnit test.

Adding an ApplicationContextInitializer to the test will allow us to inject some new property values into the test context that point the application code at the running Docker environment.

@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
@ContextConfiguration(initializers = { NotificationRepositoryTest.Initializer.class })
public class NotificationRepositoryTest {
    @ClassRule
    public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");
    @Autowired
    private NotificationRepository repository;
...
    static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertyValues.of(
                "spring.datasource.url=" + postgreSQLContainer.getJdbcUrl(),
                "spring.datasource.username=" + postgreSQLContainer.getUsername(),
                "spring.datasource.password=" + postgreSQLContainer.getPassword())
                .applyTo(configurableApplicationContext.getEnvironment());
        }
    }
}

This code allows us to get a handle on the container configuration and override the Spring Boot properties. In doing so, the test will now apply the Flyway script to our containerised database, and the JPA code is connected to it.

The console will show all the logs from the TestContainers code starting up the database container before executing the tests.

...
2019-02-09 16:35:08.796  INFO 8016 --- [           main] o.f.c.internal.database.DatabaseFactory  : Database: jdbc:postgresql://localhost:32815/test (PostgreSQL 11.1)
...
2019-02-09 16:35:09.263  INFO 8016 --- [           main] o.f.core.internal.command.DbMigrate      : Successfully applied 1 migration to schema "public" (execution time 00:00.303s)
...
2019-02-09 16:35:13.806  INFO 8016 --- [           main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2019-02-09 16:35:15.394  INFO 8016 --- [           main] c.r.t.r.NotificationRepositoryTest       : Started NotificationRepositoryTest in 11.925 seconds (JVM running for 27.482)

The full JUnit test is below:

package com.robintegg.testcontainersdemo.routing;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.core.IsEqual.equalTo;
import static org.junit.Assert.assertThat;

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase.Replace;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.PostgreSQLContainer;

@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
@ContextConfiguration(initializers = { NotificationRepositoryTest.Initializer.class })
public class NotificationRepositoryTest {

	@ClassRule
	public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:latest");

	@Autowired
	private NotificationRepository repository;

	@Test
	public void shouldStoreEachNotification() {

		// given
		repository.save(new Notification("message1", "test"));
		repository.save(new Notification("message2", "test"));

		// when
		long count = repository.count();

		// then
		assertThat(count, is(2L));

	}

	@Test
	public void shouldStoreEachNotificationWithAUniqueIdentifier() {

		// given
		Notification n1 = repository.save(new Notification("message3", "test"));
		Notification n2 = repository.save(new Notification("message4", "test"));

		// when
		Notification persistedNotification1 = repository.getOne(n1.getId());
		Notification persistedNotification2 = repository.getOne(n2.getId());

		// then
		assertThat(persistedNotification1, equalTo(n1));
		assertThat(persistedNotification2, equalTo(n2));

	}

	static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

		@Override
		public void initialize(ConfigurableApplicationContext configurableApplicationContext) {

			TestPropertyValues
					.of("spring.datasource.url=" + postgreSQLContainer.getJdbcUrl(),
							"spring.datasource.username=" + postgreSQLContainer.getUsername(),
							"spring.datasource.password=" + postgreSQLContainer.getPassword())
					.applyTo(configurableApplicationContext.getEnvironment());

		}

	}

}

So at the end of this post we have successfully created a JUnit test that executes the data access layer of the application against a Postgres database running in a Docker container.

In the second part, we will move up a gear and look to use TestContainers to run a Spring Boot test that will run all our dependencies using Docker containers.

Documenting your database with SchemaSpy

Why?

I’m a big fan of auto-generating documentation to help visualise and understand complex artefacts such as codebases. In my new workplace, we also have a large, complex database and not much in the way of support for understanding it or its history.

This complexity affects productivity in a number of ways:

  • It takes longer than it should to parse and understand a database, especially through tools such as Postgres Admin and the command line
  • It takes longer to support other colleagues to understand and use the database
  • Poor understanding leads to poor coding and bugs

A good piece of documentation supports collaboration and learning.

How can SchemaSpy help?

SchemaSpy (http://schemaspy.org/) is a database documenting utility written in Java that analyses your database and generates an HTML report of its schema, including some very useful entity relationship diagrams.

There are a couple of ways to run SchemaSpy. Here we’ll look at running the utility using Docker. See the image page @ https://hub.docker.com/r/schemaspy/schemaspy/.

The image contains drivers for:

  • mysql
  • mariadb
  • postgresql
  • jtds

The image has 3 volumes:

  • /drivers: if you want to override the included drivers, or add another driver
  • /output: you need to host-mount this to get any output (must be writable by others (safest bet))
  • /config: if you want to add a schemaspy.properties file

Configuration properties and command line options are documented on the readthedocs site @ https://schemaspy.readthedocs.io/en/latest/

To run SchemaSpy using the image you can use the command below.

docker run -v "$PWD:/output" schemaspy/schemaspy:latest [options]

Example

So in our example, I’ll be connecting to a local database called world-db (see https://github.com/ghusta/docker-postgres-world-db) and providing a config file:

config/schemaspy.properties

schemaspy.t=pgsql 
schemaspy.host=host.docker.internal 
schemaspy.port=5432 
schemaspy.db=world-db 
schemaspy.u=world
schemaspy.p=world123
schemaspy.schemas=public 

Now that we have a configuration file, we can run SchemaSpy via Docker:

docker run -v "${PWD}\target:/output" -v "${PWD}\config:/config"  schemaspy/schemaspy:latest -configFile /config/schemaspy.properties  -noimplied -nopages -l

The docker command will output to the console whilst executing…

Once complete, the HTML documentation will be available in the output directory. The HTML can be explored to see all the table information and relationships.

Screenshot of the output

Further thoughts

Running from the command line gives you an excellent opportunity to integrate this into your CI pipelines so that it can be regularly updated.

One thing the documentation will certainly highlight is a lack of comments on your tables, views and functions. Yes, you can probably derive meaning from column names but often a simple comment is nicer 😉

Got suggestions for alternatives or power-user tips? Why not comment below?

Why does Spring Initializr set the parent pom relativePath to empty?

If, like me, you use the https://start.spring.io/ web service to create your new Spring Boot projects, then you may have noticed that the pom file defines an empty relativePath element with an accompanying comment.

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.0.5.RELEASE</version>
  <relativePath/> <!-- lookup parent from repository -->
</parent>

Now I’ve not really had much cause to question this setting. It’s certainly never given me any issues on my local development environment or any CI server, but I recently received criticism of its usage when submitting a technical code exercise for a new job. This prompted me to dig deeper, as I couldn’t explain the reasons for or against including it.

tl;dr – it’s an optimisation. In fact, it says so in the comment 🙂

As you can see, the pom file already has a comment, presumably because this is a slightly irregular usage.

Diving into the Maven documentation, the relativePath field is documented here:

http://maven.apache.org/ref/3.0/maven-model/maven.html#class_parent

The relative path of the parent pom.xml file within the check out. If not specified, it defaults to ../pom.xml. Maven looks for the parent POM first in this location on the filesystem, then the local repository, and lastly in the remote repo. relativePath allows you to select a different location, for example when your structure is flat, or deeper without an intermediate parent POM. However, the group ID, artifact ID and version are still required, and must match the file in the location given or it will revert to the repository for the POM. This feature is only for enhancing the development in a local checkout of that project. Set the value to an empty string in case you want to disable the feature and always resolve the parent POM from the repositories.

Great, so what does that mean? It means that if you leave it out, which may be the more common approach (similar to the Spring Boot guides), then Maven will first look in the parent directory, fail to find the pom, and then fall back to looking in your local repository and then the remote repository. Spring Initializr, by using the empty relativePath, forces Maven to go straight to your local/remote repositories, skipping a step that would obviously fail.

So, if nothing else, I’ve learnt a little bit about maven here and perhaps helped somebody else explain why things are what they are 🙂

Good Read: Web Form Design by Luke Wroblewski

TL;DR

  • Broad overview of all the considerations that constitute good form design
  • Not technical and easily digestible
  • Includes the why, what and when of best practices based on experience and research
  • 226 pages – 14 short, focussed chapters

Why read this book?

Given much of my current day to day work is presenting web forms for gathering data from users and putting that data in one system or another, I decided that it would be worth revisiting some of the web form design classics in order to generate some new inspiration around our current designs and usability. I have a copy of ‘Don’t make me think’ by Steve Krug and ‘Web Form Design’ comes highly recommended by other reviewers of that book.

The book is a collection of insights and best practices for Web form design. Chapters cover form structure and organisation as well as the individual form elements and interactions.

Even though it is almost 10 years old, the fundamentals are still sound and relevant to today’s design challenges such as mobile. In fact, as I read through the book it was interesting to me just how many of the book’s practices are evident in many of the current crop of website frameworks such as Bootstrap.

Inspirations

My original aim was to generate some inspiration and I feel that I’ve gained some from reading this book.

Consistency

I certainly don’t claim to be a designer but educating myself on the fundamentals has given me a toolkit of ideas and patterns that can be applied, much like software architecture principles.

In my current position, I am responsible for maintaining some projects that have been written and owned by a number of teams and developers over time. This has invariably led to a number of inconsistencies with the form designs. The book advocates consistency across a system to avoid overloading the user, so I’m hoping to use the book as a reference for applying a consistent approach to areas such as error handling, form fields, headings, text and actions. Incrementally updating those areas should improve the user experience.

Gradual engagement

In the penultimate chapter, the author writes about killing the sign-up form and moving to “gradual engagement”: a process whereby a user does not need to explicitly sign up for a username and password, but is silently “adopted” by the system they are using.

One of the systems that I work on is for Personal Loan applications. This uses a typical wizard approach of gathering a customer’s data over a number of steps in order to apply for a Personal Loan. In this wizard, we currently have a step exclusively for putting in an email, password and security question to create an account that is attached to the loan application. The account gives the user access to their details and application decision.

I had been exploring the possibility of a passwordless mechanism to remove the need to create an account purely for the application process, thus reducing the number of steps in the wizard and removing the burden on the user of remembering another username and password combination. The suggested email approach is probably a little too much burden in terms of having to wait for emails to sign in, but it helps in getting out of the user’s way.

The “adoption” approach could also remove the create account page altogether. In this scenario, a user account is created once the user has entered an email address as part of their contact details. In order to access the account later, the user would need to validate against some data that we have captured such as date of birth.


Polacode + PlantUML – Visual Studio Code extensions for coders

Polacode

The Polacode extension can generate screenshots of your code in Visual Studio Code.

It’s a great extension for getting example code into a well-formatted and nicely displayed image. Good for code snippets where you don’t see a need for any copying or pasting. Also useful for putting screenshots into chat windows like Skype or Slack.

PlantUML

I’m a big fan of using PlantUML to generate technical diagrams on the fly using a text based dsl. I’ve previously written about deploying a PlantUML editor in a docker container and having a solution in Visual Studio Code gives me a neat offline ability to edit files too.

PlantUML is one of a number of PlantUML extension wrappers. This one, by the author jebbs, seemed to be the most downloaded, which can be an indicator of a good extension. It certainly has a number of supported features, including image generation and document formatting.

PlantUML documents can be recognised and edited if they have a supporting file extension such as *.puml

Those documents can then be exported as diagrams of various formats.

Installation notes

  • Install GraphViz by going to https://graphviz.gitlab.io/, download the *.msi and install
  • To install PlantUML go to http://plantuml.com/, download the JAR file and put it in an easily accessible folder, such as C:\tools\plantuml\plantuml.1.2018.1.jar
  • Install the PlantUML extension through the Extension tab in Visual Studio Code and update your user settings to point at your latest PlantUML jar

Publish Maven Site to Netlify

Having written a couple of posts with the Hexo static site generator, I wanted to start building out my site with extras such as Pocket links and details about my side projects. Extending the site has proved harder than I thought it might, so I’m going to switch over to another platform for publishing. This is likely to be based on Spring Boot, which I had previously considered writing after trying JBake.

What I liked about Hexo originally was that I could easily publish to Netlify out of the box. It looked like I would only be able to publish the static output as part of a build step outside of Netlify. So I started to look around for an alternative and ended up on a site called StaticGen, which I think is produced by Netlify and lists all their known static site generators. It turns out there were plenty of Java-based generators. One of them, called Orchid, had a Deploy to Netlify badge, indicating that a Java project might be supported by Netlify, even if not officially documented or easily found.

The starter project that you can deploy to Netlify is available through the github repo JavaEden/OrchidStarter. The process of deploying to Netlify was simple and the deployment uses the gradle wrapper to download and execute the build step for Orchid.

My preference is to use Maven over Gradle so I investigated the possibility of using maven to generate the site and have that deployed on Netlify.

Starting with a standard maven project

mvn -B archetype:generate \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DgroupId=com.robintegg.blog \
  -DartifactId=netlify-upload

Influenced by the Gradle Wrapper, there is a Maven Wrapper available. The wrapper can be added using the Maven task below.

mvn -N io.takari:maven:wrapper

Windows users: be aware that the `./mvnw` command needs to retain its executable state when being deployed to GitHub so that it can be executed by Netlify. The Lennart Schedin blog provides details and explanations on how to do this. The commands for changing the permissions are summarised below:

> git ls-tree HEAD
100644 blob 55c0287d4ef21f15b97eb1f107451b88b479bffe	mvnw
> git update-index --chmod=+x mvnw
> git status
> git commit -m "Changing file permissions"
> git ls-tree HEAD
100755 blob 55c0287d4ef21f15b97eb1f107451b88b479bffe	mvnw

For supporting the Netlify deployment, you can add the optional netlify.toml which will override any deploy settings you might have set in the UI.

[build]
  base    = ""
  publish = "target/site"
  command = "./mvnw clean site"

Now, you can publish to Github and deploy the project through Netlify.

My finished GitHub repo is teggr/netlify-upload and it was automatically deployed to https://compassionate-easley-929b6f.netlify.com.

Deploying a PlantUML Spring Boot application in a docker container to sloppy.io

I’m a big fan of including PlantUML diagrams in documentation using Asciidoc and Spring REST Docs. Using PlantUML also saves time when visualising software designs, as the cycle time of editing and seeing the new diagram can be much shorter than using drag-and-drop tools like Visio.

PlantUML works by taking in textual notation for a sequence diagram like below:

@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response
Alice -> Bob: Another authentication Request
Alice <-- Bob: another authentication Response
@enduml

and is able to generate a UML diagram.
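Inside a Java application, the same text can be rendered with the PlantUML library’s SourceStringReader. A minimal sketch (the output file name is illustrative):

import java.io.FileOutputStream;
import java.io.OutputStream;

import net.sourceforge.plantuml.SourceStringReader;

public class DiagramExample {

	public static void main(String[] args) throws Exception {
		String source = "@startuml\n"
				+ "Alice -> Bob: Authentication Request\n"
				+ "Bob --> Alice: Authentication Response\n"
				+ "@enduml\n";

		// render the diagram to a PNG file
		try (OutputStream png = new FileOutputStream("sequence.png")) {
			new SourceStringReader(source).generateImage(png);
		}
	}
}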

The PlantUML site does have a link to an online demo tool where you can try out the various supported UML diagram types:

  • Sequence diagram
  • Usecase diagram
  • Class diagram
  • Activity diagram
  • Component diagram
  • State diagram
  • Object diagram
  • Deployment diagram
  • Timing diagram

The online demo tool has limited capabilities and I wanted to build my own version that would both cater for online editing and also storage of those diagrams so that I could use it as a notebook of software diagrams.

I started to build an Online Editor for myself last year. The application was initially deployed to Heroku with basic editing functionality. However, the PlantUML Java library has a dependency on the Graphviz library for all diagram types other than sequence diagrams. So, to add further diagram types to the Online Editor, it makes sense to package both the application and its dependency in a Docker container. The Docker container then also needs to be hosted somewhere. For the hosting, I’m going to trial sloppy.io, which has a nice, simple interface for deploying and running Docker containers.

tl;dr

  • Build a Docker container for Spring Boot application with PlantUML and Graphviz dependencies
  • Push to Docker Hub
  • Build and run on sloppy.io

Online Editor GitHub Project

To get a copy of the project you can clone the github repository.

git clone https://github.com/teggr/online-editor

The project is currently a simple Spring Boot web application, built using maven and has a dependency on PlantUML available through maven central.

pom.xml
<!-- plant uml -->
<dependency>
    <groupId>net.sourceforge.plantuml</groupId>
    <artifactId>plantuml</artifactId>
    <version>8059</version>
</dependency>

Run the Spring Boot application and open http://localhost:8383 to see the working application in your browser. To build, run `mvn install` from the command line.

Adding Docker

Assuming that you have Docker installed and available from the command line, you have all you need to build and run the container locally.

To get the application up and running, I’ve started with a small Dockerfile that builds on the OpenJDK image and installs both Graphviz and the locally built application. When run, the container will launch the Spring Boot application.

Dockerfile
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y graphviz
ENV GRAPHVIZ_DOT /usr/bin/dot
COPY target/online-editor-0.0.1-SNAPSHOT.jar /usr/share/online-editor-0.0.1-SNAPSHOT.jar
EXPOSE 8383
CMD ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/usr/share/online-editor-0.0.1-SNAPSHOT.jar"]

To build the image and run the container, we can use the following commands. The Online Editor should then again be available on http://localhost:8383

docker build -t online-editor .
docker run --publish 8383:8383 online-editor

Storing Image to Docker Hub

The next step for the Docker image is to host it somewhere that can be accessed by sloppy.io. They have built-in support for Docker Hub and Quay.io.
For this project I’m pushing to my own Docker Hub account and repository:

docker build -t teggr/online-editor:0.0.1 .
docker push teggr/online-editor:0.0.1

The docker image is available at https://hub.docker.com/r/teggr/online-editor/

Deploying

Now that we have a publicly hosted Docker image, we can run the application on sloppy.io. When I started this post there was a 14-day free trial and paid plans from £5 a month.
sloppy.io has some useful guides to deploying your first Docker container.

https://kb.sloppy.io/getting-started

Download the CLI and add the exe to PATH

c:> sloppy
usage: sloppy [--version] <command> [<args>] [--help]
Available commands are:
    change           Change the configuration of an application on the fly
    delete           Delete a project, a service or an application
    docker-login     Uploads docker credentials to sloppy.io
    docker-logout    Removes docker credentials from sloppy.io
    logs             Fetch the logs of a project, service or app
    restart          Restart an app
    rollback         Rollback an application
    scale            Scale the number of instances in an application
    show             Show settings of a project, a service or an application
    start            Start a new project on the sloppy service
    stats            Display metrics of a running app
    version          Prints the sloppy version

The next step is to add your API token as an environment variable. In Windows, I did this through the system environment variables GUI.
Once the CLI has been successfully configured, you can start the application remotely from the command line. You can also do this through the admin console.

sloppy start --var=domain:robintegg-online-editor.sloppy.zone sloppy-online-editor.json

This makes my application available at http://robintegg-online-editor.sloppy.zone

Summary

Getting a Spring Boot application into a Docker container is relatively simple now. I’ve taken a fairly simple approach for the moment in constructing my Docker image. There are many best practices to follow when creating Dockerfiles. The official OpenJDK image also documents some best practices.

Getting your application into a Docker image is only half the job done. The other half is wading through all the Docker hosting options. From do-it-yourself to fully managed, and from minute-by-minute to monthly billing, there are so many options. I’ve plumped for sloppy.io as it seemed to be a nicely thought-out offering to help me get my container up and running with minimal fuss. To that end I was not disappointed: I signed up for the free account, followed the get-started guide to understand my next steps, then deployed!

Sloppy.io gives a nice PaaS level of abstraction for deploying containers and doesn’t seem overpriced. I can deploy up to 50 containers, though I need more time to understand how I can run multiple containers within my allocated resources. I’m sure I will extend my trial period into a paid one. Well done, sloppy 🙂

As for the Online Editor, next steps are to add some storage features and start using TravisCI to build and deploy the application.

First look at Java support in Visual Studio Code

Microsoft recently released a Visual Studio Java extension pack and a Java Test Runner to the Visual Studio Code market place. The Visual Studio Java extension pack adds debugging support to the Red Hat language support for Java. The Java Test Runner adds support for executing JUnit tests.

I’ve recently started to use Visual Studio Code in my work environment for JavaScript development and have now had a chance to take a look at the extension pack and the supported features.
The hope is that Visual Studio Code could be ready to be a full replacement for Eclipse/Spring Tool Suite in my development toolset for Java and JavaScript applications.

tl;dr

  • Quick and easy installation (2-3 extensions)
  • Good support for autocompletion, code actions and compilation errors in maven projects
  • Out of the box support for maven projects without modules
  • Supports Spring Boot application launching and debugging
  • Older legacy or multi-module projects don’t seem to be completely supported and documentation could be more thorough
  • Lacking some workflow elements from Spring Tool Suite / Eclipse

Would I use Visual Studio Code for a small new project or demo: YES. Would I use Visual Studio Code for everyday development at work: NOT YET; perhaps on some smaller components, as more time is needed to determine support for those individual applications.

Assessing Java application development in Visual Studio code

  1. See what features are available
  2. See if features cover my common use cases
  3. Summary

There is a comprehensive tutorial available with azure integrations on the Visual Studio code web site https://code.visualstudio.com/docs/java/java-tutorial.

Installation Pre-requisites

  • JDK
  • VSCode

Installation

The Java Extension Pack should be available through the Visual Studio Code extensions tab in the editor.

The Java Extension Pack wraps two extensions together.

  • Language Support for Java by RedHat
  • Debugger for Java by Microsoft

Note: after installation, make sure you read the individual extension installation instructions. You will need to set the `java.home` user setting as per the language extension instructions. Don’t forget double backslashes in the path to the root of your JDK.
Finally, install the Java Test Runner extension.

Creating a maven project

Start by opening a new Visual Studio Code window (Ctrl + Shift + N) and opening a new console (Ctrl + ‘).
Navigate to the directory where you want to create your new maven project.

cd {to-your-project-dir}

Then create a new empty maven project using the maven command line. Alternatively, the created project can be cloned from the github project teggr/vs-code-first-look.

mvn -B archetype:generate \
      -DarchetypeGroupId=org.apache.maven.archetypes \
      -DgroupId=com.robintegg.blog \
      -DartifactId=vs-code-first-look

Maven will then create a new folder called vs-code-first-look with a prepopulated `pom.xml`, `App` class and `AppTest` unit test.

[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Old (1.x) Archetype: maven-archetype-quickstart:1.0
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: basedir, Value: C:\projects
[INFO] Parameter: package, Value: com.robintegg.blog
[INFO] Parameter: groupId, Value: com.robintegg.blog
[INFO] Parameter: artifactId, Value: vs-code-first-look
[INFO] Parameter: packageName, Value: com.robintegg.blog
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] project created from Old (1.x) Archetype in dir: C:\projects\vs-code-first-look
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 30.087 s
[INFO] Finished at: 2018-01-05T12:35:11+00:00
[INFO] Final Memory: 14M/190M
[INFO] ------------------------------------------------------------------------

After the maven build has completed, then in the Visual Studio Code file explorer, open the newly created folder. You should then see the new maven project.

Java Extension Pack features

The language extension should recognise both maven and gradle projects. I’m not sure what maven version is being picked up by the extension: an embedded version, or the one available from my path.
Open up the Java and XML files and you can try out the extension features.

Autocomplete

Code actions

Compilation errors

Debugger for Java

Visual Studio Code requires a launch configuration to be created that will support running and debugging a Java application.

Adding configuration can be done through the menu

If you have a maven project without any modules, then the tool appears to be able to find the ‘main’ method and prepopulate the configuration for you.

To run the application, use Ctrl + F5

To use debugging, add breakpoints in your editor and run the application with debugging enabled, using F5

Unit Testing

Using the maven project above, if you open a JUnit test you are then able to interact with the Java Test Runner features.
Each test and class declaration gets a “run test | debug test” option which will run the test(s) in the background.

There’s a test explorer window and a test report tool. The runner supports JUnit 4.8 upwards, so some classes extending TestCase won’t work out of the box with the extension, but these can always be run with maven.
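A minimal annotation-style JUnit 4 test of the kind the runner picks up (the class and assertion are illustrative):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AppTest {

	// the @Test annotation, rather than extending TestCase,
	// is what the test runner looks for
	@Test
	public void shouldAddNumbers() {
		assertEquals(4, 2 + 2);
	}
}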

Multi Module support

Many of the maven projects that I deal with on a daily basis will be multi-module maven projects.

I’ve created a project on GitHub called teggr/vs-code-multimodule that contains an example multi-module maven project. This project contains a class in the `vs-code-core` module, which is added as a dependency to the `vs-code-web` module.

Loading this project is fine; if you change the root pom.xml, you might see a message like the one below:

I’m assuming this is Visual Studio Code making sure that its m2eclipse or internal build files are kept in sync with the poms.
One error I have not been able to solve yet is managing the modules and the classpath. If you simply open the root project folder, then you will be presented with a warning like the one below, complaining that the classes in the modules are not on the classpath.

Eclipse shows the same type of behaviour if you only import the root project, so I tried adding the child module folders to a workspace. I’ve not spent much time with workspaces in Visual Studio Code.

This seems to resolve the classpath resolution. It does mean that you need to open the workspace file through the explorer (does this need to be checked into github?), not just the folder as you can with a single maven project. I’m not sure if there’s a way to recursively import each module into a workspace, like “Import…” in Eclipse.

Once the classpath resolution was working OK, I tried unsuccessfully to run the application and kept getting a host/port error. More investigation required here, I think.

Spring Boot support

Quite often I like to try out and demo applications using Spring Boot.

I have a simple web application in development for testing Spring Boot support. If I clone teggr/all-in-java and open this folder in Visual Studio Code, then I should be able to start the web application.

Visual Studio Code does create the launch configuration nicely for you.

The application can then be run and debugged as documented above.

Summary

Having already familiarised myself with Visual Studio Code for JavaScript, the process of installing the extensions was quick and easy, though I would say the documentation could do with a few tweaks. I also had to stumble across the test runner in the tutorial; perhaps more cross-referencing of the tools would help.
For the small projects that I imported, the Java compiler (based on m2eclipse) was quick and accurate, so I can’t fault it there. Compared to Eclipse, there are a number of refactoring code actions missing which I make much use of on a daily basis.

I will need to spend more time researching the multi-module support and how to launch an application from a child module. This is key for Visual Studio Code to become my main development tool.
As the extensions worked well with Spring Boot, this is good for demos and some of my side projects. At work I still have some projects that require WAR deployments into containers, and I’m not sure how to manage this development cycle outside of an Eclipse environment without lots of manual steps.

Given Spring Tool Suite is my main development tool, it will always be difficult to transition to a new tool easily and maintain a productive workflow. I’ve tried IntelliJ a number of times but always go back to Eclipse. Visual Studio Code already seems to have slipped into my development workflow and will become an even more useful tool once I’ve got to grips with a few more features.