Jun 7, 2017

"I know it when I see it" - Perceptions of Code Qualty


Everybody talks about code quality, so surely we have a good understanding of what good (or bad) code is, exactly. Right?
 
Fig. 1: Code Quality according to XKCD (https://xkcd.com/1513/)

Well, no. There are certainly many books and scholarly articles on the topic, but they present a wide array of different, and often conflicting views. It doesn’t get better if you turn to industry: If you ask three professional programmers, you get four different opinions, and they’re often fuzzy and apply only to the kind of software the programmer is experienced with.

All attempts to come up with a simple and crisp definition that everybody accepts have failed. At the end of the day, people resort to “I know it when I see it”. Unfortunately, that doesn’t quite cut it, neither from a scientific nor from a practical point of view.

Why should I bother?

Some software engineers might be tempted at this point to simply say “Not my problem” and turn away. However, consider the following two scenarios where this lack of a good definition truly is your problem. First, imagine a teaching environment, be it a secondary school or a university, and keep in mind that today’s students are tomorrow’s engineers so they will be your colleagues in no time. In any such setting, students expect to be told what is good code, and what isn’t. After all, that definition will surely affect how their work is graded. Therefore, the definition should be simple, universal, and easy to apply. However, there is a tension between simplicity and universality: simple solutions often fail in difficult situations. That is why practitioners often reject textbook definitions of code quality as simplistic, or vague.

Fig.2: WTF/h as a candidate code quality metric (http://techstroke.com/best-measure-of-code-quality/)

Now imagine a second scenario of a professional programmer acquainting herself with a piece of existing code. In order to understand the code, an IDE can provide valuable help by flagging suspect code to guide the programmer’s attention. Clearly, providing metrics (and threshold values) that the IDE should implement requires absolute precision in the definition of code quality. Without the necessary underpinnings, the tools will be of much less help, to fewer people.

However, the problem is not a shortage of definitions, concepts, and tools – quite the opposite, and all of them naturally claim to be just the right thing. What we need is guidance for selecting our approach, lest we waste our energy and enthusiasm on ineffective methods or outright hoaxes (and yes, that happens a lot). Unfortunately, there is precious little evidence to help along the way.

Now what?

In this situation, researchers and practitioners from Sweden, Germany, the Netherlands, the United States, and Finland have teamed up to form a Working Group at ITiCSE (see WG 2), with me among them representing QAware. The working group pursues three goals. First, we want to validate the above observation and thus turn it into an established fact. Next, we want to clarify and systematize the existing aspects of code quality to inform the conversation about code quality. Finally, we want to elicit and contrast the views on code quality that teachers, students, and professionals hold, respectively, with a view to deriving recommendations for programming education with greater practical value.
Fig. 3: Aspects of Code quality (http://blog.techcello.com/2013/06/how-can-techcello-help-in-increasing-the-overall-quality-of-your-application/)

Based on the literature (and common sense), we have some idea up front of what we might find. For instance, we expect to find consistent opinions within groups of people in similar professional situations (i.e., teachers, students, and professional programmers), and different opinions across these groups, simply because they have very different levels of expertise and are likely concerned with different kinds of quality issues. We expect a progression of levels covering more and more global properties.
  • SYNTAX At the one end of the spectrum, there are syntax-level issues, such as confusing the tokens “=” and “==” (see the small example after this list), and preferences regarding language constructs (e.g., avoiding unsafe constructs, default switch cases and so on).
  • PRAGMATICS One step further up, there are small-scale pragmatic issues like identifier naming, indentation, and simple structural metrics like cyclomatic complexity.
  • UNITS The next level addresses complete units of code (often a class or module), and considers their overall structure, unit-level metrics (e.g., method/class length), correctness, and completeness.
  • ARCHITECTURE Finally, there is a level of architecture that is concerned with the structure and interrelation of units, e.g., it considers depth of inheritance trees, design patterns, architectural compliance, and other system-level properties.
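To make the syntax level a bit more tangible, here is a tiny, made-up Java fragment of the kind of defect meant above: the assignment operator slips in where a comparison was intended, which compiles fine for booleans but silently changes the behavior.

boolean done = false;
while (done = false) {   // bug: assigns false, so the loop body never runs
    // ... work that was supposed to repeat until done == true ...
}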
Clearly, one has to master the lower levels before one can work effectively on the higher levels. But to what degree are the various groups aware of the elements of this hierarchy? What are the predominant concerns, and what tools and sources of information are used by the various populations? And which of the many issues at each level are really relevant, and how do they compare?

Starting Point

There are two types of existing evidence addressing such questions. On the one hand, there are quantitative studies (mostly controlled experiments and quasi-experiments) on very low-level aspects of code quality. Such studies are usually conducted with students and focus on simple metrics [1,2,4,7,8] or individual aspects such as readability [3,5]. They aspire to scientific reliability, though they necessarily lose ecological validity in the process. On the other hand, there are surveys and experience reports based on practitioner experience, such as [6,9], that generally lack this degree of focus (and, too often, also scientific rigor), but offer a higher degree of validity. Our study, in contrast, uses a qualitative study design and is the first to look at differences across groups.

Of course, many a practitioner might object that these questions in particular, or even scientific enquiry in general, while interesting, are of purely academic concern. A common objection is that science is too slow, lags behind coding practice, and is thus unable to give good guidance to today’s developers. I beg to differ. While I am ready to accept criticism of science being slow, sometimes wrong, and often not immediately applicable, it is still the only reliable (!) way forward. The IT industry is highly hype-driven, but lasting improvements are rare.

Leaving aside this philosophical argument, I believe questions like the ones addressed in our study offer a set of very practical benefits.
  • Raising the awareness about code quality in academic (or school) teaching will trickle down into increased quality awareness and coding capabilities of graduates, and thus junior practitioners.
  • Reliable (i.e., scientific) insight into the relative contributions and effects of the various factors allows practitioners to focus their efforts on those properties that truly make a difference.
  • Finally, fostering understanding of the respective viewpoints should improve mutual understanding, and thus contribute to more collaboration, which I truly believe in—for the common good.
 Stay tuned for the initial results of our study due in late June, and follow me on Twitter @stoerrle!

 

References

[1]    Breuker, Dennis M., Jan Derriks, Jacob Brunekreef. "Measuring static quality of student code." Proc. 16th Ann Joint Conf. Innovation and Technology in Computer Science Education. ACM, 2011.
[2]    Buse, Raymond PL, Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on Software Engineering 36.4 (2010): 546-558.
[3]    Börstler, Jürgen, Michael E. Caspersen, Marie Nordström. "Beauty and the Beast: on the readability of object-oriented example programs." Software Quality Journal 24.2 (2016): 231-246.
[4]    Börstler, Jürgen, et al. "An evaluation of object oriented example programs in introductory programming textbooks." ACM SIGCSE Bulletin 41.4 (2010): 126-143.
[5]    Börstler, Jürgen, Barbara Paech. "The Role of Method Chains and Comments in Software Readability and Comprehension—An Experiment." IEEE Transactions on Software Engineering 42.9 (2016): 886-898.
[6]    Christakis, Maria, Christian Bird. "What developers want and need from program analysis: An empirical study." Proc. 31st IEEE/ACM Intl. Conf. Automated Software Engineering. ACM, 2016.
[7]    Posnett, Daryl, Abram Hindle, Premkumar Devanbu. "A simpler model of software readability." Proc. 8th Working Conf. Mining Software Repositories. ACM, 2011.
[8]    Stegeman, Martijn, Erik Barendsen, Sjaak Smetsers. "Towards an empirically validated model for assessment of code quality." Proceedings of the 14th Koli Calling Intl. Conf. on Computing Education Research. ACM, 2014.
[9]    Stevenson, Jamie, Murray Wood. "How do practitioners recognise software design quality: a questionnaire survey." (2016).

May 18, 2017

ApacheCon / Apache BigData - Day 2

Here is my conference coverage for ApacheCon and Apache BigData NA 2017 day 2. See day 1 coverage here.

Apache Ignite
Like last year in Vancouver, Apache Ignite is again a big thing. It's really an amazing piece of technology. Here's the feature puzzle of Apache Ignite:
At the conference, the following Ignite topics were covered for the recently released version 2.0:

SQL Grid
Ignite supports ANSI SQL-99 compliant access to the data within the memory grid. It even supports the tricky things like (distributed) joins and groupings, full-text search within the data model, and geo-spatial queries. The data is always consistent and transactions are ACID, even if Ignite acts as a read-through/write-through cache for a relational database. This is a very interesting use case, as it allows Ignite to act as a caching SQL proxy in front of a relational database. Ignite SQL can be accessed via its own JDBC and ODBC drivers as well as via the Ignite SQL API. The relational data model within Ignite can be described and modified with SQL DDL and DML statements as well as via code annotations and XML configuration. The relational data model can also be imported from relational databases. Indexes are stored in-memory (off-heap) as B+ trees.
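To give an impression of the API, here is a minimal sketch (mine, not from the talk) of an embedded Ignite node answering a SQL query over cached data; the cache name and the Person type are made up for this example:

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteSqlExample {
    public static void main(String[] args) {
        // start an Ignite node with the default configuration
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("persons");
            cfg.setIndexedTypes(Long.class, Person.class);
            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

            cache.put(1L, new Person("Alice"));

            // ANSI SQL query against the in-memory data
            List<List<?>> rows = cache
                    .query(new SqlFieldsQuery("select name from Person where name like ?").setArgs("A%"))
                    .getAll();
            rows.forEach(row -> System.out.println(row.get(0)));
        }
    }

    public static class Person {
        @QuerySqlField(index = true)
        private final String name;

        public Person(String name) {
            this.name = name;
        }
    }
}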

Streaming
With data streamers you can import data into an Ignite cluster as a stream, with automatic partitioning support. Prebuilt data streamers for Kafka, RocketMQ, sockets, JMS, MQTT and others are available. On the processing side, continuous SQL queries can be run on sliding windows.

Web Console
There is a web console for Apache Ignite available for query execution, result visualization and monitoring. It also provides a schema import wizard from relational databases. 

File System
Ignite provides an in-memory file system which implements the Hadoop FileSystem API. So it can be used as an HDFS or Alluxio replacement for {Hadoop, Spark, Flink}. In this scenario it can also act as a caching layer between {Hadoop, Spark, Flink} and a real (and persistent) HDFS. 

Ignite 2.1
Ignite 2.1 will be released within the next months. The big new thing will be its own high-performance persistent storage implementation, making durable scenarios possible without relying on external persistent storage solutions.

Btw.: Ignite claims to be way faster than Hazelcast, and an Ignite book has just been completed.

Presto
When it comes to interactive analysis of big data, Facebook's Presto seems to be the jack of all trades. It supports full ANSI SQL (including joins), has its own JDBC driver and Tableau web connector, and can connect to various data sources such as files within HDFS in formats like Parquet and ORC as well as other persistent stores like Cassandra, Hive, PostgreSQL, and Redis. Presto can be extended with UDFs and provides enterprise-grade features like Kerberos and LDAP authentication and secured cluster-internal communication. Presto is maintained by a solid community and has a broad user base. There's also a nice web interface for Presto available from Airbnb. Besides Facebook, Teradata also contributes to Presto with about 20 developers and provides its own Presto distribution with enterprise support.
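To give an impression of how Presto is typically used from Java, here is a small, hypothetical JDBC snippet; host, catalog, schema, table and user are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrestoQueryExample {
    public static void main(String[] args) throws Exception {
        // connect to a Presto coordinator, catalog "hive", schema "default"
        String url = "jdbc:presto://presto-coordinator:8080/hive/default";
        try (Connection con = DriverManager.getConnection(url, "analyst", null);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT country, count(*) AS cnt FROM web_events GROUP BY country ORDER BY cnt DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("country") + ": " + rs.getLong("cnt"));
            }
        }
    }
}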

IoT
Apache is very busy providing an open source IoT stack on top of Mynewt, a real-time operating system (RTOS) for low-level devices (Cortex M0-M4, MIPS, RISC-V) with built-in device management features like build and package management, remote firmware upgrades, a secure bootloader and signed images.


The incubating Apache Edgent provides analytics capabilities at the edge, from the cloud down to the IoT fog.

May 17, 2017

ApacheCon / Apache BigData - Day 1

The Apache Foundation event management team is really excellent in choosing venues for their conferences. After Vancouver, BC last year, this year's ApacheCon and Apache BigData take place in beautiful Miami, FL. Following is my conference coverage of day 1. See day 2 coverage here.

Notebooks
Notebooks for data analysis are very much en vogue. Apache Zeppelin and Jupyter are the superheroes in that area. Pixiedust is a nice extension to Jupyter providing easy-to-use data visualization primitives. Helium is a new plugin system and package repository for Zeppelin providing various ready-to-use Zeppelin extensions (visualizations, interpreters, spells).

Cloud
Basically no surprise, but a little bit surprisingly intensive, is the promotion of Apache CloudStack as an open source IaaS platform and competitor to OpenStack. I thought this war was over and OpenStack was the clear winner - but Apache doesn't want to capitulate.

Flink and Spark ... and Beam
Flink seems to be at eye level with Spark: each time Spark is mentioned, Flink is mentioned as well. Apache Beam is also very well covered at the conference, providing an abstraction layer on top of both. But concerning Apache Beam, I'm very suspicious of abstraction frameworks on top of abstraction frameworks. Beam is also an abstraction for Google Cloud Dataflow, so it may also exist to give Google a "no vendor lock-in" argument. Btw.: Google is one of the most contributing companies to Beam.

Messaging
There are two new players around in the field of messaging systems. In the range between Kafka and classical messaging systems like ActiveMQ and RabbitMQ, RocketMQ sits right in the middle. RocketMQ is an open source contribution of Alibaba - one of the largest web-scale companies on earth. You can find a nice comparison chart of RocketMQ with Kafka and ActiveMQ here. RocketMQ provides more guarantees than Kafka, like strict ordering, but at a price: it is based on a master/slave architecture, so it's not as scalable as Kafka. Compared with ActiveMQ and RabbitMQ, however, it has significantly higher throughput because it leverages Kafka's pull/distributed-log principle. As RocketMQ also provides a JMS interface, it could hit a real sweet spot between Kafka and ActiveMQ/RabbitMQ. Apache DistributedLog is not a full-fledged messaging solution but a building block for one: it provides a distributed log implementation - Kafka, for example, is also based on a distributed log. Allegro open-sourced Hermes, a message broker on top of Kafka that extends Kafka with REST publisher/consumer interfaces, message tracing and monitoring, and guaranteed message delivery at a sub-millisecond cost.
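As a taste of the RocketMQ client API, here is a minimal, hypothetical producer sketch using the org.apache.rocketmq Java client; group name, topic and name-server address are placeholders:

import java.nio.charset.StandardCharsets;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class RocketMqProducerExample {
    public static void main(String[] args) throws Exception {
        // producers are grouped; the name server tells the client where the brokers are
        DefaultMQProducer producer = new DefaultMQProducer("example-producer-group");
        producer.setNamesrvAddr("localhost:9876");
        producer.start();
        try {
            Message msg = new Message("example-topic", "example-tag",
                    "Hello RocketMQ".getBytes(StandardCharsets.UTF_8));
            SendResult result = producer.send(msg);
            System.out.println("Sent message, status: " + result.getSendStatus());
        } finally {
            producer.shutdown();
        }
    }
}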

Hardware Diversification
Spark and others are prepared to support diverse hardware like GPUs, TPUs and non-volatile / durable RAM - including a talk on the QAware research project "how to leverage the GPU on Spark". There is also a native library from Intel (Math Kernel Library) which claims to speed up ML use cases on Spark by 9x at no additional cost.

Dataservices
Dataservices are a new way to process data and an alternative to Spark and Flink if you want to implement and run data processing applications on top of a microservice platform. I did a talk on how to implement dataservices with Spring Cloud Data Flow.
Others proposed to use a serverless framework like OpenWhisk to implement dataservices.

Jan 23, 2017

Setting up a distributed Ehcache with Mule ESB Community Edition

Setting up a distributed Ehcache on Mule ESB Community Edition is in fact quite simple and can be achieved in a few steps. After creating an Ehcache configuration, we set up a cache manager managed by Spring. We then use the previously defined caches in our Mule configuration together with a cache key extractor in a custom caching interceptor.

Related

If you have Mule Enterprise Edition, check out the Caching Scope which allows caching of predefined blocks inside flows. Ehcache also provides distributed caching with Terracotta and BigMemory Max.

Prerequisites

In our example we are using Mule 3.8.0 together with Ehcache 2.6.3 and Spring 4.1.6 inside a Glassfish 4 server. Mule is configured using XML configuration files.

Setting up the distributed Ehcache

The Ehcache is configured using ehcache.xml configuration files which consist at least of a list of cache configurations. For the distributed cache, we also need a peer provider, a peer listener and an event listener for each cache.

  • peer provider: locates other peers and manages a list of peers belonging to the distributed cache
  • peer listener: listens for incoming cache changes
  • cache event listener: listens for local cache changes and distributes changes to other peers

First, we set up the peer provider which locates other peers in the network and manages a list of peers which belong to the distributed cache:

<cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory" properties="peerDiscovery=automatic, multicastGroupAddress=224.0.0.1, multicastGroupPort=22401"/>

The peer discovery can either be done in automatic mode using multicast (as listed above) or in manual mode explicitly specifying the remote peer addresses. The latter approach is usually safer for company networks or data centers, but requires a lot of lines of configuration when using more than just a few caches and server instances. In automatic mode, the peer provider sends multicast messages to all server instances in the multicast group and tells them about its caches and the port on which the peer listener (see below) listens for incoming cache changes.

Together with the peer provider, we need a peer listener which listens for incoming cache changes:

<cacheManagerPeerListenerFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory" properties="socketTimeoutMillis=2000"/>

If we do not specify a port as in the example above, Ehcache automatically chooses a high numbered port which is still unused. For company networks or data centers, you might want to specify the port explicitly. When using automatic peer discovery, the information about which server instance uses which port is distributed by the peer provider over multicast messages. In case of a manual peer discovery, the addresses and ports have already been stated explicitly in the peer provider configuration.

Finally, each cache needs a cache event listener (defined inside its cache tag) which distributes cache changes, such as new cache entries, to the remote peers.

<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
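Putting the pieces together, a replicated cache definition in ehcache.xml could look roughly like this; the cache name matches the fooServiceCache used later, while sizing, TTL and replication flags are just placeholder values:

<cache name="fooServiceCache"
       maxEntriesLocalHeap="1000"
       timeToLiveSeconds="300"
       eternal="false">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true, replicatePuts=true,
                    replicateUpdates=true, replicateRemovals=true"/>
</cache>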

Also see the Ehcache Replication Guide which is still available on the Ehcache.org website.

Configuring the cache manager

Next, we configure an Ehcache manager in our Spring configuration file. We will use the cache manager later to retrieve the caches and use them in our caching interceptor.

<bean id="ehcacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="configLocation" value="classpath:/ehcache.xml"/>
</bean>
<!-- optional: -->
<bean id="ehcacheCacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
    <property name="cacheManager" ref="ehcacheManager"/>
</bean>
<cache:advice id="ehcacheAdvice" cache-manager="ehcacheCacheManager"/>

The cache manager uses the previously defined ehcache.xml files. This is usually a good place to define separate configuration files for your environments e.g. using ${ } property expansion. For example you might want to disable the distributed cache when testing locally or define different addresses or ports for your production environment. Note that the configLocation is a Spring resource, so if you want to point it to a file not in the classpath, use the file: prefix, e.g. file:/path/to/ehcache.xml.

If we also want to use our caches elsewhere with Spring, it is a good idea to define a Spring cache manager (in the example above called ehcacheCacheManager) and an appropriate cache advice.

An interface for cache key extractors

In order to provide a cache key to our caching interceptor for every service, we implement a cache key extractor defined by a simple extractor interface.

public interface CacheKeyExtractor {
    Object extractKeyFrom(MuleEvent event);
}

For each service, we implement a concrete cache key extractor. An extractor could e.g. analyse the payload of the request, parse it and extract the relevant information that is a viable cache key. Since we pass the MuleEvent, we also have access to inbound, outbound and session properties set by Mule or could retrieve other information from our Spring context.

In order to be able to access our cache key extractor implementations in the Mule configuration files, we define them as Spring components (e.g. @Component("fooCacheKeyExtractor")) and give them a unique name for simple usage.
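As an illustration, a (hypothetical) extractor for the foo service could simply use the message payload as cache key; a real implementation would parse the payload and pick only the fields that actually identify the request:

import org.mule.api.MuleEvent;
import org.springframework.stereotype.Component;

@Component("fooCacheKeyExtractor")
public class FooCacheKeyExtractor implements CacheKeyExtractor {

    @Override
    public Object extractKeyFrom(MuleEvent event) {
        try {
            // use the (string) payload of the request as the cache key
            return event.getMessage().getPayloadAsString();
        } catch (Exception e) {
            // no usable key: the interceptor will then skip caching for this event
            return null;
        }
    }
}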

Implementing the caching interceptor

The last component needed for a working cache is the caching interceptor. It is implemented as a custom Mule interceptor. On a cache hit, further execution of the flow is stopped and the cached payload is returned. On a cache miss, the flow continues and the result of the execution is put into the cache. Logging messages and documentation are stripped from the following example code.

@Component
public class CachingInterceptor implements Interceptor {
    private static final String HTTP_STATUS_OK = "200";
    private Ehcache cache = null;
    private MessageProcessor next = null;
    private CacheKeyExtractor extractor = null;

    @Override
    public MuleEvent process(MuleEvent event) throws MuleException {
        Object cacheKey = extractor.extractKeyFrom(event);
        if (cacheKey == null) {
            return next.process(event);
        }
        Element cachedElement = cache.get(cacheKey);
        if (cachedElement == null) {
            // cache miss
            return updateCache(cacheKey, event);
        } else {
            // cache hit
            return lookupCache(cachedElement, event);
        }
    }

    private MuleEvent updateCache(Object cacheKey, MuleEvent event) throws MuleException {
        // invoke the intercepted processor
        MuleEvent result = next.process(event);
        String status = result.getMessage().getInboundProperty("http.status");
        if (!HTTP_STATUS_OK.equals(status)) {
            return result;
        }
        // cache the payload of the intercepted processor
        try {
            byte[] payload = result.getMessage().getPayloadAsBytes();
            if (payload != null) {
                cache.put(new Element(cacheKey, payload));
            }
        } catch (IOException e) {
            // logging stripped for brevity; fall through and return the uncached result
        } catch (Exception e) {
            // logging stripped for brevity; a failed cache put must not break the flow
        }
        return result;
    }

    private MuleEvent lookupCache(Element cachedElement, MuleEvent event) throws MuleException {
        // extract the cached payload
        try {
            Object payload = cachedElement.getObjectValue();
            MuleMessage cachedMessage = new DefaultMuleMessage(payload, event.getMessage(), event.getMuleContext());
            return new DefaultMuleEvent(cachedMessage, event);
        } catch (IOException e) {
            // cached payload not usable: expire the element and fall back to the real processor
            cachedElement.setTimeToLive(0);
            return next.process(event);
        }
    }

    @Override
    public void setListener(MessageProcessor messageProcessor) {
        next = messageProcessor;
    }
    
    public void setCache(Ehcache cache) { this.cache = cache; }
    public void setCacheKeyExtractor(CacheKeyExtractor extractor) {
        this.extractor = extractor;
    }
}

The caching interceptor can now be used in our Mule flows.

Configuring Mule

The Mule configuration is now simple. We first need access to our caches so we can insert them into the caching interceptor. The Spring EhCacheFactoryBean already provides the extraction of caches from our previously defined cache manager.

<beans:beans>
    <beans:bean id="fooServiceCache" class="org.springframework.cache.ehcache.EhCacheFactoryBean">
        <beans:property name="cacheName" value="fooServiceCache"/>
        <beans:property name="cacheManager" ref="ehcacheManager"/>
    </beans:bean>

    <beans:bean id="barServiceCache" class="org.springframework.cache.ehcache.EhCacheFactoryBean">
        <beans:property name="cacheName" value="barServiceCache"/>
        <beans:property name="cacheManager" ref="ehcacheManager"/>
    </beans:bean>
</beans:beans>

In our flows, we can now insert the custom caching interceptor. The interceptor is configured with the cache to be used (the name must match the one in the ehcache.xml file) and a cache key extractor that knows how to extract a cache key for this specific service. Since we defined the extractor as a named Spring bean, we can now easily inject it here. On a side note, a more sophisticated implementation of the caching interceptor could also e.g. find the extractor by some name magic using Spring. The message processor listener, which is also needed by the caching interceptor, is automatically set by Mule.

<flow name="fooService">
    <inbound-endpoint ref="foo-service-inbound-endpoint"/>
    <!-- ... -->
    <custom-interceptor class="de.qaware.caching.CachingInterceptor">
        <beans:property name="cache" ref="fooServiceCache"/>
        <beans:property name="cacheKeyExtractor" ref="fooServiceCacheKeyExtractor"/>
    </custom-interceptor>
    <!-- ... -->
    <outbound-endpoint ref="foo-service-outbound-endpoint"/>
</flow>

And that’s it. Calls to our foo service are now cached and distributed to our other nodes. Subsequent calls of our foo service should now be answered faster.

Troubleshooting

If you have problems with the Ehcache configuration, first make sure that the correct ehcache.xml file is loaded. Spring and Ehcache will switch to a default failsafe configuration in case of errors, which can lead you down the wrong trail. Also have a look at the Ehcache log messages at debug level: Ehcache should print a lot of peer discovery messages in automatic mode and give you hints about problems with your configuration. In case of problems with Mule, also have a look at the log messages in debug mode; they are quite verbose.

Nov 30, 2016

Continuously delivering a Go microservice with Wercker on DC/OS

Currently, I am really into the field of building cloud native applications and the associated technology stacks. Normally I would use Java as a primary language to implement such an application. But since everyone seems to be using Go at the moment, I figured it's about time to learn a new language to see how it fits into the whole cloud native universe.

So let's implement a small microservice written in Go, build a Docker image and push it to Docker hub. We will be using the Docker based CI platform Wercker to continuously build and push the image whenever we change something in the code. The complete example source code of this article can be found on Github here.

Before you start

Make sure you have all the required SDKs and tools installed. Here is the list of things I used for the development of this showcase:
  • Visual Studio Code with Go language plugin installed
  • The Go SDK using Brew
  • The Docker Toolbox or native Docker, whatever you prefer
  • The Make tool (optional)
  • The Wercker CLI, for easy local development (optional)

Go micro service in 10 minutes

If you are new to the Go language, make sure you read the Go Bootcamp online book

To build the microservice, we will only be using the 'net/http' and 'encoding/json' standard libraries that come with Go. We define the response structure of our endpoint using a plain Go struct. The main function registers the handler function for the '/api/hello' endpoint and then listens on port 8080 for any incoming HTTP requests. The handler function takes two parameters: a response writer and a pointer to the original HTTP request. All we do in here is create and initialize the response structure, marshal it to JSON and finally write the data to the response stream. By default, the Go runtime will use 'text/plain' as content type, so we also set the 'Content-Type' HTTP header to the expected value for the JSON formatted response.

package main

import (
    "encoding/json"
    "net/http"
)

// Hello response structure
type Hello struct {
    Message string
}

func main() {
    http.HandleFunc("/api/hello", hello)
    http.ListenAndServe(":8080", nil)
}

func hello(w http.ResponseWriter, r *http.Request) {

    m := Hello{"Welcome to Cloud Native Go."}
    b, err := json.Marshal(m)

    if err != nil {
        panic(err)
    }

    w.Header().Add("Content-Type", "application/json;charset=utf-8")
    w.Write(b)
}

Now it is time to trigger the first Go build for our micro service. Open a terminal, change directory into you project folder and issue the following command:

go build -o cloud-native-go

You should now have an executable called 'cloud-native-go' in your project directory which you can use to run the micro service. You should also be able to call the '/api/hello' HTTP endpoint on localhost, e.g. curl http://localhost:8080/api/hello. Done.
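If everything works, the endpoint returns the JSON-marshalled Hello structure from the code above; a quick smoke test could look like this (response headers shortened):

$ ./cloud-native-go &
$ curl -i http://localhost:8080/api/hello
HTTP/1.1 200 OK
Content-Type: application/json;charset=utf-8
...

{"Message":"Welcome to Cloud Native Go."}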

Go CI/CD pipeline using Wercker

Wercker is a Docker native CI/CD automation platform for Kubernetes, Marathon and general microservice deployments. It is pretty easy to use, allows local development and is free for community use. For the next step, make sure you have the Wercker CLI tools installed. The instructions can be found here.

Create a file called 'wercker.yml' in the root directory of your project and add the following code snippet to it to define the local development build pipeline. We specify the Docker base box to use for the build as well as the commands to build and run the app.

dev:
  # The container definition we want to use for developing our app
  box: 
    id: golang:1.7.3-alpine
    cmd: /bin/sh
  steps:
    - internal/watch:
        code: |
          CGO_ENABLED=0 go build -o cloud-native-go
          ./cloud-native-go
        reload: true

In order to continuously build and run our Go microservice locally, and also watch for changes to the sources, you only have to issue the following Wercker CLI command:

wercker dev --publish 8080

This will download the base box, and then build and run the app inside the container. In case of changes, Wercker will rebuild and restart the application automatically. You should now be able to call the '/api/hello' endpoint via the IP address of your local Docker host and see the result message, e.g. curl http://192.168.99.100:8080/api/hello.

Once the application and the development build are working, it is time to define the pipelines to build the application and to push the image to Docker hub. The first pipeline does have 3 basic steps: first call Go Lint, then build the application and finally copy the build artifacts to the Wercker output folder for the next pipeline to use as inputs. The following code excerpt should be pretty self-explanatory.

build:
  # The container definition we want to use for building our app
  box: 
    id: golang:1.7.3-alpine
    cmd: /bin/sh
  steps:
    - wercker/golint
    - script:
        name: go build
        code: |
          CGO_ENABLED=0 go build -o cloud-native-go
    - script:
        name: copy binary
        code: cp cloud-native-go "$WERCKER_OUTPUT_DIR"

The final pipeline will use the outputs from the previous pipeline, build a new image using a different base box and then push the final image to Docker Hub. Again, there is not much YAML required to do this. But wait, where is the Dockerfile required to do this? If you pay close attention you will notice that some of the attributes of the 'internal/docker-push' step resemble the different Dockerfile keywords.

deploy:
  # The container definition we want to use to run our app
  box: 
    id: alpine:3.4
    cmd: /bin/sh
  steps:
    - internal/docker-push:
        author: "M.-L. Reimer <mario-leander.reimer@qaware.de>"
        username: $USERNAME
        password: $PASSWORD
        repository: lreimer/cloud-native-go
        tag: 1.0.0 $WERCKER_GIT_COMMIT latest
        registry: https://registry.hub.docker.com
        entrypoint: /pipeline/source/cloud-native-go
        ports: "8080"

Once you have saved and pushed the 'wercker.yml' file to Github, create a new Wercker application and point it to this Github repo. Next, define the build pipeline using the Wercker web UI. Also make sure that you define the $USERNAME and $PASSWORD variables as secure ENV variables for this application and that you set them to your Docker Hub account. After the next 'git push' you will see the pipeline running and after a short while the final Docker images should be available at Docker Hub. Sweet!

Wercker is also capable of deploying the final Docker image to a cluster orchestrator such as Kubernetes, Marathon or Amazon ECS. So as a final step, we will enhance our pipeline with the automatic deployment to a DC/OS cluster running Marathon.

    - script:
        name: generate json
        code: chmod +x marathon.sh && ./marathon.sh
    - script:
        name: install curl
        code: apk upgrade && apk update && apk add curl
    - wercker/marathon-deploy:
        marathon-url: $MARATHON_URL
        app-name: $APP_NAME
        app-json-file: $APP_NAME.json
        instances: "3"
        auth-token: $MARATHON_AUTH_TOKEN

First, we execute a shell script that generates the Marathon service JSON definition from a template enhanced with some Wercker ENV variables. Then we install 'curl' as this tool is required by the next step and it's not included in the Alpine base image. Finally, we will use the built-in Wercker step to deploy 3 instances of our microservice to a DC/OS cluster. We use several ENV variables here, which need to be set on a deployment pipeline level. Important here are $MARATHON_URL and $MARATHON_AUTH_TOKEN, which are required to connect and authenticate to the Marathon REST API.
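For illustration, the generated $APP_NAME.json could look roughly like the following sketch of a Marathon app definition; image, resources and health check are placeholders, the real template lives in the example repository:

{
  "id": "/cloud-native-go",
  "instances": 3,
  "cpus": 0.25,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "lreimer/cloud-native-go:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/api/hello", "portIndex": 0 }
  ]
}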

Summary and Outlook

Implementing simple microservices in Go is pretty straightforward. However, things like service discovery, configuration, circuit breakers or metrics aren't covered by the current showcase application yet. For real cloud native Go applications we will have a closer look at libraries such as Go-Kit or Go-Micro in the next instalment.

Stay tuned. To be continued ...


GOTO Berlin 2016 – Recap

I recently returned from Berlin where I attended the GOTO Berlin 2016 conference. Here are some of the insights I brought with me.

Diverse keynotes
There have been some amazing keynotes on important topics like prejudices, (neuro)diversity and algorithms gone wrong (producing biased, unfortunate and hurting results). I liked these talks a lot. Make sure you check out the talks done by Linda Rising, Sallyann Freudenberg and Carina C. Zona.

The Cloud is everywhere
This is no surprise. There have been many talks about cloud native applications and microservices. Mary Poppendieck did a good keynote on why these applications are so important now and in the future. On the more technical side, IBM presented OpenWhisk as an alternative to Amazon's Lambda for building serverless architectures. It supports JavaScript, Swift, Python and Java right out of the box. Additionally, arbitrary executables can be added using Docker containers. What's especially notable about OpenWhisk is that it is completely open source (see https://github.com/openwhisk/openwhisk). So you could switch your provider or even host it yourself. Of course, IBM offers hosting on its very own cloud platform Bluemix.

UI in times of micro services
There have been a lot of talks covering the idea of using microservices and splitting up your application into different parts with potentially different, independent development teams. Most of the time this is all about the backend. On the front-end side you still end up with a monolithic, maybe single-page, web application that uses these microservices.
Zalando introduced its open source framework ‘Mosaic’, a framework for microservices for the frontend, that should tackle these problems. It does this by replacing placeholders in a template with HTML fragments. This happens during the initial page request on the server side (asynchronous replacements via AJAX are supported). The HTML fragments can be provided by the same team that developed the backing microservice.
Mosaic currently offers two server-side components, one written in Go and one in Node.js.
Side note: to make the different application fragments look the same, the teams still have to share some library code (in their case React components).

New ways to visualize data with VR/AR/MR
There was a talk and some demos about the new Microsoft HoloLens. Philipp Bauknecht put the HoloLens in the space of ‘mixed reality’ (as the only existing device in that space; Pokémon Go was his example for augmented reality). His talk covered some basics about the hardware, possible usage scenarios, existing apps and how to develop new applications.
The interesting part was the completely new possibilities of displaying data, which could result in amazing new kinds of applications. This is (together with VR) one of the first really new output devices for quite some time! Very exciting.

This and that

  • Ola Gasidlo mentioned PouchDb, an open-source JavaScript database inspired by Apache CouchDB. Interestingly, it enables applications to store data locally while offline, and then synchronize the data with CouchDB or compatible servers when the application is back online.
  • Ola introduced the phrase ‘Lie Fi’ to me: Lie Fi - having a data connection, but no packets are coming through ;-)
  • Martin Kleppmann did an interesting talk about his algorithm for merging concurrent data changes. He did this with the example of a text editor like Google Docs. The project he is currently working on is actually about using cloud technology but with encrypted data (so you don't have to trust the cloud provider that much). The project is called Trve Data.

Nov 7, 2016

Modular Software Systems with Jigsaw - Part II


With version 9, Java has finally got its long-awaited support for building software modules. The Jigsaw module system becomes part and parcel of the JDK and the JRE runtime environment. This article describes how to set up statically and dynamically interchangeable software based on Jigsaw in order to design modular and component-oriented applications. Java itself uses the Jigsaw Platform Module System [JSR376] for the internal modularization of the previously monolithic runtime environment (rt.jar). Applications use Jigsaw to ensure the integrity of their architecture. Moreover, applications can be deployed with a minimal JRE runtime environment which only contains the JDK modules needed by the application. Jigsaw also allows, similar to OSGi, writing plug-in modules which provide applications with new functions that were not available at compile time.

Modules

Modules are independently deployable units (deployment units) hiding their implementation from the user. The core of modularization is the information hiding principle: users do not need to know the implementation details to access the module; these details are hidden behind an interface. In this way, the complexity visible to the user is reduced to the complexity of the interface. All a user needs to know about a module is contained in the module's public classes, interfaces and methods. Details of the implementation are hidden. Modules transfer the public/private principle of object orientation to entire libraries. This principle has been known for a long time: David Parnas described the visibility principle at module level and its advantages back in 1972 [Par72].

Fig 1: Library vs Module

A module consists of an interface and an implementation part in a single deployment unit/library. (See Fig 1.) The benefits of this way of encapsulation are the same as with object-orientation.

  • Implementation of a module can be changed without affecting the user. 
  • Complex functionality is hidden behind a simple interface. 

The result is improved testability, maintainability and understandability. Today, in the age of cloud and microservices, a modular design is mandatory! If you package the parts needed for microservice remote communication in separate modules and define module interfaces solely in terms of application functions, then local and distributed deployment are just a mouse click away. If you want to exchange module implementations at runtime or choose one of several alternative implementations (plug-in), it is necessary to separate interface and implementation into two independent modules, yielding an API module along with a potentially interchangeable implementation module. Modules exchangeable at runtime are known as plug-in modules; this in turn requires strict separation of interface and implementation into different deployment units.

Fig II: Separation of Interface and Implementation for Plug-In Modules

Designing modular applications has long been a tradition in Java, and there are many competing approaches to designing software modules. But they all have one thing in common: a module is mapped to a library. Libraries are realized in Java as collections of classes, interfaces and additional resources in JARs. JARs are just ZIP files, completely open to arbitrary access. Therefore, many applications define their components by a mix of several different approaches:

  • Mapping to package structures by naming conventions
  • Mapping to libraries (JARs)
  • Mapping to libraries, including meta information for checking dependencies and visibility (e.g. OSGi) 
  • Checking dependencies using analysis tools (e.g. SonarQube or Structure101)
  • Checking dependencies using build tools (e.g. Maven or Gradle) as well as
  • Using ClassLoader hierarchies for controlling visibility at runtime (e.g. Java EE) 
All of these approaches have advantages and disadvantages. However, none of them solves the core problem: as it is, Java has no module concept. That changes with Java 9: with Jigsaw, modules can be designed which control visibility and dependencies at JAR level. Modules make some of their types available as interfaces to the outside world. The interface of a Jigsaw module consists of one or more packages. Compiler and JVM ensure that no access occurs past the interface directly to private types (classes, interfaces, enums, annotations).

Jigsaw provides the necessary tools for analysis and control of dependencies. With the analysis tool jdeps, dependencies between JARs and modules can be analyzed and visualized (with DOT/GraphViz). The Java 9 runtime libraries themselves are based on Jigsaw: the previously monolithic runtime library rt.jar is split up in Java 9, and cyclic dependencies among modules have been removed. They are forbidden in Jigsaw because they would prevent interchangeability at module level. With the jlink tool, applications can be built with a minimal Java runtime that only contains the effectively used modules from the set of JDK modules. The core of Jigsaw is the module descriptor module-info.java, which is compiled by the Java compiler into a class module-info.class and resides in the top-level package of every Jigsaw JAR archive.
This file declares a module with a name and an optional version number. With requires, a module states its dependencies on other modules. With provides, a module states that it implements the interface of the specified module. With exports, the interface is declared in terms of package names. permits makes a module visible only to the specified modules. With the view section, multiple views on a module can be declared; this mechanism is necessary for downward compatibility: a module can thus support multiple versions of an interface module and remain compatible in spite of further development of old modules.

Sending Email with Jigsaw 

The simple application developed in the following sends emails. It consists of two modules:

  • The Mail module consists of one public interface and one private implementation. The interface of the module consists of one Java interface as well as the types of parameters and exceptions. In addition, it contains a factory (Factory pattern) for creating the implementation object.
  • The MailClient module uses the Mail module. It may only use the interface; direct access to the implementation classes is forbidden. 
Fig III: The Simplest Module for Sending Mails

Java 9 Jigsaw now ensures that:

  • The MailClient module only accesses exported classes/packages of the Mail module. With Jigsaw, trying to get around this restriction, even via the Reflection API, leads to compiler and runtime errors.
  • The Mail module only uses the declared dependencies on other modules. This decouples the module implementation from the client and makes it exchangeable, along with support for an internal and an external view of a module.
Jigsaw also prevents:
  • cyclic dependencies among modules. A dependency of the Mail module on the MailClient is thus forbidden and is checked by the compiler and the JVM. 
  • uncontrolled propagation of transitive dependencies from the Mail component to the MailClient. It is possible to control whether or not the interfaces of dependent modules are visible to the user of the interface. 

The Mail Module Example in Jigsaw Source Code 

Jigsaw introduces a new directory structure for modules in the source code. The source path is now located at the top level and defines the modules together with their sources. Each directory corresponds to a module name, so the Java compiler can find dependent modules in the source code without cumbersome path declarations for each module.

src
|–– Mail
| |–– de
| | |–– qaware
| |   |–– mail
| |     |–– MailSender.java
| |     |–– MailSenderFactory.java
| |     |–– impl
| |       |–– MailSenderImpl.java
| |–– module-info.java
|–– MailClient
| |–– de
| | |–– qaware
| |   |–– mail
| |     |–– client
| |       |–– MailClient.java
| |–– module-info.java


Of course, with Jigsaw, modules can be stored in any directory structure. But the chosen layout has the advantage that all modules can be compiled in one compiler run, and only one search path needs to be declared. Modules in Java 9 Jigsaw contain a special file, the Module descriptor, called module-info.java in the default package of the library. In our example, the Mail component exports only one package.

The associated file module-info.java looks like this:

module Mail {
    exports de.qaware.mail;
}

The exports instruction refers to a package. With multiple exports instructions, multiple packages can be made part of the interface. In our example, all types in the de.qaware.mail package are visible to the user, while sub-packages are invisible: the exports instruction is not recursive, so types in the de.qaware.mail.impl sub-package are not accessible from any other module. One user of the Mail module is the MailClient.

The module descriptor looks like this:

module MailClient {
    requires Mail;
}


The requires instruction takes a module name and optionally the information whether the Mail module should be visible at runtime (requires … for reflection) or just at compile time (requires … for compilation). By default, the requires instruction applies to the Java compiler as well as to the JVM at runtime. As shown in the following, the source code of the MailClient component uses the interface of the Mail component. Part of the interface is the Java interface MailSender as well as a factory which creates an implementation object on demand. In this example, the parameters for the mail address and the message are simple Java strings. Every Jigsaw module automatically depends on the Java base module java.base, which contains base packages such as java.lang and java.io. For this reason, the use of the String class is not explicitly declared in the module.

package de.qaware.mail.client;

import de.qaware.mail.MailSender;
import de.qaware.mail.MailSenderFactory;

public class MailClient {
    public static void main(String[] args) {
        MailSender mail = new MailSenderFactory().create();
        mail.sendMail("johannes@xzy.de", "Hello Jigsaw");
    }
}

Let us remind ourselves: access to private implementation classes is not possible. Any attempt to create an instance of the MailSenderImpl class directly with new or via Reflection without calling up the factory would fail with the following error message:

    ../MailClient.java:9:  error:  MailSenderImpl is not visible because 
    package de.qaware.mail.impl  is not visible 
    1 error.


That is exactly what we want: only the exported artifacts in the de.qaware.mail package can be accessed from outside the Mail module; non-exported packages are invisible. In order for modular Java programs to be compiled without an external build tool like Ant, Maven or Gradle, the javac compiler must be able to find dependent modules even if they are only present as source code. Therefore the Java compiler has been extended with a declaration of the module source path: with the new option -modulesourcepath, the compiler is told where to search for dependent modules. For experienced Java programmers it is very unusual to see multiple modules in "src" sub-directories which are named after the modules. If one were to follow JDK conventions, these directories would be named after packages (e.g. de.qaware.mail). That can become very confusing, yet has the advantage that the module names are globally unique. This, however, plays no role in projects that are not public, so we use technically descriptive names such as Mail, MailClient or MailAPI. The great advantage of this new code structure is that one single command can compile all modules.
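For reference, a minimal sketch of the exported interface and factory used above could look like this (the names follow the example; since only de.qaware.mail is exported, MailSenderImpl stays hidden from clients):

// src/Mail/de/qaware/mail/MailSender.java
package de.qaware.mail;

public interface MailSender {
    void sendMail(String to, String message);
}

// src/Mail/de/qaware/mail/MailSenderFactory.java
package de.qaware.mail;

import de.qaware.mail.impl.MailSenderImpl;

public class MailSenderFactory {
    // the factory lives in the exported package but may reference the
    // non-exported implementation, because both are in the same module
    public MailSender create() {
        return new MailSenderImpl();
    }
}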

From Mail module to Mail plug-in

In the above example, the interface of the Mail module is closely coupled with the implementation. Jigsaw knows no visibility rules within a module between interface and implementation; bidirectional dependencies are permitted there. As it is, the Mail module is not exchangeable at runtime, but it becomes so if interface and implementation are separated into different modules (see Fig. 4). This conventional plug-in design is necessary whenever there are multiple implementations of one interface:

// src/MailClient/module-info.java
module MailClient {
    requires MailAPI;
}

// src/MailAPI/module-info.java
module MailAPI {
    exports de.qaware.mail;
}

// src/WebMail/module-info.java
module WebMail {
    requires MailAPI;
}

The MailClient module now depends on the new module MailAPI. The MailAPI module exports the interface but has no implementation of its own. The interface is implemented by a third module, WebMail, which implements the interface rather than exporting anything. Both the client and the implementation module declare the API module via requires, and this is all the compiler needs to know at compile time.

Fig. 4: The Mail module as an exchangeable plug-in

But now we have a problem at runtime, because the implementation classes are inaccessibly hidden in the WebMail module, and another one because the factory would have to be located in the MailAPI module in order to be visible to the client. Unfortunately, that leads to a cycle and a compiler error, because the factory depends on the implementation. The question is how to create an instance of a hidden implementation class. With JDK 9 the amended ServiceLoader class in the java.util package comes in handy: a service interface can be connected with a private implementation class using the provides clause in the module descriptor of the implementation module. The ServiceLoader can then access the implementation class and instantiate it. Creation via reflection with Class.forName().newInstance() is no longer possible. This decision impacts all dependency injection frameworks, such as Spring or Guice: today's implementations of these frameworks must be adapted to Jigsaw's new ServiceLoader mechanism. The client module declares the use of a service by means of the uses clause. The implementing module declares via provides which implementation may be created by the ServiceLoader, and that allows instantiation in a client module via ServiceLoader:

// src/MailAPI/module-info.java
module MailAPI {
    exports de.qaware.mail;
}

// src/MailClient/module-info.java
module MailClient {
    requires MailAPI;
    uses de.qaware.mail.MailSender;
}

// src/WebMail/module-info.java
module WebMail {
    requires MailAPI;
    provides de.qaware.mail.MailSender
        with de.qaware.mail.smtp.SmtpSenderImpl;
}

// src/MailClient/de/qaware/mail/client/MailClient.java

// OK: create the implementation by using the java.util.ServiceLoader
MailSender mail = ServiceLoader.load(MailSender.class).iterator().next();

// NOK: reflection is not allowed:
// mail = (MailSender) Class.forName("de.qaware.mail.impl.MailSenderImpl").getConstructors()[0].newInstance();

// NOK: direct instantiation is not allowed:
// mail = new de.qaware.mail.impl.MailSenderImpl();
Declaration of a service in the META-INF directory is no longer necessary. Direct use via reflection is still forbidden and will be signaled by a runtime error. Likewise, the implementation class is of course private and cannot be used directly.

The module path and automatic modules

Java 9 supports the declaration of new modules at runtime. For reasons of downward compatibility, a new loading mechanism has been introduced for module JARs: the module path. Just like with the class path, JARs and/or entire directories can be declared from which modules are loaded. For JARs on the module path with no module descriptor, a default descriptor is generated automatically. This descriptor exports everything and adds the module as a dependency to all other modules. Such a module is called an "automatic module". This approach guarantees coexistence between Jigsaw modules and normal JARs. Both can even be stored in the same directory:

# run
java -mp mlib -m MailClient

Building, packaging and executing modules

With one single command all modules of an application can be compiled and neatly stored in an output folder.

# compile
javac -d build -modulesourcepath src $(find src -name "*.java")

This command compiles the modules under the root path src and saves the generated classes in an identical directory structure in the ./build path. The contents of the ./build directory can now be packed into separate JAR files. Declaration of the start class (--main-class) is optional:

# pack
jar --create \
    --file mlib/WebMail@1.0.jar \
    --module-version 1.0 \
    -C build/WebMail .

jar --create \
    --file mlib/MailAPI@1.0.jar \
    --module-version 1.0 \
    -C build/MailAPI .

jar --create \
    --file mlib/MailClient@1.0.jar \
    --module-version 1.0 \
    --main-class de.qaware.mail.client.MailClient \
    -C build/MailClient .
Three modules are now in the mlib output directory. The JVM is able to start the application when this path is given as the module path:
# run
java -mp mlib -m MailClient

Delivering modular applications

In the past, in order to deliver a runnable application, the complete Java Runtime Environment (JRE) had to be included. For the application itself, start scripts defining the class path were necessary in order to start it correctly with its dependent libraries. The JRE always delivered the full Java functionality, even if only a small part of it was effectively needed. Java 9 now provides the jlink command, which builds applications linked with only the necessary parts of the JDK. Only required modules are included, minimizing the Java runtime environment. If, for example, an application uses no CORBA, no CORBA support will be included.

# link
jlink --modulepath $JAVA_HOME/jmods:mlib --addmods MailClient,Mail \
    --output mailclient
The application can now be started with a single script. Knowledge of modules and their dependencies is not necessary. The output directory generated by jlink looks like this:
 .
 |——  bin
 |     |——  MailClient
 |     |——  java
 |     |——  keytool
 |——  conf
 |     |——  net.properties
 |     |——  security
 |     |——  java.policy
 |     |——  java.security
 |——  lib
 ...

The directory tree shows the complete minimal runtime environment of the application. In the bin directory, you can find the generated start script with which the application can be started without any parameters. All utilized modules are automatically packed into one file. The application can now be started by calling up the MailClient start script in the bin directory.

cd mailclient/bin
./MailClient
Sending mail to: x@x.de message: A message from Java Module System

Summary

The team around Mark Reinhold at Oracle has done an excellent job. Using Jigsaw, modular software systems can be developed solely on the basis of built-in Java resources. The impact on existing tools and development environments is significant, so it will take some effort to make popular development environments and build systems Jigsaw-compliant. But this will happen before long, because Jigsaw is part and parcel of Java 9. Poor tool support, as was the case with OSGi in its early days, will probably be a thing of the past. Jigsaw does not relieve us of the task of designing, implementing and testing sound modules, and is therefore no panacea against monolithic, poorly maintainable software. But Jigsaw makes good software design easier and reachable for anybody.

Links and Literature

  • [JSR376] Java Specification Request 376: Java Platform Module System, http://openjdk.java.net/projects/jigsaw/spec/
  • [Par72] D. L. Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, in: CACM, December, 1972, https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf