Improving test coverage in log messages

Test coverage of log messages created with java.util.logging.Logger depends on the configured log level. As an optimization, the lambda expressions (message suppliers) that produce the log messages are only evaluated if the message would actually be logged at the configured level (for example, a FINE message supplier only runs if the logger level is FINE or finer).

In effect this means that for optimum test coverage you would have to set the log level to FINEST for your unit tests. But that will spam your console or log files.

Thankfully the designers thought of that issue and provided a means to use multiple log handlers in parallel. Christoph created a NoOpLoggingHandler and added it to the list of handlers in the logging.properties of the JUnit tests.

This makes sure all lambdas are called and we get maximum test coverage.
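
To illustrate the idea, here is a minimal sketch of such a no-op handler (the actual NoOpLoggingHandler in our test code may differ in package, name and details):

import java.util.logging.Handler;
import java.util.logging.LogRecord;

// Accepts every log record and silently discards it.
public class NoOpLoggingHandler extends Handler
{
    @Override
    public void publish(final LogRecord record)
    {
        // intentionally empty: the record is dropped
    }

    @Override
    public void flush()
    {
        // nothing is buffered, so there is nothing to flush
    }

    @Override
    public void close()
    {
        // no resources to release
    }
}

In the logging.properties used by the unit tests you then raise the logger level to ALL and register the handler, so every message supplier is evaluated while nothing is written to the console or to log files.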

If you want to learn more, check out the documentation of the LogManager class.

Release letters – useful for users but a coupling nightmare for developers

Release letters are useful. No doubt about that.

They are the go-to place for users who want to know what’s new in a software release.

Granted, that information is already available in your project’s ticket system, but you can’t expect your users to dig through tickets just to stay up to date.

So you duplicate information, which is unsatisfying because it creates coupling:

  • You copy the descriptions of features and bug fixes from the tickets
  • You add links to the ticket system
  • You copy the version number
  • And you must not forget to enter the correct release date shortly before you release

In some of our commercial projects we had this process automated to a high degree. The only thing really missing was translating the tech talk from the tickets into short descriptions that are helpful for your users.

We need to reach that point.
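
To make the coupling above concrete, here is a purely illustrative sketch of what such a release letter generator could look like; the Ticket type, its fields and the output format are hypothetical:

import java.time.LocalDate;
import java.util.List;

public class ReleaseLetterGenerator
{
    // Hypothetical view of a ticket: exactly the data that gets copied
    // from the ticket system into the release letter.
    record Ticket(String id, String title, String url) {}

    static String generate(final String version, final LocalDate releaseDate,
            final List<Ticket> tickets)
    {
        final StringBuilder letter = new StringBuilder();
        // Version number and release date are duplicated from the build
        // configuration and the release plan.
        letter.append("Release ").append(version)
                .append(" (").append(releaseDate).append(")\n\n");
        for (final Ticket ticket : tickets)
        {
            // Ticket title and link are duplicated from the ticket system.
            letter.append("* ").append(ticket.title())
                    .append(" [").append(ticket.id()).append("](")
                    .append(ticket.url()).append(")\n");
        }
        return letter.toString();
    }
}

The one step such a generator cannot do for you is the translation of technical ticket titles into descriptions that are helpful for users.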


Publishing to Maven Central

We already publish openfasttrace to JCenter, see openfasttrace distribution. Using libraries from JCenter in a Gradle build only requires adding repositories { jcenter() } to your build.gradle.

You can do the same with Maven by adding the following to your pom.xml:

<repositories>
  <repository>
    <id>central</id>
    <name>bintray</name>
    <url>http://jcenter.bintray.com</url>
  </repository>
</repositories>

But we want to make it even easier for Maven users by publishing our artifacts to the Maven Central repository, which Maven uses by default.

The easiest way to publish to Maven Central is synchronization via bintray. The setup process is not trivial, so I want to share my experience.

  1. Apply for a repository for your organisation (in our case: org.itsallcode)
  2. Optionally generate a GPG key without a password and upload it to bintray. As an alternative you can also use bintray’s key.
  3. Configure your bintray Maven repository to automatically sign uploaded files
  4. Make sure your project conforms to Maven Central’s quality requirements:
    1. Your pom.xml must contain information about the project license and developers (elements <licenses> and <developers>)
    2. Publish the source code by adding this to your pom.xml:
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-source-plugin</artifactId>
        <version>3.0.1</version>
        <executions>
          <execution>
            <id>attach-sources</id>
            <goals>
              <goal>jar</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    3. Publish javadoc by adding this to pom.xml:
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>3.0.1</version>
        <executions>
          <execution>
            <id>attach-javadocs</id>
            <goals>
              <goal>jar</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
  5. Now you can publish your artifacts to JCenter as usual via mvn deploy. Don’t forget to increment your version number in pom.xml.
  6. The last step is to synchronize the artifacts by clicking the “Sync” button in the “Maven Central” tab in bintray. If everything went well, after some time you should see the message “Successfully synced and closed repo”.

Then it will take some time (in my case around 40 minutes) until your packages are shown in Maven Central and you can search for them. And then it will take up to two hours until you can find them at search.maven.org.

Named vs. Numeric Requirement IDs

With every new project there will be a discussion about whether requirement IDs should have a unique name or simply follow a numbering scheme.

If you look at OFT’s specification document, you will see that we chose named IDs. The reason in our case is quite simple: we use the ID as a reference in OFT’s native specification format (aka. “requirement-enhanced Markdown”) and it is a lot simpler to understand the connections between the specification items in different artifact types if you can tell by the name what the requirement ID is about. It also helps debugging.

That being said, there are major drawbacks to this approach:

  1. You have to think of and type an ID for every requirement
  2. In large collections of requirements you might need hierarchical ID parts to make the IDs unique (like req~import.full-coverage-tag-format~1)
  3. Sometimes you pick bad names and are forced to decide whether to bulk-change all documents or live with the name
  4. Some tools (like Doors for example) enforce numeric IDs.

Bottom line: deciding between named and numeric IDs is something you should think long and hard about before you start your project.

The good news is that OFT supports both flavors.

OFT Specifications as PDF

PDF is a fixed-size document format, which means it made more sense in the days when PCs all had roughly the same video resolutions and screen geometries. But even then PDFs were never ideal for on-screen reading, because most documents are in portrait orientation and monitors seldom are. Nowadays displays, especially in mobile devices, come in all shapes and sizes, so fixed-size formats are even more out of place.

What PDF still excels at today is serving as a universal exchange format for read-only documents, especially if you plan on printing them.

While specifications seldom get printed these days, they still tend to get archived (especially as PDF/A), and a universally accepted format helps there. This is why we plan to make converting OFT-native specifications (aka. “requirement-enhanced Markdown”) to PDF easy.

The requirements are:

  • Creates PDFs
  • Is platform-independent
  • Separates content from layout
  • Offers customizable styling (so that projects or companies can apply their corporate design)
  • Is based on free software

While there is nothing wrong with users replacing parts of the tool chain with proprietary choices, our reference implementation is going to be free-software only.

These are the options we have been discussing so far:

HTML + CSS + HTML2PDF Renderer

This is a variant where we let a Markdown renderer create HTML for us and use print-oriented CSS as the method for customizing style and layout. The benefit is that there is a broad base of developers these days who know CSS, so it is easy for them to adapt the stylesheet to their needs. The downside is that the quality depends mostly on how good the HTML-to-PDF converter is.
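
As a rough sketch of how that pipeline could look in Java, here is an example assuming the commonmark-java library for the Markdown-to-HTML step and “Open HTML to PDF” for the HTML-to-PDF step; the library choices, file names and stylesheet are assumptions for illustration, not a decision we have made:

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

import com.openhtmltopdf.pdfboxout.PdfRendererBuilder;

public class SpecificationToPdf
{
    public static void main(final String[] args) throws Exception
    {
        final String markdown = Files.readString(Paths.get("design.md"));

        // Markdown -> HTML fragment (assumes the document contains no raw
        // HTML that would break XHTML well-formedness).
        final Node document = Parser.builder().build().parse(markdown);
        final String body = HtmlRenderer.builder().build().render(document);

        // Wrap the fragment in a well-formed XHTML page and reference a
        // print-oriented stylesheet: the hook for a corporate design.
        final String html = "<html><head>"
                + "<link rel=\"stylesheet\" href=\"print.css\"/>"
                + "</head><body>" + body + "</body></html>";

        // HTML + CSS -> PDF
        try (OutputStream out = new FileOutputStream("design.pdf"))
        {
            final PdfRendererBuilder builder = new PdfRendererBuilder();
            builder.withHtmlContent(html, Paths.get(".").toUri().toString());
            builder.toStream(out);
            builder.run();
        }
    }
}

A print stylesheet (page size, margins, headers and footers via @page rules) would then be the place where a project or company applies its corporate design.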

LaTeX 2 PDF

LaTeX makes absolutely beautiful and professional documents. There is no doubt about that. The ability to customize it via macros is limited only by the user’s imagination.

On the other hand, outside of the academic world not many people have prior experience with LaTeX. Also, there are a lot of dependencies involved, and they differ depending on the platform.

DocBook

DocBook shares the basic concept of separating content and style with TeX. DocBook layouts are customizable through XSLT stylesheets. DocBook documents are XML files with a strictly defined schema, so the content structure is not customizable like in TeX. Depending on your perspective, this is either a weakness or a strength (since it enforces a uniform document format).

DocBook is also known for producing quality PDFs. And the dependencies should be pretty homogeneous across platforms.

Popularity Contest

I tried to find numbers about the popularity of LaTeX vs. DocBook. Since none popped up right away, I went for a different approach: comparing search term popularity.

I know that the results need to be treated with a healthy dose of skepticism, since more searches could simply mean one of the two is harder to use. Also while there is only one DocBook, there is a whole bunch of TeX variants out there.

If search term popularity is any indication, LaTeX wins this contest with flying colors.

What’s your opinion? Any arguments I missed?

And this is why we can’t have nice things

The option to allow anyone to register on your WordPress blog is basically useless and should be removed. The reason I am saying this is that once the automated spam bots find your blog, they start registering users in the hope of using your blog as a spam distribution platform.

You can use CAPTCHAs as a gatekeeper for your registration dialogs. But the fact alone that this is necessary angers me. Just because a bunch of rightfully underpaid software engineering dropouts think it is a good idea to support the spam industry, the rest of us have a harder and harder time using the Internet for something useful.

I hope that later generations will look back at this advertise-any-crap-anywhere madness with mild amusement telling themselves that we were a bunch of brainwashed consumers who just didn’t know better.

You there, spam bot programmer: you proved that you at least know the basics of software development. Make something useful out of that and improve your karma.

Backdating WordPress blog posts

Sometimes I collect material for a blog post in a file but then forget to publish it. Thankfully WordPress lets you backdate posts.

I found three old blog posts that I wrote for the static blog but never published. Since they fit nicely with the topics here, I published them under the dates when I originally wrote them.

Releasing to Maven Central

While we already published releases on JCenter, we are now in the process of getting OpenFastTrace published on Maven Central. The goal is of course to make using OFT as convenient as possible for everyone.

I am happy to see that the people from Maven Central take security seriously and do not just let anyone publish modules under any package name. They asked us to prove that we own the domain “itsallcode.org”, so that gave me the necessary kick in the butt to speed up my plan to set up a web presence for OFT outside of Github on our domain. This blog is the first part.