Sunday, August 12, 2018

OCF Remote Ops - Resources



Monday, May 14, 2018

Bazel: genrule patching an external repo

Just for fun I decided to try a quick-and-dirty Bazel configuration for Iotivity (github mirror).  It turned out to be much easier than I had expected. Over the space of a weekend I was able to enable Bazel builds for the core C/C++ API and also the Java and Android APIs. These should be considered Proof of Concept for the moment, since they need to be refined a bit (compiler options, platform-specific configuration, etc.)  Only tested on Mac OS X, but should be easily adapted to Linux and Windows.

I did come across one hairball that took the better part of a day to figure out.  It involves patching an external package. Since fixing it involved various troubleshooting techniques that are not documented, this article will describe the problem, the solution, and some of the ways that I figured out what was going on. I'll also try to explain how external repos and genrules work.

Iotivity uses several external packages (libcoap, mbedtls, tinycbor).  The Scons-based build system looks for them, and if it does not find them, displays a message instructing the user to download the package and exits. When Scons is rerun, it builds the packages along with Iotivity.

Bazel offers a much better solution. You just set such packages up as external repositories and Bazel will download and build them as needed, without user intervention. It's embarrassingly simple. Here's how tinycbor is handled:


    new_http_archive(
        name = "tinycbor",
        urls = [""],
        sha256 = "48e664e10acec590795614ecec1a71be7263a04053acb9ee81f7085fb9116369",
        strip_prefix = "tinycbor-0.5.1",
        build_file = "config/tinycbor.BUILD",
    )

This defines an external repository, whose label is @tinycbor.  In file config/tinycbor.BUILD you specify the build the same way you would if it were a local repo:

    cc_library(
        name = "tinycbor-lib",
        copts = ["-Iexternal/tinycbor/src"],
        srcs = ["src/cborparser.c"],  # plus the other tinycbor sources
        hdrs = glob(["src/*.h"]),
        visibility = ["//visibility:public"],
    )

That's it! Test it by running this from the command line:  $ bazel build @tinycbor//:tinycbor-lib. Bazel will download the library to a hidden directory, unzip it, compile it, and make it available to other tasks under the @<repo>//:<target> label, i.e. @tinycbor//:tinycbor-lib.  Add it as a dependency for a cc_* build like so:  deps = ["@tinycbor//:tinycbor-lib"]

The problem is that Iotivity patches the mbedtls library. Furthermore, it provides a customized config.h intended to replace the one that comes with the library (using copy rather than patch). It took a considerable amount of trial and error to figure out how to do this with Bazel. So we have four tasks:

  1. Configure the external repo as a new_http_archive rule in WORKSPACE
  2. Define a genrule to patch the library
  3. Arrange for the custom config.h to replace the default version
  4. Define a cc_library rule to compile the patched code
Here's how I set things up:

    new_http_archive(
        name = "mbedtls",
        urls = [""],
        sha256 = "dacb9f5dd438c456b9ef6627637f46e16fd41e86d828217ec9f8547d3d22a338",
        strip_prefix = "mbedtls-mbedtls-2.4.2",
        build_file = "config/mbedtls/BUILD",
    )

In config/mbedtls I have the following files: BUILD, ocf.patch, and config.h.

A Bazel genrule allows you to run Bash commands from Bazel. It must list all inputs and all outputs, so that Bazel can guarantee that you indeed output exactly what you promised, no more and no less. It writes output to ./bazel-genfiles/ which it then makes available to other tasks. Unfortunately the documentation is a little weak, so I had to discover the hard way just what Bazel considers an output.
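To make that contract concrete, here is a rough model in plain Python (run_genrule and its behavior are my own illustration, not Bazel code): outputs are declared up front, the command runs in its own directory, and the build fails if a declared output was not created there.

```python
import os
import subprocess
import tempfile

def run_genrule(cmd, declared_outs):
    """Toy model of the genrule contract: run `cmd` in a fresh sandbox
    directory, then verify that every declared output file was actually
    created there (roughly what Bazel enforces)."""
    sandbox = tempfile.mkdtemp()
    subprocess.run(cmd, shell=True, cwd=sandbox, check=True)
    missing = [o for o in declared_outs
               if not os.path.exists(os.path.join(sandbox, o))]
    if missing:
        raise RuntimeError("declared output(s) not created: %s" % missing)
    return sandbox

# The command writes exactly what was declared: succeeds.
run_genrule("pwd > genrule.log", ["genrule.log"])

# The output lands under the wrong name: "not created" error.
try:
    run_genrule("pwd > wrong.log", ["genrule.log"])
except RuntimeError as err:
    print(err)
```

This is the same failure mode described below when a cmd writes to a bare filename instead of the Bazel-provided output path.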

Exploring external repos using genrule

First let's take a look at what happens when Bazel downloads and unzips an external repo. We can do this using a simple genrule in config/mbedtls/BUILD:

    genrule(
        name = "gentest",
        srcs = glob(["**/*"]),
        outs = ["genrule.log"],
        cmd  = "pwd > $@",
    )

Run "$ bazel build @mbedtls//:gentest". You should get a message like the following:

Target @mbedtls//:gentest up-to-date:

Browse the genrule.log file and you'll see it contains the working directory of the genrule cmd, something like:


The first lesson here is that Bazel sandboxes execution for this external repo.

The second lesson is that you must write outputs to the appropriate Bazel-defined directory. That's what the $@ is for: it's the path of the actual output file. If you use "pwd > genrule.log", you'll get an error: "declared output 'external/mbedtls/genrule.log' was not created...".  That does not mean that genrule.log was not written; rather, it was written in the wrong place.
You can see what $@ is by using "echo $@ > $@"; the log will then contain:


Now try changing the cmd to "ls > $@".  Then genrule.log should contain:


Now try "ls -R > $@" to get a recursive listing of the tree; examine it and you will see that Bazel has unzipped the mbedtls library in ./external/mbedtls.

Finally, try this cmd: "\n".join(["ls > genrule.log", "ls > $@"])

This will show you that "ls > genrule.log" gets written to the execroot, whereas "ls > $@" gets written to the right place.

Applying a patch

Now let's write a genrule to apply the patch. This is a little trickier, since it has multiple outputs. If you try to use $@ Bazel will complain. Furthermore, since patch updates files in place, we need to copy the library to a new directory and apply the patch there. Finally, we need to make the patch file available - since our genrule will execute in the sandboxed execroot, we do not automatically have access to config/mbedtls/ocf.patch.

First let's expose ocf.patch. This is simple but involves an obscure function. Put the following at the top of config/mbedtls/BUILD:  exports_files(["config.h", "ocf.patch"])  This will make config/mbedtls/ocf.patch available under a Bazel label: "@//config/mbedtls:ocf.patch"

Our genrule starts out like this:

    genrule(
        name = "patch",
        srcs = glob(["**/*.c"])
             + glob(["**/*.h"])
             + glob(["**/*.data"])
             + glob(["**/*.function"])
             + glob(["**/*.sh"])
             + ["@//config/mbedtls:ocf.patch"],

The globs pick up all the files that are listed in the patch file and thus required as input (plus others, but that's ok). It also must list the patch file, since that is an input. All inputs must be explicitly listed.

Our command will look like this:

    cmd  = "\n".join([
        "cp -R external/mbedtls/ patched",
        "patch -dpatched -p1 -l -f < $(location @//config/mbedtls:ocf.patch)",
    ])

We first copy the entire tree to a new directory "patched" (e.g. external/mbedtls/include -> patched/include, etc).  We then need to add -dpatched to the patch command, so it runs from the correct subdir.  To access the patch file we use $(location @//config/mbedtls:ocf.patch); this is a Bazel feature that returns the correct (Bazel-controlled) path for ocf.patch.

This will apply the patch, but it will not produce the output required by genrule. It's just like "ls > genrule.log" above: the output gets written, but not in the right place. Where is the right place? That's what $(@D) is for. It's a so-called "Makefile variable"; see Other Variables available to the cmd attribute of a genrule. It resolves to the Bazel-defined output directory when you have multiple outputs. In this case:

bazel-out/darwin-fastbuild/genfiles/external/mbedtls.  (Compare this to the value of $@.)

So now we need to copy the files we care about to $(@D). Fortunately this is easy; everything we need is already under patched/ so we just add "cp -R patched $(@D)" to our cmd.

Finally we need to specify the outputs.  Note that we only need source files for the library, even though the patchfile applies to additional files (e.g. some programs and test files). So we can limit our output to those files:

    outs = ["patched/" + x for x in glob(["**/library/*.c"])]
         + ["patched/" + x for x in glob(["**/*.h"], exclude = ["**/config.h"])],

Here we use a Python facility (the language of Bazel is a Python variant).  We are only interested in the library files so we do not output any of the other stuff. We also exclude config.h since we are supplying a custom version.

NOTE: through trial and error, I have discovered that genrule will allow your command to write files that are not listed in the outs array, but it will not emit them. In this case, our command copies the entire source tree to $(@D), but our outs array only contains .c files and .h files.  The resulting genfiles tree contains only those files, to the exclusion of various other files in the source (e.g. *.data). So evidently Bazel is smart enough to eliminate files not listed in outs from $(@D).
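That pruning behavior can be modeled in a few lines of Python (prune_to_outs is my own illustrative name; Bazel's real logic is internal to the build):

```python
def prune_to_outs(copied_files, outs):
    """Toy model of how Bazel treats $(@D): the command may copy anything
    into it, but only paths declared in `outs` survive into genfiles."""
    declared = set(outs)
    return sorted(f for f in copied_files if f in declared)

# The cmd copied the whole tree, but only declared files are emitted.
copied = ["patched/library/aes.c",
          "patched/include/mbedtls/aes.h",
          "patched/tests/suites/test_suite_aes.data",  # not declared
          "patched/scripts/config.pl"]                 # not declared
outs = ["patched/library/aes.c", "patched/include/mbedtls/aes.h"]
print(prune_to_outs(copied, outs))
```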

Here's the final genrule:

    genrule(
        name = "patch",
        srcs = glob(["**/*.c"])
             + glob(["**/*.h"])
             + glob(["**/*.data"])
             + glob(["**/*.function"])
             + glob(["**/*.sh"])
             + ["@//config/mbedtls:ocf.patch"],
        outs = ["patched/" + x for x in glob(["**/library/*.c"])]
             + ["patched/" + x for x in glob(["**/include/**/*.h"], exclude = ["**/config.h"])],
        cmd  = "\n".join([
            "cp -R external/mbedtls/ patched",
            "patch -dpatched -p1 -l -f < $(location @//config/mbedtls:ocf.patch)",
            "cp -R patched $(@D)",
        ]),
    )

Build the library from the patched sources

First off, get the vanilla build working. This is pretty easy; it looks similar to the tinycbor example above.

Unfortunately, getting the lib to build using the patches turned out to be quite difficult. What I came up with is the following (which I do not entirely understand).

First, I had a devil of a time getting the header paths right. In the end the only thing I found that works is to list them all explicitly; globbing does not work.  So I have:

mbedtls_hdrs = ["patched/include/mbedtls/aes.h",
                # ...and so on, every header listed explicitly...
               ]

Then I have:

    cc_library(
        name = "mbedtls-lib",
        copts = ["-Ipatched/include"],  # plus the other include paths
        data = [":patch"],
        srcs = [":patch"],
        hdrs = mbedtls_hdrs + ["@//config/mbedtls:config.h"],
        includes = ["patched/include", "patched/include/mbedtls", "x"],
        visibility = ["//visibility:public"],
    )

Omitting either hdrs or includes causes breakage, dunno why.

For that matter, to be honest, I don't yet know if the build is good, because I have not used the lib with a running app yet.  But it builds!

Tuesday, September 12, 2017

Bazel: Building a JNI Lib

Here's how I managed to use Bazel to compile a JNI wrapper for a C library.  To understand it you'll need to understand the basics of Bazel; you should go through Java and C++ tutorials on the Bazel site, and understand workspaces, packages, targets, and labels. You will also need to understand Working with external dependencies.  That documentation is a little thin, but in this article I'll try to explain how it works, at least as I understand it.

To start, a JNI wrapper for a C/C++ library - let's call it libfoo - will involve three things (native file extensions omitted since they are platform-dependent):
  1. The Java jar that exposes the API - we'll call it libfooapi.jar in this case
  2. The JNI layer (written in C in this case, C++ also works) that wraps your C library (translates between it and the API jar) - we'll call this libjnifoo
  3. Your original C library - libfoo
So the task of your build is to build the first two.

Code Organization

My C library (libfoo) is built (using Bazel) as a separate project on the local filesystem. This makes it an "external Bazel dependency"; we'll see below how to express this in our JNI workspace/build files.

The source for the JNI lib project looks something like this (on OS X):
$ tree foo
├── README.adoc
├── src
│   ├── c
│   │   ├── jni_init.c
│   │   ├── jni_init.h
... other jni layer sources ...
│   │   └── z.h
│   └── main
│       ├── java
│       │   └── org
│       │       └── foo
│   │           └── ...java api sources...
├── BUILD
└── WORKSPACE
Note that WORKSPACE establishes the project root; you will execute Bazel commands within the foo directory. Note also that we only have one "package" (i.e. directory containing a BUILD file). We're going to build the jar and the jni lib as targets in the same package.

The third thing we need is the JDK, since our JNI code has a compile-time dependency on "jni.h"; this is a little bit tricky, since this too is an external dependency, and you cannot brute-force it by giving an absolute path to the JDK include directories - Bazel rejects such paths. We'll see how to deal with such "external non-Bazel dependencies" below.


My WORKSPACE file defines my libfoo library as a local, external, Bazel project:

    local_repository(
        name = "libfoo",
        path = "/path/to/libfoo/bazel/project",
    )

In this example, /path/to/libfoo/bazel/project will be a Bazel project - it will contain a WORKSPACE file and one or more BUILD files.  Defining a local "repository" like this in the workspace puts a (local) name on the "project" referenced by the path.  Note that projects do not have proper names; a "project" is really just a repository of workspaces, packages, and targets defined by WORKSPACE and BUILD files - hence the name "local_repository".

The "external" output dirs

When you define an external repo like this, Bazel will create (or link) the necessary resources in subdirectories of the output directories, named appropriately; in this case, "external/libfoo".

In other words, if you define @foo_bar, you will get "external/foo_bar" in the output dirs.

TODO: explain - does Bazel just create soft links to the referenced repo?

JDK Dependency

Our JNI library will have a compile-time dependency on the header files of the local JDK. Many more traditional build systems would allow you to express this by adding the absolute file path to those include directories; Bazel disallows this.  Instead, we need to define such resources as an external repository; in this case, a non-Bazel external repository.

You could do this yourself for JDK resources, at least in principle (I tried and failed).  Fortunately Bazel predefines the repository, packages and targets you need.  The external repo you need is named "@local_jdk" (note: underscore, not dash). Frankly I do not completely understand how the local_jdk and related definitions work, but they do. In this example, we will use:
  • "@local_jdk//:jni_header"
  • "@local_jdk//:jni_md_header-darwin"
(If you look at the source, you will see that these are labels for "filegroup" targets - a convenient way to reference a group of files.)



The local name for the external repository (project) specified in your WORKSPACE makes it possible to refer to it using Bazel labels.  In your BUILD files you use the @ prefix to refer to such external repositories; in this case, "@libfoo//src:bar" would refer to the bar target of the src package of the libfoo repository, as defined in the WORKSPACE above. Note that because of the way Bazel labels are defined, we can expect to find target "bar" defined in file "src/BUILD" in the libfoo repo.
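That label grammar can be sketched as a small parser (parse_label is a hypothetical helper, not part of Bazel; it just mirrors the conventions described above):

```python
def parse_label(label):
    """Split a Bazel label like '@libfoo//src:bar' into
    (repo, package, target). Per the conventions above: '@repo' names an
    external repository, the package names the directory holding the
    BUILD file, and a missing target defaults to the last package
    component."""
    repo = None
    if label.startswith("@"):
        repo, label = label[1:].split("//", 1)
    elif label.startswith("//"):
        label = label[2:]
    if ":" in label:
        package, target = label.split(":", 1)
    else:
        package, target = label, label.rsplit("/", 1)[-1]
    return repo, package, target

print(parse_label("@libfoo//src:bar"))        # ('libfoo', 'src', 'bar')
print(parse_label("@tinycbor//:tinycbor-lib"))
```

So "@libfoo//src:bar" points at target "bar" defined in src/BUILD of the libfoo repo, exactly as the text says.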

My buildfile has two targets, one for the Java, one for the C.  The Java target is easy:

    java_library(
        name = "fooapi",  # will produce libfooapi.jar
        srcs = glob(["src/main/**/*.java"]),
    )

The C target is a little more complicated:

    cc_library(
        name = "jni",
        srcs = glob(["src/c/*.c"]) + glob(["src/c/*.h"])
             + ["@local_jdk//:jni_header",
                "@local_jdk//:jni_md_header-darwin"],
        deps = ["@libfoo//src/ocf"],
        includes = ["external/libfoo/src",
                    # ...the other external include dirs...
                   ],
    )

Important: note that we put the @local_jdk labels in srcs, not deps.  That's because (I think) they are filegroups, and the labels you put in deps should be "rule" targets rather than file targets.

Exposing the headers

Note that it is not sufficient to express the dependencies on local_jdk; you must also specify the include directories within that repo in order to expose jni.h etc.  That's what the "includes" attribute is for.  You must list all the (external) directories needed by your sources.

Saturday, January 21, 2017

boot-gae: Interactive Clojure Development on Google App Engine

[Third in a series of articles on boot-gae]

boot-gae supports interactive development of Clojure applications on GAE. It does not have a true REPL, but it's pretty close: edit the source, save your edits, refresh the page. Your changes will be loaded by the Clojure runtime on page refresh.

This is mildly tricky on GAE. GAE security constraints prevent the Clojure runtime from accessing the source tree, since it is not on the classpath. Nothing outside of the webapp's root directory tree can be on the classpath.

Part of the solution is obvious: monitor the source tree, and whenever it changes, copy the changed files to the output directory. The built-in watch task makes this easy; gae/monitor composes that task with some other logic to make it work.

The tricky bit here is to make sure the changed files get copied to the appropriate place in the output directory; for Clojure source files, that means

target/WEB-INF/classes    ;; for servlet apps

target/<servicename>/WEB-INF/classes    ;; for service apps

boot-gae tasks use configuration parameters to construct the path. The gae/monitor (and the gae/build) task uses the built-in sift task to move input from the source tree to the right place.
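The path construction is simple enough to sketch (classes_dir is an illustrative stand-in for what the boot-gae tasks compute from their configuration parameters, not actual boot-gae code):

```python
def classes_dir(target_root, service_name=None):
    """Where copied .clj files must land so they end up on the webapp
    classpath: target/WEB-INF/classes for a servlet app, or
    target/<servicename>/WEB-INF/classes for a service app."""
    parts = [target_root]
    if service_name:
        parts.append(service_name)
    parts += ["WEB-INF", "classes"]
    return "/".join(parts)

print(classes_dir("target"))             # target/WEB-INF/classes
print(classes_dir("target", "greeter"))  # target/greeter/WEB-INF/classes
```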

That's half a solution; we still need to get Clojure to reload the changed files. The trick here is to use a  Java filter to monitor the files in the webapp and reload them on change, just as the gae/monitor does with source files. A filter in a Java Servlet app dynamically intercepts requests and responses; by installing a filter, we can ensure that changed Clojure code can be reloaded whenever any page is loaded.  See The Essentials of Filters for more information.

The gae/reloader task generates and installs the appropriate filter. No configuration is necessary; the whole process is hidden, so all the programmer need do is run the gae/reloader task.

The reloader task generates a reloader "generator" file (named using gensym) whose contents look like this:

(ns reloadergen2244)

(gen-class :name reloader
           :implements [javax.servlet.Filter]
           :impl-ns reloader)

It saves this to a hidden location (this is easily done, since doing so is one of the core features of boot) and then AOT-compiles it to produce the reloader.class file.

It also generates the Clojure file that implements the filter's doFilter method. Here's the content of that file:

(ns reloader
  (:import (javax.servlet Filter FilterChain FilterConfig
                          ServletRequest ServletResponse))
  (:require [ns-tracker.core :refer :all]))
(defn -init [^Filter this ^FilterConfig cfg])
(defn -destroy [^Filter this])
(def modified-namespaces (ns-tracker ["./"]))
(defn -doFilter
  [^Filter this
   ^ServletRequest rqst
   ^ServletResponse resp
   ^FilterChain chain]
  (doseq [ns-sym (modified-namespaces)]
    (println (str "reloading " ns-sym))
    (require ns-sym :reload))
  (.doFilter chain rqst resp))
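The change detection at the heart of this scheme - what ns-tracker provides - can be modeled with file mtimes. Here is a rough Python sketch (make_tracker is my own illustration; the real ns-tracker maps changed files to namespace symbols for (require ... :reload)):

```python
import os
import tempfile

def make_tracker(root):
    """Toy model of ns-tracker: remember file mtimes and, on each call,
    report the .clj files changed since the last check."""
    seen = {}
    def modified():
        changed = []
        for dirpath, _, files in os.walk(root):
            for f in files:
                if not f.endswith(".clj"):
                    continue
                path = os.path.join(dirpath, f)
                mtime = os.path.getmtime(path)
                if seen.get(path) != mtime:
                    seen[path] = mtime
                    changed.append(path)
        return changed
    return modified

# usage: the first call reports everything, later calls only what changed
src_dir = tempfile.mkdtemp()
path = os.path.join(src_dir, "reloader.clj")
open(path, "w").close()
track = make_tracker(src_dir)
print(track())  # the new file is reported
print(track())  # -> []
```

In the filter, this check runs on every request, which is why a simple page refresh picks up your edits.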

This process results in some transient files, which are filtered out of the final result. The only files we need are reloader.class (which the servlet container needs) and reloader.clj (to which reloader.class will delegate calls to the filter methods, like doFilter).

If you want to inspect the transient files, you can retain them by passing the --keep (short: -k) flag to the gae/build task. Here's an example of what you will find in WEB-INF/classes in that case (other files omitted):


Since the reloadergen* files are not needed by the app, they are removed by default.


This works fine for local development; however, it's just a waste in an application deployed to the cloud. Before deploying (gae/deploy), be sure to omit the gae/reloader task from your build pipeline; if you're using gae/build, use the --prod (short: -p) flag.

boot-gae: building and assembling service-based apps

[Second in a series of articles on using boot-gae to develop Clojure apps on GAE]

Google App Engine supports two kinds of application. The traditional kind is what I'll call a servlet app - a standard, Java Servlet application. It may contain multiple servlets and filters, but everything is in one WAR directory. Servlets can communicate with each other using several techniques, including direct method invocation, or using System properties to pass information, etc. The key point is that they need not send each other HTTP messages in order to cooperate.

The other kind of application, which I will call a service-based, or just services app, assembles one or more servlet apps into an application. Each servlet app is called a service (formerly: module), and functions as a micro-service in the assembled application. Such microservices collaborate via HTTP.

See Microservices Architecture on Google App Engine; Services: The building blocks of App Engine; and Configuration Files for more information.

boot-gae makes it easy to develop service-based applications, using the same code as for servlet applications. To build a service, do this (from the root of the service project):

$ boot gae/build -s

The -s (--service) switch tells boot-gae to build a service; the result will be placed in target/<servicename>. Building a service, unlike building a servlet app, will generate a jar file for the service. Install this:

$ boot install -f target/<servicename>/<service-jar-file-name>.jar

Do this for each service. Then, from the root directory of the service-based app, run the assemble task:

$ boot gae/assemble

To run the assembled app, use gae/run. The two commands can be combined:

$ boot gae/assemble gae/run

To interactively develop a service running in a services app, change to the service's root directory and run

$ boot gae/monitor -s

Now when you edit your service's code, the changes will be propagated to the assembled service-based app, where they will be loaded on page refresh.

How It Works

The service components and the services app must be correctly configured for this to work, of course. Each service component must include a :gae map in its build.boot file; it looks like this:

 :gae {:app-id "microservices-app"
       :version "v1"
       :module {:name "greeter"
                :app-dir (str (System/getProperty "user.home")

The :version string must conform to the GAE rules: The version identifier can contain lowercase letters, digits, and hyphens. It cannot begin with the prefix "ah-" and the names "default" and "latest" are reserved and cannot be used...Version names should begin with a letter, to distinguish them from numeric instances which are always specified by a number (see appengine-web.xml Reference).
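The quoted rules are easy to encode; here is a quick validator (valid_gae_version is a hypothetical helper encoding just the rules above, not an official check):

```python
import re

def valid_gae_version(v):
    """Check a version id against the quoted GAE rules: lowercase
    letters, digits, and hyphens; must not begin with 'ah-'; 'default'
    and 'latest' are reserved; should begin with a letter."""
    if v in ("default", "latest") or v.startswith("ah-"):
        return False
    return re.fullmatch(r"[a-z][a-z0-9-]*", v) is not None

assert valid_gae_version("v1")
assert not valid_gae_version("default")    # reserved
assert not valid_gae_version("ah-test")    # reserved prefix
assert not valid_gae_version("1abc")       # should begin with a letter
```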

The :app-dir string must be the path of the service-based app's root directory.

The :name string will be used (by gae/monitor -s) to construct the path of the service in its WAR directory in the services app; in this case, the result will be


The gae/monitor -s task will copy source changes to this directory.

The services app must also include the :gae map in its build.boot file, but without the :module entry. In addition, the component services must be included in the :checkouts vector; for example:

:checkouts '[[ "0.2.0-SNAPSHOT" :module "default" :port 8083]
            [tmp/greeter "0.1.0-SNAPSHOT" :module "greeter" :port 8088]
            [tmp/uploader "0.1.0-SNAPSHOT" :module "uploader" :port 8089]]

The first service listed will be the default service; it must be named "default".  The :module string here must match the :module :name string of the service's build.boot.

WARNING: this will change, so that service components will be listed in :dependencies.

Finally, the services app must contain a services.edn file, which looks like this:

{:app-id "boot-gae-greetings"
 ;; first service listed is default service
 :services [{:service "default"}
            {:service "greeter"}
            {:service "uploader"}]}

WARNING: this will change. We have all the information needed to assemble the app in build.boot, so this edn file is not needed.

See standard environment examples for working demos.

Previous article: Building Clojure Apps on Google App Engine with boot-gae
Next article: boot-gae: Interactive Clojure Development on Google App Engine

Friday, January 20, 2017

Building Clojure Apps on Google App Engine with boot-gae

It's relatively easy to get a Clojure application running on GAE's devserver; you just need to use gen-class to AOT compile a servlet. See for example Clojure in the cloud. Part 1: Google App Engine. The problem is that you then need to restart the devserver whenever you want to exercise code changes, which is way too slow.

One way around this limitation is to run Jetty or some other Java servlet container rather than the devserver.  See for example:
The problem with this strategy is exactly that it does not use the official development server from Google. That server is a modified version of Jetty, with strict security constraints, providing a near-exact emulation of the production environment (which also runs a version of Jetty). If you develop with some other servlet container, you won't know if your code is going to run in production until you actually deploy to the cloud.

So there are two problems to be addressed if we want to use the official devserver.  One is that Java servlets must be compiled, since the servlet container will search for byte-code on disk when it comes time to load a servlet; most solutions I've seen end up AOT-compiling the entire app. The other problem is that GAE's security constraints will prevent your app from accessing anything outside of the webapp's directories. That means, for example, that any jar dependencies should be installed in WEB-INF/lib. If you want to load Clojure source files at runtime, they must be on the classpath, e.g. in WEB-INF/classes.

boot-gae is a new set of tools that solves these problems. Using it, you can easily develop Clojure apps with REPL-like interactivity in the devserver environment. It automates just about everything, so building and running an application is as simple as:

$ boot gae/build gae/run

To develop interactively, switch to another terminal session and run

$ boot gae/monitor

It's that simple. Now changes in your source tree will be propagated to the output tree, where they will be reloaded on page refresh.

The gae/build task is a convenience task that composes a number of core tasks that take care of everything:

  • installing jar dependencies in WEB-INF/lib
  • generating the config files WEB-INF/appengine-web.xml and WEB-INF/web.xml
  • generating one stub .class file for each servlet and filter
  • copying Clojure source files from the source tree to WEB-INF/classes for runtime reloading
  • copying static web assets (html, js, css, jpeg, etc.) from the source tree to the appropriate output directory
  • generating and installing a reloader filter, which will be used to detect and reload changed namespaces at runtime

The process is controlled via simple *.edn files. For example, servlets are specified in servlets.edn, which looks like this:

{:servlets [{:ns greetings.hello
             :name "hello-servlet"
             :display {:name "Awesome Hello Servlet"}
             :desc {:text "blah blah"}
             :urls ["/hello/*" "/foo/*"]
             :params [{:name "greeting" :val "Hello"}]
             :load-on-startup {:order 3}}
            {:ns greetings.goodbye
             :name "goodbye-servlet"
             :urls ["/goodbye/*" "/bar/*"]
             :params [{:name "op" :val "+"}
                      {:name "arg1" :val 3}

                      {:name "arg2" :val 2}]}]}

Here two servlets are specified. One task - gae/servlets - will use this data to generate a "servlets generator" source file that looks like this:

(ns servletsgen2258)

(gen-class :name greetings.hello
           :extends javax.servlet.http.HttpServlet
           :impl-ns greetings.hello)

(gen-class :name greetings.goodbye
           :extends javax.servlet.http.HttpServlet
           :impl-ns greetings.goodbye)

This file is then AOT-compiled to produce the two class files, WEB-INF/classes/greetings/hello.class and WEB-INF/classes/greetings/goodbye.class. The programmer then need only supply an implementation for the service method of HttpServlet, in an appropriately named Clojure file - in this case, in the source tree, greetings/hello.clj and greetings/goodbye.clj will both contain something like (defn -service ...) or (ring/defservice ...).
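The generation step itself is mechanical; here is a sketch of what such a task might do (gen_servlets_source is my own illustrative function, not boot-gae's implementation):

```python
def gen_servlets_source(gen_ns, servlet_specs):
    """Sketch of the gae/servlets generation step: from servlets.edn-style
    data, emit one gen-class form per servlet, each extending HttpServlet
    and delegating to its implementation namespace."""
    forms = ["(ns %s)" % gen_ns]
    for spec in servlet_specs:
        forms.append(
            "(gen-class :name %(ns)s\n"
            "           :extends javax.servlet.http.HttpServlet\n"
            "           :impl-ns %(ns)s)" % {"ns": spec["ns"]})
    return "\n\n".join(forms)

src = gen_servlets_source("servletsgen2258",
                          [{"ns": "greetings.hello"},
                           {"ns": "greetings.goodbye"}])
print(src)
```

AOT-compiling the generated source yields the stub .class files; everything else stays dynamically reloadable Clojure.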

Another task, gae/webxml, will use the same information to generate WEB-INF/web.xml.

Thus with boot-gae, only minimal servlet and filter stubs are AOT-compiled. The gen-class source code is itself automatically generated, then AOT-compiled to produce the corresponding class files, and discarded. The programmer never even sees this code (but can keep it for inspection by passing a -k parameter).

boot-gae is available on GitHub; a companion repository contains sample code with commentary.

Thursday, July 7, 2016

Java cacerts on Intel Edisons and Gateways

The Intel Edison ships with loads of ca certificates in /etc/ssl/certs, and it also ships with Java 8 (OpenJDK), but it does not ship with a pre-configured cacerts file for Java. Ditto for the Wind River Linux installation that is preinstalled on the Dell 3290 gateway device.

So if you try to use Java to access something over HTTPS you'll get a security exception. For example, if you want to run Clojure using the excellent boot build tool, the first time you try "$ boot repl" on either WRLinux/Gateway or Yocto/Edison, you’re going to get an exception along the lines of:

Exception in thread "main"
java.lang.RuntimeException: Unexpected error:
the trustAnchors parameter must be non-empty

The problem is that although both systems ship with a large set of ca certificates, as well as preinstalled OpenJDK 1.8, the latter is not configured to use the former out of the box.  Since I virtually never have to deal with this sort of thing it took me a couple of hours to figure out what the problem was and how to fix it.  All you have to do is create a Java “trust store” by running a Java utility called keytool once for each certificate you want to add to the trust store. (A trust store is analogous to a key store: the latter is where you keep your keys, the former is where you keep public certificates of trust; if you’re a glutton for punishment take a look at Configuring Java CAPS for SSL Support.)
Here’s what you need to know to make sense of that:
  • Java is installed at /usr/lib64/jvm (WRLinux on the Gateway), or /usr/lib/jvm (Yocto on Edison)
  • The standard place for the Java truststore is $JAVA_HOME/jre/lib/security/cacerts. Note that cacerts is a file, not a directory. You create it with keytool.
  • keytool is located in $JAVA_HOME/jre/bin
  • The certificates are in /etc/ssl/certs, which is a directory containing soft links to /usr/share/ca-certificates

Now you just need something to save you the drudgery of installing everything in /etc/ssl/certs into the Java trust store, since keytool must be given an individual file rather than a directory of files as input. Fortunately, somebody already wrote that script for us. You can find it at Introduction to OpenJDK in the subsection Install or update the JRE Certificate Authority Certificates (cacerts) file. I just copied that to ~/bin/mkcacerts, chmodded it to make it executable, and then created a one-time helper:


# use this for Yocto/Edison:
# LIB=lib
# use this for WRLinux/Gateway:
LIB=lib64
# back up any existing cacerts file
if [ -f /usr/$LIB/jvm/java-8-openjdk/jre/lib/security/cacerts ]; then
  mv /usr/$LIB/jvm/java-8-openjdk/jre/lib/security/cacerts \
     /usr/$LIB/jvm/java-8-openjdk/jre/lib/security/cacerts.bak
fi
# if you have a ca-certificates.crt file, use this:
# -f "/etc/ssl/certs/ca-certificates.crt"
# otherwise use
# -d "/etc/ssl/certs/"
./mkcacerts                 \
        -d "/etc/ssl/certs/"           \
        -k "/usr/$LIB/jvm/java-8-openjdk/bin/keytool"      \
        -s "/usr/bin/openssl"          \
        -o "/usr/$LIB/jvm/java-8-openjdk/jre/lib/security/cacerts"

There is one tricky bit: notice the "-d" parameter in the script above. Use that if /etc/ssl/certs contains certificates but no "ca-certificates.crt" - this was the case in earlier versions of the Edison. But if you find /etc/ssl/certs/ca-certificates.crt, then replace that line as noted in the comment.
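That decision is easy to automate; here is a small sketch (mkcacerts_source_args is a hypothetical helper, not part of the linked script):

```python
import os

def mkcacerts_source_args(certs_dir="/etc/ssl/certs"):
    """Pick the source flag for mkcacerts as described above: prefer a
    bundled ca-certificates.crt if one exists, otherwise point at the
    directory of individual certificates."""
    bundle = os.path.join(certs_dir, "ca-certificates.crt")
    if os.path.isfile(bundle):
        return ["-f", bundle]
    return ["-d", certs_dir + "/"]

# On a system with the bundle this yields ["-f", ...]; otherwise ["-d", ...].
print(mkcacerts_source_args())
```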

Now cd to ~/bin and run the helper script.  You'll see a bunch of alarming-looking output as it churns through the certificates, adding them to the Java truststore (cacerts file). It takes a while, but once it's done you should be good to go with Java and HTTPS.