Tuesday, May 12, 2020

Bazelizing Coda

I recently decided to spend most of my newly copious free time getting up to speed on blockchain, and in particular the Coda project.  Coda looks very promising; the zkSNARK stuff is about as close to "indistinguishable from magic" as it gets, and it's very recent, so I expect it will become more and more common in blockchain technologies. Plus the Coda folks are actively encouraging (and incentivizing) community participation in various ways, so I figure time spent working on it is time well spent.

My first order of business, aside from figuring out how to run a sandbox node (i.e. in a docker container), is to build the thing on my Mac so that I can start fiddling with the code.  Unfortunately, I was unable to do that. They have instructions for building on a Mac, outside of docker, but I immediately ran into problems. I started to debug the build process and quickly decided that I would rather throw caution to the wind and just bazelize the whole thing. Or at least spend a few days doing so, to get a feel for how much work a complete Bazelization would involve.

I have some previous experience doing this with a moderately complex C/C++ project, so I know that in most cases adding Bazel support is relatively easy. Furthermore, Bazel has good support for cross-platform builds (I said good, not simple), and I want to be able to build Linux and other binaries on my Mac.  A major selling point of the Coda protocol is that it is lightweight; if that is true, it should run on smallish systems - I'm thinking Raspberry Pi, Thingy:91, Khadas SBCs (Fuchsia!), etc. Support for cross-platform builds is essential.  And finally, the Coda build code is a bit on the ad-hoc side; it involves at least three build systems (Dune for OCaml, the main implementation language, plus Nix, plus the build systems of various dependencies), a bunch of shell scripts, dockerfiles, etc.  Bazelization would reduce the complexity to a considerable degree, and also, in principle, improve reliability and quality.

Another incentive: bazelizing a codebase is a good way to learn its structure.

So off I went. As I expected, much of the work was pretty easy. I was able to Bazelize most of the C/C++ dependencies in about a day and a half - and that includes refreshing my memory, since I had not worked with Bazel for a couple of years. I did run into some problems, but I made enough progress over the course of a week to decide to finish and polish the thing.  This is the first in a series of articles describing what I did and how I dealt with Bazelization, in case it may be helpful either to Coda people or Bazel people.

There are two phases. Phase I is to bazelize the C/C++ dependencies, and Phase II is to bazelize the OCaml part.  Phase I should be relatively easy since I have some experience in that area; dunno about Phase II, since I don't have any experience with OCaml, and have only dabbled in writing the kind of Bazel rules needed to support it.

Phase I: C/C++ libraries

Phase I involves several steps:

  • Local native builds
    • get local builds working on my machine (Mac Catalina)
    • get local builds working on Linux and Windows.  The former should be trivial, given working Mac builds.  I think Bazel has pretty good Windows support these days, so that should only be a bit more work, maybe sorta.
  • Cross-platform builds
    • From host X targeting Y, with various HW architectures, etc.
      • Priorities: start with host Mac targeting Linux on x86_64 and ARM (Raspberry Pi, Android)
      • Support Android NDK and at least one musl-based toolchain
    • Support for a variety of toolchains
    • Easy extensibility - it should be easy to add another target toolchain.
    • Use the latest Bazel facilities (e.g. platforms and toolchains).

A. Local native builds

Here is the list of direct dependencies:
  • boost
  • libffi
  • libgmp
  • libpatch
  • libprocps
  • libsodium
  • jemalloc
  • libomp
  • openssl
  • libpq
  • libsnark
  • zlib
Boost was easy, since somebody already took the trouble to bazelize it and make the code available as a library of Bazel rules (rules_boost). All you have to do is declare the github repo as a "git_repository" in your Bazel WORKSPACE file; then using a Boost module is as easy as declaring it as a dependency in your cc_library target, like so:  deps = ["@boost//:algorithm"]. Isn't that awesome?
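
For illustration, the WORKSPACE setup looks something like this (the commit pin is elided here; check the rules_boost README for the current incantation):

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "com_github_nelhage_rules_boost",
    remote = "https://github.com/nelhage/rules_boost",
    branch = "master",  # better: pin a specific commit
)

load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")
boost_deps()

Then any cc_library can pull in a Boost module:

cc_library(
    name = "uses_boost",        # hypothetical consumer target
    srcs = ["uses_boost.cc"],
    deps = ["@boost//:algorithm"],
)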

Most of the other libs were straightforward, but a few took a little work. There are basically two ways to add Bazel support to a third party library from the outside. You can write BUILD files containing the target recipes needed to build the code, or, if the library already contains a build system, you can have Bazel run it.  This used to be rather a pain, but at some point in the last few years Bazel added support for the very common configure-make and CMake build systems in the form of the rules_foreign_cc library.

A good example of the use of this library involves libsodium. Release versions of this library come with the standard "configure" shell script, and the build instructions are to run "./configure" and then "make".  The configure_make rule defined in rules_foreign_cc does this for you. So all you need to do to use libsodium is register it as an external repository in your WORKSPACE file.  First grab the rules_foreign_cc library:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
    name = "rules_foreign_cc",
    strip_prefix="rules_foreign_cc-master",
    url = "https://github.com/bazelbuild/rules_foreign_cc/archive/master.zip",
    sha256 = "55b7c4678b4014be103f0e93eb271858a43493ac7a193ec059289fbdc20b9023",
)
load("@rules_foreign_cc//:workspace_definitions.bzl", "rules_foreign_cc_dependencies")
rules_foreign_cc_dependencies()

Then grab libsodium:


all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""
http_archive(
  name="libsodium",
  type="zip",
  url="https://github.com/jedisct1/libsodium/archive/1.0.18-RELEASE.zip",
  sha256="7728976ead51b0de60bede2421cd2a455c2bff3f1bc0320a1d61e240e693bce9",
  strip_prefix = "libsodium-1.0.18-RELEASE",
  build_file_content = all_content,
)


Then in your BUILD file load the rules library and use the configure_make rule it defines:

load("@rules_foreign_cc//tools/build_defs:configure.bzl", "configure_make")
configure_make(
    name = "libsodium",
    configure_env_vars = { "AR": "" }, ## macos needs this
    lib_source = "@libsodium//:all",
    out_lib_dir = "lib",
    shared_libraries = ["libsodium.dylib"], ## macos
    visibility = ["//visibility:public"],
)

Now you can add it as a dependency wherever you need it, e.g. in test/libsodium/lib/BUILD:

cc_library(
    name = "test_libsodium",
    srcs = glob(["*.cpp"]),
    hdrs = glob(["*.h"]),
    deps = ["//:libsodium"], # meaning, the libsodium target in the BUILD file at the project root
    visibility = ["//visibility:public"]
)

Easy-peasy. OpenSSL and some other libs also work like this.  Unfortunately not all configure-make packages are this easy. Sometimes they ship with autogen.sh but not the configure file it generates, in which case Bazel's configure_make rule will do you no good. Such is the case with jemalloc.  But Bazel provides a rule called genrule (general rule) that allows us to deal with this situation. Briefly, you use genrule to run autogen.sh, and then list that as input to the configure_make rule. Works great - but you do have to inspect the code to figure out how to write the genrule.  Bazel insists that all the inputs and all the outputs of a genrule be explicitly listed (this is annoying but good, since it helps guarantee reproducible builds), and since different libs will have different files, you have to write the genrule by hand.  This was necessary for jemalloc and libffi.
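
To make the pattern concrete, here is a hedged sketch of what such a genrule might look like for jemalloc (the target name is hypothetical, the details of my actual rules differ, and autogen.sh requires autoconf on the host):

genrule(
    name = "jemalloc_autogen",  # hypothetical; lives in the jemalloc BUILD file
    srcs = glob(["**/*"]),      # every input must be listed explicitly...
    outs = ["configure"],       # ...and every promised output
    cmd  = "\n".join([
        "cp -R external/jemalloc work",  # work on a copy; inputs are read-only
        "(cd work && ./autogen.sh)",     # generate the configure script
        "cp work/configure $@",          # copy it to the Bazel-defined output path
    ]),
)

The generated configure script can then be listed, along with the sources, as input to the configure_make target.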

The rules_foreign_cc library also supports CMake in the form of a cmake_external rule. That works for libomp (OpenMP).
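
Usage is analogous to configure_make; something like this, assuming an all-content filegroup for the libomp sources (the repo name here is a placeholder):

load("@rules_foreign_cc//tools/build_defs:cmake.bzl", "cmake_external")

cmake_external(
    name = "libomp",
    lib_source = "@openmp//:all",         # hypothetical external repo
    out_lib_dir = "lib",
    shared_libraries = ["libomp.dylib"],  ## macos
    visibility = ["//visibility:public"],
)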

Debugging a build failure

libsnark is a little tricky.  It uses CMake, so we use cmake_external as above, but out of the box, it fails.  Building with --verbose_failures --subcommands --sandbox_debug, the message is:

$ bazel build //:libsnark
...
rules_foreign_cc: Build script location: bazel-out/darwin-fastbuild/bin/libsnark/logs/CMake_script.sh
rules_foreign_cc: Build log location: bazel-out/darwin-fastbuild/bin/libsnark/logs/CMake.log

Target //:libsnark failed to build
(02:07:58) INFO: Elapsed time: 2.371s, Critical Path: 1.93s
(02:07:58) INFO: 0 processes.
(02:07:58) FAILED: Build did NOT complete successfully


So we look in bazel-out/darwin-fastbuild/bin/libsnark/logs/CMake.log and find:

-- Found PkgConfig: /usr/local/bin/pkg-config (found version "0.29.2")
-- Checking for module 'libcrypto'
--   No package 'libcrypto' found

Hmm. We saw a message about building OpenSSL while the build was running. Let's check the working area.  Bazel builds stuff in its own tmp dirs. You can find them listed in the logfile we just examined (CMake.log in this case; config.log in the configure_make case). Here's an example:

$ less bazel-out/darwin-fastbuild//bin/libsnark/logs/CMake.log
Bazel external C/C++ Rules #0.0.8. Building library 'libsnark'
Environment:______________
DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer
TMPDIR=/var/folders/wz/dx0cgvqx5qn802qmc3d4hcfr0000gp/T/
SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk
EXT_BUILD_ROOT=/private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__
XCODE_VERSION_OVERRIDE=11.4.1.11E503a
INSTALLDIR=/private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__/bazel-out/darwin-fastbuild/bin/libsnark
__CF_USER_TEXT_ENCODING=0x1F6:0x0:0x0
PATH=/private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__:/usr/gnu/bin:/usr/local/bin:/bin:/usr/bin:.
BUILD_TMPDIR=/var/folders/wz/dx0cgvqx5qn802qmc3d4hcfr0000gp/T/tmp.Y34P4w2t
PWD=/private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__
EXT_BUILD_DEPS=/var/folders/wz/dx0cgvqx5qn802qmc3d4hcfr0000gp/T/tmp.rF8INNUy
SHLVL=2
BUILD_LOG=bazel-out/darwin-fastbuild/bin/libsnark/logs/CMake.log
BUILD_SCRIPT=bazel-out/darwin-fastbuild/bin/libsnark/logs/CMake_script.sh
APPLE_SDK_PLATFORM=MacOSX
APPLE_SDK_VERSION_OVERRIDE=10.15
_=/usr/bin/env

Since this is an external lib, we want to look in EXT_BUILD_ROOT:

$ find /private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__ -name libcrypto*
/private/var/tmp/_bazel_gar/3367fabd230435c540fea97e1a70bf66/sandbox/darwin-sandbox/10/execroot/__main__/bazel-out/darwin-fastbuild/bin/copy_openssl/openssl/lib/libcrypto.a

And there it is. Why couldn't CMake find it?  I dunno. Maybe because it was consulting /usr/local/bin/pkg-config. That wouldn't work, since Bazel builds libcrypto in its own little sandbox. So maybe this is a weakness in the cmake_external rule, or maybe I haven't configured it properly.

In any case, I decided that rather than debug this, I would bazelize libsnark.  Mainly because I figure that would be a good way to get to know a little more about libsnark, which is used not only by Coda, but also by Zcash, and presumably by other blockchain projects.  How many SNARK implementations can there be, after all?

libsnark depends on various libs as well: xbyak, ate-pairing, libff, libfqfft, and some others. I've got most of them done.  It turns out doing this was a good idea, because it exposed problems that did not occur with Coda's deps.  For instance, libgmp builds just fine until you --enable-cxx; then you have a problem.  The fix is simple, but it took me the better part of a day to find it, haha.

So the current status is that most of this stuff is bazelized, at least for me, on my Mac. You can get an idea of what it looks like at xbyak and ate-pairing.  What I'm now working on is support for the newer Bazel stuff like platforms and toolchains, which includes support for local native builds on Linux and Windows.  Once that is a little further along I'll push it to github and write a followup article with links.

Monday, May 14, 2018

Bazel: genrule patching an external repo

Just for fun I decided to try a quick-and-dirty Bazel configuration for Iotivity (github mirror).  It turned out to be much easier than I had expected. Over the space of a weekend I was able to enable Bazel builds for the core C/C++ API and also the Java and Android APIs. These should be considered Proof of Concept for the moment, since they need to be refined a bit (compiler options, platform-specific configuration, etc.).  Only tested on Mac OS X, but they should be easily adapted to Linux and Windows.

I did come across one hairball that took the better part of a day to figure out.  It involves patching an external package. Since fixing it involved various troubleshooting techniques that are not documented, this article will describe the problem, the solution, and some of the ways that I figured out what was going on. I'll also try to explain how external repos and genrules work.

Iotivity uses several external packages (libcoap, mbedtls, tinycbor).  The Scons-based build system looks for them, and if it does not find them, displays a message instructing the user to download the package and exits. When Scons is rerun, it builds the packages along with Iotivity.

Bazel offers a much better solution. You just set such packages up as external repositories and Bazel will download and build them as needed, without user intervention. It's embarrassingly simple. Here's how tinycbor is handled:

In file WORKSPACE:

new_http_archive(
    name = "tinycbor",
    urls = ["https://github.com/intel/tinycbor/archive/v0.5.1.zip"],
    sha256 = "48e664e10acec590795614ecec1a71be7263a04053acb9ee81f7085fb9116369",
    strip_prefix = "tinycbor-0.5.1",
    build_file = "config/tinycbor.BUILD",
)

This defines an external repository, whose label is @tinycbor.  In file config/tinycbor.BUILD you specify the build the same way you would if it were a local repo:

cc_library(
    name = "tinycbor-lib",
    copts = ["-Iexternal/tinycbor/src"],
    srcs = ["src/cborparser.c",
            "src/cborparser_dup_string.c",
            "src/cborencoder.c",
            "src/cborerrorstrings.c"],
    hdrs = glob(["src/*.h"]),
    visibility = ["//visibility:public"]
)

That's it! Test it by running this from the command line:  $ bazel build @tinycbor//:tinycbor-lib. Bazel will download the library to a hidden directory, unzip it, compile it, and make it available to other tasks under the @<repo>//<target> label, i.e. @tinycbor//:tinycbor-lib.  Add it as a dependency for a cc_* build like so:  deps = ["@tinycbor//:tinycbor-lib"]
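
For example, a consumer target (hypothetical) would look like this:

cc_binary(
    name = "cbor_demo",   # hypothetical example target
    srcs = ["cbor_demo.c"],
    deps = ["@tinycbor//:tinycbor-lib"],
)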

The problem is that Iotivity patches the mbedtls library. Furthermore, it provides a customized config.h intended to replace the one that comes with the library (using copy rather than patch). It took a considerable amount of trial and error to figure out how to do this with Bazel. So we have four tasks:

  1. Configure the external repo as a new_http_archive rule in WORKSPACE
  2. Define a genrule to patch the library
  3. Arrange for the custom config.h to replace the default version
  4. Define a cc_library rule to compile the patched code
Here's how I set things up:

WORKSPACE:
new_http_archive(
    name = "mbedtls",
    urls = ["https://github.com/ARMmbed/mbedtls/archive/mbedtls-2.4.2.zip"],
    sha256 = "dacb9f5dd438c456b9ef6627637f46e16fd41e86d828217ec9f8547d3d22a338",
    strip_prefix = "mbedtls-mbedtls-2.4.2",
    build_file = "config/mbedtls/BUILD",
)

In config/mbedtls I have the following files: BUILD, ocf.patch, and config.h.

A Bazel genrule allows you to run Bash commands from Bazel. It must list all inputs and all outputs, so that Bazel can guarantee that you indeed output exactly what you promised, no more and no less. It writes output to ./bazel-genfiles/ which it then makes available to other tasks. Unfortunately the documentation is a little weak, so I had to discover the hard way just what Bazel considers an output.


Exploring external repos using genrule

First let's take a look at what happens when Bazel downloads and unzips an external repo. We can do this using a simple genrule in config/mbedtls/BUILD:

genrule(
    name = "gentest",
    srcs = glob(["**/*"]),
    outs = ["genrule.log"],
    cmd  = "pwd > $@"
)

Run "$ bazel build @mbedtls//:gentest". You should get a message like the following:

Target @mbedtls//:gentest up-to-date:
  bazel-genfiles/external/mbedtls/genrule.log

Browse the genrule.log file and you'll see it contains the working directory of the genrule cmd, something like:

/private/var/tmp/_bazel_gar/a2778d8bc5379ccd6c684731e73b4da6/sandbox/4850556017797389628/execroot/__main__

The first lesson here is that Bazel sandboxes execution for this external repo.

The second lesson is that you must write outputs to the appropriate Bazel-defined directory. That's what the $@ is for: it's the name of the real output file. If you use "pwd > genrule.log", you'll get an error: "declared output 'external/mbedtls/genrule.log' was not created...".  That does not mean that genrule.log was not written; it means rather that it was written in the wrong place.
 
You can see what $@ is by using "echo $@ > $@"; the log will then contain:

bazel-out/darwin-fastbuild/genfiles/external/mbedtls/genrule.log

Now try changing the cmd to "ls > $@".  Then genrule.log should contain:

bazel-out
external

Now try "ls -R > $@" to get a recursive listing of the tree; examine it and you will see that Bazel has unzipped the mbedtls library in ./external/mbedtls.

Finally, try this cmd: "\n".join(["ls > genrule.log", "ls > $@"])

This will show you that ls > genrule.log gets written to the execroot, whereas ls > $@ gets written to the right place.


Applying a patch

Now let's write a genrule to apply the patch. This is a little trickier, since it has multiple outputs. If you try to use $@, Bazel will complain. Furthermore, since patch updates files in place, we need to copy the library to a new directory and apply the patch there. Finally, we need to make the patch file available - since our genrule will execute in the sandboxed execroot, we do not automatically have access to config/mbedtls/ocf.patch.

First let's expose ocf.patch. This is simple but involves an obscure function. Put the following at the top of config/mbedtls/BUILD:  exports_files(["config.h", "ocf.patch"])  This will make config/mbedtls/ocf.patch available under a Bazel label: "@//config/mbedtls:ocf.patch"

Our genrule starts out like this:

genrule(
    name = "patch",
    srcs = glob(["**/*.c"])
         + glob(["**/*.h"])
         + glob(["**/*.data"])
         + glob(["**/*.function"])
         + glob(["**/*.sh"])
         + ["@//config/mbedtls:ocf.patch"],
...)

The globs pick up all the files that are listed in the patch file and thus required as input (plus others, but that's ok). It also must list the patch file, since that is an input. All inputs must be explicitly listed.

Our command will look like this:

    cmd  = "\n".join([
        "cp -R external/mbedtls/ patched",
        "patch -dpatched -p1 -l -f < $(location @//config/mbedtls:ocf.patch)"
    ....])

We first copy the entire tree to a new directory "patched" (e.g. external/mbedtls/include -> patched/include, etc.).  We then need to add -dpatched to the patch command, so it runs from the correct subdir.  To access the patch file we use $(location @//config/mbedtls:ocf.patch); this is a Bazel feature that returns the correct (Bazel-controlled) path for ocf.patch.

This will apply the patch, but it will not produce the output required by genrule. It's just like "ls > genrule.log" above: the output gets written, but not in the right place. Where is the right place? That's what $(@D) is for. It's a so-called "Makefile variable"; see Other Variables available to the cmd attribute of a genrule. It resolves to the Bazel-defined output directory when you have multiple outputs. In this case:

bazel-out/darwin-fastbuild/genfiles/external/mbedtls.  (Compare this to the value of $@).

So now we need to copy the files we care about to $(@D). Fortunately this is easy; everything we need is already under patched/ so we just add "cp -R patched $(@D)" to our cmd.

Finally we need to specify the outputs.  Note that we only need source files for the library, even though the patchfile applies to additional files (e.g. some programs and test files). So we can limit our output to those files:

    outs = ["patched/" + x for x in glob(["**/library/*.c"])]
         + ["patched/" + x for x in glob(["**/*.h"],
                                   exclude=["**/config.h"])],

Here we use a Python facility (the language of Bazel is a Python variant).  We are only interested in the library files so we do not output any of the other stuff. We also exclude config.h since we are supplying a custom version.

NOTE: through trial and error, I have discovered that genrule will allow you to output files that are not listed in the outs array, but it will not emit them. In this case, our command copies the entire source tree to $(@D), but our outs array only contains c files and h files.  The resulting genfiles tree contains only those files, to the exclusion of various other files in the source (e.g. *.data). So evidently Bazel is smart enough to eliminate files not listed in outs from $(@D).

Here's the final genrule:

genrule(
    name = "patch",
    srcs = glob(["**/*.c"])
         + glob(["**/*.h"])
         + glob(["**/*.data"])
         + glob(["**/*.function"])
         + glob(["**/*.sh"])
         + ["@//config/mbedtls:ocf.patch"],
    outs = ["patched/" + x for x in glob(["**/library/*.c"])]
         + ["patched/" + x for x in glob(["**/include/**/*.h"],
                                         exclude=["**/config.h"])],
    cmd  = "\n".join([
        "cp -R external/mbedtls/ patched",
        "patch -dpatched -p1 -l -f < $(location @//config/mbedtls:ocf.patch)",
        "cp -R patched $(@D)",
        ])
)

Build the library from the patched sources


First off, get the vanilla build working. This is pretty easy; it looks similar to the tinycbor example above.

Unfortunately, getting the lib to build using the patches turned out to be quite difficult. What I came up with is the following (which I do not entirely understand).

First, I had a devil of a time getting the header paths right. In the end the only thing I found that works is to list them all explicitly; globbing does not work.  So I have:

mbedtls_hdrs = ["patched/include/mbedtls/aes.h",
                "patched/include/mbedtls/aesni.h",
                "patched/include/mbedtls/arc4.h",
           ...
]

Then I have:

cc_library(
    name = "mbedtls-lib",
    copts = ["-Ipatched/include",
             "-Ipatched/include/mbedtls",
             "-Iconfig",
             "-Iconfig/mbedtls"],
    data = [":patch"],
    srcs = [":patch"],
    hdrs = mbedtls_hdrs + ["@//config/mbedtls:config.h"],
    includes = ["patched/include", "patched/include/mbedtls", "x"],
    visibility = ["//visibility:public"]
)

Omitting either hdrs or includes causes breakage, dunno why.

For that matter, to be honest, I don't yet know if the build is good, because I have not used the lib with a running app yet.  But it builds!


Tuesday, September 12, 2017

Bazel: Building a JNI Lib

Here's how I managed to use Bazel to compile a JNI wrapper for a C library.  To understand it you'll need to understand the basics of Bazel; you should go through the Java and C++ tutorials on the Bazel site, and understand workspaces, packages, targets, and labels. You will also need to understand Working with external dependencies.  That documentation is a little thin, but in this article I'll try to explain how it works, at least as I understand it.

To start, a JNI wrapper for a C/C++ library - let's call it libfoo - will involve three things (native file extensions omitted since they are platform-dependent):
  1. The Java jar that exposes the API - we'll call it libfooapi.jar in this case
  2. The JNI layer (written in C in this case, C++ also works) that wraps your C library (translates between it and the API jar) - we'll call this libjnifoo
  3. Your original C library - libfoo
So the task of your build is to build the first two.

Code Organization

My C library (libfoo) is built (using Bazel) as a separate project on the local filesystem. This makes it an "external Bazel dependency"; we'll see below how to express this in our JNI workspace/build files.

The source for the JNI lib project looks something like this (on OS X):
$ tree foo
foo
├── BUILD
├── README.adoc
├── WORKSPACE
├── src
│   ├── c
...
│   │   ├── jni_init.c
│   │   ├── jni_init.h
... other jni layer sources ...
│   │   └── z.h
│   └── main
│       ├── java
│       │   └── org
│       │       └── foo
│       │           ├── A.java
...
│       │           ├── FOO.java
...
Note that WORKSPACE establishes the project root; you will execute Bazel commands within the foo directory. Note also that we only have one "package" (i.e. directory containing a BUILD file). We're going to build the jar and the jni lib as targets in the same package.

The third thing we need is the JDK, since our JNI code has a compile-time dependency on "jni.h"; this is a little bit tricky, since this too is an external dependency, and you cannot brute-force it by giving an absolute path to the JDK include directories - Bazel rejects such paths. We'll see how to deal with such "external non-Bazel dependencies" below.

WORKSPACE

My WORKSPACE file defines my libfoo library as a local, external, Bazel project:

local_repository(
    name = "libfoo",
    path = "/path/to/libfoo/bazel/project",
)

In this example, /path/to/libfoo/bazel/project will be a Bazel project - it will contain a WORKSPACE file and one or more BUILD files.  Defining a local "repository" like this in the workspace puts a (local) name on the "project" referenced by the path.  Note that projects do not have proper names; a "project" is really just a repository of workspaces, packages, and targets defined by WORKSPACE and BUILD files - hence the name "local_repository".

The "external" output dirs

When you define an external repo like this, Bazel will create (or link) the necessary resources in subdirectories of the output directories, named appropriately; in this case, "external/libfoo".

In other words, if you define @foo_bar, you will get "external/foo_bar" in the output dirs.

TODO: explain - does Bazel just create soft links to the referenced repo?

JDK Dependency

Our JNI library will have a compile-time dependency on the header files of the local JDK. Many more traditional build systems would allow you to express this by adding the absolute file path to those include directories; Bazel disallows this.  Instead, we need to define such resources as an external repository; in this case, a non-Bazel external repository.

You could do this yourself for JDK resources, at least in principle (I tried and failed).  Fortunately Bazel predefines the repository, packages, and targets you need.  The external repo you need is named "@local_jdk" (note: underscore, not dash); the targets are defined in https://github.com/bazelbuild/bazel/blob/117da7a947b4f497dffd6859b9769d7c8765443d/src/main/java/com/google/devtools/build/lib/bazel/rules/java/jdk.WORKSPACE.  Frankly I do not completely understand how the local_jdk definitions work, but they do. In this example, we will use:
  • "@local_jdk//:jni_header"
  • "@local_jdk//:jni_md_header-darwin"
(If you look at the source, you will see that these are labels for "filegroup" targets - a convenient way to reference a group of files.)
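
Schematically, a filegroup is just a named bundle of files that other targets can reference; the jni_header target, for instance, amounts to something like this (simplified from Bazel's jdk build file):

filegroup(
    name = "jni_header",
    srcs = ["include/jni.h"],  # the lone file this label resolves to
)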

(See also https://github.com/bazelbuild/bazel/blob/master/src/main/tools/jdk.BUILD - I don't know how this is related to the other file.)

BUILD

The local name for the external repository (project) specified in your WORKSPACE makes it possible to refer to it using Bazel labels.  In your BUILD files you use the @ prefix to refer to such external repositories; in this case, "@libfoo//src:bar" would refer to the bar target of the src package of the libfoo repository, as defined in the WORKSPACE above. Note that because of the way Bazel labels are defined, we can expect to find target "bar" defined in file "src/BUILD" in the libfoo repo.

My buildfile has two targets, one for the Java, one for the C.  The Java target is easy:

java_library(
    name = "fooapi",  # will produce libfooapi.jar
    srcs = glob(["src/main/**/*.java"]))

The C target is a little more complicated:

cc_library(
    name = "jni",
    srcs = glob(["src/c/*.c"]) + glob(["src/c/*.h"])
    + ["@local_jdk//:jni_header",
       "@local_jdk//:jni_md_header-darwin"],
    deps = ["@libfoo//src/ocf"],
    includes = ["external/libfoo/src",
                "external/libfoo/src/bar",
                "external/local_jdk/include",
                "external/local_jdk/include/darwin"],
)

Important: note that we put the @local_jdk labels in srcs, not deps.  That's because (I think) they are filegroups, and the labels you put in deps should be "rule" targets rather than file targets.

Exposing the headers

Note that it is not sufficient to express the dependencies on local_jdk; you must also specify the include directories within that repo in order to expose jni.h etc.  That's what the "includes" attribute is for.  You must list all the (external) directories needed by your sources.

Saturday, January 21, 2017

boot-gae: Interactive Clojure Development on Google App Engine

[Third in a series of articles on boot-gae]

boot-gae supports interactive development of Clojure applications on GAE. It does not have a true REPL, but it's pretty close: edit the source, save your edits, refresh the page. Your changes will be loaded by the Clojure runtime on page refresh.

This is mildly tricky on GAE. GAE security constraints prevent the Clojure runtime from accessing the source tree, since it is not on the classpath. Nothing outside of the webapp's root directory tree can be on the classpath.

Part of the solution is obvious: monitor the source tree, and whenever it changes, copy the changed files to the output directory. The built-in watch task makes this easy; gae/monitor composes that task with some other logic to make it work.

The tricky bit here is to make sure the changed files get copied to the appropriate place in the output directory; for Clojure source files, that means

target/WEB-INF/classes    ;; for servlet apps

target/<servicename>/WEB-INF/classes    ;; for service apps

boot-gae tasks use configuration parameters to construct the path. The gae/monitor (and the gae/build) task uses the built-in sift task to move input from the source tree to the right place.
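
Schematically, the composition looks something like this (a simplified sketch, not the actual boot-gae implementation; the real task derives the destination path from configuration):

;; hypothetical sketch, in build.boot, using boot's built-in tasks
(deftask monitor-sketch
  "Watch sources and relocate changed .clj files onto the classpath."
  []
  (comp (watch)                                            ;; fire on file change
        (sift :move {#"(.*\.clj)$" "WEB-INF/classes/$1"})  ;; servlet-app layout
        (target :dir #{"target"})))                        ;; sync the output dir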

That's half a solution; we still need to get Clojure to reload the changed files. The trick here is to use a Java filter to monitor the files in the webapp and reload them on change, just as the gae/monitor task does with source files. A filter in a Java Servlet app dynamically intercepts requests and responses; by installing a filter, we can ensure that changed Clojure code gets reloaded whenever any page is loaded.  See The Essentials of Filters for more information.

The gae/reloader task generates and installs the appropriate filter. No configuration is necessary; the whole process is hidden, so all the programmer need do is run the gae/reloader task.

The reloader task generates a reloader "generator" file (named using gensym) whose contents look like this:

;; TRANSIENT FILTER GENERATOR
;; DO NOT EDIT - GENERATED BY reloader TASK
(ns reloadergen2244)

(gen-class :name reloader
           :implements [javax.servlet.Filter]
           :impl-ns reloader)

It saves this to a hidden location (this is easily done, since doing so is one of the core features of boot) and then AOT-compiles it to produce the reloader.class file.

It also generates the Clojure file that implements the filter's doFilter method. Here's the content of that file:

;; RELOADER IMPLEMENTATION NS
;; DO NOT EDIT - GENERATED BY reloader TASK
(ns reloader
  (:import (javax.servlet Filter FilterChain FilterConfig
                          ServletRequest ServletResponse))
  (:require [ns-tracker.core :refer :all]))
(defn -init [^Filter this ^FilterConfig cfg])
(defn -destroy [^Filter this])
(def modified-namespaces (ns-tracker ["./"]))
(defn -doFilter
  [^Filter this
   ^ServletRequest rqst
   ^ServletResponse resp
   ^FilterChain chain]
  (doseq [ns-sym (modified-namespaces)]
    (println (str "reloading " ns-sym))
    (require ns-sym :reload))
  (.doFilter chain rqst resp))

This process results in some transient files, which are filtered out of the final result. The only files we need are reloader.class (which the servlet container needs) and reloader.clj (to which reloader.class will delegate calls to the filter methods, like doFilter).

If you want to inspect the transient files, you can retain them by passing the --keep (short: -k) flag to the gae/build task. Here's an example of what you will find in WEB-INF/classes in that case (other files omitted):

reloader.class
reloader.clj
reloadergen2244$fn__35.class
reloadergen2244$loading__5569__auto____33.class
reloadergen2244.clj
reloadergen2244__init.class

Since the reloadergen* files are not needed by the app, they are removed by default.

Deployment


This works fine for local development; however, it's just wasted work in an application deployed to the cloud. Before deploying (gae/deploy), be sure to omit the gae/reloader task from your build pipeline; if you're using gae/build, use the --prod (short: -p) flag.

boot-gae: building and assembling service-based apps

[Second in a series of articles on using boot-gae to develop Clojure apps on GAE]

Google App Engine supports two kinds of application. The traditional kind is what I'll call a servlet app - a standard Java Servlet application. It may contain multiple servlets and filters, but everything is in one WAR directory. Servlets can communicate with each other using several techniques, including direct method invocation, or using System properties to pass information, etc. The key point is that they need not send each other HTTP messages in order to cooperate.

The other kind of application, which I will call a service-based (or just services) app, assembles one or more servlet apps into a single application. Each servlet app is called a service (formerly: module), and functions as a micro-service in the assembled application. Such microservices collaborate via HTTP.

See Microservices Architecture on Google App Engine, Service: The building blocks of App Engine, and Configuration Files for more information.

boot-gae makes it easy to develop service-based applications, using the same code as for servlet applications. To build a service, do this (from the root of the service project):

$ boot gae/build -s

The -s (--service) switch tells boot-gae to build a service; the result will be placed in target/<servicename>. Building a service, unlike building a servlet app, will generate a jar file for the service. Install this:

$ boot install -f target/<servicename>/<service-jar-file-name>.jar

Do this for each service. Then, from the root directory of the service-based app, run the assemble task:

$ boot gae/assemble

To run the assembled app, use gae/run. The two commands can be combined:

$ boot gae/assemble gae/run

To interactively develop a service running in a services app, change to the service's root directory and run

$ boot gae/monitor -s

Now when you edit your service's code, the changes will be propagated to the assembled service-based app, where they will be loaded on page refresh.

How It Works

The service components and the services app must be correctly configured for this to work, of course. Each service component must include a :gae map in its build.boot file; it looks like this:


(set-env!
 :gae {:app-id "microservices-app"
       :version "v1"
       :module {:name "greeter"
                :app-dir (str (System/getProperty "user.home")
                              "/boot/boot-gae-examples/standard-env/microservices-app")}}
...)

The :version string must conform to the GAE rules: The version identifier can contain lowercase letters, digits, and hyphens. It cannot begin with the prefix "ah-" and the names "default" and "latest" are reserved and cannot be used...Version names should begin with a letter, to distinguish them from numeric instances which are always specified by a number (see appengine-web.xml Reference).

The :app-dir string must be the path of the service-based app's root directory.

The :name string will be used (by gae/monitor -s) to construct the path of the service in its WAR directory in the services app; in this case, the result will be

$HOME/boot/boot-gae-examples/standard-env/microservices-app/target/greeter

The gae/monitor -s task will copy source changes to this directory.

The services app must also include the :gae map in its build.boot file, but without the :module entry. In addition, the component services must be included in the :checkouts vector; for example:

:checkouts '[[tmp.services/main "0.2.0-SNAPSHOT" :module "default" :port 8083]
            [tmp/greeter "0.1.0-SNAPSHOT" :module "greeter" :port 8088]
            [tmp/uploader "0.1.0-SNAPSHOT" :module "uploader" :port 8089]]

The first service listed will be the default service; it must be named "default".  The :module string here must match the :module :name string of the service's build.boot.

WARNING: this will change, so that service components will be listed in :dependencies.

Finally, the services app must contain a services.edn file, which looks like this:

{:app-id "boot-gae-greetings"
 ;; first service listed is default service
 :services [{:service "default"}
            {:service "greeter"}
            {:service "uploader"}]}

WARNING: this will change. We have all the information needed to assemble the app in build.boot, so this edn file is not needed.

See standard environment examples for working demos.

Previous article: Building Clojure Apps on Google App Engine with boot-gae
Next article: boot-gae: Interactive Clojure Development on Google App Engine


Friday, January 20, 2017

Building Clojure Apps on Google App Engine with boot-gae

It's relatively easy to get a Clojure application running on GAE's devserver; you just need to use gen-class to AOT compile a servlet. See for example Clojure in the cloud. Part 1: Google App Engine. The problem is that you then need to restart the devserver whenever you want to exercise code changes, which is way too slow.

One way around this limitation is to run Jetty or some other Java servlet container rather than the devserver.  The problem with this strategy is exactly that it does not use the official development server from Google. That server is a modified version of Jetty, with strict security constraints, providing a near-exact emulation of the production environment (which also runs a version of Jetty). If you develop with some other servlet container, you won't know if your code is going to run in production until you actually deploy to the cloud.

So there are two problems to be addressed if we want to use the official devserver.  One is that Java servlets must be compiled, since the servlet container will search for byte-code on disk when it comes time to load a servlet; most solutions I've seen end up AOT-compiling the entire app. The other problem is that GAE's security constraints will prevent your app from accessing anything outside of the webapp's directories. That means, for example, that any jar dependencies should be installed in WEB-INF/lib. If you want to load Clojure source files at runtime, they must be on the classpath, e.g. in WEB-INF/classes.

boot-gae is a new set of tools that solves these problems. Using it, you can easily develop Clojure apps with REPL-like interactivity in the devserver environment. It automates just about everything, so building and running an application is as simple as:

$ boot gae/build gae/run

To develop interactively, switch to another terminal session and run

$ boot gae/monitor

It's that simple. Now changes in your source tree will be propagated to the output tree, where they will be reloaded on page refresh.

The gae/build task is a convenience task that composes a number of core tasks that take care of everything:

  • installing jar dependencies in WEB-INF/lib
  • generating the config files WEB-INF/appengine-web.xml and WEB-INF/web.xml
  • generating one stub .class file for each servlet and filter
  • copying Clojure source files from the source tree to WEB-INF/classes for runtime reloading
  • copying static web assets (html, js, css, jpeg, etc.) from the source tree to the appropriate output directory
  • generating and installing a reloader filter, which will be used to detect and reload changed namespaces at runtime

The process is controlled via simple *.edn files. For example, servlets are specified in servlets.edn, which looks like this:

{:servlets [{:ns greetings.hello
             :name "hello-servlet"
             :display {:name "Awesome Hello Servlet"}
             :desc {:text "blah blah"}
             :urls ["/hello/*" "/foo/*"]
             :params [{:name "greeting" :val "Hello"}]
             :load-on-startup {:order 3}}
            {:ns greetings.goodbye
             :name "goodbye-servlet"
             :urls ["/goodbye/*" "/bar/*"]
             :params [{:name "op" :val "+"}
                      {:name "arg1" :val 3}
                      {:name "arg2" :val 2}]}]}

Here two servlets are specified. One task - gae/servlets - will use this data to generate a "servlets generator" source file that looks like this:

;; TRANSIENT SERVLET GENERATOR
;; DO NOT EDIT - GENERATED BY servlets TASK
(ns servletsgen2258)

(gen-class :name greetings.hello
           :extends javax.servlet.http.HttpServlet
           :impl-ns greetings.hello)

(gen-class :name greetings.goodbye
           :extends javax.servlet.http.HttpServlet
           :impl-ns greetings.goodbye)


This file is then AOT-compiled to produce the two class files, WEB-INF/classes/greetings/hello.class and WEB-INF/classes/greetings/goodbye.class. The programmer then need only supply an implementation for the service method of HttpServlet, in an appropriately named Clojure file - in this case, in the source tree, greetings/hello.clj and greetings/goodbye.clj will both contain something like (defn -service ...) or (ring/defservice ...).
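
For instance, a minimal hand-written implementation namespace for the hello servlet might look something like this (a sketch; a real app would more likely use ring's defservice):

;; src/greetings/hello.clj - implementation ns for the generated servlet stub
(ns greetings.hello
  (:import (javax.servlet.http HttpServletRequest HttpServletResponse)))

(defn -service
  [this ^HttpServletRequest rqst ^HttpServletResponse resp]
  (.setContentType resp "text/html")
  (.println (.getWriter resp) "<h1>Hello!</h1>"))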

Another task, gae/webxml, will use the same information to generate WEB-INF/web.xml.

Thus with boot-gae, only minimal servlet and filter stubs are AOT-compiled. The gen-class source code is itself automatically generated, then AOT-compiled to produce the corresponding class files, and discarded. The programmer never even sees this code (but can keep it for inspection by passing a -k parameter).

boot-gae is available at https://github.com/migae/boot-gae.  A companion repository,  https://github.com/migae/boot-gae-examples contains sample code with commentary.