Leeor Langer

Tensorflow C++ API for Android



Let’s say you want to develop a mobile app which includes deep learning functionality. However, deep learning is only part of the software stack. Google’s Tensorflow deep learning framework is ideal for such usage. It is written in C++, with a C++ API, but surprisingly there is no example of C++ usage on Android… Google currently only supports a Java API on Android, via JNI (libtensorflow_inference.so).


We found a seemingly innocent comment regarding Tensorflow C++ on Android here. Pete Warden from the Tensorflow team points to the benchmark tool as an example of cross-platform usage (Linux PC and Android build support). So we refactored the benchmark tool, removed all the unnecessary code and left only an API. The API includes two functions, Init and Run, which are described in the header file, and the code is compiled as a dynamic library for the ARM architecture (a .so file).
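
The header itself is tiny. The exact signatures live in TensorflowInference.h in our repo, so treat the following as a rough sketch: only the function names are taken from the usage example at the end of this post, while the parameter lists and return types are assumptions.

#pragma once

#include <utility>
#include <vector>

// Init: load the frozen graph, create the session and register the input/output
// tensors (the model path parameter and the return type are assumptions)
bool InitTensorflowModel(
    const char *modelPath,
    const std::vector<std::pair<const char *, std::vector<long long>>> &inputDefs,
    const std::vector<const char *> &outputNames);

// Run: feed the input buffers, execute the graph and fill the output buffers
bool RunTensorFlow(const std::vector<std::vector<float>> &inputs,
                   std::vector<std::vector<float>> &outputs);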


The difficult part in creating such a library is the build tooling. In particular, Bazel is not very well documented outside of Google (more on the subject), so it takes some effort to understand how to compile and link such a library, in addition to writing the code itself. The implementation of the library supports diverse usage scenarios, including multiple inputs, multiple outputs, multiple model instances and logging.


To use the API, you simply link against libTensorflowInference.so from your C++ code. To change and recompile the library itself, do the following:

  1. Download the Tensorflow sources and check out release 1.9.

  2. Edit the WORKSPACE file in the Tensorflow root to include the SDK and NDK details (see the example WORKSPACE file in our repo and the sketch after this list).

  3. Copy the tfwld directory into tensorflow/tensorflow/tools.

  4. Run the Bazel build command with the appropriate flags (such as --cpu=arm64-v8a).
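
For step 2, the SDK and NDK entries in the WORKSPACE file typically look like the sketch below. The paths, API levels and build-tools version here are placeholders; replace them with the values matching your local installation (the example WORKSPACE file in our repo shows the full picture):

android_sdk_repository(
    name = "androidsdk",
    path = "/path/to/android-sdk",
    api_level = 27,
    build_tools_version = "27.0.3",
)

android_ndk_repository(
    name = "androidndk",
    path = "/path/to/android-ndk",
    api_level = 21,
)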

Run the following command line invocation (Step 4):


bazel build -c opt --copt="-fPIC" --cxxopt='-std=c++11' --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --config=monolithic tensorflow/tools/tfwld:libTensorflowInference.so

The BUILD file is kind of tricky, so let’s go over it:

package(default_visibility = ["//visibility:public"])

load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "tf_cc_test",
    "tf_cc_binary",
)

cc_library(
    name = "Logging",
    srcs = ["Logging.cpp"],
    hdrs = ["Logging.h"],
)

cc_library(
    name = "TensorflowInference_lib",
    testonly = 1,
    srcs = [
        "TensorflowInference.cc",
    ],
    hdrs = [
        "TensorflowInference.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = select({
        "//tensorflow:android": [
            "//tensorflow/core:android_tensorflow_lib",
            "//tensorflow/core:android_tensorflow_test_lib",
            ":Logging",
        ],
        "//conditions:default": [
            "//tensorflow/core:core_cpu",
            "//tensorflow/core:lib",
            "//tensorflow/core:framework",
            "//tensorflow/core:framework_internal",
            "//tensorflow/core:framework_lite",
            "//tensorflow/core:protos_all_cc",
            "//tensorflow/core:tensorflow",
            "//tensorflow/core:test",
        ],
    }),
)

cc_binary(
    name = "libTensorflowInference.so",
    testonly = 1,
    srcs = ["TensorflowInference.cc", "Logging.cpp", "Logging.h"],
    copts = tf_copts(),
    linkopts = select({
        "//tensorflow:android": [
            "-shared",
            "-landroid",
            "-latomic",
            "-ljnigraphics",
            "-llog",
            "-lm",
            "-z defs",
	    "-Wl,--allow-multiple-definition",
            "-Wl,--version-script=/my/path/tf_source/tensorflow/tensorflow/tools/tfwld/TensorflowInference.lds",
            "-s",
        ],
        "//conditions:default": [],
    }),
    linkstatic = 1,
    linkshared = 1,
    visibility=["//visibility:private"],
    deps = [":TensorflowInference_lib", ":Logging"],
)

The idea is to compile an “internal” library that, on Android, packages only the necessary components, while the default (non-Android) build pulls in parts of the core functionality that apparently are not supported on current ARM architectures. We wrap this library with our own interface for simple inference usage, exposing only two functions. Note that in order to strip all unnecessary symbols we use "-Wl,--version-script=…"; change the path there to your local one. Also note the use of "-shared" in the linkopts and of "-fPIC" on the command line; together they produce a shared library instead of an executable.
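
For reference, the version script passed to "-Wl,--version-script" is just a plain text file listing which symbols stay visible while everything else is hidden. A minimal sketch, assuming the two exported functions are the ones called in the example below, would look like:

{
  global:
    InitTensorflowModel;
    RunTensorFlow;
  local:
    *;
};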


So at the end of the day, your C++ code will look like the following example:

#include "TensorflowInference.h"

// This is the "feed_dict" key value pair (we have a vector of these for multiple inputs definition)
typedef vector<pair<const char *, vector<long long>>> ModelInputType;
ModelInputType m_modeInputsDefs;

// Using floats for example as input \ output 
typedef std::vector<float> BufferType;

// Resize to NUM_OF_INPUTS
m_modeInputsDefs.resize(NUM_OF_INPUTS);

// Define your input names and dimensions
m_modeInputsDefs[index].first = name;
m_modeInputsDefs[index].second = dims;

// Run the init function from the shared lib
InitTensorflowModel(...)
  
// Run model (2 inputs, 1 output in this example), this can be repeated in real time!
vector<BufferType> inputs(2);
vector<BufferType> outputs(1);
RunTensorFlow(...)

We are Wearable Devices, a startup company that develops hardware and software solutions for interacting with computers. Our vision is to make the interaction with and control of computers as natural and intuitive as real-life experiences. We imagine a future in which the human hand becomes a universal input device for interacting with digital devices, using simple gestures.


The original blog post was published on Medium.
