Deploying a TensorFlow Model on Android

Summary

With the wide adoption of deep learning and the open-sourcing of TensorFlow, model applications on the mobile side keep appearing. This article describes the author's experience with the build process, and I hope it helps you.

Installing the CPU Version of TensorFlow on Mac

If you do not have a suitable GPU, you can install the CPU-only version of TensorFlow. Linux and Mac systems can install both the Python 2 and Python 3 versions of TensorFlow, while Windows systems only support the Python 3 version.
  1. Install Bazel, the build tool TensorFlow depends on; it is used later to generate the jar package and so library for TensorFlow's Android support. On Mac, use the brew install command: brew install bazel, or install the appropriate version according to the official Bazel documentation;
  2. Install the CPU-only version of TensorFlow with pip: pip install tensorflow;
  3. Verify that the Tensorflow installation was successful:
import tensorflow as tf
# Print the installed TensorFlow version; if this works, the installation succeeded
print(tf.__version__)

Generating the jar Package and so Library

  1. Clone the TensorFlow repository from GitHub to your local machine;
  2. Edit the WORKSPACE file in the tensorflow directory, changing the SDK and NDK paths to your local paths. The SDK API level should be 23 or higher, and NDK version r12b is recommended (some problems occur when higher NDK versions are compiled with bazel). Adjust build_tools_version to your own setup:
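For reference, the Android entries in WORKSPACE typically look like the following sketch; the paths, API level, and build-tools version below are placeholders you must adapt to your own machine:

```python
# Android SDK/NDK rules in the tensorflow WORKSPACE file.
# All paths and version numbers here are placeholders.
android_sdk_repository(
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "25.0.2",
    # Replace with your local SDK path
    path = "/path/to/Android/sdk",
)

android_ndk_repository(
    name = "androidndk",
    # NDK r12b is recommended for building with bazel
    path = "/path/to/android-ndk-r12b",
    api_level = 14,
)
```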
  3. Generate jar packages and so libraries according to the following instructions
    Reference link:
    https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android
Command to generate the so library (the --cpu flag selects the target ABI):
bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cpu=armeabi-v7a
Location of the so library:
bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so
Command to generate jar package:
bazel build //tensorflow/contrib/android:android_tensorflow_inference_java
Location of the jar package:
bazel-bin/tensorflow/contrib/android/libandroid_tensorflow_inference_java.jar

  4. Alternatively, you can use a prebuilt so library officially provided by TensorFlow. Reference link:
http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/libtensorflow_inference.so/

Android-Side Setup

  1. Place the jar package in the app -> libs directory and add the dependency compile files('libs/libandroid_tensorflow_inference_java.jar') to build.gradle;
  2. Create a new folder jniLibs in the src -> main directory and place the generated so library in an ABI-named subdirectory there (e.g. jniLibs/armeabi-v7a);
  3. Save the PC-side trained model as a pb (frozen GraphDef) file:
# Freeze the graph: convert variables to constants so the
# weights and structure are stored in a single pb file
from tensorflow.python.framework import graph_util

output_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, output_node_names=['output'])
with tf.gfile.FastGFile("path/to/xxx.pb", "wb") as f:
    f.write(output_graph_def.SerializeToString())

4. Place the pb file in the src -> main -> assets directory;
5. Let's walk through TensorFlow's Android-side setup with the actual Android code:

public class TensorFlowAudioClassifier implements Classifier{

    private static final String TAG = "TensorFlowAudioClassifier";

    // Only return this many results with at least this confidence.
    private static final int MAX_RESULTS = 3;
    private static final float THRESHOLD = 0.0f;

    // Config values.
    // Name of the input node (just the name, e.g. 'input', without the trailing ':0')
    private String inputName;
    // Name of the output node (same convention as the input name)
    private String outputName;
    // Size of the input (inputs are assumed square, so this is the side length)
    private int inputSize;

    // Pre-allocated buffers.
    private Vector<String> labels = new Vector<String>();
    private float[] floatValues;
    private float[] outputs;
    private String[] outputNames;

    private TensorFlowInferenceInterface inferenceInterface;
    // Private constructor; instances are created via the static create() factory below

    private TensorFlowAudioClassifier() {
    }

    /**
     * Initializes a native TensorFlow session for classifying images.
     *
     * @param assetManager  The asset manager to be used to load assets.
     * @param modelFilename The filepath of the model GraphDef protocol buffer.
     * @param labelFilename The filepath of label file for classes.
     * @param inputSize     The input size. A square image of inputSize x inputSize is assumed.
     * @param inputName     The label of the image input node.
     * @param outputName    The label of the output node.
     * @throws IOException
     */
    public static Classifier create(
            AssetManager assetManager,
            String modelFilename,
            String labelFilename,
            int inputSize,
            String inputName,
            String outputName)
            throws IOException {
        TensorFlowAudioClassifier c = new TensorFlowAudioClassifier();
        c.inputName = inputName;
        c.outputName = outputName;

        // Read the label names into memory.
        // TODO(andrewharp): make this handle non-assets.
        // Read the label file; the labels are later used to build Recognition beans
        String actualFilename = labelFilename.split("file:///android_asset/")[1];
        Log.i(TAG, "Reading labels from: " + actualFilename);
        BufferedReader br =
                new BufferedReader(new InputStreamReader(assetManager.open(actualFilename)));
        String line;
        while ((line = br.readLine()) != null) {
            c.labels.add(line);
        }
        br.close();

        c.inferenceInterface = new TensorFlowInferenceInterface();
        if (c.inferenceInterface.initializeTensorFlow(assetManager, modelFilename) != 0) {
            throw new RuntimeException("TF initialization failed");
        }
        // The shape of the output is [N, NUM_CLASSES], where N is the batch size.
        int numClasses =
                (int) c.inferenceInterface.graph().operation(outputName).output(0).shape().size(1);
        Log.i(TAG, "Read " + c.labels.size() + " labels, output layer size is " + numClasses);

        // Ideally, inputSize could have been retrieved from the shape of the input operation.  Alas,
        // the placeholder node for input in the graphdef typically used does not specify a shape, so it
        // must be passed in as a parameter.
        c.inputSize = inputSize;

        // Pre-allocate buffers.
        c.outputNames = new String[]{outputName};
        c.floatValues = new float[inputSize * inputSize * 1];
        c.outputs = new float[numClasses];

        return c;
    }

    @Override
    // Identification process
    public List<Recognition> recognizeAudio(String fileName) {
        // Log this method so that it can be analyzed with systrace.
        Trace.beginSection("recognizeAudio");

        Trace.beginSection("preprocessAudio");
        // Preprocess the audio data to normalized float based
        // on the provided parameters.
        // Build the audio file into the input array: TensorFlow expects a
        // one-dimensional float array here, so the 2D features are flattened
        double[][] data = RFFT.inputData(fileName);
        for (int i = 0; i < data.length; ++i) {
            for (int j = 0; j < data[0].length; ++j) {
                floatValues[i * 40 + j] = (float)data[i][j];
            }
        }
        Trace.endSection();

        // Copy the input data into TensorFlow.
        Trace.beginSection("fillNodeFloat");
        // Feed the input tensor (shape [1, 40, 40, 1]) into the inference interface
        inferenceInterface.fillNodeFloat(
                inputName, new int[]{1,40,40,1}, floatValues);
        Trace.endSection();

//      Trace.beginSection("fillNodeFloat");
//      inferenceInterface.fillNodeFloat(inputName2,new int[]{2,2},floatValues);
//      Trace.endSection();

        // Run the inference call.
        Trace.beginSection("runInference");
        // Run the model
        inferenceInterface.runInference(outputNames);
        Trace.endSection();

        // Copy the output Tensor back into the output array.
        Trace.beginSection("readNodeFloat");
        // Get the confidence of the output into the outputs array
        inferenceInterface.readNodeFloat(outputName, outputs);
        Trace.endSection();

        // Find the best classifications.
        // Select the top-3 results with a PriorityQueue; Recognition comes from the Classifier interface and is a bean class
        PriorityQueue<Recognition> pq =
                new PriorityQueue<Recognition>(
                        3,
                        new Comparator<Recognition>() {
                            @Override
                            public int compare(Recognition lhs, Recognition rhs) {
                                // Intentionally reversed to put high confidence at the head of the queue.
                                return Float.compare(rhs.getConfidence(), lhs.getConfidence());
                            }
                        });
        for (int i = 0; i < outputs.length; ++i) {
            if (outputs[i] > THRESHOLD) {
                // Build a Recognition bean from the index, label name, and confidence
                pq.add(
                        new Recognition(
                                "" + i, labels.size() > i ? labels.get(i) : "unknown", outputs[i]));
            }
        }
        final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
        int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
        if (recognitionsSize == 0) {
            Trace.endSection(); // close "recognizeAudio" before the early return
            return null;
        }
        for (int i = 0; i < recognitionsSize; ++i) {
            recognitions.add(pq.poll());
        }
        Trace.endSection(); // "recognizeAudio"
        return recognitions;
    }

    @Override
    public void enableStatLogging(boolean debug) {
        inferenceInterface.enableStatLogging(debug);
    }

    @Override
    public String getStatString() {
        return inferenceInterface.getStatString();
    }

    @Override
    public void close() {
        inferenceInterface.close();
    }
}
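Two pieces of logic in recognizeAudio() above are worth seeing in isolation: the row-major flattening of the 2D feature matrix into the one-dimensional input array, and the threshold-plus-top-k selection of results. A minimal Python sketch of both (the function names are mine, for illustration only, not part of any Android API):

```python
import heapq

def flatten_row_major(matrix):
    # Element (i, j) lands at index i * num_cols + j, matching
    # floatValues[i * 40 + j] = data[i][j] in recognizeAudio()
    num_cols = len(matrix[0])
    flat = [0.0] * (len(matrix) * num_cols)
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            flat[i * num_cols + j] = float(value)
    return flat

def top_k(confidences, labels, k=3, threshold=0.0):
    # Pair each confidence with its label (falling back to "unknown"),
    # drop entries at or below the threshold, keep the k highest
    scored = [
        (conf, labels[i] if i < len(labels) else "unknown")
        for i, conf in enumerate(confidences)
        if conf > threshold
    ]
    return heapq.nlargest(k, scored)
```

The Java code uses a reversed Comparator on a PriorityQueue for the same purpose; heapq.nlargest is simply the idiomatic Python equivalent.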

We have now built a complete TensorFlowAudioClassifier for audio classification. Here is how to use the class we built:

    public static TensorFlowAudioClassifier classifier;
    private static final String INPUT_NAME = "input";
    private static final String OUTPUT_NAME = "output";

    private static final String MODEL_FILE = "file:///android_asset/acoustic.pb";
    private static final String LABEL_FILE =
            "file:///android_asset/eventLabel.txt";
    private static int INPUT_SIZE = 40;
    try {
    // Get Classifier
            classifier = (TensorFlowAudioClassifier) TensorFlowAudioClassifier.create(
                    getAssets(),
                    MODEL_FILE,
                    LABEL_FILE,
                    INPUT_SIZE,
                    INPUT_NAME,
                    OUTPUT_NAME
            );
            // Recognize the audio file's category; results are ordered by descending confidence
            List<Recognition> results = classifier.recognizeAudio(fileName);
        } catch (IOException e) {
            e.printStackTrace();
        }

Summary

We have now successfully run a TensorFlow deep model on the Android side. Of course, the idea is not to train on Android, but only to run inference there (training on Android is a bit of a fantasy: even PC-side training is not easy, and some deep models need distributed training).

If this blog post helps you, remember to share it with others.


Posted on Tue, 02 Jul 2019 10:11:15 -0700 by jalapena