Reading the OpenCV source code: the two iOS interfaces UIImageToMat() and MatToUIImage()

From: https://www.cnblogs.com/panxiaochun/p/5387743.html

This article is original work by the author and may not be reproduced without permission. The original text was published by the author on Cnblogs: http://www.cnblogs.com/panxiaochun/p/5387743.html

Two interfaces come up constantly when developing OpenCV-based programs on iOS: UIImageToMat() and MatToUIImage(), which convert between UIImage and cv::Mat. The OpenCV documentation says little about these two APIs, so we have to work out the details by reading the source code.

1. UIImageToMat details

For this API we always want to know some details about the returned Mat: how many channels it has, whether it has an alpha channel, and whether the color space is RGBA or BGR. The documentation does not say. With these questions in mind, let's read the source code:

void UIImageToMat(const UIImage* image,
                         cv::Mat& m, bool alphaExist) {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width, rows = image.size.height;
    CGContextRef contextRef;
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    if (CGColorSpaceGetModel(colorSpace) == 0)
    {
        m.create(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone;
        if (!alphaExist)
            bitmapInfo = kCGImageAlphaNone;
        contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
                                           m.step[0], colorSpace,
                                           bitmapInfo);
    }
    else
    {
        m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
        if (!alphaExist)
            bitmapInfo = kCGImageAlphaNoneSkipLast |
                                kCGBitmapByteOrderDefault;
        contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
                                           m.step[0], colorSpace,
                                           bitmapInfo);
    }
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows),
                       image.CGImage);
    CGContextRelease(contextRef);
}

The source code lives in modules/imgcodecs/src/ios_conversions.mm. As the code shows, OpenCV first takes the CGImage behind the UIImage; a CGImage is a bitmap image:

@property(nullable, nonatomic,readonly) CGImageRef CGImage; // returns underlying CGImageRef or nil if CIImage based
The CGImageRef opaque type represents bitmap images and bitmap image masks, based on sample data that you supply. A bitmap (or sampled) image is a rectangular array of pixels, with each pixel representing a single sample or data point in a source image.

CGImageGetColorSpace returns the color space of the given image; if the image is an image mask, it returns NULL. An image mask is essentially the same thing as a mask in Photoshop: only the region selected by the mask is shown. See the iOS documentation for details. Note that the C++ API of OpenCV differs from the Java API here: the Java Mat exposes width and height, while in C++ we have cols and rows, which correspond to width and height respectively; they are simply the number of columns and rows of the Mat. For example, a 1920x1080 UIImage produces a Mat with cols == 1920 and rows == 1080.

Next, CGColorSpaceGetModel queries the color model of the image and returns an enumeration value:

typedef CF_ENUM (int32_t,  CGColorSpaceModel) {
    kCGColorSpaceModelUnknown = -1,
    kCGColorSpaceModelMonochrome,
    kCGColorSpaceModelRGB,
    kCGColorSpaceModelCMYK,
    kCGColorSpaceModelLab,
    kCGColorSpaceModelDeviceN,
    kCGColorSpaceModelIndexed,
    kCGColorSpaceModelPattern
};

kCGColorSpaceModelMonochrome means a monochrome image, i.e. a black-and-white grayscale image. In that case the function creates a single-channel Mat and uses CGBitmapContextCreate to build a bitmap context backed by m.data, with the same size and color space as the original image and no alpha channel, then draws the CGImage into it. If the image is not grayscale, a 4-channel Mat is created and the pixels of the original image are written into the Mat's data in the original image's color space, yielding the converted m.

By reading the source code we learn: if a monochrome grayscale image is passed in, a single-channel grayscale Mat without an alpha channel is returned. If a color (RGB or RGBA) image is passed in, a 4-channel Mat is returned with the channels laid out as RGBA in the original image's color space; unless the alphaExist parameter is used to exclude it, the output carries an alpha channel by default.
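As a minimal usage sketch (the helper name and the cvtColor step are my own additions, not part of the OpenCV sources): since UIImageToMat produces an RGBA Mat for color input, it is common to convert to the BGR layout that most OpenCV routines expect before processing.

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>

// Sketch: convert a UIImage into a BGR Mat for further OpenCV processing.
// For a color UIImage, UIImageToMat yields a 4-channel RGBA Mat.
static cv::Mat matFromUIImage(UIImage *image) {
    cv::Mat rgba;
    UIImageToMat(image, rgba);                       // alphaExist defaults to false
    if (rgba.channels() == 4) {
        cv::Mat bgr;
        cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR); // drop alpha, reorder to BGR
        return bgr;
    }
    return rgba;                                     // grayscale input stays single-channel
}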

2. MatToUIImage details

Now let's look at the conversion details of MatToUIImage. The source code:

UIImage* MatToUIImage(const cv::Mat& image) {

    NSData *data = [NSData dataWithBytes:image.data
                                  length:image.elemSize()*image.total()];

    CGColorSpaceRef colorSpace;

    if (image.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider =
            CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Preserve alpha transparency, if exists
    bool alpha = image.channels() == 4;
    CGBitmapInfo bitmapInfo = (alpha ? kCGImageAlphaLast : kCGImageAlphaNone) | kCGBitmapByteOrderDefault;

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(image.cols,
                                        image.rows,
                                        8,
                                        8 * image.elemSize(),
                                        image.step.p[0],
                                        colorSpace,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault
                                        );


    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}

First, the raw data of the incoming matrix is wrapped in an NSData object whose length is image.elemSize() * image.total(). elemSize() returns the size of one element in bytes (see the OpenCV documentation for details; it is the number of channels times the size of a channel element, so for an element type of CV_8UC3 the size is 3 bytes: three channels of 8 bits, i.e. one byte, each). image.total() returns the total number of elements (pixels) in the image.
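A quick sketch of that length calculation (the 640x480 size is just an assumed example, not from the article):

// Assumed example: a 480-row by 640-column 3-channel 8-bit Mat (CV_8UC3).
cv::Mat bgr(480, 640, CV_8UC3);
size_t bytesPerElement = bgr.elemSize();                 // 3 channels * 1 byte = 3
size_t elementCount    = bgr.total();                    // 480 * 640 = 307200 pixels
size_t bufferLength    = bytesPerElement * elementCount; // 921600 bytes, the NSData length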

Next, the element size of image is examined. If it is 1 byte (a CV_8UC1 Mat), the color space is set to device gray; otherwise it is set to device RGB (so the image produced by MatToUIImage() uses an RGB color space). Then the number of channels is checked: if there are 4 channels, the image has an alpha channel, because R, G and B each occupy one channel, i.e. one byte, and the fourth byte is the alpha channel.

CGDataProviderCreateWithCFData creates a CGDataProviderRef from the NSData. (The __bridge cast is Xcode's way of converting between Core Foundation and Foundation objects via toll-free bridging without transferring memory-management responsibility; it is only needed under ARC.)

CGImageCreate builds a bitmap image from these parameters (each one is described in the Apple documentation), and the UIImage is then created from that bitmap. So by reading the source code we know: if the original Mat is a single-channel grayscale image, the resulting UIImage is grayscale; otherwise the UIImage defaults to RGB; and if the cv::Mat has an alpha channel (4 channels), the converted UIImage carries an alpha channel as well.
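A minimal usage sketch (the helper and the BGR-to-RGB conversion are my own additions): since MatToUIImage treats a multi-channel Mat as RGB, a BGR Mat coming out of typical OpenCV processing should be converted first, otherwise the red and blue channels appear swapped.

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>

// Sketch: turn an OpenCV BGR result into a UIImage for display.
static UIImage *imageFromBGRMat(const cv::Mat &bgr) {
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB); // MatToUIImage assumes RGB channel order
    return MatToUIImage(rgb);
}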

 

3. Summary

The Mat output by UIImageToMat has an alpha channel by default; you can drop it by setting the third parameter (alphaExist) of UIImageToMat(). The UIImage output by MatToUIImage is an RGB image by default.
