QR code generation and scanning in iOS

Generating a QR code

+ (UIImage *)qrImageForString:(NSString *)string imageSize:(CGFloat)Imagesize logoImageSize:(CGFloat)waterImagesize {
    CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [filter setDefaults];
    NSData *data = [string dataUsingEncoding:NSUTF8StringEncoding];
    // Hand the string to the filter via KVC to generate the QR code
    [filter setValue:data forKey:@"inputMessage"];
    // Error-correction level: the higher it is, the more dirt or damage the code can tolerate
    [filter setValue:@"H" forKey:@"inputCorrectionLevel"];
    // Get the QR code image
    CIImage *outPutImage = [filter outputImage];
    return [[[self alloc] init] createNonInterpolatedUIImageFormCIImage:outPutImage withSize:Imagesize waterImageSize:waterImagesize];
}


- (UIImage *)createNonInterpolatedUIImageFormCIImage:(CIImage *)image withSize:(CGFloat)size waterImageSize:(CGFloat)waterImagesize {
    CGRect extent = CGRectIntegral(image.extent);
    CGFloat scale = MIN(size / CGRectGetWidth(extent), size / CGRectGetHeight(extent));

    // 1. Create a bitmap
    size_t width = CGRectGetWidth(extent) * scale;
    size_t height = CGRectGetHeight(extent) * scale;
    // Create a DeviceGray color space
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceGray();
    // CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
    // width / height: size of the bitmap in pixels
    // bitsPerComponent: bits per color component, e.g. 8 in RGBA-32 mode
    // bitmapInfo: whether the bitmap contains an alpha channel (none here)
    CGContextRef bitmapRef = CGBitmapContextCreate(nil, width, height, 8, 0, cs, (CGBitmapInfo)kCGImageAlphaNone);
    CIContext *context = [CIContext contextWithOptions:nil];
    // Create the CoreGraphics image
    CGImageRef bitmapImage = [context createCGImage:image fromRect:extent];
    CGContextSetInterpolationQuality(bitmapRef, kCGInterpolationNone);
    CGContextScaleCTM(bitmapRef, scale, scale);
    CGContextDrawImage(bitmapRef, extent, bitmapImage);

    // 2. Save the bitmap to an image
    CGImageRef scaledImage = CGBitmapContextCreateImage(bitmapRef);
    CGContextRelease(bitmapRef);
    CGImageRelease(bitmapImage);
    CGColorSpaceRelease(cs);
    UIImage *outputImage = [UIImage imageWithCGImage:scaledImage];
    CGImageRelease(scaledImage);

    // Add the logo to the QR code
    UIGraphicsBeginImageContextWithOptions(outputImage.size, NO, [[UIScreen mainScreen] scale]);
    [outputImage drawInRect:CGRectMake(0, 0, size, size)];
    UIImage *waterimage = [UIImage imageNamed:@"icon_imgApp"];
    // Draw the logo on the generated QR code image. Keep it small (no more than
    // about 30% of the QR code image), or the code may fail to scan
    [waterimage drawInRect:CGRectMake((size - waterImagesize) / 2.0, (size - waterImagesize) / 2.0, waterImagesize, waterImagesize)];
    UIImage *newPic = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newPic;
}
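As a usage sketch (assuming the two methods above are placed in a UIImage category; the category and the qrImageView outlet are my assumptions, not part of the original post):

```objc
// Generate a 200x200 QR code with a 50x50 centered logo and display it.
// The category exposing qrImageForString:... on UIImage is assumed here.
UIImage *qr = [UIImage qrImageForString:@"https://example.com"
                              imageSize:200
                          logoImageSize:50];
self.qrImageView.image = qr; // qrImageView: an assumed UIImageView outlet
```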


Changing the color of the QR code

// Release callback for the data provider below; frees the pixel buffer once the image no longer needs it
void ProviderReleaseData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)imageBlackToTransparent:(UIImage *)image withRed:(CGFloat)red andGreen:(CGFloat)green andBlue:(CGFloat)blue {
    const int imageWidth = image.size.width;
    const int imageHeight = image.size.height;
    size_t bytesPerRow = imageWidth * 4;
    uint32_t *rgbImageBuf = (uint32_t *)malloc(bytesPerRow * imageHeight);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImageBuf, imageWidth, imageHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image.CGImage);

    // Traverse the pixels
    int pixelNum = imageWidth * imageHeight;
    uint32_t *pCurPtr = rgbImageBuf;
    for (int i = 0; i < pixelNum; i++, pCurPtr++) {
        if ((*pCurPtr & 0xFFFFFF00) < 0x99999900) {
            // Dark (black) pixels: recolor them with the requested color
            uint8_t *ptr = (uint8_t *)pCurPtr;
            ptr[3] = red; // 0~255
            ptr[2] = green;
            ptr[1] = blue;
        } else {
            // Light (white) pixels: make them transparent
            uint8_t *ptr = (uint8_t *)pCurPtr;
            ptr[0] = 0;
        }
    }

    // Output the image
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rgbImageBuf, bytesPerRow * imageHeight, ProviderReleaseData);
    CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, 8, 32, bytesPerRow, colorSpace, kCGImageAlphaLast | kCGBitmapByteOrder32Little, dataProvider, NULL, true, kCGRenderingIntentDefault);
    UIImage *resultUIImage = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGImageRelease(imageRef);
    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    return resultUIImage;
}


Scanning a QR code

@interface ScanQRViewController () <AVCaptureMetadataOutputObjectsDelegate>

//Capture device: usually the front camera, rear camera, or microphone (audio input)
@property (nonatomic) AVCaptureDevice *device;
//AVCaptureDeviceInput represents the input device; it is initialized with an AVCaptureDevice
@property (nonatomic) AVCaptureDeviceInput *input;
//The output type is metadata, because that output type lets you choose what to scan for, e.g. QR codes
//Once the camera starts capturing, any QR code in the input produces output
@property (nonatomic) AVCaptureMetadataOutput *output;
//session: it ties the input and output together and drives the capture device (camera)
@property (nonatomic) AVCaptureSession *session;
//Preview layer: displays the captured image in real time
@property (nonatomic) AVCaptureVideoPreviewLayer *previewLayer;

Initializing the objects and combining the input and output devices

- (void)creatCaptureDevice {
    // AVMediaTypeVideo means self.device represents video; by default this is the rear camera
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Initialize the input with the device
    self.input = [[AVCaptureeDeviceInput alloc] initWithDevice:self.device error:nil];
    // Create the output object
    self.output = [[AVCaptureMetadataOutput alloc] init];
    // Set the delegate: once data of the specified type is scanned, it is delivered through the delegate
    // While scanning, the captured content is analyzed; on success the delegate method is called on the given queue
    [self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // Create the session to combine the input and output
    self.session = [[AVCaptureSession alloc] init];
    if ([self.session canAddInput:self.input]) {
        [self.session addInput:self.input];
    }
    if ([self.session canAddOutput:self.output]) {
        [self.session addOutput:self.output];
    }
    // Produce output when a QR code is scanned; AVMetadataObjectTypeQRCode specifies QR codes
    // Note: the recognition types must be set after the output has been added to the session
    [self.output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    // Scanning region, in a normalized coordinate space from (0,0) to (1,1).
    // If not set, the whole screen is recognized; restricting the region speeds up recognition.
    // This property is tricky to set; the rect below was found by trial and error
    // (see the notes at the end for how x, y, width and height map to the screen)
    [self.output setRectOfInterest:CGRectMake(0.1, 0.3, 0.4, 0.4)];
    // Initialize the preview layer with self.session: the session drives the input, the layer renders the captured image
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    self.previewLayer.frame = CGRectMake(0, 0, kScreenWidth, kScreenHeight);
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.previewLayer];
    // Start capturing
    [self.session startRunning];
}
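One thing the post does not mention: since iOS 10 the app must declare camera usage in Info.plist, or the capture session delivers no frames at all. The key name is the standard one; the description string below is just an example:

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is used to scan QR codes.</string>
```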


Implementing the delegate method

#pragma mark - AVCaptureMetadataOutputObjectsDelegate

//metadataObjects: the recognized contents, delivered as an array
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // Stop scanning (timer and lineView drive the scan-line animation, which is not shown in this post)
    [self.session stopRunning];
    [self.timer invalidate];
    self.timer = nil;
    [self.lineView removeFromSuperview];
    if ([metadataObjects count] >= 1) {
        // The array contains AVMetadataMachineReadableCodeObject instances holding the decoded data
        AVMetadataMachineReadableCodeObject *qrObject = [metadataObjects lastObject];
        // Get the scanned content and handle it here as needed
        NSLog(@"Recognition successful: %@", qrObject.stringValue);
    }
}



Problems encountered and solutions

(1) When the logo was added to the QR code, the image came out blurry. This is caused by the scale parameter of UIGraphicsBeginImageContextWithOptions: iPhone screens are Retina displays at 2x or 3x pixel density, so the scale should match the screen, i.e. [[UIScreen mainScreen] scale]. With that set, the image is sharp.

(2) setRectOfInterest: sets the recognition region for scanning, in a normalized coordinate space where one corner is (0,0) and the opposite corner is (1,1). If it is not set, the whole screen is recognized; restricting it shrinks the area that has to be analyzed and speeds up recognition. The rect is expressed as fractions of the screen size, so all four values of CGRectMake must lie between 0 and 1, and x/y and width/height are swapped relative to screen coordinates, i.e. (y/SCREEN_HEIGHT, x/SCREEN_WIDTH, height/SCREEN_HEIGHT, width/SCREEN_WIDTH). Also note that the origin is not the upper left corner but the upper right corner. (This description may not be strictly correct, but handling it this way pins the area down accurately.)

for instance:

[self.output setRectOfInterest:CGRectMake(95/SCREEN_HEIGHT, 40/SCREEN_WIDTH, 240/SCREEN_HEIGHT, 240/SCREEN_WIDTH)];

Tags: Session

Posted on Sun, 31 May 2020 06:22:03 -0700 by avillanu