Preface
The one-tap subject-lifting (cutout) feature introduced in iOS 16 is quite interesting. I guessed it might be built on Vision (after all, Vision received new updates at the latest developer conference), so I wanted to see whether I could reproduce the cutout myself.
Implementation
The idea is to use Vision to obtain a saliency heat map of the image, detect the contours of the salient region in that heat map, convert the result back into coordinates on the original image, and then use the contour as a mask to cut out the subject.
Getting the heat map
```objc
// The original image
UIImage *originImage = [UIImage imageNamed:@"test.jpg"];
CIImage *ciOriginImg = [CIImage imageWithCGImage:originImage.CGImage];

VNImageRequestHandler *imageHandler = [[VNImageRequestHandler alloc] initWithCIImage:ciOriginImg options:@{}];
// Objectness-based saliency request
VNGenerateObjectnessBasedSaliencyImageRequest *attentionRequest = [[VNGenerateObjectnessBasedSaliencyImageRequest alloc] init];
NSError *err = nil;
BOOL succeeded = [imageHandler performRequests:@[attentionRequest] error:&err];
if (succeeded && attentionRequest.results.count > 0) {
    VNSaliencyImageObservation *observation = [attentionRequest.results firstObject];
    // The saliency heat map
    CIImage *heatImage = [CIImage imageWithCVPixelBuffer:observation.pixelBuffer];
}
```
Contour detection on the heat map
```objc
- (void)heatMapProcess:(CVImageBufferRef)hotRef catOrigin:(CIImage *)catOrigin {
    CIImage *heatImage = [CIImage imageWithCVPixelBuffer:hotRef];

    VNDetectContoursRequest *contourRequest = [[VNDetectContoursRequest alloc] init];
    contourRequest.revision = VNDetectContourRequestRevision1;
    contourRequest.contrastAdjustment = 1.0;
    // The heat map is bright-on-dark, so don't look for dark contours on a light background
    contourRequest.detectsDarkOnLight = NO;
    contourRequest.maximumImageDimension = 512;

    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCIImage:heatImage options:@{}];
    NSError *err = nil;
    BOOL result = [handler performRequests:@[contourRequest] error:&err];
    if (result) {
        // Contour detection result; contoursObv and origin are then handed to the masking step below
        VNContoursObservation *contoursObv = [contourRequest.results firstObject];
        CIContext *cxt = [[CIContext alloc] initWithOptions:nil];
        CGImageRef origin = [cxt createCGImage:catOrigin fromRect:catOrigin.extent];
    }
}
```
Cutting out the subject
```objc
- (UIImage *)drawContourWith:(VNContoursObservation *)contourObv withCgImg:(CGImageRef)img originImg:(CGImageRef)origin {
    CGSize size = CGSizeMake(CGImageGetWidth(origin), CGImageGetHeight(origin));
    UIImageView *originImgV = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    originImgV.image = [UIImage imageWithCGImage:origin];

    CAShapeLayer *layer = [CAShapeLayer layer];
    // Flip the coordinate system: Vision's origin is the bottom-left, UIKit's is the top-left
    CGAffineTransform flipMatrix = CGAffineTransformMake(1, 0, 0, -1, 0, size.height);
    // Scale the normalized path up to the image's pixel size
    CGAffineTransform scaleTransform = CGAffineTransformScale(flipMatrix, size.width, size.height);
    // Apply the transform to the normalized contour path
    CGPathRef scaledPath = CGPathCreateCopyByTransformingPath(contourObv.normalizedPath, &scaleTransform);
    layer.path = scaledPath;
    [originImgV.layer setMask:layer];

    UIGraphicsBeginImageContext(originImgV.bounds.size);
    [originImgV.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGPathRelease(scaledPath); // CGPathCreateCopy... returns a +1 reference
    return image;
}
```
Summary
Judging from my test results, the cutouts still have some flaws. I also tried the native subject-lifting feature on a friend's device running iOS 16, and it looked about the same as mine, with similar imperfections, so there is clearly still plenty of room for improvement.
References
Vision