
knonk

macrumors newbie
Original poster
Oct 27, 2010
Hi,
I'm facing a hard problem. I'm developing an app that captures video from both the front and rear cameras simultaneously on an iPhone 4 (without jailbreaking) and saves the result to a single video file in AVI format. I have two problems:
1. Capturing video simultaneously from both cameras (front and rear).
2. Saving the captured videos to a single file in AVI format.

I have worked with the multimedia libraries in the iPhone SDK and know that UIImagePickerController lets me capture video on the iPhone, but it only supports one camera device at a time, set through its cameraDevice property. It seems UIImagePickerController can't help me any further.

I want to display both captured videos on screen at once, one as a full-screen background and the other as a smaller foreground overlay, like the picture-in-picture (PIP) effect on television, and then save both captured streams to a single video file, preferably in AVI format.

For that purpose I searched Google and found that FFmpeg seems to support merging two videos with a PIP effect, but I couldn't find a tutorial on it, and I have no prior experience with FFmpeg.

So problems (1) and (2) are still unsolved: I don't know how to capture video simultaneously on the iPhone, nor how to merge the captured videos and save them to a single file with a PIP effect.

Does anyone have ideas about these problems? Please help!
 

knonk

macrumors newbie
Original poster
Oct 27, 2010
It's not possible to capture from both cameras at once. I've had that confirmed by Apple.

Thanks for the information, but did you contact Apple and did they confirm that? Sorry to ask again, but I need to be sure, and I have to find a solution to this problem.

If you're right, then problem (1) is impossible, but I still face the second problem: saving two video files into one with a PIP effect.
Do you know how to do that on the iPhone 4? Is there any framework that can help me do that?
 

robbieduncan

Moderator emeritus
Jul 24, 2002
Thanks for the information, but did you contact Apple and did they confirm that? Sorry to ask again, but I need to be sure, and I have to find a solution to this problem.

If you're right, then problem (1) is impossible, but I still face the second problem: saving two video files into one with a PIP effect.
Do you know how to do that on the iPhone 4? Is there any framework that can help me do that?

1) On the developer forums. I posted this question regarding the 4.1 beta when Apple gave low-level access to the cameras.

2) Use the AVFoundation framework: you can get pixel-level access to each video frame and composite onto it. This is a low-level API and quite tricky to use. Be prepared for a lot of annoyance.

If you have access to the dev forums you might find this thread useful: https://devforums.apple.com/thread/59490?start=0&tstart=0
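For the other half of the question, merging two already-recorded clips into one PIP movie, the editing side of AVFoundation (AVMutableComposition plus AVMutableVideoComposition) is one possible route. The sketch below is rough and untested: the file URLs, the 30% scale, and the 1280x720 render size are assumptions, and it produces a QuickTime movie rather than AVI, which iOS does not write natively.

Code:
// Rough sketch: composite two recorded movie files into a single PIP movie.
// Everything here is illustrative; adjust sizes, positions and presets as needed.
- (void) exportPIPMovieFrom:(NSURL *)backgroundURL overlay:(NSURL *)overlayURL to:(NSURL *)outputURL
{
     NSError *error = nil;
     AVURLAsset *backgroundAsset = [AVURLAsset URLAssetWithURL:backgroundURL options:nil];
     AVURLAsset *overlayAsset = [AVURLAsset URLAssetWithURL:overlayURL options:nil];

     // Put each clip on its own video track of a composition.
     AVMutableComposition *composition = [AVMutableComposition composition];
     AVMutableCompositionTrack *backgroundTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
     AVMutableCompositionTrack *overlayTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
     [backgroundTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, backgroundAsset.duration)
                              ofTrack:[[backgroundAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                               atTime:kCMTimeZero error:&error];
     [overlayTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, overlayAsset.duration)
                           ofTrack:[[overlayAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                            atTime:kCMTimeZero error:&error];

     // Shrink and position the overlay; the first layer instruction in the array is drawn on top.
     AVMutableVideoCompositionLayerInstruction *overlayLayer =
          [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:overlayTrack];
     CGAffineTransform shrink = CGAffineTransformMakeScale(0.3f, 0.3f);
     CGAffineTransform placed = CGAffineTransformConcat(shrink, CGAffineTransformMakeTranslation(40.0f, 40.0f));
     [overlayLayer setTransform:placed atTime:kCMTimeZero];
     AVMutableVideoCompositionLayerInstruction *backgroundLayer =
          [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:backgroundTrack];

     AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
     instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [composition duration]);
     instruction.layerInstructions = [NSArray arrayWithObjects:overlayLayer, backgroundLayer, nil];

     AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
     videoComposition.instructions = [NSArray arrayWithObject:instruction];
     videoComposition.frameDuration = CMTimeMake(1, 30);      // 30 fps
     videoComposition.renderSize = CGSizeMake(1280.0f, 720.0f);

     // Export the composition to a single movie file.
     AVAssetExportSession *exporter = [[[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality] autorelease];
     exporter.videoComposition = videoComposition;
     exporter.outputURL = outputURL;
     exporter.outputFileType = AVFileTypeQuickTimeMovie;
     [exporter exportAsynchronouslyWithCompletionHandler:^{
          NSLog(@"PIP export finished with status %d", (int)exporter.status);
     }];
}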
 

knonk

macrumors newbie
Original poster
Oct 27, 2010
1) On the developer forums. I posted this question regarding the 4.1 beta when Apple gave low-level access to the cameras.

2) Use the AVFoundation framework: you can get pixel-level access to each video frame and composite onto it. This is a low-level API and quite tricky to use. Be prepared for a lot of annoyance.

If you have access to the dev forums you might find this thread useful: https://devforums.apple.com/thread/59490?start=0&tstart=0

Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there. Could you repost the useful parts of that thread here?
 

dccorona

macrumors 68020
Jun 12, 2008
You will need a paid Developer enrollment to develop for any non-jailbroken iPhone device. So you might as well get one.

Why would they spend $100 only to find out their app can't be made?
That seems wasteful to me.
 

robbieduncan

Moderator emeritus
Jul 24, 2002
Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there. Could you repost the useful parts of that thread here?

I certainly can't repost the whole thread: I only have the right to repost my own contributions. Fortunately for you, they are the useful bit. The code below demonstrates how to get per-pixel data of the video stream, composite something onto it (in this case a black square) and save the video.

Note: you are expected to read, understand, and if necessary research anything you don't understand yourself. I'm not answering any questions on this.

Before I post the code, some setup: this is all in a class that manages the AV stuff. I have a button on screen to start/stop recording. When it is touched, the toggleRecording method is called.

Code:
- (id) initWithViewForPreview:(UIImageView *) aView
{
     if ((self = [super init]))
     {
          self.previewView = aView;
          self.captureSession = [[AVCaptureSession alloc] init];
          self.captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
          NSError *error = nil;
          AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:&error];
          if (input)
          {
               [self.captureSession addInput:input];
          }
          else
          {
               NSLog(@"Error creating video input device");
          }
          AVCaptureVideoDataOutput *outputData = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
          [outputData setSampleBufferDelegate:self queue:dispatch_queue_create("renderqueue",NULL)];

          // Set the video output to store frames in BGRA (it is supposed to be faster)
          NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
          NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
          NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
          [outputData setVideoSettings:videoSettings];              
          [self.captureSession addOutput:outputData];
          isRecording = NO;
     }
     return self;
}
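One thing the init above does not show is how the camera feed reaches previewView: aView is stored but never used afterwards. A minimal sketch of that wiring, assuming a standard AVCaptureVideoPreviewLayer (everything other than previewView and captureSession is an assumption, not part of the original code), could be added at the end of the if block:

Code:
// Rough sketch, not part of the original init: attach a live preview of the session
// to the view that was passed in. AVCaptureVideoPreviewLayer draws the camera feed
// without touching the data-output path used for recording.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.frame = self.previewView.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.previewView.layer addSublayer:previewLayer];
[self.captureSession startRunning];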
The init method above has some issues, but works. This is the action for the button to start/stop recording:
Code:
- (void) toggleRecording
{
     if (isRecording)
     {
          NSLog(@"Stopping recording");
          [self.assetWriterInput markAsFinished];
          [self.assetWriter endSessionAtSourceTime:recordStartTime];
          [self.assetWriter finishWriting];
          NSLog(@"Export done");
     }
     else
     {
          NSLog(@"Starting to record");
          NSError *error = nil;
          NSURL *outputPath = [self tempFileURL];
          if (![outputPath isFileURL])
          {
               NSLog(@"Not file URL");
          }
          self.assetWriter = [AVAssetWriter assetWriterWithURL:outputPath fileType:AVFileTypeQuickTimeMovie  error:&error];
          if (error != nil)
          {
               NSLog(@"Creation of assetWriter resulting in a non-nil error");
               NSLog(@"%@", [error localizedDescription]);
               NSLog(@"%@", [error localizedFailureReason]);
          }    
          NSMutableDictionary *d=[[NSMutableDictionary alloc] init];
          [d setValue: AVVideoCodecH264 forKey: AVVideoCodecKey];
          [d setValue:[NSNumber numberWithInt:1280] forKey:AVVideoWidthKey];
          [d setValue:[NSNumber numberWithInt:720] forKey:AVVideoHeightKey];
          self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:d];
          if (self.assetWriterInput == nil)
          {
               NSLog(@"assetWriterInput is nil");
          }
          //self.assetWriterInput.expectsMediaDataInRealTime = YES; // If you uncomment this you get an exception saying it's not implemented yet (this may well not be true anymore: this was written on a very early 4.1 beta)
          [self.assetWriter addInput:self.assetWriterInput];
          [self.assetWriter startWriting];
          [self.assetWriter startSessionAtSourceTime:recordStartTime];
     }
     isRecording = !isRecording;
}
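The snippet above relies on [self tempFileURL], which isn't shown in the post. One possible implementation, assuming a uniquely named file in NSTemporaryDirectory() (this method body is an assumption, not part of the original code):

Code:
// Hypothetical helper: the original post never shows tempFileURL.
// Writes to a unique path in the app's temporary directory.
- (NSURL *) tempFileURL
{
     NSString *fileName = [NSString stringWithFormat:@"capture-%.0f.mov", [[NSDate date] timeIntervalSince1970]];
     NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:fileName];
     // AVAssetWriter fails if the output file already exists, so remove any leftover.
     [[NSFileManager defaultManager] removeItemAtPath:path error:NULL];
     return [NSURL fileURLWithPath:path];
}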

Finally, we have a callback that we can use to get each frame as it becomes available:

Code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
     if (!CMSampleBufferDataIsReady(sampleBuffer))
     {
          NSLog(@"sampleBuffer data is not ready");
     }

     CMTime timeNow = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
     CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

     // Lock the image buffer
     CVPixelBufferLockBaseAddress(imageBuffer,0); 

     // Get information about the image
     uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
     size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
     size_t width = CVPixelBufferGetWidth(imageBuffer); 
     size_t height = CVPixelBufferGetHeight(imageBuffer); 

     // Create a CGImageRef from the CVImageBufferRef
     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
     CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 

     // Temp: draw a black rect: replace the next 2 lines with the correct compositing that you want.
     CGContextSetFillColorWithColor(newContext, [[UIColor blackColor] CGColor]);
     CGContextFillRect(newContext, CGRectMake(0, 0, 400, 400));

     // We unlock the  image buffer
     CVPixelBufferUnlockBaseAddress(imageBuffer,0);

     // We release some components
     CGContextRelease(newContext); 
     CGColorSpaceRelease(colorSpace);
     if (isRecording)
     {
          if (![self.assetWriterInput isReadyForMoreMediaData])
          {
               NSLog(@"Not ready for data :(");
          }
          NSLog(@"Trying to append");
          if (![self.assetWriterInput appendSampleBuffer:sampleBuffer])
          {
               NSLog(@"Failed to append pixel buffer");
          }
          else 
          {
               NSLog(@"Append worked");
          }
     }
     recordStartTime = timeNow;
}

As I said: you either do the research to understand this code or you don't. This is the total extent of the help I am willing to give.
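For context, rough usage of the class above from a view controller might look like the following. The class name CaptureManager and the previewImageView outlet are assumptions; only initWithViewForPreview:, toggleRecording and the captureSession property come from the code above.

Code:
// Hypothetical usage; CaptureManager and previewImageView are assumed names.
CaptureManager *capture = [[CaptureManager alloc] initWithViewForPreview:self.previewImageView];
[capture.captureSession startRunning];   // frames begin arriving in the delegate callback

// Later, when the record button is touched:
[capture toggleRecording];               // starts writing frames to the temp file

// Touched again:
[capture toggleRecording];               // marks the input finished and closes the movie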
 

knonk

macrumors newbie
Original poster
Oct 27, 2010
Oh, thanks so much for your explanation! I'm trying your code and it seems to be working. Thanks again!
 

knonk

macrumors newbie
Original poster
Oct 27, 2010
Hi robbieduncan,
Using camera switching to simulate simultaneous capture doesn't work; I tested it on the device and it wasn't as smooth as I wanted. I'm thinking about using multiple threads to access both cameras simultaneously. Do you think that's possible?
Thanks for your solution for displaying captured video from both cameras by merging the frames one by one. I'm writing some code to test it, but I have a problem with audio: I get the image data from the (CMSampleBufferRef)sampleBuffer, display it as a UIImage, and save it to a file, but the result is video only, with no audio.
Do you have any idea how to save the merged video with audio? If anyone else has an idea about this, please help!
 

robbieduncan

Moderator emeritus
Jul 24, 2002
I don't think threading will help: AVFoundation will not let both cameras be active at once.

As for audio, you need to add a new capture device, a matching asset writer input, and so on.
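To make that a little more concrete, here is a rough, untested sketch of the audio side, reusing the session and asset writer from the earlier code (the audioWriterInput property and the nil pass-through settings are assumptions, not something posted above):

Code:
// Rough sketch of adding audio capture and writing alongside the video code above.
// 1. Capture side: add a microphone input and an audio data output to the existing session.
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *audioError = nil;
AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&audioError];
if (micInput)
{
     [self.captureSession addInput:micInput];
}
AVCaptureAudioDataOutput *audioOutput = [[[AVCaptureAudioDataOutput alloc] init] autorelease];
[audioOutput setSampleBufferDelegate:self queue:dispatch_queue_create("audioqueue", NULL)];
[self.captureSession addOutput:audioOutput];

// 2. Writing side: add a second AVAssetWriterInput for audio to the same asset writer.
//    (audioWriterInput is an assumed property; nil settings ask the writer to pass samples through.)
self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
[self.assetWriter addInput:self.audioWriterInput];

// 3. In captureOutput:didOutputSampleBuffer:fromConnection:, check which output delivered the
//    buffer and append it to the matching writer input, e.g.:
//    if (captureOutput == audioOutput) { [self.audioWriterInput appendSampleBuffer:sampleBuffer]; }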
 

pawelnathan

macrumors newbie
Feb 28, 2017
Hi robbieduncan!

Your post seems very interesting, but I don't quite see how it is possible to access both cameras with your code. I'm sorry to say the developer forum link is broken, so there is no information there anymore.
I have already done some research on this, but I am very curious to find ideas on how to make it possible to access two cameras at the same time. Can you give me a hint?

Regards,
Werner


 