Capture video simultaneously from both front and rear camera

Discussion in 'iPhone/iPad Programming' started by knonk, Oct 27, 2010.

  1. macrumors newbie

    Joined:
    Oct 27, 2010
    #1
    Hi,
    I'm facing a hard problem. I'm developing an app to capture video from both the front and rear cameras simultaneously on an iPhone 4, without jailbreaking, and to save it as a single video in AVI format. I'm facing 2 problems:
    1. Capturing video simultaneously from both cameras (front and rear).
    2. Saving the captured videos to a single file in AVI format.

    I have worked around with the multimedia libraries in the iPhone SDK and know that the UIImagePickerController class lets me capture video on the iPhone, but it only supports one camera at a time, via its cameraDevice property. It seems UIImagePickerController can't help me any further...

    I want to display both captured video streams on screen: one as the background (full screen) and one in the foreground, like the PIP effect (the picture-in-picture effect used in television programs). Then I want to save them (the captured videos) to a single video file, preferably in AVI format.

    For that purpose, I searched around on Google and found that FFMPEG seems to support merging 2 videos with a PIP effect, but I haven't seen any tutorial about it, and I have no previous experience with FFMPEG...

    I'm very sad now, because problems (1) and (2) are still there. I don't know how to capture video simultaneously on the iPhone, then merge and save the streams into a single video file with a PIP effect.

    Does anyone have any ideas about these problems? Please help me!
     
  2. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #2
    It's not possible to capture from both cameras at once. I've had that confirmed by Apple.
     
  3. thread starter macrumors newbie

    Joined:
    Oct 27, 2010
    #3
    Thanks for the information, but did you contact Apple directly, and did they confirm that? Sorry for asking again, but I need to be sure, and I have to find a solution to this problem.

    If you are right, then problem (1) is impossible, but I still face the second problem: saving 2 video files into 1 with a PIP effect.
    Do you know how to do that on the iPhone 4? Is there any framework that can help me do that?
     
  4. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #4
    1) On the developer forums. I posted this question regarding the 4.1 beta when Apple gave low-level access to the cameras.

    2) Use the AVFoundation framework: you can get pixel-level access to each video frame and composite onto it. This is a low level API and quite tricky to use. Be prepared for a lot of annoyance.

    If you have access to the dev forums you might find this thread useful: https://devforums.apple.com/thread/59490?start=0&tstart=0
     
  5. thread starter macrumors newbie

    Joined:
    Oct 27, 2010
    #5
    Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there. Could you repost that useful thread here?
     
  6. macrumors 603

    Joined:
    Jul 29, 2003
    Location:
    Silicon Valley
    #6
    You will need a paid Developer enrollment to develop for any non-jailbroken iPhone device. So you might as well get one.
     
  7. macrumors 68020

    dccorona

    Joined:
    Jun 12, 2008
    #7
    Why would they spend $100 only to find out their app can't be made?
    Seems wasteful to me.
     
  8. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #8
    I certainly can't repost the whole thread: I only have any sort of right to repost my own contributions. Fortunately for you they are the useful bit. The code below demonstrates how to get per-pixel data of the video stream, composite something onto it (in this case a black square) and save the video.

    Note: you are expected to read, understand and if necessary research anything you don't understand yourself. I'm not answering any questions on this.

    Before I post the code some setup: This is all in a class that manages the AV stuff. I have a button on screen to start/stop recording. When this is touched the toggleRecording method is called.

    Code:
    - (id) initWithViewForPreview:(UIImageView *) aView
    {
         if ((self = [super init]))
         {
              self.previewView = aView;
              self.captureSession = [[AVCaptureSession alloc] init];
              self.captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
              NSError *error = nil;
              AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:&error];
              if (input)
              {
                   [self.captureSession addInput:input];
              }
              else
              {
                   NSLog(@"Error creating video input device");
              }
              AVCaptureVideoDataOutput *outputData = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
              [outputData setSampleBufferDelegate:self queue:dispatch_queue_create("renderqueue",NULL)];
    
              // Set the video output to store frame in BGRA (It is supposed to be faster)
              NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
              NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
              NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
              [outputData setVideoSettings:videoSettings];              
              [self.captureSession addOutput:outputData];
              isRecording = NO;
         }
         return self;
    }
    
    The above has some issues, but works. This is the action for the button to start/stop recording:
    Code:
    - (void) toggleRecording
    {
         if (isRecording)
         {
              NSLog(@"Stopping recording");
              [self.assetWriterInput markAsFinished];
              [self.assetWriter endSessionAtSourceTime:recordStartTime];
              [self.assetWriter finishWriting];
              NSLog(@"Export done");
         }
         else
         {
              NSLog(@"Starting to record");
              NSError *error = nil;
              NSURL *outputPath = [self tempFileURL];
              if (![outputPath isFileURL])
              {
                   NSLog(@"Not file URL");
              }
              self.assetWriter = [AVAssetWriter assetWriterWithURL:outputPath fileType:AVFileTypeQuickTimeMovie  error:&error];
              if (error != nil)
              {
                   NSLog(@"Creation of assetWriter resulting in a non-nil error");
               NSLog(@"%@", [error localizedDescription]);
               NSLog(@"%@", [error localizedFailureReason]);
              }    
          NSMutableDictionary *d = [[[NSMutableDictionary alloc] init] autorelease];
              [d setValue: AVVideoCodecH264 forKey: AVVideoCodecKey];
              [d setValue:[NSNumber numberWithInt:1280] forKey:AVVideoWidthKey];
              [d setValue:[NSNumber numberWithInt:720] forKey:AVVideoHeightKey];
              self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:d];
              if (self.assetWriterInput == nil)
              {
                   NSLog(@"assetWriterInput is nil");
              }
          //self.assetWriterInput.expectsMediaDataInRealTime = YES; // If you uncomment this you get an exception saying it's not implemented yet (this may well not be true anymore: this was written on a very early 4.1 beta)
              [self.assetWriter addInput:self.assetWriterInput];
              [self.assetWriter startWriting];
              [self.assetWriter startSessionAtSourceTime:recordStartTime];
         }
         isRecording = !isRecording;
    }
    
    Finally we have a callback that we can use to get each frame as it becomes available

    Code:
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    {
         if (!CMSampleBufferDataIsReady(sampleBuffer))
         {
              NSLog(@"sampleBuffer data is not ready");
         }
    
         CMTime timeNow = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
         CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    
         // Lock the image buffer
         CVPixelBufferLockBaseAddress(imageBuffer,0); 
    
         // Get information about the image
         uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
         size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
         size_t width = CVPixelBufferGetWidth(imageBuffer); 
         size_t height = CVPixelBufferGetHeight(imageBuffer); 
    
         // Create a CGImageRef from the CVImageBufferRef
         CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
         CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    
          // Temp: draw a black rect: replace the next 2 lines with the correct compositing that you want.
          CGContextSetFillColorWithColor(newContext, [[UIColor blackColor] CGColor]);
          CGContextFillRect(newContext, CGRectMake(0, 0, 400, 400));
    
         // We unlock the  image buffer
         CVPixelBufferUnlockBaseAddress(imageBuffer,0);
    
         // We release some components
         CGContextRelease(newContext); 
         CGColorSpaceRelease(colorSpace);
         if (isRecording)
         {
              if (![self.assetWriterInput isReadyForMoreMediaData])
              {
                   NSLog(@"Not ready for data :(");
              }
              NSLog(@"Trying to append");
              if (![self.assetWriterInput appendSampleBuffer:sampleBuffer])
              {
                   NSLog(@"Failed to append pixel buffer");
              }
              else 
              {
                   NSLog(@"Append worked");
              }
         }
         recordStartTime = timeNow;
    }
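
    To get the PIP effect asked about earlier, the black-rectangle placeholder in the callback above could be swapped for a draw of the second stream's frame into a corner of the same context. The following is only a sketch: `pipImage` is a hypothetical CGImageRef holding the foreground frame (since the device cannot run both cameras at once, it would have to come from some other source), and the size/margin values are arbitrary.
    Code:
    // Sketch: composite a foreground frame as a picture-in-picture inset.
    // 'context' is the CGBitmapContextRef wrapping the captured pixel buffer;
    // 'pipImage' is a placeholder CGImageRef for the second stream's frame.
    static void DrawPIPFrame(CGContextRef context, CGImageRef pipImage,
                             size_t width, size_t height)
    {
         // Inset the small picture in a corner at quarter size.
         CGFloat pipWidth  = width  / 4.0;
         CGFloat pipHeight = height / 4.0;
         CGFloat margin    = 16.0;
         CGRect pipRect = CGRectMake(width - pipWidth - margin,
                                     height - pipHeight - margin,
                                     pipWidth, pipHeight);

         // Optional border so the inset stands out against the background video.
         CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
         CGContextStrokeRectWithWidth(context, CGRectInset(pipRect, -2.0, -2.0), 2.0);

         // Draw the foreground frame into the inset rectangle.
         CGContextDrawImage(context, pipRect, pipImage);
    }
    
    You would call this in place of the CGContextSetFillColorWithColor/CGContextFillRect pair, before unlocking the pixel buffer.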
    
    As I said: you either do the research to understand this code or you don't. This is the total extent of the help I am willing to give.
     
  9. thread starter macrumors newbie

    Joined:
    Oct 27, 2010
    #9
    Oh, thanks so much for your explanation! I'm trying your code now. It seems it's going to work. Thanks again!
     
  10. thread starter macrumors newbie

    Joined:
    Oct 27, 2010
    #10
    Hi robbieduncan,
    Switching cameras to simulate simultaneous capture doesn't work: I tested it on the device and it wasn't as smooth as I wanted. I'm thinking about using multiple threads to access both cameras simultaneously. Do you think that's possible or not?
    Thanks for your solution for displaying captured video from both cameras by merging the image frames one by one. I'm writing some code to test it, but I have a problem with audio: I get image data from the (CMSampleBufferRef)sampleBuffer, display it in a UIImage and also save it to a file, but the result is just video without audio.
    Do you have any idea how to save the merged video with audio? If anyone has ideas about this, please help!
     
  11. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #11
    I don't think threading will help: AVFoundation will not let both cameras be active at once.

    As for audio you need to add a new capture device, asset writer input and so on.
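
    A rough sketch of the extra setup that implies, alongside the video setup shown earlier (untested, and `audioWriterInput` is a hypothetical property you would have to add; the delegate would also need to route audio sample buffers to this writer input rather than the video one):
    Code:
    // Sketch: add an audio capture input to the session and a matching
    // AVAssetWriterInput, alongside the existing video ones.
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (audioInput)
    {
         [self.captureSession addInput:audioInput];
    }

    AVCaptureAudioDataOutput *audioOutput = [[[AVCaptureAudioDataOutput alloc] init] autorelease];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_queue_create("audioqueue", NULL)];
    [self.captureSession addOutput:audioOutput];

    // A second writer input for audio. Passing nil outputSettings asks the
    // writer to pass the source format through (device-dependent behaviour,
    // so treat this as a sketch, not a guaranteed configuration).
    self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
    [self.assetWriter addInput:self.audioWriterInput];
    
    In the captureOutput:didOutputSampleBuffer:fromConnection: delegate method you would then check which output the buffer came from (the captureOutput parameter) and append to the matching writer input.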
     
