Go Back   MacRumors Forums > Apple Systems and Services > Programming > iPhone/iPad Programming

Old Oct 27, 2010, 03:36 AM   #1
knonk
macrumors newbie
 
Join Date: Oct 2010
Capture video simultaneously from both front and rear camera

Hi,
I'm facing a hard problem. I'm developing an app that captures video from both the front and rear cameras simultaneously on an iPhone 4, without jailbreaking, and saves the result to a single video file in AVI format. I'm facing two problems:
1. Capturing video simultaneously from both cameras (front and rear).
2. Saving the captured videos to a single file in AVI format.

I have worked with the multimedia libraries in the iPhone SDK and know that the UIImagePickerController class lets me capture video on the iPhone, but it only supports one camera at a time via its cameraDevice property. It seems UIImagePickerController can't help me any further.

I want to display the two captured video streams on screen at once: one as a full-screen background and the other as a smaller foreground overlay, like the picture-in-picture (PIP) effect on television. Then I want to save them to a single video file, preferably in AVI format.

For that purpose I searched around on Google and found that FFmpeg seems to support merging two videos with a PIP effect, but I couldn't find any tutorial about it, and I have no prior experience with FFmpeg.
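From what little I've found so far, FFmpeg's overlay filter might do the PIP compositing. Something like this, maybe (untested, and the filenames and overlay size are just placeholders):

```shell
# Scale the foreground clip down to 320x240 and overlay it 10px from
# the top-left corner of the background clip, re-encoding the result.
ffmpeg -i background.mp4 -i foreground.mp4 \
  -filter_complex "[1:v]scale=320:240[pip];[0:v][pip]overlay=10:10" \
  output.avi
```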

I'm stuck, because problems (1) and (2) are still unsolved. I don't know how to capture video simultaneously on the iPhone, nor how to merge and save the streams into a single video file with a PIP effect.

Does anyone have ideas about these problems? Please help!
Old Oct 27, 2010, 04:18 AM   #2
robbieduncan
Moderator
 
Join Date: Jul 2002
Location: London
It's not possible to capture from both cameras at once. I've had that confirmed by Apple.
Old Oct 27, 2010, 05:14 AM   #3
knonk
Thread Starter
macrumors newbie
 
Join Date: Oct 2010
Quote:
Originally Posted by robbieduncan View Post
It's not possible to capture from both cameras at once. I've had that confirmed by Apple.
Thanks for the information, but did you contact Apple and did they confirm that? Sorry for asking again, but I need to be sure, and I have to find a solution to this problem.

If you're right, then problem (1) is impossible, but I still face the second problem: saving two video files as one with a PIP effect.
Do you know how to do that on an iPhone 4? Is there any framework that can help?
Old Oct 27, 2010, 05:50 AM   #4
robbieduncan
Moderator
 
Join Date: Jul 2002
Location: London
Quote:
Originally Posted by knonk View Post
Thanks for the information, but did you contact Apple and did they confirm that? Sorry for asking again, but I need to be sure, and I have to find a solution to this problem.

If you're right, then problem (1) is impossible, but I still face the second problem: saving two video files as one with a PIP effect.
Do you know how to do that on an iPhone 4? Is there any framework that can help?
1) On the developer forums. I posted this question regarding the 4.1 beta when Apple gave low-level access to the cameras.

2) Use the AVFoundation framework: you can get pixel-level access to each video frame and composite onto it. This is a low level API and quite tricky to use. Be prepared for a lot of annoyance.

If you have access to the dev forums you might find this thread useful: https://devforums.apple.com/thread/5...art=0&tstart=0
Old Oct 27, 2010, 10:00 PM   #5
knonk
Thread Starter
macrumors newbie
 
Join Date: Oct 2010
Quote:
Originally Posted by robbieduncan View Post
1) On the developer forums. I posted this question regarding the 4.1 beta when Apple gave low-level access to the cameras.

2) Use the AVFoundation framework: you can get pixel-level access to each video frame and composite onto it. This is a low level API and quite tricky to use. Be prepared for a lot of annoyance.

If you have access to the dev forums you might find this thread useful: https://devforums.apple.com/thread/5...art=0&tstart=0
Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there. Could you repost that useful thread here?
Old Oct 27, 2010, 11:24 PM   #6
firewood
macrumors 603
 
Join Date: Jul 2003
Location: Silicon Valley
Quote:
Originally Posted by knonk View Post
Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there.
You will need a paid Developer enrollment to develop for any non-jailbroken iPhone device. So you might as well get one.
Old Oct 28, 2010, 12:04 AM   #7
dccorona
macrumors 68020
 
Join Date: Jun 2008
Quote:
Originally Posted by firewood View Post
You will need a paid Developer enrollment to develop for any non-jailbroken iPhone device. So you might as well get one.
Why would they spend $100 only to find out their app can't be made?
Seems wasteful to me.
Old Oct 28, 2010, 04:33 AM   #8
robbieduncan
Moderator
 
Join Date: Jul 2002
Location: London
Quote:
Originally Posted by knonk View Post
Hi, thanks for your advice, but I don't have a paid account on the Apple Developer Forums, so I can't log in there. Could you repost that useful thread here?
I certainly can't repost the whole thread: I only have the right to repost my own contributions. Fortunately for you, those are the useful bits. The code below demonstrates how to get at the per-pixel data of the video stream, composite something onto it (in this case a black square) and save the video.

Note: you are expected to read, understand and if necessary research anything you don't understand yourself. I'm not answering any questions on this.

Before I post the code, some setup: this is all in a class that manages the AV stuff. I have a button on screen to start/stop recording. When it is touched, the toggleRecording method is called.

Code:
- (id) initWithViewForPreview:(UIImageView *) aView
{
     if ((self = [super init]))
     {
          self.previewView = aView;
          self.captureSession = [[AVCaptureSession alloc] init];
          self.captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
          NSError *error = nil;
          AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:&error];
          if (input)
          {
               [self.captureSession addInput:input];
          }
          else
          {
                NSLog(@"Error creating video input device: %@", error);
          }
          AVCaptureVideoDataOutput *outputData = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
          [outputData setSampleBufferDelegate:self queue:dispatch_queue_create("renderqueue",NULL)];

          // Set the video output to store frame in BGRA (It is supposed to be faster)
          NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
          NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
          NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
          [outputData setVideoSettings:videoSettings];              
          [self.captureSession addOutput:outputData];
          isRecording = NO;
     }
     return self;
}
The above has some issues, but works. This is the action for the button to start/stop recording:
Code:
- (void) toggleRecording
{
     if (isRecording)
     {
          NSLog(@"Stopping recording");
          [self.assetWriterInput markAsFinished];
          [self.assetWriter endSessionAtSourceTime:recordStartTime];
          [self.assetWriter finishWriting];
          NSLog(@"Export done");
     }
     else
     {
          NSLog(@"Starting to record");
          NSError *error = nil;
          NSURL *outputPath = [self tempFileURL];
          if (![outputPath isFileURL])
          {
               NSLog(@"Not file URL");
          }
          self.assetWriter = [AVAssetWriter assetWriterWithURL:outputPath fileType:AVFileTypeQuickTimeMovie  error:&error];
          if (error != nil)
          {
               NSLog(@"Creation of assetWriter resulting in a non-nil error");
               NSLog(@"%@", [error localizedDescription]);
               NSLog(@"%@", [error localizedFailureReason]);
          }    
          NSMutableDictionary *d = [[[NSMutableDictionary alloc] init] autorelease];
          [d setValue: AVVideoCodecH264 forKey: AVVideoCodecKey];
          [d setValue:[NSNumber numberWithInt:1280] forKey:AVVideoWidthKey];
          [d setValue:[NSNumber numberWithInt:720] forKey:AVVideoHeightKey];
          self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:d];
          if (self.assetWriterInput == nil)
          {
               NSLog(@"assetWriterInput is nil");
          }
          //self.assetWriterInput.expectsMediaDataInRealTime = YES; // If you uncomment this you get an exception saying it's not implemented yet (this may well not be true anymore: this was written on a very early 4.1 beta)
          [self.assetWriter addInput:self.assetWriterInput];
          [self.assetWriter startWriting];
          [self.assetWriter startSessionAtSourceTime:recordStartTime];
     }
     isRecording = !isRecording;
}
Finally, we have a callback that we can use to get each frame as it becomes available:

Code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
     if (!CMSampleBufferDataIsReady(sampleBuffer))
     {
          NSLog(@"sampleBuffer data is not ready");
     }

     CMTime timeNow = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
     CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

     // Lock the image buffer
     CVPixelBufferLockBaseAddress(imageBuffer,0); 

     // Get information about the image
     uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
     size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
     size_t width = CVPixelBufferGetWidth(imageBuffer); 
     size_t height = CVPixelBufferGetHeight(imageBuffer); 

     // Create a CGImageRef from the CVImageBufferRef
     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
     CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 

     // Temp: draw a black rect: replace the next 2 lines with the correct compositing that you want.
     CGContextSetFillColorWithColor(newContext, [[UIColor blackColor] CGColor]);
     CGContextFillRect(newContext, CGRectMake(0, 0, 400, 400));

     // We unlock the image buffer
     CVPixelBufferUnlockBaseAddress(imageBuffer,0);

     // We release some components
     CGContextRelease(newContext); 
     CGColorSpaceRelease(colorSpace);
     if (isRecording)
     {
          if (![self.assetWriterInput isReadyForMoreMediaData])
          {
               NSLog(@"Not ready for data :(");
          }
          NSLog(@"Trying to append");
          if (![self.assetWriterInput appendSampleBuffer:sampleBuffer])
          {
               NSLog(@"Failed to append pixel buffer");
          }
          else 
          {
               NSLog(@"Append worked");
          }
     }
     recordStartTime = timeNow;
}
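If you want an idea of what the compositing step might look like, the black rectangle above could be replaced with something along these lines (untested sketch: pipImage is a hypothetical UIImage holding the frame from the other source; the overlay size and position are arbitrary):

```objc
// Hypothetical: pipImage holds the frame to composite as the PIP overlay.
// Note that CGContext coordinates are flipped relative to UIKit, so the
// y origin here is measured from the bottom of the frame.
CGImageRef pipFrame = [pipImage CGImage];
CGRect pipRect = CGRectMake(20, height - 260, 320, 240);
CGContextDrawImage(newContext, pipRect, pipFrame);
```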
As I said: you either do the research to understand this code or you don't. This is the total extent of the help I am willing to give.
Old Oct 28, 2010, 10:31 PM   #9
knonk
Thread Starter
macrumors newbie
 
Join Date: Oct 2010
Oh, thanks so much for your explanation! I'm trying your code and it seems like it's going to work. Thanks again!
Old Nov 1, 2010, 11:15 PM   #10
knonk
Thread Starter
macrumors newbie
 
Join Date: Oct 2010
Hi robbieduncan,
Switching between cameras to simulate simultaneous capture doesn't work: I tested it on the device and it wasn't as smooth as I wanted. I'm now thinking about using multiple threads to access both cameras simultaneously. Do you think that's possible?
Thanks for your solution of displaying the captured video from both cameras by merging the frames one by one. I'm writing some code to test it, but I have a problem with audio: I get the image data from the (CMSampleBufferRef)sampleBuffer, display it in a UIImage, and save it to a file, but the result is video without audio.
Do you have any idea how to save the merged video with audio? If anyone has an idea about this, please help!
Old Nov 2, 2010, 03:13 AM   #11
robbieduncan
Moderator
 
Join Date: Jul 2002
Location: London
I don't think threading will help: AVFoundation will not let both cameras be active at once.

As for audio, you need to add a new capture device, a new asset writer input and so on.
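Roughly, and untested (this just follows the same pattern as the video code above; audioWriterInput is a hypothetical extra property, and error handling is omitted):

```objc
// Add a microphone input to the existing capture session...
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
[self.captureSession addInput:micInput];

// ...and an audio data output alongside the video one.
AVCaptureAudioDataOutput *audioOutput = [[[AVCaptureAudioDataOutput alloc] init] autorelease];
[audioOutput setSampleBufferDelegate:self queue:dispatch_queue_create("audioqueue", NULL)];
[self.captureSession addOutput:audioOutput];

// The asset writer then needs a second input for the audio track
// (nil outputSettings means the samples are passed through unmodified).
self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
[self.assetWriter addInput:self.audioWriterInput];

// In the delegate callback, check which output the sample buffer came
// from and append it to the matching writer input.
```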
