Canon DSLR interview workflow
Before describing my DSLR workflow for shooting interviews let me first explain why it’s worth doing all of this rather than sticking to my EX1 and making life easier. When shooting an interview with something like the EX1 it’s best to put a lot of distance between the camera and the subject and have the subject a good distance from the background in order to achieve a shallow depth of field. Having a shallow depth of field is advantageous in an interview because it draws the eye to the subject and makes the background less distracting whilst still being visible enough to set the scene.
Canon DSLRs allow you to achieve this look without so much room, so they are a great tool for shooting interviews. They do, however, need some special thought when it comes to workflow.
For the purposes of this description I’ll cover the workflow steps without necessarily going into depth on each one; certain parts I’ll cover in more detail later if required. Where possible I’ll include links to other resources where the individual steps can be learned in more detail.
My usual setup for shooting an interview using DSLRs is to have my 7D as the main camera with a follow focus, matte box, Rode VideoMic and an external Marshall monitor attached. I use the 7D mainly because it works better with the Marshall monitor than the 5D2. It’s such a shame that even after giving the 5D2 the frame rates of the 7D and a few additional features, Canon didn’t fix its inability to keep running an HDMI monitor at the same resolution during recording.
I then use the 5D2 as a second camera on a glidetrack grabbing subtle dolly shots and close-ups during the interview.
Of course both cameras are set to the same recording format; now that the 5D2 can shoot 25p this is generally 1920×1080/25p in my case. I’ll normally set a shutter speed of 1/50th and then adjust the aperture and apply ND using filters as necessary.
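The 1/50th shutter at 25p follows the common 180-degree shutter guideline: the shutter denominator is roughly twice the frame rate. As a trivial sketch (the helper name is mine, not from the article):

```python
# 180-degree shutter rule: shutter denominator ~ 2 x frame rate.
# Hypothetical helper for illustration only.
def shutter_for(frame_rate: float) -> float:
    """Return the 180-degree shutter speed denominator for a frame rate."""
    return 2 * frame_rate

print(shutter_for(25))  # 25p -> 1/50th
print(shutter_for(30))  # 30p -> 1/60th
```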
Getting correct exposure can be a bit tricky with DSLRs, so I utilise two main methods. Firstly I have a light meter, which is usually a reliable way of getting settings for correctly exposed skin tones. The other method I use a lot is the false colour function on my Marshall monitor. After using this for a while you soon get used to exposing correctly.
One of the biggest changes in workflow that results from shooting video with DSLRs like the Canon 5D Mark II & 7D is handling the audio. Even with the 2.0.4 firmware update for the 5D2 that allows manual gain control, I still don’t record my main sound in camera and prefer using the Zoom H4n.
The main reasons for this are control and monitoring rather than the actual recording quality, although I’m sure the sound quality of the Zoom is better than that of the DSLRs as well. Even with the new manual gain controls on the 5D2 you can’t adjust levels during a take, and without an external pre-amp device there’s no way of monitoring the actual sound being recorded.
I record sound on both DSLRs using Rode VideoMics and also on the Zoom H4n using my main mics, which are either a Sennheiser 416 shotgun or Sony ECM-77 lav mics connected to Sony UWP-V1 wireless systems. This allows me to monitor and adjust recording levels on the fly. I do a levels check and start the Zoom recording before rolling the cameras. The Zoom then continues to record audio regardless of how many times the cameras are stopped and started. Quality wise, I record sound as 48kHz 24-bit WAV files.
If using wireless lavs I’ll often carry the Zoom and wireless receiver with me, allowing me to monitor the audio through headphones while still being able to move around working with both cameras. Of course you need to be careful not to pull any cables when doing this, and I always use the hold feature on the Zoom to make all of the controls inactive.
I’m always paranoid about making sure my media is protected, so as soon as a card is full or I reach the end of a shoot I copy my cards to my Nexto DI NVS2500. It’s always very reassuring to know that my footage and audio are then on both the original cards and the NVS2500’s hard drive.
Another nice thing about working with the NVS2500 is that when it comes to setting up a project and ingesting footage I can get all of my media from this one device. The first thing I do is copy all of the media from the shoot onto my Mac’s media storage drives. I like to copy the entire directory structure from the CF and SD cards over to these drives; this is especially important if using Canon’s E1 plugin for Final Cut, as the plugin expects to find the original directory structure.
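Copying a whole card tree intact is easy to script. This is just an illustrative sketch (the function and paths are hypothetical, and it assumes a straight file copy to a local drive rather than the NVS2500):

```python
# Hedged sketch: copy an entire card's directory structure to the media
# store, preserving the layout so tools like Canon's E1 plugin can still
# find the original folder hierarchy. Names here are illustrative.
import shutil
from pathlib import Path

def copy_card(card_root: str, store_root: str, shoot_name: str) -> Path:
    """Copy the whole card tree under store_root/shoot_name, keeping structure."""
    dest = Path(store_root) / shoot_name / Path(card_root).name
    shutil.copytree(card_root, dest)  # raises if dest already exists
    return dest
```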
Ingest and transcode
When it comes to ingesting the media for use in Final Cut there are a few things to consider. Final Cut does not like working with the native H.264 files that the Canon DSLRs produce; it works, but it makes editing a painful experience. So the first thing I do is transcode the footage into a more usable format. I like to convert the footage to Apple’s ProRes 422 (LT) format. This can be done in a number of ways.
The first option is to use a free application called MPEG Streamclip. Using this software is a simple process that involves manually dragging all of the .MOV files generated by the DSLRs into the app and having it convert them all to ProRes using its batch list feature. This is by far the fastest way of transcoding footage I’ve found, so it’s the one I use if I’m pushed for time and don’t need metadata or timecode in my clips.
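The same batch idea can be sketched with a command-line encoder instead. This hypothetical helper (ffmpeg isn’t mentioned in the article and is assumed to be installed) builds one ffmpeg command per clip; ffmpeg’s `prores_ks` encoder with `-profile:v 1` corresponds to ProRes 422 (LT):

```python
# Illustrative sketch only: build one ffmpeg command per .MOV file,
# mirroring MPEG Streamclip's batch-list approach on the command line.
from pathlib import Path

def prores_lt_commands(source_dir: str, dest_dir: str) -> list[list[str]]:
    """Build one ffmpeg command per .MOV file found in source_dir."""
    cmds = []
    for mov in sorted(Path(source_dir).glob("*.MOV")):
        out = Path(dest_dir) / (mov.stem + "_LT.mov")
        cmds.append(["ffmpeg", "-i", str(mov),
                     "-c:v", "prores_ks", "-profile:v", "1",  # 1 = ProRes LT
                     "-c:a", "pcm_s16le", str(out)])
    return cmds
```

Each command list could then be handed to `subprocess.run` to do the actual transcode.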
If using MPEG Streamclip I create the ProRes files in a ‘ProRes’ directory on my scratch disk (the disk defined in Final Cut as my ‘Video Capture’ location). As with the EX1 I always keep the file structure from my original cards intact on my ‘Video Store’ drives. I regard the ProRes files as replaceable, as they can always be recreated at a later date as long as you keep the original files and directory structure from the CF cards.
It’s important to me to keep my DSLR workflow as similar as possible to my EX1 workflow; working in a way I’m familiar with means I work more efficiently.
The second option, and one that’s only recently become available, is to use Canon’s E1 plugin. This utilises the ‘Log & Transfer’ feature in Final Cut and gives you a lot more logging options than the previous method. The E1 plugin allows you to preview clips, mark in and out points, make clip notes and much more. Footage imported this way also contains timecode based on the time it was shot, which makes working with DSLR footage a lot more like using footage from a dedicated video camera like the EX1.
The downside to using the E1 software is speed. In my experience so far it seems slower at transcoding to ProRes than MPEG Streamclip, so you need to weigh up the benefits of both options for yourself. The E1 plugin has only been out for a few days at the time of writing, so time will tell if I start preferring this method. I do like the idea of keeping my solid state workflow as similar as possible between cameras, and this certainly makes using DSLRs very much like the XDCAM-EX EX1.
To give you an idea of the speed difference I ran a quick test using a 5 minute 16 second 5D2 file on my Mac Pro. To be fair this Mac is getting towards the end of its lifespan, but it’s a quad core 2.66GHz machine with 8GB RAM so still fairly beefy.
You can see that MPEG Streamclip transcoded this clip in 7:38 whereas the E1 plugin took 10:27.
When using the E1 plugin the ProRes files will generally be created on whatever scratch disk you define in Final Cut as your ‘Video Capture’ location. If you save a project before importing the footage the files are stored within a subdirectory matching the name of your project; if not, they will end up in an ‘untitled project’ folder. I don’t tend to use in and out points when importing footage using the Log & Transfer plugin; I import everything as complete clips and worry about choosing what I’m going to use in Final Cut. I find that more efficient than deciding which part of my clips I’m going to use in the Log & Transfer window, and it also makes it easier if you have to recreate the ProRes files at a later date.
One thing to note is that by default the E1 plugin transcodes to ProRes 444, which makes huge files that are really not necessary for the DSLR footage. To change this click on the gear icon (middle top of the window) and change the EOS Movie option to ProRes 422 (LT) or whatever you prefer.
You can also transcode the footage using Apple’s Compressor software, but I’ve found it to be quite unreliable when running large batches. Even using droplets can be problematic, so I recommend both of the options above over Compressor.
Once I’ve transcoded and imported the footage from both cameras into Final Cut, the next thing to do is import the audio from the Zoom and sync everything up. For this job there’s a fantastic Final Cut plugin called ‘PluralEyes’ made by Singular Software. The plugin costs around £100 (about $150). There’s a 30 day trial of the software available, so if you haven’t tried it out yet I highly recommend giving it a go.
Have a look at Singular Software’s tutorial videos; they do a great job of explaining how simple PluralEyes is to use.
This is where recording sound on the DSLRs comes in, because what PluralEyes does is look at the waveform on each camera track and line it up with the WAV file from the Zoom accordingly. I’ve posted a full review of PluralEyes if you’re interested in finding out more about it.
The alternative to using PluralEyes is to sync everything up manually. This is fine if you’re prepared to mark each shot in with a clapper, but I find it much easier not having to worry about doing that.
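For the curious, the core idea behind waveform syncing can be sketched in a few lines: slide the camera’s scratch track along the recorder’s track and keep the offset where the two waveforms agree best. This toy version is my own illustration (nothing like PluralEyes’ actual implementation) and uses a plain dot product on tiny sample lists:

```python
# Toy illustration of waveform alignment: try every offset of the short
# camera track within the longer recorder track and keep the offset with
# the highest dot-product similarity. Real tools work on full-rate audio.
def best_offset(camera: list[float], recorder: list[float]) -> int:
    """Offset of `camera` within `recorder` maximising similarity."""
    best, best_score = 0, float("-inf")
    for off in range(len(recorder) - len(camera) + 1):
        score = sum(c * recorder[off + i] for i, c in enumerate(camera))
        if score > best_score:
            best, best_score = off, score
    return best

# The camera scratch track is a delayed copy of the recorder track:
recorder = [0.0, 0.1, 0.9, -0.8, 0.3, 0.0, 0.0]
camera = [0.9, -0.8, 0.3]
print(best_offset(camera, recorder))  # 2
```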
Storage & Archive
I leave the transcoded files on the scratch disk while working on the project, then at the end of the project I use the Media Manager to copy all of my sequences and any clips used over to an archive drive. Media Manager allows you to archive only the parts of each master clip you use, plus a defined amount at the start and end of each clip, known as ‘handles’. This means that my archive of the project contains all the source files it needs should I need to re-export it in the future, but not all of the unused clips. The files on the scratch disk are then deleted, because I know I can recreate the transcoded ProRes files from the originals should I need to edit the project at a later date.
I use an 8TB Drobo to back up both my video store drives and my scratch disk throughout this process, so my files are never in one location alone.
Hopefully that explains everything. If there’s something you’d like more detail about, or you have any questions, please leave a comment.
How do you deal with the time/file size limitation when shooting an interview? Do you cut every 10 minutes for a second on each camera?
I don’t shoot interviews with DSLRs now, but yes, you just have to reset at every chance you get if working with a subject that’s likely to talk for that long.
Thanks Paul for sharing great information! What is your current setup since you no longer use DSLRs? Perhaps there is an article already out there with this info?
Hi there. I’m using Canon C300 and C100 cameras now and the workflow is a lot simpler, given that both cameras record audio internally. I generally run XLR audio into the C300 from my Rode shotgun mic and use the on-camera mic on the C100 to give some sync audio for processing multi-camera in FCPX.
What do you do if, when shooting an interview, you can’t get a wide enough depth of field? I’ve only got 2 Canon kit lenses (Canon EFS 55-200 and EFS 18-55) and two Nikon prime lenses (50mm f1.8 & 35mm f2). I can’t get more than 7 ft away from the subject. The depth of field is so shallow that I can’t focus on the eyes and still have the ears in focus. Shooting on a Canon T1i using Magic Lantern. Thanks!
Hi Charles. If you want a deeper DOF then you need to use a higher aperture number (smaller aperture). Try something like f/5.6; you’ll then need to set correct exposure by either increasing ISO or adding more light to the subject.
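To put some rough numbers on that advice, here’s a quick depth-of-field sketch using the standard thin-lens approximations. The helper is mine, not a real tool, and the 0.019mm circle of confusion is an assumed APS-C value:

```python
# Rough depth-of-field sketch using standard thin-lens approximations.
# coc_mm = 0.019 is a commonly quoted APS-C circle of confusion (assumption).
def depth_of_field(f_mm: float, aperture: float, subject_mm: float,
                   coc_mm: float = 0.019) -> tuple[float, float]:
    """Return (near, far) limits of acceptable focus in millimetres."""
    h = f_mm ** 2 / (aperture * coc_mm) + f_mm        # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = (subject_mm * (h - f_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

# 50mm lens, subject ~2.1m (about 7 ft) away, wide open vs stopped down:
for n in (1.8, 5.6):
    near, far = depth_of_field(50, n, 2100)
    print(f"f/{n}: {(far - near) / 10:.1f} cm of acceptable focus")
```

Stopping down from f/1.8 to f/5.6 at that distance roughly triples the zone of acceptable focus, which is why the ears come back into focus.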
Thanks for the tips.
Am looking for an interview lens for my 5D Mark III. Considering the Canon 70-200mm f/4L USM. Would that suffice, or would I most certainly need the f/2.8?
Essentially need the lens to shoot an on-the-go documentary. Can’t carry a lot of lighting gear. Your opinion would really help.
The f/4 is a great lens and way more portable than the 2.8, Varun. If you really need the extra bit of speed for your low-light work then the 2.8 is a lovely lens; it’s a lot heavier than the f/4 though, so it depends on whether you’re happy carrying it around and paying the extra premium.
Great article Paul.
I currently work with an EX1 but, like everyone, I am looking for shallow DoF for interviews. I haven’t got the budget for an F3 or 101 yet, so for a gentle (and cheaper) transition away from ENG I was thinking of the Canon 550 just to cover shallow DoF when I need it.
What are your thoughts on the 550 ?
I’ve not used one to be honest John but I know a lot of people do and get good results.
Really helpful, thanks. My only criticism is that plant competing with the subject ;)
Thanks for the article Paul.
I have a couple of questions.
You mention that you use the 2nd camera on a glidetrack. Which head do you use on the glidetrack for this? I need a lightweight one to take on a job that does not cost too much!
Zoom H4n: is it relatively straightforward to use? I use a Sony lav mic – do I just plug the receiver into the Zoom H4n?
I use a small Manfrotto ball head designed for stills. Re the Zoom: yes, I just plug the mini jack from my wireless pack into the mic input on the back of the Zoom.
I only use the XLRs if using two wireless packs or a boom mic.
Thank you for writing such a thorough article!
I’m an Avid guy and a DSLR noob too. I shot an interview that is one 10 minute clip. I would like to manually sync up the audio and video and then log the interview. I don’t have the $ for PluralEyes.
Is there a way to sync up the video & audio without generating any new media? Thank you for the great post!
I’m a little bit confused about what it is you’re trying to do; it’s pretty straightforward lining everything up in the NLE using the waveforms from the cameras’ audio and that of your external recorder, if that’s what you mean?
Great article Paul! One question, what is your strategy for dealing with the 12 minute recording cap on the 7D? Are you just cutting people off, or keeping your interviews really short?
I do a lot of human-interest docs that require people telling long drawn-out stories (then I usually have an intern log them), but I’m new to the DSLR world so I’m curious as to how you’ve dealt with that.
It’s a tricky one. So far very few of my interviewees manage to get to five minutes without pausing or making a mistake, but for longer runs it would be a big problem. I instinctively reset both cameras at every opportunity and haven’t had a problem so far, although there have been a couple of times when I’ve started getting nervous about it. Same thing with batteries & CF cards though; if it happens, it happens I guess.
If I had a shoot where I felt it would look unprofessional for something like that to happen, I probably wouldn’t go with DSLRs.
I have started to work with ProRes Proxy and would like to conform my final edit to ProRes 422 HQ or 4444. I have been trying several routes but have not come to a solution yet.
Could you help with this side of the workflow?
Thanks in advance,
Hi. If you need to transcode to 4444 after the edit stage then the best way is probably to run the sequence through Compressor. I’m not sure why you would want to do that though; the footage will not gain in quality over the codec you’ve edited with (assuming you’ve used ProRes 422 (LT) or similar).
Paul, I like that you often carry your H4n and wireless receiver on you. Nice idea, especially when you are working two cams like you do, or otherwise have to get up and move around. I have on more than one occasion almost pulled my rig over by the headphone wire, and it only happens on interviews when I’m leaving the camera to go and make some change to the set or the subject. Thanks.
Can you explain some in-camera settings when using the Marshall monitor? How do you make sure that everything is calibrated and accurate? Also, does your light meter consider ND filters? Thank you.
I don’t think there are any in-camera settings relating to using the monitor; you just plug it in and you’re away, other than changing the amount of information shown of course. I don’t use any calibration routines and I’m not overly bothered about colours being a little out, as I don’t use the monitor to judge colour. I really only use it for focus, framing and exposure. The light meter does not take into account ND filters unfortunately; if I’m using ND I have to use the camera’s built-in metering or the false colour mode on the Marshall monitor.
Very valuable, thanks! Now that the E1 has been out for a while I was wondering if you have any additional thoughts about it. I’m about to do my first edit since E1 and wasn’t sure if I wanted to experiment with it or go back to “old faithful” and use MPEG Streamclip. Thanks!!
I used the plugin a few times but found it was cutting some of my clips short. I’m not sure if that’s a bug or just a problem with my install but I still prefer the MPEG Streamclip route myself.
Great info! I shoot short docs for the web using an EX3 and recently started using a Canon T2i (550D) as a second camera. Our turnaround time is fast and I usually edit in Final Cut in the EX3’s native XDCAM-EX codec. But now that I’m adding the Canon to my workflow, I’m wondering if I should transcode the T2i footage to XDCAM-EX to match the EX3, or transcode both cameras’ footage to ProRes? Any reason I shouldn’t edit in the XDCAM-EX format? I edit on a MacBook Pro; will rendering color corrections etc. be faster if everything’s in ProRes rather than XDCAM-EX?
Thanks for any insight!
There’s no reason at all not to continue editing your EX3 footage in native XDCAM-EX; it’s a great codec to work with and very efficient for fast turnarounds. The Canon footage on the other hand will need transcoding to make life easier in editing. Because Final Cut works fine with multiple codecs I would just convert it to ProRes and mix it in with the XDCAM-EX footage.
As to whether ProRes would be more efficient at rendering, it probably would be, but add in all the effort of transcoding everything at the start and overall you’ll probably be more efficient working in the EX3’s native format. You can set FCP to render using ProRes in the user prefs anyway if you would prefer any rendered parts of the sequence to use that codec.
I can’t see any value in transcoding the Canon footage to XDCAM-EX; it’s another compressed format, and actually a lower bitrate than the Canon DSLRs shoot, so you’d be better off transcoding to something like ProRes 422 (LT) so that it has a little more headroom.
The catch is I need to do multiclips so the codecs need to match. I hate to transfer the Canon’s footage to another highly compressed format (with an even lower bitrate as you pointed out) but this stuff is not for broadcast, it gets compressed for the web anyway.
I see. I guess the fastest workflow will be to encode the DSLR footage as XDCAM-EX then. Let me know how it goes.
Good write up, Paul. Thanks
One question. Once it’s all synced up in a PluralEyes timeline, how do you then edit it? Do you nest the PluralEyes timeline into a fresh one (how I handle EX1 footage)?
I’ve been using a couple of approaches depending on the complexity of the edit. If it’s just a simple sequence with one camera and dual-system sound, I’ve been syncing up and then deleting the unwanted audio from the video track and editing directly in that sequence. Sometimes I’ll link up the separate sound track to the video clips, but generally I can do the edit pretty easily without doing that.
When working with two cameras and multiclips I generally export each camera’s track as a reference movie, then re-import them and make the multiclips using those tracks as sources. I haven’t tried doing the same with nested sequences, so I’m not sure if that would work with multiclips.
Thanks for your time on this. Great post! If you can get your hands on Episode it beats all comers. A 1:09 clip took 1:12 in MPEG Streamclip and 52s in Episode. But it’s expensive!
Also there’s some free software called QTChange03 that adds timecode and reel name to clips.
Thanks Todd, I already own a copy of Episode Pro but didn’t realise it did ProRes. I’ll give that a try.
Exactly the same setup here. What lenses are you using? I like to shoot with a 50mm 1.2 on the 7D and the 70-200mm 2.8 on the 5D. This way I can get a standard interview shot with shallow DoF on the main camera (the 7D is the main camera just because the HDMI preview doesn’t switch res) and a close-up shot with the 5D. Sync everything up with PluralEyes and voilà!
Pretty much the same as you Christoph, 50 1.2 on the 7D and either the 70-200 2.8 IS on the 5D or the 24mm 1.4 if I need something wide to set the scene.