I was introduced to the benefits of interview transcription a surprisingly long time after I’d started learning about documentary filmmaking. Tired of introductory-level film classes and spending hours discussing editing theory between screenings of Koyaanisqatsi, I decided to take a Journalism 101 course in the hope of gaining some actual, practical knowledge. The first thing I learned, and still probably the most helpful advice I’ve ever received about doc work, was to transcribe every interview, no matter how short or seemingly inconsequential. You’re probably already familiar with this process if you’ve ever produced non-fiction of any kind, but if not, here’s the gist of it: You create a written record of your taped interviews by typing out every question and response so you can refer to them later. It’s a pretty simple task and one that’s so mind-numbing that it’s typically delegated to interns, or outsourced to online services at pennies per word. But it’s an invaluable tool that will, without question, make you a better editor. I’ve been at it for a while now (without interns), adjusting my workflow whenever I find a technique or piece of technology that makes things easier. I’m not saying this is the be-all end-all best way to transcribe your interviews. It’s just the way I do it right now. Hopefully, especially if you’re just starting to figure out the way you like to work, there’s an idea or two here that you might find helpful.
The absolute most important thing to my workflow, regarding both transcription and editing overall, is that every clip I shoot for a project has a different filename. Personally, I stick with the camera’s clip naming system from beginning to end, just changing the reel name every time I switch cards. It’s simple enough and pretty much automatic. For example, the project I’m currently working on was shot on the Arri Amira. I only shot two cards, both on the same day, so everything is relatively straightforward. As you can see, I have my files split up by reel, with the reel number represented in the first four characters of the file name. The first clip on reel one is A001C001_160528_R56S.mov. That’s camera A, reel 001, clip 001, shot on 5/28/16. R56S is the camera ID, which I can use to determine the serial number of the Amira that recorded the clip. Those 20 characters give you pretty much all the information you need. From there, I import all of my footage into Adobe Premiere Pro.
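For anyone who wants to sanity-check that naming scheme programmatically, here's a minimal sketch (my own illustration, not part of any Adobe or Arri tool) that splits an Arri-style clip name into its fields:

```python
import re

def parse_arri_clip_name(filename):
    """Split an Arri-style clip name (e.g. A001C001_160528_R56S.mov)
    into its component fields, following the layout described above."""
    m = re.match(r"([A-Z])(\d{3})C(\d{3})_(\d{6})_(\w{4})\.\w+$", filename)
    if not m:
        raise ValueError(f"unrecognized clip name: {filename}")
    cam, reel, clip, date, cam_id = m.groups()
    return {
        "camera": cam,        # camera letter (A, B, ...)
        "reel": int(reel),    # reel number, changes with every card
        "clip": int(clip),    # clip number within the reel
        "date": f"20{date[:2]}-{date[2:4]}-{date[4:]}",  # YYMMDD shoot date
        "camera_id": cam_id,  # ties back to the body's serial number
    }

info = parse_arri_clip_name("A001C001_160528_R56S.mov")
print(info["reel"], info["date"])  # 1 2016-05-28
```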
My personal workflow from here on is pretty dependent on having access to Adobe’s Creative Cloud service. If you use Premiere too, then you can follow along step-by-step. If not, though, you may just want to skim the rest. Sincerely, though, and I say this having never received any promotional compensation from Adobe, it’s the best. If you’re reading this as a beginner and still questioning what program to invest in, I can’t recommend Creative Cloud highly enough. On to the specifics:
Once everything is imported safely into Premiere, I use Adobe Media Encoder to export audio-only versions of every clip I want to transcribe. For many people, this will be only the sit-down interviews. Personally, though, I like to have every scene with dialogue of any kind. I use color labels to identify these clips. Just right click the clip, scroll down to “label,” and change it to something noticeably different from the default. I’m a mango man, myself.
Once the dialogue clips are labeled, just select them all (or do it in batches, whichever is easier) and hit Command-M to open the export menu. Next, select MP3 from the “Format” dropdown. You can change whatever settings you like here, of course, or even export a different audio codec, but I’ve always been happy with the defaults, and most programs you’ll use to transcribe will work with MP3 files. The blue “Queue” button will send everything to Media Encoder so you can continue editing in Premiere while the clips are processed.
Unless you adjust the file name setting in Media Encoder, the resulting files should have the same name as your video clips, but with a different file extension, A001C001_160528_R56S.mp3, for example. This will help keep it clear which audio files and transcript text files are associated with each clip. The next step is to open them in whichever application you’d like to use to transcribe. Personally, I like Express Scribe for this. It’s free, easy to use, and available on multiple platforms, which is helpful. I do all my Premiere work on an impossible-to-move iMac with a connected working drive and RAID backup, so it’s nice to be able to put Express Scribe on a laptop or something, transfer the tiny MP3 files over, go to a coffee shop (or beer shop) and still be somewhat productive. If you find another application that fits your needs better, though, go for it. This part of the process is just generating text by whatever means you like best.
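Because everything shares a base name, it's easy to check that every labeled clip ended up with its audio and transcript sidecars. This is just an illustrative helper I'm sketching, not part of Media Encoder or Express Scribe:

```python
from pathlib import Path

def expected_sidecars(clip_name):
    """Given a video clip name, return the audio and transcript
    file names this workflow expects alongside it."""
    stem = Path(clip_name).stem
    return f"{stem}.mp3", f"{stem}.txt"

def missing_transcripts(clips, transcripts):
    """Return the clips that have no matching transcript file."""
    have = {Path(t).stem for t in transcripts}
    return [c for c in clips if Path(c).stem not in have]

clips = ["A001C011_160528_R56S.mov", "A001C012_160528_R56S.mov"]
texts = ["A001C011_160528_R56S.txt"]
print(expected_sidecars(clips[0]))
print(missing_transcripts(clips, texts))  # ['A001C012_160528_R56S.mov']
```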
Whichever application or device you choose, this is the time-consuming part. Just hit play and type away using start/stop hotkeys and variable playback speed to eventually get to the point where you never have to stop typing. You’ll want to identify speakers, including the interviewer, in the transcript. That’ll make it much easier later on when we connect transcripts back to clips. You may also want to timestamp the transcript, depending on the length of your clips. I typically keep my clips short by quickly starting and stopping recording during quiet points in interviews, though, so I usually skip it. Make the transcript file name the same as the name of the clip that you’re transcribing, and the following steps should be quick and straightforward.
Here’s where having access to Creative Cloud actually becomes an essential part of my work: the next step is copy/pasting the transcribed text into a little-used Adobe application called Story. Story is Adobe’s solution for scriptwriting, production scheduling, and report generating. I don’t often have use for many of those features, so I didn’t use it much myself until I figured out how helpful it could be for organizing transcribed interviews. You start by creating a new project. Call it whatever you want. Mine, for example, is 1606 because it’s my sixth video project of the year 2016. I keep all my projects organized with that number system, but if you’re more creative than me and have an actual name for the thing you’re working on, have at it. Next, create a new script within the project using the “Film Script” template. Call it something like “Interview Transcripts-Project Name,” or whatever floats your boat.
Adobe Story helpfully formats your script for you by automatically recognizing scene headings and character names. We’ll use this feature to our advantage by replacing standard scene headings, such as EXT. PARK – DAY, with clip names. In my script, the first scene, my first clip with recorded dialogue, is A001C011_160528_R56S. The next scene is A001C012_160528_R56S, and so on. Every clip with transcribed dialogue has a corresponding scene in my Adobe Story script.
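If you'd rather not paste scene by scene, the same layout is easy to generate as plain text first and paste in one go. This is a hypothetical helper of my own (the dialogue and speaker names are invented examples), not an Adobe Story feature:

```python
def build_script(transcripts):
    """Given {clip_name: dialogue}, produce one script in which each
    clip name becomes a scene heading, mirroring the Story layout."""
    blocks = []
    for clip in sorted(transcripts):
        blocks.append(clip.upper())              # scene heading = clip name
        blocks.append(transcripts[clip].strip()) # the transcribed dialogue
        blocks.append("")                        # blank line between scenes
    return "\n".join(blocks)

script = build_script({
    "A001C011_160528_R56S": "INTERVIEWER: How did you get started?\nJANE: By accident, really.",
    "A001C012_160528_R56S": "JANE: The first shoot was a disaster.",
})
print(script)
```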
My main reason for choosing Premiere over some of the other NLE options out there is how seamlessly it works with Adobe’s other applications. Story is no exception. Once my clips are transcribed and copy/pasted into my Story script, I can open Premiere and import the transcribed “scenes” directly into the metadata of the video clips in my Premiere project. Just switch to the “Metalogging” workspace in Premiere, open the Adobe Story window, load your script, and drag and drop the script for each “scene” onto its corresponding clip. For some reason, this only works in Icon View. The transcribed dialogue is now part of the clip’s metadata and is visible directly in Premiere, which means I can view it while editing without having to open another application or refer to a printed transcript. I can also send any or all of my video clips to a colleague or client without having to worry about also finding and sending transcript files. It’s part of the actual .mov file now, so the transcript goes wherever the video goes and is accessible as long as the application the other person is using can recognize the metadata.
This may seem like a lot of work to go through, and honestly, it is, but once you consider how fundamentally it improves the editing process, I think it's well worth it. Obviously, it allows me to print out and read all my interviews, which is undeniably helpful. Now that I work this way, I do just as much editing with a highlighter and pen as I do with my computer. But it's also handy in smaller ways that present themselves on a case-by-case basis. Story can generate "reports" on different parameters that make it really easy to identify and organize information in your script. You can separate out every piece of dialogue spoken by a particular interviewee, for example. There's also a "find" function, just like in Microsoft Word. So, if you want to identify every clip in which a particular subject or person is mentioned, it's just a matter of searching the script rather than scrubbing through every single interview. Overall, I'd say it saves far more time than it costs.
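Outside of Story, the same trick works on the raw transcript text. Here's a rough stand-in for that "find" function, assuming you keep transcripts as plain text keyed by clip name (the clip names and dialogue below are just examples):

```python
def clips_mentioning(transcripts, term):
    """Return the names of clips whose transcript mentions `term`,
    case-insensitively -- a plain-text stand-in for Story's find."""
    needle = term.lower()
    return [clip for clip, text in sorted(transcripts.items())
            if needle in text.lower()]

transcripts = {
    "A001C011_160528_R56S": "JANE: We shot the whole thing in the park.",
    "A001C012_160528_R56S": "JANE: The budget was the hard part.",
}
print(clips_mentioning(transcripts, "budget"))  # ['A001C012_160528_R56S']
```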
Finally, and more broadly, it’s really opened up when, where, and how I’m able to edit. Adobe makes a Story app for tablets and phones, and it’s available online through a web portal. Every script syncs automatically, so I have access to text versions of my most relevant video clips everywhere I go. Sure, not every documentary project is dialogue-focused, but the vast majority are. Thinking of the work as text and working with it that way means that I can still edit on a layover, review dialogue at the DMV, organize a rough cut with index cards on my kitchen table, or work from bed, which is really all any of this is about. And, again, it’s not dependent on using Creative Cloud. With a few workarounds you could accomplish pretty much the same thing with Final Cut and just about any word processing program. If you’re not a meticulous transcriber already, try it on your next documentary project, and I guarantee it’ll be worth the effort.
If you have any questions or want to share some of your own transcription techniques, feel free to comment.
For quite a while now, some of you have asked where the MTF results were for the Zeiss Batis and Loxia lenses, and my answer has been "not done yet." There were a lot of reasons for that. Chief among them: the lenses are so popular that we can't get as many as we want and keep them in stock long enough to test them. There were also the issues we had modifying our optical bench for testing lenses with an electromagnetic focus, which the Batis lenses use.
So this arrives pretty late, way after most of you have made whatever purchasing decisions you are going to make regarding these lenses. But better late than never. With that in mind, though, I’m going to post all of our Batis and Loxia results in one place (here) just like we did in summarizing the Sony FE lenses so that it can serve as a reference.
One thing I will mention, because I think it makes a good example: unlike most manufacturers, Zeiss publishes measured MTF curves with their lenses, not computer simulations. When you ask me why my results are different from ones released by Canon or Nikon, I quickly say mine are real; theirs are idealized computer simulations.
When you ask me why my results are different from Zeiss's results, the answer is that they come from different measurement techniques. Some of the difference has to do with the machines themselves: Zeiss uses their K-8 and K-9 machines; we use a Trioptics Imagemaster vertical MTF bench. The light source is slightly different, for example. The Imagemaster uses a photopic light source; Zeiss uses (I believe) a broader-spectrum source. It may also have to do with the number of points measured, the number of samples tested, and a host of other things.
For example, we measure each lens at four different rotations, taking cuts from side-to-side, top-to-bottom, and from each diagonal, so each lens is measured at 84 points. The main reason we measure this way is that we’ve written software to give us an easy way to compare lenses to see how they differ.
That may be a very different measuring technique from what Zeiss uses. Other differences come from our mount, which cuts the 20mm edges in some measurements, or the thickness of sensor glass used in the test. (I can only obtain optical glass in 1, 2, 3, or 4mm thicknesses, and the sensor glass is between 2.3 and 2.5mm thick.) This may make a slight difference, and more importantly, the amount of difference varies somewhat depending on the lens in question.
So why bother to publish this data instead of just letting you look at Zeiss’s data? Because Zeiss doesn’t test the other lens brands. If you want to compare MTF between, say, a Loxia and a Sony FE, then these are worthwhile.
Markus has written an excellent new piece of software that lets me put up side-by-side comparisons just by pushing some buttons. Since I want to play with my new toy, I’m putting each Zeiss lens MTF next to a Sony lens of similar focal length. Remember, though, that they are all measured wide open, so don’t look silly and say this f/2.8 lens is sharper than this f/1.4 lens. Because aperture. And, of course, if you want to compare these to other FE mount lenses, the MTF charts are here.
The closest comparison I came up with is the Sony FE 28mm f/2.0. It’s a fair comparison since they’re both f/2.0, although the Sony lens is far less expensive. The Sony does pretty well from a resolution standpoint in the center where it’s as good or perhaps a tiny bit better than the Batis, but away from the center, the Batis shines, maintaining its sharpness well and with little astigmatism.
I chose the Sony 35mm f/2.8 ZA Sonnar as the comparison lens, so remember, of course, that it is being tested at an aperture one stop smaller. The Loxia would improve stopped down to f/2.8. Even given its disadvantage, though, it's apparent the Loxia 35 isn't the fastest horse in Zeiss's FE-mount fleet. It's an older design, so we shouldn't be too surprised at that. And yes, I know you're all curious how much better the Loxia would be at f/2.8. So am I, but time takes time, and I don't have any right now.
In this test, I gave the Loxia a full-stop advantage by choosing the new Sony 50mm f/1.4 ZA as its comparison. On the other hand, the Loxia is considerably less expensive. Despite giving up that stop of aperture, the Sony ZA clearly has higher resolution. However, the smooth, fairly astigmatism-free MTF curves of the Loxia suggest it will have a very different 'look' that some people will prefer.
The comparison here is with the Sony 85mm f/1.4 GM lens, which is more expensive but gives up some ground by being tested at a wider aperture. Even considering the aperture difference, the 85mm Batis puts in a most impressive performance: excellent resolution and very flat curves across the field. It's not an inexpensive lens, by any means, but its performance justifies the price.
I haven’t gone into great detail about this set of lenses for one fairly obvious reason: they’ve been out quite a while, and the photographers who love them love the look of them and don’t seem anxious about what the MTF is like. I think this does show, to some extent, why the Loxia Biogon 35mm f/2.0 isn’t as popular as the other lenses, but that’s about the most significant thing I see here.
I know some of you are asking where the results for the 21mm and 18mm Batis are, and the answer is 'not done yet.' I've made a solemn oath not to post data like this until we've finished ten-copy sets, and those lenses are still so popular that we can't keep them in stock long enough to finish testing. Stay thirsty, my friends (and patient).
A Reasonably Non-Geeky Guide to Lens Tests and Reviews
Hardly a day goes by without a nasty argument about lens testing and reviewing raging on some forum somewhere. (Going forward, I’m going to use the abbreviation R/Ts for Reviewers and Testers.) As with all things on the internet, these discussions tend to gravitate quickly toward absolute black-and-white with no gray: this one’s great; this one sucks.
I made a bit of fun of this in a recent post. The truth is that most sites have worthwhile information for you, but none gives you all the information you might want. I’ve spent most of my life in various types of scientific research, and I’m accustomed to evaluating information from an ‘is this valid’ standpoint, not just ‘what does it conclude.’ And because I get to see behind the curtain of the photography industry, I have a different perspective than most of you. So I thought I’d share how I look at tests and reviews.
When I first saw the Sony FS5, I figured that it could be a fun secondary camera and not more. I expected to use it primarily as a slow-motion camera when I need one; I didn’t see it becoming too much more than that. Sony’s FS5 is the smaller and lighter sibling of the Sony FS7. This little guy compromises very little, making it an excellent B camera to the Sony FS7 and even a perfectly sufficient A camera.
I primarily shoot weddings and have been using the Canon C100 as my main and secondary camera for the last two years. For me, it has been an excellent workhorse and one of the most well-rounded camera packages I have used. So the biggest issue for me was that the FS5 needed to be able to cut with the Canon C100 if it was going to fit into my kit.
At face value, the Sony FS5 crushes the Canon C100 in overall feature set. The FS5 offers High Frame Rate recording at up to 480fps, though at that rate the image is cropped and recording is capped at 8 seconds; 120fps has a maximum recording time of 16 seconds. I found that 120fps was sufficient for most situations, and I was able to get great shots even when handholding or when using a small rig such as the Zacuto Enforcer.
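To put those caps in perspective, here's a quick back-of-the-envelope calculation of how much screen time each burst yields once conformed (assuming a 24fps delivery timeline, which is my assumption, not a camera spec):

```python
def playback_seconds(record_fps, record_seconds, timeline_fps=24.0):
    """How long a high-frame-rate burst lasts once conformed to a
    normal timeline frame rate."""
    frames = record_fps * record_seconds
    return frames / timeline_fps

# The FS5's recording caps, stretched onto an assumed 24fps timeline:
print(playback_seconds(480, 8))   # 160.0 -> a 2:40 slow-motion shot
print(playback_seconds(120, 16))  # 80.0  -> a 1:20 slow-motion shot
```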
I found the camera pretty comfortable, and the controls are user-friendly. The handgrip unit is different from most others and takes a bit of getting used to; because it's molded to your hand, I always had to adjust its orientation based on where I was holding the camera. The camera is surprisingly small and light, which is nice, but it also feels a bit plasticky; it doesn't feel nearly as sturdy as a Canon C300 or Sony FS7.
While most professional camcorders have a built-in Neutral Density filter, the Sony FS5 goes a step further: it has an electronic variable ND with a range of 1/4ND to 1/128ND. This is one feature that I didn't think would be a big deal but turned out to be one of my favorites. Walking around the streets of New York, going from shadow to full sun, I was able to dial in my exposure using the ND alone, leaving the aperture wide open while fluidly changing exposure.
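For reference, those ND fractions translate directly into stops of light. A quick sketch of the arithmetic:

```python
import math

def nd_fraction_to_stops(fraction):
    """Convert an ND filter's light-transmission fraction to stops;
    each stop halves the light, so stops = log2(1 / fraction)."""
    return math.log2(1 / fraction)

# The FS5's electronic variable ND range quoted above:
print(nd_fraction_to_stops(1/4))    # 2.0 stops at the light end
print(nd_fraction_to_stops(1/128))  # 7.0 stops at the dense end
```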
Battery life was excellent using the typical Sony U60 battery, the same one used with the Sony FS7 and other Sony systems. I kept the camera on for roughly six hours, primarily using the viewfinder, and the battery level never dipped below 40%. Considering how big the battery is, I'd fully expect it to last nearly all day.
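A rough linear extrapolation from that drain backs this up: six hours for at most 60% of the charge works out to about ten hours total. The figures below are just my estimate from that one day of use, not a spec:

```python
def estimated_runtime_hours(hours_used, percent_remaining):
    """Linear extrapolation of total runtime from a partial drain --
    a rough estimate; real draw varies with EVF, ND, and recording."""
    percent_used = 100 - percent_remaining
    return hours_used * 100 / percent_used

# Six hours of use left at least 40% in the tank:
print(round(estimated_runtime_hours(6, 40), 1))  # 10.0 hours, roughly
```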
Low-light performance was the camera's biggest limitation for me. The shadows were very noisy straight out of the camera, and even after a bit of grading I could only minimize the noise, not remove it. When I grade, I don't like to crush the blacks too much (if at all), and I just was not able to get a look out of this camera that made me happy in high-ISO situations.
To summarize: I enjoyed the 120fps and 240fps slow motion, the ergonomics of the camera are very nice, and it is very lightweight, which makes it an excellent option when shooting handheld. The built-in variable ND filter is one of the standout features for me; it is so convenient when shooting run-and-gun projects. Finally, the most disappointing part of this camera was the low-light performance. The noise in the shadows was very frustrating, and compared to my Canon C100, the Sony FS5 is significantly underwhelming.
Overall, this camera is a great option for most jobs. In my opinion, it is best suited for run-and-gun productions such as documentaries or weddings. Putting this camera in the context of our current professional camcorder world, I think the FS5 lands somewhere between the Canon C100 Mark II and the Canon C300 Mark II. The HFR options, internal HD 10-bit 4:2:2, and 14-stop dynamic range pretty quickly put it head and shoulders above the Canon C100 Mark II. However, the FS5 shoots 4K at 8-bit 4:2:0 internally (with the new update it can output raw 4K, but we're talking internal specs here), while the Canon C300 Mark II offers 10-bit 4:2:2 and 12-bit 4:4:4. Next to the Sony FS7, of course, the FS7 is the clear winner, as it should be; they are two cameras serving separate facets of the market. The FS5 is much smaller and lighter than the FS7, making it ideal for situations when you need high-quality HD video in a small package. For me, the low-light shadow noise is a deal breaker, and I don't see myself reaching for this camera as my primary. However, when a project necessitates slow motion, it will certainly be the one I use.
I make a joke about this sometimes, but I think it’s more and more appropriate. Every day, I see more people take some single published bit of data on a lens and extrapolate the entire meaning of the photographic universe from it. My joke has always gone, “a single lens analysis is like measuring a third-grader and then publishing that third-graders are 157 cm tall.”
Almost every online resource gives you some useful information. At the same time, absolutely every resource only provides you part of the picture (pun intended). Oh, and that includes pictures (another pun intended). So without further ado, I will proceed to make fun of every type of online lens analysis, because they are interesting and fun, but really not worth getting so upset about.