
Just the Lenses: The 70-200mm f/4 Comparison

There is a very good reason that we haven’t gotten around to testing one of the most commonly used zoom lenses until just now. We forgot. I was testing the Sony FE 70-200mm f/4 G OSS lens because we had just torn one down, I was impressed by the construction, and I wanted to see if the copy-to-copy variation was improved compared to other Sony lenses. So I tested it, went to pull up the Nikon 70-200mm f/4 and Canon 70-200mm f/4 lenses for comparison, and was a bit shocked to find we hadn’t ever tested them. So I guess this lets you know that I was primarily interested in the Sony lens and that I expected it to have less variation than the Sony FE 24-70mm f/4 lens.

I should also mention that you shouldn’t go searching for our MTF and variance results on the Sony 24-70mm f/4. We tried to test it some time ago. The results were so weird and testing so difficult that we’ve never published them. Please don’t replace ‘weird’ with ‘bad’ when you quote this. It’s not what I said. We had quite a learning curve when we started testing Sony FE lenses on an optical bench that isn’t designed for them. There is a variation in distortion from copy to copy with certain Sony zooms, like the FE 24-70mm f/4, that freaks out our automated testing.

But Wait, There’s More Fact Stuff

Before we get to presenting the data, I need to make you aware that the data is different. We have spent several months revamping the algorithms we use to present the variance data so THIS VARIANCE GRAPH AND DATA ARE DIFFERENT FROM WHAT WE’VE USED BEFORE. If you actually read this far you should have two questions: How is it different now, and why don’t you change all the previous data to match?

I’ll answer the second question here. We will redo all of the previous graphs to match the new methods, but it is waiting for some smart academic type people to give us more feedback on statistical significance so we aren’t 100% certain this is the final method yet. Also, redoing the data takes time and I don’t have a lot of that, so it may wait until our summer intern starts.

Please remember, this is not a consumer-oriented review site generating content so you’ll click on our ads and stuff and make me rich. It’s a ‘hey, you guys want to see what I’m interested in this month?’ site. I have ADD; my interests bounce all over the place. Right now I’m fascinated with investigating the new electromagnetic and piezoelectric linear autofocus systems, so I’m doing mostly that. MTF work is a bit on the back burner until the intern starts in May.

As to the first question, how the variance graphs are different, I wrote an addendum at the end of this post on how we are making this measurement more meaningful and scientific. I really encourage any of you who are interested to read that. For those who don’t like science, JUST BE AWARE THE VARIANCE GRAPHS IN THIS ARTICLE ARE DIFFERENT FROM THE ONES IN PREVIOUS ARTICLES. We can compare these three lenses’ variance to each other, but not directly to the previous tests. The MTF values, though, are completely comparable to previous articles.

OK, Let’s See Some Tests

MTF

First, let me repeat that there have been no changes in the MTF measurements so feel free to compare these to other lenses we’ve done.

As usual, we measured the MTF at three focal lengths: 70mm, 135mm, and 200mm. Please note that for the Sony lens, the measurements at the far right side of the graphs, which is furthest from the center, should be taken with a grain of salt. This is not anything wrong with the lenses; rather, it’s a limitation of our measuring ability. The Sony mount has a light baffle that blocks a lot of the edge measurements, so while every other value is an average of 80 measured points, the very last point on the Sony MTF is the average of only 30 measurements.

 

MTF at 70mm

 

All three lenses really do pretty well. The Canon 70-200mm f/4 IS is an older design (2006 release, but based largely on a 1999 design) than the Sony 70-200mm f/4 (2015) and the Nikon 70-200 f/4 VR (2012). Particularly with stabilization systems, that’s a big gap.

Of course, everyone jumps on center sharpness first and the Sony certainly lags a bit at the middle and higher frequencies. Right in the center the Canon and Nikon have similar resolution with the Nikon slightly better at the wide end and the Canon slightly better at 200mm.

Away from the center, though, things are a bit different. At 70mm both the Nikon and Canon fall off a little bit like most zooms do, while the Sony remains amazingly flat across the entire field. In the outer half of the field, the Sony is roughly as good as the other two lenses.

Again at 135mm the Canon and Nikon are clearly a bit better in the center of the image, particularly at higher frequencies. In the outer half of the image, though, things aren’t so much better or worse, but rather just different. The Sony doesn’t resolve quite as well as the Nikon and Canon, but its lack of astigmatism is really pretty amazing for a zoom. I would expect you’d notice a different ‘look’ to images shot at 135mm with the three different brands, and in this case that would be as much about the lens as the sensor.

At 200mm the Canon again has the best center resolution at higher frequencies, but the Nikon holds its resolution further from the center, catching the Canon less than halfway to the edge of the image. The Sony is much like the Nikon as far as maintaining its sharpness towards the edge, although it doesn’t resolve quite as well as the Nikon at 200mm.

From an MTF curve standpoint, my overall take is these are three really good zooms. The Sony doesn’t have quite the absolute resolution of the others, but this seems to be a conscious decision to go for a smooth, even look, rather than obtain the highest possible center resolution. This is very similar to Nikon’s design philosophy with their recently released 24-70 f/2.8 VR lens.

Sample Variation

With the variation graphs, remember that a large area along the left side indicates variation in center sharpness between copies. If the variation gets wider the further you go to the right, there’s a larger variation in the edges of the lens. Our new algorithms also include variation within the lens, so if a lens is sharper on one side than the other, that shows as variation. Also, as with the MTF charts, it’s probably best to ignore the far right 20mm measurement on the Sony lenses. Variation appears to decrease for that lens there, but that’s just a reflection of fewer measurements at 20mm.

[Sample variation graphs at 70mm, 135mm, and 200mm]

The variation for all these lenses is pretty reasonable – we expect zooms to vary more than primes and to have different amounts of variation at different focal lengths. The Nikon has a bit less variation than the Canon and the Sony at 70mm and 135mm, particularly in center sharpness. At 200mm, there is really no difference between the three. I don’t want to make any major generalizations here. I think it’s clear that the Sony lens is very similar to the Canon and Nikon as far as sample variation, which is a much better performance than we’ve seen with some other FE lenses.

Changes in Testing Methods: This Part is Important, Really

We previously were using the averages for each lens sample to calculate the standard deviation. This folds the eight measurements taken from a single copy at each image height into one number, then compares the copies. This works well for what we had planned on – it tells you the difference between the various copies. BUT it has a downside we hadn’t thought of. If each copy varies within itself (the left side is softer than the right on one copy, the top softer than the bottom on another, for example), the averaging hides that inconsistency.

The new formula now uses every number as measured to calculate the variance and standard deviation.  The result is that for 10 sample lenses, each point on the variance graph is the standard deviation of 80 numbers instead of the standard deviation of 10.  For decent-to-good lenses, like most of the 50mm primes, there is hardly any change in the variance.  For very bad lenses, like most of the wide-aperture, wide-angle primes and most zooms, the difference can be quite dramatic.
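If code is clearer than prose, here is a minimal sketch of the difference using made-up numbers – 10 copies with 8 measurements each at a single image height. This is just an illustration of the idea (the array, values, and variable names are hypothetical), not our actual analysis code.

```python
import numpy as np

# Made-up example: 10 copies of a lens, 8 measurements per copy
# at a single image height (e.g. MTF at 30 lp/mm).
rng = np.random.default_rng(0)
mtf = rng.normal(loc=0.75, scale=0.05, size=(10, 8))

# Old method: collapse each copy's 8 measurements into one average,
# then take the standard deviation across the 10 per-copy averages.
# A copy that is soft on one side and sharp on the other averages out,
# so that within-copy inconsistency disappears from the result.
old_sd = mtf.mean(axis=1).std(ddof=1)

# New method: take the standard deviation of all 80 raw measurements,
# so variation inside a single copy counts as well as variation
# between copies.
new_sd = mtf.ravel().std(ddof=1)

print(f"between-copy SD (old method): {old_sd:.4f}")
print(f"all-points SD (new method):   {new_sd:.4f}")
```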

We also reduced the plot size to 1 Standard Deviation instead of 1.5 Standard Deviations, just to make the graphs more readable.

The consistency number is no longer printed on the charts – we felt it was not accurate enough to really summarize the variance. It was meant to be a ‘look more closely’ guideline, but it was being abused. Simplified numbers are nice and all; we understand everyone likes them. But the loss of accurate description caused by oversimplification becomes counterproductive.

The variance number wasn’t as useless as a ‘lens score’, but it still got abused so much that we’re going to stop publishing it until we are comfortable it accurately reflects total variation in a meaningful way. Right now it only reflects variation between samples at 30 lp/mm for overall sharpness and doesn’t take into account variation within a lens. So if every copy was tilted, the old variance score didn’t reflect that.

The graphs themselves are easy to compare and give much more useful information.

We’re still fine-tuning graph presentation and having people smarter than us give input on our calculation algorithms. Some of the stuff we’re working on might be pretty cool. For example, we’re considering showing a full-field MTF graph for each copy, not just a line but an actual image of the MTF as it appears throughout the field of the lens (extrapolating between the 84 measurements we make on each lens, of course).
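As a rough sketch of the idea (with made-up sample points standing in for real measurements, and a generic cubic interpolation rather than whatever method we finally settle on), building such a display is basically scattered-data interpolation onto a regular grid:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Made-up scattered measurement points across the image field
# (the real bench measures 84 points per lens; these values are fake).
rng = np.random.default_rng(1)
x = rng.uniform(-20, 20, 84)            # mm from center, horizontal
y = rng.uniform(-13, 13, 84)            # mm from center, vertical
mtf30 = 0.8 - 0.012 * np.hypot(x, y) + rng.normal(0, 0.02, 84)

# Interpolate the scattered points onto a regular grid, then show the
# grid as an image so the whole field can be read at a glance.
xi, yi = np.meshgrid(np.linspace(-20, 20, 200), np.linspace(-13, 13, 130))
zi = griddata((x, y), mtf30, (xi, yi), method='cubic')

plt.imshow(zi, extent=(-20, 20, -13, 13), origin='lower',
           cmap='jet', vmin=0.0, vmax=1.0)
plt.colorbar(label='MTF at 30 lp/mm')
plt.title('Full-field MTF display (illustrative data only)')
plt.show()
```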

[Example full-field MTF displays – OlafOpticalTesting, 2016]

This will, I hope, give us an intuitive way to demonstrate how the copies we test vary, a way to show how adjustments change a given copy, and even eventually a way to show someone how their lens compares to the average of all copies of a given lens.

There are several other things going on, but bottom line is that this is still a work in progress. What we’re trying to do has never been done before, at least that we know of. We’re beta testing as we go. If it isn’t changing over time, then we aren’t improving it. As for the old data that’s already published, it’s all accurate, but it isn’t showing you everything we’d like to show you.

 

Roger Cicala, Aaron Closz, and Brandon Dube

Lensrentals.com

March, 2016

 

 

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: medium format, or a Pentax K1, or a Sony RX1R.

Posted in Equipment
  • Stanislaw Zolczynski

    How do they compare to their bigger f:2.8 brothers at f:4?

  • hywelphillips

    I’d just like to thank you guys for putting in a lot of work, for free, and publishing the results in a careful and scientific manner. It’s useful, but more than that it is FASCINATING. As a particle physicist turned full-time photographer, I really appreciate the work that goes into each of these blog posts.

  • Sean T

    Fascinating, thank you guys. I rented a (actually two, you guys sent me another because I was so underwhelmed by the first one, but they were both underwhelming) FE 70-200 a year ago for my a6000. Turns out 24 MP APS-C is just hard to satisfy even for a $1500 lens!

  • Steve Simon

    So which one will take the best pictures? 🙂

  • Pieter Cronje

    Thanks for the response. Obviously I don’t have access to the raw data so I couldn’t do the test myself. I’m pleased that you’ve considered alternatives and it wasn’t just the default (as it so often is!).

  • Brandon Dube

The ImageMaster that OLAF owns can test finite distances too, about 0-2m. It’s a pain in the ass and not something we are starting soon; perhaps the new intern will feel particularly masochistic and work on that.

  • Wam

    Hi Roger,
    I think Junz Inc was talking about focus distance, not zoom range.
Does OLAF only test at infinity, or can it also measure close-focus performance?

    Thanks for the always interesting blog

  • Junz, zooms are generally better at one end than the other, although what the various manufacturers do is always open to change lens-to-lens. But if I had to say, I’d guess this Sony is optimized in the middle of the range, if anything at all.

  • Junz Inc

    Hi Roger,
I have heard it mentioned in some places (websites and blogs) that this particular Sony zoom is more optimized for focus at longer distances than it is at shorter distances. Is there any way of measuring this difference in your tests?

    Appreciate all the work that you and your team does.

    Best Regards
    JM

  • Brandon Dube

We’ve been over this before with other people familiar with visualizing information accurately. Jet is the best-suited colormap among those available stock in MATLAB and those available through e.g. colorbrewer2.org.
    A difference of approximately 0.1 in the MTF is ‘significant’ at moderate spatial frequencies. Jet provides a significant change in hue spaced approximately that frequently. Other colormaps, such as Parula, transition too slowly and the plot does the viewer a large disservice by playing down changes in the MTF as visually insignificant.

    Jet is not very friendly to those who are colorblind, and cannot be converted to black and white and remain sensical. These are issues we accept.

    Regards,
    Brandon

  • Pieter Cronje

    Roger:

    I would suggest some basic research into colour maps, if you are going to show the full field.

    http://blogs.mathworks.com/steve/2014/10/13/a-new-colormap-for-matlab-part-1-introduction/

    Jet is by no means a good colour map if you are trying to represent a lot of data. Matlab’s newer map – Parula – is far better. Failing that, read up on Viridis – the new colour map in Python’s matplotlib. These colour maps are designed to accurately represent data in such a way that the eye does not misinterpret it.

  • speedy fisher

    Curse you Roger, for focussing on running a business instead of giving us pretty pictures! 🙂

  • speedy, we’ve taken them apart for several reasons, but we haven’t done a published teardown of the 70-200.

    Roger

  • Thanks, Y. A. You made my day. And you’re right.

  • speedy fisher

    Looks like it. Obviously my anecdotal experience doesn’t match up to Roger’s mass of objective data, but owning the Sony and having previously owned the Nikon, it’s at least as good of a lens. The output is just so clean and reliable – where some lenses excel in a few areas and let themselves down in others, this just gives you a solid 9/10 across the board.

    I’m just hoping that this teardown Roger mentioned has an accompanying article – I’d love to see what it looks like on the inside.

  • Y.A.

    You guys are unlike DxO in the ways that matter…. the colors thing is a standard that goes beyond them though.

  • Brandon Dube

    I only have to add one “-” to one line of code to make that change, if you want it that way.

  • Brandon Dube

There are 8 per image height if the lens has well-behaved distortion, no built-in petal lens hood, and no plastic baffle in the mount. In practice, no zoom is going to get 8; often on FE mount there is a single measurement per copy at 18mm and 20mm. On average it is approximately 4, maybe 5, for all lenses.

  • Chris Cheng

Sorry, I don’t get where the 4 is from. I thought you have 8 measures per image height per copy? I would think 8 is a good enough number to characterize the evenness of that copy at that image height. Then you can get the standard deviation for this copy at this image height. Then you have 10 copies per model, so you would have 10 standard deviations for this model at this image height. Then do this for the other models, group the standard deviations according to model, and then you can compare evenness across models at that image height.

Y. A., there is also the train of thought of not being like DxO 🙂

    But I see your point. It’s still a work in progress.

  • Brandon Dube

    The variance is not normally (or randomly) distributed; most lenses are approximately bimodal. The standard deviation of each model, then working with that set, does not work as there are not enough measurements for the standard deviation to be truly meaningful at that sample size (4).

    The previous method was biased in that if a lens’ Full-Field Display appears linearly tilted, the + and – sides would average out and make it appear as if the performance was even, and the between-samples algorithm would basically not see that variance. The new algorithm is much more punitive of that behavior, which is why even with the 33% reduction in errorbar size the area around the curves is vaguely the same.

  • Ilya Zakharevich

I wonder whether they try to optimize the information content. It is the average (over the sensor) of 1/(area of the confusion ellipse). Since (area-wise) most of the sensor is far from the center, for this metric the performance away from the center is more important…

    By the way, this is 1 number which summarizes the lens performance. If you can calculate it (should be easy given your MTF measurements), it may be a useful summary. It has a good chance to match user’s experience…

  • Ilya Zakharevich

    Zach: First, your lightbox has play/pause button (?!). Moreover, usually clicking on a lightbox closes it.

    Second, in the caption to MTF graphs, there is no way to tell which line is dashed/solid (Tan vs Sag).

  • Chris Cheng

    May not be right but here are my 2 cents:

    Assuming the variation of resolution at a certain position is random, working on 80 points across multiple samples could bias the result as one sample’s variation may happen to compensate another’s.

Your previous method has its merit in showing inter-copy variation. To show intra-copy variation, you can calculate the standard deviation of each copy at that position and plot the 10 points as a boxplot, and we can look at the median to get a simple idea when comparing lenses from different manufacturers.

It may be interesting to look at CoV (coefficient of variation) as well; if a lens shows larger variation but has higher average resolution, it could still be better than a lens with lower resolution and lower variation.

  • Y.A.

    You guys should flip the color range….. usually blue/green = good, red = bad. DxO and DPR use that color scale. Otherwise this looks pretty solid

  • Great work guys! It’s good to see you guys are making the test results even better with the new consistency and average scoring.

    In case no one’s thought of this already, you could display differences of field maps, so we could tell easily where one lens is weaker or stronger than another: you could overlay one field map on top of the other, and subtract one from the other, for example. It might make for a great interactive tool on The Digital Picture?

  • Yup. We’re working on it right now. We’ll have that fixed sometime in the next hour.

  • John, I think it’s perfectly safe to say that. Slightly different point of emphasis, but overall quality and variance are very comparable.

  • John Dillworth

    Is it safe to say that the new Sony zoom is in the same quality range as the big boys? to my untrained eye the overall sharpness is just about as good as the others and the copy to copy variation is better than the Canon? A bit pricier however

  • Just FYI, the images of the graphs above are so small (or compressed) that they don’t render well enough to read.
