Resolution Tests

Variation Measurement for 50mm SLR Lenses

Published July 9, 2015

Last week we posted an introductory article on how we measure copy-to-copy variation in different lenses. I’ll be continuing to publish these results over the next few weeks for prime lenses. We will eventually have a database put up, but I think it’s important to look at the different lenses in smaller groups, illustrating some principles that contribute to variation. It’s far too easy (and comfortable) to just believe quality control is the answer to variation. There’s a lot more to it than just quality control and I think this series of posts will help illustrate that.

We got a lot of good suggestions about our methods after the first post, considered all of it, and tried out some of it (particularly formula and graphing adjustments). We didn’t make any changes to our mathematical formula, but are going to change terminology just a bit. JulianH and several others pointed out that using the term ‘variation number’ was counter-intuitive; our numerical score gets higher when the lens has less copy-to-copy variation. It makes more sense to call it a ‘consistency score’, because a higher number means the lens is more consistent (it has less copy-to-copy variation). So from now on, the numerical score will be referred to as the Consistency Score.

 

Copyright Lensrentals.com, 2015

Today we’ll take a look at the 50mm wide-aperture prime lenses and compare them to the 24mm prime lenses we posted last week. This should be an interesting comparison for several reasons. The 24mm lenses are all retrofocus designs containing 10 to 12 lens groups and at least 1 aspheric element. The ones we tested were all of fairly recent design, having been released after 2008. A 24mm f/1.4 is just about the most extreme combination of wide-angle and wide-aperture anyone currently makes (an exception being the Leica 21mm f/1.4).  We thought that 50mm lenses, being mostly of simpler design, might show more consistency (less copy-to-copy variation).

The 50mm lenses are mostly double Gauss designs containing 5 to 7 elements. There are two exceptions, though. The Sigma 50mm f/1.4 Art lens and the Zeiss 55mm f/1.4 Otus are both more complex, retrofocus designs with the Sigma containing 8 groups and the Otus – 10. Most have no aspheric elements, although there are a couple of exceptions: the Sigma Art and Canon 50mm f/1.2 L have a single aspheric element, while the Zeiss Otus has a double-sided aspheric element. We also have a nice range of design dates, with the Canon 50mm f/1.4 released in 1993, several of these lenses released around 2004-2006, and a handful released just in the last year or two. Plus the 50mm lenses have a price range from under $200 to nearly $4000. It will be interesting to see if any of these factors seem to affect variation.

MTF Curves of the 50mm Lenses

Ten copies of each lens were tested on our Trioptics Imagemaster Optical Bench using the standard protocol, which we described in the last blog post. All lenses are tested at their widest apertures, so take that into consideration when comparing MTFs; stopping an f/1.4 or f/1.2 lens down to f/1.8 would improve its MTF. (And yes, I realize how nice it would be to have done the 50mm f/1.2 at f/1.4 and all the lenses at f/1.8. I might get to it someday. Maybe.)

Let’s take a look at the MTF curves for the 50mm lenses. Below are the average curves for each lens. (The Zeiss 55mm f/1.4 Otus graph gets repeated because otherwise one graph would be all sad and lonely, and 562 people would speculate on what my motivation was for singling out whichever one happened to be left alone.) They are in no particular order of anything other than kind of trying to keep lenses of the same brand together.

Roger Cicala and Brandon Dube, Lensrentals.com, 2015

 

This article isn’t meant primarily as an MTF comparison, but a couple of points are pretty obvious. First, if you can shoot at f/1.8 or f/2.0, you get better resolution. The Zeiss 50mm f/2 and the inexpensive Nikon and Canon 50mm f/1.8 lenses have the highest center MTF of any of these lenses. But remember, the f/1.4 lenses would have higher MTF if they were stopped down. Nikon shooters might want to take note of the often-maligned 58mm f/1.4. When first released, it got beat up pretty savagely for not being as sharp as the Zeiss Otus. It’s not as sharp as the Otus, but it does perform very well.

One note for the MTF gurus among you — you’ll notice an odd spike in the Rokinon 50mm T/1.5 (identified as f/1.4 on the graph above) tangential measurements 2mm off center. This is very consistent in all copies and would occur on both sides of center if we were showing you both sides. We found that the aspheric element causes a localized increase in distortion at that point, which mucks with the optical bench measurements. We could have corrected for it if we had done full distortion mapping of the lens prior to measurement, but time was, as they say, of the essence, and it doesn’t affect the big picture. We could hammer that point down to be equal to the sagittal data – where it probably should be – but we don’t believe in fudging the numbers.

Finally, and this is no news to anyone, the Zeiss Otus and the Sigma 50mm f/1.4 Art are more or less tied for resolution champ at 50mm. The Sigma is a bit better near the center, the Otus at the edges, but the differences are small.

For some people the choice of a 50mm lens is made because of its special characteristics; you might absolutely need the aperture of the Canon 50mm f/1.2, the dreamy look of the Zeiss 50mm f/1.4, or the resolution of the Otus. Many more people, though, are looking for just a nice, wide-aperture 50mm prime lens at a reasonable cost. The MTF curves above suggest some of the inexpensive lenses are really quite good. But common wisdom suggests sample variation of those, or of the third-party lenses, might be greater than with the more expensive lenses. So we were interested in the copy-to-copy variation among these lenses.

Copy-to-Copy Variation

The simplest way to look at variation is with our Consistency score (for a complete discussion of how we arrive at it, see this post). In summary, a higher Consistency score means there is less copy-to-copy variation; the lens you buy is more likely to closely resemble the MTF average we presented above. In general, a score over 7 is excellent, 6-7 is good, 5-6 is okay, 4-5 is going to have significant copy-to-copy variation, and under 4 is a total crapshoot.
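
As a rough illustration (this is not the actual Lensrentals formula, which is described in the linked methods post), any score of this kind can be built from a statistic that grows as copy-to-copy spread shrinks. The hypothetical sketch below, using made-up MTF numbers, simply inverts the standard deviation of a per-copy average MTF at 30 lp/mm:

```python
import statistics

def consistency_score(copy_mtf30, k=1.0):
    # Illustrative only: rises as copy-to-copy spread falls.
    # copy_mtf30 holds one field-averaged MTF-at-30lp/mm value per copy.
    spread = statistics.stdev(copy_mtf30)
    return k / spread  # higher score = more consistent

# Ten hypothetical copies of a consistent lens vs. a variable one
tight = [0.62, 0.63, 0.61, 0.62, 0.64, 0.63, 0.62, 0.61, 0.63, 0.62]
loose = [0.70, 0.55, 0.66, 0.48, 0.72, 0.60, 0.52, 0.68, 0.58, 0.63]
print(consistency_score(tight) > consistency_score(loose))  # → True
```

The published score is derived from the full center-to-edge 30 lp/mm data, so treat this only as a sketch of the direction of the scale: tighter copies, higher number.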

Here are the variation graphs for the 50mm lenses in the same order as we presented the MTFs above. The consistency number is in bold at the lower left of each graph.

Roger Cicala and Brandon Dube, Lensrentals.com, 2015

 

Remember, the Consistency score is calculated from the 30 line pairs/mm (green) curve from center to edge. Looking at the actual graphs gives you more information. For instance, some lenses have very little variation in the center and a lot near the edges. That means center sharpness would be very similar in every copy, but corners will be more random. Other lenses vary across the entire field, meaning there might be noticeable differences in center sharpness.

To us, the most surprising finding is that the inexpensive little Canon 50mm f/1.8 STM is incredibly consistent. The Sigma 50mm f/1.4 Art and the Zeiss 50mm f/2 Makro also did exceptionally well, having consistency numbers above 7. Most of the 50mm lenses were above 6, which puts them in what we consider a good range of consistency. The Canon 50mm f/1.4 was a bit behind the pack, but that’s not surprising for an inexpensive 20-year-old design. The Rokinon 50mm f/1.4 had the most variation with a Consistency number of 4, and the Nikon 50mm f/1.4 G was disappointing at 4.6.

We had previously published the Consistency number for 24mm lenses, so I’ll include those in a table for comparison. Wide angle lenses tend to have more variation, so we hoped the 50mm lenses would have less variation than the 24mm lenses.

Lens                      Consistency
Rokinon 24mm f/1.4        4.0
Nikon 24mm f/1.4          4.6
Sigma 24mm f/1.4          4.9
Canon 24mm f/1.4 Mk II    6.3
Canon 50mm f/1.2 L        6.0
Canon 50mm f/1.4          5.5
Canon 50mm f/1.8 STM      9.3
Nikon 58mm f/1.4          6.7
Nikon 50mm f/1.4          4.6
Nikon 50mm f/1.8          6.3
Zeiss 50mm f/1.4          6.1
Zeiss 50mm f/2 Makro      7.3
Zeiss 55mm f/1.4 Otus     6.5
Sigma 50mm f/1.4 Art      7.5
Rokinon 50mm f/1.4        4.0

Overall, we do see that 50mm lenses tend to have less copy-to-copy variation than 24mm lenses, although it’s not an absolute rule. Canon’s 24mm f/1.4 Mk II has little copy-to-copy variation and scores as well as many of the 50mm lenses. A couple of the 50mm lenses (Rokinon, Nikon, and Canon 50mm f/1.4 lenses) don’t do as well as we hoped.

I think some people expected the Zeiss Otus, given its higher price, to have almost no sample variation. Given the complexity of its design, with more elements in more groups and including a difficult-to-manufacture double-sided aspheric element, it does quite well. The Canon 50mm f/1.8 STM was amazingly consistent, and I’m not sure why. It’s a simple design, but so are several of the other 50mm lenses.  I suspect there might be something different in the manufacturing process of this very new lens, but until I take one apart and look inside (we haven’t yet) I’m only speculating.

Trendspotting

We can take a minute to look at a couple of lens variation ‘folk wisdom’ trends to see how they hold up now that we’ve begun testing. First is the idea that wide angle lenses tend to have more variation (lower consistency scores) than longer focal length lenses.

There does seem to be a little bit of truth to that idea, although it’s more at the ‘best end’. No 24mm lens has a consistency score over 6.3, while five of the 50mm lenses scored over 6.3. Several of the 50mm lenses had just as much variation as the 24mm lenses.
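
The counts above can be checked directly against the table. A few lines of code (scores transcribed from the table) confirm that exactly five 50mm lenses beat the best 24mm score, and show the group averages:

```python
import statistics

# Consistency scores transcribed from the table above
scores_24 = {"Rokinon 24mm f/1.4": 4.0, "Nikon 24mm f/1.4": 4.6,
             "Sigma 24mm f/1.4": 4.9, "Canon 24mm f/1.4 Mk II": 6.3}
scores_50 = {"Canon 50mm f/1.2 L": 6.0, "Canon 50mm f/1.4": 5.5,
             "Canon 50mm f/1.8 STM": 9.3, "Nikon 58mm f/1.4": 6.7,
             "Nikon 50mm f/1.4": 4.6, "Nikon 50mm f/1.8": 6.3,
             "Zeiss 50mm f/1.4": 6.1, "Zeiss 50mm f/2 Makro": 7.3,
             "Zeiss 55mm f/1.4 Otus": 6.5, "Sigma 50mm f/1.4 Art": 7.5,
             "Rokinon 50mm f/1.4": 4.0}

best_24 = max(scores_24.values())  # 6.3, the Canon 24mm f/1.4 Mk II
over = [name for name, s in scores_50.items() if s > best_24]
print(len(over))  # → 5
print(round(statistics.mean(scores_24.values()), 2),
      round(statistics.mean(scores_50.values()), 2))  # group averages
```

The group averages (about 4.95 for the 24mm lenses vs. about 6.35 for the 50mm lenses) back up the overall trend, while the overlap at the low end backs up the caveat that it is not an absolute rule.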

The next idea is that more expensive lenses have less variation than cheaper ones. The Canon 50mm f/1.8 STM is certainly the exception to that rule, having the highest Consistency score of any lens we’ve published so far while also being the cheapest, but the rest of the lenses do show some correlation between higher price and less variation. The correlation isn’t as strong as I had hoped, but it’s definitely there.

 

Finally, the 50mm lenses give us a little opportunity to look at year of design. If there’s a pattern here, I can’t clearly see it. But more data points might help, and we’ll be publishing more lenses soon.

 

 

Roger Cicala and Brandon Dube

Lensrentals.com

July, 2015

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: a medium format camera, a Pentax K1, or a Sony RX1R.

  • R. Edelman

    The new Zeiss Batis lenses come with a test certificate. It would be interesting to test variability in those lenses.

  • Roger Cicala

    George, the 85 results will be out in a week.

  • Roger Cicala

    Anton, that is older work done with Imatest, which is testing lens-camera combinations, not the optical bench that test purely lens optics. One of our big motivations the last year has been to be able to get away from Lens-camera combination tests, which add a lot more variables. Imatest is a very useful tool, and if you want to see the performance of lens on a camera, not just lens, it and DxO Analytics are the best tools, I think. But there are very valid reasons I decided about the time we did that original test that we had to invest a large amount of money and start doing optical bench testing.

    Testing just the lens eliminates a host of variables: flange variation, sensor microlens effects, cover glass thickness, and even, despite going to huge lengths to eliminate it doing Imatest or DxO testing, tiny variations in angulation of measurements to the chart or lighting variations. If the optical bench gave the same results as Imatest, I’d be beyond shocked. In fact, I’d be saying my whole assumption is wrong and the camera doesn’t add more variables.

    If I was to guess, I would guess that the original Imatest results showed lower variation with the Otus because it’s very wide focusing throw allows our focus bracketing to be more accurate with Imatest. We bracket several shots with the lens around what appears to be best manual focus when doing Imatest results. With the optical bench, the bench and computer obtain exactly the best focus and that is eliminated. The Otus manual focus is so smooth and accurate I think perhaps it allowed smaller changes in focus. In other words, the Imatest results may have been testing manual focus capabilities as well as optics, something I didn’t consider at the time.

    If we hadn’t improved our testing over the last two years, I’d be extremely disappointed. We obviously have, so I’m pleased.

    BTW – don’t make assumptions. We work on all lenses including super telephotos and Otus quite frequently and are even Zeiss Cine lens certified – so we work on lenses with costs that make an Otus or a 400 f/2.8 look inexpensive. I don’t disagree with many of the points you make online, they are quite valid and I’m glad that you made them, but once you begin guessing and assuming you get way off base.

    However, I totally agree with your point that these are a select group of lenses. We had screened out the ones that are obviously defective and that’s done before they ever reach our rental fleet. I think that is valid because I think most photographers would also have rejected those same lenses we did, but I accept your argument that it does make this a selective sample. We have, in the past, compared sets of ‘new-in-box’ lenses with ‘off our shelf’ lenses. We’ve never found an optical difference of significance, but that probably has more to do with our testing and maintenance than anything else. I also tried to make it clear that we can’t practically do enough lenses to perform true statistical analysis. That would be at least 70 and probably 100 or more copies tested for each lens depending on the variation. Ten copies is all we can afford to do – it ties up a tremendous amount of resources. But if a donor out there would like to fund a true scientific study, I’m very receptive.

    Roger

  • A test of 3 of these lenses (sample of 7 each) was done in April 2014 by LensRentals.

    The Otus 55 had the best consistency (tightest or lowest variation expressed as percentage) of the Sigma 50mm Art and Canon 50mm L.

    The results above show the Sigma scoring ~13.3% better than the Otus. The April 2014 results showed the Otus ~42% better than the Sigma Art.

    What happened between April 2014 and June/July 2015 ?

  • Roger Cicala

    CarVac, 20mm is the limit of our tests – the lateral edges, but not the absolute corners, which are, I think, 21.3mm.

  • Can you respond to the data set shown here that shows a much wider variance across Sigma’s 50mm than the Otus 55mm ?

    What changed since April 2014?

    http://wordpress.lensrentals.com/2014/04/yet-another-sigma-50mm-art-post

    http://wordpress.lensrentals.com/media/2014/04/data.jpg

  • CarVac

    I just noticed…these MTFs don’t go out to the corners of the full-frame sensor. Is this a limitation induced by vignetting?

  • wow.
    Great write-up, Roger.
    Can’t wait for the new Canon 50mm F1.2 mk II or 85mm F1.2 mk III being tested by you guys 😉

  • Brandon

    Anton,

    Stopping any of the lenses down would remove most of the variance. There are three types of decenters:

    longitudinal decenters, which would be very small and highly unlikely – requiring all screws that hold elements in place to break entirely.

    lateral decenters, which are very common. These cause coma on-axis and unpredictable off-axis changes.

    tilts, which are also quite common. They cause astigmatism on-axis and alter the field curvature.

    (here, rho signifies ray height in the pupil)

    Coma varies by rho squared, rho to the 4th, and rho to the 6th for the relevant aberration orders. Stopping down one stop will remove approximately 60% of the coma-related variance in the center of the lens. This biases in favor of wide aperture lenses.

    Astigmatism varies by rho, rho cubed, and rho to the 5th for the relevant aberration orders. Stopping down will somewhat solve this, but never reduce it to a very large extent. Perhaps 20% or so for the first full stop.

    Upon stopping down one stop, the best and the worst copies of these lenses will be mostly equal in the center, but the corners will still vary considerably. This will continue as you stop down. Perhaps by the 4th or 5th stop you close them down, they will equalize across the frame.

    If you ever use the maximum aperture of the lenses, it only makes sense to test variance at the maximum aperture. If it was intended for the lens to never be shot faster than f/1.8, it would not be an f/1.4 lens.
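
    The pure-power scalings above are easy to check numerically. Assuming each transverse aberration term scales as rho to the stated power, and that stopping down one full stop shrinks the pupil radius by a factor of sqrt(2) (both assumptions for illustration only), the fraction of each term removed is:

```python
# One f-stop: pupil diameter (and so marginal rho) falls by sqrt(2)
shrink = 2 ** -0.5

for n in [2, 4, 6]:   # coma-type terms cited above
    print(f"coma rho^{n}: {1 - shrink ** n:.0%} removed")
for n in [1, 3, 5]:   # astigmatism-type terms cited above
    print(f"astig rho^{n}: {1 - shrink ** n:.0%} removed")
```

    The lowest-order coma term alone loses 50%, and a mix of orders lands near the ~60% quoted; the lowest-order astigmatism term loses only about 29%, the same ballpark as the ~20% quoted, consistent with stopping down helping astigmatism much less.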

    Regarding the age of these samples,

    8/10 of the 50mm f/1.8Gs were never rented before. 6 of the Otus lenses had been rented less than 5 times. All of the 50mm STM lenses have been rented 5+ times.

    The Otus lenses are honestly no better than any ZE/ZF lens. They are in bigger, sleeker housings, but we have to repair them just as often, and the sample variation is in line with the other models of equal specification. That makes quite a bit of sense – they are made in the same place, by the same people, with the same manufacturing-line test equipment. If they’re using cameras, they cannot see much difference between this poor copy: https://www.dropbox.com/s/bb9p0k4linddckj/MTF_Rotations.png?dl=0 and this good copy: https://www.dropbox.com/s/6y1jayn8464kqug/MTF_Rotations.png?dl=0

    If they are evaluating quickly as manufacturing usually is, a slightly soft one will pass. Even a few terrible ones will pass.

    Testing new lenses only is unlikely. Certainly in the future when a new model comes out Roger will grab a bunch as they come in and test them, but there are other implications to testing new only. Do we then test complete failure copies? 2-5% are duds. What about ones with mechanical issues that are optically okay? We would send that back anyway.

    You also won’t only shoot the lens for the one day it is unboxed. It is going to move around, be it in your bag, suitcase, carryon, a FedEx truck, etc. We don’t use a mixture of new and ‘old’ copies by choice, but I feel that it is more representative of the true behavior of a model than only new copies are. Some lens models are bad enough mechanically that putting them down slightly too hard will change the alignment. This should be reflected in variance information.

    Regards,
    Brandon

  • I am posting this from memory, so please forgive any lapses in accuracy in this ‘fooled by randomness’ post.

    There is a website called Nationmaster and they used to include correlations of selected data.

    At one point, about 8-10 years ago, the highest positive correlation (or at least one of the top ten of all Nationmaster data sets ) was between the number of runways over a certain length (3047m) and the pregnancy rates of teens in that country.

    Of course teen pregnancy has very little to do with airport runway length and is more dependent on a host of factors, (religion, access to birth control, etc).

    One factor was tested here: variance of resolution (and at different f-stops), with a number of unknown factors such as shipping frequency and the shocks encountered, as mentioned above.

    A good experiment would seek to isolate variables.

    Only use new lenses as they were received.

    Use the same f-stop.

    Maybe use a weighting model that rewards higher line pairs (they’re more difficult to design and build, so why not reward performance there?)

    Google the term “Rule 8 – Never cross a river because it is on average 4 feet deep” if you’d like to learn more.

  • None of this matters if you lucked out and got a lens at the top of the std dev.

    One more reason why it’s important to test every new part of a system against known good entities (sensors and lenses) and reject and return low performing equipment.

    The more people that do this, the more pressure will be placed on the retailers and ultimately the OEMs to have higher quality control.

    If you’re a Zeiss Otus user, the test results matter even less. It’s like telling a Ferrari owner that the Prius gets better fuel economy and is more reliable.

    Lastly, Roger, you’ve known and written for years that it’s much easier to build an f/2.0 lens than an f/1.4. Why run the test with all of these f-stop variables? Wouldn’t apples to apples be to have conducted this test with all of the lenses at f/2.0?

    Another consideration – did you weigh in the number of times a lens had been shipped and returned ? Shipping jostling could result in more variance over time.

    Maybe a brand new set of Otii 55mm’s would test much tighter. If I dropped a Canon 50mm 1.8 lens in the grass from a height of 1 foot I wouldn’t expect the lens to be unharmed. If I did the same with an Otus or a 300mm 2.8, I’d really worry that the mass and shock of hitting the ground would have some effect.

    While in college, I worked holidays at Fed-Ex and UPS, at several points in the shipping process ‘throwers’ are used…and not just simple drops from 1 foot into grass. That can’t be good for any sensitive instrument.

  • Roger Cicala

    Alex, remember the manufacturers’ published data are computer-generated ‘ideal models’, not measurements of real-world lenses. In theory, our very, very best copy should be pretty close to what’s published. But not the average.

  • Michael Maddox

    I have noticed that Canon is making very good cheap consumer lenses. The current STM kit lens tests well, along with the 24mm STM ($150), 50mm STM ($126), 10-18mm STM ($269), and 55-250mm STM ($299).
    This is current pricing from the Canon store. Something is going on as far as manufacturing is concerned.
    Now I know by definition these are not Lensrentals kinds of lenses, but you did include the 50mm f/1.8 🙂 so one wonders if the techniques developed to consistently manufacture these optically very good (and, judging from the sample of the 50mm f/1.8, consistent) lenses will be extended to professional-grade optics.
    I don’t expect a teardown report, as they will be easier to replace than to repair. But as you develop your database it would be interesting to see if there is a pattern. They are all also relatively new designs.

  • Brian

    The 50mm f1.8G seems to be missing from the last two graphs.

    If we ignore the Rokinon, I think there is a trend for better consistency with newer designs.

  • Alex

    @Roger and Brandon,

    Thank you for providing such useful info. Nice work. There seem to be significant differences between the measured data and the design data published by the lens makers (for example, the Canon 50mm STM). You would expect that the “best” copy curve should follow the design curve very closely, and that the “average” data should match the behavior of the design curve to some degree. Do you have any explanation for this?

  • Randy

    Thanks, Roger. This one article has more useful information than 100 DxO tests. As for the new Canon lens being so consistent, they’ve probably found a way to make it with as little human intervention as possible.

    Everything else is about what I expected. I would run, not walk away from a Rokinon unless I had at least 3 to choose from. I had 3 of the 14mm and they were all over the place.

  • Robert Pitt

    I am curious how the Leica 50s compare to these DSLR lenses?

  • Lee Saxon

    I wonder if what the “lens variance by release year” chart is really telling us is how well different designs take wear/abuse. In other words, it’d be interesting to test these exact same copies in say two years.

  • Brandon

    Kenny,

    We could measure at 100 lp/mm, but there is not much point – it is beyond the resolution limit of every interchangeable-lens camera you can buy right now, and most lenses will perform very poorly there.

    Regards,
    Brandon

  • Roger, with some of these higher-performing lenses, like the Art and Otus, do you think there is value in adding in a 100lp/mm measurement?

  • Aaron

    Any change to your error bar calculation from the previous post? The calculation you were using previously looked incorrect to me, it should be +/- 2.8 SD not 1.5.

  • Tobi

    Hi,

    your articles are great! I’m looking forward to seeing more measurements, also from variable focal length lenses.

    I’d love to know the manufacturers’ reaction to these posts. Before you started these, they just sent out (selected) copies for review and most reviews ended up being positive. Now you’re independent, you use unselected lenses, and you actually measure all results. What an eye opener!

    Tobi

  • Pieter

    Brilliant test – any way to get some results for the previous generation Canon 50mm 1.8 (i.e. the one being replaced by the STM version)? Would be excellent to compare the changes.

  • Chris Jankowski

    Perhaps the excellent consistency results for the new Canon EF 50mm F1.8 STM come from a new, robotised assembly method. After all, the electronics industry has developed a range of robots to do very precise placement of surface-mounted components and even more precise microwelding of leads to the chips before they are encased. These robots are becoming much cheaper these days, as they are manufactured in larger volumes and have more integrated designs.

    The cost equation may make automated assembly more attractive, and Canon certainly has the volume for this lens to make automation economically viable. The lens is also relatively simple, and so is a good candidate for automated assembly.

    Robotic assembly of precisely machined components should give very high consistency.

    This is all speculation of course, but stripping down the lens may give some clues.

  • Brandon

    Derek,

    I suspect the STM is largely the same as the version II 50mm f/1.8. As far as I am aware, the only optical difference between it and the II is revised coatings, so the elements have been in production for decades. The barrel is a new design and represents the latest in Canon’s mechanical engineering.

    All of the copies of it are fairly new, so perhaps the abuse of FedEx and renters will make them more variable, but the 50mm f/1.8G also had 9 of 10 copies that were new.

    These are still 10 copies each, but we don’t have an economical way of controlling production time and lens age. Being a newer lens, the STM will tend to be newer copies than for example the 50/1.4 USM (though that particular model has 15 copies in its average instead of 10).

    -Brandon

  • derek

    Lovely write up.

    I wonder if the 1.8STM is so good because it is absolutely new, so all tooling will be brand new as well, so everything’s great.

    Maybe something to come back in a year with a few more 50mm f1.8STMs and see if manufacturing’s beginning to vary?

    relating to my earlier batch comment are these from a reduced number of batches vs the rest?

  • Roger Cicala

    Hans, we don’t carry them anymore.

  • Hans Bull

    Very interesting!
    Please include in your next tests the cheap plastic 50mm 1.8D, which has a different design than the 1.8G. For the type of photography I’m doing I cannot live without that lens.

  • @Frank: there was no change to the rating system, i.e. 0 rating did NOT mean 0 variance “the first time.” Quoting the original article, “A high score means there is little variation between copies.” They’re merely changing the terminology to follow the meaning of the score.

    @Roger and Brandon: thanks for the new batch of results! Perhaps Canon’s new STM is like when computer monitor manufacturers figured out how to make LCD panels with no dead pixels, and suddenly such defects were no longer a practical concern.

  • Frank Kolwicz

    I think the change in numerical rating is a mistake, it was right the first time: zero variance should result in a zero score, high variance should result in a high number. A complete lack of variance puts a hard minimum number on consistency and the possible range of variance is open-ended, which is intuitively obvious, as a one-time math prof used to say to us freshmen (about calculus, if you can believe it).
