
Just the MTF Charts: 70-200mm f/4 Zoom Lenses

We did the 70-200mm f/2.8 lens MTF charts last week, so let’s do the f/4 versions of the same range now. Sports shooters and portrait photographers need that f/2.8 aperture, but many of us, most of the time, are willing to trade that off for smaller-lighter-less expensive f/4 models.

So, About f/4

Before we start, let me save someone from looking dumb on the internet. Not a week goes by that I don’t see someone use our MTF graphs to say something like “the f/4 is actually sharper than the f/2.8,” or “the f/2.8 is sharper than the f/1.4,” yada, yada, yada. The complete sentence has to be “the f/4 is sharper at f/4 than the f/2.8 is at f/2.8.” Stopping down makes a lens sharper, at least for the first stop or two.

So let’s take a quick look at that f/2.8 to f/4 difference. I’ll use the Canon 70-200mm f/2.8 IS II as an example. Here are the MTF results at 70mm taken at f/2.8 and f/4.

Lensrentals.com, 2019

You can see there is a pretty dramatic improvement throughout almost all of the field of view by stopping down just one stop – at 70mm.

Let’s make the same comparison at 200mm. This time things are a little different. At f/4, things are clearly sharper in the center of the image, but in the outer half, there’s no definite improvement. The curves are different, but not better.

Lensrentals.com, 2019


So why must I smite your quest for the Holy Grail of “just give me one number to evaluate the whole lens at all focal lengths and apertures and shooting conditions so I can go online and say the Wunderbar 70-200mm scores 74.2 which is better than Ubertoy 70-200mm which scores 73.7”? Because of optical physics, that’s why. Well, and also because it’s a stupid quest.

The reason things don’t always improve the same amount when you stop down is pretty straightforward. Remember, lenses have aberrations, which among other things reduce MTF. Some aberrations improve dramatically, or at least significantly, when you stop down. Others are markedly worse the further you go from the center and aren’t influenced very much by closing the aperture.

So closing down one stop makes a massive difference in the center of the image: the aperture-dependent aberrations (mostly spherical aberration, but also some types of coma) improve, and the distance-from-center-dependent aberrations (astigmatism and some others) aren’t significant in the center.

Out at the edges of the image, stopping down makes some difference for many aberrations, but very little difference for others. Unless you know what aberrations the lens has, you can’t predict how much improvement stopping down makes, especially in the outer parts of the image.
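
If you like seeing the arithmetic behind that, here is a minimal Python sketch (my own simplified Seidel-theory illustration, not our bench software; the powers are textbook values, the rest is just for demonstration). Each third-order aberration’s wavefront error grows with a different power of the pupil radius, so one stop of stopping down shrinks each one by a different factor.

# A simplified Seidel-theory sketch (an illustration, not Lensrentals'
# test software). Third-order wavefront aberration terms scale with
# different powers of the normalized pupil radius rho, which is why
# stopping down helps some aberrations far more than others.

def residual_fraction(pupil_scale, pupil_power):
    """Fraction of an aberration's peak wavefront error that remains
    after the pupil radius is multiplied by pupil_scale."""
    return pupil_scale ** pupil_power

one_stop = 2 ** -0.5  # one stop down: pupil radius shrinks by sqrt(2)

# aberration -> power of the pupil radius in its Seidel wavefront term
powers = {
    "spherical (rho^4)": 4,    # aperture-dependent; dominates the center
    "coma (rho^3)": 3,         # also grows linearly with field height
    "astigmatism (rho^2)": 2,  # grows with the square of field height
}

for name, power in powers.items():
    print(f"{name}: {residual_fraction(one_stop, power):.0%} remains after one stop")

# Prints roughly 25%, 35%, and 50%: the spherical-aberration-limited center
# cleans up quickly, while the astigmatism-limited edges improve much less.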

I always love when someone rants online about ‘the edges are soft even stopped down, so my lens must be defective.’ Lots of lenses don’t get really sharp at the edges no matter how much you stop down; others do; but none ever gets as sharp at the edges as in the center.

So let’s do a more practical test and compare the 70-200 f/2.8 IS at f/4 to the Canon 70-200mm f/4 IS II at 70mm and 200mm (the full f/4 curves are coming up later).

Lensrentals.com, 2019


Lensrentals.com, 2019


The takeaway message, in this example, is that while the f/4 lens is arguably a tiny bit sharper than the f/2.8 at f/2.8, it is not sharper than the f/2.8 at f/4. It’s damn close, though.

The other takeaway message is that you can’t be sure exactly how much and where (either where in the image circle or at which focal length) a zoom lens will improve when stopped down a stop. It will be better, but it’s difficult to predict precisely how much better without testing. And no, I don’t have the resources to do stopped-down testing on all the zooms. Even this little 3-copy example took a day’s testing, and days are something I don’t have enough of.

A Quick How-To on Reading MTF Charts

If you’re new here, you’ll see we take a scientific approach to lens testing and use MTF charts to measure lens resolution and sharpness. For each of our MTF charts, we test ten copies of the same lens and average the results. MTF (Modulation Transfer Function) charts measure the optical potential of a lens by plotting the contrast and resolution of the lens from the center to the outer corners of the frame. An MTF chart has two axes, the y-axis (vertical) and the x-axis (horizontal).

The y-axis (vertical) measures how accurately the lens reproduces the object (sharpness), where 1.0 would be the theoretical “perfect lens.” The x-axis (horizontal) measures the distance from the center of the frame toward the corner (in millimeters, where 0mm represents the center and 20mm represents the corner). Generally, a lens has its greatest sharpness in the center, with sharpness falling off toward the corners.

Tangential & Sagittal Lines

The graph plots two sets of five lines each. These sets are broken down into tangential lines (solid lines on our graphs) and sagittal lines (dotted lines on our graphs). Sagittal lines are a test pattern in which the lines are oriented parallel to a line through the center of the image. Tangential (or meridional) lines are oriented perpendicular to a line through the center of the image.

From there, the sagittal and tangential tests are done at five frequencies, starting at 10 line pairs per millimeter (lp/mm) and going up to 50 lp/mm. In layman’s terms, the higher lp/mm frequencies measure how well the lens resolves fine detail. So, higher MTF is better than lower, and less separation between the sagittal and tangential lines is better than a lot of separation. Please keep in mind this is a simple introduction to MTF charts; for a more scientific explanation, feel free to read this article.
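
If a concrete number helps, each point on the chart is simply the contrast the lens preserves in the image of a test pattern at one frequency and one position in the frame. Here is a minimal sketch with made-up brightness values (this is the standard definition of modulation, not our bench software):

# A minimal illustration of what a single chart point means (made-up
# numbers, not our bench output). MTF at a given spatial frequency is
# the Michelson contrast of the imaged test pattern.

def mtf(i_max, i_min):
    """Modulation transfer: 1.0 = perfect contrast, 0.0 = detail gone."""
    return (i_max - i_min) / (i_max + i_min)

# Say a 50 lp/mm target images with bright bars at 0.8 and dark bars at 0.4:
print(mtf(0.8, 0.4))  # 0.333..., plotted as ~0.33 on the chart's y-axis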


Canon 70-200mm f/4L IS

This is the original Canon IS version that we introduced above. While I don’t have test results for the non-IS version, it was considered a bit sharper than this original IS, but not as sharp as the IS II version below.

70mm

Lensrentals.com, 2019

135mm

Lensrentals.com, 2019

200mm

Lensrentals.com, 2019


Canon 70-200mm f/4L IS II

This makes a good second lens to show you because the version II is noticeably sharper than the original version at 70mm and 135mm. You can tell the difference if you shoot with them, so this gives you a good ‘this much difference is significant’ comparison for the other MTF charts. It is better at 200mm, but more at the ‘you’d have to make a careful direct comparison to see a difference’ level.

70mm

Lensrentals.com, 2019

135mm

Lensrentals.com, 2019

200mm

Lensrentals.com, 2019

Nikon AF-S 70-200mm f/4 G ED VR

Nikon makes the best f/2.8 zoom in this focal length range, but their f/4 version is better described as ‘fine.’ It’s about as good as the original Canon version, maybe a bit better, but not as good as the Canon version II. The more paranoid among you can now begin discussions about ‘did they dumb down the f/4 version so it couldn’t compete with the f/2.8?’ I can’t imagine that is true; designing lenses isn’t like turning off a video codec.

70mm

Lensrentals.com, 2019

135mm

Lensrentals.com, 2019

200mm

Lensrentals.com, 2019


Sony FE 70-200mm f/4 G OSS

Sony tends to put out more lenses per year than anyone else, and they have improved their lens quality rapidly over the last several years. But some of the older Sony designs are not great, and this is one of those. It’s at its best in the middle of the range and not quite as good at the two extremes. Some of you more strident Sony supporters will now state how wonderful it is and that the tests are wrong.

70mm

Lensrentals.com, 2019

135mm

Lensrentals.com, 2019

200mm

Lensrentals.com, 2019


Summary

The conclusions here are pretty simple. If you shoot (or adapt) Canon EF-mount lenses, the Canon 70-200mm f/4 IS II is excellent. It’s so good that you should only buy the 70-200mm f/2.8 version if you need f/2.8. (Since lots of people want the narrower f/2.8 depth of field for portraits or need all the light they can get to stop action, the f/2.8 will still have lots of takers.)

The Nikon 70-200mm f/4 is good at 70mm and 135mm. While it fades a bit at 200mm, it’s a really nice walk-around and travel lens. The Nikon f/2.8 version is so good, though, that most people who can afford it will be willing to deal with the heavier lens and higher price for the image quality.

Sony also has a 70-200mm f/4, and it’s OK. It’s not going to wring all the resolution you might like out of a high-megapixel camera, but it’s still a decent travel lens. From what I hear, though, a lot of Sony shooters prefer the Canon IS II f/4 on an adapter, and I can understand that option, too.


Roger Cicala and Aaron Closz

Lensrentals.com

July, 2019


Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures, I like using something different: a medium format camera, a Pentax K1, or a Sony RX1R.

Posted in Just MTF Charts
  • Matti6950

    I wonder why it gets this ‘negative’ around 70-200mm f/4 lenses: ‘OK-ish, good, but the f/2.8 is amazingly better.’ I shoot almost no detail under 15 meters with my Nikon 70-200mm f/4, and it has by far the best sharpness of any zoom lens I own on Nikon. No astigmatism. No CA sharpness drop. No decentering. Even sharpness across the frame. Very good contrast. No notable field curvature, no serious vignetting I can spot; the list goes on. At 116mm (and also 70 and 130mm) I have shots so sharp that only the better new primes can beat it at f/8. With the recent tests showing f/8 rarely cures corner issues, I’m a bit perplexed at how I can get pixel-for-pixel sharp images with this lens if it’s ‘just OK’? I don’t get it. I’ve seen Nikon 70-200mm FL ED shots; they are only sharper at portrait distance, not at (far) landscape distance.

    My only clue is that perhaps the ‘infinity’ of Lensrentals, while being almost true infinity, still tests differently than objects 200-2000 meters away, making the Nikon truly sharp there (which would be logical, as while it has a good minimum focus distance, it’s a bit softer there – trade-off, heh).

    Now to find a replacement for the new Sony A7RIII. The hardest task so far. I’m almost wishing Sigma would just make a 70-200mm Sports OS at a maximum of 1.5kg (same as the Sony GM) just to make me happy; it seems an opportunity for them (and Tamron) to fill the 70-200mm ‘performance’ gap in Sony land. The 100-400mm GM is incredibly tempting (it seems better than the other Sony alternatives), but I want and need 70mm more.

  • Not an endorsement, just a statement. I agree with your point, but the degradation is mostly edges / corners, and a lot of people are using 70-200s basically for center shots. But mostly I think it’s a reflection of people not being thrilled with the Sony offerings and price point, along with the fact that many already have 70-200s from previous systems.

  • Nicholas Bedworth

    Hi Brandon!

    The ambiguity function crept into the optical world from radar engineering, perhaps 70 years ago (before you and I were born), and is primarily used today for Fourier-space computational imaging systems. Optics, having been an area of human endeavor for perhaps 700+ years, is of course a vast and varied field, and it’s probably impossible for any single person to know about all of its subfields, or perhaps even 50% of them.

    Research involving the ambiguity function for optics is referenced in a few hundred scientific papers, patent applications, and is incorporated into various textbooks as well. It’s perhaps not as popular as other conceptual frameworks, but I first became aware of it perhaps 45 years ago when working in x-ray imaging systems, where exploiting the ambiguity function is quite useful for retrieving phase information. And more recently, we used it in the design of a hand-held system for removing various artifacts from the image, wavelet encoding and so forth. Sort of a cool way to look at things…

  • Brandon Dube

    It is incredible that I have spent my entire career in optics among the grandfathers and fathers of aberration theory and physical optics and I have never heard of the ambiguity function.

    One might say that is tantamount to an existence proof it is not commonly used in optics.

    Your prose about aberrations is substantially incorrect as well.

  • Chris

    Thank you for the comparison. However, for the Canon 70-200/4 IS vs. non-IS, I think you got it the other way round… the IS version was considered to be a bit sharper, and this is what even Klaus Schroiff showed on his page… it was – at least on the EOS 350 – a pretty dramatic difference; no clue if your optical bench and the sample-to-sample variation would change this view, but it would be a surprise if it turned the result around by 180°…

  • Like the – say – Canon diehards are any better? 🙂 I’ve been browsing a few photography forums (and am actively participating at one), and lots of people especially on the Canon side of life start foaming at the mouth at the mere mention of Sony (or even when this brand is implied in a conversation… Or sometimes even without any apparent implication), so let’s be fair here. 🙂

    Disclosure: I’m a longtime Canon user.

  • Ashley Pomeroy

    I have an Agfa Clack, an old box brownie that takes 6×9 images on 120 film. The film curves around the inside-back of the camera, and I’ve always wondered what effect the curve has on the image quality. Perhaps it was made like that to combat field curvature. The image quality is surprisingly sharp for something so simple.

  • thepaulbrown

    Just the MTF charts: Leica M mount lenses ??

  • I’m a Sony shooter, and while AF on the native Sony lens is better than adapting, I tested the Sony 70-200mm f/4 G against the old, original non-IS Canon 70-200mm f/4L, and the Canon was notably better throughout almost the entire zoom range, save for 135mm, where the Sony was just a smidge sharper at the edges. Needless to say, at 1/3 the price, I use the adapted Canon. I love a lot of my Sony glass, but their 70-200/4 is way overpriced for what you get.

  • Ciaran

    Having worked in the semiconductor industry for decades (although not in sensors), I think that making a curved digital sensor would be completely impractical.

    All the semiconductor processing steps – the growing and slicing of wafers, and all the many lithography, deposition, and etching steps – assume that the wafer is flat to within nanometers. Maintaining planarity is extremely important to create nanometer-sized features using photolithographic imaging. This involves ultraviolet light projected through really big lenses (f/0.37 in 35mm equivalence) to an ultraprecise focus on the wafer surface. It is very hard to do on a nearly perfectly flat surface – it would be quite impractical on a curved surface.

    The need for planarity is so great that in the later stages of the process, where layers of metal wires are being placed and connected, a technique called chemical-mechanical polishing (CMP) is used after the deposition of each layer, where the surface of the wafer is precisely polished down to maintain the sub-optical planarity. Without this, the surface of the wafer would be too rough to image the next layer.

    Another basic limitation is that the electronic characteristics of silicon are anisotropic – they vary according to the plane of the silicon crystal. Devices built on a curved surface would have varying electronic properties that would make designing an accurate sensor difficult.

    One could imagine a science fiction reality where these limitations could be overcome and a curved sensor could be fabricated, but it would be astronomically expensive.

  • Nicholas Bedworth

    Roger is correct, as usual. We work with exotic computational imaging devices and sensors, and one thing to keep in mind is that some kind of theoretical insight that might possibly reduce one aberration doesn’t mean too much, compared to the overall process of building a physical lens system that will work across a range of temperature, wavelengths, focal lengths and so forth. Looking at one aberration in isolation isn’t likely to yield much of a practical benefit.

    The ambiguity function is a more generalized, fundamental way of visualizing aberrations as they occur in the Fourier space, where both modulation and phase transfer functions can be manipulated, prior to the light landing on the sensor. PhD optics programs teach it; most engineering programs usually stay with MTF. Another way to think about it is that all the aberrations fall under the category of distortion, and giving them separate names tends to give the impression that they are independently addressable, which isn’t really the case.

    So in the computational imaging approach, one could go after reducing the overall ambiguity function amplitudes, which is a more holistic way of reducing specific aberrations that might be an issue for a given application. And as other people have suggested, manufacturing and packaging considerations might well be more important than nailing a specific aberration.

    Hope this is helpful…

    Our scientists will lean back in their chairs and start thinking about which combination of wavefront encoding and 10-15-20 layers of lens coatings will give the desired result. Aberration control is part of the discussion, but there are so many other issues: in the overall design-process context, going from good to perfect spherical aberration reduction might make a vanishingly small contribution. Possibly it could even introduce problems somewhere else in the pipeline…

  • Max Manzan

    I’ve been slowly moving from Canon DSLR to Sony FE for two years and still have one Canon body as well as several EF lenses. Since I’m not a fanboy of any brand (OK, OK, I have a slight weakness for Zeiss 😀 ), I will here and now make the official statement that I haven’t yet bought either Sony FE 70-200 (preferably the f/2.8; it’s a very important focal range for me) because of the Lensrentals MTF results.
    So Roger, Brandon, and team, you are to be blamed for that.

    P.S. thank you guys for the invaluable job.

  • Brandon Dube

    Spherical aberration was introduced by Descartes in Dioptrique.

  • Brandon Dube

    It is unwise to take at face value the word of a mathematically inclined graduate student propping their research up as revolutionary. How does their horrible equation (good luck translating this between parties) make it cheaper to manufacture an optical system? Is a crazy shape full of wiggles cheaper to make than a sphere? I’ll give you a good hint: you can make an atomically “flat” sphere by rubbing a rock against another “kinda good sphere” for long enough.

    And what does this do for telescopes, which are dominated by reflective designs that have no use for an equation that makes S2 cancel exactly the spherical aberration in S1? If they wanted a spherical-aberration-free surface, they would just use a parabola. But good telescope designs like the Ritchey-Chretien purposefully make the mirrors “almost parabolas” because it is a better solution.

  • Brandon Dube

    An element free of spherical aberration doesn’t do you that much good. Otherwise all of your camera lenses would just be parabolic mirrors, which are also perfect (zero spherical aberration of any order, zilch, nada) in the same sense.

    The manufacturers don’t spend years and invest millions into the design of 20+ element bazookas because they need to keep the design departments busy.

  • Christopher J. May

    Ah, gotcha. Thanks for the information, Roger. I guess I have missed some of the Sony write-ups in the past. I’ll go dig through those now!

    Thanks as always for the wonderful information you share!

  • Christopher, I’ve talked about that in some other Sony posts. It’s basically an artifact of testing with full-frame Sony lenses. The narrow backfocus distance on some Sony lenses puts the very edge of the image right at the edge of the light baffles in the lens, causing either some diffraction or reflection at 20mm. I probably should put the blurb on every FE MTF chart, but I forget; the 20mm sagittal is a falsely high reading, while the tangential should be accurate.

    Unfortunately, if I delete the 20mm reading, the software won’t accept the chart. And I can’t afford to rewrite the software again just for that bit.

    I totally agree that it can solve spherical aberration, one of the six primary aberrations and the one that is often best controlled. I am simply stating it does nothing for the other five primary aberrations, nor for any of the well over a dozen common higher-order aberrations.

    Why try to solve it? Because you’re a graduate student writing a paper; and he succeeded admirably at that. Could it actually be used? Sure it could, probably in post-processing but at least theoretically in camera. Would it be dramatically different and, as you say, “change everything”? Not even close, but it could possibly make a difference in some lens designs sometimes.

    But the headline “Graduate student’s paper could possibly lead to improvement in some lenses sometimes, at least in theory” wouldn’t have gotten your attention. The click-bait headline did.

  • Misha Engel

    If spherical aberration weren’t a big problem, why try to solve it?

    “The importance of solving this problem goes well beyond giving you a sharper picture of your feet for your nine Instagram followers. It would help make better and cheaper to manufacture optical systems in all areas, be it telescopes, microscopes, and everything in between.”

    “It is important to note that both solutions—the Wasserman-Wolf problem and the Levi-Civita problem—are analytical, with symbolic math. This means that the solution to a problem, no matter how you change the input variables, is unique and not an approximation.”

    Not could, it does solve the spherical aberration problem.

  • Christopher J. May

    Roger, any theories on the crazy splits in the tangential and sagittal lines on the Sony at the edge of the frame, especially at 135mm? I don’t think I’ve ever seen such immediate variance in tan/sag lines like that.

  • No. No it doesn’t. It may change one thing. One aberration.
    Don’t quote the click-bait if you didn’t read the actual article. It was some nice math and it could correct spherical aberration, just that. Nothing more.

  • Me too!

  • Misha Engel

    The problem of spherical aberration (formulated by Wasserman and Wolf in 1949) has been solved by Rafael Guillermo González Acuña.

    Rafael Guillermo González Acuña studied Industrial Physics Engineering at Tecnológico de Monterrey and completed a Master’s Degree in Optomechatronics at the Optics Research Center, A.C. He is currently studying for a Doctorate in Nanotechnology at Tecnológico de Monterrey. His doctoral thesis focuses on the design of spherical-aberration-free lenses.

  • asad137

    Nearly every material is ‘stretchy’ if you make it thin enough. Making a blanket statement like “silicon is stretchy” without any qualifiers is, at best, misleading.

  • Before you try flaming me, you should do a /little/ research. What I said is correct. Go read some papers. I found this one in ~1 minute of searching, and there are dozens more where this came from. Multiple research groups are working on this same strategy: take an originally flat sensor, and then bend it into the curved shape. If the SILICON is thin enough, it can absolutely be pulled into the necessary shape.

    https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10709/2312654/Curved-detectors-developments-and-characterization-application-to-astronomical-instruments/10.1117/12.2312654.full?SSO=1

  • Franz Graphstill

    A flexible sensor might work for curving in one dimension (think curving a piece of film), but not in two dimensions. The goal is a sensor that’s curved like the human retina – like the inside of a ball. You won’t achieve that with a flexible sensor unless it can stretch!

  • Andreas Werle

    Sorry asad137, I guess I had some similar ideas. 🙂

  • Andreas Werle

    Hi Brandon!

    My speculation about curved sensors is the following:
    * either a (flexible) flat sensor, which is bent, or
    * an (inflexible) flat sensor with a flat backside and an altered, curved surface (pointing toward the lens)

    The first variant would require a bendable substrate material. I guess that this cannot be silicon; it would break when bent. If you bend an initially flat sensor into a fixed position, the pixel sensors (which sit at the bottom of a small tube) at the fringe of the sensor will point toward the optical axis, which is welcome for wide-angle lenses but would result in vignetting with telecentric lenses. This could be avoided either by microlenses on top of the pixel element, or if the bending is variable, which would require some adaptive elements (actuators) at the margins of the sensor.

    The second variant would require a multilayered sensor (like the Foveon), but with many more layers. Around the optical axis, the pixel sensors would sit at the deepest possible position, with outer pixels in stepwise higher positions – like concentric rings. This would be fine, because you could use silicon-based integrated circuitry. But given the production process of multilayered microchips, it gives me a headache to imagine how the connections between the different layers could be achieved.

    As a result, I would prefer the idea of an “adaptive sensor.” This could either be a sensor that consists of movable tiles of small sensors, or a correction layer in front of the sensor. One possibility would be a lenslet array consisting of small movable microlenses – similar to the one used in a Shack–Hartmann wavefront sensor.

    Greetings Andy

  • asad137

    Maybe not “difficult” in absolute terms, but certainly significantly more difficult than making a flat sensor. The overwhelmingly vast majority of semiconductor fab processes out there in the world are based on processing flat wafers; to make a curved sensor, you either need to:

    a) start with a curved substrate, for which you’d have to design completely new processes, possibly new machines, etc., or

    b) build a curved surface up on a flat substrate and then figure out how to make a sensor on it, which leads you to many of the same problems as (a), or

    c) use a curved microlens array on top of a flat sensor (which comes with its own engineering challenges but is probably easier than making an actual curved sensor).

    Even if you can create a curved sensor, even that isn’t a panacea, because the ideal curvature for one lens isn’t going to be the same as for another, which is obviously a problem for interchangeable lens cameras — you can get the benefit of the curved sensor for one lens, but it might be worse than a flat sensor for others.

    If you google “fstoppers sigma curved sensor” you’ll find an article that discusses the latter point. I had linked to it but I think my post got caught in the post purgatory Roger mentions below.
