Measuring Lens Variance

Published June 26, 2015

Warning: This is a Geek Level 3 article. If you aren’t into that kind of thing, go take some pictures.

I’ve been writing and discussing the copy-to-copy variation that inevitably occurs in lenses since 2008. (1,2,3,4) Many people don’t want to hear about it. Manufacturers don’t want to acknowledge some of their lenses aren’t quite as good as others. Reviewers don’t want to acknowledge that the copy they reviewed may be a little better or a little worse than most copies. Retailers don’t want people exchanging one copy after another trying to find the Holy Grail copy of a given lens. And honestly, most photographers and videographers don’t want to be bothered. They realize sample variation can make a pretty big difference in the numbers a lens tester or reviewer generates without making much difference in a photograph.

It does matter occasionally, though. I answered an email the other day from someone who said, in frustration, that they had tried three copies of a given lens and all were slightly tilted. I responded that I’d lab-tested over 60 copies of that lens, and all were slightly tilted. It wasn’t what he wanted to hear, but it probably saved him and his retailer some frustration. There’s another lens that comes in two flavors: very sharp in the center but weaker in the corners, or not quite as sharp in the center but stronger in the corners. We’ve adjusted dozens of them and can give you one or the other. Not to mention that sample variation is one of the reasons one reviewer may call a lens poor when others found it to be great.

At any rate, copy variation is something few people investigate. And by few, I mean basically nobody. It takes a lot of copies of a lens and some really good testing equipment to look into the issue. We have lots of copies of lenses and really good testing equipment, and I’ve wanted to quantify sample variation for several years. But it’s really, really time-consuming.

Our summer intern, Brandon Dube, has tackled that problem and come up with a reasonably elegant solution. He’s written some Matlab scripts that grab the results generated by our Trioptics Imagemaster Optical Bench, summarize them, and perform sample-variation comparisons automatically. We’re eventually going to present that data to you just like we present MTF data: when a new lens is released, we’ll also give you an idea of the expected sample variation. Before we do that, though, we need to get some idea of what kind of sample variation should be expected.

For today, I’m going to mostly introduce the methods we’re using. Why? Because I’m old fashioned enough to think scientific methods are still valid. If I claim this lens scores 3.75 and that lens scores 5.21, you deserve to know EXACTLY what those findings mean (or don’t mean) and what methods I used to reach those findings. You should, if you want to, be able to go get your own lenses and testing equipment and duplicate those findings. And maybe you can give us some input that helps us refine our methods. That’s how science works.

I could just pat you on the head, blow some smoke up your backside, call my methods proprietary and too complex, and tell you this lens scores 3.75 and that lens scores 5.21, so you should run out and buy that. That provides hours of enjoyment fueling fanboy duels on various forums, but otherwise is patronizing and meaningless. Numbers given that way are as valid as the number of the Holy Hand Grenade of Antioch.


All lenses were prescreened using our standard optical testing to make certain the copies tested were not grossly decentered or tilted. Lenses were then measured at 10, 20, 30, 40, and 50 line pairs per mm using our Trioptics Imagemaster MTF bench. Measurements were taken at 20 points from one edge to the other and repeated at 4 different rotations (0, 45, 90, and 135 degrees), giving us a complete picture of the lens.


The 4 rotation measurements were then averaged, giving us a graph like this for each copy.
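Our actual scripts are Matlab, but the rotation-averaging step is language-agnostic. Here's a minimal Python sketch of it; the numbers are invented purely for illustration (and use 5 field positions instead of the bench's 20):

```python
from statistics import mean

# Hypothetical numbers: MTF at one frequency, measured at 5 field positions
# (the real bench uses 20), at each of the 4 rotations (0/45/90/135 degrees).
rotations = [
    [0.62, 0.60, 0.55, 0.48, 0.40],   # 0 degrees
    [0.61, 0.59, 0.53, 0.47, 0.38],   # 45 degrees
    [0.63, 0.58, 0.54, 0.46, 0.41],   # 90 degrees
    [0.60, 0.61, 0.56, 0.47, 0.39],   # 135 degrees
]

# The per-copy curve we plot is the point-by-point mean of the 4 rotations.
per_copy_curve = [mean(vals) for vals in zip(*rotations)]
```
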


The averages for 10 copies of the same lens model were then averaged, giving us an average MTF curve for that lens. This is the type of MTF reading we show you in a blog post. The graphics look a bit different from what we’ve been using because we now generate them with one of Brandon’s scripts, which makes them more reproducible.

Graph 1: Average MTF of 10 copies of a lens.


Graphing the Variation Between Copies

Every copy of the lens is slightly different from this ‘average’ MTF, and we want to give you some idea of how much variance exists between copies. A simple way is to calculate the standard deviation at each image height. Below is a graph showing the average value as lines, with the shaded area representing 1.5 standard deviations above and below the average. In theory (the theory doesn’t completely apply here, but it gives us a reasonable rule of thumb), MTF results for roughly 87% of all copies of this lens would fall within the shaded areas.
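For the curious, the band at a single image height boils down to this (a Python sketch with invented MTF values, not our actual script):

```python
from statistics import mean, stdev

# Hypothetical: the 30 lp/mm MTF of 10 copies at one image height.
copies_at_height = [0.55, 0.52, 0.58, 0.54, 0.56, 0.51, 0.57, 0.53, 0.55, 0.54]

m = mean(copies_at_height)         # the line on the graph
s = stdev(copies_at_height)        # sample standard deviation at this height
band = (m - 1.5 * s, m + 1.5 * s)  # the shaded area, repeated at each image height
```

Repeating this at every field position traces out the shaded region in Graph 2.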


Graph 2: Average MTF (lines) +/- 1.5 S. D. (area)


Obviously, these area graphs overlap so much that it’s difficult to tell where the areas start and stop. We could change to 1 or even 0.5 standard deviations and make things look better. That would work fine for the lens in this example, but this is actually a lens with fairly low variation. Some other lenses vary so much that the graph would be nothing but completely overlapping colors, even at +/- one standard deviation.

The problem of displaying lens variation is one we’ve struggled with for years; most variation for most lenses just won’t fit in the standard MTF scale. We have chosen to scale the variance chart by adding 1.0 to the 10 lp/mm values, 0.9 to the 20 lp/mm values, 0.75 to 30 lp/mm, 0.4 to 40 lp/mm, and 0.15 to 50 lp/mm. We chose those numbers simply because they make the graphs readable for a “typical” lens.
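In code, the expanded axis is just a per-frequency offset; this Python sketch uses the offsets from the article (the function name `shift_for_plot` is our hypothetical label, not part of the real scripts):

```python
# Offsets (in MTF units) added to each frequency's curves so the
# bands separate visually; the values come from the article.
OFFSETS = {10: 1.00, 20: 0.90, 30: 0.75, 40: 0.40, 50: 0.15}

def shift_for_plot(freq_lp_mm, mtf_values):
    """Return the values as plotted on the expanded axis; subtract
    the same offset to read true MTF back off the chart."""
    return [v + OFFSETS[freq_lp_mm] for v in mtf_values]
```

Reading a value back off the chart means subtracting the same offset, which is the "math in your head" mentioned below Graph 3.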

Graph 3 presents the same information as Graph 2 above, but with the axis expanded as we described to make the variation more readable.

Graph 3: Average MTF (lines) +/- 1.5 S. D. (area); modified vertical axis


You could do some math in your head and still get the MTF numbers off of the new graph, but we will, of course, still present average MTF data in the normal way. This graph will only be used to illustrate variance. It can be quite useful, though. For example, the figure below compares the graph for the lens we’ve been looking at on the left, and a different lens on the right.




Some things are very obvious at a glance. The second lens clearly has lower MTF than the first. It also has larger variation between samples, especially as you move away from the center (the center is the left side of the horizontal axis). In the outer 1/3 of the image, in particular, the variation is extremely large. This agrees with what we see in real life: the second lens is one of those lenses where every copy seems to have at least one bad corner, and some have more than one. Also, if you look at the black and red areas at the center of each lens (the left side of each graph), even the center of the second lens has a lot of variation between copies. Those are the 10 and 20 line pairs per mm graphs, and these differences between copies in the center are the kind of thing most photographers would notice as a ‘soft’ or ‘sharp’ copy.

The Variation Number

The graphs are very useful to compare two or three different lenses, but we intend to compare variation for a lot of different lenses. With that in mind we thought a numeric ‘variation number’ would be a nice thing to generate. A table of numbers certainly provides a nice, quick summary that would be useful for comparing dozens of different lenses.

As a rule, I hate when someone ‘scores’ a lens or camera and tries to sum up 674 different subjective things by saying ‘this one rates 6.4353 and this one rates 7.1263’. I double-secret hate it when they use ‘special proprietary formulas you wouldn’t understand’ to generate that number. But this number is only describing one thing: copy-to-copy variation. So I think if we show you exactly how we generate the number then 98% of you will understand it and take it for what it is, a quick summary. It’s not going to replace the graphs, but may help you decide which graphs you want to look at more carefully.

(<Geek on>)

It’s fairly straightforward to find the number of standard deviations needed to satisfy some absolute limit, for example +/- 12.5%. Just using the absolute standard deviation, though, would penalize lenses with high MTF: if the absolute MTF is 0.1 there’s not much room to go up or down, while at 0.6 there’s lots of room to change. That meant bad lenses would seem to have low variation scores while good lenses would have higher ones. So we made the variation number relative to the lens’s measured MTF, rather than an absolute variation. We simulated the score for lenses of increasingly high resolution and saw that the score rose exponentially, so we take its square root to make it close to linear.

Initially we thought we’d just find the worst area of variability for each lens, but we realized some lenses have low variation across most of the image plane and then vary dramatically in the last mm or two. Using the worst location made these lenses seem worse than lenses that varied a fair amount in the center. So we decided to average across the entire image plane instead. To keep the math reasonable, we calculated the number just for the 30 line pairs per mm variance (the green area in the graphs), since that is closest to the Nyquist frequency of 24MP-class full-frame sensors. Not to mention, higher frequencies tend to have massive variation in many lenses while lower frequencies have less; 30 lp/mm provides a good balance. Since some lenses have more variation in the tangential plane and others in the sagittal, we pick the worse of the two image planes to generate the variance number.

Finally, we multiply by a scaling factor to put the score on a convenient range.

For those who speak computer better than we can explain the formula in words, here’s the exact Matlab code we use:

T3Mean = mean(MultiCopyTan30);
S3Mean = mean(MultiCopySag30);
Tan30SD_Average = mean(MultiCopySDTan30);
Sag30SD_Average = mean(MultiCopySDSag30);
ScoreScale = 9;
if T3Mean > S3Mean
    TarNum = 0.125*T3Mean;
else
    TarNum = 0.125*S3Mean;
end
if Tan30SD_Average > Sag30SD_Average
    ScoreTarget = TarNum*T3Mean;
    VarianceScore = ScoreTarget/Tan30SD_Average;
    MTFAdjustment = 1 - (T3Mean/(0.25*ScoreScale));
    VarianceScore = sqrt(VarianceScore*MTFAdjustment);
else
    ScoreTarget = TarNum*S3Mean;
    VarianceScore = ScoreTarget/Sag30SD_Average;
    MTFAdjustment = 1 - (S3Mean/(0.25*ScoreScale));
    VarianceScore = sqrt(VarianceScore*MTFAdjustment);
end
VarianceNumber = VarianceScore*ScoreScale;
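For those who’d rather read Python than Matlab, here is an equivalent sketch of the same calculation. The function name and the example values below are ours, purely for illustration:

```python
from math import sqrt

def variance_number(t30_mean, s30_mean, t30_sd, s30_sd, score_scale=9.0):
    """Sketch of the variance-number formula: 12.5% of the higher of the
    tangential/sagittal 30 lp/mm means sets the target, the more variable
    plane is the one scored, and the square root keeps the scale roughly
    linear for high-resolution lenses."""
    tar_num = 0.125 * max(t30_mean, s30_mean)
    if t30_sd > s30_sd:          # tangential plane is the more variable one
        mean_mtf, sd = t30_mean, t30_sd
    else:                        # sagittal plane is the more variable one
        mean_mtf, sd = s30_mean, s30_sd
    score_target = tar_num * mean_mtf
    variance_score = score_target / sd
    mtf_adjustment = 1 - mean_mtf / (0.25 * score_scale)
    return sqrt(variance_score * mtf_adjustment) * score_scale
```

As a made-up example, a lens with tangential/sagittal 30 lp/mm means of 0.6 and 0.5 and averaged standard deviations of 0.03 and 0.02 would score about 9.4 on this scale.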

(</Geek off>)

Here are some basics about the variance number —

  1. A high score means there is little variation between copies. If a lens has a variance number over 7, all copies are pretty similar. If it has a number less than 4, there’s a lot of difference between copies. Most lenses are somewhere in between.
  2. A difference of “0.5” between two lenses seems to agree with our experience testing thousands of lenses. A lens with a variability score of 4 is noticeably more variable than a lens scoring 5, and if we check carefully is a bit more variable than one scoring 4.5.
  3. A difference of about 0.3 is statistically significant between lenses of similar resolution across the frame.
  4. Ten copies of each lens is the most we have the resources to test right now. That’s not enough for rigorous statistical analysis, but it does give us a reasonable idea. In testing 10 copies each of nearly 50 different lenses so far, the variation number changes very little between 5 and 10 copies and hardly at all after 10. Below is an example of how the variance number changed as we did a run of 15 copies of a lens.

How the variance number changed as we tested more copies of a given lens. For most lenses, the number was pretty accurate by 5 copies and changed by only 0.1 or so as more copies were added to the average.
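Point 4 can be illustrated with a quick sketch (Python, with invented per-copy values): watch a running spread estimate settle down as copies are added to the pool.

```python
from statistics import stdev

# Hypothetical per-copy summary values for a run of 15 copies of one lens.
copy_values = [0.55, 0.52, 0.58, 0.54, 0.56, 0.51, 0.57, 0.53,
               0.55, 0.54, 0.56, 0.52, 0.55, 0.53, 0.54]

# The spread estimate after each additional copy joins the pool;
# in our runs this settles by roughly 5-10 copies.
running = [stdev(copy_values[:n]) for n in range(2, len(copy_values) + 1)]
```
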

Some Example Results

The main purpose of this post is to explain what we’re doing, but I wanted to include an example just to show you what to expect. Here are the results for all of the 24mm f/1.4 lenses you can currently buy for an EF or F mount camera.

First, let’s look at the MTF graphs for these lenses. I won’t make any major comments about the MTF of the various lenses, other than to say the Sigma is slightly the best and the Rokinon is much worse than the others.



Now let’s look at the copy-to-copy variation for the same four lenses. The graphs below also include the Variation Number for each lens, in bold type at the bottom.


Just looking at the variation number, the Canon 24mm f/1.4L lens has less copy-to-copy variation than the other 24mm f/1.4 lenses. The Rokinon has the most variation.

The Nikon and Sigma lenses show an interesting point. Looking at the graphs, the Sigma clearly has more variation, but the Sigma variation number is only slightly different from the Nikon number. That’s because the average resolution of the Sigma at 30 lp/mm is also quite a bit higher, and the formula we use takes that into account. If you look at the green variation areas, you can see that the weaker Sigma copies will still be as good as the better Nikon copies. But this is a good example of how the number, while simpler to look at, doesn’t give the whole picture.

The graphs show something else that is more important than the simple difference in variation number. The Sigma lens tends to vary much more in the center of the image (left side of the graph), and the variation includes the low-frequency 10 and 20 line pairs per mm areas (black and red). The Rokinon tends to vary most dramatically at the edges and corners (right side of the graph). In the practical world, a photographer carefully comparing several copies of the Sigma would be more likely to notice a slight difference in overall sharpness between the lenses. The same person doing careful testing on several copies of the Rokinon would probably find each lens has one or two soft corners.

Attention Fanboys: Don’t use this one lens example to start making claims about this brand or that brand. We’ll be showing you in future posts that at other focal lengths things are very different. Canon L lenses don’t always have the least copy-to-copy variation, and Sigma Art lenses at other focal lengths do quite a bit better than this. We specifically chose 24mm f/1.4 lenses for this example because they are complicated and very difficult to assemble consistently.

And just for a teaser of things to come, I’ll give you one graph that I think you’ll find interesting, not because it’s surprising, but because it verifies something most of us already know. The graph below is simply a chart of the variation numbers of many lenses, sorted by focal length. The lens names are removed because I’m not going to start fanboy wars without giving more complete information, and that will have to wait a week or two because I’ll be out of town next week. But the graph does show that wider-angle lenses tend to have more copy-to-copy variation (lower variation numbers), while longer focal lengths, up to 100mm, tend to have less. At most focal lengths, though, some lenses have little copy-to-copy variation and some have a lot.



What Are We Going to Do with This?

Fairly soon, we will have this testing done for all wide-angle and standard-range prime lenses we carry and can test. (It will be a while before we can test Sony E-mount lenses – we have to make some modifications to our optical bench because of Sony’s electromagnetic focusing.) By the end of August, we expect to have somewhere north of 75 different models tested and scored. It will be useful when you’re considering purchasing a given lens and want to know how different your copy is likely to be from the one you read the review of. But I think there will be some interesting general questions, too.

  • Do some brands have more variation than other brands?
  • Do more expensive lenses really have less variance than less expensive ones?
  • Do lenses designed 20 years ago have more variance than newer lenses? Or do newer, more complex designs have more variance?
  • Do lenses with image stabilization have more variance than lenses that don’t?

Before you start guessing in the comments, I should tell you we’ve completed enough testing that I’ve got pretty good ideas of what these answers will be. And no, I’m not going to share until we have all the data collected and tabulated. But we’ll certainly have that done in a couple of weeks.


Roger Cicala, Aaron Closz, and Brandon Dube

June, 2015

A Request:

Please, please don’t send me a thousand emails asking about this lens or that. This project is moving as fast as I can move it. But I have to ‘borrow’ a $200,000 machine for hours to days to test each lens, I have a busy repair department to run, and I’m trying to not write every single weekend. This blog is my hobby, not my livelihood, and I can’t drop everything to test your favorite lens tomorrow.

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: a medium format camera, or a Pentax K1, or a Sony RX1R.

Posted in Equipment
  • Randy

    In the good old days, there were the engraved Linhof and Sinar lenses which were a tacit admission that expensive lenses do vary or if you prefer, that Zeiss, Schneider and Rodenstock’s quality control wasn’t demanding enough.

    I suspect you’ll find that most of the more expensive stuff doesn’t vary enough to worry about but the bargains, like Rokinon will be all over the place. As they say, one person swears by a lens…the other at it, and they’re both right.

    Should be a heck of a lot more interesting than reading some DxO report.

  • Charles

    “We simulated the score for lenses of increasingly high resolution and saw the score would rise exponentially, so we take the square root of it to make it close to linear.”

    Without knowing what the data looked like, it sounds like a square root may not be the best correction (especially if it is actually exponential). Perhaps find which of an exponential function and a low degree polynomial function best fits your simulated data and use its inverse to correct your data.

  • Leo

    These are very useful and practical results. However, the results will only be truly practical if lens manufacturers with poor manufacturing and QA processes lose sales. The result could be unexpectedly higher lens prices, at least initially. Most of the CZ lenses should be very good based on their prices, but in the end the king may have no clothes.

  • This is very useful. Thank you. It is surprising to me how many photographers, even many professional ones, do not realize that there can be significant copy-to-copy variations among lenses. I am also glad to hear that you test the lenses you rent and sell, which I believe is not something that your competitors (at least not all of them) do. I once rented a 17mm TS-E Canon lens from one of your competitors, a lens design I know that performs extremely well, and it was absolute garbage. I hope this kind of testing will encourage manufacturers and rental companies to enhance their quality-control procedures. You are in an ideal position to do this since you have access to so many copies of the same lenses.

  • First, thanks for taking the time and effort to do this. I’ve commented before that you’re uniquely positioned to provide this useful analysis, and I’m glad that you have some time to embark upon the journey!

    Thanks for applying the scientific method where possible (sharing your techniques, being open to process improvements, etc.). Other companies (like the one that rhymes with GXOBark) lack openness, which as you say results in constant bickering about bias and validity. Pretty much every other reviewer only works with a single lens, and we all know what science thinks of an n of 1.

    Also, Brandon needs a fancy lensrental Employee (or lensrental Intern?) tag when he comments.

    Also, it still says copyright 2012 in the footer of the webpage.

    Also, I don’t think I’ve said you’re awesome yet. You’re awesome.

  • Brandon


    the preprocessing so to speak works as follows:

    4 quadrants and single-copy average read by matlab, single-copy plots produced. Xlsx file written with 4 quadrants and average.

    Matlab script copies all single copies into a Master sheet for that copy. Master sheet takes the standard deviation (as well as some other information) of the individual copy’s averages – not their rotations.

    Averaging the individual rotations does not work for lenses that have built in petal hoods, such as the Canon 14mm f/2.8, Zeiss 15mm f/2.8, Rokinon 14mm f/2.8, etc. You get results like this: where some rotations are clipped, others are not.

    Additionally, we are looking at the variation between copies with the variance number. If we take the standard deviation of the rotations we are looking at the variance between as well as inside copies. We are recording the numbers you are suggesting taking in excel but we aren’t doing anything with that information yet.


  • Wally

    Without having access to all the data, it does make seeing how all the numbers scale relative to each other difficult. However, I would suggest that when calculating the standard deviation of the MTF values you should be doing it on all the original numbers for a given frequency value and not on any of the consolidated set of numbers. In the case of the mean it does not matter, but by averaging the deviations over several steps it does result in a significantly different final number. Perhaps you already are; I of course do not know what MultiCopySDTan30 is. Is it the full set of data or the values used in the graph?

  • Brandon


    If we used a system like that, we would have to invert and scale still. I think they come out with similar complexity in the end, but a big “feature” of the variance number is that the difference between “5” and “6” is about the same size as the one between “6” and “7” and so forth. This was lost in most of the other systems I tried before I settled on this. We do want to keep some consideration for the absolute variance too. E.g the sigma 24/1.4 being more variable absolutely than the Nikkor 24mm – without the adjustment to discount higher resolving lenses it sits pretty squarely between the canon and the nikon. We felt the number should reflect both absolute and relative variance to some degree.


  • Brandon


    The flange distance can change with time slightly, the mount will wear down a little, the wavy washers in the mount will wear and loosen, and internal collars and other bits will wear too. For phase detect autofocus, every lens copy has a lookup table of correction values set at the factory to counteract things like focus shift, incorrect mount distance, etc. This is usually what gets adjusted when you send a lens to be fixed for bad focus.

    Anyway, yes the mount can change =)


  • Brandon


    We do plan on looking at cinema lenses at some point but not immediately – there are many photo lenses to look at first. Some cinema lenses are not the same as their photo counterparts, for example the canon cinema 50mm and there is a noticeable difference on OLAF in the assembly of the 24mm cine and L versions.

    All in due time.


  • Wally

    Do you see a big advantage of your variance number over just using the Coefficient of Variation on the mean values. Or in your code:

    VarianceNumber = 100*(Tan30SD_Average/T3Mean) with of course code to select which set had the higher number Tan. or Sag.

    It would have the advantage of larger numbers being more variable and still scales with higher resolution lenses. For instance a 0.8 MTF lens with a 0.1 SD would end up 12.5 and a 0.6 MTF lens with a SD of 0.1 would be 16.6

  • You totally forgot the closing ‘>’ at the end of the bracket


  • JohnL

    I mentioned this in the comments on the IR article about this (excellent) piece and will also mention it here – it would be interesting at some point to see if expensive Cinema lenses use some of that extra money on better alignment.

    Also new results vs. results after a chunk of use would be interesting. I’d rather have a lens that stays the way it was over one that’s stellar for the first year and not quite so good afterwards.

  • william

    A tangential question to this subject. There is variation between lenses, and between the same lens on different bodies. Can there be a variation between the same lens on the same body after unmounting and remounting the lens? You stated previously a few microns difference can affect lens performance and I surmise this could occur at the mounting point? I ask as my PDAF microadjustment values seem to change intermittently.

  • Roger, I always find your posts very interesting, well written and always with the right touch of humor.

    What you’re going to carry on, which is really a study on lens variations, is gonna be of tremendous help! As far as I can tell, no one has ever done such a thing and I’m looking forward to reading the results.

    I think we cannot say it enough: thanks a lot for all the effort you put into your different testing procedures and this blog of course 🙂

  • Feng Chun

    Absolutely loving this!

  • Roger Cicala

    Peter, we have to create an electronically wired mount to connect the lens to a camera or power supply to maintain focus. We’re prototyping one, but it’s difficult.

  • MayaTlab

    Peter Honka,

    I suppose Lensrentals’ people will give you a much more satisfying answer than mine, but from what I gather they’re hard to test since many FE (and m43 I suppose) lenses use an autofocus mechanism that leaves autofocusing elements loose when the camera isn’t powered on – meaning that it’s hard to keep them in the right place for testing.

  • Fil

    Roger et Al., what you’ve started here is one extremely valuable service to photographers as well as manufacturers! Expect to be epically misunderstood by both sides, for daring to shatter many a belief and dream (not to mention advertisement!). In the meantime, we’ll go on using what we have, and make the best of it. From the first Box Brownie thru the latest Yunamit… all cameras were good if used within their specific abilities. It was always photographers that were the greatest variation.
    Great work, this. Looking forward to the sequels.

  • Roger, do you plan to also include the results of the Angenieux 25mm f/0.95 if you come across more copies of the lens?

  • Andrew

    I love this. Can I make a suggestion, about presentation? Given the way our visual system works, it would be much easier to compare pairs of the wonderful MTF + variation graphs if you flipped one of them horizontally (mirror image, left to right). In other words, have the “center” numbers in the middle (between the two graphs), and the “edge” numbers on the outside. I tried it quickly, and it makes it much easier to line up the graphs and see whether lenses of one type that fall within the range of variability are generally better or worse than those of the other. Does that make sense?

  • Peter Honka

    do you also test sony FE lenses?
    i hear and read horrible things about sony lenses.

    i would really love to see results for the sony E mount.

  • Thanks!!! I’ve “touted” this lens sample variation “thing” since I first posted my “Subjective Lens Evaluations (Mostly Nikkors)” on my website beginning in 1996, and it is now at:
    Being a lens sharpness nut, and having had the opportunity to go through many lenses (both owned and borrowed) over the oh-so-many years, I collected a couple of times rather nice groups of lenses, and currently I’m working on a good set of MFT lenses, with a “simpler” piece on those here:
    But, unlike you, my methods were necessarily more “subjective”, with an eye used to evaluate/compare thousands of images over time. For reasons (possibly) explained in the above articles, I used distant detailed landscapes rather than test charts for my evaluations, but I like your method, too!;-) Thanks again for doing this!
    –David Ruether

  • Brandon


    All aspects of the lenses vary to some degree, how much it matters is up to the user to decide.

    Distortion won’t change much unless the lenses are assembled very poorly – with prototypes of a particular lens I tested for a different project I saw a 1.63-1.87% barrel distortion range across 13 copies. Lenses that have much higher distortion shouldn’t be more sensitive to manufacturing error, though the point of inflection in lenses with moustache distortion can shift around a little bit.

    Focus shift is complicated – the quick answer is “yes, it can vary” – how much I don’t know. The requirement is more or less that a lens element be spaced forward/backward incorrectly instead of shifted to the side a bit, tilted, etc. Possible? Absolutely. But it’s less likely than a shift side-to-side or a tilt.

    If you would like to talk about it in more detail, feel free to email me at


  • Brandon


    The reason we try to force the score to be linear is to prevent the excellent lenses from “running away” so to speak. On the focal length plot you can see two 100mm lenses that are head-and-shoulders above the rest. Without using a square root adjustment, their scores would be higher than 20 if we scaled the score to make the 24mm lenses score about as well as they do now numerically.

    The lenses are better, but they aren’t 3x better, and if there are lenses a little better than those they may score 40 – likewise those aren’t 6x better, and so on.

    It also helps to compare lenses – here we can say that a difference of 1 is always significant. Without the square root, it isn’t significant for very poor scoring lenses, and it isn’t significant for very well scoring lenses.


  • Brandon

    Ron, Tim,

    The only way to separate the various spatial frequencies is to, well, separate them. A semilog plot or anything like that wouldn’t be particularly helpful – the issue is that for both the very variable lenses and the very high resolution lenses, the lower range of 20lp/mm may overlap with the upper range of 30lp/mm, for example. Short of actually moving the data (and re-scaling the axis to keep it visible) we do not know of a way to make the plots more readable.


  • Seth

    JulianH: I think mean vs median is mostly irrelevant in this case – there should be no outliers since Roger and the team are pre-screening lenses for issues before doing the optical test and ingesting the data. If it’s good enough to pass pre-screen then it should be considered a representative sample.

  • Tony

    I have a couple of comments on the data workup. First off, I was very pleased to see this: “So we made the Variation number relative to the lens’ measured MTF, rather than an absolute variation.” I think that helps keep the presentation more intuitive than you’d get with absolute numbers. 10% vs 20% probably appears much more different than 89% vs 99%.

    But the next sentence had me scratching my head: “We simulated the score for lenses of increasingly high resolution and saw the score would rise exponentially, so we take the square root of it to make it close to linear.” I’m not sure that making things appear more linear is automatically and inherently a good thing. You might have an excellent technical reason why that is the right way to look at things but that wasn’t explained. I fear that this might be a step away from complete transparency of the meaning of the numbers. I’ll vote for traceability over beauty.

    Like Julian, I prefer medians as opposed to averages. Half will be better, and half will be worse. Once again this would keep things intuitive, as in: “I have a coin-toss chance of getting a worse lens that that”.

    The testing procedure seems reasonable to me and the results are very interesting already. Thanks for sharing so much about your methods, and of course thanks for writing this blog at all.

    I’m wondering if it wouldn’t be more informative if the variation plots would show the median sharpness across all lenses, not the mean sharpness. That would give a better representation of the sharpness that one would expect from a random lens of that type and not be as sensitive to single outlier values (e.g. one of ten copies being far better or worse than the rest).

    One other small thing: I find the terminology to be slightly confusing, especially the use of “variance (score)” as something that increases with decreasing variance. I understand that you want such a value because what the average person expects from a score is that a higher value is better. It’s slightly more confusing because, mathematically, the variance is the standard deviation squared, while your variance is the square root of the standard deviation. Or, actually, something like sqrt(mtf^2/(8*SD) * (1-mtf/2.25)). Unless I am misinterpreting your code, which is very possible.

    Anyway, cheers, I’m looking forward to more results, no matter what they’re called. 🙂

  • Roger Cicala

    Dan, our protocol is that all new lenses are checked by our intake staff and any unacceptable ones are returned, which is about 2% of lenses. This is roughly equivalent to someone testing the lens very carefully at home. The ones we test optically don’t include those, so you could probably think of it as the sample variation of acceptable, non defective copies.
