
Measuring Lens Variance

Published June 26, 2015

Warning: This is a Geek Level 3 article. If you aren’t into that kind of thing, go take some pictures.

Since 2008, I’ve been writing about and discussing the copy-to-copy variation that inevitably occurs in lenses. (1,2,3,4) Many people don’t want to hear about it. Manufacturers don’t want to acknowledge some of their lenses aren’t quite as good as others. Reviewers don’t want to acknowledge that the copy they reviewed may be a little better or a little worse than most copies. Retailers don’t want people exchanging one copy after another trying to find the Holy Grail copy of a given lens. And honestly, most photographers and videographers don’t want to be bothered. They realize a lens’s sample variation can make a pretty big difference in the numbers a lens tester or reviewer generates without making much difference in a photograph.

It does matter occasionally, though. I answered an email the other day from someone who said, in frustration, that they had tried 3 copies of a given lens and all were slightly tilted. I responded that I’d lab-tested over 60 copies of that lens, and all were slightly tilted. It wasn’t what he wanted to hear, but it probably saved him and his retailer some frustration. There’s another lens that comes in two flavors: very sharp in the center but weaker in the corners, or not quite as sharp in the center but stronger in the corners. We’ve adjusted dozens of them and can give you one or the other. Not to mention that sample variation is one of the reasons one review of a lens may call it poor when other reviewers found it to be great.

At any rate, copy variation is something few people investigate. And by few, I mean basically nobody. It takes a lot of copies of a lens and some really good testing equipment to look into the issue. We have lots of copies of lenses and really good testing equipment, and I’ve wanted to quantify sample variation for several years. But it’s really, really time-consuming.

Our summer intern, Brandon Dube, has tackled that problem and come up with a reasonably elegant solution. He’s written some Matlab scripts that grab the results generated by our Trioptics Imagemaster Optical Bench, summarize them, and perform sample-variation comparisons automatically. Eventually, we’re going to present that data to you just like we present MTF data: when a new lens is released, we’ll also give you an idea of the expected sample variation. Before we do that, though, we need to get some idea of what kind of sample variation should be expected.

For today, I’m going to mostly introduce the methods we’re using. Why? Because I’m old-fashioned enough to think scientific methods are still valid. If I claim this lens scores 3.75 and that lens scores 5.21, you deserve to know EXACTLY what those findings mean (or don’t mean) and what methods I used to reach them. You should, if you want to, be able to get your own lenses and testing equipment and duplicate those findings. And maybe you can give us some input that helps us refine our methods. That’s how science works.

I could just pat you on the head, blow some smoke up your backside, call my methods proprietary and too complex, and tell you this lens scores 3.75 and that lens scores 5.21, so you should run out and buy that. That provides hours of enjoyment fueling fanboy duels on various forums, but otherwise is patronizing and meaningless. Numbers given that way are as valid as the number of the Holy Hand Grenade of Antioch.

Methods

All lenses were prescreened using our standard optical testing to make certain the copies tested were not grossly decentered or tilted. Lenses were then measured at 10, 20, 30, 40, and 50 line pairs per mm using our Trioptics Imagemaster MTF bench. Measurements were taken at 20 points from one edge to the other and repeated at 4 different rotations (0, 45, 90, and 135 degrees), giving us a complete picture of the lens.


The 4 rotation values were then averaged for each copy, giving us a graph like this for each copy.


The averages for 10 copies of the same lens model were then averaged, giving us an average MTF curve for those 10 copies of that lens. This is the type of MTF reading we show you in a blog post. The graphics look a bit different than the ones we’ve been using, but that’s because we’re generating them with one of Brandon’s scripts now, which makes them more reproducible.
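For those who like to see the bookkeeping, here’s a minimal Matlab sketch of that two-step averaging at a single frequency. The array names and shapes are our illustration, not Brandon’s actual script:

% Minimal sketch (hypothetical names, not Brandon's actual script).
% mtfOneCopy holds one copy's measurements: 4 rotations x 20 field points.
copyAverage = mean(mtfOneCopy, 1);      % 1 x 20, rotations averaged
% copyCurves stacks those per-copy averages for 10 copies: 10 x 20.
multiCopyMean = mean(copyCurves, 1);    % 1 x 20 average MTF curve we plot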

Graph 1: Average MTF of 10 copies of a lens.


Graphing the Variation Between Copies

Every copy of the lens is slightly different from this ‘average’ MTF, and we want to give you some idea of how much variance exists between copies. A simple way is to calculate the standard deviation at each image height. Below is a graph showing the average value as lines, with the shaded area representing 1.5 standard deviations above and below the average. In theory (the theory doesn’t completely apply here, but it gives us a reasonable rule of thumb), MTF results for about 87% of all copies of this lens would fall within the shaded areas.
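If you want to compute that band yourself, it’s just a per-field-point standard deviation. A minimal sketch, again with hypothetical names:

% copyCurves is 10 x 20: one row per copy, one column per field point.
mtfMean = mean(copyCurves, 1);        % 1 x 20 average across copies
mtfSD   = std(copyCurves, 0, 1);      % 1 x 20 standard deviation
upperBand = mtfMean + 1.5*mtfSD;      % top of the shaded area
lowerBand = mtfMean - 1.5*mtfSD;      % bottom of the shaded area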


Graph 2: Average MTF (lines) +/- 1.5 S.D. (area)


Obviously, these area graphs overlap so much that it’s difficult to tell where each area starts and stops. We could change to 1 or even 0.5 standard deviations and make things look better. That would work fine for the lens we used in this example, which actually has fairly low variation. Some other lenses vary so much that their graphs would be nothing but completely overlapping colors, even if we showed +/- one standard deviation.

The problem of displaying lens variation is one we’ve struggled with for years; most variation for most lenses just won’t fit in the standard MTF scale. We have chosen to scale the variance chart by adding 1.0 to the 10 lp/mm value, 0.9 to the 20 lp/mm value, 0.75 to the 30 lp/mm value, 0.4 to the 40 lp/mm value, and 0.15 to the 50 lp/mm value. We chose those numbers simply because they make the graphs readable for a “typical” lens.
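In code, the offsets are nothing more than a constant added to each frequency’s curve before plotting. A sketch, assuming mtfCurves is a hypothetical 20 x 5 array with one column per frequency:

% Display offsets for the 10/20/30/40/50 lp/mm curves (see text above).
offsets = [1.0, 0.9, 0.75, 0.4, 0.15];
shifted = mtfCurves + repmat(offsets, size(mtfCurves, 1), 1);
% The offsets only spread the curves apart for readability; subtract
% them back out to recover true MTF values from the chart.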

Graph 3 presents the same information as Graph 2 above, but with the axis expanded as we described to make the variation more readable.

Graph 3: Average MTF (lines) +/- 1.5 S.D. (area); modified vertical axis


You could do some math in your head and still read the true MTF numbers off the new graph, but we will, of course, still present average MTF data in the normal way. This graph will only be used to illustrate variance. It can be quite useful, though. For example, the figure below compares the lens we’ve been looking at, on the left, with a different lens on the right.


Some things are very obvious at a glance. The second lens clearly has lower MTF than the first. It also has larger variation between samples, especially as you move away from the center (the center is the left side of the horizontal axis). In the outer 1/3 of the lens, in particular, the variation is extremely large. This agrees with what we see in real life: the second lens is one of those lenses where every copy seems to have at least one bad corner, and some have more than one. Also, if you look at the black and red areas at the center of each lens (the left side of each graph), even the center of the second lens has a lot of variation between copies. Those are the 10 and 20 line pairs per mm curves, and these central differences are the kind of thing most photographers would notice as a ‘soft’ or ‘sharp’ copy.

The Variation Number

The graphs are very useful for comparing two or three different lenses, but we intend to compare variation for a lot of different lenses. With that in mind, we thought a numeric ‘variation number’ would be a nice thing to generate. A table of numbers provides a quick summary that is useful for comparing dozens of different lenses.

As a rule, I hate when someone ‘scores’ a lens or camera and tries to sum up 674 different subjective things by saying ‘this one rates 6.4353 and this one rates 7.1263’. I double-secret hate it when they use ‘special proprietary formulas you wouldn’t understand’ to generate that number. But this number describes only one thing: copy-to-copy variation. So I think if we show you exactly how we generate the number, then 98% of you will understand it and take it for what it is: a quick summary. It’s not going to replace the graphs, but it may help you decide which graphs you want to look at more carefully.

(<Geek on>)

It’s a fairly straightforward process to find the number of standard deviations needed to stay within some absolute limit, for example +/- 12.5%. Just using the absolute standard deviation, though, would penalize lenses with high MTF: if the absolute MTF is 0.1, there’s not much room to go up or down, while if it’s 0.6, there’s lots of room to change. That meant bad lenses would seem to have low variation scores while good lenses would have higher scores. So we made the variation number relative to the lens’s measured MTF, rather than an absolute variation. We simulated the score for lenses of increasingly high resolution and saw that it rose exponentially, so we take the square root to keep it close to linear.

Initially we thought we’d just find the worst area of variability for each lens, but we realized some lenses have low variation across most of the image plane and then vary dramatically in the last millimeter or two. Using the worst location made these lenses seem worse than lenses that varied a fair amount in the center. So we decided to average the lens’s variation across the entire image plane. To keep the math reasonable, we calculate the number just for the 30 line pairs per mm variance (the green area in the graphs), since that is closest to the Nyquist frequency of 24MP-class full-frame sensors. Not to mention, higher frequencies tend to have massive variation in many lenses, while lower frequencies have less; 30 lp/mm provides a good balance. Since some lenses have more variation in the tangential plane and others in the sagittal, we use the worse of the two image planes to generate the variation number.

Finally, we multiply by a scale factor to put the score on a convenient range.
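Putting those steps together: in the usual case where the same plane has both the higher mean MTF and the higher standard deviation, the code below reduces to a single expression, where MTF30 is that plane’s mean 30 lp/mm MTF and SD30 is its average standard deviation:

VarianceNumber = ScoreScale * sqrt( (0.125 * MTF30^2 / SD30) * (1 - MTF30/(0.25*ScoreScale)) ), with ScoreScale = 9.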

For those who read code more easily than words, here’s the exact Matlab code we use:

% Mean MTF at 30 lp/mm across all copies, tangential and sagittal
T3Mean = mean(MultiCopyTan30);
S3Mean = mean(MultiCopySag30);
% Average, across the field, of the per-point standard deviations
Tan30SD_Average = mean(MultiCopySDTan30);
Sag30SD_Average = mean(MultiCopySDSag30);
ScoreScale = 9;
% Target variation: 12.5% of the higher of the two mean MTFs
if T3Mean > S3Mean
    TarNum = 0.125*T3Mean;
else
    TarNum = 0.125*S3Mean;
end
% Score whichever plane has the larger average SD (the worse plane)
if Tan30SD_Average > Sag30SD_Average
    ScoreTarget = TarNum*T3Mean;
    VarianceScore = ScoreTarget/Tan30SD_Average;
    % Adjust so high-MTF lenses aren't penalized for having more room to vary
    MTFAdjustment = 1 - (T3Mean/(0.25*ScoreScale));
    % Square root keeps the score roughly linear across lenses
    VarianceScore = sqrt(VarianceScore*MTFAdjustment);
else
    ScoreTarget = TarNum*S3Mean;
    VarianceScore = ScoreTarget/Sag30SD_Average;
    MTFAdjustment = 1 - (S3Mean/(0.25*ScoreScale));
    VarianceScore = sqrt(VarianceScore*MTFAdjustment);
end
% Scale to the final variation number
VarianceNumber = VarianceScore*ScoreScale;
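To make the scale concrete, here’s a quick worked example with invented numbers (not measurements of any real lens). Suppose the worse plane has a mean 30 lp/mm MTF of 0.6 and an average standard deviation of 0.06:

TarNum = 0.125 * 0.6 = 0.075
ScoreTarget = 0.075 * 0.6 = 0.045
VarianceScore = 0.045 / 0.06 = 0.75
MTFAdjustment = 1 - 0.6/(0.25*9) = 0.733
VarianceScore = sqrt(0.75 * 0.733) = 0.742
VarianceNumber = 0.742 * 9 = 6.7

Doubling the standard deviation to 0.12 drops the number to about 4.7; on the scale described below, that’s the difference between a quite consistent lens and a noticeably variable one.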

(</Geek off>)

Here are some basics about the variance number —

  1. A high score means there is little variation between copies. If a lens has a variance number over 7, all copies are pretty similar. If it has a number less than 4, there’s a lot of difference between copies. Most lenses are somewhere in between.
  2. A difference of “0.5” between two lenses seems to agree with our experience testing thousands of lenses. A lens with a variability score of 4 is noticeably more variable than a lens scoring 5 and, if we check carefully, a bit more variable than one scoring 4.5.
  3. A difference of about 0.3 is mathematically significant between lenses of similar resolution across the frame.
  4. Ten copies of each lens is the most we have the resources to test right now. That’s not enough for rigorous statistical analysis, but it does give us a reasonable idea. In testing 10 copies of nearly 50 different lenses so far, the variation number changes very little between 5 and 10 copies and really doesn’t change much at all after 10 copies. Below is an example of how the variance number changed as we did a run of 15 copies of a lens (and, after the caption, a sketch of how you could reproduce the check).
How the variance number changed as we tested more copies of a given lens. For most lenses, the number was pretty accurate by 5 copies and changed by only 0.1 or so as more copies were added to the average. 
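If you want to reproduce that convergence check, the idea is simply to re-run the scoring on the first n copies as n grows. A minimal Matlab sketch, assuming a hypothetical helper varianceNumber() that wraps the scoring code above:

% tan30/sag30 are numCopies x 20 arrays of 30 lp/mm readings (hypothetical).
numCopies = size(tan30, 1);
scores = nan(1, numCopies);
for n = 2:numCopies                  % need at least 2 copies for an SD
    scores(n) = varianceNumber(tan30(1:n, :), sag30(1:n, :));
end
plot(2:numCopies, scores(2:end), '-o');
xlabel('Copies tested'); ylabel('Variance number');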

Some Example Results

The main purpose of this post is to explain what we’re doing, but I wanted to include an example just to show you what to expect. Here are the results for all of the 24mm f/1.4 lenses you can currently buy for an EF or F mount camera.

First, let’s look at the MTF graphs for these lenses. I won’t make any major comments about the MTF of the various lenses, other than to say the Sigma is slightly the best and the Rokinon is much worse than the others.


Now let’s look at the copy-to-copy variation for the same four lenses. The graphs below also include the variation number for each lens, in bold type at the bottom.


Just looking at the variation number, the Canon 24mm f/1.4L lens has less copy-to-copy variation than the other 24mm f/1.4 lenses. The Rokinon has the most variation.

The Nikon and Sigma lenses illustrate an interesting point. Looking at the graphs, the Sigma clearly has more variation, but its variation number is only slightly different from the Nikon’s. That’s because the average resolution of the Sigma at 30 lp/mm is also quite a bit higher, and the formula we use takes that into account. If you look at the green variation areas, you can see that the weaker Sigma copies will still be as good as the better Nikon copies. But this is a good example of how the number, while simpler to look at, doesn’t give the whole picture.

The graphs show something else that is more important than the simple difference in variation number. The Sigma lens tends to vary much more in the center of the image (the left side of the graph), and the variation includes the low-frequency 10 and 20 line pairs per mm areas (black and red). The Rokinon varies most at the edges and corners (the right side of the graph). In practical terms, a photographer carefully comparing several copies of the Sigma would be more likely to notice a slight difference in overall sharpness between them. The same person doing careful testing on several copies of the Rokinon would probably find each lens has one or two soft corners.

Attention Fanboys: Don’t use this one lens example and start making claims about this brand or that brand. We’ll be showing you in future posts that at other focal lengths things are very different. Canon L lenses don’t always have the least amount of copy-to-copy variation. Sigma Art lenses at other focal lengths do quite a bit better than this. We specifically chose 24mm f/1.4 lenses for this example because they are complicated and very difficult to assemble consistently.

And just for a teaser of things to come, I’ll give you one graph that I think you’ll find interesting, not because it’s surprising, but because it verifies something most of us already know. The graph below is simply a chart of the variation numbers of many lenses, sorted by focal length. The lens names are removed because I’m not going to start fanboy wars without giving more complete information; that will have to wait a week or two because I’ll be out of town next week. But the graph does show that wider-angle lenses tend to have more copy-to-copy variation (lower variation numbers), while longer focal lengths, up to 100mm, tend to have less. At most focal lengths, though, there are some lenses with little copy-to-copy variation and some with a lot.


What Are We Going to Do with This?

Fairly soon, we will have this testing done for all the wide-angle and standard-range prime lenses we carry and can test. (It will be a while before we can test Sony E-mount lenses; we have to make some modifications to our optical bench because of Sony’s electromagnetic focusing.) By the end of August, we expect to have somewhere north of 75 different models tested and scored. This will be useful when you’re considering purchasing a given lens and want to know how different your copy is likely to be from the one you read the review of. But I think there will be some interesting general questions, too.

  • Do some brands have more variation than other brands?
  • Do more expensive lenses really have less variance than less expensive ones?
  • Do lenses designed 20 years ago have more variance than newer lenses? Or do newer, more complex designs have more variance?
  • Do lenses with image stabilization have more variance than lenses that don’t?

Before you start guessing in the comments, I should tell you we’ve completed enough testing that I’ve got pretty good ideas of what these answers will be. And no, I’m not going to share until we have all the data collected and tabulated. But we’ll certainly have that done in a couple of weeks.


Roger Cicala, Aaron Closz, and Brandon Dube

Lensrentals.com

June 2015

A Request:

Please, please don’t send me a thousand emails asking about this lens or that. This project is moving as fast as I can move it. But I have to ‘borrow’ a $200,000 machine for hours to days to test each lens, I have a busy repair department to run, and I’m trying to not write every single weekend. This blog is my hobby, not my livelihood, and I can’t drop everything to test your favorite lens tomorrow.

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: medium format, a Pentax K1, or a Sony RX1R.

  • Tuco

    You’re threatening the tedious, age-old “Tuco at the Gun Shop*” process. (https://www.youtube.com/watch?v=meP_Ufwj-FY)

    You should be able to show real-world performance and manufacturing consistency, hopefully leading/pushing vendors to respond with better products. Good stuff. This will be more useful than web reviews and DxO.

    But, don’t think for a moment that it can replace Eli Wallach.

  • Roger Cicala

    John D, zooms have much greater variance, although the slower apertures mask some of it a bit. But you can’t overcome the complexity of zoom groups and the increased movement of various lens elements.

  • Roger Cicala

    Derek, we buy them in smaller batches (5-20 copies) throughout the year. And we’ve definitely seen the effects you talk about. There was a batch of 300mm f/4 IS lenses where the AF motor failed on every one, another batch of 17-55s that had IS units go out, etc.

  • Roger Cicala

    Ron, that’s something I plan to set up through OlafOptical. It will have to wait until fall, when the busy summer season is over, but I fully expect it, for prime lenses at least, this year.

  • Aaron

    For a 98% confidence interval, shouldn’t it be about 2.8 SD above and below the mean? Using R:
    > qt(0.01, 10)
    [1] -2.763769

  • Dan Deakin

    Are you going to test the variance on lenses as they shipped from the manufacturer (perhaps more relevant for most consumers), or after they’ve been checked/tuned by your staff?

  • John D

    Thanks Roger. Do zooms and primes have a similar variance range?

  • Tony

    It’s a pity that testing long focus lenses is impractical because the consistency of “Phase-Fresnel” lenses would be of particular interest.

    Your test of one example was much worse than Nikon’s theoretical MTF results would lead one to expect, and it would be good to know if this is a manufacturing tolerance issue.

  • derek

    Roger, one thought

    Do you buy all your lenses in one big batch or several smaller batches over time?

    I only ask because, with silicon production, batch-to-batch variation can be as much as unit-to-unit variation within a batch. I once had a batch so bad that we only got about 20% yield, while most batches got over 80% final yield.

    either way, it’s all good work

  • AJ

    Excellent, thank you Roger for the current blog and in anticipation of future updates.
    Even more reason for LensRentals to be on the top of the list as a ‘must read’ when I’m considering the purchase of a new lens.
    Nice to have an alternative to those who refer to the English as “Silly bed-wetting types” 🙂

  • Ron

    I don’t really have a problem with the scaled MTF values since the scale is the same for all lenses. I find I don’t even look at the numbers, and it helps me quickly eyeball between lenses to see how each generally performs at a particular lp/mm measurement.

    My question is: when will we be able to send in our own ‘suspect’ lenses, have them measured for comparison against your averages, with the option to bring them (hopefully) back into the normal range? 🙂 Perhaps the servicing aspect may not be possible for all lenses due to how they’re designed, assembled, how complex they are, etc. But at the least, just being able to get a measurement would give an indication of where the lens stands. A ‘testing certificate’ could also be beneficial on the second-hand market for assuring a potential buyer that a lens truly is a good one, or at least within typical variance.

  • Frank Kolwicz

    Roger,

    This kind of reporting makes me feel sad that I don’t have more stuff I want to rent!

  • Goran

    Ingenious! I’ll be following this closely. Very very interesting. Can’t wait to find out what lenses are hidden in that 35mm column, especially that yellow dot and if it holds its ground. Thank you so much for putting this together!

  • Roger Cicala

    JGro,

    The dots were random color assignments, I’m afraid, although obviously some lenses can be identified. We will probably treat zooms as three primes, measured at each end and in the center. Until we gather that data I’m not sure how we’ll present it. I suspect we’ll find variation differs at different locations, so it may require 3 graphs per zoom, or we may come up with something more elegant. But we have to gather data first to figure that out.

    150-600s are going to be back-burnered, I’m afraid. They have to be done with Imatest, with a teardown and setup between each lens; basically 4 test runs is a full day, and just a 600mm comparison will take most of a week. It will probably have to wait until after busy season.

    Roger

  • I don’t want to make you lose time reading the same thing in a hundred posts, so I’ll summarize: a thousand thanks. You’re contributing to improving the world, directly and indirectly.

    A little while ago I was wondering about something related to sample variation and thinking I’d like to ask you one day, so this is perfect timing:

    Are aspects like focus shift and geometric distortion subject to sample variation too? I mean, in a perceptible amount, as with sharpness variation?

    Thanks again

  • JGro

    I can only chime in: Roger, it’s really amazing what you and your team are doing here.
    It feels a bit like you just announced lens nerd Christmas. 🙂

    But not too unlike a kid a few days before Christmas, the lens nerd in me couldn’t help but look more closely at your last “teaser” chart. We have the data for the four 24mm lenses, of which the Canon was best with 6.3. So that is the red dot.
    I first thought that this would make “reddish” dots Canon, but then this would mean one Canon EF 35mm (well, which one?) is the worst of the bunch, and another the best (if the orange at ~6.6 is still reddish enough to be Canon, that is…). Naaah…
    Light blue would be Rokinon (where the 14mm 2.8 is then about as bad as the 24mm), cyan would be Zeiss (21mm and 28mm). Anyway, I couldn’t match any other dot to Sigma (yellowish at 24mm with a score of 4.9), and I am not so much interested in Nikon, so that is where I stopped.

    What I am interested to know: do you already have an idea of how you’ll represent the copy variation of zooms? You could just do one more step of averaging, but that would say very little (are bad copies bad across the board, or just bad on one end?)

    And since we are already talking about Christmas: Is there any chance we might see a comparison of the three 150-600mm zooms any time soon? Maybe compared to the (80/100)-400 of the system manufacturers?

  • Tim

    P.S. On rereading, I realize that the rescaled MTF plots aren’t rescaled linearly. This is indeed confusing! However, I think having the correct units on the rescaled MTF plots would be beneficial to you (and us), without needing to consider a conversion factor. As Jim Maynard below suggested, perhaps a semilogy plot or a similar variation would work. If you continue to use the same custom scaling, you could alternatively update the tick mark labels to have the correct values.

  • Roger Cicala

    Alan, the design is done, we’re doing some fine-tuning on the mechanicals, and will be prototyping fairly soon (fairly soon means a couple of months).

  • Roger Cicala

    Thanks Tim,
    I think you’re right – changing the term to ‘consistency score’ would be more intuitive. We’ll probably do that.
    Thanks,

    Roger

  • Tim

    First, this is a fantastic project so thank you Roger, Aaron, Brandon, and to Lensrentals.com in general!

    Second, a couple of minor suggestions:

    1) For the rescaled MTF plots, which double the height on the y-axis, it seems unnecessary and confusing to rescale the y-axis and tick marks to instead go from 0 to 2 in 0.1 increments. Why not scale them from 0 to 1, using 0.05 tick marks, which will be visually the same as your double height plots? This has the additional benefit that the tick mark labels will be in the correct units, with no conversion necessary.

    2) One odd thing about your “variance” score is that higher variance scores indicate that there is less observed variance in samples of any given model. This is a little unintuitive. You might consider renaming “variance” to “consistency” instead (or something like that). Alternatively, you could mathematically reverse the “variance” score so that higher numbers indicate higher observed variance. Just don’t do both simultaneously. 😉 One way you might try to reverse the score is by inverting it; this will also give you a different scaling and you may not need the square root anymore or you could adjust the power you use, such as 1/x^2.

    Again though, thanks for undertaking this project and sharing with us all!

  • Siegfried

    Oh, I just graduated from the geek lvl 2.
    Weeeeeah!..

    P.S.
    Roger,
    don’t you think it’s worth decreasing the benchmark score’s resolution? I mean, if there’s no significant (read: practical) difference between a lens scoring 5.1 and a lens scoring 5.2, then those decimals are insignificant and should be omitted. You said that ‘a difference of about 0.3 is mathematically significant’ – I’m not sure I fully get it (see above: I’m just lvl 3, sorry), but what about rounding to the nearest whole or 1/2 number to keep it more practical? “A four-star lens” or “a 5 1/2 star lens” sounds easy yet informative.
    Another side note: it might be worth calculating the variance score separately for a given lens model. I think it’s rather illustrative when comparing different lens designs and manufacturers’ QC: as in the 24mm example given above, some folks might want to show gratitude to the N brand (I see it as grey-violet in the last graph, right?) over the S brand just to disrespect S’s quality control, though most of the S 24mm lenses are going to out-resolve most of the N 24mm lenses.

  • Alan B

    Roger, I think you guys might have reached a tipping point where you have better numbers about the lens makers’ industrial processes than they have themselves.

    Speaking of lens makers, how’s work on that 4.9mm fisheye coming along?

  • derek

    Cracking project; I’ll look forward to all future posts on this.

    Having played statistics for silicon production I’m with you all the way on the description, and agree that for most people something like average-minus-one-or-two-sigma performance is what we’re interested in, i.e. what we can guarantee to get out of whatever lens.

    It would be interesting to see how cheap lenses compare to top of the line ones… i.e. are crop kit lenses really all that bad?

  • MayaTlab

    Thanks a zillion times for this !

  • Roger Cicala

    Rick, there are a lot of factors. I started to discuss them, but this article was already so long. Complexity of design, use of double-sided aspheric elements, image stabilization (especially older versions), ‘simple stack’ (no adjustable elements) designs, and a host of other things go into it. We’ll go into details as we look at more lenses.

    Roger

  • Roger, this is an amazing project! Thank you for doing it!

  • Rick

    This should get VERY interesting… can’t wait. But, Roger, at some point I’d be interested in your thoughts on where lens variability comes from. I imagine it could come from an inherently poor mechanical design that is difficult to adjust, from a bad optical design that doesn’t have a clear “sweet spot” (e.g., your example of either a sharp center or sharp edges, not both), from poor or inconsistent factory calibration/adjustment, from poor or inconsistent after-factory calibration/adjustment, or ???

    I imagine that most of Lensrentals’ lenses have been adjusted by your own service department at some time. If so, it seems to me that your measurements will be independent of the quality of a manufacturer’s own lens calibration/QA, and they might not give us any sense of how paranoid we should be about the quality of a lens we purchase.

  • Jim Maynard

    Thanks for making this sort of detailed honest analysis available. It may help encourage manufacturers to raise their standards, but then again I am the eternal optimist. Perhaps another plotting scheme might help in the display of lens variation without having to add an (arbitrary) value to each test variable. For example, semi log plots are of value in this regard in some cases.

  • Jay

    Very interesting. I appreciate that you have a good hobby. Thank you and very much look forward to seeing the data and results in time.

    Jay

  • Jeff Forbes

    Wow! Even though these results have little to no effect on me personally, this is fantastic and interesting reading.

    Keep it up. There’s literally nobody else doing stuff like this. And while it might cost you some money in opportunity cost and time on equipment, seeing your work inspires confidence in your customers too.
