
Geeks Go Wild: Data Processing Contest Suggestions

Published June 18, 2014

A couple of years ago I gave a talk on the history of lens design at the Carnegie-Mellon Robotics Institute. The faculty members were kind enough to spend the day showing me some of their research on computer-enhanced imaging. I’m a fairly bright guy with a doctorate of my own, but I don’t mind telling you that by the end of that day I was thoroughly intimidated and completely aware of my own limitations.

I’d love to tell you I gave a brilliant and entertaining talk that evening, but there were a lot of witnesses, so I’d better not lie that openly. I think only the fact that they serve cookies at the end of the talk kept most of the audience in place until I finished. I do remember, though, looking up at that room full of brilliant scientists and starting my talk with, “It somehow seems wrong that the guy with the lowest IQ in the room is the one giving the lecture.”

Two weeks ago I asked some of the more computer-literate people who read my blog to help me handle all the data our new optical bench generates. Over 70 people asked for the data sets, and 40 of those sent their ideas back to me. After spending the last five days poring over all of those suggestions, I feel just like I did that day at Carnegie-Mellon. I’m thoroughly intimidated and wondering why the participant with the lowest IQ is the one writing the blog post.

Mostly, though, I’m left with an incredible feeling of Internet camaraderie. Sure, there were some prizes offered, but the amount of time dozens of people spent preparing contributions and sharing ideas dwarfed the insignificant prizes I offered. Several people sent 20+ pages of documentation along with their methods. (I don’t mind admitting I had to get out my college statistics books to help me translate some of these.)

Some submissions just suggested methods to display data for blog reports. Others focused on methods for detecting decentered or otherwise inadequate lenses within a batch. I’m going to show some of the data displays, because that’s the part I want reader input on. Let me know which methods you think present the most information most clearly.

I’ll also mention several of the submissions that went way past just graphically representing the data (although most included that, too). I’ll mention these at the end of the article when I discuss the Medal Awards. Don’t take my leaving them to the end to mean I wasn’t overwhelmed by them. Honestly, several of them completely changed my thinking about what the best ways to detect bad lenses are, and what the most important data to share with you is.

But really, I’d like to give everyone who sent in a contribution something, because every single one helped me learn something or clarified ideas for me. It also reminded me why I do this stuff — because deep under all of the Google Adwords, the best part of the Internet lives — the part where people freely share their knowledge to help other people. Very cool things come out of that part of the Internet, and that’s the part I want to hang out in.

I also want to be clear that while those submitting gave me permission to reproduce images of their suggestions here, the images and intellectual property rights of their contributions remain theirs. You need their permission, not mine, to reproduce their work.

What I’ve Already Learned

A couple of general suggestions have been made by several people, and make so much sense that we’ve already adopted them.

From now on we’ll test every lens at 4 points of rotation (0, 45, 90, and 135 degrees). This will give us an overall image of the lens that includes all 4 corners as mounted on the camera.

Displaying 4 separate line pair/mm readings makes the graphs too crowded, so we’ll probably use 3 going forward. I’m still undecided whether that should be 10, 20, and 30 or 10, 20, and 40 line pairs/mm.

Displaying MTF50 data, or more likely, frequency response graphs, is very useful and needs to be included along with MTF curves.

We knew that each lens’s asymmetry as we rotate around its axis is a very useful way to detect bad lenses. Some contributors found that looking at the average asymmetry of all the copies of a certain lens is a good way to compare sample variation between two groups.

Variation of astigmatism, both between lenses of the same type and between groups of different types, is also a worthwhile measurement to report.
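For the numerically inclined, here’s a minimal sketch of how those last two measurements might be computed; the array layout and function names are illustrative assumptions, not our actual code:

```python
import numpy as np

def rotational_asymmetry(mtf):
    """mtf[rotation, field_position]: readings for one lens at one frequency,
    measured at 0, 45, 90, and 135 degrees of rotation. Returns the spread
    between the best and worst rotation, averaged over the field."""
    per_rotation = mtf.mean(axis=1)     # average each rotation over the field
    return per_rotation.max() - per_rotation.min()

def mean_astigmatism(sagittal, tangential):
    """Mean absolute sagittal-tangential gap over the whole field."""
    return np.abs(sagittal - tangential).mean()
```

Comparing sample variation between two lens models is then just a matter of comparing the average (and spread) of these per-copy numbers for each group.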

Outside of that, nothing is written in stone, and I look forward to your input about the different ways of displaying things. You don’t have to limit your comments to choosing just one thing. We want the best possible end point, so taking the way one graph does this and combining it with the way another does that is fine.

Several people found similar solutions separately, so where possible I’ve tried to group the similar entries. I apologize for not showing all the similar graphs, but that would have made this so long I’m afraid we would have lost all the readers. I can’t give you cookies for staying until the end of the article, so I’ve tried to keep it brief.

MTF Spread Comparisons

These graphs show the range or standard deviations of all lenses of that type, while comparing the two different types of lenses. Obviously they aren’t all completely labeled, etc., but they all give you a clear idea of how the data would look. Remember, the data is for just 5 Canon 35mm f/1.4 and 5 Sigma 35mm f/1.4 Art lenses, and I purposely included a mildly decentered copy among the Sigma lenses. Please don’t consider these graphs anything other than a demonstration — they are not a test of the lenses, just of ways of displaying data and detecting bad copies.

Area Graphs

Several people suggested variations using a line to show the mean value and an area showing the range of all values. I won’t show all their graphs today, but here are some representative ones.

Curran Muhlberger’s version separates tangential and sagittal readings into two graphs, comparing the two lenses in each graph.

 

Jesse Huebsch placed sagittal and tangential for each lens on one graph and then placed the two groups side-by-side to compare. (Only the 10 lp/mm curves have range areas. The darker area is +/- 1 SD, the lighter area is the absolute range.)

 

 

Winston Chang, Shea Hagstrom, and Jerry all suggested similar graphs.
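Plots like these are straightforward to generate; here’s a minimal matplotlib sketch of the mean-line-plus-range-band idea, using made-up placeholder data rather than our real measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
field = np.linspace(0, 20, 41)                    # mm from image center
# One MTF-vs-field curve per copy at 10 lp/mm (fake data for illustration)
curves = 0.80 - 0.015 * field + 0.03 * rng.standard_normal((5, field.size))

plt.fill_between(field, curves.min(axis=0), curves.max(axis=0),
                 alpha=0.3, label="range of all copies")
plt.plot(field, curves.mean(axis=0), label="mean of all copies")
plt.xlabel("distance from image center (mm)")
plt.ylabel("MTF at 10 lp/mm")
plt.legend()
plt.show()
```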

Maintaining Original Data

Some people preferred keeping as much of the original data visible as possible.

Andy Ribble suggests simply overlaying all the samples, which requires separating out the different line pair/mm readings to make things visible. It certainly gives an intuitive look at how much overlap there is (or is not) between lenses.

Error Bar Graphs

Several people preferred using error bars or range bars to show sample variation.

I-Liang Siu suggests a similar separation, but using error bars rather than printing each curve the way Andy suggested. For this small sample size the error bars are large, of course, especially since I included a bad lens in the Sigma group. But it provides a detailed comparison for two lenses.

 

William Ries suggested sticking with a plain but clear bar graph. Error bars would be easy to add, of course.

 

Aaron Baff made a complete app that lets you select any of numerous parameters, graphing them with error bars. While it’s set up in this view to compare a specific lens (the lines) to the average range (the error bars), it would work very well to compare averages between two types of lenses.
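The basic error-bar comparison reduces to a few matplotlib calls; a quick sketch, again with placeholder data standing in for the real readings:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
field = np.linspace(0, 20, 11)                    # mm from image center
canon = 0.80 - 0.015 * field + 0.02 * rng.standard_normal((5, field.size))
sigma = 0.85 - 0.020 * field + 0.04 * rng.standard_normal((5, field.size))

for data, label in [(canon, "Canon 35mm f/1.4"), (sigma, "Sigma 35mm f/1.4 Art")]:
    plt.errorbar(field, data.mean(axis=0), yerr=data.std(axis=0),
                 capsize=3, label=label)          # bars show +/- 1 SD
plt.xlabel("distance from image center (mm)")
plt.ylabel("MTF at 10 lp/mm")
plt.legend()
plt.show()
```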

 

Separate Astigmatism Readings

Several people suggested that astigmatism be made a separate graph, with the MTF graph showing either the average, or the better, of sagittal and tangential readings.

I-Liang Siu uses an astigmatism graph as a complement to his MTF graph.

Lasse Beyer has a similar concept, but using range lines rather than error bars.

Shea Hagstrom’s entry concentrated on detecting bad lenses, but the astigmatism graphs he used for that purpose might be useful for data presentation.

Winston Chang’s contribution also used astigmatism for bad sample detection, presenting each lens as an area graph of astigmatism. I’m showing his graphs for individual copies of lenses because I think it’s impressive to see how the different copies vary in the amount of astigmatism, but it would be a simple matter to make a similar graph of average astigmatism.

Sami Tammilehto brought MTF, astigmatism, and asymmetry through rotation (how the lens differs at 0, 45, and 90 degrees of rotation) into one set of graphs. While rotational asymmetry is one of the ways we detect truly bad lenses, it is also a good way to demonstrate sample variation. In this graph, the darker hues show 10 lp/mm, lighter ones 20, and the lightest ones 40, which would be useful if there was significant overlap.


Polar and 3-D Presentations

Ben Meyer had several different suggestions, but among them was creating a 3-D polar map of MTF. (Note his map is incomplete because we only tested 3 quadrants rather than 4; the fault is mine, not his.) This one is stylized, but you get the idea.

 

 

 

Like Ben’s, Chad Rockney’s entry had a lot more to it than just data presentation, but he worked up a slick program that gives a 3-D polar presentation with a slider that lets you choose what frequency to display. Chad submitted a very complex set of program options that include ways to compare lenses. In the program, you’d click which frequency you want to display, but this animated gif shows how that would work. You can also rotate the box to look at the graph from different angles.

Daniel Wilson uses polar graphs for both MTF and astigmatism. It’s like looking at the front of the lens, which makes it very intuitive.

He also made a nice polar graph comparing the Canon and Sigma lenses which I think is unique and useful.
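A basic polar MTF map of this kind is easy to rough out in matplotlib; a minimal sketch with invented data (the cos(2θ) term just fakes some astigmatism-like lobes for display):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.deg2rad(np.arange(0, 361, 45))         # rotation angles around the lens
radius = np.linspace(0, 20, 21)                   # mm from image center
T, R = np.meshgrid(theta, radius)
mtf = 0.80 - 0.015 * R + 0.05 * np.cos(2 * T)     # placeholder MTF values

ax = plt.subplot(projection="polar")
mesh = ax.pcolormesh(T, R, mtf, shading="auto")
plt.colorbar(mesh, label="MTF at 10 lp/mm")
ax.set_title("MTF across the image circle (placeholder data)")
plt.show()
```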

Vianney Tran made a superb app to display the data as a rotatable, selectable 3-D graph. I’ve posted a screen clip, but it loses a lot in the translation and I don’t want to link directly to his website and cause a bandwidth meltdown for him. This screen grab compares Canon and Sigma 35s at 10 lp/mm. 

Walter Freeman made an app that creates 3-D wireframes. It’s geared toward detecting bad lenses and the example I used is doing just that – showing the bad copy of the Sigma 35mm compared to the average of all copies.

Subhrashis Niyogi came up with one of the coolest looking entries, presenting things in a way I’d never thought of. His Christmas tree branches represent the MTF at 10, 20, 30, and 40 lp/mm for each lens, with each branch showing one rotation. How low the branches bend represents the average MTF. The darker the color of the branch, the more astigmatism is present. It’s beautiful and brilliant.

 

His application makes them tiltable, rotatable, the displayed lp/mm can be selected, and multiple copies can be displayed at once to pick up outliers.

 

Rahul Mohapatra and Aaron Adalja put together a complete package for testing lenses to detect outliers, but also included a very slick 3-D graph for averages.

 

Still More Different Ways of Presenting MTF Data

These graphs are really different, but that makes them interesting. I’ll let you guys decide whether they also present the data more effectively.

Brandon Dube went with a graph that shows the difference between lenses. In this example, all 5 Sigma lenses are plotted against the mean for all Canon lenses (represented as “0” on the horizontal axis) at each location from center to edge.  This would have to be a supplemental graph, but it does a nice job of clearly saying “how much better” one lens is than another.
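Difference plots like Brandon’s are simple to construct: subtract the baseline average from each copy’s curve and plot what’s left. A minimal sketch with placeholder data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
field = np.linspace(0, 20, 11)                    # mm from image center
canon = 0.80 - 0.015 * field + 0.02 * rng.standard_normal((5, field.size))
sigma = 0.85 - 0.020 * field + 0.04 * rng.standard_normal((5, field.size))

baseline = canon.mean(axis=0)                     # plotted as the "0" line
for i, copy in enumerate(sigma):
    plt.plot(field, copy - baseline, label=f"Sigma copy {i + 1}")
plt.axhline(0, color="black", lw=1)               # the Canon mean
plt.xlabel("distance from image center (mm)")
plt.ylabel("MTF difference vs. Canon mean")
plt.legend()
plt.show()
```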

 

Tony Arnerich came up with something completely new to me. His graph presents the various tested points as a series of ovals on a line (each line consists of the measurements at one rotation point). More oval means more astigmatism and more color means lower MTF readings.

 

William Ries suggested a “heat map” similar to the old PopPhoto methods, giving actual numbers in a table, but using color to signify where MTF falls off.
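A numbers-plus-color table like that is only a few lines of matplotlib; a sketch with invented values:

```python
import numpy as np
import matplotlib.pyplot as plt

freqs = [10, 20, 40]                              # lp/mm
positions = ["center", "mid-frame", "edge"]
mtf = np.array([[0.92, 0.80, 0.58],               # placeholder MTF values
                [0.85, 0.70, 0.45],
                [0.74, 0.55, 0.30]])

fig, ax = plt.subplots()
ax.imshow(mtf, cmap="RdYlGn", vmin=0, vmax=1)     # color shows where MTF falls off
ax.set_xticks(range(len(freqs)))
ax.set_xticklabels([f"{f} lp/mm" for f in freqs])
ax.set_yticks(range(len(positions)))
ax.set_yticklabels(positions)
for i in range(mtf.shape[0]):                     # print the actual numbers in each cell
    for j in range(mtf.shape[1]):
        ax.text(j, i, f"{mtf[i, j]:.2f}", ha="center", va="center")
plt.show()
```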

 

Bronze Medals

The Bronze Medal is for people who made a suggestion for graphing methods that we will use in the blog. I’m still not sure which methods we’ll finally choose and want reader input, so we may award more Bronze Medals later. But for right now, the following people have made suggestions that I will definitely incorporate in some way, so they are Bronze Medal winners. The official Bronze Medal prize is that we will test two of your lenses on our optical bench and furnish reports; those of you who live outside the U.S. should email me and we’ll figure out some other prize for you, unless you want to send your lenses by international shipping.

Jesse Huebsch (I’ve already used his side-by-side comparison suggestion in my first blog post). Several people made similar suggestions, but Jesse’s was the first I received.

Sami Tammilehto, whose triple graphs of MTF, astigmatism, and asymmetry are amazingly clear and provide a huge amount of information in a very concise manner.

Winston Chang, whose display of astigmatism as an area graph will be incorporated.

Again, Bronze Medal awards aren’t closed. There are several other very interesting contributions and I suspect the comments from readers will help me see things I’ve missed, after which I’ll give more Bronze Medals. Which, of course, aren’t really medals. Each Bronze medalist can send me two lenses to have tested on our optical bench. If they live overseas or don’t have lenses they want tested, we’ll figure out some other way to thank them – so if that fits you, send me an email.

Outlier Lens Analysis

A number of people did some amazing things to detect decentered and outlier lenses. To be blunt, we’ve been doing a pretty good job with this for years, better than most factory service centers. But after getting this input, I can absolutely say we’ll be upping our abilities significantly soon.

Nobody actually won the Platinum Prize, but a number of people came close. So instead, I split the Platinum Prize up so we could award a large number of Gold Medal prizes. Since most of the winners live outside the U.S. and can’t use the $100 Lensrentals credit given for a Gold Medal, I’ll give it in cash (well, check or PayPal, actually).

Gold Medal Winners

Gold Medal winners had to develop a fairly simple way to create logical, easy to understand graphs that demonstrate the variation among copies of each type of lens, and offer an easy way to compare different types of lenses. It turns out there were a lot of paths to Gold, because so many people taught me things I didn’t know, or even things I didn’t know were possible.

Professor Lester Gilbert. His work doesn’t generate any graphs, but the statistical analysis to detect outlier lenses is extremely powerful.

Norbert Warncke’s outlier analysis using Proper Orthogonal Decomposition not only shows a new way to detect outliers, it does a good job of detecting whether someone has transcribed data improperly. (A sketch of the general idea appears after this list.)

The following win both Gold and Bronze medals.

Daniel Wilson’s polar graphs provide a great amount of information in a concise package. Several people used polar graphs, but Daniel’s implementation was really clear and included a full program in R for detecting bad copies.

Rahul Mohapatra and Aaron Adalja, whose freestanding program written in R not only made the cool graph you saw above but also does a powerful analysis for variation and bad copies.

Curran Muhlberger (you saw the output of his Lensplotter web app at the top of the article) programmed a method to overlay individual lens results on the average for all lenses of that type, showing both variation and bad copies.

Chad Rockney wrote a program in Python that displays the graphs shown above; it analyzes lenses against the average of that type and against other types.

Subhrashis Niyogi’s Christmas Tree graphs are amazing. While his Python-Mayavi program doesn’t mathematically detect aberrant lenses, his graphics make them stand out dramatically.
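As promised above, here’s a rough sketch of the Proper Orthogonal Decomposition idea behind Norbert’s entry. This is my own simplified illustration, not his code: stack the per-copy MTF curves, keep the few dominant modes, and flag copies the modes can’t explain:

```python
import numpy as np

def pod_outliers(curves, n_modes=2, threshold=2.5):
    """curves: one row per lens copy, each an MTF curve flattened to a vector.
    Returns indices of copies whose residuals are unusually large."""
    X = curves - curves.mean(axis=0)                       # center the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)       # the POD/SVD step
    recon = U[:, :n_modes] * s[:n_modes] @ Vt[:n_modes]    # dominant modes only
    residual = np.linalg.norm(X - recon, axis=1)           # what the modes miss
    z = (residual - residual.mean()) / residual.std()
    return np.where(z > threshold)[0]                      # the suspect copies
```

A decentered copy, or a transcription error, shows up the same way: as a curve that the shared modes of the group can’t reconstruct.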

Where We Go From Here

First I want to find out what you guys want to see — which types of displays and graphics you find most helpful for blog posts. So I’m looking forward to your input.

The contest was fun and I got more out of it than I ever imagined. I want to emphasize again that the submissions are the sole property of the people who did the work (and they did a lot of work).

I’m heading on vacation for 10 days. (There won’t be any blog posts from the cruise ship, I guarantee you that.) Once I get back and get everyone’s input on what they like, I’ll contact the people who did the work and negotiate to buy their programming and/or hire them to make some modifications.

We’ve already been modifying our data collection procedures (and our optical bench to account for what we’ve learned about sensor stack thickness). Hopefully we’ll be cranking out a lot of new lens tests, complete with better statistics and better graphic presentation within a month or so.

 

Roger Cicala

Lensrentals.com

June, 2014

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: a medium format camera, or a Pentax K1, or a Sony RX1R.

Posted in Lenses and Optics
  • Igor

Sorry if you find my activity annoying, but one more thought about the illustrative presentation. I like the idea of Tony Arnerich since it presents the frame in a way everyone can easily imagine. However, one cannot actually see the degree of distortion or loss of resolution at those points (just more or less). This returned me to my suggestion simply to shoot a grid of LEDs (possibly located at the same points as in Tony’s picture). That would show the cumulative level of coma and astigmatism. To assess the resolution visually, the simplest way is to shoot an appropriate text or grid. One look – and you see which lens is better for you in the respects concerned. If you would like to point out differences between the copies, you could present the corresponding shots. Personally, for me these several shots would be more useful than dozens of measurements and graphs.

  • Igor

> My goal is to present that data in a way that a customer would be able to decide if it would affect them.

> I try to present numbers so a customer will know, generally, if there might or might not be an issue.

That is exactly what I cannot see: how could a customer decide that from pure numbers? The only way for the customer is to rely on the tester’s *blind* opinion that 25% *could matter* in *some cases*. Kill me, but I cannot see any sense in that.

However, I can see the point that in any case the customer should be pre-warned of any possible issue, even if nobody has seen it so far. That is for you to decide (including how to define the threshold). I am just saying that, imho, that is not the most valuable thing to work on.

  • Roger Cicala

    Igor, I think we’re actually agreeing. If a customer can see it, it’s too large. If they can’t see it, it doesn’t matter.

    Certainly there is a point where angular variation is at a place where one customer might find it objectionable, yet another not notice it. Photographic subject, lighting, method of print, aperture of the shot and other things all would count in that area. My goal is to present that data in a way that a customer would be able to decide if it would affect them. One way is to take 1,000 different shots modifying all those variables so that a given customer can look and say “that’s how I shoot”. Unfortunately doing it that way is impractical, at least for me. Instead, I try to present numbers so a customer will know, generally, if there might or might not be an issue.

  • Igor

Roger, just two brief points. I think it is the tester’s job to find an object where the customer could see the difference. If such an object does not exist even in a studio, the difference does not exist either (in a practical sense). Just numbers.

Lenses differ in angular variation of resolution; average (by angle) resolution; flare, contrast, bokeh, weight, focussing speed, handling… add what I forgot. Can it be that for a significant part of your customers the angular variation is the point of choice/satisfaction? Or will you not offer such lenses despite their possible superiority in other respects? What variation would you consider too large if you cannot see it in the shots? Well, it is for you to decide; I cannot comment on it since I do not have such a business.

    Best,
    Igor

  • Roger Cicala

    Igor,

I’m not claiming MTF50 as an overall contrast equivalent. I’m claiming MTF at various frequencies from 10 lp/mm through 80 (or arguably 100) is an excellent measurement of both acutance and contrast. The definition of MTF is simply black % recorded / white % recorded for a given size of pure black / pure white lines. That’s the definition of contrast I use.
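(Put a bit more formally, that informal description corresponds to the textbook modulation ratio at each spatial frequency:

$$\mathrm{MTF}(\nu) = \frac{M_{\text{image}}(\nu)}{M_{\text{object}}(\nu)}, \qquad M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}$$

where the modulation M compares the brightest and darkest intensities recorded across the line pattern.)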

    I certainly don’t claim anyone should be frightened of a 25% variation.

    The problem is how do we ‘show it in a picture’. In some pictures it won’t show up, obviously. In others it will. But the subject matter, background distances, etc. of two different pictures are different. My own focus is on ‘can it ever show up in a picture’ because while 8 of my customers may find the lens superb, if two find that there is a soft corner, they are most unhappy. I have to gear towards making sure those two aren’t disappointed. I don’t ever say it’s necessary for everyone. But it’s what my business demands.

    But it has been an interesting discussion and I do find your point of view valid. For my own personal lenses I take one off the shelf, take pictures, and unless I see a problem I don’t look further. But that won’t work for all of my customers.

    Best,

    Roger

  • Igor

    Roger,

I am no specialist in optics, but I do not think that contrast and MTF are the same. MTF is a measure of blur, but it has little in common with contrast over larger areas, which is affected by stray light from coatings, edges, etc. I think that contrast has more in common with flare. Flare is stray light produced by the lens from a small light source (sun etc.) while full stray light comes from the whole frame area. With some level of stray light, theoretically you may obtain zero MTF50 but MTF40 at 50 lp/mm (sharp lines with 40% contrast).

To be honest, I do not care what the manufacturers know or test. I am simply not sure that a 25% variation makes a visible difference. It is just numbers that do not bring any visual information and are not illustrated with such information. So why should I be frightened? Maybe someone would exclaim “Wow, 25%, I’ll never buy that lens!” but it certainly would not be me. If it was a two times difference, then maybe. Why so? Two points. We know that even for a good lens the resolution may fall by 25% pretty close to the frame centre. So should we consider 80% of the frame area as clearly inferior? Second, I think that the human eye has angular variation in its sensitivity, and thus it may not be so sensitive to such lens variation.
In the same way, I find it unjustified to blame the manufacturers for that difference UNTIL it is shown how much it matters. Why some manufacturers charge extraordinary prices is a much wider issue.

In fact, all of the above is just conversation about what may or may not be. If you insist that 25% is important, I would like to see visual evidence (if required, with expert explanation). In the absence of visual evidence, numerical data should be supported by some scientific or real-life information (e.g. it is a proven fact that a 100 lumen/cm2 light ray is disastrous for human sight; or I know that a weight of 1 ton falling from a height of 3 metres can kill me:).

  • Roger Cicala

    Igor,

    I totally agree that flare, bokeh, and other things are very valuable, often the most valuable things, when choosing a lens. On the other hand contrast, which you mention, is MTF, which is why I find MTF of great importance.

    I would love to agree with you that manufacturers have determined a 25% deviation is not visible. Unfortunately I do too much work behind the scenes to be able to do that. You give them far too much credit. I agree with you in principle, that there are detectable deviations that can be seen in the lab but don’t affect images. But I know first hand that the manufacturers don’t use the ‘detectable in images’ criteria to set a point of acceptable deviation.

  • Igor

I am sure that the lenses you are writing about differ not only in their angular variation and average (by angle) resolution, but also in their flare, contrast, bokeh and other valuable characteristics. So I think that the statistics would be the last thing I would consider. Except if the difference were really great by sight – then I would need to see that difference in nature.

It is even possible that the company that manufactures the lens with 25% deviation at 0/45 degrees already knows that in fact nobody can see that difference. There are whole areas of science (e.g. acoustics) that study human perception of many factors and their combinations in various conditions. That is science (not a QC job), and they have done a tremendous amount of work.

  • Igor

    Roger,

Perhaps I would like to know the information you are writing about, but only as a matter of fact. I cannot imagine how real-life or even studio shots would look with a lens showing a 25% difference in resolution at 0 and 45 degrees. And I cannot decide from the graphs whether I would prefer a lens with a lower resolution and perfect angular characteristics.

Moreover, I doubt that many people would be able to see that angular difference in any shot (except for the resolution chart) even if they were trained for that. And still fewer of them would actually bother to detect that difference. Like any statistics, the data may have some interest, but I am afraid not for use by photographers.

In science, they say that any instrument allowing for more accurate measurements is a potential tool for discovery. And that is absolutely right. However, testing lenses is not science, and any results that are beyond human perception are useless. Again, there are many things well within human perception that still need to be covered.

Maybe I am not right and that difference is clearly seen, in which case your testing is absolutely justified. Unfortunately, it cannot be seen from the graphs. That is why I am asking you to present some illustrative material. BTW, looking at those pictures someone could decide for himself whether to bother about that difference.

  • Roger Cicala

    Igor,

I’m enjoying this discussion because I find it very pertinent. So let me ask you a real life question. I’m testing a lens now where all 15 copies tested vary by 25% or more depending on the angle tested. In other words, the angular cut at 0 degrees (top to bottom in real life) is at least 25% different from one of the other angles tested (45, 90, 135 – the equivalent of side-to-side and both corner-to-corner). Every. Single. Copy.

    A comparable lens that costs about half as much has no copy with variance of greater than 10%. But doesn’t have quite as good MTF numbers (or Imatest numbers if you prefer).

    Is that information you’d like to have before purchasing one of these similar lenses? I would. Which is why I do this kind of thing.

Another lens has very excellent resolution as tested by DxO or Imatest, better than its major competitor – but that’s at about 8 feet testing distance. When tested on the optical bench at infinity, though, the tests reverse. The second lens is much sharper at infinity. Again, I’d like to know this information before I purchase a lens.

    Roger

  • Igor

In principle, you could derive an endless set of data (e.g. how much the deviation at 0 deg differs from that at 45 deg – that is, the deviation of the deviation, and so on). Think about how deep you really need to dig.

After setting the borders, think of the optimization of your experiment. One example I suggested above. Next, what confidence limits would be sufficient for your purposes? You have not only to calculate the deviations (e.g. 25% and 15%) for two batches, but also to prove that these values differ statistically. Now imagine that you calculated the deviations (at the standard 95% confidence level) as 25% plus-minus 5% and 15% plus-minus 3%. The “10%” difference almost disappears in the statistical sense, since model X may have only 20% deviation (5% being your possible error), and model Y may have 18%.
Is that really worth that much work?
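(To make this concrete: the textbook chi-squared interval for a standard deviation estimated from only a handful of copies is very wide. A quick sketch, purely to illustrate the point:)

```python
import numpy as np
from scipy import stats

def sd_confidence_interval(s, n, level=0.95):
    """Confidence interval for a population standard deviation,
    given a sample SD of s computed from n measurements."""
    a = 1 - level
    lo = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - a / 2, n - 1))
    hi = s * np.sqrt((n - 1) / stats.chi2.ppf(a / 2, n - 1))
    return lo, hi

# An SD of 0.25 estimated from 5 copies could plausibly be anywhere
# from about 0.15 to about 0.72.
print(sd_confidence_interval(0.25, 5))
```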

  • Igor

    Roger,

let us suppose (for this discussion only) that model X shows 25% variation, while model Y shows 15% (and possibly costs twice as much). Is that 10% difference THAT important?

Next, do you really need to go through that hell of testing just to calculate the deviation? I guess that in general the deviation at 0 degrees will be close to that at other angles. Or are you trying to find a specific copy that has a particular defect? In that case, you will need to test a VERY large batch to say confidently that, say, 1.5% of it showed this particular behaviour. And — another batch could be totally different in this respect.

And last, it could well happen that the X has no appropriate substitute in terms of the factors much more important to me, in which case I would not consider the statistics at all. I could look at the very basic parameters of my copy myself and decide whether it is OK for me.

    Basically, I am trying to say two things:
1. Such thorough testing is, imho, overkill. Too much additional work for too little additional value. The requirements of strict science and of real life are different.
2. There are many important lens properties that are poorly covered in the literature. Why not turn your attention to them? When choosing glass, I would be more interested to see THOSE results.

  • Roger Cicala

Igor, I agree they do. But I’m not sure I agree that it’s right that they do. For example, I’m testing a new batch of zooms, a large batch. The variation among the good copies is 25%. Do you care if your lens is 25% lower resolution than another copy of the same lens? Especially when one considers that generally that lens can be adjusted to be as good as the others, but isn’t?

    Don’t get me wrong, I’m not saying it makes you a better photographer. I’m simply saying that quality control should be better – but the factories (and they’ve told me this) don’t feel people care. I disagree, but it could be me that’s wrong.

    Roger

  • Igor

In my last post, it should read “BY a factory QC service”.

  • Igor

Roger, I meant that such data COULD be obtained and processed for a factory QC service with some commercial use. Perhaps even they would feel it is an overshoot.

  • Roger Cicala

    Igor,

    I accept all of your points except one: that there is such a thing as a factory quality control service that checks data such as this. With the possible exception of Zeiss and Leica.

  • Igor

Frankly, I do not see much sense in processing that much data, except for a factory QC service. For any photographer, why in Heaven’s name might it be important to know the deviation of MTF50 @ XX lp/mm and @ XX degrees? Who would care about a greater deviation for a certain lens model when choosing glass, and who will measure that MTF in order not to buy a statistically “bad copy”?

If the author’s purpose is to provide advanced users (photographers, not mathematicians) with some information, imho it should be presented in a simpler and more objective way. For example, it might be more useful to see the image of a grid of LEDs rather than graphs describing the coma and astigmatism. That image could be accompanied by an isoline chart showing the astigmatism and coma distribution across the field. Other important things are, for example, lens flare and contrast. Are you going to describe them numerically for every light source position, focal length, aperture, etc.? I guess it would be more interesting to see a systematized set of test images along with some numerical data for certain characteristic points (if you can define such points by clear criteria) and possibly a little of the most characteristic statistics (e.g. average deviation – btw, I guess that a higher average deviation translates to a comparably higher deviation at any specific coordinates). There are some test shots out there on the Internet, but they are too few, hard to compare with each other and often composed far from optimally.

On the other hand, for some custom data analysis any presentation of processed data may not be satisfying. Those interested will need pure non-processed data.

    Sorry if I did not get the idea.

  • Ilya Zakharevich

    @Subhrashis: your image is extremely sexy.

    On the other hand, I thought more about fatigue from looking at hundreds of such images a day than about sexiness (especially knowing that I would not be able to achieve this level of attraction).

    This is why:
    • I choose 2D, not 3D;
    • I weed out as much info as possible;
    • The most attention-grabbing factor (color) is for being out-of-spec.

On the other hand, I do not know – maybe for people who would actually work with these images, sexiness of images is a better weapon against fatigue!

  • Aaron

    I think as an initial output from the measuring process, either the excel sheet (a bit of a pain to read, but I did some complicated Java using Apache POI to read the sheets) or a plain text tab/comma separated format. However for real storage long term including historical data & in order to easily do comparisons, I’d say a traditional RDBMS system would be better. SQLite actually might be ideal, as it’s easy to work into a standard backup practice for those who don’t normally have a sysadmin/engineer who knows how to properly set up the backups.

Let me do a bit of work on it tonight, and I’ll see about coming up with a schema, although I do wonder whether I might eventually need to shard the fact tables by lens ID in order to keep queries fast.

    Assuming 8 (tan/sag, 10, 20, 30, 40 lp/mm) rows per reading, 4 readings per lens (0, 45, 90, 135), 40 rental & returns a year (for a popular lens), 100 lenses in the group, that’s 128,000 rows. 20 popular lens models, that’s 2.5M rows. Something to think about, maybe keep only the last N per Lens ID, with N being something like 10 or 20 full readings, and then a summation row of each for long term, which would lead to 11 or 21 total full reading records per Lens ID.
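(A first pass at such a schema might look something like this; the table and column names are placeholders, and a real design would tune indexes to the actual query patterns:)

```python
import sqlite3

conn = sqlite3.connect("mtf_readings.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS lens (
    lens_id  INTEGER PRIMARY KEY,
    model    TEXT NOT NULL,        -- e.g. 'Sigma 35mm f/1.4 Art'
    serial   TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS reading (
    reading_id  INTEGER PRIMARY KEY,
    lens_id     INTEGER NOT NULL REFERENCES lens(lens_id),
    tested_at   TEXT NOT NULL,     -- ISO 8601 timestamp
    rotation    INTEGER NOT NULL,  -- 0, 45, 90, or 135 degrees
    freq_lpmm   INTEGER NOT NULL,  -- 10, 20, 30, or 40
    orientation TEXT NOT NULL,     -- 'tangential' or 'sagittal'
    field_pos   REAL NOT NULL,     -- mm from image center
    mtf         REAL NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_reading_lens ON reading(lens_id, tested_at);
""")
conn.commit()
```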

  • @SoulNibbler,
    I’d think of switching to R only if it was necessary to view the 3d plots in a web app. I couldn’t find any way to do that in Mayavi. The best I could find was exporting .vtk from mayavi and read and display it using xtk, but that works only for single plot elements and not for the whole scene – too complicated. R on the other hand has a ready setup for that – shiny-rgl .
    Are there any other ways from mayavi to a web app? Of course all this is moot if we stick with raster outputs of an ideal view for web…
    And, thanks for the advice on the formats. I’ve been looking into np.save(), but tab delimited txt also seems good.

  • SoulNibbler

You can pickle numpy arrays or use the savez option. However, I’m not sure it isn’t just easier to have it in a tab-delimited text format; it’s strongly inferior to a database structure, but I like that I can read it using np.genfromtxt(). I’ve been impressed by the results I’ve seen from R here, but I wouldn’t switch unless you need a feature. Numpy + matplotlib + scipy + Mayavi is a scarily effective combo, and Python is still one of the most readable languages that I’ve worked with.
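(For what it’s worth, both round-trips are only a couple of lines in NumPy; a quick sketch:)

```python
import numpy as np

# Any-shaped array: e.g. (lens, rotation, lp/mm step, tan/sag, field angle)
data = np.random.rand(5, 4, 4, 2, 21)

# Binary .npz round-trip: compact, preserves shape and dtype exactly.
np.savez("mtf.npz", mtf=data)
restored = np.load("mtf.npz")["mtf"]

# Tab-delimited round-trip: human-readable, but 2-D only, so flatten first.
np.savetxt("mtf.tsv", data.reshape(data.shape[0], -1), delimiter="\t")
flat = np.genfromtxt("mtf.tsv", delimiter="\t")
assert np.allclose(flat.reshape(data.shape), data)
```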

@Ilya, your fruits-on-a-tree approach reminded me of something I had tried – showing astigmatism as colour on the branches, and standard deviation as spheres on each of the points of measurement, as shown here: http://intangi.bl.ee/LR/fruits.png

    @Tony, that image also probably shows the ideal single view you were talking about, along with cut planes for scale.

    I am also looking into redoing the whole thing in R – that could allow making this as a web app showing interactive 3d models (with the shiny wrapping for rgl).. but I know even less R than python, so this is slow going.

@Aaron, I think the excel format provided here isn’t the most intuitive – I spent much of my time figuring out how to read this into my final data format – a numpy n-dimensional array with lens id, scan angle, lp/mm step, tangential or sagittal, and finally object angle on successive dimensions. However, once made, maybe I can save this array into some universal format (suggestions?) and use this for ingestion for now?

We also need to see what format Roger finally comes up with… that is what we’ll need to ingest. 🙂
