Saturday July 22, 2006

Feedback, It's What's for Dinner

I may be an odd fellow. Whenever I learn something new, I want to know its origins: where did it come from, and how does it make our lives better? As part of this natural pull, I recently bought a 50-year-old Twin Lens Reflex (TLR) camera. It was my cheapest option for entering the world of medium format photography, which promises higher resolution and more striking pictures. I haven't gotten the pictures back from the developer yet, so I can't say whether the claims are true for this camera. I have, however, made a few observations.

As cameras have progressed, more and more feedback has become available to us. In the beginning, cameras were made without meters and you had to go by the “Sunny 16” rule. Without boring you with details, the Sunny 16 rule is a guide for setting your exposure. Many photographers still swear by it and won't bother with modern gadgetry. But as cameras evolved, and the bikini was introduced (the biggest boon to camera sales in history), more and more cameras came equipped with exposure meters. Now you had visual feedback that would tell you what you should already know (the Sunny 16 rule). The only catch is that meters can be fooled. They judge how bright the light is by comparing the average intensity against a medium gray (18%, to be exact). If the scene has too much white or black, the meter gives you erroneous feedback, which is part of the reason some photographers carry a calibrated gray card with them.
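For the curious, the Sunny 16 rule is simple enough to express in a few lines of code. Here is a minimal sketch in Python; the condition-to-aperture table is the version commonly quoted among photographers, and the function and table names are my own:

```python
# Sunny 16 rule: on a bright sunny day, set the aperture to f/16 and the
# shutter speed to the reciprocal of the film's ISO. Dimmer conditions
# open the aperture by one stop each. This mapping is the commonly
# quoted version of the rule, not from any particular manual.
SUNNY_16_APERTURES = {
    "sunny": 16,             # distinct, hard-edged shadows
    "slightly_overcast": 11, # soft shadows
    "overcast": 8,           # barely visible shadows
    "heavy_overcast": 5.6,   # no shadows
    "open_shade": 4,         # open shade or sunset
}

def sunny_16_exposure(iso, condition="sunny"):
    """Return (aperture, shutter_denominator) for the given light.

    Shutter speed is 1/shutter_denominator seconds.
    """
    aperture = SUNNY_16_APERTURES[condition]
    return aperture, iso

f_stop, denom = sunny_16_exposure(100, "sunny")
print(f"f/{f_stop} at 1/{denom}s")  # f/16 at 1/100s
```

With ISO 400 film under overcast skies, the same rule gives f/8 at 1/400s: no meter required, which is exactly why it survived the arrival of one.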

Things progressed, and we also got feedback on whether we were in focus or not (within a certain margin of error, of course; we are dealing with machines). Now, in the digital age, we have the ultimate feedback: the captured picture itself. More and more gadgetry has been thrown at the problem of making beautiful pictures. The gadgets can't make beautiful pictures, but they can offer possible reasons why one didn't come out the way you wanted. And that's the topic for today: feedback.

I have to say, with a good feedback system in a framework or application, it is much easier to diagnose what might be happening. I've worked on quite a few systems with feedback that ranged from console messages to live monitoring of a running system. In each system, the feedback you got could point you in the wrong direction and send you on a wild goose chase. That can be very frustrating, particularly when people want some sort of answer and you are no closer to having one than when you started. The bottom line is that your feedback system can be fooled. Even if your feedback system includes the final result (such as a digital picture), it can't tell you why the technically correct picture didn't impress you. It can't tell you whether one composition is stronger than another. Some things are left for the person to answer.

So if your monitoring system can be fooled, how can you trust it? What do you do if you can't trust your instruments? Pilots have to answer that question. An airplane has very sophisticated instruments, and a skilled pilot can fly a plane successfully without looking through any of the windows. The pilot can also tell if an instrument is out of whack, because the instruments have their own instruments and warning lights. In the event of a complete failure of the monitoring system, the pilot has to actually pilot the plane, making decisions based on what they can see from the cockpit. With software, we might feel like we are flying the Wright brothers' plane, or like we are flying the most advanced stealth bomber. Either way, we need to know a certain amount of information at a glance.

The problem for us software engineers is deciding which instruments actually help. How can we tell what is going on in a running system? What is the most telling metric? Is it memory consumption? It might tell us if there is a memory leak somewhere. What about average time to process a request? That might only be useful in development. In my experience, the best metrics are the ones with a definite cause and effect relationship: if the metric looks like X, then I need to do Y to fix the system. If I have a compilation error, I need to fix the syntax of my program. If a unit test fails, I need to fix my program. If I get a NullPointerException... no, wait, you don't still get those, do you? :)

Even when you have a set of clean, distinct cause-and-effect meters built into your system, things can still go haywire. It's at those times that you have to step back and pull from old-school sensibilities. I may not be old school in age, but I am in spirit. Of course, you go down the list of things that could possibly be an issue: the amount of data, the amount of traffic, possible sequences of events. Eventually you have to go outside the realm of common issues. At one military installation, the computers in the control tower would all go down at the same time twice a day. It wasn't until an embedded programmer noticed the radar making a full sweep, frying all the electronics in its path at the exact moment the computers went down, that they found the problem. Yes, the answer can be that bizarre.

I have four cameras now: a digital, two 35mm film cameras, and an old medium format camera. Of them all, I am most comfortable with my high-end 35mm camera. It has just the sensors I need, and it is very accurate most of the time. Out of the several hundred pictures I have taken with it, only a small percentage have been disappointing. I have a lot of trust in what it can do, and because of that I have less trust in my starter 35mm camera. In fact, I have even less trust in my digital point and shoot, since most of its pictures are overexposed and the focus isn't as sharp as I'd like. I've been told that my vintage medium format camera can give me better results, but until I have some kind of real feedback I won't know for sure. It does encourage you to think more about what you are doing before you do it.

At the end of the day, the thing that helps you understand and diagnose issues more than anything else is shortening the feedback time. Once you have quick feedback, you can decide how helpful it is. The longer it takes to try, fail, and diagnose, the more frustrated you become. Sometimes going without a modern convenience can really improve the system. Sometimes you can retrofit the modern convenience into the older design. No matter how quick your feedback system is, if you can't trust what it tells you, then it is useless. You have to know what can fool the system and how to compensate for it.

With cameras it is relatively easy, because everyone has the same basic answers regardless of whether they are using Canon, Nikon, Kodak, Rollei, or Hasselblad. The only differences have to do with which buttons you press, not the type of adjustments you are making. Software is not nearly so simple. Many times the same symptom points to completely different issues depending on whether you are using Microsoft, Oracle, or IBM. It's this disconnect that makes it difficult to build a stronger community of common issues and solutions. Remember that a community is just a larger system that needs feedback too.

(2006-07-22 22:44:11.0)
