How Everything Works
 

QUESTIONS AND ANSWERS
 
Cameras

1482. My roommate and I heard that it's possible to project the picture from our TV set onto the wall. We'd love to sit on our porch and watch TV while drinking a beer. Any ideas? - JK
The simple answer to your question is yes, you can do it. But you'll encounter two significant problems with trying to turn your ordinary TV into a projection system. First, the lens you'll need to do the projection will be extremely large and expensive. Second, the image you'll see will be flipped horizontally and vertically. You'll have to hang upside-down from your porch railing, which will make drinking a beer rather difficult.

About the lens: in principle, all you need is one convex lens. A giant magnifying glass will do. But it has a couple of constraints. Because your television screen is pretty large, the lens diameter must also be pretty large. If it is significantly smaller than the TV screen, it won't project enough light onto your wall. And to control the size of the image it projects on the wall, you'll need to pick just the right focal length (curvature) of the lens. You'll be projecting a real image on the wall, a pattern of light that exactly matches the pattern of light appearing on the TV screen. The size and location of that real image depend on the lens's focal length and on its distance from the TV screen. You'll have to get these right or you'll see only a blur. Unfortunately, single lenses tend to have color problems and edge distortions. Projection lenses need to be carefully designed, multi-element systems. Getting a good-quality, large lens with the right focal length is going to cost you.
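If you want to estimate the geometry before buying anything, the thin-lens equation does the job. Here's a minimal sketch (my own illustration, with made-up numbers rather than anything specific to your TV) of how the focal length and the lens-to-screen distance fix where the image lands and how big it is:

def image_distance(focal_length_m, object_distance_m):
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i, so d_i = 1 / (1/f - 1/d_o).
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

def magnification(object_distance_m, image_distance_m):
    # The minus sign encodes the flipped (inverted) real image.
    return -image_distance_m / object_distance_m

f = 0.5       # assumed focal length of the big projection lens, in meters
d_tv = 0.6    # assumed distance from the TV screen to the lens, in meters
d_wall = image_distance(f, d_tv)
m = magnification(d_tv, d_wall)
print(f"Wall distance: {d_wall:.1f} m; image is {abs(m):.0f} times larger and inverted (m = {m:.1f})")

With those assumed numbers, the wall would have to sit 3 meters from the lens and the picture would be five times larger than the screen. The negative magnification is the flipped image discussed in the next paragraph.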

The other big problem is more humorous. Real images are flipped horizontally and vertically relative to the light source from which they originate. Unless you turn your TV set upside-down, your wall image will be inverted. And even after you flip the set, the writing will still appear backward: without a mirror, you can't solve that left-right reversal problem. Projection television systems flip their screen image to start with so that the projected image has the right orientation. Unless you want to rewire your TV set, that's not going to happen for you. Good luck.


1525. Is it true that the bigger the lens on a camera, the more light goes through it and the better the photo or video? My film teacher says that while this idea is logically correct, he didn't know if it was true. Your lecture slides say the answer is yes, but my teacher still doesn't believe it. We were wondering about your source for this material. — PJ
I'll assume that by "bigger lens" you mean one that is larger in diameter and that therefore collects all the light passing through a larger surface area. While a larger-diameter lens can project a brighter image onto the image sensor or film than a smaller-diameter lens, that's not the whole story. Producing a better photo or video involves more than just brightness.

Lenses are often characterized by their f-numbers, where f-number is the ratio of effective focal length to effective lens diameter. Focal length is the distance between the lens and the real image it forms of a distant object. For example, if a particular converging lens projects a real image of the moon onto a piece of paper placed 200 millimeters (200 mm) from the lens, then that lens has a focal length of 200 mm. And if the lens is 50 mm in diameter, it has an f-number of 4 because 200 mm divided by 50 mm is 4.

Based on purely geometrical arguments, it's easy to show that lenses with equal f-numbers project images of equal brightness onto their image sensors and that the smaller the f-number, the brighter the image: the light a lens collects grows as the square of its diameter, while the image area over which that light is spread grows as the square of its focal length, so the brightness depends only on their ratio. Whether a lens is wide-angle or telephoto, if it has an f-number of 4, then its effective focal length is four times the effective diameter of its light-gathering lens. Since telephoto lenses have long focal lengths, they need large effective diameters to obtain small f-numbers.
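To make that scaling concrete, here's a small numerical sketch (my own illustration, with hypothetical lenses) comparing f-numbers and the relative image brightness they produce:

def f_number(focal_length_mm, diameter_mm):
    return focal_length_mm / diameter_mm

def relative_brightness(n):
    # Image brightness scales as (D/f)^2 = 1/N^2; values are relative to an f/1 lens.
    return 1.0 / n**2

wide = f_number(50, 12.5)    # a hypothetical 50 mm lens with a 12.5 mm aperture -> f/4
tele = f_number(200, 50)     # a hypothetical 200 mm lens with a 50 mm aperture  -> f/4
print(wide, tele)                                              # both 4.0: same f-number
print(relative_brightness(wide) == relative_brightness(tele))  # True: equal image brightness
print(relative_brightness(2) / relative_brightness(4))         # 4.0: f/2 is four times brighter than f/4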

But notice that I referred always to "effective diameter" and "effective focal length" when defining f-number. That's because there are many modern lenses that are so complicated internally that simply dividing the lens diameter by the distance between the lens and image sensor won't tell you much. Many of these lenses have zoom features that allow them to vary their effective focal lengths over wide ranges and these lenses often discard light in order to improve image quality and avoid dramatic changes in image brightness while zooming.

You might wonder why a lens would ever choose to discard light. There are at least two reasons for doing so. First, there is the issue of image quality. The smaller the f-number of a lens, the more precise its optics must be in order to form a sharp image. Low f-number lenses bring together light rays from a wide range of angles, and getting all of those rays to overlap perfectly on the image sensor is no small feat. Making a high-performance lens with an f-number less than 2 is a challenge and making one with an f-number of less than 1.2 is extremely difficult. There are specialized lenses with f-numbers below 1 and Canon sold a remarkable f0.95 lens in the early 1960s. The lowest f-number camera lens I have ever owned is an f1.4.

Second, there is the issue of depth of focus. The smaller the f-number, the smaller the depth of focus. Again, this is a geometry issue: a low-f-number lens brings together light rays from a wide range of angles and those rays only meet at one point before separating again. Since objects at different distances in front of the lens form images at different distances behind the lens, it's impossible to capture sharp images of two such objects at once on a single image sensor. With a high-f-number lens, this fact isn't a problem because the light rays from a particular object are still rather close together even when that object's image forms before or after the image sensor. But with a low-f-number lens, the light rays from a particular object come together acceptably only at one particular distance from the lens. If the image sensor isn't at that distance, the object will appear blurry. If a zoom lens didn't work to keep its f-number relatively constant while zooming from telephoto to wide angle, its f-number would decrease during that zoom and its depth of focus would shrink. To avoid that phenomenon, the lens strategically discards light so as to keep its f-number essentially constant during zooming.
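For a feel of the numbers, here's a rough sketch using one common textbook approximation for the range of subject distances that stay acceptably sharp (what photographers call depth of field): DoF ~ 2 N c u^2 / f^2, valid when the subject is much closer than the hyperfocal distance. N is the f-number, c the acceptable blur-spot size ("circle of confusion"), u the subject distance and f the focal length; the values below are assumptions chosen only for illustration.

def depth_of_field_m(f_number, focal_length_mm, subject_distance_m, blur_spot_mm=0.03):
    # Approximate depth of field: 2 * N * c * u^2 / f^2 (all lengths converted to meters).
    f_m = focal_length_mm / 1000.0
    c_m = blur_spot_mm / 1000.0
    return 2.0 * f_number * c_m * subject_distance_m**2 / f_m**2

for n in (1.4, 4, 16):   # the same assumed 50 mm lens, subject 3 m away
    print(f"f/{n}: depth of field ~ {depth_of_field_m(n, 50, 3):.2f} m")

With those assumptions, the sharp zone grows from roughly a third of a meter at f/1.4 to several meters at f/16, which is the trend described above.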

In summary, larger-diameter lenses tend to be better at producing photographic and video images, but that assumes they are high quality and that they can shrink their effective diameters to imitate high-quality smaller-diameter lenses when necessary. But flexible characteristics always come at some cost in image quality, and the very best lenses are specialized to their tasks. Zoom lenses can't be quite as good as fixed focal length lenses, and a large-diameter lens imitating a small-diameter lens by throwing away some light can't be quite as good as a true small-diameter lens.

As for my sources, one of the most satisfying aspects of physics is that you don't always need sources. Most of the imaging issues I've just discussed are associated with simple geometric optics, a subject that is part of the basic toolbox of an optical physicist (which I am). You can, however, look this stuff up in any book on geometrical optics.


1529. Why do scantron-type tests only read #2 pencils? Can other pencils work? — MW, Montgomery, AL
The #2-pencil requirement is mostly historical. Because modern scantron systems can use all the sophistication of image sensors and computer image analysis, they can recognize marks made with a variety of materials and they can even pick out the strongest of several marks. If they choose to ignore marks made with materials other than pencil, it's because they're trying to be certain that they're recognizing only marks made intentionally by the user. Basically, these systems can "see" most of the details that you can see with your eyes and they judge the markings almost as well as a human would.

The first scantron systems, however, were far less capable. They read the pencil marks by shining light through the paper and into Lucite light guides that conveyed the transmitted light to phototubes. Whenever something blocked the light, the scantron system recorded a mark. The marks therefore had to be opaque in the range of light wavelengths that the phototubes sensed, which is mostly blue. Pencil marks were the obvious choice because the graphite in pencil lead is highly opaque across the visible light spectrum. Graphite molecules are tiny carbon sheets that are electrically conducting along the sheets. When you write on paper with a pencil, you deposit these tiny conducting sheets in layers onto the paper and the paper develops a black sheen. It's shiny because the conducting graphite reflects some of the light waves from its surface and it's black because it absorbs whatever light waves do manage to enter it.

A thick layer of graphite on paper is not only shiny black to reflected light, it's also opaque to transmitted light. That's just what the early scantron systems needed. Blue inks don't absorb blue light (that's why they appear blue!), so those early scantron systems couldn't sense the presence of marks made with blue ink. Even black inks weren't necessarily opaque enough in the visible for the scantron system to be confident that it "saw" a mark.

In contrast, modern scantron systems use reflected light to "see" marks, a change that allows scantron forms to be double-sided. They generally do recognize marks made with black ink or black toner from copiers and laser printers. I've pre-printed scantron forms with a laser printer and it works beautifully. But modern scantron systems ignore marks made in the color of the scantron form itself so as not to confuse imperfections in the form with marks by the user. For example, a blue scantron form marked with blue ink probably won't be read properly by a scantron system.

As for why only #2 pencils, that's a mechanical issue. Harder pencil leads generally don't produce opaque marks unless you press very hard. Since the early scantron machines needed opacity, they missed too many marks made with #3 or #4 pencils. And softer pencils tend to smudge. A scantron sheet filled out using a #1 pencil on a hot, humid day under stressful circumstances will be covered with spurious blotches and the early scantron machines confused those extra blotches with real marks.

Modern scantron machines can easily recognize the faint marks made by #3 or #4 pencils and they can usually tell a deliberate mark from a #1 pencil smudge or even an imperfectly erased mark. They can also detect black ink and, when appropriate, blue ink. So the days of "be sure to use a #2 pencil" are pretty much over. The instruction lingers on nonetheless.
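To illustrate the kind of judgment such a reader makes, here's a toy sketch (purely my own illustration, not actual scantron firmware) that picks the strongest mark from per-bubble darkness scores and flags ambiguous questions; the thresholds are invented:

def read_answer(darkness_by_choice, min_darkness=0.35, min_margin=0.20):
    # Rank the bubbles by how dark they are, darkest first.
    ranked = sorted(darkness_by_choice.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_dark), (_, next_dark) = ranked[0], ranked[1]
    if best_dark < min_darkness:
        return None    # nothing dark enough: treat the question as blank
    if best_dark - next_dark < min_margin:
        return None    # two strong marks or a bad erasure: ambiguous
    return best

print(read_answer({"A": 0.05, "B": 0.82, "C": 0.10, "D": 0.07}))   # "B"
print(read_answer({"A": 0.60, "B": 0.55, "C": 0.08, "D": 0.05}))   # None (ambiguous)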

One final note: I had long suspected that the first scanning systems were electrical rather than optical, but I couldn't locate references. To my delight, Martin Brown informed me that there were scanning systems that identified pencil marks by looking for their electrical conductivity. Electrical feelers at each end of the markable area made contact with that area and could detect pencil via its ability to conduct electric current. To ensure enough conductivity, those forms had to be filled out with special pencils having high conductivity leads. Mr. Brown has such an IBM Electrographic pencil in his collection. This electrographic and mark sense technology was apparently developed in the 1930s and was in wide use through the 1960s.


1539. How do glasses work and what is the physics behind them? — SDM, Missouri
Like a camera, your eye collects light from the scene you're viewing and tries to form a real image of that scene on your retina. The eye's front surface (its cornea) and its internal lens act together to bend all the light rays from some distant feature toward one another so that they illuminate one spot on your retina. Since each feature in the scene you're viewing forms its own spot, your eye's cornea and lens are forming a real image of the scene in front of you. If that image forms as intended, you see a sharp, clear rendition of the objects in front of you. But if your eye isn't quite up to the task, the image may form either before or after your retina so that you see a blurred version of the scene.

Of those two optical elements, the cornea does most of the work of converging the light so that it focuses, while the lens provides the fine adjustment that allows that focus to occur exactly on your retina.

If you're farsighted, the two optical elements aren't strong enough to form an image of nearby objects on your retina, so you have trouble getting a clear view while reading. Your eye needs help, so you wear converging eyeglasses. Those eyeglasses boost the converging power of your eye itself and allow your eye to form sharp images of nearby objects on your retina.

If you're nearsighted, the two optical elements are too strong and need to be weakened in order to form sharp images of distant objects on your retina. That's why you wear diverging eyeglasses.
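If you want numbers, eyeglass prescriptions quantify that converging or diverging strength in diopters, the reciprocal of the focal length in meters. Here's a standard intro-optics sketch (my example values, and it ignores the small eyeglass-to-eye distance): a diverging lens for nearsightedness makes distant objects appear to come from the eye's far point, while a converging reading lens makes a page held at about 0.25 m appear to come from the eye's near point.

def nearsighted_power(far_point_m):
    # Diverging lens: virtual image of a distant object forms at the far point.
    return -1.0 / far_point_m

def farsighted_power(near_point_m, reading_distance_m=0.25):
    # Converging lens: virtual image of the page forms at the near point.
    return 1.0 / reading_distance_m - 1.0 / near_point_m

print(f"Far point 0.5 m  -> {nearsighted_power(0.5):+.1f} diopters (diverging)")
print(f"Near point 1.0 m -> {farsighted_power(1.0):+.1f} diopters (converging)")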

People are surprised when I tell them that they're nearsighted or farsighted. They wonder how I know. My trick is simple: I look through their eyeglasses at distant objects. If those objects appear enlarged, the eyeglasses are converging (like magnifying glasses) and the wearer must be farsighted. If those objects appear shrunken, the eyeglasses are diverging (like the security peepholes in doors) and the wearer is nearsighted. Try it; you'll find that it's easy to figure out how other people see by looking through their glasses as they wear them.


1553. I've read references to "smart" eyeglasses or contact lenses that can present more than just the visible portion of the electromagnetic spectrum. I'm wondering if you have any sources for these types of devices that are available to us civilians. — GJ, Wells, Nevada
Since our eyes are only sensitive to light that's in the visible range, any "smart" optical system would have to present whatever it detects as visible light. That means it has to either shift the frequencies/wavelengths of non-visible electromagnetic radiation into the visible range or image that non-visible radiation and present a false-color reproduction to the viewer. Let's consider both of these schemes.

The first approach, shifting the frequencies/wavelengths, is seriously difficult. There are optical techniques for adding and subtracting optical waves from one another and thereby shifting their frequencies/wavelengths, but those techniques work best with the intense waves available with lasers. For example, the green light produced by some laser pointers actually originated as invisible infrared light and was doubled in frequency via a non-linear optical process in a special crystal. The intensity and pure frequency of the original infrared laser beam make this doubling process relatively efficient. Trying to double infrared light coming naturally from the objects around you would be extraordinarily inefficient. In general, trying to shift the frequencies/wavelengths of the various electromagnetic waves in your environment so that you can see them is pretty unlikely to ever work as a way of seeing the invisible portions of the electromagnetic spectrum.

The second approach, imaging invisible portions of the electromagnetic spectrum and then presenting a false-color reproduction to the viewer, is relatively straightforward. If it's possible to image the radiation and detect it, it's possible to present it as a false-color reproduction. I'm talking about a camera that images and detects invisible electromagnetic radiation and a computer that presents a false-color picture on a monitor. Imaging and detecting ultraviolet and x-ray radiation is quite possible, though materials issues sometimes make the imaging tricky. Imaging and detecting infrared light is easy in some parts of the infrared spectrum, but detection becomes problematic at long wavelengths, where the detectors typically need to be cooled to extremely low temperatures. Also, the resolution becomes poor at long wavelengths.
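As a concrete illustration of that second approach, here's a minimal sketch of false-color rendering, assuming you already have a two-dimensional array of detected infrared intensities (simulated here with random numbers so the example runs on its own):

import numpy as np
import matplotlib.pyplot as plt

ir_image = np.random.rand(240, 320)    # stand-in for a detected infrared frame

# Normalize to the frame's own range, then let a colormap assign visible colors.
lo, hi = ir_image.min(), ir_image.max()
normalized = (ir_image - lo) / (hi - lo)

plt.imshow(normalized, cmap="inferno")   # false-color rendering for human eyes
plt.colorbar(label="relative infrared intensity")
plt.title("False-color view of an invisible band")
plt.show()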

Camera systems that image ultraviolet, x-ray, and infrared radiation exist and you can buy them from existing companies. They're typically expensive and bulky. There are exceptions such as near-infrared cameras — silicon imaging chips are quite sensitive to near infrared and ordinary digital cameras filter it out to avoid presenting odd-looking images. In other words, the camera would naturally see farther into the infrared than our eyes do and would thus present us with images that don't look normal.

In summary, techniques for visualizing many of the invisible portions of the electromagnetic spectrum exist, but making them small enough to wear as glasses... that's a challenge. That said, it's probably possible to make eyeglasses that image and detect infrared or ultraviolet light and present false-color views to you on miniature computer monitors. Such glasses may already exist, although they'd be expensive. As for making them small enough to wear as contact lenses... that's probably beyond what's possible, at least for the foreseeable future.


1586. I have a 70 to 300 mm lens with f-5.6. But I can manually take it up to f-22. What does that mean and how does it work? Also why can't I bring it down to say f2.8? — AR, Pakistan
The f-number of a lens measures the brightness of the image that lens casts onto the camera's image sensor. Smaller f-numbers produce brighter images, but they also yield smaller depths of focus.

The f-number is actually the ratio of the lens's focal length to its effective diameter (the diameter of the light beam it collects and uses for its image). Your zoom lens has a focal length that can vary from 70 to 300 mm and a minimum f-number of 5.6. That means that when it is acting as a 300 mm telephoto lens, its effective light-gathering surface is about 53 mm in diameter (300 mm divided by 5.6).

If you examine the lens, I think that you'll find that the front optical element is about 53 mm in diameter; the lens is using that entire surface to collect light when it is acting as a 300 mm lens at f-5.6. But when you zoom to lower focal lengths (less extreme telephoto), the lens uses less of the light entering its front surface. Similarly, when you dial a higher f-number, you are closing a mechanical diaphragm that is strategically located inside the lens and causing the lens to use less light. It's easy for the lens to increase its f-number by throwing away light arriving near the edges of its front optical element, but the lens can't decrease its f-number below 5.6; it can't create additional light gathering surface. Very low f-number lenses, particularly telephoto lenses with their long focal lengths, need very large diameter front optical elements. They tend to be big, expensive, and heavy.
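You can check these numbers yourself, since the effective diameter is just the focal length divided by the f-number. A quick sketch for the focal lengths and f-numbers you mentioned:

def effective_diameter_mm(focal_length_mm, f_number):
    # Effective aperture diameter = focal length / f-number.
    return focal_length_mm / f_number

for focal, n in [(300, 5.6), (300, 22), (70, 5.6), (300, 2.8)]:
    print(f"{focal} mm at f/{n}: effective aperture ~ {effective_diameter_mm(focal, n):.1f} mm")

The f/2.8 line is the telling one: at 300 mm it would require an effective aperture roughly 107 mm across, far larger than your lens's front element, which is why such lenses are big, heavy, and expensive.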

Smaller f-numbers produce brighter images, but there is a cost to that brightness. With more light rays entering the lens and focusing onto the image sensor, the need for careful focusing becomes greater. The lower the f-number, the wider the range of directions in which those rays travel and the harder it is to get them all to converge properly on the image sensor. At low f-numbers, only rays from a specific distance converge to a sharp focus on the image sensor; rays from objects that are too close or too far from the lens don't form sharp images and appear blurry.

If you want to take a photograph in which everything, near and far, is essentially in perfect focus, you need to use a large f-number. The lens will form a dim image and you'll need to take a relatively long exposure, but you'll get a uniformly sharp picture. But if you're taking a portrait of a person and you want to blur the background so that it doesn't detract from the person's face, you'll want a small f-number. The preferred portrait lenses are moderately telephoto—they allow you to back up enough that the person's face doesn't bulge out at you in the photograph—and they have very low f-numbers—their large front optical elements gather lots of light and yield a very shallow depth of focus.


www.HowEverythingWorks.org

Copyright 1997-2017 © Louis A. Bloomfield, All Rights Reserved