The f-number of a lens measures the brightness of the image that lens casts onto the camera's image sensor. Smaller f-numbers produce brighter images, but they also yield smaller depths of focus.
The f-number is actually the ratio of the lens's focal length to its effective diameter (the diameter of the light beam it collects and uses for its image). Your zoom lens has a focal length that can vary from 70 to 300 mm and a minimum f-number of 5.6. That means that when it is acting as a 300 mm telephoto lens, its effective light-gathering surface is about 53 mm in diameter (300 mm divided by 5.6 gives a diameter of about 53 mm).
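If you'd like to check that arithmetic yourself, here is a small Python sketch of the relationship (the function name is mine, chosen for clarity, not part of any standard library):

```python
# The f-number relationship described above: effective diameter equals
# focal length divided by f-number.
def effective_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

print(round(effective_diameter_mm(300, 5.6), 1))  # 53.6 mm at the 300 mm end
print(round(effective_diameter_mm(70, 5.6), 1))   # 12.5 mm at the 70 mm end
```

The same lens needs far less of its front surface at the 70 mm end, which is why it can afford to discard the rest.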
If you examine the lens, I think that you'll find that the front optical element is about 53 mm in diameter; the lens is using that entire surface to collect light when it is acting as a 300 mm lens at f/5.6. But when you zoom to lower focal lengths (less extreme telephoto), the lens uses less of the light entering its front surface. Similarly, when you dial in a higher f-number, you are closing a mechanical diaphragm that is strategically located inside the lens, causing the lens to use less light. It's easy for the lens to increase its f-number by throwing away light arriving near the edges of its front optical element, but the lens can't decrease its f-number below 5.6; it can't create additional light-gathering surface. Very low f-number lenses, particularly telephoto lenses with their long focal lengths, need very large diameter front optical elements. They tend to be big, expensive, and heavy.
Smaller f-numbers produce brighter images, but there is a cost to that brightness. With more light rays entering the lens and focusing onto the image sensor, the need for careful focusing becomes greater. The lower the f-number, the wider the range of directions in which those rays travel and the harder it is to get them all to converge properly on the image sensor. At low f-numbers, only rays from a specific distance converge to sharp focus on the image sensor; rays from objects that are too close or too far from the lens don't form sharp images and appear blurry.
If you want to take a photograph in which everything, near and far, is essentially in perfect focus, you need to use a large f-number. The lens will form a dim image and you'll need to take a relatively long exposure, but you'll get a uniformly sharp picture. But if you're taking a portrait of a person and you want to blur the background so that it doesn't detract from the person's face, you'll want a small f-number. The preferred portrait lenses are moderately telephoto—they allow you to back up enough that the person's face doesn't bulge out at you in the photograph—and they have very low f-numbers—their large front optical elements gather lots of light and yield a very shallow depth of focus.
Since our eyes are only sensitive to light that's in the visible range, any "smart" optical system would have to present whatever it detects as visible light. That means it has to either shift the frequencies/wavelengths of non-visible electromagnetic radiation into the visible range or image that non-visible radiation and present a false-color reproduction to the viewer. Let's consider both of these schemes.
The first approach, shifting the frequencies/wavelengths, is seriously difficult. There are optical techniques for adding and subtracting optical waves from one another and thereby shifting their frequencies/wavelengths, but those techniques work best with the intense waves available from lasers. For example, the green light produced by some laser pointers actually originated as invisible infrared light and was doubled in frequency via a non-linear optical process in a special crystal. The intensity and pure frequency of the original infrared laser beam make this doubling process relatively efficient. Trying to double infrared light coming naturally from the objects around you would be extraordinarily inefficient. In general, trying to shift the frequencies/wavelengths of the various electromagnetic waves in your environment so that you can see them is pretty unlikely to ever work as a way of seeing the invisible portions of the electromagnetic spectrum.
The second approach, imaging invisible portions of the electromagnetic spectrum and then presenting a false-color reproduction to the viewer, is relatively straightforward. If it's possible to image the radiation and detect it, it's possible to present it as a false-color reproduction. I'm talking about a camera that images and detects invisible electromagnetic radiation and a computer that presents a false-color picture on a monitor. Imaging and detecting ultraviolet and x-ray radiation is quite possible, though materials issues sometimes make the imaging tricky. Imaging and detecting infrared light is easy in some parts of the infrared spectrum, but detection becomes problematic at long wavelengths, where the detectors typically need to be cooled to extremely low temperatures. Also, the resolution becomes poor at long wavelengths.
Camera systems that image ultraviolet, x-ray, and infrared radiation exist and you can buy them commercially. They're typically expensive and bulky. There are exceptions, such as near-infrared cameras — silicon imaging chips are quite sensitive to near infrared light, and ordinary digital cameras deliberately filter it out. Without that filtering, a camera would naturally see farther into the infrared than our eyes do and would present us with images that don't look normal.
In summary, techniques for visualizing many of the invisible portions of the electromagnetic spectrum exist, but making them small enough to wear as glasses... that's a challenge. That said, it's probably possible to make eyeglasses that image and detect infrared or ultraviolet light and present false-color views to you on miniature computer monitors. Such glasses may already exist, although they'd be expensive. As for making them small enough to wear as contact lenses... that's probably beyond what's possible, at least for the foreseeable future.
Like a camera, your eye collects light from the scene you're viewing and tries to form a real image of that scene on your retina. The eye's front surface (its cornea) and its internal lens act together to bend all the light rays from some distant feature toward one another so that they illuminate one spot on your retina. Since each feature in the scene you're viewing forms its own spot, your eye's cornea and lens are forming a real image of the scene in front of you. If that image forms as intended, you see a sharp, clear rendition of the objects in front of you. But if your eye isn't quite up to the task, the image may form either before or after your retina so that you see a blurred version of the scene.
The optical elements in your eye that are responsible for this image formation are the cornea and the lens. The cornea does most of the work of converging the light so that it focuses, while the lens provides the fine adjustment that allows that focus to occur on your retina.
If you're farsighted, the two optical elements aren't strong enough to form an image of nearby objects on your retina so you have trouble getting a clear view while reading. Your eye needs help, so you wear converging eyeglasses. Those eyeglasses boost the converging power of your eye itself and allow your eye to form sharp images of nearby objects on your retina.
If you're nearsighted, the two optical elements are too strong and need to be weakened in order to form sharp images of distant objects on your retina. That's why you wear diverging eyeglasses.
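As a rough illustration of both corrections, opticians describe lenses by their power in diopters (1 divided by the focal length in meters), and for thin lenses held in contact the powers simply add. The sketch below uses that approximation; the numbers are purely illustrative, not real prescriptions:

```python
# Eyeglass correction sketched with lens powers in diopters.
# For thin lenses in contact, powers simply add.
def combined_power(eye_diopters, glasses_diopters):
    return eye_diopters + glasses_diopters

# A farsighted eye is too weak; converging (positive) glasses add power:
print(combined_power(58.0, +2.0))  # 60.0 diopters
# A nearsighted eye is too strong; diverging (negative) glasses weaken it:
print(combined_power(60.0, -1.5))  # 58.5 diopters
```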
People are surprised when I tell them that they're nearsighted or farsighted. They wonder how I know. My trick is simple: I look through their eyeglasses at distant objects. If those objects appear enlarged, the eyeglasses are converging (like magnifying glasses) and the wearer must be farsighted. If those objects appear shrunken, the eyeglasses are diverging (like the security peepholes in doors) and the wearer is nearsighted. Try it; you'll find that it's easy to figure out how other people see by looking through their glasses as they wear them.
The #2-pencil requirement is mostly historical. Because modern scantron systems can use all the sophistication of image sensors and computer image analysis, they can recognize marks made with a variety of materials and they can even pick out the strongest of several marks. If they choose to ignore marks made with materials other than pencil, it's because they're trying to be certain that they're recognizing only marks made intentionally by the user. Basically, these systems can "see" most of the details that you can see with your eyes and they judge the markings almost as well as a human would.
The first scantron systems, however, were far less capable. They read the pencil marks by shining light through the paper and into Lucite light guides that conveyed the transmitted light to phototubes. Whenever something blocked the light, the scantron system recorded a mark. The marks therefore had to be opaque in the range of light wavelengths that the phototubes sensed, which is mostly blue. Pencil marks were the obvious choice because the graphite in pencil lead is highly opaque across the visible light spectrum. Graphite molecules are tiny carbon sheets that are electrically conducting along the sheets. When you write on paper with a pencil, you deposit these tiny conducting sheets in layers onto the paper and the paper develops a black sheen. It's shiny because the conducting graphite reflects some of the light waves from its surface and it's black because it absorbs whatever light waves do manage to enter it.
A thick layer of graphite on paper is not only shiny black to reflected light, it's also opaque to transmitted light. That's just what the early scantron systems needed. Blue inks don't absorb blue light (that's why they appear blue!), so those early scantron systems couldn't sense the presence of marks made with blue ink. Even black inks weren't necessarily opaque enough in the visible for the scantron system to be confident that it "saw" a mark.
In contrast, modern scantron systems use reflected light to "see" marks, a change that allows scantron forms to be double-sided. They generally do recognize marks made with black ink or black toner from copiers and laser printers. I've pre-printed scantron forms with a laser printer and it works beautifully. But modern scantron systems ignore marks made in the color of the scantron form itself so as not to confuse imperfections in the form with marks made by the user. For example, a blue scantron form marked with blue ink probably won't be read properly by a scantron system.
As for why only #2 pencils, that's a mechanical issue. Harder pencil leads generally don't produce opaque marks unless you press very hard. Since the early scantron machines needed opacity, they missed too many marks made with #3 or #4 pencils. And softer pencils tend to smudge. A scantron sheet filled out using a #1 pencil on a hot, humid day under stressful circumstances will be covered with spurious blotches and the early scantron machines confused those extra blotches with real marks.
Modern scantron machines can easily recognize the faint marks made by #3 or #4 pencils and they can usually tell a deliberate mark from a #1 pencil smudge or even an imperfectly erased mark. They can also detect black ink and, when appropriate, blue ink. So the days of "be sure to use a #2 pencil" are pretty much over. The instruction lingers on nonetheless.
One final note: I had long suspected that the first scanning systems were electrical rather than optical, but I couldn't locate references. To my delight, Martin Brown informed me that there were scanning systems that identified pencil marks by looking for their electrical conductivity. Electrical feelers at each end of the markable area made contact with that area and could detect pencil via its ability to conduct electric current. To ensure enough conductivity, those forms had to be filled out with special pencils having high conductivity leads. Mr. Brown has such an IBM Electrographic pencil in his collection. This electrographic and mark sense technology was apparently developed in the 1930s and was in wide use through the 1960s.
I'll assume that by "bigger lens" you mean one that is larger in diameter and that therefore collects all the light passing through a larger surface area. While a larger-diameter lens can project a brighter image onto the image sensor or film than a smaller-diameter lens, that's not the whole story. Producing a better photo or video involves more than just brightness.
Lenses are often characterized by their f-numbers, where f-number is the ratio of effective focal length to effective lens diameter. Focal length is the distance between the lens and the real image it forms of a distant object. For example, if a particular converging lens projects a real image of the moon onto a piece of paper placed 200 millimeters (200 mm) from the lens, then that lens has a focal length of 200 mm. And if the lens is 50 mm in diameter, it has an f-number of 4 because 200 mm divided by 50 mm is 4.
Based on purely geometrical arguments, it's easy to show that lenses with equal f-numbers project images of equal brightness onto their image sensors, and that the smaller the f-number, the brighter the image. Whether a lens is wide-angle or telephoto, if it has an f-number of 4, then its effective focal length is four times the effective diameter of its light-gathering surface. Since telephoto lenses have long focal lengths, they need large effective diameters to obtain small f-numbers.
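Here's a short Python sketch of that geometric argument: the light collected grows with aperture area (diameter squared) while the image area it is spread over grows with focal length squared, so brightness scales as 1 over the f-number squared, independent of focal length. The function is illustrative, not a standard formula library:

```python
# Relative image brightness from the geometric scaling described above:
# brightness ~ (diameter / focal_length)^2 = 1 / f_number^2.
def relative_brightness(f_number):
    return 1.0 / f_number ** 2

# Any two f/4 lenses, wide-angle or telephoto, give equal brightness:
print(relative_brightness(4.0))  # 0.0625
# Halving the f-number to f/2 quadruples the image brightness:
print(relative_brightness(2.0) / relative_brightness(4.0))  # 4.0
```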
But notice that I referred always to "effective diameter" and "effective focal length" when defining f-number. That's because there are many modern lenses that are so complicated internally that simply dividing the lens diameter by the distance between the lens and image sensor won't tell you much. Many of these lenses have zoom features that allow them to vary their effective focal lengths over wide ranges and these lenses often discard light in order to improve image quality and avoid dramatic changes in image brightness while zooming.
You might wonder why a lens would ever choose to discard light. There are at least two reasons for doing so. First, there is the issue of image quality. The smaller the f-number of a lens, the more precise its optics must be in order to form a sharp image. Low f-number lenses bring together light rays from a wide range of angles, and getting all of those rays to overlap perfectly on the image sensor is no small feat. Making a high-performance lens with an f-number less than 2 is a challenge and making one with an f-number of less than 1.2 is extremely difficult. There are specialized lenses with f-numbers below 1 and Canon sold a remarkable f0.95 lens in the early 1960s. The lowest f-number camera lens I have ever owned is an f1.4.
Second, there is the issue of depth of focus. The smaller the f-number, the smaller the depth of focus. Again, this is a geometry issue: a low-f-number lens brings together light rays from a wide range of angles and those rays only meet at one point before separating again. Since objects at different distances in front of the lens form their images at different distances behind the lens, a single image sensor can't capture sharp images of two such objects at once. With a high-f-number lens, this fact isn't a problem because the light rays from a particular object remain rather close together even when the object's image forms before or after the image sensor. But with a low-f-number lens, the light rays from a particular object come together acceptably only at one particular distance from the lens. If the image sensor isn't at that distance, the object will appear blurry. If a zoom lens didn't work to keep its f-number relatively constant while zooming from telephoto to wide angle, its f-number would decrease during that zoom and its depth of focus would shrink. To avoid that phenomenon, the lens strategically discards light so as to keep its f-number essentially constant during zooming.
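The thin-lens equation makes the geometry concrete. The sketch below is an idealization (real camera lenses are multi-element) with illustrative numbers:

```python
# A thin-lens sketch of why objects at different distances can't all be
# sharp at once: their images form at different distances behind the lens.
def image_distance_mm(focal_length_mm, object_distance_mm):
    # Thin-lens equation: 1/f = 1/d_object + 1/d_image
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f = 50.0  # a hypothetical 50 mm lens
for d_object in (1000.0, 2000.0, 5000.0):  # objects 1 m, 2 m, and 5 m away
    print(round(image_distance_mm(f, d_object), 2))  # 52.63, 51.28, 50.51

# Those image planes span about 2 mm, but the sensor sits at only one of
# them. The blur of a misfocused object grows with aperture diameter,
# which is why low f-numbers (large apertures) give shallow depth of focus.
```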
In summary, larger-diameter lenses tend to be better at producing photographic and video images, but that assumes that they are high-quality and that they can shrink their effective diameters in ways that allow them to imitate high-quality lenses of smaller diameters when necessary. But flexible characteristics always come at some cost in image quality and the very best lenses are specialized to their tasks. Zoom lenses can't be quite as good as fixed focal length lenses, and a large-diameter lens imitating a small-diameter lens by throwing away some light can't be quite as good as a true small-diameter lens.
As for my sources, one of the most satisfying aspects of physics is that you don't always need sources. Most of the imaging issues I've just discussed are associated with simple geometric optics, a subject that is part of the basic toolbox of an optical physicist (which I am). You can, however, look this stuff up in any book on geometrical optics.
The simple answer to your question is yes, you can do it. But you'll encounter two significant problems with trying to turn your ordinary TV into a projection system. First, the lens you'll need to do the projection will be extremely large and expensive. Second, the image you'll see will be flipped horizontally and vertically. You'll have to hang upside-down from your porch railing, which will make drinking a beer rather difficult.
About the lens: in principle, all you need is one convex lens. A giant magnifying glass will do. But there are a couple of constraints. Because your television screen is pretty large, the lens diameter must also be pretty large; if it is significantly smaller than the TV screen, it won't project enough light onto your wall. And to control the size of the image it projects on the wall, you'll need to pick just the right focal length (curvature) for the lens. You'll be projecting a real image on the wall, a pattern of light that exactly matches the pattern of light appearing on the TV screen. The size and location of that real image depend on the lens's focal length and on its distance from the TV screen. You'll have to get these right or you'll see only a blur. Unfortunately, single lenses tend to have color problems and edge distortions. Projection lenses need to be carefully designed multi-element systems. Getting a good-quality, large lens with the right focal length is going to cost you.
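To see how the focal length comes out, here's a thin-lens sketch with made-up but plausible distances (a real projection lens is multi-element, as noted above):

```python
# A thin-lens sketch of choosing the focal length for the TV-to-wall
# projection described above. All numbers are illustrative.
def focal_length_m(screen_to_lens_m, lens_to_wall_m):
    # Thin-lens equation: 1/f = 1/d_object + 1/d_image
    return 1.0 / (1.0 / screen_to_lens_m + 1.0 / lens_to_wall_m)

d_screen, d_wall = 1.0, 4.0  # TV 1 m from the lens, wall 4 m beyond it
print(focal_length_m(d_screen, d_wall))  # 0.8 m focal length needed
print(d_wall / d_screen)  # magnification of 4.0: a 0.5 m screen fills 2 m
```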
The other big problem is more humorous. Real images are flipped horizontally and vertically relative to the light source from which they originate. Unless you turn your TV set upside-down, your wall image will be inverted. And, without a mirror, you can't solve the left-right reversal problem. All the writing will appear backward. Projection television systems flip their screen image to start with so that the projected image has the right orientation. Unless you want to rewire your TV set, that's not going to happen for you. Good luck.
What a neat observation! Digital cameras based on CCD imaging chips are sensitive to infrared light. Even though you can't see the infrared light streaming out of the remote control when you push its buttons, the camera's chip can. This behavior is typical of semiconductor light sensors such as photodiodes and phototransistors: they often detect near infrared light even better than visible light. In fact, a semiconductor infrared sensor is exactly what your television set uses to collect instructions from the remote control.
The color filters that the camera employs to obtain color information misbehave when they're dealing with infrared light and so the camera is fooled into thinking that it's viewing white light. That's why your camera shows a white spot where the remote's infrared source is located.
I just tried taking some pictures through infrared filters, glass plates that block visible light completely, and my digital camera worked just fine. The images were as sharp and clear as usual, although the colors were odd. I had to use incandescent illumination because fluorescent light doesn't contain enough infrared. It would be easy to take pictures in complete darkness if you just illuminated a scene with bright infrared sources. No doubt there are "spy" cameras that do exactly that.
Just as most good camera lenses have more than one optical element inside them, so your eye has more than one optical element inside it. The outside surface of your eye is curved and actually acts as a lens itself. Without this surface lens, your eye can't bring the light passing through it to a focus on your retina. The component in your eye that is called "the lens" is actually the fine adjustment rather than the whole optical system.
When you put your eye in water, the eye's curved outer surface stops acting as a lens. That's because light travels at roughly the same speed in water as it does in your eye, so light no longer bends much as it enters your eye. Everything looks blurry because the light doesn't focus on your retina anymore. But by inserting an air space between your eye and a flat plate of glass or plastic, you recover the bending at your eye's surface and everything appears sharp again.
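Snell's law puts numbers on this. The sketch below uses approximate refractive indices (air 1.00, water 1.33, cornea about 1.38):

```python
import math

# Snell's law: n1 * sin(angle1) = n2 * sin(angle2). The bending at a
# surface depends on the mismatch between the two refractive indices.
def refracted_angle_deg(n1, n2, incident_deg):
    return math.degrees(math.asin(n1 * math.sin(math.radians(incident_deg)) / n2))

incident = 30.0
# Air to cornea: a strong bend, which is what focuses light on the retina:
print(round(refracted_angle_deg(1.00, 1.38, incident), 1))  # about 21.2
# Water to cornea: almost no bend, so the cornea barely acts as a lens:
print(round(refracted_angle_deg(1.33, 1.38, incident), 1))  # about 28.8
```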
The only common light source that presents any real danger to a child with a magnifying glass is the sun. If you let sunlight pass through an ordinary magnifying glass, the convex lens of the magnifier will cause the rays of sunlight to converge and they will form a real image of the sun a short distance beyond the magnifying glass. This focused image will appear as a small, circular light spot of enormous brilliance when you let it fall onto a sheet of white paper. It's truly an image—it's round because the sun is round and it has all the spatial features that the sun does. If the image weren't so bright and the sun had visible marks on its surface, you'd see those marks nicely in the real image.
The problem with this real image of the sun is simply that it's dazzlingly bright and that it delivers lots of thermal power in a small area. The real image is there in space, whether or not you put any object into that space. If you put paper or some other flammable substance in this focused region, it may catch on fire. Putting your skin in the focus would also be a bad idea. And if you put your eye there, you're in serious trouble.
So my suggestion with first graders is to stay in the shade when you're working with magnifying glasses. As soon as you go out in direct sunlight, that brilliant real image will begin hovering in space just beyond the magnifying glass, waiting for someone to put something into it. And many first graders simply can't resist the opportunity to do just that.
A projector is essentially a camera that's operating backward. When you take a picture of a tree, all of the light striking the camera lens from a particular leaf is bent together to one small spot on the film. Overall, light from each leaf is bent together to a corresponding spot on the film and a pattern of light that looks just like the tree—a real image of the tree—forms on the surface of the film. The film records this pattern of light through photochemical processes, and subsequent development causes the film to display this captured light pattern forever. Because of the nature of the bending process, the real image that forms on the film is upside-down and backward. Because it forms so near the camera lens, it's also much smaller than the tree itself.
A projector just reverses this process. Now light starts out from an illuminated piece of developed film—such as a slide containing an image of a tree. Now the projector lens bends all of the light striking it from a particular leaf spot on the slide together to one small spot on a distant projection screen. Again, light from each leaf on the slide is bent together to a corresponding spot on the screen and a pattern of light that looks just like the slide—a real image of the slide—forms on the surface of the projection screen. As before, this image is upside-down and backwards, which is why you must be careful how you orient a slide in a projector, lest you produce an inverted image on the screen.
Copyright 1997-2017 © Louis A. Bloomfield, All Rights Reserved