The size of your pupil does not depend on the distance to an object. It depends only on how bright the scene in front of you is. But the size of your pupil does affect your ability to focus. When it is relatively dark and your pupil is wide open, the whole lens of your eye is involved in light gathering. Focusing becomes very critical and you have very little depth of focus. Moreover, if your lens isn't perfect, you will see things as blurry. But when it is bright out and your pupil is small, you are only using the center portion of your lens and everything is in focus. That's why it is harder to focus at night than during the day. When you squint, you are artificially shrinking the effective diameter of the lens in your eye and increasing your depth of focus. Unfortunately, you are also reducing the amount of light that reaches your eye. If you look through a pinhole in a sheet of paper, you will find everything in focus, although it will appear very dim.
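The pupil-size effect described above can be sketched with simple geometric optics. This is a minimal thin-lens sketch, not a model of a real eye: the focal length, distances, and pupil diameters below are illustrative assumptions.

```python
# Sketch of how aperture (pupil) diameter affects blur, using the
# thin-lens model 1/f = 1/do + 1/di. All numbers are illustrative
# assumptions, not measured values for a human eye.

def image_distance(f, do):
    """Where a thin lens of focal length f focuses light from distance do."""
    return 1.0 / (1.0 / f - 1.0 / do)

def blur_diameter(aperture, f, do_focused, do_object):
    """Diameter of the blur circle for an object at do_object when the
    lens is focused at do_focused. Pure similar-triangles geometry:
    the blur spot grows in proportion to the aperture diameter."""
    di_focused = image_distance(f, do_focused)   # the "retina" sits here
    di_object = image_distance(f, do_object)     # where the object actually focuses
    return aperture * abs(di_focused - di_object) / di_object

f = 0.017            # ~17 mm focal length (rough, assumed)
do_focused = 2.0     # lens focused at 2 m
do_object = 0.5      # object actually at 0.5 m

for aperture in (0.008, 0.002):   # 8 mm (dark) vs 2 mm (bright) pupil
    b = blur_diameter(aperture, f, do_focused, do_object)
    print(f"pupil {aperture*1000:.0f} mm -> blur {b*1000:.3f} mm")
```

Shrinking the pupil from 8 mm to 2 mm shrinks the blur circle fourfold in this model, which is exactly why squinting or a pinhole sharpens the image while dimming it.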
There are many parts to this question, so I'll deal with only two: how the camera forms an image of the scene in front of the camera on its imaging chip and how the camera obtains a video signal from that imaging chip. The first part involves a converging lens—one that bends rays of light toward one another. As the light from a particular spot in the scene passes through the camera's lens, the lens slows the light down. Because the lens's surfaces are curved, this slowing process causes the light rays to bend so that they tip toward one another. These rays continue toward one another after they leave the lens and they all meet at a single point on the surface of the camera's imaging chip. That point on the chip thus receives all the light from only one spot in the scene. Likewise, every point on the imaging chip receives light from one and only one spot in the scene. The lens is forming what is called a "real image"—a pattern of light in space (or on a surface) that is an exact copy of the scene from which the light originated. You can form a real image of a scene on a sheet of paper with the help of a simple magnifying glass. The actual camera lens often contains a number of individual glass or plastic elements, which allow it to bend all colors of light evenly and to adjust the size and brightness of the real image that it forms on the imaging chip.
The second part of this question revolves around the imaging chip. In this chip, known as a "charge-coupled device," the arriving light particles or "photons" cause electric charge to be transferred into a narrow channel of semiconductor—that is, a material that can conduct electricity in a controllable manner. Each photon contains a tiny amount of energy and this energy is enough to move the electric charge into the channel. The imaging chip has row after row of these light-sensitive channels so that the pattern of light striking the chip creates a pattern of charge in its channels. To obtain a video image from these channels, the camera uses an electronic technique to shift the charge through the channels. The camera thus reads the electric charge point-by-point, row-by-row until it has examined the pattern of charge (and thus the pattern of light) on the whole imaging chip. This reading process is just what is needed to build a video signal, since a television also builds its image point-by-point, row-by-row. To obtain a color image, the imaging chip is covered with a tiny pattern of colored filters so that each point on its surface is only sensitive to a certain primary color of light: either red, green, or blue. This sort of color sensitivity mimics that of our own eyes—our retinas respond only to red, green, or blue light, but we see mixtures of those three colors as a much richer collection of colors.
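The point-by-point, row-by-row readout described above can be sketched as a toy simulation. The array size and charge values here are invented for illustration; a real CCD shifts charge with clocked voltages, but the order in which samples emerge is the same.

```python
# Toy model of CCD readout: charge collected in a 2-D grid of "wells"
# is shifted out row-by-row, and each row is then read point-by-point.
# The result is one flat stream of samples -- the raw video signal.

def read_out(charge_grid):
    """Return charge values in video order (row-by-row, point-by-point),
    emptying the grid as the shifting empties a real CCD."""
    signal = []
    while charge_grid:
        serial_register = charge_grid.pop(0)       # shift one whole row out
        while serial_register:
            signal.append(serial_register.pop(0))  # shift one pixel out
    return signal

# A tiny 3x4 "image": brighter light -> more charge in that well.
frame = [
    [0, 5, 5, 0],
    [5, 9, 9, 5],
    [0, 5, 5, 0],
]
video_signal = read_out([row[:] for row in frame])  # copy; readout is destructive
print(video_signal)
```

The printed list is simply the rows of the frame laid end to end, which is just the order a television needs to rebuild the picture.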
Modern cameras use a variety of techniques to find the distance to objects. Some cameras bounce sound off of the objects and time how long it takes for the echo to return. Others observe the central portion of the image (presumably the object) from two vantage points simultaneously and then adjust the angles at which those two observations are made until the images overlap. This rangefinder technique is the one you use to sense distance with your eyes. You view the object through each eye and adjust the angles of view until the two images overlap (in your brain). At that point, you can tell how far away the object is by how crossed or uncrossed your eyes are. A rangefinder camera has two small viewing windows and lenses to look at the object, just as you have two eyes to look at the object. Finally, some cameras don't really measure the distance to the object but instead adjust the lens until it forms the sharpest possible image. A sharp image has the highest possible contrast while an out-of-focus image will have relatively low contrast. The cameras adjust the lens until the light striking a sensor exhibits maximal contrast (brightest bright spots and darkest dark spots).
Yes, your eye is exactly like a camera, except that the real image forms on your light-sensitive retina rather than on a sheet of film. The lens bends light to a focus on the retina. If you are nearsighted and can only see nearby objects clearly, then your lens is too strong and bends light too much. Light from a distant object focuses before reaching your retina. If you are farsighted and can only see distant objects clearly, then your lens is too weak and bends light too little. Light from a nearby object doesn't reach a focus by the time it strikes your retina. It would focus beyond your retina, if it could continue on through space.
The different speeds of film have to do with how light-sensitive the film emulsion is. A portion of the surface of a high-speed film will register exposure to light when only a few particles of light (photons) reach it. In contrast, a low-speed film requires more photons per square millimeter to undergo the chemical changes of exposure.
While high-speed film can take pictures with less light than low-speed film, there is a trade-off. High-speed films are grainier and have less resolution than low-speed films. Thus photographs that you would like to enlarge should be taken with relatively slow film.
The retinas of your eyes appear reddish when you look at them with white light. The red eye problem occurs because light from the flash passes through the lens of your eye, strikes the retina (which allows you to see the flash), and reflects back toward the camera. This reflection is mostly red light and it is directed very strongly back toward the camera. The camera captures this red reflection very effectively and so eyes appear red. The double flash is meant to get the pupils of your eyes to contract (as they do whenever your eyes are exposed to bright light or you are startled or excited). The first flash causes your pupils to contract so that less light from the second flash can pass into and out of your eyes. Unfortunately, this trick doesn't work all that well.
When light from the flash illuminates people's eyes, that light focuses onto small spots on their retinas. Most of the light is absorbed, but a small amount of red light reflects. Because the lens focuses light from the flash onto a particular spot on the retina, the returning light is focused directly back toward the flash. The camera records this returning red light and eyes appear bright red. To reduce the effect, some flashes emit an early pulse of light. People's pupils shrink in response to this light and allow less light to go into and out of their eyes. Professional photographers often mount their flashes a foot or more from the lens so that the back-reflected red light that returns toward the flash misses the lens.
Photographic film chemically records information about the light that it has absorbed. Normally, this light was projected on it by a lens and formed a clear, sharp pattern of the scene in front of the camera. However, if light strikes the film uniformly, the information recorded on the film will have nothing to do with an image. The entire sheet of film will record intense exposure to light and will have no structure on its chemical record.
The surfaces of most lenses are shaped like the surfaces of spheres. Such "spherical" lenses can be characterized by a single distance: the focal length. For converging lenses, those with convex or outward-bulging surfaces, light from a distant object such as the sun will converge together after passing through the lens and will form an image of the object at a distance of the focal length from the center of the lens. You can find this "real" image by holding a sheet of white paper beyond the lens and looking for a clear pattern of light corresponding to the object. If the object is closer to the lens, the image will form a bit farther from the lens. The relationship between the distance to the object (the object distance or OD), the focal length of the lens (F), and the distance to the image (the image distance or ID) is given by a simple formula: 1/F = 1/OD + 1/ID.
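The formula above is easy to evaluate directly. Here is a small sketch using an assumed 10 cm focal-length converging lens; it shows the image forming essentially one focal length away for a distant object and moving farther from the lens as the object approaches.

```python
# Solve the thin-lens formula 1/F = 1/OD + 1/ID for the image
# distance ID. F and OD must be in the same units (meters here).

def image_distance(F, OD):
    """Image distance ID for a lens of focal length F and an object
    at distance OD, from 1/ID = 1/F - 1/OD."""
    return 1.0 / (1.0 / F - 1.0 / OD)

F = 0.10  # a 10 cm focal-length lens (an illustrative assumption)
for OD in (1000.0, 1.0, 0.5):  # a distant object, then 1 m, then 0.5 m
    print(f"object at {OD:7.1f} m -> image at {image_distance(F, OD):.4f} m")
```

Note that a diverging lens would use a negative F here, giving a negative ID, which is the formula's way of saying the "virtual" image sits on the object side of the lens.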
This lens formula works for diverging lenses, too, but those lenses have negative focal lengths and produce their images on the object side of the lens. You can only view these "virtual" images by looking at them through the lens itself.
The easiest way of determining a lens's focal length is by measuring the distance between the lens and the real image it forms of a distant object. Alternatively, you can measure the curvatures of the lens's surfaces and calculate its focal length. Special gauges exist that touch the lens at several points, usually a circle and a central point, and determine how curved its surface is.
There are several different systems for autofocusing. I think that the three most popular systems are optical contrast, rangefinder overlap, and acoustic distancing. The optical contrast scheme places a sophisticated light-sensitive surface in the focal plane of the camera's lens. This sensor recognizes when sharp focus is achieved by looking for the moment of maximum contrast in the image. When the lens is out of focus, the image is fuzzy and has little contrast. But when the lens is focused properly, the image is sharp and the sensor detects the strong spatial variations in darkness and brightness. The camera automatically scans the focus of its lens until it detects maximum image contrast.
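The contrast-maximizing scan can be sketched in a few lines. The focus settings and pixel brightnesses below are invented for illustration, and sum-of-squared-neighbor-differences is just one of several contrast metrics a real camera might use.

```python
# Sketch of contrast-based autofocus: try each focus setting, score
# the resulting image by its contrast, keep the highest-scoring one.

def contrast(image):
    """A simple contrast metric: sum of squared brightness differences
    between neighboring pixels. Sharp edges -> large score."""
    return sum((a - b) ** 2 for a, b in zip(image, image[1:]))

# Brightness along one row of pixels at three assumed focus settings.
images_by_focus = {
    0.8: [4, 4, 5, 5, 6, 6, 5, 5],   # badly out of focus: gentle variation
    1.0: [2, 3, 6, 8, 8, 6, 3, 2],   # closer to focus: stronger edges
    1.2: [0, 1, 9, 9, 9, 9, 1, 0],   # in focus: abrupt dark-to-bright steps
}

best_focus = max(images_by_focus, key=lambda f: contrast(images_by_focus[f]))
print(best_focus)   # the setting whose image has maximum contrast
```

Notice that the metric needs real brightness variation to work with, which is why this scheme fails on a featureless subject or in the dark.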
The rangefinder overlap system observes the scene in front of the camera through two auxiliary lenses that are separated by a few inches. It uses mirrors to overlap the images from these two lenses and can determine the distance to the objects in the picture by the angles of the mirrors. The camera uses this distance measurement to set the focus of its main lens.
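The rangefinder geometry above is simple triangulation: the two sightlines and the baseline between the windows form a triangle, and the mirror angle that makes the images overlap fixes that triangle's shape. The baseline and object distance below are illustrative assumptions.

```python
import math

# Sketch of rangefinder triangulation: two viewing windows sit a
# baseline apart; the convergence angle needed to overlap their
# images gives the distance as baseline / tan(angle).

def rangefinder_distance(baseline_m, convergence_angle_rad):
    """Object distance from the sightline triangle's geometry."""
    return baseline_m / math.tan(convergence_angle_rad)

baseline = 0.05                  # 5 cm between the two windows (assumed)
angle = math.atan2(baseline, 2.0)  # the angle an object 2 m away would need
print(f"{rangefinder_distance(baseline, angle):.2f} m")
```

The same triangle explains why a longer baseline (wider-set windows, or your two eyes) gives better distance discrimination: the convergence angle changes more per meter of distance.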
The acoustic distancing system bounces sound waves from the objects in front of the camera to determine how far away they are. The camera then adjusts its main lens for that distance. While this acoustic scheme has the advantage of working even in complete darkness, it's confused by clear surfaces—if you take a picture through a window, it will focus on the window. The optical schemes will focus on the objects rather than the window, but they will only work when there is light coming from the objects. That's why many autofocus cameras that use optical autofocus schemes have built-in lights to illuminate the objects during the autofocusing process.
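The echo-timing arithmetic behind the acoustic scheme is just distance = speed of sound times round-trip time, halved because the sound travels out and back. A minimal sketch, assuming room-temperature air:

```python
# Sketch of acoustic ranging: the camera emits a sound pulse and
# times the echo; the one-way distance is half the round trip.

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def distance_from_echo(round_trip_seconds):
    """Convert an echo's round-trip time into a one-way distance."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An echo returning after 17.5 ms implies a subject about 3 m away.
print(f"{distance_from_echo(0.0175):.2f} m")
```

A pane of glass reflects the pulse just like any other surface, which is exactly the through-a-window failure described above.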
Copyright 1997-2017 © Louis A. Bloomfield, All Rights Reserved