Most four-tube fluorescent fixtures are effectively two separate two-tube units. They share the same ballast, but otherwise each pair of tubes is independent of the other. Removing one of those pairs from the fixture will save nearly half the energy and expense, and is a good idea if you don't need the extra illumination.
The two tubes within a pair operate in series: current flowing as a discharge through the gas in one tube also flows through the gas in the other tube. That's why they both go out simultaneously. Only one of them is actually dead, but since the dead one has lost its ability to sustain a discharge, it can't pass any current on to its partner. Replacing the dead tube is usually enough to get the pair working again, at least for a while.
Leaving dead tubes in a fixture isn't the same as removing unnecessary tubes. Tubes often die slow, lingering deaths during which they sustain weak or flickering discharges that consume some energy without providing much light. Also, most fluorescent fixtures heat the electrodes at the ends of the tubes to start the discharge. During startup, the ballast runs an electric current through each electrode (hence the two metal contacts at each end of the tube) and the heated electrodes introduce electric charges into the gas so the discharge can start.
That heating current is only necessary during starting, but if the discharge never starts then the ballast may continue to heat the electrodes for days, weeks, or years. If you look at the ends of a tube that fails to start, you may see the electrodes glowing red hot. Because of that heater current, leaving a failed fluorescent tube in a fixture can be a waste of energy and money. Be careful removing those tubes from the fixture—although they produce no light, they can still be hot at their ends.
Our eyes sense color by measuring the relative brightnesses of the red, green, and blue portions of the light spectrum. When all three portions of the spectrum are present in the proper amounts, we perceive white.
The color sensing cells in our eyes are known as cone cells and they can detect only three different bands of color. One type of cone cell is sensitive to light in the red portion of the spectrum, the second type is sensitive to the green portion of the spectrum, and the third type is sensitive to the blue portion of the spectrum.
Their sensitivities overlap somewhat, so light in the yellow and orange portions of the spectrum simultaneously affects both the red-sensitive cone cells and the green-sensitive ones. Our brains interpret color according to which of the three cone cell types are being stimulated and to what extent. When both our red sensors and our green sensors are being stimulated, we perceive yellow or orange.
That scheme for sensing color is simple and elegant, and it allows us to appreciate many of the subtle color variations in our world. But it means that we can't distinguish between certain groups of lights. For example, we can't distinguish between (1) true yellow light and (2) a carefully adjusted mixture of true red plus true green. Both stimulate our red and green sensors just enough to make us perceive yellow. Those groups of lights look exactly the same to us.
Similarly, we can't distinguish between (3) the full spectrum of sunlight and (4) a carefully adjusted mixture of true red, true green, and true blue. Those two groups stimulate all three types of cone cells and make us perceive white. They look identical to us.
That the primary colors of light are red, green, and blue is the result of our human physiology and the fact that our eyes divide the spectrum of light into those three color regions. If our eyes were different, the primary colors of light would be different, too.
Many things in our technological world exploit mixtures of those three primary colors to make us see every possible color. Computer monitors, televisions, photographs, and color printing all make us see what they want us to see without actually reproducing the full light spectrum of the original. For example, if you used a light spectrum analyzer to study a flower and a photograph of that flower, you'd discover that their light spectra are different. Those spectra stimulate our eyes the same way, but the details of the spectra are different. We can't tell them apart.
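The metamerism described above can be sketched numerically. The cone sensitivity curves below are crude Gaussian stand-ins (the peak wavelengths, 40 nm width, and chosen source wavelengths are illustrative assumptions, not real physiological data), but they show how a red-plus-green mixture can be tuned to stimulate the cones exactly the way a pure yellow does:

```python
import math

# Crude Gaussian model of one cone type's sensitivity. The peak
# wavelengths (565, 535, 445 nm) and 40 nm width are assumptions for
# illustration; real cone curves are broader and asymmetric.
def cone_response(spectrum, peak_nm, width_nm=40.0):
    return sum(power * math.exp(-((wl - peak_nm) / width_nm) ** 2)
               for wl, power in spectrum)

def cone_triplet(spectrum):
    # Responses of the L ("red"), M ("green"), and S ("blue") cones
    return tuple(cone_response(spectrum, peak) for peak in (565.0, 535.0, 445.0))

yellow = [(580.0, 1.0)]          # a pure spectral yellow line
yL, yM, yS = cone_triplet(yellow)

# Solve a 2x2 linear system for the red (630 nm) and green (532 nm)
# powers that reproduce yellow's L and M responses exactly.
rL, rM, _ = cone_triplet([(630.0, 1.0)])
gL, gM, _ = cone_triplet([(532.0, 1.0)])
det = rL * gM - rM * gL
red_power   = (yL * gM - yM * gL) / det
green_power = (rL * yM - rM * yL) / det

mixture = [(630.0, red_power), (532.0, green_power)]
mL, mM, mS = cone_triplet(mixture)
# mL equals yL and mM equals yM by construction, and both S responses
# are negligible -- so these two very different spectra look identical
# to this model eye: they are metamers.
```

The same trick, done with real cone curves instead of these toy Gaussians, is exactly what monitors and printers exploit.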
As the candle burns, its wax melts into a liquid, that liquid "wicks" up the wick (like water flowing up into a paper towel), and then the extreme heat of the flame vaporizes the wax (it becomes gaseous wax). Once the wax is a gas, it burns in much the same way that natural gas burns — it reacts with oxygen in the air to become water and carbon dioxide. That reaction releases chemical potential energy as thermal energy.
One important difference between a candle flame and a natural gas flame: whereas the flame of a well-adjusted natural gas burner emits very little light (a dim blue glow), the flame of a candle is quite visible. That's because the wax vapor in a candle flame isn't mixed well with air before it begins to burn. Instead of burning quickly and completely, as natural gas does in a burner that premixes the gas with air, the wax vapor in a candle flame burns gradually as it continues to mix with air. The partially burned wax forms tiny carbon particles. Those carbon particles are so hot that they glow yellow-hot — they emit thermal radiation. In other words, they are "incandescent". It's those glowing carbon particles that produce the candle's yellowish light. Eventually the carbon particles burn away to carbon dioxide.
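The chemistry above can be checked with simple atom counting. Approximating candle wax as the paraffin molecule C25H52 (a common simplification; real wax is a mixture of similar hydrocarbons), complete combustion is C25H52 + 38 O2 → 25 CO2 + 26 H2O, and a short script confirms the equation balances:

```python
# Complete combustion of a paraffin wax molecule, approximated as C25H52:
#   C25H52 + 38 O2 -> 25 CO2 + 26 H2O
reactants = {"C": 25, "H": 52, "O": 38 * 2}
products  = {"C": 25 * 1,            # carbon in 25 CO2
             "H": 26 * 2,            # hydrogen in 26 H2O
             "O": 25 * 2 + 26 * 1}   # oxygen in the CO2 plus the H2O

balanced = reactants == products     # every atom is accounted for
```

The glowing carbon particles in the flame are an intermediate stage of that reaction: partially burned carbon that has not yet met enough oxygen to finish becoming CO2.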
Yes. My solution is to fill the well hole with objects that are dense enough and hydrodynamically streamlined enough to descend by gravity alone through the upward flow of oil. As they accumulate in the 3+ mile deep well hole, those objects will impede the flow until it becomes a trickle. Large steel balls (e.g., cannonballs) should do the trick. If they are large enough, they will have a downward terminal velocity, even as they move through the upward flowing oil. Because they descend, they will eventually accumulate at the bottom of the well hole and form a coarse "packed powder." That powder will use its enormous weight and its resistance to flow to stop the leak. Most importantly, building the powder doesn't require any seals or pressurization at the top of the well hole, so it should be easy to do.
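A quick drag estimate supports the claim that a large steel ball can descend against the upward flow. Every number below (ball diameter, oil density, drag coefficient, flow rate, pipe diameter) is an illustrative assumption, not data from the actual well:

```python
import math

g = 9.81               # m/s^2
rho_steel = 7800.0     # kg/m^3
rho_oil = 850.0        # kg/m^3 (assumed; crude oil varies)
C_d = 0.45             # drag coefficient of a sphere at high Reynolds number
d = 0.10               # ball diameter, m (a 10 cm "cannonball")

# Terminal velocity relative to the oil, from weight - buoyancy = drag:
# (rho_s - rho_f) g (pi d^3 / 6) = 0.5 C_d rho_f (pi d^2 / 4) v^2
v_terminal = math.sqrt(4 * g * d * (rho_steel - rho_oil) / (3 * C_d * rho_oil))

# Upward oil speed, assuming 60,000 barrels/day through a 0.5 m pipe
flow = 60_000 * 0.159 / 86_400          # m^3/s
v_up = flow / (math.pi * 0.25 ** 2)     # m/s, roughly half a meter per second

# Terminal velocity is measured relative to the moving oil, so the
# ball's descent rate relative to the ground is the difference:
descent_rate = v_terminal - v_up        # m/s; positive means it sinks
```

With these numbers the ball's terminal velocity (several m/s) comfortably exceeds the upward flow speed, so it descends.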
The packed powder will exert downward drag forces on the upward flow of oil and gas, slowing its progress and decreasing its pressure faster than gravity alone. With 3+ miles of hole to fill, the dense steel objects should impede the flow severely. As the flow rate diminishes, the diameters of the metal spheres can be reduced until they are eventually only inches or even centimeters in diameter. The oil and gas will then be forced to flow through fine channels in the "powder," allowing viscous drag and pressure drag to extract all of the pressure potential energy from the flow and convert that energy into thermal energy. The flow will, in effect, be attempting to lift thousands of pounds of metal particles and it will fail. It will ooze through the "packed powder" at an insignificant rate.
Another way to think about my technique is that it gradually increases the average density of the fluid in the well hole until that column of fluid is so heavy that the high pressure at the bottom of the hole is unable to lift it. The fluid starts out as a light mixture of oil and gas, but it gradually transforms into a dense mixture of oil, gas, and iron. Viscous forces and drag forces effectively couple the material phases together to form a single fluid. Once that fluid is about 50% iron by volume, its average density will be so high (4 times the density of water) that it will stop flowing upward. If iron isn't dense enough (7.8 times water), you could use silver cannonballs (10.5 times water). Then you could say that "silver bullets" stopped the leak! The failed "top kill" concept also intended to fill the well hole with a dense fluid: heavy mud. But it required pushing the oil and gas down the well hole to make room for the mud. That displacement process proved to be impossible because it required unobtainable pressures and pumping power. My approach requires no pressurization or pumping at all because it doesn't actively displace the oil and gas.
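The density figures in that paragraph are easy to check. A minimal sketch, assuming a 50/50 iron-oil mixture by volume and a 3-mile column:

```python
rho_iron = 7800.0    # kg/m^3, about 7.8 times water
rho_oil = 850.0      # kg/m^3 (assumed light oil/gas mixture)
rho_water = 1000.0   # kg/m^3

# 50% iron by volume gives the "about 4 times water" average density:
rho_mix = 0.5 * rho_iron + 0.5 * rho_oil      # ~4325 kg/m^3
relative_density = rho_mix / rho_water        # ~4.3 times water

# Hydrostatic pressure exerted by a 3-mile column of that mixture:
h = 3 * 1609.344                              # column height, m
pressure = rho_mix * 9.81 * h                 # Pa, roughly 2e8 (~2000 atm)
```

That two-thousand-atmosphere column weight is what ultimately holds the oil down, with no seal required.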
Including deformable lead spheres in the mixture will further plug the upward flow. The lead will deform under the weight of metal overhead and will fill voids and narrow channels. Another refinement of this dense-fill concept would be to drop bead chains down the well hole. The first large ball in such a chain would be a "tug boat" that is capable of descending against the upward flow all by itself. It would be followed by progressively smaller balls that need to draft (travel in the wake of) the balls ahead of them in order to descend into the well. Held together by steel or Kevlar cord, those bead chains would accumulate at the bottom of the well and impede the flow more effectively than individual large balls. Especially streamlined (non-spherical) objects such as steel javelins, darts, rods, and rebar could also be dropped into the well at the start of the filling process. In fact, sturdy sacks filled with junk steel objects—nuts and bolts—might even work. Anything that descends into the well hole is good and smaller particles are better. The point is not to form a seal, since the enormous pressure that will develop beneath any seal will blow it upward. The point is always to form narrow channels through which the oil and gas will struggle to flow.
A video of this idea appears at: http://www.youtube.com/watch?v=8H29H_1vTHo and a manuscript detailing this idea appears on the Physics ArXiv: http://arxiv.org/abs/1006.0656. I'm trying to find a home for it in the scientific literature, but so far Applied Physics Letters, Physical Review E (which includes the physics of fluids), and PLoS (Public Library of Science) One have turned it down—they want articles with new physics in them, not articles applying old physics to new contexts, no matter how important those contexts. It's no wonder that the public views science as arcane and irrelevant.
Both the fork and the food are almost certainly safe. While the microwave oven is operating, electric current will flow through the fork and electric charge will accumulate momentarily on the tips of the fork's tines. However, most forks are thick enough to handle the current without becoming noticeably hot and have tines that are dull enough to accumulate the charge without sparking. The end result is that the fork doesn't do much while the oven is operating; it reflects the microwaves and therefore alters the cooking pattern slightly, but you probably won't be able to tell. Once the cooking is over, the fork is just as it was before you put it in the oven and the food is basically just microwaved food.
If a fork has particularly sharp tines, however, then you should be careful not to put it in the microwave oven. Sharp metal objects can and do spark in the microwave oven. Those sparks are probably more of a fire hazard than a food safety hazard—they can ignite the food or its container and start a fire.
The f-number of a lens measures the brightness of the image that lens casts onto the camera's image sensor. Smaller f-numbers produce brighter images, but they also yield smaller depths of focus.
The f-number is actually the ratio of the lens' focal length to its effective diameter (the diameter of the light beam it collects and uses for its image). Your zoom lens has a focal length that can vary from 70 to 300 mm and a minimum f-number of 5.6. That means that when it is acting as a 300 mm telephoto lens, its effective light-gathering surface is about 53 mm in diameter (300 mm divided by 5.6 gives a diameter of 53 mm).
If you examine the lens, I think that you'll find that the front optical element is about 53 mm in diameter; the lens is using that entire surface to collect light when it is acting as a 300 mm lens at f-5.6. But when you zoom to lower focal lengths (less extreme telephoto), the lens uses less of the light entering its front surface. Similarly, when you dial a higher f-number, you are closing a mechanical diaphragm that is strategically located inside the lens and causing the lens to use less light. It's easy for the lens to increase its f-number by throwing away light arriving near the edges of its front optical element, but the lens can't decrease its f-number below 5.6; it can't create additional light gathering surface. Very low f-number lenses, particularly telephoto lenses with their long focal lengths, need very large diameter front optical elements. They tend to be big, expensive, and heavy.
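The arithmetic behind these two paragraphs is just a ratio. A quick sketch for the 70-300 mm, f/5.6 zoom from the question (the f/11 stop is an arbitrary example):

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    # f-number = focal length / effective aperture diameter
    return focal_length_mm / f_number

# At full 300 mm telephoto, the minimum f-number of 5.6 implies a
# light-gathering diameter of about 53-54 mm:
tele = aperture_diameter_mm(300, 5.6)    # ~53.6 mm

# At the 70 mm end, the same f-number needs far less diameter,
# so the lens uses only part of its front element:
wide = aperture_diameter_mm(70, 5.6)     # ~12.5 mm

# Dialing a higher f-number (closing the diaphragm) shrinks the
# working aperture further:
stopped = aperture_diameter_mm(300, 11)  # ~27.3 mm
```

The lens can always throw light away (higher f-number, smaller working diameter), but it can never conjure a diameter larger than its front element — hence the hard floor at f/5.6.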
Smaller f-numbers produce brighter images, but there is a cost to that brightness. With more light rays entering the lens and focusing onto the image sensor, the need for careful focusing becomes greater. The lower the f-number, the wider the range of directions in which those rays travel and the harder it is to get them all to converge properly on the image sensor. At low f-numbers, only rays from a specific distance converge to sharp focus on the image sensor; rays from objects that are too close or too far from the lens don't form sharp images and appear blurry.
If you want to take a photograph in which everything, near and far, is essentially in perfect focus, you need to use a large f-number. The lens will form a dim image and you'll need to take a relatively long exposure, but you'll get a uniformly sharp picture. But if you're taking a portrait of a person and you want to blur the background so that it doesn't detract from the person's face, you'll want a small f-number. The preferred portrait lenses are moderately telephoto—they allow you to back up enough that the person's face doesn't bulge out at you in the photograph—and they have very low f-numbers—their large front optical elements gather lots of light and yield a very shallow depth of focus.
The electric circuit that powers your lamp extends only as far as a nearby transformer. That transformer is located somewhere near your house, probably as a cylindrical object on a telephone pole down the street or as a green box on a side lawn a few houses away.
A transformer conveys electric power from one electric circuit to another. It performs this feat using several electromagnetic effects associated with changing electric currents—changes present in the alternating current of our power grid. In this case, the transformer is moving power from a high-voltage neighborhood circuit to a low-voltage household circuit.
For safety, household electric power uses relatively low voltages, typically 120 volts in the US. But to deliver significant amounts of power at such low voltages, you need large currents. It's analogous to delivering hydraulic power at low pressures; low pressures are nice and safe, but you need large amounts of hydraulic fluid to carry much power. There is a problem, however, with sending low voltage electric power long distances: it's inefficient because wires waste power as heat in proportion to the square of the electric current they carry. Using our analogy again, sending hydraulic power long distances as a large flow of hydraulic fluid at low pressure is wasteful; the fluid will rub against the pipes and waste power as heat.
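That square-law waste is worth seeing in numbers. A minimal sketch, assuming a hypothetical 10 kW load and a long line with 5 ohms of total wire resistance:

```python
def line_loss_watts(power_delivered_w, line_voltage_v, wire_resistance_ohms):
    current = power_delivered_w / line_voltage_v   # amperes the wires must carry
    return current ** 2 * wire_resistance_ohms     # power wasted as heat (I^2 R)

P, R = 10_000.0, 5.0
low_v_loss  = line_loss_watts(P, 120.0, R)     # ~34,700 W lost -- more than delivered!
high_v_loss = line_loss_watts(P, 12_000.0, R)  # ~3.5 W lost

# Raising the voltage 100-fold cuts the current 100-fold and therefore
# cuts the wire loss by a factor of 100^2 = 10,000.
```

At 120 volts the wires would waste more power than they deliver, which is exactly why long-distance transmission happens at high voltage.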
To send electric power long distances, you do better to use high voltages and small currents (think high pressure and small flows of hydraulic fluid). That requires being careful with the wires because high voltages are dangerous, but it is exactly how electric power travels cross-country in the power grid: very high voltages on transmission lines that are safely out of reach.
Finally, to move power from the long-distance high-voltage transmission wires to the short-distance low-voltage household wires, they use transformers. The long-distance circuit that carries power to your neighborhood closes on one side of the transformer and the short-distance circuit that carries power to your lamp closes on the other side of the transformer. No electric charges pass between those two circuits; they are electrically insulated from one another inside the transformer. The electric charges that are flowing through your lamp go round and round that little local circuit, shuttling from the transformer to your lamp and back again.
Solid ice is less dense than liquid water, meaning that a liter of ice has less mass (and weighs less) than a liter of water. Any object that is less dense than water will float at the surface of water, so ice floats.
That lower-density objects float on water is a consequence of Archimedes' principle: when an object displaces a fluid, it experiences an upward buoyant force equal in amount to the weight of the displaced fluid. If you submerge a piece of ice completely in water, that piece of ice will experience an upward buoyant force that exceeds the ice's weight because the water it displaces weighs more than the ice itself. The ice then experiences two forces: its downward weight and the upward buoyant force from the water. Since the upward force is stronger than the downward force, the ice accelerates upward. It rises to the surface of the water, bobs up and down a couple of times, and then settles at equilibrium.
At that equilibrium, the ice is displacing a mixture of water and air. Amazingly enough, that mixture weighs exactly as much as the ice itself, so the ice now experiences zero net force. That's why it's at equilibrium and why it can remain stationary. It has settled at just the right height to displace its weight in water and air.
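Archimedes' principle even tells you how much of the floating ice sits below the waterline. Setting the ice's weight equal to the weight of displaced water (the displaced air weighs almost nothing) gives the submerged fraction directly:

```python
rho_ice = 917.0     # kg/m^3, typical density of ice
rho_water = 1000.0  # kg/m^3

# Equilibrium: rho_ice * V_total * g = rho_water * V_submerged * g,
# so the submerged fraction is just the density ratio:
submerged_fraction = rho_ice / rho_water   # ~0.917

# About 92% of floating ice is underwater -- the familiar
# "tip of the iceberg."
```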
As for why ice is less dense than water, that has to do with the crystal structure of solid ice and the more complicated structure of liquid water. Ice's crystal structure is unusually spacious and it gives the ice crystals their surprisingly low density. Water's structure is more compact and dense. This arrangement, with solid water less dense than liquid water, is almost unique in nature. Most solids are denser than their liquids, so that they sink in their liquids.
Liquid water can evaporate to form gaseous water (i.e., steam) at any temperature, not just at its boiling temperature of 212 °F (100 °C). The difference between normal evaporation and boiling is that, below water's boiling temperature, evaporation occurs primarily at the surface of the liquid water, whereas at or above water's boiling temperature, bubbles of pure steam become stable within the liquid and water can evaporate especially rapidly into those bubbles. So boiling is just a rapid form of evaporation.
What you are actually seeing when raindrops land on warm surfaces is tiny water droplets in the air, a mist of condensation. Those droplets form in a few steps. First, the surface warms a raindrop and speeds up its evaporation. Second, a small portion of warm, especially moist air rises upward from the evaporating raindrop. Third, that portion of warm moist air cools as it encounters air well above the warmed surface. The sudden drop in temperature causes the moist air to become supersaturated with moisture—it now contains more water vapor than it can retain at equilibrium. The excess moisture condenses to form tiny water droplets that you see as a mist.
This effect is particularly noticeable when it's raining because the humidity in the air is already very near 100%. The extra humidity added when the warmed raindrops evaporate is able to remain gaseous only in warmed air. Once that air cools back to the ambient temperature, the moisture must condense back out of it, producing the mist.
Although that sounds like a simple question, it has a complicated answer. Gravity does affect light, but it doesn't affect light's speed. In empty space, light is always observed to travel at "The Speed of Light." But that simple statement hides a remarkable result: although two different observers will agree on how fast light is traveling, they may disagree in their perceptions of space and time.
When those observers are in motion relative to one another, they'll certainly disagree about the time and distance separating two events (say, two firecrackers exploding at separate locations). For modest relative velocities, their disagreement will be too small to notice. But as their relative motion increases, that disagreement will become substantial. That is one of the key insights of Einstein's special theory of relativity.
But even when two observers are not moving relative to one another, gravity can cause them to disagree about the time and distance separating two events. When those observers are in different gravitational circumstances, they'll perceive space and time differently. That effect is one of the key insights of Einstein's general theory of relativity.
Here is a case in point: suppose two observers are in separate spacecraft, hovering motionless relative to the sun, and one observer is much closer to the sun than the other. The closer observer has a laser pointer that emits a green beam toward the farther observer. Both observers will see the light pass by and measure its speed. They'll agree that the light is traveling at "The Speed of Light". But they will not agree on the exact frequency of the light. The farther observer will see the light as slightly lower in frequency (redder) than the closer observer. Similarly, if the farther observer sends a laser pointer beam toward the closer observer, the closer observer will see the light as slightly higher in frequency (bluer) than the farther observer.
How can these two observers agree on the speed of the beams but disagree on their frequencies (and colors)? They perceive space and time differently! Time is actually passing more slowly for the closer observer than for the farther observer. If they look carefully at each other's watches, the farther observer will see the closer observer's watch running slow and the closer observer will see the farther observer's watch running fast. The closer observer is actually aging slightly more slowly than the farther observer.
These effects are usually very subtle and difficult to measure, but they're real. The global positioning system relies on ultra-precise clocks that are carried around the earth in satellites. Those satellites move very fast relative to us and they are farther from the earth's center and its gravity than we are. Both differences affect how time passes for those satellites, and the engineers who designed and operate the global positioning system have to make corrections for the time-space effects of special and general relativity.
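Those GPS corrections make a nice back-of-the-envelope calculation. A sketch using standard values for Earth's gravitational parameter and the GPS orbital radius, and the leading-order (weak-field, low-speed) formulas for the two effects:

```python
import math

G_M = 3.986004e14       # Earth's gravitational parameter GM, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6       # mean Earth radius, m (a clock on the ground)
r_sat = 2.656e7         # GPS orbital radius, m (~20,200 km altitude)
seconds_per_day = 86_400

# General relativity: the satellite clock sits higher in Earth's gravity,
# so it runs FAST relative to a ground clock (leading-order formula).
gr_rate = G_M / c**2 * (1 / r_earth - 1 / r_sat)

# Special relativity: the satellite's orbital motion makes its clock run SLOW.
v_orbit = math.sqrt(G_M / r_sat)        # circular orbital speed, ~3.9 km/s
sr_rate = -v_orbit**2 / (2 * c**2)

gr_us = gr_rate * seconds_per_day * 1e6     # ~ +45.7 microseconds/day
sr_us = sr_rate * seconds_per_day * 1e6     # ~  -7.2 microseconds/day
net_us = gr_us + sr_us                      # ~ +38.5 microseconds/day
```

Without the roughly 38-microsecond-per-day correction, GPS position fixes would drift by kilometers each day, since light travels about 300 meters in a microsecond.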