Wednesday, November 30, 2011

Radiation Resistance of a Half-Wave Dipole

Last week I did some cool calculations of transmission-line impedances, and at the end of my post I said I had some tricks for antennas. Today I'm going to show you how to calculate the radiation resistance of a half-wave dipole.

Let's first recall that the impedance of free space is 377 ohms. What does this mean? That's a tough question in general, but one interesting interpretation is that it tells you how much power radiates from a flat sheet of current. Here is how it works. Suppose you have a metal sheet and current flows on the surface. Let the current density be one amp/meter. (You really have to understand why surface current density is expressed in amps per meter and not amps per meter-squared!) According to this interpretation, you will be radiating power from the surface at a rate of i-squared-R. If your surface is 100 square meters at 1 amp/meter, then you are radiating 37,700 watts.
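If you want the arithmetic spelled out, here it is as a scrap of Python (nothing fancy, just the i-squared-R bookkeeping from the paragraph above):

```python
Z0 = 377.0   # impedance of free space, ohms

def sheet_power(current_density_A_per_m, area_m2):
    """Naive 'i-squared-R' power radiated from a flat current sheet."""
    return Z0 * current_density_A_per_m**2 * area_m2

# 100 square meters at 1 A/m:
print(sheet_power(1.0, 100.0))   # 37700.0 watts
```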

DISCLAIMER: IT DOESN'T EXACTLY WORK THIS WAY.

It would be nice if it was that simple, but there are all kinds of complications, and in general it doesn't work this way at all. And yet there are situations where you can get away with this method. The calculation I'm about to do is one such situation. I'm not even sure why it works, but it gives a pretty good answer.

Let's take my favorite local radio station, 680 CJOB. You can easily verify that the wavelength of the station is close to 440 meters. For my half-wave dipole, I'm going to take a giant metal sphere 440 meters in circumference (r = 70m) and feed it at the equator.
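You can check those numbers in a couple of lines (taking the speed of light as 3e8 m/s, which is close enough for our purposes):

```python
import math

c = 3.0e8    # speed of light, m/s
f = 680e3    # CJOB's frequency: 680 kHz

wavelength = c / f           # comes out close to 440 meters
r = 440.0 / (2 * math.pi)    # radius of a sphere with 440 m circumference

print(round(wavelength), round(r))   # 441 70
```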

You can see I've chosen the size of my antenna so that I've hopefully placed a natural current node at both the north and south poles. The idea is that the current, supposing it to be 1 A/m at the equator, diminishes to zero at the poles in a natural way. Oddly enough, one "natural" way to do this is by letting the current density simply be 1 A/m everywhere! Since the lines of latitude get shorter as you go north, I will have a total current of 440 A at the equator, 220 A at latitude 60 North (and South), and zero at the poles. The current density is everywhere 1 A/m, but the total current goes smoothly to zero at the poles.

That makes my power calculation easy. I have  377 W/m^2 everywhere on the surface, and a total surface area of just over 30,000 m^2, for a total power of 12.6 MW. Taking into account the total current and equating power = i-squared-R, I get an effective resistance of 65 ohms.

(EDIT: I just noticed that I left out something very important from this calculation...where did I get my current??? It's like this: we assume the sphere is cut in half at the equator, and we are feeding current in uniformly around the equatorial circumference. Since the circumference is 440 meters, we need exactly 440 Amps to provide our uniform current density of 1 A/m. That "440 Amps" is the number I used in the i-squared-R formula in the last paragraph.)

This is really close to the "correct" value of 73 ohms, but it seems I was more lucky than I had a right to be. In fact, my current distribution is not all that physical, although it made the calculation easy. You can see there is something wrong with my current distribution because it gives me a uniform power density over the whole surface of the sphere, and everyone knows that an actual dipole radiates in a donut pattern.

Does this help fix up my calculation? Unfortunately, it does just the opposite. Correcting for the donut pattern, my actual power should be close to half of what I initially estimated, which knocks my resistance down to 32 ohms. It's not really that good, but at least it's in the ballpark. Sometimes it's good just to get into the ballpark.

Regardless of the exact value of the resistance, there is one big question that I've avoided mentioning altogether: a normal antenna is a skinny little wire, and I've used a sphere for my approximation. Why in the world is a big fat spherical antenna a good approximation for a wire?

It turns out that it is, but that is probably a story for another day.

Tuesday, November 29, 2011

Einstein=>Bohm=>Feynman=>Bell

Yesterday I said I had some insights to share concerning EPR, Bell, and entanglement. This whole question is one of the central philosophical issues of quantum mechanics, and it is talked about EVERYWHERE. But there is something very wrong with the standard narrative. The story as it is normally told does not make complete sense to me. Something important is missing, and I haven't been able to put my finger on it. Let me explain.

The EPR paradox was, from the outset, a philosophical question which was not thought to have direct experimental consequences. Over time, this situation began to change, and ultimately, with the publication of Bell's ground-breaking analysis in 1964, experimenters were galvanized into action. Within eight years the first results had begun to appear (Clauser 1972), culminating with the famous results of Aspect in the 1980's. I'm recalling this from memory but I think I've pretty much got it right.

When Einstein put forward the paradox in 1935, there was no talk of entangled photons. His example was much more understandable. (There is some argument as to whether the published example was due more to Einstein, Podolsky, or Rosen, but that's another question.) Essentially it deals with an unstable atom that spontaneously disintegrates for some reason. I believe a suitable example would be an atom in an excited state that emits a photon. According to Copenhagen, the emitted photon may be detected anywhere on the surface of an expanding sphere: the direction is random. The atom, which recoils, will therefore be found travelling in a direction diametrically opposed to the photon, so that conservation of momentum is preserved. None of this seems peculiar in any way.

What Einstein pointed out was that strictly speaking, right up to the moment of detection the photon might have been found anywhere on that expanding sphere....and likewise the recoiling atom. It is only after the moment of detection of one particle that the direction of the other becomes determined.

This is very different from the obvious interpretation. The obvious determination is that at the moment of disintegration, the photon took off in one direction and the atom in the other. The "randomness" in question was merely our lack of knowledge of which directions the particles took. In this interpretation, there is nothing unusual in the fact that measuring the direction of one particle automatically tells us the direction of the other particle.

Einstein's argument was that this interpretation was not consistent with the theory of quantum mechanics. According to the theory, either of the two particles, up to the very moment of detection, truly had the capability of manifesting its presence at any point on the surface of its respective spherical wavefront! It was only after one of them made its presence felt in a detector that the directional vector of its counterpart became fixed. In other words, a measurement on one side of the laboratory would instantly impact the physical status of the particle on the other side of the laboratory. And this instant change of status took place no matter how big the room...even if it would seem that the information to get from A to B would have had to travel faster than the speed of light.

The catch was that no one believed at the time that this was any more than a philosophical question. How could anyone ever prove that the two particles hadn't shot off in their chosen directions at the very moment of disintegration? Certainly the theory said otherwise; but experimentally, all that anyone could ever do was to verify that the directions were indeed opposite to each other. No one knew a way to show that the particle which was in fact detected at A "might just as well" have been detected at B, but for random chance. People could shrug off the issue of faster-than-light communication with the answer that this was a mere artifact of the theoretical construct, not subject to experimental verification.

And so the matter rested until 1950. In that year, David Bohm proposed a different mechanism to settle the question. It was well known that in a helium atom, the two electrons had opposite spin orientations: if one was up, the other had to be down, and vice versa. Not everyone understood, as Bohm did, that the concepts of "up" and "down" were mere arbitrary labels that did not really describe physical reality. Which way was "up"? In fact there can be no answer to this question. It is a little bit more correct to say that whatever direction the spin of one electron is oriented, the other electron has its spin pointing the opposite way. But even this description does not do justice to the truth.

Bohm understood that the spins of the two electrons were so intimately "entangled" that there was absolutely no way to speak in a meaningful way about any preferred direction. The spins simply cancelled out to zero everywhere. It was a total system with zero spin, period. (I actually explain how two particles with spin can combine into a spin-zero state in this blogpost of May 2011 ).

Bohm proposed that if a helium atom could be made to expel its electrons, the spin-zero state of the electrons would be preserved. They would shoot off in opposite directions. If either electron were captured by a spin measuring apparatus such as a modified Stern-Gerlach setup, it would of necessity be detected in one of two possible spin states, depending on the orientation of the detector. This was not controversial. What Bohm pointed out, however, was that the other electron, once the first electron had been measured, was no longer free to have any old random spin: its spin, if measured would necessarily have to be precisely opposite the spin of the first one.

What made this different from Einstein's original example? For the first time, there was the tantalizing prospect that this effect might be experimentally observable. There was unfortunately a catch: no one knew a way to make a helium atom spontaneously expel both electrons. To this day, Bohm's experiment has never been done; nor has any other experiment been done to measure the entangled spin state of two electrons.

The next step in the historical narrative belongs, as far as I can determine, to Richard Feynman. Feynman is not much talked about in the general retelling of the story, but if the next step is not his then I don't know whose it is. All I know is that at some point, somebody realized that the mathematics of photons has in some way a one-to-one correspondence with the mathematics of electrons. And in 1962 Feynman, lecturing an undergraduate class of second-year physics students at Caltech, proposed an equivalent experiment to Bohm's using photons instead of electrons.

Feynman's example was the self-annihilation of positronium, and his explanation is found in great detail in (of course!) the Feynman Lectures on Physics, Volume III. Positronium is an unstable "atom" made of one electron and one positron. In certain conditions, these particles will be found in a spin state identical to the spin-zero state of the electrons in a helium atom. Unlike helium, however, positronium is definitely unstable, and it will absolutely decay in a matter of nanoseconds. In fact, it is not so much a decay as a self-annihilation: and the products are not the two electrons, but rather two high-energy photons. The remarkable thing is that the photons inherit the spin state of the positronium! The photons are spin-entangled.

If we can measure the polarization state of these decay products, we can experimentally observe the phenomenon of entanglement. Unfortunately, for technical reasons that I do not understand, this is not readily achievable. Perhaps polarizers are not available for high-energy gamma rays as they are for visible light. What I do know is that, for whatever reason, the experiment has never been carried out. To this day, we have not measured the entanglement properties of the decay products of positronium.

Which just about brings us to the year 1964, when Bell published his ground-breaking analysis. But that's a story for another day.

Monday, November 28, 2011

Bell, EPR, and the Business with the 22.5 degrees

What is the big deal with Quantum Mechanics anyways? Why is there so much talk about the mysteries and the paradoxes? Sure, there are a lot of smug people out there in the world of physics who say there's nothing to worry about, it's the most perfect theory ever devised by man, and just because you can't explain what's happening is no reason to question the result of your calculations.

I don't believe that for a second. Quantum mechanics is messed up because people in the physics community don't make a serious effort to understand what is really happening. At some level, they like having all those mysteries and paradoxes. It gives the ones who can do the calculations virtually the status of priests of some revealed religion, which only they are qualified to interpret.

I first began to suspect something was really wrong back in 1987 when I managed to solve a little problem in antenna theory that had been bothering me for a number of years. I wanted to figure out the maximum theoretical power you could absorb with a small crystal radio set, and I came up with the mind-boggling result that, assuming you had access to ideal materials like perfect conductors, the answer didn't depend on the size of your antenna! You can see how I figured this out by checking out this earlier blogpost of Oct 10 2011 . It's a very cool calculation that doesn't rely on any detailed knowledge of antenna theory or radio engineering, but only on the superposition of two wave patterns: the incoming plane waves from the distant transmitter, and the outgoing re-radiated waves of the receiving antenna.

What caught my attention was the implications for atomic theory. Ever since high school, I remember being taught that the wave theory of light failed to explain the photo-electric effect. Among other things, the most telling argument came down to the fact that the energy in a beam of light was too diffuse and weak to concentrate the necessary punch into the tiny volume of a single atom, so as to be able to knock out an electron.

But what the crystal radio showed me is that the absorption cross section of a receiving antenna had nothing to do with the physical size of that antenna! In the simplest possible case, the photo-ionization of a hydrogen atom, the effective cross-section for energy absorption was in fact a million times greater than the physical cross section of the atom.

Yet this flawed argument was considered so important in the justification of quantum mechanics that it occupied a key place in the high school curriculum of just about every school division in the world. If the teachers, professors, and textbooks could be so wrong about such a fundamental point, then where else might they be wrong?

Over the years I took up this argument wherever I had the chance, mostly in internet forums. I was usually met with ridicule and accused of being a troll, but occasionally someone with authority would weigh in and point out that my argument was basically the same as that made by respected people like Ed Jaynes in the 1960's. (I eventually found out that my argument was actually a little different, as I explain in this blogpost , but that's another question.) I always appreciated the support but it bothers me still that physics arguments should be arbitrated by recourse to higher authority. Isn't the whole idea of physics that we should be able to hash out these arguments based on their merits?

One thing that my detractors used to do was to say: "Even supposing you are correct about the photo-electric effect, how can you explain the Compton Effect?" The nasty thing about this argument is that they don't actually admit I am right about the photo-electric effect! They say "even if..." which doesn't concede anything, and then refuse to deal with the point I raised until I can deal with a completely different point!

It's unfair because there is no limit to the number of specific complications they can invoke in order to avoid meeting my argument head on. One of the most obnoxious practitioners of this tactic is the well-known ZapperZ of physicsforums.com, who would say things like, "even if you can explain some of the more basic features of the photo-electric effect, you cannot explain the detailed scattering-angle dependencies....". My high-school teacher didn't explain the "detailed scattering-angle dependencies" either when he declared that "light is made of photons", but no one objected to that! There is a world of difference between saying "you, Marty Green, have not yet shown that you can calculate a specific result based on your theory" and saying that "your theory is incapable of giving correct results for this calculation", but the argument by ZapperZ and others simply glosses over the distinction and conflates them into one single argument which is impossible for anyone to answer.

I endured almost ten years of indignities at the hands of these people because in the end, I was unable despite my best efforts to explain the Compton Effect using the wave theory of light. And then one day, I did it! I was jubilant and imagined that I was on the verge of overturning the whole paradigm that had dominated physics since the Copenhagen school adopted Max Born's probability interpretation of the wave function in 1927. Sadly, it was not to be, as I explained in my recent blogpost, "How I got Cheated Out of the Nobel Prize" . The problem was not that my argument was flawed, it was that it had already been published by no less than Schroedinger himself in 1927, only to be ignored by the then alpha-males of the physics world: Bohr, Heisenberg, Born, Lorentz, etc.

That's how it goes in physics. You come up with something and everyone says, "you're a quack, you're crazy, you don't know what you're talking about", and then you find a reference in the published literature to support what you're saying! Without skipping a beat, the naysayers are singing a new tune: "oh, it's nothing new, it's well known, nobody is interested in that anymore."

You don't normally recover from a setback like that, but incredibly I still have one last shot at immortality. It's my theory of Quantum Siphoning , and if anything is a game-changer this is it. I already said that the density of the wave energy was a key problem that I had overcome in the argument over the photo-electric effect, but in fact I still had a problem with energy density. The problem was the case of light from an extremely distant star falling on a photographic plate. It seems that no matter how weak the light, even if you have to wait hours between events, here and there you get single atoms of silver bromide being reduced to metallic silver. The problem was that the energy required for this chemical transformation was more than anything conceivably available from the incident light in wave form.

What I did was to demonstrate, by a brilliant analysis using no more than first-year physical chemistry, that the energy necessary for the transition was already available in the silver bromide crystal! The mistake everyone made was to consider only the enthalpy of the transition and to ignore the entropy component. I calculated the Gibbs Free Energy of the transition at very low concentrations (a few parts per trillion of metallic silver is sufficient to yield a developable image!) and showed that the crystal, considered as a solid solution, was already in a state of near-equilibrium. So only a minimal nudge was necessary to drive the chemical reduction.
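If you want a feel for the size of the entropy effect, here's a back-of-envelope sketch. For an ideal solution, the dilution contribution to the chemical potential of a component at mole fraction x is RT*ln(x); the parts-per-trillion mole fraction below is just an illustrative number, not the exact figure from my calculation:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.0    # room temperature, K
x = 1e-12    # assumed mole fraction of metallic silver: parts per trillion

# Ideal-solution dilution term in the chemical potential: RT * ln(x)
dilution_term_kJ = R * T * math.log(x) / 1000.0
print(round(dilution_term_kJ))   # about -68 kJ/mol
```

At parts-per-trillion dilution, the entropy term is already tens of kJ/mol, which is the kind of number that can tip a near-equilibrium reaction.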

The name "quantum siphoning" refers to the detailed mechanism whereby the energy of the crystal is funneled into a single atom. It's a cool name, by analogy with "quantum tunneling", and it's all mine. Google it and I come up first. So I feel I've staked out my territory. If anyone else tries to say that they came up with it first, they'll have a hard time ignoring my prior claim. Call it a long shot if you like, but that's my last and best shot at the Nobel Prize.

Is there nothing I can't explain? Sadly, it turns out there are still many problems I don't know how to solve...yet. But that doesn't mean that I, or somebody else, will never solve them. The problem is that for me to do it personally I'd have to live to be about 400 at the rate I'm going. Still, I can't stop doing what I do. One of the hard cases that I continue to struggle with is the whole business of the EPR paradox, with Bell, Aspect, entanglement, and of course the famous crossed polarizers at 22.5 degrees.

This is a problem I haven't been able to solve yet, but I have some insights and I'm going to take them up in the next few days.

Sunday, November 27, 2011

Calculating Transmission Line Impedance with Pictures

It's funny but the blogpost which has received the most hits in the last month was my article about transmission line impedances. I don't know why it hit a nerve with people but I guess it deserves a follow-up. Today I'm going to show you tricks for calculating impedances without knowing any formulas.

There is however one fact you need to know: the impedance of free space is 377 ohms. What does this even mean? It's a pretty deep philosophical question when you get right down to it, but in practical terms it can be answered rather easily. It means a freely propagating electromagnetic wave whose magnetic field amplitude is 1 amp/meter will have an electric field amplitude of 377 volts/meter. It's as simple as that.

Of course it's far from simple, but that will have to do for now. Let's look at some implications.

The simplest transmission line we can calculate is the parallel strip line. Let's take two metal strips 100 meters wide (!) separated by a height of one meter. Let the electric and magnetic fields be as we declared above, and let the electric field be vertical (as it must be, since it has to be perpendicular to the metal sheets) and the magnetic field horizontal. Then the magnetic potential (field times distance) is 100 amps and the electric potential is 377 volts. The impedance is just the quotient of these, or 3.77 ohms.
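Here's the same calculation as a scrap of Python, so you can play with other shapes (ignoring edge effects, as always):

```python
Z0 = 377.0   # impedance of free space, ohms

def strip_impedance(width_m, height_m):
    """Ideal parallel-strip line, no edge effects.
    Electric potential = Z0 * height (volts, at 1 A/m);
    magnetic potential = width (amps, at 1 A/m);
    the impedance is their quotient."""
    return Z0 * height_m / width_m

print(strip_impedance(100.0, 1.0))   # 3.77 ohms
```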

Of course, it's just the same if you dimension the strip in millimeters instead of meters. The impedance is really just a function of the shape and is independent of the actual size. It's actually helpful to think of the dimensions of impedance as being "ohms per square" rather than simply "ohms".

If we take the basic square as our working unit, we can see that the impedance of a strip line whose height is equal to its width should just be 377 ohms. In practice this turns out to be not true, because it doesn't take into account edge effects. But when the strip is very wide, as in our first example, the edge effects become negligible. The edge effect is also negligible in cases where the geometry wraps around upon itself so there are no edges. These are the cases we will look at, and our basic unit will be the square whose impedance is 377 ohms.

The first case of interest is the coaxial line, which looks like this:
You can see that there are six "squares" in parallel with each other. (In this game we recognize a "square" as a quadrilateral which has been distorted according to certain intuitive rules.) Since each square is 377 ohms, the total impedance is just 60 ohms. (Everyone knows how to do resistance in series and parallel, right?) In this case, of course, I have sketched the relative diameters of the cylinders so that six squares fit around the circle: really, it's 2-pi squares (taking pi = 3 if you like, that makes six), and the relative diameters of the cylinders are of course 2.718:1, which you recognize as "e".

If the cylinders are very much closer in diameter you can of course treat them as parallel strips with wrap-around geometry. If the cylinders are farther apart in diameter you can work logarithmically: each additional factor of e on relative diameters adds 60 ohms to the impedance.
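That logarithmic rule is easy to code up. Really it's just Z0 divided by 2-pi per factor of e, which is where the 60 comes from:

```python
import math

Z0 = 377.0   # impedance of free space, ohms

def coax_impedance(diameter_ratio):
    """Coaxial line: 60 ohms per factor of e on the diameter ratio,
    i.e. Z = (Z0 / (2*pi)) * ln(D/d)."""
    return Z0 / (2 * math.pi) * math.log(diameter_ratio)

print(round(coax_impedance(math.e)))   # 60
```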

The other interesting case is parallel wires. I'm going to take our first example as two wires of radius r whose centers are separated by a distance of 2r. I've drawn the squares in and it looks like this:

You can see how the squares line up in rows: there are four rows of two squares, which easily gives an impedance of 377 x 2/4 = 190 ohms. Actually, that's wrong: there's a fifth row of squares which is the wrap-around that goes all the way behind and circles back! So we really have to multiply our unit impedance by 2/5, giving an answer of pretty close to 150 ohms.

What about the case where the parallel wires are farther apart? Before we can do that we need to analyze one more transitional case. I've just done the case where D/d (the ratio of separation to diameter) is equal to 2. Now I'm going to do the case where D/d = 4. It looks like this:

You can see I've just taken the previous diagram and shrunk the wires. Because the field is distorted, the shrunken wires don't move to the center of the original circles. You can see I've placed them so everywhere I've still got my 377-ohm unit squares. We can easily get the resistance just by adding the squares up. Remember we had an impedance of 150 ohms between the larger circles. You can see that there are 5 squares between the larger and smaller circles, so dividing that into 377 gives us an additional 75 ohms on each side. The total impedance is therefore 300 ohms.

You may remember that the original article I posted was in response to this discussion on StackExchange.com. The problem in question concerned a transmission line with a ratio of diameters D/d = 50. People who know all about formulas and things calculated the impedance as being 552 ohms. Let's see how close I get using my pictures. I've already got a value of 300 ohms for D/d = 4. That means within each circle I have to reduce my diameters by a further factor of 12.5. I already said for each factor of e I get another 60 ohms of impedance. Since 12.5 is approximately e^2.5, I get an additional 150 ohms on each side, bringing my total impedance to 600 ohms. That's an error of 48 ohms. Maybe it's a little better if I actually calculate the natural logarithm of 12.5? Actually, it's just a bit worse: I get ln(12.5) = 2.52, which is just a hair above my eyeball estimate. Still, I think it's pretty good on the whole.
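You can check my picture estimate against the textbook formula for a two-wire line, which as I understand it is Z = 120 * arccosh(D/d) for wires of diameter d with centers a distance D apart:

```python
import math

# Picture estimate from the post: 300 ohms at D/d = 4, plus
# 60 ohms per factor of e for the remaining factor of 12.5 on each side.
picture = 300.0 + 2 * 60.0 * math.log(12.5)

# Textbook formula for parallel wires (as I understand it):
exact = 120.0 * math.acosh(50.0)

print(round(picture), round(exact))   # 603 553
```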

There's more cool stuff you can do pictorially with impedances, including calculating the radiation resistance of a half-wave dipole, but I'm going to leave that for another day.

Sunday, November 20, 2011

Marty Does Relativity

I said last time that I was going to try and explain why kinetic energy is ½*mv^2. I’m going to resort to relativity to make my case. I don’t like doing this because relativity really isn’t my territory. I find the arguments really mathematical and hard to follow. But that’s what I’m stuck with.

We start by introducing the concept of four-vectors. I hope you know what an ordinary vector is but if you don’t you don’t. That’s my starting point. (Ordinary vectors, that is.)

The distance to a point (x,y,z) is of course given by the three-dimensional Law of Pythagoras:

D^2= x^2+ y^2+z^2

In relativity, we don’t stop here. We say the true relativistic “distance” must include a time component in addition to the three space components, so the relativistic equation looks like this:

D^2= x^2 + y^2 + z^2 - t^2

Why the minus sign in front of the t^2 term? That’s a long story, and it has to do with the detailed mathematical structure of the space-time continuum. It seems that time and space, while interchangeable up to a point, are not really on a completely equal footing in relativity. Simply put, time is like an imaginary number. So when you square it, it goes negative. Also, to get consistent units, we really have to multiply by the speed of light. Otherwise we can’t have time and distance in the same equation. We can rewrite our formula for relativistic distance, for convenience letting the y and z components go to zero, and here is what it looks like:

D^2= x^2 + (ict)^2

I should mention that the intuitive meaning of relativistic distance is very different from what we ordinarily think of. For example, the "distance" between two points in space-time is "zero" when they are at opposite ends of a beam of light! But that's another question. The point is that in relativity, everything that we usually think of as an ordinary vector, or a "three-vector", is actually a four-vector. And momentum is a perfect case in point. When we track the momentum of something, we do it as a three-dimensional vector. What then is the fourth piece of the vector in relativity, the so-called "time component" of momentum?

It seems that the answer is: mass. Something has momentum because it is careening through space in three dimensions. It is also careening through time, and that is the fourth component. Letting momentum be denoted by the vector p as is customary, and following the exact same rules as when we converted ordinary distance to a four-vector, we can formally write the relativistic equation for momentum as follows (with the y and z components set to zero):

p^2= (mv)^2+ (icm)^2

Now how are we going to make sense of this? You can see that the final term is starting to look a lot like Einstein's formula for energy, except there's an extra factor of m thrown in. But the important thing to remember is something that doesn't show up explicitly in Einstein's formula, and you can't understand anything if you don't know about it. It's the relativistic change in mass.

In relativity, the mass increases as you go faster, and it increases by a factor of one over the square root of (c^2 – v^2)/c^2. In our equation, wherever the mass appears it happens to be squared, which is nice because we won't be carrying around square root signs. Putting in the correction for change in mass, what we get for momentum squared is:

p^2 = m^2*v^2 – m^2*c^2 – m^2*v^2

It turns out that the increase in mass exactly cancels out the "ordinary" momentum term, and all that is left is the mc^2 term. Dividing both sides of the equation by m (and not fussing over the overall minus sign that the imaginary time component drags along), we get

(p^2)/m = m*c^2

which is just Einstein’s famous formula. It seems that Einstein’s mc^2 is a relativistic invariant, a property of a body which remains constant regardless of its apparent momentum. Because of the negative sign in the t component of the four-vector, the apparent gain in p^2 is compensated by the increase of mass in the t component.

Of course I cheated in this analysis. When I evaluated p^2, I used the relativistic change of mass for the t term, but I ignored it for the x term. I did it because it made my answer come out cleanly. I don’t know why it works.
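For what it's worth, here is how I believe the bookkeeping goes if the relativistic mass is applied to both components consistently, writing gamma = 1/sqrt(1 - v^2/c^2) for the mass-increase factor:

```latex
p^\mu p_\mu = (\gamma m v)^2 - (\gamma m c)^2
            = \gamma^2 m^2 (v^2 - c^2)
            = \frac{v^2 - c^2}{1 - v^2/c^2}\, m^2
            = -m^2 c^2
```

The answer comes out to the same -m^2*c^2 no matter what v is, which is the clean way of saying that the apparent gain in ordinary momentum is exactly paid for by the extra mass in the time component.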

That's the first and last time I'm going to try and do relativity calculations in this blog.

Why not E=mc^3???

Peter, my brother-in-law, tells me that his son Charlie asked him a physics question the other day that he couldn’t answer: why is it E=mc-squared, and not mc-cubed or some other power? I told him I’d think about it and maybe post something on my blog. Well here goes.

For starters, this question isn’t really about relativity. It’s about energy, and E=mc^2 only makes sense if energy has the units of (mass)*(velocity)^2. So we might as well ask: why is kinetic energy defined as KE = ½*mv^2?

The crazy thing is I don't have a really good answer to this question. I did once upon a time, but I don't any more. The time I knew the answer was almost forty years ago when I was in high school and we had just learned the formula for kinetic energy. I remember asking why is that the kinetic energy, when all of a sudden another student declared with conviction: "Can't you see? It's just the indefinite integral of momentum with respect to velocity." (My brother-in-law will not have a hard time guessing that the "other student" in question was none other than Randy Ellis, currently a professor of computer science in Kingston, Ontario.)

This was an amazing explanation which blew my mind. We were all (including Peter) taking calculus together, and it struck me as obvious that if you just wrote

∫ mv dv

that the solution was indeed 1/2mv^2, the formula for kinetic energy.

It took me about three days to realize that this didn’t even mean anything! Integrating what for what purpose? It was just a bunch of letters on the page, and it didn’t explain anything. I still don’t know what it means. I’m not going to say you absolutely can’t make any sense of this. I’m just saying that at the high school level, it couldn’t have meant anything to any of us.
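For the record, there is a standard route that does give the integral a meaning, namely the work-energy chain from Newton’s second law. This is the usual textbook derivation, which none of us knew at the time:

```latex
% Work done by a force accelerating a mass from rest to speed v.
% The middle step swaps dt between the two differentials (chain rule),
% turning "force through a distance" into "momentum integrated over velocity":
\[
  W \;=\; \int F\,dx
    \;=\; \int m\frac{dv}{dt}\,dx
    \;=\; \int m\frac{dx}{dt}\,dv
    \;=\; \int_0^v m v'\,dv'
    \;=\; \tfrac{1}{2}mv^2 .
\]
```

Whether that little swap of dt really explains anything, or just hides the mystery one layer deeper, is another question.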

Still, there are pieces of the truth in all these things. We know that work is (force)*(distance), and we know that force is (mass)*(acceleration). Putting these together we know that the units of work, and hence energy, must at the very least be equal to kilograms*(meters)^2/(seconds)^2, which are indeed the units found in the formula for kinetic energy, and also in Einstein’s formula. So it really can’t be E=mc^3 or something else.

But just having the units line up still doesn’t completely satisfy me. Why is kinetic energy proportional to the square of the velocity? Why do we keep track of 1/2mv^2 and not some other power of v? In fact, we do keep track of another power of v, namely the first power! We keep track of mv and call it momentum, and we say that momentum is conserved. Then we keep track of 1/2mv^2 and call it energy, and we say that energy is conserved! Why don’t we keep track of 1/6mv^3 and call it something else, and look for a new conservation law? Where does it stop?

We can look to other forms of energy for guidance, because of course it doesn’t start and stop with kinetic energy. There is, for example, spring energy, with the formula ½*kx^2 where x is the displacement, and there is also the energy in a capacitor, ½*CV^2 where V is the voltage. These formulas look very much like the formula for kinetic energy. If we can explain one, perhaps we can explain them all?

It turns out to be not so easy. The spring formula and the capacitor formula are indeed easy to explain, and their explanations turn out to be very similar to Randy’s calculus-style explanation of the kinetic energy formula: the only difference is that unlike kinetic energy, the calculus explanation makes perfect sense in these cases! In both cases the energy is really the product of two different quantities: for the spring, it is the force and the displacement, and for the capacitor it is the voltage and the charge. In both cases the two variables are proportional to each other, so instead of writing (voltage)*(charge), you can just write (voltage)^2 and multiply by “capacitance”, which hides the fact that capacitance is simply the proportionality factor relating voltage and charge. Same with the spring constant.
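Concretely, the two integrals work out like this (just the standard calculations):

```latex
% Spring: integrate the force kx' over displacement from 0 to x.
% Capacitor: integrate the voltage q/C over charge from 0 to Q, then use Q = CV.
\[
  E_{\mathrm{spring}} \;=\; \int_0^x k x'\,dx' \;=\; \tfrac{1}{2}kx^2,
  \qquad
  E_{\mathrm{cap}} \;=\; \int_0^Q \frac{q}{C}\,dq \;=\; \frac{Q^2}{2C} \;=\; \tfrac{1}{2}CV^2 .
\]
```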

I don’t know any way to apply this same argument to the formula for kinetic energy. It’s not clear to me in any way that kinetic energy must be the product of momentum and velocity in the same way that the energy in a capacitor is the product of charge and voltage.

So how do we explain it? I suppose the easiest thing is to just say that that’s how it is, it works and gives us useful results, and so we accept it. But I can’t just give up that easily. I’ve been turning this over and over and I’ve come up with an explanation. I’m not that happy with it but it's going to be the topic of my next post.

Wednesday, November 16, 2011

Shout out to Slovenia

I have been blogging for going on two years now, but only last month I realized that Google Blogger tracks all kinds of statistics for me. I had no idea people were actually reading me, but they were! I have had going on 4000 hits in the 20 months I’ve been blogging, and the last two months I’ve averaged over 500 per month. The greatest number of hits are from the United States, Canada, England, and Germany. Those are the top four countries by a fairly wide margin. However, the race for fifth place is quite hotly contested by Russia, India, the Netherlands and…Slovenia?

Just today, for the first time since I’ve been monitoring my stats, little Slovenia edged past Russia for sole possession of fifth place, with 85 all-time visits to my blog. Congratulations, Slovenia, and keep up the good work. We’re thrilled to have you here.

Tuesday, November 15, 2011

Why Not Write Hebrew with Arabic Script?

I follow the news from Israel on the internet, usually with some dismay. Today, however I had a surprise. Moshe Arens, one of the more stubborn right-wing commentators, had an article in Haaretz where he supported the retention of Arabic as an official language in Israel. It’s hard to believe that there is a move afoot to downgrade the status of Arabic, but it’s even more of a surprise to find Arens speaking out against such a move.

For a long time, I have been advocating that Israel and Israelis should be more proactive in identifying themselves as belonging to the Middle East rather than as a European outpost. This means showing more interest and respect towards Arabic culture. One of the most conspicuous aspects of that culture is their incredibly beautiful written script, recognized instantly worldwide. I therefore came up with the idea that we should adopt the Arabic script for use in Hebrew.

Of course I am not talking about replacing the holy letters of the Torah. These cannot in any circumstances be tampered with. It’s our second alphabet I’m talking about, our written script. What do we need it for? It’s totally replaceable.

The beauty of my proposal is that since Hebrew and Arabic are so closely related, many words would become instantly recognizable to speakers of both languages, just as English and French words are mutually recognizable in print even when they are pronounced differently.

More importantly, it would be a huge gesture towards the Arab world that we respect their culture and wish to be a part of the Middle East. Such a gesture is long overdue on our part. Sadly, my proposal has been ignored since I first raised it five years ago. With this posting, maybe I can give it a small bump.

Sunday, November 13, 2011

Next week I am supposed to teach a unit of Grade 9 Science: static electricity. I am going crazy trying to make those damned experiments work! You rub this or that with a piece of fur and little pieces of paper are supposed to jump up and down: but try and explain it? Oh, I know, it’s all very simple: there is the triboelectric effect, and there is induced charge, but do you think it actually works the way the lab manual says it does? Try it yourself.

I can hardly begin to count the number of contradictions I find when I actually try to reconcile the theory with what I am actually seeing in my basement. Actually, I’m exaggerating: I can resolve quite a few of them, but when I step into the classroom tomorrow I’m supposed to explain this stuff at the Grade 9 level: no Coulomb’s law, no concept of voltage or capacitance, nothing but “like repels like” and “electricity is made of particles”. It’s a scary prospect.

I’m not going to get into detail today on the experiments, but there is one glaring fact that everybody knows. It’s a lot easier to put a negative charge on something than a positive charge. Why is this?? When I rub a balloon with a piece of wool, doesn’t the wool get just as much positive charge as the balloon gets negative charge? But if I walk over to the wall, I can stick the balloon on the wall and it stays right there. If I put the piece of wool on the wall, it just falls down. Yes, I know, the wool is heavier than the balloon. But I have not been able over the last three days of effort to find one indication that there is positive charge on the wool. What am I doing wrong?

I came up with a theory that the air is actually full of a surplus of negative charges. So when you put a negative charge on the balloon, it stays there because the negative air charges are repelled. But the positive charge on the wool is rapidly neutralized by the ambient negative charge.

This sounded OK, but then I read about lightning on the internet. It seems that there is no real theory of how lightning works! There are bits and pieces of a theory, but it just doesn’t all add up. And the first thing that doesn’t make sense is that on some level, the whole planet seems to have a negative charge and the atmosphere is positively charged. So the potential rises as you gain altitude, at a rate of about 100 volts per meter. This trashes my theory of negative charge.

But it got me thinking about lightning, and I asked myself the question: could the charge of the planet be caused by radioactive decay? Alpha particles carry a positive charge and they are being given off by uranium atoms. How many of them make it out of the earth’s crust unscathed into the atmosphere? Are there enough to account for the quantity of atmospheric electricity?

From Wikipedia I got a ballpark figure of 10^20 grams of uranium in the top 25 km of the earth’s crust. This is on the order of 10^18 moles. Since there are 10^5 coulombs in a mole, it is 10^23 coulombs of available electricity. These atoms are decaying at a rate of one decay per atom per four billion years. I ran the figures and it comes to about one million amperes.

Some of these alpha particles will be neutralized before they escape the earth’s crust. But how many? Also from Wikipedia, I learn that there are, worldwide, about 50 lightning strikes per second, each delivering an average of 15 coulombs. In other words, planetary lightning represents an average current of about 1000 amps.
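The arithmetic above is easy to check in a few lines. This is just a sketch of the same ballpark figures quoted from Wikipedia, not precise geophysical data:

```python
# Back-of-envelope check of the uranium-decay vs. lightning currents.
# All inputs are the rough figures quoted in the text.

SECONDS_PER_YEAR = 3.15e7
E_CHARGE = 1.6e-19           # elementary charge, coulombs
AVOGADRO = 6.0e23

uranium_grams = 1e20         # top 25 km of crust (ballpark)
moles = uranium_grams / 238  # ~4e17, i.e. on the order of 10^18
atoms = moles * AVOGADRO

# Roughly one decay per atom per four billion years
decay_rate_per_s = 1 / (4e9 * SECONDS_PER_YEAR)
decays_per_s = atoms * decay_rate_per_s

# Each alpha particle carries charge +2e
alpha_current = decays_per_s * 2 * E_CHARGE
print(f"alpha current: {alpha_current:.1e} A")       # ballpark of a million amperes

# Worldwide lightning: ~50 strikes per second at ~15 coulombs each
lightning_current = 50 * 15
print(f"lightning current: {lightning_current} A")   # ~1000 A

print(f"fraction needed: {lightning_current / alpha_current:.2%}")  # well under 1%
```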

It seems that if less than 1% of the alpha particles generated by uranium decay actually make it into the atmosphere, it would be enough to explain lightning. I guess that’s my theory for today. I recently wrote about how I got ripped off for the Nobel Prize because I was just 78 years late in explaining the Compton Effect by the wave theory of light. Well, maybe I’ll have more luck with this theory.

Saturday, November 5, 2011

How I got Cheated Out of the Nobel Prize

Two weeks ago I wrote about how I almost won the Nobel Prize. Now I have to come clean and tell you what went wrong. You may remember that I had just come up with a way to explain the Compton Effect without photons. It’s actually a really straightforward application of wave-on-wave interactions. I’d been trying to think of an explanation for about ten years without success. I think the reason I couldn’t do it was I was trying to build on my earlier success with explaining the photo-electric effect. In that case, the key was to look at the superposition of the two electron wave functions, before and after, and match the frequency of that superposition with the frequency of the driving e-m wave.

The Compton effect is entirely different! You don’t match up frequencies, you match up wavelengths. The “before” and “after” electron wave functions are simply a plane wave moving to the right, and a plane wave moving to the left. There is no frequency involved because the energies of the incoming and outgoing electrons are the same. But the superposition sets up a standing wave of charge, and it’s the wavelength that interacts strongly with an e-m wave. It works because in quantum mechanics, wavelength is momentum: so an electron interacts with a “photon” when they have the same wavelength. The only thing to remember is you don’t need to parcel your light up into “photons”: it works because of the wavelength relationship, and that’s all you need.
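Here is the wavelength bookkeeping in standard de Broglie notation. This is my own summary of the picture, hedged accordingly:

```latex
% Superposing the "before" and "after" electron plane waves (momenta +p and -p)
% gives a standing wave of charge:
\[
  \psi \;\propto\; e^{ipx/\hbar} + e^{-ipx/\hbar} \;\propto\; \cos(px/\hbar),
\]
% whose charge density |\psi|^2 has spatial period
\[
  d \;=\; \frac{\pi\hbar}{p} \;=\; \frac{\lambda_{dB}}{2},
  \qquad \lambda_{dB} = \frac{h}{p}.
\]
```

A charge grating of period d Bragg-reflects light of wavelength 2d = λ_dB, which is exactly the condition p = h/λ: the “photon” momentum relation, recovered with no photons in sight.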

You can imagine that people told me I was a quack when I tried to tell them about this. It was obviously wrong, way out in left field, nonsense, you name it. I didn’t mind because I knew my day would come. The analysis was correct, and some day the world would recognize it.

This went on for about six months until disaster struck. I was browsing articles on the internet one day and I stumbled across a paper by a fellow named Strnad. His paper was about Schroedinger’s 1927 explanation of the Compton Effect. I started reading with disbelief: Schroedinger’s explanation was my explanation.

Did the naysayers change their tune? Yes they did, without missing a stride. One day it was “nonsense, garbage, quackery” and the next day it was “been done before, nothing new, everybody knows that already”. Either way, the bottom line was clear. I’m not getting the Nobel Prize after all.

By the way, Strnad had a peculiar take on the whole question. While he agreed with Schroedinger’s description, he didn’t think it should be taught to students. “It’s important not to confuse them about the existence of photons.” (I’m paraphrasing.) It seems if the students were exposed to Schroedinger’s ideas, it might shake their faith in what their professors are telling them.

I have to wonder why someone would say that’s a bad thing.

Thursday, November 3, 2011

Transmission Lines and Characteristic Impedance

I once did a calculation on an electromagnetic wave propagating between two parallel plates to see which was greater: the electrostatic attraction between the opposite charges induced on the plates, or the magnetic repulsion between the parallel currents induced in the same plates. You can almost see where this is going if you have any intuition for these things. It turns out that in Nature, these two forces are perfectly balanced. It’s just one of those things.

I was actually trying to answer the question of “why do like currents attract?” instead of repelling, the way like charges do. I wanted to draw some cosmic conclusion from the fact that if they repelled, there would be a net repulsion between the two parallel plates; and that the only way the universe made sense was if there was no net force of any kind between those plates. I was able to verify by direct calculation that the net force indeed went to zero for a freely propagating e-m wave, but I fell short of my ultimate goal. I never came up with a self-evident reason why the net force had to go to zero.

Someone posted a problem the other day on a website I go to called stackexchange.com. The problem was a parallel-wire transmission line connected to a resistor. The poster had recognized that when you connect a battery, opposite charges appear on the two wires, so they attract; and also, when current flows, the currents are opposite so they repel. The amount of charge hardly depends on the resistance, so it (and the attraction) is virtually constant. The amount of repulsion depends on the amount of current, or the resistance. So the poster asks: for what value of resistance does the net force go to zero?

You can see that this is similar to the problem I analyzed with the parallel plates. So I posted an answer in the form of a conjecture: that if you treat this as a transmission line, the condition for zero net force ought to be a freely propagating wave; and the way to get this was to have your load impedance matched to your line impedance, so there is no reflected wave.
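For what it’s worth, here is the sort of bookkeeping behind the conjecture, sketched for a lossless line in vacuum. These are the standard transmission-line relations; I’m assuming both forces share the same geometric factor, as they do for parallel wires:

```latex
% Per unit length, with charge density \lambda (charge per meter,
% not wavelength!) and current I on the wires:
\[
  F_E \;\propto\; \frac{\lambda^2}{\epsilon_0},
  \qquad
  F_B \;\propto\; \mu_0 I^2,
  \qquad\text{so}\qquad
  F_E = F_B \;\iff\; \frac{\lambda}{I} = \sqrt{\mu_0\epsilon_0} = \frac{1}{c}.
\]
% On a matched line, V = Z_0 I with Z_0 = \sqrt{L/C}, and \lambda = CV, so
\[
  \frac{\lambda}{I} \;=\; C Z_0 \;=\; \sqrt{LC} \;=\; \frac{1}{c}
\]
% automatically, since LC = 1/c^2 for any vacuum TEM line.
```

In other words, the attraction and repulsion cancel exactly when there is no reflected wave, which is the matched-load condition.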

This answer generated a lot of ridicule from the experts in the group, and we’re still arguing about it. But it looks to me very much like I’m turning out to be right. Of course no one will admit it, but what else is new? In the meantime, the process of arguing has solidified some ideas I’ve had floating around for a while about how to calculate the impedance of different transmission lines, and I’ve come up with some pretty cool tricks that I think I’m going to post one of these days.

You can check out the discussion at http://physics.stackexchange.com/questions/3306/when-is-the-force-null-between-parallel-conducting-wires