It's the damned calculus that gets me every time. I actually ran the integration on my antenna cross-section calculation, and I got the same answer I got the first time by just guessing. Which means I'm still out of whack with the rest of the world by a factor of 2. Hard to understand.

On the one hand, you don't get that close to the right answer unless your physics is pretty sound to begin with. There's little doubt that the discrepancy is just a matter of a math error on my part. The problem is: do you go back and obsess over it until you find the error, or do you accept that you're basically on the right track and move on to other things? You'd theoretically make more progress by pressing ahead; but you never know when you might miss something important. It's a tough one.

The calculus was in fact rather picturesque, but it's not really the kind of thing that lends itself to a blog treatment. However, I suppose I ought to sketch it out, just in case anyone is wondering. So here goes:

If you recall the superposition of the two wave systems, one circular and one planar, the question becomes: how fast do the circular waves go out of phase with the plane waves, as you move away from the axis of symmetry? It shouldn't be that hard to convince yourself that the phase is quadratic in x, where x is the radial distance. (That's because a circle is basically the same as a parabola, if you take it over a shallow enough section of the curve. Which we can always do by backing off sufficiently far from the antenna.) The assumption is going to be that we're far enough away from the receiving antenna that the circular field is much smaller than the planar field, so the binomial approximation for power density will apply: (1-a)^2 ≈ 1-2a, where a is the in-phase component of the circular wave (not to be confused with the radial distance x).
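The circle-is-a-parabola claim is easy to check numerically. Here's a minimal sketch, with a made-up distance R and wavelength (both hypothetical values, just for illustration): the extra path length of the circular wave off-axis is sqrt(R^2 + x^2) - R, and for x much smaller than R that matches the quadratic x^2/(2R) very closely.

```python
import numpy as np

R = 1000.0       # distance from the antenna (hypothetical units)
wavelength = 1.0
k = 2 * np.pi / wavelength

x = np.linspace(0.0, 30.0, 500)            # radial distance off the axis
exact = k * (np.sqrt(R**2 + x**2) - R)     # true extra phase of the circular wave
quadratic = k * x**2 / (2 * R)             # parabolic (shallow-circle) approximation

print(np.max(np.abs(exact - quadratic)))   # tiny compared to the phase itself
```

Even though the accumulated phase at the edge is a few radians here, the quadratic approximation tracks it to better than a thousandth of a radian.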

Here it gets a little tricky: we still don't know what the phase difference of the two wave patterns is, taken along the axis of symmetry. I've drawn it as though the phase difference is zero and grows as you move outwards; but in fact, it might be anything. It's not clear exactly how it has to start off in order to maximize the power absorption. Fortunately, we can cover all possibilities by two special cases: sine and cosine. Either you start off exactly in phase, or you start off 90 degrees out of phase...or it's an intermediate case which you can make up by putting together those two special cases.
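That two-special-cases trick is just the angle-addition identity. A quick numerical sanity check (phi is an arbitrary, made-up starting phase):

```python
import numpy as np

phi = 0.7                      # any starting phase offset (hypothetical value)
x = np.linspace(0.0, 5.0, 1000)

general = np.cos(x**2 + phi)   # pattern starting at an arbitrary phase
in_phase = np.cos(x**2)        # the "exactly in phase" special case
quadrature = np.sin(x**2)      # the "90 degrees out" special case

# cos(theta + phi) = cos(phi)*cos(theta) - sin(phi)*sin(theta)
combined = np.cos(phi) * in_phase - np.sin(phi) * quadrature
```

So any intermediate case really is a weighted mix of the two integrals we're about to do.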

Then we just have to integrate the field over the cross-section. Don't forget the factor of 2*pi*x dx which you get for the circular rings when you set up the integrals: we then have to evaluate

∫ cos(k*x^2) * 2*pi*x dx  (from x = 0 to infinity)

and we also have to check the quadrature case,

∫ sin(k*x^2) * 2*pi*x dx  (from x = 0 to infinity)
where k is a scaling factor to be determined by the physics of the situation. I actually ran these integrals in Excel and had two surprises. First, it was the sine integral that gave me the positive result. I had always assumed it would be the cosine integral, but that one goes to zero. It means that along the axis of symmetry, far away from the antenna, the two fields are 90 degrees out of phase. For maximum power absorption, of course. The other weird thing was that, doing it numerically, I came up with what was clearly a simple multiple of pi. It's funny in physics how the crazy integrals that come up happen to be analytically solvable as often as they are. So I looked again, and there's an obvious substitution, u = x^2. I guess anyone would have seen that. The integral solves pretty easily after all. It's the scaling factor that takes a bit of work.
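If you want to reproduce the Excel numerics, here's a sketch in Python. The only wrinkle is that the raw integrals oscillate forever, so I taper them with a gentle Gaussian window (the same trick described further down); k = 1 is an arbitrary choice, since the physical scaling is a separate question.

```python
import numpy as np

k = 1.0  # arbitrary scale factor; the physics fixes the real one separately

def ring_integral(f, width, n=400_000):
    """Integrate f(k*x^2) * 2*pi*x dx from 0 outward, tapered by a gentle
    Gaussian exp(-(x/width)^2) so the wild high-x oscillation averages out."""
    x = np.linspace(0.0, 8.0 * width, n)
    y = f(k * x**2) * 2.0 * np.pi * x * np.exp(-(x / width) ** 2)
    dx = x[1] - x[0]
    return float(np.sum((y[:-1] + y[1:]) * dx / 2.0))  # trapezoid rule

for w in (5.0, 10.0, 20.0):
    print(w, ring_integral(np.sin, w), ring_integral(np.cos, w))
# the sine integral settles toward pi/k as the window widens,
# while the cosine integral drifts down to zero
```

That pi/k is the "simple multiple of pi" that the u = x^2 substitution makes obvious: the sine integral becomes pi * ∫ sin(k*u) du, and the cosine version averages away.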

But if you really think about it, you shouldn't be that quick to accept the formal mathematical solution. Just look at the integrand, 2*pi*x * sin(k*x^2): it grows and grows, and oscillates faster and faster. If you graph it, it's pretty nasty looking.

Doesn't that grow like crazy? Yes and no. Physically, what's happening is that as you get farther from the axis of symmetry, the two wave patterns are going in and out of phase so rapidly that there is no net effect. It's like shining two flashlight beams across each other. In theory, you should calculate the power flow by adding the wave vectors everywhere and calculating the Poynting vector, which will be going crazy. But in practice, the effect is for all the micro-fluctuations to average out to zero. All the real physics happens in the first few lumps.

But how do we handle that mathematically? I actually did a problem like this once before, when I was adding up one of those crazy Ramanujan series. It came up in my calculation of the Casimir effect, and it was something like 1 - 2 + 3 - 4 + ..., which adds up to 0.25. How? You cover it over with a very gentle Gaussian that preserves the low end and gradually suppresses the high-end fluctuations. I did exactly the same thing when I did this sine integral in Excel. You just keep adjusting the width of the Gaussian until the value stabilises, and then you know you're done. Actually, the fluctuations in this integral are almost like that other series, except instead of alternating integers, it's pretty much the square roots of the integers that alternate. (Because of the x-squared inside the sine.)
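The series version of the trick is easy to demonstrate. Here's a minimal sketch: sum 1 - 2 + 3 - 4 + ... under a Gaussian taper exp(-(eps*n)^2), and watch the result stabilise as eps shrinks (the width of the Gaussian grows).

```python
import math

def tapered_series(eps):
    """Sum 1 - 2 + 3 - 4 + ... under a gentle Gaussian cutoff exp(-(eps*n)^2)
    that preserves the low terms and gradually suppresses the high-end swings."""
    n_max = int(10.0 / eps)  # past here the Gaussian has killed everything
    return sum((-1) ** (n + 1) * n * math.exp(-(eps * n) ** 2)
               for n in range(1, n_max + 1))

for eps in (0.1, 0.03, 0.01):
    print(eps, tapered_series(eps))
# the value stabilises near 0.25 as the taper gets gentler
```

The partial sums of the bare series swing between huge positive and negative values, but the gentle cutoff exposes the 0.25 sitting underneath, exactly the way the Gaussian window tames the integral above.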

Like I said, the integral turned out to be not the hard part. It was getting the right scaling factor to line it up to the physics. I did my best; I'm not going to drag you through the details, but when it was all over, I was still out by a factor of 2. It's just one of those things.

## Wednesday, February 29, 2012
