Thursday, January 27, 2005

The Harvard Tower Experiment Was a Fraud

The Harvard Tower Experiment, performed in the early 1960s on the Harvard University campus, has been hailed as one of the most precise experiments confirming Einstein's General Relativity. See
http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/gratim.html#c2

But I want to point out that this is another typical example of scientists, for all kinds of stated or unstated reasons, manipulating their data to get the result they want. The progress of science relies on unbiased and objective observation of nature. When the credibility of a specific experiment is seriously questionable, the scientific community must set the record straight and disclose any potential fraud.

Before I continue, I must emphasize that this has nothing to do with the correctness of GR; I personally believe GR is a correct theory. However, it is wrong to manipulate data to yield the desired result when the precision of the experiment itself is questionable.

In the Harvard Tower experiment, the energy of a gamma photon, which is around 14.4 keV, is shifted by a very small amount, 3.5x10^-11 eV, when it drops a height of 22.6 meters in the earth's gravity, according to the GR calculation. The Harvard group claimed to have measured that 3.5x10^-11 eV energy shift and matched it to within 1% of the predicted value. That would require measuring the 3.5x10^-11 eV shift to a precision better than 1.4x10^-12 eV.
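
As a quick sanity check of these numbers, here is a minimal sketch in Python; the only inputs are the 22.6 m height and 14.4 keV energy quoted above plus standard values of g and c:

# Rough check of the numbers quoted above: the fractional shift g*h/c^2
# applied to the 14.4 keV Fe-57 gamma line over a 22.6 m drop.
g = 9.81        # m/s^2
h = 22.6        # m, tower height quoted above
c = 2.998e8     # m/s
E = 14.4e3      # photon energy in eV

frac_shift = g * h / c**2      # dimensionless fractional shift
delta_E = frac_shift * E       # shift in eV

print(f"fractional shift = {frac_shift:.2e}")   # ~2.5e-15
print(f"energy shift     = {delta_E:.2e} eV")   # ~3.5e-11 eV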

The question to ask is whether an experimental precision of 1.4x10^-12 eV, or even 3.5x10^-11 eV, out of a photon energy of 14400 eV, is possible at all. My answer is that it is impossible, based on quantum mechanics, specifically on the uncertainty principle.

First, the source of the gamma photons, Iron-57, has a natural line width of about 10^-8 eV, due to the short lifetime of the decaying state and the uncertainty principle. See
http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html#c1

That means you cannot detect a photon energy change much smaller than the natural line width of 10^-8 eV. The claimed 1% precision (actually about 4%, based on the 5.1 versus 4.9 figures) would require a measurement precision of 1.4x10^-12 eV, about 4 orders of magnitude narrower than the natural line width. Impossible to measure.
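
For concreteness, the ratio being invoked here can be written out (a trivial sketch using the 10^-8 eV line width and the 1.4x10^-12 eV figure quoted above):

# Ratio of the natural line width quoted above to the precision that the
# post says the claimed test would require.
line_width = 1e-8             # eV, Fe-57 natural line width quoted above
required_precision = 1.4e-12  # eV, precision figure quoted above

print(f"line width / required precision = {line_width / required_precision:.0f}")
# prints ~7000, i.e. roughly 4 orders of magnitude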

Second, the gamma photon takes only a very short time, 7.5x10^-8 seconds, to travel the 22.6-meter distance. The photon exists for just that brief interval, from the moment it is emitted to the moment it is absorbed. Based on the uncertainty principle, this short lifetime implies an uncertainty in the energy of
0.5*hbar/t = 0.5 * 6.582x10^-16 eV*s / (7.5x10^-8 s) = 4.388x10^-9 eV

So the measurement precision of the energy could never be better than 4.388x10^-9 eV. The GR effect is only 3.5x10^-11 eV, more than 100 times too small to be measured!
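
A minimal sketch reproducing the arithmetic of this second point (it simply re-evaluates the transit time and the 0.5*hbar/t figure above, without taking a position on whether that is the right uncertainty to apply):

# Reproduces the arithmetic above: transit time over 22.6 m at the speed
# of light, and the 0.5*hbar/t energy figure the post associates with it.
hbar = 6.582e-16   # eV*s
c = 2.998e8        # m/s
h = 22.6           # m

t = h / c                 # ~7.5e-8 s
dE = 0.5 * hbar / t       # ~4.4e-9 eV

print(f"transit time = {t:.2e} s")
print(f"0.5*hbar/t   = {dE:.2e} eV")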

Third, even if such a minuscule amount could be measured, any possible Doppler shift due to relative motion of the source and detector would throw the data into question. To shift the energy by 3.5x10^-11 eV, out of a total of 14400 eV, via the Doppler effect, all it takes is a relative speed of

v = c * 3.5x10^-11 eV / 14400 eV = 7.3x10^-7 meters/second = 0.73 micrometers/second
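
The same first-order Doppler arithmetic, as a short sketch (inputs are the 3.5x10^-11 eV shift and 14.4 keV energy above):

# Equivalent first-order Doppler speed for the quoted shift: v = c * dE/E.
c = 2.998e8         # m/s
delta_E = 3.5e-11   # eV, GR shift quoted above
E = 14.4e3          # eV

v = c * delta_E / E
print(f"v = {v:.1e} m/s = {v * 1e6:.2f} micrometer/s = {v * 3.6e6:.1f} mm/hour")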

The Mossbauer effect is only sensitive enough to detect Doppler shifts from speeds down to millimeters per second. A speed of 0.73 micrometers per second, which is roughly 2 to 3 millimeters per hour, is far below the measurable sensitivity of the Mossbauer effect.

Fourth, the equivalent Doppler-shift speed, 0.73 micrometers per second, or 2.6 millimeters per hour, is far smaller than the thermal expansion rate of a 20+ meter building over the day/night temperature cycle.

This is solid and undeniable evidence that there is simply no way the researchers could have obtained the claimed result at the claimed precision. Any reasonable person would have to conclude that the data were probably doctored.

Quantoken


14 Comments:

Blogger Quantoken said...

Zelah:

You do not need to know all the details to conclude that an experiment is a fraud, if it breaks some of the most basic physics principles! For example, if you read any mention of "perpetual motion", or any claim that infinite precision is possible, or that sort of thing, you immediately know it's a fraud without going into the details. You do need the details if you want to investigate exactly how the fraud was committed, but that's a totally different story.

If you follow the web page I gave, you will find that the original experiment was reported in

Phys Rev Lett. 4, 337 (1960)

A little further research reveals that it is just a 4-page paper, titled "Apparent Weight of Photons":

http://prola.aps.org/abstract/PRL/v4/i7/p337_1

It would cost me $22.50 to buy those 4 pages of junk, or a trip to the library. I am not going to do that. If you have easier access, you are welcome to scan the four pages and email them to me at quantoken @ yahoo dot com.

But let's be realistic: it's probably three and a half pages. After you take away the long list of references at the end and the long background introduction at the beginning, there is not much space left to discuss any details of the experiment. So what I read on the GSU web site is about all the detail there is to be had.

It's clearly a scientific fraud, because the fundamental principles of quantum mechanics would have to be broken for that kind of precision to be possible at all. Averaging cannot give you unlimited precision. Measure a length with a meter stick a billion times; averaging will not give you micron precision.

Quantoken

3:49 PM

 
Blogger JesseM said...

The Pound-Rebka paper is available for free online at http://physics.carleton.edu/Courses/P342/pound-rebka.pdf. I would happily bet $1000 that you missed something in your analysis, since physicists are not stupid people, and I doubt that every single one who reviewed the experiment missed an obvious flaw that only you were clever enough to detect.

12:32 AM

 
Blogger Quantoken said...

Jesse:
Thanks for providing the source for that original paper. I glanced through it and could not find anything significant that I may have missed. It's even less credible once you analyze the paper carefully. A lot of important systematic errors are not even mentioned.
I do not think I am the only one smart enough to recognize the credibility problem of this experiment. Anyone with good training in physics can see it. But as in the story of the emperor's new clothes, was the kid the only one who saw that the emperor was naked? No, all the adults saw it too. But they were smart enough to know what not to say.

In your case, since you have bet $1000 on this, I think you will not be so stupid as to easily give in and acknowledge that I am right, regardless of whether you can see it or not. Most physicists who work in the field for a living have far more important things than $1000 to consider before deciding whether to say anything. That's not stupid; that's being smart enough to survive. The truth is somewhat less relevant.

Quantoken

9:01 AM

 
Anonymous Anonymous said...

Your analysis is erroneous.

The difference in gravitational potential (phi) from the top to the bottom of the tower (22.6m) is

phi(top) - phi(bottom) = Delta phi
= -[(980 cm/sec^{2})(2260 cm)] / (3 x 10^{10} cm/sec)^{2}
= -2.46 x 10^{-15}

A photon arriving at the target, having traversed this distance through the gravitational field of the earth, should be shifted upward (violet-shifted) in frequency f by an amount

(Delta f)/f= - Delta phi

This lowers the photon counting rate by a factor

C = W^{2}/[(Delta f)^{2} + W^{2}]

where W is the full width of the gamma-ray line. The fractional width is (W/f) = 1.13 x 10^{-12}, which is larger than the predicted value of (Delta f)/f by a factor of 460, so the reduction in counting rate is only one part in 2.1 x 10^{5}. This seems to make the experiment impossible at first sight. Pound and Rebka thought they would have to let the photons fall several km in order to get a frequency shift comparable to W.
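
Here is a small numeric sketch of those two ratios, using the Lorentzian dip formula above and the figures just quoted:

# Numeric check of the two ratios quoted above, using the Lorentzian dip
# C = W^2 / (df^2 + W^2) in fractional units.
W_over_f = 1.13e-12    # fractional full line width quoted above
df_over_f = 2.46e-15   # predicted fractional gravitational shift

ratio = W_over_f / df_over_f                       # ~460
C = W_over_f**2 / (df_over_f**2 + W_over_f**2)     # relative counting rate
reduction = 1 - C                                  # fractional dip in counting rate

print(f"(W/f) / (df/f)       = {ratio:.0f}")
print(f"dip in counting rate = {reduction:.1e}  (~1 part in {1 / reduction:.1e})")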

They devised a clever trick to measure very small shifts--the solution is to move the gamma source sinusoidally up and down with velocity Vcos(wt), where w is some arbitrary fixed frequency (10-50 Hz) and V is also arbitrary but chosen so that V/c is much greater than |Delta phi|, in other words V much greater than 7.4 x 10^{-5} cm/sec. To the gravitational violet shift, Delta f, there is then added a LARGER Doppler shift, Delta f_{D}/f = -(V/c)cos(wt). The counting rate is then reduced by a time-dependent factor

C(t)= W^{2}/[(Delta f + Delta f_{D})^{2} + W^{2}]

or

C(t) = (W/f)^{2} / [((Delta f)/f - (V/c)cos(wt))^{2} + (W/f)^{2}]

That changes everything. Delta f can be determined by looking for a term linear in cos(wt): measure the asymmetry between the number of counts registered when the source is going up (e.g. cos(wt) > 1/sqrt(2)) and going down (cos(wt) < -1/sqrt(2)). Pound and Rebka then obtained a value for (Delta f)/f about 4 times greater than the expected value of 2.46 x 10^{-15}. This discrepancy is an intrinsic frequency shift due to the actual difference between the source and target crystals--they are two different crystal samples even if they are made of the same material. The discrepancy is removed by subtracting the asymmetry in gamma-ray counts when the source is below the target from the asymmetry when it is above the target. Their final result is
(Delta f)/f = (2.57 +/- 0.26) x 10^{-15}, which agrees very well with the value predicted by GR, 2.46 x 10^{-15}. Subsequent experiments have confirmed this to greater accuracy (to within 1%). (Hopefully I have not made any typos.)
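
A toy numerical sketch of this modulation trick (not the actual Pound-Rebka analysis; the modulation amplitude V below is an arbitrary assumed value, and W and Delta f are the fractional figures quoted above):

import numpy as np

# Toy model: counting rate C = W^2 / ((Delta f + Doppler)^2 + W^2), with a
# sinusoidal Doppler modulation added to a fixed shift Delta f << W. The
# asymmetry between the two half-cycles is linear in Delta f, so a shift
# far below the line width still shows up. All quantities are fractional.
W  = 1.13e-12    # fractional line width quoted above
df = 2.46e-15    # fractional gravitational shift quoted above
V  = 1.0e-13     # assumed fractional modulation amplitude (arbitrary, >> df)

phase = np.linspace(0.0, 2.0 * np.pi, 200_000)
doppler = V * np.cos(phase)

def mean_rate(shift, mask):
    rate = W**2 / ((shift + doppler)**2 + W**2)
    return rate[mask].mean()

pos = np.cos(phase) >  1.0 / np.sqrt(2.0)   # half-cycle with positive Doppler
neg = np.cos(phase) < -1.0 / np.sqrt(2.0)   # half-cycle with negative Doppler

print(f"asymmetry with the shift:  {mean_rate(df, pos) - mean_rate(df, neg):.2e}")
print(f"asymmetry with zero shift: {mean_rate(0.0, pos) - mean_rate(0.0, neg):.2e}")  # ~0 by symmetry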

They take count samples--it is statistical, not photon by photon--and their method is designed to determine very small shifts, so your use of the uncertainty principle is irrelevant and of no consequence.

6:37 PM

 
Blogger JesseM said...

"It's clearly a scientific fraud, because the fundamental principles of quantum mechanics would have to be broken for that kind of precision to be possible at all. Averaging cannot give you unlimited precision. Measure a length with a meter stick a billion times; averaging will not give you micron precision."

Of course, what's going on here is not analogous to measuring a distance on the scale of centimeters with a ruler that only shows increments in meters. Rather, it's analogous to using a ruler that shows increments in centimeters, where each individual measurement has an uncertainty of around a meter--for example, your first measurement might show an object at a position between 15 and 115 centimeters, your next measurement between 33 and 133 centimeters, the next between -8 and 92 centimeters, and so on. If you average a large enough number of these fuzzy measurements, you can certainly find an average value that has a much smaller uncertainty than 1 meter.
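
A quick numerical illustration of this statistical point (a sketch assuming purely random, independent errors on each measurement; the specific numbers are made up):

import numpy as np

# Each single measurement has a spread of ~1 "meter", but the mean of N
# such measurements has a spread of ~1/sqrt(N), assuming the errors are
# independent and random.
rng = np.random.default_rng(0)
true_value = 0.57      # hypothetical true position, in meters
sigma = 1.0            # spread of a single measurement

for n in (100, 10_000, 1_000_000):
    samples = true_value + rng.normal(0.0, sigma, size=n)
    print(f"N = {n:>9}: mean = {samples.mean():+.4f}, "
          f"expected spread of the mean ~ {sigma / np.sqrt(n):.4f}")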

If your argument were correct, it would mean that the uncertainty in the position of the center of mass of a baseball could be no smaller than the uncertainty in the position of each individual proton, neutron, and electron that makes up the baseball. But of course this is not the case; virtually any textbook on QM will explain that the uncertainty in the position and velocity of classical objects like baseballs is much smaller than the uncertainty in the position and velocity of elementary particles like electrons, simply because the mass of the classical object is so much greater. Here are some pages which discuss the uncertainty principle applied to baseballs:

http://www.ux1.eiu.edu/~cfadd/1160/Ch28QM/Uncert.html
http://people.ccmr.cornell.edu/~muchomas/P214/Notes/QuantumMechanics/node7.html
http://zebu.uoregon.edu/~js/21st_century_science/lectures/lec14.html

7:30 PM

 
Blogger Quantoken said...

Jesse:

Your error analysis is completely wrong. The natural broadening of the gamma-ray spectral line has a normal distribution of width df = 1x10^-8 eV, or df = 7x10^-13 * f0 for the 14.4 keV line. The counts would be given by the standard normal distribution formula:

C = C0 * exp(-(f-f0)^2/(2*df^2))

What this means is that when the frequency exactly equals the central absorption frequency f0, the count is highest, at C0; as f deviates from f0, the count decreases. For example, when f-f0 = df, then C is about 0.6 * C0.

The experimental tasks are:
1. Measure the counts and use them to figure out the frequency deviation (f-f0).

2. Once f-f0 is known, subtract all other factors that contribute to f-f0, leaving only the part contributed by GR, and see if it agrees with GR.

First, the researchers completely failed to discuss and calculate all the factors that may contribute to the frequency deviation, for example the thermal expansion of the building under the sunshine, which causes a tiny relative movement of the source and the detector and a corresponding Doppler shift.

Keep in mind that the GR effect is very small: as I discussed, a relative movement of just 2-3 MILLIMETERS per HOUR would already cause a Doppler shift bigger than the GR redshift. A 22.6-meter-high building could easily have a thermal expansion rate much faster than 2-3 millimeters per hour.

Now go back to No. 1: given the count measurement, how accurately can you determine (f-f0)? We know that for any measured count value C, its standard random deviation is sqrt(C), so the relative accuracy of the count measurement is 1/sqrt(C). If C is small, say just one or two, the error is large. If C is 100, the error is 1/sqrt(100) = 10%. If C is 1000, it is 1/sqrt(1000) = 3%. And so on.
How does delta C translate into an error in f, delta f? Take the derivative of the normal distribution formula:
C = C0 * exp(-(f-f0)^2/(2*df^2))
delta C = C0 * exp(-(f-f0)^2/(2*df^2)) * (f-f0)/(df^2) * delta f
or
delta C = C * ((f-f0)/(df^2)) * delta f

We know delta C = sqrt(C). So:
delta f = df^2/(sqrt(C)*(f-f0))
delta f = df^2/sqrt(C0)*(1/(f-f0))*exp(+(f-f0)^2/(4*df^2))

Our goal is to minimize delta f. At the f = f0 peak the measurement is least sensitive: the factor 1/(f-f0) approaches infinity, so that's no good. At f-f0 >> df the exponential approaches infinity (i.e., sqrt(C) approaches zero), so that's no good either.

You can prove that when f-f0 = sqrt(2)*df, you have the smallest error:
delta f = 1.166 * df * 1/sqrt(C0)
delta f = 1.166 * 7x10^-13 * f0 * 1/sqrt(C0)
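
This minimization can be checked numerically (a sketch that simply evaluates the error expression derived above on a grid; df and C0 are placeholders set to 1):

import numpy as np

# Checks the claim above numerically: the error expression
#   delta_f(x) = df^2 / (sqrt(C0) * x) * exp(x^2 / (4*df^2)),  x = f - f0,
# has its minimum near x = sqrt(2)*df, where it equals ~1.166*df/sqrt(C0).
df = 1.0   # work in units of the line width
C0 = 1.0   # peak count; only enters as an overall 1/sqrt(C0) factor

x = np.linspace(0.1, 5.0, 100_000) * df
delta_f = df**2 / (np.sqrt(C0) * x) * np.exp(x**2 / (4.0 * df**2))

i = np.argmin(delta_f)
print(f"minimum at (f-f0)/df = {x[i] / df:.3f}")                 # ~1.414 = sqrt(2)
print(f"minimum delta_f      = {delta_f[i]:.3f} * df/sqrt(C0)")  # ~1.166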

The required peak count C0 to achieve a desired precision delta f is then:

sqrt(C0) = (f0/(delta f)) * 8x10^-13

The required sensitivity to measure the GR effect to within the claimed 1% accuracy is
delta f = 1% * 4.92x10^-15 * f0 = 4.92x10^-17 * f0

So the required count would be:
sqrt(C0) = 1.66x10^4
C0 = 2.75 x 10^8

So to verify GR to within 1%, they would have to accumulate on the order of 3x10^8 counts. There is no way they measured that many counts.

Even to verify GR to within 10% would still require C0 = 2.75x10^6.
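
The count estimates above can be reproduced as follows (a sketch of the same arithmetic, using the fractional width 7x10^-13 and the 1.166 factor from earlier in this comment):

# Reproduces the count estimates above: required peak count C0 for a given
# fractional accuracy, using delta_f ~ 1.166 * (W/f) * f0 / sqrt(C0).
W_over_f = 7e-13       # fractional line width used above
gr_effect = 4.92e-15   # fractional (two-way) GR shift used above

for accuracy in (0.01, 0.10):     # 1% and 10% tests
    target = accuracy * gr_effect
    C0 = (1.166 * W_over_f / target)**2
    print(f"{accuracy:.0%} test: required C0 ~ {C0:.2e}")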

The original paper never discusses exactly what counts were measured, or how the standard error was calculated from those counts.

There is no way they could have recorded such a high count. The gamma source was prepared using 0.4 curies of Co-57, which decays at a rate of 1.5x10^10 per second. Let's say 1% of those decays yield a usable gamma photon from the Fe-57 source. Then the source would be releasing 1.5x10^8 gamma photons per second.

This is an omnidirectional source: for a small disk only 2 inches in diameter placed 22.6 meters away, only about 3x10^-7 of the emitted gamma photons will reach the disk. That is only about 45 photons per second! And how many of the photons that land on the 2-inch disk actually get absorbed? Probably just a few every few seconds, or even far fewer!
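
The geometric estimate can be reproduced as follows (a sketch using the assumed 1% usable-emission figure above; detector efficiency and absorption are not addressed):

import math

# Reproduces the geometric estimate above: fraction of isotropically emitted
# photons that hit a 2-inch-diameter disk 22.6 m away, and the resulting rate
# for the assumed 1.5e8 gamma/s source.
source_rate = 1.5e8     # gamma photons per second, assumed above
distance = 22.6         # m
radius = 0.0254         # m (half of 2 inches)

fraction = (math.pi * radius**2) / (4.0 * math.pi * distance**2)
rate_at_disk = source_rate * fraction

print(f"geometric fraction = {fraction:.1e}")          # ~3e-7
print(f"rate at the disk   = {rate_at_disk:.0f}/s")    # a few tens per second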

This is the reason they never said what count they actually measured: there is no way to get any accuracy out of just a few counts per second.

And we have not even talked about the fact that the source and target are mounted on a loudspeaker, oscillating slowly at 10 Hz, so only during a very brief part of each cycle can a photon of matching frequency be absorbed!

So far we have only talked about how accurately the given experimental setup can measure the frequency shift (f-f0). To determine the part attributable to GR, you need to account for many systematic errors, most of which the original paper fails even to mention!

In conclusion, this experimental data has no credibility. The same experiment has never been repeated by another independent group. The data were probably manipulated.

Quantoken

11:43 PM

 
Blogger Quantoken said...

Jesse:
On closer examination, it looks like your derivation is basically the same as my follow-up one, given that you are just taking the first-order approximation of the normal distribution.
Both yours and mine give a measured delta f linearly proportional to the count asymmetry. Yours leads to:
delta C = C * V / (2*(W/f)^2) * (delta f)
or, allowing V to be of the same order as (W/f), you have:
delta C = C / (2*(W/f)) * (delta f)

I had:
delta C = C * ((f-f0)/(df^2)) * delta f

Allowing f-f0 = sqrt(2)*df, I have:
delta C = C * 1.4 /(df) * delta f

So apart from a numerical factor of order unity and a difference in notation, our results are the same:

(delta C)/C = (delta f)/W

i.e., the relative accuracy of the count must be at least as good as the expected GR effect divided by the natural line width of the spectral line, just to see the GR effect at all.

The GR effect divided by natural line width is 3.5x10^-11 eV / 1x10^-8 eV = 3.5x10^-3.

So the relative accuracy of the count would have to be better than 3.5x10^-3 just to see the GR effect at all, never mind measuring it with great accuracy.

delta C / C = 1/sqrt(C)

1/sqrt(C) = 3.5x10^-3.

C = 1x10^5

So the count would have to reach at least hundreds of thousands just to make the asymmetry number meaningful.

To reach a 1% precision measurement, the count would have to increase by a factor of 10000.
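
The two count figures above come from the following arithmetic (a sketch of the same estimate; these are order-of-magnitude numbers only):

# Reproduces the two count estimates above from (delta C)/C = 1/sqrt(C).
shift_over_width = 3.5e-11 / 1e-8    # GR shift / natural line width, ~3.5e-3

C_detect = (1.0 / shift_over_width)**2             # just to resolve the shift
C_onepct = (1.0 / (0.01 * shift_over_width))**2    # 10^4 times more for a 1% test

print(f"counts just to resolve the shift: {C_detect:.1e}")   # order 10^5
print(f"counts for a 1% measurement:      {C_onepct:.1e}")   # order 10^9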

The whole question is whether they actually reached that many counts; the original paper never mentions what the count was. That's where the fraud is.

Quantoken

8:26 AM

 
Blogger JesseM said...

When you measure a light pulse, you don't have to measure it on a photon-by-photon level, and the peak of the energy or frequency or whatever should be the same as what you'd get if you averaged the measurement for each photon. I would guess that in this experiment, they were measuring pulses of gamma rays containing more than a hundred thousand photons, rather than a bunch of measurements of individual gamma ray photons, although I haven't studied the details.

8:40 AM

 
Anonymous Anonymous said...

It wasn't Jesse who gave the analysis, but me. I don't believe the experiment was a fraud by any means. But one would have liked a more detailed statistical analysis of how they worked with their data and arrived at their conclusion. Also, as with all experiments, I believe it is very important that someone reproduce it and its results using the same methods and come to the same conclusion. Only then can you really have a lot of confidence in it.

10:46 AM

 
Blogger Quantoken said...

Pulses of gamma rays, in the sense of pulses like electrical pulses, are impossible with current technology. We can only excite internal nuclear states using radioactive material and then let the random process of decay release gamma photons at random times and in random directions. So yes, in Mossbauer experiments we measure individual gamma photon events.

In the Harvard Tower experiment, the gamma source was prepared using a radioactive source of 0.4 curies, which is approximately 1.5x10^10 decays per second. So the raw emission rate would be very high. But since gamma photons are released omnidirectionally (in all directions), the fraction of gamma photons that travel 22.6 meters and land on a 2-inch-diameter disk is extremely small. And of the photons that do land on the disk, only a tiny fraction are actually absorbed by the Fe-57. I do not know how small a fraction; one would have to calculate it using the absorption cross-section of Fe-57.

The original paper avoids ever mentioning the photon count, or how the data and standard error were derived from the photon counts. That's where the fraud is. The fact is that no one has ever reproduced the same experiment with the same setup.

Further, the question is not whether the GR redshift conclusion is right or not: we know GR is right. The question is ethics in scientific research.

We have a precedent in which Sir Arthur Eddington manipulated his 1919 solar eclipse data to support GR. Many people know that data was completely worthless, but they keep teaching students about the great discovery of the 1919 solar eclipse. The scientific community never bothered to set the record straight. Why?

Quantoken

10:17 PM

 
Blogger JesseM said...

This comment has been removed by a blog administrator.

11:19 PM

 
Blogger JesseM said...

In the second-to-last paragraph they say "These data were collected in about 10 days of operation"...does this mean they collected data continuously for that period? You estimated earlier that they'd need at least 10000 data points to get the necessary reduction in uncertainty--assuming for the sake of argument that your estimate is about right, they'd only need a counting rate of about one photon a minute to achieve this if they collected data continuously for 10 days.
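
For reference, the implied rate works out as follows (a trivial sketch of the arithmetic, taking the 10000-count figure and 10 days at face value):

# 10,000 counts spread over 10 days of continuous running.
counts = 10_000
minutes = 10 * 24 * 60
print(f"required rate ~ {counts / minutes:.2f} counts per minute")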

11:41 PM

 
Blogger Quantoken said...

A count of 10000 would only be enough to see the GR effect at all; anything less would leave the GR effect buried in random noise. What I mean is that at 10000 counts the expected margin of error is as big as the expected GR effect itself. You would need 100,000,000 counts to obtain a margin of error that is only 1% of the expected GR effect. If they did not have that many counts and still claimed 1% accuracy, it could be purely by chance that the data happened to come out at 1.01 when it could have been anywhere from 0.5 to 1.5. Or, more likely, they simply manipulated the data by eliminating points they didn't like: those that did not give them the expected result.

Quantoken

3:47 PM

 
Blogger anymouse said...

"They take count samples --it is statistical--not photon by photon , plus their method used to determine very small shifts, so your use of the uncertainty principle is irrelevant and of no consequence."

All of these measurements are too small to be real. It's all fudged. Einstein is proven wrong with a compass.

http://www.aamorris.net/properganderatpropaganda/2016/1/19/einsteins-general-relativity-is-an-ether-theory

"Just remember for negative powers of 10:

For negative powers of 10, move the decimal point to the left.

So Negatives just go the other way.

Example: What is 7.1 × 10^-3?

Well, it is really 7.1 x (1/10 × 1/10 × 1/10) = 7.1 × 0.001 = 0.0071

But it is easier to think "move the decimal point 3 places to the left" like this:

7.1 → 0.71 → 0.071 → 0.0071"

https://www.mathsisfun.com/index-notation-powers.html

4:38 AM

 
