Care to Comment

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.


Could you share with us the measurements of those two distances?
 
You are somehow skipping several steps when you jump to working your alleged 1g error into the distance measurements in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.

I see you have since re-examined the collapse using a different tool, but let's put some numbers to the uncertainty since you seem to be questioning my math.

Here is a subset of data from page 7 of your paper:

Time (sec)    Roof fall distance (feet)
1.000         11.44
1.167         14.96
1.334         20.24

We can see that the average velocity between points 1 and 2 is 21.08 ft/s, and 31.62 ft/s between points 2 and 3. This gives an average acceleration of 63.11 ft/s2 over these three points. Since this is nearly 2g - which is impossible - we know your error is at least 1g. Back-calculating to the distances, we find that the related error is at most 0.89 feet for any of the three data points, which is a significant percentage of the change in fall distance between data points.
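
As a quick check, here is a minimal Python sketch of the arithmetic above; nothing beyond the three quoted data points is assumed:

    # Average velocities and acceleration from the three quoted points.
    t = [1.000, 1.167, 1.334]   # time (sec)
    d = [11.44, 14.96, 20.24]   # roof fall distance (feet)

    v12 = (d[1] - d[0]) / (t[1] - t[0])   # ~21.08 ft/s
    v23 = (d[2] - d[1]) / (t[2] - t[1])   # ~31.62 ft/s
    a = (v23 - v12) / (t[1] - t[0])       # ~63.11 ft/s2, nearly 2g
    print(v12, v23, a)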
 
I still see this implicit assumption that you actually know the resolution which you have in the video. All Tony did in his paper was divide some number of pixels by some other number and correct for what he thought was camera angle. The smallest things that I can see are some of the antennae on the television tower, which are larger than one foot (maybe three?). So it's absolutely no surprise to me that the results might show anything. If you can't demonstrate a specific one foot (one pixel) object, then I think that this whole attempt to detect a jolt based on video evidence is completely flawed at the outset. In the Verinage videos, what is the calculated resolution? No one seems to know that figure either. Making "more accurate" measurements doesn't get over this problem at all. How about this: what's the smallest object you can point to in any of these videos, and how big is it? I must be missing something here since no one else seems to be concerned about this - comment?
Rgrds-Ross
 
I still see this implicit assumption that you actually know the resolution which you have in the video. All Tony did in his paper was divide some number of pixels by some other number and correct for what he thought was camera angle. The smallest things that I can see are some of the antennae on the television tower, which are larger than one foot (maybe three?). So it's absolutely no surprise to me that the results might show anything. If you can't demonstrate a specific one foot (one pixel) object, then I think that this whole attempt to detect a jolt based on video evidence is completely flawed at the outset. In the Verinage videos, what is the calculated resolution? No one seems to know that figure either. Making "more accurate" measurements doesn't get over this problem at all. How about this: what's the smallest object you can point to in any of these videos, and how big is it? I must be missing something here since no one else seems to be concerned about this - comment?
Rgrds-Ross

Full details of various tracing methods are found in the following thread, from about page 15 onwards (a long read):
http://the911forum.freeforums.org/missing-jolts-found-film-at-11-t222.html
I'm not advocating the methods Tony used in the original paper, but the techniques used in that thread have developed such that very small displacements can be detected accurately, well sub-foot. See the image I posted earlier. I forgot to highlight its axes. The *smooth* line is vertical displacement of the NW corner in feet, with 59.94 samples per second base footage, against time in seconds. The *wobbly* line is velocity and is scaled correctly on the time axis, but arbitrarily on the vertical axis.
 
To appreciate the full sophistication of Tony Szamboti's argument, consider these facts concerning the data presented in the paper by MacQueen and Szamboti:

  1. The quantization error in the position data is plus or minus 1/2 pixel.
  2. Velocities are calculated from the positions by differencing (whether simple or balanced).
  3. The quantization errors for adjacent position measurements are independent, so the correct way to calculate the worst case quantization errors for those differences is to subtract the most negative possible position error from the most positive position error, and vice versa.
  4. 1/2-(-1/2)=1
  5. -1/2-(1/2)=-1
  6. Hence the quantization error in the differences used to calculate velocities is plus or minus 1 full pixel.
  7. Each pixel represents about 0.88 feet.
  8. Hence the quantization error in the calculated velocities is plus or minus 0.88 feet per interval.
  9. For simple differencing, the interval is 1/6 second.
  10. For balanced differencing, the interval is 1/3 second.
  11. Hence the quantization error in the calculated velocities is plus/minus 5.28 ft/s for simple differencing (the "per interval" means you divide by 1/6 second, which is the same as multiplying by 6) or plus/minus 2.64 ft/s for balanced differencing (you multiply 0.88 by 3).
  12. Accelerations are calculated from the velocities by differencing (whether simple or balanced).
  13. Although the errors in adjacent velocities are not entirely independent (the estimates for two adjacent velocities cannot both be at the high end of the quantization error, nor can they both be at the low end), the worst case difference for two adjacent velocity estimates comes when the quantization error for one of those estimates is at the high end and the other at the low end. That situation is entirely possible.
  14. Hence the correct way to calculate the worst case quantization errors for those differences is to subtract the most negative possible velocity error from the most positive velocity error, and vice versa.
  15. 5.28-(-5.28)=10.56
  16. -5.28-(5.28)=-10.56
  17. 2.64-(-2.64)=5.28
  18. -2.64-(2.64)=-5.28
  19. Hence the quantization error in the calculated accelerations is plus/minus 10.56 ft/s2 per interval for simple differencing, and plus/minus 5.28 ft/s2 per interval for balanced differencing.
  20. Hence the total quantization error for the calculated accelerations is plus/minus 63.36 ft/s2 for simple differencing (you multiply by 6), and plus/minus 15.84 ft/s2 if the accelerations are also calculated via balanced differencing (you multiply by 3).
  21. Notice, however, that the MacQueen and Szamboti paper never calculates, tabulates, or graphs accelerations at all; the paper tabulates and graphs velocities only, and compares adjacent velocities visually, as Tony has continued to do in this thread.
  22. That's equivalent to calculating the accelerations via simple differencing from velocities calculated via balanced differencing.
  23. The total quantization error for accelerations calculated via simple differencing from velocities calculated via balanced differencing is plus or minus 31.68 ft/s2 (obtained by dividing the plus/minus 5.28 ft/s quantization error of the calculated velocities by the 1/6 second that separates adjacent estimates of the velocity).
  24. 31.68 ft/s2 is approximately 1g.
  25. Hence the quantization error for the accelerations that Tony Szamboti has been discussing in this thread is plus or minus 1g.
  26. To compute the total error bound for the calculated accelerations, we'll have to add measurement error to that plus/minus 1g quantization error.
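
For readers who prefer to verify the propagation numerically, here is a minimal Python sketch of the steps above; the 0.88 ft/pixel scale and the 1/6 s sample interval are taken from the list, and everything else is plain arithmetic:

    # Worst-case quantization error propagated from position to acceleration.
    PIXEL_FT = 0.88    # feet per pixel (step 7)
    DT = 1.0 / 6.0     # seconds between position samples

    pos_err = 0.5 * PIXEL_FT              # steps 1 and 7: +/-0.44 ft
    diff_err = 2 * pos_err                # steps 3-8: errors add, giving 0.88 ft

    v_err_simple = diff_err / DT          # step 11: 5.28 ft/s
    v_err_balanced = diff_err / (2 * DT)  # step 11: 2.64 ft/s

    a_err_simple = 2 * v_err_simple / DT            # step 20: 63.36 ft/s2
    a_err_balanced = 2 * v_err_balanced / (2 * DT)  # step 20: 15.84 ft/s2
    a_err_mixed = 2 * v_err_balanced / DT           # step 23: 31.68 ft/s2, ~1g

    print(a_err_simple, a_err_balanced, a_err_mixed)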

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.
What's going on here is that Tony Szamboti is warning us against taking the data presented in his paper too seriously. He wants us to add some positive measurement error to the inherent plus or minus 1g quantization error of the accelerations he gets by simple differencing of velocities calculated via balanced (symmetric) differencing.

You are somehow skipping several steps when you jump to working your alleged 1g error into the distance measurements in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.
So I filled in the skipped steps. As for that last sentence...

Feckless arrogance rocks.

I see you have since re-examined the collapse using a different tool, but let's put some numbers to the uncertainty since you seem to be questioning my math.

Here is a subset of data from page 7 of your paper:

Time (sec)    Roof fall distance (feet)
1.000         11.44
1.167         14.96
1.334         20.24

We can see that the average velocity between points 1 and 2 is 21.08 ft/s, and 31.62 ft/s between points 2 and 3. This gives an average acceleration of 63.11 ft/s2 over these three points. Since this is nearly 2g - which is impossible - we know your error is at least 1g. Back-calculating to the distances, we find that the related error is at most 0.89 feet for any of the three data points, which is a significant percentage of the change in fall distance between data points.
 
Thank you WD and AZCat, I love math porn.

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.

Tony
I think by now the data should be "done."
What are those two measurements for the distance traveled?
 
Tony
I think by now the data should be "done."
What are those two measurements for the distance traveled?

The data taken with the Tracker program gives the following values at the times between 1.500 and 2.000 seconds into the fall, which we were discussing. The data is Time (sec), Vertical distance traveled (ft.), and Delta distance traveled (ft.) from the last measurement.

1.500 sec, 17.361 ft.
1.667 sec, 22.055 ft., 4.694 ft.
1.834 sec, 27.395 ft., 5.340 ft.
2.000 sec, 33.487 ft., 6.092 ft.

In the entire overall measurement set, taken over about 3.3 seconds, the distance traveled in a given time increment is always greater than it was in the previous time increment of equivalent length, so the velocity is constantly increasing. I will be updating the Missing Jolt paper with data taken using the Tracker program.
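
A minimal sketch, assuming only the four Tracker values quoted above, that recomputes the deltas and the implied average velocities over the nominal 1/6 s interval:

    # Delta distances and average velocities from the quoted Tracker data.
    DT = 1.0 / 6.0                              # nominal sample interval (sec)
    data = [(1.500, 17.361), (1.667, 22.055),
            (1.834, 27.395), (2.000, 33.487)]   # (time sec, distance ft)

    for (t0, d0), (t1, d1) in zip(data, data[1:]):
        delta = d1 - d0   # each delta exceeds the last: velocity is increasing
        print(f"{t0:.3f}-{t1:.3f} s: {delta:.3f} ft, avg {delta / DT:.2f} ft/s")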
 
I will be updating the Missing Jolt paper with data taken using the Tracker program.

Will you, in the meantime, withdraw the current version of the paper, which we now all know is based on a dataset that exhibits noise artefacts that mimic the effect whose absence is claimed in the discussion? Will you also submit your alterations to the same peer review process that failed to spot this glaring error in the paper, or will you be looking for some better reviewers?

Dave
 
The data taken with the Tracker program gives the following values at the times between 1.500 and 2.000 seconds into the fall, which we were discussing. The data is Time (sec), Vertical distance traveled (ft.), and Delta distance traveled (ft.) from the last measurement.

1.500 sec, 17.361 ft.
1.667 sec, 22.055 ft., 4.694 ft.
1.834 sec, 27.395 ft., 5.340 ft.
2.000 sec, 33.487 ft., 6.092 ft.
Just for grins, let's compare that new data to the data on page 7 of the current version of the paper. While we're at it, let's add the velocities computed by dividing the simple differences shown above by 1/6 second.

The new position data, simple differences, and velocities are listed first in each comparison below (shown in green in the original post), with the old position data, simple differences, and velocities listed second (shown in brown).

Position data:
1.500 sec: 17.361 ft versus 25.52 ft
1.667 sec: 22.055 ft versus 32.56 ft
1.834 sec: 27.395 ft versus 38.72 ft
2.000 sec: 33.487 ft versus 45.76 ft

Simple differences:
1.500-1.667 sec: 4.694 ft versus 7.04 ft
1.667-1.834 sec: 5.340 ft versus 6.16 ft
1.834-2.000 sec: 6.092 ft versus 7.04 ft

Velocities calculated from simple differences:
1.500-1.667 sec: 28.164 ft/s versus 42.24 ft/s
1.667-1.834 sec: 32.040 ft/s versus 36.96 ft/s
1.834-2.000 sec: 36.552 ft/s versus 42.24 ft/s

At least one of the following statements must be true:
  • The raw data presented in the current version of the paper are really, really bad; so bad, in fact, that every argument that has ever been based upon that data should be retracted/rejected pending analysis of better data.
  • The new and old data were measured using different origins for time, which means Tony was wrong when he said above that the new data for 1.5-2.0 seconds correspond to the old data for the interval we've been discussing.
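
For anyone who wants to reproduce the comparison, here is a minimal Python sketch using only the position data quoted above (new Tracker values first, old page-7 values second):

    # Old-versus-new velocities via simple differences over the nominal 1/6 s.
    DT = 1.0 / 6.0
    new = [17.361, 22.055, 27.395, 33.487]   # ft at 1.500, 1.667, 1.834, 2.000 s
    old = [25.52, 32.56, 38.72, 45.76]

    for i in range(3):
        v_new = (new[i + 1] - new[i]) / DT
        v_old = (old[i + 1] - old[i]) / DT
        print(f"interval {i + 1}: {v_new:.3f} ft/s versus {v_old:.2f} ft/s")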
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave

Who knows? Tony is still reading this forum (his last activity was last night, according to his profile) so maybe he'll deign to enlighten us.
 
Oh, BLAH-DEE-BLAH, BLAH! Why does there need to have been a jolt in the first place? This was not exactly like Verinage. It was about as close as you can get to it, but not quite.

There would be a jolt IF a separated geometric solid fell onto the standing geometric solid that was the lower floors of the towers. The top part just settled very quickly onto the bottom part. Instead of a jolt, you get the vertical movement converted to horizontal movement at varying points. The rotating of the top of the south tower shows this very clearly.

Stop obsessing over number-crunching and just look at the damned towers. You don't even need to know the compressibility of any of the columns. They weren't destroyed by compression. Bazant was a bit off when he said that crush-up would be the final stage of the collapse, but his reasoning was sound. Just not that great an observer.
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave

I think the quote "the man doth protest too much" is appropriate here.

The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever in the fall of WTC 1, so the premise of the paper is sound.

I will be revising the paper shortly to use the more accurate data which more soundly supports the premise.

Zdenek Bazant had more than just artifacts in his papers on this issue and there has never been a revision to correct those errors let alone a retraction.
 
The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever

I think you need to clarify *no deceleration*.

[image: graph of vertical displacement (smooth line) and velocity (wobbly line) against time]


The lower line is velocity, but I hope it's clear that at least one velocity *decrease* is evident to you.

I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?
 
The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever in the fall of WTC 1, so the premise of the paper is sound.
:p

The paper's premise was
...a refutation that is:

  • easy to understand but reasonably precise
  • capable of being stated briefly
  • verifiable by any reader with average computer skills and a grasp of simple mathematics.
As readers with a grasp of grade-school arithmetic have known for several months now, the raw unsmoothed data presented by MacQueen and Szamboti refute the main claim of their paper.

Even if that had not been so, the raw data presented in their paper have neither the accuracy nor the resolution necessary to support the authors' primary claim (that no deceleration occurred).

I will be revising the paper shortly to use the more accurate data which more soundly supports the premise.
The new data could hardly support the premise less soundly than the old.

Tony's been claiming that his raw position data were accurate to within ±0.44 ft. Until, that is, he decided to present "more accurate data" which differ from his allegedly ±0.44 ft data by 8 to 12 feet.

Tony's new claims imply he has been exaggerating the accuracy of his data by a factor of more than 15, which means the accelerations derived from his data contained potential errors of over ±15g.
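
The factor-of-15 arithmetic is easy to check; a minimal sketch, assuming (as in the quantization analysis earlier in the thread) that the acceleration error scales linearly with the position error:

    # If +/-0.44 ft of position error implies +/-1g of acceleration error,
    # an 8-12 ft discrepancy implies a proportionally larger error.
    claimed_err_ft = 0.44
    for actual_err_ft in (8.0, 12.0):
        factor = actual_err_ft / claimed_err_ft   # ~18x to ~27x, i.e. over 15
        print(f"{actual_err_ft} ft off -> ~{factor:.0f}x claimed error, ~{factor:.0f}g")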

Remembering that history, I trust femr2's data more than Tony's:
I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?
Yes, there is much to be said for pre-publication peer review.

Had MacQueen and Szamboti availed themselves of competent peer review, the paper's failings would have been communicated to the authors via confidential channels, and all of us would have been deprived of the ensuing public hilarity.
 
I think you need to clarify *no deceleration*.

[image: http://femr2.ucoz.com/_ph/6/2/378476413.jpg]

The lower line is velocity, but I hope it's clear that at least one velocity *decrease* is evident to you.

I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?

You should put some labels on your axes so one can tell what you measured.

I haven't taken a second derivative of the Tracker measurements yet and I will say again that decreases in the rate of acceleration aren't germane to the argument. Real deceleration is needed to cause load amplification. It just isn't there.

It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
 
You should put some labels on your axes so one can tell what you measured.
True. The axes are vertical drop of the NW corner, in feet, and time, in seconds... for the *smooth* position graph. The *wobbly* velocity graph is correctly aligned on the time axis, but arbitrary on the vertical axis. As my intention was simply to find the low magnitude jolts, it's not too important. If you want the raw position data, no probs. (I'm sure I've already given you the link, and the data itself in CSV form.)

I haven't taken a second derivative of the Tracker measurements yet and I will say again that decreases in the rate of acceleration aren't germane to the argument.
But as I've tried to remind you, the argument doesn't take account of the lack of upper block rigidity, the probability of CC jolts actually transmitting to the NW corner, or the recent FEA showing rapid jolt magnitude reduction as distance from impact location increases (and that's with simple perimeter assembly FEA).

My point, however, was simply to highlight that stating *no deceleration whatsoever* ain't a great thing to say.

Real deceleration is needed to cause load amplification. It just isn't there.
Please define *real* deceleration, as opposed to acceleration rate reduction.

It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
To be honest, what's the point? As long as the tracking method uses deinterlaced, stabilised DVD quality footage at 59.94fps and preferably automated on visual features to output sub-pixel feature position (by eye is just no good), and again preferably with static feature position subtraction, most definitely using full 24bit colour depth... the resultant base data is all going to be fairly similar. As it's jolts that are being looked for, accounting for viewpoint perspective is not a big issue.

As derivatives are taken, there is likely to be differing smoothing methods used, but with suitably high sample rates (59.94sps) I see no reason that quite wide symmetric differencing should not be acceptable.
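
A minimal sketch of the wide symmetric differencing described above; `pos`, `fps`, and `half_width` are my names (not femr2's), and `pos` is assumed to hold one sub-pixel vertical position (in feet) per frame:

    # Wide symmetric (central) difference: velocity from position samples.
    def central_velocity(pos, fps=59.94, half_width=5):
        dt = 1.0 / fps
        k = half_width
        return [(pos[i + k] - pos[i - k]) / (2 * k * dt)
                for i in range(k, len(pos) - k)]

Widening `half_width` trades time resolution for noise suppression, which seems to be the point of the remark about quite wide symmetric differencing being acceptable at high sample rates.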

There's no massive jolt. It's not there. Some little ones, but that's all. I think everyone is clear on that.

What would be really interesting is application of some structural engineering knowledge I personally don't have to determine the actual effect of the non-rigid structure (and wotnot, including the very minor tilt) on the transmission of any actual large jolts through the various building elements to the NW corner.

Even if a large section of core *vanished* there are still many building elements colliding, the jolts from which do not reach the roofline in any sort of scale you expect, which is a bit of a paradox for the argument.
 
It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
The claims are yours, Tony. As I told you several months ago, the very first thing you should have done was to determine whether the available data are good enough to detect the jolt you think is missing.

They weren't. Had you taken a moment to consider the Nyquist rate for your alleged 90ms jolt, or performed a forward error analysis as described in chapter 1 of R W Hamming's Numerical Methods for Scientists and Engineers, then you'd have known better than to go public with your argument. Estimating the Nyquist rate should have taken you about two seconds; if your math skills are rusty, then the forward error analysis might have taken you a couple of minutes.
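
The two-second Nyquist estimate looks roughly like this, assuming a 90 ms jolt requires capturing frequency content out to about 1/duration:

    # Back-of-envelope Nyquist check for a ~90 ms jolt.
    jolt_s = 0.090
    f_max = 1.0 / jolt_s           # ~11.1 Hz of content to capture
    nyquist_rate = 2.0 * f_max     # ~22.2 samples per second needed
    paper_rate = 6.0               # the paper's positions come at 6 per second
    print(nyquist_rate, paper_rate, paper_rate >= nyquist_rate)  # 22.2 6.0 False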

Your argument divides into two main parts:
  1. The upper block fell so cleanly onto the lower block that one would expect one large, clear jolt instead of a near-continuous cascade of lesser jolts.
  2. Observations show there was no jolt.
Your own data showed an apparent jolt, and it was gob-smackingly obvious that your data weren't good enough to rule out the possibility of other unobserved jolts. That's why we've been discussing your spectacular failure to establish the second point.

Note well, however, that you haven't established the first point either. I expect to see less than 1g acceleration (as in the data) but I do not expect to see a single jolt that's large enough to show up in the best possible analysis of the available data. Yes, I understand how there could be such a jolt; I also understand how there might not be such a jolt. I therefore have no reason to care about the second part of your argument: It doesn't matter whether the downward acceleration was relatively smooth or was punctuated by large jolts.

Because I understand that it doesn't matter, there is no reason for me to take better measurements. If you persist in your two-part argument, however, then you will need far better data to support the second part of your argument, and you will also need a far more convincing argument for the first part as well.
 
The claims are yours, Tony. As I told you several months ago, the very first thing you should have done was to determine whether the available data are good enough to detect the jolt you think is missing.

They weren't. Had you taken a moment to consider the Nyquist rate for your alleged 90ms jolt, or performed a forward error analysis as described in chapter 1 of R W Hamming's Numerical Methods for Scientists and Engineers, then you'd have known better than to go public with your argument. Estimating the Nyquist rate should have taken you about two seconds; if your math skills are rusty, then the forward error analysis might have taken you a couple of minutes.

Your argument divides into two main parts:
  1. The upper block fell so cleanly onto the lower block that one would expect one large, clear jolt instead of a near-continuous cascade of lesser jolts.
  2. Observations show there was no jolt.
Your own data showed an apparent jolt, and it was gob-smackingly obvious that your data weren't good enough to rule out the possibility of other unobserved jolts. That's why we've been discussing your spectacular failure to establish the second point.

Note well, however, that you haven't established the first point either. I expect to see less than 1g acceleration (as in the data) but I do not expect to see a single jolt that's large enough to show up in the best possible analysis of the available data. Yes, I understand how there could be such a jolt; I also understand how there might not be such a jolt. I therefore have no reason to care about the second part of your argument: It doesn't matter whether the downward acceleration was relatively smooth or was punctuated by large jolts.

Because I understand that it doesn't matter, there is no reason for me to take better measurements. If you persist in your two-part argument, however, then you will need far better data to support the second part of your argument, and you will also need a far more convincing argument for the first part as well.

Unfortunately for you, the Verinage demolitions refute what you are saying. They all show large decelerations, which have been observed every time someone measures their falls. The Verinage demolitions need the dynamic load caused by the deceleration of the upper section in order to continue their collapse.

The lack of a jolt in WTC 1 proves there was no dynamic load, and there is no other natural way for the building to collapse with the large reserve strength in the columns below. The tilt does not explain it, as even separate impacts would show a deceleration, and there is no chance all of the columns missed each other.

I think those of you who claim to believe these buildings could have collapsed naturally without a deceleration are playing games. There isn't a chance that could have happened and all of your posturing won't change that reality.
 
Unfortunately for you, the Verinage demolitions refute what you are saying. They all show large decelerations, which have been observed every time someone measures their falls. The Verinage demolitions need the dynamic load caused by the deceleration of the upper section in order to continue their collapse.

The lack of a jolt in WTC 1 proves there was no dynamic load, and there is no other natural way for the building to collapse with the large reserve strength in the columns below. The tilt does not explain it, as even separate impacts would show a deceleration, and there is no chance all of the columns missed each other.

I think those of you who claim to believe these buildings could have collapsed naturally without a deceleration are playing games. There isn't a chance that could have happened and all of your posturing won't change that reality.

This discussion would go a lot more smoothly if you used the same language as every other engineer. Examples of your terminology disconnect are the use of "deceleration" in previous posts, and the phrase "dynamic load" in the one referenced here. I know how the rest of us would define dynamic load, but it seems you are using some other definition (based on context). Can you explicitly define this term so I (and the others here) know what you mean?
 
Unfortunately for you, the Verinage demolitions refute what you are saying. They all show large decelerations, which have been observed every time someone measures their falls.
I addressed that objection in a previous post.

You ignore that, of course, pretending no one has ever noticed the holes in your arguments.

The lack of a jolt in WTC 1 proves there was no dynamic load, and there is no other natural way for the building to collapse with the large reserve strength in the columns below. The tilt does not explain it, as even separate impacts would show a deceleration, and there is no chance all of the columns missed each other.
Bare assertion is not proof, especially when the person doing the asserting has also told such whoppers as
  • The WTC towers were accelerating at 1g from the time they were built to the time they collapsed.
  • The alleged ±0.44 ft quantization error of his position data does not imply a ±1g error for acceleration.
  • His new improved position data are 8 to 12 feet off from his old position data, whose error was allegedly ±0.44 ft.
As for your claim that even separate impacts would show a deceleration, you persist in failing to understand how a tilt can give rise to a sustained cascade of jolts that are too small and short to show up as individual jolts, given the quantization and discretization errors and resolution of your data, and show up instead as a decrease in the average acceleration from 1g to 0.7g.

I think those of you who claim to believe these buildings could have collapsed naturally without a deceleration are playing games. There isn't a chance that could of happened and all of your posturing won't change that reality.
Yes, it always comes down to that same old argument from incredulity, coupled with your unshakeable belief that, notwithstanding your oft-demonstrated difficulties with undergraduate physics and mathematics, not to mention your unreasoning fear of competent peer review, you're always the smartest guy in the room.

Feckless arrogance rocks.
 
I think the quote "the man doth protest too much" is appropriate here.

The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever in the fall of WTC 1, so the premise of the paper is sound.

If you're happy for a paper to remain in publication under your name despite the fact that the data presented in that paper has been shown not to support its conclusions, I have no problem with that; it's just that it's a clear indication of the fact that you've worked backwards from a conclusion to the data, and that you don't really care what the data says as long as you can fabricate an argument that leads to the conclusion you want. So, by all means, don't hurry to correct your errors; they're more instructive by far than your conclusions.

Dave
 
Just out of interest, by the way, Tony: If the resistance of the lower structure has been removed by explosives, what exactly is providing the resistive force that reduces the acceleration of the upper block to about 0.7g?

Dave
 
Just out of interest, by the way, Tony: If the resistance of the lower structure has been removed by explosives, what exactly is providing the resistive force that reduces the acceleration of the upper block to about 0.7g?

Dave

The way I believe it was done was to remove the strength of the outer core columns and the corners of the perimeter.

The smaller inner core columns would then buckle under the full static load and the perimeter walls would petal outward. While these remaining structural elements could not sustain the load they would provide some level of resistance, restraining it from full freefall acceleration.
 
If you're happy for a paper to remain in publication under your name despite the fact that the data presented in that paper has been shown not to support its conclusions, I have no problem with that; it's just that it's a clear indication of the fact that you've worked backwards from a conclusion to the data, and that you don't really care what the data says as long as you can fabricate an argument that leads to the conclusion you want. So, by all means, don't hurry to correct your errors; they're more instructive by far than your conclusions.

Dave

You are being disingenuous saying the hand data does not support the conclusions. The average velocity was still greater despite an artifact with a lower velocity. That can't happen with a real deceleration.

You seem to be grasping at anything to try and obfuscate the fact that the lack of deceleration in WTC 1 shows it could not have been a natural collapse.
 
You seem to be grasping at anything to try and obfuscate the fact that the lack of deceleration in WTC 1 shows it could not have been a natural collapse.

Sorry, Tony, but your solemn declaration that it could not have been a natural collapse has had enough doubt cast on it, not just in this forum but in the engineering community in general, that you will have to pardon me if I don't take what you say as gospel, but require a second opinion.

Until there is some kind of respected engineering or scientific journal that backs you up, this layman respectfully believes you're full of it.
 
You are being disingenuous saying the hand data does not support the conclusions. The average velocity was still greater despite an artifact with a lower velocity. That can't happen with a real deceleration.

You're the one being disingenuous. In effect, you're saying that any negative spike in the acceleration is impossible, therefore your data proves that there was no negative spike despite the fact that it actually shows one. But, as I said, you've made your bias clear by refusing to withdraw the paper because it presents the right conclusion even though that conclusion doesn't follow from the data. I just wish you could be honest enough to admit to yourself that you'd reach the same conclusion whatever data you were presented with.

Dave
 
The way I believe it was done was to remove the strength of the outer core columns and the corners of the perimeter.

The smaller inner core columns would then buckle under the full static load and the perimeter walls would petal outward. While these remaining structural elements could not sustain the load they would provide some level of resistance, restraining it from full freefall acceleration.

So why wouldn't you expect to see jolts from the impact of the upper block on the remaining structural elements? You see, all you're doing here is reducing the instantaneous force proportionately; you're doing nothing to smooth out the jolts, just making them smaller. You'd therefore expect, if your beliefs about the jolt had any validity, to see periods of freefall acceleration, alternating with jolts, to give an average acceleration of 0.7g. But that's not what's actually seen; what's actually seen is a roughly constant acceleration with only very small jolts.

You're in a cleft stick here, unfortunately. If you argue that the jolts would be smoothed out if only the inner core columns were providing resistance, then by the same argument they would be equally smoothed with all the columns providing resistance. Conversely, if you insist that there must have been discontinuities in the acceleration with the full column set present, then there must have been proportionately sized discontinuities in the acceleration with the reduced set.

But at least you started your post accurately; this is a statement of belief, not of analysis leading to a conclusion. We all know that; you should try to understand it yourself.

Dave
 
Ya know, in all the discussion of a jolt, and the tilts, I've yet to see a single mention of eccentric loading being the factor responsible for weakening the structure. Which would further smooth the overall impacts from floor to floor. I believe it's irresponsible to think that the only weakening mechanism possible is man-made. For Christ's sake, we learn why collapse initiation is prevented as best as possible in college, before we design the real deal. In what world do we learn this "as we go"? :\
 
"Real deceleration" differs from "deceleration" how, exactly?

I was trying to figure this out myself. Perhaps it is a pseudo force, akin to centrifugal, that arises when the observer is in a rotational frame of reference?

Such a case could arise if your head were spinning.
 
You're the one being disingenuous. In effect, you're saying that any negative spike in the acceleration is impossible, therefore your data proves that there was no negative spike despite the fact that it actually shows one. But, as I said, you've made your bias clear by refusing to withdraw the paper because it presents the right conclusion even though that conclusion doesn't follow from the data. I just wish you could be honest enough to admit to yourself that you'd reach the same conclusion whatever data you were presented with.

Dave

That's it in a nutshell. Anything he doesn't like he ignores or handwaves. It's like his imaginary Silverstein confession. It is not just ignorance, it is dishonesty.
 
Ya know, in all the discussion of a jolt, and the tilts, I've yet to see a single mention of eccentric loading being the factor responsible for weakening the structure. Which would further smooth the overall impacts from floor to floor. I believe it's irresponsible to think that the only weakening mechanism possible is man-made. For Christ's sake, we learn why collapse initiation is prevented as best as possible in college, before we design the real deal. In what world do we learn this "as we go"? :\

If the north face of WTC 1 collapsed due to the separate impact theory espoused by Dave Rogers and some others here, it would have had to decelerate itself, and that deceleration would be observable.

Sorry, but the "eccentric loading smooths out the jolt" argument doesn't explain the fact that the separate decelerations would themselves be observable in a natural event.
 
I was trying to figure this out myself. Perhaps it is a pseudo force, akin to centrifugal, that arises when the observer is in a rotational frame of reference?

Such a case could arise if your head were spinning.

See answer below.
 
If the north face of WTC 1 collapsed due to the separate impact theory espoused by Dave Rogers and some others here, it would have had to decelerate itself, and that deceleration would be observable.

Wrong. Only in the vanishingly unlikely case of every column of the face making a simultaneous axial impact on the column below would a deceleration be observable. Any variation in the lengths of the initial column buckles, any significant amount of tilt or any significant degree of non-axial impact would distribute the impact sufficiently that no deceleration would be observable.

Dave
 
