The physics toolkit

Well femr,

I hate to be the bearer of bad news regarding your video data, but...

In short, either:

A. The measured point on that building is undergoing absurd levels of acceleration (10 to 80 G's).
B. I can't program on the fly like I used to.
C. There are some serious artifacts that your video technique is introducing into your data.

I vote "C".

I can assure you that A is false. No building is going to withstand that sort of acceleration without flying apart.

I can pretty much assure you that B is true. But I have reason to believe that my program is behaving (i.e., I tested it with some well-behaved artificial data and it worked perfectly).

So that leaves C.

It's certainly possible that I've transposed a bracket or misplaced a comma. Such is the nature of Mathematica programming. We'll see if / as we get into this.

Your raw data produces squirrely results. I believe that I know why. We'll see as this conversation evolves.

This was pretty much a colossal waste of time that I don't have. That said, I've got the program now, & can turn data into finished analysis in seconds.

All told, I'd guess that I've got eight friggin' hours in this. When will I learn...?

Most of that time was wasted trying to chase down why the program adamantly refuses to produce an exponential model of the same type that NIST used on their data. (That's a mystery that I'm not likely to bother chasing down.)

Not even a high order polynomial would fit all the data well at both extremes. So I broke it up into two:

1. a 3rd order polynomial for 0 < t < 1.3 seconds
2. a 5th order polynomial for 1.3 < t < 4.7 seconds (end of data).

Note that I re-indexed the time reference such that the ultimate descent starts at 0 seconds.
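
If you want to play along without Mathematica, here is a minimal sketch of the same piecewise fit in Python/NumPy. (Illustrative only, not my actual Mathematica code; the placeholder t and drop arrays stand in for the time and drop-distance columns, and the 1.3-second break point and polynomial orders are as described above.)

    import numpy as np

    # Placeholder data so the sketch runs; substitute the real time (s) and
    # drop-distance (ft) columns, re-indexed so the descent starts at t = 0.
    t = np.linspace(0.0, 4.7, 282)
    drop = 0.5 * 32.17 * t**2 + np.random.normal(0.0, 0.3, t.size)

    early = t <= 1.3                               # 0 < t < 1.3 s
    late = t >= 1.3                                # 1.3 s < t < 4.7 s
    p1 = np.polyfit(t[early], drop[early], 3)      # 3rd-order fit, early segment
    p2 = np.polyfit(t[late], drop[late], 5)        # 5th-order fit, late segment

    def model(ti):
        # Piecewise empirical drop model: polynomial #1 below 1.3 s, #2 above.
        ti = np.asarray(ti, dtype=float)
        return np.where(ti < 1.3, np.polyval(p1, ti), np.polyval(p2, ti))

    residual = drop - model(t)                     # "residual" = data - model
    print("typical residual (ft):", np.abs(residual).mean())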

I'll post the program, the resulting images & a (brief) discussion in the next three posts.


Tom
 
The Mathematica Code:

If you know someone with the program, you can input it & run it yourself.

[attached images of the Mathematica code]

That's it.
 
The Results:

Graphs:

1. Your data, plotted as points.
2. Your data, first 1 second expanded.
3. Your data plus empirical equation #1 (red line, applies from 0 to 1.3 sec).
4-6. Your data plus empirical equations #1 (red line) & #2 (blue line, applies from 1.3 sec to 4.7 sec).

You can see that the fit & the transition between the lines are good.

[attached graphs]


The Empirical Equations for drop, velocity & acceleration vs. Time.

The next graph is the "residual" (=data - model) for drop distance. Not bad. Typical variation < about 1 foot.

But the next graph starts to show the real problem...

Drop velocity vs. time.

The solid curve is the empirical equation.

The dots are from your data points, calculated as a "balanced difference". That is, the velocity is given by (DropPoint[i+1] - DropPoint[i])/(Time[i+1] - Time[i]). This value is set at the midpoint of the sample times (= Time[i] + 0.5 dt, where dt = Time[i+1] - Time[i], a constant for your data).

The empirical equation velocity is also calculated at this midpoint time.
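
In code form, that "balanced difference" is nothing more than this (a sketch in Python/NumPy rather than Mathematica; t and drop stand for the raw time and drop arrays):

    import numpy as np

    def balanced_difference(t, drop):
        # Difference between successive samples, assigned to the midpoint time:
        # v = (drop[i+1] - drop[i]) / (t[i+1] - t[i]) at time t[i] + 0.5*dt.
        dt = np.diff(t)
        v = np.diff(drop) / dt
        t_mid = t[:-1] + 0.5 * dt
        return t_mid, v

    # Example with a constant dt of 1/59.94 s and an idealized free-fall drop:
    t = np.arange(0.0, 4.7, 1.0 / 59.94)
    drop = 0.5 * 32.17 * t**2
    t_mid, v = balanced_difference(t, drop)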

As mentioned, I used one empirical equation for t< 1.3 sec & a different one for t > 1.3 seconds. The discontinuities at 1.3 seconds are not surprising.

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.

I strongly believe that this scatter is an artifact of the errors in your measurement technique.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.

Which, of course, gets rid of all the high frequency components that your data shows.

But here's the rub: I do NOT believe that those high frequency changes in velocity are real. I believe that they are an artifact of your setup (camera, compression, analysis technique (i.e., pixellation), etc.).

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
___

The acceleration is handled exactly the same as the velocity was.

But now, you can see that you're manipulating velocity data that has huge artifacts built in.

This makes the calculated acceleration data absurd: over 2500 ft/sec^2 or ~80G's.

Here are the curves:

[attached graph]



And, last, here are the results of the empirical equation "best fit":

[attached graph]


The "best fit" to your drop distance vs. time data produces 41 ft/sec^2 (about 1.3Gs of acceleration initially, decreasing to about 33 ft/sec^2 (just above 1 G) over the first 1.3 seconds.

Sorry, I don't believe this for a second.

I've got other things that I've got to do.

I'll talk to you about this over the next couple of days.

I can send you the raw data, but you can just as easily input the empirical equations into an Excel spreadsheet & create the graphs yourself.


Tom
 
The Results:

Graph:
Can't see your graphs. Can't see them even if I C&P the URL on edit.

But the next graph starts to show the real problem...
I know what's coming, of course. Yes, there's noise when going to velocity and acceleration. 3DP (three-decimal-place) sub-pixel tracing at 59.94 samples per second includes an amount of noise that must be smoothed.

I generally use 9-sample-wide symmetric differencing, though I have been looking at the effectiveness of XlXtrFun for high-order least-squares fits.
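
For the avoidance of doubt, by an N-sample-wide symmetric difference I mean something along these lines (a Python sketch of the idea rather than my actual spreadsheet formulas; width=9 here, and the same routine with width=29 is what I refer to later):

    import numpy as np

    def symmetric_difference(t, x, width=9):
        # Centred difference spanning `width` samples (width odd), i.e.
        # v[i] = (x[i+k] - x[i-k]) / (t[i+k] - t[i-k]) with k = (width-1)//2.
        # Wider windows suppress sample-to-sample noise at the cost of time resolution.
        k = (width - 1) // 2
        v = (x[2 * k:] - x[:-2 * k]) / (t[2 * k:] - t[:-2 * k])
        return t[k:-k], v

    # e.g. 9-sample-wide velocity, then the same again for acceleration:
    t = np.arange(0.0, 4.7, 1.0 / 59.94)
    x = 0.5 * 32.17 * t**2 + np.random.normal(0.0, 0.1, t.size)
    tv, v = symmetric_difference(t, x, width=9)
    ta, a = symmetric_difference(tv, v, width=9)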

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.
Of course they will. No option but to smooth or curve fit. Would have thought that would be obvious to you. Relative to all other datasets the accuracy of each individual datapoint is unrivalled. Throw away 9 out of ten samples if you like...

I strongly believe that this scatter is an artifact of the errors in your measurement technique.
Well, yes, but those *errors* are (in terms of the raw footage) sub-pixel (as soon as you deal with deinterlace jitter). More to do with the procedure you are using. Of course it's going to amplify the very small measurement errors if you don't smooth/curve fit, and especially more so if you use adjacent points to determine velocity. Use a wider band.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.
Whoop, whoop. Yes, 14 months of working with data like this has resulted in honing techniques and procedures for actually using it. You asked for the raw data, so that's what I gave you. The first thing to do is apply a 2-sample running average to iron out any interlace jitter. Other smoothing processes must follow.
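
(By way of illustration only, a Python sketch of that first step, not my actual spreadsheet formula:)

    import numpy as np

    def two_sample_average(x):
        # Average each pair of adjacent samples. Deinterlace jitter alternates
        # in sign from frame to frame, so averaging adjacent samples largely
        # cancels it while barely touching the underlying motion.
        x = np.asarray(x, dtype=float)
        return 0.5 * (x[1:] + x[:-1])

    # Toy example: a smooth drop with +/- 0.2 px of alternating jitter added.
    raw = np.linspace(0.0, 10.0, 20) + 0.2 * (-1.0) ** np.arange(20)
    smoothed = two_sample_average(raw)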

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
That would just be silly.

The "best fit" to your drop distance vs. time data produces 41 ft/sec^2 (about 1.3Gs of acceleration initially, decreasing to about 33 ft/sec^2 (just above 1 G) over the first 1.3 seconds.
Horrible, isn't it? I've been replicating using the NIST Camera #3 view, which has highlighted slight scaling issues, but not much. The maximum acceleration of the NW corner still reaches 37ft/s^2. Bizarre.

Interesting to note, however, that the NIST curve fit also exceeds 32.2ft/s^2 ;)

Sorry, I don't believe this for a second.
I don't like it myself, and I'm trying to find reasons for it. First dip is the base NIST scaling, which is a couple of feet off. Next port of call is to look at various video time-bases to see if that can shed any light, or the reality will be >32.2.

I'll talk to you about this over the next couple of days.
Okay.

I can send you the raw data, but you can just as easily input the empirical equations into an Excel spreadsheet & create the graphs yourself.
I have endless graphs, thanks. I may make my spreadsheets a bit more presentable and upload them, but I think it'll be useful for you to get to grips with effective noise-removal techniques, as I'm not really liking (trusting) the results of the higher-order curve fits that I've performed, and it's always useful to be on a level playing field.

Again, the per-sample accuracy is very high, and of course the data includes some noise (very low magnitude) and interlace jitter (easily sorted). I've been working with data like this for a long time now, and have picked up lots of useful techniques for dealing with noise along the way.

Quite happy to give you more tips, but I imagine you'll be happier with curve fitting, which is fine.

I don't intend on getting into endless banter about data quality though. It's great base data. Perhaps I should have given you the data in pixel/frame units instead. That would let you see the noise magnitude in physical measurement terms. Hmm.

DO have to deal with the slight NIST scaling metric problem (shortly) but even that doesn't *sort* the above-freefall issue.

The NIST metric of 242ft is actually just under 240ft.

If the NIST metric on slab-to-slab height for typical floors is accurate (12ft 9in) then all that's left is video timebase...

Later.
 

[attached graph (click to zoom)]

[attached graph (click to zoom)]

I assumed the above graphs posted earlier would have made the noise-level clear.

The lower graph is a zoom of the upper, and I suggest an estimated noise-level of +/- 0.7ft (+/- 0.2px)

(Your previously suggested maximum accuracy was 6ft, so not shabby at all ;) )
 
Well femr,

I hate to be the bearer of bad news regarding your video data, but...
Let's identify the video data in question. It took me a while to figure out that you're talking about video data for WTC7, and not the data femr2 provided for WTC1 in post 94.

In short, either:

A. The measured point on that building is undergoing absurd levels of acceleration (10 to 80 G's).
B. I can't program on the fly like I used to.
C. There are some serious artifacts that your video technique is introducing into your data.

I vote "C".
I vote "D": Every set of physical measurements contains noise. Those who wish to interpret physical data bear the responsibility for extracting signal from noise. Alternative "A" comes from over-interpreting the noise, and the blame for that would rest with tfk. Alternative "C" is blaming femr2 for the noise, but I'm not sure that's fair; the noise (or most of it) could already have been present in the data before femr2 touched anything.

Not even a high order polynomial would fit all the data well at both extremes. So I broke it up into two:

1. a 3rd order polynomial for 0 < t < 1.3 seconds
2. a 5th order polynomial for 1.3 < t < 4.7 seconds (end of data).
In other words, you are fitting a curve to the data. Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials. That assumption is related to the Chandler-MacQueen-Szamboti fallacy I attacked in post 79. I'm disappointed to see you succumb to a similar fallacy.

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.

I strongly believe that this scatter is an artifact of the errors in your measurement technique.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.
That's one way; downsampling is another.

Which, of course, gets rid of all the high frequency components that your data shows.

But here's the rub: I do NOT believe that those high frequency changes in velocity are real. I believe that they are an artifact of your setup (camera, compression, analysis technique (i.e., pixellation), etc.).
This is not the first time in history that numerical analysts have been faced with such problems. We do not have to fall back on arguments from personal belief or incredulity.

The technical problem is to apply enough smoothing, downsampling, and other techniques to reduce the noise to an acceptable level without throwing away all information about high frequency components. That's a well understood tradeoff, and it's essentially mathematical. We can calculate how much information we're losing, and can state quantitative limits to our knowledge.

We can also use science and engineering to estimate the noise level.

For example, we know that downward accelerations greater than 1g are physically implausible. Treating the descent as an initial value problem, we find that limiting the downward acceleration to 1g makes it impossible to match femr2's data at full resolution: Noise in the data forces occasional large upward accelerations in the model, reducing the downward velocity so much that it can't recover in time to match the next sampled position. (Tony Szamboti has made a similar argument, although he usually gets the calculations wrong and refuses to acknowledge corrections unless they further his beliefs.) One way to estimate the noise is to reduce the resolution by smoothing and/or downsampling until the downward-acceleration-limited models begin to match the sampled positions.
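
For concreteness, here is one crude way to implement that test (a Python sketch under the stated assumption that downward acceleration is capped at 1g; it is not the exact code I used):

    import numpy as np

    G = 32.17  # ft/s^2, downward positive

    def worst_miss_with_g_limit(t, drop):
        # Step through the sampled positions as an initial value problem.
        # At each step, apply whatever constant acceleration would land exactly
        # on the next sample, but never more than 1g downward (upward
        # accelerations, which the noise forces, are left unlimited).
        # Returns the largest position miss in feet: large at full resolution,
        # shrinking as the data are smoothed and/or downsampled.
        z, v = float(drop[0]), 0.0
        worst = 0.0
        for i in range(len(t) - 1):
            dt = t[i + 1] - t[i]
            a_needed = 2.0 * (drop[i + 1] - z - v * dt) / dt**2
            a = min(a_needed, G)
            z += v * dt + 0.5 * a * dt**2
            v += a * dt
            worst = max(worst, abs(drop[i + 1] - z))
        return worst

    # Toy example: an ideal 1g descent plus half a foot of noise.
    t = np.arange(0.0, 4.7, 1.0 / 59.94)
    drop = 0.5 * G * t**2 + np.random.normal(0.0, 0.5, t.size)
    print(worst_miss_with_g_limit(t, drop))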

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy. For that pedagogical purpose, it hardly matters whether the noise in femr2's data was present in the original video or was added by femr2's processing of that video.
 
Hey WD,

Wow. You're harsh.

I like that...

Let's identify the video data in question. It took me a while to figure out that you're talking about video data for WTC7, and not the data femr2 provided for WTC1 in post 94.

Sorry, I should have ID'd the source. It came from post #176. Since it came from femr in a reply discussing WTC7, I assumed that it was from WTC7 data. Although femr didn't say that explicitly with the data.

Correct, femr?

I vote "D": Every set of physical measurements contains noise. Those who wish to interpret physical data bear the responsibility for extracting signal from noise. Alternative "A" comes from over-interpreting the noise, and the blame for that would rest with tfk. Alternative "C" is blaming femr2 for the noise, but I'm not sure that's fair; the noise (or most of it) could already have been present in the data before femr2 touched anything.

If it please the persecution... ;-)

Granted, there is a fair amount of "my preliminary beliefs" in what I wrote. This is because this is my first pass on femr's data. And I have no idea of the details of his video or how he generated his numbers.

Nonetheless, I think that I've made it pretty clear that I believe that there is a lot of noise in the data, and that it does not reflect real motions of the building. And that, as a direct result, I do not (at this time) accept the conclusions of this analysis. In either the raw data OR the empirical data that resulted from a very "smoothed" raw data.

(You can see that the empirical equation generated for the first 1.3 seconds in the last post's drop vs. time graph is a darn good "smoothed" version of the raw data. And yet this still produced an acceleration significantly greater than 1G for the first 1.3 seconds. I think that I made it clear that I do not believe this conclusion. Which constitutes a rejection of the whole raw data set.)

I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.

Femr suggested that the noise is being introduced by my technique. That's not true. Granted, I haven't taken steps to eliminate noise, but I have not introduced any. As I mentioned, I've run validation runs with artificial data to verify this.


In other words, you are fitting a curve to the data. Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials. That assumption is related to the Chandler-MacQueen-Szamboti fallacy I attacked in post 79. I'm disappointed to see you succumb to a similar fallacy.

If you look at the displacement vs time data set (the raw data & the overlain empirical curve) in my previous post, you'll see a pretty good agreement. Once the raw data is low-pass filtered, I believe that the agreement will be even better.

If the agreement is this good, then increasing the polynomial's degree amounts to a bit of gilding the lily, and will likely result in poly constants that are close to zero.

Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?

If so, please post your velocity & acceleration vs. time data (or graphs).
__

Nonetheless, I've already redone the analysis using 5th order (with 6 constants), and the results are not hugely different. I'll be interested to see what happens with smoothed data.

Here's the result of using this higher-order polynomial. (I used it over the entire time span.) You can see that it doesn't provide as good a fit at the early times as the previous one. But you can also see that it follows the gross (i.e., low-freq) shape of the raw data pretty darn well.

[attached graphs]


You can see that the fit between the empirical curve & raw data is pretty good. And that the empirical curve is a pretty good "smoothed" version of the raw data.

The acceleration corresponds to the curvature of the red line in the drop curves (the smaller the radius of curvature, the higher the acceleration). I can see that a better fit (like the lower-order poly in the previous graphs) is possible at the earliest times (t < 0.6). But I don't see much leeway for increasing the radius between 0.7 < t < 1.4 seconds. And the results say that this amount of curvature in the drop curve results in >1G accel.

It's possible to construct a "1 G" arc for this chart to see if it can be fit to this raw data. Looking at the data, the curvature of the empirical (red line) equation right around 1.4 seconds corresponds to 1G of acceleration.

In order for femr's data to be correct, one would have to be able to overlay that degree of curvature (or less) on all the data points throughout the data set. I do not see how that is going to happen for t < 1 second. No matter how much low-pass filtering one does.
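
(For reference, the "1 G arc" is just a parabola; a sketch, with placeholder anchor point and initial velocity:)

    import numpy as np

    G = 32.17  # ft/s^2

    def one_g_arc(t, t0, z0, v0):
        # Drop vs. time for a body falling at exactly 1 G, passing through
        # (t0, z0) with downward velocity v0. Overlay this on the raw drop
        # data: anywhere the data bend more sharply than this arc, the
        # implied acceleration exceeds 1 G.
        return z0 + v0 * (t - t0) + 0.5 * G * (t - t0)**2

    # e.g. anchored at the start of descent with zero initial velocity:
    t = np.arange(0.0, 4.7, 1.0 / 59.94)
    arc = one_g_arc(t, t0=0.0, z0=0.0, v0=0.0)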

Here are the resultant velocity & acceleration curves, again for a 5th order poly with 6 constants:

[attached graph]


Again, for t < 1.4 seconds, accel is > 1G.

That's one way; downsampling is another.

Without knowing the origin of the noise, I'd prefer smoothing to downsampling. Can't a priori tell if your chosen point is a good one or not.

This is not the first time in history that numerical analysts have been faced with such problems. We do not have to fall back on arguments from personal belief or incredulity.

I thought that I made the basis for my incredulity clear: the wall is far too massive & fragile to exhibit or withstand the acceleration levels that the data implies.

The technical problem is to apply enough smoothing, downsampling, and other techniques to reduce the noise to an acceptable level without throwing away all information about high frequency components. That's a well understood tradeoff, and it's essentially mathematical. We can calculate how much information we're losing, and can state quantitative limits to our knowledge.

I see two problems.

First, I don't believe that smoothing the data is going to significantly reduce the empirically derived acceleration. The low-order polynomial has already essentially done that, and I still came up with > 1G acceleration.

I could be wrong about that. It's happened before. We'll see.

Second, we've got a chicken & egg problem. We're trying to figure out what the acceleration really was. But we're going to smooth the data until the acceleration (maybe) gets "reasonable".

The ultimate conclusion will simply mirror what we deem "reasonable".

We can also use science and engineering to estimate the noise level.

Perhaps femr can, because he has access to his original data. And the details of how he generated the numbers.

For example, we know that downward accelerations greater than 1g are physically implausible. Treating the descent as an initial value problem, we find that limiting the downward acceleration to 1g makes it impossible to match femr2's data at full resolution: Noise in the data forces occasional large upward accelerations in the model, reducing the downward velocity so much that it can't recover in time to match the next sampled position. (Tony Szamboti has made a similar argument, although he usually gets the calculations wrong and refuses to acknowledge corrections unless they further his beliefs.) One way to estimate the noise is to reduce the resolution by smoothing and/or downsampling until the downward-acceleration-limited models begin to match the sampled positions.

I'll be interested to see if smoothing the data can get us from here to there. My current impression is that the answer is "no".

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy. For that pedagogical purpose, it hardly matters whether the noise in femr2's data was present in the original video or was added by femr2's processing of that video.

Agreed.

Femr, would you care to add some detailed explanation of your number-generation technique? (Or point to where you've described it previously.)

Most specifically, what do you estimate as your error (in pixels), and what is the pixel-to-real-world scale factor?


Tom
 
It took me a while to figure out that you're talking about video data for WTC7
Yes, my bad. Followed a discussion with tfk, so was posted with a *here y'are* only. It's from the *Dan Rather* view.



the noise (or most of it) could already have been present in the data before femr2 touched anything.
There is an amount of variance in the tracing process. I use the SynthEyes system, which incorporates professional-level feature tracking facilities. Tracked feature position is output in pixels, to 3DP. As indicated in post #205 the noise level in the provided graphs equates to +/- 0.2px (call it half a pixel).

I do not stabilise the video data, to ensure it is *untouched*, so the alternate frame jitter resulting from deinterlacing is also present in the underlying data. My preference with dealing with that is to subtract a static point trace (which by definition has the same vertical alternate frame shift) to cancel it out.

An alternative (or additional step) is applying the 2-sample running average previously suggested.

That does of course still leave an amount of noise, which must be treated as required.

Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials.
I should highlight that the purpose of providing tfk with the data was not to identify low magnitude variance in velocity/acceleration, but rather to extrapolate and determine the descent time for WTC 7. In that context I personally have no issue with the general trend.

I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy.
Interesting. I'll make sure I read it.
 
produced an acceleration significantly greater than 1G for the first 1.3 seconds.
As previously stated, I've replicated similar results using the NIST Camera #3 footage (which has also highlighted a slight error in the baseline NIST scaling factor, as the dimension they state as 242ft is nearer to 240). That error alone does not account for the above-G acceleration however, and I'm still looking for other sources.

The pixel-level trace data, however, is rock-solid in its attachment to the NW corner, so I have no personal issues with the underlying pixel position data.

I may have to look at perspective correction, though analysis of the scene reveals very little variance in same-height feature separation (the inter-window distances), so even that will make very little difference.



I do not know femr's techniques for producing these numbers.
Quite happy to go into as much detail as is necessary.

I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.
As you don't know the techniques, this is some more hand-waving I'm afraid.

Femr suggested that the noise is being introduced by my technique.
Not at all. I suggested the problem with your treatment of the data, not that you introduced noise into it. The noise is already there.

Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?
Of course.

If so, please post your velocity & acceleration vs. time data (or graphs).
I'm cleaning up the presentation of the spreadsheet for the Dan Rather view, but here's a slightly more presentable one for the NIST Camera #3 view...
Download
You'll need Excel and a plugin, available here, to open it...
http://www.xlxtrfun.com/XlXtrFun/XlXtrFun.htm

(First thing you'll have to do is change the NIST Height parameter from 242 to 239.8)

You can also use that parameter to scale the entire dataset to whatever metric you please.

(tfk - I cannot see any of your graphs. Do you know why ?)

I still came up with > 1G acceleration.
So did NIST.
 
Can't see your graphs. Can't see them even if I C&P the URL on edit.

I'm not sure. Try logging out & back in to your JREF account.

On my system (a mac):
If you're not logged in (with Chrome or Safari), then the graphs don't show. Curiously, they do show with Firefox. When logged in, they show in all these browsers.

Anyone else not seeing them?

Tom
 
I'm not sure. Try logging out & back in to your JREF account.
No joy. Tried all sorts. Nowt. PC on Firefox. Tried taking the text out of quote. Changing qimg to img and to url, posting the url directly etc. Nothin'.

No worries though. I can visualise them from your description.

The links don't refer directly to images btw, but to albums on JREF via a script. If you can link to .jpg, .png or .bmp I'll definitely be able to see'em.
 
Hey WD,

Wow. You're harsh.

I like that...
Sorry. It was late, and I could hardly believe you were criticizing femr2 for providing data that (gasp!) contain noise.

Sorry, I should have ID'd the source. It came from post #176.
Thanks.

Nonetheless, I think that I've made it pretty clear that I believe that there is a lot of noise in the data, and that it does not reflect real motions of the building. And that, as a direct result, I do not (at this time) accept the conclusions of this analysis. In either the raw data OR the empirical data that resulted from a very "smoothed" raw data.
Everyone agrees there's a lot of noise in the data. By definition, the noise does not reflect real motions of the building.

On the other hand, I see no reason to doubt that femr2's data, when analyzed properly, will reflect real motions of the building.

On the third hand, my main goal here is to explain the limits of such analysis, to warn against over-analysis of sampled data, and especially to warn against the overconfidence shown by certain people who perform a single very poor analysis and then condemn the rest of the world for not sharing a preconceived conclusion that's contradicted by their own data.

(You can see that the empirical equation generated for the first 1.3 seconds in the last post's drop vs. time graph is a darn good "smoothed" version of the raw data. And yet this still produced an acceleration significantly greater than 1G for the first 1.3 seconds. I think that I made it clear that I do not believe this conclusion. Which constitutes a rejection of the whole raw data set.)
That's where I think you're going wrong. The decision to fit a single smooth curve to that data was your decision, not something that was forced upon you by the data. That decision was equivalent to deciding that nothing terribly interesting could be going on during the first 1.3 seconds. Speaking now of my beliefs, for which I will give evidence, I do not believe your decision is justified by the data.

Even after reducing the noise by crude averaging of every 6 adjacent data points, which reduces the sampling interval to 1/10 second, I still see several discrete jolts during the first 1.3 seconds. As I will show in a later post using femr2's WTC1 data, the magnitude and location of those jolts depend upon artifacts of the noise reduction and the resolution, but that doesn't mean the jolts aren't real. As can be shown mathematically, real jolts would also show up with different magnitudes and locations when sampled at slightly different times or resolutions.

So at least some of those apparent jolts could be real. If so, attempting to fit a smooth curve to the data can give misleading results.
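
(The "crude averaging" I mentioned is nothing fancier than this sketch:)

    import numpy as np

    def block_average(t, x, n=6):
        # Crude downsampling: average every n adjacent samples. At roughly
        # 59.94 samples/s, n = 6 gives about a 1/10 second sampling interval.
        m = (len(x) // n) * n                   # drop the ragged tail
        return (t[:m].reshape(-1, n).mean(axis=1),
                x[:m].reshape(-1, n).mean(axis=1))

    # usage: t_ds, drop_ds = block_average(t, drop, n=6)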

I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.
I'm with you on all of that.

One way to demonstrate that point to femr2 and others is to vary the smoothing and resolution in a systematic way, obtaining a dozen different results. By doing the same thing for a known signal (that is, a fixed mathematical model), and observing the same kind of variation in results, we can show that this variation is exactly what we would expect to happen for indubitably real signals. The morals of the story are (1) there is only so much information we can extract by sampling a non-cyclic signal, and (2) it's easy to draw incorrect conclusions from artifacts of our analysis.

Without knowing the origin of the noise, I'd prefer smoothing to downsampling.
Why not analyze using smoothing and also analyze using downsampling? That's what I'm doing. (I have to admit that one of my motivations for using downsampling is to demonstrate how much the results can vary as the resolution changes, even while the signal is held fixed.)

I see two problems.

First, I don't believe that smoothing the data is going to significantly reduce the empirically derived acceleration. The low-order polynomial has already essentially done that, and I still came up with > 1G acceleration.
That just means you can't get a good fit to the data with a low-order polynomial. That in turn means (1) high-frequency noise isn't the problem, and (2) you should suspect irregularities (jolts) in the actual signal. If you analyze the data using techniques that allow you to see irregularities, as I have done, you'll see them.

Second, we've got a chicken & egg problem. We're trying to figure out what the acceleration really was. But we're going to smooth the data until the acceleration (maybe) gets "reasonable".

The ultimate conclusion will simply mirror what we deem "reasonable".
Yes, that's a problem. It's part of the Chandler-MacQueen-Szamboti fallacy. They picked a low sampling rate and (in MacQueen and Szamboti's case) lowered the resolution still further by smoothing, using noise to argue for both. Ignoring the fundamental theorem of sampling, they then used their smoothed, low-resolution data as "proof" there were no 90-millisecond jolts in the signal.

You and I don't have to be quite so incompetent.
 
I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.

He seems to be saying things that are self-contradictory:

That his data reveals real high freq transients, but yet it needs to be smoothed because of noise.

Can't have it both ways.
I'm with you on all of that.
Am not at all sure what this relates to.
The only *claim* I've made about the data is an estimation of the noise level...
[attached graph]

Which I'd put at +/- 0.7ft (which is sub-pixel for the Dan Rather viewpoint)

The spreadsheet I included above for the NIST Camera #3 viewpoint includes the raw pixel data.
I prefer that viewpoint, and it can be compared with the NIST results fairly directly.

The only thing I need to look into further is some perspective correction. I've checked vertical and it's fairly trivial, but may see what horizontal perspective correction is required to convert the (slightly incorrect) NIST scaling metric in the middle of the building to the NW corner.

ETA: That might be the beastie. Initial rough estimate could approach an additional modifier of something around 1.02. I'll have to spend a bit of time extracting horizontal metrics, so that value is far from definite, but horizontal skew does look like a culprit for excessive over-G derivations.
 
WD,

Are you able to see my graphs?

Perhaps a way thru the chicken & egg problem of how smoothing affects the measured acceleration.

We've got two accelerations over any interval: the gross average acceleration that we can get from the position data at any two points, and the instantaneous acceleration that results from the model.

For the model to be correct, this could be the criterion: "The integrated average of the instantaneous acceleration over that interval has to equal the gross acceleration." (Or equivalently: "the total distance dropped over that interval has to be the same using both accelerations.")

You'd have to keep careful track of the initial velocity on that interval, because high initial velocity will yield lower gross acceleration for the same displacement. But that should be doable starting from an initial point (0 ft/sec). Of course, this error will accumulate over time, but I don't think it'll be a problem for the time intervals we're discussing.

And this technique will allow any combination of jolts within the interval.
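
Something like this, as a sketch of that criterion (the acceleration model, interval endpoints, and initial velocity below are placeholders):

    import numpy as np

    def model_drop(t0, t1, v0, accel_model, n=2000):
        # Integrate the model's instantaneous acceleration twice over [t0, t1],
        # starting from downward velocity v0. For the model to pass the test,
        # the returned drop must equal the gross drop taken from the position
        # data over the same interval. The end velocity is returned so it can
        # be carried into the next interval.
        ts = np.linspace(t0, t1, n)
        a = accel_model(ts)
        dt = np.diff(ts)
        v = v0 + np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
        z = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
        return z[-1], v[-1]

    # Example: a constant 1 G model over the first 1.3 seconds, from rest.
    G = 32.17
    drop_0_to_1p3, v_end = model_drop(0.0, 1.3, 0.0, lambda t: G * np.ones_like(t))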

maybe...


Tom
 
Abstract: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

WD,

Are you able to see my graphs?
Yes. I couldn't see them at first, but I saw them when I logged in to JREF, and I continued to see them after I logged out.

I don't know what t=0 means in your graphs, so I'll use femr2's time scale (without rounding).

The corner's descent is preceded by some oscillation and begins in earnest near t=4.871538205. One second later, at t=5.872539206, the corner of the building has dropped 19.7 feet. Two seconds later, at t=6.873540207, the total drop for the first two seconds is 74.0 feet. That's an average of over 1.2g for the first second, and about 1.15g over the first two seconds.

I have to retract this statement:
For example, we know that downward accelerations greater than 1g are physically implausible.
That isn't true in this case, because the corner being tracked is not the roof's center of gravity.

Looking at the YouTube video femr2 cited, I see that the corner did not begin its descent until some time after the opposite side of the building had already fallen some distance and built up downward velocity. That means the roof's center of gravity began its descent before t=4.871538205, and the roof had rotated during that descent.

Taking an idealized view of the situation, let's assume the roof's rotation ends abruptly at t=4.871538205, and the corner's downward component of velocity (from that time onward) is equal to the downward velocity for the roof's center of gravity. That implies a very large downward acceleration for the corner at t=4.871538205 as its velocity rises rapidly from zero to match the velocity of the roof's center of gravity, to which the corner is still attached. If you spread that acceleration over the first two seconds, say, you'll get accelerations that look like what you see in femr2's data and in tfk's graphs.
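
A quick back-of-envelope check of that idea, using nothing but the drop figures quoted above (19.7 ft after one second, 74.0 ft after two) and g = 32.17 ft/s^2:

    g = 32.17  # ft/s^2

    for t_s, d_ft in [(1.0, 19.7), (2.0, 74.0)]:
        avg_a = 2.0 * d_ft / t_s**2           # average acceleration if starting from rest
        v0 = (d_ft - 0.5 * g * t_s**2) / t_s  # initial velocity needed if acceleration = g exactly
        print(t_s, "s:", round(avg_a / g, 2), "g average, or exactly g with v0 =",
              round(v0, 1), "ft/s")

Roughly speaking, an initial downward velocity of only a few ft/s at the moment the corner lets go is enough to turn an exactly-1g fall into the over-1g averages seen in the data.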

In short: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

ETA:
For the model to be correct, this could be the criterion: "The integrated average of the instantaneous acceleration over that interval has to equal the gross acceleration." (Or equivalently: "the total distance dropped over that interval has to be the same using both accelerations.")

You'd have to keep careful track of the initial velocity on that interval, because high initial velocity will yield lower gross acceleration for the same displacement. But that should be doable starting from an initial point (0 ft/sec). Of course, this error will accumulate over time, but I don't think it'll be a problem for the time intervals we're discussing.

And this technique will allow any combination of jolts within the interval.

maybe...
You're suggesting we look at it as an initial value problem, which is what I've been doing throughout.

The spreadsheet I included above for the NIST Camera #3 viewpoint includes the raw pixel data.
I prefer that viewpoint, and it can be compared with the NIST results fairly directly.

The only thing I need to look into further is some perspective correction. I've checked vertical and it's fairly trivial, but may see what horizontal perspective correction is required to convert the (slightly incorrect) NIST scaling metric in the middle of the building to the NW corner.

ETA: That might be the beastie. Initial rough estimate could approach an additional modifier of something around 1.02. I'll have to spend a bit of time extracting horizontal metrics, so that value is far from definite, but horizontal skew does look like a culprit for excessive over-G derivations.
If "horizontal skew" means rotation of the roof, then I agree. What we really need are data for the center of the roof with exactly the same timescale as the corner. (Ideally, we'd want data for the opposite corner as well, but I don't know whether video of that is available.) Once all of that data is in hand, we'll be able to calculate how well it matches the physical explanation I offered above.
 
Yes. I couldn't see them at first, but I saw them when I logged in to JREF, and I continued to see them after I logged out.
Hmm. Logged in, out, shook it all about. Nowt.

If "horizontal skew" means rotation of the roof, then I agree.
Not exactly.

NIST used a point towards the centre of the roofline. I'm using the NW corner. Features at the West edge are a bit closer to the camera, and so are roughly 1.026 times the size of my previous scale multiplier. My base pixel multiplier has changed to 0.467662228

That has the effect of reducing the over-G values to around 36ft/s^2

What we really need are data for the center of the roof with exactly the same timescale as the corner.
Can do, although there is not much horizontal detail for SynthEyes to latch onto. I would suggest the best route would be to lock horizontal position. Yes ?

If so, what point ?

[attached video frame]


The NIST description is slightly vague...
The chosen feature was the top of the parapet wall on the roofline aligned with the east edge of the louvers on the north face.
I take that to be the position at the right hand edge of the *black box* on the facade (for a few reasons that might not be obvious).

(Ideally, we'd want data for the opposite corner as well, but I don't know whether video of that is available.)
The source video includes the frame as-per the image above, so in theory, no problem. In reality though, the NE corner contrast is very poor and the trace quality may not be too hot.

Once all of that data is in hand, we'll be able to calculate how well it matches the physical explanation I offered above.
I have no issue generating mounds of data. It's coming out m'ears at the mo.

Any preference on units ?

I'm pretty sure my scaling variable is as good as it's going to get for the NW corner now, but there's still possibility for MINOR change to it (0.467662228 - ft/pixel NW corner)

Important note: The scalar is for the latest NIST Camera #3 data, NOT the Dan Rather data.
 
NIST used a point towards the centre of the roofline. I'm using the NW corner. Features at the West edge are a bit closer to the camera, and so are roughly 1.026 times the size of my previous scale multiplier. My base pixel multiplier has changed to 0.467662228

That has the effect of reducing the over-G values to around 36ft/s^2
A point toward the center of the roofline would be closer to the center of gravity, which would simplify the physics and improve accuracy of the analysis.

Can do, although there is not much horizontal detail for SynthEyes to latch onto. I would suggest the best route would be to lock horizontal position. Yes ?
Should be good enough.

The NIST description is slightly vague...
The chosen feature was the top of the parapet wall on the roofline aligned with the east edge of the louvers on the north face.
I take that to be the position at the right hand edge of the *black box* on the facade (for a few reasons that might not be obvious).
Not obvious to me. I'd have assumed "the louvers on the north face" are on the wall of the building below the roof, and the feature is above and aligned with the left edge of those louvers in the photograph.

Any preference on units ?
Only because the WTC7 data you've already posted are in feet.
 
A point toward the center of the roofline would be closer to the center of gravity, which would simplify the physics and improve accuracy of the analysis.
I'll stick it at the point the kink develops then.

Only because the WTC7 data you've already posted are in feet.
Okay, though I'll additionally include the pixel data with a single-cell scalar.

Now, just the NIST Camera #3 data ? (NW corner, NE corner, Kink)

Include static point data ?

Include ANY horizontal data ? (Will make the spreadsheet cleaner if omitted)

(Not a problem to dump the Dan Rather data. The metrics on the NIST view are more accurate)
 
WD,

Abstract: The corner's downward acceleration of greater than 1g at the beginning of its descent is just what you'd expect from the physics.

Stating the obvious, if in freefall, the accel jumps to 1.0g at the start of fall & stays there.

I don't know what t=0 means in your graphs...,

Just to make the math simple, I indexed z to the start of downward motion.

The IDing of t0 is a complexity for all the analyses.

The corner's descent is preceded by some oscillation and begins in earnest near t=4.871538205. One second later, at t=5.872539206, the corner of the building has dropped 19.7 feet. Two seconds later, at t=6.873540207, the total drop for the first two seconds is 74.0 feet. That's an average of over 1.2g for the first second, and about 1.15g over the first two seconds.

I have to retract this statement: " For example, we know that downward accelerations greater than 1g are physically implausible. "

That isn't true in this case, because the corner being tracked is not the roof's center of gravity.

Looking at the YouTube video femr2 cited, I see that the corner did not begin its descent until some time after the opposite side of the building had already fallen some distance and built up downward velocity. The means the roof's center of gravity began its descent before t=4.871538205, and the roof had rotated during that descent.

Taking an idealized view of the situation, let's assume the roof's rotation ends abruptly at t=4.871538205, and the corner's downward component of velocity (from that time onward) is equal to the downward velocity for the roof's center of gravity. That implies a very large downward acceleration for the corner at t=4.871538205 as its velocity rises rapidly from zero to match the velocity of the roof's center of gravity, to which the corner is still attached. If you spread that acceleration over the first two seconds, say, you'll get accelerations that look like what you see in femr2's data and in tfk's graphs.

Four ways (that I can think of) to get > 1G fall:

1. A falling rigid body that is rotating. (If rotating clockwise, a part at the 3 o'clock position will have a downward velocity greater than can be attributed to g. It will actually have a linear downward acceleration greater than g for positions between 12 & 6 o'clock, reaching a max at 3 o'clock. It will have a downward acceleration less than g for positions between 6 & 12, reaching a minimum at 9 o'clock.)

I think this is implausible as a cause, because the rotation is so slow & the roof line is at approx. 1 o'clock to the cg, giving any rotation a tiny effect.

2. A variant of the same: a falling lever, pinned on the ground at one end. The free end of the lever falls at slightly greater than g. Favorite physics demo.
http://www.youtube.com/watch?v=SfZk6o88nSU&feature=related
I think this is possible, but it requires an internal member to be supported on a structure, and a heavy weight to fall on an intervening beam. Possible, but IMO, unlikely.

3. Hang a heavy weight out in space off of the roof of a building. Attach it to an object on the roof with a beam. Put pivot joints at each end of the beam. Drop the weight. The weight falls at g. Initially, the beam pivots, and the object to which it is tied stays stationary on the roof. Finally, the beam can pivot no more, and the object is jerked off the roof. The object's initial acceleration will be greater than G.

This is similar to the description that you proposed, WD.

But here's a slight variant that I think has merit. (Think of Sprint's "pin drop" commercial.)

4. Everything starts just as WD described it. The wall fails first near the east end, where we know the initial failure occurred (i.e., at the kink). The east end of the wall falls nearly at G because it has a multi-story buckling failure. The wall as a whole falls & pivots counterclockwise, west end high, because the west end has not yet failed. The bulk of the wall & attached structure builds up momentum.

This is just like the situation you described, WD.

Suddenly, some point at the bottom of the falling section near the east end hits some significant resistance. As we know it ultimately will.

As long as the impact point was east of the wall's c.g., this impact would transmit a huge dynamic load thru the wall, perhaps instigating the failure at the west end of the wall.

If true (and it does make sense), there should be a sudden, perhaps measurable drop in both the CCW angular velocity of the wall & the downward linear velocity of the wall towards the east end just before the west end starts its fall.

If it turns out that the initial G is > 1, then this is, IMO, the best explanation.

I'm not yet convinced that the initial G is this high. But I will believe good data.
 
I'm not yet convinced that the initial G is this high. But I will believe good data.

A factor to bear in mind is that it's not just my data that exhibits above-G segments, NIST being one of 'em...

[attached graph]


Also, the reason for posting the data initially was your statement about the descent time for WTC 7.

I've provided my raw data, the procedures applied are in the spreadsheet, and there's also a video showing the positioning in video form.

I've highlighted the scaling factor progress, and as I've said, my current dataset maxes out around the 36ft/s^2 mark. Not much above NIST's curve-fit max.

Am happy to generate and provide additional data for whatever purpose, but if it's going to evolve into another endless process involving you trying to *debunk* my data, or *discredit* the methods, then I'll apply my time and effort in more personally productive arenas. You're welcome to generate your own data of course. You'll note I've made no claims about the data, other than to defend its integrity.
 
A quick and dirty trace of the three features suggested (NW corner, Near Kink & NE corner)

[attached graph: traces of the three features]


Just eyeballing suggests that, as NIST stated 32.196 ft/s^2 for their linear fit (32.196 indeed), we can expect a 2D trace of the NW corner to exceed that.
 
femr,

tfk said:
I do not know femr's techniques for producing these numbers.

femr said:
Quite happy to go into as much detail as is necessary.

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

Care to give that a whirl?


Tom
 
A factor to bear in mind is that it's not just my data that exhibits above-G segments, NIST being one of 'em...

http://i33.tinypic.com/34gllzs.gif

Also, the reason for posting the data initially was your statement about the descent time for WTC 7.

I've provided my raw data, the procedures applied are in the spreadsheet, and there's also a video showing the positioning in video form.

I've highlighted the scaling factor progress, and as I've said, my current dataset maxes out around the 36ft/s^2 mark. Not much above NIST's curve-fit max.

Am happy to generate and provide additional data for whatever purpose, but if it's going to evolve into another endless process involving you trying to *debunk* my data, or *discredit* the methods, then I'll apply my time and effort in more personally productive arenas. You're welcome to generate your own data of course. You'll note I've made no claims about the data, other than to defend its integrity.

I mean no disrespect to you. But you & I butt heads.

The reasons why are probably best left as water under the bridge. So I'll keep the snark out of the discussion and simply address the technical side.
___

This is good raw data.

You're to be commended for the (clearly extensive) work that you've put into it. You're to be especially commended for your willingness to share your raw data. Something that is exceedingly rare in the truther world.

We have disagreements about several things that overlap into my area: engineering. Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.

The biggest lesson of just a few hours for me on this data is something that is readily evident in your Excel data: the fact that very slight variations in the position vs. time model result in sizable variations in the velocity vs. time model. And then result in enormous variations in the acceleration vs. time model.

With the sensitivity that I've seen, it doesn't surprise me in the slightest that NIST's acceleration exceeded G. It would not surprise me now if their MODEL's result had produced momentary accelerations that approached 2G.

This does not mean that the north wall necessarily fell with this acceleration.

It far more likely means that the MODEL's acceleration curve is extraordinarily sensitive to the slight variations in the acquisition, massaging & HONEST manipulation of the data that is typically used by competent engineers & scientists to try to get the best results possible.


Tom
 
This is good raw data.
It's as good as I can get using my current methods and the available footage. I use the very best footage I can lay my hands upon, but it would be better still if I could use the *original* media. SynthEyes does a great job of feature tracking, and I've seen no better. Plotting points manually by eye cannot reliably compare. I'm sure there are improvements possible, including in terms of signal processing, but with diminishing gains.

We have disagreements about several things that overlap into my area: engineering.
Most of what I do is based on the visual record. Engineering doesn't really play much part; like the orientation study, it's just a close look at the visual record using tools and methods honed over the period. I think most of the wall is built on lack of trust, to be honest.

Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.
I probably don't use formal methods, but it's all done as well as possible, with nothing hidden or deliberately distorted. Orientation study: the NIST trajectory and orientation are wrong. Exactly how much is probably the source of some more discussion at some point. The implications are a different kettle of fish. No idea. The only way to know would be a re-run of the simulation with more accurate input parameters.

The biggest lesson of just a few hours for me on this data is something that is readily evident in your Excel data: the fact that very slight variations in the position vs. time model result in sizable variations in the velocity vs. time model. And then result in enormous variations in the acceleration vs. time model.
Absolutely. The biggest gain in generating high-fidelity, high sample-rate data is the opportunity to have enough of it to cut through the fuzz by *losing* some of it through things like smoothing. The choice of smoothing methods then becomes important, as is understanding the nature of the source (such as the immediate need to deal effectively with deinterlace jitter). An alternative route for jitter is to leave the video unfolded and perform two separate traces of the same feature at half framerate, as each field will not suffer from jitter. I tend to try both and see how they compare. For finding *mini-jolts* the sample rate is critical. For finding average acceleration, less so.

With the sensitivity that I've seen, it doesn't surprise me in the slightest that NIST's acceleration exceeded G.
It would be handy to have their raw data ;)
It does surprise me that their linear fit works out at 32.196ft/s^2 though.

This does not mean that the north wall necessarily fell with this acceleration.
Using wider and wider sample ranges and linear fits would give a good approximation without amplifying low-level noise.

I tried a running 29-sample-wide symmetric difference drop->velocity->acceleration test with very favourable results. Still over-G for the NW corner, but I do think that is to be expected (and actual).

Shall apply some time as soon as possible to traces of the NE corner and Kink location, and post the data when I can. Might be a couple of days (quite a time-consuming process; just did a quick run to generate the previous graph).
 
femr,

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

Care to give that a whirl?

Tom

No problem. Have done so several times though...

I use the professional feature tracking system SynthEyes
http://www.ssontech.com/synsumm.htm

Check out the list of Movies SynthEyes is used on

SynthEyes employs various methods to track features including pattern matching, bright spot, dark spot & symmetric spot.

ETA: I could go into the whole pattern match and spot processes, but essentially it's like the process used by NIST in NCSTAR 1-9 Vol2 C.1.3 p680, but automated and honed over many years of upgrades :)

Its output is sub-pixel, to 3DP.

To give you an idea of its applications, it will also take a number of tracked points and *solve* the scene into three dimensions (including camera motion, lens distortion, ...) in order to facilitate the inclusion of three-dimensional models into a composite piece of video footage. It's used regularly in the movie industry for live-footage->CGI composite work.

The tracker itself is excellent. As I've said, I've blind-tested it using video generated at high resolution with known feature movements, downscaled and applied noise to the video, then used SynthEyes to track the relevant points. Results were shockingly good.

My pre-requisite to tracking is very careful preparation of the video footage, performing steps such as interlace unfold and correctly applied bob doubling. All video preparation steps are chosen such that they *cannot* deteriorate the quality of the original video signal. They can only improve it, or, at worst, match the original.
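
To give a flavour of how sub-pixel output is possible at all (this is the generic idea behind such trackers, not a claim about SynthEyes' internals): the feature is matched at integer pixel offsets first, and the match-score peak is then interpolated, e.g. with a parabola through the peak and its neighbours, which yields a fractional-pixel position estimate.

    import numpy as np

    def subpixel_peak(scores):
        # Parabolic interpolation of a 1-D match-score profile around its
        # maximum: fit a parabola through (peak-1, peak, peak+1) and return
        # the position of its vertex. An integer-pixel search thereby yields
        # a fraction-of-a-pixel position estimate.
        i = int(np.argmax(scores))
        if i == 0 or i == len(scores) - 1:
            return float(i)
        y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
        denom = y0 - 2.0 * y1 + y2
        return i + (0.5 * (y0 - y2) / denom if denom != 0 else 0.0)

    # Toy example: a blurred "feature" whose true centre lies between pixels.
    x = np.arange(20, dtype=float)
    profile = np.exp(-0.5 * ((x - 9.3) / 2.0) ** 2)   # true centre at 9.3 px
    print(subpixel_peak(profile))                     # ~9.3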
 
A quick and dirty trace of the three features suggested (NW corner, Near Kink & NE corner)

http://femr2.ucoz.com/_ph/3/280647948.png

Just eyeballing suggests that, as NIST stated 32.196 ft/s^2 for their linear fit (32.196 indeed), we can expect a 2D trace of the NW corner to exceed that.


Thanks for providing unequivocal proof that NIST's conclusions are absolutely true.

Do you see it in this graph? It's sitting right in front of you.


Tom
 
Congratulations, femr.

IMNSHO, this is the first significant piece of data that I've seen you produce.

I don't know if anyone has published this data yet. But if not, it is significant. You should be proud.
___

Time out for a sports metaphor...

Unfortunately for you, THIS is what you've accomplished.

Don't worry. it's not like the score was tied. Or that the score was close. Or that the game was even still going on. The game ended about 3 years ago. It was a rout.

Enough with the ESPN moment. Back to the data.
___

ASSUMING THAT both femr & the video software did their jobs (that is, the software has identified & tracked a fixed, well defined location on the northeast corner), then this graph puts the last of 100 nails in the coffin of the Truther nonsense regarding the collapse of WTC7.

It does not seem possible that the motion of the northeast corner is an artifact. Regardless, this will be easily confirmed or denied with a little more investigation. femr, you can do this and produce something really meaningful. First steps will be to carefully check, recheck, and attempt to carefully quantify your info. If you do this well, there's only one thing that I can see that keeps it from being publication-worthy: the fact that the issue was settled a long, long time ago. And the info that you've discovered here simply reinforces, very slightly, what the pros have said for years.

What is absolutely clear from this is that the northeast corner of the building was slowly collapsing for over 4 seconds before the kink developed in the center of the north wall. Just as NIST said. And, as was already known, it was then approximately one second after the kink that the global collapse of the north west corner began.

There are several critical points.

1. The motion of the northeast corner does not appear to be an artifact. The stability of the building's center & northwest corner locations DURING the time that the northeast corner was moving rules out most other explanations for artifact motion.

2. Just as NIST said and truthers deny, the collapse of the building was NOT a sudden event. It was a continuous process that extended over many seconds before the LAST act: the fall of the North wall. The slow, gradual motion of the northeast corner proves this beyond doubt.

In some fantasy delusion, Truthers have claimed that the collapse of the east penthouse was somehow a separate, unrelated event from the collapse of the North wall. Which they mistakenly refer to as "the collapse of the building". This data completely negates that illusion.

Something the pros knew all along, of course.

This is why there is absolutely zero problem, danger, or risk associated with any additional competent investigation.

Nice catch, femr.


Tom
 
tfk said:
[paraphrase] Please explain exactly how you achieve the "sub-pixel" accuracy that you claim.

No problem. Have done so several times though...

And all of your explanations (that you've provided to me) have been identical to this result: no explanation at all.

I use the professional feature tracking system SynthEyes
http://www.ssontech.com/synsumm.htm

I know that.

Check out the list of Movies SynthEyes is used on

I'm not particularly interested in movies.

SynthEyes employs various methods to track features including pattern matching, bright spot, dark spot & symmetric spot.

Not what I asked.

ETA: I could go into the whole pattern match and spot processes, but essentially it's like the process used by NIST in NCSTAR 1-9 Vol2 C.1.3 p680, but automated and honed over many years of upgrades

Not an explanation.

Its output is sub-pixel 3DP.

Assertion. Not explanation.

Please don't refer me to someone or someplace else. Just a simple 2 or 3 paragraph explanation in your own words, please.


Tom
 
Just to make the math simple, I indexed z to the start of downward motion.
You mean, of course, that your t=0 corresponds to downward motion of the northwest corner of the roof, which occurs well after the start of downward motion for the east side and center of the roof. That's why you got average accelerations of greater than 1g for the first 1.3 seconds.

3. Hang a heavy weight out in space off of the roof of a building. Attach it to an object on the roof with a beam. Put pivot joints at each end of the beam. Drop the weight. The weight falls at g. Initially, the beam pivots, and the object to which it is tied stays stationary on the roof. Finally, the beam can pivot no more, and the object is jerked off the roof. The object's initial acceleration will be greater than G.

This is similar to the description that you proposed, WD.
Yes, but I didn't postulate a heavy weight; the weight of the roof itself will do fine.

What I described is more similar to a plank (the roof) resting on top of two empty beer cans placed at its extreme ends. Knock the leftmost can (the support for the east corner) away at some time t0. The plank (roof) begins to rotate about the hinge at its rightmost end.

A little after t0, at some time t1, the rightmost end of the plank slips off of the rightmost can. The plank has acquired some angular momentum (counterclockwise) before t1, but the rightmost end was stationary up to t1. After t1, the counterclockwise rotation continues (which means the downward velocity of the rightmost end is less than the downward velocity of the leftmost end) but you have to add the motion of that rotation to the motion caused by downward acceleration of the plank's center of gravity.

The downward acceleration due to gravity began at t0, not at t1. If you look only at the position of the plank's rightmost end starting at t1, you'll see an acceleration greater than 1g but won't understand why.

If you do the math, you'll find that the downward delta-vee for the rightmost end is greater than would result from an acceleration of 1g for some period of time starting at t1, but is less than 1g for intervals starting at t0 (until the plank rotates to vertical or the leftmost end hits something, which is your scenario 4).
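
To put a number on that, here is a quick check of the simplest idealisation of the plank picture: a uniform rigid rod hinged at its right end and released from rest at horizontal. The textbook result is an angular acceleration of alpha = 3g*cos(theta)/(2L) about the hinge, so at release the free (left) tip accelerates downward at 1.5g while the hinged end is momentarily at rest. This is only the idealised case, not a model of the actual roof.

Code:
import numpy as np

g = 32.2            # ft/s^2
L = 100.0           # ft; arbitrary plank length -- the result in g's is independent of L

theta = 0.0                                   # angle below horizontal at the moment of release
alpha = 3.0 * g * np.cos(theta) / (2.0 * L)   # angular acceleration of a uniform rod about one end
a_tip = alpha * L                             # downward acceleration of the free tip
a_cg = alpha * (L / 2.0)                      # downward acceleration of the rod's centre of mass

print(f"free tip: {a_tip / g:.2f} g, centre of mass: {a_cg / g:.2f} g")   # -> 1.50 g and 0.75 g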

4. Everything starts just as WD described it. The wall fails first near the east end, where we know the initial failure occurred (i.e., at the kink). The east end of the wall falls nearly at G because it has a multi-story buckling failure. The wall as a whole falls & pivots counterclockwise, west end high, because the west end has not yet failed. The bulk of the wall & attached structure builds up momentum.

This is just like the situation you described, WD.
Yes, that is what I described. In free fall, however, the left end of the plank (the eastern corner of the roof) could experience an acceleration of greater than 1g because the center is accelerating at almost 1g and the left end of the plank is accelerating downward faster than the center.

Suddenly, some point at the bottom of the falling section near the east end hits some significant resistance. As we know it ultimately will.

As long as the impact point was east of the wall's c.g., this impact would transmit a huge dynamic load thru the wall, perhaps instigating the failure at the west end of the wall.

If true (and it does make sense), there should be a sudden, perhaps measurable drop in both the CCW angular velocity of the wall & the downward linear velocity of the wall towards the east end just before the west end starts its fall.
That's entirely plausible, but I'd like to emphasize that no such event is needed to explain the northwest corner's acceleration at more than 1g when measured from time t1 (your t=0) or the east corner's acceleration at more than 1g when measured from time t0 (before your t=0).

femr,

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

femr2's just using the software; he didn't write it and probably doesn't understand all of its algorithms. I don't either, but I can demystify this a bit by explaining a couple of standard techniques.

The first technique is easiest to explain if we assume the software is tracking a light-colored feature against a dark background, and the light-colored feature is approximately one pixel in size.

If the light-colored feature is perfectly centered within a pixel, then that pixel would be the color of the feature while all 8 pixels that surround it would be the color of the dark background.

If the light-colored feature is exactly halfway between the centers of two pixels, then both of those pixels would be the same color, lighter than the background but not as bright as a single pixel would be if the feature were centered within it.

If we do the math and work out a threshold for distinguishing between the two scenarios above, we'll get half-pixel resolution. (The math is a little more complicated than I made it sound, because a pixel-sized feature could overlap with as many as 4 pixels.) If you iterate that process a second time, you can get quarter-pixel resolution; I think that's pretty close to the practical limit of this technique.
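
For concreteness, one standard way to turn that brightness-splitting idea into a number is an intensity-weighted centroid over a small patch. This is a generic sketch, not a description of what SynthEyes actually does internally.

Code:
import numpy as np

def subpixel_centroid(patch):
    # Intensity-weighted centroid of a small patch containing a bright feature on a dark
    # background. The sub-pixel part of the answer comes from how the feature's brightness
    # is shared between neighbouring pixels.
    patch = patch.astype(float)
    patch = patch - patch.min()          # crude background removal
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# A pixel-sized bright spot sitting halfway between two pixel centres lights both pixels
# equally, and the centroid lands halfway between them.
patch = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])
print(subpixel_centroid(patch))          # -> (1.0, 1.5): the column is resolved to half a pixel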

A second technique exploits the improved spatial resolution that can be obtained by integrating successive frames of a video. Synthetic aperture radar is a well-known application of a similar technique.
 
Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

As I said, I'll do the traces over the next couple of days.

What would you interpret from this animated GIF ?

NIST_WTC7_redo-newoutsmallhy.gif


As for how sub-pixel tracing accuracy is possible, I think the clearest way is to use a visual example (as is my wont).
I'm not really into protracted technical verbiage.

I generated the following animation of a simple circle moving in a circle itself...
26543418.gif


I then downscaled it until the circle is a small blob...
372817686.gif


If we look at a blow-up of the shrunken image, such that we can see the individual pixels, it looks like this...
477542845.gif


Sub-pixel feature tracking essentially works by enlarging the area around the feature you want to track, and applying an interpolation filter to it. Lots of different filters can be used with varying results.

Applying a Lanczos3 filter to the previous GIF, to smooth the colour information between each pixel, results in the following...
371929626.gif


I think you will see that there will be no problem for a computer to locate the centre of the circle quite accurately in that smoothed GIF, even though the circle in the original tiny image was simply a random looking collection of pixels. This process of upscaling and filtering generates arguably more accurate results than simply looking at inter-pixel intensities.

The resulting position determined is therefore clearly sub-pixel when translated back into the units of the original tiny source.
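
As a rough illustration of that upscale-and-interpolate step (in Python, using a cubic-spline zoom as a stand-in for the Lanczos3 filter described above; the blob, scale factor and threshold are all made up for the example):

Code:
import numpy as np
from scipy.ndimage import zoom

def locate_blob_subpixel(frame, scale=8):
    # Enlarge the patch with an interpolating filter, find the blob's centre in the
    # enlarged image, then map the result back to the coordinates of the original frame.
    big = zoom(frame.astype(float), scale, order=3)        # interpolated enlargement
    big = np.clip(big - 0.2 * big.max(), 0.0, None)        # suppress faint interpolation halo
    rows, cols = np.indices(big.shape)
    r = (rows * big).sum() / big.sum()
    c = (cols * big).sum() / big.sum()
    # zoom() aligns the corner samples, so this is the matching back-conversion.
    return (r * (frame.shape[0] - 1) / (big.shape[0] - 1),
            c * (frame.shape[1] - 1) / (big.shape[1] - 1))

# Tiny illustrative "frame": a dim blob whose true centre lies between pixel centres.
frame = np.array([[0.0, 0.1, 0.0, 0.0],
                  [0.1, 0.9, 0.6, 0.0],
                  [0.0, 0.4, 0.3, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])
print(locate_blob_subpixel(frame))       # fractional (row, col), i.e. a sub-pixel position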

It is a side effect of aliasing that small movements of the object cause slight variation in inter-pixel intensity, saturation and colour.

Tracing the position of the small blob on the tiny version results in the following...
323039059.png


The raw data is here...
http://femr2.ucoz.com/SubPixelTracing.xls


The graph shows accurate (not perfect) sub-pixel location data for the position of the small blob.

I could go into more detail, but hope that clarifies.

ETA: Another test using exactly the same small blob rescale, but extended such that it takes 1000 frames to perform the circular movement. This results in the amount of movement being much much smaller between frames. This will give you an idea of how accurate the method can be...


Would you believe it eh.

Here's the first few samples...
Code:
0	0
-0.01	0
-0.024	-0.001
-0.039	-0.001
-0.057	-0.001
-0.08	-0.002
-0.106	-0.002
-0.136	-0.002
-0.167	-0.002
-0.194	-0.004
-0.214	-0.005
-0.234	-0.005
-0.251	-0.007
-0.269	-0.008
-0.289	-0.009
-0.31	-0.009
-0.337	-0.012
-0.365	-0.014
-0.402	-0.015
-0.431	-0.018
-0.455	-0.019
-0.48	-0.02
For this example, I'll quite confidently state that the 3rd decimal place is required, as accuracy under 0.01 pixels is clear. There are other sources of distortion, such as the little wobbles in the trace, which are caused by side effects of the smoothing and upscaling when pixels cross certain boundaries. This reduces the *effective* accuracy. It can be quantified by graphing the difference between the *perfect* path and the trace location, but I'm not sure how much it matters.
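
For what it's worth, that quantification is straightforward once the ground truth is known. A minimal sketch, assuming the trace is available as (x, y) samples and using placeholder values for the file name, circle centre and radius (they are not the actual values behind the .xls above):

Code:
import numpy as np

# Placeholders, not the real test parameters.
trace = np.loadtxt("SubPixelTracing.csv", delimiter=",")     # hypothetical export: columns x, y (pixels)
cx, cy, radius = 0.0, 0.0, 0.5                               # assumed centre and radius of the test circle

# Radial error needs no phase alignment: every perfect sample lies exactly on the circle,
# so (distance from centre - radius) is a per-frame measure of trace error.
radial_error = np.hypot(trace[:, 0] - cx, trace[:, 1] - cy) - radius

print(f"RMS radial error:  {np.sqrt((radial_error ** 2).mean()):.4f} px")
print(f"worst-frame error: {np.abs(radial_error).max():.4f} px")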

Now, obviously this level of accuracy does not directly apply to the video traces, as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.
 
Hey WD,

You mean, of course, that your t=0 corresponds to downward motion of the northwest corner of the roof, which occurs well after the start of downward motion for the east side and center of the roof. That's why you got average accelerations of greater than 1g for the first 1.3 seconds.

I see what you're saying. That's an interesting thought.

And all of your comments apply not only for motion & dynamics within the plane of the north wall, but for those perpendicular to it as well.

femr should easily be able to catch the dynamics if they are within the plane of the wall. In fact, it's likely that he already has. It'd be a lot more difficult, of course, to see them if it's the collapse of the internal framework that drags it down in the way that you're describing. Perhaps that conclusion will be reached by the process of eliminating the within the north wall plane possibility.

It'd be very useful if he'd provide the same curves for the motion of several (say about 3 additional) points along the roofline. Also, the motion of the four corners of the black louvres on that wall would be very useful to give angular rotation of the wall.

Yes, but I didn't postulate a heavy weight; the weight of the roof itself will do fine.

Well, the weight of all the structure, of course.

That's entirely plausible, but I'd like to emphasize that no such event is needed to explain the northwest corner's acceleration at more than 1g when measured from time t1 (your t=0) or the east corner's acceleration at more than 1g when measured from time t0 (before your t=0).

I agree completely. And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.

femr2's just using the software; he didn't write it and probably doesn't understand all of its algorithms. I don't either, but I can demystify this a bit by explaining a couple of standard techniques.

Thanks.

I wanted a clear explanation of the specific techniques that he used in this case.

Even now, he's given "what can be done". And alluded to techniques "similar to what NIST used".

My skepticism is based on the fact that NIST had access to a boatload of experts in videogrammetry. Guys who are world-class experts in these techniques.

As I said, the NIST report gives NIST's estimate of their error as ±6'. femr is claiming 6 inches.

I wasn't born a giant pain-in-the-ass skeptic. But I did grow into it.


Tom
 
Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

I'm not really into protracted technical verbiage.

It seems Tom's (tfk) unwarranted arrogance and pomposity just doesn't lend itself to being appreciated.

The guy (if he is actually a guy) writes like he thinks he is the only one in the world who knows something about anything, and he doesn't seem to mind being abusive.

I have a feeling that the real reason for his not being forthcoming with his identity is that he feels his anonymity allows him to project an air of authority, as a ploy in the discussion, in a way that he wouldn't be able to if his identity were known. His excuse for not revealing his identity, that he is afraid of some form of retaliation by those who don't buy the government line on the events of Sept. 11, 2001, has no real basis. There are others, like Ryan Mackey and Ron Wieck, who publicly support the official story, and nobody has threatened them or attempted to harm them in any way that I know of.

There can be no credibility given to anonymous posters since they face no risk of losing that credibility.
 
femr should easily be able to catch the dynamics if they are within the plane of the wall.
It may also be possible to resolve the movement into three dimensions. Deformation of the facade is the only stumbling block to doing that with ease.

Perhaps that conclusion will be reached by the process of eliminating the within the north wall plane possibility.
NW and NE corners are clearly moving in three dimensions. Flex behaviour of the north face can be seen simply by scrubbing through video of the descent.

It'd be very useful if he'd provide the same curves for the motion of several (say about 3 additional) points along the roofline.
The lack of decent contrast at the roofline makes tracing features along it very prone to error. It will only be possible to provide traces on the roofline for sections of the full clip length. The Near Kink trace provided uses the TL corner of the *black box* on the facade. It's simply not possible to identify the actual position of the roofline for much of the width of the building. Yes, NISTs raw data would be quite useful.

Also, the motion of the four corners of the black louvres on that wall would be very useful to give angular rotation of the wall.
Okay. In addition I'm tracking a number of points down the East edge.

And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.
Until I have access to a longer version of the Camera #3 footage, there is no raw data available to confirm that timing.

I wanted a clear explanation of the specific techniques that he used in this case.

Even now, he's given "what can be done". And alluded to techniques "similar to what NIST used".
I've provided you with a very clear example, including the very simple process (upscaling and interpolating) that allows sub-pixel tracing to actually work. As I said, I can go into more detail, though it shouldn't really be necessary.

If you want more hand-holding, fine. We can go into pattern matching algorithms, FFT, search ranges, all manner of gibberish (aka unnecessary protracted technical banter)

My skepticism is based on the fact that NIST had access to a boatload of experts in vidoegrammetry. Guys that are world class experts in these techniques.
I'll take that as a compliment, regardless of your intent.

As I said, the NIST report gives NIST's estimate of their error as ±6'. femr is claiming 6 inches.
You're confusing the error metrics from the Flight 175 video studies with those for WTC 7. If you study the Camera 3 details in the NIST report, you'll find they state rather higher levels of accuracy post-noise-treatment.

My current noise level estimate for the NIST Camera #3 footage traces currently stands at +/- 0.7ft. I think even you are capable of working out that's 1.4ft (16.8 inches).

However, my stated estimation is noise level. I'm inclined to suggest that post-noise removal (smoothing) it's valid to state a higher level of accuracy. Haven't done so yet though.

Try and remain accurate Tom.

It's worth pointing out again that the roofline does not contain appropriate contrast detail for accurate tracing to be performed. NIST's use of a (badly defined) point on the roofline is therefore a source of lack-of-confidence in their trace graphs (the raw data from which has not been released).

I wasn't born a giant pain-in-the-ass
Bearing in mind you appear to have come into contact with data containing noise only a couple of days ago, have no obvious personal experience or understanding of any of the techniques being used, are clearly prepared to make conclusions on a simple graph of *quick and dirty* trace data, hand-wave away provided proof-of-process details, and it seems you have not actually looked at the movement of WTC 7 in video form in any kind of detail before (as you seem surprised by movement many have seen since the first scrub through video)...forgive me for stating the obvious...

You're not really in a position to be complaining Tom.

If there are things you don't understand, by all means ask.

Is there some additional detail about sub-pixel tracing techniques you don't understand ?

Obviously the exact inner-workings of SynthEyes are not public, but feature tracking systems all share a common set of baseline methods, which I'll go into further detail upon if absolutely necessary.

Oh, and just a small point...NIST do not appear to have extracted static point data from their traces, and also appear to have overlooked horizontal perspective implications (as their vertical distance metric was derived from a different horizontal position to their roofline trace position). Ho hum eh.
 
There can be no credibility given to anonymous posters since they face no risk of losing that credibility.
Any information I provide is attached to my *name*, and it is simply the responsibility of others to focus on the validity of the information, rather than the person it originates from.

I have no desire, nor intention, to converse under any name other than femr2.
 
femr,

Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

This is not "venting". It's stating one of the most fundamental principles of Measurements and Analysis 101.

I've posted THIS for you before. Please watch it. Pay close attention. It'll only take about 10 seconds.

The KEY comment:
MIT physics professor said:
"There is an uncertainty in every single measurement. Unless you know the uncertainty, you know absolutely NOTHING about your measurement."

When I was an engineering student, if we turned in our lab results without an error analysis, it was not accepted. If we didn't get it done, we got a failing grade. Even if we did all the analysis exactly right.

In a previous post, you replied to my comment:

tfk said:
[You & I have disagreed about] ... Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.

femr said:
I probably don't use formal methods, but it's all done as well as possible with nothing hidden or deliberately distorted.

You seem to be suggesting that you think an error analysis is a statement of a researcher's mistakes or dishonesty. Nothing could be further from the truth.

It is an acknowledgment of the inescapable fact that there is an inherent error in every single measurement that anybody, anywhere takes. And a formal analysis of how much those errors will impact any calculated conclusion.

This is an absolutely crucial component of any measurement analysis.

The LACK of an understanding and clear statement of the sources & magnitudes of one's measurement errors is an incompetence.

The LACK of an acknowledgment of those errors is a dishonesty.

What I can tell you is this: the single biggest lesson of error analysis is a shocked: "I cannot BELIEVE that, after taking all those measurements so carefully, we've got to acknowledge THIS BIG an error." But that was precisely the message that our instructors were trying to get us to appreciate.

What would you interpret from this animated GIF ?
http://femr2.ucoz.com/NIST_WTC7_redo-newoutsmallhy.gif

Are you talking about "what this video says about the continuum vs. discrete process of collapse"?

If that's your topic, the drop of the penthouse says to an experienced structural engineer the same thing that the collapse of the previous (east) penthouse said, and that the subsequent collapse of the north wall confirmed: "the building is in the process of collapsing".

And a careful analysis - like your "east corner" graph - confirms that.

And there is absolutely nothing that can be seen in this gif to deny that.

The resulting position determined is therefore clearly sub-pixel when translated back into the units of the original tiny source.

There are a half-dozen or more features of your tiny, circling pixel that are not available to you in the WTC videos.

That's not to say that Syntheyes cannot do significant image enhancement. It is to say that the results of that enhancement are not going to be anywhere near as effective as your example.

It is a side effect of aliasing that small movements of the object cause slight variation in inter-pixel intensity, saturation and colour.

And a principal source of aliasing in video images is leakage in the stop bands of image processing filters. Did you get that? "Filters". As in "things that ALTER the exact rendition of your measured quantity." Filters that have altered your video before you ever got your hands on it.

Not hugely. Subtly.

UNTIL you get down to the individual pixel level.

... The graph shows accurate (not perfect) sub-pixel location data for the position of the small blob.

I could go into more detail, but hope that clarifies.

No, actually it does not.

I didn't ask what type of sub-pixel techniques your program CAN use. I asked which specific ones you DID use.

ETA: Another test using exactly the same small blob rescale, but extended such that it takes 1000 frames to perform the circular movement...

Utterly irrelevant to WTC video. You don't have repetitive, oscillating motion that you can sample multiple times. You don't have 1000 frames of motion to analyze.

For this example, I'll quite confidently state that the 3rd decimal place is required, as accuracy under 0.01 pixels is clear.

Any talk about 1/100 of a pixel in any discussion of available WTC videos is fantasy.

Can be quantified, by graphing the difference between *perfect* and the trace location, but not sure how much it matters.

Stop talking about quantifying it. Stop demeaning the valuable act of quantifying your error. Do it.

But do it right. Not like you have been half-assedly doing it. (See next paragraph.)

Now, obviously this level of accuracy does not directly apply to the video traces,

That's right. It doesn't apply.

(As an aside: You probably shouldn't have brought it up, then. It makes it look like you are indulging in "baffling with the bs".)

This whole method that you use to guess your accuracy - creating perfect graphic images, with 100% contrast, with perfectly defined edges, moving in perfectly geometric, repeating motions, at any number of frame acquisitions, at any spatial resolution, applying perfectly symmetrical blurs & then using filters & processing to reconstruct your original shape & motion - is self-deluding.

Your artificially created reference video ignores the 20 - 100 sources of distortion - unpredictable, asymmetric, non-constant in space & time - that occur before the image gets to disk.

as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.

My guess, based on measurements that I've made in the past: If you performed a real error analysis, and you used re-interlaced video, you'd find that during the collapse (the time of interest, of course), you've got uncertainties of about ±2 to ±3 pixels before image enhancements & ±1 to ±2 pixels after image enhancements.

If you use single frames, then your uncertainties will be about twice as large.
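
For what it's worth, the standard propagation-of-error formulas show why the gap between ±0.2 and ±2-3 pixels matters so much once the data is differentiated: for a simple difference v = (x[i+1] - x[i]) / dt the velocity uncertainty is sqrt(2)*sigma_x/dt, and for a central second difference a = (x[i+1] - 2x[i] + x[i-1]) / dt^2 it is sqrt(6)*sigma_x/dt^2. A quick numerical check with illustrative numbers only:

Code:
import numpy as np

# Illustrative only: how per-sample position noise grows under differencing.
sigma_x = 0.2               # assumed position noise per sample, in pixels
dt = 1.0 / 59.94            # assumed sample interval, seconds (deinterlaced NTSC)

sigma_v = np.sqrt(2.0) * sigma_x / dt        # simple difference: v = (x[i+1] - x[i]) / dt
sigma_a = np.sqrt(6.0) * sigma_x / dt**2     # central second difference: a = (x[i+1] - 2x[i] + x[i-1]) / dt^2

print(f"sigma_v ~ {sigma_v:.0f} px/s, sigma_a ~ {sigma_a:.0f} px/s^2")
# Widening the difference span spreads the same noise over a longer time base, cutting
# sigma_v roughly in proportion to the span and sigma_a in proportion to its square;
# this is why wide symmetric differences or smoothing are needed to tame the scatter.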

Now, I could be wrong about that. Yeah, it's happened before.

Do you know what it would take to convince me that I'm wrong? And to get me to immediately change my mind?

Funny thing. It'd take a competent error analysis.


Tom
 
tfk said:
Time out for a sports metaphor...

Unfortunately for you, THIS is what you've accomplished.

Don't worry. it's not like the score was tied. Or that the score was close. Or that the game was even still going on. The game ended about 3 years ago. It was a rout.

Enough with the ESPN moment. Back to the data.

This is not "venting". It's stating one of the most fundamental principles of Measurements and Analysis 101.

No Tom, it is venting your personal viewpoint of what you perceive to be *the truth movement*, its viewpoint, its goals and intentions.

Unfortunately for me ? What kind of idiot are you Tom ?

Keep your personal crusade out of posts to me.

I've posted THIS for you before.
No, you haven't.

Please watch it.
When I have time perhaps.

It is an acknowledgment of the inescapable fact that there is an inherent error in every single measurement
No excrement Sherlock. Moving on.

Are you talking about "what this video says about the continuum vs. discrete process of collapse"?
No, I'm asking you what you see :)

there is absolutely nothing that can be seen in this gif to deny that.
Just WHAT is your problem Tom ? Who exactly is denying what exactly ?

There are a half-dozen or more features of your tiny, circling pixel that are not available to you in the WTC videos.
Such as...

That's not to say that Syntheyes cannot do significant image enhancement. That is to say that the results of that enhancement is not going to be anywhere near as effective as your example.
Feature tracking is not image enhancement Tom. I'm fully aware of the difference between a perfect test case and real video, and made such clear earlier...
femr2 said:
Now, obviously this level of accuracy does not directly apply to the video traces, as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.
Why is it that AFTER you have read such statements, you seem to reword them in a shielded derisory manner then state them as if they are something you had known for ever and ever eh ? :)

And a principle source of aliasing in video images is leakage in the stop bands of image processing filters. Did you get that?
HA HA. You're looking in from the wrong end of the video chain Tom. The primary source of aliasing in live video footage is that the lens focusses light from a region of space onto a single CCD receptor. It's a side effect of the optical hardware first, the CCD properties second, the internal storage compression artefacts third, and then it gets some messing up from subsequent processes such as format conversion and contrast enhancement (which IS applied to the NIST video, by NIST).

This whole method that you use to guess your accuracy
You just can't get it can you Tom. It's not any kind of attempt to guess WTC trace accuracy. It's an example of the validity of sub-pixel feature tracing.

re-interlaced video
The simplest way to illustrate you talking out of your ass Tom.

Now, I could be wrong about that. Yeah, it's happened before.
You've been learning lots recently. No need to stop now.

Do you know what it would take to convince me that I'm wrong? And to get me to immediately change my mind?
Frankly I really don't care Tom. You are not going to change. You are a pompous idiot with delusions of grandeur. If you can separate your *snark* (as you put it) from courteous dialogue, I'll continue discussion with you. If not, do one. I've got better things to do.
 
femr,

Okay. In addition I'm tracking a number of points down the East edge.

Good. I'll look forward to seeing your data.

tfk said:
And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.

Until I have access to a longer version of the Camera #3 footage, there is no raw data available to confirm that timing.

We "know" it in the same way that we know the Challenger blew up because of an o-ring failure. And the Columbia blew up because of wing damage: "Because competent experts analyzed the situation, and stated clearly that there was sufficient hard evidence to support those conclusions."

I've provided you with a very clear example, including the very simple process (upscaling and interpolating) that allows sub-pixel tracing to actually work. As I said, I can go into more detail, though it shouldn't really be necessary.

If you want more hand-holding, fine. We can go into pattern matching algorithms, FFT, search ranges, all manner of gibberish (aka unnecessary protracted technical banter)

And, once again, I know that there are dozens of techniques used to perform this sort of enhancement. I am NOT asking "what generic techniques are available?"

I am asking, "which specific ones did you use in THIS specific analysis?"

You're getting confused about metric between the Flight 175 video studies and WTC 7. If you study the Camera 3 details in the NIST report, you'll find they state rather higher levels of accuracy post-noise-treatment.

No, I'm not getting confused.

NIST used moire analysis to enhance their resolution of the lateral movement of WTC7. You haven't discussed that technique in your discussion of the building's collapse.

And moire techniques were only available with WTC7 because the building had a repeating pattern of straight vertical lines. Something that the airplane lacked. So that technique was out.

But all of the techniques that you have discussed would have been readily available to NIST video analysts when attempting to measure the speed of the plane. They went thru great effort to get as accurate info as possible. And they still ended up with video based errors of something around ±40 mph.

Try and remain accurate Tom.

Always do.

NISTs use of a (badly defined) point on the roofline is therefore a source of lack-of-confidence in their trace graphs (The raw data from which has not been released).

It's a source of "lack of confidence" in the baselessly suspicious & paranoid.

There are 100,000 trivial little details that they didn't specify. Most of them because they analyzed them & tossed them out as irrelevant or trivial.

Otherwise the report would have been 10x as big & taken 10x as long.

Bearing in mind you appear to have come into contact with data containing noise only a couple of days ago,

try 40 years' worth of noise-laden data...

have no obvious personal experience or understanding of any of the techniques being used,

Wrong.

It would be a mistake for you to assume that my asking you to explain these processes in your own words is so that I can learn them.

It would be a mistake for you to assume that I have no professional experience in video processing of moving images.

are clearly prepared to make conclusions on a simple graph of *quick and dirty* trace data,

Wrong.

I have several conclusions based on my spending about 3 hours running many analyses of various models. After spending about 6 hours writing & debugging a program to do the analysis. It'll take me little time to write up the results. I'll post them here within the next week.

hand-wave away provided proof-of-process details,

You've provided "proof-of-process details"??

That dot & moving circle was your "PROOF of process" for your sub-pixel accuracy claims in the WTC video?

really...?

and it seems you have not actually looked at the movement of WTC 7 in video form in any kind of detail before (as you seem surprised by movement many have seen since the first scrub through video)

You're correct that I haven't extracted my own data from video. No need, IMO. I haven't been the one claiming to do video analysis. And for commenting on others' (like yours), I'm perfectly happy to use their raw data.

Unless of course, like Chandler, they refuse to supply it. Then I'll just discount them as secretive, insincere investigators.

...forgive me for stating the obvious...

Be my guest. I've been known to do the same on occasion.

You're not really in a position to be complaining Tom.

Please... Complaining is for drama queens. I haven't been complaining. I've been "commenting".

Is there some additional detail about sub-pixel tracing techniques you don't understand ?

Yes. What I've asked you about 10x in the last week.

Which SPECIFIC technique did you use in your specific analyses?

Obviously the exact inner-workings of SynthEyes are not public, but feature tracking systems all share a common set of baseline methods, which I'll go into further detail upon if absolutely necessary.

Consider my request to be, IMO, "absolutely necessary".

Oh, and just a small point...NIST do not appear to have rxtracted static point data from their traces, and also appear to have overlooked horizontal perspective implications (as their vertical distance metric was derived from a different horizontal position to their roofline trace position). Ho hum eh.

NIST did not "extract static data points"??

What do you think that the dots on their "drop distance vs. time" graph were?

Or do you mean "location of the roof prior to collapse"?

If that is the case, what do you think all that commentary about the horizontal movement of the roof prior to global collapse was about?

Perhaps, rather than my guessing, you should tell me what you mean by "static data points".


Tom
 