The physics toolkit

I am asking, "which specific ones did you use in THIS specific analysis?"
I've shown you Tom.

I'm using area-based feature tracking, which employs x8 upscaling and Lanczos3 filtering to smooth pixel value transitions. Pattern matching is then employed to provide a best fit of the video data within the search-range area for the subsequent frame. SynthEyes also provides a handy graph of each sample's FOM: the tracker Figure of Merit curve measures the amount of difference between the tracker's reference pattern and what is found in the image, on a 0..1 scale, 0 being a perfect match.
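For readers unfamiliar with area-based tracking, here is a minimal sketch of the idea. This is an illustration only, not SynthEyes' actual implementation: the x8 Lanczos3 upscaling step is omitted, and the simple mean-absolute-difference figure of merit merely mimics the 0..1 convention described above.

```python
import numpy as np

def track_feature(ref, frame, search=3):
    """Slide the reference patch over a search window in `frame`;
    return (dy, dx) of the best match and its figure of merit.
    FOM here is the mean absolute difference of pixel values in 0..1,
    so 0 means a perfect match, as in the convention above."""
    h, w = ref.shape
    best_dy, best_dx, best_fom = 0, 0, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = frame[search + dy:search + dy + h,
                          search + dx:search + dx + w]
            fom = float(np.mean(np.abs(patch - ref)))
            if fom < best_fom:
                best_dy, best_dx, best_fom = dy, dx, fom
    return best_dy, best_dx, best_fom

# Toy frame: a single bright pixel that has moved 1 px down, 2 px right.
ref = np.zeros((5, 5)); ref[2, 2] = 1.0
frame = np.zeros((11, 11)); frame[6, 7] = 1.0
dy, dx, fom = track_feature(ref, frame)
print(dy, dx, fom)   # 1 2 0.0
```

In a real tracker the search would be done on the upscaled, filtered imagery, which is what allows the latch position to be resolved to a fraction of an original pixel.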

No, I'm not getting confused.
Yes, you are. NIST repeatedly state metric accuracies on the order of low inches throughout the Cam #3 analysis.

NIST used moiré analysis to enhance their resolution of the lateral movement of WTC 7. You haven't mentioned that technique in your discussion of the building's collapse.
Probably because I didn't use it. And you seriously then suggest that method will provide only a 6ft accuracy ? Hmmm. Definitely confused :)

It's a source of "lack of confidence" in the baselessly suspicious & paranoid.
Nope. With the full 7 minute video trace I imagine I could detect movement BEFORE that point. Are you keeping up Tom ?

There are 100,000 trivial little details that they didn't specify. Most of them because they were analyzed and tossed out, deemed irrelevant or trivial.
Conjecture.

try 40 years worth of noise laden data...
Your recent posts have been a clear example of your lack of experience in dealing with noise levels in real-world data Tom.

You've provided "proof-of-process details"??

That dot & moving circle was your "PROOF of process" for your sub-pixel accuracy claims in the WTC video?

really...?
For the WTC videos ? No, of course not, and I stated that clearly. It's certainly proof-of-process of the ability to perform sub-pixel accurate tracing of small features within video footage. The effect of the additional sources of noise and error within the available WTC footage is an entirely different thing. Your obsession with extending the scope of presented information is astounding.

You're correct that I haven't extracted my own data from video. No need, IMO. I haven't been the one claiming to do video analysis.
Could you state a claim I've made about the WTC 7 Camera #3 traces please Tom ? :)

Please... Complaining is for drama queens. I haven't been complaining. I've been "commenting".
I refer the less-than-honourable gentleman to my previous response. Exactly what claims are you *commenting* upon eh ?

NIST did not "extract static data points"??
No, they did not.

What do you think that the dots on their "drop distance vs. time" graph were?
Positional markers for MOVING points.

Or do you mean "location of the roof prior to collapse"?
No.

If that is the case, what do you think all that commentary about the horizontal movement of the roof prior to global collapse was about?
Where ARE you going Tom ... ?

Perhaps, rather than my guessing, you should tell me what you mean by "static data points".
Good idea. I've mentioned it many times during this thread, and engaged with a reasonable amount of discussion about it with WDC. Perhaps you skimmed over the thread content to better serve your purposes. Who knows.

Static Points...

One source of noise within the video is very slight movements of the camera.

By performing traces of multiple points on the video frame that are guaranteed to remain static (i.e. features on foreground buildings that are NOT dropping to the ground) it is possible to quantify low-magnitude *camera wobble*.

When I refer to static point extraction, I mean the subtraction of camera wobble data from moving point trace data.

This technique was applied to the obvious camera shake in the Sauret footage, with excellent results.
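As a sketch of the subtraction step (synthetic traces standing in for real tracker output; the amplitudes and the sine-shaped wobble are illustrative assumptions, not the Sauret or Camera #3 data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
wobble = 0.5 * np.sin(np.linspace(0.0, 12.0, n))    # shared camera shake, ft
true_drop = np.linspace(0.0, 50.0, n)               # genuine feature motion, ft
tracker_noise = 0.05                                # per-trace tracker jitter, ft

# Both traces see the same wobble; only the moving point sees the drop.
static_trace = wobble + tracker_noise * rng.standard_normal(n)
moving_trace = true_drop + wobble + tracker_noise * rng.standard_normal(n)

# Static point extraction: subtract the (mean-centred) wobble estimate.
corrected = moving_trace - (static_trace - static_trace.mean())

raw_err = np.max(np.abs(moving_trace - true_drop))
corr_err = np.max(np.abs(corrected - true_drop))
print(raw_err > corr_err)   # True: most of the wobble is removed
```

In practice one would average several static-point traces to beat down the tracker jitter before subtracting, but the principle is the same.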

I really do have better things to do than sort out your continual misunderstanding Tom.
 
femr,


I gotta say, guy, that you have an amazing propensity for focusing on, and reacting to, the Fluff, while ignoring the significant. This response is a perfect example.

Allow me to supply my subjective assessment of what is Fluff & what is Substance. And to quote your response.

[My little "wrong way touchdown" video.]
My opinion: Fluff

Your response: You quoted it in its entirety.
___

This is not "venting".
My opinion: Fluff

Your response:
No Tom, it is venting your personal viewpoint of what you perceive to be *the truth movement*, its viewpoint, its goals and intentions.

Uh, I think a lot of people here have opinions about the truth movement. And more than one or two have been known to express them. If you're gonna be that thin skinned about that topic, perhaps this isn't the forum for you.

Just a thought.
___

It's stating one of the most fundamental principles of Measurements and Analysis 101.
My opinion: Extreme Substance

Your response:
[silence]
___

[My suggestion that you're a truther.]
My opinion: Fluff

Your response:
Unfortunately for me ? What kind of idiot are you Tom ?
Keep your personal crusade out of posts to me.

Even tho it's Fluff, let me ask you for a clear statement of your opinion.

1. Do you consider that OBL & his crew was responsible for 9/11?
2. Do you think that any component of the US gov't was involved in any way?
3. Do you think that NIST or the engineers who supported them committed fraud in any way?
4. Do you consider yourself a truther?
5. If not, how do you distinguish yourself from the rank & file truther?
___

[My statement that your graph supports NIST & undermines the truther perception.]
My opinion: Substance

Your response:
[Silence]
___

I've posted THIS for you before.
My opinion: Fluff

Your response:
No, you haven't.

Coulda sworn ...
Ah well, not nice to swear.

OK, if I hadn't, it is an absolutely pivotal, crucial concept for you to learn.
You're welcome.
___

[The content of the video] Please watch it.
My opinion: Substance

Your response:
When I have time perhaps.

In other words, no response.

BTW, you're not very good at the convenient little fib. You wouldn't know if you've seen the video before unless you looked at it. Since I set it up to jump right to the pertinent quote, you also saw that quote.
___

[Error analysis] is an acknowledgment of the inescapable fact that there is an inherent error in every single measurement that anybody, anywhere takes.
My opinion: Extreme Substance

Your response:
No excrement Sherlock. Moving on.
___

But you skipped entirely the next, core comment.
And a formal analysis of how much those errors will impact any calculated conclusion.
My opinion: Substance

Your response:
[silence]
___

And now, the heart of my post:

This is an absolutely crucial component of any measurement analysis.

The LACK of an understanding and clear statement of the sources & magnitudes of one's measurement errors is an incompetence.

The LACK of an acknowledgment of those error is a dishonesty.

What I can tell you is this: the single biggest lesson of error analysis is a shocked: "I cannot BELIEVE that, after taking all those measurements so carefully, we've got to acknowledge THIS BIG an error." But that was precisely the message that our instructors were trying to get us to appreciate.
My opinion: Extreme Substance

Your response:
[silence]
___

Then, being the courteous fella that I am, I tried to answer a question that you asked me:
What would you interpret from this animated GIF ?
http://femr2.ucoz.com/NIST_WTC7_redo-newoutsmallhy.gif

I see 100 different things in that gif. You gave me no context, so I answered within the context of the discussion.
I replied:
Are you talking about "what this video says about the continuum vs. discrete process of collapse"?

If that's your topic, the drop of the penthouse says to an experienced structural engineer the same thing that the collapse of the previous (east) penthouse said, and that the subsequent collapse of the north wall confirmed: "the building is in the process of collapsing".

And a careful analysis - like your "east corner" graph - confirms that.

And I wrapped it up with a conclusion that tied directly into the discussion.
And there is absolutely nothing that can be seen in this gif to deny that.
My opinion: Substance

Your response seemed a little disproportionate:
Just WHAT is your problem Tom ? Who exactly is denying what exactly ?

Not a word of any Substance. Nothing about what the gif means in the context of our discussion. Just a bit of over-the-top drama.

This has been a continuous pattern in my discussions with you, femr.

Is there some particular reason that you ignore the Substance & fixate on the Fluff?



Tom
 
Tom,

You will recall that the graph you are discussing was presented as a quick and dirty trace of the features.

You then made assertions about what it contained, and I offered you an opportunity to provide an interpretation, and you obliged.

I've also stated that it will take me a couple of days to perform the set of requested traces, as it is a very time-consuming and laborious task, involving very careful initial tracker placement followed by much checking, rechecking and analysis of tracker latch quality (which SynthEyes quantifies on a per sample level as I've indicated to you).

I also provided you with a colour processed GIF and asked you to also provide an interpretation of what you can see.

Part of the reason for doing so was to see if you could spot the *problem*.

Before going further, your continual demands for error analysis are becoming incredibly tedious. I have not made any claims about the data I've provided you, and the only person who has made claims about it is you, though you have yet to present your reasoning behind them...
My guess, based on measurements that I've made in the past: If you performed a real error analysis, and you used re-interlaced video, you'd find that during the collapse (the time of interest, of course), you've got uncertainties of about ± 2 to ±3 pixels before image enhancements & ±1 to ±2 pixels after image enhancements.
Your guess ?
In the past ?
Re-interlaced video ?
During collapse ? What about your observation re NE corner ?
For data provided to you 2 days ago ?

Right.

You have made claims about the quick and dirty trace, which was provided for the purpose of seeing whether the horizontal point that NIST used to determine their 32.196 ft/s linear fit and their curve fit (which maxes out at 34 ft/s^2) descended slower than my (slow and clean) NW corner traces, which (with the most recent distance scalar) max out around 36 ft/s^2.

Is there any reason you are not performing your own error analysis for the claims you have made ?

If I do make claims about the WTC 7 trace data, which I publish in any kind of formal way, then it's to be expected that error analysis will be performed. Until that point, Tom, get a grip.

I love how you cherry pick through responses though, deftly ignoring the responses to your misunderstandings, and instead make assumption after assumption upon segments you choose to take out of context or further misunderstandings you have suffered. Boring. Transparent. Far too time consuming. I am not here to discuss error analysis with you. That's your baby. Jog on. Your previous post consists entirely of the assumption that if I don't bow to your whim and discuss what you want me discuss you'll throw your toys out of the cot. It's hilarious. And very sad at the same time.

Now then.

If you look at (analyse) the animated GIF you'll see that the NE corner image quality ain't too hot.

Close inspection reveals that the roofline image data for the NE corner suffers from significant bleed at the start of the clip, which recedes as the clip progresses and the smoke in the background clears. As the smoke clears, the contrast and clarity of the NE corner increase.

Here is a draft trace of the position of the window immediately below the NE corner, and the NW corner...

325296455.png


Your interpretation ? :)

As I've said, I'll get the traces done over the next couple of days (IF you will get off your high-horse and stop being such a time-wasting pompous ass).

If the data is of no use to you, fine. No problem at all. I've plenty to do. The input from W.D. Clinger has been very welcome, and I hope it continues. You are just a drain on scant resources.
 
Tom,

You'd best have the draft horizontal movement traces of the corner features too...
340872750.png


Must've forgotten to add the legend, but am sure you can work out which is the NW corner.
 
femr,

This is the only comment that I'll address today.

It's the gem of this post.

Your recent posts have been a clear example of your lack of experience in dealing with noise levels in real-world data Tom.

Psst, femr.

One of the two of us has over 35 years' experience as a professional, working engineer.

One of the two of us has written about 400 engineering test reports in his career. Which usually meant "wrote the protocols, designed, built & validated test fixtures & set-up, took the data (or trained & supervised the tech that did), reduced the data, drew the conclusion, wrote the report and signed off with "engineering approval".

One of the two of us has his name in the "designed by" box of something in the neighborhood of 3000 engineering drawings.

... in the "approved by" box of ~12,000.

One of the two of us has seen about 2000 of those drawings turned into real world parts that had to serve a purpose.

One of the two of us has seen about 500 of those drawings turn into parts that went into production & were sold in the marketplace. Either to other high tech companies to stick into their products. Or to hospitals to stick into people.

One of the two of us understands that those drawings have a boatload of dimensions, each with tolerances (i.e., "errors") attached.

One of us understands that, in order to find out what the tolerance (i.e., "error") on all of those numbers on all of those parts must be, a person had to do a competent little error analysis called a "tolerance stack up".

In other words, femr, one of the two of us has been immersed up to his eyeballs, in the real world of real numbers that are dripping with dirt & grime & noise & error.

You were saying something about my "lack of experience in dealing with noise levels in real-world data ..." ?
___

I've got no more time for this nonsense.

WD, he's all yours.

When you find out that this closet truther is laden with charts & graphs & spreadsheets & suspicion ...

... but little understanding of context or significance...

... is immune to logic ...

... publishes page after page after page of posts, but "making no claims"...

... and avoids like the plague stating what he really believes about anything ...

... and when you find out that he'll pick a squabbling fight over trivia when he finds that you disagree with him ...

... when you realize that it's all a waste of time & typing ...

... lemme know.


But this petty little bitch-fest is a pointless waste of time.

In spite of Carlitos' tuning in with a beer. (Nice touch!)


Tom

PS. I'll post the things that I found out about this modeling later this week. They're quite interesting when you put numbers to them.

Unlike femr, when I post them, I will make some claims about what they mean.
 
femr2 said:
Your recent posts have been a clear example of your lack of experience in dealing with noise levels in real-world data Tom.
One of the two of us... (×8)
In other words, femr, one of the two of us has been immersed up to his eyeballs, in the real world of real numbers that are dripping with dirt & grime & noise & error.

That's great. However...

1) When presented with the original set of raw data, this was your response...
tfk said:
I hate to be the bearer of bad news regarding your video data, but...

In short, either:

A. The measured point on that building is undergoing absurd levels of acceleration (10 to 80 G's).
B. I can't program on the fly like I used to.
C. There are some serious artifacts that your video technique is introducing into your data.

I vote "C".
You were deriving acceleration metrics from raw position/time data containing noise, and not treating that noise correctly. Performing first- and second-order differentiation of noisy data using near-adjacent samples will, of course, result in extreme amplification of that noise.
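The amplification is easy to demonstrate. A minimal sketch (synthetic free-fall-style data, not the WTC trace) with worst-case alternating ±0.7 ft noise at 59.94 samples/s, naively differenced twice:

```python
import numpy as np

f = 59.94                         # samples per second
dt = 1.0 / f
g = 32.174                        # ft/s^2
t = np.arange(0, 2, dt)
pos = 0.5 * g * t**2              # ideal drop distance, ft

# Worst-case alternating noise of +/-0.7 ft on each sample.
noise = 0.7 * np.where(np.arange(t.size) % 2 == 0, -1.0, 1.0)
noisy = pos + noise

vel = np.diff(noisy) / dt         # first difference:  noise -> ~ +/-84 ft/s
acc = np.diff(vel) / dt           # second difference: noise -> thousands of ft/s^2

print(round(np.max(np.abs(acc - g))))   # ~10060 ft/s^2, i.e. hundreds of g
```

Sub-foot position noise becomes a five-figure acceleration artifact purely through the arithmetic of near-adjacent differencing, which is the point being made here.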

2) Your *smoothing* method...
tfk said:
The dots are from your data points, calculated as a "balanced difference". That is, the velocity is given by (DropPoint[i+1] - DropPoint[i]) / (Time[i+1] - Time[i]). This value is set at the midpoint of the sample times (= Time[i] + 0.5 dt, where dt = Time[i+1] - Time[i], constant for your data).

Extremely narrow band. 59.94 samples per second data, with visible noise level of roughly +/- 0.7 ft (+/- 0.2 pixels) (as seen on the following graph provided to you before you even received the raw data)
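For clarity, the balanced-difference scheme described above, sketched on hypothetical numbers (a clean x = 50·t² drop, not the actual trace):

```python
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3])        # sample times, s (hypothetical)
x = np.array([0.0, 0.5, 2.0, 4.5])        # drop points, ft (x = 50*t^2)

v = np.diff(x) / np.diff(t)               # (x[i+1] - x[i]) / (t[i+1] - t[i])
t_mid = t[:-1] + 0.5 * np.diff(t)         # each v assigned to the midpoint

print(v, t_mid)   # ~ [5, 15, 25] ft/s at midpoints [0.05, 0.15, 0.25] s
```

Note the velocity samples land between the position samples, which is why the dots sit at half-frame offsets on the graph.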


3) Assignment of blame...
tfk said:
You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.
Your velocity data (still cannot see any of your graphs) is all over the map due to your inept treatment of the raw data.
It is not a direct result of my very short time base between data points, but your use of the data.
Yet you clearly understand that small errors in the measured location will result in huge variations in velocity...with your chosen methods.

4) Interpretation...
tfk said:
I strongly believe that this scatter is an artifact of the errors in your measurement technique.
If you had treated the data correctly with regards to noise, you would not end up amplifying that noise.

5) Realisation...
tfk said:
I also believe that the only way to get rid of this is to apply a smoothing filter to the data.
Wonderful. Step number one for anyone with any experience in deriving metrics from noisy data.

6) And backwards steps...
tfk said:
I do NOT believe that those high frequency changes in velocity are real. I believe that they are an artifact of your (camera, compression, analysis technique (i.e., pixellation), etc.).

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
It would be foolish indeed to accept that the building is undergoing such completely absurd accelerations.

7) Without learning anything...
tfk said:
The acceleration is handled exactly the same as the velocity was.
You'd already realised the need to smooth, then decided not to.

8) And still not realising where the culprit lies...
tfk said:
But now, you can see that you're manipulating velocity data that has huge artifacts built in.
The artifacts in your velocity data are a side-effect of your methods.

9) And still not seeing the fundamental problem...
tfk said:
This makes the calculated acceleration data absurd: over 2500 ft/sec^2 or ~80G's.
The drop distance/time graph presented to you in advance of you receiving the raw data really should have informed you that there was a problem with your method...


10) More interpretation...
tfk said:
I think that I've made it pretty clear that I believe that there is a lot of noise in the data, and that it does not reflect real motions of the building.
From the graphs already presented, the noise amplitude is roughly +/- 0.7 ft, i.e. a 1.4 ft peak-to-peak band. The full drop height of the building feature spans 340 ft, making the signal-to-noise ratio roughly 243:1. Not a huge amount of noise in my humble opinion.

Video showing the data overlaid on the source video was provided to clarify good reflection of real building feature motion. (In 2D of course)

11) A voice of reason...
W.D.Clinger said:
I could hardly believe you were criticizing femr2 for providing data that (gasp!) contain noise.

Everyone agrees there's a lot of noise in the data. By definition, the noise does not reflect real motions of the building.

On the other hand, I see no reason to doubt that femr2's data, when analyzed properly, will reflect real motions of the building.
Thanks.

12) The conclusion...
tfk said:
This is good raw data.

You're to be commended for the (clearly extensive) work that you've put into it. You're to be especially commended for your willingness to share your raw data. Something that is exceedingly rare in the truther world.

We have disagreements about several things that overlap into my area: engineering. Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.

The biggest lesson of just a few hours for me on this data is something that is readily evident in your Excel data: the fact that very slight variations in the position vs. time model result in sizable variations in the velocity vs. time model. And then result in enormous variations in the acceleration vs. time model.
(bolding mine)

I have no reason to doubt your engineering skills.
I have no reason to doubt your engineering error analysis skills.

I have good reason to suggest that you are not experienced in deriving velocity and acceleration metrics from position/time data containing noise, regardless of the other areas of experience you have stated.
 
Just for snicks, since Tom K is tuning out for now.

Hi femr2,
Could you please explain how your findings relate to the events of 9/11/01? Do you have a hypothesis that explains the events of that day which:
a) includes your findings
b) conforms to observed events
c) makes sense

Does your hypothesis differ with NIST and the 9/11 Commission in any way?
 
Just for snicks, since Tom K is tuning out for now.

Hi femr2,
Could you please explain how your findings relate to the events of 9/11/01? Do you have a hypothesis that explains the events of that day which:
a) includes your findings
b) conforms to observed events
c) makes sense

Does your hypothesis differ with NIST and the 9/11 Commission in any way?
I provided tfk the raw drop distance/time data for WTC 7 in order for him to extrapolate the full descent time as part of a discussion he was having with Tony.

I am still in the process of extracting the data from video, and have not really performed any analysis of the data, so no findings as yet.

The resultant *discussion* with Tom ensued due to his penchant to debunk/discredit/reject any information provided to him by anyone he classifies as a *twoofer*.

My purpose at this time is to improve the quality of the raw data, and refine the pixel->real world scaling metrics.

Over-G acceleration has been observed in the data, and is also present within the NIST derivations, but until I have exhausted all possibilities for improving the scaling metrics it is not really possible to start drawing conclusions.
 
femr2 said:
[full reply quoted above]

psst... I seem to have forgotten something...

Which one of us was the one claiming to be able to produce raw data points with ±0.2 pixel accuracy ...?

How big would the velocity & acceleration variation have been if your raw data points had really had that level of accuracy?

Do a little math. Or just wait a couple days & I'll do it for you as a portion of the results I'll be preparing.
 
Which one of us was the one claiming to be able to produce raw data points with ±0.2 pixel accuracy ...?
I've estimated the noise variance, based on simple eye-balling of the position-time data during the near-static portion of the trace, sure. If you're inclined to perform further analysis on old data, that's great. I imagine you'll come out with a larger value during descent. Awesome.
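Making the eye-ball estimate quantitative is straightforward. A sketch with synthetic stand-in numbers (the 120 ft level and 0.23 ft noise are hypothetical, chosen so the ±3σ band is about ±0.7 ft):

```python
import numpy as np

rng = np.random.default_rng(2)
frames = np.arange(300)
# Near-static portion of a trace: a constant level plus tracker noise.
static_portion = 120.0 + 0.23 * rng.standard_normal(300)

# Detrend with a straight-line fit, then take the spread of the residual.
fit = np.polyval(np.polyfit(frames, static_portion, 1), frames)
noise_std = (static_portion - fit).std()
band = 3.0 * noise_std            # a "+/- 0.7 ft"-style plus/minus figure

print(round(noise_std, 2), round(band, 2))
```

The linear detrend matters: any residual drift (thermal, slow pan) left in the "static" segment would otherwise inflate the noise estimate.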

Make it clear what data-set you are using though, and I assume you'll actually need the source video to do it properly. I'll dig you out a link later.

just wait a couple days & I'll do it for you as a portion of the results I'll be preparing.
Thanks.
 
error analysis

You were deriving acceleration metrics from raw position/time data containing noise, and not treating that noise correctly. Performing first- and second-order differentiation of noisy data using near-adjacent samples will, of course, result in extreme amplification of that noise.
Well said.

Although the original post of this thread asked how Chandler could possibly derive instantaneous velocity from sampled position data for WTC1 (not WTC7), Chandler was actually drawing an incorrect conclusion about instantaneous acceleration from sampled position data. When sampled position data are used to estimate acceleration, the error is proportional to the error in the sampled position data and also proportional to the square of the sampling rate.

Suppose, for example, that +/-e_s is the worst-case error in the sampled position data, +/-e_v is the worst-case error in the velocities estimated by simple differencing, and +/-e_a is the worst-case error in the accelerations estimated by second differencing. Let f be the sampling rate (in hertz). Then

e_v = 2 f e_s
e_a = 2 f e_v = 4 f^2 e_s

where the factor of 2 comes from computing the difference of two values each with +/-e error: e - (-e) = 2e.
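A quick numerical check of these relations at the figures under discussion (59.94 Hz sampling and a ±0.7 ft worst-case position error):

```python
f = 59.94        # sampling rate, Hz
e_s = 0.7        # worst-case position error, ft

e_v = 2 * f * e_s          # worst-case velocity error from simple differencing
e_a = 2 * f * e_v          # = 4 f^2 e_s, worst-case acceleration error

g = 32.174                 # ft/s^2
print(round(e_v, 1), round(e_a), round(e_a / g))   # 83.9 10060 313
```

So sub-foot position uncertainty propagates to roughly 84 ft/s in velocity and about 10,060 ft/s² (over 300 g) in acceleration at this sampling rate.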

Extremely narrow band. 59.94 samples per second data, with visible noise level of roughly +/- 0.7 ft (+/- 0.2 pixels) (as seen on the following graph provided to you before you even received the raw data)
http://femr2.ucoz.com/_ph/1/2/172155712.jpg
Note that femr2 is not advocating a data analysis based on those characteristics of his data, but is using those characteristics to warn against analyses based directly upon his unsmoothed and unreduced data.

femr2's estimated error of +/- 0.7ft is more realistic than the +/-0.44ft error claimed by MacQueen and Szamboti for their data, before Tony Szamboti found it more useful to claim their error was really more like +/-12ft. Note also that femr2's sampling rate of almost 60 Hz provides much better resolution than the 6 Hz of the MacQueen/Szamboti data or the 5 Hz of the Chandler or Chandler/Szamboti data.

Unfortunately, femr2's improved resolution presents a trap for the unwary:
e_a = 4 f^2 e_s = 4 × (59.94/s)^2 × (0.7 ft) ≈ 10,060 ft/s^2, which is more than 300 g.

Your velocity data (still cannot see any of your graphs) is all over the map due to your inept treatment of the raw data.
It is not a direct result of my very short time base between data points, but your use of the data.
Yet you clearly understand that small errors in the measured location will result in huge variations in velocity...with your chosen methods.
Absolutely correct. To obtain meaningful estimates for acceleration, we have to reduce the resolution of femr2's data. That's not a knock on femr2's data; it's just a fact of life for this kind of analysis.

Unfortunately, reducing the resolution also reduces our ability to detect short jolts. That's not a knock on analyses that reduce the resolution; it's just a fact of mathematics. The important thing is to accept the limitations of femr2's data and our analysis, so we don't fall for the Chandler-MacQueen-Szamboti fallacy and related fallacies.
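A sketch of that trade-off (synthetic data again, not femr2's trace): block-averaging the 59.94 Hz samples down to roughly 6 Hz before second-differencing slashes the acceleration noise, while a constant-acceleration trend survives the reduction intact.

```python
import numpy as np

f = 59.94
dt = 1.0 / f
g = 32.174
rng = np.random.default_rng(1)

t = np.arange(0, 2, dt)
pos = 0.5 * g * t**2 + 0.23 * rng.standard_normal(t.size)   # noisy drop, ft

def second_diff_acc(x, step):
    """Block-average x in groups of `step` samples, then second-difference."""
    usable = x[: x.size - x.size % step]
    reduced = usable.reshape(-1, step).mean(axis=1)
    return np.diff(np.diff(reduced)) / (step * dt) ** 2

acc_full = second_diff_acc(pos, 1)       # full 59.94 Hz resolution
acc_low = second_diff_acc(pos, 10)       # ~6 Hz effective resolution

print(acc_full.std() > 30 * acc_low.std())   # True: large error reduction
```

The price, as noted above, is temporal resolution: any jolt shorter than the averaging window is smeared out and cannot be detected in the reduced series.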

If you had treated the data correctly with regards to noise, you would not end up amplifying that noise.
Yep. (ETA: Not have amplified it so much, anyway.)
 
When sampled position data are used to estimate acceleration, the error is proportional to the error in the sampled position data and also proportional to the square of the sampling rate.

Suppose, for example, that +/-e_s is the worst-case error in the sampled position data, +/-e_v is the worst-case error in the velocities estimated by simple differencing, and +/-e_a is the worst-case error in the accelerations estimated by second differencing. Let f be the sampling rate (in hertz). Then

e_v = 2 f e_s
e_a = 2 f e_v = 4 f^2 e_s

where the factor of 2 comes from computing the difference of two values each with +/-e error: e - (-e) = 2e.
A useful example. Thanks.

femr2's estimated error of +/- 0.7ft is more realistic than the +/-0.44ft error claimed by MacQueen and Szamboti for their data
I have an inkling that Tom's analysis will involve additional sources of uncertainty, probably relating to the full signal path between real-world source, through camera hardware to final version digital video file.

To date my eye-balled error estimations are based upon the tracking system's adherence to the tracked feature, but your opinion on any additional suggested factors would be very welcome.

Yep. (ETA: Not have amplified it so much, anyway.)
Yes, of course. My bad. Cannot eliminate amplification of even residual noise.
 
Tom,

I forgot to post the NIST Camera #3 source video link...

HERE

...which I assume you'll need for your uncertainty analysis.

Am currently tracing some more rendered test footage in order to quantify the FOM difference from the NW corner trace. This should give an indication of the noise in the NIST video compared to the noise inherent in tracing small sub-pixel feature movements. I'll post the results soon, though it will probably be after you post your results.
 
It is so amusing, in some deranged way, to pop in time and time again and see femr the physics fraud constantly babbling like an idiot and getting his uneducated posterior handed to him.

Femr...don't you ever learn? I mean, c'mon, over a year ago you failed to answer numerous 8th grade physics questions correct..I figured the sheer embarrassment of that would send you running...or admitting you are a charlatan.

Guess not
 
It is so amusing, in some deranged way, to pop in time and time again and see femr the physics fraud constantly babbling like an idiot and getting his uneducated posterior handed to him.

Femr...don't you ever learn? I mean, c'mon, over a year ago you failed to answer numerous 8th grade physics questions correct..I figured the sheer embarrassment of that would send you running...or admitting you are a charlatan.

Guess not

Howdy Carl,

Never thought that I'd be defending femr but ...

There are parts of your comment that (I feel) aren't a fair assessment of what's happening here.

And, since I'm his principal antagonist here, I thought it appropriate to comment.

I happen to think that he's a bit of a putz. (Sometimes more than a bit.) And occasionally I overreact to his putziness a bit. (Sometimes more than a bit.) And some of my overreaction has been, shall we be kind, "intemperate".

I can't speak to the physics. I didn't see your discussion. But I've had a few of my own with him about engineering...

But the raw video data that he gets from his SynthEyes program is very good. For example, it meets the "noise level" that he claims. But that does not translate into an equivalent level of accuracy.

When he is talking about the very specific item of "what his program measures in the videos", I've found him to be usually correct.

Where I think that he goes off-track is when he tries to apply that information to the real world outside of the video.

And, even here, it's not the magnitude of the data or his calculations that we are disagreeing about. But the precision.

So, all told, I think that he is basically right about the video data measurements, wrong about their level of precision and misleading when he attempts to interpret what the numbers mean.

But I don't think that the terms that you used are appropriate when it comes to video analysis.


Tom

PS. As an example of our disagreement, NIST states (accurately) that they can measure CHANGES in horizontal position of certain points on the WTC7 down to (if you accurately read between the lines) about 1" level.

But they can only do that by using the Moire technique. That does NOT mean that they could measure absolute locations of other features in the video to anywhere near that level of precision.

In fact, again reading accurately between their lines, they say that they can identify the width of the building, based on its two vertical edges, only to an accuracy of ±4 pixels. Which translates into basically ±4' (48"). This says that they feel that they can identify a vertical (& presumably horizontal) straight line only to an accuracy of ±2 pixels.

This number makes eminent sense to me & feels right, too.

That's not to say that NIST couldn't have done better. It's clear from their data & analysis, tho, that they didn't need to.
 
But I've had a few of my own with him about engineering...
Really ? When, and about what ?

Where I think that he goes off-track is when he tries to apply that information to the real world outside of the video.
When, and about what ?

And, even here, it's not the magnitude of the data or his calculations that we are disagreeing about. But the precision.
I'll await your uncertainty analysis with interest.

So, all told, I think that he is basically right about the video data measurements, wrong about their level of precision and misleading when he attempts to interpret what the numbers mean.
The only precision I've stated has been fully qualified...adherence to the tracked feature position match, with a still-being-refined pixel-to-foot scalar. Any suggestion of being misleading is your own inference.


PS. As an example of our disagreement, NIST states (accurately) that they can measure CHANGES in horizontal position of certain points on the WTC7 down to (if you accurately read between the lines) about 1" level.

But they can only do that by using the Moire technique.
It's a technique very prone to translation error. There is really no sure-fire way to calibrate the method from vertical pixel location to horizontal real-world motion scales.

Their results are easily replicable using *8 Lanczos3 filtering and pattern matching (a quick and dirty trace; can be refined further)...
214635544.png

Other methods have been used to replicate this level of accuracy.
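The principle (not the actual SynthEyes pipeline; this is a 1-D NumPy sketch with a made-up soft-edge feature) is upscale-with-Lanczos3, then best-fit pattern matching at sub-pixel steps:

```python
import numpy as np

def lanczos3(x):
    # Lanczos3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0
    x = np.asarray(x, dtype=float)
    k = np.sinc(x) * np.sinc(x / 3.0)
    k[np.abs(x) >= 3.0] = 0.0
    return k

def upscale(signal, factor=8):
    # Resample a 1-D intensity profile at factor-times density,
    # interpolating with the 6-tap Lanczos3 kernel
    n = len(signal)
    grid = np.arange(n * factor) / factor
    out = np.empty(grid.size)
    for i, p in enumerate(grid):
        taps = np.arange(int(np.floor(p)) - 2, int(np.floor(p)) + 4)
        w = lanczos3(taps - p)
        out[i] = w.dot(signal[np.clip(taps, 0, n - 1)]) / w.sum()
    return out

def best_fit_shift(ref, cur, factor=8, search=16):
    # Pattern matching: slide cur against ref in 1/factor-pixel steps
    # and return the shift (in original pixels) minimising the SSD
    lo, hi = search, len(ref) - search
    ssd = [np.sum((cur[lo:hi] - ref[lo - s:hi - s]) ** 2)
           for s in range(-search, search + 1)]
    return (int(np.argmin(ssd)) - search) / factor

# Demo: a soft vertical-edge profile that moves 0.25 px between frames
x = np.arange(64, dtype=float)
frame0 = 1.0 / (1.0 + np.exp(-(x - 32.0)))    # edge centred at px 32
frame1 = 1.0 / (1.0 + np.exp(-(x - 32.25)))   # same edge, 0.25 px later
shift = best_fit_shift(upscale(frame0), upscale(frame1))
# shift recovers the 0.25 px motion to within one 1/8-px step
```

The point is only that sub-pixel motion of a smooth feature is recoverable from the intensity changes it produces in adjacent pixels; real footage adds compression noise on top of this.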

That does NOT mean that they could measure absolute locations of other features in the video to anywhere near that level of precision.
Which is a shame, or a travesty, depending upon how you look at it. The methods I use apply to all traced features. What method did NIST use for the roofline trace ?

In fact, again reading accurately between their lines, they say that they can identify the width of the building, based on its two vertical edges, only to an accuracy of ±4 pixels. Which translate into basically ±4' (48").
I basically agree with that (as they are not using their oddball moire technique to define a more accurate position) so it's +/- 1 pixel for them at each end, but will point out that...
a) I'm not measuring absolute distances. I'm measuring frame-to-frame changes in position, which can be performed much more accurately.
b) Their choice of vertical roofline location (which is not clearly defined) is indeed impossible to determine with much accuracy, as the contrast between roofline and penthouses is so poor. A poor choice of point bearing in mind only one point was checked.
c) You should always use the best pixel to distance metric you have available.
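Point (a) is worth a toy illustration: a constant absolute offset, however large, drops out of frame-to-frame differences (the values here are arbitrary):

```python
# A fixed calibration offset (e.g. an uncertain absolute roofline
# location) cancels out of frame-to-frame position differences.
true_positions = [0.0, 0.1, 0.3, 0.6]          # ft, hypothetical
bias = 4.0                                      # constant absolute error, ft
measured = [p + bias for p in true_positions]

deltas = [measured[i + 1] - measured[i] for i in range(len(measured) - 1)]
# deltas ~= [0.1, 0.2, 0.3], independent of bias: only the
# frame-to-frame random error survives the differencing
```

So frame-to-frame tracking can legitimately be more accurate than the absolute-position accuracy of the same footage.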
 
Where I think that he goes off-track is when he tries to apply that information to the real world outside of the video.


Tom, I could not say it any better.

The real world escapes him.
 
A full quality trace...
89078455.png


Vertical scaling is a manual fit.

The finer variations are there, as are the sharper peaks...

wsvsv.gif


(NW edge, RGB separated)

Obviously, from this view...
370825048.jpg


Quite why NIST chose to smooth out the clearly present sharp directional changes is unknown, but the tracing methods employed for the rest of the datasets I have provided clearly perform at a level of accuracy similar to the *moire* method used by NIST.

Again, what method did NIST use for the roofline drop trace ?
 
femr,

Really ? When, and about what ?
...
When, and about what ?

Both questions: the discussion on "911 forum" regarding your kinematic model of the towers.
__

BTW, why do you expect me to answer your inconsequential questions, when you won't answer my substantial ones?

What do you think about the implications of the motion of the east edge roof line for several seconds before the start of the north facade drop?

Why won't you answer my simple questions about your "truther status"?

It's not as if it's a secret, femr...

The only precision I've stated has been fully qualified...adherance to the tracked feature position match, with a being-refined pixel to foot scalar. Any suggestion of being misleading is your own inference.

Misleading? hmmmm... Like this?

Perhaps you could reconcile these two sentences. Both by you. Both in this thread.

NiST used Moiré analysis to enhance their resolution of the lateral movement of WTC7. You haven't discussed that technique in your discussion of the building's collapse.

Probably because I didn't use it.


Versus your earlier comment:

NiST used Moiré analysis to enhance their resolution of the lateral movement of WTC7. You haven't discussed that technique in your discussion of the building's collapse.

ETA: I could go into the whole pattern match and spot processes, but essentially it's like the process used by NIST in NCSTAR 1-9 Vol2 C.1.3 p680 ...

And the title of NCSTAR 1-9 Vol2 C.1.3, p680 is, ta da ... , "Moiré Technique for a Single Marker Point". In other words, a detailed description of their use of Moiré analysis.

Would you like to clarify?
___

And you seriously then suggest that method [Moiré] will provide only a 6ft accuracy ? Hmmm. Definitely confused

My comments in this post clearly stated that NIST got very high accuracy in their horizontal movement precisely because they used Moiré techniques. And that their reduced accuracies (±4 ft) applied to measurements where they could not use Moiré.

Someone is, in fact, "definitely confused", femr. Might not be who you think it is, tho.

[Moiré is] a technique very prone to translation error. There is really no sure-fire way to calibrate the method from vertical pixel location to horizontal real-world motion scales.

That's a silly statement. It's comparable to "a tape measure is prone to errors associated with the tape moving in & out of the housing."

The tape measure works BECAUSE the tape moves in & out of the housing.
The Moiré technique works BECAUSE of translation. The relative translation between the (fixed) vertical column of cells in the camera and the (moving) vertical edge of the building.
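The geometry behind that gain can be shown in a few lines (the slope and column values are illustrative, chosen to roughly match the ~100x figure NIST quote):

```python
# Geometric gain of the moire-style measurement: a near-vertical edge
# crossing a fixed camera pixel column. Values are illustrative only.
EDGE_SLOPE = 0.01   # px horizontal per px vertical (~1 in 100 tilt)
COLUMN_X = 50.0     # the fixed pixel column being watched

def crossing_row(edge_x0):
    """Row at which the edge x(y) = edge_x0 + EDGE_SLOPE*y crosses the column."""
    return (COLUMN_X - edge_x0) / EDGE_SLOPE

dx = 0.1                                        # 0.1 px of horizontal motion
dy = crossing_row(10.0 + dx) - crossing_row(10.0)
# dy ~= -10 px: a 0.1 px horizontal move shows up as ~10 px of vertical
# movement of the crossing point, a gain of 1/EDGE_SLOPE = 100x
```

The nearly-but-not-exactly-parallel geometry is exactly what provides the amplification; a perfectly vertical edge (slope 0) would give no crossing point at all.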

Their results are easily replicable using *8 Lanczos3 filtering and pattern matching (a quick and dirty trace. can be refined further)...
http://femr2.ucoz.com/_ph/3/214635544.png
Other methods have been used to replicate this level of accuracy.

There is no common denominator between these two techniques. None.

NIST proved that their technique worked. Down to an accuracy of <1 inch.

You have asserted that your technique works. To an accuracy that you're claiming is approximately 8 inches (0.7 feet?).

"... easily replicable ..."?

Which is a shame, or a travesty, depending upon how you look at it. The methods I use apply to all traced features. What method did NIST use for the roofline trace ?

Ain't "a shame" in the slightest. Except to people who don't understand why buildings stand or how they fall down. The NIST structural engineers were not burdened with this sort of ignorance.

The ignorance it takes to see "free fall" where none exists. Or to try to discover "missing jolts" during collapse.

Therefore they realized that they had all the data, at the level of precision, that they needed.

And then turned their (limited) resources to other areas of the investigation that actually mattered.

It would have been a "travesty" if they had not done so.

And, once again, you are alluding to some sort of malfeasance on the part of NIST. It is time for you to stand up and defend your persistent innuendo against honorable men: the hundreds of NIST, academic & industry engineers that you keep suggesting committed some sort of fraud.

If you would be so kind as to explain what the hell you think that you are going to uncover that requires this type of analysis.

And, please, put it into a context. I'm not interested in "for x.x seconds, the Northwest roofline fell at free fall acceleration". I am interested in "what the hell does that mean to you?"

Please don't treat this question like all the other significant ones: by ignoring it. Please provide some honest reply.


Tom


PS. I've got a question, btw. What is the direct source of the Dan Rather video that you analyzed. You say that it's 59.9 frames/sec. If I understand the standards correctly, this means that it's being played back at the original field rate, not the frame rate. Which means that the field has been filled somehow. (Line doubling, interpolation, etc.) Do you know what technique was used to fill the odd/even missing lines?
 
the discussion on "911 forum" regarding your kinematic model of the towers.
There was no discussion of engineering about the core model, simply your misinterpretation of its scope. Your utter lack of comprehension and understanding, along with the kind of rudeness you seem to relish here, led to your being temporarily suspended by a moderator.

why do you expect me to answer your inconsequential questions
I expect nothing at all from you. You don't exceed my expectations. Have you completed your analysis yet ?

What do you think about the implications of the motion of the east edge roof line for several seconds before the start of the north facade drop?
I am not, unlike yourself (most humorously), going to begin the process of interpreting the data until I've completed its extraction. I've told you this several times.

Why won't you answer my simple questions about your "truther status"?
This is one of your substantial questions ? LOL. It's a stupid question. Apply whatever brand to me as you please for your own purposes. What I post can stand on its own merits.

Perhaps you could reconcile these two sentences. Both by you. Both in this thread.
Okay.

The *moire* technique applied by NIST is not actually a true moire effect. There are no intersecting lines, only the pixel delineation. It's simply a method they've used to identify horizontal movement by applying a *property* of a moire analysis. The methods I use (*8 upscale with Lanczos3 filtering) achieve the same result without resorting to the dodgy non-linear side-effect of finding a centre-spot in intensity. They both work on the same principle...small sub-pixel movements result in adjacent pixel intensity changes, which can be tracked. NIST have simply chosen to track those intensity changes in a vertical direction, though they are using a non-vertical building feature and a non-linear intensity change metric to define their distance translations, and taking account of neither.

And, yes, I haven't stated a specific pixel->distance scalar as yet. Too soon :)


NIST got very high accuracy in their horizontal movement precisely because they used Moiré techniques. And that their reduced accuracies (±4 ft) applied to measurements where they could not use Moiré.
They should have used a better method for the roofline trace then. The methods I use work in both directions.

Someone is, in fact, "definitely confused", femr.
I don't think confusion comes into it in the slightest Tom. Digging yourself out of a hole, and failing miserably is more like it.

That's a silly statement. It's comparable to "a tape measure is prone to errors associated with the tape moving in & out of the housing."
No. Their hotch-potch *moire* method suffers from non-linear intensity changes, which they did not account for in their translation to real-world units. I can explain in mo(i)re detail if you like. But I suggest you check figure C-7 first though.

There is no common denominator between these two techniques. None.
Incorrect.

NIST proved that their technique worked. Down to an accuracy of <1 inch.
Pretty much. Of course, any additional sources of uncertainty you might suggest also apply to NISTs method, and then some additional factors such as those I've highlighted.

You have asserted that your technique works. To an accuracy that you're claiming is approximately 8 inches (0.7 feet?).
For the VERTICAL roofline movement I've estimated +/- 0.7ft, yes. When I fully quantify the horizontal scaling metric, I'll provide a horizontal accuracy metric.

The graph I included just a post previous shows you that the accuracy of my methods is very comparable to NISTs results in the horizontal direction. I doubt you understand why.

What is the direct source of the Dan Rather video that you analyzed.
HERE


You say that it's 59.9 frames/sec.
Yep.

If I understand the standards correctly
Doubt it, especially with your repeated previous use of the phrase *reinterlaced*.

this means that it's being played back at the original field rate, not the frame rate.
Original framerate, which is 59.94 fps.

Which means that the field has been filled somehow.
No. MPEG-2 video data is interlaced, so contains 2 separate frames in each single frame of the resultant video. Once interlaced, each separate frame is then termed as a field.

(Line doubling, interpolation, etc.)
No.

Do you know what technique was used to fill the odd/even missing lines?
I deinterlace the video as part of its preparation for tracing.

I suggest you read-up on interlaced video. You are clearly confused.
 
femr,

There was no discussion of engineering about the core model, simply your misinterpretation of its scope. Your utter lack of comprehension and understanding, along with the kind of rudeness you seem to relish here, led to your being temporarily suspended by a moderator.

LMAO.

Yeah, it IS a subjective world, ain't it.

The principal fracas that I recall was your building a model of the immense core columns, which were muscled into place by hand by a bunch of union welders, at breakneck speed, without fixturing, with the columns dangling by chains from jump jacks, 80 stories in the sky, and frequently in, shall we say, "breezy" conditions...

... and quoting the location of those columns to, what was it again... 13 significant digits.

And I recall my chiding you for it.

And I recall, you did not reply with the ONLY reasonable reply: "oh yeah, I just didn't round them off. Ignore everything after the 3rd digit".

Instead, from you and that (what is the term for a group of truthers?) Hornswaggle of truthers, I got a chorus of objections about how little I understand about modeling...!!?

Sure thing, femr. That was MY lack of comprehension & understanding.

Sure thing, femr. You understand about "precision in measurement".

Sure thing, femr. The hostile response to my posting had absolutely nothing to do with the fact that there were 20 active truthers posting there. And, at that time, just me speaking for the other side.

Sure thing.

Have you completed your analysis yet ?

You'll know when it's done. I'll post it here.

I am not, unlike yourself (most humorously), going to begin the process of interpreting the data until I've completed its extraction. I've told you this several times.

I asked a simple, direct question.

I'm not asking you for a final answer.

IF the northeast corner is moving at least 4 full seconds before the collapse initiation began (as your preliminary data SUGGESTS), then what does this say about the final collapse being either a separate event from the previous collapses (e.g., the east penthouse) versus being part of a continuous process.

Ever heard the term "speculation"? Contrary to the misapprehensions of people who have never participated in the process, all engineering inquiry starts with speculation.

People who participate toss out ideas. No matter how wild. You usually don't believe what you propose to be true.

But the speculation leads you to a sequence of "if this is true, we should then see that..." And this is progress.

In the engineering field, we have a term for folks who refuse to offer their speculation: "cowards".

Not that I'm calling you one, femr. Of course, I'm not. You're not an engineer.

I'm just relating conversations that I've had with lots & lots of engineers over the years as we sat around speculating, while trying to figure something out.

This is one of your substantial questions ? LOL. It's a stupid question. Apply whatever brand to me as you please for your own purposes. What I post can stand on its own merits.

I'm encouraging you to speak your mind plainly.

This is a simple "forest vs the trees" moment, guy. It's been 9 1/2 YEARS. Surely, some tiny little conclusion has percolated its way into your brain.

You'll have to forgive me my engineer's bluntness. But we consider people who won't speak their minds, plainly & publicly, to be wimps.

The *moire* technique applied by NIST is not actually a true moire effect. There are no intersecting lines, only the pixel delineation.

100% wrong. You don't know what you're talking about. It is EXACTLY a Moire effect.

Not "approximately". Not "pseudo-".

It's simply a method they've used to identify horizontal movement by applying a *property* of a moire analysis.

Yep, the properties being the spatial frequency of 3 columns of pixels in the camera and the nearly (but not exactly) parallel edge of the building.

Clearly, you didn't understand what NIST really said.

The methods I use (*8 upscale with Lanczos3 filtering) achieve the same result without resorting to the dodgy non-linear side-effect of finding a centre-spot in intensity. They both work on the same principle...small sub-pixel movements result in adjacent pixel intensity changes, which can be tracked. NIST have simply chosen to track those intensity changes in a vertical direction, though they are using a non-vertical building feature and a non-linear intensity change metric to define their distance translations, and taking account of neither.

LMAO...

What a word salad.

You have absolutely no idea what NIST did here, do you?

Their technique & yours have nothing in common. Here's the proof:

Your process is symmetric for motions in all directions. Moire has an amplification of a factor of 100x for motion in the horizontal direction & no amplification (gain = 1) in the vertical.

Your technique needs to see the pixel that you are tracking. Moire doesn't.

Your technique works for any size, shape, geometry of building. Moire requires two nearly parallel features, one with a spatial or temporal frequency, and works best when the two are nearly, but not exactly, parallel. In essence, Moire works because of interference fringing between two spatial frequencies (or one frequency & one fixed reference).

You'd best read up on it. NIST explains it clearly.

They should have used a better method for the roofline trace then. The methods I use work in both directions.

Your method works in all directions. Theirs doesn't. Proof that you're contradicting your own comment that "they are based on the same principle."

I don't think confusion comes into it in the slightest Tom. Digging yourself out of a hole, and failing miserably is more like it.

LMAO. Sure thing, femr. Why don't you go ask one of your truther buddies to explain Moire (& NIST's use of it) to you.

No. Their hotch-potch *moire* method suffers from non-linear intensity changes, which they did not account for in their translation to real-world units. I can explain in mo(i)re detail if you like. But I suggest you check figure C-7 first though.

Friggin' priceless.

Sure thing, femr. Why don't YOU explain Moire to all of us.

The graph I included just a post previous shows you that the accuracy of my methods is very comparable to NISTs results in the horizontal direction. I doubt you understand why.

"... very comparable to..."

Silly me. I'd simply say that "NIST has this


Dead link. But I'm not looking for the proximal source. I'm looking for the pedigree back to the original recording.


tfk said:
You say that it's 59.9 frames/sec.

Yep. Original framerate, which is 59.94 fps.

Hmmm, broadcast video in the US. NTSC-M.

Every NTSC-M spec that I've seen says 29.97 FRAMES per second, and 59.94 FIELDS per second.

I'll await your, you know, "proof" for how the original frame rate was 59.94 frames/sec. And exactly how it got that way.

... your repeated previous use of the phrase *reinterlaced*.

Yeah, we've been down this road before.

When you've got little else, I guess you can fall back on vernacular.

Ahh, vernacular.

We've talked about this before. I know what the process is, how it works & why they do it.

Put it this way, femr. You use the term "loo". And I don't claim that you know nothing about indoor plumbing.

No. MPEG-2 video data is interlaced, so contains 2 separate frames in each single frame of the resultant video. Once interlaced, each separate frame is then termed as a field.

Lemme see if I follow your claims...

"A frame is really two frames. Except a frame is broken into two frames, it becomes a field."

Ahhhh, I don't think so.

I think each frame is made up of two fields. I think a frame is not two frames. And two frames is not a frame.

And I think that there are times when a 30 fps frame is broken up into two 60 field per second fields. And then, for a variety of reasons, someone might be interested in converting each of those fields into a complete frame at 60 fps.

So they do so in one of two ways: either duplicating each line into the line below it, in order to maintain aspect ratio. Or by using interpolation algorithms using the previous & succeeding fields.

In the first case (line duplication), each frame has no more info in it than the field did. In other words, half as much info as a typical frame.

In the second case (interpolated fill), the frame will have as much info as a normal frame.

Show me, with something other than "femr says", that this is not so with this video.

I deinterlace the video as part of it's preparation for tracing.

YOU "deinterlace the videos"??

And here, just a moment ago, I could have SWORN that you said that the raw video was 60 full frames per seconds.

Now, why would one have to deinterlace 60 fps full frame video?

Where would one even stick the new lines.

Now, if one had 60 FIELD per second video, then deinterlacing to achieve 30 FRAME per second video makes sense.

But you just adamantly asserted that was not the case.

... but then you said you deinterlaced them...?

'Tis a mystery.
 
The principle fracas that I recall was you're building a model of the immense core columns, which were muscled into place by hand by a bunch of union welders, at breakneck speed, without fixturing, with the columns dangling by chains from jump jacks, 80 stories in the sky, and frequently in, shall we say, "breezy" conditions...
The model is a visualisation aid, not a simulation, built such that dimensions were *perfect* (as far as available data goes). No matter how many times you were told this, your misinterpretation of its purpose did not improve...

199544952.gif

109569697.gif

392239895.gif


My god man, the real floors were neither transparent NOR blue ! ROFL.

... and quoting the location of those columns to, what was it again... 13 significant digits.
No, 4dp. 3D Studio Max finest object position.

And I recall my chiding you for it.
Yes, over and over again, with increasing amounts of venom, no matter how many times you were told *it's a visualisation aid, not a simulation*.

You understand about "precision in measurement".
It's a visualisation aid Tom. The hole where the aircraft impacted is, er, missing. How inaccurate ! :)

I'm not asking you for a final answer.
You're not going to get one until I've finished extracting and analysing the data. Second element of that sentence could take a long time, so don't go blue.

It is EXACTLY a Moire effect.
I qualified my statement (a true moire effect). It is not interference between two features which exist on the actual image. It is the use of a property of the moire effect applied to a feature on the image, and the pixel delineation, which is not part of the image. The resultant moire is the result of aliasing.

You have absolutely no idea what NIST did here, do you?
Incorrect. If I could be bothered, I'd replicate their method exactly. Perhaps I will when I have time.

Their technique & yours have nothing in common.
Incorrect. They both trace feature movement by interpretation of adjacent pixel intensity changes. Theirs by searching for a position along a line which matches an initial pattern and interpreting that distance, mine by creating that distance at the local trace point by upscaling and interpolation.

You can whinge until the cows come home, but the bottom line is that I've replicated the accuracy of the NIST movement curve using my method. The same method is used upon all the other traces. Similar levels of accuracy apply...
89078455.png

...and note the *wobbles*...
wsvsv.gif

...actually exist on the footage. Too much smoothing going on in the NIST data, methinks.

What method did NIST use to trace the roofline movements ?

Every NTSC-M spec that I've seen says 29.97 FRAMES per second, and 59.94 FIELDS per second.
You are confused between pre- and post-interlace terms.

Before interlacing, you have two frames.

They are then interlaced, resulting in one interlaced frame.

Depending upon the original image format, either alternate pixel rows on the two initial frames are discarded, or the original frames are captured with half the output aspect ratio in the first place.

The two original frames now combined into the single interlaced frame are then termed as fields of the interlaced frame.

Simple.
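The weave/split described above can be sketched in NumPy (toy 4x4 frames; a real decoder obviously does far more than this):

```python
import numpy as np

def interlace(frame_a, frame_b):
    """Weave two source frames into one interlaced frame:
    even rows from the earlier frame, odd rows from the later one."""
    out = np.empty_like(frame_a)
    out[0::2] = frame_a[0::2]
    out[1::2] = frame_b[1::2]
    return out

def deinterlace(frame):
    """Split an interlaced frame back into its two fields
    (each half-height, from different instants in time)."""
    return frame[0::2], frame[1::2]

# Two distinct 4x4 "frames" captured 1/59.94 s apart
a = np.zeros((4, 4))
b = np.ones((4, 4))
woven = interlace(a, b)
top_field, bottom_field = deinterlace(woven)
# A 29.97 fps interlaced stream therefore carries 59.94 fields/s,
# each field being a half-height snapshot from a distinct instant.
```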

I know what the process is, how it works & why they do it.
Clearly not.

each frame has no more info in it than the field did.
Each frame (called a field while still part of the interlaced frame) is half-height when extracted from the interlaced frame.

In other words, half as much info as a typical frame.
No. The interlaced frame contained two separate frames at all times. The amount of *info* does not change.

'Tis a mystery.
Stick to confusing 'stacked tolerances' with noise in real world data.
 
YOU "deinterlace the videos"??
Yes...

Here is a single frame of the mpeg-2 video file I pointed you to (the link works fine)...

155556393.png


It is an interlaced frame.

The mpeg video has a framerate of 29.97fps.

The frame contains two images from separate points in time, interlaced.

Deinterlacing the interlaced frame provides you with the two separate fields of the interlaced frame, as two separate frames...

535323910.png

...and...
178911251.png


They are different images, from separate points in time...
897727190.gif


Applying this process to an entire interlaced video results in a 59.94fps video.

YOU "deinterlace the videos"??
Yes.
 
Returning for a moment to your interpretation of the NIST sub-pixel feature movement method...

Moire has an amplification of a factor of 100x for motion in the horizontal direction & no amplification (gain = 1) in the vertical.
Incorrect. It would appear that your viewpoint is entirely based upon the NIST example and nothing else.

The 100x factor you mention is a case-specific metric derived from the NIST estimation of *100 +/- 10 pixels (vertical marker motion) for each horizontal pixel*, it is not a constant amplification factor of *moire*.

I also requested that you check figure C-7...
651895685.png

...in order to see that the change of intensity over vertical distance is not linear.

Unless that non-linear relationship is taken into account, subsequent conversion to horizontal offset will suffer accuracy drift.

Also bear in mind that NIST converted the 24bit RGB image data to 8bit greyscale before they began, thus reducing available image data by two thirds. Poor.
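To illustrate the non-linearity point (using a cosine roll-off as a stand-in for the measured curve in figure C-7; the shape is illustrative only, not NIST's data):

```python
import numpy as np

# True sub-pixel edge positions across one pixel, 0..1
s = np.linspace(0.0, 1.0, 101)

# Illustrative non-linear intensity profile (cosine roll-off)
intensity = 0.5 * (1.0 - np.cos(np.pi * s))

# Linear assumption: read normalised intensity as position directly
s_linear = intensity

drift = s_linear - s
# drift is zero only at the ends of the ramp; mid-ramp it reaches
# ~0.1 px, a systematic (not random) error in converted positions
```

If the intensity-to-position conversion is not calibrated against the actual profile, this drift carries straight through to the real-world distance figures.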

Your technique needs to see the pixel that you are tracking. Moire doesn't.
Irrelevant for the example in question, as *the (NIST) marker point that was chosen was the only one that did not fall off either the top or bottom of the northwest edge during the duration of the video*.

The only point they could trace ? Too funny. Great method. Brings clarity to your comment tho :)

Your technique works for any size, shape, geometry of building.
Correct.

Your method works in all directions. Theirs doesn't.
Correct. (ETA: There's no reason why a moire-related movement trace has to be in a particular direction. It's the limitations of the implementation of the method in this specific example which results in the statement being correct)

Proof that you're contradicting your own comment that "they are based on the same principle."
Incorrect. Both methods enable sub-pixel movement detection using variation of pixel intensity.

Tell you what, why don't you get your hands dirty with this example...
CLICK

I can easily perform a SynthEyes trace, and a NIST-moire trace on the horizontal motion, but it'd be interesting to see whether you get busy or simply argue for the sake of it.
 
Did anyone model all the errors associated with video? Who is the video expert, and what is the source for the claims made? References? Sources? What a waste when you know there is no CD scenario. 911 truth is not an intellectual movement; they are a movement of lies based on ignorance. Did anyone ask Dr Kabrisky to help? Love the hack job on video. Cool, trying to back into CD with hand-waving BS.
 
Tell you what, why don't you get your hands dirty with this example...
CLICK

I can easily perform a SynthEyes trace, and a NIST-moire trace on the horizontal motion, but it'd be interesting to see whether you get busy or simply argue for the sake of it.

Will hold off posting the full results, Tom, as I do want to see if you are going to perform the NIST-moire method yourself.

As the test video is *perfect*, the NIST-moire method returns slightly better results, though of course the method can (according to NIST) only be used on one single point on the video, rendering it a bit useless.

The R2 values for a linear fit are...
SynthEyes - 0.9959
NIST-Moire - 0.9998
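For reference, an R² value for a linear fit can be computed directly from the raw (frame, position) samples. A minimal sketch in plain Python, using synthetic data rather than either trace:

```python
# Sketch: computing the R^2 of a least-squares linear fit directly from
# (frame, position) samples, in plain Python. The trace here is
# synthetic, purely to show the calculation.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope and intercept.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 = 1 - SS_res / SS_tot
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

frames = [0, 1, 2, 3, 4]
pos = [0.0, 1.9, 4.1, 6.0, 8.05]          # roughly linear synthetic trace
print(round(r_squared(frames, pos), 4))   # → 0.9996
```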

Again, what method did NIST use for their roofline trace ?
 
NIST Camera #3 Trace Data - RAW

RAW Trace Data for the NIST Camera #3 viewpoint...

Download

370825048.jpg


Scaling metrics, scaled data, derived velocity and acceleration data and basic error analysis to follow (no rush).

The linked archive contains raw trace data for the following features:

a) NW Corner
b) NE Corner
c) Louver Corners (black box on the facade - for COG)
d) East Penthouse Corners
e) Static Points

Pixel locations are *2 from the following source:
Download

Enjoy.
 
Someone asked which corner of WTC 7 dropped first...

736613183.png


558511393.png


Hmm.

Dropped first ?

NW Corner movements are larger than the NE corner movements (especially horizontally) which may be an effect of camera perspective, as the NW corner is closer to the camera.

I'd say NW corner moves first.

Any other interpretation ?

(These are VERY zoomed in btw)
 
So what is the upshot here then?

That the N side of the building, as seen above the lowest floors visible in the footage, began to sink slowly then built up in a few seconds to something near free fall?

Seems to me I asked this before, but how close to free fall?
Seems to me I also mentioned before that in order to get actual free fall for a distance of 100 feet, every column on every floor (not just one floor) would have to be simultaneously severed. One thing that is quite obvious in these videos is that no such explosive severing of several dozen columns actually took place.
It is not possible to time such severing of columns with anything other than explosive means, yet windows are not shattering or being blown outwards; such would have been visible (not to mention commented upon by those with lower views) in the lowest floors in the videos.
If femr's enhancements are indeed valid, then I would also expect to see the building shuddering as these dozens of explosives rip through lower columns.

Then there is the fact that the collapse of the north facade started so much slower. Are we now to assume that core columns were blown first, thus beginning the collapse, and THEN the columns from core to N wall were blown, thus initiating free fall?

How very consequential that NIST's analysis states that core columns were failing due to a horizontal progression at the core first, and that the rest of the structure then could not support itself.

Seems we are back to square one with the need to show either that structural steel has certain properties (which NIST and the entire engineering community use) that would lead to such a collapse (oh... right... that has been done),
OR that some type of explosive was used to simultaneously sever dozens of columns after the core columns were severed. Of course, this would require some research into the existence and properties of such explosives, and a treatment showing that they could, at the very least, do what the TM claims was done.

So far, nada from the TM. I see Chris 7 on other forums touting a site that comments on the use of nano-ground compounds in EXPLOSIVES to enhance the temperature and velocity of EXPLOSIVES.
The fact is they would still be EXPLOSIVES, and would have much the same or a louder bang. It would still require the same amount of work be done on the columns.
However, no one in the TM has gone beyond this and actually produced an explosion utilising such technology, or demonstrated that it would do (silently) what they claim was done.

After that is the tricky problem of getting this material into place. Well, it was not the NYFD that did it, nor was this a spur-of-the-moment idea when a supposed 3rd plane did not hit Manhattan. After all, if one is employing arcane explosives, one would expect it to have been part of a pre-arranged plot.

Oops, back to the vast, overly complex and overly complicated conspiracy............
 
JD,

The data that he inputs is pixel position vs. frame. IIRC, he picks data from every (3rd? 5th?) frame. (Count the number of data points in his set, look at the real start vs. stop time interval. Compare the actual number of data points to 30 points per second if he used all of them.)
Hahahaha..... Didn't I cover, the last time someone asked about this type of data processing, that you do not do this? It's a completely wrong way of doing image processing.
I'll await your uncertainty analysis with interest.
Do you have the camera that took those pictures? If not then your data is crap. End of story.
 
So what is the upshot here then?
Very early days on analysis of the trace data (still raw data), but...

That the N side of the building, as seen above the lowest floors visible in the footage, began to sink slowly then built up in a few seconds to something near free fall?
No. The raw vertical trace (with reference to the horizontal trace and viewing the video) suggests that early movement was *flexing* of the building around its vertical axis. The *release* of both NE and NW corners appears very nearly simultaneous, with no discernible *slow sink*. The pre-release oscillation of the corners on the vertical trace appears to be a result of the horizontal *flexing*, as it returns to *zero* just prior to release.

seems to me I asked this before but how close to free fall?
All traces (including NIST's) include elements which appear to exceed 32.2 ft/s². Velocity and acceleration derivatives of this data will be provided when available. The louver traces (the black box on the facade) will enable velocity and acceleration data from the center of the facade, and are expected to be slightly slower than those of the previously presented NW corner (which exceeded freefall).
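A minimal sketch of how velocity and acceleration can be derived from a position trace by central differences, using ideal noiseless freefall data at 30 fps (a real trace is noisy and would need smoothing before differentiating):

```python
# Sketch: velocity and acceleration derived from a position trace by
# central finite differences, then compared with freefall (32.2 ft/s^2).
# The trace below is ideal synthetic freefall data, not the WTC7 trace;
# real traces are noisy and need smoothing before differentiating.
G = 32.2                    # ft/s^2
FPS = 30.0                  # typical video frame rate
dt = 1.0 / FPS

t = [i * dt for i in range(10)]
drop = [0.5 * G * ti ** 2 for ti in t]          # ideal drop distance (ft)

def central_diff(values, step):
    """Central-difference derivative; endpoints are dropped."""
    return [(values[i + 1] - values[i - 1]) / (2.0 * step)
            for i in range(1, len(values) - 1)]

vel = central_diff(drop, dt)    # ft/s
acc = central_diff(vel, dt)     # ft/s^2

# For noiseless parabolic data the recovered acceleration equals G,
# so any apparent excess over G in a real trace reflects measurement error.
assert all(abs(a - G) < 1e-6 for a in acc)
```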

Seems to me I also mentioned before that in order to get actual free fall for a distance of 100 feet, every column on every floor (not just one floor) would have to be simultaneously severed.
*Freefall* does not persist for the entire descent, so your statement would need to address the *every floor* element.

One thing that is quite obvious in these videos is that no such explosive severing of several dozen columns actually took place.
The lower levels of the building cannot be seen.

It is not possible to time such severing of columns with anything other than explosive means
Are you suggesting that rapid transition to freefall is not possible without *CD* ?

yet windows are not shattering or being blown outwards since such would have been visible
There are two large regions of window shattering on the upper levels, and the lower levels are not visible.

If femr's enhancements are indeed valid
The traces are performed as accurately as possible. Raw data and video are included, so it can all be checked.

then I would also expect to see the building shuddering
There is a lot of building movement prior to NW and NE corner release, but the most useful additional thing for me would be a copy of the original 7 minute version of the video that NIST have.

Anyone from NIST out there ? Can I have a copy please.
 
The traces are performed as accurately as possible. Raw data and video are included, so it can all be checked.
You need the damn camera to have any accuracy at all. Otherwise you are playing a deluded little game. And unlike you, I've done it the right way. It takes time. It takes effort. And you need the freaking camera!!!!!!
 
Hahahaha..... Didn't I cover the last time when someone asked about this type of data processing that you do not do this. Its a completely wrong way of doing image processing.
The quote you were responding to was about someone else's data.

Do you have the camera that took those pictures? If not then your data is crap. End of story.
NIST didn't have the camera. Are you suggesting all of their data is crap ?

Any image distortion resulting from camera optics will be consistent across frames. The graphs presented are all relative changes between frames.
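The cancellation argument can be sketched numerically. A fixed distortion offset at the feature's location shifts every frame identically, so it drops out of relative measurements (the offset and track below are invented; this holds while the feature stays near the same part of the image):

```python
# Sketch of the cancellation argument: a fixed lens-distortion offset at
# the feature's location shifts its position identically in every frame,
# so it drops out of frame-to-frame (relative) measurements. This holds
# while the feature stays near the same part of the image; the offset
# and track below are invented for illustration.
true_track = [100.0, 100.4, 101.1, 102.3]    # true pixel positions
lens_offset = 2.7                            # constant local distortion (px)

observed = [p + lens_offset for p in true_track]

# Relative motion from the observed track...
rel_observed = [p - observed[0] for p in observed]
# ...matches the true relative motion.
rel_true = [p - true_track[0] for p in true_track]
assert all(abs(a - b) < 1e-9 for a, b in zip(rel_observed, rel_true))
```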

Don't use the data or any results derived from it if you are not satisfied with its validity.

Bye :)
 
NW Corner movements are larger than the NE corner movements (especially horizontally) which may be an effect of camera perspective, as the NW corner is closer to the camera.
Based on the image you've posted in post #271, the NE corner's distances are somewhere between 2.4% and 9.4% less than the NW corner's. Sorry that's not much precision; it could probably be improved, but I don't have the tools. I know how to evaluate it, but when translating it to actual pixels from the image I decided to err on the safe side.

This is the method I used to find out. The idea is pretty simple actually.

Since the windows are assumed to be parallel, the intersections of the horizontal rows of windows (originally horizontal, but perspective-deformed in the images) with the facade borders mark certain points. If we do that with two arbitrary rows of windows, we will have two vertical line segments, one for each facade border, each delimited by two points, one per window row.

The facade borders are almost vertical, so we can ignore the difference in distance from the camera between the highest and lowest window rows analyzed. Since the segments are the same length in the real world, the quotient of the two vertical segments' lengths gives the factor to apply when converting a distance between the left and right sides.

It's possible to measure instead of ignore the distance from the camera vertically, to obtain a whole distance map, but that gets messy.

If you are able to find these lines with sub-pixel precision, you will no doubt do better than me. I used whole pixels throughout, trying to make sure that the tilts of the lines I drew looked exaggerated enough that there was no doubt they favored the highest (resp. lowest) part of the interval.

That corrective factor, while not perfect because of the lack of a whole distance map, can at least help in understanding whether some effects are due to camera distance or actual movements.
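A minimal numeric sketch of this corrective-factor calculation, with invented pixel coordinates:

```python
# Numeric sketch of the corrective factor described above. Two window
# rows cross each facade edge, giving one vertical pixel segment per
# edge; the segments span the same real-world height, so the ratio of
# their pixel lengths rescales measurements from one side to the other.
# All coordinates are invented for illustration.
nw_top, nw_bottom = 120.0, 310.0     # row intersections on the NW edge (px)
ne_top, ne_bottom = 131.0, 309.0     # row intersections on the NE edge (px)

nw_len = nw_bottom - nw_top          # 190 px
ne_len = ne_bottom - ne_top          # 178 px

factor = nw_len / ne_len             # NE-to-NW pixel-scale conversion
ne_motion_px = 3.2                   # an NE-side displacement (px)
ne_motion_nw_scale = ne_motion_px * factor

print(round(factor, 3))              # → 1.067 (NE appears ~6.7% smaller)
```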

Sorry if this explanation sounds confusing. I will post a graph and explain further if needed.
 
Based on the image you've posted in post #271, the NE corner's distances are somewhere between 2.4% and 9.4% less than the NW corner's ones.
That's a small image. Here's a bigger one...
370825048.png


Distances ? The camera is quite a distance away, so distance-to-camera differences are slight. It's more the skewed view that would have an effect.

Sorry if that's not much precision; it may probably be enhanced but I don't have the tools. I know how to evaluate it but when translating it to actual pixels from the image I decided to err on the safe side.
Have spent far too long (very sad) working out various metrics for the view, and will post here with the data in real-world units when I can get around to finishing it.

I will post a graph and explain further if needed.
Cool.

I use various methods. The vertical shearing along the NW edge was done by recording the position of each window corner...
371901916.jpg

271002393.jpg

...and some metrics derived.

Similar has been done for the NE edge, along with measurement of the variation in building width at each *floor*.

Final test will be a rotoscope 3D fit from correct relative camera location and correctly scaled building model within 3DS max, but a bit later.
 
Originally Posted by jaydeehess
So what is the upshot here then?
Very early days on analysis of the trace data (still raw data), but...


Quote:
That the N side of the building, as seen above the lowest floors visible in the footage, began to sink slowly then built up in a few seconds to something near free fall?
No. The raw vertical trace (with reference to the horizontal trace and viewing the video) suggests that early movement was *flexing* of the building around its vertical axis. The *release* of both NE and NW corners appears very nearly simultaneous, with no discernible *slow sink*. The pre-release oscillation of the corners on the vertical trace appears to be a result of the horizontal *flexing*, as it returns to *zero* just prior to release.
Suggesting! So it was either vertical and horizontal flexing, or it was a slower collapse period.
In other words, NIST may well be right. This would be the visible result of the horizontal progression of the core failures.
OR
It is the result of core failures due to as yet unproven explosive severing of those structural elements.


Quote:
seems to me I asked this before but how close to free fall?
All traces (including NIST's) include elements which appear to exceed 32.2 ft/s². Velocity and acceleration derivatives of this data will be provided when available. The louver traces (the black box on the facade) will enable velocity and acceleration data from the center of the facade, and are expected to be slightly slower than those of the previously presented NW corner (which exceeded freefall).

In the words of a "MeatLoaf" song

"STOP RIGHT THERE! I gotta know right now before we go any further....."

As I pointed out elsewhere, any representation of faster-than-freefall acceleration indicates that there is, at the absolute very least, an error as large as the difference between the local acceleration due to gravity and the calculated acceleration. Unless some TM member can point out the machinations in effect that allow faster-than-gravitational collapse.
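The minimum-error bound in that argument is simple arithmetic. A sketch with an invented measured value:

```python
# Arithmetic behind the minimum-error claim: on the argument above, a
# gravity-only descent cannot exceed local g, so any reported
# acceleration above 32.2 ft/s^2 implies a measurement error at least
# as large as the excess. The measured value below is invented.
G = 32.2          # local gravitational acceleration (ft/s^2)
measured = 33.5   # hypothetical trace-derived acceleration (ft/s^2)

min_error = max(0.0, measured - G)
print(round(min_error, 1))   # → 1.3 ft/s^2 lower bound on the error
```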


Quote:
Seems to me I also mentioned before that in order to get actual free fall for a distance of 100 feet, every column on every floor (not just one floor) would have to be simultaneously severed.
*Freefall* does not persist for the entire descent, so your statement would need to address the *every floor* element.

As pointed out in the NIST report, yes, the supposed free fall lasts but 2.25 seconds (+/-) and then slows as the upper-level facade impacts the debris piling up at ground level. I was of course referring only to the supposed free fall interval; my bad for not saying so.


Quote:
One thing that is quite obvious in these videos is that no such explosive severing of several dozen columns actually took place.
The lower levels of the building cannot be seen.

No, but there is no indication of explosives seen in the lowest levels that are visible. These would be the closest to such explosives, yet their windows remain intact while we do see upper-level windows breaking.

Quote:
It is not possible to time such severing of columns with anything other than explosive means
Are you suggesting that rapid transition to freefall is not possible without *CD* ?

NO, that would be the TM take on this, not mine. I am suggesting that a transition to very near free fall can indeed take place without explosives. What is your opinion on this detail?


Quote:
yet windows are not shattering or being blown outwards since such would have been visible
There are two large regions of window shattering on the upper levels, and the lower levels are not visible.

As said above, but nothing nearer the supposed explosives. You state that the building may well have been twisting, and that this is most visible at the top floors. Given that they are at the ends of the twisting columns, they would be moving the furthest, and thus the window frames near the top of the building would be undergoing the greatest stresses due to such twisting. So your own interpretation of this data suggests that this twisting, rather than explosives at or below the 12th floor, is responsible for these windows shattering.

Quote:
If femr's enhancements are indeed valid
The traces are performed as accurately as possible. Raw data and video are included, so it can all be checked.

But you get faster than free fall, and thus there is that absolute minimum error margin, which is patently obvious.


Quote:
then I would also expect to see the building shuddering
There is a lot of building movement prior to NW and NE corner release, but the most useful additional thing for me would be a copy of the original 7 minute version of the video that NIST have.

Anyone from NIST out there ? Can I have a copy please.

I believe that all materials can be had from NIST for a fee to cover copyright. Given that lawyers, engineers and architects make a pretty decent living, and that Lf911T and AEf911T (oh, not to leave out Pf911T) would then have a prime membership from which to ask for donations to raise such funds, I expect that they may be able to assist you in this if you cannot do it on your own.

I mean, after all, my cottage is on property that is owned jointly by myself and ten other cottagers (each with our own cabins) and we divide up common costs such as dock and road maintenance. It recently cost me $175 to have dangerously leaning trees removed. Not a large sum, but of course that does translate to $1,750 for the community. The above groups all claim large memberships, much larger than the ten of us at the cottage. Surely they can raise a few bucks each... no?

Remember, it is the TM that is trying to do a forensic investigation on this minutia. It is the TM that is hearing hoofbeats and expecting (wishing) zebras (or perhaps unicorns would be more apt in some of the more 'out there' TM scenarios) to be the cause. NIST made its case that the hoofbeats were from horses. Until someone shows that zebras or unicorns actually existed in Manhattan at that time, I'll stick with horses.

(Now femr, you seem to be a good enough thinker to know that the last bit there is metaphor, but for the sake of those on whom metaphor or analogy is lost, I was not actually talking about hoofed animals.)
 
Last edited: