Hey WD,
Wow. You're harsh.
I like that...
Let's identify the video data in question. It took me a while to figure out that you're talking about video data for WTC7, and not the data femr2 provided for WTC1 in post 94.
Sorry, I should have ID'd the source. It came from post #176. Since it came from femr in a reply discussing WTC7, I assumed it was WTC7 data, although femr didn't say so explicitly when posting it.
Correct, femr?
I vote "D": Every set of physical measurements contains noise. Those who wish to interpret physical data bear the responsibility for extracting signal from noise. Alternative "A" comes from over-interpreting the noise, and the blame for that would rest with tfk. Alternative "C" is blaming femr2 for the noise, but I'm not sure that's fair; the noise (or most of it) could already have been present in the data before femr2 touched anything.
If it please the persecution... ;-)
Granted, there is a fair amount of "my preliminary beliefs" in what I wrote. That's because this is my first pass at femr's data, and I have no idea of the details of his video or of how he generated his numbers.
Nonetheless, I think I've made it pretty clear that I believe there is a lot of noise in the data, that the noise does not reflect real motions of the building, and that, as a direct result, I do not (at this time) accept the conclusions of this analysis. That goes for the raw data AND for the empirical curve derived from a heavily "smoothed" version of it.
(You can see in the last post's drop vs. time graph that the empirical equation generated for the first 1.3 seconds is a darn good "smoothed" version of the raw data. And yet it still produced an acceleration significantly greater than 1G for the first 1.3 seconds. I think I made it clear that I do not believe this conclusion, which constitutes a rejection of the whole raw data set.)
I do not know femr's techniques for producing these numbers. I think that I've also made it clear that I do not believe that he can produce anywhere near the accuracy that he claims.
He seems to be saying things that are self-contradictory:
That his data reveals real high-frequency transients, yet it needs to be smoothed because of noise.
Can't have it both ways.
Femr suggested that the noise is being introduced by my technique. That's not true. Granted, I haven't taken steps to eliminate noise, but I have not introduced any. As I mentioned, I've run validation runs with artificial data to verify this.
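To make that concrete, here is a minimal sketch of the sort of validation run I mean. It is not my actual tooling, and the frame interval and fit order below are placeholder choices; the point is simply that an ideal 1G drop pushed through the same fit-and-differentiate steps comes back out clean:

```python
import numpy as np

# Synthetic, noise-free "drop" trace: y = 0.5*g*t^2, sampled like video frames.
g  = 9.81                      # m/s^2
dt = 1.0 / 30.0                # placeholder frame interval (s)
t  = np.arange(0.0, 1.5, dt)   # time (s)
y  = 0.5 * g * t**2            # drop (m)

# Same steps as applied to the real data: fit a low-order polynomial,
# then differentiate it twice to recover acceleration.
coeffs = np.polyfit(t, y, 3)                    # cubic; extra terms should be ~0
accel  = np.polyval(np.polyder(coeffs, 2), t)   # second derivative of the fit

# If the procedure itself introduced noise, this would not be essentially zero.
print("max |recovered accel - g| =", np.max(np.abs(accel - g)))
```

If the fit-and-differentiate steps were adding noise of their own, it would show up right there. It doesn't.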
In other words, you are fitting a curve to the data. Your decision to use curve-fitting implies your assumption that the original signal was simple enough to be modelled by piecing together two low-degree polynomials. That assumption is related to the Chandler-MacQueen-Szamboti fallacy I attacked in post 79. I'm disappointed to see you succumb to a similar fallacy.
If you look at the displacement vs time data set (the raw data & the overlain empirical curve) in my previous post, you'll see a pretty good agreement. Once the raw data is low-pass filtered, I believe that the agreement will be even better.
If the agreement is this good, then increasing the polynomial's degree amounts to a bit of gilding the lily, and will likely result in poly constants that are close to zero.
Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?
If so, please post your velocity & acceleration vs. time data (or graphs).
__
Nonetheless, I've already redone the analysis using 5th order (with 6 constants), and the results are not hugely different. I'll be interested to see what happens with smoothed data.
Here's the result of using this higher-order polynomial. (I used it over the entire time span. You can see that it doesn't provide as good a fit at the early times as the previous one. But you can also see that it follows the gross (i.e., low-freq) shape of the raw data pretty darn well.)
You can see that the fit between the empirical curve & raw data is pretty good. And that the empirical curve is a pretty good "smoothed" version of the raw data.
The acceleration is inversely related to the radius of curvature of the red line in the drop curves: a tighter bend means more acceleration. I can see that a better fit (like the lower-order poly in the previous graphs) is possible at the earliest times (t < 0.6 s). But I don't see much leeway for increasing the radius between 0.7 s < t < 1.4 s. And the results say that this amount of curvature in the drop curve results in >1G accel.
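(To be precise about the curvature language, treating the plotted drop curve y(t) purely geometrically:

```latex
a(t) = \ddot{y}(t),
\qquad
R(t) = \frac{\left(1 + \dot{y}(t)^{2}\right)^{3/2}}{\left|\,\ddot{y}(t)\,\right|}
```

so at a given slope, a smaller radius of curvature R means a larger acceleration.)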
It's possible to construct a "1 G" arc for this chart to see if it can be fit to this raw data. Looking at the data, the curvature of the empirical (red line) equation right around 1.4 seconds corresponds to 1G of acceleration.
In order for femr's data to be correct, one would have to be able to overlay that degree of curvature (or less) on all the data points throughout the data set. I do not see how that is going to happen for t < 1 second. No matter how much low-pass filtering one does.
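For anyone who wants to try the overlay themselves, here is a rough sketch of the construction. The trace values, fit order, and anchor time below are placeholders, not femr's numbers:

```python
import numpy as np

# t_data, y_data stand in for the traced drop-vs-time points (s, m).
t_data = np.linspace(0.0, 2.0, 121)
y_data = 0.45 * 9.81 * t_data**2            # made-up trace, a bit under 1 G

g  = 9.81
t0 = 1.4                                    # anchor time for the arc (s)

# Take position and velocity of the empirical (fitted) curve at t0 ...
p  = np.polyfit(t_data, y_data, 5)
y0 = np.polyval(p, t0)
v0 = np.polyval(np.polyder(p, 1), t0)

# ... and build the "1 G arc": same position and velocity at t0,
# but exactly 1 G of downward acceleration.
arc = y0 + v0 * (t_data - t0) + 0.5 * g * (t_data - t0)**2

# If the data's curvature near t0 really exceeds 1 G, the traced drop values
# will pull ahead of (exceed) this arc on either side of t0.
excess = y_data - arc
print("max drop in excess of the 1 G arc:", excess.max(), "m")
```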
Here are the resultant velocity & acceleration curves, again for a 5th order poly with 6 constants:
Again, for t < 1.4 seconds, accel is > 1G.
That's one way; downsampling is another.
Without knowing the origin of the noise, I'd prefer smoothing to downsampling. I can't tell a priori whether your chosen downsampling point is a good one or not.
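For concreteness, here is the distinction I have in mind between the two; the window length, decimation factor, and noise level are all arbitrary illustrative values:

```python
import numpy as np

# A noisy displacement trace sampled at dt (placeholder values throughout).
dt = 1.0 / 30.0
t  = np.arange(0.0, 2.0, dt)
y  = 0.5 * 9.81 * t**2 + np.random.normal(0.0, 0.01, t.size)   # ~1 cm noise

# Smoothing: keep every sample, average away the high-frequency noise
# (at the cost of blurring any real high-frequency motion).
window   = 7                                        # arbitrary odd window
smoothed = np.convolve(y, np.ones(window) / window, mode="same")

# Downsampling: keep only every Nth sample. The per-point noise is unchanged,
# but the larger time step makes the differentiation far less noise-sensitive.
N = 5                                               # arbitrary decimation factor
t_ds, y_ds = t[::N], y[::N]
```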
This is not the first time in history that numerical analysts have been faced with such problems. We do not have to fall back on arguments from personal belief or incredulity.
I thought that I made the basis for my incredulity clear: the wall is far too massive & fragile to exhibit or withstand the acceleration levels that the data implies.
The technical problem is to apply enough smoothing, downsampling, and other techniques to reduce the noise to an acceptable level without throwing away all information about high frequency components. That's a well understood tradeoff, and it's essentially mathematical. We can calculate how much information we're losing, and can state quantitative limits to our knowledge.
I see two problems.
First, I don't believe that smoothing the data is going to significantly reduce the empirically derived acceleration. The low-order polynomial has already essentially done that, and I still came up with > 1G acceleration.
I could be wrong about that. It's happened before. We'll see.
Second, we've got a chicken & egg problem. We're trying to figure out what the acceleration really was. But we're going to smooth the data until the acceleration (maybe) gets "reasonable".
The ultimate conclusion will simply mirror what we deem "reasonable".
We can also use science and engineering to estimate the noise level.
Perhaps femr can, because he has access to his original data and to the details of how he generated the numbers.
For example, we know that downward accelerations greater than 1g are physically implausible. Treating the descent as an initial value problem, we find that limiting the downward acceleration to 1g makes it impossible to match femr2's data at full resolution: Noise in the data forces occasional large upward accelerations in the model, reducing the downward velocity so much that it can't recover in time to match the next sampled position. (Tony Szamboti has made a similar argument, although he usually gets the calculations wrong and refuses to acknowledge corrections unless they further his beliefs.) One way to estimate the noise is to reduce the resolution by smoothing and/or downsampling until the downward-acceleration-limited models begin to match the sampled positions.
I'll be interested to see if smoothing the data can get us from here to there. My current impression is that the answer is "no".
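So that we're talking about the same procedure, here is how I read the check you're describing, sketched crudely. The march below caps the downward acceleration at 1G and then sees whether the model can keep up with the sampled positions; the data and starting velocity are placeholders, and I'm sure your actual setup is more careful:

```python
import numpy as np

G = 9.81  # m/s^2, maximum allowed downward acceleration

def clamped_march(t, y, v0=0.0):
    """March forward from (y[0], v0). At each step, use the acceleration needed
    to hit the next sampled drop value, but never more than 1 G downward.
    Positive 'misses' mean the 1 G-limited model has fallen behind the data."""
    yi, vi = y[0], v0
    misses = []
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        a = 2.0 * (y[k + 1] - yi - vi * h) / h**2   # exact-match acceleration
        a = min(a, G)                               # ... but capped at 1 G down
        yi += vi * h + 0.5 * a * h**2
        vi += a * h
        misses.append(y[k + 1] - yi)
    return np.array(misses)

# Placeholder trace; in practice t, y would be the sampled drop data.
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.5, 1.0 / 30.0)
y = 0.5 * 0.9 * G * t**2 + rng.normal(0.0, 0.005, t.size)

misses = clamped_march(t, y)
print("worst shortfall behind the data:", misses.max(), "m")
```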
I have done that for femr2's WTC1 data, and intend to use the results of that analysis to reinforce the point of my earlier analysis of the Chandler-MacQueen-Szamboti fallacy. For that pedagogical purpose, it hardly matters whether the noise in femr2's data was present in the original video or was added by femr2's processing of that video.
Agreed.
Femr, would you care to add some detailed explanation of your number-generation technique? (Or point to where you've described it previously.)
Most specifically, what do you estimate as your error (in pixels), and what is the pixel-to-real-world scale factor?
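The reason I ask: position noise gets amplified brutally by double differentiation. If acceleration were taken straight from frame-to-frame second differences, then with independent per-point position error σ_x and frame interval Δt (the numbers below are purely illustrative, not femr's):

```latex
a_i \approx \frac{x_{i+1} - 2x_i + x_{i-1}}{\Delta t^{2}}
\quad\Longrightarrow\quad
\sigma_a = \frac{\sqrt{6}\,\sigma_x}{\Delta t^{2}},
\qquad\text{e.g. }\ \sigma_x = 0.05\ \text{m},\ \Delta t = \tfrac{1}{30}\ \text{s}
\ \Longrightarrow\ \sigma_a \approx 110\ \text{m/s}^2 \approx 11\,g
```

Curve fitting and smoothing beat that down, of course, but how far down depends directly on the pixel error and the scale factor, which is why I'd like the actual numbers.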
Tom