Fri 28 Feb, 2014 07:28 am
I have 50 data sets. Each set has three related time series: fast, medium, slow. My end purpose is simple: I want to generate a number that indicates the relative degree of change of the time series at each point. That relative degree of change should range between 0 and 1 for all the time series and all the data sets. The scales of the data sets range from 0.0001 to 100.
To accomplish this, I calculate the differences in a time series, delta(t) = ts(t) - ts(t-1). Now I am trying to calculate an upper limit that about 90% of those deltas fall below. In other words, draw a smooth line over those differences such that only about 10% of the differences exceed it. I will then use that 90% limit as the maximum when normalizing the differences to 0-1. Is this the best way to do this?
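In case it makes my question clearer, here is a rough sketch in Python/NumPy of what I have in mind (the function name and the choice of the 90th percentile are just placeholders for whatever the right approach turns out to be):

import numpy as np

def normalized_change(ts, pct=90):
    """Normalize the point-to-point change of one time series to roughly 0-1.

    ts  : 1-D array of values (one of the fast/medium/slow series)
    pct : percentile used as the upper limit (90 here, so about 10% of
          the absolute deltas will exceed it and get clipped to 1).
    """
    deltas = np.abs(np.diff(ts))          # |ts(t) - ts(t-1)|
    limit = np.percentile(deltas, pct)    # value that ~90% of deltas fall below
    if limit == 0:                        # guard against a completely flat series
        return np.zeros_like(deltas)
    return np.clip(deltas / limit, 0.0, 1.0)

The idea is to compute the limit per series, so the difference in scales across the 50 data sets (0.0001 to 100) would not matter; if the sign of the change matters, the abs() could be dropped and the result clipped to [-1, 1] instead.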
I’ve been working on this for months, mostly with linear programmatic methods, with no success, and trying to get it to work across all 50 data sets is killing me. I’m sure there has to be an elegant mathematical way to do this. I can’t be the first guy in town trying to normalize the relative degree of change of a time series.
Any help or directions for research are greatly appreciated! Obviously my math skills are weak, so examples would be most helpful. Thank you everyone for your time and brain power!