Statistically valid approach for measuring problem resolution

 
 
Reply Sun 7 Oct, 2012 09:40 am
My company measures the resolution of computer problems by dividing them into 4 priority groups - P1 being serious, down to P4 being 'can wait' - and assigning maximum permitted resolution times: P1 - 3 hours, P2 - 6 hours, P3 - 1 day, P4 - 3 days. They produce normal distribution graphs, calculate the standard deviation, compare against six-sigma and measure us accordingly. The problem I see is that resolution time depends on the complexity of the problem to be solved and has nothing to do with its priority. Is it statistically valid to apply normal distribution techniques here? The sample size is relatively small - perhaps 3 to 4 problems per week, with a P1 problem only once every 2 or 3 weeks.

 
JPB
 
Reply Sun 7 Oct, 2012 11:14 am
@AndrewHine,
Sure. They've transformed the ordinal variable "priority" into a continuous variable, "time". Whether there was sufficient data to assume a normal distribution depends on how many data points they used in calculating the sd. With so few data points it would take some time to come up with a reasonable sd for each category.
AndrewHine
 
Reply Sun 7 Oct, 2012 11:29 am
@JPB,
Thanks for your reply. To me it seems pointless to try to find a correlation between the time to solve a problem and its priority, as they are completely independent facts. A P1 could be solved in 5 minutes or take 3 days. The same applies to all the other categories. In the last year we have had about 6 P1, 10 P2, 30 P3 and 40 P4. Shouldn't the graphs be based around the complexity of the problem in order to draw valid conclusions? Otherwise you get a scatter diagram rather than some uniform sample.
JPB
 
Reply Sun 7 Oct, 2012 12:06 pm
@AndrewHine,
Are they making four graphs or one? They shouldn't be looking for correlations -- the correlation has already been applied in determining the transformation. Each priority category was *somehow* assigned an acceptable upper limit of turnaround time. The method of that determination is important. If it was arbitrary, then the graphs over time will eventually yield a proper sd per category. If they're only making one graph, then I suppose they could weight the frequency of each class type into the overall sd.
JPB
 
Reply Sun 7 Oct, 2012 12:08 pm
@AndrewHine,
Quote:
A P1 could be solved in 5 minutes or take 3 days. The same applies to all the other categories.


Then how were the upper limits established? If they weren't established statistically then it's not reasonable or feasible to assume that they'll perform statistically.
AndrewHine
 
Reply Sun 7 Oct, 2012 12:13 pm
@JPB,
There are four graphs, one for each priority class. We get measured on whether we solve problems, particularly P1 and P2, within the specified time. Ultimately our salaries depend on this, which is why I am concerned that we are being measured incorrectly, with no regard to the complexity of the problem. You seem to imply that this is a statistically valid approach, but although a normal distribution can be produced, it seems meaningless to me.
AndrewHine
 
Reply Sun 7 Oct, 2012 12:16 pm
@JPB,
The upper limits, i.e. the 3 hours, 6 hours, etc., are determined arbitrarily by management. They reflect how quickly they want the service back.
JPB
 
Reply Sun 7 Oct, 2012 03:00 pm
@AndrewHine,
AndrewHine wrote:
In the last year we have had about 6 P1, 10 P2, 30 P3 and 40 P4.
There are enough data points in the P3 and P4 categories to plot the turnaround times (TATs) in a histogram and perform an analysis to see if the data are reasonably normally distributed. P1 and P2 are too small in number to have any idea whether they're normal. For normally distributed data you can determine the sd and apply a 6-sigma criterion. If the data aren't normally distributed then you'll have to apply some criterion other than a small-sample sd to determine the expected upper and lower limits.
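A minimal sketch of that check, in Python with scipy and matplotlib (the TAT values here are hypothetical placeholders, not your real records):

Code:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical P3 turnaround times in hours -- substitute the real records.
p3_tat = np.array([20.0, 18.5, 25.0, 22.0, 30.5, 15.0, 24.0, 19.5, 28.0, 21.0,
                   26.5, 17.0, 23.0, 29.0, 16.5, 27.0, 20.5, 22.5, 25.5, 18.0,
                   31.0, 14.5, 24.5, 26.0, 19.0, 23.5, 28.5, 21.5, 25.0, 22.0])

# Shapiro-Wilk tests the null hypothesis that the sample came from a
# normal distribution; a small p-value is evidence against normality.
w_stat, p_value = stats.shapiro(p3_tat)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")

# Histogram for a visual check of the shape.
plt.hist(p3_tat, bins=8, edgecolor="black")
plt.xlabel("Turnaround time (hours)")
plt.ylabel("Frequency")
plt.title("P3 turnaround times (hypothetical data)")
plt.show()

If the p-value is small (say below 0.05), the normality assumption -- and everything built on it -- is suspect.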


AndrewHine wrote:

The upper limits, i.e. the 3 hours, 6 hours, etc., are determined arbitrarily by management. They reflect how quickly they want the service back.


This is a problem. It's also fairly common. The management team has established an upper limit based on criteria of their own choosing. Perhaps the upper limits are necessary for reasons beyond historical performance (customer satisfaction, workload, etc.). If the data aren't normal then using a 6-sigma approach is bogus for purposes that assume normally distributed data (such as being within +/- 3 sd of the mean about 99.7% of the time). If they ARE normal then you can use a 6-sigma approach for this, so long as you aren't expected to perform only in the lower half of the range (between mean - 3 sd and the mean). It sounds like they want the TAT to be "not longer than" x hours depending on the severity of the problem. They're entitled to do that (it's their company) but I'd be wary of being evaluated against it without fully understanding the distribution of the data.
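To make that concrete, here's a sketch (hypothetical numbers again) comparing an imposed limit with the process's natural mean + 3 sd upper limit. When the imposed limit is the tighter of the two, some misses are statistically expected even at normal performance:

Code:
import numpy as np

# Hypothetical P3 turnaround times in hours.
p3_tat = np.array([20.0, 18.5, 25.0, 22.0, 30.5, 15.0, 24.0, 19.5, 28.0, 21.0])

management_limit = 24.0  # P3 limit from the original post: 1 day

mean = p3_tat.mean()
sd = p3_tat.std(ddof=1)        # sample standard deviation
natural_ucl = mean + 3 * sd    # ~99.87% of a normal population falls below this

print(f"mean = {mean:.1f} h, sd = {sd:.1f} h, mean + 3 sd = {natural_ucl:.1f} h")
if management_limit < natural_ucl:
    print("The imposed limit is tighter than the process's own +3 sd limit;")
    print("some misses are expected even when nothing is wrong.")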

That said, they CAN stratify the data by employee and look for performance issues by person. If there are eight support personnel, seven of them fall within a tight range of the mean, and the eighth is an outlier, then that can demonstrate a performance problem for the eighth person.
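A sketch of that stratification (names and TATs made up), using a leave-one-out comparison so one extreme performer can't hide inside statistics that they themselves inflate:

Code:
import statistics

# Hypothetical turnaround times (hours) per support person.
tat_by_person = {
    "tech_a": [20, 22, 19, 24, 21],
    "tech_b": [18, 23, 20, 22, 19],
    "tech_c": [21, 20, 25, 18, 23],
    "tech_d": [45, 50, 42, 48, 47],  # the outlier in this toy data
}

means = {p: statistics.mean(v) for p, v in tat_by_person.items()}

# Flag anyone whose mean TAT sits more than 3 sd from everyone else's means.
for person, m in means.items():
    others = [v for p, v in means.items() if p != person]
    centre = statistics.mean(others)
    spread = statistics.stdev(others)
    flag = "  <-- possible outlier" if abs(m - centre) > 3 * spread else ""
    print(f"{person}: mean TAT {m:.1f} h{flag}")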
AndrewHine
 
Reply Sun 7 Oct, 2012 07:24 pm
@JPB,
Thanks again, JPB, for your insight. Although there may be enough P3s and P4s, management are only really bothered about P1 and P2 because they represent a service interruption. There are other departments (who work mainly on Windows/Unix) that are measured similarly, but their problems tend to be different to ours (dinosaurs on a mainframe) and more repetitive (the same problem happens over and over on different servers). As you say, we can't even work out whether the data are normally distributed. I get the feeling that the statistical methods used are inappropriate and serve only to satisfy management's desire for pretty graphs and blame apportionment when Service Level Agreements are missed (money involved there!). As you can probably tell, I am not a statistician, but I will mull over your answers some more. Many thanks once again.