ISDS Service Support Performance – May 2019


1 ISDS Service Support Performance – May 2019
Incident Response – volumes in this graph:
Mar 19: Met 739, Breached 54
Apr 19: Met 820, Breached 48
May 19: Met 519, Breached 34

Incident Resolution – volumes in this graph:
Mar 19: Met 1863, Breached 181
Apr 19: Met 1354, Breached 150
May 19: Met 1877, Breached 205

Change Success Rate
Ticket Volumes by Channel
First Contact Resolution

Customer Satisfaction:
May-19 satisfaction: 97%
May-19 responses: 236
12-month average: 97%
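Each graph's met/breached split can be read as a monthly SLA percentage. A minimal sketch of that arithmetic, using the Incident Resolution volumes above (the function name is illustrative, not from the report):

```python
def sla_met_percentage(met, breached):
    """Share of cases that met their SLA target, as a percentage."""
    total = met + breached
    if total == 0:
        raise ValueError("no cases recorded")
    return 100.0 * met / total

# Incident Resolution volumes from the dashboard
for month, met, breached in [("Mar 19", 1863, 181),
                             ("Apr 19", 1354, 150),
                             ("May 19", 1877, 205)]:
    print(f"{month}: {sla_met_percentage(met, breached):.1f}% met")
```

On these figures all three months sit a little above 90% – consistent with the 90% SLA target mentioned on the next slide.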

2 ISDS Service Support Performance Dashboard Explained
Incident Response
Description: Time to assign an incident to a technician, based on the following targets:
Priority 1 – Critical: 15 mins
Priority 2 – High: 30 mins
Priority 3 – Medium: 2 hours
Priority 4 – Low: 1 day
Priority 5 – Minor: 2 days
Calculation: All ISDS teams (excluding the Service Desk) and all priorities combined, showing whether we met or breached the agreed targets for the last 3 months.
Comments and data observations: Incident response is taken seriously by all teams and is consistently above the 90% SLA target each month. Every ISDS team comfortably met its SLA response targets this month.

Incident Resolution
Description: Time to resolve an incident, based on the following targets:
Priority 1 – Critical: 2 hours
Priority 2 – High: 4 hours
Priority 3 – Medium: 1 day
Priority 4 – Low: 3 days
Priority 5 – Minor: 5 days
Calculation: All ISDS teams (including the Service Desk) and all priorities combined, showing whether we met or breached the agreed targets for the last 3 months.
Comments and data observations: 205 cases breached their SLA; 53 belonged to Student Journey, which closed a significant number of aged cases. 9 were P2 Teaching Emergencies, forming part of the 94 breached cases within the Campus Teams.

Change Success Rate
Description: 13-month high-level view of whether IT changes were performed with no issues, some issues or major issues. The categorisation maps actual change status to report status as follows:
Completed with no issues → reported as "Completed with no issues"
Completed with minor issues, or Cancelled → reported as "Minor issues or cancelled"
Completed with major issues, Rolled back, or Failed → reported as "Major issues or rolled back"
Comments and data observations: 18 changes completed with no issues, 2 with minor issues and 1 with major issues; 1 was cancelled and 1 was rolled back, giving a success rate of 78%.
Note: The success rate % only counts changes completed with NO issues out of the total number of changes attempted, and excludes Pre-Authorised Changes.
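The success-rate note above reduces to a small calculation. A hedged sketch, using the May figures from the slide (the status strings are illustrative labels, not the ITSM tool's actual values):

```python
from collections import Counter

def change_success_rate(statuses):
    """Success rate %: changes completed with NO issues out of all
    changes attempted (pre-authorised changes already excluded)."""
    counts = Counter(statuses)
    total = sum(counts.values())
    return 100.0 * counts["no issues"] / total

# May 2019 figures from the slide: 18 clean, 2 minor, 1 major,
# 1 cancelled, 1 rolled back -> 23 attempted changes
may_changes = (["no issues"] * 18 + ["minor issues"] * 2
               + ["major issues", "cancelled", "rolled back"])
print(round(change_success_rate(may_changes)))  # 78
```

Note that cancellations and rollbacks still count in the denominator, which is why five problem changes out of 23 pull the rate down to 78%.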
Ticket Volumes by Channel
Description: 13-month view of the total number of cases in our ITSM tool, processed by the IT Helpline per month, per channel.
Calculation: Includes all incidents, service requests and requests for information, plus cases recorded by NorMAN (our Out of Hours service).
Comments and data observations: The Service Desk processed 5227 cases during May. This is 547 more cases than in May 2018, representing an increase across all channels with the exception of Face-to-Face.

First Contact Resolution
Description: 13-month view of incidents versus service requests resolved by the Service Desk at first contact.
Calculation: Includes only incidents and service requests, counted as cases logged via phone or face-to-face and marked as resolved in our ITSM tool within 20 minutes of being logged. Excludes emails.
Comments and data observations: Total FCR was 54%, an increase of 2 percentage points from April 2019. The overall trend reflects long-term work with technical teams to increase FCR through Shift Left initiatives.

Customer Satisfaction
Description: Responses to our ongoing customer satisfaction survey at the end of each ticket resolution. Excludes face-to-face feedback cards.
Calculation: The number of customers who responded highly satisfied or satisfied, versus the number who responded unsatisfied or highly unsatisfied. Includes feedback for all ISDS teams.
Comments and data observations: All feedback received for PoB cases is analysed and shared with the relevant teams each month. Customers who were unsatisfied are contacted by the responsible team or the Service Management Office.
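The FCR calculation described above comes down to a per-case rule: right channel, resolved inside the window. A minimal sketch, assuming the channel names shown on the slide (function and variable names are illustrative):

```python
from datetime import datetime, timedelta

FCR_WINDOW = timedelta(minutes=20)
FCR_CHANNELS = {"phone", "face-to-face"}

def is_first_contact_resolution(channel, logged, resolved):
    """A case counts towards FCR if it was logged via phone or
    face-to-face and resolved within 20 minutes of being logged."""
    return channel in FCR_CHANNELS and resolved - logged <= FCR_WINDOW

logged = datetime(2019, 5, 13, 9, 0)
print(is_first_contact_resolution("phone", logged,
                                  logged + timedelta(minutes=15)))  # True
print(is_first_contact_resolution("email", logged,
                                  logged + timedelta(minutes=5)))   # False
```

The monthly FCR percentage would then be the share of eligible cases for which this rule returns True.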

3 ISDS Service Level Overview – Critical Services – May 2019
Student Records Management: Availability SLA not met (94.46%); Reliability: met (2 outages). Comments: (1) U4SM overran its change window by hours from 00:00 on 8th May; users were informed they could access the system again at 15:16 on 9th May. (2) Intermittent QLS log-in issues for 2 hours from 08:10 on 10th May.

Coursework achieved 100% availability. No availability or reliability breaches were reported for the remaining critical services: Timetabling, Student Portal, SAP Payroll, Core Network, Identity Management, Data Centre Environments, Moodle, Turnitin, Classroom Technology.

4 ISDS Service Level Performance Explained
Incident Prioritisation
Urgency (preconfigured in PoB):
1 – Critical service
2 – Essential service
3 – Supporting service
Impact (selected manually):
Global impact – affecting more than 80% of users of a service, OR with the potential to affect more than 1,000 people, AND affecting more than 80% of system functionality.
4 – Affecting 20%–80% of users of a service; affecting a critical piece of system functionality; affecting more than 10 people; teaching emergency.
5 – Affecting fewer than 10 people; affecting non-critical functionality of the system; single printer / single PC.

Availability Target
Measuring the total uptime of the entire service: availability refers to the time when the service can be used by users in accordance with the definition in the service catalogue. Availability target: 99.9%.

Reliability Target
Reliability focuses on minimising the number of outages the University experiences, regardless of duration, for specific services during designated periods. Target: number of outages per month <2.

Availability and reliability reporting explained:
- Only Business Critical services are included in the report.
- Measured and reported monthly, calculated on a 24/7 basis.
- Any downtime approved in advance via the IT Change Management process is excluded from these calculations.
- Downtime is recorded and reported manually, i.e. not via an automated monitoring system.

Availability and reliability comments and data observations:
- Work to produce a service catalogue is ongoing, to define the components of Business Critical services and help clarify uptime/downtime.
- U4SM outage: there was a significant overrun of an approved change window. Users were advised not to attempt to sign in for the duration – effectively a service outage. Work is ongoing to address the downtime associated with U4SM changes through our Problem Management Process.
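The availability calculation described above (24/7 basis, change-approved downtime excluded) can be sketched as follows; a hedged example with illustrative names, which also shows that a 99.9% target leaves roughly 45 minutes of unapproved downtime in a 31-day month:

```python
def availability_pct(period_minutes, downtime_minutes,
                     approved_downtime_minutes=0):
    """Availability %: uptime over the period, excluding any downtime
    approved in advance via the IT Change Management process."""
    counted = downtime_minutes - approved_downtime_minutes
    return 100.0 * (period_minutes - counted) / period_minutes

MAY_MINUTES = 31 * 24 * 60  # 24/7 basis for a 31-day month
print(f"{availability_pct(MAY_MINUTES, 44):.2f}%")  # 99.90%
```

Because downtime is recorded manually rather than by automated monitoring, figures like the 94.46% reported for Student Records Management depend on the accuracy of those manual outage records.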

