
1 A Structured Conversation: Enabling and Measuring Responsive Pedagogy
Dr Christine Couper & Dr Cathy Molesworth, Planning and Statistics, January 2018
A HEFCE-funded project

2 AIMS (diagram)
Survey questions
Survey responses & learning analytics
A Structured Conversation
Apply change
Outcomes: Engagement & Success

3 What we did
Created a survey of teaching methods for staff and summarised the outcomes.
Linked teaching survey responses to the average grade per module and the fail rate per module using statistical analyses (a sketch of the linking step follows this slide).
Streamlined the links between JISC learning analytics and EvaSys module evaluation surveys.
Used statistical analysis to link available module-related information, such as level of study, to student satisfaction rates measured with EvaSys surveys.
Created a "pulse survey" to measure students' engagement, views of learning gain, and views of teaching, then summarised the outcomes.
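Before any modelling, each survey response has to be joined to the outcome measures for its module. Below is a minimal sketch of that linking step in Python/pandas; the file and column names (staff_survey.csv, module_outcomes.csv, module_code, avg_mark, n_fail, n_students) are hypothetical, not the project's actual data sources.

```python
import pandas as pd

# Hypothetical inputs: one survey response per module, and per-module outcomes.
survey = pd.read_csv("staff_survey.csv")        # module_code, team_based, ...
outcomes = pd.read_csv("module_outcomes.csv")   # module_code, avg_mark, n_fail, n_students

# Join on the module identifier so each response carries its outcome measures.
linked = survey.merge(outcomes, on="module_code", how="inner")
print(linked.shape)  # rows = modules with both a survey response and outcomes
```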

4 Staff Survey Overview
This summary covers 537 survey responses, one per module, from 275 staff who stated either that they were the module leader or that they were familiar with the most recent presentation of the module.
(Chart: percentage of the 537 survey responses relating to the 2015/16 vs. 2016/17 academic session.)

5 Introduction: Methods
All questions were compulsory multiple-choice; most also offered an optional free-text field for comments.
(Slide shows an illustrative example of the multiple-choice and free-text question types.)

6 Predictors of average grade per module & fail rate per module
Topics we asked staff survey questions on:
Academic session?
No. of staff – Academic?
No. of staff – Hourly paid?
Active inquiry – Team based? Collaborative? Peer? Situated? Flipped?

7 Predictors of average grade per module & fail rate per module
Topics we asked staff survey questions on:
Research – Taught? Latest findings? Trained? Prepared? Project (others')? Project (student's own)?
Co-design/co-production? Diagnostic tests? Novel assessments? MMP/MMA? TESTA? Employability? Moodle enhancements?

8 Predictors of average grade per module & fail rate per module
Topics we asked staff survey questions on:
Audio/video – Recording session? Assignment briefings? Assignment feedback? Self assessment? Recording teaching? Recorded advice?
EvaSys used? EvaSys feedback?

9 Predictors of average grade per module & fail rate per module
Module information that we gathered:
No. of students; Level; Credit
Teaching: scheduled hours, independent hours, placement hours
Assessment: coursework %, written work %, practical work %
JACS 16 subject area codes; proportion dissatisfied (EvaSys)

10 Statistically significant predictors of average mark per module (results table)
BLUE is bad! Note: reference categories in bold italics; categories with a lower average mark than the reference are shown in blue. Method: linear regression.

11 Statistically significant predictors of average mark per module (results table)
BLUE is bad! Note: reference categories in bold italics; categories with a lower average mark than the reference are shown in blue. Method: linear regression (see the sketch after this slide).
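The average-mark analysis fits a linear regression of per-module average mark on the survey predictors, with each categorical coefficient read against its reference category. A minimal sketch with statsmodels, not the project's actual model: the column names (avg_mark, team_based, level, coursework_pct, independent_hours) and linked_modules.csv are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.formula.api as smf

linked = pd.read_csv("linked_modules.csv")  # hypothetical linked dataset

# C(...) treats a predictor as categorical; its first level becomes the
# reference category, so other levels' coefficients read as differences
# from that reference (negative = lower average mark, i.e. "blue").
model = smf.ols(
    "avg_mark ~ C(team_based) + C(level) + coursework_pct + independent_hours",
    data=linked,
).fit()
print(model.summary())
```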

12 Statistically significant predictors of failure rate per module (results table)
BLUE is bad! Note: reference categories in bold italics; categories with a larger failure rate than the reference are shown in blue. Method: negative binomial regression.

13 Statistically significant predictors of failure rate per module (results table)
BLUE is bad! Note: reference categories in bold italics; categories with a larger failure rate than the reference are shown in blue. Method: negative binomial regression.

14 From a different analysis: statistically significant predictors of failure rate per module – written work (results table)
BLUE is bad! Note: reference categories in bold italics; categories with a larger failure rate than the reference are shown in blue. Method: negative binomial regression (see the sketch after this slide).
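Because failures are counts out of a module's enrolment, the failure-rate analyses use a negative binomial model, which tolerates the over-dispersion typical of such counts better than Poisson. A minimal sketch with statsmodels, again with hypothetical column names (n_fail, n_students, written_exam_pct); enrolment enters as an exposure so the model describes the failure rate per student.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

linked = pd.read_csv("linked_modules.csv")  # hypothetical linked dataset

# Negative binomial GLM for failure counts; exposure=n_students turns the
# fitted counts into per-student failure rates.
model = smf.glm(
    "n_fail ~ C(team_based) + C(level) + written_exam_pct",
    data=linked,
    family=sm.families.NegativeBinomial(),
    exposure=linked["n_students"],
).fit()
print(np.exp(model.params))  # rate ratios vs. each reference category
```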

15 What helps students?
Not having team-based active inquiry all or most of the time.
Taking subjects within certain JACS codes (with Biological sciences as the reference category): Physical sciences; Languages; Business & administrative studies; Mathematical sciences; Mass communications & documentation; Creative arts & design.
Having 100 hours or more of independent work.
Having a higher percentage (35% or more) of assessment by coursework.

16 What hinders students?
Not having reached Level 6 yet.
Students not conducting their own research project at all.
Not implementing TESTA.
Increases in student numbers (the effect is small).
Having a high percentage (70% or more) of assessment by written exams.

17 Caveats
Some findings are difficult to interpret at this stage. For example, missing data is a significant predictor; and having one member of staff on an academic contract predicts higher grades than having two, but lower grades than having none.
Some of the sample sizes for the sub-categories are too small to be reliable.
The statistical analyses pick up patterns of association, not causality. Does having a higher percentage of coursework really help, or does it merely signify the absence of written exams, which are problematic for some students?

18 Staff Survey January 2018

