
1 Response: Postsecondary Institution Ratings System [Docket ID ED–2013–IES–0151] Prepared by Tom Benghauser for The National Center for Education Statistics, the Institute of Education Sciences, and the Department of Education Tom Benghauser 609 542-0738 tom.benghauser@alumni.princeton.edu

2 CONTENTS Background and Overview Data Needs, Sources, and Issues Appendix

3 BACKGROUND AND OVERVIEW The Education Department's Postsecondary Institution Ratings System is being developed in support of President Obama's broader goals of reversing the growth of income inequality and increasing the possibilities for upward mobility in the United States. As the President stated in a speech at the University at Buffalo last August: "It will be my firm principle that our ratings have to be carefully designed to increase, not decrease, the opportunities for higher education for students who face economic or other disadvantages."

4 BACKGROUND AND OVERVIEW But it is not sufficient that such additional opportunities merely exist. The individuals who stand to benefit from them must first be made aware of them and then motivated to take advantage of them. As President Obama subsequently said at the recent White House Higher Education Summit: "We also know that too many students don't apply to the schools that are right for them. They may sometimes underestimate where they could succeed, where they could go. There may be a mismatch in terms of what their aspirations are and the nature of what's offered at the school that's close by."

5 BACKGROUND AND OVERVIEW And as Robert Kelchen, one of the data experts invited by NCES to present at the Technical Symposium on Postsecondary Institution Ratings on February 6, recently wrote in the Washington Monthly: "Once the ratings get released, there is no guarantee that students [will] use the ratings in any meaningful way (although it's possible)."

6 BACKGROUND AND OVERVIEW Research carried out by Turner and Hoxby has demonstrated that proactively drawing high-potential/low-income high school students' attention to colleges they otherwise wouldn't have considered – providing them with key financial aid and other information on these schools, even sending them application fee waivers – dramatically increases the likelihood that they will apply to, enroll at, and then thrive at the best-match/best-fit institutions where they are most likely to derive the greatest benefit.

7 BACKGROUND AND OVERVIEW The PIRS being developed by the Department of Education must accordingly be made available to college intenders in general, and to socio-economically disadvantaged candidates in particular, as part of an inviting, easy-to-use, well-publicized resource: a presentation framework that will not only draw their attention to the college(s) they can get into and afford, and where they will be neither over-matched nor under-matched relative to their academic potential, but will also maximize the probability that they go on to actually apply to, enroll at, and graduate from a best-match/best-fit institution.

8 BACKGROUND AND OVERVIEW The approach we urge the Department of Education to adopt as a structure and content template for the PIRS presentation framework both provides college intenders with step-by-step guidance in systematically narrowing their search for and identifying their own best-match/best-fit colleges and gives them personalized hand-holding clear through the application, financial aid, final decision, paperwork, and matriculation processes.


10 BACKGROUND AND OVERVIEW Research on our approach with numerous members of our nation's cadre of overworked, overstretched high school guidance counselors makes clear that the vast majority will immediately recognize it as an exceptionally valuable tool that they will quickly bring to the attention of their advisees. We are confident that at least one of its several overtly entertaining aspects – in particular its highlighting of fun findings from the college alumni satisfaction and outcomes surveys that will be a source of data for numerous key metrics in our scores and ratings – will immediately engage active college intenders. And we are equally certain that – as a result of its ease of use and, most of all, its usefulness – these college intenders will continue to rely on it throughout their journey to college.


12 BACKGROUND AND OVERVIEW Finally, the resource we have developed and are recommending for ED's adoption as a prototype also resolves a problem raised by multiple members of the higher education establishment. One of them, Christopher Nelson, wrote in a recent Huffington Post editorial ("Flawed From the Start: The President's New Plan for Higher Education") that "Rating of any kind suggests that all students are looking for something common; it also assumes that colleges are more alike than they actually are. Better instead to find ways of getting as much information as possible about each school to each student and let them make their own ratings - ones that will suit their own individual needs!" Christopher Nelson is President of St. John's College Annapolis and a founding member of the Annapolis Group, a consortium of more than 120 liberal arts colleges.

13 BACKGROUND AND OVERVIEW President Nelson will presumably be pleased that our resource enables students to browse and compare different colleges' profiles and then to generate short lists of the colleges that best match up with their individual resources, abilities, and aspirations.


17 This calculation excludes need-based aid tied to on-campus jobs and/or summer employment, based on the conviction that such aid deprives its recipients of invaluable time for study and social acclimatization and also labels them as socioeconomically disadvantaged.


20 BACKGROUND AND OVERVIEW We then help them zero in on their best-match/best-fit college by letting them calculate comparative scores for each of their short-listed colleges: scores based on how important the student indicates a number of more subjective, less tangible college characteristics are to her, weighted by how highly recent alumni of the short-list colleges have rated their alma maters on those same factors.
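A minimal sketch of how such a comparative score might be computed. The function, factor names, scales, and numbers below are illustrative assumptions, not part of the framework itself:

```python
def fit_score(importance: dict, alumni_ratings: dict) -> float:
    """Importance-weighted average of alumni ratings (both on 1-10 scales)."""
    total_weight = sum(importance.values())
    weighted = sum(weight * alumni_ratings.get(factor, 0)
                   for factor, weight in importance.items())
    return weighted / total_weight  # 0-10; higher = better subjective fit

# How important each factor is to one hypothetical student...
student = {"campus social life": 9, "faculty accessibility": 7, "study abroad": 4}
# ...and how recent alumni of one short-listed college rated it on the same factors.
college_a = {"campus social life": 8.1, "faculty accessibility": 9.2, "study abroad": 7.5}

print(round(fit_score(student, college_a), 1))  # 8.4
```

Computing the same score for every short-listed college lets the student rank them all on a single, personally weighted scale.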


22 BACKGROUND AND OVERVIEW Identifying a best-match/best-fit college – an affordable institution that can make the most of a college intender's academic potential and where, based on its ability to meet her more subjective interests and preferences, she will feel sufficiently comfortable to make the most of her opportunity – is just the start for many students. It is well documented, for instance, that many students are so unfamiliar with the application process in general and the FAFSA completion ordeal in particular that they simply give up. Others, unable to afford more than a handful of application fees, choose not to apply to colleges they still fear may be too great a reach to waste money on.

23 BACKGROUND AND OVERVIEW This is why our presentation framework provides college intenders with step-by-step guidance in completing the Common Application and even sends them email/text-message reminders of key deadlines for all the institutions they apply to; informs/reminds college intenders that application fee waivers are available; and makes it possible for them to complete a fee waiver request form – and for their guidance counselor to authenticate it – just once, yet still have it sent to all the colleges they decide to apply to.


25 BACKGROUND AND OVERVIEW Our presentation framework will also enable intenders to not only read what affinity alumni from their short-listed colleges have said about their alma maters but also get real-time mentoring and personal reassurances from them.


29 BACKGROUND AND OVERVIEW Finally, for a variety of reasons that include the fear of leaving family and community, students not infrequently chicken out after being accepted at a best-match/best-fit college and instead decide to attend a local institution where they are tragically undermatched. The insider jargon for this phenomenon is "summer melt," and it is to prevent it that our presentation framework includes mechanisms that let students: compare the institutions they've been admitted to side by side; network in real time with others who've also been admitted; get additional mentoring from their affinity alumni; and chat with undergrads who are already there. It also provides assistance in completing all of the paperwork for their best-match/best-fit college.


31 DATA NEEDS, SOURCES, AND ISSUES The data displayed in our presentation framework include several metrics whose values are derived from values contained in the IPEDS and/or Common Data Set data sets, while others are obtained from alumni satisfaction and outcomes research. One of the derived values is many colleges' Dirty Little Secret: the percentage of freshmen determined to have need who are not actually granted any need-based aid. This new metric reveals the cynicism underlying many institutions' Net Price Calculators.
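A sketch of how this derived metric could be computed from two counts reported for first-time, full-time freshmen. The function and variable names are placeholders, not actual IPEDS or CDS identifiers:

```python
def pct_need_unmet(n_with_need: int, n_awarded_aid: int) -> float:
    """Percentage of freshmen with demonstrated need who got no need-based aid."""
    # n_with_need:   freshmen determined to have financial need
    # n_awarded_aid: freshmen actually awarded any need-based aid
    return 100.0 * (n_with_need - n_awarded_aid) / n_with_need

print(round(pct_need_unmet(n_with_need=1200, n_awarded_aid=950), 1))  # 20.8 (invented counts)
```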

32 DATA NEEDS, SOURCES, AND ISSUES Another derived variable – the average amount of need-based aid per student determined to have need – is often significantly lower for the overall undergraduate student body than it is for first-time, full-time freshmen only. This suggests the possibility that some colleges may be using exceptionally generous aid offers to lure in desirable prospects and then, once switching to another institution has become problematic, reducing the amount of aid they receive. It is something prospective students should be made aware of.
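The flag itself is a simple difference of the two derived averages; a positive gap would suggest front-loaded offers (figures invented):

```python
def front_loading_gap(avg_aid_freshmen: int, avg_aid_all_ug: int) -> int:
    # Average need-based aid per student with need: freshmen vs. all undergraduates.
    # A large positive value suggests first-year offers that shrink in later years.
    return avg_aid_freshmen - avg_aid_all_ug

print(front_loading_gap(28_500, 24_100))  # 4400
```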

33 DATA NEEDS, SOURCES, AND ISSUES Other derived variables are required for the calculation of two major indicators: our derived bang-for-the-buck/value-for-money/return-on-investment index, which is intended for students' use as a key, if not final, criterion for differentiating among the mission-comparable colleges on their short lists; and our derived measure of the relative value added by the various colleges – their institutional effectiveness – of which the first index is in turn a key component. The effectiveness metric is intended for use primarily by the government and probably does not need to be included in the presentation framework we've designed for college intenders.

34 Bang for the Buck/Value for Money/Return on Investment Index:

Variable | Source
Overall graduation rates | IPEDS/CDS
Average years required to obtain bachelor's | Derived
Annual cost of attendance (sticker price) | IPEDS/CDS
Average need-based aid per undergraduate | Derived
Average grad's annual cost of obtaining degree net of need-based aid | Derived
Average grad's total cost of obtaining bachelor's net of need-based aid | Calculation
Average grad's loan obligation at graduation | Derived
Average grad's total cost of obtaining bachelor's degree including loans outstanding at graduation | Calculation
Average starting salary of graduates in their first full-time paying job (excluding volunteer/charitable work) | Alumni Satisfaction and Outcomes Research

35 Continued…
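The chain of derivations and calculations in the table might look like the following sketch. The 30% payoff share follows the formula shown on slide 47; the loan treatment (face value, no interest) and all input numbers are simplifying assumptions for illustration only:

```python
def payback_years(sticker_price, avg_need_aid, avg_years_to_degree,
                  avg_loans_at_grad, avg_starting_salary):
    net_annual = sticker_price - avg_need_aid         # annual cost net of need-based aid (Derived)
    total_net = net_annual * avg_years_to_degree      # total cost of bachelor's net of aid (Calculation)
    total_with_loans = total_net + avg_loans_at_grad  # plus loans outstanding at graduation (Calculation)
    # Years of earnings needed to pay off the cost, assuming 30% of the
    # average grad's starting salary is devoted to repayment.
    return total_with_loans / (0.30 * avg_starting_salary)

print(round(payback_years(60_000, 35_000, 4.2, 27_000, 52_000), 1))  # 8.5 years
```

Fewer payback years means more bang for the buck; the result also feeds the institutional-effectiveness formula introduced on slide 47.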


42 Bang for the Buck/Value for Money/Return on Investment Index: Importantly, for college prospects who already know their goals are not primarily pecuniary, bang/value/return could also be expressed in terms of other outputs measured by our alumni surveys – e.g. the chances of getting into grad school or of gaining employment in a particular field.

43 Added Value/Institutional Effectiveness Index:
[Diagram, built up across slides 43 – 48: "The Difficulty of Producing the Outcomes" is set against "Outcomes" to yield "Institutional Effectiveness"]

44 The larger the percentage, the greater the difficulty of the task. The lower the score, the greater the difficulty of the task. The fewer the years, the better.

45 Source: IPEDS

46 Source: Averages derived from Common Data Set percentages

47 (Percent Pell Grantees) × (1600 – Average SAT) ÷ (Average Years of Earnings to Pay Off Cost × 0.3) = Total Score
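A purely illustrative walk-through of the slide-47 formula with invented inputs:

```python
pct_pell = 20        # Percent Pell Grantees
avg_sat = 1300       # Average SAT (1600-point scale)
payback_years = 8.5  # Average years of earnings to pay off cost

total_score = pct_pell * (1600 - avg_sat) / (payback_years * 0.3)
print(round(total_score))  # 2353
```

A college serving more Pell grantees and lower-SAT entrants (harder inputs) while producing a shorter payback period (better outcome) earns a higher score.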

49 DATA NEEDS, SOURCES, AND ISSUES Alumni Satisfaction and Outcomes Surveys: An Essential Source of Essential Data A successful pilot survey carried out in 2011 provides proof of concept for the value and feasibility of utilizing alumni satisfaction and outcomes research to supplement IPEDS, the College Board's Common Data Sets, and the previously unpublished but highly revealing new variables that have now been derived from them. This pilot survey gathered information on the following variables, all of which will be of great utility both for inclusion in ED's PIRS and as data for academic research and analysis.


54 Additional Metrics: One criterion of community colleges' effectiveness should be how many of their degree recipients or other attendees go on to four-year institutions and then how successful they are there. Data germane to this issue could easily be obtained by asking alumni survey respondents whether they originally matriculated at their alma maters directly from high school or instead transferred in. Transfer students could be asked to identify the post-secondary institution they'd transferred in from and then to rate it on parameters such as how well they felt it prepared them for study at a four-year college.

55 Data Availability Timetables Although there is significant overlap in the variables reported in the IPEDS database and the College Board's Common Data Sets, the latter do not include data for Pell recipients and are also limited to a fraction of all Title IV postsecondary institutions. On the other hand, CDS data are available much sooner than IPEDS data: as of 3 February 2014, CDS data for the 2013 – 2014 academic year were already appearing, while provisional final IPEDS data for the 2012 – 2013 year had appeared just weeks earlier, at the end of December 2013. RTI reports that much of the time lag in IPEDS release dates results from requirements for multiple quality control reviews.

56 Data Availability Timetables: It is reasonable to assume that a primary purpose of this built-in redundancy is to identify fraudulent data submissions. Given the importance of timely, up-to-date data to the college evaluation and selection process, we recommend that: the IPEDS publication schedule be significantly accelerated by delaying internal data reviews until after publication; and the submission of false data be controlled through legislation mandating that violating institutions become ineligible for Title IV aid, through highly publicized name-and-shame lists of cheaters, and/or through exclusion from ED's college search resource.

57 APPENDIX Responder's Biography Review of Pilot Alumni Survey

58 A graduate of Princeton (AB cum laude) and Penn (JD plus MBA studies at Wharton), Tom Benghauser spent twenty-plus years in the US, Germany, South Africa, and the UK in high-level management positions with ad agency giants such as JWT. In 1986 he co-founded and was managing director of the Brand Dynamics Group, a London-based consultancy specializing in the development of predictive models of brand behavior for clients that included Kodak, Glaxo SmithKline, RJ Reynolds, J. Walter Thompson, and Philip Morris.

59 In 1993 he founded and until 2004 was CEO of a highly successful UK-based loyalty enhancement agency that carried out its unique satisfaction-research-based customer-dialogue programs for marketers such as Vauxhall Motors, Ikea-owned Habitat, General Motors Europe, Fiat-Alfa Romeo, and Saab. He returned to the U.S. – Denver – in 2005 with the intention of retiring but never got around to it and instead almost immediately began meddling in a lot of matters that at the time he knew absolutely nothing about.

60 College Straight Talk Pilot Satisfaction and Outcomes Survey – Recent Alumni: Kenyon, Mount Holyoke, Penn, Princeton, Tufts. CONTENTS: Findings; Methodology/Survey Instrument. © 2011 – 2013 Tom Benghauser

61 Pilot Survey: Methodology Fielded May – June 2011. Kenyon, Mount Holyoke, University of Pennsylvania, Princeton, Tufts. Bachelor's recipients who graduated in 2006 – 2010. Sample respondents, along with email addresses and other contact information, were randomly extracted from the colleges' on-line alumni directories.

62 Pilot Survey: Methodology On-line, using email invitations containing direct links to the survey instrument. 52 questions / 3 rating matrices (44 attributes) = 96 items total (details page 12). Opens with an overall satisfaction rating (1 – 10 scale) followed by an open-ended "Why do you say that?" This avoids the possibility of question-sequence-induced response error on the single most important question and maximizes responses to it; the extensive verbatims (average word count = 41) provide especially rich insights. The final question is an open-ended one that invites respondents to provide observations about the survey, including questions they did not understand, questions they thought should have been included, etc.

63 Pilot Survey: Methodological Findings Response rates are in line with current trends for on-line surveying despite the absence of the participation incentives (e.g., chances to win major donations to the respondent's educational institution of choice) that will be included in the email participation-invitations for future surveys, and despite only one round of email reminders.

64 The response volumes yielded respectable margins of error at the 95% level of confidence. Again, these were obtained without the use of incentives or extensive reminders-cum-cajoling.
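For reference, the margin of error implied by a given response volume can be computed in the standard way for a proportion at 95% confidence; a quick sketch with invented sample sizes:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a simple random sample at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 250, 500):  # illustrative response volumes
    print(n, f"+/- {100 * margin_of_error(n):.1f} points")
```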

65 Pilot Survey: Methodological Findings Comparisons between a key, independently established characteristic of the universe as defined and that characteristic as measured in the survey strongly suggest that our findings are fully representative of the universe. The Common Data Sets developed and employed by the College Board and posted on numerous colleges' websites were the source of the data on this key demographic variable.


67 Pilot Survey: Methodological Findings The high correlations between the ethnic compositions measured by our survey and the independently established census-based compositions of the alumni universes are especially significant given both the great explanatory power of the ethnicity variable in analyzing our other measurements and the high interest in the variable within the higher education community.

68 Notes on Self-Selection-Induced Error Given the high stakes that are not infrequently involved in customer satisfaction and outcomes research, it is hardly surprising that, virtually from the beginning of the discipline, hypotheses have been put forward that individuals who choose to take part in such surveys differ in ways that make them collectively unrepresentative of the overall populations of which they are members. The reality may indeed be that, say, extremely satisfied or dissatisfied customers are in general more likely to find the time or go to the trouble of taking part in satisfaction surveys than are other members of a particular body of customers or users.

69 Notes on Self-Selection-Induced Error The issue, however, is not whether response bias exists but whether it manifests itself in the alumni bodies of certain colleges to a greater or lesser degree than in those of other colleges. This is because what we are attempting to do is compare colleges relative to one another, not measure absolute Truth with a capital T. Nothing in the literature documents that response bias disproportionately affects some brands – or even product/service sectors – more than others.

70 Notes on Self-Selection-Induced Error Finally, the reason most frequently posited for hypothesized differences in response rates among invitees whose satisfaction levels differ is that especially unhappy or happy ones have much stronger emotional or other motivations for taking the time and/or going to the trouble of taking part. To the extent that this reason is valid, response differentials will logically have much less impact on on-line surveys, where the effort involved in participating is significantly lower than it is for in-person, hard-copy, or telephone surveys.

71 The Survey Questions


76 Pilot Survey: Substantive Findings Highlights

77 The vertical red bars represent 90% confidence intervals: intervals constructed this way will capture a college's true average satisfaction score in 9 of 10 samples, so there is roughly a 1 in 10 chance that the true average for a particular college falls outside the range represented by its bar.

78 – 80 [Charts: scores compared across Holyoke, Tufts, Princeton, Penn, and Kenyon]


83 [Chart: scores for Holyoke, Penn, Princeton, and Tufts; scale: 1 = Not Satisfied at All, 10 = Completely Satisfied]


86 – 88 [Charts: scores compared across Holyoke, Tufts, Princeton, Penn, and Kenyon]

89 [Chart: responses to "What are you doing now?"]


95 [Chart: breakdown by major]

96 Pilot Survey: Selected Detailed Findings

97 – 103 [Charts: "How much would you say you gained in each of the following areas from your time at [Q1]?" Scale: 1 = Gained Nothing, 10 = Gained a Lot]

104 – 105 [Charts: "Please indicate the extent to which you agree – or disagree – with the following statements about [Q1]." Scale: 1 = Don't Agree at All, 10 = Agree Completely; scores compared across Holyoke, Tufts, Princeton, Penn, and Kenyon]

