Response: Postsecondary Institution Ratings System [Docket ID ED–2013–IES–0151] Prepared by Tom Benghauser for The National Center for Education Statistics, the Institute of Education Sciences, and the Department of Education

CONTENTS
Background and Overview
Data Needs, Sources, and Issues
Appendix

BACKGROUND AND OVERVIEW The Education Department's Postsecondary Institution Ratings System is being developed in support of President Obama's broader goals of reversing the growth of income inequality and increasing the possibilities for upward mobility in the United States. As the President stated in a speech at the University at Buffalo last August: "It will be my firm principle that our ratings have to be carefully designed to increase, not decrease, the opportunities for higher education for students who face economic or other disadvantages."

BACKGROUND AND OVERVIEW But it is not sufficient that such additional opportunities merely exist. The individuals who will benefit from them must first be made aware of them and then motivated to take advantage of them. As President Obama subsequently said at the recent White House Higher Education Summit: "We also know that too many students don't apply to the schools that are right for them. They may sometimes underestimate where they could succeed, where they could go. There may be a mismatch in terms of what their aspirations are and the nature of what's offered at the school that's close by."

BACKGROUND AND OVERVIEW And as Robert Kelchen, one of the data experts invited by NCES to present at the Technical Symposium on Postsecondary Institution Ratings on February 6, recently wrote in the Washington Monthly: "Once the ratings get released, there is no guarantee that students [will] use the ratings in any meaningful way (although it's possible)."

BACKGROUND AND OVERVIEW Research carried out by Turner and Hoxby has demonstrated that proactively drawing high-potential/low-income high school students' attention to colleges they otherwise wouldn't have considered, providing them with key financial aid and other information on these schools, and even sending them application fee waivers dramatically increases the likelihood that they will apply to, enroll at, and then thrive at the best-match/best-fit institutions where they are most likely to derive the greatest benefit.

BACKGROUND AND OVERVIEW The PIRS being developed by the Department of Education must accordingly be made available to college intenders in general, and socio-economically disadvantaged candidates in particular, as part of an inviting, easy-to-use, well-publicized resource – a presentation framework that will not only draw their attention to the college(s) they can get into and afford, and where they will be neither over-matched nor under-matched relative to their academic potential, but also maximize the probability that they will go on to actually apply to, enroll at, and graduate from a best-match/best-fit institution.

BACKGROUND AND OVERVIEW The approach we urge the Department of Education to adopt as a structure and content template for the PIRS presentation framework both provides college intenders with step-by-step guidance in systematically narrowing their search for and identifying their own best-match/best-fit colleges and gives them personalized hand-holding clear through the application, financial aid, final decision, paperwork, and matriculation processes.

BACKGROUND AND OVERVIEW Research on our approach with numerous members of our nation's cadre of overworked, overstretched high school guidance counselors makes clear that the vast majority will immediately recognize it as an exceptionally valuable tool and will quickly bring it to the attention of their advisees. We are confident that at least one of its several overtly entertaining aspects – in particular its highlighting of fun findings from the college alumni satisfaction and outcomes surveys that will be a source of data for numerous key metrics included in our scores and ratings – will immediately engage active college intenders. And we are equally certain that – as a result of its ease of use and, most of all, its usefulness – these college intenders will continue to rely on it throughout their journey to college.

BACKGROUND AND OVERVIEW Finally, the resource we have developed and are recommending for ED's adoption as a prototype also resolves a problem raised by multiple members of the higher education establishment. One of them, Christopher Nelson, wrote in a recent Huffington Post editorial ("Flawed From the Start: The President's New Plan for Higher Education") that "Rating of any kind suggests that all students are looking for something common; it also assumes that colleges are more alike than they actually are. Better instead to find ways of getting as much information as possible about each school to each student and let them make their own ratings - ones that will suit their own individual needs!" Christopher Nelson is President of St. John's College, Annapolis, and a founding member of the Annapolis Group, a consortium of more than 120 liberal arts colleges.

BACKGROUND AND OVERVIEW President Nelson will presumably be pleased that our resource enables students to browse and compare different colleges' profiles and then to generate short lists of the colleges that best match up with their individual resources, abilities, and aspirations.

This calculation excludes need-based aid tied to on-campus jobs and/or summer employment, based on the conviction that it deprives its recipients of invaluable time for study and social acclimatization and also labels them as socioeconomically disadvantaged.

BACKGROUND AND OVERVIEW We then help them zero in on their best-match/best-fit college by letting them calculate comparative scores for each of their short-listed colleges on the basis of how important they indicate a number of more subjective, less tangible college characteristics are to them, weighted by how highly recent alumni from the short-list colleges have rated their alma maters on these same factors.
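As a rough illustration of the weighting just described, the sketch below combines a student's importance ratings with alumni ratings of the same characteristics. The factor names, the 1–10 scales, and the simple weighted-average formula are illustrative assumptions rather than the deck's exact specification.

```python
# Sketch of the importance-weighted fit score described above.
# Factor names, scales, and the weighted-average formula are
# illustrative assumptions, not the deck's exact specification.

def fit_score(student_importance, alumni_ratings):
    """Weight each college characteristic's alumni rating by how
    important the student says that characteristic is to them."""
    factors = student_importance.keys() & alumni_ratings.keys()
    total_weight = sum(student_importance[f] for f in factors)
    if total_weight == 0:
        return 0.0
    weighted = sum(student_importance[f] * alumni_ratings[f] for f in factors)
    return weighted / total_weight  # back on the alumni 1-10 scale

student_importance = {"small classes": 9, "campus community": 7, "research access": 4}
college_a_alumni = {"small classes": 8.6, "campus community": 9.1, "research access": 6.2}
college_b_alumni = {"small classes": 6.4, "campus community": 7.0, "research access": 9.3}

print(fit_score(student_importance, college_a_alumni))  # higher score -> better fit
print(fit_score(student_importance, college_b_alumni))
```

A student would run this comparison once per short-listed college and rank the results.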

BACKGROUND AND OVERVIEW Identifying a best-match/best-fit college – an affordable institution that can make the most of a college intender's academic potential and where, based on its ability to meet her more subjective interests and preferences, she will feel sufficiently comfortable to make the most of her opportunity – is just the start for many students. It is well documented, for instance, that many students are so unfamiliar with the application process in general, and the FAFSA completion ordeal in particular, that they simply give up. Others, unable to afford more than a handful of application fees, choose not to apply to colleges they still fear may be too great a reach to waste money on.

BACKGROUND AND OVERVIEW This is why our presentation framework
- provides college intenders with step-by-step guidance in completing the Common Application and even sends them email/text message reminders of key deadlines for all the institutions they apply to
- informs/reminds college intenders that application fee waivers are available and also makes it possible for them to complete a fee waiver request form – and for their guidance counselor to authenticate it – just once yet still have it sent to all the colleges they decide to apply to.

BACKGROUND AND OVERVIEW Our presentation framework will also enable intenders to not only read what affinity alumni from their short-listed colleges have said about their alma maters but also get real-time mentoring and personal reassurances from them.

BACKGROUND AND OVERVIEW Finally, for a variety of reasons that include the fear of leaving family and community, students not infrequently chicken out after being accepted at a best-match/best-fit college and instead decide to attend a local institution where they are tragically under-matched. The insider jargon for this phenomenon is "summer melt," and it is to prevent it that our presentation framework includes mechanisms that let students
- compare the institutions they've been admitted to side by side
- network in real time with others who've also been admitted
- get additional mentoring from their affinity alumni
- chat with undergrads who are already there.
It also provides assistance in completing all of the paperwork for their best-match/best-fit college.

DATA NEEDS, SOURCES, AND ISSUES The data displayed in our presentation framework include several metrics whose values are derived from values contained in the IPEDS and/or Common Data Set data sets, while others are obtained from alumni satisfaction and outcomes research. One of the derived values is many colleges' Dirty Little Secret: the percentage of freshmen determined to have need who are not actually granted any need-based aid. This new metric reveals the cynicism underlying many institutions' Net Price Calculators.
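Derived in the most straightforward way, this metric comes from two counts, along the following lines; the input names are placeholders standing in for the corresponding CDS/IPEDS items, not actual field identifiers.

```python
# Illustrative derivation of the "Dirty Little Secret" metric.
# Input names are placeholders for the corresponding CDS/IPEDS items.

def pct_needy_freshmen_unaided(freshmen_with_need, freshmen_awarded_need_based_aid):
    """Percentage of freshmen determined to have need who received
    no need-based aid at all."""
    if freshmen_with_need == 0:
        return 0.0
    unaided = freshmen_with_need - freshmen_awarded_need_based_aid
    return 100.0 * unaided / freshmen_with_need

# Example: 1,200 freshmen judged to have need, 950 actually offered need-based aid.
print(round(pct_needy_freshmen_unaided(1200, 950), 1))  # 20.8
```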

DATA NEEDS, SOURCES, AND ISSUES Another derived variable – the average amount of need-based aid per student determined to have need – is often significantly lower for the overall undergraduate student body than it is for first-time full-time freshmen only. This suggests the possibility that some colleges may be using exceptionally generous aid offers to lure in desirable prospects and then, once switching to another institution has become problematic, reducing the amount of aid they receive. It is something prospective students should be made aware of.
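The comparison behind this observation could be computed as below; the inputs are illustrative totals rather than actual CDS/IPEDS field names.

```python
# Average need-based aid per student determined to have need, computed
# separately for first-time full-time freshmen and for the full
# undergraduate body. All input values are illustrative.

def avg_need_based_aid(total_need_based_aid_dollars, students_with_need):
    return total_need_based_aid_dollars / students_with_need if students_with_need else 0.0

freshman_avg = avg_need_based_aid(28_500_000, 950)        # ~ $30,000 per needy freshman
all_undergrad_avg = avg_need_based_aid(96_000_000, 4_000)  # ~ $24,000 per needy undergrad

# A freshman average well above the all-undergraduate average is the
# pattern the text flags as a possible bait-and-switch signal.
print(freshman_avg, all_undergrad_avg, freshman_avg > all_undergrad_avg)
```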

DATA NEEDS, SOURCES, AND ISSUES Other derived variables are required for the calculation of two major indicators:
- our derived bang for the buck/value for money/return on investment index, which is intended for students' use as a key if not final criterion for differentiating among the mission-comparable colleges on their short lists. This calculation is in turn a key component of:
- our derived measure of the relative value added by the various colleges – their institutional effectiveness – a metric that is intended for use primarily by the government and probably doesn't need to be included in the presentation framework we've designed for college intenders.

Bang for the Buck/Value for Money/Return on Investment Index:

Variable | Source
Overall graduation rates | IPEDS/CDS
Average years required to obtain bachelor's | Derived
Annual cost of attendance (sticker price) | IPEDS/CDS
Average need-based aid per undergraduate | Derived
Average grad's annual cost of obtaining degree net of need-based aid | Derived
Average grad's total cost of obtaining bachelor's net of need-based aid | Calculation
Average grad's loan obligation at graduation | Derived
Average grad's total cost of obtaining bachelor's degree including loans outstanding at graduation | Calculation
Average starting salary of graduates in their first full-time paying job (excluding volunteer/charitable work) | Alumni Satisfaction and Outcomes Research


Bang for the Buck/Value for Money/Return on Investment Index: Importantly, for college prospects who already know their goals are not primarily pecuniary, bang/value/return could also be expressed in terms of other outputs measured by our alumni surveys – e.g. the chances of getting into grad school or of gaining employment in a particular field.
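The transcript does not spell out exactly how the table's variables combine into the index. One plausible reading – consistent with the "years of earnings to pay off cost" term used in the institutional-effectiveness formula below – is sketched here with invented input values; treat the combination itself as an assumption.

```python
# One plausible bang-for-the-buck figure: years of starting salary needed
# to cover the total cost of the degree. The combination of the table's
# variables shown here is an assumption, not the deck's stated formula.

def total_cost_net_of_aid(avg_years_to_degree, annual_sticker_price, avg_need_based_aid):
    # "Average grad's total cost of obtaining bachelor's net of need-based aid"
    return avg_years_to_degree * (annual_sticker_price - avg_need_based_aid)

def years_of_earnings_to_pay_off(total_cost, loans_at_graduation, avg_starting_salary):
    # Reads the table literally: loans outstanding at graduation are added
    # on top of the net cost before dividing by average starting salary.
    return (total_cost + loans_at_graduation) / avg_starting_salary

cost = total_cost_net_of_aid(avg_years_to_degree=4.2,
                             annual_sticker_price=58_000,
                             avg_need_based_aid=32_000)
print(round(years_of_earnings_to_pay_off(cost, loans_at_graduation=24_000,
                                         avg_starting_salary=52_000), 2))
```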

Added Value/Institutional Effectiveness Index: The Difficulty of Producing the Outcomes / Outcomes / Institutional Effectiveness

[Chart slides. Interpretation notes: the larger the percentage, the greater the difficulty of the task; the lower the score, the greater the difficulty of the task; the fewer the years, the better. Sources: IPEDS; averages derived from Common Data Set percentages.]

Scoring formula: Total Score = (Percent Pell Grantees) × (1600 − Average SAT) ÷ (Average Years of Earnings to Pay Off Cost × 0.3)
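Taken literally, the slide's scoring rule translates to the small function below. The input values are invented for illustration, and whether "Percent Pell Grantees" enters as a fraction or as a whole-number percentage is not specified in the transcript.

```python
# Direct translation of the slide's institutional-effectiveness formula:
# Total Score = (Percent Pell Grantees) x (1600 - Average SAT)
#               / (Average Years of Earnings to Pay Off Cost x 0.3)
# All input values below are illustrative only.

def institutional_effectiveness(pct_pell_grantees, avg_sat, years_earnings_to_pay_off_cost):
    """Higher Pell share and lower average SAT raise the difficulty term;
    fewer years of earnings needed to pay off the cost raise the score."""
    return (pct_pell_grantees * (1600 - avg_sat)) / (years_earnings_to_pay_off_cost * 0.3)

print(round(institutional_effectiveness(pct_pell_grantees=0.22,
                                        avg_sat=1310,
                                        years_earnings_to_pay_off_cost=3.1), 1))
```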

DATA NEEDS, SOURCES, AND ISSUES Alumni Satisfaction and Outcomes Surveys: An Essential Source of Essential Data. A successful pilot survey carried out in 2011 provides proof of concept for the value and feasibility of utilizing alumni satisfaction and outcomes research to supplement IPEDS, the College Board's Common Data Sets, and the previously unpublished but highly revealing new variables that have now been derived from them. This pilot survey gathered information on a wide range of variables, all of which will be of great utility both for inclusion in ED's PIRS and as data for academic research and analysis.

Additional Metrics: One criterion of community colleges' effectiveness should be how many of their degree recipients or other attendees go on to four-year institutions and then how successful they are there. Data germane to this issue could easily be obtained by asking alumni survey respondents whether they originally matriculated at their alma maters directly from high school or instead transferred in. Transfer students could be asked to identify the postsecondary institution they'd transferred in from and then be asked to rate it on parameters such as how well they felt it prepared them for study at a four-year college.

Data Availability Timetables: Although there is significant overlap in the variables reported on by the IPEDS database and the College Board's Common Data Sets, the latter do not include data for Pell recipients and are also limited to a fraction of all Title IV postsecondary institutions. On the other hand, CDS data are available much sooner than IPEDS data: as of 3 February 2014, CDS data for the 2013–2014 academic year were already appearing, while provisional final IPEDS data for the 2012–2013 year had appeared just weeks earlier, at the end of December. RTI reports that much of the time lag in IPEDS release dates results from requirements for multiple quality control reviews.

Data Availability Timetables: It is reasonable to assume that a primary purpose of this built-in redundancy is to identify fraudulent data submissions. Given the importance of timely, up-to-date data to the college evaluation and selection process, we recommend that
- the IPEDS publication schedule be significantly accelerated by delaying internal data reviews until after publication
- the submission of false data be controlled through
  - legislation mandating that violating institutions become ineligible for Title IV aid, and/or
  - highly publicized name-and-shame lists of cheaters, and/or
  - exclusion from ED's college search resource.

APPENDIX
Responder's Biography
Review of Pilot Alumni Survey

A graduate of Princeton (AB cum laude) and Penn (JD plus MBA studies at Wharton), Tom Benghauser spent twenty-plus years in the US, Germany, South Africa, and the UK in high-level management positions with ad agency giants such as JWT. In 1986 he co-founded and was managing director of the Brand Dynamics Group, a London-based consultancy specializing in the development of predictive models of brand behavior for clients that included Kodak, Glaxo SmithKline, RJ Reynolds, J. Walter Thompson, and Philip Morris.

In 1993 he founded and until 2004 was CEO of a highly successful UK-based loyalty enhancement agency that carried out its unique satisfaction-research-based customer-dialogue programs for marketers such as Vauxhall Motors, Ikea-owned Habitat, General Motors Europe, Fiat-Alfa Romeo, and Saab. He returned to the U.S. – Denver – in 2005 with the intention of retiring but never got around to it and instead almost immediately began meddling in a lot of matters that at the time he knew absolutely nothing about.

College Straight Talk Pilot Satisfaction and Outcomes Survey
Recent Alumni: Kenyon, Mount Holyoke, Penn, Princeton, Tufts
CONTENTS: Findings; Methodology/Survey Instrument
© 2011–2013 Tom Benghauser

Pilot Survey: Methodology
- Fielded May–June 2011
- Kenyon, Mount Holyoke, University of Pennsylvania, Princeton, Tufts
- Bachelor's degree recipients from recent graduating classes
- Sample respondents, along with email addresses and other contact information, randomly extracted from the colleges' online alumni directories

Pilot Survey: Methodology
- Conducted online, using email invitations containing direct links to the survey instrument
- 52 questions/3 rating matrices (44 attributes) = 96 items total (details page 12)
- Opens with an overall satisfaction rating (1–10 scale) followed by the open-ended "Why do you say that?"
  - Avoids the possibility of question-sequence-induced response error on the single most important question
  - Maximizes responses to the single most important question
  - Extensive verbatims (average word count = 41) provide especially rich insights
- The final question is an open-ended one that invites respondents to provide observations about the survey, including questions they did not understand, questions they thought should have been included, etc.

Pilot Survey: Methodological Findings
Response rates are in line with current trends for online surveying despite
- the absence of the participation incentives (e.g., chances to win major donations to respondents' educational institution of choice) that will be included in the participation invitations for future surveys
- only one round of reminders

The resulting response volumes produced respectable margins of error at the 95% level of confidence. Again, these were obtained without the use of incentives or extensive reminders-cum-cajoling.

Pilot Survey: Methodological Findings
Comparisons between a key, independently established characteristic of the universe as defined and that characteristic as measured in the survey strongly suggest that our findings are fully representative of the universe. The Common Data Sets developed and employed by the College Board and posted on numerous colleges' websites were the source of the data on this key demographic variable.

Pilot Survey: Methodological Outcomes
The high correlations between the ethnic compositions measured by our survey and the independently established census-based compositions of the alumni universes are especially significant given both
- the great explanatory power of the ethnicity variable in analyzing our other measurements
- the high interest in the variable within the higher education community

Notes on Self-Selection-Induced Error
Given the high stakes that are not infrequently involved in customer satisfaction and outcomes research, it is hardly surprising that, from virtually the beginning of the discipline, hypotheses have been put forward that the people who choose to take part in such surveys are different in ways that make them collectively unrepresentative of the overall populations of which they are members. The reality may indeed be that, say, extremely satisfied or dissatisfied customers are in general more likely to find the time or go to the trouble of taking part in satisfaction surveys than are other members of a particular body of customers or users.

Notes on Self-Selection-Induced Error
The issue, however, is not whether response bias exists but rather whether it manifests itself within the alumni bodies of certain colleges to a greater or lesser degree than it does within the alumni bodies of other colleges. This is because what we are attempting to do is compare colleges relative to one another, not measure absolute Truth with a capital T. Nothing in the literature documents that response bias disproportionately affects some brands, or even product/service sectors, more than it does others.

Notes on Self-Selection-Induced Error
Finally, the reason most frequently posited for hypothesized differences in response rates among invitees whose satisfaction levels differ is that especially unhappy or happy ones have much stronger emotional or other motivations for taking the time and/or going to the trouble of taking part. To the extent that this reason is valid, response differentials will logically have much less impact on online surveys, where the effort involved in participating is significantly lower than it is for in-person, hard-copy, or telephone surveys.

The Survey Questions

Pilot Survey: Substantive Findings Highlights

The vertical red bars represent 90% confidence intervals, meaning that there is only a 1 in 10 chance that the true average satisfaction score for a particular college falls outside the range represented by its bar.
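As a point of reference, a 90% confidence interval of the kind the red bars represent can be approximated for each college's mean satisfaction score as follows. This is a normal-approximation sketch with invented sample values; the pilot's exact interval method is not stated in the transcript.

```python
# Normal-approximation 90% confidence interval for a mean satisfaction
# score; 1.645 is the z value for two-sided 90% coverage. The sample
# values are invented purely to show the calculation.
from math import sqrt
from statistics import mean, stdev

def ci_90(scores):
    m = mean(scores)
    half_width = 1.645 * stdev(scores) / sqrt(len(scores))
    return m - half_width, m + half_width

sample = [9, 8, 10, 7, 9, 9, 6, 10, 8, 9, 7, 10, 9, 8, 9]
low, high = ci_90(sample)
print(f"mean satisfaction {mean(sample):.2f}, 90% CI [{low:.2f}, {high:.2f}]")
```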

[Chart slides: average satisfaction scores by college – Mount Holyoke, Tufts, Princeton, Penn, Kenyon – rated on a scale of 1 = Not Satisfied at All to 10 = Completely Satisfied, plus breakdowns of responses to "What are you doing now?" and by major.]

Pilot Survey: Selected Detailed Findings

[Chart slides: "How much would you say you gained in each of the following areas from your time at [Q1]?" – rated for each area on a scale of 1 = Gained Nothing to 10 = Gained a Lot.]

[Chart slides: "Please indicate the extent to which you agree - or disagree - with the following statements about [Q1]." – rated by college (Mount Holyoke, Tufts, Princeton, Penn, Kenyon) on a scale of 1 = Don't Agree at All to 10 = Agree Completely.]