PROCESSING OF DATA
The data collected in research are processed and analyzed to draw conclusions or to verify the hypotheses made. Processing is important because it makes further analysis of the data easier and more efficient. Processing of data technically means:
Editing of the data
Coding of the data
Classification of the data
Tabulation of the data

EDITING: Data editing is the process by which the collected data are examined to detect errors or omissions, which are then corrected as far as possible before proceeding further. Editing is of two types:
Field editing
Central editing

FIELD EDITING: This type of editing deals with abbreviated or illegible entries in the gathered data. It is most effective when done on the same day as the interview or on the very next day. The investigator must not jump to conclusions while doing field editing.
CENTRAL EDITING: This type of editing takes place after the entire data collection process has been completed. Here a single or common editor corrects errors such as an entry in the wrong place or an entry in the wrong unit. As a rule, all wrong answers should be dropped from the final results.

EDITING REQUIRES SOME CAREFUL CONSIDERATIONS:
The editor must be familiar with the interviewer's mindset, the objectives, and everything else related to the study.
Editors should use a distinctive color when making entries in the collected data, and should initial all answers or changes they make.
The editor's name and the date of editing should be placed on the data sheet.

CODING: Responses may be classified on the basis of one or more common concepts. In coding, a particular numeral or symbol is assigned to each answer in order to place the responses into definite categories or classes. The classes of responses determined by the researcher should be appropriate and suitable to the study. Coding enables efficient and effective analysis because the responses are categorized into meaningful classes. Coding decisions are usually taken while designing the questionnaire or other data collection tool. Coding can be done manually or by computer.
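The coding step described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed procedure: the codebook, category labels, and the "missing" code below are all hypothetical assumptions chosen for the example.

```python
# Sketch: manual coding of questionnaire responses.
# The codebook below (Likert-style labels -> numerals) is an illustrative assumption.

CODEBOOK = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def code_responses(raw_responses, codebook, missing_code=9):
    """Map verbal answers to numeric codes; unmatched answers receive a
    'missing/other' code so that no response is silently dropped."""
    return [codebook.get(r.strip().lower(), missing_code) for r in raw_responses]

answers = ["Agree", "strongly agree", "Neutral", "no answer"]
print(code_responses(answers, CODEBOOK))  # [4, 5, 3, 9]
```

Deciding the codebook while designing the questionnaire, as the text recommends, means the mapping above exists before any data are collected.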

CLASSIFICATION: Classification of data means that the collected raw data are categorized into groups on the basis of common features: data having common characteristics are placed in the same group. The entire data set is thus divided into groups or classes that convey meaning to the researcher. Classification is done in two ways:
Classification according to attributes
Classification according to class intervals

CLASSIFICATION ACCORDING TO ATTRIBUTES: Here the data are classified on the basis of common characteristics, which can be descriptive (such as literacy, sex, honesty, or marital status) or numerical (such as weight, height, or income). Descriptive features are qualitative in nature: they cannot be measured quantitatively, but they are still considered in the analysis. The analysis applied to such data is known as statistics of attributes, and the classification itself as classification according to attributes.

CLASSIFICATION ON THE BASIS OF CLASS INTERVALS: Numerical features of data can be measured quantitatively and analyzed with the help of statistical units; data relating to income, production, age, weight, etc. come under this category. Such data are known as statistics of variables and are classified by way of intervals. Classification according to class intervals usually involves three main problems:
The number of classes
How to choose the class limits
How to determine the frequency of each class
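The three problems listed above can be illustrated with a short sketch. The default number of classes here uses Sturges' rule, one common convention; the income figures and the choice of equal-width intervals are assumptions made for the example, not requirements.

```python
import math

def frequency_distribution(values, num_classes=None):
    """Group numeric data into equal-width class intervals and count the
    frequency of each class. If no class count is given, Sturges' rule
    k = 1 + 3.322 * log10(n) is used as a rough default."""
    n = len(values)
    if num_classes is None:
        num_classes = max(1, round(1 + 3.322 * math.log10(n)))
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_classes
    counts = [0] * num_classes
    for v in values:
        # clamp so the maximum value falls in the last class
        idx = min(int((v - lo) / width), num_classes - 1)
        counts[idx] += 1
    limits = [(lo + i * width, lo + (i + 1) * width) for i in range(num_classes)]
    return list(zip(limits, counts))

incomes = [12, 15, 22, 27, 31, 35, 41, 44, 52, 58]  # illustrative data
for (low, high), freq in frequency_distribution(incomes, num_classes=5):
    print(f"{low:5.1f} - {high:5.1f}: {freq}")
```

The function fixes the number of classes, derives the class limits from the data range, and tallies the frequency of each class, i.e. the three problems in the order listed.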

TABULATION: The mass of collected data has to be arranged in some concise and logical order. Tabulation summarizes the raw data and displays it in the form of statistical tables; it is an orderly arrangement of data in rows and columns.
OBJECTIVES OF TABULATION:
Conserves space and minimizes explanatory and descriptive statements.
Facilitates comparison and summarization.
Facilitates detection of errors and omissions.
Establishes a basis for various statistical computations.

BASIC PRINCIPLES OF TABULATION:
1. Tables should be clear, concise, and adequately titled.
2. Every table should be distinctly numbered for easy reference.
3. Column headings and row headings should be clear and brief.
4. Units of measurement should be specified at appropriate places.
5. Explanatory footnotes concerning the table should be placed at appropriate places.
6. The source of the data should be clearly indicated.

7. Columns and rows should be clearly separated by dark lines.
8. Demarcation should also be made between data of one class and those of another.
9. Comparable data should be placed side by side.
10. Figures in percentages should be rounded before tabulation.
11. Figures, symbols, etc. should be properly aligned and adequately spaced to enhance readability.
12. Abbreviations should be avoided.
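A simple cross-tabulation shows several of these principles in action (clear row and column headings, comparable data side by side, a total column). The survey variables and records below are invented for illustration only.

```python
from collections import Counter

# Sketch: a two-way tabulation (cross-table) of coded responses.
# The variables (gender x response) and the records are illustrative assumptions.
records = [
    ("male", "yes"), ("female", "yes"), ("male", "no"),
    ("female", "yes"), ("male", "yes"), ("female", "no"),
]

counts = Counter(records)                 # frequency of each (row, column) pair
rows = sorted({r for r, _ in records})    # row headings
cols = sorted({c for _, c in records})    # column headings

# Print with clear headings and a total column, per the principles above.
print("\t".join(["", *cols, "Total"]))
for r in rows:
    row_counts = [counts[(r, c)] for c in cols]
    print("\t".join([r, *map(str, row_counts), str(sum(row_counts))]))
```

Real tables would add a title, a table number, and a source line; those elements are textual conventions rather than computation, so they are omitted here.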

ANALYSIS OF DATA
The important statistical measures used to analyze research or survey data are:
Measures of central tendency (mean, median, and mode)
Measures of dispersion (standard deviation, range, mean deviation)
Measures of asymmetry (skewness)
Measures of relationship (correlation and regression)
Association in the case of attributes
Time series analysis
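The first three groups of measures above can be computed directly with Python's standard library. The sample data are illustrative, and the skewness formula used here is Pearson's second coefficient (one of several common definitions), an assumption of this sketch.

```python
import statistics as st

data = [4, 7, 7, 8, 10, 12, 15]  # illustrative sample

# Measures of central tendency
mean = st.mean(data)
median = st.median(data)
mode = st.mode(data)

# Measures of dispersion
stdev = st.stdev(data)                            # sample standard deviation
data_range = max(data) - min(data)                # range
mean_dev = st.mean(abs(x - mean) for x in data)   # mean deviation about the mean

# Measure of asymmetry: Pearson's second skewness coefficient
skewness = 3 * (mean - median) / stdev

print(f"mean={mean}, median={median}, mode={mode}")
print(f"stdev={stdev:.3f}, range={data_range}, mean dev={mean_dev:.3f}")
print(f"skewness={skewness:.3f}")
```

Measures of relationship (correlation, regression) and time series analysis need paired or ordered data and are left out to keep the sketch small.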

TESTING THE HYPOTHESIS
Several factors go into determining the appropriate statistical technique for a hypothesis test. The most important are:
The type of data being measured.
The purpose or objective of the statistical inference.
Hypotheses can be tested by various techniques, which fall into two broad categories:
Parametric tests
Non-parametric tests

PARAMETRIC TESTS: These tests depend on assumptions, typically that the population(s) from which the data are randomly sampled follow a normal distribution. Types of parametric tests include:
t-test
z-test
F-test
Chi-square (χ²) test
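As a concrete instance of a parametric test, here is the independent two-sample t-test computed from first principles (the equal-variance form). The two groups of scores are invented for illustration, and the resulting t statistic would still need to be compared against a t-table at the chosen significance level.

```python
import math
import statistics as st

def two_sample_t(a, b):
    """t statistic and degrees of freedom for H0: mean(a) == mean(b),
    assuming both populations are normal with equal variance."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    t = (st.mean(a) - st.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

group_a = [23, 25, 28, 30, 26]  # illustrative scores
group_b = [20, 22, 24, 21, 23]
t, df = two_sample_t(group_a, group_b)
print(f"t = {t:.3f}, df = {df}")
```

The normality assumption stated above is what makes this a parametric test: the sampling distribution of the statistic is known (Student's t with `df` degrees of freedom) only under that assumption.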

NON-PARAMETRIC TESTS
The main types of non-parametric tests are:
Wilcoxon Signed-Rank Test (for comparing two populations)
Kolmogorov-Smirnov Test (to test whether or not a sample of data is consistent with a specified distribution function)
Runs Test (in studies where measurements are made according to some well-defined ordering, in time or space, a frequent question is whether the average value of the measurement differs at different points in the sequence; this test provides a means of testing that)
Sign Test (a single-sample test that can be used instead of the single-sample t-test or the paired t-test)
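Of the tests listed, the sign test is simple enough to sketch exactly: under the null hypothesis, positive and negative differences are equally likely, so the number of positive signs follows a Binomial(n, 0.5) distribution. The before/after scores below are illustrative assumptions.

```python
from math import comb

def sign_test_p(before, after):
    """Two-sided exact sign test p-value for paired data;
    ties (zero differences) are dropped, per the usual convention."""
    diffs = [a - b for a, b in zip(after, before) if a != b]
    n = len(diffs)
    plus = sum(d > 0 for d in diffs)
    k = min(plus, n - plus)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

before = [60, 65, 70, 55, 62, 68, 59, 66]  # illustrative paired scores
after  = [64, 69, 72, 58, 61, 73, 63, 70]
print(f"p = {sign_test_p(before, after):.4f}")
```

Note that only the signs of the differences are used, never their magnitudes or any distributional assumption, which is what makes the test non-parametric.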

INTERPRETATION: Interpretation establishes the relationships among the collected data in the light of the analysis. It looks beyond the data of the research, drawing on other research, theory, and hypotheses. Interpretation serves as a tool to explain the observations made during the research period and acts as a guide for future research.
WHY INTERPRETATION?
It helps the researcher understand the abstract principle underlying the findings.
It links the findings with those of other similar studies.
It enables the researcher to make others understand the real significance of the research findings.

PRECAUTIONS IN INTERPRETATION:
The researcher must ensure that the data are appropriate, trustworthy, and adequate for drawing inferences.
The researcher must be cautious about errors and take the necessary corrective action if an error arises.
The researcher must ensure the correctness of the data analysis process, whether the data are qualitative or quantitative.
The researcher must try to bring hidden facts and non-obvious factors to the front and combine them with the factual interpretation.
The researcher must also ensure constant interaction between the initial hypothesis, empirical observations, and theoretical concepts.