
Chapter Three


Research methodology is normally presented in chapter three of both proposal and thesis

Methodology is a detailed procedure aimed at answering research question(s)

The purpose of answering research questions is to achieve objectives of the study

Methodology starts with a description of research philosophy, research design, population, sampling design and techniques, instrumentation, and data analysis methods and techniques

It entails describing in detail what needs to be done, and how it will be done

It has various subsections, which form the basis of today's focus


Developing self-awareness is a key issue in life, since different individuals do not view the world in the same way

It is a common practice in life to believe that the way you look at the world is the same way that others look at it – “common sense”

But one person’s common sense is not necessarily the same as another person’s

There are likely to be (almost always are) differences between people’s view of the world

It may not therefore come as a surprise that the way some researchers view the world is very different from that of others

Existence of different views of the world, and the processes that operate within it, is part of what is known as philosophy

Philosophy is concerned with views about how the world works

As an academic subject, philosophy focuses, primarily, on reality, knowledge and existence

An individual’s view of the world is closely linked to what that individual perceives as reality which influences his thinking and actions

In daily life outside academics, it is uncommon to think about how one perceives reality and hence the world around him

However, in academics, it is very important to know how you perceive reality

Your individual perception of reality affects how you gain knowledge of the world and how you act within it

Therefore your perception of reality, and how you gain knowledge, will affect the way in which you conduct the research in your thesis

The key term relating to the way of looking at the world is ‘paradigm’ (Kuhn, 1970)

The paradigm we use to view the world, on a day-to-day basis, is very likely to influence how we conduct research

A paradigm is a pre-requisite of perception itself – what you see depends on what you look at, the way you have been taught to think, and how you look (Long, 2007: 196)

Over the past one hundred years or so, only two major ways (objective and subjective) of ‘looking at the world’ have proved common

Objective view is measurable in terms of the use of numbers and holds that there is only one truth or a limited number of universal truths

Subjective view suggests that the world is largely open to several interpretations that are not readily measurable in terms of numbers

Under subjective view, numeric measurement is not always possible or desirable and hence words are able to indicate nuances more accurately

In summary, the two views are usually referred to as the quantitative and the qualitative paradigms, respectively

A comparison of the two extreme paradigms leads to three important questions

What is real? (ontology)

How can we know anything? (epistemology)

What methods should we use to conduct research? (methodology)


Ontology is concerned with the question “What is real?”

There are two possible responses depending on the paradigm chosen

In one paradigm, the response to the question: ‘Is there a single objective truth/a knowable reality affected by a consistent set of laws?’ would be a “Yes”

These single objective truth viewers are usually referred to as ‘positivists’

Positivists believe there are universal truths that are waiting to be discovered

These can be ‘discovered’ by carrying out ‘objective’ research, in which the researcher does not interact with what is being researched

In this context, neutral, objective research will be the appropriate way to gain unbiased knowledge

In the other paradigm, the answer to the question is that everything is relative

In other words, there is no such thing as one objective truth or even universal truths, but merely a number of subjective truths

Those who believe there is no reality other than what individuals create in their heads are known as ‘constructivists’ or ‘interpretivists’

‘Constructivists’ or ‘Interpretivists’ believe that there is no objective reality, but that reality is constructed by each individual and hence subjective

Phenomenology is the term given to the research approach of such researchers


Once we answer the question “What is real?”, the next question is “How do we know anything about the world?”

What we perceive of as reality has an effect on our knowledge of the world

Hence, each of the two different paradigms not only has a different perception of reality, but a different perception of knowledge about the world

In other words, what we think of as real, affects the way we gain knowledge

If we perceive the world as having a number of universal truths, then these truths can be ‘discovered’ by carrying out ‘objective’ research, in which the researcher does not interact with what is being researched

In this context, neutral, objective research will be the appropriate way to gain unbiased knowledge

However, if we see the world as having multiple, contextualized ‘realities’, rather than objective, universal truths, then an appropriate way to gain knowledge would be for the researcher to interact with those being studied

This will be in an attempt to reveal their attitudes and behavior in relation to whatever is being studied


If we accept that our understanding of reality affects the way we gain knowledge of reality, then we need to accept that this will affect how we actually conduct research about reality (or what we term the ‘methodology’)

The links between the important concepts of ontology, epistemology and methodology are neatly summarized by Taylor and Edgar (1999:27):

‘the belief about the nature of the world (ontology) adopted by an enquirer will affect their belief about the nature of knowledge in that world (epistemology), which in turn will influence the enquirer’s belief as to how that knowledge can be uncovered (methodology).’


These differences in ontology (our views about the world) and epistemology (how we gain knowledge) mean that different research methods have been employed: quantitative researchers use deductive approaches, whereas qualitative researchers tend to use inductive approaches



How should the term ‘research design’ be understood? An analogy might help

When constructing a building there is no point ordering materials or setting critical dates for completion of project stages until we know what sort of building is being constructed.

The first decision is whether we need a high rise office building, a factory for manufacturing machinery, a school, a residential home or an apartment block.

Until this is done we cannot sketch a plan, obtain permits, work out a work schedule or order materials

Similarly, social research needs a design or a structure before data collection or analysis can commence.

A research design is not just a work plan. A work plan details what has to be done to complete the project but the work plan will flow from the project's research design.

The function of a research design is to ensure that the evidence obtained enables us to answer the initial question as unambiguously as possible

Obtaining relevant evidence entails specifying the type of evidence needed to answer the research question, to test a theory, to evaluate a programme or to accurately describe some phenomenon.

In other words, when designing research we need to ask: given this research question (or theory), what type of evidence is needed to answer the question (or test the theory) in a convincing way?

Research design ‘deals with a logical problem and not a logistical problem’ (Yin, 1989: 29)

Before a builder or architect can develop a work plan or order materials they must first establish the type of building required, its uses and the needs of the occupants; the work plan flows from this.

Similarly, in social research the issues of sampling, method of data collection (e.g. questionnaire, observation, document analysis), and design of questions are all subsidiary to the matter of ‘What evidence do I need to collect?’

There are many types of research design

The type adopted for a study is determined by the nature of the problem and hence the type of research question to be answered

Types of design

Target population, sampling design and techniques

Basic Constructs

Sampling is conducted when conducting a census is impossible or unreasonable.

An understanding of the target population is essential and is usually expressed in terms of ‘elements’, ‘sampling units’ and ‘sampling frame.’

An element is defined as a person or object from which data is sought and about which inferences are to be made.

Sampling units are the target population elements available for selection during the sampling process.

A sampling frame is a representation of the elements of the target population.

Errors in sampling are classified as sampling or nonsampling errors.

Sampling errors (random sampling errors) represent any type of bias attributable to mistakes in either drawing a sample or deriving the sample size.

Nonsampling errors represent bias that occurs regardless of whether a sample or a census is used.

Determining Sample Size

Determining sample size is a complex task: it requires balancing the resources available against the amount and accuracy of the information to be obtained.

Several qualitative and quantitative factors are considered when determining the sample size.

The qualitative issues considered may include factors such as:

Nature of research and expected outcome

Importance of decision to organization

Number of variables being studied

Sample size in similar studies

Nature of analysis

Resource constraints

Various quantitative measures are also considered when determining sample size such as:

Variability of the population characteristics (the greater the variability, the larger the sample required)

Level of confidence desired (the higher the confidence desired, the larger the sample required)

Degree of precision desired in estimating population characteristics (the more precise the study, the larger the sample required)
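These quantitative considerations are often combined into a sample-size formula. As an illustrative sketch (not part of the original notes), Cochran's formula for estimating a population proportion shows how confidence, variability and precision each drive the required sample size; the function name and example values here are hypothetical:

```python
import math

def cochran_sample_size(z: float, p: float, e: float) -> int:
    """Cochran's formula for estimating a population proportion:
    n = z^2 * p * (1 - p) / e^2
    z: z-score for the desired confidence level (1.96 for 95%)
    p: expected population proportion (0.5 gives maximum variability)
    e: desired margin of error (precision)"""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

# Higher confidence, greater variability or tighter precision all raise n
print(cochran_sample_size(1.96, 0.5, 0.05))  # → 385
print(cochran_sample_size(1.96, 0.5, 0.03))  # tighter precision: larger sample
```

Note how halving the margin of error roughly quadruples the required sample, which is why precision is traded off against resources.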

Classification of Sampling Techniques

There are two basic sampling designs:

Probability sampling design – allows researchers to judge the reliability and validity of the findings in comparison to the defined target population  

Nonprobability sampling design – the probability of selecting each sampling unit is unknown and, therefore, the potential error between the sample and target population cannot be computed.


Probability Sampling Techniques

1. Simple Random Sampling

2. Systematic Random Sampling

3. Stratified Sampling

4. Cluster Sampling

1. Simple Random Sampling

It is a probability sampling technique wherein each population element is assigned a number and the desired sample is determined by generating random numbers appropriate for the relevant sample size.

In simple random sampling, researchers use a table of random numbers, random digit dialling or some other random selection method that ensures that each sampling unit has a known, equal and nonzero chance of being selected into the sample.
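A minimal sketch of simple random sampling in Python (the sampling frame of households is hypothetical):

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n elements so that every element in the sampling frame has a
    known, equal and nonzero chance of selection (without replacement)."""
    return random.Random(seed).sample(frame, n)

# hypothetical sampling frame of 100 numbered households
frame = [f"household_{i}" for i in range(1, 101)]
sample = simple_random_sample(frame, 10, seed=42)
print(len(sample))  # → 10
```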

2. Systematic Random Sampling

In systematic random sampling the sample is chosen by selecting a random starting point and then picking every ith element in succession from the sampling frame. The sampling interval i is determined by dividing the population size N by the sample size n and rounding to the nearest integer.

Systematic random sample elements can be obtained via various means such as a customer list, membership list, taxpayer roll and so on.
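A minimal sketch of systematic random sampling, assuming a hypothetical membership list of 100 names:

```python
import random

def systematic_sample(frame, n, seed=None):
    """Select a random start in the first interval, then every i-th element,
    where the interval i = round(N / n)."""
    N = len(frame)
    i = round(N / n)
    start = random.Random(seed).randrange(i)
    return frame[start::i][:n]

# hypothetical membership list of 100 names, sample of 10 (interval i = 10)
members = [f"member_{k}" for k in range(100)]
print(len(systematic_sample(members, 10, seed=0)))  # → 10
```

Because the start is random but the interval is fixed, every selected element sits exactly i positions after the previous one.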

3. Stratified Sampling

It is distinguished by the two-step procedure it involves. In the first step the population is divided into mutually exclusive and collectively exhaustive sub-populations, called strata. This technique is used when there is considerable diversity among the population elements. Its major aim is to reduce cost without loss in precision. There are two variants:

a) Proportionate stratified sampling

b) Disproportionate stratified sampling

There are several advantages of stratified sampling, including the assurance of representativeness, the possibility of comparison between strata, and an understanding of each stratum and its unique characteristics.
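Proportionate stratified sampling can be sketched as follows; the urban/rural strata and their sizes are invented for illustration:

```python
import random

def proportionate_stratified_sample(strata, n, seed=None):
    """Step 1: the population is already divided into strata.
    Step 2: sample each stratum in proportion to its population share."""
    rng = random.Random(seed)
    N = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n_h = round(n * len(members) / N)   # proportionate allocation
        sample.extend(rng.sample(members, n_h))
    return sample

# hypothetical strata: 60 urban and 40 rural respondents, sample of 10
strata = {"urban": [f"u{i}" for i in range(60)],
          "rural": [f"r{i}" for i in range(40)]}
print(len(proportionate_stratified_sample(strata, 10, seed=1)))  # → 10
```

Disproportionate stratified sampling would simply replace the `n_h` line with allocations chosen by the researcher (for example, oversampling a small but important stratum).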

4. Cluster Sampling

Cluster sampling is quite similar to stratified sampling: in the first step the population is also divided into mutually exclusive and collectively exhaustive sub-populations, called clusters. The key difference is that whole clusters are then selected at random, rather than elements being drawn from every sub-population.
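A one-stage cluster sample, in which whole clusters are selected and every element within them is observed, can be sketched as follows (the villages are hypothetical):

```python
import random

def one_stage_cluster_sample(clusters, n_clusters, seed=None):
    """Randomly select whole clusters, then observe every element inside
    each selected cluster (one-stage cluster sampling)."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# hypothetical villages used as clusters
clusters = {"village_A": ["a1", "a2"], "village_B": ["b1", "b2", "b3"],
            "village_C": ["c1"], "village_D": ["d1", "d2"]}
sample = one_stage_cluster_sample(clusters, 2, seed=7)
print(len({e[0] for e in sample}))  # → 2 (elements come from exactly 2 clusters)
```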

Nonprobability Sampling Techniques

Nonprobability sampling is mainly used in product testing, name testing and advertising testing, where researchers and managers want a rough idea of the population's reaction rather than a precise understanding.

Types of nonprobability sampling include:

  • Convenience sampling
  • Judgment sampling
  • Quota sampling
  • Snowball sampling

1. Convenience Sampling

As the name implies, in convenience sampling, the selection of the respondent sample is left entirely to the researcher.

The researcher makes the assumption that the target population is homogeneous and that the individuals interviewed are similar to the overall defined target population.

2. Judgement Sampling

Also known as purposive sampling, it is an extension of convenience sampling.

Respondents are selected according to an experienced researcher's belief that they will meet the requirements of the study. This method incorporates a great deal of sampling error, since the researcher's judgement may be wrong; however, it tends to be used quite regularly in industrial markets, where small, well-defined populations are to be researched.

3. Quota Sampling

It is a procedure that restricts the selection of the sample by controlling the number of respondents by one or more criteria.

The restriction generally involves quotas regarding respondents’ demographic characteristics, specific attitudes, or specific behaviors.

Quota sampling is also viewed as a two-stage restricted judgement sampling. In the first stage restricted categories (quotas) are built as discussed above, and in the second stage respondents are selected on the basis of convenience or the judgement of the researcher.

4. Snowball Sampling

An initial group of respondents is selected, usually at random; subsequent respondents are then identified through referrals from the initial group.

It is used in research situations where the defined target population is rare and unique and compiling a complete list of sampling units is a nearly impossible task.

The main underlying logic of this method is that rare groups of people tend to form their own unique social circles.

A researcher has to consider the research objectives first, and whether qualitative or quantitative research is required.

Secondly, available resources should be kept in mind, including the time frame available for conducting the research and making the findings available.

Researchers should also focus on the need for statistical analysis and the degree of accuracy required with regard to the research and the expected outcomes.


Before starting the sampling process one must be aware of several basic constructs involved with sampling, namely: population, target population, elements, sampling units and sampling frame.

Determining the final sample size for research involves various qualitative and quantitative considerations.

Section One: Data Collection


Statistics is a tool for converting data into information. But where does the data come from? How is it gathered? How do we ensure it is accurate? Is it reliable? Is it representative of the population from which it was drawn?

We now explore these issues


Data collection is the process of gathering and measuring information on targeted variables in an established systematic fashion, which then enables one to answer relevant questions and evaluate outcomes.

1. Questionnaire


It is a list of research or survey questions asked of respondents, designed to extract specific information.

Basic Purposes of a Questionnaire

(1) To collect the appropriate data

(2) To make data comparable and amenable to analysis

(3) To minimize bias in formulating and asking questions

(4) To make questions engaging and varied

Principles of a good questionnaire

A good questionnaire should be based on the objectives of the study/survey and suited to the target population to be surveyed.

Its design should also consider:

The length of the questionnaire

The spacing of questions

The size of typefaces used

The layout of questions and answers

A logical order to questions

Minimized writing

Simple language

The provision of survey/question instructions

Dos and Don'ts in questionnaire design

Do:

Use simple wording

Be brief

Be specific

Do not:

Be vague

Use biased wording

Use abbreviations or scientific jargon

Use objectionable (offensive) questions

Be redundant

Mix negative statements with positive statements

2. Interview Guide

An interview schedule (also called an interview guide) is basically a list containing a set of structured questions prepared to serve as a guide for interviewers, researchers and investigators in collecting information or data about a specific topic or issue. The schedule is used by the interviewer, who fills in the questions with the answers received during the actual interview.

Advantages of an Interview Schedule

(1) An interview schedule facilitates the conduct of an interview.

Since the questions have already been prepared beforehand, it is easier to carry out and complete the interview.

(2) It increases the likelihood of collecting accurate information or data.

The questions, which were already prepared beforehand, are expected to be well thought out and focused, so that they target the objectives of the study, thereby ensuring that the answers obtained are accurate. Interview schedules can increase the reliability and credibility of the data gathered.

(3) It allows interviewers and researchers to get more information

This is because they can ask follow-up queries or clarifications to the questions they have prepared. Thus, the information gathered is more relevant and useful

Disadvantages of an Interview Guide

(1) It can be time-consuming.

Preparation of the interview schedule can take quite a chunk of the time of an interviewer, especially if it is for an extensive or in-depth interview. Significant amounts of research must be performed in order to be able to craft good questions.

The analysis of the qualitative data generated by an interview guide could be tedious and time consuming.

(2) Variability may be high when the interview schedule is used by multiple interviewers.

This may result in unreliable information gathered during the interviews.

Section Two: Quality Control

Data reliability

It is concerned with:

The consistency of the data collected

The precision (or lack of it) with which it is collected (asking people questions about something of which they may have little or no direct knowledge).

The repeatability of the data collection method (if another researcher attempted to repeat your study, would he/she achieve similar results?).

Cronbach's alpha is usually used to measure reliability, with alpha ≥ 0.7 indicating acceptable reliability.

Sources of unreliable observations

The observer’s (or researcher’s) subjectivity.

Asking imprecise or ambiguous questions

Asking questions about issues that respondents are not very familiar with or do not care about.
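The Cronbach's alpha threshold mentioned above can be checked directly from item scores. A minimal sketch, with invented Likert-scale data for illustration:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item (column-wise).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)"""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    sum_item_var = sum(pvariance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# hypothetical 3-item Likert scale answered by 5 respondents
items = [[4, 3, 5, 2, 4],
         [4, 2, 5, 3, 4],
         [3, 3, 4, 2, 5]]
alpha = cronbach_alpha(items)
print(alpha >= 0.7)  # → True (acceptable at the 0.7 threshold)
```

In practice packages such as SPSS or R report this figure directly; the sketch simply makes the formula explicit.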

Data Validity

Validity refers to the extent to which the data we collect gives a true measurement / description of "social reality" (what is "really happening" in society).

Validity is whether indeed we are measuring the unobservable construct that we wanted to measure.

For instance, is a measure of compassion really measuring compassion, and not measuring a different construct such as empathy?

Assessment of Validity

(1) Theoretical Approach

Theoretical assessment of validity focuses on how well the idea of a theoretical construct is translated into or represented in an operational measure. This approach involves a panel of expert judges, who rate each item (indicator) on how well it fits the conceptual definition of the construct.


(2) Empirical Assessment

Empirical assessment of validity examines how well a given measure relates to one or more external criteria, based on empirical observations.

This assessment is based on quantitative analysis of observed data using statistical techniques such as:

Correlational Analysis,

Factor Analysis. A threshold of 0.4 factor loading is usually used.

Link between Reliability and Validity

A measure can be reliable but not valid, if it is measuring something very consistently but is consistently measuring the wrong construct.

A measure can be valid but not reliable if it is measuring the right construct, but not doing so in a consistent manner.

Before analyzing any data ( Primary /Secondary data), you should always seek to apply the concepts of reliability and validity to the data.

Section Three: Data Analysis

Data Organization

(1) Coding Data

It is common practice to code data by assigning numerical values to nonnumeric measurements. An example might be to code gender as 1 and 2 instead of "male" and "female" respectively.
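A minimal sketch of this coding step (the responses and codebook are hypothetical):

```python
# hypothetical raw survey responses for a gender variable
responses = ["male", "female", "female", "male", "female"]

# the codebook records what each numeric code stands for
codebook = {"male": 1, "female": 2}
coded = [codebook[r] for r in responses]
print(coded)  # → [1, 2, 2, 1, 2]
```

Keeping the codebook written down alongside the data is what makes the numeric codes interpretable later.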

(2) Data Capture

Choose software that you (and your supervisor) are conversant with, e.g. SPSS, STATA, R, MATLAB or Excel.

Group relevant questions together.

Data Presentation - Graphical Data Summaries

(a) Pie Chart – for a discrete variable

(b) Bar Graph – for a discrete variable

(c) Histogram – for a continuous variable

(d) Scatter Plot – for two continuous variables

(e) Box Plot – for a continuous variable vs a discrete/categorical variable

Data Presentation - Numerical Data Summaries

(a) Frequency Table – one categorical variable

            Freq.    Percent    Valid %    Cum. %
MALE          15       60.0       60.0       60.0
FEMALE        10       40.0       40.0      100.0
Total         25      100.0      100.0

(b) Cross Tabulation – two categorical variables

(c) Measures of Central Tendency – a continuous variable
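Using Python's standard library, the numerical summaries above can be sketched as follows; the gender codes and ages are invented for illustration:

```python
from collections import Counter
from statistics import mean, median, mode

# hypothetical coded data: gender (1 = male, 2 = female) and ages in years
gender = [1] * 15 + [2] * 10
ages = [21, 25, 25, 30, 34, 25, 28]

# (a) Frequency table for one categorical variable
n = len(gender)
cumulative = 0.0
for value, freq in sorted(Counter(gender).items()):
    percent = 100 * freq / n
    cumulative += percent
    print(value, freq, f"{percent:.1f}", f"{cumulative:.1f}")

# (c) Measures of central tendency for a continuous variable
print(round(mean(ages), 1), median(ages), mode(ages))  # → 26.9 25 25
```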

Interpretation of Results
Conducting your data analysis and drafting your results chapter are important milestones to reach in your dissertation process.

The light is finally shining on you from the end of the tunnel, and you are winding down. With only two chapters to go, you are finally feeling relieved… until you get the output from your data analysis.

What do these numbers mean and what should you do with them?

(a)How to Determine Statistical Significance

With p values, t values, F values, correlation coefficients, and a bunch of other numbers staring at you, it is easy to get discouraged.

The basic question you need to answer, whether or not you have statistical significance, can be answered by looking at one simple number: the p value (or Sig. in SPSS).

However, the underlying null and alternative hypotheses must be known to the researcher.
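The decision rule itself is simple to state in code; this sketch assumes the conventional significance level of 0.05, and the function name is our own:

```python
def significance_decision(p_value, alpha=0.05):
    """Compare the p value to the chosen significance level alpha.
    Note: 'fail to reject H0' is not the same as accepting H0."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

print(significance_decision(0.032))  # → reject H0
print(significance_decision(0.210))  # → fail to reject H0
```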



Regression Parameters

Diagnostic Tests

(a) Durbin-Watson Test

A measure of serial correlation/dependence in the residuals:

H0: No autocorrelation


H1 : Presence of autocorrelation

The statistic should lie between 1.5 and 2.5 to fail to reject H0.
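The Durbin-Watson statistic can be computed directly from the residuals; the residual values below are invented for illustration:

```python
def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2).  Values near 2 indicate no
    first-order autocorrelation; the 1.5-2.5 rule of thumb is used above."""
    numerator = sum((b - a) ** 2 for a, b in zip(residuals, residuals[1:]))
    denominator = sum(e ** 2 for e in residuals)
    return numerator / denominator

# hypothetical regression residuals
resid = [0.5, -0.1, 0.3, -0.4, 0.2, 0.1, -0.3]
dw = durbin_watson(resid)
print(1.5 <= dw <= 2.5)  # → True (fail to reject H0: no autocorrelation)
```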

(b) Multicollinearity

Use Variance Inflation Factor (VIF)

Use a threshold of 10
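In the special case of exactly two predictors, the VIF can be sketched from their correlation; the predictor data below are hypothetical (in general, VIF for predictor j uses the R² from regressing j on all the other predictors):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def vif_two_predictors(x1, x2):
    """With two predictors, R^2 of one regressed on the other equals their
    squared correlation, so VIF = 1 / (1 - r^2); VIF > 10 flags trouble."""
    r2 = pearson_r(x1, x2) ** 2
    return 1 / (1 - r2)

# hypothetical, moderately correlated predictors
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 4, 5, 4, 5, 7]
print(vif_two_predictors(x1, x2) < 10)  # → True (below the threshold of 10)
```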

(c) Normality Test

The residuals should be normally distributed for the linear regression model to be admissible

H0: Data is normally distributed

H1: Data is not normally distributed

Example 1 – Diagnostic tool

Q-Q plot: Most of the data (apart from the tails) should lie on the diagonal line for normality.
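The coordinates underlying a normal Q-Q plot can be computed with the standard library alone; the sample data below are invented for illustration:

```python
from statistics import NormalDist

def qq_points(data):
    """Pairs (theoretical normal quantile, observed value).  For roughly
    normal data the points lie close to a straight line; theoretical
    quantiles use the plotting positions (i - 0.5) / n."""
    n = len(data)
    observed = sorted(data)
    theoretical = [NormalDist().inv_cdf((i - 0.5) / n)
                   for i in range(1, n + 1)]
    return list(zip(theoretical, observed))

# hypothetical, approximately normal sample
data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9]
points = qq_points(data)
print(len(points))  # → 10
```

Plotting these pairs (e.g. in Excel or R) and checking how closely they follow the diagonal reproduces the visual test described above.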

Example 2 – Kolmogorov-Smirnov Test for normality

Ethical considerations      

Major ethical issues in conducting research:

(1) Informed consent

Informed consent is the major ethical issue in conducting research. A potential respondent should knowingly, voluntarily, intelligently, and in a clear and manifest way, give his or her consent.

(2) Respect for anonymity and confidentiality

Anonymity is protected when the subject's identity cannot be linked with personal responses. If the researcher is not able to promise anonymity, he or she has to address confidentiality, which is the management of private information by the researcher in order to protect the subject's identity.
