Overview of Data Collection and Analysis Methods

A well-chosen and well-implemented approach to data collection and analysis is essential for any programme. What data will be collected, and how they will be analysed and synthesised to answer the key evaluation questions (KEQs), should be decided before any collection or analysis begins. The main points to consider for a successful programme are:

  • Data collection and analysis methods should be chosen to match the particular evaluation in terms of its key evaluation questions (KEQs) and the resources available.
  • Impact evaluations should make maximum use of existing data and then fill gaps with new data.
  • Data collection and analysis methods should be chosen to complement each other’s strengths and weaknesses.

Steps for Data Collection and Analysis

Before anything starts, the purpose of the evaluation and the KEQs must be decided. Normally no more than ten KEQs are agreed, with input from key stakeholders. Good KEQs ask not just “What were the results?” but also “How good were the results?”. The main types of KEQ are:

  1. For descriptive KEQs, a range of analysis options is available. The options can largely be grouped into two key categories: options for quantitative data (numbers) and options for qualitative data (e.g., text).
  2. For causal KEQs, there are essentially three broad approaches to causal attribution analysis: (1) counterfactual approaches; (2) consistency of evidence with causal relationship; and (3) ruling out alternatives. Ideally, a combination of these approaches is used to establish causality.
  3. For evaluative KEQs, specific evaluative rubrics linked to the evaluative criteria employed should be applied in order to synthesize the evidence and make judgements about the worth of the programme or policy.
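The counterfactual approach to causal KEQs can be sketched in a few lines. This is a minimal illustration with made-up outcome scores, not real evaluation data: the comparison group stands in for what would have happened without the programme, and the difference in mean outcomes is a simple impact estimate.

```python
from statistics import mean

# Hypothetical outcome scores (illustrative values, not real data)
treatment = [72, 68, 75, 80, 77, 71]   # programme participants
comparison = [65, 70, 62, 68, 66, 64]  # matched non-participants

# Counterfactual approach: the comparison group estimates what would
# have happened without the programme; the difference in mean outcomes
# is a simple estimate of impact.
estimated_impact = mean(treatment) - mean(comparison)
print(f"Estimated impact: {estimated_impact:.1f} points")
```

In practice this estimate is only credible if the comparison group is genuinely equivalent, which is why the other two approaches (consistency of evidence, ruling out alternatives) are usually combined with it.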

Next, plan by reviewing the extent to which existing data, such as official statistics and records from previous programmes, can be used. Baseline data, which can be used to determine the groups’ equivalence before the programme began or to ‘match’ different groups, should also be considered.
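One simple way to check baseline equivalence is a standardised difference in baseline means between the two groups. The scores below are hypothetical, and the crude pooled standard deviation is an assumption made for brevity; a value near zero suggests the groups were similar before the programme began.

```python
from statistics import mean, stdev

# Hypothetical baseline scores for two groups (illustrative values)
group_a = [54, 58, 51, 60, 55]
group_b = [53, 57, 52, 59, 56]

# Standardised difference in baseline means: difference divided by an
# averaged standard deviation. Values near zero suggest equivalence.
pooled_sd = (stdev(group_a) + stdev(group_b)) / 2
std_diff = (mean(group_a) - mean(group_b)) / pooled_sd
print(f"Standardised baseline difference: {std_diff:.2f}")
```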

The next step is to create an evaluation matrix showing which data collection and analysis methods will be used to answer each KEQ, and then to identify and prioritize the data gaps that need to be addressed by collecting new data. Mixing methods helps to overcome the weaknesses inherent in each method when used alone, and it increases the credibility of evaluation findings when information from different data sources converges. The key purposes of combining data sources are:

  1. Enriching – Using qualitative data to identify issues or obtain information about variables that cannot be obtained by quantitative approaches
  2. Examining – Generating hypotheses from qualitative data to be tested through the quantitative data (such as identifying subgroups that should be analysed separately in the quantitative data, e.g., to investigate differential impact)
  3. Explaining – Using qualitative data to understand unanticipated results from quantitative data
  4. Triangulating (confirming or rejecting) – Verifying or rejecting results from quantitative data using qualitative data (or vice versa)
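An evaluation matrix can be sketched as a simple mapping from each KEQ to the methods planned to answer it. The KEQs and methods below are hypothetical examples; the gap check flags questions that rely on a single method, since mixed methods let strengths and weaknesses complement each other.

```python
# Hypothetical evaluation matrix: each KEQ mapped to its planned
# data collection and analysis methods (illustrative entries only).
evaluation_matrix = {
    "How good were the results?": ["evaluative rubric", "outcome survey"],
    "What caused the observed changes?": ["counterfactual comparison",
                                          "key-informant interviews"],
    "Were any groups affected differently?": ["subgroup analysis",
                                              "focus groups"],
}

# Gap check: flag any KEQ that relies on a single method.
gaps = [keq for keq, methods in evaluation_matrix.items() if len(methods) < 2]
print("KEQs needing additional methods:", gaps or "none")
```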

Once the planning is complete, it is important to check the feasibility of the data collection methods and analysis to ensure that what is proposed can actually be accomplished within the limits of the evaluation time frame and resources.

As the programme begins, data management becomes critical. Good data management includes developing effective processes for: consistently collecting and recording data, storing data securely, cleaning data, transferring data, effectively presenting data, and making data accessible for verification and use by others. Below are the aspects of data quality to consider:

  • Validity: Data measure what they are intended to measure.
  • Reliability: Data are measured and collected consistently according to standard definitions and methodologies; the results are the same when measurements are repeated.
  • Completeness: All data elements are included (as per the definitions and methodologies specified).
  • Precision: Data have sufficient detail.
  • Integrity: Data are protected from deliberate bias or manipulation for political or personal reasons.
  • Timeliness: Data are up to date (current) and information is available on time.
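Some of these quality aspects can be checked automatically during data cleaning. The sketch below, using hypothetical records and an assumed reporting cut-off date, checks completeness (no missing data elements) and timeliness (records collected within the current period):

```python
from datetime import date

# Hypothetical monitoring records; None marks a missing data element.
records = [
    {"id": 1, "score": 78, "collected": date(2024, 3, 1)},
    {"id": 2, "score": None, "collected": date(2024, 3, 2)},
    {"id": 3, "score": 91, "collected": date(2023, 11, 20)},
]

# Completeness: every required data element is present.
complete = [r for r in records if r["score"] is not None]

# Timeliness: collected within the reporting period (assumed cut-off).
cutoff = date(2024, 1, 1)
timely = [r for r in records if r["collected"] >= cutoff]

print(f"{len(complete)}/{len(records)} complete, "
      f"{len(timely)}/{len(records)} timely")
```

Validity, reliability, and integrity are harder to automate and usually rely on standard definitions, repeated measurement, and audit trails rather than code.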

Last but not least, consider sampling error, which exists in any sample. Three basic clusters of sampling options are available; different ways of sampling introduce different types of bias when assessing the results.

  1. Probability: Use random or quasi-random methods to select the sample, and then use statistical generalization to draw inferences about that population.
  2. Purposive: Study information-rich cases from a given population to make analytical inferences about the population. Units are selected based on one or more predetermined characteristics, and the sample size can be as small as one.
  3. Convenience: These sampling options use individuals who are available, or cases as they occur.
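The difference between probability and convenience sampling can be illustrated with a simulated population. Everything here is synthetic (a hypothetical population of 1,000 outcome scores): random selection supports statistical generalization, while taking the first available cases can mislead if those cases differ systematically from the rest.

```python
import random
from statistics import mean

# Hypothetical population of 1,000 outcome scores (synthetic data).
random.seed(42)
population = [random.gauss(70, 10) for _ in range(1000)]

# Probability sampling: random selection lets statistical generalization
# carry the sample estimate to the whole population.
prob_sample = random.sample(population, 50)

# Convenience sampling: take whichever cases occur first; biased if
# early cases differ systematically from the rest.
conv_sample = population[:50]

print(f"Population mean:   {mean(population):.1f}")
print(f"Probability sample mean: {mean(prob_sample):.1f}")
print(f"Convenience sample mean: {mean(conv_sample):.1f}")
```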

Reference:

Overview: Data Collection and Analysis Methods in Impact Evaluation
