A complete and accurate description of a program's components is essential to assessing its implementation:
– the strategies,
– the means of communication, and
– the technologies used to implement the program, together with an indication of the beneficiaries and the place of implementation.
Proper and accurate identification of the program's components makes it possible to assess which aspects of the program were implemented as planned and which factors may explain differences in implementation.
Correct identification of the components also makes it possible to verify whether the intended scope of the program (its target beneficiaries) has been respected, and to formulate conjectures about possible links between the results of implementation and the results of the program itself (in terms of outputs, intermediate results, impact, etc.).
At the same time, the specification (or detailing) of the content of the program is a prerequisite for the evaluation process.
Taking the initiative to plan and carry out the assessment process contributes to specifying the most suitable and realistic content for the program. This is important, first, to ensure that the program is more effective (because its internal consistency has been checked in advance) and, second, to make the assessment of outcomes and impacts more effective, since the program's performance is compared against objectives and expectations that are more consistent and more realistic.
Several techniques can be used so that the evaluation process improves the design and specification of a public program:
1 – Formative assessment: based on data collected from pilot projects and beneficiaries about the implementation of a specific intervention, it provides information on the feasibility of specific activities and tools and on their suitability for the design plan and for the beneficiaries;
2 – Evaluability assessment: a systematic procedure for properly developing the theory behind a public program, detailing and clarifying the intended use of the data in the evaluation process before a comprehensive assessment begins.
Its most important steps include (Scheirer, 1994:49-50):
a) engaging key policy makers, managers and staff in a series of meetings to clarify their expectations of the program and of the evaluation itself;
b) using a logic model matrix, which lists the expected causal relationships between three aspects of the program: the resources allocated to it, the implementation of the specific activities it plans, and the expected outcomes;
c) refining the theory behind the program through an interactive process, using visits to project sites and available information to examine the reality of operations on the ground and the extent to which the proposed theory is plausible;
d) clarifying the intended use of the information derived from the evaluation through discussions with policy makers and program managers, including possible changes to the program;
e) using theory to help specify the program: applying theories relevant to the substantive issue the program addresses, and using data to elucidate the underlying processes.
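The logic model in step (b) can be represented as a simple data structure linking the three aspects. A minimal sketch in Python; the example program and its entries are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LogicModelRow:
    """One row of a program logic model: resources -> activities -> outcomes."""
    resources: list   # inputs allocated to the program
    activities: list  # specific activities the program plans to implement
    outcomes: list    # results those activities are expected to cause

# Hypothetical row for a school nutrition program.
row = LogicModelRow(
    resources=["food budget", "kitchen staff"],
    activities=["serve one meal per school day"],
    outcomes=["lower malnutrition rates among enrolled children"],
)

# Reading the row left to right states the assumed causal chain explicitly,
# which is what the evaluability review then tests against field reality.
print(row.resources, "->", row.activities, "->", row.outcomes)
```

Making the assumed causal chain explicit in this form is what allows step (c) to check each link against operations on the ground.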
This type of evaluation process is important not only to specify the content of the program but also to link program activities to the outcome measures (indicators) that will be used in later impact assessments.
The term theory here refers to the interconnected principles that explain and predict the behavior of a person, group or organization.
Chen (1990) distinguishes two types of theories:
– normative theory, which defines what a program should be, and
– causative theory, which empirically describes the causal relationships between the proposed interventions (including contextual factors) and the outcomes.
The central problem in this case is to examine the effectiveness of the program; to achieve this, the evaluation uses these theories to establish causal relationships between a program's actions and its end results.
The purpose of such an assessment can be defined as determining the net effects of a social intervention. Like goal-based evaluation, this approach is carried out after the end of the program or of its individual stages.
Process evaluation – This type of evaluation explores the development of social programs in a systematic way in order to measure the program's reach, determine the degree to which its objectives are achieved and, in particular, monitor its internal processes. The aim is to reveal possible flaws in the development of procedures by recording events and activities, identifying barriers and obstacles to their implementation, and generating data important for reprogramming.
Thus, the appropriate use of information generated during the development of the program allows its content to be changed during execution. Unlike the previous approaches, this evaluation method is carried out simultaneously with the development of the program, and for that reason it is also called formative evaluation. Its implementation, however, presupposes that the program's procedures and processes can be specified.
It also assumes the existence of an adequate management information system to serve as a basis for the work of managers and, where appropriate, evaluators.
An application of the methodology of evaluation of social programs:
It is a comprehensive evaluation system that uses methods enabling both the assessment of outcomes and the assessment of processes, together with the settings and modes of operation adopted in the proposed model.
Evaluation of the results:
Results are defined as comprising immediate results, medium-term results and long-term results (impacts).
For the evaluation, it is proposed to use impact indicators to measure the long-term results related to the objectives of the program, and output indicators to measure the immediate and medium-term results. Output indicators measure the effect of the program both on the target group as a whole and on the users of the program. In the first case, two types of output indicators should be collected, either through field research or with the help of databases and/or existing records:
– Degree of global coverage:
Measures the coverage rate of the target population of the program. Both a deficit and a surplus of beneficiaries are reasons for course corrections: the first indicates the need for expansion, while the second indicates that resources are being wasted (individuals outside the target group benefit);
– Coverage by subgroup:
Measures the participation of different subgroups of the proposed target group. This rate may reveal discrimination (or bias) in the selection of program clients by region, age, gender, etc.

As for the second point, i.e. evaluating results for the users of the program, benefit indicators can be used that take into account the specific objectives of each program or project.
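The two coverage indicators above reduce to simple ratios. A minimal sketch in Python; all counts, regions and thresholds are invented purely for illustration:

```python
# Hypothetical counts for a social program (all figures invented).
target_population = 10_000          # people the program is intended to reach
beneficiaries_in_target = 6_500     # beneficiaries who belong to the target group
beneficiaries_outside_target = 900  # beneficiaries outside the target group

# Degree of global coverage: share of the target population actually reached.
# A low value signals a deficit (need for expansion).
global_coverage = beneficiaries_in_target / target_population

# Leakage: share of all beneficiaries who are not in the target group.
# A high value signals wasted resources (untargeted individuals benefit).
total_beneficiaries = beneficiaries_in_target + beneficiaries_outside_target
leakage = beneficiaries_outside_target / total_beneficiaries

# Coverage by subgroup: large gaps between subgroups may reveal selection
# bias by region, age, gender, etc.
target_by_region = {"north": 4_000, "south": 6_000}
beneficiaries_by_region = {"north": 1_500, "south": 5_000}
coverage_by_region = {
    region: beneficiaries_by_region[region] / target_by_region[region]
    for region in target_by_region
}

print(f"global coverage: {global_coverage:.0%}")   # 65% of the target reached
print(f"leakage: {leakage:.1%}")                   # ~12% of beneficiaries untargeted
print(coverage_by_region)                          # north far below south
```

In this invented example, the north region's much lower coverage rate is exactly the kind of gap the subgroup indicator is meant to surface.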
Rob Vos (1993) gives some examples of the indicators most commonly used for program users and the target audience:
1 – for nutrition programs – malnutrition rates by age, mortality and morbidity;
2 – for educational programs – illiteracy, repetition and dropout rates; school enrollment ratios and educational attainment;
3 – for health programs – general mortality, infant mortality, maternal and childbirth mortality rates, fertility and life expectancy at birth;
4 – for housing programs – quantitative housing deficit, quality of housing construction and availability of basic services.

Input indicators show the means and resources available to achieve the goals. Scarce or insufficient resources (in terms of money, manpower, equipment, etc.) almost always tend to undermine the expected results.
Vos (1993) gives some examples of the most common input indicators:
a) – for nutrition programs – availability of food per person;
b) – for educational programs – student/teacher and student/school ratios, number of grades offered by the school, and availability of educational materials for students;
c) – for health programs – doctors per capita, health posts per capita, beds per inhabitant and available vaccines per capita.
Access indicators, in turn, identify the determinants of whether the resources available to programs are effectively used to achieve the intended goals. The most common are:
a) – for health programs – the number of medical consultations per adult equivalent, distance to the nearest health service, disposable family income (useful, for example, to assess the ability to purchase medicines) and cultural factors;
b) – for educational programs – distance to school, adequacy of the curriculum and disposable family income (enabling, for example, the purchase of school supplies).
In addition, questionnaires can be used to measure client satisfaction, which is a good indicator of quality, although not the only one or the most complete. It is also possible to create composite indicators by constructing indices from a set of attributes defined on the basis of the characteristics of the service.
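One common way to build such a composite index is a weighted average of attribute scores. A minimal sketch in Python; the attributes, weights and questionnaire scores are all hypothetical:

```python
# Hypothetical service attributes scored 0-10 from user questionnaires,
# with weights reflecting each attribute's relative importance (summing to 1).
scores  = {"waiting_time": 6.0, "staff_courtesy": 8.0, "facility_quality": 7.0}
weights = {"waiting_time": 0.5, "staff_courtesy": 0.3, "facility_quality": 0.2}

# Sanity check: weights must sum to 1 for the index to stay on the 0-10 scale.
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Composite quality index: weighted average of the attribute scores.
quality_index = sum(scores[attr] * weights[attr] for attr in scores)

print(f"composite quality index: {quality_index:.2f}")  # 6.80 on a 0-10 scale
```

The choice of attributes and weights is itself an evaluation decision, so it should be derived from the characteristics of the service, as the text notes, rather than fixed arbitrarily.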
Process evaluation can thus be defined as a method of determining the actual content of a public program: whether it is delivered as planned, whether it reaches the intended audience and whether the benefits are delivered at the planned intensity (Scheirer, 1994:40).
Thanks to Artur Victoria