By contrast, the nonparticipants - in the role of students - believe they assimilate the material better when presented with smaller quantities of information in informal settings. Their different approaches to the question might reflect different perceptions based on this temporary rearrangement in their roles. The department chair occupies a different structural position in the university than either the participating or nonparticipating faculty.
She may be too removed from day-to-day exchanges among the faculty to see much of what is happening on this more informal level. By the same token, her removal from the grassroots might give her a broader perspective on the subject.

Data display in cross-case analysis. The principles applied in analyzing across cases essentially parallel those employed in the intra-case analysis.
Looking down column a, one sees differences in the number and variety of knowledge-sharing activities named by participating faculty at the eight schools. Brown bag lunches, department newsletters, workshops, and dissemination of written hard-copy materials have been added to the list, which for branch campus A included only structured seminars, e-mail, informal interchanges, and lunchtime meetings.
This expanded list probably encompasses most, if not all, such activities at the eight campuses. In addition, where applicable, we have indicated whether the nonparticipating faculty involvement in the activity was compulsory or voluntary.
In Exhibit 11, we are comparing the same group on different campuses, rather than different groups on the same campus, as in Exhibit 10. Column b reveals some overlap across participants in which activities were considered most effective: structured seminars were named by participants at campuses A and C, brown bag lunches by those at campuses B and H.
However, as in Exhibit 10, the primary reasons for naming these activities were not always the same. Brown bag lunches were deemed most effective because of their interactive nature (campus B) and the relaxed environment in which they took place (campus H), both suggesting a preference for less formal learning situations. However, while campus A participants judged voluntary structured seminars the most effective way to communicate a great deal of information, campus C participants also liked that the structured seminars on their campus were compulsory.
Participants at both campuses appear to favor structure, but may part company on whether requiring attendance is a good idea. It would also be worthwhile to examine the reasons participants gave for deeming one activity more effective than another, regardless of the activity. Data in column c show a tendency for participants on campuses B, D, E, F, and H to prefer voluntary, informal, hands-on, personal approaches.
By contrast, those from campuses A and C seemed to favor more structure, although they may disagree on voluntary versus compulsory approaches. The answer supplied for campus G, "best time," is ambiguous and requires returning to the transcripts to see whether more material can be found to clarify this response.
To have included all the knowledge-sharing information from four different respondent groups on all eight campuses in a single matrix would have been quite complicated. Therefore, for clarity's sake, we present only the participating faculty responses. However, to complete the cross-case analysis of this evaluation question, the same procedure should be followed - if not in matrix format, then conceptually - for nonparticipating faculty and department chairpersons.
For each group, the analysis would be modeled on the above example. It would be aimed at identifying important similarities and differences in what the respondents said or observed and exploring the possible bases for these patterns at different campuses.
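The matrix-based comparison described above can also be sketched in code. The following is a minimal illustration only: the campuses shown and every activity, judgment, and reason in it are invented placeholders, not findings from the study. It organizes responses as a campus-by-column matrix and then groups campuses by which activity they judged most effective (the column b comparison).

```python
# Hypothetical sketch of a cross-case matrix like Exhibit 11: rows are campuses,
# columns are (a) activities named, (b) activity judged most effective, (c) reason.
# All entries below are invented for illustration.
matrix = {
    "A": {"activities": ["structured seminars", "e-mail"],
          "best": "structured seminars", "reason": "conveys much information"},
    "B": {"activities": ["brown bag lunches", "newsletters"],
          "best": "brown bag lunches", "reason": "interactive"},
    "C": {"activities": ["structured seminars", "workshops"],
          "best": "structured seminars", "reason": "compulsory attendance"},
    "H": {"activities": ["brown bag lunches", "informal talks"],
          "best": "brown bag lunches", "reason": "relaxed environment"},
}

# Column b comparison: group campuses by the activity they judged most effective.
overlap = {}
for campus, row in matrix.items():
    overlap.setdefault(row["best"], []).append(campus)

for activity, campuses in sorted(overlap.items()):
    print(f"{activity}: most effective at campuses {', '.join(sorted(campuses))}")
```

Scanning the grouped output, rather than eight separate transcripts, is what makes the cross-case patterns (and the differing reasons behind identical choices) easy to interrogate.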
Much of qualitative analysis, whether intra-case or cross-case, is structured by what Glaser and Strauss called the "method of constant comparison," an intellectually disciplined process of comparing and contrasting across instances to establish significant patterns, then further questioning and refinement of these patterns as part of an ongoing analytic process.
Conclusion drawing and verification. This activity is the third element of qualitative analysis. Conclusion drawing involves stepping back to consider what the analyzed data mean and to assess their implications for the questions at hand. Validity means something different in this context than in quantitative evaluation, where it is a technical term that refers quite specifically to whether a given construct measures what it purports to measure. Here validity encompasses a much broader concern for whether the conclusions being drawn from the data are credible, defensible, warranted, and able to withstand alternative explanations.
Reducing the data and looking for relationships will provide adequate information for developing other instruments. For many qualitative evaluators, it is above all this third phase that gives qualitative analysis its special appeal.
At the same time, it is probably also the facet that quantitative evaluators and others steeped in traditional quantitative techniques find most disquieting. Once qualitative analysts begin to move beyond cautious analysis of the factual data, the critics ask, what is to guarantee that they are not engaging in purely speculative flights of fancy? Indeed, their concerns are not entirely unfounded. If the unprocessed "data heap" is the result of not taking responsibility for shaping the "story line" of the analysis, the opposite tendency is to take conclusion drawing well beyond what the data reasonably warrant or to prematurely leap to conclusions and draw implications without giving the data proper scrutiny.
The question about knowledge sharing provides a good example. The underlying expectation, or hope, is for a diffusion effort, wherein participating faculty stimulate innovation in teaching mathematics among their colleagues. A cross-case finding might be that participating faculty at three of the eight campuses made active, ongoing efforts to share their new knowledge with their colleagues in a variety of formal and informal settings. At two other campuses, initial efforts at sharing started strong but soon fizzled out and were not continued.
In the remaining three cases, one or two faculty participants shared bits and pieces of what they had learned with a few selected colleagues on an ad hoc basis, but otherwise took no steps to diffuse their new knowledge and skills more broadly. Taking these findings at face value might lead one to conclude that the project had largely failed in encouraging diffusion of new pedagogical knowledge and skills to nonparticipating faculty.
After all, such sharing occurred in the desired fashion at only three of the eight campuses. However, before jumping ahead to conclude that the project was disappointing in this respect, or to generalize beyond this case to other similar efforts at spreading pedagogic innovations among faculty, it is vital to examine more closely the likely reasons why sharing among participating and nonparticipating faculty occurred, and where and how it did.
The analysts would first look for factors distinguishing the three campuses where ongoing organized efforts at sharing did occur from those where such efforts were either not sustained or occurred in largely piecemeal fashion. However, it will also be important to differentiate among the less successful sites to tease out factors related both to the extent of sharing and the degree to which activities were sustained.
At the three "successful" sites, for example, faculty schedules may allow regularly scheduled common periods for colleagues to share ideas and information.
In addition, participation in such events might be encouraged by the department chair, and possibly even considered as a factor in making promotion and tenure decisions. The department might also contribute a few dollars for refreshments in order to promote a more informal, relaxed atmosphere at these activities. In other words, at the campuses where sharing occurred as desired, conditions were conducive in one or more ways: a new time slot did not have to be carved out of already crowded faculty schedules, the department chair did more than simply pay "lip service" to the importance of sharing (faculty are usually quite astute at picking up on what really matters in departmental culture), and efforts were made to create a relaxed ambiance for transfer of knowledge.
In the absence of these supports, a great burst of energy and enthusiasm at the beginning of the academic year will quickly give way under the pressure of the myriad demands, as happened for the second group of two campuses. Similarly, under most circumstances, the individual good will of one or two participating faculty on a campus will in itself be insufficient to generate the type and level of exchange that would make a difference to the nonparticipating faculty (the third set of campuses).
At some of the other campuses, structural conditions might not be conducive, in that classes are taught continuously from 8 a.m. onward, leaving little or no common time for faculty to meet. At another campus, scheduling might not present so great a hurdle.
However, the department chair may be so busy that despite philosophic agreement with the importance of diffusing the newly learned skills, she can do little to actively encourage sharing among participating and nonparticipating faculty. In this case, it is not structural conditions or lukewarm support so much as competing priorities and the department chair's failure to act concretely on her commitment that stood in the way.
By contrast, at another campus, the department chairperson may publicly acknowledge the goals of the project but really believe it a waste of time and resources. His failure to support sharing activities among his faculty stems from more deeply rooted misgivings about the value and viability of the project.
This distinction might not seem to matter, given that the outcome was the same on both campuses: sharing did not occur as desired. However, from the perspective of an evaluation researcher, whether the department chair believes in the project could make a major difference to what would have to be done to change the outcome.
We have begun to develop a reasonably coherent explanation for the cross-site variations in the degree and nature of sharing taking place between participating and nonparticipating faculty.
Arriving at this point required stepping back and systematically examining and re-examining the data, using a variety of what Miles and Huberman call "tactics for generating meaning." Qualitative analysts typically employ some or all of these, simultaneously and iteratively, in drawing conclusions. One factor that can impede conclusion drawing in evaluation studies is that the theoretical or logical assumptions underlying the research are often left unstated. In this example, as discussed above, these are assumptions or expectations about knowledge sharing and diffusion of innovative practices from participating to nonparticipating faculty and, by extension, to their students.
For the analyst to be in a position to take advantage of conclusion-drawing opportunities, he or she must be able to recognize and address these assumptions, which are often only implicit in the evaluation questions.
Toward that end, it may be helpful to explicitly spell out a "logic model" or set of assumptions as to how the program is expected to achieve its desired outcome(s).
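A logic model of this kind can be made explicit very simply, even in code. The sketch below is illustrative only; the stages follow the common inputs-to-impacts convention, and the descriptions are assumptions about this hypothetical faculty-development project, not statements from the evaluation itself.

```python
# Illustrative only: spelling out the program's implicit "logic model" as an
# ordered chain of stages. Stage descriptions are invented for the example.
logic_model = [
    ("inputs",     "project funding, participating faculty, release time"),
    ("activities", "training institutes, campus knowledge-sharing events"),
    ("outputs",    "faculty trained; seminars and brown bag lunches held"),
    ("outcomes",   "nonparticipating faculty adopt new teaching practices"),
    ("impacts",    "improved student learning in mathematics"),
]

for stage, description in logic_model:
    print(f"{stage:>10} -> {description}")
```

Writing the chain down forces the analyst to recognize exactly which link (here, the outputs-to-outcomes link of diffusion) a given evaluation question is probing.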
Recognizing these assumptions becomes even more important when there is a need or desire to place the findings from a single evaluation into wider comparative context vis-à-vis other program evaluations. Having created an apparently credible explanation for variations in the extent and kind of sharing that occurs between participating and nonparticipating faculty across the eight campuses, how can the analyst verify the validity - or truth value - of this interpretation of the data?
Miles and Huberman describe a number of tactics for testing or confirming findings; we will discuss only a few of these, which have particular relevance for the example at hand and emphasize critical contrasts between quantitative and qualitative analytic approaches. However, two points are very important to stress at the outset: several of the most important safeguards on validity - such as using multiple sources and modes of evidence - must be built into the design from the beginning; and the analytic objective is to create a plausible, empirically grounded account that is maximally responsive to the evaluation questions at hand.
As the authors note: "You are not looking for one account, forsaking all others, but for the best of several alternative accounts." One issue of analytic validity that often arises concerns the need to weigh evidence drawn from multiple sources and based on different data collection modes, such as self-reported interview responses and observational data.
Triangulation of data sources and modes is critical, but the results may not necessarily corroborate one another, and may even conflict. For example, another of the summative evaluation questions proposed in Chapter 2 concerns the extent to which nonparticipating faculty adopt new concepts and practices in their teaching. Answering this question relies on a combination of observations, self-reported data from participant focus groups, and in-depth interviews with department chairs and nonparticipating faculty.
In this case, there is a possibility that the observational data might be at odds with the self-reported data from one or more of the respondent groups. For example, when interviewed, the vast majority of nonparticipating faculty might say, and really believe, that they are applying project-related innovative principles in their teaching. However, the observers may see very little behavioral evidence that these principles are actually influencing teaching practices in these faculty members' classrooms.
It would be easy to brush off this finding by concluding that the nonparticipants are saving face by parroting what they believe they are expected to say about their teaching. But there are other, more analytically interesting, possibilities.
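One simple way to surface this kind of divergence is to cross-tabulate the two evidence modes. The sketch below is hypothetical: the faculty IDs and true/false values are invented, and a real analysis would compare coded interview and observation records rather than booleans.

```python
# Hypothetical illustration of weighing triangulated evidence: cross-tabulating
# self-reported adoption of the new teaching principles against what classroom
# observers recorded. All values are invented for the example.
self_report = {"f1": True, "f2": True, "f3": True, "f4": False, "f5": True}
observed    = {"f1": True, "f2": False, "f3": False, "f4": False, "f5": False}

crosstab = {("said yes", "seen yes"): 0, ("said yes", "seen no"): 0,
            ("said no",  "seen yes"): 0, ("said no",  "seen no"): 0}
for fac in self_report:
    key = ("said yes" if self_report[fac] else "said no",
           "seen yes" if observed[fac] else "seen no")
    crosstab[key] += 1

# A large "said yes / seen no" cell flags exactly the divergence discussed above.
for cell, count in crosstab.items():
    print(cell, count)
```

The table does not settle which mode is "right"; it locates the discrepancy so the analyst can pursue the more interesting explanations rather than dismissing the self-reports outright.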
Use a survey to determine whether the propaganda campaign raised awareness of recycling benefits, with respondents rating themselves. Use line charts to show the amount of awareness prior to the program, participation, and motivation growth on a single chart. Conditional probability and the rules of probability: understand independence and conditional probability and use them to interpret data. Again, use the survey results to assess whether the propaganda raised participation in the program.
This time, instead of line and bar charts, present the respondents' information on the outcomes (participation, non-participation) as Venn diagrams; the intersection shows how the propaganda affected their decision. Use a count for each region of the Venn diagram - for example, the share of the pool who would not recycle regardless of the campaign, the share who would recycle regardless, and so on.
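The independence check behind those Venn counts can be computed directly. In the sketch below, the four cell counts (200 respondents in total) are invented for illustration; the test is the defining one: the outcomes are independent exactly when P(recycle | saw ad) equals P(recycle).

```python
# Hypothetical survey counts for the recycling example: respondents split by
# whether they saw the propaganda and whether they now recycle. Numbers invented.
saw_and_recycle = 70
saw_not_recycle = 30
no_saw_recycle  = 40
no_saw_not      = 60
total = saw_and_recycle + saw_not_recycle + no_saw_recycle + no_saw_not

p_recycle = (saw_and_recycle + no_saw_recycle) / total               # P(R)
p_recycle_given_saw = saw_and_recycle / (saw_and_recycle + saw_not_recycle)  # P(R|S)

# Independence would mean P(R|S) == P(R); a gap suggests the campaign mattered.
print(f"P(recycle)          = {p_recycle:.2f}")
print(f"P(recycle | saw ad) = {p_recycle_given_saw:.2f}")
print("independent?", abs(p_recycle_given_saw - p_recycle) < 1e-9)
```

With these invented counts, 70% of those who saw the campaign recycle versus 55% overall, so the two events are not independent in the sample.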
Data analysis is a process within which several phases can be distinguished. The initial data analysis phase is guided by examining, among other things, the quality of the data (for example, the presence of missing or extreme observations), the quality of measurements, and whether the implementation of the study was in line with the research design.
In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected. In an exploratory analysis, no clear hypothesis is stated before analyzing the data, and the data is searched for models that describe the data well.
In a confirmatory analysis, clear hypotheses about the data are tested. The type of data analysis employed can vary. One way in which analysis often varies is by the quantitative or qualitative nature of the data. Quantitative data can be analyzed in a variety of ways, regression analysis being among the most popular. Regression analyses measure relationships between dependent and independent variables, taking the existence of unknown parameters into account.
A large body of techniques for carrying out regression analysis has been developed. In practice , the performance of regression analysis methods depends on the form of the data generating process and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process.
These assumptions are sometimes testable if a large amount of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods give misleading results.
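As a concrete toy illustration of the dependent/independent-variable relationship that regression measures, here is a minimal ordinary-least-squares fit in pure Python. The data points are invented, and a real analysis would use a statistics library and check the assumptions discussed above.

```python
# Simple linear regression (ordinary least squares) on invented data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # independent variable
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # dependent variable, roughly y = 2x plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# OLS slope and intercept minimize the sum of squared residuals.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f} * x")
```

The fitted slope (about 2) recovers the relationship built into the data; when the true data-generating process is nonlinear or confounded, the same formulas still return a line, which is exactly why the assumption caveats above matter.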
Qualitative data can involve coding--that is, key concepts and variables are assigned a shorthand, and the data gathered is broken down into those concepts or variables.
Coding allows sociologists to perform a more rigorous scientific analysis of the data.