Monday, February 16, 2009

wk6 case studies

A case study is an intensive, documented research project on a population, scenario, or group. It may contain qualitative and/or quantitative research. You can organize your subjects in multiple ways, but you are limited to the five hat racks: category (by similarity or relatedness) [e.g., Graves], time (chronologically) [Brandt], location (position) [Hayes], alphabet, and continuum.

Data are collected and organized through many different methods: surveys, recordings, observations, opinions, oral discourse, written discourse, conversations, and de/briefings. If the researcher has a defined hypothesis or theory, trends in the research can be compared in some way. The ideas of development and coding come into play during the analysis.

Generalizations can be made depending on the purpose of the research. If hard evidence and trends are demanded, then a well-documented and well-tested study needs to be completed. Most studies, though, are generalized and typically state that future research may be required. Still, some research may draw on many case studies to present an argument.

Sunday, February 8, 2009

wk5

Blog Question: How does conducting research on the Internet impact the
ways that researchers must deal with human subjects?

Internet-based research must take basic human rights into consideration. The IRB notes that Beneficence, Justice, and Respect for Persons must be upheld.

Depending on the nature of the online research, one must carefully consider whether the human subjects understand the risks, are aware of the research, have consented to the research, and are treated with respect. Gathering or publishing data without following these principles is unacceptable.

After reviewing the CITI modules online (requested by Clemson's review board), I am much more familiar with the histories of unethical research and the resulting consequences.




Monday, February 2, 2009

Wk4 - Measurement

What distinguishes qualitative from quantitative designs?

Qualitative designs are categorical; quantitative designs are numerical. Yet both are rhetorical. In terms of scale, one can adjust the perception of quantitative results, whereas this is much harder to achieve with qualitative designs. Qualitative designs are not mathematical (i.e., you cannot calculate a distance or assign a comparative 'value' to them).


What is the difference between validity and reliability?
The classic example is the dart board. You may be able to produce reliable results (consistent), but are they actually valid for measuring what it is you're researching? Results that are valid do not imply reliability (the internal and external consistency of the data), and vice versa.
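A minimal sketch of the dart-board analogy, using hypothetical numbers of my own (not from the course): reliability shows up as low spread around a point, validity as low bias from the true target.

```python
# Toy illustration of the dart-board analogy (hypothetical scores).
# Reliability ~ low spread (consistency); validity ~ low bias from the bullseye.
import statistics

bullseye = 50.0  # the "true" value we are trying to measure

# Reliable but not valid: tightly clustered, yet far from the bullseye.
reliable_not_valid = [62.1, 61.8, 62.3, 62.0, 61.9]
# Valid on average but not reliable: centered on the bullseye, widely scattered.
valid_not_reliable = [35.0, 65.0, 48.0, 52.0, 50.0]

for label, scores in [("reliable, not valid", reliable_not_valid),
                      ("valid, not reliable", valid_not_reliable)]:
    bias = statistics.mean(scores) - bullseye   # distance from the target
    spread = statistics.stdev(scores)           # consistency of the throws
    print(f"{label}: bias = {bias:+.1f}, spread = {spread:.1f}")
```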

What is meant by probability and significance?
These go hand-in-hand. The percent chance of occurrence is the probability, but it may mean little if your data are not significant. You can provide probabilities, means, deviations, etc., for objects, but if they are not significantly different, your probabilities are not valid.
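As a rough illustration (with made-up numbers, not from the readings), a two-sample t-test reports a p-value: the probability of seeing a difference this large by chance alone. If that probability is above the conventional 0.05 cutoff, the groups are not significantly different, and reporting their means proves little.

```python
# Hypothetical example: comparing two small samples with a t-test.
# The p-value is the probability of observing a difference this large by chance;
# a common convention treats p < 0.05 as "statistically significant".
from scipy import stats

group_a = [4.1, 3.9, 4.3, 4.0, 4.2]
group_b = [4.0, 4.4, 3.8, 4.1, 4.3]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The group means are significantly different.")
else:
    print("No significant difference; the means alone tell us little.")
```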


Sunday, January 25, 2009

Week 3

Kinneavy - Modes of Discourse (a brief review):

The Modes - in relationship to the object over time:
1. Narration: dynamic
2. Description: static
3. Classification: static
4. Evaluation: dynamic

I believe Kinneavy aims for "aims." Of course, all good discourse needs to involve each mode, but well-composed discussions should 'aim' toward one of the four.

(4) evaluation of the readings

Garrett: the reading is clearly classifying elements. There are touches of narration (histories, hypertext, peeps) as well as descriptions (definitions) and evaluations (the future 'powers' of the web and larger claims). But the objective was to classify elements: to deconstruct the architecture of an eCommerce site.

Miller: looking beneath the surface and into the skeletal plane of this work, I find the aim to be an evaluation of practical rhetoric. But the aim of the paper is classificatory. Of course, one could deconstruct the paper and find bits and pieces of all the modes, but the paper is static. It is a stative description/evaluation of a subject over time. The descriptions of "procedural rhetoric [could, would, may, be]" are stative rather than dynamic. Thus, the category can be considered neither argumentative nor an evaluation.

Plato: First, my favorite quote: "Every speech must be put together like a living creature, with a body of its own; it must be neither without head nor without legs; and it must have a middle and extremities that are fitting both to one another and to the whole work."

This clearly identifies the message of the story - at first a narrative (surface), but the intent is to classify rhetoric as the art of persuasion. Again, to classify.

Hackos and Redish: from start to finish, this is clearly an act of classificatory taxonomy, answering in a static sense 'what is the object' over time. This is one scary document; the idea of creating a workflow evaluation of tasks seems to undermine human existence.

All the readings are classificatory. After the read-relatings, I found that Garrett provides an excellent description of Kinneavy's taxonomy. Surface/skeleton can be seen as narration, structure as description, scope as classification, and strategy as evaluation. All are required, but each discourse aims to promote an idea (for example, why lecturers should not be fired at Clemson):

1. Narration: the story/histories of the lecturer; their role (dynamically, over time).
2. Description: a stream of consciousness concerning the lecturer (as in outside the "professor's window"); how the department's rigamarole is typically deposited on the lecturer...almost a skeletal backbone of the university.
3. Classification: the importance of a lecturer; the need, the idea of 'intermittent faculty' or non-tenure-track teaching; how a lecturer fits into the breadth of specialization and importance to each department.
4. Evaluation: best put, the 'practical rhetoric' of a lecturer; a case for the position.

Wednesday, January 14, 2009