Monday, May 02, 2005

Elaboration and Rosenberg Summary Part Two

The theoretical purpose of introducing an antecedent variable is the same as that of introducing an intervening variable, namely, to trace out a causal sequence.

An ANTECEDENT VARIABLE is one which comes BEFORE the Independent Variable in the sequence.

Note that the ANTECEDENT VARIABLE is a true effective influence; it does not explain away the relationship between the Independent and Dependent Variables, but clarifies the influences which precede the relationship.

So say you have discovered a causal relationship between Education (Independent Variable) and Political Knowledge (Dependent Variable). You have found evidence of causality and everyone agrees with you and all is well in the world. But, then you get bored and your 15 minutes of fame are nearing their end so you decide to get your ass back in gear and build upon your already solid relationship. What ever will you do? How about trying to find out what causes your Independent Variable (Education)? This is the question that ANTECEDENT VARIABLE analysis is designed to answer.

Now... yippee! You've found support for a causal relationship between Social Class (Independent Variable) and Education (Dependent Variable). In the relationship between Social Class and Education and Political Knowledge, Social Class is the ANTECEDENT VARIABLE. You are in the spotlight once more and are the talk of the town (for a little bit longer...)

In order to fully qualify as an ANTECEDENT VARIABLE, there are three statistical requirements which must be satisfied:
1. All three variables - antecedent, independent, and dependent - must be related. (That only makes sense! And is just a little bit obvious.)
2. When the ANTECEDENT VARIABLE is controlled, the relationship between the Independent and the Dependent Variable should not vanish. (Well, no shit! This relationship had already been established before we introduced the antecedent variable; it sure as hell better not disappear when we put things back the way they were!)
3. When the Independent Variable is controlled, the relationship between the Antecedent Variable and the Dependent Variable should disappear. (This makes sense if you remember how that worked with the Intervening Variable. In this case, the Independent Variable is in the middle - like the Intervening Variable was - and just like before, it would be like taking the meat out from in between our slices of bread - no more sandwich!)
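
Not from Rosenberg or the lecture - just a quick Python sketch with totally made-up numbers, in case it helps to see the three checks as actual arithmetic. It fakes a class -> education -> knowledge chain and then runs the three requirements on it:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical causal chain: class -> education -> knowledge (all coded 0/1)
    social_class = rng.integers(0, 2, n)
    education = (rng.random(n) < 0.3 + 0.4 * social_class).astype(int)
    knowledge = (rng.random(n) < 0.3 + 0.4 * education).astype(int)
    df = pd.DataFrame({"class": social_class, "edu": education, "know": knowledge})

    def pct_diff(data, x, y):
        # percentage-point difference in y between the two categories of x
        rates = data.groupby(x)[y].mean()
        return round(100 * (rates.loc[1] - rates.loc[0]), 1)

    # Requirement 1: all three variables are related (all three gaps well above zero)
    print(pct_diff(df, "class", "edu"), pct_diff(df, "edu", "know"), pct_diff(df, "class", "know"))

    # Requirement 2: control the antecedent (class); edu -> know should NOT vanish
    for c, part in df.groupby("class"):
        print("class =", c, "edu -> know:", pct_diff(part, "edu", "know"))

    # Requirement 3: control the independent variable (edu); class -> know SHOULD roughly vanish
    for e, part in df.groupby("edu"):
        print("edu =", e, "class -> know:", pct_diff(part, "class", "know"))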


Rosenberg Chapter 4

SUPPRESSOR VARIABLES are nasty little buggers that can fool you if you don't look out for them and uncover their existence. They sneak into your study/research, doing their best to remain undetected, and work to "cover up" and/or "hide" things that directly affect the apparent relationship between your Independent and Dependent Variables.

Basically, you need to know that it is possible to be misled into thinking there is no relationship between two variables when there really is one. It may appear that there is no inherent link between the variables when, in fact, your inability to "see" the relationship (the apparent absence of any relationship) between the Independent and Dependent Variables may be due to the intrusion of a third variable.

Remember the overhead example with the study that wanted to measure Feelings of Estrangement in African-Americans as compared to Caucasians? The simple analysis done in a Bivariate Table showed "no" difference between culture and estrangement, but when they controlled for education (basic vs. higher) we could see that whites actually had a higher percentage of cultural estrangement than African-Americans at each education level.

All in all, SUPPRESSOR VARIABLES are like the "opposite" of EXTRANEOUS VARIABLES. Remember, Extraneous Variables (storks and babies) made it look like two variables were related when they weren't, and when the Extraneous Variable was controlled for, the relationship disappeared. Well, in this case, when the SUPPRESSOR VARIABLE is controlled for (tah-dah!) the relationship is revealed.
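
Here's a tiny Python sketch of that (my own invented counts, NOT the actual study's figures), just to show the arithmetic of a suppressor hiding a relationship until you control for it:

    # (group, education) -> (number of people, number scoring "estranged") - invented numbers
    counts = {
        ("White", "basic"):             (200, 120),   # 60% estranged
        ("White", "higher"):            (800, 240),   # 30%
        ("African-American", "basic"):  (800, 360),   # 45%
        ("African-American", "higher"): (200,  30),   # 15%
    }

    def rate(keys):
        # percent estranged across the given (group, education) cells
        n = sum(counts[k][0] for k in keys)
        e = sum(counts[k][1] for k in keys)
        return round(100 * e / n, 1)

    # Bivariate table (education ignored): the two groups look about the same
    print("Whites overall:", rate([("White", "basic"), ("White", "higher")]))                                    # 36.0
    print("African-Americans overall:", rate([("African-American", "basic"), ("African-American", "higher")]))  # 39.0

    # Control for the suppressor (education) and the difference pops out at every level
    for edu in ("basic", "higher"):
        print(edu, "education:", rate([("White", edu)]), "vs", rate([("African-American", edu)]))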


Now, if you thought the Suppressor Variables were menacing, wait until you hear about these DISTORTER VARIABLES. When you find one of these (hidden deep within your analysis, undetectable in Bivariate Tables just like the Suppressor Variables were), it reveals that the correct interpretation is (now watch this) precisely the reverse of that suggested by the original data. These rat bastards convert a positive relationship into a negative relationship.

Rosenberg (and I think Nigem, too) gave the example of a study which found that lower-class people are somewhat more likely to hold favorable attitudes toward civil rights. Researchers thought that could mean that lower-class people have a generally more "liberal" or "progressive" ideology (favoring civil rights). They then went on to speculate on the bearing of an underprivileged social position on an ideology favoring equal rights. (Now for the big switcheroo...) The study was done in Washington D.C. (a city with a high African-American population), and when they examined the relationship of class to civil rights among Whites and African-Americans separately they found (guess what?): the relationship is exactly opposite of that originally shown. In reality, among African-Americans, upper-class people are more likely than lower-class people to favor civil rights, and the same is true for Whites.
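
If you want to see how that reversal is even arithmetically possible, here's a made-up-numbers sketch (mine, not Rosenberg's actual figures) - it's just Simpson's-paradox-style lopsidedness in who falls into which class:

    import pandas as pd

    # Invented counts: group, class, number of people, number favoring civil rights
    rows = [
        ("African-American", "lower", 800, 560),   # 70% favor
        ("African-American", "upper", 200, 170),   # 85%
        ("White",            "lower", 200,  60),   # 30%
        ("White",            "upper", 800, 360),   # 45%
    ]
    df = pd.DataFrame(rows, columns=["group", "class", "n", "favor"])

    # Distorted bivariate picture: lower class looks MORE favorable overall (62% vs 53%)...
    overall = df.groupby("class")[["n", "favor"]].sum()
    print((100 * overall["favor"] / overall["n"]).round(1))

    # ...but within each group, the upper class is more favorable - the real pattern
    within = df.set_index(["group", "class"])
    print((100 * within["favor"] / within["n"]).round(1))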

There you have it - Rosenberg, Cliff's Notes Style

It's 3:08 am... do you know where your Methods Project is?

Sunday, May 01, 2005

Elaboration Station (and Rosenberg Summary) Part One

Okay, so ELABORATION is so not even in my Babbie Book so I'm totally going on The Little Man's words and our good buddy Rosenberg.

Here are some highlights:

Chapter 2

The most important systematic way of examining the relationship between two variables is to introduce a third variable, called a test factor, into the analysis. This is what is meant by the process of elaboration.

Typically, one begins with a relationship between an Independent Variable and a Dependent Variable (then introduce test factor - aka explanatory variable). The purpose of introducing test factors is to aid in the meaningful interpretation of the relationship between two variables.

Stratify (need to use this word to look good) on the test factor. Stratification means that we have broken the test factor into its component categories. (i.e., if we are using sex as our test factor, stratify it into Male and Female.)

The process of stratification creates "contingency associations," and if the relationship between your Independent Variable and Dependent Variable DISAPPEARS within each contingent association, then we can say that the relationship is due to the TEST FACTOR.
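
If it helps to see the mechanics, here's a little Python sketch (invented data, in the spirit of the storks-and-babies example) of what stratifying on the test factor and reading the "contingent associations" looks like:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 4000

    # Made-up data: rural residence (the test factor) drives both stork sightings
    # and birth rates; storks do not cause babies.
    rural = rng.integers(0, 2, n)
    storks = (rng.random(n) < 0.2 + 0.5 * rural).astype(int)
    births = (rng.random(n) < 0.2 + 0.5 * rural).astype(int)
    df = pd.DataFrame({"rural": rural, "storks": storks, "births": births})

    # Original bivariate table: storks and births look related
    print(pd.crosstab(df["storks"], df["births"], normalize="index").round(2))

    # Stratify on the test factor: within each contingent association the
    # storks/births relationship (roughly) disappears, so it was due to the test factor
    for level, part in df.groupby("rural"):
        print("rural =", level)
        print(pd.crosstab(part["storks"], part["births"], normalize="index").round(2))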

Is the relationship (that you have "discovered") between two variables REAL? You'd better make sure it's not SPURIOUS before you go telling everyone about it.

How do you decide what to use as a control variable, then? Through LOGIC of course!

Consider these:
EXTRANEOUS VARIABLES: Remember the storks and the babies? It turned out that their relationship wasn't causal at all. The EXTRANEOUS VARIABLE of residence in rural areas was making it look that way though. But nobody really thought about that because it was so far out and away from the "world of storks and babies" (hence it was/is EXTRANEOUS) that we ended up with the whole silly story about how storks deliver babies (just another example of how the crucial role of the woman in the whole delivery process gets passed over by the conclusions drawn by men long ago and nobody bothered - for a long time - to question the spuriousness of the stork/baby idea - never mind... thought I was in a WGST course for a moment).

We need to consider EXTRANEOUS VARIABLES in order to guard against misleading interpretations which might derive from the assumption that an inherent link exists between the two variables. They thought the storks and the babies were inherently linked. WRONG!!!

Here, the test factor leads to the Independent Variable and the test factor also leads to the Dependent Variable. THE CRUCIAL POINT IS: The Independent Variable DOES NOT lead to the Dependent Variable. The relationship is symmetrical; the variables are consequences of a common determinant.

Rosenberg goes on to discuss Component Variables, but since Nigem doesn't seem to care about them, neither do I right now........

Chapter 3

An INTERVENING VARIABLE is viewed as a consequence of the independent variable and as a determinant of the dependent variable. In other words, when you think you have found a causal relationship between an Independent Variable and a Dependent Variable, someone points out that you skipped a link in the "causal chain". Something occurs between your Independent and Dependent Variable that you missed. That "thing" is going to be our test variable (again).

Remember Nigem talking about how Durkheim was all about the way that Religion (Independent Variable) was causal to Suicide (Dependent Variable)? It turned out that this was somewhat true, but it wasn't the whole truth and nothing but the truth: Integration was the meat missing from his Independent and Dependent Variable white-bread sandwich. Integration is the INTERVENING test factor (where's the beef? I want a last meal before I die!) VARIABLE.

So now we know that there are three parts to this deal (we've got two pieces of bread and the meat - vegetarians, insert meat substitute here). To establish a variable as INTERVENING, three asymmetrical relationships must be present. You need your original relationship (I causes D, x causes y - I can't make a full-sized sandwich without 2 slices of bread), and then you need the meat. But for the sandwich to be real, the pieces have to stack in the right order: your top slice of bread has to lead directly to the meat (relationship 2, where your original Independent Variable causes the test factor, which acts as a Dependent Variable for the moment), and then that meat has to lead directly to the other slice-o-bread (relationship 3, where the test factor now becomes the Independent Variable that causes your original Dependent Variable). INTERVENING VARIABLES are all about sandwiches, baby! That's why they're called Intervening Variables in the first place, don't you know? Because sandwiches are good for lunch and lunch INTERVENES in the middle of your day!

But don't forget, if you control for the Intervening Variable (or take the meat away) the original relationship disappears or decreases significantly. You no longer HAVE a sandwich; you've just got some separate, unrelated pieces of bread.

(Now I need to take a quick break and listen to the Detroit Grand Pubahs' song "Sandwiches," which you will certainly see me silently (yet somewhat visibly) dancing to in my desk tomorrow when I get to this portion of the exam.)

I'll be back soon!

The night is young and looks to be very long....

Alright, so tomorrow the Little Man is going to have his last chance to play stump the monkeys (the monkeys being us) and I am not about to go down without a fight! (The French Final I have later in the day/evening looks to be another story altogether...)

So, we know we need to know all about both sides of the RESEARCH METHODS 328/527 "Table 1: ECONOMICALLY ACTIVE POPULATION BY STATUS AND SEX, 1976*" handout. (Which, FYI, is the year I was born and therefore makes me "old" to some of you, I'm sure.)

We also need to know ALL ABOUT the Process of Elaboration.

Other than the Multiple Choice, was there anything else?

I'm going to start putting some stuff up for my own studying's sake (even though it would most definitely have been better to have started this at the 11 AM mark instead of the current 11 PM time). But, better late than never. I was just about to make a comment on how "time" seems so different around finals, and I remembered a line from a song I am now going to listen to by ReHaB called "Crazy People" where this guy says:

Hey pal, got the time?
Does anybody really know?

Man, ya'll done lost ya'll mind.

Well, so has everybody else
We're just cuttin' in line

So, anyways, here we go (again)........

(Other than Crystal) Is anyone doing the Estimated Time Schedule and/or Budget for the Research Paper?

I just need to get an idea of whether or not people are doing that part, since we've never discussed anything like that in class, so I know whether I need to go back and make it up.

Thanks!

For Hope: COMPLETELY OT: Music

http://s18.yousendit.com/d.aspx?id=0O7C5YENTPKYK0BAE5NUJX51E6
Who can resist Bill Cosby singing?

http://s48.yousendit.com/d.aspx?id=12AX9YWPKYB3S1P10CAYT32859
Happy remixes, yay!

http://s49.yousendit.com/d.aspx?id=3E64Q9NM1JY4E309P8ILJEKRLH
Slow and soothing and a little spacy.

http://s49.yousendit.com/d.aspx?id=2D7RQ7USTJRKN20JTEB69P46MT
Happy food!

http://s37.yousendit.com/d.aspx?id=1RKOW8E17CJ1N0PEZHTL30K7E0
I don't know - lush and dancy and happy? Sounds like somewhere warm.

http://s37.yousendit.com/d.aspx?id=3UVGBX5X1RQGY0EPI35H5AFQ6C
If I close my eyes and listen loud enough, I'm pleasantly drunk, doing something fun - and I've forgotten all the work I have to do.

Right. Procrastination over.

Another handy reference for the Research Project...

In the back of the Babbie Bible (at least in the second edition) Appendix B is all about things to consider, beware of, make certain of, etc. in regards to a Research Report.

I just came across it and it has some useful hints and tips.

Just a friendly FYI!

exam time?

What time is the exam tomorrow? Thx.

Friday, April 29, 2005

Now that I'm really working on the Final Project.....

GO TO THE ON-LINE COURSE RESERVES FOR THIS CLASS AND LOOK AT THE 6 SAMPLE RESEARCH PROJECTS NIGEM HAS POSTED!!!!!!!!!!!!!!!!!!

Okay, don't make me say I told you so later. I also want to say thank you to those of you who encouraged me to go and look at those myself. I am so not even worried about this thing anymore. I'm serious!

The first four samples are alright. Kind of good for getting a better feel for this thing. They're all about 13-15 pages long and the Code Books in a couple of them are beautiful. The good thing about these four is that you can really see what mistakes others have made so you don't end up making them too.

I have been paying special attention to Sample Paper 5 (the check plus on the first page caught my attention - especially since Sample Paper 6 had a check minus on it). I have found this one to be so very helpful. The Codebook.... just gorgeous! (Sample Paper 6 reads more like a "what not to do" example.)

And I'll tell you what... I don't give a damn if this is supposed to be written in traditional paper style, I'm formatting mine like Sample 5. It just seems so much more clear to me and there's no way he can miss the fact that you have done EVERYTHING! But... that's just me.

My point is... if you have not already... go and look at the Sample Papers in the Course Reserves. You will be thankful that you did!

Tuesday, April 26, 2005

Article Review

Are we supposed to turn in the article along with our review?

How elaborate is this thing supposed to be? Is it like a paper or do we just answer his questions going number by number?

I just wondered what anyone else's impression was of how this is supposed to be.

Why have we never talked about this in class?

Friday, April 22, 2005

Research Project

Does he want a "faux paper" or does he want us to prepare an informative/explanatory/descriptive outline based on the outline-guideline he provided us with early on?

Monday, April 11, 2005

And Now on to Bigger and Better Things...

Now, I can procrastinate with the best of them, but if I can get (some)thing(s) done reasonably early or on time, I much prefer to go that route as I find it to be much less stressful in the long run.

I consider this post-exam time to be a bit of a breather period in Nigem's course and therefore a prime opportunity to address the Research Project and Article Review and their rapidly upcoming due dates of April 27, 2005. I (admittedly) have been procrastinating on this, but NO MORE!! (at least for tonight) Truth be told, I'm not even sure what we're supposed to be doing exactly.

I have this "Suggested Outline for a Research Prospectus" handout from way-back-when. Is this what he wants? (note that there are two sides to this handout - the second side being the center of my concern)

Soooooooo.....

Has anyone done anything beyond the original "proposal" and perhaps the actual creation of their survey? If so, please share. I think we can all be a great resource for one another since (as usual) the information exchanged in the classroom is in need of some discussion.

I'm going to get started on this thing (which, in actuality, if I had a better understanding of what it is I am supposed to be doing, I believe I could do in an afternoon's time) and wing it up and away until I hear from y'all.

Also, there is the Article Review. Do we just pick a random article or what?

One final note...
On the syllabus, in the Grading/Weights section it reads as follows:
Your final grade will be based (evaluated) on examinations, assignments, participation, and attendance. Weights are allocated as follows:
Four Examinations 75%
Two Quizzes* 5%
Research Project 20%
Article Review 5%

Do I need to retake some basic math course again? Because according to my calculations, that's 105%.

Granted, below the percentage area, in reference to the Two Quizzes, it is stated:
[If the quizzes are not administered, their 5% allocation will be added to the examinations.]

It says ADDED! What is going on? Are we working on a crazy 105% scale here? I know that he will probably just drop the 5% from the non-existent quizzes, so don't think I'm freakin' out over here or anything, but the point is.... Hello? WTF? Don't get me wrong, I love the man! I just find this "mistake" (if you'd call it that) oddly out of character. I'm curious and amused, that's all.

So, let's get the ball rolling on these April 27 things now, while we still have a bit-o-time, and figure it out together as we move along.

La-Dee-Da,
Hope

Three Words (dashed into one) on the Multiple Choice Exam...

Piece-O-Cake!

Thanks for the pre-study session, Crystal. I am ready to do a happy dance right here in the Student Union South Lounge!!!

-hope

Sunday, April 10, 2005

Ch9 Notes and Qs

Notes on my Homework from Nigem discussion of answers:

The biggest problem with secondary analysis is VALIDITY. The biggest benefit besides cost is in scale testing.

Types of Interview:
Structured - don't deviate from Q on survey
Informal - ??
Analytical - go into more depth, also look around and collect own data about person


Had to kick myself on #9 - I didn't finish reading the Q and answered A when it should be that (D) hand delivery/pick up is the best way to improve self-administered survey response rates.

#10 I have a Q on return rate graphs -- I thought they were useful for BOTH examining history effects AND estimating nonresponse bias...? I vaguely remember that this got discussed in class but didn't make notes about what Nigem said. Anyone remember why they AREN'T useful for history effects?

Note to self (#12): Interviews do NOT offer increased reliability over questionnaires. Or - Interviews and Questionnaires have EQUAL reliability.


My notes on ch:
CAPI (Computer-Assisted Personal Interviewing): face-to-face interview where the interviewer uses a computer
CASI (Computer-Assisted Self Interviewing): 'face-to-face' interview where the respondent uses a computer
CSAQ (Computerized Self-Administered Questionnaire): dumb acronyms - the person obtains software that surveys them and they return the data. (Online surveys could, I think, be this?)
CATI (Computer-Assisted Telephone Interviewing): telemarketers and modern telephone surveys; the computer dials and provides a script for the interviewer, allowing instant data analysis
TDE (Touchtone Data Entry): keep reading this as touchStone?!! - automated hell
VR (Voice Recognition): automated hell with spoken responses

Data Archives: collections of survey data available for secondary analysis. An example is the General Social Survey (GSS).
Response Rate: 50% adequate, 60% good, 70% very good
Questionnaire: "Survey used to elicit information"
Bias: Any property of a Q that encourages a certain answer (wording is critical)

Clarification on Ch8 -- also Solomon vs classic vs post-only control group designs

(Edited to add Hope's clarifications)

One-shot v. Static Group
- One Shot has ONE GROUP with stimulus and a post test
- Static Group Comparison has TWO GROUPS, one with the stimulus and both with post tests.
- They're both lousy. Neither has pretests.

The question came about because of Q9 on the homework about TV's effect on the emotional health of public vs. Montessori school kids. Besides being lousy experimental design (what about emotional health as a result of going to different schools, or one being public and one private, and the economic status of families as a result of that?!!), I noticed that there's no pretest, so I'm assuming it's static-group comparison. However, I *thought* it could also be a one-shot case study - but it isn't, because there are two groups.
---

I have some issues with 'natural experiments'.

In the example described on p. 233 about Three Mile Island, WHY is their 'natural experiment' NOT classified as survey research? They say it's 'quasi-experimental and after-the-fact' and they use interviews and surveys to collect their data.

Is it because of subject selection differences? Help?!
---

Additionally, from the homework: Q15: Natural experiments are most likely to resemble which design?
a) static group comparison (YES?) (Correct, TWO GROUPS are compared, one with stimulus, one w/o, no pretests, both posttested).
b) classical (NO)
c) Solomon (NO)
d) 1group pre-post (No)
e) post-only control group (No, because of lack of randomization of subjects?)
---

Solomon 4-Group:
1: P, S, Po
2: P, _, Po
3. _, S, Po
4. _, _, Po

All have post-tests. Groups 1 and 2 have pretesting and are (exactly) classical design. Groups 3 and 4 lack pretests and are exactly Post-Test-Only Control group design.

Solomon allows comparison between control and experemental groups both with and without pretesting.

So: Classic + PoTCG --> Solomon
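
A quick sketch with invented posttest means (not numbers from the book), just to show why having all four groups lets you spot pretest sensitization:

    # Hypothetical posttest means for the four Solomon groups (P = pretest, S = stimulus)
    post = {
        "G1 (P, S)": 78,   # pretested, got the stimulus
        "G2 (P, _)": 65,   # pretested, control
        "G3 (_, S)": 72,   # not pretested, got the stimulus
        "G4 (_, _)": 64,   # not pretested, control
    }

    effect_with_pretest = post["G1 (P, S)"] - post["G2 (P, _)"]      # the classical comparison -> 13
    effect_without_pretest = post["G3 (_, S)"] - post["G4 (_, _)"]   # the posttest-only comparison -> 8

    # If the two effects differ by much, the pretest itself was interacting with the stimulus
    print(effect_with_pretest, effect_without_pretest, effect_with_pretest - effect_without_pretest)
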
---

Why does the classical design suck? (Or, what's it bad at?) It can't control for interactions between the pretest and the stimulus (sensitization taking place).

Also, there's the extra (unnecessary) expense of pretesting (and an additional source of error) when you can (theoretically) obtain the same results with the Posttest-Only Control Group design, because you've randomized the assignment of subjects to control and experimental groups.

Here's What I Remember...

Here are the questions from our Study Guides that (I think) I remember seeing on the MC Exam Preview. I'm sure I don't have all of them (and I may have a few that I think I saw, but really weren't there) but here's a start:

Chapter 8

5. An experimenter wanted to see the effects of caffeine intake on the arousal state of her subjects. She randomly assigned subjects to experimental and control groups and administered caffeine-rich soda to one group of subjects and caffeine-free soda to the other group; she then compared the arousal states of the members of both groups. This is an example of a
a. posttest-only control group
b. pretest-posttest design
c. double-blind experiment
d. classical design
e. static-group comparison

8. Matching refers to
a. linking subjects in the pretest group with those in the posttest group
b. selecting pairs of subjects who are included and not included in an experiment
c. selecting similar pairs of subjects and assigning each member randomly to the experimental and control groups
d. linking pairs of subjects on the independent variable with those on the dependent variable
e. assigning similar pairs of subjects to different settings for the same experiment

11. Elmer, a subject in Professor Jencken's experiment testing the effects of certain films on a person's emotional state, has just undergone a break-up with his girlfriend. He continues with the experiment, however. Which one of the following threats to internal validity is reflected in this example?
a. history
b. maturation
c. selection biases
d. statistical regression
e. experimental mortality

14. What is the basic difference between the classical design and the Solomon four-group design?
a. There is no difference.
b. More time elapses between the stimulus and the second observation in the Solomon four-group design.
c. The Solomon four-group design has randomization.
d. The Solomon four-group design repeats the classical design but adds groups that are not pretested.
e. The Solomon four-group design repeats the classical design but adds groups that are not posttested.


Chapter 9

7. Professor Kaled wishes to ask three additional questions only of those respondents who have been active in a political organization in the previous year. Best to use would be
a. contingency questions
b. matrix questions
c. matched questions
d. separate questionnaires
e. different response sets

8. Response set is most likely to occur in which kinds of questions?
a. matrix questions
b. contingency questions
c. closed-ended questions
d. open-ended questions
e. interview questions

9. A particularly useful strategy for improving response rates to self-administered questionnaires is to
a. offer an inducement
b. use commemorative stamps
c. use colored paper
d. use hand delivery and/or pick up
e. use a jazzy cover letter

11. According to Babbie, a 60 percent return rate is considered
a. poor
b. adequate
c. good
d. very good
e. excellent

13. Interviewers can be helpful in dealing with confusing situations regarding a given item through the use of clarifying comments known as
a. specifications
b. elaborations
c. matrix questions
d. response set formats
e. conversations

14. Which one of the following is false regarding telephone interviews?
a. They are cheaper than in-person interviews.
b. They save time over in-person interviews.
c. They enhance the safety of the interviewer.
d. They make it harder for the respondent to terminate the interview.
e. They have a bad reputation.


Chapter 10

5. Professor Sullivan performed an observational study of the norms that govern interactions between cab drivers and their passengers. Which one of the following does this example reflect?
a. roles
b. encounters
c. episodes
d. groups
e. settlements

9. Which one of the following is false regarding field notes? Or are they all true?
a. Don't trust your memory more than you have to.
b. Take notes in stages.
c. Get the major points, but don't worry about getting as many details as you can.
d. Rewrite your notes before going to sleep.
e. All are true.

13. In comparison to surveys and experiments, field research has
a. high validity and high reliability
b. high validity and low reliability
c. low validity and low reliability
d. low validity and high reliability
e. high reliability, but only when the validity is high


Those are the ones I remember so far from the Study Guides, but it's not these that I am worried about. It's the ones that I have not seen before, cannot remember, and don't know the answers to.

Any help on remembering any other questions - or even just the subject or topic that might trigger my own memory would be greatly appreciated!

-hope

Expts: tell me about 'em and use as much terminology as possible

So Experiments are the thing, so far, that I'm most rusty on... so tell me about em...

What parts of the design process are there? What decisions are to be made? Why would you make those decisions? What are they useful for and why? What's some expt-related terminology that you think is important?

Would you ever want to run one? Why? What'd you want to test? How would you do it? Why?

--Crystal

ExamPartTwoYuck

Soooooooo. It's 9 AM on Sunday and my day will be spent writing various research papers and studying for Nigem. Woo-hoo.

I'll check in periodically, or more frequently if I/other folks have questions. The conversational aspect of this blog-thing seems to help the material stick best for me, so later on I may post a general question or two if there's some material (experiments!) that I'm having trouble remembering all aspects of.

Also, wanted to thank everyone for participating - I hope everyone else felt a much needed breath of confidence after our discussions (I know I did)!

--Crystal

Tuesday, April 05, 2005

Ethical problems in survey research?

Any examples of ethical problems in survey research?

I can think of general problems of misrepresented results and intentionally bad sampling, but neither is survey-specific....?

Confidentiality? Always a problem.

??

Still Looking for Causal Time-Order and Compensation Information

Still Looking for Causal Time-Order and Compensation Information to fill in the hole in Question Number 1. If anyone has anything on this, now's the time to give it up.

Selection Biases

Can someone provide a BRIEF explanation of Selection Biases? I know what they are, but it takes me like 5 sentences to explain it and I know I don't have that kind of time.

What is a Regression Artifact?

what is a regression artifact?

anyone?.... anyone?.... anyone?.....

Is everyone still working on the questions they volunteered for?

Just wondering if everyone is still working on the question they chose earlier today so I know if I should be working on anything in particular. (That is, other than the 500 million billion other things I am trying to do - which does NOT include joining my friends at the bar around the corner on the patio, from which they continue to harass me via telephone, trying to get me to cave! All I can do is picture The Tiny Man looming over me like a giant fig tree, just waiting to rain figs down all over my GPA parade.)

So, are there any questions that need to be covered - we have 2 (Numbers 1 & 3) + 1/2 (My poor attempt at Number 5) + 1/2 (Terminology) posted so far, but I'm not worried. We're going to open up 5 big ol' cans of whoop-ass tomorrow when that essay sheet hits our desks.

Yeee-Haaaw!

What's the story with Internal Extrinsic Validity?

That's all. I just want to know a bit more about Internal Extrinsic Validity....

Three Methods of Data Collection: Survey, Telephone, Personal Interview (Strengths and weaknesses)

This is that table that he wants us to regurgitate w/o you know, regurging it. Yea.

The easiest way for me to remember it was to sit down and look at what each method was particularly strong and particularly weak on and then figure out the rest ('moderate') by common sense.

Additionally, ya'll get a sample of my stellar spelling skills - Nigem'll get the full effect tomorrow with my chicken scrawl handwriting.

I also made flash cards out of this stuff, cause it's the only bloody way I'll remember it.

What type of survey can you ask the most Q on?
Face to Face Interview

What survey type has limited contingency Qs?
Mail Survey

What type of survey has the greatest bias from social desirability?
Face to Face Interview

What is field research bad for?
Weak on internal validity
Low Control
High Risk of ethical errors
Low Reliability (of results)
Bad for Hypothesis Testing
Bad for Explanatory Research

What is field research good for?
Very natural
High External Validity
Allows moderate #s of subjects
Useful for long descriptive research

What are surveys strong on?
Large populations
High #s of subjects
(note: those two things are different, I had to stare at them for a while before I had it straight)
Allow Hypothesis testing
Useful for both explanatory and descriptive research

What are surveys bad at?
Only moderately:
  • Natural
  • External Validity
  • Internal Validity
  • Control (did one person fill out your survey? was it the person you wanted? did they watch the news while they were doing it?)
Moderate ethical problems (Ex of what these are?)

What are experiments strong on?
Internal validity
Control
High Reliability
Allow formation and testing of hypotheses
Useful for explanatory research and INTRODUCED phenomena

A Note: Experiments allow introduced phenomena - you control the action so you can test before, after, and during. Surveys are ex post facto - you can only ask about stuff that's already happened (have you ever discriminated against someone in a job interview?) OR hypothetical actions (if given the opportunity, would you discriminate in a job interview?). Field research is 'in the now' - it's ongoing, complex phenomena.

Just answered this, but in flashcard format:
Phenomenon in experiments, surveys, and field research?
Introduced: Exp
Past: Survey
Ongoing: Field

What are experiments weak on?
Ethical errors are common
External validity is a problem
unnatural
Small populations, low #s of subjects (with ass sample selection)
Not useful for descriptive research
Short duration

What methods can be used for explanatory research?
Experiment or survey

What methods can be used for descriptive research?
Survey or field research

What's low on external validity?
Experiment

What's high on internal validity?
Experiment

Where are the most problems with ethics in research?
Both experiments and field research

Where can you have hypotheses?
Experiments and surveys


Discuss Advantages and Disadvantages of Specific Methods Used in Qualitative Research

Let's see.... there are what 7 methods? I don't even know that for sure! Let's start with what I do know...

Overall Advantages and Disadvantages of Qualitative vs. Quantitative Research

Advantages
1. The chief strength of qualitative field research lies in the depth of understanding it permits.
2. Flexibility is another advantage of field research.
3. Field research is (usually) relatively inexpensive.
4. Field research has greater validity than do survey and experimental measurements.


Disadvantages
1. It is not an appropriate means for arriving at statistical descriptions of a large population.
2. Low in reliability.
3. Often very personal

I Think These Are The Methods:
(then again, I could be way off here... I know these are the paradigms, but hell if I know if they're the "methods" or not)
1. Naturalism (Ethnography)
2. Ethnomethodology
3. Grounded Theory
4. Case Studies
5. Extended Case Method
6. Institutional Ethnography
7. Participatory Action Research

Not Every Girl Can Explain Insane Pain
(hey, gimme a break, it rhymes, and "insane" correlates well with "institutional," and you can remember that this tip/trick goes with the qualitative question because it has the word "Explain" in it - even though it should be "descriptive," just remember that explaining is more related to qualitative than quantitative and you'll be fine.)


Note: At the bottom of page 287 in the Second Edition, our buddy Babbie states, "There aren't any specific methods attached to each of these paradigms..... The important distinctions of this section are epistemological, that is, having to do with what data mean, regardless of how they were collected." WTF?!?!?

1. Naturalism: an approach based on the assumption that social reality is "out there," ready to be observed and reported by the researcher as it "really is." An ethnography is a type of naturalist method.

2. Ethnomethodology: an approach that examines the rules that govern everyday life, often by breaking the rules

3. Grounded Theory: deriving theories from an analysis of the patterns, themes, and common categories discovered in observational data

4. Case Studies: an approach that focuses attention on one or a few instances of some social phenomenon

5. Extended Case Method: the purpose of this approach is to discover flaws in, and to modify existing social theories

6. Institutional Ethnography: an approach in which members of subordinated groups are asked about "how things work" so that researchers can discover the institutional practices that shape their realities

7. Participatory Action Research: with this paradigm, the researcher's function is to serve as a resource to those being studied, typically disadvantaged groups, as an opportunity for them to act effectively in their own interest


Okay, now that I've done everything BUT answer the question, I'm hoping someone can actually do what I was supposed to do - and that is Discuss Advantages and Disadvantages of Specific Methods Used in Qualitative Research! :-(

Some Terms Nigem Mentioned

While the tiny man has been trying to scare us for his own amusement, here are the only two terms I caught (along with what Alicia captured as well) as he mentioned the possibility of their appearing on the (most likely not-an-optional-but-a-required-to-answer) Terminology Question List:

Hawthorne Effect: people are influenced by the presence of the experimenter

Symbolic Realism: indicates the need for social researchers to treat the beliefs they study as worthy of respect rather than as objects of ridicule

Regression Artifact: I have stuff on statistical regression and regression to the mean, but nothing on this (c'mon... someone has to have this!)

Reflexivity: things acting on themselves. Your own characteristics can affect what you see and how you interpret it while doing field research.

Mortality: Experimental mortality (a source of internal invalidity) refers to experimental subjects dropping out of the experiment before its completion, and this can affect statistical comparisons and conclusions.

Selection: (was he talking about selection biases?)

Intrinsic: inherent, being an innate or essential part

Probe: request for elaboration

As a matter of intuition, I just want to put out there the possible need to know the O X O diagrams for the various testing designs and their proper names, because I can so see "Solomon four-group design," "posttest-only control group design," "one-group pretest-posttest design," and "static-group comparison" being on that Terminology list.

Discuss Research (In)Validity

Research Validity
1. External Validity
2. Internal Validity
a. Extrinsic Validity
b. Intrinsic Validity
- History
- Maturation
- Testing
- Instrumentation
- Statistical Regression
- Selection Biases
- Mortality
- Diffusion of Treatments
- Causal Time-Order
- Compensation
- Compensatory Rivalry
- Demoralization


Nigem's not too concerned with details on External Validity. In fact, he said, "Now watch this. External Validity deals with generalization and representation. Can you generalize your findings to the outside or "real" world? That is all that you need to say for the essay."

Internal Validity on the other hand is EXTREMELY important (it even earned its own handout: See Sources of Invalidity for Designs 1 through 6).

Internal Invalidity refers to the possibility that the conclusions drawn from EXPERIMENTAL results may not accurately reflect what actually occurred during the experiment. You MUST have control over all factors affecting your experiment.

Side Note for people who like the x,y stuff:
Independent Variable (x) = stimulus
Dependent Variable (y)
MAKE SURE "x" (and ONLY "x") influences "y"

Internal Validity has both INTRINSIC and EXTRINSIC components.

The Extrinsic bit just deals with the selection of people used in the experiment. (that's all i've got on this... gimme what ya' got)

INTERNAL INTRINSIC VALIDITY is what it's all about!!!
All we're talking about here, really, is "What could go wrong?" Hmmmmmm.....
These are the sources that can interfere with your Independent Variable (x)
(and of course, we need to BRIEFLY explain these a bit and give an example)

a. History
- something historical happens during the experiment that has some effect
- ie: an assassination of an African-American leader during an experiment on prejudice

b. Maturation
- people change
- ie: short-term = get tired or bored; long-term = grow older and wiser

c. Testing
- the actual testing/retesting process
- ie: people become influenced by the tests and their responses change

d. Instrumentation
- what is used to do the measuring/testing
- ie: must use same survey and pre & post-test so one is not more sensitive than another

e. Statistical Regression (to the mean)
- when subjects are so extreme the results will erroneously be attributed to the stimulus
- ie: kids are so bad in math, they can't get any worse or tall people have shorter children

f. Selection Biases
- experimental and control groups MUST be comparable
- ie: use proper methods for subject selection (we studied this before)
- Note: when subjects come from all volunteer group you lose External Validity

g. Mortality
- subjects die or drop out of experiment
- ie: bigots leave during the film, leaving a higher ratio of less prejudiced people for post-testing

h. Diffusion of Treatments
- the more groups you use the more complications arise
- common sense - more groups = more opportunities for Internal Invalidity

i. Causal Time-Order
j. Compensation

k. Compensatory Rivalry (thank you Alicia for noting/adding that... Compensatory Rivalry is when one group competes against the other in an attempt to better performance.)

(i've got nothing on these three... i think i was still recovering from the repeated use of the kittens in the dark experiment... I actually have written in my notes, "Why does he keep referring to the deprivation of light experiment with 1 mo/2, 3, 6, etc. w/the maze and the food? It is so upsetting and disturbing!")

l. Demoralization
- oh, i'm upset because I'm in the Control Group and I feel left out, boo-hoo
- demoralized kids in educational studies may stop studying, act up, or get angry


Now for the fun stuff... how are we supposed to remember all of the factors? I mean, I can tell you what each of them is if you ask me, but listing them all from memory to begin with? Not so much. Sooooo.....


Hairy Men Don't Do Much Shaving To Create Sexy Clean Cut Images

That's right... HMDDMSTCSCCI (I know that's a lot, but hell, there are 11+1 (so that's 12) types of Internal Intrinsic Invalidity Issues - I point that out as your "double checker"... the I's at the beginning of the words "Internal Intrinsic Invalidity" should remind you that there are 11+1 (three I's = three 1's = 11+1 = 12) factors. That way you don't skip a word in Hairy Men Don't Do Much Shaving To Create Sexy Clean Cut Images.)


History
Maturation
Diffusion
Demoralization
Mortality
Selection Bias
Testing
Causal Time-Order
Statistical Regression
Compensation
Compensatory Rivalry
Instrumentation

If you're a cynic and have an easier time remembering negative things instead of funnier stuff, try this one:

How Many Deeply Troubled Marriages Can Improperly Supervised Couples Counseling Sessions Destroy?

Does he want more than this? What do you think?

Third Time's A Charm... (hopefully)

Okay, here we go, one more time, yadda yadda yadda...

As I understand it, the following are the expected essay questions (however, he seemed to make a lot of changes Monday and I don't know if I'm on the right page anymore - so correct me where I'm wrong):

1. Discuss Research (In)Validity: External and Internal (see handout)

2. Compare 3 Modes of Observations and their Strengths and Weaknesses (see handout)

3. Three Methods of Data Collection: Survey, Telephone, Personal Interview (Strengths and weaknesses)

4. Problems You Encounter in Survey Research (see handout)

5. Discuss Advantages and Disadvantages of Specific Methods Used in Qualitative Research

6. Terminology (woo-hooo fun!)



Whatever! Narrow these down or clarify them more and I am going to start to answer the damn things....

Friday, April 01, 2005

New classfolk, comment here with e-mail to be added to admin of blog

So that you too can write posts!

--Crystal (clstal.at.gmail)

Tuesday, March 15, 2005

Why Can't I Do This?!?!?!?

I have been sitting in front of this computer for nearly 5 hours now and I just can't face my Methods notes. I am forcing myself to do something NOW since I was just reminded that the new season of the Shield starts tonight @ 10. So can I do this in two hours? Do I care? (Yes, of course I do - now I just need to convince my brain of that.)

Monday, March 14, 2005

So, now that the multiple guess is over... what'd you think?

So what'd you think?

The last page of Q threw me for a loop - I swear they're taken from the next section or something... I remember one of the q was about what type of variable something was, and the other thing about figs and high income -- was that bivariate??

I thought he put the Q from the worksheets at the beginning of the test, rather than the end, like he did last time. I think I preferred that, rather than getting in there and hitting a brick wall.

I do NOT look forward to the stupid essay part (for his class, it's my least favorite part cause I study and study and study for it but always get in there and forget some important part of it). :-( Yuck.

So much for bed... :-(

So I got in bed, and not so much with the sleep... :-(

From the Ch7 Hmwk, 2 q are throwing me for a loop:

Professor Mallory determines the distribution of students on 5 variables in her target pop. She then selects students who fill the pre-established proportions of people in each combo of variables. Which stratgey?

A: Quota.

WTF does the determining distribution thing mean? She did a pre-study to learn about her pop?!!


Prof Rosenberg takes random sample of stud from 5 MI colleges. He then phrases his report in terms of "all MI college students". To what group of elements is he generalizing?

A: Population

Isn't he also committing the error of ecological fallacy? He sampled only 5 public universities and isn't qualified to make statements about "all" anything. Additionally, how can the answer be population, since the generalized statement he's talking about WASN'T his population?!!

I'm gonna try this sleep thing again, now.

Sunday, March 13, 2005

I am so heading to bed... I'm stupid-tired and don't think I'm getting anything out of going over and over and OVER this stuff. I wish I had blank homework assignments so I could test myself (mine are written all over - it's not like I could just cover the answers up... duh!). I guess I'll know for the future ::shrug:: -- that is, if I live through this damn test. :-( I *do* wish the man used a curve; this shit is hard!

Much luck to you tonight and tomorrow. I'll be checking this before chem in the AM and again after class.

Does anyone remember any of the Multiple Guess Questions?

Yeah, at this point it is truly Multiple Guess for me. And as I did not take the time to make notes after the preview of the questions the other day I am now feeling very SCREWED!!!

Crystal, I am starting to reply to your "questions in red" now. Sorry, I was gone all weekend.

7:00 pm the night before the Methods MC and I'm just getting started... I feel awful. Especially after seeing all the work you have done so far, Crystal.

Let's get it started...

Hope

Saturday, March 12, 2005

Ch7

sampling - process of selecting observations
probability sampling - generalizable from sample to population, requires random selection.

NONprobability Sampling:

  • Reliance on Available Subjects/Convenience - no control over representativeness of sample, generalization from data is V. limited.

  • Purposive/Judgmental - goal to compare left-wing and r-wing students, so you sample 2 orgs only - your goal is comparison so this is OK, but can't generalize to L and R-wing students in general. Also, interviewing students who don't attend school rally to learn about school spirit.

  • Snowball - when pop difficult to locate (homeless introduce to other homeless, who introduce to other homeless...). Results are of questionable representativeness, used freq for exploratory res.

  • Quota sampling - sampling based on knowledge of a population's characteristics. Selection of sample to match set of characteristics. Quotas based on vars most relevant to study. Quota frame MUST be accurate. (what is QF?)

  • Selecting Informants - a member of the group who can talk about the group. Be V careful with selection - pick an informant who is as typical as possible, centrally accessible, and has many contacts.

Probability Sampling:

    • to provide useful descriptions of total pop, sample must contain same variations that exist in pop. Every member of pop has = chance of being selected for sample.

    • Bias: selections are not typical or representative of larger pop.

    • Sample is rep of pop if aggregate chars of sample approximate the same aggregate chars in pop. Samples selected by an equal probability of selection method are labeled EPSEM.

Element - unit info is collected on and provides basis of analysis (generally people)
Population - “theoretically specified aggregation of study elements” (dumb)
Study Population - aggregation of elements from which the sample is actually selected. (Can be artificially limited).
Random Selection - each element has = chance of selection independent of other event in selection process. ('selection' of head or tail in quarter flipping is independent of previous coin tosses)
Sampling Unit - element or set of elements considered for selection in some stage of sampling (??? How is this different from study pop? )
Probability Theory - sampling techniques that produce representative samples and let you analyze the results of sampling statistically. Provides the basis for estimating parameters of a pop.
Parameter - summary description of a var in a POPULATION.
Statistic - summary description of a var in a SAMPLE
Sampling error - degree of error to be expected for given sample design (allowed bec of probability theory -- how closely are sample stats clustered around true value?)

What do we have to know of sampling and std error? I've covered nothing, here.

Confidence interval/level of confidence - express the accuracy of sample stats in terms of the level of confidence that the stats fall w/in an interval around the parameter (built from the sample estimate, 'cause we don't know the Parameter). Additionally, provides a basis for determining sample size.

How much of this do we need to know?
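
Probably not a ton, but here's a tiny worked example (made-up survey numbers) of a 95% confidence interval around a sample proportion, in case it helps the idea stick:

    import math

    # Hypothetical result: 520 of 1,000 respondents say "yes"
    n, yes = 1000, 520
    p_hat = yes / n                            # the sample statistic (our estimate of the parameter)
    se = math.sqrt(p_hat * (1 - p_hat) / n)    # estimated standard error of the proportion
    z = 1.96                                   # multiplier for a 95% confidence level

    low, high = p_hat - z * se, p_hat + z * se
    print(f"p_hat = {p_hat:.3f}, 95% CI = ({low:.3f}, {high:.3f})")   # roughly 0.489 to 0.551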

Sampling frame - list of elements from which probability sample is selected. (sample of students taken from roster, then the roster is the sampling frame). Why is this not the study pop?!!


Types of Sampling Designs:

Simple Random - once the sampling frame is established, assign a number to each element in the list, then use a table of random numbers to pick the study sample.

Systematic - every kth element in a sampling frame, with the first element selected at random. Be V AWARE of the dangers of periodicity (every 5th house on the block is the corner house, and as such, is abnormal).
Sampling interval - standard distance between elements selected in the sample.
Sampling ratio - proportion of elements in the pop that are selected (1/10 if every 10th person is selected).

Stratified - greater degree of representativeness. Organize the list into homogeneous subsets and pull every kth element from that list, making sure that the subsets are in the same proportion as the pop.
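
If it helps, here's a rough Python sketch (hypothetical 200-name roster, made-up strata) of what simple random, systematic, and stratified selection each look like in practice:

    import random

    random.seed(42)
    frame = [f"student_{i:03d}" for i in range(1, 201)]   # hypothetical sampling frame (a roster)

    # Simple random sample of 20
    srs = random.sample(frame, 20)

    # Systematic sample: sampling interval k = N/n, random start, then every kth element
    k = len(frame) // 20                                  # sampling interval = 10 (sampling ratio 1/10)
    start = random.randrange(k)
    systematic = frame[start::k]

    # Stratified: sample within homogeneous subsets, in proportion to the population
    strata = {"underclassmen": frame[:100], "upperclassmen": frame[100:]}   # made-up strata
    stratified = [s for name, members in strata.items()
                  for s in random.sample(members, len(members) * 20 // len(frame))]

    print(len(srs), len(systematic), len(stratified))     # 20, 20, 20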


Multistage Cluster Sampling: initial sampling of clusters, then sel of elements w/in each sel cluster.

- Highly efficient but less accurate sample. 2 stage cluster sample is subject to two sampling errors. Maximize # of clusters while dec # of elements w/in each cluster.

    • Typically elements composing given cluster w/in pop are more homogeneous than is total pop. (residents of block more alike than nation)

    • Additionally Multistage cluster sampling can be Stratified.

PPS (Probability Proportionate to Size) Sampling: type of cluster sampling. Used when clusters are of greatly differing sizes (ie city block vs suburban block) so that everyone gets the same = chance of being selected.

A further refinement is the Disproportionate Sample and Weighting: Disproportionate - you may decide to sample to get a higher # of some small subpopulation so that you can have sufficient #s to analyze the results with some meaning. Analysis of the 2 samples needs to be separate - but then they can be compared. Weighting happens when you want a composite sample of the entire pop; then you have to weight the samples when 'adding them back together'.
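
Quick made-up-numbers sketch of why the weighting matters once you've deliberately oversampled a small subpopulation:

    # Hypothetical: the minority subpop is 10% of the population but we sampled it heavily
    groups = {
        "majority subpop": {"pop_share": 0.90, "n_sampled": 300, "p": 0.40},
        "minority subpop": {"pop_share": 0.10, "n_sampled": 300, "p": 0.70},   # oversampled
    }

    # Unweighted estimate (wrong for describing the whole pop): treats the subsamples as equal
    total_sampled = sum(g["n_sampled"] for g in groups.values())
    unweighted = sum(g["p"] * g["n_sampled"] for g in groups.values()) / total_sampled

    # Weighted composite: put the subsamples back together in population proportion
    weighted = sum(g["p"] * g["pop_share"] for g in groups.values())

    print(round(unweighted, 2), round(weighted, 2))   # 0.55 vs 0.43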

Ch 6

Multiple indicators - several q on survey addressing same concept, also interviewers asking 'essential q' and then 'extra q' that asks that same info slightly differently.

Composite measures are used freq in quantitative res: 1. no single indicator covers meaning, 2. want to use ordinal measure of var with a range of variation, 3. data analysis.

Indexes and scales (esp scales) are efficient data reduction devices: assign scores, not loosing details of response.

Index v Scale:
Both:
ordinal
rank-order units of analysis in terms of vars
score gives indication of relative 'religiosity'

use composite measures: measurements based on more than one data item. SHOULD ONLY MEASURE ONE DIMENSION. (unidimensionality)
Diffs:
Index: accumulate score assigned to individual attributes (1 point for each q).
Scale: assign score to patterns of resp; takes advantage of diff in intensity among attribs of same var to ID distinct patterns of response.
'idealized action patterns'

Index Construction:

  1. In selecting items for index, does the item have face validity?

  2. Are you measuring the concept in a general way or a specific aspect of the concept? (bal measure of religiosity or a measure only of ritual participation?)

  3. Select items differing in variance (1 item ID as conservative, another might pick up a few more)

*I don't like the 'general or specific' and the 'exam of empirical rel' step on p150. Help?

  1. Examine empirical rel among items included in proto-index. (Empirical Rel - when answers to one q let you predict answers to other qs).

  2. Find bivariate relationships among items and drop items w/o relationships to other items on the index - unlikely that they really measure the concept. Bivariate Rel - rel bet 2 vars; responses on the 2 vars tend to go together. Also, drop items that VERY strongly correlate, as they're prob the same q.

ASIDE:

Indicators should be related if they are 'effects' of the same var. However, this is not the case when indicators are 'causes' rather than 'effects' of the variable.

Social interaction - time spent w/ fam, friends, coworkers. 3 indicators 'cause' degree of social interaction.

Self-esteem - 'good person', 'like self'. A person w/ high self-esteem should answer Yes to both.

Decide if indicators are causes or effects of var before using inter correlations to assess validity.

Here's another place I fall apart -- do we need to know the percentage tables he used to analyze his physician example? I don't remember Nigem covering it in class or making a big deal out of it - do we not need to know it? Can you summarize? (?!! P 154 - 155?!!)

Index Scoring:
Assign scores for particular responses.
Decide desirable range of index scores (how many index 'points' is conservative?).
How far into extremes does index extend (consider variance and the tails of the normal curve). Your goal is to have an adequate # of cases at each point on the index, generally index scoring is equally weighted.

How do you handle missing data?

  • If few cases, simply exclude from index construction. (Will exclusion result in biased sample?)

  • Treat missing data as one of available responses (you might decide that failure to answer meant no, if respondent answered yes and left some blank)

  • Analysis may yield meaning - if respondents who failed to answer a Q were generally consistently conservative on other items, you may decide to score accordingly.

  • Assign 'middle' value

  • Use proportions of what observed (if 4/4 answered strongly conservative, may score '6', if 2/4, may score '3')

Best method is to construct index through multiple methods and compare results.
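
Here's a small sketch (invented items and respondents) comparing two of those missing-data options - assigning the 'middle' value vs. scoring proportionally on what was actually answered:

    # Hypothetical 4-item index, items scored 0/1, None = no answer
    respondents = {
        "A": [1, 1, 1, 1],
        "B": [1, 0, None, 1],
        "C": [0, None, None, 0],
    }

    def score_middle(items):
        # assign the middle value (0.5 here) to any missing item, then sum
        return sum(0.5 if v is None else v for v in items)

    def score_proportional(items):
        # scale the observed items up to the full index length (assumes at least one answer)
        answered = [v for v in items if v is not None]
        return len(items) * sum(answered) / len(answered)

    for name, items in respondents.items():
        print(name, score_middle(items), round(score_proportional(items), 2))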

Index Validation - does the index measure what it says it measures? (Does your index rank-order people in their degree of conservatism?)

  1. Item Analysis - internal validation; examine the extent to which the composite index is related to, or predicts, responses to the individual items it comprises. If an item adds nothing, trash it. (Quick sketch of this right after the list.)

  2. External Validation - people who scored as politically conservative on your index should score as conservative by other methods as well. (most conservative index scorers should be most conservative on all other q on survey)

  3. Bad Index vs Bad Validators - This can be a problem; check carefully.
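
The sketch promised above - invented 0/1 responses with one junk item thrown in, run through a rough item analysis (each item correlated with the index minus that item):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    n = 500

    # Hypothetical responses: items a-c all track one underlying trait; item_d is noise
    latent = rng.random(n)
    items = pd.DataFrame({
        "item_a": (rng.random(n) < latent).astype(int),
        "item_b": (rng.random(n) < latent).astype(int),
        "item_c": (rng.random(n) < latent).astype(int),
        "item_d": rng.integers(0, 2, n),               # adds nothing to the concept
    })

    index_score = items.sum(axis=1)

    # Internal validation / item analysis: items that barely correlate with the rest get trashed
    for col in items:
        rest = index_score - items[col]                # the composite minus the item itself
        print(col, round(items[col].corr(rest), 2))    # item_d should come out near zero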


Scale Construction:

Scales offer more assurance of ordinality by taking into consideration the intensity structures among indicators. (Is the senator who voted for 7 moderately conservative bills more conservative than the senator who voted for 4 strongly conservative bills, rejecting the others because they were too moderate?)

Bogardus Social Distance Scale -- technique for determining the willingness of people to socially relate to certain other people. If a person allows contact next door, they'd allow the person to live in the country... etc. Logical structure of intensity.

  1. live in country?

  2. Live in community?

  3. Live in neighborhood?

  4. Next door?

  5. Marry child?
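A quick sketch of how you might score that battery, using the cumulative structure (the wording and the answer pattern are invented):

# Sketch: scoring a Bogardus-style battery (wording and data invented). Because the
# items have a cumulative intensity structure, one number - the most intimate contact
# accepted - summarizes all five answers.
STEPS = ["live in country", "live in community", "live in neighborhood",
         "live next door", "marry into the family"]

def social_distance(accepted: list[bool]) -> str:
    """Return the most intimate step the respondent accepts (or 'none accepted')."""
    closest = "none accepted"
    for step, ok in zip(STEPS, accepted):
        if ok:
            closest = step
    return closest

print(social_distance([True, True, True, False, False]))  # -> live in neighborhood

One answer pattern collapses to a single "distance" because, if the items really form a Bogardus scale, accepting a closer contact implies accepting all the more distant ones.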

Thurstone Scale - a format for generating groups of indicators of a var that have an empirical structure to them. Judges are given a list of indicators of a var and rate each on intensity. Disagreement among the judges gets an indicator tossed as ambiguous. Items are then selected to represent each scale score and used in the survey. Respondents who 'hit' a strength of 5 would be expected to 'hit' the lower indicators too, but not indicators above 5.
Incredibly resource intensive, would have to be updated periodically.
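A sketch of just the judging step - the items, the judges' 1-11 intensity ratings, and the "too much disagreement" cutoff are all invented:

# Sketch of the Thurstone judging step (items, ratings, and cutoff are hypothetical).
from statistics import median, stdev

ratings = {  # item -> intensity ratings from five judges, on a 1-11 scale
    "abolish all income taxes":  [10, 11, 10, 10, 11],
    "cut some federal spending": [5, 6, 5, 6, 5],
    "the flag is nice":          [2, 9, 5, 11, 1],   # judges all over the place
}

for item, rs in ratings.items():
    if stdev(rs) > 2:
        print(f"toss '{item}' as ambiguous (judges disagree too much)")
    else:
        print(f"keep '{item}' at scale value {median(rs)}")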

Likert Scaling - goes one step beyond regular index construction: calc the avg index score for those agreeing with each individual statement making up the 'index'. As a result of this item analysis, respondents could be rescored using the avg index score for each item.
Too complex to be used frequently.
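A sketch of that rescoring idea, with invented numbers:

# Sketch of the Likert rescoring idea (all numbers invented): find the average overall
# index score among respondents who agreed with each statement, then use that average
# as the statement's new weight.
index_scores = [4, 3, 1, 0, 2, 4]            # each respondent's simple index score
agreed = {                                   # who agreed with each statement (1 = agreed)
    "cut taxes":     [1, 1, 0, 0, 1, 1],
    "school prayer": [1, 0, 0, 0, 0, 1],
}

for statement, flags in agreed.items():
    scores = [s for s, a in zip(index_scores, flags) if a]
    print(f"'{statement}': rescore agree-ers at {sum(scores) / len(scores):.2f}")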

Semantic Differential - determine the dimensions along which you want subjects to judge something, then find 2 polar-opposite words for each dimension. (Dimension of music: enjoyability - use 'enjoyable' and 'unenjoyable'. Dimension of music: complexity - use 'complex' and 'simple'. Etc.) Allow individuals to check a box along each of those continuums.

Guttman Scaling - based on notion that anyone giving strong indicator of some variable will also give weaker indicators.

Scale types - patterns of response that form a scale. See Table 6-2 for example.

I'm iffy on the ex given in book - I understand the example but can't extract a def from it. Skipped rest of Guttman Scale.
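Maybe a tiny worked version helps (hypothetical items ordered from weakest to strongest indicator, 1 = respondent endorses the item):

# Sketch: checking Guttman "scale types" on invented response patterns.
def is_scale_type(pattern: list[int]) -> bool:
    """A pattern fits the scale if the 1s all come before the 0s, e.g. 1,1,0,0."""
    return sorted(pattern, reverse=True) == pattern

for pattern in ([1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 0]):
    print(pattern, "scale type" if is_scale_type(pattern) else "mixed type (a scale 'error')")

The share of respondents who land in scale types rather than mixed types is what tells you whether the items really behave as a Guttman scale.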

Typologies: summary of intersection of 2 or more variables, creating set of categories or types. Typologies MUST be used as the independent variable.

Also iffy on this, need def from somewhere else??

Wednesday, March 09, 2005

Ch5 Recap

Ch 5 Review:

Scientists like to use 'measurement' in place of 'observation' because it refers to careful, deliberate observations made for the purpose of describing the attributes composing a variable.

If it can be conceptualized, it can be measured.

(Note: I hate this chapter and the concept and subconcepts involved in conceptualization.)

conception - ideas about a subject, internal, individual, 'mental images'
conceptualization - process of coming to agreement about meaning of term
concept - result of conceptualization (Kaplan: is a "family of conceptions")

Kaplan's 3 classes of observables\types of observation:
Direct - simple, firsthand observation (a checkmark)
Indirect - more removed than direct observation (minutes of past mtgs)
Constructs - created, theoretical, not-directly-observable (IQ)

Reification - regarding constructs as real

indicator - sign of absence or presence of concept (helping animal, crying during movie: indicators for concept of compassion)
dimension - grouping of indicators within a concept, a specifiable aspect of concept (compassion for animals vs compassion for humans).

Complete conceptualization involves specifying dimensions and finding indicators for each.


Interchangeability of indicators - idea that if indicators are valid, then they all rep same concept, then all will give same results. (We should get same research results no matter which indicators we use, as long as our indicators are valid).

Nominal definition - assigned to a term w/o any claim that the def represents 'reality'. Arbitrary.
Operational definition - specifies exactly how the concept will be measured. Max clarity about the concept for a given study.
Working definition - a def for the purposes of this inquiry - whatever you want it to be.

Hermeneutic circle -- cyclical process of deeper understanding

Conceptualization is also the continual refinement of the understanding of a concept.

Progression of measurement steps:

Conceptualization
|
\/
Nominal Definition
|
\/
Operational Definition
|
\/
Measurements in the real world

Srole scale: another measure of anomia (5 statements)
Durkheim theories about suicide and anomie.

Defs more problematic for descriptive research than for explanatory res. Why?
For example - descriptive: What does 'being unemployed' mean? Who qualifies? What people can be unemployed (children)?
Explanatory Res: Does conservatism increase with age? No matter what def of conservative is used, the relationship between it and age is what's of interest, not the exact def.

Conceptualization is the refinement and specification of abstract concepts, and operationalization is the dev of specific procedures.

Operationalization choices:
Range of Variation - how much is acceptable?
Variations between the Extremes - degree of precision, how fine are your distinctions among attributes composing your variable? (Do you care if person is 17 or 18?)
Be clear about which dimensions of a concept you're covering.
Attributes composing your variable should be exhaustive as well as mutually exclusive.

Levels of Measurement:
Nominal: labels for characteristics (gender, hair color), analysis available is that 2 people are the same or different.
Ordinal: rank-ordered (conservatism, alienation), analysis can say that Person A is “more” than B in terms of var.
Interval: the distance separating attributes of a var HAS meaning, but there's no absolute zero. (IQ test.) You can say 'how much' more A is than B, but you cannot say someone with an IQ of 150 is 50% more intelligent than someone with an IQ of 100.
Ratio: intervals with a true zero point (Kelvin temp, age, income). Can say that A is twice B.
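A two-line illustration of why the level matters (the numbers are invented):

# Sketch: which comparisons each level of measurement supports (values invented).
iq_a, iq_b = 150, 100    # interval: differences are meaningful, ratios are not
age_a, age_b = 50, 25    # ratio: true zero, so ratios are meaningful

print(f"interval: A scores {iq_a - iq_b} IQ points higher than B")   # a legitimate claim
# NOT legitimate: claiming A is "50% more intelligent" - IQ has no true zero
print(f"ratio: A is {age_a / age_b:.0f}x as old as B")               # legitimate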

Precision - fineness of distinctions made bet attributes of var. (43 vs 'in 40s')
Accuracy - how closely something matches reality.
Reliability - does your technique give repeatable results when measuring the same object? Researcher subjectivity is a threat here.


Methods of ensuring reliability:
Test-Retest method: measure multiple times and compare results (survey repeated 3mo later - get same results?)
Split-Half: make more than one measurement of the same concept (split the list of questions that test the concept in half, score each half, and check that the two halves agree - see the sketch below).
Use Established Measures - use someone else's.
Test the reliability of research workers -- call a subset of the sample and verify the info; have the results coded by multiple people.
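Here's what split-half looks like as a sketch (ten invented items that are all supposed to tap the same concept):

# Sketch: split-half reliability (invented data). Each row is one respondent's 0/1
# answers to ten items that are all supposed to measure the same concept.
from statistics import correlation  # Python 3.10+

answers = [
    [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
]

odd_half  = [sum(row[0::2]) for row in answers]   # score from items 1, 3, 5, 7, 9
even_half = [sum(row[1::2]) for row in answers]   # score from items 2, 4, 6, 8, 10
print(f"split-half correlation: {correlation(odd_half, even_half):.2f}")

If the two half-scores track each other closely, the measure looks reliable; if they barely correlate, the items aren't measuring one thing consistently.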

Validity
-- Does a measure reflect real meaning of concept?
Face validity - common sense
Criterion-related (predictive) validity - based on an external criterion (the validity of a driver's test is determined by relating people's test scores to their later driving records)

*Need help on predictive validity, in the making examples step. I understand the concept but can't make em, yet.

Construct validity - logical rel among vars. (marital satisfaction relates to cheating)
Content validity - does the measure cover range of meanings w/in concept. (does our measure consider all types of prejudice?)

Back at it.... :-(

Ch4 Summary:

3 goals of research:

  1. Exploration - familiarization, focus groups. Shortcomings: seldom satisfactory answers, not representative. 3 reasons for use:

    • curiosity

    • pre-study for larger study

    • dev methods for larger study

  2. Description - describe accurately and precisely chars of a population (Ex: US Census)

  3. Explanation - Answer Why? (Ex: ID vars to explain why City A higher crime rate than City B)

    • Nomothetic Explanation - few factors that lead to 'most' changes in results

    • Idiographic Explanation - all the reasons, all the time

Criteria for Nomothetic Causality (3):

  1. Variables correlated (established relationship)

  2. cause before effect (time order)

  3. vars non spurious (no 3rd var)

Note what is NOT an interest of Nomothetic causality:

  • Complete explanation of causes,

  • Exceptions don't disprove the rule

  • Rel can be true even if it only applies to minority of cases


Defs:
Necessary Cause - Must be present for result (take classes to get degree)
Sufficient Cause - Condition that guarantees effect, not the only way to get effect (take right classes, get good GPA, get degree)

Idiographic causes are sufficient but NOT necessary. (Anyone with your exact life circumstances would have attended college; however, other people attend college through different means.)


Units of Analysis

Units of analysis (thing seeking to describe) are generally also units of observation, individuals being most typical. Assertions about one unit of analysis need to be based on exam of that same unit of analysis.

Ecological Fallacy - applying findings about groups to individuals (Ex: large cities with large African American populations have high crime rates - you can't conclude that African Americans are the ones committing the crimes.)

  • Individuals

  • Groups

  • Organizations

  • Social Artifacts (are you studying marriage or marriage partners?)

Individualistic fallacy - probabilistic statements not invalidated by individual exceptions

Reductionism - reducing complex phenomena to a simple explanation (Ex: answering 'What caused the American Revolution?' with a single factor).


Other types of Studies/Time-based:

Cross-Sectional Study - observation of sample at _one_ point in time (single US Census)
Longitudinal Study - sampling over time

  • Trend Study - re-sampling the same population more than once, over time

  • Cohort Study - following an age group and re-sampling that age group, over time

  • Panel Study - follow the same group of people over time (special problems with panel attrition and possibility that dropouts will be atypical)




Triangulation - use of several different research methods to test same finding


Sunday, February 06, 2005

Causality diagram

Causality can be deterministic or probabilistic.

Probabilistic causality can be nomothetic or idiographic.

(That's the diagram part)

So my interpretation of what that means is this (correct me if I'm wrong, please!):

Nomothetic causality is one or two major contributors causing an effect. Idiographic causality is a long list of every factor affecting a given situation. Nigem used the crime example in class. Both are probabilistic.

Deterministic Causality is the harder one for me. It's 'X determines Y' and it's NOT probabilistic, it's concrete. If X, then Y (so is it sufficient causality?).

Maybe an alternate question: What is Deterministic Causality?

Ethical Principles

What is Researcher ID (just that the researcher must identify self to the subject)? Doesn't that fall under Concealing and Deception?

Example of sufficient causality?

Def: If X, then Y. Y can occur separately. A sufficient condition is not a necessary condition. So if X happens, Y is a cascade effect of X and will ALWAYS happen.

What's an example?

What's a probabilistic relationship?


Mindy's post

Mindy, reply here (you don't have to have an account, just reply anonymously) with your questions that you want other people to answer. If you have answers to other people's questions, click reply under that posting and tell em what the answer is. --Crystal

Meet Niko (just something to lighten the mood)

I clicked on the Next Blog link at the top right of the page (thinking it was like a next message thing) and I found myself in Niko's land. According to Niko: "Hey I'm Niko and I'm 14 years old. I love games and movies. I also love music I'm also a Christian." Apparently he is also into PS2 games since that's what his blog is about. Anyways, it was an unexpected event that made me giggle. Thought I would pass it on.


FYI... Possible Question Topic

I have here marked in my notes that Nigem specifically made mention that the topic of Theoretical Statements would be a subject for an essay question. Personally, at this point I don't care because I have major information overload going on. But just in case this comes up, you might want to review the progression from bullshit to Theoretic Invariance, the two types of propositions and the two types of relational statements within that. Maybe even know the criteria for establishing a relationship and the basics about causality.

Assumptions of Science and so on...

I took the following from your Essay Questions Draft, Crystal. Here is my input...


Discuss assumptions of science and principles of scientific community and factors affecting objectivity.


Assumptions of Science:
(Have you seen that so very stupid Capital One commercial with David Spade and his book of like 1001 ways to say "no"? At the end of the commercial when the caller says they are going to call Capital One he yells out "Nanka!" or whatever he says. Well, that commercial finally has a point - and that point is to make it easier for us to remember the Assumptions of Science. I remember that this acronym is for this topic because David Spade has become an ASSumptions of Science. Now, he yells out "Nanka" or whatever, but if you put a little spin on it you might think of it as NANKKW with the W being pronounced like "wha".) So here we go:

1. Nature is orderly and regular
2. All natural phenomena have natural causes
3. Nothing is self evident
4. Knowledge is derived from the acquisition of experience
5. Knowledge is superior to ignorance
6. We can know nature

See? NANKKW!


3. Principles of Sci Comm:
Okay, go with me here for a moment. Essentially the Principles of the Scientific Community are things that they look for and "just gotta' have." Kind of like shopping for that perfect outfit. Only in this case we are not shopping, we are not the shopper, we are the SHOPER (I guess we had to "pee" at some point)...

1. Skepticism (for the sake of duplication of results)
2. Humility (scientists are not god and they better damn well know that)
3. Objectivity (intersubjectivity)
4. Parsimony (keep it simple stupid)
5. Ethical Neutrality
6. Relativism

(Now, you had transparency, but I didn't have anything about that. Is it another word for one of those things listed or did I doze off and miss something somewhere?)


Factors affecting Objectivity:
1. Idols of the Tribe
2. Idols of the Cave
3. Idols of the Theater
4. Idols of the Marketplace

(Just remember how Billy Idol put it... "We are only human (Tribe - as humans we want to feel gooooooood) with two eyes (Cave - our view of the world is through the mouth of the cave which is constructed by family socialization - do you need to remember the whole "family socialization thing"? Just think of the first thing you saw with your eyes... mama) and two ears (Theater -
everybody says so, therefore it must be true) and one mouth from which to speak (Marketplace - words, terminology, concepts and their uses) or sing really rockin' songs like "White Wedding") And that's a FACT you cannot OBJECT to my friend!

5 Essay Qs

Discuss assumptions of science and principles of scientific community and factors affecting objectivity.

Assumptions of science:
1. Nature is orderly
2. Natural laws are discoverable
3. Events have natural causes
4. Nothing is self-evident
5. Knowledge is derived from experience
6. Knowledge is superior to ignorance

Principles of Sci Comm:
1. Humility
2. Parsimony
3. Objectivity (intersubjectivity)
4. Relativism
5. Skepticism (for the sake of duplication of results)
6. Ethical Neutrality

Factors affecting Objectivity:
1. Tribe (species- or culture-specific tendencies, like the desire to be happy)
2. Cave (social constructions, like family)
3. Theatre (factors based on received opinion, like sources of authority)
4. Marketplace (use of terminology, like the word 'woman')



Discuss meaning of relationships

1. Reciprocal. (Investments and profits; difficult to establish direction of causation)
2. Symmetrical. (No influence between variables; ice cream and rape)
Includes 5 types:
  1. Alternative indicator of same concept
  2. Common Cause
  3. Functional interdependence (part of same system; heart and lung)
  4. Common Complex (2 parts of same phenomenon; opera and Audi)
  5. Accidental
3. Asymmetrical (direction of causality can be established)
Includes 6 types:
  1. Stimulus-Response (experiment)
  2. Disposition-Response (influence)
  3. Property-Disposition (age affects conservatism)
  4. Necessary Precondition (atomic bomb and tech)
  5. Immanent Relationship (effect is intrinsic to the def of the concept; red tape and bureaucracy)
  6. Ends and Means (answer is in the mind of the subject; sex and STDs - which came first depends on the goal of the subject).


Discuss levels of theorizing. (Type, function, level of propositions)

Construction:
1. Scope. Macro or micro.
2. Function. Structure, Process, or Dynamics. (Is this a theory that describes the process by which something occurs, or the structure of something, or how something happened?)
3. Structure. Logical (A because B, therefore C; constructed deductively) or Loose (a set of probabilities; constructed inductively)
4. Level (of analysis). Individual, group, large group, institution.


What is the word, something besides Loose? What do you have for #2 in your notes, I can't seem to nail down what the hell function is?!!

Explanatory Types
1. Ad-hoc. Simple, limited.
2. Taxonomies. Typologies, exhaustive categorizing.
3. Conceptual Framework. Additionally, linkage between categories. Able to form propositions from em.
4. Theory. Logical system of Propositions from 2 and 3. All concepts and relationships are defined.


Levels of Propositions:
1. Existence statements (conditions under which event will occur)
2. Relational statements (Associational (A is related to B) and determinable (A causes B))

Discuss methods of settling doubt.

1. Tenacity. Believe it cause you always believed it, and because changing that belief would make you unhappy.

2. Intuition. 'Everybody says' and 'it makes sense'. The earth is flat. Subject to early learning and fashionable ideas. Fallible.

3. Authority. 2 types: Reasonable (I'm gonna believe someone else because I don't have the time to do my own research, but if I ever do, then I'm going to change my mind). Irrefutable (church, societal norms, what to wear to a funeral - you can't argue with this source of authority). No argument settled by authority will ever be unquestioned, as there are too many sources of authority.

4. Scientific Method. Self-critical, admits that it can lead you into doubt, and provides error-correcting. Open, mutable framework for discovery.

Bloody Relationships...

Since I haven't figured out how to quote yet either, I figured a little copy/paste action would work for now.

Crystal wrote:

Bloody relationships: What the hell is the Asymmetrical Ends and Means, and what is the Property Disposition, and what is the Immanent Relationship (how are those different from the others that were listed)? I'm gonna make this a separate post because I think I need to go through and contrast them.

You wanna' talk relationships? You came to the right place!

Asymmetrical Ends and Means is like this:

(Now this is a bit confusing because there are two points of view or perspectives involved in this one - but once you understand that.... you've got it)

The question about the relationship between the two lies in the question of "why".

Why do we use the means that we do?
Why do we reach the ends that we do?

If we begin with a goal in mind, say for instance (and I am being outrageous and creative here so this sticks in your brain - that's just the way I study so don't shun me forever because you think I'm weird after this)

If, for instance, I start with the goal (my ENDS) being that I wish to get an STD, say gonorrhea. With my ENDS in mind first, I determine that the MEANS (or "what I need to do") to reach these ends is to whore around with as many people as I can in order to reach my ENDS and get gonorrhea. This is a situation where the ENDS determine the MEANS.

However, let's say I'm just a slut and I like whoring around with as many people as I can (this behavior is my MEANS or "what I do"). Because I do this, or employ these MEANS, I get gonorrhea. The gonorrhea is still my ENDS, but I wasn't planning on getting it. Therefore, in this case, my MEANS determined my ENDS.

Both situations are possible, and in this case it's easy to determine which led to the other - but only because I can tell you. The bird-builds-a-nest-and-in-turn-preserves-the-species example that Nigem used in class is harder, because the bird can't tell us whether she's building the nest because she wants to protect her young and help ensure the survival of her species (which would be ENDS leading to MEANS) or whether she builds the nest from instinct and it just ends up being something that helps preserve the species (MEANS leading to ENDS).

Now, I'm going to get a shot of penicillin for my gonorrhea and move on to Property Disposition.

Did you understand the Disposition Response Relationship? I'm guessing you did since it was not on your list.

Therefore you already understand what a disposition is: what something can be - a pretty stable trait, like being a liberal or a conservative, being able to play an instrument, or having an extremely high sex drive. Just think of dispositions as sexual positions and you won't forget that dispositions, while they are traits, are not fixed and can be altered, just as sexual positions can be (a top can sometimes become a bottom - not often, but it can happen).

Now, properties, on the other hand, CANNOT be changed. Whether you are a top or a bottom doesn't matter if you bring the wrong equipment to the game being held under the sheets. So things like sex (and age, race, eye color - basically things that just "are") are properties.

Now this is where it gets interesting. When we look at the Property Disposition Relationship we know that the PROPERTY is ALWAYS the INDEPENDENT variable because you just can't change it no matter how hard you try. So, to follow our example, we will use SEX (our PROPERTY) as our Independent variable. Inquiring minds want to know how sex affects sexual behavior. Get it? Property Disposition Relationship.

(Yes, I use sex in a lot of my examples, but I figure "Hey, if it can sell beer why can't it sell sociology?")

Immanent Relationships are actually quite simple. It's like where there's smoke there's fire. Meaning, if you build a fire you're going to get smoke - it's just a matter of time (hence the fact that it is immanent). We discussed in class how red tape arises out of bureaucracy and how oligarchy arises out of democratic organization. As in the fire example, smoke aRISES from fire. It's kind of (by a stretch of imagination) in there already, and just waiting for the right moment to JUMP OUT AND GET YOU! One does not necessarily cause the other, it's more like spawning. Red tape is the devil child of bureaucracy and oligarchy is democracy's Rosemary's Baby.


Did any of this help or did I make things worse? It makes sense in my head, but then again, I've been known to create what my friends call "Hope Logic" that makes sense to me easily, but takes an awful lot of explaining to get anyone else to make sense of it at all.